# 75996a40 | 14-Jul-2024 | kre <kre@NetBSD.org>
PR kern/58425 -- Disallow INT_MIN as a (negative) pid arg.
Since -INT_MIN is undefined, and the point of negative pid args is to negate them and use the result as a pgrp id instead, we need to avoid accidentally negating INT_MIN.
Since pid_t is just an integral type of unspecified width, when testing a pid_t value, test for <= INT_MIN (or > INT_MIN, as appropriate) rather than == INT_MIN. When testing int values, == INT_MIN is all that is needed; < INT_MIN cannot occur.
XXX pullup -9, -10
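A minimal sketch of the guard described above, under the assumption of a kill(2)-style entry point; the function name and errno choices are illustrative, not the in-tree code:

#include <errno.h>
#include <limits.h>
#include <sys/types.h>

/*
 * A negative pid argument names a process group after negation, but
 * -INT_MIN is undefined behaviour, so reject it before negating.
 * Because pid_t has unspecified width, compare with <= INT_MIN
 * rather than == INT_MIN.
 */
static int
pgid_from_pid(pid_t pid, pid_t *pgidp)
{
	if (pid <= INT_MIN)
		return ESRCH;		/* cannot be safely negated */
	if (pid >= 0)
		return EINVAL;		/* not a process-group argument */
	*pgidp = -pid;			/* now well-defined */
	return 0;
}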

# d9d62d0f | 02-Jun-2024 | andvar <andvar@NetBSD.org>
Fix various typos, mainly triple letters.

# 68fa5843 | 05-Oct-2023 | ad <ad@NetBSD.org>
Arrange to update cached LWP credentials in userret() rather than during syscall/trap entry, eliminating a test+branch on every syscall/trap.
This wasn't possible in the 3.99.x timeframe when l->l_cred came about because there wasn't a reliable/timely way to force an ONPROC LWP running on a remote CPU into the kernel (which is just about the only new thing in this scheme).
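A hedged sketch of that scheme; the flag and field names here are illustrative stand-ins, not the kernel's. The point is that the credential refresh folds into the flag test userret() already performs, so the common path pays no extra compare:

struct cred { int c_refs; };

struct lwp {
	unsigned	 l_flag;
	struct cred	*l_cred;	/* cached, lock-free per-LWP copy */
	struct cred	*l_newcred;	/* updated process cred (stand-in) */
};

#define	LW_PENDSIG	0x01	/* illustrative: signal pending */
#define	LW_CACHECRED	0x02	/* illustrative: l_cred is stale */
#define	LW_USERRET	(LW_PENDSIG | LW_CACHECRED)

static void
lwp_userret(struct lwp *l)
{
	if (l->l_flag & LW_CACHECRED) {
		l->l_cred = l->l_newcred;	/* refresh cached creds */
		l->l_flag &= ~LW_CACHECRED;
	}
	/* ... signals and other pending user-return work ... */
}

static void
userret(struct lwp *l)
{
	/* one test covers all pending user-return work */
	if (l->l_flag & LW_USERRET)
		lwp_userret(l);
}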

# 0f335007 | 04-Oct-2023 | ad <ad@NetBSD.org>
kauth_cred_hold(): return cred verbatim so that donating a reference to another data structure can be done more elegantly.
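The pattern this enables, sketched with a placeholder structure ("kp" is not a real kernel name):

/* signature per the message: the reference is taken and the same
 * cred handed back */
kauth_cred_t	kauth_cred_hold(kauth_cred_t cred);

/* before: two statements to donate a reference */
kauth_cred_hold(l->l_cred);
kp->kp_cred = l->l_cred;

/* after: the donation reads as one expression */
kp->kp_cred = kauth_cred_hold(l->l_cred);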

# a355028f | 04-Oct-2023 | ad <ad@NetBSD.org>
Eliminate l->l_ncsw and l->l_nivcsw. From memory, I think they were added before we had a per-LWP struct rusage; the same is now tracked there.

# 715431a6 | 04-Sep-2023 | simonb <simonb@NetBSD.org>
Whitespace nit.

# 0dec6ba3 | 09-Apr-2023 | riastradh <riastradh@NetBSD.org>
kern: KASSERT(A && B) -> KASSERT(A); KASSERT(B)
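The motivation, sketched on an arbitrary condition: KASSERT() panics with the stringified expression, so splitting a conjunction lets the panic message identify which half failed:

/* combined: the message cannot say which half was false */
KASSERT(off >= 0 && off < size);

/* split: each failure pinpoints its own condition */
KASSERT(off >= 0);
KASSERT(off < size);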

# bdc740e0 | 26-Oct-2022 | riastradh <riastradh@NetBSD.org>
kern/exec_elf.c: Get emul_netbsd from sys/proc.h.

# d28cf90b | 01-Jul-2022 | riastradh <riastradh@NetBSD.org>
kern: Omit stale locking comment in proc_crmod_leave.

# 60441c6e | 07-May-2022 | mrg <mrg@NetBSD.org>
bump maxthreads default.
bump the default MAXLWP to 4096 from 2048, and adjust the default limits seen to be 2048 cur / 4096 max. remove the linkage to maxuprc entirely.
remove cpu_maxlwp(), which isn't implemented anywhere. instead, grow maxlwp for larger-memory systems, picking 1 lwp per 1MiB of ram, limited to 65535 like the system limit.
remove some magic numbers.
i've been having weird firefox issues for a few months now; it turns out pthread_create() was failing, and since bumping the defaults i've had none of the recent issues.
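A sketch of the sizing rule described above; the arithmetic is illustrative and the in-tree calculation may differ in detail:

/* one LWP per MiB of RAM, never below the new default and never
 * above the 16-bit system limit */
maxlwp = (uint64_t)physmem * PAGE_SIZE / (1024 * 1024);
if (maxlwp < 4096)
	maxlwp = 4096;
if (maxlwp > 65535)
	maxlwp = 65535;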

# 2e9df72e | 07-Apr-2022 | andvar <andvar@NetBSD.org>
fix various typos in comments.

# 9ab52bf8 | 13-Mar-2022 | riastradh <riastradh@NetBSD.org>
kern: Fix ordering of loads for pid_table and pid_tbl_mask.
This introduces a load-acquire where there was none before. This is a simple correctness change. We could avoid the load-acquire, and use only load-consume, if we used a pointer indirection for _both_ pid_table and pid_tbl_mask. Takes a little more work, and probably costs an additional cache line of memory traffic, but might be worth it to avoid the load-acquire for pid lookup.
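One ordering consistent with that description, sketched with the atomic_loadstore(9) primitives named in these messages (the in-tree code may differ in detail):

/* writer (table expansion): publish the larger table before the
 * larger mask */
atomic_store_release(&pid_table, new_table);
atomic_store_release(&pid_tbl_mask, new_mask);

/* reader (pid lookup): acquire-load the mask first; seeing the new
 * mask guarantees the matching table is visible, while an old mask
 * still indexes safely into the bigger table */
mask = atomic_load_acquire(&pid_tbl_mask);
slot = &atomic_load_consume(&pid_table)[pid & mask];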
Reported-by: syzbot+c49e405d0b977aeed663@syzkaller.appspotmail.com
Reported-by: syzbot+1c88ee7086f93607cea1@syzkaller.appspotmail.com
Reported-by: syzbot+da4e9ed1319b75fe2ef3@syzkaller.appspotmail.com

# 11d36e84 | 10-Mar-2022 | riastradh <riastradh@NetBSD.org>
kern: Use atomic_store_release/atomic_load_consume for pid_table.
This is read without the lock, so ordering is required.

# 576702f1 | 12-Feb-2022 | thorpej <thorpej@NetBSD.org>
Add inline functions to manipulate the klists that link up knotes via kn_selnext:
- klist_init()
- klist_fini()
- klist_insert()
- klist_remove()
These provide some API insulation from the implementation details of these lists (but not completely; see vn_knote_attach() and vn_knote_detach()). Currently just a wrapper around SLIST(9).
This will make it significantly easier to switch kn_selnext linkage to a different kind of list.
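Per the message these are thin covers over SLIST(9) keyed on kn_selnext; roughly:

static inline void
klist_init(struct klist *list)
{
	SLIST_INIT(list);
}

static inline void
klist_insert(struct klist *list, struct knote *kn)
{
	SLIST_INSERT_HEAD(list, kn, kn_selnext);
}

static inline void
klist_remove(struct klist *list, struct knote *kn)
{
	SLIST_REMOVE(list, kn, knote, kn_selnext);
}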

# fe22bed1 | 24-Dec-2020 | nia <nia@NetBSD.org>
Avoid negating the minimum value of pid_t (this overflows).
Reported-by: syzbot+e2eb02f9dfaf4f2e6626@syzkaller.appspotmail.com

# 2538a353 | 17-Sep-2020 | martin <martin@NetBSD.org>
PR kern/55665: temporarily comment out an assertion that is known to trigger in some conditions (where ignoring the wrap around does no harm for now)

# 9fc45356 | 05-Sep-2020 | riastradh <riastradh@NetBSD.org>
Round of uvm.h cleanup.
The poorly named uvm.h is generally supposed to be for uvm-internal users only.
- Narrow it to files that actually need it -- mostly files that need to query whether curlwp is the pagedaemon, which should maybe be exposed by an external header.
- Use uvm_extern.h where feasible and uvm_*.h for things not exposed by it. We should split up uvm_extern.h but this will serve for now to reduce the uvm.h dependencies.
- Use uvm_stat.h and #ifdef UVMHIST uvm.h for files that use UVMHIST(ubchist), since ubchist is declared in uvm.h but the reference evaporates if UVMHIST is not defined, so we reduce header file dependencies.
- Make uvm_device.h and uvm_swap.h independently includable while here.
ok chs@

# 32591f1d | 28-Aug-2020 | riastradh <riastradh@NetBSD.org>
Fix pasto in previous -- pass the right size to memset...

# 4d35c45e | 28-Aug-2020 | riastradh <riastradh@NetBSD.org>
Nix trailing whitespace.

# fb66a217 | 28-Aug-2020 | riastradh <riastradh@NetBSD.org>
Zero out more lock snapshots in sysctl exposure.

# 1d0978b8 | 26-Aug-2020 | christos <christos@NetBSD.org>
Instead of returning 0 when sysctl kern.expose_address=0, return a random hashed value of the data. This allows sockstat to work without exposing kernel addresses or being setgid kmem.
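A self-contained sketch of the idea; the key handling and mixer below are illustrative, not the kernel's actual hash:

#include <stdint.h>

static uint64_t expose_key;	/* filled once from a CPRNG at boot */

/* Return a stable per-boot hash of a kernel pointer: userland tools
 * can still correlate objects, but real addresses stay hidden. */
static uint64_t
hash_kptr(const void *ptr)
{
	uint64_t x = (uint64_t)(uintptr_t)ptr ^ expose_key;

	/* splitmix64 finalizer as a stand-in keyed mixer */
	x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
	x ^= x >> 27; x *= 0x94d049bb133111ebULL;
	x ^= x >> 31;
	return x;
}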

# 4b8a875a | 11-Jun-2020 | ad <ad@NetBSD.org>
uvm_availmem(): give it a boolean argument to specify whether a recent cached value will do, or if the very latest total must be fetched. It can be called thousands of times a second and fetching the totals impacts not only the calling LWP but other CPUs doing unrelated activity in the VM system.
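Call-site flavour, per the described interface (a bool selecting cached vs. exact; illustrative only):

/* hot path: a recent cached total will do */
fpages = uvm_availmem(true);

/* accuracy-sensitive path: pay for fetching the very latest total */
fpages = uvm_availmem(false);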

# 9f15ed54 | 26-May-2020 | kamil <kamil@NetBSD.org>
Catch up with the usage of struct vmspace::vm_refcnt
Use the dedicated reference counting routines.
Change the type of struct vmspace::vm_refcnt and struct vm_map::ref_count to volatile.
Remove the unnecessary vm->vm_map.misc_lock locking in process_domem().
Reviewed by <ad>

# 0eaaa024 | 23-May-2020 | ad <ad@NetBSD.org>
Move proc_lock into the data segment. It was dynamically allocated because at the time we had mutex_obj_alloc() but not __cacheline_aligned.

# 20180cb1 | 23-May-2020 | ad <ad@NetBSD.org>
- Replace pid_table_lock with a lockless lookup covered by pserialize, with the "writer" side being pid_table expansion. The basic idea is that when doing an LWP lookup there is usually already a lock held (p->p_lock), or a spin mutex that needs to be taken (l->l_mutex), and either can be used to get the found LWP stable and confidently determine that all is correct.
- For user processes LSLARVAL implies the same thing as LSIDL ("not visible by ID"), and lookup by ID in proc0 doesn't really happen. In-tree the new state should be understood by top(1), the tty subsystem and so on, and would attract the attention of 3rd party kernel grovellers in time, so remove it and just rely on LSIDL.
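A sketch of the read-side pattern described, using the pserialize(9) API; illustrative, not the exact in-tree lookup:

int s;

s = pserialize_read_enter();
mask = atomic_load_acquire(&pid_tbl_mask);
slot = &atomic_load_consume(&pid_table)[pid & mask];
/* take p->p_lock (or the LWP's l->l_mutex) on the candidate, then
 * re-check its ID: the lock stabilizes the object, so a successful
 * re-check confirms the lockless lookup */
pserialize_read_exit(s);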