# 6779c023 | 15-Oct-2023 | riastradh <riastradh@NetBSD.org>
kern_mutex.c: Sort includes. No functional change intended.

# fac91bbe | 15-Oct-2023 | riastradh <riastradh@NetBSD.org>
sys/lwp.h: Nix sys/syncobj.h dependency.
Remove it in ddb/db_syncobj.h too.
New sys/wchan.h defines wchan_t so that users need not pull in sys/syncobj.h to get it.
Sprinkle #include <sys/syncobj.h> in .c files where it is now needed.
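For illustration, the new header only needs to carry the wait-channel type; a minimal sketch (assuming wchan_t stays an opaque pointer type, as described above, and not the committed header verbatim):

    /* sys/wchan.h -- sketch only */
    #ifndef _SYS_WCHAN_H_
    #define _SYS_WCHAN_H_

    typedef volatile const void *wchan_t;   /* opaque wait-channel identifier */

    #endif /* _SYS_WCHAN_H_ */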

# 6ed72b5f | 23-Sep-2023 | ad <ad@NetBSD.org>
- Simplify how priority boost for blocking in kernel is handled. Rather than setting it up at each site where we block, make it a property of syncobj_t. Then, do not hang onto the priority boost until userret(), drop it as soon as the LWP is out of the run queue and onto a CPU. Holding onto it longer is of questionable benefit.
- This allows two members of lwp_t to be deleted, and mi_userret() to be simplified a lot (next step: trim it down to a single conditional).
- While here, constify syncobj_t and de-inline a bunch of small functions like lwp_lock() which turn out not to be small after all (I don't know why, but atomic_*_relaxed() seem to provoke a compiler shitfit above and beyond what volatile does).
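As a rough sketch of the resulting shape of syncobj_t (the sobj_boostpri field name and the array size are illustrative, not quoted from sys/syncobj.h; the function-pointer members are the long-standing ones):

    /* Sketch only; consult sys/syncobj.h for the real layout. */
    typedef struct syncobj {
            char    sobj_name[16];          /* diagnostics; size illustrative */
            u_int   sobj_flag;
            pri_t   sobj_boostpri;          /* illustrative: boost applied while blocked */
            void    (*sobj_unsleep)(struct lwp *, bool);
            void    (*sobj_changepri)(struct lwp *, pri_t);
            void    (*sobj_lendpri)(struct lwp *, pri_t);
            struct lwp *(*sobj_owner)(wchan_t);
    } syncobj_t;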

# 8479531b | 07-Sep-2023 | ad <ad@NetBSD.org>
Remove dodgy and unused mutex_owner_running() & rw_owner_running().

# f4853583 | 17-Jul-2023 | riastradh <riastradh@NetBSD.org>
kern: New struct syncobj::sobj_name member for diagnostics.
XXX potential kernel ABI change -- not sure any modules actually use struct syncobj but it's hard to rule that out because sys/syncobj.h leaks into sys/lwp.h
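The practical effect is that each syncobj instance carries a human-readable tag; for example, the mutex syncobj in kern_mutex.c would gain an initializer along these lines (sketch, not the exact diff):

    syncobj_t mutex_syncobj = {
            .sobj_name      = "mutex",      /* new: shows up in diagnostics */
            .sobj_unsleep   = turnstile_unsleep,
            .sobj_changepri = turnstile_changepri,
            /* remaining members unchanged */
    };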

# 90059fae | 01-May-2023 | riastradh <riastradh@NetBSD.org>
mutex(9): Write comments in terms of ordering semantics.
Phrasing things in terms of implementation details like `acquiring and locking cache lines' both suggests a particular cache coherency protocol, paints an incomplete picture for more involved protocols, and doesn't really help to prove theorems the way ordering relations do.
No functional change intended.

# 6183a462 | 01-May-2023 | riastradh <riastradh@NetBSD.org>
mutex(9): Omit needless membar_consumer.
In practical terms, this is not necessary because MUTEX_SET_WAITERS already issues MUTEX_MEMBAR_ENTER, which on all architectures is a sequential consistency barrier, i.e., read/write-before-read/write, subsuming membar_consumer.
In theoretical terms, MUTEX_MEMBAR_ENTER might imply only write-before-read/write, so one might imagine that the read-before-read ordering of membar_consumer _could_ be necessary. However, the memory operations that are significant here are:
1. load owner := mtx->mtx_owner
2. store mtx->mtx_owner := owner | MUTEX_BIT_WAITERS
3. load owner->l_cpu->ci_curlwp to test if equal to owner
(1) is program-before (2) and at the same memory location, mtx->mtx_owner, so (1) happens-before (2).
And (2) is separated in program order by MUTEX_MEMBAR_ENTER from (3), so (2) happens-before (3).
So even if the membar_consumer were intended to guarantee that (1) happens-before (3), it's not necessary, because we can already prove it from MUTEX_MEMBAR_ENTER.
But actually, we don't really need (1) happens-before (3), exactly; what we really need is (2) happens-before (3), since this is a little manifestation of Dekker's algorithm between cpu_switchto and mutex_exit, where each CPU sets one flag and must ensure it is visible to the other CPUs before testing the other flag -- one flag here is the MUTEX_BIT_WAITERS bit, and the other `flag' here is the condition owner->l_cpu->ci_curlwp == owner; the corresponding logic, in cpu_switchto, is:
1'. store owner->l_cpu->ci_curlwp := owner
2'. load mtx->mtx_owner to test if MUTEX_BIT_WAITERS set
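Restated as a side-by-side sketch (schematic only, not the actual kern_mutex.c or cpu_switchto source; names follow the commit message):

    /* waiter, in mutex_vector_enter() */
    owner = mtx->mtx_owner;                               /* (1)  */
    mtx->mtx_owner = owner | MUTEX_BIT_WAITERS;           /* (2)  MUTEX_SET_WAITERS */
    /* MUTEX_MEMBAR_ENTER: make (2) visible before performing (3) */
    still_running = (owner->l_cpu->ci_curlwp == owner);   /* (3)  */

    /* switcher, in cpu_switchto() */
    owner->l_cpu->ci_curlwp = owner;                      /* (1') */
    /* store-before-load ordering required here as well */
    have_waiters = (mtx->mtx_owner & MUTEX_BIT_WAITERS);  /* (2') */

Each CPU publishes its own flag and then reads the other's; the MUTEX_MEMBAR_ENTER after (2) already provides the store-before-load ordering this handshake needs, so the membar_consumer adds nothing.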

# 617315eb | 12-Apr-2023 | riastradh <riastradh@NetBSD.org>
kern: Nix mutex_owner.
There is no valid reason to use this except in assertions of the form
KASSERT(mutex_owner(lock) == curlwp),
which is more obviously spelled as
KASSERT(mutex_owned(lock)).
Exception: There's one horrible kludge in zfs that abuses this, which should be eliminated.
XXX kernel revbump -- deleting symbol
PR kern/47114

# 322364be | 24-Feb-2023 | riastradh <riastradh@NetBSD.org>
mutex(9): Simplify membars.
- Elide macro indirection for membar_acquire.
- Use atomic_store_release instead of membar_release and store.
No functional change intended. Possible very very very very minor performance gain on architectures with a native store-release instruction.
Note: It is possible there are some code paths that worked by accident with this change which could, in theory, break now, such as the logic I recently fixed in kern_descrip.c that assumed a mutex_enter/exit cycle would serve as a store-before-store barrier:
    fp->f_... = ...;                // A

    /* fd_affix */
    mutex_enter(&fp->f_lock);
    fp->f_count++;
    mutex_exit(&fp->f_lock);
    ...
    ff->ff_file = fp;               // B
This logic was never correct, and is likely already broken in practice on aarch64 because the mutex_exit stub already uses STLXR instead of DMB ISH(ST); STXR. This change only affects the slow path of mutex_exit, so it doesn't change even the accidental guarantees mutex_exit makes in all paths.
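In outline, the release-path change is (sketch; the real code goes through the MUTEX_* macros in kern_mutex.c):

    /* before: explicit barrier, then a plain store of the owner field */
    membar_release();
    mtx->mtx_owner = 0;

    /* after: a single store with release semantics */
    atomic_store_release(&mtx->mtx_owner, 0);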

# 57b6c53c | 23-Feb-2023 | riastradh <riastradh@NetBSD.org>
KERNEL_LOCK(9): Minor tweaks to ci->ci_biglock_wanted access.
1. Use atomic_load_relaxed to read ci->ci_biglock_wanted from another CPU, for clarity and to avoid the appearance of data races in thread sanitizers. (Reading ci->ci_biglock_wanted on the local CPU need not be atomic because no other CPU can be writing to it.)
2. Use atomic_store_relaxed to update ci->ci_biglock_wanted when we start to spin, to avoid the appearance of data races.
3. Add comments to explain what's going on and cross-reference the specific matching membars in mutex_vector_enter.
related to PR kern/57240
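Schematically, items 1 and 2 amount to the following (sketch; variable names illustrative):

    /* 1. read another CPU's ci_biglock_wanted without a data race */
    owant = atomic_load_relaxed(&ci->ci_biglock_wanted);

    /* 2. publish our own intent when we start to spin */
    atomic_store_relaxed(&curcpu()->ci_biglock_wanted, curlwp);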

# c92291af | 27-Jan-2023 | ozaki-r <ozaki-r@NetBSD.org>
Sprinkle __predict_{true,false} for panicstr checks
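The typical shape of such a check (illustrative):

    if (__predict_false(panicstr != NULL)) {
            /* system is panicking: skip the usual locking diagnostics */
            return;
    }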

# 28e2c7f0 | 05-Dec-2022 | skrll <skrll@NetBSD.org>
Simplify. Same code before and after.

# 06c75c1f | 26-Oct-2022 | riastradh <riastradh@NetBSD.org>
mutex(9): Properly declare _mutex_init in sys/mutex.h.

# 7ba2faa8 | 09-Apr-2022 | riastradh <riastradh@NetBSD.org>
mutex(9): Convert to membar_acquire/release.
Except for setting the waiters bit -- not sure if this is actually required to be store-before-load/store. Seems unlikely -- surely we'd have seen some serious bug by now if not, because membar_enter has failed to guarantee that on x86! -- but I'm leaving it for now until I have time to think enough about it to be sure one way or another.
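The conversion follows the usual pattern: an acquire fence after the owner CAS succeeds on entry, and a release fence before the owner is cleared on exit (sketch; MUTEX_ACQUIRE/MUTEX_RELEASE stand in for the CAS/store wrappers used in kern_mutex.c):

    /* enter side */
    if (MUTEX_ACQUIRE(mtx, curthread)) {
            membar_acquire();       /* was membar_enter() */
            return;
    }

    /* exit side */
    membar_release();               /* was membar_exit() */
    MUTEX_RELEASE(mtx);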

# 26788c94 | 25-Aug-2021 | thorpej <thorpej@NetBSD.org>
- In kern_mutex.c, if MUTEX_CAS() is not defined, define it in terms of atomic_cas_ulong().
- For arm, ia64, m68k, mips, or1k, riscv, vax: don't define our own MUTEX_CAS(), as they either use atomic_cas_ulong() or equivalent (atomic_cas_uint() on m68k).
- For alpha and sparc64, don't define MUTEX_CAS() in terms of their own _lock_cas(), which has its own memory barriers; the call sites in kern_mutex.c already have the appropriate memory barrier calls. Thus, alpha and sparc64 can use the default definition.
- For sh3, don't define MUTEX_CAS() in terms of its own _lock_cas(); atomic_cas_ulong() is strong-aliased to _lock_cas(), therefore defining our own MUTEX_CAS() is redundant.
Per thread: https://mail-index.netbsd.org/tech-kern/2021/07/25/msg027562.html
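The MI fallback is essentially a one-liner (sketch of the default definition; the guard lets MD code override it):

    #ifndef MUTEX_CAS
    #define MUTEX_CAS(p, o, n)      \
            (atomic_cas_ulong((volatile unsigned long *)(p), (o), (n)) == (o))
    #endif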

# ab671aa0 | 03-Apr-2021 | thorpej <thorpej@NetBSD.org>
Fix an IPI deadlock scenario that resulted in a TLB shootdown timeout panic reported by John Klos on port-alpha:
- pmap_tlb_shootnow(): If we acquire a pmap's activation lock, we will have raised the IPL on the current CPU to IPL_SCHED until we drop the tlb_lock (due to how nested spin mutexes work). As such, when we release the activation lock, forcibly lower our IPL back to IPL_VM so that we can receive and process IPIs while waiting for other CPUs to process the shootdowns.
- mutex_vector_enter(): Invoke SPINLOCK_SPIN_HOOK while spinning to acquire a spin mutex. This is a nop on most platforms, but it's important on the Alpha. Without this, IPIs (and thus TLB shootdowns) cannot be processed if trying to acquire an IPL_SCHED spin mutex such as those used by the scheduler.
...and while we're poking around in here:
- Rework the Alpha SPINLOCK_SPIN_HOOK to only check curcpu()->ci_ipis if the current CPU's IPL is >= IPL_CLOCK (thus ensuring that preemption is disabled and thus guaranteeing that curcpu() is stable). (Alpha does not yet support kernel preemption, but this is now one less thing that would need to be fixed.)
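For reference, the spin loop in mutex_vector_enter() takes roughly this form (sketch, not the exact source; SPINLOCK_SPIN_HOOK expands to nothing on most ports):

    do {
            while (MUTEX_SPINBIT_LOCKED_P(mtx)) {
                    SPINLOCK_BACKOFF(count);
                    SPINLOCK_SPIN_HOOK;     /* e.g. lets alpha drain pending IPIs */
            }
    } while (!MUTEX_SPINBIT_LOCK_TRY(mtx));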

# dbd8077e | 02-Mar-2021 | rin <rin@NetBSD.org>
Consistently right-justify backslash in macro definition. No binary changes.

# b18dbbf9 | 15-Dec-2020 | skrll <skrll@NetBSD.org>
it's cpu_switchto (not cpu_switch)

# 1c4c1220 | 15-Dec-2020 | skrll <skrll@NetBSD.org>
Fixup the big mutex_exit comment a little. It's actually cpu_switchto that posts a store fence, and it's AFTER setting curlwp.
More to come.

# 7ef7e7e9 | 14-Dec-2020 | skrll <skrll@NetBSD.org>
Trailing whitespace

# cd30436c | 12-May-2020 | ad <ad@NetBSD.org>
PR kern/55251 (use of ZFS may trigger kernel memory corruption (KASAN error))
Previous wasn't quite right. Redo it differently - disable preemption earlier instead.

# 58cf4a47 | 12-May-2020 | ad <ad@NetBSD.org>
PR kern/55251: use of ZFS may trigger kernel memory corruption
mutex_vector_enter(): reload mtx_owner with preemption disabled before calling mutex_oncpu(), otherwise lwp_dtor() can intervene.
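The fix amounts to re-reading the owner only after preemption is off, so the owner LWP cannot be torn down under the check (sketch; mutex_oncpu() is the existing helper in kern_mutex.c):

    kpreempt_disable();
    owner = mtx->mtx_owner;         /* reload AFTER disabling preemption */
    if (mutex_oncpu(owner)) {
            /* owner is still running on a CPU: keep spinning */
    }
    kpreempt_enable();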

# faf73726 | 08-Mar-2020 | chs <chs@NetBSD.org>
split an "a && b" assertion into two so it's clear in the dump which condition was not true even if both are true by the time the dump is written.
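That is, a check of this shape (with a and b standing for the real conditions):

    /* before: if this fires, the dump can't tell which side was false */
    KASSERT(a && b);

    /* after: the failing condition is identified individually */
    KASSERT(a);
    KASSERT(b);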

# 3f1e3a85 | 23-Jan-2020 | ad <ad@NetBSD.org>
Update a comment.

# 7c0780d9 | 07-Jan-2020 | ad <ad@NetBSD.org>
hppa has custom adaptive mutexes. Allow it to build again while not reintroducing the main read of mtx_owner that I wanted to eliminate.
|