#
af1a00f9 |
| 03-Jul-2024 |
jsg <jsg@openbsd.org> |
remove __mp_release_all_but_one(), unused since sched_bsd.c rev 1.92 ok claudio@
|
#
de29a8a5 |
| 29-May-2024 |
claudio <claudio@openbsd.org> |
Convert SCHED_LOCK from a recursive kernel lock to a mutex.
Over the last weeks the last SCHED_LOCK recursion was removed, so this is now possible and will allow splitting up the SCHED_LOCK in an upcoming step.
Instead of implementing an MP and SP version of SCHED_LOCK this just always uses the mutex implementation. While this makes the local s argument unused (the spl is now tracked by the mutex itself), it is still there to keep this diff minimal.
Tested by many. OK jca@ mpi@
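A minimal userland sketch of why the caller's local s argument becomes unused after this conversion: the mutex records the saved interrupt priority level itself. The splraise()/splx() stubs and all names here are illustrative stand-ins, not OpenBSD's actual implementation.

```c
#include <stddef.h>

/* Toy spl stubs standing in for the kernel's interrupt-priority API. */
static int curipl;

static int
splraise(int ipl)
{
	int s = curipl;

	if (ipl > curipl)
		curipl = ipl;
	return s;
}

static void
splx(int s)
{
	curipl = s;
}

/*
 * The mutex remembers the ipl saved on enter, so the caller no longer
 * needs a local variable to carry it to the matching leave.
 */
struct mutex {
	void *mtx_owner;
	int mtx_wantipl;	/* ipl to raise to while held */
	int mtx_oldipl;		/* ipl saved on enter, restored on leave */
};

static void
mtx_enter(struct mutex *m, void *self)
{
	m->mtx_oldipl = splraise(m->mtx_wantipl);
	m->mtx_owner = self;
}

static void
mtx_leave(struct mutex *m)
{
	int s = m->mtx_oldipl;

	m->mtx_owner = NULL;
	splx(s);
}
```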
|
#
55c63681 |
| 26-Mar-2024 |
bluhm <bluhm@openbsd.org> |
Improve spinning in mtx_enter().
Instead of calling mtx_enter_try() in each spinning loop, do it only if the result of a lockless read indicates that the mutex has been released. This avoids some expensive atomic compare-and-swap operations. Up to 5% reduction of spinning time during kernel build can be seen on an 8-core amd64 machine. On other machines there was no visible effect.
Testing on powerpc64 revealed a bug in the mtx_owner declaration: the volatile qualifier applied not to the variable itself but to the object it points to. Move the volatile declaration in struct mutex to avoid a hang when going to multiuser.
from Mateusz Guzik; input kettenis@ jca@; OK mpi@
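The spinning strategy described above can be sketched in userland with C11 atomics standing in for the kernel's primitives (names and structure are illustrative, not the kernel's actual code): spin on a cheap relaxed read, and only attempt the expensive compare-and-swap once that read sees the mutex unowned.

```c
#include <stdatomic.h>
#include <stddef.h>

struct mtx {
	_Atomic(void *) mtx_owner;	/* NULL when the mutex is free */
};

static int
mtx_enter_try(struct mtx *m, void *self)
{
	void *expected = NULL;

	/* Expensive atomic CAS: claim the mutex if it is still free. */
	return atomic_compare_exchange_strong(&m->mtx_owner, &expected, self);
}

static void
mtx_enter(struct mtx *m, void *self)
{
	for (;;) {
		/*
		 * Cheap lockless read: spin locally without generating
		 * atomic bus traffic while the lock is held.
		 */
		while (atomic_load_explicit(&m->mtx_owner,
		    memory_order_relaxed) != NULL)
			;	/* CPU_BUSY_CYCLE() in the kernel */
		/* Only now pay for the CAS; it may still lose a race. */
		if (mtx_enter_try(m, self))
			return;
	}
}
```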
|
#
8d3b4fd4 |
| 26-Apr-2022 |
dv <dv@openbsd.org> |
Bump __mp_lock_spinout to INT_MAX.
The previous value set years ago was causing amd64 kernels to spin out when run with MP_LOCKDEBUG during boot.
ok kettenis@
|
#
c9289374 |
| 05-Mar-2020 |
claudio <claudio@openbsd.org> |
The 'lock spun out' db_printf needs a newline. All other MP_LOCKDEBUG messages already have one. OK anton@ kettenis@
|
#
643f9e39 |
| 04-Jun-2019 |
visa <visa@openbsd.org> |
Let SP kernel work with WITNESS. The necessary instrumentation was missing from the SP variant of mtx_enter() and mtx_enter_try(). mtx_leave() was correct already.
Prompted by and OK patrick@
|
#
8c00de5e |
| 23-Apr-2019 |
visa <visa@openbsd.org> |
Remove file name and line number output from witness(4)
Reduce code clutter by removing the file name and line number output from witness(4). Typically it is easy enough to locate offending locks using the stack traces that are shown in lock order conflict reports. Tricky cases can be tracked using sysctl kern.witness.locktrace=1.
This patch additionally removes the witness(4) wrapper for mutexes. Now each mutex implementation has to invoke the WITNESS_*() macros in order to utilize the checker.
Discussed with and OK dlg@, OK mpi@
|
#
3fbde817 |
| 23-Mar-2019 |
visa <visa@openbsd.org> |
Add a simple spinning mutex for ddb. Unlike mutex(9), this lock keeps on spinning even if `db_active' or `panicstr' has been set. The new mutex also disables IPIs in the critical section.
OK mpi@ patrick@
|
#
6a986ed6 |
| 25-Feb-2019 |
visa <visa@openbsd.org> |
Fix memory barrier in __mtx_leave(). membar_exit_before_atomic() cannot be used in the routine because there is no subsequent atomic operation. membar_exit() has to be used instead.
The mistake has not caused problems because on most platforms membar_exit_before_atomic() is membar_exit(). Only amd64 and i386 have a dedicated membar_exit_before_atomic(), and their exit barriers are no-ops.
OK dlg@
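The barrier placement can be sketched with C11 atomics as a stand-in for the kernel's membar API (illustrative code, not the kernel's): membar_exit() corresponds to a release fence, which orders the critical section's stores before the owner is cleared. membar_exit_before_atomic() is only valid when the very next operation is an atomic read-modify-write, which mtx_leave() does not perform; it releases the mutex with a plain store.

```c
#include <stdatomic.h>
#include <stddef.h>

struct mtx {
	_Atomic(void *) mtx_owner;	/* NULL when the mutex is free */
};

static void
mtx_leave(struct mtx *m)
{
	/*
	 * membar_exit(): make all stores from the critical section
	 * visible before ownership is dropped.  A plain release fence
	 * is required here; an exit-before-atomic barrier would be
	 * wrong because the store below is not an atomic RMW.
	 */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&m->mtx_owner, NULL, memory_order_relaxed);
}
```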
|
#
51f2d5aa |
| 15-Jun-2018 |
visa <visa@openbsd.org> |
Simplify #ifdefs. The kernel_lock symbol is no longer needed when building a uniprocessor kernel with WITNESS.
OK mpi@
|
#
e0c5510e |
| 08-Jun-2018 |
guenther <guenther@openbsd.org> |
Constipate all the struct lock_type's so they go into .rodata
ok visa@
|
#
531d8034 |
| 14-May-2018 |
mpi <mpi@openbsd.org> |
Stop counting and reporting CPU time spent spinning on a lock as system time.
Introduce a new CP_SPIN "scheduler state" and modify userland tools to display the % of time a CPU spends spinning.
Based on a diff from jmatthew@, ok pirofti@, bluhm@, visa@, deraadt@
|
#
6dae24f4 |
| 26-Apr-2018 |
mpi <mpi@openbsd.org> |
Drop into ddb(4) if pmap_tlb_shoot*() take too much time in MP_LOCKDEBUG kernels.
While here sync all MP_LOCKDEBUG/while loops.
ok mlarkin@, visa@
|
#
fdf140be |
| 25-Apr-2018 |
mpi <mpi@openbsd.org> |
Teach mtx_enter_try(9) to avoid deadlocks after a panic.
ok deraadt@
|
#
1bb039c8 |
| 27-Mar-2018 |
mpi <mpi@openbsd.org> |
Try harder to execute code protected by mutexes after entering ddb(4).
Should prevent a panic after panic reported by mlarkin@.
ok mlarkin@, visa@
|
#
8ed6e35e |
| 20-Mar-2018 |
mpi <mpi@openbsd.org> |
Do not panic from ddb(4) when a lock requirement isn't fulfilled.
Extend the logic already present for panic() to any DDB-related operation such that if ddb(4) is entered because of a fault or other trap it is still possible to call 'boot reboot'.
While here stop printing splassert() messages as well, to not fill the buffer.
ok visa@, deraadt@
|
#
ac549d3f |
| 19-Feb-2018 |
mpi <mpi@openbsd.org> |
Include <sys/mutex.h> directly instead of relying on other headers to include it.
|
#
0d50a1e8 |
| 19-Feb-2018 |
jsg <jsg@openbsd.org> |
Directly include sys/mplock.h when needed instead of depending on indirect inclusion. Fixes non-MULTIPROCESSOR WITNESS build.
ok visa@ mpi@
|
#
85b318e2 |
| 14-Feb-2018 |
mpi <mpi@openbsd.org> |
Put WITNESS only functions with the rest of the locking primitives.
|
#
11f61d64 |
| 10-Feb-2018 |
mpi <mpi@openbsd.org> |
Merge license blocks now that they are identical.
|
#
54292cf9 |
| 10-Feb-2018 |
mpi <mpi@openbsd.org> |
Artur Grabowski agreed to relicense his C mutex implementation under ISC.
This will prevent a copyright-o-rama in kern_lock.c
|
#
dd192d6e |
| 08-Feb-2018 |
mpi <mpi@openbsd.org> |
Remove CSRG copyright, there isn't any code left from Berkeley here.
In 2016 natano@ removed the last two functions remaining from the CSRG time: lockinit() and lockstatus(). At that time they were already wrappers around recursive rwlocks functions from thib@ that tedu@ committed in 2013.
ok deraadt@
|
#
710c2d98 |
| 25-Jan-2018 |
mpi <mpi@openbsd.org> |
Move common mutex implementations to a MI place.
Archs not yet converted can make the jump by defining __USE_MI_MUTEX.
ok visa@
|
#
ba0fc568 |
| 04-Dec-2017 |
mpi <mpi@openbsd.org> |
Change __mp_lock_held() to work with an arbitrary CPU info structure and extend ddb(4) "ps /o" output to print which CPU is currently holding the KERNEL_LOCK().
Tested by dhill@, ok visa@
|
#
5a26f776 |
| 17-Oct-2017 |
visa <visa@openbsd.org> |
Add a machine-independent implementation for the mplock. This reduces code duplication and makes it easier to instrument lock primitives.
The MI mplock uses the ticket lock code that has been in use on amd64, i386 and sparc64. These are the architectures that now switch to the MI code.
The lock_machdep.c files are unhooked from the build but not removed yet, in case something goes wrong.
OK mpi@, kettenis@
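A minimal ticket lock in the spirit of the MI mplock described above (field and function names here are illustrative, not the kernel's): each CPU atomically takes the next ticket and spins until the "now serving" counter reaches it, granting the lock in strict FIFO order.

```c
#include <stdatomic.h>

struct ticket_lock {
	_Atomic unsigned int tl_next;		/* next ticket to hand out */
	_Atomic unsigned int tl_serving;	/* ticket currently served */
};

static void
tkt_enter(struct ticket_lock *l)
{
	/* Draw a ticket; fetch_add makes this race-free across CPUs. */
	unsigned int me = atomic_fetch_add(&l->tl_next, 1);

	/* Wait until our number comes up. */
	while (atomic_load(&l->tl_serving) != me)
		;	/* spin: CPU_BUSY_CYCLE() in the kernel */
}

static void
tkt_leave(struct ticket_lock *l)
{
	/* Hand the lock to the next ticket holder, in FIFO order. */
	atomic_fetch_add(&l->tl_serving, 1);
}
```

Unsigned wraparound keeps the comparison correct even after the counters overflow, which is one reason ticket locks suit long-lived kernel locks.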
|