#
aa563902 |
| 11-Jul-2023 |
claudio <claudio@openbsd.org> |
Rework sleep_setup()/sleep_finish() to no longer hold the scheduler lock between calls.
Instead of forcing an atomic operation across multiple calls, use a three-step transaction: 1. set up sleep state by calling sleep_setup() 2. recheck the sleep condition to ensure that the event did not fire before sleep_setup() registered the proc onto the sleep queue 3. call sleep_finish() to either sleep or keep on running, based on the step 2 outcome and any possible signal delivery
To make this work, wakeup from signals, the single-thread API, and wakeup(9) need to be aware of whether a process is between step 1 and step 3, so that the process is not enqueued back onto the runqueue while going to sleep. Introduce the p_flag P_WSLEEP to detect this situation.
On top of this remove the spl dance in msleep() which is no longer required. It is ok to process interrupts between step 1 and 3.
OK mpi@ cheloha@
|
#
b2536c64 |
| 28-Jun-2023 |
claudio <claudio@openbsd.org> |
First step at removing struct sleep_state.
Pass the timeout and sleep priority not only to sleep_setup() but also to sleep_finish(). With that sls_timeout and sls_catch can be removed from struct sleep_state.
The timeout is now setup first thing in sleep_finish() and no longer as last thing in sleep_setup(). This should not cause a noticeable difference since the code run between sleep_setup() and sleep_finish() is minimal.
OK kettenis@
|
#
2b46a8cb |
| 05-Dec-2022 |
deraadt <deraadt@openbsd.org> |
zap a pile of dangling tabs
|
#
0d280c5f |
| 14-Aug-2022 |
jsg <jsg@openbsd.org> |
remove unneeded includes in sys/kern ok mpi@ miod@
|
#
41d7544a |
| 20-Jan-2022 |
bluhm <bluhm@openbsd.org> |
Shifting signed integers left by 31 is undefined behavior in C. found by kubsan; joint work with tobhe@; OK miod@
|
#
9fde647a |
| 09-Sep-2021 |
mpi <mpi@openbsd.org> |
Add THREAD_PID_OFFSET to tracepoint arguments that pass a TID to userland.
Bring these values in sync with the `tid' builtin which already includes the offset. This is necessary to build scripts comparing them, like:
tracepoint:sched:enqueue { @ts[arg0] = nsecs; }
tracepoint:sched:on__cpu /@ts[tid]/ { latency = nsecs - @ts[tid]; }
Discussed with and ok bluhm@
|
#
d73de46f |
| 06-Jul-2021 |
kettenis <kettenis@openbsd.org> |
Introduce CPU_IS_RUNNING() and use it in scheduler-related code to prevent waiting on CPUs that didn't spin up. This will allow us to spin down CPUs in the future to save power as well.
ok mpi@
|
#
2c6d48bb |
| 29-Jun-2021 |
kettenis <kettenis@openbsd.org> |
Didn't intend to commit the CPU_IS_RUNNING() changes just yet, so revert those bits.
|
#
11548269 |
| 29-Jun-2021 |
kettenis <kettenis@openbsd.org> |
SMP support. Mostly works, but occasionally craps out during boot.
ok drahn@
|
#
436960cf |
| 08-Feb-2021 |
mpi <mpi@openbsd.org> |
Simplify sleep_setup API to two operations in preparation for splitting the SCHED_LOCK().
Putting a thread on a sleep queue is reduced to the following:
sleep_setup();
/* check condition or release lock */
sleep_finish();
Previous version ok cheloha@, jmatthew@, ok claudio@
|
#
47507885 |
| 09-Jan-2021 |
gnezdo <gnezdo@openbsd.org> |
Use sysctl_int_bounded in sysctl_hwsmt
Prefer error reporting to silent clipping.
OK millert@
|
#
db7041cd |
| 11-Jun-2020 |
dlg <dlg@openbsd.org> |
get rid of a vestigial bit of the sbartq.
i should have removed the sbartq pointer in r1.47 when i removed the sbartq.
|
#
3d8a8d53 |
| 21-Feb-2020 |
claudio <claudio@openbsd.org> |
Remove sigacts structure sharing. The only process that used sharing was proc0 which is used for kthreads and idle threads. proc0 and all those other kernel threads don't handle signals so there is no benefit in sharing. Simplifies the code a fair bit since the refcnt is gone. OK kettenis@
|
#
55e608a4 |
| 05-Feb-2020 |
mpi <mpi@openbsd.org> |
Remove dead store, from Amit Kulkarni.
|
#
24e0bd45 |
| 30-Jan-2020 |
mpi <mpi@openbsd.org> |
Split `p_priority' into `p_runpri' and `p_slppri'.
Using different fields to remember in which runqueue or sleepqueue threads currently are will make it easier to split the SCHED_LOCK().
With this change, the (potentially boosted) sleeping priority is no longer overwriting the thread priority. This lets us get rid of the logic required to synchronize `p_priority' with `p_usrpri'.
Tested by many, ok visa@
|
#
91b2ecf6 |
| 21-Jan-2020 |
mpi <mpi@openbsd.org> |
Import dt(4) a driver and framework for Dynamic Profiling.
The design is fairly simple: events, in the form of descriptors on a ring, are being produced in any kernel context and being consumed by a userland process reading /dev/dt.
Code and hooks are all guarded under '#if NDT > 0' so this commit shouldn't introduce any change as long as dt(4) is disabled in GENERIC.
ok kettenis@, visa@, jasper@, deraadt@
|
#
2afc0175 |
| 04-Nov-2019 |
visa <visa@openbsd.org> |
Restore the old way of dispatching dead procs through idle proc. The new way needs more thought.
|
#
dfa1de3e |
| 02-Nov-2019 |
visa <visa@openbsd.org> |
Move dead procs to the reaper queue immediately after context switch. This eliminates a forced context switch to the idle proc. In addition, sched_exit() no longer needs to sum proc runtime because mi_switch() will do it.
OK mpi@ a while ago
|
#
48349d21 |
| 01-Nov-2019 |
mpi <mpi@openbsd.org> |
Kill resched_proc() and instead call need_resched() when a thread is added to the runqueue of a CPU.
This fixes out-of-sync cases where the priority of a thread wasn't reflecting the runqueue it was sitting in, leading to unnecessary context switches.
ok visa@
|
#
76e7c40e |
| 15-Oct-2019 |
mpi <mpi@openbsd.org> |
Reduce the number of places where `p_priority' and `p_stat' are set.
This refactoring will help future scheduler locking, in particular to shrink the SCHED_LOCK().
No intended behavior change.
ok visa@
|
#
17b25159 |
| 01-Jun-2019 |
mpi <mpi@openbsd.org> |
Revert to using the SCHED_LOCK() to protect time accounting.
It currently creates a lock ordering problem because SCHED_LOCK() is taken by hardclock(). That means the "priorities" of a thread should be moved out of the SCHED_LOCK() first in order to make progress.
Reported-by: syzbot+8e4863b3dde88eb706dc@syzkaller.appspotmail.com via anton@ as well as by kettenis@
|
#
4b91b74a |
| 31-May-2019 |
mpi <mpi@openbsd.org> |
Use a per-process mutex to protect time accounting instead of SCHED_LOCK().
Note that hardclock(9) still increments p_{u,s,i}ticks without holding a lock.
ok visa@, cheloha@
|
#
cd89219e |
| 26-Mar-2019 |
visa <visa@openbsd.org> |
Make sure that each ci has its spc_deferred queue initialized. Otherwise, the system can crash in smr_call_impl() if SMT is enabled later.
Crash reported by jcs@
|
#
f2396460 |
| 26-Feb-2019 |
visa <visa@openbsd.org> |
Introduce safe memory reclamation, a mechanism for reclaiming shared objects that readers can access without locking. This provides a basis for read-copy-update operations.
Readers access SMR-protected shared objects inside SMR read-side critical section where sleeping is not allowed. To reclaim an SMR-protected object, the writer has to ensure mutual exclusion of other writers, remove the object's shared reference and wait until read-side references cannot exist any longer. As an alternative to waiting, the writer can schedule a callback that gets invoked when reclamation is safe.
The mechanism relies on CPU quiescent states to determine when an SMR-protected object is ready for reclamation.
The <sys/smr.h> header additionally provides an implementation of singly- and doubly-linked lists that can be used together with SMR. These lists allow lockless read access with a concurrent writer.
Discussed with many OK mpi@ sashan@
|
#
d52dbbbe |
| 17-Nov-2018 |
cheloha <cheloha@openbsd.org> |
Add new KERN_CPUSTATS sysctl(2) so we can identify offline CPUs.
Because of hw.smt we need a way to determine whether a given CPU is "online" or "offline" from userspace. KERN_CPTIME2 is an array, and so cannot be cleanly extended for this purpose, so add a new sysctl(2) KERN_CPUSTATS with an extensible struct. At the moment it's just KERN_CPTIME2 with a flags member, but it can grow as needed.
KERN_CPUSTATS appears to have been defined by BSDi long ago, but there are few (if any) packages in the wild still using the symbol so breakage in ports should be near zero. No other system inherited the symbol from BSDi, either.
Then, use the new sysctl(2) in systat(1) and top(1):
- systat(1) draws placeholder marks ('-') instead of percentages for offline CPUs in the cpu view.
- systat(1) omits offline CPU ticks when drawing the "big bar" in the vmstat view. The upshot is that the bar isn't half idle when half your logical CPUs are disabled.
- top(1) does not draw lines for offline CPUs; if CPUs toggle on or offline in interactive mode we redraw the display to expand/reduce space for the new/missing CPUs. This is consistent with what some top(1) implementations do on Linux.
- top(1) omits offline CPUs from the totals when CPU totals are combined into a single line (the '-1' flag).
Originally prompted by deraadt@. Discussed endlessly with deraadt@, kettenis@, and sthen@. Tested by jmc@ and jca@. Earlier versions also discussed with jca@. Earlier versions tested by jmc@, tb@, and many others.
docs ok jmc@, kernel bits ok kettenis@, everything ok sthen@, "Is your stuff in yet?" deraadt@
|