# ed07db5b | 13-Sep-2023 | claudio <claudio@openbsd.org>
Revert commitid: yfAefyNWibUyjkU2, ESyyH5EKxtrXGkS6 and itscfpFvJLOj8mHB;
The change to the single thread API results in crashes inside exit1() as found by Syzkaller. There seems to be a race in the exit codepath. What exactly fails is not really clear, therefore revert for now.
This should fix the following Syzkaller reports:
Reported-by: syzbot+38efb425eada701ca8bb@syzkaller.appspotmail.com
Reported-by: syzbot+ecc0e8628b3db39b5b17@syzkaller.appspotmail.com
and maybe more.
Reverted commits:
----------------------------
Protect ps_single, ps_singlecnt and ps_threadcnt by the process mutex.
The single thread API needs to lock the process to enter single thread mode and does not need to stop the scheduler.
This code changes ps_singlecount from a count down to zero to ps_singlecnt which counts up until equal to ps_threadcnt (in which case all threads are properly asleep).
Tested by phessler@, OK mpi@ cheloha@
----------------------------
Change how ps_threads and p_thr_link are locked away from using SCHED_LOCK.
The per process thread list can be traversed (read) by holding either the KERNEL_LOCK or the per process ps_mtx (instead of SCHED_LOCK). Abusing the SCHED_LOCK for this makes it impossible to split up the scheduler lock into something more fine grained.
Tested by phessler@, ok mpi@
----------------------------
Fix SCHED_LOCK() leak in single_thread_set()
In the (q->p_flag & P_WEXIT) branch is a continue that did not release the SCHED_LOCK. Refactor the code a bit to simplify the places SCHED_LOCK is grabbed and released.
Reported-by: syzbot+ea26d351acfad3bb3f15@syzkaller.appspotmail.com
OK kettenis@
# 13095e6d | 08-Sep-2023 | claudio <claudio@openbsd.org>
Change how ps_threads and p_thr_link are locked away from using SCHED_LOCK.
The per process thread list can be traversed (read) by holding either the KERNEL_LOCK or the per process ps_mtx (instead of SCHED_LOCK). Abusing the SCHED_LOCK for this makes it impossible to split up the scheduler lock into something more fine grained.
Tested by phessler@, ok mpi@
# 42609633 | 04-Sep-2023 | claudio <claudio@openbsd.org>
Protect ps_single, ps_singlecnt and ps_threadcnt by the process mutex.
The single thread API needs to lock the process to enter single thread mode and does not need to stop the scheduler.
This code changes ps_singlecount from a count down to zero to ps_singlecnt which counts up until equal to ps_threadcnt (in which case all threads are properly asleep).
Tested by phessler@, OK mpi@ cheloha@
# 94c38e45 | 29-Aug-2023 | claudio <claudio@openbsd.org>
Remove p_rtime from struct proc and replace it by passing the timespec as argument to the tuagg_locked function.
- Remove incorrect use of p_rtime in other parts of the tree. p_rtime was almost always 0 so including it in any sum did not alter the result.
- In main() the update of time can be further simplified since at that time only the primary cpu is running.
- Add missing nanouptime() call in cpu_hatch() for hppa.
- Rename tuagg_unlocked to tuagg_locked like it is done in the rest of the tree.
OK cheloha@ dlg@
# 817c1871 | 25-Apr-2023 | claudio <claudio@openbsd.org>
Rename ps_refcnt to ps_threadcnt in struct process and implement P_HASSIBLING() using this count. OK mvs@ mpi@
# 79b24ea9 | 29-Dec-2022 | guenther <guenther@openbsd.org>
Add ktrace struct tracepoints for siginfo_t to the kernel side of waitid(2) and __thrsigdivert(2) and teach kdump(1) to handle them. Also report more from the siginfo_t inside PSIG tracepoints.
ok mpi@
# 60972c46 | 19-Dec-2022 | guenther <guenther@openbsd.org>
Add WTRAPPED option for waitid(2) to control whether CLD_TRAPPED state changes are reported. That's the 6th bit, so switch to hex constants. Adjust #if tests for consistency.
ok kettenis@
# 2b46a8cb | 05-Dec-2022 | deraadt <deraadt@openbsd.org>
zap a pile of dangling tabs
# d32eaf92 | 03-Nov-2022 | guenther <guenther@openbsd.org>
Style: always use *retval and never retval[0] in syscalls, to reflect that retval is just a single return value.
ok miod@
# ddb514ca | 26-Oct-2022 | kettenis <kettenis@openbsd.org>
Fix handling of PGIDs in wait4(2) that I broke with the previous commit.
ok anton@, millert@
# 8112871f | 25-Oct-2022 | kettenis <kettenis@openbsd.org>
Implement waitid(2) which is now part of POSIX and used by mozilla. This includes a change of siginfo_r which is technically an ABI break but this should have no real-world impact since the members involved are never touched by the kernel.
ok millert@, deraadt@
# 0d280c5f | 14-Aug-2022 | jsg <jsg@openbsd.org>
remove unneeded includes in sys/kern
ok mpi@ miod@
# 26eeb0be | 31-Mar-2022 | millert <millert@openbsd.org>
Move knote_processexit() call from exit1() to the reaper(). This fixes a problem where NOTE_EXIT could be received before the process was officially a zombie and thus not immediately waitable. OK deraadt@ visa@
# 49e9d6d1 | 14-Feb-2022 | claudio <claudio@openbsd.org>
Introduce a signal context that is used to pass signal related information from cursig() to postsig() or the caller itself. This will simplify locking. Also alter sigactsfree() a bit and move it into process_zap() so ps_sigacts is always a valid pointer. OK semarie@
# a671ffa0 | 28-Jan-2022 | guenther <guenther@openbsd.org>
When it's the possessive of 'it', it's spelled "its", without the apostrophe.
# da571ddd | 24-Oct-2021 | jsg <jsg@openbsd.org>
use NULL not 0 for pointer values in kern ok semarie@
# b6d396f3 | 12-Mar-2021 | mpi <mpi@openbsd.org>
Kill SINGLE_PTRACE and use SINGLE_SUSPEND which has almost the same semantic
single_thread_set() is modified to explicitly indicate when waiting until sibling threads are parked is required. This is obviously not required if a traced thread is switching away from a CPU after handling a STOP signal.
ok claudio@
# 134c8d13 | 08-Mar-2021 | claudio <claudio@openbsd.org>
Revert commitid: AZrsCSWEYDm7XWuv;
Kill SINGLE_PTRACE and use SINGLE_SUSPEND which has almost the same semantic.
This diff did not properly kill SINGLE_PTRACE and broke RAMDISK kernels.
# f0794b6f | 08-Mar-2021 | mpi <mpi@openbsd.org>
Kill SINGLE_PTRACE and use SINGLE_SUSPEND which has almost the same semantic.
single_thread_set() is modified to explicitly indicate when waiting until sibling threads are parked is required. This is obviously not required if a traced thread is switching away from a CPU after handling a STOP signal.
ok claudio@
# 86c58351 | 15-Feb-2021 | mpi <mpi@openbsd.org>
Move single_thread_set() out of KERNEL_LOCK().
Use the SCHED_LOCK() to ensure `ps_thread' isn't being modified by a sibling when entering tsleep(9) w/o KERNEL_LOCK().
ok visa@
# 193f316c | 08-Feb-2021 | mpi <mpi@openbsd.org>
Revert the conversion of the per-process thread list into a SMR_TAILQ.
We did not reach a consensus about using SMR to unlock single_thread_set() so there's no point in keeping this change.
# 9e1c4ad6 | 17-Jan-2021 | mvs <mvs@openbsd.org>
Cache parent's pid as `ps_ppid' and use it instead of `ps_pptr->ps_pid'. This allows us to unlock getppid(2).
ok mpi@
# 39f6f778 | 09-Dec-2020 | mpi <mpi@openbsd.org>
Add kernel-only per-thread kqueue & helpers to initialize and free it.
This will soon be used by select(2) and poll(2).
ok anton@, visa@
# b21c774f | 07-Dec-2020 | mpi <mpi@openbsd.org>
Convert the per-process thread list into a SMR_TAILQ.
Currently all iterations are done under KERNEL_LOCK() and therefore use the *_LOCKED() variant.
From and ok claudio@
# 00349993 | 16-Nov-2020 | jsing <jsing@openbsd.org>
Prevent exit status from being clobbered on thread exit.
Ensure that EXIT_NORMAL only runs once by guarding it with PS_EXITING.
It was previously possible for EXIT_NORMAL to be run twice, depending on which thread called exit() and the order in which the threads were torn down. This is due to the P_HASSIBLING() check triggering the last thread to run EXIT_NORMAL, even though it may have already been run via an exit() call.
ok kettenis@ visa@