#
2ee1bbb5 |
| 13-Jan-2025 |
mvs <mvs@openbsd.org> |
taskq_del_barrier(9) and timeout_del_barrier(9) should always call taskq_barrier(9) and timeout_barrier(9) respectively.
When the task or timeout was scheduled to run, both *_del_barrier(9) only remove it from the pending queue and don't wait for it to finish. However, a task could be simultaneously running and scheduled for its next run, so they need to always wait until the running handler has completed.
ok dlg
|
#
3b372c34 |
| 14-May-2024 |
jsg <jsg@openbsd.org> |
remove prototypes with no matching function
|
#
da19784a |
| 29-Jul-2023 |
anton <anton@openbsd.org> |
Avoid accessing curproc early during boot when kcov is enabled as it might be unassigned until all secondary processors are up and running.
|
#
a75b147b |
| 15-Aug-2022 |
mvs <mvs@openbsd.org> |
Revert previous. It was not ok'ed by dlg@.
|
#
56070bb8 |
| 15-Aug-2022 |
mvs <mvs@openbsd.org> |
Stop doing the lockless `t_flags' check within task_add(9) and task_del(9). This doesn't work on MP systems. We do a locked `t_flags' check just after the lockless check, so just remove it.
ok dlg@
|
#
8430bc4b |
| 01-Aug-2020 |
anton <anton@openbsd.org> |
Add support for remote coverage to kcov. Remote coverage is collected from threads other than the one currently having kcov enabled. A thread with kcov enabled occasionally delegates work to another thread; collecting coverage from such threads improves the ability of syzkaller to correlate side effects in the kernel caused by issuing a syscall.
Remote coverage is divided into subsystems. The only supported subsystem right now collects coverage from scheduled tasks and timeouts on behalf of a kcov enabled thread. In order to make this work, `struct task' and `struct timeout' must be extended with a new field keeping track of the process that scheduled the task/timeout. Both structures have therefore grown by the size of a pointer on all architectures.
The kernel API is documented in a new kcov_remote_register(9) manual.
Remote coverage is also supported by kcov on NetBSD and Linux.
ok mpi@
|
#
c3947ab6 |
| 11-Jun-2020 |
dlg <dlg@openbsd.org> |
whitespace and speeling fix in a comment. no functional change.
|
#
1330a2b3 |
| 11-Jun-2020 |
dlg <dlg@openbsd.org> |
make taskq_barrier wait for pending tasks, not just the running tasks.
I wrote taskq_barrier with the behaviour described in the manpage:
taskq_barrier() guarantees that any task that was running on the tq taskq when the barrier was called has finished by the time the barrier returns.
Note that it talks about the currently running task, not pending tasks.
It just so happens that the original implementation just used task_add to put a condvar on the list and waited for it to run. Because task_add uses TAILQ_INSERT_TAIL, you ended up waiting for all pending work to run too, not just the currently running task.
The new implementation took advantage of already holding the lock and used TAILQ_INSERT_HEAD to put the barrier work at the front of the queue so it would run next, which is closer to the stated behaviour.
Using the tail insert here restores the previous accidental behaviour.
jsg@ points out the following:
> The linux functions like flush_workqueue() we use this for in drm want
> to wait on all scheduled work not just currently running.
>
> ie a comment from one of the linux functions:
>
> /**
>  * flush_workqueue - ensure that any scheduled work has run to completion.
>  * @wq: workqueue to flush
>  *
>  * This function sleeps until all work items which were queued on entry
>  * have finished execution, but it is not livelocked by new incoming ones.
>  */
>
> our implementation of this in drm is
>
> void
> flush_workqueue(struct workqueue_struct *wq)
> {
> 	if (cold)
> 		return;
>
> 	taskq_barrier((struct taskq *)wq);
> }
I don't think it's worth complicating the taskq API, so I'm just going to make taskq_barrier wait for pending work too.
tested by tb@ ok jsg@
|
#
ba614123 |
| 07-Jun-2020 |
dlg <dlg@openbsd.org> |
add support for running taskq_barrier from a task inside the taskq.
this is required for an upcoming drm update, where the linux workqueue api that supports this is mapped to our taskq api.
the main way taskqs support this is to have the taskq worker threads record their curproc on the taskq, so taskq_barrier calls can iterate over that list looking for their own curproc. if a barrier's curproc is in the list, it must be running inside the taskq, and should pretend that it's a barrier task.
this also supports concurrent barrier calls by having the taskq recognise the situation and have the barriers work together rather than deadlocking. they end up sharing the work of getting the barrier tasks onto the workers. once all the workers (or in-tq barriers) have rendezvoused, the barrier calls unwind, and the last one out lets the other barriers and barrier tasks return.
all this barrier logic is implemented in the barrier code; it takes the existing multiworker handling out of the actual taskq loop.
thanks to jsg@ for testing this and previous versions of the diff. ok visa@ kettenis@
|
#
babb761d |
| 19-Dec-2019 |
mpi <mpi@openbsd.org> |
Convert infinite sleeps to {m,t}sleep_nsec(9).
ok visa@
|
#
cf12935a |
| 23-Jun-2019 |
kettenis <kettenis@openbsd.org> |
Make taskq_barrier(9) work for multi-threaded task queues.
ok visa@
|
#
60aa962e |
| 28-Apr-2019 |
dlg <dlg@openbsd.org> |
add WITNESS support to barriers modelled on the timeout stuff visa did.
if a taskq takes a lock, and something holding that lock calls taskq_barrier, there's a potential deadlock. detect this as a lock order problem when witness is enabled. task_del conditionally followed by taskq_barrier is a common pattern, so add a taskq_del_barrier wrapper for it that unconditionally checks for the deadlock, like timeout_del_barrier.
ok visa@
|
#
f07ea34c |
| 01-Apr-2019 |
dlg <dlg@openbsd.org> |
deprecate TASKQ_CANTSLEEP since nothing uses it anymore
if we ever want it back, it's in the attic.
ok mpi@ visa@ kettenis@
|
#
ff4a31df |
| 16-Dec-2018 |
dlg <dlg@openbsd.org> |
add task_pending
jsg@ wants this for drm, and i've had a version of it in diffs since 2016, but obviously haven't needed to use it just yet.
task_pending is modelled on timeout_pending, and tells you if the task is on a list waiting to execute.
ok jsg@
|
#
5241d0f4 |
| 14-Dec-2017 |
dlg <dlg@openbsd.org> |
replace the bare sleep state handling in barriers with wait cond code
|
#
6f1a6781 |
| 13-Nov-2017 |
dlg <dlg@openbsd.org> |
add taskq_barrier
taskq_barrier guarantees that any task that was running on the taskq has finished by the time taskq_barrier returns. it is similar to intr_barrier.
this is needed for use in ifq_barrier as part of an upcoming change.
|
#
bd1acdab |
| 30-Oct-2017 |
visa <visa@openbsd.org> |
Let witness(4) differentiate between taskq mutexes to avoid reporting an error in a scenario like the following:
1. mtx_enter(&tqa->tq_mtx);
2. IRQ
3. mtx_enter(&tqb->tq_mtx);
Found by Hrvoje Popovski, OK mpi@
|
#
9b1ed563 |
| 14-Feb-2017 |
mpi <mpi@openbsd.org> |
Convert most of the manual checks for CPU hogging to sched_pause().
The distinction between preempt() and yield() stays as it is useful to know if a thread decided to yield by itself or if the kernel told it to go away.
ok tedu@, guenther@
|
#
9e51102c |
| 11-Aug-2016 |
dlg <dlg@openbsd.org> |
shuffle some code to make it more symmetrical.
no functional change.
|
#
8b519233 |
| 08-Dec-2015 |
dlg <dlg@openbsd.org> |
tweak whitespace in struct definition. no functional change.
|
#
d3bdd90f |
| 08-Dec-2015 |
dlg <dlg@openbsd.org> |
use struct task_list instead of TAILQ_HEAD(, task)
|
#
08dae4e5 |
| 19-Nov-2015 |
dlg <dlg@openbsd.org> |
don't try to wake up other threads to handle pending work when we know there's only one thread in the taskq. wakeups are much more expensive than a simple compare.
from haesbart
|
#
79ea9c08 |
| 09-Feb-2015 |
dlg <dlg@openbsd.org> |
we want to defer work traditionally (in openbsd) handled in an interrupt context to a taskq running in a thread. however, there is a concern that if we do that then we allow accidental use of sleeping APIs in this work, which will make it harder to move the work back to interrupts in the future.
guenther and kettenis came up with the idea of marking a proc with CANTSLEEP which the sleep paths can check and panic on.
this builds on that so you create taskqs that run with CANTSLEEP set except when they need to sleep for more tasks to run.
the taskq_create api is changed to take a flags argument so users can specify CANTSLEEP. MPSAFE is also passed via this flags field now. this means archs that defined IPL_MPSAFE to 0 can now create mpsafe taskqs too.
lots of discussion at s2k15 ok guenther@ miod@ mpi@ tedu@ pelikan@
|
#
e4195480 |
| 27-Jan-2015 |
dlg <dlg@openbsd.org> |
remove the second void * argument on tasks.
when workqs were introduced, we provided a second argument so you could pass a thing and some context to work on it in. there were very few things that took advantage of the second argument, so when i introduced pools i suggested removing it. since tasks were meant to replace workqs, it was requested that we keep the second argument to make porting from workqs to tasks easier.
now that workqs are gone, i had a look at the use of the second argument again and found only one good use of it (vdsp(4) on sparc64 if you're interested) and a tiny handful of questionable uses. the vast majority of tasks only used a single argument. i have since modified all tasks that used two args to only use one, so now we can remove the second argument.
so this is a mechanical change. all tasks only passed NULL as their second argument, so we can just remove it.
ok krw@
|
#
fc62de09 |
| 01-Nov-2014 |
tedu <tedu@openbsd.org> |
add a few sizes to free
|