History log of /netbsd-src/sys/kern/subr_workqueue.c (Results 1 – 25 of 48)
Revision Date Author Comments
# 04615d56 01-Mar-2024 mrg <mrg@NetBSD.org>

check that l_nopreempt (preemption count) doesn't change after callbacks

check that the idle loop, soft interrupt handlers, workqueue, and xcall
callbacks do not modify the preemption count, in most cases, knowing it
should be 0 currently.

this work was originally done by simonb. i cleaned it up slightly and
made some minor enhancements, in discussion with riastradh@.

other callback call sites could check this as well (such as MD interrupt
handlers, or really anything that includes a callback registration. x86
version to be committed separately.)

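A minimal sketch of the check being described, assuming an illustrative wrapper name (call_checked); curlwp, l_nopreempt, and KASSERTMSG are the real kernel interfaces:

#include <sys/param.h>
#include <sys/lwp.h>
#include <sys/systm.h>
#include <sys/workqueue.h>

/*
 * Record the preemption count before the callback and assert that the
 * callback restored it -- in most cases it should be 0 throughout.
 */
static void
call_checked(void (*func)(struct work *, void *), struct work *wk, void *arg)
{
        struct lwp *l = curlwp;
        const int nopreempt = l->l_nopreempt;

        (*func)(wk, arg);

        KASSERTMSG(l->l_nopreempt == nopreempt,
            "callback %p changed l_nopreempt from %d to %d",
            func, nopreempt, l->l_nopreempt);
}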


# e5b4f163 09-Aug-2023 riastradh <riastradh@NetBSD.org>

workqueue(9): Factor out wq->wq_flags & WQ_FPU in workqueue_worker.

No functional change intended. Makes it clearer that s is
initialized when used.


# 9b181471 09-Aug-2023 riastradh <riastradh@NetBSD.org>

workqueue(9): Sort includes.

No functional change intended.


# 0d01e7b0 09-Aug-2023 riastradh <riastradh@NetBSD.org>

workqueue(9): Avoid unnecessary mutex_exit/enter cycle each loop.


# 6ce73099 09-Aug-2023 riastradh <riastradh@NetBSD.org>

workqueue(9): Stop violating queue(3) internals.


# 23d0d098 09-Aug-2023 riastradh <riastradh@NetBSD.org>

workqueue(9): Sprinkle dtrace probes for workqueue_wait edge cases.

Let's make it easy to find out whether these are hit.


# 53a88a41 09-Aug-2023 riastradh <riastradh@NetBSD.org>

workqueue(9): Avoid touching running work items in workqueue_wait.

As soon as the workqueue function has been called, it is forbidden to
touch the struct work passed to it -- the function might free or
reuse the data structure it is embedded in.

So workqueue_wait is forbidden to search the queue for the batch of
running work items. Instead, use a generation number which is odd
while the thread is processing a batch of work and even when not.

There's still a small optimization available with the struct work
pointer to wait for: if we find the work item in one of the per-CPU
_pending_ queues, then after we wait for a batch of work to complete
on that CPU, we don't need to wait for work on any other CPUs.

PR kern/57574

XXX pullup-10
XXX pullup-9
XXX pullup-8

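A minimal sketch of the odd/even generation scheme, with simplified field and function names (q_gen, q_cv, sketch_*); the actual struct workqueue_queue in subr_workqueue.c differs:

#include <sys/types.h>
#include <sys/mutex.h>
#include <sys/condvar.h>

struct wq_queue_sketch {
        kmutex_t        q_mutex;
        kcondvar_t      q_cv;
        uint64_t        q_gen;          /* odd: batch running; even: idle */
};

/* Worker: bracket each batch of work by flipping the generation parity. */
static void
sketch_run_batch(struct wq_queue_sketch *q, void (*process_batch)(void))
{
        mutex_enter(&q->q_mutex);
        q->q_gen++;                     /* even -> odd: batch begins */
        mutex_exit(&q->q_mutex);

        (*process_batch)();             /* run the work items unlocked */

        mutex_enter(&q->q_mutex);
        q->q_gen++;                     /* odd -> even: batch done */
        cv_broadcast(&q->q_cv);
        mutex_exit(&q->q_mutex);
}

/* Waiter: never touches the running struct work; only watches the generation. */
static void
sketch_wait_batch(struct wq_queue_sketch *q)
{
        mutex_enter(&q->q_mutex);
        uint64_t gen = q->q_gen;
        while ((gen & 1) != 0 && q->q_gen == gen)
                cv_wait(&q->q_cv, &q->q_mutex);
        mutex_exit(&q->q_mutex);
}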


# f980ceeb 29-Oct-2022 riastradh <riastradh@NetBSD.org>

workqueue(9): Sprinkle dtrace probes.


# c9a823a4 15-Aug-2022 riastradh <riastradh@NetBSD.org>

workqueue(9): workqueue_wait and workqueue_destroy may sleep.

But might not, so assert sleepable up front.


# d18cf1b9 08-Sep-2020 riastradh <riastradh@NetBSD.org>

workqueue: Lift unnecessary restriction on workqueue_wait.

Allow multiple concurrent waits at a time, and allow enqueueing work
at the same time (as long as it's not the work we're waiting for).
This way multiple users can use a shared global workqueue and safely
wait for individual work items concurrently, while the workqueue is
still in use for other items (e.g., wg(4) peers).

This has the side effect of taking away a diagnostic measure, but I
think allowing the diagnostic's false positives instead of rejecting
them is worth it. We could cheaply add it back with some false
negatives if it's important.

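A hedged usage sketch of what this allows, with an illustrative peer structure; the real wg(4) code is more involved:

#include <sys/workqueue.h>

struct peer_sketch {
        struct work     p_work;         /* embedded per-peer work item */
};

/*
 * Several peers share one global workqueue.  Destroying one peer only
 * waits for that peer's own item; other peers may keep enqueueing and
 * waiting concurrently on the same queue.
 */
static void
peer_destroy_sketch(struct workqueue *shared_wq, struct peer_sketch *p)
{
        workqueue_wait(shared_wq, &p->p_work);
        /* p->p_work is no longer queued or running; p may be freed */
}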


# d7f8883f 01-Aug-2020 riastradh <riastradh@NetBSD.org>

New workqueue flag WQ_FPU.

Arranges kthread_fpu_enter/exit around calls to the worker. Saves
cost over explicit calls to kthread_fpu_enter/exit in the worker by
only doing it once, since there's often a high cost to flushing the
icache and zeroing the fpu registers.

As proposed on tech-kern:
https://mail-index.netbsd.org/tech-kern/2020/06/20/msg026524.html

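A minimal sketch of what the flag arranges, using an illustrative helper; WQ_FPU and kthread_fpu_enter/exit(9) are the real interfaces:

#include <sys/types.h>
#include <sys/kthread.h>
#include <sys/workqueue.h>

/*
 * With WQ_FPU, the worker brackets a whole batch with one
 * kthread_fpu_enter/exit pair instead of paying the FPU save/zero
 * cost once per work item.
 */
static void
sketch_run_batch_fpu(bool fpu, void (*func)(struct work *, void *),
    struct work *batch[], size_t n, void *arg)
{
        int s = 0;
        size_t i;

        if (fpu)
                s = kthread_fpu_enter();
        for (i = 0; i < n; i++)
                (*func)(batch[i], arg); /* callbacks may use the FPU */
        if (fpu)
                kthread_fpu_exit(s);
}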


# 21f6c0a1 13-Jun-2018 ozaki-r <ozaki-r@NetBSD.org>

Don't wait in workqueue_wait if called from the worker itself

Otherwise workqueue_wait never returns in such a case. This treatment
is the same as callout_halt.
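A minimal sketch of the guard, assuming a simplified queue structure whose q_worker field records the worker lwp:

#include <sys/types.h>
#include <sys/lwp.h>

struct wq_q_sketch {
        struct lwp      *q_worker;      /* the queue's worker thread */
};

/*
 * Like callout_halt on a callout running on the current lwp: if the
 * worker itself calls workqueue_wait, sleeping would deadlock, so
 * treat the work as complete and return immediately.
 */
static bool
sketch_wait_is_self(const struct wq_q_sketch *q)
{
        return q->q_worker == curlwp;
}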


# 1cac9077 06-Feb-2018 ozaki-r <ozaki-r@NetBSD.org>

Check the length of a passed name to avoid silent truncation


# cd3ab682 30-Jan-2018 ozaki-r <ozaki-r@NetBSD.org>

Check whether an already-queued work item is enqueued again, which is not allowed


# 3e34af79 28-Dec-2017 ozaki-r <ozaki-r@NetBSD.org>

Add workqueue_wait that waits for a specific work to finish

The caller must ensure that no new work is enqueued before calling
workqueue_wait. Note that if the workqueue is WQ_PERCPU, the caller
can enqueue new work to a queue other than the one being waited on.

Discussed on tech-kern@

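A hedged usage sketch of the new interface; my_softc and the detach path are illustrative, workqueue_wait and workqueue_destroy are the real calls:

#include <sys/workqueue.h>

struct my_softc {
        struct workqueue        *sc_wq;
        struct work             sc_wk;  /* embedded work item */
};

static void
my_detach(struct my_softc *sc)
{
        /*
         * Precondition: sc_wk can no longer be (re-)enqueued.  With
         * WQ_PERCPU, other work may still go to other CPUs' queues.
         */
        workqueue_wait(sc->sc_wq, &sc->sc_wk);
        workqueue_destroy(sc->sc_wq);
}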


# 320d4922 07-Oct-2012 matt <matt@NetBSD.org>

If the workqueue is using a prio less than PRI_KERNEL, make sure KTHREAD_TS
is used when creating the kthread.
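A minimal sketch of the flag selection around the real kthread_create(9) call; the wrapper and flag set are illustrative:

#include <sys/param.h>
#include <sys/types.h>
#include <sys/kthread.h>

static int
sketch_create_worker(pri_t prio, struct cpu_info *ci,
    void (*func)(void *), void *arg, lwp_t **lp, const char *name)
{
        int flags = KTHREAD_MPSAFE;

        /* Sub-kernel priorities need the timeshared scheduling class. */
        if (prio < PRI_KERNEL)
                flags |= KTHREAD_TS;
        return kthread_create(prio, flags, ci, func, arg, lp, "%s", name);
}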


# 81797b40 23-Oct-2011 jym <jym@NetBSD.org>

Turn a workqueue(9) name into an array in the struct workqueue, rather
than a const char *. This avoids keeping a reference to a string
owned by the caller (the string could be allocated on the stack).
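A minimal sketch of the change; the struct and the MAXCOMLEN bound are illustrative (the real field size in struct workqueue may differ):

#include <sys/param.h>
#include <sys/systm.h>

struct wq_name_sketch {
        char    wq_name[MAXCOMLEN];     /* owned copy; was: const char * */
};

static void
sketch_set_name(struct wq_name_sketch *wq, const char *name)
{
        /*
         * Copy rather than alias: the caller's string may live on its
         * stack and disappear after workqueue_create returns.
         */
        strlcpy(wq->wq_name, name, sizeof(wq->wq_name));
}

The 1cac9077 commit above later adds an explicit length check, so an over-long name fails loudly instead of being truncated silently.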


# 2de1fdfe 27-Jul-2011 uebayasi <uebayasi@NetBSD.org>

These don't need uvm/uvm_extern.h.


# ad4f42d4 11-Nov-2009 rmind <rmind@NetBSD.org>

workqueue_finiqueue: remove unused variable.


# 40cf6f36 21-Oct-2009 rmind <rmind@NetBSD.org>

Remove uarea swap-out functionality:

- Addresses the issue described in PR/38828.
- Some simplification in threading and sleepq subsystems.
- Eliminates pmap_collect() and, as a side note, allows pmap optimisations.
- Eliminates XS_CTL_DATA_ONSTACK in scsipi code.
- Avoids a few scans of the LWP list and thus potentially long holds of proc_lock.
- Cuts ~1.5k lines of code. Reduces amd64 kernel size by ~4k.
- Removes __SWAP_BROKEN cases.

Tested on x86, mips, acorn32 (thanks <mpumford>) and partly tested on
acorn26 (thanks to <bjh21>).

Discussed on <tech-kern>, reviewed by <ad>.



# d59302b0 16-Aug-2009 yamt <yamt@NetBSD.org>

struct lwp -> lwp_t for consistency


# 117500b9 03-Apr-2009 ad <ad@NetBSD.org>

workqueue_finiqueue: our stack could be swapped out while enqueued to
a worker thread.


# 38d7b7ba 15-Sep-2008 rmind <rmind@NetBSD.org>

Replace intptr_t with uintptr_t in a few places.


# 1906aa3e 02-Jul-2008 matt <matt@NetBSD.org>

Switch from KASSERT to CTASSERT for those asserts testing sizes of types.


# feb4783f 27-Mar-2008 ad <ad@NetBSD.org>

Replace use of CACHE_LINE_SIZE in some obvious places.

