| #
04615d56 |
| 01-Mar-2024 |
mrg <mrg@NetBSD.org> |
check that l_nopreempt (preemption count) doesn't change after callbacks
check that the idle loop, soft interrupt handlers, workqueue, and xcall callbacks do not modify the preemption count, in most cases, knowing it should be 0 currently.
this work was originally done by simonb. cleaned up slightly and some minor enhancements made by myself, in discussion with riastradh@.
other callback call sites could check this as well (such as MD interrupt handlers, or really anything that includes a callback registration. x86 version to be committed separately.)
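A minimal sketch of the check described above, assuming a generic callback pointer "handler" with argument "arg" (illustrative names, not the actual call sites):

    /* Snapshot the preemption count, run the callback, then verify it. */
    const int nopreempt = curlwp->l_nopreempt;
    (*handler)(arg);
    KASSERTMSG(curlwp->l_nopreempt == nopreempt,
        "callback %p changed l_nopreempt from %d to %d",
        handler, nopreempt, curlwp->l_nopreempt);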
|
| #
b316ad65 |
| 05-Oct-2023 |
ad <ad@NetBSD.org> |
The idle LWP doesn't need to care about kernel_lock.
|
| #
9fc45356 |
| 05-Sep-2020 |
riastradh <riastradh@NetBSD.org> |
Round of uvm.h cleanup.
The poorly named uvm.h is generally supposed to be for uvm-internal users only.
- Narrow it to files that actually need it -- mostly files that need to query whether curlwp is the pagedaemon, which should maybe be exposed by an external header.
- Use uvm_extern.h where feasible and uvm_*.h for things not exposed by it. We should split up uvm_extern.h but this will serve for now to reduce the uvm.h dependencies.
- Use uvm_stat.h and #ifdef UVMHIST uvm.h for files that use UVMHIST(ubchist), since ubchist is declared in uvm.h but the reference evaporates if UVMHIST is not defined, so we reduce header file dependencies.
- Make uvm_device.h and uvm_swap.h independently includable while here.
ok chs@
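A sketch of the UVMHIST include pattern described above (surrounding file context omitted):

    #include <uvm/uvm_extern.h>
    #include <uvm/uvm_stat.h>
    #ifdef UVMHIST
    #include <uvm/uvm.h>        /* for ubchist */
    #endif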
|
| #
abbb7ed5 |
| 26-Mar-2020 |
ad <ad@NetBSD.org> |
Leave the idle LWPs in state LSIDL even when running, so they don't mess up output from ps/top/etc. Correctness isn't at stake, LWPs in other states are temporarily on the CPU at times too (e.g. LSZOMB, LSSLEEP).
|
| #
82002773 |
| 15-Feb-2020 |
ad <ad@NetBSD.org> |
- Move the LW_RUNNING flag back into l_pflag: updating l_flag without lock in softint_dispatch() is risky. May help with the "softint screwup" panic.
- Correct the memory barriers around zombies switching into oblivion.
|
| #
9c6efdb4 |
| 25-Jan-2020 |
ad <ad@NetBSD.org> |
For secondary CPUs, the idle LWP is the first to run, and it's directly entered from MD code without a trip through mi_switch(). Make the picture look good in case the CPU takes an interrupt before it calls idle_loop().
|
| #
2ddceed1 |
| 08-Jan-2020 |
ad <ad@NetBSD.org> |
Hopefully fix some problems seen with MP support on non-x86, in particular where curcpu() is defined as curlwp->l_cpu:
- mi_switch(): undo the ~2007ish optimisation to unlock curlwp before calling cpu_switchto(). It's not safe to let other actors mess with the LWP (in particular l->l_cpu) while it's still context switching. This removes l->l_ctxswtch.
- Move the LP_RUNNING flag into l->l_flag and rename to LW_RUNNING since it's now covered by the LWP's lock.
- Ditch lwp_exit_switchaway() and just call mi_switch() instead. Everything is in cache anyway so it wasn't buying much by trying to avoid saving old state. This means cpu_switchto() will never be called with prevlwp == NULL.
- Remove some KERNEL_LOCK handling which hasn't been needed for years.
|
| #
94843b13 |
| 31-Dec-2019 |
ad <ad@NetBSD.org> |
- Add and use wrapper functions that take and acquire page interlocks, and pairs of page interlocks. Require that the page interlock be held over calls to uvm_pageactivate(), uvm_pagewire() and similar.
- Solve the concurrency problem with page replacement state. Rather than updating the global state synchronously, set an intended state on individual pages (active, inactive, enqueued, dequeued) while holding the page interlock. After the interlock is released put the pages on a 128-entry per-CPU queue for their state changes to be made real in batch. This results in a ~400-fold decrease in contention on my test system. Proposed on tech-kern but modified to use the page interlock rather than atomics to synchronise as it's much easier to maintain that way, and cheaper.
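A hypothetical sketch of the per-CPU batching scheme described above; the type and function names are illustrative, not the real uvm_pdpolicy code:

    struct vm_page;                         /* opaque in this sketch */

    #define PDQ_NITEMS      128             /* per-CPU queue depth */

    struct pdq {
            struct vm_page  *pdq_pages[PDQ_NITEMS];
            unsigned int     pdq_count;
    };

    /* Applies the pages' intended state changes under the global lock (not shown). */
    void pdq_apply(struct pdq *);

    /* Called after the page interlock has been dropped. */
    void
    pdq_defer(struct pdq *pdq, struct vm_page *pg)
    {
            pdq->pdq_pages[pdq->pdq_count++] = pg;
            if (pdq->pdq_count == PDQ_NITEMS) {
                    pdq_apply(pdq);         /* one lock round trip per batch */
                    pdq->pdq_count = 0;
            }
    }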
|
| #
4477d28d |
| 06-Dec-2019 |
ad <ad@NetBSD.org> |
Make it possible to call mi_switch() and immediately switch to another CPU. This seems to take about 3us on my Intel system. Two changes required:
- Have the caller to mi_switch() be responsible for calling spc_lock().
- Avoid using l->l_cpu in mi_switch().
While here:
- Add a couple of calls to membar_enter()
- Have the idle LWP set itself to LSIDL, to match softint_thread().
- Remove unused return value from mi_switch().
|
| #
57eb66c6 |
| 01-Dec-2019 |
ad <ad@NetBSD.org> |
Fix false sharing problems with cpu_info. Identified with tprof(8). This was a very nice win in my tests on a 48 CPU box.
- Reorganise cpu_data slightly according to usage.
- Put cpu_onproc into struct cpu_info alongside ci_curlwp (now is ci_onproc).
- On x86, put some items in their own cache lines according to usage, like the IPI bitmask and ci_want_resched.
|
| #
11ba4e18 |
| 23-Nov-2019 |
ad <ad@NetBSD.org> |
Minor scheduler cleanup:
- Adapt to cpu_need_resched() changes. Avoid lost & duplicate IPIs and ASTs. sched_resched_cpu() and sched_resched_lwp() contain the logic for this.
- Changes for LSIDL to make the locking scheme match the intended design.
- Reduce lock contention and false sharing further.
- Numerous small bugfixes, including some corrections for SCHED_FIFO/RT.
- Use setrunnable() in more places, and merge cut & pasted code.
|
| #
f7666738 |
| 29-Jan-2012 |
rmind <rmind@NetBSD.org> |
- Add mi_cpu_init() and initialise cpu_lock and kcpuset_attached/running there.
- Add kcpuset_running which gets set in idle_loop().
- Use kcpuset_running in pserialize_perform().
|
| #
9d567f00 |
| 17-Jan-2011 |
uebayasi <uebayasi@NetBSD.org> |
Include internal definitions (uvm/uvm.h) only where necessary.
|
| #
0436400c |
| 19-Jul-2009 |
yamt <yamt@NetBSD.org> |
set LP_RUNNING when starting lwp0 and idle lwps. add assertions.
|
| #
5b4feac1 |
| 28-Jun-2009 |
ad <ad@NetBSD.org> |
idle_loop: explicitly go to spl0() to sidestep potential MD bugs.
|
| #
62c877a4 |
| 11-Jun-2008 |
ad <ad@NetBSD.org> |
Don't call uvm_pageidlezero() if the CPU is marked offline.
|
| #
cbbf514e |
| 04-Jun-2008 |
ad <ad@NetBSD.org> |
- vm_page: put listq, pageq into a union alongside a LIST_ENTRY, so we can use both types of list.
- Make page coloring and idle zero state per-CPU.
- Maintain per-CPU page freelists. When freeing, put pages onto the local CPU's lists and the global lists. When allocating, prefer to take pages from the local CPU. If none are available take from the global list as done now. Proposed on tech-kern@.
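A simplified sketch of the allocation preference described above; the "pgfl" type and queue layout here are hypothetical, not the actual uvm_page.c structures:

    #include <sys/param.h>
    #include <sys/queue.h>

    struct vm_page {
            TAILQ_ENTRY(vm_page) pageq;     /* simplified for this sketch */
    };

    struct pgfl {
            TAILQ_HEAD(, vm_page) pgfl_queue;
    };

    struct vm_page *
    pgfl_alloc(struct pgfl *local, struct pgfl *global)
    {
            struct vm_page *pg;

            /* Prefer pages freed by this CPU; they are likely cache-warm. */
            if ((pg = TAILQ_FIRST(&local->pgfl_queue)) != NULL) {
                    TAILQ_REMOVE(&local->pgfl_queue, pg, pageq);
                    return pg;
            }
            /* Otherwise fall back to the global list, as before. */
            if ((pg = TAILQ_FIRST(&global->pgfl_queue)) != NULL)
                    TAILQ_REMOVE(&global->pgfl_queue, pg, pageq);
            return pg;
    }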
|
| #
29170d38 |
| 29-May-2008 |
rmind <rmind@NetBSD.org> |
Simplification for running LWP migration. Removes double-locking in mi_switch(); migration for LSONPROC is now performed via the idle loop. Handles/fixes the on-CPU case in lwp_migrate(), misc.
Closes PR/38169, idea of migration via idle loop by Andrew Doran.
|
| #
81fa379a |
| 27-May-2008 |
ad <ad@NetBSD.org> |
PR kern/38707 scheduler related deadlock during build.sh
- Fix performance regression introduced by the workaround by making job stealing a lot simpler: if the local run queue is empty, let the CPU enter the idle loop. In the idle loop, try to steal a job from another CPU's run queue if we are idle. If we succeed, re-enter mi_switch() immediately to dispatch the job.
- When stealing jobs, consider a remote CPU to have one less job in its queue if it's currently in the idle loop. It will dispatch the job soon, so there's no point sloshing it about.
- Introduce a few event counters to monitor what's happening with the run queues.
- Revert the idle CPU bitmap change. It's pointless considering NUMA.
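An illustrative sketch of the stealing flow described above; local_runq_empty() and sched_steal() are hypothetical stand-ins, not the real kern_runq.c interfaces:

    #include <sys/param.h>
    #include <sys/cpu.h>
    #include <sys/lwp.h>

    /* Hypothetical helpers assumed for this sketch. */
    int  local_runq_empty(struct cpu_info *);
    int  sched_steal(struct cpu_info *);

    void
    idle_steal_sketch(struct cpu_info *ci)
    {
            while (local_runq_empty(ci)) {
                    if (sched_steal(ci)) {
                            /* Stole a job onto our run queue; dispatch it. */
                            lwp_lock(curlwp);
                            mi_switch(curlwp);
                            return;
                    }
                    cpu_idle();             /* nothing to steal; wait */
            }
    }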
|
| #
25866fbf |
| 24-May-2008 |
ad <ad@NetBSD.org> |
Set cpu_onproc on entry to the idle loop.
|
| #
0e18a546 |
| 26-Apr-2008 |
yamt <yamt@NetBSD.org> |
fix a comment.
|
| #
52c2e613 |
| 26-Apr-2008 |
yamt <yamt@NetBSD.org> |
idle_loop: unsigned -> uint32_t to be consistent with the rest of the code. no functional change.
|
| #
c2deaa26 |
| 24-Apr-2008 |
ad <ad@NetBSD.org> |
xc_broadcast: don't try to run cross calls on CPUs that are not yet running.
|
| #
61a0a960 |
| 04-Apr-2008 |
ad <ad@NetBSD.org> |
Maintain a bitmap of idle CPUs and add idle_pick() to find an idle CPU and remove it from the bitmap.
|
| #
d8788e7f |
| 10-Mar-2008 |
martin <martin@NetBSD.org> |
Use cpu index instead of the machine-dependent, not very expressive cpuid when naming user-visible kernel entities.
|