#
6b5aed99 |
| 25-Jan-2025 |
mpi <mpi@openbsd.org> |
Remove incorrect unlock in error path.
Reported & tested by daharmasterkor AT gmail.com on bugs@
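One common shape of this bug class, as a minimal sketch with invented names and a user-space lock (this is not the actual OpenBSD code): the error path released a lock that was not held on that path, so the unlock had to go.

    #include <pthread.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    int
    do_work(int fail_early)
    {
            if (fail_early) {
                    /* BUG (removed): the lock is not held here, so
                     * unlocking it corrupts the lock state. */
                    /* pthread_mutex_unlock(&lock); */
                    return -1;
            }
            pthread_mutex_lock(&lock);
            /* ... critical section ... */
            pthread_mutex_unlock(&lock);
            return 0;
    }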
|
#
0b4f1452 |
| 27-Dec-2024 |
mpi <mpi@openbsd.org> |
Move pmap_page_protect(PROT_NONE) call inside uvm_pagedeactivate().
ok tb@, kettenis@
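A sketch of the shape of this change, heavily simplified from the real UVM code (stub declarations added so the fragment stands alone): deactivation itself now revokes all mappings, so a later access faults and can re-activate the page.

    struct vm_page;
    #define PROT_NONE 0x00
    void pmap_page_protect(struct vm_page *, int);  /* simplified stub */

    void
    uvm_pagedeactivate(struct vm_page *pg)
    {
            /* Moved inside by this commit: callers no longer call
             * pmap_page_protect(PROT_NONE) themselves first. */
            pmap_page_protect(pg, PROT_NONE);

            /* ... unlink pg from the active list, append it to the
             * inactive list, update the uvmexp counters ... */
    }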
|
#
f3e3a779 |
| 25-Nov-2024 |
mpi <mpi@openbsd.org> |
Push the KERNEL_LOCK() down in the aiodone_daemon().
Improve responsiveness during swapping for MP machines without bouncing when the page daemon is busy writing a lot of clusters without releasing the KERNEL_LOCK() and without allocating.
This currently requires vm.swapencrypt.enable=0 and a dma_constraint covering the whole address range.
Tested by sthen@ and miod@. ok claudio@, tb@
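A hedged sketch of the push-down; the queue helpers are invented and the real daemon is more involved. Only completions that still need the big lock take it; the rest run unlocked.

    struct buf;
    void    KERNEL_LOCK(void);      /* real primitives, stubbed here */
    void    KERNEL_UNLOCK(void);
    void    biodone(struct buf *);
    struct buf *dequeue_finished_aio(void);        /* invented */
    int     aio_needs_kernel_lock(struct buf *);   /* invented */
    void    wait_for_more_aio(void);               /* invented */

    void
    uvm_aiodone_daemon(void)
    {
            struct buf *bp;

            for (;;) {
                    while ((bp = dequeue_finished_aio()) != NULL) {
                            if (aio_needs_kernel_lock(bp)) {
                                    KERNEL_LOCK();
                                    biodone(bp);
                                    KERNEL_UNLOCK();
                            } else {
                                    biodone(bp);    /* now unlocked */
                            }
                    }
                    wait_for_more_aio();
            }
    }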
|
#
1df50bec |
| 25-Nov-2024 |
mpi <mpi@openbsd.org> |
Account for in-flight pages being written to disk when computing the page shortage.
Due to its asynchronous design, on MP machines the page daemon was generally over-swapping memory, resulting in degenerative behavior when OOM. To prevent swapping more pages than necessary, take the number of in-flight pages into account when calculating the page shortage.
Tested by sthen@ and miod@. ok claudio@, tb@
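The accounting change reduces to simple arithmetic; a minimal model with invented names (the real code works with uvmexp counters):

    /*
     * Pages already queued for pageout will become free once their
     * I/O completes, so they count against the shortage.
     */
    int
    page_shortage(int freetarg, int free, int inflight)
    {
            int shortage = freetarg - (free + inflight);

            return (shortage > 0) ? shortage : 0;
    }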
|
#
2da9c48e |
| 25-Nov-2024 |
mpi <mpi@openbsd.org> |
Do not retry with a single page if paging out a cluster didn't work.
Allocations that might fail in uvm_swap_io() are independent of the number of pages, so retrying with fewer pages is counterproductive.
Tested by sthen@, ok tb@
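A sketch of the removed fallback, with a simplified uvm_swap_io() signature (the real one also takes a start slot and flags):

    struct vm_page;
    int uvm_swap_io(struct vm_page **, int);       /* simplified */

    int
    pageout_cluster(struct vm_page **pgs, int npages)
    {
            if (uvm_swap_io(pgs, npages) == 0)
                    return 0;
            /* Removed by this commit: the allocations that can fail
             * inside uvm_swap_io() do not depend on npages, so a
             * single-page retry fails the same way. */
            /* return uvm_swap_io(pgs, 1); */
            return -1;
    }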
|
#
ab22dc52 |
| 07-Nov-2024 |
mpi <mpi@openbsd.org> |
Free Oxford commas for most of the comments.
Requested by miod@
|
#
767e8a65 |
| 07-Nov-2024 |
mpi <mpi@openbsd.org> |
Do not try to release memory if all we need is balancing the page lists.
ok miod@
|
#
a125353d |
| 07-Nov-2024 |
mpi <mpi@openbsd.org> |
Optimize active & inactive list traversals when looking only for low pages.
- In the inactive list, if we are not OOM, do not bother releasing high pages.
- In the active list, if we couldn't release enough low pages and are not out-of-swap, deactivate only low pages.
ok miod@
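The two filters can be modeled as skip predicates; page_is_low() is an invented stand-in for the memory-range check:

    struct vm_page;
    int page_is_low(struct vm_page *);     /* invented range check */

    /* Inactive scan: when not OOM, high pages are not worth freeing. */
    int
    inactive_skip(struct vm_page *pg, int oom)
    {
            return (!oom && !page_is_low(pg));
    }

    /* Active scan: while swap remains, deactivate only low pages. */
    int
    active_skip(struct vm_page *pg, int low_shortage, int out_of_swap)
    {
            return (low_shortage > 0 && !out_of_swap && !page_is_low(pg));
    }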
|
#
ba03bb80 |
| 07-Nov-2024 |
mpi <mpi@openbsd.org> |
Use a static request to notify failed nowait allocations.
As a side effect, the page daemon now considers releasing inactive pages when a nowait allocation for low pages failed.
Note that the hardcoded number of 16 pages (a 64K cluster on 4K archs), which corresponds to what the buffer cache currently wants, is left with the original XXX.
ok miod@
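A hedged sketch of the static request; every structure, field, and flag name below is an assumption modeled on the commit message, not verified against uvm_pmemrange.h:

    struct uvm_constraint_range { unsigned long ucr_low, ucr_high; };
    #define UVM_PMA_FAIL 0x10              /* assumed flag */
    struct uvm_pmalloc {                   /* assumed layout */
            struct uvm_constraint_range pm_constraint;
            int pm_flags;
    };
    void wakeup(void *);                   /* kernel API, stubbed */

    static struct uvm_pmalloc nowait_pma;  /* the one static request */
    static char pagedaemon_chan;           /* invented wait channel */

    void
    uvm_nowait_failed(unsigned long low, unsigned long high)
    {
            nowait_pma.pm_constraint.ucr_low = low;
            nowait_pma.pm_constraint.ucr_high = high;
            nowait_pma.pm_flags = UVM_PMA_FAIL;
            wakeup(&pagedaemon_chan);      /* poke the page daemon */
    }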
|
#
ab9ceab3 |
| 07-Nov-2024 |
mpi <mpi@openbsd.org> |
Remove redundant `constraint' argument to uvmpd_scan() & friends.
Currently, when there is no request waiting for low pages, the constraint used corresponds to the full address range. So use the constraint attached to the memory request.
This also speeds up the active/inactive list lookups by skipping the constraint check if the request has already been fulfilled.
ok miod@
|
#
d7bddd8c |
| 07-Nov-2024 |
mpi <mpi@openbsd.org> |
Introduce a helper to check if memory has been freed for a given request.
Using this helper, we stop scanning the active/inactive lists if the shrinkers already released enough pages to fulfill the allocation.
This speeds up the page daemon loop by avoiding useless lookups to find the first managed page belonging to the low memory range before aborting.
ok miod@
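A sketch of what such a helper looks like; the flag and field names are assumptions modeled on uvm_pmemrange's request states:

    #define UVM_PMA_FREED 0x08             /* assumed flag */
    struct uvm_pmalloc { int pm_flags; };  /* assumed layout */

    static inline int
    uvmpd_pma_done(struct uvm_pmalloc *pma)
    {
            /* No pending request, or enough pages already freed. */
            return (pma == NULL || (pma->pm_flags & UVM_PMA_FREED));
    }

The scans can then bail out early, before walking the lists at all.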
|
#
03c39359 |
| 07-Nov-2024 |
mpi <mpi@openbsd.org> |
Introduce a helper to check if a page belongs to a given memory range.
No functional change.
ok miod@
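A sketch of such a range check (the helper name is invented; VM_PAGE_TO_PHYS is the real accessor, stubbed here):

    typedef unsigned long paddr_t;  /* stand-in for the kernel type */
    struct vm_page;
    paddr_t VM_PAGE_TO_PHYS(struct vm_page *);
    struct uvm_constraint_range { paddr_t ucr_low, ucr_high; };

    static inline int
    page_in_range(struct vm_page *pg, struct uvm_constraint_range *c)
    {
            paddr_t pa = VM_PAGE_TO_PHYS(pg);

            return (pa >= c->ucr_low && pa < c->ucr_high);
    }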
|
#
786a9acf |
| 06-Nov-2024 |
mpi <mpi@openbsd.org> |
Update `shortage' based on the number of pages freed by the shrinkers.
ok miod@, kettenis@
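Together with the bufbackoff() change below, the loop can credit the shrinkers' work directly; a minimal sketch with a simplified signature:

    struct uvm_constraint_range;
    long bufbackoff(struct uvm_constraint_range *, long);  /* pages freed */

    long
    apply_shrinker(struct uvm_constraint_range *constraint, long shortage)
    {
            long freed = bufbackoff(constraint, shortage);

            if (freed > 0)
                    shortage -= freed;     /* credit pages freed */
            return shortage;               /* <= 0 means we are done */
    }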
|
#
0b4f309d |
| 05-Nov-2024 |
mpi <mpi@openbsd.org> |
Return the number of freed pages in bufbackoff().
Reviewed by miod@, ok tb@, beck@
|
#
4783fe62 |
| 05-Nov-2024 |
mpi <mpi@openbsd.org> |
Use computed `shortage' value instead of `uvmexp.free' in uvmpd_scan_inactive().
ok miod@
|
#
4e368fae |
| 03-Nov-2024 |
mpi <mpi@openbsd.org> |
Introduce a `shortage' variable to reduce accesses to `uvmexp.free' & friends.
ok miod@
|
#
779ee49f |
| 02-Nov-2024 |
mpi <mpi@openbsd.org> |
Compute inactive target only once per iteration.
Reduce accesses to global counters.
ok jsg@
|
#
c1e5f9e3 |
| 02-Oct-2024 |
mpi <mpi@openbsd.org> |
Modify uvmpd_scan_inactive() to access `uvmexp.pdfreed' only once.
ok kettenis@
|
#
7d378e62 |
| 02-Oct-2024 |
mpi <mpi@openbsd.org> |
Improve responsiveness in OOM situations & make free target checks coherent.
Remove a change introduced in NetBSD to page out 4 times as many pages as required to meet the low water mark of free pages. With today's GBs of RAM, it makes the page daemon hog the CPU for too long when the amount of free pages is close to none.
ok sthen@, kettenis@
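The removed heuristic amounts to scaling the pageout target; a minimal model of the before/after arithmetic (names invented):

    int
    pageout_target(int freetarg, int free)
    {
            if (free >= freetarg)
                    return 0;
            /* Before (inherited from NetBSD): 4x the gap. */
            /* return (freetarg - free) * 4; */
            /* After: aim exactly for the free target. */
            return freetarg - free;
    }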
|
#
789ce988 |
| 30-Sep-2024 |
mpi <mpi@openbsd.org> |
Return the number of freed pages and handle SHRINK_STOP in drmbackoff().
ok jsg@
|
#
82673a18 |
| 01-May-2024 |
mpi <mpi@openbsd.org> |
Add per-CPU caches to the pmemrange allocator.
The caches are used primarily to reduce contention on uvm_lock_fpageq() during concurrent page faults. For the moment only uvm_pagealloc() tries to get a page from the current CPU's cache. So on some architectures the caches are also used by the pmap layer.
Each cache is composed of two magazines; the design is borrowed from Jeff Bonwick's vmem paper and the implementation is similar to the one of pool_cache from dlg@. However, there is no depot layer and magazines are refilled directly by the pmemrange allocator.
This version includes splvm()/splx() dances because the buffer cache flips buffers in interrupt context, so we have to prevent recursive accesses to the per-CPU magazines.
Tested by naddy@, solene@, krw@, robert@, claudio@ and Laurence Tratt.
ok claudio@, kettenis@
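A compact sketch of the allocation path of such a two-magazine per-CPU cache; all names and the capacity are invented, and everything is heavily simplified from the description above:

    struct vm_page;
    int splvm(void);               /* real spl API, stubbed here */
    void splx(int);

    #define MAGAZINE_SIZE 16       /* invented capacity */

    struct pma_magazine {
            int              m_count;
            struct vm_page  *m_pages[MAGAZINE_SIZE];
    };

    struct pma_cache {
            struct pma_magazine *c_current; /* allocate from this one */
            struct pma_magazine *c_prev;    /* swapped in when empty */
    };

    struct vm_page *
    pma_cache_get(struct pma_cache *c)
    {
            struct vm_page *pg = NULL;
            struct pma_magazine *tmp;
            int s;

            /* The buffer cache flips buffers in interrupt context,
             * so block interrupts to avoid recursive magazine use. */
            s = splvm();
            if (c->c_current->m_count == 0 && c->c_prev->m_count > 0) {
                    /* Swap magazines instead of hitting the allocator. */
                    tmp = c->c_current;
                    c->c_current = c->c_prev;
                    c->c_prev = tmp;
            }
            if (c->c_current->m_count > 0)
                    pg = c->c_current->m_pages[--c->c_current->m_count];
            splx(s);

            /* On NULL the caller refills straight from the pmemrange
             * allocator: there is no depot layer. */
            return pg;
    }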
|
#
097a266d |
| 19-Apr-2024 |
mpi <mpi@openbsd.org> |
Revert per-CPU caches; a double-free has been found by naddy@.
|
#
52feabc5 |
| 17-Apr-2024 |
mpi <mpi@openbsd.org> |
Add per-CPU caches to the pmemrange allocator.
The caches are used primarily to reduce contention on uvm_lock_fpageq() during concurrent page faults. For the moment only uvm_pagealloc() tries to get a page from the current CPU's cache. So on some architectures the caches are also used by the pmap layer.
Each cache is composed of two magazines; the design is borrowed from Jeff Bonwick's vmem paper and the implementation is similar to the one of pool_cache from dlg@. However, there is no depot layer and magazines are refilled directly by the pmemrange allocator.
Tested by robert@, claudio@ and Laurence Tratt.
ok kettenis@
|
#
5a3e8fe8 |
| 10-Apr-2024 |
mpi <mpi@openbsd.org> |
Use uvmpd_dropswap() in the case of swap shortage.
ok kn@, kettenis@, miod@
|
#
a853522e |
| 24-Mar-2024 |
mpi <mpi@openbsd.org> |
Clean up uvmpd_tune() & document global variable ownership.
- Stop calling uvmpd_tune() inside uvm_pageout(). OpenBSD currently doesn't support adding RAM; `uvmexp.npages' is immutable after boot.
- Document that `uvmexp.freemin' and `uvmexp.freetarg' are immutable.
- Reduce the scope of the `uvm_pageq_lock' lock. It serializes accesses to `uvmexp.active' and `uvmexp.inactive'.
ok kettenis@
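Ownership documentation of this kind usually takes the form of locking annotations; a sketch in the convention OpenBSD uses elsewhere (the letters and the exact comments here are illustrative):

    /*
     * Locks used to protect the variables below:
     *      I       immutable after boot
     *      Q       uvm_pageq_lock
     */
    /*
     *      uvmexp.npages   [I]     set once during boot
     *      uvmexp.freemin  [I]
     *      uvmexp.freetarg [I]
     *      uvmexp.active   [Q]
     *      uvmexp.inactive [Q]
     */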
|