#
06ddeb9f |
| 12-Apr-2022 |
andvar <andvar@NetBSD.org> |
s/stablize/stabilize/
|
#
ba90a6ba |
| 11-Jun-2020 |
ad <ad@NetBSD.org> |
Counter tweaks:
- Don't need to count anonpages+filepages any more; clean+unknown+dirty for each kind of page can be summed to get the totals.
- Track the number of free pages with a counter so that it's one less thing for the allocator to do, which opens up further options there.
- Remove cpu_count_sync_one(). It has no users and doesn't save a whole lot. For the cheap option, give cpu_count_sync() a boolean parameter indicating that a cached value is okay, and rate limit the updates for cached values to hz.
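The cached-vs-fresh counter scheme described above can be sketched as a small user-space model. The names cpu_count_sync() and the hz rate limit follow the commit message; the counter layout, CPU count, and tick plumbing are illustrative assumptions, not the NetBSD implementation:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define NCPU 4

enum { CNT_CLEAN, CNT_UNKNOWN, CNT_DIRTY, CNT_MAX };

static int64_t pcpu_counts[NCPU][CNT_MAX]; /* per-CPU deltas */
static int64_t total_counts[CNT_MAX];      /* cached global totals */
static uint64_t last_sync_tick;
static uint64_t now_tick;                  /* stands in for getticks() */
static const uint64_t hz = 100;

/*
 * Sum the per-CPU deltas into the global totals.  If 'cached' is true
 * and a sync already happened within the last hz ticks, keep the stale
 * totals instead of touching every CPU's counters again.
 */
static void
cpu_count_sync(bool cached)
{
	if (cached && now_tick - last_sync_tick < hz)
		return;
	for (int c = 0; c < CNT_MAX; c++) {
		int64_t sum = 0;
		for (int cpu = 0; cpu < NCPU; cpu++)
			sum += pcpu_counts[cpu][c];
		total_counts[c] = sum;
	}
	last_sync_tick = now_tick;
}

/* Per the commit: totals are just clean + unknown + dirty summed. */
static int64_t
count_total_pages(void)
{
	return total_counts[CNT_CLEAN] + total_counts[CNT_UNKNOWN] +
	    total_counts[CNT_DIRTY];
}
```

The point of the boolean is visible in the rate limit: a caller that tolerates a slightly stale value pays nothing on the fast path, while a caller that needs exact numbers forces the cross-CPU summation.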
|
#
4b8a875a |
| 11-Jun-2020 |
ad <ad@NetBSD.org> |
uvm_availmem(): give it a boolean argument to specify whether a recent cached value will do, or if the very latest total must be fetched. It can be called thousands of times a second and fetching the totals impacts not only the calling LWP but other CPUs doing unrelated activity in the VM system.
|
#
ff872804 |
| 17-May-2020 |
ad <ad@NetBSD.org> |
Start trying to reduce cache misses on vm_page during fault processing.
- Make PGO_LOCKED getpages imply PGO_NOBUSY and remove the latter. Mark pages busy only when there's actually I/O to do.
- When doing COW on a uvm_object, don't mess with neighbouring pages. In all likelihood they're already entered.
- Don't mess with neighbouring VAs that have existing mappings, as replacing those mappings with the same can be quite costly.
- Don't enqueue pages for neighbour faults if they're already enqueued, and don't activate centre pages unless uvmpdpol says it's useful.
Also:
- Make PGO_LOCKED getpages on UAOs work more like vnodes: do gang lookup in the radix tree, and don't allocate new pages.
- Fix many assertion failures around faults/loans with tmpfs.
|
#
fd2e91e6 |
| 02-Apr-2020 |
maxv <maxv@NetBSD.org> |
Hide 'hardclock_ticks' behind a new getticks() function, and use relaxed atomics internally. Only one caller is converted for now.
Discussed with riastradh@ and ad@.
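The getticks() wrapper above can be modelled with C11 atomics: the counter is bumped from the (here simulated) clock interrupt and readers go through an accessor that does a single relaxed load. This is an illustrative sketch, not the NetBSD code, which uses the kernel's own atomic primitives:

```c
#include <assert.h>
#include <stdatomic.h>

/* The global that used to be read directly is now private. */
static _Atomic int hardclock_ticks;

/* Called from the (simulated) clock interrupt. */
static void
hardclock_tick(void)
{
	atomic_fetch_add_explicit(&hardclock_ticks, 1,
	    memory_order_relaxed);
}

/*
 * Accessor hiding the variable: callers get a relaxed atomic load,
 * which is safe for a monotonic tick count that needs no ordering
 * guarantees relative to other memory.
 */
static int
getticks(void)
{
	return atomic_load_explicit(&hardclock_ticks,
	    memory_order_relaxed);
}
```

Relaxed ordering is enough here because the tick value is used on its own (timeouts, rate limits), not to synchronise access to other data.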
|
#
231cabb5 |
| 14-Mar-2020 |
ad <ad@NetBSD.org> |
uvm_pdpolicy: Require a write lock on the object only for dequeue. No sense in requiring that for enqueue/activate/deactivate.
|
#
9d385320 |
| 08-Mar-2020 |
ad <ad@NetBSD.org> |
Don't zap the non-pdpolicy bits in pg->pqflags.
|
#
d2a0ebb6 |
| 23-Feb-2020 |
ad <ad@NetBSD.org> |
UVM locking changes, proposed on tech-kern:
- Change the lock on uvm_object, vm_amap and vm_anon to be a RW lock.
- Break v_interlock and vmobjlock apart. v_interlock remains a mutex.
- Do partial PV list locking in the x86 pmap. Others to follow later.
|
#
da84a45c |
| 30-Jan-2020 |
ad <ad@NetBSD.org> |
uvmpdpol_estimatepageable(): Don't take any locks here. This can be called from DDB, and in any case the numbers are stale the instant the lock is dropped, so it just doesn't matter.
|
#
090ebf9c |
| 21-Jan-2020 |
ad <ad@NetBSD.org> |
uvmpdpol_pageactive(): the change to not re-activate recently activated pages worked great with uvm_pageqlock, but it doesn't buy anything any more, because now the busy pages are likely in a per-CPU queue somewhere waiting to be processed, and changing the intent on those queued pages costs next to nothing. Remove this and get back all the bits in pg->pqflags.
|
#
8764f427 |
| 01-Jan-2020 |
ad <ad@NetBSD.org> |
Fix a comment.
|
#
c3c98c15 |
| 01-Jan-2020 |
mlelstv <mlelstv@NetBSD.org> |
Explicitly include sys/atomic.h for atomic operations.
|
#
94843b13 |
| 31-Dec-2019 |
ad <ad@NetBSD.org> |
- Add and use wrapper functions that take and acquire page interlocks, and pairs of page interlocks. Require that the page interlock be held over calls to uvm_pageactivate(), uvm_pagewire() and similar.
- Solve the concurrency problem with page replacement state. Rather than updating the global state synchronously, set an intended state on individual pages (active, inactive, enqueued, dequeued) while holding the page interlock. After the interlock is released put the pages on a 128 entry per-CPU queue for their state changes to be made real in batch. This results in a ~400-fold decrease in contention on my test system. Proposed on tech-kern but modified to use the page interlock rather than atomics to synchronise as it's much easier to maintain that way, and cheaper.
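The deferred-state scheme above can be modelled in miniature: record an intent on the page, stash the page in a small per-CPU buffer, and apply the intents to the global queues in one batch when the buffer fills or is drained. The 128-entry size comes from the commit message; the struct layout and names are illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>

#define PCPU_QLEN 128

enum intent { INT_NONE, INT_ACTIVE, INT_INACTIVE };

struct page {
	enum intent pq_intent;  /* set under the page interlock */
	int realized;           /* 1 once applied to global queues */
};

struct pcpu_queue {
	struct page *pages[PCPU_QLEN];
	size_t n;
};

/*
 * Drain the buffer: realise each page's recorded intent against the
 * (here simulated) global replacement queues in one batch, taking the
 * contended global lock once instead of once per page.
 */
static void
pdpol_flush(struct pcpu_queue *q)
{
	for (size_t i = 0; i < q->n; i++)
		q->pages[i]->realized = 1;
	q->n = 0;
}

/*
 * Record the intended state and defer the real queue manipulation.
 * Changing the intent of an already-queued page is just a store.
 */
static void
pdpol_setintent(struct pcpu_queue *q, struct page *pg, enum intent it)
{
	pg->pq_intent = it;
	if (q->n == PCPU_QLEN)
		pdpol_flush(q);
	q->pages[q->n++] = pg;
}
```

The contention win is structural: the hot path touches only per-CPU state, and the shared queues are touched once per 128 pages rather than once per state change.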
|
#
5c06357c |
| 31-Dec-2019 |
ad <ad@NetBSD.org> |
Rename uvm_free() -> uvm_availmem().
|
#
b78a6618 |
| 31-Dec-2019 |
ad <ad@NetBSD.org> |
Rename uvm_page_locked_p() -> uvm_page_owner_locked_p()
|
#
87f077e0 |
| 30-Dec-2019 |
ad <ad@NetBSD.org> |
Whitespace.
|
#
9344a595 |
| 30-Dec-2019 |
ad <ad@NetBSD.org> |
pagedaemon:
- Use marker pages to keep their place in the queue when scanning, rather than relying on assumptions.
- In uvmpdpol_balancequeue(), lock the object once instead of twice.
- When draining pools, the situation is getting desperate, but try to avoid saturating the system with xcall, lock and interrupt activity by sleeping for 1 clock tick if being continually awoken and all pools have been cycled through at least once.
- Pause & resume the freelist cache during pool draining.
PR kern/54209: NetBSD 8 large memory performance extremely low
PR kern/54210: NetBSD-8 processes presumably not exiting
PR kern/54727: writing a large file causes unreasonable system behaviour
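The marker-page technique from the first bullet can be sketched with the queue(3) TAILQ macros: the scanner keeps a dummy marker entry in the list at its current position, so if the queue lock is dropped and other pages are inserted or removed, the scan can resume from the marker instead of relying on stale pointers. A minimal illustrative model, not the pagedaemon code:

```c
#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>  /* TAILQ macros (BSD and glibc) */

struct page {
	int is_marker;
	int scanned;
	TAILQ_ENTRY(page) pageq;
};

TAILQ_HEAD(pagelist, page);

/*
 * Walk the queue behind a marker.  After each step the marker sits
 * just past the page being processed; a concurrent scanner dropping
 * the lock here would still find its place via the marker.
 */
static void
scan_queue(struct pagelist *pl, struct page *marker)
{
	struct page *pg;

	TAILQ_INSERT_HEAD(pl, marker, pageq);
	while ((pg = TAILQ_NEXT(marker, pageq)) != NULL) {
		/* advance the marker past pg before "processing" it */
		TAILQ_REMOVE(pl, marker, pageq);
		TAILQ_INSERT_AFTER(pl, pg, marker, pageq);
		if (!pg->is_marker)
			pg->scanned = 1;
	}
	TAILQ_REMOVE(pl, marker, pageq);
}
```

Real scanners must also skip other scanners' markers, which is why the marker flag is tested before processing an entry.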
|
#
6c2dc768 |
| 27-Dec-2019 |
ad <ad@NetBSD.org> |
vm_page: Now that listq is gone, give the pagedaemon its own private TAILQ_ENTRY, so that update of page replacement state can be made asynchronous/lazy. No functional change.
|
#
f90ab610 |
| 23-Dec-2019 |
ad <ad@NetBSD.org> |
uvmpdpol_selectvictim: don't assert wire_count == 0, as we can (safely) race with the object owner, and wired pages can very briefly appear on the queue.
|
#
ddd3a0be |
| 21-Dec-2019 |
ad <ad@NetBSD.org> |
uvmexp.free -> uvm_free()
|
#
a98966d3 |
| 16-Dec-2019 |
ad <ad@NetBSD.org> |
- Extend the per-CPU counters matt@ did to include all of the hot counters in UVM, excluding uvmexp.free, which needs special treatment and will be done with a separate commit. Cuts system time for a build by 20-25% on a 48 CPU machine w/DIAGNOSTIC.
- Avoid 64-bit integer divide on every fault (for rnd_add_uint32).
|
#
bc7137b6 |
| 16-Dec-2019 |
ad <ad@NetBSD.org> |
Use the high bits of pqflags for PQ_TIME, not low.
|
#
5978ddc6 |
| 13-Dec-2019 |
ad <ad@NetBSD.org> |
Break the global uvm_pageqlock into a per-page identity lock and a private lock for use of the pagedaemon policy code. Discussed on tech-kern.
PR kern/54209: NetBSD 8 large memory performance extremely low
PR kern/54210: NetBSD-8 processes presumably not exiting
PR kern/54727: writing a large file causes unreasonable system behaviour
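One plausible shape for a per-page identity lock that avoids embedding a full lock in every struct vm_page is a fixed pool of locks indexed by a hash of the page's address. This is a hypothetical sketch of that general technique using POSIX mutexes; the commit does not say this is how NetBSD implements it:

```c
#include <assert.h>
#include <pthread.h>
#include <stdint.h>

#define PAGE_LOCK_COUNT 64   /* must be a power of two */

static pthread_mutex_t page_locks[PAGE_LOCK_COUNT];

static void
page_locks_init(void)
{
	for (int i = 0; i < PAGE_LOCK_COUNT; i++)
		pthread_mutex_init(&page_locks[i], NULL);
}

/*
 * Map a page to its identity lock.  The low bits of the address are
 * dropped (pages are aligned, so they carry no entropy) and the rest
 * selects a slot.  The same page always maps to the same lock, so the
 * lock serves as that page's identity lock; distinct pages may share
 * a slot, which is safe, just occasionally more contended.
 */
static pthread_mutex_t *
page_lock_for(const void *pg)
{
	uintptr_t h = (uintptr_t)pg >> 6;
	return &page_locks[h & (PAGE_LOCK_COUNT - 1)];
}
```

Compared with the old single uvm_pageqlock, unrelated pages now usually hash to different locks, so contention spreads across the pool.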
|
#
4db6dbc1 |
| 30-Jan-2012 |
para <para@NetBSD.org> |
Removed code from uvmpdpol_needsscan_p that got there by mistake; pointed out by yamt@.
|
#
bc9403f1 |
| 28-Jan-2012 |
rmind <rmind@NetBSD.org> |
pool_page_alloc, pool_page_alloc_meta: avoid extra compare, use const.
ffs_mountfs, sys_swapctl: replace memset with kmem_zalloc.
sys_swapctl: move kmem_free outside the lock path.
uvm_init: fix comment, remove pointless enumeration of steps.
uvm_map_enter: remove meflagval variable.
Fix some indentation.
|