#
bec77b8e |
| 04-Jan-2025 |
mvs <mvs@openbsd.org> |
Unlock sysctl_dopool().
sysctl_dopool() only delivers pool(9) statistics; moreover, it already relies on pool(9)-related locks, so it is mp-safe as is. It relies on the `pool_lock' rwlock(9) to make the `pp' pool pointer dereference safe, so copyout()s, M_WAITOK malloc()s and yield() calls happen locked too. Introduce a `pr_refcnt' reference counter to make them lockless.
ok dlg
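For illustration, the shape of the pattern described above, with refcnt(9) standing in for the actual counter handling (names beyond pr_refcnt and pool_lock are assumed):

    /* pin the pool under `pool_lock', then walk it unlocked */
    rw_enter_read(&pool_lock);
    refcnt_take(&pp->pr_refcnt);            /* keep pp alive */
    rw_exit_read(&pool_lock);

    error = copyout(&pi, oldp, sizeof(pi)); /* may sleep safely now */

    refcnt_rele_wake(&pp->pr_refcnt);       /* drop the reference */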
|
#
0d280c5f |
| 14-Aug-2022 |
jsg <jsg@openbsd.org> |
remove unneeded includes in sys/kern
ok mpi@ miod@
|
#
41d7544a |
| 20-Jan-2022 |
bluhm <bluhm@openbsd.org> |
Shifting signed integers left by 31 is undefined behavior in C.
found by kubsan; joint work with tobhe@; OK miod@
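For illustration, the class of fix this implies (not the exact diff): make the shifted value unsigned so bit 31 can be set without signed overflow.

    uint32_t x;

    x = 1 << 31;        /* undefined: 1 is a signed int */
    x = 1U << 31;       /* defined: unsigned shift */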
|
#
4ea72498 |
| 15-Jun-2021 |
dlg <dlg@openbsd.org> |
factor out nsecuptime and getnsecuptime.
these functions were implemented in a bunch of places, with comments saying they should be moved to kern_tc.c once more popped up, and i was about to add another one. i think it's time to move them to kern_tc.c.
ok cheloha@ jmatthew@
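The two helpers are thin wrappers over the timecounter API; roughly (modulo details of the committed version):

    uint64_t
    nsecuptime(void)
    {
            struct timespec now;

            nanouptime(&now);               /* high-res uptime */
            return TIMESPEC_TO_NSEC(&now);
    }

    uint64_t
    getnsecuptime(void)
    {
            struct timespec now;

            getnanouptime(&now);            /* low-res, cheaper */
            return TIMESPEC_TO_NSEC(&now);
    }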
|
#
678831be |
| 10-Mar-2021 |
jsg <jsg@openbsd.org> |
spelling
ok gnezdo@ semarie@ mpi@
|
#
da4391b3 |
| 06-Jan-2021 |
claudio <claudio@openbsd.org> |
Add dt(4) TRACEPOINTs for pool_get() and pool_put(); this is similar to the ones added to malloc() and free(). Pass the struct pool pointer as argv1 since it is currently not possible to pass the pool name to btrace. OK mpi@
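A sketch of what the probes look like at the call sites; the argument lists here are assumed to mirror the malloc()/free() tracepoints rather than quote the committed diff:

    /* in pool_get(), once the item is allocated */
    TRACEPOINT(uvm, pool_get, pp, v, flags);

    /* in pool_put(), before the item is returned */
    TRACEPOINT(uvm, pool_put, pp, v);

With dt(4) enabled, btrace(8) can then attach to these probes and aggregate on the pool pointer.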
|
#
cf76921f |
| 02-Jan-2021 |
cheloha <cheloha@openbsd.org> |
pool(9): remove ticks
Change the pool(9) timeouts to use the system uptime instead of ticks.
- Change the timeouts from variables to macros so we can use SEC_TO_NSEC(). This means these timeouts are no longer patchable via ddb(4). dlg@ does not think this will be a problem, as the timeout intervals have not changed in years.
- Use low-res time to keep things fast. Add a local copy of getnsecuptime() to subr_pool.c to keep the diff small. We will need to move getnsecuptime() into kern_tc.c and document it later if we ever have other users elsewhere in the kernel.
- Rename ph_tick -> ph_timestamp and pr_cache_tick -> pr_cache_timestamp.
Prompted by tedu@ some time ago, but the effort stalled (may have been my fault). Input from kettenis@ and dlg@.
Special thanks to mpi@ for help with struct shuffling. This change does not increase the size of struct pool_page_header or struct pool.
ok dlg@ mpi@
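The shape of the change, sketched; the one- and eight-second intervals are the historical values and the comparison site is illustrative:

    /* timeouts are now compile-time nanosecond constants */
    #define POOL_WAIT_FREE  SEC_TO_NSEC(1)  /* was a patchable tick count */
    #define POOL_WAIT_GC    SEC_TO_NSEC(8)

    /* e.g. deciding whether an empty page has idled long enough */
    if (getnsecuptime() - ph->ph_timestamp > POOL_WAIT_FREE)
            pool_p_remove(pp, ph);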
|
#
cbef7e11 |
| 24-Jan-2020 |
cheloha <cheloha@openbsd.org> |
pool(9): replace custom TAILQ concatenation loops with TAILQ_CONCAT(3)
TAILQ_CONCAT(3) apparently wasn't in-tree when this code was written. Using it leaves us with less code *and* better performance.
ok tedu@
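The pattern in miniature (list and field names illustrative):

    /* before: hand-rolled, O(n) */
    while ((ph = TAILQ_FIRST(&src)) != NULL) {
            TAILQ_REMOVE(&src, ph, ph_entry);
            TAILQ_INSERT_TAIL(&dst, ph, ph_entry);
    }

    /* after: O(1), and src ends up empty either way */
    TAILQ_CONCAT(&dst, &src, ph_entry);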
|
#
071627cc |
| 23-Jan-2020 |
cheloha <cheloha@openbsd.org> |
pool(9): pl_sleep(): drop unused timeout argument
All sleeps have been indefinite since introduction of this interface ~5 years ago, so remove the timeout argument and make indefinite sleeps implicit.
While here: *sleep(9) -> *sleep_nsec(9)
"i don't think we're going to use timeouts [here]" tedu@, ok mpi@
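The caller-side conversion looks like this; the wait channel, mutex and wmesg names are illustrative, INFSLP is the real "no timeout" constant:

    /* before: ticks, with 0 meaning sleep forever */
    msleep(&pp->pr_requests, &pp->pr_requests_mtx, PSWP, "poolget", 0);

    /* after: the indefinite sleep is explicit */
    msleep_nsec(&pp->pr_requests, &pp->pr_requests_mtx, PSWP, "poolget",
        INFSLP);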
|
#
4286c7cf |
| 19-Jul-2019 |
bluhm <bluhm@openbsd.org> |
After the kernel has reached the sysctl kern.maxclusters limit, operations get stuck while holding the net lock. Increasing the limit did not help, as there was no wakeup of the waiting pools. So introduce pool_wakeup() and run through the mbuf pool request list when the limit changes. OK dlg@ visa@
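A sketch of the fix: when kern.maxclusters changes, walk the mbuf pools and wake anything sleeping on them (pool_wakeup() is the new interface; the surrounding update function is paraphrased):

    void
    nmbclust_update(void)
    {
            int i;

            /* limits changed: let sleeping pool_get()s retry */
            for (i = 0; i < nitems(mclsizes); i++)
                    pool_wakeup(&mclpools[i]);
            pool_wakeup(&mbpool);
    }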
|
#
8c00de5e |
| 23-Apr-2019 |
visa <visa@openbsd.org> |
Remove file name and line number output from witness(4)
Reduce code clutter by removing the file name and line number output from witness(4). Typically it is easy enough to locate offending locks using the stack traces that are shown in lock order conflict reports. Tricky cases can be tracked using sysctl kern.witness.locktrace=1.
This patch additionally removes the witness(4) wrapper for mutexes. Now each mutex implementation has to invoke the WITNESS_*() macros in order to utilize the checker.
Discussed with and OK dlg@, OK mpi@
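In practice this means each lock implementation calls the checker around its own enter path; roughly, for mutexes (simplified, spin and diagnostics elided):

    void
    mtx_enter(struct mutex *mtx)
    {
            WITNESS_CHECKORDER(MUTEX_LOCK_OBJECT(mtx),
                LOP_EXCLUSIVE | LOP_NEWORDER, NULL);
            __mtx_enter(mtx);               /* the real lock operation */
            WITNESS_LOCK(MUTEX_LOCK_OBJECT(mtx), LOP_EXCLUSIVE);
    }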
|
#
4bfbad54 |
| 10-Feb-2019 |
tedu <tedu@openbsd.org> |
revert revert revert. there are many other archs that use custom allocs.
|
#
7d335b5a |
| 10-Feb-2019 |
tedu <tedu@openbsd.org> |
if the waitok flag is set, have the interrupt-safe multipage allocator redirect to the non-interrupt allocator.
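Sketched, at the top of the interrupt-safe allocator ("_ni" being the existing not-interrupt-safe variant):

    /* in pool_multi_alloc(): a waiting caller is in process
     * context, so the kinder allocator can be used directly */
    if (ISSET(flags, PR_WAITOK))
            return (pool_multi_alloc_ni(pp, flags, slowdown));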
|
#
079cc439 |
| 10-Feb-2019 |
tedu <tedu@openbsd.org> |
make it possible to reduce kmem pressure by letting some pools use a more accommodating allocator. an interrupt safe pool may also be used in process context, as indicated by waitok flags. thanks to the garbage collector, we can always free pages in process context. the only complication is where to put the pages. solve this by saving the allocation flags in the pool page header so the free function can examine them. not actually used in this diff. (coming soon.) arm testing and compile fixes from phessler
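A sketch of the bookkeeping (the ph_flags field name is an assumption for illustration):

    /* pool_p_alloc(): remember how the page was obtained */
    ph->ph_flags = flags & PR_WAITOK;

    /* the free side runs in process context thanks to the gc,
     * so it can pick the allocator that matches */
    if (ISSET(ph->ph_flags, PR_WAITOK))
            pool_multi_free_ni(pp, addr);
    else
            pool_multi_free(pp, addr);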
|
#
e0c5510e |
| 08-Jun-2018 |
guenther <guenther@openbsd.org> |
Constipate all the struct lock_type's so they go into .rodata
ok visa@
|
#
a544e172 |
| 06-Feb-2018 |
dlg <dlg@openbsd.org> |
slightly randomize the order that new pages populate their item lists in.
ok tedu@ deraadt@
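One way to get this effect, as an illustrative sketch (the committed code may differ): spend random bits choosing head or tail insertion while a fresh page's item list is built.

    caddr_t addr = ph->ph_page;
    u_int32_t r = arc4random();
    struct pool_item *pi;
    u_int i;

    for (i = 0; i < pp->pr_itemsperpage; i++) {
            pi = (struct pool_item *)addr;
            if (r & 1)
                    XSIMPLEQ_INSERT_HEAD(&ph->ph_items, pi, pi_list);
            else
                    XSIMPLEQ_INSERT_TAIL(&ph->ph_items, pi, pi_list);
            r = (i % 32 == 31) ? arc4random() : r >> 1;
            addr += pp->pr_size;
    }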
|
#
a97ba27b |
| 18-Jan-2018 |
bluhm <bluhm@openbsd.org> |
While booting it does not make sense to wait for memory; there is no other process which could free it. Better to panic in malloc(9) or pool_get(9) instead of sleeping forever. tested by visa@ patrick@ Jan Klemkow; suggested by kettenis@; OK deraadt@
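The check is along these lines ('cold' is the kernel's boot-phase flag; placement and panic message assumed):

    /* in pool_get(): while cold, nothing else runs that could
     * free memory, so waiting would sleep forever */
    if (cold && ISSET(flags, PR_WAITOK))
            panic("%s: cold and PR_WAITOK", __func__);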
|
#
703735be |
| 13-Aug-2017 |
guenther <guenther@openbsd.org> |
New flag PR_RWLOCK for pool_init(9) makes the pool use rwlocks instead of mutexes. Use this immediately for the pool_cache futex pools.
Mostly worked out with dlg@ during e2k17 ok mpi@ tedu@
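Usage is a one-flag change at pool_init(9) time; for instance (the futex pool here is illustrative shorthand):

    pool_init(&ftpool, sizeof(struct futex), 0, IPL_NONE,
        PR_WAITOK | PR_RWLOCK, "futexpl", NULL);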
|
#
ecdf287e |
| 12-Jul-2017 |
visa <visa@openbsd.org> |
Compute the level of contention only once.
Suggested by and OK dlg@
|
#
63c7b0bd |
| 12-Jul-2017 |
visa <visa@openbsd.org> |
When there is no contention on a pool cache lock, lower the number of items that a cache list is allowed to hold. This lets the cache release resources back to the common pool after pressure on the cache has decreased.
OK dlg@
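Sketched with assumed field and constant names: the gc pass ratchets the per-list limit back down whenever a pool saw no new contention since its last visit.

    if (pp->pr_cache_contention == pp->pr_cache_contention_prev &&
        pp->pr_cache_items > PC_ITEMS_MIN)
            pp->pr_cache_items--;   /* give memory back, slowly */
    pp->pr_cache_contention_prev = pp->pr_cache_contention;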
|
#
908b79cb |
| 23-Jun-2017 |
dlg <dlg@openbsd.org> |
set the alignment of the per cpu cache structures to CACHELINESIZE.
hardcoding 64 is too optimistic.
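i.e., roughly (fields illustrative):

    struct pool_cache {
            struct pool_cache_item  *pc_actv;   /* active free list */
            u_int                    pc_nactv;  /* items on pc_actv */
    } __aligned(CACHELINESIZE);                 /* was __aligned(64) */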
|
#
9a59f808 |
| 23-Jun-2017 |
dlg <dlg@openbsd.org> |
change the semantics for calculating when to grow the size of a cache list.
previously it would figure out if there were enough items overall for all the cpus to have full active and inactive free lists. this included currently allocated items, which pools don't actually hold on a free list and whose return they cannot predict.
instead, see if there are enough items on the idle lists in the depot to fill all the free lists on the cpus. if there are enough idle items, then we can grow.
tested by hrvoje popovski and amit kulkarni ok visa@
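In sketch form (names assumed): compare what the depot's idle lists hold against what longer per-cpu lists would need before allowing growth.

    /* grow only if the idle items already in the depot could
     * fill every cpu's two free lists at the larger size */
    if (pp->pr_cache_nitems >= (pp->pr_cache_items + 8) * ncpus * 2)
            pp->pr_cache_items += 8;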
|
#
a11cecbb |
| 19-Jun-2017 |
dlg <dlg@openbsd.org> |
dynamically scale the size of the per cpu cache lists.
if the lock around the global depot of extra cache lists is contended a lot in between the gc task runs, consider growing the number of entries a free list can hold.
the size of the list is bounded by the number of pool items the current set of pages can represent, to avoid having cpus starve each other. i'm not sure this semantic is right (or the least worst), but we're putting it in now to see what happens.
this also means reality matches the documentation i just committed in pool_cache_init.9.
tested by hrvoje popovski and amit kulkarni ok visa@
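Contention is sampled without blocking; the idea, with an assumed counter name:

    /* try the depot lock first; a miss is a contention signal
     * for the gc task to act on later */
    if (mtx_enter_try(&pp->pr_cache_mtx) == 0) {
            mtx_enter(&pp->pr_cache_mtx);
            pp->pr_cache_contention++;
    }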
|
#
338a5c68 |
| 16-Jun-2017 |
dlg <dlg@openbsd.org> |
add garbage collection of unused lists of percpu cached items.
the cpu caches in pools amortise the cost of accessing global structures by moving lists of items around instead of individual items. excess lists of items are stored in the global pool struct, but these idle lists never get returned to the system for use elsewhere.
this adds a timestamp to the global idle list, which is updated when the idle list stops being empty. if the idle list hasn't been empty for a while, it means the per cpu caches aren't using the idle entries, and they can be recovered. timestamping the pages prevents recovery of a lot of items that may be used again shortly. eg, rx ring processing and replenishing from rate limited interrupts tends to allocate and free items in large chunks, which the timestamping smooths out.
gc'ed lists are returned to the pool pages, which in turn get gc'ed back to uvm.
ok visa@
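The recovery test in sketch form (shown with the nanosecond timestamps the code later moved to; pool_cache_list_take_idle() is a hypothetical stand-in for the real list handling):

    /* idle lists sat in the depot for a whole interval: the
     * cpus don't need them, so hand one back to the pages */
    if (pp->pr_cache_nlist > 0 &&
        getnsecuptime() - pp->pr_cache_timestamp > POOL_WAIT_GC) {
            cl = pool_cache_list_take_idle(pp);
            pool_cache_list_put(pp, cl);
    }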
|
#
7276a683 |
| 16-Jun-2017 |
dlg <dlg@openbsd.org> |
split returning an item to the pool pages out of pool_put as pool_do_put.
this lets pool_cache_list_put return items to the pages. currently, if pool_cache_list_put is called while the per cpu caches are enabled, the items on the list will be put straight back onto another list in the cpu cache. this also avoids counting puts for these items twice; a put for these items has already been counted when they went to a cpu cache, so it doesn't need to be counted again when they go back to the pool pages.
another side effect of this is that pool_cache_list_put can take the pool mutex once when returning all the items in the list with pool_do_put, rather than once per item.
ok visa@
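The refactor in outline: pool_put() keeps its locking and wraps pool_do_put(), and the list-put path loops under a single mutex acquisition (simplified; item link names illustrative):

    void
    pool_put(struct pool *pp, void *v)
    {
            mtx_enter(&pp->pr_mtx);
            pool_do_put(pp, v);
            pp->pr_nput++;
            mtx_leave(&pp->pr_mtx);
    }

    /* pool_cache_list_put(): one lock round-trip for the list */
    mtx_enter(&pp->pr_mtx);
    while (ci != NULL) {
            struct pool_cache_item *nci = ci->ci_next;

            pool_do_put(pp, ci);
            ci = nci;
    }
    mtx_leave(&pp->pr_mtx);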
|