History log of /openbsd-src/sys/kern/subr_pool.c (Results 26 – 50 of 237)
Revision Date Author Comments
# ee784f0a 15-Jun-2017 dlg <dlg@openbsd.org>

report contention on the caches' global data to userland.


# b65ef0d2 15-Jun-2017 dlg <dlg@openbsd.org>

white space tweaks. no functional change.


# c0775b48 15-Jun-2017 dlg <dlg@openbsd.org>

implement the backend of the sysctls that report pool cache info.

KERN_POOL_CACHE reports global cache info, like how long the lists
of cache items the cpus build should be and how many of these lists
are idle on the pool struct.

KERN_POOL_CACHE_CPUS reports counters from each cpu. the counters
are for how many item and list operations the cache has handled on
a cpu. the sysctl provides an array of ncpusfound * struct
kinfo_pool_cache_cpu, not a single struct kinfo_pool_cache_cpu.

tested by hrvoje popovski
ok mikeb@ millert@
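As a sketch of how userland might consume this, assuming the mib layout
{ CTL_KERN, KERN_POOL, KERN_POOL_CACHE_CPUS, pool index } and the counter
names described above; check <sys/pool.h> for the authoritative
definitions:

#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/pool.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

/* print the per-cpu get/put counters for one pool */
void
dump_pool_cache_cpus(int pool_index)
{
	int mib[4] = { CTL_KERN, KERN_POOL, KERN_POOL_CACHE_CPUS,
	    pool_index };
	struct kinfo_pool_cache_cpu *kpcc;
	size_t len = 0, i;

	/* first call sizes the buffer: ncpusfound * sizeof(*kpcc) */
	if (sysctl(mib, 4, NULL, &len, NULL, 0) == -1)
		err(1, "sysctl");
	if ((kpcc = malloc(len)) == NULL)
		err(1, "malloc");
	if (sysctl(mib, 4, kpcc, &len, NULL, 0) == -1)
		err(1, "sysctl");

	for (i = 0; i < len / sizeof(*kpcc); i++)
		printf("cpu%u: %llu gets, %llu puts\n", kpcc[i].pr_cpu,
		    (unsigned long long)kpcc[i].pr_nget,
		    (unsigned long long)kpcc[i].pr_nput);
	free(kpcc);
}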


# 2b9bbc55 13-Jun-2017 dlg <dlg@openbsd.org>

when enabling cpu caches, check the item size against the right thing

lists of free items on the per cpu caches are built out of the pool
items as struct pool_cache_items, not struct pool_cache. make the
KASSERT in pool_cache_init check that properly.
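A minimal sketch of the corrected check, assuming the names above; the
actual assertion in pool_cache_init may read differently:

	/* free items carry the cache list linkage, so they must fit one */
	KASSERT(pp->pr_size >= sizeof(struct pool_cache_item));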



# 6f84e71d 20-Apr-2017 visa <visa@openbsd.org>

Tweak lock inits to make the system runnable with witness(4)
on amd64 and i386.


# ca3bb01e 20-Feb-2017 dlg <dlg@openbsd.org>

revert 1.206 because it allows deadlocks.

if the gc task is running on a cpu that handles interrupts it is
possible to cause a deadlock. the gc task may be cleaning up a pool
and holding its mutex when a non-MPSAFE interrupt arrives and tries
to take the kernel lock. another cpu may already be holding the
kernel lock when it then tries to use the same pool that the pool
GC is currently processing.

thanks to sthen@ and mpi@ for chasing this down.



# e828f349 08-Feb-2017 dlg <dlg@openbsd.org>

the splvm() in pool_gc_pages is unnecessary now.

all pools set their ipls unconditionally now, so there isn't a need
to second guess them.

pointed out by and ok jmatthew@


# 47efcd19 24-Jan-2017 mpi <mpi@openbsd.org>

Force a context switch for every pool_get(9) with the PR_WAITOK flag
if pool_debug is equal to 2, just like we do for malloc(9).

ok dlg@
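A sketch of the behaviour described, mirroring the malloc(9) code path;
the exact placement inside pool_get() may differ:

	/* in the PR_WAITOK path of pool_get(9) */
	if (pool_debug == 2)
		yield();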


# ff773029 21-Nov-2016 dlg <dlg@openbsd.org>

let pool page allocators advertise what sizes they can provide.

to keep things concise i let the multi page allocators provide
multiple sizes of pages, but this feature was implicit inside
pool_init and only usable if the caller of pool_init did not specify
a page allocator.

callers of pool_init can now supply a page allocator that provides
multiple page sizes. pool_init will still try to fit 8 items onto
a page, but will scale its page size down until it fits into what
the allocator provides.

supported page sizes are specified as a bit field in the pa_pagesz
member of a pool_allocator. setting the low bit in that word indicates
that the pages can be aligned to their size.


# dc856b0f 07-Nov-2016 dlg <dlg@openbsd.org>

rename some types and functions to make the code easier to read.

pool_item_header is now pool_page_header. the more useful change
is pool_list is now pool_cache_item. that's what items going into
the per cpu pool caches are cast to, and they get linked together
to make a list.

the functions operating on what is now pool_cache_items have been
renamed to make it more obvious what they manipulate.



# 8d0c6f3f 02-Nov-2016 dlg <dlg@openbsd.org>

poison the TAILQ_ENTRY in items in the per cpu pool cache.


# f3662f08 02-Nov-2016 dlg <dlg@openbsd.org>

add poisoning of items on the per cpu caches.

it copies the existing pool code, except it works on pool_list
structures instead of pool_item structures.

after this i'd like to poison the words used by the TAILQ_ENTRY in
the pool_list struct that aren't used until a list of items is moved
into the global depot.



# cd90703f 02-Nov-2016 dlg <dlg@openbsd.org>

use a TAILQ to maintain the list of item lists used by the percpu code.

it makes it more readable, and fixes a bug in pool_list_put where it
was returning the next item in the current list rather than the next
list to be freed.



# 1f212a5e 02-Nov-2016 dlg <dlg@openbsd.org>

add per cpu caches for free pool items.

this is modelled on what's described in the "Magazines and Vmem:
Extending the Slab Allocator to Many CPUs and Arbitrary Resources"
paper by Jeff Bonwick and Jonathan Adams.

the main semantic borrowed from the paper is the use of two lists
of free pool items on each cpu, and only moving one of the lists
in and out of a global depot of free lists to mitigate against a
cpu thrashing against that global depot.

unlike slabs, pools do not maintain or cache constructed items,
which allows us to use the items themselves to build the free list
rather than having to allocate arrays to point at constructed pool
items.

the per cpu caches are built on top of the cpumem api.

this has been kicked a bit by hrvoje popovski and simon mages (thank you).
i'm putting it in now so it is easier to work on and test.
ok jmatthew@
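A rough sketch of the shape this takes: the free items themselves carry
the linkage, so no side arrays are needed. Field names here are
illustrative, not copied from the commit:

	/* a free item is cast to this; it links to the next free item */
	struct pool_cache_item {
		struct pool_cache_item	*ci_next;
		unsigned long		 ci_nitems; /* list length from here */
	};

	/* each cpu keeps two lists and swaps whole lists with the depot */
	struct pool_cache {
		struct pool_cache_item	*pc_actv; /* gets/puts use this */
		struct pool_cache_item	*pc_prev; /* candidate for the depot */
	};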



# 1378bae2 15-Sep-2016 dlg <dlg@openbsd.org>

all pools have their ipl set via pool_setipl, so fold it into pool_init.

the ioff argument to pool_init() is unused and has been for many
years, so this replaces it with an ipl argument. because the ipl
will be set on init we no longer need pool_setipl.

most of these changes have been done with coccinelle using the spatch
below. cocci sucks at formatting code though, so i fixed that by hand.

the manpage and subr_pool.c bits i did myself.

ok tedu@ jmatthew@

@ipl@
expression pp;
expression ipl;
expression s, a, o, f, m, p;
@@
-pool_init(pp, s, a, o, f, m, p);
-pool_setipl(pp, ipl);
+pool_init(pp, s, a, ipl, f, m, p);
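For example, a caller that used to pair the two calls now does this in
one step (the pool name, item type and ipl are chosen for illustration):

	struct pool foo_pool;

	/* the unused ioff slot now carries the ipl; no pool_setipl needed */
	pool_init(&foo_pool, sizeof(struct foo), 0, IPL_NONE, 0,
	    "foopl", NULL);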



# 74c39504 15-Sep-2016 dlg <dlg@openbsd.org>

move pools to using the subr_tree version of rb trees

this is halfway to recovering the space used by the subr_tree code.


# aaa96388 05-Sep-2016 dlg <dlg@openbsd.org>

revert moving pools from tree.h to subr_tree.c rb trees.

it'll go in again when i don't break userland.


# a829c13b 05-Sep-2016 dlg <dlg@openbsd.org>

move pool red-black trees from tree.h code to subr_tree.c code

ok tedu@


# ce5376a6 15-Jan-2016 dlg <dlg@openbsd.org>

add a "show socket" command to ddb

should help with inspecting socket issues in the future.

enthusiasm from mpi@ bluhm@ deraadt@


# ce1d5440 11-Sep-2015 kettenis <kettenis@openbsd.org>

Now that interrupt-safe uvm maps are properly locked, the interrupt-safe
multi page backend allocator implementation no longer needs to grab the
kernel lock.

ok mlarkin@, dlg@


# e03893a6 08-Sep-2015 kettenis <kettenis@openbsd.org>

Give the pool page allocator backends more sensible names. We now have:
* pool_allocator_single: single-page allocator, always interrupt safe
* pool_allocator_multi: multi-page allocator, interrupt safe
* pool_allocator_multi_ni: multi-page allocator, not interrupt safe



# 13499c96 08-Sep-2015 kettenis <kettenis@openbsd.org>

Now that msleep(9) no longer requires the kernel lock (as long as PCATCH
isn't specified) the default backend allocator implementation no longer
needs to grab the kernel lock.

ok visa@, guenther@


# b0c99887 06-Sep-2015 kettenis <kettenis@openbsd.org>

We no longer need to grab the kernel lock for allocating and freeing pages
in the (default) single page pool backend allocator. This means it is now
safe to call pool_get(9) and pool_put(9) for "small" items while holding
a mutex without also holding the kernel lock, as these functions will no
longer acquire the kernel lock under any circumstances. For "large" items
(where large means larger than 1/8th of a page, i.e. more than 512 bytes
with 4 KB pages) this still isn't safe though.

ok dlg@



# fdd75b91 01-Sep-2015 kettenis <kettenis@openbsd.org>

Push down the KERNEL_LOCK/KERNEL_UNLOCK calls into the back-end allocator
functions. Note that these calls are deliberately not added to the
special-purpose back-end allocators in the various pmaps. Those allocators
either don't need to grab the kernel lock, are always called with the kernel
lock already held, or are only used on non-MULTIPROCESSOR platforms.

ok tedu@, deraadt@, dlg@

show more ...


# a920f20f 21-Aug-2015 dlg <dlg@openbsd.org>

re-enable *8.

if we're allowed to try and use large pages, we try and fit at least
8 items onto each page. this amortises each item's share of the
per-page cost a bit.

"be careful" deraadt@

