1.\"	$OpenBSD: pool_cache_init.9,v 1.8 2018/01/12 04:36:45 deraadt Exp $
2.\"
3.\" Copyright (c) 2017 David Gwynne <dlg@openbsd.org>
4.\"
5.\" Permission to use, copy, modify, and distribute this software for any
6.\" purpose with or without fee is hereby granted, provided that the above
7.\" copyright notice and this permission notice appear in all copies.
8.\"
9.\" THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
10.\" WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
11.\" MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
12.\" ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
13.\" WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
14.\" ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
15.\" OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
16.\"
17.Dd $Mdocdate: January 12 2018 $
18.Dt POOL_CACHE_INIT 9
19.Os
20.Sh NAME
21.Nm pool_cache_init ,
22.Nm pool_cache_destroy
23.Nd per CPU caching of pool items
24.Sh SYNOPSIS
25.In sys/pool.h
26.Ft void
27.Fn pool_cache_init "struct pool *pp"
28.Ft void
29.Fn pool_cache_destroy "struct pool *pp"
30.Sh DESCRIPTION
By default, pools protect their internal state using a single lock,
so concurrent access to a pool may suffer contention on that lock.
The pool API provides support for caching free pool items on each
CPU, which can be enabled to mitigate this contention.
.Pp
When per CPU caches are enabled on a pool, each CPU maintains an
active and inactive list of free pool items.
A global depot of free lists is initialised in the pool structure
to store excess lists of free items that may accumulate on CPUs.
.Pp
.Fn pool_cache_init
allocates the free lists on each CPU, initialises the global depot
of free lists, and enables their use to handle
.Xr pool_get 9
and
.Xr pool_put 9
operations.
.Pp
.Fn pool_cache_destroy
disables the use of the free lists on each CPU, returns items cached
on all the free lists in the subsystem back to the normal pool
allocator, and finally frees the per CPU data structures.
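.Pp
For example, a subsystem could enable per CPU caching immediately
after creating its pool and tear the caches down again before the
pool itself is destroyed.
The pool, its item type, the interrupt protection level, and the
function names in this sketch are placeholders:
.Bd -literal -offset indent
struct pool examplepl;

void
example_attach(void)
{
	/* "struct example" stands in for the real item type */
	pool_init(&examplepl, sizeof(struct example), 0, IPL_NONE, 0,
	    "examplepl", NULL);
	pool_cache_init(&examplepl);
}

void
example_detach(void)
{
	pool_cache_destroy(&examplepl);
	pool_destroy(&examplepl);
}
.Ed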
.Pp
Once per CPU caches are enabled, items returned to a pool with
.Xr pool_put 9
are placed on the current CPU's active free list.
If the active list becomes full, it becomes the inactive list and
a new active list is initialised for the free item to go on.
If an inactive list already exists when the active list becomes
full, the inactive list is moved to the global depot of free lists
before the active list is moved into its place.
.Pp
Attempts to allocate items with
.Xr pool_get 9
first try to get an item from the active free list on the CPU it is
called on.
If the active free list is empty but an inactive list of items is
available, the inactive list is moved back into place as the active
list so it can satisfy the request.
If no lists are available on the current CPU, an attempt to allocate
a free list from the global depot is made.
Finally, if no free list is available,
.Xr pool_get 9
falls through to allocating a pool item normally.
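.Pp
The following fragment is an illustrative model of the list handling
described above.
Every structure and name in it is hypothetical, and it leaves out
the locking and other details of the real implementation in
.Pa sys/kern/subr_pool.c :
.Bd -literal -offset indent
#define LIST_SIZE	8		/* arbitrary capacity */

struct freelist {			/* a list of free items */
	void		*items[LIST_SIZE];
	unsigned int	 nitems;
	struct freelist	*next;		/* linkage in the depot */
};

struct cpu_cache {
	struct freelist	*active;
	struct freelist	*inactive;
};

struct freelist *depot;			/* global depot of full lists */

/* pool_put() with per CPU caches enabled */
void
cache_put(struct cpu_cache *cc, void *item)
{
	if (cc->active->nitems == LIST_SIZE) {
		if (cc->inactive != NULL) {
			/* hand the older full list to the depot */
			cc->inactive->next = depot;
			depot = cc->inactive;
		}
		cc->inactive = cc->active;
		/* start a fresh active list; error handling omitted */
		cc->active = malloc(sizeof(*cc->active), M_TEMP,
		    M_WAITOK | M_ZERO);
	}
	cc->active->items[cc->active->nitems++] = item;
}

/* pool_get() with per CPU caches enabled */
void *
cache_get(struct cpu_cache *cc)
{
	if (cc->active->nitems == 0) {
		if (cc->inactive != NULL) {
			/* swap the empty active and full inactive lists */
			struct freelist *t = cc->active;

			cc->active = cc->inactive;
			cc->inactive = t;
		} else if (depot != NULL) {
			/* take a full list from the global depot */
			free(cc->active, M_TEMP, sizeof(*cc->active));
			cc->active = depot;
			depot = cc->active->next;
		} else {
			return (NULL);	/* fall back to the normal pool */
		}
	}
	return (cc->active->items[--cc->active->nitems]);
}
.Ed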
.Pp
The maximum number of items cached on a free list is dynamically
scaled for each pool based on the contention on the lock around the
global depot of free lists.
A garbage collector runs periodically to recover idle free lists
and make the memory they consume available to the system for
use elsewhere.
.Pp
Information about the current state of the per CPU caches and
counters of operations they handle are available via
.Xr sysctl 2 ,
or displayed in the pcache view in
.Xr systat 1 .
.Pp
The
.Vt kinfo_pool_cache
struct provides information about the global state of a pool's caches
via a node for each pool under the
.Dv CTL_KERN ,
.Dv KERN_POOL ,
.Dv KERN_POOL_CACHE
.Xr sysctl 2
MIB hierarchy.
.Bd -literal -offset indent
struct kinfo_pool_cache {
	uint64_t	pr_ngc;
	unsigned int	pr_len;
	unsigned int	pr_nitems;
	unsigned int	pr_contention;
};
.Ed
.Pp
.Va pr_ngc
indicates the number of times the garbage collector has recovered
an idle item free list.
.Pp
.Va pr_len
shows the maximum number of items that can be cached on a CPU's
active free list.
.Pp
.Va pr_nitems
shows the number of free items that are currently stored in the
global depot.
.Pp
.Va pr_contention
indicates the number of times that there was contention on the lock
protecting the global depot.
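.Pp
These statistics could be read from userland as in the following
sketch; the pool index and the function name are placeholders, and
error handling is kept minimal:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/pool.h>

#include <err.h>
#include <stdio.h>

void
print_pool_cache(int idx)
{
	struct kinfo_pool_cache kpc;
	/* pool indexes run from 1 to the KERN_POOL_NPOOLS value */
	int mib[] = { CTL_KERN, KERN_POOL, KERN_POOL_CACHE, idx };
	size_t len = sizeof(kpc);

	if (sysctl(mib, 4, &kpc, &len, NULL, 0) == -1)
		err(1, "sysctl");

	printf("gc runs: %llu, list size: %u, "
	    "items in depot: %u, contention: %u\en",
	    (unsigned long long)kpc.pr_ngc, kpc.pr_len,
	    kpc.pr_nitems, kpc.pr_contention);
}
.Ed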
.Pp
The
.Vt kinfo_pool_cache_cpu
struct provides information about the number of times the cache on
a CPU handled certain operations.
These counters may be accessed via a node for each pool under the
.Dv CTL_KERN ,
.Dv KERN_POOL ,
.Dv KERN_POOL_CACHE_CPUS
.Xr sysctl 2
MIB hierarchy.
This sysctl returns an array of
.Vt kinfo_pool_cache_cpu
structures sized by the number of CPUs found in the system.
The number of CPUs in the system can be read from the
.Dv CTL_HW ,
.Dv HW_NCPUFOUND
sysctl MIB.
.Bd -literal -offset indent
struct kinfo_pool_cache_cpu {
	unsigned int	pr_cpu;
	uint64_t	pr_nget;
	uint64_t	pr_nfail;
	uint64_t	pr_nput;
	uint64_t	pr_nlget;
	uint64_t	pr_nlfail;
	uint64_t	pr_nlput;
};
.Ed
.Pp
.Va pr_cpu
indicates which CPU performed the relevant operations.
.Pp
.Va pr_nget
and
.Va pr_nfail
show the number of times the CPU successfully or unsuccessfully handled a
.Xr pool_get 9
operation, respectively.
.Va pr_nput
shows the number of times the CPU handled a
.Xr pool_put 9
operation.
.Pp
.Va pr_nlget
and
.Va pr_nlfail
show the number of times the CPU successfully or unsuccessfully
requested a list of free items from the global depot.
.Va pr_nlput
shows the number of times the CPU pushed a list of free items to
the global depot.
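.Pp
The per CPU counters of a pool could be dumped from userland as in
the following sketch, sizing the buffer with
.Dv HW_NCPUFOUND
as described above; the pool index and the function name are
placeholders:
.Bd -literal -offset indent
#include <sys/types.h>
#include <sys/sysctl.h>
#include <sys/pool.h>

#include <err.h>
#include <stdio.h>
#include <stdlib.h>

void
print_pool_cache_cpus(int idx)
{
	struct kinfo_pool_cache_cpu *kpcc;
	int mib[] = { CTL_HW, HW_NCPUFOUND, 0, 0 };
	int ncpus;
	size_t len, i;

	len = sizeof(ncpus);
	if (sysctl(mib, 2, &ncpus, &len, NULL, 0) == -1)
		err(1, "hw.ncpufound");

	kpcc = calloc(ncpus, sizeof(*kpcc));
	if (kpcc == NULL)
		err(1, NULL);

	mib[0] = CTL_KERN;
	mib[1] = KERN_POOL;
	mib[2] = KERN_POOL_CACHE_CPUS;
	mib[3] = idx;
	len = ncpus * sizeof(*kpcc);
	if (sysctl(mib, 4, kpcc, &len, NULL, 0) == -1)
		err(1, "sysctl");

	for (i = 0; i < len / sizeof(*kpcc); i++) {
		printf("cpu%u: get %llu (fail %llu), put %llu\en",
		    kpcc[i].pr_cpu,
		    (unsigned long long)kpcc[i].pr_nget,
		    (unsigned long long)kpcc[i].pr_nfail,
		    (unsigned long long)kpcc[i].pr_nput);
	}

	free(kpcc);
}
.Ed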
.Sh CONTEXT
.Fn pool_cache_init
and
.Fn pool_cache_destroy
can be called from process context.
.Sh CODE REFERENCES
The pool implementation is in the file
.Pa sys/kern/subr_pool.c .
.Sh SEE ALSO
.Xr systat 1 ,
.Xr sysctl 2 ,
.Xr pool_get 9
.Sh CAVEATS
Because the intention of per CPU pool caches is to avoid having
all CPUs coordinate via shared data structures for handling
.Xr pool_get 9
and
.Xr pool_put 9
operations, any limits set on the pool with
.Xr pool_sethardlimit 9
are ignored.
If limits on the memory used by a pool with per CPU caches enabled
are needed, they must be enforced by a page allocator specified
when a pool is set up with
.Xr pool_init 9 .
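.Pp
As a rough sketch, such a limit could be imposed with a custom
.Vt struct pool_allocator
passed to
.Xr pool_init 9 .
The names, the limit handling, and the lack of locking around the
page counter below are all illustrative only; see
.Xr pool_init 9
and
.Xr km_alloc 9
for the interfaces used:
.Bd -literal -offset indent
void	*example_limited_alloc(struct pool *, int, int *);
void	 example_limited_free(struct pool *, void *);

unsigned int example_pages;		/* pages currently allocated */
unsigned int example_maxpages = 16;	/* arbitrary limit */

struct pool_allocator example_limited_allocator = {
	example_limited_alloc,
	example_limited_free
};

void *
example_limited_alloc(struct pool *pp, int flags, int *slowdown)
{
	void *v;

	if (example_pages >= example_maxpages)
		return (NULL);

	v = km_alloc(PAGE_SIZE, &kv_page, &kp_dirty,
	    (flags & PR_WAITOK) ? &kd_waitok : &kd_nowait);
	if (v != NULL)
		example_pages++;

	return (v);
}

void
example_limited_free(struct pool *pp, void *v)
{
	km_free(v, PAGE_SIZE, &kv_page, &kp_dirty);
	example_pages--;
}
.Ed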