/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License, Version 1.0 only
 * (the "License").  You may not use this file except in compliance
 * with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2005 Sun Microsystems, Inc.  All rights reserved.
 * Use is subject to license terms.
 */

#pragma ident	"%Z%%M% %I% %E% SMI"

/*
 * Big Theory Statement for the virtual memory allocator.
 *
 * For a more complete description of the main ideas, see:
 *
 *	Jeff Bonwick and Jonathan Adams,
 *
 *	Magazines and vmem: Extending the Slab Allocator to Many CPUs and
 *	Arbitrary Resources.
 *
 *	Proceedings of the 2001 Usenix Conference.
 *	Available as http://www.usenix.org/event/usenix01/bonwick.html
 *
 *
 * 1. General Concepts
 * -------------------
 *
 * 1.1 Overview
 * ------------
 * We divide the kernel address space into a number of logically distinct
 * pieces, or *arenas*: text, data, heap, stack, and so on.  Within these
 * arenas we often subdivide further; for example, we use heap addresses
 * not only for the kernel heap (kmem_alloc() space), but also for DVMA,
 * bp_mapin(), /dev/kmem, and even some device mappings like the TOD chip.
 * The kernel address space, therefore, is most accurately described as
 * a tree of arenas in which each node of the tree *imports* some subset
 * of its parent.  The virtual memory allocator manages these arenas and
 * supports their natural hierarchical structure.
 *
 * 1.2 Arenas
 * ----------
 * An arena is nothing more than a set of integers.  These integers most
 * commonly represent virtual addresses, but in fact they can represent
 * anything at all.  For example, we could use an arena containing the
 * integers minpid through maxpid to allocate process IDs.  vmem_create()
 * and vmem_destroy() create and destroy vmem arenas.  In order to
 * differentiate between arenas used for addresses and arenas used for
 * identifiers, the VMC_IDENTIFIER flag is passed to vmem_create().  This
 * prevents identifier exhaustion from being diagnosed as general memory
 * failure.
 *
 * 1.3 Spans
 * ---------
 * We represent the integers in an arena as a collection of *spans*, or
 * contiguous ranges of integers.  For example, the kernel heap consists
 * of just one span: [kernelheap, ekernelheap).  Spans can be added to an
 * arena in two ways: explicitly, by vmem_add(), or implicitly, by
 * importing, as described in Section 1.5 below.
 *
 * 1.4 Segments
 * ------------
 * Spans are subdivided into *segments*, each of which is either allocated
 * or free.  A segment, like a span, is a contiguous range of integers.
 * Each allocated segment [addr, addr + size) represents exactly one
 * vmem_alloc(size) that returned addr.  Free segments represent the space
 * between allocated segments.  If two free segments are adjacent, we
 * coalesce them into one larger segment; that is, if segments [a, b) and
 * [b, c) are both free, we merge them into a single segment [a, c).
 * The segments within a span are linked together in increasing-address order
 * so we can easily determine whether coalescing is possible.
 *
 * Segments never cross span boundaries.  When all segments within
 * an imported span become free, we return the span to its source.
 *
 * 1.5 Imported Memory
 * -------------------
 * As mentioned in the overview, some arenas are logical subsets of
 * other arenas.  For example, kmem_va_arena (a virtual address cache
 * that satisfies most kmem_slab_create() requests) is just a subset
 * of heap_arena (the kernel heap) that provides caching for the most
 * common slab sizes.  When kmem_va_arena runs out of virtual memory,
 * it *imports* more from the heap; we say that heap_arena is the
 * *vmem source* for kmem_va_arena.  vmem_create() allows you to
 * specify any existing vmem arena as the source for your new arena.
 * Topologically, since every arena is a child of at most one source,
 * the set of all arenas forms a collection of trees.
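 *
 * To make Sections 1.2-1.5 concrete, here is a purely illustrative sketch;
 * the names foo_id, foo_va and foo_maxid are hypothetical and appear
 * nowhere else in this file.  An identifier arena of foo_maxid integers
 * starting at 1 might be created and used as:
 *
 *	vmem_t *foo_id = vmem_create("foo_id", (void *)1, foo_maxid, 1,
 *	    NULL, NULL, NULL, 0, VM_SLEEP | VMC_IDENTIFIER);
 *	uintptr_t id = (uintptr_t)vmem_alloc(foo_id, 1, VM_SLEEP);
 *	...
 *	vmem_free(foo_id, (void *)id, 1);
 *
 * and a virtual address arena that imports page-sized spans from the
 * kernel heap on demand, with quantum caching of the smallest sizes
 * (compare kmem_va_arena above), might look roughly like:
 *
 *	vmem_t *foo_va = vmem_create("foo_va", NULL, 0, PAGESIZE,
 *	    vmem_alloc, vmem_free, heap_arena, 8 * PAGESIZE, VM_SLEEP);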
 *
 * 1.6 Constrained Allocations
 * ---------------------------
 * Some vmem clients are quite picky about the kind of address they want.
 * For example, the DVMA code may need an address that is at a particular
 * phase with respect to some alignment (to get good cache coloring), or
 * that lies within certain limits (the addressable range of a device),
 * or that doesn't cross some boundary (a DMA counter restriction) --
 * or all of the above.  vmem_xalloc() allows the client to specify any
 * or all of these constraints.
 *
 * 1.7 The Vmem Quantum
 * --------------------
 * Every arena has a notion of 'quantum', specified at vmem_create() time,
 * that defines the arena's minimum unit of currency.  Most commonly the
 * quantum is either 1 or PAGESIZE, but any power of 2 is legal.
 * All vmem allocations are guaranteed to be quantum-aligned.
 *
 * 1.8 Quantum Caching
 * -------------------
 * A vmem arena may be so hot (frequently used) that the scalability of vmem
 * allocation is a significant concern.  We address this by allowing the most
 * common allocation sizes to be serviced by the kernel memory allocator,
 * which provides low-latency per-cpu caching.  The qcache_max argument to
 * vmem_create() specifies the largest allocation size to cache.
 *
 * 1.9 Relationship to Kernel Memory Allocator
 * -------------------------------------------
 * Every kmem cache has a vmem arena as its slab supplier.  The kernel memory
 * allocator uses vmem_alloc() and vmem_free() to create and destroy slabs.
 *
 *
 * 2. Implementation
 * -----------------
 *
 * 2.1 Segment lists and markers
 * -----------------------------
 * The segment structure (vmem_seg_t) contains two doubly-linked lists.
 *
 * The arena list (vs_anext/vs_aprev) links all segments in the arena.
 * In addition to the allocated and free segments, the arena contains
 * special marker segments at span boundaries.  Span markers simplify
 * coalescing and importing logic by making it easy to tell both when
 * we're at a span boundary (so we don't coalesce across it), and when
 * a span is completely free (its neighbors will both be span markers).
 *
 * Imported spans will have vs_import set.
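 *
 * For example (illustrative only), a span [a, e) holding a single
 * allocation [b, c) appears on the arena list as
 *
 *	... <-> SPAN(a, e) <-> FREE(a, b) <-> ALLOC(b, c) <-> FREE(c, e) <-> ...
 *
 * so a free segment whose arena-list neighbors are both span markers is
 * the only segment in its span, i.e. the span is completely free.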
 *
 * The next-of-kin list (vs_knext/vs_kprev) links segments of the same type:
 * (1) for allocated segments, vs_knext is the hash chain linkage;
 * (2) for free segments, vs_knext is the freelist linkage;
 * (3) for span marker segments, vs_knext is the next span marker.
 *
 * 2.2 Allocation hashing
 * ----------------------
 * We maintain a hash table of all allocated segments, hashed by address.
 * This allows vmem_free() to discover the target segment in constant time.
 * vmem_update() periodically resizes hash tables to keep hash chains short.
 *
 * 2.3 Freelist management
 * -----------------------
 * We maintain power-of-2 freelists for free segments, i.e. free segments
 * of size >= 2^n reside in vmp->vm_freelist[n].  To ensure constant-time
 * allocation, vmem_xalloc() looks not in the first freelist that *might*
 * satisfy the allocation, but in the first freelist that *definitely*
 * satisfies the allocation (unless VM_BESTFIT is specified, or all larger
 * freelists are empty).  For example, a 1000-byte allocation will be
 * satisfied not from the 512..1023-byte freelist, whose members *might*
 * contain a 1000-byte segment, but from a 1024-byte or larger freelist,
 * the first member of which will *definitely* satisfy the allocation.
 * This ensures that vmem_xalloc() works in constant time.
 *
 * We maintain a bit map to determine quickly which freelists are non-empty.
 * vmp->vm_freemap & (1 << n) is non-zero iff vmp->vm_freelist[n] is non-empty.
 *
 * The different freelists are linked together into one large freelist,
 * with the freelist heads serving as markers.  Freelist markers simplify
 * the maintenance of vm_freemap by making it easy to tell when we're taking
 * the last member of a freelist (both of its neighbors will be markers).
 *
 * 2.4 Vmem Locking
 * ----------------
 * For simplicity, all arena state is protected by a per-arena lock.
 * For very hot arenas, use quantum caching for scalability.
 *
 * 2.5 Vmem Population
 * -------------------
 * Any internal vmem routine that might need to allocate new segment
 * structures must prepare in advance by calling vmem_populate(), which
 * will preallocate enough vmem_seg_t's to get it through the entire
 * operation without dropping the arena lock.
 *
 * 2.6 Auditing
 * ------------
 * If KMF_AUDIT is set in kmem_flags, we audit vmem allocations as well.
 * Since virtual addresses cannot be scribbled on, there is no equivalent
 * in vmem to redzone checking, deadbeef, or other kmem debugging features.
 * Moreover, we do not audit frees because segment coalescing destroys the
 * association between an address and its segment structure.  Auditing is
 * thus intended primarily to keep track of who's consuming the arena.
 * Debugging support could certainly be extended in the future if it proves
 * necessary, but we do so much live checking via the allocation hash table
 * that even non-DEBUG systems get quite a bit of sanity checking already.
 */

#include <sys/vmem_impl.h>
#include <sys/kmem.h>
#include <sys/kstat.h>
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/atomic.h>
#include <sys/bitmap.h>
#include <sys/sysmacros.h>
#include <sys/cmn_err.h>
#include <sys/debug.h>
#include <sys/panic.h>

#define	VMEM_INITIAL		10	/* early vmem arenas */
#define	VMEM_SEG_INITIAL	200	/* early segments */

/*
 * Adding a new span to an arena requires two segment structures: one to
 * represent the span, and one to represent the free segment it contains.
 */
#define	VMEM_SEGS_PER_SPAN_CREATE	2

/*
 * Allocating a piece of an existing segment requires 0-2 segment structures
 * depending on how much of the segment we're allocating.
 *
 * To allocate the entire segment, no new segment structures are needed; we
 * simply move the existing segment structure from the freelist to the
 * allocation hash table.
 *
 * To allocate a piece from the left or right end of the segment, we must
 * split the segment into two pieces (allocated part and remainder), so we
 * need one new segment structure to represent the remainder.
 *
 * To allocate from the middle of a segment, we need two new segment
 * structures to represent the remainders on either side of the allocated
 * part.
 */
#define	VMEM_SEGS_PER_EXACT_ALLOC	0
#define	VMEM_SEGS_PER_LEFT_ALLOC	1
#define	VMEM_SEGS_PER_RIGHT_ALLOC	1
#define	VMEM_SEGS_PER_MIDDLE_ALLOC	2

/*
 * vmem_populate() preallocates segment structures for vmem to do its work.
 * It must preallocate enough for the worst case, which is when we must import
 * a new span and then allocate from the middle of it.
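 * That worst case therefore costs VMEM_SEGS_PER_SPAN_CREATE +
 * VMEM_SEGS_PER_MIDDLE_ALLOC = 2 + 2 = 4 segment structures, which is what
 * the macro below evaluates to.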
 */
#define	VMEM_SEGS_PER_ALLOC_MAX		\
	(VMEM_SEGS_PER_SPAN_CREATE + VMEM_SEGS_PER_MIDDLE_ALLOC)

/*
 * The segment structures themselves are allocated from vmem_seg_arena, so
 * we have a recursion problem when vmem_seg_arena needs to populate itself.
 * We address this by working out the maximum number of segment structures
 * this act will require, and multiplying by the maximum number of threads
 * that we'll allow to do it simultaneously.
 *
 * The worst-case segment consumption to populate vmem_seg_arena is as
 * follows (depicted as a stack trace to indicate why events are occurring):
 *
 * (In order to lower the fragmentation in the heap_arena, we specify a
 * minimum import size for the vmem_metadata_arena which is the same size
 * as the kmem_va quantum cache allocations.  This causes the worst-case
 * allocation from the vmem_metadata_arena to be 3 segments.)
 *
 * vmem_alloc(vmem_seg_arena)		-> 2 segs (span create + exact alloc)
 *  segkmem_alloc(vmem_metadata_arena)
 *   vmem_alloc(vmem_metadata_arena)	-> 3 segs (span create + left alloc)
 *    vmem_alloc(heap_arena)		-> 1 seg (left alloc)
 *   page_create()
 *   hat_memload()
 *    kmem_cache_alloc()
 *     kmem_slab_create()
 *      vmem_alloc(hat_memload_arena)	-> 2 segs (span create + exact alloc)
 *       segkmem_alloc(heap_arena)
 *        vmem_alloc(heap_arena)	-> 1 seg (left alloc)
 *        page_create()
 *        hat_memload()			-> (hat layer won't recurse further)
 *
 * The worst-case consumption for each arena is 3 segment structures.
 * Of course, a 3-seg reserve could easily be blown by multiple threads.
 * Therefore, we serialize all allocations from vmem_seg_arena (which is OK
 * because they're rare).  We cannot allow a non-blocking allocation to get
 * tied up behind a blocking allocation, however, so we use separate locks
 * for VM_SLEEP and VM_NOSLEEP allocations.  In addition, if the system is
 * panicking then we must keep enough resources for panic_thread to do its
 * work.  Thus we have at most three threads trying to allocate from
 * vmem_seg_arena, and each thread consumes at most three segment structures,
 * so we must maintain a 9-seg reserve.
 */
#define	VMEM_POPULATE_RESERVE	9

/*
 * vmem_populate() ensures that each arena has VMEM_MINFREE seg structures
 * so that it can satisfy the worst-case allocation *and* participate in
 * worst-case allocation from vmem_seg_arena.
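 * (That is, VMEM_POPULATE_RESERVE + VMEM_SEGS_PER_ALLOC_MAX = 9 + 4 = 13
 * segment structures per arena.)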
 */
#define	VMEM_MINFREE	(VMEM_POPULATE_RESERVE + VMEM_SEGS_PER_ALLOC_MAX)

static vmem_t vmem0[VMEM_INITIAL];
static vmem_t *vmem_populator[VMEM_INITIAL];
static uint32_t vmem_id;
static uint32_t vmem_populators;
static vmem_seg_t vmem_seg0[VMEM_SEG_INITIAL];
static vmem_seg_t *vmem_segfree;
static kmutex_t vmem_list_lock;
static kmutex_t vmem_segfree_lock;
static kmutex_t vmem_sleep_lock;
static kmutex_t vmem_nosleep_lock;
static kmutex_t vmem_panic_lock;
static vmem_t *vmem_list;
static vmem_t *vmem_metadata_arena;
static vmem_t *vmem_seg_arena;
static vmem_t *vmem_hash_arena;
static vmem_t *vmem_vmem_arena;
static long vmem_update_interval = 15;	/* vmem_update() every 15 seconds */
uint32_t vmem_mtbf;	/* mean time between failures [default: off] */
size_t vmem_seg_size = sizeof (vmem_seg_t);

static vmem_kstat_t vmem_kstat_template = {
	{ "mem_inuse",		KSTAT_DATA_UINT64 },
	{ "mem_import",		KSTAT_DATA_UINT64 },
	{ "mem_total",		KSTAT_DATA_UINT64 },
	{ "vmem_source",	KSTAT_DATA_UINT32 },
	{ "alloc",		KSTAT_DATA_UINT64 },
	{ "free",		KSTAT_DATA_UINT64 },
	{ "wait",		KSTAT_DATA_UINT64 },
	{ "fail",		KSTAT_DATA_UINT64 },
	{ "lookup",		KSTAT_DATA_UINT64 },
	{ "search",		KSTAT_DATA_UINT64 },
	{ "populate_wait",	KSTAT_DATA_UINT64 },
	{ "populate_fail",	KSTAT_DATA_UINT64 },
	{ "contains",		KSTAT_DATA_UINT64 },
	{ "contains_search",	KSTAT_DATA_UINT64 },
};

/*
 * Insert/delete from arena list (type 'a') or next-of-kin list (type 'k').
 */
#define	VMEM_INSERT(vprev, vsp, type)				\
{								\
	vmem_seg_t *vnext = (vprev)->vs_##type##next;		\
	(vsp)->vs_##type##next = (vnext);			\
	(vsp)->vs_##type##prev = (vprev);			\
	(vprev)->vs_##type##next = (vsp);			\
	(vnext)->vs_##type##prev = (vsp);			\
}

#define	VMEM_DELETE(vsp, type)					\
{								\
	vmem_seg_t *vprev = (vsp)->vs_##type##prev;		\
	vmem_seg_t *vnext = (vsp)->vs_##type##next;		\
	(vprev)->vs_##type##next = (vnext);			\
	(vnext)->vs_##type##prev = (vprev);			\
}

/*
 * Get a vmem_seg_t from the global segfree list.
 */
static vmem_seg_t *
vmem_getseg_global(void)
{
	vmem_seg_t *vsp;

	mutex_enter(&vmem_segfree_lock);
	if ((vsp = vmem_segfree) != NULL)
		vmem_segfree = vsp->vs_knext;
	mutex_exit(&vmem_segfree_lock);

	return (vsp);
}

/*
 * Put a vmem_seg_t on the global segfree list.
 */
static void
vmem_putseg_global(vmem_seg_t *vsp)
{
	mutex_enter(&vmem_segfree_lock);
	vsp->vs_knext = vmem_segfree;
	vmem_segfree = vsp;
	mutex_exit(&vmem_segfree_lock);
}

/*
 * Get a vmem_seg_t from vmp's segfree list.
 */
static vmem_seg_t *
vmem_getseg(vmem_t *vmp)
{
	vmem_seg_t *vsp;

	ASSERT(vmp->vm_nsegfree > 0);

	vsp = vmp->vm_segfree;
	vmp->vm_segfree = vsp->vs_knext;
	vmp->vm_nsegfree--;

	return (vsp);
}

/*
 * Put a vmem_seg_t on vmp's segfree list.
 */
static void
vmem_putseg(vmem_t *vmp, vmem_seg_t *vsp)
{
	vsp->vs_knext = vmp->vm_segfree;
	vmp->vm_segfree = vsp;
	vmp->vm_nsegfree++;
}

/*
 * Add vsp to the appropriate freelist.
 */
static void
vmem_freelist_insert(vmem_t *vmp, vmem_seg_t *vsp)
{
	vmem_seg_t *vprev;

	ASSERT(*VMEM_HASH(vmp, vsp->vs_start) != vsp);

	vprev = (vmem_seg_t *)&vmp->vm_freelist[highbit(VS_SIZE(vsp)) - 1];
	vsp->vs_type = VMEM_FREE;
	vmp->vm_freemap |= VS_SIZE(vprev);
	VMEM_INSERT(vprev, vsp, k);

	cv_broadcast(&vmp->vm_cv);
}

/*
 * Take vsp from the freelist.
 */
static void
vmem_freelist_delete(vmem_t *vmp, vmem_seg_t *vsp)
{
	ASSERT(*VMEM_HASH(vmp, vsp->vs_start) != vsp);
	ASSERT(vsp->vs_type == VMEM_FREE);

	if (vsp->vs_knext->vs_start == 0 && vsp->vs_kprev->vs_start == 0) {
		/*
		 * The segments on both sides of 'vsp' are freelist heads,
		 * so taking vsp leaves the freelist at vsp->vs_kprev empty.
		 */
		ASSERT(vmp->vm_freemap & VS_SIZE(vsp->vs_kprev));
		vmp->vm_freemap ^= VS_SIZE(vsp->vs_kprev);
	}
	VMEM_DELETE(vsp, k);
}

/*
 * Add vsp to the allocated-segment hash table and update kstats.
 */
static void
vmem_hash_insert(vmem_t *vmp, vmem_seg_t *vsp)
{
	vmem_seg_t **bucket;

	vsp->vs_type = VMEM_ALLOC;
	bucket = VMEM_HASH(vmp, vsp->vs_start);
	vsp->vs_knext = *bucket;
	*bucket = vsp;

	if (vmem_seg_size == sizeof (vmem_seg_t)) {
		vsp->vs_depth = (uint8_t)getpcstack(vsp->vs_stack,
		    VMEM_STACK_DEPTH);
		vsp->vs_thread = curthread;
		vsp->vs_timestamp = gethrtime();
	} else {
		vsp->vs_depth = 0;
	}

	vmp->vm_kstat.vk_alloc.value.ui64++;
	vmp->vm_kstat.vk_mem_inuse.value.ui64 += VS_SIZE(vsp);
}

/*
 * Remove vsp from the allocated-segment hash table and update kstats.
 */
static vmem_seg_t *
vmem_hash_delete(vmem_t *vmp, uintptr_t addr, size_t size)
{
	vmem_seg_t *vsp, **prev_vspp;

	prev_vspp = VMEM_HASH(vmp, addr);
	while ((vsp = *prev_vspp) != NULL) {
		if (vsp->vs_start == addr) {
			*prev_vspp = vsp->vs_knext;
			break;
		}
		vmp->vm_kstat.vk_lookup.value.ui64++;
		prev_vspp = &vsp->vs_knext;
	}

	if (vsp == NULL)
		panic("vmem_hash_delete(%p, %lx, %lu): bad free",
		    vmp, addr, size);
	if (VS_SIZE(vsp) != size)
		panic("vmem_hash_delete(%p, %lx, %lu): wrong size (expect %lu)",
		    vmp, addr, size, VS_SIZE(vsp));

	vmp->vm_kstat.vk_free.value.ui64++;
	vmp->vm_kstat.vk_mem_inuse.value.ui64 -= size;

	return (vsp);
}

/*
 * Create a segment spanning the range [start, end) and add it to the arena.
 */
static vmem_seg_t *
vmem_seg_create(vmem_t *vmp, vmem_seg_t *vprev, uintptr_t start, uintptr_t end)
{
	vmem_seg_t *newseg = vmem_getseg(vmp);

	newseg->vs_start = start;
	newseg->vs_end = end;
	newseg->vs_type = 0;
	newseg->vs_import = 0;

	VMEM_INSERT(vprev, newseg, a);

	return (newseg);
}

/*
 * Remove segment vsp from the arena.
 */
static void
vmem_seg_destroy(vmem_t *vmp, vmem_seg_t *vsp)
{
	ASSERT(vsp->vs_type != VMEM_ROTOR);
	VMEM_DELETE(vsp, a);

	vmem_putseg(vmp, vsp);
}

/*
 * Add the span [vaddr, vaddr + size) to vmp and update kstats.
 */
static vmem_seg_t *
vmem_span_create(vmem_t *vmp, void *vaddr, size_t size, uint8_t import)
{
	vmem_seg_t *newseg, *span;
	uintptr_t start = (uintptr_t)vaddr;
	uintptr_t end = start + size;

	ASSERT(MUTEX_HELD(&vmp->vm_lock));

	if ((start | end) & (vmp->vm_quantum - 1))
		panic("vmem_span_create(%p, %p, %lu): misaligned",
		    vmp, vaddr, size);

	span = vmem_seg_create(vmp, vmp->vm_seg0.vs_aprev, start, end);
	span->vs_type = VMEM_SPAN;
	span->vs_import = import;
	VMEM_INSERT(vmp->vm_seg0.vs_kprev, span, k);

	newseg = vmem_seg_create(vmp, span, start, end);
	vmem_freelist_insert(vmp, newseg);

	if (import)
		vmp->vm_kstat.vk_mem_import.value.ui64 += size;
	vmp->vm_kstat.vk_mem_total.value.ui64 += size;

	return (newseg);
}

/*
 * Remove span vsp from vmp and update kstats.
 */
static void
vmem_span_destroy(vmem_t *vmp, vmem_seg_t *vsp)
{
	vmem_seg_t *span = vsp->vs_aprev;
	size_t size = VS_SIZE(vsp);

	ASSERT(MUTEX_HELD(&vmp->vm_lock));
	ASSERT(span->vs_type == VMEM_SPAN);

	if (span->vs_import)
		vmp->vm_kstat.vk_mem_import.value.ui64 -= size;
	vmp->vm_kstat.vk_mem_total.value.ui64 -= size;

	VMEM_DELETE(span, k);

	vmem_seg_destroy(vmp, vsp);
	vmem_seg_destroy(vmp, span);
}

/*
 * Allocate the subrange [addr, addr + size) from segment vsp.
 * If there are leftovers on either side, place them on the freelist.
 * Returns a pointer to the segment representing [addr, addr + size).
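 * For example (illustrative), allocating [b, c) from the middle of a free
 * segment [a, d) leaves two remainders, [a, b) and [c, d), each of which
 * goes back on the appropriate freelist.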
 */
static vmem_seg_t *
vmem_seg_alloc(vmem_t *vmp, vmem_seg_t *vsp, uintptr_t addr, size_t size)
{
	uintptr_t vs_start = vsp->vs_start;
	uintptr_t vs_end = vsp->vs_end;
	size_t vs_size = vs_end - vs_start;
	size_t realsize = P2ROUNDUP(size, vmp->vm_quantum);
	uintptr_t addr_end = addr + realsize;

	ASSERT(P2PHASE(vs_start, vmp->vm_quantum) == 0);
	ASSERT(P2PHASE(addr, vmp->vm_quantum) == 0);
	ASSERT(vsp->vs_type == VMEM_FREE);
	ASSERT(addr >= vs_start && addr_end - 1 <= vs_end - 1);
	ASSERT(addr - 1 <= addr_end - 1);

	/*
	 * If we're allocating from the start of the segment, and the
	 * remainder will be on the same freelist, we can save quite
	 * a bit of work.
	 */
	if (P2SAMEHIGHBIT(vs_size, vs_size - realsize) && addr == vs_start) {
		ASSERT(highbit(vs_size) == highbit(vs_size - realsize));
		vsp->vs_start = addr_end;
		vsp = vmem_seg_create(vmp, vsp->vs_aprev, addr, addr + size);
		vmem_hash_insert(vmp, vsp);
		return (vsp);
	}

	vmem_freelist_delete(vmp, vsp);

	if (vs_end != addr_end)
		vmem_freelist_insert(vmp,
		    vmem_seg_create(vmp, vsp, addr_end, vs_end));

	if (vs_start != addr)
		vmem_freelist_insert(vmp,
		    vmem_seg_create(vmp, vsp->vs_aprev, vs_start, addr));

	vsp->vs_start = addr;
	vsp->vs_end = addr + size;

	vmem_hash_insert(vmp, vsp);
	return (vsp);
}

/*
 * Returns 1 if we are populating, 0 otherwise.
 * Call it if we want to prevent recursion from HAT.
 */
int
vmem_is_populator()
{
	return (mutex_owner(&vmem_sleep_lock) == curthread ||
	    mutex_owner(&vmem_nosleep_lock) == curthread ||
	    mutex_owner(&vmem_panic_lock) == curthread);
}

/*
 * Populate vmp's segfree list with VMEM_MINFREE vmem_seg_t structures.
 */
static int
vmem_populate(vmem_t *vmp, int vmflag)
{
	char *p;
	vmem_seg_t *vsp;
	ssize_t nseg;
	size_t size;
	kmutex_t *lp;
	int i;

	while (vmp->vm_nsegfree < VMEM_MINFREE &&
	    (vsp = vmem_getseg_global()) != NULL)
		vmem_putseg(vmp, vsp);

	if (vmp->vm_nsegfree >= VMEM_MINFREE)
		return (1);

	/*
	 * If we're already populating, tap the reserve.
	 */
	if (vmem_is_populator()) {
		ASSERT(vmp->vm_cflags & VMC_POPULATOR);
		return (1);
	}

	mutex_exit(&vmp->vm_lock);

	if (panic_thread == curthread)
		lp = &vmem_panic_lock;
	else if (vmflag & VM_NOSLEEP)
		lp = &vmem_nosleep_lock;
	else
		lp = &vmem_sleep_lock;

	mutex_enter(lp);

	nseg = VMEM_MINFREE + vmem_populators * VMEM_POPULATE_RESERVE;
	size = P2ROUNDUP(nseg * vmem_seg_size, vmem_seg_arena->vm_quantum);
	nseg = size / vmem_seg_size;

	/*
	 * The following vmem_alloc() may need to populate vmem_seg_arena
	 * and all the things it imports from.  When doing so, it will tap
	 * each arena's reserve to prevent recursion (see the block comment
	 * above the definition of VMEM_POPULATE_RESERVE).
	 */
	p = vmem_alloc(vmem_seg_arena, size, vmflag & VM_KMFLAGS);
	if (p == NULL) {
		mutex_exit(lp);
		mutex_enter(&vmp->vm_lock);
		vmp->vm_kstat.vk_populate_fail.value.ui64++;
		return (0);
	}

	/*
	 * Restock the arenas that may have been depleted during population.
	 */
	for (i = 0; i < vmem_populators; i++) {
		mutex_enter(&vmem_populator[i]->vm_lock);
		while (vmem_populator[i]->vm_nsegfree < VMEM_POPULATE_RESERVE)
			vmem_putseg(vmem_populator[i],
			    (vmem_seg_t *)(p + --nseg * vmem_seg_size));
		mutex_exit(&vmem_populator[i]->vm_lock);
	}

	mutex_exit(lp);
	mutex_enter(&vmp->vm_lock);

	/*
	 * Now take our own segments.
	 */
	ASSERT(nseg >= VMEM_MINFREE);
	while (vmp->vm_nsegfree < VMEM_MINFREE)
		vmem_putseg(vmp, (vmem_seg_t *)(p + --nseg * vmem_seg_size));

	/*
	 * Give the remainder to charity.
	 */
	while (nseg > 0)
		vmem_putseg_global((vmem_seg_t *)(p + --nseg * vmem_seg_size));

	return (1);
}

/*
 * Advance a walker from its previous position to 'afterme'.
 * Note: may drop and reacquire vmp->vm_lock.
 */
static void
vmem_advance(vmem_t *vmp, vmem_seg_t *walker, vmem_seg_t *afterme)
{
	vmem_seg_t *vprev = walker->vs_aprev;
	vmem_seg_t *vnext = walker->vs_anext;
	vmem_seg_t *vsp = NULL;

	VMEM_DELETE(walker, a);

	if (afterme != NULL)
		VMEM_INSERT(afterme, walker, a);

	/*
	 * The walker segment's presence may have prevented its neighbors
	 * from coalescing.  If so, coalesce them now.
	 */
	if (vprev->vs_type == VMEM_FREE) {
		if (vnext->vs_type == VMEM_FREE) {
			ASSERT(vprev->vs_end == vnext->vs_start);
			vmem_freelist_delete(vmp, vnext);
			vmem_freelist_delete(vmp, vprev);
			vprev->vs_end = vnext->vs_end;
			vmem_freelist_insert(vmp, vprev);
			vmem_seg_destroy(vmp, vnext);
		}
		vsp = vprev;
	} else if (vnext->vs_type == VMEM_FREE) {
		vsp = vnext;
	}

	/*
	 * vsp could represent a complete imported span,
	 * in which case we must return it to the source.
	 */
	if (vsp != NULL && vsp->vs_aprev->vs_import &&
	    vmp->vm_source_free != NULL &&
	    vsp->vs_aprev->vs_type == VMEM_SPAN &&
	    vsp->vs_anext->vs_type == VMEM_SPAN) {
		void *vaddr = (void *)vsp->vs_start;
		size_t size = VS_SIZE(vsp);
		ASSERT(size == VS_SIZE(vsp->vs_aprev));
		vmem_freelist_delete(vmp, vsp);
		vmem_span_destroy(vmp, vsp);
		mutex_exit(&vmp->vm_lock);
		vmp->vm_source_free(vmp->vm_source, vaddr, size);
		mutex_enter(&vmp->vm_lock);
	}
}

/*
 * VM_NEXTFIT allocations deliberately cycle through all virtual addresses
 * in an arena, so that we avoid reusing addresses for as long as possible.
 * This helps to catch use-after-free bugs.  It's also the perfect policy
 * for allocating things like process IDs, where we want to cycle through
 * all values in order.
 */
static void *
vmem_nextfit_alloc(vmem_t *vmp, size_t size, int vmflag)
{
	vmem_seg_t *vsp, *rotor;
	uintptr_t addr;
	size_t realsize = P2ROUNDUP(size, vmp->vm_quantum);
	size_t vs_size;

	mutex_enter(&vmp->vm_lock);

	if (vmp->vm_nsegfree < VMEM_MINFREE && !vmem_populate(vmp, vmflag)) {
		mutex_exit(&vmp->vm_lock);
		return (NULL);
	}

	/*
	 * The common case is that the segment right after the rotor is free,
	 * and large enough that extracting 'size' bytes won't change which
	 * freelist it's on.  In this case we can avoid a *lot* of work.
	 * Instead of the normal vmem_seg_alloc(), we just advance the start
	 * address of the victim segment.  Instead of moving the rotor, we
	 * create the new segment structure *behind the rotor*, which has
	 * the same effect.  And finally, we know we don't have to coalesce
	 * the rotor's neighbors because the new segment lies between them.
	 */
	rotor = &vmp->vm_rotor;
	vsp = rotor->vs_anext;
	if (vsp->vs_type == VMEM_FREE && (vs_size = VS_SIZE(vsp)) > realsize &&
	    P2SAMEHIGHBIT(vs_size, vs_size - realsize)) {
		ASSERT(highbit(vs_size) == highbit(vs_size - realsize));
		addr = vsp->vs_start;
		vsp->vs_start = addr + realsize;
		vmem_hash_insert(vmp,
		    vmem_seg_create(vmp, rotor->vs_aprev, addr, addr + size));
		mutex_exit(&vmp->vm_lock);
		return ((void *)addr);
	}

	/*
	 * Starting at the rotor, look for a segment large enough to
	 * satisfy the allocation.
	 */
	for (;;) {
		vmp->vm_kstat.vk_search.value.ui64++;
		if (vsp->vs_type == VMEM_FREE && VS_SIZE(vsp) >= size)
			break;
		vsp = vsp->vs_anext;
		if (vsp == rotor) {
			/*
			 * We've come full circle.  One possibility is that
			 * there's actually enough space, but the rotor itself
			 * is preventing the allocation from succeeding because
			 * it's sitting between two free segments.  Therefore,
			 * we advance the rotor and see if that liberates a
			 * suitable segment.
			 */
			vmem_advance(vmp, rotor, rotor->vs_anext);
			vsp = rotor->vs_aprev;
			if (vsp->vs_type == VMEM_FREE && VS_SIZE(vsp) >= size)
				break;
			/*
			 * If there's a lower arena we can import from, or it's
			 * a VM_NOSLEEP allocation, let vmem_xalloc() handle it.
			 * Otherwise, wait until another thread frees something.
			 */
			if (vmp->vm_source_alloc != NULL ||
			    (vmflag & VM_NOSLEEP)) {
				mutex_exit(&vmp->vm_lock);
				return (vmem_xalloc(vmp, size, vmp->vm_quantum,
				    0, 0, NULL, NULL, vmflag & VM_KMFLAGS));
			}
			vmp->vm_kstat.vk_wait.value.ui64++;
			cv_wait(&vmp->vm_cv, &vmp->vm_lock);
			vsp = rotor->vs_anext;
		}
	}

	/*
	 * We found a segment.  Extract enough space to satisfy the allocation.
	 */
	addr = vsp->vs_start;
	vsp = vmem_seg_alloc(vmp, vsp, addr, size);
	ASSERT(vsp->vs_type == VMEM_ALLOC &&
	    vsp->vs_start == addr && vsp->vs_end == addr + size);

	/*
	 * Advance the rotor to right after the newly-allocated segment.
	 * That's where the next VM_NEXTFIT allocation will begin searching.
	 */
	vmem_advance(vmp, rotor, vsp);
	mutex_exit(&vmp->vm_lock);
	return ((void *)addr);
}

/*
 * Checks if vmp is guaranteed to have a size-byte buffer somewhere on its
 * freelist.  If size is not a power-of-2, it can return a false-negative.
 *
 * Used to decide if a newly imported span is superfluous after re-acquiring
 * the arena lock.
 */
static int
vmem_canalloc(vmem_t *vmp, size_t size)
{
	int hb;
	int flist = 0;
	ASSERT(MUTEX_HELD(&vmp->vm_lock));

	if ((size & (size - 1)) == 0)
		flist = lowbit(P2ALIGN(vmp->vm_freemap, size));
	else if ((hb = highbit(size)) < VMEM_FREELISTS)
		flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << hb));

	return (flist);
}

/*
 * Allocate size bytes at offset phase from an align boundary such that the
 * resulting segment [addr, addr + size) is a subset of [minaddr, maxaddr)
 * that does not straddle a nocross-aligned boundary.
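 *
 * A purely illustrative example (the arena name and numbers below are made
 * up, not taken from any caller in this file): a DVMA-style client that
 * wants 8K at offset 4K into a 64K-aligned block, without crossing a 1M
 * boundary and within [dvma_base, dvma_base + dvma_size), might call
 *
 *	addr = vmem_xalloc(foo_dvma_arena, 8192, 65536, 4096, 1048576,
 *	    dvma_base, (char *)dvma_base + dvma_size, VM_SLEEP);
 *
 * Here phase (4096) is less than align (65536), align and nocross are
 * powers of 2, and P2ROUNDUP(phase + size, align) <= nocross, so the
 * request is not overconstrained.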
 */
void *
vmem_xalloc(vmem_t *vmp, size_t size, size_t align_arg, size_t phase,
	size_t nocross, void *minaddr, void *maxaddr, int vmflag)
{
	vmem_seg_t *vsp;
	vmem_seg_t *vbest = NULL;
	uintptr_t addr, taddr, start, end;
	uintptr_t align = (align_arg != 0) ? align_arg : vmp->vm_quantum;
	void *vaddr, *xvaddr = NULL;
	size_t xsize;
	int hb, flist, resv;
	uint32_t mtbf;

	if ((align | phase | nocross) & (vmp->vm_quantum - 1))
		panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
		    "parameters not vm_quantum aligned",
		    (void *)vmp, size, align_arg, phase, nocross,
		    minaddr, maxaddr, vmflag);

	if (nocross != 0 &&
	    (align > nocross || P2ROUNDUP(phase + size, align) > nocross))
		panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
		    "overconstrained allocation",
		    (void *)vmp, size, align_arg, phase, nocross,
		    minaddr, maxaddr, vmflag);

	if (phase >= align || (align & (align - 1)) != 0 ||
	    (nocross & (nocross - 1)) != 0)
		panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
		    "parameters inconsistent or invalid",
		    (void *)vmp, size, align_arg, phase, nocross,
		    minaddr, maxaddr, vmflag);

	if ((mtbf = vmem_mtbf | vmp->vm_mtbf) != 0 && gethrtime() % mtbf == 0 &&
	    (vmflag & (VM_NOSLEEP | VM_PANIC)) == VM_NOSLEEP)
		return (NULL);

	mutex_enter(&vmp->vm_lock);
	for (;;) {
		if (vmp->vm_nsegfree < VMEM_MINFREE &&
		    !vmem_populate(vmp, vmflag))
			break;
do_alloc:
		/*
		 * highbit() returns the highest bit + 1, which is exactly
		 * what we want: we want to search the first freelist whose
		 * members are *definitely* large enough to satisfy our
		 * allocation.  However, there are certain cases in which we
		 * want to look at the next-smallest freelist (which *might*
		 * be able to satisfy the allocation):
		 *
		 * (1)	The size is exactly a power of 2, in which case
		 *	the smaller freelist is always big enough;
		 *
		 * (2)	All other freelists are empty;
		 *
		 * (3)	We're in the highest possible freelist, which is
		 *	always empty (e.g. the 4GB freelist on 32-bit systems);
		 *
		 * (4)	We're doing a best-fit or first-fit allocation.
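		 *
		 * For example (illustrative), a 1000-byte VM_BESTFIT request
		 * computes hb = highbit(1000) = 10, decrements it to 9, and
		 * so starts its scan at the 512..1023-byte freelist (when
		 * non-empty) instead of the 1024..2047-byte one.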
		 */
		if ((size & (size - 1)) == 0) {
			flist = lowbit(P2ALIGN(vmp->vm_freemap, size));
		} else {
			hb = highbit(size);
			if ((vmp->vm_freemap >> hb) == 0 ||
			    hb == VMEM_FREELISTS ||
			    (vmflag & (VM_BESTFIT | VM_FIRSTFIT)))
				hb--;
			flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << hb));
		}

		for (vbest = NULL, vsp = (flist == 0) ? NULL :
		    vmp->vm_freelist[flist - 1].vs_knext;
		    vsp != NULL; vsp = vsp->vs_knext) {
			vmp->vm_kstat.vk_search.value.ui64++;
			if (vsp->vs_start == 0) {
				/*
				 * We're moving up to a larger freelist,
				 * so if we've already found a candidate,
				 * the fit can't possibly get any better.
				 */
				if (vbest != NULL)
					break;
				/*
				 * Find the next non-empty freelist.
				 */
				flist = lowbit(P2ALIGN(vmp->vm_freemap,
				    VS_SIZE(vsp)));
				if (flist-- == 0)
					break;
				vsp = (vmem_seg_t *)&vmp->vm_freelist[flist];
				ASSERT(vsp->vs_knext->vs_type == VMEM_FREE);
				continue;
			}
			if (vsp->vs_end - 1 < (uintptr_t)minaddr)
				continue;
			if (vsp->vs_start > (uintptr_t)maxaddr - 1)
				continue;
			start = MAX(vsp->vs_start, (uintptr_t)minaddr);
			end = MIN(vsp->vs_end - 1, (uintptr_t)maxaddr - 1) + 1;
			taddr = P2PHASEUP(start, align, phase);
			if (P2CROSS(taddr, taddr + size - 1, nocross))
				taddr +=
				    P2ROUNDUP(P2NPHASE(taddr, nocross), align);
			if ((taddr - start) + size > end - start ||
			    (vbest != NULL && VS_SIZE(vsp) >= VS_SIZE(vbest)))
				continue;
			vbest = vsp;
			addr = taddr;
			if (!(vmflag & VM_BESTFIT) || VS_SIZE(vbest) == size)
				break;
		}
		if (vbest != NULL)
			break;
		ASSERT(xvaddr == NULL);
		if (size == 0)
			panic("vmem_xalloc(): size == 0");
		if (vmp->vm_source_alloc != NULL && nocross == 0 &&
		    minaddr == NULL && maxaddr == NULL) {
			size_t aneeded, asize;
			size_t aquantum = MAX(vmp->vm_quantum,
			    vmp->vm_source->vm_quantum);
			size_t aphase = phase;
			if (align > aquantum) {
				aphase = (P2PHASE(phase, aquantum) != 0) ?
                                    align - vmp->vm_quantum : align - aquantum;
                                ASSERT(aphase >= phase);
                        }
                        aneeded = MAX(size + aphase, vmp->vm_min_import);
                        asize = P2ROUNDUP(aneeded, aquantum);

                        /*
                         * Determine how many segment structures we'll consume.
                         * The calculation must be precise because if we're
                         * here on behalf of vmem_populate(), we are taking
                         * segments from a very limited reserve.
                         */
                        if (size == asize && !(vmp->vm_cflags & VMC_XALLOC))
                                resv = VMEM_SEGS_PER_SPAN_CREATE +
                                    VMEM_SEGS_PER_EXACT_ALLOC;
                        else if (phase == 0 &&
                            align <= vmp->vm_source->vm_quantum)
                                resv = VMEM_SEGS_PER_SPAN_CREATE +
                                    VMEM_SEGS_PER_LEFT_ALLOC;
                        else
                                resv = VMEM_SEGS_PER_ALLOC_MAX;

                        ASSERT(vmp->vm_nsegfree >= resv);
                        vmp->vm_nsegfree -= resv;       /* reserve our segs */
                        mutex_exit(&vmp->vm_lock);
                        if (vmp->vm_cflags & VMC_XALLOC) {
                                size_t oasize = asize;
                                vaddr = ((vmem_ximport_t *)
                                    vmp->vm_source_alloc)(vmp->vm_source,
                                    &asize, vmflag & VM_KMFLAGS);
                                ASSERT(asize >= oasize);
                                ASSERT(P2PHASE(asize,
                                    vmp->vm_source->vm_quantum) == 0);
                        } else {
                                vaddr = vmp->vm_source_alloc(vmp->vm_source,
                                    asize, vmflag & VM_KMFLAGS);
                        }
                        mutex_enter(&vmp->vm_lock);
                        vmp->vm_nsegfree += resv;       /* claim reservation */
                        aneeded = size + align - vmp->vm_quantum;
                        aneeded = P2ROUNDUP(aneeded, vmp->vm_quantum);
                        if (vaddr != NULL) {
                                /*
                                 * Since we dropped the vmem lock while
                                 * calling the import function, other
                                 * threads could have imported space
                                 * and made our import unnecessary.  In
                                 * order to save space, we return
                                 * excess imports immediately.
                                 */
                                if (asize > aneeded &&
                                    vmp->vm_source_free != NULL &&
                                    vmem_canalloc(vmp, aneeded)) {
                                        ASSERT(resv >=
                                            VMEM_SEGS_PER_MIDDLE_ALLOC);
                                        xvaddr = vaddr;
                                        xsize = asize;
                                        goto do_alloc;
                                }
                                vbest = vmem_span_create(vmp, vaddr, asize, 1);
                                addr = P2PHASEUP(vbest->vs_start, align, phase);
                                break;
                        } else if (vmem_canalloc(vmp, aneeded)) {
                                /*
                                 * Our import failed, but another thread
                                 * added sufficient free memory to the arena
                                 * to satisfy our request.  Go back and
                                 * grab it.
                                 */
                                ASSERT(resv >= VMEM_SEGS_PER_MIDDLE_ALLOC);
                                goto do_alloc;
                        }
                }

                /*
                 * If the requestor chooses to fail the allocation attempt
                 * rather than reap, wait, and retry, get out of the loop.
                 */
                if (vmflag & VM_ABORT)
                        break;
                mutex_exit(&vmp->vm_lock);
                if (vmp->vm_cflags & VMC_IDENTIFIER)
                        kmem_reap_idspace();
                else
                        kmem_reap();
                mutex_enter(&vmp->vm_lock);
                if (vmflag & VM_NOSLEEP)
                        break;
                vmp->vm_kstat.vk_wait.value.ui64++;
                cv_wait(&vmp->vm_cv, &vmp->vm_lock);
        }
        if (vbest != NULL) {
                ASSERT(vbest->vs_type == VMEM_FREE);
                ASSERT(vbest->vs_knext != vbest);
                (void) vmem_seg_alloc(vmp, vbest, addr, size);
                mutex_exit(&vmp->vm_lock);
                if (xvaddr)
                        vmp->vm_source_free(vmp->vm_source, xvaddr, xsize);
                ASSERT(P2PHASE(addr, align) == phase);
                ASSERT(!P2CROSS(addr, addr + size - 1, nocross));
                ASSERT(addr >= (uintptr_t)minaddr);
                ASSERT(addr + size - 1 <= (uintptr_t)maxaddr - 1);
                return ((void *)addr);
        }
        vmp->vm_kstat.vk_fail.value.ui64++;
        mutex_exit(&vmp->vm_lock);
        if (vmflag & VM_PANIC)
                panic("vmem_xalloc(%p, %lu, %lu, %lu, %lu, %p, %p, %x): "
                    "cannot satisfy mandatory allocation",
                    (void *)vmp, size, align_arg, phase, nocross,
                    minaddr, maxaddr, vmflag);
        ASSERT(xvaddr == NULL);
        return (NULL);
}
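
/*
 * Example (illustrative sketch; the arena name and sizes are assumed, not
 * part of this file): allocate 256 bytes that are 64-byte aligned and do
 * not cross a 4K boundary, then release them with vmem_xfree(), since the
 * constrained allocation bypassed the quantum caches:
 *
 *      void *p = vmem_xalloc(some_arena, 256, 64, 0, 4096,
 *          NULL, NULL, VM_SLEEP | VM_BESTFIT);
 *      ...
 *      vmem_xfree(some_arena, p, 256);
 */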

/*
 * Free the segment [vaddr, vaddr + size), where vaddr was a constrained
 * allocation.  vmem_xalloc() and vmem_xfree() must always be paired because
 * both routines bypass the quantum caches.
 */
void
vmem_xfree(vmem_t *vmp, void *vaddr, size_t size)
{
        vmem_seg_t *vsp, *vnext, *vprev;

        mutex_enter(&vmp->vm_lock);

        vsp = vmem_hash_delete(vmp, (uintptr_t)vaddr, size);
        vsp->vs_end = P2ROUNDUP(vsp->vs_end, vmp->vm_quantum);

        /*
         * Attempt to coalesce with the next segment.
         */
        vnext = vsp->vs_anext;
        if (vnext->vs_type == VMEM_FREE) {
                ASSERT(vsp->vs_end == vnext->vs_start);
                vmem_freelist_delete(vmp, vnext);
                vsp->vs_end = vnext->vs_end;
                vmem_seg_destroy(vmp, vnext);
        }

        /*
         * Attempt to coalesce with the previous segment.
         */
        vprev = vsp->vs_aprev;
        if (vprev->vs_type == VMEM_FREE) {
                ASSERT(vprev->vs_end == vsp->vs_start);
                vmem_freelist_delete(vmp, vprev);
                vprev->vs_end = vsp->vs_end;
                vmem_seg_destroy(vmp, vsp);
                vsp = vprev;
        }

        /*
         * If the entire span is free, return it to the source.
         */
        if (vsp->vs_aprev->vs_import && vmp->vm_source_free != NULL &&
            vsp->vs_aprev->vs_type == VMEM_SPAN &&
            vsp->vs_anext->vs_type == VMEM_SPAN) {
                vaddr = (void *)vsp->vs_start;
                size = VS_SIZE(vsp);
                ASSERT(size == VS_SIZE(vsp->vs_aprev));
                vmem_span_destroy(vmp, vsp);
                mutex_exit(&vmp->vm_lock);
                vmp->vm_source_free(vmp->vm_source, vaddr, size);
        } else {
                vmem_freelist_insert(vmp, vsp);
                mutex_exit(&vmp->vm_lock);
        }
}

/*
 * Allocate size bytes from arena vmp.  Returns the allocated address
 * on success, NULL on failure.  vmflag specifies VM_SLEEP or VM_NOSLEEP,
 * and may also specify best-fit, first-fit, or next-fit allocation policy
 * instead of the default instant-fit policy.  VM_SLEEP allocations are
 * guaranteed to succeed.
 */
void *
vmem_alloc(vmem_t *vmp, size_t size, int vmflag)
{
        vmem_seg_t *vsp;
        uintptr_t addr;
        int hb;
        int flist = 0;
        uint32_t mtbf;

        if (size - 1 < vmp->vm_qcache_max)
                return (kmem_cache_alloc(vmp->vm_qcache[(size - 1) >>
                    vmp->vm_qshift], vmflag & VM_KMFLAGS));

        if ((mtbf = vmem_mtbf | vmp->vm_mtbf) != 0 && gethrtime() % mtbf == 0 &&
            (vmflag & (VM_NOSLEEP | VM_PANIC)) == VM_NOSLEEP)
                return (NULL);

        if (vmflag & VM_NEXTFIT)
                return (vmem_nextfit_alloc(vmp, size, vmflag));

        if (vmflag & (VM_BESTFIT | VM_FIRSTFIT))
                return (vmem_xalloc(vmp, size, vmp->vm_quantum, 0, 0,
                    NULL, NULL, vmflag));

        /*
         * Unconstrained instant-fit allocation from the segment list.
         */
        mutex_enter(&vmp->vm_lock);

        if (vmp->vm_nsegfree >= VMEM_MINFREE || vmem_populate(vmp, vmflag)) {
                if ((size & (size - 1)) == 0)
                        flist = lowbit(P2ALIGN(vmp->vm_freemap, size));
                else if ((hb = highbit(size)) < VMEM_FREELISTS)
                        flist = lowbit(P2ALIGN(vmp->vm_freemap, 1UL << hb));
        }

        if (flist-- == 0) {
                mutex_exit(&vmp->vm_lock);
                return (vmem_xalloc(vmp, size, vmp->vm_quantum,
                    0, 0, NULL, NULL, vmflag));
        }

        ASSERT(size <= (1UL << flist));
        vsp = vmp->vm_freelist[flist].vs_knext;
        addr = vsp->vs_start;
        (void) vmem_seg_alloc(vmp, vsp, addr, size);
        mutex_exit(&vmp->vm_lock);
        return ((void *)addr);
}

/*
 * Free the segment [vaddr, vaddr + size).
 */
void
vmem_free(vmem_t *vmp, void *vaddr, size_t size)
{
        if (size - 1 < vmp->vm_qcache_max)
                kmem_cache_free(vmp->vm_qcache[(size - 1) >> vmp->vm_qshift],
                    vaddr);
        else
                vmem_xfree(vmp, vaddr, size);
}
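
/*
 * Example (illustrative sketch; the arena is assumed, not part of this
 * file): a VM_SLEEP allocation never returns NULL, while a VM_NOSLEEP
 * allocation must be checked by the caller.  Note that vmem is not
 * self-describing on free: the caller passes back the original size.
 *
 *      void *va = vmem_alloc(arena, 8192, VM_NOSLEEP);
 *      if (va == NULL)
 *              return (ENOMEM);
 *      ...
 *      vmem_free(arena, va, 8192);
 */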

/*
 * Determine whether arena vmp contains the segment [vaddr, vaddr + size).
 */
int
vmem_contains(vmem_t *vmp, void *vaddr, size_t size)
{
        uintptr_t start = (uintptr_t)vaddr;
        uintptr_t end = start + size;
        vmem_seg_t *vsp;
        vmem_seg_t *seg0 = &vmp->vm_seg0;

        mutex_enter(&vmp->vm_lock);
        vmp->vm_kstat.vk_contains.value.ui64++;
        for (vsp = seg0->vs_knext; vsp != seg0; vsp = vsp->vs_knext) {
                vmp->vm_kstat.vk_contains_search.value.ui64++;
                ASSERT(vsp->vs_type == VMEM_SPAN);
                if (start >= vsp->vs_start && end - 1 <= vsp->vs_end - 1)
                        break;
        }
        mutex_exit(&vmp->vm_lock);
        return (vsp != seg0);
}

/*
 * Add the span [vaddr, vaddr + size) to arena vmp.
 */
void *
vmem_add(vmem_t *vmp, void *vaddr, size_t size, int vmflag)
{
        if (vaddr == NULL || size == 0)
                panic("vmem_add(%p, %p, %lu): bad arguments", vmp, vaddr, size);

        ASSERT(!vmem_contains(vmp, vaddr, size));

        mutex_enter(&vmp->vm_lock);
        if (vmem_populate(vmp, vmflag))
                (void) vmem_span_create(vmp, vaddr, size, 0);
        else
                vaddr = NULL;
        mutex_exit(&vmp->vm_lock);
        return (vaddr);
}
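
/*
 * Example (illustrative sketch; the arena name and ranges are assumed,
 * not part of this file): an arena may be created with no initial span
 * and populated later with disjoint spans via vmem_add():
 *
 *      vmem_t *id_arena = vmem_create("example_id", NULL, 0, 1,
 *          NULL, NULL, NULL, 0, VM_SLEEP | VMC_IDENTIFIER);
 *      (void) vmem_add(id_arena, (void *)100, 50, VM_SLEEP);
 *      (void) vmem_add(id_arena, (void *)400, 100, VM_SLEEP);
 */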

/*
 * Walk the vmp arena, applying func to each segment matching typemask.
 * If VMEM_REENTRANT is specified, the arena lock is dropped across each
 * call to func(); otherwise, it is held for the duration of vmem_walk()
 * to ensure a consistent snapshot.  Note that VMEM_REENTRANT callbacks
 * are *not* necessarily consistent, so they may only be used when a hint
 * is adequate.
 */
void
vmem_walk(vmem_t *vmp, int typemask,
        void (*func)(void *, void *, size_t), void *arg)
{
        vmem_seg_t *vsp;
        vmem_seg_t *seg0 = &vmp->vm_seg0;
        vmem_seg_t walker;

        if (typemask & VMEM_WALKER)
                return;

        bzero(&walker, sizeof (walker));
        walker.vs_type = VMEM_WALKER;

        mutex_enter(&vmp->vm_lock);
        VMEM_INSERT(seg0, &walker, a);
        for (vsp = seg0->vs_anext; vsp != seg0; vsp = vsp->vs_anext) {
                if (vsp->vs_type & typemask) {
                        void *start = (void *)vsp->vs_start;
                        size_t size = VS_SIZE(vsp);
                        if (typemask & VMEM_REENTRANT) {
                                vmem_advance(vmp, &walker, vsp);
                                mutex_exit(&vmp->vm_lock);
                                func(arg, start, size);
                                mutex_enter(&vmp->vm_lock);
                                vsp = &walker;
                        } else {
                                func(arg, start, size);
                        }
                }
        }
        vmem_advance(vmp, &walker, NULL);
        mutex_exit(&vmp->vm_lock);
}

/*
 * Return the total amount of memory whose type matches typemask.  Thus:
 *
 *      typemask VMEM_ALLOC yields total memory allocated (in use).
 *      typemask VMEM_FREE yields total memory free (available).
 *      typemask (VMEM_ALLOC | VMEM_FREE) yields total arena size.
 */
size_t
vmem_size(vmem_t *vmp, int typemask)
{
        uint64_t size = 0;

        if (typemask & VMEM_ALLOC)
                size += vmp->vm_kstat.vk_mem_inuse.value.ui64;
        if (typemask & VMEM_FREE)
                size += vmp->vm_kstat.vk_mem_total.value.ui64 -
                    vmp->vm_kstat.vk_mem_inuse.value.ui64;
        return ((size_t)size);
}
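
/*
 * Example (illustrative sketch; the callback, counter, and arena are
 * assumed, not part of this file): count the free segments in an arena,
 * holding the arena lock across the walk for a consistent snapshot:
 *
 *      static void
 *      count_seg(void *arg, void *start, size_t size)
 *      {
 *              (*(size_t *)arg)++;
 *      }
 *
 *      size_t nfree = 0;
 *      vmem_walk(arena, VMEM_FREE, count_seg, &nfree);
 */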

/*
 * Create an arena called name whose initial span is [base, base + size).
 * The arena's natural unit of currency is quantum, so vmem_alloc()
 * guarantees quantum-aligned results.  The arena may import new spans
 * by invoking afunc() on source, and may return those spans by invoking
 * ffunc() on source.  To make small allocations fast and scalable,
 * the arena offers high-performance caching for each integer multiple
 * of quantum up to qcache_max.
 */
static vmem_t *
vmem_create_common(const char *name, void *base, size_t size, size_t quantum,
        void *(*afunc)(vmem_t *, size_t, int),
        void (*ffunc)(vmem_t *, void *, size_t),
        vmem_t *source, size_t qcache_max, int vmflag)
{
        int i;
        size_t nqcache;
        vmem_t *vmp, *cur, **vmpp;
        vmem_seg_t *vsp;
        vmem_freelist_t *vfp;
        uint32_t id = atomic_add_32_nv(&vmem_id, 1);

        if (vmem_vmem_arena != NULL) {
                vmp = vmem_alloc(vmem_vmem_arena, sizeof (vmem_t),
                    vmflag & VM_KMFLAGS);
        } else {
                ASSERT(id <= VMEM_INITIAL);
                vmp = &vmem0[id - 1];
        }

        /* An identifier arena must inherit from another identifier arena */
        ASSERT(source == NULL || ((source->vm_cflags & VMC_IDENTIFIER) ==
            (vmflag & VMC_IDENTIFIER)));

        if (vmp == NULL)
                return (NULL);
        bzero(vmp, sizeof (vmem_t));

        (void) snprintf(vmp->vm_name, VMEM_NAMELEN, "%s", name);
        mutex_init(&vmp->vm_lock, NULL, MUTEX_DEFAULT, NULL);
        cv_init(&vmp->vm_cv, NULL, CV_DEFAULT, NULL);
        vmp->vm_cflags = vmflag;
        vmflag &= VM_KMFLAGS;

        vmp->vm_quantum = quantum;
        vmp->vm_qshift = highbit(quantum) - 1;
        nqcache = MIN(qcache_max >> vmp->vm_qshift, VMEM_NQCACHE_MAX);

        for (i = 0; i <= VMEM_FREELISTS; i++) {
                vfp = &vmp->vm_freelist[i];
                vfp->vs_end = 1UL << i;
                vfp->vs_knext = (vmem_seg_t *)(vfp + 1);
                vfp->vs_kprev = (vmem_seg_t *)(vfp - 1);
        }

        vmp->vm_freelist[0].vs_kprev = NULL;
        vmp->vm_freelist[VMEM_FREELISTS].vs_knext = NULL;
        vmp->vm_freelist[VMEM_FREELISTS].vs_end = 0;
        vmp->vm_hash_table = vmp->vm_hash0;
        vmp->vm_hash_mask = VMEM_HASH_INITIAL - 1;
        vmp->vm_hash_shift = highbit(vmp->vm_hash_mask);

        vsp = &vmp->vm_seg0;
        vsp->vs_anext = vsp;
        vsp->vs_aprev = vsp;
        vsp->vs_knext = vsp;
        vsp->vs_kprev = vsp;
        vsp->vs_type = VMEM_SPAN;

        vsp = &vmp->vm_rotor;
        vsp->vs_type = VMEM_ROTOR;
        VMEM_INSERT(&vmp->vm_seg0, vsp, a);

        bcopy(&vmem_kstat_template, &vmp->vm_kstat, sizeof (vmem_kstat_t));

        vmp->vm_id = id;
        if (source != NULL)
                vmp->vm_kstat.vk_source_id.value.ui32 = source->vm_id;
        vmp->vm_source = source;
        vmp->vm_source_alloc = afunc;
        vmp->vm_source_free = ffunc;

        /*
         * Some arenas (like vmem_metadata and kmem_metadata) cannot
         * use quantum caching to lower fragmentation.  Instead, we
         * increase their imports, giving a similar effect.
         */
        if (vmp->vm_cflags & VMC_NO_QCACHE) {
                vmp->vm_min_import =
                    VMEM_QCACHE_SLABSIZE(nqcache << vmp->vm_qshift);
                nqcache = 0;
        }

        if (nqcache != 0) {
                ASSERT(!(vmflag & VM_NOSLEEP));
                vmp->vm_qcache_max = nqcache << vmp->vm_qshift;
                for (i = 0; i < nqcache; i++) {
                        char buf[VMEM_NAMELEN + 21];
                        (void) sprintf(buf, "%s_%lu", vmp->vm_name,
                            (i + 1) * quantum);
                        vmp->vm_qcache[i] = kmem_cache_create(buf,
                            (i + 1) * quantum, quantum, NULL, NULL, NULL,
                            NULL, vmp, KMC_QCACHE | KMC_NOTOUCH);
                }
        }

        if ((vmp->vm_ksp = kstat_create("vmem", vmp->vm_id, vmp->vm_name,
            "vmem", KSTAT_TYPE_NAMED, sizeof (vmem_kstat_t) /
            sizeof (kstat_named_t), KSTAT_FLAG_VIRTUAL)) != NULL) {
                vmp->vm_ksp->ks_data = &vmp->vm_kstat;
                kstat_install(vmp->vm_ksp);
        }

        mutex_enter(&vmem_list_lock);
        vmpp = &vmem_list;
        while ((cur = *vmpp) != NULL)
                vmpp = &cur->vm_next;
        *vmpp = vmp;
        mutex_exit(&vmem_list_lock);

        if (vmp->vm_cflags & VMC_POPULATOR) {
                ASSERT(vmem_populators < VMEM_INITIAL);
                vmem_populator[atomic_add_32_nv(&vmem_populators, 1) - 1] = vmp;
                mutex_enter(&vmp->vm_lock);
                (void) vmem_populate(vmp, vmflag | VM_PANIC);
                mutex_exit(&vmp->vm_lock);
        }

        if ((base || size) && vmem_add(vmp, base, size, vmflag) == NULL) {
                vmem_destroy(vmp);
                return (NULL);
        }

        return (vmp);
}
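
/*
 * Example (illustrative sketch; the arena names are assumed, not part of
 * this file): create a page-quantum sub-arena that imports from an
 * existing parent arena and quantum-caches allocations up to 8 pages:
 *
 *      vmem_t *sub_arena = vmem_create("example_sub", NULL, 0, PAGESIZE,
 *          vmem_alloc, vmem_free, parent_arena, 8 * PAGESIZE, VM_SLEEP);
 */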

vmem_t *
vmem_xcreate(const char *name, void *base, size_t size, size_t quantum,
        vmem_ximport_t *afunc, vmem_free_t *ffunc, vmem_t *source,
        size_t qcache_max, int vmflag)
{
        ASSERT(!(vmflag & (VMC_POPULATOR | VMC_XALLOC)));
        vmflag &= ~(VMC_POPULATOR | VMC_XALLOC);

        return (vmem_create_common(name, base, size, quantum,
            (vmem_alloc_t *)afunc, ffunc, source, qcache_max,
            vmflag | VMC_XALLOC));
}

vmem_t *
vmem_create(const char *name, void *base, size_t size, size_t quantum,
        vmem_alloc_t *afunc, vmem_free_t *ffunc, vmem_t *source,
        size_t qcache_max, int vmflag)
{
        ASSERT(!(vmflag & VMC_XALLOC));
        vmflag &= ~VMC_XALLOC;

        return (vmem_create_common(name, base, size, quantum,
            afunc, ffunc, source, qcache_max, vmflag));
}

/*
 * Destroy arena vmp.
 */
void
vmem_destroy(vmem_t *vmp)
{
        vmem_t *cur, **vmpp;
        vmem_seg_t *seg0 = &vmp->vm_seg0;
        vmem_seg_t *vsp;
        size_t leaked;
        int i;

        mutex_enter(&vmem_list_lock);
        vmpp = &vmem_list;
        while ((cur = *vmpp) != vmp)
                vmpp = &cur->vm_next;
        *vmpp = vmp->vm_next;
        mutex_exit(&vmem_list_lock);

        for (i = 0; i < VMEM_NQCACHE_MAX; i++)
                if (vmp->vm_qcache[i])
                        kmem_cache_destroy(vmp->vm_qcache[i]);

        leaked = vmem_size(vmp, VMEM_ALLOC);
        if (leaked != 0)
                cmn_err(CE_WARN, "vmem_destroy('%s'): leaked %lu %s",
                    vmp->vm_name, leaked, (vmp->vm_cflags & VMC_IDENTIFIER) ?
                    "identifiers" : "bytes");

        if (vmp->vm_hash_table != vmp->vm_hash0)
                vmem_free(vmem_hash_arena, vmp->vm_hash_table,
                    (vmp->vm_hash_mask + 1) * sizeof (void *));

        /*
         * Give back the segment structures for anything that's left in the
         * arena, e.g. the primary spans and their free segments.
         */
        VMEM_DELETE(&vmp->vm_rotor, a);
        for (vsp = seg0->vs_anext; vsp != seg0; vsp = vsp->vs_anext)
                vmem_putseg_global(vsp);

        while (vmp->vm_nsegfree > 0)
                vmem_putseg_global(vmem_getseg(vmp));

        kstat_delete(vmp->vm_ksp);

        mutex_destroy(&vmp->vm_lock);
        cv_destroy(&vmp->vm_cv);
        vmem_free(vmem_vmem_arena, vmp, sizeof (vmem_t));
}

/*
 * Resize vmp's hash table to keep the average lookup depth near 1.0.
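 *
 * As a worked example (segment count assumed for illustration): with
 * roughly 1000 live segments, 3 * nseg + 4 is 3004, highbit(3004) is 12,
 * so the target table size becomes 1 << 10 = 1024 buckets, i.e. about
 * one segment per hash bucket.  The resize is skipped while the target
 * stays within a factor of two of the current size to avoid thrashing.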
 */
static void
vmem_hash_rescale(vmem_t *vmp)
{
        vmem_seg_t **old_table, **new_table, *vsp;
        size_t old_size, new_size, h, nseg;

        nseg = (size_t)(vmp->vm_kstat.vk_alloc.value.ui64 -
            vmp->vm_kstat.vk_free.value.ui64);

        new_size = MAX(VMEM_HASH_INITIAL, 1 << (highbit(3 * nseg + 4) - 2));
        old_size = vmp->vm_hash_mask + 1;

        if ((old_size >> 1) <= new_size && new_size <= (old_size << 1))
                return;

        new_table = vmem_alloc(vmem_hash_arena, new_size * sizeof (void *),
            VM_NOSLEEP);
        if (new_table == NULL)
                return;
        bzero(new_table, new_size * sizeof (void *));

        mutex_enter(&vmp->vm_lock);

        old_size = vmp->vm_hash_mask + 1;
        old_table = vmp->vm_hash_table;

        vmp->vm_hash_mask = new_size - 1;
        vmp->vm_hash_table = new_table;
        vmp->vm_hash_shift = highbit(vmp->vm_hash_mask);

        for (h = 0; h < old_size; h++) {
                vsp = old_table[h];
                while (vsp != NULL) {
                        uintptr_t addr = vsp->vs_start;
                        vmem_seg_t *next_vsp = vsp->vs_knext;
                        vmem_seg_t **hash_bucket = VMEM_HASH(vmp, addr);
                        vsp->vs_knext = *hash_bucket;
                        *hash_bucket = vsp;
                        vsp = next_vsp;
                }
        }

        mutex_exit(&vmp->vm_lock);

        if (old_table != vmp->vm_hash0)
                vmem_free(vmem_hash_arena, old_table,
                    old_size * sizeof (void *));
}

/*
 * Perform periodic maintenance on all vmem arenas.
 */
void
vmem_update(void *dummy)
{
        vmem_t *vmp;

        mutex_enter(&vmem_list_lock);
        for (vmp = vmem_list; vmp != NULL; vmp = vmp->vm_next) {
                /*
                 * If threads are waiting for resources, wake them up
                 * periodically so they can issue another kmem_reap()
                 * to reclaim resources cached by the slab allocator.
                 */
                cv_broadcast(&vmp->vm_cv);

                /*
                 * Rescale the hash table to keep the hash chains short.
                 */
                vmem_hash_rescale(vmp);
        }
        mutex_exit(&vmem_list_lock);

        (void) timeout(vmem_update, dummy, vmem_update_interval * hz);
}

/*
 * Prepare vmem for use.
 */
vmem_t *
vmem_init(const char *heap_name,
        void *heap_start, size_t heap_size, size_t heap_quantum,
        void *(*heap_alloc)(vmem_t *, size_t, int),
        void (*heap_free)(vmem_t *, void *, size_t))
{
        uint32_t id;
        int nseg = VMEM_SEG_INITIAL;
        vmem_t *heap;

        while (--nseg >= 0)
                vmem_putseg_global(&vmem_seg0[nseg]);

        heap = vmem_create(heap_name,
            heap_start, heap_size, heap_quantum,
            NULL, NULL, NULL, 0,
            VM_SLEEP | VMC_POPULATOR);

        vmem_metadata_arena = vmem_create("vmem_metadata",
            NULL, 0, heap_quantum,
            vmem_alloc, vmem_free, heap, 8 * heap_quantum,
            VM_SLEEP | VMC_POPULATOR | VMC_NO_QCACHE);

        vmem_seg_arena = vmem_create("vmem_seg",
            NULL, 0, heap_quantum,
            heap_alloc, heap_free, vmem_metadata_arena, 0,
            VM_SLEEP | VMC_POPULATOR);

        vmem_hash_arena = vmem_create("vmem_hash",
            NULL, 0, 8,
            heap_alloc, heap_free, vmem_metadata_arena, 0,
            VM_SLEEP);

        vmem_vmem_arena = vmem_create("vmem_vmem",
            vmem0, sizeof (vmem0), 1,
            heap_alloc, heap_free, vmem_metadata_arena, 0,
            VM_SLEEP);

        for (id = 0; id < vmem_id; id++)
                (void) vmem_xalloc(vmem_vmem_arena, sizeof (vmem_t),
                    1, 0, 0, &vmem0[id], &vmem0[id + 1],
                    VM_NOSLEEP | VM_BESTFIT | VM_PANIC);

        return (heap);
}
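
/*
 * Example (illustrative sketch; the heap bounds and backing functions are
 * assumed, not taken from this file): a platform's startup code might
 * establish the root heap arena with a call along these lines, using its
 * own page-level allocator as the import source:
 *
 *      heap_arena = vmem_init("heap", kernelheap, heapsize, PAGESIZE,
 *          segkmem_alloc, segkmem_free);
 */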