/*
 * CDDL HEADER START
 *
 * The contents of this file are subject to the terms of the
 * Common Development and Distribution License, Version 1.0 only
 * (the "License"). You may not use this file except in compliance
 * with the License.
 *
 * You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
 * or http://www.opensolaris.org/os/licensing.
 * See the License for the specific language governing permissions
 * and limitations under the License.
 *
 * When distributing Covered Code, include this CDDL HEADER in each
 * file and include the License file at usr/src/OPENSOLARIS.LICENSE.
 * If applicable, add the following below this CDDL HEADER, with the
 * fields enclosed by brackets "[]" replaced with your own identifying
 * information: Portions Copyright [yyyy] [name of copyright owner]
 *
 * CDDL HEADER END
 */
/*
 * Copyright 2005 Sun Microsystems, Inc. All rights reserved.
 * Use is subject to license terms.
 */

#pragma ident	"%Z%%M%	%I%	%E% SMI"

/*
 * Kernel task queues: general-purpose asynchronous task scheduling.
 *
 * A common problem in kernel programming is the need to schedule tasks
 * to be performed later, by another thread. There are several reasons
 * you may want or need to do this:
 *
 * (1) The task isn't time-critical, but your current code path is.
 *
 * (2) The task may require grabbing locks that you already hold.
 *
 * (3) The task may need to block (e.g. to wait for memory), but you
 *     cannot block in your current context.
 *
 * (4) Your code path can't complete because of some condition, but you can't
 *     sleep or fail, so you queue the task for later execution when the
 *     condition disappears.
 *
 * (5) You just want a simple way to launch multiple tasks in parallel.
 *
 * Task queues provide such a facility. In its simplest form (used when
 * performance is not a critical consideration) a task queue consists of a
 * single list of tasks, together with one or more threads to service the
 * list. There are some cases when this simple queue is not sufficient:
 *
 * (1) The task queues are very hot and there is a need to avoid data and lock
 *     contention over global resources.
 *
 * (2) Some tasks may depend on other tasks to complete, so they can't be put
 *     in the same list managed by the same thread.
 *
 * (3) Some tasks may block for a long time, and this should not block other
 *     tasks in the queue.
 *
 * To provide useful service in such cases we define a "dynamic task queue"
 * which has an individual thread for each of the tasks. These threads are
 * dynamically created as they are needed and destroyed when they are not in
 * use. The API for managing task pools is the same as for managing task
 * queues, with the exception of the taskq creation flag TASKQ_DYNAMIC, which
 * indicates that dynamic task pool behavior is desired.
 *
 * Dynamic task queues may also place tasks in the normal queue (called the
 * "backing queue") when the task pool runs out of resources. Users of task
 * queues may disallow such queued scheduling by specifying TQ_NOQUEUE in the
 * dispatch flags.
 *
 * The backing task queue is also used for scheduling internal tasks needed
 * for dynamic task queue maintenance.
 *
 * INTERFACES:
 *
 * taskq_t *taskq_create(name, nthreads, pri, minalloc, maxalloc, flags);
 *
 *    Create a taskq with specified properties.
 *    Possible 'flags':
 *
 *    TASKQ_DYNAMIC: Create a task pool for task management. If this flag is
 *        specified, 'nthreads' specifies the maximum number of threads in
 *        the task queue. Task execution order for dynamic task queues is
 *        not predictable.
 *
 *        If this flag is not specified (default case) a single-list task
 *        queue is created with 'nthreads' threads servicing it. Entries in
 *        this queue are managed by taskq_ent_alloc() and taskq_ent_free()
 *        which try to keep the task population between 'minalloc' and
 *        'maxalloc', but the latter limit is only advisory for TQ_SLEEP
 *        dispatches and the former limit is only advisory for TQ_NOALLOC
 *        dispatches. If TASKQ_PREPOPULATE is set in 'flags', the taskq will
 *        be prepopulated with 'minalloc' task structures.
 *
 *        Since non-DYNAMIC taskqs are queues, tasks are guaranteed to be
 *        executed in the order they are scheduled if nthreads == 1.
 *        If nthreads > 1, task execution order is not predictable.
 *
 *    TASKQ_PREPOPULATE: Prepopulate task queue with threads.
 *        Also prepopulate the task queue with 'minalloc' task structures.
 *
 *    TASKQ_CPR_SAFE: This flag specifies that users of the task queue will
 *        use their own protocol for handling CPR issues. This flag is not
 *        supported for DYNAMIC task queues.
 *
 *    The 'pri' field specifies the default priority for the threads that
 *    service all scheduled tasks.
 *
 * void taskq_destroy(tq):
 *
 *    Waits for any scheduled tasks to complete, then destroys the taskq.
 *    The caller should guarantee that no new tasks are scheduled in the
 *    closing taskq.
 *
 * taskqid_t taskq_dispatch(tq, func, arg, flags):
 *
 *    Dispatches the task "func(arg)" to the taskq. The 'flags' indicates
 *    whether the caller is willing to block for memory. The function returns
 *    an opaque value which is zero iff dispatch fails. If flags is TQ_NOSLEEP
 *    or TQ_NOALLOC and the task can't be dispatched, taskq_dispatch() fails
 *    and returns (taskqid_t)0.
 *
 *    ASSUMES: func != NULL.
 *
 *    Possible flags:
 *
 *    TQ_NOSLEEP: Do not wait for resources; may fail.
 *
 *    TQ_NOALLOC: Do not allocate memory; may fail. May only be used with
 *        non-dynamic task queues.
 *
 *    TQ_NOQUEUE: Do not enqueue the task if it can't be dispatched due to
 *        lack of available resources; fail instead. If this flag is not
 *        set, and the task pool is exhausted, the task may be scheduled in
 *        the backing queue. This flag may ONLY be used with dynamic task
 *        queues.
 *
 *        NOTE: This flag should always be used when a task queue is used
 *        for tasks that may depend on each other for completion.
 *        Enqueueing dependent tasks may create deadlocks.
 *
 *    TQ_SLEEP: May block waiting for resources. May still fail for
 *        dynamic task queues if TQ_NOQUEUE is also specified; otherwise it
 *        always succeeds.
 *
 *    NOTE: Dynamic task queues are much more likely to fail in
 *    taskq_dispatch() (especially if TQ_NOQUEUE was specified), so it is
 *    important to have backup strategies handling such failures (one such
 *    fallback is sketched below).
 *
 * void taskq_wait(tq):
 *
 *    Waits for all previously scheduled tasks to complete.
 *
 *    NOTE: It does not stop any new task dispatches.
 *          Do NOT call taskq_wait() from a task: it will cause deadlock.
 *
 * void taskq_suspend(tq)
 *
 *    Suspend all task execution. Tasks already scheduled for a dynamic task
 *    queue will still be executed, but all newly scheduled tasks will be
 *    suspended until taskq_resume() is called.
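 *
 *    For illustration, here is a minimal create/dispatch/wait/destroy
 *    sketch. The "mydrv" name and my_cb() are hypothetical, invented for
 *    this example only:
 *
 *        taskq_t *tq = taskq_create("mydrv_taskq", 1, minclsyspri,
 *            4, 16, TASKQ_PREPOPULATE);
 *        ...
 *        if (taskq_dispatch(tq, my_cb, arg, TQ_NOSLEEP) == (taskqid_t)0)
 *            my_cb(arg);         (backup strategy: run the task directly)
 *        ...
 *        taskq_wait(tq);         (drain the queue before teardown)
 *        taskq_destroy(tq);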
 *
 * int taskq_suspended(tq)
 *
 *    Returns 1 if taskq is suspended and 0 otherwise. The intended use is to
 *    ASSERT that the task queue is suspended.
 *
 * void taskq_resume(tq)
 *
 *    Resume task queue execution.
 *
 * int taskq_member(tq, thread)
 *
 *    Returns 1 if 'thread' belongs to taskq 'tq' and 0 otherwise. The
 *    intended use is to ASSERT that a given function is called in taskq
 *    context only.
 *
 * system_taskq
 *
 *    Global system-wide dynamic task queue for common uses. It may be used
 *    by any subsystem that needs to schedule tasks and does not need to
 *    manage its own task queues. It is initialized quite early during system
 *    boot.
 *
 * IMPLEMENTATION.
 *
 * This is a schematic representation of the task queue structures.
 *
 * taskq:
 *   +-------------+
 *   |tq_lock      | +---< taskq_ent_free()
 *   +-------------+ |
 *   |...          | | tqent:                  tqent:
 *   +-------------+ | +------------+          +------------+
 *   | tq_freelist |-->| tqent_next |--> ... ->| tqent_next |
 *   +-------------+   +------------+          +------------+
 *   |...          |   | ...        |          | ...        |
 *   +-------------+   +------------+          +------------+
 *   | tq_task     |    |
 *   |             |    +-------------->taskq_ent_alloc()
 * +--------------------------------------------------------------------------+
 * | |                     |            tqent                   tqent         |
 * | +---------------------+     +--> +------------+     +--> +------------+  |
 * | | ...                 |     |    | func, arg  |     |    | func, arg  |  |
 * +>+---------------------+ <---|-+  +------------+ <---|-+  +------------+  |
 *   | tq_taskq.tqent_next | ----+ |  | tqent_next | --->+ |  | tqent_next |--+
 *   +---------------------+       |  +------------+  ^    |  +------------+
 * +-| tq_task.tqent_prev  |       +--| tqent_prev |  |    +--| tqent_prev |  ^
 * | +---------------------+          +------------+  |       +------------+  |
 * | |...                  |          | ...        |  |       | ...        |  |
 * | +---------------------+          +------------+  |       +------------+  |
 * |                ^                                 |                       |
 * |                |                                 |                       |
 * +----------------+---------------------------------+       TQ_APPEND() ----+
 *   |              |                                 |
 *   |...           |                  taskq_thread()-----+
 *   +-------------+
 *   | tq_buckets |--+-------> [ NULL ] (for regular task queues)
 *   +-------------+ |
 *                   | DYNAMIC TASK QUEUES:
 *                   |
 *                   +-> taskq_bucket[nCPU]          taskq_bucket_dispatch()
 *                       +-------------------+                    ^
 *                  +--->| tqbucket_lock     |                    |
 *                  |    +-------------------+   +--------+      +--------+
 *                  |    | tqbucket_freelist |-->| tqent  |-->...| tqent  | ^
 *                  |    +-------------------+<--+--------+<--...+--------+ |
 *                  |    | ...               |   | thread |      | thread | |
 *                  |    +-------------------+   +--------+      +--------+ |
 *                  |    +-------------------+                              |
 * taskq_dispatch()-+--->| tqbucket_lock     |             TQ_APPEND()------+
 *     TQ_HASH()    |    +-------------------+   +--------+      +--------+
 *                  |    | tqbucket_freelist |-->| tqent  |-->...| tqent  |
 *                  |    +-------------------+<--+--------+<--...+--------+
 *                  |    | ...               |   | thread |      | thread |
 *                  |    +-------------------+   +--------+      +--------+
 *                  +---> ...
 *
 *
 * Task queues use the tq_task field to link new entries into the queue. The
 * queue is a circular doubly-linked list. Entries are put at the end of the
 * list with TQ_APPEND() and processed from the front of the list by
 * taskq_thread() in FIFO order. Task queue entries are cached in the free
 * list managed by the taskq_ent_alloc() and taskq_ent_free() functions.
 *
 * All threads used by task queues set the t_taskq field of the thread to
 * point to the task queue.
 *
 * Dynamic Task Queues Implementation.
 *
 * For dynamic task queues there is a 1-to-1 mapping between a thread and a
 * taskq_ent_t structure. Each entry is serviced by its own thread and each
 * thread is controlled by a single entry.
 *
 * Entries are distributed over a set of buckets. To avoid using modulo
 * arithmetic the number of buckets is 2^n and is determined as the nearest
 * power-of-two round-down of the number of CPUs in the system. The tunable
 * variable 'taskq_maxbuckets' limits the maximum number of buckets. Each
 * entry is attached to a bucket for its lifetime and can't migrate to other
 * buckets.
 *
 * Entries that have scheduled tasks are not placed in any list. The dispatch
 * function sets their "func" and "arg" fields and signals the corresponding
 * thread to execute the task. Once the thread executes the task it clears
 * the "func" field and places the entry on the bucket cache of free entries
 * pointed to by the "tqbucket_freelist" field. ALL entries on the free list
 * should have the "func" field equal to NULL.
 * The free list is a circular doubly-linked list identical in structure to
 * the tq_task list above, but entries are taken from it in LIFO order - the
 * last freed entry is the first to be allocated. The taskq_bucket_dispatch()
 * function gets the most recently used entry from the free list, sets its
 * "func" and "arg" fields and signals a worker thread.
 *
 * After executing each task a per-entry thread taskq_d_thread() places its
 * entry on the bucket free list and goes to a timed sleep. If it wakes up
 * without getting a new task it removes the entry from the free list and
 * destroys itself. The thread sleep time is controlled by the tunable
 * variable `taskq_thread_timeout'.
 *
 * Various statistics are kept in the bucket which allow for later analysis
 * of taskq usage patterns. Also, a global copy of taskq creation and death
 * statistics is kept in the global taskq data structure. Since thread
 * creation and death happen rarely, updating such global data does not
 * present a performance problem.
 *
 * NOTE: Threads are not bound to any CPU and there is absolutely no
 * association between the bucket and the actual thread CPU, so buckets are
 * used only to split resources and reduce resource contention. Having
 * threads attached to the CPU denoted by a bucket may reduce the number of
 * times the job switches between CPUs.
 *
 * The current algorithm creates a thread whenever a bucket has no free
 * entries. It would be nice to know how many threads are in the running
 * state and not create threads if all CPUs are busy with existing tasks,
 * but it is unclear how such a strategy can be implemented.
 *
 * Currently buckets are created statically as an array attached to the task
 * queue. On systems with nCPUs < max_ncpus this may waste system memory.
 * One solution may be allocation of buckets when they are first touched,
 * but it is not clear how useful that is.
 *
 * SUSPEND/RESUME implementation.
 *
 * Before executing a task, taskq_thread() (which executes non-dynamic task
 * queues) obtains the taskq's thread lock as a reader. The taskq_suspend()
 * function gets the same lock as a writer, blocking all non-dynamic task
 * execution. The taskq_resume() function releases the lock, allowing
 * taskq_thread to continue execution.
 *
 * For dynamic task queues, each bucket is marked as TQBUCKET_SUSPEND by the
 * taskq_suspend() function.
 * After that taskq_bucket_dispatch() always fails, so that taskq_dispatch()
 * will either enqueue tasks for a suspended backing queue or fail if
 * TQ_NOQUEUE is specified in the dispatch flags.
 *
 * NOTE: taskq_suspend() does not immediately block any tasks already
 * scheduled for dynamic task queues. It only suspends new tasks scheduled
 * after taskq_suspend() was called.
 *
 * The taskq_member() function works by comparing the passed thread's t_taskq
 * pointer with the taskq pointer.
 *
 * LOCKS and LOCK Hierarchy:
 *
 * There are two locks used in task queues.
 *
 * 1) The task queue structure has a lock, protecting global task queue
 *    state.
 *
 * 2) Each per-CPU bucket has a lock for bucket management.
 *
 * If both locks are needed, the task queue lock should be taken only after
 * the bucket lock.
 *
 * DEBUG FACILITIES.
 *
 * For DEBUG kernels it is possible to induce random failures in the
 * taskq_dispatch() function when it is given the TQ_NOSLEEP argument. The
 * values of the taskq_dmtbf and taskq_smtbf tunables control the mean time
 * between induced failures for dynamic and static task queues respectively.
 *
 * Setting TASKQ_STATISTIC to 0 will disable per-bucket statistics.
 *
 * TUNABLES
 *
 *    system_taskq_size    - Size of the global system_taskq.
 *                           This value is multiplied by nCPUs to determine
 *                           the actual size.
 *                           Default value: 64
 *
 *    taskq_thread_timeout - Maximum idle time for taskq_d_thread()
 *                           Default value: 5 minutes
 *
 *    taskq_maxbuckets     - Maximum number of buckets in any task queue
 *                           Default value: 128
 *
 *    taskq_search_depth   - Maximum # of buckets searched for a free entry
 *                           Default value: 4
 *
 *    taskq_dmtbf          - Mean time between induced dispatch failures
 *                           for dynamic task queues.
 *                           Default value: UINT_MAX (no induced failures)
 *
 *    taskq_smtbf          - Mean time between induced dispatch failures
 *                           for static task queues.
 *                           Default value: UINT_MAX (no induced failures)
 *
 * CONDITIONAL compilation.
 *
 *    TASKQ_STATISTIC      - If set, enables bucket statistics (default).
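 *
 * As an illustration only: the tunables above are plain kernel variables, so
 * an administrator could, for example, raise the idle-thread timeout to ten
 * minutes by adding the following line to /etc/system and rebooting:
 *
 *    set taskq_thread_timeout = 600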
 *
 */

#include <sys/taskq_impl.h>
#include <sys/thread.h>
#include <sys/proc.h>
#include <sys/kmem.h>
#include <sys/vmem.h>
#include <sys/callb.h>
#include <sys/systm.h>
#include <sys/cmn_err.h>
#include <sys/debug.h>
#include <sys/vmsystm.h>	/* For throttlefree */
#include <sys/sysmacros.h>
#include <sys/cpuvar.h>
#include <sys/sdt.h>

static kmem_cache_t *taskq_ent_cache, *taskq_cache;

/*
 * Pseudo instance numbers for taskqs without an explicitly provided instance.
 */
static vmem_t *taskq_id_arena;

/* Global system task queue for common use */
taskq_t *system_taskq;

/*
 * Maximum number of entries in the global system taskq is
 * system_taskq_size * max_ncpus
 */
#define	SYSTEM_TASKQ_SIZE 64
int system_taskq_size = SYSTEM_TASKQ_SIZE;

/*
 * Dynamic task queue threads that don't get any work within
 * taskq_thread_timeout destroy themselves.
 */
#define	TASKQ_THREAD_TIMEOUT (60 * 5)
int taskq_thread_timeout = TASKQ_THREAD_TIMEOUT;

#define	TASKQ_MAXBUCKETS 128
int taskq_maxbuckets = TASKQ_MAXBUCKETS;

/*
 * When a bucket has no available entries, other buckets are tried. The
 * taskq_search_depth parameter limits the number of buckets that we search
 * before failing. This is mostly useful in systems with many CPUs where we
 * may spend too much time scanning busy buckets.
 */
#define	TASKQ_SEARCH_DEPTH 4
int taskq_search_depth = TASKQ_SEARCH_DEPTH;

/*
 * Hashing function: mix various bits of x. May be pretty much anything.
 */
#define	TQ_HASH(x) ((x) ^ ((x) >> 11) ^ ((x) >> 17) ^ ((x) ^ 27))

/*
 * We do not create any new threads when the system is low on memory and
 * starts throttling memory allocations. The following macro tries to
 * estimate such a condition.
 */
#define	ENOUGH_MEMORY() (freemem > throttlefree)

/*
 * Static functions.
 */
static taskq_t *taskq_create_common(const char *, int, int, pri_t, int,
    int, uint_t);
static void taskq_thread(void *);
static void taskq_d_thread(taskq_ent_t *);
static void taskq_bucket_extend(void *);
static int taskq_constructor(void *, void *, int);
static void taskq_destructor(void *, void *);
static int taskq_ent_constructor(void *, void *, int);
static void taskq_ent_destructor(void *, void *);
static taskq_ent_t *taskq_ent_alloc(taskq_t *, int);
static void taskq_ent_free(taskq_t *, taskq_ent_t *);
static taskq_ent_t *taskq_bucket_dispatch(taskq_bucket_t *, task_func_t,
    void *);

/*
 * Task queues kstats.
 */
struct taskq_kstat {
	kstat_named_t	tq_tasks;
	kstat_named_t	tq_executed;
	kstat_named_t	tq_maxtasks;
	kstat_named_t	tq_totaltime;
	kstat_named_t	tq_nalloc;
	kstat_named_t	tq_nactive;
	kstat_named_t	tq_pri;
	kstat_named_t	tq_nthreads;
} taskq_kstat = {
	{ "tasks",		KSTAT_DATA_UINT64 },
	{ "executed",		KSTAT_DATA_UINT64 },
	{ "maxtasks",		KSTAT_DATA_UINT64 },
	{ "totaltime",		KSTAT_DATA_UINT64 },
	{ "nalloc",		KSTAT_DATA_UINT64 },
	{ "nactive",		KSTAT_DATA_UINT64 },
	{ "priority",		KSTAT_DATA_UINT64 },
	{ "threads",		KSTAT_DATA_UINT64 },
};

struct taskq_d_kstat {
	kstat_named_t	tqd_pri;
	kstat_named_t	tqd_btasks;
	kstat_named_t	tqd_bexecuted;
	kstat_named_t	tqd_bmaxtasks;
	kstat_named_t	tqd_bnalloc;
	kstat_named_t	tqd_bnactive;
	kstat_named_t	tqd_btotaltime;
	kstat_named_t	tqd_hits;
	kstat_named_t	tqd_misses;
	kstat_named_t	tqd_overflows;
	kstat_named_t	tqd_tcreates;
	kstat_named_t	tqd_tdeaths;
	kstat_named_t	tqd_maxthreads;
	kstat_named_t	tqd_nomem;
	kstat_named_t	tqd_disptcreates;
	kstat_named_t	tqd_totaltime;
	kstat_named_t	tqd_nalloc;
	kstat_named_t	tqd_nfree;
} taskq_d_kstat = {
	{ "priority",		KSTAT_DATA_UINT64 },
	{ "btasks",		KSTAT_DATA_UINT64 },
	{ "bexecuted",		KSTAT_DATA_UINT64 },
	{ "bmaxtasks",		KSTAT_DATA_UINT64 },
	{ "bnalloc",		KSTAT_DATA_UINT64 },
	{ "bnactive",		KSTAT_DATA_UINT64 },
	{ "btotaltime",		KSTAT_DATA_UINT64 },
	{ "hits",		KSTAT_DATA_UINT64 },
	{ "misses",		KSTAT_DATA_UINT64 },
	{ "overflows",		KSTAT_DATA_UINT64 },
	{ "tcreates",		KSTAT_DATA_UINT64 },
	{ "tdeaths",		KSTAT_DATA_UINT64 },
	{ "maxthreads",		KSTAT_DATA_UINT64 },
	{ "nomem",		KSTAT_DATA_UINT64 },
	{ "disptcreates",	KSTAT_DATA_UINT64 },
	{ "totaltime",		KSTAT_DATA_UINT64 },
	{ "nalloc",		KSTAT_DATA_UINT64 },
	{ "nfree",		KSTAT_DATA_UINT64 },
};

static kmutex_t taskq_kstat_lock;
static kmutex_t taskq_d_kstat_lock;
static int taskq_kstat_update(kstat_t *, int);
static int taskq_d_kstat_update(kstat_t *, int);


/*
 * Collect per-bucket statistics when TASKQ_STATISTIC is defined.
 */
#define	TASKQ_STATISTIC 1

#if TASKQ_STATISTIC
#define	TQ_STAT(b, x)	b->tqbucket_stat.x++
#else
#define	TQ_STAT(b, x)
#endif

/*
 * Random fault injection.
 */
uint_t taskq_random;
uint_t taskq_dmtbf = UINT_MAX;    /* mean time between injected failures */
uint_t taskq_smtbf = UINT_MAX;    /* mean time between injected failures */

/*
 * TQ_NOSLEEP dispatches on dynamic task queues are always allowed to fail.
 *
 * TQ_NOSLEEP dispatches on static task queues can't arbitrarily fail because
 * they could prepopulate the cache and make sure that they do not use more
 * than minalloc entries. So, fault injection in this case ensures that
 * either TASKQ_PREPOPULATE is not set or there are more entries allocated
 * than is specified by minalloc. TQ_NOALLOC dispatches are always allowed
 * to fail, but for simplicity we treat them identically to TQ_NOSLEEP
 * dispatches.
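 *
 * As an illustration only: since taskq_dmtbf is a plain kernel variable, a
 * tester could lower the dynamic-dispatch mean time between failures to
 * roughly 1 in 100 on a live DEBUG kernel with mdb:
 *
 *    echo 'taskq_dmtbf/W 0t100' | mdb -kw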
 */
#ifdef DEBUG
#define	TASKQ_D_RANDOM_DISPATCH_FAILURE(tq, flag)		\
	taskq_random = (taskq_random * 2416 + 374441) % 1771875;\
	if ((flag & TQ_NOSLEEP) &&				\
	    taskq_random < 1771875 / taskq_dmtbf) {		\
		return (NULL);					\
	}

#define	TASKQ_S_RANDOM_DISPATCH_FAILURE(tq, flag)		\
	taskq_random = (taskq_random * 2416 + 374441) % 1771875;\
	if ((flag & (TQ_NOSLEEP | TQ_NOALLOC)) &&		\
	    (!(tq->tq_flags & TASKQ_PREPOPULATE) ||		\
	    (tq->tq_nalloc > tq->tq_minalloc)) &&		\
	    (taskq_random < (1771875 / taskq_smtbf))) {		\
		mutex_exit(&tq->tq_lock);			\
		return (NULL);					\
	}
#else
#define	TASKQ_S_RANDOM_DISPATCH_FAILURE(tq, flag)
#define	TASKQ_D_RANDOM_DISPATCH_FAILURE(tq, flag)
#endif

#define	IS_EMPTY(l) (((l).tqent_prev == (l).tqent_next) &&	\
	((l).tqent_prev == &(l)))

/*
 * Append `tqe' to the end of the doubly-linked list denoted by l.
 */
#define	TQ_APPEND(l, tqe) {					\
	tqe->tqent_next = &l;					\
	tqe->tqent_prev = l.tqent_prev;				\
	tqe->tqent_next->tqent_prev = tqe;			\
	tqe->tqent_prev->tqent_next = tqe;			\
}

/*
 * Schedule a task specified by func and arg into the task queue entry tqe.
 */
#define	TQ_ENQUEUE(tq, tqe, func, arg) {				\
	ASSERT(MUTEX_HELD(&tq->tq_lock));				\
	TQ_APPEND(tq->tq_task, tqe);					\
	tqe->tqent_func = (func);					\
	tqe->tqent_arg = (arg);						\
	tq->tq_tasks++;							\
	if (tq->tq_tasks - tq->tq_executed > tq->tq_maxtasks)		\
		tq->tq_maxtasks = tq->tq_tasks - tq->tq_executed;	\
	cv_signal(&tq->tq_dispatch_cv);					\
	DTRACE_PROBE2(taskq__enqueue, taskq_t *, tq, taskq_ent_t *, tqe); \
}

/*
 * Do-nothing task which may be used to prepopulate thread caches.
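 *
 * For instance, a caller could warm up a taskq by dispatching nulltask once
 * per expected worker (a sketch of one possible use, not required usage;
 * 'nwarm' is a hypothetical count):
 *
 *    for (i = 0; i < nwarm; i++)
 *        (void) taskq_dispatch(tq, nulltask, NULL, TQ_SLEEP);
 *    taskq_wait(tq);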
 */
/*ARGSUSED*/
void
nulltask(void *unused)
{
}


/*ARGSUSED*/
static int
taskq_constructor(void *buf, void *cdrarg, int kmflags)
{
	taskq_t *tq = buf;

	bzero(tq, sizeof (taskq_t));

	mutex_init(&tq->tq_lock, NULL, MUTEX_DEFAULT, NULL);
	rw_init(&tq->tq_threadlock, NULL, RW_DEFAULT, NULL);
	cv_init(&tq->tq_dispatch_cv, NULL, CV_DEFAULT, NULL);
	cv_init(&tq->tq_wait_cv, NULL, CV_DEFAULT, NULL);

	tq->tq_task.tqent_next = &tq->tq_task;
	tq->tq_task.tqent_prev = &tq->tq_task;

	return (0);
}

/*ARGSUSED*/
static void
taskq_destructor(void *buf, void *cdrarg)
{
	taskq_t *tq = buf;

	mutex_destroy(&tq->tq_lock);
	rw_destroy(&tq->tq_threadlock);
	cv_destroy(&tq->tq_dispatch_cv);
	cv_destroy(&tq->tq_wait_cv);
}

/*ARGSUSED*/
static int
taskq_ent_constructor(void *buf, void *cdrarg, int kmflags)
{
	taskq_ent_t *tqe = buf;

	tqe->tqent_thread = NULL;
	cv_init(&tqe->tqent_cv, NULL, CV_DEFAULT, NULL);

	return (0);
}

/*ARGSUSED*/
static void
taskq_ent_destructor(void *buf, void *cdrarg)
{
	taskq_ent_t *tqe = buf;

	ASSERT(tqe->tqent_thread == NULL);
	cv_destroy(&tqe->tqent_cv);
}

void
taskq_init(void)
{
	taskq_ent_cache = kmem_cache_create("taskq_ent_cache",
	    sizeof (taskq_ent_t), 0, taskq_ent_constructor,
	    taskq_ent_destructor, NULL, NULL, NULL, 0);
	taskq_cache = kmem_cache_create("taskq_cache", sizeof (taskq_t),
	    0, taskq_constructor, taskq_destructor, NULL, NULL, NULL, 0);
	taskq_id_arena = vmem_create("taskq_id_arena",
	    (void *)1, INT32_MAX, 1, NULL, NULL, NULL, 0,
	    VM_SLEEP | VMC_IDENTIFIER);
}

/*
 * Create global system dynamic task queue.
 */
void
system_taskq_init(void)
{
	system_taskq = taskq_create_common("system_taskq", 0,
	    system_taskq_size * max_ncpus, minclsyspri, 4, 512,
	    TASKQ_DYNAMIC | TASKQ_PREPOPULATE);
}

/*
 * taskq_ent_alloc()
 *
 * Allocates a new taskq_ent_t structure either from the free list or from
 * the cache. Returns NULL if it can't be allocated.
 *
 * Assumes: tq->tq_lock is held.
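 *
 * Note that this pairs with taskq_ent_free() below: entries released while
 * tq_nalloc <= tq_minalloc are cached on tq_freelist (singly linked through
 * tqent_next); only the excess above tq_minalloc is returned to
 * taskq_ent_cache.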
 */
static taskq_ent_t *
taskq_ent_alloc(taskq_t *tq, int flags)
{
	int kmflags = (flags & TQ_NOSLEEP) ? KM_NOSLEEP : KM_SLEEP;

	taskq_ent_t *tqe;

	ASSERT(MUTEX_HELD(&tq->tq_lock));

	/*
	 * TQ_NOALLOC allocations are allowed to use the freelist, even if
	 * we are below tq_minalloc.
	 */
	if ((tqe = tq->tq_freelist) != NULL &&
	    ((flags & TQ_NOALLOC) || tq->tq_nalloc >= tq->tq_minalloc)) {
		tq->tq_freelist = tqe->tqent_next;
	} else {
		if (flags & TQ_NOALLOC)
			return (NULL);

		mutex_exit(&tq->tq_lock);
		if (tq->tq_nalloc >= tq->tq_maxalloc) {
			if (kmflags & KM_NOSLEEP) {
				mutex_enter(&tq->tq_lock);
				return (NULL);
			}
			/*
			 * We don't want to exceed tq_maxalloc, but we can't
			 * wait for other tasks to complete (and thus free up
			 * task structures) without risking deadlock with
			 * the caller. So, we just delay for one second
			 * to throttle the allocation rate.
			 */
			delay(hz);
		}
		tqe = kmem_cache_alloc(taskq_ent_cache, kmflags);
		mutex_enter(&tq->tq_lock);
		if (tqe != NULL)
			tq->tq_nalloc++;
	}
	return (tqe);
}

/*
 * taskq_ent_free()
 *
 * Free a taskq_ent_t structure by either putting it on the free list or
 * freeing it to the cache.
 *
 * Assumes: tq->tq_lock is held.
 */
static void
taskq_ent_free(taskq_t *tq, taskq_ent_t *tqe)
{
	ASSERT(MUTEX_HELD(&tq->tq_lock));

	if (tq->tq_nalloc <= tq->tq_minalloc) {
		tqe->tqent_next = tq->tq_freelist;
		tq->tq_freelist = tqe;
	} else {
		tq->tq_nalloc--;
		mutex_exit(&tq->tq_lock);
		kmem_cache_free(taskq_ent_cache, tqe);
		mutex_enter(&tq->tq_lock);
	}
}

/*
 * Dispatch a task "func(arg)" to a free entry of bucket b.
 *
 * Assumes: no bucket locks are held.
 *
 * Returns: a pointer to an entry if dispatch was successful.
 *	    NULL if there are no free entries or if the bucket is suspended.
 */
static taskq_ent_t *
taskq_bucket_dispatch(taskq_bucket_t *b, task_func_t func, void *arg)
{
	taskq_ent_t *tqe;

	ASSERT(MUTEX_NOT_HELD(&b->tqbucket_lock));
	ASSERT(func != NULL);

	mutex_enter(&b->tqbucket_lock);

	ASSERT(b->tqbucket_nfree != 0 || IS_EMPTY(b->tqbucket_freelist));
	ASSERT(b->tqbucket_nfree == 0 || !IS_EMPTY(b->tqbucket_freelist));

	/*
	 * Get an entry from the freelist if there is one.
	 * Schedule the task into the entry.
	 */
	if ((b->tqbucket_nfree != 0) &&
	    !(b->tqbucket_flags & TQBUCKET_SUSPEND)) {
		tqe = b->tqbucket_freelist.tqent_prev;

		ASSERT(tqe != &b->tqbucket_freelist);
		ASSERT(tqe->tqent_thread != NULL);

		tqe->tqent_prev->tqent_next = tqe->tqent_next;
		tqe->tqent_next->tqent_prev = tqe->tqent_prev;
		b->tqbucket_nalloc++;
		b->tqbucket_nfree--;
		tqe->tqent_func = func;
		tqe->tqent_arg = arg;
		TQ_STAT(b, tqs_hits);
		cv_signal(&tqe->tqent_cv);
		DTRACE_PROBE2(taskq__d__enqueue, taskq_bucket_t *, b,
		    taskq_ent_t *, tqe);
	} else {
		tqe = NULL;
		TQ_STAT(b, tqs_misses);
	}
	mutex_exit(&b->tqbucket_lock);
	return (tqe);
}

/*
 * Dispatch a task.
 *
 * Assumes: func != NULL
 *
 * Returns: NULL if dispatch failed.
 *	    non-NULL if the task was dispatched successfully.
 *	    The actual return value is the pointer to the taskq entry that
 *	    was used to dispatch the task. This is useful for debugging.
 */
/* ARGSUSED */
taskqid_t
taskq_dispatch(taskq_t *tq, task_func_t func, void *arg, uint_t flags)
{
	taskq_bucket_t *bucket = NULL;	/* Which bucket needs extension */
	taskq_ent_t *tqe = NULL;
	taskq_ent_t *tqe1;
	uint_t bsize;

	ASSERT(tq != NULL);
	ASSERT(func != NULL);

	if (!(tq->tq_flags & TASKQ_DYNAMIC)) {
		/*
		 * The TQ_NOQUEUE flag can't be used with non-dynamic task
		 * queues.
		 */
		ASSERT(!(flags & TQ_NOQUEUE));
		/*
		 * Enqueue the task to the underlying queue.
		 */
		mutex_enter(&tq->tq_lock);

		TASKQ_S_RANDOM_DISPATCH_FAILURE(tq, flags);

		if ((tqe = taskq_ent_alloc(tq, flags)) == NULL) {
			mutex_exit(&tq->tq_lock);
			return (NULL);
		}
		TQ_ENQUEUE(tq, tqe, func, arg);
		mutex_exit(&tq->tq_lock);
		return ((taskqid_t)tqe);
	}

	/*
	 * Dynamic taskq dispatching.
	 */
	ASSERT(!(flags & TQ_NOALLOC));
	TASKQ_D_RANDOM_DISPATCH_FAILURE(tq, flags);

	bsize = tq->tq_nbuckets;

	if (bsize == 1) {
		/*
		 * In the single-CPU case there is only one bucket, so get
		 * the entry directly from there.
		 */
		if ((tqe = taskq_bucket_dispatch(tq->tq_buckets, func, arg))
		    != NULL)
			return ((taskqid_t)tqe);	/* Fastpath */
		bucket = tq->tq_buckets;
	} else {
		int loopcount;
		taskq_bucket_t *b;
		uintptr_t h = ((uintptr_t)CPU + (uintptr_t)arg) >> 3;

		h = TQ_HASH(h);

		/*
		 * The 'bucket' points to the original bucket that we hit. If
		 * we can't allocate from it, we search other buckets, but
		 * only extend this one.
		 */
		b = &tq->tq_buckets[h & (bsize - 1)];
		ASSERT(b->tqbucket_taskq == tq);	/* Sanity check */

		/*
		 * Do a quick check before grabbing the lock. If the bucket
		 * does not have free entries now, chances are very small
		 * that it will after we take the lock, so we just skip it.
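		 *
		 * The unlocked read of tqbucket_nfree here is racy, but the
		 * race is benign: taskq_bucket_dispatch() re-checks the
		 * freelist under tqbucket_lock, so a stale value only costs
		 * us a miss or an extra lock acquisition, never a lost task.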
		 */
		if (b->tqbucket_nfree != 0) {
			if ((tqe = taskq_bucket_dispatch(b, func, arg)) != NULL)
				return ((taskqid_t)tqe);	/* Fastpath */
		} else {
			TQ_STAT(b, tqs_misses);
		}

		bucket = b;
		loopcount = MIN(taskq_search_depth, bsize);
		/*
		 * If bucket dispatch failed, search loopcount buckets before
		 * we give up and fail.
		 */
		do {
			b = &tq->tq_buckets[++h & (bsize - 1)];
			ASSERT(b->tqbucket_taskq == tq);	/* Sanity check */
			loopcount--;

			if (b->tqbucket_nfree != 0) {
				tqe = taskq_bucket_dispatch(b, func, arg);
			} else {
				TQ_STAT(b, tqs_misses);
			}
		} while ((tqe == NULL) && (loopcount > 0));
	}

	/*
	 * At this point we either scheduled a task and (tqe != NULL) or
	 * failed (tqe == NULL). Try to recover from failures.
	 */

	/*
	 * For TQ_SLEEP dispatches, try to extend the bucket and retry
	 * dispatch.
	 */
	if ((tqe == NULL) && !(flags & TQ_NOSLEEP)) {
		/*
		 * taskq_bucket_extend() may fail to do anything, but this is
		 * fine - we deal with it later. If the bucket was successfully
		 * extended, there is a good chance that taskq_bucket_dispatch()
		 * will get this new entry, unless someone is racing with us
		 * and stealing the new entry from under our nose.
		 * taskq_bucket_extend() may sleep.
		 */
		taskq_bucket_extend(bucket);
		TQ_STAT(bucket, tqs_disptcreates);
		if ((tqe = taskq_bucket_dispatch(bucket, func, arg)) != NULL)
			return ((taskqid_t)tqe);
	}

	ASSERT(bucket != NULL);
	/*
	 * Since there are not enough free entries in the bucket, extend it
	 * in the background using the backing queue.
	 */
	mutex_enter(&tq->tq_lock);
	if ((tqe1 = taskq_ent_alloc(tq, TQ_NOSLEEP)) != NULL) {
		TQ_ENQUEUE(tq, tqe1, taskq_bucket_extend, bucket);
	} else {
		TQ_STAT(bucket, tqs_nomem);
	}

	/*
	 * Dispatch failed and we can't find an entry to schedule a task.
	 * Revert to the backing queue unless TQ_NOQUEUE was specified.
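	 *
	 * Note that the background extension request enqueued above stands
	 * either way: even if this dispatch falls back to the backing queue
	 * or fails outright, the backing queue will still grow the bucket
	 * for future dispatches.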
	 */
	if ((tqe == NULL) && !(flags & TQ_NOQUEUE)) {
		if ((tqe = taskq_ent_alloc(tq, flags)) != NULL) {
			TQ_ENQUEUE(tq, tqe, func, arg);
		} else {
			TQ_STAT(bucket, tqs_nomem);
		}
	}
	mutex_exit(&tq->tq_lock);

	return ((taskqid_t)tqe);
}

/*
 * Wait for all pending tasks to complete.
 * Calling taskq_wait from a task will cause deadlock.
 */
void
taskq_wait(taskq_t *tq)
{
	ASSERT(tq != curthread->t_taskq);

	mutex_enter(&tq->tq_lock);
	while (tq->tq_task.tqent_next != &tq->tq_task || tq->tq_active != 0)
		cv_wait(&tq->tq_wait_cv, &tq->tq_lock);
	mutex_exit(&tq->tq_lock);

	if (tq->tq_flags & TASKQ_DYNAMIC) {
		taskq_bucket_t *b = tq->tq_buckets;
		int bid = 0;
		for (; (b != NULL) && (bid < tq->tq_nbuckets); b++, bid++) {
			mutex_enter(&b->tqbucket_lock);
			while (b->tqbucket_nalloc > 0)
				cv_wait(&b->tqbucket_cv, &b->tqbucket_lock);
			mutex_exit(&b->tqbucket_lock);
		}
	}
}

/*
 * Suspend execution of tasks.
 *
 * Tasks in the queue part will be suspended immediately upon return from
 * this function. Pending tasks in the dynamic part will continue to execute,
 * but all new tasks will be suspended.
 */
void
taskq_suspend(taskq_t *tq)
{
	rw_enter(&tq->tq_threadlock, RW_WRITER);

	if (tq->tq_flags & TASKQ_DYNAMIC) {
		taskq_bucket_t *b = tq->tq_buckets;
		int bid = 0;
		for (; (b != NULL) && (bid < tq->tq_nbuckets); b++, bid++) {
			mutex_enter(&b->tqbucket_lock);
			b->tqbucket_flags |= TQBUCKET_SUSPEND;
			mutex_exit(&b->tqbucket_lock);
		}
	}
	/*
	 * Mark the task queue as being suspended. Needed for
	 * taskq_suspended().
	 */
	mutex_enter(&tq->tq_lock);
	ASSERT(!(tq->tq_flags & TASKQ_SUSPENDED));
	tq->tq_flags |= TASKQ_SUSPENDED;
	mutex_exit(&tq->tq_lock);
}

/*
 * Returns: 1 if tq is suspended, 0 otherwise.
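 *
 * As noted in the interface comment above, this is intended for assertions,
 * e.g. (an illustrative sketch):
 *
 *	ASSERT(taskq_suspended(tq));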
 */
int
taskq_suspended(taskq_t *tq)
{
	return ((tq->tq_flags & TASKQ_SUSPENDED) != 0);
}

/*
 * Resume taskq execution.
 */
void
taskq_resume(taskq_t *tq)
{
	ASSERT(RW_WRITE_HELD(&tq->tq_threadlock));

	if (tq->tq_flags & TASKQ_DYNAMIC) {
		taskq_bucket_t *b = tq->tq_buckets;
		int bid = 0;
		for (; (b != NULL) && (bid < tq->tq_nbuckets); b++, bid++) {
			mutex_enter(&b->tqbucket_lock);
			b->tqbucket_flags &= ~TQBUCKET_SUSPEND;
			mutex_exit(&b->tqbucket_lock);
		}
	}
	mutex_enter(&tq->tq_lock);
	ASSERT(tq->tq_flags & TASKQ_SUSPENDED);
	tq->tq_flags &= ~TASKQ_SUSPENDED;
	mutex_exit(&tq->tq_lock);

	rw_exit(&tq->tq_threadlock);
}

int
taskq_member(taskq_t *tq, kthread_t *thread)
{
	return (thread->t_taskq == tq);
}

/*
 * Worker thread for processing task queue.
 */
static void
taskq_thread(void *arg)
{
	taskq_t *tq = arg;
	taskq_ent_t *tqe;
	callb_cpr_t cprinfo;
	hrtime_t start, end;

	if (tq->tq_flags & TASKQ_CPR_SAFE) {
		CALLB_CPR_INIT_SAFE(curthread, tq->tq_name);
	} else {
		CALLB_CPR_INIT(&cprinfo, &tq->tq_lock, callb_generic_cpr,
		    tq->tq_name);
	}
	mutex_enter(&tq->tq_lock);
	while (tq->tq_flags & TASKQ_ACTIVE) {
		if ((tqe = tq->tq_task.tqent_next) == &tq->tq_task) {
			if (--tq->tq_active == 0)
				cv_broadcast(&tq->tq_wait_cv);
			if (tq->tq_flags & TASKQ_CPR_SAFE) {
				cv_wait(&tq->tq_dispatch_cv, &tq->tq_lock);
			} else {
				CALLB_CPR_SAFE_BEGIN(&cprinfo);
				cv_wait(&tq->tq_dispatch_cv, &tq->tq_lock);
				CALLB_CPR_SAFE_END(&cprinfo, &tq->tq_lock);
			}
			tq->tq_active++;
			continue;
		}
		tqe->tqent_prev->tqent_next = tqe->tqent_next;
		tqe->tqent_next->tqent_prev = tqe->tqent_prev;
		mutex_exit(&tq->tq_lock);

		rw_enter(&tq->tq_threadlock, RW_READER);
		start = gethrtime();
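		/*
		 * Run the task between the matching exec probes; the elapsed
		 * time is accumulated into tq_totaltime below.
		 */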
/*
 * Worker thread for processing task queue.
 */
static void
taskq_thread(void *arg)
{
	taskq_t *tq = arg;
	taskq_ent_t *tqe;
	callb_cpr_t cprinfo;
	hrtime_t start, end;

	if (tq->tq_flags & TASKQ_CPR_SAFE) {
		CALLB_CPR_INIT_SAFE(curthread, tq->tq_name);
	} else {
		CALLB_CPR_INIT(&cprinfo, &tq->tq_lock, callb_generic_cpr,
		    tq->tq_name);
	}
	mutex_enter(&tq->tq_lock);
	while (tq->tq_flags & TASKQ_ACTIVE) {
		if ((tqe = tq->tq_task.tqent_next) == &tq->tq_task) {
			/*
			 * The queue is empty: notify waiters and sleep
			 * until a new task is dispatched.
			 */
			if (--tq->tq_active == 0)
				cv_broadcast(&tq->tq_wait_cv);
			if (tq->tq_flags & TASKQ_CPR_SAFE) {
				cv_wait(&tq->tq_dispatch_cv, &tq->tq_lock);
			} else {
				CALLB_CPR_SAFE_BEGIN(&cprinfo);
				cv_wait(&tq->tq_dispatch_cv, &tq->tq_lock);
				CALLB_CPR_SAFE_END(&cprinfo, &tq->tq_lock);
			}
			tq->tq_active++;
			continue;
		}
		/* Unlink the entry from the pending list. */
		tqe->tqent_prev->tqent_next = tqe->tqent_next;
		tqe->tqent_next->tqent_prev = tqe->tqent_prev;
		mutex_exit(&tq->tq_lock);

		rw_enter(&tq->tq_threadlock, RW_READER);
		start = gethrtime();
		DTRACE_PROBE2(taskq__exec__start, taskq_t *, tq,
		    taskq_ent_t *, tqe);
		tqe->tqent_func(tqe->tqent_arg);
		DTRACE_PROBE2(taskq__exec__end, taskq_t *, tq,
		    taskq_ent_t *, tqe);
		end = gethrtime();
		rw_exit(&tq->tq_threadlock);

		mutex_enter(&tq->tq_lock);
		tq->tq_totaltime += end - start;
		tq->tq_executed++;

		taskq_ent_free(tq, tqe);
	}
	tq->tq_nthreads--;
	cv_broadcast(&tq->tq_wait_cv);
	ASSERT(!(tq->tq_flags & TASKQ_CPR_SAFE));
	CALLB_CPR_EXIT(&cprinfo);
	thread_exit();
}
/*
 * Worker per-entry thread for dynamic dispatches.
 */
static void
taskq_d_thread(taskq_ent_t *tqe)
{
	taskq_bucket_t *bucket = tqe->tqent_bucket;
	taskq_t *tq = bucket->tqbucket_taskq;
	kmutex_t *lock = &bucket->tqbucket_lock;
	kcondvar_t *cv = &tqe->tqent_cv;
	callb_cpr_t cprinfo;
	clock_t w;

	CALLB_CPR_INIT(&cprinfo, lock, callb_generic_cpr, tq->tq_name);

	mutex_enter(lock);

	for (;;) {
		/*
		 * If a task is scheduled (func != NULL), execute it, otherwise
		 * sleep, waiting for a job.
		 */
		if (tqe->tqent_func != NULL) {
			hrtime_t start;
			hrtime_t end;

			ASSERT(bucket->tqbucket_nalloc > 0);

			/*
			 * It is possible to free the entry right away before
			 * actually executing the task so that subsequent
			 * dispatches may immediately reuse it. But this
			 * effectively creates a queue of length two in the
			 * entry and may lead to a deadlock if the execution
			 * of the current task depends on the execution of
			 * the next scheduled task. So, we keep the entry
			 * busy until the task is processed.
			 */
			mutex_exit(lock);
			start = gethrtime();
			DTRACE_PROBE3(taskq__d__exec__start, taskq_t *, tq,
			    taskq_bucket_t *, bucket, taskq_ent_t *, tqe);
			tqe->tqent_func(tqe->tqent_arg);
			DTRACE_PROBE3(taskq__d__exec__end, taskq_t *, tq,
			    taskq_bucket_t *, bucket, taskq_ent_t *, tqe);
			end = gethrtime();
			mutex_enter(lock);
			bucket->tqbucket_totaltime += end - start;

			/*
			 * Return the entry to the bucket free list.
			 */
			tqe->tqent_func = NULL;
			TQ_APPEND(bucket->tqbucket_freelist, tqe);
			bucket->tqbucket_nalloc--;
			bucket->tqbucket_nfree++;
			ASSERT(!IS_EMPTY(bucket->tqbucket_freelist));
			/*
			 * taskq_wait() waits for nalloc to drop to zero on
			 * tqbucket_cv.
			 */
			cv_signal(&bucket->tqbucket_cv);
		}

		/*
		 * At this point the entry must be in the bucket free list -
		 * either because it was there initially or because it just
		 * finished executing a task and put itself on the free list.
		 */
		ASSERT(bucket->tqbucket_nfree > 0);
		/*
		 * Go to sleep unless we are closing.
		 * If a thread sleeps for too long, it dies.
		 */
		if (!(bucket->tqbucket_flags & TQBUCKET_CLOSE)) {
			CALLB_CPR_SAFE_BEGIN(&cprinfo);
			w = cv_timedwait(cv, lock, lbolt +
			    taskq_thread_timeout * hz);
			CALLB_CPR_SAFE_END(&cprinfo, lock);
		}
		/*
		 * At this point we may be in one of two states:
		 *
		 * (1) tqent_func is set, which means a new task was
		 *     dispatched and we need to execute it.
		 *
		 * (2) The thread slept for too long, or we are closing.
		 *     In either case destroy the thread and the entry.
		 */

		/* If func is NULL we should be on the freelist. */
		ASSERT((tqe->tqent_func != NULL) ||
		    (bucket->tqbucket_nfree > 0));
		/* If func is non-NULL we should be allocated. */
		ASSERT((tqe->tqent_func == NULL) ||
		    (bucket->tqbucket_nalloc > 0));

		/* Check freelist consistency. */
		ASSERT((bucket->tqbucket_nfree > 0) ||
		    IS_EMPTY(bucket->tqbucket_freelist));
		ASSERT((bucket->tqbucket_nfree == 0) ||
		    !IS_EMPTY(bucket->tqbucket_freelist));

		if ((tqe->tqent_func == NULL) &&
		    ((w == -1) || (bucket->tqbucket_flags & TQBUCKET_CLOSE))) {
			/*
			 * This thread slept for too long or we are closing -
			 * time to die.
			 * Thread creation/destruction happens rarely, so
			 * grabbing the lock is not a big performance issue.
			 * The bucket lock is dropped by CALLB_CPR_EXIT().
			 */

			/* Remove the entry from the free list. */
			tqe->tqent_prev->tqent_next = tqe->tqent_next;
			tqe->tqent_next->tqent_prev = tqe->tqent_prev;
			ASSERT(bucket->tqbucket_nfree > 0);
			bucket->tqbucket_nfree--;

			TQ_STAT(bucket, tqs_tdeaths);
			cv_signal(&bucket->tqbucket_cv);
			tqe->tqent_thread = NULL;
			mutex_enter(&tq->tq_lock);
			tq->tq_tdeaths++;
			mutex_exit(&tq->tq_lock);
			CALLB_CPR_EXIT(&cprinfo);
			kmem_cache_free(taskq_ent_cache, tqe);
			thread_exit();
		}
	}
}

/*
 * Taskq creation. May sleep for memory.
 * Always use automatically generated instances to avoid kstat name space
 * collisions.
 */
taskq_t *
taskq_create(const char *name, int nthreads, pri_t pri, int minalloc,
    int maxalloc, uint_t flags)
{
	return (taskq_create_common(name, 0, nthreads, pri, minalloc,
	    maxalloc, flags | TASKQ_NOINSTANCE));
}
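/*
 * Illustrative sketch (not part of the original source): creating a
 * prepopulated eight-thread taskq at minclsyspri, and a dynamic taskq whose
 * thread pool is capped at sixteen threads. The names and the minalloc /
 * maxalloc values are hypothetical.
 *
 *	taskq_t *xx_tq, *xx_dtq;
 *
 *	xx_tq = taskq_create("xx_taskq", 8, minclsyspri,
 *	    4, INT_MAX, TASKQ_PREPOPULATE);
 *	xx_dtq = taskq_create("xx_dyn_taskq", 16, minclsyspri,
 *	    4, INT_MAX, TASKQ_DYNAMIC | TASKQ_PREPOPULATE);
 *
 * Both queues are eventually disposed of with taskq_destroy() below.
 */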
/*
 * Create an instance of task queue. It is legal to create task queues with
 * the same name and different instances.
 *
 * taskq_create_instance() is used by ddi_taskq_create(), which gets the
 * instance from ddi_get_instance(). In some cases the instance is not
 * initialized and is set to -1. This case is handled as if no instance was
 * passed at all.
 */
taskq_t *
taskq_create_instance(const char *name, int instance, int nthreads, pri_t pri,
    int minalloc, int maxalloc, uint_t flags)
{
	ASSERT((instance >= 0) || (instance == -1));

	if (instance < 0) {
		flags |= TASKQ_NOINSTANCE;
	}

	return (taskq_create_common(name, instance, nthreads,
	    pri, minalloc, maxalloc, flags));
}

static taskq_t *
taskq_create_common(const char *name, int instance, int nthreads, pri_t pri,
    int minalloc, int maxalloc, uint_t flags)
{
	taskq_t *tq = kmem_cache_alloc(taskq_cache, KM_SLEEP);
	uint_t ncpus = ((boot_max_ncpus == -1) ? max_ncpus : boot_max_ncpus);
	uint_t bsize;	/* # of buckets - always a power of 2 */

	/*
	 * TASKQ_CPR_SAFE and TASKQ_DYNAMIC flags are mutually exclusive.
	 */
	ASSERT((flags & (TASKQ_DYNAMIC | TASKQ_CPR_SAFE)) !=
	    (TASKQ_DYNAMIC | TASKQ_CPR_SAFE));

	ASSERT(tq->tq_buckets == NULL);

	/* Round the CPU count down to a power of 2, e.g. 6 cpus -> 4. */
	bsize = 1 << (highbit(ncpus) - 1);
	ASSERT(bsize >= 1);
	bsize = MIN(bsize, taskq_maxbuckets);

	tq->tq_maxsize = nthreads;

	/* For dynamic task queues use just one backup thread */
	if (flags & TASKQ_DYNAMIC)
		nthreads = 1;

	(void) strncpy(tq->tq_name, name, TASKQ_NAMELEN + 1);
	tq->tq_name[TASKQ_NAMELEN] = '\0';
	/* Make sure the name conforms to the rules for C identifiers */
	strident_canon(tq->tq_name, TASKQ_NAMELEN);

	tq->tq_flags = flags | TASKQ_ACTIVE;
	tq->tq_active = nthreads;
	tq->tq_instance = instance;
	tq->tq_nthreads = nthreads;
	tq->tq_minalloc = minalloc;
	tq->tq_maxalloc = maxalloc;
	tq->tq_nbuckets = bsize;
	tq->tq_pri = pri;

	if (flags & TASKQ_PREPOPULATE) {
		mutex_enter(&tq->tq_lock);
		while (minalloc-- > 0)
			taskq_ent_free(tq, taskq_ent_alloc(tq, TQ_SLEEP));
		mutex_exit(&tq->tq_lock);
	}
	if (nthreads == 1) {
		tq->tq_thread = thread_create(NULL, 0, taskq_thread, tq,
		    0, &p0, TS_RUN, pri);
		/*
		 * No need to take thread_lock to change the field: no one can
		 * reference it at this point.
		 */
		tq->tq_thread->t_taskq = tq;
	} else {
		kthread_t **tpp = kmem_alloc(sizeof (kthread_t *) * nthreads,
		    KM_SLEEP);

		tq->tq_threadlist = tpp;

		mutex_enter(&tq->tq_lock);
		while (nthreads-- > 0) {
			*tpp = thread_create(NULL, 0, taskq_thread, tq,
			    0, &p0, TS_RUN, pri);
			(*tpp)->t_taskq = tq;
			tpp++;
		}
		mutex_exit(&tq->tq_lock);
	}

	if (flags & TASKQ_DYNAMIC) {
		taskq_bucket_t *bucket = kmem_zalloc(sizeof (taskq_bucket_t) *
		    bsize, KM_SLEEP);
		int b_id;

		tq->tq_buckets = bucket;

		/* Initialize each bucket */
		for (b_id = 0; b_id < bsize; b_id++, bucket++) {
			mutex_init(&bucket->tqbucket_lock, NULL, MUTEX_DEFAULT,
			    NULL);
			cv_init(&bucket->tqbucket_cv, NULL, CV_DEFAULT, NULL);
			bucket->tqbucket_taskq = tq;
			bucket->tqbucket_freelist.tqent_next =
			    bucket->tqbucket_freelist.tqent_prev =
			    &bucket->tqbucket_freelist;
			if (flags & TASKQ_PREPOPULATE)
				taskq_bucket_extend(bucket);
		}
	}
	/*
	 * Install kstats.
	 * We have two cases:
	 *   1) Instance is provided to taskq_create_instance(). In this case
	 *	it should be >= 0 and we use it.
	 *
	 *   2) Instance is not provided and is automatically generated.
	 */
	if (flags & TASKQ_NOINSTANCE) {
		instance = tq->tq_instance =
		    (int)(uintptr_t)vmem_alloc(taskq_id_arena, 1, VM_SLEEP);
	}

	if (flags & TASKQ_DYNAMIC) {
		if ((tq->tq_kstat = kstat_create("unix", instance,
		    tq->tq_name, "taskq_d", KSTAT_TYPE_NAMED,
		    sizeof (taskq_d_kstat) / sizeof (kstat_named_t),
		    KSTAT_FLAG_VIRTUAL)) != NULL) {
			tq->tq_kstat->ks_lock = &taskq_d_kstat_lock;
			tq->tq_kstat->ks_data = &taskq_d_kstat;
			tq->tq_kstat->ks_update = taskq_d_kstat_update;
			tq->tq_kstat->ks_private = tq;
			kstat_install(tq->tq_kstat);
		}
	} else {
		if ((tq->tq_kstat = kstat_create("unix", instance, tq->tq_name,
		    "taskq", KSTAT_TYPE_NAMED,
		    sizeof (taskq_kstat) / sizeof (kstat_named_t),
		    KSTAT_FLAG_VIRTUAL)) != NULL) {
			tq->tq_kstat->ks_lock = &taskq_kstat_lock;
			tq->tq_kstat->ks_data = &taskq_kstat;
			tq->tq_kstat->ks_update = taskq_kstat_update;
			tq->tq_kstat->ks_private = tq;
			kstat_install(tq->tq_kstat);
		}
	}

	return (tq);
}
/*
 * taskq_destroy().
 *
 * Assumes: by the time taskq_destroy is called no one will use this task queue
 * in any way and no one will try to dispatch entries in it.
 */
void
taskq_destroy(taskq_t *tq)
{
	taskq_bucket_t *b = tq->tq_buckets;
	int bid = 0;

	ASSERT(!(tq->tq_flags & TASKQ_CPR_SAFE));

	/*
	 * Destroy kstats.
	 */
	if (tq->tq_kstat != NULL) {
		kstat_delete(tq->tq_kstat);
		tq->tq_kstat = NULL;
	}

	/*
	 * Destroy instance if needed.
	 */
	if (tq->tq_flags & TASKQ_NOINSTANCE) {
		vmem_free(taskq_id_arena, (void *)(uintptr_t)(tq->tq_instance),
		    1);
		tq->tq_instance = 0;
	}

	/*
	 * Wait for any pending entries to complete.
	 */
	taskq_wait(tq);

	mutex_enter(&tq->tq_lock);
	ASSERT((tq->tq_task.tqent_next == &tq->tq_task) &&
	    (tq->tq_active == 0));

	if ((tq->tq_nthreads > 1) && (tq->tq_threadlist != NULL))
		kmem_free(tq->tq_threadlist, sizeof (kthread_t *) *
		    tq->tq_nthreads);

	tq->tq_flags &= ~TASKQ_ACTIVE;
	cv_broadcast(&tq->tq_dispatch_cv);
	while (tq->tq_nthreads != 0)
		cv_wait(&tq->tq_wait_cv, &tq->tq_lock);

	tq->tq_minalloc = 0;
	while (tq->tq_nalloc != 0)
		taskq_ent_free(tq, taskq_ent_alloc(tq, TQ_SLEEP));

	mutex_exit(&tq->tq_lock);

	/*
	 * Mark each bucket as closing and wakeup all sleeping threads.
	 */
	for (; (b != NULL) && (bid < tq->tq_nbuckets); b++, bid++) {
		taskq_ent_t *tqe;

		mutex_enter(&b->tqbucket_lock);

		b->tqbucket_flags |= TQBUCKET_CLOSE;
		/* Wakeup all sleeping threads */

		for (tqe = b->tqbucket_freelist.tqent_next;
		    tqe != &b->tqbucket_freelist; tqe = tqe->tqent_next)
			cv_signal(&tqe->tqent_cv);

		ASSERT(b->tqbucket_nalloc == 0);

		/*
		 * At this point we waited for all pending jobs to complete
		 * (in both the task queue and the bucket) and no new jobs
		 * should arrive. Wait for all threads to die.
		 */
		while (b->tqbucket_nfree > 0)
			cv_wait(&b->tqbucket_cv, &b->tqbucket_lock);
		mutex_exit(&b->tqbucket_lock);
		mutex_destroy(&b->tqbucket_lock);
		cv_destroy(&b->tqbucket_cv);
	}

	if (tq->tq_buckets != NULL) {
		ASSERT(tq->tq_flags & TASKQ_DYNAMIC);
		kmem_free(tq->tq_buckets,
		    sizeof (taskq_bucket_t) * tq->tq_nbuckets);

		/* Cleanup fields before returning tq to the cache */
		tq->tq_buckets = NULL;
		tq->tq_tcreates = 0;
		tq->tq_tdeaths = 0;
	} else {
		ASSERT(!(tq->tq_flags & TASKQ_DYNAMIC));
	}

	tq->tq_totaltime = 0;
	tq->tq_tasks = 0;
	tq->tq_maxtasks = 0;
	tq->tq_executed = 0;
	kmem_cache_free(taskq_cache, tq);
}
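/*
 * Illustrative sketch (not part of the original source): a typical taskq
 * teardown in a driver detach path. The names are hypothetical.
 *
 *	static int
 *	xx_detach(dev_info_t *dip, ddi_detach_cmd_t cmd)
 *	{
 *		xx_state_t *xsp = ...;
 *
 *		taskq_wait(xsp->xx_tq);		... drain outstanding work ...
 *		taskq_destroy(xsp->xx_tq);	... then tear the queue down ...
 *		xsp->xx_tq = NULL;
 *		...
 *	}
 *
 * taskq_destroy() itself calls taskq_wait(), so the explicit drain is only
 * needed when work must be quiesced before other teardown steps run.
 */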
/*
 * Extend a bucket with a new entry on the free list and attach a worker
 * thread to it.
 *
 * Argument: pointer to the bucket.
 *
 * This function may quietly fail. It is only used by taskq_dispatch(), which
 * handles such failures properly.
 */
static void
taskq_bucket_extend(void *arg)
{
	taskq_ent_t *tqe;
	taskq_bucket_t *b = (taskq_bucket_t *)arg;
	taskq_t *tq = b->tqbucket_taskq;
	int nthreads;

	if (!ENOUGH_MEMORY()) {
		TQ_STAT(b, tqs_nomem);
		return;
	}

	mutex_enter(&tq->tq_lock);

	/*
	 * Observe global taskq limits on the number of threads.
	 */
	if (tq->tq_tcreates++ - tq->tq_tdeaths > tq->tq_maxsize) {
		tq->tq_tcreates--;
		mutex_exit(&tq->tq_lock);
		return;
	}
	mutex_exit(&tq->tq_lock);

	tqe = kmem_cache_alloc(taskq_ent_cache, KM_NOSLEEP);

	if (tqe == NULL) {
		mutex_enter(&tq->tq_lock);
		TQ_STAT(b, tqs_nomem);
		tq->tq_tcreates--;
		mutex_exit(&tq->tq_lock);
		return;
	}

	ASSERT(tqe->tqent_thread == NULL);

	tqe->tqent_bucket = b;
	/*
	 * Create a thread in a TS_STOPPED state first. If it is successfully
	 * created, place the entry on the free list and start the thread.
	 */
	tqe->tqent_thread = thread_create(NULL, 0, taskq_d_thread, tqe,
	    0, &p0, TS_STOPPED, tq->tq_pri);

	/*
	 * Once the entry is ready, link it to the bucket free list.
	 */
	mutex_enter(&b->tqbucket_lock);
	tqe->tqent_func = NULL;
	TQ_APPEND(b->tqbucket_freelist, tqe);
	b->tqbucket_nfree++;
	TQ_STAT(b, tqs_tcreates);

#if TASKQ_STATISTIC
	nthreads = b->tqbucket_stat.tqs_tcreates -
	    b->tqbucket_stat.tqs_tdeaths;
	b->tqbucket_stat.tqs_maxthreads = MAX(nthreads,
	    b->tqbucket_stat.tqs_maxthreads);
#endif

	mutex_exit(&b->tqbucket_lock);
	/*
	 * Start the stopped thread.
	 */
	thread_lock(tqe->tqent_thread);
	tqe->tqent_thread->t_taskq = tq;
	tqe->tqent_thread->t_schedflag |= TS_ALLSTART;
	setrun_locked(tqe->tqent_thread);
	thread_unlock(tqe->tqent_thread);
}

static int
taskq_kstat_update(kstat_t *ksp, int rw)
{
	struct taskq_kstat *tqsp = &taskq_kstat;
	taskq_t *tq = ksp->ks_private;

	if (rw == KSTAT_WRITE)
		return (EACCES);

	tqsp->tq_tasks.value.ui64 = tq->tq_tasks;
	tqsp->tq_executed.value.ui64 = tq->tq_executed;
	tqsp->tq_maxtasks.value.ui64 = tq->tq_maxtasks;
	tqsp->tq_totaltime.value.ui64 = tq->tq_totaltime;
	tqsp->tq_nactive.value.ui64 = tq->tq_active;
	tqsp->tq_nalloc.value.ui64 = tq->tq_nalloc;
	tqsp->tq_pri.value.ui64 = tq->tq_pri;
	tqsp->tq_nthreads.value.ui64 = tq->tq_nthreads;
	return (0);
}

static int
taskq_d_kstat_update(kstat_t *ksp, int rw)
{
	struct taskq_d_kstat *tqsp = &taskq_d_kstat;
	taskq_t *tq = ksp->ks_private;
	taskq_bucket_t *b = tq->tq_buckets;
	int bid = 0;

	if (rw == KSTAT_WRITE)
		return (EACCES);

	ASSERT(tq->tq_flags & TASKQ_DYNAMIC);

	tqsp->tqd_btasks.value.ui64 = tq->tq_tasks;
	tqsp->tqd_bexecuted.value.ui64 = tq->tq_executed;
	tqsp->tqd_bmaxtasks.value.ui64 = tq->tq_maxtasks;
	tqsp->tqd_bnalloc.value.ui64 = tq->tq_nalloc;
	tqsp->tqd_bnactive.value.ui64 = tq->tq_active;
	tqsp->tqd_btotaltime.value.ui64 = tq->tq_totaltime;
	tqsp->tqd_pri.value.ui64 = tq->tq_pri;

	tqsp->tqd_hits.value.ui64 = 0;
	tqsp->tqd_misses.value.ui64 = 0;
	tqsp->tqd_overflows.value.ui64 = 0;
	tqsp->tqd_tcreates.value.ui64 = 0;
	tqsp->tqd_tdeaths.value.ui64 = 0;
	tqsp->tqd_maxthreads.value.ui64 = 0;
	tqsp->tqd_nomem.value.ui64 = 0;
	tqsp->tqd_disptcreates.value.ui64 = 0;
	tqsp->tqd_totaltime.value.ui64 = 0;
	tqsp->tqd_nalloc.value.ui64 = 0;
	tqsp->tqd_nfree.value.ui64 = 0;

	/* Accumulate per-bucket statistics across all buckets. */
	for (; (b != NULL) && (bid < tq->tq_nbuckets); b++, bid++) {
		tqsp->tqd_hits.value.ui64 += b->tqbucket_stat.tqs_hits;
		tqsp->tqd_misses.value.ui64 += b->tqbucket_stat.tqs_misses;
		tqsp->tqd_overflows.value.ui64 += b->tqbucket_stat.tqs_overflow;
		tqsp->tqd_tcreates.value.ui64 += b->tqbucket_stat.tqs_tcreates;
		tqsp->tqd_tdeaths.value.ui64 += b->tqbucket_stat.tqs_tdeaths;
		tqsp->tqd_maxthreads.value.ui64 +=
		    b->tqbucket_stat.tqs_maxthreads;
		tqsp->tqd_nomem.value.ui64 += b->tqbucket_stat.tqs_nomem;
		tqsp->tqd_disptcreates.value.ui64 +=
		    b->tqbucket_stat.tqs_disptcreates;
		tqsp->tqd_totaltime.value.ui64 += b->tqbucket_totaltime;
		tqsp->tqd_nalloc.value.ui64 += b->tqbucket_nalloc;
		tqsp->tqd_nfree.value.ui64 += b->tqbucket_nfree;
	}
	return (0);
}
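/*
 * Observability note (not part of the original source): the kstats installed
 * by taskq_create_common() can be read from userland, e.g. with kstat(1M):
 *
 *	kstat -m unix -n <taskq name>
 *
 * For TASKQ_DYNAMIC queues the "taskq_d" kstat above reports the bucket
 * counters (hits, misses, overflows, thread creations and deaths) summed
 * over all buckets, plus the backing-queue statistics.
 */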