1.\" $NetBSD: mutex.9,v 1.29 2017/07/03 21:28:48 wiz Exp $ 2.\" 3.\" Copyright (c) 2007, 2009 The NetBSD Foundation, Inc. 4.\" All rights reserved. 5.\" 6.\" This code is derived from software contributed to The NetBSD Foundation 7.\" by Andrew Doran. 8.\" 9.\" Redistribution and use in source and binary forms, with or without 10.\" modification, are permitted provided that the following conditions 11.\" are met: 12.\" 1. Redistributions of source code must retain the above copyright 13.\" notice, this list of conditions and the following disclaimer. 14.\" 2. Redistributions in binary form must reproduce the above copyright 15.\" notice, this list of conditions and the following disclaimer in the 16.\" documentation and/or other materials provided with the distribution. 17.\" 18.\" THIS SOFTWARE IS PROVIDED BY THE NETBSD FOUNDATION, INC. AND CONTRIBUTORS 19.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED 20.\" TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR 21.\" PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE FOUNDATION OR CONTRIBUTORS 22.\" BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR 23.\" CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF 24.\" SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS 25.\" INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN 26.\" CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) 27.\" ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE 28.\" POSSIBILITY OF SUCH DAMAGE. 29.\" 30.Dd May 1, 2017 31.Dt MUTEX 9 32.Os 33.Sh NAME 34.Nm mutex , 35.Nm mutex_init , 36.Nm mutex_destroy , 37.Nm mutex_enter , 38.Nm mutex_exit , 39.Nm mutex_ownable , 40.Nm mutex_owned , 41.Nm mutex_spin_enter , 42.Nm mutex_spin_exit , 43.Nm mutex_tryenter 44.Nd mutual exclusion primitives 45.Sh SYNOPSIS 46.In sys/mutex.h 47.Ft void 48.Fn mutex_init "kmutex_t *mtx" "kmutex_type_t type" "int ipl" 49.Ft void 50.Fn mutex_destroy "kmutex_t *mtx" 51.Ft void 52.Fn mutex_enter "kmutex_t *mtx" 53.Ft void 54.Fn mutex_exit "kmutex_t *mtx" 55.Ft int 56.Fn mutex_ownable "kmutex_t *mtx" 57.Ft int 58.Fn mutex_owned "kmutex_t *mtx" 59.Ft void 60.Fn mutex_spin_enter "kmutex_t *mtx" 61.Ft void 62.Fn mutex_spin_exit "kmutex_t *mtx" 63.Ft int 64.Fn mutex_tryenter "kmutex_t *mtx" 65.Pp 66.Cd "options DIAGNOSTIC" 67.Cd "options LOCKDEBUG" 68.Sh DESCRIPTION 69Mutexes are used in the kernel to implement mutual exclusion among LWPs 70(lightweight processes) and interrupt handlers. 71.Pp 72The 73.Vt kmutex_t 74type provides storage for the mutex object. 75This should be treated as an opaque object and not examined directly by 76consumers. 77.Pp 78Mutexes replace the 79.Xr spl 9 80system traditionally used to provide synchronization between interrupt 81handlers and LWPs. 82.Sh OPTIONS 83.Bl -tag -width abcd 84.It Cd "options DIAGNOSTIC" 85.Pp 86Kernels compiled with the 87.Dv DIAGNOSTIC 88option perform basic sanity checks on mutex operations. 89.It Cd "options LOCKDEBUG" 90.Pp 91Kernels compiled with the 92.Dv LOCKDEBUG 93option perform potentially CPU intensive sanity checks 94on mutex operations. 95.El 96.Sh FUNCTIONS 97.Bl -tag -width abcd 98.It Fn mutex_init "mtx" "type" "ipl" 99.Pp 100Dynamically initialize a mutex for use. 101.Pp 102No other operations can be performed on a mutex until it has been initialized. 103Once initialized, all types of mutex are manipulated using the same interface. 104Note that 105.Fn mutex_init 106may block in order to allocate memory. 107.Pp 108The 109.Fa type 110argument must be given as 111.Dv MUTEX_DEFAULT . 112Other constants are defined but are for low-level system use and are not 113an endorsed, stable part of the interface. 114.Pp 115The type of mutex returned depends on the 116.Fa ipl 117argument: 118.Bl -tag -width abcd 119.It IPL_NONE, or one of the IPL_SOFT* constants 120.Pp 121An adaptive mutex will be returned. 122Adaptive mutexes provide mutual exclusion between LWPs, 123and between LWPs and soft interrupt handlers. 124.Pp 125Adaptive mutexes cannot be acquired from a hardware interrupt handler. 126An LWP may either sleep or busy-wait when attempting to acquire 127an adaptive mutex that is already held. 128.It IPL_VM, IPL_SCHED, IPL_HIGH 129.Pp 130A spin mutex will be returned. 131Spin mutexes provide mutual exclusion between LWPs, and between LWPs 132and interrupt handlers. 133.Pp 134The 135.Fa ipl 136argument is used to pass a system interrupt priority level (IPL) 137that will block all interrupt handlers that may try to acquire the mutex. 138.Pp 139LWPs that own spin mutexes may not sleep, and therefore must not 140try to acquire adaptive mutexes or other sleep locks. 141.Pp 142A processor will always busy-wait when attempting to acquire 143a spin mutex that is already held. 144.El 145.Pp 146See 147.Xr spl 9 148for further information on interrupt priority levels (IPLs). 149.Pp 150.It Fn mutex_destroy "mtx" 151.Pp 152Release resources used by a mutex. 153The mutex may not be used after it has been destroyed. 154.Fn mutex_destroy 155may block in order to free memory. 156.It Fn mutex_enter "mtx" 157.Pp 158Acquire a mutex. 159If the mutex is already held, the caller will block and not return until the 160mutex is acquired. 161.Pp 162Mutexes and other types of locks must always be acquired in a 163consistent order with respect to each other. 164Otherwise, the potential for system deadlock exists. 165.Pp 166Adaptive mutexes and other types of lock that can sleep may 167not be acquired while a spin mutex is held by the caller. 168.Pp 169When acquiring a spin mutex, the IPL of the current CPU will be raised to 170the level set in 171.Fn mutex_init 172if it is not already equal or higher. 173.It Fn mutex_exit "mtx" 174.Pp 175Release a mutex. 176The mutex must have been previously acquired by the caller. 177Mutexes may be released out of order as needed. 178.It Fn mutex_ownable "mtx" 179.Pp 180When compiled with 181.Dv LOCKDEBUG 182(see 183.Xr options 4 ) , 184ensure that the current process can successfully acquire 185.Ar mtx . 186If 187.Ar mtx 188is already owned by the current process, the system will panic 189with a "locking against myself" error. 190.Pp 191This function is needed because 192.Fn mutex_owned 193does not differentiate if a spin mutex is owned by the current process 194vs owned by another process. 195.Fn mutex_ownable 196is reasonably heavy-weight, and should only be used with 197.Xr KDASSERT 9 . 198.It Fn mutex_owned "mtx" 199.Pp 200For adaptive mutexes, return non-zero if the current LWP holds the mutex. 201For spin mutexes, return non-zero if the mutex is held, potentially by the 202current processor. 203Otherwise, return zero. 204.Pp 205.Fn mutex_owned 206is provided for making diagnostic checks to verify that a lock is held. 207For example: 208.Bd -literal 209 KASSERT(mutex_owned(&driver_lock)); 210.Ed 211.Pp 212It should not be used to make locking decisions at run time. 213For spin mutexes, it must not be used to verify that a lock is not held. 214.It Fn mutex_spin_enter "mtx" 215.Pp 216Equivalent to 217.Fn mutex_enter , 218but may only be used when it is known that 219.Ar mtx 220is a spin mutex. 221On some architectures, this can substantially reduce the cost of acquiring 222a spin mutex. 223.It Fn mutex_spin_exit "mtx" 224.Pp 225Equivalent to 226.Fn mutex_exit , 227but may only be used when it is known that 228.Ar mtx 229is a spin mutex. 230On some architectures, this can substantially reduce the cost of releasing 231a spin mutex. 232.It Fn mutex_tryenter "mtx" 233.Pp 234Try to acquire a mutex, but do not block if the mutex is already held. 235Returns non-zero if the mutex was acquired, or zero if the mutex was 236already held. 237.Pp 238.Fn mutex_tryenter 239can be used as an optimization when acquiring locks in the wrong order. 240For example, in a setting where the convention is that 241.Dv first_lock 242must be acquired before 243.Dv second_lock , 244the following can be used to optimistically lock in reverse order: 245.Bd -literal 246 /* We hold second_lock, but not first_lock. */ 247 KASSERT(mutex_owned(&second_lock)); 248 249 if (!mutex_tryenter(&first_lock)) { 250 /* Failed to get it - lock in the correct order. */ 251 mutex_exit(&second_lock); 252 mutex_enter(&first_lock); 253 mutex_enter(&second_lock); 254 255 /* 256 * We may need to recheck any conditions the code 257 * path depends on, as we released second_lock 258 * briefly. 259 */ 260 } 261.Ed 262.El 263.Sh CODE REFERENCES 264The core of the mutex implementation is in 265.Pa sys/kern/kern_mutex.c . 266.Pp 267The header file 268.Pa sys/sys/mutex.h 269describes the public interface, and interfaces that machine-dependent 270code must provide to support mutexes. 271.Sh SEE ALSO 272.Xr atomic_ops 3 , 273.Xr membar_ops 3 , 274.Xr lockstat 8 , 275.Xr condvar 9 , 276.Xr kpreempt 9 , 277.Xr rwlock 9 , 278.Xr spl 9 279.Pp 280.Rs 281.%A Jim Mauro 282.%A Richard McDougall 283.%T Solaris Internals: Core Kernel Architecture 284.%I Prentice Hall 285.%D 2001 286.%O ISBN 0-13-022496-0 287.Re 288.Sh HISTORY 289The mutex primitives first appeared in 290.Nx 5.0 . 291