.\"
.\" Copyright (c) 2006 The DragonFly Project.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\"
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in
.\"    the documentation and/or other materials provided with the
.\"    distribution.
.\" 3. Neither the name of The DragonFly Project nor the names of its
.\"    contributors may be used to endorse or promote products derived
.\"    from this software without specific, prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.Dd April 10, 2010
.Dt SPINLOCK 9
.Os
.Sh NAME
.Nm spin_init ,
.Nm spin_lock_rd ,
.Nm spin_lock_rd_quick ,
.Nm spin_lock_wr ,
.Nm spin_lock_wr_quick ,
.Nm spin_trylock_wr ,
.Nm spin_uninit ,
.Nm spin_unlock_rd ,
.Nm spin_unlock_rd_quick ,
.Nm spin_unlock_wr ,
.Nm spin_unlock_wr_quick
.Nd core spinlocks
.Sh SYNOPSIS
.In sys/spinlock.h
.In sys/spinlock2.h
.Ft void
.Fn spin_init "struct spinlock *mtx"
.Ft void
.Fn spin_uninit "struct spinlock *mtx"
.Ft void
.Fn spin_lock_rd "struct spinlock *mtx"
.Ft void
.Fn spin_lock_rd_quick "globaldata_t gd" "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_rd "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_rd_quick "globaldata_t gd" "struct spinlock *mtx"
.Ft void
.Fn spin_lock_wr "struct spinlock *mtx"
.Ft void
.Fn spin_lock_wr_quick "globaldata_t gd" "struct spinlock *mtx"
.Ft boolean_t
.Fn spin_trylock_wr "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_wr "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_wr_quick "globaldata_t gd" "struct spinlock *mtx"
.Sh DESCRIPTION
The
.Fa spinlock
structure and call API are defined in the
.In sys/spinlock.h
and
.In sys/spinlock2.h
header files, respectively.
.Pp
The
.Fn spin_init
function initializes a new
.Fa spinlock
structure for use.
The structure is cleaned up with
.Fn spin_uninit
when it is no longer needed.
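.Pp
The following minimal sketch is not taken from the kernel sources; the
.Vt struct mydata
type and its fields are illustrative only.
It shows a spinlock embedded in a structure, set up with
.Fn spin_init
and torn down with
.Fn spin_uninit :
.Bd -literal -offset indent
#include <sys/spinlock.h>
#include <sys/spinlock2.h>

struct mydata {
        struct spinlock md_spin;        /* protects md_count */
        int             md_count;
};

static void
mydata_init(struct mydata *md)
{
        md->md_count = 0;
        spin_init(&md->md_spin);        /* make the spinlock usable */
}

static void
mydata_done(struct mydata *md)
{
        spin_uninit(&md->md_spin);      /* spinlock no longer needed */
}
.Ed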
.Pp
The
.Fn spin_lock_rd
function obtains a shared
.Em read-only
spinlock.
A thread may hold only one shared lock at a time, and may not acquire any
new exclusive locks while holding a shared lock (but may already be holding
some).
A shared spinlock can be held by multiple CPUs concurrently.
If a thread attempts to obtain an exclusive spinlock while shared
references from other CPUs exist, it will spin until the shared references
go away.
No new shared references will be allowed (that is, new shared requests
will also spin) while the exclusive spinlock is being acquired.
If you have the current CPU's
.Fa globaldata
pointer in hand, you can call
.Fn spin_lock_rd_quick ,
but most code will just call the normal version.
Shared spinlocks reserve a bit in the spinlock's memory for each CPU
and do not clear the bit once set.
This means that once set, a shared spinlock does not need to issue a
locked read-modify-write bus cycle to the spinlock's memory, which in
turn greatly reduces conflicts between CPU caches.
The bit is cleared via a different mechanism only when an exclusive
spinlock is acquired.
The result is extremely low overhead even when a shared spinlock is
being operated upon concurrently by multiple CPUs.
.Pp
A previously obtained shared spinlock is released by calling either
.Fn spin_unlock_rd
or
.Fn spin_unlock_rd_quick .
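.Pp
As an illustration only (the
.Vt struct table
and
.Vt struct entry
types below are hypothetical), a read-mostly lookup may scan a list under
the shared form of a spinlock while writers modify the list under the
exclusive form:
.Bd -literal -offset indent
struct entry {
        struct entry    *e_next;
        int             e_id;
};

struct table {
        struct spinlock t_spin;         /* protects t_first list */
        struct entry    *t_first;
};

static int
table_contains(struct table *tbl, int id)
{
        struct entry *ent;
        int found = 0;

        spin_lock_rd(&tbl->t_spin);     /* one shared lock at a time */
        for (ent = tbl->t_first; ent != NULL; ent = ent->e_next) {
                if (ent->e_id == id) {
                        found = 1;
                        break;
                }
        }
        spin_unlock_rd(&tbl->t_spin);
        return (found);
}
.Ed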
.Pp
The
.Fn spin_lock_wr
function obtains an exclusive
.Em read-write
spinlock.
A thread may hold any number of exclusive spinlocks but should always
be mindful of ordering deadlocks.
Exclusive spinlocks can only be safely
acquired if no shared spinlocks are held.
The
.Fn spin_trylock_wr
function will return
.Dv TRUE
if the spinlock was successfully obtained and
.Dv FALSE
if it was not.
If you have the current CPU's
.Fa globaldata
pointer in hand, you can call
.Fn spin_lock_wr_quick ,
but most code will just call the normal version.
A spinlock used only for exclusive access has about the same overhead
as a mutex based on a locked bus cycle.
When used in a mixed shared/exclusive environment, however, additional
overhead may be incurred to obtain the exclusive spinlock.
Because shared spinlocks are left intact even after they are released (to
optimize shared spinlock performance), the exclusive spinlock code
must run through any shared bits it finds in the spinlock, clear them,
and check the related CPU's
.Fa globaldata
structure to determine whether it needs to spin or not.
.Pp
A previously obtained exclusive spinlock is released by calling either
.Fn spin_unlock_wr
or
.Fn spin_unlock_wr_quick .
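.Pp
Continuing the hypothetical table example from above, a writer takes the
exclusive form of the same spinlock, and
.Fn spin_trylock_wr
may be used when spinning on a contended lock is not acceptable:
.Bd -literal -offset indent
static void
table_insert(struct table *tbl, struct entry *ent)
{
        spin_lock_wr(&tbl->t_spin);     /* spins until exclusive access */
        ent->e_next = tbl->t_first;
        tbl->t_first = ent;
        spin_unlock_wr(&tbl->t_spin);
}

static int
table_insert_try(struct table *tbl, struct entry *ent)
{
        if (spin_trylock_wr(&tbl->t_spin) == FALSE)
                return (0);             /* contended, caller retries later */
        ent->e_next = tbl->t_first;
        tbl->t_first = ent;
        spin_unlock_wr(&tbl->t_spin);
        return (1);
}
.Ed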
.Sh IMPLEMENTATION NOTES
A thread may not hold any spinlock across a blocking condition or
thread switch.
LWKT tokens should be used for situations where you want an exclusive
run-time lock that will survive a blocking condition or thread switch.
Tokens will be automatically unlocked when a thread switches away and
relocked when the thread is switched back in.
If you want a lock that survives a blocking condition or thread switch
without being released, use
.Xr lockmgr 9
locks or LWKT reader/writer locks.
.Pp
.Dx Ap s
core spinlocks should only be used around small, contained sections of
code, for example to manage a reference count or to implement higher
level locking mechanisms.
Both the token code and the
.Xr lockmgr 9
code use exclusive spinlocks internally.
Core spinlocks should not be used around large chunks of code.
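.Pp
A reference count is one such small, contained use; the
.Vt struct obj
type and its fields in this sketch are illustrative only:
.Bd -literal -offset indent
struct obj {
        struct spinlock o_spin;         /* protects o_refs */
        int             o_refs;
};

static void
obj_hold(struct obj *obj)
{
        spin_lock_wr(&obj->o_spin);
        ++obj->o_refs;
        spin_unlock_wr(&obj->o_spin);
}

static int
obj_drop(struct obj *obj)
{
        int last;

        spin_lock_wr(&obj->o_spin);
        last = (--obj->o_refs == 0);
        spin_unlock_wr(&obj->o_spin);
        return (last);                  /* non-zero: caller destroys obj */
}
.Ed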
.Pp
Holding one or more spinlocks will disable thread preemption by
another thread (e.g. preemption by an interrupt thread), but will not
disable FAST interrupts or IPIs.
This means that a FAST interrupt can still operate during a spinlock,
and any threaded interrupt (which is basically all interrupts except
the clock interrupt) will still be scheduled for later execution, but
will not be able to preempt the current thread.
If you wish to disable FAST interrupts and IPIs, you need to enter a
critical section prior to obtaining the spinlock.
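.Pp
A sketch of that combination, using
.Xr crit_enter 9
and a purely illustrative spinlock named
.Va gbl_spin :
.Bd -literal -offset indent
crit_enter();                   /* blocks FAST interrupts and IPIs */
spin_lock_wr(&gbl_spin);        /* then obtain the spinlock */
/* ... small critical section ... */
spin_unlock_wr(&gbl_spin);
crit_exit();
.Ed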
.Pp
Currently, FAST interrupts, including IPI messages, are not allowed to
acquire any spinlocks.
It is possible to work around this if
.Va mycpu->gd_spinlocks_wr
and
.Va mycpu->gd_spinlocks_rd
are both zero.
If one
or the other is not zero, the FAST interrupt or IPI cannot acquire
any spinlocks without risking a deadlock, even if the spinlocks in
question are not related.
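.Pp
A FAST interrupt or IPI handler relying on this workaround would have to
test both counters first; the following is a sketch only and
.Va fast_spin
is illustrative:
.Bd -literal -offset indent
/* ... inside a FAST interrupt or IPI handler ... */
if (mycpu->gd_spinlocks_wr == 0 && mycpu->gd_spinlocks_rd == 0) {
        /* no spinlocks held on this cpu, safe to acquire one here */
        spin_lock_wr(&fast_spin);
        /* ... */
        spin_unlock_wr(&fast_spin);
} else {
        /* defer the work; acquiring a spinlock here risks deadlock */
}
.Ed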
.Pp
A thread may hold any number of exclusive
.Em read-write
spinlocks.
However, a thread may only hold one shared
.Em read-only
spinlock, and may not acquire any new exclusive locks while it is holding
that one shared lock.
This requirement is due to the method exclusive
spinlocks use to determine when they can clear cached shared bits in
the lock.
If an exclusive lock is acquired while a shared lock is held,
a deadlock can occur even if the locks are unrelated.
Always be mindful of potential deadlocks.
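.Pp
When both kinds of lock are needed, the exclusive spinlocks must therefore
be acquired first and the single shared spinlock last, as in the following
sketch (the locks are illustrative):
.Bd -literal -offset indent
spin_lock_wr(&a_spin);          /* exclusive spinlocks first ... */
spin_lock_wr(&b_spin);
spin_lock_rd(&c_spin);          /* ... then at most one shared spinlock */
/* ... */
spin_unlock_rd(&c_spin);
spin_unlock_wr(&b_spin);
spin_unlock_wr(&a_spin);
.Ed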
.Pp
Spinlocks spin.
A thread will not block, switch away, or lose its critical section
while obtaining or releasing a spinlock.
Spinlocks do not use IPIs or other mechanisms.
They are considered to be a very low level mechanism.
.Pp
If a spinlock cannot be obtained after one second, a warning will be
printed on the console.
If a system panic occurs, spinlocks will succeed after one second in
order to allow the panic operation to proceed.
.Pp
If you have a complex structure such as a
.Xr vnode 9
which contains a token or
.Xr lockmgr 9
lock, it is legal to directly access the internal spinlock embedded
in those structures for other purposes as long as the spinlock is not
held when you issue the token or
.Xr lockmgr 9
operation.
.Sh FILES
The uncontended path of the spinlock implementation is in
.Pa /sys/sys/spinlock2.h .
The core of the spinlock implementation is in
.Pa /sys/kern/kern_spinlock.c .
.Sh SEE ALSO
.Xr crit_enter 9 ,
.Xr lockmgr 9 ,
.Xr serializer 9
.Sh HISTORY
A
.Nm spinlock
implementation first appeared in
.Dx 1.3 .
.Sh AUTHORS
.An -nosplit
The original
.Nm spinlock
implementation was written by
.An Jeffrey M. Hsu
and was later extended by
.An Matthew Dillon .
This manual page was written by
.An Matthew Dillon
and
.An Sascha Wildner .