.\"
.\" Copyright (c) 2006 The DragonFly Project.  All rights reserved.
.\"
.\" Redistribution and use in source and binary forms, with or without
.\" modification, are permitted provided that the following conditions
.\" are met:
.\"
.\" 1. Redistributions of source code must retain the above copyright
.\"    notice, this list of conditions and the following disclaimer.
.\" 2. Redistributions in binary form must reproduce the above copyright
.\"    notice, this list of conditions and the following disclaimer in
.\"    the documentation and/or other materials provided with the
.\"    distribution.
.\" 3. Neither the name of The DragonFly Project nor the names of its
.\"    contributors may be used to endorse or promote products derived
.\"    from this software without specific, prior written permission.
.\"
.\" THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
.\" ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
.\" LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
.\" FOR A PARTICULAR PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE
.\" COPYRIGHT HOLDERS OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
.\" INCIDENTAL, SPECIAL, EXEMPLARY OR CONSEQUENTIAL DAMAGES (INCLUDING,
.\" BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
.\" LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED
.\" AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
.\" OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT
.\" OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
.\" SUCH DAMAGE.
.\"
.\" $DragonFly: src/share/man/man9/spinlock.9,v 1.3 2006/06/01 17:05:01 dillon Exp $
.\"
.Dd May 27, 2006
.Dt SPINLOCK 9
.Os
.Sh NAME
.Nm spin_init ,
.Nm spin_lock_rd ,
.Nm spin_lock_rd_quick ,
.Nm spin_lock_wr ,
.Nm spin_lock_wr_quick ,
.Nm spin_trylock_wr ,
.Nm spin_uninit ,
.Nm spin_unlock_rd ,
.Nm spin_unlock_rd_quick ,
.Nm spin_unlock_wr ,
.Nm spin_unlock_wr_quick
.Nd core spinlocks
.Sh SYNOPSIS
.In sys/spinlock.h
.In sys/spinlock2.h
.Ft void
.Fn spin_init "struct spinlock *mtx"
.Ft void
.Fn spin_uninit "struct spinlock *mtx"
.Ft void
.Fn spin_lock_rd "struct spinlock *mtx"
.Ft void
.Fn spin_lock_rd_quick "globaldata_t gd" "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_rd "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_rd_quick "globaldata_t gd" "struct spinlock *mtx"
.Ft void
.Fn spin_lock_wr "struct spinlock *mtx"
.Ft void
.Fn spin_lock_wr_quick "globaldata_t gd" "struct spinlock *mtx"
.Ft boolean_t
.Fn spin_trylock_wr "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_wr "struct spinlock *mtx"
.Ft void
.Fn spin_unlock_wr_quick "globaldata_t gd" "struct spinlock *mtx"
.Sh DESCRIPTION
The
.Fa spinlock
structure and call API are defined in the
.In sys/spinlock.h
and
.In sys/spinlock2.h
header files, respectively.
.Pp
The
.Fn spin_init
function initializes a new
.Fa spinlock
structure for use.
The structure is cleaned up with
.Fn spin_uninit
when it is no longer needed.
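.Pp
The following is a minimal sketch of typical initialization and
cleanup; the
.Fa foo_softc
structure and the surrounding functions are illustrative only and are
not part of the spinlock API.
.Bd -literal -offset indent
#include <sys/spinlock.h>
#include <sys/spinlock2.h>

struct foo_softc {              /* illustrative structure only */
        struct spinlock sc_spin;        /* protects sc_count */
        int             sc_count;
};

static void
foo_init(struct foo_softc *sc)
{
        spin_init(&sc->sc_spin);        /* prepare the spinlock for use */
        sc->sc_count = 0;
}

static void
foo_destroy(struct foo_softc *sc)
{
        spin_uninit(&sc->sc_spin);      /* clean up when no longer needed */
}
.Ed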
.Pp
The
.Fn spin_lock_rd
function obtains a shared
.Em read-only
spinlock.
A thread may hold only one shared lock at a time, and may not hold any
exclusive locks while holding a shared lock.
A shared spinlock can be held by multiple CPUs concurrently.
If a thread attempts to obtain an exclusive spinlock while shared
references exist it will spin until the shared references go away.
No new shared references will be allowed (that is, new shared requests
will also spin) while the exclusive spinlock is being acquired.
If you have the current CPU's
.Fa globaldata
pointer in hand you can call
.Fn spin_lock_rd_quick ,
but most code will just call the normal version.
Shared spinlocks reserve a bit in the spinlock's memory for each CPU
and do not clear the bit once set.
This means that once set, a shared spinlock does not need to issue a
locked read-modify-write bus cycle to the spinlock's memory, which in
turn greatly reduces conflicts between CPU caches.
The bit is cleared via a different mechanism only when an exclusive
spinlock is acquired.
The result is extremely low overheads even when a shared spinlock is
being operated upon concurrently by multiple CPUs.
.Pp
A previously obtained shared spinlock is released by calling either
.Fn spin_unlock_rd
or
.Fn spin_unlock_rd_quick .
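.Pp
For example, a read-only accessor for the illustrative
.Fa foo_softc
structure from the earlier sketch could use a shared spinlock as
follows.
.Bd -literal -offset indent
static int
foo_get_count(struct foo_softc *sc)
{
        int count;

        spin_lock_rd(&sc->sc_spin);     /* shared, read-only hold */
        count = sc->sc_count;           /* read the protected field */
        spin_unlock_rd(&sc->sc_spin);
        return (count);
}
.Ed
If the current CPU's
.Fa globaldata
pointer is already in hand, for example as
.Va mycpu ,
the same accesses can be written with
.Fn spin_lock_rd_quick
and
.Fn spin_unlock_rd_quick ,
passing the
.Fa globaldata
pointer as the first argument.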
.Pp
The
.Fn spin_lock_wr
function obtains an exclusive
.Em read-write
spinlock.
A thread may hold any number of exclusive spinlocks but should always
be mindful of ordering deadlocks.
The
.Fn spin_trylock_wr
function will return
.Dv TRUE
if the spinlock was successfully obtained and
.Dv FALSE
if it was not.
If you have the current CPU's
.Fa globaldata
pointer in hand you can call
.Fn spin_lock_wr_quick ,
but most code will just call the normal version.
A spinlock used only for exclusive access has about the same overhead
as a mutex based on a locked bus cycle.
When used in a mixed shared/exclusive environment, however, additional
overhead may be incurred to obtain the exclusive spinlock.
Because shared spinlocks are left intact even after being released
(to optimize shared spinlock performance), the exclusive spinlock code
must run through any shared bits it finds in the spinlock, clear them,
and check the related CPU's
.Fa globaldata
structure to determine whether it needs to spin or not.
.Pp
A previously obtained exclusive spinlock is released by calling either
.Fn spin_unlock_wr
or
.Fn spin_unlock_wr_quick .
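.Pp
A corresponding sketch for exclusive use of the same illustrative
structure, including the non-blocking
.Fn spin_trylock_wr
variant, is shown below.
.Bd -literal -offset indent
static void
foo_set_count(struct foo_softc *sc, int count)
{
        spin_lock_wr(&sc->sc_spin);     /* exclusive hold */
        sc->sc_count = count;
        spin_unlock_wr(&sc->sc_spin);
}

static boolean_t
foo_try_set_count(struct foo_softc *sc, int count)
{
        if (spin_trylock_wr(&sc->sc_spin) == FALSE)
                return (FALSE);         /* lock is busy, caller may retry */
        sc->sc_count = count;
        spin_unlock_wr(&sc->sc_spin);
        return (TRUE);
}
.Ed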
.Sh IMPLEMENTATION NOTES
A thread may not hold any spinlock across a blocking condition or
thread switch.
LWKT tokens should be used for situations where you want an exclusive
run-time lock that will survive a blocking condition or thread switch.
Tokens will be automatically unlocked when a thread switches away and
relocked when the thread is switched back in.
If you want a lock that survives a blocking condition or thread switch
without being released, use
.Xr lockmgr 9
locks or LWKT reader/writer locks.
.Pp
.Dx Ap s
core spinlocks should only be used around small, contained sections of
code; for example, to manage a reference count or to implement higher
level locking mechanisms.
Both the token code and the
.Xr lockmgr 9
code use exclusive spinlocks internally.
Core spinlocks should not be used around large chunks of code.
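.Pp
As a sketch of the reference count case mentioned above (the structure
and function names are illustrative only, not an existing kernel
interface):
.Bd -literal -offset indent
struct foo_ref {                /* illustrative structure only */
        struct spinlock r_spin;         /* protects r_refs */
        int             r_refs;
};

static void
foo_ref_hold(struct foo_ref *ref)
{
        spin_lock_wr(&ref->r_spin);     /* short, contained section */
        ++ref->r_refs;
        spin_unlock_wr(&ref->r_spin);
}

static int
foo_ref_drop(struct foo_ref *ref)
{
        int last;

        spin_lock_wr(&ref->r_spin);
        last = (--ref->r_refs == 0);    /* was this the last reference? */
        spin_unlock_wr(&ref->r_spin);
        return (last);                  /* non-zero: caller may clean up */
}
.Ed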
.Pp
Holding one or more spinlocks will disable thread preemption by
another thread (e.g. preemption by an interrupt thread), but will not
disable FAST interrupts or IPIs.
If you wish to disable FAST interrupts and IPIs you need to enter a
critical section prior to obtaining the spinlock.
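.Pp
The following sketch shows that ordering on the illustrative structure
used earlier, assuming the
.Fn crit_enter
and
.Fn crit_exit
critical section interface.
.Bd -literal -offset indent
static void
foo_modify_count(struct foo_softc *sc)
{
        crit_enter();                   /* block FAST interrupts and IPIs */
        spin_lock_wr(&sc->sc_spin);     /* then obtain the spinlock */
        ++sc->sc_count;
        spin_unlock_wr(&sc->sc_spin);
        crit_exit();                    /* leave the critical section last */
}
.Ed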
.Pp
Currently, FAST interrupts, including IPI messages, are not allowed to
acquire any spinlocks.
It is possible to work around this if
.Va mycpu->gd_spinlocks_wr
and
.Va mycpu->gd_spinlocks_rd
are both 0.
If one or the other is not zero, the FAST interrupt or IPI cannot
acquire any spinlocks without risking a deadlock, even if the
spinlocks in question are not related.
.Pp
A thread may hold any number of exclusive
.Em read-write
spinlocks.
However, a thread may hold only one shared
.Em read-only
spinlock, and may not hold any exclusive locks while it is holding
that one shared lock.
This requirement is due to the method exclusive spinlocks use to
determine when they can clear cached shared bits in the lock.
If an exclusive lock is acquired while holding shared locks, a
deadlock can occur even if the locks are unrelated.
Always be mindful of potential deadlocks.
.Pp
Spinlocks spin.
A thread will not block, switch away, or lose its critical section
while obtaining or releasing a spinlock.
Spinlocks do not use IPIs or other mechanisms.
They are considered to be a very low level mechanism.
.Pp
If a spinlock cannot be obtained after one second, a warning will be
printed on the console.
If a system panic occurs, spinlocks will fail after one second in
order to allow the panic operation to proceed.
.Pp
If you have a complex structure such as a
.Xr vnode 9
which contains a token or
.Xr lockmgr 9
lock, it is legal to directly access the internal spinlock embedded
in those structures.
.Sh SEE ALSO
.Xr lockmgr 9 ,
.Xr lwkt 9
.Sh HISTORY
A
.Nm spinlock
implementation first appeared in
.Dx 1.3 .
.Sh AUTHORS
.An -nosplit
The original
.Nm spinlock
implementation was written by
.An Jeffrey M. Hsu
and was later extended by
.An Matthew Dillon .
This manual page was written by
.An Matthew Dillon
and
.An Sascha Wildner .