#
ef50770e |
| 08-Jun-2020 |
François Tigeot <ftigeot@wolfpond.org> |
drm/linux: Add mutex_lock_nested() and mutex_trylock_recursive()
|
#
c6002f72 |
| 17-Feb-2019 |
François Tigeot <ftigeot@wolfpond.org> |
drm/linux: Add asm/processor.h
|
#
3558045e |
| 21-Jul-2018 |
Sascha Wildner <saw@online.de> |
kernel/drm: Use more straightforward lockmgr_try().
|
#
d6aa1cc5 |
| 06-May-2018 |
François Tigeot <ftigeot@wolfpond.org> |
drm: Sync include directives with Linux
* Add a few key include/asm or include/linux headers
* Move some code from .h to .c files in order to avoid clashes between the DragonFly and Linux variants of kmalloc() and kfree()
|
#
39cfddd2 |
| 27-Nov-2017 |
François Tigeot <ftigeot@wolfpond.org> |
drm/linux: Add or improve various header files
|
#
3b6a19b2 |
| 24-Oct-2017 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Refactor lockmgr()
* Seriously refactor lockmgr() so we can use atomic_fetchadd_*() for shared locks and reduce unnecessary atomic ops and atomic op loops.
The main win here is being able to use atomic_fetchadd_*() when acquiring and releasing shared locks. A simple fstat() loop (which utilizes a LK_SHARED lockmgr lock on the vnode) improves from 191ns to around 110ns per loop with 32 concurrent threads (on a 16-core/32-thread Xeon).
* To accomplish this, the 32-bit lk_count field becomes 64 bits. The shared count is separated into the high 32 bits, allowing it to be manipulated for both blocking shared requests and the shared lock count field. The low count bits are used for exclusive locks. Control bits are adjusted to manage lockmgr features (a conceptual sketch follows after this message).
LKC_SHARED Indicates shared lock count is active, else excl lock count. Can predispose the lock when the related count is 0 (does not have to be cleared, for example).
LKC_UPREQ Queued upgrade request. Automatically granted by releasing entity (UPREQ -> ~SHARED|1).
LKC_EXREQ Queued exclusive request (only when lock held shared). Automatically granted by releasing entity (EXREQ -> ~SHARED|1).
LKC_EXREQ2 Aggregated exclusive request. When EXREQ cannot be obtained due to the lock being held exclusively or EXREQ already being queued, EXREQ2 is flagged for wakeup/retries.
LKC_CANCEL Cancel API support
LKC_SMASK Shared lock count mask (LKC_SCOUNT increments).
LKC_XMASK Exclusive lock count mask (+1 increments)
The 'no lock' condition occurs when LKC_XMASK is 0 and LKC_SMASK is 0, regardless of the state of LKC_SHARED.
* Lockmgr still supports exclusive priority over shared locks. The semantics have slightly changed. The priority mechanism only applies to the EXREQ holder. Once an exclusive lock is obtained, any blocking shared or exclusive locks will have equal priority until the exclusive lock is released. Once released, shared locks can squeeze in, but then the next pending exclusive lock will assert its priority over any new shared locks when it wakes up and loops.
This isn't quite what I wanted, but it seems to work quite well. I had to make a trade-off in the EXREQ lock-grant mechanism to improve performance.
* In addition, we use atomic_fcmpset_long() instead of atomic_cmpset_long() to reduce cache line flip flopping at least a little.
* Remove lockcount() and lockcountnb(), which tried to count lock refs. Replace with lockinuse(), which simply tells the caller whether the lock is referenced or not.
* Expand some of the copyright notices (years and authors) for major rewrites. There are really quite a few more that should be updated; closer attention needs to be paid to these adjustments.
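As a rough illustration of the shared-count scheme described above: the bit values, type and helper below are invented for the example and do not match the real sys/lock.h definitions, and the blocking slow path that enforces EXREQ/UPREQ priority is omitted entirely. It only shows why placing the shared count in the high 32 bits lets a single fetch-add take or drop a shared reference.

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define SK_SHARED  0x0000000080000000ULL  /* lock is in shared mode (made-up bit) */
    #define SK_SCOUNT  0x0000000100000000ULL  /* one shared reference (high 32 bits) */
    #define SK_XMASK   0x000000000000ffffULL  /* exclusive hold count (made-up width) */

    struct sketch_lock {
            _Atomic uint64_t count;
    };

    /*
     * Optimistic shared acquire: bump the shared count with a single
     * fetch-add, then inspect the previous value.  If the lock was not
     * in shared mode (held or wanted exclusively), undo the bump and
     * report failure so the caller can fall into a blocking slow path
     * that also honors exclusive-request priority (omitted here).
     */
    static bool
    sketch_try_shared(struct sketch_lock *lk)
    {
            uint64_t prev = atomic_fetch_add(&lk->count, SK_SCOUNT);

            if ((prev & SK_SHARED) != 0 && (prev & SK_XMASK) == 0)
                    return true;        /* shared reference granted */

            atomic_fetch_sub(&lk->count, SK_SCOUNT);
            return false;               /* contended: slow path required */
    }

The payoff of such a layout is that the common shared acquire/release never needs a compare-and-swap loop; only mode transitions and the exclusive path do, and those retries are where atomic_fcmpset_long() helps reduce cache line traffic.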
|
#
961a6190 |
| 23-Oct-2015 |
François Tigeot <ftigeot@wolfpond.org> |
drm: Add linux/lockdep.h
|
#
1186f36a |
| 02-Jun-2015 |
Matthew Dillon <dillon@apollo.backplane.com> |
drm - Fix deadlock
* mutex_trylock()'s return value was inverted, resulting in a hanging lock. Adjust the macro.
* This should fix multiple reports of deadlocks in i915 (intel).
Reported-by: ftigeot
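For context on the inversion: Linux's mutex_trylock() returns 1 when the lock was acquired and 0 when it was not, while DragonFly's lockmgr() returns 0 on success and a non-zero error such as EBUSY when LK_NOWAIT cannot take the lock. A hedged sketch of the corrected mapping, assuming the compat layer's 'struct mutex' is usable as a DragonFly struct lock (the actual macro in the header may differ):

    #include <sys/param.h>
    #include <sys/lock.h>

    /*
     * lockmgr() == 0 means the lock was obtained; Linux callers expect
     * a truth value (1 on success), so the result has to be negated.
     */
    #define mutex_trylock(lock) \
            (lockmgr((lock), LK_EXCLUSIVE | LK_NOWAIT) == 0)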
|
#
115eefbd |
| 22-May-2015 |
François Tigeot <ftigeot@wolfpond.org> |
drm/linux: Implement DEFINE_MUTEX()
|
#
beb32878 |
| 21-May-2015 |
François Tigeot <ftigeot@wolfpond.org> |
drm: Improve mutex_lock_interruptible() again
It should really be interruptible by signals
Spotted-by: dillon
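A minimal sketch of the intended behavior, assuming lockmgr() honors DragonFly's LK_PCATCH flag so the sleep can be aborted by a signal (the real compat implementation and its exact flag set may differ); Linux callers expect 0 on success and -EINTR when a signal interrupted the wait:

    #include <sys/param.h>
    #include <sys/lock.h>
    #include <sys/errno.h>

    /* 'struct lock *' stands in for the compat layer's struct mutex. */
    static inline int
    mutex_lock_interruptible_sketch(struct lock *lock)
    {
            /*
             * LK_PCATCH lets a pending signal abort the lockmgr() sleep;
             * the resulting non-zero error is reported as -EINTR, which
             * is what Linux callers test for.
             */
            if (lockmgr(lock, LK_EXCLUSIVE | LK_PCATCH) != 0)
                    return -EINTR;

            return 0;
    }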
|
#
17bbf32e |
| 21-May-2015 |
François Tigeot <ftigeot@wolfpond.org> |
drm/linux: Improve the implementation of mutex_lock_interruptible()
|
#
8634e73a |
| 04-May-2015 |
François Tigeot <ftigeot@wolfpond.org> |
drm: Add mutex_trylock() and mutex_lock_interruptible()
|
#
303d234d |
| 02-Jan-2015 |
François Tigeot <ftigeot@wolfpond.org> |
drm: Add the mutex_lock() and mutex_unlock() Linux functions
|
#
2581d4aa |
| 08-Feb-2014 |
François Tigeot <ftigeot@wolfpond.org> |
drm: Implement mutex_is_locked()
|