# 568eb77e | 10-Apr-2022 | riastradh <riastradh@NetBSD.org>
pthread: Nix trailing whitespace.

# 7adb4107 | 12-Feb-2022 | riastradh <riastradh@NetBSD.org>
libpthread: Move namespacing include to top of .c files.
Stuff like libc's namespace.h, or atomic_op_namespace.h, which does namespacing tricks like `#define atomic_cas_uint _atomic_cas_uint', has to go at the top of each .c file. If it goes in the middle, it might be too late to affect the declarations, and result in compile errors.
I tripped over this by including <sys/atomic.h> in mips <machine/lock.h>.
(Maybe we should create a new pthread_namespace.h file for the purpose, but this'll do for now.)

# 2b4d5392 | 11-Jun-2020 | ad <ad@NetBSD.org>
Adjust memory barriers.

# 62e0939e | 10-Jun-2020 | ad <ad@NetBSD.org>
- Make pthread_condvar and pthread_mutex work on the stack rather than in pthread_t, so there's less chance of bad things happening if someone calls (for example) pthread_cond_broadcast() from a signal handler.
- Remove all the deferred waiter handling except for the one case that really matters, which is transferring waiters from condvar -> mutex on wakeup, and do that by splicing the condvar's waiters onto the mutex.
- Remove the mutex waiters bit as it's another complication that's not strictly needed.

# 051faad4 | 03-Jun-2020 | ad <ad@NetBSD.org>
Deal with a couple of problems with threads being awoken early due to timeouts or cancellation where:
- The restarting thread calls _lwp_exit() before another thread gets around to waking it with _lwp_unpark(), leading to ESRCH (observed by joerg@). (I may have removed a similar check mistakenly over the weekend.)
- The restarting thread considers itself gone off the sleep queue, but at the same time another thread is part way through waking it and hasn't yet completed that operation by setting thread->pt_mutexwait = 0. I think that could potentially have led to the list of waiters getting messed up given the right circumstances.

|
#
06d492d1 |
| 01-Jun-2020 |
ad <ad@NetBSD.org> |
In the interests of reliability simplify waiter handling more and redo condvars to manage the list of waiters with atomic ops.
|
# bc77394c | 16-May-2020 | ad <ad@NetBSD.org>
- Try to eliminate a hang in "parked" I've been seeing while stress testing. Centralise wakeup of deferred waiters in pthread__clear_waiters() and use it throughout libpthread. Make fewer assumptions. Be more conservative in pthread_mutex when dealing with pending waiters.
- Remove the "hint" argument everywhere since the kernel doesn't use it any more.

# 331480e6 | 16-Feb-2020 | kamil <kamil@NetBSD.org>
Revert "Enhance the pthread(3) + malloc(3) init model"
It is reported to hang on aarch64 with gzip.

# 5fa60982 | 15-Feb-2020 | kamil <kamil@NetBSD.org>
Enhance the pthread(3) + malloc(3) init model
Separate the pthread_atfork(3) call from pthread_tsd_init() and move it into a distinct function.
Call the late TSD initialization routine inside pthread__init(), just after "pthread_atfork(NULL, NULL, pthread__fork_callback);".
Document that malloc(3) initialization is now controlled again and called during the first pthread_atfork(3) call.
Remove #if 0 code from pthread_mutex.c as we no longer initialize malloc prematurely.

# 31ebab19 | 01-Feb-2020 | kamil <kamil@NetBSD.org>
Revert previous
'git grep' breaks now.

# f93ad707 | 01-Feb-2020 | kamil <kamil@NetBSD.org>
Remove 'ifdef 0' hacks
It is no longer needed, as the proper fix avoiding premature malloc() has landed in the sources.

# 260b3a17 | 31-Jan-2020 | kamil <kamil@NetBSD.org>
Refactor libpthread checks for invalid arguments
Switch from manual functions to pthread__error().

# bbb79fe8 | 31-Jan-2020 | christos <christos@NetBSD.org>
In the same spirit as the previous pthread_mutex_init change for jemalloc, make pthread_mutexattr_init always do a full initialization, so that the attribute that will be used later, when we become threaded, is properly initialized.

# 12ee584a | 29-Jan-2020 | kamil <kamil@NetBSD.org>
Use pthread_mutexattr_t and pthread_mutex_t magic fields
Validate _PT_MUTEX_MAGIC in pthread_mutex_t and _PT_MUTEXATTR_MAGIC in pthread_mutexattr_t accordingly.

# d48cac51 | 29-Jan-2020 | kamil <kamil@NetBSD.org>
Mark destroyed pthread_mutexattr_t as dead

# e3546949 | 25-Jan-2020 | ad <ad@NetBSD.org>
Adjustment to previous: don't call _lwp_unpark_all() with nwaiters == 0.

# 769155be | 25-Jan-2020 | ad <ad@NetBSD.org>
pthread__mutex_unlock_slow(): ignore the DEFERRED bit. Its only purpose is to get the thread to go through the slow path. If there are waiters, process them there and then. Should not affect well-behaved apps. Maybe of help for:
PR bin/50350: rump/rumpkern/t_sp/stress_{long,short} fail on Core 2 Quad

# 51002188 | 13-Jan-2020 | ad <ad@NetBSD.org>
Rip out some very ambitious optimisations around pthread_mutex that don't buy much. This stuff is hard enough to get right in the kernel, let alone userspace, and I don't trust that it's right.

# 2cb70116 | 05-Mar-2019 | christos <christos@NetBSD.org>
Jemalloc initializes mutexes before we become threaded and expects to use them later.

# 85d957d4 | 08-Dec-2017 | kre <kre@NetBSD.org>
Deal with more lwp_park() timestamp unconsting

# 6e03f600 | 31-Oct-2016 | christos <christos@NetBSD.org>
Don't spin if we already own the mutex, otherwise we will get stuck spinning forever; fixes the timemutex{1,2} tests.

# 3b2c691c | 17-Jul-2016 | skrll <skrll@NetBSD.org>
Use an anonymous union for ptm_ceiling and the old __pthread_spin_t field to maintain backward compatibility and fix the hppa build. hppa has a non-integer __pthread_spin_t type.

# d946c609 | 16-Jul-2016 | skrll <skrll@NetBSD.org>
KNF

# 7cf7644f | 03-Jul-2016 | christos <christos@NetBSD.org>
GSoC 2016 Charles Cui: Implement thread priority protection based on work by Andy Doran. Also document the get/set pshared thread calls as not implemented, and add a skeleton implementation that is disabled. XXX: document _sched_protect(2).

# 2e17c78b | 03-Feb-2014 | rmind <rmind@NetBSD.org>
pthread__mutex_lock_slow: fix the handling of a potential race with the non-interlocked CAS in the fast unlock path -- it is unsafe to test for the waiters-bit while the owner thread is running; we have to spin for the owner or its state change to be sure about the presence of the bit. Split off the logic into the pthread__mutex_setwaiters() routine.
This is a partial fix to the named lockup problem (also see PR/44756). It seems there is another race which can be reproduced on faster CPUs.