#
173c062a |
| 06-Dec-2019 |
Bjoern A. Zeeb <bz@FreeBSD.org> |
Improve EPOCH_TRACE
Two changes to EPOCH_TRACE: (1) add a sysctl to suppress the backtrace from epoch_trace_report(). Sometimes the log line for the recursion is enough and the backtrace massively spams the console. (2) In order to be able to go without the backtrace, print not only where the previous occurrence happened, but also where the current one happens. That way we have file:line information for both and can look at them without having to extract line numbers from a backtrace with a debugging tool.
Reviewed by: glebius Sponsored by: Netflix (originally) Differential Revision: https://reviews.freebsd.org/D22641
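To make the recursion report concrete, here is a minimal sketch (not from the commit; the epoch, functions, and code path are hypothetical). With this change, the warning carries file:line for both the outer and the nested entrance:

    #include <sys/param.h>
    #include <sys/epoch.h>

    extern epoch_t example_epoch;   /* hypothetical preemptible epoch */

    void example_outer(void);

    static void
    example_inner(void)
    {
            struct epoch_tracker et;

            /*
             * Nested entrance of an epoch already entered by
             * example_outer(): with EPOCH_TRACE this logs a recursion
             * warning naming the file:line of this entrance and of the
             * outer one.
             */
            epoch_enter_preempt(example_epoch, &et);
            epoch_exit_preempt(example_epoch, &et);
    }

    void
    example_outer(void)
    {
            struct epoch_tracker et;

            epoch_enter_preempt(example_epoch, &et);    /* first entrance */
            example_inner();
            epoch_exit_preempt(example_epoch, &et);
    }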
|
#
7993a104 |
| 22-Nov-2019 |
Conrad Meyer <cem@FreeBSD.org> |
Add explicit SI_SUB_EPOCH
Add explicit SI_SUB_EPOCH, after SI_SUB_TASKQ and before SI_SUB_SMP (EARLY_AP_STARTUP). Rename existing "SI_SUB_TASKQ + 1" to SI_SUB_EPOCH.
epoch(9) consumers cannot epoch_alloc() before SI_SUB_EPOCH:SI_ORDER_SECOND, but likely should allocate before SI_SUB_SMP. Prior to this change, consumers (well, epoch itself, and net/if.c) just open-coded the SI_SUB_TASKQ + 1 order to match epoch.c, but this was fragile.
Reviewed by: mmacy Differential Revision: https://reviews.freebsd.org/D22503
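A minimal sketch of the consumer pattern this enables (the epoch name and variables are hypothetical, assuming the two-argument epoch_alloc(name, flags) of this era):

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/epoch.h>

    static epoch_t example_epoch;

    static void
    example_epoch_init(void *arg __unused)
    {
            example_epoch = epoch_alloc("example", EPOCH_PREEMPT);
    }
    /*
     * SI_SUB_EPOCH:SI_ORDER_ANY runs at or after SI_ORDER_SECOND and
     * still before SI_SUB_SMP, matching the constraints above, without
     * open-coding "SI_SUB_TASKQ + 1".
     */
    SYSINIT(example_epoch_init, SI_SUB_EPOCH, SI_ORDER_ANY,
        example_epoch_init, NULL);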
|
Revision tags: release/12.1.0 |
|
#
5757b59f |
| 29-Oct-2019 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Merge td_epochnest with td_no_sleeping.
Epoch itself doesn't rely on the counter; it is provided merely for sleeping subsystems to check.
- In functions that sleep, use THREAD_CAN_SLEEP() to assert correctness; with EPOCH_TRACE compiled in, print epoch info (see the sketch after this entry).
- _sleep() was the wrong place to put the assertion for epoch; the right place is sleepq_add(), as there are ways to call the latter bypassing _sleep().
- Do not increase td_no_sleeping in non-preemptible epochs: the critical section already triggers all possible safeguards, so the no-sleeping counter is extraneous.
Reviewed by: kib
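The assertion pattern referred to above, sketched (the function and message text are illustrative; per the message, sleepq_add() is where the epoch check really belongs):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/proc.h>

    void example_sleepy_path(void);

    void
    example_sleepy_path(void)
    {
            /* Assert up front that this context is allowed to sleep. */
            KASSERT(THREAD_CAN_SLEEP(),
                ("%s: sleeping prohibited in the current context", __func__));
    }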
|
#
080e9496 |
| 22-Oct-2019 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Allow the epoch tracker to use the very last byte of the stack. Not sure this will help avoid the panic in this function, since it will also use some stack, but it makes the code stricter.
Submitted by: hselasky
|
#
77d70e51 |
| 21-Oct-2019 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Assert that any epoch tracker belongs to the thread stack.
Reviewed by: kib
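A plausible shape of the check, as a sketch (the actual assertion lives in the epoch-enter paths of subr_epoch.c; the helper name is hypothetical, and the 22-Oct-2019 entry above relaxes the upper bound so the tracker may end on the very last byte):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/lock.h>
    #include <sys/proc.h>
    #include <sys/epoch.h>

    void assert_tracker_on_stack(struct thread *td, struct epoch_tracker *et);

    /* Hypothetical helper: assert that "et" sits on td's kernel stack. */
    void
    assert_tracker_on_stack(struct thread *td, struct epoch_tracker *et)
    {
            MPASS((vm_offset_t)et >= td->td_kstack &&
                (vm_offset_t)et + sizeof(struct epoch_tracker) <=
                td->td_kstack + td->td_kstack_pages * PAGE_SIZE);
    }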
|
#
279b9aab |
| 21-Oct-2019 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Remove epoch tracker from struct thread. It was an ugly crutch to emulate locking semantics for if_addr_rlock() and if_maddr_rlock().
|
#
bac06038 |
| 15-Oct-2019 |
Gleb Smirnoff <glebius@FreeBSD.org> |
When the assertion that a thread is not in an epoch fails, also print all entered epochs. Works with EPOCH_TRACE only.
Reviewed by: hselasky Differential Revision: https://reviews.freebsd.org/D22017
|
#
f6eccf96 |
| 14-Oct-2019 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Since EPOCH_TRACE has been moved to opt_global.h, we don't need to waste extra space in struct thread.
|
#
668ee101 |
| 26-Sep-2019 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r352587 through r352763.
|
#
dd902d01 |
| 25-Sep-2019 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Add debugging facility EPOCH_TRACE that checks that epochs entered are properly nested and warns about recursive entrances. Unlike with locks, there is nothing fundamentally wrong with such use; the intent of the tracer is to help review complex epoch-protected code paths, meaning, in particular, the network stack.
Reviewed by: hselasky Sponsored by: Netflix Pull Request: https://reviews.freebsd.org/D21610
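EPOCH_TRACE is a kernel configuration option (per the 14-Oct-2019 entry above it ends up in opt_global.h), so enabling the facility is a one-liner in the kernel config file:

    options EPOCH_TRACE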
|
#
a63915c2 |
| 28-Jul-2019 |
Alan Somers <asomers@FreeBSD.org> |
MFHead @r350386
Sponsored by: The FreeBSD Foundation
|
#
2fb62b1a |
| 24-Jul-2019 |
Mark Johnston <markj@FreeBSD.org> |
Fix the turnstile_lock() KPI.
turnstile_{lock,unlock}() were added for use in epoch. turnstile_lock() returned NULL to indicate that the calling thread had lost a race and the turnstile was no longer associated with the given lock, or the lock owner. However, reader-writer locks may not have a designated owner, in which case turnstile_lock() would return NULL and epoch_block_handler_preempt() would leak spinlocks as a result.
Apply a minimal fix: return the lock owner as a separate return value.
Reviewed by: kib MFC after: 3 days Sponsored by: The FreeBSD Foundation Differential Revision: https://reviews.freebsd.org/D21048
|
Revision tags: release/11.3.0 |
|
#
131b2b76 |
| 28-Jun-2019 |
Hans Petter Selasky <hselasky@FreeBSD.org> |
Implement API for draining EPOCH(9) callbacks.
The epoch_drain_callbacks() function is used to drain all pending callbacks which have been invoked by prior epoch_call() function calls on the same epoch. This function is useful when there are shared memory structure(s) referred to by the epoch callback(s) which are not refcounted and are rarely freed. The typical place for calling this function is right before freeing or invalidating the shared resource(s) used by the epoch callback(s). This function can sleep and is not optimized for performance.
Differential Revision: https://reviews.freebsd.org/D20109 MFC after: 1 week Sponsored by: Mellanox Technologies
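A sketch of the teardown pattern described above (the function, epoch, malloc type, and freed structure are hypothetical):

    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <sys/epoch.h>

    void example_teardown(epoch_t ep, void *shared_state);

    void
    example_teardown(epoch_t ep, void *shared_state)
    {
            /*
             * Wait for every callback scheduled by prior epoch_call()
             * invocations on this epoch; this may sleep.
             */
            epoch_drain_callbacks(ep);
            /* Only now is the shared state unreferenced and safe to free. */
            free(shared_state, M_DEVBUF);
    }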
|
#
c981cbbd |
| 15-Feb-2019 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r343956 through r344177.
|
#
f855ec81 |
| 12-Feb-2019 |
Marius Strobl <marius@FreeBSD.org> |
Make taskqgroup_attach{,_cpu}(9) work across architectures
So far, intr_{g,s}etaffinity(9) take a single int for identifying a device interrupt. This approach doesn't work on all supported architectures, as a single int isn't sufficient to globally specify a device interrupt. In particular, with multiple interrupt controllers in one system, as found on e.g. arm and arm64 machines, an interrupt number as returned by rman_get_start(9) may be unique only relative to the bus and, thus, the interrupt controller a certain device hangs off of.

In turn, this makes taskqgroup_attach{,_cpu}(9) and - internal to the gtaskqueue implementation - taskqgroup_attach_deferred{,_cpu}() not work across architectures. Yet in turn, iflib(4) as a gtaskqueue consumer so far doesn't fit architectures where interrupt numbers aren't globally unique.

However, at least for intr_setaffinity(..., CPU_WHICH_IRQ, ...) as employed by the gtaskqueue implementation to bind an interrupt to a particular CPU, using bus_bind_intr(9) instead is functionally equivalent, with bus_bind_intr(9) taking the device and interrupt resource arguments required for uniquely specifying a device interrupt. Thus, change the gtaskqueue implementation to employ bus_bind_intr(9), and change intr_{g,s}etaffinity(9) to take the device and interrupt resource arguments required, respectively; see the sketch after this entry.

This change also moves struct grouptask from <sys/_task.h> to <sys/gtaskqueue.h> and wraps struct gtask, along with the gtask_fn_t typedef, into #ifdef _KERNEL, as userland likes to include <sys/_task.h> or indirectly drags it in - for better or worse also with _KERNEL defined - which, given the device_t and struct resource dependencies, otherwise would no longer be as easily possible. The userland inclusion problem can probably be improved a bit by introducing a _WANT_TASK (as well as a _WANT_MOUNT) akin to the existing _WANT_PRISON etc.; that is orthogonal to this change, though, and likely needs an exp-run.
While at it:
- Change the gt_cpu member in the grouptask structure to be of type int, as used elsewhere for specifying CPUs (an int16_t may be too narrow sooner or later),
- move the gtaskqueue_enqueue_fn typedef from <sys/gtaskqueue.h> to the gtaskqueue implementation as it's only used and needed there,
- change the GTASK_INIT macro to use "gtask" rather than "task" as argument, given that it actually operates on a struct gtask rather than a struct task, and
- let subr_gtaskqueue.c consistently use __func__ to print function names.
Reported by: mmel Reviewed by: mmel Differential Revision: https://reviews.freebsd.org/D19139
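The functional equivalence the message leans on, sketched (the function, device, resource, and CPU arguments are hypothetical):

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/bus.h>

    void example_bind(device_t dev, struct resource *irq_res, int cpu);

    /*
     * Where gtaskqueue previously went through
     * intr_setaffinity(..., CPU_WHICH_IRQ, ...), which needs a globally
     * unique interrupt number, bind through the device and its
     * interrupt resource instead:
     */
    void
    example_bind(device_t dev, struct resource *irq_res, int cpu)
    {
            int error;

            error = bus_bind_intr(dev, irq_res, cpu);
            if (error != 0)
                    printf("bus_bind_intr failed: %d\n", error);
    }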
|
Revision tags: release/12.0.0 |
|
#
6149ed01 |
| 14-Nov-2018 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r340368 through r340426.
|
#
91cf4975 |
| 14-Nov-2018 |
Matt Macy <mmacy@FreeBSD.org> |
epoch(9) revert r340097 - no longer a need for multiple sections per cpu
I spoke with Samy Bahra; recent changes to CK making ck_epoch_call and ck_epoch_poll not modify the record have eliminated the need for this.
|
#
635c1884 |
| 13-Nov-2018 |
Gleb Smirnoff <glebius@FreeBSD.org> |
style(9), mostly adjusting overly long lines.
|
#
a760c50c |
| 13-Nov-2018 |
Gleb Smirnoff <glebius@FreeBSD.org> |
With epoch not inlined, there is no point in using the _lite KPI. While here, remove some unnecessary casts.
|
#
9f360eec |
| 13-Nov-2018 |
Gleb Smirnoff <glebius@FreeBSD.org> |
The dualism between epoch_tracker and epoch_thread is fragile and unnecessary. So, expose CK types to kernel and use a single normal structure for epoch_tracker.
Reviewed by: jtl, gallatin
|
#
b79aa45e |
| 13-Nov-2018 |
Gleb Smirnoff <glebius@FreeBSD.org> |
For compatibility KPI functions like if_addr_rlock(), which used to take mutexes but are now converted to epoch(9), use a thread-private epoch_tracker. Embedding the tracker into ifnet(9) or ifnet-derived structures creates a non-reentrant function that will fail miserably if called simultaneously from two different contexts. A thread-private tracker provides a single tracker that allows these functions to be called safely. It doesn't allow nested calls, but that is not expected from compatibility KPIs.
Reviewed by: markj
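A sketch of the shape this gives such a compat KPI (the td_et field name in struct thread is assumed for illustration; per the 21-Oct-2019 entry above, this crutch was later removed again):

    #include <sys/param.h>
    #include <sys/proc.h>
    #include <sys/epoch.h>

    struct ifnet;   /* only the pointer type is needed here */

    void if_addr_rlock(struct ifnet *ifp);

    void
    if_addr_rlock(struct ifnet *ifp __unused)
    {
            /*
             * One tracker per thread: safe against concurrent callers,
             * but, as noted above, not against nested calls from the
             * same thread.
             */
            epoch_enter_preempt(net_epoch_preempt, &curthread->td_et);
    }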
|
#
a82296c2 |
| 13-Nov-2018 |
Gleb Smirnoff <glebius@FreeBSD.org> |
Uninline epoch(9) entrance and exit. There is no proof that modern processors would benefit from avoiding a function call, while inlining bloats the code. In fact, clang created an uninlined real function for many object files in the network stack.
- Move epoch_private.h into subr_epoch.c. Code copied exactly, avoiding any changes, including style(9).
- Remove private copies of critical_enter/exit.
Reviewed by: kib, jtl Differential Revision: https://reviews.freebsd.org/D17879
|
#
2a22df74 |
| 04-Nov-2018 |
Dimitry Andric <dim@FreeBSD.org> |
Merge ^/head r339813 through r340125.
|
#
10f42d24 |
| 03-Nov-2018 |
Matt Macy <mmacy@FreeBSD.org> |
Convert epoch to read / write records per cpu
In discussing D17503 "Run epoch calls sooner and more reliably" with sbahra@ we came to the conclusion that epoch is currently misusing the ck_epoch API. It isn't safe to do a "write side" operation (ck_epoch_call or ck_epoch_poll) in the middle of a "read side" section. Since, by definition, it's possible to be preempted in the middle of an EPOCH_PREEMPT epoch, the GC task might call ck_epoch_poll, or another thread might call ck_epoch_call, on the same section. The right solution is ultimately to change the way ck_epoch works for this use case. However, as a stopgap for 12, we agreed to simply have separate records for each use case.
Tested by: pho@
MFC after: 3 days
|
#
14b841d4 |
| 11-Aug-2018 |
Kyle Evans <kevans@FreeBSD.org> |
MFH @ r337607, in preparation for boarding
|