#
eca1e48f |
| 28-Mar-2020 |
Sascha Wildner <saw@online.de> |
kernel: Remove <sys/mplock2.h> from all files that do not need it.
|
#
0fdb7d01 |
| 15-Jan-2014 |
Sascha Wildner <saw@online.de> |
Remove a bunch of unnecessary semicolons.
|
#
fa8cc5a2 |
| 19-Jul-2013 |
Sascha Wildner <saw@online.de> |
kernel: Remove some more unused kmalloc types.
M_MPSSAS M_MPTUSER M_NETGRAPH_ITEM M_NWFSMNT M_PDU M_RDRAND M_SMBDATA M_SMBFSMNT
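Each of the types above was still declared but no longer passed to any kmalloc()/kfree() call, so removing one amounts to dropping a declaration/definition pair along these lines (the descriptive strings here are illustrative, not the originals):

    MALLOC_DECLARE(M_SMBFSMNT);
    MALLOC_DEFINE(M_SMBFSMNT, "SMBFS mount", "SMBFS mount structure");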
|
#
5e8a14a3 |
| 19-Nov-2012 |
Sepherosa Ziehau <sephe@dragonflybsd.org> |
mchain: Sync w/ FreeBSD a little bit
subr_mchain.c CVS 1.{6, 8, 9, 10, 16, 18}
Submitted-by: Alexey Slynko (with modifications by me)
DragonFly-bug: http://bugs.dragonflybsd.org/issues/80
|
#
4643740a |
| 15-Nov-2011 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Major signal path adjustments to fix races, tsleep race fixes, +more
* Refactor the signal code to properly hold the lp->lwp_token. In particular the ksignal() and lwp_signotify() paths.
* The tsleep() path must also hold lp->lwp_token to properly handle lp->lwp_stat states and interlocks.
* Refactor the timeout code in tsleep() to ensure that endtsleep() is only called from the proper context, and fix races between endtsleep() and lwkt_switch().
* Rename proc->p_flag to proc->p_flags
* Rename lwp->lwp_flag to lwp->lwp_flags
* Add lwp->lwp_mpflags and move flags which require atomic ops (are adjusted when not the current thread) to the new field.
* Add td->td_mpflags and move flags which require atomic ops (are adjusted when not the current thread) to the new field.
* Add some freeze testing code to the x86-64 trap code (default disabled).
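A minimal sketch of the locking rule this commit establishes (not the actual diff): code on the signal paths now brackets its access to the lwp with the per-lwp token.

    lwkt_gettoken(&lp->lwp_token);
    /* ... inspect/modify lp->lwp_stat, post the signal, interlock with
     * the tsleep()/endtsleep() paths ... */
    lwkt_reltoken(&lp->lwp_token);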
|
#
cd8ab232 |
| 28-Aug-2010 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - unwind kthread_create() mplock
* All kthread_create*() calls and kproc_start() calls now create threads which do not hold the mplock at startup.
* Add get_mplock()/rel_mplock() to threads which are not yet mpsafe.
* Remove rel_mplock() calls from thread startups which were making themselves mpsafe by releasing the mplock.
* Kernel eventhandler API is now MPSAFE
* Kernel kproc API is now MPSAFE
* Rename a few thread procedures to make their function more obvious.
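A minimal sketch of what a not-yet-MPSAFE thread body looks like after this change (thread name and body are hypothetical): since kthread_create() no longer hands the new thread the mplock, the thread takes it explicitly.

    static void
    mydaemon(void *arg)
    {
            get_mplock();           /* no longer inherited at startup */
            for (;;) {
                    /* ... non-MPSAFE work ... */
            }
            /* rel_mplock() on the exit path if the loop can terminate */
    }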
|
#
60055ede |
| 14-Jul-2010 |
Samuel J. Greear <sjg@thesjg.com> |
kern - Always give a new thread a ucred
Supplied-by: dillon@
|
#
ae8e83e6 |
| 15-Jul-2009 |
Matthew Dillon <dillon@apollo.backplane.com> |
MPSAFE - tsleep_interlock, BUF/BIO, cluster, swap_pager.
* tsleep_interlock()/tsleep() could miss wakeups during periods of heavy cpu activity. What would happen is that code in between the two calls would try to send an IPI (say, issue a wakeup()), but while sending the IPI the kernel would be forced to process incoming IPIs synchronously to avoid a deadlock.
The new tsleep_interlock()/tsleep() code adds another TAILQ_ENTRY to the thread structure allowing tsleep_interlock() to formally place the thread on the appropriate sleep queue without having to deschedule the thread. Any wakeup which occurs between the interlock and the real tsleep() call will remove the thread from the queue and the later tsleep() call will recognize this and simply return without sleeping.
The new tsleep() call requires PINTERLOCKED to be passed to tsleep so tsleep() knows that the thread has already been placed on a sleep queue.
* Continue making BUF/BIO MPSAFE. Remove B_ASYNC and B_WANT from buf->b_flag and add a new bio->bio_flags field to the bio. Add BIO_SYNC, BIO_WANT, and BIO_DONE. Use atomic_cmpset_int() (aka cmpxchg) to interlock biodone() against biowait().
vn_strategy() and dev_dstrategy() call semantics now require that synchronous BIO's install a bio_done function and set BIO_SYNC in the bio.
* Clean up the cluster code a bit.
* Redo the swap_pager code. Instead of issuing I/O during the collection, which depended on critical sections to avoid races in the cluster append, we now build the entire collection first and then dispatch the I/O. This allows us to use only async completion for the BIOs, instead of a hybrid sync-or-async completion.
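A sketch of the interlocked sleep pattern this enables (identifiers and the lock primitive are illustrative):

    spin_lock(&res_spin);
    while (res_busy) {
            tsleep_interlock(&res, 0);      /* queue on sleepq, stay runnable */
            spin_unlock(&res_spin);         /* wakeups from here on are caught */
            tsleep(&res, PINTERLOCKED, "reswait", 0);
            spin_lock(&res_spin);
    }
    spin_unlock(&res_spin);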
|
#
d9345d3a |
| 14-Jul-2009 |
Matthew Dillon <dillon@apollo.backplane.com> |
tsleep() - Add PINTERLOCKED flag to catch edge case.
When the tsleep_interlock() + UNLOCK + tsleep() combination is used, it is possible for an incoming wakeup IPI to be processed even if the combination is used within a critical section, because operations in between the two may send an IPI. Under heavy loads, sending an IPI can force incoming IPIs to be processed synchronously to avoid deadlocks.
It is also possible for tsleep itself to create this condition when it releases the user process scheduler prior to descheduling itself.
PINTERLOCKED causes tsleep to check whether the bit set by tsleep_interlock() is still set. If it is not set we simply return without sleeping.
|
#
973c11b9 |
| 24-Jun-2009 |
Matthew Dillon <dillon@apollo.backplane.com> |
AMD64 - Fix many compile-time warnings: int/ptr type mismatches, %llx, etc.
|
#
978400d3 |
| 06-Jan-2008 |
Sascha Wildner <swildner@dragonflybsd.org> |
Remove bogus checks after kmalloc(M_WAITOK) which never returns NULL.
Reviewed-by: hasso
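The removed pattern looked like this; with M_WAITOK, kmalloc() sleeps until the allocation succeeds, so the NULL branch is dead code (variable names illustrative):

    ptr = kmalloc(sizeof(*ptr), M_TEMP, M_WAITOK);
    if (ptr == NULL)                /* bogus: can never be taken */
            return (ENOMEM);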
|
#
91bd9c1e |
| 01-Mar-2007 |
Simon Schubert <corecode@dragonflybsd.org> |
1:1 Userland threading stage 4.7/4:
Add a new system call lwp_create() which spawns a new lwp with a given thread function address and given stack pointer. Rework and add some associated functions to realize this goal.
In-collaboration-with: Thomas E. Spanjaard <tgen@netphreax.net>
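A hypothetical userland sketch of the new call; the struct lwp_params field names here are assumptions based on the description above, not the committed ABI.

    struct lwp_params params;

    params.func  = thread_entry;    /* thread function address (assumed name) */
    params.arg   = thread_argument; /* assumed name */
    params.stack = stack_top;       /* caller-supplied stack pointer (assumed name) */
    lwp_create(&params);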
|
#
b1b4e5a6 |
| 25-Feb-2007 |
Simon Schubert <corecode@dragonflybsd.org> |
Get rid of struct user/UAREA.
Merge procsig with sigacts and replace usage of procsig with sigacts, like it used to be in 4.4BSD.
Put signal-related inline functions in sys/signal2.h.
Reviewed-by: Thomas E. Spanjaard <tgen@netphreax.net>
|
#
aa6c3de6 |
| 21-Feb-2007 |
Simon Schubert <corecode@dragonflybsd.org> |
1:1 Userland threading stage 2.20/4:
Unify access to pending signals with a new function, lwp_sigpend(), which returns pending signals for the lwp, including both lwp-specific signals and signals pending on the process. The new function lwp_delsig() is used to remove a certain signal from the pending set of both process and lwp.
Rework the places which access the pending signal list to either use those two functions or, where not possible, to work on both lwp and proc signal lists.
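A sketch of the unified access pattern (return type and exact signatures are assumptions, and the chooser is hypothetical):

    sigset_t pending;
    int sig;

    pending = lwp_sigpend(lp);              /* lwp-specific + process-wide */
    sig = first_pending_signal(&pending);   /* hypothetical chooser */
    if (sig != 0)
            lwp_delsig(lp, sig);            /* clears both proc and lwp sets */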
|
#
08f2f1bb |
| 03-Feb-2007 |
Simon Schubert <corecode@dragonflybsd.org> |
1:1 Userland threading stage 2.11/4:
Move signals into lwps, take p_lwp out of proc.
Originally-Submitted-by: David Xu <davidxu@freebsd.org>
Reviewed-by: Thomas E. Spanjaard <tgen@netphreax.net>
|
#
fde7ac71 |
| 01-Jan-2007 |
Simon Schubert <corecode@dragonflybsd.org> |
1:1 Userland threading stage 2.10/4:
Separate p_stats into p_ru and lwp_ru.
proc.p_ru keeps track of all statistics directly related to a proc. This consists of RSS usage and nswap information and aggregate numbers for all former lwps of this proc.
proc.p_cru is the sum of all stats of reaped children.
lwp.lwp_ru contains the stats directly related to one specific lwp, meaning packet, scheduler switch or page fault counts, etc. This information gets added to lwp.lwp_proc.p_ru when the lwp exits.
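Conceptually, the split places the fields like this (placement only; not the exact declarations):

    struct proc {
            struct rusage p_ru;     /* proc-wide stats + totals of exited lwps */
            struct rusage p_cru;    /* summed stats of reaped children */
    };
    struct lwp {
            struct rusage lwp_ru;   /* per-lwp counters, folded into p_ru on exit */
    };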
|
#
a6ec04bc |
| 22-Dec-2006 |
Sascha Wildner <swildner@dragonflybsd.org> |
Rename printf -> kprintf in sys/ and add some defines where necessary (files which are used in userland, too).
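For the files shared with userland, the compatibility defines presumably take a form along these lines (exact form assumed):

    #ifndef _KERNEL
    #define kprintf printf
    #endif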
|
#
379210cb |
| 18-Dec-2006 |
Matthew Dillon <dillon@dragonflybsd.org> |
Rename kvprintf -> kvcprintf (call-back version)
Rename vprintf -> kvprintf
Rename vsprintf -> kvsprintf
Rename vsnprintf -> kvsnprintf
|
#
ae10516a |
| 30-Sep-2006 |
Simon Schubert <corecode@dragonflybsd.org> |
Fix smb panic, td might be NULL
Reported-by: Rumcic
|
#
bb3cd951 |
| 19-Sep-2006 |
Simon Schubert <corecode@dragonflybsd.org> |
1:1 Userland threading stage 2.9/4:
Push out p_thread a little bit more
|
#
efda3bd0 |
| 05-Sep-2006 |
Matthew Dillon <dillon@dragonflybsd.org> |
Rename malloc->kmalloc, free->kfree, and realloc->krealloc. Pass 1
|
#
fcf5f48c |
| 08-Dec-2005 |
Matthew Dillon <dillon@dragonflybsd.org> |
Clean up more spinlock conversion issues and fix related panics.
Reported-by: "Adrian M. Nida" <nida@musc.edu>
|
#
8886b1fc |
| 06-Dec-2005 |
Matthew Dillon <dillon@dragonflybsd.org> |
The new lockmgr() function requires spinlocks, not tokens. Take this opportunity to convert all use of tokens in SMB to spinlocks since they are only used to wrap very low level operations.
Reported-by: "Adrian M. Nida" <nida@musc.edu>
|
#
6b69ab88 |
| 03-Dec-2005 |
Matthew Dillon <dillon@dragonflybsd.org> |
Fix a bogus proc0 test that is no longer accurate. This should allow the smb module to be loaded.
Reported-by: Adrian Nida <nida@musc.edu>
|
#
344ad853 |
| 14-Nov-2005 |
Matthew Dillon <dillon@dragonflybsd.org> |
Make tsleep/wakeup() MP SAFE for kernel threads and get us closer to making it MP SAFE for user processes. Currently the code is operating under the rule that access to a thread structure requires cpu locality of reference, and access to a proc structure requires the Big Giant Lock. The two are not mutually exclusive so, for example, tsleep/wakeup on a proc needs both cpu locality of reference *AND* the BGL. This was true with the old tsleep/wakeup and has now been documented.
The new tsleep/wakeup algorithm is quite simple in concept. Each cpu has its own ident-based hash table and each hash slot has a cpu mask which tells wakeup() which cpus might have the ident. A wakeup iterates through all candidate cpus simply by chaining the IPI message through them until either all candidate cpus have been serviced, or (with wakeup_one()) the requested number of threads have been woken up.
Other changes made in this patch set:
* The sense of P_INMEM has been reversed. It is now P_SWAPPEDOUT. Also, P_SWAPPING and P_SWAPINREQ are no longer relevant and have been removed.
* The swapping code has been cleaned up and seriously revamped. The new swapin code staggers swapins to give the VM system a chance to respond to new conditions. Also some lwp-related fixes were made (more p_rtprio vs lwp_rtprio confusion).
* As mentioned above, tsleep/wakeup have been rewritten. The process p_stat no longer does crazy transitions from SSLEEP to SSTOP. There is now only SSLEEP, and SSTOP is synthesized from P_SWAPPEDOUT for userland consumption. Additionally, tsleep() with PCATCH will NO LONGER STOP THE PROCESS IN THE TSLEEP CALL. Instead, the actual stop is deferred until the process tries to return to userland. This removes all remaining cases where a stopped process can hold a locked kernel resource.
* A P_BREAKTSLEEP flag has been added. This flag indicates when an event occurs that is allowed to break a tsleep with PCATCH. All the weird undocumented setrunnable() rules have been removed and replaced with a very simple algorithm based on this flag.
* Since the UAREA is no longer swapped, we no longer faultin() on PHOLD(). This also incidentally fixes the 'ps' command's tendency to try to swap all processes back into memory.
* speedup_syncer() no longer does hackish checks on proc0's tsleep channel (td_wchan).
* Userland scheduler acquisition and release has now been tightened up and KKASSERT's have been added (one of the bugs Stefan found was related to an improper lwkt_schedule() that was found by one of the new assertions). We also have added other assertions related to expected conditions.
* A serious race in pmap_release_free_page() has been corrected. We no longer couple the object generation check with a failed pmap_release_free_page() call. Instead the two conditions are checked independently. We no longer loop when pmap_release_free_page() succeeds (it is unclear how that could ever have worked properly).
Major testing by: Stefan Krueger <skrueger@meinberlikomm.de>
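A conceptual sketch of the wakeup side described above (all names and helpers hypothetical; not the committed code):

    slot = &mycpu->tsleep_hash[hash(ident)];
    mask = slot->cpumask;                   /* cpus that may hold this ident */
    if (!mask_empty(mask)) {
            cpu = first_cpu(mask);          /* start the chain */
            send_wakeup_ipi(cpu, ident, mask);  /* each cpu wakes its local
                                                 * sleepers, then forwards the
                                                 * IPI to the next cpu in mask */
    }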
|