#
ed183f8c |
| 23-Oct-2019 |
Sascha Wildner <saw@online.de> |
world/kernel: Use the rounddown() macro in various places.
Tested-by: zrj
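For context, a minimal sketch of what the macro does (rounddown() lives in <sys/param.h>; the helper below is invented for illustration, not code from the commit):

#include <sys/types.h>
#include <sys/param.h>		/* rounddown(), DEV_BSIZE */

/* invented helper: truncate a byte offset down to a sector boundary */
static uint64_t
sector_align(uint64_t offset)
{
	/* rounddown(x, y) expands to ((x) / (y)) * (y) */
	return (rounddown(offset, DEV_BSIZE));
}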
|
#
c3783d8f |
| 04-Feb-2018 |
zrj <rimvydas.jasinskas@gmail.com> |
kernel/disk: Remove use of "%b" format.
Switch to args safe "%pb%i" internal format.
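For readers unfamiliar with this DragonFly-specific kprintf extension, a hedged sketch of old vs. new usage (the bit-description string, softc and field names are invented; argument order per my reading of the format):

#include <sys/systm.h>		/* kprintf() */

/* invented bit-description string and softc, not from the commit */
#define FOO_FLAGS_FMT	"\20\3ERROR\2READY\1BUSY"

struct foo_softc {
	int	flags;
};

static void
foo_print_flags(const struct foo_softc *sc)
{
	/*
	 * Old, removed form (value first, description second; the extra
	 * arguments could not be checked by the compiler):
	 *	kprintf("flags=%b\n", sc->flags, FOO_FLAGS_FMT);
	 *
	 * New form: %pb consumes the description string and %i the value,
	 * so the arguments line up with conversions the compiler can check.
	 */
	kprintf("flags=%pb%i\n", FOO_FLAGS_FMT, sc->flags);
}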
|
#
2458a87a |
| 26-Nov-2017 |
zrj <rimvydas.jasinskas@gmail.com> |
kernel/nata: Deal with ATA_DEV() and atadev->unit.
Hopefully I got all the places right.
While there, some misc cleanup.
|
#
bb15467a |
| 25-Nov-2017 |
zrj <rimvydas.jasinskas@gmail.com> |
kernel/nata: Misc cleanup in non chipset codes.
* Move some stuff around.
* Add local implementations of biofinish() and g_io_deliver().
* Add prints for READ_NATIVE_MAX_ADDRESS.
* Use >= in comparisons for devclass_get_maxunit().
No functional change.
|
#
9243051b |
| 24-Nov-2017 |
zrj <rimvydas.jasinskas@gmail.com> |
kernel/nata: Return more data for natacontrol(8).
* include info about backing subdisks
* use last 16 bytes of serial number in meta (as MatrixRAID does)
* add optional automatic spindown/spinup support (dmesg noisy)
* various cleanups
* natacontrol(8) additions + cleanup
Taken-from: FreeBSD
|
#
57e09377 |
| 07-Aug-2016 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Cleanup gcc warnings
* Cleanup gcc warnings at higher optimization levels. This will allow us to build kernels with -O2 or -O3.
|
#
15bd3c73 |
| 25-Nov-2014 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Fix boot-time panic in NATA revealed by new callout mechanics
* The NATA driver was using spin locks in a very, very dangerous way. They did not play nice with the new blocking callout mechanism.
* Replace all of NATAs spinlocks with lockmgr locks. In addition, change all asynchronous callout_stop() calls to synchronous callout_stop_sync() calls, and use callout_init_lk() to auto-lock ch->state_lock for the callback, which fixes a long-time deadlock race.
Reported-by: tuxillo
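A sketch of the locking/callout pattern the commit describes, assuming the callout_init_lk()/callout_stop_sync() APIs named above; the structure and function names are placeholders, not the driver's actual symbols:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/lock.h>
#include <sys/callout.h>

/* placeholder channel structure and timeout handler */
struct chan {
	struct lock	state_lock;	/* lockmgr lock, was a spinlock */
	struct callout	timeout;
};

static void
chan_timeout(void *arg)
{
	/* runs with state_lock held, courtesy of callout_init_lk() */
}

static void
chan_init(struct chan *ch)
{
	lockinit(&ch->state_lock, "chanstate", 0, 0);
	callout_init_lk(&ch->timeout, &ch->state_lock);
}

static void
chan_arm(struct chan *ch)
{
	callout_reset(&ch->timeout, hz, chan_timeout, ch);
}

static void
chan_stop(struct chan *ch)
{
	/* synchronous: does not return while the callback is still running */
	callout_stop_sync(&ch->timeout);
}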
|
#
ba87a4ab |
| 24-Aug-2014 |
Sascha Wildner <saw@online.de> |
kernel/spinlock: Add a description to struct spinlock.
And add it to spin_init() and SPINLOCK_INITIALIZER().
Submitted-by: dclink (see <http://bugs.dragonflybsd.org/issues/2714>)
OK'd-by: dillon
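A minimal sketch of the API after this change (the softc and names are placeholders):

#include <sys/spinlock.h>
#include <sys/spinlock2.h>

/* placeholder softc for the sketch */
struct foo_softc {
	struct spinlock	spin;
	int		count;
};

static void
foo_attach_locks(struct foo_softc *sc)
{
	/* the new second argument is the description */
	spin_init(&sc->spin, "foosoftc");
}

static void
foo_bump(struct foo_softc *sc)
{
	spin_lock(&sc->spin);
	++sc->count;
	spin_unlock(&sc->spin);
}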
|
#
f6e8a0a1 |
| 07-Jun-2014 |
Imre Vadasz <imre@vdsz.com> |
Convert files to UTF-8
Taken-from: FreeBSD
|
#
34c2e953 |
| 20-Feb-2014 |
Sascha Wildner <saw@online.de> |
kernel/nataraid: Fix an issue with the devstat support I added recently.
This fixes the "devstat_end_transaction: HELP!! busy_count for ar0 is < 0" we were seeing.
Tested-by: Aaron Bieber <deftly@gmail.com>
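The busy_count message is what devstat emits when more transactions are ended than were started. A sketch of the usual balanced pairing, not the actual fix (which this log does not show); struct and field names are placeholders:

#include <sys/param.h>
#include <sys/buf.h>
#include <sys/devicestat.h>

/* placeholder softc; only the devstat member matters for the sketch */
struct raid_softc {
	struct devstat	stats;
};

static void
raid_start_bio(struct raid_softc *sc)
{
	devstat_start_transaction(&sc->stats);
	/* ... hand the request to the backing subdisks ... */
}

static void
raid_done_bio(struct raid_softc *sc, struct bio *bio)
{
	/* must pair 1:1 with raid_start_bio(), or busy_count goes negative */
	devstat_end_transaction_buf(&sc->stats, bio->bio_buf);
	biodone(bio);
}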
|
#
60279a3c |
| 20-Feb-2014 |
Sascha Wildner <saw@online.de> |
kernel/nataraid: Fix a panic upon booting with a degraded Intel RAID.
disk_idx has upper bits set in this case which we need to mask.
Taken-from: FreeBSD's r205074
Reported-and-tested-by: Aaron Bieber <deftly@gmail.com>
|
#
feabea0f |
| 16-Feb-2014 |
Sascha Wildner <saw@online.de> |
kernel/nataraid: Fix a bug for array sizes >2TB.
The overall array size (total_sectors) in the softc was already 64 bit wide but due to a missing cast when multiplying the 32 bit disk size by the number of disks, it never became larger than 32 bits.
Also, the disk size was signed when it should have been unsigned.
Note that these fixes apply to RAIDs created using natacontrol(8), but not necessarily to those created with BIOS utilities.
Reported-by: Aaron Bieber <deftly@gmail.com>
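A small illustration of the overflow class described above, with invented names rather than the actual nataraid fields:

#include <sys/types.h>

/* invented helper: shows only the widening cast, not the nataraid code */
static uint64_t
array_total_sectors(uint32_t disk_sectors, u_int ndisks)
{
	/*
	 * Without the cast the multiplication is performed in 32 bits and
	 * silently wraps for arrays larger than 2TB:
	 *
	 *	return (disk_sectors * ndisks);
	 */
	return ((uint64_t)disk_sectors * ndisks);
}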
|
#
89be26e1 |
| 16-Feb-2014 |
Sascha Wildner <saw@online.de> |
kernel/nataraid: Add devstat support.
|
#
b8ba70b8 |
| 16-Feb-2014 |
Sascha Wildner <saw@online.de> |
kernel/nataraid: Fix nVidia MediaShield metadata kprintfs for unsigned.
|
#
d3c9c58e |
| 20-Feb-2013 |
Sascha Wildner <saw@online.de> |
kernel: Use DEVMETHOD_END in the drivers.
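For illustration, a typical method table using the macro (the foo_* routines are placeholders; DEVMETHOD_END replaces the old hand-rolled terminator):

#include <sys/param.h>
#include <sys/bus.h>

static int	foo_probe(device_t dev);
static int	foo_attach(device_t dev);
static int	foo_detach(device_t dev);

static device_method_t foo_methods[] = {
	DEVMETHOD(device_probe,		foo_probe),
	DEVMETHOD(device_attach,	foo_attach),
	DEVMETHOD(device_detach,	foo_detach),

	DEVMETHOD_END			/* was: { NULL, NULL } */
};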
|
#
a43d9d72 |
| 05-Jan-2013 |
Sascha Wildner <saw@online.de> |
kernel/disk: Remove some unused variables and add __debugvar.
|
#
3598cc14 |
| 21-Nov-2012 |
Sascha Wildner <saw@online.de> |
Remove some duplicated semicolons (mostly in the kernel).
|
#
46c681a1 |
| 25-Jan-2012 |
Sascha Wildner <saw@online.de> |
Revert "nataraid(4): Add devstat support."
This reverts commit 3e184884618d66845f8b90e6dae483155da6dce6.
Oops, it was a bit too untested and causes nasty messages on the console. Will investigate and commit a proper fix.
Reported-by: Joerg Anslik <joerg@anslik.de>
|
#
3e184884 |
| 21-Jan-2012 |
Sascha Wildner <saw@online.de> |
nataraid(4): Add devstat support.
|
#
47f4bca5 |
| 20-Jan-2012 |
Sascha Wildner <saw@online.de> |
kernel: Remove some old major numbers.
|
#
b5d7061d |
| 24-Sep-2010 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Work through some memory leaks in dsched
* Add a uninitbufbio() function to complement initbufbio(). Also move BUF_LOCKINIT() into initbufbio() and BUF_LOCKFREE() into uninitbufbio().
* There are several device drivers and other places where the struct buf is still being allocated manually (versus using getpbuf()). These were kfree()ing the buffer without dealing with e.g. dsched_exit_buf().
Have uninitbufbio() call dsched_exit_buf() and adjust the various manual allocations/frees of struct buf to call uninitbufbio() before kfree()ing. Also remove various manual calls to BUF_LOCKFREE() (which is now done inside uninitbufbio()).
* This should hopefully deal with the memory leaks but there could be a few left.
Reported-by: "Steve O'Hara-Smith" <steve@sohara.org>
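A sketch of the manual struct buf life cycle described above, with placeholder names and malloc type:

#include <sys/param.h>
#include <sys/buf.h>
#include <sys/malloc.h>

/* placeholder allocation/release helpers for a manually managed buf */
static struct buf *
foo_getbuf(void)
{
	struct buf *bp;

	bp = kmalloc(sizeof(*bp), M_DEVBUF, M_WAITOK | M_ZERO);
	initbufbio(bp);		/* now also performs BUF_LOCKINIT() */
	return (bp);
}

static void
foo_relbuf(struct buf *bp)
{
	/* calls dsched_exit_buf() and BUF_LOCKFREE() internally */
	uninitbufbio(bp);
	kfree(bp, M_DEVBUF);
}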
|
#
287a8577 |
| 30-Aug-2010 |
Alex Hornung <ahornung@gmail.com> |
spinlocks - Rename API to spin_{try,un,}lock
* Rename the API to spin_trylock, spin_unlock and spin_lock instead of spin_lock_wr, spin_unlock_wr and spin_trylock_wr now that we only have exclusive spinlocks.
* 99% of this patch was generated by a semantic coccinelle patch
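A sketch of the mechanical rename (the softc and function are placeholders):

#include <sys/spinlock.h>
#include <sys/spinlock2.h>

/* placeholder softc for the sketch */
struct foo_softc {
	struct spinlock	spin;
	int		count;
};

static void
foo_poke(struct foo_softc *sc)
{
	spin_lock(&sc->spin);			/* was spin_lock_wr() */
	++sc->count;
	spin_unlock(&sc->spin);			/* was spin_unlock_wr() */

	if (spin_trylock(&sc->spin)) {		/* was spin_trylock_wr() */
		++sc->count;
		spin_unlock(&sc->spin);
	}
}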
|
#
da44240f |
| 29-Aug-2010 |
Matthew Dillon <dillon@apollo.backplane.com> |
kernel - gcc -Os/-O2 warnings pass
* This is just a partial pass on the code to start cleaning up gcc warnings at higher optimization levels.
|
#
b24cd69c |
| 06-Dec-2009 |
Alex Hornung <ahornung@gmail.com> |
dump - Make use of the new dumping
* Adapt our dumping infrastructure to the new dump type.
* Update all disk/raid dump methods (except aac) to work with the new dumps. These now don't take matters into their own hands and just write what they are told to.
|
#
ae8e83e6 |
| 15-Jul-2009 |
Matthew Dillon <dillon@apollo.backplane.com> |
MPSAFE - tsleep_interlock, BUF/BIO, cluster, swap_pager.
* tsleep_interlock()/tsleep() could miss wakeups during periods of heavy cpu activity. What would happen is that code in between the two calls would try to send an IPI (say, issue a wakeup()), but while sending the IPI the kernel would be forced to process incoming IPIs synchronously to avoid a deadlock.
The new tsleep_interlock()/tsleep() code adds another TAILQ_ENTRY to the thread structure allowing tsleep_interlock() to formally place the thread on the appropriate sleep queue without having to deschedule the thread. Any wakeup which occurs between the interlock and the real tsleep() call will remove the thread from the queue and the later tsleep() call will recognize this and simply return without sleeping.
The new tsleep() call requires PINTERLOCKED to be passed to tsleep so tsleep() knows that the thread has already been placed on a sleep queue.
* Continue making BUF/BIO MPSAFE. Remove B_ASYNC and B_WANT from buf->b_flag and add a new bio->bio_flags field to the bio. Add BIO_SYNC, BIO_WANT, and BIO_DONE. Use atomic_cmpset_int() (aka cmpxchg) to interlock biodone() against biowait().
vn_strategy() and dev_dstrategy() call semantics now require that synchronous BIOs install a bio_done function and set BIO_SYNC in the bio.
* Clean up the cluster code a bit.
* Redo the swap_pager code. Instead of issuing I/O during the collection, which depended on critical sections to avoid races in the cluster append, we now build the entire collection first and then dispatch the I/O. This allows us to use only async completion for the BIOs, instead of a hybrid sync-or-async completion.
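A sketch of the interlocked sleep pattern described above, as I understand it (the softc, lock and wmesg are placeholders):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/spinlock.h>
#include <sys/spinlock2.h>

/* placeholder softc for the sketch */
struct foo_softc {
	struct spinlock	spin;
	int		busy;
};

static void
foo_wait_idle(struct foo_softc *sc)
{
	spin_lock(&sc->spin);
	while (sc->busy) {
		/* queue ourselves on the sleep queue before dropping the lock ... */
		tsleep_interlock(sc, 0);
		spin_unlock(&sc->spin);
		/* ... so a wakeup(sc) issued here is not lost */
		tsleep(sc, PINTERLOCKED, "foowait", 0);
		spin_lock(&sc->spin);
	}
	spin_unlock(&sc->spin);
}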
|