e8599db1 | 03-May-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 40F/Many: Inode/link-count sequencer cleanup pass, UNDO cache.
* Implement an UNDO cache. If we have already laid down an UNDO in the current flush cycle, we do not have to lay down another one for the same address (see the sketch after this list). This greatly reduces the number of UNDOs we generate during a flush.
* Properly get the vnode in order to be able to issue vfsync() calls from the backend. We may also have to acquire the vnode when doing an unload check for a file deletion.
* Properly generate UNDO records for the volume header. During crash recovery we have to UNDO the volume header along with any partially written meta-data, because the volume header refers to the meta-data.
* Add another record type, GENERAL, representing inode or softlink records.
* Move the setting of HAMMER_INODE_WRITE_ALT to the backend, allowing the kernel to flush buffers up to the point where the backend syncs the inode.
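
The UNDO cache idea can be illustrated with a small userland sketch. Everything below (the flat array, names such as generate_undo()) is a hypothetical simplification, not the HAMMER code; it only shows the duplicate-suppression logic: an offset that already received an UNDO in the current flush cycle is skipped, and the cache is cleared when a new cycle begins.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define UNDO_CACHE_SIZE 64

static uint64_t undo_cache[UNDO_CACHE_SIZE];
static int undo_cache_count;

/* Called at the start of each flush cycle. */
static void
undo_cache_reset(void)
{
    undo_cache_count = 0;
}

/* Returns true if an UNDO for this offset already exists in this cycle. */
static bool
undo_already_done(uint64_t offset)
{
    for (int i = 0; i < undo_cache_count; ++i) {
        if (undo_cache[i] == offset)
            return true;
    }
    if (undo_cache_count < UNDO_CACHE_SIZE)
        undo_cache[undo_cache_count++] = offset;
    /* When the cache is full we simply stop remembering offsets; an
     * extra UNDO for the same address is wasteful but never incorrect. */
    return false;
}

static void
generate_undo(uint64_t offset)
{
    if (undo_already_done(offset)) {
        printf("%#llx: UNDO already laid down, skipped\n",
               (unsigned long long)offset);
        return;
    }
    printf("%#llx: UNDO record written\n", (unsigned long long)offset);
}

int
main(void)
{
    undo_cache_reset();
    generate_undo(0x10000);
    generate_undo(0x10000);    /* second modification in the same cycle */
    generate_undo(0x20000);
    return 0;
}
```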

4e17f465 | 03-May-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 40D/Many: Inode/link-count sequencer cleanup pass.
* Move the vfsync from the frontend to the backend. This allows the frontend to passively move inodes to the backend without having to actually start the flush, greatly improving performance.
* Use an inode lock to deal with directory entry syncing races between the frontend and the backend. It isn't optimal but it's ok for now.
* Massively optimize the backend code by initializing a single cursor for an inode and passing the cursor to procedures, instead of having each procedure initialize its own cursor.
* Fix a sequencing issue with the backend. While building the flush state for an inode another process could get in and initiate its own flush, screwing up the flush group and creating confusion. (hmp->flusher_lock)
* Don't lose track of HAMMER_FLUSH_SIGNAL flush requests. If we get such a request but have to flag a reflush, also flag that the reflush is to be signaled, i.e. started immediately when the current flush is done (see the sketch after this list).
* Remove shared inode locks from hammer_vnops.c. Their original purpose no longer exists.
* Simplify the arguments passed to numerous procedures (hammer_ip_first(), etc).
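
A minimal sketch of the HAMMER_FLUSH_SIGNAL bookkeeping described above. The flag names below echo the commit message but the structure and helpers are hypothetical; the point is only that a signaled request arriving while the inode is already flushing becomes a signaled reflush instead of being dropped.

```c
#include <stdio.h>

#define FLUSH_SIGNAL   0x01    /* caller wants the flush started right away */
#define INODE_FLUSHING 0x02    /* inode is part of the running flush group  */
#define INODE_REFLUSH  0x04    /* must be flushed again afterwards          */
#define INODE_RESIGNAL 0x08    /* ...and that reflush should be signaled    */

struct inode_state {
    int flags;
};

static void
flush_request(struct inode_state *ip, int how)
{
    if (ip->flags & INODE_FLUSHING) {
        /* Can't join the running flush: remember both the reflush and
         * whether it was a signaled request. */
        ip->flags |= INODE_REFLUSH;
        if (how & FLUSH_SIGNAL)
            ip->flags |= INODE_RESIGNAL;
        printf("queued reflush%s\n",
               (ip->flags & INODE_RESIGNAL) ? " (signaled)" : "");
    } else {
        ip->flags |= INODE_FLUSHING;
        printf("flush started%s\n",
               (how & FLUSH_SIGNAL) ? " immediately (signaled)" : "");
    }
}

static void
flush_complete(struct inode_state *ip)
{
    ip->flags &= ~INODE_FLUSHING;
    if (ip->flags & INODE_REFLUSH) {
        int how = (ip->flags & INODE_RESIGNAL) ? FLUSH_SIGNAL : 0;

        ip->flags &= ~(INODE_REFLUSH | INODE_RESIGNAL);
        flush_request(ip, how);    /* a signaled reflush runs immediately */
    }
}

int
main(void)
{
    struct inode_state ip = { 0 };

    flush_request(&ip, 0);             /* backend starts flushing the inode */
    flush_request(&ip, FLUSH_SIGNAL);  /* frontend wants an immediate flush */
    flush_complete(&ip);               /* reflush is signaled, not lost     */
    return 0;
}
```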

9480ff55 | 27-Apr-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 38E/Many: Undo/Synchronization and crash recovery
* Fix a couple of deadlocks.
* Fix a kernel buffer cache exhaustion issue.
* Get the 'hammer prune' and 'hammer reblock' commands working again. The commands are now properly synchronized for crash recovery.

10a5d1ba | 25-Apr-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 38C/Many: Undo/Synchronization and crash recovery
* Classify buffers as meta, undo, or data buffers, and collect them into separate lists so they can be flushed in the proper order (sketched after this list).
* Flush the META buffers and the volume header only under HAMMER's direct control, as part of the UNDO sequencing.
* Major work on the flusher thread. Flush the various buffer classes in the correct order (the synchronization points are not yet coded, however).
* Update the volume header's UNDO fifo indices.
* Add a ton of sanity checks on buffer modifications and narrow the size of some of the UNDO records.
* Clean up after loose IOs. An IO can become loose when its ref count drops to zero and the kernel attempts to reclaim its bp. We can't garbage collect the IO in the kernel bioops callback, so we have to remember that the IO is now loose and do it later (in the flusher).
* Temporarily comment out an allocator iterator feature which we cannot do right now because it may result in new data allocations overwriting old deletions which are still subject to UNDO.
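
A toy sketch of the buffer classification above. The enum, the lists, and the exact flush order shown are illustrative assumptions (the real synchronization points were not yet coded at this stage); the idea is simply that each class keeps its own list so UNDO can reach the media before the meta-data it covers.

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/queue.h>

enum io_type { IO_UNDO, IO_DATA, IO_META };

struct io_buf {
    enum io_type type;
    long         offset;
    TAILQ_ENTRY(io_buf) entry;
};

TAILQ_HEAD(io_list, io_buf);

static struct io_list lists[3];
static const char *names[3] = { "undo", "data", "meta" };

/* Dirty buffers are queued on the list matching their class. */
static void
mark_modified(enum io_type type, long offset)
{
    struct io_buf *bp = malloc(sizeof(*bp));

    bp->type = type;
    bp->offset = offset;
    TAILQ_INSERT_TAIL(&lists[type], bp, entry);
}

static void
drain(enum io_type t)
{
    struct io_buf *bp;

    while ((bp = TAILQ_FIRST(&lists[t])) != NULL) {
        TAILQ_REMOVE(&lists[t], bp, entry);
        printf("flush %s buffer at offset %ld\n", names[t], bp->offset);
        free(bp);
    }
}

static void
flusher_run(void)
{
    /* UNDO and data go out first, then the volume header's UNDO indices
     * are updated, and only then are the meta-data buffers written. */
    drain(IO_UNDO);
    drain(IO_DATA);
    printf("update volume header UNDO indices\n");
    drain(IO_META);
}

int
main(void)
{
    for (int i = 0; i < 3; ++i)
        TAILQ_INIT(&lists[i]);

    mark_modified(IO_META, 4096);   /* B-Tree node */
    mark_modified(IO_UNDO, 0);      /* UNDO covering that node */
    mark_modified(IO_DATA, 8192);   /* file data */
    flusher_run();
    return 0;
}
```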

6e1e8b6d | 26-Mar-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 35C/many: Stabilization pass.
* The reblock code was only adjusting the data offset for B-Tree elements when moving records containing in-line data. Also adjust the offset in the record itself.
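
A small sketch of the fix described in this entry, using simplified, hypothetical structures rather than HAMMER's on-disk layout: when a record with in-line data moves, both the B-Tree element's data offset and the copy stored in the record itself must be rebased by the same delta.

```c
#include <assert.h>
#include <stdint.h>
#include <stdio.h>

struct record {
    uint64_t rec_offset;    /* where the record lives on the media */
    uint64_t data_offset;   /* where its (in-line) data lives */
};

struct btree_elm {
    uint64_t rec_offset;
    uint64_t data_offset;   /* duplicated in the index for fast lookups */
};

static void
reblock_record(struct btree_elm *elm, struct record *rec, uint64_t new_offset)
{
    int64_t delta = (int64_t)(new_offset - rec->rec_offset);

    /* In-line data moves together with the record, so both copies of the
     * data offset shift by the same amount as the record itself. */
    rec->rec_offset   = new_offset;
    rec->data_offset += delta;

    elm->rec_offset   = rec->rec_offset;
    elm->data_offset  = rec->data_offset;   /* the part the old code missed */

    assert(elm->data_offset == rec->data_offset);
}

int
main(void)
{
    struct record rec = { .rec_offset = 0x10000, .data_offset = 0x10040 };
    struct btree_elm elm = { .rec_offset = 0x10000, .data_offset = 0x10040 };

    reblock_record(&elm, &rec, 0x80000);
    printf("record at %#llx, data at %#llx\n",
           (unsigned long long)rec.rec_offset,
           (unsigned long long)rec.data_offset);
    return 0;
}
```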

b58c6388 | 24-Mar-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 35/many: Stabilization pass, cleanups
* Fix a buffer load race which could result in an assertion or panic related to a referenced HAMMER buffer with a NULL bp. The problem was that the loading flag must be used when releasing the buffer as well as when acquiring the buffer. Change the loading flag to a loading count (see the sketch after this list).
* Do not lose flush requests. The flush request now stays flagged until the buffer is able to be flushed.
* Fix stale blockmap offsets cached in hammer_buffer. Clear the cached offset when freeing a big block from the blockmap. NOTE: We do not yet try to index buffers based on the blockmap offset but we should.
* Remove the old write ordering code in preparation for redoing the algorithm.
* General code cleanups.
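
A generic sketch of the flag-to-count change (hypothetical names; the real interlock also involves locking and sleeping, omitted here). With overlapping holders, such as the acquire path and the release path, a count still shows one active holder after the inner one exits, whereas a single boolean flag would already have been cleared.

```c
#include <assert.h>
#include <stdio.h>

struct buffer {
    int loading;        /* was: a single "loading" flag */
};

static void enter_loading(struct buffer *b) { b->loading++; }

static void
exit_loading(struct buffer *b)
{
    assert(b->loading > 0);
    b->loading--;
}

int
main(void)
{
    struct buffer buf = { 0 };

    enter_loading(&buf);       /* acquire path starts bringing the bp in  */
    enter_loading(&buf);       /* overlapping release path interlocks too */
    exit_loading(&buf);        /* inner holder finishes...                */
    assert(buf.loading == 1);  /* ...but the outer one is still covered   */
    exit_loading(&buf);
    assert(buf.loading == 0);
    printf("interlock balanced\n");
    return 0;
}
```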

855942b6 | 20-Mar-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 33C/many: features and bug fixes.
* Add a signal test to long-running ioctls so that they can be interrupted (see the sketch after this list).
* Assert that a record update's delete_tid does not match its create_tid, and fix a case in the rename code and another in the inode update code where this could occur.
* Add a feature to the pruning ioctl that the new snapshot softlink option for 'hammer prune' needs.
* Fix a minor overflow assertion in the transaction code.
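
A userland sketch of the signal-test idea; the kernel ioctl path checks for pending signals differently, so the SIGINT flag below is just a stand-in. A long scan polls the flag periodically and bails out with EINTR so it can be interrupted.

```c
#include <errno.h>
#include <signal.h>
#include <stdio.h>

static volatile sig_atomic_t got_signal;

static void
on_signal(int sig)
{
    (void)sig;
    got_signal = 1;
}

/* Stands in for a prune/reblock style scan over many B-Tree elements. */
static int
long_running_scan(long nelements)
{
    for (long i = 0; i < nelements; ++i) {
        if ((i & 0xffff) == 0 && got_signal)
            return EINTR;           /* abort cleanly; work so far is kept */
        /* ... process element i ... */
    }
    return 0;
}

int
main(void)
{
    signal(SIGINT, on_signal);

    int error = long_running_scan(100000000L);
    if (error == EINTR)
        fprintf(stderr, "scan interrupted\n");
    else
        printf("scan completed\n");
    return error ? 1 : 0;
}
```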

36f82b23 | 19-Mar-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 33/many: Expand transaction processing, fix bug in B-Tree
* Expand transaction processing to cover more of the code space for upcoming undo code.
* Fix a bug in btree_split_leaf(): the separator would sometimes not be properly placed to the left of the split point, resulting in a panic (the invariant is sketched below). Temporarily add many more assertions to btree_split_leaf().
* Improve the critical path for blockmap lookups: the (newly) passed trans already contains a referenced root volume, so the blockmap code does not have to acquire one.
Reported-by: YONETANI Tomokazu <qhwt+dfly@les.ath.cx> (B-Tree bug)
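
The separator invariant behind the btree_split_leaf() fix can be sketched with plain integer keys (HAMMER's composite keys and split heuristics are more involved): every key left of the split point must compare below the separator, and every key at or right of it must compare at or above it, otherwise searches descend into the wrong leaf.

```c
#include <assert.h>
#include <stdio.h>

/* Split keys[0..n) at position 'split' and return the separator. */
static int
split_leaf(const int *keys, int n, int split)
{
    /* Using the first key of the new right node as the separator keeps
     * everything before the split point strictly below it. */
    int separator = keys[split];

    for (int i = 0; i < split; ++i)
        assert(keys[i] < separator);        /* left node: strictly below */
    for (int i = split; i < n; ++i)
        assert(keys[i] >= separator);       /* right node: at or above   */
    return separator;
}

int
main(void)
{
    int keys[] = { 10, 20, 30, 40, 50, 60 };
    int n = (int)(sizeof(keys) / sizeof(keys[0]));

    int sep = split_leaf(keys, n, n / 2);
    printf("separator = %d\n", sep);
    return 0;
}
```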

8fe2cd5d | 18-Mar-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 32B/many: Reblocking work.
* Fix a bug in the record reblocking code. data_offset values representing data embedded in the record itself must be shifted when the record is shifted.

bf686dbe | 18-Mar-2008 | Matthew Dillon <dillon@dragonflybsd.org>
HAMMER 32/many: Record holes, initial undo API, initial reblocking code
* Add code to record recent 'holes' created by the blockmap allocator due to the requirement that data blocks not cross a 16K hammer buffer boundary, so that the gaps can be filled in with smaller chunks of data when possible (see the sketch after this list).
Currently a hole is not added for blockmap frees. It is questionable whether it is a good idea to do it for frees or not, because it can interfere with the reblock code's attempt to completely free a big block.
* Add a reblocking ioctl which scans the B-Tree and reblocks leaf nodes, records, and data in partially empty big blocks to try to free up the entire big block. Incomplete (it does not yet reblock internal B-Tree nodes, and it needs a low-free-space mode which focuses on freeing a single large block).
* Add the API infrastructure required to implement the undo records, and implement the initial undo code (sans ordering requirements for writes). Incomplete.
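
A toy sketch of the hole-recording idea; the bump allocator and the names below are hypothetical stand-ins for the blockmap layer. When an allocation would cross a 16K buffer boundary, the cursor skips ahead and the skipped bytes are remembered as a hole that later, smaller allocations can reuse.

```c
#include <stdint.h>
#include <stdio.h>

#define BUF_SIZE   16384u
#define MAX_HOLES  8

struct hole { uint64_t off; uint32_t len; };

static struct hole holes[MAX_HOLES];
static int nholes;
static uint64_t cursor;            /* next free media offset */

/* Assumes bytes <= BUF_SIZE so a single boundary skip always suffices. */
static uint64_t
alloc_data(uint32_t bytes)
{
    /* First try to reuse a recorded hole. */
    for (int i = 0; i < nholes; ++i) {
        if (holes[i].len >= bytes) {
            uint64_t off = holes[i].off;

            holes[i].off += bytes;
            holes[i].len -= bytes;
            return off;
        }
    }

    /* Would this allocation cross a 16K buffer boundary? */
    uint64_t end_of_buf = (cursor | (BUF_SIZE - 1)) + 1;
    if (cursor + bytes > end_of_buf) {
        if (nholes < MAX_HOLES) {          /* remember the skipped gap */
            holes[nholes].off = cursor;
            holes[nholes].len = (uint32_t)(end_of_buf - cursor);
            nholes++;
        }
        cursor = end_of_buf;               /* bump to the boundary */
    }

    uint64_t off = cursor;
    cursor += bytes;
    return off;
}

int
main(void)
{
    printf("a: %#llx\n", (unsigned long long)alloc_data(12000));
    printf("b: %#llx\n", (unsigned long long)alloc_data(8000));  /* skips to the next buffer */
    printf("c: %#llx\n", (unsigned long long)alloc_data(4000));  /* lands in the hole */
    return 0;
}
```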