#   $NetBSD: TODO,v 1.7 2003/02/23 00:22:33 perseant Exp $

- Lock audit.  Need to check locking for the multiprocessor case in particular.

- Get rid of the syscalls: make them into ioctl calls instead.  This would
  allow LFS to be loaded as a module.  We would then ideally have an
  in-kernel cleaner that runs if no userland cleaner has asserted itself.
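
  A rough sketch of what the ioctl interface might look like (the LFCN*
  names, numbering, and argument struct below are hypothetical, not an
  existing API):

    #include <sys/types.h>
    #include <sys/time.h>
    #include <sys/ioccom.h>

    /*
     * Argument for the markv/bmapv-style calls; mirrors what the
     * current syscalls take (an array of BLOCK_INFO plus a count).
     */
    struct lfs_blkiov_args {
            void    *blkiov;        /* BLOCK_INFO array from the cleaner */
            int      blkcnt;        /* number of entries */
    };

    /* Hypothetical command numbers, issued on an fd for the mounted fs. */
    #define LFCNMARKV       _IOW('L', 1, struct lfs_blkiov_args)
    #define LFCNBMAPV       _IOWR('L', 2, struct lfs_blkiov_args)
    #define LFCNSEGCLEAN    _IOW('L', 3, u_long)            /* segment number */
    #define LFCNSEGWAIT     _IOW('L', 4, struct timeval)    /* timeout */

  Since the commands go through the file system's ioctl entry point, the
  whole interface can live in the module, and an in-kernel cleaner could
  call the same backing routines directly.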

- Get rid of lfs_segclean(); the kernel should clean a dirty segment IFF it
  has passed two checkpoints containing zero live bytes.
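
  A sketch of the intended rule, using a simplified segment-usage record
  (the fields and flag below are illustrative, not the on-disk SEGUSE
  layout):

    #include <stdint.h>

    #define SU_DIRTY        0x01

    struct seg_usage {
            uint32_t su_nbytes;       /* live bytes charged to the segment */
            uint32_t su_flags;        /* SU_DIRTY, ... */
            uint64_t su_empty_serial; /* checkpoint serial when nbytes hit 0, else 0 */
    };

    /*
     * A dirty segment may be reclaimed by the kernel itself once two
     * checkpoints have completed while it held zero live bytes, so that
     * roll-forward can no longer refer to anything in it.
     */
    static int
    segment_autocleanable(const struct seg_usage *su, uint64_t ckp_serial)
    {
            return (su->su_flags & SU_DIRTY) != 0 &&
                su->su_nbytes == 0 &&
                su->su_empty_serial != 0 &&
                ckp_serial >= su->su_empty_serial + 2;
    }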

- Now that our cache is basically all of physical memory, we need to make
  sure that segwrite is not starving other important things.  Need a way
  to prioritize which blocks are most important to write, and write only
  those before giving up the seglock to do the rest.  How does this change
  our notion of what a checkpoint is?
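
  One way the prioritization could be phrased (the classes and the
  classification below are only an illustration of the idea, not a
  worked-out policy):

    /* Illustrative write classes, most urgent first. */
    enum lfs_write_class {
            WC_WAITED_ON,   /* blocks someone is sleeping on (fsync, flush) */
            WC_METADATA,    /* inode blocks, indirect blocks, ifile blocks */
            WC_DIRTY_DATA,  /* ordinary dirty file data */
            WC_CLEANING     /* blocks brought in only for cleaning */
    };

    /*
     * segwrite would write only the most urgent classes while holding
     * the seglock, then release it before handling the rest; where the
     * checkpoint boundary falls in such a split write is the open
     * question above.
     */
    static enum lfs_write_class
    classify_block(int has_waiter, int is_metadata, int for_cleaning)
    {
            if (has_waiter)
                    return WC_WAITED_ON;
            if (is_metadata)
                    return WC_METADATA;
            if (for_cleaning)
                    return WC_CLEANING;
            return WC_DIRTY_DATA;
    }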

- Investigate alternate inode locking strategy: Inode locks are useful
  for locking against simultaneous changes to inode size (balloc,
  truncate, write), but because the assignment of disk blocks is also
  covered by the segment lock, we don't really need to pay attention to
  the inode lock when writing a segment, right?  If this is true, the
  locking problem in lfs_{bmapv,markv} goes away and lfs_reserve can go,
  too.

- Fully working fsck_lfs.  (Really, need a general-purpose external
  partial-segment writer.)

- Get rid of DEV_BSIZE; pay attention to the media block size at mount time.
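
  The sector size can be queried from the device instead of assumed.
  Shown below in userland terms for brevity; at mount time the kernel
  would make the equivalent query through the device vnode (the helper
  name is hypothetical):

    #include <sys/types.h>
    #include <sys/ioctl.h>
    #include <sys/disklabel.h>
    #include <sys/dkio.h>

    #include <err.h>
    #include <fcntl.h>
    #include <unistd.h>

    /* Return bytes per sector for the given raw device, e.g. 512 or 2048. */
    static u_int
    media_secsize(const char *rawdev)
    {
            struct disklabel dl;
            int fd;

            if ((fd = open(rawdev, O_RDONLY)) == -1)
                    err(1, "%s", rawdev);
            if (ioctl(fd, DIOCGDINFO, &dl) == -1)
                    err(1, "DIOCGDINFO");
            (void)close(fd);
            return dl.d_secsize;
    }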

- More fs ops need to call lfs_imtime.  Which ones?  (Blackwell et al., 1995)

- lfs_vunref_head exists so that vnodes loaded solely for cleaning can
  be put back on the *head* of the vnode free list.  Make sure we
  actually do this, since we now take IN_CLEANING off during segment write.

- The cleaner could be enhanced to be controlled from other processes,
  and possibly perform additional tasks:

  - Backups.  At a minimum, turn the cleaner off and on to allow
    effective live backups.  More aggressively, the cleaner itself could
    be the backup agent, and dump_lfs would merely be a controller.

  - Cleaning time policies.  Be able to tweak the cleaner's thresholds
    to allow more thorough cleaning during policy-determined idle
    periods (regardless of actual idleness), or to put cleaning off
    until later during short, intensive write periods.

  - File coalescing and placement.  During periods we expect to be idle,
    coalesce fragmented files into one place on disk for better read
    performance.  Ideally, move files that have not been accessed in a
    while to the extremes of the disk, thereby shortening seek times for
    files that are accessed more frequently (though how the cleaner
    should communicate "please put this near the beginning or end of the
    disk" to the kernel is a very good question; flags to lfs_markv?).

  - Versioning.  When it cleans a segment, the cleaner could write data
    for files that were less than n versions old to tape or elsewhere.
    Perhaps it could even write them back onto the disk, although that
    requires more thought (and kernel mods).

- Move lfs_countlocked() into vfs_bio.c, to replace count_locked_queue;
  perhaps keep the name, replace the function.  Could it count referenced
  vnodes as well, if it were in vfs_subr.c instead?
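
  The counting itself is just a walk over the locked buffer queue;
  roughly the following (the queue representation here is simplified;
  the real version would use the buffer queue structures in vfs_bio.c):

    #include <stddef.h>

    /* Simplified stand-ins for the buffer header and locked queue. */
    struct lbuf {
            struct lbuf *b_qnext;   /* next buffer on the locked queue */
            long         b_bufsize; /* allocated size of this buffer */
    };

    /*
     * Count buffers and bytes on the locked queue.  Living in vfs_bio.c
     * it could replace count_locked_queue(); living in vfs_subr.c it
     * could also walk the vnode lists and count referenced vnodes.
     */
    static void
    countlocked(const struct lbuf *head, int *countp, long *bytesp)
    {
            int n = 0;
            long bytes = 0;

            for (const struct lbuf *bp = head; bp != NULL; bp = bp->b_qnext) {
                    n++;
                    bytes += bp->b_bufsize;
            }
            *countp = n;
            *bytesp = bytes;
    }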

- Why not delete the lfs_bmapv call and just mark everything dirty that
  isn't deleted/truncated?  Get some numbers about what percentage of
  the blocks that the cleaner thinks might be live are actually live.
  If it's high, get rid of lfs_bmapv.
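
  Getting those numbers only takes a pair of counters in the lfs_bmapv
  path; an illustrative sketch (the hook points and names are
  hypothetical):

    #include <stdint.h>
    #include <stdio.h>

    /* How many blocks the cleaner offered as possibly live, and how
     * many of those really were live. */
    static uint64_t bmapv_offered;
    static uint64_t bmapv_live;

    static void
    bmapv_note(int was_live)
    {
            bmapv_offered++;
            if (was_live)
                    bmapv_live++;
    }

    /* If the ratio stays near 100%, lfs_bmapv buys little and the
     * cleaner could simply mark everything not deleted/truncated dirty. */
    static void
    bmapv_report(void)
    {
            if (bmapv_offered != 0)
                    printf("bmapv: %llu/%llu live (%.1f%%)\n",
                        (unsigned long long)bmapv_live,
                        (unsigned long long)bmapv_offered,
                        100.0 * (double)bmapv_live / (double)bmapv_offered);
    }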

- There is a nasty problem in that it may take *more* room to write the
  data to clean a segment than is returned by the new segment, because
  indirect blocks in segment 2 are dirtied by the data being copied
  into the log from segment 1.  The suggested solution at this point is
  to detect it when we have no space left on the filesystem, write the
  extra data into the last segment (leaving no clean ones), make it a
  checkpoint and shut down the file system for fixing by a utility
  reading the raw partition.  The argument is that this should never
  happen and is practically impossible to fix, since the cleaner would
  theoretically have to build a model of the entire filesystem in memory
  to detect the condition occurring.  A file-coalescing cleaner will help
  avoid the problem, and one that reads/writes from the raw disk could
  fix it.

- Need to keep vnode v_numoutput up to date for pending writes?

- If we delete a file that's being executed, the version number isn't
  updated, and fsck_lfs has to figure this out; the case is the same as
  having an inode that no directory references, so the file should be
  reattached into lost+found.

- Currently there's no notion of write error checking.
  + Failed data/inode writes should be rescheduled (kernel-level bad-block
    handling).
  + Failed superblock writes should cause selection of a new superblock
    for checkpointing.
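
  For the superblock case, the checkpoint writer could step through the
  superblock addresses until one write succeeds; a sketch under
  simplified assumptions (LFS does keep multiple superblock copies, but
  the names and I/O hook below are illustrative):

    #include <stdint.h>

    #define NSBCOPIES       10      /* number of superblock copies kept */

    /*
     * Try each superblock location in turn; write_sb() stands in for
     * the real I/O routine and returns 0 on success.
     */
    static int
    checkpoint_superblock(const int64_t sb_addrs[NSBCOPIES],
        int (*write_sb)(int64_t daddr))
    {
            for (int i = 0; i < NSBCOPIES; i++) {
                    if (write_sb(sb_addrs[i]) == 0)
                            return 0;       /* checkpoint landed here */
                    /* else log the failure and try the next copy */
            }
            return -1;      /* every copy failed; no checkpoint taken */
    }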

- Future fantasies:
  - unrm, versioning
  - transactions
  - extended cleaner policies (hot/cold data, data placement)

- Problem with the concept of multiple buffer headers referencing the segment:
  Positives:
    Don't lock down one segment's worth of physical memory per file system.
    Don't copy from buffers to segment memory.
    Don't tie down the bus to transfer 1M.
    Works on controllers that do not support large transfers.
    Disk can start writing immediately instead of waiting 1/2 rotation
        and the full transfer.
  Negatives:
    Have to do the segment write and then the segment summary write,
    since the latter is what verifies that the segment is okay.  (Is
    there another way to do this?)
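
  The ordering constraint in the negative looks like this in outline
  (the structures and checksum are stand-ins; the point is only that
  the summary, which validates the partial segment, must be issued
  after the data it covers):

    #include <stddef.h>
    #include <stdint.h>

    struct seg_summary {
            uint32_t ss_nblocks;    /* data blocks covered by this summary */
            uint32_t ss_datasum;    /* checksum over those blocks */
    };

    /* Stand-in checksum; the real code folds only a few bytes per block. */
    static uint32_t
    datasum(const void *const *blocks, size_t nblocks)
    {
            uint32_t sum = 0;

            for (size_t i = 0; i < nblocks; i++)
                    sum += *(const unsigned char *)blocks[i];
            return sum;
    }

    /*
     * Two-phase write: the data goes out first (possibly as several
     * smaller transfers, one per buffer header), the summary last,
     * because recovery trusts a partial segment only if the summary
     * checksum matches the data actually on disk.
     */
    static int
    write_partial_segment(const void *const *blocks, size_t nblocks,
        int (*write_data)(const void *const *, size_t),
        int (*write_summary)(const struct seg_summary *))
    {
            struct seg_summary ss;

            if (write_data(blocks, nblocks) != 0)
                    return -1;
            ss.ss_nblocks = (uint32_t)nblocks;
            ss.ss_datasum = datasum(blocks, nblocks);
            return write_summary(&ss);
    }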

- The algorithm for selecting the disk addresses of the superblocks
  has to be available to the user program that checks the file system.
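
  One way to arrange that is to keep the address computation in a small
  shared source file that the kernel, newfs_lfs and fsck_lfs all
  compile.  The even-spread placement below is only an illustration of
  the shape of such a helper, not the real placement rule:

    #include <stdint.h>

    #define NSBCOPIES       10      /* number of superblock copies kept */

    /*
     * Generate candidate superblock disk addresses (in sectors) for a
     * file system with 'nseg' segments of 'segsize' bytes on media
     * with 'secsize'-byte sectors.  Returns how many were generated.
     */
    static int
    sb_addresses(uint64_t nseg, uint64_t segsize, uint32_t secsize,
        int64_t sb_daddr[NSBCOPIES])
    {
            uint64_t stride = (nseg > NSBCOPIES) ? nseg / NSBCOPIES : 1;
            int n = 0;

            for (uint64_t seg = 0; seg < nseg && n < NSBCOPIES; seg += stride)
                    sb_daddr[n++] = (int64_t)((seg * segsize) / secsize);
            return n;
    }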
117