# 4506a0ee | 06-Sep-2015 | David van Moolenbroek <david@minix3.org>
VM: allocate cache pages in mmap region
That way, these pages are transferred during live update, as they should be. This resolves an mfs crash after a number of live updates.
Change-Id: Ia53bec2692b2114c29b96a453beb0f915f56453a
# 36f477c2 | 03-Jan-2015 | Cristiano Giuffrida <giuffrida@cs.vu.nl>
vm: Allow in-band metadata for cache blocks
Allow extra space for in-band metadata when allocating cache blocks.
Edited by David van Moolenbroek: since this effectively halves the potential size of the typical file system cache, do this only when compiling with instrumentation.
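The build-time switch described above might look like the following sketch. The macro name INSTRUMENTATION and the assumption that the metadata reserve equals one full block are illustrative, not the actual MINIX definitions; reserving a full extra block per cache slot is what would halve the number of blocks a fixed-size cache can hold.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hedged sketch of conditionally reserving in-band metadata space.
 * INSTRUMENTATION and the one-block reserve are illustrative
 * assumptions, not the actual MINIX macros.
 */
#ifdef INSTRUMENTATION
#define BLOCK_EXTRA(bsize)	(bsize)	/* room for in-band metadata */
#else
#define BLOCK_EXTRA(bsize)	0	/* normal build: no overhead */
#endif

/* Size to allocate for one cache block, including any metadata space. */
static size_t cache_slot_size(size_t bsize)
{
	return bsize + BLOCK_EXTRA(bsize);
}
```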
Change-Id: I0840af6420899ede2d5bb7539e79c0a456b5128d
# 6c46a77d | 29-Mar-2015 | David van Moolenbroek <david@minix3.org>
libminixfs: better support for read errors and EOF
- The lmfs_get_block*(3) API calls may now return an error. The idea is to encourage a next generation of file system services to do a better job at dealing with block read errors than the MFS-derived implementations do. These existing file systems have been changed to panic immediately upon getting a block read error, in order not to let unchecked errors cause corruption. Note that libbdev already retries failing I/O operations a few times first.
- The libminixfs block device I/O module (bio.c) now deals properly with end-of-file conditions on block devices. Since a device or partition size may not be a multiple of the root file system's block size, support for partial block retrieval has been added, with a new internal lmfs_get_partial_block(3) call. A new test program, test85, tests the new handling of EOF conditions when reading, writing, and memory-mapping a block device.
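The first change above, an error-returning get-block call, can be sketched as follows. The function name, signature, and one-slot "cache" are simplified stand-ins, not the real lmfs_get_block*(3) interface.

```c
#include <assert.h>
#include <errno.h>
#include <stddef.h>
#include <string.h>

/*
 * Hedged sketch of an error-returning get_block call: a status code is
 * the return value and the buffer arrives through an out-parameter.
 * This is a simplified stand-in, not the real lmfs_get_block*(3).
 */
struct buf {
	char data[512];
};

static struct buf slot;		/* one-slot "cache" for the sketch */
static int io_should_fail;	/* simulates a device read error */

static int sketch_get_block(unsigned long blocknr, struct buf **bpp)
{
	(void)blocknr;
	if (io_should_fail)
		return EIO;	/* caller must check; MFS would panic here */
	memset(slot.data, 0, sizeof(slot.data));
	*bpp = &slot;
	return 0;		/* OK */
}
```

A caller in the MFS-derived style would compare the result against 0 and panic on anything else, rather than using the buffer unchecked.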
Change-Id: I05e35b6b8851488328a2679da635ebba0c6d08ce
# e94f856b | 13-Aug-2015 | David van Moolenbroek <david@minix3.org>
libminixfs/VM: fix memory-mapped file corruption
This patch employs one solution to resolve two independent but related issues. Both issues are the result of one fundamental aspect of the way VM's memory mapping works: VM uses its cache to map in blocks for memory-mapped file regions, and for blocks already in the VM cache, VM does not go to the file system before mapping them in. To preserve consistency between the FS and VM caches, VM relies on being informed about all updates to file contents through the block cache. The two issues are both the result of VM not being properly informed about such updates:
1. Once a file system provides libminixfs with an inode association (inode number + inode offset) for a disk block, this association is not broken until a new inode association is provided for it. If a block is freed and reallocated as a metadata (non-inode) block, its old association is maintained, and may be supplied to VM's secondary cache. Due to reuse of inodes, it is possible that the same inode association becomes valid for an actual file block again. In that case, when that new file is memory-mapped, under certain circumstances, VM may end up using the metadata block to satisfy a page fault on the file, due to the stale inode association. The result is a corrupted memory mapping, with the application seeing data other than the current file contents mapped in at the file block.
2. When a hole is created in a file, the underlying block is freed from the device, but VM is not informed of this update, and thus, if VM's cache contains the block with its previous inode association, this block will remain there. As a result, if an application subsequently memory-maps the file, VM will map in the old block at the position of the hole, rather than an all-zeroes block. Thus, again, the result is a corrupted memory mapping.
This patch resolves both issues by making the file system inform the minixfs library about blocks being freed, so that libminixfs can break the inode association for that block, both in its own cache and in the VM cache. Since libminixfs does not know whether VM has the block in its cache or not, it makes a call to VM for each block being freed. Thus, this change introduces more calls to VM, but it solves the correctness issues at hand; optimizations may be introduced later. On the upside, all freed blocks are now marked as clean, which should result in fewer blocks being written back to the device, and the blocks are removed from the caches entirely, which should result in slightly better cache usage.
This patch is necessary but not sufficient to resolve the situation with respect to memory mapping of file holes in general. Therefore, this patch extends test 74 with a (rather particular but effective) test for the first issue, but not yet with a test for the second one.
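The bookkeeping change at the heart of the fix can be sketched like this; the structure layout, names, and the commented-out VM call are illustrative, not the actual libminixfs code.

```c
#include <assert.h>
#include <stddef.h>

/*
 * Hedged sketch of breaking a block's inode association on free.
 * Names and layout are illustrative, not the real libminixfs structures.
 */
#define NO_INODE	0	/* "no association" marker, for the sketch */
#define NR_SLOTS	8

struct slot {
	unsigned long block;	/* device block number */
	unsigned long inode;	/* associated inode, or NO_INODE */
	long offset;		/* offset of the block within the inode */
	int dirty;
};

static struct slot cache[NR_SLOTS];

static struct slot *slot_for(unsigned long block)
{
	return &cache[block % NR_SLOTS];
}

/* The file system tells the cache which inode a block belongs to. */
static void set_assoc(unsigned long block, unsigned long inode, long off)
{
	struct slot *sp = slot_for(block);
	sp->block = block;
	sp->inode = inode;
	sp->offset = off;
	sp->dirty = 1;
}

/*
 * On free, break the association and mark the block clean, so that a
 * later reuse of the block or the inode cannot resurrect stale data.
 * The real code would also notify VM here, so its cache forgets the
 * block as well.
 */
static void free_block(unsigned long block)
{
	struct slot *sp = slot_for(block);
	sp->inode = NO_INODE;
	sp->dirty = 0;
	/* vm_forget_block(dev, block);  -- hypothetical VM notification */
}
```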
This fixes #90.
Change-Id: Iad8b134d2f88a884f15d3fc303e463280749c467
# e321f655 | 15-Nov-2014 | David van Moolenbroek <david@minix3.org>
libfsdriver: support mmap on FSes with no device
This patch adds (very limited) support for memory-mapping pages on file systems that are mounted on the special "none" device and that do not implement PEEK support by themselves. This includes hgfs, vbfs, and procfs.
The solution is implemented in libfsdriver, and consists of allocating pages, filling them with content by calling the file system's READ functionality, passing the pages to VM, and freeing them again. A new VM flag is used to indicate that these pages should be mapped in only once, and thus not cached beyond their single use. This prevents stale data from getting mapped in without the involvement of the file system, which would be problematic on file systems where file contents may become outdated at any time. No VM caching means no sharing and poor performance, but mmap no longer fails on these file systems.
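The allocate-read-hand-off-free cycle can be sketched as below; the page size, callback type, and demo READ handler are illustrative, and the real code passes the page to VM for a one-shot mapping rather than returning it.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define SKETCH_PAGE_SIZE 4096

/*
 * Hedged sketch of the peek-by-reading scheme: the real libfsdriver
 * code talks to VM; this only models the data flow.
 */
typedef long (*read_fn)(char *buf, size_t len, long off);

/*
 * Satisfy a one-page "peek": allocate a page, fill it through the file
 * system's READ handler, and zero the tail on a short read.  The caller
 * would map the page in exactly once (the one-shot VM flag) and then
 * free it, so no stale copy survives in any cache.
 */
static char *fill_peek_page(read_fn read_cb, long off)
{
	char *page;
	long n;

	if ((page = malloc(SKETCH_PAGE_SIZE)) == NULL)
		return NULL;
	n = read_cb(page, SKETCH_PAGE_SIZE, off);
	if (n < 0) {
		free(page);
		return NULL;
	}
	if (n < SKETCH_PAGE_SIZE)	/* short read: zero-fill the rest */
		memset(page + n, 0, SKETCH_PAGE_SIZE - n);
	return page;
}

/* Demo READ handler: a 10-byte "file" filled with 'A' bytes. */
static long demo_read(char *buf, size_t len, long off)
{
	if (off >= 10)
		return 0;
	if (len > (size_t)(10 - off))
		len = (size_t)(10 - off);
	memset(buf, 'A', len);
	return (long)len;
}
```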
Compared to a libc-based approach, this patch retains the on-demand nature of mmap. Especially tail(1) is known to map in a large file area only to use a small portion of it.
All file systems now need to be given permission for the SETCACHEPAGE and CLEARCACHE calls to VM.
A very basic regression test is added to test74.
Change-Id: I17afc4cb97315b515cad1542521b98f293b6b559