d6d9df16 | 04-May-2015 | Tomohiro Kusumi <kusumi.tomohiro@gmail.com> |
sys/vfs/tmpfs: Change tm_ino type from 'int' to 'ino_t'
- tmpfs_mount::tm_ino needs to be ino_t.
- The ino allocation wrapper, tmpfs_fetch_ino(), and the inode itself assume ino_t, not int.
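A minimal user-space sketch (not the tmpfs code itself; the field and function names below are illustrative) of the class of bug the ino_t change prevents: an inode number wider than 32 bits is silently truncated when stored in an int.

```c
#include <stdint.h>

/* Storing a 64-bit inode number in an int keeps only the low 32 bits. */
static int store_ino_in_int(uint64_t ino)
{
    int tm_ino = (int)ino;      /* truncates to the low 32 bits */
    return tm_ino;
}

/* An ino_t-sized (64-bit here) field preserves the full value. */
static uint64_t store_ino_in_ino_t(uint64_t ino)
{
    uint64_t tm_ino = ino;
    return tm_ino;
}
```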
|
e195a8fe | 23-Oct-2013 | Matthew Dillon <dillon@apollo.backplane.com> |
tmpfs - remove most mp->mnt_token cases, kqueue filterops are MPSAFE (2)
* Fix bug introduced in tmpfs_nrename(). The RB tree removal was not being guarded by the appropriate node lock.
* Also assert that the directory entry is still present.
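A hedged user-space sketch of the fixed pattern, using a pthread mutex and a stub entry in place of the kernel's per-node lock and dirent RB tree (all names here are illustrative, not the tmpfs API): the removal happens only while the node lock is held, and the entry is asserted to still be present first.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

/* Toy stand-ins for a tmpfs directory node and its entry set. */
struct dirent_stub { int present; };
struct dnode {
    pthread_mutex_t lock;       /* plays the role of the per-node lock */
    struct dirent_stub *entry;  /* plays the role of the RB tree */
};

/* Remove the entry only under the node lock, asserting it is still
 * present, mirroring the guard this commit adds around RB removal. */
static void remove_entry_locked(struct dnode *dn)
{
    pthread_mutex_lock(&dn->lock);
    assert(dn->entry != NULL && dn->entry->present);
    dn->entry->present = 0;
    dn->entry = NULL;           /* guarded removal */
    pthread_mutex_unlock(&dn->lock);
}
```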
|
77d1fb91 | 27-Jun-2013 | Matthew Dillon <dillon@apollo.backplane.com> |
kernel - Wakeup threads blocked in the VM page allocator more quickly
Additional tuning to changes in the way the pageout daemon and VM system wake up threads blocked allocating normal VM pages. Previously the VM system would wait for the vm_paging_target() to be reached before waking up all waiters, and had code to try to wake up individual threads past the minimum.
This just didn't work very well on machines with lots of memory because it could take quite a long time for the pageout daemon to actually reach the vm_paging_target() (and VM load could prevent it from being reached at all!). Many threads could wind up being blocked indefinitely waiting for cache and/or free page counts to reach reasonable levels.
The solution is to give the kernel time to build up a smaller number of free+cache pages beyond the minimum, enough to give all waiting threads a fair shot at allocating at least one page, and then simply wakeup all the waiters. This hysteresis is set smallish on purpose, defaulting to a value of 16 in order to avoid holding threads blocked for excessive periods of time.
Under heavy VM loads this creates an overlap between memory consumers and the pageout daemon, allowing the pageout daemon to run continuously in these situations.
* Add the vm.page_free_hysteresis sysctl, initialized to 16. This field specifies a small number of pages past the minimum required for normal system operation before the VM system wakes up threads blocked in the VM page allocator.
* Adjust tmpfs to force-free pages through the hysteresis value to reduce degenerate block/wakeup situations under heavy VM loads.
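The wakeup decision described above can be sketched as a single predicate. This is a hedged, user-space illustration with made-up names, not the kernel's code: wake all waiters once free+cache pages exceed the minimum by the small hysteresis (default 16), rather than waiting for the full paging target.

```c
/* Illustrative stand-in for vm.page_free_hysteresis (default 16). */
#define PAGE_FREE_HYSTERESIS 16

/* Returns nonzero when all blocked allocators should be woken:
 * enough free+cache pages have accumulated past the minimum to give
 * every waiter a fair shot at allocating at least one page. */
static int should_wakeup_waiters(int free_count, int cache_count,
                                 int free_min)
{
    return (free_count + cache_count) >= (free_min + PAGE_FREE_HYSTERESIS);
}
```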
|
6ff1f162 | 27-Jun-2013 | Matthew Dillon <dillon@apollo.backplane.com> |
tmpfs - Handle low memory situations a little better (2)
* When handling pageout daemon requests we want to try to free pages directly to the VM page cache or the freeq and not necessarily cycle them through active or inactive.
Shortcutting to the cache/free queues will allow the machine to unstick much more quickly which is particularly important on boxes with a lot of memory because the pageout hysteresis runs in such a large range.
* Greatly improves monster's performance under extreme tmpfs write loads.
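A toy model of the queue shortcut this commit describes (the enum mirrors the VM page-queue names, but the decision function and its parameters are purely illustrative): when freeing on behalf of the pageout daemon, pages go straight to the cache/free queues instead of cycling through active or inactive.

```c
/* Names mirror the VM page queues; the logic is a sketch, not kernel code. */
enum pagequeue { PQ_ACTIVE, PQ_INACTIVE, PQ_CACHE, PQ_FREE };

/* Pageout-initiated frees shortcut to CACHE/FREE so memory becomes
 * reusable immediately; the normal path ages pages through INACTIVE. */
static enum pagequeue tmpfs_release_queue(int pageout_request, int page_clean)
{
    if (pageout_request)
        return page_clean ? PQ_FREE : PQ_CACHE;  /* the shortcut */
    return PQ_INACTIVE;                          /* normal aging path */
}
```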
|