#
4016c7de |
| 22-Jan-2025 |
mpi <mpi@openbsd.org> |
indent.
|
#
b83f5574 |
| 22-Jan-2025 |
mpi <mpi@openbsd.org> |
Remove leftovers of loaned pages.
ok kettenis@
|
#
679d40f1 |
| 18-Jan-2025 |
kettenis <kettenis@openbsd.org> |
The pmap_enter(9) function may fail even if we have enough free physical memory. This happens if we can't allocate KVA to map that memory. This is especially likely to happen with the arm64 pmap (and the riscv64 pmap that is derived from it) since lock contention on the kernel map will make us fail to allocate the required KVA. But this may happen on other architectures as well, especially those that don't define __HAVE_PMAP_DIRECT.
Fix this issue by introducing a new pmap_populate() interface that may be called to populate the page tables such that a subsequent pmap_enter(9) call that uses the same virtual address will succeed. Use this in the uvm fault handler after a failed pmap_enter(9) call before we retry the fault.
tested by phessler@, mglocker@ ok mpi@
|
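The retry flow this commit describes can be sketched with hypothetical stubs; `pmap_enter_stub`, `pmap_populate_stub`, and `fault_enter_with_retry` below are illustrative names, not the real kernel functions, and only model the control flow of populating page tables after a failed enter.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

static bool populated;		/* models page-table KVA being in place */

static int
pmap_enter_stub(unsigned long va)
{
	(void)va;
	/* Fails until the page tables covering this address are populated. */
	return populated ? 0 : ENOMEM;
}

static void
pmap_populate_stub(unsigned long va)
{
	(void)va;
	populated = true;	/* pre-allocate page tables for the address */
}

/* Fault-handler flow: on failure, populate, then retry the enter. */
static int
fault_enter_with_retry(unsigned long va)
{
	int error = pmap_enter_stub(va);

	if (error) {
		pmap_populate_stub(va);
		error = pmap_enter_stub(va);	/* retried fault */
	}
	return error;
}
```

In the real handler the retry happens by re-running the fault rather than looping in place, but the invariant is the same: after pmap_populate(), a pmap_enter(9) on the same virtual address must succeed.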
#
e9d70b48 |
| 03-Jan-2025 |
mpi <mpi@openbsd.org> |
Unlock underlying `uobj' when OOM in uvmfault_promote().
Found the hardway by sthen@, ok kettenis@
|
#
0b4f1452 |
| 27-Dec-2024 |
mpi <mpi@openbsd.org> |
Move pmap_page_protect(PROT_NONE) call inside uvm_pagedeactivate().
ok tb@, kettenis@
|
#
552563d5 |
| 24-Dec-2024 |
mpi <mpi@openbsd.org> |
Use a variable to represent the type of lock necessary for the lower fault.
For the moment an exclusive (RW_WRITE) lock is always used.
ok kettenis@, tb@
|
#
8774a958 |
| 22-Dec-2024 |
mpi <mpi@openbsd.org> |
Read entry's values before releasing locks in uvm_fault_lower_io().
ok tb@, miod@, kettenis@
|
#
dceff774 |
| 20-Dec-2024 |
mpi <mpi@openbsd.org> |
Merge identical code paths to promote data to a new anon into a new function.
ok tb@, miod@
|
#
0528dcd0 |
| 18-Dec-2024 |
mpi <mpi@openbsd.org> |
Do not busy pages that are resident & fetched with PGO_LOCKED.
This is safe because the rwlock of the related object is never released until the handler is done with the pages.
ok kettenis@, tb@
|
#
f46a341e |
| 15-Dec-2024 |
mpi <mpi@openbsd.org> |
Return errno values rather than dying VM_PAGER_* in the pgo_fault() interfaces.
This doesn't introduce any value change. All errors are converted to EACCES even if many could use EIO.
ok tb@, jsg@
|
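The conversion described above can be sketched as follows; the VM_PAGER_* constant values and the helper name `vm_pager_to_errno` are assumptions for illustration, modeling how every pager error collapses to EACCES even where EIO might fit better.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical pager return codes, for illustration only. */
#define VM_PAGER_OK	0
#define VM_PAGER_BAD	1
#define VM_PAGER_ERROR	4

/* Every non-OK pager status becomes EACCES. */
static int
vm_pager_to_errno(int rv)
{
	return rv == VM_PAGER_OK ? 0 : EACCES;
}
```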
#
07c549d8 |
| 04-Dec-2024 |
mpi <mpi@openbsd.org> |
Document that the original page during a CoW can be unlocked earlier.
ok tb@
|
#
335383c9 |
| 04-Dec-2024 |
mpi <mpi@openbsd.org> |
Pass the rw_enter(9) type to amap_lock() in preparation for using shared locks.
ok tb@
|
#
b16b5f31 |
| 03-Dec-2024 |
mpi <mpi@openbsd.org> |
Add missing wakeup & cleanup in error path.
ok tb@
|
#
897b1709 |
| 03-Dec-2024 |
mpi <mpi@openbsd.org> |
Use uvm_pagewait() rather than re-rolling it.
ok miod@, tb@
|
#
5797ad06 |
| 29-Nov-2024 |
mpi <mpi@openbsd.org> |
Also call pmap_extract() before entering a page ahead for lower layer faults.
As for the upper layer, call pmap_update() only if, at least, a page has been entered.
ok tb@, kettenis@
|
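The pattern in these two commits can be sketched with hypothetical stubs (`page_is_valid`, `pmap_update_stub`, `enter_neighbors` are illustrative names, not the real uvm code): check that a neighbor page is valid before doing any pmap work, and call pmap_update() only when at least one page was actually entered.

```c
#include <assert.h>
#include <stdbool.h>

static int updates;		/* counts pmap_update() calls */

static bool
page_is_valid(int i)
{
	return (i % 2) == 0;	/* pretend every other page is resident */
}

static void
pmap_update_stub(void)
{
	updates++;
}

/* Enter valid neighbor pages; flush the pmap only if any were entered. */
static int
enter_neighbors(int npages)
{
	int i, entered = 0;

	for (i = 0; i < npages; i++) {
		if (!page_is_valid(i))
			continue;	/* skip before any pmap work */
		entered++;		/* models pmap_enter(9) */
	}
	if (entered > 0)
		pmap_update_stub();
	return entered;
}
```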
#
a52f395c |
| 29-Nov-2024 |
mpi <mpi@openbsd.org> |
When paging ahead, delay calling pmap_extract() after checking for a valid page.
While here call pmap_update() only if, at least, a page has been entered.
ok tb@, kettenis@
|
#
8a233859 |
| 27-Nov-2024 |
mpi <mpi@openbsd.org> |
Neighbor (fault ahead) pages are never mapped with the wired attribute.
Wired faults are always "narrow". That means the fault handler does not try to fault neighbor pages ahead. So do not propagate the `flt->wired' attribute to the corresponding pmap_enter(9) calls and instead assert that it is false whenever neighbor pages are entered in a memory space.
ok tb@
|
#
34e43087 |
| 26-Nov-2024 |
mpi <mpi@openbsd.org> |
Make uvmfault_anonget() return errno values instead of converting them.
ok miod@, tb@
|
#
58243cbf |
| 25-Nov-2024 |
mpi <mpi@openbsd.org> |
Remove unused `fault_type' argument.
|
#
cce913b9 |
| 05-Nov-2024 |
mpi <mpi@openbsd.org> |
Use a helper to get lower page from backing store: uvm_fault_lower_io().
Reduce differences with NetBSD, no behavior change.
ok tb@, miod@
|
#
96ec8e93 |
| 05-Nov-2024 |
mpi <mpi@openbsd.org> |
Do not put wired pages on the page queues & release their swap resources.
While here move the code to release swap resources outside of the pageq mutex and shuffle some locking dances to reduce differences with NetBSD.
ok miod@
|
#
0535051c |
| 05-Nov-2024 |
mpi <mpi@openbsd.org> |
Check if the mapping for a vm_map_entry exists while holding its lock.
Prevent a race where the mapped object is being truncated while we are spinning to unwire it.
Reported-by: syzbot+189cd03d088cddbee591@syzkaller.appspotmail.com
Adapted from NetBSD r1.207, ok miod@
|
#
d6897f14 |
| 05-Nov-2024 |
mpi <mpi@openbsd.org> |
Handle faults on wired map entries similarly to VM_FAULT_WIRE faults.
It is valid to fault on wired mappings if the object was truncated then grown again.
Adapted from NetBSD r1.207, ok miod@
|
#
e5ad67b7 |
| 03-Nov-2024 |
mpi <mpi@openbsd.org> |
Revert previous, at least on arm64 too many pages end up being wired.
|
#
ba2fc704 |
| 03-Nov-2024 |
mpi <mpi@openbsd.org> |
Do not put wired pages on the page queues & release their swap resources.
While here move the code to release swap resources outside of the pageq mutex and shuffle some locking dances to reduce differences with NetBSD. ok miod@
|