Lines Matching full:page
75 * page, indexed by page number. Each structure
88 * and offset to which this page belongs (for pageout),
95 * The queue lock for a page depends on the value of its queue field and is
101 * (B) the page busy lock.
105 * (O) the object that the page belongs to.
106 * (Q) the page's queue lock.
109 * page's contents and identity (i.e., its <object, pindex> tuple) as
111 * the page structure, the busy lock lacks some of the features available
114 * detected, and an attempt to xbusy a busy page or sbusy an xbusy page
116 * vm_page_sleep_if_busy() can be used to sleep until the page's busy
117 * state changes, after which the caller must re-lookup the page and
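The busy/re-lookup protocol described in the fragments above can be sketched as a small helper. This is a minimal sketch, not an existing KPI: lookup_unbusied_page() is a hypothetical name, and it assumes the FreeBSD 12/13-era vm_page_sleep_if_busy(), which drops the object lock while sleeping and re-locks it before returning TRUE. Later sketches in this listing assume the same VM headers shown here.

#include <sys/param.h>
#include <sys/systm.h>
#include <vm/vm.h>
#include <vm/vm_object.h>
#include <vm/vm_page.h>
#include <vm/pmap.h>

/*
 * Wait until the page at (object, pindex) is unbusied.  The page must
 * be re-looked up after every sleep because its identity may have
 * changed while the object lock was dropped.
 */
static vm_page_t
lookup_unbusied_page(vm_object_t object, vm_pindex_t pindex)
{
        vm_page_t m;

        VM_OBJECT_WLOCK(object);
        while ((m = vm_page_lookup(object, pindex)) != NULL &&
            vm_page_sleep_if_busy(m, "pgbusy"))
                continue;       /* slept: re-lookup the page */
        VM_OBJECT_WUNLOCK(object);
        return (m);
}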
121 * The valid field is protected by the page busy lock (B) and object
124 * These must be protected with the busy lock to prevent page-in or
125 * creation races. Page invalidation generally happens as a result
136 * In contrast, the synchronization of accesses to the page's
138 * the machine-independent layer, the page busy lock must be held to
148 * only way to ensure a page cannot become dirty. I/O generally
149 * removes the page from pmap to ensure exclusive access and atomic
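A sketch of the pageout-style protocol those lines describe, assuming the caller has already exclusively busied the page; prepare_for_pageout() is a hypothetical name, while pmap_remove_write() and vm_page_test_dirty() are the existing primitives.

/*
 * With the page exclusively busied, revoke write access in the pmap so
 * no new modifications can occur, then fold any pmap-level modify bits
 * into the machine-independent dirty field before starting I/O.
 */
static void
prepare_for_pageout(vm_page_t m)
{
        vm_page_assert_xbusied(m);
        pmap_remove_write(m);   /* remove or downgrade writable mappings */
        vm_page_test_dirty(m);  /* dirty the page if the pmap says modified */
}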
152 * The ref_count field tracks references to the page. References that
153 * prevent the page from being reclaimable are called wirings and are
158 * pmap_extract_and_hold(). When a page belongs to an object, it may be
159 * wired only when the object is locked, or the page is busy, or by
161 * page is not busy (or is exclusively busied by the current thread), and
162 * the page is unmapped, its wire count will not increase. The ref_count
164 * is known that no other references to the page exist, such as in the page
165 * allocator. A page may be present in the page queues, or even actively
166 * scanned by the page daemon, without an explicitly counted reference.
167 * The page daemon must therefore handle the possibility of a concurrent
168 * free of the page.
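A minimal sketch of a wiring taken under the object lock, one of the conditions listed above; hold_resident_page() is a hypothetical helper, and the wiring is later released with vm_page_unwire().

/*
 * Pin a resident page so it cannot be reclaimed while the caller
 * operates on it.  The object lock keeps the page's identity stable
 * across the lookup and satisfies vm_page_wire()'s locking contract.
 */
static vm_page_t
hold_resident_page(vm_object_t object, vm_pindex_t pindex)
{
        vm_page_t m;

        VM_OBJECT_RLOCK(object);
        m = vm_page_lookup(object, pindex);
        if (m != NULL)
                vm_page_wire(m);
        VM_OBJECT_RUNLOCK(object);
        return (m);
}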
170 * The queue state of a page consists of the queue and act_count fields of
172 * by PGA_QUEUE_STATE_MASK. The queue field contains the page's page queue
173 * index, or PQ_NONE if it does not belong to a page queue. To modify the
174 * queue field, the page queue lock corresponding to the old value must be
177 * this rule: the page daemon may transition the queue field from
178 * PQ_INACTIVE to PQ_NONE immediately prior to freeing the page during an
179 * inactive queue scan. At that point the page is already dequeued and no
181 * flag, when set, indicates that the page structure is physically inserted
182 * into the queue corresponding to the page's queue index, and may only be
183 * set or cleared with the corresponding page queue lock held.
185 * To avoid contention on page queue locks, page queue operations (enqueue,
189 * queue is full, an attempt to insert a new entry will lock the page
193 * indefinitely. In particular, a page may be freed with pending batch
194 * queue entries. The page queue operation flags must be set using atomic
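The batching is invisible to most callers. A sketch, with release_to_inactive() as a hypothetical wrapper: releasing the last wiring only records the queue-state change in the page's atomic flags, and the physical queue insertion may happen later, under the page queue lock, when a per-CPU batch is flushed.

static void
release_to_inactive(vm_page_t m)
{
        /* No page queue lock is needed at this call site. */
        vm_page_unwire(m, PQ_INACTIVE);
}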
223 TAILQ_ENTRY(vm_page) q; /* page queue or free list (Q) */
239 vm_paddr_t phys_addr; /* physical address of page (C) */
241 u_int ref_count; /* page references (A) */
246 uint8_t flags; /* page PG_* flags (P) */
247 uint8_t oflags; /* page VPO_* flags (O) */
250 /* NOTE that these must support one bit per DEV_BSIZE in a page */
259 * ref_count is normally used to count wirings that prevent the page from being
262 * the page is unallocated.
268 * attempting to tear down all mappings of a given page. The page busy lock and
277 * Page flags stored in oflags:
279 * Access to these page flags is synchronized by the lock on the object
280 * containing the page (O).
283 * indicates that the page is not under PV management but
284 * otherwise should be treated as a normal page. Pages not
292 #define VPO_UNMANAGED 0x04 /* no PV management for page */
293 #define VPO_SWAPINPROG 0x08 /* swap I/O in progress on page */
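A small sketch for the oflags rules; page_is_pv_managed() is a hypothetical predicate. Most VPO_* bits require the object lock (O); VPO_UNMANAGED is set at allocation time and never changes afterwards, so testing it needs no further synchronization.

static bool
page_is_pv_managed(vm_page_t m)
{
        return ((m->oflags & VPO_UNMANAGED) == 0);
}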
296 * Busy page implementation details.
393 * PGA_REFERENCED may be cleared only if the page is locked. It is set by
398 * When it does so, the object must be locked, or the page must be
402 * PGA_EXECUTABLE may be set by pmap routines, and indicates that a page has
405 * PGA_NOSYNC must be set and cleared with the page busy lock held.
407 * PGA_ENQUEUED is set and cleared when a page is inserted into or removed
408 * from a page queue, respectively. It determines whether the plinks.q field
409 * of the page is valid. To set or clear this flag, the page's "queue" field must
410 * be a valid queue index, and the corresponding page queue lock must be held.
412 * PGA_DEQUEUE is set when the page is scheduled to be dequeued from a page
413 * queue, and cleared when the dequeue request is processed. A page may
415 * is requested after the page is scheduled to be enqueued but before it is
416 * actually inserted into the page queue.
418 * PGA_REQUEUE is set when the page is scheduled to be enqueued or requeued
419 * in its page queue.
426 * and the corresponding page queue lock must be held when clearing any of the
430 * when the context that dirties the page does not have the object write lock
433 #define PGA_WRITEABLE 0x0001 /* page may be mapped writeable */
434 #define PGA_REFERENCED 0x0002 /* page has been referenced */
435 #define PGA_EXECUTABLE 0x0004 /* page may be mapped executable */
436 #define PGA_ENQUEUED 0x0008 /* page is enqueued in a page queue */
437 #define PGA_DEQUEUE 0x0010 /* page is due to be dequeued */
438 #define PGA_REQUEUE 0x0020 /* page is due to be requeued */
439 #define PGA_REQUEUE_HEAD 0x0040 /* page requeue should bypass LRU */
441 #define PGA_SWAP_FREE 0x0100 /* page with swap space was dirtied */
442 #define PGA_SWAP_SPACE 0x0200 /* page has allocated swap space */
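A sketch of how MI code is expected to use these flags, assuming the page is exclusively busied and fully valid; note_page_use() is a hypothetical helper. PGA_REFERENCED is set through vm_page_reference() with no queue lock held, and PGA_WRITEABLE is tested through pmap_page_is_write_mapped() rather than read directly.

static void
note_page_use(vm_page_t m)
{
        vm_page_assert_xbusied(m);
        vm_page_reference(m);           /* sets PGA_REFERENCED atomically */
        if (pmap_page_is_write_mapped(m))
                vm_page_dirty(m);       /* a writable mapping may have dirtied it */
}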
448 * Page flags. Updates to these flags are not synchronized, and thus they must
449 * be set during page allocation or free to avoid races.
451 * The PG_PCPU_CACHE flag is set at allocation time if the page was
453 * page is allocated from the physical memory allocator.
456 #define PG_FICTITIOUS 0x02 /* physical page doesn't exist */
457 #define PG_ZERO 0x04 /* page is zeroed */
458 #define PG_MARKER 0x08 /* special queue marker page */
459 #define PG_NODUMP 0x10 /* don't include this page in a dump */
460 #define PG_NOFREE 0x20 /* page should never be freed */
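Because these flags are only updated at allocation or free time, PG_ZERO is advisory: the allocator sets it when the page is already known to be zero-filled, and a caller that asked for VM_ALLOC_ZERO must still check it. A sketch, with alloc_zeroed_page() as a hypothetical name and the object write lock held by the caller:

static vm_page_t
alloc_zeroed_page(vm_object_t object, vm_pindex_t pindex)
{
        vm_page_t m;

        VM_OBJECT_ASSERT_WLOCKED(object);
        m = vm_page_alloc(object, pindex,
            VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_ZERO);
        if (m != NULL && (m->flags & PG_ZERO) == 0)
                pmap_zero_page(m);
        /* The page is returned wired and exclusive-busied. */
        return (m);
}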
477 * Each pageable resident page falls into one of five lists:
500 extern vm_page_t vm_page_array; /* First resident page in table */
502 extern long first_page; /* first physical page number */
508 * page to which the given physical address belongs. The correct vm_page_t
509 * object is returned for addresses that are not page-aligned.
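A sketch of the round trip implied by that comment; paddr_to_page() is a hypothetical wrapper. Any offset within a page maps to the same vm_page, so the recovered physical address is the page-aligned base.

static vm_page_t
paddr_to_page(vm_paddr_t pa)
{
        vm_page_t m;

        m = PHYS_TO_VM_PAGE(pa);
        KASSERT(m == NULL || VM_PAGE_TO_PHYS(m) == trunc_page(pa),
            ("unexpected page %p for pa %#jx", m, (uintmax_t)pa));
        return (m);
}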
514 * Page allocation parameters for vm_page for the functions
539 #define VM_ALLOC_WIRED 0x0020 /* (acgnp) Allocate a wired page */
540 #define VM_ALLOC_ZERO 0x0040 /* (acgnp) Allocate a zeroed page */
542 #define VM_ALLOC_NOFREE 0x0100 /* (an) Page will never be released */
543 #define VM_ALLOC_NOBUSY 0x0200 /* (acgp) Do not excl busy the page */
544 #define VM_ALLOC_NOCREAT 0x0400 /* (gp) Don't create a page */
548 #define VM_ALLOC_SBUSY 0x4000 /* (acgp) Shared busy the page */
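A sketch of composing these request flags with vm_page_grab(); grab_wired_page() is a hypothetical name. With VM_ALLOC_WIRED | VM_ALLOC_NOBUSY the page is returned unbusied but pinned by its wiring; the call may sleep for memory or a busy page and can transiently drop the object lock.

static vm_page_t
grab_wired_page(vm_object_t object, vm_pindex_t pindex)
{
        vm_page_t m;

        VM_OBJECT_WLOCK(object);
        m = vm_page_grab(object, pindex,
            VM_ALLOC_NORMAL | VM_ALLOC_WIRED | VM_ALLOC_NOBUSY);
        VM_OBJECT_WUNLOCK(object);
        return (m);
}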
589 * PS_ALL_DIRTY is true only if the entire (super)page is dirty.
590 * However, it can be spuriously false when the (super)page has become
730 ("vm_page_assert_busied: page %p not busy @ %s:%d", \
735 ("vm_page_assert_sbusied: page %p not shared busy @ %s:%d", \
741 ("vm_page_assert_unbusied: page %p busy_lock %#x owned" \
747 ("vm_page_assert_xbusied: page %p not exclusive busy @ %s:%d", \
754 ("vm_page_assert_xbusied: page %p busy_lock %#x not owned" \
768 /* Note: page m's lock must not be owned by the caller. */
787 * Claim ownership of a page's xbusy state. In non-INVARIANTS kernels this
813 * Load a snapshot of a page's 32-bit atomic state.
825 * Atomically compare and set a page's atomic state.
832 ("%s: invalid head requeue request for page %p", __func__, m));
834 ("%s: setting PGA_ENQUEUED with PQ_NONE in page %p", __func__, m));
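A sketch of the load/compare-and-set idiom these helpers provide; request_requeue_if_inactive() is a hypothetical example, and real queue-state transitions in the tree go through higher-level helpers such as vm_page_pqstate_commit().

static void
request_requeue_if_inactive(vm_page_t m)
{
        vm_page_astate_t new, old;

        old = vm_page_astate_load(m);
        do {
                if (old.queue != PQ_INACTIVE)
                        return;         /* the page moved underneath us */
                new = old;
                new.flags |= PGA_REQUEUE;
        } while (!vm_page_astate_fcmpset(m, &old, new));
}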
842 * Clear the given bits in the specified page.
860 * Set the given bits in the specified page.
882 * Set all bits in the page's dirty field.
884 * The object containing the specified page must be locked if the
904 * Set page to not be dirty. Note: does not clear pmap modify bits
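A sketch tying the dirty helpers to the busy protocol, assuming a fully valid page that the caller has exclusively busied and modified; finish_page_modification() is a hypothetical name. vm_page_undirty() is the counterpart used after a successful writeback and, as noted above, it does not clear pmap modify bits.

static void
finish_page_modification(vm_page_t m)
{
        vm_page_assert_xbusied(m);
        vm_page_dirty(m);       /* set all bits in the dirty field */
        vm_page_xunbusy(m);
}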
972 * Release a reference to a page and return the old reference count.
981 * page structure are visible before it is freed.
986 ("vm_page_drop: page %p has an invalid refcount value", m));
993 * Perform a racy check to determine whether a reference prevents the page
994 * from being reclaimable. If the page's object is locked, and the page is