Lines Matching full:we

67 * we keep a list of writeable active vnode-backed VM objects for sync op.
68 * we keep a simpleq of vnodes that are currently being sync'd.
131 * => in fact, nothing should be locked so that we can sleep here.
144 /* if we're mapping a BLK device, make sure it is a disk. */
160 * we can bump the reference count, check to see if we need to
165 /* regain vref if we were persisting */
185 * block (due to I/O), so we want to unlock the object before calling.
186 * however, we want to keep anyone else from playing with the object
187 * while it is unlocked. to do this we set UVM_VNODE_ALOCK which
188 * prevents anyone from attaching to the vnode until we are done with
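The ALOCK dance described in the fragments above (unlock the object for a blocking operation, but set a flag so no one can attach meanwhile) can be sketched as a user-space toy. The `toy_` names and the flag value are invented for illustration; the real `UVM_VNODE_ALOCK` protocol sleeps on the flag rather than failing.

```c
#include <assert.h>

/* hypothetical flag value; the real UVM_VNODE_ALOCK lives in the kernel headers */
#define TOY_VNODE_ALOCK 0x01

struct toy_uvn {
	int u_flags;
	int u_obj_refs;
};

/*
 * toy_attach: mimic the rule that no one may attach to the vnode
 * while ALOCK is held.  Returns 1 on success, 0 if the caller would
 * have to wait and retry (the kernel tsleeps on the flag instead).
 */
int
toy_attach(struct toy_uvn *uvn)
{
	if (uvn->u_flags & TOY_VNODE_ALOCK)
		return 0;	/* object is mid-operation: wait */
	uvn->u_obj_refs++;
	return 1;
}

/* set ALOCK before unlocking the object for a blocking (I/O) operation */
void
toy_block_attaches(struct toy_uvn *uvn)
{
	uvn->u_flags |= TOY_VNODE_ALOCK;
}

/* clear ALOCK once the blocking operation is done */
void
toy_allow_attaches(struct toy_uvn *uvn)
{
	uvn->u_flags &= ~TOY_VNODE_ALOCK;
}
```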
196 * We could implement this as a specfs getattr call, but:
201 * (2) All we want is the size, anyhow.
245 * reference count goes to zero [and we either free or persist].
249 /* if write access, we need to add it to the wlist */
251 uvn->u_flags |= UVM_VNODE_WRITEABLE; /* we are on wlist! */
328 * we just dropped the last reference to the uvn. see if we can
341 * even though we may unlock in flush, no one can gain a reference
342 * to us until we clear the "dying" flag [because it blocks
343 * attaches]. we will not do that until after we've disposed of all
355 * middle of an async io. If we still have pages we set the "relkill"
356 * state, so that in the case the vnode gets terminated we know
357 * to leave it alone. Otherwise we'll kill the vnode when it's empty.
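The fragments above describe three outcomes when the last reference is dropped: free the object, persist it for a future attach, or, when pages remain in the middle of async I/O, enter the "relkill" state and kill it once empty. A minimal sketch of that decision, with invented names:

```c
#include <assert.h>

enum toy_fate { TOY_FREE, TOY_PERSIST, TOY_RELKILL };

/*
 * toy_last_ref: decide what to do when the reference count hits zero.
 * persist keeps the object around for a future attach; relkill defers
 * the kill until outstanding async i/o drains the remaining pages.
 * All names here are illustrative, not the kernel's.
 */
enum toy_fate
toy_last_ref(int persisting, int npages, int async_io)
{
	if (persisting)
		return TOY_PERSIST;
	if (npages > 0 && async_io)
		return TOY_RELKILL;	/* kill the object when it's empty */
	return TOY_FREE;
}
```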
374 * kill object now. note that we can't be on the sync q because
442 * debug check: are we yanking the vnode out from under our uvn?
453 * we take over the vnode now and cancel the relkill.
454 * we want to know when the i/o is done so we can recycle right
465 * also, note that we tell I/O that we are already VOP_LOCK'd so
469 * due to a forceful unmount might not be a good idea. maybe we
478 * as we just did a flush we expect all the pages to be gone or in
492 * XXXCDC: this is unlikely to happen without async i/o so we
502 * done. now we free the uvn if its reference count is zero
503 * (true if we are zapping a persisting uvn). however, if we are
504 * terminating a uvn with active mappings we let it live ... future
518 * gone [it is dropped when we enter the persist state].
536 * NOTE: currently we have to use VOP_READ/VOP_WRITE because they go
540 * block]. we will eventually want to change this.
547 * i/o and returns VM_PAGER_PEND. when the i/o is done, we expect
574 * => if PGO_CLEANIT is set, we may block (due to I/O). thus, a caller
577 * => if PGO_CLEANIT is not set, then we will not block
580 * => NOTE: we are allowed to lock the page queues, so the caller
583 * => we return TRUE unless we encountered some sort of I/O error
593 * cleaning the page for us (how nice!). in this case, if we
594 * have syncio specified, then after we make our pass through the
595 * object we need to wait for the other PG_BUSY pages to clear
596 * off (i.e. we need to do an iosync). also note that once a
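The flush contract in the fragments above (a flush may block only when PGO_CLEANIT is set, and a synchronous clean must finish with an iosync to wait out other PG_BUSY pages) can be condensed into two predicates. The `TOY_` flag values are stand-ins, not the kernel's real `PGO_*` definitions:

```c
#include <assert.h>

/* hypothetical flag values mirroring the PGO_* names in the fragments */
#define TOY_PGO_CLEANIT 0x01
#define TOY_PGO_SYNCIO  0x02

/* a flush may block (due to i/o) only when cleaning was requested */
int
toy_flush_may_block(int flags)
{
	return (flags & TOY_PGO_CLEANIT) != 0;
}

/*
 * after the pass through the object, an iosync (waiting for other
 * PG_BUSY pages to clear off) is needed only for a synchronous
 * cleaning flush.
 */
int
toy_needs_iosync(int flags)
{
	return (flags & (TOY_PGO_CLEANIT | TOY_PGO_SYNCIO)) ==
	    (TOY_PGO_CLEANIT | TOY_PGO_SYNCIO);
}
```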
613 /* get init vals and determine how we are going to traverse object */
628 * anything. we clear all PG_CLEANCHK bits here, and pgo_mk_pcluster
629 * will set them as it syncs PG_CLEAN. This is only an issue if we
650 * handle case where we do not need to clean page (either
651 * because we are not cleaning or because page is not dirty or
654 * NOTE: we are allowed to deactivate a non-wired active
667 * freeing: nuke all mappings so we can sync
682 /* if we don't need a clean, deactivate/free pages then cont. */
708 * pp points to a page in the object that we are
709 * working on. if it is !PG_CLEAN,!PG_BUSY and we asked
710 * for cleaning (PGO_CLEANIT). we clean it now.
718 /* if we're async, free the page in aiodoned */
729 * if we did an async I/O it is remotely possible for the
731 * not before we get a chance to relock the object. Therefore,
732 * we only touch it when it won't be freed, RELEASED took care
739 * can only happen when we are doing async I/O and can't
741 * of vm space. if this happens we drop back to sync I/O.
746 * we ignore this now and retry the I/O.
747 * we will detect and
770 * for pending async i/o if we are not deactivating
771 * we can move on to the next page. aiodoned deals with
779 * we gotta do.
864 * we are about to do I/O in an object at offset. this function is called
865 * to establish a range of offsets around "offset" in which we can cluster
882 * XXX: old pager claims we could use VOP_BMAP to get maxcontig value.
895 * => XXX: currently we use VOP_READ/VOP_WRITE which are only sync.
896 * [thus we never do async i/o! see iodone comment]
907 * Unless we're recycling this vnode, grab a reference to it
909 * This also makes sure we don't panic if we end up in
911 * function assumes we hold a reference.
916 * the vnode, so simply return VM_PAGER_AGAIN such that we
958 * gotpages is the current number of pages we've gotten (which
959 * we pass back up to caller via *npagesp.
972 /* do we care about this page? if not, skip it */
1000 * XXX: given the "advice", should we consider async read-ahead?
1008 * XXX: so we don't do anything now.
1012 * step 1c: now we've either done everything needed or we need to
1023 * XXX: because we can't do async I/O at this level we get things
1024 * page at a time (otherwise we'd chunk). the VOP_READ() will do
1030 /* skip over pages we've already gotten or don't want */
1031 /* skip over pages we don't _have_ to get */
1037 * we have yet to locate the current page (pps[lcv]). we first
1039 * we find a page, we check to see if it is busy or released.
1040 * if that is the case, then we sleep on the page until it is
1042 * page we found is neither busy nor released, then we busy it
1043 * (so we own it) and plug it into pps[lcv]. this breaks the
1044 * following while loop and indicates we are ready to move on
1047 * if we exit the while loop with pps[lcv] still set to NULL,
1048 * then it means that we allocated a new busy/fake/clean page
1049 * ptmp in the object and we need to do I/O to fill in the data.
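The lookup loop described above (find the page; sleep if busy or released; otherwise busy it and take ownership; if no page exists, allocate a busy/fake/clean one and do I/O to fill it) can be mimicked with a flat array standing in for the object's page list. Everything here is a toy: the kernel sleeps on busy pages where this sketch simply pretends the owner finished.

```c
#include <assert.h>
#include <stddef.h>

#define TOY_NPAGES 8

struct toy_page {
	int present;
	int busy;	/* PG_BUSY analogue */
	int fake;	/* freshly allocated, needs i/o to fill in data */
};

/*
 * toy_get_page: the lookup loop sketched in the fragments above.
 * Returns the page once it is present and un-busy, allocating a
 * fake/busy/clean page when none exists (the caller must then do
 * i/o to fill it); the kernel sleeps on busy pages where we just
 * retry the loop.
 */
struct toy_page *
toy_get_page(struct toy_page obj[TOY_NPAGES], int idx)
{
	struct toy_page *pp = NULL;

	while (pp == NULL) {
		struct toy_page *ptmp = &obj[idx];

		if (!ptmp->present) {
			/* allocate a new busy/fake/clean page */
			ptmp->present = 1;
			ptmp->busy = 1;
			ptmp->fake = 1;
			pp = ptmp;	/* caller fills it via i/o */
		} else if (ptmp->busy) {
			/*
			 * the kernel would sleep on the page here; the
			 * toy just pretends the owner cleared PG_BUSY
			 */
			ptmp->busy = 0;
		} else {
			ptmp->busy = 1;	/* busy it, so we own it */
			pp = ptmp;
		}
	}
	return pp;
}
```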
1055 /* nope? allocate one now (if we can) */
1075 /* page is there, see if we need to wait on it */
1083 * if we get here then the page has become resident
1084 * and unbusy between steps 1 and 2. we busy it
1085 * now (so we own it) and set pps[lcv] (so that we
1094 * if we own a valid page at the correct offset, pps[lcv]
1102 * we have a "fake/busy/clean" page that we just allocated. do
1109 * I/O done. because we used syncio the result can not be
1127 * we got the page! clear the fake flag (indicates valid
1154 * => XXX: currently we use VOP_READ/VOP_WRITE which are only sync.
1155 * [thus we never do async i/o! see iodone comment]
1205 * ok, now bump u_nio up. at this point we are done with uvn
1206 * and can unlock it. if we still don't have a kva, try again
1209 uvn->u_nio++; /* we have an I/O in progress! */
1219 * get touched (so we can look at "offset" without having to lock
1237 * This process may already have the NET_LOCK(), if we
1247 * This process may already have this vnode locked, if we faulted in
1323 * is gone we will kill the object (flushing dirty pages back to the vnode
1327 * one and we killed it [i.e. if there is no active uvn]
1328 * => called with the vnode VOP_LOCK'd [we will unlock it for I/O, if
1331 * => XXX: given that we now kill uvn's when a vnode is recycled (without
1336 * => XXX: this function should DIE once we merge the VM and buffer
1342 * ext2fs_write, WRITE [ufs_readwrite], msdosfs_write: called when we
1346 * ffs_realloccg: when we can't extend the current block and have
1347 * to allocate a new one we call this [XXX: why?]
1349 * and we want to remove it
1350 * nfsrv_remove, sys_unlink: called on file we are removing
1351 * nfsrv_access: if VTEXT and we want WRITE access and we don't uncache
1365 /* lock uvn part of the vnode and check if we need to do anything */
1375 * we have a valid, non-blocked uvn. clear persist flag.
1376 * if uvn is currently active we can return now.
1385 * uvn is currently persisting! we have to gain a reference to
1386 * it so that we can call uvn_detach to kill the uvn.
1395 * be VOP_LOCK'd, and we confirm it here.
1402 * now drop our reference to the vnode. if we have the sole
1403 * reference to the vnode then this will cause it to die [as we
1404 * just cleared the persist flag]. we have to unlock the vnode
1405 * while we are doing this as it may trigger I/O.
1407 * XXX: it might be possible for uvn to get reclaimed while we are
1408 * unlocked causing us to return TRUE when we should not. we ignore
1424 * => we assume that the caller has a reference of some sort to the
1452 * now check if the size has changed: if we shrink we had better
1470 * structure only has one queue for sync'ing). we ensure this
1473 * until we are done.
1482 * step 1: ensure we are only ones using the uvn_sync_q by locking
1489 * write list. we gain a reference to uvns of interest.
1500 * we can safely skip it.
1502 * note that uvn must already be valid because we found it on
1519 /* step 3: we now have a list of uvn's that may need cleaning. */
1530 * if we have the only reference and we just cleaned the uvn,
1531 * then we can pull it out of the UVM_VNODE_WRITEABLE state
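The last two fragments describe the tail of the sync pass: after cleaning a uvn, if we hold the only reference, it can be pulled out of the UVM_VNODE_WRITEABLE state and off the writeable list. A hedged one-function sketch, with an invented flag value:

```c
#include <assert.h>

/* hypothetical flag value mirroring UVM_VNODE_WRITEABLE */
#define TOY_VNODE_WRITEABLE 0x01

/*
 * toy_sync_one: after flushing a uvn during the sync pass, drop it
 * from the writeable list when we hold the only reference and the
 * flush left it clean -- nothing can dirty it again until it is
 * re-mapped for writing.  Returns 1 if removed from the wlist.
 */
int
toy_sync_one(int *u_flags, int refs, int cleaned)
{
	if (refs == 1 && cleaned) {
		*u_flags &= ~TOY_VNODE_WRITEABLE;
		return 1;
	}
	return 0;
}
```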