Lines Matching full:we
41 * flag is set to active when we create the checkpoint and remains active
44 * references the state of the pool when we take the checkpoint. The entry
45 * remains populated until we start discarding the checkpoint or we rewind
51 * but we want to keep around in case we decide to rewind to the checkpoint.
55 * checkpoint, with the only exception being the scenario when we free
65 * - To create a checkpoint, we first wait for the current TXG to be synced,
66 * so we can use the most recently synced uberblock (spa_ubsync) as the
67 * checkpointed uberblock. Then we use an early synctask to place that
70 * to the TXG of the checkpointed uberblock. We use an early synctask for
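The creation steps above can be modeled with a small, purely illustrative sketch. The class, field, and method names below are hypothetical stand-ins (not the real `spa_t` layout); the point is the ordering: the checkpointed uberblock is the most recently *synced* one (spa_ubsync), captured by an early synctask before anything else in the new txg runs.

```python
# Hypothetical model of checkpoint creation; names are illustrative,
# not the actual ZFS structures.
class Pool:
    def __init__(self):
        self.synced_txg = 0       # txg of the last synced uberblock (models spa_ubsync)
        self.checkpoint_txg = 0   # 0 means "no checkpoint exists"

    def sync(self):
        """Sync out one txg; the just-synced uberblock becomes spa_ubsync."""
        self.synced_txg += 1

    def checkpoint(self):
        """Early synctask: record the most recently *synced* uberblock as
        the checkpoint, before anything else dirties the syncing txg."""
        self.checkpoint_txg = self.synced_txg

pool = Pool()
for _ in range(5):
    pool.sync()
pool.checkpoint()
print(pool.checkpoint_txg)  # 5
```

A later sync advances `synced_txg` but leaves `checkpoint_txg` frozen, which is exactly why the checkpoint keeps referencing an older pool state.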
75 * - When a checkpoint exists, we need to ensure that the blocks that
81 * Whenever a block is freed and we find out that it is referenced by the
82 * checkpoint (we find out by comparing its birth to spa_checkpoint_txg),
83 * we place it in the ms_checkpointing tree instead of the ms_freeingtree.
84 * This way, we divide the blocks that are being freed into checkpointed
87 * In order to persist these frees, we write the extents from the
92 * when we discard the checkpoint, we can find the entries that have
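The free-path split described in these lines can be sketched as a simple predicate. The function name and string return values below are hypothetical (the real code manipulates metaslab range trees); the comparison of a block's birth txg against the checkpoint txg is the documented rule.

```python
def free_destination(birth_txg, checkpoint_txg):
    # A block born at or before the checkpoint txg existed when the
    # checkpoint was taken, so the checkpoint may still reference it:
    # its free is tracked separately (ms_checkpointing) instead of
    # being returned to allocatable space (ms_freeing).
    if checkpoint_txg != 0 and birth_txg <= checkpoint_txg:
        return "ms_checkpointing"
    return "ms_freeing"

print(free_destination(birth_txg=90, checkpoint_txg=100))   # ms_checkpointing
print(free_destination(birth_txg=150, checkpoint_txg=100))  # ms_freeing
print(free_destination(birth_txg=90, checkpoint_txg=0))     # ms_freeing (no checkpoint)
```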
96 * - To discard the checkpoint we use an early synctask to delete the
99 * We use an early synctask to ensure that the operation happens before any
102 * Once the synctask is done and the discarding zthr is awake, we discard
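The discard flow above is two-phase, and that shape can be sketched as follows (a hedged model with hypothetical names, not the real synctask/zthr machinery): phase one runs synchronously and removes the checkpoint reference, so the pool can no longer rewind; phase two then frees the retained blocks incrementally in the background, a bounded amount per pass.

```python
# Hedged two-phase sketch of checkpoint discard; names are hypothetical.
def discard_synctask(pool):
    # Phase 1 (early synctask): delete the checkpoint reference.
    pool["checkpoint_txg"] = 0

def discard_zthr_pass(pool, batch):
    # Phase 2 (background thread): free a bounded batch of the blocks
    # that were retained only for the checkpoint's sake.
    freed = pool["retained"][:batch]
    del pool["retained"][:batch]
    pool["freed"] += len(freed)
    return len(pool["retained"]) > 0  # True while more work remains

pool = {"checkpoint_txg": 40, "retained": list(range(10)), "freed": 0}
discard_synctask(pool)
while discard_zthr_pass(pool, batch=4):
    pass
print(pool["checkpoint_txg"], pool["freed"])  # 0 10
```

Splitting the work this way keeps each txg's sync short while still making the "checkpoint is gone" decision atomic and crash-safe.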
112 * - To rewind to the checkpoint, we first use the current uberblock and
113 * open the MOS so we can access the checkpointed uberblock from the MOS
114 * config. After we retrieve the checkpointed uberblock, we use it as the
119 * An important note on rewinding to the checkpoint has to do with how we
120 * handle ZIL blocks. In the scenario of a rewind, we clear out any ZIL
121 * blocks that have not been claimed by the time we took the checkpoint
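A heavily hedged sketch of the rewind sequence described above (the dict stands in for the MOS config, and all field names are illustrative): the current uberblock is used only to reach the MOS, the checkpointed uberblock found there becomes the active one, and ZIL blocks born after the checkpoint are dropped rather than replayed, since they were not claimed when the checkpoint was taken.

```python
def rewind(mos_config, zil_blocks):
    """Hypothetical model: adopt the checkpointed uberblock from the MOS
    config and keep only ZIL blocks valid in the rewound state."""
    ub = mos_config["checkpointed_uberblock"]      # e.g. {"txg": 40}
    valid_zil = [b for b in zil_blocks if b["birth_txg"] <= ub["txg"]]
    return ub, valid_zil

ub, zil = rewind(
    {"checkpointed_uberblock": {"txg": 40}},
    [{"birth_txg": 35}, {"birth_txg": 45}],
)
print(ub["txg"], zil)  # 40 [{'birth_txg': 35}]
```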
127 * - In the hypothetical event that we take a checkpoint, remove a vdev,
130 * and others of similar nature, we disallow the following operations that
218 * Since the space map is not condensed, we know that
224 * metaslab boundaries. So if needed we could add code
232 * At this point we should not be processing any
234 * unnecessary. We use the lock anyway though to
289 * number of non-debug entries, we want to ensure that we only
290 * read what we prefetched from open-context.
292 * Thus, we set the maximum entries that the space map callback
296 * Note that since this is a conservative estimate we also
309 * 1] We reached the beginning of the space map. At this point
316 * 2] We reached the memory limit (amount of memory used to hold
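The two stopping conditions above (beginning of the space map reached, or memory budget exhausted) can be sketched as one bounded reverse-traversal pass. The function and parameter names are hypothetical; the real code walks on-disk space map entries, not a Python list.

```python
def next_discard_batch(entries, cursor, mem_limit, entry_size=8):
    """Hedged sketch of one traversal pass: walk entries in reverse from
    `cursor` (one past the last unprocessed entry), stopping when we
    reach the beginning of the map (case 1) or exhaust the memory
    budget for buffered entries (case 2)."""
    batch, used = [], 0
    while cursor > 0 and used + entry_size <= mem_limit:
        cursor -= 1
        batch.append(entries[cursor])
        used += entry_size
    return batch, cursor   # cursor == 0 means the map is fully consumed

entries = list(range(10))
batch, cur = next_discard_batch(entries, cursor=10, mem_limit=32)
print(batch, cur)  # [9, 8, 7, 6] 6
```

Reading backwards means the unprocessed prefix of the map stays valid as-is, so the pass can simply resume from the saved cursor after each txg.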
499 * (we use spa_ubsync), its txg must be equal to the txg number of
500 * the txg we are syncing, minus 1.
505 * Once the checkpoint is in place, we need to ensure that none of
507 * When there is a checkpoint and a block is freed, we compare its
509 * block is part of the checkpoint or not. Therefore, we have to set
525 * Note that the feature will be deactivated when we've
552 * uberblock (spa_ubsync) has all the changes that we expect
553 * to see if we were to revert later to the checkpoint. In other
554 * words we want the checkpointed uberblock to include/reference
555 * all the changes that were pending at the time that we issued
562 * txg (spa_ubsync) we want to ensure that we are not freeing any of
564 * run. Thus, we run it as an early synctask, so the dirty changes
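The ordering argument in these lines (checkpointed uberblock txg equals the syncing txg minus one, and the early synctask sets the checkpoint txg before any free in that txg is classified) can be combined into one small model. All names are hypothetical; this is a sketch of the ordering, not the real sync pipeline.

```python
def sync_txg(pool, syncing_txg, frees):
    """Hedged model of one txg sync. Early synctasks run first, so
    checkpoint_txg is already set by the time any free is classified."""
    if pool.get("pending_checkpoint"):
        # Invariant from the text: the checkpointed uberblock is the one
        # that just synced, so its txg is the syncing txg minus 1.
        assert pool["ubsync_txg"] == syncing_txg - 1
        pool["checkpoint_txg"] = pool["ubsync_txg"]
        pool["pending_checkpoint"] = False
    routed = []
    for birth in frees:
        if pool["checkpoint_txg"] and birth <= pool["checkpoint_txg"]:
            routed.append("ms_checkpointing")
        else:
            routed.append("ms_freeing")
    return routed

pool = {"ubsync_txg": 99, "pending_checkpoint": True, "checkpoint_txg": 0}
print(sync_txg(pool, syncing_txg=100, frees=[50, 120]))
# ['ms_checkpointing', 'ms_freeing']
```

If the checkpoint task ran as a regular (late) synctask instead, frees earlier in the same txg would be classified with `checkpoint_txg` still zero and would bypass the checkpoint accounting, which is the failure mode the early synctask prevents.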
619 * Similarly to spa_checkpoint(), we want our synctask to run