Lines Matching defs:dirty
62 * basis, ZFS tracks the amount of modified (dirty) data. As operations change
63 * data, the amount of dirty data increases; as ZFS syncs out data, the amount
64 * of dirty data decreases. When the amount of dirty data exceeds a
66 * of dirty data decreases (as data is synced out).
68 * The limit on dirty data is tunable, and should be adjusted according to
71 * changes. However, memory is a limited resource, and allowing for more dirty
79 * dirty space used; dsl_pool_dirty_space() decrements those values as data
82 * zfs_dirty_data_max determines the dirty space limit. Once that value is
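The bookkeeping described above (increment counters as operations dirty data, decrement them as data is synced out, and compare the total against zfs_dirty_data_max) can be sketched as follows. All names here (pool_dirty_t and friends) are illustrative, not the actual ZFS symbols; the real accounting lives in dsl_pool_dirty_space() and dsl_pool_undirty_space().

```c
#include <assert.h>
#include <stdint.h>

#define TXG_SIZE 4
#define TXG_MASK (TXG_SIZE - 1)

/* Hypothetical per-pool dirty-space state, modeled on the description above. */
typedef struct pool_dirty {
	uint64_t dirty_total;			/* dirty bytes across all txgs */
	uint64_t dirty_pertxg[TXG_SIZE];	/* dirty bytes per open txg */
	uint64_t dirty_max;			/* limit (zfs_dirty_data_max) */
} pool_dirty_t;

/* Called as operations modify data: increment the counters. */
static void
pool_dirty_space(pool_dirty_t *pd, uint64_t space, uint64_t txg)
{
	pd->dirty_pertxg[txg & TXG_MASK] += space;
	pd->dirty_total += space;
}

/* Called as data is synced out: decrement the counters. */
static void
pool_undirty_space(pool_dirty_t *pd, uint64_t space, uint64_t txg)
{
	assert(pd->dirty_pertxg[txg & TXG_MASK] >= space);
	pd->dirty_pertxg[txg & TXG_MASK] -= space;
	pd->dirty_total -= space;
}

/* Over the limit: further modifications must wait for a sync. */
static int
pool_over_dirty_limit(const pool_dirty_t *pd)
{
	return (pd->dirty_total > pd->dirty_max);
}
```

Indexing the per-txg array with `txg & TXG_MASK` mirrors how ZFS reuses a small ring of slots for the handful of transaction groups in flight at once.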
89 * The IO scheduler uses both the dirty space limit and current amount of
90 * dirty data as inputs. Those values affect the number of concurrent IOs ZFS
93 * The delay is also calculated based on the amount of dirty data. See the
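One plausible shape for the scheduler behavior sketched above is a linear ramp: issue a minimum number of concurrent IOs while dirty data is low, and scale up to a maximum as the fraction of the dirty limit in use grows. This is only an illustration of the idea; the thresholds, parameter names, and actual logic belong to ZFS's vdev_queue.c, not to this sketch.

```c
#include <stdint.h>

/*
 * Hypothetical sketch: interpolate the number of in-flight write IOs
 * between min_ios and max_ios as dirty data moves between lo_pct and
 * hi_pct of the dirty limit (e.g. 30% and 60%).
 */
static uint32_t
write_ios_for_dirty(uint64_t dirty, uint64_t dirty_max,
    uint32_t min_ios, uint32_t max_ios,
    uint64_t lo_pct, uint64_t hi_pct)
{
	uint64_t pct = dirty * 100 / dirty_max;

	if (pct <= lo_pct)
		return (min_ios);
	if (pct >= hi_pct)
		return (max_ios);
	/* Linear ramp between the two thresholds. */
	return (min_ios +
	    (uint32_t)((max_ios - min_ios) * (pct - lo_pct) /
	    (hi_pct - lo_pct)));
}
```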
115 * If there's at least this much dirty data (as a percentage of
122 * Once there is this much dirty data, dmu_tx_delay() will kick in
130 * Larger values cause it to delay more for a given amount of dirty data.
131 * Therefore larger values will cause there to be less dirty data for a
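The two tunables above combine into a delay curve with the properties described: no delay until dirty data passes the minimum percentage, then a delay that grows without bound as dirty data approaches the limit, with the scale value stretching the curve. A simplified sketch, assuming a hyperbolic shape like the one documented above dmu_tx_delay() (names and units here are illustrative):

```c
#include <stdint.h>

/*
 * Sketch of a per-transaction delay: zero below the threshold, growing
 * hyperbolically as dirty approaches dirty_max. scale plays the role
 * of zfs_delay_scale; min_pct plays the role of
 * zfs_delay_min_dirty_percent.
 */
static uint64_t
tx_delay_ns(uint64_t dirty, uint64_t dirty_max,
    uint64_t min_pct, uint64_t scale)
{
	uint64_t delay_min = dirty_max * min_pct / 100;

	if (dirty <= delay_min)
		return (0);
	if (dirty >= dirty_max)
		return (UINT64_MAX);	/* at the limit, callers block */
	/* Larger scale => larger delay for the same amount of dirty data. */
	return (scale * (dirty - delay_min) / (dirty_max - dirty));
}
```

Because the denominator shrinks as dirty data approaches dirty_max, the delay rises steeply near the limit, which is what pushes the system toward less dirty data for a given incoming write rate.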
621 /* Choose a value slightly bigger than min dirty sync bytes */
678 zio_t *rio; /* root zio for all dirty dataset syncs */
691 * Run all early sync tasks before writing out any dirty blocks.
708 * Write out all dirty blocks of dirty datasets. Note, this could
821 * We have written all of the accounted dirty data, so our
826 * Note that, besides any dirty data from datasets, the amount of
827 * dirty data in the MOS is also accounted by the pool. Therefore,
829 * attempt to update the accounting for the same dirty data twice.
837 * its dsl_dir's dd_dbuf will be dirty, and thus have a hold on it.
984 uint64_t dirty = dp->dp_dirty_pertxg[txg & TXG_MASK];
986 return (dirty > dirty_min_bytes);
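The two code lines above test a txg's dirty data against a minimum, so that nearly-empty transaction groups are not pushed out needlessly. A minimal self-contained sketch of that predicate, with illustrative names standing in for the dsl_pool fields:

```c
#include <stdint.h>

#define TXG_SIZE 4
#define TXG_MASK (TXG_SIZE - 1)

/* Hypothetical stand-in for the relevant slice of dsl_pool_t. */
typedef struct pool {
	uint64_t dp_dirty_pertxg[TXG_SIZE];
} pool_t;

/*
 * Returns nonzero if the given txg carries enough dirty data to be
 * worth syncing early (cf. the dirty_min_bytes check above).
 */
static int
pool_need_dirty_sync(const pool_t *dp, uint64_t txg,
    uint64_t dirty_min_bytes)
{
	uint64_t dirty = dp->dp_dirty_pertxg[txg & TXG_MASK];

	return (dirty > dirty_min_bytes);
}
```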
1014 /* XXX writing something we didn't dirty? */
1463 "Max percent of RAM allowed to be dirty");
1473 "Determines the dirty space limit");