Lines Matching +full:trim +full:- +full:data +full:- +full:valid

9 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
31 .Bl -tag -width Ds
88 This assumes redundancy for this data is provided by the vdev layer.
102 Turbo L2ARC warm-up.
138 Controls whether only MFU metadata and data are cached from ARC into L2ARC.
140 amounts of data that are not expected to be accessed more than once.
143 meaning both MRU and MFU data and metadata are cached.
151 Setting it to 1 means to L2 cache only MFU data and metadata.
154 only MFU data (i.e., MRU data are not cached). This can be the right setting
155 to cache as much metadata as possible even with high data turnover.
179 Percent of ARC size allowed for L2ARC-only headers.
191 we TRIM twice the space required to accommodate upcoming writes.
195 It also enables TRIM of the whole L2ARC device upon creation
200 disables TRIM on L2ARC altogether and is the default, as it can put significant stress on the underlying storage devices.
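Reading the description above as a percentage applied on top of the upcoming write size (so that a value of 100 trims twice that space), the arithmetic can be sketched as follows; the helper is made up for illustration and is not the in-kernel implementation:
.Bd -literal
def l2arc_trim_bytes(upcoming_write_bytes, trim_ahead_pct):
    """Bytes to TRIM ahead of the L2ARC write hand; 0 disables trimming."""
    if trim_ahead_pct == 0:
        return 0
    return upcoming_write_bytes * (100 + trim_ahead_pct) // 100

# With a value of 100, twice the space required for the upcoming write is trimmed.
assert l2arc_trim_bytes(8 << 20, 100) == 16 << 20
.Ed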
242 For L2ARC devices less than 1 GiB, the amount of data
244 evicts is significant compared to the amount of restored L2ARC data.
251 In normal operation, ZFS will try to write this amount of data to each disk
252 before moving on to the next top-level vdev.
255 Enable metaslab group biasing based on their vdevs' over- or under-utilization
271 Default BRT ZAP data block size as a power of 2. Note that changing this after
281 Default DDT ZAP data block size as a power of 2. Note that changing this after
305 When attempting to log an output nvlist of an ioctl in the on-disk history,
310 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
316 Enable/disable segment-based metaslab selection.
319 When using segment-based metaslab selection, continue allocating
338 becomes the performance limiting factor on high-performance storage.
382 .Bl -item -compact
390 If that fails then we will have a multi-layer gang block.
393 .Bl -item -compact
403 If that fails then we will have a multi-layer gang block.
413 When a vdev is added, target this number of metaslabs per top-level vdev.
422 Maximum ashift used when optimizing for logical \[->] physical sector size on
424 top-level vdevs.
430 If non-zero, then a Direct I/O write's checksum will be verified every
432 In the event the checksum is not valid, the I/O operation will return EIO.
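A minimal user-space sketch of reacting to that EIO is shown below; the file path is hypothetical, the buffer is page-aligned via an anonymous mmap (Direct I/O requires aligned buffers), and simply retrying the write is only one reasonable strategy:
.Bd -literal
import errno, mmap, os

path = "/tank/fs/datafile"        # hypothetical file on a ZFS dataset
buf = mmap.mmap(-1, 4096)         # anonymous mapping: page-aligned buffer
buf.write(bytes(4096))

fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DIRECT, 0o644)
try:
    os.pwrite(fd, buf, 0)
except OSError as e:
    if e.errno != errno.EIO:
        raise
    # The Direct I/O checksum did not verify (e.g. the buffer was
    # modified while the write was in flight); retry the write.
    os.pwrite(fd, buf, 0)
finally:
    os.close(fd)
.Ed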
450 Minimum ashift used when creating new top-level vdevs.
453 Minimum number of metaslabs to create in a top-level vdev.
461 Practical upper limit of total metaslabs per top-level vdev.
495 Max amount of memory to use for RAID-Z expansion I/O.
499 For testing, pause RAID-Z expansion when reflow amount reaches this value.
502 For expanded RAID-Z, aggregate reads that have more rows than this.
526 size of data being written.
528 but lower values may be valid for a given pool depending on its configuration.
538 Whether to traverse data blocks during an "extreme rewind"
544 If this parameter is unset, the traversal skips non-metadata blocks.
546 import has started to stop or start the traversal of non-metadata blocks.
568 It also limits the worst-case time to allocate space.
588 Limits the number of on-disk error log entries that will be converted to the
595 During top-level vdev removal, chunks of data are copied from the vdev
598 which will be included as "unnecessary" data in a chunk of copied data.
606 Logical ashift for file-based devices.
609 Physical ashift for file-based devices.
664 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
673 This is the minimum allocation size that will use scatter (page-based) ABDs.
678 bytes, try to unpin some of it in response to demand for non-metadata.
692 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
700 with 8-byte pointers.
708 waits for this percent of the requested amount of data to be evicted.
723 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
724 This batch-style operation prevents entire sub-lists from being evicted at once
750 .Sy all_system_memory No \- Sy 1 GiB
764 Balance between metadata and data on ghost hits.
766 of ghost data hits on target data/metadata rate.
792 Number of missing top-level vdevs which will be allowed during
793 pool import (only in read-only mode).
805 .Pa zfs-dbgmsg
810 equivalent to a quarter of the user-wired memory limit under
817 To allow more fine-grained locking, each ARC state contains a series
818 of lists for both data and metadata objects.
819 Locking is performed at the level of these "sub-lists".
820 This parameter controls the number of sub-lists per ARC state,
821 and also applies to other uses of the multilist data structure.
935 bytes on-disk.
940 Only attempt to condense indirect vdev mappings if the on-disk size
996 Valid values are:
997 .Bl -tag -compact -offset 4n -width "continue"
1003 Attempt to recover from a "hung" operation by re-dispatching it
1007 This can be used to facilitate automatic fail-over
1008 to a properly configured fail-over partner.
1027 Enable prefetching of deduplicated blocks which are going to be freed.
1094 OpenZFS will spend no more than this much memory on maintaining the in-memory
1112 Start to delay each transaction once there is this amount of dirty data,
1121 Larger values cause longer delays for a given amount of dirty data.
1137 OpenZFS pre-release versions and now have compatibility issues.
1148 are not created per-object and instead a hashtable is used where collisions
1157 Upper-bound limit for unflushed metadata changes to be held by the
1175 This tunable is important because it involves a trade-off between import
1204 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1252 for 32-bit systems.
1278 Start syncing out a transaction group if there's at least this much dirty data
1284 The upper limit of write-transaction ZIL log data size in bytes.
1285 Write operations are throttled when approaching the limit until log data is
1296 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1329 results in the original CPU-based calculation being used.
1342 When set to 1, the FICLONE and FICLONERANGE ioctls wait for dirty data to be
1382 When the pool has more than this much dirty data, use
1385 If the dirty data is between the minimum and maximum,
1390 When the pool has less than this much dirty data, use
1393 If the dirty data is between the minimum and maximum,
1478 Maximum trim/discard I/O operations active to each device.
1482 Minimum trim/discard I/O operations active to each device.
1486 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1487 the number of concurrently-active I/O operations is limited to
1495 and the number of concurrently-active non-interactive operations is increased to
1506 To prevent non-interactive I/O, like scrub,
1515 Maximum number of queued allocations per top-level vdev expressed as
1533 The following options may be bitwise-ored together:
1591 The following flags may be bitwise-ored together:
1604 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1607 2048 ZFS_DEBUG_TRIM Verify TRIM ranges are always within the allocatable range tree.
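Since the flag values are powers of two, enabling several checks at once is a bitwise OR of their values; combining the two entries shown above, for example:
.Bd -literal
ZFS_DEBUG_METASLAB_VERIFY = 256
ZFS_DEBUG_TRIM = 2048

# Combined value to write into the zfs_flags tunable to enable both checks:
zfs_flags = ZFS_DEBUG_METASLAB_VERIFY | ZFS_DEBUG_TRIM   # == 2304
.Ed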
1637 from the error-encountering filesystem is "temporarily leaked".
1651 .Bl -enum -compact -offset 4n -width "1."
1654 e.g. due to a top-level vdev going offline), or
1656 have localized, permanent errors (e.g. disk returns the wrong data
1684 Largest data block to write to the ZIL.
1691 .Xr zpool-initialize 8 .
1695 .Xr zpool-initialize 8 .
1699 The threshold size (in block pointers) at which we create a new sub-livelist.
1776 Normally disabled because these datasets may be missing key data.
1811 Setting the threshold to a non-zero percentage will stop allocations
1819 If enabled, ZFS will place DDT data into the special allocation class.
1822 If enabled, ZFS will place user data indirect blocks
1838 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
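Reading the quotient above as the interval between successive multihost writes issued across the pool, a quick worked example (the figures are illustrative):
.Bd -literal
zfs_multihost_interval_ms = 1000   # illustrative setting
leaf_vdevs = 8                     # leaf vdevs in the pool

# One multihost (MMP) write is issued roughly this often:
write_period_ms = zfs_multihost_interval_ms / leaf_vdevs   # 125.0 ms
.Ed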
1889 This results in scrubs not actually scrubbing data and
1898 if a volatile out-of-order write cache is enabled.
1901 Allow no-operation writes.
1907 When enabled, forces ZFS to sync data when
1915 or other data crawling operations.
1918 The number of blocks pointed to by an indirect (non-L0) block which should be
1921 or other data crawling operations.
1947 Disable QAT hardware acceleration for AES-GCM encryption.
1963 top-level vdev.
2063 Determines the order in which data will be verified while scrubbing or resilvering:
2064 .Bl -tag -compact -offset 4n -width "a"
2066 Data will be verified as sequentially as possible, given the
2069 This may improve scrub performance if the pool's data is very fragmented.
2071 The largest mostly-contiguous chunk of found data will be verified first.
2072 By deferring scrubbing of small segments, we may later find adjacent data
2092 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2096 data verification I/O.
2099 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2124 Maximum amount of data that can be concurrently issued at once for scrubs and
2128 Allow sending of corrupt data (ignore read/checksum errors when sending).
2134 Including unmodified copies of the spill blocks creates a backwards-compatible
2137 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2148 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2159 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2172 The maximum amount of data, in bytes, that
2181 When this variable is set to non-zero, a corrective receive:
2182 .Bl -enum -compact -offset 4n -width "1."
2197 Override this value if most data in your dataset is not of that size
2201 Flushing of data to disk is done in passes.
2209 Only allow small data blocks to be allocated on the special and dedup vdev
2224 many blocks' size will change, and thus we have to re-allocate
2241 Maximum size of a TRIM command.
2246 Minimum size of a TRIM command.
2247 TRIM ranges smaller than this will be skipped, unless they're part of a larger range which was chunked.
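The interplay of the two limits can be sketched as below; the helper is hypothetical and only mirrors the behaviour described above (large ranges are chunked to the maximum, ranges below the minimum are skipped, but a small tail of a chunked range is still trimmed):
.Bd -literal
def split_trim_range(offset, length, extent_max, extent_min):
    """Yield (offset, length) TRIM extents no larger than extent_max."""
    if length < extent_min:
        return                     # too small to be worth a TRIM command
    remaining = length
    while remaining > 0:
        chunk = min(remaining, extent_max)
        yield offset, chunk
        offset += chunk
        remaining -= chunk

# A 300 MiB range with a 128 MiB maximum and a 32 MiB minimum becomes
# two 128 MiB TRIMs plus one 44 MiB tail.
MiB = 1 << 20
extents = list(split_trim_range(0, 300 * MiB, 128 * MiB, 32 * MiB))
.Ed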
2253 Skip uninitialized metaslabs during the TRIM process.
2254 This option is useful for pools constructed from large thinly-provisioned
2256 where TRIM operations are slow.
2259 This setting is stored when starting a manual TRIM and will
2260 persist for the duration of the requested TRIM.
2264 The number of concurrent TRIM commands issued to the device is controlled by
2269 before TRIM operations are issued to the device.
2270 This setting represents a trade-off between issuing larger,
2271 more efficient TRIM operations and the delay
2275 This will result in larger TRIM operations and potentially increased memory
2287 Flush dirty data to disk at least once every this many seconds (maximum TXG
2294 Max vdev I/O aggregation size for non-rotating media.
2317 the purpose of selecting the least busy mirror member on non-rotational vdevs
2330 Aggregate read I/O operations if the on-disk gap between them is within this
2334 Aggregate write I/O operations if the on-disk gap between them is within this
2340 Variants that don't depend on CPU-specific features
2353 fastest selected by built-in benchmark
2356 sse2 SSE2 instruction set 64-bit x86
2357 ssse3 SSSE3 instruction set 64-bit x86
2358 avx2 AVX2 instruction set 64-bit x86
2359 avx512f AVX512F instruction set 64-bit x86
2360 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2361 aarch64_neon NEON Aarch64/64-bit ARMv8
2362 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2373 .Xr zpool-events 8 .
2390 The number of taskq entries that are pre-populated when the taskq is first
2415 if a volatile out-of-order write cache is enabled.
2437 Usually, one metaslab from each normal-class vdev is dedicated for use by
2446 Whether the heuristic for detecting incompressible data with zstd levels >= 3,
2447 using LZ4 and zstd-1 passes, is enabled.
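A sketch of the idea behind the heuristic: run the cheap passes first and only spend time on the expensive zstd level when one of them shows the data compresses. The function and its callable arguments are illustrative; the real check lives in the kernel's zstd integration:
.Bd -literal
def worth_high_zstd(block, lz4_compress, zstd_compress):
    """Cheap LZ4 and zstd-1 passes as an incompressibility check."""
    if len(lz4_compress(block)) < len(block):
        return True
    if len(zstd_compress(block, level=1)) < len(block):
        return True
    return False     # looks incompressible: skip the expensive level
.Ed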
2454 If non-zero, the zio deadman will produce debugging messages
2472 When enabled, the maximum number of pending allocations per top-level vdev
2524 generate a system-dependent value close to 6 threads per taskq.
2554 Discard (TRIM) operations done on zvols will be done in batches of this
2577 .Pq Li blk-mq .
2597 .Li blk-mq
2609 .Li blk-mq
2617 .Li blk-mq
2624 .Sy volblocksize Ns -sized blocks per zvol thread.
2632 .Li blk-mq
2637 .Li blk-mq
2640 .Li blk-mq
2654 .Bl -tag -compact -offset 4n -width "a"
2678 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2679 If the sum of the per-queue maxima exceeds the aggregate maximum,
2683 regardless of whether all per-queue minima have been met.
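An offline sanity check of a proposed tuning against the constraint above might look like the following; the queue names and numbers are hypothetical stand-ins for the per-queue zfs_vdev_*_min_active/_max_active tunables and the aggregate zfs_vdev_max_active:
.Bd -literal
# Hypothetical (min_active, max_active) pairs per I/O class:
queues = {
    "sync_read":   (10, 10),
    "sync_write":  (10, 10),
    "async_read":  (1, 3),
    "async_write": (2, 10),
    "scrub":       (1, 3),
    "trim":        (1, 2),
}
zfs_vdev_max_active = 1000          # aggregate limit per vdev

total_min = sum(lo for lo, _hi in queues.values())
assert total_min <= zfs_vdev_max_active, (
    "sum of per-queue minima must not exceed the aggregate maximum")
.Ed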
2720 Asynchronous writes represent the data that is committed to stable storage
2727 according to the amount of dirty data in the pool.
2733 from the async write queue as there is more dirty data in the pool.
2737 follows a piece-wise linear function defined by a few adjustable points:
2738-2751 [diagram: active async-write I/O count versus dirty data as a percentage of zfs_dirty_data_max; the count holds at zfs_vdev_async_write_min_active until zfs_vdev_async_write_active_min_dirty_percent, rises linearly, and levels off at zfs_vdev_async_write_max_active once zfs_vdev_async_write_active_max_dirty_percent is reached]
2754 Until the amount of dirty data exceeds a minimum percentage of the dirty
2755 data allowed in the pool, the I/O scheduler will limit the number of
2759 of the dirty data allowed in the pool.
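A minimal sketch of that ramp, assuming straight-line interpolation between the two dirty-data thresholds (the parameter values below are illustrative):
.Bd -literal
def async_write_active(dirty_bytes, dirty_data_max,
                       min_active=2, max_active=10,
                       min_dirty_pct=30, max_dirty_pct=60):
    """Illustrative ramp of the async-write queue depth with dirty data."""
    pct = 100.0 * dirty_bytes / dirty_data_max
    if pct <= min_dirty_pct:
        return min_active
    if pct >= max_dirty_pct:
        return max_active
    slope = (max_active - min_active) / (max_dirty_pct - min_dirty_pct)
    return min_active + slope * (pct - min_dirty_pct)
.Ed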
2761 Ideally, the amount of dirty data on a busy pool will stay in the sloped
2767 this indicates that the rate of incoming data is
2787 .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
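Treating min as the dirty level at which delays begin, max as zfs_dirty_data_max, and the scale and result as nanoseconds, the formula can be evaluated directly; the concrete numbers below are illustrative:
.Bd -literal
def tx_delay_ns(dirty, dirty_min, dirty_max, delay_scale=500000):
    """Per-transaction delay from the formula above, capped at 100 ms."""
    if dirty <= dirty_min:
        return 0
    return min(delay_scale * (dirty - dirty_min) / (dirty_max - dirty),
               100 * 1000 * 1000)

# With delays starting at 2 GiB of a 4 GiB zfs_dirty_data_max, being
# 3 GiB dirty sits midway between min and max, so the delay equals
# delay_scale: 500000 ns (0.5 ms) per transaction.
GiB = 1 << 30
delay = tx_delay_ns(dirty=3 * GiB, dirty_min=2 * GiB, dirty_max=4 * GiB)
.Ed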
2790 The percentage of dirty data at which we start to delay is defined by
2800-2823 [plot: per-transaction delay (linear scale, up to 10ms) versus dirty data from 0% to 100% of zfs_dirty_data_max; zfs_delay_scale sets the scale of the curve]
2833 was chosen such that small changes in the amount of accumulated dirty data
2839-2862 [plot: the same delay curve drawn up to 100ms, versus dirty data from 0% to 100% of zfs_dirty_data_max; zfs_delay_scale again sets the scale of the curve]
2865 Note here that only as the amount of dirty data approaches its limit does
2867 The goal of a properly tuned system should be to keep the amount of dirty data
2869 for the I/O scheduler to reach optimal throughput on the back-end storage,