Lines Matching +full:ideal +full:- +full:factor +full:- +full:value

9 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
31 .Bl -tag -width Ds
80 The default value minimizes lock contention for the bulk operation performed.
102 Turbo L2ARC warm-up.
128 A value of
179 Percent of ARC size allowed for L2ARC-only headers.
198 A value of
252 before moving on to the next top-level vdev.
255 Enable metaslab group biasing based on their vdevs' over- or under-utilization
305 When attempting to log an output nvlist of an ioctl in the on-disk history,
310 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
316 Enable/disable segment-based metaslab selection.
319 When using segment-based metaslab selection, continue allocating
338 becomes the performance limiting factor on high-performance storage.
382 .Bl -item -compact
390 If that fails then we will have a multi-layer gang block.
393 .Bl -item -compact
403 If that fails then we will have a multi-layer gang block.
413 When a vdev is added, target this number of metaslabs per top-level vdev.
422 Maximum ashift used when optimizing for logical \[->] physical sector size on
424 top-level vdevs.
430 If non-zero, then a Direct I/O write's checksum will be verified every
443 The default value for this is 1 on Linux, but is 0 for
450 Minimum ashift used when creating new top-level vdevs.
453 Minimum number of metaslabs to create in a top-level vdev.
461 Practical upper limit of total metaslabs per top-level vdev.
495 Max amount of memory to use for RAID-Z expansion I/O.
499 For testing, pause RAID-Z expansion when reflow amount reaches this value.
502 For expanded RAID-Z, aggregate reads that have more rows than this.
525 Multiplication factor used to estimate actual disk consumption from the
527 The default value is a worst-case estimate,
530 may wish to specify a more realistic inflation factor,
544 If this parameter is unset, the traversal skips non-metadata blocks.
546 import has started to stop or start the traversal of non-metadata blocks.
568 It also limits the worst-case time to allocate space.
578 Note that setting this value too high could result in performance
580 The set value only applies to pools imported/created afterwards.
585 The set value only applies to pools imported/created afterwards.
588 Limits the number of on-disk error log entries that will be converted to the
595 During top-level vdev removal, chunks of data are copied from the vdev
600 The default value here was chosen to align with
606 Logical ashift for file-based devices.
609 Physical ashift for file-based devices.
631 this value, doubling on each hit.
664 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
668 The value of
673 This is the minimum allocation size that will use scatter (page-based) ABDs.
678 bytes, try to unpin some of it in response to demand for non-metadata.
679 This value acts as a ceiling to the amount of dnode metadata, and defaults to
692 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
698 block size of this value.
700 with 8-byte pointers.
702 this value can be increased to reduce the memory footprint.
723 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
724 This batch-style operation prevents entire sub-lists from being evicted at once
728 If set to a non-zero value, it will replace the
730 value with this value.
733 .No value Pq default Sy 5 Ns s
740 Setting this value to
750 .Sy all_system_memory No \- Sy 1 GiB
754 This value must be at least
757 This value can be changed dynamically, with some caveats.
792 Number of missing top-level vdevs which will be allowed during
793 pool import (only in read-only mode).
805 .Pa zfs-dbgmsg
810 equivalent to a quarter of the user-wired memory limit under
817 To allow more fine-grained locking, each ARC state contains a series
819 Locking is performed at the level of these "sub-lists".
820 This parameter controls the number of sub-lists per ARC state,
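The fragments above describe the ARC's sub-list scheme: each ARC state is split into several independently locked lists so that concurrent insertions and evictions rarely contend on the same lock. The following C sketch only illustrates that idea; the structure and function names are hypothetical and are not the OpenZFS multilist implementation.
.Bd -literal
#include <pthread.h>
#include <stdint.h>

/*
 * Hypothetical per-sub-list locking sketch: an object is hashed to one of
 * several small lists, each guarded by its own mutex, instead of a single
 * list guarded by one global lock.
 */
struct sublist {
	pthread_mutex_t	sl_lock;	/* protects sl_head only */
	void		*sl_head;	/* illustrative list head */
};

struct multilist_sketch {
	unsigned	ml_num_sublists;	/* cf. the sub-list count parameter above */
	struct sublist	*ml_sublists;
};

/* Choose a sub-list for an object, e.g. by hashing its address. */
static struct sublist *
sublist_for(struct multilist_sketch *ml, const void *obj)
{
	uintptr_t h = (uintptr_t)obj >> 6;	/* drop cache-line bits */
	return (&ml->ml_sublists[h % ml->ml_num_sublists]);
}
.Ed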
842 The default value of
852 with the new value.
863 This value is specified as a percentage of pagecache size (as measured by
877 For example, a value of
891 A value of 4 means parity with the page cache.
935 bytes on-disk.
940 Only attempt to condense indirect vdev mappings if the on-disk size
997 .Bl -tag -compact -offset 4n -width "continue"
1003 Attempt to recover from a "hung" operation by re-dispatching it
1007 This can be used to facilitate automatic fail-over
1008 to a properly configured fail-over partner.
1027 Enable prefetching dedup-ed blocks which are going to be freed.
1094 OpenZFS will spend no more than this much memory on maintaining the in-memory
1097 The default value of
1115 This value should be at least
1123 For the smoothest delay, this value should be about 1 billion divided
1137 OpenZFS pre-release versions and now have compatibility issues.
1140 Maximum number of uses of a single salt value before generating a new one for
1142 The default value is also the maximum.
1148 are not created per-object and instead a hashtable is used where collisions
1157 Upper-bound limit for unflushed metadata changes to be held by the
1166 The default value means that the space in all the log spacemaps
1175 This tunable is important because it involves a trade-off between import
1204 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1219 Decreasing this value will reduce the time spent in an
1238 Maximum allowable value of
1252 for 32-bit systems.
1255 Maximum allowable value of
1284 The upper limit of write-transaction ZIL log data size in bytes.
1296 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1329 results in the original CPU-based calculation being used.
1408 The default value of
1411 A value of
1431 Timeout value to wait before determining a device is missing
1486 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1487 the number of concurrently-active I/O operations is limited to
1495 and the number of concurrently-active non-interactive operations is increased to
1506 To prevent non-interactive I/O, like scrub,
1515 Maximum number of queued allocations per top-level vdev expressed as
1533 The following options may be bitwise-ored together:
1537 Value Name Description
1548 Setting it to zero will cause the kernel's ideal size to be used.
1591 The following flags may be bitwise-ored together:
1595 Value Name Description
1604 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1619 Value Description
1637 from the error-encountering filesystem is "temporarily leaked".
1651 .Bl -enum -compact -offset 4n -width "1."
1654 e.g. due to a top-level vdev going offline), or
1691 .Xr zpool-initialize 8 .
1695 .Xr zpool-initialize 8 .
1699 The threshold size (in block pointers) at which we create a new sub-livelist.
1747 This value can be tuned temporarily to
1783 percentage is no more than this value.
1790 this value.
1797 The value is expressed as a percentage of free space
1804 The default value of
1811 Setting the threshold to a non-zero percentage will stop allocations
1838 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
1844 and this observed value is the delay which is stored in the uberblock.
1858 The activity check time may be further extended if the value of MMP
1898 if a volatile out-of-order write cache is enabled.
1901 Allow no-operation writes.
1918 The number of blocks pointed to by an indirect (non-L0) block which should be
1947 Disable QAT hardware acceleration for AES-GCM encryption.
1963 top-level vdev.
2008 The default value is also the maximum.
2058 This value is only tunable upon module insertion.
2059 Changing the value afterwards will have no effect on scrub or resilver
2064 .Bl -tag -compact -offset 4n -width "a"
2071 The largest mostly-contiguous chunk of found data will be verified first.
2089 Changing this value will not
2092 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2099 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^\-1 Pq uint
2134 Including unmodified copies of the spill blocks creates a backwards-compatible
2137 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2148 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2157 This value must be at least twice the maximum block size in use.
2159 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2169 This value must be at least twice the maximum block size in use.
2181 When this variable is set to non-zero, a corrective receive:
2182 .Bl -enum -compact -offset 4n -width "1."
2197 Override this value if most data in your dataset is not of that size
2211 value.
2224 many blocks' size will change, and thus we have to re-allocate
2242 Larger ranges will be split into chunks no larger than this value before
2254 This option is useful for pools constructed from large thinly-provisioned
2270 This setting represents a trade-off between issuing larger,
2274 Increasing this value will allow frees to be aggregated for a longer time.
2277 Decreasing this value will have the opposite effect.
2294 Max vdev I/O aggregation size for non-rotating media.
2317 the purpose of selecting the least busy mirror member on non-rotational vdevs
2330 Aggregate read I/O operations if the on-disk gap between them is within this
2334 Aggregate write I/O operations if the on-disk gap between them is within this
2340 Variants that don't depend on CPU-specific features
2353 fastest selected by built-in benchmark
2356 sse2 SSE2 instruction set 64-bit x86
2357 ssse3 SSSE3 instruction set 64-bit x86
2358 avx2 AVX2 instruction set 64-bit x86
2359 avx512f AVX512F instruction set 64-bit x86
2360 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2361 aarch64_neon NEON Aarch64/64-bit ARMv8
2362 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2373 .Xr zpool-events 8 .
2390 The number of taskq entries that are pre-populated when the taskq is first
2396 The default value of
2415 if a volatile out-of-order write cache is enabled.
2437 Usually, one metaslab from each normal-class vdev is dedicated for use by
2447 using LZ4 and zstd-1 passes is enabled.
2454 If non-zero, the zio deadman will produce debugging messages
2472 When enabled, the maximum number of pending allocations per top-level vdev
2509 The default value of
2514 The set value only applies to pools imported/created afterwards.
2520 The set value only applies to pools imported/created afterwards.
2524 generate a system-dependent value close to 6 threads per taskq.
2525 The set value only applies to pools imported/created afterwards.
2531 The set value only applies to pools imported/created afterwards.
2577 .Pq Li blk-mq .
2597 .Li blk-mq
2609 .Li blk-mq
2617 .Li blk-mq
2624 .Sy volblocksize Ns -sized blocks per zvol thread.
2632 .Li blk-mq
2636 The queue_depth value for the zvol
2637 .Li blk-mq
2640 .Li blk-mq
2654 .Bl -tag -compact -offset 4n -width "a"
2678 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2679 If the sum of the per-queue maxima exceeds the aggregate maximum,
2683 regardless of whether all per-queue minima have been met.
2737 follows a piece-wise linear function defined by a few adjustable points:
2738 .Bd -literal
2739 | o---------| <-- \fBzfs_vdev_async_write_max_active\fP
2746 |-------o | | <-- \fBzfs_vdev_async_write_min_active\fP
2750 | `-- \fBzfs_vdev_async_write_active_max_dirty_percent\fP
2751 `--------- \fBzfs_vdev_async_write_active_min_dirty_percent\fP
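The diagram fragments above describe how the number of concurrently active asynchronous writes scales between the minimum and maximum as the amount of dirty data moves between the two dirty-percent points. The following C sketch shows one way that piece-wise linear interpolation can be computed; the values shown are the usual defaults but should be treated as illustrative, and the function itself is a sketch, not the OpenZFS implementation.
.Bd -literal
#include <stdint.h>

/* Illustrative copies of the tunables; zfs_dirty_data_max is an example. */
static uint64_t zfs_dirty_data_max = 4ULL << 30;	/* assume 4 GiB */
static uint32_t zfs_vdev_async_write_min_active = 2;
static uint32_t zfs_vdev_async_write_max_active = 10;
static uint32_t zfs_vdev_async_write_active_min_dirty_percent = 30;
static uint32_t zfs_vdev_async_write_active_max_dirty_percent = 60;

/*
 * Piece-wise linear interpolation: below the lower dirty-percent point use
 * min_active, above the upper point use max_active, scale linearly between.
 */
static uint32_t
async_write_active_sketch(uint64_t dirty_bytes)
{
	uint64_t lo = zfs_dirty_data_max *
	    zfs_vdev_async_write_active_min_dirty_percent / 100;
	uint64_t hi = zfs_dirty_data_max *
	    zfs_vdev_async_write_active_max_dirty_percent / 100;

	if (dirty_bytes <= lo)
		return (zfs_vdev_async_write_min_active);
	if (dirty_bytes >= hi)
		return (zfs_vdev_async_write_max_active);

	return (zfs_vdev_async_write_min_active +
	    (uint32_t)((dirty_bytes - lo) *
	    (zfs_vdev_async_write_max_active -
	    zfs_vdev_async_write_min_active) / (hi - lo)));
}
.Ed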
2787 .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
2800 .Bd -literal
2802 10ms +-------------------------------------------------------------*+
2821 | \fBzfs_delay_scale\fP ----------> ******** |
2822 0 +-------------------------------------*********----------------+
2823 0% <- \fBzfs_dirty_data_max\fP -> 100%
2839 .Bd -literal
2841 100ms +-------------------------------------------------------------++
2850 + \fBzfs_delay_scale\fP ----------> ***** +
2861 +--------------------------------------------------------------+
2862 0% <- \fBzfs_dirty_data_max\fP -> 100%
2869 for the I/O scheduler to reach optimal throughput on the back-end storage,
2870 and then by changing the value of
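Several of the fragments above come from the write-throttle discussion: once dirty data exceeds the minimum dirty percentage of zfs_dirty_data_max, each write is delayed by min_time = min(zfs_delay_scale * (dirty - min) / (max - dirty), 100ms), the formula plotted in the two curves. The following C sketch evaluates that documented formula with illustrative values; it is a sketch, not the actual OpenZFS throttle code.
.Bd -literal
#include <stdint.h>

#define	MSEC2NSEC(m)	((uint64_t)(m) * 1000000ULL)

/* Illustrative values; zfs_dirty_data_max in particular is just an example. */
static uint64_t zfs_dirty_data_max = 4ULL << 30;	/* assume 4 GiB */
static uint32_t zfs_delay_min_dirty_percent = 60;
static uint64_t zfs_delay_scale = 500000;		/* nanoseconds */

/* min_time = min(zfs_delay_scale * (dirty - min) / (max - dirty), 100ms) */
static uint64_t
write_delay_ns_sketch(uint64_t dirty_bytes)
{
	uint64_t min = zfs_dirty_data_max * zfs_delay_min_dirty_percent / 100;
	uint64_t max = zfs_dirty_data_max;

	if (dirty_bytes <= min)
		return (0);			/* below the threshold: no delay */
	if (dirty_bytes >= max)
		return (MSEC2NSEC(100));	/* curve saturates near 100% dirty */

	uint64_t t = zfs_delay_scale * (dirty_bytes - min) / (max - dirty_bytes);
	return (t < MSEC2NSEC(100) ? t : MSEC2NSEC(100));
}
.Ed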