Lines Matching +full:cache +full:- +full:time +full:- +full:ms

9 .\" usr/src/OPENSOLARIS.LICENSE or https://opensource.org/licenses/CDDL-1.0.
31 .Bl -tag -width Ds
33 Maximum size in bytes of the dbuf cache.
37 The behavior of the dbuf cache and its associated settings
43 Maximum size in bytes of the metadata dbuf cache.
47 The behavior of the metadata dbuf cache and its associated settings
63 Set the size of the dbuf cache
68 Set the size of the dbuf metadata cache
73 Set the size of the mutex array for the dbuf cache.
102 Turbo L2ARC warm-up.
151 Setting it to 1 means to L2 cache only MFU data and metadata.
153 Setting it to 2 means to L2 cache all metadata (MRU+MFU) but
155 to cache as much metadata as possible even under high data turnover.
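As a minimal sketch of the eligibility rule described above (hypothetical
function name; the behaviour for the value 0, caching both MRU and MFU,
is an assumption not shown in this listing):
.Bd -literal
#include <stdbool.h>

/* Simplified sketch of the l2arc_mfuonly policy; not the OpenZFS code. */
static bool
example_l2arc_eligible(int l2arc_mfuonly, bool is_mfu, bool is_metadata)
{
	switch (l2arc_mfuonly) {
	case 1:		/* cache only MFU data and metadata */
		return (is_mfu);
	case 2:		/* cache all metadata (MRU+MFU), but only MFU data */
		return (is_metadata || is_mfu);
	default:	/* 0 (assumed): cache both MRU and MFU */
		return (true);
	}
}
.Ed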
167 arcstats when importing the pool or onlining a cache
179 Percent of ARC size allowed for L2ARC-only headers.
197 invalid upon importing a pool or onlining a cache device.
252 before moving on to the next top-level vdev.
255 Enable metaslab group biasing based on their vdevs' over- or under-utilization
305 When attempting to log an output nvlist of an ioctl in the on-disk history,
310 .Fn zfs_ioc_channel_program Pq cf. Xr zfs-program 8 .
316 Enable/disable segment-based metaslab selection.
319 When using segment-based metaslab selection, continue allocating
338 becomes the performance limiting factor on high-performance storage.
365 When we unload a metaslab, we cache the size of the largest free chunk.
382 .Bl -item -compact
390 If that fails then we will have a multi-layer gang block.
393 .Bl -item -compact
403 If that fails then we will have a multi-layer gang block.
413 When a vdev is added, target this number of metaslabs per top-level vdev.
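For a rough sense of the resulting metaslab size (assuming the commonly
documented default target of 200 metaslabs per top-level vdev, which does
not appear in this listing), a 2 TiB vdev would be divided into metaslabs
of about
.D1 2 TiB / 200 \(ap 10 GiB
subject to the ashift and metaslab-count limits listed below.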
422 Maximum ashift used when optimizing for logical \[->] physical sector size on
424 top-level vdevs.
430 If non-zero, then a Direct I/O write's checksum will be verified every
431 time the write is issued and before it is committed to the block pointer.
450 Minimum ashift used when creating new top-level vdevs.
453 Minimum number of metaslabs to create in a top-level vdev.
461 Practical upper limit of total metaslabs per top-level vdev.
484 .It Sy metaslab_unload_delay_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq uint
495 Max amount of memory to use for RAID-Z expansion I/O.
499 For testing, pause RAID-Z expansion when reflow amount reaches this value.
502 For expanded RAID-Z, aggregate reads that have more rows than this.
521 .It Sy spa_config_path Ns = Ns Pa /etc/zfs/zpool.cache Pq charp
544 If this parameter is unset, the traversal skips non-metadata blocks.
546 import has started to stop or start the traversal of non-metadata blocks.
568 It also limits the worst-case time to allocate space.
588 Limits the number of on-disk error log entries that will be converted to the
595 During top-level vdev removal, chunks of data are copied from the vdev
606 Logical ashift for file-based devices.
609 Physical ashift for file-based devices.
633 since the last time haven't completed in time to satisfy the demand request, i.e.
653 Min time before inactive prefetch stream can be reclaimed
656 Max time before inactive prefetch stream can be deleted
664 .It Sy zfs_abd_scatter_max_order Ns = Ns Sy MAX_ORDER\-1 Pq uint
673 This is the minimum allocation size that will use scatter (page-based) ABDs.
678 bytes, try to unpin some of it in response to demand for non-metadata.
692 Percentage of ARC dnodes to try to scan in response to demand for non-metadata
700 with 8-byte pointers.
719 even during the potentially long time that
723 Number of ARC headers to evict per sub-list before proceeding to another sub-list.
724 This batch-style operation prevents entire sub-lists from being evicted at once
750 .Sy all_system_memory No \- Sy 1 GiB
776 .It Sy zfs_arc_min_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 1s Pc Pq uint
777 Minimum time prefetched blocks are locked in the ARC.
779 .It Sy zfs_arc_min_prescient_prefetch_ms Ns = Ns Sy 0 Ns ms Ns Po Ns ≡ Ns 6s Pc Pq uint
780 Minimum time "prescient prefetched" blocks are locked in the ARC.
792 Number of missing top-level vdevs which will be allowed during
793 pool import (only in read-only mode).
805 .Pa zfs-dbgmsg
810 equivalent to a quarter of the user-wired memory limit under
817 To allow more fine-grained locking, each ARC state contains a series
819 Locking is performed at the level of these "sub-lists".
820 This parameter controls the number of sub-lists per ARC state,
879 limits the amount of time spent attempting to reclaim ARC memory to
880 less than 100 ms per allocation attempt,
891 A value of 4 means parity with the page cache.
899 Disable pool import at module load by ignoring the cache file
909 This controls the amount of time that a ZIL block (lwb) will remain "open"
916 .It Sy zfs_condense_indirect_commit_entry_delay_ms Ns = Ns Sy 0 Ns ms Pq int
935 bytes on-disk.
940 Only attempt to condense indirect vdev mappings if the on-disk size
968 .It Sy zfs_deadman_checktime_ms Ns = Ns Sy 60000 Ns ms Po 1 min Pc Pq u64
969 Check time in milliseconds.
997 .Bl -tag -compact -offset 4n -width "continue"
1003 Attempt to recover from a "hung" operation by re-dispatching it
1007 This can be used to facilitate automatic fail-over
1008 to a properly configured fail-over partner.
1011 .It Sy zfs_deadman_synctime_ms Ns = Ns Sy 600000 Ns ms Po 10 min Pc Pq u64
1018 .It Sy zfs_deadman_ziotime_ms Ns = Ns Sy 300000 Ns ms Po 5 min Pc Pq u64
1027 Enable prefetching dedup-ed blocks which are going to be freed.
1033 needs to flush out to keep up with the change rate, taking the amount and time
1037 At each pass, it will use the amount already flushed and the total time taken
1047 sync time beyond
1051 Minimum time to spend on dedup log flush each transaction.
1057 completes under this time.
1075 flushed (flush rate) and time spent flushing (flush time rate) and combining
1094 OpenZFS will spend no more than this much memory on maintaining the in-memory
1137 OpenZFS pre-release versions and now have compatibility issues.
1148 are not created per-object and instead a hashtable is used where collisions
1157 Upper-bound limit for unflushed metadata changes to be held by the
1175 This tunable is important because it involves a trade-off between import
1176 time after an unclean export and the frequency of flushing metaslabs.
1180 At the same time, though, that means that in the event of an unclean export,
1182 in the import time of the pool.
1185 to be read during import time after a crash.
1203 Tunable limiting the maximum time in TXGs any metaslab may remain unflushed.
1204 It effectively limits the maximum number of unflushed per-TXG spacemap logs
1219 Decreasing this value will reduce the time spent in an
1241 This limit is only enforced at module load time, and will be ignored if
1252 for 32-bit systems.
1258 This limit is only enforced at module load time, and will be ignored if
1284 The upper limit of write-transaction ZIL log data size in bytes.
1296 Since ZFS is a copy-on-write filesystem with snapshots, blocks cannot be
1329 results in the original CPU-based calculation being used.
1486 For non-interactive I/O (scrub, resilver, removal, initialize and rebuild),
1487 the number of concurrently-active I/O operations is limited to
1495 and the number of concurrently-active non-interactive operations is increased to
1503 are submitted one at a time, and so setting
1506 To prevent non-interactive I/O, like scrub,
1511 within a reasonable amount of time.
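The intent of these limits can be sketched as follows (a deliberately
simplified illustration with assumed names; the actual OpenZFS selection
also involves additional delay and credit tunables not shown here):
.Bd -literal
#include <stdbool.h>

/* Simplified illustration of the non-interactive active-I/O limit. */
static int
example_nia_active_limit(int min_active, int max_active,
    bool interactive_io_recently_seen)
{
	/*
	 * While interactive I/O is present, scrub, resilver, removal,
	 * initialize and rebuild are held to their minimum so they do
	 * not crowd out application I/O.
	 */
	if (interactive_io_recently_seen)
		return (min_active);
	/* With no competing interactive I/O, allow the class to ramp up. */
	return (max_active);
}
.Ed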
1515 Maximum number of queued allocations per top-level vdev expressed as
1533 The following options may be bitwise-ored together:
1565 This parameter only applies on Linux, and can only be set at module load time.
1568 Time before expiring
1591 The following flags may be bitwise-ored together:
1604 256 ZFS_DEBUG_METASLAB_VERIFY Verify space accounting on disk matches in-memory \fBrange_trees\fP.
1637 from the error-encountering filesystem is "temporarily leaked".
1651 .Bl -enum -compact -offset 4n -width "1."
1654 e.g. due to a top-level vdev going offline), or
1670 .It Sy zfs_free_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1s Pc Pq uint
1676 a minimum of this much time will be spent working on freeing blocks per TXG.
1678 .It Sy zfs_obsolete_min_time_ms Ns = Ns Sy 500 Ns ms Pq uint
1691 .Xr zpool-initialize 8 .
1695 .Xr zpool-initialize 8 .
1699 The threshold size (in block pointers) at which we create a new sub-livelist.
1711 Incremented each time an extra ALLOC blkptr is added to a livelist entry while
1716 Incremented each time livelist condensing is canceled while in
1727 Incremented each time livelist condensing is canceled while in
1738 The maximum execution time limit that can be set for a ZFS channel program,
1811 Setting the threshold to a non-zero percentage will stop allocations
1830 .It Sy zfs_multihost_interval Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq u64
1838 .Sy zfs_multihost_interval No / Sy leaf-vdevs .
1850 will reduce the import time but increase
1852 The total activity check time is never allowed to drop below one second.
1854 On import the activity check waits a minimum amount of time determined by
1858 The activity check time may be further extended if the value of MMP
1863 .Em 100 ms
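As a worked example of the scale involved (assuming the default
zfs_multihost_import_intervals of 20 and the interval-times-intervals
product described in zfs(4), neither of which appears in this listing),
an import would wait at least
.D1 1000 ms \(mu 20 = 20 s
before the activity check can pass.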
1896 Disable cache flush operations on disks when writing.
1898 if a volatile out-of-order write cache is enabled.
1901 Allow no-operation writes.
1918 The number of blocks pointed to by an indirect (non-L0) block which should be
1947 Disable QAT hardware acceleration for AES-GCM encryption.
1959 Include cache hits in read history
1963 top-level vdev.
1980 combinations each time the block is accessed.
2023 .It Sy zfs_resilver_min_time_ms Ns = Ns Sy 3000 Ns ms Po 3 s Pc Pq uint
2025 While resilvering, it will spend at least this much time
2029 If set, remove the DTL (dirty time list) upon completion of a pool scan (scrub),
2040 .It Sy zfs_scrub_min_time_ms Ns = Ns Sy 1000 Ns ms Po 1 s Pc Pq uint
2042 While scrubbing, it will spend at least this much time
2064 .Bl -tag -compact -offset 4n -width "a"
2071 The largest mostly-contiguous chunk of found data will be verified first.
2092 .It Sy zfs_scan_mem_lim_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
2099 .It Sy zfs_scan_mem_lim_soft_fact Ns = Ns Sy 20 Ns ^-1 Pq uint
2109 When reporting resilver throughput and estimated completion time, use the
2113 When set to zero, performance is calculated over the time between checkpoints.
2134 Including unmodified copies of the spill blocks creates a backwards-compatible
2137 .It Sy zfs_send_no_prefetch_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2148 .It Sy zfs_send_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2159 .It Sy zfs_recv_queue_ff Ns = Ns Sy 20 Ns ^\-1 Pq uint
2181 When this variable is set to non-zero, a corrective receive:
2182 .Bl -enum -compact -offset 4n -width "1."
2224 many blocks' sizes will change, and thus we have to re-allocate
2254 This option is useful for pools constructed from large thinly-provisioned
2270 This setting represents a trade-off between issuing larger,
2274 Increasing this value will allow frees to be aggregated for a longer time.
2294 Max vdev I/O aggregation size for non-rotating media.
2317 the purpose of selecting the least busy mirror member on non-rotational vdevs
2330 Aggregate read I/O operations if the on-disk gap between them is within this
2334 Aggregate write I/O operations if the on-disk gap between them is within this
2340 Variants that don't depend on CPU-specific features
2353 fastest selected by built-in benchmark
2356 sse2 SSE2 instruction set 64-bit x86
2357 ssse3 SSSE3 instruction set 64-bit x86
2358 avx2 AVX2 instruction set 64-bit x86
2359 avx512f AVX512F instruction set 64-bit x86
2360 avx512bw AVX512F & AVX512BW instruction sets 64-bit x86
2361 aarch64_neon NEON Aarch64/64-bit ARMv8
2362 aarch64_neonx2 NEON with more unrolling Aarch64/64-bit ARMv8
2373 .Xr zpool-events 8 .
2390 The number of taskq entries that are pre-populated when the taskq is first
2412 Disable the cache flush commands that are normally sent to disk by
2415 if a volatile out-of-order write cache is enabled.
2437 Usually, one metaslab from each normal-class vdev is dedicated for use by
2447 using LZ4 and zstd-1 passes is enabled.
2454 If non-zero, the zio deadman will produce debugging messages
2462 .It Sy zio_slow_io_ms Ns = Ns Sy 30000 Ns ms Po 30 s Pc Pq int
2463 When an I/O operation takes more than this much time to complete,
2472 When enabled, the maximum number of pending allocations per top-level vdev
2524 generate a system-dependent value close to 6 threads per taskq.
2547 This may slightly improve startup time on
2577 .Pq Li blk-mq .
2597 .Li blk-mq
2598 and is only read and assigned to a zvol at zvol load time.
2609 .Li blk-mq
2617 .Li blk-mq
2618 and is only read and assigned to a zvol at zvol load time.
2624 .Sy volblocksize Ns -sized blocks per zvol thread.
2632 .Li blk-mq
2633 and is only applied at each zvol's load time.
2637 .Li blk-mq
2640 .Li blk-mq
2641 and is only applied at each zvol's load time.
2654 .Bl -tag -compact -offset 4n -width "a"
2678 Note that the sum of the per-queue minima must not exceed the aggregate maximum.
2679 If the sum of the per-queue maxima exceeds the aggregate maximum,
2683 regardless of whether all per-queue minima have been met.
2700 Every time an I/O operation is queued or an operation completes,
2731 response time of operations from other queues, in particular synchronous ones.
2737 follows a piece-wise linear function defined by a few adjustable points:
[ASCII diagram: the number of concurrently active async writes ramps linearly from zfs_vdev_async_write_min_active to zfs_vdev_async_write_max_active as dirty data grows from zfs_vdev_async_write_active_min_dirty_percent to zfs_vdev_async_write_active_max_dirty_percent of zfs_dirty_data_max.]
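A minimal sketch of this piece-wise linear scaling, assuming illustrative
names (this is not the actual OpenZFS implementation):
.Bd -literal
#include <stdint.h>

/* Sketch: scale the active async-write count with the dirty-data level. */
static int
example_async_write_active(uint64_t dirty, uint64_t dirty_max,
    int min_active, int max_active, int min_pct, int max_pct)
{
	uint64_t lo = dirty_max * min_pct / 100;
	uint64_t hi = dirty_max * max_pct / 100;

	if (dirty <= lo)
		return (min_active);
	if (dirty >= hi)
		return (max_active);
	/* Linear ramp between the two dirty-data thresholds. */
	return (min_active + (int)((uint64_t)(max_active - min_active) *
	    (dirty - lo) / (hi - lo)));
}
.Ed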
2778 This way the calculated delay time
2782 rather than the current time.
2783 This credits the transaction for "time already served",
2786 The minimum time for a transaction to take is calculated as
2787 .D1 min_time = min( Ns Sy zfs_delay_scale No \(mu Po Sy dirty No \- Sy min Pc / Po Sy max No \- Sy dirty Pc , 100ms)
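A minimal sketch of this calculation (hypothetical names, not the OpenZFS
implementation); dirty_min is the level at which delays begin, dirty_max
corresponds to zfs_dirty_data_max, and the scale is in nanoseconds:
.Bd -literal
#include <stdint.h>

#define	EX_MS_TO_NS(ms)	((uint64_t)(ms) * 1000000ULL)

/* Sketch of the per-transaction minimum-time formula given above. */
static uint64_t
example_txg_delay_ns(uint64_t dirty, uint64_t dirty_min, uint64_t dirty_max,
    uint64_t delay_scale_ns)
{
	if (dirty <= dirty_min)
		return (0);
	if (dirty >= dirty_max)
		return (EX_MS_TO_NS(100));	/* capped at 100 ms */

	/* min_time = zfs_delay_scale * (dirty - min) / (max - dirty) */
	uint64_t ns = delay_scale_ns * (dirty - dirty_min) /
	    (dirty_max - dirty);
	return (ns < EX_MS_TO_NS(100) ? ns : EX_MS_TO_NS(100));
}
.Ed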
[ASCII plot: transaction delay (0 to 10 ms, linear scale) versus dirty data as a percentage of zfs_dirty_data_max; the delay stays near zero for most of the range and rises steeply toward 10 ms as dirty data approaches 100%, with zfs_delay_scale setting the midpoint of the curve.]
2826 Note that since the delay is added to the outstanding time remaining on the
[ASCII plot: the same delay curve with the delay on a logarithmic scale (1 ms to 100 ms visible) versus dirty data as a percentage of zfs_dirty_data_max, highlighting how rapidly the delay grows once dirty data passes the zfs_delay_scale midpoint.]
2869 for the I/O scheduler to reach optimal throughput on the back-end storage,