1'\" te
2.\" Copyright (c) 2012, Martin Matuska <mm@FreeBSD.org>.
3.\" Copyright (c) 2013-2014, Xin Li <delphij@FreeBSD.org>.
4.\" All Rights Reserved.
5.\"
6.\" The contents of this file are subject to the terms of the
7.\" Common Development and Distribution License (the "License").
8.\" You may not use this file except in compliance with the License.
9.\"
10.\" You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
11.\" or http://www.opensolaris.org/os/licensing.
12.\" See the License for the specific language governing permissions
13.\" and limitations under the License.
14.\"
15.\" When distributing Covered Code, include this CDDL HEADER in each
16.\" file and include the License file at usr/src/OPENSOLARIS.LICENSE.
17.\" If applicable, add the following below this CDDL HEADER, with the
18.\" fields enclosed by brackets "[]" replaced with your own identifying
19.\" information: Portions Copyright [yyyy] [name of copyright owner]
20.\"
21.\" Copyright (c) 2010, Sun Microsystems, Inc. All Rights Reserved.
22.\" Copyright 2011, Nexenta Systems, Inc. All Rights Reserved.
23.\" Copyright (c) 2011, Justin T. Gibbs <gibbs@FreeBSD.org>
24.\" Copyright (c) 2013 by Delphix. All Rights Reserved.
25.\" Copyright (c) 2012, Glen Barber <gjb@FreeBSD.org>
26.\"
27.\" $FreeBSD: head/cddl/contrib/opensolaris/cmd/zpool/zpool.8 302787 2016-07-13 21:27:10Z vangyzen $
28.\"
29.Dd July 26, 2014
30.Dt ZPOOL 8
31.Os
32.Sh NAME
33.Nm zpool
34.Nd configures ZFS storage pools
35.Sh SYNOPSIS
36.Nm
37.Op Fl \&?
38.Nm
39.Cm add
40.Op Fl fn
41.Ar pool vdev ...
42.Nm
43.Cm attach
44.Op Fl f
45.Ar pool device new_device
46.Nm
47.Cm clear
48.Op Fl F Op Fl n
49.Ar pool
50.Op Ar device
51.Nm
52.Cm create
53.Op Fl fnd
54.Op Fl o Ar property Ns = Ns Ar value
55.Ar ...
56.Op Fl O Ar file-system-property Ns = Ns Ar value
57.Ar ...
58.Op Fl m Ar mountpoint
59.Op Fl R Ar root
60.Ar pool vdev ...
61.Nm
62.Cm destroy
63.Op Fl f
64.Ar pool
65.Nm
66.Cm detach
67.Ar pool device
68.Nm
69.Cm export
70.Op Fl f
71.Ar pool ...
72.Nm
73.Cm get
74.Op Fl Hp
75.Op Fl o Ar field Ns Op , Ns Ar ...
76.Ar all | property Ns Op , Ns Ar ...
77.Ar pool ...
78.Nm
79.Cm history
80.Op Fl il
81.Op Ar pool
82.Ar ...
83.Nm
84.Cm import
85.Op Fl d Ar dir | Fl c Ar cachefile
86.Op Fl D
87.Nm
88.Cm import
89.Op Fl o Ar mntopts
90.Op Fl o Ar property Ns = Ns Ar value
91.Ar ...
92.Op Fl d Ar dir | Fl c Ar cachefile
93.Op Fl D
94.Op Fl f
95.Op Fl m
96.Op Fl N
97.Op Fl R Ar root
98.Op Fl F Op Fl n
99.Fl a
100.Nm
101.Cm import
102.Op Fl o Ar mntopts
103.Op Fl o Ar property Ns = Ns Ar value
104.Ar ...
105.Op Fl d Ar dir | Fl c Ar cachefile
106.Op Fl D
107.Op Fl f
108.Op Fl m
109.Op Fl N
110.Op Fl R Ar root
111.Op Fl F Op Fl n
112.Ar pool | id
113.Op Ar newpool
114.Nm
115.Cm iostat
116.Op Fl T Cm d Ns | Ns Cm u
117.Op Fl v
118.Op Ar pool
119.Ar ...
120.Nm
121.Cm labelclear
122.Op Fl f
123.Ar device
124.Nm
125.Cm list
126.Op Fl Hpv
127.Op Fl o Ar property Ns Op , Ns Ar ...
128.Op Fl T Cm d Ns | Ns Cm u
129.Op Ar pool
130.Ar ...
.Op Ar interval Op Ar count
132.Nm
133.Cm offline
134.Op Fl t
135.Ar pool device ...
136.Nm
137.Cm online
138.Op Fl e
139.Ar pool device ...
140.Nm
141.Cm reguid
142.Ar pool
143.Nm
144.Cm remove
145.Ar pool device ...
146.Nm
147.Cm reopen
148.Ar pool
149.Nm
150.Cm replace
151.Op Fl f
152.Ar pool device
153.Op Ar new_device
154.Nm
155.Cm scrub
156.Op Fl s
157.Ar pool ...
158.Nm
159.Cm set
160.Ar property Ns = Ns Ar value pool
161.Nm
162.Cm split
163.Op Fl n
164.Op Fl R Ar altroot
165.Op Fl o Ar mntopts
166.Op Fl o Ar property Ns = Ns Ar value
167.Ar pool newpool
168.Op Ar device ...
169.Nm
170.Cm status
171.Op Fl vx
172.Op Fl T Cm d Ns | Ns Cm u
173.Op Ar pool
174.Ar ...
175.Op Ar interval Op Ar count
176.Nm
177.Cm upgrade
178.Op Fl v
179.Nm
180.Cm upgrade
181.Op Fl V Ar version
182.Fl a | Ar pool ...
183.Sh DESCRIPTION
184The
185.Nm
186command configures
187.Tn ZFS
188storage pools. A storage pool is a collection of devices that provides physical
189storage and data replication for
190.Tn ZFS
191datasets.
192.Pp
193All datasets within a storage pool share the same space. See
194.Xr zfs 8
195for information on managing datasets.
196.Ss Virtual Devices (vdevs)
197A
198.Qq virtual device
199.Pq No vdev
200describes a single device or a collection of devices organized according to
201certain performance and fault characteristics. The following virtual devices
202are supported:
203.Bl -tag -width "XXXXXX"
204.It Sy disk
205A block device, typically located under
206.Pa /dev .
207.Tn ZFS
208can use individual slices or partitions, though the recommended mode of
209operation is to use whole disks. A disk can be specified by a full path to the
210device or the
211.Xr geom 4
212provider name. When given a whole disk,
213.Tn ZFS
214automatically labels the disk, if necessary.
215.It Sy file
A regular file. The use of files as a backing store is strongly discouraged. It
is designed primarily for experimental purposes, as the fault tolerance of a
file is only as good as the file system of which it is a part. A file must be
specified by a full path.
220.It Sy mirror
221A mirror of two or more devices. Data is replicated in an identical fashion
222across all components of a mirror. A mirror with
223.Em N
224disks of size
225.Em X
226can hold
227.Em X
228bytes and can withstand
229.Pq Em N-1
230devices failing before data integrity is compromised.
231.It Sy raidz
232(or
233.Sy raidz1 raidz2 raidz3 ) .
234A variation on
235.Sy RAID-5
236that allows for better distribution of parity and eliminates the
237.Qq Sy RAID-5
238write hole (in which data and parity become inconsistent after a power loss).
239Data and parity is striped across all disks within a
240.No raidz
241group.
242.Pp
243A
244.No raidz
group can have single-, double-, or triple-parity, meaning that the
246.No raidz
247group can sustain one, two, or three failures, respectively, without
248losing any data. The
249.Sy raidz1 No vdev
250type specifies a single-parity
251.No raidz
252group; the
253.Sy raidz2 No vdev
254type specifies a double-parity
255.No raidz
256group; and the
257.Sy raidz3 No vdev
258type specifies a triple-parity
259.No raidz
260group. The
261.Sy raidz No vdev
262type is an alias for
263.Sy raidz1 .
264.Pp
265A
266.No raidz
267group with
268.Em N
269disks of size
270.Em X
271with
272.Em P
273parity disks can hold approximately
274.Sm off
275.Pq Em N-P
276*X
277.Sm on
278bytes and can withstand
279.Em P
280device(s) failing before data integrity is compromised. The minimum number of
281devices in a
282.No raidz
283group is one more than the number of parity disks. The
284recommended number is between 3 and 9 to help increase performance.
285.It Sy spare
286A special
287.No pseudo- Ns No vdev
288which keeps track of available hot spares for a pool.
289For more information, see the
290.Qq Sx Hot Spares
291section.
292.It Sy log
A separate intent log device. If more than one log device is specified, then
294writes are load-balanced between devices. Log devices can be mirrored. However,
295.No raidz
296.No vdev
297types are not supported for the intent log. For more information,
298see the
299.Qq Sx Intent Log
300section.
301.It Sy cache
302A device used to cache storage pool data. A cache device cannot be configured
303as a mirror or
304.No raidz
305group. For more information, see the
306.Qq Sx Cache Devices
307section.
308.El
309.Pp
310Virtual devices cannot be nested, so a mirror or
311.No raidz
312virtual device can only
313contain files or disks. Mirrors of mirrors (or other combinations) are not
314allowed.
315.Pp
316A pool can have any number of virtual devices at the top of the configuration
317(known as
318.Qq root
319.No vdev Ns s).
320Data is dynamically distributed across all top-level devices to balance data
321among devices. As new virtual devices are added,
322.Tn ZFS
323automatically places data on the newly available devices.
324.Pp
325Virtual devices are specified one at a time on the command line, separated by
326whitespace. The keywords
327.Qq mirror
328and
329.Qq raidz
330are used to distinguish where a group ends and another begins. For example, the
331following creates two root
332.No vdev Ns s,
333each a mirror of two disks:
334.Bd -literal -offset 2n
335.Li # Ic zpool create mypool mirror da0 da1 mirror da2 da3
336.Ed
337.Ss Device Failure and Recovery
338.Tn ZFS
339supports a rich set of mechanisms for handling device failure and data
340corruption. All metadata and data is checksummed, and
341.Tn ZFS
342automatically repairs bad data from a good copy when corruption is detected.
343.Pp
344In order to take advantage of these features, a pool must make use of some form
345of redundancy, using either mirrored or
346.No raidz
347groups. While
348.Tn ZFS
349supports running in a non-redundant configuration, where each root
350.No vdev
351is simply a disk or file, this is strongly discouraged. A single case of bit
352corruption can render some or all of your data unavailable.
353.Pp
354A pool's health status is described by one of three states: online, degraded,
355or faulted. An online pool has all devices operating normally. A degraded pool
356is one in which one or more devices have failed, but the data is still
357available due to a redundant configuration. A faulted pool has corrupted
358metadata, or one or more faulted devices, and insufficient replicas to continue
359functioning.
360.Pp
361The health of the top-level
362.No vdev ,
363such as mirror or
364.No raidz
365device, is
366potentially impacted by the state of its associated
367.No vdev Ns s,
368or component devices. A top-level
369.No vdev
370or component device is in one of the following states:
371.Bl -tag -width "DEGRADED"
372.It Sy DEGRADED
373One or more top-level
374.No vdev Ns s
375is in the degraded state because one or more
376component devices are offline. Sufficient replicas exist to continue
377functioning.
378.Pp
379One or more component devices is in the degraded or faulted state, but
380sufficient replicas exist to continue functioning. The underlying conditions
381are as follows:
382.Bl -bullet -offset 2n
383.It
384The number of checksum errors exceeds acceptable levels and the device is
385degraded as an indication that something may be wrong.
386.Tn ZFS
387continues to use the device as necessary.
388.It
389The number of
390.Tn I/O
391errors exceeds acceptable levels. The device could not be
392marked as faulted because there are insufficient replicas to continue
393functioning.
394.El
395.It Sy FAULTED
396One or more top-level
397.No vdev Ns s
398is in the faulted state because one or more
399component devices are offline. Insufficient replicas exist to continue
400functioning.
401.Pp
402One or more component devices is in the faulted state, and insufficient
403replicas exist to continue functioning. The underlying conditions are as
404follows:
405.Bl -bullet -offset 2n
406.It
407The device could be opened, but the contents did not match expected values.
408.It
409The number of
410.Tn I/O
411errors exceeds acceptable levels and the device is faulted to
412prevent further use of the device.
413.El
414.It Sy OFFLINE
415The device was explicitly taken offline by the
416.Qq Nm Cm offline
417command.
418.It Sy ONLINE
419The device is online and functioning.
420.It Sy REMOVED
421The device was physically removed while the system was running. Device removal
422detection is hardware-dependent and may not be supported on all platforms.
423.It Sy UNAVAIL
424The device could not be opened. If a pool is imported when a device was
425unavailable, then the device will be identified by a unique identifier instead
426of its path since the path was never correct in the first place.
427.El
428.Pp
429If a device is removed and later reattached to the system,
430.Tn ZFS
431attempts to put the device online automatically. Device attach detection is
432hardware-dependent and might not be supported on all platforms.
433.Ss Hot Spares
434.Tn ZFS
435allows devices to be associated with pools as
436.Qq hot spares .
437These devices are not actively used in the pool, but when an active device
438fails, it is automatically replaced by a hot spare. To create a pool with hot
439spares, specify a
440.Qq spare
441.No vdev
442with any number of devices. For example,
443.Bd -literal -offset 2n
444.Li # Ic zpool create pool mirror da0 da1 spare da2 da3
445.Ed
446.Pp
447Spares can be shared across multiple pools, and can be added with the
448.Qq Nm Cm add
449command and removed with the
450.Qq Nm Cm remove
451command. Once a spare replacement is initiated, a new "spare"
452.No vdev
453is created
454within the configuration that will remain there until the original device is
455replaced. At this point, the hot spare becomes available again if another
456device fails.
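.Pp
For example, another spare could later be added to the pool (the device name
.Em da4
is used purely for illustration):
.Bd -literal -offset 2n
.Li # Ic zpool add pool spare da4
.Ed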
457.Pp
458If a pool has a shared spare that is currently being used, the pool can not be
459exported since other pools may use this shared spare, which may lead to
460potential data corruption.
461.Pp
462An in-progress spare replacement can be cancelled by detaching the hot spare.
463If the original faulted device is detached, then the hot spare assumes its
464place in the configuration, and is removed from the spare list of all active
465pools.
466.Pp
467Spares cannot replace log devices.
468.Pp
469This feature requires a userland helper.
470.Ss Intent Log
471The
472.Tn ZFS
473Intent Log
474.Pq Tn ZIL
475satisfies
476.Tn POSIX
477requirements for synchronous transactions. For instance, databases often
478require their transactions to be on stable storage devices when returning from
479a system call.
480.Tn NFS
481and other applications can also use
482.Xr fsync 2
483to ensure data stability. By default, the intent log is allocated from blocks
484within the main pool. However, it might be possible to get better performance
485using separate intent log devices such as
486.Tn NVRAM
487or a dedicated disk. For example:
488.Bd -literal -offset 2n
489.Li # Ic zpool create pool da0 da1 log da2
490.Ed
491.Pp
492Multiple log devices can also be specified, and they can be mirrored. See the
493.Sx EXAMPLES
494section for an example of mirroring multiple log devices.
495.Pp
496Log devices can be added, replaced, attached, detached, imported and exported
497as part of the larger pool. Mirrored log devices can be removed by specifying
498the top-level mirror for the log.
499.Ss Cache devices
500Devices can be added to a storage pool as "cache devices." These devices
501provide an additional layer of caching between main memory and disk. For
read-heavy workloads, where the working set size is much larger than what can
be cached in main memory, using cache devices allows much more of this working
set to be served from low-latency media. Using cache devices provides the
greatest performance improvement for random read workloads of mostly static
506content.
507.Pp
508To create a pool with cache devices, specify a "cache"
509.No vdev
510with any number of devices. For example:
511.Bd -literal -offset 2n
512.Li # Ic zpool create pool da0 da1 cache da2 da3
513.Ed
514.Pp
515Cache devices cannot be mirrored or part of a
516.No raidz
517configuration. If a read
518error is encountered on a cache device, that read
519.Tn I/O
520is reissued to the original storage pool device, which might be part of a
521mirrored or
522.No raidz
523configuration.
524.Pp
525The content of the cache devices is considered volatile, as is the case with
526other system caches.
527.Ss Properties
528Each pool has several properties associated with it. Some properties are
529read-only statistics while others are configurable and change the behavior of
530the pool. The following are read-only properties:
531.Bl -tag -width "dedupratio"
532.It Sy alloc
533Amount of storage space within the pool that has been physically allocated.
534.It Sy capacity
535Percentage of pool space used. This property can also be referred to by its
536shortened column name, "cap".
537.It Sy comment
538A text string consisting of printable ASCII characters that will be stored
539such that it is available even if the pool becomes faulted.  An administrator
540can provide additional information about a pool using this property.
541.It Sy dedupratio
542The deduplication ratio specified for a pool, expressed as a multiplier.
543For example, a
544.Sy dedupratio
545value of 1.76 indicates that 1.76 units of data were stored but only 1 unit of disk space was actually consumed. See
546.Xr zfs 8
547for a description of the deduplication feature.
548.It Sy expandsize
549Amount of uninitialized space within the pool or device that can be used to
550increase the total capacity of the pool.
551Uninitialized space consists of
552any space on an EFI labeled vdev which has not been brought online
553.Pq i.e. zpool online -e .
554This space occurs when a LUN is dynamically expanded.
555.It Sy fragmentation
556The amount of fragmentation in the pool.
557.It Sy free
558Number of blocks within the pool that are not allocated.
559.It Sy freeing
560After a file system or snapshot is destroyed, the space it was using is
561returned to the pool asynchronously.
562.Sy freeing
563is the amount of space remaining to be reclaimed.
564Over time
565.Sy freeing
566will decrease while
567.Sy free
568increases.
569.It Sy guid
570A unique identifier for the pool.
571.It Sy health
572The current health of the pool. Health can be
573.Qq Sy ONLINE ,
574.Qq Sy DEGRADED ,
575.Qq Sy FAULTED ,
576.Qq Sy OFFLINE ,
577.Qq Sy REMOVED ,
578or
579.Qq Sy UNAVAIL .
580.It Sy size
581Total size of the storage pool.
582.It Sy unsupported@ Ns Ar feature_guid
583Information about unsupported features that are enabled on the pool.
584See
585.Xr zpool-features 7
586for details.
587.It Sy used
588Amount of storage space used within the pool.
589.El
590.Pp
591The space usage properties report actual physical space available to the
592storage pool. The physical space can be different from the total amount of
593space that any contained datasets can actually use. The amount of space used in
594a
595.No raidz
596configuration depends on the characteristics of the data being written.
597In addition,
598.Tn ZFS
599reserves some space for internal accounting that the
600.Xr zfs 8
601command takes into account, but the
602.Xr zpool 8
603command does not. For non-full pools of a reasonable size, these effects should
604be invisible. For small pools, or pools that are close to being completely
605full, these discrepancies may become more noticeable.
606.Pp
607The following property can be set at creation time and import time:
608.Bl -tag -width 2n
609.It Sy altroot
610Alternate root directory. If set, this directory is prepended to any mount
611points within the pool. This can be used when examining an unknown pool where
612the mount points cannot be trusted, or in an alternate boot environment, where
613the typical paths are not valid.
614.Sy altroot
615is not a persistent property. It is valid only while the system is up.
616Setting
617.Sy altroot
618defaults to using
619.Cm cachefile=none ,
620though this may be overridden using an explicit setting.
621.El
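.Pp
For example, an unknown pool named
.Em tank
(an illustrative name) could be examined under an alternate root of
.Pa /mnt
with:
.Bd -literal -offset 2n
.Li # Ic zpool import -R /mnt tank
.Ed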
622.Pp
623The following property can only be set at import time:
624.Bl -tag -width 2n
625.It Sy readonly Ns = Ns Cm on No | Cm off
626If set to
627.Cm on ,
628pool will be imported in read-only mode with the following restrictions:
629.Bl -bullet -offset 2n
630.It
631Synchronous data in the intent log will not be accessible
632.It
633Properties of the pool can not be changed
634.It
635Datasets of this pool can only be mounted read-only
636.It
To write to a read-only pool, an export and import of the pool is required.
638.El
639.Pp
640This property can also be referred to by its shortened column name,
641.Sy rdonly .
642.El
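.Pp
For example, a pool named
.Em tank
(an illustrative name) could be imported read-only with:
.Bd -literal -offset 2n
.Li # Ic zpool import -o readonly=on tank
.Ed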
643.Pp
644The following properties can be set at creation time and import time, and later
645changed with the
646.Ic zpool set
647command:
648.Bl -tag -width 2n
649.It Sy autoexpand Ns = Ns Cm on No | Cm off
650Controls automatic pool expansion when the underlying LUN is grown. If set to
651.Qq Cm on ,
652the pool will be resized according to the size of the expanded
653device. If the device is part of a mirror or
654.No raidz
655then all devices within that
656.No mirror/ Ns No raidz
657group must be expanded before the new space is made available to
658the pool. The default behavior is
659.Qq off .
660This property can also be referred to by its shortened column name,
661.Sy expand .
662.It Sy autoreplace Ns = Ns Cm on No | Cm off
663Controls automatic device replacement. If set to
664.Qq Cm off ,
665device replacement must be initiated by the administrator by using the
666.Qq Nm Cm replace
667command. If set to
668.Qq Cm on ,
669any new device, found in the same
670physical location as a device that previously belonged to the pool, is
671automatically formatted and replaced. The default behavior is
672.Qq Cm off .
673This property can also be referred to by its shortened column name, "replace".
674.It Sy bootfs Ns = Ns Ar pool Ns / Ns Ar dataset
675Identifies the default bootable dataset for the root pool. This property is
676expected to be set mainly by the installation and upgrade programs.
677.It Sy cachefile Ns = Ns Ar path No | Cm none
678Controls the location of where the pool configuration is cached. Discovering
679all pools on system startup requires a cached copy of the configuration data
680that is stored on the root file system. All pools in this cache are
681automatically imported when the system boots. Some environments, such as
682install and clustering, need to cache this information in a different location
683so that pools are not automatically imported. Setting this property caches the
684pool configuration in a different location that can later be imported with
685.Qq Nm Cm import Fl c .
686Setting it to the special value
687.Qq Cm none
688creates a temporary pool that is never cached, and the special value
689.Cm ''
690(empty string) uses the default location.
691.It Sy comment Ns = Ns Ar text
692A text string consisting of printable ASCII characters that will be stored
693such that it is available even if the pool becomes faulted.
694An administrator can provide additional information about a pool using this
695property.
696.It Sy dedupditto Ns = Ns Ar number
697Threshold for the number of block ditto copies. If the reference count for a
698deduplicated block increases above this number, a new ditto copy of this block
699is automatically stored. Default setting is
700.Cm 0
701which causes no ditto copies to be created for deduplicated blocks.
The minimum legal nonzero setting is 100.
703.It Sy delegation Ns = Ns Cm on No | Cm off
704Controls whether a non-privileged user is granted access based on the dataset
705permissions defined on the dataset. See
706.Xr zfs 8
707for more information on
708.Tn ZFS
709delegated administration.
710.It Sy failmode Ns = Ns Cm wait No | Cm continue No | Cm panic
711Controls the system behavior in the event of catastrophic pool failure. This
712condition is typically a result of a loss of connectivity to the underlying
713storage device(s) or a failure of all devices within the pool. The behavior of
714such an event is determined as follows:
715.Bl -tag -width indent
716.It Sy wait
717Blocks all
718.Tn I/O
719access until the device connectivity is recovered and the errors are cleared.
720This is the default behavior.
721.It Sy continue
722Returns
723.Em EIO
724to any new write
725.Tn I/O
726requests but allows reads to any of the remaining healthy devices. Any write
727requests that have yet to be committed to disk would be blocked.
728.It Sy panic
729Prints out a message to the console and generates a system crash dump.
730.El
731.It Sy feature@ Ns Ar feature_name Ns = Ns Sy enabled
732The value of this property is the current state of
733.Ar feature_name .
734The only valid value when setting this property is
735.Sy enabled
736which moves
737.Ar feature_name
738to the enabled state.
739See
740.Xr zpool-features 7
741for details on feature states.
742.It Sy listsnaps Ns = Ns Cm on No | Cm off
743Controls whether information about snapshots associated with this pool is
744output when
745.Qq Nm zfs Cm list
746is run without the
747.Fl t
748option. The default value is
749.Cm off .
750.It Sy version Ns = Ns Ar version
751The current on-disk version of the pool. This can be increased, but never
752decreased. The preferred method of updating pools is with the
753.Qq Nm Cm upgrade
754command, though this property can be used when a specific version is needed
755for backwards compatibility.
Once feature flags are enabled on a pool, this property will no longer have a
value.
758.El
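.Pp
For example, automatic expansion could be enabled on an existing pool named
.Em tank
(an illustrative name) with:
.Bd -literal -offset 2n
.Li # Ic zpool set autoexpand=on tank
.Ed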
759.Sh SUBCOMMANDS
760All subcommands that modify state are logged persistently to the pool in their
761original form.
762.Pp
763The
764.Nm
765command provides subcommands to create and destroy storage pools, add capacity
766to storage pools, and provide information about the storage pools. The following
767subcommands are supported:
768.Bl -tag -width 2n
769.It Xo
770.Nm
771.Op Fl \&?
772.Xc
773.Pp
774Displays a help message.
775.It Xo
776.Nm
777.Cm add
778.Op Fl fn
779.Ar pool vdev ...
780.Xc
781.Pp
782Adds the specified virtual devices to the given pool. The
783.No vdev
784specification is described in the
785.Qq Sx Virtual Devices
786section. The behavior of the
787.Fl f
788option, and the device checks performed are described in the
789.Qq Nm Cm create
790subcommand.
791.Bl -tag -width indent
792.It Fl f
Forces use of
.Ar vdev Ns s,
even if they appear in use or specify a conflicting replication level.
796Not all devices can be overridden in this manner.
797.It Fl n
Displays the configuration that would be used without actually adding the
.Ar vdev Ns s.
The actual pool addition can still fail due to insufficient privileges or
device sharing.
802.Pp
803Do not add a disk that is currently configured as a quorum device to a zpool.
804After a disk is in the pool, that disk can then be configured as a quorum
805device.
806.El
807.It Xo
808.Nm
809.Cm attach
810.Op Fl f
811.Ar pool device new_device
812.Xc
813.Pp
814Attaches
815.Ar new_device
816to an existing
817.Sy zpool
818device. The existing device cannot be part of a
819.No raidz
820configuration. If
821.Ar device
822is not currently part of a mirrored configuration,
823.Ar device
824automatically transforms into a two-way mirror of
825.Ar device No and Ar new_device .
826If
827.Ar device
828is part of a two-way mirror, attaching
829.Ar new_device
830creates a three-way mirror, and so on. In either case,
831.Ar new_device
832begins to resilver immediately.
833.Bl -tag -width indent
834.It Fl f
835Forces use of
836.Ar new_device ,
even if it appears to be in use. Not all devices can be overridden in this
838manner.
839.El
840.It Xo
841.Nm
842.Cm clear
843.Op Fl F Op Fl n
844.Ar pool
845.Op Ar device
846.Xc
847.Pp
848Clears device errors in a pool. If no arguments are specified, all device
849errors within the pool are cleared. If one or more devices is specified, only
850those errors associated with the specified device or devices are cleared.
851.Bl -tag -width indent
852.It Fl F
853Initiates recovery mode for an unopenable pool. Attempts to discard the last
854few transactions in the pool to return it to an openable state. Not all damaged
855pools can be recovered by using this option. If successful, the data from the
856discarded transactions is irretrievably lost.
857.It Fl n
858Used in combination with the
859.Fl F
860flag. Check whether discarding transactions would make the pool openable, but
861do not actually discard any transactions.
862.El
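.Pp
For example, the error counters of a single device (pool and device names are
illustrative) could be cleared with:
.Bd -literal -offset 2n
.Li # Ic zpool clear tank da0
.Ed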
863.It Xo
864.Nm
865.Cm create
866.Op Fl fnd
867.Op Fl o Ar property Ns = Ns Ar value
868.Ar ...
869.Op Fl O Ar file-system-property Ns = Ns Ar value
870.Ar ...
871.Op Fl m Ar mountpoint
872.Op Fl R Ar root
873.Ar pool vdev ...
874.Xc
875.Pp
876Creates a new storage pool containing the virtual devices specified on the
877command line. The pool name must begin with a letter, and can only contain
878alphanumeric characters as well as underscore ("_"), dash ("-"), and period
879("."). The pool names "mirror", "raidz", "spare" and "log" are reserved, as are
880names beginning with the pattern "c[0-9]". The
881.No vdev
882specification is described in the
883.Qq Sx Virtual Devices
884section.
885.Pp
886The command verifies that each device specified is accessible and not currently
887in use by another subsystem. There are some uses, such as being currently
mounted, or specified as the dedicated dump device, that prevent a device from
ever being used by
.Tn ZFS .
891Other uses, such as having a preexisting
892.Sy UFS
893file system, can be overridden with the
894.Fl f
895option.
896.Pp
897The command also checks that the replication strategy for the pool is
898consistent. An attempt to combine redundant and non-redundant storage in a
899single pool, or to mix disks and files, results in an error unless
900.Fl f
901is specified. The use of differently sized devices within a single
902.No raidz
903or mirror group is also flagged as an error unless
904.Fl f
905is specified.
906.Pp
907Unless the
908.Fl R
909option is specified, the default mount point is
910.Qq Pa /pool .
911The mount point must not exist or must be empty, or else the
912root dataset cannot be mounted. This can be overridden with the
913.Fl m
914option.
915.Pp
916By default all supported features are enabled on the new pool unless the
917.Fl d
918option is specified.
919.Bl -tag -width indent
920.It Fl f
921Forces use of
922.Ar vdev Ns s,
923even if they appear in use or specify a conflicting replication level.
924Not all devices can be overridden in this manner.
925.It Fl n
926Displays the configuration that would be used without actually creating the
927pool. The actual pool creation can still fail due to insufficient privileges or
928device sharing.
929.It Fl d
930Do not enable any features on the new pool.
931Individual features can be enabled by setting their corresponding properties
932to
933.Sy enabled
934with the
935.Fl o
936option.
937See
938.Xr zpool-features 7
939for details about feature properties.
940.It Xo
941.Fl o Ar property Ns = Ns Ar value
942.Op Fl o Ar property Ns = Ns Ar value
943.Ar ...
944.Xc
945Sets the given pool properties. See the
946.Qq Sx Properties
947section for a list of valid properties that can be set.
948.It Xo
949.Fl O
950.Ar file-system-property Ns = Ns Ar value
951.Op Fl O Ar file-system-property Ns = Ns Ar value
952.Ar ...
953.Xc
Sets the given file system properties in the root file system of the pool. See
the
.Qq Properties
section of
.Xr zfs 8
for a list of valid properties that can be set.
958.It Fl R Ar root
959Equivalent to
960.Qq Fl o Cm cachefile=none,altroot= Ns Pa root
961.It Fl m Ar mountpoint
962Sets the mount point for the root dataset. The default mount point is
963.Qq Pa /pool
964or
965.Qq Cm altroot Ns Pa /pool
966if
967.Sy altroot
968is specified. The mount point must be an absolute path,
969.Qq Cm legacy ,
970or
971.Qq Cm none .
972For more information on dataset mount points, see
973.Xr zfs 8 .
974.El
975.It Xo
976.Nm
977.Cm destroy
978.Op Fl f
979.Ar pool
980.Xc
981.Pp
982Destroys the given pool, freeing up any devices for other use. This command
983tries to unmount any active datasets before destroying the pool.
984.Bl -tag -width indent
985.It Fl f
986Forces any active datasets contained within the pool to be unmounted.
987.El
988.It Xo
989.Nm
990.Cm detach
991.Ar pool device
992.Xc
993.Pp
994Detaches
995.Ar device
996from a mirror. The operation is refused if there are no other valid replicas
997of the data.
998.It Xo
999.Nm
1000.Cm export
1001.Op Fl f
1002.Ar pool ...
1003.Xc
1004.Pp
1005Exports the given pools from the system. All devices are marked as exported,
1006but are still considered in use by other subsystems. The devices can be moved
1007between systems (even those of different endianness) and imported as long as a
1008sufficient number of devices are present.
1009.Pp
1010Before exporting the pool, all datasets within the pool are unmounted. A pool
1011can not be exported if it has a shared spare that is currently being used.
1012.Pp
1013For pools to be portable, you must give the
1014.Nm
1015command whole disks, not just slices, so that
1016.Tn ZFS
1017can label the disks with portable
1018.Sy EFI
1019labels. Otherwise, disk drivers on platforms of different endianness will not
1020recognize the disks.
1021.Bl -tag -width indent
1022.It Fl f
1023Forcefully unmount all datasets, using the
1024.Qq Nm unmount Fl f
1025command.
1026.Pp
1027This command will forcefully export the pool even if it has a shared spare that
1028is currently being used. This may lead to potential data corruption.
1029.El
1030.It Xo
1031.Nm
1032.Cm get
1033.Op Fl Hp
1034.Op Fl o Ar field Ns Op , Ns Ar ...
1035.Ar all | property Ns Op , Ns Ar ...
1036.Ar pool ...
1037.Xc
1038.Pp
1039Retrieves the given list of properties (or all properties if
1040.Qq Cm all
1041is used) for the specified storage pool(s). These properties are displayed with
1042the following fields:
1043.Bl -column -offset indent "property"
1044.It name Ta Name of storage pool
1045.It property Ta Property name
1046.It value Ta Property value
1047.It source Ta Property source, either 'default' or 'local'.
1048.El
1049.Pp
1050See the
1051.Qq Sx Properties
1052section for more information on the available pool properties.
.Bl -tag -width indent
.It Fl H
Scripted mode. Do not display headers, and separate fields by a single tab
instead of arbitrary space.
.It Fl p
Display numbers in parsable (exact) values.
.It Fl o Ar field
A comma-separated list of columns to display.
.Sy name Ns , Ns
.Sy property Ns , Ns
.Sy value Ns , Ns
.Sy source
is the default value.
.El
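.Pp
For example, selected properties of a pool named
.Em tank
(an illustrative name) could be retrieved in scripted form with:
.Bd -literal -offset 2n
.Li # Ic zpool get -H -o name,property,value capacity,free tank
.Ed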
1065.It Xo
1066.Nm
1067.Cm history
1068.Op Fl il
1069.Op Ar pool
1070.Ar ...
1071.Xc
1072.Pp
1073Displays the command history of the specified pools or all pools if no pool is
1074specified.
1075.Bl -tag -width indent
1076.It Fl i
1077Displays internally logged
1078.Tn ZFS
1079events in addition to user initiated events.
1080.It Fl l
1081Displays log records in long format, which in addition to standard format
1082includes, the user name, the hostname, and the zone in which the operation was
1083performed.
1084.El
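.Pp
For example, the long-format history of a pool named
.Em tank
(an illustrative name) could be displayed with:
.Bd -literal -offset 2n
.Li # Ic zpool history -l tank
.Ed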
1085.It Xo
1086.Nm
1087.Cm import
1088.Op Fl d Ar dir | Fl c Ar cachefile
1089.Op Fl D
1090.Xc
1091.Pp
1092Lists pools available to import. If the
1093.Fl d
1094option is not specified, this command searches for devices in
1095.Qq Pa /dev .
1096The
1097.Fl d
1098option can be specified multiple times, and all directories are searched. If
1099the device appears to be part of an exported pool, this command displays a
1100summary of the pool with the name of the pool, a numeric identifier, as well as
1101the
1102.No vdev
1103layout and current health of the device for each device or file.
1104Destroyed pools, pools that were previously destroyed with the
1105.Qq Nm Cm destroy
1106command, are not listed unless the
1107.Fl D
1108option is specified.
1109.Pp
1110The numeric identifier is unique, and can be used instead of the pool name when
1111multiple exported pools of the same name are available.
1112.Bl -tag -width indent
1113.It Fl c Ar cachefile
1114Reads configuration from the given
1115.Ar cachefile
1116that was created with the
1117.Qq Sy cachefile
1118pool property. This
1119.Ar cachefile
1120is used instead of searching for devices.
1121.It Fl d Ar dir
1122Searches for devices or files in
1123.Ar dir .
1124The
1125.Fl d
1126option can be specified multiple times.
1127.It Fl D
1128Lists destroyed pools only.
1129.El
1130.It Xo
1131.Nm
1132.Cm import
1133.Op Fl o Ar mntopts
1134.Op Fl o Ar property Ns = Ns Ar value
1135.Ar ...
1136.Op Fl d Ar dir | Fl c Ar cachefile
1137.Op Fl D
1138.Op Fl f
1139.Op Fl m
1140.Op Fl N
1141.Op Fl R Ar root
1142.Op Fl F Op Fl n
1143.Fl a
1144.Xc
1145.Pp
1146Imports all pools found in the search directories. Identical to the previous
1147command, except that all pools with a sufficient number of devices available
1148are imported. Destroyed pools, pools that were previously destroyed with the
1149.Qq Nm Cm destroy
1150command, will not be imported unless the
1151.Fl D
1152option is specified.
1153.Bl -tag -width indent
1154.It Fl o Ar mntopts
1155Comma-separated list of mount options to use when mounting datasets within the
1156pool. See
1157.Xr zfs 8
1158for a description of dataset properties and mount options.
1159.It Fl o Ar property Ns = Ns Ar value
1160Sets the specified property on the imported pool. See the
1161.Qq Sx Properties
1162section for more information on the available pool properties.
1163.It Fl c Ar cachefile
1164Reads configuration from the given
1165.Ar cachefile
1166that was created with the
1167.Qq Sy cachefile
1168pool property. This
1169.Ar cachefile
1170is used instead of searching for devices.
1171.It Fl d Ar dir
1172Searches for devices or files in
1173.Ar dir .
1174The
1175.Fl d
1176option can be specified multiple times. This option is incompatible with the
1177.Fl c
1178option.
1179.It Fl D
1180Imports destroyed pools only. The
1181.Fl f
1182option is also required.
1183.It Fl f
1184Forces import, even if the pool appears to be potentially active.
1185.It Fl m
1186Allows a pool to import when there is a missing log device. Recent transactions
1187can be lost because the log device will be discarded.
1188.It Fl N
1189Import the pool without mounting any file systems.
1190.It Fl R Ar root
1191Sets the
1192.Qq Sy cachefile
1193property to
1194.Qq Cm none
1195and the
1196.Qq Sy altroot
1197property to
1198.Qq Ar root
1199.It Fl F
1200Recovery mode for a non-importable pool. Attempt to return the pool to an
1201importable state by discarding the last few transactions. Not all damaged pools
1202can be recovered by using this option. If successful, the data from the
1203discarded transactions is irretrievably lost. This option is ignored if the
1204pool is importable or already imported.
1205.It Fl n
1206Used with the
1207.Fl F
1208recovery option. Determines whether a non-importable pool can be made
1209importable again, but does not actually perform the pool recovery. For more
1210details about pool recovery mode, see the
1211.Fl F
1212option, above.
1213.It Fl a
1214Searches for and imports all pools found.
1215.El
1216.It Xo
1217.Nm
1218.Cm import
1219.Op Fl o Ar mntopts
1220.Op Fl o Ar property Ns = Ns Ar value
1221.Ar ...
1222.Op Fl d Ar dir | Fl c Ar cachefile
1223.Op Fl D
1224.Op Fl f
1225.Op Fl m
1226.Op Fl N
1227.Op Fl R Ar root
1228.Op Fl F Op Fl n
1229.Ar pool | id
1230.Op Ar newpool
1231.Xc
1232.Pp
1233Imports a specific pool. A pool can be identified by its name or the numeric
1234identifier. If
1235.Ar newpool
1236is specified, the pool is imported using the name
1237.Ar newpool .
1238Otherwise, it is imported with the same name as its exported name.
1239.Pp
1240If a device is removed from a system without running
1241.Qq Nm Cm export
1242first, the device appears as potentially active. It cannot be determined if
1243this was a failed export, or whether the device is really in use from another
1244host. To import a pool in this state, the
1245.Fl f
1246option is required.
1247.Bl -tag -width indent
1248.It Fl o Ar mntopts
1249Comma-separated list of mount options to use when mounting datasets within the
1250pool. See
1251.Xr zfs 8
1252for a description of dataset properties and mount options.
1253.It Fl o Ar property Ns = Ns Ar value
1254Sets the specified property on the imported pool. See the
1255.Qq Sx Properties
1256section for more information on the available pool properties.
1257.It Fl c Ar cachefile
1258Reads configuration from the given
1259.Ar cachefile
1260that was created with the
1261.Qq Sy cachefile
1262pool property. This
1263.Ar cachefile
1264is used instead of searching for devices.
1265.It Fl d Ar dir
1266Searches for devices or files in
1267.Ar dir .
1268The
1269.Fl d
1270option can be specified multiple times. This option is incompatible with the
1271.Fl c
1272option.
1273.It Fl D
1274Imports destroyed pools only. The
1275.Fl f
1276option is also required.
1277.It Fl f
1278Forces import, even if the pool appears to be potentially active.
1279.It Fl m
1280Allows a pool to import when there is a missing log device. Recent transactions
1281can be lost because the log device will be discarded.
1282.It Fl N
1283Import the pool without mounting any file systems.
1284.It Fl R Ar root
1285Equivalent to
1286.Qq Fl o Cm cachefile=none,altroot= Ns Pa root
1287.It Fl F
1288Recovery mode for a non-importable pool. Attempt to return the pool to an
1289importable state by discarding the last few transactions. Not all damaged pools
1290can be recovered by using this option. If successful, the data from the
1291discarded transactions is irretrievably lost. This option is ignored if the
1292pool is importable or already imported.
1293.It Fl n
1294Used with the
1295.Fl F
1296recovery option. Determines whether a non-importable pool can be made
1297importable again, but does not actually perform the pool recovery. For more
1298details about pool recovery mode, see the
1299.Fl F
1300option, above.
1301.El
1302.It Xo
1303.Nm
1304.Cm iostat
1305.Op Fl T Cm d Ns | Ns Cm u
1306.Op Fl v
1307.Op Ar pool
1308.Ar ...
1309.Op Ar interval Op Ar count
1310.Xc
1311.Pp
1312Displays
1313.Tn I/O
1314statistics for the given pools. When given an interval, the statistics are
1315printed every
1316.Ar interval
1317seconds until
1318.Sy Ctrl-C
1319is pressed. If no
1320.Ar pools
are specified, statistics for every pool in the system are shown. If
1322.Ar count
1323is specified, the command exits after
1324.Ar count
1325reports are printed.
1326.Bl -tag -width indent
1327.It Fl T Cm d Ns | Ns Cm u
1328Print a timestamp.
1329.Pp
1330Use modifier
1331.Cm d
1332for standard date format. See
1333.Xr date 1 .
1334Use modifier
1335.Cm u
1336for unixtime
1337.Pq equals Qq Ic date +%s .
1338.It Fl v
1339Verbose statistics. Reports usage statistics for individual
1340.No vdev Ns s
1341within the pool, in addition to the pool-wide statistics.
1342.El
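.Pp
For example, per-vdev statistics for all pools could be printed every 5 seconds
until interrupted, with a timestamp on each report:
.Bd -literal -offset 2n
.Li # Ic zpool iostat -T d -v 5
.Ed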
1343.It Xo
1344.Nm
1345.Cm labelclear
1346.Op Fl f
1347.Ar device
1348.Xc
1349.Pp
1350Removes
1351.Tn ZFS
1352label information from the specified
1353.Ar device .
1354The
1355.Ar device
1356must not be part of an active pool configuration.
1357.Bl -tag -width indent
1358.It Fl f
1359Treat exported or foreign devices as inactive.
1360.El
1361.It Xo
1362.Nm
1363.Cm list
1364.Op Fl Hpv
1365.Op Fl o Ar property Ns Op , Ns Ar ...
1366.Op Fl T Cm d Ns | Ns Cm u
1367.Op Ar pool
1368.Ar ...
.Op Ar interval Op Ar count
1370.Xc
1371.Pp
1372Lists the given pools along with a health status and space usage. If no
1373.Ar pools
1374are specified, all pools in the system are listed.
1375.Pp
1376When given an interval, the output is printed every
1377.Ar interval
1378seconds until
1379.Sy Ctrl-C
1380is pressed. If
1381.Ar count
1382is specified, the command exits after
1383.Ar count
1384reports are printed.
1385.Bl -tag -width indent
1386.It Fl T Cm d Ns | Ns Cm u
1387Print a timestamp.
1388.Pp
1389Use modifier
1390.Cm d
1391for standard date format. See
1392.Xr date 1 .
1393Use modifier
1394.Cm u
1395for unixtime
1396.Pq equals Qq Ic date +%s .
1397.It Fl H
1398Scripted mode. Do not display headers, and separate fields by a single tab
1399instead of arbitrary space.
1400.It Fl p
1401Display numbers in parsable (exact) values.
1402.It Fl v
1403Verbose statistics. Reports usage statistics for individual
1404.Em vdevs
1405within
1406the pool, in addition to the pool-wide statistics.
1407.It Fl o Ar property Ns Op , Ns Ar ...
1408Comma-separated list of properties to display. See the
1409.Qq Sx Properties
1410section for a list of valid properties. The default list is
1411.Sy name ,
1412.Sy size ,
1413.Sy used ,
1414.Sy available ,
1415.Sy fragmentation ,
1416.Sy expandsize ,
.Sy capacity ,
1418.Sy health ,
1419.Sy altroot .
1431.El
1432.It Xo
1433.Nm
1434.Cm offline
1435.Op Fl t
1436.Ar pool device ...
1437.Xc
1438.Pp
1439Takes the specified physical device offline. While the
1440.Ar device
1441is offline, no attempt is made to read or write to the device.
1442.Bl -tag -width indent
1443.It Fl t
1444Temporary. Upon reboot, the specified physical device reverts to its previous
1445state.
1446.El
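.Pp
For example, a device could be taken offline only until the next reboot (pool
and device names are illustrative):
.Bd -literal -offset 2n
.Li # Ic zpool offline -t tank da1
.Ed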
1447.It Xo
1448.Nm
1449.Cm online
1450.Op Fl e
1451.Ar pool device ...
1452.Xc
1453.Pp
1454Brings the specified physical device online.
1455.Pp
1456This command is not applicable to spares or cache devices.
1457.Bl -tag -width indent
1458.It Fl e
1459Expand the device to use all available space. If the device is part of a mirror
1460or
1461.No raidz
1462then all devices must be expanded before the new space will become
1463available to the pool.
1464.El
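.Pp
For example, after the underlying LUN has been grown, the additional space of a
device could be made available (pool and device names are illustrative):
.Bd -literal -offset 2n
.Li # Ic zpool online -e tank da1
.Ed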
1465.It Xo
1466.Nm
1467.Cm reguid
1468.Ar pool
1469.Xc
1470.Pp
1471Generates a new unique identifier for the pool.  You must ensure that all
1472devices in this pool are online and healthy before performing this action.
1473.It Xo
1474.Nm
1475.Cm remove
1476.Ar pool device ...
1477.Xc
1478.Pp
1479Removes the specified device from the pool. This command currently only
1480supports removing hot spares, cache, and log devices. A mirrored log device can
1481be removed by specifying the top-level mirror for the log. Non-log devices that
1482are part of a mirrored configuration can be removed using the
1483.Qq Nm Cm detach
1484command. Non-redundant and
1485.No raidz
1486devices cannot be removed from a pool.
1487.It Xo
1488.Nm
1489.Cm reopen
1490.Ar pool
1491.Xc
1492.Pp
1493Reopen all the vdevs associated with the pool.
1494.It Xo
1495.Nm
1496.Cm replace
1497.Op Fl f
1498.Ar pool device
1499.Op Ar new_device
1500.Xc
1501.Pp
1502Replaces
1503.Ar old_device
1504with
1505.Ar new_device .
1506This is equivalent to attaching
1507.Ar new_device ,
1508waiting for it to resilver, and then detaching
1509.Ar old_device .
1510.Pp
1511The size of
1512.Ar new_device
1513must be greater than or equal to the minimum size
1514of all the devices in a mirror or
1515.No raidz
1516configuration.
1517.Pp
1518.Ar new_device
1519is required if the pool is not redundant. If
1520.Ar new_device
1521is not specified, it defaults to
1522.Ar old_device .
1523This form of replacement is useful after an existing disk has failed and has
1524been physically replaced. In this case, the new disk may have the same
1525.Pa /dev
1526path as the old device, even though it is actually a different disk.
1527.Tn ZFS
1528recognizes this.
1529.Bl -tag -width indent
1530.It Fl f
1531Forces use of
1532.Ar new_device ,
even if it appears to be in use. Not all devices can be overridden in this
1534manner.
1535.El
1536.It Xo
1537.Nm
1538.Cm scrub
1539.Op Fl s
1540.Ar pool ...
1541.Xc
1542.Pp
1543Begins a scrub. The scrub examines all data in the specified pools to verify
1544that it checksums correctly. For replicated (mirror or
1545.No raidz )
1546devices,
1547.Tn ZFS
1548automatically repairs any damage discovered during the scrub. The
1549.Qq Nm Cm status
1550command reports the progress of the scrub and summarizes the results of the
1551scrub upon completion.
1552.Pp
1553Scrubbing and resilvering are very similar operations. The difference is that
1554resilvering only examines data that
1555.Tn ZFS
1556knows to be out of date (for example, when attaching a new device to a mirror
1557or replacing an existing device), whereas scrubbing examines all data to
1558discover silent errors due to hardware faults or disk failure.
1559.Pp
1560Because scrubbing and resilvering are
1561.Tn I/O Ns -intensive
1562operations,
1563.Tn ZFS
1564only allows one at a time. If a scrub is already in progress, the
1565.Qq Nm Cm scrub
1566command returns an error. To start a new scrub, you have to stop the old scrub
1567with the
1568.Qq Nm Cm scrub Fl s
1569command first. If a resilver is in progress,
1570.Tn ZFS
1571does not allow a scrub to be started until the resilver completes.
1572.Bl -tag -width indent
1573.It Fl s
1574Stop scrubbing.
1575.El
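.Pp
For example, a scrub of a pool named
.Em tank
(an illustrative name) could be started, and later stopped, with:
.Bd -literal -offset 2n
.Li # Ic zpool scrub tank
.Li # Ic zpool scrub -s tank
.Ed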
1576.It Xo
1577.Nm
1578.Cm set
1579.Ar property Ns = Ns Ar value pool
1580.Xc
1581.Pp
1582Sets the given property on the specified pool. See the
1583.Qq Sx Properties
1584section for more information on what properties can be set and acceptable
1585values.
1586.It Xo
1587.Nm
1588.Cm split
1589.Op Fl n
1590.Op Fl R Ar altroot
1591.Op Fl o Ar mntopts
1592.Op Fl o Ar property Ns = Ns Ar value
1593.Ar pool newpool
1594.Op Ar device ...
1595.Xc
1596.Pp
1597Splits off one disk from each mirrored top-level
1598.No vdev
1599in a pool and creates a new pool from the split-off disks. The original pool
1600must be made up of one or more mirrors and must not be in the process of
1601resilvering. The
1602.Cm split
1603subcommand chooses the last device in each mirror
1604.No vdev
1605unless overridden by a device specification on the command line.
1606.Pp
1607When using a
1608.Ar device
1609argument,
1610.Cm split
1611includes the specified device(s) in a new pool and, should any devices remain
1612unspecified, assigns the last device in each mirror
1613.No vdev
1614to that pool, as it does normally. If you are uncertain about the outcome of a
1615.Cm split
1616command, use the
1617.Fl n
1618("dry-run") option to ensure your command will have the effect you intend.
1619.Bl -tag -width indent
1620.It Fl R Ar altroot
1621Automatically import the newly created pool after splitting, using the
1622specified
1623.Ar altroot
1624parameter for the new pool's alternate root. See the
1625.Sy altroot
1626description in the
1627.Qq Sx Properties
1628section, above.
1629.It Fl n
1630Displays the configuration that would be created without actually splitting the
1631pool. The actual pool split could still fail due to insufficient privileges or
1632device status.
1633.It Fl o Ar mntopts
1634Comma-separated list of mount options to use when mounting datasets within the
1635pool. See
1636.Xr zfs 8
1637for a description of dataset properties and mount options. Valid only in
1638conjunction with the
1639.Fl R
1640option.
1641.It Fl o Ar property Ns = Ns Ar value
1642Sets the specified property on the new pool. See the
1643.Qq Sx Properties
1644section, above, for more information on the available pool properties.
1645.El
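.Pp
For example, a pool of mirrors named
.Em tank
could be split into a new pool named
.Em tank2
(both names are illustrative) with:
.Bd -literal -offset 2n
.Li # Ic zpool split tank tank2
.Ed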
1646.It Xo
1647.Nm
1648.Cm status
1649.Op Fl vx
1650.Op Fl T Cm d Ns | Ns Cm u
1651.Op Ar pool
1652.Ar ...
1653.Op Ar interval Op Ar count
1654.Xc
1655.Pp
1656Displays the detailed health status for the given pools. If no
1657.Ar pool
1658is specified, then the status of each pool in the system is displayed. For more
1659information on pool and device health, see the
1660.Qq Sx Device Failure and Recovery
1661section.
1662.Pp
1663When given an interval, the output is printed every
1664.Ar interval
1665seconds until
1666.Sy Ctrl-C
1667is pressed. If
1668.Ar count
1669is specified, the command exits after
1670.Ar count
1671reports are printed.
1672.Pp
1673If a scrub or resilver is in progress, this command reports the percentage
1674done and the estimated time to completion. Both of these are only approximate,
1675because the amount of data in the pool and the other workloads on the system
1676can change.
1677.Bl -tag -width indent
1678.It Fl x
1679Only display status for pools that are exhibiting errors or are otherwise
1680unavailable.
1681Warnings about pools not using the latest on-disk format, having non-native
1682block size or disabled features will not be included.
1683.It Fl v
1684Displays verbose data error information, printing out a complete list of all
1685data errors since the last complete pool scrub.
1686.It Fl T Cm d Ns | Ns Cm u
1687Print a timestamp.
1688.Pp
1689Use modifier
1690.Cm d
1691for standard date format. See
1692.Xr date 1 .
1693Use modifier
1694.Cm u
1695for unixtime
1696.Pq equals Qq Ic date +%s .
1697.El
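.Pp
For example, a brief health summary restricted to pools exhibiting problems
could be requested with:
.Bd -literal -offset 2n
.Li # Ic zpool status -x
.Ed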
1698.It Xo
1699.Nm
1700.Cm upgrade
1701.Op Fl v
1702.Xc
1703.Pp
1704Displays pools which do not have all supported features enabled and pools
1705formatted using a legacy
1706.Tn ZFS
1707version number.
1708These pools can continue to be used, but some features may not be available.
1709Use
1710.Nm Cm upgrade Fl a
1711to enable all features on all pools.
1712.Bl -tag -width indent
1713.It Fl v
1714Displays legacy
1715.Tn ZFS
1716versions supported by the current software.
1717See
1718.Xr zpool-features 7
1719for a description of feature flags features supported by the current software.
1720.El
1721.It Xo
1722.Nm
1723.Cm upgrade
1724.Op Fl V Ar version
1725.Fl a | Ar pool ...
1726.Xc
1727.Pp
1728Enables all supported features on the given pool.
1729Once this is done, the pool will no longer be accessible on systems that do
1730not support feature flags.
1731See
1732.Xr zpool-features 7
1733for details on compatibility with systems that support feature flags, but do
1734not support all features enabled on the pool.
1735.Bl -tag -width indent
1736.It Fl a
1737Enables all supported features on all pools.
1738.It Fl V Ar version
1739Upgrade to the specified legacy version. If the
1740.Fl V
1741flag is specified, no features will be enabled on the pool.
1742This option can only be used to increase version number up to the last
1743supported legacy version number.
1744.El
1745.El
1746.Sh EXIT STATUS
1747The following exit values are returned:
1748.Bl -tag -offset 2n -width 2n
1749.It 0
1750Successful completion.
1751.It 1
1752An error occurred.
1753.It 2
1754Invalid command line options were specified.
1755.El
1756.Sh EXAMPLES
1757.Bl -tag -width 0n
1758.It Sy Example 1 No Creating a RAID-Z Storage Pool
1759.Pp
1760The following command creates a pool with a single
1761.No raidz
1762root
1763.No vdev
1764that consists of six disks.
1765.Bd -literal -offset 2n
1766.Li # Ic zpool create tank raidz da0 da1 da2 da3 da4 da5
1767.Ed
1768.It Sy Example 2 No Creating a Mirrored Storage Pool
1769.Pp
1770The following command creates a pool with two mirrors, where each mirror
1771contains two disks.
1772.Bd -literal -offset 2n
1773.Li # Ic zpool create tank mirror da0 da1 mirror da2 da3
1774.Ed
1775.It Sy Example 3 No Creating a Tn ZFS No Storage Pool by Using Partitions
1776.Pp
1777The following command creates an unmirrored pool using two GPT partitions.
1778.Bd -literal -offset 2n
1779.Li # Ic zpool create tank da0p3 da1p3
1780.Ed
1781.It Sy Example 4 No Creating a Tn ZFS No Storage Pool by Using Files
1782.Pp
1783The following command creates an unmirrored pool using files. While not
1784recommended, a pool based on files can be useful for experimental purposes.
1785.Bd -literal -offset 2n
1786.Li # Ic zpool create tank /path/to/file/a /path/to/file/b
1787.Ed
1788.It Sy Example 5 No Adding a Mirror to a Tn ZFS No Storage Pool
1789.Pp
1790The following command adds two mirrored disks to the pool
1791.Em tank ,
1792assuming the pool is already made up of two-way mirrors. The additional space
1793is immediately available to any datasets within the pool.
1794.Bd -literal -offset 2n
1795.Li # Ic zpool add tank mirror da2 da3
1796.Ed
1797.It Sy Example 6 No Listing Available Tn ZFS No Storage Pools
1798.Pp
1799The following command lists all available pools on the system.
1800.Bd -literal -offset 2n
1801.Li # Ic zpool list
1802NAME   SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1803pool  2.70T   473G  2.24T    33%         -    17%  1.00x  ONLINE  -
1804test  1.98G  89.5K  1.98G    48%         -     0%  1.00x  ONLINE  -
1805.Ed
1806.It Sy Example 7 No Listing All Properties for a Pool
1807.Pp
1808The following command lists all the properties for a pool.
1809.Bd -literal -offset 2n
1810.Li # Ic zpool get all pool
1811pool  size           2.70T       -
1812pool  capacity       17%         -
1813pool  altroot        -           default
1814pool  health         ONLINE      -
1815pool  guid           2501120270416322443  default
1816pool  version        28          default
1817pool  bootfs         pool/root   local
1818pool  delegation     on          default
1819pool  autoreplace    off         default
1820pool  cachefile      -           default
1821pool  failmode       wait        default
1822pool  listsnapshots  off         default
1823pool  autoexpand     off         default
1824pool  dedupditto     0           default
1825pool  dedupratio     1.00x       -
1826pool  free           2.24T       -
1827pool  allocated      473G        -
1828pool  readonly       off         -
1829.Ed
1830.It Sy Example 8 No Destroying a Tn ZFS No Storage Pool
1831.Pp
1832The following command destroys the pool
1833.Qq Em tank
1834and any datasets contained within.
1835.Bd -literal -offset 2n
1836.Li # Ic zpool destroy -f tank
1837.Ed
1838.It Sy Example 9 No Exporting a Tn ZFS No Storage Pool
1839.Pp
1840The following command exports the devices in pool
1841.Em tank
1842so that they can be relocated or later imported.
1843.Bd -literal -offset 2n
1844.Li # Ic zpool export tank
1845.Ed
1846.It Sy Example 10 No Importing a Tn ZFS No Storage Pool
1847.Pp
1848The following command displays available pools, and then imports the pool
1849.Qq Em tank
1850for use on the system.
1851.Pp
1852The results from this command are similar to the following:
1853.Bd -literal -offset 2n
1854.Li # Ic zpool import
1855
1856  pool: tank
1857    id: 15451357997522795478
1858 state: ONLINE
1859action: The pool can be imported using its name or numeric identifier.
1860config:
1861
1862        tank        ONLINE
1863          mirror    ONLINE
1864               da0  ONLINE
1865               da1  ONLINE
1866.Ed
1867.It Xo
1868.Sy Example 11
1869Upgrading All
1870.Tn ZFS
1871Storage Pools to the Current Version
1872.Xc
1873.Pp
1874The following command upgrades all
1875.Tn ZFS
storage pools to the current version of
1877the software.
1878.Bd -literal -offset 2n
1879.Li # Ic zpool upgrade -a
1880This system is currently running ZFS pool version 28.
1881.Ed
1882.It Sy Example 12 No Managing Hot Spares
1883.Pp
1884The following command creates a new pool with an available hot spare:
1885.Bd -literal -offset 2n
1886.Li # Ic zpool create tank mirror da0 da1 spare da2
1887.Ed
1888.Pp
1889If one of the disks were to fail, the pool would be reduced to the degraded
1890state. The failed device can be replaced using the following command:
1891.Bd -literal -offset 2n
1892.Li # Ic zpool replace tank da0 da2
1893.Ed
1894.Pp
1895Once the data has been resilvered, the spare is automatically removed and is
made available should another device fail. The hot spare can be permanently
1897removed from the pool using the following command:
1898.Bd -literal -offset 2n
1899.Li # Ic zpool remove tank da2
1900.Ed
1901.It Xo
1902.Sy Example 13
1903Creating a
1904.Tn ZFS
1905Pool with Mirrored Separate Intent Logs
1906.Xc
1907.Pp
1908The following command creates a
1909.Tn ZFS
1910storage pool consisting of two, two-way
1911mirrors and mirrored log devices:
1912.Bd -literal -offset 2n
1913.Li # Ic zpool create pool mirror da0 da1 mirror da2 da3 log mirror da4 da5
1914.Ed
1915.It Sy Example 14 No Adding Cache Devices to a Tn ZFS No Pool
1916.Pp
1917The following command adds two disks for use as cache devices to a
1918.Tn ZFS
1919storage pool:
1920.Bd -literal -offset 2n
1921.Li # Ic zpool add pool cache da2 da3
1922.Ed
1923.Pp
1924Once added, the cache devices gradually fill with content from main memory.
1925Depending on the size of your cache devices, it could take over an hour for
1926them to fill. Capacity and reads can be monitored using the
1927.Cm iostat
1928subcommand as follows:
1929.Bd -literal -offset 2n
1930.Li # Ic zpool iostat -v pool 5
1931.Ed
1932.It Xo
1933.Sy Example 15
1934Displaying expanded space on a device
1935.Xc
1936.Pp
The following command displays the detailed information for the
.Em data
pool.
This pool consists of a single
.Em raidz
vdev where one of its
devices increased its capacity by 10GB.
In this example, the pool will not
be able to utilize this extra capacity until all the devices under the
.Em raidz
vdev have been expanded.
1948.Bd -literal -offset 2n
1949.Li # Ic zpool list -v data
1950NAME       SIZE  ALLOC   FREE   FRAG  EXPANDSZ    CAP  DEDUP  HEALTH  ALTROOT
1951data      23.9G  14.6G  9.30G    48%         -    61%  1.00x  ONLINE  -
1952  raidz1  23.9G  14.6G  9.30G    48%         -
1953    ada0      -      -      -      -         -
1954    ada1      -      -      -      -       10G
1955    ada2      -      -      -      -         -
1956.Ed
1957.It Xo
1958.Sy Example 16
1959Removing a Mirrored Log Device
1960.Xc
1961.Pp
1962The following command removes the mirrored log device
1963.Em mirror-2 .
1964.Pp
1965Given this configuration:
1966.Bd -literal -offset 2n
1967   pool: tank
1968  state: ONLINE
1969  scrub: none requested
1970 config:
1971
1972         NAME        STATE     READ WRITE CKSUM
1973         tank        ONLINE       0     0     0
1974           mirror-0  ONLINE       0     0     0
1975                da0  ONLINE       0     0     0
1976                da1  ONLINE       0     0     0
1977           mirror-1  ONLINE       0     0     0
1978                da2  ONLINE       0     0     0
1979                da3  ONLINE       0     0     0
1980         logs
1981           mirror-2  ONLINE       0     0     0
1982                da4  ONLINE       0     0     0
1983                da5  ONLINE       0     0     0
1984.Ed
1985.Pp
1986The command to remove the mirrored log
1987.Em mirror-2
1988is:
1989.Bd -literal -offset 2n
1990.Li # Ic zpool remove tank mirror-2
1991.Ed
1992.It Xo
1993.Sy Example 17
1994Recovering a Faulted
1995.Tn ZFS
1996Pool
1997.Xc
1998.Pp
1999If a pool is faulted but recoverable, a message indicating this state is
2000provided by
2001.Qq Nm Cm status
2002if the pool was cached (see the
2003.Fl c Ar cachefile
2004argument above), or as part of the error output from a failed
2005.Qq Nm Cm import
2006of the pool.
2007.Pp
2008Recover a cached pool with the
2009.Qq Nm Cm clear
2010command:
2011.Bd -literal -offset 2n
2012.Li # Ic zpool clear -F data
2013Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
2014Discarded approximately 29 seconds of transactions.
2015.Ed
2016.Pp
2017If the pool configuration was not cached, use
2018.Qq Nm Cm import
2019with the recovery mode flag:
2020.Bd -literal -offset 2n
2021.Li # Ic zpool import -F data
2022Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
2023Discarded approximately 29 seconds of transactions.
2024.Ed
2025.El
2026.Sh SEE ALSO
2027.Xr zpool-features 7 ,
2028.Xr zfs 8 ,
2029.Xr zfsd 8
2030.Sh AUTHORS
2031This manual page is a
2032.Xr mdoc 7
2033reimplementation of the
2034.Tn OpenSolaris
2035manual page
2036.Em zpool(1M) ,
2037modified and customized for
2038.Fx
2039and licensed under the Common Development and Distribution License
2040.Pq Tn CDDL .
2041.Pp
2042The
2043.Xr mdoc 7
2044implementation of this manual page was initially written by
2045.An Martin Matuska Aq mm@FreeBSD.org .