zpool [-?]
zpool add [-fn] [-o property=value] pool vdev ...
zpool attach [-f] [-o property=value] pool device new_device
zpool clear [-F [-n]] pool [device]
zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ... [-m mountpoint] [-R root] pool vdev ...
zpool destroy [-f] pool
zpool detach pool device
zpool export [-f] pool ...
zpool get "all" | property[,...] pool ...
zpool history [-il] [pool] ...
zpool import [-d dir | -c cachefile] [-D]
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] [-F [-n]] -a
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] [-F [-n]] pool | id [newpool]
zpool iostat [-T u | d] [-v] [pool] ... [interval[count]]
zpool list [-H] [-o property[,...]] [pool] ...
zpool offline [-t] pool device ...
zpool online [-e] pool device ...
zpool remove pool device ...
zpool replace [-f] pool device [new_device]
zpool scrub [-s] pool ...
zpool set property=value pool
zpool split [-R altroot] [-n] [-o mntopts] [-o property=value] pool newpool [device ...]
zpool status [-xv] [pool] ...
zpool upgrade
zpool upgrade -v
zpool upgrade [-V version] -a | pool ...
The zpool command configures ZFS storage pools. A storage pool is a collection of devices that provides physical storage and data replication for ZFS datasets.
All datasets within a storage pool share the same space. See zfs(1M) for information on managing datasets.
A "virtual device" describes a single device or a collection of devices organized according to certain performance and fault characteristics. The following virtual devices are supported:
disk
file
mirror
raidz
raidz1
raidz2
raidz3
A raidz group can have single, double, or triple parity, meaning that the raidz group can sustain one, two, or three failures, respectively, without losing any data. The raidz1 vdev type specifies a single-parity raidz group; the raidz2 vdev type specifies a double-parity raidz group; and the raidz3 vdev type specifies a triple-parity raidz group. The raidz vdev type is an alias for raidz1.
A raidz group with N disks of size X with P parity disks can hold approximately (N-P)*X bytes and can withstand P devices failing before data integrity is compromised. The minimum number of devices in a raidz group is one more than the number of parity disks. The recommended number is between 3 and 9 to help increase performance. A worked example follows the list of virtual device types below.
spare
log
cache
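For example, a raidz2 group of six equally sized disks (N=6, P=2) holds approximately (6-2)*X = 4*X bytes of usable space and can survive any two of its disks failing. Assuming illustrative device names, such a group could be created with:
# zpool create tank raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0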
Virtual devices cannot be nested, so a mirror or raidz virtual device can only contain files or disks. Mirrors of mirrors (or other combinations) are not allowed.
A pool can have any number of virtual devices at the top of the configuration (known as "root vdevs"). Data is dynamically distributed across all top-level devices to balance data among devices. As new virtual devices are added, ZFS automatically places data on the newly available devices.
Virtual devices are specified one at a time on the command line, separated by whitespace. The keywords "mirror" and "raidz" are used to distinguish where a group ends and another begins. For example, the following creates two root vdevs, each a mirror of two disks:
# zpool create mypool mirror c0t0d0 c0t1d0 mirror c1t0d0 c1t1d0
ZFS supports a rich set of mechanisms for handling device failure and data corruption. All metadata and data is checksummed, and ZFS automatically repairs bad data from a good copy when corruption is detected.
In order to take advantage of these features, a pool must make use of some form of redundancy, using either mirrored or raidz groups. While ZFS supports running in a non-redundant configuration, where each root vdev is simply a disk or file, this is strongly discouraged. A single case of bit corruption can render some or all of your data unavailable.
A pool's health status is described by one of three states: online, degraded, or faulted. An online pool has all devices operating normally. A degraded pool is one in which one or more devices have failed, but the data is still available due to a redundant configuration. A faulted pool has corrupted metadata, or one or more faulted devices, and insufficient replicas to continue functioning.
The health of a top-level vdev, such as a mirror or raidz device, is potentially impacted by the state of its associated vdevs, or component devices. A top-level vdev or component device is in one of the following states:
DEGRADED
One or more component devices is in the degraded or faulted state, but sufficient replicas exist to continue functioning.
FAULTED
One or more component devices is in the faulted state, and insufficient replicas exist to continue functioning.
OFFLINE
ONLINE
REMOVED
UNAVAIL
If a device is removed and later re-attached to the system, ZFS attempts to put the device online automatically. Device attach detection is hardware-dependent and might not be supported on all platforms.
ZFS allows devices to be associated with pools as "hot spares". These devices are not actively used in the pool, but when an active device fails, it is automatically replaced by a hot spare. To create a pool with hot spares, specify a "spare" vdev with any number of devices. For example,
# zpool create pool mirror c0d0 c1d0 spare c2d0 c3d0
Spares can be shared across multiple pools, and can be added with the "zpool add" command and removed with the "zpool remove" command. Once a spare replacement is initiated, a new "spare" vdev is created within the configuration that will remain there until the original device is replaced. At this point, the hot spare becomes available again if another device fails.
If a pool has a shared spare that is currently being used, the pool cannot be exported, since other pools may use this shared spare, which may lead to potential data corruption.
An in-progress spare replacement can be cancelled by detaching the hot spare. If the original faulted device is detached, then the hot spare assumes its place in the configuration, and is removed from the spare list of all active pools.
Spares cannot replace log devices.
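For example, assuming the hot spare c2d0 is currently standing in for a failed device in the pool "tank" (pool and device names are illustrative), the in-progress spare replacement can be cancelled by detaching the spare:
# zpool detach tank c2d0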
The ZFS Intent Log (ZIL) satisfies POSIX requirements for synchronous transactions. For instance, databases often require their transactions to be on stable storage devices when returning from a system call. NFS and other applications can also use fsync() to ensure data stability. By default, the intent log is allocated from blocks within the main pool. However, it might be possible to get better performance using separate intent log devices such as NVRAM or a dedicated disk. For example:
# zpool create pool c0d0 c1d0 log c2d0
Multiple log devices can also be specified, and they can be mirrored. See the EXAMPLES section for an example of mirroring multiple log devices.
Log devices can be added, replaced, attached, detached, and imported and exported as part of the larger pool. Mirrored log devices can be removed by specifying the top-level mirror for the log.
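For example, assuming illustrative device names, a mirrored log device can be added to an existing pool with:
# zpool add pool log mirror c4d0 c5d0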
Devices can be added to a storage pool as "cache devices." These devices provide an additional layer of caching between main memory and disk. For read-heavy workloads, where the working set size is much larger than what can be cached in main memory, using cache devices allows much more of this working set to be served from low-latency media. Using cache devices provides the greatest performance improvement for random read workloads of mostly static content.
To create a pool with cache devices, specify a "cache" vdev with any number of devices. For example:
# zpool create pool c0d0 c1d0 cache c2d0 c3d0
Cache devices cannot be mirrored or part of a raidz configuration. If a read error is encountered on a cache device, that read I/O is reissued to the original storage pool device, which might be part of a mirrored or raidz configuration.
The content of the cache devices is considered volatile, as is the case with other system caches.
Each imported pool has an associated process, named zpool-poolname. The threads in this process are the pool's I/O processing threads, which handle the compression, checksumming, and other tasks for all I/O associated with the pool. This process exists to provide visibility into the CPU utilization of the system's storage pools. The existence of this process is an unstable interface.
Each pool has several properties associated with it. Some properties are read-only statistics while others are configurable and change the behavior of the pool. The following are read-only properties:
alloc
capacity
dedupratio
The deduplication ratio for the pool, expressed as a multiplier. Deduplication can be turned on for a dataset by entering:
# zfs set dedup=on dataset
The default value is off.
dedupratio is expressed as a single decimal number. For example, a dedupratio value of 1.76 indicates that 1.76 units of data were stored but only 1 unit of disk space was actually consumed.
free
guid
health
size
These space usage properties report actual physical space available to the storage pool. The physical space can be different from the total amount of space that any contained datasets can actually use. The amount of space used in a raidz configuration depends on the characteristics of the data being written. In addition, ZFS reserves some space for internal accounting that the zfs(1M) command takes into account, but the zpool command does not. For non-full pools of a reasonable size, these effects should be invisible. For small pools, or pools that are close to being completely full, these discrepancies may become more noticeable.
The following property can be set at creation time:
ashift
For optimal performance, the pool sector size should be greater than or equal to the sector size of the underlying disks. Because the property cannot be changed after pool creation, if you ever want to use drives that report 4 KiB sectors in a given pool, you must set ashift=12 at pool creation time.
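For example, a pool intended for drives that report 4 KiB sectors could be created with ashift=12 (2^12 = 4096 bytes) set at creation time; the device names below are illustrative:
# zpool create -o ashift=12 tank mirror c0t0d0 c0t1d0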
The following property can be set at creation time and import time:
altroot
The following properties can be set at creation time and import time, and later changed with the zpool set command:
autoexpand=on | off
autoreplace=on | off
bootfs=pool/dataset
cachefile=path | none
Multiple pools can share the same cache file. Because the kernel destroys and recreates this file when pools are added and removed, care should be taken when attempting to access this file. When the last pool using a cachefile is exported or destroyed, the file is removed.
delegation=on | off
failmode=wait | continue | panic
wait
continue
panic
listsnaps=on | off
version=version
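For example, assuming a pool named "tank", a changeable property such as autoexpand can be turned on after creation with the "zpool set" command:
# zpool set autoexpand=on tank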
All subcommands that modify state are logged persistently to the pool in their original form.
The zpool command provides subcommands to create and destroy storage pools, add capacity to storage pools, and provide information about the storage pools. The following subcommands are supported:
zpool -?
zpool add [-fn] [-o property=value] pool vdev ...
-f
-n
-o property=value
Do not add a disk that is currently configured as a quorum device to a zpool. After a disk is in the pool, that disk can then be configured as a quorum device.
zpool attach [-f] [-o property=value] pool device new_device
-f
-o property=value
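For example, assuming a pool "tank" whose only vdev is the single disk c0t0d0 (pool and device names are illustrative), attaching a second disk converts that vdev into a two-way mirror:
# zpool attach tank c0t0d0 c0t1d0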
zpool clear [-F [-n]] pool [device] ...
-F
-n
zpool create [-fn] [-o property=value] ... [-O file-system-property=value] ... [-m mountpoint] [-R root] pool vdev ...
The command verifies that each device specified is accessible and not currently in use by another subsystem. Some uses, such as being currently mounted or being specified as the dedicated dump device, prevent a device from ever being used by ZFS. Other uses, such as having a preexisting UFS file system, can be overridden with the -f option.
The command also checks that the replication strategy for the pool is consistent. An attempt to combine redundant and non-redundant storage in a single pool, or to mix disks and files, results in an error unless -f is specified. The use of differently sized devices within a single raidz or mirror group is also flagged as an error unless -f is specified.
Unless the -R option is specified, the default mount point is "/pool". The mount point must not exist or must be empty, or else the root dataset cannot be mounted. This can be overridden with the -m option.
-f
-n
-o property=value [-o property=value] ...
-O file-system-property=value
[-O file-system-property=value] ...
-R root
-m mountpoint
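For example, the -n ("dry run") option displays the configuration that would result without actually creating the pool; the device names below are illustrative:
# zpool create -n tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0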
zpool destroy [-f] pool
-f
zpool detach pool device
zpool export [-f] pool ...
Before exporting the pool, all datasets within the pool are unmounted. A pool cannot be exported if it has a shared spare that is currently being used.
For pools to be portable, you must give the zpool command whole disks, not just slices, so that ZFS can label the disks with portable EFI labels. Otherwise, disk drivers on platforms of different endianness will not recognize the disks.
-f
This command will forcefully export the pool even if it has a shared spare that is currently being used. This may lead to potential data corruption.
zpool get "all" | property[,...] pool ...
name        Name of storage pool
property    Property name
value       Property value
source      Property source, either 'default' or 'local'
See the "Properties" section for more information on the available pool properties.
zpool history [-il] [pool] ...
-i
-l
zpool import [-d dir | -c cachefile] [-D]
The numeric identifier is unique, and can be used instead of the pool name when multiple exported pools of the same name are available.
-c cachefile
-d dir
-D
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] [-F [-n]] -a
-o mntopts
-o property=value
-c cachefile
-d dir
-D
-f
-F
-a
-R root
-n
zpool import [-o mntopts] [-o property=value] ... [-d dir | -c cachefile] [-D] [-f] [-R root] [-F [-n]] pool | id [newpool]
If a device is removed from a system without running "zpool export" first, the device appears as potentially active. It cannot be determined if this was a failed export, or whether the device is really in use from another host. To import a pool in this state, the -f option is required.
-o mntopts
-o property=value
-c cachefile
-d dir
-D
-f
-F
-R root
-n
zpool iostat [-T u | d] [-v] [pool] ... [interval[count]]
-T u | d
Specify u for a printed representation of the internal representation of time. See time(2). Specify d for standard date format. See date(1).
-v
zpool list [-H] [-o props[,...]] [pool] ...
-H
-o props
zpool offline [-t] pool device ...
This command is not applicable to spares or cache devices.
-t
zpool online [-e] pool device...
This command is not applicable to spares or cache devices.
-e
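For example, a device can be taken offline temporarily with -t (it reverts to its previous state after a reboot) and later brought back online; pool and device names are illustrative:
# zpool offline -t tank c0t2d0
# zpool online tank c0t2d0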
zpool remove pool device ...
zpool replace [-f] pool old_device [new_device]
The size of new_device must be greater than or equal to the minimum size of all the devices in a mirror or raidz configuration.
new_device is required if the pool is not redundant. If new_device is not specified, it defaults to old_device. This form of replacement is useful after an existing disk has failed and has been physically replaced. In this case, the new disk may have the same /dev/dsk path as the old device, even though it is actually a different disk. ZFS recognizes this.
-f
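For example, after a failed disk has been physically replaced by a new disk at the same /dev/dsk path, omitting new_device tells ZFS to rebuild onto the replacement (pool and device names are illustrative):
# zpool replace tank c0t0d0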
zpool scrub [-s] pool ...
Scrubbing and resilvering are very similar operations. The difference is that resilvering only examines data that ZFS knows to be out of date (for example, when attaching a new device to a mirror or replacing an existing device), whereas scrubbing examines all data to discover silent errors due to hardware faults or disk failure.
Because scrubbing and resilvering are I/O-intensive operations, ZFS only allows one at a time. If a scrub is already in progress, the "zpool scrub" command terminates it and starts a new scrub. If a resilver is in progress, ZFS does not allow a scrub to be started until the resilver completes.
-s
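For example, assuming a pool named "tank", a scrub can be started, and an in-progress scrub stopped with -s, as follows:
# zpool scrub tank
# zpool scrub -s tank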
zpool set property=value pool
zpool split [-R altroot] [-n] [-o mntopts] [-o property=value] pool newpool [device ...]
When device arguments are given, split includes the specified device(s) in the new pool; for any mirror vdev whose device is left unspecified, the last device in that vdev is assigned to the new pool, as it would be by default. If you are uncertain about the outcome of a split command, use the -n ("dry run") option to verify that the command will have the effect you intend.
-R altroot
-n
-o mntopts
-o property=value
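For example, assuming "tank" consists of two-way mirrors (pool names are illustrative), the following previews and then performs a split that detaches the last device of each mirror and uses those devices to create the new pool "tank2":
# zpool split -n tank tank2
# zpool split tank tank2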
zpool status [-xv] [pool] ...
If a scrub or resilver is in progress, this command reports the percentage done and the estimated time to completion. Both of these are only approximate, because the amount of data in the pool and the other workloads on the system can change.
-x
-v
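For example, the -x option limits the output to pools that are exhibiting errors or are otherwise unhealthy, which provides a quick check across all pools on a system:
# zpool status -x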
zpool upgrade
zpool upgrade -v
zpool upgrade [-V version] -a | pool ...
-a
-V version
Example 1 Creating a RAID-Z Storage Pool
The following command creates a pool with a single raidz root vdev that consists of six disks.
# zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0
Example 2 Creating a Mirrored Storage Pool
The following command creates a pool with two mirrors, where each mirror contains two disks.
# zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0
Example 3 Creating a ZFS Storage Pool by Using Slices
The following command creates an unmirrored pool using two disk slices.
# zpool create tank /dev/dsk/c0t0d0s1 c0t1d0s4
Example 4 Creating a ZFS Storage Pool by Using Files
The following command creates an unmirrored pool using files. While not recommended, a pool based on files can be useful for experimental purposes.
# zpool create tank /path/to/file/a /path/to/file/b
Example 5 Adding a Mirror to a ZFS Storage Pool
The following command adds two mirrored disks to the pool "tank", assuming the pool is already made up of two-way mirrors. The additional space is immediately available to any datasets within the pool.
# zpool add tank mirror c1t0d0 c1t1d0
Example 6 Listing Available ZFS Storage Pools
The following command lists all available pools on the system.
# zpool list
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
pool    136G   109M   136G     0%  3.00x  ONLINE  -
rpool  67.5G  12.6G  54.9G    18%  1.01x  ONLINE  -
Example 7 Listing All Properties for a Pool
The following command lists all the properties for a pool.
% zpool get all pool
NAME  PROPERTY       VALUE                 SOURCE
pool  size           136G                  -
pool  capacity       0%                    -
pool  altroot        -                     default
pool  health         ONLINE                -
pool  guid           15697759092019394988  default
pool  version        21                    default
pool  bootfs         -                     default
pool  delegation     on                    default
pool  autoreplace    off                   default
pool  cachefile      -                     default
pool  failmode       wait                  default
pool  listsnapshots  off                   default
pool  autoexpand     off                   default
pool  dedupratio     3.00x                 -
pool  free           136G                  -
pool  allocated      109M                  -
Example 8 Destroying a ZFS Storage Pool
The following command destroys the pool "tank" and any datasets contained within.
# zpool destroy -f tank
Example 9 Exporting a ZFS Storage Pool
The following command exports the devices in pool tank so that they can be relocated or later imported.
# zpool export tank
Example 10 Importing a ZFS Storage Pool
The following command displays available pools, and then imports the pool "tank" for use on the system.
The results from this command are similar to the following:
# zpool import
  pool: tank
    id: 7678868315469843843
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank        ONLINE
          mirror-0  ONLINE
            c1t2d0  ONLINE
            c1t3d0  ONLINE

# zpool import tank
Example 11 Upgrading All ZFS Storage Pools to the Current Version
The following command upgrades all ZFS Storage pools to the current version of the software.
# zpool upgrade -a
This system is currently running ZFS pool version 19.

All pools are formatted using this version.
Example 12 Managing Hot Spares
The following command creates a new pool with an available hot spare:
# zpool create tank mirror c0t0d0 c0t1d0 spare c0t2d0
If one of the disks were to fail, the pool would be reduced to the degraded state. The failed device can be replaced using the following command:
# zpool replace tank c0t0d0 c0t3d0
Once the data has been resilvered, the spare is automatically removed and is made available again should another device fail. The hot spare can be permanently removed from the pool using the following command:
# zpool remove tank c0t2d0
Example 13 Creating a ZFS Pool with Mirrored Separate Intent Logs
The following command creates a ZFS storage pool consisting of two, two-way mirrors and mirrored log devices:
# zpool create pool mirror c0d0 c1d0 mirror c2d0 c3d0 log mirror \
   c4d0 c5d0
Example 14 Adding Cache Devices to a ZFS Pool
The following command adds two disks for use as cache devices to a ZFS storage pool:
# zpool add pool cache c2d0 c3d0
Once added, the cache devices gradually fill with content from main memory. Depending on the size of your cache devices, it could take over an hour for them to fill. Capacity and reads can be monitored using the iostat subcommand as follows:
# zpool iostat -v pool 5
Example 15 Removing a Mirrored Log Device
The following command removes the mirrored log device mirror-2.
Given this configuration:
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        tank        ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c6t0d0  ONLINE       0     0     0
            c6t1d0  ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            c6t2d0  ONLINE       0     0     0
            c6t3d0  ONLINE       0     0     0
        logs
          mirror-2  ONLINE       0     0     0
            c4t0d0  ONLINE       0     0     0
            c4t1d0  ONLINE       0     0     0
The command to remove the mirrored log mirror-2 is:
# zpool remove tank mirror-2
Example 16 Recovering a Faulted ZFS Pool
If a pool is faulted but recoverable, a message indicating this state is provided by zpool status if the pool was cached (see cachefile above), or as part of the error output from a failed zpool import of the pool.
Recover a cached pool with the zpool clear command:
# zpool clear -F data
Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
Discarded approximately 29 seconds of transactions.
If the pool configuration was not cached, use zpool import with the recovery mode flag:
# zpool import -F data
Pool data returned to its state as of Tue Sep 08 13:23:35 2009.
Discarded approximately 29 seconds of transactions.
The following exit values are returned:
0    Successful completion.
1    An error occurred.
2    Invalid command line options were specified.
See attributes(5) for descriptions of the following attributes:
zfs(1M), attributes(5)