xref: /spdk/doc/bdev.md (revision cdb0726b95631d46eaf4f2e39ddb6533f150fd27)
# Block Device User Guide {#bdev}

## Target Audience {#bdev_ug_targetaudience}

This user guide is intended for software developers who have knowledge of block storage, storage drivers, issuing JSON-RPC
commands and storage services such as RAID, compression, crypto, and others.

## Introduction {#bdev_ug_introduction}

The SPDK block device layer, often simply called *bdev*, is a C library
intended to be equivalent to the operating system block storage layer that
often sits immediately above the device drivers in a traditional kernel
storage stack. Specifically, this library provides the following
functionality:

* A pluggable module API for implementing block devices that interface with different types of block storage devices.
* Driver modules for NVMe, malloc (ramdisk), Linux AIO, virtio-scsi, Ceph RBD, Pmem and Vhost-SCSI Initiator and more.
* An application API for enumerating and claiming SPDK block devices and then performing operations (read, write, unmap, etc.) on those devices.
* Facilities to stack block devices to create complex I/O pipelines, including logical volume management (lvol) and partition support (GPT).
* Configuration of block devices via JSON-RPC.
* Request queueing, timeout, and reset handling.
* Multiple, lockless queues for sending I/O to block devices.

The bdev module provides an abstraction layer that offers a common API for all
devices. Users can use the available bdev modules or create their own module
for any type of device underneath (please refer to @ref bdev_module for
details). SPDK also provides vbdev modules which create block devices on
existing bdevs, for example @ref bdev_ug_logical_volumes or @ref bdev_ug_gpt.

## Prerequisites {#bdev_ug_prerequisites}

This guide assumes that you can already build the standard SPDK distribution
on your platform. The block device layer is a C library with a single public
header file named bdev.h. All SPDK configuration described in the following
chapters is done using JSON-RPC commands. SPDK provides a Python-based
command line tool for sending RPC commands located at `scripts/rpc.py`. You can
list the available commands by running this script with the `-h` or `--help` flag.
Additionally, you can retrieve the currently supported set of RPC commands
directly from a running SPDK application with `scripts/rpc.py rpc_get_methods`.
Detailed help for each command can be displayed by adding the `-h` flag as a
command parameter.

## Configuring Block Device Modules {#bdev_ug_general_rpcs}

Block devices can be configured using JSON RPCs. A complete list of available RPC commands
with detailed information can be found on the @ref jsonrpc_components_bdev page.

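Under the hood, `scripts/rpc.py` wraps each command in a JSON-RPC 2.0 request and sends it
to the SPDK application over a Unix domain socket (by default `/var/tmp/spdk.sock`). As a
rough sketch of what a bdev command translates to on the wire (the size-to-block-count
conversion shown here mirrors the `bdev_malloc_create` example later in this guide; treat
the snippet as illustrative rather than authoritative):

```python
import json

# Sketch of the JSON-RPC 2.0 payload behind:
#   rpc.py bdev_malloc_create -b Malloc0 64 512
# rpc.py converts the 64 MiB total size into a block count for the RPC.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "bdev_malloc_create",
    "params": {
        "name": "Malloc0",
        "num_blocks": 64 * 1024 * 1024 // 512,  # 131072 blocks of 512 bytes
        "block_size": 512,
    },
}
payload = json.dumps(request)
print(payload)
```

The application replies with a JSON object whose `result` field carries the outcome (for
creation RPCs, typically the name of the new bdev).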
## Common Block Device Configuration Examples

## Ceph RBD {#bdev_config_rbd}

The SPDK RBD bdev driver provides SPDK block layer access to Ceph RADOS block
devices (RBD). Ceph RBD devices are accessed via the librbd and librados
libraries, which talk to the RADOS block devices exported by Ceph. To create a
Ceph bdev, use the `bdev_rbd_create` RPC command.

Example command

`rpc.py bdev_rbd_create rbd foo 512`

This command will create a bdev that represents the 'foo' image from a pool called 'rbd'.

To remove a block device representation use the bdev_rbd_delete command.

`rpc.py bdev_rbd_delete Rbd0`

To resize a bdev use the bdev_rbd_resize command.

`rpc.py bdev_rbd_resize Rbd0 4096`

This command will resize the Rbd0 bdev to 4096 MiB.

## Compression Virtual Bdev Module {#bdev_config_compress}

The compression bdev module can be configured to provide compression/decompression
services for an underlying thinly provisioned logical volume. Although the underlying
module can be anything (e.g. an NVMe bdev), the overall compression benefits will not be realized
unless the data stored on disk is placed appropriately. The compression vbdev module
relies on an internal SPDK library called `reduce` to accomplish this; see @ref reduce
for detailed information.

The vbdev module relies on the DPDK CompressDev Framework to provide all compression
functionality. The framework provides support for many different software-only
compression modules as well as hardware-assisted support for Intel QAT. At this
time the vbdev module supports the DPDK drivers for ISAL, QAT and mlx5_pci.

The mlx5_pci driver works with the BlueField 2 SmartNIC and requires additional configuration
of the DPDK environment to enable the compression function. This can be done via the SPDK event
library by configuring the `env_context` member of the `spdk_app_opts` structure or by passing
the corresponding CLI arguments in the
following form: `--allow=BDF,class=compress`, e.g. `--allow=0000:01:00.0,class=compress`.

Persistent memory is used to store metadata associated with the layout of the data on the
backing device. SPDK relies on [PMDK](http://pmem.io/pmdk/) to interface with persistent memory,
so any hardware supported by PMDK should work. If the directory for PMEM supplied upon vbdev
creation does not point to persistent memory (i.e. a regular filesystem), performance will be
severely impacted. The vbdev module and reduce libraries were designed to use persistent memory
for any production use.

Example command

`rpc.py bdev_compress_create -p /pmem_files -b myLvol`

In this example, a compression vbdev is created using persistent memory that is mapped to
the directory `pmem_files` on top of the existing thinly provisioned logical volume `myLvol`.
The resulting compression bdev will be named `COMP_LVS/myLvol` where LVS is the name of the
logical volume store that `myLvol` resides on.

The logical volume is referred to as the backing device. Once the compression vbdev is
created, it cannot be separated from the persistent memory file that will be created in
the specified directory. If the persistent memory file is not available, the compression
vbdev will also not be available.

By default the vbdev module will choose the QAT driver if the hardware and drivers are
available and loaded. If not, it will revert to the software-only ISAL driver. The driver
may be specified explicitly with the following command; however, this setting is not
persistent, so to be honored it must be applied either upon creation or before the
underlying logical volume is loaded. A value of `0` tells the vbdev module to use QAT if
available and otherwise use ISAL; this is the default, and if it is sufficient the command
is not required. A value of `1` tells the module to use QAT, and if QAT is not available,
creating or loading the vbdev will fail. A value of `2`, as shown below, tells the module
to use ISAL, and if for some reason it is not available, the vbdev will fail to create or load.

`rpc.py bdev_compress_set_pmd -p 2`

To remove a compression vbdev, use the following command, which will also delete the PMEM
file. If the logical volume is deleted instead, the PMEM file will not be removed and the
compression vbdev will not be available.

`rpc.py bdev_compress_delete COMP_LVS/myLvol`

To list compression volumes that are only available for deletion because their PMEM file
was missing, use the following command. The name parameter is optional; if it is not
included, all such volumes are listed, and if it is used, the command returns the given
name or an error that the device does not exist.

`rpc.py bdev_compress_get_orphans --name COMP_Nvme0n1`

## Crypto Virtual Bdev Module {#bdev_config_crypto}

The crypto virtual bdev module can be configured to provide at-rest data encryption
for any underlying bdev. The module relies on the DPDK CryptoDev Framework to provide
all cryptographic functionality. The framework provides support for many different
software-only cryptographic modules as well as hardware-assisted support for the Intel
QAT board and NVIDIA crypto enabled NICs.
The framework also provides support for cipher, hash, authentication and AEAD functions.
At this time the SPDK virtual bdev module supports cipher only as follows:

- AES-NI Multi Buffer Crypto Poll Mode Driver: RTE_CRYPTO_CIPHER_AES128_CBC
- Intel(R) QuickAssist (QAT) Crypto Poll Mode Driver: RTE_CRYPTO_CIPHER_AES128_CBC,
  RTE_CRYPTO_CIPHER_AES128_XTS
  (Note: QAT is functional, however it is marked as experimental until the hardware has
  been fully integrated with the SPDK CI system.)
- MLX5 Crypto Poll Mode Driver: RTE_CRYPTO_CIPHER_AES256_XTS, RTE_CRYPTO_CIPHER_AES512_XTS

In order to support using the bdev block offset (LBA) as the initialization vector (IV),
the crypto module breaks up all I/O into crypto operations of a size equal to the block
size of the underlying bdev. For example, a 4K I/O to a bdev with a 512B block size
would result in 8 cryptographic operations.

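As a quick illustration of that split (the I/O and block sizes are the example values from
the paragraph above; the starting LBA is made up):

```python
# One crypto operation per underlying block; each block's LBA serves as its IV.
io_size = 4096      # 4K I/O
block_size = 512    # block size of the underlying bdev
crypto_ops = io_size // block_size

start_lba = 1000    # hypothetical starting LBA of the I/O
ivs = [start_lba + i for i in range(crypto_ops)]
print(crypto_ops, ivs)
```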
For reads, the buffer provided to the crypto module will be used as the destination buffer
for unencrypted data. For writes, however, a temporary scratch buffer is used as the
destination buffer for encryption, which is then passed on to the underlying bdev as the
write buffer. This is done to avoid encrypting the data in the original source buffer, which
may cause problems in some use cases.

Example command

`rpc.py bdev_crypto_create NVMe1n1 CryNvmeA crypto_aesni_mb 01234567891234560123456789123456`

This command will create a crypto vbdev called 'CryNvmeA' on top of the NVMe bdev
'NVMe1n1' and will use the DPDK software driver 'crypto_aesni_mb' and the key
'01234567891234560123456789123456'.

Please make sure the keys are provided in hexlified format. This means the string passed to
rpc.py must be twice as long as the key length in binary form.

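For example, a 16-byte (AES-128) binary key hexlifies to a 32-character string. The key
below is made up for illustration:

```python
import binascii

raw_key = b"0123456789abcdef"  # hypothetical 16-byte binary key
hex_key = binascii.hexlify(raw_key).decode()

# The hexlified string passed to rpc.py is twice the binary key length.
assert len(hex_key) == 2 * len(raw_key)
print(hex_key)
```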
Example command

~~~bash
rpc.py bdev_crypto_create -c AES_XTS -k2 7859243a027411e581e0c40a35c8228f NVMe1n1 CryNvmeA \
       mlx5_pci d16a2f3a9e9f5b32daefacd7f5984f4578add84425be4a0baa489b9de8884b09
~~~

This command will create a crypto vbdev called 'CryNvmeA' on top of the NVMe bdev
'NVMe1n1' and will use the DPDK driver 'mlx5_pci', the AES key
'd16a2f3a9e9f5b32daefacd7f5984f4578add84425be4a0baa489b9de8884b09' and the XTS key
'7859243a027411e581e0c40a35c8228f'. In other words, the compound AES_XTS key to be used is
'd16a2f3a9e9f5b32daefacd7f5984f4578add84425be4a0baa489b9de8884b097859243a027411e581e0c40a35c8228f'.

To remove the vbdev use the bdev_crypto_delete command.

`rpc.py bdev_crypto_delete CryNvmeA`

The MLX5 driver works with crypto enabled NVIDIA NICs and requires special configuration of
the DPDK environment to enable the crypto function. This can be done via the SPDK event
library by configuring the `env_context` member of the `spdk_app_opts` structure or by passing
the corresponding CLI arguments in
the following form: `--allow=BDF,class=crypto,wcs_file=/full/path/to/wrapped/credentials`, e.g.
`--allow=0000:01:00.0,class=crypto,wcs_file=/path/credentials.txt`.

## Delay Bdev Module {#bdev_config_delay}

The delay vbdev module is intended to apply a predetermined additional latency on top of a lower
level bdev. This enables the simulation of the latency characteristics of a device during the functional
or scalability testing of an SPDK application. For example, to simulate the effect of drive latency when
processing I/Os, one could configure a null bdev with a delay bdev on top of it.

The delay bdev module is not intended to provide a high-fidelity replication of a specific NVMe drive's latency;
instead, its main purpose is to provide a "big picture" understanding of how a generic latency affects a given
application.

A delay bdev is created using the `bdev_delay_create` RPC. This RPC takes 6 arguments, one for the name
of the delay bdev and one for the name of the base bdev. The remaining four arguments represent the following
latency values: average read latency, average write latency, p99 read latency, and p99 write latency.
Within the context of the delay bdev, p99 latency means that one percent of the I/O will be delayed by at
least the value of the p99 latency before being completed to the upper level protocol. All of the latency values
are measured in microseconds.

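Under this definition, the mean completion latency can be approximated with a simple
two-point model (99% of I/Os at the average value, 1% at the p99 value); this is a sketch
for intuition, not SPDK's actual implementation:

```python
def expected_latency_us(avg_us, p99_us):
    """Approximate mean latency under a simple 99%/1% two-point model."""
    return 0.99 * avg_us + 0.01 * p99_us

# Read/write values from the bdev_delay_create example in this section.
read_mean = expected_latency_us(10, 50)
write_mean = expected_latency_us(30, 90)
print(read_mean, write_mean)
```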
Example command:

`rpc.py bdev_delay_create -b Null0 -d delay0 -r 10 --nine-nine-read-latency 50 -w 30 --nine-nine-write-latency 90`

This command will create a delay bdev with average read and write latencies of 10 and 30 microseconds and p99 read
and write latencies of 50 and 90 microseconds respectively.

A delay bdev can be deleted using the `bdev_delay_delete` RPC.

Example command:

`rpc.py bdev_delay_delete delay0`

## GPT (GUID Partition Table) {#bdev_config_gpt}

The GPT virtual bdev driver is enabled by default and does not require any configuration.
It will automatically detect @ref bdev_ug_gpt on any attached bdev and may create
multiple virtual bdevs.

### SPDK GPT partition table {#bdev_ug_gpt}

The SPDK partition type GUID is `7c5222bd-8f5d-4087-9c00-bf9843c7b58c`. Existing SPDK bdevs
can be exposed as Linux block devices via NBD and then partitioned with
standard partitioning tools. After partitioning, the bdevs will need to be deleted and
attached again for the GPT bdev module to see any changes. The NBD kernel module must be
loaded first. To expose a bdev over NBD, use the `nbd_start_disk` RPC command.

Example command

`rpc.py nbd_start_disk Malloc0 /dev/nbd0`

This will expose the SPDK bdev `Malloc0` under the `/dev/nbd0` block device.

To remove an NBD device, use the `nbd_stop_disk` RPC command.

Example command

`rpc.py nbd_stop_disk /dev/nbd0`

To display the full NBD device list, or a specified device, use the `nbd_get_disks` RPC command.

Example command

`rpc.py nbd_get_disks -n /dev/nbd0`

### Creating a GPT partition table using NBD {#bdev_ug_gpt_create_part}

~~~bash
# Expose bdev Nvme0n1 as kernel block device /dev/nbd0 by JSON-RPC
rpc.py nbd_start_disk Nvme0n1 /dev/nbd0

# Create GPT partition table.
parted -s /dev/nbd0 mklabel gpt

# Add a partition consuming 50% of the available space.
parted -s /dev/nbd0 mkpart MyPartition '0%' '50%'

# Change the partition type to the SPDK GUID.
# sgdisk is part of the gdisk package.
sgdisk -t 1:7c5222bd-8f5d-4087-9c00-bf9843c7b58c /dev/nbd0

# Stop the NBD device (stop exporting /dev/nbd0).
rpc.py nbd_stop_disk /dev/nbd0

# Now Nvme0n1 is configured with a GPT partition table, and
# the first partition will be automatically exposed as
# Nvme0n1p1 in SPDK applications.
~~~

## iSCSI bdev {#bdev_config_iscsi}

The SPDK iSCSI bdev driver depends on libiscsi and hence is not enabled by default.
In order to use it, build SPDK with an extra `--with-iscsi-initiator` configure option.

The following command creates an `iSCSI0` bdev from a single LUN exposed at the given iSCSI URL
with `iqn.2016-06.io.spdk:init` as the reported initiator IQN.

`rpc.py bdev_iscsi_create -b iSCSI0 -i iqn.2016-06.io.spdk:init --url iscsi://127.0.0.1/iqn.2016-06.io.spdk:disk1/0`

The URL is in the following format:
`iscsi://[<username>[%<password>]@]<host>[:<port>]/<target-iqn>/<lun>`

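The format can be unpacked with a small helper; this parser is an illustrative sketch, not
part of SPDK:

```python
def parse_iscsi_url(url):
    """Split an iscsi:// URL into its components (simplified sketch)."""
    assert url.startswith("iscsi://")
    rest = url[len("iscsi://"):]
    auth, _, location = rest.rpartition("@")        # auth is '' when absent
    username, _, password = auth.partition("%")
    host_port, target_iqn, lun = location.split("/")
    host, _, port = host_port.partition(":")        # port is '' when absent
    return {"username": username, "password": password, "host": host,
            "port": port, "target_iqn": target_iqn, "lun": lun}

print(parse_iscsi_url("iscsi://127.0.0.1/iqn.2016-06.io.spdk:disk1/0"))
```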
## Linux AIO bdev {#bdev_config_aio}

The SPDK AIO bdev driver provides SPDK block layer access to Linux kernel block
devices, or a file on a Linux filesystem, via Linux AIO. Note that O_DIRECT is
used, which bypasses the Linux page cache. This mode is probably as close to
a typical kernel-based target as a user space target can get without using a
user-space driver. To create an AIO bdev, use the `bdev_aio_create` RPC
command.

Example commands

`rpc.py bdev_aio_create /dev/sda aio0`

This command will create an `aio0` device from /dev/sda.

`rpc.py bdev_aio_create /tmp/file file 4096`

This command will create a `file` device with block size 4096 from /tmp/file.

To delete an aio bdev use the bdev_aio_delete command.

`rpc.py bdev_aio_delete aio0`

## OCF Virtual bdev {#bdev_config_cas}

The OCF virtual bdev module is based on the [Open CAS Framework](https://github.com/Open-CAS/ocf) - a
high performance block storage caching meta-library.
To enable the module, configure SPDK using the `--with-ocf` flag.
An OCF bdev can be used to enable caching for any underlying bdev.

Below is an example command for creating an OCF bdev:

`rpc.py bdev_ocf_create Cache1 wt Malloc0 Nvme0n1`

This command will create a new OCF bdev `Cache1` with bdev `Malloc0` as the caching device,
`Nvme0n1` as the core device, and an initial cache mode of `Write-Through`.
`Malloc0` will be used as the cache for `Nvme0n1`, so data written to `Cache1` will eventually
be present on `Nvme0n1`.
By default, OCF will be configured with a cache line size of 4KiB,
and non-volatile metadata will be disabled.

To remove `Cache1`:

`rpc.py bdev_ocf_delete Cache1`

During removal, the OCF cache will be stopped and all cached data will be written to the core device.

Note that OCF has a per-device RAM requirement. More details can be found in the
[OCF documentation](https://open-cas.github.io/guide_system_requirements.html).

## Malloc bdev {#bdev_config_malloc}

Malloc bdevs are ramdisks and are therefore volatile by nature. They are created from the
hugepage memory given to the SPDK application.

Example command for creating a malloc bdev:

`rpc.py bdev_malloc_create -b Malloc0 64 512`

Example command for removing a malloc bdev:

`rpc.py bdev_malloc_delete Malloc0`

## Null {#bdev_config_null}

The SPDK null bdev driver is a dummy block I/O target that discards all writes and returns undefined
data for reads. It is useful for benchmarking the rest of the bdev I/O stack with minimal block
device overhead and for testing configurations that can't easily be created with the Malloc bdev.
To create a null bdev, use the `bdev_null_create` RPC command.

Example command

`rpc.py bdev_null_create Null0 8589934592 4096`

This command will create an 8 petabyte `Null0` device with block size 4096.

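The size argument is given in megabytes (MiB in practice), so the example's capacity can
be double-checked with a little arithmetic:

```python
size_mib = 8589934592        # size argument from the example command above
block_size = 4096
total_bytes = size_mib * 1024 * 1024

print(total_bytes // 2**50)          # whole pebibytes
print(total_bytes // block_size)     # number of 4096-byte blocks
```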
To delete a null bdev use the bdev_null_delete command.

`rpc.py bdev_null_delete Null0`

## NVMe bdev {#bdev_config_nvme}

There are two ways to create a block device based on an NVMe device in SPDK. The first
way is to connect a local PCIe drive; the second is to connect to an NVMe-oF device.
In both cases, use the `bdev_nvme_attach_controller` RPC command.

Example commands

`rpc.py bdev_nvme_attach_controller -b NVMe1 -t PCIe -a 0000:01:00.0`

This command will create an NVMe bdev from a physical device in the system.

`rpc.py bdev_nvme_attach_controller -b Nvme0 -t RDMA -a 192.168.100.1 -f IPv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1`

This command will create an NVMe bdev from an NVMe-oF resource.

To remove an NVMe controller use the bdev_nvme_detach_controller command.

`rpc.py bdev_nvme_detach_controller Nvme0`

This command will remove the NVMe bdev named Nvme0.

The SPDK NVMe bdev driver provides the multipath feature. Please refer to
@ref nvme_multipath for details.

### NVMe bdev character device {#bdev_config_nvme_cuse}

This feature is considered experimental. You must configure SPDK with the `--with-nvme-cuse`
option to enable this RPC.

Example command

`rpc.py bdev_nvme_cuse_register -n Nvme3`

This command will register a character device under /dev/spdk associated with the Nvme3
controller. If there are namespaces created on the Nvme3 controller, a namespace
character device is also created for each namespace.

For example, the first controller registered will have a character device path of
/dev/spdk/nvmeX, where X is replaced with a unique integer to differentiate it from
other controllers. Note that this 'nvmeX' name here has no correlation to the name
associated with the controller in SPDK. Namespace character devices will have a path
of /dev/spdk/nvmeXnY, where Y is the namespace ID.

CUSE devices are removed from the system when the NVMe controller is detached, or
unregistered with the command:

`rpc.py bdev_nvme_cuse_unregister -n Nvme0`

## Logical volumes {#bdev_ug_logical_volumes}

The Logical Volumes library is a flexible storage space management system. It allows
creating and managing virtual block devices with variable size on top of other bdevs.
The SPDK Logical Volume library is built on top of @ref blob. For a detailed description
please refer to @ref lvol.

### Logical volume store {#bdev_ug_lvol_store}

Before creating any logical volumes (lvols), an lvol store has to be created on the
selected block device. The lvol store is a container for lvols, responsible for managing
the underlying bdev space assignment to the lvol bdevs and storing metadata. To create an
lvol store, use the `bdev_lvol_create_lvstore` RPC command.

Example command

`rpc.py bdev_lvol_create_lvstore Malloc2 lvs -c 4096`

This will create an lvol store named `lvs` with cluster size 4096, built on top of the
`Malloc2` bdev. In response, the user will be provided with a UUID which is the unique
lvol store identifier.

The user can get a list of available lvol stores using the `bdev_lvol_get_lvstores` RPC
command (no parameters available).

Example response

~~~
{
  "uuid": "330a6ab2-f468-11e7-983e-001e67edf35d",
  "base_bdev": "Malloc2",
  "free_clusters": 8190,
  "cluster_size": 8192,
  "total_data_clusters": 8190,
  "block_size": 4096,
  "name": "lvs"
}
~~~

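The response fields are enough to compute the store's capacity; a short calculation using
the example values above:

```python
# Values from the example bdev_lvol_get_lvstores response.
cluster_size = 8192            # bytes
free_clusters = 8190
total_data_clusters = 8190

free_bytes = free_clusters * cluster_size
total_bytes = total_data_clusters * cluster_size
print(free_bytes, total_bytes)   # just under 64 MiB each here
```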
To delete an lvol store, use the `bdev_lvol_delete_lvstore` RPC command.

Example commands

`rpc.py bdev_lvol_delete_lvstore -u 330a6ab2-f468-11e7-983e-001e67edf35d`

`rpc.py bdev_lvol_delete_lvstore -l lvs`

### Lvols {#bdev_ug_lvols}

To create lvols on an existing lvol store, use the `bdev_lvol_create` RPC command.
Each created lvol will be represented by a new bdev.

Example commands

`rpc.py bdev_lvol_create lvol1 25 -l lvs`

`rpc.py bdev_lvol_create lvol2 25 -u 330a6ab2-f468-11e7-983e-001e67edf35d`

## Passthru {#bdev_config_passthru}

The SPDK Passthru virtual block device module serves as an example of how to write a
virtual block device module. It implements the required functionality of a vbdev module
and demonstrates some other basic features such as the use of per I/O context.

Example commands

`rpc.py bdev_passthru_create -b aio -p pt`

`rpc.py bdev_passthru_delete pt`

## Pmem {#bdev_config_pmem}

The SPDK pmem bdev driver uses a pmemblk pool as the target for block I/O operations. For
details on pmem please refer to the PMDK documentation on the http://pmem.io website.
First, configure SPDK to include PMDK support:

`configure --with-pmdk`

To create a pmemblk pool for use with SPDK, use the `bdev_pmem_create_pool` RPC command.

Example command

`rpc.py bdev_pmem_create_pool /path/to/pmem_pool 25 4096`

To get information on a created pmem pool file, use the `bdev_pmem_get_pool_info` RPC command.

Example command

`rpc.py bdev_pmem_get_pool_info /path/to/pmem_pool`

To remove a pmem pool file, use the `bdev_pmem_delete_pool` RPC command.

Example command

`rpc.py bdev_pmem_delete_pool /path/to/pmem_pool`

To create a bdev based on a pmemblk pool file, use the `bdev_pmem_create` RPC
command.

Example command

`rpc.py bdev_pmem_create /path/to/pmem_pool -n pmem`

To remove a block device representation use the bdev_pmem_delete command.

`rpc.py bdev_pmem_delete pmem`

## RAID {#bdev_ug_raid}

The RAID virtual bdev module provides functionality to combine any SPDK bdevs into
one RAID bdev. Currently SPDK supports only RAID 0. The RAID functionality does not
store on-disk metadata on the member disks, so the user must recreate the RAID
volume when restarting the application. The user may specify member disks to create a RAID
volume even if they do not exist yet - as the member disks are registered at
a later time, the RAID module will claim them and will surface the RAID volume
after all of the member disks are available. It is allowed to use disks of
different sizes - the smallest disk size will be the amount of space used on
each member disk.

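The capacity rule described above can be written down directly; the member sizes below are
made up, and strip-size rounding is ignored:

```python
def raid0_capacity(member_sizes_gib):
    """Usable RAID 0 capacity: each member contributes the smallest member's size."""
    return min(member_sizes_gib) * len(member_sizes_gib)

# Four hypothetical members: the 100 GiB disks only contribute 80 GiB each,
# so 4 * 80 GiB of space is usable.
print(raid0_capacity([100, 80, 100, 80]))
```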
Example commands

`rpc.py bdev_raid_create -n Raid0 -z 64 -r 0 -b "lvol0 lvol1 lvol2 lvol3"`

`rpc.py bdev_raid_get_bdevs`

`rpc.py bdev_raid_delete Raid0`

## Split {#bdev_ug_split}

The split block device module takes an underlying block device and splits it into
several smaller equal-sized virtual block devices. This serves as an example of how to
create more vbdevs on a given base bdev for user testing.

Example commands

To create four split bdevs with base bdev_b0 use the `bdev_split_create` command.
Each split bdev will be one fourth the size of the base bdev.

`rpc.py bdev_split_create bdev_b0 4`

The `split_size_mb` (-s) parameter restricts the size of each split bdev.
The total size of all split bdevs must not exceed the base bdev size.

`rpc.py bdev_split_create bdev_b0 4 -s 128`

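The resulting sizes follow from the rules above; a quick check assuming a hypothetical
1024 MiB base bdev:

```python
base_size_mb = 1024        # hypothetical base bdev size in MiB
split_count = 4

# Without -s, each split bdev gets an equal share of the base bdev.
default_split_mb = base_size_mb // split_count

# With -s 128, each split bdev is 128 MiB; the total must fit in the base.
split_size_mb = 128
assert split_count * split_size_mb <= base_size_mb
print(default_split_mb, split_count * split_size_mb)
```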
To remove the split bdevs, use the `bdev_split_delete` command with the base bdev name.

`rpc.py bdev_split_delete bdev_b0`

## Uring {#bdev_ug_uring}

The uring bdev module issues I/O to kernel block devices using the io_uring Linux kernel API. This module requires liburing.
For more information on io_uring refer to the kernel [io_uring](https://kernel.dk/io_uring.pdf) document.

The user needs to configure SPDK to include io_uring support:

`configure --with-uring`

To create a uring bdev with a given filename, bdev name and block size use the `bdev_uring_create` RPC.

`rpc.py bdev_uring_create /path/to/device bdev_u0 512`

To remove a uring bdev use the `bdev_uring_delete` RPC.

`rpc.py bdev_uring_delete bdev_u0`

## xnvme {#bdev_ug_xnvme}

The xnvme bdev module issues I/O to the underlying NVMe devices through various I/O mechanisms
such as libaio, io_uring, asynchronous IOCTLs using io_uring passthrough, POSIX aio, emulated aio, etc.

This module requires the xNVMe library.
For more information on xNVMe refer to [xNVMe](https://xnvme.io/docs/latest).

The user needs to configure SPDK to include xNVMe support:

`configure --with-xnvme`

To create an xnvme bdev with a given filename, bdev name and I/O mechanism use the `bdev_xnvme_create` RPC.

`rpc.py bdev_xnvme_create /dev/ng0n1 bdev_ng0n1 io_uring_cmd`

To remove an xnvme bdev use the `bdev_xnvme_delete` RPC.

`rpc.py bdev_xnvme_delete bdev_ng0n1`

## Virtio Block {#bdev_config_virtio_blk}

The Virtio-Block driver allows creating SPDK bdevs from Virtio-Block devices.

The following command creates a Virtio-Block device named `VirtioBlk0` from a vhost-user
socket `/tmp/vhost.0` exposed directly by SPDK @ref vhost. The optional `vq-count` and
`vq-size` params specify the number of request queues and the queue depth to be used.

`rpc.py bdev_virtio_attach_controller --dev-type blk --trtype user --traddr /tmp/vhost.0 --vq-count 2 --vq-size 512 VirtioBlk0`

The driver can also be used inside QEMU-based VMs. The following command creates a Virtio
Block device named `VirtioBlk1` from a Virtio PCI device at address `0000:01:00.0`.
The entire configuration will be read automatically from the PCI Configuration Space. It will
reflect all parameters passed to QEMU's vhost-user-blk-pci device.

`rpc.py bdev_virtio_attach_controller --dev-type blk --trtype pci --traddr 0000:01:00.0 VirtioBlk1`

Virtio-Block devices can be removed with the following command

`rpc.py bdev_virtio_detach_controller VirtioBlk0`

## Virtio SCSI {#bdev_config_virtio_scsi}

The Virtio-SCSI driver allows creating SPDK block devices from Virtio-SCSI LUNs.

Virtio-SCSI bdevs are created the same way as Virtio-Block ones.

`rpc.py bdev_virtio_attach_controller --dev-type scsi --trtype user --traddr /tmp/vhost.0 --vq-count 2 --vq-size 512 VirtioScsi0`

`rpc.py bdev_virtio_attach_controller --dev-type scsi --trtype pci --traddr 0000:01:00.0 VirtioScsi0`

Each Virtio-SCSI device may export up to 64 block devices named VirtioScsi0t0 ~ VirtioScsi0t63,
one LUN (LUN0) per SCSI device. The above 2 commands will output names of all exposed bdevs.

Virtio-SCSI devices can be removed with the following command

`rpc.py bdev_virtio_detach_controller VirtioScsi0`

Removing a Virtio-SCSI device will destroy all its bdevs.
