# Block Device User Guide {#bdev}

# Introduction {#bdev_ug_introduction}

The SPDK block device layer, often simply called *bdev*, is a C library
intended to be equivalent to the operating system block storage layer that
often sits immediately above the device drivers in a traditional kernel
storage stack. Specifically, this library provides the following
functionality:

* A pluggable module API for implementing block devices that interface with different types of block storage devices.
* Driver modules for NVMe, malloc (ramdisk), Linux AIO, virtio-scsi, Ceph RBD, Pmem, Vhost-SCSI Initiator, and more.
* An application API for enumerating and claiming SPDK block devices and then performing operations (read, write, unmap, etc.) on those devices.
* Facilities to stack block devices to create complex I/O pipelines, including logical volume management (lvol) and partition support (GPT).
* Configuration of block devices via JSON-RPC.
* Request queueing, timeout, and reset handling.
* Multiple, lockless queues for sending I/O to block devices.

The bdev module creates an abstraction layer that provides a common API for all devices.
Users can use the available bdev modules or create their own module with any type of
device underneath (please refer to @ref bdev_module for details). SPDK also
provides vbdev modules which create block devices on top of existing bdevs, for
example @ref bdev_ug_logical_volumes or @ref bdev_ug_gpt.

# Prerequisites {#bdev_ug_prerequisites}

This guide assumes that you can already build the standard SPDK distribution
on your platform. The block device layer is a C library with a single public
header file named bdev.h. All SPDK configuration described in the following
chapters is done using JSON-RPC commands. SPDK provides a Python-based
command line tool for sending RPC commands located at `scripts/rpc.py`. You
can list the available commands by running this script with the `-h` or `--help` flag.
Additionally, the currently supported set of RPC commands can be retrieved
directly from a running SPDK application with `scripts/rpc.py rpc_get_methods`.
Detailed help for each command can be displayed by adding the `-h` flag as a
command parameter.

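For example, assuming an SPDK application is already running with the default RPC socket,
the following invocations list the supported RPC methods and show the detailed help for one
of the bdev commands described later in this guide:

~~~
# List every RPC method the running application supports.
scripts/rpc.py rpc_get_methods

# Show detailed help for a single command.
scripts/rpc.py bdev_get_bdevs -h
~~~
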
# General Purpose RPCs {#bdev_ug_general_rpcs}

## bdev_get_bdevs {#bdev_ug_get_bdevs}

A list of the currently available block devices, including detailed information about
them, can be obtained with the `bdev_get_bdevs` RPC command. The optional `name`
parameter can be added to get details about the bdev specified by that name.

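For reference, the JSON-RPC request that `rpc.py` sends for this command might look like the
following sketch; the `name` parameter is the one described above and `Malloc0` is just an
example bdev:

~~~
{
  "jsonrpc": "2.0",
  "method": "bdev_get_bdevs",
  "params": {
    "name": "Malloc0"
  },
  "id": 1
}
~~~
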
Example response

~~~
{
  "num_blocks": 32768,
  "assigned_rate_limits": {
    "rw_ios_per_sec": 10000,
    "rw_mbytes_per_sec": 20
  },
  "supported_io_types": {
    "reset": true,
    "nvme_admin": false,
    "unmap": true,
    "read": true,
    "write_zeroes": true,
    "write": true,
    "flush": true,
    "nvme_io": false
  },
  "driver_specific": {},
  "claimed": false,
  "block_size": 4096,
  "product_name": "Malloc disk",
  "name": "Malloc0"
}
~~~

## bdev_set_qos_limit {#bdev_set_qos_limit}

Users can use the `bdev_set_qos_limit` RPC command to enable, adjust, and disable
rate limits on an existing bdev.  Two types of rate limits are supported:
IOPS and bandwidth.  The rate limits can be enabled, adjusted, and disabled at any
time for the specified bdev.  The bdev name is a required parameter for this
RPC command and at least one of `rw_ios_per_sec` and `rw_mbytes_per_sec` must be
specified.  When both rate limits are enabled, whichever limit is reached first
takes effect.  The value 0 may be specified to disable the corresponding rate
limit. Users can run this command with `-h` or `--help` for more information.

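As a sketch, a raw JSON-RPC request that caps an example `Malloc0` bdev at 20000 IOPS and
100 MiB/s could look like this, using the parameter names listed above:

~~~
{
  "jsonrpc": "2.0",
  "method": "bdev_set_qos_limit",
  "params": {
    "name": "Malloc0",
    "rw_ios_per_sec": 20000,
    "rw_mbytes_per_sec": 100
  },
  "id": 1
}
~~~
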
## Histograms {#rpc_bdev_histogram}

The `bdev_enable_histogram` RPC command enables or disables the gathering of
latency data for a specified bdev. The histogram can be downloaded by calling
`bdev_get_histogram` and parsed with the `scripts/histogram.py` script.

Example command

`rpc.py bdev_enable_histogram Nvme0n1 --enable`

This command will enable gathering histogram data on the Nvme0n1 device.

`rpc.py bdev_get_histogram Nvme0n1 | histogram.py`

This command will download the gathered histogram data. The script will parse
the data and show a table containing the I/O count for each latency range.

`rpc.py bdev_enable_histogram Nvme0n1 --disable`

This command will disable the histogram on the Nvme0n1 device.

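Taken together, a typical measurement session might look like the sketch below; the workload
itself is whatever test you run against the bdev:

~~~
# Start collecting latency data on Nvme0n1.
rpc.py bdev_enable_histogram Nvme0n1 --enable

# ... run an I/O workload against the bdev ...

# Fetch and render the histogram, then stop collecting.
rpc.py bdev_get_histogram Nvme0n1 | histogram.py
rpc.py bdev_enable_histogram Nvme0n1 --disable
~~~
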
# Ceph RBD {#bdev_config_rbd}

The SPDK RBD bdev driver provides SPDK block layer access to Ceph RADOS block
devices (RBD). Ceph RBD devices are accessed via the librbd and librados libraries,
which access the RADOS block devices exported by Ceph. To create a Ceph bdev, use
the `bdev_rbd_create` RPC command.

Example command

`rpc.py bdev_rbd_create rbd foo 512`

This command will create a bdev that represents the 'foo' image from a pool called 'rbd'.

To remove a block device representation use the bdev_rbd_delete command.

`rpc.py bdev_rbd_delete Rbd0`

To resize a bdev use the bdev_rbd_resize command.

`rpc.py bdev_rbd_resize Rbd0 4096`

This command will resize the Rbd0 bdev to 4096 MiB.

# Compression Virtual Bdev Module {#bdev_config_compress}

The compression bdev module can be configured to provide compression/decompression
services for an underlying thinly provisioned logical volume. Although the underlying
bdev can be of any type (e.g. an NVMe bdev), the overall compression benefits will not be realized
unless the data stored on disk is placed appropriately. The compression vbdev module
relies on an internal SPDK library called `reduce` to accomplish this; see @ref reduce
for detailed information.

The vbdev module relies on the DPDK CompressDev Framework to provide all compression
functionality. The framework provides support for many different software-only
compression modules as well as hardware-assisted support for Intel QAT. At this
time the vbdev module supports the DPDK drivers for ISAL and QAT.

Persistent memory is used to store metadata associated with the layout of the data on the
backing device. SPDK relies on [PMDK](http://pmem.io/pmdk/) to interface with persistent memory, so any hardware
supported by PMDK should work. If the directory for PMEM supplied upon vbdev creation does
not point to persistent memory (i.e. a regular filesystem), performance will be severely
impacted.  The vbdev module and reduce libraries were designed to use persistent memory for
any production use.

Example command

`rpc.py bdev_compress_create -p /pmem_files -b myLvol`

In this example, a compression vbdev is created using persistent memory that is mapped to
the directory `pmem_files` on top of the existing thinly provisioned logical volume `myLvol`.
The resulting compression bdev will be named `COMP_LVS/myLvol` where LVS is the name of the
logical volume store that `myLvol` resides on.

The logical volume is referred to as the backing device and once the compression vbdev is
created it cannot be separated from the persistent memory file that will be created in
the specified directory.  If the persistent memory file is not available, the compression
vbdev will also not be available.

By default the vbdev module will choose the QAT driver if the hardware and drivers are
available and loaded.  If not, it will revert to the software-only ISAL driver. The driver
may be specified explicitly with the following command; however, this setting is not persistent,
so it must be applied either upon creation or before the underlying logical volume is loaded in
order to be honored. A value of `0` tells the vbdev module to use QAT if available and otherwise
fall back to ISAL; this is the default, and if it is sufficient the command is not required. A
value of `1` tells the module to use QAT, and if QAT is not available the vbdev will fail to
create or load.  A value of `2`, as shown below, tells the module to use ISAL, and if for some
reason it is not available, the vbdev will fail to create or load.

`rpc.py compress_set_pmd -p 2`

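For reference, the three selector values described above map to the following invocations of
the same command:

~~~
# Default: use QAT if available, otherwise fall back to ISAL.
rpc.py compress_set_pmd -p 0

# Require QAT; fail to create or load the vbdev if it is unavailable.
rpc.py compress_set_pmd -p 1

# Require ISAL; fail to create or load the vbdev if it is unavailable.
rpc.py compress_set_pmd -p 2
~~~
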
To remove a compression vbdev, use the following command, which will also delete the PMEM
file.  If the logical volume is deleted, the PMEM file will not be removed and the
compression vbdev will not be available.

`rpc.py bdev_compress_delete COMP_LVS/myLvol`

To list compression volumes that are only available for deletion because their PMEM file
was missing, use the following command. The name parameter is optional; if not included, the
command lists all such volumes, and if used it returns the given name or an error that the
device does not exist.

`rpc.py bdev_compress_get_orphans --name COMP_Nvme0n1`

# Crypto Virtual Bdev Module {#bdev_config_crypto}

The crypto virtual bdev module can be configured to provide at-rest data encryption
for any underlying bdev. The module relies on the DPDK CryptoDev Framework to provide
all cryptographic functionality. The framework provides support for many different software-only
cryptographic modules as well as hardware-assisted support for the Intel QAT board. The
framework also provides support for cipher, hash, authentication and AEAD functions. At this
time the SPDK virtual bdev module supports cipher only as follows:

- AESN-NI Multi Buffer Crypto Poll Mode Driver: RTE_CRYPTO_CIPHER_AES128_CBC
- Intel(R) QuickAssist (QAT) Crypto Poll Mode Driver: RTE_CRYPTO_CIPHER_AES128_CBC
  (Note: QAT is functional; however, it is marked as experimental until the hardware has
  been fully integrated with the SPDK CI system.)

In order to support using the bdev block offset (LBA) as the initialization vector (IV),
the crypto module breaks up all I/O into crypto operations of a size equal to the block
size of the underlying bdev.  For example, a 4K I/O to a bdev with a 512B block size
would result in 8 cryptographic operations.

For reads, the buffer provided to the crypto module will be used as the destination buffer
for unencrypted data.  For writes, however, a temporary scratch buffer is used as the
destination buffer for encryption, which is then passed on to the underlying bdev as the
write buffer.  This is done to avoid encrypting the data in the original source buffer, which
may cause problems in some use cases.

Example command

`rpc.py bdev_crypto_create NVMe1n1 CryNvmeA crypto_aesni_mb 0123456789123456`

This command will create a crypto vbdev called 'CryNvmeA' on top of the NVMe bdev
'NVMe1n1' and will use the DPDK software driver 'crypto_aesni_mb' and the key
'0123456789123456'.

To remove the vbdev use the bdev_crypto_delete command.

`rpc.py bdev_crypto_delete CryNvmeA`

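Putting the pieces together, the sketch below stacks a crypto vbdev on a local NVMe namespace
by reusing commands shown elsewhere in this guide; the PCI address and key are placeholders:

~~~
# Attach a local NVMe controller; its first namespace is exposed as NVMe1n1.
rpc.py bdev_nvme_attach_controller -b NVMe1 -t PCIe -a 0000:01:00.0

# Layer an encrypting vbdev named CryNvmeA on top of NVMe1n1.
rpc.py bdev_crypto_create NVMe1n1 CryNvmeA crypto_aesni_mb 0123456789123456

# Confirm that both bdevs are registered.
rpc.py bdev_get_bdevs
~~~
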
# Delay Bdev Module {#bdev_config_delay}

The delay vbdev module is intended to apply a predetermined additional latency on top of a lower
level bdev. This enables the simulation of the latency characteristics of a device during the functional
or scalability testing of an SPDK application. For example, to simulate the effect of drive latency when
processing I/Os, one could configure a null bdev with a delay bdev on top of it.

The delay bdev module is not intended to provide a high fidelity replication of a specific NVMe drive's latency;
instead, its main purpose is to provide a "big picture" understanding of how a generic latency affects a given
application.

A delay bdev is created using the `bdev_delay_create` RPC. This RPC takes six arguments, one for the name
of the delay bdev and one for the name of the base bdev. The remaining four arguments represent the following
latency values: average read latency, average write latency, p99 read latency, and p99 write latency.
Within the context of the delay bdev, p99 latency means that one percent of the I/O will be delayed by at
least the value of the p99 latency before being completed to the upper level protocol. All of the latency values
are measured in microseconds.

Example command:

`rpc.py bdev_delay_create -b Null0 -d delay0 -r 10 --nine-nine-read-latency 50 -w 30 --nine-nine-write-latency 90`

This command will create a delay bdev with average read and write latencies of 10 and 30 microseconds and p99 read
and write latencies of 50 and 90 microseconds respectively.

A delay bdev can be deleted using the `bdev_delay_delete` RPC.

Example command:

`rpc.py bdev_delay_delete delay0`

# GPT (GUID Partition Table) {#bdev_config_gpt}

The GPT virtual bdev driver is enabled by default and does not require any configuration.
It will automatically detect @ref bdev_ug_gpt on any attached bdev and will create
virtual bdevs, possibly multiple, for the partitions it finds.

## SPDK GPT partition table {#bdev_ug_gpt}

The SPDK partition type GUID is `7c5222bd-8f5d-4087-9c00-bf9843c7b58c`. Existing SPDK bdevs
can be exposed as Linux block devices via NBD and then can be partitioned with
standard partitioning tools. After partitioning, the bdevs will need to be deleted and
attached again for the GPT bdev module to see any changes. The NBD kernel module must be
loaded first. To create an NBD device, use the `nbd_start_disk` RPC command.

Example command

`rpc.py nbd_start_disk Malloc0 /dev/nbd0`

This will expose an SPDK bdev `Malloc0` under the `/dev/nbd0` block device.

To remove an NBD device, use the `nbd_stop_disk` RPC command.

Example command

`rpc.py nbd_stop_disk /dev/nbd0`

To display the full list of NBD devices, or a specified one, use the `nbd_get_disks` RPC command.

Example command

`rpc.py nbd_get_disks -n /dev/nbd0`

## Creating a GPT partition table using NBD {#bdev_ug_gpt_create_part}

~~~
# Expose bdev Nvme0n1 as kernel block device /dev/nbd0 by JSON-RPC
rpc.py nbd_start_disk Nvme0n1 /dev/nbd0

# Create GPT partition table.
parted -s /dev/nbd0 mklabel gpt

# Add a partition consuming 50% of the available space.
parted -s /dev/nbd0 mkpart MyPartition '0%' '50%'

# Change the partition type to the SPDK GUID.
# sgdisk is part of the gdisk package.
sgdisk -t 1:7c5222bd-8f5d-4087-9c00-bf9843c7b58c /dev/nbd0

# Stop the NBD device (stop exporting /dev/nbd0).
rpc.py nbd_stop_disk /dev/nbd0

# Now Nvme0n1 is configured with a GPT partition table, and
# the first partition will be automatically exposed as
# Nvme0n1p1 in SPDK applications.
~~~

# iSCSI bdev {#bdev_config_iscsi}

The SPDK iSCSI bdev driver depends on libiscsi and hence is not enabled by default.
In order to use it, build SPDK with an extra `--with-iscsi-initiator` configure option.

The following command creates an `iSCSI0` bdev from a single LUN exposed at the given iSCSI URL
with `iqn.2016-06.io.spdk:init` as the reported initiator IQN.

`rpc.py bdev_iscsi_create -b iSCSI0 -i iqn.2016-06.io.spdk:init --url iscsi://127.0.0.1/iqn.2016-06.io.spdk:disk1/0`

The URL is in the following format:
`iscsi://[<username>[%<password>]@]<host>[:<port>]/<target-iqn>/<lun>`

# Linux AIO bdev {#bdev_config_aio}

The SPDK AIO bdev driver provides SPDK block layer access to Linux kernel block
devices or a file on a Linux filesystem via Linux AIO. Note that O_DIRECT is
used and thus bypasses the Linux page cache. This mode is probably as close to
a typical kernel based target as a user space target can get without using a
user-space driver. To create an AIO bdev, use the `bdev_aio_create` RPC
command.

Example commands

`rpc.py bdev_aio_create /dev/sda aio0`

This command will create an `aio0` device from /dev/sda.

`rpc.py bdev_aio_create /tmp/file file 8192`

This command will create a `file` device with block size 8192 from /tmp/file.

To delete an aio bdev use the bdev_aio_delete command.

`rpc.py bdev_aio_delete aio0`

# OCF Virtual bdev {#bdev_config_cas}

The OCF virtual bdev module is based on the [Open CAS Framework](https://github.com/Open-CAS/ocf) - a
high performance block storage caching meta-library.
To enable the module, configure SPDK using the `--with-ocf` flag.
An OCF bdev can be used to enable caching for any underlying bdev.

Below is an example command for creating an OCF bdev:

`rpc.py bdev_ocf_create Cache1 wt Malloc0 Nvme0n1`

This command will create a new OCF bdev `Cache1` with bdev `Malloc0` as the caching device,
`Nvme0n1` as the core device, and an initial cache mode of `Write-Through`.
`Malloc0` will be used as the cache for `Nvme0n1`, so data written to `Cache1` will eventually be
present on `Nvme0n1`.
By default, OCF will be configured with a cache line size equal to 4KiB
and non-volatile metadata will be disabled.

To remove `Cache1`:

`rpc.py bdev_ocf_delete Cache1`

During removal the OCF cache will be stopped and all cached data will be written to the core device.

Note that OCF has a per-device RAM requirement
of about 56000 + _cache device size_ * 58 / _cache line size_ (in bytes).
For example, a 100 GiB cache device with the default 4 KiB cache line size needs roughly 1.4 GiB of RAM.
To get more information on OCF
please visit the [OCF documentation](https://open-cas.github.io/).

# Malloc bdev {#bdev_config_malloc}

Malloc bdevs are ramdisks. Because of their nature they are volatile. They are created from the
hugepage memory given to the SPDK application.

# Null {#bdev_config_null}

The SPDK null bdev driver is a dummy block I/O target that discards all writes and returns undefined
data for reads.  It is useful for benchmarking the rest of the bdev I/O stack with minimal block
device overhead and for testing configurations that can't easily be created with the Malloc bdev.
To create a Null bdev, use the `bdev_null_create` RPC command.

Example command

`rpc.py bdev_null_create Null0 8589934592 4096`

This command will create an 8 petabyte `Null0` device with block size 4096.

To delete a null bdev use the bdev_null_delete command.

`rpc.py bdev_null_delete Null0`

# NVMe bdev {#bdev_config_nvme}

There are two ways to create a block device based on an NVMe device in SPDK. The first
way is to connect a local PCIe drive and the second is to connect to an NVMe-oF device.
In both cases, use the `bdev_nvme_attach_controller` RPC command to achieve that.

Example commands

`rpc.py bdev_nvme_attach_controller -b NVMe1 -t PCIe -a 0000:01:00.0`

This command will create an NVMe bdev for a physical PCIe device in the system.

`rpc.py bdev_nvme_attach_controller -b Nvme0 -t RDMA -a 192.168.100.1 -f IPv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1`

This command will create an NVMe bdev for an NVMe-oF resource.

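For reference, a raw JSON-RPC request for the NVMe-oF case might look like the sketch below;
the parameter names are assumed to mirror the command-line flags shown above:

~~~
{
  "jsonrpc": "2.0",
  "method": "bdev_nvme_attach_controller",
  "params": {
    "name": "Nvme0",
    "trtype": "RDMA",
    "traddr": "192.168.100.1",
    "adrfam": "IPv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1"
  },
  "id": 1
}
~~~
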
To remove an NVMe controller use the bdev_nvme_detach_controller command.

`rpc.py bdev_nvme_detach_controller Nvme0`

This command will detach the NVMe controller named Nvme0.

## NVMe bdev character device {#bdev_config_nvme_cuse}

This feature is considered experimental.

Example commands

`rpc.py bdev_nvme_cuse_register -n Nvme0 -p spdk/nvme0`

This command will register the /dev/spdk/nvme0 character device associated with the Nvme0
controller. If there are namespaces created on the Nvme0 controller, a device
/dev/spdk/nvme0nX is created for each namespace.

CUSE devices are removed from the system when the NVMe controller is detached or unregistered
with the command:

`rpc.py bdev_nvme_cuse_unregister -n Nvme0`

# Logical volumes {#bdev_ug_logical_volumes}

The Logical Volumes library is a flexible storage space management system. It allows
creating and managing virtual block devices with variable size on top of other bdevs.
The SPDK Logical Volume library is built on top of @ref blob. For a detailed description
please refer to @ref lvol.

## Logical volume store {#bdev_ug_lvol_store}

Before creating any logical volumes (lvols), an lvol store has to be created first on
the selected block device. The lvol store is the container for lvols, responsible for managing
the assignment of underlying bdev space to lvol bdevs and for storing metadata. To create an
lvol store, use the `bdev_lvol_create_lvstore` RPC command.

Example command

`rpc.py bdev_lvol_create_lvstore Malloc2 lvs -c 4096`

This will create an lvol store named `lvs` with cluster size 4096, built on top of
the `Malloc2` bdev. The response contains a uuid, which is the unique lvol store
identifier.

The list of available lvol stores can be retrieved with the `bdev_lvol_get_lvstores` RPC command (no
parameters available).

Example response

~~~
{
  "uuid": "330a6ab2-f468-11e7-983e-001e67edf35d",
  "base_bdev": "Malloc2",
  "free_clusters": 8190,
  "cluster_size": 8192,
  "total_data_clusters": 8190,
  "block_size": 4096,
  "name": "lvs"
}
~~~

To delete an lvol store, use the `bdev_lvol_delete_lvstore` RPC command.

Example commands

`rpc.py bdev_lvol_delete_lvstore -u 330a6ab2-f468-11e7-983e-001e67edf35d`

`rpc.py bdev_lvol_delete_lvstore -l lvs`

## Lvols {#bdev_ug_lvols}

To create lvols on an existing lvol store, use the `bdev_lvol_create` RPC command.
Each created lvol will be represented by a new bdev.

Example commands

`rpc.py bdev_lvol_create lvol1 25 -l lvs`

`rpc.py bdev_lvol_create lvol2 25 -u 330a6ab2-f468-11e7-983e-001e67edf35d`

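As an end-to-end sketch reusing the commands above, the following sequence creates an lvol
store on `Malloc2`, carves an lvol out of it, and lists the resulting bdevs:

~~~
# Create an lvol store named lvs on top of Malloc2 with a 4096-byte cluster size.
rpc.py bdev_lvol_create_lvstore Malloc2 lvs -c 4096

# Create an lvol named lvol1 in that lvol store (size argument as in the example above).
rpc.py bdev_lvol_create lvol1 25 -l lvs

# The new lvol shows up as a regular bdev.
rpc.py bdev_get_bdevs
~~~
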
# RAID {#bdev_ug_raid}

The RAID virtual bdev module provides functionality to combine any SPDK bdevs into
one RAID bdev. Currently SPDK supports only RAID 0. RAID functionality does not
store on-disk metadata on the member disks, so the user must recreate the RAID
volume when restarting the application. The user may specify member disks to create a RAID
volume even if they do not exist yet - as the member disks are registered at
a later time, the RAID module will claim them and will surface the RAID volume
after all of the member disks are available. It is allowed to use disks of
different sizes - the smallest disk size will be the amount of space used on
each member disk.

Example commands

`rpc.py bdev_raid_create -n Raid0 -z 64 -r 0 -b "lvol0 lvol1 lvol2 lvol3"`

`rpc.py bdev_raid_get_bdevs`

`rpc.py bdev_raid_delete Raid0`

# Passthru {#bdev_config_passthru}

The SPDK Passthru virtual block device module serves as an example of how to write a
virtual block device module. It implements the required functionality of a vbdev module
and demonstrates some other basic features such as the use of per I/O context.

Example commands

`rpc.py bdev_passthru_create -b aio -p pt`

`rpc.py bdev_passthru_delete pt`

# Pmem {#bdev_config_pmem}

The SPDK pmem bdev driver uses a pmemblk pool as the target for block I/O operations. For
details on Pmem memory please refer to the PMDK documentation on the http://pmem.io website.
First, the user needs to configure SPDK to include PMDK support:

`configure --with-pmdk`

To create a pmemblk pool for use with SPDK, use the `bdev_pmem_create_pool` RPC command.

Example command

`rpc.py bdev_pmem_create_pool /path/to/pmem_pool 25 4096`

To get information on a created pmem pool file, use the `bdev_pmem_get_pool_info` RPC command.

Example command

`rpc.py bdev_pmem_get_pool_info /path/to/pmem_pool`

To remove a pmem pool file, use the `bdev_pmem_delete_pool` RPC command.

Example command

`rpc.py bdev_pmem_delete_pool /path/to/pmem_pool`

To create a bdev based on a pmemblk pool file, use the `bdev_pmem_create` RPC
command.

Example command

`rpc.py bdev_pmem_create /path/to/pmem_pool -n pmem`

To remove a block device representation use the bdev_pmem_delete command.

`rpc.py bdev_pmem_delete pmem`

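The individual commands above can be combined into a full life cycle; the sketch below shows
one possible order, creating the pool before the bdev and tearing down in the reverse order:

~~~
# Create the pmemblk pool file.
rpc.py bdev_pmem_create_pool /path/to/pmem_pool 25 4096

# Create a bdev named pmem on top of it.
rpc.py bdev_pmem_create /path/to/pmem_pool -n pmem

# Inspect the pool backing the bdev.
rpc.py bdev_pmem_get_pool_info /path/to/pmem_pool

# Tear everything down: remove the bdev first, then the pool file.
rpc.py bdev_pmem_delete pmem
rpc.py bdev_pmem_delete_pool /path/to/pmem_pool
~~~
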
# Virtio Block {#bdev_config_virtio_blk}

The Virtio-Block driver allows creating SPDK bdevs from Virtio-Block devices.

The following command creates a Virtio-Block device named `VirtioBlk0` from a vhost-user
socket `/tmp/vhost.0` exposed directly by SPDK @ref vhost. Optional `vq-count` and
`vq-size` params specify the number of request queues and the queue depth to be used.

`rpc.py bdev_virtio_attach_controller --dev-type blk --trtype user --traddr /tmp/vhost.0 --vq-count 2 --vq-size 512 VirtioBlk0`

The driver can also be used inside QEMU-based VMs. The following command creates a Virtio
Block device named `VirtioBlk1` from a Virtio PCI device at address `0000:01:00.0`.
The entire configuration will be read automatically from PCI Configuration Space. It will
reflect all parameters passed to QEMU's vhost-user-blk-pci device.

`rpc.py bdev_virtio_attach_controller --dev-type blk --trtype pci --traddr 0000:01:00.0 VirtioBlk1`

Virtio-Block devices can be removed with the following command

`rpc.py bdev_virtio_detach_controller VirtioBlk0`

# Virtio SCSI {#bdev_config_virtio_scsi}

The Virtio-SCSI driver allows creating SPDK block devices from Virtio-SCSI LUNs.

Virtio-SCSI bdevs are created the same way as Virtio-Block ones.

`rpc.py bdev_virtio_attach_controller --dev-type scsi --trtype user --traddr /tmp/vhost.0 --vq-count 2 --vq-size 512 VirtioScsi0`

`rpc.py bdev_virtio_attach_controller --dev-type scsi --trtype pci --traddr 0000:01:00.0 VirtioScsi0`

Each Virtio-SCSI device may export up to 64 block devices named VirtioScsi0t0 ~ VirtioScsi0t63,
one LUN (LUN0) per SCSI device. The above two commands will output the names of all exposed bdevs.

Virtio-SCSI devices can be removed with the following command

`rpc.py bdev_virtio_detach_controller VirtioScsi0`

Removing a Virtio-SCSI device will destroy all its bdevs.