# vhost Target {#vhost}

- @ref vhost_intro
- @ref vhost_prereqs
- @ref vhost_start
- @ref vhost_config
- @ref vhost_qemu_config
- @ref vhost_example
- @ref vhost_advanced_topics
- @ref vhost_bugs
A vhost target provides a local storage service as a process running on a local machine.

The following diagram presents how a QEMU-based VM communicates with an SPDK Vhost-SCSI device.

SPDK provides an accelerated vhost target by applying the same user space and polling
techniques used throughout SPDK.

This guide assumes SPDK has been built according to @ref getting_started. The SPDK vhost
target is built with the default configure options.
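In other words, no special configure flags are needed; a plain default build along these lines (a sketch, run from the SPDK source tree) is sufficient:

```bash
./configure
make
```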
Additional command line flags are available for the vhost target.

Param    | Type     | Default                | Description
-------- | -------- | ---------------------- | -----------
-S       | string   | $PWD                   | directory where UNIX domain sockets will be created
The guest OS must contain virtio-scsi or virtio-blk drivers. Most Linux and FreeBSD
distributions include these drivers, while on Windows they must be
installed separately. The SPDK vhost target has been tested with recent versions of Ubuntu,
Fedora, and Windows.
Userspace vhost-scsi target support was added to upstream QEMU in v2.10.0. Run
the following command to confirm your QEMU supports userspace vhost-scsi.

qemu-system-x86_64 -device vhost-user-scsi-pci,help

Userspace vhost-blk target support was added to upstream QEMU in v2.12.0. Run
the following command to confirm your QEMU supports userspace vhost-blk.

qemu-system-x86_64 -device vhost-user-blk-pci,help
## Starting SPDK vhost target {#vhost_start}

First, run the SPDK setup.sh script to set up hugepages for the SPDK
vhost target and the virtual machine.
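For example, a sketch of such an invocation (the `HUGEMEM` value, in megabytes, is only an illustration and should be sized to cover both the vhost target and the VM memory):

```bash
sudo HUGEMEM=2048 scripts/setup.sh
```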
Next, start the SPDK vhost target application. The following command will start vhost
on CPU cores 0 and 1 (cpumask 0x3) with all socket files placed in /var/tmp:

build/bin/vhost -S /var/tmp -m 0x3

To list all available vhost command line options, run:

build/bin/vhost -h
For vhost-scsi, bdevs are exposed as SCSI LUNs on SCSI devices attached to the
vhost-scsi controller in the guest OS.
For vhost-blk, bdevs are exposed directly as block devices in the guest OS and are
not associated with SCSI devices.
SPDK supports several different types of storage backends, including NVMe,
Linux AIO, malloc ramdisk and Ceph RBD. Refer to the SPDK bdev documentation for
additional information on configuring SPDK storage backends.
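As an illustration, a local NVMe SSD could be attached as a bdev through the `bdev_nvme_attach_controller` RPC; the bdev name and PCI address below are placeholders:

```bash
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:01:00.0
```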
The following RPC will create a 64MB malloc bdev with 512-byte block size:

scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
#### Vhost-SCSI

The following RPC will create a vhost-scsi controller which can be accessed
by QEMU via /var/tmp/vhost.0.
The optional `--cpumask` parameter can directly specify which cores should be
taken into account - in this case always CPU 0. To achieve optimal performance
on NUMA systems, the cpumask should specify cores on the same CPU socket as the
associated VM.

scripts/rpc.py vhost_create_scsi_controller --cpumask 0x1 vhost.0
The following RPC will attach the Malloc0 bdev to the vhost.0 vhost-scsi
controller at SCSI target ID 0. The SPDK Vhost-SCSI device currently supports only
one LUN per SCSI target. Additional LUNs can be added by specifying a different target ID.
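Using `scripts/rpc.py`, a sketch of that call with the names from this example (`vhost_scsi_controller_add_target` takes the controller name, the SCSI target ID and the bdev name):

```bash
scripts/rpc.py vhost_scsi_controller_add_target vhost.0 0 Malloc0
```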
To remove a bdev from a vhost-scsi controller use the following RPC:
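For example, to detach whatever is attached at SCSI target ID 0 of the `vhost.0` controller created above (names follow the earlier example):

```bash
scripts/rpc.py vhost_scsi_controller_remove_target vhost.0 0
```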
#### Vhost-BLK

The following RPC will create a vhost-blk device exposing the Malloc0 bdev. The device
will be pinned to the least occupied CPU core within the given cpumask - in this case
always CPU 0.

scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 vhost.1 Malloc0
It is also possible to create a read-only vhost-blk device by specifying an
extra `-r` or `--readonly` parameter.

scripts/rpc.py vhost_create_blk_controller --cpumask 0x1 -r vhost.1 Malloc0
Now the virtual machine can be started with QEMU. The following command-line
parameters make QEMU share the virtual machine's memory with the SPDK vhost target; the memory
must be backed by hugepages:

-object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
-numa node,memdev=mem
The guest OS image itself is attached as a regular boot disk:

-drive file=guest_os_image.qcow2,if=none,id=disk
-device ide-hd,drive=disk,bootindex=0
#### Vhost-SCSI

-chardev socket,id=char0,path=/var/tmp/vhost.0
-device vhost-user-scsi-pci,id=scsi0,chardev=char0
#### Vhost-BLK

-chardev socket,id=char1,path=/var/tmp/vhost.1
-device vhost-user-blk-pci,id=blk0,chardev=char1
0000:01:00.0 (8086 0953): nvme -> vfio-pci

host:~# ./build/bin/vhost -S /var/tmp -s 1024 -m 0x3 &
[ DPDK EAL parameters: vhost -c 3 -m 1024 --main-lcore=1 --file-prefix=spdk_pid156014 ]

host:~# ./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t pcie -a 0000:01:00.0

host:~# ./scripts/rpc.py bdev_malloc_create 128 4096 -b Malloc0

host:~# ./scripts/rpc.py vhost_create_scsi_controller --cpumask 0x1 vhost.0
VHOST_CONFIG: vhost-user server: socket created, fd: 21
vhost_scsi.c: 840:spdk_vhost_scsi_dev_add_tgt: *NOTICE*: Controller vhost.0: defined target 'Target…
vhost_scsi.c: 840:spdk_vhost_scsi_dev_add_tgt: *NOTICE*: Controller vhost.0: defined target 'Target…

host:~# ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1

host:~# ./scripts/rpc.py vhost_create_blk_controller --cpumask 0x2 vhost.1 Malloc1

host:~# taskset -c 2,3 qemu-system-x86_64 \
--enable-kvm \
-cpu host -smp 2 \
…-m 1G -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on -numa node,memd…
-drive file=guest_os_image.qcow2,if=none,id=disk \
-device ide-hd,drive=disk,bootindex=0 \
-chardev socket,id=spdk_vhost_scsi0,path=/var/tmp/vhost.0 \
-device vhost-user-scsi-pci,id=scsi0,chardev=spdk_vhost_scsi0,num_queues=2 \
-chardev socket,id=spdk_vhost_blk0,path=/var/tmp/vhost.1 \
-device vhost-user-blk-pci,chardev=spdk_vhost_blk0,num-queues=2

guest:~# lsblk --output "NAME,KNAME,MODEL,HCTL,SIZE,VENDOR,SUBSYSTEMS"
We can see that `sdb` and `sdc` are SPDK vhost-scsi LUNs, and `vda` is an SPDK
vhost-blk disk.
### Multi-Queue Block Layer (blk-mq) {#vhost_multiqueue}

For best performance, use the Linux kernel block multi-queue feature with vhost.

3. `sudo update-grub`

specified via the `num-queues` parameter is greater than the number of vCPUs. If you need to use
### Hot-attach/hot-detach {#vhost_hotattach}

Hotplug/hotremove within a vhost controller is called hot-attach/detach. This is to
distinguish it from SPDK bdev hotplug/hotremove: for example, if an NVMe bdev is attached
to a vhost-scsi controller, physically hotremoving the NVMe will trigger vhost-scsi
hot-detach. It is also possible to hot-detach a bdev manually via RPC (see the Hot-detach
section below).
Please also note that hot-attach/detach is Vhost-SCSI-specific. There are no RPCs
to hot-attach/detach a bdev from a Vhost-BLK device. If a Vhost-BLK device exposes
an NVMe bdev that is hotremoved, all the I/O traffic on that Vhost-BLK device will
be aborted - possibly flooding the VM with syslog warnings and errors.
#### Hot-attach

Hot-attach is done by simply attaching a bdev to a vhost controller while the QEMU VM is
already running.
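Reusing the names from the configuration section above, a sketch of hot-attaching `Malloc0` to a running VM's `vhost.0` controller (assuming those names):

```bash
scripts/rpc.py vhost_scsi_controller_add_target vhost.0 0 Malloc0
```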
#### Hot-detach

Just like hot-attach, hot-detach is done by simply removing a bdev from a controller
while the QEMU VM is already running.
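Again assuming the earlier names, a sketch of hot-detaching SCSI target ID 0 from `vhost.0` while the VM is running:

```bash
scripts/rpc.py vhost_scsi_controller_remove_target vhost.0 0
```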
Removing an entire bdev will hot-detach it from a controller as well.
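For instance, deleting the malloc bdev from the earlier example would hot-detach it first (a sketch, assuming `Malloc0` is still attached):

```bash
scripts/rpc.py bdev_malloc_delete Malloc0
```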
### Windows virtio-blk driver before version 0.1.130-1 only works with 512-byte sectors

The Windows `viostor` driver before version 0.1.130-1 is buggy and does not
correctly support vhost-blk devices with non-512-byte block size.
### QEMU vhost-user-blk

QEMU [vhost-user-blk](https://git.qemu.org/?p=qemu.git;a=commit;h=00343e4b54ba) is