..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2018 Intel Corporation.

Vdpa Sample Application
=======================

The vdpa sample application creates vhost-user sockets by using the
vDPA backend. vDPA stands for vhost Data Path Acceleration, which utilizes
virtio ring compatible devices to serve the virtio driver directly and so
enables datapath acceleration. As the vDPA driver can help to set up the
vhost datapath, this application doesn't need to launch dedicated worker
threads for vhost enqueue/dequeue operations.

Testing steps
-------------

This section shows the steps of how to start VMs with the vDPA vhost-user
backend and verify network connection & live migration.

Build
~~~~~

To compile the sample application see :doc:`compiling`.

The application is located in the ``vdpa`` sub-directory.

Start the vdpa example
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

    ./dpdk-vdpa [EAL options] -- [--client] [--interactive|-i] or [--iface SOCKET_PATH]

where

* --client means running the vdpa app in client mode; in client mode, QEMU needs
  to run in server mode and take charge of socket file creation.
* --iface specifies the path prefix of the UNIX domain socket file, e.g.
  /tmp/vhost-user-, then the socket files will be named /tmp/vhost-user-<n>
  (n starts from 0).
* --interactive means running the vDPA sample in interactive mode, where the
  following commands are supported:

  #. help: show help message

  #. list: list all available vDPA devices

  #. create: create a new vDPA port with socket file and vDPA device address

  #. stats: show statistics of virtio queues

  #. quit: unregister vhost driver and exit the application

Take the IFCVF driver for example:

.. code-block:: console

    ./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
        -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
        -- --interactive

.. note::
    Here 0000:06:00.3 and 0000:06:00.4 refer to virtio ring compatible devices,
    and we need to bind vfio-pci to them before running the vdpa sample.

    * modprobe vfio-pci
    * ./usertools/dpdk-devbind.py -b vfio-pci 06:00.3 06:00.4

Then we can create 2 vdpa ports in the interactive command line.

.. code-block:: console

    vdpa> list
    device id       device address  queue num       supported features
    0               0000:06:00.3    1               0x14c238020
    1               0000:06:00.4    1               0x14c238020
    2               0000:06:00.5    1               0x14c238020

    vdpa> create /tmp/vdpa-socket0 0000:06:00.3
    vdpa> create /tmp/vdpa-socket1 0000:06:00.4

.. _vdpa_app_run_vm:

Start the VMs
~~~~~~~~~~~~~

.. code-block:: console

    qemu-system-x86_64 -cpu host -enable-kvm \
    <snip>
    -mem-prealloc \
    -chardev socket,id=char0,path=<socket_file created in above steps> \
    -netdev type=vhost-user,id=vdpa,chardev=char0 \
    -device virtio-net-pci,netdev=vdpa,mac=00:aa:bb:cc:dd:ee,page-per-vq=on \

After the VMs launch, we can log into them, configure IP addresses, and verify
the network connection via ping or netperf.

.. note::
    QEMU 3.0.0 or newer is suggested, since it extends vhost-user for vDPA.
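For example, a minimal connectivity check run inside the two VMs could look
like the sketch below; the interface name ``eth0`` and the 192.168.1.0/24
addresses are placeholders chosen for illustration, not values set up by the
sample.

.. code-block:: console

    VM1: ip addr add 192.168.1.2/24 dev eth0    # assumed interface name
    VM1: ip link set eth0 up
    VM2: ip addr add 192.168.1.3/24 dev eth0
    VM2: ip link set eth0 up
    VM1: ping -c 4 192.168.1.3                  # VM1 reaches VM2 via the vDPA datapath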
Live Migration
~~~~~~~~~~~~~~

vDPA supports cross-backend live migration: a user can migrate a VM with a SW
vhost backend to a vDPA backend and vice versa. Here are the detailed steps.
Assume A is the source host with the SW vhost VM and B is the destination host
with vDPA.

#. Start the vdpa sample on B and launch a VM with exactly the same parameters
   as the VM on A, in migration-listen mode:

   .. code-block:: console

      B: <qemu-command-line> -incoming tcp:0:4444 (or another port)

#. Start the migration (on source host):

   .. code-block:: console

      A: (qemu) migrate -d tcp:<B ip>:4444 (or another port)

#. Check the status (on source host):

   .. code-block:: console

      A: (qemu) info migrate
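For reference, ``info migrate`` reports the state of the migration, and a
successful run ends with a ``completed`` status. The output below is an
abridged, representative sketch; the values are illustrative and the exact set
of fields varies across QEMU versions.

.. code-block:: console

    A: (qemu) info migrate
    Migration status: completed        # "active" while RAM is still being copied
    total time: 4321 milliseconds      # illustrative value
    transferred ram: 524288 kbytes     # illustrative value

Once the status reports ``completed``, the VM is running on B with the vDPA
backend, and the network connection can be verified again from inside the
migrated VM.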