.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2018 Intel Corporation.

Vdpa Sample Application
=======================

The vdpa sample application creates vhost-user sockets by using the
vDPA backend. vDPA stands for vhost Data Path Acceleration, which utilizes
virtio-ring-compatible devices to serve the virtio driver directly and enable
datapath acceleration. Since the vDPA driver sets up the vhost datapath,
this application does not need to launch dedicated worker threads for vhost
enqueue/dequeue operations.

Testing steps
-------------

This section shows the steps to start VMs with the vDPA vhost-user
backend and verify network connectivity and live migration.

Build
~~~~~

To compile the sample application, see :doc:`compiling`.

The application is located in the ``vdpa`` sub-directory.

Start the vdpa example
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   ./dpdk-vdpa [EAL options] -- [--client] [--interactive|-i] or [--iface SOCKET_PATH]

where

* --client means running the vdpa app in client mode; in client mode, QEMU needs
  to run in server mode and take charge of socket file creation.
* --iface specifies the path prefix of the UNIX domain socket file, e.g.
  /tmp/vhost-user-; the socket files will then be named /tmp/vhost-user-<n>
  (n starts from 0).
* --interactive means running the vdpa sample in interactive mode; currently 5
  internal commands are supported:

  1. help: show help message
  2. list: list all available vdpa devices
  3. create: create a new vdpa port with socket file and vdpa device address
  4. stats: show statistics of virtio queues
  5. quit: unregister vhost driver and exit the application

Take the IFCVF driver for example:

.. code-block:: console

   ./dpdk-vdpa -c 0x2 -n 4 --socket-mem 1024,1024 \
           -a 0000:06:00.3,vdpa=1 -a 0000:06:00.4,vdpa=1 \
           -- --interactive

.. note::
   Here 0000:06:00.3 and 0000:06:00.4 refer to virtio-ring-compatible devices,
   and we need to bind vfio-pci to them before running the vdpa sample.

   * modprobe vfio-pci
   * ./usertools/dpdk-devbind.py -b vfio-pci 06:00.3 06:00.4

Then we can create 2 vdpa ports in the interactive command line.

.. code-block:: console

   vdpa> list
   device id       device address  queue num       supported features
   0               0000:06:00.3    1               0x14c238020
   1               0000:06:00.4    1               0x14c238020
   2               0000:06:00.5    1               0x14c238020

   vdpa> create /tmp/vdpa-socket0 0000:06:00.3
   vdpa> create /tmp/vdpa-socket1 0000:06:00.4

.. _vdpa_app_run_vm:

Start the VMs
~~~~~~~~~~~~~

.. code-block:: console

   qemu-system-x86_64 -cpu host -enable-kvm \
   <snip>
   -mem-prealloc \
   -chardev socket,id=char0,path=<socket_file created in above steps> \
   -netdev type=vhost-user,id=vdpa,chardev=char0 \
   -device virtio-net-pci,netdev=vdpa,mac=00:aa:bb:cc:dd:ee,page-per-vq=on \

After the VMs launch, we can log in to the VMs, configure their IP addresses,
and verify the network connection via ping or netperf.

.. note::
   We suggest using QEMU 3.0.0, which extends vhost-user for vDPA.

Live Migration
~~~~~~~~~~~~~~

vDPA supports cross-backend live migration: a user can migrate a VM from a SW
vhost backend to a vDPA backend and vice versa. Here are the detailed steps.
Assume A is the source host with a SW vhost VM and B is the destination host
with vDPA.

1. Start the vdpa sample and launch a VM with exactly the same parameters as
   the VM on A, in migration-listen mode:

.. code-block:: console

   B: <qemu-command-line> -incoming tcp:0:4444 (or other PORT)

2. Start the migration (on source host):

.. code-block:: console

   A: (qemu) migrate -d tcp:<B ip>:4444 (or other PORT)

3. Check the status (on source host):

.. code-block:: console

   A: (qemu) info migrate
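
The status check in step 3 can also be automated over QEMU's QMP interface
instead of typing ``info migrate`` in the human monitor. Below is a minimal
sketch; it assumes the source QEMU was additionally started with
``-qmp unix:/tmp/qmp-a,server,nowait`` (the socket path is an example chosen
here, not something the vdpa sample mandates):

.. code-block:: python

   import json
   import socket


   def migration_status(reply):
       """Extract the 'status' field from a QMP query-migrate reply dict.

       Before a migration has been started, QMP returns an empty object,
       which we report as "none".
       """
       return reply.get("return", {}).get("status", "none")


   def query_migrate(qmp_socket_path="/tmp/qmp-a"):
       """Ask a running QEMU for its migration status over a QMP UNIX socket."""
       with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
           s.connect(qmp_socket_path)
           f = s.makefile("rw")
           json.loads(f.readline())   # QMP greeting banner
           f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
           f.flush()
           json.loads(f.readline())   # capabilities negotiation ack
           f.write(json.dumps({"execute": "query-migrate"}) + "\n")
           f.flush()
           return migration_status(json.loads(f.readline()))

Polling ``query_migrate()`` in a loop until it returns ``completed`` (or
``failed``) mirrors repeatedly running ``info migrate`` by hand.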