..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2016 Intel Corporation.

Vhost Sample Application
========================

The vhost sample application demonstrates integration of the Data Plane
Development Kit (DPDK) with the Linux* KVM hypervisor by implementing the
vhost-net offload API. The sample application performs simple packet
switching between virtual machines based on Media Access Control (MAC)
address or Virtual Local Area Network (VLAN) tag. The splitting of Ethernet
traffic from an external switch is performed in hardware by the Virtual
Machine Device Queues (VMDQ) and Data Center Bridging (DCB) features of
the Intel® 82599 10 Gigabit Ethernet Controller.

Testing steps
-------------

This section shows how to test a typical PVP (physical-virtual-physical)
case with the vhost-switch sample: packets are received from the physical
NIC port and enqueued to the VM's Rx queue. The guest testpmd, running in
its default forwarding mode (io forward), moves those packets to its Tx
queue. The vhost-switch example, in turn, dequeues the packets and sends
them back out of the same physical NIC port.

Build
~~~~~

To compile the sample application, see :doc:`compiling`.

The application is located in the ``vhost`` sub-directory.

.. note::
   In this example, you need to build DPDK both on the host and inside
   the guest.

Start the vswitch example
~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   ./dpdk-vhost-switch -l 0-3 -n 4 --socket-mem 1024 \
        -- --socket-file /tmp/sock0 --client \
        ...

Check the `Parameters`_ section for an explanation of these parameters.

.. _vhost_app_run_vm:

Start the VM
~~~~~~~~~~~~

.. code-block:: console

   qemu-system-x86_64 -machine accel=kvm -cpu host \
        -m $mem -object memory-backend-file,id=mem,size=$mem,mem-path=/dev/hugepages,share=on \
        -mem-prealloc -numa node,memdev=mem \
        \
        -chardev socket,id=char1,path=/tmp/sock0,server \
        -netdev type=vhost-user,id=hostnet1,chardev=char1 \
        -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:00:00:14 \
        ...

.. note::
   For basic vhost-user support, QEMU 2.2 (or above) is required. Some
   specific features may need a higher version, such as QEMU 2.7 (or
   above) for the reconnect feature.

.. _vhost_app_run_dpdk_inside_guest:

Run testpmd inside guest
~~~~~~~~~~~~~~~~~~~~~~~~

Make sure you have DPDK built inside the guest. Also make sure the
corresponding virtio-net PCI device is bound to a UIO driver, which
could be done by:

.. code-block:: console

   modprobe uio_pci_generic
   dpdk/usertools/dpdk-devbind.py -b uio_pci_generic 0000:00:04.0

Then start testpmd for packet forwarding testing.

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-1 -- -i
   > start tx_first

Inject packets
--------------

When a virtio-net device is connected to vhost-switch, it is assigned a
VLAN tag starting from 1000. Configure your packet generator with the
right MAC and VLAN tag, and you should see the following log from the
vhost-switch console, indicating that it works::

   VHOST_DATA: (0) mac 52:54:00:00:00:14 and vlan 1000 registered

.. _vhost_app_parameters:

Parameters
----------

**--socket-file path**
Specifies the vhost-user socket file path.

**--client**
DPDK vhost-user acts as the client when this option is given: QEMU
creates the socket file and DPDK connects to it. Otherwise, DPDK acts
as the server and creates the socket file itself.
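As a quick illustration of how the two modes pair up with QEMU, here is a
sketch reusing the command lines shown earlier; only the ``--client``
flag on the DPDK side and the ``server`` chardev option on the QEMU side
differ between the two pairings.

.. code-block:: console

   # DPDK in client mode (as in the examples above): QEMU owns the socket.
   ./dpdk-vhost-switch -l 0-3 -n 4 --socket-mem 1024 \
        -- --socket-file /tmp/sock0 --client
   # ... pairs with a QEMU chardev that creates the socket:
   #     -chardev socket,id=char1,path=/tmp/sock0,server

   # DPDK in server mode (no --client): DPDK creates the socket.
   ./dpdk-vhost-switch -l 0-3 -n 4 --socket-mem 1024 \
        -- --socket-file /tmp/sock0
   # ... pairs with a QEMU chardev that connects to it:
   #     -chardev socket,id=char1,path=/tmp/sock0

Note that client mode is also what allows the reconnect feature
mentioned above (QEMU 2.7 or above) to work after the switch restarts.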
**--vm2vm mode**
The vm2vm parameter sets the mode of packet switching between guests on
the host. A combined usage sketch follows the parameter list below.

- 0 disables vm2vm, meaning a VM's packets always go to the NIC port.
- 1 enables normal MAC-lookup packet routing.
- 2 enables hardware-mode packet forwarding between guests: all packets
  go to the NIC port, and the hardware L2 switch determines, based on
  the packet's destination MAC address and VLAN tag, which guest the
  packet should be forwarded to, or whether it should be sent externally.

**--mergeable 0|1**
Set 0/1 to disable/enable the mergeable Rx feature. It is disabled by
default.

**--stats interval**
The stats parameter controls the printing of virtio-net device statistics.
The parameter specifies an interval (in seconds) at which to print
statistics; an interval of 0 seconds disables statistics.

**--rx-retry 0|1**
The rx-retry option enables/disables enqueue retries when the guest's Rx
queue is full. This feature resolves packet loss observed at high data
rates by allowing the receive path to delay and retry. This option is
enabled by default.

**--rx-retry-num num**
The rx-retry-num option specifies the number of retries on an Rx burst.
It takes effect only when rx retry is enabled. The default value is 4.

**--rx-retry-delay usec**
The rx-retry-delay option specifies the timeout (in microseconds) between
retries on an Rx burst. It takes effect only when rx retry is enabled.
The default value is 15.

**--dequeue-zero-copy**
Dequeue zero copy is enabled when this option is given. Note that if the
NIC is bound to a driver with IOMMU enabled, dequeue zero copy cannot
work in VM2NIC mode (vm2vm=0), because the IOMMU DMA mapping for guest
memory is currently not set up.

**--vlan-strip 0|1**
The VLAN strip option has been removed, because different NICs behave
differently when VLAN strip is disabled. Such a feature, which depends
heavily on hardware, was removed from this example to reduce confusion.
VLAN strip is now always enabled and cannot be disabled.

**--builtin-net-driver**
A very simple vhost-user net driver, which demonstrates how to use the
generic vhost APIs, is used when this option is given. It is disabled by
default.

**--dma-type**
This parameter specifies the DMA type for the async vhost-user net
driver, which demonstrates how to use the async vhost APIs. It is used
in combination with --dmas.

**--dmas**
This parameter specifies the DMA device assigned to a vhost device. The
async vhost-user net driver is used if --dmas is set. For example,
--dmas [txd0@00:04.0,txd1@00:04.1] means DMA channel 00:04.0 is used for
vhost device 0 enqueue operations and DMA channel 00:04.1 is used for
vhost device 1 enqueue operations.
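To show how these options compose, here is a sketch of a fuller command
line. The core, memory, and socket values are simply the ones used
earlier in this guide, and the particular stats interval and retry
figures are illustrative; adjust them to your setup.

.. code-block:: console

   # Sketch: vhost-switch with several of the options above combined.
   # MAC-lookup switching between guests (--vm2vm 1), mergeable Rx
   # buffers enabled, statistics printed every second, and a larger
   # Rx retry budget for high packet rates.
   ./dpdk-vhost-switch -l 0-3 -n 4 --socket-mem 1024 \
        -- --socket-file /tmp/sock0 --client \
           --vm2vm 1 --mergeable 1 --stats 1 \
           --rx-retry 1 --rx-retry-num 8 --rx-retry-delay 30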
Common Issues
-------------

* QEMU fails to allocate memory on hugetlbfs, with an error like the
  following::

     file_ram_alloc: can't mmap RAM pages: Cannot allocate memory

  When running QEMU, the above error indicates that it has failed to
  allocate memory for the Virtual Machine on the hugetlbfs. This is
  typically due to insufficient hugepages being free to support the
  allocation request. The number of free hugepages can be checked as
  follows:

  .. code-block:: console

     dpdk-hugepages.py --show

  The command above shows how many hugepages are free to support QEMU's
  allocation request.

* Failed to build DPDK in VM

  Make sure the "-cpu host" QEMU option is given.

* Device start fails if the NIC's max queues > the default of 128

  The mbuf pool size depends on the MAX_QUEUES configuration. If the
  NIC's maximum queue number is larger than 128, device start will fail
  due to insufficient mbufs.

* Option "builtin-net-driver" is incompatible with QEMU

  The QEMU vhost net device fails to start if the protocol features are
  not negotiated. The DPDK virtio-user PMD can be used as a replacement
  for QEMU.

* Device start fails when enabling "builtin-net-driver" without memory
  pre-allocation

  The builtin example does not support dynamic memory allocation. When
  the vhost backend enables "builtin-net-driver", the "--socket-mem"
  option should be added on the virtio-user PMD side as a startup item.
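As a sketch of the last two points, a virtio-user testpmd instance can
act as the frontend in place of QEMU. The exact vdev arguments, core
list, and file prefix below are illustrative assumptions: ``server=1``
assumes vhost-switch was started with ``--client`` on ``/tmp/sock0``,
and ``--socket-mem`` pre-allocates memory, as the builtin net driver
requires.

.. code-block:: console

   # Sketch: virtio-user PMD as the frontend instead of QEMU.
   # --socket-mem pre-allocates memory (required by builtin-net-driver);
   # server=1 assumes vhost-switch runs in client mode on /tmp/sock0;
   # --file-prefix keeps this instance's hugepage files separate.
   ./<build_dir>/app/dpdk-testpmd -l 4-5 -n 4 --socket-mem 1024 --no-pci \
        --vdev 'net_virtio_user0,path=/tmp/sock0,server=1' \
        --file-prefix=virtio-user -- -i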