..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2016 Intel Corporation.

Vhost Sample Application
========================

The vhost sample application demonstrates integration of the Data Plane
Development Kit (DPDK) with the Linux* KVM hypervisor by implementing the
vhost-net offload API. The sample application performs simple packet
switching between virtual machines based on Media Access Control (MAC)
address or Virtual Local Area Network (VLAN) tag. The splitting of Ethernet
traffic from an external switch is performed in hardware by the Virtual
Machine Device Queues (VMDQ) and Data Center Bridging (DCB) features of
the Intel® 82599 10 Gigabit Ethernet Controller.

Testing steps
-------------

This section shows the steps to test a typical PVP case with this
dpdk-vhost sample, where packets are received from the physical NIC
port first and enqueued to the VM's Rx queue. Through the guest testpmd's
default forwarding mode (io forward), those packets will be put into
the Tx queue. The dpdk-vhost example, in turn, gets the packets and
puts them back to the same physical NIC port.

Build
~~~~~

To compile the sample application see :doc:`compiling`.

The application is located in the ``vhost`` sub-directory.

.. note::
   In this example, you need to build DPDK both on the host and inside the guest.

.. _vhost_app_run_vm:

Start the VM
~~~~~~~~~~~~

.. code-block:: console

    qemu-system-x86_64 -machine accel=kvm -cpu host \
        -m $mem -object memory-backend-file,id=mem,size=$mem,mem-path=/dev/hugepages,share=on \
                -mem-prealloc -numa node,memdev=mem \
        \
        -chardev socket,id=char1,path=/tmp/sock0,server \
        -netdev type=vhost-user,id=hostnet1,chardev=char1  \
        -device virtio-net-pci,netdev=hostnet1,id=net1,mac=52:54:00:00:00:14 \
        ...

.. note::
    For basic vhost-user support, QEMU 2.2 (or above) is required. For
    some specific features, a higher version might be needed, such as
    QEMU 2.7 (or above) for the reconnect feature.


Start the vswitch example
~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

        ./dpdk-vhost -l 0-3 -n 4 --socket-mem 1024  \
             -- --socket-file /tmp/sock0 --client \
             ...

Check the `Parameters`_ section for an explanation of what those
parameters mean.

.. _vhost_app_run_dpdk_inside_guest:

Run testpmd inside guest
~~~~~~~~~~~~~~~~~~~~~~~~

Make sure you have DPDK built inside the guest. Also make sure the
corresponding virtio-net PCI device is bound to a UIO or VFIO driver,
which could be done by:

.. code-block:: console

   modprobe vfio-pci
   dpdk/usertools/dpdk-devbind.py -b vfio-pci 0000:00:04.0

Then start testpmd for packet forwarding testing.

.. code-block:: console

    ./<build_dir>/app/dpdk-testpmd -l 0-1 -- -i
    > start tx_first

For more information about vIOMMU, NO-IOMMU and VFIO please refer to the
:doc:`/../linux_gsg/linux_drivers` section of the DPDK Getting Started Guide.

Inject packets
--------------

While a virtio-net device is connected to dpdk-vhost, a VLAN tag (starting
from 1000) is assigned to it. Make sure to configure your packet generator
with the right MAC and VLAN tag; you should then see the following log from
the dpdk-vhost console, which means it is working::

    VHOST_DATA: (0) mac 52:54:00:00:00:14 and vlan 1000 registered

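As an illustration only, such a stream could be generated with a Scapy
one-liner; the interface name ``ens1f0`` below is a placeholder for your
generator port, and Scapy is an assumed (not required) tool:

.. code-block:: console

    python3 -c "from scapy.all import Ether, Dot1Q, IP, UDP, Raw, sendp; \
        sendp(Ether(dst='52:54:00:00:00:14')/Dot1Q(vlan=1000)/IP()/UDP()/Raw(b'x'*64), iface='ens1f0')"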


.. _vhost_app_parameters:

Parameters
----------

**--socket-file path**
Specifies the vhost-user socket file path.
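
As implied by the ``--dmas`` description below, the option can be repeated
to create several vhost-user devices; for instance (a sketch, where the port
mask and the second socket path are only illustrative):

.. code-block:: console

    ./dpdk-vhost -l 0-3 -n 4 -- -p 0x1 \
         --socket-file /tmp/sock0 --socket-file /tmp/sock1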

**--client**
DPDK vhost-user acts as the client when this option is given. In client
mode, QEMU creates the socket file; otherwise, DPDK creates it. Put simply,
it is the server that creates the socket file.
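
For instance, pairing the invocations from the earlier sections (a sketch;
the elided arguments are unchanged): with ``--client`` the QEMU chardev
keeps its ``server`` flag, without ``--client`` it is dropped so that QEMU
connects to the socket DPDK created.

.. code-block:: console

    # DPDK as vhost-user client, QEMU creates the socket:
    ./dpdk-vhost ... -- --socket-file /tmp/sock0 --client ...
    qemu-system-x86_64 ... -chardev socket,id=char1,path=/tmp/sock0,server ...

    # DPDK as vhost-user server, QEMU connects to the socket DPDK created:
    ./dpdk-vhost ... -- --socket-file /tmp/sock0 ...
    qemu-system-x86_64 ... -chardev socket,id=char1,path=/tmp/sock0 ...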


**--vm2vm mode**
The vm2vm parameter sets the mode of packet switching between guests in
the host (a usage sketch follows the list).

- 0 disables vm2vm, implying that a VM's packets will always go to the NIC port.
- 1 means normal MAC lookup packet routing.
- 2 means hardware mode packet forwarding between guests: packets are allowed
  to go to the NIC port, and the hardware L2 switch determines which guest a
  packet should be forwarded to, or whether it needs to be sent externally,
  based on the packet's destination MAC address and VLAN tag.
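
For instance, MAC-lookup switching between two guests could be enabled as
follows (a sketch; the port mask and socket paths are only illustrative):

.. code-block:: console

    ./dpdk-vhost -l 0-3 -n 4 -- -p 0x1 \
         --socket-file /tmp/sock0 --socket-file /tmp/sock1 --vm2vm 1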

**--mergeable 0|1**
Set 0/1 to disable/enable the mergeable Rx feature. It's disabled by default.

**--stats interval**
The stats parameter controls the printing of virtio-net device statistics.
The parameter specifies an interval (in seconds) at which to print the
statistics; an interval of 0 seconds disables statistics.

**--rx-retry 0|1**
The rx-retry option enables/disables enqueue retries when the guest's Rx
queue is full. This feature resolves packet loss observed at high data
rates by allowing the receive path to delay and retry. This option is
enabled by default.

**--rx-retry-num num**
The rx-retry-num option specifies the number of retries on an Rx burst;
it takes effect only when rx retry is enabled. The default value is 4.

**--rx-retry-delay msec**
The rx-retry-delay option specifies the timeout (in microseconds) between
retries on an Rx burst; it takes effect only when rx retry is enabled. The
default value is 15.
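
The three retry options are typically tuned together; for instance (a
sketch, with purely illustrative values):

.. code-block:: console

    ./dpdk-vhost -l 0-3 -n 4 -- -p 0x1 --socket-file /tmp/sock0 \
         --rx-retry 1 --rx-retry-num 8 --rx-retry-delay 30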

**--builtin-net-driver**
When this option is given, a very simple vhost-user net driver is used that
demonstrates how to use the generic vhost APIs. It is disabled by default.

**--dmas**
This parameter is used to specify the DMA device assigned to a vhost device.
The async vhost-user net driver will be used if --dmas is set. For example,
--dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3] means using
DMA channels 00:04.0/00:04.2 for vhost device 0 enqueue/dequeue operations
and DMA channels 00:04.1/00:04.3 for vhost device 1 enqueue/dequeue
operations. The index of the device corresponds to the socket file in order;
that is, vhost device 0 is created through the first socket file, vhost
device 1 through the second socket file, and so on.
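
Putting it together, an async launch for two vhost devices might look like
the following sketch (the DMA channel addresses are the illustrative ones
from the description above; use the ones present on your platform):

.. code-block:: console

    ./dpdk-vhost -l 0-3 -n 4 -- -p 0x1 \
         --socket-file /tmp/sock0 --socket-file /tmp/sock1 \
         --dmas [txd0@00:04.0,txd1@00:04.1,rxd0@00:04.2,rxd1@00:04.3] \
         --client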

**--total-num-mbufs 0-N**
This parameter sets the number of mbufs to be allocated in the mbuf pools;
the default value is 147456. It can be used if the launch of a port fails
due to a shortage of mbufs.

**--tso 0|1**
Disables/enables TCP segmentation offload.

**--tx-csum 0|1**
Disables/enables TX checksum offload.

**-p mask**
Port mask which specifies the ports to be used.

Common Issues
-------------

* QEMU fails to allocate memory on hugetlbfs, with an error like the
  following::

      file_ram_alloc: can't mmap RAM pages: Cannot allocate memory

  When running QEMU, the above error indicates that it has failed to allocate
  memory for the Virtual Machine on the hugetlbfs. This is typically due to
  insufficient hugepages being free to support the allocation request. The
  number of free hugepages can be checked as follows:

  .. code-block:: console

     dpdk-hugepages.py --show

  The command above indicates how many hugepages are free to support QEMU's
  allocation request.
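
  If too few pages are free, more can be reserved before starting QEMU; for
  example (the amount below is only illustrative and assumes the system's
  default hugepage size):

  .. code-block:: console

     dpdk-hugepages.py --setup 4G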

* Failed to build DPDK in VM

  Make sure the "-cpu host" QEMU option is given.

* Device start fails if the NIC's max queues > the default number of 128

  The mbuf pool size depends on the MAX_QUEUES configuration; if the NIC's
  max queue number is larger than 128, device start will fail due to
  insufficient mbufs. This can be adjusted using the ``--total-num-mbufs``
  parameter.
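
  For instance (a sketch; the mbuf count is only illustrative):

  .. code-block:: console

     ./dpdk-vhost -l 0-3 -n 4 -- -p 0x1 --socket-file /tmp/sock0 \
          --total-num-mbufs 524288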

* Option "builtin-net-driver" is incompatible with QEMU

  QEMU vhost net device start will fail if the protocol features are not
  negotiated. The DPDK virtio-user PMD can be used as a replacement for QEMU.

* Device start fails when enabling "builtin-net-driver" without memory
  pre-allocation

  The builtin example does not support dynamic memory allocation. When the
  vhost backend enables "builtin-net-driver", the "--socket-mem" option
  should be added on the virtio-user PMD side as a startup item.
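
  For instance, a virtio-user testpmd sketch (assuming dpdk-vhost was
  started with ``--client``, so virtio-user acts as the server; the vdev
  arguments are only illustrative):

  .. code-block:: console

     ./<build_dir>/app/dpdk-testpmd -l 0-1 --socket-mem 1024 --no-pci \
          --vdev 'net_virtio_user0,path=/tmp/sock0,queues=1,server=1' -- -i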
223