..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

.. _virtio_user_as_exception_path:

Virtio_user as Exception Path
=============================

.. note::

   This solution is only applicable to Linux systems.

The virtual device, virtio-user, was originally introduced with the vhost-user
backend as a high performance solution for IPC (Inter-Process Communication)
and user space container networking.

Beyond this originally intended use,
virtio-user can be used in conjunction with the vhost-kernel backend
as a solution for dealing with exception path packets
which need to be injected into the Linux kernel for processing there.
In this regard, virtio-user and vhost in kernel space are an alternative to DPDK KNI
for transferring packets between a DPDK packet processing application and the kernel stack.

This solution has a number of advantages over alternatives such as KNI:

*   Maintenance

    All kernel modules needed by this solution, vhost and vhost-net (kernel),
    are upstreamed and extensively used.

*   Features

    vhost-net is designed to be a networking solution,
    and as such offers many networking-related features,
    such as multi-queue support, TSO and multi-segment buffer support.

*   Performance

    Similar to KNI, this solution uses one or more kthreads
    to send/receive packets to/from user space DPDK applications,
    which minimises the impact on the polling DPDK threads.

The overview of an application using virtio-user as exception path is shown
in :numref:`figure_virtio_user_as_exception_path`.

.. _figure_virtio_user_as_exception_path:

.. figure:: img/virtio_user_as_exception_path.*

   Overview of a DPDK app using virtio-user as exception path


Example Usage With Testpmd
--------------------------

.. note::

   These instructions assume that the vhost/vhost-net kernel modules are available
   and have already been loaded into the running kernel.
   They also assume that the DPDK virtio driver has not been disabled in the DPDK build.

To run a simple test of virtio-user as exception path using testpmd:

#. Compile DPDK and bind a NIC to vfio-pci as documented in :ref:`linux_gsg_linux_drivers`.

   This physical NIC is for communicating with the outside world,
   and serves as a packet source in this example.

#. Run testpmd to forward packets from NIC to kernel,
   passing in a suitable list of logical cores to run on (``-l`` parameter),
   and optionally the PCI address of the physical NIC to use (``-a`` parameter).
   The virtio-user device for interfacing to the kernel is specified via a ``--vdev`` argument,
   taking the parameters described below.

   .. code-block:: console

      /path/to/dpdk-testpmd -l <cores> -a <pci BDF> \
          --vdev=virtio_user0,path=/dev/vhost-net,queues=1,queue_size=1024

   ``path``
     The path to the kernel vhost-net device.

   ``queue_size``
     The number of descriptors per queue, 256 by default.
     Increasing this to 1024 helps to avoid a shortage of descriptors.

   ``queues``
     The number of virtqueues. Each queue is served by a kthread.

#. Once testpmd is running, a new network interface - called ``tap0`` by default -
   will be present on the system.
   This should be configured with an IP address and then enabled for use:

   .. code-block:: console

      ip addr add 192.168.1.1/24 dev tap0
      ip link set dev tap0 up
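
   The interface state can then be checked before proceeding
   (an illustrative verification step, assuming the default ``tap0`` interface name):

   .. code-block:: console

      ip addr show dev tap0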

#. To observe packet forwarding through the kernel,
   a second testpmd instance can be run on the system,
   taking packets from the kernel using an ``af_packet`` socket on the ``tap0`` interface.

   .. code-block:: console

      /path/to/dpdk-testpmd -l <cores> --vdev=net_af_packet0,iface=tap0 --in-memory --no-pci

   When running this instance,
   the ``--in-memory`` flag avoids hugepage naming conflicts with the previous instance,
   while the ``--no-pci`` flag ensures that only the ``af_packet`` interface
   is used for all traffic forwarding.

#. Traffic run into the system through the NIC should now be returned back again,
   having been forwarded through both testpmd instances.
   This can be confirmed by checking the testpmd statistics on testpmd exit.
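
   If the testpmd instances are run in interactive mode (``-i`` flag),
   the per-port counters can also be inspected at runtime
   using the standard testpmd ``show port stats`` command:

   .. code-block:: console

      testpmd> show port stats all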

For more advanced use of virtio-user with testpmd in this scenario,
a number of additional options may also be used.
For example:

* ``--tx-offloads=0x02c``

  This testpmd option enables Tx offloads for UDP and TCP checksum on transmit,
  as well as TCP TSO support.
  The list of the offload flag values can be seen in the header
  `rte_ethdev.h <https://doc.dpdk.org/api/rte__ethdev_8h.html>`_.
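
  As a quick sanity check on how this mask is composed
  (an illustration using shell arithmetic, not a testpmd command;
  the bit values ``RTE_ETH_TX_OFFLOAD_UDP_CKSUM`` = 0x4,
  ``RTE_ETH_TX_OFFLOAD_TCP_CKSUM`` = 0x8 and ``RTE_ETH_TX_OFFLOAD_TCP_TSO`` = 0x20
  are taken from ``rte_ethdev.h``):

  .. code-block:: console

     # OR the individual Tx offload bits together to reproduce the mask
     printf '0x%03x\n' $(( 0x4 | 0x8 | 0x20 ))   # prints 0x02c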

* ``--enable-lro``

  This testpmd option is used to negotiate the VIRTIO_NET_F_GUEST_TSO4 and
  VIRTIO_NET_F_GUEST_TSO6 features, so that large packets from the kernel can be
  transmitted to the DPDK application and then segmented using TSO by the physical NIC.
  If unsupported by the physical NIC, errors may be reported by testpmd with this option.

* Enabling Rx checksum offloads for the physical port:

  Within testpmd, offloads can be enabled and disabled on a per-port basis,
  rather than enabling them for both ports.
  For the physical NIC, it may be desirable to enable checksum offload on packet Rx.
  This may be done as below, if testpmd is run with the ``-i`` flag for interactive mode.

  .. code-block:: console

     testpmd> port stop 0
     testpmd> port config 0 rx_offload tcp_cksum on
     testpmd> port config 0 rx_offload udp_cksum on
     testpmd> port start 0
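
  The resulting per-port offload configuration can then be reviewed
  (``show port <id> rx_offload configuration`` is a standard testpmd command
  in recent DPDK releases):

  .. code-block:: console

     testpmd> show port 0 rx_offload configuration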

* Multiple queue support

  Better performance may be achieved by using multiple queues,
  so that multiple kernel threads are handling the traffic on the kernel side.
  For example, to use 2 queues on both NIC and virtio ports,
  while also enabling Tx offloads and LRO support:

  .. code-block:: console

     /path/to/dpdk-testpmd --vdev=virtio_user0,path=/dev/vhost-net,queues=2,queue_size=1024 -- \
         -i --tx-offloads=0x002c --enable-lro --txq=2 --rxq=2 --txd=1024 --rxd=1024
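
  Once this multi-queue instance is running,
  the kernel threads serving the queues can be observed from another terminal;
  as noted above, one kthread is expected per queue
  (an illustrative check, not required for operation;
  the ``vhost-<pid>`` thread naming comes from the vhost-net kernel module):

  .. code-block:: console

     ps -ef | grep vhost-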
159