..  SPDX-License-Identifier: BSD-3-Clause
    Copyright (c) 2017, Cisco Systems, Inc.
    All rights reserved.

ENIC Poll Mode Driver
=====================

ENIC PMD is the DPDK poll-mode driver for the Cisco Systems Inc. VIC Ethernet
NICs. These adapters are also referred to as vNICs below. If you are running
or would like to run DPDK software applications on Cisco UCS servers using
Cisco VIC adapters, the following documentation is relevant.

Supported Cisco VIC adapters
----------------------------

ENIC PMD supports all recent generations of Cisco VIC adapters including:

- VIC 1200 series
- VIC 1300 series
- VIC 1400/14000 series
- VIC 15000 series

Supported features
------------------

- Unicast, multicast and broadcast transmission and reception
- Receive queue polling
- Port Hardware Statistics
- Hardware VLAN acceleration
- IP checksum offload
- Receive side VLAN stripping
- Multiple receive and transmit queues
- Promiscuous mode
- Setting RX VLAN (supported via UCSM/CIMC only)
- VLAN filtering (supported via UCSM/CIMC only)
- Execution of application by unprivileged system users
- IPV4, IPV6 and TCP RSS hashing
- UDP RSS hashing (1400 series and later adapters)
- Scattered Rx
- MTU update
- SR-IOV virtual function
- Flow API
- Overlay offload

  - Rx/Tx checksum offloads for VXLAN, NVGRE, GENEVE
  - TSO for VXLAN and GENEVE packets
  - Inner RSS

How to obtain ENIC PMD integrated DPDK
--------------------------------------

ENIC PMD support is integrated into the DPDK suite. The dpdk-<version>.tar.gz
archive should be downloaded from https://core.dpdk.org/download/


Configuration information
-------------------------

- **vNIC Configuration Parameters**

  - **Number of Queues**

    The maximum number of receive queues (RQs), work queues (WQs) and
    completion queues (CQs) are configurable on a per vNIC basis
    through the Cisco UCS Manager (CIMC or UCSM).

    These values should be configured as follows:

    - The number of WQs should be greater than or equal to the value of the
      expected nb_tx_q parameter in the call to
      rte_eth_dev_configure()

    - The number of RQs configured in the vNIC should be greater than or
      equal to *twice* the value of the expected nb_rx_q parameter in
      the call to rte_eth_dev_configure().  With the addition of Rx
      scatter, a pair of RQs on the vNIC is needed for each receive
      queue used by DPDK, even if Rx scatter is not being used.
      Having a vNIC with only 1 RQ is not a valid configuration, and
      will fail with an error message.

    - The number of CQs should be set so that there is one CQ for each
      WQ, and one CQ for each pair of RQs.

    For example: If the application requires 3 Rx queues, and 3 Tx
    queues, the vNIC should be configured to have at least 3 WQs, 6
    RQs (3 pairs), and 6 CQs (3 for use by WQs + 3 for use by the 3
    pairs of RQs).

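As a rough sketch, the queue-count rules above reduce to simple arithmetic. The helper below is hypothetical (it is not part of ENIC PMD) and merely restates the WQ/RQ/CQ formulas:

```c
#include <assert.h>

/* Minimum vNIC resources for a DPDK queue configuration, per the rules
 * above: one WQ per Tx queue, one RQ pair per Rx queue, and one CQ per
 * WQ plus one CQ per RQ pair. Hypothetical helper for illustration only.
 */
struct vnic_resources {
	unsigned int wqs;
	unsigned int rqs;
	unsigned int cqs;
};

static struct vnic_resources
min_vnic_resources(unsigned int nb_tx_q, unsigned int nb_rx_q)
{
	struct vnic_resources r;

	r.wqs = nb_tx_q;         /* WQs >= nb_tx_q */
	r.rqs = 2 * nb_rx_q;     /* RQs >= 2 * nb_rx_q (Rx scatter pairs) */
	r.cqs = r.wqs + nb_rx_q; /* one CQ per WQ + one per RQ pair */
	return r;
}
```

For the example above, ``min_vnic_resources(3, 3)`` yields 3 WQs, 6 RQs and 6 CQs.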
  - **Size of Queues**

    Likewise, the number of receive and transmit descriptors are configurable on
    a per-vNIC basis via the UCS Manager and should be greater than or equal to
    the nb_rx_desc and nb_tx_desc parameters expected to be used in the calls
    to rte_eth_rx_queue_setup() and rte_eth_tx_queue_setup() respectively.
    An application requesting more than the set size will be limited to that
    size.

    Unless there is a lack of resources due to creating many vNICs, it
    is recommended that the WQ and RQ sizes be set to the maximum.  This
    gives the application the greatest amount of flexibility in its
    queue configuration.

    - *Note*: Since the introduction of Rx scatter, for performance
      reasons, this PMD uses two RQs on the vNIC per receive queue in
      DPDK.  One RQ holds descriptors for the start of a packet, and the
      second RQ holds the descriptors for the rest of the fragments of
      a packet.  This means that the nb_rx_desc parameter to
      rte_eth_rx_queue_setup() can be greater than 4096.  The exact
      amount will depend on the size of the mbufs being used for
      receives, and the MTU size.

      For example: If the mbuf size is 2048, and the MTU is 9000, then
      receiving a full size packet will take 5 descriptors, 1 from the
      start-of-packet queue, and 4 from the second queue.  Assuming
      that the RQ size was set to the maximum of 4096, then the
      application can specify up to 1024 + 4096 as the nb_rx_desc
      parameter to rte_eth_rx_queue_setup().

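The descriptor arithmetic in the example above can be sketched as follows. This is a simplified model (it ignores mbuf headroom and other per-packet overhead, so the exact limit may differ):

```c
#include <assert.h>

/* Descriptors needed for one packet: ceil(mtu / mbuf_size) buffers in
 * total, of which one comes from the start-of-packet RQ and the rest
 * from the second RQ. Simplified model ignoring mbuf headroom.
 */
static unsigned int
descs_per_packet(unsigned int mtu, unsigned int mbuf_size)
{
	return (mtu + mbuf_size - 1) / mbuf_size;
}

/* Upper bound on nb_rx_desc for rte_eth_rx_queue_setup(): the number of
 * start-of-packet entries the second RQ can back, plus the RQ size.
 */
static unsigned int
max_nb_rx_desc(unsigned int rq_size, unsigned int mtu, unsigned int mbuf_size)
{
	unsigned int per_pkt = descs_per_packet(mtu, mbuf_size);

	return rq_size / (per_pkt - 1) + rq_size;
}
```

With mbuf size 2048, MTU 9000 and an RQ size of 4096, a full-size packet takes 5 descriptors and the bound is 1024 + 4096 = 5120, matching the example.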
  - **Interrupts**

    At least one interrupt per vNIC interface should be configured in the UCS
    manager regardless of the number of receive/transmit queues. The ENIC PMD
    uses this interrupt to get information about link status and errors
    in the fast path.

    In addition to the interrupt for link status and errors, when using Rx queue
    interrupts, increase the number of configured interrupts so that there is at
    least one interrupt for each Rx queue. For example, if the app uses 3 Rx
    queues and wants to use per-queue interrupts, configure 4 (3 + 1) interrupts.

  - **Receive Side Scaling**

    In order to fully utilize RSS in DPDK, enable all RSS related settings in
    CIMC or UCSM. These include the following items listed under
    Receive Side Scaling:
    TCP, IPv4, TCP-IPv4, IPv6, TCP-IPv6, IPv6 Extension, TCP-IPv6 Extension.


SR-IOV Virtual Function
-----------------------

VIC 1400 and later series adapters support SR-IOV.
It can be enabled via both UCSM and CIMC.
Please refer to the following guides to enable SR-IOV virtual functions (VFs).

  - CIMC: `Managing vNICs <https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/c/sw/gui/config/guide/4_3/b_cisco_ucs_c-series_gui_configuration_guide_43/b_Cisco_UCS_C-series_GUI_Configuration_Guide_41_chapter_01011.html#d77871e5874a1635>`_

  - UCSM: `Configuring SRIOV HPN Connection Policies <https://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/ucs-manager/GUI-User-Guides/Network-Mgmt/4-3/b_UCSM_Network_Mgmt_Guide_4_3/b_UCSM_Network_Mgmt_Guide_chapter_01010.html#d21438e9555a1635>`_

Note that the previous SR-IOV implementation that was tied to VM-FEX
(Cisco Virtual Machine Fabric Extender) has been discontinued,
and ENIC PMD no longer supports it.
The current SR-IOV implementation does not require the Fabric Interconnect (FI),
as layer 2 switching is done within the VIC adapter.

Once SR-IOV is enabled, reboot the host OS and follow OS specific steps to create VFs
and assign them to virtual machines (VMs) or containers as necessary.
The VIC physical function (PF) drivers for ESXi and Linux support SR-IOV.
The following shows simplified steps for Linux.

.. code-block:: console

   # echo 4 > /sys/class/net/<pf-interface>/device/sriov_numvfs

   # lspci | grep Cisco | grep Ethernet
   12:00.0 Ethernet controller: Cisco Systems Inc VIC Ethernet NIC (rev a2)
   12:00.1 Ethernet controller: Cisco Systems Inc Device 02b7 (rev a2)
   12:00.2 Ethernet controller: Cisco Systems Inc Device 02b7 (rev a2)
   12:00.3 Ethernet controller: Cisco Systems Inc Device 02b7 (rev a2)
   12:00.4 Ethernet controller: Cisco Systems Inc Device 02b7 (rev a2)

Writing 4 to ``sriov_numvfs`` creates 4 VFs.
``lspci`` shows the VFs and their PCI locations.
Interfaces with device ID ``02b7`` are the VFs.
The following snippet of libvirt XML assigns the VF at ``12:00.1`` to a VM.

.. code-block:: xml

    <interface type="hostdev" managed="yes">
      <mac address="fa:16:3e:46:39:c5"/>
      <driver name='vfio'/>
      <source>
        <address type="pci" domain="0x0000" bus="0x12" slot="0x00" function="0x1"/>
      </source>
      <vlan>
        <tag id="1000"/>
      </vlan>
    </interface>

When the VM instance is started, libvirt will bind the host VF to vfio-pci.
In the VM instance, the VF will now be visible.
In this example, the VF at ``07:00.0`` is seen in the VM instance
and is available for binding to DPDK.

.. code-block:: console

   # lspci | grep Cisco
   07:00.0 Ethernet controller: Cisco Systems Inc Device 02b7 (rev a2)

There are two known limitations of the current SR-IOV implementation.

  - Software Rx statistics

    VFs on old VIC models do not have hardware Rx counters. In this case,
    ENIC PMD counts packets/bytes and reports them as device statistics.

  - Backward compatibility mode

    Old PF drivers on ESXi may lack full admin channel support.
    ENIC PMD detects such a PF driver during initialization
    and reverts to the compatibility mode.
    In this mode, ENIC PMD does not use the admin channel,
    and trust mode (e.g. enabling promiscuous mode on a VF) is not supported.

.. note::

   Passthrough does not require SR-IOV.
   If SR-IOV is not desired, the user may create as many regular vNICs as necessary
   and assign them to VMs as passthrough devices.


.. _enic-generic-flow-api:

Generic Flow API support
------------------------

Generic Flow API (also called "rte_flow" API) is supported. More advanced
capabilities are available when "Advanced Filtering" is enabled on the adapter.
Advanced filtering was added to 1300 series VIC firmware starting with version
2.0.13 for C-series UCS servers and version 3.1.2 for UCSM managed blade
servers. Advanced filtering is available on 1400 series adapters and beyond.
To enable advanced filtering, the 'Advanced filter' radio button should be
selected via CIMC or UCSM followed by a reboot of the server.

- **1200 series VICs**

  5-tuple exact flow support for 1200 series adapters. This allows:

  - Attributes: ingress
  - Items: ipv4, ipv6, udp, tcp (must exactly match src/dst IP
    addresses and ports and all must be specified)
  - Actions: queue and void
  - Selectors: 'is'

- **1300 and later series VICs with advanced filters disabled**

  With advanced filters disabled, an IPv4 or IPv6 item must be specified
  in the pattern.

  - Attributes: ingress
  - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, inner eth, vlan, ipv4, ipv6, udp, tcp
  - Actions: queue and void
  - Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
  - In total, up to 64 bytes of mask is allowed across all headers

- **1300 and later series VICs with advanced filters enabled**

  - Attributes: ingress
  - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, raw, inner eth, vlan, ipv4, ipv6, udp, tcp
  - Actions: queue, mark, drop, flag, rss, passthru, and void
  - Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
  - In total, up to 64 bytes of mask is allowed across all headers

- **1400 and later series VICs with Flow Manager API enabled**

  - Attributes: ingress, egress
  - Items: eth, vlan, ipv4, ipv6, sctp, udp, tcp, vxlan, raw, inner eth, vlan, ipv4, ipv6, sctp, udp, tcp
  - Ingress Actions: count, drop, flag, jump, mark, port_id, passthru, queue, rss, vxlan_decap, vxlan_encap, and void
  - Egress Actions: count, drop, jump, passthru, vxlan_encap, and void
  - Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
  - In total, up to 64 bytes of mask is allowed across all headers

The VIC performs packet matching after applying VLAN strip. If VLAN
stripping is enabled, EtherType in the ETH item corresponds to the
stripped VLAN header's EtherType. Stripping does not affect the VLAN
item. TCI and EtherType in the VLAN item are matched against those in
the (stripped) VLAN header whether stripping is enabled or disabled.

More features may be added in future firmware and new versions of the VIC.
Please refer to the release notes.

.. _overlay_offload:

Overlay Offload
---------------

Recent hardware models support overlay offload. When enabled, the NIC performs
the following operations for VXLAN, NVGRE, and GENEVE packets. In all cases,
inner and outer packets can be IPv4 or IPv6.

- TSO for VXLAN and GENEVE packets.

  Hardware supports NVGRE TSO, but DPDK currently has no NVGRE offload flags.

- Tx checksum offloads.

  The NIC fills in IPv4/UDP/TCP checksums for both inner and outer packets.

- Rx checksum offloads.

  The NIC validates IPv4/UDP/TCP checksums of both inner and outer packets.
  Good checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_GOOD``) indicate that the inner
  packet has the correct checksum, and if applicable, the outer packet also
  has the correct checksum. Bad checksum flags (e.g. ``RTE_MBUF_F_RX_L4_CKSUM_BAD``)
  indicate that the inner and/or outer packets have invalid checksum values.

- Inner Rx packet type classification

  PMD sets inner L3/L4 packet types (e.g. ``RTE_PTYPE_INNER_L4_TCP``), and
  ``RTE_PTYPE_TUNNEL_GRENAT`` to indicate that the packet is tunneled.
  PMD does not set L3/L4 packet types for outer packets.

- Inner RSS

  RSS hash calculation, and therefore queue selection, is done on inner packets.

In order to enable overlay offload, enable VXLAN and/or Geneve on the vNIC
via CIMC or UCSM followed by a reboot of the server. When PMD successfully
enables overlay offload, it prints one of the following messages on the console.

.. code-block:: console

    Overlay offload is enabled (VxLAN)
    Overlay offload is enabled (Geneve)
    Overlay offload is enabled (VxLAN, Geneve)

By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::

    -a 12:00.0,disable-overlay=1

By default, the NIC uses 4789 and 6081 as the VXLAN and Geneve ports,
respectively. The user may change them through
``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as the current
NIC has a single VXLAN port number and a single Geneve port number,
the user cannot configure multiple port numbers for each tunnel type.

Geneve offload support has evolved over VIC models. On older models,
Geneve offload and advanced filters are mutually exclusive.  This is
enforced by UCSM and CIMC, which only allow one of the two features
to be selected at one time. Newer VIC models do not have this restriction.

Ingress VLAN Rewrite
--------------------

VIC adapters can tag, untag, or modify the VLAN headers of ingress
packets. The ingress VLAN rewrite mode controls this behavior. By
default, it is set to pass-through, where the NIC does not modify the
VLAN header in any way so that the application can see the original
header. This mode is sufficient for many applications, but may not be
suitable for others. Such applications may change the mode by setting
``devargs`` parameter ``ig-vlan-rewrite`` to one of the following.

- ``pass``: Pass-through mode. The NIC does not modify the VLAN
  header. This is the default mode.

- ``priority``: Priority-tag default VLAN mode. If the ingress packet
  is tagged with the default VLAN, the NIC replaces its VLAN header
  with the priority tag (VLAN ID 0).

- ``trunk``: Default trunk mode. The NIC tags untagged ingress packets
  with the default VLAN. Tagged ingress packets are not modified. To
  the application, every packet appears as tagged.

- ``untag``: Untag default VLAN mode. If the ingress packet is tagged
  with the default VLAN, the NIC removes or untags its VLAN header so
  that the application sees an untagged packet. As a result, the
  default VLAN becomes `untagged`. This mode can be useful for
  applications such as OVS-DPDK performance benchmarks that utilize
  only the default VLAN and want to see only untagged packets.

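As an illustration, the four modes can be modeled as a pure function on the VLAN state a packet carries. This is a hypothetical model of the behavior described above, not the adapter's implementation:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the ig-vlan-rewrite modes described above. */
enum ig_vlan_rewrite {
	REWRITE_PASS,
	REWRITE_PRIORITY,
	REWRITE_TRUNK,
	REWRITE_UNTAG,
};

struct vlan_state {
	bool tagged;
	unsigned int vlan_id;
};

static struct vlan_state
rewrite_ingress(enum ig_vlan_rewrite mode, struct vlan_state in,
		unsigned int default_vlan)
{
	struct vlan_state out = in;

	switch (mode) {
	case REWRITE_PASS:
		break;				/* header untouched */
	case REWRITE_PRIORITY:
		if (in.tagged && in.vlan_id == default_vlan)
			out.vlan_id = 0;	/* priority tag (VLAN ID 0) */
		break;
	case REWRITE_TRUNK:
		if (!in.tagged) {
			out.tagged = true;	/* tag with the default VLAN */
			out.vlan_id = default_vlan;
		}
		break;
	case REWRITE_UNTAG:
		if (in.tagged && in.vlan_id == default_vlan)
			out.tagged = false;	/* default VLAN untagged */
		break;
	}
	return out;
}
```

For instance, in ``untag`` mode a packet tagged with the default VLAN comes out untagged, while in ``trunk`` mode an untagged packet comes out tagged with the default VLAN.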

Vectorized Rx Handler
---------------------

ENIC PMD includes a version of the receive handler that is vectorized using
AVX2 SIMD instructions. It is meant for bulk, throughput oriented workloads
where reducing cycles/packet in the PMD is a priority. In order to use the
vectorized handler, take the following steps.

- Use a recent version of gcc, icc, or clang and build 64-bit DPDK. If
  the compiler is known to support AVX2, the DPDK build system
  automatically compiles the vectorized handler. Otherwise, the
  handler is not available.

- Set ``devargs`` parameter ``enable-avx2-rx=1`` to explicitly request that
  PMD consider the vectorized handler when selecting the receive handler.
  For example::

    -a 12:00.0,enable-avx2-rx=1

  As the current implementation is intended for field trials, by default, the
  vectorized handler is not considered (``enable-avx2-rx=0``).

- Run on a UCS M4 or later server with CPUs that support AVX2.

PMD selects the vectorized handler when the handler is compiled into
the driver, the user requests its use via ``enable-avx2-rx=1``, the CPU
supports AVX2, and scatter Rx is not used. To verify that the
vectorized handler is selected, enable debug logging
(``--log-level=pmd,debug``) and check the following message.

.. code-block:: console

    enic_use_vector_rx_handler use the non-scatter avx2 Rx handler

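The selection conditions can be summarized as a single predicate. This is a simplified restatement of the rules above, not the PMD's actual code:

```c
#include <assert.h>
#include <stdbool.h>

/* True when the AVX2 Rx handler would be selected: the handler is
 * compiled in, the user requested it (enable-avx2-rx=1), the CPU
 * supports AVX2, and scatter Rx is not in use. Simplified model.
 */
static bool
use_avx2_rx_handler(bool compiled_in, bool requested, bool cpu_has_avx2,
		    bool scatter_rx)
{
	return compiled_in && requested && cpu_has_avx2 && !scatter_rx;
}
```

All four conditions must hold; for example, enabling scatter Rx disqualifies the vectorized handler even when the other three are satisfied.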
64B Completion Queue Entry
--------------------------

Recent VIC adapters support 64B completion queue entries, as well as
16B entries that are available on all adapter models. ENIC PMD enables
and uses 64B entries by default, if available. 64B entries generally
lower CPU cycles per Rx packet, as they avoid partial DMA writes and
reduce cache contention between DMA and polling CPU. The effect is
most pronounced when multiple Rx queues are used on Intel platforms
with Data Direct I/O Technology (DDIO).

If 64B entries are not available, PMD uses 16B entries. The user may
explicitly disable 64B entries and use 16B entries by setting
``devargs`` parameter ``cq64=0``. For example::

    -a 12:00.0,cq64=0

To verify the selected entry size, enable debug logging
(``--log-level=enic,debug``) and check the following messages.

.. code-block:: console

    PMD: rte_enic_pmd: Supported CQ entry sizes: 16 32
    PMD: rte_enic_pmd: Using 16B CQ entry size

.. _enic_limitations:

Limitations
-----------

- **VLAN 0 Priority Tagging**

  If a vNIC is configured in TRUNK mode by the UCS manager, the adapter will
  priority tag egress packets according to 802.1Q if they were not already
  VLAN tagged by software. If the adapter is connected to a properly configured
  switch, there will be no unexpected behavior.

  In test setups where an Ethernet port of a Cisco adapter in TRUNK mode is
  connected point-to-point to another adapter port or connected through a router
  instead of a switch, all ingress packets will be VLAN tagged. Programs such
  as l3fwd may not account for VLAN tags in packets and may misbehave. One
  solution is to enable VLAN stripping on ingress so the VLAN tag is removed
  from the packet and put into the mbuf->vlan_tci field. Here is an example
  of how to accomplish this:

.. code-block:: console

     vlan_offload = rte_eth_dev_get_vlan_offload(port);
     vlan_offload |= RTE_ETH_VLAN_STRIP_OFFLOAD;
     rte_eth_dev_set_vlan_offload(port, vlan_offload);

Another alternative is to modify the adapter's ingress VLAN rewrite mode so that
packets with the default VLAN tag are stripped by the adapter and presented to
DPDK as untagged packets. In this case mbuf->vlan_tci and the RTE_MBUF_F_RX_VLAN and
RTE_MBUF_F_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::

    -a 12:00.0,ig-vlan-rewrite=untag

- **SR-IOV**

  - KVM hypervisor support only. VMware has not been tested.
  - Requires VM-FEX, and so is only available on UCS managed servers connected
    to Fabric Interconnects. It is not available on standalone C-Series servers.
  - VF devices are not usable directly from the host. They can only be used
    as assigned devices on VM instances.
  - Currently, unbind of the ENIC kernel mode driver 'enic.ko' on the VM
    instance may hang. As a workaround, enic.ko should be blocked or removed
    from the boot process.
  - pci_generic cannot be used as the uio module in the VM. igb_uio or
    vfio in non-IOMMU mode can be used.
  - The number of RQs in UCSM dynamic vNIC configurations must be at least 2.
  - The number of SR-IOV devices is limited to 256. Components on the target
    system might limit this number to fewer than 256.

- **Flow API**

  - The number of filters that can be specified with the Generic Flow API is
    dependent on how many header fields are being masked. Use 'flow create' in
    a loop to determine how many filters your VIC will support (not more than
    1000 for 1300 series VICs). Filters are checked for matching in the order they
    were added. Since there currently is no grouping or priority support,
    'catch-all' filters should be added last.
  - The supported range of IDs for the 'MARK' action is 0 - 0xFFFD.
  - RSS and PASSTHRU actions only support "receive normally". They are limited
    to supporting MARK + RSS and PASSTHRU + MARK to allow the application to mark
    packets and then receive them normally. These require 1400 series VIC adapters
    and the latest firmware.
  - RAW items are limited to matching UDP tunnel headers like VXLAN.
  - GTP, GTP-C and GTP-U header matching is enabled, however matching items within
    the tunnel is not supported.
  - For 1400 VICs, all flows using the RSS action on a port use the same hash
    configuration. The RETA is ignored. The queues used in the RSS group must be
    sequential. There is a performance hit if the number of queues is not a power of 2.
    Only level 0 (outer header) RSS is allowed.

- **Statistics**

  - ``rx_good_bytes`` (ibytes) always includes VLAN header (4B) and CRC bytes (4B).
    This behavior applies to 1300 and older series VIC adapters.
    1400 series VICs do not count CRC bytes, and count VLAN header only when VLAN
    stripping is disabled.
  - When the NIC drops a packet because the Rx queue has no free buffers,
    ``rx_good_bytes`` still increments by 4B if the packet is not VLAN tagged or
    VLAN stripping is disabled, or by 8B if the packet is VLAN tagged and stripping
    is enabled.
    This behavior applies to 1300 and older series VIC adapters. 1400 series VICs
    do not increment this byte counter when packets are dropped.

- **RSS Hashing**

  - Hardware enables and disables UDP and TCP RSS hashing together. The driver
    cannot control UDP and TCP hashing individually.

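The drop-time byte accounting described under **Statistics** above can be modeled as a small helper (hypothetical, applicable to 1300 and older series VICs only):

```c
#include <assert.h>

/* Bytes added to rx_good_bytes when a packet is dropped for lack of Rx
 * buffers on 1300 and older series VICs: 8B when the packet is VLAN
 * tagged and stripping is enabled, otherwise 4B. Hypothetical model.
 */
static unsigned int
dropped_pkt_byte_increment(int vlan_tagged, int vlan_stripping)
{
	return (vlan_tagged && vlan_stripping) ? 8 : 4;
}
```
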
How to build the suite
----------------------

The build instructions for the DPDK suite should be followed. By default
the ENIC PMD library will be built into the DPDK library.

Refer to the document :ref:`compiling and testing a PMD for a NIC
<pmd_build_and_test>` for details.

For configuring and using the UIO and VFIO frameworks, please refer to the
documentation that comes with the DPDK suite.

Supported Operating Systems
---------------------------

Any Linux distribution fulfilling the conditions described in the Dependencies
section of the DPDK documentation.

Known bugs and unsupported features in this release
---------------------------------------------------

- Signature or flex byte based flow director
- Drop feature of flow director
- VLAN based flow director
- Non-IPV4 flow director
- Setting of extended VLAN
- MTU update only works if Scattered Rx mode is disabled
- Maximum receive packet length is ignored if Scattered Rx mode is used

Prerequisites
-------------

- Prepare the system as recommended by the DPDK suite.  This includes environment
  variables, hugepages configuration, tool-chains and configuration.
- Insert the vfio-pci kernel module using the command 'modprobe vfio-pci' if the
  user wants to use the VFIO framework.
- Insert the uio kernel module using the command 'modprobe uio' if the user wants
  to use the UIO framework.
- The DPDK suite should be configured based on the user's decision to use the VFIO
  or UIO framework.
- If the vNIC device(s) to be used are bound to the kernel mode Ethernet driver,
  use 'ip' to bring the interface down. The dpdk-devbind.py tool can
  then be used to unbind the device's bus id from the ENIC kernel mode driver.
- Bind the intended vNIC to vfio-pci in case the user wants ENIC PMD to use
  the VFIO framework, using dpdk-devbind.py.
- Bind the intended vNIC to igb_uio in case the user wants ENIC PMD to use
  the UIO framework, using dpdk-devbind.py.

At this point the system should be ready to run DPDK applications. Once the
application runs to completion, the vNIC can be detached from vfio-pci or
igb_uio if necessary.

Root privilege is required to bind and unbind vNICs to/from VFIO/UIO.
The VFIO framework helps an unprivileged user to run the applications.
For an unprivileged user to run the applications on DPDK and ENIC PMD,
it may be necessary to increase the maximum locked memory of the user.
The following command could be used to do this.

.. code-block:: console

    sudo sh -c "ulimit -l <value in Kilo Bytes>"

The value depends on the memory configuration of the application, DPDK and
PMD.  Typically, the limit has to be raised to higher than 2GB.
e.g., 2621440

Additional Reference
--------------------

- https://www.cisco.com/c/en/us/products/servers-unified-computing/index.html
- https://www.cisco.com/c/en/us/products/interfaces-modules/unified-computing-system-adapters/index.html