..  SPDX-License-Identifier: BSD-3-Clause
    Copyright (c) 2017, Cisco Systems, Inc.
    All rights reserved.

ENIC Poll Mode Driver
=====================

ENIC PMD is the DPDK poll-mode driver for the Cisco Systems Inc. VIC Ethernet
NICs. These adapters are also referred to as vNICs below. If you are running
or would like to run DPDK software applications on Cisco UCS servers using
Cisco VIC adapters, the following documentation is relevant.

How to obtain ENIC PMD integrated DPDK
--------------------------------------

ENIC PMD support is integrated into the DPDK suite. dpdk-<version>.tar.gz
should be downloaded from http://core.dpdk.org/download/


Configuration information
-------------------------

- **DPDK Configuration Parameters**

  The following configuration options are available for the ENIC PMD:

  - **CONFIG_RTE_LIBRTE_ENIC_PMD** (default y): Enables or disables inclusion
    of the ENIC PMD driver in the DPDK compilation.

- **vNIC Configuration Parameters**

  - **Number of Queues**

    The maximum number of receive queues (RQs), work queues (WQs) and
    completion queues (CQs) is configurable on a per-vNIC basis
    through the Cisco UCS Manager (CIMC or UCSM).

    These values should be configured as follows:

    - The number of WQs should be greater than or equal to the value of the
      expected nb_tx_q parameter in the call to
      rte_eth_dev_configure().

    - The number of RQs configured in the vNIC should be greater than or
      equal to *twice* the value of the expected nb_rx_q parameter in
      the call to rte_eth_dev_configure().  With the addition of Rx
      scatter, a pair of RQs on the vNIC is needed for each receive
      queue used by DPDK, even if Rx scatter is not being used.
      Having a vNIC with only 1 RQ is not a valid configuration, and
      will fail with an error message.

    - The number of CQs should be set so that there is one CQ for each
      WQ, and one CQ for each pair of RQs.

    For example: If the application requires 3 Rx queues, and 3 Tx
    queues, the vNIC should be configured to have at least 3 WQs, 6
    RQs (3 pairs), and 6 CQs (3 for use by WQs + 3 for use by the 3
    pairs of RQs). A sketch of the matching configure call follows.

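    The sketch below is illustrative only; the port id (0) and the zeroed
    ``rte_eth_conf`` are assumptions for the example, not requirements from
    this guide.

    .. code-block:: c

        /* 3 Rx and 3 Tx queues: the vNIC must provide at least 6 RQs,
         * 3 WQs and 6 CQs for this call to succeed (see above). */
        struct rte_eth_conf port_conf = { 0 };

        if (rte_eth_dev_configure(0 /* port id */, 3 /* nb_rx_q */,
                                  3 /* nb_tx_q */, &port_conf) < 0)
            rte_exit(EXIT_FAILURE, "cannot configure vNIC\n");
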
  - **Size of Queues**

    Likewise, the number of receive and transmit descriptors is configurable on
    a per-vNIC basis via the UCS Manager and should be greater than or equal to
    the nb_rx_desc and nb_tx_desc parameters expected to be used in the calls
    to rte_eth_rx_queue_setup() and rte_eth_tx_queue_setup() respectively.
    An application requesting more than the set size will be limited to that
    size.

    Unless there is a lack of resources due to creating many vNICs, it
    is recommended that the WQ and RQ sizes be set to the maximum.  This
    gives the application the greatest amount of flexibility in its
    queue configuration.

    - *Note*: Since the introduction of Rx scatter, for performance
      reasons, this PMD uses two RQs on the vNIC per receive queue in
      DPDK.  One RQ holds descriptors for the start of a packet, and the
      second RQ holds the descriptors for the rest of the fragments of
      a packet.  This means that the nb_rx_desc parameter to
      rte_eth_rx_queue_setup() can be greater than 4096.  The exact
      amount will depend on the size of the mbufs being used for
      receives, and the MTU size.

      For example: If the mbuf size is 2048, and the MTU is 9000, then
      receiving a full size packet will take 5 descriptors, 1 from the
      start-of-packet queue, and 4 from the second queue.  Assuming
      that the RQ size was set to the maximum of 4096, then the
      application can specify up to 1024 + 4096 as the nb_rx_desc
      parameter to rte_eth_rx_queue_setup(); a code sketch follows.

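      The sketch below is illustrative only; the port id, queue id, socket
      id and ``mbuf_pool`` (an ``rte_mempool`` of 2048-byte mbufs created
      elsewhere) are assumptions.

      .. code-block:: c

          /* RQ size 4096 on the vNIC, 2048-byte mbufs, MTU 9000:
           * up to 1024 + 4096 descriptors may be requested. */
          rte_eth_rx_queue_setup(0 /* port id */, 0 /* queue id */,
                                 1024 + 4096 /* nb_rx_desc */,
                                 0 /* socket id */,
                                 NULL /* default rxconf */, mbuf_pool);
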
  - **Interrupts**

    At least one interrupt per vNIC interface should be configured in the UCS
    manager regardless of the number of receive/transmit queues. The ENIC PMD
    uses this interrupt to get information about link status and errors
    in the fast path.

    In addition to the interrupt for link status and errors, when using Rx queue
    interrupts, increase the number of configured interrupts so that there is at
    least one interrupt for each Rx queue. For example, if the app uses 3 Rx
    queues and wants to use per-queue interrupts, configure 4 (3 + 1) interrupts;
    a sketch of the application-side setup follows.

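    The following is a hedged sketch only; the port id, queue counts and
    queue id are illustrative assumptions.

    .. code-block:: c

        struct rte_eth_conf port_conf = { 0 };

        /* Request per-queue Rx interrupts before configuring the port. */
        port_conf.intr_conf.rxq = 1;
        rte_eth_dev_configure(0 /* port id */, 3, 3, &port_conf);
        /* ... rte_eth_rx_queue_setup(), rte_eth_dev_start(), ... */

        /* Arm the interrupt for Rx queue 0; the application then waits
         * on the queue's event fd (e.g. via rte_epoll_wait()). */
        rte_eth_dev_rx_intr_enable(0 /* port id */, 0 /* queue id */);
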
  - **Receive Side Scaling**

    In order to fully utilize RSS in DPDK, enable all RSS-related settings in
    CIMC or UCSM. These include the following items listed under
    Receive Side Scaling:
    TCP, IPv4, TCP-IPv4, IPv6, TCP-IPv6, IPv6 Extension, TCP-IPv6 Extension.
    A sketch of the matching DPDK-side configuration follows.

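    The sketch below is illustrative; it uses the ``ETH_RSS_*`` flag names of
    the same ethdev API vintage as the rest of this guide, and the port id and
    queue counts are placeholders.

    .. code-block:: c

        struct rte_eth_conf port_conf = { 0 };

        port_conf.rxmode.mq_mode = ETH_MQ_RX_RSS;
        /* Mirrors the CIMC/UCSM settings: IPv4, TCP-IPv4, IPv6,
         * TCP-IPv6, IPv6 Extension and TCP-IPv6 Extension. */
        port_conf.rx_adv_conf.rss_conf.rss_hf =
                ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP |
                ETH_RSS_IPV6 | ETH_RSS_NONFRAG_IPV6_TCP |
                ETH_RSS_IPV6_EX | ETH_RSS_IPV6_TCP_EX;
        rte_eth_dev_configure(0 /* port id */, 3, 3, &port_conf);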

.. _enic-flow-director:

Flow director support
---------------------

Advanced filtering support was added to 1300 series VIC firmware starting
with version 2.0.13 for C-series UCS servers and version 3.1.2 for
UCSM-managed blade servers. In order to enable advanced filtering, the
'Advanced filter' radio button should be enabled via CIMC or UCSM, followed
by a reboot of the server.

With advanced filters, perfect matching of all fields of IPv4, IPv6 headers
as well as TCP, UDP and SCTP L4 headers is available through flow director.
Masking of these fields for partial match is also supported.

Without advanced filter support, the flow director is limited to IPv4
perfect filtering of the 5-tuple with no masking of fields supported.

SR-IOV mode utilization
-----------------------

UCS blade servers configured with dynamic vNIC connection policies in UCSM
are capable of supporting SR-IOV. SR-IOV virtual functions (VFs) are
specialized vNICs, distinct from regular Ethernet vNICs. These VFs can be
directly assigned to virtual machines (VMs) as 'passthrough' devices.

In UCS, SR-IOV VFs require the use of the Cisco Virtual Machine Fabric
Extender (VM-FEX), which gives the VM a dedicated interface on the Fabric
Interconnect (FI). Layer 2 switching is done at the FI. This may eliminate
the requirement for software switching on the host to route intra-host VM
traffic.

Please refer to `Creating a Dynamic vNIC Connection Policy
<http://www.cisco.com/c/en/us/td/docs/unified_computing/ucs/sw/vm_fex/vmware/gui/config_guide/b_GUI_VMware_VM-FEX_UCSM_Configuration_Guide/b_GUI_VMware_VM-FEX_UCSM_Configuration_Guide_chapter_010.html#task_433E01651F69464783A68E66DA8A47A5>`_
for information on configuring SR-IOV adapter policies and port profiles
using UCSM.

Once the policies are in place and the host OS is rebooted, VFs should be
visible on the host, e.g.:

.. code-block:: console

     # lspci | grep Cisco | grep Ethernet
     0d:00.0 Ethernet controller: Cisco Systems Inc VIC Ethernet NIC (rev a2)
     0d:00.1 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
     0d:00.2 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
     0d:00.3 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
     0d:00.4 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
     0d:00.5 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
     0d:00.6 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)
     0d:00.7 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)

Enable Intel IOMMU on the host and install KVM and libvirt, and reboot again as
required. Then, using libvirt, create a VM instance with an assigned device.
Below is an example ``interface`` block (part of the domain configuration XML)
that adds the host VF 0d:00.1 to the VM. ``profileid='pp-vlan-25'`` indicates
the port profile that has been configured in UCSM.

.. code-block:: console

    <interface type='hostdev' managed='yes'>
      <mac address='52:54:00:ac:ff:b6'/>
      <driver name='vfio'/>
      <source>
        <address type='pci' domain='0x0000' bus='0x0d' slot='0x00' function='0x1'/>
      </source>
      <virtualport type='802.1Qbh'>
        <parameters profileid='pp-vlan-25'/>
      </virtualport>
    </interface>


Alternatively, the configuration can be done in a separate file using the
``network`` keyword. These methods are described in the libvirt documentation for
`Network XML format <https://libvirt.org/formatnetwork.html>`_.

When the VM instance is started, libvirt will bind the host VF to
vfio, complete provisioning on the FI and bring up the link.

.. note::

    It is not possible to use a VF directly from the host because it is not
    fully provisioned until libvirt brings up the VM that it is assigned
    to.

In the VM instance, the VF will now be visible. For example, here the VF
00:04.0 is seen on the VM instance and should be available for binding to DPDK.

.. code-block:: console

     # lspci | grep Ether
     00:04.0 Ethernet controller: Cisco Systems Inc VIC SR-IOV VF (rev a2)

Follow the normal DPDK install procedure, binding the VF to either ``igb_uio``
or ``vfio`` in non-IOMMU mode.

In the VM, the kernel enic driver may be automatically bound to the VF during
boot. Unbinding it currently hangs due to a known issue with the driver. To
work around the issue, blacklist the enic module as follows.
Please see :ref:`Limitations <enic_limitations>` for limitations in
the use of SR-IOV.

.. code-block:: console

     # cat /etc/modprobe.d/enic.conf
     blacklist enic

     # dracut --force

.. note::

    Passthrough does not require SR-IOV. If VM-FEX is not desired, the user
    may create as many regular vNICs as necessary and assign them to VMs as
    passthrough devices. Since these vNICs are not SR-IOV VFs, using them as
    passthrough devices does not require libvirt, port profiles, and VM-FEX.


.. _enic-generic-flow-api:

Generic Flow API support
------------------------

Generic Flow API is supported. The baseline support is as follows (a usage
sketch appears after the list):

- **1200 series VICs**

  5-tuple exact flow support for 1200 series adapters. This allows:

  - Attributes: ingress
  - Items: ipv4, ipv6, udp, tcp (must exactly match src/dst IP
    addresses and ports and all must be specified)
  - Actions: queue and void
  - Selectors: 'is'

- **1300 and later series VICs with advanced filters disabled**

  With advanced filters disabled, an IPv4 or IPv6 item must be specified
  in the pattern.

  - Attributes: ingress
  - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, inner eth, vlan, ipv4, ipv6, udp, tcp
  - Actions: queue and void
  - Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
  - In total, up to 64 bytes of mask is allowed across all headers

- **1300 and later series VICs with advanced filters enabled**

  - Attributes: ingress
  - Items: eth, vlan, ipv4, ipv6, udp, tcp, vxlan, raw, inner eth, vlan, ipv4, ipv6, udp, tcp
  - Actions: queue, mark, drop, flag, rss, passthru, and void
  - Selectors: 'is', 'spec' and 'mask'. 'last' is not supported
  - In total, up to 64 bytes of mask is allowed across all headers

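As an illustration only, the following sketch builds a 5-tuple 'is' match of
the kind the 1200 series supports and steers it to a queue. The port id,
addresses, ports and queue index are arbitrary placeholders.

.. code-block:: c

    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_ipv4 ip_spec = {
        .hdr = {
            .src_addr = rte_cpu_to_be_32(0x0a000001), /* 10.0.0.1 */
            .dst_addr = rte_cpu_to_be_32(0x0a000002), /* 10.0.0.2 */
        },
    };
    struct rte_flow_item_tcp tcp_spec = {
        .hdr = {
            .src_port = rte_cpu_to_be_16(1000),
            .dst_port = rte_cpu_to_be_16(2000),
        },
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_IPV4, .spec = &ip_spec },
        { .type = RTE_FLOW_ITEM_TYPE_TCP, .spec = &tcp_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 1 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };
    struct rte_flow_error error;
    struct rte_flow *flow = rte_flow_create(0 /* port id */, &attr,
                                            pattern, actions, &error);
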
The VIC performs packet matching after applying VLAN stripping. If VLAN
stripping is enabled, EtherType in the ETH item corresponds to the
stripped VLAN header's EtherType. Stripping does not affect the VLAN
item. TCI and EtherType in the VLAN item are matched against those in
the (stripped) VLAN header whether stripping is enabled or disabled.

More features may be added in future firmware and new versions of the VIC.
Please refer to the release notes.

.. _overlay_offload:

Overlay Offload
---------------

Recent hardware models support overlay offload. When enabled, the NIC performs
the following operations for VXLAN, NVGRE, and GENEVE packets. In all cases,
inner and outer packets can be IPv4 or IPv6.

- TSO for VXLAN and GENEVE packets.

  Hardware supports NVGRE TSO, but DPDK currently has no NVGRE offload flags.

- Tx checksum offloads.

  The NIC fills in IPv4/UDP/TCP checksums for both inner and outer packets.

- Rx checksum offloads.

  The NIC validates IPv4/UDP/TCP checksums of both inner and outer packets.
  Good checksum flags (e.g. ``PKT_RX_L4_CKSUM_GOOD``) indicate that the inner
  packet has the correct checksum, and if applicable, the outer packet also
  has the correct checksum. Bad checksum flags (e.g. ``PKT_RX_L4_CKSUM_BAD``)
  indicate that the inner and/or outer packets have invalid checksum values.

- Inner Rx packet type classification

  PMD sets inner L3/L4 packet types (e.g. ``RTE_PTYPE_INNER_L4_TCP``), and
  ``RTE_PTYPE_TUNNEL_GRENAT`` to indicate that the packet is tunneled.
  PMD does not set L3/L4 packet types for outer packets.

- Inner RSS

  RSS hash calculation, and therefore queue selection, is done on inner packets.

In order to enable overlay offload, the 'Enable VXLAN' box should be checked
via CIMC or UCSM, followed by a reboot of the server. When PMD successfully
enables overlay offload, it prints the following message on the console.

.. code-block:: console

    Overlay offload is enabled

By default, PMD enables overlay offload if hardware supports it. To disable
it, set ``devargs`` parameter ``disable-overlay=1``. For example::

    -w 12:00.0,disable-overlay=1

By default, the NIC uses 4789 as the VXLAN port. The user may change
it through ``rte_eth_dev_udp_tunnel_port_{add,delete}``. However, as
the current NIC has a single VXLAN port number, the user cannot
configure multiple port numbers.
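
A hedged sketch of replacing the VXLAN port from the application (port id 0
and port number 4790 are arbitrary examples):

.. code-block:: c

    struct rte_eth_udp_tunnel tunnel = {
        .udp_port = 4790,
        .prot_type = RTE_TUNNEL_TYPE_VXLAN,
    };

    /* The VIC supports a single VXLAN port number at a time, so this
     * replaces the default port 4789 rather than adding a second one. */
    rte_eth_dev_udp_tunnel_port_add(0 /* port id */, &tunnel);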

Ingress VLAN Rewrite
--------------------

VIC adapters can tag, untag, or modify the VLAN headers of ingress
packets. The ingress VLAN rewrite mode controls this behavior. By
default, it is set to pass-through, where the NIC does not modify the
VLAN header in any way so that the application can see the original
header. This mode is sufficient for many applications, but may not be
suitable for others. Such applications may change the mode by setting
``devargs`` parameter ``ig-vlan-rewrite`` to one of the following.

- ``pass``: Pass-through mode. The NIC does not modify the VLAN
  header. This is the default mode.

- ``priority``: Priority-tag default VLAN mode. If the ingress packet
  is tagged with the default VLAN, the NIC replaces its VLAN header
  with the priority tag (VLAN ID 0).

- ``trunk``: Default trunk mode. The NIC tags untagged ingress packets
  with the default VLAN. Tagged ingress packets are not modified. To
  the application, every packet appears as tagged.

- ``untag``: Untag default VLAN mode. If the ingress packet is tagged
  with the default VLAN, the NIC removes or untags its VLAN header so
  that the application sees an untagged packet. As a result, the
  default VLAN becomes `untagged`. This mode can be useful for
  applications such as OVS-DPDK performance benchmarks that utilize
  only the default VLAN and want to see only untagged packets.


Vectorized Rx Handler
---------------------

ENIC PMD includes a version of the receive handler that is vectorized using
AVX2 SIMD instructions. It is meant for bulk, throughput-oriented workloads
where reducing cycles/packet in PMD is a priority. In order to use the
vectorized handler, take the following steps.

- Use a recent version of gcc, icc, or clang and build 64-bit DPDK. If
  the compiler is known to support AVX2, the DPDK build system
  automatically compiles the vectorized handler. Otherwise, the
  handler is not available.

- Set ``devargs`` parameter ``enable-avx2-rx=1`` to explicitly request that
  PMD consider the vectorized handler when selecting the receive handler.
  For example::

    -w 12:00.0,enable-avx2-rx=1

  As the current implementation is intended for field trials, by default, the
  vectorized handler is not considered (``enable-avx2-rx=0``).

- Run on a UCS M4 or later server with CPUs that support AVX2.

PMD selects the vectorized handler when the handler is compiled into
the driver, the user requests its use via ``enable-avx2-rx=1``, the CPU
supports AVX2, and scatter Rx is not used. To verify that the
vectorized handler is selected, enable debug logging
(``--log-level=pmd,debug``) and check the following message.

.. code-block:: console

    enic_use_vector_rx_handler use the non-scatter avx2 Rx handler

.. _enic_limitations:

Limitations
-----------

- **VLAN 0 Priority Tagging**

  If a vNIC is configured in TRUNK mode by the UCS manager, the adapter will
  priority tag egress packets according to 802.1Q if they were not already
  VLAN tagged by software. If the adapter is connected to a properly configured
  switch, there will be no unexpected behavior.

  In test setups where an Ethernet port of a Cisco adapter in TRUNK mode is
  connected point-to-point to another adapter port or connected through a router
  instead of a switch, all ingress packets will be VLAN tagged. Programs such
  as l3fwd may not account for VLAN tags in packets and may misbehave. One
  solution is to enable VLAN stripping on ingress so the VLAN tag is removed
  from the packet and put into the mbuf->vlan_tci field. Here is an example
  of how to accomplish this:

.. code-block:: c

     /* Enable VLAN stripping so the tag ends up in mbuf->vlan_tci. */
     int vlan_offload = rte_eth_dev_get_vlan_offload(port);

     vlan_offload |= ETH_VLAN_STRIP_OFFLOAD;
     rte_eth_dev_set_vlan_offload(port, vlan_offload);

Another alternative is to modify the adapter's ingress VLAN rewrite mode so that
packets with the default VLAN tag are stripped by the adapter and presented to
DPDK as untagged packets. In this case, mbuf->vlan_tci and the PKT_RX_VLAN and
PKT_RX_VLAN_STRIPPED mbuf flags would not be set. This mode is enabled with the
``devargs`` parameter ``ig-vlan-rewrite=untag``. For example::

    -w 12:00.0,ig-vlan-rewrite=untag

- Limited flow director support on 1200 series and 1300 series Cisco VIC
  adapters with old firmware. Please see :ref:`enic-flow-director`.

- Flow director features are not supported on generation 1 Cisco VIC adapters
  (M81KR and P81E).

- **SR-IOV**

  - KVM hypervisor support only. VMware has not been tested.
  - Requires VM-FEX, and so is only available on UCS managed servers connected
    to Fabric Interconnects. It is not available on standalone C-Series servers.
  - VF devices are not usable directly from the host. They can only be used
    as assigned devices on VM instances.
  - Currently, unbind of the ENIC kernel mode driver 'enic.ko' on the VM
    instance may hang. As a workaround, enic.ko should be blacklisted or removed
    from the boot process.
  - pci_generic cannot be used as the uio module in the VM. igb_uio or
    vfio in non-IOMMU mode can be used.
  - The number of RQs in UCSM dynamic vNIC configurations must be at least 2.
  - The number of SR-IOV devices is limited to 256. Components on the target
    system might limit this number to fewer than 256.

- **Flow API**

  - The number of filters that can be specified with the Generic Flow API is
    dependent on how many header fields are being masked. Use 'flow create' in
    a loop to determine how many filters your VIC will support (not more than
    1000 for 1300 series VICs). Filters are checked for matching in the order they
    were added. Since there currently is no grouping or priority support,
    'catch-all' filters should be added last.
  - The supported range of IDs for the 'MARK' action is 0 - 0xFFFD.
  - RSS and PASSTHRU actions only support "receive normally". They are limited
    to supporting MARK + RSS and PASSTHRU + MARK to allow the application to mark
    packets and then receive them normally. These require 1400 series VIC adapters
    and the latest firmware.
  - RAW items are limited to matching UDP tunnel headers like VXLAN.

- **Statistics**

  - ``rx_good_bytes`` (ibytes) always includes VLAN header (4B) and CRC bytes (4B).
    This behavior applies to 1300 and older series VIC adapters.
    1400 series VICs do not count CRC bytes, and count VLAN header only when VLAN
    stripping is disabled.
  - When the NIC drops a packet because the Rx queue has no free buffers,
    ``rx_good_bytes`` still increments by 4B if the packet is not VLAN tagged or
    VLAN stripping is disabled, or by 8B if the packet is VLAN tagged and stripping
    is enabled.
    This behavior applies to 1300 and older series VIC adapters. 1400 series VICs
    do not increment this byte counter when packets are dropped.

- **RSS Hashing**

  - Hardware enables and disables UDP and TCP RSS hashing together. The driver
    cannot control UDP and TCP hashing individually.

How to build the suite
----------------------

The build instructions for the DPDK suite should be followed. By default,
the ENIC PMD library will be built into the DPDK library.

Refer to the document :ref:`compiling and testing a PMD for a NIC
<pmd_build_and_test>` for details.

For configuring and using UIO and VFIO frameworks, please refer to the
documentation that comes with the DPDK suite.

Supported Cisco VIC adapters
----------------------------

ENIC PMD supports all recent generations of Cisco VIC adapters, including:

- VIC 1200 series
- VIC 1300 series
- VIC 1400 series

Supported Operating Systems
---------------------------

Any Linux distribution fulfilling the conditions described in the Dependencies
section of the DPDK documentation.

Supported features
------------------

- Unicast, multicast and broadcast transmission and reception
- Receive queue polling
- Port Hardware Statistics
- Hardware VLAN acceleration
- IP checksum offload
- Receive side VLAN stripping
- Multiple receive and transmit queues
- Flow Director ADD, UPDATE, DELETE, STATS operations supporting IPv4 and IPv6
- Promiscuous mode
- Setting RX VLAN (supported via UCSM/CIMC only)
- VLAN filtering (supported via UCSM/CIMC only)
- Execution of application by unprivileged system users
- IPV4, IPV6 and TCP RSS hashing
- UDP RSS hashing (1400 series and later adapters)
- Scattered Rx
- MTU update
- SR-IOV on UCS managed servers connected to Fabric Interconnects
- Flow API
- Overlay offload

  - Rx/Tx checksum offloads for VXLAN, NVGRE, GENEVE
  - TSO for VXLAN and GENEVE packets
  - Inner RSS

Known bugs and unsupported features in this release
---------------------------------------------------

- Signature or flex byte based flow director
- Drop feature of flow director
- VLAN based flow director
- Non-IPV4 flow director
- Setting of extended VLAN
- MTU update only works if Scattered Rx mode is disabled
- Maximum receive packet length is ignored if Scattered Rx mode is used

Prerequisites
-------------

- Prepare the system as recommended by the DPDK suite.  This includes
  environment variables, hugepage configuration, tool-chains and build
  configuration.
- Insert the vfio-pci kernel module using the command 'modprobe vfio-pci' if
  the user wants to use the VFIO framework.
- Insert the uio kernel module using the command 'modprobe uio' if the user
  wants to use the UIO framework.
- The DPDK suite should be configured based on the user's decision to use the
  VFIO or UIO framework.
- If the vNIC device(s) to be used is bound to the kernel mode Ethernet
  driver, use 'ip' to bring the interface down. The dpdk-devbind.py tool can
  then be used to unbind the device's bus id from the ENIC kernel mode driver.
- Bind the intended vNIC to vfio-pci using dpdk-devbind.py in case the user
  wants ENIC PMD to use the VFIO framework.
- Bind the intended vNIC to igb_uio using dpdk-devbind.py in case the user
  wants ENIC PMD to use the UIO framework.

At this point the system should be ready to run DPDK applications. Once the
application runs to completion, the vNIC can be detached from vfio-pci or
igb_uio if necessary.


Root privilege is required to bind and unbind vNICs to/from VFIO/UIO.
The VFIO framework allows an unprivileged user to run the applications.
For an unprivileged user to run the applications on DPDK and ENIC PMD,
it may be necessary to increase the maximum locked memory of the user.
The following command could be used to do this.

.. code-block:: console

    sudo sh -c "ulimit -l <value in Kilo Bytes>"

The value depends on the memory configuration of the application, DPDK and
PMD.  Typically, the limit has to be raised to higher than 2GB,
e.g., 2621440.

The compilation of any unused drivers can be disabled using the
configuration file in the config/ directory (e.g., config/common_linux).
This helps bring down the time taken to build the libraries and the
initialization time of the application.

Additional Reference
--------------------

- https://www.cisco.com/c/en/us/products/servers-unified-computing/index.html
- https://www.cisco.com/c/en/us/products/interfaces-modules/unified-computing-system-adapters/index.html

Contact Information
-------------------

Any questions or bugs should be reported to the DPDK community and to the ENIC
PMD maintainers:

- John Daley <johndale@cisco.com>
- Hyong Youb Kim <hyonkim@cisco.com>
