..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

I40E Poll Mode Driver
======================

The i40e PMD (**librte_net_i40e**) provides poll mode driver support for
10/25/40 Gbps Intel® Ethernet 700 Series Network Adapters based on
the Intel Ethernet Controller X710/XL710/XXV710 and Intel Ethernet
Connection X722 (which supports only a subset of the features).


Features
--------

Features of the i40e PMD are:

- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- MAC/VLAN filtering
- Packet type information
- Flow director
- Cloud filter
- Checksum offload
- VLAN/QinQ stripping and inserting
- TSO offload
- Promiscuous mode
- Multicast mode
- Port hardware statistics
- Jumbo frames
- Link state information
- Link flow control
- Mirror on port, VLAN and VSI
- Interrupt mode for RX
- Scatter and gather for TX and RX
- Vector Poll mode driver
- DCB
- VMDQ
- SR-IOV VF
- Hot plug
- IEEE1588/802.1AS timestamping
- VF Daemon (VFD) - EXPERIMENTAL
- Dynamic Device Personalization (DDP)
- Queue region configuration
- Virtual Function Port Representors
- Malicious Device Driver event catch and notify
- Generic flow API

Linux Prerequisites
-------------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
  <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ and `Intel® Ethernet NVM Update Tool: Quick Usage Guide for EFI <https://www.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-efi-usage-guide.html>`_ if needed.

- For information about supported media, please refer to this document: `Intel® Ethernet Controller X710/XXV710/XL710 Feature Support Matrix
  <http://www.intel.com/content/dam/www/public/us/en/documents/release-notes/xl710-ethernet-controller-feature-matrix.pdf>`_.

   .. Note::

      * Some adapters based on the Intel(R) Ethernet Controller 700 Series only
        support Intel Ethernet Optics modules. On these adapters, other modules are not
        supported and will not function.

      * For connections based on Intel(R) Ethernet Controller 700 Series,
        support is dependent on your system board. Please see your vendor for details.

      * In all cases Intel recommends using Intel Ethernet Optics; other modules
        may function but are not validated by Intel. Contact Intel for supported media types.

Windows Prerequisites
---------------------

- Follow the :doc:`guide for Windows <../windows_gsg/run_apps>`
  to setup the basic DPDK environment.

- Identify the Intel® Ethernet adapter and get the latest NVM/FW version.

- To access any Intel® Ethernet hardware, load the NetUIO driver in place of the existing built-in (inbox) driver.

- To load the NetUIO driver, follow the steps mentioned in the `dpdk-kmods repository
  <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.

Kernel driver and Firmware Matching List
----------------------------------------

It is highly recommended to upgrade the i40e kernel driver and firmware
to avoid compatibility issues with the i40e PMD.
The table below shows a summary of the DPDK versions
with corresponding out-of-tree Linux kernel drivers and firmware.
The full list of in-tree and out-of-tree Linux kernel drivers from kernel.org
and Linux distributions that were tested and verified
is listed in the Tested Platforms section of the Release Notes for each release.

For X710/XL710/XXV710,

   +--------------+-----------------------+------------------+
   | DPDK version | Kernel driver version | Firmware version |
   +==============+=======================+==================+
   |    24.11     |         2.26.8        |       9.52       |
   +--------------+-----------------------+------------------+
   |    24.07     |         2.25.9        |       9.50       |
   +--------------+-----------------------+------------------+
   |    24.03     |         2.24.6        |       9.40       |
   +--------------+-----------------------+------------------+
   |    23.11     |         2.23.17       |       9.30       |
   +--------------+-----------------------+------------------+
   |    23.07     |         2.22.20       |       9.20       |
   +--------------+-----------------------+------------------+
   |    23.03     |         2.22.18       |       9.20       |
   +--------------+-----------------------+------------------+
   |    22.11     |         2.20.12       |       9.01       |
   +--------------+-----------------------+------------------+
   |    22.07     |         2.19.3        |       8.70       |
   +--------------+-----------------------+------------------+
   |    22.03     |         2.17.15       |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.11     |         2.17.4        |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.08     |         2.15.9        |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.05     |         2.15.9        |       8.30       |
   +--------------+-----------------------+------------------+
   |    21.02     |         2.14.13       |       8.00       |
   +--------------+-----------------------+------------------+
   |    20.11     |         2.14.13       |       8.00       |
   +--------------+-----------------------+------------------+
   |    20.08     |         2.12.6        |       7.30       |
   +--------------+-----------------------+------------------+
   |    20.05     |         2.11.27       |       7.30       |
   +--------------+-----------------------+------------------+
   |    20.02     |         2.10.19       |       7.20       |
   +--------------+-----------------------+------------------+
   |    19.11     |         2.9.21        |       7.00       |
   +--------------+-----------------------+------------------+
   |    19.08     |         2.8.43        |       7.00       |
   +--------------+-----------------------+------------------+
   |    19.05     |         2.7.29        |       6.80       |
   +--------------+-----------------------+------------------+
   |    19.02     |         2.7.26        |       6.80       |
   +--------------+-----------------------+------------------+
   |    18.11     |         2.4.6         |       6.01       |
   +--------------+-----------------------+------------------+
   |    18.08     |         2.4.6         |       6.01       |
   +--------------+-----------------------+------------------+
   |    18.05     |         2.4.6         |       6.01       |
   +--------------+-----------------------+------------------+
   |    18.02     |         2.4.3         |       6.01       |
   +--------------+-----------------------+------------------+
   |    17.11     |         2.1.26        |       6.01       |
   +--------------+-----------------------+------------------+
   |    17.08     |         2.0.19        |       6.01       |
   +--------------+-----------------------+------------------+
   |    17.05     |         1.5.23        |       5.05       |
   +--------------+-----------------------+------------------+
   |    17.02     |         1.5.23        |       5.05       |
   +--------------+-----------------------+------------------+
   |    16.11     |         1.5.23        |       5.05       |
   +--------------+-----------------------+------------------+
   |    16.07     |         1.4.25        |       5.04       |
   +--------------+-----------------------+------------------+
   |    16.04     |         1.4.25        |       5.02       |
   +--------------+-----------------------+------------------+


For X722,

   +--------------+-----------------------+------------------+
   | DPDK version | Kernel driver version | Firmware version |
   +==============+=======================+==================+
   |    24.11     |         2.26.8        |       6.50       |
   +--------------+-----------------------+------------------+
   |    24.07     |         2.25.9        |       6.50       |
   +--------------+-----------------------+------------------+
   |    24.03     |         2.24.6        |       6.20       |
   +--------------+-----------------------+------------------+
   |    23.11     |         2.23.17       |       6.20       |
   +--------------+-----------------------+------------------+
   |    23.07     |         2.22.20       |       6.20       |
   +--------------+-----------------------+------------------+
   |    23.03     |         2.22.18       |       6.20       |
   +--------------+-----------------------+------------------+
   |    22.11     |         2.20.12       |       6.00       |
   +--------------+-----------------------+------------------+
   |    22.07     |         2.19.3        |       5.60       |
   +--------------+-----------------------+------------------+
   |    22.03     |         2.17.15       |       5.50       |
   +--------------+-----------------------+------------------+
   |    21.11     |         2.17.4        |       5.30       |
   +--------------+-----------------------+------------------+
   |    21.08     |         2.15.9        |       5.30       |
   +--------------+-----------------------+------------------+
   |    21.05     |         2.15.9        |       5.30       |
   +--------------+-----------------------+------------------+
   |    21.02     |         2.14.13       |       5.00       |
   +--------------+-----------------------+------------------+
   |    20.11     |         2.13.10       |       5.00       |
   +--------------+-----------------------+------------------+
   |    20.08     |         2.12.6        |       4.11       |
   +--------------+-----------------------+------------------+
   |    20.05     |         2.11.27       |       4.11       |
   +--------------+-----------------------+------------------+
   |    20.02     |         2.10.19       |       4.11       |
   +--------------+-----------------------+------------------+
   |    19.11     |         2.9.21        |       4.10       |
   +--------------+-----------------------+------------------+
   |    19.08     |         2.9.21        |       4.10       |
   +--------------+-----------------------+------------------+
   |    19.05     |         2.7.29        |       3.33       |
   +--------------+-----------------------+------------------+
   |    19.02     |         2.7.26        |       3.33       |
   +--------------+-----------------------+------------------+
   |    18.11     |         2.4.6         |       3.33       |
   +--------------+-----------------------+------------------+


Configuration
-------------

Compilation Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config/rte_config.h`` file.

- ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)

  Number of queues reserved for the PF.

- ``RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)

  Number of queues reserved for each VMDQ pool.

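As a sketch, to reserve more queues for the PF at build time, the default in
``config/rte_config.h`` could be changed before compiling DPDK (the value 128
below is purely illustrative and must respect the hardware limits)::

   #define RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF 128
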
Runtime Configuration
~~~~~~~~~~~~~~~~~~~~~

- ``Reserved number of Queues per VF`` (default ``4``)

  The number of queues reserved per VF is determined by its host PF. If the
  PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
  VF can be configured with an EAL parameter like ``-a aaaa:bb.cc,queue-num-per-vf=n``.
  The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
  number of reserved queues per VF is 4 by default. If a VF requests more than
  the reserved number of queues, the PF can allocate at most 16 queues to it
  after a VF reset, as in the example below.

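  For example, assuming a hypothetical PF at PCI address ``18:00.0``, the
  following EAL option reserves 8 queues for each VF created on that PF::

    -a 18:00.0,queue-num-per-vf=8
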
- ``Support multiple driver`` (default ``disable``)

  There was a multiple driver support issue when using a 700 Series Ethernet
  Adapter with both the Linux kernel driver and the DPDK PMD. To fix this issue,
  the ``devargs`` parameter ``support-multi-driver`` was introduced, for example::

    -a 84:00.0,support-multi-driver=1

  With the above configuration, the DPDK PMD will not change global registers,
  and will switch the PF interrupt from IntN to Int0 to avoid an interrupt
  conflict between DPDK and the Linux kernel.

- ``Support VF Port Representor`` (default ``not enabled``)

  The i40e PF PMD supports the creation of VF port representors for the control
  and monitoring of i40e virtual function devices. Each port representor
  corresponds to a single virtual function of that device. Using the ``devargs``
  option ``representor`` the user can specify which virtual functions to create
  port representors for on initialization of the PF PMD by passing the VF IDs of
  the VFs which are required, for example::

    -a DBDF,representor=[0,1,4]

  Currently hot-plugging of representor ports is not supported so all required
  representors must be specified on the creation of the PF.

- ``Enable validation for VF message`` (default ``not enabled``)

  The PF counts messages from each VF. If the number of messages from a VF
  exceeds the maximum limit within the configured period of seconds, the PF
  ignores any new messages from that VF for the configured number of seconds.
  The format is "maximal-message@period-seconds:ignore-seconds".
  For example, the following limits a VF to 80 messages per 120 seconds and
  ignores a VF exceeding that limit for 180 seconds::

    -a 84:00.0,vf_msg_cfg=80@120:180

- ``Support Tx diagnostics`` (default ``not enabled``)

  Set the ``devargs`` parameter ``mbuf_check`` to enable Tx diagnostics.
  For example, ``-a 18:01.0,mbuf_check=<case>`` or ``-a 18:01.0,mbuf_check=[<case1>,<case2>...]``.
  Also, ``xstats_get`` can be used to get the error counts,
  which are collected in the ``tx_mbuf_error_packets`` xstats.
  For example, to show the statistics in testpmd, use: ``testpmd> show port xstats all``.
  Supported values for the ``<case>`` parameter (see the sketch after this list):

  * ``mbuf``: Check for a corrupted mbuf.
  * ``size``: Check min/max packet length according to the HW spec.
  * ``segment``: Check that the number of mbuf segments does not exceed the HW limit.
  * ``offload``: Check for any unsupported offload flags.

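  As a sketch, assuming a hypothetical device at ``18:01.0``, the mbuf and size
  checks could be enabled together and the error counter read from testpmd::

    ./<build_dir>/app/dpdk-testpmd -a 18:01.0,mbuf_check=[mbuf,size] -- -i
    ...
    testpmd> show port xstats all
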
Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~
For Vector RX it is assumed that the number of descriptors per ring will be a power
of 2. With this pre-condition, the ring pointer can easily wrap back to the
head after hitting the tail without a conditional check. In addition Vector RX
can use this assumption to do a bit mask using ``ring_size - 1``, as sketched below.

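The following minimal C sketch (illustrative only, not the PMD's actual code)
shows why a power-of-2 ring size matters: the index wraps with a single bit
mask instead of a branch or a division.

.. code-block:: c

   #include <stdint.h>

   /* Advance a ring index; valid only when ring_size is a power of 2. */
   static inline uint16_t
   ring_next(uint16_t idx, uint16_t ring_size)
   {
       /* Equivalent to (idx + 1) % ring_size, but branch-free. */
       return (idx + 1) & (ring_size - 1);
   }
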
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.


SR-IOV: Prerequisites and sample Application Notes
--------------------------------------------------

#. Load the kernel module:

   .. code-block:: console

      modprobe i40e

   Check the output in dmesg:

   .. code-block:: console

      i40e 0000:83:00.1 ens802f0: renamed from eth0

#. Bring up the PF ports:

   .. code-block:: console

      ifconfig ens802f0 up

#. Create VF device(s):

   Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry
   of the parent PF.

   Example:

   .. code-block:: console

      echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs


#. Assign VF MAC address:

   Assign MAC address to the VF using the iproute2 utility. The syntax is:

   .. code-block:: console

      ip link set <PF netdev id> vf <VF id> mac <macaddr>

   Example:

   .. code-block:: console

      ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0

#. Assign VF to VM, and bring up the VM.
   Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ...
      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL: probe driver: 8086:1572 rte_i40e_pmd
      EAL: PCI memory mapped at 0x7f7f80000000
      EAL: PCI memory mapped at 0x7f7f80800000
      PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      ...

      PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
      satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.

      ...
      Port 0: 68:05:CA:26:85:84
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Done

      testpmd>


Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

The VLAN filter only works when promiscuous mode is off.

Start ``testpmd``, and add VLAN 10 to port 0:

.. code-block:: console

    ./<build_dir>/app/dpdk-testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
    ...

    testpmd> set promisc 0 off
    testpmd> rx_vlan add 10 0


Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
The Flow Director filters can match different fields in different types of packets: the flow type, a specific input set per flow type and the flexible payload.

The default input set of each flow type is::

   ipv4-other : src_ip_address, dst_ip_address
   ipv4-frag  : src_ip_address, dst_ip_address
   ipv4-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   ipv6-other : src_ip_address, dst_ip_address
   ipv6-frag  : src_ip_address, dst_ip_address
   ipv6-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   l2_payload : ether_type

By default the flex payload is selected from offsets 0 to 15 of the packet's payload, but it is masked out from matching.

Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-15 -n 4 -- -i --disable-rss \
                 --pkt-filter-mode=perfect --rxq=8 --txq=8 --nb-cores=8 \
                 --nb-ports=1

Add a rule to direct an ``ipv4-udp`` packet whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.3 \
            dst is 2.2.2.5 / udp src is 32 dst is 32 / end \
            actions mark id 1 / queue index 1 / end

Check the flow director status:

.. code-block:: console

   testpmd> show port fdir 0

   ######################## FDIR infos for port 0      ####################
     MODE:   PERFECT
     SUPPORTED FLOW TYPE:  ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
                           ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
                           l2_payload
     FLEX PAYLOAD INFO:
     max_len:       16          payload_limit: 480
     payload_unit:  2           payload_seg:   3
     bitmask_unit:  2           bitmask_num:   2
     MASK:
       vlan_tci: 0x0000,
       src_ipv4: 0x00000000,
       dst_ipv4: 0x00000000,
       src_port: 0x0000,
       dst_port: 0x0000
       src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
       dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
     FLEX PAYLOAD SRC OFFSET:
       L2_PAYLOAD:    0      1      2      3      4      5      6  ...
       L3_PAYLOAD:    0      1      2      3      4      5      6  ...
       L4_PAYLOAD:    0      1      2      3      4      5      6  ...
     FLEX MASK CFG:
       ipv4-udp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-tcp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-sctp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-other:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-frag:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-udp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-tcp:    00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-sctp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-other:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-frag:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       l2_payload:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     guarant_count: 1           best_count:    0
     guarant_space: 512         best_space:    7168
     collision:     0           free:          0
     maxhash:       0           maxlen:        0
     add:           0           remove:        0
     f_add:         0           f_remove:      0


Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports a feature called
"Floating VEB".

A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
for functionality that allows local switching between virtual endpoints within
a physical endpoint and also with an external bridge/network.

A "Floating" VEB doesn't have an uplink connection to the outside world so all
switching is done internally and remains within the host. As such, this
feature provides security benefits.

In addition, a Floating VEB overcomes a limitation of normal VEBs where they
cannot forward packets when the physical link is down. Floating VEBs don't need
to connect to the NIC port so they can still forward traffic from VF to VF
even when the physical link is down.

Therefore, with this feature enabled VFs can be limited to communicating with
each other but not an outside network, and they can do so even when there is
no physical uplink on the associated NIC port.

To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::

    -a 84:00.0,enable_floating_veb=1

In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.

Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::

    -a 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4

In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.

The current implementation only supports one floating VEB and one regular
VEB. VFs can connect to a floating VEB or a regular VEB according to the
configuration passed on the EAL command line.

The floating VEB functionality requires a NIC firmware version of 5.0
or greater.

Dynamic Device Personalization (DDP)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet 700 Series, except for the Intel Ethernet Connection
X722, supports a feature called "Dynamic Device Personalization (DDP)",
which is used to configure hardware by downloading a profile to support
protocols/filters which are not supported by default. The DDP
functionality requires a NIC firmware version of 6.0 or greater.

The current implementation supports GTP-C/GTP-U/PPPoE/PPPoL2TP/ESP;
steering for these protocols can be configured with the rte_flow API.

The GTPv1 package can be downloaded from
https://downloadcenter.intel.com/download/27587.

The PPPoE package can be downloaded from
https://downloadcenter.intel.com/download/28040.

The ESP-AH package can be downloaded from
https://downloadcenter.intel.com/download/29446.

Load a profile which supports GTP and store the backup profile:

.. code-block:: console

   testpmd> ddp add 0 ./gtp.pkgo,./backup.pkgo

Delete a GTP profile and restore the backup profile:

.. code-block:: console

   testpmd> ddp del 0 ./backup.pkgo

Get the loaded DDP package info list:

.. code-block:: console

   testpmd> ddp get list 0

Display information about a GTP profile:

.. code-block:: console

   testpmd> ddp get info ./gtp.pkgo

Input set configuration
~~~~~~~~~~~~~~~~~~~~~~~
The input set for any PCTYPE can be configured with a user defined configuration.
For example, to use only the 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:

.. code-block:: console

   testpmd> port config 0 pctype 43 hash_inset clear all
   testpmd> port config 0 pctype 43 hash_inset set field 13
   testpmd> port config 0 pctype 43 hash_inset set field 14
   testpmd> port config 0 pctype 43 hash_inset set field 15

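Here pctype ``43`` corresponds to the ipv6-tcp flow type (see ``enum
i40e_filter_pctype`` in ``i40e_type.h``), and fields 13 to 15 select the
48-bit prefix of the IPv6 source address.
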
Queue region configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~
The Intel® Ethernet 700 Series supports a feature of queue region
configuration for RSS in the PF, so that different traffic classes or
different packet classification types can be separated into different
queues in different queue regions. There is an API for configuring
queue regions in RSS from the command line. It can parse the parameters
of the region index, queue number, queue start index, user priority, traffic
classes and so on. Depending on commands from the command line, it will call
i40e private APIs and start the process of setting or flushing the queue
region configuration. As this feature is specific to i40e, only private
APIs are used.

.. code-block:: console

   testpmd> set port (port_id) queue-region region_id (value) \
            queue_start_index (value) queue_num (value)
   testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
   testpmd> set port (port_id) queue-region UP (value) region_id (value)
   testpmd> set port (port_id) queue-region flush (on|off)
   testpmd> show port (port_id) queue-region

Generic flow API
~~~~~~~~~~~~~~~~

- ``RSS Flow``

  RSS Flow supports setting the hash input set and hash function,
  enabling hashing, and configuring queues.
  For example, to configure queues 0, 1, 2, 3:

  .. code-block:: console

    testpmd> flow create 0 ingress pattern end actions rss types end \
      queues 0 1 2 3 end / end

  Enable hashing and set the input set for ipv4-tcp:

  .. code-block:: console

    testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
      actions rss types ipv4-tcp l3-src-only end queues end / end

  Enable symmetric hashing for flow type ipv4-tcp:

  .. code-block:: console

    testpmd> flow create 0 ingress pattern eth / ipv4 / tcp / end \
      actions rss types ipv4-tcp end queues end func symmetric_toeplitz / end

  Set the hash function to simple XOR:

  .. code-block:: console

    testpmd> flow create 0 ingress pattern end actions rss types end \
      queues end func simple_xor / end

Limitations or Known issues
---------------------------

MPLS packet classification
~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like::

   testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
            0x8847 flexbytes () fwd pf queue <N> fd_id <M>

With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skip is implemented, while no
new packet type, classification or offload is possible. With this change,
the L2 Payload flow type in flow director cannot be used to classify MPLS packets
as with previous firmware versions. Instead, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like::

   testpmd> flow create 0 ingress pattern eth type is 0x8847 / end \
            actions queue index <M> / end

Receive packets with Ethertype 0x88A8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a FW limitation, the PF can receive packets with Ethertype 0x88A8
only when the floating VEB is disabled.

Incorrect Rx statistics when packet is oversize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a packet is over the maximum frame size, the packet is dropped.
However, the Rx statistics returned by ``rte_eth_stats_get`` incorrectly
show it as received.

RX/TX statistics may be incorrect when register overflowed
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The rx_bytes/tx_bytes statistics registers are 48 bits wide.
Although this is extended to 64 bits on the software side,
there is no way to detect if the overflow occurred more than once.
So the rx_bytes/tx_bytes statistics data is correct only when the statistics
are updated at least once between two overflows.

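As a rough worked example: a 48-bit byte counter wraps after 2^48 bytes (about
281 TB). At a sustained 40 Gbit/s (5 GB/s) that takes roughly
2^48 / (5 * 10^9) ≈ 56,000 seconds, i.e. about 15.6 hours, so reading the
statistics well within that interval keeps them correct.
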
VF & TC max bandwidth setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The per VF max bandwidth and per TC max bandwidth cannot be enabled in parallel.
The behavior differs when handling per VF and per TC max bandwidth settings.
When enabling per VF max bandwidth, SW checks if per TC max bandwidth is
enabled. If so, it returns failure.
When enabling per TC max bandwidth, SW checks if per VF max bandwidth
is enabled. If so, it disables per VF max bandwidth and continues with the
per TC max bandwidth setting.

TC TX scheduling mode setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 2 TX scheduling modes for TCs, round robin mode and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, it
has no effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.

DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.

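As a sketch, DCB with 4 traffic classes could be configured from testpmd while
the port is stopped (the parameters here are illustrative and depend on the setup)::

   testpmd> port stop 0
   testpmd> port config 0 dcb vt off 4 pfc off
   testpmd> port start 0
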
Global configuration warning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The I40E PMD sets some global registers to enable certain functions or
configuration. Consequently, when using different ports of the same NIC with
the Linux kernel and DPDK, the port with the Linux kernel driver is impacted
by the port with DPDK.
For example, the register I40E_GL_SWT_L2TAGCTRL is used to control L2 tags,
and the i40e PMD uses I40E_GL_SWT_L2TAGCTRL to set the VLAN TPID. If the TPID
is set on port A with DPDK, the configuration also impacts port B on the same
NIC with the kernel driver, which may not want that TPID.
So the PMD reports a warning to clarify what is changed by writing a global register.

Cloud Filter
~~~~~~~~~~~~

Programming cloud filters for IPv4/6_UDP/TCP/SCTP with SRC port only or DST port only
makes any cloud filter using inner_vlan or tunnel key invalid. The default configuration
can be recovered only by a NIC core reset.

Mirror rule limitation for X722
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a firmware restriction of the X722, the same VSI cannot have more than one mirror rule.

.. _net_i40e_testpmd_commands:

Testpmd driver specific commands
--------------------------------

Some i40e driver specific features are integrated in testpmd.

RSS queue region
~~~~~~~~~~~~~~~~

Set the RSS queue region span on a port::

   testpmd> set port (port_id) queue-region region_id (value) \
            queue_start_index (value) queue_num (value)

Set flowtype mapping on a RSS queue region on a port::

   testpmd> set port (port_id) queue-region region_id (value) flowtype (value)

where:

* For the flowtype (pctype) of a packet, the specific index for each type has
  been defined in the file i40e_type.h as enum i40e_filter_pctype.

Set user priority mapping on a RSS queue region on a port::

   testpmd> set port (port_id) queue-region UP (value) region_id (value)

Flush all queue region related configuration on a port::

   testpmd> set port (port_id) queue-region flush (on|off)

where:

* ``on``: commits the configuration to hardware. All queue region
  configuration from the application is at first only kept in the DPDK
  driver; only after "flush on" is it all committed to the hardware.

* ``off``: clears all queue region configuration made so far and restores
  the default configuration of the DPDK i40e driver at start-up.

Show all queue region related configuration info on a port::

   testpmd> show port (port_id) queue-region

.. note::

  Queue regions are only supported on the PF for now, so these commands
  only configure queue regions on PF ports.

set promisc (for VF)
~~~~~~~~~~~~~~~~~~~~

Set the unicast promiscuous mode for a VF from the PF.
It is supported by Intel i40e NICs.
In promiscuous mode packets are not dropped if they aren't for the specified MAC address::

   testpmd> set vf promisc (port_id) (vf_id) (on|off)

set allmulticast (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~

Set the multicast promiscuous mode for a VF from the PF.
It is supported by Intel i40e NICs.
In promiscuous mode packets are not dropped if they aren't for the specified MAC address::

   testpmd> set vf allmulti (port_id) (vf_id) (on|off)

set broadcast mode (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set broadcast mode for a VF from the PF::

   testpmd> set vf broadcast (port_id) (vf_id) (on|off)

vlan set tag (for VF)
~~~~~~~~~~~~~~~~~~~~~

Set VLAN tag for a VF from the PF::

   testpmd> set vf vlan tag (port_id) (vf_id) (on|off)

set tx max bandwidth (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set TX max absolute bandwidth (Mbps) for a VF from the PF::

   testpmd> set vf tx max-bandwidth (port_id) (vf_id) (max_bandwidth)

set tc tx min bandwidth (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set all TCs' TX min relative bandwidth (%) for a VF from the PF::

   testpmd> set vf tc tx min-bandwidth (port_id) (vf_id) (bw1, bw2, ...)

set tc tx max bandwidth (for VF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set a TC's TX max absolute bandwidth (Mbps) for a VF from the PF::

   testpmd> set vf tc tx max-bandwidth (port_id) (vf_id) (tc_no) (max_bandwidth)

set tc strict link priority mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set some TCs' strict link priority mode on a physical port::

   testpmd> set tx strict-link-priority (port_id) (tc_bitmap)

ddp add
~~~~~~~

Load a dynamic device personalization (DDP) profile and store the backup profile::

   testpmd> ddp add (port_id) (profile_path[,backup_profile_path])

ddp del
~~~~~~~

Delete a dynamic device personalization profile and restore the backup profile::

   testpmd> ddp del (port_id) (backup_profile_path)

ddp get list
~~~~~~~~~~~~

Get the loaded dynamic device personalization (DDP) package info list::

   testpmd> ddp get list (port_id)

ddp get info
~~~~~~~~~~~~

Display information about a dynamic device personalization (DDP) profile::

   testpmd> ddp get info (profile_path)

ptype mapping
~~~~~~~~~~~~~

List all items from the ptype mapping table::

   testpmd> ptype mapping get (port_id) (valid_only)

where:

* ``valid_only``: A flag indicating whether to list only valid items (=1) or all items (=0).

Replace a specific or a group of software defined ptypes with a new one::

   testpmd> ptype mapping replace (port_id) (target) (mask) (pkt_type)

where:

* ``target``: A specific software ptype or a mask to represent a group of software ptypes.

* ``mask``: A flag indicating whether "target" is a specific software ptype (=0) or a ptype mask (=1).

* ``pkt_type``: The new software ptype to replace the old ones.

Update the hardware defined ptype to software defined packet type mapping table::

   testpmd> ptype mapping update (port_id) (hw_ptype) (sw_ptype)

where:

* ``hw_ptype``: hardware ptype as the index of the ptype mapping table.

* ``sw_ptype``: software ptype as the value of the ptype mapping table.

Reset the ptype mapping table::

   testpmd> ptype mapping reset (port_id)

show port pctype mapping
~~~~~~~~~~~~~~~~~~~~~~~~

List all items from the pctype mapping table::

   testpmd> show port (port_id) pctype mapping

High Performance of Small Packets on 40GbE NIC
----------------------------------------------

As there might be firmware fixes for performance enhancement in the latest
firmware image, a firmware update might be needed to get high performance.
Check the Intel support website for the latest firmware updates.
Users should consult the release notes specific to a DPDK release to identify
the validated firmware version for a NIC using the i40e driver.

Use 16 Bytes RX Descriptor Size
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The i40e PMD supports both 16 and 32 byte RX descriptor sizes, and the 16 byte
size can help achieve high performance with small packets.
In ``config/rte_config.h`` set the following to use 16 byte RX descriptors::

   #define RTE_LIBRTE_I40E_16BYTE_RX_DESC 1

Input set requirement of each pctype for FDIR
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each PCTYPE can only have one specific FDIR input set at one time.
For example, if creating 2 rte_flow rules with different input sets for one PCTYPE,
the second will fail with the message "Conflict with the first rule's input set",
which means the current rule's input set conflicts with the first rule's.
Remove the first rule if you want to change the input set of the PCTYPE
(see the illustration below).

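As a hypothetical illustration, the second rule below would be rejected,
because its input set (the set of specified fields) for the same ipv4-udp
PCTYPE differs from the first rule's::

   testpmd> flow create 0 ingress pattern eth / ipv4 src is 2.2.2.3 / udp src is 32 / end \
            actions queue index 1 / end
   testpmd> flow create 0 ingress pattern eth / ipv4 dst is 2.2.2.5 / udp dst is 32 / end \
            actions queue index 2 / end
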
VLAN related features missing when FW >= 8.4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the FW version >= 8.4, there will be some VLAN related issues:

#. The TCI input set for QinQ is invalid.
#. Configuring the TPID for QinQ fails.
#. QinQ must be enabled before enabling VLAN filtering.
#. Stripping the outer VLAN fails.

Example of getting best performance with l3fwd example
------------------------------------------------------

The following is an example of running the DPDK ``l3fwd`` sample application to get high performance on a
server with Intel Xeon processors and Intel Ethernet CNA XL710.

The example scenario is to get best performance with two Intel Ethernet CNA XL710 40GbE ports.
See :numref:`figure_intel_perf_test_setup` for the performance test setup.

.. _figure_intel_perf_test_setup:

.. figure:: img/intel_perf_test_setup.*

   Performance Test Setup


#. Add two Intel Ethernet CNA XL710 cards to the platform, and use one port per card to get best performance.
   The reason for using two NICs is to overcome a PCIe v3.0 limitation since it cannot provide 80GbE bandwidth
   for two 40GbE ports, but two different PCIe v3.0 x8 slots can.
   Refer to the sample NICs output below; we select ``82:00.0`` and ``85:00.0`` as test ports::

      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

#. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.

#. Check the PCI devices' NUMA node (socket id) and get the core numbers on that socket.
   In this case, ``82:00.0`` and ``85:00.0`` are both in socket 1, and the cores on socket 1 in the referenced platform
   are 18-35 and 54-71.
   Note: Don't use 2 logical cores on the same physical core (e.g. physical core 18 has 2 logical cores,
   18 and 54); instead, use 2 logical cores from different physical cores (e.g. cores 18 and 19).

#. Bind these two ports to igb_uio.

#. For an Intel Ethernet CNA XL710 40GbE port, at least two queue pairs are needed to achieve the best performance,
   so two queues per port are required, and each queue pair needs a dedicated CPU core for receiving/transmitting packets.

#. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
   Compile the ``l3fwd`` sample with the default lpm mode.

#. The command line for running l3fwd would be something like the following::

      ./dpdk-l3fwd -l 18-21 -n 4 -a 82:00.0 -a 85:00.0 \
              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'

   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.

#. Configure the traffic at a traffic generator.

   * Start creating a stream on the packet generator.

   * Set the Ethernet II type to 0x0800.

Tx bytes affected by the link status change
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 6.01 for the X710 series and 3.33 for the X722 series, the tx_bytes statistics data is affected by
the link down event. Each time the link status changes to down, tx_bytes decreases by 110 bytes.
1040