..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

I40E Poll Mode Driver
======================

The i40e PMD (librte_pmd_i40e) provides poll mode driver support for
10/25/40 Gbps Intel® Ethernet 700 Series Network Adapters based on
the Intel Ethernet Controller X710/XL710/XXV710 and Intel Ethernet
Connection X722 (only a subset of features is supported on X722).


Features
--------

Features of the i40e PMD are:

- Multiple queues for TX and RX
- Receiver Side Scaling (RSS)
- MAC/VLAN filtering
- Packet type information
- Flow director
- Cloud filter
- Checksum offload
- VLAN/QinQ stripping and inserting
- TSO offload
- Promiscuous mode
- Multicast mode
- Port hardware statistics
- Jumbo frames
- Link state information
- Link flow control
- Mirror on port, VLAN and VSI
- Interrupt mode for RX
- Scatter and gather for TX and RX
- Vector Poll mode driver
- DCB
- VMDQ
- SR-IOV VF
- Hot plug
- IEEE1588/802.1AS timestamping
- VF Daemon (VFD) - EXPERIMENTAL
- Dynamic Device Personalization (DDP)
- Queue region configuration
- Virtual Function Port Representors

Prerequisites
-------------

- Identify your adapter using `Intel Support
  <http://www.intel.com/support>`_ and get the latest NVM/FW images.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Upgrade the NVM/FW version following the `Intel® Ethernet NVM Update Tool Quick Usage Guide for Linux
  <https://www-ssl.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-linux-usage-guide.html>`_ and `Intel® Ethernet NVM Update Tool: Quick Usage Guide for EFI <https://www.intel.com/content/www/us/en/embedded/products/networking/nvm-update-tool-quick-efi-usage-guide.html>`_ if needed.

Recommended Matching List
-------------------------

It is highly recommended to upgrade the i40e kernel driver and firmware to
avoid compatibility issues with the i40e PMD. The following matching list has
been tested and verified. For detailed information, refer to the Tested
Platforms/Tested NICs chapter in the release notes.


+--------------+-----------------------+------------------+
| DPDK version | Kernel driver version | Firmware version |
+==============+=======================+==================+
| 19.08        | 2.9.21                | 7.00             |
+--------------+-----------------------+------------------+
| 19.05        | 2.7.29                | 6.80             |
+--------------+-----------------------+------------------+
| 19.02        | 2.7.26                | 6.80             |
+--------------+-----------------------+------------------+
| 18.11        | 2.4.6                 | 6.01             |
+--------------+-----------------------+------------------+
| 18.08        | 2.4.6                 | 6.01             |
+--------------+-----------------------+------------------+
| 18.05        | 2.4.6                 | 6.01             |
+--------------+-----------------------+------------------+
| 18.02        | 2.4.3                 | 6.01             |
+--------------+-----------------------+------------------+
| 17.11        | 2.1.26                | 6.01             |
+--------------+-----------------------+------------------+
| 17.08        | 2.0.19                | 6.01             |
+--------------+-----------------------+------------------+
| 17.05        | 1.5.23                | 5.05             |
+--------------+-----------------------+------------------+
| 17.02        | 1.5.23                | 5.05             |
+--------------+-----------------------+------------------+
| 16.11        | 1.5.23                | 5.05             |
+--------------+-----------------------+------------------+
| 16.07        | 1.4.25                | 5.04             |
+--------------+-----------------------+------------------+
| 16.04        | 1.4.25                | 5.02             |
+--------------+-----------------------+------------------+

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_I40E_PMD`` (default ``y``)

  Toggle compilation of the ``librte_pmd_i40e`` driver.

- ``CONFIG_RTE_LIBRTE_I40E_DEBUG_*`` (default ``n``)

  Toggle display of generic debugging messages.

- ``CONFIG_RTE_LIBRTE_I40E_RX_ALLOW_BULK_ALLOC`` (default ``y``)

  Toggle bulk allocation for RX.

- ``CONFIG_RTE_LIBRTE_I40E_INC_VECTOR`` (default ``n``)

  Toggle the use of the Vector PMD instead of the normal RX/TX path.
  To enable vPMD for RX, bulk allocation for RX must be allowed.

- ``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` (default ``n``)

  Toggle the use of 16-byte RX descriptors; by default the RX descriptor is 32 bytes.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_PF`` (default ``64``)

  Number of queues reserved for the PF.

- ``CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM`` (default ``4``)

  Number of queues reserved for each VMDQ pool.

Runtime Config Options
~~~~~~~~~~~~~~~~~~~~~~

- ``Reserved number of Queues per VF`` (default ``4``)

  The number of reserved queues per VF is determined by its host PF. If the
  PCI address of an i40e PF is aaaa:bb.cc, the number of reserved queues per
  VF can be configured with the EAL parameter ``-w aaaa:bb.cc,queue-num-per-vf=n``.
  The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
  number of reserved queues per VF is 4 by default. If a VF requests more than
  the reserved number of queues, the PF is able to allocate up to 16 queues to
  it after a VF reset.


- ``Support multiple driver`` (default ``disable``)

  There was a multiple driver support issue when using 700 Series Ethernet
  Adapters with both the Linux kernel driver and the DPDK PMD.
  To fix this issue, the ``devargs`` parameter ``support-multi-driver`` is introduced,
  for example::

    -w 84:00.0,support-multi-driver=1

  With the above configuration, the DPDK PMD will not change global registers, and
  will switch the PF interrupt from IntN to Int0 to avoid interrupt conflicts between
  DPDK and the Linux kernel.

- ``Support VF Port Representor`` (default ``not enabled``)

  The i40e PF PMD supports the creation of VF port representors for the control
  and monitoring of i40e virtual function devices. Each port representor
  corresponds to a single virtual function of that device. Using the ``devargs``
  option ``representor`` the user can specify which virtual functions to create
  port representors for on initialization of the PF PMD by passing the VF IDs of
  the VFs which are required::

    -w DBDF,representor=[0,1,4]

  Currently hot-plugging of representor ports is not supported so all required
  representors must be specified on the creation of the PF.

- ``Use latest supported vector`` (default ``disable``)

  The latest supported vector path may not always give the best performance, so
  it is recommended only on later platforms. However, users may want the latest
  vector path since it can give better performance in some real workloads. For this,
  the ``devargs`` parameter ``use-latest-supported-vec`` is introduced, for example::

    -w 84:00.0,use-latest-supported-vec=1

Vector RX Pre-conditions
~~~~~~~~~~~~~~~~~~~~~~~~

For Vector RX it is assumed that the number of descriptors per ring is a power
of 2. With this pre-condition, the ring pointer can easily wrap back to the
head after hitting the tail without a conditional check. In addition, Vector RX
can use this assumption to compute the index with a bit mask of ``ring_size - 1``.

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.


SR-IOV: Prerequisites and sample Application Notes
--------------------------------------------------

#. Load the kernel module:

   .. code-block:: console

      modprobe i40e

   Check the output in dmesg:

   .. code-block:: console

      i40e 0000:83:00.1 ens802f0: renamed from eth0

#. Bring up the PF ports:

   .. code-block:: console

      ifconfig ens802f0 up

#. Create VF device(s):

   Echo the number of VFs to be created into the ``sriov_numvfs`` sysfs entry
   of the parent PF.

   Example:

   .. code-block:: console

      echo 2 > /sys/devices/pci0000:00/0000:00:03.0/0000:81:00.0/sriov_numvfs


#. Assign VF MAC address:

   Assign a MAC address to the VF using the iproute2 utility. The syntax is:

   .. code-block:: console

      ip link set <PF netdev id> vf <VF id> mac <macaddr>

   Example:

   .. code-block:: console

      ip link set ens802f0 vf 0 mac a0:b0:c0:d0:e0:f0

#. Assign the VF to a VM, and bring up the VM.
   Please see the documentation for the *I40E/IXGBE/IGB Virtual Function Driver*.

#. Running testpmd:

   Follow the instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ...
      EAL: PCI device 0000:83:00.0 on NUMA socket 1
      EAL: probe driver: 8086:1572 rte_i40e_pmd
      EAL: PCI memory mapped at 0x7f7f80000000
      EAL: PCI memory mapped at 0x7f7f80800000
      PMD: eth_i40e_dev_init(): FW 5.0 API 1.5 NVM 05.00.02 eetrack 8000208a
      Interactive-mode selected
      Configuring Port 0 (socket 0)
      ...

      PMD: i40e_dev_rx_queue_setup(): Rx Burst Bulk Alloc Preconditions are
      satisfied.Rx Burst Bulk Alloc function will be used on port=0, queue=0.

      ...
      Port 0: 68:05:CA:26:85:84
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Done

      testpmd>


Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

The VLAN filter only works when promiscuous mode is off.

Start ``testpmd`` and add VLAN 10 to port 0:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --forward-mode=mac
   ...

   testpmd> set promisc 0 off
   testpmd> rx_vlan add 10 0


Flow Director
~~~~~~~~~~~~~

The Flow Director works in receive mode to identify specific flows or sets of flows and route them to specific queues.
Flow Director filters can match different fields for different types of packet: flow type, the specific input set per flow type and the flexible payload.

The default input set of each flow type is::

   ipv4-other : src_ip_address, dst_ip_address
   ipv4-frag  : src_ip_address, dst_ip_address
   ipv4-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv4-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   ipv6-other : src_ip_address, dst_ip_address
   ipv6-frag  : src_ip_address, dst_ip_address
   ipv6-tcp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-udp   : src_ip_address, dst_ip_address, src_port, dst_port
   ipv6-sctp  : src_ip_address, dst_ip_address, src_port, dst_port,
                verification_tag
   l2_payload : ether_type

The flexible payload is selected from offsets 0 to 15 of the packet's payload by default, while it is masked out from matching.

Start ``testpmd`` with ``--disable-rss`` and ``--pkt-filter-mode=perfect``:

.. code-block:: console

   ./app/testpmd -l 0-15 -n 4 -- -i --disable-rss --pkt-filter-mode=perfect \
                 --rxq=8 --txq=8 --nb-cores=8 --nb-ports=1

Add a rule to direct ``ipv4-udp`` packets whose ``dst_ip=2.2.2.5, src_ip=2.2.2.3, src_port=32, dst_port=32`` to queue 1:

.. code-block:: console

   testpmd> flow_director_filter 0 mode IP add flow ipv4-udp \
            src 2.2.2.3 32 dst 2.2.2.5 32 vlan 0 flexbytes () \
            fwd pf queue 1 fd_id 1

Check the flow director status:

.. code-block:: console

   testpmd> show port fdir 0

   ######################## FDIR infos for port 0 ####################
     MODE:   PERFECT
     SUPPORTED FLOW TYPE:  ipv4-frag ipv4-tcp ipv4-udp ipv4-sctp ipv4-other
                           ipv6-frag ipv6-tcp ipv6-udp ipv6-sctp ipv6-other
                           l2_payload
     FLEX PAYLOAD INFO:
     max_len:      16         payload_limit: 480
     payload_unit: 2          payload_seg:   3
     bitmask_unit: 2          bitmask_num:   2
     MASK:
       vlan_tci: 0x0000,
       src_ipv4: 0x00000000,
       dst_ipv4: 0x00000000,
       src_port: 0x0000,
       dst_port: 0x0000
       src_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000,
       dst_ipv6: 0x00000000,0x00000000,0x00000000,0x00000000
     FLEX PAYLOAD SRC OFFSET:
       L2_PAYLOAD: 0 1 2 3 4 5 6 ...
       L3_PAYLOAD: 0 1 2 3 4 5 6 ...
       L4_PAYLOAD: 0 1 2 3 4 5 6 ...
     FLEX MASK CFG:
       ipv4-udp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-tcp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-sctp:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv4-frag:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-udp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-tcp:   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-sctp:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-other: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       ipv6-frag:  00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
       l2_payload: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
     guarant_count: 1           best_count:    0
     guarant_space: 512         best_space:    7168
     collision:     0           free:          0
     maxhash:       0           maxlen:        0
     add:           0           remove:        0
     f_add:         0           f_remove:      0


Delete all flow director rules on a port:

.. code-block:: console

   testpmd> flush_flow_director 0

Floating VEB
~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports a feature called
"Floating VEB".

A Virtual Ethernet Bridge (VEB) is an IEEE Edge Virtual Bridging (EVB) term
for functionality that allows local switching between virtual endpoints within
a physical endpoint and also with an external bridge/network.

A "Floating" VEB doesn't have an uplink connection to the outside world, so all
switching is done internally and remains within the host. As such, this
feature provides security benefits.

In addition, a Floating VEB overcomes a limitation of normal VEBs where they
cannot forward packets when the physical link is down. Floating VEBs don't need
to connect to the NIC port, so they can still forward traffic from VF to VF
even when the physical link is down.

Therefore, with this feature enabled, VFs can be limited to communicating with
each other but not an outside network, and they can do so even when there is
no physical uplink on the associated NIC port.

To enable this feature, the user should pass a ``devargs`` parameter to the
EAL, for example::

   -w 84:00.0,enable_floating_veb=1

In this configuration the PMD will use the floating VEB feature for all the
VFs created by this PF device.

Alternatively, the user can specify which VFs need to connect to this floating
VEB using the ``floating_veb_list`` argument::

   -w 84:00.0,enable_floating_veb=1,floating_veb_list=1;3-4

In this example ``VF1``, ``VF3`` and ``VF4`` connect to the floating VEB,
while other VFs connect to the normal VEB.

The current implementation only supports one floating VEB and one regular
VEB. VFs can connect to a floating VEB or a regular VEB according to the
configuration passed on the EAL command line.

The floating VEB functionality requires a NIC firmware version of 5.0
or greater.

Dynamic Device Personalization (DDP)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet 700 Series, except for the Intel Ethernet Connection
X722, supports a feature called "Dynamic Device Personalization (DDP)",
which is used to configure hardware by downloading a profile to support
protocols/filters which are not supported by default. The DDP
functionality requires a NIC firmware version of 6.0 or greater.

The current implementation supports GTP-C/GTP-U/PPPoE/PPPoL2TP;
steering of these flows can be done with the rte_flow API.
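
For illustration only, once a GTP profile has been loaded (see below), a GTP-U flow with a
given TEID could be steered to a specific queue from ``testpmd`` with an rte_flow rule along
the following lines; the TEID value and queue index here are arbitrary placeholders:

.. code-block:: console

   testpmd> flow create 0 ingress pattern eth / ipv4 / udp / gtpu teid is 10 / end actions queue index 3 / end
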

A GTPv1 package is released, and it can be downloaded from
https://downloadcenter.intel.com/download/27587.

A PPPoE package is released, and it can be downloaded from
https://downloadcenter.intel.com/download/28040.

Load a profile which supports GTP and store a backup profile:

.. code-block:: console

   testpmd> ddp add 0 ./gtp.pkgo,./backup.pkgo

Delete a GTP profile and restore the backup profile:

.. code-block:: console

   testpmd> ddp del 0 ./backup.pkgo

Get the list of loaded DDP packages:

.. code-block:: console

   testpmd> ddp get list 0

Display information about a GTP profile:

.. code-block:: console

   testpmd> ddp get info ./gtp.pkgo

Input set configuration
~~~~~~~~~~~~~~~~~~~~~~~

The input set for any PCTYPE can be configured with a user defined configuration.
For example, to use only a 48-bit prefix of the IPv6 source address for IPv6 TCP RSS:

.. code-block:: console

   testpmd> port config 0 pctype 43 hash_inset clear all
   testpmd> port config 0 pctype 43 hash_inset set field 13
   testpmd> port config 0 pctype 43 hash_inset set field 14
   testpmd> port config 0 pctype 43 hash_inset set field 15

Queue region configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~

The Intel® Ethernet 700 Series supports queue region configuration for
RSS in the PF, so that different traffic classes or different packet
classification types can be separated into different queues in
different queue regions. There is an API for configuring queue regions
in RSS from the command line. It can parse the parameters for the
region index, queue number, queue start index, user priority, traffic
classes and so on. Depending on the commands from the command line, it
calls i40e private APIs and starts the process of setting or flushing
the queue region configuration. As this feature is specific to i40e,
only private APIs are used. The new ``testpmd`` commands are shown
below. For details please refer to :doc:`../testpmd_app_ug/index`.

.. code-block:: console

   testpmd> set port (port_id) queue-region region_id (value) \
            queue_start_index (value) queue_num (value)
   testpmd> set port (port_id) queue-region region_id (value) flowtype (value)
   testpmd> set port (port_id) queue-region UP (value) region_id (value)
   testpmd> set port (port_id) queue-region flush (on|off)
   testpmd> show port (port_id) queue-region
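
As a purely illustrative example (the region, queue, flow type and user priority values below
are arbitrary), four queues could be grouped into region 0, flow type 31 and user priority 3
mapped to it, and the configuration then committed with ``flush on``:

.. code-block:: console

   testpmd> set port 0 queue-region region_id 0 queue_start_index 0 queue_num 4
   testpmd> set port 0 queue-region region_id 0 flowtype 31
   testpmd> set port 0 queue-region UP 3 region_id 0
   testpmd> set port 0 queue-region flush on
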

Limitations or Known issues
---------------------------

MPLS packet classification
~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 5.0, MPLS packets are not recognized by the NIC.
The L2 Payload flow type in flow director can be used to classify MPLS packets
by using a command in testpmd like::

   testpmd> flow_director_filter 0 mode IP add flow l2_payload ether \
            0x8847 flexbytes () fwd pf queue <N> fd_id <M>

With NIC firmware version 5.0 or greater, some limited MPLS support
is added: native MPLS (MPLS in Ethernet) skip is implemented, while no
new packet type, classification or offload is possible. With this change, the
L2 Payload flow type in flow director can no longer be used to classify MPLS
packets as with previous firmware versions. Instead, the Ethertype filter can be
used to classify MPLS packets by using a command in testpmd like::

   testpmd> ethertype_filter 0 add mac_ignr 00:00:00:00:00:00 ethertype \
            0x8847 fwd queue <M>

16 Byte RX Descriptor setting on DPDK VF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Currently the VF's RX descriptor mode is decided by the PF. There is no PF-VF
interface for the VF to request a particular RX descriptor mode, and no
interface to notify the VF of its own RX descriptor mode.
No currently available version of the Linux i40e kernel driver supports the
16-byte RX descriptor. If the Linux i40e kernel driver is used as the host
driver while the DPDK i40e PMD is used as the VF driver, DPDK cannot choose the
16-byte receive descriptor: the RX descriptor is already set to 32 bytes by
the i40e kernel driver. That is to say, the user should keep
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC=n`` in the config file.
In the future, if the Linux i40e driver supports the 16-byte RX descriptor, the
user should make sure the DPDK VF uses the same RX descriptor mode, 16 byte or
32 byte, as the PF driver.

The same rule applies to DPDK PF + DPDK VF: the PF and VF should use the same
RX descriptor mode, otherwise VF RX will not work.

Receive packets with Ethertype 0x88A8
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Due to a FW limitation, the PF can receive packets with Ethertype 0x88A8
only when the floating VEB is disabled.

Incorrect Rx statistics when packet is oversize
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

When a packet is over the maximum frame size, the packet is dropped.
However, the Rx statistics returned by ``rte_eth_stats_get`` incorrectly
show it as received.

VF & TC max bandwidth setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The per-VF max bandwidth and per-TC max bandwidth cannot be enabled in parallel.
The behavior is different when handling per-VF and per-TC max bandwidth settings.
When enabling per-VF max bandwidth, the software checks whether per-TC max
bandwidth is enabled. If so, it returns failure.
When enabling per-TC max bandwidth, the software checks whether per-VF max
bandwidth is enabled. If so, it disables per-VF max bandwidth and continues with
the per-TC max bandwidth setting.

TC TX scheduling mode setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are 2 TX scheduling modes for TCs, round robin mode and strict priority mode.
If a TC is set to strict priority mode, it can consume unlimited bandwidth.
This means that if the application has set a max bandwidth for that TC, it has
no effect.
It is suggested to set strict priority mode for a TC that is latency
sensitive but does not consume much bandwidth.

VF performance is impacted by PCI extended tag setting
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

To reach maximum NIC performance in the VF the PCI extended tag must be
enabled. The DPDK i40e PF driver will set this feature during initialization,
but the kernel PF driver does not. So when running traffic on a VF which is
managed by the kernel PF driver, a significant NIC performance downgrade has
been observed (for 64 byte packets, there is about 25% line-rate downgrade for
a 25GbE device and about 35% for a 40GbE device).

For kernel version >= 4.11, the kernel's PCI driver will enable the extended
tag if it detects that the device supports it, so by default this is not an
issue.
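
As a quick check (assuming ``lspci`` is available), whether the extended tag is currently
enabled can be inspected before changing anything; in typical ``lspci -vvv`` output the
``ExtTag`` flag on the ``DevCtl`` line reflects the current setting, while the flag on the
``DevCap`` line only indicates that the device supports it::

   lspci -vvv -s <XX:XX.X> | grep ExtTag
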

For kernels <= 4.11, or when the PCI extended tag is otherwise disabled, it can be
enabled using the steps below.

#. Get the current value of the PCI configure register::

      setpci -s <XX:XX.X> a8.w

#. Set bit 8::

      value = value | 0x100

#. Set the PCI configure register with the new value::

      setpci -s <XX:XX.X> a8.w=<value>

Vlan strip of VF
~~~~~~~~~~~~~~~~

The VF VLAN strip function is only supported by i40e kernel drivers >= 2.1.26.

DCB function
~~~~~~~~~~~~

DCB works only when RSS is enabled.

Global configuration warning
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The i40e PMD sets some global registers to enable certain functions or apply
certain configuration. When different ports of the same NIC are used with the
Linux kernel driver and with DPDK, the port using the Linux kernel driver is
affected by the port using DPDK. For example, the register
``I40E_GL_SWT_L2TAGCTRL`` is used to control the L2 tag, and the i40e PMD uses
it to set the VLAN TPID. If the TPID is set on port A with DPDK, the
configuration also affects port B on the same NIC, which uses the kernel driver
and may not want that TPID. The PMD therefore reports a warning to clarify what
is changed by writing a global register.

High Performance of Small Packets on 40GbE NIC
----------------------------------------------

As there might be firmware fixes for performance enhancement in the latest version
of the firmware image, a firmware update might be needed to get the best performance.
Check the Intel support website for the latest firmware updates.
Users should consult the release notes specific to a DPDK release to identify
the validated firmware version for a NIC using the i40e driver.

Use 16 Bytes RX Descriptor Size
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The i40e PMD supports both 16-byte and 32-byte RX descriptors, and the 16-byte
size can help improve performance for small packets. Set
``CONFIG_RTE_LIBRTE_I40E_16BYTE_RX_DESC`` in the config file to use the
16-byte RX descriptors.

Example of getting best performance with l3fwd example
-------------------------------------------------------

The following is an example of running the DPDK ``l3fwd`` sample application to get high performance
on a server with Intel Xeon processors and Intel Ethernet CNA XL710.

The example scenario is to get best performance with two Intel Ethernet CNA XL710 40GbE ports.
See :numref:`figure_intel_perf_test_setup` for the performance test setup.

.. _figure_intel_perf_test_setup:

.. figure:: img/intel_perf_test_setup.*

   Performance Test Setup


1. Add two Intel Ethernet CNA XL710 cards to the platform, and use one port per card to get the best performance.
   The reason for using two NICs is to overcome a PCIe v3.0 limitation: a single PCIe v3.0 x8 slot cannot provide 80GbE bandwidth
   for two 40GbE ports, but two different PCIe v3.0 x8 slots can.
   Refer to the sample NICs output below; we can select ``82:00.0`` and ``85:00.0`` as test ports::

      82:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]
      85:00.0 Ethernet [0200]: Intel XL710 for 40GbE QSFP+ [8086:1583]

2. Connect the ports to the traffic generator. For high speed testing, it's best to use a hardware traffic generator.

3. Check the NUMA node (socket id) of the PCI devices and get the core numbers on that socket.
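
   One way to check the NUMA node of a port and the cores on that node is via sysfs and
   ``lscpu``; the PCI address below matches the example ports used here::

      cat /sys/bus/pci/devices/0000:82:00.0/numa_node
      lscpu | grep "NUMA node"
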
   In this case, ``82:00.0`` and ``85:00.0`` are both on socket 1, and the cores on socket 1 in the referenced platform
   are 18-35 and 54-71.
   Note: don't use 2 logical cores on the same physical core (e.g. core 18 has 2 logical cores, core 18 and core 54); instead, use 2
   logical cores from different physical cores (e.g. core 18 and core 19).

4. Bind these two ports to igb_uio.

5. For an Intel Ethernet CNA XL710 40GbE port, we need at least two queue pairs to achieve the best performance, so two queues per port
   will be required, and each queue pair will need a dedicated CPU core for receiving/transmitting packets.

6. The DPDK sample application ``l3fwd`` will be used for performance testing, using two ports for bi-directional forwarding.
   Compile the ``l3fwd`` sample with the default lpm mode.

7. The command line for running ``l3fwd`` would be something like the following::

      ./l3fwd -l 18-21 -n 4 -w 82:00.0 -w 85:00.0 \
              -- -p 0x3 --config '(0,0,18),(0,1,19),(1,0,20),(1,1,21)'

   This means that the application uses core 18 for port 0, queue pair 0 forwarding, core 19 for port 0, queue pair 1 forwarding,
   core 20 for port 1, queue pair 0 forwarding, and core 21 for port 1, queue pair 1 forwarding.

8. Configure the traffic at a traffic generator.

   * Start creating a stream on the packet generator.

   * Set the Ethernet II type to 0x0800.

Tx bytes affected by the link status change
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For firmware versions prior to 6.01 for the X710 series and 3.33 for the X722 series, the tx_bytes statistics data is affected by
the link down event. Each time the link status changes to down, the tx_bytes counter decreases by 110 bytes.