..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2019 The DPDK contributors

DPDK Release 19.05
==================

New Features
------------

* **Added new armv8 machine targets.**

  Added new armv8 machine targets:

  * BlueField (Mellanox)
  * OcteonTX2 (Marvell)
  * ThunderX2 (Marvell)

* **Added Windows Support.**

  Added Windows support to build the Hello World sample application.

* **Added Stack Library.**

  Added a new stack library and APIs for configuration and use of a bounded
  stack of pointers. The API provides multi-thread-safe push and pop
  operations that can operate on one or more pointers per operation.

  The library supports two stack implementations: standard (lock-based) and
  lock-free. The lock-free implementation is currently limited to x86-64
  platforms.
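
  As an illustration, a minimal usage sketch of the new API is shown below
  (the stack name, depth, and burst size are arbitrary example values):

  .. code-block:: c

     #include <rte_stack.h>

     static int
     stack_demo(void)
     {
         void *objs[8] = { NULL };
         unsigned int n;

         /* Bounded stack of up to 1024 pointers on socket 0. Passing
          * RTE_STACK_F_LF instead of 0 selects the lock-free
          * implementation (x86-64 only in this release). */
         struct rte_stack *s = rte_stack_create("demo", 1024, 0, 0);

         if (s == NULL)
             return -1;

         /* Push and pop are multi-thread safe and operate on bursts of
          * pointers; both return the number actually processed. */
         n = rte_stack_push(s, objs, 8);
         n = rte_stack_pop(s, objs, n);

         rte_stack_free(s);
         return (int)n;
     }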

* **Added Lock-Free Stack Mempool Handler.**

  Added a new lock-free stack handler, which uses the newly added stack
  library.

* **Added RCU library.**

  Added an RCU library supporting a quiescent-state-based memory reclamation
  method. This library helps identify the quiescent state of the reader
  threads so that the writers can free the memory associated with the
  lock-free data structures.
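
  The intended split between readers and a writer can be sketched as follows
  (the thread count and helper names here are illustrative):

  .. code-block:: c

     #include <rte_malloc.h>
     #include <rte_rcu_qsbr.h>

     #define MAX_READERS 4

     static struct rte_rcu_qsbr *v;

     /* Allocate and initialize one QSBR variable shared by all threads. */
     static int
     rcu_setup(void)
     {
         size_t sz = rte_rcu_qsbr_get_memsize(MAX_READERS);

         v = rte_malloc(NULL, sz, RTE_CACHE_LINE_SIZE);
         if (v == NULL)
             return -1;
         return rte_rcu_qsbr_init(v, MAX_READERS);
     }

     /* Reader: register once, then report a quiescent state after each
      * iteration that holds no references to shared entries. */
     static void
     reader_loop(unsigned int thread_id)
     {
         rte_rcu_qsbr_thread_register(v, thread_id);
         rte_rcu_qsbr_thread_online(v, thread_id);
         for (;;) {
             /* ... look up entries in the lock-free structure ... */
             rte_rcu_qsbr_quiescent(v, thread_id);
         }
     }

     /* Writer: after unlinking an entry, wait for all registered readers
      * to pass through a quiescent state before freeing it. */
     static void
     writer_delete(void *entry)
     {
         uint64_t token = rte_rcu_qsbr_start(v);

         rte_rcu_qsbr_check(v, token, true);
         rte_free(entry);
     }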

* **Updated KNI module and PMD.**

  Updated the KNI kernel module to set ``max_mtu`` according to the given
  initial MTU size. Without this, the maximum MTU was 1500.

  Updated the KNI PMD to set the ``mbuf_size`` and MTU based on
  the given mb-pool. This provides the ability to pass jumbo frames
  if the mb-pool contains a suitable buffer size.

* **Added the AF_XDP PMD.**

  Added a Linux-specific PMD for AF_XDP. This PMD can create an AF_XDP socket
  and bind it to a specific netdev queue. It allows a DPDK application to
  send and receive raw packets through the socket, bypassing the kernel
  network stack, to achieve high-performance packet processing.

* **Added a net PMD NFB.**

  Added the new ``nfb`` net driver for Netcope NFB cards. See
  the :doc:`../nics/nfb` NIC guide for more details on this new driver.

* **Added IPN3KE net PMD.**

  Added the new ``ipn3ke`` net driver for the Intel® FPGA PAC (Programmable
  Acceleration Card) N3000. See the :doc:`../nics/ipn3ke` NIC guide for more
  details on this new driver.

  In addition, ``ifpga_rawdev`` was also updated to support the Intel® FPGA
  PAC N3000 with SPI interface access, I2C read/write, and Ethernet PHY
  configuration.

* **Updated Solarflare network PMD.**

  Updated the Solarflare ``sfc_efx`` driver with changes including:

  * Added support for the Rx descriptor status and related API in a
    secondary process.
  * Added support for the Tx descriptor status API in a secondary process.
  * Added support for the RSS RETA and hash configuration reading API in a
    secondary process.
  * Added support for the Rx packet types list in a secondary process.
  * Added Tx prepare to do Tx offload checks.
  * Added support for VXLAN and GENEVE encapsulated TSO.

* **Updated Mellanox mlx4 driver.**

  Updated the Mellanox mlx4 driver with new features and improvements,
  including:

  * Added firmware version reading.
  * Added support for secondary processes.
  * Added support of per-process device registers. Reserving identical VA
    space is no longer needed.
  * Added support for multicast address list interfaces.

* **Updated Mellanox mlx5 driver.**

  Updated the Mellanox mlx5 driver with new features and improvements,
  including:

  * Added firmware version reading.
  * Added support for the new representor naming scheme.
  * Added support for the new PCI device DMA map/unmap API.
  * Added support for multiport InfiniBand devices.
  * Added control of excessive memory pinning by the kernel.
  * Added support for DMA memory registration by a secondary process.
  * Added support of per-process device registers. Reserving identical VA
    space is no longer required.
  * Added support for the jump action for both E-Switch and NIC.
  * Added support for multiple rte_flow groups in NIC steering.
  * Redesigned the flow engine to support large-scale deployments. This
    includes:

    * Support for millions of offloaded flow rules.
    * Fast flow insertion and deletion, up to 1M flow updates per second.

* **Renamed avf to iavf.**

  Renamed the Intel Ethernet Adaptive Virtual Function driver ``avf`` to
  ``iavf``, which includes the directory name, lib name, filenames, makefile,
  docs, macros, functions, structs and any other strings in the code.

* **Updated the enic driver.**

  Updated the enic driver with new features and improvements, including:

  * Fixed several flow (director) bugs related to MARK, SCTP, VLAN, VXLAN,
    and inner packet matching.
  * Added limited support for RAW.
  * Added limited support for RSS.
  * Added limited support for PASSTHRU.

* **Updated the ixgbe driver.**

  Updated the ixgbe driver to add promiscuous mode support for the VF.

* **Updated the ice driver.**

  Updated the ice driver with new features and improvements, including:

  * Added support for SSE and AVX2 instructions in the Rx and Tx paths.
  * Added package download support.
  * Added Safe Mode support.
  * Added RSS support for UDP/TCP/SCTP+IPV4/IPV6 packets.

* **Updated the i40e driver.**

  New features for the PF in the i40e driver:

  * Added support for VXLAN-GPE packets.
  * Added support for VXLAN-GPE classification.

* **Updated the ENETC driver.**

  Updated the ENETC driver with new features and improvements, including:

  * Added physical addressing mode support.
  * Added SXGMII interface support.
  * Added basic statistics support.
  * Added promiscuous and allmulticast mode support.
  * Added MTU update support.
  * Added jumbo frame support.
  * Added queue start/stop support.
  * Added CRC offload support.
  * Added Rx checksum offload validation support.

* **Updated the atlantic PMD.**

  Added an experimental API for MACsec hardware offload.

* **Updated the Intel QuickAssist Technology (QAT) compression PMD.**

  Updated the Intel QuickAssist Technology (QAT) compression PMD to simplify,
  and make more robust, the handling of Scatter Gather Lists (SGLs) with more
  than 16 segments.

* **Updated the QuickAssist Technology (QAT) symmetric crypto PMD.**

  Added support for AES-XTS with 128-bit and 256-bit AES keys.

* **Added Intel QuickAssist Technology PMD for asymmetric crypto.**

  Added a new QAT crypto PMD which provides asymmetric cryptography
  algorithms. Modular exponentiation and modular multiplicative inverse
  algorithms were added in this release.

* **Updated AESNI-MB PMD.**

  Added support for out-of-place operations.

* **Updated the IPsec library.**

  The IPsec library has been updated with support for the AES-CTR and
  3DES-CBC cipher algorithms. The related ``ipsec-secgw`` test scripts have
  been added.

* **Updated the testpmd application.**

  Improved the ``testpmd`` application performance on Arm platforms. For the
  ``macswap`` forwarding mode, NEON intrinsics are now used to do the swap,
  saving CPU cycles.

* **Updated power management library.**

  Added support for Intel Speed Select Technology - Base Frequency (SST-BF).
  The capability mask returned by ``rte_power_get_capabilities`` now includes
  a bit indicating whether the core is a high-frequency core.
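
  For example, an application could test the new bit as follows (a sketch
  assuming the power environment has already been initialized for the lcore,
  and that the ``priority`` capability bit denotes an SST-BF high-frequency
  core):

  .. code-block:: c

     #include <rte_power.h>

     /* Return non-zero if the lcore is an SST-BF high-frequency core. */
     static int
     lcore_is_high_freq(unsigned int lcore_id)
     {
         struct rte_power_core_capabilities caps;

         if (rte_power_get_capabilities(lcore_id, &caps) != 0)
             return 0;

         return caps.priority != 0;
     }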

* **Updated distributor sample application.**

  Added support for the Intel SST-BF feature so that the distributor core is
  pinned to a high-frequency core if one is available.


API Changes
-----------

* eal: the type of the ``attr_value`` parameter of the function
  ``rte_service_attr_get()`` has been changed from ``uint32_t *`` to
  ``uint64_t *``.
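
  Callers now pass a 64-bit counter, for example (the helper name below is
  illustrative):

  .. code-block:: c

     #include <rte_service.h>

     static int
     get_service_cycles(uint32_t service_id, uint64_t *cycles)
     {
         /* attr_value is now uint64_t *; a uint32_t * no longer
          * matches the prototype. */
         return rte_service_attr_get(service_id, RTE_SERVICE_ATTR_CYCLES,
             cycles);
     }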

* meter: replaced ``enum rte_meter_color`` in the meter library with the new
  ``rte_color`` definition added in 19.02. Replacements with ``rte_color``
  values have been performed in many places, such as ``rte_mtr.h`` and
  ``rte_tm.h``, to consolidate multiple color definitions.

* vfio: Functions ``rte_vfio_container_dma_map`` and
  ``rte_vfio_container_dma_unmap`` have been extended with an option to
  request mapping or un-mapping to the default vfio container fd.
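
  For instance, a region can now be mapped to the default container as in
  the following sketch (the helper name is illustrative):

  .. code-block:: c

     #include <stdint.h>
     #include <rte_vfio.h>

     static int
     map_to_default_container(void *addr, uint64_t iova, uint64_t len)
     {
         /* RTE_VFIO_DEFAULT_CONTAINER_FD requests the default vfio
          * container instead of one created by the application. */
         return rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
             (uint64_t)(uintptr_t)addr, iova, len);
     }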

* power: The ``rte_power_set_env`` and ``rte_power_unset_env`` functions
  have been modified to be thread safe.

* timer: Functions have been introduced that allow multiple instances of
  timer lists to be created. In addition, the timer lists are now allocated
  in shared memory. New functions allow a particular timer list to be
  selected when timers are being started, stopped, and managed.


ABI Changes
-----------

* ethdev: Additional fields in rte_eth_dev_info.

  The ``rte_eth_dev_info`` structure has had two extra fields added:
  ``min_mtu`` and ``max_mtu``, each of type ``uint16_t``. The values of these
  fields can be set specifically by the PMDs, as supported values can vary
  from device to device.
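
  Applications can use the new fields to validate an MTU before applying it,
  for example (a sketch; error handling abbreviated):

  .. code-block:: c

     #include <errno.h>
     #include <rte_ethdev.h>

     static int
     set_checked_mtu(uint16_t port_id, uint16_t mtu)
     {
         struct rte_eth_dev_info dev_info;

         rte_eth_dev_info_get(port_id, &dev_info);

         /* min_mtu and max_mtu report the range the device supports. */
         if (mtu < dev_info.min_mtu || mtu > dev_info.max_mtu)
             return -EINVAL;

         return rte_eth_dev_set_mtu(port_id, mtu);
     }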

* cryptodev: in 18.08 a new structure ``rte_crypto_asym_op`` was introduced
  and included in ``rte_crypto_op``. As the ``rte_crypto_asym_op`` structure
  was defined as cache-line aligned, this caused unintended changes in the
  ``rte_crypto_op`` structure layout and alignment. The cache-line alignment
  of ``rte_crypto_asym_op`` has been removed to restore the expected
  ``rte_crypto_op`` layout and alignment.

* timer: ``rte_timer_subsystem_init`` now returns success or failure to
  reflect whether it was able to allocate memory.


Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
     librte_bbdev.so.1
     librte_bitratestats.so.2
     librte_bpf.so.1
     librte_bus_dpaa.so.2
     librte_bus_fslmc.so.2
     librte_bus_ifpga.so.2
     librte_bus_pci.so.2
     librte_bus_vdev.so.2
     librte_bus_vmbus.so.2
     librte_cfgfile.so.2
     librte_cmdline.so.2
     librte_compressdev.so.1
   + librte_cryptodev.so.7
     librte_distributor.so.1
   + librte_eal.so.10
     librte_efd.so.1
   + librte_ethdev.so.12
     librte_eventdev.so.6
     librte_flow_classify.so.1
     librte_gro.so.1
     librte_gso.so.1
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_ipsec.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_latencystats.so.1
     librte_lpm.so.2
     librte_mbuf.so.5
     librte_member.so.1
     librte_mempool.so.5
     librte_meter.so.3
     librte_metrics.so.1
     librte_net.so.1
     librte_pci.so.1
     librte_pdump.so.3
     librte_pipeline.so.3
     librte_pmd_bnxt.so.2
     librte_pmd_bond.so.2
     librte_pmd_i40e.so.2
     librte_pmd_ixgbe.so.2
     librte_pmd_dpaa2_qdma.so.1
     librte_pmd_ring.so.2
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2
     librte_port.so.3
     librte_power.so.1
     librte_rawdev.so.1
   + librte_rcu.so.1
     librte_reorder.so.1
     librte_ring.so.2
     librte_sched.so.2
     librte_security.so.2
   + librte_stack.so.1
     librte_table.so.3
     librte_timer.so.1
     librte_vhost.so.4


Known Issues
------------

* **On x86 platforms, AVX512 support is disabled with binutils 2.31.**

  Due to a defect in binutils 2.31, AVX512 support is disabled.

  * DPDK defect: https://bugs.dpdk.org/show_bug.cgi?id=249
  * GCC defect: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=90028

* **No software AES-XTS implementation.**

  There are currently no cryptodev software PMDs available which implement
  support for the AES-XTS algorithm, so this feature can only be used
  if compatible hardware and an associated PMD are available.


Tested Platforms
----------------

* Intel(R) platforms with Intel(R) NICs combinations

  * CPU:

    * Intel(R) Atom(TM) CPU C3758 @ 2.20GHz
    * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
    * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
    * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
    * Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz

  * OS:

    * CentOS 7.4
    * CentOS 7.5
    * Fedora 25
    * Fedora 28
    * Fedora 29
    * FreeBSD 12.0
    * Red Hat Enterprise Linux Server release 7.4
    * Red Hat Enterprise Linux Server release 7.5
    * Red Hat Enterprise Linux Server release 7.6
    * SUSE 12 SP3
    * openSUSE 15
    * Wind River Linux 8
    * Ubuntu 14.04
    * Ubuntu 16.04
    * Ubuntu 16.10
    * Ubuntu 18.04
    * Ubuntu 18.10

  * NICs:

    * Intel(R) 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 5.2.3 (ixgbe)

    * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800003e7
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 4.4.6 (ixgbe)

    * Intel Corporation Ethernet Controller 10G X550T

      * Firmware version: 0x80000482
      * Device id (pf): 8086:1563
      * Driver version: 5.1.0-k (ixgbe)

    * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 6.80 0x80003cc1
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.7.29 (i40e)

    * Intel(R) Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)

      * Firmware version: 3.33 0x80000fd5 0.0.0
      * Device id (pf/vf): 8086:37d0 / 8086:37cd
      * Driver version: 2.7.29 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

      * Firmware version: 6.80 0x80003d05
      * Device id (pf/vf): 8086:158b / 8086:154c
      * Driver version: 2.7.29 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

      * Firmware version: 6.80 0x80003cfb
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 2.7.29 (i40e)

    * Intel(R) Corporation I350 Gigabit Network Connection

      * Firmware version: 1.63, 0x80000dda
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.4.0-k (igb)

    * Intel Corporation I210 Gigabit Network Connection

      * Firmware version: 3.25, 0x800006eb, 1.1824.0
      * Device id (pf): 8086:1533
      * Driver version: 5.4.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

  * CPU:

    * Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
    * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
    * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.6 (Maipo)
    * Red Hat Enterprise Linux Server release 7.5 (Maipo)
    * Red Hat Enterprise Linux Server release 7.4 (Maipo)
    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)
    * Ubuntu 19.04
    * Ubuntu 18.10
    * Ubuntu 18.04
    * Ubuntu 16.04
    * SUSE Linux Enterprise Server 15

  * MLNX_OFED: 4.5-1.0.1.0
  * MLNX_OFED: 4.6-1.0.1.1

  * NICs:

    * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 100G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.25.1020 and above

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.25.1020 and above

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.25.1020 and above

    * Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.25.1020 and above

* Arm platforms with Mellanox(R) NICs combinations

  * CPU:

    * Qualcomm Arm 1.1 2500MHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 (Maipo)

  * NICs:

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.24.0220

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.24.0220

* Mellanox(R) BlueField SmartNIC

  * Mellanox(R) BlueField SmartNIC MT416842 (2x25G)

    * Host interface: PCI Express 3.0 x16
    * Device ID: 15b3:a2d2
    * Firmware version: 18.25.1010

  * SoC Arm cores running OS:

    * CentOS Linux release 7.4.1708 (AltArch)
    * MLNX_OFED 4.6-1.0.0.0

  * DPDK application running on Arm cores inside SmartNIC

* IBM Power 9 platforms with Mellanox(R) NICs combinations

  * CPU:

    * POWER9 2.2 (pvr 004e 1202) 2300MHz

  * OS:

    * Ubuntu 18.04.1 LTS (Bionic Beaver)

  * NICs:

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.24.1000

  * OFED:

    * MLNX_OFED_LINUX-4.6-1.0.1.0