.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2018 The DPDK contributors

DPDK Release 18.02
==================

New Features
------------

* **Added function to allow releasing internal EAL resources on exit.**

  During ``rte_eal_init()`` EAL allocates memory from hugepages to enable its
  core libraries to perform their tasks. The ``rte_eal_cleanup()`` function
  releases these resources, ensuring that no hugepage memory is leaked. It is
  expected that all DPDK applications call ``rte_eal_cleanup()`` before
  exiting. Not calling this function could result in leaking hugepages, leading
  to failure during initialization of secondary processes. (A usage sketch
  appears at the end of this section.)

* **Added igb, ixgbe and i40e ethernet drivers to support RSS with flow API.**

  Added support for the igb, ixgbe and i40e NICs to accept the existing RSS
  configuration through the ``rte_flow`` API (a usage sketch appears at the
  end of this section).

  Also enabled queue region configuration using the ``rte_flow`` API for i40e.

* **Updated i40e driver to support PPPoE/PPPoL2TP.**

  Updated the i40e PMD to support PPPoE/PPPoL2TP, using PPPoE/PPPoL2TP-capable
  profiles which can be programmed through the dynamic device personalization
  (DDP) process.

* **Added MAC loopback support for i40e.**

  Added MAC loopback support for i40e in order to support the test tasks
  requested by users. It sets up a ``Tx -> Rx`` loopback link according to the
  device configuration.

* **Added support for run-time determination of the number of queues per i40e VF.**

  The number of queues per VF is determined by its host PF. If the PCI address
  of an i40e PF is ``aaaa:bb.cc``, the number of queues per VF can be
  configured with an EAL parameter such as ``-w aaaa:bb.cc,queue-num-per-vf=n``.
  The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
  number of queues per VF defaults to 4.

* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Enabled compilation as a plugin, removing the mandatory dependency on
    rdma-core. With this compilation mode the rdma-core libraries are loaded
    only when a Mellanox device is in use, so the PMD can be enabled in binary
    packages without requiring every end user to install rdma-core.
  * Improved multi-segment packet performance.
  * Changed the driver name to use the PCI address, to be compatible with
    OVS-DPDK APIs.
  * Extended statistics for physical port packet/byte counters.
  * Converted to the new offloads API.
  * Added support for the device removal check operation.

* **Updated mlx4 driver.**

  Updated the mlx4 driver including the following changes:

  * Enabled compilation as a plugin, removing the mandatory dependency on
    rdma-core. With this compilation mode the rdma-core libraries are loaded
    only when a Mellanox device is in use, so the PMD can be enabled in binary
    packages without requiring every end user to install rdma-core.
  * Improved data path performance.
  * Converted to the new offloads API.
  * Added support for the device removal check operation.

* **Added NVGRE and UDP tunnels support in Solarflare network PMD.**

  Added support for NVGRE, VXLAN and GENEVE tunnels:

  * Added support for UDP tunnel ports configuration.
  * Added classification of tunneled packets.
  * Added inner checksum offload.

* **Added AVF (Adaptive Virtual Function) net PMD.**

  Added a new net PMD called AVF (Adaptive Virtual Function), which supports
  Intel® Ethernet Adaptive Virtual Function (AVF) with features such as:

  * Basic Rx/Tx burst
  * SSE vectorized Rx/Tx burst
  * Promiscuous mode
  * MAC/VLAN offload
  * Checksum offload
  * TSO offload
  * Jumbo frame and MTU setting
  * RSS configuration
  * Statistics
  * Rx/Tx descriptor status
  * Link status update/event

* **Added feature support for live migration from vhost-net to vhost-user.**

  Added support in vhost-user for the features needed to make live migration
  from vhost-net to vhost-user possible:

  * ``VIRTIO_F_ANY_LAYOUT``
  * ``VIRTIO_F_EVENT_IDX``
  * ``VIRTIO_NET_F_GUEST_ECN``, ``VIRTIO_NET_F_HOST_ECN``
  * ``VIRTIO_NET_F_GUEST_UFO``, ``VIRTIO_NET_F_HOST_UFO``
  * ``VIRTIO_NET_F_GSO``

  Also added ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature support in the virtio PMD.
  In a scenario where the vhost backend doesn't have the ability to generate
  RARP packets, a VM running the virtio PMD can still be live migrated if the
  ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature is negotiated.

* **Updated the AESNI-MB PMD.**

  The AESNI-MB PMD has been updated with additional support for:

  * The AES-CCM algorithm.

* **Updated the DPAA_SEC crypto driver to support rte_security.**

  Updated the ``dpaa_sec`` crypto PMD to support ``rte_security`` lookaside
  protocol offload for IPsec.

* **Added Wireless Base Band Device (bbdev) abstraction.**

  The Wireless Baseband Device library is an acceleration abstraction
  framework for 3GPP Layer 1 processing functions. It provides a common
  programming interface for seamless operation on integrated or discrete
  hardware accelerators, or on optimized software libraries for signal
  processing.

  The current release only supports 3GPP CRC, Turbo Coding and Rate
  Matching operations, as specified in 3GPP TS 36.212.

  See the :doc:`../prog_guide/bbdev` programmer's guide for more details.

* **Added the new eventdev Ordered Packet Distribution Library (OPDL) PMD.**

  The OPDL (Ordered Packet Distribution Library) eventdev is a specific
  implementation of the eventdev API. It is particularly suited to packet
  processing workloads that have high throughput and low latency requirements.
  All packets follow the same path through the device, and the order in which
  they do so is determined by the order in which queues are set up. Events are
  left on the ring until they are transmitted, so packets do not go out of
  order.

  With this change, applications can use the OPDL PMD via the eventdev API.

* **Added a new pipeline use case for the dpdk-test-eventdev application.**

  Added a new "pipeline" use case for the ``dpdk-test-eventdev`` application.
  The pipeline case can be used to simulate various stages of a real world
  application, from packet receive to transmit, while maintaining packet
  ordering. It can also be used to measure the performance of the event device
  across the stages of the pipeline.

  The pipeline use case has been made generic to work with all event devices
  based on their capabilities.

* **Updated the eventdev sample application to support event devices based on capability.**

  Updated the eventdev pipeline sample application to support various types of
  pipelines based on the capabilities of the attached event and ethernet
  devices. Also renamed the application from the software-PMD-specific
  ``eventdev_pipeline_sw_pmd`` to the more generic ``eventdev_pipeline``.

* **Added Rawdev, a generic device support library.**

  The Rawdev library provides support for integrating any generic device type
  with the DPDK framework. Generic devices are those which do not have a
  pre-defined type within DPDK, for example, ethernet, crypto, event etc.

  A set of northbound APIs has been defined which encompasses a generic set of
  operations, allowing applications to interact with devices using opaque
  structures/buffers. The southbound APIs provide a means of integrating
  devices either as part of a physical bus (PCI, FSLMC, etc.) or through
  ``vdev``.

  See the :doc:`../prog_guide/rawdev` programmer's guide for more details.

* **Added a new multi-process communication channel.**

  Added a generic channel in EAL for multi-process (primary/secondary)
  communication. Consumers of this channel register an action, identified by
  an action name, to respond to received messages; the actions are executed in
  the context of a dedicated thread created for this channel. The new APIs are
  listed below (a usage sketch appears at the end of this section):

  * ``rte_mp_register`` and ``rte_mp_unregister`` are for action
    (un)registration.
  * ``rte_mp_sendmsg`` is for sending a message without blocking for a
    response.
  * ``rte_mp_request`` is for sending a request message and will block until
    it gets a reply message, which is sent from the peer by ``rte_mp_reply``.

* **Added GRO support for VxLAN-tunneled packets.**

  Added GRO support for VxLAN-tunneled packets. Supported VxLAN packets must
  contain an outer IPv4 header and inner TCP/IPv4 headers. VxLAN GRO doesn't
  check if input packets have correct checksums and doesn't update checksums
  for output packets. Additionally, it assumes the packets are complete (i.e.,
  ``MF==0 && frag_off==0``) when IP fragmentation is possible (i.e.,
  ``DF==0``). A usage sketch appears at the end of this section.

* **Increased default Rx and Tx ring size in sample applications.**

  Increased the default ``RX_RING_SIZE`` and ``TX_RING_SIZE`` to 1024 entries
  in testpmd and the sample applications to give better performance in the
  general case. Users should experiment with various Rx and Tx ring sizes for
  their specific application to get the best performance.

* **Added a new DPDK build system using the tools "meson" and "ninja" [EXPERIMENTAL].**

  Added support for building DPDK using ``meson`` and ``ninja``, which gives
  additional features, such as automatic build-time configuration, over the
  current build system using ``make``. For instructions on how to do a DPDK
  build using the new system, see ``doc/build-sdk-meson.txt``.

  .. note::

     This new build system support is incomplete at this point and is added as
     experimental in this release. The existing build system using ``make`` is
     unaffected by these changes, and can continue to be used for this and
     subsequent releases until its deprecation is announced.
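
To illustrate the new ``rte_eal_cleanup()`` call described above, the
following minimal sketch shows an application releasing EAL resources on
exit. It assumes nothing beyond ``rte_eal_init()`` and ``rte_eal_cleanup()``;
the application body is elided.

.. code-block:: c

   #include <stdio.h>
   #include <rte_eal.h>

   int
   main(int argc, char **argv)
   {
       /* rte_eal_init() reserves hugepage memory and other EAL resources. */
       if (rte_eal_init(argc, argv) < 0) {
           fprintf(stderr, "EAL initialization failed\n");
           return 1;
       }

       /* ... application work ... */

       /* Release hugepages and other internal EAL resources before exit. */
       rte_eal_cleanup();
       return 0;
   }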
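
The RSS support in the ``rte_flow`` API can be exercised along the following
lines. This is a minimal sketch only: it assumes the RSS action layout of this
release (a pointer to ``struct rte_eth_rss_conf`` plus a flexible queue list),
the helper name and the queue/hash choices are illustrative, and individual
PMDs may require a more specific match pattern.

.. code-block:: c

   #include <stdlib.h>
   #include <rte_ethdev.h>
   #include <rte_flow.h>

   /* Illustrative helper: spread all ingress traffic on "port_id" over
    * two queues using the rte_flow RSS action. */
   static struct rte_flow *
   setup_rss_flow(uint16_t port_id)
   {
       static const struct rte_eth_rss_conf rss_conf = {
           .rss_key = NULL,        /* keep the device's default RSS key */
           .rss_hf = ETH_RSS_IP,   /* hash on IP fields */
       };
       struct rte_flow_attr attr = { .ingress = 1 };
       struct rte_flow_item pattern[] = {
           { .type = RTE_FLOW_ITEM_TYPE_ETH },
           { .type = RTE_FLOW_ITEM_TYPE_END },
       };
       /* The RSS action carries a flexible queue[] array in this release. */
       struct rte_flow_action_rss *rss =
           malloc(sizeof(*rss) + 2 * sizeof(uint16_t));
       struct rte_flow_action actions[] = {
           { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = rss },
           { .type = RTE_FLOW_ACTION_TYPE_END },
       };
       struct rte_flow_error err;
       struct rte_flow *flow;

       if (rss == NULL)
           return NULL;
       rss->rss_conf = &rss_conf;
       rss->num = 2;
       rss->queue[0] = 0;
       rss->queue[1] = 1;

       flow = rte_flow_create(port_id, &attr, pattern, actions, &err);
       free(rss);    /* PMDs copy what they need at rule creation time */
       return flow;
   }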
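
The multi-process communication channel described above can be used roughly as
in the sketch below, assuming the callback and message layout declared in
``rte_eal.h`` for this release. The action name ``"example_ping"`` and the
handlers are purely illustrative, and error handling is minimal.

.. code-block:: c

   #include <stdio.h>
   #include <string.h>
   #include <rte_eal.h>

   /* Runs in the dedicated IPC thread whenever a message named
    * "example_ping" arrives; it sends an empty reply back to the peer. */
   static int
   ping_handler(const struct rte_mp_msg *msg, const void *peer)
   {
       struct rte_mp_msg reply;

       memset(&reply, 0, sizeof(reply));
       snprintf(reply.name, sizeof(reply.name), "%s", msg->name);
       return rte_mp_reply(&reply, peer);
   }

   /* Called once after rte_eal_init() in every process that wants to
    * receive "example_ping" messages. */
   static int
   register_ping(void)
   {
       return rte_mp_register("example_ping", ping_handler);
   }

   /* Fire-and-forget notification to the peer process(es). */
   static int
   send_ping(void)
   {
       struct rte_mp_msg msg;

       memset(&msg, 0, sizeof(msg));
       snprintf(msg.name, sizeof(msg.name), "%s", "example_ping");
       return rte_mp_sendmsg(&msg);
   }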
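
For the VxLAN GRO support, a lightweight-mode sketch could look as follows.
The flow and item limits are illustrative values, and the mbufs are assumed to
come straight from ``rte_eth_rx_burst()``.

.. code-block:: c

   #include <rte_gro.h>
   #include <rte_mbuf.h>

   /* Merge VxLAN-tunneled (and plain) TCP/IPv4 packets within one burst.
    * Returns the number of packets left in pkts[] after merging. */
   static uint16_t
   gro_vxlan_burst(struct rte_mbuf **pkts, uint16_t nb_pkts)
   {
       struct rte_gro_param param = {
           .gro_types = RTE_GRO_IPV4_VXLAN_TCP_IPV4 | RTE_GRO_TCP_IPV4,
           .max_flow_num = 64,        /* illustrative limits */
           .max_item_per_flow = 32,
           /* .socket_id is only used by the heavyweight (context) mode */
       };

       return rte_gro_reassemble_burst(pkts, nb_pkts, &param);
   }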


Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
   + librte_bbdev.so.1
     librte_bitratestats.so.2
     librte_bus_dpaa.so.1
     librte_bus_fslmc.so.1
     librte_bus_pci.so.1
     librte_bus_vdev.so.1
     librte_cfgfile.so.2
     librte_cmdline.so.2
     librte_cryptodev.so.4
     librte_distributor.so.1
     librte_eal.so.6
     librte_ethdev.so.8
     librte_eventdev.so.3
     librte_flow_classify.so.1
     librte_gro.so.1
     librte_gso.so.1
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_latencystats.so.1
     librte_lpm.so.2
     librte_mbuf.so.3
     librte_mempool.so.3
     librte_meter.so.1
     librte_metrics.so.1
     librte_net.so.1
     librte_pci.so.1
     librte_pdump.so.2
     librte_pipeline.so.3
     librte_pmd_bnxt.so.2
     librte_pmd_bond.so.2
     librte_pmd_i40e.so.2
     librte_pmd_ixgbe.so.2
     librte_pmd_ring.so.2
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2
     librte_port.so.3
     librte_power.so.1
   + librte_rawdev.so.1
     librte_reorder.so.1
     librte_ring.so.1
     librte_sched.so.1
     librte_security.so.1
     librte_table.so.3
     librte_timer.so.1
     librte_vhost.so.3


Tested Platforms
----------------

* Intel(R) platforms with Intel(R) NICs combinations

   * CPU

      * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
      * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
      * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
      * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
      * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
      * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
      * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
      * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
      * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
      * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz

   * OS:

      * CentOS 7.2
      * Fedora 25
      * Fedora 26
      * Fedora 27
      * FreeBSD 11
      * Red Hat Enterprise Linux Server release 7.3
      * SUSE Enterprise Linux 12
      * Wind River Linux 8
      * Ubuntu 14.04
      * Ubuntu 16.04
      * Ubuntu 16.10
      * Ubuntu 17.10

   * NICs:

      * Intel(R) 82599ES 10 Gigabit Ethernet Controller

         * Firmware version: 0x61bf0001
         * Device id (pf/vf): 8086:10fb / 8086:10ed
         * Driver version: 5.2.3 (ixgbe)

      * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

         * Firmware version: 0x800003e7
         * Device id (pf/vf): 8086:15ad / 8086:15a8
         * Driver version: 4.4.6 (ixgbe)

      * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

         * Firmware version: 6.01 0x80003221
         * Device id (pf/vf): 8086:1572 / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel Corporation Ethernet Connection X722 for 10GBASE-T

         * Firmware version: 6.01 0x80003221
         * Device id (pf/vf): 8086:37d2 / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

         * Firmware version: 6.01 0x80003221
         * Device id (pf/vf): 8086:158b / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

         * Firmware version: 6.01 0x8000321c
         * Device id (pf/vf): 8086:1583 / 8086:154c
         * Driver version: 2.4.3 (i40e)

      * Intel(R) Corporation I350 Gigabit Network Connection

         * Firmware version: 1.63, 0x80000dda
         * Device id (pf/vf): 8086:1521 / 8086:1520
         * Driver version: 5.3.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

   * CPU:

      * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
      * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
      * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
      * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
      * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
      * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

   * OS:

      * Red Hat Enterprise Linux Server release 7.5 Beta (Maipo)
      * Red Hat Enterprise Linux Server release 7.4 (Maipo)
      * Red Hat Enterprise Linux Server release 7.3 (Maipo)
      * Red Hat Enterprise Linux Server release 7.2 (Maipo)
      * Ubuntu 17.10
      * Ubuntu 16.10
      * Ubuntu 16.04

   * MLNX_OFED: 4.2-1.0.0.0
   * MLNX_OFED: 4.3-0.1.6.0

   * NICs:

      * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1007
         * Firmware version: 2.42.5000

      * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1013
         * Firmware version: 12.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1015
         * Firmware version: 14.21.1000 and above

      * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1015
         * Firmware version: 14.21.1000 and above

      * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1017
         * Firmware version: 16.21.1000 and above

      * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

         * Host interface: PCI Express 4.0 x16
         * Device ID: 15b3:1019
         * Firmware version: 16.21.1000 and above

* ARM platforms with Mellanox(R) NICs combinations

   * CPU:

      * Qualcomm ARM 1.1 2500MHz

   * OS:

      * Ubuntu 16.04

   * MLNX_OFED: 4.2-1.0.0.0

   * NICs:

      * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

         * Host interface: PCI Express 3.0 x8
         * Device ID: 15b3:1015
         * Firmware version: 14.21.1000

      * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

         * Host interface: PCI Express 3.0 x16
         * Device ID: 15b3:1017
         * Firmware version: 16.21.1000