..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2016 The DPDK contributors

DPDK Release 16.11
==================

New Features
------------

* **Added software parser for packet type.**

  * Added a new function ``rte_pktmbuf_read()`` to read the packet data from an
    mbuf chain, linearizing if required.
  * Added a new function ``rte_net_get_ptype()`` to parse an Ethernet packet
    in an mbuf chain and retrieve its packet type in software.
  * Added new functions ``rte_get_ptype_*()`` to dump a packet type as a string.

  A short usage sketch is shown at the end of this section.

* **Improved offloads support in mbuf.**

  * Added a new function ``rte_raw_cksum_mbuf()`` to compute the checksum of
    data embedded in an mbuf chain.
  * Added new Rx checksum flags in mbufs to describe more states: unknown,
    good, bad, or not present (useful for virtual drivers). This modification
    was done for IP and L4.
  * Added a new Rx LRO mbuf flag, used when packets are coalesced. This
    flag indicates that the segment size of the original packets is known.

* **Added vhost-user dequeue zero copy support.**

  The copy in the dequeue path is avoided in order to improve performance.
  The gain is most significant in the VM2VM case: the bigger the packet size,
  the bigger the performance boost. For the VM2NIC case, however, there are
  some limitations, so the boost is not as impressive as in the VM2VM case,
  and performance may even drop quite a bit for small packets.

  For that reason, this feature is disabled by default. It can be enabled by
  setting the ``RTE_VHOST_USER_DEQUEUE_ZERO_COPY`` flag at vhost device
  registration. Check the Vhost section of the Programmer's Guide for more
  information.

* **Added vhost-user indirect descriptors support.**

  If the indirect descriptor feature is enabled, each packet sent by the guest
  will take exactly one slot in the enqueue virtqueue. Without this feature, as
  in previous releases, even 64 byte packets take two slots with the Virtio PMD
  on the guest side.

  The main impact is better performance for 0% packet loss use cases, as it
  behaves as if the virtqueue size were enlarged, so more packets can be
  buffered in the case of system perturbations. On the downside, small
  performance degradations were measured when running micro-benchmarks.

* **Added vhost PMD xstats.**

  Added extended statistics to the vhost PMD on a per-port basis.

* **Supported offloads with virtio.**

  Added support for the following offloads in virtio:

  * Rx/Tx checksums.
  * LRO.
  * TSO.

* **Added virtio NEON support for ARM.**

  Added NEON support to the virtio PMD for ARM platforms.

* **Updated the ixgbe base driver.**

  Updated the ixgbe base driver, including the following changes:

  * Added X550em_a 10G PHY support.
  * Added support for flow control auto negotiation for X550em_a 1G PHY.
  * Added X550em_a FW ALEF support.
  * Increased mailbox version to ``ixgbe_mbox_api_13``.
  * Added two MAC operations for Hyper-V support.

* **Added APIs for VF management to the ixgbe PMD.**

  Eight new APIs have been added to the ixgbe PMD for VF management from the PF.
  The declarations for the APIs can be found in ``rte_pmd_ixgbe.h``.

* **Updated the enic driver.**

  * Switched to using an interrupt for link status checking instead of polling.
  * Added more flow director modes on UCS Blade with firmware version >= 2.0(13e).
  * Added full support for MTU update.
  * Added support for the ``rte_eth_rx_queue_count`` function.

* **Updated the mlx5 driver.**

  * Added support for RSS hash results.
  * Added several performance improvements.
  * Added several bug fixes.

* **Updated the QAT PMD.**

  The QAT PMD was updated with additional support for:

  * MD5_HMAC algorithm.
  * SHA224-HMAC algorithm.
  * SHA384-HMAC algorithm.
  * GMAC algorithm.
  * KASUMI (F8 and F9) algorithm.
  * 3DES algorithm.
  * NULL algorithm.
  * C3XXX device.
  * C62XX device.

* **Added openssl PMD.**

  A new crypto PMD has been added, which provides several ciphering and hashing
  algorithms. All cryptography operations use the OpenSSL library crypto API.

* **Updated the IPsec example.**

  Updated the IPsec example with the following support:

  * Configuration file support.
  * AES CBC IV generation with the cipher forward function.
  * AES GCM/CTR mode.

* **Added support for new gcc -march option.**

  The GCC 4.9 ``-march`` option supports Intel processor code names.
  The config option ``RTE_MACHINE`` can be used to pass code names to the
  compiler via the ``-march`` flag.

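As an illustration of the software packet type parser listed at the top of this
section, below is a minimal sketch (not part of the release itself) of how an
application might classify a received mbuf when its PMD does not set
``m->packet_type``. The function and mask names are assumed to match the 16.11
``rte_net.h`` and ``rte_mbuf_ptype.h`` headers; the helper
``classify_in_software()`` is purely illustrative.

.. code-block:: c

   #include <stdio.h>

   #include <rte_mbuf.h>
   #include <rte_mbuf_ptype.h>
   #include <rte_net.h>

   /* Classify one received mbuf in software when the PMD has not filled in
    * m->packet_type itself. */
   static void
   classify_in_software(struct rte_mbuf *m)
   {
       struct rte_net_hdr_lens hdr_lens;
       char name[128];
       uint32_t ptype;

       /* rte_net_get_ptype() parses the L2/L3/L4 headers; it uses
        * rte_pktmbuf_read() internally, so segmented mbuf chains are
        * handled transparently. */
       ptype = rte_net_get_ptype(m, &hdr_lens,
                                 RTE_PTYPE_L2_MASK | RTE_PTYPE_L3_MASK |
                                 RTE_PTYPE_L4_MASK);
       m->packet_type = ptype;

       /* Dump the packet type as a string with one of the new helpers. */
       rte_get_ptype_name(ptype, name, sizeof(name));
       printf("ptype: %s (L3 header length: %u)\n",
              name, (unsigned int)hdr_lens.l3_len);
   }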


Resolved Issues
---------------

Drivers
~~~~~~~

* **enic: Fixed several flow director issues.**

* **enic: Fixed inadvertent setting of L4 checksum ptype on ICMP packets.**

* **enic: Fixed high driver overhead when servicing Rx queues beyond the first.**


Known Issues
------------

* **L3fwd-power app does not work properly when Rx vector is enabled.**

  The L3fwd-power app doesn't work properly with some drivers in vector mode
  since the queue monitoring works differently between scalar and vector modes,
  leading to incorrect frequency scaling. In addition, the L3fwd-power
  application requires the mbuf to have the correct packet type set, but in
  some drivers the vector mode must be disabled for this.

  Therefore, in order to use L3fwd-power, vector mode should be disabled
  via the config file.

* **Digest address must be supplied for crypto auth operation on QAT PMD.**

  The cryptodev API specifies that if the ``rte_crypto_sym_op.digest.data``
  field, and by inference the ``digest.phys_addr`` field which points to the
  same location, is not set for an auth operation, the driver is to understand
  that the digest result is located immediately following the region over which
  the digest is computed. The QAT PMD doesn't correctly handle this case and
  reads and writes to an incorrect location.

  Callers can work around this by always supplying the digest virtual and
  physical address fields in the ``rte_crypto_sym_op`` for an auth operation,
  as in the sketch below.

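A minimal sketch of that workaround follows, assuming the 16.11 symmetric
crypto operation layout (``op->sym->auth.data`` and ``op->sym->auth.digest``)
and a caller-supplied digest length; the helper name is illustrative only.

.. code-block:: c

   #include <rte_crypto.h>
   #include <rte_mbuf.h>

   /* Explicitly point the digest at the bytes that immediately follow the
    * authenticated region instead of leaving digest.data/phys_addr unset.
    * 'op' is an already prepared symmetric crypto op and 'digest_len' matches
    * the configured auth transform (both assumed to come from the caller).
    * Assumes the digest location lies within the first segment of m_src. */
   static void
   set_digest_explicitly(struct rte_crypto_op *op, uint16_t digest_len)
   {
       struct rte_mbuf *m = op->sym->m_src;
       uint32_t auth_end = op->sym->auth.data.offset +
                           op->sym->auth.data.length;

       op->sym->auth.digest.data =
           rte_pktmbuf_mtod_offset(m, uint8_t *, auth_end);
       op->sym->auth.digest.phys_addr =
           rte_pktmbuf_mtophys_offset(m, auth_end);
       op->sym->auth.digest.length = digest_len;
   }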


API Changes
-----------

* The driver naming convention has been changed to make driver names more
  consistent. It especially impacts ``--vdev`` arguments. For example
  ``eth_pcap`` becomes ``net_pcap`` and ``cryptodev_aesni_mb_pmd`` becomes
  ``crypto_aesni_mb``.

  For backward compatibility an alias feature has been enabled to support the
  original names.

* The log history has been removed.

* The ``rte_ivshmem`` feature (including library and EAL code) has been removed
  in 16.11 because it had some design issues which were not planned to be fixed.

* The ``file_name`` data type of ``struct rte_port_source_params`` and
  ``struct rte_port_sink_params`` is changed from ``char *`` to ``const char *``.

* **Improved device/driver hierarchy and generalized hotplugging.**

  The device and driver relationship has been restructured by introducing
  generic classes. This paves the way for having PCI, VDEV and other device
  types as instantiated objects rather than classes in themselves. Hotplugging
  has also been generalized into EAL so that Ethernet or crypto devices can use
  the common infrastructure.

  * Removed ``pmd_type`` as a way of segregating devices.
  * Moved ``numa_node`` and ``devargs`` into ``rte_driver`` from
    ``rte_pci_driver``. These can now be used by any instantiated object of
    ``rte_driver``.
  * Added an ``rte_device`` class; all PCI and VDEV devices inherit from it.
  * Renamed devinit/devuninit handlers to probe/remove to make them more
    semantically correct with respect to the device <=> driver relationship.
  * Moved hotplugging support to EAL. Hereafter, PCI and vdev can use the
    APIs ``rte_eal_dev_attach`` and ``rte_eal_dev_detach``.
  * Renamed helpers and support macros to make them more synonymous
    with their device types
    (e.g. ``PMD_REGISTER_DRIVER`` => ``RTE_PMD_REGISTER_PCI``).
  * Device naming functions have been generalized from ethdev and cryptodev
    to EAL. ``rte_eal_pci_device_name`` has been introduced for obtaining a
    unique device name from a PCI Domain-BDF description.
  * Virtual device registration APIs have been added: ``rte_eal_vdrv_register``
    and ``rte_eal_vdrv_unregister``.


Shared Library Versions
-----------------------

The libraries prefixed with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
     librte_cfgfile.so.2
     librte_cmdline.so.2
   + librte_cryptodev.so.2
     librte_distributor.so.1
   + librte_eal.so.3
   + librte_ethdev.so.5
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_lpm.so.2
     librte_mbuf.so.2
     librte_mempool.so.2
     librte_meter.so.1
     librte_net.so.1
     librte_pdump.so.1
     librte_pipeline.so.3
     librte_pmd_bond.so.1
     librte_pmd_ring.so.2
     librte_port.so.3
     librte_power.so.1
     librte_reorder.so.1
     librte_ring.so.1
     librte_sched.so.1
     librte_table.so.2
     librte_timer.so.1
     librte_vhost.so.3


Tested Platforms
----------------

#. SuperMicro 1U

   - BIOS: 1.0c
   - Processor: Intel(R) Atom(TM) CPU C2758 @ 2.40GHz

#. SuperMicro 1U

   - BIOS: 1.0a
   - Processor: Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
   - Onboard NIC: Intel(R) X552/X557-AT (2x10G)

     - Firmware version: 0x800001cf
     - Device ID (PF/VF): 8086:15ad / 8086:15a8

   - Kernel driver version: 4.2.5 (ixgbe)

#. SuperMicro 2U

   - BIOS: 1.0a
   - Processor: Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz

#. Intel(R) Server board S2600GZ

   - BIOS: SE5C600.86B.02.02.0002.122320131210
   - Processor: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

#. Intel(R) Server board W2600CR

   - BIOS: SE5C600.86B.02.01.0002.082220131453
   - Processor: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz

#. Intel(R) Server board S2600CWT

   - BIOS: SE5C610.86B.01.01.0009.060120151350
   - Processor: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz

#. Intel(R) Server board S2600WTT

   - BIOS: SE5C610.86B.01.01.0005.101720141054
   - Processor: Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz

#. Intel(R) Server board S2600WTT

   - BIOS: SE5C610.86B.11.01.0044.090120151156
   - Processor: Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz

#. Intel(R) Server board S2600WTT

   - Processor: Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz

#. Intel(R) Server

   - Processor: Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz

#. IBM(R) Power8(R)

   - Machine type-model: 8247-22L
   - Firmware: FW810.21 (SV810_108)
   - Processor: POWER8E (raw), AltiVec supported


Tested NICs
-----------

#. Intel(R) Ethernet Controller X540-AT2

   - Firmware version: 0x80000389
   - Device ID (PF): 8086:1528
   - Driver version: 3.23.2 (ixgbe)

#. Intel(R) 82599ES 10 Gigabit Ethernet Controller

   - Firmware version: 0x61bf0001
   - Device ID (PF/VF): 8086:10fb / 8086:10ed
   - Driver version: 4.0.1-k (ixgbe)

#. Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

   - Firmware version: 0x800001cf
   - Device ID (PF/VF): 8086:15ad / 8086:15a8
   - Driver version: 4.2.5 (ixgbe)

#. Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

   - Firmware version: 5.05
   - Device ID (PF/VF): 8086:1572 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)

   - Firmware version: 5.05
   - Device ID (PF/VF): 8086:1572 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Ethernet Converged Network Adapter XL710-QDA1 (1x40G)

   - Firmware version: 5.05
   - Device ID (PF/VF): 8086:1584 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2x40G)

   - Firmware version: 5.05
   - Device ID (PF/VF): 8086:1583 / 8086:154c
   - Driver version: 1.5.23 (i40e)

#. Intel(R) Corporation I350 Gigabit Network Connection

   - Firmware version: 1.48, 0x800006e7
   - Device ID (PF/VF): 8086:1521 / 8086:1520
   - Driver version: 5.2.13-k (igb)

#. Intel(R) Ethernet Multi-host Controller FM10000

   - Firmware version: N/A
   - Device ID (PF): 8086:15d0
   - Driver version: 0.17.0.9 (fm10k)

#. Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 100G MCX415A-CCAT (1x100G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

   * Host interface: PCI Express 3.0 x16
   * Device ID: 15b3:1013
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 12.17.1010

#. Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1015
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 14.17.1010

#. Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

   * Host interface: PCI Express 3.0 x8
   * Device ID: 15b3:1015
   * MLNX_OFED: 3.4-1.0.0.0
   * Firmware version: 14.17.1010


Tested OSes
-----------

* CentOS 7.2
* Fedora 23
* Fedora 24
* FreeBSD 10.3
* FreeBSD 11
* Red Hat Enterprise Linux Server release 6.7 (Santiago)
* Red Hat Enterprise Linux Server release 7.0 (Maipo)
* Red Hat Enterprise Linux Server release 7.2 (Maipo)
* SUSE Enterprise Linux 12
* Wind River Linux 6.0.0.26
* Wind River Linux 8
* Ubuntu 14.04
* Ubuntu 15.04
* Ubuntu 16.04