..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2018 The DPDK contributors

DPDK Release 18.05
==================

.. **Read this first.**

   The text in the sections below explains how to update the release notes.

   Use proper spelling, capitalization and punctuation in all sections.

   Variable and config names should be quoted as fixed width text:
   ``LIKE_THIS``.

   Build the docs and view the output file to ensure the changes are correct::

      make doc-guides-html

      xdg-open build/doc/html/guides/rel_notes/release_18_05.html


New Features
------------

.. This section should contain new features added in this release. Sample
   format:

   * **Add a title in the past tense with a full stop.**

     Add a short 1-2 sentence description in the past tense. The description
     should be enough to allow someone scanning the release notes to
     understand the new feature.

     If the feature adds a lot of sub-features you can use a bullet list like
     this:

     * Added feature foo to do something.
     * Enhanced feature bar to do something else.

     Refer to the previous release notes for examples.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **Reworked memory subsystem.**

  The memory subsystem has been reworked to support new functionality.

  On Linux, support for reserving/unreserving hugepage memory at runtime has
  been added, so applications no longer need to pre-reserve memory at startup.
  Due to the reorganized internal workings of the memory subsystem, any memory
  allocated through ``rte_malloc()`` or ``rte_memzone_reserve()`` is no longer
  guaranteed to be IOVA-contiguous.

  This functionality has introduced the following changes:

  * ``rte_eal_get_physmem_layout()`` was removed.
  * A new flag for memzone reservation (``RTE_MEMZONE_IOVA_CONTIG``) was added
    to ensure reserved memory will be IOVA-contiguous, for use with device
    drivers and other cases requiring such memory.
  * New callbacks for memory allocation/deallocation events, allowing users
    (or drivers) to be notified of new memory being allocated or deallocated.
  * New callbacks for validating memory allocations above a specified limit,
    allowing the user to permit or deny memory allocations.
  * A new command-line switch ``--legacy-mem`` to enable EAL behavior similar
    to how older versions of DPDK worked (memory segments that are
    IOVA-contiguous, but hugepages are reserved at startup only and can never
    be released).
  * A new command-line switch ``--single-file-segments`` to put all memory
    segments within a segment list in a single file.
  * A set of convenience function calls to look up and iterate over allocated
    memory segments.
  * ``-m`` and ``--socket-mem`` command-line arguments now carry an additional
    meaning and mark pre-reserved hugepages as "unfree-able", thereby acting
    as a mechanism guaranteeing minimum availability of hugepage memory to the
    application.

  Reserving/unreserving memory at runtime is not currently supported on
  FreeBSD.
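
  As an illustration, the following minimal sketch reserves an IOVA-contiguous
  memzone with the new flag; the memzone name, size and (lack of) error
  handling are placeholders rather than recommendations.

  .. code-block:: c

     #include <rte_memzone.h>
     #include <rte_lcore.h>

     /* Reserve a 2 MB zone that is guaranteed to be IOVA-contiguous, as
      * needed e.g. for hardware descriptor rings. Without this flag, memory
      * reserved in 18.05 may span several IOVA ranges.
      */
     static const struct rte_memzone *
     reserve_contig_zone(void)
     {
         return rte_memzone_reserve("example_contig_mz", 2 * 1024 * 1024,
                                    rte_socket_id(), RTE_MEMZONE_IOVA_CONTIG);
     }
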
* **Added bucket mempool driver.**

  Added a bucket mempool driver which provides a way to allocate contiguous
  blocks of objects.
  The number of objects in a block depends on how many objects fit in the
  ``RTE_DRIVER_MEMPOOL_BUCKET_SIZE_KB`` memory chunk, which is a build-time
  option.
  The number may be obtained using the ``rte_mempool_ops_get_info()`` API.
  Contiguous blocks may be allocated using the
  ``rte_mempool_get_contig_blocks()`` API.

* **Added support for port representors.**

  Added DPDK port representors (also known as "VF representors" in the
  specific context of VFs), which are to DPDK what the Ethernet switch device
  driver model (**switchdev**) is to Linux, and which can be thought of as a
  software "patch panel" front-end for applications. DPDK port representors
  are implemented as additional virtual Ethernet device (**ethdev**)
  instances, spawned on an as-needed basis through configuration parameters
  passed to the driver of the underlying device using devargs.

* **Added support for VXLAN and NVGRE tunnel endpoint.**

  New action types have been added to support encapsulation and decapsulation
  operations for a tunnel endpoint. The new action types are
  ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP``,
  ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP`` and
  ``RTE_FLOW_ACTION_TYPE_JUMP``. A new item type ``RTE_FLOW_ITEM_TYPE_MARK``
  has been added to match a flow against a previously marked flow. A shared
  counter has also been introduced to the flow API to count a group of flows.

* **Added PMD-recommended Tx and Rx parameters.**

  Applications can now query drivers for device-tuned values of
  ring sizes, burst sizes, and number of queues.
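
  The sketch below shows one way an application might pick up these
  recommendations; it relies only on the ``default_rxportconf`` field
  described in the ABI changes section below, and the fall-back values are
  arbitrary placeholders.

  .. code-block:: c

     #include <rte_ethdev.h>

     /* Use the PMD-recommended Rx ring size and burst size when the driver
      * provides them (non-zero), otherwise fall back to application defaults.
      */
     static void
     get_rx_params(uint16_t port_id, uint16_t *ring_size, uint16_t *burst_size)
     {
         struct rte_eth_dev_info dev_info;

         rte_eth_dev_info_get(port_id, &dev_info);

         *ring_size = dev_info.default_rxportconf.ring_size != 0 ?
                      dev_info.default_rxportconf.ring_size : 1024;
         *burst_size = dev_info.default_rxportconf.burst_size != 0 ?
                       dev_info.default_rxportconf.burst_size : 32;
     }
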
* **Added RSS hash and key update to CXGBE PMD.**

  Added support for updating the RSS hash and key to the CXGBE PMD.

* **Added CXGBE VF PMD.**

  CXGBE VF Poll Mode Driver has been added to run DPDK over Chelsio
  T5/T6 NIC VF instances.

* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Introduced Multi-packet Rx to enable 100Gb/sec with 64B frames.
  * Support for being run by non-root users given a reduced set of
    capabilities ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
  * Support for TSO and checksum for generic UDP and IP tunnels.
  * Support for inner checksum and RSS for GRE, VXLAN-GPE, MPLSoGRE
    and MPLSoUDP tunnels.
  * Accommodate the new memory hotplug model.
  * Support for non-virtually-contiguous mempools.
  * Support for MAC address addition along with allmulti and promiscuous
    modes from VF.
  * Support for the Mellanox BlueField SoC device.
  * Support for PMD defaults for queue number and depth to improve the
    out-of-the-box performance.

* **Updated mlx4 driver.**

  Updated the mlx4 driver including the following changes:

  * Support for being run by non-root users given a reduced set of
    capabilities ``CAP_NET_ADMIN``, ``CAP_NET_RAW`` and ``CAP_IPC_LOCK``.
  * Supported CRC strip toggling.
  * Accommodate the new memory hotplug model.
  * Support for non-virtually-contiguous mempools.
  * Dropped support for Mellanox OFED 4.2.

* **Updated Solarflare network PMD.**

  Updated the sfc_efx driver including the following changes:

  * Added support for Solarflare XtremeScale X2xxx family adapters.
  * Added support for NVGRE, VXLAN and GENEVE filters in flow API.
  * Added support for DROP action in flow API.
  * Added support for equal stride super-buffer Rx mode (X2xxx only).
  * Added support for MARK and FLAG actions in flow API (X2xxx only).

* **Added Ethernet poll mode driver for AMD XGBE devices.**

  Added the new ``axgbe`` Ethernet poll mode driver for AMD XGBE devices.
  See the :doc:`../nics/axgbe` NIC driver guide for more details on this
  new driver.

* **Updated szedata2 PMD.**

  Added support for the new NFB-200G2QL card.
  A new API was introduced in the libsze2 library, on which the szedata2 PMD
  depends, so a new version of the library is needed.
  New versions of the packages are available and the minimum required version
  is 4.4.1.

* **Added support for Broadcom NetXtreme-S (BCM58800) family of controllers (aka Stingray).**

  Added support for the Broadcom NetXtreme-S (BCM58800) family of controllers
  (aka Stingray). The BCM58800 devices feature a NetXtreme E-Series advanced
  network controller, a high-performance ARM CPU block, PCI Express (PCIe)
  Gen3 interfaces, key accelerators for compute offload and a high-speed
  memory subsystem including L3 cache and DDR4 interfaces, all interconnected
  by a coherent Network-on-chip (NOC) fabric.

  The ARM CPU subsystem features eight ARMv8 Cortex-A72 CPUs at 3.0 GHz,
  arranged in a multi-cluster configuration.

* **Added vDPA in vhost-user lib.**

  Added support for selective datapath in the vhost-user lib. vDPA stands for
  vhost Data Path Acceleration. It supports virtio ring compatible devices
  serving the virtio driver directly to enable datapath acceleration.

* **Added IFCVF vDPA driver.**

  Added the IFCVF vDPA driver to support Intel FPGA 100G VF devices. IFCVF
  works as an HW vhost data path accelerator; it supports live migration and
  is compatible with virtio 0.95 and 1.0. This driver registers the ifcvf vDPA
  driver with the vhost library when virtio connects. With the help of the
  registered vDPA driver, the assigned VF gets configured to Rx/Tx directly to
  the VM's virtio vrings.

* **Added support for vhost dequeue interrupt mode.**

  Added support for vhost dequeue interrupt mode to release CPUs to other
  tasks when there is no data to transmit. Applications can register an epoll
  event file descriptor to associate Rx queues with interrupt vectors.

* **Added support for virtio-user server mode.**

  In a container environment, if the vhost-user backend restarts, there is no
  way for it to reconnect to virtio-user. To address this, support for server
  mode has been added. In this mode the socket file is created by virtio-user,
  which the backend connects to. This means that if the backend restarts, it
  can reconnect to virtio-user and continue communications.

* **Added crypto workload support to vhost library.**

  New APIs have been introduced in the vhost library to enable virtio crypto
  support, including session creation/deletion handling and translating
  virtio-crypto requests into DPDK crypto operations. A sample application
  has also been introduced.

* **Added virtio crypto PMD.**

  Added a new Poll Mode Driver for virtio crypto devices, which provides
  AES-CBC ciphering and AES-CBC with HMAC-SHA1 algorithm-chaining. See the
  :doc:`../cryptodevs/virtio` crypto driver guide for more details on
  this new driver.

* **Added AMD CCP Crypto PMD.**

  Added the new ``ccp`` crypto driver for AMD CCP devices. See the
  :doc:`../cryptodevs/ccp` crypto driver guide for more details on
  this new driver.

* **Updated AESNI MB PMD.**

  The AESNI MB PMD has been updated with additional support for:

  * AES-CMAC (128-bit key).

* **Added the Compressdev Library, a generic compression service library.**

  Added the Compressdev library which provides an API for the offload of
  compression and decompression operations to hardware or software
  accelerator devices.

* **Added a new compression poll mode driver using Intel's ISA-L.**

  Added the new ``ISA-L`` compression driver, for compression and
  decompression operations in software. See the :doc:`../compressdevs/isal`
  compression driver guide for details on this new driver.

* **Added the Event Timer Adapter Library.**

  The Event Timer Adapter Library extends the event-based model by introducing
  APIs that allow applications to arm/cancel event timers that generate
  timer expiry events. This new type of event is scheduled by an event device
  along with existing types of events.

* **Added OcteonTx TIM Driver (Event timer adapter).**

  The OcteonTx Timer block enables software to schedule events for a future
  time; it is exposed to an application via the Event timer adapter library.

  See the :doc:`../eventdevs/octeontx` guide for more details.

* **Added Event Crypto Adapter Library.**

  Added the Event Crypto Adapter Library. This library extends the
  event-based model by introducing APIs that allow applications to
  enqueue/dequeue crypto operations to/from cryptodev as events scheduled
  by an event device.

* **Added Ifpga Bus, a generic Intel FPGA Bus library.**

  Added the Ifpga Bus library which provides support for integrating any Intel
  FPGA device with the DPDK framework. It provides Intel FPGA Partial Bit
  Stream AFU (Accelerated Function Unit) scanning and driver probing.

* **Added IFPGA (Intel FPGA) Rawdev Driver.**

  Added a new Rawdev driver called IFPGA (Intel FPGA) Rawdev Driver, which
  cooperates with OPAE (Open Programmable Acceleration Engine) shared code to
  provide common FPGA management ops for FPGA operation.

  See the :doc:`../rawdevs/ifpga` programmer's guide for more details.

* **Added DPAA2 QDMA Driver (in rawdev).**

  The DPAA2 QDMA is an implementation of the rawdev API that provides a means
  of initiating a DMA transaction from the CPU. The initiated DMA is performed
  without the CPU being involved in the actual DMA transaction.

  See the :doc:`../rawdevs/dpaa2_qdma` guide for more details.

* **Added DPAA2 Command Interface Driver (in rawdev).**

  The DPAA2 CMDIF is an implementation of the rawdev API that provides
  communication between the GPP and NXP's QorIQ-based AIOP block (firmware).
  The Advanced IO Processor (AIOP) is a cluster of programmable RISC engines
  optimized for flexible networking and I/O operations. The communication
  between the GPP and the AIOP is achieved via DPCI devices exposed by the MC
  for GPP <--> AIOP interaction.

  See the :doc:`../rawdevs/dpaa2_cmdif` guide for more details.

* **Added device event monitor framework.**

  Added a general device event monitor framework to EAL, for device dynamic
  management, to facilitate device hotplug awareness and the associated
  actions. The list of new APIs is:

  * ``rte_dev_event_monitor_start`` and ``rte_dev_event_monitor_stop`` for
    enabling and disabling the event monitor.
  * ``rte_dev_event_callback_register`` and ``rte_dev_event_callback_unregister``
    for registering and un-registering user callbacks.

  Linux uevent is supported as a backend of this device event notification
  framework.
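
  The following minimal sketch shows how these APIs might be wired together;
  the callback prototype and the ``RTE_DEV_EVENT_ADD``/``RTE_DEV_EVENT_REMOVE``
  event types are taken from ``rte_dev.h`` as understood by the author and
  should be verified against the installed headers.

  .. code-block:: c

     #include <stdio.h>
     #include <rte_dev.h>

     /* Print a message whenever a device is plugged in or removed. */
     static void
     dev_event_cb(const char *device_name, enum rte_dev_event_type type,
                  void *cb_arg)
     {
         (void)cb_arg;

         if (type == RTE_DEV_EVENT_ADD)
             printf("device %s added\n", device_name);
         else if (type == RTE_DEV_EVENT_REMOVE)
             printf("device %s removed\n", device_name);
     }

     static void
     enable_hotplug_monitoring(void)
     {
         /* A NULL device name is intended to register the callback for all
          * devices (assumption; verify in rte_dev.h). */
         rte_dev_event_callback_register(NULL, dev_event_cb, NULL);
         rte_dev_event_monitor_start();
     }
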
* **Added support for procinfo and pdump on eth vdev.**

  For Ethernet virtual devices (like TAP, PCAP, etc.) it is now possible to
  get stats/xstats via shared memory from a secondary process, and also to
  pdump packets on those virtual devices.

* **Enhancements to the Packet Framework Library.**

  Designed and developed new API functions for the Packet Framework library
  that implement a common set of actions, such as traffic metering, packet
  encapsulation, network address translation and TTL update, for pipeline
  tables and input ports, to speed up application development. The API
  functions include creating action profiles, registering actions with the
  profiles, instantiating action profiles for pipeline tables and input
  ports, etc.

* **Added the BPF Library.**

  The BPF Library provides the ability to load and execute
  extended Berkeley Packet Filter (eBPF) programs within user-space DPDK
  applications.
  It also introduces a basic framework to load/unload BPF-based filters
  on Eth devices (currently only via SW Rx/Tx callbacks).
  It also adds a dependency on libelf.


API Changes
-----------

.. This section should contain API changes. Sample format:

   * Add a short 1-2 sentence description of the API change. Use fixed width
     quotes for ``rte_function_names`` or ``rte_struct_names``. Use the past
     tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* service cores: No longer marked as experimental.

  The service cores functions are no longer marked as experimental, and have
  become part of the normal DPDK API and ABI. Any future ABI changes will be
  announced at least one release before the ABI change is made. There are no
  ABI breaking changes planned.

* eal: The ``rte_lcore_has_role()`` return value changed.

  This function now returns true or false, respectively,
  rather than 0 or < 0 for success or failure,
  which makes use of the function more intuitive.

* mempool: The capability flags and related functions have been removed.

  The flags ``MEMPOOL_F_CAPA_PHYS_CONTIG`` and
  ``MEMPOOL_F_CAPA_BLK_ALIGNED_OBJECTS`` were used by the octeontx mempool
  driver to customize generic mempool library behavior.
  Now the new driver callbacks ``calc_mem_size`` and ``populate`` may be
  used to achieve this without specific knowledge in the generic code.

* mempool: The following xmem functions have been deprecated:

  - ``rte_mempool_xmem_create``
  - ``rte_mempool_xmem_size``
  - ``rte_mempool_xmem_usage``
  - ``rte_mempool_populate_iova_tab``

* mbuf: The control mbuf API has been removed in v18.05. The impacted
  functions and macros are:

  - ``rte_ctrlmbuf_init()``
  - ``rte_ctrlmbuf_alloc()``
  - ``rte_ctrlmbuf_free()``
  - ``rte_ctrlmbuf_data()``
  - ``rte_ctrlmbuf_len()``
  - ``rte_is_ctrlmbuf()``
  - ``CTRL_MBUF_FLAG``

  The packet mbuf API should be used as a replacement.
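
  For code that previously used control mbufs to carry small out-of-band
  messages, a regular packet mbuf can serve the same purpose. The sketch below
  is one possible migration pattern rather than a canonical one; the helper
  name and message layout are illustrative only.

  .. code-block:: c

     #include <string.h>
     #include <rte_mbuf.h>

     /* Allocate a packet mbuf and copy a small control message into it,
      * roughly replacing rte_ctrlmbuf_alloc()/rte_ctrlmbuf_data().
      */
     static struct rte_mbuf *
     ctrl_msg_alloc(struct rte_mempool *pool, const void *msg, uint16_t len)
     {
         struct rte_mbuf *m = rte_pktmbuf_alloc(pool);

         if (m == NULL)
             return NULL;
         if (rte_pktmbuf_append(m, len) == NULL) {
             /* Message larger than the mbuf data room. */
             rte_pktmbuf_free(m);
             return NULL;
         }
         memcpy(rte_pktmbuf_mtod(m, void *), msg, len);
         return m;
     }
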
* meter: API updated to accommodate configuration profiles.

  The meter API has been changed to support meter configuration profiles. A
  configuration profile represents the set of configuration parameters
  for a given meter object, such as the rates and sizes for the token
  buckets. These configuration parameters were previously part of the meter
  object's internal data structure. Separating the configuration parameters
  from the meter object data structure reduces its memory footprint, which
  helps cache utilization when a large number of meter objects are used.

* ethdev: The function ``rte_eth_dev_count()``, often mis-used to iterate
  over ports, is deprecated and replaced by ``rte_eth_dev_count_avail()``.
  There is also a new function ``rte_eth_dev_count_total()`` to get the
  total number of allocated ports, available or not.
  Hotplug-proof applications should use ``RTE_ETH_FOREACH_DEV`` or
  ``RTE_ETH_FOREACH_DEV_OWNED_BY`` as port iterators.

* ethdev: In ``struct rte_eth_dev_info``, the field ``rte_pci_device *pci_dev``
  has been replaced with the field ``struct rte_device *device``.

* ethdev: Changes to the semantics of ``rte_eth_dev_configure()`` parameters.

  If both the ``nb_rx_q`` and ``nb_tx_q`` parameters are zero,
  ``rte_eth_dev_configure()`` will now use PMD-recommended queue sizes, or if
  recommendations are not provided by the PMD the function will use ethdev
  fall-back values. Previously setting both of the parameters to zero would
  have resulted in ``-EINVAL`` being returned.

* ethdev: Changes to the semantics of ``rte_eth_rx_queue_setup()`` parameters.

  If the ``nb_rx_desc`` parameter is zero, ``rte_eth_rx_queue_setup`` will
  now use the PMD-recommended Rx ring size, or in the case where the PMD
  does not provide a recommendation, will use an ethdev-provided
  fall-back value. Previously, setting ``nb_rx_desc`` to zero would have
  resulted in an error.

* ethdev: Changes to the semantics of ``rte_eth_tx_queue_setup()`` parameters.

  If the ``nb_tx_desc`` parameter is zero, ``rte_eth_tx_queue_setup`` will
  now use the PMD-recommended Tx ring size, or in the case where the PMD
  does not provide a recommendation, will use an ethdev-provided
  fall-back value. Previously, setting ``nb_tx_desc`` to zero would have
  resulted in an error.
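
  The fragment below sketches how an application might rely on the new
  ring-size defaults; the port and queue ids are placeholders and most error
  handling is omitted.

  .. code-block:: c

     #include <string.h>
     #include <rte_ethdev.h>

     /* Configure one Rx and one Tx queue; passing zero descriptors to the
      * queue setup calls now selects the PMD-recommended (or ethdev
      * fall-back) ring sizes instead of failing with an error.
      */
     static int
     setup_port_with_default_rings(uint16_t port_id, struct rte_mempool *mb_pool)
     {
         struct rte_eth_conf port_conf;
         int ret;

         memset(&port_conf, 0, sizeof(port_conf));

         ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
         if (ret < 0)
             return ret;

         ret = rte_eth_rx_queue_setup(port_id, 0, 0,
                                      rte_eth_dev_socket_id(port_id),
                                      NULL, mb_pool);
         if (ret < 0)
             return ret;

         return rte_eth_tx_queue_setup(port_id, 0, 0,
                                       rte_eth_dev_socket_id(port_id), NULL);
     }
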
* ethdev: Several changes were made to the flow API.

  * The unused DUP action was removed.
  * Action semantics in flow rules: list order now matters ("first
    to last" instead of "all simultaneously"), repeated actions are now
    all performed, and they do not individually have (non-)terminating
    properties anymore.
  * Flow rules are now always terminating unless a ``PASSTHRU`` action is
    present.
  * C99-style flexible arrays were replaced with standard pointers in the RSS
    action and in the RAW pattern item structures due to compatibility issues.
  * The RSS action was modified to not rely on the external
    ``struct rte_eth_rss_conf`` anymore and instead expose its own, more
    appropriately named configuration fields directly
    (``rss_conf->rss_key`` => ``key``,
    ``rss_conf->rss_key_len`` => ``key_len``,
    ``rss_conf->rss_hf`` => ``types``,
    ``num`` => ``queue_num``), with the addition of missing RSS parameters
    (``func`` for the RSS hash function to apply and ``level`` for the
    encapsulation level).
  * The VLAN pattern item (``struct rte_flow_item_vlan``) was modified to
    include the inner EtherType instead of the outer TPID. Its default mask
    was also modified to cover the VID part (lower 12 bits) of TCI only.
  * A new transfer attribute was added to ``struct rte_flow_attr`` in order
    to clarify the behavior of some pattern items.
  * PF and VF pattern items are now only accepted by PMDs that implement
    them (bnxt and i40e) when the transfer attribute is also present, for
    consistency.
  * Pattern item PORT was renamed PHY_PORT to avoid confusion with DPDK port
    IDs.
  * An action counterpart to the PHY_PORT pattern item was added in order to
    redirect matching traffic to a specific physical port.
  * PORT_ID pattern item and actions were added to match and target DPDK
    port IDs at a higher level than PHY_PORT.
  * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_ENCAP`` action items were added to
    support tunnel encapsulation operations for VXLAN and NVGRE type tunnel
    endpoints.
  * ``RTE_FLOW_ACTION_TYPE_[VXLAN/NVGRE]_DECAP`` action items were added to
    support tunnel decapsulation operations for VXLAN and NVGRE type tunnel
    endpoints.
  * ``RTE_FLOW_ACTION_TYPE_JUMP`` action item was added to support redirecting
    a matched flow to a specific group.
  * ``RTE_FLOW_ITEM_TYPE_MARK`` item type has been added to match a flow
    against a previously marked flow.

* ethdev: Changes to flow APIs regarding the count action:

  * The ``rte_flow_create()`` API count action now requires the
    ``struct rte_flow_action_count`` configuration.
  * The ``rte_flow_query()`` API parameter changed from action type to action
    structure.

* ethdev: Changes to the offload API.

  A pure per-port offload no longer needs to be repeated in
  ``[rt]x_conf->offloads`` passed to ``rte_eth_[rt]x_queue_setup()``. Any
  offload enabled in ``rte_eth_dev_configure()`` can no longer be disabled by
  ``rte_eth_[rt]x_queue_setup()``. Any newly added offload which has not been
  enabled in ``rte_eth_dev_configure()`` and is requested to be enabled in
  ``rte_eth_[rt]x_queue_setup()`` must be a per-queue type offload, otherwise
  an error log is triggered.

* ethdev: Runtime queue setup.

  ``rte_eth_rx_queue_setup`` and ``rte_eth_tx_queue_setup`` can be called after
  ``rte_eth_dev_start`` if the device supports runtime queue setup. The device
  driver can expose this capability through ``rte_eth_dev_info_get``. An Rx or
  Tx queue set up at runtime needs to be started explicitly by
  ``rte_eth_dev_rx_queue_start`` or ``rte_eth_dev_tx_queue_start``.
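
  A minimal sketch of this sequence is shown below; it assumes the runtime
  setup capability is reported via the ``dev_capa`` field of
  ``rte_eth_dev_info`` (``RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP``), which
  should be verified against the ethdev headers, and it omits most error
  handling.

  .. code-block:: c

     #include <errno.h>
     #include <rte_ethdev.h>

     /* Set up and start an Rx queue after rte_eth_dev_start(), provided the
      * PMD advertises runtime Rx queue setup support.
      */
     static int
     setup_rx_queue_at_runtime(uint16_t port_id, uint16_t queue_id,
                               struct rte_mempool *mb_pool)
     {
         struct rte_eth_dev_info dev_info;
         int ret;

         rte_eth_dev_info_get(port_id, &dev_info);
         if ((dev_info.dev_capa & RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP) == 0)
             return -ENOTSUP;

         /* nb_rx_desc == 0 selects the PMD-recommended ring size. */
         ret = rte_eth_rx_queue_setup(port_id, queue_id, 0,
                                      rte_eth_dev_socket_id(port_id),
                                      NULL, mb_pool);
         if (ret < 0)
             return ret;

         /* Queues set up at runtime must be started explicitly. */
         return rte_eth_dev_rx_queue_start(port_id, queue_id);
     }
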

ABI Changes
-----------

.. This section should contain ABI changes. Sample format:

   * Add a short 1-2 sentence description of the ABI change that was announced
     in the previous releases and made in this release. Use fixed width quotes
     for ``rte_function_names`` or ``rte_struct_names``. Use the past tense.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* ring: The alignment constraints on the ring structure have been relaxed
  to one cache line instead of two, and an empty cache line of padding is
  added between the producer and consumer structures. The size of the
  structure and the offsets of the fields remain the same on platforms
  with a 64B cache line, but change on other platforms.

* mempool: Some ops have changed.

  A new callback ``calc_mem_size`` has been added to ``rte_mempool_ops``
  to allow customization of the required memory size calculation.
  A new callback ``populate`` has been added to ``rte_mempool_ops``
  to allow customized object population.
  The callback ``get_capabilities`` has been removed from ``rte_mempool_ops``
  since its features are covered by the ``calc_mem_size`` and ``populate``
  callbacks.
  The callback ``register_memory_area`` has been removed from
  ``rte_mempool_ops`` since the new callback ``populate`` may be used instead.

* ethdev: Additional fields in ``rte_eth_dev_info``.

  The ``rte_eth_dev_info`` structure has had two extra entries appended to the
  end of it: ``default_rxportconf`` and ``default_txportconf``. Each of these
  in turn is an ``rte_eth_dev_portconf`` structure containing three fields of
  type ``uint16_t``: ``burst_size``, ``ring_size``, and ``nb_queues``. These
  are parameter values recommended for use by the PMD.

* ethdev: ABI for all flow API functions was updated.

  This includes functions ``rte_flow_copy``, ``rte_flow_create``,
  ``rte_flow_destroy``, ``rte_flow_error_set``, ``rte_flow_flush``,
  ``rte_flow_isolate``, ``rte_flow_query`` and ``rte_flow_validate``, due to
  changes in error type definitions (``enum rte_flow_error_type``), removal
  of the unused DUP action (``enum rte_flow_action_type``), modified
  behavior for flow rule actions (see API changes), removal of the C99
  flexible array from the RAW pattern item (``struct rte_flow_item_raw``),
  complete rework of the RSS action definition
  (``struct rte_flow_action_rss``), a sanity fix in the VLAN pattern item
  (``struct rte_flow_item_vlan``) and the new transfer attribute
  (``struct rte_flow_attr``).

* bbdev: New parameter added to ``rte_bbdev_op_cap_turbo_dec``.

  A new parameter ``max_llr_modulus`` has been added to the
  ``rte_bbdev_op_cap_turbo_dec`` structure to specify the maximal LLR
  (likelihood ratio) absolute value.

* bbdev: Queue Groups split into UL/DL Groups.

  Queue Groups have been split into UL/DL Groups in the Turbo Software Driver.
  They are independent for Decode/Encode. ``rte_bbdev_driver_info`` reflects
  the introduced changes.


Known Issues
------------

.. This section should contain new known issues in this release. Sample format:

   * **Add title in present tense with full stop.**

     Add a short 1-2 sentence description of the known issue in the present
     tense. Add information on any known workarounds.

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* **Secondary process launch is not reliable.**

  Recent memory hotplug patches have made multiprocess startup less reliable
  than it was in past releases. A number of workarounds are known to work
  depending on the circumstances. As such it isn't recommended to use the
  secondary process mechanism for critical systems. The underlying issues
  will be addressed in upcoming releases.

  The issue is explained in more detail, including potential workarounds,
  in the Bugzilla entry referenced below.

  Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=50

* **pdump is not compatible with old applications.**

  Since pdump negotiation now uses generic multi-process communication
  instead of the previous dedicated Unix socket, pdump applications,
  including the dpdk-pdump example and any other applications using
  ``librte_pdump``, will not work with DPDK primary applications of an
  older version.

* **rte_abort takes a long time on FreeBSD.**

  DPDK processes now allocate a large area of virtual memory address space.
  As a result, ``rte_abort`` on FreeBSD now dumps the contents of the
  whole reserved memory range, not just the used portion, to a core dump file.
  Writing this large core file can take a significant amount of time, causing
  processes to appear to hang on the system.

  The workaround for the issue is to set the system resource limits for core
  dumps before running any tests, e.g. ``limit coredumpsize 0``. This will
  effectively disable core dumps on FreeBSD. If they are not to be completely
  disabled, a suitable limit, e.g. 1G, might be specified instead of 0. This
  needs to be run per-shell session, or before every test run. This change
  can also be made persistent by adding ``kern.coredump=0`` to
  ``/etc/sysctl.conf``.

  Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=53

* **ixgbe PMD crash on hotplug detach when no VF created.**

  The ixgbe PMD uninit path causes a null pointer dereference during port
  representor cleanup when the number of VFs is zero.

  Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=57

* **Bonding PMD may fail to accept new slave ports in certain conditions.**

  In certain conditions when using testpmd,
  bonding may fail to register new slave ports.

  Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=52

* **Unexpected performance regression in Vhost library.**

  Patches fixing CVE-2018-1059 were expected to introduce a small performance
  drop. However, in some setups, bigger performance drops have been measured
  when running micro-benchmarks.

  Bugzilla entry: https://bugs.dpdk.org/show_bug.cgi?id=48


Shared Library Versions
-----------------------

.. Update any library version updated in this release and prepend with a ``+``
   sign, like this:

     librte_acl.so.2
   + librte_cfgfile.so.2
     librte_cmdline.so.2

   This section is a comment. Do not overwrite or remove it.
   =========================================================


The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
     librte_bbdev.so.1
     librte_bitratestats.so.2
   + librte_bpf.so.1
     librte_bus_dpaa.so.1
     librte_bus_fslmc.so.1
     librte_bus_pci.so.1
     librte_bus_vdev.so.1
     librte_cfgfile.so.2
     librte_cmdline.so.2
   + librte_common_octeontx.so.1
   + librte_compressdev.so.1
     librte_cryptodev.so.4
     librte_distributor.so.1
   + librte_eal.so.7
   + librte_ethdev.so.9
   + librte_eventdev.so.4
     librte_flow_classify.so.1
     librte_gro.so.1
     librte_gso.so.1
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_latencystats.so.1
     librte_lpm.so.2
   + librte_mbuf.so.4
   + librte_mempool.so.4
   + librte_meter.so.2
     librte_metrics.so.1
     librte_net.so.1
     librte_pci.so.1
     librte_pdump.so.2
     librte_pipeline.so.3
     librte_pmd_bnxt.so.2
     librte_pmd_bond.so.2
     librte_pmd_i40e.so.2
     librte_pmd_ixgbe.so.2
   + librte_pmd_dpaa2_cmdif.so.1
   + librte_pmd_dpaa2_qdma.so.1
     librte_pmd_ring.so.2
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2
     librte_port.so.3
     librte_power.so.1
     librte_rawdev.so.1
     librte_reorder.so.1
   + librte_ring.so.2
     librte_sched.so.1
     librte_security.so.1
     librte_table.so.3
     librte_timer.so.1
     librte_vhost.so.3


Tested Platforms
----------------

.. This section should contain a list of platforms that were tested with this
   release.

   The format is:

   * <vendor> platform with <vendor> <type of devices> combinations

     * List of CPU
     * List of OS
     * List of devices
     * Other relevant details...

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* Intel(R) platforms with Intel(R) NICs combinations

  * CPU:

    * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
    * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
    * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
    * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz

  * OS:

    * CentOS 7.4
    * Fedora 25
    * Fedora 27
    * Fedora 28
    * FreeBSD 11.1
    * Red Hat Enterprise Linux Server release 7.3
    * SUSE Enterprise Linux 12
    * Wind River Linux 8
    * Ubuntu 14.04
    * Ubuntu 16.04
    * Ubuntu 16.10
    * Ubuntu 17.10

  * NICs:

    * Intel(R) 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 5.2.3 (ixgbe)

    * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800003e7
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 4.4.6 (ixgbe)

    * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.4.6 (i40e)

    * Intel Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)

      * Firmware version: 3.33 0x80000fd5 0.0.0
      * Device id (pf/vf): 8086:37d0 / 8086:37cd
      * Driver version: 2.4.3 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:158b / 8086:154c
      * Driver version: 2.4.6 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

      * Firmware version: 6.01 0x8000321c
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 2.4.6 (i40e)

    * Intel(R) Corporation I350 Gigabit Network Connection

      * Firmware version: 1.63, 0x80000dda
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.4.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

  * CPU:

    * Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
    * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
    * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 (Maipo)
    * Red Hat Enterprise Linux Server release 7.4 (Maipo)
    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)
    * Ubuntu 18.04
    * Ubuntu 17.10
    * Ubuntu 16.10
    * Ubuntu 16.04
    * SUSE Linux Enterprise Server 15

  * MLNX_OFED: 4.2-1.0.0.0
  * MLNX_OFED: 4.3-2.0.2.0

  * NICs:

    * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000 and above

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000 and above

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.21.1000 and above

    * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.21.1000 and above

* ARM platforms with Mellanox(R) NICs combinations

  * CPU:

    * Qualcomm ARM 1.1 2500MHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 (Maipo)

  * NICs:

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.22.0428

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.22.0428

* ARM SoC combinations from Cavium (with integrated NICs)

  * SoC:

    * Cavium CN81xx
    * Cavium CN83xx

  * OS:

    * Ubuntu 16.04.2 LTS with Cavium SDK-6.2.0-Patch2 release support package.

* ARM SoC combinations from NXP (with integrated NICs)

  * SoC:

    * NXP/Freescale QorIQ LS1046A with ARM Cortex A72
    * NXP/Freescale QorIQ LS2088A with ARM Cortex A72

  * OS:

    * Ubuntu 16.04.3 LTS with NXP QorIQ LSDK 1803 support packages