..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2018 The DPDK contributors

DPDK Release 19.02
==================

New Features
------------

* **Added support for freeing hugepages exactly as originally allocated.**

  Some applications using memory event callbacks (especially for managing
  RDMA memory regions) require that memory be freed back to the system
  exactly as it was originally allocated. These applications typically
  also require that a malloc allocation not span across two separate
  hugepage allocations. A new ``--match-allocations`` EAL init flag has
  been added to fulfill both of these requirements.
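
  Below is a minimal sketch of passing the new flag to EAL (the surrounding
  arguments are illustrative only):

  .. code-block:: c

     #include <rte_eal.h>

     int
     main(int argc, char **argv)
     {
         /* "--match-allocations": free hugepage memory back to the system
          * exactly as it was originally allocated. */
         char *eal_args[] = { argv[0], "--match-allocations" };

         if (rte_eal_init(2, eal_args) < 0)
             return -1;
         /* ... application code ... */
         return 0;
     }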

* **Added API to register external memory in DPDK.**

  A new ``rte_extmem_register``/``rte_extmem_unregister`` API was added to allow
  chunks of external memory to be registered with DPDK without adding them to
  the malloc heap.
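
  A minimal usage sketch, assuming an externally allocated buffer ``buf`` of
  ``len`` bytes made up of pages of size ``page_sz`` (error handling elided;
  passing a NULL IOVA table leaves the area IOVA-unknown):

  .. code-block:: c

     #include <rte_memory.h>

     /* Make the external memory known to DPDK without adding it to any
      * malloc heap. */
     rte_extmem_register(buf, len, NULL, 0, page_sz);

     /* ... e.g. DMA-map the area and hand it to a device ... */

     rte_extmem_unregister(buf, len);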
24
25* **Added support for using virtio-user without hugepages.**
26
27  The ``--no-huge`` mode was augmented to use memfd-backed memory (on systems
28  that support memfd), to allow using virtio-user-based NICs without
29  hugepages.
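
  For example, a sketch of EAL arguments combining the two (the vdev device
  string and socket path are illustrative):

  .. code-block:: c

     /* memfd-backed "--no-huge" memory plus a virtio-user port attached
      * to a vhost-user socket. */
     char *eal_args[] = {
         argv[0], "--no-huge", "-m", "1024",
         "--vdev=virtio_user0,path=/tmp/vhost-user.sock",
     };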

* **Release of the ENA PMD v2.0.0.**

  Version 2.0.0 of the ENA PMD was added with the following additions:

  * Added Low Latency Queue v2 (LLQv2). This feature reduces packet latency
    by pushing the header directly over PCI to the device, which allows the
    NIC to start handling packets right after the doorbell without waiting
    for DMA.
  * Added independent configuration of HW Tx and Rx ring depths.
  * Added support for up to 8k Rx descriptors per ring.
  * Added an additional doorbell check on Tx, to handle big bursts of
    packets more efficiently.
  * Added per-queue statistics.
  * Added extended statistics using the DPDK xstats API.
  * Aligned the reset routine with the DPDK API, so it can now be handled
    as in other PMDs.
  * Fixed out of order (OOO) completion.
  * Fixed memory leaks caused by port stops and starts in the middle of
    traffic.
  * Updated the documentation and features list of the PMD.

* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Fixed the ``imissed`` counter to be reported through ``rte_eth_stats``
    instead of ``rte_eth_xstats``.
  * Added packet header modification through the Direct Verbs flow driver.
  * Added the ConnectX-6 PCI device ID to be probed by the ``mlx5`` driver.
  * Added flow counter support to the Direct Verbs flow driver through DevX.
  * Renamed the build options for the glue layer to
    ``CONFIG_RTE_IBVERBS_LINK_DLOPEN`` for make and ``ibverbs_link`` for meson.
  * Added static linkage of the ``mlx`` dependency.
  * Improved the stability of the E-Switch flow driver.
  * Added a new make build configuration, ``arm64-bluefield-linux-gcc``, to
    set the cacheline size correctly for BlueField.

* **Updated the enic driver.**

  * Added support for the ``RTE_ETH_DEV_CLOSE_REMOVE`` flag.
  * Added a handler to get the firmware version string.
  * Added support for multicast filtering.

* **Added dynamic queues allocation support for i40e VF.**

  Previously, the available VF queues were reserved by the PF at
  initialization time. Now both the DPDK PF and the Kernel PF (>=2.1.14)
  support dynamic queue allocation: at runtime, when a VF requests more
  queues than the initially reserved amount, the PF can allocate up to
  16 queues to satisfy the request after a VF reset.

* **Added ICE net PMD.**

  Added the new ``ice`` net driver for Intel(R) Ethernet Network Adapters E810.
  See the :doc:`../nics/ice` NIC guide for more details on this new driver.

* **Added support for SW-assisted VDPA live migration.**

  This SW-assisted VDPA live migration facility helps VDPA devices without
  logging capability to perform live migration: a mediated SW relay tracks
  the dirty pages caused by DMA on behalf of the device. The IFC driver has
  enabled this SW-assisted live migration mode.

* **Added security checks to the cryptodev symmetric session operations.**

  Added a set of security checks to the cryptodev symmetric session access
  functions. The checks include validating the session's user data on read
  and write, and checking the referencing status of the session's private
  data when freeing a session.
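
  For example (a sketch; ``sess``, ``ctx`` and ``handle_error()`` are
  placeholders), the user data accessors now fail cleanly on invalid input:

  .. code-block:: c

     #include <rte_cryptodev.h>

     /* Returns a negative value if the session is invalid or "size" exceeds
      * the user data size the session was created with. */
     if (rte_cryptodev_sym_session_set_user_data(sess, &ctx, sizeof(ctx)) < 0)
         handle_error();

     /* Returns NULL instead of an out-of-bounds pointer on invalid input. */
     void *udata = rte_cryptodev_sym_session_get_user_data(sess);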

* **Updated the AESNI-MB PMD.**

  * Added support for intel-ipsec-mb version 0.52.
  * Added AES-GMAC algorithm support.
  * Added plain SHA1, SHA224, SHA256, SHA384, and SHA512 algorithms support.

* **Added IPsec Library.**

  Added an experimental library ``librte_ipsec`` to provide ESP tunnel and
  transport support for IPv4 and IPv6 packets.

  At present the library supports AES-CBC ciphering, AES-CBC with HMAC-SHA1
  algorithm chaining, AES-GCM, and the NULL algorithm; more algorithms are
  planned for future releases.

  See :doc:`../prog_guide/ipsec_lib` for more information.
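
  A high-level sketch of the lookaside-crypto datapath (SA and crypto session
  setup are elided; ``ss``, ``mbufs``, ``cops`` and ``num`` are placeholders):

  .. code-block:: c

     #include <rte_ipsec.h>

     /* Bind the rte_ipsec_session to its SA and crypto session once. */
     rte_ipsec_session_prepare(&ss);

     /* Per burst: prepare crypto ops for the packets ... */
     uint16_t n = rte_ipsec_pkt_crypto_prepare(&ss, mbufs, cops, num);

     /* ... enqueue/dequeue the "n" ops on a cryptodev, then finalize the
      * packets (ESP headers, trailers, etc.). */
     n = rte_ipsec_pkt_process(&ss, mbufs, n);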

* **Updated the ipsec-secgw sample application.**

  The ``ipsec-secgw`` sample application has been updated to use the new
  ``librte_ipsec`` library, which has also been added in this release.
  The original functionality of ipsec-secgw is retained; a new command line
  parameter ``-l`` has been added to make ipsec-secgw use the IPsec library
  instead of the existing IPsec code in the application.

  The IPsec library does not yet support all the functionality of the
  existing ipsec-secgw application. It is planned to add the outstanding
  functionality in future releases.

  See :doc:`../sample_app_ug/ipsec_secgw` for more information.

* **Enabled checksum support in the ISA-L compressdev driver.**

  Added support for both Adler-32 and CRC32 checksums in the ISA-L PMD.
  This aids data integrity across both compression and decompression.
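
  As a sketch, a checksum is requested through the standard compressdev
  transform (other fields elided); the computed value is then returned in
  the completed operation's ``output_chksum`` field:

  .. code-block:: c

     #include <rte_comp.h>

     struct rte_comp_xform xform = {
         .type = RTE_COMP_COMPRESS,
         .compress = {
             .algo = RTE_COMP_ALGO_DEFLATE,
             .chksum = RTE_COMP_CHECKSUM_CRC32, /* or RTE_COMP_CHECKSUM_ADLER32 */
         },
     };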

* **Added a compression performance test tool.**

  Added a new performance test tool to test the compressdev PMD. The tool tests
  compression ratio and compression throughput.

* **Added intel_pstate support to Power Management library.**

  Previously, using the power management library required the
  disabling of the intel_pstate kernel driver, and the enabling of the
  acpi_cpufreq kernel driver. This is no longer the case, as the use of
  the intel_pstate kernel driver is now supported, and automatically
  detected by the library.
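
  Application code is unchanged; a minimal sketch (``lcore_id`` and
  ``handle_error()`` are placeholders):

  .. code-block:: c

     #include <rte_power.h>

     /* Probes the environment and binds the lcore to whichever kernel
      * driver backs it: acpi_cpufreq or, new in this release, intel_pstate. */
     if (rte_power_init(lcore_id) != 0)
         handle_error();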


API Changes
-----------

* eal: Function ``rte_bsf64`` in ``rte_bitmap.h`` has been renamed to
  ``rte_bsf64_safe`` and moved to ``rte_common.h``. A new ``rte_bsf64``
  function has been added in ``rte_common.h`` that follows the convention set
  by the existing ``rte_bsf32`` function.
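
  A sketch of the resulting convention, mirroring ``rte_bsf32``:

  .. code-block:: c

     #include <stdio.h>
     #include <rte_common.h>

     uint64_t v = 0x80;
     uint32_t pos = rte_bsf64(v);   /* result is undefined when v == 0 */

     if (rte_bsf64_safe(v, &pos))   /* returns 0 when v == 0 */
         printf("lowest set bit at %u\n", pos);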

* eal: The segment fd API on Linux now sets the error code to ``ENOTSUP``
  in more cases where the segment fd API is not expected to be supported:

  - On attempts to get a segment fd for an externally allocated memory segment
  - In cases where memfd support would have been required to provide segment
    fds (such as in-memory or no-huge mode)

* eal: Functions ``rte_malloc_dump_stats()``, ``rte_malloc_dump_heaps()`` and
  ``rte_malloc_get_socket_stats()`` are no longer safe to call concurrently with
  ``rte_malloc_heap_create()`` or ``rte_malloc_heap_destroy()`` function calls.

* mbuf: ``RTE_MBUF_INDIRECT()``, which was deprecated in 18.05, was replaced
  with ``RTE_MBUF_CLONED()`` and removed in 19.02.

* sched: As a result of the new format of the mbuf sched field, the
  functions ``rte_sched_port_pkt_write()`` and
  ``rte_sched_port_pkt_read_tree_path()`` got an additional parameter of
  type ``struct rte_sched_port``.

* pdump: The ``rte_pdump_set_socket_dir()`` function, the ``path`` parameter
  of ``rte_pdump_init()`` and the enum ``rte_pdump_socktype``, all deprecated
  since 18.05, have been removed in this release.

* cryptodev: The parameter ``session_pool`` in the function
  ``rte_cryptodev_queue_pair_setup()`` is removed.

* cryptodev: A new function ``rte_cryptodev_sym_session_pool_create()`` has
  been introduced. Use of this function is now mandatory when creating a
  symmetric session header mempool: all crypto applications are required to
  use it from now on, and failure to do so will cause
  ``rte_cryptodev_sym_session_create()`` to return an error.
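
  A sketch of the new initialization flow (names and sizes are illustrative;
  we assume the library rounds the element size up to the required minimum
  when 0 is passed):

  .. code-block:: c

     #include <rte_cryptodev.h>
     #include <rte_lcore.h>

     struct rte_mempool *sess_mp = rte_cryptodev_sym_session_pool_create(
         "sym_sess_mp",     /* name */
         1024,              /* number of sessions */
         0,                 /* element size (0 = minimum required) */
         32,                /* per-lcore cache size */
         0,                 /* per-session user data size */
         rte_socket_id());

     struct rte_cryptodev_sym_session *sess =
         rte_cryptodev_sym_session_create(sess_mp);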


ABI Changes
-----------

* mbuf: The format of the sched field of ``rte_mbuf`` has been changed
  to include the following fields: ``queue ID``, ``traffic class``, ``color``.

* cryptodev: As announced in the 18.11 deprecation notice, the structure
  ``rte_cryptodev_qp_conf`` has gained two parameters, for the symmetric
  session mempool and the symmetric session private data mempool.

* cryptodev: As announced in the 18.11 deprecation notice, the structure
  ``rte_cryptodev_sym_session`` has been updated to contain more information
  to ensure safe access to the session and session private data.

* security: A new field ``uint64_t opaque_data`` has been added to the
  ``rte_security_session`` structure, allowing the upper layer to easily
  associate and de-associate user-defined data with a security session.


Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
     librte_bbdev.so.1
     librte_bitratestats.so.2
     librte_bpf.so.1
     librte_bus_dpaa.so.2
     librte_bus_fslmc.so.2
     librte_bus_ifpga.so.2
     librte_bus_pci.so.2
     librte_bus_vdev.so.2
     librte_bus_vmbus.so.2
     librte_cfgfile.so.2
     librte_cmdline.so.2
     librte_compressdev.so.1
   + librte_cryptodev.so.6
     librte_distributor.so.1
     librte_eal.so.9
     librte_efd.so.1
     librte_ethdev.so.11
     librte_eventdev.so.6
     librte_flow_classify.so.1
     librte_gro.so.1
     librte_gso.so.1
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_latencystats.so.1
     librte_lpm.so.2
   + librte_mbuf.so.5
     librte_member.so.1
     librte_mempool.so.5
     librte_meter.so.2
     librte_metrics.so.1
     librte_net.so.1
     librte_pci.so.1
   + librte_pdump.so.3
     librte_pipeline.so.3
     librte_pmd_bnxt.so.2
     librte_pmd_bond.so.2
     librte_pmd_i40e.so.2
     librte_pmd_ixgbe.so.2
     librte_pmd_dpaa2_qdma.so.1
     librte_pmd_ring.so.2
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2
     librte_port.so.3
     librte_power.so.1
     librte_rawdev.so.1
     librte_reorder.so.1
     librte_ring.so.2
   + librte_sched.so.2
   + librte_security.so.2
     librte_table.so.3
     librte_timer.so.1
     librte_vhost.so.4


Known Issues
------------

* ``AVX-512`` support has been disabled for ``GCC`` builds when ``binutils 2.30``
  is detected [1] because of a crash [2]. This can affect ``native`` machine type
  build targets on platforms that support ``AVX512F``, such as ``Intel Skylake``
  processors, and can cause a possible performance drop. The immediate workaround
  is to use the ``clang`` compiler on these platforms.
  The initial workaround in DPDK v18.11 was to disable ``AVX-512`` support for
  ``GCC`` completely, but based on information in the defect submitted to the
  GCC community [3], the issue has been identified as a ``binutils 2.30`` issue.
  Since currently only GCC generates ``AVX-512`` instructions, the scope is
  limited to ``GCC`` and ``binutils 2.30``.

  - [1]: Commit ("mk: fix scope of disabling AVX512F support")
  - [2]: https://bugs.dpdk.org/show_bug.cgi?id=97
  - [3]: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=88096


Tested Platforms
----------------

* Intel(R) platforms with Intel(R) NICs combinations

   * CPU

     * Intel(R) Atom(TM) CPU C3758 @ 2.20GHz
     * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
     * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
     * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
     * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
     * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz
     * Intel(R) Xeon(R) Gold 6139 CPU @ 2.30GHz

   * OS:

     * CentOS 7.4
     * CentOS 7.5
     * Fedora 25
     * Fedora 28
     * FreeBSD 11.2
     * FreeBSD 12.0
     * Red Hat Enterprise Linux Server release 7.4
     * Red Hat Enterprise Linux Server release 7.5
     * Open SUSE 15
     * Wind River Linux 8
     * Ubuntu 14.04
     * Ubuntu 16.04
     * Ubuntu 16.10
     * Ubuntu 18.04
     * Ubuntu 18.10

   * NICs:

     * Intel(R) 82599ES 10 Gigabit Ethernet Controller

       * Firmware version: 0x61bf0001
       * Device id (pf/vf): 8086:10fb / 8086:10ed
       * Driver version: 5.2.3 (ixgbe)

     * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

       * Firmware version: 0x800003e7
       * Device id (pf/vf): 8086:15ad / 8086:15a8
       * Driver version: 4.4.6 (ixgbe)

     * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

       * Firmware version: 6.80 0x80003cc1
       * Device id (pf/vf): 8086:1572 / 8086:154c
       * Driver version: 2.7.26 (i40e)

     * Intel(R) Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)

       * Firmware version: 3.33 0x80000fd5 0.0.0
       * Device id (pf/vf): 8086:37d0 / 8086:37cd
       * Driver version: 2.7.26 (i40e)

     * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

       * Firmware version: 6.80 0x80003d05
       * Device id (pf/vf): 8086:158b / 8086:154c
       * Driver version: 2.7.26 (i40e)

     * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

       * Firmware version: 6.80 0x80003cfb
       * Device id (pf/vf): 8086:1583 / 8086:154c
       * Driver version: 2.7.26 (i40e)

     * Intel(R) Corporation I350 Gigabit Network Connection

       * Firmware version: 1.63, 0x80000dda
       * Device id (pf/vf): 8086:1521 / 8086:1520
       * Driver version: 5.4.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

   * CPU:

     * Intel(R) Xeon(R) Gold 6154 CPU @ 3.00GHz
     * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
     * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
     * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
     * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
     * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
     * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

   * OS:

     * Red Hat Enterprise Linux Server release 7.6 (Maipo)
     * Red Hat Enterprise Linux Server release 7.5 (Maipo)
     * Red Hat Enterprise Linux Server release 7.4 (Maipo)
     * Red Hat Enterprise Linux Server release 7.3 (Maipo)
     * Red Hat Enterprise Linux Server release 7.2 (Maipo)
     * Ubuntu 18.10
     * Ubuntu 18.04
     * Ubuntu 17.10
     * Ubuntu 16.04
     * SUSE Linux Enterprise Server 15

   * MLNX_OFED: 4.4-2.0.1.0
   * MLNX_OFED: 4.5-1.0.1.0

   * NICs:

     * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1007
       * Firmware version: 2.42.5000

     * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.24.1000 and above

     * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.24.1000 and above

     * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1017
       * Firmware version: 16.24.1000 and above

     * Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

       * Host interface: PCI Express 4.0 x16
       * Device ID: 15b3:1019
       * Firmware version: 16.24.1000 and above

* ARM platforms with Mellanox(R) NICs combinations

   * CPU:

     * Qualcomm ARM 1.1 2500MHz

   * OS:

     * Red Hat Enterprise Linux Server release 7.5 (Maipo)

   * NICs:

     * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.24.0220

     * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1017
       * Firmware version: 16.24.0220

* Mellanox(R) BlueField SmartNIC

   * Mellanox(R) BlueField SmartNIC MT416842 (2x25G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:a2d2
       * Firmware version: 18.24.0328

   * SoC ARM cores running OS:

     * CentOS Linux release 7.4.1708 (AltArch)
     * MLNX_OFED 4.4-2.5.9.0

  * DPDK application running on ARM cores inside SmartNIC

* Power 9 platforms with Mellanox(R) NICs combinations

   * CPU:

     * POWER9 2.2 (pvr 004e 1202) 2300MHz

   * OS:

     * Ubuntu 18.04.1 LTS (Bionic Beaver)

   * NICs:

     * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1017
       * Firmware version: 16.23.1020

   * OFED:

      * MLNX_OFED_LINUX-4.5-1.0.1.0
558