..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2018 The DPDK contributors

DPDK Release 18.02
==================

.. **Read this first.**

   The text in the sections below explains how to update the release notes.

   Use proper spelling, capitalization and punctuation in all sections.

   Variable and config names should be quoted as fixed width text:
   ``LIKE_THIS``.

   Build the docs and view the output file to ensure the changes are correct::

      make doc-guides-html

      xdg-open build/doc/html/guides/rel_notes/release_18_02.html


New Features
------------

.. This section should contain new features added in this release. Sample
   format:

   * **Add a title in the past tense with a full stop.**

     Add a short 1-2 sentence description in the past tense. The description
     should be enough to allow someone scanning the release notes to
     understand the new feature.

     If the feature adds a lot of sub-features you can use a bullet list like
     this:

     * Added feature foo to do something.
     * Enhanced feature bar to do something else.

     Refer to the previous release notes for examples.

     This section is a comment. Do not overwrite or remove it.
     Also, make sure to start the actual text at the margin.
     =========================================================

* **Added function to allow releasing internal EAL resources on exit.**

  During ``rte_eal_init()`` EAL allocates memory from hugepages to enable its
  core libraries to perform their tasks. The ``rte_eal_cleanup()`` function
  releases these resources, ensuring that no hugepage memory is leaked. It is
  expected that all DPDK applications call ``rte_eal_cleanup()`` before
  exiting. Not calling this function could result in leaking hugepages, leading
  to failure during initialization of secondary processes.

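  A minimal sketch of the intended call sequence, with error handling kept
  short:

  .. code-block:: c

     #include <rte_eal.h>

     int
     main(int argc, char **argv)
     {
         if (rte_eal_init(argc, argv) < 0)
             return -1;

         /* ... application setup and packet processing ... */

         /* Release hugepage memory and other internal EAL resources so
          * that nothing is leaked on exit. */
         rte_eal_cleanup();
         return 0;
     }
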
* **Added igb, ixgbe and i40e Ethernet drivers to support RSS with the flow API.**

  Added support for the igb, ixgbe and i40e NICs to use the existing RSS
  configuration through the ``rte_flow`` API.

  Also enabled queue region configuration using the ``rte_flow`` API for i40e.

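  As a rough illustration, a rule that spreads all ingress traffic over four
  queues could be created as below. This is a sketch only: the hypothetical
  ``create_rss_flow()`` helper assumes an already-configured port and the
  ``struct rte_flow_action_rss`` layout of this release (``rss_conf``, ``num``
  and a flexible ``queue[]`` array), which changed in later releases.

  .. code-block:: c

     #include <stdlib.h>
     #include <string.h>

     #include <rte_common.h>
     #include <rte_ethdev.h>
     #include <rte_flow.h>

     /* Spread all ingress traffic on an already-started port over
      * queues 0-3 (the queue numbers are example values). */
     static struct rte_flow *
     create_rss_flow(uint16_t port_id)
     {
         static const uint16_t queues[] = { 0, 1, 2, 3 };
         static struct rte_eth_rss_conf rss_conf = {
             .rss_key = NULL,      /* keep the device default hash key */
             .rss_hf = ETH_RSS_IP,
         };
         struct rte_flow_attr attr = { .ingress = 1 };
         struct rte_flow_item pattern[] = {
             { .type = RTE_FLOW_ITEM_TYPE_ETH },
             { .type = RTE_FLOW_ITEM_TYPE_END },
         };
         struct rte_flow_error error;
         struct rte_flow *flow;

         /* The RSS action ends in a flexible queue[] array, so allocate
          * room for the queue list behind the fixed fields. */
         struct rte_flow_action_rss *rss =
             malloc(sizeof(*rss) + sizeof(queues));
         if (rss == NULL)
             return NULL;
         rss->rss_conf = &rss_conf;
         rss->num = RTE_DIM(queues);
         memcpy(rss->queue, queues, sizeof(queues));

         struct rte_flow_action actions[] = {
             { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = rss },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };

         flow = rte_flow_create(port_id, &attr, pattern, actions, &error);
         if (flow == NULL)
             free(rss);
         return flow;
     }
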
* **Updated i40e driver to support PPPoE/PPPoL2TP.**

  Updated the i40e PMD to support PPPoE/PPPoL2TP using PPPoE/PPPoL2TP
  profiles, which can be programmed through the dynamic device
  personalization (DDP) process.

* **Added MAC loopback support for i40e.**

  Added MAC loopback support for i40e in order to support the test tasks
  requested by users. It sets up a ``Tx -> Rx`` loopback link according to
  the device configuration.

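  A minimal sketch of requesting loopback through the generic device
  configuration; treat the exact non-zero ``lpbk_mode`` value as an
  assumption to verify against the i40e documentation:

  .. code-block:: c

     #include <string.h>
     #include <rte_ethdev.h>

     static int
     enable_mac_loopback(uint16_t port_id)
     {
         struct rte_eth_conf port_conf;

         memset(&port_conf, 0, sizeof(port_conf));
         /* A non-zero value requests a loopback mode; its meaning is
          * driver defined (assumed here: 1 = MAC loopback on i40e). */
         port_conf.lpbk_mode = 1;

         /* One Rx queue and one Tx queue for a simple Tx -> Rx test. */
         return rte_eth_dev_configure(port_id, 1, 1, &port_conf);
     }
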
* **Added support for run-time determination of the number of queues per i40e VF.**

  The number of queues per VF is determined by its host PF. If the PCI address
  of an i40e PF is ``aaaa:bb.cc``, the number of queues per VF can be
  configured with an EAL parameter of the form ``-w aaaa:bb.cc,queue-num-per-vf=n``.
  The value n can be 1, 2, 4, 8 or 16. If no such parameter is configured, the
  number of queues per VF defaults to 4.

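  For example, to start testpmd with 8 queues per VF on that PF (a
  hypothetical invocation; adjust the application path, cores and PCI
  address to your setup)::

     ./build/app/testpmd -l 0-3 -n 4 -w aaaa:bb.cc,queue-num-per-vf=8 -- -i
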
* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Enabled compilation as a plugin, removing the mandatory dependency on
    rdma-core. With this mode the rdma-core libraries are loaded at run time,
    and only when a Mellanox device is in use, so the PMD can be enabled in
    binaries without requiring every end user to install rdma-core.
  * Improved multi-segment packet performance.
  * Changed driver name to use the PCI address to be compatible with OVS-DPDK APIs.
  * Added extended statistics for physical port packet/byte counters.
  * Converted to the new offloads API.
  * Added support for the device removal check operation.

* **Updated mlx4 driver.**

  Updated the mlx4 driver including the following changes:

  * Enabled compilation as a plugin, removing the mandatory dependency on
    rdma-core. With this mode the rdma-core libraries are loaded at run time,
    and only when a Mellanox device is in use, so the PMD can be enabled in
    binaries without requiring every end user to install rdma-core.
  * Improved data path performance.
  * Converted to the new offloads API.
  * Added support for the device removal check operation.

* **Added NVGRE and UDP tunnels support in Solarflare network PMD.**

  Added support for NVGRE, VXLAN and GENEVE tunnels:

  * Added support for UDP tunnel port configuration, as sketched below.
  * Added tunneled packet classification.
  * Added inner checksum offload.

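  The UDP tunnel port configuration mentioned above uses the generic ethdev
  API, roughly as follows; the port numbers shown are the IANA defaults and
  are used here as example values:

  .. code-block:: c

     #include <rte_ethdev.h>

     static int
     add_tunnel_ports(uint16_t port_id)
     {
         struct rte_eth_udp_tunnel vxlan = {
             .udp_port = 4789,                  /* IANA default VXLAN port */
             .prot_type = RTE_TUNNEL_TYPE_VXLAN,
         };
         struct rte_eth_udp_tunnel geneve = {
             .udp_port = 6081,                  /* IANA default GENEVE port */
             .prot_type = RTE_TUNNEL_TYPE_GENEVE,
         };

         if (rte_eth_dev_udp_tunnel_port_add(port_id, &vxlan) < 0)
             return -1;
         return rte_eth_dev_udp_tunnel_port_add(port_id, &geneve);
     }
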
* **Added AVF (Adaptive Virtual Function) net PMD.**

  Added a new net PMD called AVF (Adaptive Virtual Function), which supports
  Intel® Ethernet Adaptive Virtual Function (AVF) with features such as:

  * Basic Rx/Tx burst
  * SSE vectorized Rx/Tx burst
  * Promiscuous mode
  * MAC/VLAN offload
  * Checksum offload
  * TSO offload
  * Jumbo frame and MTU setting
  * RSS configuration
  * Stats
  * Rx/Tx descriptor status
  * Link status update/event

* **Added feature support for live migration from vhost-net to vhost-user.**

  Added support in vhost-user for the features needed to make live migration
  from vhost-net to vhost-user possible. The features include:

  * ``VIRTIO_F_ANY_LAYOUT``
  * ``VIRTIO_F_EVENT_IDX``
  * ``VIRTIO_NET_F_GUEST_ECN``, ``VIRTIO_NET_F_HOST_ECN``
  * ``VIRTIO_NET_F_GUEST_UFO``, ``VIRTIO_NET_F_HOST_UFO``
  * ``VIRTIO_NET_F_GSO``

  Also added ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature support to the virtio
  PMD. In a scenario where the vhost backend doesn't have the ability to
  generate RARP packets, a VM running the virtio PMD can still be live
  migrated if the ``VIRTIO_NET_F_GUEST_ANNOUNCE`` feature is negotiated.

* **Updated the AESNI-MB PMD.**

  The AESNI-MB PMD has been updated with additional support for:

  * AES-CCM algorithm, as sketched below.

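  As a rough illustration, a symmetric AEAD transform for AES-CCM might be
  filled in as below before creating a session. The key, nonce and tag sizes
  are example values, and ``IV_OFFSET`` follows the common convention of
  storing the nonce right after the crypto operation:

  .. code-block:: c

     #include <rte_crypto.h>
     #include <rte_cryptodev.h>

     /* Common layout: the nonce is placed right after the crypto op. */
     #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
                        sizeof(struct rte_crypto_sym_op))

     static uint8_t ccm_key[16]; /* replace with a real 128-bit key */

     static const struct rte_crypto_sym_xform ccm_xform = {
         .type = RTE_CRYPTO_SYM_XFORM_AEAD,
         .aead = {
             .op = RTE_CRYPTO_AEAD_OP_ENCRYPT,
             .algo = RTE_CRYPTO_AEAD_AES_CCM,
             .key = { .data = ccm_key, .length = sizeof(ccm_key) },
             .iv = { .offset = IV_OFFSET, .length = 12 }, /* CCM nonce */
             .digest_length = 16, /* CBC-MAC tag length, example value */
             .aad_length = 8,     /* example AAD length */
         },
     };
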
* **Updated the DPAA_SEC crypto driver to support rte_security.**

  Updated the ``dpaa_sec`` crypto PMD to support ``rte_security`` lookaside
  protocol offload for IPsec.

* **Added Wireless Base Band Device (bbdev) abstraction.**

  The Wireless Baseband Device library is an acceleration abstraction
  framework for 3GPP Layer 1 processing functions that provides a common
  programming interface for seamless operation on integrated or discrete
  hardware accelerators, or using optimized software libraries for signal
  processing.

  The current release only supports 3GPP CRC, Turbo Coding and Rate
  Matching operations, as specified in 3GPP TS 36.212.

  See the :doc:`../prog_guide/bbdev` programmer's guide for more details.

* **Added new eventdev Ordered Packet Distribution Library (OPDL) PMD.**

  The OPDL (Ordered Packet Distribution Library) eventdev is a specific
  implementation of the eventdev API. It is particularly suited to packet
  processing workloads that have high throughput and low latency requirements.
  All packets follow the same path through the device, and the order in which
  packets follow it is determined by the order in which queues are set up.
  Events are left on the ring until they are transmitted. As a result packets
  do not go out of order.

  With this change, applications can use the OPDL PMD via the eventdev API.

* **Added new pipeline use case for dpdk-test-eventdev application.**

  Added a new "pipeline" use case for the ``dpdk-test-eventdev`` application.
  The pipeline case can be used to simulate various stages in a real-world
  application, from packet receive to transmit, while maintaining the packet
  ordering. It can also be used to measure the performance of the event device
  across the stages of the pipeline.

  The pipeline use case has been made generic to work with all event
  devices based on their capabilities.

* **Updated Eventdev sample application to support event devices based on capability.**

  Updated the Eventdev pipeline sample application to support various types of
  pipelines based on the capabilities of the attached event and Ethernet
  devices. Also, renamed the application from the software PMD specific
  ``eventdev_pipeline_sw_pmd`` to the more generic ``eventdev_pipeline``.

* **Added Rawdev, a generic device support library.**

  The Rawdev library provides support for integrating any generic device type
  with the DPDK framework. Generic devices are those which do not have one of
  DPDK's pre-defined types (for example Ethernet, crypto or event).

  A set of northbound APIs has been defined which encompasses a generic set of
  operations, allowing applications to interact with devices using opaque
  structures/buffers. Also, southbound APIs provide a means of integrating
  devices either as part of a physical bus (PCI, FSLMC, etc.) or through
  ``vdev``.

  See the :doc:`../prog_guide/rawdev` programmer's guide for more details.

* **Added new multi-process communication channel.**

  Added a generic channel in EAL for multi-process (primary/secondary)
  communication. Consumers of this channel need to register an action,
  identified by an action name, to respond to messages as they are received;
  the actions are executed in the context of a new dedicated thread for this
  channel. The new APIs are listed below, with a usage sketch after the list:

  * ``rte_mp_register`` and ``rte_mp_unregister`` are for action (un)registration.
  * ``rte_mp_sendmsg`` is for sending a message without blocking for a response.
  * ``rte_mp_request`` is for sending a request message and will block until
    it gets a reply message, which is sent from the peer by ``rte_mp_reply``.

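  In the sketch, one process registers an action and any peer can then
  address a message to that name. This is illustrative only; the layout of
  ``struct rte_mp_msg`` and the exact callback and request signatures are
  assumptions to verify against ``rte_eal.h`` in this release.

  .. code-block:: c

     #include <stdio.h>
     #include <string.h>

     #include <rte_eal.h>

     /* Runs in the channel's dedicated thread when "my_app_action"
      * arrives. The callback signature is an assumption to check
      * against rte_eal.h. */
     static int
     my_app_action(const struct rte_mp_msg *msg, const void *peer)
     {
         (void)peer; /* would be passed to rte_mp_reply() for requests */
         printf("received %s\n", msg->name);
         return 0;
     }

     static int
     setup_and_ping(void)
     {
         struct rte_mp_msg msg;

         if (rte_mp_register("my_app_action", my_app_action) < 0)
             return -1;

         /* Fire-and-forget message to peers registered for this name. */
         memset(&msg, 0, sizeof(msg));
         snprintf(msg.name, sizeof(msg.name), "my_app_action");
         return rte_mp_sendmsg(&msg);
     }
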
* **Added GRO support for VxLAN-tunneled packets.**

  Added GRO support for VxLAN-tunneled packets. Supported VxLAN packets
  must contain an outer IPv4 header and inner TCP/IPv4 headers. VxLAN
  GRO doesn't check if input packets have correct checksums and doesn't
  update checksums for output packets. Additionally, it assumes the
  packets are complete (i.e., ``MF==0 && frag_off==0``), when IP
  fragmentation is possible (i.e., ``DF==0``).

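  A minimal sketch of applying GRO to a received burst using the lightweight
  ``rte_gro_reassemble_burst()`` mode; the helper name and the flow sizing
  are illustrative values:

  .. code-block:: c

     #include <rte_ethdev.h>
     #include <rte_gro.h>
     #include <rte_lcore.h>
     #include <rte_mbuf.h>

     #define MAX_BURST 32

     static uint16_t
     rx_and_gro(uint16_t port_id, struct rte_mbuf **pkts)
     {
         struct rte_gro_param gro_param = {
             .gro_types = RTE_GRO_IPV4_VXLAN_TCP_IPV4 | RTE_GRO_TCP_IPV4,
             .max_flow_num = 4,            /* example sizing */
             .max_item_per_flow = MAX_BURST,
             .socket_id = rte_socket_id(),
         };
         uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, MAX_BURST);

         /* Merge the burst; returns the packet count after GRO. */
         return rte_gro_reassemble_burst(pkts, nb_rx, &gro_param);
     }
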
* **Increased default Rx and Tx ring size in sample applications.**

  Increased the default ``RX_RING_SIZE`` and ``TX_RING_SIZE`` to 1024 entries
  in testpmd and the sample applications to give better performance in the
  general case. Users should experiment with various Rx and Tx ring sizes
  for their specific application to get the best performance.

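  The defaults are applied where the applications set up their queues, along
  these lines (a representative sketch rather than the exact code of any one
  sample):

  .. code-block:: c

     #include <rte_ethdev.h>
     #include <rte_mempool.h>

     #define RX_RING_SIZE 1024
     #define TX_RING_SIZE 1024

     static int
     setup_queues(uint16_t port_id, struct rte_mempool *mbuf_pool)
     {
         int socket = rte_eth_dev_socket_id(port_id);

         if (rte_eth_rx_queue_setup(port_id, 0, RX_RING_SIZE, socket,
                                    NULL, mbuf_pool) < 0)
             return -1;
         return rte_eth_tx_queue_setup(port_id, 0, TX_RING_SIZE, socket,
                                       NULL);
     }
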
* **Added new DPDK build system using the tools "meson" and "ninja" [EXPERIMENTAL].**

  Added support for building DPDK using ``meson`` and ``ninja``, which gives
  additional features, such as automatic build-time configuration, over the
  current build system using ``make``. For instructions on how to do a DPDK build
  using the new system, see the instructions in ``doc/build-sdk-meson.txt``.

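  The basic workflow looks like this; see ``doc/build-sdk-meson.txt`` for
  the full, authoritative steps and options::

     meson build        # configure the build in the "build" directory
     ninja -C build     # compile
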
  .. note::

      This new build system support is incomplete at this point and is added
      as experimental in this release. The existing build system using ``make``
      is unaffected by these changes, and can continue to be used for this
      and subsequent releases until such time as its deprecation is announced.


Shared Library Versions
-----------------------

.. Update any library version updated in this release and prepend with a ``+``
   sign, like this:

     librte_acl.so.2
   + librte_cfgfile.so.2
     librte_cmdline.so.2

   This section is a comment. Do not overwrite or remove it.
   =========================================================


The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
   + librte_bbdev.so.1
     librte_bitratestats.so.2
     librte_bus_dpaa.so.1
     librte_bus_fslmc.so.1
     librte_bus_pci.so.1
     librte_bus_vdev.so.1
     librte_cfgfile.so.2
     librte_cmdline.so.2
     librte_cryptodev.so.4
     librte_distributor.so.1
     librte_eal.so.6
     librte_ethdev.so.8
     librte_eventdev.so.3
     librte_flow_classify.so.1
     librte_gro.so.1
     librte_gso.so.1
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_latencystats.so.1
     librte_lpm.so.2
     librte_mbuf.so.3
     librte_mempool.so.3
     librte_meter.so.1
     librte_metrics.so.1
     librte_net.so.1
     librte_pci.so.1
     librte_pdump.so.2
     librte_pipeline.so.3
     librte_pmd_bnxt.so.2
     librte_pmd_bond.so.2
     librte_pmd_i40e.so.2
     librte_pmd_ixgbe.so.2
     librte_pmd_ring.so.2
     librte_pmd_softnic.so.1
     librte_pmd_vhost.so.2
     librte_port.so.3
     librte_power.so.1
   + librte_rawdev.so.1
     librte_reorder.so.1
     librte_ring.so.1
     librte_sched.so.1
     librte_security.so.1
     librte_table.so.3
     librte_timer.so.1
     librte_vhost.so.3



Tested Platforms
----------------

.. This section should contain a list of platforms that were tested with this
   release.

   The format is:

   * <vendor> platform with <vendor> <type of devices> combinations

     * List of CPU
     * List of OS
     * List of devices
     * Other relevant details...

   This section is a comment. Do not overwrite or remove it.
   Also, make sure to start the actual text at the margin.
   =========================================================

* Intel(R) platforms with Intel(R) NICs combinations

   * CPU

     * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
     * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
     * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
     * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
     * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
     * Intel(R) Xeon(R) CPU E5-2699 v4 @ 2.20GHz
     * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
     * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
     * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz
     * Intel(R) Xeon(R) Platinum 8180 CPU @ 2.50GHz

   * OS:

     * CentOS 7.2
     * Fedora 25
     * Fedora 26
     * Fedora 27
     * FreeBSD 11
     * Red Hat Enterprise Linux Server release 7.3
     * SUSE Enterprise Linux 12
     * Wind River Linux 8
     * Ubuntu 14.04
     * Ubuntu 16.04
     * Ubuntu 16.10
     * Ubuntu 17.10

   * NICs:

     * Intel(R) 82599ES 10 Gigabit Ethernet Controller

       * Firmware version: 0x61bf0001
       * Device id (pf/vf): 8086:10fb / 8086:10ed
       * Driver version: 5.2.3 (ixgbe)

     * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

       * Firmware version: 0x800003e7
       * Device id (pf/vf): 8086:15ad / 8086:15a8
       * Driver version: 4.4.6 (ixgbe)

     * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

       * Firmware version: 6.01 0x80003221
       * Device id (pf/vf): 8086:1572 / 8086:154c
       * Driver version: 2.4.3 (i40e)

     * Intel(R) Corporation Ethernet Connection X722 for 10GBASE-T

       * Firmware version: 6.01 0x80003221
       * Device id (pf/vf): 8086:37d2 / 8086:154c
       * Driver version: 2.4.3 (i40e)

     * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

       * Firmware version: 6.01 0x80003221
       * Device id (pf/vf): 8086:158b / 8086:154c
       * Driver version: 2.4.3 (i40e)

     * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2x40G)

       * Firmware version: 6.01 0x8000321c
       * Device id (pf/vf): 8086:1583 / 8086:154c
       * Driver version: 2.4.3 (i40e)

     * Intel(R) Corporation I350 Gigabit Network Connection

       * Firmware version: 1.63, 0x80000dda
       * Device id (pf/vf): 8086:1521 / 8086:1520
       * Driver version: 5.3.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

   * CPU:

     * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
     * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
     * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
     * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
     * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
     * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

   * OS:

     * Red Hat Enterprise Linux Server release 7.5 Beta (Maipo)
     * Red Hat Enterprise Linux Server release 7.4 (Maipo)
     * Red Hat Enterprise Linux Server release 7.3 (Maipo)
     * Red Hat Enterprise Linux Server release 7.2 (Maipo)
     * Ubuntu 17.10
     * Ubuntu 16.10
     * Ubuntu 16.04

   * MLNX_OFED: 4.2-1.0.0.0
   * MLNX_OFED: 4.3-0.1.6.0

   * NICs:

     * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1007
       * Firmware version: 2.42.5000

     * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 100G MCX415A-CCAT (1x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.21.1000 and above

     * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.21.1000 and above

     * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1017
       * Firmware version: 16.21.1000 and above

     * Mellanox(R) ConnectX(R)-5 Ex EN 100G MCX516A-CDAT (2x100G)

       * Host interface: PCI Express 4.0 x16
       * Device ID: 15b3:1019
       * Firmware version: 16.21.1000 and above

* ARM platforms with Mellanox(R) NICs combinations

   * CPU:

     * Qualcomm ARM 1.1 2500MHz

   * OS:

     * Ubuntu 16.04

   * MLNX_OFED: 4.2-1.0.0.0

   * NICs:

     * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.21.1000

     * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1017
       * Firmware version: 16.21.1000