..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2017 The DPDK contributors

DPDK Release 17.02
==================

New Features
------------

* **Added support for representing buses in EAL.**

  The ``rte_bus`` structure was introduced into the EAL. This allows
  devices to be represented by the buses they are connected to. A new bus can
  be added to DPDK by extending the ``rte_bus`` structure and implementing its
  scan and probe functions. Once a new bus is registered using the provided
  APIs, new devices can be detected and initialized through the bus scan and
  probe callbacks.

  With this change, devices other than PCI or VDEV type can be represented
  in the DPDK framework.

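  As an illustrative sketch only (the callback bodies are placeholders, and
  the exact ``rte_bus`` fields and registration macro should be checked
  against the release headers), a new bus might be plugged in roughly like
  this:

```c
#include <rte_bus.h>

/* Placeholder scan callback: enumerate devices sitting on this bus and
 * add them to the bus's internal device list. */
static int
my_bus_scan(void)
{
	return 0;
}

/* Placeholder probe callback: match the scanned devices against the
 * drivers registered for this bus and initialize them. */
static int
my_bus_probe(void)
{
	return 0;
}

static struct rte_bus my_bus = {
	.scan  = my_bus_scan,
	.probe = my_bus_probe,
};

/* Constructor-time registration; EAL later walks all registered buses
 * and invokes their scan and probe callbacks. */
RTE_REGISTER_BUS(my_bus, my_bus);
```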
* **Added generic EAL API for I/O device memory read/write operations.**

  This API introduces 8-bit, 16-bit, 32-bit and 64-bit I/O device
  memory read/write operations, along with "relaxed" versions.

  Weakly-ordered architectures like ARM need an additional I/O barrier for
  device memory read/write access over the PCI bus. By introducing this EAL
  abstraction for I/O device memory read/write access, drivers can access
  I/O device memory in an architecture-agnostic manner. The relaxed versions
  do not add the extra I/O memory barrier, which is useful when accessing
  the device registers of integrated controllers, where access is implicitly
  strongly ordered with respect to memory.

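  As a small sketch of the distinction (the register offsets and the
  ``regs`` BAR pointer are hypothetical, not from any real device):

```c
#include <stdint.h>
#include <rte_io.h>

/* Ordered write: includes the I/O barrier that weakly-ordered CPUs
 * (e.g. ARM) require for device memory behind PCI. */
static void
ring_doorbell(volatile void *regs, uint32_t tail)
{
	rte_write32(tail, (volatile uint8_t *)regs + 0x10);
}

/* Relaxed read: no extra barrier; suitable for register files of
 * integrated controllers that are already strongly ordered. */
static uint32_t
read_status_relaxed(volatile void *regs)
{
	return rte_read32_relaxed((volatile uint8_t *)regs + 0x14);
}
```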
* **Added generic flow API (rte_flow).**

  This API provides a generic means to configure hardware to match specific
  ingress or egress traffic, alter its behavior and query related counters
  according to any number of user-defined rules.

  In order to expose a single interface with unambiguous behavior that is
  common to all poll-mode drivers (PMDs), the ``rte_flow`` API is slightly
  higher-level than the legacy filtering framework, which it encompasses and
  supersedes (including all functions and filter types).

  See the :doc:`../prog_guide/ethdev/flow_offload` documentation for more information.

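  As a minimal sketch, a rule that steers all ingress IPv4 traffic to Rx
  queue 1 could look like this (a real rule would also fill the item
  ``spec``/``mask`` fields to narrow the match; in this release port ids
  are 8-bit, widened in later releases):

```c
#include <rte_flow.h>

static struct rte_flow *
make_ipv4_to_queue_rule(uint8_t port_id)
{
	struct rte_flow_attr attr = { .ingress = 1 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4 },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_queue queue = { .index = 1 };
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	struct rte_flow_error error;

	/* Returns a flow handle, or NULL with 'error' filled in. */
	return rte_flow_create(port_id, &attr, pattern, actions, &error);
}
```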
* **Added firmware version get API.**

  Added a new function ``rte_eth_dev_fw_version_get()`` to fetch the firmware
  version for a given device.

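  A short usage sketch (the buffer size of 64 bytes is an arbitrary choice;
  a non-zero return indicates failure or that a larger buffer is needed):

```c
#include <stdio.h>
#include <rte_ethdev.h>

static void
print_fw_version(uint8_t port_id)
{
	char fw_version[64];

	/* Fetch the firmware version string of the given port. */
	if (rte_eth_dev_fw_version_get(port_id, fw_version,
				       sizeof(fw_version)) == 0)
		printf("port %u firmware: %s\n", port_id, fw_version);
}
```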
* **Added APIs for MACsec offload support to the ixgbe PMD.**

  Six new APIs have been added to the ixgbe PMD for MACsec offload support.
  The declarations for the APIs can be found in ``rte_pmd_ixgbe.h``.

* **Added support for I219 NICs.**

  Added support for Intel(R) I219 1GbE NICs.

* **Added VF Daemon (VFD) for i40e. - EXPERIMENTAL**

  This is an EXPERIMENTAL feature to enhance the capability of the DPDK PF,
  as many VF management features are not currently supported by the kernel PF
  driver. Some new private APIs are implemented directly in the PMD without an
  abstraction layer, and can be used directly by applications that need them.

  The new APIs to control VFs directly from the PF include:

  * Set VF MAC anti-spoofing.
  * Set VF VLAN anti-spoofing.
  * Set TX loopback.
  * Set VF unicast promiscuous mode.
  * Set VF multicast promiscuous mode.
  * Set VF MTU.
  * Get/reset VF stats.
  * Set VF MAC address.
  * Set VF VLAN stripping.
  * Set VF VLAN insertion.
  * Set VF broadcast mode.
  * Set VF VLAN tag.
  * Set VF VLAN filter.

  VFD also includes VF-to-PF mailbox message management from an application.
  When the PF receives mailbox messages from a VF, the PF should call the
  callback provided by the application to determine whether the messages are
  permitted to be processed.

  As an EXPERIMENTAL feature, please be aware it can be changed or even
  removed without prior notice.

* **Updated the i40e base driver.**

  Updated the i40e base driver, including the following changes:

  * Replace existing legacy ``memcpy()`` calls with ``i40e_memcpy()`` calls.
  * Use ``BIT()`` macro instead of bit fields.
  * Add clear all WoL filters implementation.
  * Add broadcast promiscuous control per VLAN.
  * Remove unused ``X722_SUPPORT`` and ``I40E_NDIS_SUPPORT`` macros.

* **Updated the enic driver.**

  * Set new Rx checksum flags in mbufs to indicate unknown, good or bad checksums.
  * Fix set/remove of MAC addresses. Allow up to 64 addresses per device.
  * Enable TSO on outer headers.

* **Added Solarflare libefx-based network PMD.**

  Added a new network PMD which supports the Solarflare SFN7xxx and SFN8xxx
  families of 10/40 Gbps adapters.

* **Updated the mlx4 driver.**

  * Addressed a few bugs.

* **Added support for Mellanox ConnectX-5 adapters (mlx5).**

  Added support for the Mellanox ConnectX-5 family of 10/25/40/50/100 Gbps
  adapters to the existing mlx5 PMD.

* **Updated the mlx5 driver.**

  * Improve Tx performance by using vector logic.
  * Improve RSS balancing when the number of queues is not a power of two.
  * Generic flow API support for Ethernet, IPv4, IPv6, UDP, TCP, VLAN and
    VXLAN pattern items with DROP and QUEUE actions.
  * Support for extended statistics.
  * Addressed several data path bugs.
  * As of MLNX_OFED 4.0-1.0.1.0, the Toeplitz RSS hash function is no longer
    symmetric, for consistency with other PMDs.

* **Added virtio-user with vhost-kernel as another exception path.**

  Previously, we upstreamed a virtual device, virtio-user, with vhost-user as
  the backend, as a way of enabling IPC (Inter-Process Communication) and user
  space container networking.

  Virtio-user with vhost-kernel as the backend is a solution for the exception
  path, such as KNI, which exchanges packets with the kernel networking stack.
  This solution is very promising in terms of:

  * Maintenance: vhost and vhost-net (kernel) are upstreamed and extensively
    used kernel modules.
  * Features: vhost-net is designed to be a networking solution, and has
    many networking-related features, such as multi-queue, TSO and multi-seg
    mbuf support.
  * Performance: similar to KNI, this solution uses one or more
    kthreads to send/receive packets to/from user space DPDK applications,
    which has little impact on the user space polling thread (except that
    it might enter kernel space to wake up those kthreads if
    necessary).

* **Added virtio Rx interrupt support.**

  Added a feature to enable Rx interrupt mode for virtio PCI net devices that
  are bound to VFIO (noiommu mode) and driven by the virtio PMD.

  With this feature, the virtio PMD can switch between polling mode and
  interrupt mode, to achieve the best performance while also saving
  power. It works on both legacy and modern virtio devices. In this mode,
  each ``rxq`` is mapped to a dedicated MSI-X interrupt.

  See the :ref:`Virtio Interrupt Mode <virtio_interrupt_mode>` documentation
  for more information.

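  A hedged sketch of the mode switch, using the generic ethdev Rx interrupt
  APIs (the idle-detection policy and the blocking step are left to the
  application, e.g. via ``rte_epoll_wait()`` on the queue's event fd):

```c
#include <rte_ethdev.h>

/* Switch one Rx queue from busy polling to interrupt-driven wakeup:
 * arm the queue interrupt, block until traffic arrives, then disarm
 * it and resume polling. */
static void
wait_for_traffic(uint8_t port_id, uint16_t queue_id)
{
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	/* ... block on the queue's interrupt event here ... */
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
}
```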
* **Added ARMv8 crypto PMD.**

  A new crypto PMD has been added, which provides combined mode cryptographic
  operations optimized for ARMv8 processors. The driver can be used to enhance
  performance in processing chained operations such as cipher + HMAC.

* **Updated the QAT PMD.**

  The QAT PMD has been updated with additional support for:

  * The DES algorithm.
  * Scatter-gather lists (SGL).

* **Updated the AESNI MB PMD.**

  * The Intel(R) Multi Buffer Crypto for IPsec library used in the
    AESNI MB PMD has been moved to a new repository on GitHub.
  * Support has been added for single operations (cipher only and
    authentication only).

* **Updated the AES-NI GCM PMD.**

  The AES-NI GCM PMD was migrated from the Multi Buffer library to the ISA-L
  library. The migration entailed adding additional support for:

  * The GMAC algorithm.
  * 256-bit cipher keys.
  * Session-less mode.
  * Out-of-place processing.
  * Scatter-gather support for chained mbufs (out-of-place only; the
    destination mbuf must be contiguous).

* **Added crypto performance test application.**

  Added a new performance test application for measuring the performance
  of the PMDs available in the crypto tree.

* **Added Elastic Flow Distributor library (rte_efd).**

  Added a new library which uses perfect hashing to determine a target/value
  for a given incoming flow key.

  The library does not store the key itself for lookup operations, and
  therefore lookup performance is not dependent on the key size. Also, the
  target/value can be any arbitrary value (8 bits by default). Finally, the
  storage requirement is much smaller than that of a hash-based flow table,
  so it can better fit in the CPU cache and scale to millions of flow
  keys.

  See the :doc:`../prog_guide/efd_lib` documentation in
  the Programmer's Guide, for more information.

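  A minimal usage sketch (the table name, sizes and the single-socket
  parameters here are arbitrary example values):

```c
#include <stdio.h>
#include <rte_efd.h>

static void
efd_example(void)
{
	struct rte_efd_table *table;
	uint32_t key = 0x0a000001;	/* e.g. an IPv4 address as the flow key */
	efd_value_t value;

	/* Up to 1024 rules, 4-byte keys, lookups on socket 0,
	 * inserts computed on socket 0. */
	table = rte_efd_create("flow_targets", 1024, sizeof(key),
			       1 /* online socket bitmask */,
			       0 /* offline socket id */);
	if (table == NULL)
		return;

	/* Only the key-to-value mapping is stored, never the key itself,
	 * so lookups cannot report a miss: keys that were never inserted
	 * return an arbitrary value. */
	rte_efd_update(table, 0, &key, 3);

	value = rte_efd_lookup(table, 0, &key);
	printf("target for key: %u\n", value);

	rte_efd_free(table);
}
```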

Resolved Issues
---------------

Drivers
~~~~~~~

* **net/virtio: Fixed multiple process support.**

  Fixed a few regressions introduced in recent releases that broke the virtio
  multiple process support.


Examples
~~~~~~~~

* **examples/ethtool: Fixed crash with non-PCI devices.**

  Fixed an issue where querying a non-PCI device dereferenced non-existent
  PCI data, resulting in a segmentation fault.


API Changes
-----------

* **Moved five APIs for VF management from the ethdev to the ixgbe PMD.**

  The following five APIs for VF management from the PF have been removed from
  the ethdev, renamed, and added to the ixgbe PMD::

     rte_eth_dev_set_vf_rate_limit()
     rte_eth_dev_set_vf_rx()
     rte_eth_dev_set_vf_rxmode()
     rte_eth_dev_set_vf_tx()
     rte_eth_dev_set_vf_vlan_filter()

  The APIs have been renamed as follows::

     rte_pmd_ixgbe_set_vf_rate_limit()
     rte_pmd_ixgbe_set_vf_rx()
     rte_pmd_ixgbe_set_vf_rxmode()
     rte_pmd_ixgbe_set_vf_tx()
     rte_pmd_ixgbe_set_vf_vlan_filter()

  The declarations for the APIs can be found in ``rte_pmd_ixgbe.h``.


Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
     librte_cfgfile.so.2
     librte_cmdline.so.2
     librte_cryptodev.so.2
     librte_distributor.so.1
     librte_eal.so.3
   + librte_ethdev.so.6
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_lpm.so.2
     librte_mbuf.so.2
     librte_mempool.so.2
     librte_meter.so.1
     librte_net.so.1
     librte_pdump.so.1
     librte_pipeline.so.3
     librte_pmd_bond.so.1
     librte_pmd_ring.so.2
     librte_port.so.3
     librte_power.so.1
     librte_reorder.so.1
     librte_ring.so.1
     librte_sched.so.1
     librte_table.so.2
     librte_timer.so.1
     librte_vhost.so.3


Tested Platforms
----------------

This release has been tested with the below list of CPU/device/firmware/OS.
Each section describes a different set of combinations.

* Intel(R) platforms with Mellanox(R) NICs combinations

   * Platform details

     * Intel(R) Xeon(R) CPU E5-2697 v2 @ 2.70GHz
     * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
     * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz

   * OS:

     * CentOS 7.0
     * Fedora 23
     * Fedora 24
     * FreeBSD 10.3
     * Red Hat Enterprise Linux 7.2
     * SUSE Enterprise Linux 12
     * Ubuntu 14.04 LTS
     * Ubuntu 15.10
     * Ubuntu 16.04 LTS
     * Wind River Linux 8

   * MLNX_OFED: 4.0-1.0.1.0

   * NICs:

     * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1007
       * Firmware version: 2.40.5030

     * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.18.1000

     * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.18.1000

     * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1017
       * Firmware version: 16.18.1000

     * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

       * Host interface: PCI Express 4.0 x16
       * Device ID: 15b3:1019
       * Firmware version: 16.18.1000

* IBM(R) Power8(R) with Mellanox(R) NICs combinations

   * Machine:

     * Processor: POWER8E (raw), AltiVec supported

       * type-model: 8247-22L
       * Firmware FW810.21 (SV810_108)

   * OS: Ubuntu 16.04 LTS PPC le

   * MLNX_OFED: 4.0-1.0.1.0

   * NICs:

     * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1013
       * Firmware version: 12.18.1000

     * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.18.1000

     * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

       * Host interface: PCI Express 3.0 x8
       * Device ID: 15b3:1015
       * Firmware version: 14.18.1000

     * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

       * Host interface: PCI Express 3.0 x16
       * Device ID: 15b3:1017
       * Firmware version: 16.18.1000

* Intel(R) platforms with Intel(R) NICs combinations

   * Platform details

     * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
     * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
     * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
     * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
     * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
     * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
     * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz

   * OS:

     * CentOS 7.2
     * Fedora 25
     * FreeBSD 11
     * Red Hat Enterprise Linux Server release 7.3
     * SUSE Enterprise Linux 12
     * Wind River Linux 8
     * Ubuntu 16.04
     * Ubuntu 16.10

   * NICs:

     * Intel(R) 82599ES 10 Gigabit Ethernet Controller

       * Firmware version: 0x61bf0001
       * Device id (pf/vf): 8086:10fb / 8086:10ed
       * Driver version: 4.0.1-k (ixgbe)

     * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

       * Firmware version: 0x800001cf
       * Device id (pf/vf): 8086:15ad / 8086:15a8
       * Driver version: 4.2.5 (ixgbe)

     * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

       * Firmware version: 5.05
       * Device id (pf/vf): 8086:1572 / 8086:154c
       * Driver version: 1.5.23 (i40e)

     * Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)

       * Firmware version: 5.05
       * Device id (pf/vf): 8086:1572 / 8086:154c
       * Driver version: 1.5.23 (i40e)

     * Intel(R) Ethernet Converged Network Adapter XL710-QDA1 (1x40G)

       * Firmware version: 5.05
       * Device id (pf/vf): 8086:1584 / 8086:154c
       * Driver version: 1.5.23 (i40e)

     * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

       * Firmware version: 5.05
       * Device id (pf/vf): 8086:1583 / 8086:154c
       * Driver version: 1.5.23 (i40e)

     * Intel(R) Corporation I350 Gigabit Network Connection

       * Firmware version: 1.48, 0x800006e7
       * Device id (pf/vf): 8086:1521 / 8086:1520
       * Driver version: 5.2.13-k (igb)
598