.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2017 The DPDK contributors

DPDK Release 17.11
==================

New Features
------------

* **Extended port_id range from uint8_t to uint16_t.**

  Increased the ``port_id`` range from 8 bits to 16 bits in order to support
  more than 256 ports in DPDK. All ethdev APIs which have ``port_id`` as a
  parameter have been changed.

* **Modified the return type of rte_eth_stats_reset.**

  Changed the return type of ``rte_eth_stats_reset`` from ``void`` to ``int``
  so that the caller can determine whether a device supports the operation
  and whether the operation was carried out.

* **Added a new driver for Marvell Armada 7k/8k devices.**

  Added the new ``mrvl`` net driver for Marvell Armada 7k/8k devices. See the
  :doc:`../nics/mvpp2` NIC guide for more details on this new driver.

* **Updated mlx4 driver.**

  Updated the mlx4 driver including the following changes:

  * Isolated mode (rte_flow) can now be enabled anytime, not only during
    initial device configuration.
  * Flow rules now support up to 4096 priority levels usable at will by
    applications.
  * Enhanced error messages to help debug invalid/unsupported flow rules.
  * Flow rules matching all multicast and promiscuous traffic are now allowed.
  * No more software restrictions on flow rules with the RSS action; their
    configuration is much more flexible.
  * Significantly reduced memory footprint for Rx and Tx queue objects.
  * While supported, UDP RSS is temporarily disabled due to a remaining issue
    with its support in the Linux kernel.
  * The new RSS implementation does not automatically spread traffic according
    to the inner packet of VXLAN frames anymore, only the outer one (like
    other PMDs).
  * Partial (Tx only) support for secondary processes was broken and had to be
    removed.
  * Refactored the driver to get rid of the dependency on the components
    provided by Mellanox OFED and instead rely on the current and public
    rdma-core package and Linux version from now on.
  * Removed the compile-time limitation on the number of device instances the
    PMD can support.

* **Updated mlx5 driver.**

  Updated the mlx5 driver including the following changes:

  * Enabled the PMD to run on top of the upstream Linux kernel and rdma-core
    libs, removing the dependency on specific Mellanox OFED libraries.
  * Improved PMD latency performance.
  * Improved PMD memory footprint.
  * Added support for vectorized Rx/Tx burst for ARMv8.
  * Added support for secondary processes.
  * Added support for flow counters.
  * Added support for Rx hardware timestamp offload.
  * Added support for device removal events.

* **Added SoftNIC PMD.**

  Added a new SoftNIC PMD. This virtual device provides applications with
  software fallback support for traffic management.

* **Added support for NXP DPAA Devices.**

  Added support for NXP's DPAA devices - LS104x series. This includes:

  * DPAA Bus driver
  * DPAA Mempool driver for supporting offloaded packet memory pool
  * DPAA PMD for DPAA devices

  See the :doc:`../nics/dpaa` document for more details on this new driver.

* **Updated support for Cavium OCTEONTX Device.**

  Updated support for Cavium's OCTEONTX device (CN83xx). This includes:

  * OCTEONTX Mempool driver for supporting offloaded packet memory pool
  * OCTEONTX Ethdev PMD
  * OCTEONTX Eventdev-Ethdev Rx adapter

  See the :doc:`../nics/octeontx` document for more details on this new driver.

* **Added PF support to the Netronome NFP PMD.**

  Added PF support to the Netronome NFP PMD. Previously the NFP PMD only
  supported VFs. PF support is limited to a basic DPDK port; there is no VF
  management yet.

  PF support comes with firmware upload support, which allows the PMD to
  work independently from the kernel netdev NFP drivers.

  NFP 4000 devices are now supported, in addition to the previous 6000
  devices.

* **Updated bnxt PMD.**

  Major enhancements include:

  * Support for Flow API
  * Support for Tx and Rx descriptor status functions

* **Added bus agnostic functions to cryptodev for PMD initialization.**

  Added new bus independent PMD assist functions
  ``rte_cryptodev_pmd_parse_input_args()``, ``rte_cryptodev_pmd_create()`` and
  ``rte_cryptodev_pmd_destroy()`` for drivers to manage creation and
  destruction of new device instances.

* **Updated QAT crypto PMD.**

  Added several performance enhancements:

  * Removed atomics from the internal queue pair structure.
  * Added coalesced writes to the HEAD CSR on response processing.
  * Added coalesced writes to the TAIL CSR on request processing.

  In addition, support was added for the AES CCM algorithm.

* **Updated the AESNI MB PMD.**

  The AESNI MB PMD has been updated with additional support for:

  * The DES CBC algorithm.
  * The DES DOCSIS BPI algorithm.

  This change requires version 0.47 of the IPsec Multi-buffer library. For
  more details see the :doc:`../cryptodevs/aesni_mb` documentation.

* **Updated the OpenSSL PMD.**

  The OpenSSL PMD has been updated with additional support for:

  * The DES CBC algorithm.
  * The AES CCM algorithm.

* **Added NXP DPAA SEC crypto PMD.**

  A new ``dpaa_sec`` hardware based crypto PMD for NXP DPAA devices has been
  added. See the :doc:`../cryptodevs/dpaa_sec` document for more details.

* **Added MRVL crypto PMD.**

  A new crypto PMD has been added, which provides several ciphering and
  hashing algorithms. All cryptography operations use the MUSDK library
  crypto API.
  See the :doc:`../cryptodevs/mvsam` document for more details.

* **Added a new benchmarking mode to the dpdk-test-crypto-perf application.**

  Added a new "PMD cyclecount" benchmark mode to the ``dpdk-test-crypto-perf``
  application to display a detailed breakdown of the CPU cycles used by
  hardware acceleration.

* **Added the Security Offload Library.**

  Added an experimental library - ``rte_security``. This provides security
  APIs for protocols like IPsec, using inline IPsec offload to ethernet
  devices or full protocol offload with lookaside crypto devices.

  See the :doc:`../prog_guide/rte_security` section of the DPDK Programmers
  Guide document for more information.

* **Updated the DPAA2_SEC crypto driver to support rte_security.**

  Updated the ``dpaa2_sec`` crypto PMD to support ``rte_security`` lookaside
  protocol offload for IPsec.

* **Updated the IXGBE ethernet driver to support rte_security.**

  Updated the ixgbe ethernet PMD to support ``rte_security`` inline IPsec
  offload.

* **Updated i40e driver to support GTP-C/GTP-U.**

  Updated the i40e PMD to support GTP-C/GTP-U, with the supporting GTP-C/GTP-U
  profiles programmed by the dynamic device personalization (DDP) process.

* **Updated the i40e ethernet driver to support the queue region feature.**

  This feature enables the configuration of queue regions for RSS in the PF,
  so that different traffic classes or different packet classification types
  can be separated into queues in different queue regions.

* **Updated the ipsec-secgw application to support rte_security.**

  Updated the ``ipsec-secgw`` sample application to support ``rte_security``
  actions for IPsec inline offload and full protocol offload using lookaside
  crypto offload.

* **Added IOMMU support to libvhost-user.**

  Implemented device IOTLB in the vhost-user backend, and enabled Virtio's
  IOMMU feature.
  The feature is disabled by default, and can be enabled by setting the
  ``RTE_VHOST_USER_IOMMU_SUPPORT`` flag at vhost device registration time.

* **Added the Event Ethernet Adapter Library.**

  Added the Event Ethernet Adapter library. This library provides APIs for
  eventdev applications to configure the ethdev for eventdev packet flow.

* **Updated the DPAA2 Event PMD for the Event Ethernet Adapter.**

  Added support for the eventdev ethernet adapter for DPAA2.

* **Added the Membership library (rte_member).**

  Added a new data structure library called the Membership Library.

  The Membership Library is an extension and generalization of a traditional
  filter (for example a Bloom Filter) structure that has multiple usages in a
  wide variety of workloads and applications. In general, the Membership
  Library is a data structure that provides a "set-summary" and responds to
  set-membership queries, i.e. whether a certain member belongs to a set (or
  group of sets).

  The library provides APIs for DPDK applications to insert a new member,
  delete an existing member, and query the existence of a member in a given
  set, or a group of sets. For the case of a group of sets the library will
  return not only whether the element has been inserted in one of the sets but
  also which set it belongs to.

  See the :doc:`../prog_guide/member_lib` documentation in the Programmers
  Guide, for more information.

* **Added the Generic Segmentation Offload Library.**

  Added the Generic Segmentation Offload (GSO) library to enable
  applications to split large packets (e.g. MTU of 64KB) into small
  ones (e.g. MTU of 1500B). Supported packet types are:

  * TCP/IPv4 packets.
  * VxLAN packets, which must have an outer IPv4 header, and contain
    an inner TCP/IPv4 packet.
  * GRE packets, which must contain an outer IPv4 header, and inner
    TCP/IPv4 headers.

  The GSO library doesn't check if the input packets have correct
  checksums, and doesn't update checksums for output packets.
  Additionally, the GSO library doesn't process IP fragmented packets.

* **Added the Flow Classification Library.**

  Added an experimental Flow Classification library to provide APIs for DPDK
  applications to classify an input packet by matching it against a set of
  flow rules. It uses the ``librte_table`` API to manage the flow rules.


Resolved Issues
---------------

* **Service core fails to call service callback due to atomic lock.**

  In a specific configuration of multi-thread unsafe services and service
  cores, a service core previously did not correctly release the atomic lock
  on the service. This would result in the cores polling the service, while it
  looked like another thread was executing the service callback. The logic for
  atomic locking of the services has been fixed and refactored for
  readability.


API Changes
-----------

* **Ethdev device name length increased.**

  The size of the internal device name has been increased to 64 characters
  to allow for storing longer bus-specific names.

* **Removed the Ethdev RTE_ETH_DEV_DETACHABLE flag.**

  Removed the Ethdev ``RTE_ETH_DEV_DETACHABLE`` flag. This flag is no longer
  required with the new hotplug implementation. It has been removed from the
  ether library. Its semantics are now expressed at the bus and PMD level.

* **Service cores API updated for usability.**

  The service cores API has been changed, removing pointers from the API where
  possible, and instead using integer IDs to identify each service. This
  simplifies application code, aids debugging, and provides better
  encapsulation.
  A summary of the main changes made is as follows:

  * Services are identified by an ID, not by an ``rte_service_spec`` pointer.
  * Reduced the API surface by using ``set`` functions instead of
    enable/disable.
  * Reworked ``rte_service_register`` to provide the service ID to the
    registrar.
  * Reworked the start and stop APIs into ``rte_service_runstate_set``.
  * Added an API to set the runstate of the service implementation to
    indicate readiness.

* **The following changes have been made in the mempool library.**

  * Moved the ``flags`` datatype from ``int`` to ``unsigned int`` for
    ``rte_mempool``.
  * Removed the ``__rte_unused int flag`` param from
    ``rte_mempool_generic_put`` and ``rte_mempool_generic_get`` API.
  * Added a ``flags`` param in ``rte_mempool_xmem_size`` and
    ``rte_mempool_xmem_usage``.
  * ``rte_mem_phy2mch`` was used in Xen dom0 to obtain the physical address;
    this API has been removed as Xen dom0 support was removed.

* **Added IOVA aliases related to physical address handling.**

  Some data types, structure members and functions related to physical address
  handling are deprecated and have new aliases with IOVA wording. For example:

  * ``phys_addr_t`` can often be replaced by ``rte_iova_t`` of the same size.
  * ``RTE_BAD_PHYS_ADDR`` is often replaced by ``RTE_BAD_IOVA`` of the same
    value.
  * ``rte_memseg.phys_addr`` is aliased with ``rte_memseg.iova_addr``.
  * ``rte_mem_virt2phy()`` can often be replaced by ``rte_mem_virt2iova()``.
  * ``rte_malloc_virt2phy`` is aliased with ``rte_malloc_virt2iova``.
  * ``rte_memzone.phys_addr`` is aliased with ``rte_memzone.iova``.
  * ``rte_mempool_objhdr.physaddr`` is aliased with
    ``rte_mempool_objhdr.iova``.
  * ``rte_mempool_memhdr.phys_addr`` is aliased with
    ``rte_mempool_memhdr.iova``.
  * ``rte_mempool_virt2phy()`` can be replaced by ``rte_mempool_virt2iova()``.
  * ``rte_mempool_populate_phys*()`` are aliased with
    ``rte_mempool_populate_iova*()``.
  * ``rte_mbuf.buf_physaddr`` is aliased with ``rte_mbuf.buf_iova``.
  * ``rte_mbuf_data_dma_addr*()`` are aliased with ``rte_mbuf_data_iova*()``.
  * ``rte_pktmbuf_mtophys*`` are aliased with ``rte_pktmbuf_iova*()``.

* **PCI bus API moved outside of the EAL.**

  The PCI bus previously implemented within the EAL has been moved.
  A first part has been added as an RTE library providing PCI helpers to
  parse device locations and other such utilities.
  A second part, consisting of the actual bus driver, has been moved to its
  proper subdirectory, without changing its functionality.

  As such, several PCI-related functions are no longer exposed by the EAL:

  * ``rte_pci_detach``
  * ``rte_pci_dump``
  * ``rte_pci_ioport_map``
  * ``rte_pci_ioport_read``
  * ``rte_pci_ioport_unmap``
  * ``rte_pci_ioport_write``
  * ``rte_pci_map_device``
  * ``rte_pci_probe``
  * ``rte_pci_probe_one``
  * ``rte_pci_read_config``
  * ``rte_pci_register``
  * ``rte_pci_scan``
  * ``rte_pci_unmap_device``
  * ``rte_pci_unregister``
  * ``rte_pci_write_config``

  These functions are made available either as part of ``librte_pci`` or
  ``librte_bus_pci``.

* **Moved vdev bus APIs outside of the EAL.**

  Moved the following APIs from ``librte_eal`` to ``librte_bus_vdev``:

  * ``rte_vdev_init``
  * ``rte_vdev_register``
  * ``rte_vdev_uninit``
  * ``rte_vdev_unregister``

* **Added a return value to the stats_get dev op API.**

  The return type of the ``stats_get`` dev op API has been changed to ``int``,
  so that PMDs can return an error value if a failure occurs while retrieving
  statistics.

* **Modified the rte_cryptodev_allocate_driver function.**

  Modified the ``rte_cryptodev_allocate_driver()`` function in the cryptodev
  library.
  An extra parameter ``struct cryptodev_driver *crypto_drv`` has been added.

* **Removed virtual device bus specific functions from librte_cryptodev.**

  The functions ``rte_cryptodev_vdev_parse_init_params()`` and
  ``rte_cryptodev_vdev_pmd_init()`` have been removed from librte_cryptodev
  and have been replaced by the non bus specific functions
  ``rte_cryptodev_pmd_parse_input_args()`` and ``rte_cryptodev_pmd_create()``.

  The ``rte_cryptodev_create_vdev()`` function was removed to avoid the
  dependency on vdev in librte_cryptodev; instead, users can call
  ``rte_vdev_init()`` directly.

* **Removed PCI device bus specific functions from librte_cryptodev.**

  The functions ``rte_cryptodev_pci_generic_probe()`` and
  ``rte_cryptodev_pci_generic_remove()`` have been removed from
  librte_cryptodev and have been replaced by the non bus specific functions
  ``rte_cryptodev_pmd_create()`` and ``rte_cryptodev_pmd_destroy()``.

* **Removed deprecated functions to manage log level or type.**

  The functions ``rte_set_log_level()``, ``rte_get_log_level()``,
  ``rte_set_log_type()`` and ``rte_get_log_type()`` have been removed.

  They are respectively replaced by ``rte_log_set_global_level()``,
  ``rte_log_get_global_level()``, ``rte_log_set_level()`` and
  ``rte_log_get_level()``.

* **Removed mbuf flags PKT_RX_VLAN_PKT and PKT_RX_QINQ_PKT.**

  The ``mbuf`` flags ``PKT_RX_VLAN_PKT`` and ``PKT_RX_QINQ_PKT`` have
  been removed since their behavior was not properly described.

* **Added mbuf flags PKT_RX_VLAN and PKT_RX_QINQ.**

  Two ``mbuf`` flags have been added to indicate that the VLAN
  identifier has been saved in the ``mbuf`` structure.
  For instance:

  - If VLAN is not stripped and TCI is saved: ``PKT_RX_VLAN``
  - If VLAN is stripped and TCI is saved: ``PKT_RX_VLAN | PKT_RX_VLAN_STRIPPED``

* **Modified the vlan_offload_set_t function prototype in the ethdev library.**

  Modified the ``vlan_offload_set_t`` function prototype in the ethdev
  library. The return value has been changed from ``void`` to ``int`` so the
  caller can determine whether the backing device supports the operation or if
  the operation was successfully performed.


ABI Changes
-----------

* **Extended port_id range.**

  The size of the field ``port_id`` in the ``rte_eth_dev_data`` structure
  has changed, as described in the `New Features` section above.

* **New parameter added to rte_eth_dev.**

  A new parameter ``security_ctx`` has been added to ``rte_eth_dev`` to
  support security operations like IPsec inline.

* **New parameter added to rte_cryptodev.**

  A new parameter ``security_ctx`` has been added to ``rte_cryptodev`` to
  support security operations like lookaside crypto.


Removed Items
-------------

* Xen dom0 support in the EAL has been removed, as well as the xenvirt PMD
  and vhost_xen.

* The crypto performance unit tests have been removed,
  replaced by the ``dpdk-test-crypto-perf`` application.


Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
   + librte_bitratestats.so.2
   + librte_bus_dpaa.so.1
   + librte_bus_fslmc.so.1
   + librte_bus_pci.so.1
   + librte_bus_vdev.so.1
     librte_cfgfile.so.2
     librte_cmdline.so.2
   + librte_cryptodev.so.4
     librte_distributor.so.1
   + librte_eal.so.6
   + librte_ethdev.so.8
   + librte_eventdev.so.3
   + librte_flow_classify.so.1
     librte_gro.so.1
   + librte_gso.so.1
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
     librte_latencystats.so.1
     librte_lpm.so.2
     librte_mbuf.so.3
   + librte_mempool.so.3
     librte_meter.so.1
     librte_metrics.so.1
     librte_net.so.1
   + librte_pci.so.1
   + librte_pdump.so.2
     librte_pipeline.so.3
   + librte_pmd_bnxt.so.2
   + librte_pmd_bond.so.2
   + librte_pmd_i40e.so.2
   + librte_pmd_ixgbe.so.2
     librte_pmd_ring.so.2
   + librte_pmd_softnic.so.1
   + librte_pmd_vhost.so.2
     librte_port.so.3
     librte_power.so.1
     librte_reorder.so.1
     librte_ring.so.1
     librte_sched.so.1
   + librte_security.so.1
   + librte_table.so.3
     librte_timer.so.1
     librte_vhost.so.3


Tested Platforms
----------------

* Intel(R) platforms with Intel(R) NICs combinations

  * CPU

    * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
    * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
    * Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
    * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
    * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz

  * OS:

    * CentOS 7.2
    * Fedora 25
    * Fedora 26
    * FreeBSD 11
    * Red Hat Enterprise Linux Server release 7.3
    * SUSE Enterprise Linux 12
    * Wind River Linux 8
    * Ubuntu 16.04
    * Ubuntu 16.10

  * NICs:

    * Intel(R) 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 5.2.3 (ixgbe)

    * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800003e7
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 4.4.6 (ixgbe)

    * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 6.01 0x80003205
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.1.26 (i40e)

    * Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)

      * Firmware version: 6.01 0x80003204
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.1.26 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

      * Firmware version: 6.01 0x80003221
      * Device id (pf/vf): 8086:158b
      * Driver version: 2.1.26 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

      * Firmware version: 6.01 0x8000321c
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 2.1.26 (i40e)

    * Intel(R) Corporation I350 Gigabit Network Connection

      * Firmware version: 1.63, 0x80000dda
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.3.0-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

  * Platform details:

    * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
    * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz
    * Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)
    * Ubuntu 16.10
    * Ubuntu 16.04
    * Ubuntu 14.04

  * MLNX_OFED: 4.2-1.0.0.0

  * NICs:

    * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT
      (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.21.1000

    * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.21.1000

    * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.21.1000

* ARM platforms with Mellanox(R) NICs combinations

  * Platform details:

    * Qualcomm ARM 1.1 2500MHz

  * OS:

    * Ubuntu 16.04

  * MLNX_OFED: 4.2-1.0.0.0

  * NICs:

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.21.1000

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.21.1000