..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2017 The DPDK contributors

DPDK Release 17.05
==================

New Features
------------

* **Reorganized mbuf structure.**

  The mbuf structure has been reorganized as follows:

  * Align fields to facilitate the writing of ``data_off``, ``refcnt``, and
    ``nb_segs`` in one operation.
  * Use 2 bytes for port and number of segments.
  * Move the sequence number to the second cache line.
  * Add a timestamp field.
  * Set default value for ``refcnt``, ``next`` and ``nb_segs`` at mbuf free.

* **Added mbuf raw free API.**

  Moved the ``rte_mbuf_raw_free()`` and ``rte_pktmbuf_prefree_seg()`` functions
  to the public API.

* **Added free Tx mbuf on demand API.**

  Added a new function ``rte_eth_tx_done_cleanup()`` which allows an
  application to request the driver to release mbufs that are no longer in use
  from a Tx ring, independent of whether or not the ``tx_rs_thresh`` has been
  crossed. A usage sketch is given under the descriptor status item below.

* **Added device removal interrupt.**

  Added a new ethdev event ``RTE_ETH_DEV_INTR_RMV`` to signify the sudden
  removal of a device. This event can be advertised by PCI drivers and enabled
  accordingly.

* **Added EAL dynamic log framework.**

  Added new APIs to dynamically register named log types and control the level
  of each type independently.

* **Added descriptor status ethdev API.**

  Added a new API to get the status of a descriptor.

  For Rx, it is similar to the ``rx_descriptor_done`` API, except that it also
  differentiates descriptors which are held by the driver and not yet returned
  to the hardware. For Tx, it is a new API.
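
  The minimal sketch below is not part of the release itself: port 0, queue 0,
  descriptor offset 0 and the cleanup count of 32 are placeholder values, and
  error handling is omitted. It polls one Rx and one Tx descriptor and also
  triggers the on-demand Tx cleanup described above.

  .. code-block:: c

     #include <rte_ethdev.h>

     /* Illustrative only: placeholder port/queue/offset values. */
     static void
     check_descriptors(void)
     {
             /* Status of the Rx descriptor at offset 0 of Rx queue 0. */
             int rx_st = rte_eth_rx_descriptor_status(0, 0, 0);

             if (rx_st == RTE_ETH_RX_DESC_DONE) {
                     /* Filled by the NIC but not yet retrieved by the app. */
             } else if (rx_st == RTE_ETH_RX_DESC_AVAIL) {
                     /* Still owned by the NIC, waiting for a packet. */
             }

             /* Same kind of query on the Tx side. */
             if (rte_eth_tx_descriptor_status(0, 0, 0) == RTE_ETH_TX_DESC_FULL) {
                     /* Descriptor still in use by the NIC: ask the driver to
                      * free up to 32 completed Tx mbufs now, regardless of
                      * tx_rs_thresh. */
                     rte_eth_tx_done_cleanup(0, 0, 32);
             }
     }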

* **Increased number of next hops for LPM IPv6 to 2^21.**

  The ``next_hop`` field has been extended from 8 bits to 21 bits for IPv6.

* **Added VFIO hotplug support.**

  Added hotplug support for VFIO in addition to the existing UIO support.

* **Added PowerPC support to PCI probing for vfio-pci devices.**

  Enabled sPAPR IOMMU based PCI probing for vfio-pci devices.

* **Kept consistent PMD batching behavior.**

  Removed the limit on the fm10k/i40e/ixgbe Tx burst size and the vhost Rx/Tx
  burst size so that these PMDs all follow the same "make a best effort to
  Rx/Tx packets" policy.

* **Updated the ixgbe base driver.**

  Updated the ixgbe base driver, including the following changes:

  * Add link block check for KR.
  * Complete HW initialization even if SFP is not present.
  * Add VF xcast promiscuous mode.

* **Added PowerPC support for i40e and its vector PMD.**

  Enabled the i40e PMD and its vector PMD by default on PowerPC.

* **Added VF max bandwidth setting in i40e.**

  Enabled the capability to set the max bandwidth for a VF in i40e.

* **Added VF TC min and max bandwidth setting in i40e.**

  Enabled the capability to set the min and max allocated bandwidth for a TC
  on a VF in i40e.

* **Added TC strict priority mode setting on i40e.**

  There are 2 Tx scheduling modes supported for TCs by the i40e HW: round
  robin mode and strict priority mode. By default the round robin mode is
  used. It is now possible to change the Tx scheduling mode for a TC. This is
  a global setting on a physical port.

* **Added i40e dynamic device personalization support.**

  * Added dynamic device personalization processing to i40e firmware.

* **Updated i40e driver to support MPLSoUDP/MPLSoGRE.**

  Updated the i40e PMD to support MPLSoUDP/MPLSoGRE with MPLSoUDP/MPLSoGRE
  supporting profiles which can be programmed by the dynamic device
  personalization (DDP) process.

* **Added Cloud Filter for QinQ steering to i40e.**

  * Added a QinQ cloud filter on the i40e PMD, for steering traffic to a VM
    using both VLAN tags. Note that this feature is not supported in Vector
    Mode.

* **Updated mlx5 PMD.**

  Updated the mlx5 driver, including the following changes:

  * Added Generic flow API support for classification according to ether type.
  * Extended Generic flow API support for classification of IPv6 flows
    according to Vtc flow, Protocol and Hop limit.
  * Added Generic flow API support for the FLAG action.
  * Added Generic flow API support for the RSS action.
  * Added support for TSO for non-tunneled and VXLAN packets.
  * Added support for hardware Tx checksum offloads for VXLAN packets.
  * Added support for user space Rx interrupt mode.
  * Improved ConnectX-5 single core and maximum performance.

* **Updated mlx4 PMD.**

  Updated the mlx4 driver, including the following changes:

  * Added support for Generic flow API basic flow items and actions.
  * Added support for the device removal event.

* **Updated the sfc_efx driver.**

  * Added Generic Flow API support for Ethernet, VLAN, IPv4, IPv6, UDP and TCP
    pattern items with the QUEUE action for ingress traffic.

  * Added support for virtual functions (VFs).

* **Added LiquidIO network PMD.**

  Added poll mode driver support for Cavium LiquidIO II server adapter VFs.

* **Added Atomic Rules Arkville PMD.**

  Added a new poll mode driver for the Arkville family of devices from Atomic
  Rules. The net/ark PMD supports line-rate agnostic, multi-queue data
  movement on Arkville core FPGA instances.

* **Added support for NXP DPAA2 - FSLMC bus.**

  Added the new bus "fslmc" driver for NXP DPAA2 devices. See the
  "Network Interface Controller Drivers" document for more details of this new
  driver.

* **Added support for NXP DPAA2 Network PMD.**

  Added the new "dpaa2" net driver for NXP DPAA2 devices. See the
  "Network Interface Controller Drivers" document for more details of this new
  driver.

* **Added support for the Wind River Systems AVP PMD.**

  Added a new networking driver for the AVP device type. These devices are
  specific to the Wind River Systems virtualization platforms.

* **Added vmxnet3 version 3 support.**

  Added support for vmxnet3 version 3, which includes several performance
  enhancements such as a configurable Tx data ring, a Receive Data Ring, and
  the ability to register memory regions.

* **Updated the TAP driver.**

  Updated the TAP PMD to:

  * Support MTU modification.
  * Support packet type for Rx.
  * Support segmented packets on Rx and Tx.
  * Speed up Rx on TAP when no packets are available.
  * Support capturing traffic from another netdevice.
  * Dynamically change link status when the underlying interface state changes.
  * Support the Generic Flow API with Ethernet, VLAN, IPv4, IPv6, UDP and TCP
    pattern items and the DROP, QUEUE and PASSTHRU actions for ingress
    traffic.

* **Added MTU feature support to Virtio and Vhost.**

  Implemented the new Virtio MTU feature in Vhost and Virtio:

  * Add the ``rte_vhost_mtu_get()`` API to the Vhost library.
  * Enable the Vhost PMD's MTU get feature.
  * Get the max MTU value from the host in the Virtio PMD.

* **Added interrupt mode support for virtio-user.**

  Implemented Rxq interrupt mode and LSC support for virtio-user as a virtual
  device. Supported cases:

  * Rxq interrupt for virtio-user + vhost-user as the backend.
  * Rxq interrupt for virtio-user + vhost-kernel as the backend.
  * LSC interrupt for virtio-user + vhost-user as the backend.

* **Added event driven programming model library (rte_eventdev).**

  This API introduces an event driven programming model.

  In a polling model, lcores poll ethdev ports and associated Rx queues
  directly to look for a packet. By contrast, in an event driven model, lcores
  call a scheduler that selects packets for them based on programmer-specified
  criteria. The Eventdev library adds support for this event driven
  programming model, which offers applications automatic multicore scaling,
  dynamic load balancing, pipelining, packet ingress order maintenance and
  synchronization services to simplify application packet processing.

  By introducing an event driven programming model, DPDK can support both
  polling and event driven programming models for packet processing, and
  applications are free to choose whatever model (or combination of the two)
  best suits their needs.
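
  As a rough illustration of this model (a sketch only: device and port setup
  is assumed to have been done beforehand with the eventdev configuration
  APIs, and ``dev_id``, ``port_id`` and ``next_queue`` are placeholders), a
  worker lcore dequeues an event from the scheduler, processes it and forwards
  it to the next pipeline stage:

  .. code-block:: c

     #include <rte_eventdev.h>

     /* Hypothetical worker loop; configuration is assumed to be done. */
     static void
     worker_loop(uint8_t dev_id, uint8_t port_id, uint8_t next_queue)
     {
             struct rte_event ev;

             for (;;) {
                     /* Ask the scheduler for one event; the timeout argument
                      * semantics depend on the device configuration (0 is
                      * used here as a placeholder). */
                     if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0)
                             continue;

                     /* ... process the packet carried in ev.mbuf ... */

                     /* Hand the event back to the scheduler for the next
                      * stage of the pipeline. */
                     ev.queue_id = next_queue;
                     ev.op = RTE_EVENT_OP_FORWARD;
                     rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
             }
     }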

* **Added Software Eventdev PMD.**

  Added support for the software eventdev PMD. The software eventdev is a
  software based scheduler device that implements the eventdev API. This PMD
  allows an application to configure a pipeline using the eventdev library and
  run the scheduling workload on a CPU core.

* **Added Cavium OCTEONTX Eventdev PMD.**

  Added the new octeontx ssovf eventdev driver for OCTEONTX devices. See the
  "Event Device Drivers" document for more details on this new driver.

* **Added information metrics library.**

  Added a library that allows information metrics to be added and updated by
  producers, typically other libraries, for later retrieval by consumers such
  as applications. It is intended to provide a reporting mechanism that is
  independent of other libraries such as ethdev.

* **Added bit-rate calculation library.**

  Added a library that can be used to calculate device bit-rates. Calculated
  bit-rates are reported using the metrics library.

* **Added latency stats library.**

  Added a library that measures packet latency. The collected statistics are
  jitter and latency. For latency, the minimum, average, and maximum values
  are measured.

* **Added NXP DPAA2 SEC crypto PMD.**

  A new "dpaa2_sec" hardware based crypto PMD for NXP DPAA2 devices has been
  added. See the "Crypto Device Drivers" document for more details on this
  driver.

* **Updated the Cryptodev Scheduler PMD.**

  * Added a packet-size based distribution mode, which distributes the
    enqueued crypto operations among two slaves based on their data lengths.
  * Added a fail-over scheduling mode, which enqueues crypto operations to a
    primary slave first. Any operation that cannot be enqueued there is then
    enqueued to a secondary slave.
  * Added mode-specific option support, so that each scheduling mode can now
    be configured individually via the new API.

* **Updated the QAT PMD.**

  The QAT PMD has been updated with additional support for:

  * AES DOCSIS BPI algorithm.
  * DES DOCSIS BPI algorithm.
  * ZUC EEA3/EIA3 algorithms.

* **Updated the AESNI MB PMD.**

  The AESNI MB PMD has been updated with additional support for:

  * AES DOCSIS BPI algorithm.

* **Updated the OpenSSL PMD.**

  The OpenSSL PMD has been updated with additional support for:

  * DES DOCSIS BPI algorithm.


Resolved Issues
---------------

* **l2fwd-keepalive: Fixed unclean shutdowns.**

  Added a clean shutdown to l2fwd-keepalive so that it can free up the stale
  resources used for inter-process communication.


Known Issues
------------

* **LSC interrupt doesn't work for virtio-user + vhost-kernel.**

  The LSC interrupt cannot be detected when the backend (tap device) is set up
  or down, as no way has been found to monitor such an event.


API Changes
-----------

* The LPM ``next_hop`` field is extended from 8 bits to 21 bits for IPv6 while
  keeping ABI compatibility.

* **Reworked rte_ring library.**

  The rte_ring library has been reworked and updated. The following changes
  have been made to it:

  * Removed the build-time setting ``CONFIG_RTE_RING_SPLIT_PROD_CONS``.
  * Removed the build-time setting ``CONFIG_RTE_LIBRTE_RING_DEBUG``.
  * Removed the build-time setting ``CONFIG_RTE_RING_PAUSE_REP_COUNT``.
  * Removed the function ``rte_ring_set_water_mark`` as part of a general
    removal of watermarks support in the library.
  * Added an extra parameter to the burst/bulk enqueue functions to return the
    number of free spaces in the ring after enqueue. This can be used by an
    application to implement its own watermark functionality.
  * Added an extra parameter to the burst/bulk dequeue functions to return the
    number of elements remaining in the ring after dequeue.
  * Changed the return value of the enqueue and dequeue bulk functions to
    match that of the burst equivalents. In all cases, ring functions which
    operate on multiple packets now return the number of elements enqueued or
    dequeued, as appropriate. The updated functions are:

    - ``rte_ring_mp_enqueue_bulk``
    - ``rte_ring_sp_enqueue_bulk``
    - ``rte_ring_enqueue_bulk``
    - ``rte_ring_mc_dequeue_bulk``
    - ``rte_ring_sc_dequeue_bulk``
    - ``rte_ring_dequeue_bulk``

  NOTE: the above functions all have different parameters as well as different
  return values due to the changes listed above. This means that all existing
  uses of these functions will be flagged by the compiler. The return value
  usage should be checked while fixing the compiler error caused by the extra
  parameter.
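
  As a sketch of the new calling convention (assuming an already created ring
  and an application-chosen threshold; this is not code from the release), the
  ``free_space`` output parameter can stand in for the removed watermark
  support:

  .. code-block:: c

     #include <rte_ring.h>

     /* Hypothetical helper: enqueue a burst of objects and react when the
      * ring is getting close to full, emulating the removed watermark API. */
     static unsigned int
     enqueue_with_watermark(struct rte_ring *r, void **objs, unsigned int n,
                            unsigned int watermark)
     {
             unsigned int free_space;
             unsigned int sent;

             /* New in 17.05: the extra output argument returns the free
              * space remaining in the ring after the enqueue. */
             sent = rte_ring_enqueue_burst(r, objs, n, &free_space);

             if (free_space < watermark) {
                     /* Application-defined reaction, e.g. apply backpressure. */
             }

             return sent;
     }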

* **Reworked rte_vhost library.**

  The rte_vhost library has been reworked to make it generic enough that users
  can build other vhost-user drivers on top of it. To achieve this the
  following changes have been made:

  * The following vhost-pmd APIs are removed:

    * ``rte_eth_vhost_feature_disable``
    * ``rte_eth_vhost_feature_enable``
    * ``rte_eth_vhost_feature_get``

  * The vhost API ``rte_vhost_driver_callback_register(ops)`` is reworked to
    be per vhost-user socket file. Thus, it takes one more argument:
    ``rte_vhost_driver_callback_register(path, ops)``.

  * The vhost API ``rte_vhost_get_queue_num`` is deprecated;
    ``rte_vhost_get_vring_num`` should be used instead.

  * The following macros are removed from ``rte_virtio_net.h``:

    * ``VIRTIO_RXQ``
    * ``VIRTIO_TXQ``
    * ``VIRTIO_QNUM``

  * The following net specific header file includes are removed from
    ``rte_virtio_net.h``:

    * ``linux/virtio_net.h``
    * ``sys/socket.h``
    * ``linux/if.h``
    * ``rte_ether.h``

  * The vhost struct ``virtio_net_device_ops`` is renamed to
    ``vhost_device_ops``.

  * The vhost API ``rte_vhost_driver_session_start`` is removed. Instead,
    ``rte_vhost_driver_start`` should be used, and there is no need to create
    a thread to call it.

  * The vhost public header file ``rte_virtio_net.h`` is renamed to
    ``rte_vhost.h``.
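
  A minimal registration sketch using the reworked per-socket API (the socket
  path and the no-op callbacks are illustrative assumptions, and error
  handling is reduced to early returns):

  .. code-block:: c

     #include <rte_vhost.h>

     /* No-op callbacks, for illustration only. */
     static int
     new_device(int vid)
     {
             /* The virtio device "vid" is now ready to be used. */
             return 0;
     }

     static void
     destroy_device(int vid)
     {
             /* The virtio device "vid" has been removed. */
     }

     static const struct vhost_device_ops ops = {
             .new_device = new_device,
             .destroy_device = destroy_device,
     };

     /* "path" is a hypothetical vhost-user socket path chosen by the caller. */
     static int
     start_vhost(const char *path)
     {
             if (rte_vhost_driver_register(path, 0) != 0)
                     return -1;

             /* Callbacks are now registered per vhost-user socket file. */
             if (rte_vhost_driver_callback_register(path, &ops) != 0)
                     return -1;

             /* No dedicated session thread is needed any more. */
             return rte_vhost_driver_start(path);
     }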


ABI Changes
-----------

* **Reorganized the mbuf structure.**

  The order and size of the fields in the ``mbuf`` structure changed, as
  described in the `New Features`_ section.

* The ``rte_cryptodev_info.sym`` structure has a new field
  ``max_nb_sessions_per_qp`` to support drivers which may support a limited
  number of sessions per queue pair.


Removed Items
-------------

* KNI vhost support has been removed.

* The dpdk_qat sample application has been removed.


Shared Library Versions
-----------------------

The libraries prepended with a plus sign were incremented in this version.

.. code-block:: diff

     librte_acl.so.2
   + librte_bitratestats.so.1
     librte_cfgfile.so.2
     librte_cmdline.so.2
     librte_cryptodev.so.2
     librte_distributor.so.1
   + librte_eal.so.4
     librte_ethdev.so.6
   + librte_eventdev.so.1
     librte_hash.so.2
     librte_ip_frag.so.1
     librte_jobstats.so.1
     librte_kni.so.2
     librte_kvargs.so.1
   + librte_latencystats.so.1
     librte_lpm.so.2
   + librte_mbuf.so.3
     librte_mempool.so.2
     librte_meter.so.1
   + librte_metrics.so.1
     librte_net.so.1
     librte_pdump.so.1
     librte_pipeline.so.3
     librte_pmd_bond.so.1
     librte_pmd_ring.so.2
     librte_port.so.3
     librte_power.so.1
     librte_reorder.so.1
     librte_ring.so.1
     librte_sched.so.1
     librte_table.so.2
     librte_timer.so.1
     librte_vhost.so.3


Tested Platforms
----------------

* Intel(R) platforms with Intel(R) NICs combinations

  * CPU

    * Intel(R) Atom(TM) CPU C2758 @ 2.40GHz
    * Intel(R) Xeon(R) CPU D-1540 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-4667 v3 @ 2.00GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz
    * Intel(R) Xeon(R) CPU E5-2695 v4 @ 2.10GHz
    * Intel(R) Xeon(R) CPU E5-2658 v2 @ 2.40GHz
    * Intel(R) Xeon(R) CPU E5-2658 v3 @ 2.20GHz

  * OS:

    * CentOS 7.2
    * Fedora 25
    * FreeBSD 11
    * Red Hat Enterprise Linux Server release 7.3
    * SUSE Enterprise Linux 12
    * Wind River Linux 8
    * Ubuntu 16.04
    * Ubuntu 16.10

  * NICs:

    * Intel(R) 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 4.0.1-k (ixgbe)

    * Intel(R) Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800001cf
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 4.2.5 (ixgbe)

    * Intel(R) Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Ethernet Converged Network Adapter X710-DA2 (2x10G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA1 (1x40G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1584 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

      * Firmware version: 5.05
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 1.5.23 (i40e)

    * Intel(R) Corporation I350 Gigabit Network Connection

      * Firmware version: 1.48, 0x800006e7
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.2.13-k (igb)

* Intel(R) platforms with Mellanox(R) NICs combinations

  * Platform details:

    * Intel(R) Xeon(R) CPU E5-2697A v4 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
    * Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
    * Intel(R) Xeon(R) CPU E5-2640 @ 2.50GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)
    * Ubuntu 16.10
    * Ubuntu 16.04
    * Ubuntu 14.04

  * MLNX_OFED: 4.0-2.0.0.0

  * NICs:

    * Mellanox(R) ConnectX(R)-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.40.5030

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 Lx 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.18.2000

    * Mellanox(R) ConnectX(R)-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.18.2000

    * Mellanox(R) ConnectX(R)-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.19.1200

    * Mellanox(R) ConnectX-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.19.1200

* IBM(R) Power8(R) with Mellanox(R) NICs combinations

  * Platform details:

    * Processor: POWER8E (raw), AltiVec supported
    * type-model: 8247-22L
    * Firmware FW810.21 (SV810_108)

  * OS: Ubuntu 16.04 LTS PPC le

  * MLNX_OFED: 4.0-2.0.0.0

  * NICs:

    * Mellanox(R) ConnectX(R)-4 10G MCX4111A-XCAT (1x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 10G MCX4121A-XCAT (2x10G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4111A-ACAT (1x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX4131A-BCAT/MCX413A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 40G MCX415A-BCAT (1x40G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX4131A-GCAT/MCX413A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX414A-BCAT (2x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-GCAT/MCX416A-BCAT/MCX416A-GCAT (2x50G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 50G MCX415A-CCAT (1x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000

    * Mellanox(R) ConnectX(R)-4 100G MCX416A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1013
      * Firmware version: 12.18.2000