.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2020 The DPDK contributors

.. include:: <isonum.txt>

DPDK Release 20.05
==================

New Features
------------

* **Added Trace Library and Tracepoints.**

  Added a native implementation of the "common trace format" (CTF) based trace
  library. This allows the user to add tracepoints in an application/library
  and get runtime trace/debug information for control and fast APIs with
  minimum impact on fast path performance. Typical trace overhead is ~20
  cycles and instrumentation overhead is 1 cycle. Added tracepoints in
  ``EAL``, ``ethdev``, ``cryptodev``, ``eventdev`` and ``mempool`` libraries
  for important functions.

* **Added APIs for RCU defer queues.**

  Added APIs to create and delete defer queues. Additional APIs are provided
  to enqueue a deleted resource and reclaim the resource in the future.
  These APIs help an application use lock-free data structures with
  less effort.

* **Added new APIs for rte_ring.**

  * Introduced new synchronization modes for ``rte_ring``.

    Introduced new optional MT synchronization modes for ``rte_ring``:
    Relaxed Tail Sync (RTS) mode and Head/Tail Sync (HTS) mode.
    With these modes selected, ``rte_ring`` shows significant improvements in
    average enqueue/dequeue times on overcommitted systems.

  * Added peek style API for ``rte_ring``.

    For rings with producer/consumer in ``RTE_RING_SYNC_ST`` or
    ``RTE_RING_SYNC_MT_HTS`` mode, added the ability to split an
    enqueue/dequeue operation into two phases (enqueue/dequeue start and
    enqueue/dequeue finish). This allows the user to inspect objects in the
    ring without removing them (aka MT safe peek).

* **Added flow aging support.**

  Added flow aging support to detect and report aged-out flows (a usage
  sketch follows this list), including:

  * Added new action: ``RTE_FLOW_ACTION_TYPE_AGE`` to set the timeout
    and the application flow context for each flow.
  * Added new event: ``RTE_ETH_EVENT_FLOW_AGED`` for the driver to report
    that there are new aged-out flows.
  * Added new query: ``rte_flow_get_aged_flows`` to get the aged-out flow
    contexts from the port.
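
  A minimal sketch of how these pieces could fit together. The helper names,
  the 30 second timeout and the context array size are illustrative, not part
  of the API:

  .. code-block:: c

     #include <rte_common.h>
     #include <rte_ethdev.h>
     #include <rte_flow.h>

     /* Attach an age action to a flow rule; the context pointer is handed
      * back later by rte_flow_get_aged_flows(). */
     static struct rte_flow *
     create_aged_flow(uint16_t port_id, const struct rte_flow_attr *attr,
                      const struct rte_flow_item *pattern, void *app_ctx)
     {
         struct rte_flow_action_age age = {
             .timeout = 30,      /* seconds */
             .context = app_ctx,
         };
         struct rte_flow_action actions[] = {
             { .type = RTE_FLOW_ACTION_TYPE_AGE, .conf = &age },
             { .type = RTE_FLOW_ACTION_TYPE_END },
         };
         struct rte_flow_error err;

         return rte_flow_create(port_id, attr, pattern, actions, &err);
     }

     /* Callback registered via rte_eth_dev_callback_register() for
      * RTE_ETH_EVENT_FLOW_AGED. */
     static int
     flow_aged_cb(uint16_t port_id, enum rte_eth_event_type event,
                  void *cb_arg, void *ret_param)
     {
         void *contexts[64];
         int i, n;

         RTE_SET_USED(event);
         RTE_SET_USED(cb_arg);
         RTE_SET_USED(ret_param);

         n = rte_flow_get_aged_flows(port_id, contexts,
                                     RTE_DIM(contexts), NULL);
         for (i = 0; i < n; i++) {
             /* contexts[i] is the context set in the age action;
              * destroy or refresh the corresponding flow here. */
         }
         return 0;
     }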

* **ethdev: Added a new link speed value for 200Gbps.**

  Added a new ethdev value for link speeds of 200Gbps.

* **Updated the Amazon ena driver.**

  Updated the ena PMD with new features and improvements, including:

  * Added support for large LLQ (Low-latency queue) headers.
  * Added Tx drops as a new extended driver statistic.
  * Added support for accelerated LLQ mode.
  * Added handling of 0-length descriptors on the Rx path.

* **Updated Broadcom bnxt driver.**

  Updated the Broadcom bnxt driver with new features and improvements, including:

  * Added support for host based flow table management.
  * Added flow counters to extended stats.
  * Added PCI function stats to extended stats.

* **Updated Cisco enic driver.**

  Updated Cisco enic driver GENEVE tunneling support:

  * Added support to control GENEVE tunneling via UCSM/CIMC and removed devarg.
  * Added GENEVE port number configuration.

* **Updated Hisilicon hns3 driver.**

  Updated Hisilicon hns3 driver with new features and improvements, including:

  * Added support for TSO.
  * Added support for configuring promiscuous and allmulticast mode for VF.

* **Added a new driver for Intel Foxville I225 devices.**

  Added the new ``igc`` net driver for Intel Foxville I225 devices. See the
  :doc:`../nics/igc` NIC guide for more details on this new driver.

* **Updated Intel i40e driver.**

  Updated the i40e PMD with new features and improvements, including:

  * Enabled MAC address as FDIR input set for ipv4-other, ipv4-udp and ipv4-tcp.
  * Added support for RSS using L3/L4 source/destination only.
  * Added support for setting the hash function in rte flow.

* **Updated the Intel iavf driver.**

  Updated the Intel iavf driver with new features and improvements, including:

  * Added generic filter support.
  * Added advanced FDIR (Flow Director) capability.
  * Added advanced RSS configuration for VFs.

* **Updated the Intel ice driver.**

  Updated the Intel ice driver with new features and improvements, including:

  * Added support for the DCF (Device Config Function) feature.
  * Added switch filter support for Intel DCF.

* **Updated Marvell OCTEON TX2 ethdev driver.**

  Updated Marvell OCTEON TX2 ethdev driver with traffic manager support,
  including:

  * Hierarchical Scheduling with DWRR and SP.
  * Single rate - Two color, Two rate - Three color shaping.

* **Updated Mellanox mlx5 driver.**

  Updated Mellanox mlx5 driver with new features and improvements, including:

  * Added support for matching on IPv4 Time To Live and IPv6 Hop Limit.
  * Added support for creating Relaxed Ordering Memory Regions.
  * Added support for configuring Hairpin queue data buffer size.
  * Added support for jumbo frame size (9K MTU) in Multi-Packet RQ mode.
  * Removed flow rules caching for memory saving and compliance with ethdev API.
  * Optimized the memory consumption of flows.
  * Added support for flow aging based on hardware counters.
  * Added support for flow patterns with wildcard VLAN items (without VID value).
  * Updated support for matching on GTP headers, added match on GTP flags.

* **Added Chacha20-Poly1305 algorithm to Cryptodev API.**

  Added support for the Chacha20-Poly1305 AEAD algorithm in Cryptodev, as
  sketched below.
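
  A minimal sketch of filling the symmetric AEAD transform for the new
  algorithm. The helper name is illustrative; the key, IV, tag and AAD sizes
  are the usual Chacha20-Poly1305 parameters and should be adapted as needed:

  .. code-block:: c

     #include <string.h>

     #include <rte_crypto_sym.h>

     /* Fill an AEAD xform for Chacha20-Poly1305 encryption; session and
      * operation setup are unchanged from other AEAD algorithms. */
     static void
     fill_chacha_poly_xform(struct rte_crypto_sym_xform *xform,
                            const uint8_t *key, uint16_t iv_offset)
     {
         memset(xform, 0, sizeof(*xform));
         xform->type = RTE_CRYPTO_SYM_XFORM_AEAD;
         xform->aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
         xform->aead.algo = RTE_CRYPTO_AEAD_CHACHA20_POLY1305;
         xform->aead.key.data = key;
         xform->aead.key.length = 32;       /* 256-bit key */
         xform->aead.iv.offset = iv_offset; /* IV location in the crypto op */
         xform->aead.iv.length = 12;        /* 96-bit nonce */
         xform->aead.digest_length = 16;    /* Poly1305 tag */
         xform->aead.aad_length = 0;        /* adjust to the AAD in use */
     }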

* **Updated the AESNI MB crypto PMD.**

  * Added support for intel-ipsec-mb version 0.54.
  * Updated the AESNI MB PMD with the AES-256 DOCSIS algorithm.
  * Added support for the synchronous Crypto burst API.

* **Updated the AESNI GCM crypto PMD.**

  Added support for intel-ipsec-mb version 0.54.

* **Updated the ZUC crypto PMD.**

  * Added support for intel-ipsec-mb version 0.54.
  * Updated the PMD to support Multi-buffer ZUC-EIA3, improving performance
    significantly when using intel-ipsec-mb version 0.54.

* **Updated the SNOW3G crypto PMD.**

  Added support for intel-ipsec-mb version 0.54.

* **Updated the KASUMI crypto PMD.**

  Added support for intel-ipsec-mb version 0.54.

* **Updated the QuickAssist Technology (QAT) Crypto PMD.**

  * Added handling of mixed crypto algorithms in QAT PMD for GEN2.

    Enabled handling of mixed algorithms in encrypted digest hash-cipher
    (generation) and cipher-hash (verification) requests in the QAT PMD when
    running on GEN2 QAT hardware with particular firmware versions (GEN3
    support was added in DPDK 20.02).

  * Added plain SHA-1, 224, 256, 384, 512 support to QAT PMD.

    Added support for plain SHA-1, SHA-224, SHA-256, SHA-384 and SHA-512
    hashes to the QAT PMD.

  * Added AES-GCM/GMAC J0 support to QAT PMD.

    Added support for AES-GCM/GMAC J0 to the Intel QuickAssist Technology
    PMD. The user can use this feature by passing a zero length IV in the
    appropriate xform. For more information refer to the doxygen comments in
    ``rte_crypto_sym.h`` for ``J0``. A short sketch follows this list.

  * Updated the QAT PMD for AES-256 DOCSIS.

    Added AES-256 DOCSIS algorithm support to the QAT PMD.
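
  As a rough illustration of the J0 usage described above, the only change
  relative to a normal AES-GCM transform is the zero length IV. The helper
  name and the AES-128 key size here are illustrative; consult the ``J0``
  notes in ``rte_crypto_sym.h`` for the exact expectations:

  .. code-block:: c

     #include <string.h>

     #include <rte_crypto_sym.h>

     /* AES-GCM AEAD xform where iv.length == 0 indicates that the
      * application provides J0 directly instead of a 96-bit IV. */
     static void
     fill_gcm_j0_xform(struct rte_crypto_sym_xform *xform, const uint8_t *key)
     {
         memset(xform, 0, sizeof(*xform));
         xform->type = RTE_CRYPTO_SYM_XFORM_AEAD;
         xform->aead.op = RTE_CRYPTO_AEAD_OP_ENCRYPT;
         xform->aead.algo = RTE_CRYPTO_AEAD_AES_GCM;
         xform->aead.key.data = key;
         xform->aead.key.length = 16;   /* AES-128 */
         xform->aead.iv.offset = 0;
         xform->aead.iv.length = 0;     /* zero length IV selects J0 mode */
         xform->aead.digest_length = 16;
     }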

* **Updated the QuickAssist Technology (QAT) Compression PMD.**

  Added special buffer handling when the internal QAT intermediate buffer is
  too small for the Huffman dynamic compression operation. Instead of falling
  back to fixed compression, the operation is now split into multiple smaller
  dynamic compression requests (which are possible to execute on QAT) and
  their results are then combined and copied into the output buffer. This is
  not possible if any checksum calculation was requested; in such cases the
  code falls back to fixed compression as before.

* **Updated the turbo_sw bbdev PMD.**

  Added support for large size code blocks which do not fit in one mbuf
  segment.

* **Added Intel FPGA_5GNR_FEC bbdev PMD.**

  Added a new ``fpga_5gnr_fec`` bbdev driver for the Intel\ |reg| FPGA PAC
  (Programmable Acceleration Card) N3000. See the
  :doc:`../bbdevs/fpga_5gnr_fec` BBDEV guide for more details on this new driver.

* **Updated the DSW event device.**

  Updated the DSW PMD with new features and improvements, including:

  * Improved flow migration mechanism, allowing faster and more
    accurate load balancing.
  * Improved behavior on high-core count systems.
  * Reduced latency in low-load situations.
  * Extended DSW xstats with migration and load-related statistics.

* **Updated ipsec-secgw sample application.**

  Updated the ``ipsec-secgw`` sample application with the following features:

  * Updated the application to add event based packet processing. The worker
    thread(s) receive events and submit them back to the event device after
    processing. In this way, multicore scaling and HW assisted scheduling are
    achieved by making use of the event device capabilities. The event mode
    currently only supports inline IPsec protocol offload.

  * Updated the application to support key sizes for the AES-192-CBC,
    AES-192-GCM and AES-256-GCM algorithms.

  * Added IPsec inbound load-distribution support for the application using
    the NIC load distribution feature (Flow Director).

* **Updated Telemetry Library.**

  The updated Telemetry library has been significantly improved in relation to
  the original version to make it more accessible and scalable:

  * It now enables DPDK libraries and applications to provide their own
    specific telemetry information, rather than being limited to what could be
    reported through the metrics library.

  * It is no longer dependent on the external Jansson library, which allows
    Telemetry to be enabled by default.

  * The socket handling has been simplified, making it easier for clients to
    connect and retrieve information.

* **Added the rte_graph library.**

  The Graph architecture abstracts the data processing functions as ``nodes``
  and ``links`` them together to create a complex ``graph`` to enable
  reusable/modular data processing functions. The graph library provides APIs
  to enable graph framework operations such as create, lookup, dump and
  destroy on graphs, and node operations such as clone, edge update and edge
  shrink. The API also allows the creation of a stats cluster to monitor
  per graph and per node statistics.

* **Added the rte_node library.**

  Added the ``rte_node`` library that consists of nodes used by the
  ``rte_graph`` library. Each node performs a specific packet processing
  function based on the application configuration.

  The following nodes are added (a short usage sketch follows the list):

  * Null node: A skeleton node that defines the general structure of a node.
  * Ethernet device node: Consists of Ethernet Rx/Tx nodes as well as Ethernet
    control APIs.
  * IPv4 lookup node: Consists of IPv4 extract and LPM lookup node. Routes can
    be configured by the application through the ``rte_node_ip4_route_add``
    function.
  * IPv4 rewrite node: Consists of IPv4 and Ethernet header rewrite
    functionality that can be configured through the
    ``rte_node_ip4_rewrite_add`` function.
  * Packet drop node: Frees received packets back to their respective mempools.
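
  A condensed sketch of how an application might use the two libraries
  together, modelled loosely on the ``l3fwd-graph`` example. The node pattern
  strings, route values and exact parameter lists below are assumptions to be
  checked against the rte_graph and rte_node API references:

  .. code-block:: c

     #include <rte_common.h>
     #include <rte_graph.h>
     #include <rte_graph_worker.h>
     #include <rte_ip.h>
     #include <rte_memory.h>
     #include <rte_node_ip4_api.h>

     /* Build a graph from existing nodes, add one route and walk it. */
     static void
     graph_worker(void)
     {
         static const char *patterns[] = {
             "ethdev_rx-*", "ip4_lookup", "ip4_rewrite", "ethdev_tx-*",
             "pkt_drop",
         };
         struct rte_graph_param prm = {
             .socket_id = SOCKET_ID_ANY,
             .nb_node_patterns = RTE_DIM(patterns),
             .node_patterns = patterns,
         };
         struct rte_graph *graph;

         if (rte_graph_create("worker0", &prm) == RTE_GRAPH_ID_INVALID)
             return;

         /* 10.0.0.0/24 -> next hop 0, handed to the rewrite node. */
         rte_node_ip4_route_add(RTE_IPV4(10, 0, 0, 0), 24, 0,
                                RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);

         graph = rte_graph_lookup("worker0");
         while (graph != NULL)    /* typical forever loop on a worker core */
             rte_graph_walk(graph);
     }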

* **Added new l3fwd-graph sample application.**

  Added an example application ``l3fwd-graph``. This demonstrates the usage of
  the graph library and node library for packet processing. In addition to the
  library usage demonstration, this application can be used for performance
  comparison of the existing ``l3fwd`` (static code without any nodes) with
  the modular ``l3fwd-graph`` approach.

* **Updated the testpmd application.**

  Added a new cmdline option ``--rx-mq-mode`` which can be used to test a
  PMD's behaviour when handling Rx multi-queue (mq) mode.

* **Added support for GCC 10.**

  Added support for building with GCC 10.1.


API Changes
-----------

* mempool: The API of ``rte_mempool_populate_iova()`` and
  ``rte_mempool_populate_virt()`` changed to return 0 instead of ``-EINVAL``
  when there is not enough room to store one object. A calling sketch
  follows below.
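
  A small sketch of the calling pattern this change affects. The
  ``populate_chunks`` wrapper and ``struct chunk`` are placeholders invented
  for illustration; only the return-value handling matters here:

  .. code-block:: c

     #include <errno.h>

     #include <rte_mempool.h>

     struct chunk {
         char *vaddr;
         rte_iova_t iova;
         size_t len;
     };

     /* Populate a mempool from a set of memory chunks, tolerating chunks
      * that are too small to hold a single object: since 20.05 these
      * yield 0 instead of -EINVAL, so only negative values are errors. */
     static int
     populate_chunks(struct rte_mempool *mp, const struct chunk *chunks,
                     unsigned int nb_chunks)
     {
         unsigned int i, added = 0;

         for (i = 0; i < nb_chunks; i++) {
             int n = rte_mempool_populate_iova(mp, chunks[i].vaddr,
                                               chunks[i].iova,
                                               chunks[i].len, NULL, NULL);
             if (n < 0)
                 return n;   /* genuine error */
             added += n;     /* n == 0 means the chunk was too small */
         }
         return added == mp->size ? 0 : -ENOMEM;
     }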


ABI Changes
-----------

* No ABI change that would break compatibility with DPDK 20.02 and 19.11.


Tested Platforms
----------------

* Intel\ |reg| platforms with Broadcom\ |reg| NICs combinations

  * CPU:

    * Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2650 v2 @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2667 v3 @ 3.20GHz
    * Intel\ |reg| Xeon\ |reg| Gold 6142 CPU @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| Silver 4110 CPU @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 8.1
    * Red Hat Enterprise Linux Server release 7.6
    * Red Hat Enterprise Linux Server release 7.5
    * Ubuntu 16.04
    * CentOS 8.1
    * CentOS 7.7

  * upstream kernel:

    * Linux 5.3

  * NICs:

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P225p (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Firmware version: 214.4.81.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P425p (4x25G)

      * Host interface: PCI Express 3.0 x16
      * Firmware version: 216.4.259.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P2100G (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Firmware version: 216.1.259.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P425p (4x25G)

      * Host interface: PCI Express 4.0 x16
      * Firmware version: 216.1.259.0 and above

    * Broadcom\ |reg| NetXtreme-E\ |reg| Series P2100G (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Firmware version: 216.1.259.0 and above

* Intel\ |reg| platforms with Intel\ |reg| NICs combinations

  * CPU:

    * Intel\ |reg| Atom\ |trade| CPU C3758 @ 2.20GHz
    * Intel\ |reg| Atom\ |trade| CPU C3858 @ 2.00GHz
    * Intel\ |reg| Atom\ |trade| CPU C3958 @ 2.00GHz
    * Intel\ |reg| Xeon\ |reg| CPU D-1541 @ 2.10GHz
    * Intel\ |reg| Xeon\ |reg| CPU D-1553N @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2680 0 @ 2.70GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2680 v2 @ 2.80GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2699 v3 @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2699 v4 @ 2.20GHz
    * Intel\ |reg| Xeon\ |reg| Gold 5218N CPU @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| Gold 6139 CPU @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| Gold 6252N CPU @ 2.30GHz
    * Intel\ |reg| Xeon\ |reg| Platinum 8180 CPU @ 2.50GHz
    * Intel\ |reg| Xeon\ |reg| Platinum 8280M CPU @ 2.70GHz

  * OS:

    * CentOS 7.7
    * CentOS 8.0
    * Fedora 32
    * FreeBSD 12.1
    * OpenWRT 19.07
    * Red Hat Enterprise Linux Server release 8.0
    * Red Hat Enterprise Linux Server release 7.7
    * Suse15 SP1
    * Ubuntu 16.04
    * Ubuntu 18.04
    * Ubuntu 20.04

  * NICs:

    * Intel\ |reg| 82599ES 10 Gigabit Ethernet Controller

      * Firmware version: 0x61bf0001
      * Device id (pf/vf): 8086:10fb / 8086:10ed
      * Driver version: 5.6.5 (ixgbe)

    * Intel\ |reg| Corporation Ethernet Connection X552/X557-AT 10GBASE-T

      * Firmware version: 0x800003e7
      * Device id (pf/vf): 8086:15ad / 8086:15a8
      * Driver version: 5.1.0-k (ixgbe)

    * Intel\ |reg| Corporation Ethernet Controller 10G X550T

      * Firmware version: 0x80000482
      * Device id (pf): 8086:1563
      * Driver version: 5.6.5 (ixgbe)

    * Intel\ |reg| Ethernet Converged Network Adapter X710-DA4 (4x10G)

      * Firmware version: 7.20 0x800079e8 1.2585.0
      * Device id (pf/vf): 8086:1572 / 8086:154c
      * Driver version: 2.11.29 (i40e)

    * Intel\ |reg| Corporation Ethernet Connection X722 for 10GbE SFP+ (4x10G)

      * Firmware version: 4.11 0x80001def 1.1999.0
      * Device id (pf/vf): 8086:37d0 / 8086:37cd
      * Driver version: 2.11.29 (i40e)

    * Intel\ |reg| Corporation Ethernet Connection X722 for 10GBASE-T (2x10G)

      * Firmware version: 4.10 0x80001a7a
      * Device id (pf/vf): 8086:37d2 / 8086:37cd
      * Driver version: 2.11.29 (i40e)

    * Intel\ |reg| Ethernet Converged Network Adapter XXV710-DA2 (2x25G)

      * Firmware version: 7.30 0x800080a2 1.2658.0
      * Device id (pf/vf): 8086:158b / 8086:154c
      * Driver version: 2.11.27_rc13 (i40e)

    * Intel\ |reg| Ethernet Converged Network Adapter XL710-QDA2 (2X40G)

      * Firmware version: 7.30 0x800080ab 1.2658.0
      * Device id (pf/vf): 8086:1583 / 8086:154c
      * Driver version: 2.11.27_rc13 (i40e)

    * Intel\ |reg| Corporation I350 Gigabit Network Connection

      * Firmware version: 1.63, 0x80000cbc
      * Device id (pf/vf): 8086:1521 / 8086:1520
      * Driver version: 5.4.0-k (igb)

    * Intel\ |reg| Corporation I210 Gigabit Network Connection

      * Firmware version: 3.25, 0x800006eb
      * Device id (pf): 8086:1533
      * Driver version: 5.6.5 (igb)

    * Intel\ |reg| Ethernet Controller 10-Gigabit X540-AT2

      * Firmware version: 0x800005f9
      * Device id (pf): 8086:1528
      * Driver version: 5.1.0-k (ixgbe)

    * Intel\ |reg| Ethernet Converged Network Adapter X710-T2L

      * Firmware version: 7.30 0x80008061 1.2585.0
      * Device id (pf): 8086:15ff
      * Driver version: 2.11.27_rc13 (i40e)

* Intel\ |reg| platforms with Mellanox\ |reg| NICs combinations

  * CPU:

    * Intel\ |reg| Xeon\ |reg| Gold 6154 CPU @ 3.00GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2697A v4 @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2697 v3 @ 2.60GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2680 v2 @ 2.80GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2650 v4 @ 2.20GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2640 @ 2.50GHz
    * Intel\ |reg| Xeon\ |reg| CPU E5-2620 v4 @ 2.10GHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.5 (Maipo)
    * Red Hat Enterprise Linux Server release 7.4 (Maipo)
    * Red Hat Enterprise Linux Server release 7.3 (Maipo)
    * Red Hat Enterprise Linux Server release 7.2 (Maipo)
    * Ubuntu 18.04
    * Ubuntu 16.04

  * OFED:

    * MLNX_OFED 4.7-3.2.9.0
    * MLNX_OFED 5.0-2.1.8.0 and above

  * upstream kernel:

    * Linux 5.7.0-rc5 and above

  * rdma-core:

    * rdma-core-29.0-1 and above

  * NICs:

    * Mellanox\ |reg| ConnectX\ |reg|-3 Pro 40G MCX354A-FCC_Ax (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox\ |reg| ConnectX\ |reg|-3 Pro 40G MCX354A-FCCT (2x40G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1007
      * Firmware version: 2.42.5000

    * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 25G MCX4121A-ACAT (2x25G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-4 Lx 50G MCX4131A-GCAT (1x50G)

      * Host interface: PCI Express 3.0 x8
      * Device ID: 15b3:1015
      * Firmware version: 14.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX516A-CCAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-EDAT (2x100G)

      * Host interface: PCI Express 3.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-5 Ex EN 100G MCX516A-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1019
      * Firmware version: 16.27.2008 and above

    * Mellanox\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:101d
      * Firmware version: 22.27.2008 and above

* IBM Power 9 platforms with Mellanox\ |reg| NICs combinations

  * CPU:

    * POWER9 2.2 (pvr 004e 1202) 2300MHz

  * OS:

    * Red Hat Enterprise Linux Server release 7.6

  * NICs:

    * Mellanox\ |reg| ConnectX\ |reg|-5 100G MCX556A-ECAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:1017
      * Firmware version: 16.27.2008

    * Mellanox\ |reg| ConnectX\ |reg|-6 Dx 100G MCX623106AN-CDAT (2x100G)

      * Host interface: PCI Express 4.0 x16
      * Device ID: 15b3:101d
      * Firmware version: 22.27.2008

  * OFED:

    * MLNX_OFED 5.0-2.1.8.0

* ARMv8 SoC combinations from Marvell (with integrated NICs)

  * SoC:

    * CN83xx, CN96xx, CN93xx

  * OS (Based on Marvell OCTEON TX SDK-10.3.2.0-PR12):

    * Arch Linux
    * Buildroot 2018.11
    * Ubuntu 16.04.1 LTS
    * Ubuntu 16.10
    * Ubuntu 18.04.1
    * Ubuntu 19.04