..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2016,2020-2021 NXP


DPAA2 Poll Mode Driver
======================

The DPAA2 NIC PMD (**librte_net_dpaa2**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA2** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.

NXP DPAA2 (Data Path Acceleration Architecture Gen2)
----------------------------------------------------

This section provides an overview of the NXP DPAA2 architecture
and how it is integrated into the DPDK.

Contents summary

- DPAA2 overview
- Overview of DPAA2 objects
- DPAA2 driver architecture overview

.. _dpaa2_overview:

DPAA2 Overview
~~~~~~~~~~~~~~

Reference: `FSL MC BUS in Linux Kernel <https://www.kernel.org/doc/readme/drivers-staging-fsl-mc-README.txt>`_.

DPAA2 is a hardware architecture designed for high-speed network
packet processing. DPAA2 consists of sophisticated mechanisms for
processing Ethernet packets, queue management, buffer management,
autonomous L2 switching, virtual Ethernet bridging, and accelerator
(e.g. crypto) sharing.

A DPAA2 hardware component called the Management Complex (or MC) manages the
DPAA2 hardware resources. The MC provides an object-based abstraction for
software drivers to use the DPAA2 hardware.

The MC uses DPAA2 hardware resources such as queues, buffer pools, and
network ports to create functional objects/devices such as network
interfaces, an L2 switch, or accelerator instances.

The MC provides memory-mapped I/O command interfaces (MC portals)
which DPAA2 software drivers use to operate on DPAA2 objects.

The diagram below shows an overview of the DPAA2 resource management
architecture:

.. code-block:: console

   +--------------------------------------+
   |                  OS                  |
   |                        DPAA2 drivers |
   |                             |        |
   +-----------------------------|--------+
                                 |
                                 | (create,discover,connect
                                 |  config,use,destroy)
                                 |
                   DPAA2         |
   +------------------------| mc portal |-+
   |                             |        |
   |   +- - - - - - - - - - - - -V- - -+  |
   |   |                               |  |
   |   |   Management Complex (MC)     |  |
   |   |                               |  |
   |   +- - - - - - - - - - - - - - - -+  |
   |                                      |
   |   Hardware                  Hardware |
   |   Resources                 Objects  |
   |   ---------                 -------  |
   |   -queues                   -DPRC    |
   |   -buffer pools             -DPMCP   |
   |   -Eth MACs/ports           -DPIO    |
   |   -network interface        -DPNI    |
   |    profiles                 -DPMAC   |
   |   -queue portals            -DPBP    |
   |   -MC portals               ...      |
   |    ...                               |
   |                                      |
   +--------------------------------------+

The MC mediates operations such as create, discover,
connect, configuration, and destroy. Fast-path operations
on data, such as packet transmit/receive, are not mediated by
the MC and are done directly using memory mapped regions in
DPIO objects.

Overview of DPAA2 Objects
~~~~~~~~~~~~~~~~~~~~~~~~~

This section provides a brief overview of some key DPAA2 objects.
A simple scenario is described illustrating the objects involved
in creating a network interface.

DPRC (Datapath Resource Container)

   A DPRC is a container object that holds all the other
   types of DPAA2 objects. In the example diagram below there
   are 8 objects of 5 types (DPMCP, DPIO, DPBP, DPNI, and DPMAC)
   in the container.

.. code-block:: console

   +---------------------------------------------------------+
   |                          DPRC                           |
   |                                                         |
   |  +-------+  +-------+  +-------+  +-------+  +-------+  |
   |  | DPMCP |  | DPIO  |  | DPBP  |  | DPNI  |  | DPMAC |  |
   |  +-------+  +-------+  +-------+  +---+---+  +---+---+  |
   |  | DPMCP |  | DPIO  |                                   |
   |  +-------+  +-------+                                   |
   |  | DPMCP |                                              |
   |  +-------+                                              |
   |                                                         |
   +---------------------------------------------------------+

From the point of view of an OS, a DPRC behaves similar to a plug and
play bus, like PCI. DPRC commands can be used to enumerate the contents
of the DPRC and discover the hardware objects present (including mappable
regions and interrupts).

.. code-block:: console

   DPRC.1 (bus)
     |
     +--+--------+-------+-------+-------+
        |        |       |       |       |
      DPMCP.1  DPIO.1  DPBP.1  DPNI.1  DPMAC.1
      DPMCP.2  DPIO.2
      DPMCP.3

Hardware objects can be created and destroyed dynamically, providing
the ability to hot plug/unplug objects in and out of the DPRC.

A DPRC has a mappable MMIO region (an MC portal) that can be used
to send MC commands. It has an interrupt for status events (like
hotplug).

All objects in a container share the same hardware "isolation context".
This means that with respect to an IOMMU the isolation granularity
is at the DPRC (container) level, not at the individual object
level.

DPRCs can be defined statically and populated with objects
via a config file passed to the MC when firmware starts
it. There is also a Linux user space tool called "restool"
that can be used to create/destroy containers and objects
dynamically.

DPAA2 Objects for an Ethernet Network Interface
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A typical Ethernet NIC is monolithic-- the NIC device contains TX/RX
queuing mechanisms, configuration mechanisms, buffer management,
physical ports, and interrupts.
DPAA2 uses a more granular approach
utilizing multiple hardware objects. Each object provides specialized
functions. Groups of these objects are used by software to provide
Ethernet network interface functionality. This approach provides
efficient use of finite hardware resources, flexibility, and
performance advantages.

The diagram below shows the objects needed for a simple
network interface configuration on a system with 2 CPUs.

.. code-block:: console

   +---+---+ +---+---+
      CPU0     CPU1
   +---+---+ +---+---+
       |         |
   +---+---+ +---+---+
     DPIO      DPIO
   +---+---+ +---+---+
         \     /
          \   /
           \ /
        +---+---+
          DPNI  --- DPBP,DPMCP
        +---+---+
            |
            |
        +---+---+
          DPMAC
        +---+---+
            |
         port/PHY

Below the objects are described. For each object a brief description
is provided along with a summary of the kinds of operations the object
supports and a summary of key resources of the object (MMIO regions
and IRQs).

DPMAC (Datapath Ethernet MAC): represents an Ethernet MAC, a
hardware device that connects to an Ethernet PHY and allows
physical transmission and reception of Ethernet frames.

- MMIO regions: none
- IRQs: DPNI link change
- commands: set link up/down, link config, get stats, IRQ config, enable, reset

DPNI (Datapath Network Interface): contains TX/RX queues,
network interface configuration, and RX buffer pool configuration
mechanisms. The TX/RX queues are in memory and are identified by
queue number.

- MMIO regions: none
- IRQs: link state
- commands: port config, offload config, queue config, parse/classify config, IRQ config, enable, reset

DPIO (Datapath I/O): provides interfaces to enqueue and dequeue
packets and do hardware buffer pool management operations. The DPAA2
architecture separates the mechanism to access queues (the DPIO object)
from the queues themselves.
The DPIO provides an MMIO interface to
enqueue/dequeue packets. To enqueue something a descriptor is written
to the DPIO MMIO region, which includes the target queue number.
There will typically be one DPIO assigned to each CPU. This allows all
CPUs to simultaneously perform enqueue/dequeue operations. DPIOs are
expected to be shared by different DPAA2 drivers.

- MMIO regions: queue operations, buffer management
- IRQs: data availability, congestion notification, buffer pool depletion
- commands: IRQ config, enable, reset

DPBP (Datapath Buffer Pool): represents a hardware buffer
pool.

- MMIO regions: none
- IRQs: none
- commands: enable, reset

DPMCP (Datapath MC Portal): provides an MC command portal.
Used by drivers to send commands to the MC to manage
objects.

- MMIO regions: MC command portal
- IRQs: command completion
- commands: IRQ config, enable, reset

Object Connections
~~~~~~~~~~~~~~~~~~

Some objects have explicit relationships that must
be configured:

- DPNI <--> DPMAC
- DPNI <--> DPNI
- DPNI <--> L2-switch-port

A DPNI must be connected to something such as a DPMAC,
another DPNI, or an L2 switch port. The DPNI connection
is made via a DPRC command.

.. code-block:: console

   +-------+  +-------+
   | DPNI  |  | DPMAC |
   +---+---+  +---+---+
       |          |
       +==========+

- DPNI <--> DPBP

A network interface requires a 'buffer pool' (DPBP object) which provides
a list of pointers to memory where received Ethernet data is to be copied.
The Ethernet driver configures the DPBPs associated with the network
interface.

Interrupts
~~~~~~~~~~

All interrupts generated by DPAA2 objects are message
interrupts.
At the hardware level message interrupts
generated by devices will normally have 3 components--
1) a non-spoofable 'device-id' expressed on the hardware
bus, 2) an address, 3) a data value.

In the case of DPAA2 devices/objects, all objects in the
same container/DPRC share the same 'device-id'.
For ARM-based SoCs this is the same as the stream ID.


DPAA2 DPDK - Poll Mode Driver Overview
--------------------------------------

This section provides an overview of the drivers for
DPAA2-- 1) the bus driver and associated "DPAA2 infrastructure"
drivers and 2) functional object drivers (such as Ethernet).

As described previously, a DPRC is a container that holds the other
types of DPAA2 objects. It is functionally similar to a plug-and-play
bus controller.

Each object in the DPRC is a Linux "device" and is bound to a driver.
The diagram below shows the dpaa2 drivers involved in a networking
scenario and the objects bound to each driver. A brief description
of each driver follows.

.. code-block:: console


                                        +------------+
                                        | DPDK DPAA2 |
                                        |     PMD    |
                                        +------------+       +------------+
                                        |  Ethernet  |.......|  Mempool   |
                     . . . . . . . . .  |   (DPNI)   |       |   (DPBP)   |
                   .                    +---+---+----+       +-----+------+
                  .                         ^   |                  .
                 .                          |   |<enqueue,         .
                .                           |   | dequeue>         .
               .                            |   |                  .
              .                         +---+---V----+             .
             .   . . . . . . . . . . . .| DPIO driver|             .
             .   .                      |  (DPIO)    |             .
             .   .                      +-----+------+             .
             .   .                      |   QBMAN    |             .
             .   .                      |   Driver   |             .
   +----+------+-------+                +-----+------+             .
   |     dpaa2 bus     |                      |                    .
   |   VFIO fslmc-bus  |......................|.....................
   |                   |                      |
   |     /bus/fslmc    |                      |
   +-------------------+                      |
                                              |
   ========================== HARDWARE ======|=======================
                                            DPIO
                                              |
                                            DPNI---DPBP
                                              |
                                            DPMAC
                                              |
                                             PHY
   ==========================================|========================

DPAA2 bus driver
~~~~~~~~~~~~~~~~

The DPAA2 bus driver is a rte_bus driver which scans the fsl-mc bus.
Key functions include:

- Reading the container and setting up the VFIO group
- Scanning and parsing the various MC objects and adding them to
  their respective device list

Additionally, it also provides the object driver for generic MC objects.

DPIO driver
~~~~~~~~~~~

The DPIO driver is bound to DPIO objects and provides services that allow
other drivers such as the Ethernet driver to enqueue and dequeue data for
their respective objects.
Key services include:

- Data availability notifications
- Hardware queuing operations (enqueue and dequeue of data)
- Hardware buffer pool management

To transmit a packet the Ethernet driver puts data on a queue and
invokes a DPIO API. For receive, the Ethernet driver registers
a data availability notification callback. To dequeue a packet
a DPIO API is used.

There is typically one DPIO object per physical CPU for optimum
performance, allowing different CPUs to simultaneously enqueue
and dequeue data.

The DPIO driver operates on behalf of all active DPAA2
drivers -- Ethernet, crypto, compression, etc.

DPBP based Mempool driver
~~~~~~~~~~~~~~~~~~~~~~~~~

The DPBP driver is bound to DPBP objects and provides services to
create a hardware offloaded packet buffer mempool.

DPAA2 NIC Driver
~~~~~~~~~~~~~~~~

The Ethernet driver is bound to a DPNI and implements the kernel
interfaces needed to connect the DPAA2 network interface to
the network stack.

Each DPNI corresponds to a DPDK network interface.

Features
^^^^^^^^

Features of the DPAA2 PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- MAC/VLAN filtering
- Packet type information
- Checksum offload
- Promiscuous mode
- Multicast mode
- Port hardware statistics
- Jumbo frames
- Link flow control
- Scatter and gather for TX and RX
- :ref:`Traffic Management API <dptmapi>`


Supported DPAA2 SoCs
--------------------

- LX2160A
- LS2084A/LS2044A
- LS2088A/LS2048A
- LS1088A/LS1048A

Prerequisites
-------------

See :doc:`../platform/dpaa2` for setup information.

Currently supported by DPDK:

- NXP LSDK **19.08+**.
- MC Firmware version **10.18.0** and higher.
- Supported architectures: **arm64 LE**.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to setup the basic DPDK environment.

.. note::

   Some of the fslmc bus code (mc flib - object library) routines are
   dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.


Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./dpdk-testpmd -c 0xff -n 1 -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx

      .....
      EAL: Registered [pci] bus.
      EAL: Registered [fslmc] bus.
      EAL: Detected 8 lcore(s)
      EAL: Probing VFIO support...
      EAL: VFIO support initialized
      .....
      PMD: DPAA2: Processing Container = dprc.2
      EAL: fslmc: DPRC contains = 51 devices
      EAL: fslmc: Bus scan completed
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:00:00:00:00:01
      Configuring Port 1 (socket 0)
      Port 1: 00:00:00:00:00:02
      .....
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>


* Use dev arg option ``drv_loopback=1`` to loopback packets at
  driver level. Any packet received will be reflected back by the
  driver on the same port. e.g. ``fslmc:dpni.1,drv_loopback=1``

* Use dev arg option ``drv_no_prefetch=1`` to disable prefetching
  of the packet pull command which is issued in the previous cycle.
  e.g. ``fslmc:dpni.1,drv_no_prefetch=1``

* Use dev arg option ``drv_tx_conf=1`` to enable TX confirmation mode.
  In this mode TX conf queues need to be polled to free the buffers.
  e.g. ``fslmc:dpni.1,drv_tx_conf=1``

* Use dev arg option ``drv_error_queue=1`` to enable the packet error queue.
  DPAA2 hardware drops error packets in hardware by default. This option
  configures the hardware to not drop error packets and lets the driver
  dump them, so that the user can check what is wrong with those packets.
  e.g. ``fslmc:dpni.1,drv_error_queue=1``

Enabling logs
-------------

For enabling logging for the DPAA2 PMD, the following log-level prefix can be used:

.. code-block:: console

   <dpdk app> <EAL args> --log-level=bus.fslmc:<level> -- ...

Using ``bus.fslmc`` as the log matching criterion, all FSLMC bus logs at or
below the given logging ``level`` can be enabled.

Or

.. code-block:: console

   <dpdk app> <EAL args> --log-level=pmd.net.dpaa2:<level> -- ...

Using ``pmd.net.dpaa2`` as the log matching criterion, all PMD logs at or
below the given logging ``level`` can be enabled.

Allowing & Blocking
-------------------

For blocking a DPAA2 device, the following command can be used:

.. code-block:: console

   <dpdk app> <EAL args> -b "fslmc:dpni.x" -- ...

Where x is the device object id as configured in the resource container.

Running secondary debug app without blocklist
---------------------------------------------

DPAA2 hardware imposes limits on some hardware access devices like the
Management Control Port and the hardware portal. This causes issues in
their shared usage in multi-process applications. It can be overcome by
using an allowlist/blocklist in the primary and secondary applications.

In order to ease usage of standard debugging apps like dpdk-procinfo, the
DPAA2 driver reserves an extra Management Control Port and hardware portal
which can be used by a debug application to debug any existing application
without blocking these devices in the primary process.

Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA2 drivers for DPDK can only work on NXP SoCs as listed in
``Supported DPAA2 SoCs``.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The DPAA2 SoC family supports a maximum frame length of 10240 bytes (jumbo
frames). The value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.

Other Limitations
~~~~~~~~~~~~~~~~~

- RSS hash key cannot be modified.
- RSS RETA cannot be configured.

.. _dptmapi:

Traffic Management API
----------------------

The DPAA2 PMD supports the generic DPDK Traffic Management API, which allows
configuring the following features:

1. Hierarchical scheduling
2. Traffic shaping

Internally TM is represented by a hierarchy (tree) of nodes.
A node which has a parent is called a leaf, whereas a node without
a parent is called a non-leaf (root).

Nodes hold the following types of settings:

- for egress scheduler configuration: weight
- for egress rate limiter: private shaper

The hierarchy is always constructed from the top, i.e. first a root node is
added, then some number of leaf nodes. The number of leaf nodes cannot exceed
the number of configured TX queues.

After the hierarchy is complete it can be committed.

For an additional description please refer to the DPDK :doc:`Traffic Management API <../prog_guide/traffic_management>`.

Supported Features
~~~~~~~~~~~~~~~~~~

The following capabilities are supported:

- Level0 (root node) and Level1 are supported.
- 1 private shaper at the root node (port level) is supported.
- 8 TX queues per port are supported (1 channel per port).
- Both SP and WFQ scheduling mechanisms are supported on all 8 queues.
- Congestion notification is supported. This means that if there is congestion
  on the network, the DPDK driver will not enqueue any packet (no taildrop or
  WRED).

The user can also check node and level capabilities using testpmd commands.

Usage example
~~~~~~~~~~~~~

For a detailed usage description please refer to the "Traffic Management" section in the DPDK :doc:`Testpmd Runtime Functions <../testpmd_app_ug/testpmd_funcs>`.

1. Run testpmd as follows:

   .. code-block:: console

      ./dpdk-testpmd -c 0xf -n 1 -- -i --portmask 0x3 --nb-cores=1 --txq=4 --rxq=4

2. Stop all ports:

   .. code-block:: console

      testpmd> port stop all

3. Add a shaper profile:

   One port level shaper and strict priority on all 4 queues of port 0:

   .. code-block:: console

      add port tm node shaper profile 0 1 104857600 64 100 0 0
      add port tm nonleaf node 0 8 -1 0 1 0 1 1 1 0
      add port tm leaf node 0 0 8 0 1 1 -1 0 0 0 0
      add port tm leaf node 0 1 8 1 1 1 -1 0 0 0 0
      add port tm leaf node 0 2 8 2 1 1 -1 0 0 0 0
      add port tm leaf node 0 3 8 3 1 1 -1 0 0 0 0
      port tm hierarchy commit 0 no

   or

   One port level shaper and WFQ on all 4 queues of port 0:

   .. code-block:: console

      add port tm node shaper profile 0 1 104857600 64 100 0 0
      add port tm nonleaf node 0 8 -1 0 1 0 1 1 1 0
      add port tm leaf node 0 0 8 0 200 1 -1 0 0 0 0
      add port tm leaf node 0 1 8 0 300 1 -1 0 0 0 0
      add port tm leaf node 0 2 8 0 400 1 -1 0 0 0 0
      add port tm leaf node 0 3 8 0 500 1 -1 0 0 0 0
      port tm hierarchy commit 0 no

4. Create flows as per the source IP addresses:

   .. code-block:: console

      flow create 1 group 0 priority 1 ingress pattern ipv4 src is \
      10.10.10.1 / end actions queue index 0 / end
      flow create 1 group 0 priority 2 ingress pattern ipv4 src is \
      10.10.10.2 / end actions queue index 1 / end
      flow create 1 group 0 priority 3 ingress pattern ipv4 src is \
      10.10.10.3 / end actions queue index 2 / end
      flow create 1 group 0 priority 4 ingress pattern ipv4 src is \
      10.10.10.4 / end actions queue index 3 / end

5. Start all ports:

   .. code-block:: console

      testpmd> port start all

6. Enable forwarding:

   .. code-block:: console

      testpmd> start

7. Inject traffic on port 1 as per the configured flows; you will see shaped
   and scheduled forwarded traffic on port 0.