..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2017,2020-2024 NXP


DPAA Poll Mode Driver
=====================

The DPAA NIC PMD (**librte_net_dpaa**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.

NXP DPAA (Data Path Acceleration Architecture - Gen 1)
------------------------------------------------------

This section provides an overview of the NXP DPAA architecture
and how it is integrated into the DPDK.

Contents summary

- DPAA overview
- DPAA driver architecture overview
- FMAN configuration tools and library

.. _dpaa_overview:

DPAA Overview
~~~~~~~~~~~~~

Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.

The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
components on specific QorIQ series multicore processors. This architecture
provides the infrastructure to support simplified sharing of networking
interfaces and accelerators by multiple CPU cores, and by the accelerators
themselves.

DPAA includes:

- Cores
- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate the flow of packets between the components above

Infrastructure components are:

- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
  It allows CPUs and other accelerators connected to the SoC datapath to
  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
  data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that
  allows software and accelerators on the datapath to acquire and release
  buffers in order to build frames.

Hardware accelerators are:

- SEC - Cryptographic accelerator
- PME - Pattern matching engine

The Network and packet I/O component:

- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
  distribution and policing. Each frame can be parsed and classified, and the
  results may be attached to the frame. This metadata can be used to select
  the particular QMan queue to which the packet is forwarded.


DPAA DPDK - Poll Mode Driver Overview
-------------------------------------

This section provides an overview of the drivers for DPAA:

* Bus driver and associated "DPAA infrastructure" drivers
* Functional object drivers (such as Ethernet).

A brief description of each driver is provided in the layout below as well as
in the following sections.

.. code-block:: console

                                       +------------+
                                       | DPDK DPAA  |
                                       |    PMD     |
                                       +-----+------+
                                             |
                                       +-----+------+       +---------------+
                                       :  Ethernet  :.......| DPDK DPAA     |
                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
                   .                   +---+---+----+       |  (BMAN)       |
                  .                        ^   |            +-----+---------+
                 .                         |   |<enqueue,         .
                .                          |   | dequeue>         .
               .                           |   |                  .
              .                        +---+---V----+             .
             .      . . . . . . . . . .: Portal drv :             .
            .      .                   :            :             .
           .      .                    +-----+------+             .
          .      .                     :   QMAN     :             .
         .      .                      :  Driver    :             .
    +----+------+-------+              +-----+------+             .
    |   DPDK DPAA Bus   |                    |                    .
    |   driver          |....................|.....................
    |   /bus/dpaa       |                    |
    +-------------------+                    |
                                             |
    ========================== HARDWARE =====|========================
                                            PHY
    =========================================|========================

In the above representation, solid lines represent components which interface
with the DPDK RTE framework and dotted lines represent DPAA internal components.

DPAA Bus driver
~~~~~~~~~~~~~~~

The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus.
Key functions include:

- Scanning and parsing the various objects and adding them to their respective
  device list.
- Probing available drivers against each scanned device.
- Creating the necessary Ethernet instances before passing control to the PMD.

DPAA NIC Driver (PMD)
~~~~~~~~~~~~~~~~~~~~~

The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
between the RTE framework and the DPAA internal components/drivers.

- Once devices have been identified by the DPAA bus, each device is associated
  with the PMD.
- The PMD is responsible for implementing the necessary glue layer between the
  RTE APIs and the lower level QMan and FMan blocks.
  The Ethernet driver is bound to a FMAN port and implements the interfaces
  needed to connect the DPAA network interface to the network stack.
  Each FMAN port corresponds to a DPDK network interface.
- The PMD also supports OH/ONIC mode, where the port works as a HW-assisted
  virtual port without actually connecting to a physical MAC.


Features
^^^^^^^^

  Features of the DPAA PMD are:

  - Multiple queues for TX and RX
  - Receive Side Scaling (RSS)
  - Packet type information
  - Checksum offload
  - Promiscuous mode
  - IEEE1588 PTP
  - OH Port for inter-application communication
  - ONIC virtual port support
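
  Several of these features can be exercised with standard testpmd options and
  commands; the example below is illustrative only and is not DPAA specific:

  .. code-block:: console

     ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 -- -i --rxq=2 --txq=2 --rss-udp
     testpmd> set promisc all on
     testpmd> start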


DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~

DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
Manager.

- Using the standard mempool operations RTE API, the mempool driver interfaces
  with RTE to service mempool creation, deletion, buffer allocation and
  deallocation requests.
- Each FMAN instance has a BMan pool attached to it during initialization.
  Each Tx frame can be automatically released by hardware if it was allocated
  from this pool.


Allowing & Blocking
-------------------

To block a DPAA device, the following command can be used:

.. code-block:: console

   <dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ...
   e.g. "dpaa_bus:fm1-mac4"
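
Similarly, to allow only specific DPAA devices, the standard EAL allow option
can be used with the same device naming (illustrative):

.. code-block:: console

   <dpdk app> <EAL args> -a "dpaa_bus:fmX-macY" -- ...
   e.g. "dpaa_bus:fm1-mac4"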

Supported DPAA SoCs
-------------------

- LS1043A/LS1023A
- LS1046A/LS1026A

Prerequisites
-------------

See :doc:`../platform/dpaa` for setup information.


- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
  to set up the basic DPDK environment.
- The DPAA driver depends on the kernel to perform various functionalities,
  so the kernel and DPDK versions should be compatible for proper working.
  Refer to the release notes of the NXP SDK to match the versions: `NXP LSDK GUIDE
  <https://www.nxp.com/design/software/embedded-software/linux-software-and-development-tools/layerscape-software-development-kit-v21-08:LAYERSCAPE-SDK>`_.

.. note::

   Some parts of the dpaa bus code (the qbman and fman library routines) are
   dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.

Configuration
-------------

Environment Variables
~~~~~~~~~~~~~~~~~~~~~

The DPAA driver uses the following environment variables to configure its
state during application initialization:

- ``DPAA_NUM_RX_QUEUES`` (default 1)

  This defines the number of Rx queues configured for an application, per
  port. Hardware distributes received packets across this number of queues
  on Rx.
  If the application is configured to use fewer queues than configured above,
  it might result in packet loss (because of the distribution).

- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)

  This defines the number of high performance queues to be used for ethdev Rx.
  These queues use one private HW portal per configured queue, so they are
  limited in the system. The first configured ethdev queues will automatically
  be assigned from these high performance PUSH queues. Any queue configuration
  beyond that will be standard Rx queues. The application can choose to change
  their number if HW portals are limited.
  The valid values are from '0' to '4'. The value shall be set to '0' if the
  application wants to use eventdev with the DPAA device.
  Currently these queues are not used on the LS1023/LS1043 platform by default.

- ``DPAA_DISPLAY_FRAME_AND_PARSER_RESULT`` (default 0)

  This is a debug flag which defines whether to dump the detailed frame
  and packet parsing result for incoming packets.
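
The example below is an illustrative sketch of setting these variables before
launching a DPDK application; the values shown are examples only:

.. code-block:: console

   export DPAA_NUM_RX_QUEUES=4
   export DPAA_PUSH_QUEUES_NUMBER=4
   export DPAA_DISPLAY_FRAME_AND_PARSER_RESULT=1
   ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 -- -i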


Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 \
        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx

      .....
      EAL: Registered [pci] bus.
      EAL: Registered [dpaa] bus.
      EAL: Detected 4 lcore(s)
      .....
      EAL: dpaa: Bus scan completed
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:00:00:00:00:01
      Configuring Port 1 (socket 0)
      Port 1: 00:00:00:00:00:02
      .....
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>

* Use the devargs option ``drv_ieee1588=1`` to enable IEEE 1588 support
  at the driver level, e.g. ``dpaa:fm1-mac3,drv_ieee1588=1``.
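
  An illustrative invocation passing this devargs string via the EAL allow
  option, following the bus-prefixed device naming used in the blocking
  example above (assumed syntax; adjust the device name for your board):

  .. code-block:: console

     ./<build_dir>/app/dpdk-testpmd -a "dpaa_bus:fm1-mac3,drv_ieee1588=1" -c 0xff -n 1 -- -i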

FMAN Config
-----------

The Frame Manager is also responsible for the parse, classify and distribute
functionality in the DPAA.

FMAN supports:

- Packet parsing at wire speed. It supports standard protocol parsing and
  identification by HW (VLAN/IP/UDP/TCP/SCTP/PPPoE/PPP/MPLS/GRE/IPSec),
  as well as non-standard UDF header parsing for custom protocols.
- Classification / distribution: coarse classification based on key generation,
  hash and exact match lookup.

FMC - FMAN Configuration Tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   This tool is available in user space and is used to configure FMAN
   physical (MAC) or ephemeral (OH) ports for parse/classify/distribute.
   The PCDs can be hash based, where a set of fields is the key input for hash
   generation within the FMAN keygen. The hash value is used to generate the
   FQID for the frame. There is also a provision to set up an exact match
   lookup, where field values within a packet drive the corresponding FQID.
   Currently it works on XML file inputs.

   Limitations:

   1. No support is currently available for dynamic configuration changes,
      e.g. enabling/disabling a port or an operator (a set of VLANs and
      associated rules).

   2. During FMC configuration, the port for which a policy is being configured
      is brought down and the existing policy is flushed from the port before
      the new policy is applied. Support is required to add/append/delete etc.

   3. FMC, being a separate user-space application, needs to be invoked from
      the shell.


   The details can be found in the FMC documentation:
   `Frame Manager Configuration Tool <https://www.nxp.com/docs/en/application-note/AN4760.pdf>`_.
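
   As an illustrative sketch, the tool is typically invoked with a system
   configuration XML and a PCD/policy XML and then applied; the file names
   below are placeholders and the exact options depend on the FMC version
   shipped with your BSP:

   .. code-block:: console

      fmc -c config.xml -p policy.xml -a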

FMLIB
~~~~~
   The Frame Manager library provides an API on top of the Frame Manager driver
   ioctl calls, which gives a user space application a simple way to configure
   driver parameters and PCD (parse - classify - distribute) rules.

   This is an alternative to the FMC based configuration. This library provides
   direct ioctl based interfaces for FMAN configuration as used by the FMC tool
   as well. This helps in overcoming the main limitation of FMC, i.e. the lack
   of dynamic configuration.

   The location of the fmd driver as used by FMLIB and FMC is as follows:
   `Kernel FMD Driver
   <https://source.codeaurora.org/external/qoriq/qoriq-components/linux/tree/drivers/net/ethernet/freescale/sdk_fman?h=linux-4.19-rt>`_.

OH Port
~~~~~~~
   An Offline (O/H) port is a type of hardware port
   which is able to dequeue and enqueue from/to a QMan queue.
   The FMan applies a Parse Classify Distribute (PCD) flow
   and (if configured to do so) enqueues the frame back in a QMan queue.

   The FMan is able to copy the frame into new buffers and enqueue back to the QMan.
   This means these ports can be used to send and receive packets
   between two applications as well.

   An O/H port has two queues:
   one to receive and one to send packets.
   All packets received on the Rx queue are looped back on the Tx queue.


        --------      Tx Packets      ---------
        | App  | - -  - - - - - - - > | O/H   |
        |      | < - - - - - - - - -  | Port  |
        --------      Rx Packets      ---------


ONIC
~~~~
   To use an OH port to communicate between two applications,
   we can assign the Rx port of an O/H port to Application 1
   and the Tx port to Application 2,
   so that Application 1 can send packets to Application 2.
   Similarly, we can assign the Tx port of another O/H port to Application 1
   and the Rx port to Application 2,
   so that Application 2 can send packets to Application 1.

   An ONIC port is logically defined to achieve this.
   Internally it uses one Rx queue of one O/H port
   and one Tx queue of another O/H port.
   To the application it behaves as a single O/H port.

   +------+         +------+        +------+        +------+        +------+
   |      |   Tx    |      |   Rx   | O/H  |   Tx   |      |   Rx   |      |
   |      | - - - > |      | -  - > | Port | -  - > |      | -  - > |      |
   |      |         |      |        |  1   |        |      |        |      |
   |      |         |      |        +------+        |      |        |      |
   | App  |         | ONIC |                        | ONIC |        | App  |
   |  1   |         | Port |                        | Port |        |  2   |
   |      |         |  1   |        +------+        |  2   |        |      |
   |      |   Rx    |      |   Tx   | O/H  |   Rx   |      |   Tx   |      |
   |      | < - - - |      | < - - -| Port | < - - -|      | < - - -|      |
   |      |         |      |        |  2   |        |      |        |      |
   +------+         +------+        +------+        +------+        +------+

   All the packets received by ONIC port 1 will be sent to ONIC port 2 and vice versa.
   These ports can be used by DPDK applications just like physical ports.


VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
   Storage profiles are a means to provide a virtualized interface. A range of
   storage profiles can be associated with Ethernet ports.
   They are selected during classification and specify how the frame should be
   written to memory and which buffer pool to select for packet storage in
   queues. Start and end margins of the buffer can also be configured.

Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA drivers for DPDK can only work on NXP SoCs as listed in
``Supported DPAA SoCs``.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The DPAA SoC family supports a maximum frame length of 10240 bytes (jumbo
frames). The value is fixed and cannot be changed. So, even when the
``rxmode.mtu`` member of ``struct rte_eth_conf`` is set to a value lower
than 10240, frames up to 10240 bytes can still reach the host interface.

Multiprocess Support
~~~~~~~~~~~~~~~~~~~~

The current version of the DPAA driver doesn't support multi-process applications
where I/O is performed using secondary processes. This feature will be
implemented in subsequent versions.
419