..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2017,2020 NXP


DPAA Poll Mode Driver
=====================

The DPAA NIC PMD (**librte_net_dpaa**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.

NXP DPAA (Data Path Acceleration Architecture - Gen 1)
------------------------------------------------------

This section provides an overview of the NXP DPAA architecture
and how it is integrated into the DPDK.

Contents summary

- DPAA overview
- DPAA driver architecture overview
- FMAN configuration tools and library

.. _dpaa_overview:

DPAA Overview
~~~~~~~~~~~~~

Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.

The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
components on specific QorIQ series multicore processors. This architecture
provides the infrastructure to support simplified sharing of networking
interfaces and accelerators by multiple CPU cores, and the accelerators
themselves.

DPAA includes:

- Cores
- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate the flow of packets between the components above

Infrastructure components are:

- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
  It allows CPUs and other accelerators connected to the SoC datapath to
  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
  data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that
  allows software and accelerators on the datapath to acquire and release
  buffers in order to build frames.

Hardware accelerators are:

- SEC - Cryptographic accelerator
- PME - Pattern matching engine

The Network and packet I/O component:

- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
  distribution and policing. Each frame can be parsed and classified, and the
  results may be attached to the frame. This metadata can be used to select
  the particular QMan queue to which the packet is forwarded.


DPAA DPDK - Poll Mode Driver Overview
-------------------------------------

This section provides an overview of the drivers for DPAA:

* Bus driver and associated "DPAA infrastructure" drivers
* Functional object drivers (such as Ethernet).

A brief description of each driver is provided in the layout below as well as
in the following sections.

.. code-block:: console

                                       +------------+
                                       | DPDK DPAA  |
                                       |    PMD     |
                                       +-----+------+
                                             |
                                       +-----+------+       +---------------+
                                       :  Ethernet  :.......| DPDK DPAA     |
                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
                   .                   +---+---+----+       |  (BMAN)       |
                  .                        ^   |            +-----+---------+
                 .                         |   |<enqueue,         .
                .                          |   | dequeue>         .
               .                           |   |                  .
              .                        +---+---V----+             .
             .      . . . . . . . . . .: Portal drv :             .
            .      .                   :            :             .
           .      .                    +-----+------+             .
          .      .                     :   QMAN     :             .
         .      .                      :  Driver    :             .
    +----+------+-------+              +-----+------+             .
    |   DPDK DPAA Bus   |                    |                    .
    |   driver          |....................|.....................
    |   /bus/dpaa       |                    |
    +-------------------+                    |
                                             |
    ========================== HARDWARE =====|========================
                                            PHY
    =========================================|========================

In the above representation, solid lines represent components which interface
with the DPDK RTE framework and dotted lines represent DPAA internal components.

DPAA Bus driver
~~~~~~~~~~~~~~~

The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus.
Key functions include:

- Scanning and parsing the various objects and adding them to their respective
  device lists.
- Performing a probe for available drivers against each scanned device.
- Creating the necessary Ethernet instance before passing control to the PMD.

DPAA NIC Driver (PMD)
~~~~~~~~~~~~~~~~~~~~~

The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
between the RTE framework and the DPAA internal components/drivers.

- Once devices have been identified by the DPAA bus, each device is associated
  with the PMD.
- The PMD is responsible for implementing the necessary glue layer between RTE
  APIs and the lower level QMan and FMan blocks.
  The Ethernet driver is bound to a FMAN port and implements the interfaces
  needed to connect the DPAA network interface to the network stack.
  Each FMAN port corresponds to a DPDK network interface.


Features
^^^^^^^^

Features of the DPAA PMD are:

- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- Packet type information
- Checksum offload
- Promiscuous mode

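For example, multiple Rx/Tx queues with IP-based RSS can be exercised from
testpmd using its standard options; the queue counts below are illustrative
only (see the ``DPAA_NUM_RX_QUEUES`` environment variable described later in
this guide for the related hardware distribution setting):

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 -- -i --rxq=4 --txq=4 --rss-ip
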
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~

DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
Manager.

- Using the standard mempool operations RTE API, the mempool driver interfaces
  with RTE to service each mempool creation, deletion, buffer allocation and
  deallocation request.
- Each FMAN instance has a BMan pool attached to it during initialization.
  Each Tx frame can be automatically released by hardware, if allocated from
  this pool.


Allowing & Blocking
-------------------

To block a DPAA device, the following command can be used:

.. code-block:: console

   <dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ...
   e.g. "dpaa_bus:fm1-mac4"

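Conversely, as a sketch assuming the standard EAL allow-list semantics apply
to the DPAA bus, a specific device can be allowed with the generic EAL ``-a``
option using the same device naming:

.. code-block:: console

   <dpdk app> <EAL args> -a "dpaa_bus:fmX-macY" -- ...
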
Supported DPAA SoCs
-------------------

- LS1043A/LS1023A
- LS1046A/LS1026A

Prerequisites
-------------

See :doc:`../platform/dpaa` for setup information.


- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
  to set up the basic DPDK environment.

.. note::

   Some parts of the dpaa bus code (the qbman and fman library routines) are
   dual licensed (BSD & GPLv2); however, they are used as BSD in DPDK in userspace.

Pre-Installation Configuration
------------------------------


Environment Variables
~~~~~~~~~~~~~~~~~~~~~

DPAA drivers use the following environment variables to configure their
state during application initialization:

- ``DPAA_NUM_RX_QUEUES`` (default 1)

  This defines the number of Rx queues configured for an application, per
  port. Hardware will distribute incoming packets across this number of Rx
  queues. If the application is configured to use fewer queues than this
  value, packet loss may result (because of the distribution).

- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)

  This defines the number of high performance queues to be used for ethdev Rx.
  These queues use one private HW portal per configured queue, so they are
  limited in the system. The first configured ethdev queues will automatically
  be assigned from these high performance PUSH queues. Any queue configuration
  beyond that will use standard Rx queues. The application can choose to
  change their number if HW portals are limited.
  The valid values are from '0' to '4'. The value shall be set to '0' if the
  application wants to use eventdev with the DPAA device.
  Currently these queues are not used by default on the LS1023/LS1043 platform.


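For example, to let the hardware distribute received packets across four Rx
queues per port before launching an application (the values here are
illustrative only):

.. code-block:: console

   export DPAA_NUM_RX_QUEUES=4
   ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 -- -i --rxq=4 --txq=4
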
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow the instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 \
        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx

      .....
      EAL: Registered [pci] bus.
      EAL: Registered [dpaa] bus.
      EAL: Detected 4 lcore(s)
      .....
      EAL: dpaa: Bus scan completed
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:00:00:00:00:01
      Configuring Port 1 (socket 0)
      Port 1: 00:00:00:00:00:02
      .....
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>

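   From the testpmd prompt, forwarding can then be started and verified with
   the standard testpmd commands (a usage sketch):

   .. code-block:: console

      testpmd> start
      testpmd> show port stats all
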
FMAN Config
-----------

The Frame Manager is also responsible for the parse, classify and distribute
functionality in the DPAA.

FMAN supports:

- Packet parsing at wire speed. It supports standard protocol parsing and
  identification by HW (VLAN/IP/UDP/TCP/SCTP/PPPoE/PPP/MPLS/GRE/IPSec),
  as well as non-standard UDF header parsing for custom protocols.
- Classification / Distribution: coarse classification based on key generation,
  hash and exact match lookup.

FMC - FMAN Configuration Tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This tool is available in user space and is used to configure FMAN
physical (MAC) or ephemeral (OH) ports for parse/classify/distribute.
The PCDs can be hash based, where a set of fields are the key input for hash
generation within the FMAN keygen. The hash value is used to generate the FQID
for the frame. There is also a provision to set up an exact match lookup, where
field values within a packet drive the corresponding FQID.
Currently it works on XML file inputs.

Limitations:

1. No support is currently available for dynamic configuration changes,
   e.g. enabling/disabling a port or an operator (a set of VLANs and
   associated rules).

2. During FMC configuration, the port for which a policy is being configured
   is brought down and the policy is flushed on the port before the new policy
   is applied. Support is required to add/append/delete etc.

3. FMC, being a separate user-space application, needs to be invoked from
   the shell.


The details can be found in the FMC documentation:
`Frame Manager Configuration Tool <https://www.nxp.com/docs/en/application-note/AN4760.pdf>`_.

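A typical invocation takes a configuration XML and a policy (PCD) XML and
applies them to the FMAN; the flag names below follow the NXP FMC
documentation and should be verified against your SDK's ``fmc --help``:

.. code-block:: console

   fmc -c <config.xml> -p <policy.xml> -a
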
FMLIB
~~~~~

The Frame Manager library provides an API on top of the Frame Manager driver
ioctl calls that gives a user space application a simple way to configure
driver parameters and PCD (parse - classify - distribute) rules.

This is an alternative to the FMC based configuration. The library provides
direct ioctl based interfaces for FMAN configuration as used by the FMC tool
as well. This helps in overcoming the main limitation of FMC, i.e. the lack
of dynamic configuration.

The location of the fmd driver as used by FMLIB and FMC is as follows:
`Kernel FMD Driver
<https://source.codeaurora.org/external/qoriq/qoriq-components/linux/tree/drivers/net/ethernet/freescale/sdk_fman?h=linux-4.19-rt>`_.

VSP (Virtual Storage Profile)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Storage profiles are a means to provide a virtualized interface. A range of
storage profiles can be associated with Ethernet ports.
They are selected during classification and specify how the frame should be
written to memory and which buffer pool to select for packet storage in
queues. The start and end margins of the buffer can also be configured.

Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA drivers for DPDK can only work on NXP SoCs as listed in
``Supported DPAA SoCs``.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The DPAA SoC family supports a maximum frame size of 10240 bytes (jumbo
frames). This value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.

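As an illustration, testpmd can be started with an enlarged maximum packet
length so that full-size jumbo frames are accepted by the application (the
hardware limit above applies regardless of this setting):

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -c 0xff -n 1 -- -i --max-pkt-len=10240
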
Multiprocess Support
~~~~~~~~~~~~~~~~~~~~

The current version of the DPAA driver doesn't support multi-process
applications where I/O is performed using secondary processes. This feature
will be implemented in subsequent versions.
348