..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2017 NXP


DPAA Poll Mode Driver
=====================

The DPAA NIC PMD (**librte_pmd_dpaa**) provides poll mode driver
support for the inbuilt NIC found in the **NXP DPAA** SoC family.

More information can be found at `NXP Official Website
<http://www.nxp.com/products/microcontrollers-and-processors/arm-processors/qoriq-arm-processors:QORIQ-ARM>`_.

NXP DPAA (Data Path Acceleration Architecture - Gen 1)
------------------------------------------------------

This section provides an overview of the NXP DPAA architecture
and how it is integrated into the DPDK.

Contents summary

- DPAA overview
- DPAA driver architecture overview

.. _dpaa_overview:

DPAA Overview
~~~~~~~~~~~~~

Reference: `FSL DPAA Architecture <http://www.nxp.com/assets/documents/data/en/white-papers/QORIQDPAAWP.pdf>`_.

The QorIQ Data Path Acceleration Architecture (DPAA) is a set of hardware
components on specific QorIQ series multicore processors. This architecture
provides the infrastructure to support simplified sharing of networking
interfaces and accelerators by multiple CPU cores, and by the accelerators
themselves.

DPAA includes:

- Cores
- Network and packet I/O
- Hardware offload accelerators
- Infrastructure required to facilitate the flow of packets between the components above

Infrastructure components are:

- The Queue Manager (QMan) is a hardware accelerator that manages frame queues.
  It allows CPUs and other accelerators connected to the SoC datapath to
  enqueue and dequeue Ethernet frames, thus providing the infrastructure for
  data exchange among CPUs and datapath accelerators.
- The Buffer Manager (BMan) is a hardware buffer pool management block that
  allows software and accelerators on the datapath to acquire and release
  buffers in order to build frames.

Hardware accelerators are:

- SEC - Cryptographic accelerator
- PME - Pattern matching engine

The Network and packet I/O component:

- The Frame Manager (FMan) is a key component in the DPAA and makes use of the
  DPAA infrastructure (QMan and BMan). FMan is responsible for packet
  distribution and policing. Each frame can be parsed and classified, and the
  results may be attached to the frame. This metadata can be used to select
  the particular QMan queue to which the packet is forwarded.


DPAA DPDK - Poll Mode Driver Overview
-------------------------------------

This section provides an overview of the drivers for DPAA:

* Bus driver and associated "DPAA infrastructure" drivers
* Functional object drivers (such as Ethernet).

A brief description of each driver is provided in the layout below, as well
as in the following sections.

.. code-block:: console

                                       +------------+
                                       | DPDK DPAA  |
                                       |    PMD     |
                                       +-----+------+
                                             |
                                       +-----+------+       +---------------+
                                       :  Ethernet  :.......| DPDK DPAA     |
                    . . . . . . . . .  :   (FMAN)   :       | Mempool driver|
                   .                   +---+---+----+       |  (BMAN)       |
                  .                        ^   |            +-----+---------+
                 .                         |   |<enqueue,         .
                .                          |   | dequeue>         .
               .                           |   |                  .
              .                        +---+---V----+             .
             .      . . . . . . . . . .: Portal drv :             .
            .      .                   :            :             .
           .      .                    +-----+------+             .
          .      .                     :   QMAN     :             .
         .      .                      :  Driver    :             .
    +----+------+-------+              +-----+------+             .
    |   DPDK DPAA Bus   |                    |                    .
    |   driver          |....................|.....................
    |   /bus/dpaa       |                    |
    +-------------------+                    |
                                             |
    ========================== HARDWARE =====|========================
                                            PHY
    =========================================|========================

In the above representation, solid lines represent components which interface
with the DPDK RTE framework and dotted lines represent DPAA internal components.

DPAA Bus driver
~~~~~~~~~~~~~~~

The DPAA bus driver is a ``rte_bus`` driver which scans the platform-like bus.
Key functions include:

- Scanning and parsing the various objects and adding them to their respective
  device list.
- Probing for available drivers against each scanned device.
- Creating the necessary Ethernet instances before passing control to the PMD.

DPAA NIC Driver (PMD)
~~~~~~~~~~~~~~~~~~~~~

The DPAA PMD is a traditional DPDK PMD which provides the necessary interface
between the RTE framework and DPAA internal components/drivers.

- Once devices have been identified by the DPAA bus, each device is associated
  with the PMD.
- The PMD is responsible for implementing the necessary glue layer between the
  RTE APIs and the lower level QMan and FMan blocks.
  The Ethernet driver is bound to a FMAN port and implements the interfaces
  needed to connect the DPAA network interface to the network stack.
  Each FMAN port corresponds to a DPDK network interface.


Features
^^^^^^^^

Features of the DPAA PMD are:

- Multiple queues for TX and RX (see the example after this list)
- Receive Side Scaling (RSS)
- Packet type information
- Checksum offload
- Promiscuous mode

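
For instance, multiple Rx/Tx queues can be requested through testpmd's
generic ``--rxq`` and ``--txq`` parameters (an illustrative invocation,
not a DPAA-specific requirement):

.. code-block:: console

    ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
      -- -i --rxq=4 --txq=4

When more than one Rx queue is used, the ``DPAA_NUM_RX_QUEUES`` environment
variable described later in this guide should be set to a matching value.
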
DPAA Mempool Driver
~~~~~~~~~~~~~~~~~~~

DPAA has a hardware offloaded buffer pool manager, called BMan, or Buffer
Manager.

- Using the standard mempool operations RTE API, the mempool driver interfaces
  with RTE to service mempool creation, deletion, buffer allocation and
  deallocation requests.
- Each FMAN instance has a BMan pool attached to it during initialization.
  Each Tx frame can be automatically released by hardware, if allocated from
  this pool.


Whitelisting & Blacklisting
---------------------------

To blacklist a DPAA device, the following command can be used:

.. code-block:: console

    <dpdk app> <EAL args> -b "dpaa_bus:fmX-macY" -- ...
    e.g. "dpaa_bus:fm1-mac4"

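
For example, to skip the ``fm1-mac4`` interface while running testpmd (an
illustrative invocation based on the testpmd example later in this guide):

.. code-block:: console

    ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 -b "dpaa_bus:fm1-mac4" -- -i
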
Supported DPAA SoCs
-------------------

- LS1043A/LS1023A
- LS1046A/LS1026A

Prerequisites
-------------

See :doc:`../platform/dpaa` for setup information.

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>`
  to setup the basic DPDK environment.

.. note::

   Some parts of the dpaa bus code (the qbman and fman library routines) are
   dual licensed (BSD & GPLv2); however, they are used under BSD in DPDK
   userspace.

Pre-Installation Configuration
------------------------------

Config File Options
~~~~~~~~~~~~~~~~~~~

The following options can be modified in the ``config`` file.
Please note that enabling debugging options may affect system performance.

- ``CONFIG_RTE_LIBRTE_DPAA_BUS`` (default ``n``)

  Toggle compilation of the ``librte_bus_dpaa`` driver. By default, it is
  enabled only in the defconfig_arm64-dpaa-* config.

- ``CONFIG_RTE_LIBRTE_DPAA_PMD`` (default ``n``)

  Toggle compilation of the ``librte_pmd_dpaa`` driver. By default, it is
  enabled only in the defconfig_arm64-dpaa-* config.

- ``CONFIG_RTE_LIBRTE_DPAA_DEBUG_DRIVER`` (default ``n``)

  Toggles display of bus configurations and enables a debugging queue to
  fetch error (Rx/Tx) packets to the driver. By default, packets with errors
  (like a wrong checksum) are dropped by the hardware.

- ``CONFIG_RTE_LIBRTE_DPAA_HWDEBUG`` (default ``n``)

  Enables debugging of the Queue and Buffer Manager layer which interacts
  with the DPAA hardware.

- ``CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS`` (default ``dpaa``)

  This is not a DPAA-specific configuration - it is a generic RTE config.
  For optimal performance and hardware utilization, it is expected that the
  DPAA mempool driver is used for mempools. For that, this configuration
  needs to be enabled.

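
For example, a build targeting DPAA hardware would typically carry the
following settings (a sketch combining the options described above; the
defconfig_arm64-dpaa-* configs already enable them):

.. code-block:: console

    CONFIG_RTE_LIBRTE_DPAA_BUS=y
    CONFIG_RTE_LIBRTE_DPAA_PMD=y
    CONFIG_RTE_MBUF_DEFAULT_MEMPOOL_OPS="dpaa"
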
Environment Variables
~~~~~~~~~~~~~~~~~~~~~

DPAA drivers use the following environment variables to configure their
state during application initialization:

- ``DPAA_NUM_RX_QUEUES`` (default 1)

  This defines the number of Rx queues configured for an application, per
  port. The hardware distributes incoming packets across this many queues
  on Rx. If the application is configured to use fewer queues than this
  value, it might result in packet loss (because of the distribution).

- ``DPAA_PUSH_QUEUES_NUMBER`` (default 4)

  This defines the number of high performance queues to be used for ethdev
  Rx. These queues use one private HW portal per configured queue, so their
  number is limited in the system. The first configured ethdev queues are
  automatically assigned from these high performance PUSH queues; any queue
  configuration beyond that uses standard Rx queues. The application can
  choose to change their number if HW portals are limited.
  The valid values are from '0' to '4'. The value shall be set to '0' if the
  application wants to use eventdev with the DPAA device.

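
A minimal usage sketch, assuming the variables described above, is to export
them in the shell before launching the application:

.. code-block:: console

    # Distribute Rx packets of each port across four queues
    export DPAA_NUM_RX_QUEUES=4
    # Disable the high performance PUSH queues (required for eventdev)
    export DPAA_PUSH_QUEUES_NUMBER=0
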
Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
        -- -i --portmask=0x3 --nb-cores=1 --no-flush-rx

      .....
      EAL: Registered [pci] bus.
      EAL: Registered [dpaa] bus.
      EAL: Detected 4 lcore(s)
      .....
      EAL: dpaa: Bus scan completed
      .....
      Configuring Port 0 (socket 0)
      Port 0: 00:00:00:00:00:01
      Configuring Port 1 (socket 0)
      Port 1: 00:00:00:00:00:02
      .....
      Checking link statuses...
      Port 0 Link Up - speed 10000 Mbps - full-duplex
      Port 1 Link Up - speed 10000 Mbps - full-duplex
      Done
      testpmd>

Limitations
-----------

Platform Requirement
~~~~~~~~~~~~~~~~~~~~

DPAA drivers for DPDK can only work on NXP SoCs as listed in the
``Supported DPAA SoCs``.

Maximum packet length
~~~~~~~~~~~~~~~~~~~~~

The DPAA SoC family supports a maximum packet length of 10240 bytes (jumbo
frames). The value is fixed and cannot be changed. So, even when the
``rxmode.max_rx_pkt_len`` member of ``struct rte_eth_conf`` is set to a value
lower than 10240, frames up to 10240 bytes can still reach the host interface.

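
For example, jumbo frames up to this limit can be exercised with testpmd's
generic ``--max-pkt-len`` parameter (an illustrative invocation; the
parameter is a standard testpmd option, not DPAA-specific):

.. code-block:: console

    ./arm64-dpaa-linuxapp-gcc/testpmd -c 0xff -n 1 \
      -- -i --max-pkt-len=10240
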
Multiprocess Support
~~~~~~~~~~~~~~~~~~~~

The current version of the DPAA driver does not support multi-process
applications where I/O is performed using secondary processes. This feature
will be implemented in subsequent versions.
318