..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2018 Intel Corporation.

ICE Poll Mode Driver
====================

The ice PMD (**librte_net_ice**) provides poll mode driver support for
10/25/50/100 Gbps Intel® Ethernet 800 Series Network Adapters based on
the Intel Ethernet Controller E810 and Intel Ethernet Connection E822/E823.

Linux Prerequisites
-------------------

- Follow the DPDK :ref:`Getting Started Guide for Linux <linux_gsg>` to set up the basic DPDK environment.

- To get better performance on Intel platforms, please follow the "How to get best performance with NICs on Intel platforms"
  section of the :ref:`Getting Started Guide for Linux <linux_gsg>`.

- Please follow the matching list below to download the specific kernel driver, firmware and DDP package from
  `https://www.intel.com/content/www/us/en/search.html?ws=text#q=e810&t=Downloads&layout=table`.

- To understand what a DDP package is and how it works, please review `Intel® Ethernet Controller E810 Dynamic
  Device Personalization (DDP) for Telecommunications Technology Guide <https://cdrdv2.intel.com/v1/dl/getContent/617015>`_.

- To understand DDP for COMMS usage with DPDK, please review `Intel® Ethernet 800 Series Telecommunication (Comms)
  Dynamic Device Personalization (DDP) Package <https://cdrdv2.intel.com/v1/dl/getContent/618651>`_.

Windows Prerequisites
---------------------

- Follow the :doc:`guide for Windows <../windows_gsg/run_apps>`
  to set up the basic DPDK environment.

- Identify the Intel® Ethernet adapter and get the latest NVM/FW version.

- To access any Intel® Ethernet hardware, load the NetUIO driver in place of the existing built-in (inbox) driver.

- To load the NetUIO driver, follow the steps described in the `dpdk-kmods repository
  <https://git.dpdk.org/dpdk-kmods/tree/windows/netuio/README.rst>`_.

- Loading of a private Dynamic Device Personalization (DDP) package is not supported on Windows.


Kernel driver, DDP and Firmware Matching List
---------------------------------------------

It is highly recommended to upgrade the ice kernel driver, firmware and DDP package
to avoid compatibility issues with the ice PMD.
The table below shows a summary of the DPDK versions
with the corresponding out-of-tree Linux kernel drivers, DDP packages and firmware.
The full list of in-tree and out-of-tree Linux kernel drivers from kernel.org
and Linux distributions that were tested and verified
is listed in the Tested Platforms section of the Release Notes for each release.

   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    DPDK   | Kernel Driver | OS Default DDP  | COMMS DDP | Wireless DDP | Firmware  |
   +===========+===============+=================+===========+==============+===========+
   |    20.11  |     1.3.2     |      1.3.20     |  1.3.24   |      N/A     |    2.3    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    21.02  |     1.4.11    |      1.3.24     |  1.3.28   |    1.3.4     |    2.4    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    21.05  |     1.6.5     |      1.3.26     |  1.3.30   |    1.3.6     |    3.0    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    21.08  |     1.7.16    |      1.3.27     |  1.3.31   |    1.3.7     |    3.1    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    21.11  |     1.7.16    |      1.3.27     |  1.3.31   |    1.3.7     |    3.1    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    22.03  |     1.8.3     |      1.3.28     |  1.3.35   |    1.3.8     |    3.2    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    22.07  |     1.9.11    |      1.3.30     |  1.3.37   |    1.3.10    |    4.0    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    22.11  |     1.10.1    |      1.3.30     |  1.3.37   |    1.3.10    |    4.1    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    23.03  |     1.11.1    |      1.3.30     |  1.3.40   |    1.3.10    |    4.2    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    23.07  |     1.12.6    |      1.3.35     |  1.3.45   |    1.3.13    |    4.3    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    23.11  |     1.13.7    |      1.3.36     |  1.3.46   |    1.3.14    |    4.4    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    24.03  |     1.13.7    |      1.3.35     |  1.3.45   |    1.3.13    |    4.4    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    24.07  |     1.14.11   |      1.3.36     |  1.3.46   |    1.3.14    |    4.5    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+
   |    24.11  |     1.15.4    |      1.3.36     |  1.3.46   |    1.3.14    |    4.6    |
   +-----------+---------------+-----------------+-----------+--------------+-----------+

Dynamic Device Personalization (DDP) package loading
----------------------------------------------------

The Intel E810 requires a programmable pipeline package
to be downloaded by the driver to support normal operations.
The E810 has limited functionality built in to allow PXE boot and other use cases,
but for DPDK use the driver must download a package file during the driver initialization stage.

The default DDP package file name is ``ice.pkg``.
For a specific NIC, the DDP package to be loaded can have a device-specific filename:
``ice-xxxxxx.pkg``, where 'xxxxxx' is the 64-bit PCIe Device Serial Number of the NIC.
For example, if the NIC's device serial number is 00-CC-BB-FF-FF-AA-05-68,
the device-specific DDP package filename is ``ice-00ccbbffffaa0568.pkg`` (in hexadecimal and all lower case).
A symbolic link to the DDP package file is also acceptable.
The same package file is used by both the kernel driver and the ICE PMD.
For more information, please review the README file from
`Intel® Ethernet 800 Series Dynamic Device Personalization (DDP) for Telecommunication (Comms) Package
<https://www.intel.com/content/www/us/en/download/19660/intel-ethernet-800-series-dynamic-device-personalization-ddp-for-telecommunication-comms-package.html>`_.
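
As an illustrative sketch (the BDF, package version and paths below are examples,
not required values), the device serial number can be read via ``lspci``
and used to create a device-specific symbolic link:

.. code-block:: console

   # Read the 64-bit PCIe Device Serial Number of the NIC (example BDF 18:00.0)
   lspci -s 18:00.0 -vv | grep Serial
   #   ... Device Serial Number 00-cc-bb-ff-ff-aa-05-68

   # Link a chosen package as the device-specific DDP package for this NIC
   ln -s /lib/firmware/intel/ice/ddp/ice-1.3.36.0.pkg \
         /lib/firmware/intel/ice/ddp/ice-00ccbbffffaa0568.pkg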

ICE PMD supports using a customized DDP search path.
The driver will read the search path from
``/sys/module/firmware_class/parameters/path`` as a ``CUSTOMIZED_PATH``.
During initialization, the driver searches the following paths in order:
``CUSTOMIZED_PATH``, ``/lib/firmware/updates/intel/ice/ddp`` and ``/lib/firmware/intel/ice/ddp``.
The device-specific DDP package has a higher loading priority than the default DDP package, ``ice.pkg``.
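
As a minimal sketch (assuming ``/opt/ice/ddp`` is a directory you control),
a customized search path can be set before starting the application:

.. code-block:: console

   # Point the kernel firmware search path at a custom DDP package location
   echo -n /opt/ice/ddp > /sys/module/firmware_class/parameters/path
   cat /sys/module/firmware_class/parameters/path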

.. note::

   Windows support: DDP packages are not supported on Windows.

Configuration
-------------

Runtime Configuration
~~~~~~~~~~~~~~~~~~~~~

- ``Safe Mode Support`` (default ``0``)

  If the driver fails to load the OS package, the driver initialization fails by default.
  If the user intends to use the device without the OS package, the ``devargs``
  parameter ``safe-mode-support`` can be used, for example::

    -a 80:00.0,safe-mode-support=1

  The driver will then initialize successfully and the device will enter Safe Mode.
  NOTE: In Safe Mode, only very limited features are available; features such as RSS,
  checksum offload, flow director and tunneling are all disabled.

- ``Default MAC Disable`` (default ``0``)

  Disabling the default MAC makes the device drop all packets by default;
  only packets matching filter rules will pass.

  The default MAC can be disabled by setting the devargs parameter ``default-mac-disable``,
  for example::

    -a 80:00.0,default-mac-disable=1

- ``DDP Package File``

  Rather than having the driver search for the DDP package to load,
  or to override which package is used,
  the ``ddp_pkg_file`` option can be used to provide the path to a specific package file.
  For example::

    -a 80:00.0,ddp_pkg_file=/path/to/ice-version.pkg

- ``Traffic Management Scheduling Levels``

  The DPDK Traffic Management (rte_tm) APIs can be used to configure the Tx scheduler on the NIC.
  From the 24.11 release, all available hardware layers are available to software.
  Earlier versions of DPDK only supported 3 levels in the scheduling hierarchy.
  To help with backward compatibility, the ``tm_sched_levels`` parameter
  can be used to limit the scheduler levels to the provided value.
  The provided value must be between 3 and 8.
  If the value provided is greater than the number of levels supported by the HW,
  the driver will use the hardware maximum value.
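
  For example, to limit the software-visible hierarchy to 5 levels
  (an illustrative value; anything from 3 to 8 is accepted)::

    -a 0000:88:00.0,tm_sched_levels=5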

- ``Protocol extraction for per queue``

  Configure Rx queues to do protocol extraction into the mbuf for protocol
  handling acceleration, such as checking TCP SYN packets quickly.

  The argument format is::

      18:00.0,proto_xtr=<queues:protocol>[<queues:protocol>...],field_offs=<offset>, \
      field_name=<name>
      18:00.0,proto_xtr=<protocol>,field_offs=<offset>,field_name=<name>

  Queues are grouped by ``(`` and ``)`` within the group. The ``-`` character
  is used as a range separator and ``,`` is used as a single number separator.
  The grouping ``()`` can be omitted for a single-element group. If no queues are
  specified, the PMD will use this protocol extraction type for all queues.
  ``field_offs`` is the offset of the mbuf dynamic field for protocol extraction data.
  ``field_name`` is the name of the mbuf dynamic field for protocol extraction data.
  ``field_offs`` and ``field_name`` are checked for validity. If they are invalid,
  the error ``Invalid field offset or name, no match dynfield`` is printed,
  and the protocol extraction function will not be enabled.

  The protocol is one of: ``vlan``, ``ipv4``, ``ipv6``, ``ipv6_flow``, ``tcp``, ``ip_offset``.

  .. code-block:: console

    dpdk-testpmd -c 0xff -- -i
    port stop 0
    port detach 0
    port attach 18:00.0,proto_xtr='[(1,2-3,8-9):tcp,10-13:vlan]',field_offs=92,field_name=pmd_dyn

  This setting means queues 1, 2-3 and 8-9 use TCP extraction, queues 10-13 use
  VLAN extraction, and the other queues run with no protocol extraction. The offset of the mbuf
  dynamic field is 92 for all queues with protocol extraction.

  .. code-block:: console

    dpdk-testpmd -c 0xff -- -i
    port stop 0
    port detach 0
    port attach 18:00.0,proto_xtr=vlan,proto_xtr='[(1,2-3,8-9):tcp,10-23:ipv6]', \
    field_offs=92,field_name=pmd_dyn

  This setting means queues 1, 2-3 and 8-9 use TCP extraction, queues 10-23 use
  IPv6 extraction, and the other queues use the default VLAN extraction. The offset of the mbuf
  dynamic field is 92 for all queues with protocol extraction.

  The extraction metadata is copied into the registered dynamic mbuf field, and
  the related dynamic mbuf flags are set.

  .. table:: Protocol extraction : ``vlan``

   +----------------------------+----------------------------+
   |           VLAN2            |           VLAN1            |
   +======+===+=================+======+===+=================+
   |  PCP | D |       VID       |  PCP | D |       VID       |
   +------+---+-----------------+------+---+-----------------+

  VLAN1 - single or EVLAN (first for QinQ).

  VLAN2 - C-VLAN (second for QinQ).

  .. table:: Protocol extraction : ``ipv4``

   +----------------------------+----------------------------+
   |           IPHDR2           |           IPHDR1           |
   +======+=======+=============+==============+=============+
   |  Ver |Hdr Len|    ToS      |      TTL     |  Protocol   |
   +------+-------+-------------+--------------+-------------+

  IPHDR1 - IPv4 header word 4, "TTL" and "Protocol" fields.

  IPHDR2 - IPv4 header word 0, "Ver", "Hdr Len" and "Type of Service" fields.

  .. table:: Protocol extraction : ``ipv6``

   +----------------------------+----------------------------+
   |           IPHDR2           |           IPHDR1           |
   +=====+=============+========+=============+==============+
   | Ver |Traffic class|  Flow  | Next Header |   Hop Limit  |
   +-----+-------------+--------+-------------+--------------+

  IPHDR1 - IPv6 header word 3, "Next Header" and "Hop Limit" fields.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.

  .. table:: Protocol extraction : ``ipv6_flow``

   +----------------------------+----------------------------+
   |           IPHDR2           |           IPHDR1           |
   +=====+=============+========+============================+
   | Ver |Traffic class|            Flow Label               |
   +-----+-------------+-------------------------------------+

  IPHDR1 - IPv6 header word 1, 16 low bits of the "Flow Label" field.

  IPHDR2 - IPv6 header word 0, "Ver", "Traffic class" and high 4 bits of
  "Flow Label" fields.

  .. table:: Protocol extraction : ``tcp``

   +----------------------------+----------------------------+
   |           TCPHDR2          |           TCPHDR1          |
   +============================+======+======+==============+
   |          Reserved          |Offset|  RSV |     Flags    |
   +----------------------------+------+------+--------------+

  TCPHDR1 - TCP header word 6, "Data Offset" and "Flags" fields.

  TCPHDR2 - Reserved

  .. table:: Protocol extraction : ``ip_offset``

   +----------------------------+----------------------------+
   |           IPHDR2           |           IPHDR1           |
   +============================+============================+
   |       IPv6 HDR Offset      |       IPv4 HDR Offset      |
   +----------------------------+----------------------------+

  IPHDR1 - Outer/Single IPv4 Header offset.

  IPHDR2 - Outer/Single IPv6 Header offset.

- ``Hardware debug mask log support`` (default ``0``)

  Users can enable the related hardware debug mask, such as ``ICE_DBG_NVM``::

    -a 0000:88:00.0,hw_debug_mask=0x80 --log-level=pmd.net.ice.driver:8

  These ``ICE_DBG_XXX`` masks are defined in ``drivers/net/intel/ice/base/ice_type.h``.

- ``1PPS out support``

  The E810 supports four single-ended GPIO signals (SDP[20:23]). The 1PPS
  signal is output via SDP[20:23]. Users can select the GPIO pin index flexibly:
  pin index 0 means SDP20, 1 means SDP21, and so on. For example::

    -a af:00.0,pps_out='[pin:0]'

- ``Low Rx latency`` (default ``0``)

  vRAN workloads require a low-latency DPDK interface for the fronthaul
  connection to the radio. By specifying ``1`` for the parameter
  ``rx_low_latency``, each completed Rx descriptor can be written immediately
  to host memory and the Rx interrupt latency can be reduced to 2us::

    -a 0000:88:00.0,rx_low_latency=1

  As a trade-off, this configuration may degrade packet processing performance
  due to the PCI bandwidth limitation.

- ``Tx Scheduler Topology Download``

  The default Tx scheduler topology exposed by the NIC,
  generally a 9-level topology of which 8 levels are SW configurable,
  may be updated by a new topology loaded from a DDP package file.
  The ``ddp_load_sched_topo`` option can be used to specify that the scheduler topology,
  if any, in the DDP package file being used should be loaded into the NIC.
  For example::

    -a 0000:88:00.0,ddp_load_sched_topo=1

  or::

    -a 0000:88:00.0,ddp_pkg_file=/path/to/pkg.file,ddp_load_sched_topo=1

- ``Tx diagnostics`` (default ``not enabled``)

  Set the ``devargs`` parameter ``mbuf_check`` to enable Tx diagnostics.
  For example, ``-a 81:00.0,mbuf_check=<case>`` or ``-a 81:00.0,mbuf_check=[<case1>,<case2>...]``.
  Thereafter, ``rte_eth_xstats_get()`` can be used to get the error counts,
  which are collected in the ``tx_mbuf_error_packets`` xstats.
  In testpmd these can be shown via: ``testpmd> show port xstats all``.
  Supported values for the ``case`` parameter are:

  * ``mbuf``: Check for a corrupted mbuf.
  * ``size``: Check min/max packet length according to the HW spec.
  * ``segment``: Check that the number of mbuf segments does not exceed HW limits.
  * ``offload``: Check for the use of an unsupported offload flag.
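
  As a brief sketch (the PCI address and the selected cases are illustrative),
  two checks can be enabled together and the counters inspected in testpmd::

    dpdk-testpmd -a 81:00.0,mbuf_check=[mbuf,size] -- -i
    testpmd> show port xstats 0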

Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

Features
--------

Vector PMD
~~~~~~~~~~

Vector PMDs for the Rx and Tx paths are selected automatically. The paths
are chosen based on two conditions (see the example after this list).

- ``CPU``
  On the x86 platform, the driver checks if the CPU supports AVX2.
  If it is supported, AVX2 paths will be chosen; if not, SSE is chosen.
  If the CPU supports AVX512 and the EAL argument ``--force-max-simd-bitwidth``
  is set to 512, AVX512 paths will be chosen.

- ``Offload features``
  The supported HW offload features are described in the document ``ice.ini``.
  A value of "P" means the offload feature is not supported by the vector path.
  If any unsupported features are used, the ICE vector PMD is disabled and the
  normal paths are chosen.
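
As an illustrative sketch (the PCI address and core list are examples), AVX512
paths can be requested explicitly on a CPU that supports them::

  dpdk-testpmd -l 0-3 -n 4 -a 18:00.0 --force-max-simd-bitwidth=512 -- -i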

Malicious driver detection (MDD)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is not appropriate to send a packet whose destination MAC address is
the sending port's own MAC address. If software tries to send such packets,
hardware will report an MDD event and drop them.

Applications based on DPDK should avoid sending such packets.

Device Config Function (DCF)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section describes the ICE DCF PMD, which shares the core module with the ICE
PMD and the iAVF PMD.

A DCF (Device Config Function) PMD binds to the device's trusted VF with ID 0;
it can act as the sole controlling entity to exercise advanced functionality (such
as switch and ACL rules) for the rest of the VFs.

The DCF PMD needs to advertise and acquire the DCF capability, which allows the DCF to
send the AdminQ commands that it would like to execute over to the PF and to receive
the corresponding responses from the PF.

Forward Error Correction (FEC)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The driver supports getting and setting the FEC mode, and getting the FEC capability.
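
As a brief sketch using testpmd (port 0 and the ``rs`` mode are illustrative;
``auto``, ``off`` and ``baser`` are also accepted)::

   testpmd> show port 0 fec capabilities
   testpmd> show port 0 fec_mode
   testpmd> set port 0 fec_mode rs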

Time Synchronisation
~~~~~~~~~~~~~~~~~~~~

The system operator can run a PTP (Precision Time Protocol) client application
to synchronise the time on the network card
(and optionally the time on the system) to the PTP master.

The ICE PMD supports PTP client applications that use the DPDK IEEE 1588 API
to communicate with the PTP master clock.
Note that the PTP client application needs to run on the PF
and add the ``--force-max-simd-bitwidth=64`` startup parameter to disable vector mode.

.. code-block:: console

   examples/dpdk-ptpclient -c f -n 3 -a 0000:ec:00.1 --force-max-simd-bitwidth=64 -- -T 1 -p 0x1 -c 1

Generic Flow Support
~~~~~~~~~~~~~~~~~~~~

The ice PMD provides support for the Generic Flow API (RTE_FLOW), enabling
users to offload various flow classification tasks to the E810 NIC.
The E810 NIC's packet processing pipeline consists of the following stages:

- Switch: Supports exact match and limited wildcard matching with a large flow
  capacity.

- ACL: Supports wildcard matching with a smaller flow capacity (DCF mode only).

- FDIR: Supports exact match with a large flow capacity (PF mode only).

- Hash: Supports RSS (PF mode only).

The ice PMD utilizes the ice_flow_engine structure to represent each of these
stages and leverages the rte_flow rule's ``group`` attribute for selecting the
appropriate engine for Switch, ACL, and FDIR operations:

- Group 0 maps to Switch
- Group 1 maps to ACL
- Group 2 maps to FDIR

In the case of RSS, it will only be selected if an ``RTE_FLOW_ACTION_RSS`` action
targets no queue group, and the group attribute is ignored.
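
As an illustration (the addresses, port ID and queue indices are hypothetical),
the ``group`` attribute selects the engine for a testpmd flow rule::

  flow create 0 ingress group 0 pattern eth / ipv4 src is 192.168.0.1 / end \
  actions queue index 2 / end
  flow create 0 ingress group 2 pattern eth / ipv4 dst is 192.168.0.2 / end \
  actions queue index 3 / end

The first rule is handled by the Switch engine (group 0),
the second by the FDIR engine (group 2).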

For each engine, a list of supported patterns is maintained in a global array
named ``ice_<engine>_supported_pattern``. The ice PMD will reject any rule with
a pattern that is not included in the supported list.

One notable feature is the ice PMD's ability to leverage the Raw pattern,
enabling protocol-agnostic flow offloading. Here is an example of creating
a rule that matches an IPv4 destination address of 1.2.3.4 and redirects it to
queue 3 using a raw pattern::

  flow create 0 ingress group 2 pattern raw \
  pattern spec \
  00000000000000000000000008004500001400004000401000000000000001020304 \
  pattern mask \
  000000000000000000000000000000000000000000000000000000000000ffffffff \
  end actions queue index 3 / mark id 3 / end

Currently, raw pattern support is limited to the FDIR and Hash engines.

Traffic Management Support
~~~~~~~~~~~~~~~~~~~~~~~~~~

The ice PMD provides support for the Traffic Management API (RTE_TM),
enabling users to configure and manage the traffic shaping and scheduling of transmitted packets.
By default, all available transmit scheduler layers are available for configuration,
allowing up to 2000 queues to be configured in a hierarchy of up to 8 levels.
The number of levels in the hierarchy can be adjusted via driver parameters:

* the default 9-level topology (8 levels usable) can be replaced by a new topology downloaded from a DDP file,
  using the driver parameter ``ddp_load_sched_topo=1``.
  Using this mechanism, if the number of levels is reduced,
  the possible fan-out of child-nodes from each level may be increased.
  The default topology is a 9-level tree with a fan-out of 8 at each level.
  Released DDP package files contain a 5-level hierarchy (4 levels usable),
  with increased fan-out at the lower 3 levels,
  e.g. 64 at levels 2 and 3, and 256 or more at the leaf-node level.

* the number of levels can be reduced
  by setting the driver parameter ``tm_sched_levels`` to a lower value.
  This scheme will reduce in software the number of editable levels,
  but will not affect the fan-out from each level.

For more details on how to configure a Tx scheduling hierarchy,
please refer to the ``rte_tm`` `API documentation <https://doc.dpdk.org/api/rte__tm_8h.html>`_.

Additional Options
++++++++++++++++++

- ``Disable ACL Engine`` (default ``enabled``)

  By default, all flow engines are enabled. If the user does not need the
  ACL engine related functions, the ``devargs`` parameter ``acl=off``
  can be set to disable the ACL engine and shorten the startup time.
  For example::

    -a 18:01.0,cap=dcf,acl=off

.. _figure_ice_dcf:

.. figure:: img/ice_dcf.*

   DCF Communication flow.

#. Create the VFs::

      echo 4 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs

#. Enable trust on VF0::

      ip link set dev enp24s0f0 vf 0 trust on

#. Bind VF0, and run testpmd with ``cap=dcf`` and port representors for VF 1 and 2::

      dpdk-testpmd -l 22-25 -n 4 -a 18:01.0,cap=dcf,representor=vf[1-2] -- -i

#. Monitor the VF2 interface network traffic::

      tcpdump -e -nn -i enp24s1f2

#. Create one flow to redirect the traffic to VF2 by DCF (assuming the representor port ID is 5)::

      flow create 0 priority 0 ingress pattern eth / ipv4 src is 192.168.0.2 \
      dst is 192.168.0.3 / end actions represented_port ethdev_port_id 5 / end

#. Send a packet with Scapy, and it should be displayed on tcpdump::

      sendp(Ether(src='3c:fd:fe:aa:bb:78', dst='00:00:00:01:02:03')/IP(src= \
      '192.168.0.2', dst='192.168.0.3')/TCP(flags='S')/Raw(load='XXXXXXXXXX'), \
      iface="enp24s0f0", count=10)

Sample Application Notes
------------------------

Vlan filter
~~~~~~~~~~~

The VLAN filter only works when promiscuous mode is off.

To start ``testpmd`` and add VLAN 10 to port 0:

.. code-block:: console

    ./app/dpdk-testpmd -l 0-15 -n 4 -- -i
    ...

    testpmd> rx_vlan add 10 0

Diagnostic Utilities
--------------------

Dump DDP Package
~~~~~~~~~~~~~~~~

Dump the runtime packet processing pipeline configuration into a binary file.
This helps the support team diagnose hardware configuration issues.

Usage::

    testpmd> ddp dump <port_id> <output_file>

Dump Switch Configurations
~~~~~~~~~~~~~~~~~~~~~~~~~~

Dump detailed hardware configurations related to the switch pipeline stage into a binary file.

Usage::

    testpmd> ddp dump switch <port_id> <output_file>

Dump Tx Scheduling Tree
~~~~~~~~~~~~~~~~~~~~~~~

Dump the runtime Tx scheduling tree into a DOT file.

Usage::

    testpmd> txsched dump <port_id> <brief|detail> <output_file>

In "brief" mode, all scheduling nodes in the tree are displayed.
In "detail" mode, each node's configuration parameters are also displayed.
581