..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(C) 2021 Marvell.

CNXK Poll Mode driver
=====================

The CNXK ETHDEV PMD (**librte_net_cnxk**) provides poll mode ethdev driver
support for the inbuilt network device found in **Marvell OCTEON CN9K/CN10K**
SoC family as well as for their virtual functions (VF) in SR-IOV context.

More information can be found at `Marvell Official Website
<https://www.marvell.com/embedded-processors/infrastructure-processors>`_.

Features
--------

Features of the CNXK Ethdev PMD are:

- Packet type information
- Promiscuous mode
- Jumbo frames
- SR-IOV VF
- Lock-free Tx queue
- Multiple queues for TX and RX
- Receive Side Scaling (RSS)
- MAC filtering
- Generic flow API
- Inner and Outer Checksum offload
- Port hardware statistics
- Link state information
- Link flow control
- MTU update
- Scatter-Gather IO support
- Vector Poll mode driver
- Debug utilities - Context dump and error interrupt support
- Support Rx interrupt
- Inline IPsec processing support
- Ingress meter support
- Queue based priority flow control support
- Port representors
- Represented port pattern matching and action
- Port representor pattern matching and action

Prerequisites
-------------

See :doc:`../platform/cnxk` for setup information.


Driver compilation and testing
------------------------------

Refer to the document :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
for details.

#. Running testpmd:

   Follow instructions available in the document
   :ref:`compiling and testing a PMD for a NIC <pmd_build_and_test>`
   to run testpmd.

   Example output:

   .. code-block:: console

      ./<build_dir>/app/dpdk-testpmd -c 0xc -a 0002:02:00.0 -- --portmask=0x1 --nb-cores=1 --port-topology=loop --rxq=1 --txq=1
      EAL: Detected 4 lcore(s)
      EAL: Detected 1 NUMA nodes
      EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
      EAL: Selected IOVA mode 'VA'
      EAL: No available hugepages reported in hugepages-16777216kB
      EAL: No available hugepages reported in hugepages-2048kB
      EAL: Probing VFIO support...
      EAL: VFIO support initialized
      EAL:   using IOMMU type 1 (Type 1)
      [ 2003.202721] vfio-pci 0002:02:00.0: vfio_cap_init: hiding cap 0x14@0x98
      EAL: Probe PCI driver: net_cn10k (177d:a063) device: 0002:02:00.0 (socket 0)
      PMD: RoC Model: cn10k
      testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
      testpmd: preferred mempool ops selected: cn10k_mempool_ops
      Configuring Port 0 (socket 0)
      PMD: Port 0: Link Up - speed 25000 Mbps - full-duplex

      Port 0: link state change event
      Port 0: 96:D4:99:72:A5:BF
      Checking link statuses...
      Done
      No commandline core given, start packet forwarding
      io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
      Logical Core 3 (socket 0) forwards packets on 1 streams:
        RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

        io packet forwarding packets/burst=32
        nb forwarding cores=1 - nb forwarding ports=1
        port 0: RX queue number: 1 Tx queue number: 1
          Rx offloads=0x0 Tx offloads=0x10000
          RX queue: 0
            RX desc=4096 - RX free threshold=0
            RX threshold registers: pthresh=0 hthresh=0  wthresh=0
            RX Offloads=0x0
          TX queue: 0
            TX desc=512 - TX free threshold=0
            TX threshold registers: pthresh=0 hthresh=0  wthresh=0
            TX offloads=0x0 - TX RS bit threshold=0
      Press enter to exit

Runtime Config Options
----------------------

- ``Rx&Tx scalar mode enable`` (default ``0``)

   The PMD supports both scalar and vector modes. Scalar mode may be selected
   at runtime using the ``scalar_enable`` ``devargs`` parameter.

- ``RSS reta size`` (default ``64``)

   RSS redirection table size may be configured at runtime using the
   ``reta_size`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,reta_size=256

   With the above configuration, a RETA table of size 256 is populated.

- ``Flow priority levels`` (default ``3``)

   RTE Flow priority levels can be configured at runtime using the
   ``flow_max_priority`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,flow_max_priority=10

   With the above configuration, the number of priority levels is set to 10
   (levels 0-9). The maximum number of priority levels supported is 32.

- ``Reserve Flow entries`` (default ``8``)

   RTE flow entries can be pre-allocated, and the pre-allocation size can be
   selected at runtime using the ``flow_prealloc_size`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,flow_prealloc_size=4

   With the above configuration, the pre-allocation size is set to 4. The
   maximum pre-allocation size supported is 32.

- ``Max SQB buffer count`` (default ``512``)

   Send queue descriptor buffer count may be limited at runtime using the
   ``max_sqb_count`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,max_sqb_count=64

   With the above configuration, each send queue's descriptor buffer count is
   limited to a maximum of 64 buffers.

- ``SQB slack count`` (default ``12``)

   The send queue descriptor slack count added to the SQB count when a Tx queue
   is created can be set using the ``sqb_slack`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,sqb_slack=32

   With the above configuration, each send queue's descriptor buffer count will
   be increased by 32, while keeping the queue limit at the default configuration.

- ``Switch header enable`` (default ``none``)

   A port can be configured to a specific switch header type by using the
   ``switch_header`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,switch_header="higig2"

   With the above configuration, higig2 will be enabled on that port and the
   traffic on this port should be higig2 traffic only. Supported switch header
   types are "chlen24b", "chlen90b", "dsa", "exdsa", "higig2", "vlan_exdsa" and
   "pre_l2".

- ``Flow pre_l2 info`` (default ``0x0/0x0/0x0``)

   pre_l2 headers are custom headers placed before the Ethernet header. To
   parse a custom pre_l2 header, an offset, a mask within the offset and a
   shift direction have to be provided for the field in the custom header that
   holds the size of the custom header. This is valid only with the pre_l2
   switch header. The supported offset range is 0 to 255, the mask range is
   1 to 255 and the shift direction is 0 for left shift or 1 for right shift.
   The info format is "offset/mask/shift direction". All parameters have to be
   in hexadecimal format and the mask should be contiguous. The info can be
   configured using the ``flow_pre_l2_info`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,switch_header="pre_l2",flow_pre_l2_info=0x2/0x7e/0x1

   With the above configuration, the custom pre_l2 header will be enabled on
   that port, the size of the header is placed at byte offset 0x2 in the packet
   with mask 0x7e, and a right shift will be used to get the size. That is, the
   size will be (pkt[0x2] & 0x7e) >> shift count. The shift count is calculated
   from the mask and the shift direction. For example, if the mask is 0x7c and
   the shift direction is 1 (i.e., right shift), then the shift count will be 2,
   that is, the absolute position of the rightmost set bit. If the mask is 0x7c
   and the shift direction is 0 (i.e., left shift), then the shift count will be
   1, that is, (8 - n), where n is the absolute position of the leftmost set bit.
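
   The following minimal C sketch (not from the driver sources; the helper
   names are hypothetical) mirrors the size computation described above,
   using GCC/Clang builtins to locate the set bits of the mask:

   .. code-block:: c

      #include <stdint.h>

      /* Derive the shift count from the mask and the shift direction
       * (0: left shift, 1: right shift), as described above. */
      static int
      pre_l2_shift_count(uint8_t mask, int right_shift)
      {
          if (right_shift)
              return __builtin_ctz(mask);                  /* 0x7c -> 2 */
          return 8 - (32 - __builtin_clz((uint32_t)mask)); /* 0x7c -> 1 */
      }

      /* Extract the custom header size from the packet, e.g. offset 0x2
       * and mask 0x7e as in the example devargs above. */
      static uint8_t
      pre_l2_hdr_size(const uint8_t *pkt, uint8_t offset, uint8_t mask,
                      int right_shift)
      {
          uint8_t masked = pkt[offset] & mask;
          int shift = pre_l2_shift_count(mask, right_shift);

          return right_shift ? masked >> shift : (uint8_t)(masked << shift);
      }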

- ``RSS tag as XOR`` (default ``0``)

   The HW gives two options to configure the RSS adder, i.e.

   * ``rss_adder<7:0> = flow_tag<7:0> ^ flow_tag<15:8> ^ flow_tag<23:16> ^ flow_tag<31:24>``

   * ``rss_adder<7:0> = flow_tag<7:0>``

   The latter aligns with standard NIC behavior, whereas the former is a legacy
   RSS adder scheme used in OCTEON 9 products.

   By default, the driver runs in the latter mode.
   Set this flag to 1 to select the legacy mode.

   For example, to select the legacy mode (RSS tag adder as XOR)::

      -a 0002:02:00.0,tag_as_xor=1
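
   The two adder computations can be written out in C as follows; this is
   only an illustration of the formulas above, not driver code:

   .. code-block:: c

      #include <stdint.h>

      /* Legacy OCTEON 9 scheme (tag_as_xor=1): fold all four bytes of the
       * 32-bit flow tag into the 8-bit RSS adder. */
      static uint8_t
      rss_adder_xor(uint32_t flow_tag)
      {
          return (uint8_t)(flow_tag ^ (flow_tag >> 8) ^
                           (flow_tag >> 16) ^ (flow_tag >> 24));
      }

      /* Default scheme: the adder is simply the low byte of the flow tag. */
      static uint8_t
      rss_adder_low_byte(uint32_t flow_tag)
      {
          return (uint8_t)flow_tag;
      }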

- ``Min SPI for inbound inline IPsec`` (default ``0``)

   Min SPI supported for inbound inline IPsec processing can be specified by
   ``ipsec_in_min_spi`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,ipsec_in_min_spi=6

   With the above configuration, application can enable inline IPsec processing
   for inbound SA with min SPI of 6.

- ``Max SPI for inbound inline IPsec`` (default ``255``)

   Max SPI supported for inbound inline IPsec processing can be specified by
   ``ipsec_in_max_spi`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,ipsec_in_max_spi=128

   With the above configuration, application can enable inline IPsec processing
   with max SPI of 128.

- ``Max SAs for outbound inline IPsec`` (default ``4096``)

   The maximum number of SAs supported for outbound inline IPsec processing can
   be specified by the ``ipsec_out_max_sa`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,ipsec_out_max_sa=128

   With the above configuration, the application can enable inline IPsec
   processing for 128 outbound SAs.

- ``Enable custom SA action`` (default ``0``)

   Custom SA action can be enabled by specifying the ``custom_sa_act``
   ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,custom_sa_act=1

   With the above configuration, the application can enable the custom SA
   action. This allows an MCAM entry to potentially match many SAs, rather than
   only a single SA.
   On cnxk devices, the sa_index is normally calculated from the SPI value, so
   there is a 1:1 mapping. Enabling this devargs parameter and setting an MCAM
   rule allows the application to configure the sa_index as part of session
   create, and the original SPI value can be updated later using session update.
   For example, the application can set the sa_index to 0 as the SPI value in
   session create, later update the original SPI value (for example 0x10000001)
   using session update, and create a flow rule with a security action whose
   algorithm is RTE_PMD_CNXK_SEC_ACTION_ALG0, sa_hi is 0x1000 and sa_lo is 0x0001.
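
   As a small illustration of the sa_hi/sa_lo values quoted above (this is
   not driver code), the 32-bit SPI simply splits into its upper and lower
   16 bits:

   .. code-block:: c

      #include <stdint.h>
      #include <stdio.h>

      int main(void)
      {
          uint32_t spi = 0x10000001;                 /* SPI from the example */
          uint16_t sa_hi = (uint16_t)(spi >> 16);    /* 0x1000 */
          uint16_t sa_lo = (uint16_t)(spi & 0xffff); /* 0x0001 */

          printf("sa_hi=0x%04x sa_lo=0x%04x\n", sa_hi, sa_lo);
          return 0;
      }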

- ``Outbound CPT LF queue size`` (default ``8200``)

   Size of the Outbound CPT LF queue in number of descriptors can be specified
   by the ``outb_nb_desc`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,outb_nb_desc=16384

   With the above configuration, the Outbound CPT LF will be created to
   accommodate at most 16384 descriptors at any given time.

- ``Outbound CPT LF count`` (default ``1``)

   The number of CPT LFs to attach for outbound processing can be specified by
   the ``outb_nb_crypto_qs`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,outb_nb_crypto_qs=2

   With the above configuration, two CPT LFs are set up and distributed among
   all the Tx queues for outbound processing.

- ``Disable using inline IPsec device for inbound`` (default ``0``)

   On CN10K, in event mode, the driver can work in one of two modes:

   #. Inbound encrypted traffic is received by the probed IPsec inline device,
      while plain traffic post decryption is received by the ethdev.

   #. Both inbound encrypted traffic and plain traffic post decryption are
      received by the ethdev.

   By default, event mode works using the inline device, i.e. mode ``1``.
   This behaviour can be changed to pick mode ``2`` by using the
   ``no_inl_dev`` ``devargs`` parameter.

   For example::

      -a 0002:02:00.0,no_inl_dev=1 -a 0002:03:00.0,no_inl_dev=1

   With the above configuration, inbound encrypted traffic from both the ports
   is received by the ethdev ports rather than by the inline IPsec device.

- ``Inline IPsec device channel and mask`` (default ``none``)

   Set channel and channel mask configuration for the inline IPsec device. This
   will be used when creating flow rules with RTE_FLOW_ACTION_TYPE_SECURITY
   action.

   By default, the RTE Flow API sets the channel number of the port on which
   the rule is created in the MCAM entry and matches it exactly. This behaviour
   can be modified using the ``inl_cpt_channel`` ``devargs`` parameter.

   For example::

      -a 0002:1d:00.0,inl_cpt_channel=0x100/0xf00

   With the above configuration, the RTE Flow API will set the channel and
   channel mask as 0x100 and 0xF00 in the MCAM entries of the flow rules
   created with the RTE_FLOW_ACTION_TYPE_SECURITY action. Since the channel
   number is set with this custom mask, inbound encrypted traffic from all
   ports with a matching channel number pattern will be directed to the inline
   IPsec device.
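
   The following small C sketch (illustration only, not driver code) shows
   the masked comparison that the MCAM entry effectively performs for the
   example above:

   .. code-block:: c

      #include <stdbool.h>
      #include <stdint.h>

      /* For inl_cpt_channel=0x100/0xf00, a packet channel matches when the
       * bits selected by the mask equal the configured channel value. */
      static bool
      channel_matches(uint16_t pkt_channel, uint16_t channel, uint16_t mask)
      {
          return (pkt_channel & mask) == (channel & mask);
      }

      /* E.g. channel 0x104 matches 0x100/0xf00, while 0x204 does not. */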

- ``Inline IPsec device flow rules`` (default ``none``)

   For the inline IPsec device, reserve the number of rules specified by
   ``max_ipsec_rules`` and use them while installing rules with a security
   action. The rule priority should be 0.
   If the specified number of rules is not available,
   then only the available number of rules will be allocated and used.
   If the application tries to insert more than the allocated rules, flow
   creation will fail.

   For example::

      -a 0002:1d:00.0,max_ipsec_rules=100

   With the above configuration, 100 rules (0-99) will be allocated if available
   and will be used for rules with a security action.
   If 100 rules are not available and only 50 are available,
   then only 50 rules will be allocated and used for flow rule creation.
   If the application tries to add more than 50 rules, flow creation will fail.

- ``SDP device channel and mask`` (default ``none``)

   Set channel and channel mask configuration for the SDP device. This
   will be used when creating flow rules on the SDP device.

   By default, for rules created on the SDP device, the RTE Flow API sets the
   channel number and mask to cover the entire SDP channel range in the channel
   field of the MCAM entry. This behaviour can be modified using the
   ``sdp_channel_mask`` ``devargs`` parameter.

   For example::

      -a 0002:1d:00.0,sdp_channel_mask=0x700/0xf00

   With the above configuration, the RTE Flow API will set the channel
   and channel mask as 0x700 and 0xF00 in the MCAM entries of the flow rules
   created on the SDP device. This option needs to be used when more than one
   SDP interface is in use and the RTE Flow rules created need to distinguish
   between traffic from each SDP interface. The channel and mask combination
   specified should match all the channels (or rings) configured on the SDP
   interface.

- ``Transmit completion handler`` (default ``0``)

   The transmit completion handler is enabled using the ``tx_compl_ena``
   devargs parameter.
   When it is enabled, the PMD invokes the callback handler provided by the
   application for every packet which has an external buffer attached to the
   mbuf, frees the main mbuf and hands the external buffer to the application.
   Once the external buffer is handed over to the application,
   it is the application's responsibility to either free or reuse it.

   For example::

      -a 0002:01:00.1,tx_compl_ena=1

- ``Meta buffer size per ethdev port for inline inbound IPsec second pass``

   The size of the meta buffers allocated for the inline inbound IPsec second
   pass per ethdev port can be specified by the ``meta_buf_sz`` ``devargs``
   parameter. The default value is computed at runtime based on the packet mbuf
   pools created and in use.
   This option is for the OCTEON CN106-B0/CN103XX SoC family.

   For example::

      -a 0002:02:00.0,meta_buf_sz=512

   With the above configuration, the PMD would allocate meta buffers of size 512
   for the inline inbound IPsec processing second pass.

- ``NPC MCAM Aging poll frequency in seconds`` (default ``10``)

   Poll frequency for the aging control thread can be specified by the
   ``aging_poll_freq`` devargs parameter.

   For example::

      -a 0002:01:00.2,aging_poll_freq=50

   With the above configuration, the driver would poll for aging flows
   every 50 seconds.

- ``Rx Inject Enable inbound inline IPsec for second pass`` (default ``0``)

   The Rx packet inject feature for inbound inline IPsec processing can be
   enabled by the ``rx_inj_ena`` devargs parameter.
   This option is for the OCTEON CN106-B0/CN103XX SoC family.

   For example::

      -a 0002:02:00.0,rx_inj_ena=1

   With the above configuration, the driver would enable packet injection from
   the ARM cores to the crypto unit for processing, with the result sent back on
   the Rx path.

- ``Disable custom meta aura feature`` (default ``0``)

   The custom meta aura, i.e. the 1:N meta aura, is enabled for second pass
   traffic by default when the ``inl_cpt_channel`` devarg is provided.
   The custom meta aura feature can be disabled
   by setting the ``custom_meta_aura_dis`` devarg to ``1``.

   For example::

      -a 0002:02:00.0,custom_meta_aura_dis=1

   With the above configuration, the driver would disable the custom meta aura
   feature for the device ``0002:02:00.0``.

- ``Enable custom SA for inbound inline IPsec`` (default ``0``)

   Custom SA for inbound inline IPsec can be enabled
   by specifying the ``custom_inb_sa`` devargs parameter.
   This option needs to be given to both the ethdev and the inline device.

   For example::

      -a 0002:02:00.0,custom_inb_sa=1

   With the above configuration, inline inbound IPsec post-processing
   should be done by the application.

.. note::

   The above devarg parameters are configurable per device. The user needs to
   pass the parameters to all the PCIe devices if the application requires them
   to be configured on all the ethdev ports.

Limitations
-----------

``mempool_cnxk`` external mempool handler dependency
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OCTEON CN9K/CN10K SoC family NIC has an inbuilt HW-assisted external mempool manager.
The ``net_cnxk`` PMD only works with the ``mempool_cnxk`` mempool handler,
as it is the most effective way, performance-wise, for packet allocation and Tx
buffer recycling on the OCTEON 9 SoC platform.

``mempool_cnxk`` rte_mempool cache sizes for CN10K
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The OCTEON CN10K SoC family supports asynchronous batch allocation
of objects from an NPA pool.
In the CNXK mempool driver, asynchronous batch allocation is enabled
when local caches are enabled.
This asynchronous batch allocation uses an additional local async buffer
whose size is equal to ``RTE_ALIGN_CEIL(rte_mempool->cache_size, 16)``.
This can result in additional objects being cached locally.
While creating an rte_mempool using the ``mempool_cnxk`` driver for OCTEON CN10K,
this must be taken into consideration
and the local cache sizes should be adjusted accordingly
so that starvation does not happen.

For example, if the ``cache_size`` passed into ``rte_mempool_create`` is ``8``,
then the maximum number of objects that can get cached locally on a core
would be the sum of the max objects in the local cache and the max objects in
the async buffer, i.e. ``8 + RTE_ALIGN_CEIL(8, 16) = 24``.
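
A minimal sketch of this worst-case calculation (illustration only; the align
helper mirrors DPDK's ``RTE_ALIGN_CEIL`` so the snippet stays self-contained):

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Local stand-in for RTE_ALIGN_CEIL(val, align), power-of-two align. */
   static uint32_t
   align_ceil(uint32_t val, uint32_t align)
   {
       return (val + align - 1) & ~(align - 1);
   }

   int main(void)
   {
       uint32_t cache_size = 8; /* cache_size passed to rte_mempool_create() */

       /* Worst case per-core caching: local cache + async batch buffer. */
       uint32_t max_cached = cache_size + align_ceil(cache_size, 16);

       printf("max objects cached locally = %u\n", max_cached); /* 24 */
       return 0;
   }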

CRC stripping
~~~~~~~~~~~~~

The OCTEON CN9K/CN10K SoC family NICs strip the CRC for every packet being received by
the host interface irrespective of the offload configuration.

RTE flow GRE support
~~~~~~~~~~~~~~~~~~~~

- ``RTE_FLOW_ITEM_TYPE_GRE_KEY`` works only when checksum and routing
  bits in the GRE header are equal to 0.

RTE flow action represented_port support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- ``RTE_FLOW_ACTION_TYPE_REPRESENTED_PORT`` only works between a PF and its VFs.

RTE flow action port_id support
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- ``RTE_FLOW_ACTION_TYPE_PORT_ID`` is only supported between PF and its VFs.

Custom protocols supported in RTE Flow
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``RTE_FLOW_ITEM_TYPE_RAW`` can be used to parse the below custom protocols.

* ``vlan_exdsa`` and ``exdsa`` can be parsed at L2 level.
* ``NGIO`` can be parsed at L3 level.

For ``vlan_exdsa`` and ``exdsa``, the port has to be configured with the
respective switch header.

For example::

   -a 0002:02:00.0,switch_header="vlan_exdsa"

The below fields of ``struct rte_flow_item_raw`` shall be used to specify the
pattern.

- ``relative`` Selects the layer at which parsing is done.

  - 0 for ``exdsa`` and ``vlan_exdsa``.

  - 1 for ``NGIO``.

- ``offset`` The offset in the header where the pattern should be matched.
- ``length`` Length of the pattern.
- ``pattern`` Pattern as a byte string.

Example usage in testpmd::

   ./dpdk-testpmd -c 3 -w 0002:02:00.0,switch_header=exdsa -- -i \
                  --rx-offloads=0x00080000 --rxq 8 --txq 8
   testpmd> flow create 0 ingress pattern eth / raw relative is 0 pattern \
          spec ab pattern mask ab offset is 4 / end actions queue index 1 / end

RTE Flow mark item support
~~~~~~~~~~~~~~~~~~~~~~~~~~

- ``RTE_FLOW_ITEM_TYPE_MARK`` can be used to create ingress flow rules to match
  packets from CPT (second pass packets). When the mark item type is used, it
  should be the first item in the pattern specification.
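
As a rough illustration (not taken from the driver or its tests), an equivalent
rule can be built through the rte_flow API with the MARK item leading the
pattern; the mark id ``42`` and queue index ``1`` are arbitrary placeholders:

.. code-block:: c

   #include <rte_flow.h>

   /* Sketch: ingress rule matching second pass (CPT) packets via a leading
    * MARK item, then steering them to Rx queue 1. */
   static struct rte_flow *
   create_cpt_mark_rule(uint16_t port_id, struct rte_flow_error *error)
   {
       const struct rte_flow_attr attr = { .ingress = 1 };
       const struct rte_flow_item_mark mark_spec = { .id = 42 };
       const struct rte_flow_item pattern[] = {
           { .type = RTE_FLOW_ITEM_TYPE_MARK, .spec = &mark_spec },
           { .type = RTE_FLOW_ITEM_TYPE_ETH },
           { .type = RTE_FLOW_ITEM_TYPE_END },
       };
       const struct rte_flow_action_queue queue = { .index = 1 };
       const struct rte_flow_action actions[] = {
           { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
           { .type = RTE_FLOW_ACTION_TYPE_END },
       };

       return rte_flow_create(port_id, &attr, pattern, actions, error);
   }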

Inline device support for CN10K
-------------------------------

CN10K HW provides a miscellaneous device, the Inline device, that assists
ethernet devices in providing the following features.

  - Aggregate all the inline IPsec inbound traffic from all the CN10K ethernet
    devices to be processed by a single inline IPsec device. This allows a
    single rte_security session to accept traffic from multiple ports.

  - Support for event generation on outbound inline IPsec processing errors.

  - Support CN106xx poll mode of operation for inline IPsec inbound processing.

The inline IPsec device is identified by the PCI PF vendor:device id ``177D:A0F0``
or VF ``177D:A0F1``.

Runtime Config Options for inline device
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- ``Min SPI for inbound inline IPsec`` (default ``0``)

   Min SPI supported for inbound inline IPsec processing can be specified by
   ``ipsec_in_min_spi`` ``devargs`` parameter.

   For example::

      -a 0002:1d:00.0,ipsec_in_min_spi=6

   With the above configuration, application can enable inline IPsec processing
   for inbound SA with min SPI of 6 for traffic aggregated on inline device.

- ``Max SPI for inbound inline IPsec`` (default ``255``)

   Max SPI supported for inbound inline IPsec processing can be specified by
   ``ipsec_in_max_spi`` ``devargs`` parameter.

   For example::

      -a 0002:1d:00.0,ipsec_in_max_spi=128

   With the above configuration, application can enable inline IPsec processing
   for inbound SA with max SPI of 128 for traffic aggregated on inline device.

- ``Count of meta buffers for inline inbound IPsec second pass``

   The number of meta buffers allocated for the inline inbound IPsec second pass
   can be specified by the ``nb_meta_bufs`` ``devargs`` parameter. The default
   value is computed at runtime based on the packet mbuf pools created and in
   use. The number of meta buffers should be at least equal to the aggregated
   number of packet buffers of all packet mbuf pools in use by inline IPsec
   enabled ethernet devices.

   For example::

      -a 0002:1d:00.0,nb_meta_bufs=1024

   With the above configuration, the PMD would enable inline IPsec processing
   for inbound with 1024 meta buffers available for the second pass.

- ``Meta buffer size for inline inbound IPsec second pass``

   The size of the meta buffers allocated for the inline inbound IPsec second
   pass can be specified by the ``meta_buf_sz`` ``devargs`` parameter. The
   default value is computed at runtime based on the packet mbuf pools created
   and in use.

   For example::

      -a 0002:1d:00.0,meta_buf_sz=512

   With the above configuration, the PMD would allocate meta buffers of size 512
   for the inline inbound IPsec processing second pass.

- ``Inline Outbound soft expiry poll frequency in usec`` (default ``100``)

   Soft expiry poll frequency for Inline Outbound sessions can be specified by
   the ``soft_exp_poll_freq`` ``devargs`` parameter.

   For example::

      -a 0002:1d:00.0,soft_exp_poll_freq=1000

   With the above configuration, the driver would poll for soft expiry events
   every 1000 usec.

- ``Rx Inject Enable inbound inline IPsec for second pass`` (default ``0``)

   The Rx packet inject feature for inbound inline IPsec processing can be
   enabled by the ``rx_inj_ena`` devargs parameter on both the inline device
   and the ethdev device.
   This option is for the OCTEON CN106-B0/CN103XX SoC family.

   For example::

      -a 0002:1d:00.0,rx_inj_ena=1

   With the above configuration, the driver would enable packet injection from
   the ARM cores to the crypto unit for processing, with the result sent back on
   the Rx path.

- ``Enable custom SA for inbound inline IPsec`` (default ``0``)

   Custom SA for inbound inline IPsec can be enabled
   by specifying the ``custom_inb_sa`` devargs parameter
   on both the inline device and the ethdev.

   For example::

      -a 0002:1d:00.0,custom_inb_sa=1

   With the above configuration, inline inbound IPsec post-processing
   should be done by the application.

Port Representors
-----------------

The CNXK driver supports the port representor model by adding virtual ethernet
ports providing a logical representation in DPDK of physical function (PF)
or SR-IOV virtual function (VF) devices for control and monitoring.

The base (parent) device underneath the representor ports is an eswitch device,
which is not a cnxk ethernet device but has NIC Rx and Tx capabilities.
Each representor port is represented by an RQ and SQ pair of this eswitch device.

The implementation supports representors for both physical functions and
virtual functions.

Port representor ethdev instances can be spawned on an as-needed basis
through configuration parameters passed to the driver of the underlying
base device using the devargs ``-a <base PCI BDF>,representor=pf*vf*``.

.. note::

   Representor ports to be created for the respective representees
   should be defined via the standard representor devargs patterns.
   E.g. to create a representor for representee PF1VF0,
   the devargs to be passed is ``-a <base PCI BDF>,representor=pf01vf0``.

   The implementation supports creation of multiple port representors with a
   pattern such as ``-a <base PCI BDF>,representor=[pf0vf[1,2],pf1vf[2-5]]``.

The port representor PMD supports the following operations:

- Get PF/VF statistics
- Flow operations - create, validate, destroy, query, flush, dump

Debugging Options
-----------------

.. _table_cnxk_ethdev_debug_options:

.. table:: cnxk ethdev debug options

   +---+-----------+----------------------------------------+
   | # | Component | EAL log command                        |
   +===+===========+========================================+
   | 1 | NIX       | --log-level='pmd\.common.cnxk\.nix,8'  |
   +---+-----------+----------------------------------------+
   | 2 | NPC       | --log-level='pmd\.common.cnxk\.flow,8' |
   +---+-----------+----------------------------------------+
   | 3 | REP       | --log-level='pmd\.common.cnxk\.rep,8'  |
   +---+-----------+----------------------------------------+
   | 4 | ESW       | --log-level='pmd\.common.cnxk\.esw,8'  |
   +---+-----------+----------------------------------------+
734