..  SPDX-License-Identifier: BSD-3-Clause
    Copyright 2019 Cesnet
    Copyright 2019 Netcope Technologies

NFB poll mode driver library
============================

The NFB poll mode driver library implements support for the Netcope
FPGA Boards (**NFB-40G2, NFB-100G2, NFB-200G2QL**) and the Silicom **FB2CGG3** card,
which are FPGA-based programmable NICs. The NFB PMD uses the interface provided by
the libnfb library to communicate with these cards over the nfb layer.

More information about the
`NFB cards <http://www.netcope.com/en/products/fpga-boards>`_
and the technology used
(`Netcope Development Kit <http://www.netcope.com/en/products/fpga-development-kit>`_)
can be found on the `Netcope Technologies website <http://www.netcope.com/>`_.

.. note::

   Currently the driver is supported only on x86_64 architectures.
   Only x86_64 versions of the external libraries are provided.

Prerequisites
-------------

This PMD requires kernel modules which are responsible for the initialization and
allocation of resources needed for the nfb layer to function.
Communication between the PMD and the kernel modules is mediated by the libnfb library.
These kernel modules and the library are not part of DPDK and must be installed
separately:

*  **libnfb library**

   The library provides an API for the initialization of nfb transfers and for
   receiving and transmitting data segments.

*  **Kernel modules**

   * nfb

   The kernel modules manage the initialization of the hardware and the allocation
   and sharing of resources for user space applications.

Dependencies can be found here:
`Netcope common <https://www.netcope.com/en/company/community-support/dpdk-libsze2#NFB>`_.

Versions of the packages
~~~~~~~~~~~~~~~~~~~~~~~~

The minimum version of the provided packages:

* for DPDK from 19.05

Configuration
-------------

Timestamps
~~~~~~~~~~

The PMD supports hardware timestamps of frame receipt on the physical network interface. To use
the timestamps, the hardware timestamping unit must be enabled (follow the documentation of the
NFB products) and the device argument `timestamp=1` must be used.

.. code-block:: console

    ./<build_dir>/app/dpdk-testpmd -a b3:00.0,timestamp=1 <other EAL params> -- <testpmd params>

When the timestamps are enabled with the *devarg*, a timestamp validity flag is set in the mbufs
containing received frames and the timestamp is inserted into the `rte_mbuf` struct.

The timestamp is a `uint64_t` field. Its lower 32 bits represent the *seconds* portion of the
timestamp (the number of seconds elapsed since 1.1.1970 00:00:00 UTC) and its upper 32 bits
represent the *nanoseconds* portion of the timestamp (the number of nanoseconds elapsed since
the beginning of the second given by the *seconds* portion).

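As an illustration, the following sketch splits the timestamp into its two halves in an
application. It assumes a DPDK release which still exposes the `PKT_RX_TIMESTAMP` offload flag
and the `timestamp` field directly in `rte_mbuf` (newer releases move the timestamp to a
dynamic mbuf field); the helper name is purely illustrative.

.. code-block:: c

   #include <stdint.h>
   #include <rte_mbuf.h>

   /* Split the 64-bit NFB timestamp into seconds and nanoseconds.
    * Returns -1 when the mbuf carries no valid hardware timestamp. */
   static inline int
   nfb_read_timestamp(const struct rte_mbuf *m, uint32_t *sec, uint32_t *nsec)
   {
           if (!(m->ol_flags & PKT_RX_TIMESTAMP))
                   return -1;

           *sec = (uint32_t)(m->timestamp & 0xffffffffULL);  /* lower 32 bits */
           *nsec = (uint32_t)(m->timestamp >> 32);           /* upper 32 bits */
           return 0;
   }
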
Using the NFB PMD
-----------------

Kernel modules have to be loaded before running the DPDK application.

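For example, assuming the nfb kernel module from the Netcope packages is installed under its
default name, it can be loaded with:

.. code-block:: console

   sudo modprobe nfb
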
NFB card architecture
---------------------

The NFB cards are multi-port multi-queue cards, where (generally) data from any
Ethernet port may be sent to any queue.
They are represented in DPDK as a single port.

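For instance, an application can query how many DMA queues this single port exposes through the
standard ethdev info API; the helper below is only an illustrative sketch.

.. code-block:: c

   #include <stdio.h>
   #include <rte_ethdev.h>

   /* Print the number of RX/TX DMA queues offered by an already probed port. */
   static void
   nfb_print_queue_counts(uint16_t port_id)
   {
           struct rte_eth_dev_info dev_info;

           rte_eth_dev_info_get(port_id, &dev_info);
           printf("port %u: max RX queues %u, max TX queues %u\n",
                  port_id, dev_info.max_rx_queues, dev_info.max_tx_queues);
   }
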
The NFB-200G2QL card employs an add-on cable which allows it to be connected to two
physical PCI-E slots at the same time (see the diagram below).
This is done to allow 200 Gbps of traffic to be transferred through the PCI-E
bus (note that a single PCI-E 3.0 x16 slot provides only about 125 Gbps of theoretical
throughput).

Although each slot may be connected to a different CPU and therefore to a different
NUMA node, the card is represented as a single port in DPDK. To work with data
from the individual queues on the right NUMA node, the NUMA node of the first and
of the last queue (each NUMA node hosts half of the queues) needs to be checked.

.. figure:: img/szedata2_nfb200g_architecture.*
    :align: center

    NFB-200G2QL high-level diagram

Limitations
-----------

The driver is usable only on Linux, namely on CentOS.

Since a card is always represented as a single port but can be connected to two
NUMA nodes, there is a need to manually check to which NUMA node the master/slave
is connected, for example as shown below.

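One way to perform this manual check is to read the NUMA node of each of the card's PCI-E
endpoints from sysfs; the PCI addresses below are only examples and will differ on a particular
system:

.. code-block:: console

   cat /sys/bus/pci/devices/0000:06:00.0/numa_node
   cat /sys/bus/pci/devices/0000:88:00.0/numa_node
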
Example of usage
----------------

Read packets from receive queues 0 and 1 and write them to transmit queues 0 and 1:

.. code-block:: console

   ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 2 \
   -- --port-topology=chained --rxq=2 --txq=2 --nb-cores=2 -i -a

Example output:

.. code-block:: console

   [...]
   EAL: PCI device 0000:06:00.0 on NUMA socket -1
   EAL:   probe driver: 1b26:c1c1 net_nfb
   PMD: Initializing NFB device (0000:06:00.0)
   PMD: Available DMA queues RX: 8 TX: 8
   PMD: NFB device (0000:06:00.0) successfully initialized
   Interactive-mode selected
   Auto-start selected
   Configuring Port 0 (socket 0)
   Port 0: 00:11:17:00:00:00
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
   Done
   Start automatic packet forwarding
     io packet forwarding - CRC stripping disabled - packets/burst=32
     nb forwarding cores=2 - nb forwarding ports=1
     RX queues=2 - RX desc=128 - RX free threshold=0
     RX threshold registers: pthresh=0 hthresh=0 wthresh=0
     TX queues=2 - TX desc=512 - TX free threshold=0
     TX threshold registers: pthresh=0 hthresh=0 wthresh=0
     TX RS bit threshold=0 - TXQ flags=0x0
   testpmd>
