..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2019 Intel Corporation

Intel(R) FPGA 5GNR FEC Poll Mode Driver
=======================================

The BBDEV FPGA 5GNR FEC poll mode driver (PMD) supports an FPGA implementation of a VRAN
LDPC Encode / Decode 5GNR wireless acceleration function, using Intel's PCI-e and FPGA
based Vista Creek device.

Features
--------

FPGA 5GNR FEC PMD supports the following features:

- LDPC Encode in the DL
- LDPC Decode in the UL
- 8 VFs per PF (physical device)
- Maximum of 32 UL queues per VF
- Maximum of 32 DL queues per VF
- PCIe Gen-3 x8 Interface
- MSI-X
- SR-IOV

FPGA 5GNR FEC PMD supports the following BBDEV capabilities:

* For the LDPC encode operation:
   - ``RTE_BBDEV_LDPC_CRC_24B_ATTACH`` :  set to attach CRC24B to CB(s)
   - ``RTE_BBDEV_LDPC_RATE_MATCH`` :  if set then do not do Rate Match bypass

* For the LDPC decode operation:
   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK`` :  check CRC24B from CB(s)
   - ``RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE`` :  disable early termination
   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP`` :  drops CRC24B bits appended while decoding
   - ``RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE`` :  provides an input for HARQ combining
   - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` :  provides an output for HARQ combining
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE`` :  HARQ memory input is internal
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE`` :  HARQ memory output is internal
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK`` :  loopback data to/from HARQ memory
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS`` :  HARQ memory includes the filler bits


Limitations
-----------

FPGA 5GNR FEC does not support the following:

- Scatter-Gather function


Installation
------------

Section 3 of the DPDK manual provides instructions on installing and compiling DPDK. The
default set of bbdev compile flags may be found in config/common_base, where for example
the flag to build the FPGA 5GNR FEC device, ``CONFIG_RTE_LIBRTE_PMD_BBDEV_FPGA_5GNR_FEC``,
is already set. It is assumed DPDK has been compiled using, for instance:

.. code-block:: console

  make install T=x86_64-native-linuxapp-gcc


DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual.
The bbdev test application has been tested with a configuration of 40 x 1GB hugepages. The
hugepage configuration of a server may be examined using:

.. code-block:: console

   grep Huge* /proc/meminfo


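Note that 1GB hugepages are typically reserved at boot time. As a minimal sketch, assuming the
40 x 1GB configuration above, kernel command-line parameters along the following lines could be
used (how the boot parameters are edited depends on the distribution and bootloader):

.. code-block:: console

  default_hugepagesz=1G hugepagesz=1G hugepages=40

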
Initialization
--------------

When the device first powers up, its PCI Physical Functions (PF) can be listed through this command:

.. code-block:: console

  sudo lspci -vd8086:0d8f

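Purely as an illustration, assuming the PF sits at bus address 06:00.0 as in the binding examples
further below, the listing may look similar to the following (the exact bus address and device
description vary from system to system):

.. code-block:: console

  06:00.0 Processing accelerators: Intel Corporation Device 0d8f
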
The physical and virtual functions are compatible with Linux UIO drivers:
``vfio`` and ``igb_uio``. However, in order to work the FPGA 5GNR FEC device first needs
to be bound to one of these Linux drivers through DPDK.


Bind PF UIO driver(s)
~~~~~~~~~~~~~~~~~~~~~

Install the DPDK igb_uio driver, bind it with the PF PCI device ID and use
``lspci`` to confirm the PF device is in use by the ``igb_uio`` DPDK UIO driver.

The igb_uio driver may be bound to the PF PCI device using one of three methods:


1. PCI functions (physical or virtual, depending on the use case) can be bound to
the UIO driver by repeating this command for every function:

.. code-block:: console

  cd <dpdk-top-level-directory>
  insmod ./build/kmod/igb_uio.ko
  echo "8086 0d8f" > /sys/bus/pci/drivers/igb_uio/new_id
  lspci -vd8086:0d8f


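Optionally, ``lsmod`` can be used to double-check that the ``igb_uio`` module and its ``uio``
dependency have been loaded before attempting the bind:

.. code-block:: console

  lsmod | grep uio

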
2. Another way to bind the PF to the DPDK UIO driver is to use the ``dpdk-devbind.py`` tool:

.. code-block:: console

  cd <dpdk-top-level-directory>
  ./usertools/dpdk-devbind.py -b igb_uio 0000:06:00.0

where the PCI device ID (example: 0000:06:00.0) is obtained using ``lspci -vd8086:0d8f``.


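The result of the binding can then be verified with the same tool; ``--status`` lists the detected
devices together with the driver they are currently bound to:

.. code-block:: console

  ./usertools/dpdk-devbind.py --status

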
3. A third way to bind is to use the ``dpdk-setup.sh`` tool:

.. code-block:: console

  cd <dpdk-top-level-directory>
  ./usertools/dpdk-setup.sh

  select 'Bind Ethernet/Crypto/Baseband device to IGB UIO module'
  or
  select 'Bind Ethernet/Crypto/Baseband device to VFIO module' depending on driver required
  enter PCI device ID
  select 'Display current Ethernet/Crypto/Baseband device settings' to confirm binding


In the same way the FPGA 5GNR FEC PF can be bound with vfio, but the vfio driver does not
support SR-IOV configuration out of the box, so it will need to be patched.


Enable Virtual Functions
~~~~~~~~~~~~~~~~~~~~~~~~

Now, it should be visible in the printouts that the PCI PF is under igb_uio control
"``Kernel driver in use: igb_uio``".

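For example, this can be confirmed by filtering the ``lspci`` output for the device (assuming the
same device ID as above):

.. code-block:: console

  sudo lspci -vd8086:0d8f | grep -i "kernel driver"

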
To show the number of available VFs on the device, read the ``sriov_totalvfs`` file:

.. code-block:: console

  cat /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_totalvfs

  where 0000\:<b>\:<d>.<f> is the PCI device ID


To enable VFs via igb_uio, echo the number of virtual functions intended to be
enabled to the ``max_vfs`` file:

.. code-block:: console

  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/max_vfs


Afterwards, all VFs must be bound to the appropriate UIO drivers as required, in the
same way as was done with the physical function previously.

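For instance, the VFs can be bound with ``dpdk-devbind.py`` again; the PCI addresses below are
placeholders and should be taken from the ``lspci`` listing once the VFs have been created:

.. code-block:: console

  ./usertools/dpdk-devbind.py -b igb_uio <vf-pci-address-1> <vf-pci-address-2>

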
Enabling SR-IOV via the vfio driver is much the same, except that the file
name is different:

.. code-block:: console

  echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_numvfs


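In either case, the number of VFs actually enabled can be read back from sysfs to confirm the
operation:

.. code-block:: console

  cat /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_numvfs

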
Test Vectors
~~~~~~~~~~~~

In addition to the simple LDPC decoder and LDPC encoder tests, bbdev also provides
a range of additional tests under the test_vectors folder, which may be useful. The results
of these tests will depend on the FPGA 5GNR FEC capabilities.
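
These vectors can be run through the ``test-bbdev.py`` tool. As a sketch only, assuming a VF bound
to a DPDK driver and a suitable LDPC test vector file (the PCI address and vector file name below
are placeholders to be adapted to the actual setup and DPDK version):

.. code-block:: console

  cd <dpdk-top-level-directory>/app/test-bbdev
  ./test-bbdev.py -e="-w <pci-address>" -c validation -v ./<ldpc-test-vector>.data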
175