..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2019 Intel Corporation

Intel(R) FPGA 5GNR FEC Poll Mode Driver
=======================================

The BBDEV FPGA 5GNR FEC poll mode driver (PMD) supports an FPGA implementation of a VRAN
LDPC Encode / Decode 5GNR wireless acceleration function, using Intel's PCI-e and FPGA
based Vista Creek device.

Features
--------

FPGA 5GNR FEC PMD supports the following features:

- LDPC Encode in the DL
- LDPC Decode in the UL
- 8 VFs per PF (physical device)
- Maximum of 32 UL queues per VF
- Maximum of 32 DL queues per VF
- PCIe Gen-3 x8 Interface
- MSI-X
- SR-IOV

FPGA 5GNR FEC PMD supports the following BBDEV capabilities:

* For the LDPC encode operation:
   - ``RTE_BBDEV_LDPC_CRC_24B_ATTACH`` : set to attach CRC24B to CB(s)
   - ``RTE_BBDEV_LDPC_RATE_MATCH`` : if set then do not do Rate Match bypass

* For the LDPC decode operation:
   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_CHECK`` : check CRC24B from CB(s)
   - ``RTE_BBDEV_LDPC_ITERATION_STOP_ENABLE`` : disable early termination
   - ``RTE_BBDEV_LDPC_CRC_TYPE_24B_DROP`` : drops CRC24B bits appended while decoding
   - ``RTE_BBDEV_LDPC_HQ_COMBINE_IN_ENABLE`` : provides an input for HARQ combining
   - ``RTE_BBDEV_LDPC_HQ_COMBINE_OUT_ENABLE`` : provides an output for HARQ combining
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_IN_ENABLE`` : HARQ memory input is internal
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_OUT_ENABLE`` : HARQ memory output is internal
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_LOOPBACK`` : loopback data to/from HARQ memory
   - ``RTE_BBDEV_LDPC_INTERNAL_HARQ_MEMORY_FILLERS`` : HARQ memory includes the filler bits


Limitations
-----------

FPGA 5GNR FEC does not support the following:

- Scatter-Gather function


Installation
------------

Section 3 of the DPDK manual provides instructions on installing and compiling DPDK.

DPDK requires hugepages to be configured as detailed in section 2 of the DPDK manual.
The bbdev test application has been tested with a configuration of 40 x 1GB hugepages. The
hugepage configuration of a server may be examined using:

.. code-block:: console

   grep Huge* /proc/meminfo


Initialization
--------------

When the device first powers up, its PCI Physical Functions (PF) can be listed through this command:

.. code-block:: console

   sudo lspci -vd8086:0d8f

The physical and virtual functions are compatible with Linux UIO drivers:
``vfio`` and ``igb_uio``. However, before it can be used, the FPGA 5GNR FEC device must first
be bound to one of these Linux drivers through DPDK.


Bind PF UIO driver(s)
~~~~~~~~~~~~~~~~~~~~~

Install the DPDK igb_uio driver, bind it with the PF PCI device ID and use
``lspci`` to confirm that the PF device is in use by the ``igb_uio`` DPDK UIO driver.

The igb_uio driver may be bound to the PF PCI device using one of two methods:


1. PCI functions (physical or virtual, depending on the use case) can be bound to
   the UIO driver by repeating this command for every function.

.. code-block:: console

   insmod igb_uio.ko
   echo "8086 0d8f" > /sys/bus/pci/drivers/igb_uio/new_id
   lspci -vd8086:0d8f


2. Another way to bind the PF to the DPDK UIO driver is by using the ``dpdk-devbind.py`` tool:
.. code-block:: console

   cd <dpdk-top-level-directory>
   ./usertools/dpdk-devbind.py -b igb_uio 0000:06:00.0

where the PCI device ID (example: 0000:06:00.0) is obtained using ``lspci -vd8086:0d8f``.


In the same way the FPGA 5GNR FEC PF can be bound with vfio, but the vfio driver does not
support SR-IOV configuration out of the box, so it will need to be patched.


Enable Virtual Functions
~~~~~~~~~~~~~~~~~~~~~~~~

Now, the printouts should show that the PCI PF is under igb_uio control:
"``Kernel driver in use: igb_uio``"

To show the number of available VFs on the device, read the ``sriov_totalvfs`` file:

.. code-block:: console

   cat /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_totalvfs

   where 0000\:<b>\:<d>.<f> is the PCI device ID


To enable VFs via igb_uio, echo the number of virtual functions intended to be
enabled to the ``max_vfs`` file:

.. code-block:: console

   echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/max_vfs


Afterwards, all VFs must be bound to the appropriate UIO drivers as required, in the
same way as was done for the physical function previously.

Enabling SR-IOV via the vfio driver works in much the same way, except that the file
name is different:

.. code-block:: console

   echo <num-of-vfs> > /sys/bus/pci/devices/0000\:<b>\:<d>.<f>/sriov_numvfs


Configure the VFs through PF
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The PCI virtual functions must be configured before being used or assigned
to VMs/Containers. The configuration involves allocating the number of hardware
queues, priorities, load balance, bandwidth and other settings necessary for the
device to perform FEC functions.

This configuration needs to be executed at least once after reboot or PCI FLR and can
be achieved by using the function ``rte_fpga_5gnr_fec_configure()``, which sets up the
parameters defined in the ``rte_fpga_5gnr_fec_conf`` structure:

.. code-block:: c

   struct rte_fpga_5gnr_fec_conf {
       bool pf_mode_en;
       uint8_t vf_ul_queues_number[FPGA_5GNR_FEC_NUM_VFS];
       uint8_t vf_dl_queues_number[FPGA_5GNR_FEC_NUM_VFS];
       uint8_t ul_bandwidth;
       uint8_t dl_bandwidth;
       uint8_t ul_load_balance;
       uint8_t dl_load_balance;
       uint16_t flr_time_out;
   };

- ``pf_mode_en``: identifies whether only the PF is to be used, or the VFs. PF and
  VFs are mutually exclusive and cannot run simultaneously.
  Set to 1 for PF mode enabled.
  If PF mode is enabled, all queues available in the device are assigned
  exclusively to the PF and none are given to the VFs.

- ``vf_*l_queues_number``: defines the hardware queue mapping for every VF.

- ``*l_bandwidth``: used in case of congestion on the PCIe interface. The device
  allocates different bandwidth to UL and DL; the weight is configured by this
  setting. The unit of weight is 3 code blocks. For example, if the code block
  cbps (code blocks per second) ratio between UL and DL is 12:1, then the
  configuration value should be set to 36:3. The scheduling algorithm is based
  on code blocks regardless of the length of each block.

- ``*l_load_balance``: hardware queues are load-balanced in a round-robin
  fashion. Queues are filled first-in first-out until they reach a pre-defined
  watermark level; once it is exceeded, they are not assigned new code blocks.
  This watermark is defined by this setting.

  If all hardware queues exceed the watermark, no code blocks will be
  streamed in from the UL/DL code block FIFO.

- ``flr_time_out``: specifies the FLR timeout in units of 16.384 us, i.e.
  time_out = flr_time_out x 16.384 us. For instance, to set an FLR timeout of
  10 ms, set this field to 610 (0x262), as illustrated in the sketch after
  this list.
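The conversion from a desired timeout in microseconds to ``flr_time_out`` units can be
sketched with a small helper. ``flr_timeout_units()`` is a hypothetical name used only for
this illustration and is not part of the driver API:

.. code-block:: c

   #include <stdint.h>

   /*
    * Hypothetical helper, not part of the driver API: convert a desired FLR
    * timeout expressed in microseconds into flr_time_out units, where one
    * unit corresponds to 16.384 us.
    */
   static inline uint16_t
   flr_timeout_units(double timeout_us)
   {
           return (uint16_t)(timeout_us / 16.384);
   }

   /* Example: flr_timeout_units(10000) == 610 (0x262) for a 10 ms timeout. */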
An example configuration code calling the function ``rte_fpga_5gnr_fec_configure()`` is shown
below:

.. code-block:: c

   struct rte_fpga_5gnr_fec_conf conf;
   unsigned int i;

   memset(&conf, 0, sizeof(struct rte_fpga_5gnr_fec_conf));
   conf.pf_mode_en = 1;

   for (i = 0; i < FPGA_5GNR_FEC_NUM_VFS; ++i) {
       conf.vf_ul_queues_number[i] = 4;
       conf.vf_dl_queues_number[i] = 4;
   }
   conf.ul_bandwidth = 12;
   conf.dl_bandwidth = 5;
   conf.dl_load_balance = 64;
   conf.ul_load_balance = 64;

   /* setup FPGA PF */
   ret = rte_fpga_5gnr_fec_configure(info->dev_name, &conf);
   TEST_ASSERT_SUCCESS(ret,
       "Failed to configure 5GNR FPGA PF for bbdev %s",
       info->dev_name);


Test Application
----------------

BBDEV provides a test application, ``test-bbdev.py``, and a range of test data for testing
the functionality of the device, depending on the device's capabilities.

For more details on how to use the test application,
see :ref:`test_bbdev_application`.


Test Vectors
~~~~~~~~~~~~

In addition to the simple LDPC decoder and LDPC encoder tests, bbdev also provides
a range of additional tests under the test_vectors folder, which may be useful. The results
of these tests will depend on the FPGA 5GNR FEC capabilities.


Alternate Baseband Device configuration tool
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In addition to the embedded configuration feature supported in test-bbdev through the
``--init-device`` option, a companion application is also available to perform the device
configuration.
The ``pf_bb_config`` application notably makes it possible to run bbdev-test from the VF,
rather than only from the PF as described above.

For more details see: https://github.com/intel/pf-bb-config

Specifically for the BBDEV FPGA 5GNR FEC PMD, the commands below can be used:

.. code-block:: console

   ./pf_bb_config FPGA_5GNR -c fpga_5gnr/fpga_5gnr_config_vf.cfg
   ./test-bbdev.py -e="-c 0xff0 -a${VF_PCI_ADDR}" -c validation -n 64 -b 32 -l 1 -v ./ldpc_dec_default.data
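After the device has been configured and bound, a DPDK application running on the PF or on
a VF can confirm which of the LDPC capabilities listed earlier are exposed by querying the
generic bbdev API. The sketch below is only an illustration of that query and is not specific
to this PMD; EAL arguments (such as the VF PCI address) are assumed to be passed on the
command line and error handling is abbreviated:

.. code-block:: c

   #include <stdio.h>

   #include <rte_eal.h>
   #include <rte_bbdev.h>
   #include <rte_bbdev_op.h>

   int
   main(int argc, char **argv)
   {
           struct rte_bbdev_info info;
           const struct rte_bbdev_op_cap *cap;
           uint16_t dev_id;

           if (rte_eal_init(argc, argv) < 0)
                   return -1;

           for (dev_id = 0; dev_id < rte_bbdev_count(); dev_id++) {
                   if (rte_bbdev_info_get(dev_id, &info) != 0)
                           continue;
                   printf("bbdev %u: %s (driver %s)\n",
                          dev_id, info.dev_name, info.drv.driver_name);

                   /* The capability list is terminated by RTE_BBDEV_OP_NONE. */
                   for (cap = info.drv.capabilities;
                        cap->type != RTE_BBDEV_OP_NONE; cap++) {
                           if (cap->type == RTE_BBDEV_OP_LDPC_ENC)
                                   printf("  LDPC encode flags: 0x%x\n",
                                          cap->cap.ldpc_enc.capability_flags);
                           if (cap->type == RTE_BBDEV_OP_LDPC_DEC)
                                   printf("  LDPC decode flags: 0x%x\n",
                                          cap->cap.ldpc_dec.capability_flags);
                   }
           }
           return 0;
   }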