..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2020 Intel Corporation.

VMDq Forwarding Sample Application
==================================

The VMDq Forwarding sample application is a simple example of packet processing using the DPDK.
The application performs L2 forwarding using VMDq to divide the incoming traffic into queues.
The traffic splitting is performed in hardware by the VMDq feature of the Intel® 82599 and X710/XL710 Ethernet Controllers.

Overview
--------

This sample application can be used as a starting point for developing a new application that is based on the DPDK and
uses VMDq for traffic partitioning.

VMDq filters split the incoming packets up into different "pools" - each with its own set of RX queues - based upon
the MAC address and VLAN ID within the VLAN tag of the packet.

All traffic is read from a single incoming port and output on another port, without any processing being performed.
With the Intel® 82599 NIC, for example, the traffic is split into 128 queues on input, where each thread of the application reads from
multiple queues. When run with 8 threads, that is, with the -c FF option, each thread receives and forwards packets from 16 queues.

As supplied, the sample application configures the VMDq feature to have 32 pools with 4 queues each.
The Intel® 82599 10 Gigabit Ethernet Controller NIC also supports splitting the traffic into 16 pools of 2 queues each.
The Intel® X710 and XL710 Ethernet Controller NICs support many configurations of VMDq pools, with 4 or 8 queues each.
The number of queues per VMDq pool can be changed by setting CONFIG_RTE_LIBRTE_I40E_QUEUE_NUM_PER_VM
in the config/common_* file.
The nb-pools and enable-rss parameters can be passed on the command line, after the EAL parameters:

.. code-block:: console

    ./build/vmdq_app [EAL options] -- -p PORTMASK --nb-pools NP --enable-rss

where NP can be 8, 16 or 32. RSS is disabled by default.

In Linux* user space, the application can display statistics with the number of packets received on each queue.
To have the application display the statistics, send a SIGHUP signal to the running application process.

The VMDq Forwarding sample application is in many ways simpler than the L2 Forwarding application
(see :doc:`l2_forward_real_virtual`)
as it performs unidirectional L2 forwarding of packets from one port to a second port.
Apart from the options described above, this application takes no command-line options other than the standard EAL command-line options.

Compiling the Application
-------------------------

To compile the sample application see :doc:`compiling`.

The application is located in the ``vmdq`` sub-directory.

Running the Application
-----------------------

To run the example in a Linux environment:

.. code-block:: console

    user@target:~$ ./build/vmdq_app -l 0-3 -n 4 -- -p 0x3 --nb-pools 16

Refer to the *DPDK Getting Started Guide* for general information on running applications and
the Environment Abstraction Layer (EAL) options.

Explanation
-----------

The following sections provide some explanation of the code.

Initialization
~~~~~~~~~~~~~~

The EAL, driver and PCI configuration is performed largely as in the L2 Forwarding sample application,
as is the creation of the mbuf pool.
See :doc:`l2_forward_real_virtual`.
Where this example application differs is in the configuration of the NIC port for RX.
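For reference, the mbuf pool can be created with ``rte_pktmbuf_pool_create()`` in the same way as in that example.
The following is a minimal sketch only; the pool size and cache size are illustrative assumptions rather than the sample's exact values:

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Sketch only: NUM_MBUFS and MBUF_CACHE_SIZE are illustrative values,
     * not necessarily those used by the sample application. */
    #define NUM_MBUFS       (64 * 1024)
    #define MBUF_CACHE_SIZE 250

    static struct rte_mempool *
    create_mbuf_pool(void)
    {
        /* One pool shared by all RX queues, using the default buffer size. */
        return rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS,
                MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                rte_socket_id());
    }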
The VMDq hardware feature is configured at port initialization time by setting the appropriate values in the
rte_eth_conf structure passed to the rte_eth_dev_configure() API.
Initially in the application,
a default structure is provided for VMDq configuration to be filled in later by the application.

.. code-block:: c

    /* empty vmdq configuration structure. Filled in programmatically */
    static const struct rte_eth_conf vmdq_conf_default = {
        .rxmode = {
            .mq_mode        = ETH_MQ_RX_VMDQ_ONLY,
            .split_hdr_size = 0,
        },

        .txmode = {
            .mq_mode = ETH_MQ_TX_NONE,
        },
        .rx_adv_conf = {
            /*
             * should be overridden separately in code with
             * appropriate values
             */
            .vmdq_rx_conf = {
                .nb_queue_pools = ETH_8_POOLS,
                .enable_default_pool = 0,
                .default_pool = 0,
                .nb_pool_maps = 0,
                .pool_map = {{0, 0},},
            },
        },
    };

The get_eth_conf() function fills in an rte_eth_conf structure with the appropriate values,
based on the global vlan_tags array.
Each VLAN ID can be allocated to one or more pools of queues.
Each VMDq pool is also assigned a destination MAC address of the form 52:54:00:12:<port_id>:<pool_id>;
for example, the MAC address of VMDq pool 2 on port 1 is 52:54:00:12:01:02.

.. code-block:: c

    const uint16_t vlan_tags[] = {
        0, 1, 2, 3, 4, 5, 6, 7,
        8, 9, 10, 11, 12, 13, 14, 15,
        16, 17, 18, 19, 20, 21, 22, 23,
        24, 25, 26, 27, 28, 29, 30, 31,
        32, 33, 34, 35, 36, 37, 38, 39,
        40, 41, 42, 43, 44, 45, 46, 47,
        48, 49, 50, 51, 52, 53, 54, 55,
        56, 57, 58, 59, 60, 61, 62, 63,
    };

    /* pool mac addr template, pool mac addr is like: 52 54 00 12 port# pool# */
    static struct rte_ether_addr pool_addr_template = {
        .addr_bytes = {0x52, 0x54, 0x00, 0x12, 0x00, 0x00}
    };

    /*
     * Builds up the correct configuration for vmdq based on the vlan tags array
     * given above, and determine the queue number and pool map number according to
     * valid pool number
     */
    static inline int
    get_eth_conf(struct rte_eth_conf *eth_conf, uint32_t num_pools)
    {
        struct rte_eth_vmdq_rx_conf conf;
        unsigned i;

        conf.nb_queue_pools = (enum rte_eth_nb_pools)num_pools;
        conf.nb_pool_maps = num_pools;
        conf.enable_default_pool = 0;
        conf.default_pool = 0; /* set explicit value, even if not used */

        for (i = 0; i < conf.nb_pool_maps; i++) {
            conf.pool_map[i].vlan_id = vlan_tags[i];
            conf.pool_map[i].pools = (1UL << (i % num_pools));
        }

        (void)(rte_memcpy(eth_conf, &vmdq_conf_default, sizeof(*eth_conf)));
        (void)(rte_memcpy(&eth_conf->rx_adv_conf.vmdq_rx_conf, &conf,
               sizeof(eth_conf->rx_adv_conf.vmdq_rx_conf)));
        return 0;
    }

    ......

    /*
     * Set mac for each pool.
     * There is no default mac for the pools in i40e.
     * Remove this once i40e fixes this issue.
     */
    for (q = 0; q < num_pools; q++) {
        struct rte_ether_addr mac;
        mac = pool_addr_template;
        mac.addr_bytes[4] = port;
        mac.addr_bytes[5] = q;
        printf("Port %u vmdq pool %u set mac %02x:%02x:%02x:%02x:%02x:%02x\n",
            port, q,
            mac.addr_bytes[0], mac.addr_bytes[1],
            mac.addr_bytes[2], mac.addr_bytes[3],
            mac.addr_bytes[4], mac.addr_bytes[5]);
        retval = rte_eth_dev_mac_addr_add(port, &mac,
                q + vmdq_pool_base);
        if (retval) {
            printf("mac addr add failed at pool %d\n", q);
            return retval;
        }
    }

Once the network port has been initialized using the correct VMDq values,
the initialization of the port's RX and TX hardware rings is performed similarly to that
in the L2 Forwarding sample application.
See :doc:`l2_forward_real_virtual` for more information.
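As a point of reference, the overall port bring-up described above could be sketched as follows.
This is a hedged outline only: the helper name ``configure_vmdq_port()``, the descriptor counts and the use of a single
TX queue are assumptions made for illustration, not the sample's exact code.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Sketch: apply the VMDq configuration produced by get_eth_conf() (shown
     * above) and set up the RX/TX queues. Descriptor counts and the single
     * TX queue are illustrative assumptions. */
    static int
    configure_vmdq_port(uint16_t port, uint32_t num_pools,
            uint16_t queues_per_pool, struct rte_mempool *mbuf_pool)
    {
        struct rte_eth_conf port_conf;
        uint16_t num_rx_queues = num_pools * queues_per_pool;
        uint16_t q;
        int ret;

        ret = get_eth_conf(&port_conf, num_pools);
        if (ret < 0)
            return ret;

        /* One TX queue is enough for unidirectional forwarding. */
        ret = rte_eth_dev_configure(port, num_rx_queues, 1, &port_conf);
        if (ret < 0)
            return ret;

        for (q = 0; q < num_rx_queues; q++) {
            ret = rte_eth_rx_queue_setup(port, q, 128,
                    rte_eth_dev_socket_id(port), NULL, mbuf_pool);
            if (ret < 0)
                return ret;
        }

        ret = rte_eth_tx_queue_setup(port, 0, 512,
                rte_eth_dev_socket_id(port), NULL);
        if (ret < 0)
            return ret;

        return rte_eth_dev_start(port);
    }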
Statistics Display
~~~~~~~~~~~~~~~~~~

When run in a Linux environment,
the VMDq Forwarding sample application can display statistics showing the number of packets read from each RX queue.
This is provided by way of a signal handler for the SIGHUP signal,
which simply prints to standard output the packet counts in grid form.
Each row of the output is a single pool with the columns being the queue number within that pool.

To generate the statistics output, use the following command:

.. code-block:: console

    user@host$ sudo killall -HUP vmdq_app

Please note that the statistics output will appear on the terminal where the vmdq_app is running,
rather than the terminal from which the HUP signal was sent.
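For illustration, such a SIGHUP handler can be as simple as a loop over per-queue receive counters, printing one row per pool.
The sketch below is an assumption-laden outline: the ``rxPackets`` array and the pool and queue counts are illustrative,
not the sample's exact definitions.

.. code-block:: c

    #include <signal.h>
    #include <stdio.h>

    /* Sketch only: the counter array and the pool/queue counts are
     * illustrative assumptions, not the sample's exact data structures. */
    #define MAX_QUEUES 1024

    static unsigned long rxPackets[MAX_QUEUES];
    static unsigned int num_queues = 128;  /* e.g. 32 pools * 4 queues */
    static unsigned int num_pools  = 32;

    /* Print the per-queue receive counts, one row per pool, on SIGHUP. */
    static void
    sighup_handler(int signum)
    {
        unsigned int q, queues_per_pool = num_queues / num_pools;

        for (q = 0; q < num_queues; q++) {
            if (q % queues_per_pool == 0)
                printf("\nPool %u: ", q / queues_per_pool);
            printf("%lu ", rxPackets[q]);
        }
        printf("\nFinished handling signal %d\n", signum);
    }

The handler would be registered once during initialization, for example with ``signal(SIGHUP, sighup_handler);``.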