..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

QoS Scheduler Sample Application
================================

The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.

Overview
--------

The architecture of the QoS scheduler application is shown in the following figure.

.. _figure_qos_sched_app_arch:

.. figure:: img/qos_sched_app_arch.*

   QoS Scheduler Application Architecture


There are two flavors of the runtime execution for this application,
using two or three threads per packet flow configuration.
The RX thread reads packets from the RX port,
classifies the packets based on the double VLAN (outer and inner) and
the lower two bytes of the IP destination address, and puts them into the ring queue.
The worker thread dequeues the packets from the ring and calls the QoS scheduler enqueue/dequeue functions.
If a separate TX core is used, the packets are then sent to the TX ring.
Otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.
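The worker-thread logic amounts to a short loop around the librte_sched API. The following is a condensed sketch rather than the application's exact code; the burst sizes match the default ``--bsz`` values described later, and the four-argument ring burst calls assume DPDK 17.05 or later (older releases take three arguments):

.. code-block:: c

    #include <rte_ring.h>
    #include <rte_sched.h>
    #include <rte_mbuf.h>

    #define BURST_RX  64   /* default --bsz ring read / QoS enqueue burst */
    #define BURST_QOS 32   /* default --bsz QoS dequeue burst */

    /* One iteration of the worker loop: drain the RX software ring into
     * the QoS scheduler, then pull scheduled packets back out towards TX. */
    static void
    worker_iteration(struct rte_ring *rx_ring, struct rte_ring *tx_ring,
                     struct rte_sched_port *port)
    {
        struct rte_mbuf *mbufs[BURST_RX];
        unsigned int n;

        n = rte_ring_sc_dequeue_burst(rx_ring, (void **)mbufs, BURST_RX, NULL);
        if (n > 0)
            rte_sched_port_enqueue(port, mbufs, n);

        n = rte_sched_port_dequeue(port, mbufs, BURST_QOS);
        if (n > 0)
            rte_ring_sp_enqueue_burst(tx_ring, (void **)mbufs, n, NULL);
    }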
Compiling the Application
-------------------------

To compile the sample application see :doc:`compiling`.

The application is located in the ``qos_sched`` sub-directory.

.. note::

    This application is intended as a linuxapp only.

.. note::

    To get statistics on the sample app using the command line interface as described in the next section,
    DPDK must be compiled defining *CONFIG_RTE_SCHED_COLLECT_STATS*,
    which can be done by changing the configuration file for the specific target to be compiled.

Running the Application
-----------------------

.. note::

    In order to run the application, a total of at least 4 GB of huge pages
    must be set up for each of the sockets used (depending on the cores in use).

The application has a number of command line options:

.. code-block:: console

    ./qos_sched [EAL options] -- <APP PARAMS>

Mandatory application parameters include:

* --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
  Multiple pfc entities can be configured on the command line,
  each with 4 or 5 items (depending on whether a TX core is defined or not).

Optional application parameters include:

* -i: Makes the application start in interactive mode.
  In this mode, the application shows a command line that can be used for obtaining statistics while
  scheduling is taking place (see interactive mode below for more information).

* --mst n: Master core index (the default value is 1).

* --rsz "A, B, C": Ring sizes:

  * A = Size (in number of buffer descriptors) of each of the NIC RX rings read
    by the I/O RX lcores (the default value is 128).

  * B = Size (in number of elements) of each of the software rings used
    by the I/O RX lcores to send packets to worker lcores (the default value is 8192).

  * C = Size (in number of buffer descriptors) of each of the NIC TX rings written
    by worker lcores (the default value is 256).

* --bsz "A, B, C, D": Burst sizes:

  * A = I/O RX lcore read burst size from the NIC RX (the default value is 64).

  * B = I/O RX lcore write burst size to the output software rings,
    worker lcore read burst size from input software rings, and QoS enqueue size (the default value is 64).

  * C = QoS dequeue size (the default value is 32).

  * D = Worker lcore write burst size to the NIC TX (the default value is 64).

* --msz M: Mempool size (in number of mbufs) for each pfc (the default value is 2097152).

* --rth "A, B, C": RX queue threshold parameters:

  * A = RX prefetch threshold (the default value is 8).

  * B = RX host threshold (the default value is 8).

  * C = RX write-back threshold (the default value is 4).

* --tth "A, B, C": TX queue threshold parameters:

  * A = TX prefetch threshold (the default value is 36).

  * B = TX host threshold (the default value is 0).

  * C = TX write-back threshold (the default value is 0).

* --cfg FILE: Profile configuration file to load.

Refer to the *DPDK Getting Started Guide* for general information on running applications and
the Environment Abstraction Layer (EAL) options.

The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
needed for the QoS scheduler configuration.

The profile file has the following format:

::

    ; port configuration

    [port]
    frame overhead = 24
    number of subports per port = 1
    number of pipes per subport = 4096
    queue sizes = 64 64 64 64

    ; Subport configuration

    [subport 0]
    tb rate = 1250000000            ; Bytes per second
    tb size = 1000000               ; Bytes
    tc 0 rate = 1250000000          ; Bytes per second
    tc 1 rate = 1250000000          ; Bytes per second
    tc 2 rate = 1250000000          ; Bytes per second
    tc 3 rate = 1250000000          ; Bytes per second
    tc period = 10                  ; Milliseconds
    tc oversubscription period = 10 ; Milliseconds

    pipe 0-4095 = 0                 ; These pipes are configured with pipe profile 0

    ; Pipe configuration

    [pipe profile 0]
    tb rate = 305175                ; Bytes per second
    tb size = 1000000               ; Bytes

    tc 0 rate = 305175              ; Bytes per second
    tc 1 rate = 305175              ; Bytes per second
    tc 2 rate = 305175              ; Bytes per second
    tc 3 rate = 305175              ; Bytes per second
    tc period = 40                  ; Milliseconds

    tc 0 oversubscription weight = 1
    tc 1 oversubscription weight = 1
    tc 2 oversubscription weight = 1
    tc 3 oversubscription weight = 1

    tc 0 wrr weights = 1 1 1 1
    tc 1 wrr weights = 1 1 1 1
    tc 2 wrr weights = 1 1 1 1
    tc 3 wrr weights = 1 1 1 1

    ; RED params per traffic class and color (Green / Yellow / Red)

    [red]
    tc 0 wred min = 48 40 32
    tc 0 wred max = 64 64 64
    tc 0 wred inv prob = 10 10 10
    tc 0 wred weight = 9 9 9

    tc 1 wred min = 48 40 32
    tc 1 wred max = 64 64 64
    tc 1 wred inv prob = 10 10 10
    tc 1 wred weight = 9 9 9

    tc 2 wred min = 48 40 32
    tc 2 wred max = 64 64 64
    tc 2 wred inv prob = 10 10 10
    tc 2 wred weight = 9 9 9

    tc 3 wred min = 48 40 32
    tc 3 wred max = 64 64 64
    tc 3 wred inv prob = 10 10 10
    tc 3 wred weight = 9 9 9
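Internally, these profile entries are translated into librte_sched configuration calls. The following sketch shows how the ``[subport 0]`` and ``pipe 0-4095 = 0`` entries above might map onto the classic (pre-18.11) librte_sched structures; the field names follow that generation of the API and the ``apply_profile()`` helper is illustrative, not the application's exact code:

.. code-block:: c

    #include <stdint.h>
    #include <rte_sched.h>

    /* [subport 0] values from the profile above */
    static struct rte_sched_subport_params subport_params = {
        .tb_rate   = 1250000000,    /* tb rate, bytes per second */
        .tb_size   = 1000000,       /* tb size, bytes */
        .tc_rate   = {1250000000, 1250000000, 1250000000, 1250000000},
        .tc_period = 10,            /* tc period, milliseconds */
    };

    /* Hypothetical helper: apply the subport and pipe configuration
     * to an already created rte_sched_port. */
    static int
    apply_profile(struct rte_sched_port *port)
    {
        int err = rte_sched_subport_config(port, 0, &subport_params);
        if (err != 0)
            return err;

        /* "pipe 0-4095 = 0": attach every pipe to pipe profile 0, which
         * is assumed to be registered in rte_sched_port_params.pipe_profiles
         * when the port is created. */
        for (uint32_t pipe = 0; pipe < 4096; pipe++) {
            err = rte_sched_pipe_config(port, 0, pipe, 0);
            if (err != 0)
                return err;
        }
        return 0;
    }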
Interactive mode
~~~~~~~~~~~~~~~~

The following commands are currently supported by the command line interface:

* Control Commands

  * --quit: Quits the application.

* General Statistics

  * stats app: Shows a table with in-app calculated statistics.

  * stats port X subport Y: For a specific subport, it shows the number of packets that
    went through the scheduler properly and the number of packets that were dropped.
    The same information is shown in bytes.
    The information is displayed in a table, separated by traffic class.

  * stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
    went through the scheduler properly and the number of packets that were dropped.
    The same information is shown in bytes.
    This information is displayed in a table, separated by individual queue.

* Average queue size

All of these commands work the same way, averaging the number of packets across a specific subset of queues.

Two parameters can be configured prior to calling any of these commands:

  * qavg n X: X is the number of times that the calculation will take place.
    Bigger numbers provide higher accuracy. The default value is 10.

  * qavg period X: X is the number of microseconds allowed between each calculation.
    The default value is 100.

The commands that can be used for measuring average queue size are:

* qavg port X subport Y: Show average queue size per subport.

* qavg port X subport Y tc Z: Show average queue size per subport for a specific traffic class.

* qavg port X subport Y pipe Z: Show average queue size per pipe.

* qavg port X subport Y pipe Z tc A: Show average queue size per pipe for a specific traffic class.

* qavg port X subport Y pipe Z tc A q B: Show average queue size of a specific queue.
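For example, a short interactive session might look as follows (the ``qos_sched>`` prompt shown here is illustrative). The first two commands tighten the averaging parameters; the remaining commands then query subport statistics and the average queue size of one traffic class of one pipe:

.. code-block:: console

    qos_sched> qavg n 20
    qos_sched> qavg period 50
    qos_sched> stats port 3 subport 0
    qos_sched> qavg port 3 subport 0 pipe 100 tc 2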
Example
~~~~~~~

The following is an example command with a single packet flow configuration:

.. code-block:: console

    ./qos_sched -l 1,5,7 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg

This example uses a single packet flow configuration which creates one RX thread on lcore 5 reading
from port 3 and a worker thread on lcore 7 writing to port 2.

Another example, with 2 packet flow configurations using different ports but sharing the same core for the QoS scheduler, is given below:

.. code-block:: console

    ./qos_sched -l 1,2,6,7 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg

Note that independent cores for each of the RX, WT and TX threads of a packet flow configuration are also supported,
providing flexibility to balance the work.

The EAL coremask/corelist is constrained to contain the default master core 1 and the RX, WT and TX cores only.

Explanation
-----------

The Port/Subport/Pipe/Traffic Class/Queue are the hierarchical entities in a typical QoS application:

* A subport represents a predefined group of users.

* A pipe represents an individual user/subscriber.

* A traffic class is the representation of a different traffic type with specific loss rate,
  delay and jitter requirements, such as voice, video or data transfers.

* A queue hosts packets from one or multiple connections of the same type belonging to the same user.

The traffic flows that need to be configured are application dependent.
This application classifies packets based on the QinQ double VLAN tags and the IP destination address as indicated in the following table.

.. _table_qos_scheduler_1:

.. table:: Entity Types

   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | **Level Name** | **Siblings per Parent** | **QoS Functional Description**                   | **Selected By**                  |
   |                |                         |                                                  |                                  |
   +================+=========================+==================================================+==================================+
   | Port           | -                       | Ethernet port                                    | Physical port                    |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Subport        | Config (8)              | Traffic shaped (token bucket)                    | Outer VLAN tag                   |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Pipe           | Config (4k)             | Traffic shaped (token bucket)                    | Inner VLAN tag                   |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Traffic Class  | 4                       | TCs of the same pipe serviced in strict priority | Destination IP address (0.0.X.0) |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Queue          | 4                       | Queues of the same TC serviced in WRR            | Destination IP address (0.0.0.X) |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+

Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.
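As a concrete illustration of the classification rules in this table, the sketch below extracts the scheduler path (subport/pipe/traffic class/queue) from a QinQ IPv4 frame. It is a simplified stand-in for the application's RX-thread classifier, written over a raw packet buffer; the constants mirror the "Config (8)" and "Config (4k)" entries above and are illustrative, not the application's exact code:

.. code-block:: c

    #include <stdint.h>

    /* Illustrative limits matching the table above:
     * up to 8 subports, 4096 pipes, 4 TCs, 4 queues per TC. */
    #define N_SUBPORTS 8
    #define N_PIPES    4096
    #define N_TCS      4
    #define N_QUEUES   4

    struct sched_path {
        uint32_t subport;
        uint32_t pipe;
        uint32_t tc;
        uint32_t queue;
    };

    /* Classify a QinQ IPv4 frame: outer VLAN tag -> subport, inner VLAN
     * tag -> pipe, third byte of the IP destination address (0.0.X.0) ->
     * traffic class, fourth byte (0.0.0.X) -> queue.
     * 'pkt' points to the start of the Ethernet header. */
    static void
    classify(const uint8_t *pkt, struct sched_path *p)
    {
        /* Outer TCI at offset 14, inner TCI at offset 18
         * (dst MAC 6 + src MAC 6 + outer TPID 2, then inner TPID 2). */
        uint16_t outer_tci = (uint16_t)((pkt[14] << 8) | pkt[15]);
        uint16_t inner_tci = (uint16_t)((pkt[18] << 8) | pkt[19]);

        /* IPv4 header follows the EtherType at offset 22;
         * the destination address sits at IPv4 offset 16. */
        const uint8_t *ip_dst = pkt + 22 + 16;

        p->subport = (outer_tci & 0x0FFF) & (N_SUBPORTS - 1);
        p->pipe    = (inner_tci & 0x0FFF) & (N_PIPES - 1);
        p->tc      = ip_dst[2] & (N_TCS - 1);
        p->queue   = ip_dst[3] & (N_QUEUES - 1);
    }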