..  BSD LICENSE
    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

QoS Scheduler Sample Application
================================

The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.

Overview
--------

The architecture of the QoS scheduler application is shown in the following figure.

.. _figure_qos_sched_app_arch:

.. figure:: img/qos_sched_app_arch.*

    QoS Scheduler Application Architecture


There are two flavors of runtime execution for this application,
using either two or three threads per packet flow configuration.
The RX thread reads packets from the RX port,
classifies them based on the double VLAN tags (outer and inner) and
the lower two bytes of the IP destination address, and puts them into a ring queue.
The worker thread dequeues the packets from the ring and calls the QoS scheduler enqueue/dequeue functions.
If a separate TX core is used, the packets are then sent to the TX ring.
Otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.

Compiling the Application
-------------------------

To compile the application:

#.  Go to the sample application directory:

    .. code-block:: console

        export RTE_SDK=/path/to/rte_sdk
        cd ${RTE_SDK}/examples/qos_sched

#.  Set the target (a default target is used if not specified). For example:

    .. note::

        This application is intended as a linuxapp only.

    .. code-block:: console

        export RTE_TARGET=x86_64-native-linuxapp-gcc

#.  Build the application:

    .. code-block:: console

        make

.. note::

    To get statistics on the sample application using the command line interface as described in the next section,
    DPDK must be compiled with *CONFIG_RTE_SCHED_COLLECT_STATS* defined,
    which can be done by changing the configuration file for the specific target to be compiled.

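For example, assuming the x86_64-native-linuxapp-gcc target used above and that the option is located in
config/common_linuxapp (the exact configuration file name may differ between DPDK releases),
statistics collection could be enabled as sketched below, after which the DPDK libraries and the
sample application need to be rebuilt:

.. code-block:: console

    cd ${RTE_SDK}
    # Enable scheduler statistics collection for the target (file location and default value assumed)
    sed -i 's/CONFIG_RTE_SCHED_COLLECT_STATS=n/CONFIG_RTE_SCHED_COLLECT_STATS=y/' config/common_linuxapp
    # Rebuild the DPDK libraries for the target, then rebuild the sample application
    make install T=${RTE_TARGET}
    cd examples/qos_sched && make
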
Running the Application
-----------------------

.. note::

    In order to run the application, a total of at least 4 GB of huge pages must be set up
    for each of the sockets in use (which sockets are used depends on the cores selected).

The application has a number of command line options:

.. code-block:: console

    ./qos_sched [EAL options] -- <APP PARAMS>

Mandatory application parameters include:

*   --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
    Multiple pfc entities can be configured on the command line,
    each having 4 or 5 items (depending on whether a TX core is defined or not).

Optional application parameters include:

*   -i: Makes the application start in interactive mode.
    In this mode, the application shows a command line that can be used for obtaining statistics while
    scheduling is taking place (see interactive mode below for more information).

*   --mst n: Master core index (the default value is 1).

*   --rsz "A, B, C": Ring sizes:

    *   A = Size (in number of buffer descriptors) of each of the NIC RX rings read
        by the I/O RX lcores (the default value is 128).

    *   B = Size (in number of elements) of each of the software rings used
        by the I/O RX lcores to send packets to worker lcores (the default value is 8192).

    *   C = Size (in number of buffer descriptors) of each of the NIC TX rings written
        by worker lcores (the default value is 256).

*   --bsz "A, B, C, D": Burst sizes:

    *   A = I/O RX lcore read burst size from the NIC RX (the default value is 64).

    *   B = I/O RX lcore write burst size to the output software rings,
        worker lcore read burst size from the input software rings and QoS enqueue size (the default value is 64).

    *   C = QoS dequeue size (the default value is 32).

    *   D = Worker lcore write burst size to the NIC TX (the default value is 64).

*   --msz M: Mempool size (in number of mbufs) for each pfc (the default value is 2097152).

*   --rth "A, B, C": RX queue threshold parameters:

    *   A = RX prefetch threshold (the default value is 8).

    *   B = RX host threshold (the default value is 8).

    *   C = RX write-back threshold (the default value is 4).

*   --tth "A, B, C": TX queue threshold parameters:

    *   A = TX prefetch threshold (the default value is 36).

    *   B = TX host threshold (the default value is 0).

    *   C = TX write-back threshold (the default value is 0).

*   --cfg FILE: Profile configuration file to load.

Refer to the *DPDK Getting Started Guide* for general information on running applications and
the Environment Abstraction Layer (EAL) options.

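As an illustration of how the optional parameters are combined with a packet flow configuration,
the sketch below explicitly sets the ring and burst sizes to their documented default values
(the port numbers and lcore indices are examples only):

.. code-block:: console

    ./qos_sched -l 1,5,7 -n 4 -- --pfc "3,2,5,7" \
        --rsz "128, 8192, 256" --bsz "64, 64, 32, 64" \
        --cfg ./profile.cfg
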
The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
needed for the QoS scheduler configuration.

The profile file has the following format:

::

    ; port configuration
    [port]

    frame overhead = 24
    number of subports per port = 1
    number of pipes per subport = 4096
    queue sizes = 64 64 64 64

    ; Subport configuration

    [subport 0]
    tb rate = 1250000000; Bytes per second
    tb size = 1000000; Bytes
    tc 0 rate = 1250000000; Bytes per second
    tc 1 rate = 1250000000; Bytes per second
    tc 2 rate = 1250000000; Bytes per second
    tc 3 rate = 1250000000; Bytes per second
    tc period = 10; Milliseconds
    tc oversubscription period = 10; Milliseconds

    pipe 0-4095 = 0; These pipes are configured with pipe profile 0

    ; Pipe configuration

    [pipe profile 0]
    tb rate = 305175; Bytes per second
    tb size = 1000000; Bytes

    tc 0 rate = 305175; Bytes per second
    tc 1 rate = 305175; Bytes per second
    tc 2 rate = 305175; Bytes per second
    tc 3 rate = 305175; Bytes per second
    tc period = 40; Milliseconds

    tc 0 oversubscription weight = 1
    tc 1 oversubscription weight = 1
    tc 2 oversubscription weight = 1
    tc 3 oversubscription weight = 1

    tc 0 wrr weights = 1 1 1 1
    tc 1 wrr weights = 1 1 1 1
    tc 2 wrr weights = 1 1 1 1
    tc 3 wrr weights = 1 1 1 1

    ; RED params per traffic class and color (Green / Yellow / Red)

    [red]
    tc 0 wred min = 48 40 32
    tc 0 wred max = 64 64 64
    tc 0 wred inv prob = 10 10 10
    tc 0 wred weight = 9 9 9

    tc 1 wred min = 48 40 32
    tc 1 wred max = 64 64 64
    tc 1 wred inv prob = 10 10 10
    tc 1 wred weight = 9 9 9

    tc 2 wred min = 48 40 32
    tc 2 wred max = 64 64 64
    tc 2 wred inv prob = 10 10 10
    tc 2 wred weight = 9 9 9

    tc 3 wred min = 48 40 32
    tc 3 wred max = 64 64 64
    tc 3 wred inv prob = 10 10 10
    tc 3 wred weight = 9 9 9

Interactive mode
~~~~~~~~~~~~~~~~

The following commands are currently available from the command line interface:

*   Control Commands

    *   --quit: Quits the application.

*   General Statistics

    *   stats app: Shows a table with in-app calculated statistics.

    *   stats port X subport Y: For a specific subport, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        The information is displayed in a table, broken down by traffic class.

    *   stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        The information is displayed in a table, broken down by individual queue.

*   Average queue size

All of the average queue size commands work the same way, averaging the number of packets
over a specific subset of queues.

Two parameters can be configured before calling any of these commands:

*   qavg n X: X is the number of times that the calculation will take place.
    Bigger numbers provide higher accuracy. The default value is 10.

*   qavg period X: X is the number of microseconds allowed between each calculation.
    The default value is 100.

The commands that can be used for measuring average queue size are:

*   qavg port X subport Y: Show the average queue size per subport.

*   qavg port X subport Y tc Z: Show the average queue size per subport for a specific traffic class.

*   qavg port X subport Y pipe Z: Show the average queue size per pipe.

*   qavg port X subport Y pipe Z tc A: Show the average queue size per pipe for a specific traffic class.

*   qavg port X subport Y pipe Z tc A q B: Show the average queue size of a specific queue.

Example
~~~~~~~

The following is an example command with a single packet flow configuration:

.. code-block:: console

    ./qos_sched -l 1,5,7 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg

This example uses a single packet flow configuration, which creates one RX thread on lcore 5 reading
from port 3 and one worker thread on lcore 7 writing to port 2.

Another example, with two packet flow configurations using different ports but sharing the same cores
for the QoS scheduler, is given below:

.. code-block:: console

    ./qos_sched -l 1,2,6,7 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg

Note that independent cores for each of the RX, WT and TX threads of a packet flow configuration
are also supported, providing flexibility to balance the work.

The EAL coremask/corelist is constrained to contain only the default master core 1 and the RX, WT and TX cores.

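The examples above can also be started in interactive mode (-i) so that statistics can be inspected
from the command line interface while scheduling is taking place.
The short session below is only a sketch: the prompt string and the port/subport/pipe indices queried
are illustrative and depend on the packet flow configuration actually used.

.. code-block:: console

    ./qos_sched -l 1,5,7 -n 4 -- -i --pfc "3,2,5,7" --cfg ./profile.cfg

    qos_sched> stats app
    qos_sched> stats port 2 subport 0
    qos_sched> qavg port 2 subport 0 pipe 5
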
Explanation
-----------

The port, subport, pipe, traffic class and queue are the hierarchical entities in a typical QoS application:

*   A subport represents a predefined group of users.

*   A pipe represents an individual user/subscriber.

*   A traffic class is the representation of a different traffic type with specific loss rate,
    delay and jitter requirements, such as voice, video or data transfers.

*   A queue hosts packets from one or multiple connections of the same type belonging to the same user.

The traffic flows that need to be configured are application dependent.
This application classifies packets based on the QinQ double VLAN tags and the IP destination address,
as indicated in the following table.

.. _table_qos_scheduler_1:

.. table:: Entity Types

   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | **Level Name** | **Siblings per Parent** | **QoS Functional Description**                   | **Selected By**                  |
   +================+=========================+==================================================+==================================+
   | Port           | -                       | Ethernet port                                    | Physical port                    |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Subport        | Config (8)              | Traffic shaped (token bucket)                    | Outer VLAN tag                   |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Pipe           | Config (4k)             | Traffic shaped (token bucket)                    | Inner VLAN tag                   |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Traffic Class  | 4                       | TCs of the same pipe serviced in strict priority | Destination IP address (0.0.X.0) |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Queue          | 4                       | Queues of the same TC serviced in WRR            | Destination IP address (0.0.0.X) |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+

Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.

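Finally, as a sketch of how the sizing shown in the table maps back onto the profile file described earlier,
a [port] section such as the one below (illustrative values only; the supplied profile.cfg uses a single subport,
and each subport would also need its own [subport N] section) would configure 8 subports per port and
4096 pipes per subport, with the outer and inner VLAN tags then selecting among them as indicated in the table:

::

    ; illustrative port sizing -- not part of the supplied profile.cfg
    [port]
    frame overhead = 24
    number of subports per port = 8
    number of pipes per subport = 4096
    queue sizes = 64 64 64 64
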