..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

QoS Scheduler Sample Application
================================

The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.

Overview
--------

The architecture of the QoS scheduler application is shown in the following figure.

.. _figure_qos_sched_app_arch:

.. figure:: img/qos_sched_app_arch.*

   QoS Scheduler Application Architecture

There are two flavors of the runtime execution for this application,
with either two or three threads per packet flow configuration.
The RX thread reads packets from the RX port,
classifies them based on the double VLAN tags (outer and inner) and
the lower byte of the IP destination address, and puts them into the ring queue.
The worker thread dequeues the packets from the ring and calls the QoS scheduler enqueue/dequeue functions.
If a separate TX core is used, the packets are then sent to the TX ring.
Otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.
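
The data path described above can be sketched as a toy model.
Everything here is a stand-in: a plain Python deque replaces the DPDK ring, the scheduler step is a simple pass-through,
and the function names and one-byte flow key are illustrative only, not the application's actual code.

```python
from collections import deque

def rx_stage(packets, ring):
    # Classify on the lower byte of the IP destination address
    # (the real app also uses the outer/inner VLAN tags) and enqueue.
    for pkt in packets:
        flow = pkt["dst_ip"] & 0xFF
        ring.append((flow, pkt))

def worker_stage(ring, tx_ring, burst=4):
    # Dequeue up to one burst from the ring; the real worker would run
    # the packets through the QoS scheduler enqueue/dequeue here.
    n = 0
    while ring and n < burst:
        tx_ring.append(ring.popleft())
        n += 1

def tx_stage(tx_ring):
    # Drain the TX ring, returning packets in transmit order.
    sent = []
    while tx_ring:
        sent.append(tx_ring.popleft()[1])
    return sent
```

In the two-thread flavor, the worker writes to the port directly instead of handing packets to a separate TX stage.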

Compiling the Application
-------------------------

To compile the sample application, see :doc:`compiling`.

The application is located in the ``qos_sched`` sub-directory.

.. note::

    This application is intended to run on Linux only.

.. note::

    To get statistics on the sample app using the command line interface as described in the next section,
    DPDK must be compiled with *RTE_SCHED_COLLECT_STATS* defined, which can be done by changing the relevant
    entry in the ``config/rte_config.h`` file.

Running the Application
-----------------------

.. note::

    In order to run the application, a total of at least 4 GB of hugepages
    must be set up for each of the sockets in use (depending on the cores in use).

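For example, on Linux hugepages can be reserved at runtime through sysfs.
This is a sketch only: the NUMA node index, page size and page count below are assumptions to adjust per socket and system;
see the *DPDK Getting Started Guide* for the full procedure.

```shell
# Reserve 4 GB as 2048 x 2 MB hugepages on NUMA node 0;
# repeat for every NUMA node whose cores the application uses.
echo 2048 | sudo tee /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
```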
The application has a number of command line options:

.. code-block:: console

    ./<build_dir>/examples/dpdk-qos_sched [EAL options] -- <APP PARAMS>

Mandatory application parameters include:

*   --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
    Multiple pfc entities can be configured on the command line,
    each having 4 or 5 items (depending on whether a TX core is defined).

Optional application parameters include:

*   -i: Runs the application in interactive mode.
    In this mode, the application shows a command line that can be used to obtain statistics while
    scheduling is taking place (see interactive mode below for more information).

*   --mnc n: Main core index (the default value is 1).

*   --rsz "A, B, C": Ring sizes:

*   A = Size (in number of buffer descriptors) of each of the NIC RX rings read
    by the I/O RX lcores (the default value is 128).

*   B = Size (in number of elements) of each of the software rings used
    by the I/O RX lcores to send packets to worker lcores (the default value is 8192).

*   C = Size (in number of buffer descriptors) of each of the NIC TX rings written
    by worker lcores (the default value is 256).

*   --bsz "A, B, C, D": Burst sizes:

*   A = I/O RX lcore read burst size from the NIC RX (the default value is 64).

*   B = I/O RX lcore write burst size to the output software rings,
    worker lcore read burst size from input software rings, and QoS enqueue size (the default value is 64).

*   C = QoS dequeue size (the default value is 32).

*   D = Worker lcore write burst size to the NIC TX (the default value is 64).

*   --msz M: Mempool size (in number of mbufs) for each pfc (the default value is 2097152).

*   --rth "A, B, C": The RX queue threshold parameters:

*   A = RX prefetch threshold (the default value is 8).

*   B = RX host threshold (the default value is 8).

*   C = RX write-back threshold (the default value is 4).

*   --tth "A, B, C": The TX queue threshold parameters:

*   A = TX prefetch threshold (the default value is 36).

*   B = TX host threshold (the default value is 0).

*   C = TX write-back threshold (the default value is 0).

*   --cfg FILE: Profile configuration file to load.

Refer to the *DPDK Getting Started Guide* for general information on running applications and
the Environment Abstraction Layer (EAL) options.

The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
needed for the QoS scheduler configuration.

The profile file has the following format:

::

    ; port configuration [port]

    frame overhead = 24
    number of subports per port = 1

    ; Subport configuration

    [subport 0]
    number of pipes per subport = 4096
    queue sizes = 64 64 64 64 64 64 64 64 64 64 64 64 64
    tb rate = 1250000000; Bytes per second
    tb size = 1000000; Bytes
    tc 0 rate = 1250000000;     Bytes per second
    tc 1 rate = 1250000000;     Bytes per second
    tc 2 rate = 1250000000;     Bytes per second
    tc 3 rate = 1250000000;     Bytes per second
    tc 4 rate = 1250000000;     Bytes per second
    tc 5 rate = 1250000000;     Bytes per second
    tc 6 rate = 1250000000;     Bytes per second
    tc 7 rate = 1250000000;     Bytes per second
    tc 8 rate = 1250000000;     Bytes per second
    tc 9 rate = 1250000000;     Bytes per second
    tc 10 rate = 1250000000;     Bytes per second
    tc 11 rate = 1250000000;     Bytes per second
    tc 12 rate = 1250000000;     Bytes per second

    tc period = 10;             Milliseconds
    tc oversubscription period = 10;     Milliseconds

    pipe 0-4095 = 0;        These pipes are configured with pipe profile 0

    ; Pipe configuration

    [pipe profile 0]
    tb rate = 305175; Bytes per second
    tb size = 1000000; Bytes

    tc 0 rate = 305175; Bytes per second
    tc 1 rate = 305175; Bytes per second
    tc 2 rate = 305175; Bytes per second
    tc 3 rate = 305175; Bytes per second
    tc 4 rate = 305175; Bytes per second
    tc 5 rate = 305175; Bytes per second
    tc 6 rate = 305175; Bytes per second
    tc 7 rate = 305175; Bytes per second
    tc 8 rate = 305175; Bytes per second
    tc 9 rate = 305175; Bytes per second
    tc 10 rate = 305175; Bytes per second
    tc 11 rate = 305175; Bytes per second
    tc 12 rate = 305175; Bytes per second
    tc period = 40; Milliseconds

    tc 0 oversubscription weight = 1
    tc 1 oversubscription weight = 1
    tc 2 oversubscription weight = 1
    tc 3 oversubscription weight = 1
    tc 4 oversubscription weight = 1
    tc 5 oversubscription weight = 1
    tc 6 oversubscription weight = 1
    tc 7 oversubscription weight = 1
    tc 8 oversubscription weight = 1
    tc 9 oversubscription weight = 1
    tc 10 oversubscription weight = 1
    tc 11 oversubscription weight = 1
    tc 12 oversubscription weight = 1

    tc 12 wrr weights = 1 1 1 1

    ; RED params per traffic class and color (Green / Yellow / Red)

    [red]
    tc 0 wred min = 48 40 32
    tc 0 wred max = 64 64 64
    tc 0 wred inv prob = 10 10 10
    tc 0 wred weight = 9 9 9

    tc 1 wred min = 48 40 32
    tc 1 wred max = 64 64 64
    tc 1 wred inv prob = 10 10 10
    tc 1 wred weight = 9 9 9

    tc 2 wred min = 48 40 32
    tc 2 wred max = 64 64 64
    tc 2 wred inv prob = 10 10 10
    tc 2 wred weight = 9 9 9

    tc 3 wred min = 48 40 32
    tc 3 wred max = 64 64 64
    tc 3 wred inv prob = 10 10 10
    tc 3 wred weight = 9 9 9

    tc 4 wred min = 48 40 32
    tc 4 wred max = 64 64 64
    tc 4 wred inv prob = 10 10 10
    tc 4 wred weight = 9 9 9

    tc 5 wred min = 48 40 32
    tc 5 wred max = 64 64 64
    tc 5 wred inv prob = 10 10 10
    tc 5 wred weight = 9 9 9

    tc 6 wred min = 48 40 32
    tc 6 wred max = 64 64 64
    tc 6 wred inv prob = 10 10 10
    tc 6 wred weight = 9 9 9

    tc 7 wred min = 48 40 32
    tc 7 wred max = 64 64 64
    tc 7 wred inv prob = 10 10 10
    tc 7 wred weight = 9 9 9

    tc 8 wred min = 48 40 32
    tc 8 wred max = 64 64 64
    tc 8 wred inv prob = 10 10 10
    tc 8 wred weight = 9 9 9

    tc 9 wred min = 48 40 32
    tc 9 wred max = 64 64 64
    tc 9 wred inv prob = 10 10 10
    tc 9 wred weight = 9 9 9

    tc 10 wred min = 48 40 32
    tc 10 wred max = 64 64 64
    tc 10 wred inv prob = 10 10 10
    tc 10 wred weight = 9 9 9

    tc 11 wred min = 48 40 32
    tc 11 wred max = 64 64 64
    tc 11 wred inv prob = 10 10 10
    tc 11 wred weight = 9 9 9

    tc 12 wred min = 48 40 32
    tc 12 wred max = 64 64 64
    tc 12 wred inv prob = 10 10 10
    tc 12 wred weight = 9 9 9

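The ``tb rate``/``tb size`` pairs in the profile parameterize token buckets:
credits accrue at ``tb rate`` bytes per second up to a ceiling of ``tb size`` bytes,
and a packet is only sent when enough credit is available.
A minimal sketch of that mechanism (illustrative only; the real credit accounting lives inside the ``rte_sched`` library):

```python
class TokenBucket:
    def __init__(self, rate, size):
        self.rate = rate      # tb rate: credit accrual in bytes per second
        self.size = size      # tb size: bucket ceiling in bytes
        self.tokens = size    # start with a full bucket
        self.last = 0.0       # timestamp of the last update, in seconds

    def send(self, nbytes, now):
        # Accrue credits since the last update, capped at the bucket size.
        self.tokens = min(self.size, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True       # packet conforms: consume credits and send
        return False          # not enough credit: hold the packet
```

With ``rate=1000`` and ``size=1500``, a 1500-byte packet sent at time 0 drains the bucket,
and the next one only conforms after about 1.5 seconds of refill.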
Interactive mode
~~~~~~~~~~~~~~~~

These commands are currently available through the command line interface:

*   Control Commands

    *   --quit: Quits the application.

*   General Statistics

    *   stats app: Shows a table with in-app calculated statistics.

    *   stats port X subport Y: For a specific subport, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        The information is displayed in a table, broken down by traffic class.

    *   stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        This information is displayed in a table, broken down by individual queue.

*   Average queue size

All of these commands work in the same way, averaging the number of packets across a specific subset of queues.

Two parameters can be configured prior to calling any of these commands:

    *   qavg n X: X is the number of times that the calculation will take place.
        Bigger numbers provide higher accuracy. The default value is 10.

    *   qavg period X: X is the number of microseconds that will elapse between each calculation.
        The default value is 100.

The commands that can be used for measuring average queue size are:

*   qavg port X subport Y: Shows the average queue size per subport.

*   qavg port X subport Y tc Z: Shows the average queue size per subport for a specific traffic class.

*   qavg port X subport Y pipe Z: Shows the average queue size per pipe.

*   qavg port X subport Y pipe Z tc A: Shows the average queue size per pipe for a specific traffic class.

*   qavg port X subport Y pipe Z tc A q B: Shows the average queue size of a specific queue.

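The averaging that the qavg commands perform can be sketched as follows.
This is a toy re-implementation under stated assumptions: the sample app reads queue occupancy through the scheduler's statistics API,
not through a Python callable, and the parameter names mirror the ``qavg n`` and ``qavg period`` knobs above.

```python
import time

def qavg(sample_queue_len, n=10, period_us=100):
    # Take n samples of the instantaneous queue length, one every
    # period_us microseconds, and return their arithmetic mean.
    total = 0
    for _ in range(n):
        total += sample_queue_len()
        time.sleep(period_us / 1_000_000)
    return total / n
```

Raising ``n`` trades measurement time for accuracy, which is why the default of 10 samples is a reasonable compromise.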
Example
~~~~~~~

The following is an example command with a single packet flow configuration:

.. code-block:: console

    ./<build_dir>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg

This example uses a single packet flow configuration which creates one RX thread on lcore 5 reading
from port 3 and one worker thread on lcore 7 writing to port 2.

Another example, with 2 packet flow configurations using different ports but sharing the same core for the QoS scheduler, is given below:

.. code-block:: console

   ./<build_dir>/examples/dpdk-qos_sched -l 1,2,6,7 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg

Note that independent cores for the packet flow configurations for each of the RX, WT and TX threads are also supported,
providing flexibility to balance the work.

The EAL coremask/corelist is constrained to contain the default main core 1 and the RX, WT and TX cores only.

Explanation
-----------

The Port/Subport/Pipe/Traffic Class/Queue are the hierarchical entities in a typical QoS application:

*   A subport represents a predefined group of users.

*   A pipe represents an individual user/subscriber.

*   A traffic class is the representation of a different traffic type with specific loss rate,
    delay and jitter requirements, such as voice, video or data transfers.

*   A queue hosts packets from one or multiple connections of the same type belonging to the same user.

The traffic flows that need to be configured are application dependent.
This application classifies packets based on the QinQ double VLAN tags and the IP destination address, as indicated in the following table.

.. _table_qos_scheduler_1:

.. table:: Entity Types

   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | **Level Name** | **Siblings per Parent** | **QoS Functional Description**                   | **Selected By**                  |
   |                |                         |                                                  |                                  |
   +================+=========================+==================================================+==================================+
   | Port           | -                       | Ethernet port                                    | Physical port                    |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Subport        | Config (8)              | Traffic shaped (token bucket)                    | Outer VLAN tag                   |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Pipe           | Config (4k)             | Traffic shaped (token bucket)                    | Inner VLAN tag                   |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Traffic Class  | 13                      | TCs of the same pipe serviced in strict priority | Destination IP address (0.0.0.X) |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Queue          | High Priority TC: 1,    | Queue of lowest priority traffic                 | Destination IP address (0.0.0.X) |
   |                | Lowest Priority TC: 4   | class (Best effort) serviced in WRR              |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+

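The "Selected By" column amounts to a direct mapping from packet fields to scheduler coordinates.
The sketch below is illustrative only: the exact field masks live in the sample app's sources,
and only the sizes (8 subports, 4096 pipes, 13 traffic classes, 4 best-effort WRR queues) are taken from the table above.

```python
def classify(outer_vlan, inner_vlan, dst_ip):
    # Map packet fields onto (subport, pipe, traffic class, queue).
    subport = outer_vlan & 0x7          # outer VLAN tag -> one of 8 subports
    pipe = inner_vlan & 0xFFF           # inner VLAN tag -> one of 4096 pipes
    low = dst_ip & 0xFF                 # lower byte of the destination IP
    tc = min(low % 16, 12)              # clamp to 13 traffic classes (0-12)
    queue = low % 4 if tc == 12 else 0  # only best-effort TC 12 has 4 WRR queues
    return subport, pipe, tc, queue
```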
Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.