..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

QoS Scheduler Sample Application
================================

The QoS sample application demonstrates the use of the DPDK to provide QoS scheduling.

Overview
--------

The architecture of the QoS scheduler application is shown in the following figure.

.. _figure_qos_sched_app_arch:

.. figure:: img/qos_sched_app_arch.*

   QoS Scheduler Application Architecture


There are two flavors of runtime execution for this application,
using either two or three threads per packet flow configuration.
The RX thread reads packets from the RX port,
classifies them based on the double VLAN tags (outer and inner) and
the lower byte of the IP destination address, and puts them into a ring queue.
The worker thread dequeues the packets from the ring and calls the QoS scheduler enqueue/dequeue functions.
If a separate TX core is used, the packets are then sent to the TX ring.
Otherwise, they are sent directly to the TX port.
The TX thread, if present, reads from the TX ring and writes the packets to the TX port.
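
The heart of the worker thread is the hand-off between the software ring and the
QoS scheduler. The following is a minimal sketch of one such iteration, assuming a
ring filled by the RX thread, an already configured ``rte_sched_port`` and a TX port;
the function and variable names are illustrative, not the sample application's actual code:

.. code-block:: c

    #include <rte_ring.h>
    #include <rte_sched.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* One worker iteration: drain a burst from the software ring filled by
     * the RX thread, push it into the QoS scheduler, then pull scheduled
     * packets and transmit them on the TX port. */
    static void
    worker_iteration(struct rte_ring *rx_ring, struct rte_sched_port *sched,
                     uint16_t tx_port_id)
    {
        struct rte_mbuf *mbufs[64];
        uint32_t nb, i;
        uint16_t sent;

        nb = rte_ring_sc_dequeue_burst(rx_ring, (void **)mbufs, 64, NULL);
        if (nb > 0)
            rte_sched_port_enqueue(sched, mbufs, nb);

        nb = rte_sched_port_dequeue(sched, mbufs, 32);
        if (nb > 0) {
            sent = rte_eth_tx_burst(tx_port_id, 0, mbufs, nb);
            /* Free any packets the TX burst did not accept. */
            for (i = sent; i < nb; i++)
                rte_pktmbuf_free(mbufs[i]);
        }
    }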

Compiling the Application
-------------------------

To compile the sample application see :doc:`compiling`.

The application is located in the ``qos_sched`` sub-directory.

.. note::

    This application is intended to run on Linux only.

.. note::

    To get statistics on the sample app using the command line interface as described in the next section,
    DPDK must be compiled defining *RTE_SCHED_COLLECT_STATS*, which can be done by changing the relevant
    entry in the ``config/rte_config.h`` file.
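
For example, enabling the statistics collection amounts to making sure a define along
the following lines is present in ``config/rte_config.h`` before (re)building DPDK
(the exact surrounding contents of that file may differ between releases):

.. code-block:: c

    /* config/rte_config.h: let the QoS scheduler collect the per-subport,
     * per-pipe and per-queue statistics reported by the "stats" commands
     * of this sample application. */
    #define RTE_SCHED_COLLECT_STATS 1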

Running the Application
-----------------------

.. note::

    In order to run the application, a total of at least 4 GB of
    hugepages must be set up for each of the used sockets (depending on the cores in use).
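
As a hedged example (the page size, NUMA node and mount point are illustrative and
should be adjusted to the system in use), 4 GB of hugepages can be reserved on
socket 0 using 2 MB pages as follows:

.. code-block:: console

    # Reserve 2048 x 2 MB hugepages (4 GB) on NUMA node 0;
    # repeat for every other socket in use.
    echo 2048 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
    mkdir -p /mnt/huge
    mount -t hugetlbfs nodev /mnt/huge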

The application has a number of command line options:

.. code-block:: console

    ./<build_dir>/examples/dpdk-qos_sched [EAL options] -- <APP PARAMS>

Mandatory application parameters include:

*   --pfc "RX PORT, TX PORT, RX LCORE, WT LCORE, TX CORE": Packet flow configuration.
    Multiple pfc entries can be configured on the command line,
    each having 4 or 5 items (depending on whether or not a TX core is defined);
    see the example below.
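
For instance, a 4-item pfc makes the worker lcore transmit directly,
while a 5-item pfc adds a dedicated TX lcore
(both forms are taken from the examples later in this document):

.. code-block:: console

    # 4 items: RX port 3, TX port 2, RX lcore 5, WT lcore 7 (no TX lcore)
    --pfc "3,2,5,7"

    # 5 items: RX port 3, TX port 2, RX lcore 2, WT lcore 6, TX lcore 7
    --pfc "3,2,2,6,7"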

Optional application parameters include:

*   -i: Starts the application in interactive mode.
    In this mode, the application shows a command line that can be used for obtaining statistics while
    scheduling is taking place (see interactive mode below for more information).

*   --mnc n: Main core index (the default value is 1).

*   --rsz "A, B, C": Ring sizes:

*   A = Size (in number of buffer descriptors) of each of the NIC RX rings read
    by the I/O RX lcores (the default value is 128).

*   B = Size (in number of elements) of each of the software rings used
    by the I/O RX lcores to send packets to worker lcores (the default value is 8192).

*   C = Size (in number of buffer descriptors) of each of the NIC TX rings written
    by worker lcores (the default value is 256).

*   --bsz "A, B, C, D": Burst sizes:

*   A = I/O RX lcore read burst size from the NIC RX (the default value is 64)

*   B = I/O RX lcore write burst size to the output software rings,
    worker lcore read burst size from input software rings, QoS enqueue size (the default value is 64)

*   C = QoS dequeue size (the default value is 32)

*   D = Worker lcore write burst size to the NIC TX (the default value is 64)

*   --msz M: Mempool size (in number of mbufs) for each pfc (default 2097152)

*   --rth "A, B, C": The RX queue threshold parameters:

*   A = RX prefetch threshold (the default value is 8)

*   B = RX host threshold (the default value is 8)

*   C = RX write-back threshold (the default value is 4)

*   --tth "A, B, C": The TX queue threshold parameters:

*   A = TX prefetch threshold (the default value is 36)

*   B = TX host threshold (the default value is 0)

*   C = TX write-back threshold (the default value is 0)

*   --cfg FILE: Profile configuration to load
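
Putting these together, a command that spells out the optional parameters explicitly,
using their documented default values and the single packet flow configuration from
the example further below, might look as follows:

.. code-block:: console

    ./<build_dir>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- --pfc "3,2,5,7" \
            --rsz "128, 8192, 256" --bsz "64, 64, 32, 64" --msz 2097152 \
            --rth "8, 8, 4" --tth "36, 0, 0" --cfg ./profile.cfg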

Refer to *DPDK Getting Started Guide* for general information on running applications and
the Environment Abstraction Layer (EAL) options.

The profile configuration file defines all the port/subport/pipe/traffic class/queue parameters
needed for the QoS scheduler configuration.

The profile file has the following format:

::

    ; port configuration [port]

    frame overhead = 24
    number of subports per port = 1

    ; Subport configuration

    [subport 0]
    number of pipes per subport = 4096
    queue sizes = 64 64 64 64 64 64 64 64 64 64 64 64 64

    subport 0-8 = 0     ; These subports are configured with subport profile 0

    [subport profile 0]
    tb rate = 1250000000; Bytes per second
    tb size = 1000000; Bytes
    tc 0 rate = 1250000000;     Bytes per second
    tc 1 rate = 1250000000;     Bytes per second
    tc 2 rate = 1250000000;     Bytes per second
    tc 3 rate = 1250000000;     Bytes per second
    tc 4 rate = 1250000000;     Bytes per second
    tc 5 rate = 1250000000;     Bytes per second
    tc 6 rate = 1250000000;     Bytes per second
    tc 7 rate = 1250000000;     Bytes per second
    tc 8 rate = 1250000000;     Bytes per second
    tc 9 rate = 1250000000;     Bytes per second
    tc 10 rate = 1250000000;     Bytes per second
    tc 11 rate = 1250000000;     Bytes per second
    tc 12 rate = 1250000000;     Bytes per second

    tc period = 10;             Milliseconds
    tc oversubscription period = 10;     Milliseconds

    pipe 0-4095 = 0;        These pipes are configured with pipe profile 0

    ; Pipe configuration

    [pipe profile 0]
    tb rate = 305175; Bytes per second
    tb size = 1000000; Bytes

    tc 0 rate = 305175; Bytes per second
    tc 1 rate = 305175; Bytes per second
    tc 2 rate = 305175; Bytes per second
    tc 3 rate = 305175; Bytes per second
    tc 4 rate = 305175; Bytes per second
    tc 5 rate = 305175; Bytes per second
    tc 6 rate = 305175; Bytes per second
    tc 7 rate = 305175; Bytes per second
    tc 8 rate = 305175; Bytes per second
    tc 9 rate = 305175; Bytes per second
    tc 10 rate = 305175; Bytes per second
    tc 11 rate = 305175; Bytes per second
    tc 12 rate = 305175; Bytes per second
    tc period = 40; Milliseconds

    tc 0 oversubscription weight = 1
    tc 1 oversubscription weight = 1
    tc 2 oversubscription weight = 1
    tc 3 oversubscription weight = 1
    tc 4 oversubscription weight = 1
    tc 5 oversubscription weight = 1
    tc 6 oversubscription weight = 1
    tc 7 oversubscription weight = 1
    tc 8 oversubscription weight = 1
    tc 9 oversubscription weight = 1
    tc 10 oversubscription weight = 1
    tc 11 oversubscription weight = 1
    tc 12 oversubscription weight = 1

    tc 12 wrr weights = 1 1 1 1

    ; RED params per traffic class and color (Green / Yellow / Red)

    [red]
    tc 0 wred min = 48 40 32
    tc 0 wred max = 64 64 64
    tc 0 wred inv prob = 10 10 10
    tc 0 wred weight = 9 9 9

    tc 1 wred min = 48 40 32
    tc 1 wred max = 64 64 64
    tc 1 wred inv prob = 10 10 10
    tc 1 wred weight = 9 9 9

    tc 2 wred min = 48 40 32
    tc 2 wred max = 64 64 64
    tc 2 wred inv prob = 10 10 10
    tc 2 wred weight = 9 9 9

    tc 3 wred min = 48 40 32
    tc 3 wred max = 64 64 64
    tc 3 wred inv prob = 10 10 10
    tc 3 wred weight = 9 9 9

    tc 4 wred min = 48 40 32
    tc 4 wred max = 64 64 64
    tc 4 wred inv prob = 10 10 10
    tc 4 wred weight = 9 9 9

    tc 5 wred min = 48 40 32
    tc 5 wred max = 64 64 64
    tc 5 wred inv prob = 10 10 10
    tc 5 wred weight = 9 9 9

    tc 6 wred min = 48 40 32
    tc 6 wred max = 64 64 64
    tc 6 wred inv prob = 10 10 10
    tc 6 wred weight = 9 9 9

    tc 7 wred min = 48 40 32
    tc 7 wred max = 64 64 64
    tc 7 wred inv prob = 10 10 10
    tc 7 wred weight = 9 9 9

    tc 8 wred min = 48 40 32
    tc 8 wred max = 64 64 64
    tc 8 wred inv prob = 10 10 10
    tc 8 wred weight = 9 9 9

    tc 9 wred min = 48 40 32
    tc 9 wred max = 64 64 64
    tc 9 wred inv prob = 10 10 10
    tc 9 wred weight = 9 9 9

    tc 10 wred min = 48 40 32
    tc 10 wred max = 64 64 64
    tc 10 wred inv prob = 10 10 10
    tc 10 wred weight = 9 9 9

    tc 11 wred min = 48 40 32
    tc 11 wred max = 64 64 64
    tc 11 wred inv prob = 10 10 10
    tc 11 wred weight = 9 9 9

    tc 12 wred min = 48 40 32
    tc 12 wred max = 64 64 64
    tc 12 wred inv prob = 10 10 10
    tc 12 wred weight = 9 9 9

Interactive mode
~~~~~~~~~~~~~~~~

The following commands are currently supported by the command line interface:

*   Control Commands

*   --quit: Quits the application.

*   General Statistics

    *   stats app: Shows a table with in-app calculated statistics.

    *   stats port X subport Y: For a specific subport, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        The information is displayed in a table broken down by traffic class.

    *   stats port X subport Y pipe Z: For a specific pipe, it shows the number of packets that
        went through the scheduler properly and the number of packets that were dropped.
        The same information is shown in bytes.
        This information is displayed in a table broken down by individual queue.

*   Average queue size

All of these commands work the same way, averaging the number of packets throughout a specific subset of queues.

Two parameters can be configured for this prior to calling any of these commands:

    *   qavg n X: n is the number of times that the calculation will take place.
        Bigger numbers provide higher accuracy. The default value is 10.

    *   qavg period X: period is the number of microseconds that will be allowed between each calculation.
        The default value is 100.

The commands that can be used for measuring average queue size are:

*   qavg port X subport Y: Show average queue size per subport.

*   qavg port X subport Y tc Z: Show average queue size per subport for a specific traffic class.

*   qavg port X subport Y pipe Z: Show average queue size per pipe.

*   qavg port X subport Y pipe Z tc A: Show average queue size per pipe for a specific traffic class.

*   qavg port X subport Y pipe Z tc A q B: Show average queue size of a specific queue.
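
A short interactive session might then look like the following
(the ``qos_sched>`` prompt string, as well as the port, subport, pipe and
traffic class numbers, are illustrative):

.. code-block:: console

    qos_sched> qavg n 32
    qos_sched> qavg period 100
    qos_sched> stats port 0 subport 0
    qos_sched> qavg port 0 subport 0 pipe 100 tc 12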

Example
~~~~~~~

The following is an example command with a single packet flow configuration:

.. code-block:: console

    ./<build_dir>/examples/dpdk-qos_sched -l 1,5,7 -n 4 -- --pfc "3,2,5,7" --cfg ./profile.cfg

This example uses a single packet flow configuration which creates one RX thread on lcore 5 reading
from port 3 and a worker thread on lcore 7 writing to port 2.

Another example, with two packet flow configurations using different ports but sharing the same cores for the QoS scheduler, is given below:

.. code-block:: console

   ./<build_dir>/examples/dpdk-qos_sched -l 1,2,6,7 -n 4 -- --pfc "3,2,2,6,7" --pfc "1,0,2,6,7" --cfg ./profile.cfg

Note that independent cores for each of the RX, WT and TX threads of a packet flow configuration are also supported,
providing flexibility to balance the work.

The EAL coremask/corelist is constrained to contain the default main core 1 and the RX, WT and TX cores only.

Explanation
-----------

Port, Subport, Pipe, Traffic Class and Queue are the hierarchical entities in a typical QoS application:

*   A subport represents a predefined group of users.

*   A pipe represents an individual user/subscriber.

*   A traffic class is the representation of a different traffic type with specific loss rate,
    delay and jitter requirements, such as voice, video or data transfers.

*   A queue hosts packets from one or multiple connections of the same type belonging to the same user.

The traffic flows that need to be configured are application dependent.
This application classifies packets based on the QinQ double VLAN tags and the IP destination address as indicated in the following table.

.. _table_qos_scheduler_1:

.. table:: Entity Types

   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | **Level Name** | **Siblings per Parent** | **QoS Functional Description**                   | **Selected By**                  |
   |                |                         |                                                  |                                  |
   +================+=========================+==================================================+==================================+
   | Port           | -                       | Ethernet port                                    | Physical port                    |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Subport        | Config (8)              | Traffic shaped (token bucket)                    | Outer VLAN tag                   |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Pipe           | Config (4k)             | Traffic shaped (token bucket)                    | Inner VLAN tag                   |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Traffic Class  | 13                      | TCs of the same pipe serviced in strict priority | Destination IP address (0.0.0.X) |
   |                |                         |                                                  |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
   | Queue          | High Priority TC: 1,    | Queue of lowest priority traffic                 | Destination IP address (0.0.0.X) |
   |                | Lowest Priority TC: 4   | class (Best effort) serviced in WRR              |                                  |
   +----------------+-------------------------+--------------------------------------------------+----------------------------------+
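
As an illustration of the mapping in the table above, the following hedged sketch
shows how a classifier could stamp these hierarchy fields into each packet with the
``rte_sched_port_pkt_write()`` helper before the QoS enqueue; apart from the DPDK API
call, the function and variable names are illustrative, not the sample application's
actual code:

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_meter.h>
    #include <rte_sched.h>

    /* Illustrative mapping only: subport from the outer VLAN tag, pipe from
     * the inner VLAN tag, traffic class and queue from the low byte of the
     * destination IP address (the real application parses these fields from
     * the headers of every received packet). */
    static void
    classify_pkt(struct rte_sched_port *port, struct rte_mbuf *pkt,
                 uint16_t outer_vlan, uint16_t inner_vlan, uint8_t dst_ip_lsb)
    {
        uint32_t subport = outer_vlan & 0x0FFF;  /* must be < configured subports */
        uint32_t pipe = inner_vlan & 0x0FFF;     /* up to 4096 pipes              */
        uint32_t tc = dst_ip_lsb % 13;           /* 13 traffic classes (0-12)     */
        /* Only the lowest priority (best effort) TC has 4 WRR queues. */
        uint32_t queue = (tc == 12) ? (dst_ip_lsb & 0x03) : 0;

        rte_sched_port_pkt_write(port, pkt, subport, pipe, tc, queue,
                                 RTE_COLOR_GREEN);
    }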

Please refer to the "QoS Scheduler" chapter in the *DPDK Programmer's Guide* for more information about these parameters.