..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2015 Intel Corporation.

L2 Forwarding Sample Application (in Real and Virtualized Environments) with core load statistics.
==================================================================================================

The L2 Forwarding sample application is a simple example of packet processing using
the Data Plane Development Kit (DPDK) which
also takes advantage of Single Root I/O Virtualization (SR-IOV) features in a virtualized environment.

.. note::

    This application is a variation of the L2 Forwarding sample application. It demonstrates
    a possible usage scheme of the job stats library; therefore, some parts of this document
    are identical to the original L2 Forwarding application documentation.

Overview
--------

The L2 Forwarding sample application, which can operate in real and virtualized environments,
performs L2 forwarding for each packet that is received.
The destination port is the adjacent port from the enabled portmask, that is,
if the first four ports are enabled (portmask 0xf),
ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other.
Also, the MAC addresses are affected as follows:

*   The source MAC address is replaced by the TX port MAC address

*   The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID

This application can be used to benchmark performance using a traffic-generator, as shown in :numref:`figure_l2_fwd_benchmark_setup_jobstats`.

The application can also be used in a virtualized environment as shown in :numref:`figure_l2_fwd_virtenv_benchmark_setup_jobstats`.

The L2 Forwarding application can also be used as a starting point for developing a new application based on the DPDK.

.. _figure_l2_fwd_benchmark_setup_jobstats:

.. figure:: img/l2_fwd_benchmark_setup.*

   Performance Benchmark Setup (Basic Environment)

.. _figure_l2_fwd_virtenv_benchmark_setup_jobstats:

.. figure:: img/l2_fwd_virtenv_benchmark_setup.*

   Performance Benchmark Setup (Virtualized Environment)


Virtual Function Setup Instructions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This application can use the virtual functions available in the system and
therefore can be used in a virtual machine without passing the whole
network device through to the guest machine in a virtualized scenario.
The virtual functions can be enabled in the host machine or the hypervisor with the respective physical function driver.

For example, in a Linux* host machine, it is possible to enable a virtual function using the following command:

.. code-block:: console

    modprobe ixgbe max_vfs=2,2

This command enables two Virtual Functions on each Physical Function of the NIC,
with two physical ports in the PCI configuration space.
It is important to note that Virtual Functions 0 and 2 belong to Physical Function 0
and Virtual Functions 1 and 3 belong to Physical Function 1,
in this case enabling a total of four Virtual Functions.
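
One illustrative way to confirm that the Virtual Functions were created (this check is not part
of the sample application) is to list the VF PCI devices on the host:

.. code-block:: console

    lspci | grep -i "virtual function"

Each listed Virtual Function can then be assigned to a guest, for example via PCI passthrough.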

Compiling the Application
-------------------------

To compile the sample application, see :doc:`compiling`.

The application is located in the ``l2fwd-jobstats`` sub-directory.

Running the Application
-----------------------

The application requires a number of command line options:

.. code-block:: console

    ./<build_dir>/examples/dpdk-l2fwd-jobstats [EAL options] -- -p PORTMASK [-q NQ] [-l]

where,

*   p PORTMASK: A hexadecimal bitmask of the ports to configure

*   q NQ: A number of queues (=ports) per lcore (default is 1)

*   l: Use locale thousands separator when formatting big numbers.

To run the application in a Linux environment with 4 lcores, 16 ports, 8 RX queues per lcore and
thousands separator printing, issue the command:

.. code-block:: console

    $ ./<build_dir>/examples/dpdk-l2fwd-jobstats -l 0-3 -n 4 -- -q 8 -p ffff -l

Refer to the *DPDK Getting Started Guide* for general information on running applications
and the Environment Abstraction Layer (EAL) options.

Explanation
-----------

The following sections provide some explanation of the code.

Command Line Arguments
~~~~~~~~~~~~~~~~~~~~~~

The L2 Forwarding sample application takes specific parameters,
in addition to Environment Abstraction Layer (EAL) arguments
(see `Running the Application`_).
The preferred way to parse parameters is to use the getopt() function,
since it is part of a well-defined and portable library.

The parsing of arguments is done in the l2fwd_parse_args() function.
The method of argument parsing is not described here.
Refer to the *glibc getopt(3)* man page for details.
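
For illustration only, a minimal getopt() loop for the options listed above might look like the
following sketch. It is not the application's actual parsing code: the variable
``l2fwd_rx_queue_per_lcore`` and the locale call are assumptions made for the example, and the
fragment relies on <unistd.h>, <stdlib.h> and <locale.h>.

.. code-block:: c

    int opt;

    while ((opt = getopt(argc, argv, "p:q:l")) != -1) {
        switch (opt) {
        case 'p':   /* hexadecimal portmask */
            l2fwd_enabled_port_mask = strtoul(optarg, NULL, 16);
            break;
        case 'q':   /* RX queues (ports) per lcore; hypothetical variable name */
            l2fwd_rx_queue_per_lcore = strtoul(optarg, NULL, 10);
            break;
        case 'l':   /* enable the locale thousands separator for printed numbers */
            setlocale(LC_NUMERIC, "");
            break;
        default:
            return -1;  /* unknown option: let the caller print usage and exit */
        }
    }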

EAL arguments are parsed first, then application-specific arguments.
This is done at the beginning of the main() function:

.. code-block:: c

    /* init EAL */

    ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "Invalid EAL arguments\n");

    argc -= ret;
    argv += ret;

    /* parse application arguments (after the EAL ones) */

    ret = l2fwd_parse_args(argc, argv);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "Invalid L2FWD arguments\n");

Mbuf Pool Initialization
~~~~~~~~~~~~~~~~~~~~~~~~

Once the arguments are parsed, the mbuf pool is created.
The mbuf pool contains a set of mbuf objects that will be used by the driver
and the application to store network packet data:

.. code-block:: c

    /* create the mbuf pool */
    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
            MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
            rte_socket_id());

    if (l2fwd_pktmbuf_pool == NULL)
        rte_exit(EXIT_FAILURE, "Cannot init mbuf pool\n");

The rte_mempool is a generic structure used to handle pools of objects.
In this case, it is necessary to create a pool that will be used by the driver.
The number of allocated pkt mbufs is NB_MBUF, with a data room size of
RTE_MBUF_DEFAULT_BUF_SIZE each.
A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
The memory is allocated on the socket returned by rte_socket_id(),
but it is possible to extend this code to allocate one mbuf pool per socket.

The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
An advanced application may want to use the mempool API to create the
mbuf pool with more control.
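
As a sketch of the per-socket extension mentioned above (the ``pktmbuf_pool[]`` array name and the
reuse of the same sizing constants are assumptions made for this example, not code from the sample
application), one pool could be created for the socket of every lcore in use:

.. code-block:: c

    /* Hypothetical per-socket mbuf pools, one per NUMA node actually used. */
    struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES] = { NULL };
    unsigned int lcore_id;

    RTE_LCORE_FOREACH(lcore_id) {
        unsigned int socket = rte_lcore_to_socket_id(lcore_id);
        char name[RTE_MEMPOOL_NAMESIZE];

        if (pktmbuf_pool[socket] != NULL)
            continue;   /* pool for this socket already created */

        snprintf(name, sizeof(name), "mbuf_pool_socket_%u", socket);
        pktmbuf_pool[socket] = rte_pktmbuf_pool_create(name, NB_MBUF,
                MEMPOOL_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
                socket);
        if (pktmbuf_pool[socket] == NULL)
            rte_exit(EXIT_FAILURE, "Cannot init mbuf pool on socket %u\n",
                    socket);
    }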

Driver Initialization
~~~~~~~~~~~~~~~~~~~~~

The main part of the code in the main() function relates to the initialization of the driver.
To fully understand this code, it is recommended to study the chapters related to the Poll Mode Driver
in the *DPDK Programmer's Guide* and the *DPDK API Reference*.

.. code-block:: c

    /* reset l2fwd_dst_ports */

    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
        l2fwd_dst_ports[portid] = 0;

    last_port = 0;

    /*
     * Each logical core is assigned a dedicated TX queue on each port.
     */
    RTE_ETH_FOREACH_DEV(portid) {
        /* skip ports that are not enabled */
        if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
            continue;

        if (nb_ports_in_mask % 2) {
            l2fwd_dst_ports[portid] = last_port;
            l2fwd_dst_ports[last_port] = portid;
        }
        else
            last_port = portid;

        nb_ports_in_mask++;

        rte_eth_dev_info_get((uint8_t) portid, &dev_info);
    }

The next step is to configure the RX and TX queues.
For each port, there is only one RX queue (only one lcore is able to poll a given port).
The number of TX queues depends on the number of available lcores.
The rte_eth_dev_configure() function is used to configure the number of queues for a port:

.. code-block:: c

    ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "Cannot configure device: "
            "err=%d, port=%u\n",
            ret, portid);
220
221RX Queue Initialization
222~~~~~~~~~~~~~~~~~~~~~~~
223
224The application uses one lcore to poll one or several ports, depending on the -q option,
225which specifies the number of queues per lcore.
226
227For example, if the user specifies -q 4, the application is able to poll four ports with one lcore.
228If there are 16 ports on the target (and if the portmask argument is -p ffff ),
229the application will need four lcores to poll all the ports.
230
231.. code-block:: c
232
233    ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd,
234                rte_eth_dev_socket_id(portid),
235                NULL,
236                l2fwd_pktmbuf_pool);
237
238    if (ret < 0)
239        rte_exit(EXIT_FAILURE, "rte_eth_rx_queue_setup:err=%d, port=%u\n",
240                ret, (unsigned) portid);
241
242The list of queues that must be polled for a given lcore is stored in a private structure called struct lcore_queue_conf.
243
244.. code-block:: c
245
246    struct lcore_queue_conf {
247        unsigned n_rx_port;
248        unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
249        truct mbuf_table tx_mbufs[RTE_MAX_ETHPORTS];
250
251        struct rte_timer rx_timers[MAX_RX_QUEUE_PER_LCORE];
252        struct rte_jobstats port_fwd_jobs[MAX_RX_QUEUE_PER_LCORE];
253
254        struct rte_timer flush_timer;
255        struct rte_jobstats flush_job;
256        struct rte_jobstats idle_job;
257        struct rte_jobstats_context jobs_context;
258
259        rte_atomic16_t stats_read_pending;
260        rte_spinlock_t lock;
261    } __rte_cache_aligned;
262
263Values of struct lcore_queue_conf:
264
265*   n_rx_port and rx_port_list[] are used in the main packet processing loop
266    (see Section `Receive, Process and Transmit Packets`_ later in this chapter).
267
268*   rx_timers and flush_timer are used to ensure forced TX on low packet rate.
269
270*   flush_job, idle_job and jobs_context are librte_jobstats objects used for managing l2fwd jobs.
271
272*   stats_read_pending and lock are used during job stats read phase.
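
A simplified sketch of how a stats-printing lcore might use stats_read_pending and lock to read the
counters of a forwarding lcore is shown below. It is illustrative only and not the exact application
code; the helper name is an assumption for the example.

.. code-block:: c

    static void
    show_lcore_stats(unsigned lcore_id)
    {
        struct lcore_queue_conf *qconf = &lcore_queue_conf[lcore_id];

        /* Ask the forwarding lcore to leave its job loop and release the lock. */
        rte_atomic16_set(&qconf->stats_read_pending, 1);
        rte_spinlock_lock(&qconf->lock);
        rte_atomic16_set(&qconf->stats_read_pending, 0);

        /* The forwarding lcore is now waiting on the lock, so the
         * rte_jobstats objects in qconf can be read (and reset) here
         * without racing with the jobs themselves. */

        rte_spinlock_unlock(&qconf->lock);
    }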

TX Queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~

Each lcore should be able to transmit on any port. For every port, a single TX queue is initialized.

.. code-block:: c

    /* init one TX queue on each port */

    fflush(stdout);
    ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
            rte_eth_dev_socket_id(portid),
            NULL);
    if (ret < 0)
        rte_exit(EXIT_FAILURE, "rte_eth_tx_queue_setup:err=%d, port=%u\n",
                ret, (unsigned) portid);

Jobs statistics initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

There are several statistics objects available:

*   Flush job statistics

.. code-block:: c

    rte_jobstats_init(&qconf->flush_job, "flush", drain_tsc, drain_tsc,
            drain_tsc, 0);

    rte_timer_init(&qconf->flush_timer);
    ret = rte_timer_reset(&qconf->flush_timer, drain_tsc, PERIODICAL,
                lcore_id, &l2fwd_flush_job, NULL);

    if (ret < 0) {
        rte_exit(1, "Failed to reset flush job timer for lcore %u: %s",
                    lcore_id, rte_strerror(-ret));
    }

*   Statistics per RX port

.. code-block:: c

    rte_jobstats_init(job, name, 0, drain_tsc, 0, MAX_PKT_BURST);
    rte_jobstats_set_update_period_function(job, l2fwd_job_update_cb);

    rte_timer_init(&qconf->rx_timers[i]);
    ret = rte_timer_reset(&qconf->rx_timers[i], 0, PERIODICAL, lcore_id,
            l2fwd_fwd_job, (void *)(uintptr_t)i);

    if (ret < 0) {
        rte_exit(1, "Failed to reset lcore %u port %u job timer: %s",
                    lcore_id, qconf->rx_port_list[i], rte_strerror(-ret));
    }

The following parameters are passed to rte_jobstats_init() (the full parameter order is shown after this list):

*   0 as the minimal poll period

*   drain_tsc as the maximum poll period

*   MAX_PKT_BURST as the desired target value (RX burst size)
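
For reference, the order of the rte_jobstats_init() parameters (as declared in rte_jobstats.h) is
approximately:

.. code-block:: c

    int
    rte_jobstats_init(struct rte_jobstats *job, const char *name,
            uint64_t min_period, uint64_t max_period, uint64_t initial_period,
            int64_t target);

For the per-port RX job, the initial period is 0; the period is then adjusted at run time between
min_period and max_period based on how close each run comes to the target value.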

Main loop
~~~~~~~~~

The forwarding path is reworked compared to the original L2 Forwarding application.
The l2fwd_main_loop() function contains three nested loops.

.. code-block:: c

    for (;;) {
        rte_spinlock_lock(&qconf->lock);

        do {
            rte_jobstats_context_start(&qconf->jobs_context);

            /* Do the Idle job:
             * - Read stats_read_pending flag
             * - check if some real job needs to be executed
             */
            rte_jobstats_start(&qconf->jobs_context, &qconf->idle_job);

            do {
                uint8_t i;
                uint64_t now = rte_get_timer_cycles();

                need_manage = qconf->flush_timer.expire < now;
                /* Check if we were asked to provide stats. */
                stats_read_pending =
                        rte_atomic16_read(&qconf->stats_read_pending);
                need_manage |= stats_read_pending;

                for (i = 0; i < qconf->n_rx_port && !need_manage; i++)
                    need_manage = qconf->rx_timers[i].expire < now;

            } while (!need_manage);
            rte_jobstats_finish(&qconf->idle_job, qconf->idle_job.target);

            rte_timer_manage();
            rte_jobstats_context_finish(&qconf->jobs_context);
        } while (likely(stats_read_pending == 0));

        rte_spinlock_unlock(&qconf->lock);
        rte_pause();
    }

The first, infinite for loop minimizes the impact of stats reading: the lock is only released and re-acquired when a stats read is requested.

The second, inner do-while loop performs the whole job management. When any job is ready, rte_timer_manage() is used to call the job handler;
this is where the l2fwd_fwd_job() and l2fwd_flush_job() functions are called when needed.
Then rte_jobstats_context_finish() is called to mark the end of the loop iteration - no other jobs are ready to execute. By this time the stats are ready to be read
and, if stats_read_pending is set, the loop breaks to allow the stats to be read.

The third, innermost do-while loop is the idle job (idle stats counter). Its only purpose is to monitor whether any job is ready or a stats read is pending
for this lcore. The statistics from this part of the code are considered the headroom available for additional processing.

Receive, Process and Transmit Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The main task of the l2fwd_fwd_job() function is to read ingress packets from the RX queue of a particular port and forward them.
This is done using the following code:

.. code-block:: c

    total_nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
            MAX_PKT_BURST);

    for (j = 0; j < total_nb_rx; j++) {
        m = pkts_burst[j];
        rte_prefetch0(rte_pktmbuf_mtod(m, void *));
        l2fwd_simple_forward(m, portid);
    }

Packets are read in a burst of size MAX_PKT_BURST.
Then, each mbuf in the table is processed by the l2fwd_simple_forward() function.
The processing is very simple: determine the TX port from the RX port, then replace the source and destination MAC addresses.

The rte_eth_rx_burst() function writes the mbuf pointers in a local table and returns the number of available mbufs in the table.

After the first read, a second read is attempted:

.. code-block:: c

    if (total_nb_rx == MAX_PKT_BURST) {
        const uint16_t nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
                MAX_PKT_BURST);

        total_nb_rx += nb_rx;
        for (j = 0; j < nb_rx; j++) {
            m = pkts_burst[j];
            rte_prefetch0(rte_pktmbuf_mtod(m, void *));
            l2fwd_simple_forward(m, portid);
        }
    }

This second read is important to give the job stats library feedback on how many packets were processed.

.. code-block:: c

    /* Adjust period time in which we are running here. */
    if (rte_jobstats_finish(job, total_nb_rx) != 0) {
        rte_timer_reset(&qconf->rx_timers[port_idx], job->period, PERIODICAL,
                lcore_id, l2fwd_fwd_job, arg);
    }

To maximize performance, exactly MAX_PKT_BURST packets (the target value) are expected to be read on each l2fwd_fwd_job() call.
If total_nb_rx is smaller than the target value, job->period will be increased; if it is greater, the period will be decreased.
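
The period adjustment itself is delegated to the callback registered earlier with
rte_jobstats_set_update_period_function(). A simplified sketch of such a callback is shown below;
it is not the exact l2fwd_job_update_cb() code, and the 1/8 hysteresis factor and the handling of an
empty run are assumptions made for the example.

.. code-block:: c

    static void
    l2fwd_job_update_cb(struct rte_jobstats *job, int64_t result)
    {
        /* How far was this run from the target burst size? */
        int64_t err = job->target - result;
        /* Only react to differences larger than 1/8 of the target. */
        int64_t hysteresis = job->target / 8;
        uint64_t period;

        if (result == 0) {
            /* Nothing received: back off to the maximum period. */
            job->period = job->max_period;
            return;
        }

        if (err < -hysteresis || err > hysteresis) {
            /* Scale the period by the target/result ratio, staying within
             * the min/max limits given at init time. */
            period = (job->period * job->target) / result;

            if (period < job->min_period)
                period = job->min_period;
            else if (period > job->max_period)
                period = job->max_period;

            job->period = period;
        }
    }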

.. note::

    In the following code, one line for getting the output port requires some explanation.

During the initialization process, a static array of destination ports (l2fwd_dst_ports[]) is filled such that for each source port,
a destination port is assigned that is either the next or previous enabled port from the portmask.
Naturally, the number of ports in the portmask must be even; otherwise, the application exits.

.. code-block:: c

    static void
    l2fwd_simple_forward(struct rte_mbuf *m, unsigned portid)
    {
        struct rte_ether_hdr *eth;
        void *tmp;
        unsigned dst_port;

        dst_port = l2fwd_dst_ports[portid];

        eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);

        /* 02:00:00:00:00:xx */

        tmp = &eth->d_addr.addr_bytes[0];

        *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t) dst_port << 40);

        /* src addr */

        rte_ether_addr_copy(&l2fwd_ports_eth_addr[dst_port], &eth->s_addr);

        l2fwd_send_packet(m, (uint8_t) dst_port);
    }

Then, the packet is sent using the l2fwd_send_packet(m, dst_port) function.
For this test application, the processing is exactly the same for all packets arriving on the same RX port.
Therefore, it would have been possible to call the l2fwd_send_burst() function directly from the main loop
to send all the received packets on the same TX port,
using the burst-oriented send function, which is more efficient.

However, in real-life applications (such as L3 routing),
packet N is not necessarily forwarded on the same port as packet N-1.
The application is implemented to illustrate that, so the same approach can be reused in a more complex application.

The l2fwd_send_packet() function stores the packet in a per-lcore and per-TX-port table.
If the table is full, the whole table of packets is transmitted using the l2fwd_send_burst() function:

.. code-block:: c

    /* Send the packet on an output interface */

    static int
    l2fwd_send_packet(struct rte_mbuf *m, uint16_t port)
    {
        unsigned lcore_id, len;
        struct lcore_queue_conf *qconf;

        lcore_id = rte_lcore_id();
        qconf = &lcore_queue_conf[lcore_id];
        len = qconf->tx_mbufs[port].len;
        qconf->tx_mbufs[port].m_table[len] = m;
        len++;

        /* enough pkts to be sent */

        if (unlikely(len == MAX_PKT_BURST)) {
            l2fwd_send_burst(qconf, MAX_PKT_BURST, port);
            len = 0;
        }

        qconf->tx_mbufs[port].len = len;

        return 0;
    }
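
The l2fwd_send_burst() function itself is not reproduced in this document. A simplified sketch of
the usual pattern it follows (hand the buffered mbufs to rte_eth_tx_burst() and free whatever could
not be sent) is shown below; the port statistics update is omitted and the exact signature in the
application may differ.

.. code-block:: c

    static int
    l2fwd_send_burst(struct lcore_queue_conf *qconf, unsigned n, uint16_t port)
    {
        struct rte_mbuf **m_table = qconf->tx_mbufs[port].m_table;
        unsigned ret;

        /* Hand the whole table to the PMD; it may accept fewer than n mbufs. */
        ret = rte_eth_tx_burst(port, 0, m_table, (uint16_t) n);

        /* Free the mbufs the PMD did not take, so they are not leaked. */
        while (unlikely(ret < n)) {
            rte_pktmbuf_free(m_table[ret]);
            ret++;
        }

        return 0;
    }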

To ensure that no packets remain in the tables, a flush job exists. The l2fwd_flush_job() function
is called periodically on each lcore to drain the TX queue of each port.
This technique introduces some latency when there are not many packets to send;
however, it improves performance:

.. code-block:: c

    static void
    l2fwd_flush_job(__rte_unused struct rte_timer *timer, __rte_unused void *arg)
    {
        uint64_t now;
        unsigned lcore_id;
        struct lcore_queue_conf *qconf;
        struct mbuf_table *m_table;
        uint16_t portid;

        lcore_id = rte_lcore_id();
        qconf = &lcore_queue_conf[lcore_id];

        rte_jobstats_start(&qconf->jobs_context, &qconf->flush_job);

        now = rte_get_timer_cycles();
        lcore_id = rte_lcore_id();
        qconf = &lcore_queue_conf[lcore_id];
        for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {
            m_table = &qconf->tx_mbufs[portid];
            if (m_table->len == 0 || m_table->next_flush_time <= now)
                continue;

            l2fwd_send_burst(qconf, portid);
        }

        /* Pass target to indicate that this job is happy of time interval
         * in which it was called. */
        rte_jobstats_finish(&qconf->flush_job, qconf->flush_job.target);
    }