..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

.. _l2_fwd_event_app:

L2 Forwarding Eventdev Sample Application
=========================================

The L2 Forwarding eventdev sample application is a simple example of packet
processing using the Data Plane Development Kit (DPDK) to demonstrate usage of
the poll and event mode packet I/O mechanisms.

Overview
--------

The L2 Forwarding eventdev sample application performs L2 forwarding for each
packet that is received on an RX_PORT. The destination port is the adjacent port
from the enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
ports 1 and 2 forward into each other, and ports 3 and 4 forward into each other.
Also, if MAC address updating is enabled, the MAC addresses are affected as follows:

*   The source MAC address is replaced by the TX_PORT MAC address

*   The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID

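This rewrite rule can be modelled stand-alone; the sketch below uses a plain
Ethernet header struct and hypothetical names (``eth_hdr_sketch``,
``mac_update_sketch``) rather than DPDK's ``struct rte_ether_hdr``, which the
application itself operates on inside the mbuf:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-alone model of the MAC update rule described above;
 * not the application's actual code. */
struct eth_hdr_sketch {
    uint8_t dst_addr[6];
    uint8_t src_addr[6];
    uint16_t ether_type;
};

static void
mac_update_sketch(struct eth_hdr_sketch *eth,
                  const uint8_t tx_port_mac[6], uint8_t tx_port_id)
{
    /* destination becomes 02:00:00:00:00:TX_PORT_ID */
    const uint8_t new_dst[6] = { 0x02, 0x00, 0x00, 0x00, 0x00, tx_port_id };

    memcpy(eth->src_addr, tx_port_mac, 6); /* source <- TX port MAC */
    memcpy(eth->dst_addr, new_dst, 6);
}
```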
The application receives packets from the RX_PORT using one of the following methods:

*   Poll mode

*   Eventdev mode (default)

This application can be used to benchmark performance using a traffic generator,
as shown in :numref:`figure_l2fwd_event_benchmark_setup`.

.. _figure_l2fwd_event_benchmark_setup:

.. figure:: img/l2_fwd_benchmark_setup.*

   Performance Benchmark Setup (Basic Environment)

Compiling the Application
-------------------------

To compile the sample application, see :doc:`compiling`.

The application is located in the ``l2fwd-event`` sub-directory.

Running the Application
-----------------------

The application requires a number of command line options:

.. code-block:: console

    ./<build_dir>/examples/dpdk-l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sched=SCHED_MODE

where,

*   -p PORTMASK: A hexadecimal bitmask of the ports to configure

*   -q NQ: A number of queues (=ports) per lcore (default is 1)

*   --[no-]mac-updating: Enable or disable MAC address updating (enabled by default).

*   --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.

*   --eventq-sched=SCHED_MODE: Event queue schedule mode, ordered, atomic or parallel. Atomic by default.

*   --config: Configure forwarding port pair mapping. Alternate port pairs by default.

Sample commands for running the application in different modes are given below.

To run in poll mode with 4 lcores, 16 ports, 8 RX queues per lcore and MAC address
updating enabled, issue the command:

.. code-block:: console

    ./<build_dir>/examples/dpdk-l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll

To run in eventdev mode with 4 lcores, 16 ports, ordered event queue scheduling and
MAC address updating enabled, issue the command:

.. code-block:: console

    ./<build_dir>/examples/dpdk-l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sched=ordered

or

.. code-block:: console

    ./<build_dir>/examples/dpdk-l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered

Refer to the *DPDK Getting Started Guide* for general information on running
applications and the Environment Abstraction Layer (EAL) options.

When run with the S/W scheduler, the application uses the following DPDK services:

*   Software scheduler
*   Rx adapter service function
*   Tx adapter service function

The application needs service cores to run the above services. Service cores must
be provided as EAL parameters, along with ``--vdev=event_sw0`` to enable the S/W
scheduler. The following is a sample command:

.. code-block:: console

    ./<build_dir>/examples/dpdk-l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered

Explanation
-----------

The following sections provide an explanation of the code.

.. _l2_fwd_event_app_cmd_arguments:

Command Line Arguments
~~~~~~~~~~~~~~~~~~~~~~

The L2 Forwarding eventdev sample application takes specific parameters,
in addition to Environment Abstraction Layer (EAL) arguments.
The preferred way to parse parameters is to use the getopt() function,
since it is part of a well-defined and portable library.

The parsing of arguments is done in the **l2fwd_parse_args()** function for
non-eventdev parameters and in **parse_eventdev_args()** for eventdev parameters.
The method of argument parsing is not described here. Refer to the
*glibc getopt(3)* man page for details.

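The getopt()-based approach can be sketched stand-alone as follows. This is a
simplified illustration with hypothetical names (``parse_args_sketch``,
``struct app_args``), not the application's actual parsing code; it handles the
``-p``, ``-q`` and ``--[no-]mac-updating`` options described above:

```c
#include <getopt.h>
#include <stdlib.h>

/* Hypothetical container for the parsed options. */
struct app_args {
    unsigned long port_mask;
    unsigned int nb_queues;
    int mac_updating;
};

static int
parse_args_sketch(int argc, char **argv, struct app_args *args)
{
    static const struct option long_opts[] = {
        { "mac-updating",    no_argument, NULL, 1 },
        { "no-mac-updating", no_argument, NULL, 0 },
        { NULL, 0, NULL, 0 }
    };
    int opt;

    args->port_mask = 0;
    args->nb_queues = 1;    /* default: one queue per lcore */
    args->mac_updating = 1; /* enabled by default */

    optind = 1; /* reset getopt state */
    while ((opt = getopt_long(argc, argv, "p:q:", long_opts, NULL)) != -1) {
        switch (opt) {
        case 'p': /* hexadecimal portmask */
            args->port_mask = strtoul(optarg, NULL, 16);
            break;
        case 'q': /* queues per lcore */
            args->nb_queues = (unsigned int)strtoul(optarg, NULL, 10);
            break;
        case 0: /* --no-mac-updating */
        case 1: /* --mac-updating */
            args->mac_updating = opt;
            break;
        default:
            return -1;
        }
    }
    return 0;
}
```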
EAL arguments are parsed first, then application-specific arguments.
This is done at the beginning of the main() function; eventdev parameters
are parsed in the eventdev_resource_setup() function during eventdev setup:

.. code-block:: c

    /* init EAL */

    ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_panic("Invalid EAL arguments\n");

    argc -= ret;
    argv += ret;

    /* parse application arguments (after the EAL ones) */

    ret = l2fwd_parse_args(argc, argv);
    if (ret < 0)
        rte_panic("Invalid L2FWD arguments\n");

    ...

    /* Parse eventdev command line options */
    ret = parse_eventdev_args(argc, argv);
    if (ret < 0)
        return ret;

.. _l2_fwd_event_app_mbuf_init:

Mbuf Pool Initialization
~~~~~~~~~~~~~~~~~~~~~~~~

Once the arguments are parsed, the mbuf pool is created.
The mbuf pool contains a set of mbuf objects that are used by the driver
and the application to store network packet data:

.. code-block:: c

    /* create the mbuf pool */

    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
                                                 MEMPOOL_CACHE_SIZE, 0,
                                                 RTE_MBUF_DEFAULT_BUF_SIZE,
                                                 rte_socket_id());
    if (l2fwd_pktmbuf_pool == NULL)
        rte_panic("Cannot init mbuf pool\n");

The rte_mempool is a generic structure used to handle pools of objects.
In this case, it is necessary to create a pool that will be used by the driver.
The number of allocated pkt mbufs is NB_MBUF, with a data room size of
RTE_MBUF_DEFAULT_BUF_SIZE each.
A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
The memory is allocated on the NUMA socket of the calling lcore,
but it is possible to extend this code to allocate one mbuf pool per socket.

The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
An advanced application may want to use the mempool API to create the
mbuf pool with more control.

.. _l2_fwd_event_app_drv_init:

Driver Initialization
~~~~~~~~~~~~~~~~~~~~~

The main part of the code in the main() function relates to the initialization
of the driver. To fully understand this code, it is recommended to study the
chapters related to the Poll Mode Driver and the Event Device Driver in the
*DPDK Programmer's Guide* and the *DPDK API Reference*.

.. code-block:: c

    /* reset l2fwd_dst_ports */

    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
        l2fwd_dst_ports[portid] = 0;

    last_port = 0;

    /*
     * Each logical core is assigned a dedicated TX queue on each port.
     */

    RTE_ETH_FOREACH_DEV(portid) {
        /* skip ports that are not enabled */
        if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
            continue;

        if (nb_ports_in_mask % 2) {
            l2fwd_dst_ports[portid] = last_port;
            l2fwd_dst_ports[last_port] = portid;
        } else {
            last_port = portid;
        }

        nb_ports_in_mask++;

        rte_eth_dev_info_get(portid, &dev_info);
    }

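The pairing loop above can be exercised in isolation. The sketch below is a
stand-alone version of the same logic with a hypothetical helper name
(``build_dst_ports``) and an arbitrary MAX_PORTS bound; every enabled port in
the mask is paired with its adjacent enabled port, so pairs forward into each
other:

```c
#include <stdint.h>

#define MAX_PORTS 32 /* arbitrary bound for this sketch */

/* Stand-alone model of the destination-port pairing loop shown above. */
static void
build_dst_ports(uint32_t port_mask, uint32_t dst_ports[MAX_PORTS])
{
    uint32_t portid;
    uint32_t last_port = 0;
    uint32_t nb_ports_in_mask = 0;

    for (portid = 0; portid < MAX_PORTS; portid++)
        dst_ports[portid] = 0;

    for (portid = 0; portid < MAX_PORTS; portid++) {
        /* skip ports that are not enabled */
        if ((port_mask & (1u << portid)) == 0)
            continue;

        if (nb_ports_in_mask % 2) {
            /* odd-numbered enabled port: pair it with the previous one */
            dst_ports[portid] = last_port;
            dst_ports[last_port] = portid;
        } else {
            last_port = portid;
        }
        nb_ports_in_mask++;
    }
}
```

With portmask 0x0f this yields the pairs (0,1) and (2,3) described in the
Overview.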
The next step is to configure the RX and TX queues. For each port, there is only
one RX queue (only one lcore is able to poll a given port). The number of TX
queues depends on the number of available lcores. The rte_eth_dev_configure()
function is used to configure the number of queues for a port:

.. code-block:: c

    ret = rte_eth_dev_configure(portid, 1, 1, &port_conf);
    if (ret < 0)
        rte_panic("Cannot configure device: err=%d, port=%u\n",
                  ret, portid);

.. _l2_fwd_event_app_rx_init:

RX Queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~

The application uses one lcore to poll one or several ports, depending on the -q
option, which specifies the number of queues per lcore.

For example, if the user specifies -q 4, the application is able to poll four
ports with one lcore. If there are 16 ports on the target (and if the portmask
argument is -p ffff), the application will need four lcores to poll all the
ports.

.. code-block:: c

    ret = rte_eth_rx_queue_setup(portid, 0, nb_rxd, SOCKET0,
                                 &rx_conf, l2fwd_pktmbuf_pool);
    if (ret < 0)
        rte_panic("rte_eth_rx_queue_setup: err=%d, port=%u\n",
                  ret, portid);

The list of queues that must be polled for a given lcore is stored in a private
structure called struct lcore_queue_conf:

.. code-block:: c

    struct lcore_queue_conf {
        unsigned n_rx_port;
        unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
        struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
    } __rte_cache_aligned;

    struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];

The values n_rx_port and rx_port_list[] are used in the main packet processing
loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).

.. _l2_fwd_event_app_tx_init:

TX Queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~

Each lcore should be able to transmit on any port. For every port, a single TX
queue is initialized.

.. code-block:: c

    /* init one TX queue on each port */

    fflush(stdout);

    ret = rte_eth_tx_queue_setup(portid, 0, nb_txd,
                                 rte_eth_dev_socket_id(portid), &tx_conf);
    if (ret < 0)
        rte_panic("rte_eth_tx_queue_setup:err=%d, port=%u\n",
                  ret, portid);

To configure eventdev support, the application sets up the following components:

*   Event dev
*   Event queue
*   Event port
*   Rx/Tx adapters
*   Ethernet ports

.. _l2_fwd_event_app_event_dev_init:

Event device Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The application can use either a H/W or a S/W based event device scheduler
implementation and supports a single instance of the event device. It configures
the event device as per the below configuration:

.. code-block:: c

   struct rte_event_dev_config event_d_conf = {
        .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
        .nb_event_ports = num_workers, /* Dedicated to each lcore */
        .nb_events_limit  = 4096,
        .nb_event_queue_flows = 1024,
        .nb_event_port_dequeue_depth = 128,
        .nb_event_port_enqueue_depth = 128
   };

   ret = rte_event_dev_configure(event_d_id, &event_d_conf);
   if (ret < 0)
        rte_panic("Error in configuring event device\n");

In case of the S/W scheduler, the application runs the eventdev scheduler service
on a service core. The application retrieves the service id and finds the best
possible service core to run the S/W scheduler:

.. code-block:: c

        rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
        if (evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED) {
                ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
                                &service_id);
                if (ret != -ESRCH && ret != 0)
                        rte_panic("Error in starting eventdev service\n");
                l2fwd_event_service_enable(service_id);
        }

.. _l2_fwd_app_event_queue_init:

Event queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~

Each Ethernet device is assigned a dedicated event queue, which is linked
to all available event ports, i.e. each lcore can dequeue packets from any of
the Ethernet ports.

.. code-block:: c

   struct rte_event_queue_conf event_q_conf = {
        .nb_atomic_flows = 1024,
        .nb_atomic_order_sequences = 1024,
        .event_queue_cfg = 0,
        .schedule_type = RTE_SCHED_TYPE_ATOMIC,
        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
   };

   /* User requested sched mode */
   event_q_conf.schedule_type = eventq_sched_mode;
   for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
        ret = rte_event_queue_setup(event_d_id, event_q_id,
                                    &event_q_conf);
        if (ret < 0)
              rte_panic("Error in configuring event queue\n");
   }

In case of the S/W scheduler, an extra event queue is created, which is used by
the Tx adapter service function for the enqueue operation.

.. _l2_fwd_app_event_port_init:

Event port Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~

Each worker thread is assigned a dedicated event port for enqueue/dequeue
operations to/from an event device. All event ports are linked with all
available event queues.

.. code-block:: c

   struct rte_event_port_conf event_p_conf = {
        .dequeue_depth = 32,
        .enqueue_depth = 32,
        .new_event_threshold = 4096
   };

   for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
        ret = rte_event_port_setup(event_d_id, event_p_id,
                                   &event_p_conf);
        if (ret < 0)
              rte_panic("Error in configuring event port %d\n", event_p_id);

        ret = rte_event_port_link(event_d_id, event_p_id, NULL,
                                  NULL, 0);
        if (ret < 0)
              rte_panic("Error in linking event port %d to queue\n",
                        event_p_id);
   }

In case of the S/W scheduler, an extra event port is created by the DPDK
library, which is retrieved by the application and used by the Tx adapter
service.

.. code-block:: c

        ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
        if (ret)
                rte_panic("Failed to get Tx adapter port id: %d\n", ret);

        ret = rte_event_port_link(event_d_id, tx_port_id,
                                  &evt_rsrc.evq.event_q_id[
                                        evt_rsrc.evq.nb_queues - 1],
                                  NULL, 1);
        if (ret != 1)
                rte_panic("Unable to link Tx adapter port to Tx queue: err=%d\n",
                          ret);

.. _l2_fwd_event_app_adapter_init:

Rx/Tx adapter Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the H/W scheduler, each Ethernet port is assigned a dedicated Rx/Tx
adapter. Each Ethernet port's Rx queues are connected to its respective event
queue at priority 0 via the Rx adapter configuration, and the Ethernet port's
Tx queues are connected via the Tx adapter.

.. code-block:: c

        RTE_ETH_FOREACH_DEV(port_id) {
                if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
                        continue;
                ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
                                                      &evt_rsrc->def_p_conf);
                if (ret)
                        rte_panic("Failed to create rx adapter[%d]\n",
                                  adapter_id);

                /* Configure user requested sched type */
                eth_q_conf.ev.sched_type = rsrc->sched_type;
                eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
                ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
                                                         -1, &eth_q_conf);
                if (ret)
                        rte_panic("Failed to add queues to Rx adapter\n");

                ret = rte_event_eth_rx_adapter_start(adapter_id);
                if (ret)
                        rte_panic("Rx adapter[%d] start failed\n", adapter_id);

                evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
                adapter_id++;
                if (q_id < evt_rsrc->evq.nb_queues)
                        q_id++;
        }

        adapter_id = 0;
        RTE_ETH_FOREACH_DEV(port_id) {
                if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
                        continue;
                ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
                                                      &evt_rsrc->def_p_conf);
                if (ret)
                        rte_panic("Failed to create tx adapter[%d]\n",
                                  adapter_id);

                ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
                                                         -1);
                if (ret)
                        rte_panic("Failed to add queues to Tx adapter\n");

                ret = rte_event_eth_tx_adapter_start(adapter_id);
                if (ret)
                        rte_panic("Tx adapter[%d] start failed\n", adapter_id);

                evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
                adapter_id++;
        }

For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
configured and shared among all the Ethernet ports. Also, the DPDK library needs
service cores to run the internal services for the Rx/Tx adapters. The
application gets the service ids for the Rx/Tx adapters and, after successful
setup, runs the services on dedicated service cores.

.. code-block:: c

        for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
                ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
                                evt_rsrc->rx_adptr.rx_adptr[i], &caps);
                if (ret < 0)
                        rte_panic("Failed to get Rx adapter[%d] caps\n",
                                  evt_rsrc->rx_adptr.rx_adptr[i]);
                ret = rte_event_eth_rx_adapter_service_id_get(
                                evt_rsrc->event_d_id,
                                &service_id);
                if (ret != -ESRCH && ret != 0)
                        rte_panic("Error in starting Rx adapter[%d] service\n",
                                  evt_rsrc->rx_adptr.rx_adptr[i]);
                l2fwd_event_service_enable(service_id);
        }

        for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
                ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
                                evt_rsrc->tx_adptr.tx_adptr[i], &caps);
                if (ret < 0)
                        rte_panic("Failed to get Tx adapter[%d] caps\n",
                                  evt_rsrc->tx_adptr.tx_adptr[i]);
                ret = rte_event_eth_tx_adapter_service_id_get(
                                evt_rsrc->event_d_id,
                                &service_id);
                if (ret != -ESRCH && ret != 0)
                        rte_panic("Error in starting Tx adapter[%d] service\n",
                                  evt_rsrc->tx_adptr.tx_adptr[i]);
                l2fwd_event_service_enable(service_id);
        }

.. _l2_fwd_event_app_rx_tx_packets:

Receive, Process and Transmit Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
the RX queues. This is done using the following code:

.. code-block:: c

    /*
     * Read packet from RX queues
     */

    for (i = 0; i < qconf->n_rx_port; i++) {
        portid = qconf->rx_port_list[i];
        nb_rx = rte_eth_rx_burst(portid, 0, pkts_burst,
                                 MAX_PKT_BURST);

        for (j = 0; j < nb_rx; j++) {
            m = pkts_burst[j];
            rte_prefetch0(rte_pktmbuf_mtod(m, void *));
            l2fwd_simple_forward(m, portid);
        }
    }

Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
function writes the mbuf pointers in a local table and returns the number of
available mbufs in the table.

Then, each mbuf in the table is processed by the l2fwd_simple_forward()
function. The processing is very simple: derive the TX port from the RX port,
then replace the source and destination MAC addresses if MAC address updating
is enabled.

During the initialization process, a static array of destination ports
(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
is assigned that is either the next or previous enabled port from the portmask.
If the number of ports in the portmask is odd, then the packet from the last
port is forwarded to the first port, i.e. if portmask=0x07, then forwarding
takes place like p0--->p1, p1--->p2, p2--->p0.

Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers incoming
mbufs up to MAX_PKT_BURST. Once that limit is reached, all buffered packets are
transmitted to the destination ports.

.. code-block:: c

   static void
   l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
   {
       uint32_t dst_port;
       int32_t sent;
       struct rte_eth_dev_tx_buffer *buffer;

       dst_port = l2fwd_dst_ports[portid];

       if (mac_updating)
           l2fwd_mac_updating(m, dst_port);

       buffer = tx_buffer[dst_port];
       sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
       if (sent)
           port_statistics[dst_port].tx += sent;
   }

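The buffering behaviour described above (rte_eth_tx_buffer() accumulates mbufs
and transmits a full burst once MAX_PKT_BURST is reached) can be modelled
stand-alone. The sketch below uses hypothetical names
(``tx_buffer_sketch``, ``tx_buffer_sketch_add``) and is only an illustration of
the accumulate-then-flush pattern, not the DPDK implementation:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_PKT_BURST 32

/* Hypothetical model of a per-port TX buffer: packets accumulate and are
 * "sent" in one burst once MAX_PKT_BURST is reached. */
struct tx_buffer_sketch {
    void *pkts[MAX_PKT_BURST];
    unsigned int length; /* packets currently buffered */
    unsigned int sent;   /* total packets flushed so far */
};

/* Returns 0 while buffering, or the number of packets "sent" on a flush. */
static unsigned int
tx_buffer_sketch_add(struct tx_buffer_sketch *b, void *pkt)
{
    b->pkts[b->length++] = pkt;
    if (b->length < MAX_PKT_BURST)
        return 0;            /* keep buffering */

    b->sent += b->length;    /* full burst "transmitted" */
    b->length = 0;
    return MAX_PKT_BURST;
}
```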
For this test application, the processing is exactly the same for all packets
arriving on the same RX port. Therefore, it would have been possible to call
the rte_eth_tx_buffer() function directly from the main loop to send all the
received packets on the same TX port, using the burst-oriented send function,
which is more efficient.

However, in real-life applications (such as L3 routing),
packet N is not necessarily forwarded on the same port as packet N-1.
The application is implemented to illustrate that, so the same approach can be
reused in a more complex application.

To ensure that no packets remain in the tables, each lcore drains its TX
queues in its main loop. This technique introduces some latency when there are
not many packets to send, however it improves performance:

.. code-block:: c

        cur_tsc = rte_rdtsc();

        /*
         * TX burst queue drain
         */
        diff_tsc = cur_tsc - prev_tsc;
        if (unlikely(diff_tsc > drain_tsc)) {
                for (i = 0; i < qconf->n_rx_port; i++) {
                        portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
                        buffer = tx_buffer[portid];
                        sent = rte_eth_tx_buffer_flush(portid, 0,
                                                       buffer);
                        if (sent)
                                port_statistics[portid].tx += sent;
                }

                /* if timer is enabled */
                if (timer_period > 0) {
                        /* advance the timer */
                        timer_tsc += diff_tsc;

                        /* if timer has reached its timeout */
                        if (unlikely(timer_tsc >= timer_period)) {
                                /* do this only on main core */
                                if (lcore_id == rte_get_main_lcore()) {
                                        print_stats();
                                        /* reset the timer */
                                        timer_tsc = 0;
                                }
                        }
                }

                prev_tsc = cur_tsc;
        }

In the **l2fwd_event_loop()** function, the main task is to read ingress
packets from the event ports. This is done using the following code:

.. code-block:: c

        /* Read packet from eventdev */
        nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
                                        events, deq_len, 0);
        if (nb_rx == 0) {
                rte_pause();
                continue;
        }

        for (i = 0; i < nb_rx; i++) {
                mbuf[i] = events[i].mbuf;
                rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
        }

Before reading packets, deq_len is fetched to ensure that the dequeue length
allowed by the eventdev is not exceeded.
The rte_event_dequeue_burst() function writes the mbuf pointers in a local
table and returns the number of available mbufs in the table.

Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
function. The processing is very simple: derive the TX port from the RX port,
then replace the source and destination MAC addresses if MAC address updating
is enabled.

During the initialization process, a static array of destination ports
(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
is assigned that is either the next or previous enabled port from the portmask.
If the number of ports in the portmask is odd, then the packet from the last
port is forwarded to the first port, i.e. if portmask=0x07, then forwarding
takes place like p0--->p1, p1--->p2, p2--->p0.

l2fwd_eventdev_forward() does not buffer incoming mbufs. Packets are forwarded
to the destination ports via the Tx adapter or the generic eventdev enqueue API,
depending on whether the H/W or the S/W scheduler is used.

.. code-block:: c

        nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id, port_id, ev,
                                                 nb_rx);
        while (nb_tx < nb_rx && !rsrc->force_quit)
                nb_tx += rte_event_eth_tx_adapter_enqueue(
                                event_d_id, port_id,
                                ev + nb_tx, nb_rx - nb_tx);