..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

.. _l2_fwd_event_app:

L2 Forwarding Eventdev Sample Application
=========================================

The L2 Forwarding eventdev sample application is a simple example of packet
processing using the Data Plane Development Kit (DPDK) to demonstrate the usage
of the poll and event mode packet I/O mechanisms.

Overview
--------

The L2 Forwarding eventdev sample application performs L2 forwarding for each
packet that is received on an RX_PORT. The destination port is the adjacent port
from the enabled portmask, that is, if the first four ports are enabled (portmask=0x0f),
ports 0 and 1 forward into each other, and ports 2 and 3 forward into each other.
Also, if MAC address updating is enabled, the MAC addresses are affected as follows:

* The source MAC address is replaced by the TX_PORT MAC address

* The destination MAC address is replaced by 02:00:00:00:00:TX_PORT_ID

The application receives packets from RX_PORT using one of the following methods:

* Poll mode

* Eventdev mode (default)

This application can be used to benchmark performance using a traffic-generator,
as shown in :numref:`figure_l2fwd_event_benchmark_setup`.

.. _figure_l2fwd_event_benchmark_setup:

.. figure:: img/l2_fwd_benchmark_setup.*

   Performance Benchmark Setup (Basic Environment)

Compiling the Application
-------------------------

To compile the sample application see :doc:`compiling`.

The application is located in the ``l2fwd-event`` sub-directory.

Running the Application
-----------------------

The application requires a number of command line options:

.. code-block:: console

    ./build/l2fwd-event [EAL options] -- -p PORTMASK [-q NQ] --[no-]mac-updating --mode=MODE --eventq-sched=SCHED_MODE

where,

* p PORTMASK: A hexadecimal bitmask of the ports to configure

* q NQ: A number of queues (=ports) per lcore (default is 1)

* --[no-]mac-updating: Enable or disable MAC address updating (enabled by default).

* --mode=MODE: Packet transfer mode for I/O, poll or eventdev. Eventdev by default.

* --eventq-sched=SCHED_MODE: Event queue schedule mode, Ordered, Atomic or Parallel. Atomic by default.

Sample commands to run the application in the different modes are given below.

To run the application in poll mode with 4 lcores, 16 ports, 8 RX queues per
lcore and MAC address updating enabled, issue the command:

.. code-block:: console

    ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=poll

To run the application in eventdev mode with 4 lcores, 16 ports, the ordered
schedule method and MAC address updating enabled, issue the command:

.. code-block:: console

    ./build/l2fwd-event -l 0-3 -n 4 -- -p ffff --eventq-sched=ordered

or

.. code-block:: console

    ./build/l2fwd-event -l 0-3 -n 4 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered

Refer to the *DPDK Getting Started Guide* for general information on running
applications and the Environment Abstraction Layer (EAL) options.

When run with the S/W scheduler, the application uses the following DPDK services:

* Software scheduler
* Rx adapter service function
* Tx adapter service function

The application needs service cores to run the above services. Service cores
must be provided as EAL parameters along with ``--vdev=event_sw0`` to enable the
S/W scheduler. Following is a sample command:

.. code-block:: console

    ./build/l2fwd-event -l 0-7 -s 0-3 -n 4 --vdev event_sw0 -- -q 8 -p ffff --mode=eventdev --eventq-sched=ordered
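As a hedged illustration (not part of the sample itself), an application could
verify at startup that service lcores were actually reserved before it attempts
to run the S/W scheduler; the helper name below is hypothetical:

.. code-block:: c

    #include <rte_debug.h>
    #include <rte_service.h>

    /* Hypothetical startup check: the S/W scheduler and the Rx/Tx adapter
     * service functions all need service lcores, so fail early if none
     * were reserved with the EAL '-s' option.
     */
    static void
    check_service_lcores(void)
    {
        if (rte_service_lcore_count() == 0)
            rte_panic("No service lcores, pass them with the EAL -s option\n");
    }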
Explanation
-----------

The following sections provide some explanation of the code.

.. _l2_fwd_event_app_cmd_arguments:

Command Line Arguments
~~~~~~~~~~~~~~~~~~~~~~

The L2 Forwarding eventdev sample application takes specific parameters,
in addition to Environment Abstraction Layer (EAL) arguments.
The preferred way to parse parameters is to use the getopt() function,
since it is part of a well-defined and portable library.

The parsing of arguments is done in the **l2fwd_parse_args()** function for
non-eventdev parameters and in **parse_eventdev_args()** for eventdev parameters.
The method of argument parsing is not described here. Refer to the
*glibc getopt(3)* man page for details.

EAL arguments are parsed first, then application-specific arguments.
This is done at the beginning of the main() function and eventdev parameters
are parsed in the eventdev_resource_setup() function during eventdev setup:

.. code-block:: c

    /* init EAL */

    ret = rte_eal_init(argc, argv);
    if (ret < 0)
        rte_panic("Invalid EAL arguments\n");

    argc -= ret;
    argv += ret;

    /* parse application arguments (after the EAL ones) */

    ret = l2fwd_parse_args(argc, argv);
    if (ret < 0)
        rte_panic("Invalid L2FWD arguments\n");
    .
    .
    .

    /* Parse eventdev command line options */
    ret = parse_eventdev_args(argc, argv);
    if (ret < 0)
        return ret;

.. _l2_fwd_event_app_mbuf_init:

Mbuf Pool Initialization
~~~~~~~~~~~~~~~~~~~~~~~~

Once the arguments are parsed, the mbuf pool is created.
The mbuf pool contains a set of mbuf objects that will be used by the driver
and the application to store network packet data:

.. code-block:: c

    /* create the mbuf pool */

    l2fwd_pktmbuf_pool = rte_pktmbuf_pool_create("mbuf_pool", NB_MBUF,
                                                 MEMPOOL_CACHE_SIZE, 0,
                                                 RTE_MBUF_DEFAULT_BUF_SIZE,
                                                 rte_socket_id());
    if (l2fwd_pktmbuf_pool == NULL)
        rte_panic("Cannot init mbuf pool\n");

The rte_mempool is a generic structure used to handle pools of objects.
In this case, it is necessary to create a pool that will be used by the driver.
The number of allocated pkt mbufs is NB_MBUF, with a data room size of
RTE_MBUF_DEFAULT_BUF_SIZE each.
A per-lcore cache of MEMPOOL_CACHE_SIZE mbufs is kept.
The memory is allocated on the NUMA socket of the calling lcore,
but it is possible to extend this code to allocate one mbuf pool per socket.

The rte_pktmbuf_pool_create() function uses the default mbuf pool and mbuf
initializers, respectively rte_pktmbuf_pool_init() and rte_pktmbuf_init().
An advanced application may want to use the mempool API to create the
mbuf pool with more control.
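A minimal sketch of that per-socket extension is shown below. It is illustrative
only: the pktmbuf_pool[] array and the create_per_socket_pools() helper are not
part of the sample, while NB_MBUF and MEMPOOL_CACHE_SIZE are the constants
already used above.

.. code-block:: c

    #include <stdio.h>

    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    /* Illustrative only: one mbuf pool per detected NUMA socket. */
    static struct rte_mempool *pktmbuf_pool[RTE_MAX_NUMA_NODES];

    static void
    create_per_socket_pools(void)
    {
        char name[RTE_MEMPOOL_NAMESIZE];
        unsigned int idx;
        int socket;

        for (idx = 0; idx < rte_socket_count(); idx++) {
            socket = rte_socket_id_by_idx(idx);
            snprintf(name, sizeof(name), "mbuf_pool_%d", socket);
            pktmbuf_pool[socket] = rte_pktmbuf_pool_create(name, NB_MBUF,
                    MEMPOOL_CACHE_SIZE, 0,
                    RTE_MBUF_DEFAULT_BUF_SIZE, socket);
            if (pktmbuf_pool[socket] == NULL)
                rte_panic("Cannot init mbuf pool for socket %d\n", socket);
        }
    }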
.. _l2_fwd_event_app_drv_init:

Driver Initialization
~~~~~~~~~~~~~~~~~~~~~

The main part of the code in the main() function relates to the initialization
of the driver. To fully understand this code, it is recommended to study the
chapters related to the Poll Mode Driver and the Event Device Driver in the
*DPDK Programmer's Guide* and the *DPDK API Reference*.

.. code-block:: c

    if (rte_pci_probe() < 0)
        rte_panic("Cannot probe PCI\n");

    /* reset l2fwd_dst_ports */

    for (portid = 0; portid < RTE_MAX_ETHPORTS; portid++)
        l2fwd_dst_ports[portid] = 0;

    last_port = 0;

    /*
     * Each logical core is assigned a dedicated TX queue on each port.
     */

    RTE_ETH_FOREACH_DEV(portid) {
        /* skip ports that are not enabled */

        if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
            continue;

        if (nb_ports_in_mask % 2) {
            l2fwd_dst_ports[portid] = last_port;
            l2fwd_dst_ports[last_port] = portid;
        }
        else
            last_port = portid;

        nb_ports_in_mask++;

        rte_eth_dev_info_get((uint8_t) portid, &dev_info);
    }

Observe that:

* rte_pci_probe() parses the devices on the PCI bus and initializes recognized
  devices.

The next step is to configure the RX and TX queues. For each port, there is only
one RX queue (only one lcore is able to poll a given port). The number of TX
queues depends on the number of available lcores. The rte_eth_dev_configure()
function is used to configure the number of queues for a port:

.. code-block:: c

    ret = rte_eth_dev_configure((uint8_t)portid, 1, 1, &port_conf);
    if (ret < 0)
        rte_panic("Cannot configure device: err=%d, port=%u\n",
                ret, portid);

.. _l2_fwd_event_app_rx_init:

RX Queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~

The application uses one lcore to poll one or several ports, depending on the -q
option, which specifies the number of queues per lcore.

For example, if the user specifies -q 4, the application is able to poll four
ports with one lcore. If there are 16 ports on the target (and if the portmask
argument is -p ffff), the application will need four lcores to poll all the
ports.

.. code-block:: c

    ret = rte_eth_rx_queue_setup((uint8_t) portid, 0, nb_rxd, SOCKET0,
                                 &rx_conf, l2fwd_pktmbuf_pool);
    if (ret < 0)
        rte_panic("rte_eth_rx_queue_setup: err=%d, port=%u\n",
                ret, portid);

The list of queues that must be polled for a given lcore is stored in a private
structure called struct lcore_queue_conf.

.. code-block:: c

    struct lcore_queue_conf {
        unsigned n_rx_port;
        unsigned rx_port_list[MAX_RX_QUEUE_PER_LCORE];
        struct mbuf_table tx_mbufs[L2FWD_MAX_PORTS];
    } __rte_cache_aligned;

    struct lcore_queue_conf lcore_queue_conf[RTE_MAX_LCORE];

The values n_rx_port and rx_port_list[] are used in the main packet processing
loop (see :ref:`l2_fwd_event_app_rx_tx_packets`).

.. _l2_fwd_event_app_tx_init:

TX Queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~

Each lcore should be able to transmit on any port. For every port, a single TX
queue is initialized.

.. code-block:: c

    /* init one TX queue on each port */

    fflush(stdout);

    ret = rte_eth_tx_queue_setup((uint8_t) portid, 0, nb_txd,
            rte_eth_dev_socket_id(portid), &tx_conf);
    if (ret < 0)
        rte_panic("rte_eth_tx_queue_setup:err=%d, port=%u\n",
                ret, (unsigned) portid);
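The poll-mode TX path shown later in this guide buffers packets per destination
port with rte_eth_tx_buffer() and flushes them with rte_eth_tx_buffer_flush().
A minimal sketch of how the per-port tx_buffer[] array used there may be
allocated and initialized is shown below; the init_tx_buffer() helper name is
illustrative and not necessarily the sample's exact code.

.. code-block:: c

    #include <rte_debug.h>
    #include <rte_ethdev.h>
    #include <rte_malloc.h>

    /* Illustrative: one software TX buffer per port, drained later by
     * rte_eth_tx_buffer()/rte_eth_tx_buffer_flush() in the main loop.
     */
    static struct rte_eth_dev_tx_buffer *tx_buffer[RTE_MAX_ETHPORTS];

    static void
    init_tx_buffer(uint16_t portid)
    {
        tx_buffer[portid] = rte_zmalloc_socket("tx_buffer",
                RTE_ETH_TX_BUFFER_SIZE(MAX_PKT_BURST), 0,
                rte_eth_dev_socket_id(portid));
        if (tx_buffer[portid] == NULL)
            rte_panic("Cannot allocate TX buffer for port %u\n", portid);

        rte_eth_tx_buffer_init(tx_buffer[portid], MAX_PKT_BURST);
    }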
To configure eventdev support, the application sets up the following components:

* Event device
* Event queue
* Event port
* Rx/Tx adapters
* Ethernet ports

.. _l2_fwd_event_app_event_dev_init:

Event device Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The application can use either a H/W or a S/W based event device scheduler
implementation and supports a single event device instance. It configures the
event device as per the configuration below:

.. code-block:: c

    struct rte_event_dev_config event_d_conf = {
        .nb_event_queues = ethdev_count, /* Dedicated to each Ethernet port */
        .nb_event_ports = num_workers, /* Dedicated to each lcore */
        .nb_events_limit = 4096,
        .nb_event_queue_flows = 1024,
        .nb_event_port_dequeue_depth = 128,
        .nb_event_port_enqueue_depth = 128
    };

    ret = rte_event_dev_configure(event_d_id, &event_d_conf);
    if (ret < 0)
        rte_panic("Error in configuring event device\n");

In case of the S/W scheduler, the application runs the eventdev scheduler
service on a service core. The application retrieves the service id and finds
the best possible service core to run the S/W scheduler on.

.. code-block:: c

    rte_event_dev_info_get(evt_rsrc->event_d_id, &evdev_info);
    if (!(evdev_info.event_dev_cap & RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED)) {
        ret = rte_event_dev_service_id_get(evt_rsrc->event_d_id,
                &service_id);
        if (ret != -ESRCH && ret != 0)
            rte_panic("Error in starting eventdev service\n");
        l2fwd_event_service_enable(service_id);
    }

.. _l2_fwd_app_event_queue_init:

Event queue Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~

Each Ethernet device is assigned a dedicated event queue, which is linked to
all available event ports, i.e. each lcore can dequeue packets from any of the
Ethernet ports.

.. code-block:: c

    struct rte_event_queue_conf event_q_conf = {
        .nb_atomic_flows = 1024,
        .nb_atomic_order_sequences = 1024,
        .event_queue_cfg = 0,
        .schedule_type = RTE_SCHED_TYPE_ATOMIC,
        .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST
    };

    /* User requested sched mode */
    event_q_conf.schedule_type = eventq_sched_mode;
    for (event_q_id = 0; event_q_id < ethdev_count; event_q_id++) {
        ret = rte_event_queue_setup(event_d_id, event_q_id,
                &event_q_conf);
        if (ret < 0)
            rte_panic("Error in configuring event queue\n");
    }

In case of the S/W scheduler, an extra event queue is created, which is used by
the Tx adapter service function for the enqueue operation.

.. _l2_fwd_app_event_port_init:

Event port Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~

Each worker thread is assigned a dedicated event port for enqueue/dequeue
operations to/from the event device. All event ports are linked with all
available event queues.

.. code-block:: c

    struct rte_event_port_conf event_p_conf = {
        .dequeue_depth = 32,
        .enqueue_depth = 32,
        .new_event_threshold = 4096
    };

    for (event_p_id = 0; event_p_id < num_workers; event_p_id++) {
        ret = rte_event_port_setup(event_d_id, event_p_id,
                &event_p_conf);
        if (ret < 0)
            rte_panic("Error in configuring event port %d\n", event_p_id);

        ret = rte_event_port_link(event_d_id, event_p_id, NULL,
                NULL, 0);
        if (ret < 0)
            rte_panic("Error in linking event port %d to queue\n",
                    event_p_id);
    }

In case of the S/W scheduler, an extra event port is created by the DPDK
library. The application retrieves it and uses it for the Tx adapter service.

.. code-block:: c

    ret = rte_event_eth_tx_adapter_event_port_get(tx_adptr_id, &tx_port_id);
    if (ret)
        rte_panic("Failed to get Tx adapter port id: %d\n", ret);

    ret = rte_event_port_link(event_d_id, tx_port_id,
            &evt_rsrc.evq.event_q_id[
                evt_rsrc.evq.nb_queues - 1],
            NULL, 1);
    if (ret != 1)
        rte_panic("Unable to link Tx adapter port to Tx queue: err=%d\n",
                ret);
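The snippets above rely on l2fwd_event_service_enable() to place a service on a
service lcore; the same helper is used again for the Rx/Tx adapter services
below. A simplified sketch of what such a helper can do with the service core
API is shown here. The sample additionally picks the least loaded service
lcore, so treat this as an approximation with a hypothetical name.

.. code-block:: c

    #include <errno.h>
    #include <stdint.h>

    #include <rte_debug.h>
    #include <rte_lcore.h>
    #include <rte_service.h>

    /* Simplified illustration: map the given service to the first available
     * service lcore and make both the service and the lcore runnable.
     */
    static void
    service_enable_sketch(uint32_t service_id)
    {
        uint32_t slcores[RTE_MAX_LCORE];
        int32_t count, ret;

        count = rte_service_lcore_list(slcores, RTE_MAX_LCORE);
        if (count <= 0)
            rte_panic("No service lcores, pass them with the EAL -s option\n");

        if (rte_service_map_lcore_set(service_id, slcores[0], 1) != 0)
            rte_panic("Cannot map service %u to lcore %u\n",
                    service_id, slcores[0]);
        if (rte_service_runstate_set(service_id, 1) != 0)
            rte_panic("Cannot set runstate for service %u\n", service_id);

        /* -EALREADY only means the service lcore is already running. */
        ret = rte_service_lcore_start(slcores[0]);
        if (ret != 0 && ret != -EALREADY)
            rte_panic("Cannot start service lcore %u\n", slcores[0]);
    }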
.. _l2_fwd_event_app_adapter_init:

Rx/Tx adapter Initialization
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the H/W scheduler, each Ethernet port is assigned a dedicated Rx and Tx
adapter. Each Ethernet port's Rx queues are connected to its respective event
queue at priority 0 via the Rx adapter configuration, and the Ethernet port's
Tx queues are connected via the Tx adapter.

.. code-block:: c

    RTE_ETH_FOREACH_DEV(port_id) {
        if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
            continue;
        ret = rte_event_eth_rx_adapter_create(adapter_id, event_d_id,
                &evt_rsrc->def_p_conf);
        if (ret)
            rte_panic("Failed to create rx adapter[%d]\n",
                    adapter_id);

        /* Configure user requested sched type */
        eth_q_conf.ev.sched_type = rsrc->sched_type;
        eth_q_conf.ev.queue_id = evt_rsrc->evq.event_q_id[q_id];
        ret = rte_event_eth_rx_adapter_queue_add(adapter_id, port_id,
                -1, &eth_q_conf);
        if (ret)
            rte_panic("Failed to add queues to Rx adapter\n");

        ret = rte_event_eth_rx_adapter_start(adapter_id);
        if (ret)
            rte_panic("Rx adapter[%d] start Failed\n", adapter_id);

        evt_rsrc->rx_adptr.rx_adptr[adapter_id] = adapter_id;
        adapter_id++;
        if (q_id < evt_rsrc->evq.nb_queues)
            q_id++;
    }

    adapter_id = 0;
    RTE_ETH_FOREACH_DEV(port_id) {
        if ((rsrc->enabled_port_mask & (1 << port_id)) == 0)
            continue;
        ret = rte_event_eth_tx_adapter_create(adapter_id, event_d_id,
                &evt_rsrc->def_p_conf);
        if (ret)
            rte_panic("Failed to create tx adapter[%d]\n",
                    adapter_id);

        ret = rte_event_eth_tx_adapter_queue_add(adapter_id, port_id,
                -1);
        if (ret)
            rte_panic("Failed to add queues to Tx adapter\n");

        ret = rte_event_eth_tx_adapter_start(adapter_id);
        if (ret)
            rte_panic("Tx adapter[%d] start Failed\n", adapter_id);

        evt_rsrc->tx_adptr.tx_adptr[adapter_id] = adapter_id;
        adapter_id++;
    }
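The eth_q_conf variable used in the Rx adapter loop above is a
struct rte_event_eth_rx_adapter_queue_conf. A possible initialization,
consistent with the defaults described in this guide, is sketched below; the
values are illustrative and the per-port fields are overwritten in the loop
above.

.. code-block:: c

    #include <rte_event_eth_rx_adapter.h>
    #include <rte_eventdev.h>

    /* Illustrative default for the Rx adapter queue configuration: events
     * land on the port's event queue at the highest priority; queue_id and
     * sched_type are overwritten per port in the loop above.
     */
    struct rte_event_eth_rx_adapter_queue_conf eth_q_conf = {
        .rx_queue_flags = 0,
        .ev = {
            .queue_id = 0,
            .priority = RTE_EVENT_DEV_PRIORITY_HIGHEST,
            .sched_type = RTE_SCHED_TYPE_ATOMIC,
        },
    };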
For the S/W scheduler, instead of dedicated adapters, common Rx/Tx adapters are
configured and shared among all the Ethernet ports. The DPDK library also needs
service cores to run the internal services for the Rx/Tx adapters. The
application gets the service id for the Rx/Tx adapters and, after successful
setup, runs the services on dedicated service cores.

.. code-block:: c

    for (i = 0; i < evt_rsrc->rx_adptr.nb_rx_adptr; i++) {
        ret = rte_event_eth_rx_adapter_caps_get(evt_rsrc->event_d_id,
                evt_rsrc->rx_adptr.rx_adptr[i], &caps);
        if (ret < 0)
            rte_panic("Failed to get Rx adapter[%d] caps\n",
                    evt_rsrc->rx_adptr.rx_adptr[i]);
        ret = rte_event_eth_rx_adapter_service_id_get(
                evt_rsrc->event_d_id,
                &service_id);
        if (ret != -ESRCH && ret != 0)
            rte_panic("Error in starting Rx adapter[%d] service\n",
                    evt_rsrc->rx_adptr.rx_adptr[i]);
        l2fwd_event_service_enable(service_id);
    }

    for (i = 0; i < evt_rsrc->tx_adptr.nb_tx_adptr; i++) {
        ret = rte_event_eth_tx_adapter_caps_get(evt_rsrc->event_d_id,
                evt_rsrc->tx_adptr.tx_adptr[i], &caps);
        if (ret < 0)
            rte_panic("Failed to get Tx adapter[%d] caps\n",
                    evt_rsrc->tx_adptr.tx_adptr[i]);
        ret = rte_event_eth_tx_adapter_service_id_get(
                evt_rsrc->event_d_id,
                &service_id);
        if (ret != -ESRCH && ret != 0)
            rte_panic("Error in starting Tx adapter[%d] service\n",
                    evt_rsrc->tx_adptr.tx_adptr[i]);
        l2fwd_event_service_enable(service_id);
    }

.. _l2_fwd_event_app_rx_tx_packets:

Receive, Process and Transmit Packets
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In the **l2fwd_main_loop()** function, the main task is to read ingress packets from
the RX queues. This is done using the following code:

.. code-block:: c

    /*
     * Read packet from RX queues
     */

    for (i = 0; i < qconf->n_rx_port; i++) {
        portid = qconf->rx_port_list[i];
        nb_rx = rte_eth_rx_burst((uint8_t) portid, 0, pkts_burst,
                MAX_PKT_BURST);

        for (j = 0; j < nb_rx; j++) {
            m = pkts_burst[j];
            rte_prefetch0(rte_pktmbuf_mtod(m, void *));
            l2fwd_simple_forward(m, portid);
        }
    }

Packets are read in a burst of size MAX_PKT_BURST. The rte_eth_rx_burst()
function writes the mbuf pointers in a local table and returns the number of
available mbufs in the table.

Then, each mbuf in the table is processed by the l2fwd_simple_forward()
function. The processing is very simple: determine the TX port from the RX port,
then replace the source and destination MAC addresses if MAC address updating
is enabled.

During the initialization process, a static array of destination ports
(l2fwd_dst_ports[]) is filled such that for each source port, a destination port
is assigned that is either the next or previous enabled port from the portmask.
If the number of ports in the portmask is odd, packets from the last port are
forwarded to the first port, i.e. if portmask=0x07, forwarding takes place
like p0--->p1, p1--->p2, p2--->p0.

Also, to optimize the enqueue operation, l2fwd_simple_forward() buffers incoming
mbufs, up to MAX_PKT_BURST. Once this limit is reached, all buffered packets are
transmitted to their destination ports.

.. code-block:: c

    static void
    l2fwd_simple_forward(struct rte_mbuf *m, uint32_t portid)
    {
        uint32_t dst_port;
        int32_t sent;
        struct rte_eth_dev_tx_buffer *buffer;

        dst_port = l2fwd_dst_ports[portid];

        if (mac_updating)
            l2fwd_mac_updating(m, dst_port);

        buffer = tx_buffer[dst_port];
        sent = rte_eth_tx_buffer(dst_port, 0, buffer, m);
        if (sent)
            port_statistics[dst_port].tx += sent;
    }
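The l2fwd_mac_updating() call above performs the address rewrite described in
the Overview. A sketch of such a helper is shown below; l2fwd_ports_eth_addr[]
is assumed to hold each port's MAC address as read at initialization time, and
the rte_ether_hdr field names follow this DPDK release.

.. code-block:: c

    #include <rte_ether.h>
    #include <rte_mbuf.h>

    /* Sketch of the MAC rewrite described in the Overview:
     * destination MAC becomes 02:00:00:00:00:TX_PORT_ID and the
     * source MAC becomes the TX port's own address.
     */
    static void
    l2fwd_mac_updating(struct rte_mbuf *m, uint32_t dest_portid)
    {
        struct rte_ether_hdr *eth;
        void *tmp;

        eth = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);

        /* 02:00:00:00:00:xx, xx = TX port id */
        tmp = &eth->d_addr.addr_bytes[0];
        *((uint64_t *)tmp) = 0x000000000002 + ((uint64_t)dest_portid << 40);

        /* Source address: MAC address of the TX port */
        rte_ether_addr_copy(&l2fwd_ports_eth_addr[dest_portid], &eth->s_addr);
    }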
For this test application, the processing is exactly the same for all packets
arriving on the same RX port. Therefore, it would have been possible to call
the rte_eth_tx_buffer() function directly from the main loop to send all the
received packets on the same TX port, using the burst-oriented send function,
which is more efficient.

However, in real-life applications (such as L3 routing),
packet N is not necessarily forwarded on the same port as packet N-1.
The application is implemented to illustrate that, so the same approach can be
reused in a more complex application.

To ensure that no packets remain in the tables, each lcore does a draining of
the TX queues in its main loop. This technique introduces some latency when
there are not many packets to send, but it improves performance:

.. code-block:: c

    cur_tsc = rte_rdtsc();

    /*
     * TX burst queue drain
     */
    diff_tsc = cur_tsc - prev_tsc;
    if (unlikely(diff_tsc > drain_tsc)) {
        for (i = 0; i < qconf->n_rx_port; i++) {
            portid = l2fwd_dst_ports[qconf->rx_port_list[i]];
            buffer = tx_buffer[portid];
            sent = rte_eth_tx_buffer_flush(portid, 0,
                    buffer);
            if (sent)
                port_statistics[portid].tx += sent;
        }

        /* if timer is enabled */
        if (timer_period > 0) {
            /* advance the timer */
            timer_tsc += diff_tsc;

            /* if timer has reached its timeout */
            if (unlikely(timer_tsc >= timer_period)) {
                /* do this only on master core */
                if (lcore_id == rte_get_master_lcore()) {
                    print_stats();
                    /* reset the timer */
                    timer_tsc = 0;
                }
            }
        }

        prev_tsc = cur_tsc;
    }

In the **l2fwd_event_loop()** function, the main task is to read ingress
packets from the event ports. This is done using the following code:

.. code-block:: c

    /* Read packet from eventdev */
    nb_rx = rte_event_dequeue_burst(event_d_id, event_p_id,
            events, deq_len, 0);
    if (nb_rx == 0) {
        rte_pause();
        continue;
    }

    for (i = 0; i < nb_rx; i++) {
        mbuf[i] = events[i].mbuf;
        rte_prefetch0(rte_pktmbuf_mtod(mbuf[i], void *));
    }

Before reading packets, deq_len is fetched so that the burst size does not
exceed the dequeue depth allowed by the event device.
The rte_event_dequeue_burst() function writes the mbuf pointers in a local table
and returns the number of available mbufs in the table.

Then, each mbuf in the table is processed by the l2fwd_eventdev_forward()
function. The processing is very simple: determine the TX port from the RX port,
then replace the source and destination MAC addresses if MAC address updating
is enabled. The destination port is looked up in the same l2fwd_dst_ports[]
array described above.

Unlike l2fwd_simple_forward(), l2fwd_eventdev_forward() does not buffer incoming
mbufs. Packets are forwarded to the destination ports via the Tx adapter, or via
the generic eventdev enqueue API, depending on whether the H/W or the S/W
scheduler is used.

.. code-block:: c

    nb_tx = rte_event_eth_tx_adapter_enqueue(event_d_id, port_id, ev,
            nb_rx);
    while (nb_tx < nb_rx && !rsrc->force_quit)
        nb_tx += rte_event_eth_tx_adapter_enqueue(
                event_d_id, port_id,
                ev + nb_tx, nb_rx - nb_tx);
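For reference, the S/W scheduler path mentioned above can be sketched as
follows: events are redirected to the extra event queue linked to the Tx
adapter port and forwarded with the generic enqueue API. Here tx_q_id is
assumed to hold the id of that last event queue; this is an illustration, not
the sample's exact code.

.. code-block:: c

    /* Illustrative S/W scheduler path: forward events to the extra event
     * queue that is linked to the Tx adapter port, using the generic
     * eventdev enqueue API instead of rte_event_eth_tx_adapter_enqueue().
     */
    for (i = 0; i < nb_rx; i++) {
        events[i].queue_id = tx_q_id;
        events[i].op = RTE_EVENT_OP_FORWARD;
    }

    nb_tx = rte_event_enqueue_burst(event_d_id, event_p_id, events, nb_rx);
    while (nb_tx < nb_rx && !rsrc->force_quit)
        nb_tx += rte_event_enqueue_burst(event_d_id, event_p_id,
                events + nb_tx, nb_rx - nb_tx);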