..  BSD LICENSE
    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

.. _multi_process_app:

Multi-process Sample Application
================================

This chapter describes the example applications for multi-processing that are included in the DPDK.

Example Applications
--------------------

Building the Sample Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The multi-process example applications are built in the same way as other sample applications,
as documented in the *DPDK Getting Started Guide*.

To compile the sample application, see :doc:`compiling`.

The applications are located in the ``multi_process`` sub-directory.

.. note::

    If only a specific multi-process application needs to be built,
    the final make command can be run in that application's directory,
    rather than at the top-level multi-process directory.

Basic Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The examples/simple_mp folder in the DPDK release contains a basic example application that demonstrates how
two DPDK processes can work together using queues and memory pools to share information.

Running the Application
^^^^^^^^^^^^^^^^^^^^^^^

To run the application, start one copy of the simple_mp binary in one terminal,
passing at least two cores in the coremask/corelist, as follows:

.. code-block:: console

    ./build/simple_mp -l 0-1 -n 4 --proc-type=primary

For the first DPDK process run, the proc-type flag can be omitted or set to auto,
since all DPDK processes will default to being a primary instance,
meaning they have control over the hugepage shared memory regions.
The process should start successfully and display a command prompt as follows:

.. code-block:: console

    $ ./build/simple_mp -l 0-1 -n 4 --proc-type=primary
    EAL: coremask set to 3
    EAL: Detected lcore 0 on socket 0
    EAL: Detected lcore 1 on socket 0
    EAL: Detected lcore 2 on socket 0
    EAL: Detected lcore 3 on socket 0
    ...

    EAL: Requesting 2 pages of size 1073741824
    EAL: Requesting 768 pages of size 2097152
    EAL: Ask a virtual area of 0x40000000 bytes
    EAL: Virtual area found at 0x7ff200000000 (size = 0x40000000)
    ...

    EAL: check igb_uio module
    EAL: check module finished
    EAL: Master core 0 is ready (tid=54e41820)
    EAL: Core 1 is ready (tid=53b32700)

    Starting core 1

    simple_mp >

To run the secondary process to communicate with the primary process,
again run the same binary setting at least two cores in the coremask/corelist:

.. code-block:: console

    ./build/simple_mp -l 2-3 -n 4 --proc-type=secondary

When running a secondary process such as that shown above, the proc-type parameter can again be specified as auto.
However, omitting the parameter altogether will cause the process to try and start as a primary rather than secondary process.

Once the process type is specified correctly,
the process starts up, displaying largely similar status messages to the primary instance as it initializes.
Once again, you will be presented with a command prompt.

Once both processes are running, messages can be sent between them using the send command.
At any stage, either process can be terminated using the quit command.

.. code-block:: console

    EAL: Master core 10 is ready (tid=b5f89820)           EAL: Master core 8 is ready (tid=864a3820)
    EAL: Core 11 is ready (tid=84ffe700)                  EAL: Core 9 is ready (tid=85995700)
    Starting core 11                                      Starting core 9
    simple_mp > send hello_secondary                      simple_mp > core 9: Received 'hello_secondary'
    simple_mp > core 11: Received 'hello_primary'         simple_mp > send hello_primary
    simple_mp > quit                                      simple_mp > quit

.. note::

    If the primary instance is terminated, the secondary instance must also be shut down and restarted after the primary.
    This is necessary because the primary instance will clear and reset the shared memory regions on startup,
    invalidating the secondary process's pointers.
    The secondary process can be stopped and restarted without affecting the primary process.

How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^

The core of this example application is based on using two queues and a single memory pool in shared memory.
These three objects are created at startup by the primary process,
since the secondary process cannot create objects in memory as it cannot reserve memory zones,
and the secondary process then uses lookup functions to attach to these objects as it starts up.

.. code-block:: c

    if (rte_eal_process_type() == RTE_PROC_PRIMARY){
        send_ring = rte_ring_create(_PRI_2_SEC, ring_size, SOCKET0, flags);
        recv_ring = rte_ring_create(_SEC_2_PRI, ring_size, SOCKET0, flags);
        message_pool = rte_mempool_create(_MSG_POOL, pool_size, string_size, pool_cache, priv_data_sz,
                NULL, NULL, NULL, NULL, SOCKET0, flags);
    } else {
        recv_ring = rte_ring_lookup(_PRI_2_SEC);
        send_ring = rte_ring_lookup(_SEC_2_PRI);
        message_pool = rte_mempool_lookup(_MSG_POOL);
    }

Note, however, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.

Once the rings and memory pools are all available in both the primary and secondary processes,
the application simply dedicates two threads to sending and receiving messages respectively.
The receive thread simply dequeues any messages on the receive ring, prints them,
and frees the buffer space used by the messages back to the memory pool.
The send thread makes use of the command-prompt library to interactively request user input for messages to send.
Once a send command is issued by the user, a buffer is allocated from the memory pool, filled in with the message contents,
then enqueued on the appropriate rte_ring.
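
The dequeue-print-free pattern described above can be pictured with a minimal sketch.
This is not the exact code of the simple_mp example, just an illustration that reuses the ring and
memory pool handles obtained in the snippet earlier:

.. code-block:: c

    #include <stdio.h>
    #include <rte_ring.h>
    #include <rte_mempool.h>

    /* Illustrative receive loop: drain the receive ring, print each message
     * and return its buffer to the shared message pool. */
    static void
    recv_loop(struct rte_ring *recv_ring, struct rte_mempool *message_pool)
    {
        void *msg;

        for (;;) {
            /* rte_ring_dequeue() returns 0 on success, negative if the ring is empty */
            if (rte_ring_dequeue(recv_ring, &msg) < 0)
                continue;   /* nothing to do; a real loop might pause briefly here */

            printf("Received '%s'\n", (char *)msg);

            /* hand the buffer back to the pool so the sender can reuse it */
            rte_mempool_put(message_pool, msg);
        }
    }

The send side is the mirror image: rte_mempool_get() to obtain a buffer, fill it with the message text,
then rte_ring_enqueue() it on the send ring.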

Symmetric Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The second example of DPDK multi-process support demonstrates how a set of processes can run in parallel,
with each process performing the same set of packet-processing operations.
(Since each process is identical in functionality to the others,
we refer to this as symmetric multi-processing, to differentiate it from asymmetric multi-processing -
such as a client-server mode of operation seen in the next example,
where different processes perform different tasks, yet co-operate to form a packet-processing system.)
The following diagram shows the data-flow through the application, using two processes.

.. _figure_sym_multi_proc_app:

.. figure:: img/sym_multi_proc_app.*

   Example Data Flow in a Symmetric Multi-process Application


As the diagram shows, each process reads packets from each of the network ports in use.
RSS is used to distribute incoming packets on each port to different hardware RX queues.
Each process reads a different RX queue on each port and so does not contend with any other process for that queue access.
Similarly, each process writes outgoing packets to a different TX queue on each port.

Running the Application
^^^^^^^^^^^^^^^^^^^^^^^

As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance,
though with a number of other application-specific parameters also provided after the EAL arguments.
These additional parameters are:

*   -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
    For example: -p 3 to use ports 0 and 1 only.

*   --num-procs <N>, where N is the total number of symmetric_mp instances that will be run side-by-side to perform packet processing.
    This parameter is used to configure the appropriate number of receive queues on each network port.

*   --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of processes, specified above).
    This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.

The secondary symmetric_mp instances must also have these parameters specified,
and the first two must be the same as those passed to the primary instance, or errors result.

For example, to run a set of four symmetric_mp instances, running on lcores 1-4,
all performing level-2 forwarding of packets between ports 0 and 1,
the following commands can be used (assuming run as root):

.. code-block:: console

    # ./build/symmetric_mp -l 1 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
    # ./build/symmetric_mp -l 2 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
    # ./build/symmetric_mp -l 3 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
    # ./build/symmetric_mp -l 4 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3

.. note::

    In the above example, the process type can be explicitly specified as primary or secondary, rather than auto.
    When using auto, the first process run creates all the memory structures needed for all processes -
    irrespective of whether it has a proc-id of 0, 1, 2 or 3.

.. note::

    For the symmetric multi-process example, since all processes work in the same manner,
    once the hugepage shared memory and the network ports are initialized,
    it is not necessary to restart all processes if the primary instance dies.
    Instead, that process can be restarted as a secondary,
    by explicitly setting the proc-type to secondary on the command line.
    (All subsequent instances launched will also need this explicitly specified,
    as auto-detection will detect no primary processes running and therefore attempt to re-initialize shared memory.)

How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^

The initialization calls in both the primary and secondary instances are the same for the most part,
calling the rte_eal_init(), 1 G and 10 G driver initialization and then rte_pci_probe() functions.
Thereafter, the initialization done depends on whether the process is configured as a primary or secondary instance.

In the primary instance, a memory pool is created for the packet mbufs and the network ports to be used are initialized -
the number of RX and TX queues per port being determined by the num-procs parameter passed on the command-line.
The structures for the initialized network ports are stored in shared memory and
therefore will be accessible by the secondary process as it initializes.

.. code-block:: c

    if (num_ports & 1)
        rte_exit(EXIT_FAILURE, "Application must use an even number of ports\n");

    for(i = 0; i < num_ports; i++){
        if(proc_type == RTE_PROC_PRIMARY)
            if (smp_port_init(ports[i], mp, (uint16_t)num_procs) < 0)
                rte_exit(EXIT_FAILURE, "Error initializing ports\n");
    }

In the secondary instance, rather than initializing the network ports, the port information exported by the primary process is used,
giving the secondary process access to the hardware and software rings for each network port.
Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:

.. code-block:: c

    mp = (proc_type == RTE_PROC_SECONDARY) ?
            rte_mempool_lookup(_SMP_MBUF_POOL) :
            rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... )

Once this initialization is complete, the main loop of each process, both primary and secondary,
is exactly the same - each process reads from each port using the queue corresponding to its proc-id parameter,
and writes to the corresponding transmit queue on the output port.
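
That per-process main loop can be sketched as follows. This is an illustrative fragment rather than the
exact code of the symmetric_mp example: the proc_id, num_ports and ports[] variables are assumed to have been
set up during initialization, and adjacent entries of ports[] are assumed to be the forwarding pairs.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32

    /* Sketch of the symmetric main loop: every process runs exactly this,
     * differing only in the value of proc_id. */
    static void
    lcore_main_loop(uint16_t proc_id, uint16_t num_ports, const uint16_t *ports)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t p;

        for (;;) {
            for (p = 0; p < num_ports; p++) {
                /* each process polls only its own RX queue (== proc_id) on every port */
                uint16_t rx = rte_eth_rx_burst(ports[p], proc_id, bufs, BURST_SIZE);
                if (rx == 0)
                    continue;

                /* level-2 forwarding: send out on the paired port (0<->1, 2<->3, ...),
                 * again using the TX queue reserved for this process */
                uint16_t dst = ports[p ^ 1];
                uint16_t tx = rte_eth_tx_burst(dst, proc_id, bufs, rx);

                /* free any packets that could not be transmitted */
                while (tx < rx)
                    rte_pktmbuf_free(bufs[tx++]);
            }
        }
    }

Because RX and TX queue indices are derived from proc-id, no locking between the processes is needed on the data path.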

Client-Server Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The third example multi-process application included with the DPDK shows how one can
use a client-server type multi-process design to do packet processing.
In this example, a single server process performs the packet reception from the ports being used and
distributes these packets using round-robin ordering among a set of client processes,
which perform the actual packet processing.
In this case, the client applications just perform level-2 forwarding of packets by sending each packet out on a different network port.

The following diagram shows the data-flow through the application, using two client processes.

.. _figure_client_svr_sym_multi_proc_app:

.. figure:: img/client_svr_sym_multi_proc_app.*

   Example Data Flow in a Client-Server Symmetric Multi-process Application


Running the Application
^^^^^^^^^^^^^^^^^^^^^^^

The server process must be run initially as the primary process to set up all memory structures for use by the clients.
In addition to the EAL parameters, the application-specific parameters are:

*   -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
    For example: -p 3 to use ports 0 and 1 only.

*   -n <num-clients>, where the num-clients parameter is the number of client processes that will process the packets received
    by the server application.

.. note::

    In the server process, a single thread, the master thread, that is, the lowest numbered lcore in the coremask/corelist, performs all packet I/O.
    If a coremask/corelist is specified with more than a single lcore bit set in it,
    an additional lcore will be used for a thread to periodically print packet count statistics.

Since the server application stores configuration data in shared memory, including the network ports to be used,
the only application parameter needed by a client process is its client instance ID.
Therefore, to run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
the following commands could be used:

.. code-block:: console

    # ./mp_server/build/mp_server -l 1-2 -n 4 -- -p 3 -n 2
    # ./mp_client/build/mp_client -l 3 -n 4 --proc-type=auto -- -n 0
    # ./mp_client/build/mp_client -l 4 -n 4 --proc-type=auto -- -n 1

.. note::

    If the server application dies and needs to be restarted, all client applications also need to be restarted,
    as there is no support in the server application for it to run as a secondary process.
    Any client processes that need restarting can be restarted without affecting the server process.

How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^

The server process performs the network port and data structure initialization much as the symmetric multi-process application does when run as primary.
One additional enhancement in this sample application is that the server process stores its port configuration data in a memory zone in hugepage shared memory.
This eliminates the need for the client processes to have the portmask parameter passed into them on the command line,
as is done for the symmetric multi-process application, and therefore eliminates mismatched parameters as a potential source of errors.

In the same way that the server process is designed to be run as a primary process instance only,
the client processes are designed to be run as secondary instances only.
They have no code to attempt to create shared memory objects.
Instead, handles to all needed rings and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup().
The network ports for use by the processes are obtained by loading the network port drivers and probing the PCI bus,
which will, as in the symmetric multi-process example,
automatically get access to the network ports using the settings already configured by the primary/server process.

Once all applications are initialized, the server operates by reading packets from each network port in turn and
distributing those packets to the client queues (software rings, one for each client process) in round-robin order.
On the client side, the packets are read from the rings in bursts that are as large as possible, then routed out to a different network port.
The routing used is very simple. All packets received on the first NIC port are transmitted back out on the second port and vice versa.
Similarly, packets are routed between the 3rd and 4th network ports and so on.
The sending of packets is done by writing the packets directly to the network ports; they are not transferred back via the server process.

In both the server and the client processes, outgoing packets are buffered before being sent,
so as to allow the sending of multiple packets in a single burst to improve efficiency.
For example, the client process will buffer packets to send,
until either the buffer is full or until we receive no further packets from the server.
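
A condensed sketch of the server side of this flow is shown below. It is not the actual mp_server code:
the client_rings[] array, the per-packet rotation policy and the drop-on-full behaviour are illustrative,
assuming one software ring per client as described above.

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>

    #define SERVER_BURST 32

    /* Illustrative server loop: receive from each port in turn and hand the
     * packets to the clients' software rings in round-robin order. */
    static void
    server_distribute(struct rte_ring **client_rings, unsigned int num_clients,
                      const uint16_t *ports, uint16_t num_ports)
    {
        struct rte_mbuf *bufs[SERVER_BURST];
        unsigned int next_client = 0;
        uint16_t p;

        for (;;) {
            for (p = 0; p < num_ports; p++) {
                uint16_t i, rx = rte_eth_rx_burst(ports[p], 0, bufs, SERVER_BURST);

                for (i = 0; i < rx; i++) {
                    /* round-robin: each packet goes to the next client's ring */
                    if (rte_ring_enqueue(client_rings[next_client], bufs[i]) < 0)
                        rte_pktmbuf_free(bufs[i]);   /* drop if the client ring is full */
                    next_client = (next_client + 1) % num_clients;
                }
            }
        }
    }

Each client then dequeues bursts from its own ring, buffers them per output port,
and flushes a buffer with rte_eth_tx_burst() when it fills up or when the ring runs dry.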

Master-slave Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The fourth example of DPDK multi-process support demonstrates a master-slave model that
provides the capability of application recovery if a slave process crashes or meets unexpected conditions.
In addition, it also demonstrates the floating process,
which can run among different cores in contrast to the traditional way of binding a process/thread to a specific CPU core,
using the local cache mechanism of mempool structures.

This application performs the same functionality as the L2 Forwarding sample application,
therefore this chapter does not cover that part but describes functionality that is introduced in this multi-process example only.
Please refer to :doc:`l2_forward_real_virtual` for more information.

Unlike previous examples where all processes are started from the command line with input arguments, in this example,
only one process is spawned from the command line and that process creates other processes.
The following section describes this in more detail.

Master-slave Process Models
^^^^^^^^^^^^^^^^^^^^^^^^^^^

The process spawned from the command line is called the *master process* in this document.
A process created by the master is called a *slave process*.
The application has only one master process, but could have multiple slave processes.

Once the master process begins to run, it tries to initialize all the resources such as
memory, CPU cores, driver, ports, and so on, as the other examples do.
Thereafter, it creates slave processes, as shown in the following figure.

.. _figure_master_slave_proc:

.. figure:: img/master_slave_proc.*

   Master-slave Process Workflow


The master process calls the rte_eal_mp_remote_launch() EAL function to launch an application function for each pinned thread through the pipe.
Then, it waits to check if any slave processes have exited.
If so, the process tries to re-initialize the resources that belong to that slave and launch them in the pinned thread entry again.
The following section describes the recovery procedures in more detail.

For each pinned thread in EAL, after reading any data from the pipe, it tries to call the function that the application specified.
In this master-specified function, a fork() call creates a slave process that performs the L2 forwarding task.
Then, the function waits until the slave exits, is killed or crashes. Thereafter, it notifies the master of this event and returns.
Finally, the EAL pinned thread waits until the new function is launched.
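
The fork-and-wait pattern just described can be summarized with a short sketch. This is not the example's
actual code: the notify_master_slave_exit() helper is hypothetical, standing in for whatever notification
mechanism the application registers, and error handling is omitted.

.. code-block:: c

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>

    extern int l2fwd_main_loop(void);                      /* the slave's packet-forwarding work */
    extern void notify_master_slave_exit(unsigned lcore);  /* hypothetical notification helper */

    /* Sketch of the function run on each EAL pinned thread: fork a slave to do
     * the real work, then wait for it and report its exit back to the master. */
    static int
    slave_launcher(__attribute__((unused)) void *arg)
    {
        unsigned lcore_id = rte_lcore_id();
        pid_t pid = fork();

        if (pid == 0) {                 /* child: becomes the slave process */
            l2fwd_main_loop();
            _exit(0);
        }

        /* parent (the pinned thread): block until the slave exits or crashes */
        waitpid(pid, NULL, 0);
        notify_master_slave_exit(lcore_id);
        return 0;
    }

    /* The master would launch this on every slave lcore, for example:
     *     rte_eal_mp_remote_launch(slave_launcher, NULL, SKIP_MASTER);
     */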

After discussing the master-slave model, it is necessary to mention another issue, global and static variables.

For multiple-thread cases, all global and static variables have only one copy and they can be accessed by any thread if applicable.
So, they can be used to sync or share data among threads.

In the previous examples, each process has separate global and static variables in memory and they are independent of each other.
If it is necessary to share the knowledge, some communication mechanism should be deployed, such as memzone, ring, shared memory, and so on.
The global or static variables are not a valid approach to share data among processes.
For variables in this example, on the one hand, the slave process inherits all the knowledge of these variables after being created by the master.
On the other hand, other processes cannot know if one or more processes modifies them after slave creation since that
is the nature of a multiple process address space.
But this does not mean that these variables cannot be used to share or sync data; it depends on the use case.
The following are the possible use cases:

#.  The master process starts and initializes a variable and it will never be changed after the slave processes are created.
    This case is OK.

#.  After the slave processes are created, the master or slave cores need to change a variable, but other processes do not need to know the change.
    This case is also OK.

#.  After the slave processes are created, the master or a slave needs to change a variable.
    In the meantime, one or more other processes need to be aware of the change.
    In this case, global and static variables cannot be used to share knowledge. Another communication mechanism is needed.
    A simple approach without lock protection can be a heap buffer allocated by rte_malloc or a memzone,
    as illustrated by the sketch after this list.
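
The third case can be illustrated with a minimal sketch, assuming the buffer is allocated from hugepage memory by the
master before the slaves are forked, so that all processes reference the same underlying memory.
The counter name and layout are purely illustrative, not part of the example itself.

.. code-block:: c

    #include <rte_malloc.h>
    #include <rte_atomic.h>

    /* A counter in a heap buffer allocated with rte_zmalloc(). Both the master
     * and the forked slaves see the same hugepage-backed memory, unlike an
     * ordinary global variable whose copies diverge after fork(). */
    static rte_atomic64_t *shared_counter;

    static int
    init_shared_counter(void)
    {
        /* called by the master before any slave is created */
        shared_counter = rte_zmalloc("shared_counter", sizeof(*shared_counter), 0);
        if (shared_counter == NULL)
            return -1;
        rte_atomic64_init(shared_counter);
        return 0;
    }

    /* slave side: count forwarded packets */
    static inline void
    count_packet(void)
    {
        rte_atomic64_inc(shared_counter);
    }

    /* master side: read the value when printing statistics */
    static inline int64_t
    read_packet_count(void)
    {
        return rte_atomic64_read(shared_counter);
    }

The l2fwd_fork example uses exactly this idea for its port statistics, as shown later in this chapter.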

Slave Process Recovery Mechanism
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Before talking about the recovery mechanism, it is necessary to know what is needed before a new slave instance can run if a previous one exited.

When a slave process exits, the system returns all the resources allocated for this process automatically.
However, this does not include the resources that were allocated by the DPDK. All the hardware resources are shared among the processes,
which include memzone, mempool, ring, a heap buffer allocated by the rte_malloc library, and so on.
If the new instance runs and the allocated resource is not returned, either resource allocation fails or the hardware resource is lost forever.

When a slave process runs, it may have dependencies on other processes.
They could have execution sequence orders; they could share the ring to communicate; they could share the same port for reception and forwarding;
they could use lock structures to do exclusive access in some critical path.
What happens to the dependent process(es) if the peer leaves?
The consequences vary since the dependency cases are complex.
It depends on what the processes shared.
However, it is necessary to notify the peer(s) if one slave exited.
Then, the peer(s) will be aware of that and wait until the new instance begins to run.

Therefore, to provide the capability to resume the new slave instance if the previous one exited, it is necessary to provide several mechanisms:

#.  Keep a resource list for each slave process.
    Before a slave process runs, the master should prepare a resource list.
    After it exits, the master could either delete the allocated resources and create new ones,
    or re-initialize those for use by the new instance.

#.  Set up a notification mechanism for slave process exit cases. After the specific slave leaves,
    the master should be notified and then help to create a new instance.
    This mechanism is provided in Section `Master-slave Process Models`_.

#.  Use a synchronization mechanism among dependent processes.
    The master should have the capability to stop or kill slave processes that have a dependency on the one that has exited.
    Then, after the new instance of the exited slave process begins to run, the dependent ones can resume or run from the start.
    The example sends a STOP command to the slave processes dependent on the exited one, and they then exit.
    Thereafter, the master creates new instances for the exited slave processes.

The following diagram describes slave process recovery.

.. _figure_slave_proc_recov:

.. figure:: img/slave_proc_recov.*

   Slave Process Recovery Process Flow


Floating Process Support
^^^^^^^^^^^^^^^^^^^^^^^^

When the DPDK application runs, there is always a -c option passed in to indicate the cores that are enabled.
Then, the DPDK creates a thread for each enabled core.
By doing so, it creates a 1:1 mapping between the enabled core and each thread.
The enabled core always has an ID, therefore, each thread has a unique core ID in the DPDK execution environment.
With the ID, each thread can easily access the structures or resources exclusively belonging to it without using function parameter passing.
It can easily use the rte_lcore_id() function to get the value in every function that is called.

For threads/processes not created in that way, either pinned to a core or not, they will not own a unique ID and the
rte_lcore_id() function will not work in the correct way.
However, sometimes these threads/processes still need the unique ID mechanism to do easy access on structures or resources.
For example, the DPDK mempool library provides a local cache mechanism
(refer to :ref:`mempool_local_cache`)
for fast element allocation and freeing.
If using a non-unique ID or a fake one,
a race condition occurs if two or more threads/processes with the same core ID try to use the local cache.

Therefore, unused core IDs from the core mask passed with the -c option are used to organize the core ID allocation array.
Once the floating process is spawned, it tries to allocate a unique core ID from the array and release it on exit.
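
How such an allocation array might work is sketched below. This is not the example's flib_assign_lcore_id()
implementation, just an illustration under the assumption that every core ID not enabled in the original core mask
is a candidate, claimed atomically so two floating processes can never take the same ID.

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_atomic.h>

    /* Illustrative core-ID allocator for floating processes. The id_used[]
     * array is assumed to live in shared memory set up by the master, with
     * one slot per possible lcore, all initialized to zero. */
    static rte_atomic32_t *id_used;   /* RTE_MAX_LCORE entries */

    static int
    assign_floating_lcore_id(void)
    {
        unsigned i;

        for (i = 0; i < RTE_MAX_LCORE; i++) {
            /* skip IDs that belong to the pinned threads from the -c core mask */
            if (rte_lcore_is_enabled(i))
                continue;
            /* atomically claim a free slot; returns non-zero only for one caller */
            if (rte_atomic32_test_and_set(&id_used[i]))
                return (int)i;
        }
        return -1;   /* no spare core ID left */
    }

    static void
    release_floating_lcore_id(unsigned id)
    {
        rte_atomic32_clear(&id_used[id]);
    }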

A natural way to spawn a floating process is to use the fork() function and allocate a unique core ID from the unused core ID array.
However, it is necessary to write new code to provide a notification mechanism for slave exit
and make sure the process recovery mechanism can work with it.

To avoid producing redundant code, the master-slave process model is still used to spawn floating processes,
and the affinity to specific cores is then cancelled.
Besides that, the core ID assigned by the DPDK to the spawning thread (which has a 1:1 mapping with the core mask) is cleared.
Thereafter, a new core ID is obtained from the unused core ID allocation array.

Run the Application
^^^^^^^^^^^^^^^^^^^

This example has a command line similar to the L2 Forwarding sample application with a few differences.

To run the application, start one copy of the l2fwd_fork binary in one terminal.
Unlike the L2 Forwarding example,
this example requires at least three cores since the master process will wait and be accountable for slave process recovery.
The command is as follows:

.. code-block:: console

    #./build/l2fwd_fork -l 2-4 -n 4 -- -p 3 -f

This example provides another -f option to specify the use of floating processes.
If not specified, the example will use a pinned process to perform the L2 forwarding task.

To verify the recovery mechanism, proceed as follows: First, check the PID of the slave processes:

.. code-block:: console

    #ps -fe | grep l2fwd_fork
    root 5136 4843 29 11:11 pts/1 00:00:05 ./build/l2fwd_fork
    root 5145 5136 98 11:11 pts/1 00:00:11 ./build/l2fwd_fork
    root 5146 5136 98 11:11 pts/1 00:00:11 ./build/l2fwd_fork

Then, kill one of the slaves:

.. code-block:: console

    #kill -9 5145

After 1 or 2 seconds, check whether the slave has resumed:

.. code-block:: console

    #ps -fe | grep l2fwd_fork
    root 5136 4843  3 11:11 pts/1 00:00:06 ./build/l2fwd_fork
    root 5247 5136 99 11:14 pts/1 00:00:01 ./build/l2fwd_fork
    root 5248 5136 99 11:14 pts/1 00:00:01 ./build/l2fwd_fork

You can also monitor the traffic generator statistics to see whether the slave processes have resumed.

Explanation
^^^^^^^^^^^

As described in previous sections,
not all global and static variables need to change to be accessible in multiple processes;
it depends on how they are used.
In this example,
the statistics of dropped/forwarded/received packet counts need to be updated by the slave processes,
and the master needs to see the updates and print them out.
So, it needs to allocate a heap buffer using rte_zmalloc.
In addition, if the -f option is specified,
an array is needed to store the allocated core ID for the floating process so that the master can return it
after a slave has exited accidentally.

.. code-block:: c

    static int
    l2fwd_malloc_shared_struct(void)
    {
        port_statistics = rte_zmalloc("port_stat", sizeof(struct l2fwd_port_statistics) * RTE_MAX_ETHPORTS, 0);

        if (port_statistics == NULL)
            return -1;

        /* allocate mapping_id array */

        if (float_proc) {
            int i;

            mapping_id = rte_malloc("mapping_id", sizeof(unsigned) * RTE_MAX_LCORE, 0);
            if (mapping_id == NULL)
                return -1;

            for (i = 0; i < RTE_MAX_LCORE; i++)
                mapping_id[i] = INVALID_MAPPING_ID;
        }
        return 0;
    }

For each slave process, packets are received from one port and forwarded to another port that another slave is operating on.
If the other slave exits accidentally, the port it is operating on may not work normally,
so the first slave cannot forward packets to that port.
There is a dependency on the port in this case. So, the master should recognize the dependency.
The following is the code to detect this dependency:

.. code-block:: c

    for (portid = 0; portid < nb_ports; portid++) {
        /* skip ports that are not enabled */

        if ((l2fwd_enabled_port_mask & (1 << portid)) == 0)
            continue;

        /* Find pair ports' lcores */

        find_lcore = find_pair_lcore = 0;
        pair_port = l2fwd_dst_ports[portid];

        for (i = 0; i < RTE_MAX_LCORE; i++) {
            if (!rte_lcore_is_enabled(i))
                continue;

            for (j = 0; j < lcore_queue_conf[i].n_rx_port; j++) {
                if (lcore_queue_conf[i].rx_port_list[j] == portid) {
                    lcore = i;
                    find_lcore = 1;
                    break;
                }

                if (lcore_queue_conf[i].rx_port_list[j] == pair_port) {
                    pair_lcore = i;
                    find_pair_lcore = 1;
                    break;
                }
            }

            if (find_lcore && find_pair_lcore)
                break;
        }

        if (!find_lcore || !find_pair_lcore)
            rte_exit(EXIT_FAILURE, "Not find port=%d pair\n", portid);

        printf("lcore %u and %u paired\n", lcore, pair_lcore);

        lcore_resource[lcore].pair_id = pair_lcore;
        lcore_resource[pair_lcore].pair_id = lcore;
    }

Before launching the slave process,
it is necessary to set up the communication channel between the master and slave so that
the master can notify the slave if its peer process with the dependency exited.
In addition, the master needs to register a callback function for the case where a specific slave exited.

.. code-block:: c

    for (i = 0; i < RTE_MAX_LCORE; i++) {
        if (lcore_resource[i].enabled) {
            /* Create ring for master and slave communication */

            ret = create_ms_ring(i);
            if (ret != 0)
                rte_exit(EXIT_FAILURE, "Create ring for lcore=%u failed", i);

            if (flib_register_slave_exit_notify(i, slave_exit_cb) != 0)
                rte_exit(EXIT_FAILURE, "Register master_trace_slave_exit failed");
        }
    }

After launching the slave processes, the master waits and prints out the port statistics periodically.
If an event indicating that a slave process has exited is detected,
it sends the STOP command to the peer and waits until it has also exited.
Then, it tries to clean up the execution environment and prepare new resources.
Finally, the new slave instance is launched.

.. code-block:: c

    while (1) {
        sleep(1);
        cur_tsc = rte_rdtsc();
        diff_tsc = cur_tsc - prev_tsc;

        /* if timer is enabled */

        if (timer_period > 0) {
            /* advance the timer */
            timer_tsc += diff_tsc;

            /* if timer has reached its timeout */
            if (unlikely(timer_tsc >= (uint64_t) timer_period)) {
                print_stats();

                /* reset the timer */
                timer_tsc = 0;
            }
        }

        prev_tsc = cur_tsc;

        /* Check any slave need restart or recreate */

        rte_spinlock_lock(&res_lock);

        for (i = 0; i < RTE_MAX_LCORE; i++) {
            struct lcore_resource_struct *res = &lcore_resource[i];
            struct lcore_resource_struct *pair = &lcore_resource[res->pair_id];

            /* If find slave exited, try to reset pair */

            if (res->enabled && res->flags && pair->enabled) {
                if (!pair->flags) {
                    master_sendcmd_with_ack(pair->lcore_id, CMD_STOP);
                    rte_spinlock_unlock(&res_lock);
                    sleep(1);
                    rte_spinlock_lock(&res_lock);
                    if (pair->flags)
                        continue;
                }

                if (reset_pair(res->lcore_id, pair->lcore_id) != 0)
                    rte_exit(EXIT_FAILURE, "failed to reset slave");

                res->flags = 0;
                pair->flags = 0;
            }
        }
        rte_spinlock_unlock(&res_lock);
    }

When the slave process is spawned and starts to run, it checks whether the floating process option is applied.
If so, it clears the affinity to a specific core and also sets the unique core ID to 0.
Then, it tries to allocate a new core ID.
Since the core ID has changed, the resource allocated by the master cannot work,
so it remaps the resource to the new core ID slot.

.. code-block:: c

    static int
    l2fwd_launch_one_lcore(__attribute__((unused)) void *dummy)
    {
        unsigned lcore_id = rte_lcore_id();

        if (float_proc) {
            unsigned flcore_id;

            /* Change it to floating process, also change its lcore_id */

            clear_cpu_affinity();

            RTE_PER_LCORE(_lcore_id) = 0;

            /* Get a lcore_id */

            if (flib_assign_lcore_id() < 0) {
                printf("flib_assign_lcore_id failed\n");
                return -1;
            }

            flcore_id = rte_lcore_id();

            /* Set mapping id, so master can return it after slave exited */

            mapping_id[lcore_id] = flcore_id;
            printf("Org lcore_id = %u, cur lcore_id = %u\n", lcore_id, flcore_id);
            remapping_slave_resource(lcore_id, flcore_id);
        }

        l2fwd_main_loop();

        /* return lcore_id before return */
        if (float_proc) {
            flib_free_lcore_id(rte_lcore_id());
            mapping_id[lcore_id] = INVALID_MAPPING_ID;
        }
        return 0;
    }