..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

.. _multi_process_app:

Multi-process Sample Application
================================

This chapter describes the example applications for multi-processing that are included in the DPDK.

Example Applications
--------------------

Building the Sample Applications
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The multi-process example applications are built in the same way as other sample applications,
and as documented in the *DPDK Getting Started Guide*.


To compile the sample application see :doc:`compiling`.

The applications are located in the ``multi_process`` sub-directory.

.. note::

    If just a specific multi-process application needs to be built,
    the final make command can be run just in that application's directory,
    rather than at the top-level multi-process directory.

Basic Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The examples/simple_mp folder in the DPDK release contains a basic example application to demonstrate how
two DPDK processes can work together using queues and memory pools to share information.

Running the Application
^^^^^^^^^^^^^^^^^^^^^^^

To run the application, start one copy of the simple_mp binary in one terminal,
passing at least two cores in the coremask/corelist, as follows:

.. code-block:: console

    ./build/simple_mp -l 0-1 -n 4 --proc-type=primary

For the first DPDK process run, the proc-type flag can be omitted or set to auto,
since all DPDK processes will default to being a primary instance,
meaning they have control over the hugepage shared memory regions.
The process should start successfully and display a command prompt as follows:

.. code-block:: console

    $ ./build/simple_mp -l 0-1 -n 4 --proc-type=primary
    EAL: coremask set to 3
    EAL: Detected lcore 0 on socket 0
    EAL: Detected lcore 1 on socket 0
    EAL: Detected lcore 2 on socket 0
    EAL: Detected lcore 3 on socket 0
    ...

    EAL: Requesting 2 pages of size 1073741824
    EAL: Requesting 768 pages of size 2097152
    EAL: Ask a virtual area of 0x40000000 bytes
    EAL: Virtual area found at 0x7ff200000000 (size = 0x40000000)
    ...

    EAL: Master core 0 is ready (tid=54e41820)
    EAL: Core 1 is ready (tid=53b32700)

    Starting core 1

    simple_mp >

To run the secondary process to communicate with the primary process,
again run the same binary, setting at least two cores in the coremask/corelist:

.. code-block:: console

    ./build/simple_mp -l 2-3 -n 4 --proc-type=secondary

When running a secondary process such as that shown above, the proc-type parameter can again be specified as auto.
However, omitting the parameter altogether will cause the process to try to start as a primary rather than a secondary process.

Once the process type is specified correctly,
the process starts up, displaying largely similar status messages to the primary instance as it initializes.
Once again, you will be presented with a command prompt.

Once both processes are running, messages can be sent between them using the send command.
At any stage, either process can be terminated using the quit command.

.. code-block:: console

   EAL: Master core 10 is ready (tid=b5f89820)           EAL: Master core 8 is ready (tid=864a3820)
   EAL: Core 11 is ready (tid=84ffe700)                  EAL: Core 9 is ready (tid=85995700)
   Starting core 11                                      Starting core 9
   simple_mp > send hello_secondary                      simple_mp > core 9: Received 'hello_secondary'
   simple_mp > core 11: Received 'hello_primary'         simple_mp > send hello_primary
   simple_mp > quit                                      simple_mp > quit

.. note::

    If the primary instance is terminated, the secondary instance must also be shut down and restarted after the primary.
    This is necessary because the primary instance will clear and reset the shared memory regions on startup,
    invalidating the secondary process's pointers.
    The secondary process can be stopped and restarted without affecting the primary process.

How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^

The core of this example application is based on using two queues and a single memory pool in shared memory.
These three objects are created at startup by the primary process,
since the secondary process cannot create objects in memory as it cannot reserve memory zones,
and the secondary process then uses lookup functions to attach to these objects as it starts up.

.. code-block:: c

    if (rte_eal_process_type() == RTE_PROC_PRIMARY){
        send_ring = rte_ring_create(_PRI_2_SEC, ring_size, SOCKET0, flags);
        recv_ring = rte_ring_create(_SEC_2_PRI, ring_size, SOCKET0, flags);
        message_pool = rte_mempool_create(_MSG_POOL, pool_size, string_size,
                pool_cache, priv_data_sz, NULL, NULL, NULL, NULL,
                SOCKET0, flags);
    } else {
        recv_ring = rte_ring_lookup(_PRI_2_SEC);
        send_ring = rte_ring_lookup(_SEC_2_PRI);
        message_pool = rte_mempool_lookup(_MSG_POOL);
    }

Note, however, that the named ring structure used as send_ring in the primary process is the recv_ring in the secondary process.

Once the rings and memory pools are all available in both the primary and secondary processes,
the application simply dedicates two threads to sending and receiving messages respectively.
The receive thread dequeues any messages on the receive ring, prints them,
and frees the buffer space used by the messages back to the memory pool.
The send thread makes use of the command-prompt library to interactively request user input for messages to send.
Once a send command is issued by the user, a buffer is allocated from the memory pool, filled in with the message contents,
then enqueued on the appropriate rte_ring.
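
The sketch below illustrates those two paths. It is not the exact simple_mp source: the ``quit`` flag is an assumption introduced here for illustration, while ``send_ring``, ``recv_ring``, ``message_pool`` and ``string_size`` follow the snippet above.

.. code-block:: c

    static volatile int quit;   /* assumed: set when the user issues the quit command */

    /* Receive path (sketch): drain the receive ring and return each buffer
     * to the shared message pool once it has been printed. */
    static int
    lcore_recv(__rte_unused void *arg)
    {
        void *msg;

        while (!quit) {
            if (rte_ring_dequeue(recv_ring, &msg) < 0)
                continue;                   /* nothing waiting, poll again */
            printf("Received '%s'\n", (char *)msg);
            rte_mempool_put(message_pool, msg);
        }
        return 0;
    }

    /* Send path (sketch): take a buffer from the shared pool, fill it with
     * the user's text and enqueue it on the ring read by the peer process. */
    static void
    send_message(const char *text)
    {
        void *msg = NULL;

        if (rte_mempool_get(message_pool, &msg) < 0) {
            printf("Failed to get message buffer\n");
            return;
        }
        snprintf((char *)msg, string_size, "%s", text);
        if (rte_ring_enqueue(send_ring, msg) < 0) {
            printf("Failed to send message - ring full\n");
            rte_mempool_put(message_pool, msg);   /* give the buffer back */
        }
    }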

Symmetric Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The second example of DPDK multi-process support demonstrates how a set of processes can run in parallel,
with each process performing the same set of packet-processing operations.
(Since each process is identical in functionality to the others,
we refer to this as symmetric multi-processing, to differentiate it from asymmetric multi-processing -
such as a client-server mode of operation seen in the next example,
where different processes perform different tasks, yet co-operate to form a packet-processing system.)
The following diagram shows the data-flow through the application, using two processes.

.. _figure_sym_multi_proc_app:

.. figure:: img/sym_multi_proc_app.*

   Example Data Flow in a Symmetric Multi-process Application


As the diagram shows, each process reads packets from each of the network ports in use.
RSS is used to distribute incoming packets on each port to different hardware RX queues.
Each process reads a different RX queue on each port and so does not contend with any other process for that queue access.
Similarly, each process writes outgoing packets to a different TX queue on each port.
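
A minimal sketch of how a port could be configured for this scheme is shown below: one RX and one TX queue is requested per process and RSS is enabled so that the hardware spreads traffic across the RX queues. The helper name and descriptor counts are illustrative, not taken from the sample source.

.. code-block:: c

    /* Sketch: configure a port with 'num_procs' RX and TX queues and enable
     * RSS, so each process can poll queue 'proc_id' without contention. */
    static int
    configure_shared_port(uint16_t port_id, uint16_t num_procs,
            struct rte_mempool *mb_pool)
    {
        struct rte_eth_conf port_conf = {
            .rxmode = { .mq_mode = ETH_MQ_RX_RSS }, /* RTE_ETH_MQ_RX_RSS in newer releases */
        };
        const uint16_t nb_rxd = 512, nb_txd = 512;  /* illustrative descriptor counts */
        uint16_t q;
        int ret;

        ret = rte_eth_dev_configure(port_id, num_procs, num_procs, &port_conf);
        if (ret < 0)
            return ret;

        for (q = 0; q < num_procs; q++) {
            ret = rte_eth_rx_queue_setup(port_id, q, nb_rxd,
                    rte_eth_dev_socket_id(port_id), NULL, mb_pool);
            if (ret < 0)
                return ret;
            ret = rte_eth_tx_queue_setup(port_id, q, nb_txd,
                    rte_eth_dev_socket_id(port_id), NULL);
            if (ret < 0)
                return ret;
        }

        return rte_eth_dev_start(port_id);
    }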

Running the Application
^^^^^^^^^^^^^^^^^^^^^^^

As with the simple_mp example, the first instance of the symmetric_mp process must be run as the primary instance,
though with a number of other application-specific parameters also provided after the EAL arguments.
These additional parameters are:

*   -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
    For example: -p 3 to use ports 0 and 1 only.

*   --num-procs <N>, where N is the total number of symmetric_mp instances that will be run side-by-side to perform packet processing.
    This parameter is used to configure the appropriate number of receive queues on each network port.

*   --proc-id <n>, where n is a numeric value in the range 0 <= n < N (number of processes, specified above).
    This identifies which symmetric_mp instance is being run, so that each process can read a unique receive queue on each network port.

The secondary symmetric_mp instances must also have these parameters specified,
and the first two must be the same as those passed to the primary instance, or errors result.

For example, to run a set of four symmetric_mp instances, running on lcores 1-4,
all performing level-2 forwarding of packets between ports 0 and 1,
the following commands can be used (assuming run as root):

.. code-block:: console

    # ./build/symmetric_mp -l 1 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=0
    # ./build/symmetric_mp -l 2 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=1
    # ./build/symmetric_mp -l 3 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=2
    # ./build/symmetric_mp -l 4 -n 4 --proc-type=auto -- -p 3 --num-procs=4 --proc-id=3

.. note::

    In the above example, the process type can be explicitly specified as primary or secondary, rather than auto.
    When using auto, the first process run creates all the memory structures needed for all processes -
    irrespective of whether it has a proc-id of 0, 1, 2 or 3.

.. note::

    For the symmetric multi-process example, since all processes work in the same manner,
    once the hugepage shared memory and the network ports are initialized,
    it is not necessary to restart all processes if the primary instance dies.
    Instead, that process can be restarted as a secondary,
    by explicitly setting the proc-type to secondary on the command line.
    (All subsequent instances launched will also need this explicitly specified,
    as auto-detection will detect no primary processes running and therefore attempt to re-initialize shared memory.)

How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^

The initialization calls in both the primary and secondary instances are the same for the most part,
calling rte_eal_init(), initializing the 1 G and 10 G drivers and then probing the devices.
Thereafter, the initialization done depends on whether the process is configured as a primary or secondary instance.

In the primary instance, a memory pool is created for the packet mbufs and the network ports to be used are initialized -
the number of RX and TX queues per port being determined by the num-procs parameter passed on the command-line.
The structures for the initialized network ports are stored in shared memory and
therefore will be accessible by the secondary process as it initializes.

.. code-block:: c

    if (num_ports & 1)
       rte_exit(EXIT_FAILURE, "Application must use an even number of ports\n");

    for(i = 0; i < num_ports; i++){
        if(proc_type == RTE_PROC_PRIMARY)
            if (smp_port_init(ports[i], mp, (uint16_t)num_procs) < 0)
                rte_exit(EXIT_FAILURE, "Error initializing ports\n");
    }

In the secondary instance, rather than initializing the network ports, the port information exported by the primary process is used,
giving the secondary process access to the hardware and software rings for each network port.
Similarly, the memory pool of mbufs is accessed by doing a lookup for it by name:

.. code-block:: c

    mp = (proc_type == RTE_PROC_SECONDARY) ?
            rte_mempool_lookup(_SMP_MBUF_POOL) :
            rte_mempool_create(_SMP_MBUF_POOL, NB_MBUFS, MBUF_SIZE, ... )

Once this initialization is complete, the main loop of each process, both primary and secondary,
is exactly the same - each process reads from each port using the queue corresponding to its proc-id parameter,
and writes to the corresponding transmit queue on the output port.
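
A condensed sketch of that shared loop is given below. The ``ports`` array follows the initialization snippet above; the port pairing and the burst size are illustrative choices rather than the sample's exact code.

.. code-block:: c

    #define BURST_SIZE 32

    static uint16_t ports[RTE_MAX_ETHPORTS];   /* filled in during initialization */

    /* Sketch of the main loop run by every instance: poll RX queue 'proc_id'
     * on each port and forward the packets out of the paired port, using the
     * TX queue with the same index so no two processes share a queue. */
    static void
    lcore_main(unsigned int num_ports, uint16_t proc_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        unsigned int i;

        for (;;) {
            for (i = 0; i < num_ports; i++) {
                const unsigned int dst = i ^ 1;    /* forward 0<->1, 2<->3, ... */
                uint16_t nb_rx = rte_eth_rx_burst(ports[i], proc_id,
                        bufs, BURST_SIZE);
                uint16_t nb_tx = rte_eth_tx_burst(ports[dst], proc_id,
                        bufs, nb_rx);

                while (nb_tx < nb_rx)              /* drop what could not be sent */
                    rte_pktmbuf_free(bufs[nb_tx++]);
            }
        }
    }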

Client-Server Multi-process Example
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The third example multi-process application included with the DPDK shows how one can
use a client-server type multi-process design to do packet processing.
In this example, a single server process performs the packet reception from the ports being used and
distributes these packets using round-robin ordering among a set of client processes,
which perform the actual packet processing.
In this case, the client applications just perform level-2 forwarding of packets by sending each packet out on a different network port.

The following diagram shows the data-flow through the application, using two client processes.

.. _figure_client_svr_sym_multi_proc_app:

.. figure:: img/client_svr_sym_multi_proc_app.*

   Example Data Flow in a Client-Server Symmetric Multi-process Application


Running the Application
^^^^^^^^^^^^^^^^^^^^^^^

The server process must be run initially as the primary process to set up all memory structures for use by the clients.
In addition to the EAL parameters, the application-specific parameters are:

*   -p <portmask>, where portmask is a hexadecimal bitmask of what ports on the system are to be used.
    For example: -p 3 to use ports 0 and 1 only.

*   -n <num-clients>, where the num-clients parameter is the number of client processes that will process the packets received
    by the server application.

.. note::

    In the server process, a single thread, the master thread (that is, the lowest numbered lcore in the coremask/corelist), performs all packet I/O.
    If a coremask/corelist is specified with more than a single lcore bit set in it,
    an additional lcore will be used for a thread to periodically print packet count statistics.

Since the server application stores configuration data in shared memory, including the network ports to be used,
the only application parameter needed by a client process is its client instance ID.
Therefore, to run a server application on lcore 1 (with lcore 2 printing statistics) along with two client processes running on lcores 3 and 4,
the following commands could be used:

.. code-block:: console

    # ./mp_server/build/mp_server -l 1-2 -n 4 -- -p 3 -n 2
    # ./mp_client/build/mp_client -l 3 -n 4 --proc-type=auto -- -n 0
    # ./mp_client/build/mp_client -l 4 -n 4 --proc-type=auto -- -n 1

.. note::

    If the server application dies and needs to be restarted, all client applications also need to be restarted,
    as there is no support in the server application for it to run as a secondary process.
    Any client processes that need restarting can be restarted without affecting the server process.

How the Application Works
^^^^^^^^^^^^^^^^^^^^^^^^^

The server process performs the network port and data structure initialization much as the symmetric multi-process application does when run as primary.
One additional enhancement in this sample application is that the server process stores its port configuration data in a memory zone in hugepage shared memory.
This eliminates the need for the client processes to have the portmask parameter passed into them on the command line,
as is done for the symmetric multi-process application, and therefore eliminates mismatched parameters as a potential source of errors.
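
A minimal sketch of this pattern is shown below. The structure layout and the memzone name are illustrative only; the fields actually stored by the mp_server sample differ in detail.

.. code-block:: c

    /* Illustrative shared configuration published by the server. */
    struct shared_port_info {
        uint16_t num_ports;
        uint16_t id[RTE_MAX_ETHPORTS];
    };

    /* Server (primary) side: reserve a named memzone in hugepage memory. */
    static struct shared_port_info *
    publish_port_info(void)
    {
        const struct rte_memzone *mz;

        mz = rte_memzone_reserve("MP_port_info",            /* illustrative name */
                sizeof(struct shared_port_info), rte_socket_id(), 0);
        if (mz == NULL)
            rte_exit(EXIT_FAILURE, "Cannot reserve port info memzone\n");
        memset(mz->addr, 0, sizeof(struct shared_port_info));
        return mz->addr;
    }

    /* Client (secondary) side: attach to the same memzone by name. */
    static const struct shared_port_info *
    lookup_port_info(void)
    {
        const struct rte_memzone *mz = rte_memzone_lookup("MP_port_info");

        if (mz == NULL)
            rte_exit(EXIT_FAILURE, "Cannot find port info memzone\n");
        return mz->addr;
    }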

In the same way that the server process is designed to be run as a primary process instance only,
the client processes are designed to be run as secondary instances only.
They have no code to attempt to create shared memory objects.
Instead, handles to all needed rings and memory pools are obtained via calls to rte_ring_lookup() and rte_mempool_lookup().
The network ports for use by the processes are obtained by loading the network port drivers and probing the PCI bus,
which will, as in the symmetric multi-process example,
automatically get access to the network ports using the settings already configured by the primary/server process.
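
A sketch of how a client might attach to its objects is given below. The ring and pool names, and the per-client naming convention, are assumptions for illustration rather than the names used in the sample source.

.. code-block:: c

    /* Sketch: a client attaches to its own receive ring and to the shared
     * mbuf pool by name; it creates nothing itself. */
    static struct rte_ring *client_rx_ring;
    static struct rte_mempool *pktmbuf_pool;

    static int
    attach_client(unsigned int client_id)
    {
        char ring_name[RTE_RING_NAMESIZE];

        snprintf(ring_name, sizeof(ring_name), "client_%u_rx", client_id);
        client_rx_ring = rte_ring_lookup(ring_name);
        if (client_rx_ring == NULL)
            return -1;          /* server not yet running or not initialized */

        pktmbuf_pool = rte_mempool_lookup("mbuf_pool");   /* illustrative name */
        if (pktmbuf_pool == NULL)
            return -1;

        return 0;
    }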

Once all applications are initialized, the server operates by reading packets from each network port in turn and
distributing those packets to the client queues (software rings, one for each client process) in round-robin order.
On the client side, the packets are read from the rings in bursts that are as large as possible, then routed out to a different network port.
The routing used is very simple. All packets received on the first NIC port are transmitted back out on the second port and vice versa.
Similarly, packets are routed between the 3rd and 4th network ports and so on.
The sending of packets is done by writing the packets directly to the network ports; they are not transferred back via the server process.
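
The sketch below illustrates the server's round-robin distribution of one received burst, as described above. The ``client_rings`` array and ``num_clients`` count are assumed to have been set up during initialization.

.. code-block:: c

    static struct rte_ring **client_rings;   /* one software ring per client */
    static unsigned int num_clients;

    /* Sketch: distribute a received burst across the clients' rings in
     * round-robin order, dropping packets for any ring that is full. */
    static void
    distribute_burst(struct rte_mbuf **bufs, uint16_t nb_rx)
    {
        static unsigned int next_client;
        uint16_t i;

        for (i = 0; i < nb_rx; i++) {
            struct rte_ring *ring = client_rings[next_client];

            if (rte_ring_enqueue(ring, bufs[i]) < 0)
                rte_pktmbuf_free(bufs[i]);     /* client ring full: drop */

            next_client = (next_client + 1) % num_clients;
        }
    }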

In both the server and the client processes, outgoing packets are buffered before being sent,
so as to allow the sending of multiple packets in a single burst to improve efficiency.
For example, the client process will buffer packets to send,
until either the buffer is full or no further packets are received from the server.
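
A sketch of that client-side buffering is shown below; the buffer size, the use of TX queue 0 and the helper names are illustrative assumptions.

.. code-block:: c

    /* Sketch: collect outgoing packets per port and flush them as one burst,
     * either when the buffer fills or when the server ring runs empty. */
    #define TX_BUF_SIZE 32

    struct tx_buf {
        struct rte_mbuf *pkts[TX_BUF_SIZE];
        uint16_t count;
    };

    static void
    flush_tx_buf(uint16_t port, struct tx_buf *buf)
    {
        uint16_t sent = rte_eth_tx_burst(port, 0, buf->pkts, buf->count);

        while (sent < buf->count)              /* free anything not accepted */
            rte_pktmbuf_free(buf->pkts[sent++]);
        buf->count = 0;
    }

    static void
    queue_tx_packet(uint16_t port, struct tx_buf *buf, struct rte_mbuf *pkt)
    {
        buf->pkts[buf->count++] = pkt;
        if (buf->count == TX_BUF_SIZE)         /* buffer full: send the burst */
            flush_tx_buf(port, buf);
    }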
322