..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2023 Ericsson AB.

Dispatcher Library
==================

Overview
--------

The purpose of the dispatcher is to help reduce coupling in an
:doc:`Eventdev <eventdev>`-based DPDK application.

In particular, the dispatcher addresses a scenario where an
application's modules share the same event device and event device
ports, and perform work on the same lcore threads.

The dispatcher replaces the conditional logic that follows an event
device dequeue operation, where events are dispatched to different
parts of the application, typically based on fields in the
``rte_event``, such as the ``queue_id``, ``sub_event_type``, or
``sched_type``.

Below is an excerpt from a fictitious application consisting of two
modules: A and B. In this example, event-to-module routing is based
purely on queue id, where module A expects all events on a certain
queue id, and module B on two other queue ids.

.. note::

   Event routing may reasonably be done based on other ``rte_event``
   fields (or even event user data). Indeed, that's the very reason to
   have match callback functions, instead of a simple queue
   id-to-handler mapping scheme. Queue id-based routing serves well in
   a simple example.

.. code-block:: c

    for (;;) {
            struct rte_event events[MAX_BURST];
            unsigned int n;
            unsigned int i;

            n = rte_event_dequeue_burst(dev_id, port_id, events,
                                        MAX_BURST, 0);

            for (i = 0; i < n; i++) {
                    const struct rte_event *event = &events[i];

                    switch (event->queue_id) {
                    case MODULE_A_QUEUE_ID:
                            module_a_process(event);
                            break;
                    case MODULE_B_STAGE_0_QUEUE_ID:
                            module_b_process_stage_0(event);
                            break;
                    case MODULE_B_STAGE_1_QUEUE_ID:
                            module_b_process_stage_1(event);
                            break;
                    }
            }
    }

The issue this example attempts to illustrate is that the centralized
conditional logic has knowledge of things that should be private to
the modules. In other words, this pattern leads to a violation of
module encapsulation.

The shared conditional logic contains explicit knowledge about what
events should go where. If, for example, ``module_a_process()`` is
broken into two processing stages — a module-internal affair — the
shared conditional code must be updated to reflect this change.

The centralized event routing code becomes an issue in larger
applications, where modules are developed by different organizations.
This pattern also makes module reuse across different applications more
difficult. The part of the conditional logic relevant for a particular
application may need to be duplicated across many module
instantiations (e.g., applications and test setups).

The dispatcher separates the mechanism (routing events to their
receiver) from the policy (which events should go where).

The basic operation of the dispatcher is as follows:

* Dequeue a batch of events from the event device.
* For each event, determine which handler should receive the event, using
  a set of application-provided, per-handler event matching callback
  functions.
* Deliver events matching a particular handler to that handler, using
  its process callback.

Had the above application made use of the dispatcher, the code
relevant for its module A might have looked something like this:

.. code-block:: c

    static bool
    module_a_match(const struct rte_event *event, void *cb_data)
    {
            return event->queue_id == MODULE_A_QUEUE_ID;
    }

    static void
    module_a_process_events(uint8_t event_dev_id, uint8_t event_port_id,
                            const struct rte_event *events,
                            uint16_t num, void *cb_data)
    {
            uint16_t i;

            for (i = 0; i < num; i++)
                    module_a_process_event(&events[i]);
    }

    /* In the module's initialization code */
    rte_dispatcher_register(dispatcher, module_a_match, NULL,
                            module_a_process_events, module_a_data);

.. note::

   Error handling is left out of this and future example code in this chapter.

When the shared conditional logic is removed, a new question arises:
which part of the system actually runs the dispatching mechanism? Or,
phrased differently, what replaces the function hosting the shared
conditional logic (typically launched on all lcores using
``rte_eal_remote_launch()``)? To solve this issue, the dispatcher is
run as a DPDK :doc:`Service <../service_cores>`.

The dispatcher is a layer between the application and the event device
in the receive direction. In the transmit (i.e., item of work
submission) direction, the application directly accesses the Eventdev
core API (e.g., ``rte_event_enqueue_burst()``) to submit new or
forwarded events to the event device.
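
For instance, a handler's process callback may enqueue events directly.
Below is a minimal sketch of such a callback, which forwards all events
it receives to a hypothetical next pipeline stage; the
``NEXT_STAGE_QUEUE_ID`` queue id and the assumption that at most
``MAX_BURST`` events are delivered per call are illustrative only:

.. code-block:: c

    static void
    module_a_forward_events(uint8_t event_dev_id, uint8_t event_port_id,
                            const struct rte_event *events,
                            uint16_t num, void *cb_data)
    {
            struct rte_event out[MAX_BURST];
            uint16_t i;

            for (i = 0; i < num; i++) {
                    out[i] = events[i];
                    /* Route the event to a (hypothetical) next stage. */
                    out[i].queue_id = NEXT_STAGE_QUEUE_ID;
                    out[i].op = RTE_EVENT_OP_FORWARD;
            }

            /* Submission uses the Eventdev core API directly. */
            rte_event_enqueue_burst(event_dev_id, event_port_id, out, num);
    }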

Dispatcher Creation
-------------------

A dispatcher is created using the ``rte_dispatcher_create()`` function.

The event device must be configured before the dispatcher is created.

Usually, only one dispatcher is needed per event device. A dispatcher
handles exactly one event device.

A dispatcher is freed using the ``rte_dispatcher_free()`` function.
The dispatcher's service functions must not be running on
any lcore at the point of this call.
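
Below is a minimal sketch of the dispatcher's life cycle. It assumes
that ``rte_dispatcher_create()`` takes the identifier of an
already-configured event device and returns a pointer to the new
dispatcher instance:

.. code-block:: c

    struct rte_dispatcher *dispatcher;

    /* The event device must already be configured at this point. */
    dispatcher = rte_dispatcher_create(event_dev_id);

    /* ... bind event ports, register handlers, run the application ... */

    /* At shutdown, with the dispatcher's service function no longer
     * running on any lcore.
     */
    rte_dispatcher_free(dispatcher);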

Event Port Binding
------------------

To be able to dequeue events, the dispatcher must know which event
ports are to be used, on all the lcores it uses. The application
provides this information using
``rte_dispatcher_bind_port_to_lcore()``.

This call is typically made from the part of the application that
deals with deployment issues (e.g., iterating lcores and determining
which lcore does what), at the time of application initialization.

The ``rte_dispatcher_unbind_port_from_lcore()`` function is used to
undo this operation.
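
Below is a sketch of how the binding may be done at initialization
time, assuming one event port per worker lcore and that the binding
call takes the event port, a maximum dequeue batch size, a dequeue
timeout, and the lcore id; the ``lcore_event_port_id()`` helper is
hypothetical:

.. code-block:: c

    unsigned int lcore_id;

    RTE_LCORE_FOREACH_WORKER(lcore_id) {
            uint8_t event_port_id = lcore_event_port_id(lcore_id);

            /* Let the dispatcher dequeue up to MAX_BURST events at a
             * time from this port, with no dequeue timeout, on behalf
             * of this lcore.
             */
            rte_dispatcher_bind_port_to_lcore(dispatcher, event_port_id,
                                              MAX_BURST, 0, lcore_id);
    }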

Multiple lcore threads may not safely use the same event
port.

.. note::

   This property (which is a feature, not a bug) is inherited from the
   core Eventdev APIs.

Event ports cannot safely be bound or unbound while the dispatcher's
service function is running on any lcore.

Event Handlers
--------------

The dispatcher handler is an interface between the dispatcher and an
application module, used to route events to the appropriate part of
the application.

Handler Registration
^^^^^^^^^^^^^^^^^^^^

The event handler interface consists of two function pointers:

* The ``rte_dispatcher_match_t`` callback, whose job is to
  decide if a particular event is to be the property of this handler.
* The ``rte_dispatcher_process_t`` callback, which is used by the
  dispatcher to deliver matched events.

An event handler registration is valid on all lcores.

The functions pointed to by the match and process callbacks reside in
the application's domain logic, with one or more handlers per
application module.

A module may use more than one event handler, for convenience or to
further decouple sub-modules. However, the dispatcher may impose an
upper limit on the number of handlers. In addition, installing a large
number of handlers increases dispatcher overhead, although this does
not necessarily translate to a system-level performance degradation. See
the section on :ref:`Event Clustering` for more information.

Handler registration and unregistration cannot safely be done while
the dispatcher's service function is running on any lcore.
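
Building on the earlier module A example, an event handler may be
removed as sketched below. This assumes that
``rte_dispatcher_register()`` returns a handler identifier, which is
later passed to ``rte_dispatcher_unregister()``:

.. code-block:: c

    int handler_id;

    /* In the module's initialization code */
    handler_id = rte_dispatcher_register(dispatcher, module_a_match, NULL,
                                         module_a_process_events,
                                         module_a_data);

    /* ... */

    /* In the module's tear-down code, with the dispatcher's service
     * function not running on any lcore.
     */
    rte_dispatcher_unregister(dispatcher, handler_id);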

Event Matching
^^^^^^^^^^^^^^

A handler's match callback function decides if an event should be
delivered to this handler, or not.

An event is routed to no more than one handler. Thus, if a match
function returns true, no further match functions will be invoked for
that event.

Match functions must not depend on being invoked in any particular
order (e.g., in the handler registration order).

Events failing to match any handler are dropped, and the
``ev_drop_count`` counter is updated accordingly.
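
A dropped event usually indicates a configuration mistake (e.g., a
missing handler, or a match callback that is too narrow). Below is a
sketch of how the drop counter may be inspected, assuming the
dispatcher's statistics are retrieved using
``rte_dispatcher_stats_get()``:

.. code-block:: c

    struct rte_dispatcher_stats stats;

    rte_dispatcher_stats_get(dispatcher, &stats);

    if (stats.ev_drop_count > 0)
            printf("warning: %" PRIu64 " events matched no handler\n",
                   stats.ev_drop_count);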

Event Delivery
^^^^^^^^^^^^^^

The handler callbacks are invoked by the dispatcher's service
function, upon the arrival of events to the event ports bound to the
running service lcore.

A particular event is delivered to at most one handler.

The application must not depend on all match callback invocations for
a particular event batch being made prior to any process calls being
made. For example, if the dispatcher dequeues two events from
the event device, it may choose to find out the destination for the
first event, and deliver it, and then continue to find out the
destination for the second, and then deliver that event as well. The
dispatcher may also choose a strategy where no event is delivered
until the destination handlers for both events have been determined.

The events provided in a single process call always belong to the same
event port dequeue burst.

.. _Event Clustering:

Event Clustering
^^^^^^^^^^^^^^^^

The dispatcher maintains the order of events destined for the same
handler.

*Order* here refers to the order in which the events were delivered
from the event device to the dispatcher (i.e., in the event array
populated by ``rte_event_dequeue_burst()``), in relation to the order
in which the dispatcher delivers these events to the application.

The dispatcher *does not* guarantee to maintain the order of events
delivered to *different* handlers.

For example, assume that ``MODULE_A_QUEUE_ID`` expands to the value 0,
and ``MODULE_B_STAGE_0_QUEUE_ID`` expands to the value 1. Then
consider a scenario where the following events are dequeued from the
event device (qid is short for event queue id).

.. code-block:: none

    [e0: qid=1], [e1: qid=1], [e2: qid=0], [e3: qid=1]

The dispatcher may deliver the events in the following manner:

.. code-block:: none

   module_b_stage_0_process([e0: qid=1], [e1: qid=1])
   module_a_process([e2: qid=0])
   module_b_stage_0_process([e3: qid=1])

The dispatcher may also choose to cluster (group) all events destined
for ``module_b_stage_0_process()`` into one array:

.. code-block:: none

   module_b_stage_0_process([e0: qid=1], [e1: qid=1], [e3: qid=1])
   module_a_process([e2: qid=0])

Here, the event ``e2`` is reordered and placed behind ``e3``, from a
delivery order point of view. This kind of reshuffling is allowed,
since the events are destined for different handlers.

The dispatcher may also deliver ``e2`` before the three events
destined for module B.

An example of what the dispatcher may not do is to reorder event
``e1`` so that it precedes ``e0`` in the array passed to module
B's stage 0 process callback.

Although clustering requires some extra work for the dispatcher, it
leads to fewer process function calls. In addition, and likely more
importantly, it improves temporal locality of memory accesses to
handler-specific data structures in the application, which in turn may
lead to fewer cache misses and improved overall performance.

Finalize
--------

The dispatcher may be configured to notify one or more parts of the
application when the matching and processing of a batch of events has
completed.

The ``rte_dispatcher_finalize_register()`` call is used to
register a finalize callback. The ``rte_dispatcher_finalize_unregister()``
function is used to remove a callback.

The finalize hook may be used by a set of event handlers (in the same
module, or in a set of cooperating modules) sharing an event output
buffer, since it allows for flushing of the buffers at the last
possible moment. In particular, it allows for buffering of
``RTE_EVENT_OP_FORWARD`` events, which must be flushed before the next
``rte_event_dequeue_burst()`` call is made (assuming implicit release
is employed).

The following is an example with an application-defined event output
buffer (the ``event_buffer``):

.. code-block:: c

    static void
    finalize_batch(uint8_t event_dev_id, uint8_t event_port_id,
                   void *cb_data)
    {
            struct event_buffer *buffer = cb_data;
            unsigned lcore_id = rte_lcore_id();
            struct event_buffer_lcore *lcore_buffer =
                    &buffer->lcore_buffer[lcore_id];

            event_buffer_lcore_flush(lcore_buffer);
    }

    /* In the module's initialization code */
    rte_dispatcher_finalize_register(dispatcher, finalize_batch,
                                     shared_event_buffer);

The dispatcher does not track any relationship between a handler and a
finalize callback, and all finalize callbacks will be called if (and
only if) at least one event was dequeued from the event device.

Finalize callback registration and unregistration cannot safely be
done while the dispatcher's service function is running on any lcore.

Service
-------

The dispatcher is a DPDK service, and is managed in a manner similar
to other DPDK services (e.g., an Event Timer Adapter).

Below is an example of how to configure a particular lcore to serve as
a service lcore, and to map an already-created dispatcher to that
lcore.

.. code-block:: c

    static void
    launch_dispatcher_core(struct rte_dispatcher *dispatcher,
                           unsigned lcore_id)
    {
            uint32_t service_id;

            rte_service_lcore_add(lcore_id);

            rte_dispatcher_service_id_get(dispatcher, &service_id);

            rte_service_map_lcore_set(service_id, lcore_id, 1);

            rte_service_lcore_start(lcore_id);

            rte_service_runstate_set(service_id, 1);
    }

As the final step, the dispatcher must be started.

.. code-block:: c

    rte_dispatcher_start(dispatcher);


Multi Service Dispatcher Lcores
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In an Eventdev application, most (or all) compute-intensive and
performance-sensitive processing is done in an event-driven manner,
where the CPU cycles spent on application domain logic are the direct
result of items of work (i.e., ``rte_event`` events) dequeued from an
event device.

In the light of this, it makes sense to have the dispatcher service be
the only DPDK service on all lcores used for packet processing — at
least in principle.

However, there is nothing in DPDK that prevents colocating other
services with the dispatcher service on the same lcore.

Tasks that, prior to the introduction of the dispatcher into the
application, were performed on the lcore even when no events were
received are prime targets for being converted into such auxiliary
services, running on the dispatcher core set.

An example of such a task would be the management of a per-lcore timer
wheel (i.e., calling ``rte_timer_manage()``).
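
Below is a sketch of how such a timer management task may be packaged
as an auxiliary DPDK service, using the generic service cores API.
Mapping the resulting service to the dispatcher lcores is done in the
same way as for the dispatcher service, and is left out here:

.. code-block:: c

    static int32_t
    timer_service_run(void *userdata)
    {
            RTE_SET_USED(userdata);

            /* Expire any pending timers on this lcore's timer wheel. */
            rte_timer_manage();

            return 0;
    }

    /* In the application's initialization code */
    struct rte_service_spec spec = {
            .name = "timer_manage",
            .callback = timer_service_run,
    };
    uint32_t timer_service_id;

    rte_service_component_register(&spec, &timer_service_id);
    rte_service_component_runstate_set(timer_service_id, 1);
    rte_service_runstate_set(timer_service_id, 1);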

Applications employing :doc:`../rcu_lib` (or a
similar technique) may opt for having quiescent state signaling
(e.g., calling ``rte_rcu_qsbr_quiescent()``) factored out into a
separate service, to assure that resource reclamation occurs even when
some lcores are currently not processing any events.

If more services than the dispatcher service are mapped to a service
lcore, it's important that the other services are well-behaved and
don't interfere with event processing to the extent that the system's
throughput and/or latency requirements are at risk of not being met.

In particular, to avoid jitter, they should have a small upper bound
for the maximum amount of time spent in a single service function
call.

An example of a scenario with a more CPU-heavy colocated service is a
low-lcore count deployment, where the event device lacks the
``RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT`` capability (and thus
requires software to feed incoming packets into the event device). In
this case, the best performance may be achieved if the Event Ethernet
RX and/or TX Adapters are mapped to lcores also used for event
dispatching, since otherwise the adapter lcores would have a lot of
idle CPU cycles.