xref: /dpdk/doc/guides/prog_guide/eventdev/eventdev.rst (revision 41dd9a6bc2d9c6e20e139ad713cc9d172572dd43)
..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2017 Intel Corporation.
    Copyright(c) 2018 Arm Limited.

Event Device Library
====================

The DPDK Event device library is an abstraction that provides the application
with features to schedule events. This is achieved using the PMD architecture,
similar to the ethdev or cryptodev APIs, which may already be familiar to the
reader.

The eventdev framework introduces an event driven programming model. In a
polling model, lcores poll ethdev ports and associated Rx queues directly
to look for a packet. By contrast, in an event driven model, lcores call a
scheduler that selects packets for them based on programmer-specified criteria.
The eventdev library adds support for this event driven programming model,
which offers applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and synchronization services to
simplify application packet processing.

By introducing an event driven programming model, DPDK can support both polling
and event driven programming models for packet processing, and applications are
free to choose whatever model (or combination of the two) best suits their
needs.

A step-by-step walk-through of the eventdev design is available in the `API
Walk-through`_ section later in this document.

Event struct
------------

The eventdev API represents each event with a generic struct, which contains a
payload and the metadata required for scheduling by an eventdev. The
``rte_event`` struct is a 16 byte C structure, defined in
``lib/eventdev/rte_eventdev.h``.

Event Metadata
~~~~~~~~~~~~~~

The rte_event structure contains the following metadata fields, which the
application fills in to have the event scheduled as required:

* ``flow_id`` - The targeted flow identifier for the enqueue/dequeue operation.
* ``event_type`` - The source of this event, e.g. RTE_EVENT_TYPE_ETHDEV or
  RTE_EVENT_TYPE_CPU.
* ``sub_event_type`` - Distinguishes events inside the application that have
  the same event_type (see above).
* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
  eventdev about the status of the event - valid values are NEW, FORWARD and
  RELEASE.
* ``sched_type`` - Represents the type of scheduling that should be performed
  on this event - valid values are RTE_SCHED_TYPE_ORDERED, ATOMIC and
  PARALLEL.
* ``queue_id`` - The identifier for the event queue that the event is sent to.
* ``priority`` - The priority of this event, see the RTE_EVENT_DEV_PRIORITY_*
  values.

Event Payload
~~~~~~~~~~~~~

The rte_event struct contains a union for the payload, allowing flexibility in
what the actual event being scheduled is. The payload is a union of the
following:

* ``uint64_t u64``
* ``void *event_ptr``
* ``struct rte_mbuf *mbuf``
* ``struct rte_event_vector *vec``

These four items in a union occupy the same 64 bits at the end of the rte_event
structure. The application can utilize the 64 bits directly by accessing the
u64 variable, while event_ptr, mbuf and vec are provided as convenience
variables. For example, the mbuf pointer in the union can be used to schedule a
DPDK packet.

Event Vector
~~~~~~~~~~~~

The rte_event_vector struct contains a vector of elements defined by the event
type specified in the ``rte_event``. The event_vector structure contains the
following data:

* ``nb_elem`` - The number of elements held within the vector.

Similar to ``rte_event``, the payload of an event vector is also a union,
allowing flexibility in what the actual vector is.

* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.
* ``void *ptrs[0]`` - An array of pointers.
* ``uint64_t u64s[0]`` - An array of uint64_t elements.

The size of the event vector is related to the total number of elements it is
configured to hold; this is achieved by making ``rte_event_vector`` a
variable-length structure.
A helper function is provided to create a mempool that holds event vectors. It
takes the name of the pool, the total number of ``rte_event_vector`` elements
required, the cache size, the number of elements in each ``rte_event_vector``
and the socket id.

.. code-block:: c

        rte_event_vector_pool_create("vector_pool", nb_event_vectors, cache_sz,
                                     nb_elements_per_vector, socket_id);

The function ``rte_event_vector_pool_create`` creates the mempool with the best
platform mempool ops.

Queues
~~~~~~

An event queue is a queue containing events that are scheduled by the event
device. An event queue contains events of different flows associated with
scheduling types, such as atomic, ordered, or parallel.

Queue All Types Capable
^^^^^^^^^^^^^^^^^^^^^^^

If the RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability bit is set in the event
device, then events of any type may be sent to any queue. Otherwise, a queue
only supports events of the type that it was created with.

Queue All Types Incapable
^^^^^^^^^^^^^^^^^^^^^^^^^

In this case, each stage has a specified scheduling type.  The application
configures each queue for a specific type of scheduling, and just enqueues all
events to the eventdev. An example of a PMD of this type is the eventdev
software PMD.

The Eventdev API supports the following scheduling types per queue:

*   Atomic
*   Ordered
*   Parallel

Atomic, Ordered and Parallel are load-balanced scheduling types: the output
of the queue can be spread out over multiple CPU cores.

Atomic scheduling on a queue ensures that a single flow is not present on two
different CPU cores at the same time. Ordered allows sending all flows to any
core, but the scheduler must ensure that on egress the packets are returned to
ingress order on downstream queue enqueue. Parallel allows sending all flows
to all CPU cores, without any re-ordering guarantees.

Single Link Flag
^^^^^^^^^^^^^^^^

There is a SINGLE_LINK flag which allows an application to indicate that only
one port will be connected to a queue.  Queues configured with the single-link
flag follow a FIFO-like structure, maintaining ordering, but each such queue
can only be linked to a single port (see below for port and queue linking
details).


Ports
~~~~~

Ports are the points of contact between worker cores and the eventdev. The
general use case will see one CPU core using one port to enqueue and dequeue
events from an eventdev. Ports are linked to queues in order to retrieve events
from those queues (more details in `Linking Queues and Ports`_ below).


API Walk-through
----------------

This section introduces the reader to the eventdev API, showing how to
create and configure an eventdev and use it for a two-stage atomic pipeline
with one core each for RX and TX. RX and TX cores are shown here for
illustration; refer to the Eventdev Adapter documentation for further details.
The diagram below shows the final state of the application after this
walk-through:

.. _figure_eventdev-usage1:

.. figure:: ../img/eventdev_usage.*

   Sample eventdev usage, with RX, two atomic stages and a single-link to TX.


A high level overview of the setup steps is:

* rte_event_dev_configure()
* rte_event_queue_setup()
* rte_event_port_setup()
* rte_event_port_link()
* rte_event_dev_start()


Init and Config
~~~~~~~~~~~~~~~

The eventdev library uses vdev options to add devices to the DPDK application.
The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
application, using the name of the eventdev PMD as an argument.

For example, to create an instance of the software eventdev scheduler, the
following vdev arguments should be provided to the application EAL command line:

.. code-block:: console

   ./dpdk_application --vdev="event_sw0"

In the following code, we configure an eventdev instance with 3 queues
and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link,
while the 6 ports consist of 4 workers, 1 RX and 1 TX.

.. code-block:: c

        const struct rte_event_dev_config config = {
                .nb_event_queues = 3,
                .nb_event_ports = 6,
                .nb_events_limit = 4096,
                .nb_event_queue_flows = 1024,
                .nb_event_port_dequeue_depth = 128,
                .nb_event_port_enqueue_depth = 128,
        };
        int err = rte_event_dev_configure(dev_id, &config);

The remainder of this walk-through assumes that dev_id is 0.

Setting up Queues
~~~~~~~~~~~~~~~~~

Once the eventdev itself is configured, the next step is to configure queues.
This is done by setting the appropriate values in a queue_conf structure, and
calling the setup function. Repeat this step for each queue, starting from
0 and ending at ``nb_event_queues - 1`` from the event_dev config above.

.. code-block:: c

        struct rte_event_queue_conf atomic_conf = {
                .schedule_type = RTE_SCHED_TYPE_ATOMIC,
                .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
                .nb_atomic_flows = 1024,
                .nb_atomic_order_sequences = 1024,
        };
        struct rte_event_queue_conf single_link_conf = {
                .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
        };
        int dev_id = 0;
        int atomic_q_1 = 0;
        int atomic_q_2 = 1;
        int single_link_q = 2;
        int err;
        err = rte_event_queue_setup(dev_id, atomic_q_1, &atomic_conf);
        err = rte_event_queue_setup(dev_id, atomic_q_2, &atomic_conf);
        err = rte_event_queue_setup(dev_id, single_link_q, &single_link_conf);

As shown above, queue IDs are as follows:

 * id 0, atomic queue #1
 * id 1, atomic queue #2
 * id 2, single-link queue

These queues are used for the remainder of this walk-through.

Setting up Ports
~~~~~~~~~~~~~~~~

Once queues are set up successfully, create the ports as required.

.. code-block:: c

        struct rte_event_port_conf rx_conf = {
                .dequeue_depth = 128,
                .enqueue_depth = 128,
                .new_event_threshold = 1024,
        };
        struct rte_event_port_conf worker_conf = {
                .dequeue_depth = 16,
                .enqueue_depth = 64,
                .new_event_threshold = 4096,
        };
        struct rte_event_port_conf tx_conf = {
                .dequeue_depth = 128,
                .enqueue_depth = 128,
                .new_event_threshold = 4096,
        };
        int dev_id = 0;
        int rx_port_id = 0;
        int worker_port_id;
        int err = rte_event_port_setup(dev_id, rx_port_id, &rx_conf);

        for (worker_port_id = 1; worker_port_id <= 4; worker_port_id++) {
                err = rte_event_port_setup(dev_id, worker_port_id, &worker_conf);
        }

        int tx_port_id = 5;
        err = rte_event_port_setup(dev_id, tx_port_id, &tx_conf);

As shown above:

 * port 0: RX core
 * ports 1,2,3,4: Workers
 * port 5: TX core

These ports are used for the remainder of this walk-through.

Linking Queues and Ports
~~~~~~~~~~~~~~~~~~~~~~~~

The final step is to "wire up" the ports to the queues. After this, the
eventdev is capable of scheduling events, and when cores request work to do,
the correct events are provided to that core. Note that the RX core takes input
from, e.g., a NIC, so it is not linked to any eventdev queues.

Linking all workers to the atomic queues, and the TX core to the single-link
queue, can be achieved like this:

.. code-block:: c

        uint8_t rx_port_id = 0;
        uint8_t tx_port_id = 5;
        uint8_t atomic_qs[] = {0, 1};
        uint8_t single_link_q = 2;
        uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
        int worker_port_id;

        for (worker_port_id = 1; worker_port_id <= 4; worker_port_id++) {
                int links_made = rte_event_port_link(dev_id, worker_port_id, atomic_qs, NULL, 2);
        }
        int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);

Linking Queues to Ports with link profiles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

An application can use link profiles, if supported by the underlying event
device, to set up multiple link profiles per port and change them at run time
depending upon heuristic data. Using link profiles can reduce the overhead of
linking/unlinking and of waiting for unlinks in progress in the fast path, and
gives applications the ability to switch between preset profiles on the fly.

An example use case could be as follows.

Config path:

.. code-block:: c

   uint8_t lq[4] = {4, 5, 6, 7};
   uint8_t hq[4] = {0, 1, 2, 3};

   if (rte_event_dev_info.max_profiles_per_port < 2)
       return -ENOTSUP;

   rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
   rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);

Worker path:

.. code-block:: c

   uint8_t profile_id_to_switch;

   while (1) {
       deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
       if (deq == 0) {
           profile_id_to_switch = app_find_profile_id_to_switch();
           rte_event_port_profile_switch(0, 0, profile_id_to_switch);
           continue;
       }

       /* Process the event received. */
   }

Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~

A single function call tells the eventdev instance to start processing
events. Note that all queues must be linked to for the instance to start:
if any queue is not linked to, enqueuing to that queue will cause the
application to backpressure and eventually stall due to no space in the
eventdev.

.. code-block:: c

        int err = rte_event_dev_start(dev_id);

.. Note::

         EventDev needs to be started before starting the event producers such
         as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter and
         event_dma_adapter.

Ingress of New Events
~~~~~~~~~~~~~~~~~~~~~

Now that the eventdev is set up and ready to receive events, the RX core must
enqueue some events into the system for it to schedule. The events to be
scheduled are ordinary DPDK packets, received from an rte_eth_rx_burst() call
as normal. The following code shows how those packets can be enqueued into the
eventdev:

.. code-block:: c

        const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);

        for (i = 0; i < nb_rx; i++) {
                ev[i].flow_id = mbufs[i]->hash.rss;
                ev[i].op = RTE_EVENT_OP_NEW;
                ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
                ev[i].queue_id = atomic_q_1;
                ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
                ev[i].sub_event_type = 0;
                ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
                ev[i].mbuf = mbufs[i];
        }

        const int nb_tx = rte_event_enqueue_burst(dev_id, rx_port_id, ev, nb_rx);
        if (nb_tx != nb_rx) {
                /* Not all events were enqueued; free the leftover mbufs. */
                for (i = nb_tx; i < nb_rx; i++)
                        rte_pktmbuf_free(mbufs[i]);
        }

Forwarding of Events
~~~~~~~~~~~~~~~~~~~~

Now that the RX core has injected events, there is work to be done by the
workers. Note that each worker will dequeue as many events as it can in a burst,
process each one individually, and then burst the packets back into the
eventdev.

The worker can look up the event's source from ``event.queue_id``, which should
indicate to the worker what workload needs to be performed on the event.
Once done, the worker can update ``event.queue_id`` to a new value, to send
the event to the next stage in the pipeline.

.. code-block:: c

        int timeout = 0;
        struct rte_event events[BATCH_SIZE];
        uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);

        for (i = 0; i < nb_rx; i++) {
                /* process mbuf using events[i].queue_id as pipeline stage */
                struct rte_mbuf *mbuf = events[i].mbuf;
                /* Send event to next stage in pipeline */
                events[i].queue_id++;
        }

        uint16_t nb_tx = rte_event_enqueue_burst(dev_id, worker_port_id, events, nb_rx);


Egress of Events
~~~~~~~~~~~~~~~~

Finally, when the packet is ready for egress or needs to be dropped, we need
to inform the eventdev that the packet is no longer being handled by the
application. This can be done by calling dequeue() or dequeue_burst(), which
indicates that the previous burst of packets is no longer in use by the
application.

An event driven worker thread has the following typical workflow on the
fast path:

.. code-block:: c

       while (1) {
               rte_event_dequeue_burst(...);
               (event processing)
               rte_event_enqueue_burst(...);
       }

Quiescing Event Ports
~~~~~~~~~~~~~~~~~~~~~

To migrate an event port to another lcore, or while tearing down a worker core
that uses an event port, ``rte_event_port_quiesce()`` can be invoked to make
sure that all the data associated with the event port is released from the
worker core; this may also include any prefetched events.

A flush callback can be passed to the function to handle any outstanding events.

.. code-block:: c

        rte_event_port_quiesce(dev_id, port_id, release_cb, NULL);

.. Note::

        Invocation of this API does not affect the existing port configuration.

Stopping the EventDev
~~~~~~~~~~~~~~~~~~~~~

A single function call tells the eventdev instance to stop processing events.
A flush callback can be registered to free any inflight events
using the ``rte_event_dev_stop_flush_callback_register()`` function.

.. code-block:: c

        int err = rte_event_dev_stop(dev_id);

.. Note::

        The event producers such as ``event_eth_rx_adapter``,
        ``event_timer_adapter``, ``event_crypto_adapter`` and
        ``event_dma_adapter`` need to be stopped before stopping
        the event device.

Summary
-------

The eventdev library allows an application to easily schedule events as it
requires, either using a run-to-completion or pipeline processing model.  The
queues and ports abstract the logical functionality of an eventdev, providing
the application with a generic method to schedule events.  With the flexible
PMD infrastructure, applications benefit from improvements to existing
eventdevs and from the addition of new ones without modification.