..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2017 Intel Corporation.
    Copyright(c) 2018 Arm Limited.

Event Device Library
====================

The DPDK Event device library is an abstraction that provides the application
with features to schedule events. This is achieved using the PMD architecture
similar to the ethdev or cryptodev APIs, which may already be familiar to the
reader.

The eventdev framework introduces the event driven programming model. In a
polling model, lcores poll ethdev ports and associated Rx queues directly
to look for a packet. By contrast, in an event driven model, lcores call a
scheduler that selects packets for them based on programmer-specified criteria.
The eventdev library adds support for this event driven programming model, which
offers applications automatic multicore scaling, dynamic load balancing,
pipelining, packet ingress order maintenance and synchronization services to
simplify application packet processing.

By introducing an event driven programming model, DPDK can support both polling
and event driven programming models for packet processing, and applications are
free to choose whatever model (or combination of the two) best suits their
needs.

A step-by-step walk-through of the eventdev design is available in the `API
Walk-through`_ section later in this document.

Event struct
------------

The eventdev API represents each event with a generic struct, which contains a
payload and metadata required for scheduling by an eventdev.  The
``rte_event`` struct is a 16 byte C structure, defined in
``lib/eventdev/rte_eventdev.h``.

Event Metadata
~~~~~~~~~~~~~~

The rte_event structure contains the following metadata fields, which the
application fills in to have the event scheduled as required:

* ``flow_id`` - The targeted flow identifier for the enq/deq operation.
* ``event_type`` - The source of this event, e.g. RTE_EVENT_TYPE_ETHDEV or CPU.
* ``sub_event_type`` - Distinguishes events inside the application that have
  the same event_type (see above).
* ``op`` - This field takes one of the RTE_EVENT_OP_* values, and tells the
  eventdev about the status of the event - valid values are NEW, FORWARD or
  RELEASE.
* ``sched_type`` - Represents the type of scheduling that should be performed
  on this event - valid values are RTE_SCHED_TYPE_ORDERED, ATOMIC and
  PARALLEL.
* ``queue_id`` - The identifier for the event queue that the event is sent to.
* ``priority`` - The priority of this event, see the RTE_EVENT_DEV_PRIORITY_*
  values.

Event Payload
~~~~~~~~~~~~~

The rte_event struct contains a union for the payload, allowing flexibility in
what the actual event being scheduled is. The payload is a union of the
following:

* ``uint64_t u64``
* ``void *event_ptr``
* ``struct rte_mbuf *mbuf``
* ``struct rte_event_vector *vec``

These four items in a union occupy the same 64 bits at the end of the rte_event
structure. The application can utilize the 64 bits directly by accessing the
u64 variable, while the event_ptr, mbuf and vec members are provided as
convenience variables. For example, the mbuf pointer in the union can be used
to schedule a DPDK packet.

Event Vector
~~~~~~~~~~~~

The rte_event_vector struct contains a vector of elements defined by the event
type specified in the ``rte_event``. The event_vector structure contains the
following data:

* ``nb_elem`` - The number of elements held within the vector.

Similar to ``rte_event``, the payload of the event vector is also a union,
allowing flexibility in what the actual vector is.

* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.
* ``void *ptrs[0]`` - An array of pointers.
* ``uint64_t u64s[0]`` - An array of uint64_t elements.

The size of the event vector is related to the total number of elements it is
configured to hold; this is achieved by making ``rte_event_vector`` a variable
length structure.
A helper function is provided to create a mempool that holds event vectors. It
takes the name of the pool, the total number of ``rte_event_vector`` structures
required, the cache size, the number of elements in each ``rte_event_vector``
and the socket id.

.. code-block:: c

        rte_event_vector_pool_create("vector_pool", nb_event_vectors, cache_sz,
                                     nb_elements_per_vector, socket_id);

The function ``rte_event_vector_pool_create`` creates a mempool with the best
platform mempool ops.

Queues
~~~~~~

An event queue is a queue containing events that are scheduled by the event
device. An event queue contains events of different flows associated with
scheduling types, such as atomic, ordered, or parallel.

Queue All Types Capable
^^^^^^^^^^^^^^^^^^^^^^^

If the RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES capability bit is set in the event
device, then events of any type may be sent to any queue. Otherwise, each
queue only supports events of the type that it was created with.

Queue All Types Incapable
^^^^^^^^^^^^^^^^^^^^^^^^^

In this case, each stage has a specified scheduling type.  The application
configures each queue for a specific type of scheduling, and just enqueues all
events to the eventdev. An example of a PMD of this type is the eventdev
software PMD.

The Eventdev API supports the following scheduling types per queue:

*   Atomic
*   Ordered
*   Parallel

Atomic, Ordered and Parallel are load-balanced scheduling types: the output
of the queue can be spread out over multiple CPU cores.

Atomic scheduling on a queue ensures that a single flow is not present on two
different CPU cores at the same time. Ordered allows sending all flows to any
core, but the scheduler must ensure that on egress the packets are returned to
ingress order on downstream queue enqueue. Parallel allows sending all flows
to all CPU cores, without any re-ordering guarantees.

Single Link Flag
^^^^^^^^^^^^^^^^

There is a SINGLE_LINK flag which allows an application to indicate that only
one port will be connected to a queue.  Queues configured with the single-link
flag follow a FIFO-like structure, maintaining ordering, but can only be
linked to a single port (see below for port and queue linking details).


Ports
~~~~~

Ports are the points of contact between worker cores and the eventdev. The
general use case will see one CPU core using one port to enqueue and dequeue
events from an eventdev. Ports are linked to queues in order to retrieve events
from those queues (more details in `Linking Queues and Ports`_ below).


API Walk-through
----------------

This section will introduce the reader to the eventdev API, showing how to
create and configure an eventdev and use it for a two-stage atomic pipeline
with one core each for RX and TX. RX and TX cores are shown here for
illustration; refer to the Eventdev Adapter documentation for further details.
The diagram below shows the final state of the application after this
walk-through:

.. _figure_eventdev-usage1:

.. figure:: ../img/eventdev_usage.*

   Sample eventdev usage, with RX, two atomic stages and a single-link to TX.


A high level overview of the setup steps is:

* rte_event_dev_configure()
* rte_event_queue_setup()
* rte_event_port_setup()
* rte_event_port_link()
* rte_event_dev_start()


Init and Config
~~~~~~~~~~~~~~~

The eventdev library uses vdev options to add devices to the DPDK application.
The ``--vdev`` EAL option allows adding eventdev instances to your DPDK
application, using the name of the eventdev PMD as an argument.

For example, to create an instance of the software eventdev scheduler, the
following vdev arguments should be provided to the application EAL command line:

.. code-block:: console

   ./dpdk_application --vdev="event_sw0"

In the following code, we configure an eventdev instance with 3 queues
and 6 ports. The 3 queues consist of 2 Atomic and 1 Single-Link,
while the 6 ports consist of 4 workers, 1 RX and 1 TX.

.. code-block:: c

        const struct rte_event_dev_config config = {
                .nb_event_queues = 3,
                .nb_event_ports = 6,
                .nb_events_limit  = 4096,
                .nb_event_queue_flows = 1024,
                .nb_event_port_dequeue_depth = 128,
                .nb_event_port_enqueue_depth = 128,
        };
        int err = rte_event_dev_configure(dev_id, &config);

The remainder of this walk-through assumes that dev_id is 0.

Setting up Queues
~~~~~~~~~~~~~~~~~

Once the eventdev itself is configured, the next step is to configure queues.
This is done by setting the appropriate values in a queue_conf structure, and
calling the setup function. Repeat this step for each queue, starting from
0 and ending at ``nb_event_queues - 1`` from the event_dev config above.

.. code-block:: c

        struct rte_event_queue_conf atomic_conf = {
                .schedule_type = RTE_SCHED_TYPE_ATOMIC,
                .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
                .nb_atomic_flows = 1024,
                .nb_atomic_order_sequences = 1024,
        };
        struct rte_event_queue_conf single_link_conf = {
                .event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK,
        };
        int dev_id = 0;
        int atomic_q_1 = 0;
        int atomic_q_2 = 1;
        int single_link_q = 2;
        int err;

        err = rte_event_queue_setup(dev_id, atomic_q_1, &atomic_conf);
        err = rte_event_queue_setup(dev_id, atomic_q_2, &atomic_conf);
        err = rte_event_queue_setup(dev_id, single_link_q, &single_link_conf);

As shown above, queue IDs are as follows:

 * id 0, atomic queue #1
 * id 1, atomic queue #2
 * id 2, single-link queue

These queues are used for the remainder of this walk-through.

Setting up Ports
~~~~~~~~~~~~~~~~

Once queues are set up successfully, create the ports as required.

.. code-block:: c

        struct rte_event_port_conf rx_conf = {
                .dequeue_depth = 128,
                .enqueue_depth = 128,
                .new_event_threshold = 1024,
        };
        struct rte_event_port_conf worker_conf = {
                .dequeue_depth = 16,
                .enqueue_depth = 64,
                .new_event_threshold = 4096,
        };
        struct rte_event_port_conf tx_conf = {
                .dequeue_depth = 128,
                .enqueue_depth = 128,
                .new_event_threshold = 4096,
        };
        int dev_id = 0;
        int rx_port_id = 0;
        int tx_port_id = 5;
        int worker_port_id;
        int err;

        err = rte_event_port_setup(dev_id, rx_port_id, &rx_conf);

        for (worker_port_id = 1; worker_port_id <= 4; worker_port_id++) {
                err = rte_event_port_setup(dev_id, worker_port_id, &worker_conf);
        }

        err = rte_event_port_setup(dev_id, tx_port_id, &tx_conf);

As shown above:

 * port 0: RX core
 * ports 1,2,3,4: Workers
 * port 5: TX core

These ports are used for the remainder of this walk-through.

Linking Queues and Ports
~~~~~~~~~~~~~~~~~~~~~~~~

The final step is to "wire up" the ports to the queues. After this, the
eventdev is capable of scheduling events, and when cores request work to do,
the correct events are provided to that core. Note that the RX core takes input
from, for example, a NIC, so it is not linked to any eventdev queues.

Linking all workers to the atomic queues, and the TX core to the single-link
queue, can be achieved like this:

.. code-block:: c

        uint8_t rx_port_id = 0;
        uint8_t tx_port_id = 5;
        uint8_t atomic_qs[] = {0, 1};
        uint8_t single_link_q = 2;
        uint8_t priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
        int worker_port_id;

        for (worker_port_id = 1; worker_port_id <= 4; worker_port_id++) {
                int links_made = rte_event_port_link(dev_id, worker_port_id, atomic_qs, NULL, 2);
        }
        int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);

Linking Queues to Ports with link profiles
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If supported by the underlying event device, an application can use link
profiles to set up multiple link profiles per port and change between them at
run time depending on heuristic data. Using link profiles can reduce the
fast-path overhead of linking/unlinking and waiting for unlinks in progress,
and gives applications the ability to switch between preset profiles on the
fly.

An example use case could be as follows.

Config path:

.. code-block:: c

   uint8_t lq[4] = {4, 5, 6, 7};
   uint8_t hq[4] = {0, 1, 2, 3};

   if (rte_event_dev_info.max_profiles_per_port < 2)
       return -ENOTSUP;

   rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
   rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);

Worker path:

.. code-block:: c

   uint8_t profile_id_to_switch;

   while (1) {
       deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
       if (deq == 0) {
           profile_id_to_switch = app_find_profile_id_to_switch();
           rte_event_port_profile_switch(0, 0, profile_id_to_switch);
           continue;
       }

       // Process the event received.
   }

Event Pre-scheduling
~~~~~~~~~~~~~~~~~~~~

Event pre-scheduling improves scheduling performance
by assigning events to event ports in advance when dequeues are issued.
The ``rte_event_dequeue_burst`` operation initiates the pre-schedule operation,
which completes in parallel
without affecting the dequeued event flow contexts and dequeue latency.
On the next dequeue operation, the pre-scheduled events are dequeued
and pre-schedule is initiated again.

An application can use event pre-scheduling if the event device supports it
at either the device level or at an individual port level.
The application must check pre-schedule capability
by checking if ``rte_event_dev_info.event_dev_cap`` has the bit
``RTE_EVENT_DEV_CAP_PRESCHEDULE`` or ``RTE_EVENT_DEV_CAP_PRESCHEDULE_ADAPTIVE`` set.
If present, pre-scheduling can be enabled at device configuration time
by setting the appropriate pre-schedule type in ``rte_event_dev_config.preschedule``.

The following pre-schedule types are supported:

 * ``RTE_EVENT_PRESCHEDULE_NONE`` - No pre-scheduling.
 * ``RTE_EVENT_PRESCHEDULE`` - Always issue a pre-schedule when dequeue is issued.
 * ``RTE_EVENT_PRESCHEDULE_ADAPTIVE`` - Issue pre-schedule when dequeue is issued
   and there are no forward progress constraints.

Event devices that support the ``RTE_EVENT_DEV_CAP_PER_PORT_PRESCHEDULE`` capability
allow applications to modify pre-scheduling at a per port level at runtime in the
fast path.
To modify event pre-scheduling at a given event port,
the application can use the ``rte_event_port_preschedule_modify()`` API.
This API can be called even if the event device does not support per port
pre-scheduling; it will be treated as a no-op.

.. code-block:: c

   rte_event_port_preschedule_modify(dev_id, port_id, RTE_EVENT_PRESCHEDULE);
   // Dequeue events from the event port with normal dequeue() function.
   rte_event_port_preschedule_modify(dev_id, port_id, RTE_EVENT_PRESCHEDULE_NONE);
   // Disable pre-scheduling if thread is about to be scheduled out
   // and issue dequeue() to drain pending events.

An application may provide a hint to the eventdev PMD
to pre-schedule the next event without releasing the current flow context.
Event devices that support this feature advertise the capability
via the ``RTE_EVENT_DEV_CAP_PRESCHEDULE_EXPLICIT`` flag.
If pre-scheduling is already enabled at the event device or event port level,
or if the capability is not supported, then the hint is ignored.

.. code-block:: c

   rte_event_port_preschedule(dev_id, port_id, RTE_EVENT_PRESCHEDULE);

Starting the EventDev
~~~~~~~~~~~~~~~~~~~~~

A single function call tells the eventdev instance to start processing
events. Note that all queues must be linked to a port for the instance to
start: if any queue is not linked, enqueuing to that queue will cause the
application to back-pressure and eventually stall due to no space in the
eventdev.

.. code-block:: c

        int err = rte_event_dev_start(dev_id);

.. Note::

         EventDev needs to be started before starting the event producers such
         as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter and
         event_dma_adapter.

Ingress of New Events
~~~~~~~~~~~~~~~~~~~~~

Now that the eventdev is set up, and ready to receive events, the RX core must
enqueue some events into the system for it to schedule. The events to be
scheduled are ordinary DPDK packets, received from rte_eth_rx_burst() as
normal. The following code shows how those packets can be enqueued into the
eventdev:

.. code-block:: c

        const uint16_t nb_rx = rte_eth_rx_burst(eth_port, 0, mbufs, BATCH_SIZE);

        for (i = 0; i < nb_rx; i++) {
                ev[i].flow_id = mbufs[i]->hash.rss;
                ev[i].op = RTE_EVENT_OP_NEW;
                ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC;
                ev[i].queue_id = atomic_q_1;
                ev[i].event_type = RTE_EVENT_TYPE_ETHDEV;
                ev[i].sub_event_type = 0;
                ev[i].priority = RTE_EVENT_DEV_PRIORITY_NORMAL;
                ev[i].mbuf = mbufs[i];
        }

        const int nb_tx = rte_event_enqueue_burst(dev_id, rx_port_id, ev, nb_rx);
        if (nb_tx != nb_rx) {
                for (i = nb_tx; i < nb_rx; i++)
                        rte_pktmbuf_free(mbufs[i]);
        }

Forwarding of Events
~~~~~~~~~~~~~~~~~~~~

Now that the RX core has injected events, there is work to be done by the
workers. Note that each worker will dequeue as many events as it can in a burst,
process each one individually, and then burst the packets back into the
eventdev.

The worker can look up the event's source from ``event.queue_id``, which should
indicate to the worker what workload needs to be performed on the event.
Once done, the worker can update the ``event.queue_id`` to a new value, to send
the event to the next stage in the pipeline.

.. code-block:: c

        int timeout = 0;
        struct rte_event events[BATCH_SIZE];
        uint16_t nb_rx = rte_event_dequeue_burst(dev_id, worker_port_id, events, BATCH_SIZE, timeout);

        for (i = 0; i < nb_rx; i++) {
                /* process mbuf using events[i].queue_id as pipeline stage */
                struct rte_mbuf *mbuf = events[i].mbuf;
                /* Send event to next stage in pipeline */
                events[i].queue_id++;
                events[i].op = RTE_EVENT_OP_FORWARD;
        }

        uint16_t nb_tx = rte_event_enqueue_burst(dev_id, worker_port_id, events, nb_rx);


Egress of Events
~~~~~~~~~~~~~~~~

Finally, when the packet is ready for egress or needs to be dropped, we need
to inform the eventdev that the packet is no longer being handled by the
application. This can be done by calling dequeue() or dequeue_burst(), which
indicates that the previous burst of packets is no longer in use by the
application.

An event driven worker thread has the following typical workflow on the
fast path:

.. code-block:: c

       while (1) {
               rte_event_dequeue_burst(...);
               (event processing)
               rte_event_enqueue_burst(...);
       }

Quiescing Event Ports
~~~~~~~~~~~~~~~~~~~~~

To migrate an event port to another lcore,
or while tearing down a worker core using an event port,
``rte_event_port_quiesce()`` can be invoked to make sure that all the data
associated with the event port is released from the worker core;
this might also include any prefetched events.

A flush callback can be passed to the function to handle any outstanding events.

.. code-block:: c

        rte_event_port_quiesce(dev_id, port_id, release_cb, NULL);

.. Note::

        Invocation of this API does not affect the existing port configuration.

Independent Enqueue Capability
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This capability applies to eventdev devices that expect all forwarded events
to be enqueued in the same order as they are dequeued.
For dropped events, their releases should come
at the same location as the original event was expected.
The eventdev device has this restriction as it uses the order
to retrieve information about the original event that was sent to the CPU.
This includes information such as the atomic flow ID, to release the flow lock,
and the ordered event's sequence number, to restore the original order.

This capability only matters to eventdevs supporting burst mode.
On ports where the application is going to change the enqueue order,
``RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ`` support should be enabled.

Example code to inform the PMD that the application plans to use
independent enqueue order on a port:

.. code-block:: c

   if (capability & RTE_EVENT_DEV_CAP_INDEPENDENT_ENQ)
       port_config = port_config | RTE_EVENT_PORT_CFG_INDEPENDENT_ENQ;

Stopping the EventDev
~~~~~~~~~~~~~~~~~~~~~

A single function call tells the eventdev instance to stop processing events.
A flush callback can be registered to free any inflight events
using the ``rte_event_dev_stop_flush_callback_register()`` function.

.. code-block:: c

        int err = rte_event_dev_stop(dev_id);

.. Note::

        The event producers such as ``event_eth_rx_adapter``,
        ``event_timer_adapter``, ``event_crypto_adapter`` and
        ``event_dma_adapter`` need to be stopped before stopping
        the event device.

Summary
-------

The eventdev library allows an application to easily schedule events as it
requires, either using a run-to-completion or pipeline processing model.  The
queues and ports abstract the logical functionality of an eventdev, providing
the application with a generic method to schedule events.  With the flexible
PMD infrastructure, applications benefit from improvements in existing
eventdevs and additions of new ones without modification.