..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(C) 2020 Marvell International Ltd.

Graph Library and Inbuilt Nodes
===============================

Graph architecture abstracts the data processing functions as a ``node`` and
``links`` them together to create a complex ``graph`` to enable reusable/modular
data processing functions.

The graph library provides an API to enable graph framework operations such as
create, lookup, dump and destroy on graphs, and node operations such as clone,
edge update, and edge shrink. The API also allows creating a stats cluster
to monitor per-graph and per-node stats.

Features
--------

Features of the Graph library are:

- Nodes as plugins.
- Support for out of tree nodes.
- Inbuilt nodes for packet processing.
- Multi-process support.
- Low overhead graph walk and node enqueue.
- Low overhead statistics collection infrastructure.
- Support to export the graph as a Graphviz dot file. See ``rte_graph_export()``.
- Allow having another graph walk implementation in the future by segregating
  the fast path (``rte_graph_worker.h``) and slow path code.

Advantages of Graph architecture
--------------------------------

- Memory latency is the enemy for high-speed packet processing; moving similar
  packet processing code to a node reduces the i-cache and d-cache misses.
- Exploits the probability that most packets will follow the same nodes in the
  graph.
- Allows SIMD instructions for packet processing of the node.
- The modular scheme allows having reusable nodes for the consumers.
- The modular scheme allows us to abstract the vendor HW specific
  optimizations as a node.

Performance tuning parameters
-----------------------------

- Test with various burst size values (256, 128, 64, 32) using the
  RTE_GRAPH_BURST_SIZE config option.
  Testing shows that, on x86 and arm64 servers, the sweet spot is a 256 burst
  size, while on arm64 embedded SoCs, it is either 64 or 128.
- Disable node statistics (using the ``RTE_LIBRTE_GRAPH_STATS`` config option)
  if not needed.

Programming model
-----------------

Anatomy of Node:
~~~~~~~~~~~~~~~~

.. _figure_anatomy_of_a_node:

.. figure:: img/anatomy_of_a_node.*

   Anatomy of a node

The node is the basic building block of the graph framework.

A node consists of:

process():
^^^^^^^^^^

The callback function will be invoked by the worker thread using the
``rte_graph_walk()`` function when there is data to be processed by the node.
A graph node processes the data using the ``process()`` callback and enqueues it
to the next downstream node using the ``rte_node_enqueue*()`` functions.

Context memory:
^^^^^^^^^^^^^^^

It is memory allocated by the library to store the node-specific context
information. This memory will be used by the process(), init(), fini() callbacks.

init():
^^^^^^^

The callback function will be invoked by ``rte_graph_create()`` when
a node gets attached to a graph.

fini():
^^^^^^^

The callback function will be invoked by ``rte_graph_destroy()`` when a
node gets detached from a graph.

Node name:
^^^^^^^^^^

It is the name of the node. When a node registers to the graph library, the
library gives it an ID of type ``rte_node_t``. Either the ID or the name can be
used to look up the node. ``rte_node_from_name()`` and ``rte_node_id_to_name()``
are the node lookup functions.

nb_edges:
^^^^^^^^^

The number of downstream nodes connected to this node. The ``next_nodes[]``
stores the downstream node objects. The ``rte_node_edge_update()`` and
``rte_node_edge_shrink()`` functions shall be used to update the ``next_nodes[]``
objects. Consumers of the node APIs are free to update the ``next_nodes[]``
objects until ``rte_graph_create()`` is invoked.

next_node[]:
^^^^^^^^^^^^

The dynamic array to store the downstream nodes connected to this node. A
downstream node should not be the current node itself or a source node.

Source node:
^^^^^^^^^^^^

Source nodes are static nodes created using ``RTE_NODE_REGISTER`` by passing
``flags`` as ``RTE_NODE_SOURCE_F``.
While performing the graph walk, the ``process()`` function of all the source
nodes will be called first, so that these nodes can be used as input nodes for a graph.

Node creation and registration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Node implementer creates the node by implementing ops and attributes of
  ``struct rte_node_register``.

* The library registers the node by invoking RTE_NODE_REGISTER on library load
  using the constructor scheme. The constructor scheme is used here to support multi-process.

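The sketch below shows a minimal node registration. The node name ``test_node``,
its ``process()`` callback and its single ``pkt_drop`` edge are illustrative
assumptions, not an existing inbuilt node:

.. code-block:: c

   #include <rte_graph.h>
   #include <rte_graph_worker.h>

   static uint16_t
   test_node_process(struct rte_graph *graph, struct rte_node *node,
                     void **objs, uint16_t nb_objs)
   {
       /* Forward the whole burst to the first edge (pkt_drop). */
       rte_node_enqueue(graph, node, 0, objs, nb_objs);
       return nb_objs;
   }

   static struct rte_node_register test_node = {
       .name = "test_node",
       .process = test_node_process,
       .nb_edges = 1,
       .next_nodes = {"pkt_drop"},
   };

   /* Registered at library load time via a constructor. */
   RTE_NODE_REGISTER(test_node);
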
Link the Nodes to create the graph topology
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _figure_link_the_nodes:

.. figure:: img/link_the_nodes.*

   Topology after linking the nodes

Once nodes are available to the program, the application or node public API
functions can link them together to create a complex packet processing graph.

There are multiple strategies to link the nodes.

Method (a):
^^^^^^^^^^^
Provide the ``next_nodes[]`` at node registration time. See ``struct rte_node_register::nb_edges``.
This is a use case to address the static node scheme where one knows upfront the
``next_nodes[]`` of the node.

Method (b):
^^^^^^^^^^^
Use ``rte_node_edge_get()``, ``rte_node_edge_update()``, ``rte_node_edge_shrink()``
to update the ``next_nodes[]`` links for the node at runtime, but before graph creation.

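A minimal sketch of method (b), assuming the illustrative ``test_node``
registered earlier and the inbuilt ``pkt_drop``/``ethdev_tx-0`` node names:

.. code-block:: c

   #include <rte_common.h>
   #include <rte_graph.h>

   const char *edges[] = {"pkt_drop", "ethdev_tx-0"};
   rte_node_t id = rte_node_from_name("test_node");

   /* Update/add edges starting at index 0, before rte_graph_create(). */
   rte_node_edge_update(id, 0, edges, RTE_DIM(edges));
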
Method (c):
^^^^^^^^^^^
Use ``rte_node_clone()`` to clone an already existing node created using RTE_NODE_REGISTER.
When ``rte_node_clone()`` is invoked, the library clones all the attributes
of the node and creates a new one. The name of the cloned node shall be
``"parent_node_name-user_provided_name"``.

This method enables the use case of Rx and Tx nodes where multiple of those nodes
need to be cloned based on the number of CPUs available in the system.
The cloned nodes will be identical, except for the ``"context memory"``.
Context memory will have information of port, queue pair in case of Rx and Tx
ethdev nodes.

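For example, the following sketch clones the illustrative ``test_node``;
the resulting node is named ``"test_node-copy0"``:

.. code-block:: c

   #include <rte_graph.h>

   rte_node_t parent = rte_node_from_name("test_node");
   rte_node_t clone = rte_node_clone(parent, "copy0");
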
Create the graph object
~~~~~~~~~~~~~~~~~~~~~~~
Now that the nodes are linked, it is time to create a graph by including
the required nodes. The application can provide a set of node patterns to
form a graph object. The ``fnmatch()`` API is used underneath for the pattern
matching to include the required nodes. After the graph is created, any changes
to nodes or the graph are not allowed.

The ``rte_graph_create()`` API shall be used to create the graph.

Example of a graph object creation:

.. code-block:: console

   {"ethdev_rx-0-0", "ip4*", "ethdev_tx-*"}

In the above example, a graph object will be created with the ethdev Rx
node of port 0 and queue 0, all ipv4* nodes in the system,
and the ethdev Tx nodes of all ports.

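A hedged sketch of graph creation using these patterns; the graph name
``"worker0"`` is an assumption made for the example:

.. code-block:: c

   #include <rte_common.h>
   #include <rte_debug.h>
   #include <rte_graph.h>
   #include <rte_lcore.h>

   static const char *patterns[] = {
       "ethdev_rx-0-0", "ip4*", "ethdev_tx-*",
   };

   struct rte_graph_param prm = {
       .socket_id = rte_socket_id(),
       .nb_node_patterns = RTE_DIM(patterns),
       .node_patterns = patterns,
   };

   rte_graph_t id = rte_graph_create("worker0", &prm);
   if (id == RTE_GRAPH_ID_INVALID)
       rte_panic("graph creation failed\n");
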
Graph models
~~~~~~~~~~~~
There are two different kinds of graph walking models. The user can select the model using
the ``rte_graph_worker_model_set()`` API. If the application decides to use only one model,
the fast path check can be avoided by defining the model with RTE_GRAPH_MODEL_SELECT.
For example:

.. code-block:: c

  #define RTE_GRAPH_MODEL_SELECT RTE_GRAPH_MODEL_RTC
  #include "rte_graph_worker.h"

RTC (Run-To-Completion)
^^^^^^^^^^^^^^^^^^^^^^^
This is the default graph walking model. Specifically, the ``rte_graph_walk_rtc()`` and
``rte_node_enqueue*`` fast path API functions are designed to work on a single core to
have better performance. The fast path API works on a graph object, so the multi-core
graph processing strategy would be to create a graph object per worker.

Example:

Graph: node-0 -> node-1 -> node-2 @Core0.

.. code-block:: diff

    + - - - - - - - - - - - - - - - - - - - - - +
    '                  Core #0                  '
    '                                           '
    ' +--------+     +---------+     +--------+ '
    ' | Node-0 | --> | Node-1  | --> | Node-2 | '
    ' +--------+     +---------+     +--------+ '
    '                                           '
    + - - - - - - - - - - - - - - - - - - - - - +

Dispatch model
^^^^^^^^^^^^^^
The dispatch model enables a cross-core dispatching mechanism which employs
a scheduling work-queue to dispatch streams to other worker cores which are
associated with the destination node.

Use ``rte_graph_model_mcore_dispatch_lcore_affinity_set()`` to set lcore affinity
with the node.
Each worker core will have its own copy of the graph. Use ``rte_graph_clone()`` to clone
the graph for each worker and use ``rte_graph_model_mcore_dispatch_core_bind()`` to
bind the graph with the worker core.

Example:

Graph topo: node-0 -> node-1; node-1 -> node-2; node-2 -> node-3.
Config graph: node-0 @Core0; node-1/3 @Core1; node-2 @Core2.

.. code-block:: diff

    + - - - - - -+     +- - - - - - - - - - - - - +     + - - - - - -+
    '  Core #0   '     '          Core #1         '     '  Core #2   '
    '            '     '                          '     '            '
    ' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
    ' | Node-0 | - - - ->| Node-1 |    | Node-3 |<- - - - | Node-2 | '
    ' +--------+ '     ' +--------+    +--------+ '     ' +--------+ '
    '            '     '     |                    '     '      ^     '
    + - - - - - -+     +- - -|- - - - - - - - - - +     + - - -|- - -+
                             |                                 |
                             + - - - - - - - - - - - - - - - - +


In fast path
~~~~~~~~~~~~
Typical fast-path code looks like below, where the application
gets the fast-path graph object using ``rte_graph_lookup()``
on the worker thread and runs ``rte_graph_walk()`` in a tight loop.

.. code-block:: c

    struct rte_graph *graph = rte_graph_lookup("worker0");

    while (!done) {
        rte_graph_walk(graph);
    }

Context update when graph walk in action
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The fast-path object for the node is ``struct rte_node``.

It may be possible that in the slow path, or while the graph walk is in action,
the user needs to update the context of the node, hence needs access to the
``struct rte_node *`` memory.

The ``rte_graph_foreach_node()``, ``rte_graph_node_get()`` and
``rte_graph_node_get_by_name()`` APIs can be used to get the
``struct rte_node *``. The ``rte_graph_foreach_node()`` iterator function works on the
``struct rte_graph *`` fast-path graph object while the others work on the graph ID or name.

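A hedged sketch of a context update; the graph name ``"worker0"``, the node name
``"test_node"`` and the meaning of the first context byte are assumptions for
illustration only:

.. code-block:: c

   #include <rte_graph.h>
   #include <rte_graph_worker.h>

   struct rte_node *node;

   node = rte_graph_node_get_by_name("worker0", "test_node");
   if (node != NULL)
       node->ctx[0] = 1; /* application-defined context update */
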
Get the node statistics using graph cluster
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The user may need to know the aggregate stats of a node across
multiple graph objects, especially when each graph object is bound
to a worker thread.

A graph cluster object is introduced for statistics.
The ``rte_graph_cluster_stats_create()`` API shall be used for creating a
graph cluster with multiple graph objects and ``rte_graph_cluster_stats_get()``
to get the aggregate node statistics.

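A hedged sketch of cluster creation and polling; the graph pattern ``"worker*"``
is an assumption, and the default print callback (``fn = NULL``) is used to dump
the stats to the given file:

.. code-block:: c

   #include <stdio.h>
   #include <rte_graph.h>

   const char *patterns[] = {"worker*"};

   struct rte_graph_cluster_stats_param s_param = {
       .socket_id = SOCKET_ID_ANY,
       .fn = NULL,      /* NULL selects the default print callback */
       .f = stdout,     /* file to dump the stats to when fn == NULL */
       .nb_graph_patterns = 1,
       .graph_patterns = patterns,
   };

   struct rte_graph_cluster_stats *stats;

   stats = rte_graph_cluster_stats_create(&s_param);
   if (stats != NULL)
       rte_graph_cluster_stats_get(stats, 0); /* print aggregate node stats */
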
An example statistics output from ``rte_graph_cluster_stats_get()``:

.. code-block:: diff

    +---------+-----------+-------------+---------------+-----------+---------------+-----------+
    |Node     |calls      |objs         |realloc_count  |objs/call  |objs/sec(10E6) |cycles/call|
    +---------------------+-------------+---------------+-----------+---------------+-----------+
    |node0    |12977424   |3322220544   |5              |256.000    |3047.151872    |20.0000    |
    |node1    |12977653   |3322279168   |0              |256.000    |3047.210496    |17.0000    |
    |node2    |12977696   |3322290176   |0              |256.000    |3047.221504    |17.0000    |
    |node3    |12977734   |3322299904   |0              |256.000    |3047.231232    |17.0000    |
    |node4    |12977784   |3322312704   |1              |256.000    |3047.243776    |17.0000    |
    |node5    |12977825   |3322323200   |0              |256.000    |3047.254528    |17.0000    |
    +---------+-----------+-------------+---------------+-----------+---------------+-----------+

Node writing guidelines
~~~~~~~~~~~~~~~~~~~~~~~

The ``process()`` function of a node is the fast-path function and it needs
to be written carefully to achieve max performance.

Broadly speaking, there are two different types of nodes.

Static nodes
~~~~~~~~~~~~
The first kind of nodes are those that have a fixed ``next_nodes[]`` for the
complete burst (like ethdev_rx, ethdev_tx) and they are simple to write.
The ``process()`` function can move the obj burst to the next node either using
``rte_node_next_stream_move()`` or using ``rte_node_next_stream_get()`` and
``rte_node_next_stream_put()``.

Intermediate nodes
~~~~~~~~~~~~~~~~~~
The second kind of node is the ``intermediate node`` that decides what the
``next_node[]`` to send to is on a per-packet basis. In these nodes,

* Firstly, there has to be the best possible packet processing logic.

* Secondly, each packet needs to be queued to its next node.

This can be done using the ``rte_node_enqueue_[x1|x2|x4]()`` APIs if the packets
are destined to a single next node, or ``rte_node_enqueue_next()`` that takes an array of nexts.

In the scenario where multiple intermediate nodes are present, but most of the time
each node uses the same next node for all its packets, the cost of moving every
pointer from the current node's stream to the next node's stream could be avoided.
This is called a home run and ``rte_node_next_stream_move()`` could be used to
just move the stream from the current node to the next node with the least number of cycles.
Since the copy can be avoided only in the case where all the packets are destined
to the same next node, the node implementation should also have worst-case
handling where every packet could be going to a different next node.

Example of intermediate node implementation with home run:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A minimal code sketch of these steps is shown after the list.

#. Start with the speculation that next_node = node->ctx.
   This could be the next_node that the application used in the previous function call of this node.

#. Get the next_node stream array with the required space using
   ``rte_node_next_stream_get(next_node, space)``.

#. While n_left_from > 0 (i.e. packets left to be sent), prefetch the next pkt_set
   and process the current pkt_set to find their next node.

#. If all the next nodes of the current pkt_set match the speculated next node,
   just count them as successfully speculated (``last_spec``) till now and
   continue the loop without actually moving them to the next node. Else, if there is
   a mismatch, copy all the pkt_set pointers that were ``last_spec`` and move the
   current pkt_set to their respective next nodes using ``rte_node_enqueue_x1()``.
   Also, one of the next nodes can be updated as the speculated next_node if it is more
   probable. Finally, reset ``last_spec`` to zero.

#. If n_left_from != 0 then go to step 3 to process the remaining packets.

#. If last_spec == nb_objs, all the objects passed were successfully speculated
   to a single next node. So, the current stream can be moved to the next node using
   ``rte_node_next_stream_move(node, next_node)``.
   This is the ``home run`` where memcpy of buffer pointers to the next node is avoided.

#. Update the ``node->ctx`` with the more probable next node.

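The following is a minimal sketch of the steps above, assuming a hypothetical
classifier ``my_classify()`` that returns the next edge for a packet (a real
node would do an LPM or hash lookup here); prefetching and the speculation
update are omitted for brevity:

.. code-block:: c

   #include <rte_common.h>
   #include <rte_graph.h>
   #include <rte_graph_worker.h>
   #include <rte_memcpy.h>

   /* Hypothetical per-packet classifier. */
   static rte_edge_t
   my_classify(void *obj)
   {
       RTE_SET_USED(obj);
       return 0;
   }

   static uint16_t
   my_node_process(struct rte_graph *graph, struct rte_node *node,
                   void **objs, uint16_t nb_objs)
   {
       rte_edge_t next_index = node->ctx[0]; /* 1. speculated next node */
       uint16_t held = 0, last_spec = 0, n_left_from = nb_objs;
       void **from = objs, **to_next;

       /* 2. Get the speculated next node stream with the required space. */
       to_next = rte_node_next_stream_get(graph, node, next_index, nb_objs);

       while (n_left_from > 0) { /* 3. find the next node of each packet */
           rte_edge_t next = my_classify(objs[0]);

           if (next == next_index) {
               /* 4. Speculation hit: just count, do not move the pointer. */
               last_spec++;
           } else {
               /* 4. Mismatch: flush the speculated pointers collected so far */
               rte_memcpy(to_next, from, last_spec * sizeof(from[0]));
               from += last_spec;
               to_next += last_spec;
               held += last_spec;
               last_spec = 0;
               /* and enqueue the current packet to its own next node. */
               rte_node_enqueue_x1(graph, node, next, objs[0]);
               from += 1;
           }
           objs++;
           n_left_from--; /* 5. loop until all packets are processed */
       }

       if (last_spec == nb_objs) {
           /* 6. Home run: move the whole stream without copying pointers. */
           rte_node_next_stream_move(graph, node, next_index);
           return nb_objs;
       }

       /* Flush the remaining speculated pointers. */
       rte_memcpy(to_next, from, last_spec * sizeof(from[0]));
       held += last_spec;
       rte_node_next_stream_put(graph, node, next_index, held);

       node->ctx[0] = next_index; /* 7. remember the more probable next node */
       return nb_objs;
   }
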
Graph object memory layout
--------------------------
.. _figure_graph_mem_layout:

.. figure:: img/graph_mem_layout.*

   Memory layout

Understanding the memory layout helps to debug the graph library and
improve the performance if needed.

The graph object consists of a header, a circular buffer to store the pending
stream when walking over the graph, and variable-length memory to store
the ``rte_node`` objects.

The graph_nodes_mem_create() function creates and populates this memory. The functions
such as ``rte_graph_walk()`` and ``rte_node_enqueue_*`` use this memory
to enable fastpath services.

Inbuilt Nodes
-------------

DPDK provides a set of nodes for data processing.
The following diagram depicts inbuilt nodes data flow.

.. _figure_graph_inbuit_node_flow:

.. figure:: img/graph_inbuilt_node_flow.*

   Inbuilt nodes data flow

The following sections detail the documentation for individual inbuilt nodes.

ethdev_rx
~~~~~~~~~
This node does ``rte_eth_rx_burst()`` into the stream buffer passed to it
(src node stream) and does ``rte_node_next_stream_move()`` only when
there are packets received. Each ``rte_node`` works only on the one Rx port and
queue that it gets from node->ctx. For each (port X, rx_queue Y),
a rte_node is cloned from ethdev_rx_base_node as ``ethdev_rx-X-Y`` in
``rte_node_eth_config()`` along with updating ``node->ctx``.
Each graph needs to be associated with a unique rte_node for a (port, rx_queue).

ethdev_tx
~~~~~~~~~
This node does ``rte_eth_tx_burst()`` for a burst of objs received by it.
It sends the burst to a fixed Tx port and queue, using the information from
node->ctx. For each (port X), this ``rte_node`` is cloned from
ethdev_tx_node_base as "ethdev_tx-X" in ``rte_node_eth_config()``
along with updating node->context.

Since each graph doesn't need more than one Txq per port, a Txq is assigned
based on the graph id to each rte_node instance. Each graph needs to be associated
with a rte_node for each (port).

pkt_drop
~~~~~~~~
This node frees all the objects passed to it considering them as
``rte_mbufs`` that need to be freed.

ip4_lookup
~~~~~~~~~~
This node is an intermediate node that does LPM lookup for the received
ipv4 packets and the result determines each packet's next node.

On successful LPM lookup, the result contains the ``next_node`` id and
``next-hop`` id with which the packet needs to be further processed.

On LPM lookup failure, objects are redirected to the pkt_drop node.
``rte_node_ip4_route_add()`` is the control path API to add ipv4 routes.
To achieve the home run, the node uses ``rte_node_next_stream_move()`` as mentioned
in the above sections.

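A hedged sketch of adding a route from the control path; the subnet, the
next-hop id ``0`` and the choice of the rewrite node as the next node are
assumptions for illustration:

.. code-block:: c

   #include <rte_ip.h>
   #include <rte_node_ip4_api.h>

   /* 10.0.2.0/24 -> next-hop id 0, packets continue to ip4_rewrite. */
   rte_node_ip4_route_add(RTE_IPV4(10, 0, 2, 0), 24, 0,
                          RTE_NODE_IP4_LOOKUP_NEXT_REWRITE);
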
ip4_rewrite
~~~~~~~~~~~
This node gets packets from the ``ip4_lookup`` node with the next-hop id for each
packet embedded in ``node_mbuf_priv1(mbuf)->nh``. This id is used
to determine the L2 header to be written to the packet before sending
the packet out to a particular ethdev_tx node.
``rte_node_ip4_rewrite_add()`` is the control path API to add next-hop info.

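A hedged sketch of adding the rewrite (L2 header) data for the next-hop id used
above; the MAC addresses and the output port ``0`` are assumptions:

.. code-block:: c

   #include <rte_byteorder.h>
   #include <rte_ether.h>
   #include <rte_node_ip4_api.h>

   struct rte_ether_hdr l2 = {
       .dst_addr.addr_bytes = {0x00, 0x11, 0x22, 0x33, 0x44, 0x55},
       .src_addr.addr_bytes = {0x00, 0xaa, 0xbb, 0xcc, 0xdd, 0xee},
       .ether_type = RTE_BE16(RTE_ETHER_TYPE_IPV4),
   };

   /* next-hop id 0: rewrite with the above Ethernet header, egress port 0. */
   rte_node_ip4_rewrite_add(0, (uint8_t *)&l2, sizeof(l2), 0);
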
ip4_reassembly
~~~~~~~~~~~~~~
This node is an intermediate node that reassembles ipv4 fragmented packets;
non-fragmented packets pass through the node unaffected.
The node rewrites its stream and moves it to the next node.
The fragment table and death row table should be set up via the
``rte_node_ip4_reassembly_configure`` API.

ip6_lookup
~~~~~~~~~~
This node is an intermediate node that does LPM lookup for the received
IPv6 packets and the result determines each packet's next node.

On successful LPM lookup, the result contains the ``next_node`` ID
and ``next-hop`` ID with which the packet needs to be further processed.

On LPM lookup failure, objects are redirected to the ``pkt_drop`` node.
``rte_node_ip6_route_add()`` is the control path API to add IPv6 routes.
To achieve the home run, the node uses ``rte_node_next_stream_move()``
as mentioned in the above sections.

ip6_rewrite
~~~~~~~~~~~
This node gets packets from the ``ip6_lookup`` node with the next-hop ID
for each packet embedded in ``node_mbuf_priv1(mbuf)->nh``.
This ID is used to determine the L2 header to be written to the packet
before sending the packet out to a particular ``ethdev_tx`` node.
``rte_node_ip6_rewrite_add()`` is the control path API to add next-hop info.

null
~~~~
This node ignores the set of objects passed to it and reports that all are
processed.

kernel_tx
~~~~~~~~~
This node is an exit node that forwards the packets to the kernel.
It will be used to forward any control plane traffic to the kernel stack from DPDK.
It uses a raw socket interface to transmit the packets;
it uses the packet's destination IP address in the sockaddr_in address structure
and the ``sendto`` function to send data on the raw socket.
After sending the burst of packets to the kernel,
this node frees up the packet buffers.

kernel_rx
~~~~~~~~~
This node is a source node which receives packets from the kernel
and forwards them to any of the intermediate nodes.
It uses the raw socket interface to receive packets from the kernel.
It uses the ``poll`` function to poll on the socket fd
for ``POLLIN`` events to read the packets from the raw socket
to the stream buffer and does ``rte_node_next_stream_move()``
when there are received packets.

ip4_local
~~~~~~~~~
This node is an intermediate node that does ``packet_type`` lookup for
the received ipv4 packets and the result determines each packet's next node.

On successful ``packet_type`` lookup, for any IPv4 protocol the result
contains the ``next_node`` id and ``next-hop`` id with which the packet
needs to be further processed.

On packet_type lookup failure, objects are redirected to the ``pkt_drop`` node.
``rte_node_ip4_route_add()`` is the control path API to add an ipv4 address with 32 bit
depth to receive the packets.
To achieve the home run, the node uses ``rte_node_next_stream_move()`` as mentioned
in the above sections.

udp4_input
~~~~~~~~~~
This node is an intermediate node that does a udp destination port lookup for
the received ipv4 packets and the result determines each packet's next node.

The user registers a new node ``udp4_input`` into the graph library during initialization
and attaches a user specified node as an edge to this node using
``rte_node_udp4_usr_node_add()``, and creates an empty hash table with destination
port and node id as its fields.

After successful addition of the user node as an edge, the edge id is returned to the user.

The user would register the ``ip4_lookup`` table with the specified ip address and a 32 bit mask
for ip filtration using the API ``rte_node_ip4_route_add()``.

After the graph is created, the user would update the hash table with the custom port
and the previously obtained edge id using the API ``rte_node_udp4_dst_port_add()``.

When a packet is received, an lpm lookup is performed; if the ip is matched, the packet
is handed over to the ip4_local node, then the packet is verified for the udp proto and
on success the packet is enqueued to the ``udp4_input`` node.

A hash lookup is performed in the ``udp4_input`` node with the registered destination port
and the destination port in the UDP packet; on success the packet is handed to the ``udp_user_node``.