..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

.. _Environment_Abstraction_Layer:

Environment Abstraction Layer
=============================

The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level resources such as hardware and memory space.
It provides a generic interface that hides the environment specifics from the applications and libraries.
It is the responsibility of the initialization routine to decide how to allocate these resources
(that is, memory space, devices, timers, consoles, and so on).

Typical services expected from the EAL are:

*   DPDK Loading and Launching:
    The DPDK and its application are linked as a single application and must be loaded by some means.

*   Core Affinity/Assignment Procedures:
    The EAL provides mechanisms for assigning execution units to specific cores as well as creating execution instances.

*   System Memory Reservation:
    The EAL facilitates the reservation of different memory zones, for example, physical memory areas for device interactions.

*   Trace and Debug Functions: Logs, dump_stack, panic and so on.

*   Utility Functions: Spinlocks and atomic counters that are not provided in libc.

*   CPU Feature Identification: Determine at runtime if a particular feature, for example, Intel® AVX, is supported.
    Determine if the current CPU supports the feature set that the binary was compiled for.

*   Interrupt Handling: Interfaces to register/unregister callbacks to specific interrupt sources.

*   Alarm Functions: Interfaces to set/remove callbacks to be run at a specific time.

EAL in a Linux-userland Execution Environment
---------------------------------------------

In a Linux user space environment, the DPDK application runs as a user-space application using the pthread library.

The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance).
This memory is exposed to DPDK service layers such as the :ref:`Mempool Library <Mempool_Library>`.

At this point, the DPDK services layer will be initialized, then through pthread setaffinity calls,
each execution unit will be assigned to a specific logical core to run as a user-level thread.

The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel API through a mmap() call.

Initialization and Core Launching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Part of the initialization is done by the start function of glibc.
A check is also performed at initialization time to ensure that the micro architecture type chosen in the config file is supported by the CPU.
Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation).
It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).
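
The following is a minimal sketch of this pattern; the worker function and its
output are illustrative, while the EAL calls themselves are real API:

.. code-block:: c

    #include <stdio.h>

    #include <rte_eal.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    /* Function run by each lcore; must match lcore_function_t. */
    static int
    lcore_worker(void *arg)
    {
        (void)arg;
        printf("Hello from lcore %u\n", rte_lcore_id());
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        /* Parse EAL arguments, set up memory and create lcore threads. */
        if (rte_eal_init(argc, argv) < 0)
            rte_panic("Cannot init EAL\n");

        /* Run the worker on every lcore, including the master. */
        rte_eal_mp_remote_launch(lcore_worker, NULL, CALL_MASTER);
        rte_eal_mp_wait_lcore();

        /* Release EAL resources (see Shutdown and Cleanup below). */
        rte_eal_cleanup();
        return 0;
    }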

.. _figure_linuxapp_launch:

.. figure:: img/linuxapp_launch.*

   EAL Initialization in a Linux Application Environment


.. note::

    Initialization of objects, such as memory zones, rings, memory pools, lpm tables and hash tables,
    should be done as part of the overall application initialization on the master lcore.
    The creation and initialization functions for these objects are not multi-thread safe.
    However, once initialized, the objects themselves can safely be used in multiple threads simultaneously.

Shutdown and Cleanup
~~~~~~~~~~~~~~~~~~~~

During the initialization of the EAL, resources such as hugepage-backed memory can be
allocated by core components. The memory allocated during ``rte_eal_init()``
can be released by calling the ``rte_eal_cleanup()`` function. Refer to the
API documentation for details.

Multi-process Support
~~~~~~~~~~~~~~~~~~~~~

The Linuxapp EAL allows a multi-process as well as a multi-threaded (pthread) deployment model.
See chapter
:ref:`Multi-process Support <Multi-process_Support>` for more details.

Memory Mapping Discovery and Memory Reservation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The allocation of large contiguous physical memory is done using the hugetlbfs kernel filesystem.
The EAL provides an API to reserve named memory zones in this contiguous memory.
The physical address of the reserved memory for that memory zone is also returned to the user by the memory zone reservation API.

There are two modes in which the DPDK memory subsystem can operate: dynamic mode,
and legacy mode. Both modes are explained below.

.. note::

    Memory reservations done using the APIs provided by rte_malloc are also backed by pages from the hugetlbfs filesystem.

+ Dynamic memory mode

Currently, this mode is only supported on Linux.

In this mode, usage of hugepages by the DPDK application will grow and shrink based
on the application's requests. Any memory allocation through ``rte_malloc()``,
``rte_memzone_reserve()`` or other methods can potentially result in more
hugepages being reserved from the system. Similarly, any memory deallocation can
potentially result in hugepages being released back to the system.

Memory allocated in this mode is not guaranteed to be IOVA-contiguous. If large
chunks of IOVA-contiguous memory are required (with "large" defined as "more
than one page"), it is recommended to either use the VFIO driver for all
physical devices (so that IOVA and VA addresses can be the same, thereby
bypassing physical addresses entirely), or use legacy memory mode.

For chunks of memory which must be IOVA-contiguous, it is recommended to use the
``rte_memzone_reserve()`` function with the ``RTE_MEMZONE_IOVA_CONTIG`` flag
specified. This way, the memory allocator will ensure that, whatever memory mode
is in use, either the reserved memory will satisfy the requirements, or the
allocation will fail.
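
For example, a physically contiguous DMA descriptor area could be reserved as
in the following sketch (the zone name and size are illustrative):

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_memzone.h>

    static const struct rte_memzone *
    reserve_dma_area(void)
    {
        /* Reserve 2 MB of IOVA-contiguous memory on the caller's socket.
         * Returns NULL if no such memory could be reserved. */
        return rte_memzone_reserve("dma_descs", 2 * 1024 * 1024,
                                   rte_socket_id(), RTE_MEMZONE_IOVA_CONTIG);
    }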

There is no need to preallocate any memory at startup using the ``-m`` or
``--socket-mem`` command-line parameters, however it is still possible to do so,
in which case the preallocated memory will be "pinned" (i.e. will never be
released by the application back to the system). It will be possible to allocate
more hugepages, and deallocate those, but any preallocated pages will not be
freed. If neither ``-m`` nor ``--socket-mem`` were specified, no memory will be
preallocated, and all memory will be allocated at runtime, as needed.

Another option available in dynamic memory mode is the
``--single-file-segments`` command-line option. This option will put pages in
single files (per memseg list), as opposed to creating a file per page. This is
normally not needed, but can be useful for use cases like userspace vhost, where
there is a limited number of page file descriptors that can be passed to VirtIO.

If the application (or DPDK-internal code, such as device drivers) wishes to
receive notifications about newly allocated memory, it is possible to register
for memory event callbacks via the ``rte_mem_event_callback_register()`` function.
This will call a callback function any time DPDK's memory map has changed.
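
A hedged sketch of such a callback follows; the callback name and messages are
illustrative, and registration must happen after ``rte_eal_init()``:

.. code-block:: c

    #include <stdio.h>

    #include <rte_memory.h>

    /* Called by EAL whenever DPDK's memory map changes. */
    static void
    mem_event_cb(enum rte_mem_event event_type, const void *addr,
                 size_t len, void *arg)
    {
        (void)arg;
        if (event_type == RTE_MEM_EVENT_ALLOC)
            printf("mapped %zu bytes at %p\n", len, addr);
        else /* RTE_MEM_EVENT_FREE */
            printf("unmapping %zu bytes at %p\n", len, addr);
    }

    static void
    register_mem_events(void)
    {
        /* The name only identifies the callback for later unregistration. */
        rte_mem_event_callback_register("my-app", mem_event_cb, NULL);
    }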

If the application (or DPDK-internal code, such as device drivers) wishes to be
notified about memory allocations above a specified threshold (and have a chance
to deny them), allocation validator callbacks are also available via the
``rte_mem_alloc_validator_callback_register()`` function.

A default validator callback is provided by EAL, which can be enabled with the
``--socket-limit`` command-line option, for a simple way to limit the maximum
amount of memory that can be used by a DPDK application.

+ Legacy memory mode

This mode is enabled by specifying the ``--legacy-mem`` command-line switch to
the EAL. This switch will have no effect on FreeBSD, as FreeBSD only supports
legacy mode anyway.

This mode mimics the historical behavior of EAL. That is, EAL will reserve all
memory at startup, sort all memory into large IOVA-contiguous chunks, and will
not allow acquiring or releasing hugepages from the system at runtime.

If neither ``-m`` nor ``--socket-mem`` were specified, the entire available
hugepage memory will be preallocated.

+ Hugepage allocation matching

This behavior is enabled by specifying the ``--match-allocations`` command-line
switch to the EAL. This switch is Linux-only and not supported with
``--legacy-mem`` or ``--no-huge``.

Some applications using memory event callbacks may require that hugepages be
freed exactly as they were allocated. These applications may also require
that any allocation from the malloc heap not span across allocations
associated with two different memory event callbacks. Hugepage allocation
matching can be used by these types of applications to satisfy both of these
requirements. This can result in some increased memory usage which is
very dependent on the memory allocation patterns of the application.

+ 32-bit support

Additional restrictions are present when running in 32-bit mode. In dynamic
memory mode, by default a maximum of 2 gigabytes of VA space will be
preallocated, and all of it will be on the master lcore's NUMA node unless the
``--socket-mem`` flag is used.

In legacy mode, VA space will only be preallocated for segments that were
requested (plus padding, to keep IOVA-contiguousness).

+ Maximum amount of memory

All possible virtual memory space that can ever be used for hugepage mapping in
a DPDK process is preallocated at startup, thereby placing an upper limit on how
much memory a DPDK application can have. DPDK memory is stored in segment lists,
where each segment is strictly one physical page. It is possible to change the
amount of virtual memory being preallocated at startup by editing the following
config variables:

* ``CONFIG_RTE_MAX_MEMSEG_LISTS`` controls how many segment lists DPDK can have
* ``CONFIG_RTE_MAX_MEM_MB_PER_LIST`` controls how many megabytes of memory each
  segment list can address
* ``CONFIG_RTE_MAX_MEMSEG_PER_LIST`` controls how many segments each segment
  list can have
* ``CONFIG_RTE_MAX_MEMSEG_PER_TYPE`` controls how many segments each memory type
  can have (where "type" is defined as a "page size + NUMA node" combination)
* ``CONFIG_RTE_MAX_MEM_MB_PER_TYPE`` controls how many megabytes of memory each
  memory type can address
* ``CONFIG_RTE_MAX_MEM_MB`` places a global maximum on the amount of memory
  DPDK can reserve

Normally, these options do not need to be changed.

.. note::

    Preallocated virtual memory is not to be confused with preallocated hugepage
    memory! All DPDK processes preallocate virtual memory at startup. Hugepages
    can later be mapped into that preallocated VA space (if dynamic memory mode
    is enabled), and can optionally be mapped into it at startup.

Support for Externally Allocated Memory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to use externally allocated memory in DPDK. There are two ways in
which using externally allocated memory can work: the malloc heap APIs, and
manual memory management.

+ Using heap APIs for externally allocated memory

Using a set of malloc heap APIs is the recommended way to use externally
allocated memory in DPDK. In this way, support for externally allocated memory
is implemented through overloading the socket ID - externally allocated heaps
will have socket IDs that would be considered invalid under normal
circumstances. Requesting an allocation to take place from a specified
externally allocated heap is a matter of supplying the correct socket ID to the
DPDK allocator, either directly (e.g. through a call to ``rte_malloc``) or
indirectly (through data structure-specific allocation APIs such as
``rte_ring_create``). Using these APIs also ensures that mapping of externally
allocated memory for DMA is performed on any memory segment that is added
to a DPDK malloc heap.

Since there is no way DPDK can verify whether memory is available or valid, this
responsibility falls on the shoulders of the user. All multiprocess
synchronization is also the user's responsibility, as well as ensuring that all
calls to add/attach/detach/remove memory are done in the correct order. It is
not required to attach to a memory area in all processes - only attach to memory
areas as needed.

The expected workflow is as follows (a sketch is shown after the list):

* Get a pointer to memory area
* Create a named heap
* Add memory area(s) to the heap
    - If IOVA table is not specified, IOVA addresses will be assumed to be
      unavailable, and DMA mappings will not be performed
    - Other processes must attach to the memory area before they can use it
* Get socket ID used for the heap
* Use normal DPDK allocation procedures, using supplied socket ID
* If memory area is no longer needed, it can be removed from the heap
    - Other processes must detach from this memory area before it can be removed
* If heap is no longer needed, remove it
    - Socket ID will become invalid and will not be reused
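
A hedged sketch of this workflow, assuming the externally allocated area and
its page size are obtained by the caller through other means (e.g. ``mmap()``);
the heap name and allocation size are illustrative:

.. code-block:: c

    #include <stddef.h>

    #include <rte_malloc.h>

    static void *
    alloc_from_external(void *ext_mem, size_t ext_mem_sz, size_t ext_pg_sz)
    {
        int socket_id;

        if (rte_malloc_heap_create("ext_heap") != 0)
            return NULL;

        /* NULL IOVA table: IOVA addresses are treated as unavailable,
         * so no DMA mapping is performed. */
        if (rte_malloc_heap_memory_add("ext_heap", ext_mem, ext_mem_sz,
                                       NULL, 0, ext_pg_sz) != 0)
            return NULL;

        /* Allocate from the external heap through its socket ID. */
        socket_id = rte_malloc_heap_get_socket("ext_heap");
        return rte_malloc_socket(NULL, 4096, 0, socket_id);
    }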

For more information, please refer to ``rte_malloc`` API documentation,
specifically the ``rte_malloc_heap_*`` family of function calls.

+ Using externally allocated memory without DPDK APIs

While using heap APIs is the recommended method of using externally allocated
memory in DPDK, there are certain use cases where the overhead of the DPDK heap
APIs is undesirable - for example, when manual memory management is performed on
an externally allocated area. To support use cases where externally allocated
memory will not be used as part of normal DPDK workflow, there is also another
set of APIs under the ``rte_extmem_*`` namespace.

These APIs are (as their name implies) intended to allow registering or
unregistering externally allocated memory to/from DPDK's internal page table, to
allow APIs like ``rte_virt2memseg`` etc. to work with externally allocated
memory. Memory added this way will not be available for any regular DPDK
allocators; DPDK will leave this memory for the user application to manage.

The expected workflow is as follows (a sketch is shown after the list):

* Get a pointer to memory area
* Register memory within DPDK
    - If IOVA table is not specified, IOVA addresses will be assumed to be
      unavailable
    - Other processes must attach to the memory area before they can use it
* Perform DMA mapping with ``rte_vfio_dma_map`` if needed
* Use the memory area in your application
* If memory area is no longer needed, it can be unregistered
    - If the area was mapped for DMA, unmapping must be performed before
      unregistering memory
    - Other processes must detach from the memory area before it can be
      unregistered
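
A hedged sketch of registering and DMA-mapping such an area, assuming VFIO is
in use and the VA address doubles as the IOVA (the parameters describe a
caller-provided area, as in the previous example):

.. code-block:: c

    #include <stdint.h>
    #include <stddef.h>

    #include <rte_memory.h>
    #include <rte_vfio.h>

    static int
    register_external(void *ext_mem, size_t ext_mem_sz, size_t ext_pg_sz)
    {
        uint64_t va = (uint64_t)(uintptr_t)ext_mem;

        /* Add the area to DPDK's internal page table; a NULL IOVA table
         * means IOVA addresses are assumed to be unavailable. */
        if (rte_extmem_register(ext_mem, ext_mem_sz, NULL, 0, ext_pg_sz) != 0)
            return -1;

        /* Map for DMA if a device will access this memory (IOVA == VA). */
        return rte_vfio_dma_map(va, va, ext_mem_sz);
    }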

Since these externally allocated memory areas will not be managed by DPDK, it is
up to the user application to decide how to use them and what to do with them
once they are registered.

Per-lcore and Shared Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

    lcore refers to a logical execution unit of the processor, sometimes called a hardware *thread*.

Shared variables are the default behavior.
Per-lcore variables are implemented using *Thread Local Storage* (TLS) to provide per-thread local storage.
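
For illustration, a per-lcore variable can be defined and accessed with the
``RTE_DEFINE_PER_LCORE`` and ``RTE_PER_LCORE`` macros (the counter below is a
hypothetical example):

.. code-block:: c

    #include <stdint.h>

    #include <rte_per_lcore.h>

    /* Each lcore (EAL thread) gets its own copy of this counter. */
    static RTE_DEFINE_PER_LCORE(uint64_t, rx_packets);

    static inline void
    count_rx_packet(void)
    {
        /* Touches only the calling lcore's copy: no locking needed. */
        RTE_PER_LCORE(rx_packets)++;
    }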

Logs
~~~~

A logging API is provided by EAL.
By default, in a Linux application, logs are sent to syslog and also to the console.
However, the log function can be overridden by the user to use a different logging mechanism.
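
As a brief illustration (the log type name and output file are hypothetical),
an application can register its own log type and redirect DPDK logging to a
stream of its choosing:

.. code-block:: c

    #include <stdio.h>

    #include <rte_log.h>

    static void
    setup_logging(void)
    {
        /* Register an application-specific log type. */
        int my_logtype = rte_log_register("myapp");

        if (my_logtype >= 0) {
            rte_log_set_level(my_logtype, RTE_LOG_DEBUG);
            rte_log(RTE_LOG_INFO, my_logtype, "logging initialized\n");
        }

        /* Override the default syslog/console output with a custom stream. */
        FILE *f = fopen("/tmp/dpdk.log", "w");
        if (f != NULL)
            rte_openlog_stream(f);
    }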

Trace and Debug Functions
^^^^^^^^^^^^^^^^^^^^^^^^^

There are some debug functions to dump the stack in glibc.
The rte_panic() function can voluntarily provoke a SIGABRT,
which can trigger the generation of a core file, readable by gdb.

CPU Feature Identification
~~~~~~~~~~~~~~~~~~~~~~~~~~

The EAL can query the CPU at runtime (using the rte_cpu_get_features() function) to determine which CPU features are available.
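
Individual feature flags can also be tested with ``rte_cpu_get_flag_enabled()``.
A short sketch (the two code-path functions are hypothetical, and
``RTE_CPUFLAG_AVX2`` is only defined on x86 builds):

.. code-block:: c

    #include <rte_cpuflags.h>

    void use_avx2_path(void);   /* hypothetical */
    void use_scalar_path(void); /* hypothetical */

    static void
    select_code_path(void)
    {
        /* Take the vectorized path only if the CPU supports AVX2. */
        if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2))
            use_avx2_path();
        else
            use_scalar_path();
    }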

User Space Interrupt Event
~~~~~~~~~~~~~~~~~~~~~~~~~~

+ User Space Interrupt and Alarm Handling in Host Thread

The EAL creates a host thread to poll the UIO device file descriptors to detect the interrupts.
Callbacks can be registered or unregistered by the EAL functions for a specific interrupt event
and are called in the host thread asynchronously.
The EAL also allows timed callbacks to be used in the same way as for NIC interrupts.

.. note::

    In DPDK PMD, the only interrupts handled by the dedicated host thread are those for link status change
    (link up and link down notification) and for sudden device removal.


+ RX Interrupt Event

The receive and transmit routines provided by each PMD are not limited to executing in a polling thread.
To avoid wasting CPU cycles during periods of low throughput, it is useful to pause polling and wait until a wake-up event occurs.
The RX interrupt is the first choice for such a wake-up event, but it probably won't be the only one.

EAL provides the event APIs for this event-driven thread mode.
Taking linuxapp as an example, the implementation relies on epoll. Each thread can monitor an epoll instance
in which all the wake-up events' file descriptors are added. The event file descriptors are created and mapped to
the interrupt vectors according to the UIO/VFIO spec.
From bsdapp's perspective, kqueue is the alternative way, but it is not implemented yet.

EAL initializes the mapping between event file descriptors and interrupt vectors, while each device initializes the mapping
between interrupt vectors and queues. In this way, the EAL is actually unaware of the interrupt cause on the specific vector.
The eth_dev driver takes responsibility for programming the latter mapping.

.. note::

    Per-queue RX interrupt events are only allowed in VFIO, which supports multiple MSI-X vectors. In UIO, the RX interrupt
    shares the same vector with other interrupt causes. In this case, when RX interrupts and LSC (link status change)
    interrupts are both enabled (intr_conf.lsc == 1 && intr_conf.rxq == 1), only the former (the RX interrupt) is supported.

RX interrupts are controlled/enabled/disabled by the ethdev APIs - ``rte_eth_dev_rx_intr_*``. They return failure if the PMD
does not support them yet. The intr_conf.rxq flag is used to turn on the per-device RX interrupt capability.
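
A hedged sketch of this pattern (``port_id`` and ``queue_id`` are assumed to be
set up elsewhere; error handling is omitted):

.. code-block:: c

    #include <rte_ethdev.h>

    static void
    configure_rx_interrupt(uint16_t port_id)
    {
        struct rte_eth_conf port_conf = { 0 };

        /* Turn on per-device RX interrupt capability. */
        port_conf.intr_conf.rxq = 1;
        rte_eth_dev_configure(port_id, 1, 1, &port_conf);
    }

    static void
    sleep_until_rx(uint16_t port_id, uint16_t queue_id)
    {
        /* Arm the interrupt before going to sleep... */
        rte_eth_dev_rx_intr_enable(port_id, queue_id);
        /* ...block here on the event fd (e.g. via rte_epoll_wait())... */
        rte_eth_dev_rx_intr_disable(port_id, queue_id);
        /* ...then resume polling with rte_eth_rx_burst(). */
    }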

+ Device Removal Event

This event is triggered by a device being removed at a bus level. Its
underlying resources may have been made unavailable (i.e. PCI mappings
unmapped). The PMD must make sure that on such an occurrence, the application can
still safely use its callbacks.

This event can be subscribed to in the same way one would subscribe to a link
status change event. The execution context is thus the same, i.e. it is the
dedicated interrupt host thread.

Considering this, it is likely that an application would want to close a
device having emitted a Device Removal Event. In such a case, calling
``rte_eth_dev_close()`` can trigger it to unregister its own Device Removal Event
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such a closing operation.
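
A hedged sketch of such a callback (the ``removal_requested`` flag and the
callback name are illustrative; the actual close must happen outside the
interrupt thread):

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_common.h>

    static volatile int removal_requested; /* polled by the main loop */

    static int
    rmv_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                 void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(port_id);
        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);

        /* Do NOT call rte_eth_dev_close() here - this runs in the
         * interrupt host thread. Ask the main loop to do it instead. */
        removal_requested = 1;
        return 0;
    }

The callback would then be registered once the port is set up, e.g. with
``rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RMV, rmv_event_cb, NULL)``.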

Blacklisting
~~~~~~~~~~~~

The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
so they are ignored by the DPDK.
The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).
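
For example, to make the EAL ignore two ports, the ``-b`` EAL option can be
given once per device (the application name and PCI addresses are illustrative):

.. code-block:: console

   ./dpdk-application -b 0000:01:00.0 -b 0000:01:00.1 -- <application args>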

Misc Functions
~~~~~~~~~~~~~~

Locks and atomic operations are per-architecture (i686 and x86_64).

IOVA Mode Configuration
~~~~~~~~~~~~~~~~~~~~~~~

Auto detection of the IOVA mode, based on probing the bus and IOMMU configuration, may not report
the desired addressing mode when virtual devices that are not directly attached to the bus are present.
To facilitate forcing the IOVA mode to a specific value, the EAL command line option ``--iova-mode`` can
be used to select either physical addressing ('pa') or virtual addressing ('va').

Memory Segments and Memory Zones (memzone)
------------------------------------------

The mapping of physical memory is provided by this feature in the EAL.
As physical memory can have gaps, the memory is described in a table of descriptors,
and each descriptor (called ``rte_memseg``) describes a physical page.

On top of this, the memzone allocator's role is to reserve contiguous portions of physical memory.
These zones are identified by a unique name when the memory is reserved.

The rte_memzone descriptors are also located in the configuration structure.
This structure is accessed using rte_eal_get_configuration().
The lookup (by name) of a memory zone returns a descriptor containing the physical address of the memory zone.

Memory zones can be reserved with specific start address alignment by supplying the align parameter
(by default, they are aligned to cache line size).
The alignment value should be a power of two and not less than the cache line size (64 bytes).
Memory zones can also be reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.

Both memsegs and memzones are stored using ``rte_fbarray`` structures. Please
refer to *DPDK API Reference* for more information.
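
A brief example of looking a zone up by name and reserving it with a specific
alignment if it does not exist yet (the zone name, size and alignment are
illustrative):

.. code-block:: c

    #include <rte_memory.h>
    #include <rte_memzone.h>

    static const struct rte_memzone *
    get_state_zone(void)
    {
        const struct rte_memzone *mz;

        /* Try to find an existing zone first (e.g. in a secondary process). */
        mz = rte_memzone_lookup("state_zone");
        if (mz != NULL)
            return mz;

        /* Reserve 1 MB, aligned to 4 KB, on any NUMA socket. */
        return rte_memzone_reserve_aligned("state_zone", 1 << 20,
                                           SOCKET_ID_ANY, 0, 4096);
    }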


Multiple pthread
----------------

DPDK usually pins one pthread per core to avoid the overhead of task switching.
This allows for significant performance gains, but lacks flexibility and is not always efficient.

Power management helps to improve the CPU efficiency by limiting the CPU runtime frequency.
However, alternatively it is possible to utilize the idle cycles available to take advantage of
the full capability of the CPU.

By taking advantage of cgroups, the CPU utilization quota can be simply assigned.
This gives another way to improve the CPU efficiency, however, there is a prerequisite;
DPDK must handle the context switching between multiple pthreads per core.

For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.

EAL pthread and lcore Affinity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The term "lcore" refers to an EAL thread, which is really a Linux/FreeBSD pthread.
"EAL pthreads" are created and managed by EAL and execute the tasks issued by *remote_launch*.
In each EAL pthread, there is a TLS (Thread Local Storage) called *_lcore_id* for unique identification.
As EAL pthreads usually bind 1:1 to the physical CPU, the *_lcore_id* is typically equal to the CPU ID.

When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL pthread and a specified physical CPU.
The EAL pthread may have affinity to a CPU set, and as such the *_lcore_id* will not be the same as the CPU ID.
For this reason, there is an EAL long option '--lcores' defined to assign the CPU affinity of lcores.
For a specified lcore ID or ID group, the option allows setting the CPU set for that EAL pthread.

The format pattern:
	--lcores='<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]'

'lcore_set' and 'cpu_set' can be a single number, range or a group.

A number is a "digit([0-9]+)"; a range is "<number>-<number>"; a group is "(<number|range>[,<number|range>,...])".

If a '\@cpu_set' value is not supplied, the value of 'cpu_set' will default to the value of 'lcore_set'.

    ::

        For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'" starts 9 EAL threads:
            lcore 0 runs on cpuset 0x41 (cpu 0,6);
            lcore 1 runs on cpuset 0x2 (cpu 1);
            lcore 2 runs on cpuset 0xe0 (cpu 5,6,7);
            lcores 3, 4 and 5 run on cpuset 0x5 (cpu 0,2);
            lcore 6 runs on cpuset 0x41 (cpu 0,6);
            lcore 7 runs on cpuset 0x80 (cpu 7);
            lcore 8 runs on cpuset 0x100 (cpu 8).

Using this option, for each given lcore ID, the associated CPUs can be assigned.
It's also compatible with the pattern of the corelist ('-l') option.

non-EAL pthread support
~~~~~~~~~~~~~~~~~~~~~~~

It is possible to use the DPDK execution context with any user pthread (a.k.a. non-EAL pthreads).
In a non-EAL pthread, the *_lcore_id* is always LCORE_ID_ANY, which identifies that it is not an EAL thread with a valid, unique *_lcore_id*.
Some libraries will use an alternative unique ID (e.g. TID), some will not be impacted at all, and some will work but with limitations (e.g. timer and mempool libraries).

All these impacts are mentioned in the :ref:`known_issue_label` section.

Public Thread API
~~~~~~~~~~~~~~~~~

There are two public APIs, ``rte_thread_set_affinity()`` and ``rte_thread_get_affinity()``, introduced for threads.
When they're used in any pthread context, the Thread Local Storage (TLS) values are set/retrieved.

Those TLS variables include *_cpuset* and *_socket_id*:

*   *_cpuset* stores the CPUs bitmap to which the pthread is affinitized.

*   *_socket_id* stores the NUMA node of the CPU set. If the CPUs in the CPU set belong to different NUMA nodes, the *_socket_id* will be set to SOCKET_ID_ANY.
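
For example, a non-EAL pthread can pin itself to CPUs 2 and 3 (illustrative CPU
numbers) and update these TLS values in one call:

.. code-block:: c

    #include <rte_lcore.h>

    static int
    pin_self_to_cpus_2_and_3(void)
    {
        rte_cpuset_t cpuset;

        CPU_ZERO(&cpuset);
        CPU_SET(2, &cpuset);
        CPU_SET(3, &cpuset);

        /* Sets the pthread's affinity and the _cpuset/_socket_id TLS. */
        return rte_thread_set_affinity(&cpuset);
    }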


.. _known_issue_label:

Known Issues
~~~~~~~~~~~~

+ rte_mempool

  The rte_mempool uses a per-lcore cache inside the mempool.
  For non-EAL pthreads, ``rte_lcore_id()`` will not return a valid number.
  So for now, when rte_mempool is used with non-EAL pthreads, the put/get operations will bypass the default mempool cache and there is a performance penalty because of this bypass.
  Only user-owned external caches can be used in a non-EAL context, in conjunction with ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()``, which accept an explicit cache parameter.
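
  A brief sketch of using such an external cache from a non-EAL pthread
  (``mp`` is a pre-existing mempool; in practice the cache would be created
  once per thread, not per call):

  .. code-block:: c

      #include <rte_mempool.h>

      static int
      get_and_put_one(struct rte_mempool *mp)
      {
          struct rte_mempool_cache *cache;
          void *obj;

          /* User-owned cache, usable from a non-EAL pthread. */
          cache = rte_mempool_cache_create(32, SOCKET_ID_ANY);
          if (cache == NULL)
              return -1;

          if (rte_mempool_generic_get(mp, &obj, 1, cache) != 0)
              return -1;
          /* ... use obj ... */
          rte_mempool_generic_put(mp, &obj, 1, cache);
          return 0;
      }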

+ rte_ring

  rte_ring supports multi-producer enqueue and multi-consumer dequeue.
  However, it is non-preemptive; this has a knock-on effect of making rte_mempool non-preemptible.

  .. note::

    The "non-preemptive" constraint means:

    - a pthread doing multi-producer enqueues on a given ring must not
      be preempted by another pthread doing a multi-producer enqueue on
      the same ring.
    - a pthread doing multi-consumer dequeues on a given ring must not
      be preempted by another pthread doing a multi-consumer dequeue on
      the same ring.

    Bypassing this constraint may cause the 2nd pthread to spin until the 1st one is scheduled again.
    Moreover, if the 1st pthread is preempted by a context that has a higher priority, it may even cause a deadlock.

  This means use cases involving preemptible pthreads should consider using rte_ring carefully.

  1. It CAN be used for preemptible single-producer and single-consumer use cases.

  2. It CAN be used for non-preemptible multi-producer and preemptible single-consumer use cases.

  3. It CAN be used for preemptible single-producer and non-preemptible multi-consumer use cases.

  4. It MAY be used by preemptible multi-producer and/or preemptible multi-consumer pthreads whose scheduling policies are all SCHED_OTHER(cfs), SCHED_IDLE or SCHED_BATCH. The user SHOULD be aware of the performance penalty before using it.

  5. It MUST not be used by multi-producer/consumer pthreads whose scheduling policies are SCHED_FIFO or SCHED_RR.

+ rte_timer

  Running ``rte_timer_manage()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.

+ rte_log

  In non-EAL pthreads, there is no per-thread loglevel or logtype; global loglevels are used.

+ misc

  The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in a non-EAL pthread.

cgroup control
~~~~~~~~~~~~~~

The following is a simple example of cgroup control usage: there are two pthreads (t0 and t1) doing packet I/O on the same core ($cpu).
We expect only 50% of CPU time to be spent on packet I/O.

  .. code-block:: console

    mkdir /sys/fs/cgroup/cpu/pkt_io
    mkdir /sys/fs/cgroup/cpuset/pkt_io

    echo $cpu > /sys/fs/cgroup/cpuset/pkt_io/cpuset.cpus

    echo $t0 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t0 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    echo $t1 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t1 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    cd /sys/fs/cgroup/cpu/pkt_io
    echo 100000 > cpu.cfs_period_us
    echo  50000 > cpu.cfs_quota_us


Malloc
------

The EAL provides a malloc API to allocate any-sized memory.

The objective of this API is to provide malloc-like functions to allow
allocation from hugepage memory and to facilitate application porting.
The *DPDK API Reference* manual describes the available functions.

Typically, these kinds of allocations should not be done in data plane
processing because they are slower than pool-based allocation and make
use of locks within the allocation and free paths.
However, they can be used in configuration code.

Refer to the rte_malloc() function description in the *DPDK API Reference*
manual for more information.

Cookies
~~~~~~~

When CONFIG_RTE_MALLOC_DEBUG is enabled, the allocated memory contains
overwrite protection fields to help identify buffer overflows.

Alignment and NUMA Constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The rte_malloc() takes an align argument that can be used to request a memory
area that is aligned on a multiple of this value (which must be a power of two).

On systems with NUMA support, a call to the rte_malloc() function will return
memory that has been allocated on the NUMA socket of the core which made the call.
A set of APIs is also provided, to allow memory to be explicitly allocated on a
NUMA socket directly, or allocated on the NUMA socket where another core is
located, in the case where the memory is to be used by a logical core other than
the one doing the memory allocation.
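
For illustration (the sizes, alignment, socket number and tag string below are
arbitrary):

.. code-block:: c

    #include <stdint.h>

    #include <rte_malloc.h>

    static void
    malloc_demo(void)
    {
        /* 1 KB, aligned to 64 bytes, on the caller's NUMA socket. */
        uint32_t *local_tbl = rte_malloc("example", 1024, 64);

        /* The same, but explicitly on NUMA socket 1 for a remote lcore. */
        uint32_t *remote_tbl = rte_malloc_socket("example", 1024, 64, 1);

        rte_free(local_tbl);
        rte_free(remote_tbl);
    }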

Use Cases
~~~~~~~~~

This API is meant to be used by an application that requires malloc-like
functions at initialization time.

For allocating/freeing data at runtime, in the fast-path of an application,
the memory pool library should be used instead.

Internal Implementation
~~~~~~~~~~~~~~~~~~~~~~~

Data Structures
^^^^^^^^^^^^^^^

There are two data structure types used internally in the malloc library:

*   struct malloc_heap - used to track free space on a per-socket basis

*   struct malloc_elem - the basic element of allocation and free-space
    tracking inside the library.

Structure: malloc_heap
""""""""""""""""""""""

The malloc_heap structure is used to manage free space on a per-socket basis.
Internally, there is one heap structure per NUMA node, which allows us to
allocate memory to a thread based on the NUMA node on which this thread runs.
While this does not guarantee that the memory will be used on that NUMA node,
it is no worse than a scheme where the memory is always allocated on a fixed
or random node.

The key fields of the heap structure and their function are described below
(see also the figure that follows):

*   lock - the lock field is needed to synchronize access to the heap.
    Given that the free space in the heap is tracked using a linked list,
    we need a lock to prevent two threads manipulating the list at the same time.

*   free_head - this points to the first element in the list of free nodes for
    this malloc heap.

*   first - this points to the first element in the heap.

*   last - this points to the last element in the heap.

.. _figure_malloc_heap:

.. figure:: img/malloc_heap.*

   Example of a malloc heap and malloc elements within the malloc library


.. _malloc_elem:

Structure: malloc_elem
""""""""""""""""""""""

The malloc_elem structure is used as a generic header structure for various
blocks of memory.
It is used in two different ways - both shown in the diagram above:

#.  As a header on a block of free or allocated memory - normal case

#.  As a padding header inside a block of memory

The most important fields in the structure and how they are used are described below.

The malloc heap is a doubly-linked list, where each element keeps track of its
previous and next elements. Due to the fact that hugepage memory can come and
go, neighbouring malloc elements may not necessarily be adjacent in memory.
Also, since a malloc element may span multiple pages, its contents may not
necessarily be IOVA-contiguous either - each malloc element is only guaranteed
to be virtually contiguous.

.. note::

    If the usage of a particular field in one of the above usages is not
    described, the field can be assumed to have an undefined value in that
    situation, for example, for padding headers only the "state" and "pad"
    fields have valid values.

*   heap - this pointer is a reference back to the heap structure from which
    this block was allocated.
    It is used for normal memory blocks when they are being freed, to add the
    newly-freed block to the heap's free-list.

*   prev - this pointer points to the previous header element/block in memory.
    When freeing a block, this pointer is used to reference the previous block
    to check if that block is also free. If so, and the two blocks are
    immediately adjacent to each other, then the two free blocks are merged to
    form a single larger block.

*   next - this pointer points to the next header element/block in memory. When
    freeing a block, this pointer is used to reference the next block to check
    if that block is also free. If so, and the two blocks are immediately
    adjacent to each other, then the two free blocks are merged to form a single
    larger block.

*   free_list - this is a structure pointing to previous and next elements in
    this heap's free list.
    It is only used in normal memory blocks; on ``malloc()`` to find a suitable
    free block to allocate and on ``free()`` to add the newly freed element to
    the free-list.

*   state - This field can have one of three values: ``FREE``, ``BUSY`` or
    ``PAD``.
    The former two are to indicate the allocation state of a normal memory block
    and the latter is to indicate that the element structure is a dummy structure
    at the end of the start-of-block padding, i.e. where the start of the data
    within a block is not at the start of the block itself, due to alignment
    constraints.
    In that case, the pad header is used to locate the actual malloc element
    header for the block.

*   pad - this holds the length of the padding present at the start of the block.
    In the case of a normal block header, it is added to the address of the end
    of the header to give the address of the start of the data area, i.e. the
    value passed back to the application on a malloc.
    Within a dummy header inside the padding, this same value is stored, and is
    subtracted from the address of the dummy header to yield the address of the
    actual block header.

*   size - the size of the data block, including the header itself.

Memory Allocation
^^^^^^^^^^^^^^^^^

On EAL initialization, all preallocated memory segments are set up as part of the
malloc heap. This setup involves placing an :ref:`element header<malloc_elem>`
with ``FREE`` at the start of each virtually contiguous segment of memory.
The ``FREE`` element is then added to the ``free_list`` for the malloc heap.

This setup also happens whenever memory is allocated at runtime (if supported),
in which case newly allocated pages are also added to the heap, merging with any
adjacent free segments if there are any.

When an application makes a call to a malloc-like function, the malloc function
will first index the ``lcore_config`` structure for the calling thread, and
determine the NUMA node of that thread.
The NUMA node is used to index the array of ``malloc_heap`` structures which is
passed as a parameter to the ``heap_alloc()`` function, along with the
requested size, type, alignment and boundary parameters.

The ``heap_alloc()`` function will scan the free_list of the heap, and attempt
to find a free block suitable for storing data of the requested size, with the
requested alignment and boundary constraints.

When a suitable free element has been identified, the pointer to be returned
to the user is calculated.
The cache-line of memory immediately preceding this pointer is filled with a
struct malloc_elem header.
Because of alignment and boundary constraints, there could be free space at
the start and/or end of the element, resulting in the following behavior:

#. Check for trailing space.
   If the trailing space is big enough, i.e. > 128 bytes, then the free element
   is split.
   If it is not, then we just ignore it (wasted space).

#. Check for space at the start of the element.
   If the space at the start is small, i.e. <= 128 bytes, then a pad header is
   used, and the remaining space is wasted.
   If, however, the remaining space is greater, then the free element is split.

The advantage of allocating the memory from the end of the existing element is
that no adjustment of the free list needs to take place - the existing element
on the free list just has its size value adjusted, and the next/previous elements
have their "prev"/"next" pointers redirected to the newly created element.

If there is not enough memory in the heap to satisfy an allocation request,
EAL will attempt to allocate more memory from the system (if supported)
and, following successful allocation, will retry reserving the memory again. In
a multiprocessing scenario, all primary and secondary processes will synchronize
their memory maps to ensure that any valid pointer to DPDK memory is guaranteed
to be valid at all times in all currently running processes.

Failure to synchronize memory maps in one of the processes will cause allocation
to fail, even though some of the processes may have allocated the memory
successfully. The memory is not added to the malloc heap unless the primary
process has ensured that all other processes have mapped this memory successfully.

Any successful allocation event will trigger a callback, for which user
applications and other DPDK subsystems can register. Additionally, validation
callbacks will be triggered before allocation if the newly allocated memory would
exceed the threshold set by the user, giving a chance to allow or deny the
allocation.

.. note::

    Any allocation of new pages has to go through the primary process. If the
    primary process is not active, no memory will be allocated, even if it was
    theoretically possible to do so. This is because the primary process's
    memory map acts as an authority on what should or should not be mapped,
    while each secondary process has its own, local memory map. Secondary
    processes do not update the shared memory map, they only copy its contents
    to their local memory map.

Freeing Memory
^^^^^^^^^^^^^^

To free an area of memory, the pointer to the start of the data area is passed
to the free function.
The size of the ``malloc_elem`` structure is subtracted from this pointer to get
the element header for the block.
If this header is of type ``PAD``, then the pad length is further subtracted from
the pointer to get the proper element header for the entire block.

From this element header, we get pointers to the heap from which the block was
allocated and to where it must be freed, as well as the pointer to the previous
and next elements. These next and previous elements are then checked to see if
they are also ``FREE`` and are immediately adjacent to the current one, and if
so, they are merged with the current element. This means that we can never have
two ``FREE`` memory blocks adjacent to one another, as they are always merged
into a single block.

If deallocating pages at runtime is supported, and the free element encloses
one or more pages, those pages can be deallocated and removed from the heap.
If DPDK was started with command-line parameters for preallocating memory
(``-m`` or ``--socket-mem``), then those pages that were allocated at startup
will not be deallocated.

Any successful deallocation event will trigger a callback, for which user
applications and other DPDK subsystems can register.
838