..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

.. _Environment_Abstraction_Layer:

Environment Abstraction Layer
=============================

The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level resources such as hardware and memory space.
It provides a generic interface that hides the environment specifics from the applications and libraries.
It is the responsibility of the initialization routine to decide how to allocate these resources
(that is, memory space, devices, timers, consoles, and so on).

Typical services expected from the EAL are:

*   DPDK Loading and Launching:
    The DPDK and its application are linked as a single application and must be loaded by some means.

*   Core Affinity/Assignment Procedures:
    The EAL provides mechanisms for assigning execution units to specific cores as well as creating execution instances.

*   System Memory Reservation:
    The EAL facilitates the reservation of different memory zones, for example, physical memory areas for device interactions.

*   Trace and Debug Functions: Logs, dump_stack, panic and so on.

*   Utility Functions: Spinlocks and atomic counters that are not provided in libc.

*   CPU Feature Identification: Determine at runtime if a particular feature, for example, Intel® AVX, is supported.
    Determine if the current CPU supports the feature set that the binary was compiled for.

*   Interrupt Handling: Interfaces to register/unregister callbacks to specific interrupt sources.

*   Alarm Functions: Interfaces to set/remove callbacks to be run at a specific time.

EAL in a Linux-userland Execution Environment
---------------------------------------------

In a Linux user space environment, the DPDK application runs as a user-space application using the pthread library.

The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance).
This memory is exposed to DPDK service layers such as the :ref:`Mempool Library <Mempool_Library>`.

At this point, the DPDK services layer will be initialized, then through pthread setaffinity calls,
each execution unit will be assigned to a specific logical core to run as a user-level thread.

The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel API through a mmap() call.

Initialization and Core Launching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Part of the initialization is done by the start function of glibc.
A check is also performed at initialization time to ensure that the microarchitecture type chosen in the config file is supported by the CPU.
Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation).
It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).

.. _figure_linux_launch:

.. figure:: img/linuxapp_launch.*

   EAL Initialization in a Linux Application Environment


.. note::

    Initialization of objects, such as memory zones, rings, memory pools, lpm tables and hash tables,
    should be done as part of the overall application initialization on the main lcore.
    The creation and initialization functions for these objects are not multi-thread safe.
    However, once initialized, the objects themselves can safely be used in multiple threads simultaneously.

Shutdown and Cleanup
~~~~~~~~~~~~~~~~~~~~

During the initialization of EAL, resources such as hugepage-backed memory can be
allocated by core components. The memory allocated during ``rte_eal_init()``
can be released by calling the ``rte_eal_cleanup()`` function. Refer to the
API documentation for details.

Multi-process Support
~~~~~~~~~~~~~~~~~~~~~

The Linux EAL allows a multi-process as well as a multi-threaded (pthread) deployment model.
See chapter
:ref:`Multi-process Support <Multi-process_Support>` for more details.

Memory Mapping Discovery and Memory Reservation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The allocation of large contiguous physical memory is done using hugepages.
The EAL provides an API to reserve named memory zones in this contiguous memory.
The physical address of the reserved memory for that memory zone is also returned to the user by the memory zone reservation API.

There are two modes in which the DPDK memory subsystem can operate: dynamic mode,
and legacy mode. Both modes are explained below.

.. note::

    Memory reservations done using the APIs provided by rte_malloc
    are also backed by hugepages unless the ``--no-huge`` option is given.

Dynamic Memory Mode
^^^^^^^^^^^^^^^^^^^

Currently, this mode is only supported on Linux and Windows.

In this mode, usage of hugepages by the DPDK application will grow and shrink based
on the application's requests. Any memory allocation through ``rte_malloc()``,
``rte_memzone_reserve()`` or other methods can potentially result in more
hugepages being reserved from the system. Similarly, any memory deallocation can
potentially result in hugepages being released back to the system.

Memory allocated in this mode is not guaranteed to be IOVA-contiguous. If large
chunks of IOVA-contiguous memory are required (with "large" defined as "more than one
page"), it is recommended to either use the VFIO driver for all physical devices (so
that IOVA and VA addresses can be the same, thereby bypassing physical addresses
entirely), or use legacy memory mode.

For chunks of memory which must be IOVA-contiguous, it is recommended to use the
``rte_memzone_reserve()`` function with the ``RTE_MEMZONE_IOVA_CONTIG`` flag
specified. This way, the memory allocator will ensure that, whatever memory mode is
in use, either the reserved memory will satisfy the requirements, or the allocation
will fail.
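
For illustration, a minimal sketch of such a reservation (the zone name and
size are ours, not part of the API; error handling is reduced to a NULL check)
could look like this:

.. code-block:: c

   #include <rte_memzone.h>
   #include <rte_lcore.h>

   /* Reserve 4 MB of IOVA-contiguous memory on the caller's NUMA node.
    * "demo_dma_zone" is an illustrative name, not a reserved identifier. */
   static const struct rte_memzone *
   reserve_dma_zone(void)
   {
       const struct rte_memzone *mz;

       mz = rte_memzone_reserve("demo_dma_zone", 4 * 1024 * 1024,
                                rte_socket_id(), RTE_MEMZONE_IOVA_CONTIG);
       if (mz == NULL)
           return NULL; /* no IOVA-contiguous run of this size available */
       /* mz->addr is the zone VA, mz->iova the IOVA of its start. */
       return mz;
   }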

There is no need to preallocate any memory at startup using the ``-m`` or
``--socket-mem`` command-line parameters. However, it is still possible to do so,
in which case the preallocated memory will be "pinned" (i.e. will never be released
by the application back to the system). It will be possible to allocate more
hugepages, and deallocate those, but any preallocated pages will not be freed.
If neither ``-m`` nor ``--socket-mem`` were specified, no memory will be
preallocated, and all memory will be allocated at runtime, as needed.

Another available option in dynamic memory mode is the
``--single-file-segments`` command-line option. This option will put pages in
single files (per memseg list), as opposed to creating a file per page. This is
normally not needed, but can be useful for use cases like userspace vhost, where
there is a limited number of page file descriptors that can be passed to VirtIO.

If the application (or DPDK-internal code, such as device drivers) wishes to
receive notifications about newly allocated memory, it is possible to register
for memory event callbacks via the ``rte_mem_event_callback_register()`` function.
This will call a callback function any time DPDK's memory map has changed.
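
As an illustrative sketch (the callback name and log messages are ours, not
part of the API), registering such a callback could look like:

.. code-block:: c

   #include <rte_common.h>
   #include <rte_log.h>
   #include <rte_memory.h>

   /* Invoked by EAL every time hugepages are mapped in or unmapped. */
   static void
   mem_event_cb(enum rte_mem_event event_type, const void *addr,
                size_t len, void *arg __rte_unused)
   {
       if (event_type == RTE_MEM_EVENT_ALLOC)
           RTE_LOG(INFO, USER1, "mapped %zu bytes at %p\n", len, addr);
       else /* RTE_MEM_EVENT_FREE */
           RTE_LOG(INFO, USER1, "unmapped %zu bytes at %p\n", len, addr);
   }

   /* Somewhere after rte_eal_init():
    * rte_mem_event_callback_register("demo-mem-event", mem_event_cb, NULL);
    */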

If the application (or DPDK-internal code, such as device drivers) wishes to be
notified about memory allocations above a specified threshold (and have a chance
to deny them), allocation validator callbacks are also available via the
``rte_mem_alloc_validator_callback_register()`` function.
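
A hedged sketch of such a validator (the policy shown is hypothetical; see the
API reference for the exact callback semantics) might be:

.. code-block:: c

   #include <rte_common.h>
   #include <rte_memory.h>

   /* Deny any request that would take the socket above the limit that
    * was supplied at registration time (passed back as cur_limit). */
   static int
   alloc_validator(int socket_id __rte_unused, size_t cur_limit,
                   size_t new_len)
   {
       if (new_len > cur_limit)
           return -1; /* deny the allocation */
       return 0;      /* allow the allocation */
   }

   /* Limit NUMA node 0 to 1 GB (illustrative values):
    * rte_mem_alloc_validator_callback_register("demo-limit",
    *         alloc_validator, 0, 1UL << 30);
    */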

A default validator callback is provided by EAL, which can be enabled with the
``--socket-limit`` command-line option, for a simple way to limit the maximum amount
of memory that can be used by a DPDK application.

.. warning::
    The memory subsystem uses DPDK IPC internally, so memory allocations/callbacks
    and IPC must not be mixed: it is not safe to allocate/free memory inside
    memory-related or IPC callbacks, and it is not safe to use IPC inside
    memory-related callbacks. See chapter
    :ref:`Multi-process Support <Multi-process_Support>` for more details about
    DPDK IPC.

Legacy Memory Mode
^^^^^^^^^^^^^^^^^^

This mode is enabled by specifying the ``--legacy-mem`` command-line switch to the
EAL. This switch will have no effect on FreeBSD, as FreeBSD only supports
legacy mode anyway.

This mode mimics the historical behavior of EAL. That is, EAL will reserve all
memory at startup, sort all memory into large IOVA-contiguous chunks, and will
not allow acquiring or releasing hugepages from the system at runtime.

If neither ``-m`` nor ``--socket-mem`` were specified, the entire available
hugepage memory will be preallocated.

Hugepage Allocation Matching
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This behavior is enabled by specifying the ``--match-allocations`` command-line
switch to the EAL. This switch is Linux-only and not supported with
``--legacy-mem`` nor ``--no-huge``.

Some applications using memory event callbacks may require that hugepages be
freed exactly as they were allocated. These applications may also require
that any allocation from the malloc heap not span across allocations
associated with two different memory event callbacks. Hugepage allocation
matching can be used by these types of applications to satisfy both of these
requirements. This can result in some increased memory usage, which is
very dependent on the memory allocation patterns of the application.

32-bit Support
^^^^^^^^^^^^^^

Additional restrictions are present when running in 32-bit mode. In dynamic
memory mode, by default a maximum of 2 gigabytes of VA space will be preallocated,
and all of it will be on the main lcore NUMA node unless the ``--socket-mem`` flag is
used.

In legacy mode, VA space will only be preallocated for segments that were
requested (plus padding, to keep IOVA-contiguousness).

Maximum Amount of Memory
^^^^^^^^^^^^^^^^^^^^^^^^

All possible virtual memory space that can ever be used for hugepage mapping in
a DPDK process is preallocated at startup, thereby placing an upper limit on how
much memory a DPDK application can have. DPDK memory is stored in segment lists,
where each segment is strictly one physical page. It is possible to change the amount
of virtual memory being preallocated at startup by editing the following config
variables:

* ``RTE_MAX_MEMSEG_LISTS`` controls how many segment lists DPDK can have
* ``RTE_MAX_MEM_MB_PER_LIST`` controls how many megabytes of memory each
  segment list can address
* ``RTE_MAX_MEMSEG_PER_LIST`` controls how many segments each segment list
  can have
* ``RTE_MAX_MEMSEG_PER_TYPE`` controls how many segments each memory type
  can have (where "type" is defined as "page size + NUMA node" combination)
* ``RTE_MAX_MEM_MB_PER_TYPE`` controls how many megabytes of memory each
  memory type can address
* ``RTE_MAX_MEM_MB`` places a global maximum on the amount of memory
  DPDK can reserve

Normally, these options do not need to be changed.

.. note::

    Preallocated virtual memory is not to be confused with preallocated hugepage
    memory! All DPDK processes preallocate virtual memory at startup. Hugepages
    can later be mapped into that preallocated VA space (if dynamic memory mode
    is enabled), and can optionally be mapped into it at startup.

.. _hugepage_mapping:

Hugepage Mapping
^^^^^^^^^^^^^^^^

Below is an overview of the methods used on each OS to obtain hugepages,
explaining why certain limitations and options exist in EAL.
See the user guide for a specific OS for configuration details.

FreeBSD uses the ``contigmem`` kernel module
to reserve a fixed number of hugepages at system start,
which are mapped by EAL at initialization using a specific ``sysctl()``.

Windows EAL allocates hugepages from the OS as needed using the Win32 API,
so the available amount depends on the system load.
It uses the ``virt2phys`` kernel module to obtain physical addresses,
unless running in IOVA-as-VA mode (e.g. forced with ``--iova-mode=va``).

Linux allows selecting any combination of the following:

* use files in hugetlbfs (the default)
  or anonymous mappings (``--in-memory``);
* map each hugepage from its own file (the default)
  or map multiple hugepages from one big file (``--single-file-segments``).

Mapping hugepages from files in hugetlbfs is essential for multi-process,
because secondary processes need to map the same hugepages.
EAL creates files like ``rtemap_0``
in directories specified with the ``--huge-dir`` option
(or in the mount point for a specific hugepage size).
The ``rte`` prefix can be changed using ``--file-prefix``.
This may be needed for running multiple primary processes
that share a hugetlbfs mount point.
Each backing file by default corresponds to one hugepage;
it is opened and locked for the entire time the hugepage is used.
This may exhaust the open files limit (``NOFILE``).
See the :ref:`segment-file-descriptors` section
on how the number of open backing file descriptors can be reduced.

In dynamic memory mode, EAL removes a backing hugepage file
when all pages mapped from it are freed back to the system.
However, backing files may persist after the application terminates
in case of a crash or a leak of DPDK memory (e.g. a missing ``rte_free()``).
This reduces the number of hugepages available to other processes,
as reported by ``/sys/kernel/mm/hugepages/hugepages-*/free_hugepages``.
EAL can remove the backing files after opening them for mapping
if ``--huge-unlink`` is given, to avoid polluting hugetlbfs.
However, since this disables multi-process anyway,
using anonymous mappings (``--in-memory``) is recommended instead.

The :ref:`EAL memory allocator <malloc>` relies on hugepages being zero-filled.
Hugepages are cleared by the kernel when a file in hugetlbfs or a part of it
is mapped for the first time system-wide
to prevent data leaks from previous users of the same hugepage.
EAL ensures this behavior by removing existing backing files at startup
and by recreating them before opening for mapping (as a precaution).

One exception is ``--huge-unlink=never`` mode.
It is used to speed up EAL initialization, usually on application restart.
Clearing memory constitutes more than 95% of hugepage mapping time.
EAL can save this time by remapping existing backing files
with all the data left in the mapped hugepages ("dirty" memory).
Such segments are marked with ``RTE_MEMSEG_FLAG_DIRTY``.
The memory allocator detects dirty segments and handles them accordingly;
in particular, it clears memory requested with ``rte_zmalloc*()``.
In this mode EAL also does not remove a backing file
when all pages mapped from it are freed,
because they are intended to be reusable at restart.

Anonymous mapping does not allow a multi-process architecture.
This mode does not use hugetlbfs
and thus does not require root permissions for memory management
(the limit on the amount of locked memory, ``MEMLOCK``, still applies).
It is free of filename conflict and leftover file issues.
If ``memfd_create(2)`` is supported both at build and run time,
the DPDK memory manager can provide file descriptors for memory segments,
which are required for VirtIO with the vhost-user backend.
This can exhaust the open files limit (``NOFILE``)
despite not creating any visible files.
See the :ref:`segment-file-descriptors` section
on how the number of open file descriptors used by EAL can be reduced.

.. _segment-file-descriptors:

Segment File Descriptors
^^^^^^^^^^^^^^^^^^^^^^^^

On Linux, in most cases, EAL will store segment file descriptors internally. This
can become a problem when using smaller page sizes due to underlying limitations
of the ``glibc`` library. For example, Linux API calls such as ``select()`` may not
work correctly because ``glibc`` does not support more than a certain number of
file descriptors.

There are two possible solutions for this problem. The recommended solution is
to use ``--single-file-segments`` mode, as that mode will not use a file
descriptor per page, and it will keep compatibility with Virtio with the
vhost-user backend. This option is not available when using ``--legacy-mem``
mode.

Another option is to use bigger page sizes. Since fewer pages are required to
cover the same memory area, fewer file descriptors will be stored internally
by EAL.

Hugepage Worker Stacks
^^^^^^^^^^^^^^^^^^^^^^

When the ``--huge-worker-stack[=size]`` EAL option is specified, worker
thread stacks are allocated from hugepage memory local to the NUMA node
of the thread. The worker stack size defaults to the system pthread stack size
if the optional size parameter is not specified.

.. warning::
    Stacks allocated from hugepage memory are not protected by guard
    pages. Worker stacks must be sufficiently sized to prevent stack
    overflow when this option is used.

    As with normal thread stacks, hugepage worker thread stack size is
    fixed and is not dynamically resized. Therefore, an application that
    is free of stack page faults under a given load should be safe with
    hugepage worker thread stacks given the same thread stack size and
    loading conditions.

Support for Externally Allocated Memory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to use externally allocated memory in DPDK. There are two ways in
which using externally allocated memory can work: the malloc heap APIs, and
manual memory management.

+ Using heap APIs for externally allocated memory

Using a set of malloc heap APIs is the recommended way to use externally
allocated memory in DPDK. In this way, support for externally allocated memory
is implemented through overloading the socket ID - externally allocated heaps
will have socket IDs that would be considered invalid under normal
circumstances. Requesting an allocation to take place from a specified
externally allocated memory area is a matter of supplying the correct socket ID to
the DPDK allocator, either directly (e.g. through a call to ``rte_malloc``) or
indirectly (through data structure-specific allocation APIs such as
``rte_ring_create``). Using these APIs also ensures that mapping of externally
allocated memory for DMA is performed on any memory segment that is added
to a DPDK malloc heap.

Since there is no way DPDK can verify whether memory is available or valid, this
responsibility falls on the shoulders of the user. All multiprocess
synchronization is also the user's responsibility, as well as ensuring that all
calls to add/attach/detach/remove memory are done in the correct order. It is
not required to attach to a memory area in all processes - only attach to memory
areas as needed.

The expected workflow is as follows:

* Get a pointer to memory area
* Create a named heap
* Add memory area(s) to the heap
    - If IOVA table is not specified, IOVA addresses will be assumed to be
      unavailable, and DMA mappings will not be performed
    - Other processes must attach to the memory area before they can use it
* Get socket ID used for the heap
* Use normal DPDK allocation procedures, using supplied socket ID
* If memory area is no longer needed, it can be removed from the heap
    - Other processes must detach from this memory area before it can be removed
* If heap is no longer needed, remove it
    - Socket ID will become invalid and will not be reused

For more information, please refer to the ``rte_malloc`` API documentation,
specifically the ``rte_malloc_heap_*`` family of function calls.
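
A condensed sketch of this workflow (error handling pared down, and the heap
name, sizes and page size purely illustrative) could look like:

.. code-block:: c

   #include <rte_malloc.h>

   /* "addr" points at an externally allocated, page-aligned area of
    * "len" bytes, backed by pages of "page_sz" bytes. */
   static void *
   alloc_from_external(void *addr, size_t len, size_t page_sz)
   {
       int socket_id;

       if (rte_malloc_heap_create("ext_heap") != 0)
           return NULL;
       /* NULL IOVA table: addresses unknown, so no DMA mapping is done. */
       if (rte_malloc_heap_memory_add("ext_heap", addr, len,
                                      NULL, 0, page_sz) != 0)
           return NULL;
       socket_id = rte_malloc_heap_get_socket("ext_heap");

       /* A normal allocation, redirected to the external heap. */
       return rte_malloc_socket("demo", 4096, 0, socket_id);
   }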

+ Using externally allocated memory without DPDK APIs

While using heap APIs is the recommended method of using externally allocated
memory in DPDK, there are certain use cases where the overhead of the DPDK heap API
is undesirable - for example, when manual memory management is performed on an
externally allocated area. To support use cases where externally allocated
memory will not be used as part of a normal DPDK workflow, there is also another
set of APIs under the ``rte_extmem_*`` namespace.

These APIs are (as their name implies) intended to allow registering or
unregistering externally allocated memory to/from DPDK's internal page table, to
allow APIs like ``rte_mem_virt2memseg`` etc. to work with externally allocated
memory. Memory added this way will not be available for any regular DPDK
allocators; DPDK will leave this memory for the user application to manage.

The expected workflow is as follows:

* Get a pointer to memory area
* Register memory within DPDK
    - If IOVA table is not specified, IOVA addresses will be assumed to be
      unavailable
    - Other processes must attach to the memory area before they can use it
* Perform DMA mapping with ``rte_dev_dma_map`` if needed
* Use the memory area in your application
* If memory area is no longer needed, it can be unregistered
    - If the area was mapped for DMA, unmapping must be performed before
      unregistering memory
    - Other processes must detach from the memory area before it can be
      unregistered

Since these externally allocated memory areas will not be managed by DPDK, it is
therefore up to the user application to decide how to use them and what to do
with them once they're registered.
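
An illustrative sketch of this workflow (assuming ``dev`` is the device that
will perform DMA and that a single IOVA for the whole area is acceptable; all
names and values here are ours):

.. code-block:: c

   #include <rte_dev.h>
   #include <rte_memory.h>

   /* Register an external area with DPDK and map it for DMA at "iova". */
   static int
   register_ext_area(struct rte_device *dev, void *addr,
                     size_t len, size_t page_sz, uint64_t iova)
   {
       /* NULL IOVA table: IOVA addresses are assumed unavailable. */
       if (rte_extmem_register(addr, len, NULL, 0, page_sz) != 0)
           return -1;
       if (rte_dev_dma_map(dev, addr, iova, len) != 0) {
           rte_extmem_unregister(addr, len);
           return -1;
       }
       return 0;
   }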

Per-lcore and Shared Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

    lcore refers to a logical execution unit of the processor, sometimes called a hardware *thread*.

Shared variables are the default behavior.
Per-lcore variables are implemented using *Thread Local Storage* (TLS) to provide per-thread local storage.
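
As a brief sketch (the counter name is ours), a per-lcore variable is declared
with ``RTE_DEFINE_PER_LCORE`` and accessed with ``RTE_PER_LCORE``:

.. code-block:: c

   #include <stdint.h>
   #include <rte_per_lcore.h>

   /* One private instance of this counter exists per thread. */
   static RTE_DEFINE_PER_LCORE(uint64_t, rx_count);

   static inline void
   count_rx_burst(uint64_t nb_rx)
   {
       /* Each lcore updates only its own copy, so no locking is needed. */
       RTE_PER_LCORE(rx_count) += nb_rx;
   }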

Logs
~~~~

A logging API is provided by EAL.
By default, in a Linux application, logs are sent to syslog and also to the console.
However, the log function can be overridden by the user to use a different logging mechanism.

Trace and Debug Functions
^^^^^^^^^^^^^^^^^^^^^^^^^

There are some debug functions to dump the stack in glibc.
The rte_panic() function can voluntarily provoke a SIGABRT,
which can trigger the generation of a core file, readable by gdb.

CPU Feature Identification
~~~~~~~~~~~~~~~~~~~~~~~~~~

The EAL can query the CPU at runtime (using the rte_cpu_get_features() function) to determine which CPU features are available.
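
The related per-flag query is ``rte_cpu_get_flag_enabled()``. A short sketch
(flag names are architecture-specific; ``RTE_CPUFLAG_AVX2`` assumes x86):

.. code-block:: c

   #include <rte_cpuflags.h>

   /* Pick a code path at runtime based on CPU support. */
   static int
   use_avx2_path(void)
   {
       /* Returns 1 if the running CPU supports the flag. */
       return rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2) == 1;
   }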

User Space Interrupt Event
~~~~~~~~~~~~~~~~~~~~~~~~~~

+ User Space Interrupt and Alarm Handling in Host Thread

The EAL creates a host thread to poll the UIO device file descriptors to detect interrupts.
Callbacks can be registered or unregistered by the EAL functions for a specific interrupt event
and are called in the host thread asynchronously.
The EAL also allows timed callbacks to be used in the same way as for NIC interrupts.

.. note::

    In DPDK PMDs, the only interrupts handled by the dedicated host thread are those for link status change
    (link up and link down notification) and for sudden device removal.


+ RX Interrupt Event

The receive and transmit routines provided by each PMD are not limited to being executed in polling thread mode.
To ease idle polling with tiny throughput, it is useful to pause the polling and wait until a wake-up event happens.
The RX interrupt is the first choice for such a wake-up event, but probably won't be the only one.

EAL provides the event APIs for this event-driven thread mode.
Taking Linux as an example, the implementation relies on epoll. Each thread can monitor an epoll instance
to which all the wake-up events' file descriptors are added. The event file descriptors are created and mapped to
the interrupt vectors according to the UIO/VFIO spec.
From FreeBSD's perspective, kqueue is the alternative way, but it is not implemented yet.

EAL initializes the mapping between event file descriptors and interrupt vectors, while each device initializes the mapping
between interrupt vectors and queues. In this way, EAL is actually unaware of the interrupt cause on a specific vector.
The eth_dev driver takes responsibility for programming the latter mapping.

.. note::

    Per-queue RX interrupt events are only allowed in VFIO, which supports multiple MSI-X vectors. In UIO, the RX interrupt
    shares the same vector with other interrupt causes. In this case, when RX interrupt and LSC (link status change)
    interrupt are both enabled (intr_conf.lsc == 1 && intr_conf.rxq == 1), only the former takes effect.

RX interrupts are controlled/enabled/disabled by the ethdev APIs - ``rte_eth_dev_rx_intr_*``. They return failure if the PMD
does not support them yet. The ``intr_conf.rxq`` flag is used to turn on the capability of RX interrupt per device.
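
A minimal usage sketch (port and queue IDs are illustrative, and queue setup,
device start and the actual epoll wait are elided):

.. code-block:: c

   #include <string.h>
   #include <rte_ethdev.h>

   static void
   enable_rxq_interrupt(uint16_t port_id, uint16_t queue_id)
   {
       struct rte_eth_conf conf;

       memset(&conf, 0, sizeof(conf));
       /* Turn on the per-device RX interrupt capability. */
       conf.intr_conf.rxq = 1;
       rte_eth_dev_configure(port_id, 1, 1, &conf);

       /* ... rte_eth_rx_queue_setup() and rte_eth_dev_start() elided ... */

       /* Arm the interrupt before sleeping on the event fd ... */
       rte_eth_dev_rx_intr_enable(port_id, queue_id);
       /* ... and disarm it once woken up and polling again. */
       rte_eth_dev_rx_intr_disable(port_id, queue_id);
   }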

+ Device Removal Event

This event is triggered by a device being removed at a bus level. Its
underlying resources may have been made unavailable (i.e. PCI mappings
unmapped). The PMD must make sure that on such an occurrence, the application can
still safely use its callbacks.

This event can be subscribed to in the same way one would subscribe to a link
status change event. The execution context is thus the same, i.e. it is the
dedicated interrupt host thread.

Considering this, it is likely that an application would want to close a
device having emitted a Device Removal Event. In such a case, calling
``rte_eth_dev_close()`` can trigger it to unregister its own Device Removal Event
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such a closing operation.
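
A hedged sketch of subscribing to this event (deferring the close via an EAL
alarm is one common rescheduling technique; the 100 us delay is arbitrary):

.. code-block:: c

   #include <stdint.h>
   #include <rte_alarm.h>
   #include <rte_common.h>
   #include <rte_ethdev.h>

   /* Deferred close, running outside the interrupt handler context. */
   static void
   close_port(void *arg)
   {
       rte_eth_dev_close((uint16_t)(uintptr_t)arg);
   }

   static int
   rmv_event_cb(uint16_t port_id, enum rte_eth_event_type event __rte_unused,
                void *cb_arg __rte_unused, void *ret_param __rte_unused)
   {
       /* Do not close the device here; reschedule the close instead. */
       rte_eal_alarm_set(100, close_port, (void *)(uintptr_t)port_id);
       return 0;
   }

   /* Registration, done once per port:
    * rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RMV,
    *                               rmv_event_cb, NULL);
    */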

Block list
~~~~~~~~~~

The EAL PCI device block list functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
The ports to be blocked are identified using the PCIe* description (Domain:Bus:Device.Function).

Misc Functions
~~~~~~~~~~~~~~

Locks and atomic operations are per-architecture (i686 and x86_64).

IOVA Mode Detection
~~~~~~~~~~~~~~~~~~~

The IOVA mode is selected by considering what the currently usable devices on the
system require and/or support.

On FreeBSD, RTE_IOVA_PA is always the default. On Linux, the IOVA mode is
detected based on a 2-step heuristic detailed below.

For the first step, EAL asks each bus its requirement in terms of IOVA mode
and decides on a preferred IOVA mode:

- if all buses report RTE_IOVA_PA, then the preferred IOVA mode is RTE_IOVA_PA,
- if all buses report RTE_IOVA_VA, then the preferred IOVA mode is RTE_IOVA_VA,
- if all buses report RTE_IOVA_DC, i.e. no bus expressed a preference, then the
  preferred mode is RTE_IOVA_DC,
- if the buses disagree (at least one wants RTE_IOVA_PA and at least one wants
  RTE_IOVA_VA), then the preferred IOVA mode is RTE_IOVA_DC (see below for the
  check on physical address availability).

If the buses have expressed no preference on which IOVA mode to pick, then a
default is selected using the following logic:

- if physical addresses are not available, RTE_IOVA_VA mode is used
- if /sys/kernel/iommu_groups is not empty, RTE_IOVA_VA mode is used
- otherwise, RTE_IOVA_PA mode is used

In the case where the buses disagreed on their preferred IOVA mode, part of
the buses won't work because of this decision.

The second step checks if the preferred mode complies with physical address
availability, since physical addresses are only available to the root user in recent
kernels. Namely, if the preferred mode is RTE_IOVA_PA but there is no access to
physical addresses, then EAL init fails early, since later probing of the
devices would fail anyway.

.. note::

    The RTE_IOVA_VA mode is preferred as the default in most cases for the
    following reasons:

    - All drivers are expected to work in RTE_IOVA_VA mode, irrespective of
      physical address availability.
    - By default, the mempool first asks for IOVA-contiguous memory using
      ``RTE_MEMZONE_IOVA_CONTIG``. This is slow in RTE_IOVA_PA mode and it may
      affect the application boot time.
    - It is easy to enable large amounts of IOVA-contiguous memory use cases
      with IOVA in VA mode.

    It is expected that all PCI drivers work in both RTE_IOVA_PA and
    RTE_IOVA_VA modes.

    If a PCI driver does not support RTE_IOVA_PA mode, the
    ``RTE_PCI_DRV_NEED_IOVA_AS_VA`` flag is used to dictate that this PCI
    driver can only work in RTE_IOVA_VA mode.

    When the KNI kernel module is detected, RTE_IOVA_PA mode is preferred, as a
    performance penalty is expected in RTE_IOVA_VA mode.

IOVA Mode Configuration
~~~~~~~~~~~~~~~~~~~~~~~

Auto detection of the IOVA mode, based on probing the bus and IOMMU configuration, may not report
the desired addressing mode when virtual devices that are not directly attached to the bus are present.
To facilitate forcing the IOVA mode to a specific value, the EAL command-line option ``--iova-mode`` can
be used to select either physical addressing ('pa') or virtual addressing ('va').

.. _max_simd_bitwidth:

Max SIMD bitwidth
~~~~~~~~~~~~~~~~~

The EAL provides a single setting to limit the max SIMD bitwidth used by DPDK,
which is used in determining the vector path, if any, chosen by a component.
The value can be set at runtime by an application using the
``rte_vect_set_max_simd_bitwidth(uint16_t bitwidth)`` function,
which should only be called once at initialization, before EAL init.
The value can be overridden by the user using the EAL command-line option ``--force-max-simd-bitwidth``.

When choosing a vector path, along with checking the CPU feature support,
the value of the max SIMD bitwidth must also be checked, and can be retrieved using the
``rte_vect_get_max_simd_bitwidth()`` function.
The value should be compared against the enum values for accepted max SIMD bitwidths:

.. code-block:: c

   enum rte_vect_max_simd {
       RTE_VECT_SIMD_DISABLED = 64,
       RTE_VECT_SIMD_128 = 128,
       RTE_VECT_SIMD_256 = 256,
       RTE_VECT_SIMD_512 = 512,
       RTE_VECT_SIMD_MAX = INT16_MAX + 1,
   };

   /* Example path selection: */
   if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) {
       /* Take AVX-512 vector path */
   } else if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256) {
       /* Take AVX2 vector path */
   }


Memory Segments and Memory Zones (memzone)
------------------------------------------

The mapping of physical memory is provided by this feature in the EAL.
As physical memory can have gaps, the memory is described in a table of descriptors,
and each descriptor (called rte_memseg) describes a physical page.

On top of this, the memzone allocator's role is to reserve contiguous portions of physical memory.
These zones are identified by a unique name when the memory is reserved.

The rte_memzone descriptors are also located in the configuration structure.
This structure is accessed using rte_eal_get_configuration().
The lookup (by name) of a memory zone returns a descriptor containing the physical address of the memory zone.

Memory zones can be reserved with specific start address alignment by supplying the align parameter
(by default, they are aligned to the cache line size).
The alignment value should be a power of two and not less than the cache line size (64 bytes).
Memory zones can also be reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.
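
A short sketch of reserving and looking up an aligned zone (the name and sizes
are illustrative):

.. code-block:: c

   #include <rte_memzone.h>

   static const struct rte_memzone *
   reserve_demo_zone(void)
   {
       const struct rte_memzone *mz;

       /* Reserve 1 MB aligned to 4 kB, on any NUMA node. */
       mz = rte_memzone_reserve_aligned("demo_zone", 1 << 20,
                                        SOCKET_ID_ANY, 0, 4096);
       if (mz == NULL)
           return NULL;
       /* Later lookups (including from secondary processes) go by name. */
       return rte_memzone_lookup("demo_zone");
   }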

Both memsegs and memzones are stored using ``rte_fbarray`` structures. Please
refer to the *DPDK API Reference* for more information.


Multiple pthread
----------------

DPDK usually pins one pthread per core to avoid the overhead of task switching.
This allows for significant performance gains, but lacks flexibility and is not always efficient.

Power management helps to improve the CPU efficiency by limiting the CPU runtime frequency.
Alternatively, however, it is possible to utilize the available idle cycles to take advantage of
the full capability of the CPU.

By taking advantage of cgroup, the CPU utilization quota can be simply assigned.
This gives another way to improve the CPU efficiency; however, there is a prerequisite:
DPDK must handle the context switching between multiple pthreads per core.

For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.

EAL pthread and lcore Affinity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The term "lcore" refers to an EAL thread, which is really a Linux/FreeBSD pthread.
"EAL pthreads" are created and managed by EAL and execute the tasks issued by *remote_launch*.
In each EAL pthread, there is a TLS (Thread Local Storage) variable called *_lcore_id* for unique identification.
As EAL pthreads usually bind 1:1 to the physical CPU, the *_lcore_id* is typically equal to the CPU ID.

When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL pthread and a specified physical CPU.
The EAL pthread may have affinity to a CPU set, and as such the *_lcore_id* will not be the same as the CPU ID.
For this reason, there is an EAL long option ``--lcores`` defined to assign the CPU affinity of lcores.
For a specified lcore ID or ID group, the option allows setting the CPU set for that EAL pthread.

The format pattern::

    --lcores='<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]'

'lcore_set' and 'cpu_set' can be a single number, a range or a group.

A number is a "digit([0-9]+)"; a range is "<number>-<number>"; a group is "(<number|range>[,<number|range>,...])".

If a '\@cpu_set' value is not supplied, the value of 'cpu_set' will default to the value of 'lcore_set'.

For example, ``--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'`` starts 9 EAL threads::

    lcore 0 runs on cpuset 0x41 (cpu 0,6);
    lcore 1 runs on cpuset 0x2 (cpu 1);
    lcore 2 runs on cpuset 0xe0 (cpu 5,6,7);
    lcores 3,4,5 run on cpuset 0x5 (cpu 0,2);
    lcore 6 runs on cpuset 0x41 (cpu 0,6);
    lcore 7 runs on cpuset 0x80 (cpu 7);
    lcore 8 runs on cpuset 0x100 (cpu 8).

Using this option, for each given lcore ID, the associated CPUs can be assigned.
It is also compatible with the pattern of the corelist (``-l``) option.

non-EAL pthread support
~~~~~~~~~~~~~~~~~~~~~~~

It is possible to use the DPDK execution context with any user pthread (a.k.a. non-EAL pthreads).
There are two kinds of non-EAL pthreads:

- a registered non-EAL pthread with a valid *_lcore_id* that was successfully assigned by calling ``rte_thread_register()``,
- an unregistered non-EAL pthread with LCORE_ID_ANY.

For an unregistered non-EAL pthread (with LCORE_ID_ANY as its *_lcore_id*), some libraries will use an alternative unique ID (e.g. TID), some will not be impacted at all, and some will work but with limitations (e.g. the timer and mempool libraries).

All these impacts are mentioned in the :ref:`known_issue_label` section.

Public Thread API
~~~~~~~~~~~~~~~~~

There are two public APIs, ``rte_thread_set_affinity()`` and ``rte_thread_get_affinity()``, introduced for threads.
When they're used in any pthread context, the Thread Local Storage (TLS) will be set/get.

Those TLS variables include *_cpuset* and *_socket_id*:

*   *_cpuset* stores the CPU bitmap to which the pthread is affinitized.

*   *_socket_id* stores the NUMA node of the CPU set. If the CPUs in the CPU set belong to different NUMA nodes, the *_socket_id* will be set to SOCKET_ID_ANY.
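
A brief sketch of these two calls (the chosen CPU is illustrative, and the
``CPU_*`` macros assume a Linux build):

.. code-block:: c

   #include <rte_lcore.h>

   /* Pin the calling thread to CPU 2 and read the affinity back. */
   static int
   pin_to_cpu2(void)
   {
       rte_cpuset_t cpuset;

       CPU_ZERO(&cpuset);
       CPU_SET(2, &cpuset);
       if (rte_thread_set_affinity(&cpuset) != 0)
           return -1;

       rte_thread_get_affinity(&cpuset); /* reads back the TLS copy */
       return CPU_ISSET(2, &cpuset) ? 0 : -1;
   }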


Control Thread API
~~~~~~~~~~~~~~~~~~

It is possible to create Control Threads using the public API
``rte_ctrl_thread_create()``.
Those threads can be used for management/infrastructure tasks and are used
internally by DPDK for multi-process support and interrupt handling.
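
A minimal sketch of creating one (the thread body and name are illustrative):

.. code-block:: c

   #include <pthread.h>
   #include <rte_lcore.h>

   static void *
   housekeeping(void *arg)
   {
       /* management/infrastructure work goes here */
       return arg;
   }

   static int
   spawn_ctrl_thread(void)
   {
       pthread_t tid;

       /* The name is truncated to the OS limit (16 bytes on Linux). */
       return rte_ctrl_thread_create(&tid, "demo-ctrl", NULL,
                                     housekeeping, NULL);
   }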

Those threads will be scheduled on CPUs that are part of the original process CPU
affinity, from which the dataplane and service lcores are excluded.

For example, on an 8 CPU system, starting a DPDK application with -l 2,3
(dataplane cores), then depending on the affinity configuration, which can be
controlled with tools like taskset (Linux) or cpuset (FreeBSD):

- with no affinity configuration, the Control Threads will end up on
  CPUs 0-1,4-7.
- with affinity restricted to 2-4, the Control Threads will end up on
  CPU 4.
- with affinity restricted to 2-3, the Control Threads will end up on
  CPU 2 (main lcore, which is the default when no CPU is available).

.. _known_issue_label:

Known Issues
~~~~~~~~~~~~

+ rte_mempool

  The rte_mempool uses a per-lcore cache inside the mempool.
  For unregistered non-EAL pthreads, ``rte_lcore_id()`` will not return a valid number.
  So for now, when rte_mempool is used with unregistered non-EAL pthreads, the put/get operations will bypass the default mempool cache, and there is a performance penalty because of this bypass.
  Only user-owned external caches can be used in an unregistered non-EAL context, in conjunction with ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()``, which accept an explicit cache parameter.
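
  A compact sketch of using such an external cache from an unregistered
  thread (creation of the pool ``mp`` is elided; sizes and names are ours):

  .. code-block:: c

     #include <rte_mempool.h>

     /* Created once by the unregistered thread, then reused. */
     struct rte_mempool_cache *cache;
     void *obj;

     cache = rte_mempool_cache_create(32, SOCKET_ID_ANY);

     /* Cached get/put that do not rely on rte_lcore_id(). */
     if (rte_mempool_generic_get(mp, &obj, 1, cache) == 0)
         rte_mempool_generic_put(mp, &obj, 1, cache);

     rte_mempool_cache_free(cache);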

+ rte_ring

  rte_ring supports multi-producer enqueue and multi-consumer dequeue.
  However, it is non-preemptive; this has a knock-on effect of making rte_mempool non-preemptible.

  .. note::

    The "non-preemptive" constraint means:

    - a pthread doing multi-producer enqueues on a given ring must not
      be preempted by another pthread doing a multi-producer enqueue on
      the same ring.
    - a pthread doing multi-consumer dequeues on a given ring must not
      be preempted by another pthread doing a multi-consumer dequeue on
      the same ring.

    Bypassing this constraint may cause the second pthread to spin until the first one is scheduled again.
    Moreover, if the first pthread is preempted by a context that has a higher priority, it may even cause a deadlock.

  This means use cases involving preemptible pthreads should consider using rte_ring carefully.

  1. It CAN be used for preemptible single-producer and single-consumer use cases.

  2. It CAN be used for non-preemptible multi-producer and preemptible single-consumer use cases.

  3. It CAN be used for preemptible single-producer and non-preemptible multi-consumer use cases.

  4. It MAY be used by preemptible multi-producer and/or preemptible multi-consumer pthreads whose scheduling policies are all SCHED_OTHER (cfs), SCHED_IDLE or SCHED_BATCH. The user SHOULD be aware of the performance penalty before using it.

  5. It MUST not be used by multi-producer/consumer pthreads whose scheduling policies are SCHED_FIFO or SCHED_RR.

  Alternatively, applications can use the lock-free stack mempool handler. When
  considering this handler, note that:

  - It is currently limited to the aarch64 and x86_64 platforms, because it uses
    an instruction (16-byte compare-and-swap) that is not yet available on other
    platforms.
  - It has worse average-case performance than the non-preemptive rte_ring, but
    software caching (e.g. the mempool cache) can mitigate this by reducing the
    number of stack accesses.

+ rte_timer

  Running ``rte_timer_manage()`` on an unregistered non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.

+ rte_log

  In unregistered non-EAL pthreads, there is no per-thread loglevel and logtype; global loglevels are used.

+ misc

  The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in an unregistered non-EAL pthread.

cgroup control
~~~~~~~~~~~~~~

The following is a simple example of cgroup control usage: there are two pthreads (t0 and t1) doing packet I/O on the same core ($cpu).
We expect only 50% of CPU time to be spent on packet I/O.

  .. code-block:: console

    mkdir /sys/fs/cgroup/cpu/pkt_io
    mkdir /sys/fs/cgroup/cpuset/pkt_io

    echo $cpu > /sys/fs/cgroup/cpuset/pkt_io/cpuset.cpus

    echo $t0 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t0 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    echo $t1 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t1 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    cd /sys/fs/cgroup/cpu/pkt_io
    echo 100000 > cpu.cfs_period_us
    echo  50000 > cpu.cfs_quota_us

.. _malloc:

Malloc
------

The EAL provides a malloc API to allocate any-sized memory.

The objective of this API is to provide malloc-like functions to allow
allocation from hugepage memory and to facilitate application porting.
The *DPDK API Reference* manual describes the available functions.

Typically, these kinds of allocations should not be done in data plane
processing because they are slower than pool-based allocation and make
use of locks within the allocation and free paths.
However, they can be used in configuration code.

Refer to the rte_malloc() function description in the *DPDK API Reference*
manual for more information.


Alignment and NUMA Constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

rte_malloc() takes an align argument that can be used to request a memory
area that is aligned on a multiple of this value (which must be a power of two).

On systems with NUMA support, a call to the rte_malloc() function will return
memory that has been allocated on the NUMA socket of the core which made the call.
A set of APIs is also provided to allow memory to be explicitly allocated on a
NUMA socket directly, or allocated on the NUMA socket where another core is
located, in the case where the memory is to be used by a logical core other than
the one doing the memory allocation.
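
A short sketch of the aligned and socket-specific variants (the type string,
sizes and lcore ID are illustrative):

.. code-block:: c

   #include <rte_lcore.h>
   #include <rte_malloc.h>

   static void
   numa_alloc_demo(void)
   {
       /* 1 kB, zeroed, aligned to 64 bytes, on the caller's NUMA node. */
       void *local = rte_zmalloc("demo", 1024, 64);

       /* The same allocation, placed on the NUMA node of lcore 4. */
       void *remote = rte_malloc_socket("demo", 1024, 64,
                                        rte_lcore_to_socket_id(4));

       rte_free(local);
       rte_free(remote);
   }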

Use Cases
~~~~~~~~~

This API is meant to be used by an application that requires malloc-like
functions at initialization time.

For allocating/freeing data at runtime, in the fast-path of an application,
the memory pool library should be used instead.

Internal Implementation
~~~~~~~~~~~~~~~~~~~~~~~

Data Structures
^^^^^^^^^^^^^^^

There are two data structure types used internally in the malloc library:

*   struct malloc_heap - used to track free space on a per-socket basis

*   struct malloc_elem - the basic element of allocation and free-space
    tracking inside the library.

Structure: malloc_heap
""""""""""""""""""""""

The malloc_heap structure is used to manage free space on a per-socket basis.
Internally, there is one heap structure per NUMA node, which allows us to
allocate memory to a thread based on the NUMA node on which this thread runs.
While this does not guarantee that the memory will be used on that NUMA node,
it is no worse than a scheme where the memory is always allocated on a fixed
or random node.

The key fields of the heap structure and their function are described below
(see also the diagram below):

*   lock - the lock field is needed to synchronize access to the heap.
    Given that the free space in the heap is tracked using a linked list,
    we need a lock to prevent two threads manipulating the list at the same time.

*   free_head - this points to the first element in the list of free nodes for
    this malloc heap.

*   first - this points to the first element in the heap.

*   last - this points to the last element in the heap.

.. _figure_malloc_heap:

.. figure:: img/malloc_heap.*

   Example of a malloc heap and malloc elements within the malloc library


.. _malloc_elem:

Structure: malloc_elem
""""""""""""""""""""""

The malloc_elem structure is used as a generic header structure for various
blocks of memory.
It is used in two different ways - both shown in the diagram above:

#.  As a header on a block of free or allocated memory - normal case

#.  As a padding header inside a block of memory

The most important fields in the structure and how they are used are described below.

The malloc heap is a doubly-linked list, where each element keeps track of its
previous and next elements. Due to the fact that hugepage memory can come and
go, neighboring malloc elements may not necessarily be adjacent in memory.
Also, since a malloc element may span multiple pages, its contents may not
necessarily be IOVA-contiguous either - each malloc element is only guaranteed
to be virtually contiguous.

.. note::

    If the usage of a particular field in one of the above two usages is not
    described, the field can be assumed to have an undefined value in that
    situation; for example, for padding headers only the "state" and "pad"
    fields have valid values.

*   heap - this pointer is a reference back to the heap structure from which
    this block was allocated.
    It is used for normal memory blocks when they are being freed, to add the
    newly-freed block to the heap's free-list.

*   prev - this pointer points to the previous header element/block in memory. When
    freeing a block, this pointer is used to reference the previous block to
    check if that block is also free. If so, and the two blocks are immediately
    adjacent to each other, then the two free blocks are merged to form a single
    larger block.

*   next - this pointer points to the next header element/block in memory. When
    freeing a block, this pointer is used to reference the next block to check
    if that block is also free. If so, and the two blocks are immediately
    adjacent to each other, then the two free blocks are merged to form a single
    larger block.

*   free_list - this is a structure pointing to previous and next elements in
    this heap's free list.
    It is only used in normal memory blocks; on ``malloc()`` to find a suitable
    free block to allocate and on ``free()`` to add the newly freed element to
    the free-list.

*   state - This field can have one of three values: ``FREE``, ``BUSY`` or
    ``PAD``.
    The former two are to indicate the allocation state of a normal memory block
    and the latter is to indicate that the element structure is a dummy structure
    at the end of the start-of-block padding, i.e. where the start of the data
    within a block is not at the start of the block itself, due to alignment
    constraints.
    In that case, the pad header is used to locate the actual malloc element
    header for the block.

*   dirty - this flag is only meaningful when ``state`` is ``FREE``.
    It indicates that the content of the element is not fully zero-filled.
    Memory from such blocks must be cleared when requested via ``rte_zmalloc*()``.
    Dirty elements only appear with ``--huge-unlink=never``.

*   pad - this holds the length of the padding present at the start of the block.
    In the case of a normal block header, it is added to the address of the end
    of the header to give the address of the start of the data area, i.e. the
    value passed back to the application on a malloc.
    Within a dummy header inside the padding, this same value is stored, and is
    subtracted from the address of the dummy header to yield the address of the
    actual block header.

*   size - the size of the data block, including the header itself.

Memory Allocation
^^^^^^^^^^^^^^^^^

On EAL initialization, all preallocated memory segments are set up as part of the
malloc heap. This setup involves placing an :ref:`element header<malloc_elem>`
with ``FREE`` at the start of each virtually contiguous segment of memory.
The ``FREE`` element is then added to the ``free_list`` for the malloc heap.

This setup also happens whenever memory is allocated at runtime (if supported),
in which case newly allocated pages are also added to the heap, merging with any
adjacent free segments if there are any.

When an application makes a call to a malloc-like function, the malloc function
will first index the ``lcore_config`` structure for the calling thread, and
determine the NUMA node of that thread.
The NUMA node is used to index the array of ``malloc_heap`` structures which is
passed as a parameter to the ``heap_alloc()`` function, along with the
requested size, type, alignment and boundary parameters.

The ``heap_alloc()`` function will scan the free_list of the heap, and attempt
to find a free block suitable for storing data of the requested size, with the
requested alignment and boundary constraints.

When a suitable free element has been identified, the pointer to be returned
to the user is calculated.
The cache-line of memory immediately preceding this pointer is filled with a
struct malloc_elem header.
Because of alignment and boundary constraints, there could be free space at
the start and/or end of the element, resulting in the following behavior:

#. Check for trailing space.
   If the trailing space is big enough, i.e. > 128 bytes, then the free element
   is split.
   If it is not, then we just ignore it (wasted space).

#. Check for space at the start of the element.
   If the space at the start is small, i.e. <= 128 bytes, then a pad header is
   used, and the remaining space is wasted.
   If, however, the remaining space is greater, then the free element is split.

The advantage of allocating the memory from the end of the existing element is
that no adjustment of the free list needs to take place - the existing element
on the free list just has its size value adjusted, and the next/previous elements
have their "prev"/"next" pointers redirected to the newly created element.

In the case where there is not enough memory in the heap to satisfy an allocation
request, EAL will attempt to allocate more memory from the system (if supported)
and, following a successful allocation, will retry reserving the memory again. In
a multiprocessing scenario, all primary and secondary processes will synchronize
their memory maps to ensure that any valid pointer to DPDK memory is guaranteed
to be valid at all times in all currently running processes.

Failure to synchronize memory maps in one of the processes will cause allocation
to fail, even though some of the processes may have allocated the memory
successfully. The memory is not added to the malloc heap unless the primary process
has ensured that all other processes have mapped this memory successfully.

Any successful allocation event will trigger a callback, for which user
applications and other DPDK subsystems can register. Additionally, validation
callbacks will be triggered before allocation if the newly allocated memory would
exceed the threshold set by the user, giving a chance to allow or deny the allocation.

.. note::

    Any allocation of new pages has to go through the primary process. If the
    primary process is not active, no memory will be allocated, even if it was
    theoretically possible to do so. This is because the primary process's memory
    map acts as an authority on what should or should not be mapped, while each
    secondary process has its own, local memory map. Secondary processes do not
    update the shared memory map, they only copy its contents to their local
    memory map.

Freeing Memory
^^^^^^^^^^^^^^

To free an area of memory, the pointer to the start of the data area is passed
to the free function.
The size of the ``malloc_elem`` structure is subtracted from this pointer to get
the element header for the block.
If this header is of type ``PAD``, then the pad length is further subtracted from
the pointer to get the proper element header for the entire block.

From this element header, we get pointers to the heap from which the block was
allocated and to where it must be freed, as well as the pointers to the previous
and next elements. These next and previous elements are then checked to see if
they are also ``FREE`` and are immediately adjacent to the current one, and if
so, they are merged with the current element. This means that we can never have
two ``FREE`` memory blocks adjacent to one another, as they are always merged
into a single block.

If deallocating pages at runtime is supported, and the free element encloses
one or more pages, those pages can be deallocated and removed from the heap.
If DPDK was started with command-line parameters for preallocating memory
(``-m`` or ``--socket-mem``), then those pages that were allocated at startup
will not be deallocated.

Any successful deallocation event will trigger a callback, for which user
applications and other DPDK subsystems can register.