..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

.. _Environment_Abstraction_Layer:

Environment Abstraction Layer
=============================

The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level resources such as hardware and memory space.
It provides a generic interface that hides the environment specifics from the applications and libraries.
It is the responsibility of the initialization routine to decide how to allocate these resources
(that is, memory space, devices, timers, consoles, and so on).

Typical services expected from the EAL are:

* DPDK Loading and Launching:
  The DPDK and its application are linked as a single application and must be loaded by some means.

* Core Affinity/Assignment Procedures:
  The EAL provides mechanisms for assigning execution units to specific cores as well as creating execution instances.

* System Memory Reservation:
  The EAL facilitates the reservation of different memory zones, for example, physical memory areas for device interactions.

* Trace and Debug Functions: Logs, dump_stack, panic and so on.

* Utility Functions: Spinlocks and atomic counters that are not provided in libc.

* CPU Feature Identification: Determine at runtime if a particular feature, for example, Intel® AVX is supported.
  Determine if the current CPU supports the feature set that the binary was compiled for.

* Interrupt Handling: Interfaces to register/unregister callbacks to specific interrupt sources.

* Alarm Functions: Interfaces to set/remove callbacks to be run at a specific time.

EAL in a Linux-userland Execution Environment
---------------------------------------------

In a Linux user space environment, the DPDK application runs as a user-space application using the pthread library.

The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance).
This memory is exposed to DPDK service layers such as the :ref:`Mempool Library <Mempool_Library>`.

At this point, the DPDK services layer will be initialized, then through pthread setaffinity calls,
each execution unit will be assigned to a specific logical core to run as a user-level thread.

The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel API through a mmap() call.

Initialization and Core Launching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Part of the initialization is done by the start function of glibc.
A check is also performed at initialization time to ensure that the micro architecture type chosen in the config file is supported by the CPU.
Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation).
It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).

.. _figure_linux_launch:

.. figure:: img/linuxapp_launch.*

   EAL Initialization in a Linux Application Environment


.. note::

    Initialization of objects, such as memory zones, rings, memory pools, lpm tables and hash tables,
    should be done as part of the overall application initialization on the main lcore.
    The creation and initialization functions for these objects are not multi-thread safe.
    However, once initialized, the objects themselves can safely be used in multiple threads simultaneously.

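
This launch sequence is the same one followed by the DPDK sample applications. A minimal sketch
(error handling trimmed, assuming a recent DPDK release with the main/worker lcore naming) of
initializing EAL, launching work on every worker lcore and cleaning up afterwards might look as follows:

.. code-block:: c

   #include <stdio.h>

   #include <rte_eal.h>
   #include <rte_launch.h>
   #include <rte_lcore.h>

   /* Work function run on each worker lcore by rte_eal_remote_launch(). */
   static int
   lcore_hello(void *arg)
   {
       (void)arg;
       printf("hello from lcore %u\n", rte_lcore_id());
       return 0;
   }

   int
   main(int argc, char **argv)
   {
       unsigned int lcore_id;

       /* rte_eal_init() parses the EAL arguments and creates the worker
        * threads, pinning them according to the core mask/list options. */
       if (rte_eal_init(argc, argv) < 0) {
           fprintf(stderr, "EAL initialization failed\n");
           return -1;
       }

       /* Launch the work function on every worker lcore, then run it on
        * the main lcore as well. */
       RTE_LCORE_FOREACH_WORKER(lcore_id)
           rte_eal_remote_launch(lcore_hello, NULL, lcore_id);
       lcore_hello(NULL);

       /* Wait for all worker lcores to finish before cleaning up. */
       rte_eal_mp_wait_lcore();

       /* Release resources allocated during rte_eal_init(). */
       rte_eal_cleanup();
       return 0;
   }
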

Shutdown and Cleanup
~~~~~~~~~~~~~~~~~~~~

During initialization, EAL resources such as hugepage-backed memory can be
allocated by core components. The memory allocated during ``rte_eal_init()``
can be released by calling the ``rte_eal_cleanup()`` function. Refer to the
API documentation for details.

Multi-process Support
~~~~~~~~~~~~~~~~~~~~~

The Linux EAL allows a multi-process as well as a multi-threaded (pthread) deployment model.
See chapter :ref:`Multi-process Support <Multi-process_Support>` for more details.

Memory Mapping Discovery and Memory Reservation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The allocation of large contiguous physical memory is done using hugepages.
The EAL provides an API to reserve named memory zones in this contiguous memory.
The physical address of the reserved memory for that memory zone is also returned to the user by the memory zone reservation API.

There are two modes in which the DPDK memory subsystem can operate: dynamic mode
and legacy mode. Both modes are explained below.

.. note::

    Memory reservations done using the APIs provided by rte_malloc
    are also backed by hugepages unless the ``--no-huge`` option is given.

Dynamic Memory Mode
^^^^^^^^^^^^^^^^^^^

Currently, this mode is only supported on Linux and Windows.

In this mode, usage of hugepages by a DPDK application will grow and shrink based
on the application's requests. Any memory allocation through ``rte_malloc()``,
``rte_memzone_reserve()`` or other methods can potentially result in more
hugepages being reserved from the system. Similarly, any memory deallocation can
potentially result in hugepages being released back to the system.

Memory allocated in this mode is not guaranteed to be IOVA-contiguous. If large
chunks of IOVA-contiguous memory are required (with "large" defined as "more than
one page"), it is recommended to either use the VFIO driver for all physical
devices (so that IOVA and VA addresses can be the same, thereby bypassing physical
addresses entirely), or use legacy memory mode.

For chunks of memory which must be IOVA-contiguous, it is recommended to use the
``rte_memzone_reserve()`` function with the ``RTE_MEMZONE_IOVA_CONTIG`` flag
specified. This way, the memory allocator will ensure that, whatever memory mode
is in use, either the reserved memory will satisfy the requirements, or the
allocation will fail (see the example sketch below).

There is no need to preallocate any memory at startup using the ``-m`` or
``--socket-mem`` command-line parameters. However, it is still possible to do so,
in which case the preallocated memory will be "pinned" (i.e. will never be released
by the application back to the system). It will be possible to allocate more
hugepages, and deallocate those, but any preallocated pages will not be freed.
If neither ``-m`` nor ``--socket-mem`` were specified, no memory will be
preallocated, and all memory will be allocated at runtime, as needed.

Another available option in dynamic memory mode is the
``--single-file-segments`` command-line option. This option will put pages in
single files (per memseg list), as opposed to creating a file per page. This is
normally not needed, but can be useful for use cases like userspace vhost, where
there is a limited number of page file descriptors that can be passed to VirtIO.

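
As a brief illustration of the ``RTE_MEMZONE_IOVA_CONTIG`` usage mentioned above, the following
sketch reserves an IOVA-contiguous memory zone; the zone name and size are arbitrary examples:

.. code-block:: c

   #include <rte_memzone.h>

   /* Reserve 2 MB that is guaranteed to be IOVA-contiguous, regardless of
    * the memory mode in use. The zone name "example_iova_zone" is arbitrary. */
   static const struct rte_memzone *
   reserve_contig_zone(void)
   {
       const struct rte_memzone *mz;

       mz = rte_memzone_reserve("example_iova_zone", 2 * 1024 * 1024,
               SOCKET_ID_ANY, RTE_MEMZONE_IOVA_CONTIG);
       if (mz == NULL)
           return NULL; /* Allocator could not satisfy the constraint. */

       /* mz->addr is the virtual address, mz->iova the IO virtual address. */
       return mz;
   }
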

If the application (or DPDK-internal code, such as device drivers) wishes to
receive notifications about newly allocated memory, it is possible to register
for memory event callbacks via the ``rte_mem_event_callback_register()`` function.
This will call a callback function any time DPDK's memory map has changed.

If the application (or DPDK-internal code, such as device drivers) wishes to be
notified about memory allocations above a specified threshold (and have a chance
to deny them), allocation validator callbacks are also available via the
``rte_mem_alloc_validator_callback_register()`` function.

A default validator callback is provided by EAL, which can be enabled with the
``--socket-limit`` command-line option, providing a simple way to limit the
maximum amount of memory that can be used by a DPDK application.

.. warning::
    The memory subsystem uses DPDK IPC internally, so memory allocations/callbacks
    and IPC must not be mixed: it is not safe to allocate/free memory inside
    memory-related or IPC callbacks, and it is not safe to use IPC inside
    memory-related callbacks. See chapter
    :ref:`Multi-process Support <Multi-process_Support>` for more details about
    DPDK IPC.

Legacy Memory Mode
^^^^^^^^^^^^^^^^^^

This mode is enabled by specifying the ``--legacy-mem`` command-line switch to the
EAL. This switch will have no effect on FreeBSD, as FreeBSD only supports
legacy mode anyway.

This mode mimics the historical behavior of EAL. That is, EAL will reserve all
memory at startup, sort all memory into large IOVA-contiguous chunks, and will
not allow acquiring or releasing hugepages from the system at runtime.

If neither ``-m`` nor ``--socket-mem`` were specified, the entire available
hugepage memory will be preallocated.

Hugepage Allocation Matching
^^^^^^^^^^^^^^^^^^^^^^^^^^^^

This behavior is enabled by specifying the ``--match-allocations`` command-line
switch to the EAL. This switch is Linux-only and not supported with
``--legacy-mem`` nor ``--no-huge``.

Some applications using memory event callbacks may require that hugepages be
freed exactly as they were allocated. These applications may also require
that any allocation from the malloc heap not span across allocations
associated with two different memory event callbacks. Hugepage allocation
matching can be used by these types of applications to satisfy both of these
requirements. This can result in some increased memory usage, which is
very dependent on the memory allocation patterns of the application.

32-bit Support
^^^^^^^^^^^^^^

Additional restrictions are present when running in 32-bit mode. In dynamic
memory mode, by default a maximum of 2 gigabytes of VA space will be preallocated,
and all of it will be on the main lcore NUMA node unless the ``--socket-mem`` flag
is used.

In legacy mode, VA space will only be preallocated for segments that were
requested (plus padding, to keep IOVA-contiguousness).

Maximum Amount of Memory
^^^^^^^^^^^^^^^^^^^^^^^^

All possible virtual memory space that can ever be used for hugepage mapping in
a DPDK process is preallocated at startup, thereby placing an upper limit on how
much memory a DPDK application can have. DPDK memory is stored in segment lists;
each segment is strictly one physical page.
It is possible to change the amount
of virtual memory being preallocated at startup by editing the following config
variables:

* ``RTE_MAX_MEMSEG_LISTS`` controls how many segment lists DPDK can have
* ``RTE_MAX_MEM_MB_PER_LIST`` controls how many megabytes of memory each
  segment list can address
* ``RTE_MAX_MEMSEG_PER_LIST`` controls how many segments each segment list
  can have
* ``RTE_MAX_MEMSEG_PER_TYPE`` controls how many segments each memory type
  can have (where "type" is defined as a "page size + NUMA node" combination)
* ``RTE_MAX_MEM_MB_PER_TYPE`` controls how many megabytes of memory each
  memory type can address
* ``RTE_MAX_MEM_MB`` places a global maximum on the amount of memory
  DPDK can reserve

Normally, these options do not need to be changed.

.. note::

    Preallocated virtual memory is not to be confused with preallocated hugepage
    memory! All DPDK processes preallocate virtual memory at startup. Hugepages
    can later be mapped into that preallocated VA space (if dynamic memory mode
    is enabled), and can optionally be mapped into it at startup.

Hugepage Mapping
^^^^^^^^^^^^^^^^

Below is an overview of the methods used on each OS to obtain hugepages,
explaining why certain limitations and options exist in EAL.
See the user guide for a specific OS for configuration details.

FreeBSD uses the ``contigmem`` kernel module
to reserve a fixed number of hugepages at system start,
which are mapped by EAL at initialization using a specific ``sysctl()``.

Windows EAL allocates hugepages from the OS as needed using the Win32 API,
so the available amount depends on the system load.
It uses the ``virt2phys`` kernel module to obtain physical addresses,
unless running in IOVA-as-VA mode (e.g. forced with ``--iova-mode=va``).

Linux allows selecting any combination of the following:

* use files in hugetlbfs (the default)
  or anonymous mappings (``--in-memory``);
* map each hugepage from its own file (the default)
  or map multiple hugepages from one big file (``--single-file-segments``).

Mapping hugepages from files in hugetlbfs is essential for multi-process,
because secondary processes need to map the same hugepages.
EAL creates files like ``rtemap_0``
in directories specified with the ``--huge-dir`` option
(or in the mount point for a specific hugepage size).
The ``rte`` prefix can be changed using ``--file-prefix``.
This may be needed for running multiple primary processes
that share a hugetlbfs mount point.
Each backing file by default corresponds to one hugepage;
it is opened and locked for the entire time the hugepage is used.
This may exhaust the open files limit (``NOFILE``).
See the :ref:`segment-file-descriptors` section
on how the number of open backing file descriptors can be reduced.

In dynamic memory mode, EAL removes a backing hugepage file
when all pages mapped from it are freed back to the system.
However, backing files may persist after the application terminates
in case of a crash or a leak of DPDK memory (e.g. ``rte_free()`` is missing).
This reduces the number of hugepages available to other processes
as reported by ``/sys/kernel/mm/hugepages/hugepages-*/free_hugepages``.
EAL can remove the backing files after opening them for mapping
if ``--huge-unlink`` is given, to avoid polluting hugetlbfs.
However, since it disables multi-process anyway,
using anonymous mapping (``--in-memory``) is recommended instead.

The :ref:`EAL memory allocator <malloc>` relies on hugepages being zero-filled.
Hugepages are cleared by the kernel when a file in hugetlbfs, or a part of it,
is mapped for the first time system-wide,
to prevent data leaks from previous users of the same hugepage.
EAL ensures this behavior by removing existing backing files at startup
and by recreating them before opening them for mapping (as a precaution).

Anonymous mapping does not allow a multi-process architecture.
This mode does not use hugetlbfs
and thus does not require root permissions for memory management
(the limit on the amount of locked memory, ``MEMLOCK``, still applies).
It is free of filename conflicts and leftover file issues.
If ``memfd_create(2)`` is supported both at build and run time,
the DPDK memory manager can provide file descriptors for memory segments,
which are required for VirtIO with the vhost-user backend.
This can exhaust the open files limit (``NOFILE``)
despite not creating any visible files.
See the :ref:`segment-file-descriptors` section
on how the number of open file descriptors used by EAL can be reduced.

.. _segment-file-descriptors:

Segment File Descriptors
^^^^^^^^^^^^^^^^^^^^^^^^

On Linux, in most cases, EAL will store segment file descriptors internally. This
can become a problem when using smaller page sizes due to underlying limitations
of the ``glibc`` library. For example, Linux API calls such as ``select()`` may
not work correctly because ``glibc`` does not support more than a certain number
of file descriptors.

There are two possible solutions for this problem. The recommended solution is
to use ``--single-file-segments`` mode, as that mode will not use a file
descriptor per page, and it will keep compatibility with VirtIO with the
vhost-user backend. This option is not available when using ``--legacy-mem``
mode.

Another option is to use bigger page sizes. Since fewer pages are required to
cover the same memory area, fewer file descriptors will be stored internally
by EAL.

Support for Externally Allocated Memory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

It is possible to use externally allocated memory in DPDK. There are two ways in
which using externally allocated memory can work: the malloc heap APIs, and
manual memory management.

+ Using heap APIs for externally allocated memory

Using a set of malloc heap APIs is the recommended way to use externally
allocated memory in DPDK. In this way, support for externally allocated memory
is implemented through overloading the socket ID - externally allocated heaps
will have socket IDs that would be considered invalid under normal
circumstances. Requesting an allocation to take place from a specified
externally allocated memory area is a matter of supplying the correct socket ID
to the DPDK allocator, either directly (e.g. through a call to ``rte_malloc``) or
indirectly (through data structure-specific allocation APIs such as
``rte_ring_create``). Using these APIs also ensures that mapping of externally
allocated memory for DMA is performed on any memory segment that is added
to a DPDK malloc heap.

Since there is no way DPDK can verify whether memory is available or valid, this
responsibility falls on the shoulders of the user.
All multi-process
synchronization is also the user's responsibility, as well as ensuring that all
calls to add/attach/detach/remove memory are done in the correct order. It is
not required to attach to a memory area in all processes - only attach to memory
areas as needed.

The expected workflow is as follows:

* Get a pointer to memory area
* Create a named heap
* Add memory area(s) to the heap
    - If IOVA table is not specified, IOVA addresses will be assumed to be
      unavailable, and DMA mappings will not be performed
    - Other processes must attach to the memory area before they can use it
* Get socket ID used for the heap
* Use normal DPDK allocation procedures, using supplied socket ID
* If memory area is no longer needed, it can be removed from the heap
    - Other processes must detach from this memory area before it can be removed
* If heap is no longer needed, remove it
    - Socket ID will become invalid and will not be reused

For more information, please refer to the ``rte_malloc`` API documentation,
specifically the ``rte_malloc_heap_*`` family of function calls.

+ Using externally allocated memory without DPDK APIs

While using heap APIs is the recommended method of using externally allocated
memory in DPDK, there are certain use cases where the overhead of the DPDK heap
API is undesirable - for example, when manual memory management is performed on
an externally allocated area. To support use cases where externally allocated
memory will not be used as part of the normal DPDK workflow, there is also
another set of APIs under the ``rte_extmem_*`` namespace.

These APIs are (as their name implies) intended to allow registering or
unregistering externally allocated memory to/from DPDK's internal page table, to
allow APIs like ``rte_mem_virt2memseg`` etc. to work with externally allocated
memory. Memory added this way will not be available for any regular DPDK
allocators; DPDK will leave this memory for the user application to manage.

The expected workflow is as follows:

* Get a pointer to memory area
* Register memory within DPDK
    - If IOVA table is not specified, IOVA addresses will be assumed to be
      unavailable
    - Other processes must attach to the memory area before they can use it
* Perform DMA mapping with ``rte_dev_dma_map`` if needed
* Use the memory area in your application
* If memory area is no longer needed, it can be unregistered
    - If the area was mapped for DMA, unmapping must be performed before
      unregistering memory
    - Other processes must detach from the memory area before it can be
      unregistered

Since these externally allocated memory areas are not managed by DPDK, it is
up to the user application to decide how to use them and what to do with them
once they are registered.

Per-lcore and Shared Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

    lcore refers to a logical execution unit of the processor, sometimes called a hardware *thread*.

Shared variables are the default behavior.
Per-lcore variables are implemented using *Thread Local Storage* (TLS) to provide per-thread local storage.

Logs
~~~~

A logging API is provided by EAL.
By default, in a Linux application, logs are sent to syslog and also to the console.
However, the log function can be overridden by the user to use a different logging mechanism.

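
As a brief, hedged illustration of this logging API, the sketch below registers an
application-specific log type, adjusts its level, and redirects the output stream;
the log type name ``user.example`` is an arbitrary choice:

.. code-block:: c

   #include <stdio.h>

   #include <rte_log.h>

   /* Identifier returned for the dynamically registered log type. */
   static int example_logtype;

   static void
   setup_logging(void)
   {
       example_logtype = rte_log_register("user.example");
       if (example_logtype >= 0)
           rte_log_set_level(example_logtype, RTE_LOG_DEBUG);

       /* Optionally redirect all DPDK logs to an application-owned stream
        * instead of the default syslog/console output. */
       rte_openlog_stream(stderr);

       /* Log through a built-in log type... */
       RTE_LOG(INFO, USER1, "logging initialized\n");
       /* ...or through the dynamically registered one. */
       rte_log(RTE_LOG_DEBUG, example_logtype, "example debug message\n");
   }
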

Trace and Debug Functions
^^^^^^^^^^^^^^^^^^^^^^^^^

There are some debug functions to dump the stack in glibc.
The rte_panic() function can voluntarily provoke a SIGABRT,
which can trigger the generation of a core file, readable by gdb.

CPU Feature Identification
~~~~~~~~~~~~~~~~~~~~~~~~~~

The EAL can query the CPU at runtime (using the rte_cpu_get_features() function) to determine which CPU features are available.

User Space Interrupt Event
~~~~~~~~~~~~~~~~~~~~~~~~~~

+ User Space Interrupt and Alarm Handling in Host Thread

The EAL creates a host thread to poll the UIO device file descriptors to detect the interrupts.
Callbacks can be registered or unregistered by the EAL functions for a specific interrupt event
and are called in the host thread asynchronously.
The EAL also allows timed callbacks to be used in the same way as for NIC interrupts.

.. note::

    In DPDK PMD, the only interrupts handled by the dedicated host thread are those for link status change
    (link up and link down notification) and for sudden device removal.


+ RX Interrupt Event

The receive and transmit routines provided by each PMD are not limited to executing in polling thread mode.
To avoid idle polling when throughput is tiny, it is useful to pause the polling and wait until a wake-up event happens.
The RX interrupt is the first choice for such a wake-up event, but it probably will not be the only one.

EAL provides the event APIs for this event-driven thread mode.
Taking Linux as an example, the implementation relies on epoll. Each thread can monitor an epoll instance
in which all the wake-up events' file descriptors are added. The event file descriptors are created and mapped to
the interrupt vectors according to the UIO/VFIO spec.
On FreeBSD, kqueue would be the alternative, but it is not implemented yet.

EAL initializes the mapping between event file descriptors and interrupt vectors, while each device initializes the mapping
between interrupt vectors and queues. In this way, the EAL is actually unaware of the interrupt cause on a specific vector.
The eth_dev driver takes responsibility for programming the latter mapping.

.. note::

    Per-queue RX interrupt events are only allowed in VFIO, which supports multiple MSI-X vectors. In UIO, the RX interrupt
    shares the same vector with other interrupt causes. In this case, when the RX interrupt and LSC (link status change)
    interrupt are both enabled (intr_conf.lsc == 1 && intr_conf.rxq == 1), only the former is available.

RX interrupts are controlled/enabled/disabled by the ethdev APIs - ``rte_eth_dev_rx_intr_*``. They return failure if the PMD
does not support them yet. The intr_conf.rxq flag is used to turn on the per-device RX interrupt capability.

+ Device Removal Event

This event is triggered by a device being removed at a bus level. Its
underlying resources may have been made unavailable (i.e. PCI mappings
unmapped). The PMD must make sure that on such an occurrence, the application
can still safely use its callbacks.

This event can be subscribed to in the same way one would subscribe to a link
status change event. The execution context is thus the same, i.e. it is the
dedicated interrupt host thread.

Considering this, it is likely that an application would want to close a
device having emitted a Device Removal Event.
In such a case, calling
``rte_eth_dev_close()`` can trigger it to unregister its own Device Removal Event
callback. Care must be taken not to close the device from the interrupt handler
context. It is necessary to reschedule such a closing operation.

Block list
~~~~~~~~~~

The EAL PCI device block list functionality can be used to mark certain NIC ports as unavailable,
so they are ignored by the DPDK.
The ports to be blocked are identified using the PCIe* description (Domain:Bus:Device.Function).

Misc Functions
~~~~~~~~~~~~~~

Locks and atomic operations are per-architecture (i686 and x86_64).

IOVA Mode Detection
~~~~~~~~~~~~~~~~~~~

IOVA mode is selected by considering what the currently usable devices on the
system require and/or support.

On FreeBSD, RTE_IOVA_PA is always the default. On Linux, the IOVA mode is
detected based on a 2-step heuristic detailed below.

For the first step, EAL asks each bus its requirement in terms of IOVA mode
and decides on a preferred IOVA mode.

- if all buses report RTE_IOVA_PA, then the preferred IOVA mode is RTE_IOVA_PA,
- if all buses report RTE_IOVA_VA, then the preferred IOVA mode is RTE_IOVA_VA,
- if all buses report RTE_IOVA_DC, meaning no bus expressed a preference, then
  the preferred mode is RTE_IOVA_DC,
- if the buses disagree (at least one wants RTE_IOVA_PA and at least one wants
  RTE_IOVA_VA), then the preferred IOVA mode is RTE_IOVA_DC (see below with the
  check on Physical Addresses availability).

If the buses have expressed no preference on which IOVA mode to pick, then a
default is selected using the following logic:

- if physical addresses are not available, RTE_IOVA_VA mode is used
- if /sys/kernel/iommu_groups is not empty, RTE_IOVA_VA mode is used
- otherwise, RTE_IOVA_PA mode is used

In the case where the buses disagreed on their preferred IOVA mode, some of the
buses will not work because of this decision.

The second step checks if the preferred mode complies with the Physical
Addresses availability, since those are only available to the root user in recent
kernels. Namely, if the preferred mode is RTE_IOVA_PA but there is no access to
Physical Addresses, then EAL init fails early, since later probing of the
devices would fail anyway.

.. note::

    The RTE_IOVA_VA mode is preferred as the default in most cases for the
    following reasons:

    - All drivers are expected to work in RTE_IOVA_VA mode, irrespective of
      physical address availability.
    - By default, the mempool first asks for IOVA-contiguous memory using
      ``RTE_MEMZONE_IOVA_CONTIG``. This is slow in RTE_IOVA_PA mode and it may
      affect the application boot time.
    - It is easy to enable use cases needing large amounts of IOVA-contiguous
      memory with IOVA as VA mode.

    It is expected that all PCI drivers work in both RTE_IOVA_PA and
    RTE_IOVA_VA modes.

    If a PCI driver does not support RTE_IOVA_PA mode, the
    ``RTE_PCI_DRV_NEED_IOVA_AS_VA`` flag is used to dictate that this PCI
    driver can only work in RTE_IOVA_VA mode.

    When the KNI kernel module is detected, RTE_IOVA_PA mode is preferred as a
    performance penalty is expected in RTE_IOVA_VA mode.


IOVA Mode Configuration
~~~~~~~~~~~~~~~~~~~~~~~

Auto detection of the IOVA mode, based on probing the bus and IOMMU configuration, may not report
the desired addressing mode when virtual devices that are not directly attached to the bus are present.
To facilitate forcing the IOVA mode to a specific value, the EAL command-line option ``--iova-mode`` can
be used to select either physical addressing ('pa') or virtual addressing ('va').

.. _max_simd_bitwidth:


Max SIMD bitwidth
~~~~~~~~~~~~~~~~~

The EAL provides a single setting to limit the max SIMD bitwidth used by DPDK,
which is used in determining the vector path, if any, chosen by a component.
The value can be set at runtime by an application using the
``rte_vect_set_max_simd_bitwidth(uint16_t bitwidth)`` function,
which should only be called once at initialization, before EAL init.
The value can be overridden by the user using the EAL command-line option ``--force-max-simd-bitwidth``.

When choosing a vector path, along with checking the CPU feature support,
the value of the max SIMD bitwidth must also be checked, and can be retrieved using the
``rte_vect_get_max_simd_bitwidth()`` function.
The value should be compared against the enum values for accepted max SIMD bitwidths:

.. code-block:: c

   enum rte_vect_max_simd {
       RTE_VECT_SIMD_DISABLED = 64,
       RTE_VECT_SIMD_128 = 128,
       RTE_VECT_SIMD_256 = 256,
       RTE_VECT_SIMD_512 = 512,
       RTE_VECT_SIMD_MAX = INT16_MAX + 1,
   };

   if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_512) {
       /* Take AVX-512 vector path */
   } else if (rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_256) {
       /* Take AVX2 vector path */
   }


Memory Segments and Memory Zones (memzone)
------------------------------------------

The mapping of physical memory is provided by this feature in the EAL.
As physical memory can have gaps, the memory is described in a table of descriptors,
and each descriptor (called rte_memseg) describes a physical page.

On top of this, the memzone allocator's role is to reserve contiguous portions of physical memory.
These zones are identified by a unique name when the memory is reserved.

The rte_memzone descriptors are also located in the configuration structure.
This structure is accessed using rte_eal_get_configuration().
The lookup (by name) of a memory zone returns a descriptor containing the physical address of the memory zone.

Memory zones can be reserved with specific start address alignment by supplying the align parameter
(by default, they are aligned to the cache line size).
The alignment value should be a power of two and not less than the cache line size (64 bytes).
Memory zones can also be reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.

Both memsegs and memzones are stored using ``rte_fbarray`` structures. Please
refer to the *DPDK API Reference* for more information.


Multiple pthread
----------------

DPDK usually pins one pthread per core to avoid the overhead of task switching.
This allows for significant performance gains, but lacks flexibility and is not always efficient.

Power management helps to improve the CPU efficiency by limiting the CPU runtime frequency.
Alternatively, it is possible to utilize the idle cycles available to take advantage of
the full capability of the CPU.


By taking advantage of cgroups, the CPU utilization quota can be simply assigned.
This provides another way to improve CPU efficiency; however, there is a prerequisite:
DPDK must handle the context switching between multiple pthreads per core.

For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.

EAL pthread and lcore Affinity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The term "lcore" refers to an EAL thread, which is really a Linux/FreeBSD pthread.
"EAL pthreads" are created and managed by EAL and execute the tasks issued by *remote_launch*.
In each EAL pthread, there is a TLS (Thread Local Storage) variable called *_lcore_id* for unique identification.
As EAL pthreads usually bind 1:1 to the physical CPU, the *_lcore_id* is typically equal to the CPU ID.

When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL pthread and a specified physical CPU.
The EAL pthread may have affinity to a CPU set, and as such the *_lcore_id* will not be the same as the CPU ID.
For this reason, there is an EAL long option '--lcores' defined to assign the CPU affinity of lcores.
For a specified lcore ID or ID group, the option allows setting the CPU set for that EAL pthread.

The format pattern:
	--lcores='<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]'

'lcore_set' and 'cpu_set' can be a single number, a range or a group.

A number is a "digit([0-9]+)"; a range is "<number>-<number>"; a group is "(<number|range>[,<number|range>,...])".

If a '\@cpu_set' value is not supplied, the value of 'cpu_set' will default to the value of 'lcore_set'.

 ::

    For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'" which means starting 9 EAL threads;
        lcore 0 runs on cpuset 0x41 (cpu 0,6);
        lcore 1 runs on cpuset 0x2 (cpu 1);
        lcore 2 runs on cpuset 0xe0 (cpu 5,6,7);
        lcore 3,4,5 runs on cpuset 0x5 (cpu 0,2);
        lcore 6 runs on cpuset 0x41 (cpu 0,6);
        lcore 7 runs on cpuset 0x80 (cpu 7);
        lcore 8 runs on cpuset 0x100 (cpu 8).

Using this option, for each given lcore ID, the associated CPUs can be assigned.
It is also compatible with the pattern of the corelist ('-l') option.

non-EAL pthread support
~~~~~~~~~~~~~~~~~~~~~~~

It is possible to use the DPDK execution context with any user pthread (a.k.a. non-EAL pthreads).
There are two kinds of non-EAL pthreads:

- a registered non-EAL pthread with a valid *_lcore_id* that was successfully assigned by calling ``rte_thread_register()``,
- a non-registered non-EAL pthread with LCORE_ID_ANY.

For a non-registered non-EAL pthread (with *_lcore_id* set to LCORE_ID_ANY), some libraries will use an alternative unique ID (e.g. TID), some will not be impacted at all, and some will work but with limitations (e.g. the timer and mempool libraries).

All these impacts are mentioned in the :ref:`known_issue_label` section.

Public Thread API
~~~~~~~~~~~~~~~~~

There are two public APIs, ``rte_thread_set_affinity()`` and ``rte_thread_get_affinity()``, introduced for threads.
When they're used in any pthread context, the Thread Local Storage (TLS) variables will be set/retrieved.

Those TLS variables include *_cpuset* and *_socket_id*:

* *_cpuset* stores the CPUs bitmap to which the pthread is affinitized.

* *_socket_id* stores the NUMA node of the CPU set. If the CPUs in the CPU set belong to different NUMA nodes, the *_socket_id* will be set to SOCKET_ID_ANY.

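
The following sketch ties the two previous sections together: a user-created pthread registers
itself with EAL and then sets its affinity through the public API. It assumes a DPDK version that
provides ``rte_thread_register()``; the CPU numbers are arbitrary example values:

.. code-block:: c

   #define _GNU_SOURCE /* for CPU_ZERO/CPU_SET on Linux */
   #include <pthread.h>
   #include <stdio.h>

   #include <rte_lcore.h>

   /* Body of a user-created (non-EAL) pthread. It registers itself with EAL
    * to obtain a valid _lcore_id, then pins itself to CPUs 2 and 3. */
   static void *
   user_thread_main(void *arg)
   {
       rte_cpuset_t cpuset;
       (void)arg;

       if (rte_thread_register() != 0)
           return NULL; /* No free lcore ID was available. */

       CPU_ZERO(&cpuset);
       CPU_SET(2, &cpuset);
       CPU_SET(3, &cpuset);

       /* Sets the affinity and updates the per-thread _cpuset and
        * _socket_id TLS variables. */
       if (rte_thread_set_affinity(&cpuset) != 0)
           printf("failed to set affinity\n");

       printf("lcore id %u, socket %d\n", rte_lcore_id(), (int)rte_socket_id());

       rte_thread_unregister();
       return NULL;
   }
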

Control Thread API
~~~~~~~~~~~~~~~~~~

It is possible to create Control Threads using the public API
``rte_ctrl_thread_create()``.
Those threads can be used for management/infrastructure tasks and are used
internally by DPDK for multi-process support and interrupt handling.

Those threads will be scheduled on the CPUs that are part of the original process CPU
affinity, from which the dataplane and service lcores are excluded.

For example, on an 8-CPU system, starting a DPDK application with -l 2,3
(dataplane cores), then depending on the affinity configuration (which can be
controlled with tools like taskset on Linux or cpuset on FreeBSD):

- with no affinity configuration, the Control Threads will end up on
  CPUs 0-1 and 4-7.
- with affinity restricted to 2-4, the Control Threads will end up on
  CPU 4.
- with affinity restricted to 2-3, the Control Threads will end up on
  CPU 2 (main lcore, which is the default when no CPU is available).

.. _known_issue_label:

Known Issues
~~~~~~~~~~~~

+ rte_mempool

  The rte_mempool uses a per-lcore cache inside the mempool.
  For unregistered non-EAL pthreads, ``rte_lcore_id()`` will not return a valid number.
  So for now, when rte_mempool is used with unregistered non-EAL pthreads, the put/get operations will bypass the default mempool cache, and there is a performance penalty because of this bypass.
  Only user-owned external caches can be used in an unregistered non-EAL context, in conjunction with ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()``, which accept an explicit cache parameter.

+ rte_ring

  rte_ring supports multi-producer enqueue and multi-consumer dequeue.
  However, it is non-preemptive; this has a knock-on effect of making rte_mempool non-preemptible.

  .. note::

    The "non-preemptive" constraint means:

    - a pthread doing multi-producer enqueues on a given ring must not
      be preempted by another pthread doing a multi-producer enqueue on
      the same ring.
    - a pthread doing multi-consumer dequeues on a given ring must not
      be preempted by another pthread doing a multi-consumer dequeue on
      the same ring.

    Bypassing this constraint may cause the 2nd pthread to spin until the 1st one is scheduled again.
    Moreover, if the 1st pthread is preempted by a context that has a higher priority, it may even cause a deadlock.

  This means that use cases involving preemptible pthreads should consider using rte_ring carefully.

  1. It CAN be used for preemptible single-producer and single-consumer use cases.

  2. It CAN be used for non-preemptible multi-producer and preemptible single-consumer use cases.

  3. It CAN be used for preemptible single-producer and non-preemptible multi-consumer use cases.

  4. It MAY be used by preemptible multi-producer and/or preemptible multi-consumer pthreads whose scheduling policies are all SCHED_OTHER (cfs), SCHED_IDLE or SCHED_BATCH. The user SHOULD be aware of the performance penalty before using it.

  5. It MUST not be used by multi-producer/consumer pthreads whose scheduling policies are SCHED_FIFO or SCHED_RR.

  Alternatively, applications can use the lock-free stack mempool handler. When
  considering this handler, note that:

  - It is currently limited to the aarch64 and x86_64 platforms, because it uses
    an instruction (16-byte compare-and-swap) that is not yet available on other
    platforms.
  - It has worse average-case performance than the non-preemptive rte_ring, but
    software caching (e.g. the mempool cache) can mitigate this by reducing the
    number of stack accesses.

+ rte_timer

  Running ``rte_timer_manage()`` on an unregistered non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.

+ rte_log

  In unregistered non-EAL pthreads, there is no per-thread loglevel and logtype; global loglevels are used.

+ misc

  The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in an unregistered non-EAL pthread.

cgroup control
~~~~~~~~~~~~~~

The following is a simple example of cgroup control usage: there are two pthreads (t0 and t1) doing packet I/O on the same core ($CPU).
We expect only 50% of CPU to be spent on packet I/O.

  .. code-block:: console

    mkdir /sys/fs/cgroup/cpu/pkt_io
    mkdir /sys/fs/cgroup/cpuset/pkt_io

    echo $cpu > /sys/fs/cgroup/cpuset/pkt_io/cpuset.cpus

    echo $t0 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t0 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    echo $t1 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t1 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    cd /sys/fs/cgroup/cpu/pkt_io
    echo 100000 > cpu.cfs_period_us
    echo 50000 > cpu.cfs_quota_us

.. _malloc:

Malloc
------

The EAL provides a malloc API to allocate any-sized memory.

The objective of this API is to provide malloc-like functions to allow
allocation from hugepage memory and to facilitate application porting.
The *DPDK API Reference* manual describes the available functions.

Typically, these kinds of allocations should not be done in data plane
processing because they are slower than pool-based allocation and make
use of locks within the allocation and free paths.
However, they can be used in configuration code.

Refer to the rte_malloc() function description in the *DPDK API Reference*
manual for more information.


Alignment and NUMA Constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The rte_malloc() function takes an align argument that can be used to request a
memory area that is aligned on a multiple of this value (which must be a power of two).

On systems with NUMA support, a call to the rte_malloc() function will return
memory that has been allocated on the NUMA socket of the core which made the call.
A set of APIs is also provided to allow memory to be explicitly allocated on a
specific NUMA socket directly, or on the NUMA socket where another core is
located, in the case where the memory is to be used by a logical core other than
the one doing the memory allocation.

Use Cases
~~~~~~~~~

This API is meant to be used by an application that requires malloc-like
functions at initialization time.

For allocating/freeing data at runtime, in the fast-path of an application,
the memory pool library should be used instead.

Internal Implementation
~~~~~~~~~~~~~~~~~~~~~~~

Data Structures
^^^^^^^^^^^^^^^

There are two data structure types used internally in the malloc library:

* struct malloc_heap - used to track free space on a per-socket basis

* struct malloc_elem - the basic element of allocation and free-space
  tracking inside the library.

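
The following is a simplified, illustrative sketch of these two structures, based only on the
fields described in the subsections below; the real definitions live in the internal
``malloc_heap.h`` and ``malloc_elem.h`` headers and contain additional fields:

.. code-block:: c

   /* Simplified, illustrative definitions only - not the actual DPDK code. */
   #include <stddef.h>
   #include <stdint.h>

   #include <rte_spinlock.h>

   enum elem_state {
       ELEM_FREE = 0, /* block is available for allocation */
       ELEM_BUSY,     /* block is currently allocated */
       ELEM_PAD       /* dummy header at the end of start-of-block padding */
   };

   struct malloc_elem;

   struct malloc_heap {
       rte_spinlock_t lock;           /* protects the free list */
       struct malloc_elem *free_head; /* first element of the free list */
       struct malloc_elem *first;     /* first element in the heap */
       struct malloc_elem *last;      /* last element in the heap */
   };

   struct malloc_elem {
       struct malloc_heap *heap; /* heap this block was allocated from */
       struct malloc_elem *prev; /* previous element/block in memory */
       struct malloc_elem *next; /* next element/block in memory */
       struct {
           struct malloc_elem *prev, *next;
       } free_list;              /* linkage within the heap's free list */
       enum elem_state state;    /* FREE, BUSY or PAD */
       uint32_t pad;             /* length of padding at start of the block */
       size_t size;              /* size of the data block, incl. header */
   };
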

Structure: malloc_heap
""""""""""""""""""""""

The malloc_heap structure is used to manage free space on a per-socket basis.
Internally, there is one heap structure per NUMA node, which allows us to
allocate memory to a thread based on the NUMA node on which this thread runs.
While this does not guarantee that the memory will be used on that NUMA node,
it is no worse than a scheme where the memory is always allocated on a fixed
or random node.

The key fields of the heap structure and their function are described below
(see also the diagram that follows):

* lock - the lock field is needed to synchronize access to the heap.
  Given that the free space in the heap is tracked using a linked list,
  we need a lock to prevent two threads manipulating the list at the same time.

* free_head - this points to the first element in the list of free nodes for
  this malloc heap.

* first - this points to the first element in the heap.

* last - this points to the last element in the heap.

.. _figure_malloc_heap:

.. figure:: img/malloc_heap.*

   Example of a malloc heap and malloc elements within the malloc library


.. _malloc_elem:

Structure: malloc_elem
""""""""""""""""""""""

The malloc_elem structure is used as a generic header structure for various
blocks of memory.
It is used in two different ways - both shown in the diagram above:

#. As a header on a block of free or allocated memory - normal case

#. As a padding header inside a block of memory

The most important fields in the structure and how they are used are described below.

The malloc heap is a doubly-linked list, where each element keeps track of its
previous and next elements. Due to the fact that hugepage memory can come and
go, neighboring malloc elements may not necessarily be adjacent in memory.
Also, since a malloc element may span multiple pages, its contents may not
necessarily be IOVA-contiguous either - each malloc element is only guaranteed
to be virtually contiguous.

.. note::

    If the usage of a particular field in one of the above usages is not
    described, the field can be assumed to have an undefined value in that
    situation; for example, for padding headers only the "state" and "pad"
    fields have valid values.

* heap - this pointer is a reference back to the heap structure from which
  this block was allocated.
  It is used for normal memory blocks when they are being freed, to add the
  newly-freed block to the heap's free-list.

* prev - this pointer points to the previous header element/block in memory.
  When freeing a block, this pointer is used to reference the previous block to
  check if that block is also free. If so, and the two blocks are immediately
  adjacent to each other, then the two free blocks are merged to form a single
  larger block.

* next - this pointer points to the next header element/block in memory. When
  freeing a block, this pointer is used to reference the next block to check
  if that block is also free. If so, and the two blocks are immediately
  adjacent to each other, then the two free blocks are merged to form a single
  larger block.

* free_list - this is a structure pointing to previous and next elements in
  this heap's free list.
  It is only used in normal memory blocks; on ``malloc()`` to find a suitable
  free block to allocate, and on ``free()`` to add the newly freed element to
  the free-list.

* state - This field can have one of three values: ``FREE``, ``BUSY`` or
  ``PAD``.
  The former two indicate the allocation state of a normal memory block,
  and the latter indicates that the element structure is a dummy structure
  at the end of the start-of-block padding, i.e. where the start of the data
  within a block is not at the start of the block itself, due to alignment
  constraints.
  In that case, the pad header is used to locate the actual malloc element
  header for the block.

* pad - this holds the length of the padding present at the start of the block.
  In the case of a normal block header, it is added to the address of the end
  of the header to give the address of the start of the data area, i.e. the
  value passed back to the application on a malloc.
  Within a dummy header inside the padding, this same value is stored, and is
  subtracted from the address of the dummy header to yield the address of the
  actual block header.

* size - the size of the data block, including the header itself.

Memory Allocation
^^^^^^^^^^^^^^^^^

On EAL initialization, all preallocated memory segments are set up as part of the
malloc heap. This setup involves placing an :ref:`element header<malloc_elem>`
with ``FREE`` at the start of each virtually contiguous segment of memory.
The ``FREE`` element is then added to the ``free_list`` for the malloc heap.

This setup also happens whenever memory is allocated at runtime (if supported),
in which case newly allocated pages are also added to the heap, merging with any
adjacent free segments if there are any.

When an application makes a call to a malloc-like function, the malloc function
will first index the ``lcore_config`` structure for the calling thread, and
determine the NUMA node of that thread.
The NUMA node is used to index the array of ``malloc_heap`` structures which is
passed as a parameter to the ``heap_alloc()`` function, along with the
requested size, type, alignment and boundary parameters.

The ``heap_alloc()`` function will scan the free_list of the heap, and attempt
to find a free block suitable for storing data of the requested size, with the
requested alignment and boundary constraints.

When a suitable free element has been identified, the pointer to be returned
to the user is calculated.
The cache-line of memory immediately preceding this pointer is filled with a
struct malloc_elem header.
Because of alignment and boundary constraints, there could be free space at
the start and/or end of the element, resulting in the following behavior:

#. Check for trailing space.
   If the trailing space is big enough, i.e. > 128 bytes, then the free element
   is split.
   If it is not, then we just ignore it (wasted space).

#. Check for space at the start of the element.
   If the space at the start is small, i.e. <= 128 bytes, then a pad header is
   used, and the remaining space is wasted.
   If, however, the remaining space is greater, then the free element is split.


The advantage of allocating the memory from the end of the existing element is
that no adjustment of the free list needs to take place - the existing element
on the free list just has its size value adjusted, and the next/previous elements
have their "prev"/"next" pointers redirected to the newly created element.

In the case when there is not enough memory in the heap to satisfy an allocation
request, EAL will attempt to allocate more memory from the system (if supported)
and, following a successful allocation, will retry reserving the memory again. In
a multiprocessing scenario, all primary and secondary processes will synchronize
their memory maps to ensure that any valid pointer to DPDK memory is guaranteed
to be valid at all times in all currently running processes.

Failure to synchronize memory maps in one of the processes will cause allocation
to fail, even though some of the processes may have allocated the memory
successfully. The memory is not added to the malloc heap unless the primary
process has ensured that all other processes have mapped this memory successfully.

Any successful allocation event will trigger a callback, for which user
applications and other DPDK subsystems can register. Additionally, validation
callbacks will be triggered before allocation if the newly allocated memory would
exceed the threshold set by the user, giving a chance to allow or deny the allocation.

.. note::

    Any allocation of new pages has to go through the primary process. If the
    primary process is not active, no memory will be allocated even if it was
    theoretically possible to do so. This is because the primary process's memory
    map acts as an authority on what should or should not be mapped, while each
    secondary process has its own, local memory map. Secondary processes do not
    update the shared memory map; they only copy its contents to their local
    memory map.

Freeing Memory
^^^^^^^^^^^^^^

To free an area of memory, the pointer to the start of the data area is passed
to the free function.
The size of the ``malloc_elem`` structure is subtracted from this pointer to get
the element header for the block.
If this header is of type ``PAD`` then the pad length is further subtracted from
the pointer to get the proper element header for the entire block.

From this element header, we get pointers to the heap from which the block was
allocated and to where it must be freed, as well as the pointer to the previous
and next elements. These next and previous elements are then checked to see if
they are also ``FREE`` and are immediately adjacent to the current one, and if
so, they are merged with the current element. This means that we can never have
two ``FREE`` memory blocks adjacent to one another, as they are always merged
into a single block.

If deallocating pages at runtime is supported, and the free element encloses
one or more pages, those pages can be deallocated and be removed from the heap.
If DPDK was started with command-line parameters for preallocating memory
(``-m`` or ``--socket-mem``), then those pages that were allocated at startup
will not be deallocated.

Any successful deallocation event will trigger a callback, for which user
applications and other DPDK subsystems can register.

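
As a brief illustration of these allocation and deallocation event callbacks, the sketch below
registers a callback that is invoked whenever DPDK's memory map changes; the callback name
``example-cb`` is an arbitrary choice:

.. code-block:: c

   #include <stdio.h>

   #include <rte_memory.h>

   /* Called by EAL every time DPDK's memory map changes (pages are mapped in
    * or unmapped). Allocating or freeing DPDK memory inside this callback is
    * not allowed (see the warning in the Dynamic Memory Mode section). */
   static void
   mem_event_cb(enum rte_mem_event event_type, const void *addr, size_t len,
           void *arg)
   {
       (void)arg;

       if (event_type == RTE_MEM_EVENT_ALLOC)
           printf("mapped %zu bytes at %p\n", len, addr);
       else /* RTE_MEM_EVENT_FREE */
           printf("unmapped %zu bytes at %p\n", len, addr);
   }

   /* Register the callback under an application-chosen name; typically done
    * once, after rte_eal_init(). */
   static int
   register_mem_event_cb(void)
   {
       return rte_mem_event_callback_register("example-cb", mem_event_cb, NULL);
   }
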