..  BSD LICENSE
    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

.. _Environment_Abstraction_Layer:

Environment Abstraction Layer
=============================

The Environment Abstraction Layer (EAL) is responsible for gaining access to low-level resources such as hardware and memory space.
It provides a generic interface that hides the environment specifics from the applications and libraries.
It is the responsibility of the initialization routine to decide how to allocate these resources
(that is, memory space, PCI devices, timers, consoles, and so on).

Typical services expected from the EAL are:

* DPDK Loading and Launching:
  The DPDK and its application are linked as a single application and must be loaded by some means
  (a minimal launch sequence is sketched after this list).

* Core Affinity/Assignment Procedures:
  The EAL provides mechanisms for assigning execution units to specific cores as well as creating execution instances.

* System Memory Reservation:
  The EAL facilitates the reservation of different memory zones, for example, physical memory areas for device interactions.

* PCI Address Abstraction: The EAL provides an interface to access PCI address space.

* Trace and Debug Functions: Logs, dump_stack, panic and so on.

* Utility Functions: Spinlocks and atomic counters that are not provided in libc.

* CPU Feature Identification: Determine at runtime if a particular feature, for example, Intel® AVX, is supported.
  Determine if the current CPU supports the feature set that the binary was compiled for.

* Interrupt Handling: Interfaces to register/unregister callbacks to specific interrupt sources.

* Alarm Functions: Interfaces to set/remove callbacks to be run at a specific time.

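To make the loading/launching and core-affinity services concrete, here is a minimal sketch in the spirit of the basic "hello world" pattern: it initializes the EAL from the command-line arguments and launches a worker function on every available lcore. The ``lcore_hello()`` worker name is purely illustrative and error handling is trimmed.

.. code-block:: c

    #include <stdio.h>
    #include <rte_eal.h>
    #include <rte_debug.h>
    #include <rte_launch.h>
    #include <rte_lcore.h>
    #include <rte_common.h>

    /* illustrative worker: runs once on every launched lcore */
    static int
    lcore_hello(__rte_unused void *arg)
    {
        printf("hello from lcore %u\n", rte_lcore_id());
        return 0;
    }

    int
    main(int argc, char **argv)
    {
        unsigned lcore_id;

        /* parse the EAL arguments (coremask, memory, ...), reserve hugepage
         * memory, scan PCI devices and create one pinned pthread per lcore */
        if (rte_eal_init(argc, argv) < 0)
            rte_panic("Cannot init EAL\n");

        /* launch the worker on every slave lcore, then run it on the master */
        RTE_LCORE_FOREACH_SLAVE(lcore_id)
            rte_eal_remote_launch(lcore_hello, NULL, lcore_id);
        lcore_hello(NULL);

        /* wait for all slave lcores to return from the worker function */
        rte_eal_mp_wait_lcore();
        return 0;
    }
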
EAL in a Linux-userland Execution Environment
---------------------------------------------

In a Linux user space environment, the DPDK application runs as a user-space application using the pthread library.
PCI information about devices and address space is discovered through the /sys kernel interface and through kernel modules such as uio_pci_generic or igb_uio.
Refer to the UIO: User-space drivers documentation in the Linux kernel. This memory is mmap'd in the application.

The EAL performs physical memory allocation using mmap() in hugetlbfs (using huge page sizes to increase performance).
This memory is exposed to DPDK service layers such as the :ref:`Mempool Library <Mempool_Library>`.

At this point, the DPDK services layer will be initialized, then through pthread setaffinity calls,
each execution unit will be assigned to a specific logical core to run as a user-level thread.

The time reference is provided by the CPU Time-Stamp Counter (TSC) or by the HPET kernel API through a mmap() call.

Initialization and Core Launching
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Part of the initialization is done by the start function of glibc.
A check is also performed at initialization time to ensure that the micro-architecture type chosen in the config file is supported by the CPU.
Then, the main() function is called. The core initialization and launch is done in rte_eal_init() (see the API documentation).
It consists of calls to the pthread library (more specifically, pthread_self(), pthread_create(), and pthread_setaffinity_np()).

.. _figure_linuxapp_launch:

.. figure:: img/linuxapp_launch.*

   EAL Initialization in a Linux Application Environment


.. note::

    Initialization of objects, such as memory zones, rings, memory pools, lpm tables and hash tables,
    should be done as part of the overall application initialization on the master lcore.
    The creation and initialization functions for these objects are not multi-thread safe.
    However, once initialized, the objects themselves can safely be used in multiple threads simultaneously.

Multi-process Support
~~~~~~~~~~~~~~~~~~~~~

The Linuxapp EAL allows a multi-process as well as a multi-threaded (pthread) deployment model.
See chapter
:ref:`Multi-process Support <Multi-process_Support>` for more details.

Memory Mapping Discovery and Memory Reservation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The allocation of large contiguous physical memory is done using the hugetlbfs kernel filesystem.
The EAL provides an API to reserve named memory zones in this contiguous memory.
The physical address of the reserved memory for that memory zone is also returned to the user by the memory zone reservation API.

.. note::

    Memory reservations done using the APIs provided by rte_malloc are also backed by pages from the hugetlbfs filesystem.

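As an illustrative sketch of how the reservation API is typically used (the zone name and the 1 MB size below are arbitrary), an application can reserve a named zone during initialization and read back both its virtual and physical addresses from the returned descriptor:

.. code-block:: c

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_memzone.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    static const struct rte_memzone *mz;

    static void
    reserve_example_zone(void)
    {
        /* reserve 1 MB of hugepage-backed memory on the caller's NUMA socket,
         * aligned on a cache line by default (flags == 0) */
        mz = rte_memzone_reserve("example_zone", 1 << 20, rte_socket_id(), 0);
        if (mz == NULL)
            rte_panic("Cannot reserve memory zone\n");

        /* the descriptor exposes both the virtual and the physical address */
        printf("zone %s: virt=%p phys=0x%" PRIx64 " len=%zu\n",
               mz->name, mz->addr, (uint64_t)mz->phys_addr, mz->len);
    }
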
Xen Dom0 support without hugetlbfs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The existing memory management implementation is based on the Linux kernel hugepage mechanism.
However, Xen Dom0 does not support hugepages, so a new Linux kernel module, rte_dom0_mm, is added to work around this limitation.

The EAL uses an IOCTL interface to ask the rte_dom0_mm Linux kernel module to allocate memory of a specified size,
and to retrieve the information about all memory segments from the module,
and it uses an MMAP interface to map the allocated memory.
For each memory segment, the physical addresses are contiguous within it, but the actual hardware addresses are only contiguous within each 2 MB chunk.

PCI Access
~~~~~~~~~~

The EAL uses the /sys/bus/pci utilities provided by the kernel to scan the content on the PCI bus.
To access PCI memory, a kernel module called uio_pci_generic provides a /dev/uioX device file
and resource files in /sys
that can be mmap'd to obtain access to PCI address space from the application.
The DPDK-specific igb_uio module can also be used for this. Both drivers use the uio kernel feature (userland driver).

Per-lcore and Shared Variables
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. note::

    lcore refers to a logical execution unit of the processor, sometimes called a hardware *thread*.

Shared variables are the default behavior.
Per-lcore variables are implemented using *Thread Local Storage* (TLS) to provide per-thread local storage.

Logs
~~~~

A logging API is provided by EAL.
By default, in a Linux application, logs are sent to syslog and also to the console.
However, the log function can be overridden by the user to use a different logging mechanism.

Trace and Debug Functions
^^^^^^^^^^^^^^^^^^^^^^^^^

There are some debug functions to dump the stack in glibc.
The rte_panic() function can voluntarily provoke a SIGABRT,
which can trigger the generation of a core file, readable by gdb.

CPU Feature Identification
~~~~~~~~~~~~~~~~~~~~~~~~~~

The EAL can query the CPU at runtime (using the rte_cpu_get_feature() function) to determine which CPU features are available.

User Space Interrupt Event
~~~~~~~~~~~~~~~~~~~~~~~~~~

+ User Space Interrupt and Alarm Handling in Host Thread

The EAL creates a host thread to poll the UIO device file descriptors to detect the interrupts.
Callbacks can be registered or unregistered by the EAL functions for a specific interrupt event
and are called in the host thread asynchronously.
The EAL also allows timed callbacks to be used in the same way as for NIC interrupts.

.. note::

    In DPDK PMD, the only interrupts handled by the dedicated host thread are those for link status change,
    i.e. link up and link down notification.


+ RX Interrupt Event

The receive and transmit routines provided by each PMD are not limited to executing in a polling thread.
To avoid idle polling when the throughput is tiny, it is useful to pause the polling and wait until a wake-up event happens.
The RX interrupt is the first choice for such a wake-up event, but it probably won't be the only one.

EAL provides the event APIs for this event-driven thread mode.
Taking linuxapp as an example, the implementation relies on epoll. Each thread can monitor an epoll instance
in which all the wake-up events' file descriptors are added. The event file descriptors are created and mapped to
the interrupt vectors according to the UIO/VFIO spec.
From bsdapp's perspective, kqueue is the alternative, but it is not implemented yet.

EAL initializes the mapping between event file descriptors and interrupt vectors, while each device initializes the mapping
between interrupt vectors and queues. In this way, EAL is actually unaware of the interrupt cause on a specific vector.
The eth_dev driver takes responsibility for programming the latter mapping.

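The following rough sketch (patterned after the l3fwd-power sample application; the port/queue identifiers and the 10 ms timeout are arbitrary, and error checking is omitted) shows how a polling thread can add a queue's event file descriptor to its per-thread epoll instance, arm the RX interrupt and block until traffic arrives:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_interrupts.h>   /* rte_epoll_event, rte_epoll_wait() */

    #define MAX_RX_EVENTS 8

    static void
    sleep_until_rx(uint8_t port_id, uint16_t queue_id)
    {
        struct rte_epoll_event events[MAX_RX_EVENTS];

        /* map this queue's interrupt event fd into the calling thread's
         * private epoll instance */
        rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
                                  RTE_INTR_EVENT_ADD, NULL);

        /* arm the RX interrupt, then block until it fires or 10 ms elapse */
        rte_eth_dev_rx_intr_enable(port_id, queue_id);
        rte_epoll_wait(RTE_EPOLL_PER_THREAD, events, MAX_RX_EVENTS, 10);

        /* disarm the interrupt before returning to busy polling */
        rte_eth_dev_rx_intr_disable(port_id, queue_id);
    }
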
.. note::

    Per-queue RX interrupt events are only allowed in VFIO, which supports multiple MSI-X vectors. In UIO, the RX interrupt
    shares the same vector with the other interrupt causes. In this case, when RX and LSC (link status change)
    interrupts are both enabled (intr_conf.lsc == 1 && intr_conf.rxq == 1), only the former (the RX interrupt) is capable.

The RX interrupts are controlled/enabled/disabled by the ethdev APIs ``rte_eth_dev_rx_intr_*``. They return an error if the PMD
does not support them yet. The intr_conf.rxq flag is used to turn on the per-device RX interrupt capability.

Blacklisting
~~~~~~~~~~~~

The EAL PCI device blacklist functionality can be used to mark certain NIC ports as blacklisted,
so they are ignored by the DPDK.
The ports to be blacklisted are identified using the PCIe* description (Domain:Bus:Device.Function).

Misc Functions
~~~~~~~~~~~~~~

Locks and atomic operations are per-architecture (i686 and x86_64).

Memory Segments and Memory Zones (memzone)
------------------------------------------

The mapping of physical memory is provided by this feature in the EAL.
As physical memory can have gaps, the memory is described in a table of descriptors,
and each descriptor (called rte_memseg) describes a contiguous portion of memory.

On top of this, the memzone allocator's role is to reserve contiguous portions of physical memory.
These zones are identified by a unique name when the memory is reserved.

The rte_memzone descriptors are also located in the configuration structure.
This structure is accessed using rte_eal_get_configuration().
The lookup (by name) of a memory zone returns a descriptor containing the physical address of the memory zone.

Memory zones can be reserved with specific start address alignment by supplying the align parameter
(by default, they are aligned to the cache line size).
The alignment value should be a power of two and not less than the cache line size (64 bytes).
Memory zones can also be reserved from either 2 MB or 1 GB hugepages, provided that both are available on the system.

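A short, illustrative sketch of these options (the zone name, size, alignment and flags below are arbitrary examples): a zone can be reserved with an explicit alignment and a preferred hugepage size, and later retrieved by name from anywhere in the application.

.. code-block:: c

    #include <rte_memzone.h>

    static void
    reserve_and_lookup_example(void)
    {
        const struct rte_memzone *mz;

        /* ask for a 4 MB zone, 2 MB-aligned, preferably backed by 1 GB pages;
         * RTE_MEMZONE_SIZE_HINT_ONLY allows falling back to another page size */
        mz = rte_memzone_reserve_aligned("example_aligned_zone",
                                         4 * 1024 * 1024, SOCKET_ID_ANY,
                                         RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY,
                                         2 * 1024 * 1024);
        if (mz == NULL)
            return;

        /* any other part of the application can find the zone again by name */
        mz = rte_memzone_lookup("example_aligned_zone");
    }
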
Multiple pthread
----------------

DPDK usually pins one pthread per core to avoid the overhead of task switching.
This allows for significant performance gains, but lacks flexibility and is not always efficient.

Power management helps to improve the CPU efficiency by limiting the CPU runtime frequency.
However, alternately it is possible to utilize the idle cycles available to take advantage of
the full capability of the CPU.

By taking advantage of cgroup, the CPU utilization quota can be simply assigned.
This gives another way to improve the CPU efficiency; however, there is a prerequisite:
DPDK must handle the context switching between multiple pthreads per core.

For further flexibility, it is useful to set pthread affinity not only to a CPU but to a CPU set.

EAL pthread and lcore Affinity
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The term "lcore" refers to an EAL thread, which is really a Linux/FreeBSD pthread.
"EAL pthreads" are created and managed by EAL and execute the tasks issued by *remote_launch*.
In each EAL pthread, there is a TLS (Thread Local Storage) variable called *_lcore_id* for unique identification.
As EAL pthreads usually bind 1:1 to the physical CPU, the *_lcore_id* is typically equal to the CPU ID.

When using multiple pthreads, however, the binding is no longer always 1:1 between an EAL pthread and a specified physical CPU.
The EAL pthread may have affinity to a CPU set, and as such the *_lcore_id* will not be the same as the CPU ID.
For this reason, there is an EAL long option '--lcores' defined to assign the CPU affinity of lcores.
For a specified lcore ID or ID group, the option allows setting the CPU set for that EAL pthread.

The format pattern::

    --lcores='<lcore_set>[@cpu_set][,<lcore_set>[@cpu_set],...]'

'lcore_set' and 'cpu_set' can be a single number, a range or a group.

A number is a "digit([0-9]+)"; a range is "<number>-<number>"; a group is "(<number|range>[,<number|range>,...])".

If a '\@cpu_set' value is not supplied, the value of 'cpu_set' will default to the value of 'lcore_set'.

::

    For example, "--lcores='1,2@(5-7),(3-5)@(0,2),(0,6),7-8'" starts 9 EAL threads;
        lcore 0 runs on cpuset 0x41 (cpu 0,6);
        lcore 1 runs on cpuset 0x2 (cpu 1);
        lcore 2 runs on cpuset 0xe0 (cpu 5,6,7);
        lcore 3,4,5 runs on cpuset 0x5 (cpu 0,2);
        lcore 6 runs on cpuset 0x41 (cpu 0,6);
        lcore 7 runs on cpuset 0x80 (cpu 7);
        lcore 8 runs on cpuset 0x100 (cpu 8).

Using this option, for each given lcore ID, the associated CPUs can be assigned.
It is also compatible with the pattern of the corelist ('-l') option.

non-EAL pthread support
~~~~~~~~~~~~~~~~~~~~~~~

It is possible to use the DPDK execution context with any user pthread (a.k.a. non-EAL pthreads).
In a non-EAL pthread, the *_lcore_id* is always LCORE_ID_ANY, which identifies it as not being an EAL thread with a valid, unique *_lcore_id*.
Some libraries will use an alternative unique ID (e.g. TID), some will not be impacted at all, and some will work but with limitations (e.g. timer and mempool libraries).

All these impacts are mentioned in the :ref:`known_issue_label` section.

Public Thread API
~~~~~~~~~~~~~~~~~

There are two public APIs, ``rte_thread_set_affinity()`` and ``rte_thread_get_affinity()``, introduced for threads.
When they're used in any pthread context, the Thread Local Storage (TLS) will be set/get.

Those TLS variables include *_cpuset* and *_socket_id* (a usage sketch follows this list):

* *_cpuset* stores the CPUs bitmap to which the pthread is affinitized.

* *_socket_id* stores the NUMA node of the CPU set. If the CPUs in the CPU set belong to different NUMA nodes, the *_socket_id* will be set to SOCKET_ID_ANY.

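The following brief sketch (assuming a Linux target built with ``_GNU_SOURCE``, where ``rte_cpuset_t`` maps to the glibc ``cpu_set_t``; the chosen CPUs are arbitrary) shows a non-EAL pthread pinning itself to a CPU set and reading the affinity back, which also refreshes the *_cpuset* and *_socket_id* TLS values:

.. code-block:: c

    #include <stdio.h>
    #include <rte_lcore.h>   /* rte_cpuset_t, rte_thread_set_affinity() */

    /* pin the calling (non-EAL) pthread to CPUs 2 and 3 */
    static void
    pin_self_to_cpus_2_and_3(void)
    {
        rte_cpuset_t cpuset;

        CPU_ZERO(&cpuset);
        CPU_SET(2, &cpuset);
        CPU_SET(3, &cpuset);

        /* updates the kernel affinity and the per-thread _cpuset/_socket_id TLS */
        if (rte_thread_set_affinity(&cpuset) != 0)
            fprintf(stderr, "failed to set thread affinity\n");

        /* reads the affinity back from the per-thread _cpuset TLS copy */
        rte_thread_get_affinity(&cpuset);
    }
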
.. _known_issue_label:

Known Issues
~~~~~~~~~~~~

+ rte_mempool

  The rte_mempool uses a per-lcore cache inside the mempool.
  For non-EAL pthreads, ``rte_lcore_id()`` will not return a valid number.
  So for now, when rte_mempool is used with non-EAL pthreads, the put/get operations will bypass the default mempool cache and there is a performance penalty because of this bypass.
  Only user-owned external caches can be used in a non-EAL context, in conjunction with ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()``, which accept an explicit cache parameter.

+ rte_ring

  rte_ring supports multi-producer enqueue and multi-consumer dequeue.
  However, it is non-preemptive, which has the knock-on effect of making rte_mempool non-preemptable.

  .. note::

    The "non-preemptive" constraint means:

    - a pthread doing multi-producer enqueues on a given ring must not
      be preempted by another pthread doing a multi-producer enqueue on
      the same ring.
    - a pthread doing multi-consumer dequeues on a given ring must not
      be preempted by another pthread doing a multi-consumer dequeue on
      the same ring.

  Bypassing this constraint may cause the second pthread to spin until the first one is scheduled again.
  Moreover, if the first pthread is preempted by a context that has a higher priority, it may even cause a deadlock.

  This does not mean it cannot be used; simply, there is a need to narrow down the situations where it is used by multiple pthreads on the same core.

  1. It CAN be used for any single-producer or single-consumer situation.

  2. It MAY be used by multi-producer/consumer pthreads whose scheduling policies are all SCHED_OTHER (cfs). Users SHOULD be aware of the performance penalty before using it.

  3. It MUST not be used by multi-producer/consumer pthreads whose scheduling policies are SCHED_FIFO or SCHED_RR.

  ``RTE_RING_PAUSE_REP_COUNT`` is defined for rte_ring to reduce contention. It is mainly for case 2: a yield is issued after a number of pause repeats.

  It adds a sched_yield() syscall if the thread spins for too long while waiting for the other thread to finish its operations on the ring.
  This gives the preempted thread a chance to proceed and finish with the ring enqueue/dequeue operation.

+ rte_timer

  Running ``rte_timer_manager()`` on a non-EAL pthread is not allowed. However, resetting/stopping the timer from a non-EAL pthread is allowed.

+ rte_log

  In non-EAL pthreads, there is no per-thread loglevel and logtype; global loglevels are used.

+ misc

  The debug statistics of rte_ring, rte_mempool and rte_timer are not supported in a non-EAL pthread.

cgroup control
~~~~~~~~~~~~~~

The following is a simple example of cgroup control usage: there are two pthreads (t0 and t1) doing packet I/O on the same core ($CPU).
We expect only 50% of CPU to be spent on packet I/O.

.. code-block:: console

    mkdir /sys/fs/cgroup/cpu/pkt_io
    mkdir /sys/fs/cgroup/cpuset/pkt_io

    echo $cpu > /sys/fs/cgroup/cpuset/cpuset.cpus

    echo $t0 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t0 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    echo $t1 > /sys/fs/cgroup/cpu/pkt_io/tasks
    echo $t1 > /sys/fs/cgroup/cpuset/pkt_io/tasks

    cd /sys/fs/cgroup/cpu/pkt_io
    echo 100000 > pkt_io/cpu.cfs_period_us
    echo 50000 > pkt_io/cpu.cfs_quota_us


Malloc
------

The EAL provides a malloc API to allocate any-sized memory.

The objective of this API is to provide malloc-like functions to allow
allocation from hugepage memory and to facilitate application porting.
The *DPDK API Reference* manual describes the available functions.

Typically, these kinds of allocations should not be done in data plane
processing because they are slower than pool-based allocation and make
use of locks within the allocation and free paths.
However, they can be used in configuration code.

Refer to the rte_malloc() function description in the *DPDK API Reference*
manual for more information.

Cookies
~~~~~~~

When CONFIG_RTE_MALLOC_DEBUG is enabled, the allocated memory contains
overwrite protection fields to help identify buffer overflows.

Alignment and NUMA Constraints
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The rte_malloc() takes an align argument that can be used to request a memory
area that is aligned on a multiple of this value (which must be a power of two).

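For example (a minimal sketch; the type string, size and alignment below are arbitrary, and ``rte_zmalloc()`` is simply the zero-initializing variant of ``rte_malloc()``), a configuration-time buffer can be requested with a 64-byte alignment and later released with ``rte_free()``:

.. code-block:: c

    #include <rte_malloc.h>

    /* allocate a zeroed 4 KB scratch area from hugepage memory, aligned on a
     * 64-byte boundary */
    static void *
    alloc_scratch(void)
    {
        void *buf = rte_zmalloc("example_scratch", 4096, 64);

        if (buf == NULL)
            return NULL;    /* no suitable free element was found */

        /* ... use buf from configuration code, not from the fast path ... */
        return buf;
    }

    static void
    release_scratch(void *buf)
    {
        rte_free(buf);      /* rte_free(NULL) is a no-op */
    }
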
On systems with NUMA support, a call to the rte_malloc() function will return
memory that has been allocated on the NUMA socket of the core which made the call.
A set of APIs is also provided to allow memory to be explicitly allocated on a
NUMA socket directly, or allocated on the NUMA socket where another core is
located, in the case where the memory is to be used by a logical core other than
the one doing the memory allocation.

Use Cases
~~~~~~~~~

This API is meant to be used by an application that requires malloc-like
functions at initialization time.

For allocating/freeing data at runtime, in the fast-path of an application,
the memory pool library should be used instead.

Internal Implementation
~~~~~~~~~~~~~~~~~~~~~~~

Data Structures
^^^^^^^^^^^^^^^

There are two data structure types used internally in the malloc library:

* struct malloc_heap - used to track free space on a per-socket basis

* struct malloc_elem - the basic element of allocation and free-space
  tracking inside the library.

Structure: malloc_heap
""""""""""""""""""""""

The malloc_heap structure is used to manage free space on a per-socket basis.
Internally, there is one heap structure per NUMA node, which allows us to
allocate memory to a thread based on the NUMA node on which this thread runs.
While this does not guarantee that the memory will be used on that NUMA node,
it is no worse than a scheme where the memory is always allocated on a fixed
or random node.

The key fields of the heap structure and their function are described below
(see also the diagram below):

* lock - the lock field is needed to synchronize access to the heap.
  Given that the free space in the heap is tracked using a linked list,
  we need a lock to prevent two threads manipulating the list at the same time.

* free_head - this points to the first element in the list of free nodes for
  this malloc heap.

.. note::

    The malloc_heap structure does not keep track of in-use blocks of memory,
    since these are never touched except when they are to be freed again -
    at which point the pointer to the block is an input to the free() function.

.. _figure_malloc_heap:

.. figure:: img/malloc_heap.*

   Example of a malloc heap and malloc elements within the malloc library


.. _malloc_elem:

Structure: malloc_elem
""""""""""""""""""""""

The malloc_elem structure is used as a generic header structure for various
blocks of memory.
It is used in three different ways - all shown in the diagram above:

#. As a header on a block of free or allocated memory - normal case

#. As a padding header inside a block of memory

#. As an end-of-memseg marker

The most important fields in the structure and how they are used are described
below; a simplified sketch of this header follows the list.

.. note::

    If the usage of a particular field in one of the above three usages is not
    described, the field can be assumed to have an undefined value in that
    situation, for example, for padding headers only the "state" and "pad"
    fields have valid values.

* heap - this pointer is a reference back to the heap structure from which
  this block was allocated.
  It is used for normal memory blocks when they are being freed, to add the
  newly-freed block to the heap's free-list.

* prev - this pointer points to the header element/block in the memseg
  immediately behind the current one. When freeing a block, this pointer is
  used to reference the previous block to check if that block is also free.
  If so, then the two free blocks are merged to form a single larger block.

* next_free - this pointer is used to chain the free-list of unallocated
  memory blocks together.
  It is only used in normal memory blocks; on ``malloc()`` to find a suitable
  free block to allocate and on ``free()`` to add the newly freed element to
  the free-list.

* state - This field can have one of three values: ``FREE``, ``BUSY`` or
  ``PAD``.
  The former two are to indicate the allocation state of a normal memory block
  and the latter is to indicate that the element structure is a dummy structure
  at the end of the start-of-block padding, i.e. where the start of the data
  within a block is not at the start of the block itself, due to alignment
  constraints.
  In that case, the pad header is used to locate the actual malloc element
  header for the block.
  For the end-of-memseg structure, this is always a ``BUSY`` value, which
  ensures that no element, on being freed, searches beyond the end of the
  memseg for other blocks to merge with into a larger free area.

* pad - this holds the length of the padding present at the start of the block.
  In the case of a normal block header, it is added to the address of the end
  of the header to give the address of the start of the data area, i.e. the
  value passed back to the application on a malloc.
  Within a dummy header inside the padding, this same value is stored, and is
  subtracted from the address of the dummy header to yield the address of the
  actual block header.

* size - the size of the data block, including the header itself.
  For end-of-memseg structures, this size is given as zero, though it is never
  actually checked.
  For normal blocks which are being freed, this size value is used in place of
  a "next" pointer to identify the location of the next block of memory, so
  that, if that next block is also ``FREE``, the two free blocks can be merged
  into one.

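Putting the field descriptions above together, the header can be pictured roughly as follows. This is an illustrative, simplified sketch only; the real definition lives in the library's malloc_elem.h and uses different types and additional fields (for example, the free-list link is a list entry rather than a bare pointer).

.. code-block:: c

    #include <stddef.h>
    #include <stdint.h>

    struct malloc_heap;            /* per-socket heap, described earlier */

    enum elem_state { ELEM_FREE, ELEM_BUSY, ELEM_PAD };

    struct malloc_elem {
        struct malloc_heap *heap;      /* heap this block was allocated from */
        struct malloc_elem *prev;      /* header immediately behind this one */
        struct malloc_elem *next_free; /* next element on the heap free-list */
        enum elem_state     state;     /* FREE, BUSY or PAD */
        uint32_t            pad;       /* padding length at the start of the block */
        size_t              size;      /* data size, including this header */
    };
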
Memory Allocation
^^^^^^^^^^^^^^^^^

On EAL initialization, all memsegs are set up as part of the malloc heap.
This setup involves placing a dummy structure at the end with ``BUSY`` state,
which may contain a sentinel value if ``CONFIG_RTE_MALLOC_DEBUG`` is enabled,
and a proper :ref:`element header <malloc_elem>` with ``FREE`` at the start
for each memseg.
The ``FREE`` element is then added to the ``free_list`` for the malloc heap.

When an application makes a call to a malloc-like function, the malloc function
will first index the ``lcore_config`` structure for the calling thread, and
determine the NUMA node of that thread.
The NUMA node is used to index the array of ``malloc_heap`` structures which is
passed as a parameter to the ``heap_alloc()`` function, along with the
requested size, type, alignment and boundary parameters.

The ``heap_alloc()`` function will scan the free_list of the heap, and attempt
to find a free block suitable for storing data of the requested size, with the
requested alignment and boundary constraints.

When a suitable free element has been identified, the pointer to be returned
to the user is calculated.
The cache-line of memory immediately preceding this pointer is filled with a
struct malloc_elem header.
Because of alignment and boundary constraints, there could be free space at
the start and/or end of the element, resulting in the following behavior:

#. Check for trailing space.
   If the trailing space is big enough, i.e. > 128 bytes, then the free element
   is split.
   If it is not, then we just ignore it (wasted space).

#. Check for space at the start of the element.
   If the space at the start is small, i.e. <= 128 bytes, then a pad header is
   used, and the remaining space is wasted.
   If, however, the remaining space is greater, then the free element is split.

The advantage of allocating the memory from the end of the existing element is
that no adjustment of the free list needs to take place - the existing element
on the free list just has its size pointer adjusted, and the following element
has its "prev" pointer redirected to the newly created element.

Freeing Memory
^^^^^^^^^^^^^^

To free an area of memory, the pointer to the start of the data area is passed
to the free function.
The size of the ``malloc_elem`` structure is subtracted from this pointer to get
the element header for the block.
If this header is of type ``PAD``, then the pad length is further subtracted from
the pointer to get the proper element header for the entire block.

From this element header, we get pointers to the heap from which the block was
allocated and to where it must be freed, as well as the pointer to the previous
element, and, via the size field, we can calculate the pointer to the next element.
These next and previous elements are then checked to see if they are also
``FREE`` and, if so, they are merged with the current element.
This means that we can never have two ``FREE`` memory blocks adjacent to one
another, as they are always merged into a single block.