.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2010-2014 Intel Corporation.

.. _Mempool_Library:

Mempool Library
===============

A memory pool is an allocator of fixed-size objects.
In the DPDK, it is identified by name and uses a mempool handler to store free objects.
The default mempool handler is ring based.
It also provides optional services such as a per-core object cache and
an alignment helper to ensure that objects are padded to spread them equally across all DRAM or DDR3 channels.

This library is used by the :ref:`Mbuf Library <Mbuf_Library>`.

Cookies
-------

In debug mode, cookies are added at the beginning and end of allocated blocks.
The allocated objects then contain overwrite protection fields to help debug buffer overflows.

Debug mode is disabled by default,
but can be enabled by setting ``RTE_LIBRTE_MEMPOOL_DEBUG`` in ``config/rte_config.h``.

Stats
-----

In stats mode, statistics about gets from and puts to the pool are stored in the mempool structure.
Statistics are per-lcore to avoid concurrent access to statistics counters.

Stats mode is disabled by default,
but can be enabled by setting ``RTE_LIBRTE_MEMPOOL_STATS`` in ``config/rte_config.h``.

Memory Alignment Constraints on x86 Architecture
------------------------------------------------

Depending on the hardware memory configuration on the x86 architecture, performance can be greatly improved by adding a specific padding between objects.
The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded.

This is particularly true for packet buffers when doing L3 forwarding or flow classification.
Only the first 64 bytes are accessed, so performance can be increased by spreading the start addresses of objects among the different channels.

The number of ranks on any DIMM is the number of independent sets of DRAMs that can be accessed for the full data bit-width of the DIMM.
The ranks cannot be accessed simultaneously since they share the same data path.
The physical layout of the DRAM chips on the DIMM itself does not necessarily relate to the number of ranks.

When running an application, the EAL command line options provide the ability to specify the number of memory channels and ranks.
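
As an illustration only, the sketch below shows an application entry point forwarding its command line
(which might include, for example, ``-n 4`` for four memory channels and ``-r 2`` for two ranks;
the values here are illustrative, not recommendations) to ``rte_eal_init()``:

.. code-block:: c

    #include <rte_eal.h>

    int
    main(int argc, char **argv)
    {
        /*
         * The application is expected to be launched with the memory
         * channel count (and optionally the rank count) on the EAL
         * command line, e.g.: ./app -l 0-3 -n 4 -r 2
         * rte_eal_init() parses and consumes these EAL arguments.
         */
        if (rte_eal_init(argc, argv) < 0)
            return -1;

        /* ... application code using mempools ... */

        return 0;
    }
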

.. note::

    The command line must always have the number of memory channels specified for the processor.

Examples of alignment for different DIMM architectures are shown in
:numref:`figure_memory-management` and :numref:`figure_memory-management2`.

.. _figure_memory-management:

.. figure:: img/memory-management.*

   Two Channels and Quad-ranked DIMM Example


In this case, the assumption is that a packet is 16 blocks of 64 bytes, which is not true in practice.

The Intel® 5520 chipset has three channels, so in most cases,
no padding is required between objects (except for objects whose size is n x 3 x 64 byte blocks).

.. _figure_memory-management2:

.. figure:: img/memory-management2.*

   Three Channels and Two Dual-ranked DIMM Example


When creating a new pool, the user can specify whether to use this feature.

.. note::

    This feature is not present on Arm systems.
    Modern Arm interconnects choose the SN-F (memory channel)
    using a hash of memory address bits.
    As a result, the load is distributed evenly in all cases,
    including the one described above, rendering this feature unnecessary.


.. _mempool_local_cache:

Local Cache
-----------

In terms of CPU usage, the cost of multiple cores accessing a memory pool's ring of free buffers may be high
since each access requires a compare-and-set (CAS) operation.
To avoid having too many access requests to the memory pool's ring,
the memory pool allocator can maintain a per-core cache and make bulk requests to the memory pool's ring
via the cache, with far fewer locking operations on the actual memory pool structure.
In this way, each core has full, lock-free access to its own cache of free objects, and
only when the cache fills does the core need to shuffle some of the free objects back to the pool's ring, or
obtain more objects when the cache is empty.

While this may mean that a number of buffers sit idle in some cores' caches,
the speed at which a core can access its own cache for a specific memory pool without locks provides performance gains.
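
As a minimal sketch, the per-lcore cache is requested simply by passing a non-zero ``cache_size``
when the pool is created; the pool name, element count, element size and cache size below are
illustrative assumptions, not recommended values.

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    create_pool_with_cache(void)
    {
        /* Create a pool of 8191 fixed-size objects of 256 bytes each,
         * with a 256-object per-lcore cache (a cache_size of 0 would
         * disable the local cache).
         */
        return rte_mempool_create("example_pool",
                                  8191,        /* number of elements */
                                  256,         /* element size in bytes */
                                  256,         /* per-lcore cache size */
                                  0,           /* private data size */
                                  NULL, NULL,  /* pool constructor and argument */
                                  NULL, NULL,  /* object constructor and argument */
                                  rte_socket_id(),
                                  0);          /* flags */
    }
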

The cache is composed of a small, per-core table of pointers and its length (used as a stack).
This internal cache can be enabled or disabled at creation of the pool.

The maximum size of the cache is static and is defined at compilation time (``RTE_MEMPOOL_CACHE_MAX_SIZE``).

:numref:`figure_mempool` shows a cache in operation.

.. _figure_mempool:

.. figure:: img/mempool.*

   A mempool in Memory with its Associated Ring

As an alternative to the default internal per-lcore cache, an application can create and manage external caches through the ``rte_mempool_cache_create()``, ``rte_mempool_cache_free()`` and ``rte_mempool_cache_flush()`` calls.
These user-owned caches can be explicitly passed to ``rte_mempool_generic_put()`` and ``rte_mempool_generic_get()``.
The ``rte_mempool_default_cache()`` call returns the default internal cache, if any.
In contrast to the default caches, user-owned caches can also be used by unregistered non-EAL threads.

.. _Mempool_Handlers:

Mempool Handlers
----------------

The mempool library allows external memory subsystems, such as external hardware memory
management systems and software-based memory allocators, to be used with DPDK.

There are two aspects to a mempool handler:

* Adding the code for your new mempool operations (ops). This is achieved by
  adding a new mempool ops code and using the ``RTE_MEMPOOL_REGISTER_OPS`` macro.

* Using the new API to call ``rte_mempool_create_empty()`` and
  ``rte_mempool_set_ops_byname()`` to create a new mempool and specify which
  ops to use.

Several different mempool handlers may be used in the same application. A new
mempool can be created by using the ``rte_mempool_create_empty()`` function,
then using ``rte_mempool_set_ops_byname()`` to point the mempool to the
relevant mempool handler callback (ops) structure, as shown in the sketch below.

Legacy applications may continue to use the old ``rte_mempool_create()`` API
call, which uses a ring based mempool handler by default. These applications
will need to be modified if they are to use a different mempool handler.

For applications that use ``rte_pktmbuf_pool_create()``, there is a config setting
(``RTE_MBUF_DEFAULT_MEMPOOL_OPS``) that allows the application to make use of
an alternative mempool handler.
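
A minimal sketch of the two-step creation path described above (``rte_mempool_create_empty()``
followed by ``rte_mempool_set_ops_byname()``) is shown here; the pool name, the sizes and the
choice of the default ``ring_mp_mc`` ring based handler are illustrative assumptions.

.. code-block:: c

    #include <rte_lcore.h>
    #include <rte_mempool.h>

    static struct rte_mempool *
    create_pool_with_handler(void)
    {
        struct rte_mempool *mp;

        /* Create an empty pool: no memory is allocated for objects yet. */
        mp = rte_mempool_create_empty("example_pool", 8191, 256,
                                      256,  /* per-lcore cache size */
                                      0,    /* private data size */
                                      rte_socket_id(), 0);
        if (mp == NULL)
            return NULL;

        /* Select the mempool handler (ops) before populating the pool.
         * "ring_mp_mc" is the default ring based handler.
         */
        if (rte_mempool_set_ops_byname(mp, "ring_mp_mc", NULL) < 0)
            goto error;

        /* Allocate memory for the objects and add them to the pool. */
        if (rte_mempool_populate_default(mp) < 0)
            goto error;

        return mp;

    error:
        rte_mempool_free(mp);
        return NULL;
    }
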

.. note::

    When running a DPDK application with shared libraries, mempool handler
    shared objects specified with the ``-d`` EAL command-line parameter are
    dynamically loaded. When running a multi-process application with shared
    libraries, the ``-d`` arguments for mempool handlers *must be specified in
    the same order for all processes* to ensure correct operation.


Use Cases
---------

All allocations that require a high level of performance should use a pool-based memory allocator.
Below are some examples:

* :ref:`Mbuf Library <Mbuf_Library>`

* :ref:`Environment Abstraction Layer <Environment_Abstraction_Layer>`, for the logging service

* Any application that needs to allocate fixed-size objects in the data plane and that will be continuously utilized by the system.
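
For completeness, the following minimal sketch shows the basic data-plane pattern of taking a
fixed-size object from a pool and returning it. The function name is hypothetical, and it assumes
a pool ``mp`` created as in the earlier sketches, with error handling reduced to the essentials.

.. code-block:: c

    #include <rte_mempool.h>

    static void
    use_fixed_size_object(struct rte_mempool *mp)
    {
        void *obj;

        /* Take one free object from the pool; it is served from the
         * per-lcore cache when possible.
         */
        if (rte_mempool_get(mp, &obj) < 0)
            return; /* pool exhausted */

        /* ... use the fixed-size object ... */

        /* Return the object to the pool (to the local cache first). */
        rte_mempool_put(mp, obj);
    }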