..  BSD LICENSE
    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
.. _Mempool_Library:

Mempool Library
===============

A memory pool is an allocator of fixed-sized objects.
In the DPDK, it is identified by name and uses a ring to store free objects.
It provides some other optional services, such as a per-core object cache and
an alignment helper to ensure that objects are padded to spread them equally on all DRAM or DDR3 channels.

This library is used by the
:ref:`Mbuf Library <Mbuf_Library>` and the
:ref:`Environment Abstraction Layer <Environment_Abstraction_Layer>` (for logging history).
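
As a minimal sketch of the typical usage (the pool name, object count, object size, cache size and helper function
below are illustrative values, not taken from this document), an application creates a named pool once and then
gets and puts objects from it:

.. code-block:: c

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    #define NUM_OBJS 8192   /* illustrative number of objects in the pool */
    #define OBJ_SIZE 2048   /* illustrative size of each object, in bytes */

    static struct rte_mempool *pool;

    static int setup_and_use_pool(void)
    {
        void *obj;

        /* Create a pool identified by name; free objects are stored on a ring.
         * A per-core cache of 32 objects is requested here. */
        pool = rte_mempool_create("example_pool", NUM_OBJS, OBJ_SIZE,
                                  32,          /* per-core cache size */
                                  0,           /* private data size */
                                  NULL, NULL,  /* pool constructor and argument */
                                  NULL, NULL,  /* object constructor and argument */
                                  rte_socket_id(), 0 /* flags */);
        if (pool == NULL)
            return -1;

        /* Get a free object from the pool and return it when done. */
        if (rte_mempool_get(pool, &obj) < 0)
            return -1;
        rte_mempool_put(pool, obj);
        return 0;
    }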

Cookies
-------

In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled), cookies are added at the beginning and end of allocated blocks.
The allocated objects then contain overwrite protection fields to help debug buffer overflows.

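As an illustrative sketch (the ``check_pool()`` wrapper below is hypothetical; only ``rte_mempool_audit()`` is part
of the library), a pool and its cookies can be checked explicitly when this debug option is compiled in:

.. code-block:: c

    #include <rte_mempool.h>

    static void check_pool(struct rte_mempool *mp)
    {
        /* With CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG enabled, this walks the pool
         * and checks the cookies placed around each object, aborting the
         * application if a buffer overflow has corrupted them. */
        rte_mempool_audit(mp);
    }
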

Stats
-----

In debug mode (CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG is enabled),
statistics about gets from and puts to the pool are stored in the mempool structure.
Statistics are per-lcore to avoid concurrent access to statistics counters.

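As an illustrative way to inspect these counters (the ``show_pool_stats()`` wrapper is hypothetical, and the
two-argument form of ``rte_mempool_dump()`` is assumed here), the pool state can be dumped to a stream:

.. code-block:: c

    #include <stdio.h>
    #include <rte_mempool.h>

    static void show_pool_stats(struct rte_mempool *mp)
    {
        /* Dump the pool state to the given stream; with
         * CONFIG_RTE_LIBRTE_MEMPOOL_DEBUG enabled the output also
         * includes the per-lcore get/put counters. */
        rte_mempool_dump(stdout, mp);
    }
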
58Memory Alignment Constraints
59----------------------------
60
61Depending on hardware memory configuration, performance can be greatly improved by adding a specific padding between objects.
62The objective is to ensure that the beginning of each object starts on a different channel and rank in memory so that all channels are equally loaded.
63
64This is particularly true for packet buffers when doing L3 forwarding or flow classification.
65Only the first 64 bytes are accessed, so performance can be increased by spreading the start addresses of objects among the different channels.
66
67The number of ranks on any DIMM is the number of independent sets of DRAMs that can be accessed for the full data bit-width of the DIMM.
68The ranks cannot be accessed simultaneously since they share the same data path.
69The physical layout of the DRAM chips on the DIMM itself does not necessarily relate to the number of ranks.
70
71When running an application, the EAL command line options provide the ability to add the number of memory channels and ranks.
72
73.. note::
74
75    The command line must always have the number of memory channels specified for the processor.
76
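As an illustrative sketch (the hard-coded EAL arguments below, including the core mask, channel count and rank
count, are example values and must match the actual platform), these counts are passed through the ``-n`` and
``-r`` EAL options:

.. code-block:: c

    #include <rte_eal.h>

    int main(int argc, char **argv)
    {
        /* Illustrative EAL arguments: -n sets the number of memory channels,
         * -r forces the number of memory ranks. Real applications usually
         * forward their own argc/argv instead of hard-coding values. */
        char *eal_args[] = { argv[0], "-c", "0x1", "-n", "4", "-r", "2" };
        int eal_argc = sizeof(eal_args) / sizeof(eal_args[0]);

        (void)argc;
        if (rte_eal_init(eal_argc, eal_args) < 0)
            return -1;
        return 0;
    }
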

Examples of alignment for different DIMM architectures are shown in
:numref:`figure_memory-management` and :numref:`figure_memory-management2`.

.. _figure_memory-management:

.. figure:: img/memory-management.*

   Two Channels and Quad-ranked DIMM Example


In this case, the assumption is that a packet is 16 blocks of 64 bytes, which is not true.

The Intel® 5520 chipset has three channels, so in most cases,
no padding is required between objects (except for objects whose size is n x 3 x 64 byte blocks).

.. _figure_memory-management2:

.. figure:: img/memory-management2.*

   Three Channels and Two Dual-ranked DIMM Example


When creating a new pool, the user can enable or disable this feature.

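The choice is made through the flags argument of the create call; a minimal sketch with an illustrative pool name
and sizes, assuming the ``MEMPOOL_F_NO_SPREAD`` flag is used to turn the spreading off:

.. code-block:: c

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    /* Create a pool with object spreading across channels/ranks disabled.
     * Passing 0 as the flags argument keeps the default behaviour, where
     * objects are padded so that they spread over memory channels. */
    static struct rte_mempool *
    create_unspread_pool(void)
    {
        return rte_mempool_create("no_spread_pool", 8192, 2048,
                                  0,            /* no per-core cache */
                                  0,            /* no private data */
                                  NULL, NULL, NULL, NULL,
                                  rte_socket_id(),
                                  MEMPOOL_F_NO_SPREAD);
    }
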

Local Cache
-----------

In terms of CPU usage, the cost of multiple cores accessing a memory pool's ring of free buffers may be high
since each access requires a compare-and-set (CAS) operation.
To avoid having too many access requests to the memory pool's ring,
the memory pool allocator can maintain a per-core cache and do bulk requests to the memory pool's ring
via the cache, with many fewer locks on the actual memory pool structure.
In this way, each core has full access to its own cache of free objects (without locks), and
only when the cache fills does the core need to shuffle some of the free objects back to the pool's ring, or
obtain more objects when the cache is empty.

While this may mean a number of buffers may sit idle in some cores' caches,
the speed at which a core can access its own cache for a specific memory pool without locks provides performance gains.

The cache is composed of a small, per-core table of pointers and its length (used as a stack).
This cache can be enabled or disabled at creation of the pool.

The maximum size of the cache is static and is defined at compilation time (CONFIG_RTE_MEMPOOL_CACHE_MAX_SIZE).

:numref:`figure_mempool` shows a cache in operation.

.. _figure_mempool:

.. figure:: img/mempool.*

   A mempool in Memory with its Associated Ring


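The per-core cache is requested through the cache_size argument of the create call; the sketch below uses
illustrative pool name and sizes (the cache size cannot exceed the compile-time maximum, exposed in the code as
``RTE_MEMPOOL_CACHE_MAX_SIZE``):

.. code-block:: c

    #include <rte_mempool.h>
    #include <rte_lcore.h>

    #define POOL_SIZE  8192   /* illustrative number of objects */
    #define CACHE_SIZE 256    /* illustrative per-core cache size */

    static struct rte_mempool *
    create_cached_pool(void)
    {
        /* A non-zero cache_size enables the per-core cache; 0 disables it.
         * The value must not be larger than RTE_MEMPOOL_CACHE_MAX_SIZE, and
         * choosing POOL_SIZE as a multiple of CACHE_SIZE is advised so that
         * no elements are left permanently unused. */
        return rte_mempool_create("cached_pool", POOL_SIZE, 2048,
                                  CACHE_SIZE,
                                  0,                    /* private data size */
                                  NULL, NULL, NULL, NULL,
                                  rte_socket_id(), 0);
    }
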

Use Cases
---------

All allocations that require a high level of performance should use a pool-based memory allocator.
Below are some examples:

*   :ref:`Mbuf Library <Mbuf_Library>`

*   :ref:`Environment Abstraction Layer <Environment_Abstraction_Layer>`, for logging service

*   Any application that needs to allocate fixed-sized objects in the data plane and that will be continuously utilized by the system.
141