<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN">
<html>
<head>
  <meta content="text/html; charset=ISO-8859-1"
 http-equiv="content-type">
  <title>Bitmap Allocator</title>
  <meta content="Dhruv Matani" name="author">
  <meta content="Bitmap Allocator" name="description">
</head>
<body>
<h1 style="text-align: center;">Bitmap Allocator</h1>
<em><br>
<small><small>The latest version of this document is always available
at <a
 href="http://gcc.gnu.org/onlinedocs/libstdc++/ext/ballocator_doc.html">
http://gcc.gnu.org/onlinedocs/libstdc++/ext/ballocator_doc.html</a>.</small></small></em><br>
<br>
<em>To the <a href="http://gcc.gnu.org/libstdc++/">libstdc++-v3
homepage</a>.</em><br>
<br>
<hr style="width: 100%; height: 2px;"><br>
As the name suggests, this allocator uses a bit-map to keep track of
the used and unused memory locations for its book-keeping purposes.<br>
<br>
This allocator uses a single bit to keep track of whether a block of
memory has been allocated or not. A bit value of 1 indicates free,
while 0 indicates allocated. This convention makes it easy to check a
whole collection of bits at once for a free block. This kind of
bitmapped strategy works best for single object allocations, and with
the STL's type-parameterized allocators we do not need to choose the
size of the block represented by a single bit: it is simply the size of
the type over which the allocator is parameterized. Thus, close to
optimal performance results. Hence, this allocator should be used for
node-based containers, which call the allocate function with an
argument of 1.<br>
<br>
The bitmapped allocator's internal pool grows exponentially.
That is, the blocks acquired internally from the Free List Store double
in size every time the bitmapped allocator runs out of memory.<br>
<br>
<hr style="width: 100%; height: 2px;"><br>
The macro __GTHREADS decides whether to use mutex protection around
every allocation/deallocation. The state of the macro is picked up
automatically from the gthr abstraction layer.<br>
<br>
<hr style="width: 100%; height: 2px;">
<h3 style="text-align: center;">What is the Free List Store?</h3>
<br>
The Free List Store (referred to as the FLS for the remaining part of
this document) is the global memory pool shared by all instances of the
bitmapped allocator, for every type it is instantiated with. It
maintains a sorted collection of all free memory blocks given back to
it by the bitmapped allocator, and is also responsible for handing out
memory to the bitmapped allocator when it asks for more.<br>
<br>
Internally, there is a free list threshold, which indicates the maximum
number of free lists that the FLS can hold (cache) internally.
Currently, this value is set to 64. So, if more than 64 free lists come
in, some of them are given back to the OS using operator delete, so
that at any given time the Free List's size does not exceed 64 entries.
This is done because a binary search is used to locate an entry in the
free list when a request for memory arrives. The run-time complexity of
the search would grow with an increasing size; with 64 entries,
however, lg(64) == 6 comparisons are enough to locate the correct free
list if it exists.<br>
<br>
When the free list has reached its threshold, the largest block from
among those in the list and the new block is selected and given back to
the OS. This is done because it reduces external fragmentation, and
allows the OS to use the larger blocks later in an orderly fashion,
possibly merging them.
Also, on some systems, large blocks are obtained via calls to mmap, so
giving them back to free system resources becomes most important.<br>
<br>
The function _S_should_i_give implements the policy that determines
whether the current block of memory should be given to the allocator
for the request that it has made. This is needed because we may not
always have exact fits for the memory size that the allocator requests.
We do this mainly to prevent external fragmentation, at the cost of a
little internal fragmentation. The acceptable amount of internal
fragmentation has to be decided by this function. I can see 3
possibilities right now. Please add more as and when you find better
strategies.<br>
<br>
<ol>
  <li>Equal size check. Return true only when the 2 blocks are of equal
size.</li>
  <li>Difference threshold: Return true only when the _block_size is
greater than or equal to the _required_size, and the _BS exceeds the
_RS by less than some THRESHOLD value; else return false.</li>
  <li>Percentage threshold: Return true only when the _block_size is
greater than or equal to the _required_size, and the _BS exceeds the
_RS by a percentage of less than some THRESHOLD value; else return
false.</li>
</ol>
<br>
Currently, (3) is being used, with a value of 36% maximum wastage per
super block.<br>
<br>
<hr style="width: 100%; height: 2px;"><span style="font-weight: bold;">1)
What is a super block? Why is it needed?</span><br>
<br>
A super block is the block of memory acquired from the FLS from which
the bitmap allocator carves out memory for single objects and satisfies
the user's requests. These super blocks come in sizes that are powers
of 2 and multiples of 32 (_Bits_Per_Block). Yes, both at the same time!
That's because the next super block acquired will be twice the size of
the previous one, and all super blocks have to be multiples of the
_Bits_Per_Block value.<br>
<br>
<span style="font-weight: bold;">2) How does it interact with the free
list store?</span><br>
<br>
The super block is contained in the FLS, and the FLS is responsible for
getting / returning super blocks to and from the OS using operator new,
as defined by the C++ standard.<br>
<br>
<hr style="width: 100%; height: 2px;">
<h3 style="text-align: center;">How does the allocate function work?</h3>
<br>
The allocate function is specialized for single object allocation ONLY.
Thus, ONLY if n == 1 will the bitmap_allocator's specialized algorithm
be used; otherwise, the request is satisfied directly by calling
operator new.<br>
<br>
Suppose n == 1; then the allocator does the following:<br>
<br>
<ol>
  <li>Checks to see whether a free block exists somewhere in a region
of memory close to the last satisfied request. If so, then that block
is marked as allocated in the bit-map and given to the user. If not,
then (2) is executed.</li>
  <li>Is there a free block anywhere after the current block, right up
to the end of the memory that we have? If so, that block is found, the
same procedure is applied as above, and it is returned to the user. If
not, then (3) is executed.</li>
  <li>Is there any free block anywhere in the memory that we own? This
is checked using <br>
  <div style="margin-left: 40px;">
  <ul>
  <li>the use count for each super block, and if that fails, </li>
  <li>the individual bit-maps for each super block. </li>
  </ul>
  </div>
Note: Here we never touch any of the memory that the user will be
given; we confine all memory accesses to a small region of memory! This
helps reduce cache misses.
If this succeeds, then we apply the same procedure on that bit-map as
in (1), and return that block of memory to the user. However, if this
process fails, then we resort to (4).</li>
  <li>This step involves refilling the internal exponentially growing
memory pool. This is achieved by calling _S_refill_pool, which does the
following: <br>
  <div style="margin-left: 40px;">
  <ul>
  <li>Gets more memory from the Global Free List, of the required
size. </li>
  <li>Adjusts the size for the next call to itself. </li>
  <li>Writes the appropriate headers in the bit-maps.</li>
  <li>Sets the use count for the super block just allocated to 0
(zero). </li>
  <li>All of the above amounts to maintaining the basic invariant for
the allocator. If the invariant is maintained, we are sure that all is
well. Now, the same process is applied to the newly acquired free
blocks, which are dispatched accordingly.</li>
  </ul>
  </div>
  </li>
</ol>
<br>
Thus, you can clearly see that the allocate function is nothing but a
combination of the next-fit and first-fit algorithms, optimized ONLY
for single object allocations.<br>
<br>
<br>
<hr style="width: 100%; height: 2px;">
<h3 style="text-align: center;">How does the deallocate function work?</h3>
<br>
The deallocate function again is specialized for single objects ONLY.
For all n > 1, operator delete is called without further ado, and
the deallocate function returns.<br>
<br>
However, for n == 1, a series of steps is performed:<br>
<br>
<ol>
  <li>We first need to locate the super block which holds the memory
location given to us by the user. For that purpose, we maintain a
static variable, _S_last_dealloc_index, which holds the index into the
vector of block pairs of the last super block from which memory was
freed.
We use this strategy in the hope that the user will deallocate memory
in a region close to what he/she deallocated the last time around. If
the check for belongs_to succeeds, then we determine the bit-map for
the given pointer, locate the index into that bit-map, and mark that
bit as free by setting it.</li>
  <li>If _S_last_dealloc_index does not point to the memory block that
we're looking for, then we do a linear search on the blocks stored in
the vector of block pairs. This vector is called _S_mem_blocks in the
code. When the corresponding super block is found, we apply the same
procedure as in (1) to mark the block as free in the bit-map.</li>
</ol>
<br>
Now, whenever a block is freed, the use count of that particular super
block goes down by 1. When this use count hits 0, we remove that super
block from the list of all valid super blocks stored in the vector.
While doing this, we also make sure that the basic invariant is
maintained, by making sure that _S_last_request and
_S_last_dealloc_index point to valid locations within the vector.<br>
<br>
<hr style="width: 100%; height: 2px;"><br>
<h3 style="text-align: center;">Data Layout for a Super Block:</h3>
<br>
Each super block will be of some size that is a multiple of the number
of bits per block. Typically, this value is chosen as Bits_Per_Byte x
sizeof(size_t). On an x86 system, this gives the figure 8 x 4 = 32.
Thus, each super block will be of size 32 x Some_Value, where
Some_Value is sizeof(value_type). For now, let it be called 'K'. Thus,
finally, the super block size is 32 x K bytes.<br>
<br>
This value of 32 has been chosen because each size_t has 32 bits, and
maximum use of them can be made with such a figure.<br>
<br>
Consider a block of size 64 ints.
In memory, it would look like this (assume a 32-bit system, where
size_t is a 32-bit entity):<br>
<br>
<table cellpadding="0" cellspacing="0" border="1"
 style="text-align: left; width: 763px; height: 21px;">
  <tbody>
    <tr>
      <td style="vertical-align: top; text-align: center;">268<br>
      </td>
      <td style="vertical-align: top; text-align: center;">0<br>
      </td>
      <td style="vertical-align: top; text-align: center;">4294967295<br>
      </td>
      <td style="vertical-align: top; text-align: center;">4294967295<br>
      </td>
      <td style="vertical-align: top; text-align: center;">Data ->
Space for 64 ints<br>
      </td>
    </tr>
  </tbody>
</table>
<br>
<br>
The first column (268) represents the size of the block in bytes as
seen by the Bitmap Allocator. Internally, a global free list is used to
keep track of the free blocks used and given back by the bitmap
allocator. It is this Free List Store that is responsible for writing
and managing this information. The actual number of bytes allocated in
this case would be: 4 + 4 + (4 x 2) + (64 x 4) = 272 bytes. The first 4
bytes, however, are an addition by the Free List Store, so the Bitmap
Allocator sees only 268 bytes. These first 4 bytes, of which the
bitmapped allocator is unaware, hold the value 268.<br>
<br>
<span style="font-weight: bold;">What do the remaining values represent?</span><br>
<br>
The 2nd 4 in the expression is sizeof(size_t), because the bitmapped
allocator maintains a use count for each super block, which is
initially set to 0 (as indicated in the diagram). This is incremented
every time a block is removed from this super block (allocated), and
decremented whenever a block is given back.
So, when the use count falls to 0, the whole super block will be given
back to the Free List Store.<br>
<br>
The value 4294967295 is the integer corresponding to the bit
representation with all bits set: 11111111111111111111111111111111.<br>
<br>
The 3rd value, 4 x 2, is the size of the bit-map itself: 32 bits x 2,
which is 8 bytes, or 2 x sizeof(size_t).<br>
<br>
<hr style="width: 100%; height: 2px;"><br>
Another issue is whether to keep all the bit-maps in a separate area in
memory, or to keep them near the actual blocks that will be given out
or allocated to the client. After some testing, I've decided to keep
these bit-maps close to the actual blocks. This helps in 2 ways:<br>
<br>
<ol>
  <li>Constant time access to the bit-maps themselves, since no kind of
look-up is needed to find the correct bit-map list or its
equivalent.</li>
  <li>It also preserves the cache as far as possible.</li>
</ol>
<br>
So, in effect, this kind of allocator might prove beneficial from a
purely cache point of view. But this allocator has been written to try
to iron out the defects of the node_allocator, wherein the nodes get
scattered about in memory if they are not returned in the exact reverse
order, or in the same order, in which they were allocated. Also, the
new_allocator's book-keeping overhead is too much for small objects and
single object allocations, though it preserves the locality of blocks
very well when they are returned to the allocator.<br>
<br>
<hr style="width: 100%; height: 2px;"><br>
The expected overhead per block is 1 bit in memory. Also, once the
address of the free list has been found, the cost of
allocation/deallocation is negligible, and is supposed to be constant
time.
For these very reasons, it is very important to minimize the linear
time costs, which include finding a free list with a free block while
allocating, and finding the corresponding free list for a block while
deallocating. Therefore, I have decided that the growth of the internal
pool for this allocator will be exponential, as compared to linear for
the node_allocator. There, linear time works well, because we are
mainly concerned with the speed of allocation/deallocation and memory
consumption, whereas here the allocation/deallocation part does have
some linear/logarithmic complexity components. Thus, minimizing them is
a good thing to do, at the cost of a little bit of memory.<br>
<br>
Another thing to be noted is that the pool size will double every time
the internal pool gets exhausted and all the free blocks have been
given away. The initial size of the pool is sizeof(size_t) x 8, which
is the number of bits in an integer, which can fit exactly in a CPU
register. Hence the term "exponential growth" of the internal pool.<br>
<br>
<hr style="width: 100%; height: 2px;">After reading all this, you may
still have a few questions about the internal working of this
allocator, like my friend had!<br>
<br>
Well, here are the exact questions that he posed:<br>
<br>
<span style="font-weight: bold;">Q1) The "Data Layout" section is
cryptic. I have no idea what you are trying to say. Layout of what? The
free-list? Each bitmap? The Super Block?</span><br>
<br>
<div style="margin-left: 40px;">The layout of a super block of a given
size. In the example, a super block of size 32 x 1 is taken.
The general formula for calculating the size of a super block is
32 x sizeof(value_type) x 2^n, where n ranges from 0 to 32 for 32-bit
systems.<br>
</div>
<br>
<span style="font-weight: bold;">Q2) And since I just mentioned the
term `each bitmap', what in the world is meant by it? What does each
bitmap manage? How does it relate to the super block? Is the Super
Block a bitmap as well?</span><br style="font-weight: bold;">
<br>
<div style="margin-left: 40px;">Good question! Each bitmap is part of a
super block, which is made up of 3 parts, as I have mentioned earlier.
Re-iterating: 1. the use count; 2. the bit-map for that super block; 3.
the actual memory that will eventually be given to the user. Each
bitmap is a multiple of 32 in size. If there are 32 x (2^3) blocks of
single objects to be given out, there will be 32 x (2^3) bits present,
with each 32 bits managing the allocated / free status of 32 blocks.
Since each size_t contains 32 bits, one size_t can manage the status of
up to 32 blocks. Each bit-map is therefore made up of a number of
size_t's, whose exact count for a super block of a given size I have
just mentioned.<br>
</div>
<br>
<span style="font-weight: bold;">Q3) How do the allocate and deallocate
functions work in regard to bitmaps?</span><br>
<br>
<div style="margin-left: 40px;">The allocate and deallocate functions
manipulate the bitmaps and have nothing to do with the memory that is
given to the user. As I have mentioned earlier, a 1 in the bitmap's bit
field indicates free, while a 0 indicates allocated. This lets us check
32 bits at a time for at least one free block among those 32 blocks, by
comparing the whole word against 0: a non-zero word contains at least
one free block. Now, given a memory block, the allocate function will
find the corresponding bit in the bitmap and reset it (i.e., make it 0).
When the deallocate function is called, it will locate that bit and set
it again, to indicate that the particular block corresponding to this
bit in the bit-map is not being used by anyone, and may be used to
satisfy future requests.<br>
<br>
e.g.: Consider a bit-map of 64 bits, as represented below:<br>
1111111111111111111111111111111111111111111111111111111111111111<br>
<br>
Now, when the first request for allocation of a single object comes
along, the first block in address order is returned. And since the
bit-map is in the reverse order to that of the address order, the last
bit (the LSB, if the bit-map is considered as a binary word of 64 bits)
is reset to 0.<br>
<br>
The bit-map now looks like this:<br>
1111111111111111111111111111111111111111111111111111111111111110<br>
</div>
<br>
<br>
<hr style="width: 100%; height: 2px;"><br>
(Tech stuff. Please stay out if you are not interested in the selection
of certain constants. This has nothing to do with the algorithm per se,
only with some values that must be chosen correctly to ensure that the
allocator performs well in a real world scenario, and maintains a good
balance between memory consumption and allocation/deallocation
speed.)<br>
<br>
The formula for calculating the maximum wastage as a percentage is:<br>
<br>
(32 x k + 1) / (2 x (32 x k + 1 + 32 x c)) x 100<br>
<br>
where<br>
k => the constant overhead per node, e.g. for list it is 8 bytes,
and for map it is 12 bytes, and<br>
c => the size of the base type on which the map/list is
instantiated. Thus, suppose type1 is int and type2 is double; they are
related by the relation sizeof(double) == 2 * sizeof(int).
Thus, all types must have this double size relation for this formula to
work properly.<br>
<br>
Plugging in for list, k = 8 and c = 4 (int and double), we get:<br>
33.376%<br>
<br>
For map/multimap, k = 12 and c = 4 (int and double), we get:<br>
37.524%<br>
<br>
Thus, knowing these values, and based on sizeof(value_type), we may
create a function that returns the Max_Wastage_Percentage for us to
use.<br>
<br>
<hr style="width: 100%; height: 2px;"><small><small><em>See <a
 href="../17_intro/license.html">license.html</a>
for copying conditions. Comments and suggestions are welcome, and may
be sent to <a href="mailto:libstdc++@gcc.gnu.org">the libstdc++ mailing
list</a>.</em><br>
</small></small><br>
<br>
</body>
</html>