When multicore-enabling a C/C++ application, it's common to discover that
malloc() (or new) is a bottleneck that limits the speedup your parallelized application can obtain. This article explains the four basic problems that a good parallel storage allocator solves:
- thread safety,
- overhead,
- contention, and
- memory drift.
Basic storage allocators are not thread safe, although recent efforts have started to remedy this problem for many concurrency platforms. In other words, two parallel threads attempting to allocate or deallocate at the same time can race on the storage allocator's internal data structures, resulting in improper behavior. When threads have unrestricted access to the storage allocator, they may end up "stomping on each other's toes," leading to anomalous behavior.
The simple solution to this problem is for the application to acquire a mutex (mutual exclusion) lock on the allocator before calling malloc() or free(), which lets only one thread access the allocator's internal data structures at a time.
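This locking strategy can be sketched as a pair of wrapper functions around malloc() and free(). (The names locked_malloc and locked_free are illustrative, not part of any real allocator's API.)

```c
#include <pthread.h>
#include <stdlib.h>

/* A single global lock serializing all access to the allocator. */
static pthread_mutex_t alloc_lock = PTHREAD_MUTEX_INITIALIZER;

void *locked_malloc(size_t size) {
    pthread_mutex_lock(&alloc_lock);
    void *p = malloc(size);
    pthread_mutex_unlock(&alloc_lock);
    return p;
}

void locked_free(void *p) {
    pthread_mutex_lock(&alloc_lock);
    free(p);
    pthread_mutex_unlock(&alloc_lock);
}
```

Because every allocation and deallocation funnels through alloc_lock, at most one thread touches the allocator's internal data structures at a time.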
If the storage allocator is thread safe, the locking protocol is incorporated into the logic of the storage allocator itself.
Overhead and contention
Two problems may arise when an allocator is made thread safe by locking. The first is that allocation and deallocation may now be slower due to the overhead of locking. The second is that contention may arise in accessing the storage allocator, which can slow down the application and limit its scalability. Contention may not be a big problem for 2 or 4 cores, but as Moore's Law brings us dozens and even hundreds of cores per chip, contention can threaten scalability.
Both problems can be solved using a distributed allocator, which provides a local storage pool for each thread.
A distributed allocator allows allocation and deallocation to run out of the local storage pool most of the time. In the uncommon case that a thread's local pool is exhausted, the thread can obtain additional storage, typically in large blocks, from the global pool. The contention problem is solved, because threads only rarely access the global pool. The overhead problem is solved as well, because no locking is needed to access the local pool.
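The idea can be sketched for a single fixed block size, assuming a thread-local free list backed by a lock-protected global pool. (The names pool_alloc, pool_free, and the BATCH refill size are hypothetical; a real allocator would also carve large chunks from the operating system rather than falling back on malloc().)

```c
#include <pthread.h>
#include <stdlib.h>

#define BLOCK_SIZE 64   /* one fixed block size, for simplicity */
#define BATCH      32   /* blocks fetched from the global pool at a time */

/* Free blocks are chained through their first word. */
typedef struct block { struct block *next; } block_t;

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static block_t *global_pool = NULL;          /* shared, lock-protected */
static __thread block_t *local_pool = NULL;  /* per-thread, no lock needed */

/* Uncommon path: grab up to BATCH blocks from the global pool under the
   lock, falling back on malloc() when the global pool is empty too. */
static void refill_local_pool(void) {
    pthread_mutex_lock(&global_lock);
    for (int i = 0; i < BATCH && global_pool != NULL; i++) {
        block_t *b = global_pool;
        global_pool = b->next;
        b->next = local_pool;
        local_pool = b;
    }
    pthread_mutex_unlock(&global_lock);
    if (local_pool == NULL) {                /* global pool was empty */
        block_t *b = malloc(BLOCK_SIZE);
        b->next = NULL;
        local_pool = b;
    }
}

/* Common path: pop from and push onto the thread's own free list. */
void *pool_alloc(void) {
    if (local_pool == NULL)
        refill_local_pool();
    block_t *b = local_pool;
    local_pool = b->next;
    return b;
}

void pool_free(void *p) {
    block_t *b = p;
    b->next = local_pool;
    local_pool = b;
}
```

In the common case, pool_alloc and pool_free touch only the thread-local list, so they acquire no lock and contend with no other thread.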
Unfortunately, local pools introduce yet another problem, especially in concurrency platforms where storage is actively shared among threads or which load-balance a computation across the threads. One thread A may continually allocate storage out of its local pool and pass it off to another thread B, which frees it into its own local pool. When thread A's local pool runs out, it allocates more storage from the global pool. This storage, too, is passed to B, which frees it into its local pool. Over time, B's local pool grows without bound, creating something akin to a memory leak, where the virtual-memory footprint of the application continues to grow.
This memory drift problem can be solved in two ways. One solution is for a thread whose local pool grows too large to return some of its storage to the global pool. The other is for every thread to return freed storage to the local pool of the thread that originally allocated it. Either method can be implemented with low overhead, and both provide satisfactory solutions to the memory drift problem.
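The first solution can be sketched as a free routine that counts the blocks in the local pool and, past a threshold, splices half of them back into the global pool. (The MAX_LOCAL threshold and the pool_free name are illustrative assumptions, not any particular allocator's interface.)

```c
#include <pthread.h>
#include <stdlib.h>

#define MAX_LOCAL 256   /* threshold on the local pool's size, in blocks */

/* Free blocks are chained through their first word. */
typedef struct block { struct block *next; } block_t;

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static block_t *global_pool = NULL;           /* shared, lock-protected */
static __thread block_t *local_pool = NULL;   /* per-thread free list */
static __thread size_t local_count = 0;

/* Free into the local pool, but when it grows past MAX_LOCAL, move half
   of it back to the global pool so no thread's pool grows without bound. */
void pool_free(void *p) {
    block_t *b = p;
    b->next = local_pool;
    local_pool = b;
    if (++local_count > MAX_LOCAL) {
        block_t *batch = local_pool;          /* first half, to be returned */
        block_t *tail = batch;
        for (size_t i = 1; i < MAX_LOCAL / 2; i++)
            tail = tail->next;
        local_pool = tail->next;              /* keep the second half locally */
        local_count -= MAX_LOCAL / 2;
        pthread_mutex_lock(&global_lock);
        tail->next = global_pool;             /* splice batch into global pool */
        global_pool = batch;
        pthread_mutex_unlock(&global_lock);
    }
}
```

The lock is taken only once every MAX_LOCAL/2 frees in the worst case, so the common-case free path remains lock-free.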
There are other problems that can arise with parallel storage allocators. For example, false sharing is a particularly pernicious problem, in which two threads access independent blocks of storage that happen to lie on the same cache line, causing the processor's cache-coherence protocol to thrash. A storage allocator that fails to respect cache-line boundaries and hands out blocks sharing the same cache line to different threads may induce false sharing, which is hard to detect, because the logic of the code shows the threads accessing independent locations.
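One common defense can be sketched as an allocation wrapper that aligns every block to a cache-line boundary and rounds its size up to a whole number of lines, so that no two allocations ever share a line. (The 64-byte line size is an assumption that holds on most current x86 processors, and cache_aligned_malloc is a hypothetical name; the sketch uses C11's aligned_alloc.)

```c
#include <stdlib.h>
#include <stdint.h>

#define CACHE_LINE 64   /* assumed cache-line size, in bytes */

/* Return storage that starts on a cache-line boundary and occupies a
   whole number of cache lines, so adjacent allocations never share one. */
void *cache_aligned_malloc(size_t size) {
    /* Round size up to a multiple of CACHE_LINE, as aligned_alloc
       requires the size to be a multiple of the alignment. */
    size_t rounded = (size + CACHE_LINE - 1) & ~(size_t)(CACHE_LINE - 1);
    return aligned_alloc(CACHE_LINE, rounded);
}
```

The trade-off is internal fragmentation: a 10-byte allocation consumes a full 64-byte line, which is why allocators typically apply such padding only to blocks likely to be accessed by different threads.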
Two examples of parallel storage allocators include Hoard, written by Emery Berger of the University of Massachusetts, and the Miser allocator, distributed by Cilk Arts as part of our Cilk++ distribution. (More on Miser in an upcoming post - stay tuned!)