I'm working with a large dataset, and I'm using several concurrent_vector and concurrent_hash_map instances to hold it. After running for a while I get a bad_alloc exception.
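For context, here is a minimal sketch of the kind of usage pattern I mean; it assumes the Intel TBB containers, and the types and names are made up, but the shape (many threads appending to shared containers) matches my real code:

```cpp
#include <tbb/concurrent_vector.h>
#include <tbb/concurrent_hash_map.h>
#include <cstddef>
#include <string>

struct Record {
    long long   id;
    std::string payload;
};

// Containers shared between worker threads; they grow continuously
// as the dataset is processed.
tbb::concurrent_vector<Record>                   records;
tbb::concurrent_hash_map<long long, std::size_t> index;

void add_record(const Record& r) {
    // push_back never moves existing elements, so other threads can
    // keep reading while the vector grows.
    auto it = records.push_back(r);

    // Remember where this id landed in the vector.
    tbb::concurrent_hash_map<long long, std::size_t>::accessor acc;
    index.insert(acc, r.id);
    acc->second = static_cast<std::size_t>(it - records.begin());
}
```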
According to this answer
concurrent_vector, I assume, is adding new blocks of memory but keeps using the old ones. Not moving objects is important, as it allows other threads to keep accessing the vector even as it is being resized. It probably also helps with other optimizations (such as keeping cached copies valid). The downside is that access to the elements is slightly slower, as the correct block needs to be found first (one extra dereference).
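If I understand that correctly, element addresses stay stable as the vector grows, which a small test like the following should confirm (my own sketch, not taken from the linked answer):

```cpp
#include <tbb/concurrent_vector.h>
#include <cassert>

int main() {
    tbb::concurrent_vector<int> v;
    v.push_back(42);
    const int* p = &v[0];        // address of the first element

    // Grow far past the initial capacity. concurrent_vector adds new blocks
    // instead of reallocating and moving what is already there, so the old
    // element should stay at the same address (unlike std::vector).
    for (int i = 0; i < 100000; ++i)
        v.push_back(i);

    assert(p == &v[0]);          // the element was never moved
    assert(*p == 42);
    return 0;
}
```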
So there's a chance I'm getting bad_alloc due to heap fragmentation. How can I avoid heap fragmentation?