First of all, I must say that I am really enjoying the TBB library. I modified a few of our single-threaded algorithms to take advantage of the parallel_for and parallel_sort constructs, and I am really impressed by the results, especially on 8- and 16-core servers.
I also decided to use TBB Malloc as our default allocator. I read the technical article, as well as the source code, to try to better understand how it works and what we should expect in terms of speed-up on many cores.
I know that memory is never returned to the OS, as doing so would require locking and thus remove most, if not all, of the scalability benefits. However, in our server application I see a constant rise in memory consumption, and unfortunately, even on 64-bit systems, it can lead to an out-of-memory condition (without TBB, our application in my test setup uses 1 to 1.2 GB of memory; with TBB, it goes over 2 GB and keeps rising).
I looked at our code to try to understand what we might be doing that causes TBB Malloc to enter a pattern where memory is not reused effectively.
As I understand it, TBB Malloc tries to allocate memory from the thread-local (TLS) structures first, then from the publicly freed objects, and lastly from other threads' structures. Our application has a lot of threads running at the same time, over 100 in a typical setup. Also, memory is often allocated in one thread and freed in another. Finally, threads are created and destroyed from time to time. Is it possible that TBB Malloc favors allocating locally, but, to avoid locking when an object is freed by a different thread, favors releasing that memory to the public free list? Is it possible that this list is seldom reused with our pattern of allocating and destroying objects, so it grows into a very large list of allocations? Finally, is there a way to observe this at runtime, other than recompiling with statistics enabled?
I will try to analyze our pattern of allocations better, but any help or tips about TBB Malloc's inner workings in this situation would be greatly appreciated.
Thank you for your time,