Memory leak with Intel MPI

I compiled Siesta 3.2 with the Intel Compiler 14.0, Intel MPI 4.1.3, and MKL 11.2.3. The compilation runs fine, but when I start a longer computation the memory usage keeps increasing until no free RAM is left and the program crashes.

Here is a picture of a profile run:


In contrast, here is the same computation when Siesta is compiled with OpenMPI 1.8:

This problem only appears when running on multiple nodes. On a single node the computation with Intel MPI runs fine and is 30% faster than with OpenMPI.
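To pin down where the growth happens, it can help to log the resident set size of each rank's process over the run. A minimal sketch for Linux follows; the process name `siesta` in the comment is an assumption, so adjust it to your binary name:

```shell
# Print the resident set size (in kB) of a process, read from /proc.
# Pass any PID, e.g. a siesta rank's PID on a compute node.
rss_kb() {
  awk '/^VmRSS/ {print $2}' "/proc/$1/status"
}

# Example: sample the current shell's own RSS once.
rss_kb $$

# For a real run you might sample every rank periodically, e.g.:
#   while sleep 10; do
#     for p in $(pgrep siesta); do
#       echo "$(date +%s) $p $(rss_kb "$p")"
#     done >> mem.log
#   done
```

Comparing such logs between the Intel MPI and OpenMPI runs should show whether the growth is roughly linear per MPI call volume, which points at the communication layer rather than the application.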

Here are my compilation flags:

-O0 -heap-arrays  -g -traceback -xHost -fpp -fltconsistency

and the linker flags:

${MKLROOT}/lib/intel64/libmkl_scalapack_lp64.a -Wl,--start-group ${MKLROOT}/lib/intel64/libmkl_intel_lp64.a ${MKLROOT}/lib/intel64/libmkl_core.a ${MKLROOT}/lib/intel64/libmkl_sequential.a -Wl,--end-group ${MKLROOT}/lib/intel64/libmkl_blacs_intelmpi_lp64.a -lpthread -lm

The same problem happens with Intel Compiler 15.0 and with Intel MPI 5.0.3, so it does not seem to depend on the specific versions. Are there any flags I could try? If you need more information about my setup, please ask.



Hi Mattias,

Please file the issue at Intel(R) Premier Support.




Was there any resolution to this issue? We are seeing similar behavior.

One additional data point: the memory usage seems fine when the application is built against Intel MPI with debug flags. When built in release mode, the memory grows steadily until the application eventually crashes.

Just like the original post, our application builds and runs fine with all other MPI packages we've tested: OpenMPI, MPICH, Cray MPICH, Platform MPI, MS-MPI. However, it looks like a classic memory leak ONLY when running Intel MPI without debug flags. Hard to know where to even start debugging this.
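Since the behavior differs between debug and release builds, one isolating step worth trying is to switch only the Intel MPI runtime variant at link time while keeping your own compile flags fixed. A sketch, assuming the Intel MPI compiler wrappers (check the documentation for your version; object names are placeholders):

```shell
# Intel MPI ships release and debug variants of its runtime library.
# The wrapper option -link_mpi selects which variant gets linked, so
# comparing the two builds can show whether the leak lives in the
# release MPI runtime rather than in the application's own code.
mpiifort -link_mpi=dbg -o app_dbgmpi $(OBJS)   # debug Intel MPI runtime
mpiifort -link_mpi=opt -o app_optmpi $(OBJS)   # release (default) runtime

# Runtime diagnostics can also help localize the growth:
export I_MPI_DEBUG=5
mpirun -n 64 ./app_optmpi
```

If the leak disappears with `-link_mpi=dbg` but the application itself is compiled identically, that would narrow the problem to the release MPI library and make for a much stronger support ticket.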
