Debugging 64-bit MPI applications under Windows / Visual Studio 2008

michel.lestrade:

Hi,

Is there any kind of information or guide regarding the procedure for debugging Intel MPI applications under Windows?

So far, I have only found the TotalView program (which is Unix/Linux-based) and the following blog post which seems to be mainly for the Microsoft MPI libraries:
http://blogs.artinsoft.net/csaborio/archive/2007/08/08/1478.aspx

Since I have not yet built a cluster and am still using mpiexec on my own workstation, I do not really know how to do this. I have installed the remote debugger (x64) and launched msvsmon.exe, but where do I go from here?

Right now, I am getting a stack overflow in our MPI application which doesn't show up unless I launch more than 1 process. Since this application uses a solver which we did not write, it would be helpful to be able to step through the code and see where this is happening.

Dmitry Kuzmin (Intel):
Best Reply

Hi Michel,

Have you seen this article? http://blogs.msdn.com/hpctrekker/archive/2009/07/18/5-minute-tutorial-on...
Debugging on Windows is a very difficult process - I'll try to find some hints.

Regards!
Dmitry

michel.lestrade:

Thanks, that is a good find. Definitely bookmarked for later use.

Fortunately, I already solved my immediate problem: apparently I shouldn't be using dynamic linking for my MPI projects. I found this out when both /Qmkl:cluster and the Link Advisor told me to use mkl_blacs_intelmpi_lp64_dll.lib, a file which doesn't exist.

I am a bit surprised that using mkl_blacs_intelmpi_lp64.lib and dynamic linking did not fail on my earlier, smaller projects.

I also wonder if I will be able to mix in any OpenMP code, given how strongly dynamic linking is recommended in those cases.

Dmitry Kuzmin (Intel):

Hi Michel,

I'd say that MPI projects shouldn't use STATIC linking, because you may end up with two instances of the standard library: one pulled in from the statically linked code and one from the dynamically linked libraries. I'm not sure about Windows, but on Linux we definitely saw unpredictable behavior with statically linked applications.
OpenMP should not be a big problem, but since you are using MKL and MKL creates internal threads, you need to check whether your application actually benefits from OpenMP in your own code or not.
You can also control the number of threads created by OpenMP and by MKL via the OMP_NUM_THREADS and MKL_NUM_THREADS environment variables.
BTW: MKL knows how many threads it can create for an optimal workload.
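For example (shown in POSIX sh syntax so it runs anywhere; on Windows cmd the equivalent is `set OMP_NUM_THREADS=2` before launching mpiexec; the 2/2 split is just an illustrative choice for, say, 4 ranks on an 8-core box):

```shell
# Cap the thread counts before launching the MPI job.
export OMP_NUM_THREADS=2   # threads for your own OpenMP regions
export MKL_NUM_THREADS=2   # threads MKL may create internally
echo "OMP_NUM_THREADS=$OMP_NUM_THREADS MKL_NUM_THREADS=$MKL_NUM_THREADS"
```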

Regards!
Dmitry

michel.lestrade:

Hi Dmitry,

Maybe I was doing something wrong with the linking then... I was using link.exe in my makefile and specifying the libraries directly, since I had trouble with the mpiifort wrapper.

Let's say I go back to basics and use mpiifort for my link step. Is there something wrong with the following syntax ?

C:\MUMPS\test_exe>mpiifort -mt_mpi /Qmkl:cluster bench_mumps.obj solver_mumps.obj
mmio.obj libpord.lib libmumps_common.lib libdmumps.lib /link /libpath:..\lib\64\

The extra libpath at the end is for the .lib files of the MUMPS solver. Unfortunately, this gives me unresolved externals for those lib files (DCOPY, DGEMM, DSWAP, etc.), which is why I was linking manually in the first place. Removing -mt_mpi gives exactly the same result.

If I understand the help correctly, /Qmkl is only used in the link step, so it is probably not related to how I compiled the .lib files in the first place. Just in case, I am uploading those makefiles too. The project is mixed C and Fortran, compiled with icl and ifort.
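For what it's worth, DCOPY, DGEMM and DSWAP are BLAS symbols, so those errors suggest MKL itself never makes it onto the link line. A hypothetical explicit link step with the MKL cluster libraries spelled out after the MUMPS .lib files (library names follow the usual MKL static lp64 pattern; verify them with the Link Line Advisor for your MKL version; the command is held in a string here because it is not runnable without the Intel toolchain):

```shell
# Sketch of an explicit link line: the MKL libraries must come *after*
# libdmumps.lib so its DCOPY/DGEMM/DSWAP references get resolved.
LINK_CMD='mpiifort -mt_mpi bench_mumps.obj solver_mumps.obj mmio.obj
  libpord.lib libmumps_common.lib libdmumps.lib
  mkl_scalapack_lp64.lib mkl_blacs_intelmpi_lp64.lib
  mkl_intel_lp64.lib mkl_intel_thread.lib mkl_core.lib libiomp5md.lib
  /link /libpath:..\lib\64\'
echo "$LINK_CMD"
```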

Attachments:
Mklib64 (2.99 KB)
Mklib64_common (2.81 KB)
michel.lestrade:

Hi Dmitry,

I managed to link it properly by using mpiifort on the source files directly, and it runs without a crash. However, it does seem like only the OpenMP and MPI libraries are linked dynamically. The rest of MKL seems to be linked statically by default, because I do not see it in Dependency Walker (attached).

C:\MUMPS\test_exe>mpiifort /Qopenmp /Qparallel -I../include/ bench_mumps.f90 solver_mumps.f90 mmio.f
libpord.lib libmumps_common.lib libdmumps.lib /link /Qmkl:cluster /libpath:..\lib\64\

Attachments:
bench_mumps.txt (314.14 KB)
