Intel Fortran 2013 slow compilation with Openmpi


htg20:

Hello,

I have a Fortran 2003 based code that uses MPI. I compiled Open MPI 1.6.1 from source with Intel 2013 and am now trying to compile my code. Without MPI (using stub MPI subroutines) everything works fine. However, when I use MPI calls, ifort freezes on the last files or compiles very, very slowly.

Any hint as to what the problem might be?

Cheers

Hossein

Tim Prince:

It will be very difficult for anyone to comment if you can't tell us a few things, such as which OS and which ifort you are using, and your compile and link options. For example, excessive use of IPO with limited RAM, or compiling in 32-bit mode, might be a cause of slow linking.

htg20:

Hi all,

I looked into the problem further.

I use Open MPI 1.6.3 and Intel Fortran 13.1.0.146 (the latest) on Ubuntu 12.1 x64. The flags used to compile are:

ifort  -I/opt/openmpi/1.6.3_intel/include -I/opt/openmpi/1.6.3_intel/lib -L/opt/openmpi/1.6.3_intel/lib   -cpp     -O0 -debug -traceback   -check all,noarg_temp_created  -ftrapuv -openmp  -m64    -w -fpp   -lmpi_f90 -lmpi_f77 -lmpi -ldl -lm -Wl,--export-dynamic -lrt -lnsl -luti 

It seems ifort does not like the following construct (subroutine below):

     use compute_homogen_val_class, gugu1 => computes_add

gfortran refuses to compile this if I remove ( ,gugu1 => computes_add ), since the name computes_add is then ambiguous. However, if I remove it, mpif90 (ifort) compiles with no complaints, which I think is not standard-conforming. When I add it back, ifort hangs while compiling that file.

!==============================================================================

subroutine computes_add(keyword,num_kwl,computes,ncomputes,compute_id,parts)

use compute_homogen_val_class, gugu1 => computes_add
use compute_jintegral_class,   gugu2 => computes_add
use compute_mcouplerr_class,   gugu3 => computes_add
use compute_testc_class,       gugu4 => computes_add
use compute_nodalhvar_class,   gugu5 => computes_add
! include "compute_style2.h" ! run the perl script (perl makestyle.pl) to generate this file
use style_create_mod
implicit none
!------------------- INOUT variables------------------
character(*) :: keyword(:) !the keyword
integer :: num_kwl !number of keyword lines
type(ptr_ty_compute_base) :: computes(:)
integer :: ncomputes, compute_id
type(ty_part) :: parts
!--------------------LOCAL variables------------------

...

end subroutine computes_add
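The pattern above can be reduced to a small sketch: several modules each export a specific procedure named computes_add, and the rename clause (gugu1 => computes_add, etc.) hides those imports behind fresh local names so they cannot clash with the host subroutine of the same name. The module bodies below are hypothetical, for illustration only:

! Minimal sketch of the rename pattern; module contents are hypothetical.
module compute_a_class
contains
  subroutine computes_add()
    print *, 'compute_a_class version'
  end subroutine computes_add
end module compute_a_class

module compute_b_class
contains
  subroutine computes_add()
    print *, 'compute_b_class version'
  end subroutine computes_add
end module compute_b_class

! Host subroutine with the same name as the imported procedures:
! without the renames, computes_add would be ambiguous here.
subroutine computes_add()
  use compute_a_class, gugu1 => computes_add
  use compute_b_class, gugu2 => computes_add
  implicit none
  call gugu1()   ! calls the compute_a_class version
  call gugu2()   ! calls the compute_b_class version
end subroutine computes_add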

Steve Lionel (Intel):

Please provide a complete compilable source we can try.

Steve
htg20:

Dear Steve,

It is a big project and it is not open source yet. Please provide an email address so I can share the source files with you.

Steve Lionel (Intel):

Use Intel Premier Support - https://premier.intel.com/  Include a link to this thread.

Steve
htg20:

I have uploaded the source files to Google Code; please download from here:

http://code.google.com/p/permix/source/checkout

You can compile with:   make intel_ompi_min

to get minimal dependencies and the Intel compiler.

htg20:

Hi,

After some time I looked into the problem in more detail. I tried to limit the namespace of the modules with the 'only' keyword. This drastically reduced the compilation time, and the problem is now solved.
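The fix can be sketched like this: instead of importing every public entity of each module, list only the names the subroutine actually references, which keeps the compiler's visible namespace small. Which module provides which entity is an assumption here, based on the declarations in the posted code:

subroutine computes_add(computes, parts)
  ! Import only the names actually used; the providing modules are assumed.
  use compute_homogen_val_class, only: ptr_ty_compute_base
  use style_create_mod,          only: ty_part
  implicit none
  type(ptr_ty_compute_base) :: computes(:)
  type(ty_part)             :: parts
end subroutine computes_add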

Still, I am not sure whether there is a problem with the compiler.

Steve Lionel (Intel):

There may still be a problem - we'll see if we can reproduce it from your source.

Steve
