Intel® Fortran Compiler for Linux* and Mac OS X*

Open MPI and Intel MPI DATATYPE

Dear all,

 

I have noticed a small difference between Open MPI and Intel MPI.

For example, in MPI_ALLREDUCE, Intel MPI does not allow the same variable to be used as both the send and the receive buffer.
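For what it is worth, the MPI standard itself forbids aliasing the send and receive buffers, so Intel MPI is entitled to reject it; the portable way to reduce into the same variable is MPI_IN_PLACE, which both Open MPI and Intel MPI accept. A minimal sketch (variable names are just for illustration):

    program allreduce_inplace
      use mpi
      implicit none
      integer :: ierr, rank
      double precision :: total

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

      ! Each rank contributes its own value; the result overwrites "total".
      total = dble(rank + 1)

      ! MPI_IN_PLACE stands in for the send buffer, so the same variable
      ! can legally receive the result on every rank.
      call MPI_Allreduce(MPI_IN_PLACE, total, 1, MPI_DOUBLE_PRECISION, &
                         MPI_SUM, MPI_COMM_WORLD, ierr)

      if (rank == 0) print *, 'sum =', total
      call MPI_Finalize(ierr)
    end program allreduce_inplace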

 

I wrote my code with Open MPI and then ran it on an Intel MPI cluster.

Now I get the following error:

 

Fatal error in MPI_Isend: Invalid communicator, error stack:
MPI_Isend(158): MPI_Isend(buf=0x1dd27b0, count=1, INVALID DATATYPE, dest=0, tag=0, comm=0x0, request=0x7fff9d7dd9f0) failed

 

 

 

Cannot open include file

I am using somebody else's solver, which I copied onto my machine as is. After linking the correct libraries, I get this error for an include file:

mpif90 -c poisson_hypre.F90  poisson_hypre.F90 -O2 -fpp -I/myHomeDir/Codes/OpposedJet//lib/hypre//include/
poisson_hypre.F90(21): error #5102: Cannot open include file 'HYPREf.h'

The file HYPREf.h is in that directory, and I have even changed its permissions with `chmod 777 HYPREf.h`, but that did not change anything. I checked with the code's author, and everything works for him under the same conditions.
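One way to narrow it down is to take the solver out of the picture and test the include path with a tiny stand-alone file; both the preprocessor (with -fpp) and the Fortran INCLUDE line search the directories given with -I. A sketch of such a reproducer (hypothetical file name, compile command copied from the failing build):

    ! include_probe.F90 -- hypothetical two-line reproducer; compile with
    !   mpif90 -c -fpp -I/myHomeDir/Codes/OpposedJet//lib/hypre//include/ include_probe.F90
    ! If this also fails with error #5102, the problem is the search path
    ! or the header file itself, not the solver source.
    program include_probe
      implicit none
      include 'HYPREf.h'
      print *, 'HYPREf.h found and included'
    end program include_probe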

libifcore.so.5: undefined symbol: _intel_fast_memmove

Hello,

I routinely use f2py to compile Fortran code into Python modules. Recently, I ran across a baffling problem. I compiled a module using the following command line:

f2py --compiler=intelem --fcompiler=intelem -c interpol_grid.pyf interpol_grid.F90

which produced the file interpo.so, as it was supposed to. However, when I tried loading it from within IPython, I got the error in the subject line: libifcore.so.5: undefined symbol: _intel_fast_memmove.

Academic Research Performance Libraries from Intel (Linux*)--Intel® Math Kernel Library for Linux*--version 11.2

Hello Sir,

I have downloaded and installed the Academic Research Performance Libraries from Intel (Linux*)--Intel® Math Kernel Library for Linux*--version 11.2 on my computer, but I am not able to find the ifort compiler. Please reply.

In the terminal, if I use 'ifort', I get the message 'command not found'.

Regards

common block problem in OpenMP Fortran

my code is:

 program
 ...
 ! Loop which I want to parallelize
 !$OMP PARALLEL DO
 do I = 1, N
 ...
 call FORD(i,j)
  ...
 end do
 !$OMP END PARALLEL DO
 end program

  subroutine FORD(i,j)
  logical servo,genflg,lapflg
  dimension c(nvarc)
  dimension zl(3),zg(3),v1(3,3),v2(3,3),rn(3),
 .          rcg1(3),rcg2(3),ru1(3),ru2(3),
 .          rott1(3),rott2(3),velr(3),dt(3),
 .          dfs(3),ftu(3),fnr(3),amomet(3)
  common /contact/ iab11,iab22,xx2,yy2,zz2,
 .                 ra1,rb1,rc1,ra2,rb2,rc2,
 .                 v1,v2,
 .                 xg1,yg1,zg1,xg2,yg2,zg2
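If FORD writes to the variables in /contact/, every thread in the parallel loop is updating the same shared storage, which is a data race. One common fix is to declare the common block THREADPRIVATE so each thread gets its own copy (another is to pass the data through arguments or a derived type). A stripped-down free-form sketch with a simplified /contact/ block (names reused from the snippet above, everything else is assumed):

    program contact_omp
      implicit none
      integer :: i
      integer, parameter :: n = 8
      double precision :: xx2, yy2, zz2
      common /contact/ xx2, yy2, zz2
      !$omp threadprivate(/contact/)

      ! Master thread's copy, propagated to the others via COPYIN.
      xx2 = 0.d0
      yy2 = 0.d0
      zz2 = 0.d0

      !$omp parallel do copyin(/contact/)
      do i = 1, n
         call FORD(i, 1)
      end do
      !$omp end parallel do
    end program contact_omp

    subroutine FORD(i, j)
      implicit none
      integer :: i, j
      double precision :: xx2, yy2, zz2
      common /contact/ xx2, yy2, zz2
      !$omp threadprivate(/contact/)

      ! Each thread updates only its private copy of /contact/.
      xx2 = dble(i)
      yy2 = dble(j)
      zz2 = xx2 + yy2
    end subroutine FORD

Note that the THREADPRIVATE directive has to appear in every program unit that declares the common block, and the code must be compiled with the OpenMP flag (e.g. -qopenmp for recent ifort) for the directives to take effect.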

How do I determine at runtime what vector instructions are being used when compiling with -ax

In a few weeks, we will have another generation of Intel HPC system. We will then have systems that support SSE4.2 (Nehalem, Westmere), AVX (SandyBridge, IvyBridge), and CORE-AVX2 (Haswell) optimizations. Since the compile nodes are being upgraded to Haswell as well, I want to tell the users to specify something other than -xHost when using Intel Fortran, so that binaries are backwards compatible and run on any of the clusters. I plan to tell them to use -xSSE4.2 -axCORE-AVX2,AVX.
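For context, with -xSSE4.2 -axCORE-AVX2,AVX the dispatcher selects among the compiled code paths at runtime based on the CPU's feature flags, so a crude first check on any node is to look at what the kernel reports. A rough Linux-only sketch (the substring matching against /proc/cpuinfo is approximate):

    program cpu_feature_probe
      implicit none
      character(len=4096) :: line
      integer :: ios, u
      logical :: has_sse42, has_avx, has_avx2

      has_sse42 = .false.
      has_avx   = .false.
      has_avx2  = .false.

      ! Read the first "flags" line reported by the kernel.
      open (newunit=u, file='/proc/cpuinfo', status='old', action='read', iostat=ios)
      if (ios /= 0) stop 'cannot read /proc/cpuinfo'
      do
         read (u, '(A)', iostat=ios) line
         if (ios /= 0) exit
         if (index(line, 'flags') == 1) then
            has_sse42 = index(line, ' sse4_2 ') > 0
            has_avx   = index(line, ' avx ')    > 0
            has_avx2  = index(line, ' avx2 ')   > 0
            exit
         end if
      end do
      close (u)

      print *, 'sse4_2:', has_sse42, '  avx:', has_avx, '  avx2:', has_avx2
    end program cpu_feature_probe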

 

My questions are:

 
