Intel® Fortran Compiler for Linux* and Mac OS X*

Direct access file and double precision complex

I want to write a 1000x1000 complex matrix "mat" in a direct access file.
When the matrix is in single precision, my code works.
When the matrix is in double precision, my code fails with a segmentation fault.
Attached is a very short test program which:

- initializes a complex matrix (the working precision "wp" can be set to "sp" (single precision) or "dp" (double precision))
- computes the length of the record, rlength
- opens a direct-access file "mat.dat" with RECL=rlength
- writes mat to mat.dat
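A common culprit here is that ifort measures RECL= in 4-byte units by default (bytes only under -assume byterecl), so a record length computed by hand in bytes is off by a factor of four, and the error only surfaces when the element size changes. A minimal sketch of the kind of program described (names such as "wp" and "mat" taken from the description above) that sidesteps the unit question with INQUIRE(IOLENGTH=):

```fortran
program write_mat
  implicit none
  integer, parameter :: sp = kind(1.0), dp = kind(1.0d0)
  integer, parameter :: wp = dp          ! switch to sp for single precision
  complex(wp) :: mat(1000,1000)
  integer :: rlength

  mat = (1.0_wp, 0.0_wp)

  ! IOLENGTH reports the record length in exactly the units this
  ! compiler expects for RECL=, so the code works whether or not
  ! -assume byterecl is in effect.
  inquire(iolength=rlength) mat

  open(unit=10, file='mat.dat', access='direct', form='unformatted', &
       recl=rlength, status='replace')
  write(10, rec=1) mat
  close(10)
end program write_mat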

linker flag -export-dynamic?

Does anyone know what the linker flag -export-dynamic does? Apparently it works with version 8.0, but it doesn't with 9.0. Is there an equivalent flag for version 9.0?

I am trying to compile and link a Fortran user subroutine into a finite element code called ABAQUS.

I am running Red Hat 9 on an Athlon 3000+.

The warning:

ifort: Command line warning: ignoring unknown option '-export-dynamic'

The ABAQUS executable then quits with this error message:
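The error message itself did not come through. On the flag: -export-dynamic is really a GNU ld option (it exports the executable's own symbols to the dynamic symbol table, which plugin loaders like ABAQUS user subroutines rely on). One plausible fix for 9.0 is to hand it to the linker explicitly via ifort's -Wl, pass-through; file names below are hypothetical:

```shell
# ifort 9.0 no longer accepts -export-dynamic itself, but anything
# after -Wl, is forwarded verbatim to the underlying GNU linker.
ifort user_sub.o -Wl,-export-dynamic -o abaqus_driver
```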


A customer has the following environment:

1. A single-user license for Fortran, C++ and MKL (Linux).

Hardware: SGI Altix.

There are 5 nodes.


Every compilation is done on the control node (the first node).

If the CPU time is exceeded (there is a quota on this node), those jobs (compilations) are automatically sent to another node (any one of the other four).

Is it necessary to have another kind of license?

Thanks again


Memory error from PRINT * under Fedora core 2

I have the file tst.f90, which only contains the code


I compile this, using ifort version 9.0, as

ifort tst.f90 -oftest

If I test the generated executable ftest for bad memory behaviour with Valgrind's memory checking tool, using the command

valgrind --tool=memcheck ftest

I get the following output:

Intel Fortran V7.1 - Mixing Fortran (Modules) and C

Dear All,
I have a question/problem related to using Fortran modules together with C++ code under Intel Fortran V7.1.

Unfortunately I'm not able to link the code, which I guess is because I can't figure out the correct naming convention for subroutines in modules.

Here is an example (for the code, see the end of this message) which illustrates the problem. The code links without any problems when I change to V8.0.

For v7.1 I use the following compiler commands:
ifc -c test.f90
icpc -c main.cpp
ifc test.o main.o
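Since the mangling scheme for module procedures changed between ifc 7.1 and ifort 8.0, rather than guessing it, one reliable trick is to read the decorated name straight out of the object file and declare exactly that name in the C++ extern "C" block (the subroutine name grepped for below is hypothetical):

```shell
# List the global symbols the Fortran object actually exports; the
# module procedure appears under its full decorated (mangled) name.
ifc -c test.f90
nm test.o | grep -i mysub
```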

-pipe flag equivalent?

I'm working in a network environment where many of the applications I need to compile are on a NFS-mounted drive. Thus, when I compile the several-hundred source files of my program, it takes forever due to all of the network activity created during the assembly stage of the compile. Is there an equivalent to the gcc "-pipe" flag that will force ifort to use pipes rather than temporary files during assembly? This flag has sped up my C compiles by at least 2-3x, so I'd like to be able to do it with ifort as well.
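As far as I know ifort has no direct -pipe equivalent, but it does honor the TMP/TMPDIR environment variables for its intermediate files, so pointing them at a local disk (or a RAM-backed filesystem) keeps the assembly stage off the NFS mount. The paths below are examples:

```shell
# Redirect the compiler's scratch files to fast local storage
# instead of the NFS-mounted working directory.
export TMPDIR=/dev/shm/$USER    # any fast local path works
mkdir -p "$TMPDIR"
ifort -c bigfile.f90
```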


OpenMP parallel loop incorrect iteration distribution?

G'day All,

The simple program enclosed seems to indicate a problem in ifort's OpenMP compilation of parallel DO loop work-sharing directives: it distributes increasing numbers of iterations to successive threads, instead of spreading them evenly, when the loop index variable 'I' is re-used in later loop statements inside a SINGLE directive. The problem does not occur when the program is modified to use a different variable 'J' for the 2nd loop, nor does it occur in equivalent C code.
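The enclosed program is not reproduced here; a minimal sketch of the pattern being described (re-using the work-shared index I inside a later SINGLE block, with hypothetical loop bounds and array sizes) looks like:

```fortran
program omp_reuse
  use omp_lib
  implicit none
  integer :: i, hits(0:63)

  hits = 0
!$omp parallel shared(hits)
!$omp do
  do i = 1, 800
     ! each thread counts the iterations it was handed
     hits(omp_get_thread_num()) = hits(omp_get_thread_num()) + 1
  end do
!$omp end do
!$omp single
  ! Re-using I here is what appears to provoke the skewed
  ! distribution; declaring a separate J avoids it.
  do i = 1, 10
  end do
!$omp end single
!$omp end parallel

  print *, 'iterations per thread:', hits(0:omp_get_max_threads()-1)
end program omp_reuse
```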

Large common blocks cause SIGKILL?


Large common blocks cause a "Killed" message before any
code is actually executed. This happens using the em64t
version of ifort 9.0-031 on a SuSE 9.3 x86_64 machine. I'm
including a simple program that demonstrates the problem.
If I comment out the common block statement it does not
have this problem. Here's the command line output
demonstrating the problem and then the source for the
program that has the problem:
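The command-line output and source did not survive the posting; a program of the general shape described might look like the sketch below (the array size is hypothetical). For what it's worth, static data larger than 2 GB on Intel 64 generally needs -mcmodel=medium and dynamically linked Intel libraries to link and run, and an address space that exceeds the node's memory quota can also draw a kernel "Killed" before the first statement executes:

```fortran
program bigcommon
  implicit none
  ! roughly 4 GB of static storage in a single COMMON block
  real(8) :: a(500000000)
  common /blk/ a

  a(1) = 1.0d0
  print *, a(1)
end program bigcommon
```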

Accuracy with -axN and other options, e.g. -mp1 -prec_div

I am seeing subtle differences in double precision arithmetic when using the -axN option: errors on the order of 1D-16, which appear a bit large for double precision. Since I presume that this is exploiting 32-bit registers, I wonder if it is fully compatible with higher accuracy (e.g. -mp, or -mp1 -prec_div together with -pc80 -r8 -fpconstant -O3), or whether some issues might be related to flushing to zero. For reference, this also appears to happen when -mtune=pentium-mmx is used.
