Intel® Clusters and HPC Technology

Problems with Trace Collector

I'm trying to build a trace for a program using Trace Collector, but I'm getting the following error:

mpicc test.o -L$VT_LIB_DIR -lVT $VT_ADD_LIBS -o build/test.out
/opt/intel/itac/ In function `VT_IPCThreadLevel':
/nfs/isv/disks/sv-ssg_dpd_pdsd-mpi_users/dyulov/testing/ITAC/ITAC_P_7_2_0_05/ict/tracing/vampirtrace/src/generic/VT_ipc.c:(.text+0x316b): undefined reference to `PMPI_Query_thread'

Could you please tell me where the problem could be?

The program itself is very simple and does not use the ITC API.
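For what it's worth, a sketch of one way to narrow this down (the `$MPI_LIB_DIR` fallback below is a placeholder, not your real path): `PMPI_Query_thread` is the profiling-interface name of the MPI-2 call `MPI_Query_thread`, which libVT uses in `VT_IPCThreadLevel`, so the MPI library you link against must export it.

```shell
# Hypothetical check: does the MPI library actually export the MPI-2
# symbol PMPI_Query_thread that libVT references?
MPI_LIB="${MPI_LIB_DIR:-/usr/lib}/libmpi.so"   # placeholder location - adjust
if nm -D "$MPI_LIB" 2>/dev/null | grep -q PMPI_Query_thread; then
    result="MPI library exports PMPI_Query_thread"
else
    result="PMPI_Query_thread not found - library may predate MPI-2"
fi
echo "$result"
```

If the symbol is missing, one likely cause is an MPI build without MPI-2 thread-query support; relinking against an MPI library that provides it should resolve the reference.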

Simple HelloWorld with Intel 11 (Fortran and C/C++)

I am trying to create a simple helloworld.for to prove that my Intel 11 compiler is correctly installed on my Red Hat Linux box, but I get the following errors:

ld: cannot find -limf

ifort helloworld.for
ld: /opt/intel/Compiler/11.0/074/bin/lib/for_main.o: No such file: No such file or directory

Any idea what is going on? I can get these working fine with the old version 10 compiler.
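A sketch of the usual first thing to check, assuming the 11.0.074 install path that appears in the error message: both failures (no `-limf`, no `for_main.o`) typically mean the compiler environment script has not been sourced in the current shell, so `ld` never sees the Intel library directories.

```shell
# Path assumed from the error message above - adjust to your install.
IFORT_ENV=/opt/intel/Compiler/11.0/074/bin/ifortvars.sh
if [ -f "$IFORT_ENV" ]; then
    # The argument selects the target architecture: ia32, intel64, or ia64.
    . "$IFORT_ENV" intel64
    msg="sourced ifortvars.sh"
    ifort helloworld.for -o helloworld
else
    msg="ifortvars.sh not found at $IFORT_ENV - adjust to your install path"
fi
echo "$msg"
```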


Intel MPI Benchmark woes on Itanium2

First, apologies if there's already a thread on this (I couldn't find anything with the limited search facilities available).

The crux: I've downloaded the Intel MPI Benchmarks (IMB) and compiled them for Itanium 2 using mpich2/icc. However, I have had several problems (see below), so I was hoping to share experiences with others - let me know what you've found!

The problems so far:

1) MPI-IO benchmark suite hangs just after:

Segmentation fault with ifort -openmp

Processor info : dual Intel processors, quad-core

Memory : 16GB

I have been able to compile and run my code in serial mode. However, after including 'omp_lib.h' and adding OpenMP worksharing constructs, I get a segmentation fault, which happens even if I remove the worksharing constructs. That is:

The program runs with: ifort < > -o <>

It fails with: ifort -openmp <> -o <> (even after commenting out or removing the OpenMP directives)

I have tried increasing the environment variable KMP_STACKSIZE, but that does not help.
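A note on stacks that may be relevant here (hedged, since the code itself isn't shown): with `-openmp`, ifort places local and automatic arrays on the stack, and `KMP_STACKSIZE` sizes only the OpenMP *worker* threads' stacks; the master thread still uses the shell's stack limit. So both limits may need raising:

```shell
# Raise the master thread's stack via the shell limit - KMP_STACKSIZE
# does not affect the master thread, only OpenMP worker threads.
ulimit -s unlimited 2>/dev/null || true

# Per-worker-thread stack for the Intel OpenMP runtime
# (512m is a guess; tune it to your largest arrays).
export KMP_STACKSIZE=512m
echo "KMP_STACKSIZE=$KMP_STACKSIZE"
```

If the fault disappears with a larger shell stack but no OpenMP threads ever run, that points at the master thread's stack rather than the worker stacks.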


mpd problems

Hi all,

I'm using Intel MPI 3.2.011 on a cluster with 9 nodes (36 CPUs) and a master node with 2 CPUs. Ethernet interconnects all nodes.

The mpdboot commands on the master:

/opt/intel/impi/ --ncpus=2 -e -d &

/opt/intel/impi/ --rsh=/usr/bin/ssh --totalnum=10 -1 --file=$HOME//machines.LINUX --verbose --ncpus=2 &

These bring up the daemons on the nodes, but then:

mpirun : [Errno 2] No such file or directory


I use Intel MPI 3.0 on a 64-bit Linux Xeon cluster. On the service node, I am not able to run any program with mpirun:

mpirun -np 2 -v ./myprogram
running mpdallexit on service0
LAUNCHED mpd on service0 via
RUNNING: mpd on service0
problem with execution of ./myprogram on service0: [Errno 2] No such file or directory

I get the same error if I give the full path to my program, or even to a system program (for instance /bin/hostname).

It works perfectly on the compute nodes.

$PATH seems fine:
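For context, a sketch of what the message means rather than a definitive diagnosis: mpd is a Python daemon, and `[Errno 2] No such file or directory` is the ENOENT error it reports when it cannot exec the program at the given path on the node where the process is launched. The same failure class can be reproduced locally:

```shell
# Reproduce the ENOENT failure class locally: exec'ing a path that does
# not exist fails with errno 2, "No such file or directory".
# (The program name below is deliberately nonexistent.)
msg=$(./myprogram_that_is_not_here 2>&1 || true)
echo "$msg"
```

Since even /bin/hostname fails the same way, the exec'd path itself may not be the culprit: the launcher also changes to your current working directory on the target node before exec'ing, and a working directory that does not exist on the service node can produce the same ENOENT.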

Cluster OpenMP for Intel Compilers

Dear all,

I would like to thank you first for reading my thread. I will try to be as precise as possible.

I am a (happy) Intel Visual Fortran for Windows customer. Our programs are mainly a mix of Fortran (Intel compiler) and C++ (Microsoft Visual Studio 8.0 compiler). We currently run our programs only on single-processor machines.

In the context of a future parallelization of our simulation software on distributed systems, I reached the "Cluster OpenMP for Intel Compilers" page, which leads me to the following questions:
