Intel® Clusters and HPC Technology

Intel MPI with pthread

I am trying to run a program that uses pthreads with Intel MPI. The program compiled and linked successfully. I ran it on a dual-socket machine with two quad-core processors, but no threads seemed to be created. Below is the command I used:

mpirun -n 2 executable

The program is supposed to create 8 threads in one of the 2 processes. Thanks.
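No answer is recorded in the thread, but a common first thing to check with hybrid MPI + pthreads codes is that the thread-safe MPI library is linked and that MPI is initialized with MPI_Init_thread rather than plain MPI_Init. A command sketch, assuming Intel MPI's mpiicc wrapper is on PATH; the file names are placeholders:

```shell
# Link with -mt_mpi so Intel MPI's thread-safe library is used; the
# source should call MPI_Init_thread(..., MPI_THREAD_MULTIPLE, ...)
# before creating any pthreads.
mpiicc -mt_mpi -pthread -o executable main.c

# Launch as before; the rank that calls pthread_create can then run
# its 8 threads.
mpirun -n 2 ./executable
```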

How to run MPI programs with ifort

I am new to Linux. I am using Red Hat Linux 4.0. My machine has dual quad-core AMD processors. I want to run MPI programs built with ifort, so I downloaded MPICH2-1.1. I configured it using

./configure --prefix=/home/you/mpich2-install 2>&1 | tee c.txt

It configured successfully. I then built it with the command below:

make 2>&1 | tee m.txt

This gives the errors below. Please help me out.
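The actual errors are not shown above, but a frequent cause of MPICH2 build failures on machines with both GNU and Intel compilers is configure picking up the wrong compiler set. A sketch of an explicit configure, reusing the prefix and tee file names from the commands above; note that the Fortran 90 variable is spelled F90 in MPICH2 releases of this vintage and FC in later ones:

```shell
./configure --prefix=/home/you/mpich2-install \
            CC=icc CXX=icpc F77=ifort F90=ifort 2>&1 | tee c.txt
make 2>&1 | tee m.txt
make install 2>&1 | tee mi.txt
```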

Error when linking with Intel MPI -- C++ Binding problem?


I was trying to compile a C++ program with the Intel compiler and Intel MPI 3.2. Compilation succeeded, but linking failed with the following error message:

: undefined reference to `MPI::INTEGER'

Is this a C++ binding problem? Isn't Intel MPI 3.2 supposed to support the C++ bindings automatically? Could someone help me? Thanks!
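MPI::INTEGER is defined in the C++ binding library, which is normally only pulled in when a C++ compiler wrapper drives the link. A sketch, assuming Intel MPI's wrappers are installed; prog.cpp is a placeholder name:

```shell
# Compile and link with the C++ wrapper; linking C++ MPI code with
# mpiicc or mpiifort instead can leave MPI::INTEGER and friends
# unresolved.
mpiicpc -o prog prog.cpp
```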


MPI/IB Jobstart hanging

Hello to All,

I'm trying to start an MPI job on the following setup:
SLES 10 SP2, Intel MPI 3.0.1, OFED

When I try to run a job with these environment variables set:

export I_MPI_DEVICE=rdssm:OpenIB-cma
export I_MPI_DEBUG=256

the job tries to start, but then hangs while setting up the communication (see below).

dapltest, on the other hand, works.

inodeXYZ is the IPoIB address of a node.

What's wrong?
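One way to narrow this down is to step through the fabrics one at a time: if the job starts over sock but hangs on the DAPL-based devices, the problem is in the OFED/DAPL layer rather than in Intel MPI itself. A sketch using the Intel MPI 3.x device names; a.out is a placeholder:

```shell
export I_MPI_DEBUG=2

# Step 1: plain TCP/IP - does the job start at all?
export I_MPI_DEVICE=sock
#   mpirun -n 2 ./a.out

# Step 2: DAPL without shared memory - isolates the OpenIB-cma provider.
export I_MPI_DEVICE=rdma:OpenIB-cma
#   mpirun -n 2 ./a.out
```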


Problem with Using Counters (ITAC)


I'm using the following code to log the variable momentum whenever its value changes at runtime:

int pd_counter_handle;   /* filled in by VT_countdef() */
int counter_class;       /* filled in by VT_classdef() */
const float boundaries[2] = { 0, 50000 };   /* valid value range */

/* Define a counter class and a float-valued counter inside it. */
VT_classdef("Counters", &counter_class);
VT_countdef("Pressure Development", counter_class,
            VT_COUNT_FLOAT | VT_COUNT_VALID_SAMPLE | VT_COUNT_ABSVAL,
            VT_ME, boundaries, "#", &pd_counter_handle);

mpd won't start on a multi-core RHEL 5 Linux workstation


I would like to start mpds on my multi-core RHEL 5 Linux workstation so I can run MPI software. However, when I tried starting mpd, I got the following error:

mpdboot -f mpd.hosts

(handle_mpd_output 837): failed to ping mpd on localhost; received output={}
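This particular mpdboot failure very often comes from a missing or badly-permissioned ~/.mpd.conf. A sketch; the secret word is a placeholder, and the key is spelled MPD_SECRETWORD= in Intel MPI's documentation (plain MPICH2 documents it as secretword=):

```shell
# mpd refuses to start without a mode-600 ~/.mpd.conf containing a
# secret word shared by all mpds in the ring.
echo "MPD_SECRETWORD=changeme" > "$HOME/.mpd.conf"
chmod 600 "$HOME/.mpd.conf"

# Then retry:
#   mpdboot -f mpd.hosts
#   mpdtrace          # should print the local hostname
```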

MPI: Prevent mpirun from terminating on SIGTERM


I'm using Intel MPI with PBS.
When I send my job a SIGTERM signal using qdel, mpirun exits immediately, and the program started by mpirun has no time to finish its cleanup work.
(I'm using
if [ x$PBS_ENVIRONMENT != x ]; then
    trap "" SIGTERM
fi
in my ~/.profile to prevent the shell from exiting when it gets the SIGTERM.)

How can I tell Intel MPI's mpirun not to exit on SIGTERM?
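As far as I know there is no documented mpirun switch for this in Intel MPI 3.x, so the usual workaround is to catch SIGTERM in the job script itself, do the cleanup, and only then exit. A runnable sketch of the pattern, with a sleep standing in for the real mpirun invocation:

```shell
# Pattern for the PBS job script: run the MPI job in the background so
# that the script itself receives qdel's SIGTERM and can clean up.
cleanup() {
    echo "cleaned" > cleanup.log    # placeholder for the real cleanup
}
trap 'cleanup' TERM

sleep 5 &                       # stands in for: mpirun -n 2 ./prog &
child=$!

( sleep 1; kill -TERM $$ ) &    # stands in for qdel sending SIGTERM
wait $child                     # returns once SIGTERM has been handled
# at this point cleanup has run; a real script would exit here
```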


undefined symbol: __intel_cpu_indicator


The MD application Amber 9 was built with Intel MKL 8.0.2 and Intel Fortran 10.1.018 on a Rocks 5.1 Linux cluster running CentOS 5.2.
There were no errors during compilation.

When I run the executable, it gives:

# make test.serial
cd dmp; ./Run.dmp
../../exe/sander: symbol lookup error: ../../exe/sander: undefined symbol: __intel_cpu_indicator
./Run.dmp: Program error
make: *** [test.sander.BASIC] Error 1
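__intel_cpu_indicator comes from the Intel compiler's CPU-dispatch runtime, and this lookup error usually means the runtime libraries found at execution time don't match the compiler/MKL pair used for the build. A sketch of the usual checks; all installation paths here are assumptions to adjust to your setup:

```shell
# Make the matching compiler and MKL runtimes visible, then re-check.
source /opt/intel/fce/10.1.018/bin/ifortvars.sh                 # path: assumption
source /opt/intel/mkl/8.0.2/tools/environment/mklvarsem64t.sh   # path: assumption

# Any "not found" entry points at the library that is still missing
# from LD_LIBRARY_PATH.
ldd ../../exe/sander | grep "not found"
```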

The config.h file that the Makefile uses is:
