Intel® Cluster Ready

MPICH 3.1.4 with the Intel C++ compiler on OS X 10.10.3, error: dyld: Library not loaded: libiomp5.dylib

Hi friends:

I installed MPICH with the Intel C++ compiler on OS X. When I compile code linked against the MKL library and then run it, I get this error:

mpicxx -mkl testcode.cpp -o testcode

mpiexec -n 3 ./testcode

dyld: Library not loaded: libiomp5.dylib

  Referenced from: /Users/.....

  Reason: image not found

 

How can I fix this?

Thanks.

yongbei
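This error usually means the Intel OpenMP runtime (libiomp5.dylib) is not on the dynamic loader's search path at run time. A minimal sketch of a fix, assuming a default Intel compiler install under /opt/intel (the exact paths depend on your compiler version):

# Load the compiler environment so dyld can find libiomp5.dylib:
source /opt/intel/bin/compilervars.sh intel64

# Or point the loader at the runtime directory directly (path is an assumption):
export DYLD_LIBRARY_PATH=/opt/intel/lib:$DYLD_LIBRARY_PATH

mpiexec -n 3 ./testcode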

Seg fault when using US NFS install of MPI 5.1.0.038 from a site in Russia

Hello,

One of my team members in Russia is accessing an NFS installation of MPI 5.1.0.038 located at a US site. When she runs the simple ring application test.c with four processes, one process per node, she encounters a segmentation fault. This does not happen for the team members based at US sites, and the seg fault does not occur when the application is executed on only a single node, the login node.

Each team member compiled the test.c application in the same way, in a user-specific scratch space in the US NFS allocation:
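(The exact compile line did not survive in the post; the following is a representative invocation, not the poster's actual command.)

# Compile the ring test with the Intel MPI wrapper and run it
# with four processes, one per node, as described above:
mpiicc -o test test.c
mpirun -n 4 -ppn 1 ./test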

MPI_Comm_spawn strips empty strings from argv

Hi. I'm using Intel MPI 5 on Windows and have the following problem. I have a C++ application that spawns worker processes using MPI_Comm_spawn(), passing parameters via the argv argument. I pass an array of strings this way; however, one of those strings is empty. When the worker process receives the argv array, the empty string has been removed from it.

MPI Performance Snapshot

Hello team,

I am using MPS (MPI Performance Snapshot) for analysis. I installed Intel MPI 5.0.3, ITAC 9.0.3, and Intel Compiler 15.0.3. I have already created the stats.txt and app_stat.txt files, but when I run the mps command to analyze these text files, I get the following error:

mps command not found

Please advise.

Regards,

Pravesh Goyal
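The mps tool ships with ITAC and is put on PATH by an environment script, so "command not found" usually means that script was never sourced. A sketch of the setup, assuming a default install; the script location varies by version and is an assumption here:

# Make mps available in the current shell:
source /opt/intel/itac/9.0.3/intel64/bin/mpsvars.sh

# Then analyze the previously collected statistics files:
mps ./stats.txt ./app_stat.txt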

 

Intel MPI fails to load libmpi_lustre.so

Hello,

Under which circumstances might we see the following error:

[3] ERROR - ADIO_Init(): Can't load libmpi_lustre.so library: libmpi_lustre.so: cannot open shared object file: No such file or directory
[2] ERROR - ADIO_Init(): Can't load libmpi_lustre.so library: libmpi_lustre.so: cannot open shared object file: No such file or directory

This is from a reduced test case in which only ranks 2 and 3 (out of 0-3) open a file with MPI_File_open. At that point the above message is printed and the job aborts. We run on RHEL6 x86_64.
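libmpi_lustre.so normally lives in the Intel MPI library directory and is loaded on demand when ADIO detects a Lustre file system, so the error suggests that directory is not visible on the failing ranks. Two hedged things to try; the variable names are from the Intel MPI reference, the paths are assumptions:

# Make sure the Intel MPI lib directory is on the loader path on all nodes:
export LD_LIBRARY_PATH=$I_MPI_ROOT/intel64/lib:$LD_LIBRARY_PATH

# Or, as a workaround, disable the native parallel file system drivers:
export I_MPI_EXTRA_FILESYSTEM=off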

How to profile WRF?

Hi,

I am Choi W. I'm running WRF on a Xeon Phi and I want it to run faster, so I am trying to profile it. What do I need to do?

I want to use the options below. Where do I need to add them in configure.wrf?

---------------------------------------------------------------------------

-profile-functions -profile-loops=all -profile-loops-report=2
or
-pg

---------------------------------------------------------------------------

Is there another way to do this? Any guidance would be appreciated.
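The -profile-* options are compiler flags, so they belong with the Fortran optimization flags in configure.wrf. A sketch, assuming an Intel-compiler configure.wrf; FCOPTIM is the variable name used by the stock WRF makefiles, but check your own file:

# In configure.wrf, append the profiling options to the existing flags:
FCOPTIM = -O3 -profile-functions -profile-loops=all -profile-loops-report=2

Note that -pg (gprof-style instrumentation) and the -profile-* options feed different tools; use one or the other, not both.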

Compiling Intel Linpack MP Benchmark: mpiicc error

Hello! I am new to the Intel Linpack Benchmark. I downloaded the evaluation version of Intel Parallel Studio XE Cluster Edition, followed the directions, and was able to run the single-system benchmark at about 340 GFlops. When I tried to compile mp_linpack, I got an error about mpiicc not being found. I did find mpiicc (I believe under /opt/intel/) and added its directory to $PATH, but the build still reports that mpiicc is not found. I changed the Makefile to use mpicc and it compiled, but I would like to use Intel's mpiicc to get the best result.
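mpiicc needs the Intel compiler and Intel MPI environments loaded, not just its own directory on $PATH. A sketch, assuming default install locations; the paths are assumptions, adjust them to your installed version:

# Load the Intel compiler environment:
source /opt/intel/bin/compilervars.sh intel64

# Load the Intel MPI environment (<version> is a placeholder):
source /opt/intel/impi/<version>/intel64/bin/mpivars.sh

# mpiicc should now resolve before running the mp_linpack build:
which mpiicc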

Building netcdf-4.3.3.1 with the Intel MPI Library with parallel support: FAIL: run_par_test.sh

Dear Support

I am trying to build netcdf-4.3.3.1 with parallel support using the Intel MPI Library 4.1, in order to build RegCM-4.4.5.5.

I set the following environment variables before running the configure command:

export CC=mpiicc

export CXX=mpiicpc

export CFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'

export CXXFLAGS='-O3 -xHost -ip -no-prec-div -static-intel'

export F77=mpiifort

export FC=mpiifort
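With those variables exported, a parallel-enabled configure step would look like the sketch below; the install prefix and the location of a parallel HDF5 build are assumptions:

# Point the build at a parallel HDF5 installation (paths are assumptions):
export CPPFLAGS=-I/opt/hdf5-parallel/include
export LDFLAGS=-L/opt/hdf5-parallel/lib

./configure --prefix=/opt/netcdf-4.3.3.1 --enable-parallel-tests
make && make check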

How to use the Intel® Cluster Checker v3 SDK with gcc5

When compiling connector extensions with the Intel® Cluster Checker v3 SDK, it is recommended to use an Intel compiler version 15.0 or newer and a gcc/g++ compiler version 4.9.0 or newer, as described in the Intel® Cluster Checker Developer's Guide. This explicitly includes gcc version 5.1.0 and newer.
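With the Intel compiler, the gcc compatibility environment can be selected explicitly. A sketch of compiling a connector extension against gcc 5; the SDK include path and source file name are assumptions:

# -gxx-name selects which g++ provides the C++ headers and environment:
icpc -gxx-name=/usr/bin/g++-5 -std=c++11 -I/opt/intel/clck/3.0/sdk/include -c my_connector.cpp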
Bizarre authenticity of host issue when running across multiple nodes with Intel MPI

I am attempting to run a job across three nodes. I have configured passwordless ssh and it definitely works between every pair of nodes (each node can ssh to the other two without a password). The known_hosts file is correct and all three nodes have identical .ssh directories. I have also tried adding the keys to ssh-agent, although I'm not sure that was necessary, as I didn't specify a passphrase when generating the id_rsa key (I know this is poor security, but it's temporary for testing).
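The "authenticity of host" prompt usually means ssh is connecting to a host name or address that is not yet in known_hosts (for example, a short name vs. FQDN vs. IP mismatch). One common workaround for testing only (a sketch; the Host pattern is an assumption, replace it with your node names) is to relax host-key checking in ~/.ssh/config:

# Testing-only: accept unseen host keys automatically for the cluster nodes.
cat >> ~/.ssh/config <<'EOF'
Host node*
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF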
