Intel® Cluster Studio

Intel® Parallel Studio XE 2016 Beta program has started!

The Intel® Parallel Studio XE 2016 Beta program is now available!

In this beta test, you will have early access to Intel® Parallel Studio XE 2016 products and the opportunity to provide feedback to help make our products better. Registration is easy through the pre-Beta survey site.

This suite of products brings together exciting new technologies along with improvements to Intel’s existing software development tools:

Problem with Intel MPI on >1023 processes

I have been testing code built with Intel MPI (version 4.1.3, build 20140226) and the Intel compiler (version 15.0.1, build 20141023). When we attempt to run on 1024 or more total processes, we receive the following error:

MPI startup(): ofa fabric is not available and fallback fabric is not enabled 

Runs with fewer than 1024 processes do not produce this error, and I also do not see it with 1024 processes using OpenMPI and GCC.
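(A minimal sketch of the fabric-selection knobs this message refers to, assuming the standard I_MPI_FABRICS and I_MPI_FALLBACK environment variables; the values and executable name below are illustrative only, not a confirmed fix:)

# Either allow Intel MPI to fall back to another fabric when OFA is unavailable...
export I_MPI_FABRICS=shm:ofa
export I_MPI_FALLBACK=enable
# ...or select a different fabric explicitly, e.g. TCP:
# export I_MPI_FABRICS=shm:tcp
mpirun -n 1024 ./app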

Problems with Intel MPI

I have trouble running Intel MPI on a cluster whose nodes have different numbers of processors (12 and 32).

I use Intel MPI 4.0.3. It works correctly on 20 nodes with 12 processors each (Intel® Xeon® CPU X5650 @ 2.67 GHz), and all processors work correctly. When I then run Intel MPI on the other 3 nodes with 32 processors each (Intel® Xeon® CPU E5-4620 v2 @ 2.00 GHz), they also work correctly.
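(For reference, a sketch of a mixed machine file covering both node types; the host names, counts, and executable are placeholders, and the host:count syntax assumes Intel MPI's -machinefile format:)

# machines.txt lists each host with its process count, e.g.:
#   node01:12
#   ...
#   node20:12
#   bignode01:32
#   bignode02:32
#   bignode03:32
mpirun -machinefile machines.txt -n 336 ./app   # 20*12 + 3*32 = 336 ranks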

Mapping ranks consecutively on nodes

Hi,

   Running Intel MPI 4.1.3

   Contrary to the user guide, which states the following about the default round-robin mapping and the -perhost option:

To change this default behavior, set the number of processes per host by using the -perhost option, and set the total number of processes by using the -n option. See Local Options for details. The first <# of processes> indicated by the -perhost option is executed on the first host; the next <# of processes> is executed on the next host, and so on.
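(A concrete illustration of the placement described in that quote, assuming four hosts in the host file; the host names and executable are placeholders:)

# Expected placement per the quoted documentation:
#   ranks 0-3   -> first host
#   ranks 4-7   -> second host
#   ranks 8-11  -> third host
#   ranks 12-15 -> fourth host
mpirun -f hosts.txt -perhost 4 -n 16 ./app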

MPI: polling 'passive' rma operations

Hi,

Lately I have been wondering whether your implementation of passive target communication was ever really meant to be used...

Despite the fact that it isn't really passive (since the target has to call some MPI functions for MPI_Win_unlock to ever return), I could not even figure out exactly which MPI functions must or can be invoked to achieve the flushing. The release notes say only:

Intel MPI, perhost, and SLURM: Can I override SLURM?

All,

(Note: I'm also asking this on the slurm-dev list.)

I'm hoping you can help me with a question. I'm on a cluster that uses SLURM, and let's say I ask for two 28-core Haswell nodes to run interactively and I get them. Great, so my environment now has things like:

SLURM_NTASKS_PER_NODE=28
SLURM_TASKS_PER_NODE=28(x2)
SLURM_JOB_CPUS_PER_NODE=28(x2)
SLURM_CPUS_ON_NODE=28

Now, let's run a simple HelloWorld on, say, 48 processors (and pipe through sort to see things a bit better):
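(The excerpt truncates here; the launch in question is presumably of this general shape, with the executable name as a placeholder, and the question being whether -perhost can override the SLURM-derived 28-tasks-per-node placement:)

mpirun -np 48 ./helloWorld | sort
# versus, e.g., forcing a different per-node count:
mpirun -perhost 24 -np 48 ./helloWorld | sort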

mpitune gets "could not dump the session, because unknown encoding: utf-8"

Hi forum,

I tried the following command on a server (Intel MPI 5.0.2.044, icc 2015.2.164):

mpitune -of analysis.conf -application \"mpirun -n 24 -host `hostname` ./myexe\"

It ran for a while but wrote nothing to analysis.conf. Meanwhile the console printed messages like:

ERR | Could not dump the session, because unknown encoding: utf-8

I tried switching to LANG=C; locale now outputs:

LANG=C
LC_CTYPE="C"
...(all other locale variables are "C")
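(A workaround sketch, offered purely as an assumption rather than a verified fix: run mpitune under a UTF-8 locale instead of LANG=C; en_US.UTF-8 is a placeholder for whatever UTF-8 locale is installed on the server:)

export LANG=en_US.UTF-8
export LC_ALL=en_US.UTF-8
mpitune -of analysis.conf -application \"mpirun -n 24 -host `hostname` ./myexe\"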

Intel MPI 5.0.2.044 and Windows firewall on localhost

Hi everybody,

We run mpiexec on Windows 7 on multiple network installations of our product in the following manner:

mpiexec.hydra.exe -np x -localroot -delegate -localonly -localhost 127.0.0.1 one_path_to_EXECUTABLE

The problem is that the Windows firewall (which triggers on the executable path) raises alerts, because mpiexec.hydra.exe and pmi_proxy.exe listen on 0.0.0.0:port.

Is there an option or environment variable to tell mpiexec.hydra to listen only on 127.0.0.1 instead of all available interfaces when we run it purely locally on one workstation?
