Intel® Cluster Studio XE

Intel® Parallel Studio XE 2016 Beta program has started!

The Intel® Parallel Studio XE 2016 Beta program is now available!

In this beta test, you will have early access to Intel® Parallel Studio XE 2016 products and the opportunity to provide feedback to help make our products better. Registration is easy through the pre-Beta survey site.

This suite of products brings together exciting new technologies along with improvements to Intel’s existing software development tools:

How do I get ITAC to show routine names from my code?

By default, ITAC only displays "MPI" and "APPLICATION" in the event display.


  • I've set VT_PCTRACE, but that only seems to let me pop up a source dialog, and even that does not show a backtrace, so it's fairly useless.
  • I put VT_initialize/VT_finalize in my code and create a bunch of state handles which I pass to VT_begin / VT_end.

None of this was to any avail. What am I missing? Which file should contain my routine names?
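A common way to get routine names into the trace without hand-instrumenting every function is compile-time instrumentation. A minimal sketch, assuming the Intel compilers with the ITAC environment already loaded (via itacvars.sh); file names and the rank count are placeholders:

```shell
# The -tcollect switch instruments every routine at compile/link time,
# so the Trace Analyzer can show function names from your code instead
# of just "MPI" and "APPLICATION". -g keeps symbol/source information.
mpiifort -g -tcollect -o my_app my_app.f90

# Running under mpirun produces an .stf trace that traceanalyzer can open.
mpirun -n 4 ./my_app
```

With manual VT_begin/VT_end instrumentation, the state handles also need to be defined via the VT API (e.g. VT_funcdef) before use; -tcollect avoids that bookkeeping entirely.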


Victor.

Trying to make MPI work in a QuickWin application

Hi,

This is my first try at adding MPI to a simple QuickWin program under Visual Studio 2010.

I first installed Cluster Studio 2015 update 4 on my i7-4910MQ Dell laptop and set up an x64 project. The simple code compiled, linked, and ran directly.

Then I added INCLUDE 'mpif.h' on a new line after the IMPLICIT NONE line and got the compiler message: error #5102: Cannot open include file 'mpif.h'.

I then tried a number of ways to inform the compiler of the location of the mpif.h file, but instead got a number of warnings and errors.
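For reference, a hedged command-line sketch (outside Visual Studio), assuming Intel MPI's environment has been loaded via its mpivars.bat so that I_MPI_ROOT is set; the exact lib subdirectory varies between Intel MPI versions, so treat the paths as placeholders:

```shell
:: /I tells ifort where to find mpif.h; /libs:qwin selects the QuickWin
:: runtime. The import library for Intel MPI must also be linked.
ifort /I"%I_MPI_ROOT%\intel64\include" /libs:qwin myprog.f90 ^
      "%I_MPI_ROOT%\intel64\lib\impi.lib"
```

Inside Visual Studio, the equivalent is adding the same include directory under the project's Fortran > General > Additional Include Directories and the library under Linker > Input.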

Parallel Universe link slightly mangled

I don't know exactly where to submit this, but one of the links on the Parallel Universe magazine page is mangled.  On the page https://software.intel.com/en-us/intel-parallel-universe-magazine the displayed Issue 19 actually points to issue 20.  Maybe someone at Intel would like to fix this.  (Of course you can still get issue 19 if you really want to, but there is an annoyance factor.)

Problem with Intel Trace Collector

I'm trying to use the Intel Trace Collector for the first time on a cluster machine (with Intel Cluster Studio XE 2013 and ITAC 8.1.2.033).

I built my program in the standard production mode, and the bash script submitted to the PBS scheduler contained the following commands:

#PBS -l select=14:ncpus=16:mem=120gb:mpiprocs=16

module load intel/cs-xe-2013

source [path]/cs-xe-2013/none/itac/8.1.2.033/bin/itacvars.sh

mpirun -trace [path]/my_program [arguments]

Deadlock with MPI_Win_fence going from Intel MPI 4.1.3.049 to 5.0.3.048

We encountered a problem when migrating a code from Intel MPI 4.1.3.049 to 5.0.3.048. The code in question is a complex simulation that first reads the global input state from disk into several parts in memory and then accesses this memory in a hard-to-predict fashion to create a new decomposition. We use active target RMA for this (on machines that support it, like BG/Q, we also use passive target), since a rank might need data from the part held by another rank to form its halo.
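For readers unfamiliar with the pattern, here is a minimal active-target RMA sketch (not the poster's code; all sizes and the neighbor choice are illustrative). Both MPI_Win_fence calls are collective over the window's communicator, so every rank must reach them; a rank that skips a fence, for example because it happens to need no remote data, hangs the rest, which is one of the more common fence-related deadlock patterns:

```c
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 1024;                       /* local part of the state  */
    double *part = malloc(n * sizeof(double));
    double *halo = malloc(n * sizeof(double));
    for (int i = 0; i < n; i++) part[i] = rank;

    /* Each rank exposes its part of the global state in a window. */
    MPI_Win win;
    MPI_Win_create(part, n * sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);                    /* open the access epoch    */
    int target = (rank + 1) % size;           /* rank holding needed data */
    MPI_Get(halo, n, MPI_DOUBLE, target, 0, n, MPI_DOUBLE, win);
    MPI_Win_fence(0, win);                    /* close epoch; halo valid  */

    MPI_Win_free(&win);
    free(part);
    free(halo);
    MPI_Finalize();
    return 0;
}
```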

[UPDATED]: Maximum MPI Buffer Dimension

Hi,

Is there a maximum MPI buffer size? I have a buffer size problem in my MPI code when trying to MPI_Pack large arrays. The offending instruction is the first pack call:

CALL MPI_PACK( VAR(GIB,LFMG)%R,LVB,MPI_DOUBLE_PRECISION,BUF,LBUFB,ISZ,MPI_COMM_WORLD,IE )

where the double precision array R has LVB=6331625 elements, BUF = 354571000, and LBUF = BUF*8 = 2836568000 (since I have to send 6 other arrays with the same dimension as R).

The error output is the following:
