Coarray usage between nodes on new SGI & Cray machines


Since ifort 13.x at least nominally supports coarrays (I have yet to try this F2008 language feature), does anyone know whether any newish Cray or SGI HPC systems with Intel compilers support distributed-memory coarrays *between* nodes? That is, is it possible yet to dispense with MPI and write an SPMD program with a global address space using CAF/F2008? Is this something these and other MPP vendors currently support? Are they likely to support it in the near or distant future?



It is my (limited) understanding that -coarray=distributed produces MPI code under the hood (and runs under mpirun), so it should work on any system where MPI does. While still MPI underneath, coarrays do seem a bit simpler to write from a programming perspective, but I haven't used them enough to say for sure.
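For reference, a minimal coarray program looks like ordinary Fortran plus the image intrinsics; the build flags shown in the comments are Intel's documented -coarray options (this is a sketch, not tested on any particular SGI or Cray system):

```fortran
! hello_caf.f90 -- minimal coarray "hello world"
! Build for one node:     ifort -coarray=shared hello_caf.f90
! Build for multi-node:   ifort -coarray=distributed hello_caf.f90
program hello_caf
  implicit none
  integer :: me, n
  me = this_image()   ! index of this image, 1..num_images()
  n  = num_images()   ! total number of images, fixed at launch
  print '(a,i0,a,i0)', 'Hello from image ', me, ' of ', n
  sync all            ! barrier across all images
end program hello_caf
```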

Intel's coarray implementation uses MPI, even on systems such as Cray's that have special interconnects. Cray is the only vendor I know of that has a coarray implementation that uses a dedicated transport layer taking advantage of the interconnect hardware, and that is only on the XMP and similar systems that use Cray's own compiler.

The idea is that coarrays are integrated well into the Fortran language and are simpler to use. MPI has a lot more knobs and buttons you can use to get additional performance and features, at the expense of programming complexity.
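To illustrate the simplicity argument: a remote coarray reference is a single assignment or expression, where MPI would need a matched send/receive pair or a one-sided get. A hedged sketch (variable names are hypothetical, not from any real code):

```fortran
! One-sided neighbour read with coarrays: the bracketed image
! index [left] replaces an explicit MPI_Send/MPI_Recv exchange.
program exchange
  implicit none
  real    :: edge(100)[*]   ! each image holds its own 100-element slice
  integer :: left
  edge = real(this_image()) ! fill local data with this image's index
  sync all                  ! make everyone's edge() visible
  if (this_image() > 1) then
     left = this_image() - 1
     ! read the left neighbour's data directly -- no message-passing calls
     print *, 'sum of left neighbour edge:', sum(edge(:)[left])
  end if
end program exchange
```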

Retired 12/31/2016

So, say on an SGI, is it reasonable to assume that I can compile a coarray Fortran program with the Intel compiler and appropriate flags, and then execute it under mpiexec_mpt (or mpirun, or whatever), and that it will work? (Assuming you've been allocated the correct number of nodes/PEs, etc.)

It would have to be an SGI system using Intel processors that supports Intel Fortran and Intel MPI. You would also need a license for Intel Cluster Studio, as without that we don't support multi-node coarray operation. You don't need to do your own mpirun - the compiler adds its own "launch" code.
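Concretely, the multi-node workflow uses -coarray=distributed together with a configuration file named via -coarray-config-file (the file holds mpiexec-style options, to the best of my understanding of Intel's documentation); the host file name and image count below are placeholders:

```shell
# Build: the compiler embeds its own MPI launch code in the executable
ifort -coarray=distributed -coarray-config-file=caf.cfg hello_caf.f90 -o hello_caf

# caf.cfg contains mpiexec-style options, for example:
#   -n 16 -machinefile ./hosts ./hello_caf

# Then just run the executable -- no explicit mpirun invocation:
./hello_caf
```

(For single-node shared-memory runs, setting the FOR_COARRAY_NUM_IMAGES environment variable is the usual way to choose the image count.)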

