This is an essential guide to using the Coarray Fortran (CAF) feature of the Intel® Fortran Compiler.
The shared-memory, single-node version of Coarray Fortran is available in any edition of Intel® Parallel Studio XE 2015 or newer. The distributed memory implementation of CAF for Linux is available only in Intel® Parallel Studio XE 2015 Cluster Edition for Linux or newer.
CAF is also implemented for Windows using Intel Parallel Studio XE Cluster Edition for Windows.
The CAF feature is currently not available under macOS.
Configuration Setup
To run a distributed memory Coarray Fortran application, you must have an established Linux cluster and an installation of Intel's implementation of MPI (Message Passing Interface). Intel's Coarray Fortran relies on Intel MPI for cluster communication.
New to Intel MPI? Consult the Intel MPI getting-started documentation, particularly the Prerequisites section. When you have run the setup scripts, especially sshconnectivity.exp, make sure you can run a simple MPI 'hello world' program across multiple nodes of your cluster. Attached to this article is a small Fortran MPI version of a 'hello world' program.
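A minimal Fortran MPI 'hello world' along these lines (a sketch in the spirit of the attached program, not the attached file itself) looks like:

```fortran
program mpi_hello
    use mpi            ! module provided by Intel MPI
    implicit none
    integer :: rank, nprocs, ierr

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
    write (*,*) 'Hello from rank', rank, 'of', nprocs
    call MPI_Finalize(ierr)
end program mpi_hello
```

Compile it with the Intel MPI Fortran wrapper (mpiifort) and launch it across several nodes; every rank should print one greeting line.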
Successful configuration and running of MPI jobs under Intel MPI is a prerequisite to using the Intel CAF feature in distributed memory mode.
When the 'hello world' program has run successfully, here are additional steps for CAF.
1) Set up a machine file
If your cluster hosts are fixed and you do not run under a batch system like PBS or Slurm, set up a hosts file. In your home directory, create a file with the hostnames of your cluster, one host per line and, optionally, with the number of processors on each host, or the number of CAF images to place on each node. Something like this:
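For instance, a machine file for a hypothetical two-node cluster (the node names node01 and node02 are placeholders) that runs 4 images on each node could be created like this:

```shell
# Create a machine file listing two hypothetical nodes, 4 CAF images each.
cat > machinefile <<'EOF'
node01:4
node02:4
EOF
```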
The syntax is <hostname>[:<number of CAF images>]; a suffix such as :4 states to run 4 MPI/CAF processes on that node.
You may use any name for the file; by convention, "machinefile" or "hostfile" are probably easiest to remember and maintain. If you are running under a batch system where the hosts are assigned dynamically, see the Intel MPI Library Developer Guide for details on host selection.
2) Source the setup scripts
Source the Intel MPI and Intel Fortran compiler scripts to set up the paths to Intel MPI and Intel Fortran in your environment. Because these settings must also be picked up by child processes, it is recommended to run the following source commands and/or add them to the .bashrc or .cshrc file in your home directory:
source <path to Intel MPI installation>/[ia32 | intel64]/bin/mpivars.sh
source <path to Intel Fortran installation>/bin/compilervars.sh [ia32 | intel64]
where ia32 selects the 32-bit environment and intel64 the 64-bit environment.
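To confirm that a given shell picked up the settings, you can check where mpiexec.hydra resolves from (a quick sanity check, not part of the official setup):

```shell
# Report where mpiexec.hydra resolves from, or warn if it is missing.
if command -v mpiexec.hydra >/dev/null 2>&1; then
    echo "mpiexec.hydra found at: $(command -v mpiexec.hydra)"
else
    echo "mpiexec.hydra not on PATH; re-source mpivars.sh"
fi
```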
3) Set up a Coarray Fortran (CAF) configuration file
When you run a distributed memory Coarray Fortran program, the application first invokes a job launcher, which uses the Intel MPI mpiexec.hydra command to start the job on the hosts in the cluster. The launcher first reads the CAF configuration file to pick up the arguments to pass to mpiexec.hydra; thus, the CAF configuration file is nothing more than a list of arguments to the Intel MPI mpiexec.hydra command. An example CAF configuration file may contain:
-genvall -genv I_MPI_FABRICS=shm:tcp -machinefile ./hostsfile -n 8 ./my_caf_prog
In this example, -genvall propagates all of your current environment variables to the spawned processes, -genv I_MPI_FABRICS=shm:tcp selects shared memory within a node and TCP between nodes, -machinefile ./hostsfile points to the machine file created in step 1, -n 8 starts 8 CAF images, and ./my_caf_prog is the program to launch.
Many other options and configurations are possible in the CAF configuration file. See the documentation on mpiexec.hydra for a complete list of control options; -perhost and -genv are among the other useful options to consider.
Building the Application
You are now ready to compile your Coarray Fortran application. Create one or use an existing Coarray Fortran application. A sample Coarray Fortran 'hello world' application is included in the <compiler install dir>/Samples/en_US/Fortran/coarray_samples/ directory. The essential compiler arguments for distributed memory coarray applications are:
ifort -coarray=distributed -coarray-config-file=<CAF config filename> ./my_caf_prog.f90
Of course, you can include any other compiler options including all optimization options.
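If you do not have a coarray program handy, a minimal 'hello world' in the spirit of the bundled sample (a sketch, not the actual sample file) is:

```fortran
program caf_hello
    implicit none
    ! this_image() and num_images() are standard coarray intrinsics;
    ! no coarray variables are needed for a simple greeting.
    write (*,*) 'Hello from image', this_image(), 'of', num_images()
end program caf_hello
```

Built with -coarray=distributed and the configuration file above, each of the -n images prints one line.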
Running the Application
After compiling the program, simply execute it.
That's it! The CAF executable will locate your CAF config file; its contents are passed as arguments to the mpiexec.hydra command to start up your distributed CAF program, and host information is pulled from the machine file.
Need to change the number of images launched or the arguments to mpiexec.hydra? Simply change the settings in the CAF config file. Remember, the -coarray-config-file= option used at compile time sets the name and location of this file, so choose a name and location you can remember, such as -coarray-config-file=~/cafconfig.txt
Then just add mpiexec.hydra options to ~/cafconfig.txt, for example,
-perhost 2 -envall -n 64 ./a.out
Note: The environment variable FORT_COARRAY_NUM_IMAGES has no effect on distributed memory CAF applications. This environment variable is only honored by a shared memory CAF image. Only the -n option in the CAF config file is used to control the number of CAF images for a distributed memory CAF application.
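For comparison, in shared-memory mode (compiled with -coarray=shared) the image count can be set through this environment variable; the program name below is a placeholder:

```shell
# Shared-memory CAF only: request 4 images at run time.
export FORT_COARRAY_NUM_IMAGES=4
./my_shared_caf_prog
```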
Again, read the mpiexec.hydra and I_MPI_FABRICS documentation in the Intel MPI Library Developer Reference.
Known Issues or Limitations
Many clusters have other MPI implementations installed alongside Intel MPI. The PATH and LD_LIBRARY_PATH environment variables must list the Intel MPI paths BEFORE those of any other MPI installed on your system. Make sure to source ONLY the mpivars.sh script to set this correctly, OR ensure that the Intel MPI paths appear before other MPI paths.
Batch system notes: In the notes above, we added the option '-envall' to the CAF config file so that your current environment variables are inherited by the spawned remote CAF processes. This helps ensure that your PATH and LD_LIBRARY_PATH contain the paths to Intel MPI and Intel Fortran AND that those paths appear before other MPI implementations and compilers on your system. HOWEVER, some batch scheduling systems do not allow environment inheritance; in other words, they discard your current environment variables and use defaults instead. That is why we suggested adding
source <path to intel MPI>/[ia32 | intel64]/bin/mpivars.sh
to your .bashrc, .cshrc, or .bash_profile. These dot files are invoked by each child process created and hence SHOULD set PATH and LD_LIBRARY_PATH appropriately. When in doubt, execute 'which mpiexec.hydra' interactively, or put 'echo `which mpiexec.hydra`' in your batch script, to ensure the Intel MPI mpiexec.hydra is being used. The 'mpiexec' commands of other MPI implementations cannot be used and will cause errors.
It is critical to ensure that you can execute an Intel MPI application PRIOR to attempting to run an Intel CAF program.
READ: the Intel MPI Release Notes and the Getting_Started.pdf documents that come with Intel MPI in the <installdir>/doc/ directory.
Our User Forums are great places to see current issues and to post questions:
Intel Fortran User Forum
Intel's compilers may or may not optimize to the same degree for non-Intel microprocessors for optimizations that are not unique to Intel microprocessors. These optimizations include SSE2, SSE3, and SSSE3 instruction sets and other optimizations. Intel does not guarantee the availability, functionality, or effectiveness of any optimization on microprocessors not manufactured by Intel. Microprocessor-dependent optimizations in this product are intended for use with Intel microprocessors. Certain optimizations not specific to Intel microarchitecture are reserved for Intel microprocessors. Please refer to the applicable product User and Reference Guides for more information regarding the specific instruction sets covered by this notice.
Notice revision #20110804