Building WRF v.3 and WPS v.3 with Intel® Compilers for distributed-mode execution on Linux*


Introduction :
This document explains how to build the Weather Research & Forecasting (WRF) model v3.1.1 and the WRF Preprocessing System (WPS) v3.1.1 using the Intel® C++ and Fortran Compilers for Linux (e.g. version 11.1.046) and Intel® MPI (e.g. 3.2.1).
The BKMs for building WRF and WPS with the Intel® Compilers in serial mode can be found here: WRF and WPS.

Before you continue reading this article, please check the related article at /en-us/articles/building-the-wrf-with-intel-compilers-on-linux-and-improving-performance-on-intel-architecture.

Version :
WRF v.3.1.1.
WPS v.3.1.1.
Intel® C++ and Fortran Compilers Professional Edition for Linux 11.1.046.

Obtaining Source Code :
The WRF source code can be downloaded here.
The WPS source code can be downloaded here.

Prerequisites :
The WRF requires the netCDF library. The best known method for installing it with the Intel® Compilers can be found here.
The WPS requires:
1) The WRF.
2) The Jasper library. The best known method for installing it with the Intel® Compilers can be found here.
3) The JPEG library. The best known method for installing it with the Intel® Compilers can be found here.
4) The Zlib library. The best known method for installing it with the Intel® Compilers can be found here.
5) The NCAR Graphics* library. How to build it with the Intel® Compilers can be found here.


Environment Set Up :
You should set up the environment variables for the Intel® C++ and Fortran Compilers and for netCDF. If you want to build the distributed versions of WRF and WPS, you should also set up the environment variables for Intel® MPI.
E.g.
$export INTEL_COMPILER_TOPDIR="/opt/spdtools/compiler/cpro/Compiler/11.1/046"
$. $INTEL_COMPILER_TOPDIR/bin/intel64/ifortvars_intel64.sh
$. $INTEL_COMPILER_TOPDIR/bin/intel64/iccvars_intel64.sh
$export NETCDF=/opt/netcdf
$. /opt/intel/impi/3.2.1.009/bin64/mpivars.sh
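As an optional sanity check (these commands only confirm the environment in the current shell and are not part of the build), you can verify that the Intel tools and the netCDF path are visible:
$which icc ifort mpiicc mpiifort
$ifort -V
$echo $NETCDF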

Source Code Changes : none

Building the Application :
Note: While building, mpicc and mpif90 should be symbolic links to mpiicc and mpiifort, respectively.
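One way to satisfy this note is sketched below. The $HOME/bin location is only an assumption; any directory that precedes the MPI bin directory in your PATH will do:
$mkdir -p $HOME/bin
$ln -s $(which mpiicc) $HOME/bin/mpicc
$ln -s $(which mpiifort) $HOME/bin/mpif90
$export PATH=$HOME/bin:$PATH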

WRF:
1) Copy/move the tar file WRFV3.1.1.TAR.gz to the directory /opt.
2) Decompress the source files:

$ tar -zxvf WRFV3.1.1.TAR.gz
3) Change into the source directory:
$cd ./WRFV3

4) Create the script build_WRF.sh with the following contents:

#!/bin/sh
export NETCDF=/opt/netcdf
export JASPERLIB=/opt/usr/lib
export JASPERINC=/opt/usr/include
export LD_LIBRARY_PATH="/opt/lib/:${LD_LIBRARY_PATH}"
export INCLUDE="/opt/include/:${INCLUDE}"

# To enable large-file support in netCDF, set the environment
# variable WRFIO_NCD_LARGE_FILE_SUPPORT to 1
export WRFIO_NCD_LARGE_FILE_SUPPORT=1

# If you have Intel MPI installed and wish to use it, set
#   DM_FC = mpiifort
#   DM_CC = mpiicc
# and source the bin64/mpivars.sh file from your Intel MPI
# installation before the build.
export DM_FC=mpiifort
export DM_CC=mpiicc

./configure --prefix=/opt/WRFV3-mpi

mkdir ./buildlog
./compile em_b_wave &> ./buildlog/em_b_wave
./compile em_grav2d_x &> ./buildlog/em_grav2d_x
./compile em_heldsuarez &> ./buildlog/em_heldsuarez
./compile em_hill2d_x &> ./buildlog/em_hill2d_x
./compile em_les &> ./buildlog/em_les
./compile em_quarter_ss &> ./buildlog/em_quarter_ss
./compile em_real &> ./buildlog/em_real
./compile em_seabreeze2d_x &> ./buildlog/em_seabreeze2d_x
./compile em_squall2d_x &> ./buildlog/em_squall2d_x
./compile em_squall2d_y &> ./buildlog/em_squall2d_y

"
5) Make the script executable:
$chmod +x build_WRF.sh

6) Execute the script, capturing stderr in build_WRF.err and the combined output in build_WRF.log:
$((./build_WRF.sh) 3>&2 2>&1 1>&3 | tee build_WRF.err ) 2>&1 | tee build_WRF.log
Or simply execute the script:
$./build_WRF.sh
NOTE:
To build the parallel WRF version, choose
7. Linux x86_64 i486 i586 i686, ifort compiler with icc (dmpar)
or
8. Linux x86_64 i486 i586 i686, ifort compiler with icc (dm+sm)
when prompted by configure.
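After a successful em_real build, you can check that the distributed-memory executables were actually produced; a minimal check (the exact set of executables and link locations may differ depending on the compiled case) is:
$ls -l main/wrf.exe main/real.exe
$ls -l run/*.exe test/em_real/*.exe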


WPS:

1) Copy/move the tar file WPSV3.1.1.TAR.gz to the directory /opt.
2) Decompress the source files:
$tar -zxvf WPSV3.1.1.TAR.gz
3) Set up the environment variables mentioned in the section "Environment Set Up".
4) Configure WPS:
$./configure
NOTE:
To build the parallel WPS version, choose
3. PC Linux x86_64, Intel compiler DM parallel, NO GRIB2
or
4. PC Linux x86_64, Intel compiler DM parallel
when prompted by configure.
5) Compile WPS (a combined script sketch follows this list):
$./compile
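For convenience, steps 3)-5) can be collected into a single script analogous to build_WRF.sh. The sketch below is only an assumption based on the paths used earlier in this article (WPS unpacked to /opt/WPS, libraries under /opt); adjust it to your installation:

#!/bin/sh
# build_WPS.sh -- sketch only; all paths are assumptions from this article
export NETCDF=/opt/netcdf
export JASPERLIB=/opt/usr/lib
export JASPERINC=/opt/usr/include
export LD_LIBRARY_PATH="/opt/lib/:${LD_LIBRARY_PATH}"

cd /opt/WPS

# Choose option 3 or 4 (Intel compiler DM parallel) at the prompt
./configure

mkdir ./buildlog
./compile &> ./buildlog/compile_WPS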


Running the Application :
1) To run under Intel® MPI, the MPD ring must first be brought up
a) on a single machine:
$mpdboot &
b) on the hosts listed in the file mpd.hosts:
$mpdboot -n <number to start> -f mpd.hosts
2) To run the main WRF simulation in parallel mode:
$mpiexec -n <number of procs> ./wrf.exe
3) To run WPS components in parallel mode:
$mpiexec -n <number of procs> ./component_name.exe
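As an illustration only (the host and process counts below are made-up examples; adjust them to your cluster), a complete distributed run of the em_real case from ./test/em_real/ might look like:
$mpdboot -n 2 -f mpd.hosts
$mpiexec -n 4 ./real.exe
$mpiexec -n 4 ./wrf.exe
$mpdallexit
Here mpdallexit shuts the MPD ring down when the runs are finished.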

Verifying Correctness :
To verify results you can use the diffwrf utility. It is located at ./external/io_netcdf/diffwrf.

E.g., to run diffwrf from the directory ./test/em_real/:
$../../external/io_netcdf/diffwrf ./wrfout_d01_etalon ./wrfout_d01_2000-01-24_12\:00\:00


Known Issues or Limitations :
1) If WPS is built against the parallel WRF build, then in the WPS post-configure file (configure.wps) the variable WRF_DIR should be changed to
WRF_DIR = ../WRFV3-mpi
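One non-interactive way to make this change is sketched below; configure.wps is the standard name of the WPS post-configure file, but verify the file and the existing WRF_DIR line before editing:
$sed -i 's|^WRF_DIR.*|WRF_DIR = ../WRFV3-mpi|' configure.wps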
For more complete information about compiler optimizations, see our Optimization Notice.

Comments :

Bill B.:

Hi:

I've already compiled WRF successfully; however, I can't compile WPS successfully. I got an error. The log is listed below:

mpicc -cc=icc -D_UNDERSCORE -DBYTESWAP -DLINUX -DIO_NETCDF -DIO_BINARY -DIO_GRIB1 -DBIT32 -D_MPI -D_GEOGRID -w -c cio.c    
gcc: unrecognized option '-cc=icc'

gfortran: big_endian: No such file or directory
gfortran: unrecognized option '-convert'
f951: error: unrecognized command line option "-f90=ifort"
make[1]: [wrf_debug.o] Error 1 (ignored)

How do you solve this problem?

 

wanyuan l.:

Dear Ronald W Green:

Thank you for your reply. In fact, I had tried to use the 'ulimit -s unlimited' command to unlimit my stack size before using the '-heap-arrays' option, but it failed. If you have a new idea, please tell me again through "Comment". Thank you very much!

You cannot use -heap-arrays AND -openmp together. Remove -heap-arrays and unlimit your stack:

http://software.intel.com/en-us/articles/determining-root-cause-of-sigsegv-or-sigbus-errors

 

 

wanyuan l.:

Dear Kirill Mavrodiev:

I have successfully compiled WRF3.5.1 using ifort and icc (Intel Parallel Studio XE 2013 SP1) accompanied with mpich-3.0.4 and completed running geogrid.exe, ungrib.exe, metgrid.exe, and real.exe. I had added '-heap-arrays' to CFLAGS_LOCAL, LDFLAGS_LOCAL, FCOPTIM and FCBASEOPTS_NO_G in configure.wrf before compiling em_real, but the relevant wrf-running log file ended quickly with the following lines:

*****************************************************************************

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source       
wrf.exe            00000000031A4689  Unknown               Unknown  Unknown
wrf.exe            00000000031A3000  Unknown               Unknown  Unknown
wrf.exe            000000000314BAE2  Unknown               Unknown  Unknown
wrf.exe            00000000030E3833  Unknown               Unknown  Unknown
wrf.exe            00000000030EA41B  Unknown               Unknown  Unknown
libpthread.so.0    00002B70AFF23CB0  Unknown               Unknown  Unknown
wrf.exe            0000000001652BE3  module_bc_em_mp_l        1067  module_bc_em.f90
wrf.exe            00000000016CAF40  start_domain_em_          261  start_em.f90
wrf.exe            0000000001403345  start_domain_             117  start_domain.f90
wrf.exe            0000000000F56ECE  med_initialdata_i         198  mediation_wrfmain.f90
wrf.exe            000000000040B79E  module_wrf_top_mp         144  module_wrf_top.f90
wrf.exe            000000000040B35B  MAIN__                      8  wrf.f90
wrf.exe            000000000040B316  Unknown               Unknown  Unknown

*****************************************************************************

The modified part of configure.wrf is as follows:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

DMPARALLEL      =        1
OMPCPP          =       # -D_OPENMP
OMP             =       # -openmp -fpp -auto
OMPCC           =       # -openmp -fpp -auto
SFC             =       ifort
SCC             =       icc
CCOMP           =       icc
DM_FC           =       mpif90 -f90=$(SFC)
DM_CC           =       mpicc -cc=$(SCC) -DMPI2_SUPPORT
FC              =        $(DM_FC)
CC              =       $(DM_CC) -DFSEEKO64_OK
LD              =       $(FC)
RWORDSIZE       =       $(NATIVE_RWORDSIZE)
PROMOTION       =       -i4
ARCH_LOCAL      =       -DNONSTANDARD_SYSTEM_FUNC -DWRF_USE_CLM
CFLAGS_LOCAL    =       -w -O3 -ip -heap-arrays #-xHost -fp-model fast=2 -no-prec-div -no-prec-sqrt -ftz -no-multibyte-chars
LDFLAGS_LOCAL   =       -ip -heap-arrays #-xHost -fp-model fast=2 -no-prec-div -no-prec-sqrt -ftz -align all -fno-alias -fno-common
CPLUSPLUSLIB    =
ESMF_LDFLAG     =       $(CPLUSPLUSLIB)
FCOPTIM         =       -O3 -heap-arrays
FCREDUCEDOPT    =       $(FCOPTIM)
FCNOOPT         =       -O0 -fno-inline -fno-ip
FCDEBUG         =       -g $(FCNOOPT) -traceback # -fpe0 -check all -ftrapuv -unroll0 -u
FORMAT_FIXED    =       -FI
FORMAT_FREE     =       -FR
FCSUFFIX        =
BYTESWAPIO      =       -convert big_endian
FCBASEOPTS_NO_G =       -ip -heap-arrays -fp-model precise -w -ftz -align all -fno-alias $(FORMAT_FREE) $(BYTESWAPIO) #-xHost -fp-model fast=2 -no-heap-arrays -no-prec-div -no-prec-sqrt -fno-common
FCBASEOPTS      =       $(FCBASEOPTS_NO_G) $(FCDEBUG)
MODULE_SRCH_FLAG =
TRADFLAG        =      -traditional
CPP             =      /lib/cpp -C -P
AR              =      ar
ARFLAGS         =      ru
M4              =      m4
RANLIB          =      ranlib
CC_TOOLS        =      $(SCC)

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

And my CPU is "Intel® Core™ i7-3630QM CPU @ 2.40GHz × 8", the Linux system is "Ubuntu 12.04", and the "uname -a" command reports "Linux liwanyuan-Lenovo-IdeaPad-Y500 3.2.0-58-generic #88-Ubuntu SMP Tue Dec 3 17:37:58 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux". I need help eagerly. Can you help me solve the problem? Thank you a lot!

 

wanyuan l.:

Excuse me. I have modified a comment that was filtered as spam in order to avoid misunderstanding.

wanyuan l.:

Dear everyone: Thank you very much for your time. 

 

anonymous:

I am trying to install WRF on an HPC cluster. I get the following errors:
nup_em.f90(3): error #7002: Error in opening the compiled module file. Check INCLUDE paths. [MODULE_DOMAIN]
USE module_domain, ONLY : domain, wrfu_timeinterval, alloc_and_configure_domain, &
nup_em.f90(26): error #6457: This derived type name has not been declared. [GRID_CONFIG_REC_TYPE]
TYPE (grid_config_rec_type) config_flags
error #6632: Keyword arguments are invalid without an explicit interface.
Finally, the executables are not generated.

mkbane:

vastviewec, what was in your rsl.out.0000 file?

I have a similar problem. I've built WRF (dmpar) on our IA-64 cluster using the mpif90/mpicc (and the Bull MPI)
I've the met_em* files downloaded from the NCAR website
./real.exe runs fine
but
prun -n40 ./wrf.exe (for example)
aborts with the following in the rsl files:

~/RCS/WRF-CHEM/v3.1.1/WRFV3/run$ less rsl.error.0000
taskid: 0 hostname: horace6
Quilting with 1 groups of 0 I/O tasks.
Namelist dfi_control not found in namelist.input. Using registry defaults for variables in dfi_control
Namelist tc not found in namelist.input. Using registry defaults for variables in tc
Namelist scm not found in namelist.input. Using registry defaults for variables in scm
Namelist fire not found in namelist.input. Using registry defaults for variables in fire
Ntasks in X 5, ntasks in Y 8
WRF V3.1.1 MODEL
*************************************
Parent domain
ids,ide,jds,jde 1 74 1 61
ims,ime,jms,jme -4 22 -4 15
ips,ipe,jps,jpe 1 15 1 8
*************************************
DYNAMICS OPTION: Eulerian Mass Coordinate
alloc_space_field: domain 1, 9258880 bytes allocated
-------------- FATAL CALLED ---------------
un-normalized TimeInterval not allowed: ESMF_TimeIntervalAbsValue arg1
-------------------------------------------
aborting job:
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0

Anybody have any thoughts? I've built and run on a Cray XT (using same met_em files) without problem

infagon:

Hi,
I tried to follow your suggestions. However, the version of the Intel compiler that I have on the machine is 10.1.018 and the Intel MPI is 3.1.
This is what I do:

export INTEL_COMPILER_TOPDIR="/opt/intel"
source $INTEL_COMPILER_TOPDIR/cce/10.1.018/bin/iccvars.sh
source $INTEL_COMPILER_TOPDIR/fce/10.1.018/bin/ifortvars.sh
export NETCDF=/home/bcodina/netCDF
export WRFIO_NCD_LARGE_FILE_SUPPORT=1
export WRF_EM_CORE=1
source $INTEL_COMPILER_TOPDIR/impi/3.1/bin/mpivars.sh
export DM_FC=mpiifort
export DM_CC=mpiicc
./configure
mkdir ./buildlog
./compile em_real &> ./buildlog/em_real

However, the compilation fails and in the log files I find error messages like this:

mpicc -DFSEEKO64_OK -w -O3 -ip -DDM_PARALLEL -c c_code.c
cc1: error: unrecognized command line option "-ip"

Does that mean that the -ip optimization option is not supported with Intel MPI 3.1?

Is there any way to solve this?

