Building VASP* with Intel® MKL, Intel® Compilers and Intel® MPI

Step 1 – Overview

This guide helps users build VASP (Vienna Ab initio Simulation Package) with Intel® Math Kernel Library (Intel® MKL), Intel® Compilers, and Intel® MPI on Linux platforms.

VASP is a package for performing ab-initio quantum-mechanical molecular dynamics (MD) using pseudopotentials and a plane-wave basis set. The approach implemented in VAMP/VASP is based on a finite-temperature local-density approximation (with the free energy as variational quantity) and an exact evaluation of the instantaneous electronic ground state at each MD step using efficient matrix-diagonalization schemes and an efficient Pulay mixing. These techniques avoid all problems occurring in the original Car-Parrinello method, which is based on the simultaneous integration of electronic and ionic equations of motion. The interaction between ions and electrons is described using ultrasoft Vanderbilt pseudopotentials (US-PP) or the projector augmented wave (PAW) method. Both techniques allow a considerable reduction in the necessary number of plane waves per atom for transition metals and first-row elements. Forces and stress can be easily calculated with VAMP/VASP and used to relax atoms into their instantaneous ground state. [Ref: VASP]

Version Information

This application note helps users who benchmark clusters with VASP to also incorporate the latest version of Intel® MKL on Linux platforms on Intel® Xeon® systems. It was verified with VASP 5.3.5, Intel® Compilers 15.0, Intel® MKL 11.2, and Intel® MPI 5.0.3.

More information on VASP can be found at http://cms.mpi.univie.ac.at/vasp/

Step 2 – Downloading VASP Source Code

VASP is not public-domain or shareware, and is distributed only after a license contract has been signed. Please visit the VASP homepage for details on obtaining a license.

Prerequisites:

  Intel C++ and Fortran Compilers, Intel MKL, Intel MPI

The above products are bundled in Intel® Parallel Studio XE 2015 Cluster Edition, which is available for a 30-day evaluation or for purchase at

http://www.intel.com/software/products/ 

Step 3 - Configuration

Use the following commands to extract the VASP files:

$tar -xvzf vasp.tgz
$tar -xvzf vasp.lib.tgz

This will create the vasp and vasp.lib directories.

Set the Intel software tools environment variables by running the following command, assuming the default installation path and building for the Intel64 platform:

$source /opt/intel/parallel_studio_xe_2015/bin/psxevars.sh intel64

Note:  This application note is written specifically for use with the Intel compilers and MPI.
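Before proceeding, it can help to confirm that sourcing psxevars.sh actually exported the expected variables. A minimal sketch (the variable names MKLROOT and I_MPI_ROOT are the ones the Intel environment scripts normally set; adjust to your installation):

```shell
# Report any of the named environment variables that are unset or empty.
check_env() {
  for v in "$@"; do
    eval "val=\${$v}"
    if [ -z "$val" ]; then
      echo "missing: $v"
    fi
  done
}

# Example: check_env MKLROOT I_MPI_ROOT
```

If any variable is reported missing, re-run the psxevars.sh command above before building.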

Step 4 – Building VASP

a. Build libdmy.a

Change directory to vasp.x.x.lib

Modify the makefile.linux_ifc_P4 Makefile to point to the correct Intel Fortran compiler:

FC=ifort

Run the following command from vasp.x.x.lib, using the Makefile for Linux with the Intel compiler.

$make -f makefile.linux_ifc_P4

After a successful compilation, libdmy.a will be built in the same directory.

b. Build VASP

Change directory to vasp.x.x

Edit makefile.linux_ifc_P4 to link with the Intel® MKL libraries and change the Fortran compiler.

Under the FORTRAN and C++ compiler and linker section of the makefile:

FC= mpiifort
CPP    = $(CPP_) -DMPI  -DHOST=\"LinuxIFC\" -DIFC \
     -DCACHE_SIZE=32000 -DPGF90 -Davoidalloc -DNGZhalf \
     -DMPI_BLOCK=64000 -Duse_collective -DscaLAPACK -DMKL_ILP64

Change the FORTRAN flags section as shown here.

FFLAGS = -FR -names lowercase -assume byterecl -I$(MKLROOT)/include/fftw
........
...
OFLAG=-O3 -xCORE-AVX2
.......

The -xCORE-AVX2 flag enables vectorization for the Haswell architecture. Please refer to the Intel Compiler reference manual for architecture-specific compiler flags, or use -xhost to enable the highest available SIMD instruction set if you are building and running VASP on the same platform.

Point to the Intel MKL libraries by modifying the MKL section as shown below. -mkl=cluster is an Intel compiler flag that links against Intel MKL, including the BLAS, LAPACK, FFT, and ScaLAPACK functions used by VASP.

MKLROOT=/opt/intel/composer_xe_2015/mkl
MKL_PATH=$(MKLROOT)/lib/intel64
....
.....

BLAS= -mkl=cluster

LAPACK= 
......
.....

Comment out the existing FFT3D line. The FFTW wrappers have been integrated into Intel MKL since version 10.2, so you do not need to specify the wrapper libraries in the FFT3D line; they are taken care of by the -mkl=cluster flag mentioned above.

#FFT3D = fftdfurth.o fftdlib.o
FFT3D= fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o 
INCS = -I$(MKLROOT)/include/fftw

Since -mkl=cluster also includes the MKL ScaLAPACK libraries, the ScaLAPACK libraries do not need to be listed explicitly. Leave the SCA variable empty:

SCA=
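Taken together, the edited sections of makefile.linux_ifc_P4 look roughly like this. This is a sketch collecting the fragments shown above, not a complete makefile; paths assume a default Parallel Studio XE 2015 installation.

```makefile
# Sketch of the modified sections of makefile.linux_ifc_P4
FC       = mpiifort
CPP      = $(CPP_) -DMPI -DHOST=\"LinuxIFC\" -DIFC \
           -DCACHE_SIZE=32000 -DPGF90 -Davoidalloc -DNGZhalf \
           -DMPI_BLOCK=64000 -Duse_collective -DscaLAPACK -DMKL_ILP64
FFLAGS   = -FR -names lowercase -assume byterecl -I$(MKLROOT)/include/fftw
OFLAG    = -O3 -xCORE-AVX2
MKLROOT  = /opt/intel/composer_xe_2015/mkl
MKL_PATH = $(MKLROOT)/lib/intel64
BLAS     = -mkl=cluster
LAPACK   =
SCA      =
FFT3D    = fftmpiw.o fftmpi_map.o fftw3d.o fft3dlib.o
INCS     = -I$(MKLROOT)/include/fftw
```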

Run the following command to build VASP.

$make -f makefile.linux_ifc_P4

This will create the VASP executable in the current directory.

Step 5 - Running VASP

Run VASP by executing the mpiexec command with your required parameters. For example, to run 48 processes on the hosts listed in machinefile:

$mpiexec.hydra -np 48 -f machinefile ./vasp
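A minimal launch sketch follows. The hostnames node01 and node02 are placeholders, and the mpiexec line is commented out because it requires a real cluster with VASP built and the Intel MPI environment sourced.

```shell
# Write one hostname per line to the machinefile (placeholder hosts).
printf '%s\n' node01 node02 > machinefile

# Launch across those hosts (uncomment on a real cluster):
# mpiexec.hydra -np 48 -f machinefile ./vasp
```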

Appendix A - How to check whether VASP is linked with Intel MKL

To confirm that VASP has been successfully linked with Intel MKL, run ldd on the vasp executable as shown below. Version numbers have been replaced with "x" to keep the output generic.

[vasp.5.3]$ ldd vasp
        linux-vdso.so.1 =>  (0x00007fff3e7f0000)
        libmkl_intel_lp64.so => /opt/intel/composer_xe_201x.x.xxx/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002ad1af89e000)
        libmkl_cdft_core.so => /opt/intel/composer_xe_201x.x.xxx/mkl/lib/intel64/libmkl_cdft_core.so (0x00002ad1b01b5000)
        libmkl_scalapack_lp64.so => /opt/intel/composer_xe_201x.x.xxx/mkl/lib/intel64/libmkl_scalapack_lp64.so (0x00002ad1b03dd000)
        libmkl_blacs_intelmpi_lp64.so => /opt/intel/composer_xe_201x.x.xxx/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so (0x00002ad1b0cc7000)
        libmkl_sequential.so => /opt/intel/composer_xe_201x.x.xxx/mkl/lib/intel64/libmkl_sequential.so (0x00002ad1b0efb000)
        libmkl_core.so => /opt/intel/composer_xe_201x.x.xxx/mkl/lib/intel64/libmkl_core.so (0x00002ad1b17db000)
        libiomp5.so => /opt/intel/composer_xe_201x.x.xxx/compiler/lib/intel64/libiomp5.so (0x00002ad1b3348000)
        libmpifort.so.12 => /opt/intel/impi/5.x.x.xxx/intel64/lib/libmpifort.so.12 (0x00002ad1b3685000)
        libmpi.so.12 => /opt/intel/impi/5.x.x.xxx/intel64/lib/libmpi.so.12 (0x00002ad1b390f000)
.......
....
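This check can be scripted. A sketch of a small filter that extracts the Intel MKL library names from ldd output (the function name mkl_libs is illustrative, not part of any tool):

```shell
# Print the unique names of Intel MKL shared libraries found on stdin;
# pipe the output of `ldd ./vasp` into it.
mkl_libs() {
  grep -o 'libmkl_[a-z0-9_]*\.so' | sort -u
}

# Example: ldd ./vasp | mkl_libs
```

If the filter prints nothing, the executable was not linked against Intel MKL.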

Appendix B - Known Issues and Limitations

A compilation error may occur in the nonlr.F file, as shown below.

mpif90 -fc=ifort  -FR -names lowercase -assume byterecl  -O2 -ip   -I/opt/intel/compiler/2013_sp1.1.106/composer_xe_2013_sp1.1.106/mkl/include/fftw  -c nonlr.f90

nonlr.F(3069): error #6404: This name does not have a type, and must have an explicit type.   [LM]
             DO LM=1,LMMAXC
----------------^
nonlr.F(3069): error #6063: An INTEGER or REAL data type is required in this context.   [LM]
             DO LM=1,LMMAXC
----------------^
nonlr.F(3206): error #6404: This name does not have a type, and must have an explicit type.   [LM]
             DO LM=1,LMMAXC
----------------^
nonlr.F(3206): error #6063: An INTEGER or REAL data type is required in this context.   [LM]
             DO LM=1,LMMAXC
----------------^

Applying the following patch to nonlr.F fixes this compilation error.

3002c3002
<     INTEGER IP, LMBASE, ISPIRAL, ISPINOR, NLIIND, NIS, NT, LMMAXC, NI, INDMAX, L, LM, IND
---
>     INTEGER IP, LMBASE, ISPIRAL, ISPINOR, NLIIND, NIS, NT, LMMAXC, NI, INDMAX, L, IND
3144c3144
<     INTEGER IP, LMBASE, ISPIRAL, ISPINOR, NLIIND, NIS, NT, LMMAXC, NI, INDMAX, L, LM, IND
---
>     INTEGER IP, LMBASE, ISPIRAL, ISPINOR, NLIIND, NIS, NT, LMMAXC, NI, INDMAX, L, IND

Appendix C – References

VASP (Vienna Ab initio Simulation Package)

Intel® Math Kernel Library

Intel® Parallel Studio XE

 

For more complete information about compiler optimizations, see our Optimization Notice.

14 comments

Santanu M.:

While installing VASP 5.3.5 on our local server using the latest version of the Intel 2016 compilers, we get stuck at BLACS. The following error appears:

undefined reference to MPI_Allgather. A similar error appears for all other MPI calls.

Why and how does this error appear? Kindly resolve the issue.

Looking forward to a reply.

Thanks

 

 

Rohit T.:

How do I compile a static version of vasp?

I have compiled the band and gamma versions of vasp, and they run well.

I have made one change in the Makefile for compiling the static version:

FC=ifort -i-dynamic   changed to   FC=ifort -static, and then compiled again. But it is not working. Any help will be appreciated.

thanks

Prasanna Kumar N.:

Please ignore my previous post. 

I installed the VASP executable successfully; I only changed FC=mpif90 (OpenMPI compiled using the Intel compiler) inside makefile.linux_ifc_P4.

But I got the following error while running,

mpirun -np 4 /opt/VASP/vasp.5.3/vasp

this gives the error as follows, 

forrtl: severe (174): SIGSEGV, segmentation fault occurred
Image              PC                Routine            Line        Source                                                                           
libmpi.so.1        00002B3133018DE9  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D8B273  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D9FB  Unknown               Unknown  Unknown
libmkl_blacs_inte  00002B3130D7D409  Unknown               Unknown  Unknown
vasp               00000000004D7BCD  Unknown               Unknown  Unknown
vasp               00000000004CA239  Unknown               Unknown  Unknown
vasp               0000000000E23D62  Unknown               Unknown  Unknown
vasp               0000000000E447AD  Unknown               Unknown  Unknown
vasp               0000000000472BC5  Unknown               Unknown  Unknown
vasp               000000000044D25C  Unknown               Unknown  Unknown
libc.so.6          00002B31340C1C36  Unknown               Unknown  Unknown
vasp               000000000044D159  Unknown               Unknown  Unknown

                                                                         

 

--------------------------------------------------------------------------
mpirun has exited due to process rank 6 with PID 12042 on
node node01 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

 

Here all the libs associated with vasp executable,

ldd vasp

linux-vdso.so.1 =>  (0x00007fffcd1d5000)
        libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002b7018572000)
        libmkl_cdft_core.so => /opt/intel/mkl/lib/intel64/libmkl_cdft_core.so (0x00002b7018c84000)
        libmkl_scalapack_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.so (0x00002b7018ea0000)
        libmkl_blacs_intelmpi_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so (0x00002b701968b000)
        libmkl_sequential.so => /opt/intel/mkl/lib/intel64/libmkl_sequential.so (0x00002b70198c8000)
        libmkl_core.so => /opt/intel/mkl/lib/intel64/libmkl_core.so (0x00002b7019f66000)
        libiomp5.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libiomp5.so (0x00002b701b174000)
        libmpi_f90.so.3 => /opt/intel/openmpi-icc/lib/libmpi_f90.so.3 (0x00002b701b477000)
        libmpi_f77.so.1 => /opt/intel/openmpi-icc/lib/libmpi_f77.so.1 (0x00002b701b67b000)
        libmpi.so.1 => /opt/intel/openmpi-icc/lib/libmpi.so.1 (0x00002b701b8b8000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00002b701bd03000)
        libm.so.6 => /lib64/libm.so.6 (0x00002b701bf07000)
        librt.so.1 => /lib64/librt.so.1 (0x00002b701c180000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00002b701c38a000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00002b701c5a2000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b701c7a5000)
        libc.so.6 => /lib64/libc.so.6 (0x00002b701c9c3000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b701cd37000)
        libifport.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifport.so.5 (0x00002b701cf4d000)
        libifcore.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcore.so.5 (0x00002b701d17d000)
        libimf.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libimf.so (0x00002b701d4b3000)
        libintlc.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libintlc.so.5 (0x00002b701d96f000)
        libsvml.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libsvml.so (0x00002b701dbbe000)
        libifcoremt.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcoremt.so.5 (0x00002b701e48c000)
        libirng.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libirng.so (0x00002b701e7f1000)
        /lib64/ld-linux-x86-64.so.2 (0x00002b7018351000)

Please take a look into this and help me in running the same.

Prasanna Kumar N.:

I installed the VASP executable successfully; I only changed FC=mpif90 (OpenMPI compiled using the Intel compiler) as you mentioned. But I got the following error while running:

mpirun -np 4 /opt/VASP/vasp.5.3/vasp

this gives the error as follows, 

WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 WARNING: for PREC=h ENMAX is automatically increase by 25 %
        this was not the case for versions prior to vasp.4.4
 LDA part: xc-table for Ceperly-Alder, standard interpolation
 POSCAR, INCAR and KPOINTS ok, starting setup
 FFT: planning ...
 WAVECAR not read
 entering main loop
       N       E                     dE             d eps       ncg     rms                                                                               rms(c)
^Cmpirun: killing job...

forrtl: error (69): process interrupted (SIGINT)

Stack trace terminated abnormally.
forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source                                                                           
vasp               000000000068789A  Unknown               Unknown  Unknown
vasp               00000000006D0A54  Unknown               Unknown  Unknown
vasp               0000000000E2FDF1  Unknown               Unknown  Unknown
vasp               0000000000E5549D  Unknown               Unknown  Unknown
vasp               000000000047C845  Unknown               Unknown  Unknown
vasp               0000000000456EDC  Unknown               Unknown  Unknown
libc.so.6          00002B4F36889C36  Unknown               Unknown  Unknown
vasp               0000000000456DD9  Unknown               Unknown  Unknown
forrtl: error (69): process interrupted (SIGINT)
Image              PC                Routine            Line        Source                                                                           

--------------------------------------------------------------------------
mpirun has exited due to process rank 6 with PID 12042 on
node node01 exiting improperly. There are two reasons this could occur:

1. this process did not call "init" before exiting, but others in
the job did. This can cause a job to hang indefinitely while it waits
for all processes to call "init". By rule, if one process calls "init",
then ALL processes must call "init" prior to termination.

2. this process called "init", but exited without calling "finalize".
By rule, all processes that call "init" MUST call "finalize" prior to
exiting or it will be considered an "abnormal termination"

This may have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------

 

Here all the libs associated with vasp executable,

ldd vasp

linux-vdso.so.1 =>  (0x00007fffcd1d5000)
        libmkl_intel_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_intel_lp64.so (0x00002b7018572000)
        libmkl_cdft_core.so => /opt/intel/mkl/lib/intel64/libmkl_cdft_core.so (0x00002b7018c84000)
        libmkl_scalapack_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_scalapack_lp64.so (0x00002b7018ea0000)
        libmkl_blacs_intelmpi_lp64.so => /opt/intel/mkl/lib/intel64/libmkl_blacs_intelmpi_lp64.so (0x00002b701968b000)
        libmkl_sequential.so => /opt/intel/mkl/lib/intel64/libmkl_sequential.so (0x00002b70198c8000)
        libmkl_core.so => /opt/intel/mkl/lib/intel64/libmkl_core.so (0x00002b7019f66000)
        libiomp5.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libiomp5.so (0x00002b701b174000)
        libmpi_f90.so.3 => /opt/intel/openmpi-icc/lib/libmpi_f90.so.3 (0x00002b701b477000)
        libmpi_f77.so.1 => /opt/intel/openmpi-icc/lib/libmpi_f77.so.1 (0x00002b701b67b000)
        libmpi.so.1 => /opt/intel/openmpi-icc/lib/libmpi.so.1 (0x00002b701b8b8000)
        libdl.so.2 => /lib64/libdl.so.2 (0x00002b701bd03000)
        libm.so.6 => /lib64/libm.so.6 (0x00002b701bf07000)
        librt.so.1 => /lib64/librt.so.1 (0x00002b701c180000)
        libnsl.so.1 => /lib64/libnsl.so.1 (0x00002b701c38a000)
        libutil.so.1 => /lib64/libutil.so.1 (0x00002b701c5a2000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x00002b701c7a5000)
        libc.so.6 => /lib64/libc.so.6 (0x00002b701c9c3000)
        libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00002b701cd37000)
        libifport.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifport.so.5 (0x00002b701cf4d000)
        libifcore.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcore.so.5 (0x00002b701d17d000)
        libimf.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libimf.so (0x00002b701d4b3000)
        libintlc.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libintlc.so.5 (0x00002b701d96f000)
        libsvml.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libsvml.so (0x00002b701dbbe000)
        libifcoremt.so.5 => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libifcoremt.so.5 (0x00002b701e48c000)
        libirng.so => /opt/intel/composer_xe_2013.1.117/compiler/lib/intel64/libirng.so (0x00002b701e7f1000)
        /lib64/ld-linux-x86-64.so.2 (0x00002b7018351000)

Please take a look into this and help me in running the same.

 

Nadezhda Plotnikova (Intel):

Hi Vipin, how are the CACHE and BLOCK size parameters (-DCACHE_SIZE and -DMPI_BLOCK) chosen? Is there any logic behind that? And how do I compile without MPI? Thanks! Nadya

Vipin Kumar E K (Intel):

The article has been updated (Jun 15th, 2015) after validating with Intel Parallel Studio XE 2015 version (Intel Compilers 15.0, Intel MKL 11.2, Intel MPI 5.0.3)  and with VASP 5.3.5.

Vipin Kumar E K (Intel):

Baris,

Can you please submit the issue that you face with your MPI errors to premier.intel.com, so we can investigate?

--Vipin

Baris M.:

Thank you very much for the effort, but I would like to warn users that using these instructions (vasp-5.3.5, intelmpi/4.1.3.048, intel compilers 14.0up03), all I got were strange MPI errors and segmentation faults when I was lucky, and completely nonsensical results when I was not so lucky.

I'm going back to my own compilation, which works more slowly but reliably.

I don't know what exactly went wrong, but I think these guidelines either need to be updated or a complete makefile has to be presented.

Tom P.:

Is this a typo, or is there something wrong with my install of MKL?

You show:

MKL_FFTW_PATH=$(MKL_PATH)/interfaces/fftw3xf

My install has interfaces/fftw3xf under $MKLROOT, not under $MKL_PATH=$MKLROOT/lib/intel64
