Recipe: Building and Running MILC on Intel® Xeon® Processors and Intel® Xeon Phi™ Processors

By Smahane Douyeb, Karthik Raman, Published: 02/07/2017, Last Updated: 01/09/2018

Introduction

The MILC software is a set of codes written by the MIMD Lattice Computation (MILC) collaboration and used to study quantum chromodynamics (QCD), the theory of the strong interactions of subatomic physics. It performs simulations of four-dimensional SU(3) lattice gauge theory on MIMD parallel machines. "Strong interactions" are responsible for binding quarks into protons and neutrons and for holding them all together in the atomic nucleus. MILC applications address fundamental questions in high-energy and nuclear physics and are directly related to major experimental programs in these fields. MILC is one of the largest compute-cycle users at many US and European supercomputing centers.

Purpose

This article provides code access, build, and run directions for the “ks_imp_rhmc” application on Intel® Xeon® Gold and Intel® Xeon Phi™ processors, with the aim of achieving good performance on a single node.

The “ks_imp_rhmc” application is a dynamical RHMC (Rational Hybrid Monte Carlo) code for staggered fermions. In addition to the naive and asqtad staggered actions, the highly improved staggered quark (HISQ) action is also supported.

Currently, the Conjugate Gradient (CG) solver and the Gauge Force operations in the code use the QPhiX library. Efforts are ongoing to integrate other operations (such as the Fermion Force (FF)) with the QPhiX library as well.

The QPhiX library provides sparse solvers and Dslash kernels for Lattice QCD simulations optimized for Intel® architecture.

Code Access

The MILC software and the QPhiX library are the primary requirements. The MILC software can be downloaded from GitHub* here: https://github.com/milc-qcd/milc_qcd. Download (git checkout) the “develop” branch; QPhiX support for the CG solvers and the Gauge Force operator is integrated into this branch. QPhiX support for Gauge Force is currently available on Intel® Xeon® Gold and Intel® Xeon Phi™ processors only.

git clone https://github.com/milc-qcd/milc_qcd.git
cd milc_qcd
git checkout develop

The QPhiX library and code generator for use with Wilson-Clover fermions (e.g., for use with Chroma) are available from https://github.com/jeffersonlab/qphix.git and https://github.com/jeffersonlab/qphix-codegen.git, respectively. For the most up-to-date version, use the “devel” branch of QPhiX.
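
For example, a minimal sketch of fetching those two repositories (the “devel” checkout applies to QPhiX, as suggested above):

git clone https://github.com/jeffersonlab/qphix.git
git clone https://github.com/jeffersonlab/qphix-codegen.git
cd qphix
git checkout devel        # most up-to-date version, as suggested above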

The MILC version of QPhiX is currently not open source. Please contact the MILC collaboration group for access to the QPhiX (MILC) branch.

Build Directions

Compile the QPhiX Library:

Users need to build the QPhiX library before building the MILC package.

The QPhiX library has two repositories: milc-qphix and milc-qphix-codegen.

Use the “gauge_force” branch for both of the above repositories.

Build milc-qphix-codegen:

The files with intrinsics for QPhiX are built in the milc-qphix-codegen directory.

Enter the milc-qphix-codegen directory and remember to check out the “gauge_force” branch.

Edit line #3 in “Makefile_xyzt” to enable the “milc=1” variable.

Compile as:

source /opt/intel/compiler/<version>/bin/compilervars.sh intel64 
source /opt/intel/impi/<version>/mpi/intel64/bin/mpivars.sh
make avx512   # [for Intel® Xeon® Gold and Intel® Xeon Phi™ processors]
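
These steps can be consolidated as follows; the sed command that sets “milc=1” on line #3 of Makefile_xyzt is an assumption about that file's layout, so verify it against your copy before running:

cd <path-to>/milc-qphix-codegen
git checkout gauge_force
sed -i '3s/.*/milc=1/' Makefile_xyzt      # enable milc=1 on line #3 (verify first)
source /opt/intel/compiler/<version>/bin/compilervars.sh intel64
source /opt/intel/impi/<version>/mpi/intel64/bin/mpivars.sh
make avx512                               # Intel Xeon Gold and Intel Xeon Phi processors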

Build milc-qphix:

Enter the milc-qphix (mbench) directory and remember to check out the “gauge_force” branch.

Use “Makefile_qphixlib” as the makefile.

Set “mode=mic” to compile with Intel® Advanced Vector Extensions 512 (Intel AVX-512) for Intel® Xeon Phi™ processors and “mode=avx512” to compile with Intel AVX-512 for Intel® Xeon® Gold processors.

To enable MPI, set ENABLE_MPI = 1.

Compile as:

make -f Makefile_qphixlib mode=mic AVX512=1 # [Intel® Xeon Phi™ processor]
make -f Makefile_qphixlib mode=avx512 AVX512=1 # [Intel® Xeon® Gold processor]
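
A consolidated sketch of these steps (assuming the mbench directory name and that GNU make command-line variables override the makefile defaults):

cd <path-to>/mbench
git checkout gauge_force
# Either set ENABLE_MPI = 1 inside Makefile_qphixlib, or pass it on the command line:
make -f Makefile_qphixlib mode=avx512 AVX512=1 ENABLE_MPI=1   # Intel Xeon Gold processor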

Compile MILC Code:

  1. Install/download the MILC code from the GitHub repository above (see Code Access)
  2. Download the Makefile.qphix file from the following location
    http://physics.indiana.edu/~sg/MILC_Performance_Recipe/
  3. Copy Makefile.qphix to the corresponding application directory. In this case, copy it to the “ks_imp_rhmc” application directory and rename it Makefile
  4. Make the following changes to the Makefile:
    • On line #17 - Add/Uncomment the appropriate ARCH variable
      • For example, ARCH = knl (compile with Intel AVX-512 for Intel® Xeon Phi™ Processor)
      • For example, ARCH = skx (compile with Intel AVX-512 for Intel® Xeon® Gold Processor)
    • On line #28 - Change the MPP variable to “true” if you want MPI
    • On line #34 - Pick the PRECISION you want
      • 1 = Single, 2 = Double. We use Double for our runs
    • Starting at line #37 - The compiler is set up; this should just work if the directions above were followed. If not, customize starting at line #40
    • On line #124 - The Intel compiler setup starts
      • Based on ARCH, it will use the appropriate flags
    • On line #407 - QPhiX customizations start
      • On line #413 - Set QPHIX_HOME to the correct QPhiX path (the path to the milc-qphix directory)
      • The appropriate QPhiX FLAGS will be set if the above is defined correctly
  5. Build:
    cd ks_imp_rhmc #The Makefile with the above changes should be in this directory
    source /opt/intel/compiler/<version>/bin/compilervars.sh intel64 
    source /opt/intel/impi/<version>/mpi/intel64/bin/mpivars.sh
    make su3_rhmd_hisq     # Build su3_rhmd_hisq binary
    make su3_rhmc_hisq     # Build su3_rhmc_hisq binary

Compile the above binaries for both Intel® Xeon Phi™ and Intel® Xeon® Gold processors (edit the Makefile accordingly); a scripted sketch of these edits follows.
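
The Makefile edits in step 4 can also be scripted. The sketch below targets an Intel® Xeon® Gold build with MPI and double precision; the sed patterns are assumptions about how these variables appear in Makefile.qphix, and the QPHIX_HOME value is a placeholder, so verify both against your copy before running:

cd ks_imp_rhmc
cp <path-to>/Makefile.qphix Makefile                          # step 3: copy and rename
sed -i 's/^#* *ARCH *=.*/ARCH = skx/' Makefile                # step 4: ARCH (use knl for Intel Xeon Phi)
sed -i 's/^#* *MPP *=.*/MPP = true/' Makefile                 # enable MPI
sed -i 's/^#* *PRECISION *=.*/PRECISION = 2/' Makefile        # double precision
sed -i 's|^#* *QPHIX_HOME *=.*|QPHIX_HOME = <path-to>/mbench|' Makefile   # path to milc-qphix
# Then source the compiler and MPI environments and build, as in step 5.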

Run Directions

Input Files

There are two required input files: params.rest and rat.m013m065m838.

The file rat.m013m065m838 defines the residues and poles of the rational functions needed in the calculation. The file params.rest sets all the run-time parameters, including the lattice size, the length of the calculation (number of trajectories), and the precision of the various conjugate-gradient solutions.

In addition, a params.<lattice-size> file with the required lattice size will be created at runtime. This file essentially contains the lattice size (nx * ny * nz * nt) to run, with params.rest appended to it.

The Lattice Sizes

The size of the four-dimensional space-time lattice is controlled by the “nx, ny, nz, nt” parameters.

As an example, consider a problem of (nx x ny x nz x nt) = 32 x 32 x 32 x 64 running on 64 MPI ranks. To weak-scale this problem as the rank count grows, the user doubles nt first, then nz, then ny, then nx, and so on, so that the dimensions are scaled in a round-robin fashion.

This is illustrated in the table below. The original problem size is 32 x 32 x 32 x 64; to keep the elements per rank constant (weak scaling) at a 128-rank count, first multiply nt by 2 (32 x 32 x 32 x 128). Similarly, for 512 ranks, multiply nt, nz, and ny each by 2 from the original problem size to keep the same elements per rank. A scripted sketch of this round-robin doubling follows the table.

Ranks            64          128         256         512
nx               32          32          32          32
ny               32          32          32          64
nz               32          32          64          64
nt               64          128         128         128
Total Elements   2097152     4194304     8388608     16777216
Multiplier       1           2           4           8
Elements/Rank    32768       32768       32768       32768

Table. Weak scaling of lattice sizes
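
The round-robin doubling can be scripted; below is a small bash sketch (illustrative only, not part of MILC) that starts from the 32 x 32 x 32 x 64 base problem on 64 ranks and prints the weak-scaled lattice for a target rank count:

ranks=512                      # target rank count
nx=32; ny=32; nz=32; nt=64     # base lattice for 64 ranks
r=64
dims=(nt nz ny nx)             # round-robin doubling order
i=0
while [ "$r" -lt "$ranks" ]; do
    d=${dims[$((i % 4))]}
    eval "$d=\$(( $d * 2 ))"   # double the next dimension in the cycle
    r=$((r * 2)); i=$((i + 1))
done
echo "Lattice for $ranks ranks: $nx x $ny x $nz x $nt"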

Running with MPI x OpenMP*

The calculation takes place on a four-dimensional hypercubic lattice representing three spatial dimensions and one time dimension. The quark fields have values on each of the lattice points, and the gluon field has values on each of the links connecting nearest-neighbor lattice sites.

The lattice is divided into equal sub-volumes, one per MPI rank. The MPI ranks can be thought of as being organized into a four-dimensional grid of ranks. It is possible to control the grid dimensions with the params.rest file. Of course, the grid dimensions must be integer factors of the lattice coordinate dimensions.
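
For example, a quick shell check (an illustrative sketch, not a MILC utility) confirms that a candidate four-dimensional rank grid evenly divides the lattice and reports the per-rank sub-volume:

nx=32; ny=32; nz=32; nt=64     # lattice dimensions
gx=1;  gy=2;  gz=2;  gt=2      # candidate rank grid (1 x 2 x 2 x 2 = 8 ranks)
for pair in "$nx $gx" "$ny $gy" "$nz $gz" "$nt $gt"; do
    set -- $pair
    [ $(( $1 % $2 )) -eq 0 ] || { echo "grid does not divide lattice"; exit 1; }
done
echo "Sub-volume per rank: $((nx/gx)) x $((ny/gy)) x $((nz/gz)) x $((nt/gt))"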

Each MPI rank executes the same code. The calculation requires frequent exchanges of quark and gluon values between MPI ranks that hold neighboring lattice sites. Within a single MPI rank, the site-by-site calculation is threaded using OpenMP directives, which have been inserted throughout the code. The most time-consuming part of production calculations is the conjugate gradient (CG) solver. In the QPhiX version of the CG solver, the data layout and the thread-level calculation are further organized to take advantage of the SIMD lanes of Intel® Xeon® and Intel® Xeon Phi™ processors.

Running the Test-cases

  1. Create a “run” directory in the top-level directory and add the input files obtained from above
  2. cd run

    Note: Run the appropriate binary for each architecture.

  3. Create the lattice volume:
    cat << EOF > params.${nx}x${ny}x${nz}x${nt}
    prompt 0
    nx $nx
    ny $ny
    nz $nz
    nt $nt
    EOF
    cat params.rest >> params.${nx}x${ny}x${nz}x${nt}
    

    For this performance recipe, we evaluate the single node performance with the following weak scaled lattice volume:

    Single Node (nx * ny * nz * nt): 24 x 24 x 24 x 24

  4. Run MILC (source the latest Intel compilers and Intel® MPI Library; Intel® Parallel Studio 2018 or later is recommended)

    Single node Intel® Xeon® Gold 6148:

    mpiexec.hydra -n 8 -env OMP_NUM_THREADS 5 -env KMP_AFFINITY 'granularity=fine,scatter,verbose' <path-to>/ks_imp_rhmc/su3_rhmd_hisq.skx < params.24x24x24x24

    Single node Intel® Xeon Phi™ 7250:

    mpiexec.hydra -n 1 -env OMP_NUM_THREADS 64 -env KMP_AFFINITY 'granularity=fine,scatter,verbose' numactl -p 1 <path-to>/ks_imp_rhmc/su3_rhmd_hisq.knl < params.24x24x24x24
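
    On the Intel® Xeon Phi™ processor, numactl -p 1 prefers allocations from MCDRAM, which is exposed as a separate NUMA node (typically node 1) in flat mode. Before launching, the node layout can be confirmed with a quick check (the exact numbering may differ on your system):

    numactl -H     # lists NUMA nodes; in flat/quadrant mode the 16 GB MCDRAM appears as its own node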

Performance Results and Optimizations

The output below shows the performance of the CG solver.

The performance chart below shows the relative speedup, based on CG GFLOPS/sec, across the 2S Intel® Xeon® Gold processor, the 2S Intel® Xeon® processor E5-2697 v4, and the Intel® Xeon Phi™ processor.

The optimizations in the QPhiX library include data layout changes targeting vectorization and the generation of packed, aligned loads/stores; cache blocking; load balancing; and improved code generation for each architecture (Intel® Xeon® processor, Intel® Xeon Phi™ processor), with corresponding intrinsics where necessary. See the References and Resources section for details.

Testing Platform Configurations

The following hardware was used for the above recipe and performance testing.

Processor                     Intel® Xeon® Processor E5-2697 v4   Intel® Xeon Phi™ Processor 7250    Intel® Xeon® Gold 6148 Processor
Sockets / TDP                 2S / 290W                           1S / 215W                          2S / 150W
Frequency / Cores / Threads   2.3 GHz / 36 / 72                   1.4 GHz / 68 / 272                 2.4 GHz / 40 / 80
DDR4                          8x16 GB 2400 MHz                    6x16 GB 2133 MHz                   12x16 GB 2666 MHz (192 GB)
MCDRAM                        N/A                                 16 GB Flat                         N/A
Cluster/Snoop Mode            Home                                Quadrant/Flat                      Home
Turbo                         On                                  On                                 On
BIOS                          GRRFSDP1.86B0271.R00.1510301446     GVPRCRB1.86B.0010.R02.1606082342   86B.01.00.0412
Operating System              Red Hat Enterprise Linux* 6.7       Red Hat Enterprise Linux 6.7       Red Hat Enterprise Linux 7.3
                              (3.10.0-229.20.1.el6.x86_64)        (3.10.0-229.20.1)                  (3.10.0-514.el7.x86_64)

MILC Build Configurations

The following configurations were used for the above recipe and performance testing.

MILC Version Master version as of December 2017
Intel® Compiler Version 2018.1.163
Intel® MPI Library Version 2018.1.163
MILC Makefiles used Makefile.qphix, Makefile_qphixlib, Makefile

References and Resources

  1. MILC Staggered Conjugate Gradient Performance on Intel® Xeon Phi™ Processor - https://anl.app.box.com/v/IXPUG2016-presentation-10
  2. Intel® Xeon Phi™ Processor
