HPL Application Note

Step 1 - Overview

This guide is intended to help current HPL users get better benchmark performance by utilizing BLAS from the Intel® Math Kernel Library (Intel® MKL).

HPL (High Performance LINPACK) is an industry-standard benchmark for HPC: a software package that solves a (random) dense linear system in double-precision (64-bit) arithmetic on distributed-memory computers.

This note explains three ways to get HPL running:

1.     Using the Intel® optimized HPL binary directly (mp_linpack)
2.     Building and using HPL from the source provided in the Intel® MKL package
3.     Building and using open-source HPL by linking with Intel® MKL

Version Information

This application note was created to help users who benchmark clusters with HPL make use of the latest versions of Intel® MKL on Linux platforms. Specifically, we address Intel® MKL version 11.3 and Intel® MPI 5.1.1 from Intel® Parallel Studio XE 2016.

Step 2 - Downloading HPL Source Code

Download the open-source HPL package (hpl-2.1.tar.gz from netlib).

If you have installed Intel® MKL, an HPL package is included and can be found at

<MKL installation dir>/benchmarks/mp_linpack


1.     BLAS

BLAS (Basic Linear Algebra Subprograms) DGEMM is the core high performance routine exercised by HPL.  Intel® MKL BLAS is highly optimized for maximum performance on Intel® Xeon® processor-based systems.

BLAS from Intel® MKL can be obtained as part of Intel® Parallel Studio XE.

FREE Intel Optimized LINPACK Benchmark packages

The Intel MKL team provides FREE Intel® Optimized LINPACK Benchmark packages, which are binary implementations of the LINPACK benchmarks and include Intel® MKL BLAS.  Not only are these SMP and distributed-memory packages free, they are also much easier to use than HPL (no compilation needed, just run the binaries).  We highly recommend that HPL users consider switching to the free Intel® Optimized LINPACK Benchmark packages.

2.     MPI

Intel® MPI is also available as part of the Intel® Parallel Studio XE Cluster Edition.

You may choose to run the pre-built binaries from the FREE Intel® Optimized LINPACK Benchmark packages, or build HPL by following the steps below and run it.  Hybrid (MPI + OpenMP) parallel versions of the HPL binaries are also included in the package.

If you are building the HPL source that ships with Intel® MKL, skip Steps 3 and 4 below.  Two makefiles, Make.ia32 and Make.intel64, are provided for the IA-32 and Intel64 platforms, and they are set up so that you can build either the serial or the hybrid version of HPL.

If you downloaded hpl-2.1.tar.gz (from netlib), please follow the instructions below.

Step 3 - Configuration 

1) Extract the tar file

Use the following commands to extract the downloaded hpl-2.1.tar.gz file:

$gunzip hpl-2.1.tar.gz
$tar -xvf hpl-2.1.tar

This creates an hpl-2.1 directory.  Rename this directory to hpl and copy it to $HOME, as shown below.
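
For example, assuming the archive was extracted in your current working directory (the paths here are illustrative):

  $mv hpl-2.1 hpl
  $cp -r hpl $HOME/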

2) Makefile Creation

Create a file Make.<arch> in the top-level directory. You can re-use one of the templates contained in the setup directory ($HOME/hpl/setup/); here we use Make.Linux_PII_CBLAS. This file essentially lists the compilers and libraries, with their paths, to be used.

Copy this file.

  $cp hpl/setup/Make.Linux_PII_CBLAS $HOME/hpl/

Rename this file

  $mv Make.Linux_PII_CBLAS Make.intel64

This user note explains how to build HPL for the Intel64 platform.

Make sure that the Intel® C++ and Fortran compilers are installed and on your PATH, and that LD_LIBRARY_PATH includes your compiler (C++ and Fortran), MPI, and MKL library directories (see the example below).
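
For example, with a default Intel® Parallel Studio XE 2016 installation under /opt/intel (the install prefix and the Intel® MPI version directory are assumptions; adjust them for your system), the environment can be set by sourcing the scripts shipped with the tools:

  $source /opt/intel/compilers_and_libraries_2016/linux/bin/compilervars.sh intel64
  $source /opt/intel/compilers_and_libraries_2016/linux/mkl/bin/mklvars.sh intel64
  $source /opt/intel/impi/<version>/intel64/bin/mpivars.sh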

Step 4 - Modifying Makefile

The steps below explain how to edit the makefile used to build HPL.

Edit Make.intel64

1) Change the value of ARCH to intel64 (or whichever value you chose for <arch>)

# ----------------------------------------------------------------------
# - Platform identifier ------------------------------------------------ 
# ----------------------------------------------------------------------
 ARCH = intel64 

2) Point to your MPI library

 MPdir = /opt/intel/impi/
 MPinc = -I$(MPdir)/include64
 MPlib = $(MPdir)/intel64/lib/libmpi_mt.so

Here, we selected the multi-threaded dynamic version of the Intel® MPI library.

If you are using MPICH2 instead of Intel® MPI, the library would be libmpich.a rather than the Intel® MPI library shown above.

It is advisable to use Intel® MPI for better performance.

3) Point to the math library, MKL

LAdir = /opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
LAinc = -I/opt/intel/compilers_and_libraries_2016/linux/mkl/include
LAlib = -mkl=cluster
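
If you prefer not to use (or cannot use) the -mkl=cluster compiler option, LAlib can instead point at the Intel® MKL BLAS libraries explicitly. The line below is only a sketch assuming the static, threaded, LP64 interface of MKL 11.3; verify the exact library list with the MKL Link Line Advisor for your configuration.

LAlib = -Wl,--start-group $(LAdir)/libmkl_intel_lp64.a $(LAdir)/libmkl_intel_thread.a $(LAdir)/libmkl_core.a -Wl,--end-group -liomp5 -lpthread -lm -ldl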

4) Change the compiler-related information to use the Intel Compiler

CC           = mpiicc
CCFLAGS      = -openmp -xHost -fomit-frame-pointer -O3 -funroll-loops $(HPL_DEFS)

LINKER       = mpiicc

Here, we are linking with the MPI-parallel (cluster) Intel® MKL library via the -mkl=cluster option.

Step 5 - Building HPL

To build the executable, use "make arch=<arch>". This creates an executable called xhpl in the bin/<arch> directory.

In our example, execute

 $make arch=intel64

This creates the executable file bin/intel64/xhpl. It also creates an HPL configuration file, HPL.dat, in the same directory.


Step 6 - Running HPL

Case 1: If you have downloaded the Intel® Optimized LINPACK package

Extract the package and run the script for your platform

For example, to run hybrid HPL on Intel64 Xeon machines:
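
The directory layout and script name below are assumptions (they vary by package version); the exact names are listed in the package's lpk_notes_lin.htm:

  $cd <package dir>/benchmarks/mp_linpack     # layout is an assumption; check your package
  $./runme_hybrid_intel64                     # hypothetical script name for the hybrid Intel64 run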


Please refer to the lpk_notes_lin.htm file provided with the package for more details.

Cases 2 & 3: If you have built HPL from the Intel® MKL package or from the open-source HPL

Go to the directory where the executable is built.

For example, for a test run of HPL, use the following commands:

  $cd bin/<arch> 
  $mpiexec.hydra -np 4 ./xhpl

Create a machines file with node names.


For example, a machines file contains node names, one per line.
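
A minimal machines file might look like this (the host names are placeholders):

  node01
  node02
  node03
  node04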

Run with the machines file. For example, assuming each node has 4 cores and you run one rank per core across 128 nodes (512 MPI ranks in total):

$mpiexec.hydra -np 512 -machinefile machines ./xhpl
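
If you built the hybrid (MPI + OpenMP) version, you typically run fewer MPI ranks and let each rank spawn OpenMP threads. A sketch with illustrative rank and thread counts (one rank per node, 4 threads per rank, matching the 4-core nodes above):

  $mpiexec.hydra -np 128 -machinefile machines -genv OMP_NUM_THREADS 4 ./xhpl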

Please refer to the MPI documentation for the various other arguments you can use.


Most of the performance parameters can be tuned by modifying the input file bin/<arch>/HPL.dat. See the file TUNING in the top-level directory for more information.

Note: If you use the Intel® Optimized LINPACK package, you have to change the input files provided with that package instead, e.g. HPL_hybrid.dat.  You can refer to the extended help file xhelp.lpk for more information on modifying the input file.

The main parameters to consider while running HPL:

Problem size (N): For the best performance, the problem size should be the largest that fits in memory. For example, if you have 10 nodes with 1 GB of RAM each, the total memory is 10 GB, i.e. nearly 1342 M double-precision elements; the square root of that number is 36635. You need to leave some memory for the operating system and other processes. As a rule of thumb, a problem size that fills about 80% of total memory is a good starting point (in this case, roughly 33000; see the calculation sketch below). If the problem size is too large, memory is swapped out and performance degrades.
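
A quick way to compute this starting N (a sketch; the 10 GB total and the 80% fraction are just the example values above):

  $awk 'BEGIN { mem = 10 * 2^30;                  # total cluster memory in bytes (10 GB in this example)
                print int(sqrt(0.8 * mem / 8)) }'  # 80% of memory, 8 bytes per double; prints 32768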

Block size (NB): HPL uses the block size NB for both the data distribution and the computational granularity. A very small NB limits computational performance because little data reuse occurs, and the number of messages also increases. "Good" block sizes are almost always in the [32 .. 256] interval and depend on the cache size. The following block sizes have been found to work well: 80-216 for IA-32; 128-192 for IA-64 with 3 MB cache; 400 for IA-64 with 4 MB cache; and 130 for Woodcrest processors.

Process grid ratio (P x Q): This depends on the physical interconnection network. P and Q should be approximately equal, with Q slightly larger than P. For example, for a 480-processor cluster, a 20 x 24 grid is a good ratio.
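
For reference, these parameters map to the following lines of HPL.dat. This is only an excerpt using example values from this section (N = 33000, a block size of 192, and a 20 x 24 grid); the remaining lines of the file are left unchanged:

  1            # of problems sizes (N)
  33000        Ns
  1            # of NBs
  192          NBs
  1            # of process grids (P x Q)
  20           Ps
  24           Qs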

Tips: You can also try changing the node order in the machines file to check whether performance improves. Choose all of the above parameters by trial and error to get the best performance.

You can also use a simple PHP web tool: enter your system specs and it will suggest optimal input parameters for your HPL input file before you run the benchmark on the cluster. The tool can be accessed via the URL below on SourceForge:


Appendix A - Known Issues and Limitations

If you are building HPL from source rather than using the binary from the Intel® Optimized LINPACK package, make sure that your MPI is running properly, that the Fortran, C++, MPI, and MKL libraries are in LD_LIBRARY_PATH, and that the Fortran, C++, and MPI binaries are in PATH.

Appendix B - References

High Performance Computing Software and Applications

