HPL application note

Step 1 - Overview

This guide is intended to help current HPL users get better benchmark performance by utilizing BLAS from the Intel® Math Kernel Library (Intel® MKL).

HPL (High Performance LINPACK), an industry-standard benchmark for HPC, is a software package that solves a (random) dense linear system in double precision (64-bit) arithmetic on distributed-memory computers.

This note explains three ways to get HPL running:

1.     Using the Intel® Optimized HPL binary directly (mp_linpack)
2.     Building and running HPL from the source provided in the Intel® MKL package
3.     Building and running the open source HPL linked with Intel® MKL

Version Information

This application note was created to help users who benchmark clusters using HPL to make use of the latest versions of Intel® MKL on Linux platforms. Specifically, we address Intel® MKL 11.3 and Intel® MPI 5.1.1 from Intel® Parallel Studio XE 2016.

Step 2 - Downloading HPL Source Code

Download the open source HPL (hpl-2.1.tar.gz) from netlib.

Alternatively, if you have installed Intel® MKL, an HPL implementation is included and can be found at

<MKL installation dir>/benchmarks/mp_linpack


1.     BLAS

BLAS (Basic Linear Algebra Subprograms) DGEMM is the core high performance routine exercised by HPL.  Intel® MKL BLAS is highly optimized for maximum performance on Intel® Xeon® processor-based systems.

BLAS from Intel® MKL can be obtained from Intel® Parallel Studio XE.

FREE Intel Optimized LINPACK Benchmark packages

The Intel MKL team provides FREE Intel® Optimized LINPACK Benchmark packages, which are binary implementations of the LINPACK benchmarks built with Intel® MKL BLAS. Not only are these SMP and distributed-memory packages free, they are also much easier to use than HPL (no compilation needed, just run the binaries). We highly recommend that HPL users consider switching to the free Intel® Optimized LINPACK Benchmark packages.

2.     MPI

Intel® MPI is also available as part of the Intel® Parallel Studio XE Cluster Edition.

You may choose to run the pre-built binaries from the FREE Intel® Optimized LINPACK Benchmark packages, or build HPL by following the steps below. Hybrid (MPI + OpenMP) parallel versions of the HPL binaries are also included in the package.

If you are building the HPL source that ships with Intel® MKL, please skip Steps 3 & 4 below. Two makefiles, Make.ia32 and Make.intel64, are provided for the IA-32 and Intel64 platforms, and they are set up so that you can build either the serial or the hybrid version of HPL.

If you downloaded hpl-2.1.tar.gz (from netlib), please follow the instructions below.

Step 3 - Configuration 

1) Extract the tar file

Use the following commands to extract the downloaded hpl-2.1.tar.gz file:

$gunzip hpl-2.1.tar.gz
$tar -xvf hpl-2.1.tar

This will create an hpl-2.1 directory. Rename this directory to hpl and copy it to $HOME.

2) Makefile Creation

Create a file Make.<arch> in the top-level directory. For this purpose, you may want to reuse one contained in the setup directory ($HOME/hpl/setup/); let us use Make.Linux_PII_CBLAS. This file specifies the compilers and libraries, with their paths, to be used for the build.

Copy this file.

  $cp hpl/setup/Make.Linux_PII_CBLAS $HOME/hpl/

Rename this file

  $mv Make.Linux_PII_CBLAS Make.intel64

This user note explains how to build HPL for the Intel64 platform.

Make sure that the Intel® C++ and Fortran compilers are installed and in your PATH, and set LD_LIBRARY_PATH to include the compiler (C++ and Fortran), MPI, and MKL library directories.
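One convenient way to do this is to source the environment scripts shipped with Intel® Parallel Studio XE; the install paths below are the defaults for the 2016 release and may differ on your system:

```shell
# Default Parallel Studio XE 2016 install locations -- adjust as needed.
source /opt/intel/compilers_and_libraries_2016/linux/bin/compilervars.sh intel64
source /opt/intel/compilers_and_libraries_2016/linux/mkl/bin/mklvars.sh intel64
source /opt/intel/impi/intel64/bin/mpivars.sh
```

These scripts prepend the appropriate directories to PATH and LD_LIBRARY_PATH for you.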

Step 4 - Modifying Makefile

The steps below explain how to modify the makefile for building HPL.

Edit Make.intel64

1) Change the value of ARCH to intel64 (or whatever value you chose for <arch>)

# ----------------------------------------------------------------------
# - Platform identifier ------------------------------------------------ 
# ----------------------------------------------------------------------
 ARCH = intel64 

2) Point to your MPI library

 MPdir = /opt/intel/impi/
 MPinc = -I$(MPdir)/include64
 MPlib = $(MPdir)/intel64/lib/libmpi_mt.so

Here, we selected the multi-threaded dynamic version of the Intel® MPI library.

If you are using MPICH2, the library would be libmpich.a instead.

For better performance, it is advisable to use Intel® MPI.

3) Point to the math library, MKL

LAdir = /opt/intel/compilers_and_libraries_2016/linux/mkl/lib/intel64
LAinc = -I/opt/intel/compilers_and_libraries_2016/linux/mkl/include
LAlib = -mkl=cluster
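Alternatively, instead of -mkl=cluster you can list the MKL libraries explicitly. An illustrative static-link line follows (library names vary slightly between MKL versions, and libmkl_intel_thread.a additionally needs the OpenMP runtime, -liomp5):

```
LAlib = $(LAdir)/libmkl_intel_lp64.a $(LAdir)/libmkl_intel_thread.a \
        $(LAdir)/libmkl_core.a -liomp5 -lpthread -lm
```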

4) Change the compiler related information to use Intel Compiler

CC           = mpiicc
CCFLAGS      = -openmp -xHost -fomit-frame-pointer -O3 -funroll-loops $(HPL_DEFS)

LINKER       = mpiicc

Here, the -mkl=cluster option links with the MPI-parallel (cluster) version of the MKL library.

Step 5 - Building HPL

To build the executable use "make arch=<arch>". This should create an executable in the bin/<arch> directory called xhpl.

In our example, execute

 $make arch=intel64

This creates the executable file bin/intel64/xhpl. It also creates an HPL configuration file, HPL.dat.


Step 6 - Running HPL

Case 1: If you have downloaded the Intel® Optimized LINPACK package

Extract the package and run the script for your platform, e.g. the hybrid HPL script on Intel64 Xeon machines.

Please refer to the lpk_notes_lin.htm file provided with the package for more details.

Cases 2 & 3: If you have built HPL from the Intel® MKL package or from the open source HPL

Go to the directory where the executable was built.

For example, for a test run of HPL, use the following commands:

  $cd bin/<arch> 
  $mpiexec.hydra -np 4 ./xhpl

To run across multiple nodes, create a machines file containing the node names, one per line.

Then run with the machines file. Assuming each node has 4 cores and you want 512 MPI ranks:

$mpiexec.hydra -np 512 -machinefile machines ./xhpl
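The machines file is just a list of hostnames. As an illustration (the node names node001..node128 are hypothetical), a 128-node file for the 512-rank run above could be generated with:

```shell
# Generate a hypothetical machines file: 128 nodes named node001..node128.
# With 4 cores per node this supports the 512-rank run (4 ranks per node).
for i in $(seq 1 128); do
    printf 'node%03d\n' "$i"
done > machines

head -3 machines   # prints node001, node002, node003
```

Replace the generated names with the actual hostnames of your cluster nodes.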

Please refer to the MPI documentation for the various other arguments you can use.


Most of the performance parameters can be tuned by modifying the input file bin/HPL.dat. See the file TUNING in the top-level directory for more information.
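For reference, the first lines of a typical HPL.dat look like the sketch below. The values shown are illustrative, drawn from the examples discussed in this note (N=33000, NB=192, P×Q=20×24); the line order is fixed by HPL:

```
HPLinpack benchmark input file
Innovative Computing Laboratory, University of Tennessee
HPL.out      output file name (if any)
6            device out (6=stdout,7=stderr,file)
1            # of problems sizes (N)
33000        Ns
1            # of NBs
192          NBs
0            PMAP process mapping (0=Row-,1=Column-major)
1            # of process grids (P x Q)
20           Ps
24           Qs
16.0         threshold
```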

Note: If you use the Intel® Optimized LINPACK package, you have to change the input files provided with that package instead, e.g. HPL_hybrid.dat. You can refer to the extended help file xhelp.lpk for more information on modifying the input file.

The main parameters you need to consider while running HPL are:

Problem size (N): For best performance, the problem size should be the largest that fits in memory. For example, if you have 10 nodes with 1 GB RAM each, the total memory is 10 GB, i.e. roughly 1342 M double precision (8-byte) elements; the square root of that number is 36635. You need to leave some memory for the operating system and other processes, so as a rule of thumb, a problem size using 80% of the total memory is a good starting point (in this case, about 33000). If the problem size is too large, memory is swapped out and performance degrades.
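This rule of thumb is easy to compute. The sketch below assumes total memory summed across all nodes and, as is common practice, rounds N down to a multiple of the block size NB (128 here, purely illustrative):

```shell
# Estimate the HPL problem size N from total cluster memory.
MEM_GB=10   # total memory across all nodes (the 10-node x 1 GB example)
NB=128      # round N down to a multiple of the block size (illustrative)
awk -v gb="$MEM_GB" -v nb="$NB" 'BEGIN {
    doubles = gb * 2^30 / 8      # 8-byte elements that fit in total RAM
    n = sqrt(0.8 * doubles)      # matrix uses ~80% of memory, N = sqrt
    print int(n / nb) * nb       # prints 32768 for the values above
}'
```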

Block size (NB): HPL uses the block size NB for the data distribution as well as for the computational granularity. A very small NB limits computational performance because little data reuse occurs, and the number of messages also increases. Good block sizes are almost always in the [32 .. 256] interval and depend on the cache size. The following block sizes have been found to work well: 80-216 for IA-32; 128-192 for IA-64 with 3 MB cache; 400 for IA-64 with 4 MB cache; and 130 for Woodcrest.

Process grid ratio (P×Q): This depends on the physical interconnection network. P and Q should be approximately equal, with Q slightly larger than P. For example, for a 480-processor cluster, 20×24 is a good ratio.
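Given a total rank count, the candidate grids are just the factor pairs, so a quick way to shortlist them is to enumerate the pairs from most oblong to closest-to-square (a small sketch):

```shell
# List the P x Q factor pairs for NP MPI ranks; later lines are closer
# to square. For NP=480 the last pair printed is P=20 Q=24.
NP=480
for ((p = 1; p * p <= NP; p++)); do
    if (( NP % p == 0 )); then
        echo "P=$p Q=$((NP / p))"
    fi
done
```

The last pair printed is the one with P and Q closest to equal (with Q >= P), which is usually the best starting point.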

Tips: You can also try changing the node order in the machines file to check for performance improvement. Choose all of the above parameters by trial and error to get the best performance.

You can also use a simple PHP web tool: enter your system specs and it will suggest optimal input parameters for your HPL file before you run the benchmark on the cluster. The tool can be accessed via the URL below on SourceForge:


Appendix A - Known Issues and Limitations

If you are building HPL from source rather than using the binary from the Intel® Optimized LINPACK package, make sure that your MPI is running properly, that the Fortran, C++, MPI, and MKL libraries are in LD_LIBRARY_PATH, and that the Fortran, C++, and MPI binaries are in PATH.

Appendix B - References

High Performance Computing Software and Applications




drMikeT:

Thanks Vipin

anonymous:

I run the hybrid version of HPL on 2 nodes. Each node has 12 cores. I have 2 processes on every node and 6 threads assigned to each process.

My problem is that I see no improvement in the execution time. It's almost the same as when I was running the non-hybrid HPL version.

Am I doing something wrong?

Vipin Kumar E K (Intel):


We used Intel MPI 4.x. Most often dapl (and for sure dapl on Endeavor), though ofa could be beneficial for really large machines (and when we tried it at small scale it performed comparably to dapl). The hostfile was sorted in topological order (HPL communications are mostly with neighbours only), which in our case was also alphabetical. We have not set any non-default values.


anonymous:

With regards to your SNB Xeon E5 cluster run mentioned above, how did you tune MPI? Did you use Intel MPI 3 or 4? Did you use dapl or ofa? How did you sort your hostfile? Did you set any other non-default values?


Vipin Kumar E K (Intel):


We have added the new performance numbers for our Sandy Bridge architecture, which are published in the Top500 Nov '11 list, and the corresponding HPL.dat is attached.


Vipin Kumar E K (Intel):


There is a -I missing before /home/intel/mkl/include, which may have resulted in this error. Please fix that.

/home/intel/bin/ifort -DHPL_CALL_CBLAS -I/home/intel/hpl-2.0/include -I/home/intel/hpl-2.0/include/intel64 /home/intel/mkl/include -I/home/intel/impi/ -o /home/intel/hpl-2.0/bin/intel64/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /home/intel/hpl-2.0/lib/intel64/libhpl.a /home/intel/mkl/lib/intel64/libmkl_intel_lp64.a /home/intel/mkl/lib/intel64/libmkl_intel_thread.a /home/intel/mkl/lib/intel64/libmkl_core.a -lpthread -lm /home/intel/impi/


Vipin Kumar E K (Intel):


We have updated the results for the Intel® Xeon E5 (Sandy Bridge-EP) cluster. Please also find the HPL.dat file mentioned above in the article for your reference.


anonymous:

When I try to build with the Make.intel64 file below, I get the error shown after it.
The file:

SHELL = /bin/sh
CD = cd
CP = cp
LN_S = ln -s
MKDIR = mkdir
RM = /bin/rm -f
TOUCH = touch
ARCH = intel64
TOPdir = /home/intel/hpl-2.0
INCdir = $(TOPdir)/include
BINdir = $(TOPdir)/bin/$(ARCH)
LIBdir = $(TOPdir)/lib/$(ARCH)
HPLlib = $(LIBdir)/libhpl.a
MPdir = /home/intel/impi/
MPinc = -I$(MPdir)/include64
MPlib = $(MPdir)/lib64/libmpi_mt.a
LAdir = /home/intel/mkl/lib/intel64
LAinc = /home/intel/mkl/include
LAlib = $(LAdir)/libmkl_intel_lp64.a $(LAdir)/libmkl_intel_thread.a $(LAdir)/libmkl_core.a -lpthread -l
HPL_INCLUDES = -I$(INCdir) -I$(INCdir)/$(ARCH) $(LAinc) $(MPinc)
HPL_LIBS = $(HPLlib) $(LAlib) $(MPlib)
CC = /home/intel/bin/icc
LINKER = /home/intel/bin/ifort
RANLIB = echo

The error:

make[2]: Leaving directory `/home/intel/hpl-2.0/testing/ptimer/intel64'
( cd testing/ptest/intel64; make )
make[2]: Entering directory `/home/intel/hpl-2.0/testing/ptest/intel64'
/home/intel/bin/icc -o HPL_pddriver.o -c -DHPL_CALL_CBLAS -I/home/intel/hpl-2.0/include -I/home/intel/hpl-2.0/include/intel64 /home/intel/mkl/include -I/home/intel/impi/ ../HPL_pddriver.c
icc: warning #10147: no action performed for specified file(s)
/home/intel/bin/icc -o HPL_pdinfo.o -c -DHPL_CALL_CBLAS -I/home/intel/hpl-2.0/include -I/home/intel/hpl-2.0/include/intel64 /home/intel/mkl/include -I/home/intel/impi/ ../HPL_pdinfo.c
icc: warning #10147: no action performed for specified file(s)
/home/intel/bin/icc -o HPL_pdtest.o -c -DHPL_CALL_CBLAS -I/home/intel/hpl-2.0/include -I/home/intel/hpl-2.0/include/intel64 /home/intel/mkl/include -I/home/intel/impi/ ../HPL_pdtest.c
icc: warning #10147: no action performed for specified file(s)
/home/intel/bin/ifort -DHPL_CALL_CBLAS -I/home/intel/hpl-2.0/include -I/home/intel/hpl-2.0/include/intel64 /home/intel/mkl/include -I/home/intel/impi/ -o /home/intel/hpl-2.0/bin/intel64/xhpl HPL_pddriver.o HPL_pdinfo.o HPL_pdtest.o /home/intel/hpl-2.0/lib/intel64/libhpl.a /home/intel/mkl/lib/intel64/libmkl_intel_lp64.a /home/intel/mkl/lib/intel64/libmkl_intel_thread.a /home/intel/mkl/lib/intel64/libmkl_core.a -lpthread -lm /home/intel/impi/
ipo: warning #11010: file format not recognized for /home/intel/mkl/include
/home/intel/mkl/include: file not recognized: Is a directory
make[2]: *** [dexe.grd] Error 1
make[2]: Leaving directory `/home/intel/hpl-2.0/testing/ptest/intel64'
make[1]: *** [build_tst] Error 2
make[1]: Leaving directory `/home/intel/hpl-2.0'
make: *** [build] Error 2

drMikeT:

Hi Vipin,

could you provide us with the settings you used to achieve this performance ?

thanks --Michael

Carlos R.:

Hi, could you share with us the NB you used for the runs on X5670 chips? I have seen good results with NB around 192, but I would like to know what you think is best, since you have tested on larger systems than I have (10 x 2-socket is my largest).


