How to Build a Linux* High-Performance Computing Cluster with Itanium® 2-Based Systems


Build a high-performance computing (HPC) cluster from machines based on Intel® Itanium® 2 processors running Linux. HPC clusters are growing in both number and size, thanks to their popularity as a cost-effective HPC solution and to the ease with which they can be built using available HPC clustering middleware, such as SCYLD* from Scyld* Computing Solutions and OSCAR* from the Open Cluster Group.


Download the necessary components from an open-source Web site and compile them, or use one of the available cluster packages, such as OSCAR (Open Source Cluster Application Resources) or SCYLD. This article focuses on building a 'self-made' Linux HPC cluster; that is, one built without using one of the freely downloadable or commercial packages. For more information on free and commercial packages, see /en-us/forums/intel-clusters-and-hpc-technology/.

If you are building the cluster from scratch, you can build it with the OS installed on each node or configure each node to load the operating system through the network from a master node at boot time using PXE. You can even build a diskless cluster, where each node runs purely out of memory. The following information focuses on building clusters with the OS on each node.

To build a Linux cluster from scratch, you will need the following:

  • Your Linux installation CDs.
  • Other parallel components, such as MPICH and PBS, preferably burned onto a CD.
  • A notebook to document your install process, problems you run into, and your resolutions.


Using the installation CD, build the master node first. Depending on how you planned your cluster, the master node might also function as the gateway and DHCP server. Build the master node with all the files the cluster needs to function, including the MPI libraries, batch/queue software (if required), SSH, and so on. Once the master node is configured, it can give the compute nodes access to the Internet, if needed, and serve them the supporting files not found on the OS CD as they are built.
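
As a rough sketch of that role, the commands below export a directory of supporting files over NFS and enable simple NAT so compute nodes can reach the Internet through the master. They assume a Red Hat-style master node with its external interface on eth0 and a private cluster network of 192.168.1.0/24; the paths and addresses are placeholders.

    # Export a directory of supporting files (MPI libraries, PBS, SSH packages, etc.)
    # to the private cluster network over NFS.
    mkdir -p /export/cluster-sw
    echo '/export/cluster-sw 192.168.1.0/255.255.255.0(ro,no_root_squash)' >> /etc/exports
    exportfs -a
    service nfs restart

    # Let compute nodes reach the Internet through the master (simple NAT).
    echo 1 > /proc/sys/net/ipv4/ip_forward
    iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE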

The most tedious way to build the compute nodes is to run the installation CD on each node, add the supporting files, and configure each node for the cluster, including its network information. If you use the installation scripts or wizard provided with your Linux CD, you will make the same selections for each node. Considering that a single node can take from 30 minutes to two hours to install, a 128-node cluster takes at least eight days just to install (at two hours per node, that's 32 days), and someone must attend each installation. Fortunately, a couple of tools can help when building from scratch: the Kickstart file (with Red Hat Linux*) and SystemImager*.

The Kickstart file is a simple text file that automates much of the Red Hat install process. In it, you can specify most of the configuration options, such as the keyboard, language, network, and disk partitioning. Red Hat provides a Kickstart configurator (wizard) to build a Kickstart file for your installation, or you can copy the one on the CD and modify it as necessary. The Kickstart file contains three sections: commands, packages, and scripts. Each is described below, and a minimal example follows the list.

  • Commands: The commands section lists the installation options, including disk partitioning, network configuration (hostname, host address, and gateway), installation method, etc.
  • Packages: The packages section (starting with the %packages command) lists the packages you want installed. You can specify a group of packages (called a component) or a single package. You can create your own component and include it in the list. For example, you can create a component of the supporting libraries and applications that the cluster needs and list that component in the packages section. However, you will then need to create a new ISO image of the install CD that includes the component you define.
  • Scripts: The scripts section allows you to perform post-installation functions, such as adding files not provided on the installation CD or customizing the installation configuration. The %post command starts the scripts section, and any files that are to be loaded must be available over the network, typically from the master node or through the Internet.
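
The following is a minimal sketch of such a file for one hypothetical compute node (node001 at 192.168.1.101, installing over NFS from a master at 192.168.1.1). The partition sizes, password, and paths are placeholders, the file is abbreviated (a real one also needs directives such as bootloader and authconfig), and the exact directives vary by Red Hat release.

    # Commands section: installation options
    install
    nfs --server 192.168.1.1 --dir /export/redhat
    lang en_US
    keyboard us
    rootpw changeme
    network --bootproto static --ip 192.168.1.101 --netmask 255.255.255.0 --gateway 192.168.1.1 --hostname node001
    clearpart --all
    part / --size 4096
    part swap --size 1024
    reboot

    # Packages section: groups (components) and individual packages
    %packages
    @ Development Tools
    openssh-server

    # Scripts section: post-installation customization
    %post
    # Pull supporting files (MPICH, PBS, etc.) from the master node's NFS export
    mkdir -p /opt/cluster
    mount -o ro 192.168.1.1:/export/cluster-sw /mnt
    cp -a /mnt/* /opt/cluster/
    umount /mnt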


You have three options for installing a node based on the Kickstart file:

  • Create a Kickstart file for each node in the cluster (including the network address, host name, etc.) with the installation method set to CD, and then save it to a floppy. (You need a unique Kickstart floppy for each node.) Use the floppy to boot the node, and the Kickstart file will access the installation CD to install the OS and configure the node based on the information in the Kickstart file.
  • Place the installation image and all supporting files on the master node. Create a Kickstart file for each node in the cluster (including the network address, host name, etc.) with the installation method set to network, and then save it to a floppy. (You need a unique Kickstart floppy for each node.) Use the Kickstart floppy to boot the node. The node will retrieve the installation image from the location specified in the Kickstart file.
  • Set up the master node as a PXELinux* server, BOOTP/DHCP server, and NFS server with the installation image and all supporting files on it. This BOOTP/DHCP server must contain the configuration information (Kickstart files) for all the nodes in the cluster. (The compute node must support PXE network boot.) When the compute node boots, the DHCP server provides the networking information to the node and the location of the Kickstart file. Thus, each compute node can boot entirely from the network.
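
The third option might be wired up roughly as shown below. The MAC addresses, IP addresses, and paths are placeholders, and the exact syntax depends on your ISC DHCP and PXELINUX versions: the DHCP server hands each node a fixed address and points it at pxelinux.0, and the PXELINUX configuration passes the installer kernel a ks= option naming a Kickstart file on the master's NFS export.

    # /etc/dhcpd.conf on the master node (fragment)
    subnet 192.168.1.0 netmask 255.255.255.0 {
        option routers 192.168.1.1;
        next-server 192.168.1.1;        # TFTP server holding pxelinux.0
        filename "pxelinux.0";

        host node001 {
            hardware ethernet 00:11:22:33:44:55;   # placeholder MAC address
            fixed-address 192.168.1.101;
        }
        # ...one host entry per compute node...
    }

    # /tftpboot/pxelinux.cfg/default (fragment); per-node Kickstart files are
    # usually selected with per-node pxelinux.cfg files named after each
    # node's IP address in hexadecimal
    default ks
    label ks
        kernel vmlinuz
        append initrd=initrd.img ks=nfs:192.168.1.1:/export/kickstart/node001-ks.cfg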


Using a DHCP server for installation reduces the administrative effort of building out the cluster, but in a large cluster a failed node can be hard to locate physically if the DHCP server assigned its network address dynamically.

Depending on how your cluster will be used, you might need to install the Intel® compilers, Intel® Math Kernel Library, and other Intel® optimization tools on the master node or on all nodes. This is especially important if the cluster is used to develop parallel code on your Intel® Architecture-based nodes.

SystemImager* automatically configures compute nodes on a cluster based on a sample compute node – the golden client. With SystemImager, you do not need to write additional installation scripts to add the supporting files to the node. The golden client becomes the model for all other nodes in the cluster; SystemImager copies the image from the golden client onto the master node, and then builds and configures all nodes using that image.

Therefore, the golden client needs to be completely and properly configured before launching SystemImager. SystemImager also enables you to maintain the entire cluster by changing only the golden client and then updating the rest of the compute nodes based on the changes to the sample node.

Once the golden client is prepared and SystemImager is installed on the master node, SystemImager commands let you copy the golden client's image to the master node, specify the partitions and mount points, and assign hostnames and IP addresses. Then, you have to decide how each node will boot: from a floppy or CD-ROM (using the makeautoinstallfloppy or makeautoinstallcd commands) or over the network. If the nodes boot over the network, the master node must run the PXELinux server and a DHCP server, and each node must support PXE boot. SystemImager provides tools to configure a file that maps IP addresses to hostnames. With SystemImager, you can configure dynamic or static IP addresses; static IP addresses make it easier to find failed nodes. Consult the SystemImager online documentation for details on each command.
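
A hedged sketch of that sequence follows, using SystemImager 2.x/3.x-era command names (newer releases prefix them with si_) and a hypothetical golden client named node-golden; check your version's documentation for the exact names and options.

    # On the golden client: prepare the node so its image can be retrieved
    prepareclient

    # On the master (image server): pull the golden image, then define the
    # compute nodes (hostnames, IP range, and which image each node receives)
    getimage -golden-client node-golden -image compute-image
    addclients

    # Finally, create the auto-install boot media (the makeautoinstall* commands
    # mentioned above) or configure network booting (DHCP plus a PXE server);
    # the exact command names vary by SystemImager release.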


For more, see High Performance Computing Clusters with Intel® Architecture, Part 2.

For more complete information about compiler optimizations, see our Optimization Notice.