Intel® Xeon Phi™ Cluster Integration – A hands-on Introduction
Join us for a Webinar on August 12
Space is limited. Reserve your Webinar seat now.
In this 4-hour, instructor-led course, attendees will learn how to integrate Intel® Xeon Phi™ coprocessors into a cluster. Each attendee will connect via ssh to a cluster and run all necessary commands alongside the presentation.
Title: Intel® Xeon Phi™ Cluster Integration – A hands-on Introduction
Date: Tuesday, August 12, 2014
Time: 8:00 AM - 12:00 PM PDT
What will be covered:
- Discussion of compute-server prerequisites (for instance, required software, reserved IP addresses, user names, network file systems)
- Unpack the driver software package and review its components
- Basic concepts (host, host OS, host kernel, coprocessor, MPSS stack, layout of MPSS files, boot image of the uOS, ramfs of the uOS)
- Recompile host kernel packages; diagnose build output and understand errors (necessary when working with nonstandard kernels)
- Install a minimal set of MPSS rpm packages using rpm
- Create a default MPSS configuration (using "micctrl --initdefaults")
- Start up (boot) the coprocessor
- Connect via minicom to the coprocessor (this allows connecting to the Xeon Phi without first debugging network problems)
- Modify the uOS file system by overlaying an /etc/passwd file
- Create a bridged network on the host
- Configure the coprocessor for bridged networking by modifying micX.conf directly
- Reboot the card and connect via ssh
- Set up an ssh key pair; diagnose common ssh pitfalls
- Mount an NFS file system on the coprocessor
- Configure a user known in the cluster by modifying the /etc/passwd file of the coprocessor
- Pair up with a neighbor and run an MPI benchmark natively over Ethernet
- Recompile the MPSS OFED package to support a nonstandard kernel on the host
- Install MPSS-OFED rpms
- Start OFED on the coprocessor
- Pair up with a neighbor and run an MPI benchmark natively over InfiniBand
- Create a minimal startup script that wraps everything up; a batch scheduling system can use this script to restart a coprocessor on behalf of a user before running a job
- Where to find more resources or ask questions
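For the host-side bridged network, a RHEL-style configuration sketch looks like the fragment below; the bridge name, addresses, and MTU are illustrative, and the matching micX.conf entries vary by MPSS version, so consult the MPSS readme for the exact per-version directives.

```
# /etc/sysconfig/network-scripts/ifcfg-br0  (RHEL-style; addresses illustrative)
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=172.31.1.254
NETMASK=255.255.255.0
MTU=1500

# /etc/sysconfig/network-scripts/ifcfg-mic0  (attach the card's virtual
# interface to the bridge; check your MPSS readme for the exact form)
DEVICE=mic0
TYPE=Ethernet
ONBOOT=yes
BRIDGE=br0
```

The coprocessor then receives a static address on the same subnet through its micX.conf, which is the step modified directly in the session.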
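As a small preview of the /etc/passwd overlay step: the entry added to the coprocessor's passwd file follows the standard seven-field format, and the uid/gid must match the cluster-wide values so NFS file permissions line up. The user name, uid, and gid below are purely illustrative.

```shell
# Build the /etc/passwd line for a cluster user on the coprocessor.
# user/uid/gid are illustrative; use your cluster's real values.
user=jdoe; uid=4242; gid=100
line="$user:x:$uid:$gid:$user:/home/$user:/bin/sh"
echo "$line"
# This line is appended to the copy of /etc/passwd that is overlaid onto the
# uOS file system before the card is booted.
```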
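The ssh key-pair step amounts to generating a password-less key and installing the public half on the coprocessor. The sketch below uses a scratch directory so it is safe to run anywhere; the file name is illustrative.

```shell
# Generate a password-less key pair for coprocessor access in a scratch
# directory (path and key name are illustrative).
keydir=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$keydir/id_rsa_mic" -q
# The public key goes into the coprocessor's authorized_keys, e.g. via the
# file-system overlay, or with scp/ssh-copy-id once networking works.
# Common gotchas: authorized_keys must be mode 600, ~/.ssh mode 700, and
# root-squashed NFS home directories can silently break key lookup.
cat "$keydir/id_rsa_mic.pub"
```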
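The minimal startup script built at the end of the session can be sketched as a restart wrapper that a batch scheduler calls before a job. The micctrl option spellings below are from MPSS 3.x-era usage and should be verified against `micctrl --help` on your installation; with DRYRUN=1 (the default here) the script only prints the sequence instead of touching a card.

```shell
#!/bin/sh
# Restart wrapper of the kind a batch scheduling system could call to hand a
# user a freshly booted coprocessor. DRYRUN=1 (default) prints the commands;
# set DRYRUN=0 on a real MPSS host to execute them.
run() { if [ "${DRYRUN:-1}" = "1" ]; then echo "would: $*"; else "$@"; fi; }

restart_mic() {
    card=${1:-mic0}
    run micctrl -r "$card"      # reset the card
    run micctrl -w "$card"      # wait for the reset to complete
    run micctrl -b "$card"      # boot the uOS image
    run micctrl -w "$card"      # wait for the card to come online
}

restart_mic mic0
```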