Working with Mellanox* InfiniBand Adapter on System with Intel® Xeon Phi™ Coprocessors

InfiniBand* is a network communications protocol commonly used in HPC because it offers very high throughput.

Authored by Nguyen, Loc Q (Intel) Last updated on 06/14/2017 - 15:52

Intel® Xeon Phi™ Cluster Integration webinar, part 1 of 4

Authored by admin Last updated on 06/14/2017 - 08:44

Access to InfiniBand* from Linux*

by Robert J. Woodruff, Software Engineering Manager

Authored by Robert Woodruff (Intel) Last updated on 06/01/2017 - 11:17

Troubleshooting InfiniBand connection issues using OFED tools

This article describes how to troubleshoot common InfiniBand issues using the tools provided by the OpenFabrics Enterprise Distribution (OFED).
Authored by Peter Hartman (Intel) Last updated on 06/07/2017 - 09:23
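`ibstat` and `iblinkinfo` are two of the standard OFED diagnostics such an article typically starts with. As a hedged sketch (the wrapper function and its fallback behavior are my own illustration, not from the article), the snippet below runs an OFED tool only if it is installed, so it degrades gracefully on hosts without InfiniBand:

```python
import shutil
import subprocess

def run_ofed_diagnostic(tool, args=()):
    """Run an OFED diagnostic tool if installed and return its stdout.

    Returns None when the tool is not on PATH (e.g. OFED not installed),
    so callers can degrade gracefully on non-InfiniBand hosts.
    """
    path = shutil.which(tool)
    if path is None:
        return None
    result = subprocess.run([path, *args], capture_output=True, text=True)
    return result.stdout

# Typical first checks when an InfiniBand link misbehaves:
#   ibstat      -- local adapter/port state (ports should report "Active")
#   iblinkinfo  -- per-link state and speed across the fabric
for tool in ("ibstat", "iblinkinfo"):
    output = run_ofed_diagnostic(tool)
    if output is None:
        print(f"{tool}: not installed (is OFED present?)")
    else:
        print(output)
```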

Understanding the InfiniBand Subnet Manager

The InfiniBand subnet manager (OpenSM) assigns Local IDentifiers (LIDs) to each port connected to the InfiniBand fabric, and builds a routing table based on the assigned LIDs.

Authored by Peter Hartman (Intel) Last updated on 06/14/2017 - 13:13
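As a rough mental model of the two steps named above (this is a toy sketch, not OpenSM's actual algorithm; all names are illustrative), the snippet below assigns sequential LIDs to fabric ports and then builds a simple per-switch forwarding table keyed by destination LID:

```python
def assign_lids(ports):
    """Toy LID assignment: one sequential LID per fabric port.

    OpenSM assigns LIDs starting at 1 (LID 0 is reserved); real
    assignment can also hand out LID ranges when LMC > 0.
    """
    return {port: lid for lid, port in enumerate(ports, start=1)}

def switch_forwarding_table(lids, port_behind):
    """Toy forwarding table for one switch: destination LID -> out port.

    `port_behind` maps each end port to the switch port it is reached
    through; a real subnet manager derives this from its fabric sweep.
    """
    return {lid: port_behind[port] for port, lid in lids.items()}

hcas = ["node01/hca1", "node02/hca1", "node03/hca1"]
lids = assign_lids(hcas)                 # {'node01/hca1': 1, ...}
table = switch_forwarding_table(lids, {
    "node01/hca1": 1, "node02/hca1": 2, "node03/hca1": 3,
})
print(table)  # {1: 1, 2: 2, 3: 3}
```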

Experience with various interconnects and DAPL* providers

FAQ regarding how to use the Intel® MPI Library with various DAPL devices.
Last updated on 09/11/2017 - 13:01