Search Results: 1,500

  1. Enabling IP over InfiniBand* on the Intel® Xeon Phi™ Coprocessor ...

    https://software.intel.com/en-us/articles/enabling-ip-over-infiniband-on-the-intel-xeon-phi-coprocessor

    Mar 9, 2016 ... Introduction: InfiniBand (IB) networking offers high throughput and low latency. To use IB, network applications must use the IB verbs API.
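
    The snippet points at the IB verbs API. As a rough, hedged illustration (ours, not from the article), a minimal libibverbs program that enumerates the HCAs on a node might look like this; link with -libverbs:

      /* Sketch: list InfiniBand devices via libibverbs. */
      #include <stdio.h>
      #include <infiniband/verbs.h>

      int main(void)
      {
          int num;
          struct ibv_device **devs = ibv_get_device_list(&num);
          if (!devs) {
              perror("ibv_get_device_list");
              return 1;
          }
          for (int i = 0; i < num; i++)
              printf("device %d: %s\n", i, ibv_get_device_name(devs[i]));
          ibv_free_device_list(devs);
          return 0;
      }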

  2. Intel MPI, MPI_THREAD_MULTIPLE and InfiniBand

    https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/667769

    Jul 14, 2016 ... Dear all, I have a general question about Intel MPI. First of all, some information on my installation (on a CentOS 6 cluster): mpiexec --version ...
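
    For context on the thread level named in the title: a minimal sketch (ours, not from the thread) that requests MPI_THREAD_MULTIPLE and checks which level the library actually grants:

      /* Sketch: request full multithreading support from MPI. */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          int provided;
          MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
          if (provided < MPI_THREAD_MULTIPLE)
              printf("MPI_THREAD_MULTIPLE not granted (got level %d)\n", provided);
          MPI_Finalize();
          return 0;
      }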

  3. Working with Mellanox* InfiniBand Adapter on System with Intel ...

    https://software.intel.com/en-us/blogs/2014/09/23/working-with-mellanox-infiniband-adapter-on-system-with-intel-xeon-phi-coprocessors

    Sep 23, 2014 ... InfiniBand* is a network communications protocol commonly used in the HPC area because the protocol offers very high throughput. Intel and ...

  4. Troubleshooting InfiniBand connection issues using OFED tools ...

    https://software.intel.com/en-us/articles/troubleshooting-infiniband-connection-issues-using-ofed-tools

    Jan 21, 2010 ... This article describes how to troubleshoot some common InfiniBand issues using the tools provided by the Open Fabrics Enterprise Distribution ...
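
    OFED tools such as ibstat report each port's link state; the same information is available programmatically through the verbs layer. A hedged sketch (assumes at least one HCA and queries port 1):

      /* Sketch: query port state and LID, similar to what ibstat prints. */
      #include <stdio.h>
      #include <infiniband/verbs.h>

      int main(void)
      {
          struct ibv_device **devs = ibv_get_device_list(NULL);
          if (!devs || !devs[0])
              return 1;
          struct ibv_context *ctx = ibv_open_device(devs[0]);
          struct ibv_port_attr attr;
          if (ctx && ibv_query_port(ctx, 1, &attr) == 0)
              printf("port 1 state: %s, LID 0x%x\n",
                     ibv_port_state_str(attr.state), attr.lid);
          if (ctx)
              ibv_close_device(ctx);
          ibv_free_device_list(devs);
          return 0;
      }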

  5. Understanding the InfiniBand Subnet Manager | Intel® Software

    https://software.intel.com/en-us/articles/understanding-the-infiniband-subnet-manager

    Jan 28, 2010 ... The InfiniBand subnet manager (OpenSM) assigns Local IDentifiers (LIDs) to each port connected to the InfiniBand fabric, and develops a ...

  6. Using InfiniBand network fabrics to allocate globally shared memory ...

    https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/609361

    Feb 5, 2016 ... Dear Colleagues, My MPI program implements a globally shared memory for processes on multiple nodes (hosts) using ...
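
    The post's own code is not shown; one common way to get globally shared memory across nodes is MPI one-sided communication (RMA), sketched here under our own assumptions (one double per rank, at least two ranks):

      /* Sketch: each rank exposes one double; rank 0 reads rank 1's value. */
      #include <stdio.h>
      #include <mpi.h>

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          double *base;
          MPI_Win win;
          MPI_Win_allocate(sizeof(double), sizeof(double), MPI_INFO_NULL,
                           MPI_COMM_WORLD, &base, &win);
          *base = (double)rank;

          MPI_Win_fence(0, win);
          double remote = -1.0;
          if (rank == 0)
              MPI_Get(&remote, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
          MPI_Win_fence(0, win);

          if (rank == 0)
              printf("value exposed by rank 1: %f\n", remote);
          MPI_Win_free(&win);
          MPI_Finalize();
          return 0;
      }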

  7. Intel MPI failed with InfiniBand on new nodes of our cluster (Got ...

    https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/278341

    May 11, 2012 ... Strange, because a run with Open MPI over InfiniBand works with the new nodes. If I'm using I_MPI_FABRICS=shm:dapl with the new nodes it ...

  8. RDMA in mixed Ethernet/InfiniBand cluster

    https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/289575

    May 22, 2010 ... We just got a new cluster with InfiniBand. InfiniBand is set up correctly and all tests passed (IPoIB, RDMA ...). We are able to run MPI jobs ...

  9. Access to InfiniBand* from Linux* | Intel® Software

    https://software.intel.com/en-us/articles/access-to-infiniband-from-linux

    Oct 30, 2009 ... by Robert J. Woodruff, Software Engineering Manager. Introduction: Get acquainted with the InfiniBand* software architecture and how support ...

  10. InfiniBand network configuration on Xeon Phi

    https://software.intel.com/en-us/forums/intel-many-integrated-core/topic/598154

    Oct 30, 2015 ... I need your help setting up a Xeon Phi coprocessor in a cluster; I'm facing some issues doing that. Below I've mentioned some of the ...

  11. Intel MPI and InfiniBand uDAPL

    https://software.intel.com/es-es/forums/intel-clusters-and-hpc-technology/topic/303525

    Hi, I am trying to use the Intel compilers and MPI libraries to run over InfiniBand. From the documentation and also from all the searches I did on the Intel forums I ...

  12. Poor performance of MPI even with InfiniBand

    https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/607526

    Jan 27, 2016 ... Hi, I am benchmarking how much network bandwidth MPI can exploit for my use cases. I tested the bandwidth between two machines using ...
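
    The poster's test is not shown; a minimal two-rank bandwidth sketch in the same spirit (our own, with arbitrary message size and iteration count; run one rank per node to exercise the interconnect rather than shared memory):

      /* Sketch: rank 0 streams 4 MiB messages to rank 1 and reports MB/s. */
      #include <stdio.h>
      #include <stdlib.h>
      #include <mpi.h>

      #define MSG_SIZE (4 * 1024 * 1024)
      #define ITERS 100

      int main(int argc, char **argv)
      {
          MPI_Init(&argc, &argv);
          int rank;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          char *buf = malloc(MSG_SIZE);

          MPI_Barrier(MPI_COMM_WORLD);
          double t0 = MPI_Wtime();
          for (int i = 0; i < ITERS; i++) {
              if (rank == 0)
                  MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
              else if (rank == 1)
                  MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                           MPI_STATUS_IGNORE);
          }
          double t1 = MPI_Wtime();
          if (rank == 0)
              printf("bandwidth: %.1f MB/s\n",
                     (double)MSG_SIZE * ITERS / (t1 - t0) / 1e6);
          free(buf);
          MPI_Finalize();
          return 0;
      }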

  13. MPSS 3.5.2, MLNX OFED 2.3.2 InfiniBand problems on CentOS 6.6

    https://software.intel.com/en-us/forums/intel-many-integrated-core/topic/611156

    Mar 1, 2016 ... Hi, I have installed MLNX_OFED_LINUX-2.3-2.0.0 with MPSS 3.5.2 and I'm getting the following error in the output of ibv_devinfo: Failed to query ...

  14. Intel Phi on CentOS 6.5 with RHEL InfiniBand Stack

    https://software.intel.com/en-us/forums/intel-many-integrated-core/topic/509042

    Apr 7, 2014 ... Hi, I would like to ask whether it is possible to install Intel MPSS with the standard (distributed with the OS) RHEL InfiniBand stack. In the manual ...

  15. How to handle mismatched InfiniBand HCA and switch speed with ...

    https://software.intel.com/en-us/articles/how-to-handle-mismatched-infiniband-hca-and-switch-speed-with-intel-cluster-checker-v30

    Jul 8, 2015 ... Intel® Cluster Checker v3.0 can detect the type of the InfiniBand Host Channel Adapter (HCA) that is installed in a cluster node. It can also ...

  16. InfiniBand connection host-mic and mic-mic

    https://software.intel.com/en-us/forums/networking/topic/542196

    Feb 25, 2015 ... Hi, I'm trying to set up an InfiniBand connection between host and MIC, and MIC and MIC. The host shows this in ifconfig: mic0:ib: flags=67 mtu 64512 ...

  17. Enabling Connectionless DAPL UD in the Intel® MPI Library | Intel ...

    https://software.intel.com/en-us/articles/dapl-ud-support-in-intel-mpi-library

    May 7, 2013 ... What is DAPL UD? Traditional InfiniBand* support involves MPI message transfer over the Reliable Connection (RC) protocol. While RC is ...
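
    At the verbs level, the RC/UD distinction the article describes is fixed when a queue pair is created. A hedged sketch of that choice (our own helper, error handling omitted):

      /* Sketch: transport is selected via qp_type at QP creation time. */
      #include <infiniband/verbs.h>

      struct ibv_qp *make_qp(struct ibv_context *ctx, enum ibv_qp_type type)
      {
          struct ibv_pd *pd = ibv_alloc_pd(ctx);
          struct ibv_cq *cq = ibv_create_cq(ctx, 16, NULL, NULL, 0);
          struct ibv_qp_init_attr attr = {
              .send_cq = cq,
              .recv_cq = cq,
              .cap = { .max_send_wr = 16, .max_recv_wr = 16,
                       .max_send_sge = 1, .max_recv_sge = 1 },
              .qp_type = type,  /* IBV_QPT_RC or IBV_QPT_UD */
          };
          return ibv_create_qp(pd, &attr);
      }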

  18. ibscif/InfiniBand problems

    https://software.intel.com/pt-br/forums/intel-many-integrated-core/topic/508661

    Hello, I am having problems with the current MPSS 3.2 release in combination with CentOS 5.3 and the Intel 7.2.2.0.8 OFED stack. The kernel version is ...

  19. Performance of InfiniBand verbs issued from the MIC processor

    https://software.intel.com/pt-br/forums/intel-many-integrated-core/topic/372394

    I wanted to look at using InfiniBand verbs on the MIC card, where we're talking about transfers between two different compute nodes (two different MICs, system ...

  20. Intel Micro Benchmark Result Problem with PCI-passthrough via ...

    https://software.intel.com/en-us/forums/intel-clusters-and-hpc-technology/topic/515816

    May 26, 2014 ... Hi everyone, I am still evaluating cluster performance. For now, I am moving on to virtualization with PCI passthrough via FDR InfiniBand on a KVM hypervisor.
