Running list of known issues

 

Why do I get a segmentation fault when I run an Intel® MPI Library application using my Cisco* TopSpin* InfiniBand* Host Drivers?
Make sure you have Cisco* TopSpin* InfiniBand* Host Drivers version 3.2.0-47 or later, or use OpenFabrics* Enterprise Distribution (OFED*) 1.2.5 or higher. Submit an issue to Intel® Premier Support if the problem persists.

Why does an unexpected debugger window appear when I use mpiexec -tv to debug an Intel MPI Library application in TotalView*?
TotalView* version 7.1.x introduces a new way to launch applications for debugging. The "New program" window appears when you use the -tv mpiexec option. Press the Cancel button in the "New program" window to continue debugging. The problem does not appear with TotalView* version 8.3.

Why do I get a catastrophic compilation error "opt/intel/mpi/2.0/include/mpicxx.h(45): catastrophic error: #error directive: SEEK_SET is #defined but must not be for the C++ binding of MPI" when I compile a C++ application?
Define the MPICH_IGNORE_CXX_SEEK macro at the compilation stage to avoid this issue. For instance:

mpiicpc -DMPICH_IGNORE_CXX_SEEK

There are namespace clashes between stdio.h and the MPI C++ binding. The MPI standard requires the SEEK_SET, SEEK_CUR, and SEEK_END names in the MPI namespace, but stdio.h defines them as integer macros. To avoid this conflict, make sure your application includes the mpi.h header file before stdio.h or iostream.h, or undefine the SEEK_SET, SEEK_CUR, and SEEK_END names before including mpi.h.
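If adding the compilation flag is not convenient, the conflict can also be handled in the source. The following minimal sketch (a hypothetical example program, not taken from the product documentation) removes the conflicting macros before including mpi.h:

#include <cstdio>      // defines SEEK_SET, SEEK_CUR, SEEK_END as macros

// Remove the conflicting macros before pulling in the MPI C++ binding
#undef SEEK_SET
#undef SEEK_CUR
#undef SEEK_END

#include <mpi.h>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);
    MPI_Finalize();
    return 0;
}

Including mpi.h before cstdio/stdio.h achieves the same result without the #undef lines.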

 

Why do I get the following error message "ld:cannot find -lstdc++ shared" during compilation of a C++ application?
Use the Intel C++ Compiler version 8.1 build 034 or later to avoid this issue.

Why do C++ tests fail when they are compiled with the Intel C++ Compiler but work fine with GNU C++?
Use the Intel C++ Compiler version 8.1 build 034 or later to avoid this issue.

Why do I get a collective abort message when I run an Intel MPI Library application using my QLogic* SilverStorm* InfiniBand* Host Drivers?
Make sure you have QLogic* SilverStorm* InfiniBand* Host Drivers version 4.1.1 or later, or set the I_MPI_DYNAMIC_CONNECTION_MODE environment variable to the disconnect value to avoid this issue. Alternatively, use OpenFabrics* Enterprise Distribution (OFED*) 1.2.5 or higher. Submit an issue to Intel® Premier Support if the problem persists.
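One way to set the variable is through the -genv option of mpiexec; the process count and application name below are placeholders:

mpiexec -genv I_MPI_DYNAMIC_CONNECTION_MODE disconnect -n 4 ./your_app

Exporting I_MPI_DYNAMIC_CONNECTION_MODE=disconnect in the shell before launching the application has the same effect.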
