# [6] Assertion failed in file ../../segment.c

Hi,

we have compiled our parallel code using the latest Intel software stack. We use a lot of passive one-sided RMA PUT/GET operations along with derived datatypes. We are now experiencing a problem where our application sometimes fails with either a segmentation fault or the following error message:

[6] Assertion failed in file ../../segment.c at line 669: cur_elmp->curcount >= 0

[6] internal ABORT - process 6

Intel Inspector shows a problem inside the Intel MPI library:

# -perhost not working with IntelMPI v.5.0.1.035

The -perhost option does not work as expected with Intel MPI 5.0.1.035, though it does work with Intel MPI 4.1.0.024:

$ qsub -I -lnodes=2:ppn=16:compute,walltime=0:15:00
qsub: waiting for job 5731.hpc-class.its.iastate.edu to start
qsub: job 5731.hpc-class.its.iastate.edu ready
$ mpirun -n 2 -perhost 1 uname -n
hpc-class-40.its.iastate.edu
hpc-class-40.its.iastate.edu

# Problem in running mpirun command through newly created user

Hi Team,

I am facing a problem while running the mpirun command as a newly created user.

Package ID: l_mpi_p_4.0.3.008

Any help will be highly appreciated.

Regards.

# Intel MPI 5.0.1.037

Hi,

I have two questions about Intel MPI on Microsoft Windows 7 64-bit.

The first concerns Intel MPI 5.0.1.037. If I execute

C:\Program Files (x86)\Intel\MPI-RT\5.0.1.037\em64t\bin\smpd -version

I get "3.1". If I execute

C:\Program Files (x86)\Intel\MPI-RT\4.1.3.045\em64t\bin\smpd -version

I get "4.1.3"

This is a problem for us, because our product is compiled with Intel MPI 4.1.3.045 and needs smpd version 4.1.3.

# Fortran code aborts beyond 108 nodes

Hi,

There is an issue we have been facing for the past few months.

We used to use a C code for our simulations. It ran successfully on 108 nodes (each node has 16 processors), but we could not make the code run on more than 108 nodes.

# cpuinfo output from system call different

Hello,

I'm using Intel MPI 5.0 and am making a system call inside my Fortran program, and it returns different values depending on the environment variable I_MPI_PIN_DOMAIN. Why is that? How do I make it give consistent output?

Sample Fortran (Intel Fortran 13.1) program that can reproduce this:

      program tester
      call system("cpuinfo|grep 'Packages(sockets)'|"//
     &            "tr -d ' '|cut -d ':' -f 2")
      stop
      end

$ mpirun -genv I_MPI_PIN_DOMAIN node -np 1 ./a.out
2

# Intel MPI Runtime For Windows installer hang

We are seeing a couple of computers (all running Windows 7) that hang during installation. We ran the installer with the log argument, and this is what it shows. We see this with a couple of different versions (4.0 and 4.1). Any ideas? There aren't any windows waiting for input. This was done as an admin user, and we also tried running the command prompt as Administrator, with the same issue.

...

# Intel MIC LDAP Issue

Dear Team,

In our setup we have 6 MIC cards on 3 host machines.

Of the 6 MIC cards, 1 card is not working with LDAP authentication.

The troubleshooting steps are given below:

[root@phi1-mic0 ~]# rpm -qa | grep ldap
[root@phi1-mic0 ~]#
[root@phi1-mic0 ~]# logout
Connection to phi1-mic0 closed.
[root@phi1-mic1 ~]# rpm -qa | grep ldap
libldap-2.4-2-2.4.23-r1.k1om
nss-ldap-265-r0.k1om
pam-ldap-186-r0.k1om
[root@phi1-mic1 ~]#

MIC 0 Log

# How do I configure Visual Studio for Intel MPI?

I'm using Visual Studio 2013 and IncrediBuild 5.5 to build apps using Intel Parallel Studio. I've recently downloaded Intel MPI and found that it essentially has a compiler wrapper (mpiicpc) that I need to use to compile apps (and, correspondingly, run them with mpiexec).

My question is this: how do I configure Visual Studio 2013 so that mpiicpc is used instead of icpc for compilation?

Thanks.
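(A note on the usual Windows workflow: rather than replacing the compiler, Visual C++ projects are typically pointed at the Intel MPI headers and import library directly, since the wrapper mainly adds include and link options to the underlying compiler. A sketch of the relevant project properties, assuming a 5.x directory layout under the $(I_MPI_ROOT) environment variable; 4.x installs used em64t instead of intel64, so adjust the paths to match the actual installation:)

```text
C/C++ -> General -> Additional Include Directories:
    $(I_MPI_ROOT)\intel64\include
Linker -> General -> Additional Library Directories:
    $(I_MPI_ROOT)\intel64\lib
Linker -> Input -> Additional Dependencies:
    impi.lib
```

With these settings the project builds with the regular Visual Studio toolchain, and the resulting executable is launched with mpiexec as usual.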

# MPI_TYPE_CREATE_STRUCT error 174

Dear all,

this is becoming a nightmare. I have the following program, where I create a datatype with MPI_TYPE_CREATE_STRUCT and then try to send it to another processor. It crashes every time and I do not know why.