Parallel Computing

I need help: wrong results when I use dgels (Ubuntu 14.04 + MKL 11.0)


I want to use the MKL library to solve an overdetermined system of linear equations. For this I use the dgels LAPACK function provided by MKL. In my particular problem (a matrix of 4800 rows and 81 columns) the results are incorrect: both the QR factorization and the solution are wrong. It does, however, seem to work fine for toy matrices (the 6x4 dgels example matrix in the MKL documentation).
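For reference, here is a minimal self-contained call to the same routine (toy data, not my 4800x81 system); a common cause of wrong results at larger sizes is a mismatch between the storage layout and the lda/ldb arguments, which a small case like this can help rule out:

/* Sketch: least-squares solve of an overdetermined 4x2 system with
 * LAPACKE_dgels (toy data, not the 4800x81 problem).
 * Row-major layout, so lda = n and ldb = nrhs. */
#include <stdio.h>
#include <mkl_lapacke.h>

int main(void)
{
    double a[4 * 2] = { 1.0, 1.0,
                        1.0, 2.0,
                        1.0, 3.0,
                        1.0, 4.0 };
    double b[4]     = { 6.0, 5.0, 7.0, 10.0 };

    lapack_int info = LAPACKE_dgels(LAPACK_ROW_MAJOR, 'N',
                                    4, 2, 1, a, 2, b, 1);
    if (info != 0) {
        printf("dgels failed: info = %d\n", (int)info);
        return 1;
    }
    /* On success the first n entries of b hold the solution. */
    printf("x = (%g, %g)\n", b[0], b[1]);   /* expect (3.5, 1.4) */
    return 0;
}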

Implicit declaration warning not produced by Intel C compiler

Compiling the code below with gcc (gcc -Wall foo.c) yields the following warnings:

foo.c: In function 'main':
foo.c:9: warning: implicit declaration of function 'strlen'
foo.c:9: warning: incompatible implicit declaration of built-in function 'strlen'

When I compile with the Intel C++ compiler (icpc -Wall foo.c) I get the following error message:

foo.c(9): error: identifier "strlen" is undefined
    len = strlen(foo);

When I compile with the Intel C compiler (icc -Wall foo.c), no warnings are displayed at all. Why is this the case?
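For context, a minimal foo.c that triggers exactly these diagnostics would look like the following (a reconstruction assuming the usual cause, a missing #include <string.h>; the original snippet is not shown here):

/* foo.c -- hypothetical reconstruction, not the original source.
 * Without <string.h>, C90 permits the call, so gcc only warns about
 * the implicit declaration; C++ forbids it, hence the icpc error. */
#include <stdio.h>
/* #include <string.h> */    /* uncommenting this silences both warnings */

int main(void)
{
    const char *foo = "hello";
    size_t len;

    len = strlen(foo);       /* implicit declaration of strlen here */
    printf("len = %lu\n", (unsigned long)len);
    return 0;
}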


Mpirun is treating -perhost, -ppn, -grr the same: always round-robin

Our cluster has 2 Haswell sockets per node, each with 12 cores (24 cores/node).

Using: intel/15.1.133, impi/

Irrespective of which of the options mentioned in the subject line is used, ranks are always placed in round-robin fashion. The commands are run in a batch job that generates a host file containing lines like the following when submitted with:

qsub -l nodes=2:ppn=1 ...


tfe02.% cat hostfile
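Whichever option is given, the placement actually chosen can be read from the rank map that Intel MPI prints at startup; for example (an illustrative command for 24 ranks across the two nodes):

mpirun -genv I_MPI_DEBUG 5 -perhost 12 -n 24 ./a.out

With -perhost 12 the expectation is ranks 0-11 on the first node and 12-23 on the second, whereas the round-robin behavior described above alternates consecutive ranks between the two nodes.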

micnativeloadex xlinpack_mic

I am trying to use micnativeloadex to launch the SMP LINPACK for MIC that comes packaged with the MKL libraries, but it's complaining as shown below:

$ micnativeloadex ./xlinpack_mic -a "/home/testing/phi/linpack/lininput_mic"
Either the supplied binary isn't a regular file, or its size is zero.

Any pointers?
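A basic sanity check (commands illustrative) is whether the argument really points at a non-empty regular file built for the coprocessor:

$ ls -l ./xlinpack_mic
$ file ./xlinpack_mic

The message above suggests micnativeloadex is failing its check on the binary itself, so a wrong relative path or a zero-length file would produce it.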

MPSS 3.6, OFED-3.18 and test.c


I have installed MPSS 3.6 and OFED-3.18 on the host and on the Intel Xeon Phi.

I built the simple test from the /test directory of Intel MPI 5.1 for both the host and the MIC. I run the test with the command line:

mpirun -n 2 -genv I_MPI_DEBUG=5 -host node ./test :  -n 6 -host node-mic0 ./test-mic

but the run fails.
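For reference, the test program amounts to an MPI hello-world; a from-memory sketch (not the shipped source) that prints each rank's host name, which is enough to see whether the node-mic0 ranks come up at all:

/* Sketch of what the Intel MPI test/test.c boils down to (from memory,
 * not the shipped source): each rank reports the host it runs on. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);
    printf("Hello world: rank %d of %d on %s\n", rank, size, name);
    MPI_Finalize();
    return 0;
}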

MKL gesv memory usage


While testing LAPACK_zgesv, I saw that it allocates a large amount of memory; in particular, this can crash my program when, for example, a 5000x5000 matrix is passed.
Is there any alternative solution or option to limit this excessive memory use?

I'm using MKL 11.3.01 from C#, in case that is helpful.
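For comparison, a direct C call is sketched below (a toy example; MKL's lapack_complex_double defaults to the MKL_Complex16 struct). zgesv factorizes A in place and overwrites B with the solution, so beyond the matrix itself (16 bytes per element, about 400 MB for 5000x5000) it only needs the n-element pivot array; substantially higher usage therefore points at extra copies, for example in the C# marshalling layer.

/* Minimal LAPACKE_zgesv call (sketch). A is overwritten by its LU
 * factors and b by the solution; the only extra allocation here is
 * the pivot array ipiv. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <mkl_lapacke.h>

int main(void)
{
    const lapack_int n = 1000, nrhs = 1;   /* small size for the sketch */
    lapack_complex_double *a = malloc(sizeof(*a) * n * n);
    lapack_complex_double *b = malloc(sizeof(*b) * n * nrhs);
    lapack_int *ipiv = malloc(sizeof(*ipiv) * n);
    if (!a || !b || !ipiv) return 1;

    /* toy system: A = I, b = ones, so the expected solution is ones */
    memset(a, 0, sizeof(*a) * n * n);
    memset(b, 0, sizeof(*b) * n * nrhs);
    for (lapack_int i = 0; i < n; ++i) {
        a[i * n + i].real = 1.0;
        b[i].real = 1.0;
    }

    lapack_int info = LAPACKE_zgesv(LAPACK_ROW_MAJOR, n, nrhs,
                                    a, n, ipiv, b, nrhs);
    printf("info = %d, x[0] = %f\n", (int)info, b[0].real);

    free(ipiv); free(b); free(a);
    return 0;
}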




Using Boost.Align and Boost.Numeric.uBlas with Intel MKL

Hello everybody,

For my project I need to call some basic BLAS and LAPACK routines from the Intel MKL library, such as gemm or syev. For convenience I wrote my own matrix class whose member functions wrap the MKL function calls. Since I already use some components of the Boost library, I would like to replace my matrix class with the boost::numeric::ublas::matrix class.

Until now, the underlying storage of my matrix class has been a std::unique_ptr whose data is allocated with mkl_malloc(..., 64) and which has a custom deleter.
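The MKL side of such a wrapper reduces to a few C calls; below is a toy C sketch of the 64-byte aligned allocation, a gemm call on it, and the mkl_free that the unique_ptr's custom deleter forwards to. My understanding (untested) is that on the uBlas side, ublas::matrix accepts its storage type as a template parameter, so plugging an aligned allocator such as Boost.Align's boost::alignment::aligned_allocator into ublas::unbounded_array should preserve the 64-byte alignment.

/* Sketch: 64-byte aligned buffers from mkl_malloc feeding cblas_dgemm,
 * released with mkl_free -- the call a unique_ptr custom deleter would
 * forward to. Toy 2x2 data. */
#include <stdio.h>
#include <mkl.h>

int main(void)
{
    const MKL_INT n = 2;
    double *a = mkl_malloc(sizeof(double) * n * n, 64);
    double *b = mkl_malloc(sizeof(double) * n * n, 64);
    double *c = mkl_malloc(sizeof(double) * n * n, 64);
    if (!a || !b || !c) return 1;

    /* A = B = I, so C = A*B should be I as well */
    const double id[4] = { 1.0, 0.0, 0.0, 1.0 };
    for (int i = 0; i < 4; ++i) { a[i] = id[i]; b[i] = id[i]; }

    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                n, n, n, 1.0, a, n, b, n, 0.0, c, n);
    printf("c = [%g %g; %g %g]\n", c[0], c[1], c[2], c[3]);

    mkl_free(c);
    mkl_free(b);
    mkl_free(a);
    return 0;
}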
