Intel® Math Kernel Library

Build SciPy with MKL failed

I used this command to build NumPy first:

python %MYPWD%/%NUMPY_VER%/setup.py config  --compiler=msvc build_clib --compiler=msvc  build_ext

The site.cfg content is:

[mkl]
library_dirs = C:\Program Files (x86)\Intel\Composer XE 2015\mkl\lib\intel64
include_dirs = C:\Program Files (x86)\Intel\Composer XE 2015\mkl\include
mkl_libs = mkl_rt
lapack_libs = 

Then I build SciPy with this command:

python %MYPWD%/%SCIPY_VER%/setup.py config  --compiler=msvc build_clib --compiler=msvc  build_ext
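One thing I am unsure of: SciPy reads site.cfg through numpy.distutils, which also looks for the file next to the setup.py being run, so the SciPy tree may need its own copy. A sketch of that, untested:

copy site.cfg %MYPWD%\%SCIPY_VER%\site.cfg
python %MYPWD%/%SCIPY_VER%/setup.py config --compiler=msvc build_clib --compiler=msvc build_ext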


cluster_sparse_solver computes wrong solution

Hello,

I'm trying to use cluster_sparse_solver to solve a system in-place (iparm(6) = 1) with the distributed format (iparm(40) = 1). I adapted the example cl_solver_unsym_distr_c.c, as you can see attached, and at runtime on two MPI processes I get the following output:

$ icpc -V
Intel(R) C++ Intel(R) 64 Compiler XE for applications running on Intel(R) 64, Version 15.0.1.133 Build 20141023

$ mpicc -cc=icc cl_solver_unsym_distr_c.c -lmkl_intel_thread -lmkl_core -lmkl_intel_lp64 -liomp5

$ mpirun -np 2 ./a.out
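For reference, the iparm fragment I changed relative to the example (a sketch; the manual's 1-based iparm(6)/iparm(40) become iparm[5]/iparm[39] in 0-based C):

#include <mpi.h>
#include "mkl_cluster_sparse_solver.h"

MKL_INT iparm[64] = { 0 };
iparm[0]  = 1;   /* non-default iparm values are supplied */
iparm[5]  = 1;   /* iparm(6): write the solution back into b (in-place) */
iparm[39] = 1;   /* iparm(40): distributed assembled matrix input */
int comm = MPI_Comm_c2f(MPI_COMM_WORLD);
/* phases 11/22/33 then go through
   cluster_sparse_solver(pt, &maxfct, &mnum, &mtype, &phase, &n,
                         a, ia, ja, perm, &nrhs, iparm, &msglvl,
                         b, x, &comm, &error);
   with pt, perm, b, x, etc. set up as in the shipped example */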

Running multiple Pardiso solves concurrently

Hi,

We're using MKL Pardiso inside an optimisation web service on Windows and Linux. Clients can spin up multiple optimisations in one call to the service, so we have multiple runs executing concurrently in the same address space, each making its own Pardiso calls: one may be analysing while another is factorising and a third is solving, all at the same time. Under heavy load we see crashes from heap corruption, and Pardiso is often in the call stack.
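To show the isolation we already have in place, each run owns its own handle and parameter arrays along these lines (a sketch; PardisoRun is our own wrapper struct, not an MKL type):

#include "mkl_pardiso.h"
#include "mkl_types.h"

typedef struct {
    void    *pt[64];      /* Pardiso internal handle, one per run */
    MKL_INT  iparm[64];   /* Pardiso parameters, one per run */
    MKL_INT  mtype, n, *ia, *ja;
    double  *a;           /* this run's matrix, never shared */
} PardisoRun;

/* every pardiso() call for a given run passes only that run's
   pt and iparm, so no solver state is shared across threads */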

SVD produces wrong results with mkl=parallel (2013 SP1)

I have run into a strange bug in MKL: zgesvd produces different results (some of them wrong) depending on the number of threads MKL uses. With more than two threads, the singular values all become NaN, even though the matrix is perfectly diagonalizable. I would appreciate some help, as this is critical for my simulations at work.

I have placed a copy of a reproducible example here:
https://www.dropbox.com/sh/0fejoblyv7w6t30/AABcD9jW3KZRR0z5BLJXA0KLa?dl=0
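The check I am running loops over the thread count with a trivial test matrix (a sketch of the idea, not the full reproducer from the link):

#include <math.h>
#include <stdio.h>
#include "mkl.h"

int main(void) {
    const MKL_INT n = 3;
    double s[3], superb[2];
    MKL_Complex16 a[9];
    for (int nt = 1; nt <= 4; nt++) {
        mkl_set_num_threads(nt);
        /* refill A each pass, since zgesvd overwrites it: diag(1, 5, 9) */
        for (int i = 0; i < 9; i++) {
            a[i].real = (i % 4 == 0) ? i + 1.0 : 0.0;
            a[i].imag = 0.0;
        }
        LAPACKE_zgesvd(LAPACK_COL_MAJOR, 'N', 'N', n, n, a, n,
                       s, NULL, 1, NULL, 1, superb);
        printf("threads=%d  s = %g %g %g  nan=%d\n",
               nt, s[0], s[1], s[2], isnan(s[0]));
    }
    return 0;
}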

Using FEAST for large matrix

Hello,

I am presently working with FEAST to find eigenvalues and eigenvectors of a symmetric matrix. I need to solve an N x N matrix with N ~ 10^6-10^8.

Now I have a few queries:

1. Since the size is so large, it is not possible to allocate this storage on a desktop (mine has 8 GB of RAM). Is there any way to handle a matrix of this size? (See the storage estimate sketched below.)
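For scale, a back-of-the-envelope count of just the in/out arrays of the CSR driver dfeast_scsrev (a sketch; m0 is the search-subspace size, and the factor for FEAST's internal work arrays is a rough guess on my part):

#include <stdio.h>

int main(void) {
    double n = 1e6, m0 = 100;        /* assumed problem and subspace sizes */
    double x_gb = n * m0 * 8 / 1e9;  /* eigenvector block x(n, m0), doubles */
    printf("x alone: %.1f GB; internal work arrays add a few more blocks of this size\n",
           x_gb);
    return 0;
}

At N ~ 10^7-10^8, even the eigenvector block alone no longer fits in 8 GB, independent of how the matrix itself is stored.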

How can I reuse sparse factorizations in Pardiso


Hello, 

I have a series of structurally identical matrices {A1, A2, A3, ...}, and I need to solve A*X = Y for each of them. Note that the right-hand-side vector Y changes as time goes on while all the matrices are kept constant, so I need to solve all of these systems at each time step. Is there any way I can do the factorization only once, at the start, and store all the computed factors in a memory-efficient way, so that I can solve the linear systems whenever the Y vectors are updated?
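In case a sketch helps: what I am hoping for is the standard phase pattern from the manual, factor once, then repeat only the solve (pt, iparm, maxfct, mnum, etc. set up as in the shipped pardiso examples):

#include "mkl_pardiso.h"
#include "mkl_types.h"

double ddum;             /* dummy for unused array arguments */
MKL_INT idum;            /* dummy for perm */
MKL_INT phase, error = 0;

phase = 12;              /* analysis + numerical factorisation, once */
pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n,
        a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);

phase = 33;              /* back-substitution only, every time Y changes */
pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n,
        a, ia, ja, &idum, &nrhs, iparm, &msglvl, y, x, &error);

phase = -1;              /* release internal memory at the very end */
pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n,
        &ddum, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);

To keep the factors of A1...Ak in memory simultaneously, my understanding is that maxfct = k, with mnum selecting the matrix on each call, is the intended mechanism.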

Thank you! 

Issue replacing IPP DCT function with MKL DCT function

Hi,

I want to replace my IPP-based DCT function with an MKL-based DCT function.

I am getting different output data when I cross-check the IPP DCT output against the MKL DCT output.

I used the functions below to compute the DCT via IPP.lib calls:

ippsDCTFwdInitAlloc_32f
ippsDCTFwd_32f
ippsDCTFwdFree_32f

Below is my code:

// please find attached: fileinput.txt

int main(int argc, char* argv[]){
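For the MKL side, the call sequence I am testing is the trigonometric-transform interface (a sketch; whether MKL_STAGGERED_COSINE_TRANSFORM matches the DCT-II normalisation of ippsDCTFwd_32f is exactly what I am unsure about):

#include <stdio.h>
#include "mkl_trig_transforms.h"

int main(void) {
    MKL_INT n = 32, stat = 0;
    MKL_INT tt_type = MKL_STAGGERED_COSINE_TRANSFORM; /* my DCT-II candidate, unverified */
    MKL_INT ipar[128];
    float f[33], spar[128];           /* sized generously; exact minima are in the manual */
    DFTI_DESCRIPTOR_HANDLE handle = NULL;

    for (int i = 0; i <= 32; i++) f[i] = (float)i;   /* stand-in for fileinput.txt data */

    s_init_trig_transform(&n, &tt_type, ipar, spar, &stat);
    s_commit_trig_transform(f, &handle, ipar, spar, &stat);
    s_forward_trig_transform(f, &handle, ipar, spar, &stat); /* result overwrites f */
    free_trig_transform(&handle, ipar, &stat);

    printf("f[0] = %f\n", f[0]);      /* status checks omitted for brevity */
    return 0;
}

Even if only the scaling differs from IPP, comparing the shape of the two outputs should tell me whether the transform type itself is the right one.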
