Intel® Math Kernel Library

MKL ERROR

Hello, when I compile and run the following code:
#include "stdafx.h"
#include "mkl_blas.h"
#include "mkl_cblas.h"
#include "mkl_types.h"

int _tmain(int argc, char *argv[])
{

MKL_INT m = 2, n = 4, k = 3;
MKL_INT lda = k, ldb = n, ldc = n;
float alpha = 1, beta = 1;
float *a = new float[6], *b = new float[12], *c = new float[8];
CBLAS_ORDER order = CblasRowMajor;
CBLAS_TRANSPOSE transA = CblasNoTrans, transB = CblasNoTrans;
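
The snippet above is cut off before the actual call, which from the variables being set up was presumably cblas_sgemm. For reference, here is a complete, self-contained C version with the same dimensions; the matrix contents and the print loop are illustrative additions, not taken from the original post.

#include <stdio.h>
#include "mkl_cblas.h"
#include "mkl_types.h"

int main(void)
{
    /* C = alpha*A*B + beta*C, with A (2x3), B (3x4), C (2x4), all row-major. */
    MKL_INT m = 2, n = 4, k = 3;
    MKL_INT lda = 3, ldb = 4, ldc = 4;   /* leading dimensions for row-major storage */
    float alpha = 1.0f, beta = 1.0f;

    float a[6]  = {1, 2, 3,
                   4, 5, 6};
    float b[12] = {1, 0, 0, 0,
                   0, 1, 0, 0,
                   0, 0, 1, 0};
    float c[8]  = {0};

    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, alpha, a, lda, b, ldb, beta, c, ldc);

    /* Print the 2x4 result. */
    for (MKL_INT i = 0; i < m; i++) {
        for (MKL_INT j = 0; j < n; j++)
            printf("%6.1f ", c[i*ldc + j]);
        printf("\n");
    }
    return 0;
}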

Are type sizes consistent cross-platform?

I'm just wondering,

Are the MKL base types (single, double, complex single, complex double) always the same physical size, in bits, across supported platforms?

That is, is a single always a 4-byte floating-point value, and a double always an 8-byte value, on 32-bit, 64-bit, Itanium, and other supported platforms, for both AMD and Intel processors?
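
One way to check this on a given platform is to print the sizes directly; the sketch below is illustrative and not from the original post, and uses the MKL_Complex8 and MKL_Complex16 complex types declared in mkl_types.h.

#include <stdio.h>
#include "mkl_types.h"

int main(void)
{
    /* Print the byte sizes of the floating-point types the MKL interfaces use. */
    printf("float         : %zu bytes\n", sizeof(float));
    printf("double        : %zu bytes\n", sizeof(double));
    printf("MKL_Complex8  : %zu bytes\n", sizeof(MKL_Complex8));
    printf("MKL_Complex16 : %zu bytes\n", sizeof(MKL_Complex16));
    printf("MKL_INT       : %zu bytes\n", sizeof(MKL_INT));
    return 0;
}

On the platforms MKL supports, the first four lines should print 4, 8, 8, and 16; MKL_INT is the one whose size differs, between the LP64 (4-byte) and ILP64 (8-byte) interfaces on 64-bit systems.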

Newton
------

SIGSEGV in JVM when calculating with large matrices

I am having trouble running a shared object library used by a Java application when the problem size is large. There seems to be a bug somewhere in the interaction between Java and MKL. Hopefully someone has seen this or can reproduce it.

To reproduce the JVM crash, simply modify the source code of the dgemm.java example supplied with MKL 10.1 to include these lines:

PARDISO: turning off on-screen output

Hi,

The diagnostics that PARDISO prints to the screen are helpful, but my code calls PARDISO repeatedly, so I would like to be able to read my own output without it being interrupted by PARDISO's. This is especially bothersome for MPI code, since every process is printing at the same time. Is there a way to turn this off? I've looked in the PARDISO section of the reference manual but was unable to find anything.
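
For reference, the statistics PARDISO prints are controlled by the msglvl argument of the pardiso call itself: msglvl = 1 prints the statistical report, msglvl = 0 suppresses it. A minimal C sketch follows; the test matrix and values are purely illustrative and not from the original post.

#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Small symmetric positive definite test system; upper triangle stored
       in CSR format with 1-based indices (the MKL PARDISO default). */
    MKL_INT n = 3;
    MKL_INT ia[4] = {1, 3, 5, 6};
    MKL_INT ja[5] = {1, 2, 2, 3, 3};
    double  a[5]  = {4.0, 1.0, 4.0, 1.0, 4.0};
    double  b[3]  = {1.0, 1.0, 1.0}, x[3];

    void   *pt[64]    = {0};   /* internal PARDISO handle, must start zeroed */
    MKL_INT iparm[64] = {0};   /* iparm[0] = 0 -> use the default settings   */
    MKL_INT maxfct = 1, mnum = 1, mtype = 2, nrhs = 1, error = 0;
    MKL_INT idum = 0, phase;
    double  ddum = 0.0;

    MKL_INT msglvl = 0;        /* 0 = print no statistics, 1 = print them    */

    phase = 13;                /* analysis + factorization + solve */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n,
            a, ia, ja, &idum, &nrhs, iparm, &msglvl, b, x, &error);
    printf("error = %d, x = %f %f %f\n", (int)error, x[0], x[1], x[2]);

    phase = -1;                /* release internal memory */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n,
            &ddum, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
    return 0;
}

The Fortran interface takes the same msglvl argument.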

Xi Lin

PARDISO using > 2GB results in segfault

Hi,

I'm using the Portland Group Fortran 90 compiler (pgf90) to link with PARDISO in the MKL 10.0 library. When the memory needed by PARDISO exceeds 2 GB, I get a segmentation fault.

My compiler flags are

-O0 -g -Mbounds -Mlarge_arrays -mcmodel=medium

My link flags are

-tp=k8-64 -mp -Mcache_align -Mlarge_arrays -mcmodel=medium -L$(MKL_LINK) -lmkl_solver -lmkl_em64t -lguide -lacml_mp -lacml

MKL in VMware

We are thinking about getting a hefty computer and creating a number of virtual machines for use by various projects. I have created Java wrappers for many of the BLAS functions I'm using in MKL, but I'm wondering whether they will still work, or work as efficiently, with yet another layer of virtualization underneath. Has anyone had experience running MKL from Java on Windows inside a VMware virtual machine, or with any subset of that setup?

Thanks!

dsyevr in Fortran, MKL not working

      integer i,j,k,toff
      double precision , dimension(0:17):: ev
      double precision , dimension(1:18,1:18):: evect
      double precision , dimension(0:17,0:17):: ttt
      double precision work(18*64),dlamch
      integer iwork(18*10), status, support(2*18)

      toff=18
! Fill the test matrix with ones.  As written, j starts at 9, so
! columns 0 through 8 of ttt are never initialized.
      do 120 i=0, toff-1
         do 120 j=9, toff-1
          ttt(i,j)=1.d0
          write(*,*) i,j,ttt(i,j)
 120  continue

! Argument order expected by LAPACK's dsyevr:
!   (jobz, range, uplo, n, a, lda, vl, vu, il, iu, abstol,
!    m, w, z, ldz, isuppz, work, lwork, iwork, liwork, info)
! With range = 'a' the vl/vu/il/iu arguments are not referenced.
      call dsyevr('v','a','l',toff,ttt,toff,0.d0,0.d0,0,0,dlamch('s'),
     1     i,ev,evect,toff,support,work,toff*64,iwork,toff*10,status)