Intel® Math Kernel Library

Multithreading when called by C++, not when called by R

Hi everyone, I have been struggling with a problem for quite some time, and I would greatly appreciate your input.

I have a program that calls Intel MKL's dgemm function many times. In fact, to demonstrate my problem, I used exactly the same code as in this dgemm tutorial: https://software.intel.com/en-us/node/429920.
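
In case it helps others reproduce the setup, here is a minimal sketch of the kind of call that tutorial makes (the sizes and fill values below are placeholders, not the tutorial's), plus a query of the thread count MKL reports at run time, which is the first thing worth comparing between the C++ and R hosts:

#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Placeholder sizes; the tutorial uses its own dimensions. */
    const MKL_INT m = 1000, k = 200, n = 1000;
    double *A = (double *)mkl_malloc(m * k * sizeof(double), 64);
    double *B = (double *)mkl_malloc(k * n * sizeof(double), 64);
    double *C = (double *)mkl_malloc(m * n * sizeof(double), 64);
    if (A == NULL || B == NULL || C == NULL) return 1;

    for (MKL_INT i = 0; i < m * k; i++) A[i] = 1.0;
    for (MKL_INT i = 0; i < k * n; i++) B[i] = 2.0;
    for (MKL_INT i = 0; i < m * n; i++) C[i] = 0.0;

    /* How many threads does MKL believe it may use in this process? */
    printf("mkl_get_max_threads() = %d\n", mkl_get_max_threads());

    /* C = 1.0 * A * B + 0.0 * C, row-major */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                m, n, k, 1.0, A, k, B, n, 0.0, C, n);

    mkl_free(A); mkl_free(B); mkl_free(C);
    return 0;
}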

Parallel cluster sparse direct solver on very large matrix

Hi there,

We have working C code that solves a large, quite sparse linear system using the cluster sparse solver routines in MKL. We generally followed the `cl_solver_unsym_distr_c.c` example code.

We now want to use this code on an even larger sparse matrix that has over 3 billion non-zero entries (it is still genuinely sparse). It appears that the cluster sparse solver insists on the number of non-zeros being specified as an MKL_INT, which we are obviously overflowing. Passing in a long instead of an int generates an error.
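
One route worth checking (an assumption on our part, not something we have confirmed) is the ILP64 interface: when the code is compiled with -DMKL_ILP64 and linked against the ILP64 interface and BLACS libraries instead of the LP64 ones, MKL_INT itself becomes a 64-bit integer, so a non-zero count above 2^31 no longer overflows. A small sketch for verifying which interface a build actually picked up:

#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Over 3 billion non-zeros cannot be represented by a 32-bit MKL_INT. */
    long long nnz = 3000000000LL;

    printf("sizeof(MKL_INT) = %zu bytes\n", sizeof(MKL_INT));
    if (sizeof(MKL_INT) < 8)
        printf("LP64 build: nnz = %lld would overflow MKL_INT\n", nnz);
    else
        printf("ILP64 build: nnz = %lld fits in MKL_INT\n", nnz);
    return 0;
}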

How to set adequate parameters for the FEAST eigensolver

I'm using the FEAST general eigensolver to solve eigenvalue equations. However, I find it difficult to follow a general rule for setting emin, emax and m0. It seems I need to know the range of the eigenvalues in advance: if I set emax to a large value that exceeds the eigenvalues of the m0 modes, the solver reports error message 3. On the other hand, if I set emax too small, the solver gives incorrect predictions of the eigenvalues.
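
For reference, here is a minimal sketch of how emin, emax and m0 enter a FEAST call (dfeast_scsrev for a real symmetric CSR matrix; the matrix and the interval below are placeholders, not values from my actual problem). As far as I understand, info = 3 means more eigenvalues were found inside [emin, emax] than the subspace size m0 allows, so the usual fix is to enlarge m0 or shrink the interval:

#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Tiny placeholder symmetric tridiagonal matrix (upper triangle, CSR, 1-based). */
    const MKL_INT n = 3;
    double  a[]  = { 2.0, -1.0, 2.0, -1.0, 2.0 };
    MKL_INT ia[] = { 1, 3, 5, 6 };
    MKL_INT ja[] = { 1, 2, 2, 3, 3 };

    MKL_INT fpm[128];
    feastinit(fpm);                  /* default FEAST parameters */

    double  emin = 0.0, emax = 4.0;  /* search interval must enclose the wanted eigenvalues */
    MKL_INT m0   = 3;                /* upper estimate of #eigenvalues inside [emin, emax] */

    double  e[3], x[3 * 3], res[3];  /* sized by m0 (and n*m0 for the eigenvectors) */
    double  epsout;
    MKL_INT loop, m, info;
    char    uplo = 'U';

    dfeast_scsrev(&uplo, &n, a, ia, ja, fpm, &epsout, &loop,
                  &emin, &emax, &m0, e, x, &m, res, &info);

    printf("info = %lld, eigenvalues found = %lld\n",
           (long long)info, (long long)m);
    return 0;
}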

I'm solving a vibration problem and I want to do the following 2 things:

Fix arithmetic error

I am performing a distributed Cholesky factorization and inversion of a matrix. I assume that the serial versions of these routines give the correct result, so I am trying to make my distributed version reproduce the serial result, following the advice in this thread.

I called mkl_cbwr_set(MKL_CBWR_AVX), but I got the very same result with and without it. Here are the last 5 cells of the result matrix:

distributed:
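
For reference, here is a minimal sketch of the CNR setup being described, with the return code checked explicitly; as far as I know, mkl_cbwr_set fails (and the requested branch is not applied) if it is called after any other MKL routine has already run in the process, so the check is worth having on every rank:

#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Must run before any other MKL call in the process (on every MPI rank). */
    int err = mkl_cbwr_set(MKL_CBWR_AVX);
    if (err != MKL_CBWR_SUCCESS) {
        printf("mkl_cbwr_set failed with code %d\n", err);
        return 1;
    }
    printf("CNR branch now in effect: %d\n", mkl_cbwr_get(MKL_CBWR_BRANCH));

    /* ... distributed Cholesky factorization / inversion goes here ... */
    return 0;
}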

PARDISO Questions

Hi Pardiso experts,

I have a couple of questions about PARDISO.

First a simple one. I am using out of core PARDISO and specify PARDISO_OOC_KEEP_FILE=0 to keep the files.

But then, when I want to delete them, I change PARDISO_OOC_KEEP_FILE to 1 and call pardiso with the finalize flag.
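
For concreteness, here is a sketch of that call sequence on a tiny placeholder matrix, assuming "the finalize flag" means the phase = -1 release call; whether the OOC scratch files are removed at that step depends on the keep-file setting in effect at the time:

#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Tiny placeholder SPD matrix (upper triangle, CSR, 1-based). */
    MKL_INT n = 3;
    MKL_INT ia[] = { 1, 3, 5, 6 };
    MKL_INT ja[] = { 1, 2, 2, 3, 3 };
    double  a[]  = { 2.0, -1.0, 2.0, -1.0, 2.0 };
    double  b[]  = { 1.0, 1.0, 1.0 }, x[3];

    void   *pt[64];
    MKL_INT iparm[64];
    MKL_INT mtype = 2;                    /* real symmetric positive definite */
    MKL_INT maxfct = 1, mnum = 1, nrhs = 1, msglvl = 0, error = 0;
    MKL_INT phase, idum = 0;
    double  ddum = 0.0;

    pardisoinit(pt, &mtype, iparm);       /* default iparm, cleared handle */

    phase = 13;                           /* analysis + factorization + solve */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja,
            &idum, &nrhs, iparm, &msglvl, b, x, &error);
    printf("solve error = %lld\n", (long long)error);

    phase = -1;                           /* release all internal memory */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n, &ddum, ia, ja,
            &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &error);
    return 0;
}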

sparse_matrix_checker error value constants

The MKL manual lists the following constants for the error value returned by the sparse_matrix_checker subroutine: MKL_SPARSE_CHECKER_SUCCESS, MKL_SPARSE_CHECKER_NON_MONOTONIC, MKL_SPARSE_CHECKER_OUT_OF_RANGE, MKL_SPARSE_CHECKER_NONTRIANGULAR and MKL_SPARSE_CHECKER_NONORDERED.

Where are they defined?

Thank you.
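
For context, this is how I am calling the checker (a sketch modeled on the matrix-checker setup in the PARDISO examples; the matrix is a placeholder). The names only resolve for me when mkl.h / mkl_sparse_handle.h is included, so I suspect that is where they live, but I would like confirmation:

#include <stdio.h>
#include "mkl.h"

int main(void)
{
    /* Placeholder upper-triangular CSR matrix, 1-based indexing. */
    MKL_INT n    = 3;
    MKL_INT ia[] = { 1, 3, 5, 6 };
    MKL_INT ja[] = { 1, 2, 2, 3, 3 };

    sparse_struct handle;
    sparse_matrix_checker_init(&handle);
    handle.n                = n;
    handle.csr_ia           = ia;
    handle.csr_ja           = ja;
    handle.indexing         = MKL_ONE_BASED;
    handle.matrix_structure = MKL_UPPER_TRIANGULAR;
    handle.matrix_format    = MKL_CSR;
    handle.message_level    = MKL_PRINT;
    handle.print_style      = MKL_C_STYLE;

    sparse_checker_error_values err = sparse_matrix_checker(&handle);
    if (err == MKL_SPARSE_CHECKER_SUCCESS)
        printf("CSR structure is consistent\n");
    else
        printf("checker returned error code %d\n", (int)err);
    return 0;
}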

Parameter double **mat to cblas_dgemm

Dear Intel Forum,

I am developing a molecular dynamics system that uses several MKL functions. The atom positions are stored as a double **mat, which is passed to MKL functions such as cblas_dgemm. However, it is necessary to convert the double** pointer to a double*, like this:

/* Flatten the natom x natom matrix m (double**) into the contiguous
 * row-major array v (double*); natom is a global in the full program. */
void mv(double **m, double *v)
{
    int i, j, z = 0;

    for (i = 0; i < natom; i++)
        for (j = 0; j < natom; j++)
            v[z++] = m[i][j];
}

Is there a way to use cblas_dgemm without this conversion?
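
One way I can think of to avoid the copy, assuming I control how the matrix is allocated: store the data in a single contiguous block and build the double** row pointers into that block. Then m[i][j] still works everywhere in the molecular dynamics code, and m[0] can be handed straight to cblas_dgemm as a row-major array. A sketch (alloc_matrix/free_matrix are just illustrative helper names):

#include <stdio.h>
#include <stdlib.h>
#include "mkl.h"

/* Allocate an n x n matrix as one contiguous block plus row pointers,
 * so m[i][j] works and m[0] is a valid row-major array for BLAS. */
static double **alloc_matrix(int n)
{
    double  *data = (double *)mkl_malloc((size_t)n * n * sizeof(double), 64);
    double **rows = (double **)malloc((size_t)n * sizeof(double *));
    for (int i = 0; i < n; i++)
        rows[i] = data + (size_t)i * n;
    return rows;
}

static void free_matrix(double **m)
{
    mkl_free(m[0]);   /* contiguous data block */
    free(m);          /* row pointer array */
}

int main(void)
{
    int natom = 4;    /* placeholder size */
    double **a = alloc_matrix(natom);
    double **b = alloc_matrix(natom);
    double **c = alloc_matrix(natom);

    for (int i = 0; i < natom; i++)
        for (int j = 0; j < natom; j++) {
            a[i][j] = (i == j) ? 1.0 : 0.0;
            b[i][j] = i + j;
            c[i][j] = 0.0;
        }

    /* No conversion loop needed: a[0], b[0], c[0] are contiguous row-major arrays. */
    cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                natom, natom, natom, 1.0, a[0], natom, b[0], natom,
                0.0, c[0], natom);

    printf("c[1][2] = %f\n", c[1][2]);
    free_matrix(a); free_matrix(b); free_matrix(c);
    return 0;
}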
