Intel® Math Kernel Library

Heap Corruption and crash when calling pardiso

Hello,

I have hopefully linked PARDISO correctly to my application. When I call pardiso (with phase=12) in the debug configuration, I receive a debug error:

Microsoft Visual C++ Debug Library

Debug Error!

Program: c:lala...

HEAP CORRUPTION DETECTED: before Normal block (#0) at 0x02150FD0.

CRT detected that the application wrote to memory before the start of heap buffer.

I am guessing that pardiso is somehow writing where it shouldn't be in memory?

Has anybody seen this before? :<
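For comparison with a known-good call sequence, here is a minimal sketch (not the poster's code) of a phase=12 call on a tiny SPD matrix. The matrix, mtype, and variable names are illustrative assumptions; the point is the 64-element pt and iparm arrays, since undersized versions of these are a classic cause of exactly this kind of heap corruption.

/* Minimal sketch: PARDISO analysis + factorization (phase=12) on a 3x3 SPD matrix. */
#include <stdio.h>
#include "mkl_pardiso.h"
#include "mkl_types.h"

int main(void)
{
    MKL_INT n = 3, mtype = 2;              /* real symmetric positive definite   */
    MKL_INT ia[4] = { 1, 3, 5, 6 };        /* 1-based CSR row pointers (n+1)     */
    MKL_INT ja[5] = { 1, 2, 2, 3, 3 };     /* upper triangle only for mtype=2    */
    double  a[5]  = { 2.0, -1.0, 2.0, -1.0, 2.0 };

    void   *pt[64] = { 0 };                /* internal handle: must have 64 entries */
    MKL_INT iparm[64];                     /* must have 64 entries as well          */
    MKL_INT maxfct = 1, mnum = 1, phase = 12, nrhs = 1, msglvl = 1, error = 0;
    MKL_INT perm[3];
    double  b[3], x[3];                    /* unused at phase=12 but must be valid  */

    pardisoinit(pt, &mtype, iparm);        /* zero the handle, fill iparm defaults  */

    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n,
            a, ia, ja, perm, &nrhs, iparm, &msglvl, b, x, &error);
    printf("phase=12 returned error = %d\n", (int)error);

    phase = -1;                            /* release internal memory */
    pardiso(pt, &maxfct, &mnum, &mtype, &phase, &n,
            a, ia, ja, perm, &nrhs, iparm, &msglvl, b, x, &error);
    return 0;
}

If a small program like this runs cleanly but the real application still corrupts the heap, the usual suspects are ia being one element too short (it needs n+1 entries) or 0-based indices being passed while PARDISO is configured for 1-based indexing.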

Iterative Sparse Complex Linear System Solver

Sir,

I looked through the MKL and IMSL libraries for a general-purpose iterative sparse system solver for complex (non-Hermitian) matrices. I could not find one; I only found two direct sparse solvers in IMSL. Can you tell me if there is such a solver, e.g. based on BiCGSTAB or QMR, in the MKL or IMSL libraries? Or, how soon will we have one in the next release?

Thank you in advance!

David
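Until a library routine is available, an unpreconditioned complex BiCGSTAB can be hand-rolled in relatively few lines. The sketch below is plain illustrative C99 (the CSR matvec and all names are assumptions, not an MKL or IMSL API); in practice the matvec and dot products could be handed off to MKL's complex sparse BLAS and level-1 BLAS routines.

#include <stdio.h>
#include <complex.h>
#include <math.h>

typedef double complex cplx;

/* y = A*x for a complex CSR matrix with 0-based indexing. */
static void csr_matvec(int n, const int *ia, const int *ja, const cplx *a,
                       const cplx *x, cplx *y)
{
    for (int i = 0; i < n; ++i) {
        cplx s = 0.0;
        for (int k = ia[i]; k < ia[i + 1]; ++k)
            s += a[k] * x[ja[k]];
        y[i] = s;
    }
}

static cplx dotc(int n, const cplx *x, const cplx *y)   /* <x,y> = x^H y */
{
    cplx s = 0.0;
    for (int i = 0; i < n; ++i) s += conj(x[i]) * y[i];
    return s;
}

static double nrm2(int n, const cplx *x) { return sqrt(creal(dotc(n, x, x))); }

/* Solve A*x = b; x holds the initial guess on entry and the solution on exit.
 * Returns the iteration count on convergence, -1 otherwise.                  */
static int bicgstab(int n, const int *ia, const int *ja, const cplx *a,
                    const cplx *b, cplx *x, int maxit, double tol)
{
    cplx r[n], rhat[n], p[n], v[n], s[n], t[n];   /* stack work vectors, for brevity */
    csr_matvec(n, ia, ja, a, x, r);
    for (int i = 0; i < n; ++i) { r[i] = b[i] - r[i]; rhat[i] = r[i]; p[i] = v[i] = 0.0; }

    cplx rho = 1.0, alpha = 1.0, omega = 1.0;
    double bnrm = nrm2(n, b);
    if (bnrm == 0.0) bnrm = 1.0;

    for (int it = 0; it < maxit; ++it) {
        cplx rho_new = dotc(n, rhat, r);
        cplx beta = (rho_new / rho) * (alpha / omega);
        for (int i = 0; i < n; ++i) p[i] = r[i] + beta * (p[i] - omega * v[i]);
        csr_matvec(n, ia, ja, a, p, v);
        alpha = rho_new / dotc(n, rhat, v);
        for (int i = 0; i < n; ++i) s[i] = r[i] - alpha * v[i];
        csr_matvec(n, ia, ja, a, s, t);
        omega = dotc(n, t, s) / dotc(n, t, t);
        for (int i = 0; i < n; ++i) {
            x[i] += alpha * p[i] + omega * s[i];
            r[i]  = s[i] - omega * t[i];
        }
        if (nrm2(n, r) / bnrm < tol) return it + 1;   /* converged */
        rho = rho_new;
    }
    return -1;   /* no convergence within maxit */
}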

MKL DFT with Matlab MEX application: environment variable needed

Hello. I'm trying to use MKL DFTs in a MEX application. I recall from a few years back that there's an environment variable that needs to be set, but I don't remember the reason or exactly what it is, and can't find it online. Without it, Matlab crashes during the FFT plan creation. If someone can remind me what the variable is, I would appreciate it. I am using Matlab 7 and MKL 10.
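To help isolate the failing step, here is a hedged sketch of a bare MEX gateway that does nothing but create and commit an MKL DFTI descriptor (the transform length and the in-place complex setup are assumed values); descriptor creation and commit is the "FFT plan creation" step where the crash is reported.

#include "mex.h"
#include "mkl_dfti.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    MKL_LONG n = 1024;                      /* transform length: an assumed value */
    DFTI_DESCRIPTOR_HANDLE h = NULL;
    MKL_LONG status;

    (void)nlhs; (void)nrhs; (void)prhs;     /* unused in this bare-bones sketch */

    /* Descriptor creation + commit: the step that reportedly crashes Matlab. */
    status = DftiCreateDescriptor(&h, DFTI_DOUBLE, DFTI_COMPLEX, 1, n);
    if (status == 0)
        status = DftiCommitDescriptor(h);
    if (status != 0)
        mexErrMsgTxt(DftiErrorMessage(status));

    /* A real MEX file would call DftiComputeForward(h, data) here. */
    DftiFreeDescriptor(&h);
    plhs[0] = mxCreateDoubleScalar((double)status);
}

If this minimal gateway already crashes inside DftiCreateDescriptor/DftiCommitDescriptor, the problem is in how the MEX file is linked against MKL rather than in the application's FFT code.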

2D FFT with leading dimension divisible by 2048

The MKL user guide says that, for best performance, 2D arrays whose leading dimension is divisible by 2048 should be avoided. Could someone please clarify the nature of this restriction, particularly for FFT?

For example, I have a 2D array that is 1500x1500 pixels. To use a radix-2 FFT implementation, the typical approach is to pad the array up to 2048x2048 and then run the FFT. But it seems that this is inefficient for MKL. So what would be the most efficient way to perform FFT on such an array?

Thanks.
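Two points may help here. First, MKL's DFT is not radix-2 only: it handles arbitrary lengths, and 1500 = 2^2 * 3 * 5^3 factors into small primes, so transforming the 1500x1500 array directly is usually preferable to padding it to 2048x2048. Second, if a 2048-wide layout is unavoidable, the usual workaround for the "leading dimension divisible by 2048" advice (the underlying issue is typically cache-set aliasing for power-of-two pitches) is to pad the row pitch slightly and describe the layout to the descriptor via strides. A minimal sketch, where the padded pitch of 2080 is an arbitrary assumed value:

#include <stdio.h>
#include <stdlib.h>
#include "mkl_dfti.h"

int main(void)
{
    MKL_LONG sizes[2]   = { 2048, 2048 };   /* logical 2D transform sizes         */
    MKL_LONG strides[3] = { 0, 2080, 1 };   /* {offset, row stride, column stride}:
                                               rows are stored 2080 elements apart */
    MKL_Complex16 *data = calloc((size_t)(2048 * 2080), sizeof(MKL_Complex16));
    DFTI_DESCRIPTOR_HANDLE h = NULL;
    MKL_LONG status;

    if (!data) return 1;

    status = DftiCreateDescriptor(&h, DFTI_DOUBLE, DFTI_COMPLEX, 2, sizes);
    if (status == 0) status = DftiSetValue(h, DFTI_INPUT_STRIDES, strides);
    if (status == 0) status = DftiSetValue(h, DFTI_OUTPUT_STRIDES, strides); /* in-place */
    if (status == 0) status = DftiCommitDescriptor(h);
    if (status == 0) status = DftiComputeForward(h, data);   /* in-place forward FFT */

    printf("DFTI status = %ld\n", (long)status);
    DftiFreeDescriptor(&h);
    free(data);
    return 0;
}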

Pardiso issues (small parallel speedup, direct-iterative solver crashes)

Hi,

I am beta testing PARDISO for Mac OS X. I am doing this on a Mac Pro that has dual Intel Xeon processors with 4 cores each (8 cores total) and 4 GB of RAM. The OS is Mac OS X Leopard 10.5.3.

My matrices are about 50,000 x 50,000 and quite sparse (about 3 million non-zero elements). They arise from nonlinear elasticity problems solved with the Finite Element Method, and are, in my opinion, a very common kind of matrix that one would use with PARDISO.

*** Issue #1: small parallel speedup
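One setting worth double-checking when the parallel speedup is disappointing is the thread count PARDISO actually sees. A minimal sketch, assuming the 8-core machine described above and the MKL 10.x convention that iparm(3) carries the OpenMP thread count (routine names are real MKL/OpenMP calls; the values are assumptions):

#include <omp.h>
#include "mkl.h"

/* Thread-count knobs that PARDISO's parallel factorization depends on. */
static void configure_threads_for_pardiso(MKL_INT iparm[64])
{
    omp_set_num_threads(8);    /* OpenMP threads available to MKL              */
    mkl_set_num_threads(8);    /* MKL's own thread limit                       */
    iparm[2] = 8;              /* iparm(3): OpenMP thread count that PARDISO
                                  reads in the MKL 10.x era (an assumption)    */
}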

MKL_DSS_OOC_VARIABLE - anyone got this to work?


I have recently moved to MKL 10.0 Update 3 and was hoping that the out-of-core (OOC) mode of PARDISO/DSS might actually work...

My well-tested solver works perfectly fine without MKL_DSS_OOC_VARIABLE in the options, but when I add it to the options passed to dss_create, the DSS solver crashes inside dss_factor_real on a medium-sized problem (not that large!).

Andrew
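For reference, here is a hedged sketch of the DSS call sequence with the OOC option added at dss_create, using a toy 3x3 SPD matrix as a stand-in for the real problem (the matrix data and the option combination are illustrative):

#include <stdio.h>
#include "mkl_dss.h"
#include "mkl_types.h"

int main(void)
{
    const MKL_INT nRows = 3, nCols = 3, nNonZeros = 5, nRhs = 1;
    MKL_INT rowIndex[4] = { 1, 3, 5, 6 };          /* 1-based CSR, upper triangle */
    MKL_INT columns[5]  = { 1, 2, 2, 3, 3 };
    double  values[5]   = { 2.0, -1.0, 2.0, -1.0, 2.0 };
    double  rhs[3] = { 1.0, 2.0, 3.0 }, sol[3];

    _MKL_DSS_HANDLE_t handle;
    MKL_INT createOpt = MKL_DSS_DEFAULTS + MKL_DSS_OOC_VARIABLE;  /* OOC mode */
    MKL_INT opt  = MKL_DSS_DEFAULTS;
    MKL_INT sym  = MKL_DSS_SYMMETRIC;
    MKL_INT type = MKL_DSS_POSITIVE_DEFINITE;
    MKL_INT error;

    error = dss_create(handle, createOpt);
    if (error == MKL_DSS_SUCCESS)
        error = dss_define_structure(handle, sym, rowIndex, nRows, nCols,
                                     columns, nNonZeros);
    if (error == MKL_DSS_SUCCESS)
        error = dss_reorder(handle, opt, 0);
    if (error == MKL_DSS_SUCCESS)
        error = dss_factor_real(handle, type, values);   /* reported crash site */
    if (error == MKL_DSS_SUCCESS)
        error = dss_solve_real(handle, opt, rhs, nRhs, sol);

    printf("DSS status = %d\n", (int)error);
    dss_delete(handle, opt);
    return 0;
}

If a toy case like this runs in OOC mode but the medium-sized problem still crashes, the difference in working-set size (and in the OOC temporary-file configuration) is the natural place to look.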
