Threading

Rebuilding MPSS-3.2 (Yocto) release from source

Dear community,

I think I have managed to make some progress towards rebuilding the Intel Xeon Phi MPSS-3.2 release (with Yocto, Poky, BitBake, ...) so that I can add some features required by our installation. Now I'm stuck at the following point, with source code missing to rebuild the minimal MPSS image:

[OMP Target] Difference in mapping of global arrays (malloc vs static)

There are two ways to have a global array: use a pointer and malloc, or just define it as an array:

#pragma omp declare target
int gArray[10];
int *gVals; /* somewhere later: gVals = malloc(10 * sizeof(int)); */
#pragma omp end declare target

I thought these were equivalent in handling, but I have just discovered a huge difference: when I map them to a target, only gVals is actually mapped. If I want the values of gArray on my device, I have to use "target update". Is this a bug in the current icc, or is this covered by the spec? I could not find anything specific about it.
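For reference, here is a minimal sketch of the behaviour described above. The variable names and sizes come from the snippet; the specific map/update clauses are my own illustration of one way to get both sets of values onto the device, not a statement of what the spec mandates:

#include <stdio.h>
#include <stdlib.h>

#pragma omp declare target
int gArray[10];
int *gVals;
#pragma omp end declare target

int main(void)
{
    gVals = malloc(10 * sizeof(int));
    for (int i = 0; i < 10; ++i) {
        gArray[i] = i;   /* device copy is NOT refreshed by a map clause */
        gVals[i]  = i;   /* transferred by the map clause below */
    }

    /* The statically declared gArray already has a device instance
       (created by "declare target"), so its host values need an
       explicit refresh: */
    #pragma omp target update to(gArray)

    /* The heap buffer behind gVals is transferred by mapping it: */
    #pragma omp target map(to: gVals[0:10])
    {
        printf("gArray[5] = %d, gVals[5] = %d\n", gArray[5], gVals[5]);
    }

    free(gVals);
    return 0;
}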

dss_solver gives non-stable solve

Hi there,

I am using mkl_dss.f77 to solve large sparse non-linear equations by calling the DSS solver iteratively. I just noticed that, although I set the same initial values, I do not obtain the same result every time. I found that this is because the DSS solver returns slightly different results on each run, and the differences add up over the iterations, which leads to an unstable solve of the large sparse non-linear equations.

How can I make the MKL DSS solver produce reproducible results? Could anyone give me some advice on this subject?
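One avenue worth trying (an assumption on my part, since the thread does not confirm the cause): run-to-run variation in threaded MKL usually comes from non-deterministic ordering of floating-point reductions, and MKL's Conditional Numerical Reproducibility (CNR) setting can pin the code path. A minimal C sketch, assuming MKL 11.0 or later:

#include <stdio.h>
#include <mkl.h>

int main(void)
{
    /* Must be called before any other MKL routine in the process.
       MKL_CBWR_COMPATIBLE picks the most conservative (and slowest)
       code path; MKL_CBWR_AUTO keeps results consistent from run to
       run on the same machine. */
    if (mkl_cbwr_set(MKL_CBWR_COMPATIBLE) != MKL_CBWR_SUCCESS) {
        fprintf(stderr, "CNR mode not supported on this CPU\n");
        return 1;
    }

    /* Fixing the thread count removes another source of variation. */
    mkl_set_num_threads(1);

    /* ... set up and call the DSS routines as before ... */
    return 0;
}

The same effect is available without recompiling by setting the MKL_CBWR environment variable.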

Out of memory error with Cpzgemr2d

Hello everybody, I am trying to distribute a square complex (double precision) matrix over a BLACS grid using the Cpzgemr2d function. The global matrix resides on a single "master" node, and I need to distribute it over my grid as a preliminary step of a linear-system solution. Everything works fine when I run my code with matrices of about 2 GB or smaller, using various BLACS grids (2x2, 3x3, 4x4, etc.) with a row/column block size of 64.
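For context, a sketch of the redistribution pattern described above (master-only global matrix, block-cyclic target). All names and sizes here are illustrative, and the extern prototypes are written out by hand because most ScaLAPACK distributions ship no C header for the BLACS/redistribution routines, so treat this as an assumption-laden outline rather than the poster's actual code:

#include <stdlib.h>

typedef struct { double re, im; } dcomplex;

/* Hand-written prototypes; check them against your ScaLAPACK/MKL docs. */
extern void Cblacs_pinfo(int *mypnum, int *nprocs);
extern void Cblacs_get(int ictxt, int what, int *val);
extern void Cblacs_gridinit(int *ictxt, char *layout, int nprow, int npcol);
extern void Cblacs_gridinfo(int ictxt, int *nprow, int *npcol, int *myrow, int *mycol);
extern void Cblacs_gridexit(int ictxt);
extern void Cblacs_exit(int notdone);
extern void Cpzgemr2d(int m, int n, dcomplex *a, int ia, int ja, int *desca,
                      dcomplex *b, int ib, int jb, int *descb, int gcontext);
extern int  numroc_(int *n, int *nb, int *iproc, int *isrcproc, int *nprocs);
extern void descinit_(int *desc, int *m, int *n, int *mb, int *nb, int *irsrc,
                      int *icsrc, int *ictxt, int *lld, int *info);

int main(void)   /* assumes nprocs == nprow * npcol */
{
    int n = 8192, nb = 64;        /* global order illustrative; block size from the post */
    int nprow = 2, npcol = 2;     /* one of the grids mentioned in the post */
    int iam, nprocs, ctx_all, ctx_one, myrow, mycol, info, zero = 0;
    int desc_glob[9], desc_dist[9];

    Cblacs_pinfo(&iam, &nprocs);
    Cblacs_get(0, 0, &ctx_one);
    Cblacs_gridinit(&ctx_one, "Row", 1, 1);         /* 1x1 grid: master only */
    Cblacs_get(0, 0, &ctx_all);
    Cblacs_gridinit(&ctx_all, "Row", nprow, npcol); /* target process grid   */

    dcomplex *a_glob = NULL;
    if (iam == 0) {   /* the whole matrix lives on the master */
        a_glob = malloc((size_t)n * n * sizeof *a_glob);
        descinit_(desc_glob, &n, &n, &n, &n, &zero, &zero, &ctx_one, &n, &info);
    } else {
        desc_glob[1] = -1;   /* mark "not a member of this context" */
    }

    /* Local piece of the block-cyclic distributed copy. */
    Cblacs_gridinfo(ctx_all, &nprow, &npcol, &myrow, &mycol);
    int ml  = numroc_(&n, &nb, &myrow, &zero, &nprow);
    int nl  = numroc_(&n, &nb, &mycol, &zero, &npcol);
    int lld = ml > 1 ? ml : 1;
    dcomplex *a_dist = malloc((size_t)ml * nl * sizeof *a_dist);
    descinit_(desc_dist, &n, &n, &nb, &nb, &zero, &zero, &ctx_all, &lld, &info);

    /* Redistribute; the last argument must be a context spanning every
       process that owns a piece of either matrix. */
    Cpzgemr2d(n, n, a_glob, 1, 1, desc_glob, a_dist, 1, 1, desc_dist, ctx_all);

    free(a_glob);
    free(a_dist);
    Cblacs_gridexit(ctx_all);
    Cblacs_exit(0);
    return 0;
}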

Inner boundary condition setting for solving the Poisson equation

Dear all, I need your help with setting an inner boundary condition.

For simplicity, let us take a 2-D Poisson equation as an example, corresponding to “s_Poisson_2D_f.f90” in the MKL library examples.

Case A

In case A, it is clear that we can assign the following arrays to set the boundary condition.

For example, in s_Poisson_2D_f.f90:

bd_ax(iy) = 1.0E0
bd_bx(iy) = 1.0E0
bd_ay(ix) = -2.0*pi*sin(2*pi*(ix-1)/nx)
bd_by(ix) =  2.0*pi*sin(2*pi*(ix-1)/nx)
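For comparison, the same Case A data can be set up through the C interface of the MKL Poisson library; the loops below just transcribe the Fortran assignments above, while the mesh sizes (nx = ny = 16) and the closing comment about the call sequence are illustrative assumptions, not facts from the post:

#include <math.h>
#include <mkl_poisson.h>   /* C interface to the MKL Poisson library */

#define NX 16
#define NY 16

int main(void)
{
    float bd_ax[NY + 1], bd_bx[NY + 1];  /* data on the x = ax and x = bx edges */
    float bd_ay[NX + 1], bd_by[NX + 1];  /* data on the y = ay and y = by edges */
    const float pi = 3.14159265f;

    /* Constant data on the two x-boundaries. */
    for (int iy = 0; iy <= NY; ++iy) {
        bd_ax[iy] = 1.0f;
        bd_bx[iy] = 1.0f;
    }

    /* Derivative-style data on the two y-boundaries; the 0-based C
       index ix replaces the 1-based Fortran ix-1. */
    for (int ix = 0; ix <= NX; ++ix) {
        bd_ay[ix] = -2.0f * pi * sinf(2.0f * pi * ix / NX);
        bd_by[ix] =  2.0f * pi * sinf(2.0f * pi * ix / NX);
    }

    /* These arrays are then handed to s_commit_Helmholtz_2D and
       s_Helmholtz_2D together with the BCtype string, exactly as the
       Fortran example does. */
    return 0;
}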

Case B

Internal consistency check failure when using DSS

Hi there,

I am working on a project that uses DSS to solve large sparse linear equations. However, I am getting an internal consistency check failure, and I can see no problem in my code. Could anyone help me figure out the problem? Thank you so much. Attached is my project file.
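For anyone comparing call orders: below is a minimal sketch of the standard DSS sequence with every return code checked, since calling the steps out of order or passing a structure that does not match the data is a typical way to trigger this failure. The 5x5 symmetric test system is purely illustrative:

#include <stdio.h>
#include "mkl_dss.h"

/* Stop at the first step that reports a problem; the step name usually
   localizes a consistency failure quickly. */
#define CHECK(err, where) \
    do { if ((err) != MKL_DSS_SUCCESS) { \
        printf("DSS error %d in %s\n", (int)(err), (where)); return 1; } } while (0)

int main(void)
{
    /* Small symmetric system, upper triangle in 1-based CSR form. */
    MKL_INT nRows = 5, nCols = 5, nNonZeros = 9, nRhs = 1;
    MKL_INT rowIndex[6] = {1, 6, 7, 8, 9, 10};
    MKL_INT columns[9]  = {1, 2, 3, 4, 5, 2, 3, 4, 5};
    double  values[9]   = {9.0, 1.5, 6.0, 0.75, 3.0, 0.5, 12.0, 0.625, 16.0};
    double  rhs[5]      = {1.0, 2.0, 3.0, 4.0, 5.0};
    double  sol[5];

    _MKL_DSS_HANDLE_t handle;
    MKL_INT opt  = MKL_DSS_DEFAULTS;
    MKL_INT sym  = MKL_DSS_SYMMETRIC;
    MKL_INT type = MKL_DSS_INDEFINITE;
    MKL_INT err;

    err = dss_create(handle, opt);
    CHECK(err, "dss_create");
    err = dss_define_structure(handle, sym, rowIndex, nRows, nCols,
                               columns, nNonZeros);
    CHECK(err, "dss_define_structure");
    err = dss_reorder(handle, opt, 0);
    CHECK(err, "dss_reorder");
    err = dss_factor_real(handle, type, values);
    CHECK(err, "dss_factor_real");
    err = dss_solve_real(handle, opt, rhs, nRhs, sol);
    CHECK(err, "dss_solve_real");
    err = dss_delete(handle, opt);
    CHECK(err, "dss_delete");

    printf("sol[0] = %f\n", sol[0]);
    return 0;
}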
