DSS - Memory problem

vaidyt:

Hello there,

I tried solving Ax = b with A being a symmetric matrix of size N = 526,338 with 20,787,787 nonzeros. The matrix is represented in CSR format, so the memory required to represent the linear system is:

Memory(values)   = 158.6 MB
Memory(columns)  =  79.3 MB
Memory(rowIndex) =   2.0 MB
Memory(RHS)      =   2.0 MB
Total            = 241.9 MB

I used the following options when creating the DSS handle:

MKL_INT opt = MKL_DSS_DEFAULTS;
MKL_INT sym = MKL_DSS_SYMMETRIC;
MKL_INT type = MKL_DSS_INDEFINITE;
MKL_INT opt_parallel = MKL_DSS_METIS_OPENMP_ORDER;

To my surprise, I found that the memory consumption (measured with Windows Task Manager and the VS2010 debugger) of the following two steps was out of proportion:

1. Reordering (dss_reorder) required 0.63 GB of memory.
2. Factorization (dss_factor_real) required 4.38 GB of memory.

On calling dss_delete, I recovered 6.39 GB, so it is quite clear that reordering and factorization take almost all of the memory. I am not sure why this much memory is used, considering that the matrix being factored is only about 240 MB. As far as I understand, the LU factorization should not require more than 2 * 240 = 480 MB for this problem. Am I right?

Although I am able to solve the system on a machine with 8 GB of RAM, the solver fails on a 32-bit machine with 4 GB of RAM. How do we make the solver work on such low-end machines?

Looking forward to your response at the earliest,

Thanks & Regards,
Vaidy

Chao Y (Intel):

Hi Vaidy,

For a sparse matrix, the LU factors are in many cases much denser than the original matrix because of fill-in, so the memory requirement can increase dramatically. For a problem of size N = 500K, the worst case (a fully dense factor) would be 500K * 500K * sizeof(double or float, depending on the precision used) / 2 (the matrix is symmetric) = about 1 TB. So 6 GB looks fine here.

If the available memory is enough for the problem, you can use the in-core functions. If it is not enough, you can use the out-of-core (OOC) solver.
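For reference, the out-of-core mode is configured through a small configuration file. This is a sketch based on the PARDISO OOC mechanism described in the MKL manual; the parameter names and defaults should be checked against your MKL version:

```
MKL_PARDISO_OOC_MAX_CORE_SIZE = 2048
MKL_PARDISO_OOC_PATH = ./ooc_temp
MKL_PARDISO_OOC_KEEP_FILE = 1
```

The file is typically named pardiso_ooc.cfg and read from the working directory by default; MAX_CORE_SIZE limits the in-core memory used (in megabytes), PATH is the prefix for the temporary swap files, and KEEP_FILE controls whether those files are deleted when the solver finishes.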

Thanks,
Chao

vaidyt:

Hello Chao,

Thanks a lot. I also thought this through and came to the same conclusion. A couple more questions:

1. How do we estimate the size of the L and U factors before solving, so as to decide whether to use the in-core or out-of-core solver? Do you know any rule of thumb for estimating the size of L and U from the sparsity of the original matrix (A in Ax = b)?

2. How do we decide, at runtime, whether to use a direct or an iterative solver based on the type of the system?

3. When using the in-core solver without enough memory, the code simply crashes in the LU factorization step. I have surrounded the code with try/catch, but the application still crashes without throwing any exception. How do we handle this?

Thanks & Regards,
Vaidy

Chao Y (Intel):

Vaidy,

A few more comments from our experts on your questions:

1) DSS has a statistics function that can report memory usage information:

dss_statistics(handle, opt, statArr, retValues)

(the statArr string can request statistics such as "Peakmem" and "Factormem").

2) There is no simple answer to this question. DSS and PARDISO in MKL are direct solvers; they cannot switch into an iterative mode, although they can do iterative refinement. If you are weighing the two approaches: direct solvers use more memory and give a more reliable answer, while iterative solvers use much less memory but their answers are somewhat less reliable. Usually the user has to decide beforehand which one suits the problem.

3) PARDISO returns an error code if it cannot allocate memory. Your C++ code should read the error code and throw an appropriate exception; PARDISO itself does NOT throw exceptions, as it is C code rather than C++.

Thanks,
Chao
