Hi, we have a program that has, up to now, worked fine with other compilers. We recently switched to Intel Visual Fortran and are ironing out "bugs" in the code. This recently appeared in a subroutine:

      SUBROUTINE sort_col_row(colv, rowv, indx, N, task)
*     Sort from smallest to largest by col # and within col by row #,
*     using heap sort.
*     Order in indx, i.e. indx(1) points to lowest column number:
*     indx(new #) = orig #
*     If task = 1 (used from PREF_SUP):
*     colv(orig #) = new #
      INTEGER N, task
      INTEGER colv(N), rowv(N), indx(N)
      INTEGER i, i0, c1, ci, col
      INTEGER, DIMENSION(:), ALLOCATABLE :: work, rowwork
      allocate(work(N))
      allocate(rowwork(N))
*     First some sorting is done, resulting in a vector of indices work.
C     indx = indx(work)    ! stack overflow on IFORT
      rowwork = indx(work)
      indx = rowwork

Before, we could simply perform the operation that is now commented out, indx = indx(work). Now, however, this causes a stack overflow for large cases, since indx = indx(work) seems to allocate the needed temporary array on the stack. For now we have worked around it by using an explicit intermediate array, as in the code above, but we would like to know the best way to solve this, using as few vectors/matrices as possible and without getting a stack overflow.

Best regards,
Andreas
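If the goal is to avoid any extra O(N) storage at all, the permutation can also be applied in place by following its cycles. The sketch below is a hypothetical helper, not part of the original subroutine; it assumes work holds a permutation of 1..N with positive entries, and it temporarily negates entries of work to mark visited positions, restoring them afterwards.

```fortran
      SUBROUTINE apply_perm(indx, work, N)
*     Replace indx by indx(work) in place, without a temporary array,
*     by following the cycles of the permutation in work.
*     Visited positions are flagged by negating work(j) and the signs
*     are restored at the end, so work is unchanged on exit.
      INTEGER N
      INTEGER indx(N), work(N)
      INTEGER i, j, tmp
      DO i = 1, N
         IF (work(i) .GT. 0) THEN
            tmp = indx(i)
            j = i
            DO WHILE (work(j) .NE. i)
               indx(j) = indx(work(j))
               work(j) = -work(j)
               j = -work(j)
            END DO
            indx(j) = tmp
            work(j) = -work(j)
         END IF
      END DO
*     Restore the signs of work
      DO i = 1, N
         work(i) = ABS(work(i))
      END DO
      END
```

This trades the O(N) temporary for a sign bit in work, which is fine here since work only holds indices in 1..N; if work must stay strictly read-only, a separate logical "visited" mask would be needed instead.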
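One common remedy, sketched here as a suggestion rather than a verified fix for this exact code, is to tell the Intel compiler to place array temporaries on the heap instead of the stack. Intel Fortran provides a -heap-arrays option for this; the exact spelling differs between the Linux/macOS and Windows drivers, and an optional size threshold (in KB) can be given so only large temporaries go to the heap. The filename below is a placeholder.

```shell
# Linux/macOS driver: put array temporaries on the heap
ifort -heap-arrays sort_col_row.f

# Windows (Intel Visual Fortran) driver spelling of the same option
ifort /heap-arrays sort_col_row.f

# Optionally, only heap-allocate temporaries larger than 10 KB
ifort -heap-arrays 10 sort_col_row.f
```

With this option the commented-out assignment indx = indx(work) should no longer blow the stack for large N, at the cost of a heap allocation per temporary. The alternative of raising the stack limit (ulimit -s on Linux, or the /F linker option on Windows) also works but just moves the threshold at which the overflow occurs.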