I am using a parallel code that works fine with a small array size on each slave node of a Linux cluster, say 40x40x40. But once I increase the array size, e.g. to 80x80x80 on each node of the same cluster, the code fails with a segmentation fault (SIGSEGV). I suspected the stack size limit, so I set ulimit to unlimited, but the problem was still there.

GDB tells me that the crash always happens in array operations, i.e. A = 0.2*B + C, where A, B, and C are all arrays. If I change the array operations to do-loops, the code works for large arrays. This seems really strange to me. Can anyone shed some light?

My system is ifort 8.1, Red Hat 9.0, and mpich 1.2.5. Thanks.
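To make the question concrete, here is a minimal sketch of the two forms I mean (the program name, array names, and the size parameter are illustrative, not my actual code):

```fortran
program temp_demo
  implicit none
  integer, parameter :: n = 80
  real :: A(n,n,n), B(n,n,n), C(n,n,n)
  integer :: i, j, k

  B = 1.0
  C = 2.0

  ! Array-syntax form: the compiler is free to evaluate the
  ! right-hand side into a temporary array, which it may place
  ! on the stack. This is the form that crashes for large n.
  A = 0.2*B + C

  ! Equivalent do-loop form: each element is computed and stored
  ! directly, so no whole-array temporary is needed. This form
  ! works even with the large arrays.
  do k = 1, n
     do j = 1, n
        do i = 1, n
           A(i,j,k) = 0.2*B(i,j,k) + C(i,j,k)
        end do
     end do
  end do
end program temp_demo
```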