When compiling my code in debug mode with -init=snan -init=zero -init=arrays (as advised by https://software.intel.com/en-us/articles/checking-for-unitialized-varia...), I get a segmentation fault when allocating a 'large' (2 GB) array.
- I run ulimit -s unlimited before executing, so stack size is not the problem
- ifort (IFORT) 17.0.4 20170411
- -init=snan -init=zero on their own pose no problem; the crash occurs only when -init=arrays is added
- It happens when allocating integer arrays of roughly 5*10^8 elements (~2 GB at 4 bytes per default integer)
- 536870912 elements is the first failing size (exactly 2^29)
- Problem occurs on new SKL nodes with 192 GB total RAM, BDW nodes
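A quick sanity check on the failing size (my own arithmetic, not something Intel has confirmed as the cause): 2^29 four-byte integers is exactly 2^31 bytes, i.e. the point at which a byte count no longer fits in a signed 32-bit integer. That makes a 32-bit size computation somewhere in the -init=arrays initialization path a plausible suspect.

```python
# First failing allocation from the report: 536870912 default integers.
elements = 536870912
bytes_per_int = 4               # default INTEGER is 4 bytes with ifort
total_bytes = elements * bytes_per_int

print(elements == 2**29)        # the reported threshold is exactly 2^29 elements
print(total_bytes == 2**31)     # which is exactly 2 GiB in bytes...
print(total_bytes > 2**31 - 1)  # ...one past the maximum signed 32-bit value
```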
program test
    integer, dimension(:), allocatable :: x
    allocate(x(66893*8192))
end program test
ifort -O0 -g -traceback -ftrapuv -debug all -debug-parameters -align -warn all -check all,bounds,uninit -init=snan -init=zero -init=arrays -o alloc alloc.f90
Does -check uninit catch every use of an uninitialised array element? If so, I might not need -init=arrays at all.