I'm porting some software that uses GSL to the Xeon Phi. I've downloaded GSL 1.16 and configured and compiled it with
./configure --host=x86_64-unknown-linux-gnu CC=icc CXX=icpc CFLAGS="-mmic -O2" (using icc 14.0.3 20140422)
The code compiles OK, but the test programs core dump on the Xeon Phi itself. Several GSL components' tests crash; one of them is 'vector':
    mic0> gdb ./test
    GNU gdb (GDB) 7.5+mpss3.2.3
    [...]
    (gdb) r
    Starting program: /home/janjust/src/gsl-1.16/vector/test

    Program received signal SIGSEGV, Segmentation fault.
    0x000000000040f026 in test_complex_func (stride=16, N=32) at test_complex_source.c:121
    121       if (v->data[2*i*stride] != (ATOMIC) (i) || v->data[2*i*stride + 1] != (ATOMIC) (i + 1234))
The weird thing is that this function is never actually called with stride=16, N=32, so it seems the optimizer altered something.
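To sanity-check whether stride=16 and N=32 are the real arguments or just garbage from optimized debug info, the next thing I intend to look at in the crashed frame is (standard gdb commands; under -O2 some of these may come back as <optimized out>):

    (gdb) bt                # which caller supposedly passed these arguments?
    (gdb) info args         # do stride/N match the values in the signal line?
    (gdb) print i
    (gdb) print v->size
    (gdb) print v->stride
    (gdb) x/4xg v->data     # is v->data even a readable address?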
If I remove the "-O2" then the code runs OK. The same code built with CC=icc CFLAGS="-O2" runs fine on the host CPU (a Xeon E5). Is this a compiler optimisation bug? How do I 'downgrade' the compiler optimisation for a particular piece of code? And how can I troubleshoot this further?
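One option I've found in the icc documentation, but haven't verified against this particular crash, is the per-function optimization_level pragma; with the "intel" prefix it should apply only to the function definition that immediately follows. A minimal sketch (test_func here is a stand-in, not the actual GSL routine):

    #include <stddef.h>

    /* Compile just this function at -O0 while the rest of the
       translation unit keeps whatever -O flag was on the command line. */
    #pragma intel optimization_level 0
    static void test_func (size_t stride, size_t N)
    {
      (void) stride;
      (void) N;
      /* ... body that misbehaves at -O2 ... */
    }

    int main (void)
    {
      test_func (16, 32);
      return 0;
    }

Alternatively, since GSL's build is automake-based, I believe I could delete just the affected object file under vector/ and rerun make there with CFLAGS="-mmic -O0" on the command line, so that only that translation unit is rebuilt without optimisation.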