segfault when using 'omp parallel for simd' with no optimization

#include <vector>
#include "omp.h"

int main() {

  int const N = 1000;
  std::vector<double> x(N);
  x.assign(N, 2.0);

  #pragma omp parallel for simd
  for(int i=0; i<x.size(); ++i)  x[i] *= 2.0;

  return 0;
}

Compiling the above code with icpc -g -O0 -qopenmp (or icpc -debug -qopenmp) produces a segmentation fault: the value of i in the for loop becomes a very large size_t for at least one of the threads. However, compiling with -O2 or -O3 (even with debug symbols) runs just fine.

Is this expected behavior for "parallel for simd"? If I remove the simd clause and use just "#pragma omp parallel for", all optimization levels work fine. Likewise, if I use the integer N as the loop bound instead of x.size(), all optimization levels work fine. The failure occurs only with -O0 combined with the size() method in the loop condition. GCC 5.3 produces correct results at all optimization levels.

I also tested setting the size at runtime (i.e. passing it into the program) while still using the integer N as the loop bound. In that case all optimization levels work fine. The error only seems to occur when I use the .size() method with no optimization. I am using Intel 18.0.1.


Apparently you have a compile-time failure, which almost by definition is a bug. However, I would prefer to use a local variable set to size() as the for limit, to avoid requiring compiler analysis to determine whether it is invariant. As there is little point in omp simd at -O0, that case may not be well tested. I suspect g++ avoids the problem because it ignores simd when combined with parallel.

It looks like a bug to me. I've reported this problem to our OpenMP team for further investigation. The ticket number is CMPLRS-50474.

Thanks,

Viet

Thank you both. In the meantime I have copied the size into a local variable to avoid the need for compiler analysis.

Quote:

Tim P. wrote:

Apparently you have a compile-time failure, which almost by definition is a bug. However, I would prefer to use a local variable set to size() as the for limit, to avoid requiring compiler analysis to determine whether it is invariant. As there is little point in omp simd at -O0, that case may not be well tested. I suspect g++ avoids the problem because it ignores simd when combined with parallel.


Could you tell me a bit more about gcc ignoring simd when combined with parallel? Do you have a document describing this behavior? Thanks.
