(Internal) Compiler error with IPO

My program compiles fine with -xHost, but compilation crashes when I add -ipo. I am using composer_xe_2013_sp1.0.080. The funny thing is that it does not crash when I compile on our cluster with the same compiler version; it crashes only on my local machine (which is more 'powerful' than the blade on the cluster).

The compile command is

ifort -I"/usr/local/intel/composer_xe_2013_sp1.0.080/mkl/include/intel64/lp64" -I"/usr/local/intel/composer_xe_2013_sp1.0.080/mkl/include" -openmp -fp-model fast=2 -check none -c -assume realloc_lhs -no-prec-div -xHost -ipo -o




The error output is:

ipo remark #11000: performing multi-file optimizations
ipo-1: remark #11006: generating object file /tmp/ipo_ifortMx0O8K1.o
ipo-2: remark #11006: generating object file /tmp/ipo_ifortMx0O8K2.o
ipo-3: remark #11006: generating object file /tmp/ipo_ifortMx0O8K3.o
ipo-4: remark #11006: generating object file /tmp/ipo_ifortMx0O8K4.o

fortcom: Severe: **Internal compiler error: internal abort** Please report this error along with the circumstances in which it occurred in a Software Problem Report.  Note: File and line given may not be explicit cause of this error.

ipo-4: error #11005: multi-object compilation 3 returned error status 3
ifort: error #10014: problem during multi-file optimization compilation (code 3)
make: *** [epss4] Error 3


Any ideas?





Are you using a more reliable version of the compiler on your cluster, such as the final xe2013 update, or xe2013 update 1?

There's probably little we can do here but advise you to present a full case which reproduces this, or to give up on -ipo for this version.

It's exactly the same version of the compiler on both the cluster and the local machine.

If I just run it on the cluster, do you think the results will be reliable in the sense that -ipo doesn't affect them? I know this is a vague question, but I could live with that solution.

At the moment, I can't provide a test case.

-ipo isn't intended to alter results, only to improve performance for certain cases of frequently called subroutines. It can be difficult to realize that potential performance improvement on large programs, which may require a lot of memory to handle the extra load of linking this way. Granted, an internal error can't really be excused by insufficient available memory.

You could experiment with values for the -ipo, -ipo-jobs and -ipo-separate options. It may be that you are running out of memory while doing the large optimization. As Tim says, -ipo improves performance; it does not change results.
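For illustration, the flags above could be combined along these lines (a sketch only; `epss4` and the source list are taken from this thread's makefile target, and the exact flag values are things to experiment with, not recommendations):

```shell
# Split the IPO result into 4 object files instead of one monolithic one,
# which can reduce peak memory use during multi-file optimization
ifort -ipo4 -xHost -o epss4 *.f90

# Generate one IPO object per source file (lowest memory pressure)
ifort -ipo -ipo-separate -xHost -o epss4 *.f90

# Limit the number of parallel IPO compilation jobs to 2
ifort -ipo -ipo-jobs2 -xHost -o epss4 *.f90
```

Note that -ipo-separate and -ipo-jobs modify how -ipo does its work; they are used together with -ipo, not instead of it.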

Retired 12/31/2016

The -ipo-separate did the trick! Thanks a lot! Two related questions:

i) Does -ipo-separate perform the same optimizations as -ipo?

ii) I had also thought that I might be running out of memory, but the local machine has very similar specs to the node on the cluster, in particular 16 GB of RAM and a 64-bit system. In addition, on the local machine, I set

ulimit -f unlimited              # filesize
ulimit -d unlimited              # datasize
ulimit -s unlimited              # stacksize

Is there anything else I could try to avoid running out of memory (just to see whether it was this causing the errors)?

Thanks again!


Well, "unlimited" here doesn't mean unlimited.  Really!  It means some predefined value specified when the kernel was built. In some cases you can actually set an explicit value higher. Amount of RAM is not usually the factor.
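To see what the kernel actually enforces, rather than what the ulimit builtin reports, you can inspect /proc on Linux (a minimal check, assuming a Linux system; the shell you run it from determines which process's limits you see):

```shell
# Print the kernel-enforced resource limits for the current process.
# "unlimited" entries here reflect the real ceiling the kernel applies.
cat /proc/self/limits

# Show just the data-segment and stack rows, the two relevant to
# the ulimit -d / ulimit -s settings above
grep -E 'Max (data|stack) size' /proc/self/limits
```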

Are you also specifying -ipo? If not, then you're not getting -ipo: -ipo-separate is not a replacement for -ipo, it's a modifier. I'm not very familiar with the Linux environment, so perhaps others have additional suggestions for you here.

Retired 12/31/2016
