Compiling option /Od introduces a results difference between release mode and debug mode

I am maintaining a large code base and cannot upload a test case. I found that /Od is the reason for the results difference between release mode and debug mode.

In release mode, the compiling option is

 /nologo /real_size:64 /module:"x64\Release\\" /object:"x64\Release\\" /libs:static /threads /c

In debug mode, the compiling option is

 /nologo /debug:full /Od /gen-interfaces /warn:interfaces /real_size:64 /Qsave /module:"x64\Debug\\" /object:"x64\Debug\\" /traceback /check:bounds /check:uninit /libs:static /threads /dbglibs /c

Our code compiles and runs successfully with both sets of options but gives different results after many iterations.

When I add /Od in release mode, the two results become the same.

Can anyone explain why the optimization option /Od removes the difference?

With /Od added, however, the running speed becomes slower. Which parts of the code should I examine or update to remove this difference while keeping the code fast?

Thanks!

A glaring difference is the /Qsave in debug, which you don't use in release. You might try /Qsave in release and see if the results change. Then find out which items /Qsave is saving that change the results.

I tried /Qsave already. Unfortunately, this option does not affect the result in release mode.

Disabling optimizations makes many changes in both the code and the layout of data. Results can change due to a different order of operations, references to uninitialized values, and more.
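As an illustration (Python here rather than Fortran, but the effect is language-independent): floating-point addition is not associative, so a compiler that regroups a sum under a relaxed floating-point model can change the rounded result.

```python
a, b, c = 0.1, 0.2, 0.3

left_to_right = (a + b) + c   # one legal evaluation order
regrouped     = a + (b + c)   # another legal order after reassociation

print(left_to_right)               # 0.6000000000000001
print(regrouped)                   # 0.6
print(left_to_right == regrouped)  # False
```

Both orders are mathematically identical; only the intermediate rounding differs, which is exactly the kind of change an optimizer is free to make unless you pin down the floating-point model.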

What I recommend is that you "instrument" your program to display intermediate results and determine where in the flow the results start to diverge. This will help you identify what is triggering the difference. If you are talking about floating point differences, you could experiment with /fp:precise and see if that makes a difference.
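To make the instrumentation idea concrete, here is a small sketch in Python (not Fortran; the helper name and sample data are made up for illustration) of comparing two instrumented runs to locate the first point of divergence:

```python
def first_divergence(run_a, run_b, tol=0.0):
    """Return the index of the first pair of values that differ by
    more than tol, or None if the runs agree everywhere."""
    for i, (a, b) in enumerate(zip(run_a, run_b)):
        if abs(a - b) > tol:
            return i
    return None

# Hypothetical intermediate results printed by the two builds:
release_run = [1.00, 2.50, 3.75, 4.8125]
debug_run   = [1.00, 2.50, 3.76, 4.8300]

print(first_divergence(release_run, debug_run))  # -> 2
```

Dumping the same intermediate quantities from both builds and diffing them this way narrows the search to the first expression whose evaluation order actually matters.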

Steve

I tracked the code and found that the results begin to differ in the following subroutine:

SUBROUTINE CalNtNxdA(ie, fac, x, area, res)
  USE Shape_Function   ! NtNxCoef
  IMPLICIT NONE
  REAL fac, x(3), area, res(3)
  INTEGER i, j, ie

  if (ie == 57) then
    print *, "--------------"
    print *, "fac", fac
    print *, "NtNxCoef=", NtNxCoef
    print *, "x", x
    print *, "area", area
  endif

  DO i = 1, 3
    res(i) = 0.0D0
    DO j = 1, 3
      res(i) = res(i) + fac*NtNxCoef(i,j)*x(j)*area/12.0D0
      if (ie == 57) then
        print *, "res", res
      endif
    END DO
  END DO

  if (ie == 57) then
    print *, "--------------"
  endif

  RETURN
END

I am attaching the two results as *.gif files for your reference. The results start to diverge at the calculation of res(i).

 

See attached results

Attachments:

Good news! After I added /fp:precise, I got the same results. But I get lots of warnings when compiling, like below:

1>------ Build started: Project: flare_anal, Configuration: Release x64 ------

1>Compiling with Intel(R) Visual Fortran 11.1.067 [Intel(R) 64]...

1>Data_Output.F90

1>ifort: command line warning #10212: /fp:precise evaluates in source precision with Fortran.

1>INTEGR.F90

1>ifort: command line warning #10212: /fp:precise evaluates in source precision with Fortran.

1>MMC-OutM.F90

1>ifort: command line warning #10212: /fp:precise evaluates in source precision with Fortran.

1>MMC-Output.F90

1>ifort: command line warning #10212: /fp:precise evaluates in source precision with Fortran.

1>Linking...

1>Embedding manifest...

1>

1>Build log written to "file://V:\Windows\FLARE10\NewSRCAddRingX64\flare_anal\x64\Release\BuildLog.htm"

1>flare_anal - 0 error(s), 4 warning(s)

========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

Does the warning matter?

What did /Qopt-report say about optimization of this subroutine?  Compiler option /fp:source disables optimizations which are likely to cause numerical differences in sum reduction, as well as setting gradual underflow for protection against underflow.  It may also perform the /12.0 more accurately; you have permitted the compiler in effect to substitute *(1/12d0). 

You are leaving it up to chance what order your operations occur, and probably the optimizing compiler is trying to avoid the slowest interpretations of your source code.
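The reciprocal substitution mentioned above can be checked numerically. A quick Python sketch (the principle carries over to the Fortran expression): since 1/12 has no exact binary representation, x*(1.0/12.0) and x/12.0 can round differently.

```python
recip = 1.0 / 12.0   # already rounded: 1/12 is not exactly representable

# Count small integers where the reciprocal-multiply form disagrees
# with true division in the last bit.
diffs = [x for x in range(1, 1000) if x / 12.0 != x * recip]
print(len(diffs) > 0)   # True: some inputs disagree

# A power-of-two divisor is exactly representable, so the same
# substitution is harmless there:
print(all(x / 8.0 == x * (1.0 / 8.0) for x in range(1, 1000)))  # True
```

This is why /fp:source matters here: it forbids the compiler from trading the correctly rounded division for the faster but slightly different multiply.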

If you don't get full accuracy with your options for promotion to double precision by

res = fac*area/12*matmul(NtNxCoef(1:3,1:3),x)

(I try to cut down on the number of permutations available) then I suspect you have numerical problems and can't trust either version you have seen.

With so many print statements, I doubt you'll notice the performance difference with optimization, so you could set

!dir$ optimize:0

if you think it is giving you better results.  I don't know whether that needs to come before or after the USE.

Those warnings you got with /fp:precise (same as /fp:source) might make me worry that real_size isn't taking consistent effect.  Maybe it's provoked by mixing the double precision constant with promoted reals, in which case it's OK.

Just use /fp:source instead.  In Fortran these mean the same thing, but they don't in C/C++. I always forget which one gets you the warning.

Steve

Thanks a lot!

After using /fp:source instead of /fp:precise, the warning messages are gone and I get the same results with either option.

I tried the matrix multiplication form res = fac*area/12*matmul(NtNxCoef(1:3,1:3),x) without /fp:source. It also gives the same results as the debug build. Good stuff to use.

I don't know how to set /Qopt-report in Visual Studio 2008.

I suggest adding /fp:source to the default release compiling options. It seems that the default release optimization for maximum speed was not tested enough before being made the default. In fact, our code has no problem! From a user's perspective, we don't like to see release and debug results differ.

Thank you so much! 

/Qopt-report is Fortran > Diagnostics > Optimization Diagnostics (scroll down) > Optimization Diagnostic Level

Steve

See attached optimization output for CalNtNxdA.

Attachments:

It's true that /fp:source as a default would be more in line with what most other popular compilers do; however, it eliminates important optimizations which normally are safe. It is a good step for resolving problems such as the one you raised. I always put some /assume: options in my ifort.cfg so as to remove a few compatibility problems.

It seems that in debug mode, although /fp:source is not in the default compiling options, the compiler actually treats it as /fp:source.

Am I correct?

No. In a Debug configuration, optimization is disabled, so the optimizations that /fp:source turns off are typically not done anyway.

Steve
