As you're probably aware, Compaq Fortran implicitly promoted expressions to double precision for evaluation. This behavior is present in the 32-bit (IA-32) version of ifort when the /arch:IA32 option is in use. Among other effects, it prevents vectorization.

As Paul pointed out below, the implicit SAVE behavior of Compaq Fortran should also be considered; the /Qsave option emulates it.

Your software isn't portable, and perhaps not reliable, unless all the places where you need double precision have it specified explicitly.

Expressions such as A**2 - B**2 are more reliably written as (A+B)*(A-B). If such a change makes a significant difference in your results, the A**2-B**2 version can't be trusted. /arch:IA32 should implicitly promote to double precision, just as if you wrote

dble(A)**2 - dble(B)**2
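
A minimal sketch of why the factored form holds up better in single precision (the values here are illustrative, not from the thread):

```fortran
program cancellation
  implicit none
  real :: a, b, naive, factored
  double precision :: exact

  ! Large, nearly equal values: the squares agree in their leading digits,
  ! so subtracting them cancels most of the single-precision significand.
  a = 10000.5
  b = 10000.4

  exact    = dble(a)**2 - dble(b)**2
  naive    = a**2 - b**2        ! catastrophic cancellation
  factored = (a + b)*(a - b)    ! the subtraction happens before magnification

  print *, 'naive:    ', naive,    '  error:', dble(naive) - exact
  print *, 'factored: ', factored, '  error:', dble(factored) - exact
end program cancellation
```

In the factored form the only cancellation is in (a - b) itself, which is computed exactly or nearly so, instead of being buried in the low-order bits of two large squares.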

# Compaq Fortran to Modern Compilers

I'd recommend getting up to speed on some of the articles related to floating-point precision, such as

http://software.intel.com/en-us/articles/consistency-of-floating-point-r...

I also use

/Qimf-arch-consistency:true

to ensure consistent results across chipsets.

You should also be aware that both Compaq and DVF implemented SAVE as a default, and this may need to be added to your code for equivalent performance when compiled by IVF.
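
For example (an illustrative command line, assuming the Windows ifort driver; the file name is a placeholder and option spellings differ on Linux):

```
ifort /Qimf-arch-consistency:true /Qsave mycode.f90
```

Here /Qimf-arch-consistency:true asks the math library for results that are consistent across processor types, and /Qsave restores the CVF/DVF-style SAVE-by-default behavior.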

Hi all,

What will happen if I use /fpconstant with TimP's example A**2 - B**2?

real(kind(1.d0)) :: A, B, C
C = A**2 - B**2 ! will the compiler produce A**2.d0 - B**2.d0 ?

@TimP Should this:

" /arch:IA32 should implicitly promote to double precision, just as if you wrote dble(A)**2 - dble(B)**2 "

not mean A**dble(2) - B**dble(2)? If A and B are double precision but the constants "2" are single?!
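
For what it's worth, a sketch of the distinction being asked about (my reading, not compiler-verified output): the literal 2 in A**2 is an integer, so there is no single-precision real constant there to promote; /fpconstant instead affects default-real constants appearing in double-precision context:

```fortran
real(kind(1.d0)) :: a, c, x

a = 3.d0
c = a**2   ! integer exponent: evaluated as a*a; the "2" is never a real constant
x = 0.1    ! default-real constant; /fpconstant treats it as if written 0.1d0,
           ! keeping the extra digits instead of rounding to single first
```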

Kind regards,

Johannes

Expressions that involve the squaring of real variables are more efficiently handled by keeping the exponent (=2) as an integer and evaluating the power by multiplication. Promoting integer exponents to real would require the use of the mathematical equivalences

x**y = exp(y*log(x)) = 2**(y*log2(x))

Evaluation of the transcendental functions may be much slower, and problems could arise if the variable x were not positive.
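
The two evaluation strategies can be sketched like this (illustrative; not necessarily what any particular compiler emits):

```fortran
program power_demo
  implicit none
  real :: x, p, y1, y2
  x = 3.0
  p = 2.0
  y1 = x*x             ! integer exponent x**2: a single multiply
  y2 = exp(p*log(x))   ! real exponent x**p: transcendental route; requires x > 0
  print *, y1, y2
end program power_demo
```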

Your problem may be that the old version was using an 80-bit x87 register to accumulate the result, while the new version could be rounding intermediate values to 32-bit storage. There are now compiler options that enforce that rounding.

Over the years, the change from 80-bit 8087 calculations to 64-bit has shown a slight increase in round-off for some other old codes, but your estimate of 10% is far worse than typical.

Replacing 32-bit real calculations with 64-bit might help, if you can isolate the occurrence.

Given the increased memory availability, would a more general change from 32-bit to 64-bit (real(8)) be a possible approach ?


John

You don't get full use of the 80-bit x87 registers, even under /arch:IA32, unless you also set /Qpc80; by default, 53-bit precision mode is set. In that case you still get the extended exponent range, but only the precision protection you would get by explicit promotion to double. Those who argue that implicit promotion should not depend on the architecture, as it does with x87 and did with CVF, have pretty well won the day.

As John said, an x64 OS may remove the data-set size constraints that prevent full use of double precision, but it will not alter the performance consideration.

Thanks to everyone for your input. After reviewing the results very carefully I realized that making the offending subroutines double precision fixed the problem. TimP, I did not know that CVF had the promotion to double, and that piece of information definitely helped me tremendously. I always try to write my own code so that I don't need to go to double precision, but this problem has provided me with some keen insight into how things can go wrong in the subtlest of ways. Thanks again for all of the help!

## Compaq Fortran to Modern Compilers

So here is the problem. I've got some older, but well-written, legacy code that is used in some important software. In the process of making some upgrades we realized that, simply by compiling the software with a new compiler (Composer XE), the results of the calculations are different...by a bunch (>10%). Digging through, I see that the problem has to do with rounding errors. The crucial part of the code takes the difference of the squares of large numbers. The delta between these numbers is small, so it is near the noise floor for single-precision math.

The problem is that this code is currently being used, and has been used in the past, for high-profile decision making based on an executable compiled back in the Compaq Fortran days. There is no way I can 'upgrade' the software and have it produce different results. My question is: are there any compiler switches or changes I can make to force the compiler to use the older single-precision math? I could go back to the old compiler, but that is becoming more and more difficult, and is hardly forward looking. I am looking into changing things to double precision, but again, that will likely change the answers, which is not acceptable at this point.

Thanks!