optimization level and floating point to integer type conversion

Why does enabling compiler optimization affect the way type conversion from floating point to integer occurs?

If 3.4 is entered into the program below, it responds with 33 as output if the optimization level is non-zero, and 34 if no optimization is used. I can understand either result (depending on how 3.4 gets stored), but I can't understand how compiler optimization would have any impact on that.

program numtest
double precision :: x
read *, x            ! e.g. 3.4 is entered here
print *, int(10*x)   ! truncating conversion; the read/print lines are reconstructed from the discussion below
end program numtest

3.4 is not exactly representable - the double-precision approximation is 3.3999999999... When that value is multiplied by 10 and rounded to 64-bit double precision, the result is exactly 34.0 (which is representable), and it truncates to 34 as an integer.

With optimization, intermediate values are kept in 80-bit floating-point registers, so the multiplication yields 33.99999..., which truncates to 33.


Steve - Intel Developer Support
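
A minimal sketch of the rounding behavior described above (program name and output formats are illustrative; it assumes a target that rounds the product back to 64-bit double precision, e.g. SSE2 code generation, rather than keeping it in an 80-bit register):

program show_double
implicit none
double precision :: x
x = 3.4d0
! the nearest double to 3.4 is slightly below 3.4
print '(a,f22.18)', 'stored x       = ', x
! rounded back to 64-bit double precision, the product is exactly 34.0
print '(a,f22.18)', '10*x in double = ', 10d0*x
print *, '10*x == 34.0d0 ?', 10d0*x == 34.0d0
end program show_double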

The architecture you choose can also have an impact. Suppose that the double-precision value nearest to 3.4 is slightly less than 3.4, and you store that value in x.
Then 10*x, evaluated in extended precision, if you choose that mode of operation (the default for the Intel IA-32 compilers), would be less than 34. Rounded off to double precision, it might be exactly 34. Optimization may remove that rounding step. That is why nint() should be used instead of int(), to get consistent results in a case like this.
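
A minimal sketch of the nint() versus int() comparison suggested above (program and variable names are illustrative):

program convert_consistent
implicit none
double precision :: x
integer :: i_trunc, i_round
read *, x                ! e.g. enter 3.4
i_trunc = int(10*x)      ! truncation: 33 or 34, depending on intermediate precision
i_round = nint(10*x)     ! round to nearest: 34 either way
print *, i_trunc, i_round
end program convert_consistent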
