Reducing the Impact of Denormal Exceptions

Denormalized floating-point values are those that are too small to be represented in the normal manner; that is, the mantissa cannot be left-justified. Denormal values require hardware or operating system intervention to handle the computation, so floating-point computations that result in denormal values may have an adverse impact on performance.
There are several ways to handle denormals to increase the performance of your application:
  • Scale the values into the normalized range
  • Use a higher precision data type with a larger range
  • Flush denormals to zero
For example, you can translate denormals to normalized numbers by multiplying them by a large scalar, performing the remaining computations in the normal range, and then scaling the result back down to the denormal range, as in the sketch below. Consider using this method when the small denormal values benefit the program design.
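A minimal sketch of the scaling approach follows; the scale factor, the input values, and the mean() kernel are illustrative assumptions rather than part of the guide.

#include <stdio.h>

/* Illustrative scale factor: large enough to lift inputs near 1e-40f
   well into the normalized float range (above about 1.2e-38). */
#define SCALE 1.0e30f

/* Hypothetical kernel: arithmetic mean of the inputs. */
static float mean(const float *x, int n)
{
    float sum = 0.0f;
    for (int i = 0; i < n; i++)
        sum += x[i];
    return sum / (float)n;
}

int main(void)
{
    float x[4] = {1.0e-40f, 2.0e-40f, 3.0e-40f, 4.0e-40f};  /* denormal inputs */
    float scaled[4];

    /* Scale up into the normalized range before computing. */
    for (int i = 0; i < 4; i++)
        scaled[i] = x[i] * SCALE;

    /* All intermediate arithmetic now happens on normalized values;
       only the final result is scaled back down to the denormal range. */
    float result = mean(scaled, 4) / SCALE;

    printf("%g\n", result);   /* 2.5e-40 */
    return 0;
}

Because this kernel is linear, dividing the result by the same scale factor recovers the original magnitude exactly; for nonlinear kernels the rescaling must account for how the scale factor propagates through the computation.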
Consider using a higher precision data type with a larger range; for example, convert variables declared as float to be declared as double. Be aware that this change can slow down your program: storage requirements increase, which increases the time needed to load and store data from memory, and higher precision data types can also decrease the potential throughput of Intel® Streaming SIMD Extensions (Intel® SSE) and Intel® Advanced Vector Extensions (Intel® AVX) operations.
If you change the type declaration of a variable, you might also need to change associated library calls, unless these are generic; for example, cos() instead of cosf().
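The sketch below (not from the guide; the variable names, values, and loop are illustrative assumptions) shows a conversion from float to double together with the matching generic library call:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Previously: float signal = 1.0e-20f;
       Repeated halving would push a float through the denormal range
       (below about 1.2e-38) and eventually underflow to zero. */
    double signal = 1.0e-20;

    for (int i = 0; i < 100; i++)
        signal *= 0.5;              /* stays normalized as a double */

    /* The library call changes with the type: cos() instead of cosf(). */
    double phase = cos(signal);

    printf("%g %g\n", signal, phase);
    return 0;
}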
Another strategy that might result in increased performance is to increase the precision of intermediate values using the -fp-model [double|extended] option. However, this strategy might not eliminate all denormal exceptions, so you must experiment with the performance of your application.
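As an illustration of why higher-precision intermediates help, consider the following sketch. The values are assumptions, and the explicit casts only mimic the effect of the option, which itself applies at compile time:

#include <stdio.h>

int main(void)
{
    float a = 1.0e-30f;
    float b = 1.0e-20f;
    float c = 1.0e-25f;

    /* Rounded to single precision, the intermediate product a * b
       (1e-50) underflows below the float denormal range, so the
       final result is 0. */
    float single_prec = (float)(a * b) / c;

    /* Kept in double precision (the effect of -fp-model double,
       mimicked here with explicit casts), the intermediate stays
       normalized and the final quotient is an ordinary float. */
    float double_prec = (float)((double)a * (double)b / (double)c);

    printf("%g %g\n", single_prec, double_prec);   /* 0 and 1e-25 */
    return 0;
}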
You should verify that the gain in performance from eliminating denormals is greater than the overhead of using a data type with higher precision and greater dynamic range.
In many cases, denormal numbers can be treated safely as zero without adverse effects on program results. Depending on the target architecture, use flush-to-zero (FTZ) options.
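The compiler options themselves are architecture dependent and are described below. As a complementary illustration, on SSE-capable targets the same modes can also be enabled at run time through the MXCSR control macros; the following is a sketch under that assumption, not a recommendation from the guide:

#include <stdio.h>
#include <xmmintrin.h>   /* _MM_SET_FLUSH_ZERO_MODE */
#include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE */

int main(void)
{
    /* Flush denormal results to zero (FTZ) and treat denormal
       inputs as zero (DAZ) for the calling thread. */
    _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);
    _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);

    volatile float tiny = 1.0e-37f;    /* small but still normalized */

    /* Without FTZ this division produces a denormal (about 1e-38);
       with FTZ enabled the result is flushed to zero. */
    float result = tiny / 10.0f;

    printf("%g\n", result);
    return 0;
}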

IA-32 and Intel® 64 Architectures

These architectures take advantage of the FTZ (flush-to-zero) and DAZ