bfloat16 - Hardware Numerics Definition

Submitted: November 14, 2018 | Last updated: November 14, 2018
File: bf16-hardware-numerics-definition-white-paper.pdf (0.25 MB)

Detailed Description

Intel® Deep Learning Boost (Intel® DL Boost) uses the bfloat16 format (BF16). This document describes the bfloat16 floating-point format.


BF16 has several advantages over FP16:

- It can be seen as a short version of FP32: it keeps the sign bit and the full 8-bit exponent, and simply skips the 16 least significant bits of the mantissa, leaving 7 mantissa bits (see the conversion sketch after this list).

- There is no need to support denormals: because BF16 retains the full FP32 exponent, FP32, and therefore also BF16, offers more than enough dynamic range for deep learning training tasks.

- FP32 accumulation after the multiply is essential to achieve sufficient numerical behavior at the application level (see the dot-product sketch after this list).

- Hardware exception handling is not needed, since BF16 is a performance optimization; the industry is designing algorithms around explicit checks for Inf/NaN values.
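The format description above maps directly to a bit-level conversion. The following is a minimal C sketch, not Intel's hardware implementation: the function names are illustrative, and it performs pure truncation, whereas actual hardware may apply round-to-nearest instead.

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Reinterpret a float's bit pattern as a 32-bit integer. */
    static uint32_t f32_bits(float f) {
        uint32_t u;
        memcpy(&u, &f, sizeof u);
        return u;
    }

    /* Truncating FP32 -> BF16: keep the upper 16 bits (sign, 8-bit
       exponent, top 7 mantissa bits); drop the 16 least significant
       mantissa bits, exactly as described in the list above. */
    static uint16_t f32_to_bf16_truncate(float f) {
        return (uint16_t)(f32_bits(f) >> 16);
    }

    /* Widening BF16 -> FP32 is exact: place the 16 bits in the upper
       half and zero-fill the discarded mantissa bits. */
    static float bf16_to_f32(uint16_t h) {
        uint32_t u = (uint32_t)h << 16;
        float f;
        memcpy(&f, &u, sizeof f);
        return f;
    }

    int main(void) {
        float x = 3.14159265f;
        uint16_t h = f32_to_bf16_truncate(x);
        /* Prints: 3.14159274 -> 0x4049 -> 3.14062500 */
        printf("%.8f -> 0x%04X -> %.8f\n", x, h, bf16_to_f32(h));
        return 0;
    }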
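Similarly, the FP32-accumulation point can be illustrated with a scalar dot product: the operands are BF16, but each product is formed and summed in an FP32 accumulator. This is a sketch of the arithmetic pattern only, not the DL Boost instruction itself; dot_bf16_fp32acc and the sample vectors are hypothetical.

    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Exact BF16 -> FP32 widening (see the previous sketch). */
    static float bf16_to_f32(uint16_t h) {
        uint32_t u = (uint32_t)h << 16;
        float f;
        memcpy(&f, &u, sizeof f);
        return f;
    }

    /* BF16 inputs, FP32 accumulation: the running sum never drops to
       16 bits, which is what preserves numerical behavior at the
       application level. */
    static float dot_bf16_fp32acc(const uint16_t *a, const uint16_t *b,
                                  size_t n) {
        float acc = 0.0f;   /* FP32 accumulator */
        for (size_t i = 0; i < n; i++)
            acc += bf16_to_f32(a[i]) * bf16_to_f32(b[i]);
        return acc;
    }

    int main(void) {
        /* BF16 encodings: 0x3F80 = 1.0, 0x4000 = 2.0, 0x4040 = 3.0 */
        uint16_t a[] = {0x3F80, 0x4000, 0x4040};
        uint16_t b[] = {0x4040, 0x4000, 0x3F80};
        /* 1*3 + 2*2 + 3*1 = 10 */
        printf("dot = %.1f\n", dot_bf16_fp32acc(a, b, 3));
        return 0;
    }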
