Friday, May 22, 2020

Half Precision Arithmetic - For 16 Bits

Introduction
The explanation is as follows:
Low precision arithmetic seems to have gained some traction in machine learning, but there are varying definitions for what people mean by low precision.
IEEE-754 half
The explanation is as follows:
Half-precision wasn't an IEEE-754 standard format before 2008.
Some implementations that predate IEEE 754 half are as follows:
Several earlier 16-bit floating point formats have existed, including that of Hitachi's HD61810 DSP of 1982, Scott's WIF, and the 3dfx Voodoo Graphics processor.
The explanation is as follows. The IEEE 754-2008 standard names this format binary16.
10 bit mantissa, 5 bit exponent, 1 bit sign
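For example, a raw bit pattern can be decoded by hand from those three fields. A minimal Python sketch, assuming the standard exponent bias of 15; decode_binary16 is a hypothetical helper written for this post, not a library call:

def decode_binary16(bits):
    # Illustrative decoder for IEEE 754 binary16: 1 sign, 5 exponent, 10 mantissa bits
    sign = (bits >> 15) & 0x1
    exponent = (bits >> 10) & 0x1F  # 5 bits, bias 15
    mantissa = bits & 0x3FF         # 10 bits
    if exponent == 0x1F:            # all-ones exponent encodes infinity or NaN
        return float('nan') if mantissa else (-1.0) ** sign * float('inf')
    if exponent == 0:               # subnormal: no implicit leading 1, fixed exponent -14
        return (-1.0) ** sign * (mantissa / 1024.0) * 2.0 ** -14
    return (-1.0) ** sign * (1 + mantissa / 1024.0) * 2.0 ** (exponent - 15)

print(decode_binary16(0x3C00))  # 1.0
print(decode_binary16(0x7BFF))  # 65504.0, the largest finite binary16 value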
bfloat16 - Brain Float 16
The explanation is as follows:
7 bit mantissa, 8 bit exponent, 1 bit sign
The explanation is as follows:
A new 16-bit FP format with the same exponent range as IEEE binary32 has been developed for neural network use-cases. Compared to IEEE binary16 like x86 F16C conversion instructions use, it has much less significand precision, but apparently neural network code cares more about dynamic range from a large exponent range. This allows bfloat hardware not to even bother supporting subnormals.
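For instance, narrowing binary32 to bfloat16 amounts to keeping the top 16 bits of the word, since the two formats share the same sign and 8-bit exponent layout. A minimal Python sketch that simply truncates; real hardware typically rounds to nearest even instead, and both helper names here are hypothetical:

import struct

def float32_to_bfloat16_bits(x):
    # Keep the top 16 bits of the binary32 pattern: the sign and the full
    # 8-bit exponent survive, but only 7 of the 23 mantissa bits do.
    (bits32,) = struct.unpack('<I', struct.pack('<f', x))
    return bits32 >> 16

def bfloat16_bits_to_float32(bits16):
    # Widening back is exact: pad the low 16 bits with zeros.
    (x,) = struct.unpack('<f', struct.pack('<I', bits16 << 16))
    return x

print(bfloat16_bits_to_float32(float32_to_bfloat16_bits(3.14159)))  # 3.140625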
NVIDIA's 19-bit TensorFloat-32 (TF32)
Its layout is as follows:
10 bit mantissa, 8 bit exponent, 1 bit sign
TF32 pairs binary16's 10-bit mantissa with binary32's 8-bit exponent, which is why it totals 19 bits; NVIDIA's Ampere tensor cores use it internally as a faster substitute for binary32 matrix math.

AMD's fp24
Its layout is as follows:
16 bit mantissa, 7 bit exponent, 1 bit sign
fp24 was the pixel shader precision of ATI/AMD's R300-era GPUs, matching the minimum "full precision" that DirectX 9 required.
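To see the trade-offs side by side, the largest finite value of each format can be computed from its exponent and mantissa widths alone. A sketch that assumes every format uses the IEEE conventions of a bias of 2^(e-1)-1 and an all-ones exponent reserved for infinity and NaN, which fp24 and TF32 may not follow exactly; max_finite is a hypothetical helper:

def max_finite(exp_bits, man_bits):
    # Largest finite value: maximum normal exponent with an all-ones mantissa,
    # i.e. (2 - 2^-m) * 2^bias, where bias = 2^(e-1) - 1.
    bias = 2 ** (exp_bits - 1) - 1
    return (2 - 2.0 ** -man_bits) * 2.0 ** bias

# Widths from the sections above: (name, exponent bits, mantissa bits)
for name, e, m in [('binary16', 5, 10), ('bfloat16', 8, 7),
                   ('TensorFloat-32', 8, 10), ('fp24', 7, 16)]:
    print(name, max_finite(e, m))
# binary16 tops out at 65504, while bfloat16 and TF32 reach ~3.4e38,
# the same dynamic range as binary32 - exactly the trade-off quoted above.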

