Design of Power-Efficient Posit Multiplier
Abstract
The posit number system has been used as an
alternative to the IEEE floating-point number
system in many applications, especially in the recently popular field of deep learning. Its non-uniform number distribution fits well with the data distribution of deep learning and thus can speed up the training process. Among the related arithmetic operations, multiplication is one of the most frequently used.
However, due to the flexible bit-width nature of posit numbers, the hardware multiplier is usually designed for the maximum possible mantissa bit-width. Since the actual mantissa bit-width is not always the maximum, such a design leads to high power consumption, especially when the mantissa bit-width is small. In this brief, a power-efficient posit multiplier architecture is proposed.
The mantissa multiplier is still designed for the maximum possible bit-width; however, it is divided into multiple smaller multipliers, and only the required smaller multipliers are enabled at run-time. These smaller multipliers are gated by the regime bit-width, which determines the mantissa bit-width. This
design technique is applied to 8-bit, 16-bit, and 32-
bit posit formats, and an average power reduction of 16% can be achieved with negligible area and timing overhead.

[Figure: Power consumption distribution of a posit multiplier]
[Figure: Posit component extraction in the hardware arithmetic unit]
[Figure: Datapath of the proposed posit multiplier]
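To make the regime-to-mantissa relationship concrete, the sketch below decodes the regime bit-width of a posit, derives the remaining fraction (mantissa) bit-width as n - 1 - regime bits - es, and counts how many small sub-multipliers of a partitioned mantissa multiplier would need to be enabled. This is a minimal illustration only: the 4x4-bit sub-multiplier tiling, the posit(16,1) example value, and all function names are assumptions for illustration, not the architecture proposed in this brief; the field relationship itself follows standard posit decoding.

```python
def regime_bitwidth(bits: int, n: int) -> int:
    """Bits occupied by the regime field: the identical-bit run plus its
    terminating bit. Assumes a non-negative posit (a negative posit would
    be two's-complemented before decoding)."""
    body = [(bits >> (n - 2 - i)) & 1 for i in range(n - 1)]  # bits after the sign bit
    run_bit = body[0]
    run = 1
    while run < len(body) and body[run] == run_bit:
        run += 1
    # The terminating (opposite) bit also belongs to the regime,
    # unless the run fills the rest of the word.
    return min(run + 1, n - 1)


def fraction_bitwidth(n: int, es: int, regime_bw: int) -> int:
    """Fraction (mantissa) bits left after the sign, regime, and exponent fields."""
    return max(0, n - 1 - regime_bw - es)


def enabled_submultipliers(frac_bw: int, sub_width: int = 4) -> int:
    """Number of sub_width x sub_width sub-multipliers needed to multiply
    two (1 + frac_bw)-bit significands (hidden bit included)."""
    groups = -(-(1 + frac_bw) // sub_width)  # ceiling division
    return groups * groups


# Example: a posit(16,1) word with fields sign | regime | exponent | fraction
n, es = 16, 1
value = 0b0_001_1_01101010110          # regime "001" occupies 3 bits
r = regime_bitwidth(value, n)          # -> 3
f = fraction_bitwidth(n, es, r)        # 16 - 1 - 3 - 1 = 11 fraction bits
print(r, f, enabled_submultipliers(f)) # 3 11 9
```

Under this assumed tiling, the 12-bit significand needs only 9 of the 16 4x4 tiles that the maximum 13-bit significand of posit(16,1) would require, which mirrors the abstract's point that a longer regime leaves a shorter mantissa and lets the unused sub-multipliers be disabled to save power.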