The Posit Number System was introduced in 2017 as a replacement for
floating-point numbers. Since then, the community has explored its application
in Neural Network-related tasks and produced some unit designs that are still
far from being competitive with their floating-point counterparts. This paper
proposes a Posit Logarithm-Approximate Multiplication (PLAM) scheme to
significantly reduce the complexity of posit multipliers, the most power-hungry
units within Deep Neural Network architectures. When compared with
state-of-the-art posit multipliers, experiments show that the proposed
technique reduces the area, power, and delay of hardware multipliers by up to
72.86%, 81.79%, and 17.01%, respectively, without accuracy degradation.
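The underlying idea of logarithm-approximate multiplication is Mitchell's approximation, log2(1 + f) ≈ f for a fraction f in [0, 1), which turns the costly significand multiplication into a simple addition of fraction fields. The Python sketch below illustrates this approximation on ordinary binary floating-point operands rather than on decoded posit fields; the function name and decomposition are illustrative assumptions, not the paper's hardware datapath.

```python
import math

def mitchell_multiply(a: float, b: float) -> float:
    """Approximate a*b with Mitchell's logarithm approximation:
    log2(1 + f) ~= f, so the product of significands (1+fa)*(1+fb)
    is replaced by the sum of the fraction fields fa + fb."""
    # Decompose each positive operand as (1 + f) * 2**e with f in [0, 1).
    fa, ea = math.frexp(a)        # frexp gives fa in [0.5, 1)
    fb, eb = math.frexp(b)
    fa, ea = fa * 2 - 1, ea - 1   # rescale significand to [1, 2)
    fb, eb = fb * 2 - 1, eb - 1
    # Mitchell: (1+fa)*(1+fb) ~= 1 + fa + fb; carry into the exponent if >= 1.
    s = fa + fb
    if s >= 1.0:
        return (1 + (s - 1)) * 2 ** (ea + eb + 1)
    return (1 + s) * 2 ** (ea + eb)

# Example: the approximation never overestimates and its worst-case
# relative error is about 11%.
print(mitchell_multiply(3.0, 5.0), 3.0 * 5.0)   # 14.0 vs 15.0
```

In a hardware multiplier this replacement removes the significand multiplier array in favor of an adder, which is the source of the area and power savings reported above.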