TY - GEN
T1 - Power-efficient Approximate Multipliers for Classification Tasks in Neural Networks
AU - Zanandrea, Vinicius
AU - Castro-Godinez, Jorge
AU - Meinhardt, Cristina
N1 - Publisher Copyright:
© 2025 IEEE.
PY - 2025
Y1 - 2025
N2 - Multiplication is a key operation in neural networks. To overcome the power-efficiency challenges of designing dedicated hardware for neural networks, designers can explore approximate multipliers to reduce area and power while maintaining tolerable accuracy. In this work, we evaluate the power and accuracy trade-offs of adopting two approximate multiplier structures, AxMultV1 and AxMultV2, for image classification in neural networks. In these multipliers, we explore seven approximate 4:2 compressors from the literature and compare them with our proposed MAX4:2CV1 compressor. The adoption of our proposed compressor in multipliers provides power savings of up to 56%, a delay reduction of 45.5%, and a reduction in transistor count of up to 48% compared to an exact multiplier. The multipliers based on the MAX4:2CV1 compressor can be considered suitable for classification tasks in neural networks, achieving 95.54% accuracy on the MNIST dataset using a Multilayer Perceptron and up to 81.27% accuracy on the SVHN dataset with the LeNet-5 architecture, comparable to the accuracy of an exact multiplier.
AB - Multiplication is a key operation in neural networks. To overcome the power-efficiency challenges of designing dedicated hardware for neural networks, designers can explore approximate multipliers to reduce area and power while maintaining tolerable accuracy. In this work, we evaluate the power and accuracy trade-offs of adopting two approximate multiplier structures, AxMultV1 and AxMultV2, for image classification in neural networks. In these multipliers, we explore seven approximate 4:2 compressors from the literature and compare them with our proposed MAX4:2CV1 compressor. The adoption of our proposed compressor in multipliers provides power savings of up to 56%, a delay reduction of 45.5%, and a reduction in transistor count of up to 48% compared to an exact multiplier. The multipliers based on the MAX4:2CV1 compressor can be considered suitable for classification tasks in neural networks, achieving 95.54% accuracy on the MNIST dataset using a Multilayer Perceptron and up to 81.27% accuracy on the SVHN dataset with the LeNet-5 architecture, comparable to the accuracy of an exact multiplier.
KW - approximate computing
KW - energy efficiency
KW - multipliers
KW - neural networks
UR - http://www.scopus.com/inward/record.url?scp=105004552786&partnerID=8YFLogxK
U2 - 10.1109/LASCAS64004.2025.10966304
DO - 10.1109/LASCAS64004.2025.10966304
M3 - Conference contribution
AN - SCOPUS:105004552786
T3 - 2025 IEEE 16th Latin American Symposium on Circuits and Systems, LASCAS 2025 - Proceedings
BT - 2025 IEEE 16th Latin American Symposium on Circuits and Systems, LASCAS 2025 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 16th IEEE Latin American Symposium on Circuits and Systems, LASCAS 2025
Y2 - 25 February 2025 through 28 February 2025
ER -