Configurable High-Level Synthesis Approximate Arithmetic Units for Deep Learning Accelerators

David Cordero-Chavarria, Luis G. Leon-Vega, Jorge Castro-Godinez

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer review

Abstract

The application of Artificial Intelligence (AI) to multiple sectors has grown impressively in the last decade, posing concerns about energy consumption and environmental footprint in this field. The approximate computing paradigm offers promising techniques for the design of Deep Neural Network (DNN) accelerators to reduce resource consumption in both low-power devices and large-scale inference. This work addresses the resource and power consumption challenge by proposing the implementation of configurable approximate arithmetic operators described in untimed C++ for High-Level Synthesis (HLS), evaluating the impact of the approximations on the model accuracy of Neural Networks (NNs) used for classification with Zero-Shot Quantisation (ZSQ) and without fine-tuning. Our proposed operators are fully parametric in terms of the number of approximated bits and numerical precision by using C++ templates, and achieve up to 39.04% resource savings while preserving 79% accuracy on a LeNet-5 trained on MNIST.

Original language: English
Title of host publication: 2024 IEEE 42nd Central America and Panama Convention, CONCAPAN 2024
Publisher: Institute of Electrical and Electronics Engineers Inc.
Edition: 2024
ISBN (electronic): 9798350366723
DOI
Publication status: Published - 2024
Event: 42nd IEEE Central America and Panama Convention, CONCAPAN 2024 - San Jose, Costa Rica
Duration: 27 Nov 2024 – 29 Nov 2024

Conference

Conference: 42nd IEEE Central America and Panama Convention, CONCAPAN 2024
Country/Territory: Costa Rica
City: San Jose
Period: 27/11/24 – 29/11/24
