Optimising Convolutions for Deep Learning Inference on ARM Cortex-M Processors
Other documents by the authors: Maciá-Lillo, Antonio; Barrachina Mir, Sergio; Fabregat Llueca, German; Dolz, Manuel F.
Metadata
Title
Optimising Convolutions for Deep Learning Inference on ARM Cortex-M Processors
Publication date
2024-04-30
Publisher
Institute of Electrical and Electronics Engineers Inc.
ISSN
2327-4662
Bibliographic citation
Maciá, A., Barrachina Mir, S., Fabregat Llueca, G., & Dolz, M. F. (2024). “Optimising Convolutions for Deep Learning Inference on ARM Cortex-M Processors”. In IEEE Internet of Things Journal, vol. 11, no. 15, pp. 26203-26219. https://doi.org/10.1109/JIOT.2024.3395335
Document type
info:eu-repo/semantics/article
Publisher's version
https://ieeexplore.ieee.org/document/10513367
Version
info:eu-repo/semantics/publishedVersion
Keywords / Subjects
Abstract
We perform a series of optimisations on the convolution operator within the ARM CMSIS-NN library to improve the performance of deep learning tasks on Arduino development boards equipped with ARM Cortex-M4 and M7 microcontrollers. To this end, we develop custom microkernels that efficiently handle the internal computations required by the convolution operator via the lowering approach and the direct method, and we design two techniques to avoid register spilling. We also take advantage of all the RAM on the Arduino boards by reusing it as a scratchpad for the convolution filters. The integration of these techniques into CMSIS-NN, when invoked by TensorFlow Lite for Microcontrollers on quantised versions of VGG, SqueezeNet, ResNet, and MobileNet-like convolutional neural networks, enhances the overall inference speed by a factor ranging from 1.13× to 1.50×.
Published in
IEEE Internet of Things Journal, 2024, 11, 15
Funding entity
European Union NextGenerationEU
Project or grant code
TED2021-129334B
Access rights
info:eu-repo/semantics/openAccess
Appears in collections
- ICC_Articles [430]