High performance and energy efficient inference for deep learning on multicore ARM processors using general optimization techniques and BLIS
Title
High performance and energy efficient inference for deep learning on multicore ARM processors using general optimization techniques and BLIS
Authors
Castelló, Adrián; Barrachina Mir, Sergio; Dolz, Manuel F.; Quintana-Ortí, Enrique S.; San Juan, Pau; Tomás Domínguez, Andrés Enrique
Publication date
2022-03-22
Publisher
Elsevier; North-Holland
ISSN
1383-7621
Bibliographic citation
Castelló, A., Barrachina, S., Dolz, M. F., Quintana-Ortí, E. S., San Juan, P., & Tomás, A. E. (2022). High performance and energy efficient inference for deep learning on multicore ARM processors using general optimization techniques and BLIS. Journal of Systems Architecture, 125, 102459.
Document type
info:eu-repo/semantics/article
Version
info:eu-repo/semantics/publishedVersion
Keywords / Subjects
Abstract
We evolve PyDTNN, a framework for distributed parallel training of Deep Neural Networks (DNNs), into an efficient inference tool for convolutional neural networks. Our optimization process on multicore ARM processors involves several high-level transformations of the original framework, such as the development and integration of Cython routines to exploit thread-level parallelism; the design and development of micro-kernels for the matrix multiplication, vectorized with ARM’s NEON intrinsics, that can accommodate layer fusion; and the appropriate selection of several cache configuration parameters tailored to the memory hierarchy of the target ARM processors.
Our experiments evaluate inference throughput (measured in processed images/s), inference latency (i.e., time-to-response), and energy consumption per image when varying the level of thread parallelism and the processor power modes. The experiments with the new inference engine are reported for the ResNet50 v1.5 model on the ImageNet dataset from the MLPerf suite, using the ARM v8.2 cores in the NVIDIA Jetson AGX Xavier board. The results show superior performance compared with Google's widely used TFLite, and slightly inferior results compared with ArmNN, ARM's native library for DNN inference.
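Two of the abstract's ideas can be illustrated together: BLIS-style cache blocking of the matrix multiplication and fusing the next layer's activation into the GEMM epilogue. The sketch below is not the paper's code; the block sizes MC/KC/NC and the use of NumPy's matmul as a stand-in for the NEON-vectorized micro-kernel are illustrative assumptions, since the paper tunes those parameters to the cache hierarchy of the target ARM cores.

```python
import numpy as np

# Hypothetical cache-blocking parameters; the paper selects these to fit
# packed panels of A and B into the L1/L2/L3 caches of the ARM v8.2 cores.
MC, KC, NC = 64, 128, 256

def gemm_fused_relu(A, B):
    """Compute relu(A @ B) with a BLIS-style blocked loop nest.

    The ReLU activation is applied to each output block right after its
    last k-panel update ("layer fusion"), so the block is updated while
    still cache-resident instead of being re-read by a separate pass.
    """
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n), dtype=A.dtype)
    for jc in range(0, n, NC):              # macro-panel of B (targets L3)
        for pc in range(0, k, KC):          # packed k-panel (targets L2)
            last_panel = pc + KC >= k
            for ic in range(0, m, MC):      # macro-panel of A (targets L1/L2)
                # Micro-kernel stand-in: in the paper this is a hand-written
                # kernel vectorized with ARM NEON intrinsics.
                C[ic:ic+MC, jc:jc+NC] += (
                    A[ic:ic+MC, pc:pc+KC] @ B[pc:pc+KC, jc:jc+NC]
                )
                if last_panel:              # fused epilogue: ReLU in place
                    np.maximum(C[ic:ic+MC, jc:jc+NC], 0,
                               out=C[ic:ic+MC, jc:jc+NC])
    return C
```

The fused variant produces the same result as a separate GEMM followed by a ReLU layer, e.g. `np.allclose(gemm_fused_relu(A, B), np.maximum(A @ B, 0))` holds for any conforming operands, while avoiding one full read-modify-write sweep over the output matrix.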
Published in
Journal of Systems Architecture, 125 (2022) 102459
Funding entity
Ministerio de Ciencia, Innovación y Universidades (Spain) | Generalitat Valenciana
Project or grant code
TIN2017-82972-R | Prometeo/2019/109 | FJC2019-039222-I | CDEIGENT/2018/014
Access rights
1383-7621/© 2022 The Authors. Published by Elsevier B.V.
info:eu-repo/semantics/openAccess
Appears in collections
- ICC_Articles [419]