Show simple item record
High performance and energy efficient inference for deep learning on multicore ARM processors using general optimization techniques and BLIS
dc.contributor.author | Castelló, Adrián | |
dc.contributor.author | Barrachina Mir, Sergio | |
dc.contributor.author | Dolz, Manuel F. | |
dc.contributor.author | Quintana-Ortí, Enrique S. | |
dc.contributor.author | San Juan, Pau | |
dc.contributor.author | Tomás Domínguez, Andrés Enrique | |
dc.date.accessioned | 2022-05-24T13:04:49Z | |
dc.date.available | 2022-05-24T13:04:49Z | |
dc.date.issued | 2022-03-22 | |
dc.identifier.citation | Castelló, A., Barrachina, S., Dolz, M. F., Quintana-Ortí, E. S., San Juan, P., & Tomás, A. E. (2022). High performance and energy efficient inference for deep learning on multicore ARM processors using general optimization techniques and BLIS. Journal of Systems Architecture, 125, 102459. | ca_CA |
dc.identifier.issn | 1383-7621 | |
dc.identifier.uri | http://hdl.handle.net/10234/197784 | |
dc.description.abstract | We evolve PyDTNN, a framework for distributed parallel training of Deep Neural Networks (DNNs), into an efficient inference tool for convolutional neural networks. Our optimization process on multicore ARM processors involves several high-level transformations of the original framework, such as the development and integration of Cython routines to exploit thread-level parallelism; the design and development of micro-kernels for the matrix multiplication, vectorized with ARM’s NEON intrinsics, that can accommodate layer fusion; and the appropriate selection of several cache configuration parameters tailored to the memory hierarchy of the target ARM processors. Our experiments evaluate both inference throughput (measured in processed images/s) and inference latency (i.e., time-to-response) as well as energy consumption per image when varying the level of thread parallelism and the processor power modes. The experiments with the new inference engine are reported for the ResNet50 v1.5 model on the ImageNet dataset from the MLPerf suite using the ARM v8.2 cores in the NVIDIA Jetson AGX Xavier board. These results show superior performance compared with the widely used TFLite from Google and slightly inferior results when compared with ArmNN, the native library from ARM for DNN inference. | ca_CA |
dc.format.extent | 9 p. | ca_CA |
dc.format.mimetype | application/pdf | ca_CA |
dc.language.iso | eng | ca_CA |
dc.publisher | Elsevier | ca_CA |
dc.publisher | North-Holland | ca_CA |
dc.relation.isPartOf | Journal of Systems Architecture. 125 (2022) 102459 | ca_CA |
dc.rights | 1383-7621/© 2022 The Authors. Published by Elsevier B.V. | ca_CA |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | ca_CA |
dc.subject | convolutional neural network | ca_CA |
dc.subject | inference | ca_CA |
dc.subject | multicore low-power processors | ca_CA |
dc.title | High performance and energy efficient inference for deep learning on multicore ARM processors using general optimization techniques and BLIS | ca_CA |
dc.type | info:eu-repo/semantics/article | ca_CA |
dc.identifier.doi | https://doi.org/10.1016/j.sysarc.2022.102459 | |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca_CA |
dc.type.version | info:eu-repo/semantics/publishedVersion | ca_CA |
project.funder.name | Ministerio de Ciencia, Innovación y Universidades (Spain) | ca_CA |
project.funder.name | Generalitat Valenciana | ca_CA |
oaire.awardNumber | TIN2017-82972-R | ca_CA |
oaire.awardNumber | Prometeo/2019/109 | ca_CA |
oaire.awardNumber | FJC2019-039222-I | ca_CA |
oaire.awardNumber | CDEIGENT/2018/014 | ca_CA |
Files in this item
This item appears in the following collection(s)
- ICC_Articles [424]