Efficient and portable GEMM-based convolution operators for deep neural network training on multicore processors
Other documents by these authors: Barrachina Mir, Sergio; Dolz, Manuel F.; San Juan, Pablo; Quintana-Orti, Enrique S.
Metadata
Title
Efficient and portable GEMM-based convolution operators for deep neural network training on multicore processors
Publication date
2022-05-30
Publisher
Elsevier; Academic Press
ISSN
0743-7315
Bibliographic citation
Barrachina, S., Dolz, M. F., San Juan, P., & Quintana-Ortí, E. S. (2022). Efficient and Portable GEMM-based Convolution Operators for Deep Neural Network Training on Multicore Processors. Journal of Parallel and Distributed Computing.
Document type
info:eu-repo/semantics/article
Version
info:eu-repo/semantics/publishedVersion
Keywords / Subjects
Abstract
Convolutional Neural Networks (CNNs) play a crucial role in many image recognition and classification tasks, recommender systems, brain-computer interfaces, etc. As a consequence, there is notable interest in developing high-performance realizations of the convolution operators, which concentrate a significant portion of the computational cost of this type of neural network.
In previous work, we introduced a portable, high-performance convolution algorithm, based on the BLIS realization of matrix multiplication, that eliminates most of the runtime and memory overheads which impair the performance of the convolution operators appearing in the forward training pass when performed via an explicit im2col transform. In this paper, we extend our ideas to the full training process of CNNs on multicore processors, proposing new high-performance strategies to tackle the convolution operators present in the more complex backward pass of training, while maintaining the portability of the realizations. In addition, we fully integrate these algorithms into a framework for distributed training of CNNs on clusters of computers, providing a complete experimental evaluation of the actual benefits in terms of both performance and memory consumption. Compared with the baseline implementation, the new convolution operators using pre-allocated memory accelerate training by roughly 6%–25%, provided sufficient memory is available; the operator variants that do not rely on persistent memory save up to 70% of memory.
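To illustrate the im2col-plus-GEMM approach that the abstract refers to, here is a minimal NumPy sketch of the forward convolution expressed as a single matrix multiplication. This is only an assumed illustration of the general technique (stride 1, no padding), not the paper's BLIS-based implementation; all function names and shapes are choices made for this example.

```python
import numpy as np

def im2col(x, kh, kw):
    """Unfold a (C, H, W) input into a (C*kh*kw, Ho*Wo) matrix (stride 1, no padding)."""
    c, h, w = x.shape
    ho, wo = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, ho * wo), dtype=x.dtype)
    row = 0
    for ch in range(c):          # row order matches the (C, kh, kw) flattening of the filters
        for i in range(kh):
            for j in range(kw):
                cols[row] = x[ch, i:i + ho, j:j + wo].reshape(-1)
                row += 1
    return cols

def conv2d_gemm(x, weights):
    """Convolution as GEMM: filters (K, C, kh, kw) applied to input (C, H, W)."""
    k, c, kh, kw = weights.shape
    ho, wo = x.shape[1] - kh + 1, x.shape[2] - kw + 1
    w_mat = weights.reshape(k, c * kh * kw)   # K x (C*kh*kw) filter matrix
    out = w_mat @ im2col(x, kh, kw)           # one GEMM produces all outputs
    return out.reshape(k, ho, wo)
```

The explicit im2col buffer is what costs the extra memory (and copy time) that the paper's convGEMM-style operators avoid by fusing the transform into the packing stage of the BLIS GEMM.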
Published in
Journal of Parallel and Distributed Computing 167 (2022) 240–254
Funding entity
Generalitat Valenciana
Project or grant code
PID2020-113656RB-C21/C22 | MCIN/AEI/10.13039/501100011033 | Prometeo/2019/109 | CDEIGENT/2018/014
Access rights
info:eu-repo/semantics/openAccess
Appears in collections
- ICC_Articles [427]