Paralelización del entrenamiento y compresión de redes neuronales convolucionales para la detección de enfermedades de tórax
Title
Paralelización del entrenamiento y compresión de redes neuronales convolucionales para la detección de enfermedades de tórax
Author
Tutor/Supervisor; University. Department
Dolz Zaragozá, Manuel Francisco; Castillo Catalán, María Isabel; Universitat Jaume I. Departament d'Enginyeria i Ciència dels Computadors
Publication date
2020-11-26
Publisher
Universitat Jaume I
Abstract
Optimization methods applied to convolutional neural networks can yield multiple benefits in their training and inference stages. Specifically, using data-parallelism schemes on multi-GPU platforms allows decreasing the training time. Similarly, compression techniques such as pruning or quantization allow reducing the total number of parameters or using reduced-precision arithmetic, which shortens the training stage at the expense of minimal performance losses.
In this work, data-parallelism, pruning, and quantization techniques are leveraged, tuned, and evaluated on a set of pre-trained convolutional neural networks able to diagnose common diseases in chest X-rays. Applying these techniques to these models demonstrated that data-parallel schemes on platforms with multiple GPUs can effectively reduce training times, provided that the batch size is correctly selected.
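The abstract does not detail the framework or hardware used in the thesis. As an illustration only, the arithmetic behind such a data-parallel scheme — each GPU computes the gradient on its shard of the batch, and the shard gradients are averaged (an all-reduce) before the weight update — can be sketched framework-agnostically in NumPy (all names here are hypothetical):

```python
import numpy as np

# Toy linear model: loss = mean((X @ w - y)^2); grad = 2/B * X^T (X w - y)
rng = np.random.default_rng(0)
B, D, GPUS = 8, 4, 2              # batch size, features, simulated GPU count
X = rng.standard_normal((B, D))
y = rng.standard_normal(B)
w = rng.standard_normal(D)

def grad(Xs, ys, w):
    """Gradient of the mean-squared error on one shard of the batch."""
    return 2.0 / len(ys) * Xs.T @ (Xs @ w - ys)

# Data parallelism: each "GPU" sees an equal shard of the batch,
# computes a local gradient, and the results are averaged.
shards = np.array_split(np.arange(B), GPUS)
local_grads = [grad(X[idx], y[idx], w) for idx in shards]
avg_grad = np.mean(local_grads, axis=0)

# With equal shard sizes, this matches the single-device full-batch
# gradient, so each update step is mathematically unchanged.
full_grad = grad(X, y, w)
print(np.allclose(avg_grad, full_grad))  # True
```

This equivalence is also why the batch size matters in practice: the effective batch is the sum of the per-GPU shards, so scaling up the number of GPUs changes the optimization dynamics unless the batch size (and often the learning rate) is tuned accordingly.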
Similarly, pruning non-significant connections between neurons at training time can considerably reduce the number of operations performed and the number of trainable parameters with negligible accuracy loss. On the other hand, quantization-based techniques, such as quantization-aware training, achieve even lower memory usage and shorter training times than pruning-based approaches; however, their use may degrade the classification results.
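Neither the pruning criterion nor the quantization scheme is specified in this abstract. A minimal NumPy sketch of the two underlying ideas — magnitude pruning, and the uniform 8-bit quantize/dequantize round-trip that quantization-aware training simulates during the forward pass — might look like this (the pruning fraction and bit width are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.standard_normal(1000).astype(np.float32)  # toy layer weights

# Magnitude pruning: zero the fraction `p` of weights with the smallest
# absolute value; only the surviving weights need storing/computing.
p = 0.5
threshold = np.quantile(np.abs(w), p)
mask = np.abs(w) >= threshold
w_pruned = w * mask
sparsity = 1.0 - mask.mean()

# Uniform symmetric int8 quantization: weights are stored as 8-bit
# integers plus one float scale (4x smaller than float32 storage).
scale = np.abs(w).max() / 127.0
q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = q.astype(np.float32) * scale

# Quantization-aware training uses w_dequant in the forward pass so the
# network learns to tolerate the rounding error, which for in-range
# values is bounded by half a quantization step.
max_err = float(np.abs(w - w_dequant).max())
print(sparsity, max_err <= scale / 2 + 1e-6)
```

The accuracy trade-off noted above follows directly: pruning removes individually unimportant weights, while quantization perturbs every weight at once, which is why aggressive quantization can hurt classification results more visibly.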
Keywords / Subjects
Description
Master's thesis, Master's Degree in Intelligent Systems (Màster Universitari en Sistemes Intel·ligents). Code: SIU043. Academic year: 2019-2020
Document type
info:eu-repo/semantics/masterThesis
Access rights
info:eu-repo/semantics/openAccess