Parallelization of the training and compression of convolutional neural networks for the detection of chest diseases
Title
Parallelization of the training and compression of convolutional neural networks for the detection of chest diseases
Author(s)
Tutor/Supervisor; University. Department
Dolz Zaragozá, Manuel Francisco; Castillo Catalán, María Isabel; Universitat Jaume I. Departament d'Enginyeria i Ciència dels Computadors
Date
2020-11-26
Publisher
Universitat Jaume I
Abstract
Optimization methods applied to convolutional neural networks can yield multiple benefits in their training and inference stages. Specifically, using data-parallelism schemes on multi-GPU platforms decreases the training time. Similarly, compression techniques, such as pruning or quantization, minimize the total number of parameters or allow the use of reduced precision, which lowers the cost of the training stage at the expense of minimal performance losses.
In this work, data parallelism, pruning, and quantization techniques are leveraged, tuned, and evaluated on a set of pre-trained convolutional neural networks able to diagnose common diseases on chest X-rays. The use of these techniques on these models has demonstrated that data-parallel schemes using platforms with multiple GPUs can effectively reduce the training times provided that the batch size is correctly selected.
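The data-parallel scheme described above can be sketched with a toy example (illustrative only, not code from the thesis): each simulated worker stands in for a GPU holding a replica of the weights and a shard of the global batch, local gradients are averaged in an all-reduce step, and the averaged update matches single-device full-batch training.

```python
import numpy as np

def grad(w, X, y):
    """Gradient of the mean-squared-error loss 0.5 * mean((X @ w - y)^2)."""
    return X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))        # global batch of 64 samples
y = rng.normal(size=64)
w = np.zeros(4)

# Data parallelism: each "GPU" holds a replica of w and an equal-sized
# shard of the global batch.
n_workers = 4
shards = np.array_split(np.arange(64), n_workers)
local_grads = [grad(w, X[idx], y[idx]) for idx in shards]

# All-reduce step: average the local gradients; every replica then
# applies the same update, keeping the replicas synchronized.
g = np.mean(local_grads, axis=0)
w_parallel = w - 0.1 * g

# With equal shard sizes, the averaged gradient equals the full-batch
# gradient, so the update is equivalent to single-device training.
w_serial = w - 0.1 * grad(w, X, y)
print(np.allclose(w_parallel, w_serial))  # True
```

In a real setting the effective batch size grows with the number of GPUs, which is why the abstract stresses selecting the batch size correctly.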
Similarly, pruning non-significant connections among neurons at training time can considerably reduce the number of operations performed and the number of trainable model parameters with negligible accuracy loss. On the other hand, quantization-based techniques, such as quantization-aware training, permit even lower memory usage and shorter training times than pruning-based approaches; however, their use may negatively affect the classification results.
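The two compression techniques mentioned in the abstract can be illustrated with a minimal sketch (hypothetical helper names, not code from the thesis): magnitude-based pruning zeroes a fraction of the smallest-magnitude weights, and fake quantization, the operation inserted into the forward pass during quantization-aware training, rounds weights to a reduced-precision grid.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    threshold = np.sort(np.abs(w), axis=None)[k - 1]
    return np.where(np.abs(w) <= threshold, 0.0, w)

def fake_quantize(w, bits=8):
    """Simulate integer quantization: scale, round to the integer grid,
    rescale. Quantization-aware training applies this op in the forward
    pass so the network learns weights that tolerate the rounding error."""
    qmax = 2 ** (bits - 1) - 1          # e.g. 127 for int8
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))

pruned = magnitude_prune(w, sparsity=0.5)
print(np.mean(pruned == 0.0))           # ~0.5: half the parameters removed

quantized = fake_quantize(w, bits=8)
print(np.max(np.abs(quantized - w)))    # small, bounded rounding error
```

Pruning removes parameters outright, while fake quantization keeps all parameters but limits their precision, which is why the abstract notes the two approaches trade off memory, speed, and accuracy differently.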
Description
Master's Degree Final Project, Màster Universitari en Sistemes Intel·ligents. Code: SIU043. Academic year: 2019-2020
Type
info:eu-repo/semantics/masterThesis
Rights
info:eu-repo/semantics/openAccess