Paralelización del entrenamiento y compresión de redes neuronales convolucionales para la detección de enfermedades de tórax
Title
Paralelización del entrenamiento y compresión de redes neuronales convolucionales para la detección de enfermedades de tórax

Author

Tutor/Supervisor; University. Department
Dolz Zaragozá, Manuel Francisco; Castillo Catalán, María Isabel; Universitat Jaume I. Departament d'Enginyeria i Ciència dels Computadors

Publication date
2020-11-26

Publisher
Universitat Jaume I

Abstract
Optimization methods applied to convolutional neural networks can yield multiple benefits in their training and inference stages. Specifically, using data-parallel schemes on multi-GPU platforms reduces the training time. Similarly, compression techniques, such as pruning or quantization, minimize the total number of parameters or allow the use of reduced precision, which shortens the training stage at the expense of minimal performance losses.
In this work, data parallelism, pruning, and quantization techniques are leveraged, tuned, and evaluated on a set of pre-trained convolutional neural networks able to diagnose common diseases on chest X-rays. Applying these techniques to these models demonstrates that data-parallel schemes on platforms with multiple GPUs can effectively reduce training times, provided that the batch size is correctly selected.
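The core idea behind the data-parallel scheme described above can be illustrated with a minimal NumPy sketch: the global batch is split across workers, each worker computes a local gradient, and the gradients are averaged (the all-reduce step). This is an illustrative model, not the thesis's actual multi-GPU pipeline; the linear model and `mse_grad` helper are hypothetical stand-ins for a network and its backward pass.

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of mean-squared error for a linear model y_hat = X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 8))   # one global batch of 64 samples
y = rng.normal(size=64)
w = rng.normal(size=8)

# Data parallelism: split the batch across 4 hypothetical workers (GPUs),
# compute local gradients, then average them (an all-reduce step).
shards_X = np.split(X, 4)
shards_y = np.split(y, 4)
local_grads = [mse_grad(w, Xs, ys) for Xs, ys in zip(shards_X, shards_y)]
avg_grad = np.mean(local_grads, axis=0)

# With equal shard sizes, the averaged gradient equals the full-batch one,
# so each optimizer step is mathematically unchanged while the work is split.
full_grad = mse_grad(w, X, y)
print(np.allclose(avg_grad, full_grad))  # True
```

This also hints at why the batch size matters: with more workers, either the global batch grows (which can hurt convergence) or each shard shrinks (which underutilizes each GPU), so it must be tuned.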
Similarly, pruning non-significant connections among neurons at training time can considerably reduce the number of operations performed and the number of trainable parameters, with negligible accuracy loss. On the other hand, quantization-based techniques, such as quantization-aware training, permit even lower memory usage and training times than pruning-based approaches; however, their use may negatively affect the classification results.
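Pruning "non-significant connections" is commonly done by magnitude: weights whose absolute value falls below a threshold are zeroed out. The sketch below is a hedged illustration of that idea on a random tensor, not the thesis's pruning schedule; `magnitude_prune` and the 90% sparsity target are hypothetical choices.

```python
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the fraction `sparsity` of weights with smallest magnitude.

    Returns the pruned tensor and the boolean keep-mask.
    """
    k = int(np.ceil(sparsity * w.size))
    if k == 0:
        return w.copy(), np.ones_like(w, dtype=bool)
    # Threshold at the k-th smallest absolute value; keep strictly larger ones.
    threshold = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    mask = np.abs(w) > threshold
    return w * mask, mask

rng = np.random.default_rng(1)
w = rng.normal(size=(256, 128))   # stand-in for a conv/dense weight tensor
pruned, mask = magnitude_prune(w, 0.9)

remaining = mask.sum() / w.size
print(f"parameters kept: {remaining:.2%}")
```

In a training loop the mask would be reapplied after each optimizer step so pruned connections stay at zero, which is what yields the reduction in effective operations and trainable parameters.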
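The memory saving from quantization comes from storing weights in a low-precision integer format. A minimal sketch of symmetric per-tensor int8 quantization follows; it shows the 4x storage reduction relative to float32 and the bounded rounding error, but it is a simplification of what quantization-aware training actually does (which also simulates this rounding during the forward pass). The function names are illustrative, not a real library API.

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor int8 quantization: w ~= scale * q."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map int8 codes back to (approximate) float weights."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(2)
w = rng.normal(size=(64, 64)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

# int8 storage is 4x smaller than float32, and round-to-nearest keeps the
# per-weight error within half a quantization step.
max_err = np.max(np.abs(w - w_hat))
print(max_err <= scale / 2 + 1e-6)  # True
```

This rounding error is the source of the possible "negative effects on the classification results" mentioned above: quantization-aware training mitigates it by letting the network adapt to the quantized weights during training.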
Keywords / Subjects

Description
Final project of the Master's Degree in Intelligent Systems (Sistemes Intel·ligents). Code: SIU043. Academic year: 2019-2020
Document type
info:eu-repo/semantics/masterThesis

Access rights
info:eu-repo/semantics/openAccess