Variable-size batched Gauss–Jordan elimination for block-Jacobi preconditioning on graphics processors
Authors: Anzt, Hartwig; Dongarra, Jack; Flegar, Goran; Quintana-Orti, Enrique S.
Metadata
DOI: https://doi.org/10.1016/j.parco.2017.12.006
Title: Variable-size batched Gauss–Jordan elimination for block-Jacobi preconditioning on graphics processors
Publication date: 2019
Publisher: Elsevier
ISSN: 0167-8191
Bibliographic citation: ANZT, Hartwig, et al. Variable-size batched Gauss–Jordan elimination for block-Jacobi preconditioning on graphics processors. Parallel Computing, 2019, vol. 81, p. 131-146.
Document type: info:eu-repo/semantics/article
Publisher's version: https://www.sciencedirect.com/science/article/pii/S0167819117302107
Version: info:eu-repo/semantics/publishedVersion
Abstract
In this work, we address the efficient realization of block-Jacobi preconditioning on graphics processing units (GPUs). This task requires the solution of a collection of small and independent linear systems. To fully realize this implementation, we develop a variable-size batched matrix inversion kernel that uses Gauss-Jordan elimination (GJE) along with a variable-size batched matrix–vector multiplication kernel that transforms the linear systems’ right-hand sides into the solution vectors. Our kernels make heavy use of the increased register count and the warp-local communication associated with newer GPU architectures. Moreover, in the matrix inversion, we employ an implicit pivoting strategy that migrates the workload (i.e., operations) to the place where the data resides instead of moving the data to the executing cores. We complement the matrix inversion with extraction and insertion strategies that allow the block-Jacobi preconditioner to be set up rapidly. The experiments on NVIDIA’s K40 and P100 architectures reveal that our variable-size batched matrix inversion routine outperforms the CUDA basic linear algebra subroutine (cuBLAS) library functions that provide the same (or even less) functionality. We also show that the preconditioner setup and preconditioner application cost can be somewhat offset by the faster convergence of the iterative solver.
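To make the kernel design sketched in the abstract concrete, the following CUDA fragment illustrates the warp-level Gauss-Jordan inversion pattern it describes: each lane of a warp owns one column of a small diagonal block, and the pivot data is exchanged through warp-shuffle (warp-local) communication instead of shared memory. This is a minimal sketch, not the authors' implementation: it assumes row-major blocks of order at most 32 in double precision, omits the paper's implicit pivoting (so it is only safe for blocks that need no pivoting), and the kernel and parameter names (batched_gje_inverse, A_batch, block_sizes) are hypothetical.

    // Hypothetical sketch: one warp inverts one diagonal block in place via
    // Gauss-Jordan elimination, with one matrix column stored per lane.
    // Assumptions (not from the paper): row-major blocks of order n <= 32,
    // double precision, no pivoting.
    #include <cuda_runtime.h>

    __global__ void batched_gje_inverse(double **A_batch, const int *block_sizes)
    {
        const int b    = blockIdx.x;    // one thread block (one warp) per problem
        const int lane = threadIdx.x;   // lane j owns column j of the block
        const int n    = block_sizes[b];
        if (lane >= n) return;          // surplus lanes drop out (n <= 32)

        const unsigned mask = 0xffffffffu >> (32 - n);  // lanes 0..n-1
        double *A = A_batch[b];

        // Load column `lane`; col[i] mirrors A[i][lane]. Dynamic indexing may
        // spill to local memory; the published kernel keeps the block in registers.
        double col[32];
        for (int i = 0; i < n; ++i)
            col[i] = A[i * n + lane];

        for (int k = 0; k < n; ++k) {
            // Broadcast the pivot A[k][k] from lane k to the whole warp.
            double pivot = __shfl_sync(mask, col[k], k);

            // Scale pivot row k; lane k ends up holding 1/pivot.
            col[k] = (lane == k ? 1.0 : col[k]) / pivot;

            // Eliminate the pivot row from every other row i.
            for (int i = 0; i < n; ++i) {
                if (i == k) continue;
                double f   = __shfl_sync(mask, col[i], k);  // A[i][k], held by lane k
                double aik = (lane == k) ? 0.0 : col[i];    // lane k zeroes A[i][k]
                col[i] = aik - f * col[k];
            }
        }

        // Write A^{-1} back over the input block.
        for (int i = 0; i < n; ++i)
            A[i * n + lane] = col[i];
    }

A launch would dedicate one 32-thread block per diagonal block, e.g. batched_gje_inverse<<<num_systems, 32>>>(d_block_ptrs, d_block_sizes). The published kernels go further: implicit pivoting for numerical stability, strict register storage of the block, and the extraction and insertion strategies that gather each block from the sparse matrix and scatter the inverse back.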
Published in: Parallel Computing, Volume 81, January 2019.
Research projects: DE-SC-0010042 ; VH-NG-1241 ; TIN2014-53495-R ; 732631
Access rights:
0167-8191/© 2018 Elsevier B.V. All rights reserved.
http://rightsstatements.org/vocab/InC/1.0/
info:eu-repo/semantics/restrictedAccess
Appears in collections: ICC_Articles