Variable-size batched Gauss–Jordan elimination for block-Jacobi preconditioning on graphics processors
Authors: Anzt, Hartwig; Dongarra, Jack; Flegar, Goran; Quintana-Ortí, Enrique S.
DOI: https://doi.org/10.1016/j.parco.2017.12.006
Metadata
Title: Variable-size batched Gauss–Jordan elimination for block-Jacobi preconditioning on graphics processors
Date: 2019
Publisher: Elsevier
ISSN: 0167-8191
Bibliographic citation: ANZT, Hartwig, et al. Variable-size batched Gauss–Jordan elimination for block-Jacobi preconditioning on graphics processors. Parallel Computing, 2019, vol. 81, p. 131-146.
Type: info:eu-repo/semantics/article
Publisher version: https://www.sciencedirect.com/science/article/pii/S0167819117302107
Version: info:eu-repo/semantics/publishedVersion
Abstract
In this work, we address the efficient realization of block-Jacobi preconditioning on graphics processing units (GPUs). This task requires the solution of a collection of small and independent linear systems. To fully realize this implementation, we develop a variable-size batched matrix inversion kernel that uses Gauss–Jordan elimination (GJE) along with a variable-size batched matrix–vector multiplication kernel that transforms the linear systems’ right-hand sides into the solution vectors. Our kernels make heavy use of the increased register count and the warp-local communication associated with newer GPU architectures. Moreover, in the matrix inversion, we employ an implicit pivoting strategy that migrates the workload (i.e., operations) to the place where the data resides instead of moving the data to the executing cores. We complement the matrix inversion with extraction and insertion strategies that allow the block-Jacobi preconditioner to be set up rapidly. The experiments on NVIDIA’s K40 and P100 architectures reveal that our variable-size batched matrix inversion routine outperforms the CUDA basic linear algebra subroutine (cuBLAS) library functions that provide the same (or even less) functionality. We also show that the preconditioner setup and preconditioner application cost can be somewhat offset by the faster convergence of the iterative solver.
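As a rough illustration of the approach the abstract describes (not the authors’ released kernel), the CUDA sketch below inverts a batch of small, variable-size matrices in place with Gauss–Jordan elimination. One warp handles one diagonal block: each lane keeps one row in registers, and the pivot row is broadcast with warp shuffles, matching the register-heavy, warp-local style mentioned above. The kernel name, the pointer-array layout, and the omission of the paper’s implicit pivoting are all simplifications for illustration.

```cuda
#include <cuda_runtime.h>

constexpr int MAX_N = 32;               // one warp handles blocks of order <= 32
constexpr unsigned FULL_MASK = 0xffffffffu;

// One warp inverts one small row-major matrix in place via Gauss-Jordan
// elimination: lane i keeps row i in registers, and the pivot row is
// broadcast with warp shuffles, so no shared memory is touched.
// Simplification: no pivoting (the paper uses implicit pivoting), so the
// diagonal blocks are assumed invertible without row exchanges.
__global__ void batched_gje_inverse(double* const* A_batch, const int* sizes)
{
    const int lane = threadIdx.x;        // launched with blockDim.x == 32
    double*   A    = A_batch[blockIdx.x];
    const int n    = sizes[blockIdx.x];  // this block's order, 1 <= n <= 32

    // Load row `lane`; pad unused lanes with identity rows so that the
    // warp-uniform shuffles below always read finite values.
    double a[MAX_N];
    for (int j = 0; j < MAX_N; ++j)
        a[j] = (lane < n && j < n) ? A[lane * n + j] : (lane == j ? 1.0 : 0.0);

    for (int k = 0; k < n; ++k) {
        const double d = __shfl_sync(FULL_MASK, a[k], k);   // pivot A[k][k]
        const double f = (lane == k) ? 0.0 : a[k];          // row multiplier
        for (int j = 0; j < n; ++j) {
            const double pkj = __shfl_sync(FULL_MASK, a[j], k); // pivot row A[k][j]
            if (lane == k)
                a[j] = (j == k) ? 1.0 / d : pkj / d;        // scale pivot row
            else
                a[j] = (j == k) ? -f / d : a[j] - f * (pkj / d);
        }
    }

    if (lane < n)                        // write the inverted rows back
        for (int j = 0; j < n; ++j)
            A[lane * n + j] = a[j];
}
```

Launching `batched_gje_inverse<<<nbatch, 32>>>(d_ptrs, d_sizes)` would then invert all diagonal blocks in one pass; applying the preconditioner reduces to the variable-size batched matrix–vector product the abstract pairs with the inversion kernel.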
Is part of: Parallel Computing, Volume 81, January 2019.
Investigation project: DE-SC-0010042 ; VH-NG-1241 ; TIN2014-53495-R ; 732631
Rights: 0167-8191/© 2018 Elsevier B.V. All rights reserved.
http://rightsstatements.org/vocab/InC/1.0/
info:eu-repo/semantics/restrictedAccess
This item appears in the following collection(s)
- ICC_Articles [413]