Show simple item record

dc.contributor.author    Iserte, Sergio
dc.contributor.author    Prades, Javier
dc.contributor.author    Reaño, Carlos
dc.contributor.author    Silla, Federico
dc.date.accessioned    2019-12-16T09:57:13Z
dc.date.available    2019-12-16T09:57:13Z
dc.date.issued    2019
dc.identifier.citation    Iserte, S, Prades, J, Reaño, C, Silla, F. Improving the management efficiency of GPU workloads in data centers through GPU virtualization. Concurrency Computat Pract Exper. 2019;e5275. https://doi.org/10.1002/cpe.5275
dc.identifier.issn    1532-0626
dc.identifier.issn    1532-0634
dc.identifier.uri    http://hdl.handle.net/10234/185456
dc.description    This is the pre-peer reviewed version of the following article: Improving the management efficiency of GPU workloads in data centers through GPU virtualization, which has been published in final form at https://doi.org/10.1002/cpe.5275. This article may be used for non-commercial purposes in accordance with Wiley Terms and Conditions for Use of Self-Archived Versions.
dc.description.abstract    Graphics processing units (GPUs) are currently used in data centers to reduce the execution time of compute-intensive applications. However, the use of GPUs presents several side effects, such as increased acquisition costs and larger space requirements. Furthermore, GPUs require a nonnegligible amount of energy even while idle. Additionally, GPU utilization is usually low for most applications. In a similar way to the use of virtual machines, using virtual GPUs may address the concerns associated with the use of these devices. In this regard, the remote GPU virtualization mechanism could be leveraged to share the GPUs present in the computing facility among the nodes of the cluster. This would increase overall GPU utilization, thus reducing the negative impact of the increased costs mentioned before. Reducing the amount of GPUs installed in the cluster could also be possible. However, in the same way as job schedulers map GPU resources to applications, virtual GPUs should also be scheduled before job execution. Nevertheless, current job schedulers are not able to deal with virtual GPUs. In this paper, we analyze the performance attained by a cluster using the remote Compute Unified Device Architecture middleware and a modified version of the Slurm scheduler, which is now able to assign remote GPUs to jobs. Results show that cluster throughput, measured as jobs completed per time unit, is doubled at the same time that the total energy consumption is reduced up to 40%. GPU utilization is also increased.    ca_CA
dc.format.extent    16 p.    ca_CA
dc.format.mimetype    application/pdf    ca_CA
dc.language.iso    eng    ca_CA
dc.publisher    Wiley    ca_CA
dc.relation.isPartOf    Concurrency and Computation: Practice and Experience, 2019    ca_CA
dc.rights    Copyright © John Wiley & Sons, Inc.    ca_CA
dc.rights.uri    http://rightsstatements.org/vocab/InC/1.0/
dc.subject    CUDA    ca_CA
dc.subject    data centers    ca_CA
dc.subject    GPU    ca_CA
dc.subject    InfiniBand    ca_CA
dc.subject    rCUDA    ca_CA
dc.subject    Slurm    ca_CA
dc.title    Improving the management efficiency of GPU workloads in data centers through GPU virtualization    ca_CA
dc.type    info:eu-repo/semantics/article    ca_CA
dc.identifier.doi    https://doi.org/10.1002/cpe.5275
dc.relation.projectID    Generalitat Valenciana. Grant Number: PROMETEO/2017/077; MINECO and FEDER. Grant Numbers: TIN2014-53495-R, TIN2015-65316-P, TIN2017-82972-R    ca_CA
dc.rights.accessRights    info:eu-repo/semantics/openAccess    ca_CA
dc.relation.publisherVersion    https://onlinelibrary.wiley.com/doi/full/10.1002/cpe.5275    ca_CA
dc.type.version    info:eu-repo/semantics/submittedVersion    ca_CA

