Show simple item record

dc.contributor.author	Martínez Martín, Ester
dc.contributor.author	del Pobil, Angel P.
dc.date.accessioned	2019-06-07T15:43:35Z
dc.date.available	2019-06-07T15:43:35Z
dc.date.issued	2019
dc.identifier.citation	Martinez-Martin, Ester; del Pobil, Angel P. "Vision for Robust Robot Manipulation." Sensors, 2019, vol. 19, no. 7, p. 1648	ca_CA
dc.identifier.issn	1424-8220
dc.identifier.uri	http://hdl.handle.net/10234/182758
dc.description.abstract	Advances in Robotics are leading to a new generation of assistant robots working in ordinary, domestic settings. This evolution raises new challenges in the tasks to be accomplished by the robots. This is the case for object manipulation, where the detect-approach-grasp loop requires a robust recovery stage, especially when the held object slides. Several proprioceptive sensors have been developed in the last decades, such as tactile sensors or contact switches, that can be used for that purpose; nevertheless, their implementation may considerably restrict the gripper's flexibility and functionality, increasing its cost and complexity. Alternatively, vision can be used, since it is an undoubtedly rich source of information, and in particular depth vision sensors. We present an approach based on depth cameras to robustly evaluate manipulation success, continuously reporting any object loss and, consequently, allowing the robot to robustly recover from this situation. For that, a Lab-colour segmentation allows the robot to identify potential robot manipulators in the image. Then, the depth information is used to detect any edge resulting from two-object contact. The combination of those techniques allows the robot to accurately detect the presence or absence of contact points between the robot manipulator and a held object. An experimental evaluation in realistic indoor environments supports our approach.	ca_CA
dc.format.extent	15 p.	ca_CA
dc.format.mimetype	application/pdf	ca_CA
dc.language.iso	eng	ca_CA
dc.publisher	MDPI	ca_CA
dc.relation.isPartOf	Sensors, 2019, vol. 19, no. 7, p. 1648	ca_CA
dc.rights	© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).	ca_CA
dc.rights	Attribution 4.0 International	*
dc.rights.uri	http://creativecommons.org/licenses/by-sa/4.0/	*
dc.subject	robotics	ca_CA
dc.subject	robot manipulation	ca_CA
dc.subject	depth vision	ca_CA
dc.title	Vision for Robust Robot Manipulation	ca_CA
dc.type	info:eu-repo/semantics/article	ca_CA
dc.identifier.doi	https://doi.org/10.3390/s19071648
dc.relation.projectID	This research was partially funded by Ministerio de Economía y Competitividad, grant number DPI2015-69041-R. This paper describes research done at the UJI Robotic Intelligence Laboratory. Support for this laboratory is provided in part by Ministerio de Economía y Competitividad and by Universitat Jaume I (UJI-B2018-74).	ca_CA
dc.rights.accessRights	info:eu-repo/semantics/openAccess	ca_CA
dc.relation.publisherVersion	https://www.mdpi.com/1424-8220/19/7/1648	ca_CA
dc.type.version	info:eu-repo/semantics/publishedVersion	ca_CA
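The abstract describes a two-stage check: Lab-colour segmentation to locate candidate robot manipulators, followed by depth-edge analysis to detect contact points with a held object. The NumPy-only sketch below illustrates that idea under stated assumptions — the threshold values, function names, and the gradient-based edge test are hypothetical placeholders, not the authors' implementation (which the record does not detail).

```python
import numpy as np

def segment_manipulator(lab_img, lo, hi):
    """Mask pixels whose Lab channels all fall inside [lo, hi].

    `lab_img` is an (H, W, 3) image already converted to Lab space;
    `lo`/`hi` are per-channel bounds (hypothetical values, tuned per robot).
    """
    lo, hi = np.asarray(lo), np.asarray(hi)
    return np.all((lab_img >= lo) & (lab_img <= hi), axis=-1)

def contact_edges(depth, mask, jump=0.02):
    """Flag pixels inside the manipulator mask where depth changes abruptly.

    A large depth gradient is used here as a crude proxy for the edge
    produced by two objects in contact; `jump` is an assumed threshold
    in the depth map's units (e.g. metres).
    """
    gy, gx = np.gradient(depth.astype(float))
    return (np.hypot(gx, gy) > jump) & mask

def holding_object(depth, lab_img, lo, hi, min_contact=10):
    """Report grasp success if enough contact-edge pixels remain."""
    mask = segment_manipulator(lab_img, lo, hi)
    return int(contact_edges(depth, mask).sum()) >= min_contact
```

Run per frame, `holding_object` going from True to False would signal an object loss, triggering the recovery stage the abstract mentions.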



Except where otherwise noted, the item's license is described as: © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).