Increasing underwater manipulation autonomy using segmentation and visual tracking
comunitat-uji-handle:10234/9
comunitat-uji-handle2:10234/7036
comunitat-uji-handle3:10234/146069
INVESTIGACION
This resource is restricted
http://dx.doi.org/10.1109/OCEANSE.2017.8084762
Metadata
Title: Increasing underwater manipulation autonomy using segmentation and visual tracking
Publication date: 2017-06
Publisher: IEEE
ISBN: 9781509052783
Bibliographic citation: P. J. Sanz, M. Vincze and D. Fornas, "Increasing underwater manipulation autonomy using segmentation and visual tracking," OCEANS 2017 - Aberdeen, Aberdeen, 2017, pp. 1-5
Document type: info:eu-repo/semantics/conferenceObject
Publisher's version: https://ieeexplore.ieee.org/document/8084762/
Version: info:eu-repo/semantics/publishedVersion
Abstract:
The present research in underwater robotics aims to increase the autonomy of manipulation operations in fields such as archaeology or biology, which cannot afford costly underwater interventions using traditional Remotely Operated Vehicles (ROVs). This paper describes work toward the long-term goal of autonomous underwater manipulation. Autonomous grasping, under limited sensing and water conditions that affect the robot's systems, is a growing capability in underwater scenarios. Here we present a framework that combines vision, segmentation, user interfaces and grasp planning to perform visually guided manipulation and improve the specification of grasping operations. With it, a user commands and supervises the robot to recover cylinder-shaped objects, a common restriction in archaeological scenarios; the framework, though, can be extended to detect other kinds of objects. Information about the environment is gathered with stereo cameras and laser reconstruction methods to obtain a model of the object's graspable area. A RANSAC segmentation algorithm is used to estimate the model parameters, and the best grasp is presented to the user in an intuitive user interface. The grasp is then executed by the robot. This approach has been tested in simulation and in water tank conditions.
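The paper itself does not publish its implementation; as an illustration only, the RANSAC cylinder-parameter estimation step the abstract mentions can be sketched as follows. This is a simplified, hypothetical version (not the authors' code) that assumes the cylinder's axis direction is already known, so the problem reduces to RANSAC circle fitting on the points projected into the plane normal to the axis; all function names and parameters here are invented for the sketch.

```python
import numpy as np

def fit_circle_3pts(p1, p2, p3):
    """Circumcircle (center, radius) of three 2-D points; None if collinear."""
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, float(np.linalg.norm(p1 - center))

def ransac_cylinder(points, axis=np.array([0.0, 0.0, 1.0]),
                    iters=500, tol=0.005, rng=None):
    """RANSAC estimate of a cylinder's (center, radius) given a known
    axis direction: project the cloud onto the plane normal to the axis
    and repeatedly fit circles to random 3-point samples, keeping the
    model with the most inliers (points within `tol` of the circle)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    axis = axis / np.linalg.norm(axis)
    # Orthonormal basis (u, v) for the plane normal to the axis.
    u = np.cross(axis, [1.0, 0.0, 0.0])
    if np.linalg.norm(u) < 1e-6:
        u = np.cross(axis, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    pts2d = points @ np.column_stack([u, v])   # N x 2 projected cloud
    best_center, best_radius, best_inliers = None, 0.0, 0
    for _ in range(iters):
        i, j, k = rng.choice(len(pts2d), 3, replace=False)
        fit = fit_circle_3pts(pts2d[i], pts2d[j], pts2d[k])
        if fit is None:
            continue
        center, radius = fit
        resid = np.abs(np.linalg.norm(pts2d - center, axis=1) - radius)
        n_in = int((resid < tol).sum())
        if n_in > best_inliers:
            best_center, best_radius, best_inliers = center, radius, n_in
    return best_center, best_radius, best_inliers
```

A full cylinder model (unknown axis) would also sample the axis from point normals, as PCL's `SACMODEL_CYLINDER` does; the projection trick above keeps the sketch short while still showing the sample/score/keep-best RANSAC loop.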
Description
Paper presented at the OCEANS 2017 Conference, Aberdeen, 19-22 June 2017