Authors: Chinellato, Eris; del Pobil, Angel P.
Year: 2009
Abstract: Being able to estimate the pose and location of nearby objects is a fundamental skill for any natural or artificial agent actively interacting with its environment. The methods for extracting and integrating visual cues employed in artificial systems are usually very different from the solutions found in nature. We present a biologically plausible model of distance and orientation estimation, based on neuroscience findings, that is suitable for implementation in a robotic vision-based grasping setup. Key novelties of the model are its use of simple retinal and proprioceptive data, and its integration of stereoptic and perspective cues. © 2008 Elsevier B.V. All rights reserved.
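The abstract describes integrating stereoptic and perspective cues into a single distance estimate. The paper's actual model is neurally coded, but the underlying idea can be illustrated with a standard maximum-likelihood (inverse-variance-weighted) cue-combination sketch; the function and the numeric values below are illustrative assumptions, not the authors' implementation:

```python
def fuse_cues(est_stereo, var_stereo, est_persp, var_persp):
    """Maximum-likelihood fusion of two independent Gaussian cues.

    Each cue's estimate is weighted by the inverse of its variance,
    so the more reliable cue dominates the fused estimate, and the
    fused variance is never larger than either input variance.
    """
    w_s = 1.0 / var_stereo
    w_p = 1.0 / var_persp
    fused = (w_s * est_stereo + w_p * est_persp) / (w_s + w_p)
    fused_var = 1.0 / (w_s + w_p)
    return fused, fused_var

# Hypothetical example: stereopsis is precise at near range (low variance),
# the perspective cue is noisier, so the result lands closer to stereo.
d, v = fuse_cues(est_stereo=0.50, var_stereo=0.01,
                 est_persp=0.56, var_persp=0.04)
# d = 0.512 (metres), v = 0.008
```

This weighting scheme matches the general finding in cue-integration research that observers combine cues in proportion to their reliabilities; whether the paper's neural model reduces to exactly this rule is not stated in the abstract.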
Published in: Neurocomputing, 72, p. 879-886
ISSN: 0925-2312
Handle: http://hdl.handle.net/10234/43789
DOI: http://dx.doi.org/10.1016/j.neucom.2008.06.018
Keywords: Cue integration; Neural coding; Pose estimation; Vision-based grasping
Title: Distance and orientation estimation of graspable objects in natural and artificial systems