Show simple item record
Head and eye egocentric gesture recognition for human-robot interaction using eyewear cameras
dc.contributor.author | Marina-Miranda, Javier | |
dc.contributor.author | Traver Roig, Vicente Javier | |
dc.date.accessioned | 2022-10-11T07:12:07Z | |
dc.date.available | 2022-10-11T07:12:07Z | |
dc.date.issued | 2022-06-08 | |
dc.identifier.citation | Marina-Miranda, J., & Traver, V. J. (2022). Head and Eye Egocentric Gesture Recognition for Human-Robot Interaction. IEEE Robotics and Automation Letters. | ca_CA |
dc.identifier.uri | http://hdl.handle.net/10234/200314 | |
dc.description.abstract | Non-verbal communication plays a particularly important role in a wide range of scenarios in Human-Robot Interaction (HRI). Accordingly, this work addresses the problem of human gesture recognition. In particular, we focus on head and eye gestures, and adopt an egocentric (first-person) perspective using eyewear cameras. We argue that this egocentric view may offer a number of conceptual and technical benefits over scene- or robot-centric perspectives. A motion-based recognition approach is proposed, which operates at two temporal granularities. Locally, frame-to-frame homographies are estimated with a convolutional neural network (CNN). The output of this CNN is input to a long short-term memory (LSTM) to capture longer-term temporal visual relationships, which are relevant to characterize gestures. Regarding the configuration of the network architecture, one particularly interesting finding is that using the output of an internal layer of the homography CNN increases the recognition rate with respect to using the homography matrix itself. While this work focuses on action recognition, and no robot or user study has been conducted yet, the system has been designed to meet real-time constraints. The encouraging results suggest that the proposed egocentric perspective is viable, and this proof-of-concept work provides novel and useful contributions to the exciting area of HRI. | ca_CA |
dc.format.extent | 10 p. | ca_CA |
dc.format.mimetype | application/pdf | ca_CA |
dc.language.iso | eng | ca_CA |
dc.publisher | IEEE | ca_CA |
dc.relation.isPartOf | IEEE Robotics and Automation Letters (Volume: 7, Issue: 3, July 2022) | ca_CA |
dc.rights | © 2022 IEEE | ca_CA |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | ca_CA |
dc.subject | gesture | ca_CA |
dc.subject | posture and facial expressions | ca_CA |
dc.subject | deep learning for visual perception | ca_CA |
dc.title | Head and eye egocentric gesture recognition for human-robot interaction using eyewear cameras | ca_CA |
dc.type | info:eu-repo/semantics/article | ca_CA |
dc.identifier.doi | https://doi.org/10.1109/LRA.2022.3180442 | |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca_CA |
dc.relation.publisherVersion | https://ieeexplore.ieee.org/document/9790312 | ca_CA |
dc.type.version | info:eu-repo/semantics/acceptedVersion | ca_CA |
project.funder.name | Universitat Jaume I | ca_CA |
project.funder.name | Ministerio de Ciencia, Innovación y Universidades (Spain) | ca_CA |
oaire.awardNumber | UJI-B2018-44 | ca_CA |
oaire.awardNumber | RED2018-102511-T | ca_CA |
Files in this item
This item appears in the following collection(s)
-
INIT_Articles [752]
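The abstract above describes a two-stage, motion-based pipeline: a CNN estimates frame-to-frame homographies (with internal-layer features found to work better than the homography matrix itself), and an LSTM aggregates these per-frame features over time to classify head and eye gestures. As a rough illustration of the second, temporal stage only, the sketch below runs a hand-written LSTM cell over a sequence of per-frame feature vectors and emits gesture-class probabilities. All dimensions, the random weights, and the random "features" are hypothetical stand-ins, not the paper's actual model or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (input/forget/output gates + candidate)."""
    z = W @ x + U @ h + b          # all four gate pre-activations, stacked
    H = h.size
    i = sigmoid(z[0:H])            # input gate
    f = sigmoid(z[H:2*H])          # forget gate
    o = sigmoid(z[2*H:3*H])        # output gate
    g = np.tanh(z[3*H:4*H])        # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Hypothetical sizes: 8-dim per-frame feature (stand-in for the homography
# CNN's internal-layer output), 16-dim LSTM state, 5 gesture classes.
F, H, C = 8, 16, 5
W = rng.normal(0, 0.1, (4 * H, F))
U = rng.normal(0, 0.1, (4 * H, H))
b = np.zeros(4 * H)
Wout = rng.normal(0, 0.1, (C, H))

# Stand-in feature sequence for 30 video frames.
frames = rng.normal(size=(30, F))
h, c = np.zeros(H), np.zeros(H)
for x in frames:
    h, c = lstm_step(x, h, c, W, U, b)

# Softmax over the final hidden state -> gesture-class probabilities.
logits = Wout @ h
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(int(np.argmax(probs)))
```

With trained weights, the argmax over `probs` would be the recognized gesture for the clip; here it is only a demonstration of the data flow from per-frame features to a sequence-level decision.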