Simple item record

dc.contributor.author: Marina-Miranda, Javier
dc.contributor.author: Traver Roig, Vicente Javier
dc.date.accessioned: 2022-10-11T07:12:07Z
dc.date.available: 2022-10-11T07:12:07Z
dc.date.issued: 2022-06-08
dc.identifier.citation: Marina-Miranda, J., & Traver, V. J. (2022). Head and Eye Egocentric Gesture Recognition for Human-Robot Interaction. IEEE Robotics and Automation Letters.
dc.identifier.uri: http://hdl.handle.net/10234/200314
dc.description.abstract: Non-verbal communication plays a particularly important role in a wide range of scenarios in Human-Robot Interaction (HRI). Accordingly, this work addresses the problem of human gesture recognition. In particular, we focus on head and eye gestures, and adopt an egocentric (first-person) perspective using eyewear cameras. We argue that this egocentric view may offer a number of conceptual and technical benefits over scene- or robot-centric perspectives. A motion-based recognition approach is proposed, which operates at two temporal granularities. Locally, frame-to-frame homographies are estimated with a convolutional neural network (CNN). The output of this CNN is input to a long short-term memory (LSTM) to capture longer-term temporal visual relationships, which are relevant to characterize gestures. Regarding the configuration of the network architecture, one particularly interesting finding is that using the output of an internal layer of the homography CNN increases the recognition rate with respect to using the homography matrix itself. While this work focuses on action recognition, and no robot or user study has been conducted yet, the system has been designed to meet real-time constraints. The encouraging results suggest that the proposed egocentric perspective is viable, and this proof-of-concept work provides novel and useful contributions to the exciting area of HRI.
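The abstract describes a two-stage pipeline: a CNN regresses frame-to-frame homographies, and an internal feature of that CNN (rather than the homography parameters themselves) is fed to an LSTM that classifies the gesture over a temporal window. The PyTorch sketch below only illustrates that general shape; the layer sizes, the 8-value (4-point offset) homography parameterization, the feature dimension, and the number of gesture classes are all assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn


class HomographyCNN(nn.Module):
    """Regresses a homography from a pair of stacked grayscale frames."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(2, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.feat = nn.Linear(64, feat_dim)   # internal feature layer
        self.homog = nn.Linear(feat_dim, 8)   # e.g. 4-point corner offsets

    def forward(self, frame_pair):
        f = self.feat(self.backbone(frame_pair))
        return self.homog(f), f               # homography params + internal feature


class GestureLSTM(nn.Module):
    """Classifies a gesture from the sequence of per-pair CNN features."""

    def __init__(self, feat_dim=128, hidden=64, num_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, feats):                 # feats: (B, T, feat_dim)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])               # logits per gesture class


# Usage: T+1 frames -> T consecutive frame pairs -> T features -> one label.
cnn, clf = HomographyCNN(), GestureLSTM()
frames = torch.randn(1, 9, 1, 64, 64)         # (B, T+1, C, H, W), toy resolution
pairs = torch.cat([frames[:, :-1], frames[:, 1:]], dim=2)  # stack frame t with t+1
feats = torch.stack([cnn(pairs[:, t])[1] for t in range(pairs.shape[1])], dim=1)
logits = clf(feats)                           # (1, num_classes)

Per the abstract, using the internal feature (the second output of HomographyCNN) rather than the 8 homography parameters is what improved recognition in the paper's experiments.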
dc.format.extent: 10 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.isPartOf: IEEE Robotics and Automation Letters (Volume 7, Issue 3, July 2022)
dc.rights: © 2022 IEEE
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: gesture
dc.subject: posture and facial expressions
dc.subject: deep learning for visual perception
dc.title: Head and eye egocentric gesture recognition for human-robot interaction using eyewear cameras
dc.type: info:eu-repo/semantics/article
dc.identifier.doi: https://doi.org/10.1109/LRA.2022.3180442
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.relation.publisherVersion: https://ieeexplore.ieee.org/document/9790312
dc.type.version: info:eu-repo/semantics/acceptedVersion
project.funder.name: Universitat Jaume I
project.funder.name: Ministerio de Ciencia, Innovación y Universidades (Spain)
oaire.awardNumber: UJI-B2018-44
oaire.awardNumber: RED2018-102511-T

