Show simple item record

dc.contributor.author: Ibáñez Fernández, Damián
dc.contributor.author: Fernandez-Beltran, Ruben
dc.contributor.author: Pla, Filiberto
dc.contributor.author: Yokoya, Naoto
dc.date.accessioned: 2023-02-14T10:06:32Z
dc.date.available: 2023-02-14T10:06:32Z
dc.date.issued: 2022-10-28
dc.identifier.citation: D. Ibañez, R. Fernandez-Beltran, F. Pla and N. Yokoya, "Masked Auto-Encoding Spectral–Spatial Transformer for Hyperspectral Image Classification," in IEEE Transactions on Geoscience and Remote Sensing, vol. 60, pp. 1-14, 2022, Art no. 5542614, doi: 10.1109/TGRS.2022.3217892.
dc.identifier.uri: http://hdl.handle.net/10234/201651
dc.description.abstract: Deep learning has certainly become the dominant trend in hyperspectral (HS) remote sensing (RS) image classification owing to its excellent capabilities to extract highly discriminative spectral–spatial features. In this context, transformer networks have recently shown prominent results in distinguishing even the most subtle spectral differences because of their potential to characterize sequential spectral data. Nonetheless, many complexities affecting HS remote sensing data (e.g., atmospheric effects, thermal noise, quantization noise) may severely undermine this potential, since no mechanism for relieving noisy feature patterns has yet been developed within transformer networks. To address this problem, this article presents a novel masked auto-encoding spectral–spatial transformer (MAEST), which combines two collaborative branches: 1) a reconstruction path, which dynamically uncovers the most robust encoding features based on a masking auto-encoding strategy, and 2) a classification path, which embeds these features onto a transformer network to classify the data, focusing on the features that best reconstruct the input. Unlike other existing models, this novel design seeks to learn refined transformer features that account for the aforementioned complexities of the HS remote sensing image domain. The experimental comparison, including several state-of-the-art methods and benchmark datasets, shows the superior results obtained by MAEST. The codes of this article will be available at https://github.com/ibanezfd/MAEST.
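The masking step at the core of the reconstruction path described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation (see the linked repository for that); the function name, token shapes, and the 70% mask ratio are illustrative assumptions in the spirit of MAE-style random masking of spectral tokens.

```python
import numpy as np

def mask_spectral_tokens(tokens, mask_ratio=0.7, rng=None):
    """Randomly hide a fraction of spectral tokens, MAE-style (illustrative).

    tokens: (num_tokens, dim) array of per-band embeddings.
    Returns the visible tokens fed to the encoder, plus the kept and
    masked indices needed to restore order for reconstruction.
    """
    rng = rng or np.random.default_rng(0)
    n = tokens.shape[0]
    n_keep = int(n * (1 - mask_ratio))
    perm = rng.permutation(n)
    keep_idx = np.sort(perm[:n_keep])   # visible tokens (encoder input)
    mask_idx = np.sort(perm[n_keep:])   # hidden tokens (to reconstruct)
    return tokens[keep_idx], keep_idx, mask_idx

# Toy example: 200 spectral bands embedded into 64-dim tokens.
tokens = np.random.default_rng(1).normal(size=(200, 64))
visible, keep_idx, mask_idx = mask_spectral_tokens(tokens, mask_ratio=0.7)
```

With a 0.7 mask ratio, only 60 of the 200 tokens reach the encoder; the reconstruction loss on the remaining 140 is what, per the abstract, drives the encoder toward features robust to noisy spectral patterns.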
dc.format.extent: 14 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: IEEE
dc.relation.isPartOf: IEEE Transactions on Geoscience and Remote Sensing, vol. 60, 2022
dc.rights: © 2022 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: hyperspectral (HS) imaging
dc.subject: mask auto-encoders (MAEs)
dc.subject: Vision Transformers (ViTs)
dc.title: Masked Auto-Encoding Spectral-Spatial Transformer for Hyperspectral Image Classification
dc.type: info:eu-repo/semantics/article
dc.identifier.doi: 10.1109/TGRS.2022.3217892
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.type.version: info:eu-repo/semantics/acceptedVersion
project.funder.name: Ministerio de Ciencia, Innovación y Universidades (Spain)
oaire.awardNumber: PID2021-128794OB-I00


Files in this item


This item appears in the following collection(s)
