Show simple item record

dc.contributor.author: Kang, Jian
dc.contributor.author: Fernandez-Beltran, Ruben
dc.contributor.author: Wang, Zhirui
dc.contributor.author: Sun, Xian
dc.contributor.author: Ni, Jingen
dc.contributor.author: Plaza, Antonio
dc.date.accessioned: 2021-10-07T07:30:27Z
dc.date.available: 2021-10-07T07:30:27Z
dc.date.issued: 2021-06-28
dc.identifier.citation: KANG, Jian, et al. Rotation-invariant deep embedding for remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 2021.
dc.identifier.issn: 0196-2892
dc.identifier.issn: 1558-0644
dc.identifier.uri: http://hdl.handle.net/10234/194923
dc.description.abstract: Endowing convolutional neural networks (CNNs) with rotation-invariant capability is important for characterizing the semantic content of remote sensing (RS) images, since such images have no typical orientation. Most existing deep methods for learning rotation-invariant CNN models rely on the design of proper convolutional or pooling layers, aiming to predict the correct category labels of rotated RS images consistently. However, few works have focused on learning rotation-invariant embeddings in the framework of deep metric learning for modeling the fine-grained semantic relationships among RS images in the embedding space. To fill this gap, we first propose the rule that the deep embeddings of rotated versions of an image should be closer to each other than to those of any other images (including images belonging to the same class). Then, we propose to maximize the joint probability of leave-one-out image classification and rotational image identification. Under an independence assumption, this optimization leads to the minimization of a novel loss function composed of two terms: 1) a class-discrimination term and 2) a rotation-invariant term. Furthermore, we introduce a penalty parameter that balances the two terms and propose a final loss for learning Rotation-invariant Deep embeddings of RS images, termed RiDe. Extensive experiments conducted on two benchmark RS datasets validate the effectiveness of the proposed approach and demonstrate its superior performance over other state-of-the-art methods. The code of this article will be publicly available at https://github.com/jiankang1991/TGRS_RiDe.
dc.format.extent: 13 p.
dc.format.mimetype: application/pdf
dc.language.iso: eng
dc.publisher: Institute of Electrical and Electronics Engineers (IEEE)
dc.rights: © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: measurement
dc.subject: semantics
dc.subject: feature extraction
dc.subject: image retrieval
dc.subject: training
dc.subject: task analysis
dc.subject: Nickel
dc.subject: convolutional neural networks (CNNs)
dc.subject: deep learning
dc.subject: deep metric learning
dc.subject: rotation invariant
dc.subject: remote sensing (RS)
dc.subject: scene classification
dc.title: Rotation-Invariant Deep Embedding for Remote Sensing Images
dc.type: info:eu-repo/semantics/article
dc.identifier.doi: https://doi.org/10.1109/TGRS.2021.3088398
dc.rights.accessRights: info:eu-repo/semantics/openAccess
dc.relation.publisherVersion: https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=36
dc.type.version: info:eu-repo/semantics/acceptedVersion
project.funder.name: Ministerio de Ciencia, Innovación y Universidades (Spain)
project.funder.name: Generalitat Valenciana
project.funder.name: FEDER-Junta de Extremadura
project.funder.name: European Union's Horizon 2020 Research and Innovation Programme
oaire.awardNumber: RTI2018-098651-B-C54
oaire.awardNumber: GV/2020/167
oaire.awardNumber: GR18060
oaire.awardNumber: 734541
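
The abstract above describes a loss built from a class-discrimination term and a rotation-invariant term, balanced by a penalty parameter. The following PyTorch-style sketch is a rough illustration of that two-term structure only, not the authors' implementation: the function name ride_style_loss, the prototype-based class term, and the temperature tau are assumptions introduced here for illustration, and the paper's exact leave-one-out formulation differs. The authors' official code is at https://github.com/jiankang1991/TGRS_RiDe.

    import torch
    import torch.nn.functional as F

    def ride_style_loss(z, z_rot, labels, class_weights, penalty=1.0, tau=0.1):
        # Hypothetical sketch in the spirit of RiDe, not the authors' code.
        z = F.normalize(z, dim=1)              # embeddings of original images, shape (B, D)
        z_rot = F.normalize(z_rot, dim=1)      # embeddings of the same images after rotation, (B, D)
        w = F.normalize(class_weights, dim=1)  # one prototype vector per class, (C, D)

        # Class-discrimination term: softmax over cosine similarities to class prototypes.
        class_logits = z @ w.t() / tau
        class_term = F.cross_entropy(class_logits, labels)

        # Rotation-invariant term: each rotated embedding must identify its own
        # original among all originals in the batch (instance discrimination).
        rot_logits = z_rot @ z.t() / tau
        rot_targets = torch.arange(z.size(0), device=z.device)
        rot_term = F.cross_entropy(rot_logits, rot_targets)

        # The penalty parameter balances class discrimination against rotation invariance.
        return class_term + penalty * rot_term

In this sketch, minimizing the second term pulls each rotated embedding closer to its own original than to any other image in the batch, which mirrors the rule stated in the abstract that rotated versions of an image should be closer to each other than to any other images, including those of the same class.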


Files in this item


This item appears in the following collection(s)
