Rotation-Invariant Deep Embedding for Remote Sensing Images
Authors
Kang, Jian; Fernandez-Beltran, Ruben; Wang, Zhirui; Sun, Xian; Ni, Jingen; Plaza, Antonio
Metadata
Title
Rotation-Invariant Deep Embedding for Remote Sensing Images
Publication date
2021-06-28
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
ISSN
0196-2892; 1558-0644
Bibliographic citation
KANG, Jian, et al. Rotation-invariant deep embedding for remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 2021.
Document type
info:eu-repo/semantics/article
Publisher's version
https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=36
Version
info:eu-repo/semantics/acceptedVersion
Keywords / Subjects
Abstract
Endowing convolutional neural networks (CNNs) with the rotation-invariant capability is important for characterizing the semantic contents of remote sensing (RS) images, since they do not have typical orientations. Most of the existing deep methods for learning rotation-invariant CNN models are based on the design of proper convolutional or pooling layers, which aim at predicting the correct category labels of the rotated RS images equivalently. However, few works have focused on learning rotation-invariant embeddings in the framework of deep metric learning for modeling the fine-grained semantic relationships among RS images in the embedding space. To fill this gap, we first propose a rule that the deep embeddings of rotated images should be closer to each other than those of any other images (including the images belonging to the same class). Then, we propose to maximize the joint probability of the leave-one-out image classification and rotational image identification. With the assumption of independence, such optimization leads to the minimization of a novel loss function composed of two terms: 1) a class-discrimination term and 2) a rotation-invariant term. Furthermore, we introduce a penalty parameter that balances these two terms and derive the final loss for learning Rotation-invariant Deep embeddings of RS images, termed RiDe. Extensive experiments conducted on two benchmark RS datasets validate the effectiveness of the proposed approach and demonstrate its superior performance when compared to other state-of-the-art methods. The code of this article will be publicly available at https://github.com/jiankang1991/TGRS_RiDe.
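The loss sketched in the abstract combines a class-discrimination term with a rotation-invariant term, balanced by a penalty parameter. Below is a minimal NumPy sketch of such a two-term loss, assuming a batch of L2-normalized embeddings of the original and rotated images; the function name `ride_style_loss`, the softmax temperature `tau`, and the balance weight `lam` are illustrative assumptions, not the authors' exact formulation (see the linked repository for the official code):

```python
import numpy as np

def _log_softmax_rows(s):
    """Numerically stable row-wise log-softmax."""
    m = s.max(axis=1, keepdims=True)
    return s - m - np.log(np.exp(s - m).sum(axis=1, keepdims=True))

def ride_style_loss(emb, emb_rot, labels, lam=0.5, tau=0.1):
    """Sketch of a two-term, RiDe-style loss.

    emb, emb_rot : (N, D) L2-normalized embeddings of the original and
                   rotated images in a batch.
    labels       : (N,) integer class labels (each class appears >= 2 times).
    lam, tau     : hypothetical balance weight and softmax temperature.
    """
    n = len(labels)

    # 1) Class-discrimination term: leave-one-out softmax over the batch;
    #    the probability mass of each image should fall on the other
    #    images of its own class.
    sims = emb @ emb.T / tau
    np.fill_diagonal(sims, -np.inf)          # leave-one-out: exclude self
    p = np.exp(_log_softmax_rows(sims))
    same = (labels[:, None] == labels[None, :]) & ~np.eye(n, dtype=bool)
    cls_term = -np.log((p * same).sum(axis=1) + 1e-12).mean()

    # 2) Rotation-invariant term: each embedding should identify the
    #    embedding of its own rotated copy among all rotated copies,
    #    pulling rotated versions closer than any other image.
    cross = emb @ emb_rot.T / tau
    rot_logp = _log_softmax_rows(cross)
    rot_term = -rot_logp[np.arange(n), np.arange(n)].mean()

    return cls_term + lam * rot_term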
Funding entity
Ministerio de Ciencia, Innovación y Universidades (Spain) | Generalitat Valenciana | FEDER-Junta de Extremadura | European Union’s Horizon 2020 Research
Project or grant code
RTI2018-098651-B-C54 | GV/2020/167 | GR18060 | 734541
Access rights
1558-0644 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See https://www.ieee.org/publications/rights/index.html for more information.
http://rightsstatements.org/vocab/InC/1.0/
info:eu-repo/semantics/openAccess
Appears in collections
- INIT_Articles