Semi- and Self-Supervised Metric Learning for Remote Sensing Applications
Metadata
Title
Semi- and Self-Supervised Metric Learning for Remote Sensing Applications
Date
2024-03-25
Publisher
Institute of Electrical and Electronics Engineers Inc.
ISSN
1545-598X
Bibliographic citation
I. Hernandez-Sequeira, R. Fernandez-Beltran and F. Pla, "Semi- and Self-Supervised Metric Learning for Remote Sensing Applications," in IEEE Geoscience and Remote Sensing Letters, vol. 21, pp. 1-5, 2024, Art no. 6006305, doi: 10.1109/LGRS.2024.3381228
Type
info:eu-repo/semantics/article
Publisher version
https://ieeexplore.ieee.org/abstract/document/10478648
Version
info:eu-repo/semantics/acceptedVersion
Abstract
Earth data collection from satellites and aircraft has grown exponentially, but a substantial portion of it remains unlabeled. This has prompted the remote sensing community to explore effective methods for leveraging unlabeled data. In our prior investigation, we evaluated various deep semi-supervised learning algorithms on two very high-resolution (VHR) optical datasets (UCM and AID). Notably, the CoMatch algorithm demonstrated the highest accuracy, motivating further exploration. This letter extends our earlier work by integrating the established class-aware contrastive semi-supervised learning framework into CoMatch (CoMatch + CCSSL) and introducing a new triplet metric learning loss (CoMatch + Triplet). CoMatch + Triplet excelled with 93.2% accuracy on UCM, while CoMatch led with 92.19% on AID. The addition of the triplet loss can produce a clearer separation of samples from different classes in the embedding space at very early learning stages, enabling faster learning and maximum performance within few iterations. The exploration of diverse semi- and self-supervised training methodologies presented in this work sheds light on the strengths and limitations of these approaches, enhancing our understanding of their applicability in remote sensing applications.
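The triplet metric learning loss mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation (the letter does not reproduce its code here); it is the standard triplet margin loss on toy embeddings, where the `margin` value and the example vectors are assumptions for illustration:

```python
import math

def euclidean(u, v):
    # Euclidean distance between two embedding vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Standard triplet margin loss: the anchor should be closer to the
    # positive (same class) than to the negative (different class) by
    # at least `margin`; otherwise a positive loss pushes them apart.
    return max(0.0, euclidean(anchor, positive) - euclidean(anchor, negative) + margin)

# Toy 2-D embeddings (hypothetical): positive near the anchor, negative far away.
a, p, n = [0.0, 0.0], [0.1, 0.0], [3.0, 4.0]
loss = triplet_loss(a, p, n)  # 0.1 - 5.0 + 1.0 = -3.9, clamped to 0.0
```

Minimizing this loss over many (anchor, positive, negative) triplets is what encourages the clearer class separation in the embedding space that the abstract describes.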
Is part of
IEEE Geoscience and Remote Sensing Letters, vol. 21, 2024
This item appears in the following collection(s)
- LSI_Articles [362]