FloU-Net: An Optical Flow Network for Multi-modal Self-Supervised Image Registration
Authors: Ibáñez Fernández, Damián; Fernández Beltrán, Rubén; Pla, Filiberto
Metadata
Title
FloU-Net: An Optical Flow Network for Multi-modal Self-Supervised Image Registration
Publication date
2023
Publisher
Institute of Electrical and Electronics Engineers
ISSN
1545-598X; 1558-0571
Bibliographic citation
IBAÑEZ, Damian; FERNANDEZ-BELTRAN, Ruben; PLA, Filiberto. FloU-Net: An Optical Flow Network for Multimodal Self-Supervised Image Registration. IEEE Geoscience and Remote Sensing Letters, 2023, vol. 20, p. 1-5
Document type
info:eu-repo/semantics/article
Publisher's version
https://ieeexplore.ieee.org/abstract/document/10054383/keywords#keywords
Version
info:eu-repo/semantics/acceptedVersion
Keywords / Subjects
Abstract
Image registration is an essential task in image processing, where the final objective is to geometrically align two or more images. In remote sensing, this process allows comparing, fusing, or analyzing data, especially when multi-modal images are used. Multi-modal image registration becomes particularly challenging when the images differ significantly in scale and resolution and also exhibit small local deformations. To address this challenge, this paper presents a novel optical-flow-based image registration network, named FloU-Net, which aims to further exploit inter-sensor synergies by means of deep learning. The proposed method extracts spatial information from resolution differences and, through a U-Net backbone, generates an optical-flow field estimate to accurately register small local deformations of multi-modal images in a self-supervised fashion. For instance, the registration between Sentinel-2 (S2) and Sentinel-3 (S3) optical data is not trivial, as there are considerable spectral-spatial differences between their sensors. In this case, the higher spatial resolution of S2 makes S2 data a convenient reference for spatially improving S3 products, as well as those of the forthcoming Fluorescence Explorer (FLEX) mission, since image registration is the initial requirement for obtaining higher-level data products. To validate our method, we compare the proposed FloU-Net with other state-of-the-art techniques using 21 coupled S2/S3 optical images from different locations of interest across Europe. The comparison is performed through several performance measures. Results show that the proposed FloU-Net outperforms the compared methods. The code and dataset are available at https://github.com/ibanezfd/FloU-Net.
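The self-supervised registration idea described in the abstract can be sketched roughly as follows. This is a hedged illustration, not the authors' implementation: the toy network sizes, the smoothness weight, and the use of PyTorch are all assumptions. A small U-Net-style encoder-decoder predicts a two-channel optical-flow field, the moving image is warped by that field, and an image-similarity loss supervises training without any ground-truth alignment.

```python
# Hedged sketch of self-supervised optical-flow registration (not the
# authors' code). A toy encoder-decoder predicts a 2-channel flow field;
# the moving image is warped by that flow and compared to the fixed image,
# so no ground-truth displacement is needed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    """Minimal U-Net-style stand-in: (fixed, moving) -> per-pixel (dx, dy)."""
    def __init__(self, ch=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(2, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 2, 3, padding=1),  # two channels: (dx, dy) in pixels
        )

    def forward(self, fixed, moving):
        return self.dec(self.enc(torch.cat([fixed, moving], dim=1)))

def warp(img, flow):
    """Bilinearly sample `img` at positions displaced by `flow` (in pixels)."""
    _, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    # Build a sampling grid normalized to [-1, 1], as grid_sample expects.
    gx = 2.0 * (xs + flow[:, 0]) / (w - 1) - 1.0
    gy = 2.0 * (ys + flow[:, 1]) / (h - 1) - 1.0
    return F.grid_sample(img, torch.stack([gx, gy], dim=-1), align_corners=True)

# One self-supervised training step: similarity loss plus a small
# flow-smoothness penalty (the 1e-3 weight is an arbitrary choice).
net = TinyFlowNet()
fixed = torch.rand(1, 1, 32, 32)   # stand-in for the reference (e.g. S2) patch
moving = torch.rand(1, 1, 32, 32)  # stand-in for the image to register (e.g. S3)
flow = net(fixed, moving)          # shape (1, 2, 32, 32)
loss = F.mse_loss(warp(moving, flow), fixed) + 1e-3 * flow.pow(2).mean()
loss.backward()                    # gradients flow through the differentiable warp
```

Because the bilinear warp is differentiable, the similarity loss alone is enough to train the flow predictor, which is what makes the scheme self-supervised.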
Published in
IEEE Geoscience and Remote Sensing Letters, 2023, vol. 20, p. 1-5
Funding entity
Ministerio de Ciencia e Innovación | Generalitat Valenciana
Project or grant code
PID2021-128794OB-I00 | ACIF/2021/215
Access rights
“© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”
http://rightsstatements.org/vocab/InC/1.0/
info:eu-repo/semantics/openAccess
Appears in collections
- LSI_Articles [362]
- INIT_Articles [747]