Show simple item record

dc.contributor.author  Ibáñez Fernández, Damián
dc.contributor.author  Fernández Beltrán, Rubén
dc.contributor.author  Pla, Filiberto
dc.date.accessioned  2023-05-18T18:53:38Z
dc.date.available  2023-05-18T18:53:38Z
dc.date.issued  2023
dc.identifier.citation  IBAÑEZ, Damian; FERNANDEZ-BELTRAN, Ruben; PLA, Filiberto. FloU-Net: An Optical Flow Network for Multimodal Self-Supervised Image Registration. IEEE Geoscience and Remote Sensing Letters, 2023, vol. 20, p. 1-5
dc.identifier.issn  1545-598X
dc.identifier.issn  1558-0571
dc.identifier.uri  http://hdl.handle.net/10234/202545
dc.description.abstract  Image registration is an essential task in image processing, where the final objective is to geometrically align two or more images. In remote sensing, this process allows comparing, fusing, or analyzing data, especially when multi-modal images are used. In addition, multi-modal image registration becomes fairly challenging when the images have a significant difference in scale and resolution, together with small local image deformations. For this purpose, this paper presents a novel optical flow-based image registration network, named the FloU-Net, which tries to further exploit inter-sensor synergies by means of deep learning. The proposed method is able to extract spatial information from resolution differences and, through a U-Net backbone, generate an optical flow field estimation to accurately register small local deformations of multi-modal images in a self-supervised fashion. For instance, the registration between Sentinel-2 (S2) and Sentinel-3 (S3) optical data is not trivial, as there are considerable spectral-spatial differences among their sensors. In this case, the higher spatial resolution of S2 results in S2 data being a convenient reference to spatially improve S3 products, as well as those of the forthcoming Fluorescence Explorer (FLEX) mission, since image registration is the initial requirement to obtain higher data processing level products. To validate our method, we compare the proposed FloU-Net with other state-of-the-art techniques using 21 coupled S2/S3 optical images from different locations of interest across Europe. The comparison is performed through different performance measures. Results show that the proposed FloU-Net can outperform the compared methods. The code and dataset are available at https://github.com/ibanezfd/FloU-Net.
dc.format.extent  6 p.
dc.format.mimetype  application/pdf
dc.language.iso  eng
dc.publisher  Institute of Electrical and Electronics Engineers
dc.relation.isPartOf  IEEE Geoscience and Remote Sensing Letters, 2023, vol. 20, p. 1-5
dc.rights  “© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.”
dc.rights.uri  http://rightsstatements.org/vocab/InC/1.0/
dc.subject  Image Registration
dc.subject  Convolutional Neural Networks
dc.subject  Inter-sensor
dc.subject  Multi-modal
dc.subject  Multi-spectral
dc.subject  Sentinel-2-3
dc.title  FloU-Net: An Optical Flow Network for Multimodal Self-Supervised Image Registration
dc.type  info:eu-repo/semantics/article
dc.identifier.doi  https://doi.org/10.1109/LGRS.2023.3249902
dc.rights.accessRights  info:eu-repo/semantics/openAccess
dc.relation.publisherVersion  https://ieeexplore.ieee.org/abstract/document/10054383/keywords#keywords
dc.type.version  info:eu-repo/semantics/acceptedVersion
project.funder.name  Ministerio de Ciencia e Innovación
project.funder.name  Generalitat Valenciana
oaire.awardNumber  PID2021-128794OB-I00
oaire.awardNumber  ACIF/2021/215
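
As a rough illustration of the self-supervised scheme the abstract describes (a network predicts a dense optical flow field from the fixed/moving image pair, the moving image is warped by that field, and training is driven by image similarity alone, with no ground-truth flow), here is a minimal PyTorch sketch. The tiny convolutional net, the MSE similarity term, the smoothness weight, and all layer sizes are illustrative assumptions standing in for the paper's U-Net backbone and loss, not the authors' actual FloU-Net; the real code and dataset are at https://github.com/ibanezfd/FloU-Net.

# Minimal sketch of self-supervised optical-flow registration in PyTorch.
# The small conv net below is an assumed stand-in for FloU-Net's U-Net
# backbone; layer sizes, the MSE similarity loss, and the smoothness
# weight are illustrative, not the authors' configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFlowNet(nn.Module):
    """Predicts a dense 2-D flow field from a fixed/moving image pair."""
    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # 2 channels: (dx, dy) in pixels
        )

    def forward(self, fixed, moving):
        return self.net(torch.cat([fixed, moving], dim=1))

def warp(image, flow):
    """Bilinearly warp `image` by `flow` (in pixels) via grid_sample."""
    b, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32), indexing="ij")
    grid = torch.stack([xs, ys]).unsqueeze(0).to(image.device)  # (1,2,H,W)
    new = grid + flow
    # Normalize pixel coordinates to [-1, 1] as grid_sample expects.
    new_x = 2.0 * new[:, 0] / (w - 1) - 1.0
    new_y = 2.0 * new[:, 1] / (h - 1) - 1.0
    return F.grid_sample(image, torch.stack([new_x, new_y], dim=-1),
                         align_corners=True)

def smoothness(flow):
    """First-order smoothness penalty to keep the flow field regular."""
    dx = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean()
    dy = (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
    return dx + dy

# One self-supervised step: no ground-truth flow, only image similarity.
model = TinyFlowNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
fixed = torch.rand(1, 1, 64, 64)   # e.g. an S2 band, resampled to a common grid
moving = torch.rand(1, 1, 64, 64)  # e.g. the corresponding S3 band
optimizer.zero_grad()
flow = model(fixed, moving)
loss = F.mse_loss(warp(moving, flow), fixed) + 0.1 * smoothness(flow)
loss.backward()
optimizer.step()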


Files in this item


This item appears in the following collection(s)

  • LSI_Articles [362]
    Articles from periodical publications written by professors of the Departament de Llenguatges i Sistemes Informàtics
  • INIT_Articles [752]
