Show simple item record
PiCoCo: Pixelwise Contrast and Consistency Learning for Semisupervised Building Footprint Segmentation
dc.contributor.author | Kang, Jian | |
dc.contributor.author | Wang, Zhirui | |
dc.contributor.author | Zhu, Ruoxin | |
dc.contributor.author | Sun, Xian | |
dc.contributor.author | Fernandez-Beltran, Ruben | |
dc.contributor.author | Plaza, Antonio | |
dc.date.accessioned | 2021-12-13T14:52:00Z | |
dc.date.available | 2021-12-13T14:52:00Z | |
dc.date.issued | 2021-10-11 | |
dc.identifier.citation | J. Kang, Z. Wang, R. Zhu, X. Sun, R. Fernandez-Beltran and A. Plaza, "PiCoCo: Pixelwise Contrast and Consistency Learning for Semisupervised Building Footprint Segmentation," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 10548-10559, 2021, doi: 10.1109/JSTARS.2021.3119286. | ca_CA |
dc.identifier.issn | 1939-1404 | |
dc.identifier.issn | 2151-1535 | |
dc.identifier.uri | http://hdl.handle.net/10234/196138 | |
dc.description.abstract | Building footprint segmentation from high-resolution remote sensing (RS) images plays a vital role in urban planning, disaster response, and population density estimation. Convolutional neural networks (CNNs) have been recently used as a workhorse for effectively generating building footprints. However, to completely exploit the prediction power of CNNs, large-scale pixel-level annotations are required. Most state-of-the-art methods based on CNNs are focused on the design of network architectures for improving the predictions of building footprints with full annotations, while few works have been done on building footprint segmentation with limited annotations. In this article, we propose a novel semisupervised learning method for building footprint segmentation, which can effectively predict building footprints based on the network trained with few annotations (e.g., only 0.0324 km2 out of 2.25-km2 area is labeled). The proposed method is based on investigating the contrast between the building and background pixels in latent space and the consistency of predictions obtained from the CNN models when the input RS images are perturbed. Thus, we term the proposed semisupervised learning framework of building footprint segmentation as PiCoCo, which is based on the enforcement of Pixelwise Contrast and Consistency during the learning phase. Our experiments, conducted on two benchmark building segmentation datasets, validate the effectiveness of our proposed framework as compared to several state-of-the-art building footprint extraction and semisupervised semantic segmentation methods. | ca_CA |
dc.format.extent | 12 p. | ca_CA |
dc.format.mimetype | application/pdf | ca_CA |
dc.language.iso | eng | ca_CA |
dc.publisher | IEEE | ca_CA |
dc.relation.isPartOf | IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, Vol. 14 (2021) | ca_CA |
dc.rights.uri | http://creativecommons.org/licenses/by-sa/4.0/ | ca_CA |
dc.subject | buildings | ca_CA |
dc.subject | image segmentation | ca_CA |
dc.subject | annotations | ca_CA |
dc.subject | feature extraction | ca_CA |
dc.subject | semantics | ca_CA |
dc.subject | predictive models | ca_CA |
dc.subject | training | ca_CA |
dc.subject | building footprint segmentation | ca_CA |
dc.subject | consistency learning | ca_CA |
dc.subject | contrastive learning | ca_CA |
dc.subject | missing labels | ca_CA |
dc.subject | semantic segmentation | ca_CA |
dc.subject | semisupervised learning | ca_CA |
dc.title | PiCoCo: Pixelwise Contrast and Consistency Learning for Semisupervised Building Footprint Segmentation | ca_CA |
dc.type | info:eu-repo/semantics/article | ca_CA |
dc.identifier.doi | 10.1109/JSTARS.2021.3119286 | |
dc.rights.accessRights | info:eu-repo/semantics/openAccess | ca_CA |
dc.type.version | info:eu-repo/semantics/publishedVersion | ca_CA |
project.funder.name | Jiangsu Province Science Foundation for Youths | ca_CA |
project.funder.name | National Natural Science Foundation of China | ca_CA |
project.funder.name | Jiangsu Higher Education Institution | ca_CA |
project.funder.name | Ministerio de Ciencia, Innovación y Universidades (España) | ca_CA |
project.funder.name | APRISA | ca_CA |
project.funder.name | Generalitat Valenciana | ca_CA |
project.funder.name | FEDER-Junta de Extremadura | ca_CA |
project.funder.name | European Union | ca_CA |
oaire.awardNumber | BK20210707 | ca_CA |
oaire.awardNumber | 62101371 | ca_CA |
oaire.awardNumber | 62076241 | ca_CA |
oaire.awardNumber | RTI2018-098651-B-C54 | ca_CA |
oaire.awardNumber | PID2019-110315RB-I00 | ca_CA |
oaire.awardNumber | GV/2020/167 | ca_CA |
oaire.awardNumber | GR18060 | ca_CA |
oaire.awardNumber | info:eu-repo/grantAgreement/EC/H2020/734541 | ca_CA |
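The abstract describes two training signals: a pixelwise contrast between building and background embeddings in latent space, and a consistency constraint between predictions on clean and perturbed inputs. The following minimal NumPy sketch illustrates the general idea only; the function names, the InfoNCE-style formulation, and the MSE consistency term are assumptions for illustration, not the authors' exact PiCoCo losses.

```python
import numpy as np

def pixelwise_contrast_loss(emb, labels, temp=0.1):
    """InfoNCE-style contrast between building (1) and background (0) pixels.

    emb: (N, D) pixel embeddings; labels: (N,) binary class per pixel.
    Illustrative sketch only; the paper's exact formulation may differ."""
    emb = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    sim = emb @ emb.T / temp                      # (N, N) scaled cosine similarities
    mask_self = np.eye(len(labels), dtype=bool)
    sim = np.where(mask_self, -1e9, sim)          # exclude self-pairs
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (labels[:, None] == labels[None, :]) & ~mask_self
    # negated average log-probability of same-class (positive) pairs
    return -(log_prob * pos).sum() / pos.sum()

def consistency_loss(pred_clean, pred_perturbed):
    """Penalize disagreement between predictions on clean vs. perturbed input."""
    return np.mean((pred_clean - pred_perturbed) ** 2)
```

In a semisupervised setting, these unsupervised terms would be added to a standard supervised segmentation loss computed on the few labeled pixels.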
Files in this item
This item appears in the following collection(s)
INIT_Articles [752]