Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models

Jose L. Gómez, Gabriel Villalonga, Antonio M. López

Scientific output: Contribution to a journal › Article › Research › Peer-reviewed

1 Citation (Scopus)
3 Downloads (Pure)


Semantic image segmentation is a core task for autonomous driving, and it is performed by deep models. Since training these models suffers from the curse of costly human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It performs iterations in which the (unlabeled) real-world training images are labeled by intermediate deep models trained with both the (labeled) synthetic images and the real-world images labeled in previous iterations. More specifically, a self-training stage provides two domain-adapted models, and a model collaboration loop allows the mutual improvement of these two models. The final semantic segmentation labels (pseudo-labels) for the real-world images are provided by these two models. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., it requires neither modifying loss functions nor explicit feature alignment. We test our proposal on standard synthetic and real-world datasets for onboard semantic segmentation. Our procedure shows improvements ranging from approximately 13 to 31 mIoU points over baselines.
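The loop described in the abstract can be sketched in miniature. The snippet below is a hypothetical illustration, not the authors' code: the "models" are trivial per-input majority-vote lookups standing in for black-box segmentation networks, and the confidence filter is just whether a model has seen the input. What it preserves from the abstract is the structure: a self-training stage yields two models from the (labeled) synthetic data, a collaboration loop lets each model retrain on the other's pseudo-labels for the unlabeled real data, and the final pseudo-labels come from both adapted models.

```python
# Toy sketch of the co-training procedure (hypothetical stand-ins for the
# paper's black-box deep models; the real method trains segmentation networks).

def train(labeled):
    """Toy 'model': memorize a per-input majority label."""
    votes = {}
    for x, y in labeled:
        votes.setdefault(x, []).append(y)
    return {x: max(set(ys), key=ys.count) for x, ys in votes.items()}

def pseudo_label(model, unlabeled):
    """Keep only inputs the model can label (a crude confidence proxy)."""
    return [(x, model[x]) for x in unlabeled if x in model]

def co_train(synthetic, real_unlabeled, iterations=3):
    # Self-training stage: two models from different views of the source data.
    model_a = train(synthetic[::2])
    model_b = train(synthetic[1::2])
    for _ in range(iterations):
        # Model collaboration loop: each model consumes the other's
        # pseudo-labels on the (unlabeled) real-world images.
        pl_a = pseudo_label(model_a, real_unlabeled)
        pl_b = pseudo_label(model_b, real_unlabeled)
        model_a = train(synthetic[::2] + pl_b)
        model_b = train(synthetic[1::2] + pl_a)
    # Final pseudo-labels are provided by both domain-adapted models.
    return (pseudo_label(model_a, real_unlabeled),
            pseudo_label(model_b, real_unlabeled))
```

With synthetic pairs such as `[("road", 0), ("car", 1), ("road", 0), ("sky", 2)]` and real inputs `["road", "car", "sky"]`, neither initial model covers all real inputs, but after one collaboration round each model has absorbed the other's pseudo-labels and both label all three. The key design point mirrored here is that the models interact only through pseudo-labeled target images: no loss functions are modified and no feature alignment is performed.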

Original language: English
Article number: 621
Number of pages: 28
Publication status: Published - 5 Jan 2023


