Co-Training for Unsupervised Domain Adaptation of Semantic Segmentation Models

Jose L. Gómez, Gabriel Villalonga, Antonio M. López

Research output: Contribution to journal › Article › peer-review



Semantic image segmentation is a core task for autonomous driving, typically performed by deep models. Since training these models requires a prohibitive amount of human-based image labeling, using synthetic images with automatically generated labels together with unlabeled real-world images is a promising alternative. This implies addressing an unsupervised domain adaptation (UDA) problem. In this paper, we propose a new co-training procedure for synth-to-real UDA of semantic segmentation models. It performs iterations in which the (unlabeled) real-world training images are labeled by intermediate deep models trained with both the (labeled) synthetic images and the real-world images labeled in previous iterations. More specifically, a self-training stage provides two domain-adapted models, and a model collaboration loop enables the mutual improvement of these two models. The final semantic segmentation labels (pseudo-labels) for the real-world images are provided by these two models. The overall procedure treats the deep models as black boxes and drives their collaboration at the level of pseudo-labeled target images, i.e., it requires neither modifying loss functions nor explicit feature alignment. We test our proposal on standard synthetic and real-world datasets for onboard semantic segmentation. Our procedure shows improvements ranging from approximately 13 to 31 mIoU points over baselines.
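The iterative scheme in the abstract (intermediate models trained on synthetic plus previously pseudo-labeled real images, with two models exchanging pseudo-labels) can be sketched as follows. This is a minimal toy illustration of the loop's structure only, not the authors' implementation; `train`, `pseudo_label`, and `co_training` are hypothetical stand-ins for real segmentation training and inference.

```python
def train(labeled_data):
    """Stand-in for training a segmentation model on a labeled set.
    Returns a toy 'model' that only records how much data it saw."""
    return {"n_seen": len(labeled_data)}

def pseudo_label(model, unlabeled):
    """Stand-in for inference: pairs each real-world image with a
    placeholder segmentation mask produced by the given model."""
    return [(img, f"mask_from_model_{model['n_seen']}") for img in unlabeled]

def co_training(synthetic, real_unlabeled, iterations=3):
    """Toy version of the co-training loop: two models are trained on
    synthetic data plus each other's pseudo-labels from the previous round."""
    pseudo_a, pseudo_b = [], []  # no real-world labels at the start
    for _ in range(iterations):
        # Self-training stage: two domain-adapted models.
        model_a = train(synthetic + pseudo_a)
        model_b = train(synthetic + pseudo_b)
        # Model collaboration: each model labels the real images,
        # and the peers exchange pseudo-labels for the next round.
        pseudo_a = pseudo_label(model_b, real_unlabeled)
        pseudo_b = pseudo_label(model_a, real_unlabeled)
    # Final pseudo-labels for the real images come from both models.
    return pseudo_a, pseudo_b
```

Note how the models interact only through pseudo-labeled images, matching the paper's black-box treatment: no loss functions are modified and no feature alignment is performed.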

Original language: English
Article number: 621
Number of pages: 28
Status: Published - 5 Jan 2023


