Unsupervised Domain Adaptation of Virtual and Real Worlds for Pedestrian Detection

David Vazquez, Antonio M. Lopez, Daniel Ponsa

Research output: Contribution to journal › Article › peer-review

26 Citations (Scopus)

Abstract

Vision-based object detectors are crucial for many applications and rely on learnt object models. Ideally, we would like to deploy our vision system in the scenario where it must operate; the system should then self-learn how to distinguish the objects of interest, i.e., without human intervention. However, learning each object model requires labelled samples collected through a tiresome manual process. For instance, we are interested in exploring the self-training of a pedestrian detector for driver assistance systems. Our first approach to avoiding manual labelling consisted in using samples coming from realistic computer graphics, so that their labels are automatically available [12]. This would make possible the desired self-training of our pedestrian detector. However, as we showed in [14], there may be a dataset shift between virtual and real worlds. In order to overcome it, we propose the use of unsupervised domain adaptation techniques that avoid human intervention during the adaptation process. In particular, this paper explores the use of the transductive SVM (T-SVM) learning algorithm in order to adapt virtual and real worlds for pedestrian detection (Fig. 1).
Original language: English
Pages (from-to): 3492-3495
Number of pages: 4
Journal: International Conference On Pattern Recognition
Publication status: Published - 2012
