Learning a part-based pedestrian detector in a virtual world

Jiaolong Xu, David Vazquez, Antonio M. Lopez, Javier Marin, Daniel Ponsa

Scientific output: Contribution to a journal · Article · Research · Peer-reviewed

41 Citations (Scopus)


Detecting pedestrians with on-board vision systems is of paramount interest for assisting drivers to prevent vehicle-to-pedestrian accidents. The core of a pedestrian detector is its classification module, which aims to decide whether a given image window contains a pedestrian. Given the difficulty of this task, many classifiers have been proposed during the last 15 years. Among them, the so-called (deformable) part-based classifiers, including multiview modeling, are usually top ranked in accuracy. Training such classifiers is not trivial, since proper aspect clustering and spatial part alignment of the pedestrian training samples are crucial for obtaining an accurate classifier. In this paper, we first perform automatic aspect clustering and part alignment by using virtual-world pedestrians, i.e., human annotations are not required. Second, we use a mixture-of-parts approach that allows part sharing among different aspects. Third, these proposals are integrated into a learning framework, which also allows incorporating real-world training data to perform domain adaptation between virtual- and real-world cameras. Overall, the results obtained on four popular on-board data sets show that our proposal clearly outperforms the state-of-the-art deformable part-based detector known as the latent support vector machine.
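To make the deformable part-based scoring the abstract refers to concrete, here is a minimal NumPy sketch of the standard DPM-style window score: a root filter response plus, for each part, the best displaced part response minus a quadratic deformation cost. All sizes, anchors, and deformation coefficients below are hypothetical illustrations, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HOG-like feature map: H x W cells, D-dim features (hypothetical sizes).
H, W, D = 8, 8, 4
feat = rng.standard_normal((H, W, D))

def filter_response(feat, w, r0, c0):
    """Dot product of a linear filter w (h, wd, D) with the feature map
    anchored at cell (r0, c0); -inf if the filter falls outside the map."""
    h, wd, _ = w.shape
    if r0 < 0 or c0 < 0 or r0 + h > feat.shape[0] or c0 + wd > feat.shape[1]:
        return -np.inf
    return float(np.sum(feat[r0:r0 + h, c0:c0 + wd] * w))

# A root filter covering the whole window, plus two parts with anchors
# relative to the root and quadratic deformation coefficients (all made up).
root_w = rng.standard_normal((4, 4, D))
parts = [
    (rng.standard_normal((2, 2, D)), 0, 0, (0.1, 0.1)),
    (rng.standard_normal((2, 2, D)), 2, 2, (0.1, 0.1)),
]

def score_window(feat, r, c, max_disp=1):
    """DPM-style score at window anchor (r, c): root response plus, for each
    part, the maximum over displacements of (response - deformation cost)."""
    s = filter_response(feat, root_w, r, c)
    for w, ar, ac, (dy2, dx2) in parts:
        best = -np.inf
        for dr in range(-max_disp, max_disp + 1):
            for dc in range(-max_disp, max_disp + 1):
                resp = filter_response(feat, w, r + ar + dr, c + ac + dc)
                best = max(best, resp - dy2 * dr * dr - dx2 * dc * dc)
        s += best
    return s

print(score_window(feat, 2, 2))
```

In a full detector this score is evaluated over all window positions and scales, and part filters can be shared across aspect mixtures, which is the part-sharing idea the paper builds on.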
Original language: English
Article number: 6786000
Pages (from-to): 2121-2131
Journal: IEEE Transactions on Intelligent Transportation Systems
Publication status: Published - 1 Oct 2014

