Learning a part-based pedestrian detector in a virtual world

Jiaolong Xu, David Vazquez, Antonio M. Lopez, Javier Marin, Daniel Ponsa

Research output: Contribution to journal › Article › peer-review

40 Citations (Scopus)


Detecting pedestrians with on-board vision systems is of paramount interest for assisting drivers to prevent vehicle-to-pedestrian accidents. The core of a pedestrian detector is its classification module, which aims to decide whether a given image window contains a pedestrian. Given the difficulty of this task, many classifiers have been proposed during the last 15 years. Among them, the so-called (deformable) part-based classifiers, including multiview modeling, are usually top ranked in accuracy. Training such classifiers is not trivial, since proper aspect clustering and spatial part alignment of the pedestrian training samples are crucial for obtaining an accurate classifier. In this paper, we first perform automatic aspect clustering and part alignment by using virtual-world pedestrians, i.e., human annotations are not required. Second, we use a mixture-of-parts approach that allows part sharing among different aspects. Third, these proposals are integrated in a learning framework, which also allows incorporating real-world training data to perform domain adaptation between virtual- and real-world cameras. Overall, the obtained results on four popular on-board data sets show that our proposal clearly outperforms the state-of-the-art deformable part-based detector known as latent support vector machine.
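The scoring idea behind deformable part-based classifiers of this kind can be illustrated with a minimal sketch: a window's score combines a root-filter response with each part's best filter response near its anchor, penalized by a quadratic deformation cost. This is a simplified, hypothetical illustration of generic DPM-style scoring, not the paper's actual implementation; all function names, the search radius, and the deformation weight are assumptions.

```python
import numpy as np

def part_score(feat, part_filter, anchor, deform, search=2):
    """Best placement of one part near its anchor: filter response minus
    a quadratic deformation cost. Simplified DPM-style scoring (sketch,
    not the authors' implementation)."""
    ph, pw = part_filter.shape
    H, W = feat.shape
    ay, ax = anchor
    best = -np.inf
    # Exhaustive search over small displacements around the anchor.
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = ay + dy, ax + dx
            if y < 0 or x < 0 or y + ph > H or x + pw > W:
                continue
            resp = float(np.sum(feat[y:y + ph, x:x + pw] * part_filter))
            best = max(best, resp - deform * (dy * dy + dx * dx))
    return best

def window_score(feat, root_filter, parts, bias=0.0):
    """Total window score: bias + root response + sum of part scores.
    `parts` is a list of (part_filter, anchor, deform_weight) tuples."""
    root = float(np.sum(feat * root_filter))
    return bias + root + sum(part_score(feat, pf, a, d) for pf, a, d in parts)
```

In a full detector this score would be computed over a feature pyramid (e.g., HOG features) and thresholded per window; a mixture of such models, one component per pedestrian aspect, with parts shared among components, corresponds to the mixture-of-parts idea the abstract describes.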
Original language: English
Article number: 6786000
Pages (from-to): 2121-2131
Journal: IEEE Transactions on Intelligent Transportation Systems
Issue number: 5
Publication status: Published - 1 Oct 2014


  • Computer vision
  • multipart model
  • pedestrian detection
  • synthetic training data


