Interactive Training of Human Detectors

David Vázquez, Antonio M. López, Daniel Ponsa, David Gerónimo

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


Image-based human detection remains a challenging problem. The most promising detectors rely on classifiers trained with labelled samples, but labelling is a labor-intensive manual step. To overcome this problem, we propose to collect images of pedestrians from a virtual city, i.e., with automatic labels, and to train a pedestrian detector with them. The resulting detector performs correctly when such virtual-world data are similar to the testing data, i.e., real-world pedestrians in urban areas. When the testing data are acquired under different conditions than the training data, e.g., human detection in personal photo albums, dataset shift appears. In previous work, we treated this problem as one of domain adaptation and solved it with an active learning procedure. In this work, we focus on the same problem but evaluate a different set of faster-to-compute features, i.e., Haar, EOH and their combination. In particular, we train a classifier with virtual-world data, using such features and Real AdaBoost as the learning machine. This classifier is applied to real-world training images. Then, a human oracle interactively corrects the wrong detections, i.e., a few missed detections are manually annotated and some false positives are pointed out too. A low amount of manual annotation is fixed as a restriction. Real- and virtual-world difficult samples are combined within what we call cool world, and we retrain the classifier with these data. Our experiments show that this adapted classifier is equivalent to the one trained with only real-world data while requiring just 90 annotations. © Springer-Verlag Berlin Heidelberg 2013.
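The interactive adaptation procedure described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the Haar/EOH feature extraction and the detection pipeline are replaced by synthetic feature vectors, the human oracle is simulated with ground-truth labels, and scikit-learn's `AdaBoostClassifier` stands in for Real AdaBoost. All names and the annotation budget of 20 are illustrative assumptions.

```python
# Hypothetical sketch of the "cool world" retraining loop.
# Stand-ins: synthetic features instead of Haar/EOH, a simulated oracle,
# and scikit-learn AdaBoost in place of the paper's Real AdaBoost.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)

# "Virtual-world" samples: automatically labelled feature vectors.
X_virtual = rng.normal(0.0, 1.0, size=(500, 16))
y_virtual = (X_virtual[:, 0] > 0).astype(int)

# 1) Train the initial classifier on virtual-world data only.
clf = AdaBoostClassifier(n_estimators=50, random_state=0)
clf.fit(X_virtual, y_virtual)

# "Real-world" samples drawn from a shifted distribution,
# simulating the dataset shift between the two domains.
X_real = rng.normal(0.5, 1.0, size=(200, 16))
y_real = (X_real[:, 0] > 0.5).astype(int)

# 2) Apply the classifier to real-world training data and let the
#    (here simulated) oracle correct a small budget of wrong
#    detections: missed detections and false positives.
pred = clf.predict(X_real)
wrong = np.flatnonzero(pred != y_real)
budget = 20  # low, fixed annotation budget (illustrative value)
corrected = wrong[:budget]

# 3) Combine the virtual-world samples with the corrected real-world
#    ones ("cool world") and retrain the classifier.
X_cool = np.vstack([X_virtual, X_real[corrected]])
y_cool = np.concatenate([y_virtual, y_real[corrected]])
clf.fit(X_cool, y_cool)
```

The key design point is step 3: rather than retraining on real-world data alone, the few manually corrected real-world samples are mixed with the abundant automatically labelled virtual-world samples, so the annotation cost stays low.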
Original language: English
Pages (from-to): 169-184
Journal: Intelligent Systems Reference Library
Publication status: Published - 21 Oct 2013


