Virtual and real world adaptation for pedestrian detection

David Vazquez, Antonio M. Lopez, Javier Marin, Daniel Ponsa, David Geronimo

Research output: Contribution to journal › Article › Research › Peer-reviewed

147 Citations (Scopus)

Abstract

Pedestrian detection is of paramount interest for many applications. The most promising detectors rely on discriminatively learnt classifiers, i.e., classifiers trained with annotated samples. However, the annotation step is a human-intensive and subjective task worth minimizing. By using virtual worlds we can automatically obtain precise and rich annotations. Thus, we face the question: can a pedestrian appearance model learnt in realistic virtual worlds work successfully for pedestrian detection in real-world images? The conducted experiments show that virtual-world-based training can provide excellent testing accuracy in the real world, but it can also suffer from the dataset shift problem, just as real-world-based training does. Accordingly, we have designed a domain adaptation framework, V-AYLA, in which we have tested different techniques to collect a few pedestrian samples from the target domain (real world) and combine them with the many examples of the source domain (virtual world) in order to train a domain-adapted pedestrian classifier that operates in the target domain. V-AYLA reports the same detection accuracy as training with many human-provided pedestrian annotations and testing on real-world images of the same domain. To the best of our knowledge, this is the first work demonstrating adaptation of virtual and real worlds for developing an object detector. © 2014 IEEE.
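The core recipe named in the abstract, pooling many automatically annotated source-domain (virtual-world) samples with a few annotated target-domain (real-world) samples to train a single domain-adapted classifier, can be sketched as below. This is a minimal illustration only, assuming HOG-like descriptor vectors and a linear SVM (a common choice in this line of work); the specific V-AYLA adaptation techniques, feature pipeline, and weighting scheme are not described in this record, so all names and numbers here are hypothetical.

```python
# Hypothetical sketch of mixed-domain training: many virtual-world samples
# plus a few real-world samples feed one classifier. Random vectors stand in
# for descriptor features (e.g., HOG) extracted from annotated windows.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Many source (virtual) samples, few target (real) samples; 3780 matches a
# typical HOG descriptor length for a 64x128 detection window.
n_src, n_tgt, dim = 5000, 100, 3780
X_src = rng.normal(0.0, 1.0, (n_src, dim))
y_src = rng.integers(0, 2, n_src)            # 1 = pedestrian, 0 = background
X_tgt = rng.normal(0.2, 1.0, (n_tgt, dim))   # shifted target distribution
y_tgt = rng.integers(0, 2, n_tgt)

# Pool both domains; up-weight the scarce target samples so they are not
# drowned out by the abundant source samples (one simple adaptation choice,
# not necessarily the one used in the paper).
X = np.vstack([X_src, X_tgt])
y = np.concatenate([y_src, y_tgt])
w = np.concatenate([np.ones(n_src), np.full(n_tgt, n_src / n_tgt)])

clf = LinearSVC(C=0.01)
clf.fit(X, y, sample_weight=w)
```

In practice the resulting linear model would be evaluated with a sliding-window detector on real-world test images, which is where the domain-adapted classifier is meant to operate.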
Original language: English
Article number: 6587038
Pages (from-to): 797-809
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 36
Issue: 4
DOIs
Publication status: Published - 1 Jan 2014
