Multiview Random Forest of Local Experts Combining RGB and LIDAR data for Pedestrian Detection

Alejandro Gonzalez, Gabriel Villalonga, Jiaolong Xu, David Vazquez, Jaume Amores, Antonio M. Lopez

Research output: Contribution to a journal › Article › Research › Peer-reviewed

71 Citations (Scopus)


Despite recent significant advances, pedestrian detection continues to be an extremely challenging problem in real scenarios. In order to develop a detector that successfully operates under these conditions, it becomes critical to leverage multiple cues, multiple imaging modalities, and a strong multi-view classifier that accounts for different pedestrian views and poses. In this paper we provide an extensive evaluation that gives insight into how each of these aspects (multi-cue, multimodality, and strong multi-view classifier) affects performance, both individually and when integrated together. In the multimodality component we explore the fusion of RGB and depth maps obtained by high-definition LIDAR, a type of modality that is only recently starting to receive attention. As our analysis reveals, although all the aforementioned aspects significantly help in improving the performance, the fusion of visible spectrum and depth information boosts the accuracy by a much larger margin. The resulting detector not only ranks among the top performers in the challenging KITTI benchmark, but is also built upon very simple blocks that are easy to implement and computationally efficient. These simple blocks can be easily replaced with more sophisticated ones recently proposed, such as convolutional neural networks for feature representation, to further improve the accuracy.
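The core idea of the multimodality component can be illustrated with a minimal sketch. This is not the authors' implementation; it only shows the general pattern of feature-level fusion (concatenating an RGB descriptor with a depth-map descriptor per detection window) followed by a random forest classifier. All array shapes, feature dimensions, and data here are hypothetical stand-ins.

```python
# Illustrative sketch (not the paper's actual pipeline): feature-level
# fusion of RGB and LIDAR-derived depth descriptors, classified with a
# random forest. Descriptors are random stand-ins for e.g. HOG-like
# features computed per candidate detection window.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n_windows = 200
rgb_feats = rng.normal(size=(n_windows, 64))    # hypothetical RGB descriptor
depth_feats = rng.normal(size=(n_windows, 64))  # hypothetical depth descriptor
labels = rng.integers(0, 2, size=n_windows)     # pedestrian vs. background

# Multimodal fusion: simple concatenation of the two descriptors.
fused = np.concatenate([rgb_feats, depth_feats], axis=1)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(fused, labels)
scores = clf.predict_proba(fused)[:, 1]  # per-window detection scores
```

In the paper, the forest additionally combines local expert classifiers over multiple pedestrian views; the sketch above keeps only the fusion-plus-forest skeleton.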
Original language: English
Pages (from-to): 356-361
Number of pages: 6
Journal: 2018 Ieee Intelligent Vehicles Symposium (iv)
Publication status: Published - 2015


