Road Scene Segmentation from a Single Image

Jose M. Alvarez, Theo Gevers, Yann LeCun, Antonio M. Lopez

Research output: Contribution to journal › Article › peer-review

181 Citations (Scopus)

Abstract

Road scene segmentation is important in computer vision for different applications such as autonomous driving and pedestrian detection. Recovering the 3D structure of road scenes provides relevant contextual information to improve their understanding.

In this paper, we use a convolutional neural network based algorithm to learn features from noisy labels to recover the 3D scene layout of a road image. The novelty of the algorithm lies in generating training labels by applying an algorithm trained on a general image dataset to classify on-board images. Further, we propose a novel texture descriptor based on a learned color plane fusion to obtain maximal uniformity in road areas. Finally, acquired (off-line) and current (on-line) information are combined to detect road areas in single images.
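The color plane fusion idea can be illustrated with a small sketch: combine the RGB channels with weights chosen so that the fused plane has minimal variance over sampled road pixels. This is a minimal, hedged reconstruction of the concept, not the paper's exact formulation; the sum-to-one constraint, the covariance-based solver, and the function name `fuse_color_planes` are assumptions made for illustration.

```python
import numpy as np

def fuse_color_planes(road_pixels):
    """Find weights w for C = w_R*R + w_G*G + w_B*B that minimize the
    variance of C over sampled road pixels.

    Illustrative sketch only: the constraint sum(w) = 1 and the
    closed-form minimum-variance solution are assumptions, not the
    paper's exact method.

    road_pixels: (N, 3) array of RGB values sampled from road areas.
    Returns a weight vector summing to 1.
    """
    cov = np.cov(road_pixels, rowvar=False)   # 3x3 channel covariance
    cov += 1e-6 * np.eye(3)                   # regularize for stability
    ones = np.ones(3)
    # Minimize w' * cov * w subject to sum(w) = 1:
    # the solution is proportional to cov^{-1} * 1.
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# Example: mostly-gray road pixels with more noise in R and B than G.
rng = np.random.default_rng(0)
base = rng.uniform(80, 120, size=(500, 1))
pixels = base + rng.normal(0.0, [5.0, 1.0, 5.0], size=(500, 3))
w = fuse_color_planes(pixels)
fused = pixels @ w
print(w, fused.std())
```

Because each single channel is itself a feasible weighting (e.g. w = [1, 0, 0]), the fused plane's variance can never exceed that of the most uniform individual channel, which is the "maximal uniformity" property the abstract refers to.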

From quantitative and qualitative experiments, conducted on publicly available datasets, it is concluded that convolutional neural networks are suitable for learning the 3D scene layout from noisy labels, yielding a relative improvement of 7% over the baseline. Furthermore, combining color planes provides a statistical description of road areas that exhibits maximal uniformity and yields a relative improvement of 8% over the baseline. Finally, the improvement is even larger when acquired and current information from a single image are combined.
Original language: English
Pages (from-to): 376-389
Number of pages: 14
Journal: Lecture Notes in Computer Science (LNCS)
Publication status: Published - 2012
