On Offline Evaluation of Vision-Based Driving Models

Felipe Codevilla, Antonio M. Lopez, Vladlen Koltun, Alexey Dosovitskiy

Research output: Contribution to journal › Article › Research › peer-review

34 Citations (Web of Science)
3 Downloads (Pure)


Autonomous driving models should ideally be evaluated by deploying them on a fleet of physical vehicles in the real world. Unfortunately, this approach is not practical for the vast majority of researchers. An attractive alternative is to evaluate models offline, on a pre-collected validation dataset with ground truth annotation. In this paper, we investigate the relation between various online and offline metrics for evaluation of autonomous driving models. We find that offline prediction error is not necessarily correlated with driving quality, and two models with identical prediction error can differ dramatically in their driving performance. We show that the correlation of offline evaluation with driving quality can be significantly improved by selecting an appropriate validation dataset and suitable offline metrics.
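The abstract's central claim — that offline prediction error need not track online driving quality — can be illustrated with a small correlation check. The sketch below is purely illustrative: the per-model numbers are hypothetical and do not come from the paper; it simply shows how one might quantify the agreement between an offline metric (e.g. steering MSE on a validation set) and an online one (e.g. episode success rate) across a set of models.

```python
# Sketch: how well does an offline metric predict driving quality?
# All numbers below are hypothetical illustrations, not results from the paper.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-model scores: offline steering MSE vs. online success rate.
# Note the second and third models: identical offline error (0.12),
# very different driving performance -- the situation the paper describes.
offline_mse  = [0.10, 0.12, 0.12, 0.20, 0.35]
success_rate = [0.90, 0.55, 0.88, 0.60, 0.30]

# A strongly negative r would mean lower offline error reliably predicts
# better driving; a weak r means the offline metric is a poor proxy.
r = pearson(offline_mse, success_rate)
print(round(r, 3))
```

The paper's proposal — choosing a better validation dataset and better offline metrics — amounts to making this correlation stronger.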
Original language: English
Pages (from-to): 246-262
Number of pages: 17
Journal: Lecture Notes in Computer Science
Publication status: Published - 2018


Keywords:
  • Autonomous driving
  • Deep learning


