Multimodal inverse perspective mapping

Miguel Oliveira, Vitor Santos, Angel D. Sappa

    Research output: Contribution to journal › Article › peer-review

    33 Citations (Scopus)

    Abstract

    Over the past years, inverse perspective mapping has been successfully applied to several problems in the field of Intelligent Transportation Systems. In brief, the method maps images to a new coordinate system in which perspective effects are removed. Removing these perspective-associated effects facilitates road and obstacle detection and also assists free-space estimation. There is, however, a significant limitation of inverse perspective mapping: the presence of obstacles on the road disrupts the effectiveness of the mapping. This paper proposes a robust solution based on multimodal sensor fusion. Data from a laser range finder are fused with images from the cameras, so that the mapping is not computed in regions where obstacles are present. As the results show, this considerably improves the effectiveness of the algorithm and reduces computation time compared with classical inverse perspective mapping. Furthermore, the proposed approach can also cope with several cameras with different lenses or image resolutions, as well as with dynamic viewpoints.
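    The core idea can be illustrated with a minimal sketch: classical inverse perspective mapping warps image pixels to ground-plane coordinates through a planar homography, and the multimodal variant simply excludes pixels flagged as obstacles (e.g. from laser range data) before warping. The function name, the identity homography, and the obstacle mask below are hypothetical illustrations, not the paper's implementation.

    ```python
    import numpy as np

    def ipm_map(points, H, obstacle_mask=None):
        """Apply a 3x3 ground-plane homography H to Nx2 pixel coordinates.

        Pixels flagged True in obstacle_mask (hypothetical stand-in for a
        laser-derived obstacle map) are dropped, mirroring the idea of not
        computing the mapping where obstacles are present.
        """
        pts = np.asarray(points, dtype=float)
        if obstacle_mask is not None:
            pts = pts[~np.asarray(obstacle_mask)]
        # homogeneous coordinates: (u, v) -> (u, v, 1)
        homog = np.hstack([pts, np.ones((len(pts), 1))])
        warped = homog @ H.T
        # perspective divide back to 2-D ground-plane coordinates
        return warped[:, :2] / warped[:, 2:3]

    # With an identity homography, surviving points map to themselves;
    # the second pixel is masked as an obstacle and is excluded.
    H = np.eye(3)
    out = ipm_map([[10.0, 20.0], [5.0, 5.0]], H, obstacle_mask=[False, True])
    ```

    In practice the homography would be estimated from camera calibration (intrinsics plus pose relative to the road plane), and the obstacle mask from the fused range data; only the masking step distinguishes this sketch from classical IPM.
    
    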
    Original language: English
    Pages (from-to): 108-121
    Journal: Information Fusion
    Volume: 24
    DOIs
    Publication status: Published - 1 Jan 2015

    Keywords

    • Intelligent vehicles
    • Inverse perspective mapping
    • Multimodal sensor fusion
