Saliency for fine-grained object recognition in domains with scarce training data

Carola Figueroa Flores, Abel Gonzalez-Garcia, Joost van de Weijer, Bogdan Raducanu

    Research output: Contribution to journal › Article

    48 Citations (Scopus)


    © 2019

    This paper investigates the role of saliency in improving the classification accuracy of a Convolutional Neural Network (CNN) when only scarce training data is available. Our approach consists of adding a saliency branch to an existing CNN architecture; the branch modulates the standard bottom-up visual features computed from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, not on their usefulness within a complete vision pipeline. Our proposed pipeline allows us to evaluate saliency methods on the high-level task of object recognition. We perform extensive experiments on various fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network's performance, especially in the case of scarce training data. Furthermore, our experiments show that saliency methods that obtain better saliency maps (as measured by traditional saliency benchmarks) also yield larger performance gains when applied in an object recognition pipeline.
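The core mechanism the abstract describes, a saliency branch modulating bottom-up CNN feature maps, can be sketched in a few lines. The sketch below is a simplified illustration, not the paper's implementation: the feature shapes, the sigmoid squashing of the saliency logits, and the element-wise multiplication are assumptions standing in for whatever architecture the authors actually use.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def modulate(features, saliency_logits):
    """Modulate bottom-up feature maps (C, H, W) with a spatial
    saliency map (H, W), acting as an attention mechanism."""
    sal = sigmoid(saliency_logits)        # saliency map with values in (0, 1)
    return features * sal[None, :, :]     # broadcast the map over all channels

# Hypothetical shapes: 64 feature channels over an 8x8 spatial grid.
feats = np.random.rand(64, 8, 8)          # bottom-up features from the CNN trunk
sal_logits = np.random.randn(8, 8)        # output of the saliency branch
out = modulate(feats, sal_logits)
print(out.shape)  # (64, 8, 8)
```

Because the saliency map lies in (0, 1), modulation can only attenuate features: salient regions keep their activations while non-salient regions are suppressed, which is what lets the branch guide feature extraction toward discriminative object parts.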
    Original language: English
    Pages (from-to): 62-73
    Journal: Pattern Recognition
    Publication status: Published - 1 Oct 2019


    • Fine-grained classification
    • Object recognition
    • Saliency detection
    • Scarce training data
