Multiple features in temporal models for the representation of visual contents in video

Juan M. Sánchez, Xavier Binefa, John R. Kender

    Research output: Contribution to journal › Review article › peer-review

    1 Citation (Scopus)

    Abstract

    This paper analyzes different ways of coupling the information from multiple visual features in the representation of visual contents using temporal models based on Markov chains. We assume that the optimal combination is given by the Cartesian product of all feature state spaces. Simpler model structures are obtained by assuming independencies between random variables in the probabilistic structure. The relative entropy provides a measure of the information loss of a simplified structure with respect to a more complex one. The loss of information is then compared to the loss of accuracy in the representation of visual contents in video sequences, which is measured in terms of shot retrieval performance. We reach three main conclusions: (1) the full-coupled model structure is an accurate approximation to the Cartesian product structure, (2) the largest loss of information is found when direct temporal dependencies are removed, and (3) there is a direct relationship between loss of information and loss of representation accuracy. © Springer-Verlag Berlin Heidelberg 2003.
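    The abstract's key quantity, the relative entropy between a full-coupled temporal model and a simplified one with independence assumptions, can be illustrated with a small sketch. The example below is hypothetical and not from the paper: it assumes two binary visual features, so the Cartesian product state space has four states, and compares a full-coupled transition matrix `P` against a factored approximation `Q` built as the Kronecker product of two per-feature chains. The divergence rate is the stationary-weighted per-step KL divergence.

    ```python
    import numpy as np

    # Hypothetical setup: two binary features -> 4 joint states.
    rng = np.random.default_rng(0)

    # Full-coupled model: an arbitrary row-stochastic 4x4 transition matrix.
    P = rng.random((4, 4))
    P /= P.sum(axis=1, keepdims=True)

    # Simplified model: each feature evolves independently (chains A and B),
    # so the joint transition matrix factorizes as a Kronecker product.
    A = rng.random((2, 2)); A /= A.sum(axis=1, keepdims=True)
    B = rng.random((2, 2)); B /= B.sum(axis=1, keepdims=True)
    Q = np.kron(A, B)

    # Stationary distribution of P: left eigenvector for eigenvalue 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()

    # Relative entropy rate D(P || Q): expected KL divergence between the
    # next-state distributions, averaged over the stationary distribution.
    kl_rate = sum(pi[s] * np.sum(P[s] * np.log(P[s] / Q[s])) for s in range(4))
    print(kl_rate)  # non-negative; 0 only if the factored model is exact
    ```

    A larger `kl_rate` indicates that the independence assumption discards more of the temporal coupling between features, which the paper relates empirically to shot retrieval performance.
    
    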
    Original language: English
    Pages (from-to): 216-226
    Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
    Volume: 2728
    Publication status: Published - 1 Dec 2003

