This paper analyzes different ways of coupling the information from multiple visual features when representing visual content with temporal models based on Markov chains. We assume that the optimal combination is given by the Cartesian product of all feature state spaces. Simpler model structures are obtained by assuming independencies between random variables in the probabilistic structure. The relative entropy provides a measure of the information lost when a simplified structure replaces a more complex one. This loss of information is then compared to the loss of accuracy in the representation of visual content in video sequences, measured in terms of shot-retrieval performance. We reach three main conclusions: (1) the fully coupled model structure is an accurate approximation to the Cartesian product structure, (2) the largest loss of information occurs when direct temporal dependencies are removed, and (3) there is a direct relationship between loss of information and loss of representation accuracy. © Springer-Verlag Berlin Heidelberg 2003.
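The core measure used in the abstract, relative entropy (Kullback-Leibler divergence) between a coupled model and a simplified one, can be sketched as follows. This is a minimal illustrative example, not the paper's method: it assumes two hypothetical binary features, so the Cartesian product state space has four joint states, and compares one joint transition distribution against its fully factorized (independent-features) approximation. All probability values are made up for illustration.

```python
from itertools import product
from math import log

# Two hypothetical binary visual features; the Cartesian product
# state space therefore has 4 joint states (a, b).
states = list(product([0, 1], repeat=2))

# Assumed joint transition distribution P(next | current) from one fixed
# current state, over the 4 joint next-states. Illustrative numbers only.
p_joint = {(0, 0): 0.40, (0, 1): 0.10, (1, 0): 0.10, (1, 1): 0.40}

# Simplified structure: the two feature chains transition independently,
# Q(a, b) = Q_A(a) * Q_B(b), built from the marginals of p_joint.
qa = {a: sum(p for (x, _), p in p_joint.items() if x == a) for a in (0, 1)}
qb = {b: sum(p for (_, y), p in p_joint.items() if y == b) for b in (0, 1)}
q_factored = {(a, b): qa[a] * qb[b] for (a, b) in states}

# Relative entropy D(P || Q): the information lost when the coupled
# transition distribution is replaced by the independent approximation.
kl = sum(p * log(p / q_factored[s], 2) for s, p in p_joint.items() if p > 0)
print(f"D(P || Q) = {kl:.4f} bits")
```

Here the two features are positively correlated in `p_joint`, so the factorized approximation loses a nonzero amount of information; if `p_joint` were already a product of its marginals, the divergence would be zero.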
Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publication status: Published - 1 Dec 2003