Watching the News: Towards VideoQA Models that can Read

Soumya Jahagirdar*, Minesh Mathew, Dimosthenis Karatzas, C. V. Jawahar

*Corresponding author of this work

Research output: Chapter in book › Chapter › Research › Peer-reviewed

8 Citations (Scopus)

Abstract

Video Question Answering methods focus on common-sense reasoning and on the visual cognition of objects or persons and their interactions over time. Current VideoQA approaches ignore the textual information present in the video. We argue instead that textual information is complementary to the action and provides essential contextualisation cues to the reasoning process. To this end, we propose a novel VideoQA task that requires reading and understanding the text in the video. To explore this direction, we focus on news videos and require QA systems to comprehend and answer questions about the topics presented by combining visual and textual cues in the video. We introduce the "NewsVideoQA" dataset, which comprises more than 8,600 QA pairs on 3,000+ news videos obtained from diverse news channels around the world. We demonstrate the limitations of current Scene Text VQA and VideoQA methods and propose ways to incorporate scene text information into VideoQA methods.

Original language: English
Title of host publication: WACV
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 4430-4439
Number of pages: 10
ISBN (electronic): 9781665493468
ISBN (print): 9781665493468
DOIs
Publication status: Published - 2023

Publication series

Name: Proceedings - 2023 IEEE Winter Conference on Applications of Computer Vision, WACV 2023

