A Multilingual Approach to Scene Text Visual Question Answering

Josep Brugués i Pujolràs, Lluís Gómez i Bigordà*, Dimosthenis Karatzas

*Corresponding author for this work

Research output: Chapter in Book › Chapter › Research › peer-review

1 Citation (Scopus)


Scene Text Visual Question Answering (ST-VQA) has recently emerged as a hot research topic in Computer Vision. Current ST-VQA models have great potential for many applications, but they cannot perform well in more than one language at a time, due to the lack of multilingual data and the use of monolingual word embeddings during training. In this work, we explore the possibility of obtaining bilingual and multilingual VQA models. To that end, we take an established VQA model that uses monolingual word embeddings in its pipeline and replace them with FastText and BPEmb multilingual word embeddings that have been aligned to English. Our experiments demonstrate that it is possible to obtain bilingual and multilingual VQA models with minimal performance loss in languages not seen during training, as well as a multilingual model, trained on multiple languages, that matches the performance of the respective monolingual baselines.
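The key idea the abstract describes is that in an aligned multilingual embedding space, translation pairs across languages map to nearby vectors, so a question encoder trained on one language can consume questions in another. The sketch below illustrates this with tiny hand-made stand-in vectors (not real FastText or BPEmb embeddings, and not the authors' actual model): a simple average-of-word-vectors question encoder produces nearly identical representations for an English word and its aligned Spanish or Catalan counterpart.

```python
import numpy as np

# Toy "aligned" embeddings: in real aligned FastText/BPEmb spaces,
# translation pairs (e.g. English "cat" / Spanish "gato") lie close
# together. These 4-d vectors are illustrative stand-ins only.
ALIGNED = {
    "cat":    np.array([0.90, 0.10, 0.00, 0.20]),  # English
    "gato":   np.array([0.88, 0.12, 0.02, 0.19]),  # Spanish, near "cat"
    "sign":   np.array([0.10, 0.90, 0.30, 0.00]),  # English
    "senyal": np.array([0.12, 0.87, 0.28, 0.03]),  # Catalan, near "sign"
}

def embed_question(tokens, table=ALIGNED, dim=4):
    """Average the word vectors of a tokenized question -- the kind of
    language-agnostic input a VQA question branch could be fed."""
    vecs = [table[t] for t in tokens if t in table]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def cosine(a, b):
    """Cosine similarity between two question representations."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```

Because the cross-lingual pair ("cat", "gato") is far more similar than the unrelated pair ("cat", "sign"), a model trained only on the English vectors would see nearly the same input when given the Spanish question, which is the mechanism behind the minimal cross-lingual performance loss the paper reports.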

Original language: English
Title of host publication: Document Analysis Systems - 15th IAPR International Workshop, DAS 2022, Proceedings
Editors: Seiichi Uchida, Elisa Barney, Véronique Eglin
Publisher: Springer Science and Business Media Deutschland GmbH
Number of pages: 15
ISBN (Print): 9783031065545
Publication status: Published - 2022

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 13237 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349


Keywords

  • Deep learning
  • Multilingual word embeddings
  • Scene text
  • Vision and language
  • Visual question answering


