ICDAR2019 robust reading challenge on multi-lingual scene text detection and recognition-RRC-MLT-2019

Nibal Nayef*, Cheng Lin Liu, Jean Marc Ogier, Yash Patel, Michal Busta, Pinaki Nath Chowdhury, Dimosthenis Karatzas, Wafa Khlif, Jiri Matas, Umapada Pal, Jean Christophe Burie

*Corresponding author of this work

Research output: Chapter in book › Chapter › Research › Peer-reviewed

217 Citations (Scopus)

Abstract

With the growing cosmopolitan culture of modern cities, the need for robust Multi-Lingual scene Text (MLT) detection and recognition systems has never been greater. With the goal of systematically benchmarking and pushing the state of the art forward, the proposed competition builds on RRC-MLT-2017 with an additional end-to-end task, an additional language in the real-image dataset, a large-scale multi-lingual synthetic dataset to assist training, and a baseline end-to-end recognition method. The real dataset consists of 20,000 images containing text from 10 languages. The challenge has 4 tasks covering various aspects of multi-lingual scene text: (a) text detection, (b) cropped-word script classification, (c) joint text detection and script classification, and (d) end-to-end detection and recognition. In total, the competition received 60 submissions from the research and industrial communities. This paper presents the dataset, the tasks and the findings of the RRC-MLT-2019 challenge.

Original language: English
Title of host publication: Proceedings - 15th IAPR International Conference on Document Analysis and Recognition, ICDAR 2019
Pages: 1582-1587
Number of pages: 6
ISBN (electronic): 9781728128610
DOIs
Publication status: Published - Sept. 2019

Publication series

Name: Proceedings of the International Conference on Document Analysis and Recognition, ICDAR
ISSN (print): 1520-5363
