Exploring hate speech detection in multimodal publications

Raul Gomez, Jaume Gibert, Lluis Gomez, Dimosthenis Karatzas

Research output: Book chapter › Chapter › Research › Peer-reviewed

165 Citations (Scopus)

Abstract

In this work we target the problem of hate speech detection in multimodal publications formed by a text and an image. We gather and annotate a large-scale dataset from Twitter, MMHS150K, and propose different models that jointly analyze textual and visual information for hate speech detection, comparing them with unimodal detection. We provide quantitative and qualitative results and analyze the challenges of the proposed task. We find that, even though images are useful for the hate speech detection task, current multimodal models cannot outperform models analyzing only text. We discuss why, and open the field and the dataset for further research.
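The joint text-and-image analysis described above can be illustrated with a minimal late-fusion sketch: concatenate a text embedding with an image descriptor and score the pair with a linear layer and a sigmoid. The feature dimensions, the fusion scheme, and all names here are illustrative assumptions, not the paper's exact architectures.

```python
import numpy as np

def fuse_and_score(text_feat, image_feat, W, b):
    """Late fusion: concatenate modality features, then apply a linear
    layer followed by a sigmoid to get a hate-speech probability."""
    x = np.concatenate([text_feat, image_feat])
    z = W @ x + b
    return 1.0 / (1.0 + np.exp(-z))  # sigmoid maps the score into (0, 1)

# Toy example with random features and weights (dimensions are assumed,
# e.g. a recurrent sentence embedding and a CNN image descriptor).
rng = np.random.default_rng(0)
text_feat = rng.standard_normal(150)
image_feat = rng.standard_normal(2048)
W = rng.standard_normal(150 + 2048) * 0.01
b = 0.0
p = fuse_and_score(text_feat, image_feat, W, b)
print(f"hate-speech probability: {p:.3f}")
```

A unimodal text-only baseline, for comparison, would simply score `text_feat` alone with a smaller weight vector; the paper's finding is that such baselines remain hard to beat.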

Original language: English
Title of host publication: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1459-1467
Number of pages: 9
ISBN (electronic): 9781728165530
DOIs
Publication status: Published - March 2020

Publication series

Name: Proceedings - 2020 IEEE Winter Conference on Applications of Computer Vision, WACV 2020

