TY - CHAP
T1 - Is An Image Worth Five Sentences? A New Look into Semantics for Image-Text Matching
AU - Biten, Ali Furkan
AU - Mafla, Andres
AU - Gomez, Lluis
AU - Karatzas, Dimosthenis
N1 - Funding Information:
This work has been supported by projects PID2020-116298GB-I00, the CERCA Programme / Generalitat de Catalunya, AGAUR project 2019PROD00090 (BeARS) and PhD scholarships from AGAUR (2019-FIB01233) and UAB (B18P0073).
Publisher Copyright:
© 2022 IEEE.
PY - 2022
Y1 - 2022
N2 - The task of image-text matching aims to map representations from different modalities into a common joint visual-textual embedding. However, the most widely used datasets for this task, MSCOCO and Flickr30K, are actually image captioning datasets that offer a very limited set of relationships between images and sentences in their ground-truth annotations. This limited ground-truth information forces us to use evaluation metrics based on binary relevance: given a sentence query, we consider only one image as relevant. However, many other relevant images or captions may be present in the dataset. In this work, we propose two metrics that evaluate the degree of semantic relevance of retrieved items, independently of their annotated binary relevance. Additionally, we incorporate a novel strategy that uses an image captioning metric, CIDEr, to define a Semantic Adaptive Margin (SAM) to be optimized in a standard triplet loss. By incorporating our formulation into existing models, a large improvement is obtained in scenarios where available training data is limited. We also demonstrate that the performance on the annotated image-caption pairs is maintained while improving on other non-annotated relevant items when employing the full training set. The code for our new metric can be found at github.com/furkanbiten/ncs_metric and the model implementation at github.com/andrespmd/semantic_adaptive_margin.
AB - The task of image-text matching aims to map representations from different modalities into a common joint visual-textual embedding. However, the most widely used datasets for this task, MSCOCO and Flickr30K, are actually image captioning datasets that offer a very limited set of relationships between images and sentences in their ground-truth annotations. This limited ground-truth information forces us to use evaluation metrics based on binary relevance: given a sentence query, we consider only one image as relevant. However, many other relevant images or captions may be present in the dataset. In this work, we propose two metrics that evaluate the degree of semantic relevance of retrieved items, independently of their annotated binary relevance. Additionally, we incorporate a novel strategy that uses an image captioning metric, CIDEr, to define a Semantic Adaptive Margin (SAM) to be optimized in a standard triplet loss. By incorporating our formulation into existing models, a large improvement is obtained in scenarios where available training data is limited. We also demonstrate that the performance on the annotated image-caption pairs is maintained while improving on other non-annotated relevant items when employing the full training set. The code for our new metric can be found at github.com/furkanbiten/ncs_metric and the model implementation at github.com/andrespmd/semantic_adaptive_margin.
KW - Vision and Languages
UR - http://www.scopus.com/inward/record.url?scp=85126121167&partnerID=8YFLogxK
U2 - 10.1109/WACV51458.2022.00254
DO - 10.1109/WACV51458.2022.00254
M3 - Chapter
AN - SCOPUS:85126121167
T3 - Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022
SP - 2483
EP - 2492
BT - Proceedings - 2022 IEEE/CVF Winter Conference on Applications of Computer Vision, WACV 2022
PB - Institute of Electrical and Electronics Engineers Inc.
ER -
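
N1 - Editor's note: the abstract describes a Semantic Adaptive Margin (SAM) obtained from a caption-similarity score (CIDEr) and optimized inside a standard triplet loss. The following is a minimal illustrative sketch of that idea, not the authors' implementation (which is at github.com/andrespmd/semantic_adaptive_margin); the function name sam_triplet_loss, the base_margin parameter, and the assumption that semantically closer negatives receive a smaller margin are all illustrative choices.

import torch

def sam_triplet_loss(img_emb, cap_emb, cider_scores, base_margin=0.2):
    """Hinge triplet loss with a semantic adaptive margin (illustrative sketch).

    img_emb, cap_emb: (B, D) L2-normalized embeddings of matched image-caption pairs.
    cider_scores: (B, B) precomputed caption-similarity matrix (e.g. CIDEr between
        the caption of item i and the caption of item j), scaled to [0, 1].
    """
    sim = img_emb @ cap_emb.t()                  # (B, B) cosine similarities
    pos = sim.diag().unsqueeze(1)                # similarity of the true pairs
    # Assumed behavior: negatives that are semantically close (high CIDEr) get a
    # smaller margin, so they are pushed away less aggressively than unrelated ones.
    margin = base_margin * (1.0 - cider_scores)  # (B, B) adaptive margins
    cost_cap = (margin + sim - pos).clamp(min=0)       # image -> caption direction
    cost_img = (margin + sim - pos.t()).clamp(min=0)   # caption -> image direction
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    cost_cap = cost_cap.masked_fill(mask, 0)
    cost_img = cost_img.masked_fill(mask, 0)
    return cost_cap.sum() + cost_img.sum()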