Exploring the impact of inter-query variability on the performance of retrieval systems

Francesco Brughi*, Debora Gil, Llorenç Badiella, Eva Jove Casabella, Oriol Ramos Terrades

*Corresponding author for this work

Research output: Book/Report › Proceeding › Research › peer-review

Abstract

This paper introduces a framework for evaluating the performance of information retrieval systems. Current evaluation metrics provide an average score that does not account for performance variability across the query set. As a result, conclusions lack statistical significance, generalize poorly to queries outside the set, and may lead to unfair comparisons. We propose applying statistical methods to obtain a more informative measure for problems in which different query classes can be identified. In this context, we assess performance variability on two levels: overall variability across the whole query set and variability related to specific query classes. To this end, we estimate confidence bands for precision-recall curves, and we apply ANOVA to assess the significance of performance differences across query classes.
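The two-level analysis described in the abstract can be sketched in Python. This is an illustrative simplification, not the authors' code: it scores each query by average precision (a scalar summary rather than the full precision-recall curve with confidence bands), uses a bootstrap interval for overall variability, and applies one-way ANOVA across hypothetical query classes with simulated data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def average_precision(scores, labels):
    """Average precision for one query: mean of precision at each relevant rank."""
    order = np.argsort(-scores)          # rank documents by descending score
    rel = labels[order]
    hits = np.cumsum(rel)
    ranks = np.arange(1, len(rel) + 1)
    precision_at_k = hits / ranks
    return precision_at_k[rel == 1].mean() if rel.sum() else 0.0

# Simulated per-query AP scores for three hypothetical query classes
# (in practice these would come from running the retrieval system).
class_aps = [rng.beta(a, 2, size=30) for a in (2, 3, 4)]

# Level 1 -- overall variability: bootstrap a 95% confidence interval
# for the mean AP over the whole query set.
all_aps = np.concatenate(class_aps)
boot_means = [rng.choice(all_aps, size=all_aps.size, replace=True).mean()
              for _ in range(2000)]
ci_low, ci_high = np.percentile(boot_means, [2.5, 97.5])

# Level 2 -- class-related variability: one-way ANOVA testing whether
# mean performance differs significantly across query classes.
f_stat, p_value = stats.f_oneway(*class_aps)
print(f"mean AP 95% CI: [{ci_low:.3f}, {ci_high:.3f}]")
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```

A small p-value would indicate that at least one query class performs significantly differently from the others, which is the kind of class-level conclusion a single averaged score cannot support.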

Original language: English
Number of pages: 8
ISBN (Electronic): 9783319117577
DOIs
Publication status: Published - 2014

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 8814
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

