Foundations and Trends® in Information Retrieval, Vol. 10, Issue 1

Online Evaluation for Information Retrieval

By Katja Hofmann, Microsoft Research, Cambridge, UK, katja.hofmann@microsoft.com | Lihong Li, Microsoft Research, Redmond, USA, lihongli@microsoft.com | Filip Radlinski, Microsoft Research, Cambridge, UK, filip.radlinski@microsoft.com

 
Suggested Citation
Katja Hofmann, Lihong Li and Filip Radlinski (2016), "Online Evaluation for Information Retrieval", Foundations and Trends® in Information Retrieval: Vol. 10: No. 1, pp 1-117. http://dx.doi.org/10.1561/1500000051

Publication Date: 22 Jun 2016
© 2016 K. Hofmann, L. Li, and F. Radlinski
 
Subjects
User modelling and user studies for IR, Evaluation issues and test collections for IR, Web Search, Evaluation
 


Abstract

Online evaluation is one of the most common approaches to measuring the effectiveness of an information retrieval system. It involves fielding the information retrieval system to real users, and observing these users’ interactions in situ while they engage with the system. This allows actual users with real-world information needs to play an important part in assessing retrieval quality. As such, online evaluation complements the common alternative of offline evaluation, which may provide more easily interpretable outcomes yet is often less realistic when measuring quality and actual user experience. In this survey, we provide an overview of online evaluation techniques for information retrieval. We show how online evaluation is used for controlled experiments, segmenting them into experiment designs that allow absolute or relative quality assessments. Our presentation of different metrics further partitions online evaluation based on the different-sized experimental units commonly of interest: documents, lists, and sessions. Additionally, we include an extensive discussion of recent work on data re-use and experiment estimation based on historical data. A substantial part of this work focuses on practical issues: how to run evaluations in practice, how to select experimental parameters, how to take into account the ethical considerations inherent in online evaluations, and the limitations that experimenters should be aware of. While most published work on online experimentation today is at large scale, in systems with millions of users, we also emphasize that the same techniques can be applied at small scale. To this end, we highlight recent work that makes online evaluation easier to use at smaller scales, and we encourage studying real-world information seeking in a wide range of scenarios. Finally, we present a summary of the most recent work in the area, describe open problems, and postulate future directions.
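The distinction between absolute and relative quality assessments mentioned above can be made concrete with a small illustration: an A/B test assigns users to one ranker and compares aggregate metrics, whereas an interleaving experiment merges two rankings and credits the ranker whose results are clicked. The sketch below is purely illustrative and is not taken from the monograph; the function names and the simplified team-draft scheme are assumptions made for the example.

```python
import random

def team_draft_interleave(ranking_a, ranking_b):
    """Merge two rankings, remembering which ranker contributed each slot.

    A simplified team-draft scheme: in each round the two rankers take turns
    (in random order) adding their highest-ranked document not yet shown.
    """
    all_docs = set(ranking_a) | set(ranking_b)
    interleaved, teams = [], []
    while len(interleaved) < len(all_docs):
        order = [("A", ranking_a), ("B", ranking_b)]
        random.shuffle(order)
        for team, ranking in order:
            pick = next((d for d in ranking if d not in interleaved), None)
            if pick is not None:
                interleaved.append(pick)
                teams.append(team)
    return interleaved, teams

def credit_clicks(teams, clicked_positions):
    """Relative assessment: credit each click to the ranker that placed it."""
    credit = {"A": 0, "B": 0}
    for pos in clicked_positions:
        credit[teams[pos]] += 1
    return credit

# Hypothetical example: two rankers return different orderings for one query,
# and the user clicks the results at positions 0 and 2 of the interleaved list.
results, teams = team_draft_interleave(["d1", "d2", "d3"], ["d3", "d1", "d4"])
print(results, teams, credit_clicks(teams, [0, 2]))
```

In practice, credit would be aggregated over many queries and users before declaring a preference between the two rankers; this sketch only shows the per-impression bookkeeping.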

DOI:10.1561/1500000051
ISBN: 978-1-68083-163-4 (paperback), 130 pp., $90.00
ISBN: 978-1-68083-162-7 (e-book, PDF), 130 pp., $130.00
Table of contents:
1. Introduction
2. Controlled Experiments
3. Metrics for Online Evaluation
4. Estimation from Historical Data
5. The Pros and Cons of Online Evaluation
6. Online Evaluation in Practice
7. Concluding Remarks
Acknowledgements
References

Online Evaluation for Information Retrieval

Online evaluation is one of the most common approaches to measuring the effectiveness of an information retrieval system. It involves fielding the information retrieval system to real users, and observing these users’ interactions in situ while they engage with the system. This allows actual users with real-world information needs to play an important part in assessing retrieval quality.

Online Evaluation for Information Retrieval provides the reader with a comprehensive overview of the topic. It shows how online evaluation is used for controlled experiments, segmenting them into experiment designs that allow absolute or relative quality assessments. The presentation of different metrics further partitions online evaluation based on the different-sized experimental units commonly of interest: documents, lists, and sessions. It also includes an extensive discussion of recent work on data re-use and experiment estimation based on historical data.

Online Evaluation for Information Retrieval pays particular attention to practical issues: how to run evaluations in practice, how to select experimental parameters, how to take into account the ethical considerations inherent in online evaluations, and the limitations that experimenters should be aware of. While most published work on online experimentation today is on a large scale, in systems with millions of users, this monograph also emphasizes that the same techniques can be applied on a small scale. To this end, it highlights recent work that makes online evaluation easier to use at smaller scales and encourages studying real-world information seeking in a wide range of scenarios. The monograph concludes with a summary of the most recent work in the area, outlines open problems, and postulates future directions.

 
INR-051