Evaluation of IR Systems
Transcript of Evaluation of IR Systems
![Page 1: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/1.jpg)
Evaluation of IR Systems
Adapted from lectures by Prabhakar Raghavan (Yahoo and Stanford) and Christopher Manning (Stanford)
![Page 2: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/2.jpg)
This lecture
- Results summaries: making our good results usable to a user
- How do we know if our results are any good? Evaluating a search engine
- Benchmarks; precision and recall
![Page 3: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/3.jpg)
Result Summaries
Having ranked the documents matching a query, we wish to present a results list.
Most commonly, a list of the document titles plus a short summary, aka “10 blue links”.
![Page 4: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/4.jpg)
Summaries
The title is typically automatically extracted from document metadata. What about the summaries? This description is crucial: the user identifies good/relevant hits based on it.
Two basic kinds:
- A static summary of a document is always the same, regardless of the query that hit the doc.
- A dynamic summary is a query-dependent attempt to explain why the document was retrieved for the query at hand.
![Page 5: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/5.jpg)
Static summaries
In typical systems, the static summary is a subset of the document.
- Simplest heuristic: the first 50 (or so; this can be varied) words of the document, with the summary cached at indexing time.
- More sophisticated: extract from each document a set of "key" sentences, using simple NLP heuristics to score each sentence; the summary is made up of top-scoring sentences.
- Most sophisticated: NLP used to synthesize a summary. Seldom used in IR (cf. text summarization work).
![Page 6: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/6.jpg)
Dynamic summaries
Present one or more "windows" within the document that contain several of the query terms.
- "KWIC" snippets: KeyWord In Context presentation, generated in conjunction with scoring.
- If the query is found as a phrase, show all or some occurrences of the phrase in the doc.
- If not, show document windows that contain multiple query terms.
- The summary itself gives the entire content of the window: all terms, not only the query terms.
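To make this concrete, here is a minimal KWIC-style sketch (not from the slides; the function name, tokenization, and scoring rule are all illustrative): slide a fixed-width window over the cached document text and keep the window covering the most distinct query terms.

```python
# Illustrative window selection: count distinct query-term hits per window.
def best_window(doc_text: str, query_terms: set, width: int = 20) -> str:
    tokens = doc_text.split()
    best_start, best_hits = 0, -1
    for start in range(max(1, len(tokens) - width + 1)):
        window = tokens[start:start + width]
        hits = len({t.lower().strip(".,") for t in window} & query_terms)
        if hits > best_hits:  # keep the leftmost best window
            best_start, best_hits = start, hits
    return " ".join(tokens[best_start:best_start + width])

snippet = best_window(
    "A study suggests red wine may reduce heart attack risk more than "
    "white wine does.", {"red", "wine", "heart", "attack"})
print(snippet)
```

A real snippet generator would also reward phrase matches, term proximity, and well-formed sentence boundaries; this sketch only counts distinct hits.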
![Page 7: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/7.jpg)
Generating dynamic summaries
If we have only a positional index, we cannot (easily) reconstruct the context window surrounding hits. But if we cache the documents at index time, we can find windows in them, cueing from hits found in the positional index. E.g., the positional index says "the query is a phrase at position 4378," so we go to this position in the cached document and stream out the content. Most often, only a fixed-size prefix of the doc is cached. Note: the cached copy can be outdated.
![Page 8: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/8.jpg)
Dynamic summaries
Producing good dynamic summaries is a tricky optimization problem:
- The real estate for the summary is normally small and fixed.
- Snippets should be long enough to be useful.
- Snippets should be linguistically well-formed.
- Snippets should be maximally informative about the doc.
But users really like snippets, even if they complicate IR system design.
![Page 9: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/9.jpg)
Alternative results presentations?
An active area of HCI research. One alternative: http://www.searchme.com/ copies the idea of Apple's Cover Flow for search results.
![Page 10: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/10.jpg)
Evaluating search engines
![Page 11: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/11.jpg)
Measures for a search engine
- How fast does it index? Number of documents/hour (average document size)
- How fast does it search? Latency as a function of index size
- Expressiveness of query language: ability to express complex information needs; speed on complex queries
- Uncluttered UI
- Is it free?
![Page 12: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/12.jpg)
Measures for a search engine
All of the preceding criteria are measurable: we can quantify speed/size, and we can make expressiveness precise. The key measure, though, is user happiness. What is this? Speed of response and size of index are factors, but blindingly fast, useless answers won't make a user happy. We need a way of quantifying user happiness.
![Page 13: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/13.jpg)
Data Retrieval vs Information Retrieval
- DR performance evaluation (after establishing correctness): response time, index space.
- IR performance evaluation: how relevant is the answer set? (Required to establish functional correctness, e.g., through benchmarks.)
![Page 14: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/14.jpg)
Measuring user happiness
Issue: who is the user we are trying to make happy? It depends on the setting/context.
- Web engine: the user finds what they want and returns to the engine. Can measure the rate of return users.
- eCommerce site: the user finds what they want and makes a purchase. Is it the end-user, or the eCommerce site, whose happiness we measure? Do we measure time to purchase, or the fraction of searchers who become buyers?
![Page 15: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/15.jpg)
Measuring user happiness
Enterprise (company/govt/academic): care about "user productivity". How much time do my users save when looking for information? Many other criteria have to do with breadth of access, secure access, etc.
![Page 16: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/16.jpg)
Happiness: elusive to measure
The most common proxy: relevance of search results. But how do you measure relevance? We will detail a methodology here, then examine its issues. Relevance measurement requires 3 elements:
1. A benchmark document collection
2. A benchmark suite of queries
3. A usually binary assessment of either Relevant or Nonrelevant for each query and each document
(There is some work on more-than-binary assessments, but this is not the standard.)
![Page 17: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/17.jpg)
Evaluating an IR system
Note: the information need is translated into a query, and relevance is assessed relative to the information need, not the query.
E.g., information need: "I'm looking for information on whether drinking red wine is more effective at reducing heart attack risks than white wine."
Query: wine red white heart attack effective
You evaluate whether the doc addresses the information need, not whether it contains these words.
![Page 18: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/18.jpg)
Difficulties with gauging Relevancy
Relevancy, from a human standpoint, is:
- Subjective: depends upon a specific user's judgment.
- Situational: relates to the user's current needs.
- Cognitive: depends on human perception and behavior.
- Dynamic: changes over time.
![Page 19: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/19.jpg)
Standard relevance benchmarks
- TREC: the National Institute of Standards and Technology (NIST) has run a large IR test bed for many years.
- Reuters and other benchmark doc collections are also used.
- "Retrieval tasks" are sometimes specified as queries.
- Human experts mark, for each query and each doc, Relevant or Nonrelevant, or at least do so for the subset of docs that some system returned for that query.
![Page 20: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/20.jpg)
Unranked retrieval evaluation: Precision and Recall
- Precision: fraction of retrieved docs that are relevant = P(relevant|retrieved)
- Recall: fraction of relevant docs that are retrieved = P(retrieved|relevant)

|               | Relevant | Nonrelevant |
|---------------|----------|-------------|
| Retrieved     | tp       | fp          |
| Not retrieved | fn       | tn          |

Precision P = tp/(tp + fp)
Recall R = tp/(tp + fn)
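A minimal sketch of these definitions, assuming we have the set of retrieved doc ids and the set of relevant doc ids for a single query (the ids are made up):

```python
def precision_recall(retrieved: set, relevant: set):
    tp = len(retrieved & relevant)                  # relevant and retrieved
    p = tp / len(retrieved) if retrieved else 0.0   # tp / (tp + fp)
    r = tp / len(relevant) if relevant else 0.0     # tp / (tp + fn)
    return p, r

print(precision_recall({1, 2, 3, 4}, {2, 4, 5, 6}))  # (0.5, 0.5)
```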
![Page 21: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/21.jpg)
Precision and Recall
![Page 22: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/22.jpg)
Precision and Recall in Practice
- Precision: the ability to retrieve top-ranked documents that are mostly relevant; the fraction of the retrieved documents that are relevant.
- Recall: the ability of the search to find all of the relevant items in the corpus; the fraction of the relevant documents that are retrieved.
![Page 23: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/23.jpg)
Should we instead use the accuracy measure for evaluation?
Given a query, an engine classifies each doc as "Relevant" or "Nonrelevant". The accuracy of an engine is the fraction of these classifications that are correct. Accuracy is a commonly used evaluation measure in machine learning classification work. Why is this not a very useful evaluation measure in IR?
![Page 24: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/24.jpg)
Why not just use accuracy?
How to build a 99.9999% accurate search engine on a low budget…
People doing information retrieval want to find something, and have a certain tolerance for junk.

Search for: [            ]
0 matching results found.
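The arithmetic behind the joke, with illustrative collection sizes (the numbers are made up): an engine that retrieves nothing is almost always "correct", because nearly every document is a true negative.

```python
# A 10M-doc collection where only 10 docs are relevant to the query.
docs, relevant = 10_000_000, 10
tn = docs - relevant          # every nonrelevant doc correctly "not retrieved"
accuracy = tn / docs          # tp = fp = 0 for the do-nothing engine
print(accuracy)               # 0.999999, yet recall (and usefulness) is 0
```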
![Page 25: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/25.jpg)
Precision/Recall
- You can get high recall (but low precision) by retrieving all docs for all queries!
- Recall is a non-decreasing function of the number of docs retrieved.
- In a good system, precision decreases as either the number of docs retrieved or recall increases. This is not a theorem, but a result with strong empirical confirmation.
![Page 26: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/26.jpg)
Trade-offs
[Figure: precision (y-axis, 0 to 1) vs. recall (x-axis, 0 to 1). "The ideal" sits at the top-right corner. A high-precision/low-recall engine returns relevant documents but misses many useful ones too; a high-recall/low-precision engine returns most relevant documents but includes a lot of junk.]
![Page 27: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/27.jpg)
Difficulties in using precision/recall
- Should average over large document collection/query ensembles.
- Need human relevance assessments, and people aren't reliable assessors.
- Assessments have to be binary; what about nuanced assessments?
- Heavily skewed by collection/authorship; results may not translate from one domain to another.
![Page 28: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/28.jpg)
A combined measure: F
The combined measure that assesses the precision/recall tradeoff is the F measure, the harmonic mean of P and R:

$$F = \frac{2PR}{P + R}, \qquad \text{i.e.,} \qquad \frac{1}{F} = \frac{1}{2}\left(\frac{1}{P} + \frac{1}{R}\right)$$

The harmonic mean is a conservative average. See C. J. van Rijsbergen, *Information Retrieval*.
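A small sketch of the measure, with the β generalization from the next slide included (β = 1 gives the plain harmonic mean F):

```python
def f_measure(p: float, r: float, beta: float = 1.0) -> float:
    if p == 0.0 and r == 0.0:
        return 0.0
    b2 = beta ** 2
    return (b2 + 1) * p * r / (b2 * p + r)   # harmonic mean when beta = 1

print(round(f_measure(0.75, 0.6), 3))             # F1 = 0.667
print(round(f_measure(0.75, 0.6, beta=2.0), 3))   # 0.625, leaning toward recall
```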
![Page 29: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/29.jpg)
Aka E Measure (parameterized F Measure)
A variant of the F measure allows weighting emphasis on precision over recall:

$$E = F_\beta = \frac{(\beta^2 + 1)PR}{\beta^2 P + R}$$

The value of β controls the trade-off:
- β = 1: equally weight precision and recall (E = F).
- β > 1: weight recall more.
- β < 1: weight precision more.
![Page 30: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/30.jpg)
F1 and other averages
[Figure: "Combined Measures": the minimum, maximum, arithmetic mean, geometric mean, and harmonic mean of precision and recall, plotted against precision from 0 to 100 with recall fixed at 70%.]
![Page 31: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/31.jpg)
Recall vs Precision and F1
[Figure: precision and F1 plotted against recall (both axes 0 to 1); the precision and recall curves cross at the breakeven point.]
- The breakeven point is the point where precision equals recall.
- It is an alternative single measure of IR effectiveness.
- How do you compute it? (See the sketch below.)
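One way to answer that question: walk along the measured (recall, precision) points in order of increasing recall and linearly interpolate where the curve crosses the P = R line. A sketch with made-up points:

```python
def breakeven(points):
    """points: list of (recall, precision) pairs, recall ascending."""
    for (r0, p0), (r1, p1) in zip(points, points[1:]):
        d0, d1 = p0 - r0, p1 - r1          # sign change => P = R crossing
        if d0 >= 0 >= d1:
            t = d0 / (d0 - d1) if d0 != d1 else 0.0
            return r0 + t * (r1 - r0)       # recall (= precision) at crossing
    return None

print(breakeven([(0.2, 0.9), (0.4, 0.7), (0.6, 0.5), (0.8, 0.3)]))  # 0.55
```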
![Page 32: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/32.jpg)
Evaluating ranked results
The system can return any number of results. By taking various numbers of the top returned documents (levels of recall), the evaluator can produce a precision-recall curve.
![Page 33: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/33.jpg)
Computing Recall/Precision Points: An Example

| n | doc # | relevant |
|----|-------|----------|
| 1 | 588 | x |
| 2 | 589 | x |
| 3 | 576 | |
| 4 | 590 | x |
| 5 | 986 | |
| 6 | 592 | x |
| 7 | 984 | |
| 8 | 988 | |
| 9 | 578 | |
| 10 | 985 | |
| 11 | 103 | |
| 12 | 591 | |
| 13 | 772 | x |
| 14 | 990 | |

Let the total # of relevant docs = 6 and check each new recall point:
- n=1: R = 1/6 = 0.167; P = 1/1 = 1
- n=2: R = 2/6 = 0.333; P = 2/2 = 1
- n=4: R = 3/6 = 0.5; P = 3/4 = 0.75
- n=6: R = 4/6 = 0.667; P = 4/6 = 0.667
- n=13: R = 5/6 = 0.833; P = 5/13 = 0.38

One relevant document is never retrieved, so we never reach 100% recall.
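A short sketch that reproduces the points above; the sixth relevant doc never appears in the ranking, so an arbitrary placeholder id (999) stands in for it:

```python
ranking = [588, 589, 576, 590, 986, 592, 984, 988, 578, 985, 103, 591, 772, 990]
relevant = {588, 589, 590, 592, 772, 999}   # 6 relevant; 999 is never retrieved

hits = 0
for n, doc in enumerate(ranking, start=1):
    if doc in relevant:                      # each hit is a new recall point
        hits += 1
        print(f"n={n}: R={hits}/6={hits/6:.3f}  P={hits}/{n}={hits/n:.2f}")
```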
![Page 34: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/34.jpg)
A precision-recall curve
[Figure: a sawtooth precision-recall curve, with precision (0.0 to 1.0) on the y-axis and recall (0.0 to 1.0) on the x-axis.]
![Page 35: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/35.jpg)
Interpolating a Recall/Precision Curve
Interpolate a precision value for each standard recall level r_j ∈ {0.0, 0.1, 0.2, …, 1.0}, i.e., r_0 = 0.0, r_1 = 0.1, …, r_10 = 1.0. The interpolated precision at the j-th standard recall level is the maximum known precision at any recall level at or above the j-th level:

$$P(r_j) = \max_{r \ge r_j} P(r)$$
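A sketch of the rule, applied to the points from the earlier example:

```python
def interpolate_11pt(points):
    """points: list of (recall, precision) pairs from one query."""
    out = []
    for j in range(11):
        rj = j / 10
        candidates = [p for r, p in points if r >= rj]
        out.append(max(candidates) if candidates else 0.0)
    return out

pts = [(0.167, 1.0), (0.333, 1.0), (0.5, 0.75), (0.667, 0.667), (0.833, 0.38)]
print(interpolate_11pt(pts))
# [1.0, 1.0, 1.0, 1.0, 0.75, 0.75, 0.667, 0.38, 0.38, 0.0, 0.0]
```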
![Page 36: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/36.jpg)
Average Recall/Precision Curve
Typically average performance over a large set of queries.
Compute average precision at each standard recall level across all queries.
Plot average precision/recall curves to evaluate overall system performance on a document/query corpus.
![Page 37: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/37.jpg)
Evaluation Metrics (cont'd)
Graphs are good, but people want summary measures!
- Precision at fixed retrieval level (precision-at-k): precision of the top k results. Perhaps appropriate for most of web search, since all people want are good matches on the first one or two results pages. But it averages badly and has an arbitrary parameter k.
- 11-point interpolated average precision: the standard measure in the early TREC competitions. Take the interpolated precision at 11 recall levels varying from 0 to 1 by tenths (the value for recall 0 is always interpolated) and average them. This evaluates performance at all recall levels.
![Page 38: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/38.jpg)
Typical (good) 11 point precisions
[Figure: SabIR/Cornell 8A1 11-point precision/recall curve from TREC 8 (1999); precision vs. recall, both 0 to 1.]
![Page 39: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/39.jpg)
11 point precisions
[Figure: another 11-point interpolated precision/recall graph; precision (%) vs. recall (%).]
![Page 40: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/40.jpg)
Receiver Operating Characteristics (ROC) Curve
- True positive rate = tp/(tp + fn) = recall = sensitivity.
- False positive rate = fp/(tn + fp). Related to precision: fpr = 0 ↔ p = 1.
- Why is the blue line "worthless"?
![Page 41: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/41.jpg)
Mean average precision (MAP)
- Average precision for a query: the average of the precision values at each of the (k top) relevant documents retrieved. This approach weights early appearance of a relevant document over later appearance.
- MAP for a query collection is the arithmetic average of average precision over the queries: macro-averaging, so each query counts equally.
![Page 42: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/42.jpg)
Average Precision
![Page 43: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/43.jpg)
Mean Average Precision (MAP)
- Summarizes rankings from multiple queries by averaging average precision.
- The most commonly used measure in research papers.
- Assumes the user is interested in finding many relevant documents for each query.
- Requires many relevance judgments in the text collection.
![Page 44: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/44.jpg)
MAP
![Page 45: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/45.jpg)
Summarize a Ranking: MAP
Given that n docs are retrieved:
- Compute the precision at the rank where each (new) relevant document is retrieved, giving p(1), …, p(k) if we have k relevant docs. E.g., if the first relevant doc is at rank 2, then p(1) = 1/2.
- If a relevant document never gets retrieved, we take the precision corresponding to that relevant doc to be zero.
- Compute the average over all the relevant documents: average precision = (p(1) + … + p(k))/k.
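A sketch of average precision and MAP as defined above (never-retrieved relevant docs contribute zero because we divide by the total number of relevant docs):

```python
def average_precision(ranking, relevant):
    hits, total = 0, 0.0
    for n, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / n          # precision at this relevant doc's rank
    return total / len(relevant)       # missed relevant docs contribute 0

def mean_average_precision(runs):
    """runs: list of (ranking, relevant) pairs, one per query."""
    return sum(average_precision(r, rel) for r, rel in runs) / len(runs)

print(average_precision([1, 2, 3, 4], {1, 3, 9}))  # (1/1 + 2/3 + 0)/3 = 0.556
```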
![Page 46: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/46.jpg)
(cont'd)
This gives us (non-interpolated) average precision, which captures both precision and recall and is sensitive to the rank of each relevant document.
- MAP: the arithmetic mean of average precision over a set of topics.
- gMAP: the geometric mean of average precision over a set of topics (more affected by difficult topics).
![Page 47: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/47.jpg)
Discounted Cumulative Gain
A popular measure for evaluating web search and related tasks. Two assumptions:
- Highly relevant documents are more useful than marginally relevant documents.
- The lower the ranked position of a relevant document, the less useful it is for the user, since it is less likely to be examined.
![Page 48: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/48.jpg)
Discounted Cumulative Gain
- Uses graded relevance as a measure of usefulness, or gain, from examining a document.
- Gain is accumulated starting at the top of the ranking and may be reduced, or discounted, at lower ranks.
- The typical discount is 1/log(rank): with base 2, the discount at rank 4 is 1/2, and at rank 8 it is 1/3.
![Page 49: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/49.jpg)
Summarize a Ranking: DCG
What if relevance judgments are on a scale of [1, r], with r > 2?
- Cumulative Gain (CG) at rank n: let the ratings of the n documents be r_1, r_2, …, r_n (in ranked order); then CG = r_1 + r_2 + … + r_n.
- Discounted Cumulative Gain (DCG) at rank n:

$$DCG_n = r_1 + \frac{r_2}{\log_2 2} + \frac{r_3}{\log_2 3} + \dots + \frac{r_n}{\log_2 n}$$

- We may use any base for the logarithm, e.g., base b.
![Page 50: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/50.jpg)
Discounted Cumulative Gain
DCG is the total gain accumulated at a particular rank p:

$$DCG_p = rel_1 + \sum_{i=2}^{p} \frac{rel_i}{\log_2 i}$$

An alternative formulation,

$$DCG_p = \sum_{i=1}^{p} \frac{2^{rel_i} - 1}{\log_2(1 + i)},$$

is used by some web search companies; it puts more emphasis on retrieving highly relevant documents.
DCG Example
10 ranked documents judged on a 0-3 relevance scale: 3, 2, 3, 0, 0, 1, 2, 2, 3, 0
Discounted gain: 3, 2/1, 3/1.59, 0, 0, 1/2.59, 2/2.81, 2/3, 3/3.17, 0 = 3, 2, 1.89, 0, 0, 0.39, 0.71, 0.67, 0.95, 0
DCG at each rank: 3, 5, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61
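A sketch that reproduces the numbers above, using the 1/log2(rank) discount with no discount at rank 1:

```python
import math

def dcg(gains):
    """DCG of a list of graded relevance values in ranked order."""
    return gains[0] + sum(g / math.log2(i) for i, g in enumerate(gains[1:], 2))

rels = [3, 2, 3, 0, 0, 1, 2, 2, 3, 0]
print([round(dcg(rels[:n]), 2) for n in range(1, len(rels) + 1)])
# [3, 5.0, 6.89, 6.89, 6.89, 7.28, 7.99, 8.66, 9.61, 9.61]
```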
![Page 52: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/52.jpg)
Summarize a Ranking: NDCG
- Normalized Discounted Cumulative Gain (NDCG) at rank n: normalize the DCG at rank n by the DCG value at rank n of the ideal ranking.
- The ideal ranking first returns the documents with the highest relevance level, then the next highest relevance level, etc.
- NDCG is now quite popular in evaluating web search.
![Page 53: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/53.jpg)
NDCG - Example
4 documents: d1, d2, d3, d4

| i | Ground truth: doc (r_i) | Ranking function 1: doc (r_i) | Ranking function 2: doc (r_i) |
|---|--------------------------|-------------------------------|-------------------------------|
| 1 | d4 (2) | d3 (2) | d3 (2) |
| 2 | d3 (2) | d4 (2) | d2 (1) |
| 3 | d2 (1) | d2 (1) | d4 (2) |
| 4 | d1 (0) | d1 (0) | d1 (0) |

$$DCG_{GT} = 2 + \frac{2}{\log_2 2} + \frac{1}{\log_2 3} + \frac{0}{\log_2 4} = 4.6309$$

$$DCG_{RF1} = 2 + \frac{2}{\log_2 2} + \frac{1}{\log_2 3} + \frac{0}{\log_2 4} = 4.6309$$

$$DCG_{RF2} = 2 + \frac{1}{\log_2 2} + \frac{2}{\log_2 3} + \frac{0}{\log_2 4} = 4.2619$$

$$MaxDCG = DCG_{GT} = 4.6309$$

Hence NDCG_GT = 1.00, NDCG_RF1 = 1.00, and NDCG_RF2 = 4.2619/4.6309 = 0.9203.
![Page 54: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/54.jpg)
NDCG - Example
Graded ranking with gains (4, 2, 0, 1), logs base 2:

DCG = 4 + 2/log(2) + 0/log(3) + 1/log(4) = 6.5
IDCG = 4 + 2/log(2) + 1/log(3) + 0/log(4) = 6.63
NDCG = DCG/IDCG = 6.5/6.63 = 0.98
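A sketch of NDCG that reproduces both examples (the dcg helper repeats the earlier sketch so this block runs on its own):

```python
import math

def dcg(gains):
    return gains[0] + sum(g / math.log2(i) for i, g in enumerate(gains[1:], 2))

def ndcg(gains):
    ideal = sorted(gains, reverse=True)      # relevance-sorted = ideal ranking
    return dcg(gains) / dcg(ideal)

print(round(ndcg([2, 2, 1, 0]), 4))  # ranking function 1: 1.0
print(round(ndcg([2, 1, 2, 0]), 4))  # ranking function 2: 0.9203
print(round(ndcg([4, 2, 0, 1]), 2))  # second example: 0.98
```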
![Page 55: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/55.jpg)
R-Precision
Precision at the R-th position in the ranking of results for a query that has R relevant documents.
Using the ranked list from the earlier example, R = # of relevant docs = 6, and 4 of the top 6 results (588, 589, 590, 592) are relevant, so R-Precision = 4/6 = 0.67.
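R-precision in code, a minimal sketch run on the example's ranked list (999 again stands in for the never-retrieved relevant doc):

```python
def r_precision(ranking, relevant):
    R = len(relevant)
    return sum(1 for doc in ranking[:R] if doc in relevant) / R

ranking = [588, 589, 576, 590, 986, 592, 984, 988, 578, 985, 103, 591, 772, 990]
print(round(r_precision(ranking, {588, 589, 590, 592, 772, 999}), 2))  # 0.67
```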
![Page 56: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/56.jpg)
Variance
For a test collection, it is usual that a system does crummily on some information needs (e.g., MAP = 0.1) and excellently on others (e.g., MAP = 0.7). Indeed, it is usually the case that the variance in performance of the same system across queries is much greater than the variance of different systems on the same query. That is, there are easy information needs and hard ones!
![Page 57: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/57.jpg)
Test Collections
![Page 58: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/58.jpg)
Creating Test Collections for IR Evaluation
![Page 59: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/59.jpg)
From document collections to test collections
Still needed: test queries and relevance assessments.
- Test queries must be germane to the docs available, and are best designed by domain experts; random query terms are generally not a good idea.
- Relevance assessments need human judges, which is time-consuming. And are human panels perfect?
![Page 60: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/60.jpg)
Can we avoid human judgment?
Not really, and this makes experimental work hard, especially on a large scale. In some very specific settings we can use proxies. But once we have test collections, we can reuse them (so long as we don't overtrain too badly). An example follows: approximate vector space retrieval.
![Page 61: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/61.jpg)
Approximate vector retrieval
- Let G(q) be the "ground truth": the actual k closest docs for query q.
- Let A(q) be the k docs returned by approximate algorithm A on query q.
- For performance we would measure A(q) ∩ G(q). Is this the right measure?
![Page 62: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/62.jpg)
Alternative proposal
Focus instead on how good A(q) is relative to G(q). Goodness can be measured here by cosine proximity to q: we sum up q·d over d ∈ A(q) and compare this to the sum of q·d over d ∈ G(q). This yields a measure of the relative "goodness" of A vis-à-vis G. Thus A may be 90% "as good as" the ground-truth G without finding 90% of the docs in G. For scored retrieval, this may be acceptable: most web engines don't always return the same answers for a given query.
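A sketch of the relative-goodness ratio, assuming score(q, d) is whatever similarity the engine ranks by (here a toy dot product over made-up sparse term-weight dicts):

```python
def relative_goodness(q, approx_docs, truth_docs, score):
    """Fraction of the true top-k's total similarity that A(q) captures."""
    return (sum(score(q, d) for d in approx_docs)
            / sum(score(q, d) for d in truth_docs))

score = lambda q, d: sum(w * d.get(t, 0.0) for t, w in q.items())
q = {"wine": 1.0, "heart": 0.5}
G = [{"wine": 0.9, "heart": 0.4}, {"wine": 0.8}]    # true top-2
A = [{"wine": 0.9, "heart": 0.4}, {"wine": 0.6}]    # approximate top-2
print(round(relative_goodness(q, A, G, score), 2))  # 0.89, though A != G
```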
![Page 63: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/63.jpg)
Kappa measure for inter-judge (dis)agreement
The Kappa measure:
- An agreement measure among judges, designed for categorical judgments; it corrects for chance agreement.
- Kappa = [P(A) - P(E)] / [1 - P(E)], where P(A) is the proportion of the time the judges agree and P(E) is what agreement would be by chance.
- Kappa = 0 for chance agreement, 1 for total agreement.
![Page 64: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/64.jpg)
Kappa Measure: Example
| Number of docs | Judge 1 | Judge 2 |
|----------------|-------------|-------------|
| 300 | Relevant | Relevant |
| 70 | Nonrelevant | Nonrelevant |
| 20 | Relevant | Nonrelevant |
| 10 | Nonrelevant | Relevant |

P(A)? P(E)?
![Page 65: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/65.jpg)
Kappa Example
- P(A) = 370/400 = 0.925
- P(nonrelevant) = (10 + 20 + 70 + 70)/800 = 0.2125
- P(relevant) = (10 + 20 + 300 + 300)/800 = 0.7875
- P(E) = 0.2125² + 0.7875² = 0.665
- Kappa = (0.925 - 0.665)/(1 - 0.665) = 0.776

Kappa > 0.8 indicates good agreement; 0.67 < Kappa < 0.8 supports "tentative conclusions" (Carletta '96); it depends on the purpose of the study. For more than 2 judges: average pairwise kappas.
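A sketch reproducing the computation, parameterized by the four cell counts of the two-judge table:

```python
def kappa(rr, nn, rn, nr):
    """rr/nn: both judges agree; rn/nr: they disagree."""
    total = rr + nn + rn + nr
    p_agree = (rr + nn) / total
    # Chance agreement from the pooled marginals of both judges:
    p_rel = (2 * rr + rn + nr) / (2 * total)
    p_non = (2 * nn + rn + nr) / (2 * total)
    p_chance = p_rel ** 2 + p_non ** 2
    return (p_agree - p_chance) / (1 - p_chance)

print(round(kappa(300, 70, 20, 10), 3))  # 0.776
```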
![Page 66: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/66.jpg)
Other Evaluation Measures
Adapted from Slides Attributed to
Prof. Dik Lee (Univ. of Science and Tech, Hong Kong)
![Page 67: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/67.jpg)
Fallout Rate
Problems with both precision and recall:
- The number of irrelevant documents in the collection is not taken into account.
- Recall is undefined when there is no relevant document in the collection.
- Precision is undefined when no document is retrieved.

$$\text{Fallout} = \frac{\text{no. of nonrelevant items retrieved}}{\text{total no. of nonrelevant items in the collection}}$$
![Page 68: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/68.jpg)
Subjective Relevance Measure
- Novelty ratio: the proportion of items retrieved and judged relevant by the user of which they were previously unaware. Measures the ability to find new information on a topic.
- Coverage ratio: the proportion of relevant items retrieved out of the total relevant documents known to the user prior to the search. Relevant when the user wants to locate documents they have seen before (e.g., the budget report for Year 2000).
![Page 69: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/69.jpg)
Other Factors to Consider
- User effort: the work required from the user in formulating queries, conducting the search, and screening the output.
- Response time: the interval between receipt of a user query and the presentation of system responses.
- Form of presentation: the influence of the search output format on the user's ability to utilize the retrieved materials.
- Collection coverage: the extent to which any/all relevant items are included in the document corpus.
![Page 70: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/70.jpg)
SKIP DETAILS
![Page 71: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/71.jpg)
Early Test Collections
Previous experiments were based on the SMART collection, which is fairly small (ftp://ftp.cs.cornell.edu/pub/smart):

| Collection | Documents | Queries | Raw Size (MB) |
|------------|-----------|---------|---------------|
| CACM | 3,204 | 64 | 1.5 |
| CISI | 1,460 | 112 | 1.3 |
| CRAN | 1,400 | 225 | 1.6 |
| MED | 1,033 | 30 | 1.1 |
| TIME | 425 | 83 | 1.5 |

Different researchers used different test collections and evaluation techniques.
![Page 72: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/72.jpg)
Critique of pure relevance
Relevance vs. marginal relevance:
- A document can be redundant even if it is highly relevant: duplicates, or the same information from different sources.
- Marginal relevance is a better measure of utility for the user.
- Using facts/entities as evaluation units more directly measures true relevance, but it is harder to create the evaluation set.
![Page 73: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/73.jpg)
Evaluation at large search engines
- Search engines have test collections of queries and hand-ranked results.
- Recall is difficult to measure on the web, so search engines often use precision at top k (e.g., k = 10), or measures that reward you more for getting rank 1 right than for getting rank 10 right, such as NDCG (Normalized Discounted Cumulative Gain).
- Search engines also use non-relevance-based measures, e.g., clickthrough on the first result: not very reliable if you look at a single clickthrough, but pretty reliable in the aggregate.
- Studies of user behavior in the lab.
- A/B testing.
![Page 74: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/74.jpg)
A/B testing
- Purpose: test a single innovation.
- Prerequisite: you have a large search engine up and running.
- Have most users use the old system, and divert a small proportion of traffic (e.g., 1%) to the new system that includes the innovation.
- Evaluate with an "automatic" measure like clickthrough on the first result.
- Now we can directly see if the innovation improves user happiness.
- Probably the evaluation methodology that large search engines trust most.
- In principle less powerful than doing a multivariate regression analysis, but easier to understand.
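As an illustrative follow-up (not from the slides), aggregate clickthrough counts from the two buckets can be compared with a simple two-proportion z-test; all counts below are made up:

```python
import math

def ab_z_score(clicks_a, n_a, clicks_b, n_b):
    """z-score for the difference in first-result clickthrough rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Control bucket A vs. innovation bucket B:
print(round(ab_z_score(4_200, 10_000, 4_400, 10_000), 2))
# 2.86 > 1.96, so significant at the 5% level
```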
![Page 75: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/75.jpg)
TREC Benchmarks
![Page 76: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/76.jpg)
The TREC Benchmark
- TREC: Text REtrieval Conference (http://trec.nist.gov/). Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA).
- Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA.
- Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing.
- Participants submit the P/R values for the final document and query corpus and present their results at the conference.
![Page 77: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/77.jpg)
The TREC Objectives
- Provide a common ground for comparing different IR techniques: the same set of documents and queries, and the same evaluation method.
- Share resources and experiences in developing the benchmark, with major sponsorship from government to develop large benchmark collections.
- Encourage participation from industry and academia.
- Develop new evaluation techniques, particularly for new applications: retrieval, routing/filtering, non-English collections, web-based collections, question answering.
![Page 78: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/78.jpg)
TREC Advantages
- Large scale (compared to a few MB in the SMART collection).
- Relevance judgments provided.
- Under continuous development with support from the U.S. Government.
- Wide participation: TREC 1: 28 papers, 360 pages; TREC 4: 37 papers, 560 pages; TREC 7: 61 papers, 600 pages; TREC 8: 74 papers.
![Page 79: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/79.jpg)
TREC Tasks
- Ad hoc: new questions are asked against a static set of data.
- Routing: the same questions are asked, but new information is being searched (news clipping, library profiling).
- New tasks added after TREC 5: interactive, multilingual, natural language, multiple-database merging, filtering, very large corpus (20 GB, 7.5 million documents), question answering.
![Page 80: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/80.jpg)
TREC
- The TREC ad hoc task from the first 8 TRECs is the standard IR task: 50 detailed information needs a year, with human evaluation of pooled results returned.
- More recently, other related tracks: Web track, HARD.

A TREC query (TREC 5):

<top>
<num> Number: 225
<desc> Description:
What is the main function of the Federal Emergency Management Agency (FEMA) and the funding level provided to meet emergencies? Also, what resources are available to FEMA such as people, equipment, facilities?
</top>
![Page 81: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/81.jpg)
Standard relevance benchmarks: Others
- GOV2: another TREC/NIST collection; 25 million web pages. The largest collection that is easily available, but still 3 orders of magnitude smaller than what Google/Yahoo/MSN index.
- NTCIR: East Asian language and cross-language information retrieval.
- Cross Language Evaluation Forum (CLEF): this evaluation series has concentrated on European languages and cross-language information retrieval.
![Page 82: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/82.jpg)
Characteristics of the TREC Collection
Both long and short documents (from a few hundred to over one thousand unique terms per document). Test documents consist of:

| Tag | Source | Size |
|------|--------|------|
| WSJ | Wall Street Journal articles (1986-1992) | 550 MB |
| AP | Associated Press Newswire (1989) | 514 MB |
| ZIFF | Computer Select disks (Ziff-Davis Publishing) | 493 MB |
| FR | Federal Register | 469 MB |
| DOE | Abstracts from Department of Energy reports | 190 MB |
![Page 83: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/83.jpg)
More Details on Document Collections
Volume 1 (Mar 1994) - Wall Street Journal (1987, 1988, 1989), Federal Register (1989), Associated Press (1989), Department of Energy abstracts, and Information from the Computer Select disks (1989, 1990)
Volume 2 (Mar 1994) - Wall Street Journal (1990, 1991, 1992), the Federal Register (1988), Associated Press (1988) and Information from the Computer Select disks (1989, 1990)
Volume 3 (Mar 1994) - San Jose Mercury News (1991), the Associated Press (1990), U.S. Patents (1983-1991), and Information from the Computer Select disks (1991, 1992)
Volume 4 (May 1996) - Financial Times Limited (1991, 1992, 1993, 1994), the Congressional Record of the 103rd Congress (1993), and the Federal Register (1994).
Volume 5 (Apr 1997) - Foreign Broadcast Information Service (1996) and the Los Angeles Times (1989, 1990).
![Page 84: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/84.jpg)
TREC Disks 4 and 5
TREC Disk 4:
- Congressional Record of the 103rd Congress: approx. 30,000 documents, approx. 235 MB
- Federal Register (1994): approx. 55,000 documents, approx. 395 MB
- Financial Times (1992-1994): approx. 210,000 documents, approx. 565 MB

TREC Disk 5:
- Data provided from the Foreign Broadcast Information Service: approx. 130,000 documents, approx. 470 MB
- Los Angeles Times (randomly selected articles from 1989 & 1990): approx. 130,000 documents, approx. 475 MB
![Page 85: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/85.jpg)
Sample Document (with SGML)

<DOC>
<DOCNO> WSJ870324-0001 </DOCNO>
<HL> John Blair Is Near Accord To Sell Unit, Sources Say </HL>
<DD> 03/24/87 </DD>
<SO> WALL STREET JOURNAL (J) </SO>
<IN> REL TENDER OFFERS, MERGERS, ACQUISITIONS (TNM) MARKETING, ADVERTISING (MKT) TELECOMMUNICATIONS, BROADCASTING, TELEPHONE, TELEGRAPH (TEL) </IN>
<DATELINE> NEW YORK </DATELINE>
<TEXT>
John Blair & Co. is close to an agreement to sell its TV station advertising representation operation and program production unit to an investor group led by James H. Rosenfield, a former CBS Inc. executive, industry sources said. Industry sources put the value of the proposed acquisition at more than $100 million. ...
</TEXT>
</DOC>
![Page 86: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/86.jpg)
Sample Query (with SGML)

<top>
<head> Tipster Topic Description
<num> Number: 066
<dom> Domain: Science and Technology
<title> Topic: Natural Language Processing
<desc> Description: Document will identify a type of natural language processing technology which is being developed or marketed in the U.S.
<narr> Narrative: A relevant document will identify a company or institution developing or marketing a natural language processing technology, identify the technology, and identify one or more features of the company's product.
<con> Concept(s): 1. natural language processing; 2. translation, language, dictionary
<fac> Factor(s): <nat> Nationality: U.S. </nat> </fac>
<def> Definition(s):
</top>
![Page 87: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/87.jpg)
TREC Properties
- Both documents and queries contain many different kinds of information (fields).
- Generation of the formal queries (Boolean, vector space, etc.) is the responsibility of the system. A system may be very good at querying and ranking, but if it generates poor queries from the topic, its final P/R will be poor.
![Page 88: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/88.jpg)
Two more TREC Document Examples
![Page 89: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/89.jpg)
Another Example of TREC Topic/Query
![Page 90: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/90.jpg)
Evaluation
- Summary table statistics: number of topics, number of documents retrieved, number of relevant documents.
- Recall-precision average: average precision at 11 recall levels (0 to 1 at 0.1 increments).
- Document level average: average precision when 5, 10, …, 100, …, 1000 documents are retrieved.
- Average precision histogram: the difference between the R-precision for each topic and the average R-precision of all systems for that topic.
![Page 91: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/91.jpg)
![Page 92: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/92.jpg)
Cystic Fibrosis (CF) Collection
- 1,239 abstracts of medical journal articles on CF.
- 100 information requests (queries) in the form of complete English questions.
- Relevant documents determined and rated by 4 separate medical experts on a 0-2 scale: 0 = not relevant, 1 = marginally relevant, 2 = highly relevant.
![Page 93: Evaluation of IR Systems](https://reader033.fdocuments.net/reader033/viewer/2022061520/56814376550346895daff4e8/html5/thumbnails/93.jpg)
CF Document Fields
- MEDLINE access number
- Author
- Title
- Source
- Major subjects
- Minor subjects
- Abstract (or extract)
- References to other documents
- Citations to this document