1
A Statistical Analysis of the Precision-Recall Graph
Ralf Herbrich
Microsoft Research
UK
Joint work with Hugo Zaragoza and Simon Hill
2
Overview
The Precision-Recall Graph
A Stability Analysis
Main Result
Discussion and Applications
Conclusions
3
Features of Ranking Learning
We cannot take differences of ranks.
We cannot ignore the order of ranks.
Point-wise loss functions do not capture the ranking performance!
ROC or precision-recall curves do capture the ranking performance.
We need generalisation error bounds for ROC and precision-recall curves!
4
Precision and Recall
Given: a sample z = ((x_1, y_1), …, (x_m, y_m)) ∈ (X × {0,1})^m with k positive y_i, together with a function f: X → ℝ.
Ranking the sample:
Re-order the sample so that f(x_(1)) ≥ ⋯ ≥ f(x_(m)).
Record the indices i_1, …, i_k of the positive y_(j).
Precision p_j and recall r_j at the j-th positive example:
p_j = j / i_j,  r_j = j / k.
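The definitions above can be sketched in code. This is a minimal illustration, not from the slides; the scores and labels are made-up data.

```python
def precision_recall(scores, labels):
    """Precision p_j and recall r_j at each positive example, in ranked order."""
    # Re-order the sample so that f(x_(1)) >= ... >= f(x_(m)).
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranked = [labels[i] for i in order]
    k = sum(labels)                          # number of positive examples
    precisions, recalls = [], []
    seen = 0
    for rank, y in enumerate(ranked, start=1):
        if y == 1:                           # i_j = rank of the j-th positive
            seen += 1
            precisions.append(seen / rank)   # p_j = j / i_j
            recalls.append(seen / k)         # r_j = j / k
    return precisions, recalls

# Illustrative data: three positives among five examples.
scores = [0.9, 0.8, 0.7, 0.6, 0.5]
labels = [1,   0,   1,   0,   1]
p, r = precision_recall(scores, labels)
# p == [1.0, 2/3, 3/5], r == [1/3, 2/3, 1.0]
```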
5
Precision-Recall: An Example
After reordering: [figure: the sorted scores f(x_(i)), with the positions of the positive examples marked in the ranking]
6
Break-Even Point
[Figure: precision-recall curve; x-axis Recall, y-axis Precision, both from 0 to 1; the break-even point is marked where precision equals recall.]
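On a finite sample the break-even point can be taken as the point on the precision-recall curve where precision and recall are closest. A minimal sketch, reusing illustrative values (not from the slides):

```python
# Illustrative precision/recall values at the three positive examples.
precisions = [1.0, 2/3, 3/5]
recalls    = [1/3, 2/3, 1.0]

# Break-even point: the curve point where |p_j - r_j| is smallest.
j = min(range(len(precisions)),
        key=lambda i: abs(precisions[i] - recalls[i]))
break_even = precisions[j]   # here p_2 = r_2 = 2/3
```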
7
Average Precision
[Figure: precision-recall curve; x-axis Recall, y-axis Precision, both from 0 to 1; average precision is the average of the precisions at the positive examples.]
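Average precision averages the precisions p_j = j / i_j over the k positive examples. A minimal sketch with illustrative data (not from the slides):

```python
def average_precision(scores, labels):
    """A(f, z) = (1/k) * sum_j p_j, with p_j = j / i_j."""
    # Rank the sample by descending score.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranked = [labels[i] for i in order]
    k = sum(labels)
    total, seen = 0.0, 0
    for rank, y in enumerate(ranked, start=1):
        if y == 1:
            seen += 1
            total += seen / rank      # accumulate p_j = j / i_j
    return total / k

ap = average_precision([0.9, 0.8, 0.7, 0.6, 0.5], [1, 0, 1, 0, 1])
# ap == (1 + 2/3 + 3/5) / 3
```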
8
A Stability Analysis: Questions
1. How much does A(f, z) change if we alter one example (x_i, y_i)?
2. How much does A(f, ·) change if we alter z?
We will assume that the number of positive examples, k, has to remain constant:
we can only alter x_i, or rotate one label y_(i).
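Question 1 can be illustrated numerically. This hedged sketch (assumed data, not from the slides) measures how much average precision moves when one label rotation is applied while k stays constant:

```python
def average_precision(scores, labels):
    """A(f, z) = (1/k) * sum_j j / i_j over the ranks i_j of the positives."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranked = [labels[i] for i in order]
    k = sum(labels)
    total, seen = 0.0, 0
    for rank, y in enumerate(ranked, start=1):
        if y == 1:
            seen += 1
            total += seen / rank
    return total / k

scores  = [0.9, 0.8, 0.7, 0.6, 0.5]
labels  = [1, 0, 1, 0, 1]
# "Rotate" one labelling: swap a negative and a positive label; k is still 3.
altered = [1, 1, 1, 0, 0]

delta = abs(average_precision(scores, labels)
            - average_precision(scores, altered))
# The altered labelling ranks all positives first, so its A(f, z) is 1.0.
```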
9
Stability Analysis
Case 1: y_i = 0
Case 2: y_i = 1
10
Proof
Case 1: y_i = 0
Case 2: y_i = 1
11
Main Result
Theorem: For all probability measures, for all α > 1/m, and for all f: X → ℝ, with probability at least 1 − δ over the IID draw of a training and a test sample, both of size m: if both the training sample z and the test sample z̃ contain at least ⌈αm⌉ positive examples, then the training and test average precisions A(f, z) and A(f, z̃) differ by at most a term of order √(ln(1/δ) / (α²m)).
12
Proof
1. McDiarmid's inequality: for any function g: Zⁿ → ℝ with stability c, for all probability measures P, with probability at least 1 − δ over the IID draw of Z.
2. Set n = 2m and call the two m-halves Z_1 and Z_2. Define g_i(Z) := A(f, Z_i). Then, by IID.
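The inequality invoked in step 1 can be written out; the following is the standard bounded-differences form of McDiarmid's inequality, with the stability constant c as on the slide:

```latex
% If g: Z^n -> R satisfies the stability (bounded-differences) condition
%   |g(z_1,...,z_i,...,z_n) - g(z_1,...,z_i',...,z_n)| <= c
% for every coordinate i, then for all eps > 0:
\Pr\bigl( g(Z) - \mathbb{E}[g(Z)] \ge \varepsilon \bigr)
  \le \exp\!\left( -\frac{2\varepsilon^{2}}{n c^{2}} \right)
```

Setting the right-hand side to δ and solving for ε gives the deviation term that holds with probability at least 1 − δ.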
13
Discussions
First bound which shows that, asymptotically (m → ∞), training and test set performance (in terms of average precision) converge!
The effective sample size is only the number of positive examples; in fact, only α²m.
The proof can be generalised to arbitrary test sample sizes.
The constants can be improved.
14
Applications
Cardinality bounds
Compression bounds (TREC 2002)
No VC bounds! No margin bounds!
Union bound:
15
Conclusions
Ranking learning requires considering non-point-wise loss functions.
In order to study the complexity of algorithms we need to have large deviation inequalities for ranking performance measures.
McDiarmid’s inequality is a powerful tool. Future work is focused on ROC curves.