EPL660: DATA CLASSIFICATION


Transcript of EPL660: DATA CLASSIFICATION

Page 1: EPL660: DATA CLASSIFICATION


EPL660: DATA CLASSIFICATION

Slides by Manning, Raghavan, Schütze

Page 2: EPL660: DATA CLASSIFICATION


Relevance feedback revisited
In relevance feedback, the user marks a few documents as relevant/nonrelevant.
The choices can be viewed as classes or categories: for several documents, the user decides which of these two classes is correct.
The IR system then uses these judgments to build a better model of the information need.
So, relevance feedback can be viewed as a form of text classification (deciding between several classes).
The notion of classification is very general and has many applications within and beyond IR.

Page 3: EPL660: DATA CLASSIFICATION


Standing queries: the path from IR to text classification
You have an information need to monitor, say: unrest in the Niger delta region.
You want to rerun an appropriate query periodically to find new news items on this topic.
You will be sent new documents that are found, i.e., it's text classification, not ranking.
Such queries are called standing queries. Long used by "information professionals"; a modern mass instantiation is Google Alerts.
Standing queries are (hand-written) text classifiers.

Ch. 13

Page 4: EPL660: DATA CLASSIFICATION

Spam filtering: another text classification task

From: "" <[email protected]>
Subject: real estate is the only way... gem oalvgkay

Anyone can buy real estate with no money down

Stop paying rent TODAY !

There is no need to spend hundreds or even thousands for similar courses

I am 22 years old and I have already purchased 6 properties using the methods outlined in this truly INCREDIBLE ebook.

Change your life NOW !

=================================================
Click Below to order: http://www.wholesaledaily.com/sales/nmd.htm
=================================================

Ch. 13

Page 5: EPL660: DATA CLASSIFICATION

Text classification Today:

Introduction to Text Classification Also widely known as “text categorization”

Naive Bayes text classification Including a little on Probabilistic Language Models

Ch. 13

Page 6: EPL660: DATA CLASSIFICATION

The rest of text classification. Today:
Vector space methods for text classification:
Vector space classification using centroids (Rocchio)
K Nearest Neighbors
Decision boundaries, linear and nonlinear classifiers
Dealing with more than 2 classes
Later in the course: more text classification
Support Vector Machines
Text-specific issues in classification

Page 7: EPL660: DATA CLASSIFICATION

Categorization/Classification. Given:
A description of an instance, d ∈ X, where X is the instance language or instance space.
Issue: how to represent text documents. Usually some type of high-dimensional space.
A fixed set of classes: C = {c1, c2, …, cJ}
Determine:
The category of d: γ(d) ∈ C, where γ(d) is a classification function whose domain is X and whose range is C.
We want to know how to build classification functions ("classifiers").

Sec. 13.1

Page 8: EPL660: DATA CLASSIFICATION

Supervised Classification. Given:
A description of an instance, d ∈ X, where X is the instance language or instance space.
A fixed set of classes: C = {c1, c2, …, cJ}
A training set D of labeled documents, with each labeled document ⟨d,c⟩ ∈ X×C
Determine:
A learning method or algorithm which will enable us to learn a classifier γ: X → C
For a test document d, we assign it the class γ(d) ∈ C

Sec. 13.1

Page 9: EPL660: DATA CLASSIFICATION

Document Classification (example)
Classes: AI (ML, Planning), Programming (Semantics, Garb. Coll.), HCI (Multimedia, GUI)
Training data, e.g.:
  ML: learning, intelligence, algorithm, reinforcement, network, ...
  Planning: planning, temporal, reasoning, plan, language, ...
  Semantics: programming, semantics, language, proof, ...
  Garb. Coll.: garbage, collection, memory, optimization, region, ...
Test data: "planning language proof intelligence"

(Note: in real life there is often a hierarchy, not present in the above problem statement; and also, you get papers on ML approaches to Garb. Coll.)

Sec. 13.1

Page 10: EPL660: DATA CLASSIFICATION

More Text Classification Examples
Many search engine functionalities use classification.
Assigning labels to documents or web-pages:
Labels are most often topics such as Yahoo-categories: "finance," "sports," "news>world>asia>business"
Labels may be genres: "editorials", "movie-reviews", "news"
Labels may be opinion on a person/product: "like", "hate", "neutral"
Labels may be domain-specific: "interesting-to-me" vs. "not-interesting-to-me"; "contains adult language" vs. "doesn't"; language identification: English, French, Chinese, …; search vertical: about Linux versus not; "link spam" vs. "not link spam"

Ch. 13

Page 11: EPL660: DATA CLASSIFICATION

Classification Methods (1): Manual classification
Used by the original Yahoo! Directory, Looksmart, about.com, ODP, PubMed
Very accurate when the job is done by experts
Consistent when the problem size and team are small
Difficult and expensive to scale

Means we need automatic classification methods for big problems

Ch. 13

Page 12: EPL660: DATA CLASSIFICATION

Classification Methods (2): Automatic document classification
Hand-coded rule-based systems
One technique used by CS departments' spam filters, Reuters, CIA, etc.
It's what Google Alerts is doing
Widely deployed in government and enterprise
Companies provide an "IDE" for writing such rules
E.g., assign category if document contains a given boolean combination of words
Standing queries: commercial systems have complex query languages (everything in IR query languages + score accumulators)
Accuracy is often very high if a rule has been carefully refined over time by a subject expert
Building and maintaining these rules is expensive

Ch. 13

Page 13: EPL660: DATA CLASSIFICATION

A Verity topic: a complex classification rule
Note: maintenance issues (author, etc.)
Hand-weighting of terms

[Verity was bought by Autonomy.]

Ch. 13

Page 14: EPL660: DATA CLASSIFICATION

Classification Methods (3): Supervised learning of a document-label assignment function
Many systems partly rely on machine learning (Autonomy, Microsoft, Enkata, Yahoo!, Google News, …)
k-Nearest Neighbors (simple, powerful)
Naive Bayes (simple, common method)
Support vector machines (new, more powerful)
… plus many other methods
No free lunch: requires hand-classified training data
But data can be built up (and refined) by amateurs

Many commercial systems use a mixture of methods

Ch. 13

Page 15: EPL660: DATA CLASSIFICATION

Probabilistic relevance feedback
Rather than reweighting in a vector space…
If the user has told us some relevant and some irrelevant documents, then we can proceed to build a probabilistic classifier, such as the Naive Bayes model we will look at today:
P(tk|R) = |Drk| / |Dr|
P(tk|NR) = |Dnrk| / |Dnr|
where tk is a term; Dr is the set of known relevant documents; Drk is the subset that contain tk; Dnr is the set of known irrelevant documents; Dnrk is the subset that contain tk.

Sec. 9.1.2
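As a concrete illustration, a minimal Python sketch of these two estimates (the function and variable names are illustrative, not from the slides):

```python
# Sketch: estimating P(t_k|R) and P(t_k|NR) from user-marked documents.
# 'relevant' and 'nonrelevant' are assumed to be lists of documents,
# each document represented as a set of its terms (hypothetical inputs).
def feedback_estimates(term, relevant, nonrelevant):
    Dr, Dnr = len(relevant), len(nonrelevant)
    Drk = sum(1 for d in relevant if term in d)       # relevant docs containing the term
    Dnrk = sum(1 for d in nonrelevant if term in d)   # nonrelevant docs containing the term
    p_rel = Drk / Dr if Dr else 0.0                   # P(t_k | R)  = |Drk| / |Dr|
    p_nonrel = Dnrk / Dnr if Dnr else 0.0             # P(t_k | NR) = |Dnrk| / |Dnr|
    return p_rel, p_nonrel
```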

Page 16: EPL660: DATA CLASSIFICATION

Recall a few probability basics. For events a and b:

Bayes' Rule:
p(a,b) = p(a ∩ b) = p(a|b) p(b) = p(b|a) p(a)
p(a|b) = p(b|a) p(a) / p(b),  where p(b) = Σ_{x ∈ {a, ā}} p(b|x) p(x)
Here p(a) is the prior and p(a|b) the posterior.

Odds:
O(a) = p(a) / p(ā) = p(a) / (1 − p(a))

Page 17: EPL660: DATA CLASSIFICATION

Probabilistic Methods: our focus this lecture
Learning and classification methods based on probability theory.
Bayes' theorem plays a critical role in probabilistic learning and classification.
Builds a generative model that approximates how data is produced.
Uses the prior probability of each category given no information about an item.
Categorization produces a posterior probability distribution over the possible categories given a description of an item.

Sec.13.2

Page 18: EPL660: DATA CLASSIFICATION

Bayes' Rule for text classification. For a document d and a class c:
P(c,d) = P(c|d) P(d) = P(d|c) P(c)
P(c|d) = P(d|c) P(c) / P(d)

Sec.13.2

Page 19: EPL660: DATA CLASSIFICATION

Naive Bayes Classifiers

Task: Classify a new instance d based on a tuple of attribute values into one of the classes cj ∈ C
d = ⟨x1, x2, …, xn⟩

cMAP = argmax_{cj ∈ C} P(cj | x1, x2, …, xn)
     = argmax_{cj ∈ C} P(x1, x2, …, xn | cj) P(cj) / P(x1, x2, …, xn)
     = argmax_{cj ∈ C} P(x1, x2, …, xn | cj) P(cj)

Sec.13.2

MAP is "maximum a posteriori" = most likely class

Page 20: EPL660: DATA CLASSIFICATION


Naive Bayes Classifier: Naive Bayes Assumption
P(cj): can be estimated from the frequency of classes in the training examples.
P(x1, x2, …, xn | cj): O(|X|^n · |C|) parameters; could only be estimated if a very, very large number of training examples was available.
Naive Bayes Conditional Independence Assumption: assume that the probability of observing the conjunction of attributes is equal to the product of the individual probabilities P(xi|cj).

Sec.13.2

Page 21: EPL660: DATA CLASSIFICATION

The Naive Bayes Classifier
(Figure: class Flu with binary features X1–X5 for runny nose, sinus, cough, fever, and muscle-ache)
Conditional Independence Assumption: features detect term presence and are independent of each other given the class:
P(X1, …, X5 | C) = P(X1 | C) · P(X2 | C) · … · P(X5 | C)
This model is appropriate for binary variables: the multivariate Bernoulli model

Sec.13.3

Page 22: EPL660: DATA CLASSIFICATION

Learning the Model

First attempt: maximum likelihood estimates. Simply use the frequencies in the data:
P̂(cj) = N(C = cj) / N
P̂(xi | cj) = N(Xi = xi, C = cj) / N(C = cj)

Sec.13.3

Page 23: EPL660: DATA CLASSIFICATION

Problem with Maximum Likelihood

What if we have seen no training documents with the word muscle-ache and classified in the topic Flu?
P̂(X5 = t | C = Flu) = N(X5 = t, C = Flu) / N(C = Flu) = 0
Zero probabilities cannot be conditioned away, no matter the other evidence!
cMAX = argmax_c P̂(c) ∏i P̂(xi | c)

Sec.13.3

Page 24: EPL660: DATA CLASSIFICATION

Smoothing to Avoid Overfitting

Laplace (add-one) smoothing:
P̂(xi | cj) = [ N(Xi = xi, C = cj) + 1 ] / [ N(C = cj) + k ]
where k is the number of values of Xi.

Somewhat more subtle version:
P̂(xi,k | cj) = [ N(Xi = xi,k, C = cj) + m·p_{i,k} ] / [ N(C = cj) + m ]
where p_{i,k} is the overall fraction in the data where Xi = xi,k, and m is the extent of "smoothing".

Sec.13.3

Page 25: EPL660: DATA CLASSIFICATION

Stochastic Language Models

Model probability of generating strings (each word in turn) in a language (commonly all strings over alphabet ∑). E.g., a unigram model

Model M (a unigram model):
P(the) = 0.2   P(a) = 0.1   P(man) = 0.01   P(woman) = 0.01   P(said) = 0.03   P(likes) = 0.02

s = "the man likes the woman"
P(s | M) = 0.2 × 0.01 × 0.02 × 0.2 × 0.01 = 0.00000008   (multiply the word probabilities)

Sec.13.2.1
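A tiny Python sketch of this computation under the toy model above (illustrative code, not part of the original slides):

```python
# Toy unigram model M: each word is generated independently with a fixed probability.
model = {"the": 0.2, "a": 0.1, "man": 0.01, "woman": 0.01, "said": 0.03, "likes": 0.02}

def string_probability(words, model):
    p = 1.0
    for w in words:
        p *= model.get(w, 0.0)   # multiply the per-word probabilities
    return p

print(string_probability("the man likes the woman".split(), model))  # ~8e-08 = 0.00000008
```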

Page 26: EPL660: DATA CLASSIFICATION

Stochastic Language Models

Model probability of generating any string

Model M1: P(the) = 0.2, P(class) = 0.01, P(sayst) = 0.0001, P(pleaseth) = 0.0001, P(yon) = 0.0001, P(maiden) = 0.0005, P(woman) = 0.01
Model M2: P(the) = 0.2, P(class) = 0.0001, P(sayst) = 0.03, P(pleaseth) = 0.02, P(yon) = 0.1, P(maiden) = 0.01, P(woman) = 0.0001

s = "the class pleaseth yon maiden"
Under M1: 0.2 × 0.01 × 0.0001 × 0.0001 × 0.0005
Under M2: 0.2 × 0.0001 × 0.02 × 0.1 × 0.01
P(s|M2) > P(s|M1)

Sec.13.2.1

Page 27: EPL660: DATA CLASSIFICATION

Unigram and higher-order models

Unigram Language Models:
P(w1 w2 w3 w4) = P(w1) P(w2) P(w3) P(w4)   (easy, effective!)
Bigram (generally, n-gram) Language Models:
P(w1 w2 w3 w4) = P(w1) P(w2 | w1) P(w3 | w2) P(w4 | w3)
Other Language Models: grammar-based models (PCFGs), etc.
Probably not the first thing to try in IR

Sec.13.2.1

Page 28: EPL660: DATA CLASSIFICATION

Naive Bayes via a class conditional language model = multinomial NB

Effectively, the probability of each class is done as a class-specific unigram language model

(Figure: a class node C generating words w1 w2 w3 w4 w5 w6)

Sec.13.2

Page 29: EPL660: DATA CLASSIFICATION

Using Multinomial Naive Bayes Classifiers to Classify Text: Basic method

Attributes are text positions, values are words.

Still too many possibilities
Assume that classification is independent of the positions of the words
Use the same parameters for each position
Result is the bag of words model

cNB = argmax_{cj ∈ C} P(cj) ∏i P(xi | cj)
    = argmax_{cj ∈ C} P(cj) P(x1 = "our" | cj) ··· P(xn = "text" | cj)

Sec.13.2

Page 30: EPL660: DATA CLASSIFICATION

Naive Bayes: Learning
From the training corpus, extract Vocabulary
Calculate the required P(cj) and P(xk | cj) terms:
For each cj in C do
  docsj ← subset of documents for which the target class is cj
  P(cj) ← |docsj| / |total # documents|
  Textj ← single document containing all docsj
  for each word xk in Vocabulary
    nk ← number of occurrences of xk in Textj
    P(xk | cj) ← (nk + 1) / (n + |Vocabulary|), where n is the total number of word positions in Textj

Sec.13.2
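A compact Python sketch of this learning procedure for multinomial NB with add-one smoothing (the function and variable names are illustrative, not from the slides):

```python
from collections import Counter, defaultdict

def train_multinomial_nb(docs):
    """docs: list of (list_of_tokens, class_label) pairs."""
    vocabulary = {t for tokens, _ in docs for t in tokens}
    classes = {c for _, c in docs}
    prior, cond_prob = {}, defaultdict(dict)
    for c in classes:
        class_docs = [tokens for tokens, label in docs if label == c]
        prior[c] = len(class_docs) / len(docs)                 # P(c_j) = |docs_j| / |D|
        text_c = Counter(t for tokens in class_docs for t in tokens)  # mega-document counts
        n = sum(text_c.values())                               # total word positions in Text_j
        for t in vocabulary:
            # add-one (Laplace) smoothing: (n_k + 1) / (n + |Vocabulary|)
            cond_prob[c][t] = (text_c[t] + 1) / (n + len(vocabulary))
    return vocabulary, prior, cond_prob
```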

Page 31: EPL660: DATA CLASSIFICATION

Naive Bayes: Classifying

positions ← all word positions in the current document which contain tokens found in Vocabulary
Return cNB, where
cNB = argmax_{cj ∈ C} P(cj) ∏_{i ∈ positions} P(xi | cj)

Sec.13.2

Page 32: EPL660: DATA CLASSIFICATION

Naive Bayes: Time Complexity

Training Time: O(|D| Lave + |C||V|), where Lave is the average length of a document in D.
Assumes all counts are pre-computed in O(|D| Lave) time during one pass through all of the data.
Generally just O(|D| Lave), since usually |C||V| < |D| Lave

Test Time: O(|C| Lt) where Lt is the average length of a test document.

Very efficient overall, linearly proportional to the time needed to just read in all the data.

Sec.13.2

Page 33: EPL660: DATA CLASSIFICATION

Underflow Prevention: using logs
Multiplying lots of probabilities, which are between 0 and 1 by definition, can result in floating-point underflow.
Since log(xy) = log(x) + log(y), it is better to perform all computations by summing logs of probabilities rather than multiplying probabilities.
The class with the highest final un-normalized log probability score is still the most probable.
cNB = argmax_{cj ∈ C} [ log P(cj) + Σ_{i ∈ positions} log P(xi | cj) ]
Note that the model is now just a max of a sum of weights…

Sec.13.2
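Continuing the training sketch from the learning slide, classification can then sum log probabilities over the word positions of the test document (again an illustrative sketch, not the slides' own code):

```python
import math

def classify_nb(tokens, vocabulary, prior, cond_prob):
    best_class, best_score = None, float("-inf")
    for c in prior:
        # log P(c_j) + sum over positions of log P(x_i | c_j)
        score = math.log(prior[c])
        for t in tokens:
            if t in vocabulary:              # ignore tokens outside the vocabulary
                score += math.log(cond_prob[c][t])
        if score > best_score:
            best_class, best_score = c, score
    return best_class

# Toy usage (hypothetical data), with train_multinomial_nb from the earlier sketch:
docs = [(["buy", "cheap", "pills"], "spam"), (["meeting", "schedule", "notes"], "ham")]
V, prior, cp = train_multinomial_nb(docs)
print(classify_nb(["cheap", "pills", "now"], V, prior, cp))  # -> "spam"
```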

Page 34: EPL660: DATA CLASSIFICATION

Naive Bayes Classifier

Simple interpretation: Each conditional parameter log P(xi|cj) is a weight that indicates how good an indicator xi is for cj.

The prior log P(cj) is a weight that indicates the relative frequency of cj.

The sum is then a measure of how much evidence there is for the document being in the class.

We select the class with the most evidence for it.
cNB = argmax_{cj ∈ C} [ log P(cj) + Σ_{i ∈ positions} log P(xi | cj) ]

Page 35: EPL660: DATA CLASSIFICATION

Two Naive Bayes Models. Model 1: Multivariate Bernoulli
One feature Xw for each word in the dictionary
Xw = true in document d if w appears in d
Naive Bayes assumption:

Given the document’s topic, appearance of one word in the document tells us nothing about chances that another word appears

This is the model used in the binary independence model in classic probabilistic relevance feedback on hand-classified data (Maron in IR was a very early user of NB)

Page 36: EPL660: DATA CLASSIFICATION

Two Models. Model 2: Multinomial = class conditional unigram
One feature Xi for each word position in the document; the feature's values are all words in the dictionary
Value of Xi is the word in position i
Naive Bayes assumption: given the document's topic, the word in one position in the document tells us nothing about words in other positions
Second assumption: word appearance does not depend on position:
P(Xi = w | c) = P(Xj = w | c)  for all positions i, j, word w, and class c
Just have one multinomial feature predicting all words

Page 37: EPL660: DATA CLASSIFICATION

Parameter estimation
Multivariate Bernoulli model:
P̂(Xw = t | cj) = fraction of documents of topic cj in which word w appears
Multinomial model:
P̂(Xi = w | cj) = fraction of times in which word w appears among all words in documents of topic cj
Can create a mega-document for topic j by concatenating all documents in this topic
Use the frequency of w in the mega-document

Page 38: EPL660: DATA CLASSIFICATION

Classification

Multinomial vs Multivariate Bernoulli?

Multinomial model is almost always more effective in text applications!

See results figures later

See IIR sections 13.2 and 13.3 for worked examples with each model

Page 39: EPL660: DATA CLASSIFICATION

Feature Selection: Why?

Text collections have a large number of features: 10,000 – 1,000,000 unique words … and more
May make using a particular classifier feasible: some classifiers can't deal with 100,000s of features
Reduces training time: training time for some methods is quadratic or worse in the number of features
Can improve generalization (performance): eliminates noise features, avoids overfitting

Sec.13.5

Page 40: EPL660: DATA CLASSIFICATION

Feature selection: how? Two ideas:
Hypothesis testing statistics: are we confident that the value of one categorical variable is associated with the value of another? Chi-square test (χ²)
Information theory: how much information does the value of one categorical variable give you about the value of another? Mutual information (MI)
They're similar, but χ² measures confidence in association (based on available statistics), while MI measures the extent of association (assuming perfect knowledge of probabilities)

Sec.13.5

Page 41: EPL660: DATA CLASSIFICATION

χ² statistic (CHI)
χ² is interested in (fo − fe)²/fe summed over all table entries: is the observed number what you'd expect given the marginals?

               Term = jaguar        Term ≠ jaguar
Class = auto   2  (expected 0.25)   500  (expected 502)
Class ≠ auto   3  (expected 4.75)   9500 (expected 9498)
(observed: fo; expected: fe, computed from the marginals)

χ²(j, a) = Σ (O − E)²/E = (2 − 0.25)²/0.25 + (3 − 4.75)²/4.75 + (500 − 502)²/502 + (9500 − 9498)²/9498 ≈ 12.9  (p < 0.001)

The null hypothesis is rejected with confidence .999, since 12.9 > 10.83 (the critical value for .999 confidence).

Sec.13.5.2

Page 42: EPL660: DATA CLASSIFICATION

χ² statistic (CHI)
There is a simpler formula for the 2×2 χ²:
χ² = N (AD − CB)² / [ (A + C)(B + D)(A + B)(C + D) ]
where A = #(t, c), B = #(t, ¬c), C = #(¬t, c), D = #(¬t, ¬c), and N = A + B + C + D.

Sec.13.5.2
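A small Python sketch of this 2×2 computation (illustrative; the example counts are the jaguar/auto numbers from the previous slide):

```python
def chi_square_2x2(A, B, C, D):
    """A=#(t,c), B=#(t,~c), C=#(~t,c), D=#(~t,~c)."""
    N = A + B + C + D
    return N * (A * D - C * B) ** 2 / ((A + C) * (B + D) * (A + B) * (C + D))

# jaguar/auto example: A=2, B=3, C=500, D=9500
print(round(chi_square_2x2(2, 3, 500, 9500), 1))  # ~12.8, matching the ~12.9 hand computation up to rounding
```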

Page 43: EPL660: DATA CLASSIFICATION

Feature selection via Mutual Information
In the training set, choose k words which best discriminate (give the most info on) the categories.
The Mutual Information between a word w and a class c is:
I(w, c) = Σ_{e_w ∈ {0,1}} Σ_{e_c ∈ {0,1}} p(e_w, e_c) log [ p(e_w, e_c) / ( p(e_w) p(e_c) ) ]
computed for each word w and each category c.

Sec.13.5.1
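A sketch of computing I(w, c) from the same 2×2 document counts used for χ² (illustrative code; zero cells are taken to contribute 0 to the sum):

```python
import math

def mutual_information(A, B, C, D):
    """A=#(t,c), B=#(t,~c), C=#(~t,c), D=#(~t,~c); returns I(w, c) in bits."""
    N = A + B + C + D
    joint = {(1, 1): A, (1, 0): B, (0, 1): C, (0, 0): D}   # indexed by (e_w, e_c)
    mi = 0.0
    for (ew, ec), n in joint.items():
        if n == 0:
            continue                        # 0 * log 0 treated as 0
        p_joint = n / N
        p_w = (A + B) / N if ew else (C + D) / N
        p_c = (A + C) / N if ec else (B + D) / N
        mi += p_joint * math.log2(p_joint / (p_w * p_c))
    return mi

print(mutual_information(2, 3, 500, 9500))  # small value for this rare term
```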

Page 44: EPL660: DATA CLASSIFICATION

Feature selection via MI (contd.)
For each category we build a list of the k most discriminating terms.
For example (on 20 Newsgroups):

sci.electronics: circuit, voltage, amp, ground, copy, battery, electronics, cooling, …

rec.autos: car, cars, engine, ford, dealer, mustang, oil, collision, autos, tires, toyota, …

Greedy: does not account for correlations between terms

Why?

Sec.13.5.1

Page 45: EPL660: DATA CLASSIFICATION

Feature Selection
Mutual Information: clear information-theoretic interpretation; may select very slightly informative frequent terms that are not very useful for classification
Chi-square: statistical foundation; may select rare uninformative terms
Just use the commonest terms? No particular foundation, but in practice this is often 90% as good

Sec.13.5

Page 46: EPL660: DATA CLASSIFICATION

Feature selection for NB
In general, feature selection is necessary for multivariate Bernoulli NB. Otherwise you suffer from noise, multi-counting.
"Feature selection" really means something different for multinomial NB: it means dictionary truncation. The multinomial NB model only has 1 feature.

This “feature selection” normally isn’t needed for multinomial NB, but may help a fraction with quantities that are badly estimated

Sec.13.5

Page 47: EPL660: DATA CLASSIFICATION

Evaluating Categorization
Evaluation must be done on test data that are independent of the training data (usually a disjoint set of instances).
Sometimes use cross-validation (averaging results over multiple training and test splits of the overall data).
It's easy to get good performance on a test set that was available to the learner during training (e.g., just memorize the test set).
Measures: precision, recall, F1, classification accuracy.
Classification accuracy: c/n, where n is the total number of test instances and c is the number of test instances correctly classified by the system.
Adequate if there is one class per document; otherwise use the F measure for each class.

Sec.13.6
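A short sketch of these measures, assuming parallel lists of gold and predicted labels (illustrative code):

```python
def evaluate(gold, predicted):
    n = len(gold)
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)
    accuracy = correct / n                         # classification accuracy = c / n
    report = {}
    for c in set(gold) | set(predicted):
        tp = sum(1 for g, p in zip(gold, predicted) if p == c and g == c)
        fp = sum(1 for g, p in zip(gold, predicted) if p == c and g != c)
        fn = sum(1 for g, p in zip(gold, predicted) if p != c and g == c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        report[c] = (prec, rec, f1)                # per-class P, R, F1
    return accuracy, report
```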

Page 48: EPL660: DATA CLASSIFICATION

Naive Bayes vs. other methods

Sec.13.6

Page 49: EPL660: DATA CLASSIFICATION

WebKB Experiment (1998)
Classify webpages from CS departments into: student, faculty, course, project
Train on ~5,000 hand-labeled web pages (Cornell, Washington, U. Texas, Wisconsin)
Crawl and classify a new site (CMU)

Results:
            Student  Faculty  Person  Project  Course  Departmt
Extracted   180      66       246     99       28      1
Correct     130      28       194     72       25      1
Accuracy    72%      42%      79%     73%      89%     100%

Sec.13.6

Page 50: EPL660: DATA CLASSIFICATION

NB Model Comparison: WebKB

Sec.13.6

Page 52: EPL660: DATA CLASSIFICATION

Naive Bayes on spam email

Sec.13.6

Page 53: EPL660: DATA CLASSIFICATION

SpamAssassin
Naive Bayes has found a home in spam filtering
Paul Graham's A Plan for Spam
A mutant with more mutant offspring...
Naive Bayes-like classifier with weird parameter estimation
Widely used in spam filters
Classic Naive Bayes superior when appropriately used, according to David D. Lewis

But also many other things: black hole lists, etc.

Many email topic filters also use NB classifiers

Page 54: EPL660: DATA CLASSIFICATION

Violation of NB Assumptions
The independence assumptions do not really hold of documents written in natural language:
Conditional independence
Positional independence

Examples?

Page 55: EPL660: DATA CLASSIFICATION

Naive Bayes Posterior Probabilities
Classification results of naive Bayes (the class with maximum posterior probability) are usually fairly accurate.
However, due to the inadequacy of the conditional independence assumption, the actual posterior-probability numerical estimates are not. Output probabilities are commonly very close to 0 or 1.
Correct estimation ⇒ accurate prediction, but correct probability estimation is NOT necessary for accurate prediction (just need the right ordering of probabilities).

Page 56: EPL660: DATA CLASSIFICATION

Naive Bayes is Not So Naive
Naive Bayes won 1st and 2nd place in the KDD-CUP 97 competition out of 16 systems
Goal: financial services industry direct mail response prediction model: predict if the recipient of mail will actually respond to the advertisement; 750,000 records.
More robust to irrelevant features than many learning methods: irrelevant features cancel each other without affecting results; decision trees can suffer heavily from this.
More robust to concept drift (changing class definition over time)
Very good in domains with many equally important features: decision trees suffer from fragmentation in such cases, especially if there is little data
A good dependable baseline for text classification (but not the best)!
Optimal if the independence assumptions hold (Bayes Optimal Classifier): never true for text, but possible in some domains
Very fast learning and testing (basically just count the data)
Low storage requirements

Page 57: EPL660: DATA CLASSIFICATION

The rest of text classification
Vector space methods for text classification:
Vector space classification using centroids (Rocchio)
K Nearest Neighbors
Decision boundaries, linear and nonlinear classifiers
Dealing with more than 2 classes
More text classification:
Support Vector Machines
Text-specific issues in classification

Page 58: EPL660: DATA CLASSIFICATION

Recall: Vector Space Representation
Each document is a vector, one component for each term (= word).
Normally normalize vectors to unit length.
High-dimensional vector space: terms are axes; 10,000+ dimensions, or even 100,000+; docs are vectors in this space

How can we do classification in this space?

Sec.14.1

Page 59: EPL660: DATA CLASSIFICATION

Classification Using Vector Spaces
As before, the training set is a set of documents, each labeled with its class (e.g., topic)
In vector space classification, this set corresponds to a labeled set of points (or, equivalently, vectors) in the vector space

Premise 1: Documents in the same class form a contiguous region of space

Premise 2: Documents from different classes don’t overlap (much)

We define surfaces to delineate classes in the space

Sec.14.1

Page 60: EPL660: DATA CLASSIFICATION

Documents in a Vector Space
(Figure: documents plotted in the vector space, grouped into the classes Government, Science, and Arts)

Sec.14.1

Page 61: EPL660: DATA CLASSIFICATION

Test Document of what class?
(Figure: an unlabeled test document plotted among the Government, Science, and Arts regions)

Sec.14.1

Page 62: EPL660: DATA CLASSIFICATION

Test Document = Government
(Figure: the test document falls within the Government region)
Is this similarity hypothesis true in general?

Our main topic today is how to find good separators

Sec.14.1

Page 63: EPL660: DATA CLASSIFICATION

Aside: 2D/3D graphs can be misleading

Sec.14.1

Page 64: EPL660: DATA CLASSIFICATION

Using Rocchio for text classification
Relevance feedback methods can be adapted for text categorization
As noted before, relevance feedback can be viewed as 2-class classification: relevant vs. nonrelevant documents

Use standard tf-idf weighted vectors to represent text documents

For training documents in each category, compute a prototype vector by summing the vectors of the training documents in the category. Prototype = centroid of members of class

Assign test documents to the category with the closest prototype vector based on cosine similarity.

Sec.14.2

Page 65: EPL660: DATA CLASSIFICATION

Illustration of Rocchio Text Categorization

Sec.14.2

Page 66: EPL660: DATA CLASSIFICATION

Definition of centroid
μ(c) = (1 / |Dc|) Σ_{d ∈ Dc} v(d)
where Dc is the set of all documents that belong to class c and v(d) is the vector space representation of d.
Note that the centroid will in general not be a unit vector even when the inputs are unit vectors.

Sec.14.2
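A sketch of Rocchio classification built on this centroid definition, with documents represented as dicts of tf-idf weights (names are illustrative):

```python
import math
from collections import defaultdict

def centroid(vectors):
    """mu(c) = (1/|Dc|) * sum of v(d) over d in Dc."""
    mu = defaultdict(float)
    for v in vectors:
        for term, w in v.items():
            mu[term] += w / len(vectors)
    return dict(mu)

def cosine(u, v):
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rocchio_classify(doc, training_docs):
    """training_docs: dict mapping class -> list of document vectors."""
    prototypes = {c: centroid(vs) for c, vs in training_docs.items()}
    return max(prototypes, key=lambda c: cosine(doc, prototypes[c]))
```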

Page 67: EPL660: DATA CLASSIFICATION

Rocchio Properties
Forms a simple generalization of the examples in each class (a prototype).
The prototype vector does not need to be averaged or otherwise normalized for length, since cosine similarity is insensitive to vector length.

Classification is based on similarity to class prototypes.

Does not guarantee classifications are consistent with the given training data.

Sec.14.2

Page 68: EPL660: DATA CLASSIFICATION

Rocchio Anomaly
Prototype models have problems with polymorphic (disjunctive) categories.

Sec.14.2

Page 69: EPL660: DATA CLASSIFICATION

Rocchio classification
Rocchio forms a simple representation for each class: the centroid/prototype
Classification is based on similarity to / distance from the prototype/centroid
It does not guarantee that classifications are consistent with the given training data
It is little used outside text classification
It has been used quite effectively for text classification, but is in general worse than Naïve Bayes
Again, cheap to train and to test documents

Sec.14.2

Page 70: EPL660: DATA CLASSIFICATION

k Nearest Neighbor Classification
kNN = k Nearest Neighbor
To classify a document d into class c:
Define the k-neighborhood N as the k nearest neighbors of d
Count the number of documents i in N that belong to c
Estimate P(c|d) as i/k
Choose as class argmaxc P(c|d)  [= majority class]

Sec.14.3
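A sketch of this kNN rule using cosine similarity over tf-idf vectors (illustrative code; any similarity metric could be substituted):

```python
import math
from collections import Counter

def cosine(u, v):
    dot = sum(u.get(t, 0.0) * w for t, w in v.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def knn_classify(doc, training_docs, k=3):
    """training_docs: list of (vector, class) pairs; returns the majority class of the k nearest."""
    neighbors = sorted(training_docs, key=lambda dc: cosine(doc, dc[0]), reverse=True)[:k]
    votes = Counter(c for _, c in neighbors)        # estimates P(c|d) as votes[c] / k
    return votes.most_common(1)[0][0]
```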

Page 71: EPL660: DATA CLASSIFICATION

Example: k = 6 (6NN)
(Figure: a test document and its 6 nearest neighbors among the Government, Science, and Arts classes)
P(science | test document)?

Sec.14.3

Page 72: EPL660: DATA CLASSIFICATION

Nearest-Neighbor Learning Algorithm Learning is just storing the representations of the training examples

in D. Testing instance x (under 1NN):

Compute similarity between x and all examples in D. Assign x the category of the most similar example in D.

Does not explicitly compute a generalization or category prototypes.

Also called: Case-based learning Memory-based learning Lazy learning

Rationale of kNN: contiguity hypothesis

Sec.14.3

Page 73: EPL660: DATA CLASSIFICATION

kNN Is Close to Optimal Cover and Hart (1967) Asymptotically, the error rate of 1-nearest-neighbor

classification is less than twice the Bayes rate [error rate of classifier knowing model that generated data]

In particular, asymptotic error rate is 0 if Bayes rate is 0.

Assume: query point coincides with a training point. Both query point and training point contribute error

→ 2 times Bayes rate

Sec.14.3

Page 74: EPL660: DATA CLASSIFICATION

k Nearest Neighbor Using only the closest example (1NN) to determine

the class is subject to errors due to: A single atypical example. Noise (i.e., an error) in the category label of a single

training example. More robust alternative is to find the k most-similar

examples and return the majority category of these k examples.

Value of k is typically odd to avoid ties; 3 and 5 are most common.

Sec.14.3

Page 75: EPL660: DATA CLASSIFICATION

kNN decision boundaries
(Figure: locally defined kNN boundaries between the Government, Science, and Arts regions)
Boundaries are in principle arbitrary surfaces, but usually polyhedra
kNN gives locally defined decision boundaries between classes: far away points do not influence each classification decision (unlike in Naïve Bayes, Rocchio, etc.)

Sec.14.3

Page 76: EPL660: DATA CLASSIFICATION

Similarity Metrics Nearest neighbor method depends on a similarity (or

distance) metric. Simplest for continuous m-dimensional instance

space is Euclidean distance. Simplest for m-dimensional binary instance space is

Hamming distance (number of feature values that differ).

For text, cosine similarity of tf.idf weighted vectors is typically most effective.

Sec.14.3

Page 77: EPL660: DATA CLASSIFICATION

Illustration of 3 Nearest Neighbor for Text Vector Space

Sec.14.3

Page 78: EPL660: DATA CLASSIFICATION

3 Nearest Neighbor vs. Rocchio Nearest Neighbor tends to handle polymorphic

categories better than Rocchio/NB.

Page 79: EPL660: DATA CLASSIFICATION

Nearest Neighbor with Inverted Index Naively, finding nearest neighbors requires a linear

search through |D| documents in collection But determining k nearest neighbors is the same as

determining the k best retrievals using the test document as a query to a database of training documents.

Use standard vector space inverted index methods to find the k nearest neighbors.

Testing Time: O(B|Vt|) where B is the average number of training documents in which a test-document word appears. Typically B << |D|

Sec.14.3

Page 80: EPL660: DATA CLASSIFICATION

kNN: Discussion No feature selection necessary Scales well with large number of classes

Don’t need to train n classifiers for n classes Classes can influence each other

Small changes to one class can have ripple effect Scores can be hard to convert to probabilities No training necessary

Actually: perhaps not true. (Data editing, etc.) May be expensive at test time In most cases it’s more accurate than NB or Rocchio

Sec.14.3

Page 81: EPL660: DATA CLASSIFICATION

kNN vs. Naive Bayes: Bias/Variance tradeoff
Variance ≈ Capacity
kNN has high variance and low bias: infinite memory
NB has low variance and high bias: the decision surface has to be linear (a hyperplane; see later)
Consider asking a botanist: Is an object a tree?
Too much capacity/variance, low bias: a botanist who memorizes will always say "no" to a new object (e.g., different # of leaves)
Not enough capacity/variance, high bias: a lazy botanist says "yes" if the object is green
You want the middle ground. (Example due to C. Burges)

Sec.14.6

Page 82: EPL660: DATA CLASSIFICATION

Bias vs. variance: Choosing the correct model capacity

Sec.14.6

Page 83: EPL660: DATA CLASSIFICATION

Linear classifiers and binary and multiclass classification Consider 2 class problems

Deciding between two classes, perhaps, government and non-government One-versus-rest classification

How do we define (and find) the separating surface? How do we decide which region a test doc is in?

Sec.14.4

Page 84: EPL660: DATA CLASSIFICATION

Separation by Hyperplanes A strong high-bias assumption is linear separability:

in 2 dimensions, can separate classes by a line in higher dimensions, need hyperplanes

Can find separating hyperplane by linear programming (or can iteratively fit solution via perceptron): separator can be expressed as ax + by = c

Sec.14.4

Page 85: EPL660: DATA CLASSIFICATION

Linear programming / Perceptron

Find a, b, c, such that ax + by > c for red points and ax + by < c for blue points.

Sec.14.4

Page 86: EPL660: DATA CLASSIFICATION

Which Hyperplane?

In general, lots of possible solutions for a, b, c.

Sec.14.4

Page 87: EPL660: DATA CLASSIFICATION

Which Hyperplane? Lots of possible solutions for a, b, c.
Some methods find a separating hyperplane, but not the optimal one [according to some criterion of expected goodness], e.g., the perceptron
Most methods find an optimal separating hyperplane
Which points should influence optimality?
All points: linear/logistic regression, Naïve Bayes
Only "difficult points" close to the decision boundary: support vector machines

Sec.14.4

Page 88: EPL660: DATA CLASSIFICATION

Linear classifier: Example
Class: "interest" (as in interest rate)
Example features of a linear classifier (weight wi, term ti):
   0.70 prime        −0.71 dlrs
   0.67 rate         −0.35 world
   0.63 interest     −0.33 sees
   0.60 rates        −0.25 year
   0.46 discount     −0.24 group
   0.43 bundesbank   −0.24 dlr
To classify, find the dot product of the feature vector and the weights

Sec.14.4
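A sketch of applying such a linear classifier: take the dot product of the document's feature vector with the weights and compare it to a threshold (the weights are copied from this slide; the zero threshold and the toy document are assumptions for illustration):

```python
weights = {"prime": 0.70, "rate": 0.67, "interest": 0.63, "rates": 0.60,
           "discount": 0.46, "bundesbank": 0.43,
           "dlrs": -0.71, "world": -0.35, "sees": -0.33,
           "year": -0.25, "group": -0.24, "dlr": -0.24}

def linear_classify(doc_vector, weights, b=0.0):
    """doc_vector: term -> feature value (e.g., tf-idf). Returns True if w.x > b."""
    score = sum(weights.get(t, 0.0) * x for t, x in doc_vector.items())
    return score > b

print(linear_classify({"rate": 1.0, "discount": 1.0, "dlrs": 1.0}, weights))  # 0.67 + 0.46 - 0.71 > 0 -> True
```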

Page 89: EPL660: DATA CLASSIFICATION

Linear Classifiers Many common text classifiers are linear classifiers

Naïve Bayes Perceptron Rocchio Logistic regression Support vector machines (with linear kernel) Linear regression with threshold

Despite this similarity, noticeable performance differences For separable problems, there is an infinite number of separating

hyperplanes. Which one do you choose? What to do for non-separable problems? Different training methods pick different hyperplanes

Classifiers more powerful than linear often don’t perform better on text problems. Why?

Sec.14.4

Page 90: EPL660: DATA CLASSIFICATION

Rocchio is a linear classifier

Sec.14.2

Page 91: EPL660: DATA CLASSIFICATION

Two-class Rocchio as a linear classifier
A line or hyperplane is defined by:
Σ_{i=1}^{M} wi di = b
For Rocchio, set:
w = μ(c1) − μ(c2)
b = 0.5 × ( |μ(c1)|² − |μ(c2)|² )
[Aside for ML/stats people: Rocchio classification is a simplification of the classic Fisher Linear Discriminant where you don't model the variance (or assume it is spherical).]

Sec.14.2

Page 92: EPL660: DATA CLASSIFICATION

Naive Bayes is a linear classifier
Two-class Naive Bayes. We compute:
log [ P(C|d) / P(C̄|d) ] = log [ P(C) / P(C̄) ] + Σ_{w ∈ d} log [ P(w|C) / P(w|C̄) ]
Decide class C if the odds ratio is greater than 1, i.e., if the log odds is greater than 0.
So the decision boundary is the hyperplane:
β + Σ_{w ∈ V} αw nw = 0
where β = log [ P(C) / P(C̄) ]; αw = log [ P(w|C) / P(w|C̄) ]; and nw = number of occurrences of w in d.

Sec.14.4
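A sketch of reading these hyperplane weights off a trained two-class NB model (hypothetical names; prior and cond_prob as in the earlier training sketch):

```python
import math

def nb_as_linear(prior, cond_prob, pos, neg):
    """Return (beta, alpha): beta = log P(C)/P(~C), alpha[w] = log P(w|C)/P(w|~C).
    A document d (word -> count n_w) is assigned C when beta + sum_w alpha[w]*n_w > 0."""
    beta = math.log(prior[pos] / prior[neg])
    alpha = {w: math.log(cond_prob[pos][w] / cond_prob[neg][w]) for w in cond_prob[pos]}
    return beta, alpha

def nb_linear_decision(doc_counts, beta, alpha):
    return beta + sum(alpha.get(w, 0.0) * n for w, n in doc_counts.items()) > 0
```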

Page 93: EPL660: DATA CLASSIFICATION

A nonlinear problem

A linear classifier like Naïve Bayes does badly on this task

kNN will do very well (assuming enough training data)

Sec.14.4

Page 94: EPL660: DATA CLASSIFICATION

High Dimensional Data
Pictures like the one at right are absolutely misleading!
Documents are zero along almost all axes
Most document pairs are very far apart (i.e., not strictly orthogonal, but they only share very common words and a few scattered others)
In classification terms: often document sets are separable, for most any classification
This is part of why linear classifiers are quite successful in this domain

Sec.14.4

Page 95: EPL660: DATA CLASSIFICATION

More Than Two Classes Any-of or multivalue classification

Classes are independent of each other. A document can belong to 0, 1, or >1 classes. Decompose into n binary problems Quite common for documents

One-of or multinomial or polytomous classification Classes are mutually exclusive. Each document belongs to exactly one class E.g., digit recognition is polytomous classification

Digits are mutually exclusive

Sec.14.5

Page 96: EPL660: DATA CLASSIFICATION

Set of Binary Classifiers: Any of
Build a separator between each class and its complementary set (docs from all other classes).
Given a test doc, evaluate it for membership in each class.
Apply the decision criterion of the classifiers independently. Done

Though maybe you could do better by considering dependencies between categories

Sec.14.5

Page 97: EPL660: DATA CLASSIFICATION

Set of Binary Classifiers: One of
Build a separator between each class and its complementary set (docs from all other classes).
Given a test doc, evaluate it for membership in each class.
Assign the document to the class with: maximum score, maximum confidence, or maximum probability

Sec.14.5

Page 98: EPL660: DATA CLASSIFICATION

Summary: Representation of Text Categorization Attributes
Representations of text are usually very high dimensional (one feature for each word)
High-bias algorithms that prevent overfitting in high-dimensional space should generally work best*
For most text categorization tasks, there are many relevant features and many irrelevant ones
Methods that combine evidence from many or all features (e.g. naive Bayes, kNN) often tend to work better than ones that try to isolate just a few relevant features*

*Although the results are a bit more mixed than often thought

Page 99: EPL660: DATA CLASSIFICATION

Which classifier do I use for a given text classification problem?
Is there a learning method that is optimal for all text classification problems?
No, because there is a tradeoff between bias and variance.
Factors to take into account:
How much training data is available?
How simple/complex is the problem? (linear vs. nonlinear decision boundary)
How noisy is the data?
How stable is the problem over time?
For an unstable problem, it's better to use a simple and robust classifier.

Page 100: EPL660: DATA CLASSIFICATION


Resources for today's lecture: IIR 13
Fabrizio Sebastiani. Machine Learning in Automated Text Categorization. ACM Computing Surveys, 34(1):1-47, 2002.
Yiming Yang & Xin Liu. A re-examination of text categorization methods. Proceedings of SIGIR, 1999.
Andrew McCallum and Kamal Nigam. A Comparison of Event Models for Naive Bayes Text Classification. In AAAI/ICML-98 Workshop on Learning for Text Categorization, pp. 41-48.
Tom Mitchell. Machine Learning. McGraw-Hill, 1997. (Clear, simple explanation of Naive Bayes)
Open Calais: automatic semantic tagging; free (but they can keep your data), provided by Thompson/Reuters (ex-ClearForest)
Weka: a data mining software package that includes an implementation of Naive Bayes
Reuters-21578: the most famous text classification evaluation set; still widely used by lazy people (but now it's too small for realistic experiments; you should use Reuters RCV1)

Ch. 13

Page 101: EPL660: DATA CLASSIFICATION

Resources for today's lecture: IIR 14
Fabrizio Sebastiani. Machine Learning in Automated Text Categorization. ACM Computing Surveys, 34(1):1-47, 2002.
Yiming Yang & Xin Liu. A re-examination of text categorization methods. Proceedings of SIGIR, 1999.
Trevor Hastie, Robert Tibshirani and Jerome Friedman. Elements of Statistical Learning: Data Mining, Inference and Prediction. Springer-Verlag, New York.
Open Calais: automatic semantic tagging; free (but they can keep your data), provided by Thompson/Reuters
Weka: a data mining software package that includes an implementation of many ML algorithms

Ch. 14
