CMSC 723 / LING 645: Intro to Computational Linguistics


Page 1: CMSC 723 / LING 645: Intro to Computational Linguistics

CMSC 723 / LING 645: Intro to Computational Linguistics

February 25, 2004

Lecture 5 (Dorr): Intro to Probabilistic NLP and N-grams (chap 6.1-6.3)

Prof. Bonnie J. Dorr, Dr. Nizar Habash

TA: Nitin Madnani, Nate Waisbrot

Page 2: CMSC 723 / LING 645: Intro to Computational Linguistics

Why (not) Statistics for NLP?

Pro
– Disambiguation
– Error tolerant
– Learnable

Con
– Not always appropriate
– Difficult to debug

Page 3: CMSC 723 / LING 645: Intro to Computational Linguistics

Weighted Automata/Transducers

Speech recognition: storing a pronunciation lexicon
Augmentation of FSA: each arc is associated with a probability

Page 4: CMSC 723 / LING 645: Intro to Computational Linguistics

Pronunciation network for “about”
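To make the idea concrete, here is a minimal Python sketch of a weighted pronunciation network; the states, arcs, and probabilities below are invented for illustration, not the figure's actual values:

```python
# Minimal sketch of a weighted pronunciation network (all numbers are illustrative).
# Each state maps to a list of (phone, next_state, probability) arcs.
network = {
    "start": [("ax", "s1", 0.68), ("ix", "s1", 0.20), ("epsilon", "s1", 0.12)],
    "s1":    [("b", "s2", 1.00)],
    "s2":    [("aw", "s3", 0.85), ("ae", "s3", 0.15)],
    "s3":    [("t", "end", 0.54), ("dx", "end", 0.46)],
}

def path_probability(phones):
    """Multiply arc probabilities along one path through the network."""
    prob, state = 1.0, "start"
    for phone in phones:
        p, nxt, pr = next(arc for arc in network[state] if arc[0] == phone)
        prob *= pr
        state = nxt
    return prob

print(path_probability(["ax", "b", "aw", "t"]))  # 0.68 * 1.0 * 0.85 * 0.54
```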

Page 5: CMSC 723 / LING 645: Intro to Computational Linguistics

Noisy Channel

Page 6: CMSC 723 / LING 645: Intro to Computational Linguistics

Probability Definitions

Experiment (trial)
– Repeatable procedure with well-defined possible outcomes

Sample space
– Complete set of outcomes

Event
– Any subset of outcomes from the sample space

Random variable
– Uncertain outcome in a trial

Page 7: CMSC 723 / LING 645: Intro to Computational Linguistics

More Definitions

Probability
– How likely is it to get a particular outcome?
– Rate of getting that outcome over all trials

Probability of drawing a spade from 52 well-shuffled playing cards: 13/52 = 1/4 = .25

Distribution: probabilities associated with each outcome a random variable can take
– Each outcome has a probability between 0 and 1
– The sum of all outcome probabilities is 1

Page 8: CMSC 723 / LING 645: Intro to Computational Linguistics

Conditional Probability

What is P(A|B)? First, what is P(A)?
– P(“It is raining”) = .06

Now what about P(A|B)?
– P(“It is raining” | “It was clear 10 minutes ago”) = .004

P(A|B) = P(A,B) / P(B)

Note: P(A,B) = P(A|B) · P(B)
Also: P(A,B) = P(B,A)

Page 9: CMSC 723 / LING 645: Intro to Computational Linguistics

Independence

What is P(A,B) if A and B are independent?

P(A,B) = P(A) · P(B) iff A,B independent
– P(heads,tails) = P(heads) · P(tails) = .5 · .5 = .25
– P(doctor,blue-eyes) = P(doctor) · P(blue-eyes) = .01 · .2 = .002

What if A,B independent?
– P(A|B) = P(A) iff A,B independent
– Also: P(B|A) = P(B) iff A,B independent

Page 10: CMSC 723 / LING 645: Intro to Computational Linguistics

Bayes Theorem

P(B|A) = P(A|B) · P(B) / P(A)

• Swap the order of dependence

• Sometimes easier to estimate one kind of dependence than the other
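To make the swap of dependence concrete, here is a tiny numerical sketch with invented numbers (a hypothetical disease/symptom setting, not from the lecture):

```python
# Toy Bayes computation with made-up numbers, just to show the swap of dependence.
p_symptom_given_disease = 0.9   # P(A|B): often easy to estimate from records
p_disease = 0.01                # P(B): prior
p_symptom = 0.08                # P(A)

p_disease_given_symptom = p_symptom_given_disease * p_disease / p_symptom
print(p_disease_given_symptom)  # P(B|A) = 0.9 * 0.01 / 0.08 = 0.1125
```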

Page 11: CMSC 723 / LING 645: Intro to Computational Linguistics

What does this have to do with the Noisy Channel Model?

[Diagram: a hidden source H passes through the noisy channel to produce observation O, with channel likelihood P(O|H) and source prior P(H)]

Best H = argmax_H P(H|O) = argmax_H P(O|H) · P(H) / P(O)

where P(O|H) is the likelihood and P(H) is the prior.

Page 12: CMSC 723 / LING 645: Intro to Computational Linguistics

Noisy Channel Applied to Word Recognition

argmax_w P(w|O) = argmax_w P(O|w) P(w)

Simplifying assumptions
– pronunciation string correct
– word boundaries known

Problem:
– Given [n iy], what is the correct dictionary word?

What do we need?
[ni]: knee, neat, need, new

Page 13: CMSC 723 / LING 645: Intro to Computational Linguistics

What is the most likely word given [ni]?

Compute the prior P(w):

Word   freq(w)   P(w)
new    2625      .001
neat   338       .00013
need   1417      .00056
knee   61        .000024

Now compute the likelihood P([ni]|w), then multiply:

Word   P(O|w)   P(w)      P(O|w)P(w)
new    .36      .001      .00036
neat   .52      .00013    .000068
need   .11      .00056    .000062
knee   1.00     .000024   .000024
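Below is a minimal Python sketch of this noisy-channel computation, using the likelihoods and priors from the tables above (only the helper names are mine):

```python
# Pick the word w maximizing P(O|w) * P(w) for the observation O = [ni].
likelihood = {"new": 0.36, "neat": 0.52, "need": 0.11, "knee": 1.00}            # P([ni]|w)
prior      = {"new": 0.001, "neat": 0.00013, "need": 0.00056, "knee": 0.000024}  # P(w)

scores = {w: likelihood[w] * prior[w] for w in likelihood}
best = max(scores, key=scores.get)
print(scores)  # {'new': 0.00036, 'neat': 6.76e-05, 'need': 6.16e-05, 'knee': 2.4e-05}
print(best)    # 'new' -- the unigram model prefers "new" regardless of context
```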

Page 14: CMSC 723 / LING 645: Intro to Computational Linguistics

Why N-grams?

Compute the likelihood P([ni]|w), then multiply:

Word   P(O|w)   P(w)      P(O|w)P(w)
new    .36      .001      .00036
neat   .52      .00013    .000068
need   .11      .00056    .000062
knee   1.00     .000024   .000024

P([ni]|new)P(new)
P([ni]|neat)P(neat)
P([ni]|need)P(need)
P([ni]|knee)P(knee)

Unigram approach: ignores context
Need to factor in context (n-gram)
- Use P(need|I) instead of just P(need)
- Note: P(new|I) < P(need|I)

Page 15: CMSC 723 / LING 645: Intro to Computational Linguistics

Next Word Prediction [borrowed from J. Hirschberg]

From a NY Times story...
– Stocks plunged this ….
– Stocks plunged this morning, despite a cut in interest rates
– Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall ...
– Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began

Page 16: CMSC 723 / LING 645: Intro to Computational Linguistics

Next Word Prediction (cont)

– Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last …

– Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last Tuesday's terrorist attacks.

Page 17: CMSC 723 / LING 645: Intro to Computational Linguistics

Human Word Prediction

Domain knowledge

Syntactic knowledge

Lexical knowledge

Page 18: CMSC 723 / LING 645: Intro to Computational Linguistics

Claim

A useful part of the knowledge needed to allow Word Prediction can be captured using simple statistical techniques.

Compute:
– probability of a sequence
– likelihood of words co-occurring

Page 19: CMSC 723 / LING 645: Intro to Computational Linguistics

Why would we want to do this?

Rank the likelihood of sequences containing various alternative hypotheses

Assess the likelihood of a hypothesis

Page 20: CMSC 723 / LING 645: Intro to Computational Linguistics

Why is this useful?

Speech recognition
Handwriting recognition
Spelling correction
Machine translation systems
Optical character recognizers

Page 21: CMSC 723 / LING 645: Intro to Computational Linguistics

Handwriting Recognition

Assume a note is given to a bank teller, which the teller reads as I have a gub. (cf. Woody Allen)

NLP to the rescue ….
– gub is not a word
– gun, gum, Gus, and gull are words, but gun has a higher probability in the context of a bank

Page 22: CMSC 723 / LING 645: Intro to Computational Linguistics

Real Word Spelling Errors

They are leaving in about fifteen minuets to go to her house.
The study was conducted mainly be John Black.
The design an construction of the system will take more than a year.
Hopefully, all with continue smoothly in my absence.
Can they lave him my messages?
I need to notified the bank of….
He is trying to fine out.

Page 23: CMSC 723 / LING 645: Intro to Computational Linguistics

For Spell Checkers

Collect list of commonly substituted words
– piece/peace, whether/weather, their/there ...

Example:
“On Tuesday, the whether …”
“On Tuesday, the weather …”

Page 24: CMSC 723 / LING 645: Intro to Computational Linguistics

Language Model

Definition: a language model is a model that enables one to compute the probability, or likelihood, of a sentence S, P(S).

Let’s look at different ways of computing P(S) in the context of Word Prediction

Page 25: CMSC 723 / LING 645: Intro to Computational Linguistics

Word Prediction: Simple vs. Smart

Simple:
Every word follows every other word with equal probability (0-gram)
– Assume |V| is the size of the vocabulary
– Likelihood of sentence S of length n is 1/|V| × 1/|V| × … × 1/|V| (n times), i.e. (1/|V|)^n
– If English has 100,000 words, the probability of each next word is 1/100000 = .00001

Smarter:
Probability of each next word is related to word frequency (unigram)
– Likelihood of sentence S = P(w1) × P(w2) × … × P(wn)
– Assumes the probability of each word is independent of the probabilities of the other words

Even smarter:
Look at probability given previous words (N-gram)
– Likelihood of sentence S = P(w1) × P(w2|w1) × … × P(wn|wn-1)
– Assumes the probability of each word depends on the preceding word(s)

Page 26: CMSC 723 / LING 645: Intro to Computational Linguistics

Chain Rule

Conditional probability
– P(A1,A2) = P(A1) · P(A2|A1)

The Chain Rule generalizes to multiple events
– P(A1, …, An) = P(A1) P(A2|A1) P(A3|A1,A2) … P(An|A1…An-1)

Examples:
– P(the dog) = P(the) P(dog | the)
– P(the dog bites) = P(the) P(dog | the) P(bites | the dog)
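To make the factorization concrete, here is a minimal Python sketch of the chain-rule decomposition; the conditional probabilities are invented stand-ins for a real model:

```python
# Chain-rule decomposition of a sentence probability.
# cond_prob(word, history) would come from a trained model; here it is a stub
# with invented values, just to make the factorization visible.
def cond_prob(word, history):
    made_up = {("the", ()): 0.06, ("dog", ("the",)): 0.01, ("bites", ("the", "dog")): 0.02}
    return made_up.get((word, tuple(history)), 1e-6)

def sentence_prob(words):
    prob = 1.0
    for i, w in enumerate(words):
        prob *= cond_prob(w, words[:i])   # P(w_i | w_1 ... w_{i-1})
    return prob

print(sentence_prob(["the", "dog", "bites"]))  # 0.06 * 0.01 * 0.02 = 1.2e-05
```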

Page 27: CMSC 723 / LING 645: Intro to Computational Linguistics

Relative Frequencies and Conditional Probabilities

Relative word frequencies are better than equal probabilities for all words
– In a corpus with 10K word types, each word would have P(w) = 1/10K
– Does not match our intuitions that different words are more likely to occur (e.g. the)

Conditional probability more useful than individual relative word frequencies
– Dog may be relatively rare in a corpus
– But if we see barking, P(dog|barking) may be very large

Page 28: CMSC 723 / LING 645: Intro to Computational Linguistics

For a Word String

In general, the probability of a complete string of words w1…wn is:

P(w1…wn) = P(w1) P(w2|w1) P(w3|w1w2) … P(wn|w1…wn-1)
         = Π(k=1..n) P(wk | w1…wk-1)

But this approach to determining the probability of a word sequence is not very helpful in general….

Page 29: CMSC 723 / LING 645: Intro to Computational Linguistics

Markov Assumption

How do we compute P(wn|w1…wn-1)?

Trick: instead of P(rabbit | I saw a), we use P(rabbit | a).
– This lets us collect statistics in practice
– A bigram model: P(the barking dog) = P(the|<start>) P(barking|the) P(dog|barking)

Markov models are the class of probabilistic models that assume we can predict the probability of some future unit without looking too far into the past
– Specifically, for N=2 (bigram): P(w1…wn) ≈ Π(k=1..n) P(wk|wk-1)

Order of a Markov model: length of prior context
– bigram is first order, trigram is second order, …
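The bigram version of the same idea, as a minimal sketch with an invented probability table:

```python
# Bigram approximation: P(w1...wn) ≈ Π P(wk | wk-1), with w0 = <start>.
# The probability table is invented, purely to illustrate the factorization.
bigram_prob = {
    ("<start>", "the"): 0.20,
    ("the", "barking"): 0.002,
    ("barking", "dog"): 0.30,
}

def bigram_sentence_prob(words):
    prob, prev = 1.0, "<start>"
    for w in words:
        prob *= bigram_prob.get((prev, w), 1e-8)  # unseen bigrams get a tiny floor here
        prev = w
    return prob

print(bigram_sentence_prob(["the", "barking", "dog"]))  # 0.20 * 0.002 * 0.30 = 1.2e-04
```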

Page 30: CMSC 723 / LING 645: Intro to Computational Linguistics

Counting Words in Corpora

What is a word?
– e.g., are cat and cats the same word?
– September and Sept?
– zero and oh?
– Is seventy-two one word or two? AT&T?
– Punctuation?

How many words are there in English?
Where do we find the things to count?

Page 31: CMSC 723 / LING 645: Intro to Computational Linguistics

Corpora

Corpora are (generally online) collections of text and speech

Examples:
– Brown Corpus (1M words)
– Wall Street Journal and AP News corpora
– ATIS, Broadcast News (speech)
– TDT (text and speech)
– Switchboard, Call Home (speech)
– TRAINS, FM Radio (speech)

Page 32: CMSC 723 / LING 645: Intro to Computational Linguistics

Training and Testing

Probabilities come from a training corpus, which is used to design the model.
– overly narrow corpus: probabilities don't generalize
– overly general corpus: probabilities don't reflect task or domain

A separate test corpus is used to evaluate the model, typically using standard metrics
– held out test set
– cross validation
– evaluation differences should be statistically significant

Page 33: CMSC 723 / LING 645: Intro to Computational Linguistics

Terminology

Sentence: unit of written language
Utterance: unit of spoken language
Word form: the inflected form that appears in the corpus
Lemma: lexical forms having the same stem, part of speech, and word sense
Types (V): number of distinct words that might appear in a corpus (vocabulary size)
Tokens (N): total number of words in a corpus
Types seen so far (T): number of distinct words seen so far in the corpus (smaller than V and N)

Page 34: CMSC 723 / LING 645: Intro to Computational Linguistics

Simple N-Grams

An N-gram model uses the previous N-1 words to predict the next one: P(wn | wn-N+1 wn-N+2 … wn-1)
– unigrams: P(dog)
– bigrams: P(dog | big)
– trigrams: P(dog | the big)
– quadrigrams: P(dog | chasing the big)

Page 35: CMSC 723 / LING 645: Intro to Computational Linguistics

Using N-Grams

Recall that
– N-gram: P(wn | w1…wn-1) ≈ P(wn | wn-N+1…wn-1)
– Bigram: P(w1…wn) ≈ Π(k=1..n) P(wk|wk-1)

For a bigram grammar
– P(sentence) can be approximated by multiplying all the bigram probabilities in the sequence

Example:
P(I want to eat Chinese food) = P(I | <start>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)

Page 36: CMSC 723 / LING 645: Intro to Computational Linguistics

A Bigram Grammar Fragment from BERP

Eat on .16 Eat Thai .03

Eat some .06 Eat breakfast .03

Eat lunch .06 Eat in .02

Eat dinner .05 Eat Chinese .02

Eat at .04 Eat Mexican .02

Eat a .04 Eat tomorrow .01

Eat Indian .04 Eat dessert .007

Eat today .03 Eat British .001

Page 37: CMSC 723 / LING 645: Intro to Computational Linguistics

Additional BERP Grammar

<start> I .25     Want some .04
<start> I’d .06   Want Thai .01
<start> Tell .04  To eat .26
<start> I’m .02   To have .14
I want .32        To spend .09
I would .29       To be .02
I don’t .08       British food .60
I have .04        British restaurant .15
Want to .65       British cuisine .01
Want a .05        British lunch .01

Page 38: CMSC 723 / LING 645: Intro to Computational Linguistics

Computing Sentence Probability

P(I want to eat British food) = P(I|<start>) P(want|I) P(to|want) P(eat|to) P(British|eat) P(food|British) = .25 × .32 × .65 × .26 × .001 × .60 ≈ .0000081

vs. P(I want to eat Chinese food) ≈ .00015

Probabilities seem to capture “syntactic” facts and “world knowledge”
– eat is often followed by an NP
– British food is not too popular

N-gram models can be trained by counting and normalization
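A small Python check of the two sentence probabilities above, plugging in the BERP bigram probabilities from the preceding slides:

```python
# Bigram probabilities taken from the BERP fragments above.
bigram = {
    ("<start>", "I"): 0.25, ("I", "want"): 0.32, ("want", "to"): 0.65,
    ("to", "eat"): 0.26, ("eat", "British"): 0.001, ("British", "food"): 0.60,
    ("eat", "Chinese"): 0.02, ("Chinese", "food"): 0.56,
}

def sentence_prob(words):
    prob, prev = 1.0, "<start>"
    for w in words:
        prob *= bigram[(prev, w)]
        prev = w
    return prob

print(sentence_prob("I want to eat British food".split()))  # ≈ 8.1e-06
print(sentence_prob("I want to eat Chinese food".split()))  # ≈ 1.5e-04
```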

Page 39: CMSC 723 / LING 645: Intro to Computational Linguistics

BERP Bigram Counts

I Want To Eat Chinese Food lunch

I 8 1087 0 13 0 0 0

Want 3 0 786 0 6 8 6

To 3 0 10 860 3 0 12

Eat 0 0 2 0 19 2 52

Chinese 2 0 0 0 0 120 1

Food 19 0 17 0 0 0 0

Lunch 4 0 0 0 0 1 0

Page 40: CMSC 723 / LING 645: Intro to Computational Linguistics

BERP Bigram Probabilities: Use Unigram Count

Normalization: divide bigram count by unigram count of first word.

I Want To Eat Chinese Food Lunch

3437 1215 3256 938 213 1506 459

Computing the probability of I I
– P(I|I) = C(I I)/C(I) = 8 / 3437 = .0023

A bigram grammar is an NxN matrix of probabilities, where N is the vocabulary size

Page 41: CMSC 723 / LING 645: Intro to Computational Linguistics

Learning a Bigram Grammar

The formula P(wn|wn-1) = C(wn-1wn)/C(wn-1) is used for bigram “parameter estimation”

Relative frequency

Maximum Likelihood Estimation (MLE):

Parameter set maximizes likelihood of training set T given model M — P(T|M).
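A minimal sketch of relative-frequency (MLE) bigram estimation from a toy corpus; the corpus and sentences are invented for illustration:

```python
from collections import Counter

# Toy training corpus (invented); each sentence gets a <start> marker.
corpus = [["<start>", "I", "want", "to", "eat"],
          ["<start>", "I", "want", "Chinese", "food"]]

unigram_counts = Counter(w for sent in corpus for w in sent)
bigram_counts = Counter((sent[i], sent[i + 1]) for sent in corpus for i in range(len(sent) - 1))

def p_mle(w, prev):
    """P(w|prev) = C(prev w) / C(prev), the relative-frequency (MLE) estimate."""
    return bigram_counts[(prev, w)] / unigram_counts[prev]

print(p_mle("want", "I"))   # C(I want)/C(I) = 2/2 = 1.0
print(p_mle("to", "want"))  # C(want to)/C(want) = 1/2 = 0.5
```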

Page 42: CMSC 723 / LING 645: Intro to Computational Linguistics

What do we learn about the language?

What about...
– P(I | I) = .0023
– P(I | want) = .0025
– P(I | food) = .013

What's being captured with ...
– P(want | I) = .32
– P(to | want) = .65
– P(eat | to) = .26
– P(food | Chinese) = .56
– P(lunch | eat) = .055

Page 43: CMSC 723 / LING 645: Intro to Computational Linguistics

Approximating Shakespeare

As we increase the value of N, the accuracy of the n-gram model increases

Generating sentences with random unigrams...
– Every enter now severally so, let
– Hill he late speaks; or! a more to leg less first you enter

With bigrams...
– What means, sir. I confess she? then all sorts, he is trim, captain.
– Why dost stand forth thy canopy, forsooth; he is this palpable hit the King Henry.

Page 44: CMSC 723 / LING 645: Intro to Computational Linguistics

More Shakespeare

Trigrams
– Sweet prince, Falstaff shall die.
– This shall forbid it should be branded, if renown made it empty.

Quadrigrams
– What! I will go seek the traitor Gloucester.
– Will you not tell me who I am?
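Sentences like these are produced by sampling from the model. Here is a minimal sketch of sampling from a bigram model; the distribution below is an invented placeholder rather than one trained on Shakespeare:

```python
import random

# Placeholder bigram distribution: prev word -> list of (next word, probability).
# A real version would be estimated from the Shakespeare corpus.
bigram_dist = {
    "<start>": [("sweet", 0.5), ("what", 0.5)],
    "sweet":   [("prince", 1.0)],
    "prince":  [("<end>", 1.0)],
    "what":    [("means", 1.0)],
    "means":   [("<end>", 1.0)],
}

def sample_sentence():
    words, prev = [], "<start>"
    while True:
        nexts, probs = zip(*bigram_dist[prev])
        prev = random.choices(nexts, weights=probs)[0]  # draw next word ~ P(w|prev)
        if prev == "<end>":
            return " ".join(words)
        words.append(prev)

print(sample_sentence())  # e.g. "sweet prince" or "what means"
```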

Page 45: CMSC 723 / LING 645: Intro to Computational Linguistics

Dependence of N-gram Models on Training Set

There are 884,647 tokens, with 29,066 word form types, in about a one million word Shakespeare corpus

Shakespeare produced 300,000 bigram types out of 844 million possible bigrams: so, 99.96% of the possible bigrams were never seen (have zero entries in the table).

Quadrigrams worse: What's coming out looks like Shakespeare because it is Shakespeare.

All those zeroes are causing problems.

Page 46: CMSC 723 / LING 645: Intro to Computational Linguistics

N-Gram Training Sensitivity

If we repeated the Shakespeare experiment but trained on a Wall Street Journal corpus, there would be little overlap in the output

This has major implications for corpus selection or design

unigram: Months the my and issue of year foreign new exchange’s september were recession exchange new endorsed a acquire to six executives.

bigram: Last December through the way to preserve the Hudson corporation N.B.E.C. Taylor would seem to complet the major central planners one point five percent of U.S.E has laready old M.X. …

trigram: They also point to ninety nine point six billion dollars from twohundred four oh six three percent …

Page 47: CMSC 723 / LING 645: Intro to Computational Linguistics

Some Useful Empirical Observations

A small number of events occur with high frequency
A large number of events occur with low frequency
You can quickly collect statistics on the high frequency events
You might have to wait an arbitrarily long time to get valid statistics on low frequency events

Some of the zeroes in the table are really zeroes. But others are simply low frequency events you haven't seen yet. How to fix?

Page 48: CMSC 723 / LING 645: Intro to Computational Linguistics

Smoothing Techniques

Every ngram training matrix is sparse, even for very large corpora (Zipf’s law)

Computing perplexity requires non-zero probabilities

Unsmoothed: P(wi) = ci/N

Solution: estimate the likelihood of unseen ngrams

Add-one smoothing:
– Add 1 to every ngram count
– Normalize by N/(N+V)
– Smoothed count: ci′ = (ci+1) · N/(N+V)
– Smoothed probability: P′(wi) = ci′/N

Page 49: CMSC 723 / LING 645: Intro to Computational Linguistics

Add-One Smoothed Bigrams

P(wn|wn-1) = C(wn-1wn)/C(wn-1)

P′(wn|wn-1) = [C(wn-1wn)+1]/[C(wn-1)+V]
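A minimal sketch of the add-one smoothed bigram estimate P′(wn|wn-1) = [C(wn-1wn)+1]/[C(wn-1)+V], with invented toy counts:

```python
from collections import Counter

# Toy counts (invented). V is the vocabulary size.
vocab = ["I", "want", "to", "eat", "food"]
V = len(vocab)
bigram_counts = Counter({("I", "want"): 8, ("want", "to"): 6, ("to", "eat"): 5})
unigram_counts = Counter({"I": 10, "want": 7, "to": 6, "eat": 5, "food": 2})

def p_addone(w, prev):
    """Add-one smoothed bigram probability: (C(prev w) + 1) / (C(prev) + V)."""
    return (bigram_counts[(prev, w)] + 1) / (unigram_counts[prev] + V)

print(p_addone("want", "I"))   # (8 + 1) / (10 + 5) = 0.6
print(p_addone("food", "I"))   # unseen bigram gets (0 + 1) / (10 + 5) ≈ 0.067
```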

Page 50: CMSC 723 / LING 645: Intro to Computational Linguistics

Add-One Smoothing Flaw

Adjusted count: ci′ = (ci+1) · N/(N+V)

Page 51: CMSC 723 / LING 645: Intro to Computational Linguistics

Discounted Smoothing: Witten-Bell

Witten-Bell Discounting
– A zero ngram is just an ngram you haven’t seen yet…
– Model unseen bigrams by the ngrams you’ve only seen once: T = total number of n-gram types in the data
– Total probability of unseen bigrams estimated as T/(N+T)
– View the training corpus as a series of events, one for each token (N) and one for each new type (T): MLE
– We can divide the probability mass equally among unseen bigrams. Let Z = number of unseen n-grams:
  ci′ = (T/Z) · N/(N+T) if ci = 0; else ci′ = ci · N/(N+T)
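A minimal sketch of Witten-Bell discounted counts, following the formulas above; the toy bigram counts are invented:

```python
from collections import Counter

# Toy bigram counts (invented). Vocabulary of 4 words -> 16 possible bigrams.
vocab = ["I", "want", "to", "eat"]
counts = Counter({("I", "want"): 8, ("want", "to"): 6, ("to", "eat"): 5})

N = sum(counts.values())        # total observed bigram tokens
T = len(counts)                 # observed bigram types
Z = len(vocab) ** 2 - T         # number of unseen bigrams

def witten_bell_count(bigram):
    """Discounted count: (T/Z)*N/(N+T) for unseen bigrams, c*N/(N+T) for seen ones."""
    c = counts[bigram]
    if c == 0:
        return (T / Z) * N / (N + T)
    return c * N / (N + T)

print(witten_bell_count(("I", "want")))  # 8 * 19/22 ≈ 6.91
print(witten_bell_count(("eat", "I")))   # (3/13) * 19/22 ≈ 0.199
```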

Page 52: CMSC 723 / LING 645: Intro to Computational Linguistics

Witten-Bell Bigram Counts

Add-one:      ci′ = (ci+1) · N/(N+V)

Witten-Bell:  ci′ = (T/Z) · N/(N+T)   if ci = 0
              ci′ = ci · N/(N+T)      otherwise

Page 53: CMSC 723 / LING 645: Intro to Computational Linguistics

Other Discounting Methods

Good-Turing Discounting
– Re-estimate the amount of probability mass for zero (or low count) ngrams by looking at ngrams with higher counts
– Estimate: c* = (c+1) · Nc+1 / Nc, where Nc is the number of ngrams occurring c times
– Assumes:
  • word bigrams follow a binomial distribution
  • we know the number of unseen bigrams (V×V − seen)
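A minimal sketch of the Good-Turing re-estimate c* = (c+1) · Nc+1 / Nc, using an invented count-of-counts table:

```python
# Toy count-of-counts table (invented): Nc[c] = number of bigram types seen exactly c times.
Nc = {1: 6, 2: 2, 3: 1}

def good_turing(c):
    """Re-estimated count c* = (c+1) * N_{c+1} / N_c."""
    return (c + 1) * Nc[c + 1] / Nc[c]

print(good_turing(1))  # 2 * N2/N1 = 2 * 2/6 ≈ 0.67  (singleton counts are discounted)
print(good_turing(2))  # 3 * N3/N2 = 3 * 1/2 = 1.5
```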

Page 54: CMSC 723 / LING 645: Intro to Computational Linguistics

Readings for next time

J&M Chapter 7.1-7.3