Transcript of: Vector Semantics (Natural Language Processing, Lecture 17), Carnegie Mellon University
Source: demo.clab.cs.cmu.edu/NLP/F19/files/slides/17-embeddings.pdf

Page 1:

Vector Semantics

Natural Language Processing

Lecture 17

Adapted from Jurafsky and Martin, v3

Page 2:

Why vector models of meaning? Computing the similarity between words

“fast” is similar to “rapid”

“tall” is similar to “height”

Question answering:

Q: "How tall is Mt. Everest?"
Candidate A: "The official height of Mount Everest is 29,029 feet"

2

Page 3:

Word similarity for plagiarism detection

Page 4:

Word similarity for historical linguistics: semantic change over time

4

Kulkarni, Al-Rfou, Perozzi, Skiena 2015; Sagi, Kaufmann, Clark 2013

[Figure: semantic broadening of "dog", "deer", and "hound" across three periods: <1250, Middle English (1350-1500), and Modern English (1500-1710)]

Page 5:

Problems with thesaurus-based meaning

• We don’t have a thesaurus for every language

• We can't have a thesaurus for every year
• For historical linguistics, we need to compare word meanings in year t to year t+1

• Thesauruses have problems with recall
• Many words and phrases are missing

• Thesauri work less well for verbs and adjectives

Page 6:

Distributional models of meaning = vector-space models of meaning = vector semantics

Intuitions: Zellig Harris (1954):

• “oculist and eye-doctor … occur in almost the same environments”

• "If A and B have almost identical environments we say that they are synonyms."

Firth (1957):

• "You shall know a word by the company it keeps!"

Page 7:

Intuition of distributional word similarity

• Nida example: Suppose I asked you, what is tesgüino?
    A bottle of tesgüino is on the table
    Everybody likes tesgüino
    Tesgüino makes you drunk
    We make tesgüino out of corn

• From context words humans can guess tesgüino means
• an alcoholic beverage like beer

• Intuition for algorithm:
• Two words are similar if they have similar word contexts.

Page 8:

Four kinds of vector models

Sparse vector representations:
1. Mutual-information weighted word co-occurrence matrices

Dense vector representations:
2. Singular value decomposition (and Latent Semantic Analysis)
3. Neural-network-inspired models (skip-grams, CBOW)
4. Brown clusters

8

Page 9:

Shared intuition

• Model the meaning of a word by “embedding” in a vector space.

• The meaning of a word is a vector of numbers
• Vector models are also called "embeddings".

• Contrast: word meaning is represented in many computational linguistic applications by a vocabulary index ("word number 545")

• Old philosophy joke: Q: What’s the meaning of life?

A: LIFE’

9

Page 10:

Vector Semantics

Words and co-occurrence vectors

Page 11:

Co-occurrence Matrices

• We represent how often a word occurs in a document
• Term-document matrix

• Or how often a word occurs with another word
• Term-term matrix (or word-word co-occurrence matrix, or word-context matrix)

Page 12:

Term-document matrix

• Each cell: count of word w in document d
• Each document is a count vector in ℕ^|V|: a column below

12

As You Like It Twelfth Night Julius Caesar Henry V

battle 1 1 8 15

soldier 2 2 12 36

fool 37 58 1 5

clown 6 117 0 0
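To make the construction concrete, here is a minimal sketch (my illustration, not code from the slides) that builds a term-document count matrix in plain Python; the toy `docs` dictionary and the `term_document_matrix` helper are made-up names.

```python
from collections import Counter

def term_document_matrix(docs):
    """Build a term-document count matrix.

    docs: dict mapping document name -> list of tokens.
    Returns (vocab, doc_names, matrix) where matrix[i][j] is the
    count of vocab[i] in document doc_names[j].
    """
    doc_names = sorted(docs)
    counts = {d: Counter(docs[d]) for d in doc_names}
    vocab = sorted({w for c in counts.values() for w in c})
    matrix = [[counts[d][w] for d in doc_names] for w in vocab]
    return vocab, doc_names, matrix

# Toy example in the spirit of the Shakespeare table above.
docs = {
    "As You Like It": ["fool", "fool", "clown", "battle"],
    "Julius Caesar": ["battle", "soldier", "soldier", "fool"],
}
vocab, names, M = term_document_matrix(docs)
for w, row in zip(vocab, M):
    print(w, row)
```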

Page 13:

Similarity in term-document matrices

Two documents are similar if their vectors are similar

13

As You Like It Twelfth Night Julius Caesar Henry V

battle 1 1 8 15

soldier 2 2 12 36

fool 37 58 1 5

clown 6 117 0 0

Page 14:

The words in a term-document matrix

• Each word is a count vector in ℕ^D: a row below

14

As You Like It Twelfth Night Julius Caesar Henry V

battle 1 1 8 15

soldier 2 2 12 36

fool 37 58 1 5

clown 6 117 0 0

Page 15:

The words in a term-document matrix

• Two words are similar if their vectors are similar

15

As You Like It Twelfth Night Julius Caesar Henry V

battle 1 1 8 15

soldier 2 2 12 36

fool 37 58 1 5

clown 6 117 0 0

Page 16:

The word-word or word-context matrix

• Instead of entire documents, use smaller contexts
• Paragraph
• Window of 4 words

• A word is now defined by a vector over counts of context words
• Instead of each vector being of length D, each vector is now of length |V|
• The word-word matrix is |V| x |V|
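As a rough sketch of how such a matrix can be collected (an illustration, not the lecture's code), the helper below counts co-occurrences within a symmetric window of ±4 tokens; `cooccurrence_counts` and `window` are made-up names.

```python
from collections import defaultdict

def cooccurrence_counts(tokens, window=4):
    """Count word-word co-occurrences within +/- `window` tokens."""
    counts = defaultdict(lambda: defaultdict(int))
    for i, w in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[w][tokens[j]] += 1
    return counts

tokens = "sugar a sliced lemon a tablespoonful of apricot preserve".split()
print(dict(cooccurrence_counts(tokens)["apricot"]))
```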

Page 17:

Word-word matrix: sample contexts of ±7 words

17

… …

aardvark computer data pinch result sugar …

apricot 0 0 0 1 0 1

pineapple 0 0 0 1 0 1

digital 0 2 1 0 1 0

information 0 1 6 0 4 0

Page 18:

Word-word matrix

• We showed only a 4x6 fragment, but the real matrix is 50,000 x 50,000
• So it's very sparse
• Most values are 0.
• That's OK, since there are lots of efficient algorithms for sparse matrices.

• The size of the windows depends on your goals
• The shorter the windows, the more syntactic the representation (1-3: very syntactic)
• The longer the windows, the more semantic the representation (4-10: more semantic)

Page 19:

2 kinds of co-occurrence between 2 words

• First-order co-occurrence (syntagmatic association):
• They are typically nearby each other.
• wrote is a first-order associate of book or poem.

• Second-order co-occurrence (paradigmatic association):
• They have similar neighbors.
• wrote is a second-order associate of words like said or remarked.

19

(Schütze and Pedersen, 1993)

Page 20:

Vector Semantics

Positive Pointwise Mutual Information (PPMI)

Page 21:

Problem with raw counts

• Raw word frequency is not a great measure of association between words
• It's very skewed
• "the" and "of" are very frequent, but maybe not the most discriminative

• We'd rather have a measure that asks whether a context word is particularly informative about the target word.

• Positive Pointwise Mutual Information (PPMI)

21

Page 22:

Pointwise Mutual Information

Pointwise mutual information: Do events x and y co-occur more often than if they were independent?

PMI between two words: (Church & Hanks 1989)

Do words x and y co-occur more than if they were independent?

PMI(x, y) = log2 [ P(x,y) / (P(x) P(y)) ]

Page 23:

Positive Pointwise Mutual Information
• PMI ranges from −∞ to +∞

• But the negative values are problematic
• Things are co-occurring less than we expect by chance
• Unreliable without enormous corpora
• Imagine w1 and w2 whose probabilities are each 10^-6
• Hard to be sure p(w1,w2) is significantly different from 10^-12
• Plus it's not clear people are good at "unrelatedness"

• So we just replace negative PMI values by 0

• Positive PMI (PPMI) between word1 and word2:
• PPMI(word1, word2) = max( log2 [ P(word1,word2) / (P(word1) P(word2)) ], 0 )

Page 24:

Computing PPMI on a term-context matrix

• Matrix F with W rows (words) and C columns (contexts)

• f_ij is the number of times word w_i occurs in context c_j

24

p_ij = f_ij / ( Σ_{i=1..W} Σ_{j=1..C} f_ij )

p_i* = ( Σ_{j=1..C} f_ij ) / ( Σ_{i=1..W} Σ_{j=1..C} f_ij )

p_*j = ( Σ_{i=1..W} f_ij ) / ( Σ_{i=1..W} Σ_{j=1..C} f_ij )

pmi_ij = log2 ( p_ij / (p_i* p_*j) )

ppmi_ij = pmi_ij if pmi_ij > 0, otherwise 0
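The following is a small numpy sketch of the computation above (my illustration, not code from the lecture); `ppmi` is a hypothetical helper name, and the optional `k` argument adds the add-k (Laplace) smoothing discussed a few slides later.

```python
import numpy as np

def ppmi(F, k=0.0):
    """Compute the PPMI matrix from a word-context count matrix F.

    F: array of shape (W, C), F[i, j] = count of word i with context j.
    k: optional add-k smoothing applied to every cell before normalizing.
    """
    F = np.asarray(F, dtype=float) + k
    total = F.sum()
    p_ij = F / total                               # joint probabilities
    p_i = F.sum(axis=1, keepdims=True) / total     # row (word) marginals
    p_j = F.sum(axis=0, keepdims=True) / total     # column (context) marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log2(p_ij / (p_i * p_j))
    pmi[~np.isfinite(pmi)] = 0.0                   # log2(0): treat as no association
    return np.maximum(pmi, 0.0)                    # clip negative PMI to 0

# Counts from the apricot/pineapple/digital/information example.
F = np.array([[0, 0, 1, 0, 1],
              [0, 0, 1, 0, 1],
              [2, 1, 0, 1, 0],
              [1, 6, 0, 4, 0]])
print(np.round(ppmi(F), 2))
```

Running this on the toy counts reproduces the PPMI values shown on the next slides (e.g. 2.25 for apricot/pinch and 0.57 for information/data).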

Page 25:

p(w=information, c=data) = 6/19 = .32
p(w=information) = 11/19 = .58
p(c=data) = 7/19 = .37

p(w,context)     computer  data  pinch  result  sugar   p(w)
apricot            0.00    0.00  0.05   0.00    0.05    0.11
pineapple          0.00    0.00  0.05   0.00    0.05    0.11
digital            0.11    0.05  0.00   0.05    0.00    0.21
information        0.05    0.32  0.00   0.21    0.00    0.58
p(context)         0.16    0.37  0.11   0.26    0.11

p_ij = f_ij / ( Σ_{i=1..W} Σ_{j=1..C} f_ij )
p(w_i) = ( Σ_{j=1..C} f_ij ) / N
p(c_j) = ( Σ_{i=1..W} f_ij ) / N

Page 26:

26

• pmi(information, data) = log2( .32 / (.37 * .58) ) = .58   (.57 using full precision)

pmi_ij = log2 ( p_ij / (p_i* p_*j) )

p(w,context)     computer  data  pinch  result  sugar   p(w)
apricot            0.00    0.00  0.05   0.00    0.05    0.11
pineapple          0.00    0.00  0.05   0.00    0.05    0.11
digital            0.11    0.05  0.00   0.05    0.00    0.21
information        0.05    0.32  0.00   0.21    0.00    0.58
p(context)         0.16    0.37  0.11   0.26    0.11

PPMI(w,context)  computer  data  pinch  result  sugar
apricot              -       -   2.25     -     2.25
pineapple            -       -   2.25     -     2.25
digital            1.66    0.00    -     0.00     -
information        0.00    0.57    -     0.47     -

Page 27:

Weighting PMI

• PMI is biased toward infrequent events
• Very rare words have very high PMI values

• Two solutions:
• Give rare words slightly higher probabilities
• Use add-one smoothing (which has a similar effect)

27

Page 28:

Weighting PMI: giving rare context words slightly higher probability

• Raise the context probabilities to the power α (α = 0.75 works well):
  P_α(c) = count(c)^α / Σ_c' count(c')^α
• PPMI_α(w, c) = max( log2 [ P(w,c) / (P(w) P_α(c)) ], 0 )

28

Page 29:

29

Use Laplace (add-k) smoothing

Add-2 smoothed count(w,context):
                 computer  data  pinch  result  sugar
apricot              2       2     3      2      3
pineapple            2       2     3      2      3
digital              4       3     2      3      2
information          3       8     2      6      2

p(w,context) [add-2]   computer  data  pinch  result  sugar   p(w)
apricot                  0.03    0.03  0.05   0.03    0.05    0.20
pineapple                0.03    0.03  0.05   0.03    0.05    0.20
digital                  0.07    0.05  0.03   0.05    0.03    0.24
information              0.05    0.14  0.03   0.10    0.03    0.36
p(context)               0.19    0.25  0.17   0.22    0.17

Page 30:

PPMI versus add-2 smoothed PPMI

30

PPMI(w,context) [add-2]   computer  data  pinch  result  sugar
apricot                     0.00    0.00  0.56   0.00    0.56
pineapple                   0.00    0.00  0.56   0.00    0.56
digital                     0.62    0.00  0.00   0.00    0.00
information                 0.00    0.58  0.00   0.37    0.00

PPMI(w,context)           computer  data  pinch  result  sugar
apricot                       -       -   2.25     -     2.25
pineapple                     -       -   2.25     -     2.25
digital                     1.66    0.00    -     0.00     -
information                 0.00    0.57    -     0.47     -

Page 31:

Vector Semantics

Measuring similarity: the cosine

Page 32:

Measuring similarity

• Given 2 target words v and w

• We’ll need a way to measure their similarity.

• Most measures of vector similarity are based on the:

• Dot product or inner product from linear algebra

• High when two vectors have large values in the same dimensions.

• Low (in fact 0) for orthogonal vectors with zeros in complementary distribution

Page 33:

Problem with dot product

• The dot product is larger if the vector is longer. Vector length: |v| = √( Σ_{i=1..N} v_i² )

• Vectors are longer if they have higher values in each dimension
• That means more frequent words will have higher dot products
• That's bad: we don't want a similarity metric to be sensitive to word frequency

33

Page 34:

Solution: cosine

• Just divide the dot product by the lengths of the two vectors!

• This turns out to be the cosine of the angle between them!

34

Page 35:

Cosine for computing similarity

cos(v, w) = (v · w) / (|v| |w|)                       (dot product divided by the vector lengths)
          = (v / |v|) · (w / |w|)                      (equivalently, the dot product of unit vectors)
          = Σ_{i=1..N} v_i w_i / ( √(Σ_{i=1..N} v_i²) · √(Σ_{i=1..N} w_i²) )

v_i is the PPMI value for word v in context i; w_i is the PPMI value for word w in context i.
cos(v, w) is the cosine similarity of v and w.

Sec. 6.3

Page 36:

Cosine as a similarity metric

• -1: vectors point in opposite directions

• +1: vectors point in the same direction

• 0: vectors are orthogonal

• Raw frequency or PPMI values are non-negative, so the cosine ranges 0-1

36

Page 37:

              large  data  computer
apricot         2      0      0
digital         0      1      2
information     1      6      1

Which pair of words is more similar?

cosine(apricot, information) = (2·1 + 0·6 + 0·1) / ( √(2²+0²+0²) · √(1²+6²+1²) ) = 2 / (2·√38) ≈ .16

cosine(digital, information) = (0·1 + 1·6 + 2·1) / ( √(0²+1²+2²) · √(1²+6²+1²) ) = 8 / (√5·√38) ≈ .58

cosine(apricot, digital) = (2·0 + 0·1 + 0·2) / ( √(2²+0²+0²) · √(0²+1²+2²) ) = 0
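A quick numpy check of these numbers (my sketch, not from the slides; `cosine` is an illustrative helper):

```python
import numpy as np

def cosine(v, w):
    """Cosine similarity between two count (or PPMI) vectors."""
    v, w = np.asarray(v, float), np.asarray(w, float)
    return float(v @ w / (np.linalg.norm(v) * np.linalg.norm(w)))

vecs = {"apricot": [2, 0, 0], "digital": [0, 1, 2], "information": [1, 6, 1]}
print(round(cosine(vecs["apricot"], vecs["information"]), 2))  # ~0.16
print(round(cosine(vecs["digital"], vecs["information"]), 2))  # ~0.58
print(round(cosine(vecs["apricot"], vecs["digital"]), 2))      # 0.0
```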

Page 38:

Visualizing vectors and angles

[Figure: the vectors for "digital", "apricot", and "information" plotted in two dimensions (Dimension 1: 'large', Dimension 2: 'data'), showing the angles between them]

              large  data
apricot         2      0
digital         0      1
information     1      6

Page 39:

Clustering vectors to visualize similarity in co-occurrence matrices

Rohde, Gonnerman, Plaut Modeling Word Meaning Using Lexical Co-Occurrence

[Figure 8: Multidimensional scaling for three noun classes (body parts, animals, and places), e.g. HEAD, HAND, FACE, DOG, CAT, AMERICA, EUROPE, ...]

[Figure 9: Hierarchical clustering for three noun classes using distances based on vector correlations]

Rohde et al. (2006)

Page 40:

Other possible similarity measures

Page 41:

Vector Semantics

Adding syntax

Page 42:

Using syntax to define a word's context

• Zellig Harris (1968)

"The meaning of entities, and the meaning of grammatical relations among them, is related to the restriction of combinations of these entities relative to other entities"

• Two words are similar if they have similar syntactic contexts

Duty and responsibility have similar syntactic distributions:

Modified by adjectives: additional, administrative, assumed, collective, congressional, constitutional ...

Objects of verbs: assert, assign, assume, attend to, avoid, become, breach ...

Page 43:

Co-occurrence vectors based on syntactic dependencies

• Each dimension: a context word in one of R grammatical relations
• e.g., subject-of "absorb"

• Instead of a vector of |V| features, a vector of R|V|

• Example: counts for the word cell:

Dekang Lin, 1998, "Automatic Retrieval and Clustering of Similar Words"

Page 44:

Syntactic dependencies for dimensions

• Alternative (Padó and Lapata 2007):
• Instead of having a |V| x R|V| matrix
• Have a |V| x |V| matrix
• But the co-occurrence counts aren't just counts of words in a window
• Instead, they are counts of words that occur with the target in one of R dependencies (subject, object, etc.)
• So M("cell","absorb") = count(subj(cell,absorb)) + count(obj(cell,absorb)) + count(pobj(cell,absorb)), etc.

44

Page 45:

PMI applied to dependency relations

• "Drink it" is more common than "drink wine"
• But "wine" is a better "drinkable" thing than "it"

Object of “drink” Count PMI

it 3 1.3

anything 3 5.2

wine 2 9.3

tea 2 11.8

liquid 2 10.5

Hindle, Don. 1990. Noun Classification from Predicate-Argument Structure. ACL

Object of “drink” Count PMI

tea 2 11.8

liquid 2 10.5

wine 2 9.3

anything 3 5.2

it 3 1.3

Page 46:

Alternative to PPMI for measuring association

• tf-idf (that’s a hyphen not a minus sign)

• The combination of two factors:
• Term frequency (Luhn 1957): frequency of the word (can be logged)
• Inverse document frequency (IDF) (Sparck Jones 1972)

• N is the total number of documents
• df_i = "document frequency of word i" = # of documents containing word i

idf_i = log( N / df_i )

• w_ij = weight of word i in document j:

w_ij = tf_ij × idf_i
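A small illustrative sketch of these two factors in numpy (not the lecture's code; `tfidf` is a made-up helper, and the choice of log base 10 is an assumption, since the slide leaves it unspecified):

```python
import numpy as np

def tfidf(F):
    """tf-idf weighting of a term-document count matrix.

    F: array of shape (num_terms, num_docs) of raw counts.
    Returns a matrix of w_ij = tf_ij * idf_i.
    """
    F = np.asarray(F, dtype=float)
    N = F.shape[1]                  # total number of documents
    df = (F > 0).sum(axis=1)        # document frequency of each term
    idf = np.log10(N / df)          # idf_i = log(N / df_i)
    return F * idf[:, None]

# Counts from the Shakespeare term-document matrix above.
F = np.array([[1, 1, 8, 15],    # battle
              [2, 2, 12, 36],   # soldier
              [37, 58, 1, 5],   # fool
              [6, 117, 0, 0]])  # clown
print(np.round(tfidf(F), 2))
```

With only four documents most idf values come out as 0 (every term but "clown" appears in every play), which is expected behavior for such a tiny collection.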

Page 47:

tf-idf not generally used for word-word similarity

• But it is by far the most common weighting when we are considering the relationship of words to documents

47

Page 48:

Vector Semantics

Dense Vectors

Page 49:

Sparse versus dense vectors

• PPMI vectors are
• long (length |V| = 20,000 to 50,000)
• sparse (most elements are zero)

• Alternative: learn vectors which are
• short (length 200-1000)
• dense (most elements are non-zero)

49

Page 50:

Sparse versus dense vectors

• Why dense vectors?

• Short vectors may be easier to use as features in machine learning (fewer weights to tune)

• Dense vectors may generalize better than storing explicit counts

• They may do better at capturing synonymy:

• car and automobile are synonyms, but are represented as distinct dimensions; this fails to capture the similarity between a word with car as a neighbor and a word with automobile as a neighbor

50

Page 51:

Three methods for getting short dense vectors

• Singular Value Decomposition (SVD)
• A special case of this is called LSA – Latent Semantic Analysis

• "Neural Language Model"-inspired predictive models
• skip-grams and CBOW

• Brown clustering

51

Page 52:

Vector Semantics

Dense Vectors via SVD

Page 53:

Intuition

• Approximate an N-dimensional dataset using fewer dimensions

• By first rotating the axes into a new space
• In which the highest-order dimension captures the most variance in the original dataset
• And the next dimension captures the next most variance, etc.

• Many such (related) methods:
• PCA – principal components analysis

• Factor Analysis

• SVD

53

Page 54:

54

Dimensionality reduction

Page 55:

Singular Value Decomposition

55

Any rectangular w x c matrix X equals the product of 3 matrices:

W: rows corresponding to the original rows (words), but its m columns represent dimensions in a new latent space, such that

• the m column vectors are orthogonal to each other

• columns are ordered by the amount of variance in the dataset each new dimension accounts for

S: diagonal m x m matrix of singular values expressing the importance of each dimension

C: columns corresponding to the original columns (contexts), but its m rows correspond to the singular values

Page 56:

Singular Value Decomposition

Landauer and Dumais 1997

Page 57:

SVD applied to the term-document matrix: Latent Semantic Analysis

• Instead of keeping all m dimensions, we just keep the top k singular values. Let's say 300.

• The result is a least-squares approximation to the original X

• But instead of multiplying the matrices back together, we'll just make use of W.

• Each row of W:
• A k-dimensional vector
• Representing word w

57

Deerwester et al. (1988)

Page 58:

LSA more details

• 300 dimensions are commonly used

• The cells are commonly weighted by a product of two weights
• Local weight: log term frequency
• Global weight: either idf or an entropy measure

58

Page 59:

Let’s return to PPMI word-word matrices

• Can we apply SVD to them?

59

Page 60:

SVD applied to term-term matrix

60 (I’m simplifying here by assuming the matrix has rank |V|)

Page 61:

Truncated SVD on term-term matrix

61

Page 62:

Truncated SVD produces embeddings

62

• Each row of the W matrix is a k-dimensional representation of each word w

• k might range from 50 to 1000

• Generally we keep the top k dimensions, but some experiments suggest that getting rid of the top 1 dimension, or even the top 50 dimensions, is helpful (Lapesa and Evert 2014).
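A rough numpy sketch of truncated SVD on a PPMI word-word matrix (my illustration, under the assumption that we simply keep the first k left singular vectors; `svd_embeddings` is a made-up name):

```python
import numpy as np

def svd_embeddings(X, k=2):
    """Return k-dimensional word embeddings from a (word x context) matrix X."""
    # Full SVD: X = W @ np.diag(s) @ C, with singular values sorted descending.
    W, s, C = np.linalg.svd(X, full_matrices=False)
    # Truncate: keep only the top-k latent dimensions; each row of the result is a word vector.
    return W[:, :k]

# PPMI-style toy matrix (rows: apricot, pineapple, digital, information).
X = np.array([[0.0, 0.0, 2.25, 0.0, 2.25],
              [0.0, 0.0, 2.25, 0.0, 2.25],
              [1.66, 0.0, 0.0, 0.0, 0.0],
              [0.0, 0.57, 0.0, 0.47, 0.0]])
print(np.round(svd_embeddings(X, k=2), 2))
```

Whether to scale the rows by the singular values (or a power of them) is a common design choice that the sketch leaves out.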

Page 63:

Embeddings versus sparse vectors

• Dense SVD embeddings sometimes work better than sparse PPMI matrices at tasks like word similarity
• Denoising: low-order dimensions may represent unimportant information
• Truncation may help the models generalize better to unseen data.
• Having a smaller number of dimensions may make it easier for classifiers to properly weight the dimensions for the task.
• Dense models may do better at capturing higher-order co-occurrence.

63

Page 64:

Vector Semantics

Embeddings inspired by neural language models:

skip-grams and CBOW

Page 65:

Prediction-based models: an alternative way to get dense vectors

• Skip-gram (Mikolov et al. 2013a) and CBOW (Mikolov et al. 2013b)

• Learn embeddings as part of the process of word prediction.

• Train a neural network to predict neighboring words
• Inspired by neural net language models.

• In so doing, learn dense embeddings for the words in the training corpus.

• Advantages:
• Fast, easy to train (much faster than SVD)
• Available online in the word2vec package
• Including sets of pretrained embeddings!

Page 66:

Skip-grams

• Predict each neighboring word
• in a context window of 2C words
• from the current word.

• So for C=2, we are given word w_t and predicting these 4 words:

[w_{t-2}, w_{t-1}, w_{t+1}, w_{t+2}]

66

Page 67:

Skip-grams learn 2 embeddings for each w

Input embedding v, in the input matrix W
• Column i of the input matrix W is the d-dimensional embedding v_i for word i in the vocabulary.

Output embedding v′, in the output matrix W′
• Row i of the output matrix W′ is the d-dimensional embedding v′_i for word i in the vocabulary.

Page 68:

Setup

• Walking through the corpus, pointing at word w(t), whose index in the vocabulary is j, so we'll call it w_j (1 ≤ j ≤ |V|).

• Let's predict w(t+1), whose index in the vocabulary is k (1 ≤ k ≤ |V|). Hence our task is to compute P(w_k | w_j).

68

Page 69:

One-hot vectors

• A vector of length |V|

• 1 for the target word and 0 for other words

• So if “popsicle” is vocabulary word 5

• The one-hot vector is

• [0,0,0,0,1,0,0,0,0…….0]

69

Page 70:

70

Skip-gram

Page 71:

71

Skip-gram

h = v_j
o = W′h

Page 72:

72

Skip-gram

h = v_j
o = W′h
o_k = v′_k · h = v′_k · v_j

Page 73:

Turning outputs into probabilities

• o_k = v′_k · v_j

• We use the softmax to turn the outputs into probabilities:

p(w_k | w_j) = exp(v′_k · v_j) / Σ_{w in V} exp(v′_w · v_j)

73
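A tiny numpy sketch of this forward pass (illustrative only; the matrices here are random stand-ins for the learned W and W′, and the shapes are one common convention rather than the only one):

```python
import numpy as np

rng = np.random.default_rng(0)
V, d = 6, 4                      # toy vocabulary size and embedding dimension
W = rng.normal(size=(d, V))      # input embeddings: column j is v_j
W_out = rng.normal(size=(V, d))  # output embeddings: row k is v'_k

j = 2                            # index of the current (center) word w_j
h = W[:, j]                      # hidden layer is just the input embedding v_j
o = W_out @ h                    # o_k = v'_k . v_j for every k
p = np.exp(o) / np.exp(o).sum()  # softmax over the vocabulary
print(np.round(p, 3), p.sum())   # probabilities P(w_k | w_j), summing to 1
```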

Page 74:

Embeddings from W and W’

• Since we have two embeddings, vj and v’j for each word wj

• We can either:

• Just use vj

• Sum them

• Concatenate them to make a double-length embedding

74

Page 75:

But wait; how do we learn the embeddings?

75

Page 76:

Relation between skip-grams and PMI?

• If we multiply W W′ᵀ

• We get a |V| x |V| matrix M, each entry m_ij corresponding to some association between input word i and output word j

• Levy and Goldberg (2014b) show that skip-gram reaches its optimum just when this matrix is a shifted version of PMI:

W W′ᵀ = M_PMI − log k

• So skip-gram is implicitly factoring a shifted version of the PMI matrix into the two embedding matrices.

Page 77:

CBOW (Continuous Bag of Words)

77

Page 78:

Properties of embeddings

78

• Nearest words to some embeddings (Mikolov et al. 2013)

Page 79:

Embeddings capture relational meaning?

vector('king') - vector('man') + vector('woman') ≈ vector('queen')

vector('Paris') - vector('France') + vector('Italy') ≈ vector('Rome')

79
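A sketch of how such an analogy query can be run over any word→vector mapping (for example, vectors loaded from word2vec or GloVe); the `analogy` helper and the tiny hand-made `emb` dictionary are illustrative, not part of the slides:

```python
import numpy as np

def analogy(a, b, c, emb):
    """Return the word whose vector is closest (by cosine) to emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    best, best_sim = None, -np.inf
    for word, vec in emb.items():
        if word in (a, b, c):
            continue                      # exclude the query words themselves
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Toy 2-d embeddings chosen by hand so the offsets line up.
emb = {w: np.array(v, float) for w, v in {
    "man": [1.0, 0.0], "woman": [1.0, 1.0],
    "king": [3.0, 0.1], "queen": [3.0, 1.1], "apple": [0.2, 0.3],
}.items()}
print(analogy("man", "king", "woman", emb))  # expected: queen
```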

Page 80:

Can I train embeddings on all of Wikipedia?

Good embeddings need lots of (appropriate) data

But there are pretrained models

Word2vec

GloVe

But there's more

BERT (and ELMo): context-dependent word vectors

"Things are always better with BERT" (or the thing better than BERT)

Page 81:

Vector Semantics

Brown clustering

Page 82:

Brown clustering

• An agglomerative clustering algorithm that clusters words based on which words precede or follow them

• These word clusters can be turned into a kind of vector

• We’ll give a very brief sketch here.

82

Page 83:

Brown clustering algorithm

• Each word is initially assigned to its own cluster.

• We then consider merging each pair of clusters; the highest-quality merge is chosen.
• Quality = merges two words that have similar probabilities of preceding and following words
• (More technically, quality = smallest decrease in the likelihood of the corpus according to a class-based language model)

• Clustering proceeds until all words are in one big cluster.

83

Page 84:

Brown Clusters as vectors

• By tracing the order in which clusters are merged, the model builds a binary tree from bottom to top.

• Each word represented by binary string = path from root to leaf

• Each intermediate node is a cluster

• Chairman is 0010, “months” = 01, and verbs = 1

84

Brown Algorithm

• Words merged according to contextual similarity

• Clusters are equivalent to bit-string prefixes

• Prefix length determines the granularity of the clustering

[Figure: example binary merge tree; leaves include run, walk, sprint, chairman, CEO, November, October, and president, with bit-string paths such as 0010 for "chairman"]
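A minimal sketch of how Brown-cluster bit strings are typically used as features (assuming a word→bit-string mapping like the one in the figure; the function and dictionary names are illustrative):

```python
def brown_prefix_features(word, clusters, prefix_lengths=(2, 4, 6)):
    """Return bit-string prefix features for a word, given Brown cluster paths."""
    path = clusters.get(word)
    if path is None:
        return []
    return [f"brown_{n}={path[:n]}" for n in prefix_lengths]

# Toy cluster paths in the spirit of the tree above.
clusters = {"chairman": "0010", "CEO": "0011", "November": "0100", "run": "10"}
print(brown_prefix_features("chairman", clusters))
# ['brown_2=00', 'brown_4=0010', 'brown_6=0010']
```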

Page 85:

Brown cluster examples

85

Page 86:

Class-based language model

• Suppose each word belongs to some class c_i:

86

P(w_i | w_{i-1}) = P(c_i | c_{i-1}) · P(w_i | c_i)

P(corpus | C) = Π_{i=1..n} P(c_i | c_{i-1}) · P(w_i | c_i)

(Brown et al. 1992; see SLP3 ch. 19.7. Class-based language models are generally not used as language models for applications like machine translation or speech recognition, but they are an important component in Brown clustering.)
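A small sketch of the class-based LM probability above (illustrative only; it assumes you already have word→class assignments and the two conditional probability tables, which here are tiny hand-made dictionaries):

```python
import math

def class_lm_logprob(words, word2class, p_class_given_prev, p_word_given_class):
    """log P(corpus | C) = sum_i [ log P(c_i | c_{i-1}) + log P(w_i | c_i) ]."""
    logp = 0.0
    prev_class = "<s>"
    for w in words:
        c = word2class[w]
        logp += math.log(p_class_given_prev[(c, prev_class)])
        logp += math.log(p_word_given_class[(w, c)])
        prev_class = c
    return logp

# Tiny hand-made example with two classes.
word2class = {"run": "VERB", "walk": "VERB", "November": "MONTH"}
p_class_given_prev = {("VERB", "<s>"): 0.6, ("MONTH", "VERB"): 0.5}
p_word_given_class = {("run", "VERB"): 0.4, ("November", "MONTH"): 0.9}
print(class_lm_logprob(["run", "November"], word2class,
                       p_class_given_prev, p_word_given_class))
```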

Page 87:

Vector Semantics

Evaluating similarity

Page 88:

Evaluating similarity

• Extrinsic (task-based, end-to-end) evaluation:
• Question answering
• Spell checking
• Essay grading

• Intrinsic evaluation:
• Correlation between algorithm and human word-similarity ratings
• WordSim-353: 353 noun pairs rated 0-10, e.g. sim(plane, car) = 5.77
• Taking TOEFL multiple-choice vocabulary tests

Levied is closest in meaning to: imposed, believed, requested, correlated
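For the intrinsic evaluation, the number usually reported is the Spearman rank correlation between model similarities and the human ratings. A sketch with scipy (illustrative; it assumes a list of (word1, word2, human_score) tuples and some `model_sim` function of your own):

```python
from scipy.stats import spearmanr

def evaluate_wordsim(pairs, model_sim):
    """Spearman correlation between human ratings and model similarities.

    pairs: iterable of (word1, word2, human_score) tuples (e.g. WordSim-353).
    model_sim: function mapping (word1, word2) -> similarity score.
    """
    human = [score for _, _, score in pairs]
    model = [model_sim(w1, w2) for w1, w2, _ in pairs]
    rho, _ = spearmanr(human, model)
    return rho

# Toy usage with a deliberately crude character-overlap "similarity".
pairs = [("plane", "car", 5.77), ("tiger", "cat", 7.35), ("king", "cabbage", 0.23)]
print(evaluate_wordsim(pairs, lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))))
```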

Page 89:

Summary

• Distributional (vector) models of meaning
• Sparse (PPMI-weighted word-word co-occurrence matrices)
• Dense:
• Word-word SVD, 50-2000 dimensions
• Skip-grams and CBOW (pretrained: Word2Vec, GloVe, BERT)
• Brown clusters, 5-20 binary dimensions

89