Transcript of “Economics and Probabilistic Machine Learning”, David M. Blei, Departments of Computer Science and Statistics, Columbia University

Page 1

Economics and Probabilistic Machine Learning

David M. Blei, Departments of Computer Science and Statistics, Columbia University

Page 2

Modern probabilistic modeling
An efficient framework for discovering meaningful patterns in massive data.

Page 3

Machine Learning in Practice

First Pass
– “Cookbook + Shoehorn + Duct Tape”.
– Many available packages.
– Typically fast and scalable.

Generative Prob Models
– Domain-specific knowledge and assumptions.
– Challenging to implement.
– May not be fast or scalable.

How to use traditional machine learning and statistics to solve modern problems

Page 4

Probabilistic machine learning: tailored models for the problem at hand.

Page 5

Probabilistic machine learning: tailored models for the problem at hand.

- Compose and connect reusable parts
- Driven by disciplinary knowledge and its questions
- Large-scale data, both in terms of data points and data dimension
- Focus on discovering and using structure in unstructured data
- Exploratory, observational, causal analyses

Page 6

Machine Learning in Practice


Many software packages available; typically fast and scalable

Page 7

More challenging to implement; may not be fast or scalable

Page 8

[Diagram: the probabilistic pipeline, KNOWLEDGE & QUESTION → make assumptions → DATA → discover patterns → predict & explore]


Figure S2: Population structure inferred from the TGP data set using the TeraStructure algorithm at three values for the number of populations K (K = 7, 8, 9). The visualization of the θ's in the figure shows patterns consistent with the major geographical regions. Some of the clusters identify a specific region (e.g. red for Africa) while others represent admixture between regions (e.g. green for Europeans and Central/South Americans). The presence of clusters that are shared between different regions demonstrates the more continuous nature of the structure. The new cluster from K = 7 to K = 8 matches structure differentiating between American groups. For K = 9, the new cluster is unpopulated.


The probabilistic pipeline

- Design models that reflect our domain expertise and knowledge
- Given data, compute the approximate posterior of hidden variables
- Use the computation to predict the future or explore the patterns in your data.

Page 9

[Figure: admixture proportions (probabilities across 7 populations) for the HGDP groups, from Adygei through Yoruba]

Population analysis of 2 billion genetic measurements

Page 10

Communities discovered in a 3.7M node network of U.S. Patents

Page 11

[Figure: 15 topics, each shown as its ten most frequent words]

Topic 1: Game, Second, Season, Team, Play, Games, Players, Points, Coach, Giants
Topic 2: Street, School, House, Life, Children, Family, Says, Night, Man, Know
Topic 3: Life, Says, Show, Man, Director, Television, Film, Story, Movie, Films
Topic 4: House, Life, Children, Man, War, Book, Story, Books, Author, Novel
Topic 5: Street, House, Night, Place, Park, Room, Hotel, Restaurant, Garden, Wine
Topic 6: House, Bush, Political, Party, Clinton, Campaign, Republican, Democratic, Senator, Democrats
Topic 7: Percent, Street, House, Building, Real, Space, Development, Square, Housing, Buildings
Topic 8: Game, Second, Team, Play, Won, Open, Race, Win, Round, Cup
Topic 9: Game, Season, Team, Run, League, Games, Hit, Baseball, Yankees, Mets
Topic 10: Government, Officials, War, Military, Iraq, Army, Forces, Troops, Iraqi, Soldiers
Topic 11: School, Life, Children, Family, Says, Women, Help, Mother, Parents, Child
Topic 12: Percent, Business, Market, Companies, Stock, Bank, Financial, Fund, Investors, Funds
Topic 13: Government, Life, War, Women, Political, Black, Church, Jewish, Catholic, Pope
Topic 14: Street, Show, Art, Museum, Works, Artists, Artist, Gallery, Exhibition, Paintings
Topic 15: Street, Yesterday, Police, Man, Case, Found, Officer, Shot, Officers, Charged

Figure 5: Topics found in a corpus of 1.8 million articles from the New York Times. Modified from Hoffman et al. (2013).


Topics found in 1.8M articles from the New York Times

Page 12

Neuroscience analysis of 220 million fMRI measurements

Page 13


Figure 1: Measure of “eventness,” or time-interval impact on cable content (Eq. 2). The grey background indicates the number of cables sent over time. This comes from the model fit discussed in Section 3. Capsule successfully detects real-world events from National Archives diplomatic cables.

We develop Capsule, a probabilistic model for detecting and characterizing important events in large collections of historical communication.

Figure 1 illustrates Capsule’s analysis of the two million cables from the National Archives. The y-axis is “eventness”, a loose measure of how strongly a week’s cables deviate from the usual diplomatic chatter to discuss a matter that is common to many embassies. (This is described in detail in Section 2.)

The figure shows that Capsule detects many of the important moments during this five-year span, including Indonesia’s invasion of East Timor (Dec. 7, 1975), the Air France hijacking and Israeli rescue operation (June 27–July 4, 1976), and the fall of Saigon (April 30, 1975). It also identifies other moments, such as the U.S. sharing lunar rocks with other countries (March 21, 1973) and the death of Mao Tse-tung (Sept. 9, 1976). Broadly speaking, Capsule gives a picture of the diplomatic history of these five years; it identifies and characterizes moments and source material that might be of interest to a historian.

The intuition behind Capsule is this: embassies write cables throughout the year, usually describing typical business, such as the visit of a government official. Sometimes, however, there is an important event, e.g., the fall of Saigon. When an event occurs, it pulls embassies away from their typical business to write cables that discuss what happened and its consequences. Thus Capsule effectively defines an “event” to be a moment in history when embassies deviate from what each usually discusses, and when each embassy deviates in the same way.

Capsule embeds this intuition into a Bayesian model. It uses hidden variables to encode what “typical business” means for each embassy, how to characterize the events of each week, and which cables discuss those events. Given a corpus, the corresponding posterior distribution provides a filter on the cables that isolates important moments in the diplomatic history. Figure 1 illustrates the mean of this posterior.

Capsule can be used to explore any corpora with the same underlying structure: text (or other discrete multivariate data) generated over time by known entities. This includes email, consumer behavior, social media posts, and opinion articles.

We present the model in Section 2, providing both a formal model specification and guidance on how to use its posterior to detect and characterize real-world events. In Section 3, we evaluate Capsule and explore its results on a collection of U.S. State Department cables and on simulated data.

Related work. We first review previous work on automatic event detection and other related concepts. In both univariate and multivariate settings, the goal is often prediction: analysts want to predict whether or not rare events will occur (Weiss and Hirsh, 1998; Das et al., 2008). Capsule, in contrast, is designed to help analysts explore and understand the original data: our goal is interpretability, not prediction.

Events uncovered from 2M diplomatic cables

Page 14

[Figure: a 2D map of item embeddings; one region clusters Mexican-food products, e.g., mission tortillas, tostitos and santitas tortilla chips, taco bell and rosarita refried beans, pace and herdez salsas, ortega taco shells, taco seasoning mixes, and shredded Mexican-blend cheeses]

Item characteristics learned from 5.6M purchases

Page 15

[Diagram: the probabilistic pipeline again, KNOWLEDGE & QUESTION → make assumptions → DATA → discover patterns → predict & explore]


Our perspective:
- Customized data analysis is important to many fields.
- This pipeline separates assumptions, computation, application.
- It facilitates solving data science problems.

Page 16

[Diagram: the probabilistic pipeline again]


What we need:
- Flexible and expressive components for building models
- Scalable and generic inference algorithms
- New applications to stretch probabilistic modeling into new areas

Page 17

[Diagram: the probabilistic pipeline again, now with a feedback loop]


Criticize model → Revise

Page 18

- Here I discuss two threads of research with Susan Athey’s group.
- Build probabilistic models to analyze large-scale consumer behavior; many consumers choosing among many items
- (Caveat: I’m not an economist.)
- Also joint with Francisco Ruiz

Page 19

- Vision: a utility model for baskets of items:

  U(basket) = [subs/comps] + [shopper] + [prices] + [other] + ε

- Goals
  – design, fit, check, and revise this model
  – answer counterfactual questions about purchase behavior

Page 20

Economic embeddings
Identifying substitutes and co-purchases in large-scale consumer data.

Page 21

- Word embeddings are a powerful approach for analyzing language.
- Discovers a distributed representation of words
  – Distances appear to capture semantic similarity.
- Many variants, but each reflects the same main ideas:
  – Words are placed in a low-dimensional latent space
  – A word’s probability depends on its distance to other words in its context

Bengio et al., A neural probabilistic language model. Journal of Machine Learning Research, 2003.
Mikolov et al., Efficient estimation of word representations in vector space. Neural Information Processing Systems, 2013.
Image: Paul Ginsparg

Page 22

[Figure: a cluster of Buitoni fresh pasta and sauce items, e.g., ravioli 4 cheese buitoni, tortellini 3 cheese buitoni, fettucine fresh egg buitoni, sauce alfredo buitoni, genova meat ravioli]

- Exponential family embeddings generalize this idea to other types of data.
- Use generalized linear models and exponential families
- Examples: neuroscience, recommender systems, networks, shopping baskets
- Bigger picture: a statistical perspective on ideas from neural networks.

Page 23

Zebrafish brain activity

MODEL | single neuron held out: K = 10 | K = 100 | 25% of neurons held out: K = 10 | K = 100
FA | 0.261 ± 0.004 | 0.251 ± 0.004 | 0.261 ± 0.004 | 0.252 ± 0.004
G-EMB (c = 10) | 0.230 ± 0.003 | 0.230 ± 0.003 | 0.242 ± 0.004 | 0.242 ± 0.004
G-EMB (c = 50) | 0.226 ± 0.003 | 0.222 ± 0.003 | 0.233 ± 0.003 | 0.230 ± 0.003
NG-EMB (c = 10) | 0.238 ± 0.004 | 0.233 ± 0.003 | 0.258 ± 0.004 | 0.244 ± 0.004

Table 2: Analysis of neural data: mean squared error and standard errors of neural activity (on the test set) for different models. Both G-EMB models significantly outperform FA; G-EMB is more accurate than NG-EMB.

Figure 1: Top view of the zebrafish brain, with blue circles at the location of the individual neurons. We zoom in on 3 neurons and their 50 nearest neighbors (small blue dots), visualizing the “synaptic weights” learned by a G-EMB model (K = 100). The edge color encodes the inner product of the neural embedding vector and the context vectors, ρ_n^⊤ α_m, for each neighbor m. Positive values are green, negative values are red, and the transparency is proportional to the magnitude. With these weights we can form hypotheses about how nearby neurons are connected.

We model the lagged activity conditional on the simultaneous lags of surrounding neurons. We studied context sizes c ∈ {10, 50} and latent dimension K ∈ {10, 100}.

Models. We compare the embedding models to probabilistic factor analysis (FA), fitting K-dimensional factors for each neuron and K-dimensional factor loadings for each time frame. In FA, each entry of the data matrix is a Gaussian with mean equal to the inner product of the corresponding factor and factor loading.

Evaluation. We train each model on the first 95% of the time frames and hold out the last 5% for testing. With the test set, we use two types of evaluation. (1) Leave one out: for each neuron x_i in the test set, we use the measurements of the other neurons to form predictions. For FA this means the other neurons are used to recover the factor loadings; for the embedding models this means the other neurons are used to construct the context. (2) Leave 25% out: we randomly split the neurons into 4 folds. Each neuron is predicted using the three sets of neurons that are out of its fold. (This is a more difficult task.) See Table 5 in Appendix B for the choice of hyperparameters.

Interactions between countries

Page 24

Vacation-town deli

Jam PB #1 PB #2 Soda Bread Pizza

- Consider a vacation-town deli; it has six items.
- Customers either buy [pizza, soda] or [peanut butter, jam, bread]. Customers only buy one type of peanut butter at a time.
- Items bought together (or not) are co-purchased (or not). The peanut butters are substitutes. (For now we ignore many issues, e.g., formal definitions, price, causality.)
- We would like to capture this purchase behavior.

Page 25

Vacation-town deli

Jam PB #1 PB #2 Soda Bread Pizza

- We endow each item with two (unknown) locations in a real space R^K: an embedding ρ and a context vector α.
- The conditional probability of each item depends on its embedding and the context vectors of the other items in the basket,

  x_{b,i} | x_{b,−i} ∼ Poisson( exp{ ρ_i^⊤ Σ_{j≠i} α_j x_{b,j} } ).

- α_i are latent product attributes; ρ_i indicates how product i interacts with other products’ attributes (a numeric sketch follows).
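To make this conditional concrete, here is a minimal NumPy sketch of the Poisson conditional above; the latent dimension, the random ρ and α, and the toy basket are illustrative assumptions, not fitted values.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, K = 6, 2                       # six deli items, toy latent dimension
rho = rng.normal(size=(n_items, K))     # rho[i]: how item i interacts
alpha = rng.normal(size=(n_items, K))   # alpha[i]: latent attributes of item i

def poisson_rate(i, basket):
    """Conditional Poisson rate of item i given the other items' counts:
    lambda_i = exp(rho_i^T sum_{j != i} alpha_j x_{b,j})."""
    context = sum(basket[j] * alpha[j] for j in range(n_items) if j != i)
    return np.exp(rho[i] @ context)

basket = np.array([1, 1, 0, 0, 1, 0])   # e.g., jam, PB #1, and bread
print(poisson_rate(2, basket))          # rate for PB #2 given this basket
```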

Page 26

[Figure: 2D plot of the six deli items, showing each item’s embedding ρ (interaction) and context vector α (attributes)]

Pizza and soda are never bought with bread, jam, and PB; and vice versa

Page 27

[Figure: the same embedding plot of ρ and α]

Bread, jam, and PB are bought together

Page 28

[Figure: the same embedding plot of ρ and α]

PB #1 is never bought with PB #2

Page 29

[Figure: the same embedding plot of ρ and α]

PB #1 is bought with similar items as PB #2

Page 30

Exponential Family Embedding

- The goal of an EF-EMB is to discover a useful representation of data
- Observations x = x_{1:n}, where x_i is a D-vector
- Examples:

DOMAIN | INDEX | VALUE
Language | position in text i | word indicator
Neuroscience | neuron and time (n, t) | activity level
Network | pair of nodes (s, d) | edge indicator
Shopping | item and basket (d, b) | number purchased

Page 31

Exponential Family Embedding

[Diagram: graphical model with observed data x_i, embeddings ρ, and context vectors α]

- Three ingredients: context, conditional exponential family, embedding structure
- Two latent variables per data index: an embedding and a context vector
- Model each data point conditioned on its context and latent variables.
- The latent variables interact in the conditional; how depends on which indices are in the context and which one is modeled.

Page 32

Context

- Each data point i has a context c_i, a set of indices of other data points.
- We model the conditional of the data point given its context, p(x_i | x_{c_i}).
- Examples (a small sketch of the shopping case follows):

DOMAIN | DATA POINT | CONTEXT
Language | word | surrounding words
Neuroscience | neuron activity | activity of surrounding neurons
Network | edge | other edges on the two nodes
Shopping | purchased item | other item counts on the same trip
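As a concrete reading of the shopping row in this table, a small sketch; the trip data structure is an assumption for illustration.

```python
# A trip maps each purchased item to its count; the context of item i on a
# trip is every *other* (item, count) pair on the same trip.
trip = {"bread": 1, "jam": 1, "peanut butter #1": 2}

def context_of(item, trip):
    return {j: count for j, count in trip.items() if j != item}

print(context_of("jam", trip))  # {'bread': 1, 'peanut butter #1': 2}
```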


Page 35

Conditional exponential family

[Diagram: graphical model with observed data x_i, embeddings ρ, and context vectors α]

- The EF-EMB has latent variables for each data point’s index: an embedding ρ[i] and a context vector α[i]
- These are used in the conditional of each data point,

  x_i | x_{c_i} ∼ exp-fam( η_i(x_{c_i}; ρ[i], α[c_i]), t(x_i) ).

  (Poisson for counts, Gaussian for reals, Bernoulli for binary, etc.)

Page 36

Conditional exponential family

[Diagram: graphical model with observed data x_i, embeddings ρ, and context vectors α]

- The natural parameter combines the embedding and context vectors,

  η_i(x_{c_i}) = f( ρ[i]^⊤ Σ_{j∈c_i} α[j] x_j ).

- E.g., an item’s embedding (interaction) helps determine its count; its context vector (attributes) helps determine other items’ counts (see the sketch below).
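A direct transcription of this natural parameter, assuming the identity link for f; the sizes and random initializations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
n_items, K = 6, 2
rho = rng.normal(size=(n_items, K))    # embeddings rho[i]
alpha = rng.normal(size=(n_items, K))  # context vectors alpha[j]

def natural_parameter(i, context, x, f=lambda z: z):
    """eta_i(x_{c_i}) = f(rho[i]^T sum_{j in c_i} alpha[j] * x[j])."""
    s = np.zeros(K)
    for j in context:
        s += alpha[j] * x[j]
    return f(rho[i] @ s)

x = np.array([1.0, 0.0, 2.0, 0.0, 1.0, 0.0])
print(natural_parameter(0, context=[2, 4], x=x))
```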

Page 37

Embedding Structure

[Diagram: embeddings shared across data points]

- The embedding structure determines how parameters are shared.
- E.g., ρ[i] = ρ[j] for i = (Oreos, t) and j = (Oreos, u).
- Sharing enables learning about an object, such as a neuron, node, or item.


Page 40

Pseudolikelihood

- We model each data point conditional on the others.
- Combine these ingredients in a “pseudo-likelihood” (i.e., a utility),

  L(ρ, α) = Σ_{i=1}^n [ η_i^⊤ t(x_i) − a(η_i) ] + log f(ρ) + log g(α).

- Fit with stochastic optimization; exponential families simplify the gradients. (A sketch of the objective follows.)
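A minimal sketch of this objective for the Poisson case (t(x) = x, a(η) = exp η, identity link), with Gaussian priors standing in for log f(ρ) + log g(α); the toy data and the regularization weight are assumptions, and a real fit would run stochastic optimization over minibatches rather than evaluate the full objective.

```python
import numpy as np

rng = np.random.default_rng(2)
n_items, K = 6, 2
rho = 0.1 * rng.normal(size=(n_items, K))
alpha = 0.1 * rng.normal(size=(n_items, K))
X = rng.poisson(0.5, size=(200, n_items)).astype(float)  # toy basket counts

def pseudo_loglik(rho, alpha, X, lam=0.1):
    """sum_i [eta_i * t(x_i) - a(eta_i)] + log f(rho) + log g(alpha),
    for the Poisson family: t(x) = x and a(eta) = exp(eta)."""
    # context sum for each (basket b, item i): sum_{j != i} alpha_j x_{b,j}
    ctx = (X @ alpha)[:, None, :] - X[:, :, None] * alpha[None, :, :]
    eta = np.einsum("ik,bik->bi", rho, ctx)
    gauss_prior = -0.5 * lam * (np.sum(rho**2) + np.sum(alpha**2))
    return np.sum(X * eta - np.exp(eta)) + gauss_prior

print(pseudo_loglik(rho, alpha, X))
```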

Page 41

Pseudolikelihood

- The objective resembles a collection of GLM likelihoods.
- The gradient is

  ∇_{ρ[j]} L = Σ_{i=1}^I ( t(x_i) − E[t(x_i)] ) ∇_{ρ[j]} η_i + ∇_{ρ[j]} log f(ρ[j]).

- (Stochastic gradients give justification to NN ideas like “negative sampling.”) A numerical check of the gradient follows.
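Continuing the sketch above: for the Poisson family E[t(x_i)] = exp(η_i), so the gradient takes exactly this residual form; the finite-difference comparison is only a sanity check of the algebra.

```python
import numpy as np
# (reuses rho, alpha, X, and pseudo_loglik from the previous sketch)

def grad_rho(rho, alpha, X, lam=0.1):
    """(t(x_i) - E[t(x_i)]) * grad_rho(eta_i), summed over data points,
    plus the gradient of the Gaussian log-prior on rho."""
    ctx = (X @ alpha)[:, None, :] - X[:, :, None] * alpha[None, :, :]
    eta = np.einsum("ik,bik->bi", rho, ctx)
    resid = X - np.exp(eta)               # t(x) - E[t(x)] for Poisson
    return np.einsum("bi,bik->ik", resid, ctx) - lam * rho

eps = 1e-5
e = np.zeros_like(rho)
e[0, 0] = 1.0
numeric = (pseudo_loglik(rho + eps * e, alpha, X)
           - pseudo_loglik(rho - eps * e, alpha, X)) / (2 * eps)
print(np.allclose(numeric, grad_rho(rho, alpha, X)[0, 0]))
```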

Page 42

Market basket analysis

- Data | Purchase counts of items in shopping trips at a large grocery store
  – Category-level | 478 categories; 635,000 trips; 6.8M purchases
  – Item-level | 5,675 items; 620,000 trips; 5.6M purchases
- Context | Other items purchased on the same trip
- Structure | Embeddings for each item are shared across trips
- Family | Poisson (and we downweight the zeros)

Page 43

Market basket analysis

- Recall the conditional probability

  x_i | x_{−i} ∼ Poisson( exp{ ρ_i^⊤ Σ_{j≠i} α_j x_j } ).

- α_i reflects the attributes of item i
- ρ_i reflects the interaction of item i with the attributes of other items.

Page 44

[Figure: t-SNE projection, components 1 and 2]
A 2D representation of category attributes α_i

Page 45

[Figure zoom: a baking cluster, containing granulated sugar, specialty/miscellaneous deli items, cream, brown sugar, baking ingredients, frozen pastry dough, powdered sugar, condensed milk, pie crust, flour, extracts, salt, shortening, pie filling, and evaporated milk]

Page 46

[Figure zoom: a baby-products cluster, containing infant toiletries, childrens/infants analgesics, baby/youth wipes, disposable pants, baby accessories, infant formula, and disposable diapers]

Page 47

A 2D representation of item attributes α_i

Page 48

[Figure zoom: a fresh-pasta cluster, mostly Buitoni items, e.g., ravioli 4 cheese buitoni, tortellini 3 cheese buitoni, linguine fresh egg buitoni, sauce alfredo buitoni, genova meat ravioli]

Page 49

[Figure zoom: a deli cheese-and-crackers cluster, e.g., brie president, havarti w/dill primo taglio, carrs cracker table water varieties, wellington water crackers, alouette spreads, gouda domestic, pep farm dist crackers quartet]

Page 50

[Figure zoom: the Mexican-food cluster shown earlier, e.g., mission tortillas, tostitos and santitas tortilla chips, taco bell and rosarita refried beans, pace and herdez salsas, ortega taco shells, taco seasoning mixes, shredded Mexican-blend cheeses]

Page 51

Category level

MODEL | K = 20 | K = 50 | K = 100
Poisson embedding | −7.497 | −7.284 | −7.199
Poisson embedding (downweighting zeros) | −7.110 | −6.994 | −6.950
Additive Poisson embedding | −7.868 | −8.191 | −8.414
Hierarchical Poisson factorization | −7.740 | −7.626 | −7.626
Poisson PCA | −8.314 | −9.51 | −11.01

Item level

MODEL | K = 50 | K = 100
Poisson embedding | −7.72 | −7.64
Hierarchical Poisson factorization | −7.86 | −7.87

Gopalan et al., Scalable recommendation with hierarchical Poisson factorization. Uncertainty in Artificial Intelligence, 2015.
Collins et al., A generalization of principal component analysis to the exponential family. Neural Information Processing Systems, 2002.

Page 52

[Figure: the embedding plot of ρ (interaction) and α (attributes) for the six deli items]

We want to use this fit to understand purchase patterns.
- Exchangeables have a similar effect on the purchase of other items.
- Same-category items tend to be exchangeable and rarely purchased together (e.g., two types of peanut butter).
- Complements are purchased (or not purchased) together (e.g., hot dogs and buns).

Page 53

[Figure: the same embedding plot of ρ and α]

- PB #1 and PB #2 induce similar distributions of other items
- But they are rarely purchased together.
- Define the sigmoid function between two items,

  σ_{ki} ≜ 1 / (1 + exp{ −ρ_k^⊤ α_i }),  and  σ̄_{ki} ≜ 1 − σ_{ki}.

Page 54

[Figure: the same embedding plot of ρ and α]

- The “substitute predictor” is

  Σ_{k∉{i,j}} [ σ_{ki} log( σ_{ki} / σ_{kj} ) + σ̄_{ki} log( σ̄_{ki} / σ̄_{kj} ) ] − σ_{ji} log( σ_{ji} / (1 − σ_{ji}) ).

Page 55

[Figure: the same embedding plot of ρ and α]

- The “complement predictor” is the negative of the last term,

  σ_{ji} log( σ_{ji} / (1 − σ_{ji}) ).

- Notes:
  – We use the symmetrized version of both quantities
  – These quantities generalize to other exponential families
  – (A sketch of both predictors follows.)
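A sketch of both predictors as reconstructed above, evaluated on random illustrative vectors; the ρ and α here are assumptions, and the symmetrized versions mirror the note in the slide.

```python
import numpy as np

rng = np.random.default_rng(3)
n_items, K = 6, 2
rho = rng.normal(size=(n_items, K))
alpha = rng.normal(size=(n_items, K))

def sig(k, i):
    """sigma_{ki} = 1 / (1 + exp(-rho_k^T alpha_i))."""
    return 1.0 / (1.0 + np.exp(-rho[k] @ alpha[i]))

def complement_score(i, j):
    p = sig(j, i)
    return p * np.log(p / (1.0 - p))

def substitute_score(i, j):
    """High when other items respond to i and j similarly,
    penalized by the co-purchase (complement) term."""
    s = 0.0
    for k in range(n_items):
        if k in (i, j):
            continue
        p, q = sig(k, i), sig(k, j)
        s += p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))
    return s - complement_score(i, j)

i, j = 1, 2  # PB #1 and PB #2 in the toy indexing
print(0.5 * (substitute_score(i, j) + substitute_score(j, i)))
print(0.5 * (complement_score(i, j) + complement_score(j, i)))
```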

Page 56

ITEM 1 | ITEM 2 | SCORE (RANK)
organic vegetables | organic fruits | 6.18 (01)
vegetables (<10 oz) | beets (>=10 oz) | 5.64 (02)
baby food | disposable diapers | 3.43 (32)
stuffing | cranberries | 3.30 (36)
gravy | stuffing | 3.23 (37)
pie filling | evaporated milk | 3.09 (42)
deli cheese | deli crackers | 2.87 (55)
dry pasta/noodles | tomato paste/sauce/puree | 2.73 (63)
mayonnaise | mustard | 2.61 (69)
cake mixes | frosting | 2.49 (78)

Example co-purchases at the category level

Page 57

ITEM 1 | ITEM 2 | SCORE (RANK)
bouquets | roses | 0.20 (01)
frozen pizza 1 | frozen pizza 2 | 0.18 (02)
bottled water 1 | bottled water 2 | -0.07 (03)
carbonated soft drinks 1 | carbonated soft drinks 2 | -0.12 (04)
orange juice 1 | orange juice 2 | -0.37 (05)
bathroom tissue 1 | bathroom tissue 2 | -0.58 (06)
bananas 1 | bananas 2 | -0.61 (07)
salads-convenience 1 | salads-convenience 2 | -0.63 (08)
potatoes 1 | potatoes 2 | -0.66 (09)
bouquets | blooming | -1.18 (10)

Top ten potential substitutes at the category level

Page 58

ITEM 1 | ITEM 2 | SCORE (RANK)
ygrt peach ff | ygrt mxd berry ff | 19.83 (0001)
s&w beans garbanzo | s&w beans red kidney | 14.42 (0002)
whiskas cat fd beef | whiskas cat food tuna/chicken | 8.45 (0149)
parsnips loose | rutabagas | 8.32 (0157)
celery hearts organic | apples fuji organic | 4.36 (0995)
85p ln gr beef patties 15p fat | sesame buns | 4.35 (1005)
kiwi imported | mangos small | 3.22 (1959)
colby jack shredded | taco bell taco seasoning mix | 2.89 (2472)
star magazine | in touch magazine | 2.87 (2497)
seasoning mix fajita | mission tortilla corn super sz | 2.87 (2500)

Example co-purchases at the UPC level

Page 59

ITEM 1 | ITEM 2 | SCORE (RANK)
coffee drip grande | coffee drip venti | -0.33 (001)
sandwich signature reg | sandwich signature lrg | -1.17 (020)
market bouquet | alstromeria/rose bouquet | -2.89 (186)
sushi shoreline combo | sushi full moon combo | -3.76 (282)
semifreddis bread baguette | crusty sweet baguette | -7.65 (566)
orbit gum peppermint | orbit gum spearmint | -7.96 (595)
snickers candy bar | 3 musketeers candy bar | -7.97 (598)
cheer lndry det color guard | all lndry det liquid fresh rain | -7.99 (602)
coors light beer btl | coors light beer can | -8.12 (621)
greek salad signature | neptune salad signature | -8.15 (630)

Example potential substitutes at the UPC level

Page 60

Summary and Questions

[Diagram: graphical model with observed data x_i, embeddings ρ, and context vectors α]

- Word embeddings have become a staple in natural language processing. We distilled their essential elements and generalized them to consumer data.
- Compared to classical factorization, good performance on many data sets:
  – movie ratings, neural activity, scientific reading, shopping baskets

Rudolph et al. Exponential Family Embeddings. Neural Information Processing Systems, 2016.
Liang et al. Modeling User Exposure in Recommendation. Recommendation Systems, 2016.
Ranganath et al. Deep Exponential Families. Artificial Intelligence and Statistics, 2015.

Page 61

Summary and Questions

[Diagram: graphical model with observed data x_i, embeddings ρ, and context vectors α]

- How can we capture higher-order structure in the embeddings?
- Why downweight the zeros?
- How can we include price and other complexities?


Page 62

Poisson factorization
A computationally efficient method for discovering correlated preferences

Page 63

- Economics
  – Look at items within one category (e.g. yoghurt)
  – Try to estimate the effects of interventions (e.g., coupons, price, layout)
- Machine learning
  – Look at all items
  – Estimate user preferences and make predictions (recommendations)
  – Ignore causal effects of interventions

Page 64

[Figure: a users × items matrix; items include Star Wars, The Empire Strikes Back, Return of the Jedi, When Harry Met Sally, Barefoot in the Park, and Star Trek II: Wrath of Khan]

- Implicit data is about users interacting with items:
  – clicks
  – “likes”
  – purchases
- Less information than explicit data (e.g. ratings), but more prevalent

Page 65

[Diagram: graphical model with item attributes β_i, user preferences θ_u, and observations y_ui]

  θ_uk ∼ Gamma(a, b)
  β_ik ∼ Gamma(a, b)
  y_ui ∼ Poisson( θ_u^⊤ β_i )

Poisson factorization
- Assumptions
  – Users (consumers) have latent preferences θ_u.
  – Items have latent attributes β_i.
  – How many items a shopper purchased comes from a Poisson.
- The posterior p(θ, β | y) reveals purchase patterns. (A forward simulation follows.)
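A forward simulation of this generative model; the Gamma hyperparameters a, b and the matrix sizes are illustrative assumptions. In practice the posterior p(θ, β | y) is approximated with variational inference (Gopalan et al., 2015).

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, K = 50, 30, 5
a, b = 0.3, 1.0  # illustrative Gamma shape and rate

theta = rng.gamma(a, 1.0 / b, size=(n_users, K))  # user preferences theta_u
beta = rng.gamma(a, 1.0 / b, size=(n_items, K))   # item attributes beta_i
y = rng.poisson(theta @ beta.T)                   # y_ui ~ Poisson(theta_u . beta_i)

print(y.shape, int(y.sum()))  # a sparse matrix of implicit counts
```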

Gopalan et al., Scalable recommendation with hierarchical Poisson factorization. Uncertainty in Artificial Intelligence, 2015.

Page 66

[Diagram: the same Poisson factorization model]

Advantages
- captures heterogeneity of users
- implies a distribution of total consumption
- efficient approximation; only requires the non-zero data

Gopalan et al., Scalable recommendation with hierarchical Poisson factorization. Uncertainty in Artificial Intelligence, 2015.

Page 67

“Business Self-Help”: Stay Focused And Your Career Will Manage Itself; To Tear Down Walls You Have to Move Out of Your Office; Self-Reliance Learned Early; Maybe Management Isn't Your Style; My Copyright Career

“Personal Finance”: In Hard Economy for All Ages, Older Isn't Better, It's Brutal; Younger Generations Lag Parents in Wealth-Building; Fast-Growing Brokerage Firm Often Tangles With Regulators; The Five Stages of Retirement Planning Angst; Signs That It's Time for a New Broker

“All Things Airplane”: Flying Solo; Crew-Only 787 Flight Is Approved By FAA; All Aboard Rescued After Plane Skids Into Water at Bali Airport; Investigators Begin to Test Other Parts On the 787; American and US Airways May Announce a Merger This Week

“Biodiesel”: Biodiesel from microalgae; Biodiesel from microalgae beats bioethanol; Commercial applications of microalgae; Second Generation Biofuels; Hydrolysis of lignocellulosic materials for ethanol production

“Political Science”: Social Capital: Origins and Applications in Modern Sociology; Increasing Returns, Path Dependence, and Politics; Institutions, Institutional Change and Economic Performance; Diplomacy and Domestic Politics; Comparative Politics and the Comparative Method

“Astronomy”: Theory of Star Formation; Error estimation in astronomy: A guide; Astronomy & Astrophysics; Measurements of Omega from 42 High-Redshift Supernovae; Stellar population synthesis at the resolution of 2003

News articles from the New York Times; scientific articles from Mendeley

Figure 6: The top 10 items by the expected weight β_i from three of the 100 components discovered by our algorithm for the New York Times and Mendeley data sets.


Page 68

[Figure 4 plots: normalized mean precision (top) and mean recall (bottom) at 20 recommendations on the Mendeley, New York Times, Echo Nest, Netflix (implicit), and Netflix (explicit) data sets, comparing HPF, BPF, LDA, MF, and NMF.]

Figure 4: Predictive performance on data sets. The top and bottom plots show normalized mean precision and mean recall at 20 recommendations, respectively. While competing method performance varies across data sets, HPF and BPF consistently outperform competing methods.

MCMC algorithm for inference. The authors [30] report that a single Gibbs iteration on the Netflix data set with 60 latent factors requires 30 minutes, and that they discard the first 800 samples. This implies at least 16 days of training, while the HPF variational inference algorithm converges within 13 hours on the Netflix data. Another alternative, Bayesian Personalized Ranking (BPR) [28, 7], optimizes a ranking-based criterion using stochastic gradient descent. The algorithm performs an expensive bootstrap sampling step at each iteration to generate negative examples from the vast set of unobserved items. We found time and space constraints to be prohibitive when attempting to use BPR with the data sets considered here.

Evaluation. Prior to training any models, we randomly select 20% of the ratings in each data set to be used as a held-out test set, comprising items that the user has consumed. Additionally, we set aside 1% of the training ratings as a validation set and use it to determine algorithm convergence and to tune free parameters. We used the BPF and HPF settings described in Section 3.2 across all data sets.
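A small sketch of this splitting protocol, assuming the ratings are stored as an array of (user, item, count) rows; the function name and layout are illustrative.

```python
# Sketch of the evaluation split: 20% of non-zero ratings held out for test,
# 1% of the remaining training ratings set aside for validation.
import numpy as np

def split_ratings(ratings, test_frac=0.20, valid_frac=0.01, seed=0):
    """ratings: NumPy array of (user, item, count) rows for non-zero entries."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(ratings))       # shuffle the non-zero entries
    n_test = int(test_frac * len(ratings))
    test, rest = ratings[idx[:n_test]], ratings[idx[n_test:]]
    n_valid = int(valid_frac * len(rest))
    valid, train = rest[:n_valid], rest[n_valid:]
    return train, valid, test
```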

During testing, we generate the top M recommendations for each user as those items with the highest predictive score under each method. For each user, we compute a variant of precision-at-M, which measures the fraction of relevant items in the user's top-M recommendations. So as not to artificially deflate this measurement for lightly active users who have consumed fewer than M items, we compute normalized precision-at-M, which adjusts the denominator to be at most the number of items that the user has in the test set. Likewise, we compute recall-at-M, which captures the fraction of test-set items present in the top M recommendations.
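These two metrics are easy to state in code. In this sketch, `recommended` is a user's ranked top-M item list and `test_items` is the set of that user's held-out items; the names are illustrative.

```python
# Normalized precision-at-M and recall-at-M as described above.
def normalized_precision_at_m(recommended, test_items, M=20):
    hits = sum(1 for item in recommended[:M] if item in test_items)
    # Denominator is at most the user's number of test items, so lightly
    # active users with fewer than M test items are not artificially penalized.
    return hits / min(M, len(test_items)) if test_items else 0.0

def recall_at_m(recommended, test_items, M=20):
    hits = sum(1 for item in recommended[:M] if item in test_items)
    return hits / len(test_items) if test_items else 0.0
```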

Figure 4 shows the normalized mean precision at 20 recommendations for each method and data set. We see that HPF and BPF outperform the other methods on all data sets by a sizeable margin, as much as 8 percentage points. Poisson factorization provides high-quality recommendations: a relatively high fraction of items recommended by HPF are found to be relevant, and many relevant items are recommended. While not shown in these plots, the relative performance of methods within a data set is consistent as we vary the number of recommendations shown to users. We also note that while Poisson factorization dominates across all of these data sets, the relative quality of recommendations from competing methods varies substantially from one data set to the next. For instance, LDA performs quite well on the Echo Nest data, but fails to beat classical matrix factorization on the implicit Netflix data set.

We also study precision and recall as a function of user activity to investigate how performance varies across users of different types. In particular, Figure 5 shows the mean normalized precision and mean recall at 20 recommendations for users of varying activity, measured by percentile. For example, the 10% mark shows mean performance across the bottom 10% of users, who are least active; the 90% mark shows the mean performance for all but the top 10% of most active users. Here we see that Poisson factorization outperforms the other methods for users of all activity levels, both the "light" users who constitute the majority and the relatively few "heavy" users who consume more, for all data sets.

For the New York Times and Netflix data, we see that higher levels of user activity enable us to better estimate user preferences and improve the quality of recommendations, as measured by mean normalized precision. For the Mendeley and Echo Nest data, however, we see a decline

Page 69

“FRUIT”: stone fruit · pears · tropical fruit · apples · grapes

“BABY ESSENTIALS”: baby food · starbucks coffee · disposable diapers · infant formula · baby/youth wipes

“HEALTHY”: health and milk substitutes · organic vegetables · organic fruits · cold cereal · vegetarian / organic frozen

“CAT CARE”: cat food wet · cat food dry · cat litter & deodorant · canned fish · paper towels

[Figure: estimated latent preferences $\theta_u$ across 20 components for two shoppers. Consumer #1: "Cats and Babies"; Consumer #2: "Healthy and Cats". The second panel's vertical axis is scaled by $10^{-4}$.]

Page 70

Poisson factorization and economics

I Consider a utility model of a single purchase with Gumbel error,

$U(y_{ui}) = \log(\theta_u^\top \beta_i) + \epsilon.$

I Suppose a shopper u buys N items. Then

$y_u \mid N \sim \text{Mult}(N, \pi_u), \quad \pi_{ui} \propto \exp\{\log \theta_u^\top \beta_i\} = \theta_u^\top \beta_i.$

I Thus, the unconditional distribution of counts is Poisson factorization,

$y_{uj} \sim \text{Poisson}(\theta_u^\top \beta_j).$
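Filling in the marginalization step: this is the standard multinomial-Poisson identity. The total-consumption rate $\Lambda_u$ below is notation introduced here. Assume $N \sim \text{Poisson}(\Lambda_u)$ with $\Lambda_u = \sum_j \theta_u^\top \beta_j$, so that $\pi_{uj} = \theta_u^\top \beta_j / \Lambda_u$. Then, using $N = \sum_j y_{uj}$,

$$
p(y_u) \;=\; \frac{e^{-\Lambda_u}\,\Lambda_u^{N}}{N!}\cdot\frac{N!}{\prod_j y_{uj}!}\prod_j \pi_{uj}^{\,y_{uj}}
\;=\; \prod_j \frac{e^{-\Lambda_u \pi_{uj}}\,(\Lambda_u \pi_{uj})^{y_{uj}}}{y_{uj}!},
$$

i.e., each count is independently $y_{uj} \sim \text{Poisson}(\Lambda_u \pi_{uj}) = \text{Poisson}(\theta_u^\top \beta_j)$.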

Page 71

Poisson factorization and economics

I With this connection, we can devise new utility models, e.g.,

$U(y_{ui}) = \log(\theta_u^\top \beta_i + \alpha_u \exp\{-c \cdot \text{price}_i\}) + \epsilon.$

I ...and other factors:

– time of day
– in stock
– date
– observed item characteristics & category
– demographic information about the shopper

I Inference is still efficient.

I With assumptions, we can answer counterfactual questions.
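As a small illustration of why inference stays tractable, the same multinomial-Poisson argument turns the extended utility into a Poisson rate, $\lambda_{ui} = \theta_u^\top \beta_i + \alpha_u \exp\{-c \cdot \text{price}_i\}$. The names `alpha`, `c`, and `prices` in this sketch are hypothetical parameters introduced for illustration, not quantities from the talk.

```python
# Sketch: Poisson purchase rates under the price-augmented utility above.
import numpy as np

def purchase_rates(theta, beta, alpha, prices, c=1.0):
    """theta: (n_users, K); beta: (n_items, K); alpha: (n_users,); prices: (n_items,)."""
    base = theta @ beta.T                                # preference-attribute match
    price_effect = np.outer(alpha, np.exp(-c * prices))  # per-user price sensitivity
    return base + price_effect                           # rate lambda_ui per (u, i)
```

Because the rate is still a sum of non-negative terms, the same auxiliary-variable tricks used for plain Poisson factorization apply, and counterfactuals (e.g., changing `prices`) amount to recomputing the rates.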

Page 72

ITEM ATTRIBUTES $\beta_i$ · USER PREFERENCES $\theta_u$ · OBSERVATIONS $y_{ui}$

$\theta_{uk} \sim \text{Gamma}(a, b)$
$\beta_{ik} \sim \text{Gamma}(a, b)$
$y_{ui} \sim \text{Poisson}(\theta_u^\top \beta_i)$

I Poisson factorization efficiently analyzes large-scale purchase behavior.

I Next steps

– include notions of co-purchases and substitutes
– include time of day at a level that is unconfounded
– include price and stock-outs; answer counterfactual questions

I Research in recommendation systems can help economic analyses.

Page 73

[Slide graphic: the modeling loop connecting KNOWLEDGE & QUESTION, Make assumptions, DATA, Discover patterns, and Predict & Explore.]

[Figure panels: admixture proportions for populations LWK, YRI, ACB, ASW, CDX, CHB, CHS, JPT, KHV, CEU, FIN, GBR, IBS, TSI, MXL, PUR, CLM, PEL, and GIH at K = 7, K = 8, and K = 9.]

Figure S2: Population structure inferred from the TGP data set using the TeraStructure algorithm at three values for the number of populations K. The visualization of the $\theta$'s in the figure shows patterns consistent with the major geographical regions. Some of the clusters identify a specific region (e.g. red for Africa) while others represent admixture between regions (e.g. green for Europeans and Central/South Americans). The presence of clusters that are shared between different regions demonstrates the more continuous nature of the structure. The new cluster from K = 7 to K = 8 matches structure differentiating between American groups. For K = 9, the new cluster is unpopulated.


Probabilistic machine learning: design expressive models to analyze data.

I Tailor your method to your question and knowledge

I Use generic and scalable inference to analyze large data sets

I Form predictions, hypotheses, inferences, and revise the model

Page 74


Opportunities for economics and machine learning

I Push economics to high-dimensional data and scalable computation

I Push ML to explainable models, applied causal inference, new problems

I Develop new modeling methods together