Transcript of "Computational Advertising: The LinkedIn Way"

Page 1

Computational Advertising: The LinkedIn Way

Deepak Agarwal, LinkedIn Corporation

CIKM, San Francisco

Oct 30th, 2013

Page 2


Computational Advertising

MatchMaker (Broder, CACM): placing the "best" ads in a given context for every user visit

Matchmaking at scale requires automation: serving with low marginal cost increases profit margins

Automation through Machine Learning/Optimization

New discipline called Computational Advertising

Page 3

LinkedIn Advertising: Brand, Self-Serve, Sponsored updates

Page 4

(Serving diagram:)

– An ad request arrives with a member profile (e.g. region = US, age = 20) and a context (e.g. profile page, 300 x 250 ad slot).
– Campaigns are filtered by targeting criteria, frequency cap, and budget pacing.
– The Response Prediction Engine scores the campaigns eligible for auction, with automatic format selection.
– Candidates are sorted by Bid * CTR; click cost is second-price, e.g. Click Cost = Bid_3 * CTR_3 / CTR_2 for the ad in the second slot.
– Serving constraint: < 100 millisec.
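A minimal sketch of the ranking and pricing step in the diagram: sort eligible campaigns by Bid * CTR and charge generalized second-price click costs. The Campaign class and the numbers are illustrative, not LinkedIn's production logic.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    name: str
    bid: float   # advertiser's cost-per-click bid, in dollars
    ctr: float   # predicted click-through rate from the response prediction engine

def rank_and_price(campaigns):
    """Sort eligible campaigns by expected revenue (bid * CTR) and compute
    generalized second-price click costs: the ad in slot i pays
    bid_{i+1} * ctr_{i+1} / ctr_i, just enough to keep its rank."""
    ranked = sorted(campaigns, key=lambda c: c.bid * c.ctr, reverse=True)
    priced = []
    for i, c in enumerate(ranked):
        if i + 1 < len(ranked):
            nxt = ranked[i + 1]
            click_cost = nxt.bid * nxt.ctr / c.ctr
        else:
            click_cost = 0.0  # no competitor below; a reserve price would apply in practice
        priced.append((c.name, click_cost))
    return priced

print(rank_and_price([
    Campaign("A", bid=2.00, ctr=0.003),
    Campaign("B", bid=1.50, ctr=0.005),
    Campaign("C", bid=1.00, ctr=0.004),
]))
```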

Page 5


Response Prediction: an important input for optimization

– CTR of an ad format on some slot of a LinkedIn page, e.g. CTR of formats for a 160 x 600 ad slot

– CTR of an ad in some position, for a selected ad format

Page 6

Counting clicks and views using a moving window

Estimate the CTR of each format for each page type:

CTR(page type, format) = clicks(page type, format) / views(page type, format)
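A minimal sketch of the moving-window counters, assuming a simple event-level deque per (page type, format); a production system would more likely use bucketed counters.

```python
from collections import defaultdict, deque
import time

class MovingWindowCTR:
    """Counts clicks and views per (page_type, format) over a sliding time
    window and reports clicks / views as the CTR estimate."""
    def __init__(self, window_seconds=3600):
        self.window = window_seconds
        self.events = defaultdict(deque)  # (page_type, fmt) -> deque of (ts, is_click)

    def record(self, page_type, fmt, is_click, ts=None):
        self.events[(page_type, fmt)].append((ts if ts is not None else time.time(), is_click))

    def ctr(self, page_type, fmt, now=None):
        now = now if now is not None else time.time()
        q = self.events[(page_type, fmt)]
        while q and q[0][0] < now - self.window:  # evict events outside the window
            q.popleft()
        views = len(q)
        clicks = sum(is_click for _, is_click in q)
        return clicks / views if views else None  # undefined when no data
```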

Page 7

Onboarding new formats, avoiding starvation: the explore/exploit dilemma. Given the click and view counts below, how should traffic be split across formats?

Exploit-only (greedy): maximizes performance given current knowledge; cannot adapt to a changing environment.

Explore-only (random): serves old & new, good & bad evenly; ignores our knowledge about performance.

Explore/exploit (softmax): exploits formats that are known to be good but explores those that could potentially be good; profits from current knowledge while continuing to learn. (A minimal sketch follows the table.)

Format | Clicks | Views      | CTR                   | Greedy | Random | Softmax
V1.2   | 1,523  | 624,915    | 0.24%                 | 0%     | 25%    | 7.8%
V2.0   | 34,872 | 11,839,741 | 0.29%                 | 0%     | 25%    | 35.3%
V2.1   | 37,224 | 12,594,481 | 0.30%                 | 100%   | 25%    | 36.3%
V3.0   | 0      | 0          | 0.28% (no data; prior)| 0%     | 25%    | 20.7%
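A minimal sketch of softmax traffic allocation. The temperature value is illustrative, and the slide's 7.8/35.3/36.3/20.7% split likely also reflects an epsilon-greedy component and the prior for the unserved V3.0, so this sketch will not reproduce those exact numbers.

```python
import math

def softmax_allocation(ctr_estimates, temperature=0.001):
    """Allocate serving traffic proportionally to exp(ctr / temperature).
    Low temperature -> near-greedy; high temperature -> near-uniform."""
    weights = [math.exp(ctr / temperature) for ctr in ctr_estimates.values()]
    total = sum(weights)
    return {fmt: w / total for fmt, w in zip(ctr_estimates, weights)}

# CTR estimates from the table above; V3.0 has no data, so a prior is used.
estimates = {"V1.2": 0.0024, "V2.0": 0.0029, "V2.1": 0.0030, "V3.0": 0.0028}
print(softmax_allocation(estimates))
```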

Page 8


Evaluating Explore/Exploit schemes

We evaluated several explore/exploit techniques offline:
– Offline replay based on precision@1 on randomized data, which provides unbiased estimates of online performance (Langford et al., 2009).
– Thompson sampling, UCB, softmax, and epsilon-greedy, each with moving-window counts and different training update frequencies (few minutes, few hours, daily).

Findings:
– Epsilon-greedy + softmax, Thompson sampling, and UCB were among the promising schemes; softmax + epsilon-greedy provided significant gains on LinkedIn advertising.
– Faster updates help with new formats initially; daily updates are fine if very few new formats are introduced into the system (as in our application).
– Segmenting by user attributes did not help much.
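A minimal sketch of the replay estimator (Langford et al., 2009) on uniformly randomized logged data: keep only impressions where the evaluated policy agrees with the logged action, and average their rewards. The event format and names are illustrative.

```python
def replay_precision_at_1(logged_events, policy):
    """logged_events: iterable of (context, shown_format, clicked) triples
    collected under uniform-random serving. policy(context) returns the
    format the evaluated scheme would serve. Because logging was random,
    the average reward over matching events is an unbiased estimate of
    the policy's online CTR."""
    matches, clicks = 0, 0
    for context, shown_format, clicked in logged_events:
        if policy(context) == shown_format:
            matches += 1
            clicks += int(clicked)
    return clicks / matches if matches else None
```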

Page 9

CTR estimates for ads: Curse of dimensionality

Page 10


Mitigating the curse

What segments? Which ones are good?
– Too few coarse segments: fails to personalize.
– Too many: curse of dimensionality, data sparseness. Most segments have no clicks; is 0/5 the same as 0/50, or as 0/5M?

Pool data to mitigate sparseness. Pooling with hundreds of millions of segments is challenging; there are different ways to pool, and we use logistic regression.

(Diagram: a fine segment, "profile page, ad 77, user from Palo Alto", has 0 clicks / 5 views; pooling borrows strength from coarser slices such as all visits on the profile page (20/20,000) and all users from Palo Alto (40/1,000).)
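One way to see the pooling idea (a sketch; the feature names are hypothetical, not the production feature set): encode each impression at several granularities, so a logistic regression lets sparse segments borrow strength from coarser, better-observed ones.

```python
def segment_features(impression):
    """Build overlapping features at several granularities. A segment with
    no clicks of its own (e.g. page=profile, ad=77, city=palo_alto) still
    gets a CTR estimate through the coarser features it shares with
    better-observed traffic."""
    return {
        "page=%s" % impression["page"]: 1.0,                            # coarse
        "ad=%s" % impression["ad"]: 1.0,                                # medium
        "city=%s" % impression["city"]: 1.0,                            # medium
        "page=%s^ad=%s" % (impression["page"], impression["ad"]): 1.0,  # fine
    }

print(segment_features({"page": "profile", "ad": "77", "city": "palo_alto"}))
```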

Page 11


Taming curse of dimensionality: Logistic Regression

(Figure: positive (+) and negative (-) examples plotted against two feature dimensions, Dim 1 and Dim 2, separated by a logistic regression decision boundary.)

Pages 12-14

CTR Prediction Model for Ads

Feature vectors:
– Member feature vector: xi
– Campaign feature vector: cj
– Context feature vector: zk

Model (equation on the slides; see the sketch below): a cold-start component plus a warm-start per-campaign component. Both can have L2 penalties.
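The model equations on these slides are images and did not survive the transcript. A plausible reconstruction of the structure they describe (cold-start plus per-campaign warm-start), stated as an assumption rather than the exact production form:

```latex
% Sketch of the two-component CTR model (reconstructed, not verbatim):
\[
\log\frac{p_{ijk}}{1-p_{ijk}}
  = \underbrace{\theta_w^{\top} f(x_i, c_j, z_k)}_{\text{cold-start, shared across campaigns}}
  + \underbrace{\theta_{c_j}^{\top} g(x_i, z_k)}_{\text{warm-start, per campaign } j}
\]
% Both coefficient sets can carry L2 (Gaussian) penalties.
```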

Pages 15-16

Model Fitting

Single machine (well understood):
– conjugate gradient
– L-BFGS
– trust region
– …

Model training with large-scale data:
– The cold-start component Θw is more stable: weekly/bi-weekly training is good enough. The difficulty, however, is the need for large-scale logistic regression.
– The warm-start per-campaign model Θc is more dynamic: new items can be generated at any time, and missed opportunities are a big loss, so the warm-start component must be updated as frequently as possible. This reduces to per-item logistic regressions given the cold-start component.

Page 17

Large Scale Logistic Regression: Computational Challenge

– Hundreds of millions to billions of observations; hundreds of thousands to millions of covariates.
– Fitting a logistic regression model on a single machine is not feasible.
– Model fitting is iterative, using methods like gradient descent, Newton's method, etc., requiring multiple passes over the data.

Problem: find x to minimize F(x). Iteration n:

x_n = x_{n-1} - b_{n-1} F'(x_{n-1}),

where b_{n-1} is the step size, which can change every iteration. Iterate until convergence. Conjugate gradient, L-BFGS, Newton trust region, …
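As a concrete instance of the iteration above, a minimal gradient-descent step for the logistic regression objective (illustrative, not the production fitter):

```python
import numpy as np

def logistic_grad_step(X, y, beta, step_size):
    """One iteration of x_n = x_{n-1} - b_{n-1} * F'(x_{n-1}) for the
    average negative log-likelihood of logistic regression.
    X: (n, d) covariates; y: (n,) labels in {0, 1}; beta: (d,) coefficients."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted click probabilities
    grad = X.T @ (p - y) / len(y)         # gradient of the average loss
    return beta - step_size * grad

# Each iteration is a full pass over the data; at billions of rows even one
# pass is expensive, which motivates the Map-Reduce / ADMM approach below.
```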

Page 18

Compute using Map-Reduce

(Diagram: Big Data is split into Partition 1 … Partition N; Mapper i processes partition i and emits <Key, Value> pairs; Reducers 1 … M aggregate by key and write Outputs 1 … M.)

Page 19

Large Scale Logistic Regression

Naïve approach:
– Partition the data and run logistic regression for each partition.
– Take the mean of the learned coefficients.
– Problem: not guaranteed to converge to the global solution.

Alternating Direction Method of Multipliers (ADMM), Boyd et al. 2011:
– Set up constraints: each partition's coefficients = global consensus.
– Solve the optimization problem using Lagrange multipliers.
– Advantage: converges to the global solution.

Pages 20-22

Large Scale Logistic Regression via ADMM

(Diagram, shown over successive iterations: BIG DATA is split into Partition 1 … Partition K; in each iteration, a logistic regression is fit on every partition in parallel, then a consensus computation combines the partition coefficients; the cycle repeats in iteration 2, 3, ….)

Page 23

Large Scale Logistic Regression via ADMM

Notation:
– (Xi, yi): data in the i-th partition
– βi: coefficient vector for partition i
– β: consensus coefficient vector
– r(β): penalty component, such as ||β||₂²

Optimization problem (equation on the slide; a standard reconstruction follows):
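The formula itself is an image on the slide; in the notation above, the standard global-consensus form from Boyd et al. (2011) is:

```latex
\[
\min_{\beta_1,\dots,\beta_K,\;\beta}\;
  \sum_{i=1}^{K} \ell\!\left(X_i \beta_i,\, y_i\right) + r(\beta)
\qquad \text{subject to} \qquad \beta_i = \beta,\quad i = 1,\dots,K
\]
% \ell is the logistic loss on partition i; r is the penalty, e.g. r(\beta) = \lambda \|\beta\|_2^2.
```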

Page 24

ADMM updates

(Diagram: local regressions with shrinkage toward the current best global estimate, followed by an updated consensus.)
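The update equations are also images; the scaled-form consensus ADMM updates from Boyd et al. (2011) that match the diagram's two phases are:

```latex
% Local regressions, shrunk toward the current best global estimate:
\[
\beta_i^{t+1} = \arg\min_{\beta_i}\;
  \ell\!\left(X_i \beta_i,\, y_i\right)
  + \tfrac{\rho}{2}\,\bigl\|\beta_i - \beta^{t} + u_i^{t}\bigr\|_2^2
\]
% Updated consensus, averaging the local fits under the penalty r,
% followed by the per-partition dual update:
\[
\beta^{t+1} = \arg\min_{\beta}\;
  r(\beta) + \tfrac{K\rho}{2}\,\bigl\|\beta - \bar{\beta}^{t+1} - \bar{u}^{t}\bigr\|_2^2,
\qquad
u_i^{t+1} = u_i^{t} + \beta_i^{t+1} - \beta^{t+1}
\]
```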

Page 25

ADMM at LinkedIn

Lessons and Improvements:
– Initialization is important (ADMM-M): use the mean of the partitions' coefficients; reduces the number of iterations by 50%.
– Adaptive step size / learning rate (ADMM-MA): exponential decay of the learning rate.
– Together, these optimizations reduce training time from 10h to 2h.
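A minimal single-machine simulation of the consensus loop, including the mean-of-partitions initialization (ADMM-M). The local solver, constants, and the closed-form consensus step for an L2 penalty are illustrative choices, not the production implementation.

```python
import numpy as np
from scipy.optimize import minimize

def local_fit(X, y, target, rho):
    """Per-partition problem: logistic loss plus shrinkage toward `target`."""
    def obj(b):
        z = X @ b
        # logistic NLL: sum log(1 + e^z) - y.z, plus the quadratic ADMM term
        return np.logaddexp(0, z).sum() - y @ z + 0.5 * rho * np.sum((b - target) ** 2)
    return minimize(obj, target, method="L-BFGS-B").x

def admm_logistic(parts, d, rho=1.0, lam=1.0, iters=20):
    """parts: list of (X_i, y_i) partitions; d: number of covariates."""
    K = len(parts)
    # ADMM-M: initialize consensus at the mean of independent partition fits.
    beta = np.mean([local_fit(X, y, np.zeros(d), 1e-6) for X, y in parts], axis=0)
    u = [np.zeros(d) for _ in range(K)]
    for _ in range(iters):
        local = [local_fit(X, y, beta - u[i], rho) for i, (X, y) in enumerate(parts)]
        # Consensus step for r(beta) = lam * ||beta||^2 has a closed form:
        avg = np.mean([local[i] + u[i] for i in range(K)], axis=0)
        beta = (K * rho) / (K * rho + 2 * lam) * avg
        u = [u[i] + local[i] - beta for i in range(K)]  # dual updates
    return beta
```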

Page 26

Explore/Exploit with Logistic Regression

(Figure: positive and negative examples in two dimensions, showing the COLD START decision boundary, the COLD + WARM START boundary for an ad-id, and the POSTERIOR of the warm-start coefficients. E/E: sample a line, i.e. a coefficient vector, from the posterior (Thompson sampling).)
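A minimal sketch of Thompson sampling over the warm-start coefficients, assuming a Gaussian (e.g. Laplace) approximation to the posterior; all names here are illustrative.

```python
import numpy as np

def thompson_score(x, cold_score, warm_mean, warm_cov, rng=None):
    """Score one ad for one request by sampling its warm-start coefficients
    from the (approximate) Gaussian posterior, then adding the cold-start
    score. Ads with wide posteriors (little data) get explored; ads with
    tight posteriors get exploited."""
    rng = rng or np.random.default_rng()
    theta = rng.multivariate_normal(warm_mean, warm_cov)  # one posterior draw
    return cold_score + x @ theta

# Serving picks the argmax of thompson_score(...) over candidate ads, so each
# request effectively runs one round of the explore/exploit policy.
```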

Page 27

Models Considered

CONTROL: per-campaign CTR counting model

COLD-ONLY: only cold-start component

LASER: our model (cold-start + warm-start)

LASER-EE: our model with Explore-Exploit using Thompson sampling

Page 28

Metrics

Model metrics:
– Test log-likelihood
– AUC/ROC
– Observed/Expected ratio

Business metrics (online A/B test):
– CTR
– CPM (revenue per impression)

Page 29

Observed / Expected Ratio

– Observed: #clicks in the data. Expected: sum of predicted CTR over all impressions.
– Not a "standard" classifier metric, but in many ways more useful for this application.
– What we usually see: Observed / Expected < 1. This quantifies the "winner's curse", a.k.a. selection bias in auctions: when choosing from among thousands of candidates, an item with a mistakenly over-estimated CTR may end up winning the auction.
– Particularly helpful in spotting inefficiencies by segment, e.g. by bid, number of impressions in training (warmness), geo, etc. It lets us see where the model might be giving too much weight to the wrong campaigns.
– The O/E ratio correlates highly with model performance online. (A short computation sketch follows.)
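Computing the metric by segment is straightforward; a sketch with a hypothetical event format:

```python
from collections import defaultdict

def observed_expected(impressions):
    """impressions: iterable of (segment, clicked, predicted_ctr).
    Returns segment -> observed clicks / expected clicks. Ratios well
    below 1 flag segments where over-estimated CTRs are winning auctions
    (the winner's curse)."""
    obs = defaultdict(float)
    exp = defaultdict(float)
    for segment, clicked, p in impressions:
        obs[segment] += clicked
        exp[segment] += p
    return {s: obs[s] / exp[s] for s in exp if exp[s] > 0}
```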

Page 30

Offline: ROC Curves

Page 31

Online A/B Test

Three models:
– CONTROL (10%)
– LASER (85%)
– LASER-EE (5%)

Segmented analysis: 8 segments by campaign warmness, where the degree of warmness is the number of training samples available for the campaign.
– Segment #1: campaigns with almost no data in training.
– Segment #8: campaigns served most heavily in previous batches, so their CTR estimates can be quite accurate.

Page 32

Daily CTR Lift Over Control

Page 33

Daily CPM Lift Over Control

Page 34

CPM Lift By Campaign Warmness Segments

Page 35

O/E Ratio By Campaign Warmness Segments

Page 36

Number of Campaigns Served Improvement from E/E

Page 37

Insights

Overall performance:
– LASER and LASER-EE are both much better than CONTROL.
– LASER and LASER-EE perform very similarly.

Segmented analysis by campaign warmness:
– Segment #1 (very cold): LASER-EE is much worse than LASER due to its exploration; LASER is much better than CONTROL due to cold-start features.
– Segments #3-#5: LASER-EE is significantly better than LASER; the winner's curse hit LASER.
– Segments #6-#8 (very warm): LASER-EE and LASER are equivalent.

Number of campaigns served:
– LASER-EE serves significantly more campaigns than LASER, providing a healthier marketplace.

Page 38

Theory vs. Practice

Textbook:
– Data is stationary.
– Training data is clean.
– Training is hard; testing and inference are easy.
– Models don't change.
– Complex algorithms work best.

Reality:
– Features and items are changing constantly.
– Fraud, bugs, tracking delays, online/offline inconsistencies, etc.
– All aspects have challenges at web scale.
– Never-ending processes of improvement.
– Simple models with good features and lots of data win.

Page 39

Solutions to Practical Problems

Rapid model development cycle:
– Quick reaction to changes in data and product.
– Write once for training, testing, and inference.

Can adapt to changing data:
– Integrated Thompson sampling explore/exploit.
– Automatic training.
– Multiple training frequencies for different parts of the model.

Good tools yield good models:
– Reusable components for feature extraction and transformation.
– Very high-performance inference engine for deployment.
– Modelers can concentrate on building models, not re-writing common functions or worrying about production issues.

Page 40


Summary

– Reducing dimension through logistic regression, coupled with explore/exploit schemes like Thompson sampling, is an effective mechanism for solving response prediction problems in advertising.

– Partitioning model components into cold-start (stable) and warm-start (non-stationary) parts with different training frequencies is an effective way to scale the computation.

– ADMM, with a few modifications, is an effective model training strategy for large, high-dimensional data.

– These methods work well for LinkedIn advertising, with significant improvements.

Page 41

Collaborators

I wouldn't be here without them; I am extremely lucky to work with such talented individuals:

Liang Zhang, Bo Long, Jonathan Traupman, Romer Rosales, Doris Xin

Page 42


Current Work

Investigating Spark and various other fitting algorithms:
– Promising results; ADMM still looks good on our datasets.

Stream Ads:
– Multi-response prediction (clicks, shares, likes, comments).
– Filtering low-quality ads is extremely important.
– Revenue/engagement tradeoffs (Pareto-optimal solutions).

Stream Recommendation:
– Holistic solution to both content and ads on the stream.

Large-scale ML infrastructure at LinkedIn:
– Powers several recommendation systems.

Page 43


We are hiring!

– Interns for summer 2014 (contact me: [email protected])
– Full-time: graduating PhDs, experienced researchers

Page 44


Backup slides

Page 45

LASER Configuration

Feature processing pipeline:
– Sources: transform external data into feature vectors.
– Transformers: modify/combine feature vectors.
– Assembler: packages feature vectors for training/inference.

Configuration language:
– Model structure can be changed extensively.
– Library of reusable components.
– Train, test, and deploy models without any code changes.
– Speeds up the model development cycle.

Page 46

LASER Transformer Pipeline

(Diagram: the request, user profile, and item data feed the User, Context, and Item Sources; Subset and Interaction transformers modify and combine the resulting feature vectors; the Assembler packages them for training or inference.)

Page 47

LASER Performance

Real-time inference:
– About 10µs per inference (1500 ads = 15ms).
– Reacts to changing features immediately.

"Better wrong than late": if a feature isn't immediately available, back off to its prior value.

Asynchronous computation: actions that block or take time run in background threads.

Lazy evaluation: sources & transformers do not create feature vectors for all items; feature vectors are constructed/transformed only when needed.

Partial results cache: logistic regression inference is a series of dot products; the cached scalars are small, so the cache can be huge. A hardware-like implementation minimizes locking and heap pressure.
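A sketch of the partial-results idea: the portion of an ad's score that depends only on the ad is a dot product that is constant within a model version, so it can be cached as one scalar per (model version, ad). The class and method names here are hypothetical, not LASER's actual API.

```python
class PartialScoreCache:
    """Caches the per-ad constant part of the logistic regression score.
    Only the request-dependent terms are recomputed per inference, so each
    of the ~1500 candidate ads costs a small dot product plus a lookup."""
    def __init__(self, model):
        self.model = model
        self.cache = {}  # (model_version, ad_id) -> cached scalar

    def score(self, ad, request_features):
        key = (self.model.version, ad.id)
        if key not in self.cache:
            # hypothetical: dot product over the ad-only feature vector
            self.cache[key] = self.model.ad_only_score(ad)
        # hypothetical: dot product over request- and interaction features
        return self.cache[key] + self.model.request_score(ad, request_features)
```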