Regret minimization vs. pure exploration: two performance criteria for bandit algorithms

Transcript of "Minimisation du regret vs. Exploration pure: Deux critères de performance pour des algorithmes de bandit" (chercheurs.lille.inria.fr/ekaufman/Spadro0904.pdf)

Page 1:

Regret minimization vs. pure exploration:
Two performance criteria for bandit algorithms

Emilie Kaufmann (Telecom ParisTech)
joint work with Olivier Cappé, Aurélien Garivier
and Shivaram Kalyanakrishnan (Yahoo Labs)

ANR Spadro, April 9, 2014

Page 2:

1 Two bandit problems

2 Regret minimization: a well solved problem

3 Algorithms for pure-exploration

4 The complexity of m best arms identification


Page 3:

Two bandit problems

1 Two bandit problems

2 Regret minimization: a well solved problem

3 Algorithms for pure-exploration

4 The complexity of m best arms identification


Page 4:

Two bandit problems / The model

Bandit model

A multi-armed bandit model is a set of $K$ arms where

Arm $a$ is an unknown probability distribution $\nu_a$ with mean $\mu_a$

Drawing arm $a$ means observing a realization of $\nu_a$

Arms are assumed to be independent

In a bandit game, at round $t$, an agent

chooses an arm $A_t$ to draw based on past observations, according to its sampling strategy (or bandit algorithm)

observes a sample $X_t \sim \nu_{A_t}$

The agent wants to learn which arm(s) have the highest means:

$$a^* = \arg\max_a \mu_a$$


Page 5:

Two bandit problems / The model

Bernoulli bandit model

A multi-armed bandit model is a set of $K$ arms where

Arm $a$ is a Bernoulli distribution $\mathcal{B}(\mu_a)$ (with unknown mean $\mu_a$)

Drawing arm $a$ means observing a realization of $\mathcal{B}(\mu_a)$ (0 or 1)

Arms are assumed to be independent

In a bandit game, at round $t$, an agent

chooses an arm $A_t$ to draw based on past observations, according to its sampling strategy (or bandit algorithm)

observes a sample $X_t \sim \mathcal{B}(\mu_{A_t})$

The agent wants to learn which arm(s) have the highest means:

$$a^* = \arg\max_a \mu_a$$
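
To make the setting concrete, here is a minimal Python sketch of a Bernoulli bandit environment and one round of the game. The class name and the placeholder uniform strategy are illustrative choices, not from the slides:

```python
import random

class BernoulliBandit:
    """K independent Bernoulli arms with hidden means mu[a]."""
    def __init__(self, means):
        self.means = means          # unknown to the agent
        self.K = len(means)

    def draw(self, a):
        """Observe a realization of B(mu[a]): 1 with probability mu[a], else 0."""
        return 1 if random.random() < self.means[a] else 0

# A bandit game: at each round the agent picks A_t from past data and sees X_t.
bandit = BernoulliBandit([0.6, 0.5, 0.4])
history = []                        # list of (arm, reward) pairs
for t in range(10):
    A_t = random.randrange(bandit.K)    # placeholder sampling strategy
    X_t = bandit.draw(A_t)
    history.append((A_t, X_t))
```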


Page 6:

Two bandit problems / Regret minimization

The (classical) bandit problem: regret minimization

Samples are seen as rewards (as in reinforcement learning).

The forecaster wants to maximize the reward accumulated during learning, or equivalently minimize its regret:

$$R_n = n\mu_{a^*} - \mathbb{E}\left[\sum_{t=1}^{n} X_t\right]$$

He has to find a sampling strategy (or bandit algorithm) that realizes a tradeoff between exploration and exploitation.
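
For intuition, $R_n$ can be estimated by Monte Carlo simulation. A hedged sketch reusing the BernoulliBandit class above, restricted to history-free strategies for brevity:

```python
def estimate_regret(policy, means, n, runs=200):
    """Monte Carlo estimate of R_n = n*mu_star - E[sum_t X_t].
    Only supports policies of the form t -> arm (no feedback), for brevity."""
    mu_star = max(means)
    total = 0.0
    for _ in range(runs):
        bandit = BernoulliBandit(means)
        total += sum(bandit.draw(policy(t)) for t in range(n))
    return n * mu_star - total / runs

# Uniform-random sampling has linear regret, about n * (mu_star - average mean).
means = [0.6, 0.5, 0.4]
print(estimate_regret(lambda t: random.randrange(3), means, n=1000))
```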


Page 7:

Two bandit problems / Best arm identification

Best arm identification (or pure exploration)

The forecaster has to find the best arm(s), and does not suffer a loss when drawing 'bad' arms.

He has to find a sampling strategy that

optimally explores the environment,

together with a stopping criterion, and then recommend a set $S$ of $m$ arms such that

$$\mathbb{P}(S \text{ is the set of } m \text{ best arms}) \geq 1 - \delta.$$


Page 8:

Two bandit problems / Regret minimization versus best arm identification

Zoom on an application: Medical trials

A doctor can choose between $K$ different treatments for a given symptom.

Treatment $a$ has unknown probability of success $\mu_a$

Unknown best treatment $a^* = \arg\max_a \mu_a$

If treatment $a$ is given to patient $t$, he is cured with probability $\mu_a$

The doctor:

chooses treatment $A_t$ to give to patient $t$

observes whether the patient is healed: $X_t \sim \mathcal{B}(\mu_{A_t})$

The doctor can adjust his strategy $(A_t)$ so as to:

Regret minimization: maximize the number of patients healed during a study involving $n$ patients.

Pure exploration: identify the best treatment with probability at least $1 - \delta$ (and always give this one later).



Page 10:

Regret minimization: a well solved problem

1 Two bandit problems

2 Regret minimization: a well solved problem

3 Algorithms for pure-exploration

4 The complexity of m best arms identification


Page 11:

Regret minimization: a well solved problem / Performance criterion

Asymptotically optimal algorithms

Let $N_a(t)$ be the number of draws of arm $a$ up to time $t$, so that

$$R_T = \sum_{a=1}^{K} (\mu^* - \mu_a)\,\mathbb{E}[N_a(T)]$$

[Lai and Robbins, 1985]: every consistent policy satisfies

$$\mu_a < \mu^* \;\Rightarrow\; \liminf_{T\to\infty} \frac{\mathbb{E}[N_a(T)]}{\log T} \geq \frac{1}{\mathrm{KL}(\mathcal{B}(\mu_a), \mathcal{B}(\mu^*))}$$

A bandit algorithm is asymptotically optimal if

$$\mu_a < \mu^* \;\Rightarrow\; \limsup_{T\to\infty} \frac{\mathbb{E}[N_a(T)]}{\log T} \leq \frac{1}{\mathrm{KL}(\mathcal{B}(\mu_a), \mathcal{B}(\mu^*))}$$


Page 12:

Regret minimization: a well solved problem / Bandit algorithms for regret minimization

Algorithms: a family of optimistic index policies

For each arm $a$, compute a confidence interval on $\mu_a$:

$$\mu_a \leq \mathrm{UCB}_a(t) \quad \text{w.h.p.}$$

Act as if the best possible model were the true model (optimism in the face of uncertainty):

$$A_t = \arg\max_a \mathrm{UCB}_a(t)$$

Example: UCB1 [Auer et al. 02] uses Hoeffding bounds:

$$\mathrm{UCB}_a(t) = \frac{S_a(t)}{N_a(t)} + \sqrt{\frac{\alpha \log(t)}{2 N_a(t)}},$$

where $S_a(t)$ is the sum of the rewards collected from arm $a$ up to time $t$.

UCB1 is not asymptotically optimal, but one can show that

$$\mathbb{E}[N_a(T)] \leq \frac{K_1}{2(\mu_a - \mu^*)^2} \ln T + K_2, \quad \text{with } K_1 > 1.$$
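
A minimal sketch of UCB1 with this index, reusing the BernoulliBandit class above. The draw-each-arm-once initialization is a standard convention, not spelled out on the slide:

```python
import math

def ucb1(bandit, horizon, alpha=2.0):
    """UCB1: after one pull per arm, pull the arm maximizing
    S_a/N_a + sqrt(alpha * log(t) / (2 * N_a))."""
    K = bandit.K
    N = [0] * K    # N_a(t): number of draws of arm a
    S = [0] * K    # S_a(t): sum of rewards collected from arm a
    for a in range(K):                       # initialization: draw each arm once
        S[a] += bandit.draw(a); N[a] += 1
    for t in range(K + 1, horizon + 1):
        ucb = [S[a] / N[a] + math.sqrt(alpha * math.log(t) / (2 * N[a]))
               for a in range(K)]
        a = max(range(K), key=lambda i: ucb[i])   # optimistic choice
        S[a] += bandit.draw(a); N[a] += 1
    return N, S
```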



Page 14:

Regret minimization: a well solved problem / Bandit algorithms for regret minimization

KL-UCB: an asymptotically optimal frequentist algorithm

KL-UCB [Cappé et al. 2013] uses the index:

$$u_a(t) = \max\left\{ x > \frac{S_a(t)}{N_a(t)} \;:\; d\!\left(\frac{S_a(t)}{N_a(t)}, x\right) \leq \frac{\ln(t) + c \ln\ln(t)}{N_a(t)} \right\}$$

with $d(p, q) = \mathrm{KL}(\mathcal{B}(p), \mathcal{B}(q)) = p \log\left(\frac{p}{q}\right) + (1-p)\log\left(\frac{1-p}{1-q}\right)$.

[Figure: the divergence $q \mapsto d(S_a(t)/N_a(t), q)$ plotted over $[0, 1]$; the index $u_a(t)$ is the largest $q$ at which the curve stays below the threshold $\beta(t, \delta)/N_a(t)$.]

$$\mathbb{E}[N_a(T)] \leq \frac{1}{d(\mu_a, \mu^*)} \ln T + C$$
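
Since $d(p, \cdot)$ is increasing to the right of $p$, the index $u_a(t)$ can be computed by bisection. A sketch under that observation; the tolerance and the clipping in the divergence are my choices:

```python
import math

def kl_bernoulli(p, q, eps=1e-12):
    """d(p, q) = KL(B(p), B(q)), with clipping to avoid log(0)."""
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, n, t, c=0.0, tol=1e-6):
    """u_a(t): largest x >= mean with d(mean, x) <= (ln t + c ln ln t) / n."""
    threshold = math.log(t) + (c * math.log(math.log(t)) if t > math.e else 0.0)
    lo, hi = mean, 1.0
    while hi - lo > tol:              # d(mean, .) is increasing on [mean, 1]
        mid = (lo + hi) / 2
        if n * kl_bernoulli(mean, mid) <= threshold:
            lo = mid                  # mid still satisfies the constraint
        else:
            hi = mid
    return lo
```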


Page 15:

Regret minimization: a well solved problem / Bandit algorithms for regret minimization

Regret minimization: Summary

An (asymptotic) lower bound on the regret of any good algorithm

$$\liminf_{T\to\infty} \frac{R_T}{\log T} \geq \sum_{a : \mu_a < \mu^*} \frac{\mu^* - \mu_a}{\mathrm{KL}(\mathcal{B}(\mu_a), \mathcal{B}(\mu^*))}$$

An algorithm based on confidence intervals matching this lower bound: KL-UCB

A Bayesian approach to the MAB problem can also lead to asymptotically optimal algorithms (Thompson Sampling, Bayes-UCB)



Page 17:

Algorithms for pure-exploration

1 Two bandit problems

2 Regret minimization: a well solved problem

3 Algorithms for pure-exploration

4 The complexity of m best arms identification


Page 18:

Algorithms for pure-exploration / m best arm identification

m best arms identification

Assume $\mu_1 \geq \cdots \geq \mu_m > \mu_{m+1} \geq \cdots \geq \mu_K$.

Parameters and notations

$m$: the number of arms to find

$\delta \in (0, 1)$: a risk parameter

$S_m^* = \{1, \ldots, m\}$: the set of $m$ optimal arms

The forecaster

chooses at time $t$ one (or several) arms to draw

decides to stop after a (possibly random) total number $\tau$ of samples from the arms

recommends a set $S$ of $m$ arms

His goal

$\mathbb{P}(S = S_m^*) \geq 1 - \delta$, and $\mathbb{E}[\tau]$ is small (fixed-confidence setting)



Page 21:

Algorithms for pure-exploration / Generic algorithms using confidence intervals

Generic algorithms based on confidence intervals

Generic notations:

confidence interval on the mean of arm $a$ at round $t$:

$$\mathcal{I}_a(t) = [L_a(t), U_a(t)]$$

$J(t)$: the set of estimated $m$ best arms at round $t$ (the $m$ empirical best)

$u_t \in J(t)^c$ and $l_t \in J(t)$: two 'critical' arms (likely to be misclassified),

$$u_t = \arg\max_{a \notin J(t)} U_a(t) \quad \text{and} \quad l_t = \arg\min_{a \in J(t)} L_a(t).$$
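
These notations translate directly into code. A small sketch; how the bounds $L_a$ and $U_a$ are obtained is algorithm-specific, so here they are passed in:

```python
def critical_arms(means_hat, L, U, m):
    """Return (J, u_t, l_t): the m empirical best arms and the two critical arms."""
    K = len(means_hat)
    order = sorted(range(K), key=lambda a: means_hat[a], reverse=True)
    J = set(order[:m])                                   # m empirical best arms
    u_t = max((a for a in range(K) if a not in J), key=lambda a: U[a])
    l_t = min(J, key=lambda a: L[a])
    return J, u_t, l_t
```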


Page 22:

Algorithms for pure-exploration / Uniform sampling and eliminations

(KL)-Racing: uniform sampling and eliminations

The algorithm maintains a set of remaining arms $\mathcal{R}$ and at round $t$:

draws all the arms in $\mathcal{R}$ (uniform sampling)

possibly accepts the empirical best or discards the empirical worst

[Figure: confidence intervals for six arms with $\mu = [0.6\ 0.5\ 0.4\ 0.3\ 0.2\ 0.1]$, $m = 3$, $\delta = 0.1$.]

In this situation, arm 1 is selected
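
A hedged sketch of this Racing loop with plain Hoeffding intervals; the exploration rate $\log(4Kt^2/\delta)$ is a simple union-bound choice for illustration, not the $\beta(t, \delta)$ of the later slides:

```python
import math

def racing(bandit, m, delta, max_rounds=10_000):
    """Racing sketch: uniform sampling over the remaining arms R, accepting
    arms shown to be in the top m, discarding arms shown to be outside it."""
    R = list(range(bandit.K))
    accepted = []
    N = [0] * bandit.K
    S = [0] * bandit.K
    for t in range(1, max_rounds + 1):
        for a in R:                                    # uniform sampling
            S[a] += bandit.draw(a); N[a] += 1
        k = m - len(accepted)                          # arms still to find
        if len(R) <= k:                                # all remaining arms must be kept
            return accepted + R
        rad = math.sqrt(math.log(4 * bandit.K * t * t / delta) / (2 * t))
        R.sort(key=lambda a: S[a] / N[a], reverse=True)
        top, rest = R[:k], R[k:]
        best, worst = R[0], R[-1]
        if S[best] / N[best] - rad > max(S[a] / N[a] + rad for a in rest):
            accepted.append(best); R.remove(best)      # accept the empirical best
        elif S[worst] / N[worst] + rad < min(S[a] / N[a] - rad for a in top):
            R.remove(worst)                            # discard the empirical worst
        if len(accepted) == m:
            return accepted
    return accepted + R[:m - len(accepted)]            # sampling budget exhausted
```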


Page 23:

Algorithms for pure-exploration / Adaptive sampling

(KL)-LUCB algorithm: adaptive sampling

At round t, the algorithm:

draws only two well-chosen arms, $u_t$ and $l_t$ (adaptive sampling)

stops when the confidence intervals for arms in $J(t)$ and $J(t)^c$ are separated

[Figure: confidence intervals for six arms (draw counts 58, 118, 346, 330, 120, 72); set $J(t)$ with arm $l_t$ in bold, set $J(t)^c$ with arm $u_t$ in bold.]
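
A hedged sketch of the LUCB loop, reusing critical_arms and the Hoeffding-style radius from the Racing sketch; the KL version would substitute the KL confidence bounds of the next slide:

```python
import math

def lucb(bandit, m, delta, max_rounds=100_000):
    """LUCB sketch: at each round, draw the two critical arms u_t and l_t;
    stop when the m empirical best are separated from the rest w.h.p."""
    K = bandit.K
    N = [1] * K
    S = [bandit.draw(a) for a in range(K)]         # initialization: one draw each
    for t in range(K, max_rounds):
        rad = lambda a: math.sqrt(math.log(4 * K * t * t / delta) / (2 * N[a]))
        L = [S[a] / N[a] - rad(a) for a in range(K)]
        U = [S[a] / N[a] + rad(a) for a in range(K)]
        J, u_t, l_t = critical_arms([S[a] / N[a] for a in range(K)], L, U, m)
        if L[l_t] > U[u_t]:                        # intervals separated: stop
            return J, t
        for a in (u_t, l_t):                       # adaptive sampling
            S[a] += bandit.draw(a); N[a] += 1
    return J, max_rounds
```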


Page 24:

Algorithms for pure-exploration / Theoretical guarantees

Two δ-PAC algorithms

$$L_a(t) = \min\left\{ q \in [0, \hat{\mu}_a(t)] : N_a(t)\, d(\hat{\mu}_a(t), q) \leq \beta(t, \delta) \right\},$$
$$U_a(t) = \max\left\{ q \in [\hat{\mu}_a(t), 1] : N_a(t)\, d(\hat{\mu}_a(t), q) \leq \beta(t, \delta) \right\},$$

for $\beta(t, \delta)$ some exploration rate.

Theorem. The KL-Racing and KL-LUCB algorithms using

$$\beta(t, \delta) = \log\left(\frac{k_1 K t^{\alpha}}{\delta}\right), \tag{1}$$

with $\alpha > 1$ and $k_1 > 1 + \frac{1}{\alpha - 1}$, satisfy $\mathbb{P}(S = S_m^*) \geq 1 - \delta$.
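
Both bounds can be computed by bisection exactly as for the KL-UCB index. A sketch reusing kl_bernoulli from above, with $\beta(t, \delta)$ as in (1); the default $\alpha$ and the margin on $k_1$ are my choices:

```python
import math

def kl_confidence_interval(mean, n, t, delta, K, alpha=1.1, tol=1e-6):
    """[L_a(t), U_a(t)] from n * d(mean, q) <= beta(t, delta), beta as in (1)."""
    k1 = 1 + 1 / (alpha - 1) + 1e-3            # any k1 > 1 + 1/(alpha - 1)
    beta = math.log(k1 * K * t ** alpha / delta)

    def search(lo, hi, increasing):
        # bisection on q -> n * d(mean, q), monotone on each side of mean
        while hi - lo > tol:
            mid = (lo + hi) / 2
            inside = n * kl_bernoulli(mean, mid) <= beta
            if inside == increasing:
                lo = mid
            else:
                hi = mid
        return lo if increasing else hi

    U = search(mean, 1.0, True)                # largest feasible q above mean
    L = search(0.0, mean, False)               # smallest feasible q below mean
    return L, U
```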


Page 25:

Algorithms for pure-exploration / Empirical performance

Confidence intervals based on KL are always better

[Figure: histograms of the number of samples used (x-axis: samples / 10000; y-axis: fraction of runs, in bins of width 1000) by LUCB and KL-LUCB on two scenarios $B_1$ and $B_2$.]

$B_1$: $K = 15$; $\mu_1 = \frac{1}{2}$; $\mu_a = \frac{1}{2} - \frac{a}{40}$ for $a = 2, 3, \ldots, K$. $B_2 = \frac{1}{2} B_1$.

Adaptive Sampling seems to do better than Uniform Sampling

[Figure: expected sample complexity (/ 10000) as a function of $K$, for Racing, KL-Racing, LSC, KL-LSC, LUCB and KL-LUCB.]


Page 26:

Algorithms for pure-exploration / Sample complexity

Sample complexity analysis

A new informational quantity: Chernoff information

$$d^*(x, y) := d(z^*, x) = d(z^*, y),$$

where $z^*$ is defined by the equality $d(z^*, x) = d(z^*, y)$.

[Figure: the curves $z \mapsto d(z, x)$ and $z \mapsto d(z, y)$ intersect at $z^*$, located between $x$ and $y$.]
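
$z^*$ can be found by bisection, since $d(z, x)$ grows and $d(z, y)$ shrinks as $z$ moves from $\min(x, y)$ toward $\max(x, y)$. A sketch reusing kl_bernoulli:

```python
def chernoff_information(x, y, tol=1e-9):
    """d*(x, y) = d(z*, x) = d(z*, y), with z* between x and y."""
    a, b = min(x, y), max(x, y)
    lo, hi = a, b
    while hi - lo > tol:
        z = (lo + hi) / 2
        if kl_bernoulli(z, a) < kl_bernoulli(z, b):
            lo = z                  # z too close to a: d(z, a) still too small
        else:
            hi = z
    return kl_bernoulli((lo + hi) / 2, x)
```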


Page 27:

Algorithms for pure-exploration / Sample complexity

Sample complexity analysis

KL-LUCB with $\beta(t, \delta) = \log\left(\frac{k_1 K t^{\alpha}}{\delta}\right)$ is $\delta$-PAC and satisfies, for $\alpha > 2$,

$$\mathbb{E}[\tau] \leq 4\alpha H^* \left[\log\left(\frac{k_1 K (H^*)^{\alpha}}{\delta}\right) + \log\log\left(\frac{k_1 K (H^*)^{\alpha}}{\delta}\right)\right] + C_{\alpha},$$

with

$$H^* = \min_{c \in [\mu_{m+1}, \mu_m]} \sum_{a=1}^{K} \frac{1}{d^*(\mu_a, c)}.$$
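
A hedged sketch estimating $H^*$ by a grid search over $c$, reusing chernoff_information; the grid resolution is arbitrary, and a coarse grid already gives the right order of magnitude:

```python
def complexity_H_star(means, m, grid=1000):
    """H* = min over c in [mu_{m+1}, mu_m] of sum_a 1 / d*(mu_a, c)."""
    mus = sorted(means, reverse=True)
    lo, hi = mus[m], mus[m - 1]          # the interval [mu_{m+1}, mu_m]
    best = float("inf")
    for i in range(1, grid):             # c strictly inside the interval
        c = lo + (hi - lo) * i / grid
        best = min(best, sum(1 / chernoff_information(mu, c) for mu in mus))
    return best

# Example: the six-arm instance of the Racing slide, with m = 3.
print(complexity_H_star([0.6, 0.5, 0.4, 0.3, 0.2, 0.1], m=3))
```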


Page 28:

The complexity of m best arms identification

1 Two bandit problems

2 Regret minimization: a well solved problem

3 Algorithms for pure-exploration

4 The complexity of m best arms identification


Page 29:

The complexity of m best arms identification / Lower bound

Lower bound on the sample complexity

For KL-LUCB, $\mathbb{E}[\tau] = O\left(H^* \log \frac{1}{\delta}\right)$.

Theorem. Any algorithm that is $\delta$-PAC on every bandit model such that $\mu_m > \mu_{m+1}$ satisfies, for $\delta \leq 0.15$,

$$\mathbb{E}[\tau] \geq \left(\sum_{a=1}^{m} \frac{1}{d(\mu_a, \mu_{m+1})} + \sum_{a=m+1}^{K} \frac{1}{d(\mu_a, \mu_m)}\right) \log\left(\frac{1}{2\delta}\right)$$
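
For intuition, this lower-bound constant can be evaluated and compared with $H^*$ on a concrete instance. A sketch reusing kl_bernoulli and complexity_H_star from above:

```python
def lower_bound_constant(means, m):
    """Sum_{a<=m} 1/d(mu_a, mu_{m+1}) + Sum_{a>m} 1/d(mu_a, mu_m)."""
    mus = sorted(means, reverse=True)
    top = sum(1 / kl_bernoulli(mu, mus[m]) for mu in mus[:m])       # mus[m] = mu_{m+1}
    rest = sum(1 / kl_bernoulli(mu, mus[m - 1]) for mu in mus[m:])  # mus[m-1] = mu_m
    return top + rest

# Compare with H* on the six-arm instance used earlier (m = 3):
means = [0.6, 0.5, 0.4, 0.3, 0.2, 0.1]
print(lower_bound_constant(means, 3), complexity_H_star(means, 3))
```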


Page 30:

The complexity of m best arms identification / The complexity term

The informational complexity of m best arm identification

For a bandit model ν, one can introduce the complexity term

$$\kappa_C(\nu) = \inf_{\mathcal{A}\ \delta\text{-PAC algorithm}} \ \limsup_{\delta \to 0} \frac{\mathbb{E}_{\nu}[\tau]}{\log \frac{1}{\delta}}.$$

Our results can be rewritten as

$$\sum_{a=1}^{m} \frac{1}{d(\mu_a, \mu_{m+1})} + \sum_{a=m+1}^{K} \frac{1}{d(\mu_a, \mu_m)} \;\leq\; \kappa_C(\nu) \;\leq\; 4 \min_{c \in [\mu_{m+1}, \mu_m]} \sum_{a=1}^{K} \frac{1}{d^*(\mu_a, c)}$$


Page 31:

The complexity of m best arms identification / The complexity term

Regret minimization versus best arms identification

KL-based confidence intervals are useful in both settings, although KL-UCB and KL-LUCB draw the arms in a different fashion:

[Figure: draw counts for six arms under the two algorithms: 785, 50, 115, 28, 14, 7 versus 24, 69, 396, 325, 148, 38.]

Do the complexities of these two problems feature the same information-theoretic quantities?

$$\inf_{\text{consistent algorithms}} \ \limsup_{T\to\infty} \frac{R_T}{\log T} = \sum_{a=2}^{K} \frac{\mu_1 - \mu_a}{d(\mu_a, \mu_1)}$$

$$\inf_{\delta\text{-PAC algorithms}} \ \limsup_{\delta \to 0} \frac{\mathbb{E}[\tau]}{\log(1/\delta)} \geq \sum_{a=1}^{m} \frac{1}{d(\mu_a, \mu_{m+1})} + \sum_{a=m+1}^{K} \frac{1}{d(\mu_a, \mu_m)}$$
