Page 1:

Reinforcement Learning

• Introduction
• Passive Reinforcement Learning
• Temporal Difference Learning
• Active Reinforcement Learning
• Applications
• Summary

Page 2:

Introduction

Supervised Learning:

Example → Class

Reinforcement Learning:

Situation → Reward, Situation → Reward, …

Page 3:

Examples

Playing chess: Reward comes at end of game

Ping-pong: Reward on each point scored

Animals: hunger and pain → negative reward; food intake → positive reward

Page 4:

Framework: Agent in State Space

[Figure: XYZ-World, a state space with states 1-10 and no terminal states. Rewards shown: R=+5 (state 3), R=-9 (state 6), R=-6 (state 9), R=+4 (state 8), R=+3 (state 5). Actions are moves e/s/n/w between neighboring states, plus a stochastic operator x whose two outcomes occur with probabilities 0.7 and 0.3.]

Problem: What actions should an agent choose to maximize its rewards?

Example: XYZ-World. Remark: no terminal states.

Page 5:

[Figure: XYZ-World state space, as on the previous slide.]

XYZ-World: Discussion Problem 12

[Figure annotation: pairs of utility estimates (Bellman, TD) shown for selected states under policy P: (3.3, 0.5), (3.2, -0.5), (0.6, -0.2).]

P: 1-2-3-6-5-8-6-9-10-8-6-5-7-4-1-2-5-7-4-1.

Explanation of the discrepancies between TD for P and Bellman:
• The most significant discrepancies are in states 3 and 8; a minor one in state 10.
• P chooses the worst successor of state 8; it should apply operator x instead.
• P should apply w in state 6, but does so in only 2/3 of the cases, which affects the utility of state 3.
• The low utility value of state 8 in TD seems to lower the utility value of state 10 (only a minor discrepancy).

I tried hard, but: any better explanations?

Page 6:

XYZ-World: Discussion Problem 12, Bellman Update with γ = 0.2

[Figure annotation: utility values obtained with the Bellman update for γ = 0.2:
state 1: 0.145, state 2: 0.72, state 3: 0.58 (R=+5), state 4: 0.03, state 5: 3.63 (R=+3),
state 6: -8.27 (R=-9), state 7: 0.001, state 8: 3.17 (R=+4), state 9: -5.98 (R=-6), state 10: 0.63.]

Discussion on using the Bellman Update for Problem 12:
• No convergence for γ = 1.0; the utility values seem to run away!
• State 3 has utility 0.58 although it gives a reward of +5, due to the immediate penalty that follows; we were able to detect that.
• Did anybody run the algorithm for other γ values, e.g. 0.4 or 0.6? If yes, did it converge to the same values?
• The speed of convergence seems to depend on the value of γ.

Page 7:

[Figure: XYZ-World state space, as on the previous slides.]

XYZ-World: Discussion Problem 12

[Figure annotation: pairs of utility estimates (TD, TD with inverse rewards) shown for selected states: (0.57, -0.65), (-0.50, 0.47), (-0.18, -0.12), (2.98, -2.99).]

P: 1-2-3-6-5-8-6-9-10-8-6-5-7-4-1-2-5-7-4-1.

Other observations:
• The Bellman update did not converge for γ = 1.
• The Bellman update converged very fast for γ = 0.2.
• Did anybody try other values for γ (e.g. 0.6)?
• The Bellman update suggests a utility value of about 3.6 for state 5; what does this tell us about the optimal policy? E.g., is 1-2-5-7-4-1 optimal?
• TD reversed the utility values quite neatly when the rewards were inverted; x became -x+ε with ε in [-0.08, 0.08].

Page 8:

XYZ-World --- Other Considerations

• R(s) might be known in advance or may have to be learnt.
• R(s) might be probabilistic or not.
• R(s) might change over time --- the agent has to adapt.
• Results of actions might be known in advance or may have to be learnt; results of actions can be fixed, or may change over time.

Page 9:

Example: The ABC-World (to be used in Assignment 3)

[Figure: ABC-World, a state space with states 1-10 and no terminal states. Rewards printed in the figure: state 3: R=+5, state 10: R=+9, state 6: R=9, state 9: R=, state 8: R=3, state 5: R=4, state 4: R=1. Actions are moves e/s/n/w between neighboring states, plus a stochastic operator x whose two outcomes occur with probabilities 0.9 and 0.1.]

Problem: What actions should an agent choose to maximize its rewards?

Remark: no terminal states.

Page 10:

Basic Notations and Preview

• T(s,a,s') denotes the probability of reaching s' when using action a in state s; it describes the transition model.

• A policy Π specifies what action to take for every possible state s ∈ S.

• R(s) denotes the reward an agent receives in state s.

• Utility-based agents learn a utility function over states and use it to select actions that maximize the expected outcome utility.

• Q-learning, on the other hand, learns the expected utility of taking a particular action a in a particular state s (the Q-value of the pair (s,a)).

• Finally, reflex agents learn a policy that maps directly from states to actions.
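To make the notation concrete, here is a minimal sketch of how T, R, and a policy might be represented in Python; the two-state toy world and all names are illustrative, not part of the lecture.

```python
# A toy illustration of the notation (not the XYZ-World).
from typing import Dict, Tuple

State = str
Action = str

# T[(s, a)] is a probability distribution over successor states s',
# i.e. the transition model T(s, a, s').
T: Dict[Tuple[State, Action], Dict[State, float]] = {
    ("s1", "e"): {"s2": 1.0},
    ("s2", "x"): {"s1": 0.7, "s2": 0.3},  # a stochastic operator, like x/0.7, x/0.3
}

# R[s] is the reward an agent receives in state s.
R: Dict[State, float] = {"s1": 0.0, "s2": 5.0}

# A policy maps every state to an action.
policy: Dict[State, Action] = {"s1": "e", "s2": "x"}
```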

Page 11:

Reinforcement Learning

• Introduction
• Passive Reinforcement Learning
• Temporal Difference Learning
• Active Reinforcement Learning
• Applications
• Summary

Page 12:

Passive Learning

• We assume the policy Π is fixed.

• In state s we always execute action Π(s)

• Rewards are given.

Page 13:

Figure 21.1a (the 4×3 grid world with two terminal states)

The agent follows the arrows with probability 0.8 and moves to the right or left of the arrow direction with probability 0.1 each; an agent that moves towards a wall is reflected off it and stays in its original state.

All non-terminal states have reward -0.04; the two terminal states have rewards +1 and -1.
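The motion model just described can be sketched as a small transition function. The coordinate convention (1,1) to (4,3) follows the trials on the next slide; the blocked square at (2,2) and all helper names are assumptions based on the standard textbook grid.

```python
# Sketch of the motion model described above (helper names are illustrative).
DIRS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}
LEFT_OF = {"N": "W", "W": "S", "S": "E", "E": "N"}
RIGHT_OF = {"N": "E", "E": "S", "S": "W", "W": "N"}

def transition(state, action, width=4, height=3, blocked=frozenset({(2, 2)})):
    """Distribution over successor states: 0.8 in the intended direction,
    0.1 to each side; moves into a wall (or the blocked square) stay put."""
    def move(s, d):
        dx, dy = DIRS[d]
        nxt = (s[0] + dx, s[1] + dy)
        if not (1 <= nxt[0] <= width and 1 <= nxt[1] <= height) or nxt in blocked:
            return s                      # reflected off the wall: stay in place
        return nxt

    dist = {}
    for d, p in [(action, 0.8), (LEFT_OF[action], 0.1), (RIGHT_OF[action], 0.1)]:
        s2 = move(state, d)
        dist[s2] = dist.get(s2, 0.0) + p
    return dist

# From (1,1), trying to go North: 0.8 to (1,2), 0.1 to (2,1), 0.1 bounce back to (1,1).
print(transition((1, 1), "N"))
```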

Page 14:

Typical Trials

(1,1) -0.04 (1,2) -0.04 (1,3) -0.04 (1,2) -0.04 (1,3) -0.04 … (4,3) +1

Goal: Use rewards to learn the expected utility UΠ(s)

Page 15:

Expected Utility

UΠ(s) = E[ Σt=0..∞ γ^t R(st) | Π, s0 = s ]

The expected sum of (discounted) rewards obtained when starting in s and following the policy Π.

Page 16:

Example

(1,1) -0.04 (1,2) -0.04 (1,3) -0.04 (2,3) -0.04 (3,3) -0.04 (4,3) +1

Total reward for s = (1,1): Uπ(s) = (-0.04 × 5) + 1 = 0.80
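As a quick sanity check of this arithmetic, the same return computed in Python (with γ = 1):

```python
# Return of the sample trial (1,1) ... (4,3), with gamma = 1.
rewards = [-0.04, -0.04, -0.04, -0.04, -0.04, 1.0]  # reward of each visited state
gamma = 1.0
print(sum(gamma ** t * r for t, r in enumerate(rewards)))  # ~0.80
```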

Page 17:

Direct Utility Estimation

Convert the problem to a supervised learning problem:

(1,1) → U = 0.72
(2,1) → U = 0.68
…
Learn to map states to utilities.

Problem: utilities are not independent of each other!
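A minimal sketch of direct utility estimation, assuming trials are given as lists of (state, reward) pairs: the reward-to-go observed from each state onward is used as a training sample, and samples are averaged per state. All names are illustrative.

```python
from collections import defaultdict

def direct_utility_estimation(trials, gamma=1.0):
    """Average, for every state seen in the trials, the (discounted)
    reward-to-go observed from that state onward."""
    totals, counts = defaultdict(float), defaultdict(int)
    for trial in trials:            # trial: list of (state, reward) pairs
        g = 0.0
        backwards = []
        for state, reward in reversed(trial):
            g = reward + gamma * g  # reward-to-go from this state
            backwards.append((state, g))
        for state, g in backwards:
            totals[state] += g
            counts[state] += 1
    return {s: totals[s] / counts[s] for s in totals}

trial = [((1, 1), -0.04), ((1, 2), -0.04), ((1, 3), -0.04),
         ((2, 3), -0.04), ((3, 3), -0.04), ((4, 3), 1.0)]
print(direct_utility_estimation([trial])[(1, 1)])  # ~0.80
```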

Page 18:

Bellman Equation

Utility values obey the following equations:

U(s) = R(s) + γ maxa Σs' T(s,a,s') U(s')

Can be solved using dynamic programming. Assumes knowledge of the transition model T and the reward function R; the result is policy independent!

Assume γ = 1 for this lecture!

Incorrect formula replaced on March 10, 2006

Page 19:

Example

U(1,3) = 0.84
U(2,3) = 0.92

We hope to see that: U(1,3) = -0.04 + U(2,3)

The right-hand side is 0.88, so the current value of U(1,3) = 0.84 is a bit low and we must increase it.

Page 20:

Bellman Update (Section 17.2 of textbook)

Bellman Update:

Ui+1(s) = R(s) + γ maxa Σs' T(s,a,s') Ui(s')

If we apply the Bellman update indefinitely often, we obtain the utility values that are the solution of the Bellman equation!!

Some equations for the XYZ World:
Ui+1(1) = 0 + γ Ui(2)
Ui+1(5) = 3 + γ max(Ui(7), Ui(8))
Ui+1(8) = 4 + γ max(Ui(6), 0.3 Ui(7) + 0.7 Ui(9))
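Here is a minimal sketch of the Bellman update as repeated sweeps over a dictionary-based T/R representation (as in the notation sketch earlier); this is an illustration, not the assignment code, and it assumes every state has at least one applicable action and γ < 1.

```python
def bellman_sweep(U, T, R, gamma, actions):
    """One Bellman update over all states:
    U_{i+1}(s) = R(s) + gamma * max_a sum_{s'} T(s,a,s') * U_i(s')."""
    return {
        s: R[s] + gamma * max(
            sum(p * U[s2] for s2, p in T[(s, a)].items())
            for a in actions if (s, a) in T
        )
        for s in U
    }

def value_iteration(T, R, gamma, actions, eps=1e-6):
    """Apply the Bellman update until the values stop changing."""
    U = {s: 0.0 for s in R}
    while True:
        new_U = bellman_sweep(U, T, R, gamma, actions)
        if max(abs(new_U[s] - U[s]) for s in U) < eps:
            return new_U
        U = new_U

# e.g. with the toy T, R from the notation sketch:
# print(value_iteration(T, R, gamma=0.2, actions=["e", "x"]))
```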

Page 21:

Updating Estimations Based on Observations:

New_Estimation = Old_Estimation*(1-α) + Observed_Value*α
New_Estimation = Old_Estimation + Observed_Difference*α

Example: Measure the utility of a state s whose current value is 2, with observed values 3 and 3 and learning rate α = 0.2:

Initial utility value: 2
Utility value after observing 3: 2×0.8 + 3×0.2 = 2.2
Utility value after observing 3, 3: 2.2×0.8 + 3×0.2 = 2.36
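The same running-average update in Python, reproducing the numbers above (α = 0.2; names are illustrative):

```python
def update(old_estimate, observed_value, alpha=0.2):
    # New = Old*(1 - alpha) + Observed*alpha  =  Old + alpha*(Observed - Old)
    return old_estimate + alpha * (observed_value - old_estimate)

u = 2.0
u = update(u, 3.0)  # 2.2
u = update(u, 3.0)  # ~2.36
print(u)
```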

Page 22:

Reinforcement Learning

• Introduction
• Passive Reinforcement Learning
• Temporal Difference Learning
• Active Reinforcement Learning
• Applications
• Summary

Page 23:

Temporal Difference Learning

Idea: Use observed transitions to adjust the values of observed states so that they comply with the constraint equation, using the following update rule:

UΠ(s) ← UΠ(s) + α [ R(s) + γ UΠ(s') - UΠ(s) ]

α is the learning rate; γ is the discount factor. This is the temporal difference equation. No model assumptions --- T and R need not be known.
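A minimal sketch of the passive TD update applied along one observed trial; fixing the terminal state's utility to its reward is an assumption made for the example, and all names are illustrative.

```python
def td_update(U, s, r, s_next, alpha=0.1, gamma=1.0):
    # U(s) <- U(s) + alpha * [ R(s) + gamma*U(s') - U(s) ]
    U[s] += alpha * (r + gamma * U[s_next] - U[s])

def run_trial(U, trial, alpha=0.1, gamma=1.0):
    """Apply the TD update along one trial, given as (state, reward) pairs."""
    for (s, r), (s_next, _) in zip(trial, trial[1:]):
        td_update(U, s, r, s_next, alpha, gamma)

U = {(1, 1): 0.0, (1, 2): 0.0, (1, 3): 0.0, (2, 3): 0.0, (3, 3): 0.0,
     (4, 3): 1.0}   # terminal utility fixed to its reward (an assumption)
trial = [((1, 1), -0.04), ((1, 2), -0.04), ((1, 3), -0.04),
         ((2, 3), -0.04), ((3, 3), -0.04), ((4, 3), 1.0)]
run_trial(U, trial)
print(U)
```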

Page 24:

Fig. 21.5a

Page 25:

Fig. 21.5b

Page 26:

TD-Q-Learning

Goal: Measure the utility of using action a in state s, denoted by Q(a,s); the following update formula is used every time an agent reaches state s' from s using action a:

Q(a,s) ← Q(a,s) + α [ R(s) + γ maxa' Q(a',s') - Q(a,s) ]

• α is the learning rate; γ is the discount factor
• A variation of TD-Learning
• It is not necessary to know the transition model T!
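A minimal sketch of this Q-update with a dictionary-based Q-table; the reward convention follows the slide (R(s) is the reward of the state being left), and the states, actions, and numbers are illustrative.

```python
from collections import defaultdict

def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
    # Q(a,s) <- Q(a,s) + alpha * [ R(s) + gamma*max_a' Q(a',s') - Q(a,s) ]
    best_next = max(Q[(a2, s_next)] for a2 in actions)
    Q[(a, s)] += alpha * (r + gamma * best_next - Q[(a, s)])

Q = defaultdict(float)            # unseen Q-values default to 0
actions = ["n", "s", "e", "w"]
q_update(Q, "s1", "e", r=5.0, s_next="s2", actions=actions)
print(Q[("e", "s1")])             # 0.5 after one update
```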

Page 27:

Reinforcement Learning

• Introduction
• Passive Reinforcement Learning
• Temporal Difference Learning
• Active Reinforcement Learning
• Applications
• Summary

Page 28:

Active Reinforcement Learning

Now we must decide what actions to take.

Optimal policy: Choose action with highest utility value.

Is that the right thing to do?

Page 29:

Active Reinforcement Learning

No! Sometimes we may get stuck in suboptimal solutions.

Exploration vs Exploitation Tradeoff

Why is this important?

The learned model is not the same as the true environment.

Page 30:

Explore vs Exploit

Exploitation: Maximize its reward

vs

Exploration: Maximize long-term well-being.

Page 31:

Simple Solution to the Exploitation/Exploration Problem

• Choose a random action once in k times

• Otherwise, choose the action with the highest expected utility (k-1 out of k times)
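A minimal sketch of this "1 in k" rule, assuming the expected utility of an action is read from a Q-table; all names and numbers are illustrative.

```python
import random

def choose_action(state, actions, Q, k=10):
    """With probability 1/k explore (random action); otherwise exploit the
    action with the highest estimated utility in the Q-table."""
    if random.random() < 1.0 / k:
        return random.choice(actions)                              # explore
    return max(actions, key=lambda a: Q.get((a, state), 0.0))      # exploit

actions = ["n", "s", "e", "w"]
Q = {("e", "s1"): 0.5}
print(choose_action("s1", actions, Q, k=5))   # usually "e", sometimes random
```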

Page 32:

Another Solution --- Combining Exploration and Exploitation

U+(s) ← R(s) + γ maxa f(u, n), where u = Σs' T(s,a,s') U+(s') and n = N(a,s)

U+(s): optimistic estimate of the utility of s
N(a,s): number of times action a has been tried in s
f(u,n): exploration function (idea: it returns the value u if n is large, and values larger than u as n decreases)

Example: f(u,n) := if n > navg then u else max((n/navg)*u, uavg), with navg being the average number of operator applications.

Idea of f: the utility of states/actions that have not been explored much is increased artificially.
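A small sketch of the example exploration function given above; since the slide does not spell out how navg and uavg are obtained, they are passed in as parameters here, and all names are illustrative.

```python
def exploration_value(u, n, n_avg, u_avg):
    """f(u, n) from the slide: return u if the (state, action) pair has been
    tried often enough, otherwise boost rarely-tried pairs toward u_avg."""
    if n > n_avg:
        return u
    return max((n / n_avg) * u, u_avg)

print(exploration_value(u=0.5, n=1, n_avg=4, u_avg=2.0))   # 2.0 (boosted)
print(exploration_value(u=0.5, n=10, n_avg=4, u_avg=2.0))  # 0.5 (plain estimate)
```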

Page 33:

Reinforcement Learning

• Introduction
• Passive Reinforcement Learning
• Temporal Difference Learning
• Active Reinforcement Learning
• Applications
• Summary

Page 34:

Applications

Game Playing

Checkers-playing program by Arthur Samuel (IBM)

Update rule: change the weights by the difference between the evaluation of the current state and the backed-up value generated from a full look-ahead tree.

Page 35:

Reinforcement Learning

• Introduction
• Passive Reinforcement Learning
• Temporal Difference Learning
• Active Reinforcement Learning
• Applications
• Summary

Page 36:

Summary

• The goal is to learn utility values of states and an optimal mapping from states to actions.
• Direct Utility Estimation ignores the dependencies among states → we must use the Bellman equations.
• Temporal difference learning updates values to match those of successor states.
• Active reinforcement learning learns the optimal mapping from states to actions.