
Primbs 1

Receding Horizon Control for Constrained Portfolio Optimization

James A. Primbs, Management Science and Engineering

Stanford University

(with Chang Hwan Sung)

Stanford-Tsukuba/WCQF Workshop, Stanford University

March 8, 2007

Primbs 2

Outline

Motivation and Background

Receding Horizon Control

Semi-Definite Programming Formulation

Numerical Examples

Conclusions and Future Work

Primbs 3

A Motivational Problem from Finance: Index Tracking

Let
$$S_i(k+1) = (1 + \mu_i + \sigma_i w_i(k))\, S_i(k), \qquad i = 1, \ldots, n,$$
represent the stochastic dynamics of the prices of $n$ stocks, where $\mu_i$ is the expected return per period, $\sigma_i$ is the volatility per period, and $w_i$ is an iid noise term with mean zero and standard deviation 1.

An index is a weighted average of the $n$ stocks:
$$I(k) = \sum_{i=1}^{n} \gamma_i S_i(k).$$

The index tracking problem is to trade $l < n$ of the stocks and a risk-free bond in order to track the index as “closely as possible”.
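As a concrete illustration, here is a minimal numpy sketch of these price and index dynamics (the drift, volatility, and weight values are placeholders, not figures from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

n, T = 5, 52                     # number of stocks, number of periods
mu = np.full(n, 0.002)           # placeholder expected returns per period
sigma = np.full(n, 0.04)         # placeholder volatilities per period
gamma = np.full(n, 1.0 / n)      # placeholder index weights

S = np.empty((T + 1, n))
S[0] = 100.0                     # initial prices
for k in range(T):
    w = rng.standard_normal(n)   # iid noise, mean zero, standard deviation 1
    S[k + 1] = (1.0 + mu + sigma * w) * S[k]

I = S @ gamma                    # index values I(k)
```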

Primbs 4

A Motivational Problem from Finance: Index Tracking

If we let $u_i(k)$ denote the dollar amount invested in $S_i(k)$ for $i = 1, \ldots, l$, $l < n$, and we let $r_f$ denote the risk-free rate of interest, then the total wealth $W(k)$ of this strategy follows the dynamics:
$$W(k+1) = (1 + r_f)\, W(k) + \sum_{i=1}^{l} (\mu_i - r_f + \sigma_i w_i(k))\, u_i(k).$$

One possible measure of how closely we track the index is given by an infinite horizon discounted quadratic cost:
$$\min_{u}\; E\left[\sum_{k=0}^{\infty} \alpha^k \left(W(k) - I(k)\right)^2\right],$$
where $\alpha < 1$ is a discount factor.
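Continuing the sketch above (same placeholder parameters, all hypothetical), the wealth recursion and a Monte Carlo estimate of the discounted tracking cost for a naive fixed-fraction policy might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

n, l, T = 5, 3, 60
rf = 0.004                        # placeholder per-period risk-free rate
mu = np.full(n, 0.006)            # placeholder drifts
sigma = np.full(n, 0.05)          # placeholder volatilities
gamma = np.full(n, 1.0 / n)       # equally weighted index
alpha = 0.99                      # discount factor, alpha < 1

def discounted_tracking_cost(policy, n_paths=2000):
    """Monte Carlo estimate of E[sum_k alpha^k (W(k) - I(k))^2]."""
    total = 0.0
    for _ in range(n_paths):
        S = np.full(n, 100.0)     # initial prices
        W = 100.0                 # initial wealth
        cost = (W - gamma @ S) ** 2
        for k in range(T):
            w = rng.standard_normal(n)   # one noise term per stock
            u = policy(k, W, S)          # dollars in the l traded stocks
            W = (1 + rf) * W + (mu[:l] - rf + sigma[:l] * w[:l]) @ u
            S = (1 + mu + sigma * w) * S
            cost += alpha ** (k + 1) * (W - gamma @ S) ** 2
        total += cost
    return total / n_paths

# e.g. a naive policy holding a third of wealth in each traded stock:
print(discounted_tracking_cost(lambda k, W, S: np.full(l, W / 3)))
```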

Primbs 5

A Motivational Problem from Finance: Index Tracking

Finally, it is quite common to require that constraints be satisfied, such as:

Limits on short selling: $u_i(k) \ge 0$

Limits on wealth invested: $u_i(k) \le \theta_i W(k)$

Value-at-Risk: $\Pr\left(W(k) \ge \beta I(k)\right) \ge 1 - \epsilon$

etc.

Primbs 6

Index Tracking

$$\min_{u}\; E\left[\sum_{k=0}^{\infty} \alpha^k \left(W(k) - I(k)\right)^2\right]$$

subject to:

$$S_i(k+1) = (1 + \mu_i + \sigma_i w_i(k))\, S_i(k), \qquad i = 1, \ldots, n$$

$$W(k+1) = (1 + r_f)\, W(k) + \sum_{i=1}^{l} (\mu_i - r_f + \sigma_i w_i(k))\, u_i(k)$$

$$I(k) = \sum_{i=1}^{n} \gamma_i S_i(k), \qquad l < n$$

$$P\left(a W(k) + \sum_{i=1}^{l} b_i u_i(k) + \sum_{i=1}^{n} c_i S_i(k) \ge 0\right) \ge 1 - \epsilon$$

Primbs 7

Linear systems with state and control multiplicative noise:

$$\min_{u}\; E\left[\sum_{k=0}^{\infty} \begin{bmatrix} x(k) \\ u(k) \end{bmatrix}^T M \begin{bmatrix} x(k) \\ u(k) \end{bmatrix}\right]$$

subject to:

$$x(k+1) = A x(k) + B u(k) + \sum_{i=1}^{q} \left(C_i x(k) + D_i u(k)\right) w_i(k)$$

$$P\left(a^T x(k) + b^T u(k) + c \ge d\right) \ge 1 - \epsilon$$

where $M = \begin{bmatrix} Q & 0 \\ 0 & R \end{bmatrix} \succeq 0$.
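For instance (a sketch of the mapping, which the slides leave implicit): the index tracking problem fits this template with state $x(k) = (W(k), S_1(k), \ldots, S_n(k))^T$ and control $u(k) = (u_1(k), \ldots, u_l(k))^T$. The wealth and price recursions are then linear in $(x, u)$ with $q = n$ noise channels, where $w_i$ enters through $C_i$ (multiplying the price $S_i$) and, for $i \le l$, through $D_i$ (multiplying the position $u_i$), and the tracking cost $(W(k) - I(k))^2$ corresponds to $Q = vv^T$ with $v = (1, -\gamma_1, \ldots, -\gamma_n)^T$.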

Primbs 8

The unconstrained version of the problem is known as the Stochastic Linear Quadratic (SLQ) problem.

Its solution has been well studied: Willems and Willems, ’76; Yao et al., ’01; El Ghaoui, ’95; Ait Rami and Zhou, ’00; McLane, ’71; Wonham, ’67, ’68; Kleinman, ’69.

The constrained version has received less attention...

Primbs 9

Receding Horizon Control (also known as Model Predictive Control) has been quite successful for constrained deterministic systems.

Can receding horizon control be a successful tool for (constrained) stochastic control as well?

Dynamic resource allocation: Castanon and Wohletz, ’02

Portfolio optimization: Herzog et al., ’06; Herzog, ’06

Dynamic hedging: Meindl and Primbs, ’04

Supply chains: Seferlis and Giannelos, ’04

Constrained linear systems: van Hessem and Bosgra, ’01, ’02, ’03, ’04

Stochastic programming approach: Felt, ’03

Operations management: Chand, Hsu, and Sethi, ’02

Primbs 10

Outline

Motivation and Background

Receding Horizon Control

Semi-Definite Programming Formulation

Numerical Examples

Conclusions and Future Work

Primbs 11

Ideally, one would solve the optimal infinite horizon problem...

Instead, receding horizon control repeatedly solves finite horizon problems:

[Diagram: solve a horizon-N problem from the current state, implement the initial control action, then resolve a horizon-N problem at the next state.]

Primbs 12

Instead, receding horizon control repeatedly solves finite horizon problems:

[Diagram: solve, implement the initial control action, resolve over the shifted horizon N.]

So, RHC involves 2 steps:

1) Solve finite horizon optimizations on-line

2) Implement the initial control action
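In code, the receding horizon loop is just these two steps repeated. A schematic Python sketch (here `solve_finite_horizon` is a hypothetical stand-in for whatever on-line solver is used, such as the SDP formulated later, and `step` for the true stochastic system):

```python
def receding_horizon_control(x0, N, T, solve_finite_horizon, step):
    """Run receding horizon control for T periods from initial state x0.

    solve_finite_horizon(x, N) -> planned controls u_N(0|k), ..., u_N(N-1|k)
    step(x, u)                 -> next (random) state of the true system
    """
    x = x0
    history = []
    for k in range(T):
        u_plan = solve_finite_horizon(x, N)   # 1) solve on-line optimization
        u = u_plan[0]                         # 2) implement initial action
        history.append((x, u))
        x = step(x, u)                        # system moves; then re-solve
    return history
```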

Primbs 13

Outline

Motivation and Background

Receding Horizon Control

Semi-Definite Programming Formulation

Numerical Examples

Conclusions and Future Work

Primbs 14

Instead, receding horizon control repeatedly solves finite horizon problems:

[Diagram: solve, implement the initial control action, resolve over the shifted horizon N.]

So, RHC involves 2 steps:

1) Solve finite horizon optimizations on-line

2) Implement the initial control action

Primbs 15

In receding horizon control, we consider a finite horizon optimal control problem.

From the current state $x(k)$, let:

$u_N(i|k),\ 0 \le i \le N-1$, denote the predicted control,

$x_N(i|k),\ 0 \le i \le N$, denote the predicted state,

$N$ denote the horizon length.

Note that $x_N(0|k) = x(k)$.

We will impose an open loop plus linear feedback structure:

$$u_N(0|k) = \bar{u}_N(0|k), \qquad u_N(i|k) = \bar{u}_N(i|k) + K(i|k)\left[x_N(i|k) - \bar{x}_N(i|k)\right],$$

where $\bar{x}_N(i|k) = E_k[x_N(i|k)]$.

Primbs 16

We will use quadratic expectation constraints (instead of probabilistic constraints) in the on-line optimizations.

Probabilistic constraints involve knowing the entire distribution. In general, this is too difficult to calculate.

Quadratic expectation constraints involve only the mean and covariance.

If one is willing to resort to approximations, probabilistic constraints can be approximated with quadratic expectation constraints.
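For example (a standard moment bound, not one stated on the slide): by the one-sided Chebyshev inequality, a scalar quantity $z$ with mean $m$ and variance $s^2$ satisfies
$$\Pr(z \ge m + t) \le \frac{s^2}{s^2 + t^2}, \qquad t > 0,$$
so enforcing the moment condition $m + \sqrt{(1-\epsilon)/\epsilon}\; s \le b$ guarantees $\Pr(z > b) \le \epsilon$. The left-hand side involves only the mean and variance of $z$.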

Theoretical results can actually guarantee a form of probabilistic satisfaction, even though we use expectation constraints.

Primbs 17

We will use quadratic expectation constraints (instead of probabilistic constraints) in the on-line optimizations. Constraints will be in the form:

$$E_k\left[\begin{bmatrix} x_N(i|k) \\ u_N(i|k) \end{bmatrix}^T H \begin{bmatrix} x_N(i|k) \\ u_N(i|k) \end{bmatrix} + f^T \begin{bmatrix} x_N(i|k) \\ u_N(i|k) \end{bmatrix}\right] \le g, \qquad 0 \le i \le N-1,$$

$$E_k\left[x_N(i|k)^T H^X x_N(i|k) + (f^X)^T x_N(i|k)\right] \le g^X, \qquad 1 \le i \le N.$$

Note that state-only constraints are not imposed at time $i = 0$.

We denote this set of constraints by $C(x(k), N)$.

Primbs 18

Receding Horizon On-Line Optimization $P(x(k), N)$:

$$V(x(k)) = \min_{\bar{u}_N(\cdot|k)}\; E_k\left[\sum_{i=0}^{N-1} \begin{bmatrix} x_N(i|k) \\ u_N(i|k) \end{bmatrix}^T M \begin{bmatrix} x_N(i|k) \\ u_N(i|k) \end{bmatrix} + x_N(N|k)^T \Psi\, x_N(N|k)\right]$$

subject to:

$$x_N(i+1|k) = A x_N(i|k) + B u_N(i|k) + \sum_{j=1}^{q} \left(C_j x_N(i|k) + D_j u_N(i|k)\right) w_j(i)$$

$$u_N(0|k) = \bar{u}_N(0|k), \qquad u_N(i|k) = \bar{u}_N(i|k) + K(i|k)\left[x_N(i|k) - \bar{x}_N(i|k)\right]$$

$$E_k\left[x_N(N|k)^T \Psi\, x_N(N|k)\right] \le \gamma$$

$$(x_N(\cdot|k), u_N(\cdot|k)) \in C(x(k), N)$$

We denote the optimizing predicted state and control sequence by $x_N^*(i|k)$ and $u_N^*(i|k)$. The receding horizon control policy for state $x(k)$ is $u_N^*(0|k)$.

Primbs 19

This problem has a linear-quadratic structure to it.

Hence, it essentially depends on the mean and variance of the controlled dynamics.

For convenience, let $x(i) = x_N(i|k)$ and $u(i) = u_N(i|k)$, with means $\bar{x}(i)$, $\bar{u}(i)$ and covariance
$$\Sigma(i) = E_k\left[(x(i) - \bar{x}(i))(x(i) - \bar{x}(i))^T\right].$$

The mean satisfies:
$$\bar{x}(i+1) = A\bar{x}(i) + B\bar{u}(i).$$

The covariance satisfies:
$$\Sigma(i+1) = (A + BK(i))\,\Sigma(i)\,(A + BK(i))^T + \sum_{j=1}^{q}\left(C_j\bar{x}(i) + D_j\bar{u}(i)\right)\left(C_j\bar{x}(i) + D_j\bar{u}(i)\right)^T + \sum_{j=1}^{q}\left(C_j + D_jK(i)\right)\Sigma(i)\left(C_j + D_jK(i)\right)^T.$$
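A direct numpy sketch of these two recursions (a literal transcription of the formulas above, with C and D given as lists over the q noise channels, and K and ubar as lists over the horizon):

```python
import numpy as np

def propagate_moments(A, B, C, D, K, xbar0, Sigma0, ubar, N):
    """Propagate the predicted mean and covariance under the policy
    u(i) = ubar(i) + K(i) (x(i) - xbar(i)).

    Returns xbar(0..N) and Sigma(0..N).
    """
    xbar, Sigma = [xbar0], [Sigma0]
    for i in range(N):
        xbar.append(A @ xbar[i] + B @ ubar[i])       # mean recursion
        AK = A + B @ K[i]
        S = AK @ Sigma[i] @ AK.T                     # closed-loop term
        for Cj, Dj in zip(C, D):
            v = Cj @ xbar[i] + Dj @ ubar[i]          # mean-driven noise term
            S += np.outer(v, v)
            CK = Cj + Dj @ K[i]                      # feedback noise term
            S += CK @ Sigma[i] @ CK.T
        Sigma.append(S)
    return xbar, Sigma
```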

Primbs 20

Receding Horizon On-Line Optimization as an SDP (for $q = 1$):

$$V(x(k)) = \min_{\bar{u}_N(\cdot|k)}\; \sum_{i=0}^{N-1} \operatorname{Tr}(M P(i)) + \operatorname{Tr}(\Psi P^X(N))$$

subject to, for $0 \le i \le N-1$:

$$\bar{x}(i+1) = A\bar{x}(i) + B\bar{u}(i)$$

$$\begin{bmatrix} \Sigma(i+1) & A\Sigma(i)+BU(i) & C\Sigma(i)+DU(i) & C\bar{x}(i)+D\bar{u}(i) \\ (A\Sigma(i)+BU(i))^T & \Sigma(i) & 0 & 0 \\ (C\Sigma(i)+DU(i))^T & 0 & \Sigma(i) & 0 \\ (C\bar{x}(i)+D\bar{u}(i))^T & 0 & 0 & 1 \end{bmatrix} \succeq 0$$

$$\begin{bmatrix} P(i) & \begin{bmatrix} \Sigma(i) \\ U(i) \end{bmatrix} & \begin{bmatrix} \bar{x}(i) \\ \bar{u}(i) \end{bmatrix} \\ \begin{bmatrix} \Sigma(i) \\ U(i) \end{bmatrix}^T & \Sigma(i) & 0 \\ \begin{bmatrix} \bar{x}(i) \\ \bar{u}(i) \end{bmatrix}^T & 0 & 1 \end{bmatrix} \succeq 0$$

and the constraints

$$\operatorname{Tr}(H P(i)) + f^T \begin{bmatrix} \bar{x}(i) \\ \bar{u}(i) \end{bmatrix} \le g, \qquad 0 \le i \le N-1,$$

$$\operatorname{Tr}(H^X P^X(i)) + (f^X)^T \bar{x}(i) \le g^X, \qquad 1 \le i \le N,$$

$$\operatorname{Tr}(\Psi P^X(N)) \le \gamma,$$

where $U(i)$ plays the role of $K(i)\Sigma(i)$ and $P^X(i)$ is the state block of $P(i)$. This is the SDP form of $P(x(k), N)$.
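A minimal cvxpy sketch of this SDP for $q = 1$, under the reconstruction above (the generic expectation constraints are omitted for brevity; this is an illustration, not the authors' code):

```python
import cvxpy as cp
import numpy as np

def rhc_sdp_action(x0, A, B, C, D, M, Psi, gamma_term, N):
    """Solve the on-line SDP (q = 1) and return the RHC action ubar(0).

    Variables: predicted means (xb, ub), covariances Sig(i),
    U(i) in the role of K(i) Sig(i), and relaxed second moments P(i)."""
    n, m = B.shape
    xb = [cp.Variable((n, 1)) for _ in range(N + 1)]
    ub = [cp.Variable((m, 1)) for _ in range(N)]
    Sig = [cp.Variable((n, n), symmetric=True) for _ in range(N + 1)]
    U = [cp.Variable((m, n)) for _ in range(N)]
    P = [cp.Variable((n + m, n + m), symmetric=True) for _ in range(N)]
    PXN = cp.Variable((n, n), symmetric=True)   # terminal second moment

    cons = [xb[0] == x0.reshape(n, 1), Sig[0] == np.zeros((n, n))]
    cost = 0
    for i in range(N):
        cost += cp.trace(M @ P[i])
        cons.append(xb[i + 1] == A @ xb[i] + B @ ub[i])    # mean dynamics
        v = C @ xb[i] + D @ ub[i]
        AS = A @ Sig[i] + B @ U[i]
        CS = C @ Sig[i] + D @ U[i]
        # covariance propagation in Schur-complement (LMI) form:
        cons.append(cp.bmat([
            [Sig[i + 1], AS, CS, v],
            [AS.T, Sig[i], np.zeros((n, n)), np.zeros((n, 1))],
            [CS.T, np.zeros((n, n)), Sig[i], np.zeros((n, 1))],
            [v.T, np.zeros((1, n)), np.zeros((1, n)), np.ones((1, 1))],
        ]) >> 0)
        # P(i) dominates the second moment of (x(i), u(i)):
        SU = cp.vstack([Sig[i], U[i]])
        z = cp.vstack([xb[i], ub[i]])
        cons.append(cp.bmat([
            [P[i], SU, z],
            [SU.T, Sig[i], np.zeros((n, 1))],
            [z.T, np.zeros((1, n)), np.ones((1, 1))],
        ]) >> 0)
    # terminal second moment and terminal constraint:
    cons.append(cp.bmat([[PXN - Sig[N], xb[N]],
                         [xb[N].T, np.ones((1, 1))]]) >> 0)
    cost += cp.trace(Psi @ PXN)
    cons.append(cp.trace(Psi @ PXN) <= gamma_term)

    cp.Problem(cp.Minimize(cost), cons).solve(solver=cp.SCS)
    return ub[0].value
```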

Primbs 21

By imposing structure in the on-line optimizations we are able to:

Formulate them as semi-definite programs.

Use closed loop feedback over the prediction horizon.

Incorporate constraints in an expectation form.

Primbs 22

Instead, receding horizon control repeatedly solves finite horizon problems:

[Diagram: solve, implement the initial control action, resolve over the shifted horizon N.]

So, RHC involves 2 steps:

1) Solve finite horizon optimizations on-line

2) Implement the initial control action

Primbs 23

Theoretical Results

Developing theoretical results for Stochastic Receding Horizon Control is an active research area.

In particular, important questions concern stability (when appropriate), performance guarantees, and constraint satisfaction.

I have developed results on these topics for special formulations of RHC, but I won’t present them here.

Primbs 24

Outline

Motivation and Background

Receding Horizon Control

Semi-Definite Programming Formulation

Numerical Examples

Conclusions and Future Work

Primbs 25

Example Problem:

Dynamics:

$$x(k+1) = A x(k) + B u(k) + (C x(k) + D u(k))\, w(k)$$

$$A = \begin{bmatrix} 1.02 & 0.1 \\ 0.1 & 0.98 \end{bmatrix}, \quad B = \begin{bmatrix} 0.01 & 0 \\ 0.05 & 0.01 \end{bmatrix}, \quad C = \begin{bmatrix} 0.04 & 0 \\ 0 & 0.04 \end{bmatrix}, \quad D = \begin{bmatrix} 0.04 & 0 \\ 0.04 & 0.008 \end{bmatrix}$$

Cost parameters:

$$Q = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \quad R = \begin{bmatrix} 5 & 0 \\ 0 & 20 \end{bmatrix}$$

Primbs 26

Example Problem:

State constraint:
$$E\left[2 x_1(k) + x_2(k)\right] \le 2.3$$

Optimal unconstrained cost-to-go:
$$V(x(k)) = x(k)^T \Psi\, x(k), \qquad \Psi = \begin{bmatrix} 41.0331 & 5.7929 \\ 5.7929 & 5.3889 \end{bmatrix}$$

Terminal constraint:
$$E_k\left[x(N|k)^T \Psi\, x(N|k)\right] \le 45$$

Horizon: $N = 10$
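As a cross-check, the unconstrained cost-to-go matrix Ψ can be computed by value-iterating the generalized Riccati recursion for this SLQ problem (a numpy sketch; it assumes the undiscounted cost and the example matrices as reconstructed above):

```python
import numpy as np

def slq_cost_to_go(A, B, C, D, Q, R, iters=5000, tol=1e-12):
    """Iterate the generalized Riccati recursion for
    x(k+1) = A x + B u + (C x + D u) w,  cost sum_k x'Qx + u'Ru,
    where w has mean zero and unit variance.  Converges to Psi with
    V(x) = x' Psi x when the problem is mean-square stabilizable."""
    Psi = Q.copy()
    for _ in range(iters):
        G = R + B.T @ Psi @ B + D.T @ Psi @ D     # effective control weight
        H = B.T @ Psi @ A + D.T @ Psi @ C         # effective cross term
        Psi_next = (Q + A.T @ Psi @ A + C.T @ Psi @ C
                    - H.T @ np.linalg.solve(G, H))
        if np.max(np.abs(Psi_next - Psi)) < tol:
            return Psi_next
        Psi = Psi_next
    return Psi
```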

Primbs 27

Level Sets of the State and Terminal Constraints.

Primbs 28

75 Random Simulations from Initial Condition [-0.3,1.2].

Uncontrolled Dynamics

Primbs 29

75 Random Simulations from Initial Condition [-0.3,1.2].

Optimal Unconstrained Dynamics

Primbs 30

75 Random Simulations from Initial Condition [-0.3,1.2].

RHC Dynamics

Primbs 31

Means of the 75 random paths for different approaches

Red: RHC; Blue: Optimal Unconstrained; Green: Uncontrolled

Primbs 32

A Simple Index Tracking Example:

Five Stocks: IBM, 3M, Altria, Boeing, AIG

Means, variances, and covariances estimated from 15 years of weekly data beginning in 1990, from Yahoo! Finance.

Initial prices and wealth assumed to be $100.

Risk-free rate of 5%.

Horizon of N = 5 with time step equal to 1 month.

Primbs 33

A Simple Index Tracking Example:

Five stocks: IBM, 3M, Altria, Boeing, AIG.

The index is an equally weighted average of the 5 stocks.

Initial value of the index is assumed to be $100.

The index will be tracked with a portfolio of the first 3 stocks: IBM, 3M, and Altria.

We place a constraint that the fraction invested in 3M cannot exceed 10%.
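In the SLQ template above, this allocation limit is a linear constraint on the state and control (a sketch of the encoding, not spelled out on the slide): with wealth $W(k)$ as a state and $u_2(k)$ the dollars in 3M, the requirement is $u_2(k) - 0.1\, W(k) \le 0$, which fits the form $a^T x(k) + b^T u(k) \le 0$ and hence the expectation constraints used in the on-line optimization.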

Primbs 34

Index: Green; Optimal Unconstrained: Cyan; RHC: Red

Primbs 35

Optimal Unconstrained Allocations

Blue: IBM; Red: 3M; Green: Altria

Primbs 36

Blue: IBM; Red: 3M; Green: Altria

RHC Allocations

Primbs 37

Outline

Motivation and Background

Receding Horizon Control

Semi-Definite Programming Formulation

Numerical Examples

Conclusions and Future Work

Primbs 38

Conclusions

By exploiting and imposing problem structure, constrained stochastic receding horizon control can be implemented in a computationally tractable manner for a number of problems.

It appears to be a promising approach to solving many constrained portfolio optimization problems.

Future Research

There are many interesting theoretical questions involving properties of this approach, especially concerning stability and performance.

We are pursuing applications to other portfolio optimization problems and to dynamic hedging, as well as combinations with filtering methods.