Quantifying the Value of Leadtime Information in a Single-Location Inventory System

Fangruo Chen and Bin Yu

November 2001

DRO-2001-05
Decision, Risk & Operations Working Papers Series

This article studies the value of leadtime information in a single-location inventory model with a Markovian leadtime process. The optimal strategy under complete leadtime information (i.e., the leadtime state is observable) is a base-stock policy with the order-up-to level dependent on the leadtime state. The optimal base-stock level for each leadtime state can be computed in an iterative manner. The optimal strategy under incomplete leadtime information (i.e., only part of the realized leadtime process is observable) represents a significant challenge. Tight bounds for this case are provided. Numerical examples show that the value of leadtime information can be significant.


1 Introduction

If you ask an inventory manager to describe the things that make the job challenging, you often get two stories. One has to do with unpredictable demand, and the other (well, what else?) with uncertain supply. Probing further on the latter, you may learn that an order placed with the supplier may arrive anywhere from, say, two weeks to four weeks later, and it does not always come in one piece. One of the reasons for such uncertainties is lack of communication. For example, when the inventory manager places an order, the production manager at the supplier site may have some idea as to when that order will be filled, based on his knowledge of the situation on the shop floor. When this knowledge is not shared with our inventory manager, supply uncertainty ensues.

The purpose of this article is to understand the value of the information about delivery leadtimes. This is achieved by studying a single-location inventory model where replenishment orders are delivered after random leadtimes. (Each order arrives in one piece, however.) The stochastic leadtime process can be described by a Markov chain with a structure that prevents orders from crossing over. (This makes the model tractable.) The state of the Markov chain at the beginning of a period represents the leadtime for an order placed at that time. In one scenario, the inventory manager observes the state of the Markov chain, thus knowing the leadtime of every order before it is placed. The other scenario is one where the inventory manager does not see the state of the Markov chain on a real-time basis; he only knows the history of order arrivals. The difference in system performance between the two scenarios tells us about the value of leadtime information. Numerical evidence shows that this value can be significant, with one example registering a 37% cost reduction.

This article contributes to a growing body of literature on supply chain information sharing. Most of the existing work concentrates on the value of information that comes from the supply chain's downstream, e.g., sales information, inventory status at points of sale, etc. Very limited attention has been directed at the value of upstream information such as the leadtime information considered here. We believe that supply chain visibility is not a one-way street, and the value of both downstream and upstream information deserves research attention. The most closely related paper is Song and Zipkin (1996), who also consider a single-location inventory problem with a Markovian supply process. They characterize the optimal policy for the full-information scenario and show that information about supply conditions does have value. Our paper complements theirs by providing an alternative proof/algorithm for the full-information optimal policy, tight bounds on the system performance in the incomplete-information case (as we will see, the optimal policy in this case is intractable), and numerical results on the value of leadtime information. For a review of the supply-chain information-sharing literature, see Chen (2001).

The rest of the paper is organized as follows. Section 2 describes the model in detail. Section 3 deals with the complete-information case, and Section 4 the incomplete-information case. Section 5 contains the numerical results. Section 6 concludes.

2 The Model

Consider the following single-location, periodic-review inventory model with an infinite planning horizon. A retailer buys a product from an outside supplier, stores it in a single location, and sells it to her customers. Customer demand arises periodically, with demands in different periods being independent, identically distributed random variables. If demand exceeds the on-hand inventory in a period, the excess demand is backlogged. On-hand inventories incur holding costs, and customer backorders incur penalty costs. A key feature of this model is the supply process, which is characterized by a Markov leadtime model to be specified below. The retailer's problem is to choose a replenishment policy (when and how much to order from the outside supplier) so as to minimize her long-run average holding and backorder costs.

To be precise about the material flow in the inventory system, we assume the following sequence of events in each period. At the beginning of the period, an order, if any, is placed with the supplier, and any delivery (of a previous order) from the supplier due this period is received. During the period, demand occurs. At the end of the period, holding and backorder costs are assessed.

Let h be the holding cost rate per period, and p the backorder penalty cost rate per period. (Nonlinear holding and backorder costs can be handled as long as certain convexity properties are preserved.) The supplier charges a constant price for every unit ordered by the retailer; this cost term does not affect the retailer's ordering decision and will thus be omitted.

We assume that the supply process can be modeled as a finite-state Markov chain. In particular, an order placed in period t arrives in period $t + L_t$, where $L_t$ is a positive integer. Thus $L_t$ is the leadtime for the order placed in period t, and $L := \{L_t\}$ is a Markov chain with state space $S = \{1, 2, \ldots, M\}$, where M is a fixed, positive integer. We assume that the Markov chain is time-homogeneous and ergodic. Denote the one-step transition probability from state i to state j by $p_{ij}$, $i, j \in S$, and let P be the transition matrix. The supply process is exogenous, i.e., the evolution of the Markov chain is independent of the operations of the retailer's inventory system. Moreover, orders are received in the sequence in which they were placed, i.e., there are no order crossovers. This is achieved by restricting the transition probabilities so that $p_{ij} = 0$ for any $j < i - 1$, i.e., the transition matrix has a semi-upper-triangular form. Finally, partial shipments of orders are not allowed, i.e., any order will be received in its entirety, albeit after a random leadtime. Song and Zipkin (1996) have provided several concrete examples where the above type of leadtime process may arise. Many people have studied inventory models with random leadtimes; see, e.g., Kaplan (1970), Nahmias (1979), Ehrhardt (1984), Zipkin (1986), Svoronos and Zipkin (1991), Song (1994), and Robinson, Bradley and Thomas (2000).
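To make these structural assumptions concrete, here is a minimal sketch in Python (the function names are ours, not the paper's) that encodes a leadtime transition matrix, checks that rows sum to one and that the no-crossover condition $p_{ij} = 0$ for $j < i - 1$ holds, and computes the stationary distribution. The matrix shown is the "normal" example used later in Section 5.

```python
import numpy as np

# A 4-state leadtime transition matrix (states 1..M stored 0-indexed).
P = np.array([
    [0.2, 0.4, 0.3, 0.1],
    [0.1, 0.4, 0.4, 0.1],
    [0.0, 0.3, 0.6, 0.1],
    [0.0, 0.0, 0.6, 0.4],
])

def check_leadtime_chain(P, tol=1e-12):
    """Check the structural assumptions of Section 2: stochastic rows and
    p_ij = 0 whenever j < i - 1 (no order crossover)."""
    M = P.shape[0]
    assert np.allclose(P.sum(axis=1), 1.0, atol=1e-9), "rows must sum to 1"
    for i in range(1, M + 1):
        for j in range(1, M + 1):
            if j < i - 1:
                assert abs(P[i - 1, j - 1]) < tol, f"p_{i}{j} must be 0 (crossover)"

def stationary_distribution(P):
    """Stationary distribution pi of an ergodic chain, via the eigenvector of P^T."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

check_leadtime_chain(P)
print(stationary_distribution(P))   # approx. (0.036, 0.290, 0.531, 0.143) for this P
```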

The supplier always sees the state of the leadtime Markov chain, i.e., he knows the value of $L_t$ at the beginning of period t, for all t. Below, we focus on the retailer's replenishment decisions, which, as we will see, depend on whether or not the supplier shares his leadtime information with the retailer.

3 Complete Leadtime Information

In this scenario, the supplier shares its leadtime information with the retailer. The sequence of events in each period is as follows. At the beginning of each period t, the retailer learns the value of $L_t$ and decides how much, if any, to order. Then, any order(s) due this period is received, demand is realized, and holding and backorder costs are assessed. Since the value of $L_t$ changes from period to period, it is conceivable that the optimal ordering decision should depend on the state of the leadtime process. Below, we show that this dependency is rather simple: the optimal policy is to place an order in period t so as to increase the retailer's inventory position up to a base-stock level that is a function of $L_t$, for all t. We call this a state-dependent, base-stock policy.

Song and Zipkin (1996) have already proved the optimality of the state-dependent, base-stock policy. Our work complements theirs by providing a simple, iterative algorithm for determining the optimal policy. Moreover, as we will see later, the algorithm itself is almost already an optimality proof (as opposed to Song and Zipkin's dynamic programming approach).

We will concentrate on the description of the algorithm. The optimality of the resulting policy is quite transparent from the algorithm. If that is not enough, the reader can easily construct an optimality proof by modifying the arguments presented in Chen and Song (2001) to suit the present context.

Define the inventory level to be on-hand inventory minus backorders. Let $IL(t)$ be the inventory level at the end of period t. Thus, the holding and backorder costs incurred in period t are $h\,IL(t)^+ + p\,IL(t)^-$. Define the inventory position to be the inventory level plus all outstanding orders (i.e., orders placed but not yet received). Let $IP(t)$ be the inventory position at the beginning of period t after order placement, if any. Recall that an order placed in period t arrives in period $t + L_t$, and the next order, placed in period t+1, arrives in period $t + 1 + L_{t+1}$. The following is the well-known inventory balance equation:

$$IL(\tau) = IP(t) - D[t, \tau], \qquad \tau = t + L_t, \ldots, t + L_{t+1},$$

where $D[t, \tau]$ is the total demand in periods $t, \ldots, \tau$. Therefore, the inventory position $IP(t)$ determines the distributions of the inventory levels $IL(\tau)$ for $\tau = t + L_t, \ldots, t + L_{t+1}$ and thus the expected costs in those periods. We consequently charge the expected costs incurred in $[t + L_t,\ t + L_{t+1}]$ to period t. (Note that it is possible that $L_{t+1} = L_t - 1$, in which case we charge zero costs to period t.) Let $G(m, y)$ be the expected costs charged to period t, given $L_t = m$ and $IP(t) = y$. Define for any $l \ge 0$,

$$g(l, y) = E\left[h\,(y - D[0, l])^+ + p\,(y - D[0, l])^-\right].$$

Therefore,

$$G(m, y) = \sum_{m'=m}^{M} p_{mm'} \sum_{l=m}^{m'} g(l, y) = \sum_{l=m}^{M} \Pr(L_{t+1} \ge l \mid L_t = m)\, g(l, y).$$

Note that $g(l, y)$ is convex in y for any l. Consequently, $G(m, y)$ is convex in y for any m.
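As a sketch of how $g(l, y)$ and $G(m, y)$ can be evaluated numerically for a discrete per-period demand distribution (the function names and the pmf representation are ours; we interpret $D[0, l]$ as the total demand over the l + 1 periods 0, ..., l, and truncate the pmf at some large value):

```python
import numpy as np

def g(l, y, demand_pmf, h, p):
    """g(l, y) = E[h (y - D[0, l])^+ + p (y - D[0, l])^-], where D[0, l] is the
    total demand over l + 1 i.i.d. periods with per-period pmf `demand_pmf`."""
    total = np.array([1.0])
    for _ in range(l + 1):                       # convolve l + 1 periods of demand
        total = np.convolve(total, demand_pmf)
    d = np.arange(len(total))
    return float(np.sum(total * (h * np.maximum(y - d, 0) + p * np.maximum(d - y, 0))))

def G(m, y, P, demand_pmf, h, p):
    """G(m, y) = sum_{l=m}^{M} Pr(L_{t+1} >= l | L_t = m) g(l, y)."""
    M = P.shape[0]
    tail = np.flip(np.cumsum(np.flip(P[m - 1])))   # tail[l-1] = Pr(L_{t+1} >= l | L_t = m)
    return sum(tail[l - 1] * g(l, y, demand_pmf, h, p) for l in range(m, M + 1))
```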

Let $s^0_m = \arg\min_y G(m, y)$ for any leadtime state m. That is, $s^0_m$ is the myopic base-stock level when the leadtime is m. The myopic base-stock policy is one where, in each period, we order to increase the inventory position to the myopic level; if the inventory position before ordering is already above the myopic level, we do not place any order. This policy may be suboptimal because the myopic base-stock level is, well, myopic and does not take into account its impact on the costs in future periods. It is also clear that it is never optimal to order to increase the inventory position to a level above the myopic level; the myopic base-stock levels must be adjusted downward to arrive at the optimal base-stock levels. Below, we describe an algorithm that does this adjustment.
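Since $G(m, \cdot)$ is convex, its smallest minimizer can be found by scanning until the cost stops decreasing. A sketch using the hypothetical G above (it assumes $G(m, \cdot)$ is not identically zero, which can fail in degenerate cases such as the two-state example discussed later in this section):

```python
def smallest_minimizer(f, y_max=500):
    """Smallest integer minimizer of a convex function f, scanned over 0..y_max."""
    best_y, best_val = 0, f(0)
    for y in range(1, y_max + 1):
        val = f(y)
        if val >= best_val:          # convexity: once the cost stops improving, stop
            return best_y
        best_y, best_val = y, val
    return best_y

def myopic_levels(P, demand_pmf, h, p):
    """Myopic base-stock level s0_m = argmin_y G(m, y) for each leadtime state m."""
    M = P.shape[0]
    return {m: smallest_minimizer(lambda y: G(m, y, P, demand_pmf, h, p))
            for m in range(1, M + 1)}
```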

The algorithm has M iterations. (Recall that M is the number of leadtime states.) In iteration i, $i = 1, \ldots, M$, the state space S of the Markov chain $\{L_t\}$ is partitioned into two subsets, $U^i$ and $V^i$, and an accounting scheme is given for assessing holding and backorder costs. The accounting scheme is specified by a set of functions $G^i(m, \cdot)$ for all $m \in V^i$: to period t we charge zero costs if $L_t \in U^i$, and charge $G^i(m, y)$ if $L_t = m \in V^i$ and $IP(t) = y$. The algorithm is initialized with $U^1 = \emptyset$, $V^1 = S$, and $G^1(m, y) = G(m, y)$ for all $m \in V^1$ and all y. Each iteration identifies a state and determines the optimal base-stock level for that state. More specifically, for iteration $i = 1, \ldots, M$, define

$$y^i_m = \arg\min_y G^i(m, y), \qquad m \in V^i.$$

(Therefore, $s^0_m = y^1_m$ for all $m \in S$.) Let $i^*$ be the leadtime state with the smallest value of $y^i_m$, $m \in V^i$, i.e.,

$$i^* = \arg\min_{m \in V^i} y^i_m.$$

Then $y^i_{i^*}$, denoted $s^*(i^*)$, is the optimal base-stock level for leadtime state $i^*$. The intuition for this is that ordering up to $s^*(i^*)$ is myopically optimal and does not create any "problem" for the other leadtime states in $V^i$, because their "ideal" base-stock levels are all greater than $s^*(i^*)$. The state $i^*$ is then moved from $V^i$ to $U^i$, i.e., $U^{i+1} = U^i \cup \{i^*\}$ and $V^{i+1} = V^i \setminus \{i^*\}$. Therefore, the V set always contains the leadtime states for which the optimal base-stock levels are yet to be determined, and the U set contains the states with known optimal base-stock levels. In the last iteration, there is only one leadtime state left in $V^M$; it is thus $M^*$. The minimum long-run average costs are

$$\pi(M^*)\, \min_y G^M(M^*, y) = \pi(M^*)\, G^M(M^*, s^*(M^*)),$$

where $\pi(\cdot)$ is the stationary distribution of the Markov chain L.
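A skeleton of this iteration in the same hypothetical Python style: each $G^i(m, \cdot)$ is represented as a callable, the smallest-minimizer scan from the earlier sketch is reused, and the cost-revision step that produces $G^{i+1}$ from $G^i$ (equation (1) below) is passed in as a callback `update_G` and left abstract here.

```python
def optimal_base_stock_levels(G1, S, update_G):
    """Skeleton of the M-iteration algorithm.  `G1[m]` is a callable y -> G(m, y);
    `update_G(G_i, U, V, s_star)` is assumed to return the revised charge functions
    G^{i+1}(m, .) for the states m still in V (equation (1) below)."""
    U, V = set(), set(S)
    G_i = dict(G1)                                       # accounting scheme of iteration i
    s_star = {}
    while V:
        y = {m: smallest_minimizer(G_i[m]) for m in V}   # y^i_m = argmin_y G^i(m, y)
        i_star = min(V, key=lambda m: y[m])              # state with the smallest level
        s_star[i_star] = y[i_star]                       # this is s*(i*)
        U.add(i_star)
        V.remove(i_star)
        if V:
            G_i = update_G(G_i, U, V, s_star)            # accounting scheme of iteration i+1
    return s_star
```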

It only remains to determine $G^{i+1}(\cdot, \cdot)$ given $G^i(\cdot, \cdot)$, for any $i = 1, \ldots, M-1$. Take any period t. Suppose $L_t = m \in V^{i+1}$ and $IP(t) = y$. Let $t(V)$ be the period before the Markov chain L returns to a state in $V^{i+1}$. Therefore, for periods $t+1, \ldots, t(V)$, the Markov chain remains in $U^{i+1}$. In these periods, we assume that the optimal policy is followed. (Recall that the optimal base-stock levels for the states in $U^{i+1}$ have all been determined.) Then, $G^{i+1}(m, y)$ is the expected costs incurred in the interval $[t, t(V)]$ under the accounting scheme for iteration i. Define $R^i(u, z)$ to be the expected total costs incurred in $[t+1, t(V)]$, again under the accounting scheme for iteration i, given $L_{t+1} = u \in U^{i+1}$ and $IP^-(t+1) = z$, where $IP^-(t+1)$ is the inventory position at the beginning of period t+1 before ordering. Therefore,

$$G^{i+1}(m, y) = G^i(m, y) + \sum_{u \in U^{i+1}} p_{mu}\, E\left[R^i(u, y - D_t)\right], \qquad m \in V^{i+1}, \qquad (1)$$

where $D_t$ is the demand in period t.

We now make the (inductive) assumption that the optimal base-stock levels identified up to iteration i are progressively larger, i.e., $s^*(1^*) \le s^*(2^*) \le \cdots \le s^*(i^*)$, and that $G^i(m, \cdot)$ is a convex function with a finite minimum point for all $m \in V^i$. Note that this assumption clearly holds for i = 1.

To determine $R^i(u, z)$, first consider any period τ within the interval $[t+1, t(V)]$. We know that $L_\tau \in U^{i+1}$. If $L_\tau \in U^i$, then zero costs are assessed for the period. Otherwise, if $L_\tau = i^*$, then the costs assessed for period τ are $G^i(i^*, \max\{z', s^*(i^*)\})$, where $z' = IP^-(\tau)$. There are two possible cases.

Case 1: $z' > s^*(i^*)$. In this case, it must be true that no orders have been placed in periods $t+1, \ldots, \tau - 1$, because otherwise we would have $IP(t') = s^*(j^*) \le s^*(i^*)$ for some $j \le i$, where $t'$ is the last period before τ with an order placement and the inequality follows from the inductive assumption; this contradicts the case assumption $z' > s^*(i^*)$. Therefore, $z' = z - D[t+1, \tau)$, where $D[t+1, \tau)$ is the total demand in periods $t+1, \ldots, \tau - 1$. The costs incurred in period τ can thus be written as

$$G^i(i^*, \max\{z', s^*(i^*)\}) = G^i(i^*, z') = \bar G^i(i^*, z - D[t+1, \tau)),$$

where

$$\bar G^i(i^*, x) := G^i(i^*, \max\{x, s^*(i^*)\}), \qquad \forall x.$$

Case 2: $z' \le s^*(i^*)$. In this case, the costs incurred in period τ equal

$$G^i(i^*, \max\{z', s^*(i^*)\}) = \bar G^i(i^*, z') = \bar G^i(i^*, z - D[t+1, \tau)),$$

where the second equality follows because the function $\bar G^i(i^*, x)$ is flat for $x \le s^*(i^*)$ and $z - D[t+1, \tau) \le z' \le s^*(i^*)$. From the above discussion, we learn that in determining $R^i(u, z)$, we can simply assume that no orders are placed in $[t+1, t(V)]$ and charge costs according to $\bar G^i(i^*, \cdot)$ whenever the Markov chain visits $i^*$. Consequently, we have

$$R^i(i^*, z) = \bar G^i(i^*, z) + \sum_{u' \in U^{i+1}} p_{i^* u'}\, E\left[R^i(u', z - D_{t+1})\right] \qquad (2)$$

and

$$R^i(u, z) = \sum_{u' \in U^{i+1}} p_{u u'}\, E\left[R^i(u', z - D_{t+1})\right], \qquad u \in U^i. \qquad (3)$$

We can simplify the computation of $R^i(u, z)$ by noting that $R^i(u, z)$ is flat for $z \le s^*(i^*)$. This follows from the arguments in the previous paragraph. Let $r^i(u) = R^i(u, s^*(i^*))$. From (2) and (3),

$$r^i(i^*) = G^i(i^*, s^*(i^*)) + \sum_{u' \in U^{i+1}} p_{i^* u'}\, r^i(u')$$

and

$$r^i(u) = \sum_{u' \in U^{i+1}} p_{u u'}\, r^i(u'), \qquad u \in U^i.$$

Once we have solved the above system of linear equations, we can compute $R^i(u, z)$ recursively. See Chen and Song (2001) for details.
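The two displayed equations form a small linear system in the unknowns $r^i(u)$, $u \in U^{i+1}$, which can be solved directly. A sketch (variable names are ours; `G_at_s` stands for the scalar $G^i(i^*, s^*(i^*))$, states are 1-indexed as in the text, and P is the leadtime transition matrix):

```python
import numpy as np

def solve_r(P, U_next, i_star, G_at_s):
    """Solve  r(i*) = G_at_s + sum_{u'} p_{i* u'} r(u')
              r(u)  =          sum_{u'} p_{u  u'} r(u'),  u in U^i,
    for r(u) = R^i(u, s*(i*)), u in U^{i+1} (given as the list `U_next`)."""
    idx = {u: k for k, u in enumerate(U_next)}
    n = len(U_next)
    A = np.eye(n)                       # build (I - P restricted to U^{i+1})
    b = np.zeros(n)
    for u in U_next:
        for u2 in U_next:
            A[idx[u], idx[u2]] -= P[u - 1, u2 - 1]
    b[idx[i_star]] = G_at_s
    r = np.linalg.solve(A, b)
    return {u: r[idx[u]] for u in U_next}
```

With $r^i$ in hand, $R^i(u, z)$ for $z > s^*(i^*)$ can then be unrolled recursively from (2) and (3), as the text notes.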

Finally, one can use a sample-path argument to show that $R^i(u, \cdot)$ is a convex function for all $u \in U^{i+1}$. (We again refer the reader to Chen and Song (2001) for details.) Recall that $R^i(u, z)$ is flat for $z \le s^*(i^*)$, for all $u \in U^{i+1}$, and that the minimum point of $G^i(m, \cdot)$, for all $m \in V^{i+1}$, is greater than or equal to $s^*(i^*)$. Using these facts, one can see from (1) that the inductive assumption continues to hold for iteration i+1.

Proposition 1 In the complete-information case, the retailer's optimal replenishment strategy is to follow a state-dependent, base-stock policy with order-up-to level $s^*_m$, $m \in S$, where $s^*_m := s^*(i^*)$ for the iteration in which $i^* = m$. That is, if the leadtime state is m, order to increase the inventory position to $s^*_m$; if the inventory position before ordering is already above $s^*_m$, do not order. Denote by $C_C$ the long-run average costs under the optimal policy.

An interesting question is whether or not a longer leadtime always leads to a higher base-stock level. This is certainly true in models with constant leadtimes. However, in our model, the optimal base-stock level for a leadtime state also depends on the next leadtime state. Consider an example where the leadtime Markov chain has the following transition matrix:

$$P = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

If the current leadtime is 2, the next leadtime will be 1. Therefore, there is no reason to order in this period. In other words, the optimal base-stock level for leadtime state 2 is $-\infty$. Now suppose the current leadtime is 1. The next leadtime will be 2. It is easy to see that the optimal base-stock level for leadtime state 1 is the myopic inventory position y that minimizes $g(1, y) + g(2, y) = G(1, y)$. This is a case where a longer leadtime actually leads to a lower base-stock level. Song and Zipkin (1996) have made a similar observation.

4 Incomplete Leadtime Information

We now consider the case where the supplier does not share the leadtime information with the retailer. As a result, at the beginning of any period t, the retailer does not know the exact value of $L_t$. However, the retailer has access to the history of order arrivals, i.e., the orders placed with the supplier in the past, whether or not these orders have arrived and, if so, when. We retain, however, the assumption that the retailer knows that $L_t$ is generated by an underlying Markov chain with state space S and one-step transition matrix P. By using the Markov structure and the order history, the retailer may infer some information about the current leadtime and make replenishment decisions accordingly. As we will see, the retailer's control problem is much more complicated than in the complete-information case. Below, we consider two scenarios that bound the system performance under incomplete information.

4.1 Constant Base-Stock Policy

The simplest policy the retailer can use is a base-stock policy with an order-up-to level that is independent of the history of order arrivals. Let y be the constant order-up-to level. Thus, in each period, the retailer places an order to increase its inventory position to y. Note that under this policy, the long-run average costs for the retailer are $\sum_{m \in S} \pi(m)\, G(m, y)$. (Recall that $\pi(\cdot)$ is the steady-state distribution of the Markov chain L.) This is a convex function of y. Minimizing it over y, we obtain the optimal constant base-stock level Y and the corresponding long-run average costs $C_S$. Clearly, $C_S$ is an upper bound on the minimum long-run average costs the retailer can achieve under incomplete information.
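A sketch of computing Y and $C_S$, reusing the hypothetical helpers from the Section 3 sketches (`stationary_distribution`, `G`, and `smallest_minimizer`):

```python
def constant_base_stock(P, demand_pmf, h, p, y_max=500):
    """Optimal constant order-up-to level Y and its long-run average cost C_S,
    minimizing the convex function y -> sum_m pi(m) G(m, y) over integer y."""
    pi = stationary_distribution(P)
    M = P.shape[0]
    def avg_cost(y):
        return sum(pi[m - 1] * G(m, y, P, demand_pmf, h, p) for m in range(1, M + 1))
    Y = smallest_minimizer(avg_cost, y_max)
    return Y, avg_cost(Y)
```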


4.2 History-Dependent Base-Stock Policy

Now suppose the retailer can use the order-arrival history to dynamically adjust the base-stock level. What is the minimum achievable long-run average cost for the retailer? Notice that there are now two purposes for placing an order with the supplier. One is the usual need to replenish, and the other is to gather information about the leadtime process. Without the former, the retailer could achieve the latter goal by placing an infinitely small order and tracking the arrival time of this order. To simplify the analysis, we assume below that even when the retailer does not place an order (or, equivalently, places an order of size zero), the retailer still gets to track the associated arrival time. Under this assumption, the retailer's information about the leadtime process at the beginning of period t consists of all the pairs (τ, $L_\tau$) with $\tau + L_\tau \le t$ (i.e., orders, including zero-size orders, that have arrived) and all the periods τ' with $\tau' + L_{\tau'} > t$ (i.e., orders that have not arrived). Because of this informational assumption (i.e., the retailer is assumed to know more about the leadtime process than he normally does), the resulting minimum long-run average cost is a lower bound on the retailer's performance under incomplete information.

We proceed to analyze the retailer's replenishment decisions under the above informational assumption. Let $H_t$ represent the retailer's knowledge about the leadtime process at time t. Due to the Markov property of the leadtime process, we only need to track when the last order arrival occurred and when that order was placed. Suppose the last order arrival occurred at time $t_2$ (≤ t) and the order was placed at time $t_1$ (< $t_2$). (Recall that multiple orders may arrive at the same time. In this case, rank the orders according to the times the orders were placed.) Define $\tau = t - t_1$ and $l = t_2 - t_1$. Knowing (τ, l) implies $L_{t-\tau} = l$ and $L_{t-\tau+1} \ge \tau$, which is all we need to know in order to describe the leadtime process from time t onward. In other words, all the relevant information in $H_t$ is embodied in the pair (τ, l). As a shorthand, we write $H_t = (\tau, l)$.

Let $H := \{H_t\}$. Since τ can only take the values $1, \ldots, M$ and, for each τ, $1 \le l \le \tau$, there are $M(M+1)/2$ possible states for H. Moreover, H is a Markov chain. The reason becomes clear after the following derivation of its one-step transition probabilities. Suppose $H_t = (\tau, l)$. The next state $H_{t+1}$ depends on whether there is an order arrival at time t+1. If there is no order arrival, then $H_{t+1} = (\tau + 1, l)$. Otherwise, if there is an order arrival at time t+1, then $H_{t+1} = (i, i)$ if the arriving order was placed i periods ago (i.e., at time $t + 1 - i$), $i = 1, \ldots, \tau$. From Bayes' rule,

$$\Pr(H_{t+1} = (\tau+1, l) \mid H_t = (\tau, l)) = \Pr(L_{t-\tau+1} > \tau \mid L_{t-\tau} = l,\ L_{t-\tau+1} \ge \tau) = \frac{c_{l,\tau+1}}{c_{l\tau}},$$

where $c_{mm'} = \Pr(L_1 \ge m' \mid L_0 = m)$ for all m, m'. Similarly, for $i = 1, \ldots, \tau$,

$$\Pr(H_{t+1} = (i, i) \mid H_t = (\tau, l)) = \Pr(L_{t-\tau+1} = \tau,\ L_{t-\tau+2} = \tau-1,\ \ldots,\ L_{t-i+1} = i,\ L_{t-i+2} \ge i \mid L_{t-\tau} = l,\ L_{t-\tau+1} \ge \tau) = \frac{p_{l\tau}\, p_{\tau,\tau-1} \cdots p_{i+1,i}\, c_{ii}}{c_{l\tau}}.$$

The probability of making a transition from (τ, l) to any other state is zero.
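A sketch of assembling the transition matrix of H from P (names are ours; states (τ, l) are enumerated with 1 ≤ l ≤ τ ≤ M, and states that cannot occur because $c_{l\tau} = 0$ are skipped). For reachable states, each row should sum to one, which is a convenient sanity check.

```python
import numpy as np

def h_transition_matrix(P):
    """Transition matrix of the information chain H over states (tau, l)."""
    M = P.shape[0]
    # c[m-1, m2-1] = c_{m m2} = Pr(L_1 >= m2 | L_0 = m), as row tail sums of P
    c = np.flip(np.cumsum(np.flip(P, axis=1), axis=1), axis=1)
    states = [(tau, l) for tau in range(1, M + 1) for l in range(1, tau + 1)]
    idx = {s: k for k, s in enumerate(states)}
    Q = np.zeros((len(states), len(states)))
    for (tau, l) in states:
        c_ltau = c[l - 1, tau - 1]
        if c_ltau == 0:                      # state (tau, l) is unreachable; skip it
            continue
        row = idx[(tau, l)]
        if tau < M:                          # no arrival at t+1: (tau, l) -> (tau+1, l)
            Q[row, idx[(tau + 1, l)]] = c[l - 1, tau] / c_ltau
        for i in range(1, tau + 1):          # arrival of the order placed i periods ago
            prob = P[l - 1, tau - 1] / c_ltau
            for j in range(tau, i, -1):      # p_{tau,tau-1} * ... * p_{i+1,i}
                prob *= P[j - 1, j - 2]
            prob *= c[i - 1, i - 1]          # c_{ii}
            Q[row, idx[(i, i)]] = prob
    return states, Q
```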

As in Section 3, one can show that a state-dependent, base-stock policy is optimal. To compute the optimal base-stock levels, first charge the following costs to period t:

$$G((\tau, l), y) := \sum_{m \in S} \Pr(L_t = m \mid H_t = (\tau, l))\, G(m, y)$$

if $H_t = (\tau, l)$ and $IP(t) = y$. To determine $\Pr(L_t = m \mid H_t = (\tau, l))$ for all $m \in S$, first compute the probability distribution of $L_{t-\tau+1}$ given $H_t$ as follows:

$$\varphi(l') := \Pr(L_{t-\tau+1} = l' \mid H_t = (\tau, l)) = \Pr(L_{t-\tau+1} = l' \mid L_{t-\tau} = l,\ L_{t-\tau+1} \ge \tau) = \begin{cases} 0, & l' < \tau \\ p_{ll'}/c_{l\tau}, & l' \ge \tau \end{cases}$$

for all $l' \in S$. Then, $(\varphi(1), \ldots, \varphi(M)) \cdot P^{\tau-1}$ gives the probability distribution of $L_t$ given $H_t$. The algorithm is the same as that for the complete-information case, with only input changes: replace the Markov chain L there with the Markov chain H here, and the cost functions $G(m, y)$ there with $G((\tau, l), y)$ here. The number of iterations is now $M(M+1)/2$, the number of states of the Markov chain H.
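A sketch of the posterior computation and the corresponding cost function, reusing the hypothetical `G` from the Section 3 sketch (names are ours):

```python
import numpy as np

def leadtime_posterior(P, tau, l):
    """Distribution of L_t given H_t = (tau, l): form phi and push it tau-1 steps forward."""
    M = P.shape[0]
    c = np.flip(np.cumsum(np.flip(P, axis=1), axis=1), axis=1)
    phi = np.zeros(M)                                    # phi(l') = Pr(L_{t-tau+1} = l' | H_t)
    phi[tau - 1:] = P[l - 1, tau - 1:] / c[l - 1, tau - 1]
    return phi @ np.linalg.matrix_power(P, tau - 1)

def G_incomplete(tau, l, y, P, demand_pmf, h, p):
    """G((tau, l), y) = sum_m Pr(L_t = m | H_t = (tau, l)) G(m, y)."""
    post = leadtime_posterior(P, tau, l)
    return sum(post[m - 1] * G(m, y, P, demand_pmf, h, p)
               for m in range(1, P.shape[0] + 1))
```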

The following proposition summarizes the key results in this section.


Proposition 2 In the incomplete-information case, under the assumption that at the beginning of each period t the retailer observes the value of $L_\tau$ for all τ with $\tau + L_\tau \le t$, the optimal strategy is to follow a state-dependent, base-stock policy with order-up-to level $s^*_h$, where h ranges over the states of the Markov chain H. Denote by $C_H$ the long-run average costs under this policy. The retailer's minimum achievable long-run average costs under incomplete information (without the informational assumption) are bounded from below by $C_H$ and from above by $C_S$, the long-run average costs under a constant base-stock policy.

5 Numerical Examples

Recall that $C_C$ is the minimum long-run average cost achievable under complete leadtime information, and that $C_H$ and $C_S$ are lower and upper bounds, respectively, on the minimum long-run average cost when the retailer only has incomplete leadtime information. By comparing these different levels of performance in numerical examples, we learn something about the value of leadtime information. This represents the first objective of this section.

The second objective is to test the performance of the myopic base-stock policy. The myopic policy is attractive for computational reasons. As mentioned earlier, the myopic base-stock levels are upper bounds on the optimal base-stock levels. How suboptimal can the myopic policy be? We intend to answer this question with numerical examples.

Note that the system parameters consist of the holding and penalty costs, the demand distribution, and the transition matrix of the leadtime Markov chain. We fixed the holding cost rate at h = 1 and varied the penalty cost rate over p = 1, 10, 20. We used a negative binomial distribution to model the demand in a period. We divided the examples into two classes depending on the mean demand per period: the high-volume examples have a mean demand around 25, and the low-volume examples have a mean demand less than 1. Table 1 summarizes the demand distributions used, with the corresponding coefficients of variation.

(Note that for the negative binomial distribution, $\mu \le \sigma^2 \le \mu(\mu+1)$; therefore $1/\sqrt{\mu} \le \sigma/\mu \le \sqrt{(\mu+1)/\mu}$. As a result, a large coefficient of variation is possible only with low-volume items. The fact that the negative binomial distribution has an integer parameter also limits the attainable values of μ and σ.)

μ     σ²     σ/μ
0.6   0.72   1.41
0.2   0.24   2.45
0.1   0.105  3.24
24    600    1.02
26    164    0.50
25    69     0.33

Table 1: Demand Distributions.
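The paper does not spell out how the negative binomial distributions in Table 1 are parameterized. As an illustration only, one common moment-matching recipe (the failures-before-the-r-th-success convention, with mean r(1 − p)/p and variance r(1 − p)/p²) is sketched below, with r rounded to an integer as the note above requires; the resulting pmf can be fed to the hypothetical `G` of Section 3.

```python
import numpy as np
from scipy.stats import nbinom

def nbinom_pmf_from_moments(mu, var, d_max):
    """Moment-matched negative binomial demand pmf on 0..d_max (requires var > mu).
    This is one common parameterization, not necessarily the one used in the paper."""
    p = mu / var
    r = max(1, round(mu * mu / (var - mu)))   # integer parameter
    d = np.arange(d_max + 1)
    pmf = nbinom.pmf(d, r, p)
    return pmf / pmf.sum()                    # renormalize after truncation at d_max

demand_pmf = nbinom_pmf_from_moments(0.6, 0.72, d_max=30)   # e.g. the first row of Table 1
```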

We fixed the state space of the leadtime Markov chain at $S = \{1, 2, 3, 4\}$. We used the following transition matrices, with their corresponding stationary leadtime distributions. Note that the stationary distributions exhibit four distinct shapes: approximately normal, uniform, skewed high, and skewed low.

• Leadtime process normal:

$$P = \begin{pmatrix} 0.2 & 0.4 & 0.3 & 0.1 \\ 0.1 & 0.4 & 0.4 & 0.1 \\ 0 & 0.3 & 0.6 & 0.1 \\ 0 & 0 & 0.6 & 0.4 \end{pmatrix}, \qquad \pi = (0.03622,\ 0.28974,\ 0.53119,\ 0.14286)$$

• Leadtime process uniform:

$$P = \begin{pmatrix} 0.5 & 0.2 & 0.2 & 0.1 \\ 0.5 & 0.2 & 0.2 & 0.1 \\ 0 & 0.6 & 0.2 & 0.2 \\ 0 & 0 & 0.4 & 0.6 \end{pmatrix}, \qquad \pi = (0.25,\ 0.25,\ 0.25,\ 0.25)$$

• Leadtime process skewed high:

$$P = \begin{pmatrix} 0.025 & 0.025 & 0.05 & 0.9 \\ 0.05 & 0.05 & 0.1 & 0.8 \\ 0 & 0.1 & 0.2 & 0.7 \\ 0 & 0 & 0.2 & 0.8 \end{pmatrix}, \qquad \pi = (0.001,\ 0.021,\ 0.198,\ 0.780)$$

• Leadtime process skewed low:

$$P = \begin{pmatrix} 0.9 & 0.05 & 0.025 & 0.025 \\ 0.8 & 0.1 & 0.05 & 0.05 \\ 0 & 0.7 & 0.2 & 0.1 \\ 0 & 0 & 0.6 & 0.4 \end{pmatrix}, \qquad \pi = (0.78,\ 0.10,\ 0.07,\ 0.05)$$

The results for the low-volume items are summarized in Table 2. Notice that the average of $C_S/C_C$ is only 101%, with the maximum less than 105%. Therefore, the leadtime information has little value when demand is low. Also note that the state-dependent base-stock levels under complete information exhibit little variation, so a constant base-stock policy should perform quite well in this case.

The results for the high-volume items are in Table 3. The average of $C_H/C_C$ is 109%, with the maximum ratio at 137%. Note that $C_H/C_C$ represents a lower bound on the ratio of the minimum achievable costs under incomplete information to the minimum costs under complete information. Therefore, the value of leadtime information can be significant when demand is high.

Figure 1 plots the ratio $C_H/C_C$ against several system parameters for the high-volume items. The results suggest that the value of leadtime information decreases if the demand coefficient of variation is high, the penalty cost increases, or the stationary leadtime distribution has less variance.

For both low- and high-volume items, the gap between $C_H$ and $C_S$ is consistently small, suggesting that these bounds are tight and that the constant base-stock policy is not a bad choice when the retailer only has incomplete leadtime information. The order-arrival history is not very informative: ignoring this information altogether only slightly increases the costs.

Table 3 also reports the performance of the myopic base-stock policy for the high-volume items in the case with complete information. The long-run average cost of the myopic policy was obtained by simulation, and the table also includes a 95% confidence interval for each simulated result. The cost increase caused by the myopic policy is negligible. This is good because myopic policies are very easy to compute. (We did not test the performance of the myopic policy for the low-volume items because in those cases the constant base-stock policy, without using any realized leadtime information, already performs very well.)
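For completeness, here is a rough sketch of the kind of simulation one could run to evaluate a state-dependent base-stock policy (such as the myopic levels) under complete information; it follows the sequence of events of Section 3, and all names are ours rather than the authors'.

```python
import numpy as np

def simulate_policy(P, s_levels, demand_sampler, h, p, T=200000, seed=0):
    """Average per-period holding/backorder cost of a state-dependent base-stock
    policy under complete leadtime information.  `s_levels[m]` is the order-up-to
    level in leadtime state m (1-indexed); `demand_sampler(rng)` draws one demand."""
    rng = np.random.default_rng(seed)
    M = P.shape[0]
    state = 1                                   # initial leadtime state (arbitrary)
    IL = s_levels[state]                        # start with inventory on hand
    outstanding = 0
    arrivals = {}                               # due period -> quantity
    total_cost = 0.0
    for t in range(T):
        # order up to the state-dependent level, knowing L_t = state
        q = max(0, s_levels[state] - (IL + outstanding))
        if q > 0:
            arrivals[t + state] = arrivals.get(t + state, 0) + q
            outstanding += q
        # receive any delivery due this period
        due = arrivals.pop(t, 0)
        IL += due
        outstanding -= due
        # demand, then end-of-period costs
        IL -= demand_sampler(rng)
        total_cost += h * max(IL, 0) + p * max(-IL, 0)
        # leadtime state transition
        state = int(rng.choice(np.arange(1, M + 1), p=P[state - 1]))
    return total_cost / T
```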

6 Conclusions

This article has shown that the value of leadtime information can be significant. This is especially true when the leadtime distribution exhibits high variability or when the demand is high volume. It is important to point out that the value of leadtime information here is purely the result of information sharing between the supplier and the retailer, without changing the underlying leadtime process. It is also found that when leadtime information is lacking, a simple policy such as the constant base-stock policy performs well, whereas under full leadtime information, the myopic policy performs well.


References

Chen, F. 2001. Information sharing and supply chain coordination. To appear in Handbook in Operations Research and Management Science: Supply Chain Management Volume, T. de Kok and S. Graves (eds.), North-Holland.

Chen, F. and J.-S. Song. 2001. Optimal policies for multi-echelon inventory problems with Markov-modulated demand. Operations Research 49 (2), 226-234.

Ehrhardt, R. 1984. (s, S) policies for a dynamic inventory model with stochastic leadtimes. Operations Research 32, 121-132.

Kaplan, R. 1970. A dynamic inventory model with stochastic lead times. Management Science 16, 491-507.

Nahmias, S. 1979. Simple approximations for a variety of dynamic leadtime lost-sales inventory models. Operations Research 27, 904-924.

Robinson, L., J. Bradley, and J. Thomas. 2000. Consequences of order crossover in inventory replenishment systems. Working paper, Cornell and Duke Universities.

Song, J.-S. 1994. The effect of leadtime uncertainty in a simple stochastic inventory model. Management Science 40 (5), 603-613.

Song, J.-S. and P. Zipkin. 1996. Inventory control with information about supply conditions. Management Science 42 (10), 1409-1419.

Svoronos, A. and P. Zipkin. 1991. Evaluation of one-for-one replenishment policies for multi-echelon inventory systems. Management Science 37, 68-83.

Zipkin, P. 1986. Stochastic leadtimes in continuous-time inventory models. Naval Research Logistics Quarterly 33, 763-774.

π(·)     σ/μ   μ    p/h   s*1  s*2  s*3  s*4   CC     CH/CC    Y   CS/CC
normal   3.2   0.1   1     1    1    1    1    0.76   100.0%   1   100.0%
normal   3.2   0.1   10    1    1    1    1    1.41   100.0%   1   100.0%
normal   3.2   0.1   20    1    2    2    2    1.85   100.1%   2   100.1%
normal   2.4   0.2   1     1    1    1    1    0.77   100.0%   1   100.0%
normal   2.4   0.2   10    2    2    2    2    2.12   100.0%   2   100.0%
normal   2.4   0.2   20    2    2    3    3    2.70   100.3%   3   100.3%
normal   1.4   0.6   1     1    2    2    3    1.29   100.2%   2   100.2%
normal   1.4   0.6   10    4    4    5    6    3.55   101.4%   5   101.5%
normal   1.4   0.6   20    5    5    6    6    4.30   102.0%   5   102.1%
uniform  3.2   0.1   1     1    1    1    1    0.78   100.0%   1   100.0%
uniform  3.2   0.1   10    1    1    1    1    1.36   100.0%   1   100.0%
uniform  3.2   0.1   20    1    1    2    2    1.83   101.4%   2   101.6%
uniform  2.4   0.2   1     1    1    1    1    0.78   100.0%   1   100.0%
uniform  2.4   0.2   10    2    2    2    2    2.10   100.0%   2   100.0%
uniform  2.4   0.2   20    2    2    3    3    2.66   102.2%   3   102.6%
uniform  1.4   0.6   1     1    2    2    3    1.29   102.0%   2   102.0%
uniform  1.4   0.6   10    4    4    5    6    3.55   103.9%   5   105.1%
uniform  1.4   0.6   20    5    5    6    6    4.32   103.4%   5   104.0%
high     3.2   0.1   1     1    1    1    1    0.73   100.0%   1   100.0%
high     3.2   0.1   10    1    1    1    1    1.68   100.0%   1   100.0%
high     3.2   0.1   20    2    2    2    2    1.91   100.0%   2   100.0%
high     2.4   0.2   1     1    1    1    1    0.80   100.0%   1   100.0%
high     2.4   0.2   10    2    2    2    3    2.44   100.0%   2   100.0%
high     2.4   0.2   20    3    3    3    3    2.85   100.0%   3   100.0%
high     1.4   0.6   1     2    2    2    3    1.46   101.0%   3   101.0%
high     1.4   0.6   10    5    5    5    6    3.88   100.9%   5   100.9%
high     1.4   0.6   20    5    6    6    6    4.64   100.0%   6   100.0%
low      3.2   0.1   1     1    1    1    1    0.83   100.0%   1   100.0%
low      3.2   0.1   10    1    1    1    1    1.13   100.0%   1   100.0%
low      3.2   0.1   20    1    1    1    1    1.46   100.0%   1   100.0%
low      2.4   0.2   1     1    1    1    1    0.78   100.0%   1   100.0%
low      2.4   0.2   10    1    2    2    2    1.87   101.0%   2   101.0%
low      2.4   0.2   20    2    2    2    3    2.23   100.0%   2   100.0%
low      1.4   0.6   1     1    1    2    3    1.00   101.5%   1   101.5%
low      1.4   0.6   10    3    4    5    6    2.99   102.8%   3   104.7%
low      1.4   0.6   20    4    5    5    7    3.67   102.9%   4   103.4%

Table 2: Comparison of long-run average costs for low-volume items.

π(·)     σ/μ   μ   p/h   s*1  s*2  s*3  s*4   CC      CH/CC   Y    CS/CC   Cm/CC; 95% CI
normal   1.02  24   1     58   74   88  104    38.0   103%     82  103%    101.1 ± 1.1%
normal   0.49  26   1     69   87  105  126    22.5   110%     97  111%     99.8 ± 0.7%
normal   0.33  25   1     66   83  101  122    16.4   118%     93  118%     99.9 ± 0.5%
normal   1.02  24  10    134  151  168  187   107.3   102%    163  102%    100.9 ± 1.4%
normal   0.49  26  10    120  132  146  168    57.7   106%    143  106%    100.3 ± 1.0%
normal   0.33  25  10    108  118  129  149    41.8   109%    127  109%    100.5 ± 0.7%
normal   1.02  24  20    159  175  192  213   130.7   102%    188  102%    100.5 ± 1.0%
normal   0.49  26  20    134  144  158  179    68.6   105%    155  106%    100.2 ± 1.5%
normal   0.33  25  20    119  127  137  156    49.4   107%    136  107%    100.6 ± 1.0%
uniform  1.02  24   1     56   71   88  106    37.8   107%     74  108%    101.0 ± 0.8%
uniform  0.49  26   1     66   87  108  128    24.3   124%     89  128%    100.1 ± 0.7%
uniform  0.33  25   1     62   84  105  123    18.9   137%     86  142%    100.1 ± 0.5%
uniform  1.02  24  10    132  150  169  191   108.3   106%    160  106%    100.5 ± 1.1%
uniform  0.49  26  10    119  135  153  170    63.5   113%    146  115%    100.1 ± 0.9%
uniform  0.33  25  10    109  122  137  150    48.9   115%    132  118%    100.3 ± 0.7%
uniform  1.02  24  20    157  175  195  217   132.2   105%    187  106%    100.6 ± 1.9%
uniform  0.49  26  20    134  149  166  182    75.4   111%    159  112%     99.8 ± 0.8%
uniform  0.33  25  20    120  132  145  157    57.3   112%    141  114%    100.5 ± 0.6%
high     1.02  24   1     73   86   98  111    42.0   101%    106  101%    100.6 ± 1.0%
high     0.49  26   1     87  101  114  129    23.3   104%    123  104%    100.1 ± 0.7%
high     0.33  25   1     84   96  109  124    16.0   108%    119  108%     99.9 ± 0.7%
high     1.02  24  10    159  169  181  196   115.2   101%    191  101%    100.2 ± 1.0%
high     0.49  26  10    144  150  158  171    57.5   102%    167  102%    100.7 ± 0.8%
high     0.33  25  10    131  135  140  150    37.9   102%    147  102%    100.6 ± 1.0%
high     1.02  24  20    185  195  207  222   139.6   101%    218  101%     99.8 ± 1.9%
high     0.49  26  20    158  163  170  182    67.9   102%    178  102%    101.1 ± 1.5%
high     0.33  25  20    141  144  148  157    44.3   102%    154  102%    100.6 ± 1.3%
low      1.02  24   1     44   60   79   99    30.3   103%     47  104%    101.4 ± 0.8%
low      0.49  26   1     54   77  102  125    19.1   111%     56  113%    101.3 ± 0.9%
low      0.33  25   1     52   75   99  121    14.2   117%     53  119%    101.2 ± 0.7%
low      1.02  24  10    110  133  157  181    93.1   104%    118  106%    100.2 ± 1.7%
low      0.49  26  10     93  125  147  168    58.1   113%    106  119%    100.3 ± 1.4%
low      0.33  25  10     81  114  133  149    48.0   116%     98  125%    100.5 ± 1.5%
low      1.02  24  20    132  158  182  207   115.6   104%    143  106%    100.6 ± 1.9%
low      0.49  26  20    108  140  161  180    72.6   111%    125  116%    100.3 ± 1.0%
low      0.33  25  20     98  128  142  157    61.1   112%    115  118%     99.7 ± 0.8%

Table 3: Comparison of long-run average costs for high-volume items. Cm is the simulated long-run average cost under the myopic policy, with the 95% confidence interval also scaled to CC.

[Figure 1: Value of Leadtime Information. Four panels, one per stationary leadtime distribution (normal, uniform, high, low), plot the value of information (0% to 40%) against the demand coefficient of variation σ/μ = 1.02, 0.50, 0.33, with one series for each of p/h = 1, 10, 20.]