Introduction to Markov Chain


A Markov chain is a random process with the property that, given the values of the process from time zero up through the current time, the conditional probability of the value of the process at any future time depends only on its value at the current time. That is, the future and the past are conditionally independent given the present.


    Examples

    Random walk

    Queuing processes

Birth-death process


Discrete-time Markov chains

A sequence of integer-valued random variables X0, X1, . . . is called a Markov chain if for n ≥ 1,

    P(Xn+1 = in+1 | Xn = in, Xn−1 = in−1, . . . , X0 = i0) = P(Xn+1 = in+1 | Xn = in).
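
As a concrete illustration of the definition, here is a minimal Python sketch of a chain simulator: the next state is drawn from a distribution that depends only on the current state, never on the earlier history. The three-state transition probabilities are hypothetical placeholders.

    import random

    # Hypothetical transition probabilities for a 3-state chain:
    # P[i][j] = P(X_{n+1} = j | X_n = i).  Each row sums to 1.
    P = [
        [0.5, 0.3, 0.2],
        [0.1, 0.6, 0.3],
        [0.2, 0.2, 0.6],
    ]

    def step(i):
        """Draw the next state given only the current state i (the Markov property)."""
        return random.choices(range(len(P)), weights=P[i])[0]

    # Simulate X_0, X_1, ..., X_10 starting from state 0.
    x = 0
    path = [x]
    for _ in range(10):
        x = step(x)
        path.append(x)
    print(path)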


Consider a person who has had too much to drink and is staggering around. Suppose that with each step, the person randomly moves forward or backward by one step. This is the idea to be captured in the random walk example below.
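
To make the staggering-walk picture concrete, the following is a minimal Python sketch of a simple random walk on the integers, with each step equally likely to be +1 or −1; the step count is an arbitrary choice.

    import random

    def random_walk(n_steps, start=0):
        """Simulate a simple random walk: each step is +1 or -1 with probability 1/2."""
        position = start
        path = [position]
        for _ in range(n_steps):
            position += random.choice([1, -1])
            path.append(position)
        return path

    print(random_walk(20))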


State space and transition probabilities

The set of possible values that the random variables Xn can take is called the state space of the chain. We take the state space to be the set of integers or some specified subset of the integers.

The conditional probabilities P(Xn+1 = j | Xn = i) are called transition probabilities.


We assume that the transition probabilities do not depend on the time n. Such a Markov chain is said to have stationary transition probabilities or to be time homogeneous. For a time-homogeneous Markov chain, we use the notation

    pij := P(Xn+1 = j | Xn = i),

which, by assumption, does not depend on n.
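
Since the pij do not depend on n, they can be collected once and for all into a transition matrix whose row i is the conditional distribution of Xn+1 given Xn = i. A minimal sketch, with made-up numbers for a two-state chain:

    # Hypothetical time-homogeneous chain on states {0, 1}.
    # p[i][j] = P(X_{n+1} = j | X_n = i), the same matrix at every time n.
    p = [
        [0.9, 0.1],
        [0.4, 0.6],
    ]

    # Each row is a conditional probability distribution, so it must sum to 1.
    for i, row in enumerate(p):
        assert abs(sum(row) - 1.0) < 1e-12, f"row {i} does not sum to 1"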


Figure 12.3 is a special case in which ai = a and bi = b for all i. The state transition diagram tells us that

    pij = ai,             if j = i + 1,
          bi,             if j = i − 1,
          1 − (ai + bi),  if j = i,
          0,              otherwise.        (12.9)


We now introduce a barrier at zero, leading to the state transition diagram in Figure 12.4. In this case, we speak of a random walk with a barrier. For i ≥ 1, the formula for pij is given by (12.9), while for i = 0,

    p0j = a0,       if j = 1,
          1 − a0,   if j = 0,
          0,        otherwise.        (12.10)
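
Formulas (12.9) and (12.10) can be coded as a single transition-probability function. A minimal Python sketch, assuming for simplicity the special case ai = a and bi = b, with placeholder values:

    A, B = 0.3, 0.5  # hypothetical up/down probabilities (a and b)

    def p(i, j):
        """Transition probability pij for a random walk with a barrier at 0,
        per (12.9) for i >= 1 and (12.10) for i = 0."""
        if i == 0:                 # (12.10): barrier at the origin
            if j == 1:
                return A
            if j == 0:
                return 1 - A
            return 0.0
        if j == i + 1:             # (12.9): step up
            return A
        if j == i - 1:             # step down
            return B
        if j == i:                 # stay put
            return 1 - (A + B)
        return 0.0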


If a0 = 1, the barrier is said to be reflecting. If a0 = 0, the barrier is said to be absorbing. Once a chain hits an absorbing state, the chain stays in that state from that time onward.

A random walk with a barrier at the origin has several interpretations.


When thinking of a drunken person staggering around, we can view a wall or a fence as a reflecting barrier; if the person backs into the wall, then with the next step the person must move forward, away from the wall.

Similarly, we can view a curb or step as an absorbing barrier; if the person trips and falls down when stepping over a curb, then the walk is over.


A random walk with a barrier at the origin can be viewed as a model for a queue with an infinite buffer. Consider a queue of packets buffered at an Internet router. The state of the chain is the number of packets in the buffer; this number cannot go below zero. The number of packets can increase by one if a new packet arrives, decrease by one if a packet is forwarded to its next destination, or stay the same if both or neither of these events occurs.
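
A minimal Python simulation of this queue, with hypothetical per-slot arrival and forwarding probabilities; note that a packet can only be forwarded when the buffer is nonempty, which keeps the state from going below zero:

    import random

    ARRIVE, DEPART = 0.4, 0.5  # hypothetical per-slot arrival/forwarding probabilities

    def simulate_queue(n_slots, start=0):
        """Track the number of buffered packets over n_slots time slots."""
        n = start
        history = [n]
        for _ in range(n_slots):
            arrival = random.random() < ARRIVE
            departure = random.random() < DEPART and n > 0  # can't forward from an empty buffer
            n += int(arrival) - int(departure)
            history.append(n)
        return history

    print(simulate_queue(20))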


Now consider a random walk with barriers at the origin and at N, as shown in Figure 12.5. The formula for pij is given by (12.9) above for 1 ≤ i ≤ N − 1, by (12.10) above for i = 0, and, for i = N, by

    pNj = bN,       if j = N − 1,
          1 − bN,   if j = N,
          0,        otherwise.


This chain can be viewed as a model for a queue with a finite buffer, especially if ai = a and bi = b for all i.

When a0 = 0 and bN = 0, the barriers at 0 and N are absorbing, and the chain is a model for the gambler's ruin problem. In this problem, a gambler starts at time zero with i dollars, where 1 ≤ i ≤ N − 1, and plays until he either runs out of money (absorption into state zero) or his winnings reach N dollars and he stops playing (absorption into state N).
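
A minimal Python sketch of the gambler's ruin chain; for simplicity it assumes the gambler wins or loses a dollar every round (no ties), and the stake, target, and win probability below are placeholder values:

    import random

    def gamblers_ruin(i, N, p=0.5):
        """Play until absorption; return the final state (0 = ruin, N = target reached)."""
        state = i
        while 0 < state < N:
            state += 1 if random.random() < p else -1
        return state

    # Estimate the ruin probability for a hypothetical stake of 3 out of N = 10.
    trials = 10_000
    ruined = sum(gamblers_ruin(3, 10) == 0 for _ in range(trials))
    print(f"estimated ruin probability: {ruined / trials:.3f}")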


If N = 2 and b2 = 0, the chain can be interpreted as the "story of life" if we view state i = 0 as the "healthy" state, i = 1 as the "sick" state, and i = 2 as the "death" state.

In this model, if you are healthy (in state 0), you remain healthy with probability 1 − a0 and become sick (move to state 1) with probability a0.


If you are sick (in state 1), you become healthy (move to state 0) with probability b1, remain sick (stay in state 1) with probability 1 − (a1 + b1), or die (move to state 2) with probability a1. Since state 2 is absorbing (b2 = 0), once you enter this state, you never leave.
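
Written as a transition matrix, with rows indexed by the current state and columns by the next state, the chain looks as follows; the values of a0, a1, and b1 are hypothetical:

    a0, a1, b1 = 0.1, 0.2, 0.6  # hypothetical sickness, death, and recovery probabilities

    # Rows: current state (0 = healthy, 1 = sick, 2 = dead); columns: next state.
    P = [
        [1 - a0, a0,            0.0],  # healthy: stay healthy or get sick
        [b1,     1 - (a1 + b1), a1 ],  # sick: recover, stay sick, or die
        [0.0,    0.0,           1.0],  # dead: absorbing state
    ]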


    Stationary distributions
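
A probability vector π is called a stationary distribution of the chain if πj = Σi πi pij for every state j; if X0 is distributed according to π, then every Xn has that same distribution. As a minimal sketch, here is a Python check of stationarity for a hypothetical two-state chain, whose stationary distribution is known in closed form:

    # Hypothetical two-state chain: p01 = a, p10 = b.
    a, b = 0.1, 0.4

    P = [[1 - a, a],
         [b, 1 - b]]

    # For this chain the stationary distribution is pi = (b, a) / (a + b).
    pi = [b / (a + b), a / (a + b)]

    # Check stationarity: pi_j = sum_i pi_i * P[i][j] for each j.
    for j in range(2):
        total = sum(pi[i] * P[i][j] for i in range(2))
        assert abs(total - pi[j]) < 1e-12
    print("stationary distribution:", pi)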
