*Correspondence to: Mohamed A. Ahmed, Department of Statistics and Operations Research, Kuwait University, P.O. Box 5969, Safat, Kuwait
CCC 8755-0024/98/030199-19 $17.50. © 1998 John Wiley & Sons, Ltd. Received 11 June 1997. Revised 12 January 1998.
APPLIED STOCHASTIC MODELS AND DATA ANALYSIS
Appl. Stochastic Models & Data Anal., 14, 199—217 (1998)
TIME-INHOMOGENEOUS DISCRETE STOCHASTIC SEARCH METHODS FOR OPTIMAL BERNOULLI PARAMETERS
MOHAMED A. AHMED1,*, TALAL M. ALKHAMIS1 AND DOUGLAS R. MILLER2
1 Department of Statistics and Operations Research, Kuwait University, P.O. Box 5969, Safat, Kuwait
2 ORE Department, Mail Stop 4A6, George Mason University, Fairfax, Virginia 22030, U.S.A.
SUMMARY
We present two time-inhomogeneous search processes for finding optimal Bernoulli parameters when the performance measure cannot be evaluated exactly but must be estimated through Monte Carlo simulation. At each iteration, two neighbouring alternatives are compared and the one that appears to be better is passed on to the next iteration. The first search process uses a sample size for each configuration that increases with the iteration count. The second uses a sequential sampling procedure whose boundaries increase as the number of iterations grows. Because the acceptance of a new configuration at each iteration depends on the iterate number, the search process is an inhomogeneous Markov chain. We show that if the increase occurs more slowly than a certain rate, these search processes converge to the optimal set with probability one. © 1998 John Wiley & Sons, Ltd.
KEY WORDS stochastic optimization; time-inhomogeneous Markov chains; simulation
1. INTRODUCTION
Discrete event simulation is widely used in planning communication networks, computer systems, production assembly lines, and other complex systems. In planning these systems, decisions must be made concerning configurations with which various probabilistic events occur within the system. Planners typically want to know how the system will perform under various configuration settings.
Developing efficient ways of optimizing stochastic discrete event systems by simulation is not easy but is extremely important in practice. In problems where the objective function is not given as an explicit function of the system parameters, as is the case in simulation, direct search methods (e.g., simulated annealing1) are often applicable. But applying these search techniques to stochastic simulation problems raises a major difficulty: most applications of search techniques assume that a given set of values for the decision variables yields a unique value of the objective function. This is not the case in stochastic simulation optimization, where we only observe samples from the objective function at each set of parameter values. Thus, a single evaluation of the objective function provides only one sample from that objective function.
Much of the literature on simulation output analysis concentrates on estimating the expected value of the system response. However, stochastic simulation optimization on the basis of average system behaviour alone can sometimes lead to misleading conclusions.2 We may therefore consider measures of performance other than means. One useful measure of performance is the probability that the system response is less than some value. Let X_j be a random variable defined on the jth replication, for j = 1, 2, …, n. Suppose that we would like to estimate the probability β = P(X ∈ A), where A ⊂ ℝ is a set of real numbers. By way of example, for a computer system we might want to determine the probability that the response time of a job is less than or equal to 30 s. Estimating the probability β is equivalent to estimating the parameter of a Bernoulli distribution in which β is the probability of success.
In this paper, our search problem is to find the points in S, where S = {1, 2, …, s} is a finite set of system configurations, with maximum value of β. We will assume that S may have multiple optima and that S* is the set containing them, i.e. S* = {i : β_i = max_{j∈S} β_j}. If there is a single point in S*, denote it by i*, where β_{i*} > β_j for all j ∈ S, j ≠ i*. Formally, our optimization problem is to choose the best system, that is, the system with the largest value of β_i:

  β_{i*} = max_{j∈S} β_j = max_{j∈S} P(performance event | configuration j)
Consider, for example, a general open k-station queueing network with the measure of performance being, say, the throughput. Here, the performance event may be defined as {throughput is less than some critical value}. We might want to select the configuration with the largest value of β, where β = P{throughput is less than some critical value}.
We focus on the case where the objective function is evaluated through simulation. In such a situation, all function evaluations include noise, so conventional (deterministic) optimization methods cannot be used to solve this problem. Simulation optimization with discrete decision variables is currently a relatively underdeveloped area.3,4 However, a fair amount of literature is available.
Yan and Mukai5 present a method for simulation optimization with respect to discrete decision variables. Their proposed method is related to, but different from, the technique of simulated annealing. In their approach they compare observations of the system performance with observations of a fixed random variable (called a stochastic ruler (SR)) whose range covers all possible values of the objective function samples. For a minimization problem, the SR algorithm tries to maximize the probability that the estimated objective function is smaller than the ruler. Yan and Mukai show that under fairly general conditions the SR algorithm converges with probability one to the global optima.
Gong et al.6 proposed a stochastic comparison (SC) algorithm for simulation optimization with respect to discrete decision variables. At each iteration they compare observations of the objective function at two different solution points. Gong et al. show that the SC algorithm converges with probability one to the global solution if the noise in the evaluations of the objective function at two different parameter values is independent and identically distributed and if, for each parameter value, the noise terms have a symmetric and continuous probability density function with mean zero.
Andradottir3 has also proposed a method for discrete simulation optimization. In each iteration of this method, one observation of the current solution candidate is compared with one observation of a new solution candidate selected from the neighbours of the current candidate. The solution point with the better observed value is declared the winner and passed to the next iteration. Andradottir proves that her method converges almost surely to a local optimizer of the objective function under certain conditions. In fact, she shows that the alternative visited most often in the first m iterations converges almost surely to a local optimizer of the objective function as m goes to infinity. In later work, Andradottir4 revised the search procedure of Reference 3: the observed objective function value of the current solution point is compared with the observed objective function value of a new solution point selected from the set of all feasible solution points. She showed that the element of the set of feasible alternatives that the generated sequence visits most often converges almost surely to a globally optimal solution of the underlying optimization problem.
Ahmed et al.7 developed three search strategies for finding optimal Bernoulli parameters. The first strategy uses a single observation of each configuration in every iteration, while the second uses a fixed number of observations of each configuration in every iteration. The third strategy uses sequential sampling with fixed boundaries. These search procedures are time homogeneous because they have a fixed sample size or fixed boundary at each iteration. Ahmed et al.7 showed that these search procedures satisfy local balance equations and that their equilibrium distributions give most weight to the optimal points. They also showed that the configuration visited most often in the first m iterations converges almost surely to a globally optimal solution, as in Andradottir.3,4
In this paper we extend the search procedures of Ahmed et al.7 to time-inhomogeneous search by using an increasing sample size or sequential sampling with increasing boundaries. We show that if the increase occurs more slowly than a certain rate, the search process converges to the optimal set with probability one. This is in contrast to the non-converging, equilibrium behaviour of the time-homogeneous versions of the search processes.
The paper is organized as follows. Section 2 presents the problem structure, definitions and assumptions used throughout the paper. In Section 3, we present the first search strategy, which uses an increasing sample size, and prove that it converges to the optimal set with probability one. In Section 4, we present the second search strategy, which uses sequential sampling with increasing boundaries, and prove its convergence to the optimal set with probability one. In Section 5, we present computational experience for a simple example and for a real-life example. Finally, Section 6 contains some concluding remarks.
2. PROBLEM STRUCTURE AND ASSUMPTIONS
In this work we are interested in solving a stochastic optimization problem involving a finite number (say s) of different configurations. Define X_i to be the indicator of the performance event under configuration i, i.e.

  X_i = 1 with probability β_i
  X_i = 0 with probability 1 − β_i,   0 < β_i < 1
Our goal is to find the configuration that has the maximum probability β. Assume S = {1, 2, …, s} is a non-empty discrete finite set of configurations; the search is conducted by picking an initial point in S and then comparing it with a neighbouring point according to the following definition.
Definition 1
For each i ∈ S there exists a subset N(i) of S − {i}, called the set of neighbours of i, such that each point in N(i) can be reached from i in a single transition.
Our search is organized so that the next solution candidate is found among the neighbours of the present candidate. Hence, to ensure that the search will eventually cover all elements of S, we make the following assumption about the system {N(i), i ∈ S} of neighbourhoods.
Assumption 2
For any pair (i, j) ∈ S × S, j is reachable from i, i.e. there exists a finite sequence {i_m}, m = 0, 1, …, l, for some l, such that i_0 = i, i_l = j and i_{m+1} ∈ N(i_m) for m = 0, 1, …, l − 1.
Now we impose a stochastic structure on the selection of a candidate among the neighbours through the following function G. Given i ∈ S, a candidate is selected among N(i) such that the probability of selecting neighbour j ∈ N(i) is equal to G_ij, defined as follows.
Definition 3
A function G : S × S → [0, 1] is said to be a generating probability function for S and N if

  (1) G_ij > 0 ⟺ j ∈ N(i), and
  (2) Σ_{j∈S} G_ij = 1 for all i ∈ S.
We will consider G_ij such that the probability is distributed uniformly over N(i). Given i ∈ S, a candidate is selected among N(i) such that the probability of selecting neighbour j ∈ N(i) is

  G_ij = 1/|N(i)|  for j ∈ N(i)
  G_ij = 0         otherwise
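As a concrete illustration, the uniform generating function can be coded directly. The sketch below is ours, not the paper's; it assumes an interval neighbourhood structure of the kind used later in the computational examples, where N(i) consists of the configurations within ±d of i.

```python
import random

def neighbours(i, s, d=1):
    """N(i): configurations within distance d of i, excluding i itself."""
    return [j for j in range(s) if j != i and abs(j - i) <= d]

def generate_candidate(i, s, d=1, rng=random):
    """Draw j from N(i) with uniform probability G_ij = 1/|N(i)|."""
    return rng.choice(neighbours(i, s, d))
```

With s = 4 and d = 1, each interior point has two neighbours and each endpoint has one, so G_ij = 1/2 or 1 accordingly.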
3. SEARCH PROCESS WITH INCREASING SAMPLE SIZE (STRATEGY 1)
At each iteration k a sample of n_k pairs of observations is taken from i and j: (X_i1, X_j1), (X_i2, X_j2), …, (X_{i n_k}, X_{j n_k}). A candidate neighbour j ∈ N(i) is accepted, and a move is performed from i to j, if the observation from j dominates the observation from i in each pair, i.e. if ∩_{m=1}^{n_k} {X_im < X_jm} occurs. The acceptance probability A_ij(k), the probability of accepting configuration j once it is generated from configuration i, is then

  A_ij(k) = P(∩_{m=1}^{n_k} {X_im < X_jm}) = [(1 − β_i) β_j]^{n_k}
Define {I_k, k = 0, 1, 2, …} to be the state of the search process at each iterate. Then {I_k} is a discrete time-inhomogeneous Markov chain with transition matrices P_1, P_2, …, where

  P_k = (p_ij^{(k,k+1)}) = ( (1/|N(i)|) [(1 − β_i) β_j]^{n_k} ),  i, j ∈ S.

Here p_ij^{(k,k+1)} is the probability of going from state i at time k to state j at time k + 1, which depends on k. The algorithm for finding the configuration with optimal β under the search strategy with increasing sample size is as follows.
Algorithm 1
Step 0: Select a starting point I_0 ∈ S and let k = 0.
Step 1: Given I_k = i, choose a candidate J_k from N(i) with probability distribution P[J_k = j | I_k = i] = G_ij, j ∈ N(i).
Step 2: Sample n_k observations from i and j: (X_i1, X_j1), (X_i2, X_j2), …, (X_{i n_k}, X_{j n_k}), and set

  I_{k+1} = J_k if ∩_{m=1}^{n_k} {X_im < X_jm} occurs,
  I_{k+1} = I_k otherwise.

Step 3: Set k = k + 1, update n_k. Go to Step 1.
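A minimal Python sketch of Algorithm 1, under the assumption that each configuration i can be sampled as a Bernoulli(β_i) variate; the function names and the visit-count bookkeeping are ours, and the sample-size schedule n_k is supplied by the caller.

```python
import random

def strategy1(beta, neighbours, n_schedule, iterations, seed=0):
    """Search strategy 1: accept candidate j only if X_jm beats X_im in
    every one of the n_k sampled pairs (acceptance prob. [(1-b_i)b_j]^n_k)."""
    rng = random.Random(seed)
    i = rng.randrange(len(beta))              # Step 0: starting point I_0
    visits = [0] * len(beta)
    for k in range(iterations):
        j = rng.choice(neighbours(i))         # Step 1: candidate J_k ~ G_ij
        n_k = n_schedule(k)
        # Step 2: move iff X_im < X_jm (i.e. X_im = 0, X_jm = 1) for all m
        if all(rng.random() >= beta[i] and rng.random() < beta[j]
               for _ in range(n_k)):
            i = j
        visits[i] += 1                        # Step 3: next iterate
    return i, visits
```

On a small four-point instance with β = (0.9, 0.5, 0.1, 0.9) on a line, the visit counts concentrate on the two optimal configurations.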
Convergence of {I_k, k = 1, 2, …} to the limit probability vector π^∞, where π^∞_i = lim_{k→∞} P(I_k = i) (i.e., Σ_{i∈S*} π^∞_i = 1 and π^∞_i = 0 for i ∉ S*), can be proved using the strong ergodicity theory of inhomogeneous Markov chains.8,9 To prove ergodicity we need the following definitions.
Definition 4
Let P be a stochastic matrix. The ergodic coefficient of P, denoted by α(P), is defined by

  α(P) = min_{i,k} Σ_j min(p_ij, p_kj).
Definition 5
A finite inhomogeneous Markov chain is weakly ergodic if for all i, j, l ∈ S and all m > 0,

  lim_{k→∞} (p_il^{(m,k)} − p_jl^{(m,k)}) = 0,

where p_ij^{(m,k)} is the (i, j)th element of the product P_m P_{m+1} ⋯ P_k.
Definition 6
A finite inhomogeneous Markov chain is strongly ergodic if there exists a stochastic vector q* such that for all i, j ∈ S and all m > 0, lim_{k→∞} p_ij^{(m,k)} = q*_j.
It is usually difficult to show that an inhomogeneous Markov chain is strongly ergodic directly from the definition. In this section, we first show that the search process {I_k, k = 0, 1, …} is weakly ergodic.
3.1. Weak ergodicity of strategy 1
If 0 < β_i < 1, then 0 < A_ij < 1; therefore, all states communicate if they are connected by neighbourhoods. Assume for any i, j ∈ S = {1, 2, …, s} there exists a path, say {i = i_0 → i_1 → i_2 → ⋯ → i_{l−1} → i_l = j}, such that i_{m+1} ∈ N(i_m), m = 0, 1, …, l − 1. Let L = max_{i,j} {l_{i,j}}, where l_{i,j} is the length of a minimum-length path from i to j. For a fixed sample size n, the transition probability p_ij is given by

  p_ij = (1/|N(i)|) [(1 − β_i) β_j]^n  for j ∈ N(i).
Let P^L denote the L-step transition matrix. We now find a bound on all the off-diagonal elements of P^L.
Lemma 7
For P^L based on sample size n, all the off-diagonal elements are ≥ μ^n and α(P^L) ≥ (|S| − 2) μ^n, where

  μ = [min_i(1 − β_i) · min_i β_i]^L / (max_i |N(i)|)^L

Proof

P{path i = i_0 → i_1 → ⋯ → i_{l−1} → i_l = j}
  = ∏_{m=0}^{l−1} (1/|N(i_m)|) [(1 − β_{i_m}) β_{i_{m+1}}]^n
  = (1 / ∏_{m=0}^{l−1} |N(i_m)|) [∏_{m=0}^{l−1} (1 − β_{i_m}) β_{i_{m+1}}]^n
  ≥ [(min_i(1 − β_i))^l (min_i β_i)^l]^n / (max_i |N(i)|)^L
  ≥ [(min_i(1 − β_i) · min_i β_i)^L / (max_i |N(i)|)^L]^n
  = μ^n > 0

Because 0 < β_i < 1 and |S| < ∞, min_i β_i and min_i(1 − β_i) exist. By Definition 4, it follows that α(P^L) ≥ (|S| − 2) μ^n. □
Theorem 8
Let the sample size at iteration k, n_k, satisfy

  n_k ≤ trunc[1 + log(1 + k/L) / log(1/μ)]  for k = 0, 1, …

where trunc[x] denotes the greatest integer less than or equal to x. Then the Markov chain {I_k} generated by search strategy 1 using these n_k is weakly ergodic.
Proof

A Markov chain for which Σ_{j=0}^{∞} α(P^{(n_j, n_{j+1})}) diverges for some sequence n_1 < n_2 < ⋯ < n_j < n_{j+1} < ⋯ is weakly ergodic.8 Consider the sequence n_i = (i − 1)L, i = 1, 2, 3, …, and let k(n) denote the number of iterates, in multiples of L, which sample at size n. Then

  Σ_{j=0}^{∞} α(P^{(n_j, n_{j+1})}) = Σ_{n=1}^{∞} k(n) α(P^L) ≥ Σ_{n=1}^{∞} k(n) (|S| − 2) μ^n = ∞  if k(n) ≥ (1/μ)^n

Therefore the series diverges if k(n) ≥ (1/μ)^n. To find the condition on the sample size at each iterate such that the series diverges, we proceed as follows. Let B_i, i ≥ 2, denote the total number of iterates, in multiples of L, performed with sample size ≤ i − 1, i.e. B_i = Σ_{n=1}^{i−1} k(n). Assume that B_i ≥ Σ_{j=1}^{i−1} (1/μ)^j. Then

  1 + B_i ≥ Σ_{j=0}^{i−1} (1/μ)^j = [1 − (1/μ)^i] / [1 − 1/μ] ≥ (1/μ)^{i−1}
  log(1 + B_i) ≥ (i − 1) log(1/μ)
  i ≤ 1 + log(1 + B_i) / log(1/μ)

Since iterates are measured in units of L, for individual iterates we have

  n_k ≤ 1 + log(1 + k/L) / log(1/μ)

where

  μ = [min_i(1 − β_i) · min_i β_i / max_i |N(i)|]^L

In words, if the sample size in search strategy 1 increases no faster than n_k = trunc[1 + log(1 + k/L)/log(1/μ)], then the search process {I_k} is weakly ergodic. □
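The rate condition of Theorem 8 is easy to evaluate numerically. The sketch below (our own helper functions, not from the paper) computes μ from the problem data and the resulting schedule n_k; for realistic values of μ the schedule grows extremely slowly.

```python
import math

def mu_strategy1(beta, neighbour_sizes, L):
    """mu = [min_i(1-b_i) * min_i b_i / max_i |N(i)|]^L (Theorem 8)."""
    return (min(1 - b for b in beta) * min(beta) / max(neighbour_sizes)) ** L

def sample_size(k, L, mu):
    """n_k = trunc[1 + log(1 + k/L) / log(1/mu)]."""
    return math.floor(1 + math.log(1 + k / L) / math.log(1 / mu))
```

The schedule starts at n_0 = 1 and is nondecreasing in k, as the proof requires.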
3.2. Strong ergodicity of strategy 1
As in the case of weak ergodicity, it is usually difficult to show that an inhomogeneous chain is strongly ergodic directly from the definition. In this section, we show that the search process generated by search strategy 1 is strongly ergodic. For a fixed sample size n, the search process {I_k, k = 0, 1, …} becomes a time-homogeneous Markov chain, since the state transition probability p_ij no longer depends on k. In this case the equilibrium probabilities are7

  π_i(n) = |N(i)| (β_i/(1 − β_i))^n / Σ_{j∈S} |N(j)| (β_j/(1 − β_j))^n,   i = 1, 2, …, s.
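The fixed-n equilibrium distribution can be evaluated directly; the short sketch below (our code) does so for a small four-point instance and illustrates the monotone behaviour that Lemma 9 establishes: as n grows, the equilibrium mass concentrates on the optimal set.

```python
def equilibrium(beta, neighbour_sizes, n):
    """pi_i(n) proportional to |N(i)| (b_i / (1 - b_i))^n, normalised over S."""
    w = [a * (b / (1 - b)) ** n for a, b in zip(neighbour_sizes, beta)]
    total = sum(w)
    return [x / total for x in w]
```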
To prove the strong ergodicity of the Markov chain associated with search strategy 1, we have to prove the following two lemmas.
Lemma 9 (Monotone property of the equilibrium probabilities)
For a sequence of probability vectors π(n) of the form

  π_i(n) = |N(i)| (β_i/(1 − β_i))^n / Σ_{j∈S} |N(j)| (β_j/(1 − β_j))^n,   i = 1, 2, …, s

the following hold:

  (i) For each i ∈ S*, if n < n′ then π_i(n) ≤ π_i(n′);
  (ii) For each i ∉ S*, there exists an integer n_i such that if n_i ≤ n < n′ then π_i(n) ≥ π_i(n′).
Proof

Our proof follows the proof of Proposition 6.1 in Reference 5. Consider

  π_i(n) = a_i b_i^n / Σ_{j=1}^{s} a_j b_j^n

as a function of a real variable n, where a_i = |N(i)| and b_i = β_i/(1 − β_i). Then π_i(n) is differentiable with respect to n. Noting that d(a^n)/dn = a^n ln a, we have

  dπ_i(n)/dn = [(Σ_{j=1}^{s} a_j b_j^n) a_i b_i^n ln b_i − a_i b_i^n Σ_{j=1}^{s} a_j b_j^n ln b_j] / [Σ_{j=1}^{s} a_j b_j^n]^2
  = a_i b_i^n Σ_{j=1}^{s} a_j b_j^n (ln b_i − ln b_j) / [Σ_{j=1}^{s} a_j b_j^n]^2
  = a_i b_i^n [Σ_{j∈S*} a_j b_j^n ln(b_i/b_j) + Σ_{j∈S−S*} a_j b_j^n ln(b_i/b_j)] / [Σ_{j=1}^{s} a_j b_j^n]^2

Suppose that i ∈ S*. Then β_i > β_j for all j ∉ S*, and therefore β_i/(1 − β_i) > β_j/(1 − β_j), i.e. b_i > b_j; every term ln(b_i/b_j) in the numerator is then non-negative, so dπ_i(n)/dn > 0 for any n > 0. This implies conclusion (i). Suppose instead that i ∉ S*. Then b_i < b_j for all j ∈ S*, and as n goes to infinity the dominant terms of the numerator are those with j ∈ S*, each multiplied by the negative factor ln(b_i/b_j); the terms with b_j ≤ b_i become negligible in comparison. Hence the numerator is eventually negative, so there exists a real n_i such that dπ_i(n)/dn < 0 for any n ≥ n_i. This implies conclusion (ii). □

Figure 1. Illustrative example 12
Lemma 10
The probability vectors π_j of search strategy 1 satisfy

  Σ_{j=1}^{∞} ||π_j − π_{j+1}|| < ∞
Proof. Follows from the proof of Lemma 7.1 in Reference 5. □
Theorem 11
Let n_k be as defined in Theorem 8. Then the Markov chain {I_k} generated by Algorithm 1 is strongly ergodic. Furthermore, lim_{k→∞} P(I_k ∈ S*) = 1.
Proof. It follows from Theorem 8 that the Markov chain {I_k} is weakly ergodic. Then, using Theorem V.4.3 in Reference 8, it follows that the Markov chain {I_k} is strongly ergodic. □
Illustrative example 12
Consider a test case with 4 points as depicted in Figure 1. In this example S = {1, 2, 3, 4}, S* = {1, 4} and β_1 = 0.9, β_2 = 0.5, β_3 = 0.1, β_4 = 0.9. We use strategy 1 with n_k = trunc[1 + log(1 + k/L)/log(1/μ)]; for this example 1/log(1/μ) = 0.22 and L = 3.

Figure 2. Optimization trajectory for illustrative example 12 and counter example 13 based on 10 replicates

Figure 2 shows the average value of the objective function out of 10 replications versus the number of iterations. As k increases, search strategy 1 converges to the optimal solution.
Counter example 13
The above result is delicate, and many search strategies will not give the desired behaviour; this counter example illustrates the point. Consider the same illustrative example 12, but with acceptance probability A_ij(k) = [P(X_i ≤ X_j)]^{n_k} instead of A_ij(k) = [P(X_i < X_j)]^{n_k}, i.e., we move on ties. Figure 2 shows the average value of the objective function out of 10 replications versus the number of iterations. As shown in Figure 2, this search strategy does not converge to the optimal solution.
4. TIME-INHOMOGENEOUS SEARCH PROCESS WITH INCREASING BOUNDARIES (STRATEGY 2)
For each i, j ∈ S generate sample pairs sequentially, say (X_i1, X_j1), (X_i2, X_j2), …, and let S_m denote the sign of (X_im, X_jm), such that

  S_m = +1 if X_im < X_jm
  S_m =  0 if X_im = X_jm
  S_m = −1 if X_im > X_jm
Let

  Y_n(k) = Σ_{m=1}^{n} S_m = Σ_{m=1}^{n} 1{X_im < X_jm} − Σ_{m=1}^{n} 1{X_im > X_jm}

where 1{X_im > X_jm} equals 1 if X_im > X_jm and 0 otherwise.
Figure 3. A typical sample path of the random walk {Y_n}
At iterate k of the search process we compare the current point i with an alternative point j as follows. Conduct an experiment or a simulation to obtain samples from both i and j until the first time that Y_n falls outside given bounds ±b_k; if it falls out below, the alternative point is rejected, whereas if it falls out above, the alternative point is accepted. The process {Y_n(k), n = 1, 2, …}, Y_0 = 0, is a random walk that moves a unit step in the positive direction with probability p, moves a unit step in the negative direction with probability q, does not move with probability r = 1 − p − q, and has absorbing barriers at ±b_k (Figure 3). Here,

  p = P(X_im < X_jm) = (1 − β_i) β_j  and  q = P(X_im > X_jm) = (1 − β_j) β_i.
Let b_k denote the boundary for this sequential test at iterate k; then the acceptance probability is given by7

  A_ij(k) = 1 / [1 + ((1 − β_j) β_i / ((1 − β_i) β_j))^{b_k}]

and p_ij(k) = (1/|N(i)|) A_ij(k). As before, let {I_k, k = 0, 1, 2, …} be the state of the search process at each iterate. Then {I_k} is a discrete time-inhomogeneous Markov chain with transition matrices P_1, P_2, …, where

  P_k = (p_ij^{(k,k+1)}) = ( (1/|N(i)|) · 1 / [1 + ((1 − β_j) β_i / ((1 − β_i) β_j))^{b_k}] ),  i, j ∈ S
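The acceptance probability above is the classical gambler's-ruin absorption probability: a walk with up-probability p and down-probability q hits +b before −b with probability 1/(1 + (q/p)^b). The sketch below (our own code, not from the paper) checks the closed form against a direct simulation of the sequential test.

```python
import random

def acceptance_prob(beta_i, beta_j, b):
    """Closed form A_ij = 1 / (1 + ((1-b_j)b_i / ((1-b_i)b_j))^b)."""
    ratio = ((1 - beta_j) * beta_i) / ((1 - beta_i) * beta_j)
    return 1.0 / (1.0 + ratio ** b)

def simulate_acceptance(beta_i, beta_j, b, trials, seed=0):
    """Empirical frequency with which the walk Y_n hits +b before -b."""
    rng = random.Random(seed)
    accepted = 0
    for _ in range(trials):
        y = 0
        while abs(y) < b:
            xi = rng.random() < beta_i          # X_im ~ Bernoulli(beta_i)
            xj = rng.random() < beta_j          # X_jm ~ Bernoulli(beta_j)
            y += int(xj and not xi) - int(xi and not xj)  # S_m in {+1, 0, -1}
        accepted += (y == b)
    return accepted / trials
```

Note that when the candidate is better (p > q), the acceptance probability increases towards 1 as the boundary b grows, which is what drives the concentration on the optimal set.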
Algorithm 2
Step 0: Select a starting point I_0 ∈ S and let k = 0.
Step 1: Given I_k = i, choose a candidate J_k from N(i) with probability distribution P[J_k = j | I_k = i] = G_ij, j ∈ N(i).
Step 2: Given J_k = j, set

  I_{k+1} = J_k with probability A_ij,
  I_{k+1} = I_k with probability 1 − A_ij,

where

  A_ij = 1 / [1 + ((1 − β_j) β_i / ((1 − β_i) β_j))^{b_k}]

Step 3: Set k = k + 1, update b_k. Go to Step 1.
Our implementation of Step 2 is as follows: we draw sample pairs from both the current point i and the new point j and update the statistic Y_n; if Y_n equals +b_k, then I_{k+1} = J_k, and if Y_n equals −b_k, then I_{k+1} = I_k. Convergence analysis for {I_k, k = 1, 2, …} to the limit probability vector π^∞, where π^∞_i = lim_{k→∞} P(I_k = i) (i.e., Σ_{i∈S*} π^∞_i = 1 and π^∞_i = 0 for i ∉ S*), can be carried out following the same steps used for strategy 1.
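A minimal sketch of Algorithm 2 (our code, with Bernoulli sampling assumed for each configuration). Step 2 is implemented exactly as described above, by running the random walk Y_n to absorption at ±b_k; the slowly increasing boundary schedule derived later in this section is clipped to b_k ≥ 1 for small k, where its log log term is undefined.

```python
import math
import random

def boundary(k, L, mu):
    """b_k = trunc[1 + (log log(k/L) - log L)/log mu], clipped to at least 1."""
    if k <= L:                    # log log(k/L) undefined for k/L <= 1
        return 1
    val = 1 + (math.log(math.log(k / L)) - math.log(L)) / math.log(mu)
    return max(1, math.floor(val))

def strategy2(beta, neighbours, b_schedule, iterations, seed=0):
    """Search strategy 2: sequential comparison with absorbing barriers +-b_k."""
    rng = random.Random(seed)
    i = rng.randrange(len(beta))
    visits = [0] * len(beta)
    for k in range(iterations):
        j = rng.choice(neighbours(i))     # Step 1: candidate J_k ~ G_ij
        b, y = b_schedule(k), 0
        while abs(y) < b:                 # Step 2: run Y_n to +b or -b
            xi = rng.random() < beta[i]
            xj = rng.random() < beta[j]
            y += int(xj and not xi) - int(xi and not xj)
        if y == b:                        # absorbed above: accept J_k
            i = j
        visits[i] += 1                    # Step 3: next iterate
    return i, visits
```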
4.1. Weak ergodicity of strategy 2
For a fixed boundary b, let P^L denote the L-step transition matrix. We now find a bound on all the off-diagonal elements of P^L.
Lemma 14
For P^L based on a boundary b, all the off-diagonal elements are ≥ (e^{−μ^b}/δ)^L, where μ = max_{i,j} (β_i/(1 − β_i)) / (β_j/(1 − β_j)) and δ = max_i |N(i)|.
Proof

P{path i = i_0 → i_1 → i_2 → ⋯ → i_{l−1} → i_l = j}
  = ∏_{m=0}^{l−1} (1/|N(i_m)|) · 1 / { 1 + [(β_{i_m}/(1 − β_{i_m})) / (β_{i_{m+1}}/(1 − β_{i_{m+1}}))]^b }
  ≥ ∏_{m=0}^{l−1} (1/|N(i_m)|) exp{ −[(β_{i_m}/(1 − β_{i_m})) / (β_{i_{m+1}}/(1 − β_{i_{m+1}}))]^b }
  ≥ (e^{−μ^b} / δ)^L

because

  e^x = 1 + x + x^2/2! + x^3/3! + ⋯ ≥ 1 + x,  so that  1/(1 + x) ≥ e^{−x}  and, in particular,  1/(1 + x^a) ≥ e^{−x^a}. □
Theorem 15
Let b_k, the boundary at iteration k, satisfy

  b_k ≤ trunc[1 + (log log(k/L) − log L) / log μ]  for k = 0, 1, …

where trunc[x] denotes the greatest integer less than or equal to x. Then the Markov chain {I_k} generated by search strategy 2 using the above b_k is weakly ergodic.
Proof

A Markov chain for which Σ_{j=0}^{∞} α(P^{(n_j, n_{j+1})}) diverges for some sequence n_1 < n_2 < ⋯ < n_j < n_{j+1} < ⋯ is weakly ergodic.8 Define the sequence n_j = (j − 1)L, j = 1, 2, 3, …, and let k(b) denote the number of iterates, in multiples of L, with boundary equal to b. Then

  Σ_{j=0}^{∞} α(P^{(n_j, n_{j+1})}) = Σ_{b=1}^{∞} k(b) α(P^L) ≥ Σ_{b=1}^{∞} k(b) (|S| − 2) (e^{−μ^b}/δ)^L = Σ_{b=1}^{∞} k(b) (|S| − 2) δ^{−L} e^{−Lμ^b} = ∞  if k(b) ≥ e^{Lμ^b}

Therefore the series diverges if k(b) ≥ e^{Lμ^b}. To find the condition on the boundary size at each iterate such that the series diverges, we proceed as follows. Let M_i, i ≥ 2, denote the total number of iterates, in multiples of L, performed with boundary size ≤ i − 1, i.e. M_i = Σ_{b=1}^{i−1} k(b). Assume that M_i ≥ e^{Lμ^{i−1}}. Then

  log M_i ≥ L μ^{i−1}
  log log M_i ≥ log L + (i − 1) log μ
  i − 1 ≤ [log log M_i − log L] / log μ
  i ≤ 1 + [log log M_i − log L] / log μ

Since iterates are measured in units of L, for individual iterates we have

  b_k ≤ 1 + [log log(k/L) − log L] / log μ

In words, if the boundary size in search strategy 2 increases no faster than

  b_k = trunc[1 + (log log(k/L) − log L) / log μ]

then the search process {I_k} is weakly ergodic. □
4.2. Strong ergodicity of Strategy 2
In this section, we show that the search process generated by search strategy 2 is strongly ergodic. To prove the strong ergodicity of the Markov chain associated with search strategy 2, we follow the same steps as for strategy 1.
Lemma 16 (Monotone property of the equilibrium probabilities).
For a sequence of probability vectors π(b) of the form

  π_i(b) = |N(i)| (β_i/(1 − β_i))^b / Σ_{j∈S} |N(j)| (β_j/(1 − β_j))^b,   i = 1, 2, …, s

the following hold:

  (i) For each i ∈ S*, if b < b′ then π_i(b) ≤ π_i(b′);
  (ii) For each i ∉ S*, there exists an integer b_i such that if b_i ≤ b < b′ then π_i(b) ≥ π_i(b′).
Proof. The proof follows the same argument as in Lemma 9. □
Lemma 17
The probability vectors π_j of search strategy 2 satisfy

  Σ_{j=1}^{∞} ||π_j − π_{j+1}|| < ∞.
Proof. Follows from the proof of Lemma 7.1 in Reference 5. □
Theorem 18
Let b_k be as defined in Theorem 15. Then the Markov chain {I_k} generated by search strategy 2 is strongly ergodic. Furthermore, lim_{k→∞} P(I_k ∈ S*) = 1.
Proof. It follows from Theorem 15 that the Markov chain {I_k} is weakly ergodic. Then, using Theorem V.4.3 of Reference 8, the Markov chain {I_k} is strongly ergodic. □
Illustrative example 19
Consider the same example of Figure 1. We use strategy 2 with

  b_k = trunc[1 + (log log(k/L) − log L) / log μ]

For this example 1/log μ = 0.524 and L = 3.

Figure 4. Optimization trajectory for illustrative example 19 and counter example 20 based on 10 replicates

Figure 4 shows the average value of the objective function out of 10 replications versus the number of iterations. As k increases, search strategy 2 converges to the optimal solution.
Counter example 20 (increasing boundary, move on ties)
For the previous illustrative example, consider the probability of moving in the positive direction as

  p = P(X_i ≤ X_j) = 1 − P(X_i > X_j) = 1 − (1 − β_j) β_i,

i.e. move on ties.
Figure 4 shows the average value of the objective function out of 10 replications versus the number of iterations. As shown in Figure 4, this search strategy does not converge to the optimal solution.
5. COMPUTATIONAL RESULTS
5.1. Test case 1
In this example, we implement each of the two search strategies discussed earlier to solve a simple discrete stochastic optimization problem with 50 configurations. Consider the following optimization problem:

  max_{i∈S} β_i,  where S = {0, 1, …, 49} and

  β_i = 0.95 − 0.05 i          for i = 0, 1, …, 9
  β_i = 0.15 + (0.8/49) i      for i = 10, 11, …, 49

See Figure 5 for the graph of the function β_i. To make the search more difficult, this objective function has multiple optima: there are two global maxima, at i = 0 and i = 49, both with value 0.95. We apply search strategies 1 and 2 to solve this optimization problem using neighbourhood ±5. Figure 6 shows the results obtained by applying search strategy 1; in this case we let n_k grow at a slow rate, namely n_k = trunc[1 + 0.27 log(1 + k/49)] for all k. Figure 7 shows the results obtained by applying search strategy 2; in particular, we let b_k grow at a slow rate, namely b_k = trunc[1 + 0.6624(log log(k/49) − log 49)] for all k. The figures show the average value of the objective function, out of the 100 replications used, versus the number of iterations. From the results given in Figures 6 and 7, we conclude that search strategies 1 and 2 converge to the optimal solution.

Figure 5. Test case 1
Figure 6. Optimization trajectory for test case 1 using strategy 1 based on 100 replicates
Figure 7. Optimization trajectory for test case 1 using strategy 2 based on 100 replicates
Figure 8. Schematic of machine repair model
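The test-case-1 objective can be written down directly; the sketch below (our code) defines β_i and confirms the two global maxima.

```python
def beta_test_case_1(i):
    """Objective of test case 1: two global maxima, at i = 0 and i = 49."""
    return 0.95 - 0.05 * i if i <= 9 else 0.15 + (0.8 / 49) * i
```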
5.2. Machine repair example
Consider a manufacturing system consisting of m + y machines, as shown in Figure 8. We desire to have m machines operational at all times, and the additional y machines are spares that support the system (i.e. there are y machines in cold standby). Failed machines are repaired by one of r repairmen, each of whom works at rate μ machines repaired per unit time. If more than r machines require repair, a queue forms at the repair facility. Operating times until failure are exponentially distributed random variables, with the mean time to failure of any machine denoted by 1/λ. We choose our test system to have exponentially distributed repair times (repair rate μ), so that we can easily compare the simulation optimization results to analytical results.
The general problem to be investigated is the determination of the optimal spares and repair capacities which together satisfy a specified budget constraint at maximum system service level. Let a(t) denote the system availability, which we define as the probability that the desired number of machines m are operating at time t. More specifically, we desire to

  Maximize_{y,r}  a(t) = Σ_{n=m}^{m+y} π_n(t)
  Subject to  c_y y + c_r r ≤ b

where a(t) is the system availability at time t, π_n(t) the probability that n machines are operational at time t, c_y the cost per unit (including annual operating costs and capital investment amortization) of a spare, c_r the cost per unit (including annual operating costs and capital investment amortization) of a repair channel, and b the maximum available budget.
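For a fixed (y, r), the availability of this model is analytically tractable. The sketch below (our own code, not the paper's method) solves the steady-state birth-death equations of the cold-standby machine-repair model; note that the paper evaluates the transient availability a(t) at t = 30, which may differ somewhat from this steady-state value. State n counts failed machines, so min(m, m+y−n) machines are exposed to failure and min(r, n) are in repair.

```python
def steady_state_availability(m, y, r, lam, mu):
    """Steady-state P(at least m machines operational) for the machine-repair
    model with cold standby: birth rate lam*min(m, m+y-n), death rate
    mu*min(r, n), where n is the number of failed machines."""
    weights = [1.0]                      # unnormalised pi_n via detailed balance
    for n in range(m + y):
        birth = lam * min(m, m + y - n)          # failure rate in state n
        death = mu * min(r, n + 1)               # repair rate in state n+1
        weights.append(weights[-1] * birth / death)
    total = sum(weights)
    pi = [w / total for w in weights]
    return sum(pi[n] for n in range(y + 1))      # up iff failed machines <= y
```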
The cases considered are shown in Table I. At each configuration i, we simulate the system for 30 units of time and let X_i be an indicator random variable that takes the value 1 if the number of
Table I. Some machine repair test cases

  Case   m    λ     μ     b
  1      3    0.2   0.5   16
  2      3    0.4   1     16
  3      10   0.2   0.5   40
  4      15   0.4   1     50
  5      20   0.2   0.5   60

Table II. Values of L and N(i) for each test case of Table I

  Case    1    2    3    4    5
  L       18   18   36   40   60
  N(i)    ±1   ±1   ±3   ±4   ±5

Table III. Comparison between analytical solutions and the proposed two strategies for the machine repair problem

          Analytical solution      Strategy 1             Strategy 2
  Case    y*   r*   a*             y*   r*   a*           y*   r*   a*
  1       2    2    0.860          2    2    0.869        2    2    0.861
  2       2    2    0.820          2    2    0.830        2    2    0.823
  3       7    4    0.863          7    4    0.848        7    4    0.859
  4       7    7    0.663          7    7    0.660        7    7    0.665
  5       8    9    0.656          8    9    0.649        8    9    0.658
machines operating at time 30 is greater than or equal to the desired number of operating machines, and zero otherwise. To estimate the system availability we use the standard replications approach for terminating simulations; namely, we use 100 replicates of each configuration.
For search strategy 1, we let n_k grow at a slow rate, namely n_k = trunc[1 + 0.33 log(1 + k/L)] for all k. For search strategy 2, we let b_k grow at a slow rate, namely b_k = trunc[1 + 0.834(log log(k/L) − log L)] for all k. For each test case of Table I, the value of L is calculated based on the number of feasible configurations. Table II gives the values of L and the neighbourhood structure used for each test case of Table I.
To provide a meaningful assessment of the accuracy of the two proposed strategies, we compare our results with the analytical solutions, summarized in Table III. For both strategies, the search processes were terminated after performing a predetermined number of iterations, namely 30 000 iterations. Notice that in all test cases the proposed approaches lead to very accurate results.
6. CONCLUSION
In this paper, we have presented two versions of a new stochastic search procedure for finding the Bernoulli parameter with the maximum probability. At each iteration the acceptance of the new point depends on the iterate number; therefore, the generated sequence of solution estimates is an inhomogeneous Markov chain. The first search procedure uses an increasing number of observations at each iteration. The second uses sequential sampling with increasing boundaries. We show that if the increase occurs more slowly than a certain rate, the Markov chain is strongly ergodic and the search process converges to the optimal set with probability one.
REFERENCES
1. E. H. L. Aarts and J. Korst, Simulated Annealing and Boltzmann Machines, Wiley, Chichester, 1988.
2. A. M. Law and W. D. Kelton, Simulation Modeling and Analysis, 2nd edn, McGraw-Hill, New York, 1991.
3. S. Andradottir, 'A method for discrete stochastic optimization', Management Science, 41(12), 1946-1961 (1995).
4. S. Andradottir, 'Global search for discrete stochastic optimization', SIAM J. Optimiz., 6(2), 513-530 (1996).
5. D. Yan and H. Mukai, 'Stochastic discrete optimization', SIAM J. Control Optimiz., 30, 594-612 (1992).
6. W. B. Gong, Y. C. Ho and W. Zhai, 'Stochastic comparison algorithm for discrete optimization with estimation', Proc. 31st CDC, 795-800 (1992).
7. M. A. Ahmed, T. M. Alkhamis and D. R. Miller, 'Discrete search methods for optimizing stochastic systems', submitted for publication.
8. D. L. Isaacson and R. W. Madsen, Markov Chains: Theory and Applications, Wiley, New York, 1976.
9. M. Iosifescu, Finite Markov Processes and their Applications, Wiley, New York, 1980.