
Nonparametric Estimation of Quadratic Functionals

In Gaussian White Noise

by

Jianqing Fan
Department of Statistics
University of California
Berkeley, CA 94720

Technical Report No. 166
July 1988

Research supported by NSF Grant No. DMS84-51753

Department of Statistics
University of California
Berkeley, California

Nonparametric Estimation of Quadratic Functionals in Gaussian White Noise

Jianqing Fan

Department of Statistics

University of California, Berkeley

Berkeley, CA 94720

ABSTRACT

Suppose that we observe the process Y(t) from

Y(t) = ∫₀ᵗ f(u) du + σ W(t),  t ∈ [0, 1],

where W(t) is a standard Wiener process on [0, 1]. The optimal rates of convergence (as σ → 0) for estimating quadratic functionals under hyperrectangle-type constraints on the unknown function f(t) are addressed. Specifically, the optimal rate of estimating ∫₀¹ [f^(k)(x)]² dx under the hyperrectangle constraint Σ = {f: |x_j(f)| ≤ j^{−p}} is σ² when p ≥ 2k + 0.75, and σ^{4 − 4(4k+1)/(4p−1)} when k + 0.5 < p < 2k + 0.75, where x_j(f) is the jth Fourier-Bessel coefficient of the unknown function f. The lower bounds are found by applying the hardest 1-dimensional approach as well as a testing method. We use the lower bounds to examine how far the lower bounds are from the upper bounds, not only in rates but also in constant factors, which gives us an idea how efficient the best "unbiased" truncation estimator is in comparison with the best possible estimator. Indeed, with such a comparison, we find the asymptotic minimax estimator for estimating the functional ∫₀¹ f²(x) dx when p ≥ 0.75. A more general testing bound is also discussed in the current setting.

Key Words and Phrases: Quadratic functionals, lower bounds, rates of convergence, minimax risk, Gaussian white noise, estimation of a bounded squared mean, testing of hypotheses, hardest 1-dimensional subproblem, Bayesian approach.

1. Introduction

Suppose the information available about an unknown function f(t) is the observation from

Y(t) = ∫₀ᵗ f(u) du + σ W(t),  t ∈ [0, 1]  (1.1)

with the constraint f(t) ∈ Σ, a subset of L²[0, 1], where W(t) is a standard Wiener process on [0, 1]. We want to estimate a functional T(f) nonparametrically. The functionals of particular interest in this paper are quadratic functionals under a hyperrectangle constraint. For example, we want to estimate

T(f) = ∫₀¹ [f^(k)(t)]² dt  (1.2)

where f^(k)(t) is the kth derivative of the unknown function f(t). A natural question is: how well can we estimate the functional? How well does the best quadratic estimator behave? Ibragimov et al. (1986) made a start on questions of this kind.

Such a model is apparently the most useful model for studying the properties of nonparametric estimates, as pointed out by Ibragimov et al. (1986), since the statistical nature of the problem is not complicated. Donoho and Liu (1988) connect such a model with nonparametric density estimation by the heuristic argument that the information available from an empirical process is almost the same as in model (1.1), because the empirical process tends weakly to a Brownian bridge. Indeed, the result of Efroimovich and Pinsker (1982) -- that under an ellipsoid type of constraint, the best linear density estimator (found explicitly) is an efficient estimator among all possible estimators -- is motivated by Pinsker's (1980) study of model (1.1). Our results, together with recent findings of Bickel and Ritov (1988), also support such a heuristic.

The discretized version of model (1.1) is as follows: fix an orthonormal basis {φ_j(t)} of L²[0, 1], and expand dY(t), f(t), dW(t) on this fixed basis. Then the observation from (1.1) is equivalent to observing the coordinates

y_i = x_i + σ z_i  (i = 1, 2, ...)  (1.3)

where y_i, x_i, z_i are the ith coordinates of dY(t), f(t), dW(t), defined by

y_i = ∫₀¹ φ_i(t) dY(t),  x_i = ∫₀¹ φ_i(t) f(t) dt,  z_i = ∫₀¹ φ_i(t) dW(t),  (1.4)

and the functional T(f) of interest is equivalent to a functional T(x), while the constraint Σ becomes a subset of R^∞.

The quadratic functionals under consideration are

Q(x) = Σ_i λ_i x_i²  (1.5)

with a hyperrectangle type of constraint

Σ = {x ∈ R^∞: |x_i| ≤ A_i}.  (1.6)

To make the problem meaningful (Q(x) < ∞ for x ∈ Σ), assume additionally that

Σ_i λ_i A_i² < ∞.  (1.7)

In particular, when λ_j = j^{2k} and A_j = j^{−p} (p > k + 0.5), the Q(x) defined by (1.5) is the same as the functional defined in (1.2), with the smoothness constraint on f(t).
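The discretized setting (1.3)-(1.7) is easy to simulate. The sketch below draws a point of the hyperrectangle, generates the noisy coordinates, and checks that the functional stays below Σ_j λ_j A_j², as (1.7) requires. The values of σ, p, q and the finite truncation J (standing in for R^∞) are illustrative choices, not taken from the paper:

```python
import numpy as np

# a draw from the sequence model (1.3) with lambda_j = j^q and A_j = j^{-p}
rng = np.random.default_rng(0)
sigma, p, q, J = 0.1, 1.0, 0.0, 2000   # illustrative values; J truncates R^infinity

j = np.arange(1, J + 1)
A = j ** (-p)                          # hyperrectangle half-widths, as in (1.6)
x = rng.uniform(-1.0, 1.0, J) * A      # some point of the constraint set
y = x + sigma * rng.standard_normal(J) # noisy coordinates (1.3)

Q = np.sum(j ** q * x ** 2)            # the quadratic functional (1.5)
Q_max = np.sum(j ** (q - 2 * p))       # finite by (1.7), since 2p - q > 1
print(Q <= Q_max)
```

Any estimator of Q(x) in the sections that follow operates on the vector y alone.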

Finding the optimal rate of convergence mathematically involves, on the one hand, finding a lower bound, which says that no estimator can do better than the lower bound, and, on the other hand, finding a reasonable estimator that achieves the rate of the lower bound. For the current setting, it is not hard to find an intuitive estimator which is good enough to allow us to obtain the optimal rate of convergence.

Finding a lower bound usually involves more mathematical technique. A usual way to find a minimax lower bound is by assigning an appropriate prior. However, we will show that no intuitive Bayes method works in the current setting. For the setting of nonparametric density estimation, the mathematical idea behind the constructions of Farrell (1972) and Stone (1980) is that where there is no perfect test, there is no good estimate, even though they did not state this explicitly. The generalization of such an idea is illustrated in Donoho and Liu (1987b,c, 1988).

When T is a linear functional, Donoho and Liu (1987a) show that when the constraint is convex, balanced and symmetric, then under quadratic loss the maximum risk of the best linear estimator is no worse than 1.34 times that of the best estimator, by using the hardest 1-dimensional trick going back to Stein (1956). The optimal rate of convergence is equivalent to the modulus function defined by

b_T(ε) = sup{ |T(x₁) − T(x₂)| : ‖x₁ − x₂‖ ≤ ε, x₁, x₂ ∈ Σ }.  (1.8)

We will see that no matter what kind of functional we want to estimate and what kind of constraint we have, b_T(σ) is always available as a lower bound. However, the attainability of the lower bound b_T(σ) fails already even for quadratic functionals.

Looking inside the modulus function defined above, it by no means says that once we cannot test between x₁ and x₂ from the observations, the change of the functional T(x) is the least mistake we will make in estimating T(x). For the case where the modulus bound does not work, one may argue that the pair x₁ ∈ Σ and x₂ ∈ Σ may not be one of the most difficult pairs of subsets to be tested among all subsets having the same change of the functional, based on our observation (1.3). Thus, a more economical and automatic way to formulate a pair of subsets (usually composite) to be tested is to require the change of the functional between the two sets to be at least Δ, and then see how large Δ can be while a good but imperfect test separating the two subsets still exists. The idea of this kind is due to Donoho and Liu (1987c).

Contents of the paper: We begin by finding the best "unbiased" truncation quadratic estimator and computing its maximum MSE. Then we apply the hardest 1-dimensional method to give a lower bound, which shows that when p ≥ q + 0.75 the best "unbiased" truncation quadratic estimator is quite efficient in the asymptotic minimax sense. When (q + 1)/2 < p < q + 0.75, the hardest 1-dimensional approach does not give sharp lower bounds. To handle this case, we develop a new lower bound, based on the Bayes risk in testing between two highly composite priors, which shows that the best "unbiased" truncation quadratic estimator gives the optimal rate in this case. However, the lower bound and the upper bound differ considerably in the constant factor. Motivated by the testing method, we suggest how to assign an appropriate prior to give a (possibly) bigger lower bound in the constant factor. Finally, we generalize the idea of the testing method to a more general setting.

2. Quadratic estimators

Let us start with the model (1.3). Suppose we observe

y = x + σ z  (2.1)

with the hyperrectangle type of constraint (1.6). An intuitive class of quadratic estimators for estimating

Q(x) = Σ_i λ_i x_i²  (2.2)

(where λ_i ≥ 0) is the class of estimators defined by

q_B(y) = y′By + c,  (2.3)

where B is a symmetric matrix and c is a constant. Simple algebra shows that the risk of q_B(y) under quadratic loss is

R(B, x) ≡ E(q_B(y) − Q(x))²  (2.4)
        = (x′Bx + σ² tr B + c − Q(x))² + 2σ⁴ tr B² + 4σ² x′B²x.  (2.5)

The following proposition tells us that the class of quadratic estimators with diagonal matrix B forms a complete class among all estimators defined by (2.3).

Proposition 2.1: Let D_B be the diagonal matrix whose diagonal elements are those of B. Then for each symmetric B,

max_{x ∈ Σ} R(B, x) ≥ max_{x ∈ Σ} R(D_B, x),

where Σ is defined by (1.6).

Thus only diagonal matrices B need to be considered. For a diagonal matrix

B = diag(b₁, b₂, ...)  (2.6)

the estimator (2.3) has risk

R(B, x) = (Σ_i b_i x_i² + σ² Σ_i b_i + c − Σ_i λ_i x_i²)² + Σ_i b_i² (2σ⁴ + 4σ² x_i²).  (2.7)

A natural question is when the estimator (2.3) with B defined by (2.6) converges almost surely. The following proposition may be found in standard probability books.

Proposition 2.2: Under model (2.1), q_B(y) converges almost surely for each x ∈ Σ iff

Σ_i |b_i| (A_i² + σ²) < ∞.  (2.8)

Even restricted to diagonal matrices B, it is hard to find the best quadratic estimator. For an infinite-dimensional estimation problem, the bias is usually one of the major contributions to the risk. Thus we would prefer to use the unique unbiased quadratic estimator

Σ_i λ_i (y_i² − σ²),

but it may not converge in L², and even if it does converge, it may contribute too much variance. Because of the convergence condition (1.7), we know that beyond a certain dimension m (which will go to infinity as σ → 0), it is not worth estimating the x_i. Thus the "unbiased" truncation quadratic estimator

q̂_UT(y) = Σ_{i=1}^{m} λ_i (y_i² − σ²)  (2.9)

is a good candidate for estimating Q(x), where m is chosen to minimize its MSE. For the estimator q̂_UT(y) just defined, the maximum MSE is

max_{x ∈ Σ} R(q̂_UT, x) = (Σ_{i=m+1}^{∞} λ_i A_i²)² + Σ_{i=1}^{m} λ_i² (2σ⁴ + 4σ² A_i²).  (2.10)

When λ_j = j^q and A_j = j^{−p} (p > (q + 1)/2), (2.10) simplifies considerably. Let us study its asymptotic behavior as σ → 0. By (2.10), the estimator

q̂_UT(y) = Σ_{j=1}^{m} j^q (y_j² − σ²)  (2.11)

has maximum risk

R(m) = 2σ⁴ m^{2q+1}/(2q + 1) (1 + o(1)) + 4σ² Σ_{j=1}^{m} j^{2q−2p} + [m^{−(2p−q−1)}/(2p − q − 1)]² (1 + o(1)).

Case I: (q + 1)/2 < p ≤ q + 0.75.

Choose m = [c σ^{−4/(4p−1)}], where the constant c is specified below. Let

g(c) = 2c^{2q+1}/(2q + 1) + c^{−2(2p−q−1)}/(2p − q − 1)².  (2.12)

Then the minimizer of g(c) is c₀ = (2p − q − 1)^{−1/(4p−1)}, and the maximum risk of (2.11) simplifies to

g(c) σ^{4 − 4(2q+1)/(4p−1)} (1 + o(1)),  (2.13)

so the optimal m is

m₀ = [(2p − q − 1)^{−1/(4p−1)} σ^{−4/(4p−1)}].  (2.14)

Thus the risk of the optimal "unbiased" truncation estimator is

g(c₀) σ^{4 − 4(2q+1)/(4p−1)} (1 + o(1)).  (2.15)

Case II: p > q + 0.75.

The optimal m₀ = [σ^{−4/(4p−1)}], with risk

4σ² Σ_{j=1}^{∞} j^{2q−2p} (1 + o(1)).  (2.16)

Let us summarize the results obtained above.

Theorem 1: For λ_j = j^q and A_j = j^{−p} (p > (q + 1)/2), the best "unbiased" truncation estimator is given by (2.11): when (q + 1)/2 < p ≤ q + 0.75, the optimal m₀ is defined by (2.14) and the maximum risk of the optimal estimator is given by (2.15); when p > q + 0.75, the optimal m₀ = [σ^{−4/(4p−1)}] and the maximum risk is given by (2.16).

Note that when (q + 1)/2 < p < q + 0.75 the rate given above is bigger than σ². In fact, in the next two sections we will show that the "unbiased" truncation estimator is quite efficient, in the sense that its risk is quite close to the lower bounds, and that the rates given above are optimal. Note also that when q = 0, we will actually prove that the best "unbiased" truncation estimator is asymptotically efficient in the minimax sense.
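As a numerical sanity check on Theorem 1, one can minimize the worst-case MSE (2.10) over the cutoff m directly and compare the minimizer with the asymptotic formula (2.14). The values of σ, p, q and the working range J below are illustrative choices of mine, not from the paper:

```python
# lambda_j = j^q, A_j = j^{-p}; scan the worst-case MSE (2.10) over the cutoff m
sigma, p, q = 0.05, 0.6, 0.0   # an illustrative case with (q+1)/2 < p < q + 0.75
J = 300000                     # working range for the coordinate index

# tail[m] = sum_{j>m} j^{q-2p} (worst-case absolute bias), with an integral
# correction for the part of the sum beyond J
tail = [0.0] * (J + 1)
tail[J] = J ** (q - 2 * p + 1) / (2 * p - q - 1)
for j in range(J, 0, -1):
    tail[j - 1] = tail[j] + j ** (q - 2 * p)

# var[m] = sum_{j<=m} j^{2q} (2 sigma^4 + 4 sigma^2 j^{-2p})
var = [0.0] * (J + 1)
for j in range(1, J + 1):
    var[j] = var[j - 1] + j ** (2 * q) * (2 * sigma ** 4 + 4 * sigma ** 2 * j ** (-2 * p))

mse = [tail[m] ** 2 + var[m] for m in range(J + 1)]
best_m = min(range(1, J + 1), key=mse.__getitem__)

m0 = (2 * p - q - 1) ** (-1 / (4 * p - 1)) * sigma ** (-4 / (4 * p - 1))  # formula (2.14)
print(best_m, round(m0))
```

For this choice of σ the exact minimizer and the asymptotic m₀ agree closely, as the balancing argument behind (2.12) predicts.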

In the following examples, we always assume that the constraint is Σ = {x: |x_j| ≤ j^{−p}}.

Example 1: Suppose we want to estimate T(f) = ∫₀¹ f²(t) dt from model (1.1). Let {φ_j(t)} be a fixed orthonormal basis. Then T(f) = Σ_{j=1}^{∞} x_j². Thus the optimal "unbiased" estimator is Σ_{j=1}^{m₀} (y_j² − σ²), where

m₀ = [σ^{−4/(4p−1)}]  if p > 0.75,
m₀ = [(2p − 1)^{−1/(4p−1)} σ^{−4/(4p−1)}]  if 0.5 < p ≤ 0.75.

Moreover, when p > 0.75 the estimator is an asymptotic minimax estimator. For 0.5 < p ≤ 0.75, the optimal rates are achieved.

Example 2: Let the orthonormal basis be {φ_j(t)} = {1, √2 cos 2πjt, √2 sin 2πjt}. We want to estimate

T(f) = ∫₀¹ [f^(k)(t)]² dt = (2π)^{2k} Σ_{j=2}^{∞} [j/2]^{2k} x_j²,

where [j/2] denotes the integer part. The optimal "unbiased" estimator is (2π)^{2k} Σ_{j=2}^{m₀} [j/2]^{2k} (y_j² − σ²), where

m₀ = [σ^{−4/(4p−1)}]  if p > 2k + 0.75,
m₀ = [(2p − 2k − 1)^{−1/(4p−1)} σ^{−4/(4p−1)}]  if k + 0.5 < p ≤ 2k + 0.75.

Moreover, the estimators achieve the optimal rates.

Example 3 (Inverse problem): Suppose we are interested in recovering the indirectly observed function f(t) from data of the form (Donoho & MacGibbon (1987))

Y(u) = ∫₀ᵘ ( ∫₀¹ K(t, s) f(t) dt ) ds + σ W(u),  u ∈ [0, 1],

where again W is a Wiener process and K is known. Let K: L²[0, 1] → L²[0, 1] have a singular system decomposition (Bertero et al. (1982)), i.e. a representation

K f = Σ_{i=1}^{∞} λ_i (f, φ_i) ψ_i,

where the λ_i are singular values and {φ_i} and {ψ_i} are orthonormal sets in L²[0, 1]. Then the observation is equivalent to

y_i = λ_i θ_i + σ ε_i,

where the ε_i are i.i.d. N(0, 1), θ_i = (f, φ_i) is a Fourier-Bessel coefficient of f, and y_i is a Fourier-Bessel coefficient of the observed data. Now suppose we want to estimate

∫₀¹ f²(t) dt = Σ_{i=1}^{∞} θ_i² = Σ_{i=1}^{∞} λ_i^{−2} x_i²,

where x_i = λ_i θ_i. If the nonparametric constraint on θ is a hyperrectangle, then the constraint on x is also a hyperrectangle in R^∞. Applying Theorem 1, we can get an "optimal" estimator which achieves the optimal rate. Moreover, we will know roughly how efficient the estimator is if we apply Tables 3.1 and 4.1.
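The reduction in Example 3 is mechanical. The sketch below, with hypothetical singular values λ_i = i^{−1/2} and a hypothetical hyperrectangle |θ_i| ≤ i^{−1} (neither taken from the paper), shows how the hyperrectangle on θ induces one on x = λθ in the sequence model:

```python
import numpy as np

# sequence-space form of the inverse problem: y_i = lambda_i * theta_i + sigma * eps_i
rng = np.random.default_rng(1)
sigma, J = 0.05, 500
i = np.arange(1, J + 1)

lam = i ** -0.5                                # hypothetical singular values of K
theta = rng.uniform(-1.0, 1.0, J) * i ** -1.0  # hyperrectangle on theta: |theta_i| <= i^{-1}

x = lam * theta                                # coordinates x_i = lambda_i * theta_i
y = x + sigma * rng.standard_normal(J)         # observations in the model (2.1)

A = lam * i ** -1.0                            # inherited hyperrectangle for x
print(bool(np.all(np.abs(x) <= A)))
```

Once x lives in a hyperrectangle, Theorem 1 applies to y exactly as in the direct problem.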

3. Hardest 1-dimensional lower bound

The hardest 1-dimensional trick was suggested by Stein (1956). The idea is to use the difficulty of the hardest 1-dimensional subproblem as a lower bound on the difficulty of the nonparametric problem. Thus, for estimation of quadratic functionals (infinite-dimensional), we will end up with the difficulty of estimating θ² from an observation Y ~ N(θ, σ²) (1-dimensional).

Let Y be a random variable distributed as N(θ, σ²), and suppose it is known that |θ| ≤ τ. Let the minimax risk of estimating θ² be

ρ(τ, σ²) = inf_δ sup_{|θ| ≤ τ} E_θ(δ(Y) − θ²)².  (3.1)

By definition, it is easy to show that

ρ(τ, σ²) = σ⁴ ρ(τ/σ, 1).  (3.2)

For estimation of a bounded mean, the quantity

inf_δ sup_{|θ| ≤ τ} E_θ(δ(Y) − θ)²  (3.3)

has been studied by Casella and Strawderman (1981) and Bickel (1982). However, ρ(τ, σ²) behaves much differently from the minimax risk (3.3) of estimating the bounded mean. For later use, we give the following asymptotic result on ρ(a, 1).

Theorem 2: As a → ∞,

ρ(a, 1) = 4a² (1 + o(1)).  (3.4)

Moreover, the least favorable prior is the limit (as n → ∞) of the prior with density

g_n(θ, a) = (2n + 1) θ^{2n} / (2 a^{2n+1}) · 1(|θ| ≤ a).  (3.5)
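Theorem 2 can be checked numerically for moderate a: compute the Bayes rule E(θ² | y) under the prior g_n by quadrature and compare its Bayes risk with the leading term 4a²(2n + 1)/(2n + 3) appearing in the proof. The grid sizes and the values a = 12, n = 2 are ad hoc choices of mine, and since the theorem is asymptotic the agreement at finite a is only rough:

```python
import numpy as np

# numerical Bayes risk of E(theta^2 | y) under the prior g_n(theta, a) of (3.5)
a, n = 12.0, 2

theta = np.linspace(-a, a, 1201)
dth = theta[1] - theta[0]
prior = theta ** (2 * n)
prior /= prior.sum() * dth                    # density of g_n on [-a, a]

y = np.linspace(-a - 6.0, a + 6.0, 1801)
dy = y[1] - y[0]
lik = np.exp(-0.5 * (y[None, :] - theta[:, None]) ** 2) / np.sqrt(2 * np.pi)

post = prior[:, None] * lik
delta = (post * (theta ** 2)[:, None]).sum(axis=0) / post.sum(axis=0)  # Bayes rule

risk_th = (((delta[None, :] - (theta ** 2)[:, None]) ** 2) * lik).sum(axis=1) * dy
bayes_risk = (prior * risk_th).sum() * dth

target = 4 * a ** 2 * (2 * n + 1) / (2 * n + 3)
print(round(bayes_risk, 1), round(target, 1))
```

The computed Bayes risk also stays below 4a² + 3, the maximum risk of the simple estimator Y² used at the end of the proof of Theorem 2.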

Now we are ready to find a lower bound by using the hardest 1-dimensional trick. Fix a point x ∈ Σ. Let x_t = t x, t ∈ [−1, 1], be a line segment passing through the origin. Then the problem of estimating Q(x_t) on {x_t} is just that of estimating

θ² = Q(tx) = t² Q(x)  (3.6)

from the observation

y = t x + σ z,

where z is normal with mean 0 and identity covariance, and t is the unknown to be estimated. As (y, x) ~ N(t‖x‖², σ²‖x‖²) is a sufficient statistic for t, the observation available for estimating t, or θ, is equivalent to that from (y, x)/‖x‖ ~ N(t‖x‖, σ²). Thus the minimax risk of estimating θ² in this subproblem is

[Q(x)/‖x‖²]² ρ(‖x‖, σ²)  (3.7)

according to our notation. By (3.2),

[Q(x)/‖x‖²]² ρ(‖x‖, σ²) = [Q(x)/‖x‖²]² σ⁴ ρ(‖x‖/σ, 1).  (3.8)

But the minimax risk of estimating Q(x) is at least as large as that of estimating θ². Consequently,

inf_δ sup_{x ∈ Σ} E_x(δ(y) − Q(x))²
  ≥ inf_δ sup_{|t| ≤ 1} E_{tx}(δ(y) − t² Q(x))²
  ≥ [Q(x)/‖x‖²]² σ⁴ ρ(‖x‖/σ, 1).

In conclusion:

Theorem 3: The minimax risk of estimating Q(x) from observation (2.1) is at least

sup_{x ∈ Σ} [Q(x)/‖x‖²]² σ⁴ ρ(‖x‖/σ, 1),  (3.9)

and as σ → 0, the minimax lower bound is at least

sup_{x ∈ Σ, ‖x‖ > 0} 4 [Q(x)/‖x‖]² σ² (1 + o(1)).  (3.10)

Comparing the lower bound with the upper bound given by (2.16), when p ≥ q + 0.75 the rate σ² is the optimal one. When q = 0, the best truncation estimator is the asymptotic minimax estimator. How close is the upper bound to the lower bound? By (2.16) and (3.10), as σ → 0,

Lower Bound / Upper Bound → C_{2p−q}² / (C_{2p} C_{2p−2q}),  (3.11)

where C_r = Σ_{j=1}^{∞} j^{−r} can be calculated numerically. The following table shows the result:

Table 3.1: Comparison of the lower bound and the upper bound (p = q + 1 + 0.5 i)

        i=1    i=2    i=3    i=4    i=5    i=6    i=7    i=8
q=0.0  1.000  1.000  1.000  1.000  1.000  1.000  1.000  1.000
q=0.5  0.976  0.991  0.996  0.998  0.999  1.000  1.000  1.000
q=1.0  0.940  0.977  0.990  0.995  0.998  0.999  0.999  1.000
q=1.5  0.910  0.963  0.984  0.992  0.996  0.998  0.999  0.999
q=2.0  0.887  0.952  0.979  0.990  0.995  0.998  0.999  0.999
q=2.5  0.871  0.944  0.975  0.988  0.995  0.997  0.998  0.999
q=3.0  0.859  0.938  0.972  0.987  0.994  0.997  0.998  0.999
q=3.5  0.851  0.934  0.970  0.986  0.993  0.996  0.998  0.999
q=4.0  0.845  0.931  0.968  0.985  0.993  0.996  0.998  0.999
q=4.5  0.842  0.929  0.967  0.984  0.992  0.996  0.998  0.999
q=5.0  0.839  0.928  0.966  0.984  0.992  0.996  0.998  0.999
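The entries of Table 3.1 can be reproduced from (3.11) by computing the sums C_r numerically (a partial sum plus an integral tail correction); for example, the entry 0.940 at q = 1, i = 1 corresponds to p = 2.5:

```python
def C(r, N=200000):
    # C_r = sum_{j>=1} j^{-r}: partial sum plus an integral tail correction
    s = sum(j ** -r for j in range(1, N + 1))
    return s + N ** (1.0 - r) / (r - 1.0)

def ratio(p, q):
    # (3.11): asymptotic (lower bound)/(upper bound), valid for p >= q + 0.75
    return C(2 * p - q) ** 2 / (C(2 * p) * C(2 * p - 2 * q))

r_q1_i1 = ratio(2.5, 1.0)   # q = 1, i = 1, i.e. p = q + 1 + 0.5
r_q0 = ratio(2.0, 0.0)      # any column of the q = 0 row
print(round(r_q1_i1, 3), round(r_q0, 3))
```

At q = 0 the ratio is identically 1, which matches the first row of the table and the asymptotic minimax claim for q = 0.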

4. Testing at the vertices of a hypercube

Note that when (q + 1)/2 ≤ p < q + 0.75, the upper bound (2.15) is not at the rate σ², and consequently the hardest 1-dimensional lower bound (3.10) is much smaller. We may wonder whether 1-dimensional bounds cannot capture the difficulty of estimating quadratic functionals, or whether instead the best "unbiased" truncation estimator is simply not a good estimator in this case. Indeed, we will construct a bigger lower bound, which says that the best "unbiased" truncation estimator gives the optimal rate.

The idea of the construction is as follows: take a largest hypercube of dimension n (which depends on σ) inside the hyperrectangle, assign probability 2^{−n} to each vertex, and then test these vertices against the origin. When no perfect test exists (by choosing a critical dimension n, depending on σ), the difference between the functional values on the two hypotheses supplies a lower bound. The approach we use is related to one of Ibragimov et al. (1986), who, however, use a hypersphere rather than a hypercube.

To carry out the idea, let us formulate the testing problem

H₀: x_i = 0 (all i)  vs  H₁: x_i = ± t_n (i = 1, ..., n), x_i = 0 (i > n),  (4.1)

i.e. we want to test whether the observations y_i come from H₀ or H₁, specified as follows:

H₀: y_i ~ N(0, σ²)  (i = 1, ..., n),  (4.2)
H₁: y_i ~ ½ [φ(y, t_n, σ) + φ(y, −t_n, σ)]  (i = 1, ..., n),

where φ(y, t, σ) is the density function of N(t, σ²). The result of the testing can be summarized as follows:

Lemma 4.1: The sum of the type I and type II errors of the best testing procedure is

2 Φ(− n^{1/2} (t_n/σ)² / (2√2)) (1 + o(1))  (4.3)

if n^{1/2} (t_n/σ)² → c, where Φ(·) is the distribution function of a standard normal.
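Lemma 4.1 can be checked by Monte Carlo: simulate the likelihood-ratio test of H₀ against the vertex mixture H₁ and compare the sum of its errors with the limit in (4.3). The per-coordinate density ratio is exp(−t_n²/(2σ²)) cosh(t_n y/σ²); the sample sizes, seed, and the value c = 2 below are arbitrary choices of mine:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n, c, sigma, reps = 4096, 2.0, 1.0, 1500
t = sigma * math.sqrt(c / math.sqrt(n))       # so that sqrt(n) * (t/sigma)^2 = c

def log_LR(y):
    # log of the per-coordinate ratio exp(-t^2/(2 sigma^2)) cosh(t y / sigma^2), summed
    u = t * y / sigma ** 2
    return float(np.sum(np.logaddexp(u, -u) - math.log(2.0) - t ** 2 / (2 * sigma ** 2)))

type1 = type2 = 0
for _ in range(reps):
    if log_LR(sigma * rng.standard_normal(n)) > 0:         # H0 sample, rejected
        type1 += 1
    x = t * rng.choice([-1.0, 1.0], size=n)                # a random vertex of the cube
    if log_LR(x + sigma * rng.standard_normal(n)) <= 0:    # H1 sample, accepted
        type2 += 1

total = (type1 + type2) / reps
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
predicted = 2.0 * Phi(-c / (2.0 * math.sqrt(2.0)))         # the limit in (4.3)
print(round(total, 3), round(predicted, 3))
```

With n in the thousands the empirical error sum sits close to the predicted limit, up to Monte Carlo noise and the o(1) term.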

Now we are ready to derive a lower bound. Let

r_n = |Q(H₁) − Q(H₀)| / 2 = t_n² Σ_{j=1}^{n} λ_j / 2  (4.4)

be half of the change of the functional. For any estimator T(y), the minimax risk under quadratic loss is

sup_x E_x(T(y) − Q(x))²
  ≥ ½ [E₀(T(y) − Q(x))² + E₁(T(y) − Q(x))²]
  ≥ ½ r_n² [P₀(|T(y)| ≥ r_n) + P₁(|T(y) − Q(x)| ≥ r_n)]  (4.5)
  ≥ ½ r_n² [P₀(|T(y)| ≥ r_n) + P₁(|T(y)| ≤ r_n)]
  ≥ r_n² Φ(− n^{1/2} (t_n/σ)² / (2√2)) (1 + o(1)),

where E₀ and E₁ denote expectations under H₀ and H₁, respectively.

The last inequality holds because we can view the bracketed term as the sum of the type I and type II errors of a test, which is no smaller than that of the best testing procedure given by Lemma 4.1. Thus, if we take t_n such that the last factor is bounded away from 0, then r_n² gives the order of the lower bound.

Theorem 4: Suppose that A_j is decreasing in j for j large. Let n_σ be the largest solution n of

n^{1/2} (A_n/σ)² ≤ c.

Then, for any estimator T(y), the maximum risk of estimating Q(x) is no smaller than

sup_{c>0} Φ(−c/(2√2)) (Σ_{j=1}^{n_σ} λ_j)² A_{n_σ}⁴ / 4 · (1 + o(1))  (as σ → 0).  (4.6)

Moreover,

sup_x P_x( |T(y) − Q(x)| ≥ (Σ_{j=1}^{n_σ} λ_j) A_{n_σ}² / 2 ) ≥ Φ(−c/(2√2)) + o(1),

and for any symmetric increasing loss function l(·),

sup_x E_x l( 2(T(y) − Q(x)) / [(Σ_{j=1}^{n_σ} λ_j) A_{n_σ}²] ) ≥ l(1) [Φ(−c/(2√2)) + o(1)].

When A_j = j^{−p} and λ_j = j^q, we can calculate the rate in Theorem 4 explicitly.

Corollary 4.1: When A_j = j^{−p} and λ_j = j^q, for any estimator the minimax risk under quadratic loss is no better than

γ_{p,q} σ^{4 − 4(2q+1)/(4p−1)} (1 + o(1)).

Moreover, for any estimator T(y),

sup_x P_x( |T(y) − Q(x)| ≥ (cσ²)^{(4p−2q−2)/(4p−1)} / (2(q + 1)) ) ≥ Φ(−c/(2√2)) + o(1),

where

γ_{p,q} = sup_{c>0} Φ(−c/(2√2)) c^{(8p−4q−4)/(4p−1)} / (4(q + 1)²).

When (q + 1)/2 ≤ p < q + 0.75, comparing this lower bound with the upper bound given in Section 2, we again see that the best "unbiased" truncation estimator gives the optimal rate. The following table shows how close the lower bound and the upper bound are.

Table 4.1: Comparison of the lower bound and the upper bound, where ratio = (lower bound by testing method) / (upper bound by q̂_UT(y))

     q=0            q=1            q=2            q=3            q=4
  p      ratio    p      ratio    p      ratio    p      ratio    p      ratio
  0.55   0.0333   1.15   0.0456   1.75   0.0485   2.35   0.0495   2.95   0.0499
  0.60   0.0652   1.30   0.0836   2.00   0.0864   2.70   0.0864   3.40   0.0858
  0.65   0.0949   1.45   0.1159   2.25   0.1170   3.05   0.1153   3.85   0.1132
  0.70   0.1218   1.60   0.1431   2.50   0.1419   3.40   0.1383   4.30   0.1345

The above table tells us that there is a large discrepancy at the level of constants between the upper and lower bounds. It appears to us that the hypersphere bound of Ibragimov et al. (1986) would not give a lower bound of the same order as we are able to get via hypercubes.

5. Bayes method

Often we use a Bayes method to find a minimax lower bound. However, as mentioned in the introduction, the intuitive Bayes method gives too small a lower bound in the case (q + 1)/2 ≤ p < q + 0.75. By the intuitive Bayes method, we mean assigning a prior uniformly on the hyperrectangle, which is equivalent to assigning a uniform prior independently on each coordinate, or, more generally, assigning priors x_j ~ π_j(θ) independently. Mathematically, assigning the prior in this way does not capture the interaction terms (of which there are of order n²) and hence gives a smaller lower bound. To see this, let ρ_π(j^{−p}, σ) be the Bayes risk of estimating θ² from Y ~ N(θ, σ²), knowing |θ| ≤ j^{−p}, with prior π. Let δ(y) = E(Σ_j j^q x_j² | y) be the Bayes solution of the problem. By the independence assumption on the prior,

δ(y) = Σ_j j^q E(x_j² | y_j).

Then the Bayes risk is

E_π E_x (δ(y) − Σ_{j=1}^{∞} j^q x_j²)² = Σ_{j=1}^{∞} j^{2q} ρ_{π_j}(j^{−p}, σ)
  ≤ Σ_{j=1}^{n} j^{2q} ρ(j^{−p}, σ) + Σ_{j=n+1}^{∞} j^{2q−4p},  (5.1)

because for j > n the Bayes risk is no larger than the maximum risk of the estimator 0, which is at most j^{−4p}. As the Bayes risk is no bigger than the minimax risk, by Theorem 2 the last quantity is no bigger than

O(σ² Σ_{j=1}^{n} j^{2q−2p}) + O(n^{2q−4p+1})
  ≤ max(O(σ²), O(σ^{(4p−2q−1)/p}))
  = o(σ^{4 − 4(2q+1)/(4p−1)})  (when (q + 1)/2 < p < q + 0.75)

by choosing n = [σ^{−1/p}].

To find a bigger lower bound, motivated by the testing method, take a largest n-dimensional hypercube in Σ, and distribute the prior π(x) uniformly over the diagonal line segments starting from the origin, with probability 2^{−n} on each diagonal of the hypercube.

Mathematically, computing the resulting Bayes risk can be reduced to finding the Bayes risk of estimating θ² based on n i.i.d. observations y_i having density

y_i ~ ½ [φ(y − θ) + φ(y + θ)]

with prior π(θ) uniformly distributed on [0, t_n], where φ(·) is the density of the normal distribution with mean 0 and variance 1; denote this Bayes risk by B(t_n, n). Then a minimax lower bound for estimating Q(x) is at least as big as the Bayes risk, i.e. no smaller than the quantity

min_δ max_{|θ| ≤ A_n} E(δ(y) − θ² Σ_{j=1}^{n} λ_j)² ≥ (Σ_{j=1}^{n} λ_j)² σ⁴ B(A_n/σ, n).

We believe that for the case A_j = j^{−p} and λ_j = j^q, by choosing a suitable dimension n = [(cσ²)^{−2/(4p−1)}], the lower bound might be improved. We are not going to pursue that here.

6. General testing lower bound

In this section, we generalize the testing bound to a more general setting, which applies to any functional to be estimated and any kind of constraint. Suppose we have the observation from (1.3) and want to estimate T(x) knowing x ∈ Σ. Then no estimator can estimate T(x) faster than the modulus bound defined by (1.8).

Theorem 5: For any estimator δ(y) from observation (1.3),

sup_x E_x(δ(y) − T(x))² ≥ sup_{ε>0} Φ(−ε/2) b_T²(εσ) / 4,

where b_T is defined by (1.8).

Remark: When the hardest 1-dimensional lower bound and the modulus bound both give the correct rate for estimating the quadratic functional (p ≥ q + 0.75), the modulus bound gives a smaller constant factor. In fact, by the Cauchy-Schwarz inequality,

b_T(ε) ≤ 2 C_{2(p−q)}^{1/2} ε  (p − q > 0.5),

and

Modulus bound / 1-dim bound ≤ sup_{ε>0} [Φ(−ε/2) ε²] · C_{2(p−q)} C_{2p} / (4 C_{2p−q}²),

where C_r = Σ_{j=1}^{∞} j^{−r}; the second factor is the reciprocal of the ratio evaluated numerically in Table 3.1. For q = 0, the last bound is no bigger than 0.167, and is sometimes even much worse, if we evaluate both bounds more carefully.
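The constant quoted for q = 0 comes from sup_{ε>0} Φ(−ε/2) ε² / 4 (the C-factor equals 1 when q = 0); a crude grid search, shown below, evaluates this supremum to about 0.166, consistent with the 0.167 quoted above:

```python
import math

# grid evaluation of sup_{eps > 0} Phi(-eps/2) * eps^2 / 4 from the Remark
Phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
best = max(Phi(-0.001 * k / 2.0) * (0.001 * k) ** 2 / 4.0 for k in range(1, 10001))
print(round(best, 3))
```

The maximizer sits near ε ≈ 2.4, well inside the grid, so the grid search is adequate here.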

Applying the testing trick of Section 4, we can give a more general result which allows us to judge whether the modulus lower bound is too small or not. To state the result, assume without loss of generality that T(0) = 0. Let l_n be the side length of the largest n-dimensional hypercube in Σ, with l_n > 0 and l_n → 0, and let N_σ be the largest integer n such that l_n ≥ σ n^{−1/4}.

Theorem 6: Under the above notation, assume that in a neighborhood of the origin |T(x)| ≥ Q(x), where Q is symmetric in each argument. Then the rate of Q(x_σ) is a lower rate, in the sense that no estimator can estimate T(x) faster than Q(x_σ) (as σ → 0), where x_σ = (σ N_σ^{−1/4}, ..., σ N_σ^{−1/4}, 0, 0, ...) with N_σ non-zero elements.

If, moreover, b_T(cσ)/Q(x_σ) → 0 for each c, then the modulus bound b_T(cσ) does not give an achievable lower rate under quadratic loss, where b_T is defined by (1.8).

As mentioned in the introduction, a 2-point testing bound may not involve the hardest pair to be tested. To give a more powerful lower bound, let Σ_{T ≥ t+Δ} = {x ∈ Σ: T(x) ≥ t + Δ} and Σ_{T ≤ t}, defined similarly, be the two subsets to be tested. Let conv(Σ_{T ≥ t+Δ}) be the convex hull of Σ_{T ≥ t+Δ} in the space of distributions, and define conv(Σ_{T ≤ t}) similarly. Le Cam (1985) shows that the minimax sum of type I and type II errors for testing

H₀: x ∈ Σ_{T ≥ t+Δ}  vs  H₁: x ∈ Σ_{T ≤ t}  (6.1)

is just the maximum testing affinity of the hardest pair in conv(Σ_{T ≥ t+Δ}) and conv(Σ_{T ≤ t}); denote it by π(conv(Σ_{T ≥ t+Δ}), conv(Σ_{T ≤ t})), i.e.

π(conv(Σ_{T ≥ t+Δ}), conv(Σ_{T ≤ t})) = inf_φ max ( E_x φ(y) + E_{x′}(1 − φ(y)) ),  (6.2)

the maximum being over x ∈ Σ_{T ≥ t+Δ} and x′ ∈ Σ_{T ≤ t}. The sum of the type I and type II errors for the most difficult pair of subsets to be tested is then

α_T(Δ, σ) = sup_t π(conv(Σ_{T ≥ t+Δ}), conv(Σ_{T ≤ t})).  (6.3)

Hence the largest change Δ of the functional over pairs that we cannot test well is

Δ_T(α, σ) = sup{Δ: α_T(Δ, σ) ≥ α}.  (6.4)

Then we have the following theorem, which generalizes all of the testing lower bounds given above.

Theorem 7: For any estimator δ(y),

inf_δ sup_x P_x( |δ(y) − T(x)| ≥ Δ_T(α, σ)/2 ) ≥ α/2,

and for any symmetric increasing loss l(·), the minimax risk lower bound is given by

inf_δ sup_x E_x l( 2(δ(y) − T(x)) / Δ_T(α, σ) ) ≥ l(1) α/2.

7. Proofs

Proof of Proposition 2.1:

For any subset S ⊂ {1, 2, ...}, let the prior μ_S be the probability measure obtained by independently assigning x_j = ±A_j with probability ½ each for j ∈ S, and assigning probability 1 to the point 0 for j ∉ S. Then, by Jensen's inequality,

max_{x ∈ Σ} R(B, x)
  ≥ max_S E_{μ_S} R(B, x)
  ≥ max_S ( (E_{μ_S}(x′Bx) + σ² tr B + c − E_{μ_S}[Q(x)])² + 2σ⁴ tr B² + 4σ² E_{μ_S}(x′B²x) ).  (7.1)

Let D_A = E_{μ_S}(xx′), which is a diagonal matrix. Simple calculation shows that

E_{μ_S}(x′Bx) = tr(B D_A) = tr(D_B D_A),
E_{μ_S}(x′B²x) = tr(B² D_A) ≥ tr(D_B² D_A),
tr B² = tr B′B ≥ tr D_B².

Thus, by (7.1) and the last three displays,

max_{x ∈ Σ} R(B, x) ≥ max_S E_{μ_S} R(D_B, x) = max_{x ∈ Σ} R(D_B, x).

The last equality holds because (2.7) is convex in x_j², and consequently attains its maximum at either x_j² = 0 or x_j² = A_j².

Proof of Proposition 2.2:

Sufficiency follows from the monotone convergence theorem:

E Σ_i |b_i| y_i² = Σ_i |b_i| (x_i² + σ²) < ∞.

Conversely, suppose that (2.3) converges a.s. for each x ∈ Σ. Then, according to Kolmogorov's three-series theorem,

Σ_i P(|b_i| y_i² ≥ 1) < ∞,  (7.2)
Σ_i E |b_i| y_i² 1(|b_i| y_i² ≤ 1) < ∞.  (7.3)

As the distribution of y_i is normal, it is easy to check that

E y_i⁴ ≤ 3 (E y_i²)².

Thus, by the Cauchy-Schwarz inequality,

E |b_i| y_i² 1(|b_i| y_i² ≥ 1)
  ≤ √3 (E |b_i| y_i²) [P(|b_i| y_i² ≥ 1)]^{1/2}
  = o(E |b_i| y_i²).  (7.4)

Hence the assertion follows from (7.3) and (7.4).

Proof of Theorem 2:

Let the prior distribution of θ be g_n(θ, a) defined by (3.5), for fixed n. Then the posterior density of θ given Y = y is

θ^{2n} exp(−(y − θ)²/2) 1(|θ| ≤ a) / ∫_{−a}^{a} θ^{2n} exp(−(y − θ)²/2) dθ.  (7.5)

Thus the Bayes solution under quadratic loss is

E(θ² | y) = ∫_{−a}^{a} θ^{2n+2} exp(−(y − θ)²/2) dθ / ∫_{−a}^{a} θ^{2n} exp(−(y − θ)²/2) dθ ≜ y² + δ_a(y).  (7.6)

Now, the risk of the Bayes estimate at θ is

E(y² + δ_a(y) − θ²)² = 4θ² + 3 + E δ_a²(ε + θ) + 4θ E ε δ_a(ε + θ) + 2E ε² δ_a(ε + θ),

where ε ~ N(0, 1). Thus the Bayes risk is

4 (2n+1)/(2n+3) a² + E δ_a²(ε + θ) + 4 E θ ε δ_a(ε + θ) + 2 E ε² δ_a(ε + θ) + 3,  (7.7)

where E represents taking the expectation over ε ~ N(0, 1) and θ ~ g_n(θ, a).

We will prove that

E δ_a²(ε + θ) = o(a²).  (7.8)

Suppose (7.8) is true. Then, by the Cauchy-Schwarz inequality,

E θ ε δ_a(ε + θ) ≤ (E θ² ε²)^{1/2} (E δ_a²(ε + θ))^{1/2} = o(a²),

and similarly E ε² δ_a(ε + θ) = o(a). Thus, by (7.7), the Bayes risk is

4 (2n+1)/(2n+3) a² (1 + o(1)) → 4a² (1 + o(1))  (as n → ∞).

Consequently,

ρ(a, 1) ≥ 4a² (1 + o(1)).

On the other hand, for the estimator Y²,

sup_{|θ| ≤ a} E_θ(Y² − θ²)² = 4a² + 3.

We conclude that

ρ(a, 1) = 4a² (1 + o(1)).

Now we have to check (7.8). By the definition (7.6) and integration by parts,

δ_a(y) = ∫_{−a}^{a} θ^{2n} (θ² − y²) exp(−(y − θ)²/2) dθ / ∫_{−a}^{a} θ^{2n} exp(−(y − θ)²/2) dθ  (7.9)
       = −δ₁(y) + δ₂(y) + (2n + 1) + 2n y δ₃(y),  (7.10)

where

δ₁(y) = a^{2n} (y + a) exp(−(y − a)²/2) / I,  (7.11)
δ₂(y) = a^{2n} (y − a) exp(−(y + a)²/2) / I,  (7.12)
δ₃(y) = ∫_{−a}^{a} θ^{2n−1} exp(−(y − θ)²/2) dθ / I,  (7.13)
I = ∫_{−a}^{a} θ^{2n} exp(−(y − θ)²/2) dθ.  (7.14)

Thus, to prove (7.8), we only need to prove that

E δ₁²(y) = o(a²),  E δ₂²(y) = o(a²),  E y² δ₃²(y) = o(a²),

which will be done in the following three lemmas, where y = θ + ε.

Leinmna 2.1: Under the above notationis, E 62(y) = o (a 2)

Proof: When y . a - 1, and a is large,

a

I .(a - 1)2n exp(- (y _0)2 / 2) dO0a - I

2(a - 1)2' exp(- (y -a)2/ 2)

2a2nexcp( (y - a)2 / 2)12n

IlVus, on the set I y: y < a -1 1'

82(y)<4(a +y)2< 16a2

Consequently, by Lebesque's dominated conivergence theorem,

E 82l(y)lt a-)= o(a 2

O tihe set ( a -i y .a ),wehlave

a

I f 02"iexp(- Cy - 0)2 / 2) dOa-I

> e .5(a _1)2n

2 0.5e -.5 a2n exp(- (y - a)2 / 2)

Thus, by (7.16)

(7.15)

(7.16)

- 25 -

E 8 )(a - I.v .a)

< i6e a2 p (a - 1.y . a ) = o(a2)

When y > a,

I . (a - 1)2n

a

f exp(- (y - 0)2 / 2) dOa - I

2 (a - ()2-[exp(-(y a )2 / 2) - exp(- (y - a + 1)2 / 2)]yy-a + I

> a2" ~(1-e-04LS)exp(-(y-a)2 /2)2y- a + 1)

Hlence,

(7.18)I4(1 (y > aE)

< 4(1 e 1-1)-2 E fy - a + 1)2(y + a )2 1(y > a)

Note that

P(y > a) ≤ P(a − a^{0.25} < θ) + P(a^{0.25} < ε) = O(a^{−0.75})   (7.19)

Note also that

P(y > a + a^{0.25}) ≤ P(ε > a^{0.25})

= O(a^{−0.25}) exp(−a^{0.5}/2)   (7.20)
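The bound (7.20), as read here, is the standard Gaussian tail (Mills-ratio) bound 1 − Φ(t) ≤ φ(t)/t for t > 0, applied with t = a^{0.25}. A numerical check of the bound itself (function names ours):

```python
import math

phi = lambda t: math.exp(-t * t / 2) / math.sqrt(2 * math.pi)  # N(0,1) density
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))         # N(0,1) cdf

def mills_gap(t):
    # phi(t)/t - P(eps > t); nonnegative for t > 0 by the Mills-ratio bound.
    return phi(t) / t - (1 - Phi(t))

for t in (0.5, 1.0, 2.0, 81 ** 0.25):
    assert mills_gap(t) >= 0.0
```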

By the easily checked fact that

E|y|^m = O(a^m)

and the Cauchy-Schwarz inequality,

E δ₁²(y) 1(y > a)

≤ 4(1 − e^{−0.5})^{−2} [ E (y − a + 1)²(y + a)² 1(a + a^{0.25} ≥ y > a)

+ E (y − a + 1)²(y + a)² 1(y > a + a^{0.25}) ]

≤ O(a^{2.5}) P(y > a) + O(a⁴) [P(y > a + a^{0.25})]^{1/2}

= O(a^{1.75})   (7.21)

Consequently, by (7.15), (7.17), and (7.21), the desired assertion follows.

Lemma 2.2: Under the above notations, E δ₂²(y) = o(a²).

Proof: The proof is similar to that of Lemma 2.1.

Lemma 2.3: Under the above notations, E y²δ₃²(y) = o(a²).

Proof: When a > 2,

y²δ₃²(y) ≤ 2y² [ ( ∫_{1≤|θ|≤a} θ^{2n−1} exp(−(y − θ)²/2) dθ / I )² + ( ∫_{|θ|<1} θ^{2n−1} exp(−(y − θ)²/2) dθ / I )² ]

≤ 2[ y² + y² g²(y) ]   (7.22)

Let

g(y) = ∫_{|θ|<1} θ^{2n−1} exp(−(y − θ)²/2) dθ / ∫_{−a}^{a} θ^{2n} exp(−(y − θ)²/2) dθ

Obviously, by the continuity of g, when |y| ≤ 2, g(y) ≤ c, a finite constant. And when y > 2,

∫_{|θ|<1} θ^{2n−1} exp(−(y − θ)²/2) dθ ≤ 2 exp(−(y − 1)²/2)


while

∫_{−a}^{a} θ^{2n} exp(−(y − θ)²/2) dθ ≥ ∫_{1}^{2} θ^{2n} exp(−(y − θ)²/2) dθ ≥ exp(−(y − 1)²/2)

Hence, |g(y)| ≤ 2 when y > 2. Similarly, when y < −2, |g(y)| ≤ 2. Thus, we conclude that

y²δ₃²(y) ≤ 2 max(c² + 1, 5) y²

As y²/a² is uniformly integrable, so is y²δ₃²(y)/a². Consequently,

E y²δ₃²(y)/a² → 0   (as a → ∞)

The assertion follows.

Proof of Lemma 4.1

Without loss of generality, assume that σ = 1. Then the likelihood ratio of the density under H₀ and H₁ is

Lₙ = ∏_{i=1}^{n} Lₙ,ᵢ   (7.23)

where

Lₙ,ᵢ = exp(−tₙ²/2) [exp(tₙ yᵢ) + exp(−tₙ yᵢ)] / 2

Denote lₙ,ᵢ = log Lₙ,ᵢ. Then

lₙ,ᵢ = −tₙ²/2 + log[1 + tₙ² yᵢ²/2 + tₙ⁴ yᵢ⁴/24 + Oₚ(tₙ⁶ yᵢ⁶)]

= −tₙ²/2 + tₙ² yᵢ²/2 + tₙ⁴ yᵢ⁴/24 − tₙ⁴ yᵢ⁴/8 + Oₚ(tₙ⁶)

= −tₙ²/2 + tₙ² yᵢ²/2 − tₙ⁴ yᵢ⁴/12 + Oₚ(tₙ⁶)
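The expansion above amounts to log cosh(t) = t²/2 − t⁴/12 + O(t⁶) applied at t = tₙ yᵢ; the next series term is t⁶/45, so the remainder is O(t⁶). A direct numerical check (function name ours):

```python
import math

def logcosh_remainder(t):
    # Gap between log cosh t and its fourth-order expansion t^2/2 - t^4/12.
    return abs(math.log(math.cosh(t)) - (t * t / 2 - t ** 4 / 12))

# The remainder behaves like t^6/45, comfortably below 0.2 * t^6.
for t in (0.2, 0.1, 0.05):
    assert logcosh_remainder(t) < 0.2 * t ** 6
```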

Consequently,

log Lₙ + ntₙ⁴/4 = (tₙ²/2) Σᵢ(yᵢ² − 1) − (tₙ⁴/12) Σᵢ(yᵢ⁴ − 3) + Oₚ(ntₙ⁶)


By invoking the central limit theorem for the i.i.d. case, we conclude that

(log Lₙ + ntₙ⁴/4) / (√(n/2) tₙ²) →_L N(0, 1)

under H₀. Note that

(log Lₙ − ntₙ⁴/4) / (√(n/2) tₙ²)

= Σᵢ(yᵢ² − 1 − tₙ²)/√(2n) − (tₙ²/(6√(2n))) Σᵢ(yᵢ⁴ − Eyᵢ⁴) + Oₚ(√n tₙ⁴)

Now, under H₁,

nE(|yᵢ² − 1 − tₙ²| / √n)⁴ = O(n⁻¹)

and

nE(|yᵢ⁴ − Eyᵢ⁴| / √n)⁴ = O(n⁻¹)

Hence, Lyapounov's condition holds for the triangular arrays. By the central limit theorem for triangular arrays, under H₁,

(log Lₙ − ntₙ⁴/4) / (√(n/2) tₙ²) →_L N(0, 1)

Consequently, the sum of the type I and type II errors is

P_{H₀}[Lₙ > 1] + P_{H₁}[Lₙ ≤ 1] = 2Φ(−√(n/2) tₙ²/2)(1 + o(1))

Proof of Theorem 4:

The first two results are actually proved by the argument before stating Theorem 4. The third result follows from the inequality (7.24).

Proof of Corollary 4.1:

Take n = [(tσ)^{−4/(4p−1)}]. Then, since ψ(|t|)1(|t| ≤ 1) ≥ ψ(1)1(|t| ≤ 1), the change of the functional is

c₀ (tσ)^{4(q+1)/(4p−1)} (1 + o(1))

Hence, the assertion follows from Theorem 4.

Proof of Theorem 5:

Consider the testing problem

H₀: y ~ N(x₁, σ²)  vs.  H₁: y ~ N(x₂, σ²)   (7.25)

Then, the likelihood ratio is

L = exp( (2⟨y, x₂ − x₁⟩ − ‖x₂‖² + ‖x₁‖²) / (2σ²) )

and the minimum sum of the type I and type II errors is

min_φ (E_{x₁} φ(y) + E_{x₂}(1 − φ(y))) = P_{x₁}(L > 1) + P_{x₂}(L ≤ 1)

= 2Φ(−‖x₁ − x₂‖ / (2σ))
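In one dimension this error sum can be verified in closed form: the likelihood-ratio test L > 1 thresholds y at the midpoint of x₁ and x₂, and each error equals Φ(−|x₁ − x₂|/(2σ)). A quick check (function names and test values ours; x₂ > x₁ is assumed in the examples):

```python
import math

Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # N(0,1) cdf

def error_sum(x1, x2, sigma):
    # The LR test L > 1 rejects H0 when y exceeds the midpoint m (x2 > x1).
    m = (x1 + x2) / 2
    type1 = 1 - Phi((m - x1) / sigma)   # P_{x1}(y > m)
    type2 = Phi((m - x2) / sigma)       # P_{x2}(y <= m)
    return type1 + type2

for (x1, x2, s) in ((0.0, 1.0, 0.5), (-1.0, 2.0, 1.3)):
    d = abs(x2 - x1)
    assert abs(error_sum(x1, x2, s) - 2 * Phi(-d / (2 * s))) < 1e-12
```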

where Φ(·) is the distribution function of the standard normal distribution. Let r = |T(x₁) − T(x₂)|/2 be the half of the change of the functional. Then, by the same argument as that in (4.5), we have

sup_{x ∈ {x₁, x₂}} E_x(δ(y) − T(x))² ≥ (r²/2) [ P_{x₁}(|δ(y) − T(x₁)| ≥ r) + P_{x₂}(|δ(y) − T(x₁)| < r) ]

≥ (|T(x₁) − T(x₂)|²/4) Φ(−‖x₁ − x₂‖ / (2σ))

Taking "sup" over x₁, x₂ ∈ Σ subject to ‖x₁ − x₂‖ = cσ, we get the desired result.


Proof of Theorem 6

Consider the testing problem (4.2) with tₙ = 1/n and n = N₀. By Lemma 4.1, there is no perfect test between H₀ and H₁; i.e., the minimum sum of the type I and type II errors is bounded away from 0. Hence, by algebra similar to that in (4.5), the half of the change of the functional from H₀ to H₁ is a lower bound for estimating T(x). Now, when σ is small, by assumption, the change of the functional from H₀ to H₁ is at least Q(x_U). Hence, Q(x_U) is a lower rate. As B_T(cσ) = o(Q(x_U)), we conclude the second result.

Proof of Theorem 7

Without loss of generality, let the supremum over t in the definition of α_T be attained at t₀, and the supremum over Δ in the definition of Δ_T be attained at Δ. Then, by definition, it follows that the minimax risk for testing between H₀ and H₁ is at least α, i.e.,

min_{0≤φ≤1} sup_{x₀ ∈ Σ_{t₀}, x₁ ∈ Σ_{t₀+Δ}} [E_{x₀} φ(y) + E_{x₁}(1 − φ(y))] ≥ α   (7.26)

Now, for any estimator δ and x₁ ∈ Σ_{t₀+Δ},

P_{x₁}(|δ(y) − T(x₁)| ≥ Δ/2) ≥ P_{x₁}(|δ(y) − T(x₀)| < Δ/2)

Hence, by (7.26),

sup_{x₀ ∈ Σ_{t₀}, x₁ ∈ Σ_{t₀+Δ}} max_{x ∈ {x₀, x₁}} P_x(|δ(y) − T(x)| ≥ Δ/2)

≥ sup_{x₀ ∈ Σ_{t₀}, x₁ ∈ Σ_{t₀+Δ}} (1/2) [ P_{x₀}(|δ(y) − T(x₀)| ≥ Δ/2) + P_{x₁}(|δ(y) − T(x₁)| ≥ Δ/2) ]

≥ α/2

Thus, the first result follows. The second result follows from the inequality (7.24).


ACKNOWLEDGEMENTS

The author gratefully acknowledges Prof. D.L. Donoho for leading me to this research area, and for his valuable suggestions and fruitful discussions. The author would also like to thank Prof. L. Le Cam for his valuable remarks and discussions.


REFERENCES

1. Bickel, P.J. (1981). Minimax estimation of the mean of a normal distribution when the parameter space is restricted. Ann. Statist., 9, 1307-1309.

2. Bickel, P.J. and Ritov, Y. (1988). Estimating integrated squared density derivatives. Technical Report 146, Department of Statistics, University of California, Berkeley.

3. Bertero, M., Boccacci, P., and Pike, E.R. (1982). On the recovery and resolution of exponential relaxation rates from experimental data I. Proc. Roy. Soc. Lond., A 383, 15-.

4. Casella, G. and Strawderman, W.E. (1981). Estimating a bounded normal mean. Ann. Statist., 9, 870-878.

5. Donoho, D.L. and Liu, R.C. (1987a). On minimax estimation of linear functionals. Technical Report 105, Department of Statistics, University of California, Berkeley.

6. Donoho, D.L. and Liu, R.C. (1987b). Geometrizing rates of convergence, I. Technical Report 137a, Department of Statistics, University of California, Berkeley.

7. Donoho, D.L. and Liu, R.C. (1987c). Geometrizing rates of convergence, II. Technical Report 120, Department of Statistics, University of California, Berkeley.

8. Donoho, D.L. and Liu, R.C. (1988). Geometrizing rates of convergence, III. Technical Report 138, Department of Statistics, University of California, Berkeley.

9. Donoho, D.L. and MacGibbon, B. (1987). Minimax risk for hyperrectangles. Technical Report 123, Department of Statistics, University of California, Berkeley.

10. Efroimovich, S.Y. and Pinsker, M.S. (1982). Estimation of square-integrable probability density of a random variable. Problemy Peredachi Informatsii, 18, 3, 19-38 (in Russian); Problems of Information Transmission (1983), 175-189.

11. Farrell, R.H. (1972). On the best obtainable asymptotic rates of convergence in estimation of a density function at a point. Ann. Math. Statist., 43, 1, 170-180.

12. Ibragimov, I.A. and Khas'minskii, R.Z. (1981). Statistical Estimation: Asymptotic Theory. Springer-Verlag, New York-Berlin-Heidelberg.

13. Ibragimov, I.A., Nemirovskii, A.S. and Khas'minskii, R.Z. (1986). Some problems on nonparametric estimation in Gaussian white noise. Theory Probab. Appl., 31, 3, 391-406.

14. Khas'minskii, R.Z. (1978). A lower bound on the risks of nonparametric estimates of densities in the uniform metric. Theory Probab. Appl., 23, 794-798.

15. Le Cam, L. (1973). Convergence of estimates under dimensionality restrictions. Ann. Statist., 1, 38-53.

16. Le Cam, L. (1985). Asymptotic Methods in Statistical Decision Theory. Springer-Verlag, New York-Berlin-Heidelberg.

17. Pinsker, M.S. (1980). Optimal filtering of square integrable signals in Gaussian white noise. Problems of Information Transmission, 16, 2, 52-68.

18. Sacks, J. and Ylvisaker, D. (1981). Variance estimation for approximately linear models. Math. Operationsforsch. Statist., Ser. Statistics, 12, 147-162.

19. Stein, C. (1956). Efficient nonparametric estimation and testing. Proc. Third Berkeley Symp., 1, 187-195.

20. Stone, C. (1980). Optimal rates of convergence for nonparametric estimators. Ann. Statist., 8, 1348-1360.

21. Stone, C. (1982). Optimal global rates of convergence for nonparametric regression. Ann. Statist., 10, 4, 1040-1053.
