Sharp asymptotics for the partition function
of some continuous-time directed polymers

Agnese Cadel, Samy Tindel
Institut Elie Cartan, Université de Nancy 1
BP 239, 54506 Vandoeuvre-lès-Nancy, France
[cadel,tindel]@iecn.u-nancy.fr

Frederi Viens*
Dept. Statistics & Dept. Mathematics, Purdue University
150 N. University St., West Lafayette, IN 47907-2067, USA

May 5, 2008
Abstract
This paper is concerned with two related types of directed polymers in a random medium. The first one is a $d$-dimensional Brownian motion living in a random environment which is white-noise in time and homogeneous in space. The second is a continuous-time discrete-space Markov process on $\mathbb{Z}^d$, in a random environment with similar properties as in continuous space, albeit defined only on $\mathbb{R}_+ \times \mathbb{Z}^d$. The case of a space-time white noise environment can be achieved in this second setting. By means of some Gaussian tools, we estimate the free energy of these models at low temperature, and give some further information on the strong disorder regime of the objects under consideration.
Key words and phrases: Polymer model, Random medium, Gaussian field, Free energy, Lyapunov exponent.
MSC: 82D60, 60K37, 60G15.

*This author's research was partially supported by NSF grants DMS 0204999 and DMS 0606615.
1 Introduction
1.1 Background, models, and motivation
Models for directed polymers in a random environment have been introduced in the physical literature [10, 13, 14, 19] for two main reasons. First, they provide a reasonably realistic model of a particle under the influence of a random medium, for which a number of natural questions can be posed in terms of the asymptotic behavior of the path of the particle. The second point is that, in spite of the fact that polymers seem to be more complicated objects than other disordered systems such as spin glasses, a lot more can be said about their behavior in the low temperature regime, as pointed out in [11, 13]. At a mathematical level, after two decades of efforts, a substantial amount of information about different models of polymer is now available, in both discrete and continuous space settings (see [8, 18, 20] and [3, 17] respectively).
The current article can be seen as a part of this global project of describing precisely the polymer's asymptotic behavior, beyond the spin glass case. Except for some toy models such as the REM or GREM [2, 22], little is known about the low temperature behavior of the free energy for spin glass systems, at least at a completely rigorous level. We shall see in this paper that polymer models are amenable to computations in this direction: we work to obtain some sharp estimates on the free energy of two different kinds of polymers in continuous time, for which some scaling arguments seem to bring more information than in the discrete time setting. Here, in a strict polymer sense, time can also be interpreted as the length parameter of a directed polymer.
A word about random media appellations: we believe the term "random environment" normally implies that the underlying randomness is allowed to change over time; the appellation "random scenery" or "random landscape" is more specifically used for an environment that does not change over time; the models we consider herein fall under the time-varying "environment" umbrella. We now give some brief specifics about these models.
1. We first consider a Brownian polymer in a Gaussian environment: the polymer itself is modeled by a Brownian motion $b = \{b_t;\ t \ge 0\}$, defined on a complete filtered probability space $(\mathcal{C}, \mathcal{F}, (\mathcal{F}_t)_{t\ge0}, (P_b^x)_{x\in\mathbb{R}^d})$, where $P_b^x$ stands for the Wiener measure starting from the initial condition $x$. The corresponding expected value is denoted by $E_b^x$, or simply by $E_b$ when $x = 0$.

The random environment is represented by a centered Gaussian random field $W$ indexed by $\mathbb{R}_+ \times \mathbb{R}^d$, defined on another independent complete probability space $(\Omega, \mathcal{G}, \mathbf{P})$. Denoting by $\mathbf{E}$ the expected value with respect to $\mathbf{P}$, the covariance structure of $W$ is given by

$$ \mathbf{E}\left[ W(t,x)\, W(s,y) \right] = (t \wedge s) \cdot Q(x - y), \qquad (1) $$
for a given homogeneous covariance function $Q : \mathbb{R}^d \to \mathbb{R}$ satisfying some regularity conditions that will be specified later on. In particular, the function $t \mapsto [Q(0)]^{-1/2}\, W(t,x)$ will be a standard Brownian motion for any fixed $x \in \mathbb{R}^d$; for every fixed $t \in \mathbb{R}_+$, the process $x \mapsto t^{-1/2}\, W(t,x)$ is a homogeneous Gaussian field on $\mathbb{R}^d$ with covariance function $Q$. Notice that the homogeneity assumption is made here for the sake of readability, but could be weakened for almost all the results we will show. The interested reader can consult [12] for the types of tools needed for such generalizations.
Once $b$ and $W$ are defined, the polymer measure itself can be described as follows: for any $t > 0$, the energy of a given path (or configuration) $b$ on $[0,t]$ is given by the Hamiltonian

$$ -H_t(b) = \int_0^t W(ds, b_s). \qquad (2) $$

A completely rigorous meaning for this integral will be given in the next section, but for the moment, observe that for any fixed path $b$, $H_t(b)$ is a centered Gaussian random variable with variance $t\, Q(0)$. Based on this Hamiltonian, for any $x \in \mathbb{R}^d$, and a given constant $\beta$ (interpreted as the inverse of the temperature of the system), we define our (random) polymer measure $G_t^x$ (with $G_t := G_t^0$) as follows:

$$ dG_t^x(b) = \frac{e^{-\beta H_t(b)}}{Z_t^x}\, dP_b^x(b), \qquad \text{with} \qquad Z_t^x = E_b^x\left[ e^{-\beta H_t(b)} \right]. \qquad (3) $$
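For intuition, the objects in (3) can be approximated numerically. The sketch below is a minimal Monte Carlo estimate of $Z_t$ for one environment realization, and is our own illustration rather than a method from the paper: the field values along the sampled paths are jointly Gaussian and have independent increments over disjoint time intervals, so the environment increment at each time step can be drawn across all paths at once with covariance $dt\, Q(x_j - x_k)$. The covariance $Q(x) = e^{-|x|^2}$ and all numerical parameters are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def Q(x, y):
    # illustrative homogeneous covariance Q(x - y) = exp(-|x - y|^2), Q(0) = 1
    return np.exp(-np.sum((x - y) ** 2, axis=-1))

def partition_function(beta=1.0, t=1.0, n_steps=50, n_paths=200, d=1):
    """Monte Carlo sketch of Z_t = E_b[exp(-beta H_t(b))] for one environment."""
    dt = t / n_steps
    pos = np.zeros((n_paths, d))      # current positions of the sampled paths
    S = np.zeros(n_paths)             # accumulates -H_t(b) = int_0^t W(ds, b_s)
    for _ in range(n_steps):
        # environment increment over [s, s + dt]: jointly Gaussian across
        # paths, with covariance dt * Q(x_j - x_k) at the current positions
        C = dt * Q(pos[:, None, :], pos[None, :, :])
        C += 1e-10 * np.eye(n_paths)  # jitter so the Cholesky factor exists
        S += np.linalg.cholesky(C) @ rng.standard_normal(n_paths)
        pos += np.sqrt(dt) * rng.standard_normal((n_paths, d))  # Brownian step
    return np.mean(np.exp(beta * S))  # Gibbs weights averaged over paths

Z = partition_function()
```

Sampling the field only along the simulated paths sidesteps the need to realize $W$ on all of $\mathbb{R}_+ \times \mathbb{R}^d$, which is what makes such a sketch tractable.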
2. The second model we consider in this article is a general continuous-time process on the discrete lattice $\mathbb{Z}^d$ in a white noise potential: the polymer is modeled by a continuous-time homogeneous Markov process $b = \{b_t;\ t \ge 0\}$ on $\mathbb{Z}^d$, defined on a complete filtered probability space $(\bar{\mathcal{C}}, \bar{\mathcal{F}}, (\bar{\mathcal{F}}_t)_{t\ge0}, (\bar P_b^x)_{x\in\mathbb{Z}^d})$. The corresponding expected value will be denoted by $\bar E_b^x$, or simply by $\bar E_b$ when $x = 0$. Notice that $b$ can be represented in terms of its jump times $\{\tau_i;\ i \ge 0\}$ and its positions $\{x_i;\ i \ge 0\}$ between the jumps, as $b_t = \sum_{i\ge0} x_i\, \mathbf{1}_{[\tau_i, \tau_{i+1})}(t)$. Then, under $\bar P_b$, $\tau_0 = x_0 = 0$, the sequence $\{\tau_{i+1} - \tau_i;\ i \ge 0\}$ is i.i.d. with common exponential law $\mathcal{E}(\kappa)$, and the sequence $\{x_i;\ i \ge 0\}$ is an arbitrary homogeneous Markov chain on $\mathbb{Z}^d$. We only assume that this chain has finite one-step range, i.e. that the number $q$ of possible positions it can jump to in one step is finite. For instance, the simple symmetric random walk jumps only to nearest neighbors, with equal probabilities, and we have $\kappa = 2d$ and $q = 2d$.
In this context, the random environment $W$ will be defined as a sequence $\{W(\cdot, z);\ z \in \mathbb{Z}^d\}$ of Brownian motions, defined on another independent complete probability space $(\bar\Omega, \bar{\mathcal{G}}, \bar{\mathbf{P}})$. Just like in the Brownian case described above, the covariance structure we assume on $W$ is of the following type:

$$ \bar{\mathbf{E}}\left[ W(t,x)\, W(s,y) \right] = [t \wedge s]\, Q(x - y), \qquad (4) $$
for a covariance function $Q$ defined on $\mathbb{Z}^d$. Note that the case where $Q(z) = 0$ for all $z$ except $Q(0) > 0$ is the case where the Brownian motions in the family $\{W(\cdot, z);\ z \in \mathbb{Z}^d\}$ are independent, i.e. the case of space-time white noise. The Hamiltonian of our system can be defined formally, similarly to the continuous case, as

$$ -\bar H_t(b) = \int_0^t W(ds, b_s). $$

Notice however that, since $b$ is a piecewise constant function, the Hamiltonian $\bar H_t(b)$ can also be written as

$$ -\bar H_t(b) = \sum_{i=0}^{N_t} \left[ W(\tau_{i+1}, x_i) - W(\tau_i, x_i) \right], \qquad (5) $$

where $N_t$ designates the number of jumps of $b$ before time $t$, and $\tau_{N_t+1} = t$ by convention. Once the Hamiltonian $\bar H_t$ is defined, a Gibbs-type measure $\bar G_t$ can be introduced similarly to (3) in the Brownian case.
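To make representation (5) concrete, here is a small simulation sketch for the space-time white noise case: the walk holds at each site for an exponential time, and since the site Brownian motions are independent, each increment $W(\tau_{i+1}, x_i) - W(\tau_i, x_i)$ is a centered Gaussian of variance $\tau_{i+1} - \tau_i$, independent across $i$; in particular $-\bar H_t(b)$ is centered Gaussian with variance $t\, Q(0)$. The rate $\kappa = 2$, dimension $d = 1$, and horizon $t = 10$ are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_hamiltonian(t=10.0, kappa=2.0, d=1):
    """Sample -H_t(b) of Eq. (5) for one path in space-time white noise."""
    tau, H = 0.0, 0.0
    x = np.zeros(d, dtype=int)         # position; tracked only to illustrate
    while True:                        # the construction (white noise makes
        hold = rng.exponential(1.0 / kappa)   # the increments position-free)
        dt = min(hold, t - tau)        # convention tau_{N_t + 1} = t
        H += np.sqrt(dt) * rng.standard_normal()  # N(0, tau_{i+1} - tau_i)
        tau += hold
        if tau >= t:
            return H                   # -H_t(b); its variance is t * Q(0) = t
        j = rng.integers(d)
        x[j] += rng.choice([-1, 1])    # nearest-neighbour jump

samples = np.array([sample_hamiltonian() for _ in range(2000)])
```

Since the holding intervals partition $[0, t]$, the sampled values have mean 0 and variance exactly $t$, which is easy to confirm empirically.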
As mentioned before, our aim in this article is to give some sharp estimates on the free energies $p(\beta)$ and $\bar p(\beta)$ of the two systems described above, for large $\beta$. The quantities of interest are defined asymptotically as

$$ p(\beta) = \lim_{t\to\infty} \frac{1}{t}\, \mathbf{E}\left[ \log(Z_t) \right], \qquad \text{and} \qquad \bar p(\beta) = \lim_{t\to\infty} \frac{1}{t}\, \bar{\mathbf{E}}\left[ \log(\bar Z_t) \right]; $$

it is known (see e.g. [20] for the Brownian case) that the limits above, which are often called Lyapunov exponents, exist, are positive, and are both bounded from above by $Q(0)\beta^2/2$. It is then possible to separate a region of weak disorder from a region of strong disorder according to the value of $p(\beta)$: we will say that the polymer is in the weak disorder regime if $p(\beta) = Q(0)\beta^2/2$, while the strong disorder regime is defined by the strict inequality $p(\beta) < Q(0)\beta^2/2$. These two notions have some nice interpretations in terms of the behavior of the particle under the Gibbs measure (see e.g. [4, 9]), and it is expected, for any model of polymer in a random environment, that the strong disorder regime is attained whenever $\beta$ is large enough. It is then natural to ask if one can obtain sharper information than $p(\beta) < Q(0)\beta^2/2$ in the low temperature phase. Indeed, on the one hand, this may quantify in a sense how far we are from the weak disorder regime, and how much localization there is on our measures $G_t, \bar G_t$. On the other hand, the penalization method explained in [21] can be roughly summarized in the following way: if one can get a sharp equivalent for the quantity $E_b[e^{-\beta H_t(b)}]$, then this will also allow a detailed description of the limit $\lim_{t\to\infty} G_t$. This latter program is of course beyond the scope of the current article, but is a good motivation for getting some precise information about the function $p(\beta)$.
1.2 Summary of results
We now describe our main results. Our principal result in continuous space will be obtained in terms of the regularity of $Q$ in a neighborhood of 0. In particular, we shall assume some upper and lower bounds on $Q$ of the form

$$ c_0\, |x|^{2H} \le Q(0) - Q(x) \le c_1\, |x|^{2H}, \qquad \text{for all } x \text{ such that } |x| \in [0, r_0], \qquad (6) $$

for a given exponent $H \in (0,1]$ and $r_0 > 0$. It should be noticed that condition (6) is equivalent to assuming that $W$ has a specific almost-sure modulus of continuity in space, of order $|x|^H \log^{1/2}(1/|x|)$, i.e. barely failing to be $H$-Hölder continuous (see [23] for details). Then, under these conditions, we will get the following conclusions.
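As a sanity check of condition (6), one can verify it numerically for a concrete covariance. The kernel $Q(x) = \exp(-|x|^{2H})$ below is our own illustrative example (it is positive definite for $2H \le 2$); since $u - u^2/2 \le 1 - e^{-u} \le u$ for $u \ge 0$, the constants $c_0 = 1 - r_0^{2H}/2$ and $c_1 = 1$ work on $[0, r_0]$.

```python
import numpy as np

H, r0 = 0.4, 0.5          # illustrative exponent and radius

def Q(r):
    # hypothetical covariance with Q(0) = 1 and the behaviour (6) near 0
    return np.exp(-np.abs(r) ** (2 * H))

r = np.linspace(1e-6, r0, 10_000)
ratio = (Q(0.0) - Q(r)) / r ** (2 * H)   # should lie in [c0, c1] on (0, r0]

# from u - u^2/2 <= 1 - e^{-u} <= u with u = |x|^{2H} <= r0^{2H}:
c0 = 1 - r0 ** (2 * H) / 2
c1 = 1.0
```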
Theorem 1.1. Assume that the function $Q$ satisfies condition (6). Then the following hold true:

1. If $H \in [1/2, 1]$, we have for some constants $C_{0,d}$ and $C_{1,d}$ depending only on $Q$ and $d$, for all $\beta \ge 1$,

$$ C_{0,d}\, \beta^{4/3} \le p(\beta) \le C_{1,d}\, \beta^{2 - 2H/(3H+1)}. $$

2. If $H \in (0, 1/2]$, we have for some constants $\beta_Q$, $C'_{0,d}$, and $C'_{1,d}$ depending only on $Q$ and $d$, for all $\beta \ge \beta_Q$,

$$ C'_{0,d}\, \beta^{2/(1+H)} \le p(\beta) \le C'_{1,d}\, \beta^{2 - 2H/(3H+1)}. $$
Corresponding almost sure results on $t^{-1} \log(Z_t)$ also hold, as seen in Corollary 1.3 and Proposition 2.1 below. Let us make a few elementary comments about the above theorem's bounds, which are also summarized in Figure 1. First of all, the exponent of $\beta$ in those estimates is decreasing with $H$, which seems to indicate a stronger disorder when the Gaussian field $W$ is smoother in space. Furthermore, in the case $H \in [1/2, 1]$, the gap between the two estimates decreases as $H$ increases to 1; for $H = 1/2$, we get bounds with the powers of $\beta$ equal to $4/3$ and $8/5$; and for $H = 1$, the bounds are $4/3$ and $3/2$. It should be noted that the case $H = 1/2$ is our least sharp result, while the case $H = 1$ yields the lowest power of $\beta$; one should not expect lower powers for any potential $W$ even if $W$ is so smooth that it is $C^\infty$ in space: indeed, unless $W$ is highly degenerate, the lower bound in (6) should hold with $H = 1$, while the upper bound will automatically be satisfied with $H = 1$. The case of small $H$ is more interesting. Indeed, we can rewrite the lower and upper bounds above as

$$ C'_{0,d}\, \beta^{2 - 2H + F(H)} \le p(\beta) \le C'_{1,d}\, \beta^{2 - 2H + G(H)}, $$

where the functions $F$ and $G$ satisfy $F(x) = 2x^2 + O(x^3)$ and $G(x) = 6x^2 + O(x^3)$ for $x$ near 0. We therefore see that the asymptotic $\beta^{2-2H}$ is quite sharp for small $H$, but that the second order term in the expansion of the power of $\beta$ for small $H$, while bounded, is always positive.

[Figure 1: Exponent of $\beta$ in $p(\beta)$ as a function of the continuity coefficient $H \in [0,1]$: both the upper and lower bound exponents decrease from 2 at $H = 0$; the lower bound equals $4/3$ for $H \ge 1/2$, and the upper bound reaches $3/2$ at $H = 1$.]
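The expansions of $F$ and $G$ quoted above can be checked symbolically: subtract $2 - 2H$ from the lower-bound exponent $2/(1+H)$ and from the upper-bound exponent $2 - 2H/(3H+1)$, then expand the remainders near $H = 0$.

```python
import sympy as sp

H = sp.symbols('H', positive=True)

# remainders F(H), G(H) once 2 - 2H is subtracted from the two exponents
F = 2 / (1 + H) - (2 - 2 * H)                # lower bound: 2/(1+H)
G = (2 - 2 * H / (3 * H + 1)) - (2 - 2 * H)  # upper bound: 2 - 2H/(3H+1)

F2 = sp.series(F, H, 0, 3).removeO()   # leading behaviour of F near 0
G2 = sp.series(G, H, 0, 3).removeO()   # leading behaviour of G near 0
```

The computation returns $F(H) = 2H^2 + O(H^3)$ and $G(H) = 6H^2 + O(H^3)$, as stated.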
Using ideas introduced in [12] to deal with spatially non-homogeneous media, it is possible to extend Theorem 1.1. The reader will check that the first of the following two corollaries is trivial to prove using the tools in this article. The second corollary requires techniques in [12], and can also be proved directly by using sub-Gaussian concentration results (see [27]). We do not give any details of its proof, for the sake of conciseness. Neither corollary assumes that $W$ is spatially homogeneous. One will note that no assertion on the existence of $p(\beta)$ is made in these corollaries, but that the first corollary already implies strong disorder for large $\beta$ in the sense that $\limsup_{t\to\infty} t^{-1}\mathbf{E}[\log(Z_t)] < \beta^2 Q(0)/2$. [12] can be consulted for conditions under which $p(\beta)$ exists even if $W$ is not spatially homogeneous.
Corollary 1.2. In the non-homogeneous case, the following bounds are satisfied:

• [Upper bound] Assume that for some $r_1, c_1 > 0$, for all $x, y \in \mathbb{R}^d$ such that $|x - y| \le r_1$, the spatial canonical metric of $W$ is bounded above as

$$ \delta^2(x, y) := \mathbf{E}\left[ (W(1,x) - W(1,y))^2 \right] \le c_1\, |x - y|^{2H}. $$

Then, replacing $p(\beta)$ by $\limsup_{t\to\infty} t^{-1}\mathbf{E}[\log(Z_t)]$, the two upper bound results in Theorem 1.1 hold.

• [Lower bound] Assume that for some $r_0, c_0 > 0$, for all $x, y \in \mathbb{R}^d$ such that $|x - y| \le r_0$, we have

$$ \delta^2(x, y) := \mathbf{E}\left[ (W(1,x) - W(1,y))^2 \right] \ge c_0\, |x - y|^{2H}. $$

Then, replacing $p(\beta)$ by $\liminf_{t\to\infty} t^{-1}\mathbf{E}[\log(Z_t)]$, the two lower bound results in Theorem 1.1 hold.
Corollary 1.3. Under the hypotheses of Corollary 1.2, its conclusions also hold $\mathbf{P}$-almost surely with $\limsup_{t\to\infty} t^{-1}\mathbf{E}[\log(Z_t)]$ replaced by $\limsup_{t\to\infty} t^{-1}\log(Z_t)$, and similarly for the lim inf's.
Since our estimates become sharper as $H \to 0$, and also because the behavior of $p(\beta)$ is nearly quadratic in $\beta$ for small $H$ (i.e. approaching the weak disorder regime), we decided to explore further the region of logarithmic spatial regularity for $W$, in order to determine whether one ever leaves the strong disorder regime. Namely, we also examine the situation of a covariance function $Q$ for which there exist positive constants $c_0$, $c_1$, and $r_1$ such that for all $x$ with $|x| \le r_1$,

$$ c_0 \log^{-2\gamma}(1/|x|) \le Q(0) - Q(x) \le c_1 \log^{-2\gamma}(1/|x|), \qquad (7) $$

where $\gamma$ is a given positive exponent. Assumption (7) implies that $W$ is not spatially Hölder-continuous for any exponent $H \in (0,1]$. Moreover, the theory of Gaussian regularity implies that, if $\gamma > 1/2$, $W$ is almost-surely continuous in space, with modulus of continuity proportional to $\log^{-\gamma+1/2}(1/|x|)$, while if $\gamma \le 1/2$, $W$ is almost-surely not uniformly continuous on any interval in space, and in fact is unbounded on any interval. We will then establish the following result, which is optimal up to multiplicative constants.
Theorem 1.4. Assume condition (7), where $\gamma > 0$. We have for some constants $D_{0,d}$ and $D_{1,d}$ depending only on $Q$ and $d$, for all $\beta$ large enough,

$$ D_{0,d}\, \beta^2 \log^{-2\gamma}(\beta) \le p(\beta) \le D_{1,d}\, \beta^2 \log^{-2\gamma}(\beta). $$
Besides giving a result which is sharp up to constants for the free energy $p(\beta)$, the last theorem will allow us to make a link between our Brownian model and the discrete-space Markov process polymer described by the Hamiltonian (5). Indeed, the following result will also be proved in the sequel.

Theorem 1.5. Assume that there are two distinct sites $e_1$ and $e_2$ in $\mathbb{Z}^d$ which can both be reached with positive probability after one jump of the Markov process $b$, and that $\infty > Q(0) - Q(e_1 - e_2) > 0$, where $Q$ has been defined in (4). Then the free energy $\bar p(\beta)$ of the homogeneous Markov process polymer satisfies, for $\beta$ large enough:

$$ D'_0\, \beta^2 \log^{-1}(\beta) \le \bar p(\beta) \le D'_1\, \beta^2 \log^{-1}(\beta), \qquad (8) $$

for two constants $D'_0$ and $D'_1$ depending only on the covariance function $Q$ and the law of the Markov process $b$.
Relation (8) will be obtained here thanks to some simple arguments, which allow the extension to spatially inhomogeneous media. In the special homogeneous case of space-time white noise ($Q(x) = 0$ for all $x \ne 0$), more can be said: the exact value of the limit $\lim_{\beta\to\infty} \bar p(\beta) \log(\beta)/\beta^2$ can be computed in this situation; this result has been established by the authors of the work in preparation [15].
In relation with the continuous space model considered in Theorem 1.4, we see that to obtain the same behavior as with space-time white noise in discrete space, we need to use precisely the environment $W$ in $\mathbb{R}^d$ with the logarithmic regularity corresponding to $\gamma = 1/2$ in (7). As mentioned before, this behavior of $W$ happens to be exactly at the threshold at which $W$ becomes almost-surely discontinuous and unbounded on every interval. Nevertheless such a $W$ is still function-valued. Hence, for the purpose of understanding the polymer partition function, there is no need to study the space-time white noise in continuous space, for which $W(t,\cdot)$ is not a bona fide function (only a distribution), and for which the meaning of $Z_t$ itself is difficult even to define. Another way to interpret the coincidence of behaviors for "space-time white noise in $\mathbb{R}_+ \times \mathbb{Z}^d$" and for "$\gamma = 1/2$" is to say that both models for $W$ are function-valued and exhibit spatial discontinuity: indeed, in discrete space, one extends $W(t,\cdot)$ to $\mathbb{R}^d$ by making it piecewise constant, in order to preserve independence. The fact that the limit in Theorem 1.4 depends on $\gamma$ does prove, however, that the continuous-space polymer model under logarithmic regularity is richer than the discrete-space one.
As in the Hölder-scale continuous space setting, we have the following corollaries, in which $W$ is allowed to be spatially inhomogeneous. Again, we do not include proofs of these results, for the sake of conciseness.
Corollary 1.6. Assume the lower and upper bound hypotheses in Corollary 1.2 hold with $|x-y|^{2H}$ replaced by $\log^{-2\gamma}(1/|x-y|)$. Then the conclusions of Theorem 1.4 hold with $p(\beta)$ replaced by $\liminf_{t\to\infty} t^{-1}\mathbf{E}[\log(Z_t)]$ for the lower bound, and by $\limsup_{t\to\infty} t^{-1}\mathbf{E}[\log(Z_t)]$ for the upper bound. Almost-sure results as in Corollary 1.3 also hold.

Corollary 1.7. For the discrete-space polymer in Theorem 1.5, assume, instead of $Q(0) > Q(e_1 - e_2)$, that $\bar{\mathbf{E}}\left[ (W(1,e_1) - W(1,e_2))^2 \right] > 0$. Then the conclusions of Theorem 1.5 hold with $\bar p(\beta)$ replaced by $\liminf_{t\to\infty} t^{-1}\bar{\mathbf{E}}[\log(\bar Z_t)]$ for the lower bound, and with $\bar p(\beta)$ replaced by $\limsup_{t\to\infty} t^{-1}\bar{\mathbf{E}}[\log(\bar Z_t)]$ for the upper bound. Almost-sure results as in Corollary 1.3 also hold.
Let us now say a few words about the methodology we have used in order to get our results. It is inspired by the literature on Lyapunov exponents for stochastic PDEs [5, 6, 7, 12, 25, 26]; our upper bound results rely heavily on the estimation of the supremum of some well-chosen Gaussian fields, using such results as Dudley's so-called entropy upper bound and the Borell-Sudakov inequality (see [1] or [27]); our lower bound results are obtained more "by hand", by isolating very simple polymer configurations $b$ which maximize the random medium's increments in the Hamiltonian $H_t(b)$ or $\bar H_t(b)$, and showing that these configurations carry enough weight to provide lower bounds. It turns out that these estimation procedures work better when the configuration $b$ is simple enough, such as a piecewise constant or linear function. For the upper bound in the continuous case, a careful discretization of our Brownian path will thus have to be performed in order to get our main results; the resulting proof cannot exploit the discrete case itself because of the different nature of the discrete and continuous environments.
Some of the techniques in this paper are inspired by the work [12] in one-dimensional continuous space; however, we have implemented a more efficient use of Gaussian tools, based fully on the idea of concentration initiated in [20]; as a consequence, our results are improvements over those in [12] in terms of being quantitatively sharper, of handling $d$-dimensional space explicitly, and also in terms of having more elegant and much shorter proofs. In the special discrete-space case where $b$ is taken to be the simple symmetric random walk, the results in Section 4 herein can be derived using a scaling argument via known results, e.g. from [5]; however, our results are more general, as they show that the Lyapunov exponent order $\beta^2 \log^{-1}(\beta)$ for the simple symmetric random walk is actually a universal property for any polymer based on a Markov process in discrete space; in addition, based on continuous-space ideas in [12], our discrete results extend easily (in slightly weaker form) to non-homogeneous potentials, an extension which cannot be obtained using techniques such as in [5]; lastly, our proofs in Section 4, using again the full force of Gaussian concentration, are significantly shorter and more efficient than in previous works.
The structure of the article is as follows: Section 2 contains preliminary information on
the partition function. Section 3 deals with the Brownian polymer. Section 4 covers the
discrete-space polymer.
2 Preliminaries; the partition function

In this section, we first recall some basic facts about the definition and the simplest properties of the partition functions $Z_t$ and $\bar Z_t$, which have already been considered in the introduction. We will also briefly give some notions of Gaussian analysis which will be used later on.
We begin with basic information about the partition function of the Brownian polymer. Recall that $W$ is a centered Gaussian field on $\mathbb{R}_+ \times \mathbb{R}^d$, defined by its covariance structure (1). The Hamiltonian $H_t(b)$ given by (2) can be defined more rigorously through a Fourier transform procedure: there exists (see e.g. [7] for further details) a centered Gaussian independently scattered $\mathbb{C}$-valued measure $\nu$ on $\mathbb{R}_+ \times \mathbb{R}^d$ such that

$$ W(t,x) = \int_{\mathbb{R}_+\times\mathbb{R}^d} \mathbf{1}_{[0,t]}(s)\, e^{\imath u x}\, \nu(ds, du), \qquad (9) $$
where the simple notation $ux$ stands for the inner product $u \cdot x$ in $\mathbb{R}^d$. For every test function $f : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{C}$, set now

$$ \nu(f) \equiv \int_{\mathbb{R}_+\times\mathbb{R}^d} f(s,u)\, \nu(ds, du). \qquad (10) $$

While the random variable $\nu(f)$ may be complex-valued, to ensure that it is real-valued it is sufficient to assume that $f$ is of the form $f(s,u) = f_1(s)\, e^{\imath u f_2(s)}$ for real-valued functions $f_1$ and $f_2$. Then the law of $\nu$ is defined by the following covariance structure: for any such test functions $f, g : \mathbb{R}_+ \times \mathbb{R}^d \to \mathbb{C}$, we have

$$ \mathbf{E}\left[ \nu(f)\, \overline{\nu(g)} \right] = \int_{\mathbb{R}_+\times\mathbb{R}^d} f(s,u)\, \overline{g(s,u)}\, \hat Q(du)\, ds, \qquad (11) $$

where the finite positive measure $\hat Q$ is the Fourier transform of $Q$ (see [24] for details).
From (9), we see that the Itô stochastic differential of $W$ in time can be understood as $W(ds, x) := \int_{u\in\mathbb{R}^d} e^{\imath u x}\, \nu(ds, du)$, or even, if the measure $\hat Q(du)$ has a density $f(u)$ with respect to the Lebesgue measure, which is typical, as

$$ W(ds, x) := \int_{u\in\mathbb{R}^d} e^{\imath u x} \sqrt{f(u)}\, M(ds, du), $$

where $M$ is a white-noise measure on $\mathbb{R}_+ \times \mathbb{R}^d$, i.e. a centered independently scattered Gaussian measure with covariance given by $\mathbf{E}[M(A)\, M(B)] = m^{\mathrm{Leb}}(A \cap B)$, where $m^{\mathrm{Leb}}$ is Lebesgue's measure on $\mathbb{R}_+ \times \mathbb{R}^d$.

We can now go back to the definition of $H_t(b)$: invoking the representation (9), we can write

$$ -H_t(b) := \int_0^t W(ds, b_s) = \int_0^t \int_{\mathbb{R}^d} e^{\imath u b_s}\, \nu(ds, du), \qquad (12) $$

taking this expression as a definition of $H_t(b)$ for each fixed path $b$; it can be shown (see [7]) that the right-hand side of the above relation is well defined for any Hölder-continuous path $b$, by an $L^2$-limit procedure. Such a limiting procedure can be adapted to the specific case of constructing $H_t(b)$, using the natural time evolution structure; we will not comment on this further. However, the reader will surmise that the following remark, given for the sake of illustration, can be useful: when $\hat Q$ has a density $f$, we obtain

$$ -H_t(b) = \int_{[0,t]\times\mathbb{R}^d} e^{\imath u b_s} \sqrt{f(u)}\, M(ds, du). $$
We take as the definition of the partition function $Z_t^x$ its expression in (3), and set its expectation under $\mathbf{P}$ as

$$ p_t(\beta) := \frac{1}{t}\, \mathbf{E}\left[ \log(Z_t^x) \right], \qquad (13) $$

usually called the free energy of the system. It is easily seen that $p_t(\beta)$ is independent of the initial condition $x \in \mathbb{R}^d$, thanks to the spatial homogeneity of $W$. Thus, in the remainder of the paper, $x$ will be understood as 0 when not specified, and $E_b, Z_t$ will stand for $E_b^0, Z_t^0$, etc. We summarize some basic results on $p_t(\beta)$ and $Z_t$ established in [20] using Gaussian concentration.
Proposition 2.1. For all $\beta > 0$ there exists a constant $p(\beta) > 0$ such that

$$ p(\beta) := \lim_{t\to\infty} p_t(\beta) = \sup_{t\ge0} p_t(\beta). \qquad (14) $$

Furthermore, the function $p$ satisfies:

1. The map $\beta \mapsto p(\beta)$ is a convex nondecreasing function on $\mathbb{R}_+$.

2. The following upper bound holds true:

$$ p(\beta) \le \frac{\beta^2}{2}\, Q(0). \qquad (15) $$

3. $\mathbf{P}$-almost surely, we have

$$ \lim_{t\to\infty} \frac{1}{t} \log Z_t = p(\beta). \qquad (16) $$
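For the reader's convenience, the short argument behind the bound (15) runs as follows. By Jensen's inequality, Fubini's theorem, and the Gaussian moment generating function, since for each fixed path $b$ the variable $H_t(b)$ is centered Gaussian with variance $t\,Q(0)$,

```latex
\mathbf{E}\left[\log Z_t\right]
  \;\le\; \log \mathbf{E}\left[Z_t\right]
  \;=\; \log E_b\!\left[\mathbf{E}\, e^{-\beta H_t(b)}\right]
  \;=\; \log E_b\!\left[e^{\beta^2 t Q(0)/2}\right]
  \;=\; \frac{\beta^2 t\, Q(0)}{2},
```

and dividing by $t$ gives $p_t(\beta) \le \beta^2 Q(0)/2$ for every $t$, hence (15) in the limit. Weak disorder is thus exactly the regime where this annealed bound is attained.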
For the discrete-space polymer on $\mathbb{Z}^d$, the Hamiltonian $\bar H_t(b)$ is easier to define, and can be expressed in a simple way by (5). Recall then that $\bar Z_t$ and $\bar p_t(\beta)$ are defined as

$$ \bar Z_t = \bar E_b\left[ e^{-\beta \bar H_t(b)} \right], \qquad \text{and} \qquad \bar p_t(\beta) = \frac{1}{t}\, \bar{\mathbf{E}}\left[ \log(\bar Z_t) \right]. $$
Then, using the same kind of arguments as in [20] (see also [15]), we get the following:

Proposition 2.2. The same conclusions as in Proposition 2.1 hold true for the homogeneous Markov process polymer $b$.

Proof. The proof of Proposition 2.1 above, given in [20] using Gaussian concentration, is easily adapted to the case where $b$ is an arbitrary homogeneous Markov process, to yield this proposition. We omit the details.
3 Estimates of the free energy: continuous space

In this section, we proceed to the proof of Theorems 1.1 and 1.4, by means of estimates for some well-chosen Gaussian random fields.

The hypothesis we use guarantees that there is some $H \in (0,1)$ such that $W$ is no more than $H$-Hölder continuous in space. Accordingly, we define the homogeneous spatial canonical metric $\delta$ of $W$ by

$$ \delta^2(x - y) := \mathbf{E}\left[ (W(1,x) - W(1,y))^2 \right] = 2\left( Q(0) - Q(x - y) \right), \qquad (17) $$

for all $x, y \in \mathbb{R}^d$. Our hypotheses on $\delta$ translate immediately into statements about $Q$ via this formula.
In our results below, we have also tried to specify the dependence of our constants on the
dimension of the space variable. An interesting point in that respect is given in the lower
bound of Subsection 3.2 below, which has to do with weak versus strong disorder in very
high-dimensional cases.
3.1 Upper bound in the Brownian case

The upper bound in Theorem 1.1 follows immediately from the following proposition, which proves in particular that strong disorder holds for all $H \in (0,1]$.

Proposition 3.1. Assume that there exist a number $H \in (0,1]$ and numbers $c_1, r_1 > 0$ such that for all $x, y \in \mathbb{R}^d$ with $|x - y| \le r_1$ we have

$$ \delta(x - y) < c_1\, |x - y|^H. \qquad (18) $$

Then there exists a constant $C$ depending only on $Q$, and a constant $\beta_0$ depending only on $r_1$ and $d$, such that for all $\beta \ge \beta_0$,

$$ p(\beta) \le C\, d^{\frac{7H}{1+3H}}\, \beta^{\frac{2+4H}{1+3H}}. $$
Proof. Let us divide the proof into several steps.

Step 1: Strategy. From relation (14), we have

$$ p(\beta) \le \limsup_{t\to\infty} p_t(\beta). $$

Our strategy is then to give an estimate of $p_t(\beta)$ for a discretized path $\tilde b \in \varepsilon\mathbb{Z}^d$ that stays close to $b$ and proceeds only by jumps. Thanks to this substitution, and using Hölder's and Jensen's inequalities, we shall obtain

$$ \mathbf{E}\left[ \log(Z_t) \right] = \mathbf{E}\left[ \log E_b\left[ \exp\left( -\beta \left[ H_t(b) - H_t(\tilde b) \right] \right) \exp\left( -\beta H_t(\tilde b) \right) \right] \right] \qquad (19) $$
$$ \le \frac{1}{2}\, \mathbf{E}\left[ \log E_b\left[ \exp\left( -2\beta \left[ H_t(b) - H_t(\tilde b) \right] \right) \right] \right] + \frac{1}{2}\, \mathbf{E}\left[ \log E_b\left[ \exp\left( -2\beta H_t(\tilde b) \right) \right] \right] $$
$$ \le \frac{1}{2} \log E_b\left[ \exp\left( 2\beta^2 \int_0^t \delta^2(b_s - \tilde b_s)\, ds \right) \right] + \frac{1}{2}\, \mathbf{E}\left[ \log E_b\left[ \exp\left( -2\beta H_t(\tilde b) \right) \right] \right]. $$

Notice that the first term on the right-hand side represents the error made by considering the discretized path $\tilde b$ instead of $b$; thanks to Hypothesis (18) and the definition of $\tilde b$, we will easily control it.
Step 2: The discretized path. Let us now describe the discretized process we shall use in the sequel: we will approximate the Brownian path $b$ with a path that stays in $\varepsilon\mathbb{Z}^d$, where $\varepsilon$ is a small positive number. Let $b^j$ be the $j$-th component of the $d$-dimensional path $b$. Let $T_1^j$ be the first time that $b^j$ exits the interval $(-\varepsilon, \varepsilon)$, and let $T_{i+1}^j$ be the first time after $T_i^j$ that $b^j$ exits $(b_{T_i^j} - \varepsilon,\ b_{T_i^j} + \varepsilon)$. So, for a fixed component $j$, the times $(T_{i+1}^j - T_i^j)_{i=0}^\infty$ are i.i.d., and the successive positions $x_m^j = b^j_{T_m^j}$, which are independent of the jump times, form a one-dimensional symmetric random walk on $\varepsilon\mathbb{Z}$ in discrete time.

Now let $(T_n)_{n=0}^\infty$ be the increasing sequence of all the $(T_m^j)_{j,m}$, and let $(x_n)_{n=0}^\infty$ be the nearest-neighbor path in $\varepsilon\mathbb{Z}^d$ with $x_0 = 0$ whose $j$-th component takes the same step as $x_m^j$ at time $T_m^j$. We define the discretized path $\tilde b$ as the path that jumps to site $x_n$ at time $T_n$ and is constant between jumps.

Remark 3.2. At any time $s$, each coordinate of $\tilde b_s$ is within $\varepsilon$ of the corresponding coordinate of $b_s$, so the distance separating the two paths is never more than $\varepsilon \sqrt d$. Thus we have, for all $s \ge 0$, $|b_s - \tilde b_s| \le \varepsilon\, d^{1/2}$.
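The construction of Step 2 and the bound of Remark 3.2 are easy to visualize in one dimension. The sketch below, an illustration of our own with arbitrary grid size and $\varepsilon$, builds the $\varepsilon$-skeleton $\tilde b$ of a simulated Brownian path (jumping by $\pm\varepsilon$ whenever the path leaves the current interval of radius $\varepsilon$) and confirms that the gap $|b_s - \tilde b_s|$ never exceeds $\varepsilon$.

```python
import numpy as np

rng = np.random.default_rng(2)

def discretize(b, eps):
    """epsilon-skeleton of a 1-d path: jump by +/- eps each time the path
    exits the interval (center - eps, center + eps), as in Step 2."""
    tilde = np.empty_like(b)
    c = 0.0
    for i, x in enumerate(b):
        while abs(x - c) >= eps:        # handle discrete-time overshoot
            c += eps * np.sign(x - c)
        tilde[i] = c
    return tilde

n, dt, eps = 10_000, 1e-4, 0.05
b = np.concatenate([[0.0], np.cumsum(np.sqrt(dt) * rng.standard_normal(n))])
b_tilde = discretize(b, eps)
gap = np.max(np.abs(b - b_tilde))       # stays strictly below eps
```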
Remark 3.3. Thanks to Remark 3.2 we can now control the error term defined in relation (19). In fact, owing to Hypothesis (18), we have

$$ \frac{1}{2t} \log E_b\left[ \exp\left( 2\beta^2 \int_0^t \delta^2(b_s - \tilde b_s)\, ds \right) \right] \le \frac{1}{2t} \log E_b\left[ \exp\left( 2\beta^2 C^2 \int_0^t |b_s - \tilde b_s|^{2H}\, ds \right) \right] \le C \beta^2 \varepsilon^{2H} d^H, $$

where we recall that $C$ is a constant depending on $Q$ that can change from line to line. Plugging this last inequality into (19), and defining

$$ p_t^\varepsilon(\beta) = \frac{1}{t}\, \mathbf{E}\left[ \log E_b\left[ \exp\left( -2\beta H_t(\tilde b) \right) \right] \right], $$

we have thus obtained the following estimate for $p_t(\beta)$:

$$ p_t(\beta) \le C \beta^2 \varepsilon^{2H} d^H + \frac{1}{2}\, p_t^\varepsilon(\beta). \qquad (20) $$

We shall now try to get some suitable bounds on $p_t^\varepsilon(\beta)$.
Step 3: Study of $p_t^\varepsilon(\beta)$. Let $N_t^j$ be the number of jumps of the $j$-th component of $\tilde b$ up to time $t$. For a multi-index $k = (k_1, \dots, k_d)$ let $|k| = k_1 + \dots + k_d$, so that the total number of jumps of $\tilde b$ up to time $t$ is $|N_t| = N_t^1 + \dots + N_t^d$. Denote by $S(t,n)$ the simplex of all possible sequences of $n$ jump times up to time $t$, namely

$$ S(t,n) = \left\{ \mathbf{t} = (t_0, \dots, t_n) : 0 = t_0 \le \dots \le t_n \le t \right\}. \qquad (21) $$

The set of the first $k_j$ jump times of the $j$-th component of $\tilde b$ is a point $(t_i^j)_{i=1}^{k_j}$ in $S(t, k_j)$. Given the set of all jump times $\{ t_i^j : j \in [1,\dots,d],\ i \in [1,\dots,k_j] \}$, let $\{ \tilde t_l : l \in [0, |k|+1] \}$ be the same set but ordered, with the conventions $\tilde t_0 = 0$ and $\tilde t_{|k|+1} = t$. Finally, let $\tilde x_l$ be the value of $\tilde b$ between the two jump times $\tilde t_l$ and $\tilde t_{l+1}$. Denote by $\mathcal{P}_n$ the set of all such sequences $\tilde x = (\tilde x_l)_{l=1}^n$, i.e. the set of all nearest-neighbor random walk paths of length $n$ starting at the origin.

Then, if we fix $|N_t| = |k|$, we can write

$$ H_t(\tilde b) = X\left( |k|;\ (\tilde t_l)_{l=1}^{|k|};\ (\tilde x_l)_{l=1}^{|k|} \right), $$

where

$$ X\left( |k|;\ (\tilde t_l)_{l=1}^{|k|};\ (\tilde x_l)_{l=1}^{|k|} \right) = \sum_{i=0}^{|k|} \left[ W(\tilde t_{i+1}, \tilde x_i) - W(\tilde t_i, \tilde x_i) \right]. $$
Thanks to this notation, we have

$$ t\, p_t^\varepsilon(\beta) = \mathbf{E}\left[ \log E_b\left[ \exp(-2\beta H_t(\tilde b)) \right] \right] = \mathbf{E}\left[ \log E_b\left[ \exp\left( -2\beta X\left( |N_t|;\ (\tilde t_l)_{l=1}^{|N_t|};\ (\tilde x_l)_{l=1}^{|N_t|} \right) \right) \right] \right]. $$

So we can write the expectation with respect to $b$ as

$$ E_b\left[ \exp(-2\beta H_t(\tilde b)) \right] = \sum_{n\ge1} E_b\left[ \exp(-2\beta H_t(\tilde b))\ \middle|\ |N_t| \in [t\lambda(n-1),\ t\lambda n] \right] \cdot P_b\left[ |N_t| \in [t\lambda(n-1),\ t\lambda n] \right]. $$

The number of jumps of the discretized path $\tilde b$ in a given interval $[0,t]$ will play a crucial role in our optimization procedure. For a parameter $\lambda > 0$ which will be fixed later on, let us thus define

$$ \mathcal{T}_{n\lambda} = \left\{ \left( k;\ \tilde t;\ \tilde x \right) : k \le t n \lambda;\ \tilde t \in S(t,k);\ \tilde x \in \mathcal{P}_k \right\}. $$
Then the following estimates will be essential for our future computations:

$$ P_b\left[ N_t^j > n\lambda t \right] \le \exp\left( -\frac{t}{2} (\lambda n \varepsilon)^2 + t \lambda n \right), \qquad (22) $$

$$ \mathbf{E}\left[ \sup_{\mathcal{T}_{n\lambda}} X(k;\ \tilde t;\ \tilde x) \right] \le K t d \sqrt{n\lambda}, \qquad (23) $$

where $K$ is a constant that depends on the covariance of the environment $Q$. Inequality (22) can be found textually in [12]: it is Proposition 22 therein, with its parameter equal to 1. Inequality (23) is established identically to equation (30) in [12], with the minor difference that the total number of paths in $\mathcal{P}_m$ is not $2^m$ but $(2d)^m$, which, in the inequality above (30) near the bottom of page 33 in [12], accounts for a factor $e^{1+\log(6d)} = 6ed$ instead of $e^{c_1}$ therein, hence the factor $d$ in (23).

Defining $Y_{n\lambda} = \sup_{\mathcal{T}_{n\lambda}} X(k;\ \tilde t;\ \tilde x)$, we can now bound $p_t^\varepsilon(\beta)$ as follows:

$$ t\, p_t^\varepsilon(\beta) \le \mathbf{E}\left[ \log(A+B) \right] \le \mathbf{E}\left[ (\log A)_+ \right] + \mathbf{E}\left[ (\log B)_+ \right] + \log 2, $$
where

$$ A = P_b\left[ |N_t| \le \lambda t \right] \exp\left( 2\beta Y_\lambda \right), \qquad \text{and} \qquad B = \sum_{n\ge1} P_b\left[ |N_t| \in [n\lambda t,\ (n+1)\lambda t] \right] \exp\left( 2\beta Y_{\lambda(n+1)} \right). $$
We will now bound the terms $A$ and $B$ separately.

Step 4: The factor $A$. We can bound $P_b[|N_t| \le \lambda t]$ by 1, and invoking (23) we easily get

$$ \mathbf{E}\left[ (\log A)_+ \right] \le 2\beta\, \mathbf{E}[Y_\lambda] \le 2\beta K d t \sqrt{\lambda}. \qquad (24) $$
Step 5: The factor $B$. Let $\mu = \mathbf{E}[Y_{\lambda(n+1)}]$. Since $X$ is a Gaussian field, and since it is easy to show that

$$ \sigma^2 := \sup_{(m,\tilde t,\tilde x)} \mathrm{Var}\left( X(m;\ \tilde t;\ \tilde x) \right) \le t\, Q(0), $$

the so-called Borell-Sudakov inequality (see [1] or [27]) implies that, for any constant $a > 0$,

$$ \mathbf{E}\left[ \exp\left( a\, |Y_{\lambda(n+1)} - \mu| \right) \right] \le 2 \exp\left( \frac{a^2 \sigma^2}{2} \right) = 2 \exp\left( \frac{a^2 t Q(0)}{2} \right). \qquad (25) $$
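The concentration inequality (25) is easy to probe numerically. In the toy example below, entirely our own, $Y$ is the supremum of a finite family of i.i.d. standard Gaussians (so $\sigma^2 = 1$), and the empirical exponential moment of $|Y - \mathbf{E}Y|$ is compared with the bound $2 e^{a^2 \sigma^2 / 2}$; this illustrates the inequality, it does not prove it.

```python
import numpy as np

rng = np.random.default_rng(3)

# Y = sup of a finite centered Gaussian family with max variance sigma^2 = 1
k, n_mc, a = 25, 20_000, 1.0
Y = rng.standard_normal((n_mc, k)).max(axis=1)
mu = Y.mean()                              # empirical stand-in for E[Y]
lhs = np.mean(np.exp(a * np.abs(Y - mu)))  # E[exp(a |Y - mu|)], estimated
rhs = 2 * np.exp(a ** 2 / 2)               # Borell-Sudakov bound, sigma = 1
```

In practice the empirical moment sits well below the bound: the supremum concentrates around its mean far more tightly than the worst single variable would.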
Fix now a number $\gamma \in (1/2, 1)$ and write $\log_+(B) = (\log B)_+$. We have

$$ \frac{1}{t^\gamma}\, \mathbf{E}\left[ \log_+ B \right] = \mathbf{E}\left[ \log_+ \left( \sum_{n\ge1} P_b\left[ |N_t| \in [nt\lambda, (n+1)t\lambda] \right] \exp\left( 2\beta Y_{\lambda(n+1)} \right) \right)^{t^{-\gamma}} \right] $$
$$ \le \mathbf{E}\left[ \log_+ \left( \sum_{n\ge1} P_b\left[ |N_t| > nt\lambda \right] \exp\left( 2\beta (Y_{\lambda(n+1)} - \mu) \right) \exp\left( 2\beta K t d \sqrt{\lambda(n+1)} \right) \right)^{t^{-\gamma}} \right], $$

where we used that (23) implies $\mu \le K d t \sqrt{(n+1)\lambda}$. We also know that for any sequence of non-negative reals $(x_n)_n$ the following holds: $\left( \sum_n x_n \right)^{t^{-\gamma}} \le \sum_n x_n^{t^{-\gamma}}$. Thus we have

$$ \frac{1}{t^\gamma}\, \mathbf{E}\left[ \log_+ B \right] \le \mathbf{E}\left[ \log_+ \left( \sum_{n\ge1} \left( P_b\left[ |N_t| > nt\lambda \right] \right)^{t^{-\gamma}} \exp\left( \frac{2\beta}{t^\gamma} (Y_{\lambda(n+1)} - \mu) \right) \exp\left( 2\beta K d\, t^{1-\gamma} \sqrt{\lambda(n+1)} \right) \right) \right] $$
$$ \le \mathbf{E}\left[ \log_+ \left( d^{t^{-\gamma}} \sum_{n\ge1} \exp\left( \frac{2\beta}{t^\gamma} (Y_{\lambda(n+1)} - \mu) \right) \exp\left( -\frac{t^{1-\gamma}}{2}\, y_n \right) \right) \right], $$

where we used estimate (22) in the following way:

$$ P_b\left[ |N_t| > nt\lambda \right] \le \sum_{j=1}^d P_b\left[ N_t^j > \frac{nt\lambda}{d} \right] = d\, P_b\left[ N_t^1 > \frac{nt\lambda}{d} \right] \le d \exp\left( -\frac{t}{2} \left( \frac{\lambda n \varepsilon}{d} \right)^2 + \frac{t\lambda n}{d} \right), $$

and where we have set

$$ y_n = \left( \frac{\varepsilon\lambda n}{d} \right)^2 - \frac{2\lambda n}{d} - 4\beta K d \sqrt{\lambda(n+1)}. $$
Now, bounding log_+(x) from above by log(1 + x) for x ≥ 0, and using Jensen's inequality, we have:

  (1/t) E[log_+ B] ≤ t^{γ−1} log( 1 + d^{t^{−γ}} Σ_{n≥1} E[ exp( (2β/t^γ)(Y_{α(n+1)} − μ) ) ] exp( −(t^{1−γ}/2) y_n ) ),

so, using (25) with a = 2β/t^γ, it is readily checked that

  (1/t) E[log_+ B] ≤ t^{γ−1} log( 1 + 2 d^{t^{−γ}} exp( 2β²Q(0)/t^{2γ−1} ) Σ_{n≥1} exp( −(t^{1−γ}/2) y_n ) ).
In order for the series above to converge, we must choose α so as to compensate the negative terms in y_n. Specifically, we choose

  (αε/d)² = 16βKd√α,  i.e.  α = ( 16βKd³ε^{−2} )^{2/3}.   (26)

With this choice, we end up with:

  y_n = (αε/d)² ( n² − 2dn/(αε²) − (1/4)√(n+1) ).
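As a sanity check of (26): the stated value α = (16βKd³ε^{−2})^{2/3} is exactly the root of the equation (αε/d)² = 16βKd√α. A minimal numerical sketch, with arbitrary (hypothetical) parameter values:

```python
import math

# arbitrary positive test values for the parameters appearing in (26)
beta, K, d, eps = 2.0, 1.5, 3.0, 0.1

alpha = (16 * beta * K * d**3 * eps**-2) ** (2 / 3)

lhs = (alpha * eps / d) ** 2              # left-hand side of the defining equation
rhs = 16 * beta * K * d * math.sqrt(alpha)  # right-hand side
assert math.isclose(lhs, rhs, rel_tol=1e-12)
```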
Now we note that, if we choose ε and β such that

  βε ≥ d^{−3/2},  then  αε²/d = (16Kβε)^{2/3} d ≥ 4,   (27)

so that

  y_n ≥ (αε/d)² ( n² − n/2 − (1/4)√(n+1) ),
and since n² − n/2 − (1/4)√(n+1) ≥ n/8 for all n ≥ 1, we get

  Σ_{n≥1} exp( −(t^{1−γ}/2)(αε/d)² ( n² − n/2 − (1/4)√(n+1) ) ) ≤ Σ_{n≥1} exp( −(t^{1−γ}/2)(αε/d)² n/8 )
  = [ 1 − exp( −(t^{1−γ}/16)(αε/d)² ) ]^{−1} − 1.

Notice that this last term can be made smaller than 1 if t is large enough.
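The two elementary facts used in this display (the pointwise bound n² − n/2 − (1/4)√(n+1) ≥ n/8, and the geometric summation) are easily spot-checked; the value standing in for t^{1−γ}(αε/d)² below is an arbitrary assumption:

```python
import math

# pointwise bound used to compare the two series above
for n in range(1, 10_000):
    assert n**2 - n / 2 - math.sqrt(n + 1) / 4 >= n / 8

# geometric sum: sum_{n>=1} r^n = 1/(1-r) - 1, here with r = exp(-s/16)
s = 2.5                                   # stands in for t^{1-gamma} * (alpha*eps/d)^2
r = math.exp(-s / 16)
series = sum(r**n for n in range(1, 2000))
assert math.isclose(series, 1 / (1 - r) - 1, rel_tol=1e-12)
```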
Hence we can write a final estimate on E[log_+ B] as follows: for large t we have

  (1/t) E[log_+ B] ≤ t^{γ−1} log( 1 + 2 d^{t^{−γ}} exp( 2β²Q(0)/t^{2γ−1} ) ) ≤ t^{γ−1} log( 1 + 2 d^{t^{−γ}} ) + 2β²Q(0)/t^γ.   (28)
Final step. Using inequalities (24) and (28) and the value of α, we can estimate p_t^ε(β) in the following way:

  p_t^ε(β) ≤ 2βKd√α + (log 2)/t + t^{γ−1} log( 1 + 2 d^{t^{−γ}} ) + 2β²Q(0)/t^γ ≤ 2βKd√α + o(1).

So, using the value of α given in (26), we have

  p_t^ε(β) ≤ C β^{4/3} d² ε^{−2/3} + o(1),   (29)
where C is a constant that depends on Q and that can change from line to line. Plugging this result into (20) and taking the limit as t → ∞, we get

  lim sup_{t→∞} p_t(β) ≤ C ( β² d^H ε^{2H} + d² β^{4/3} ε^{−2/3} ).

In order to make this upper bound as small as possible, we can choose ε such that

  β² d^H ε^{2H} = d² β^{4/3} ε^{−2/3},  i.e.  ε = d^{(6−3H)/(2+6H)} β^{−1/(1+3H)},

so that

  lim sup_{t→∞} p_t(β) ≤ C β^{(2+4H)/(1+3H)} d^{7H/(1+3H)},

which is the announced result. We then only need to check for which values of β we are allowed to make this choice of ε. Condition (18) states that we must use ε√d ≤ r_1. This is equivalent to β ≥ β_0 := (r_1)^{−1−3H} d^{5/2}. One can check that in this case the restriction on ε, β in (27) is trivially satisfied.
3.2 Lower bound in the Brownian case

In the following proposition, which implies the lower bound in Theorem 1.1, we shall also try to specify the dependence of the constants with respect to the dimension d. Let us state an interesting feature of this dependence. The proof of the proposition below shows that the results it states hold only for β ≥ β_0 = cd^{1−H/2}. One may ask what happens to the behavior of the partition function when the dimension is linked to the inverse temperature via the relation β = β_0, and one allows the dimension to be very large. The lower bounds on the value p(β) in the proposition below will then increase, and while they must still not exceed the global bound β²Q(0)/2, the behavior for large β turns out to be quadratic in many cases. The reader will check that, when H > 1/2, this translates as p(β) ≥ cβ^{2/(2−H)}, which is quadratic when H = 1, and that p(β) ≥ cβ² for all H ≤ 1/2. This is an indication that for extremely high dimensions and inverse temperatures, for H ≤ 1/2 or H = 1, strong disorder may not hold: strong disorder for Brownian polymers may break down for complex, infinite-dimensional polymers. This is only tangential to our presentation, however.
Proposition 3.4. Recall that δ has been defined at (17), and assume that there exist a number H ∈ (0, 1] and some positive constants c_2, r_2 such that for all x, y ∈ R^d with |x − y| ≤ r_2, we have

  δ(x − y) ≥ c_2 |x − y|^H.   (30)

Then if H ≤ 1/2, there exists a constant C depending only on Q, and a constant β_0 depending only on Q and d, such that, for all β ≥ β_0,

  p(β) ≥ C d^{(2H−1)/(H+1)} β^{2/(H+1)}.

On the other hand, if H > 1/2, there exists a constant C′ depending only on Q, and a constant β_0′′ depending only on Q and d, such that for all β ≥ β_0′′,

  p(β) ≥ C′ d^{(2H−1)/3} β^{4/3}.
Proof. Here again, we divide the proof in several steps.

Step 1: Strategy. From relation (14), we have

  p(β) = sup_{t≥0} p_t(β),

where p_t(β) is defined by equation (13). So a lower bound for p(β) will be obtained by evaluating p_t(β) for any fixed value of t. Additionally, by the positivity of the exponential factor in the definition of Z_t, one may include as a factor inside the expectation E_b the sum of the indicator functions of any disjoint family of events of b. In fact, we will need only two events, which will give the main contribution to Z_t at a logarithmic scale.

Step 2: Setup. Let A_+(b) and A_−(b) be two disjoint events defined on the probability space of b under P_b, which will be specified later on. Set

  X_b := −βH_{2t} = β ∫_0^{2t} W(ds, b_s).

Conditioning on the two events A_+(b) and A_−(b) and using Jensen's inequality, we have

  E(log Z_{2t}) ≥ log( min{ P_b(A_+), P_b(A_−) } ) + E[ max{ Z̃_+, Z̃_− } ],   (31)

where

  Z̃_+ := E_b[X_b | A_+]  and  Z̃_− := E_b[X_b | A_−].

These two random variables form a pair of centered jointly Gaussian random variables: indeed, they are both limits of linear combinations of values of a single centered Gaussian field. This implies

  E[ max{ Z̃_+, Z̃_− } ] = (1/√(2π)) ( E[ (Z̃_+ − Z̃_−)² ] )^{1/2}.
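This expected-maximum identity for a centered jointly Gaussian pair can be sanity-checked by simulation; the covariance matrix below is an arbitrary choice, not one coming from the polymer model:

```python
import numpy as np

rng = np.random.default_rng(0)
cov = np.array([[2.0, 0.7],
                [0.7, 1.5]])                     # arbitrary covariance matrix
z = rng.multivariate_normal([0.0, 0.0], cov, size=2_000_000)

mc = z.max(axis=1).mean()                        # Monte Carlo E[max(Z1, Z2)]
sd_diff = np.sqrt(cov[0, 0] + cov[1, 1] - 2 * cov[0, 1])
closed = sd_diff / np.sqrt(2 * np.pi)            # (2 pi)^{-1/2} * sd(Z1 - Z2)

print(mc, closed)                                # the two values nearly agree
assert abs(mc - closed) < 5e-3
```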
Therefore we only have to choose sets A_+ and A_− that are not too small, but still decorrelated enough so that condition (30) guarantees a certain amount of positivity in the variance of Z̃_+ − Z̃_−.

Step 3: Choice of A_+ and A_−. Let f be a positive increasing function. We take

  A_+ = { f(t) ≤ b_s^i ≤ 2f(t), ∀i = 1, …, d, ∀s ∈ [t, 2t] },
  A_− = { −2f(t) ≤ b_s^i ≤ −f(t), ∀i = 1, …, d, ∀s ∈ [t, 2t] }.

In other words, we force each component of our trajectory b to be, during the entire time interval [t, 2t], in one of two boxes of edge size f(t) which are at distance 2f(t) from each other. Because these two boxes are symmetric about the starting point of b, the corresponding events have the same probability. While this probability can be calculated in an arguably explicit way, we give here a simple lower bound argument for it. Using time scaling, the Markov property of Brownian motion, and the notation a = f(t)/√t, we have

  P_b(A_+) = Π_{i=1}^d P_b( ∀s ∈ [1, 2] : b_s^i ∈ [a, 2a] )
  = Π_{i=1}^d (1/√(2π)) ∫_a^{2a} P_b( ∀s ∈ [0, 1] : b_s^i + y ∈ [a, 2a] ) e^{−y²/2} dy
  ≥ Π_{i=1}^d (1/√(2π)) ∫_{5a/4}^{7a/4} P_b( ∀s ∈ [0, 1] : b_s^i + y ∈ [y − a/4, y + a/4] ) e^{−y²/2} dy
  = ( P_b( b_1^1 ∈ [5a/4, 7a/4] ) P_b( ∀s ∈ [0, 1] : |b_s^1| ≤ a/4 ) )^d.   (32)
Step 4: Estimation of Z̃_+ and Z̃_−. It was established in [12] (inequality (18) on page 24 and the calculation following it, on the same page) that, in dimension d = 1,

  E[ (Z̃_+ − Z̃_−)² ] ≥ β² ∫_t^{2t} E[ ( δ(x*_{s,+} − x*_{s,−}) )² ] ds,

where the quantities x*_{s,+} and x*_{s,−} are random variables such that for all s ∈ [t, 2t]: x*_{s,+} ∈ [f(t), 2f(t)] and x*_{s,−} ∈ [−2f(t), −f(t)]. In dimension d ≥ 1 the result still holds. In fact, in this case we have x*_{s,+}, x*_{s,−} ∈ R^d, so it is sufficient to take each component of x*_{s,+} in the interval [f(t), 2f(t)] and each component of x*_{s,−} in [−2f(t), −f(t)], so that their distance is greater than d^{1/2}f(t). Thus, using condition (30), we have

  E[ (Z̃_+ − Z̃_−)² ] ≥ β² ∫_t^{2t} C |x*_{s,+} − x*_{s,−}|^{2H} ds ≥ C t β² d^H (f(t))^{2H},   (33)

where as usual C is a constant that can change from line to line. Hence we obtain:

  E[ max{ Z̃_+, Z̃_− } ] = (1/√(2π)) ( E[ (Z̃_+ − Z̃_−)² ] )^{1/2} ≥ C β √t (f(t))^H d^{H/2}.   (34)
Observe that in order to use condition (30) we have to impose f(t) ≤ r_2.

Step 5: The case H ≤ 1/2. It is possible to prove that in this case the optimal choice for f is f(t) = √t, which corresponds to a = 1, so that P_b(A_+) is a universal constant that does not depend on t. Thus we have, from (31), (32) and (34), for any t > 0,

  p_{2t}(β) = E[log Z_{2t}]/(2t) ≥ (d log C)/(2t) + C β d^{H/2} t^{(H−1)/2}.   (35)

Now we may maximize the above function over all possible values of t > 0. To make things simple, we choose t so that the second term equals twice the absolute value of the first, yielding t of the form

  t = C d^{(2−H)/(H+1)} β^{−2/(H+1)},

and therefore

  sup_{t>0} p_{2t}(β) ≥ C d^{(2H−1)/(H+1)} β^{2/(H+1)}.

This result holds as long as the use of condition (30) can be justified, namely as long as f(t) ≤ r_2. This is achieved as soon as β ≥ β_0, where β_0 = C r_2^{−H−1} d^{1−H/2}; and since H ≤ 1/2, β_0 ≥ C d^{3/4}.
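The exponent bookkeeping in the last two displays can be verified numerically: normalizing the constant C to 1 and plugging t = d^{(2−H)/(H+1)} β^{−2/(H+1)} into βd^{H/2} t^{(H−1)/2} returns d^{(2H−1)/(H+1)} β^{2/(H+1)}. A minimal sketch with illustrative values of H, d, β:

```python
import math

H, d, beta = 0.3, 5.0, 40.0          # arbitrary test values with H <= 1/2

t = d ** ((2 - H) / (H + 1)) * beta ** (-2 / (H + 1))
second_term = beta * d ** (H / 2) * t ** ((H - 1) / 2)
claimed = d ** ((2 * H - 1) / (H + 1)) * beta ** (2 / (H + 1))

assert math.isclose(second_term, claimed, rel_tol=1e-12)
```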
Step 6: The case H > 1/2. In this case we consider f(t) = ct^ν, for a given ν ∈ [0, 1/2) and some constant c chosen below. Thus we have a = ct^{ν−1/2}. In this case, if a is larger than a universal constant K_u, the result (32) yields that, for some constant C, we have

  P_b(A_+) ≥ Π_{i=1}^d exp(−Ca²) = exp( −Cc²d t^{2ν−1} ).

So, using again condition (30) and relation (34), we obtain

  p_{2t}(β) ≥ −C d t^{2ν−2} + C β d^{H/2} t^{νH−1/2},

where the constant C may also include the factor c². Again, choosing t so that the second term equals twice the absolute value of the first, we have

  t = C d^{(1−H/2)/(ν(H−2)+3/2)} β^{−1/(ν(H−2)+3/2)},   (36)

and so

  sup_{t>0} p_{2t}(β) ≥ C d^{(H−1/2)/(ν(H−2)+3/2)} β^{(2−2ν)/(ν(H−2)+3/2)}.

In order to maximize the power of β in the lower bound for sup_{t>0} p_t(β), we should find the maximum of the function

  g(ν) = (2 − 2ν)/( ν(H−2) + 3/2 ),  for 0 ≤ ν < 1/2.

Since this function is monotone decreasing when H > 1/2, the maximum is reached at ν = 0, with g(0) = 4/3.

Recall once again that, in order to apply condition (30) in the computations above, we had to assume f(t) ≤ r_2; since now f(t) is the constant c, we only need to choose c = r_2. We also had to impose a = r_2 t^{−1/2} > K_u, which translates as β > β_0′′ := (K_u/r_2)^{4/3} d^{1−H/2}.
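The monotonicity claim for g, and hence the optimality of ν = 0, can be spot-checked numerically (the values of H are illustrative):

```python
# g(nu) = (2 - 2*nu) / (nu*(H - 2) + 3/2); for H > 1/2 it should be
# decreasing on [0, 1/2), so its maximum on that interval is g(0) = 4/3.
def g(nu, H):
    return (2 - 2 * nu) / (nu * (H - 2) + 1.5)

for H in (0.6, 0.8, 1.0):
    vals = [g(i / 100, H) for i in range(50)]      # grid over [0, 1/2)
    assert all(x > y for x, y in zip(vals, vals[1:]))  # strictly decreasing
    assert abs(vals[0] - 4 / 3) < 1e-12                # g(0) = 4/3
```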
3.3 Logarithmic regularity scale

As mentioned in the introduction, the special shape of our Figure 1.2 induces us to explore the regions of low spatial regularity for W, in order to investigate some new possible scalings in the strong disorder regime. In other words, we shall work in this section under the assumption that there exist positive constants c_0, c_1, r_1, and γ ∈ (0, ∞), such that for all x, y with |x − y| ≤ r_1,

  c_0 log^{−γ}( 1/|x − y| ) ≤ δ(x − y) ≤ c_1 log^{−γ}( 1/|x − y| ).   (37)

Assumption (37) implies that W is not spatially Hölder-continuous for any exponent H ∈ (0, 1]. Moreover, the theory of Gaussian regularity implies that, if γ > 1/2, W is almost surely continuous in space, with modulus of continuity proportional to log^{−γ+1/2}(1/|x − y|), while if γ ≤ 1/2, W is almost surely not uniformly continuous on any interval in space. The case γ = 1/2, which is the threshold between continuous and discontinuous W, is of special interest, since it can be related to the discrete-space polymer which will be studied in the next section. The main result which will be proved here is the following.

Theorem 3.5. Assume condition (37). Then, for some constants C_0 and C_1 depending only on Q, and for all β large enough,

  C_0 (β²/d) log^{−2γ}( β√d ) ≤ p(β) ≤ C_1 β² log^{−2γ}( β√d ).
Proof. Step 1: Setup. Nearly all the calculations in the proofs of Propositions 3.1 and 3.4 are still valid in our situation. Again, C may change from line to line.

Step 2: Lower bound. For the lower bound, reworking the argument in Step 2 of the proof of Proposition 3.4, using the function log^{−γ}(x^{−1}) instead of the function x^H, we obtain the following instead of (33):

  E[ (Z̃_+ − Z̃_−)² ] ≥ t (βc_0)² ( log( 1/(√d f(t)) ) )^{−2γ},

which implies, instead of (35) in Step 5 of that proof, the following:

  p_{2t}(β) ≥ (d log C)/(2t) + C β t^{−1/2} ( log( 1/(√d f(t)) ) )^{−γ}.

In other words, now choosing f(t) = t^{1/2} as we did in the case H ≤ 1/2 (recall that we are in the case of small H, as stated in the introduction),

  p_{2t}(β) ≥ (d log C)/(2t) + C β t^{−1/2} ( log( 1/√(dt) ) )^{−γ}.
Now choose t such that the second term on the right-hand side above equals twice the absolute value of the first, i.e.

  t^{1/2} ( log( 1/√(dt) ) )^{−γ} = C d β^{−1}.

For small t, the function on the left-hand side is increasing, so that the above t is uniquely defined when β is large. We see in particular that when β is large, t is small, and we have t^{−1} ≤ β². This fact is then used to imply

  1/t = (Cβ/d)² ( log( 1/√(dt) ) )^{−2γ} ≥ C (β/d)² log^{−2γ}(β).

Therefore, for some constants β_2 and c depending only on Q, for the t chosen above with β ≥ β_2,

  p_{2t}(β) ≥ C (β²/d) ( log( β√d ) )^{−2γ}.
Step 3: Upper bound. Here, returning to the proof of Proposition 3.1, the upper bound (29) in the final step of that proof holds regardless of δ, and therefore, using the result of Remark 3.3 with δ(r) = log^{−γ}(1/r), we immediately get that there exists C depending only on Q such that for all ε < r_1 and all β > β_3,

  lim sup_{t→∞} p_t(β) ≤ C β² ( log( 1/(ε√d) ) )^{−2γ} + C d² β^{4/3} ε^{−2/3},

as long as one is able to choose ε so that βε ≥ 1. By equating the two terms on the right-hand side of the last inequality above, we get

  ε ( log( 1/(ε√d) ) )^{−3γ} = C d³ β^{−1}.

Since the function ε ↦ ε ( log(1/(ε√d)) )^{−3γ} is increasing for small ε, the above equation defines ε uniquely when β is large, and in that case ε is small. We also see that for any η > 0, for large β, we have 1/ε ≥ β^{1−η}. Therefore we can write, for β ≥ β_3, almost surely,

  lim sup_{t→∞} p_t(β) ≤ C (1 − η)^{−2γ} β² ( log( β√d ) )^{−2γ}.

This finishes the proof of the theorem.
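The implicit choice of t in Step 2 can be illustrated numerically: solving t^{1/2}(log(1/√(dt)))^{−γ} = d/β by bisection (normalizing C to 1) and checking that t^{−1} ≤ β², as used in the proof. All numerical values are illustrative assumptions:

```python
import math

D, GAMMA = 3.0, 0.5          # illustrative dimension and regularity exponent

def lhs(t):
    # left-hand side t^{1/2} * (log(1/sqrt(D*t)))^{-GAMMA}; increasing for small t
    return math.sqrt(t) * math.log(1.0 / math.sqrt(D * t)) ** (-GAMMA)

def solve_t(beta):
    # bisection for the unique small root of lhs(t) = D/beta
    lo, hi = 1e-30, 1.0 / (math.e * D)       # on this range the logarithm stays positive
    for _ in range(200):
        mid = (lo + hi) / 2
        if lhs(mid) < D / beta:
            lo = mid
        else:
            hi = mid
    return lo

for beta in (1e3, 1e5, 1e7):
    t = solve_t(beta)
    assert 1.0 / t <= beta**2                          # t^{-1} <= beta^2, as claimed
    assert abs(lhs(t) - D / beta) < 1e-9 * (D / beta)  # defining equation holds
```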
4 Estimates of the free energy: discrete space

Recall that, up to now, we have obtained our bounds on the free energy in the following manner: the upper bound has been computed by evaluating the supremum of a well-chosen random Gaussian field, while the lower bound has been obtained by introducing two different events, depending on the Brownian configuration, which capture most of the logarithmic weight of our polymer distribution. This strategy also works in the case of the discrete-space polymer whose Hamiltonian is described by (5), without much additional effort, but a separate proof is still necessary. This section shows how this procedure works, resulting in the proof of Theorem 1.5.

Quantities referring to the discrete-space polymer have been denoted by b̂, Ŵ, Ê_b, Ê, etc. In this section, for notational sake, we will omit the hats in the expressions above, and write instead b, W, E_b, E as in the Brownian case. Recall that, since the path b under E_b is a continuous-time homogeneous Markov process, it can be decomposed as the composition of a discrete-time Markov chain and a Poisson process with parameter κ. We impose only a single condition on b: that it have finite range, i.e. that the number of possible sites that the chain can jump to in one step is a finite number q. For instance, if b is the simple symmetric random walk, then κ = 2d and q = 2d.

Recall our simple non-degeneracy condition on Q in this case: we assume that there are two distinct sites e_1 and e_2 in Z^d which can both be reached with positive probability after one jump of the Markov process b, such that

  c_Q := Q(0) − Q(e_1 − e_2) > 0.   (38)

Condition (38), which is used only in the lower bound result, is extremely weak. For instance, if b has a positive probability of reaching each of its nearest neighbors in one step, one can replace Condition (38) by the condition that Q not be identically constant; brief details justifying this claim are given after the proof of the next proposition. Asking that Q not be identically constant is, in fact, no restriction at all, since the case of Q identically constant is equivalent to W not depending on x, in which case the polymer measure is the original measure P_b, which is of no interest.
4.1 Lower bound for the discrete-space polymer

The lower bound announced in Theorem 1.5 is contained in the following.

Proposition 4.1. Assume condition (38) holds true. Then there exists a constant β_0 > 0, which depends on c_Q and the law of b, such that if β > β_0 then, almost surely,

  lim_{t→∞} (1/t) log Z_t ≥ ( c_Q/(18π) ) β²/log β.
Proof. Invoking Proposition 2.2, we have p(β) = lim_{t→∞} p_t(β) = sup_{t≥0} p_t(β). Therefore, any chosen fixed value of t yields p_t(β) as a lower bound for p(β).

Recall that for our Markov process b in continuous time and discrete space, the total number of jumps N_t before time t is a Poisson process with parameter κ. For a lower bound on p_t(β), we throw out, in the expectation defining Z_t, all the paths b that do not jump exactly once before time t. Recall that, by hypothesis, the two sites e_1 and e_2 both have a positive probability of being reached at the time of that one jump; we also throw out all jump positions other than e_1 and e_2. Therefore, with η_0 > 0 the smaller of the two probabilities of reaching e_1 and e_2 in one jump, and 1_{[0,t]}(s) ds/t the law of the single jump time given N_t = 1,

  Z_t ≥ P_b[N_t = 1] η_0 ∫_0^t (ds/t) ( e^{βW(s,0)+βW([s,t],e_1)} + e^{βW(s,0)+βW([s,t],e_2)} ),
where W([s,t], x) := W(t,x) − W(s,x). Using this and Jensen's inequality, we get

  Z_t ≥ η_0 κt e^{−κt} ∫_0^t (ds/t) ( e^{βW(s,0)+βW([s,t],e_1)} + e^{βW(s,0)+βW([s,t],e_2)} ),

  (1/t) E(log Z_t) ≥ log(η_0 κt)/t − κ + β ∫_0^t (ds/t²) E[ max( W([s,t],e_1), W([s,t],e_2) ) ].
Now we evaluate the expected maximum above. The vector (W([s,t],e_1), W([s,t],e_2)) is jointly Gaussian, with common variance (t−s)Q(0) and covariance (t−s)Q(e_1−e_2). Therefore

  E[ max( W([s,t],e_1), W([s,t],e_2) ) ] = (1/2) E[ |W([s,t],e_1) − W([s,t],e_2)| ]
  = (1/√(2π)) ( Var[ W([s,t],e_1) − W([s,t],e_2) ] )^{1/2} = (1/√π) √(t−s) √( Q(0) − Q(e_1−e_2) ).   (39)
Thus, recalling condition (38), where c_Q = Q(0) − Q(e_1−e_2), and choosing t = C log²β/β², we obtain

  (1/t) E(log Z_t) ≥ (log t)/t + log(η_0 κ)/t − κ + ( 2β/(3√(πt)) ) √(c_Q)
  ≥ (β²/log β) ( −2/C + 2√(c_Q)/(3√(Cπ)) ) + ( β²/(C log²β) ) ( log C + 2 log log β + log(η_0 κ) ) − κ.   (40)

The proof is completed by noting that the constant ( −2/C + 2√(c_Q)/(3√(Cπ)) ) in the leading term above is maximal for C = 36π/c_Q, where it equals c_Q/(18π); as for β, it only needs to be chosen large enough so that the second and third terms in (40) contribute nonnegatively, which is possible because of the presence of the term 2 log log β.
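The optimization of the leading constant in (40) is elementary calculus; it can also be confirmed numerically, as in the following sketch with an arbitrary value of c_Q:

```python
import math

c_Q = 0.7                                  # arbitrary positive constant

def h(C):
    # leading-order constant in (40), as a function of C
    return -2.0 / C + 2.0 * math.sqrt(c_Q) / (3.0 * math.sqrt(math.pi * C))

C_star = 36.0 * math.pi / c_Q              # claimed maximizer
assert math.isclose(h(C_star), c_Q / (18.0 * math.pi), rel_tol=1e-12)

# a crude grid search confirms no value of C does better
best = max(h(0.01 * k) for k in range(1, 200_000))
assert best <= h(C_star) + 1e-12
```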
Now assume that, like the simple symmetric random walk, our Markov chain has a positive probability of reaching every one of its nearest neighbors. To adapt the above proof so that it uses, instead of Condition (38), the fact that Q is not identically constant, note that since Q(0) = max_{x∈Z^d} Q(x), we already have Q(e_0) < Q(0) for some e_0 ∈ Z^d. First consider the case where e_0 ∈ 2Z^d, i.e. all of e_0's components are even. Then we can write e_0 = 2e_1 with e_1 ∈ Z^d. Then in the above proof, instead of choosing between two trajectories with exactly one jump, one reaching e_1 and one reaching e_2, use two nearest-neighbor trajectories with the same number of jumps, one reaching e_1 and one reaching −e_1. The remainder of the proof is identical in structure and only slightly more involved computationally; it is omitted. In the case where there is no e_0 ∈ 2Z^d such that Q(e_0) < Q(0), Q is identically equal to Q(0) on all of 2Z^d. Because of W's homogeneity, this implies that Q takes on only two values, Q(0) on 2Z^d and Q(1) < Q(0) on Z^d \ 2Z^d. Said differently, W is periodic in all d directions with period 2. This very specific model is still non-degenerate enough for the conclusion of the above proposition to hold. One can show this similarly to the above proof, by using two trajectories, one with exactly one jump and one with exactly two jumps. The details are left to the reader.
4.2 Upper bound for the discrete-space polymer

The upper bound result in Theorem 1.5 can be summarized in the following proposition.

Proposition 4.2. Under the assumption that Q(0) < ∞, there exists a constant β_0′′ > 0, which depends on Q, κ, and q, and a universal constant C > 0, such that if β > β_0′′ then, almost surely,

  lim_{t→∞} (1/t) log Z_t ≤ C Q(0) q² κ β²/log β.
Proof. Similarly to the decomposition of the discretized path in Step 3 of the proof of Proposition 3.1, our homogeneous Markov path b here, which already lives in Z^d, necessarily jumps at the jump times τ_i of a single Poisson process (N_t)_{t≥0}. We then only need to define S(t, n) as the simplex of size t and of dimension n, which is thus the set of all possible jump times t⃗ = (τ_i)_{i=1}^n conditional on {N_t = n}; its distribution L(dt⃗) is uniform on S(t, n). Since our Markov process b is not restricted to being a simple random walk, the set P_n of all possible jump-site sequences x⃗ = (x_i)_{i=1}^n for a trajectory of length n is no longer of cardinality (2d)^n, but the hypothesis that our Markov process has finite range means that Card[P_n] ≤ q^n. We denote the probability of the discrete path x⃗ ∈ P_m by π_m(x⃗). We can then define

  X(m, t⃗, x⃗) := Σ_{i=0}^m { W(τ_{i+1}, x_i) − W(τ_i, x_i) },

with the convention τ_0 = 0 and τ_{m+1} = t. Let μ be a fixed positive number which will be chosen later. Let T_μ = ∪_{m≤μt} S_m, where S_m := {m} × S(t, m) × P_m, and set also Y_μ = sup_{T_μ} X. As in the Brownian case, we can bound E[log Z_t] above as follows:

  E[log Z_t] ≤ E[log(A + B)] ≤ E[log_+ A] + E[log_+ B] + log 2,   (41)
where log_+ A = (log A)_+ = max(log A, 0), and

  A := P_b[N_t ≤ μt] exp(βY_μ),   (42)
  B := Σ_{m>μt} P_b[N_t = m] E_b[ exp( βX(m, t⃗, x⃗) ) | N_t = m ].

Step 1: The term A. Similarly to inequality (23) in the continuous case, one invokes the proof of equation (30) in [12] to obtain that

  E[ sup_{T_{nμ}} X(k, t⃗, x⃗) ] ≤ K t q √(nμ),   (43)

where K depends only on Q. So, bounding P_b[N_t ≤ μt] by 1, we have

  E[log_+ A] ≤ β E[Y_μ] ≤ β K t q √μ.   (44)
Step 2: The term B. The term B defined in (42) can be bounded as follows:

  E[log_+ B] = E[ log_+ Σ_{m>μt} P_b[N_t = m] Σ_{x⃗∈P_m} π_m(x⃗) ∫_{S(t,m)} exp( βX(m, t⃗, x⃗) ) L(dt⃗) ]
  = E[ log_+ Σ_{n≥1} Σ_{m∈[μnt, μ(n+1)t]} P_b[N_t = m] Σ_{x⃗∈P_m} π_m(x⃗) ∫_{S(t,m)} exp( βX(m, t⃗, x⃗) ) L(dt⃗) ]
  ≤ E[ log_+ Σ_{n≥1} P_b[N_t > μnt] exp( βY_{(n+1)μ} ) ].

So, using the fact that for t ≥ 1 the power t^{−1} of a sum is less than the sum of the terms raised to the power t^{−1}, followed by Jensen's inequality, we have, similarly to what we did in the proof of Proposition 3.1,

  (1/t) E[log_+ B] ≤ log( 1 + Σ_{n≥1} ( P_b[N_t > μnt] )^{t^{−1}} E[ exp( βY_{(n+1)μ}/t ) ] ).
Using once again Gaussian supremum analysis results (see [1] or [27]), for any μ, x > 0,

  E[exp(xY_μ)] ≤ exp( x E[Y_μ] ) exp( x² K_u max_{(m,t⃗,x⃗)∈T_μ} Var[ X(m, t⃗, x⃗) ] ) ≤ exp( x q K t √μ ) exp( x² K_u t Q(0) ),

where K_u designates a universal constant, and where we used (43) and the trivial fact E[X(m, t⃗, x⃗)²] = Q(0)t. Hence

  E[ exp( βY_{(n+1)μ}/t ) ] ≤ exp( βqK√(μ(n+1)) + β²K_u Q(0)/t ).
If we choose t such that t > (2βK_u Q(0))/(qK μ^{1/2}), the estimate on B becomes

  (1/t) E[log_+ B] ≤ log( 1 + Σ_{n≥1} ( P_b[N_t > μnt] )^{t^{−1}} exp( βqK√μ ( √(n+1) + 1/2 ) ) ).   (45)

Step 3: The tail of N_t. Using the well-known tail estimate

  P_b[N_t > μt] ≤ exp( −μt log(μ/κ) − t(κ − μ) ),

valid for all μ ≥ κ (see e.g. [16, pages 16-19]), if we set μ′ = μ/κ and we assume μ′ ≥ exp(1 − 1/κ), we have

  P_b[N_t > μt] ≤ exp( −tμ′ log μ′ ).   (46)
Step 4: Grouping our estimates and choosing μ. From (45) and (46) we have

  (1/t) E[log_+ B] ≤ log( 1 + Σ_{n≥1} exp( −μ′n log(μ′n) + qβK√μ ( √(n+1) + 1/2 ) ) ).

To exploit the negativity of the exponential term, we simply require

  μ′ log μ′ = 4qKβ√μ.   (47)

Indeed, since n ≥ 1, we then have that the term inside the exponential is

  −μ′n log(μ′n) + βqK√μ ( √(n+1) + 1/2 )
  = −μ′n log(μ′n) + (1/4) μ′ log μ′ ( √(n+1) + 1/2 )
  ≤ −μ′n log μ′ + (1/4) μ′ log μ′ ( √(n+1) + 1/2 )
  = −(1/2) μ′n log μ′ − ( n/2 − (1/4)( √(n+1) + 1/2 ) ) μ′ log μ′
  ≤ −(1/2) μ′n log μ′,

which implies

  (1/t) E[log_+ B] ≤ log( 1 + Σ_{n≥1} exp( −(1/2) μ′n log μ′ ) ) = log( 1 + 1/( exp( (1/2)μ′ log μ′ ) − 1 ) ) =: c_κ.
The restriction μ′ ≥ exp(1 − 1/κ) implies that c_κ is a constant that depends on the Poisson parameter κ only. Combining this with (41) and (44), we get

  (1/t) E[log Z_t] ≤ (log 2)/t + c_κ + qKβ√μ.   (48)
Step 5: Conclusion. It is easy enough to see that, with

  x := ( 4q√κ βK )²,   (49)

equation (47) reads μ′ = x/log²μ′; it has a unique solution μ′ as soon as x exceeds e, and μ′ also exceeds e in that case: indeed μ′ = e when x = e, and dμ′/dx = ( log²μ′ + 2 log μ′ )^{−1} > 0 for all μ′ ≥ e. Therefore, since log²μ′ > 1, we can write μ′ ≤ x, and thus we also have:

  μ′ = x/log²μ′ ≥ x/log²x.   (50)

This lower bound on μ′ implies the following upper bound on μ′:

  μ′ = x/log²μ′ ≤ x/log²( x/log²x ) = x/( log x − 2 log(log x) )².   (51)

Since there exists x_0 such that, for any x > x_0, we have

  log x > 4 log(log x),   (52)

we can recast expression (51) into:

  μ′ ≤ 4x/log²x = ( 4q√κ βK )²/( log β + log(4q√κ K) )² ≤ ( 4q√κ K )² β²/log²β,
from which we obtain μ ≤ (4qκK)² β²/log²β (recall that μ = κμ′). Thus, for t large enough,

  (1/t) E[log Z_t] ≤ (log 2)/t + c_κ + 4K²q²κ β²/log β.

Taking limits as t tends to ∞ and choosing β so that

  β² > c_κ log β/(4K²q²κ),   (53)

the theorem is proved with C = 16K², after noting that in the proof of (30) in [12] the constant K is a universal constant times Q(0)^{1/2}.
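The elementary bounds (50)-(51) on the root μ′ of μ′ = x/log²μ′ can be confirmed numerically, e.g. by fixed-point iteration (the values of x are arbitrary):

```python
import math

def solve_mu(x, iters=300):
    # fixed-point iteration for mu' = x / log(mu')**2, started at mu' = x;
    # near the root the map is a contraction, so the oscillations damp out
    mu = x
    for _ in range(iters):
        mu = x / math.log(mu) ** 2
    return mu

for x in (1e2, 1e4, 1e8):
    mu = solve_mu(x)
    assert math.isclose(mu, x / math.log(mu) ** 2, rel_tol=1e-6)   # solves (47)
    assert x / math.log(x) ** 2 <= mu * (1 + 1e-9)                 # bound (50)
    assert mu <= (1 + 1e-9) * x / (math.log(x) - 2 * math.log(math.log(x))) ** 2  # bound (51)
```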
Finally, we show that the theorem holds for β large enough. Analyzing the conditions we used above, we only have to take β ≥ β_0′′ := max(K_{q,κ}, β*, β**), where K_{q,κ}, β* and β** are now specified. This is due to the fact that we assumed μ′ ≥ exp(1 − 1/κ); via (50), this holds as soon as x/log²x ≥ exp(1 − 1/κ), and therefore, from (49), we have to take β ≥ K_{q,κ}, where K_{q,κ} is a constant that depends only on q and κ. In addition, according to (52) and (53), β* and β** are the solutions of the following equations:

  β² = c_κ log β/(4K²q²κ)  and  log( 4q√κ Kβ ) = 4 log( log( 4q√κ Kβ ) ).
References

[1] R. Adler (1990). An Introduction to Continuity, Extrema and Related Topics for General Gaussian Processes. Institute of Mathematical Statistics, Hayward, CA.

[2] A. Bovier (2006). Statistical Mechanics of Disordered Systems. Cambridge University Press.

[3] P. Carmona; Y. Hu (2002). On the partition function of a directed polymer in a Gaussian random environment. Probab. Theory Relat. Fields 124, 431-457.

[4] P. Carmona; Y. Hu (2006). Strong disorder implies strong localization for directed polymers in a random environment. ALEA Lat. Am. J. Probab. Math. Stat. 2, 217-229.

[5] R. Carmona; L. Koralov; S. A. Molchanov (2001). Asymptotics for the almost sure Lyapunov exponent for the solution of the parabolic Anderson problem. Random Oper. Stochastic Equations 9, no. 1, 77-86.

[6] R. Carmona; S. A. Molchanov (1994). Parabolic Anderson problem and intermittency. Mem. Amer. Math. Soc. 518.

[7] R. Carmona; F. Viens (1998). Almost-sure exponential behavior of a stochastic Anderson model with continuous space parameter. Stochastics and Stochastics Reports 62, 251-273.

[8] F. Comets; N. Yoshida (2005). Brownian directed polymers in random environment. Comm. Math. Phys. 254, no. 2, 257-287.

[9] F. Comets; N. Yoshida (2006). Directed polymers in random environment are diffusive at weak disorder. Ann. Probab. 34, no. 5, 1746-1770.

[10] B. Derrida; H. Spohn (1988). Polymers on disordered trees, spin glasses, and traveling waves. New directions in statistical mechanics (Santa Barbara, CA, 1987). J. Statist. Phys. 51, no. 5-6, 817-840.

[11] D. Fisher; D. Huse (1991). Directed paths in random potential. Phys. Rev. B 43, no. 13, 10728-10742.

[12] I. Florescu; F. Viens (2006). Sharp estimation for the almost-sure Lyapunov exponent of the Anderson model in continuous space. Probab. Theory Related Fields 135, no. 4, 603-644.

[13] D. Huse; C. Henley (1985). Pinning and roughening of domain walls in Ising systems due to random impurities. Phys. Rev. Lett. 54, 2708-2711.

[14] J. Imbrie; T. Spencer (1988). Diffusion of directed polymers in a random environment. J. Statist. Phys. 52, no. 3-4, 609-626.

[15] D. Márquez-Carreras; C. Rovira; S. Tindel (2008). A model of continuous time polymer on the lattice. Preprint. Available online at http://arxiv.org/PS_cache/arxiv/pdf/0802/0802.3296v1.pdf.

[16] P. Massart (2007). Concentration Inequalities and Model Selection. École d'Été de Probabilités de Saint-Flour XXXIII - 2003. Lecture Notes in Mathematics 1896, Springer.

[17] O. Méjane (2004). Upper bound of a volume exponent for directed polymers in a random environment. Ann. Inst. H. Poincaré Probab. Statist. 40, no. 3, 299-308.

[18] M. Petermann (2000). Superdiffusivity of polymers in random environment. Ph.D. Thesis, Univ. Zürich.

[19] M. S. T. Piza (1997). Directed polymers in a random environment: some results on fluctuations. J. Statist. Phys. 89, no. 3-4, 581-603.

[20] C. Rovira; S. Tindel (2005). On the Brownian-directed polymer in a Gaussian random environment. J. Funct. Anal. 222, no. 1, 178-201.

[21] B. Roynette; M. Yor (2007). Penalising Brownian Paths: Rigorous Results and Meta-Theorems. Preprint.

[22] M. Talagrand (2003). Spin Glasses: a Challenge for Mathematicians. Springer.

[23] S. Tindel; C. A. Tudor; F. Viens (2004). Sharp Gaussian regularity on the circle, and applications to the fractional stochastic heat equation. J. Funct. Anal. 217, no. 2, 280-313.

[24] S. Tindel; F. Viens (1999). On space-time regularity for the stochastic heat equations on Lie groups. J. Funct. Anal. 169, no. 2, 559-603.

[25] S. Tindel; F. Viens (2002). Almost sure exponential behaviour for a parabolic SPDE on a manifold. Stochastic Process. Appl. 100, 53-74.

[26] S. Tindel; F. Viens (2005). Relating the almost-sure Lyapunov exponent of a parabolic SPDE and its coefficients' spatial regularity. Potential Anal. 22, no. 2, 101-125.

[27] F. Viens; A. Vizcarra (2007). Supremum concentration inequality and modulus of continuity for sub-nth chaos processes. J. Funct. Anal. 248, 1-26.