Stoch PDE: Anal Comp
https://doi.org/10.1007/s40072-018-0118-9

An invariance principle for the stochastic heat equation

Mathew Joseph

Received: 3 November 2017
© Springer Science+Business Media, LLC, part of Springer Nature 2018
Abstract  We approximate the white-noise driven stochastic heat equation by replacing the fractional Laplacian by the generator of a discrete time random walk on the one dimensional lattice, and by approximating white noise by a collection of i.i.d. mean zero random variables. As a consequence, we give an alternative proof of the weak convergence of the scaled partition function of directed polymers in the intermediate disorder regime to the stochastic heat equation; an advantage of the proof is that it gives the convergence of all moments.

Keywords  Stochastic heat equation · Weak approximation · Polymers · Interacting particle systems

Mathematics Subject Classification  Primary 60H15; Secondary 60F17 · 60K35
1 Introduction

Consider the following discrete space–time stochastic heat equation:
$$ u_{i+1}(k) - u_i(k) = \sum_{l\in\mathbb{Z}} P(k,l)\,\big[u_i(l) - u_i(k)\big] + \sigma(u_i(k))\cdot\xi_i(k), \qquad (1.1) $$
where $P$ is the transition kernel of a random walk $X_n$ on $\mathbb{Z}$ and $\sigma:\mathbb{R}\to\mathbb{R}$ is a Lipschitz continuous function with Lipschitz coefficient $\mathrm{Lip}_\sigma$. The collection $\xi := \{\xi_i(k),\ i\in\mathbb{Z}_+,\ k\in\mathbb{Z}\}$ consists of i.i.d. random variables with
Mathew Joseph, [email protected]
Indian Statistical Institute, 8th Mile Mysore Road, RVCE Post, Bengaluru 560059, India
$$ \mathbb{E}\,\xi_i(k) = 0, \qquad \mathbb{E}\,\xi_i^2(k) = 1, \qquad \mathbb{E}\,|\xi_i(k)|^{2+\kappa} < \infty \ \text{ for some } \kappa > 0. \qquad (1.2) $$
Since $\sum_l P(k,l) = 1$ we can remove $u_i(k)$ from both sides of (1.1). A solution to (1.1) satisfies
$$ u_{i+1}(k) = \sum_l P_{i+1}(k,l)\, u_0(l) + \sum_{j=0}^{i}\sum_l P_{i-j}(k,l)\cdot\sigma\big(u_j(l)\big)\cdot\xi_j(l). \qquad (1.3) $$
Above, $P_i(k,l)$ gives the $i$-step transition probability of jumping from $k$ to $l$. Note that by homogeneity $P_{i-j}(k,l) = P_{i-j}(0, l-k)$, which we define to be $P_{i-j}(l-k)$ by an abuse of notation. We shall assume that the random walk $X_n$ has an asymptotic speed $\mu$ and that, when centered, it is in the domain of attraction of a symmetric Stable($\alpha$) process with generator $-\nu(-\Delta)^{\alpha/2}$. Our assumptions will be stated in terms of the characteristic function of $X_1$ in the next section. A consequence of our main result, Theorem 1.1, is that after an appropriate scaling of space, time and noise in (1.1), we get the continuous space–time stochastic heat equation. To be precise, consider
$$ u^{(n)}_{i+1}(k) = \sum_{l\in\mathbb{Z}} P(k,l)\, u^{(n)}_i(l) + \sigma\big(u^{(n)}_i(k)\big)\cdot\frac{\xi_i(k)}{n^{(\alpha-1)/2\alpha}}. \qquad (1.4) $$
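As an illustration of the recursion (1.4), the following minimal sketch (ours, not from the paper) iterates it forward in time for a lazy simple random walk (so $\alpha=2$, $\mu=0$), Rademacher noise, and a placeholder Lipschitz $\sigma$, on a lattice made periodic for convenience; the rescaled field considered below would be read off at site $[xn^{1/\alpha}]-[\mu nt]$ at time step $[nt]$.

\begin{verbatim}
import numpy as np

# Minimal illustrative sketch (not from the paper): one run of the recursion
# (1.4) for a lazy simple random walk P(0)=1/2, P(+-1)=1/4 (so alpha=2, mu=0)
# on a periodic lattice, with Rademacher noise xi and a placeholder sigma.
def simulate_discrete_she(n=200, T=1.0, K=400,
                          sigma=lambda u: 1.0 + 0.5 * np.sin(u), seed=0):
    rng = np.random.default_rng(seed)
    alpha = 2.0
    noise_scale = n ** (-(alpha - 1.0) / (2.0 * alpha))  # n^{-(alpha-1)/2alpha}
    u = np.zeros(K)                                      # initial profile v_0 = 0
    for _ in range(int(n * T)):
        Pu = 0.5 * u + 0.25 * (np.roll(u, 1) + np.roll(u, -1))  # sum_l P(k,l) u(l)
        xi = rng.choice([-1.0, 1.0], size=K)             # mean 0, variance 1
        u = Pu + sigma(u) * noise_scale * xi             # recursion (1.4)
    return u

if __name__ == "__main__":
    print(simulate_discrete_she()[:5])
\end{verbatim}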
We show that for $t\in\mathbb{R}_+$, $x\in\mathbb{R}$,
$$ u^{(n)}_{[nt]}\big([xn^{1/\alpha}] - [\mu nt]\big) \Rightarrow v_t(x), $$
where $\Rightarrow$ denotes convergence in distribution, and $v_t(x)$ is the mild solution to the stochastic heat equation
$$ \partial_t v = -\nu(-\Delta)^{\alpha/2} v_t(x) + \sigma(v_t(x))\cdot\dot W(t,x). \qquad (1.5) $$
Note that the existence and uniqueness of mild solutions to (1.5) (see [14]), as well as the finiteness of the speed of the random walk $X_n$ in the domain of attraction of a Stable($\alpha$) process, requires $1 < \alpha \le 2$, and we shall assume this without further mention for the rest of the paper. In the case that $\sigma(x)\equiv 1$, the limiting random field (1.5) is Gaussian. One can then use the Lindeberg–Feller theorem to prove the weak convergence of $u^{(n)}$ to $v$, and this was the approach used by [27] in their analysis of the harness process. Another special case is when $\sigma(x)\propto x$; in this case (1.5) is called the parabolic Anderson model or the multiplicative stochastic heat equation. Here one can explicitly write the solution
to (1.5) as an infinite series involving multiple stochastic integrals (the Wiener chaos expansion). To illustrate, suppose that $v_0(\cdot)\equiv 1$; then one can check that
$$ v_t(x) = 1 + \sum_{k=1}^{\infty}\int_{\Delta_k}\int_{\mathbb{R}^k}\prod_{i=1}^{k} p_{t_i-t_{i-1}}(y_i-y_{i-1})\, W(dt_{i-1}\, dy_{i-1}), $$
where $\Delta_k = \{(t_0,t_1,\ldots,t_{k-1}) : 0 < t_0 < t_1 < \cdots < t_{k-1} < t = t_k\}$ and $y_k = x$. Similarly, $u^{(n)}$ also has an infinite series expansion involving sums of products of the random variables $\xi$. The approach in [1] and [4] shows the weak convergence of $u^{(n)}$ to $v$ by showing the convergence of the individual terms in the infinite series expansions. Both these methods are specific and do not work in general, even for a slight perturbation of $\sigma$. The approach that we take to address the general case has been inspired by [22], where discrete approximations to stochastic differential equations were considered. We point out that, as opposed to weak approximations, there are quite a few papers concerned with strong approximations of the stochastic heat equation (see [13,16,18,19,21]).

Our results allow us to give an alternative proof of the weak convergence of the partition function of one dimensional directed random polymers in the intermediate disorder regime, obtained in [1,4] by different arguments. For each $n$ consider the following measure on paths $\mathbf{x} = (0 = x_0, x_1, x_2, \ldots, x_n)$ of length $n$, where each $x_i\in\mathbb{Z}$:
$$ P_n^{\xi,\beta}(\mathbf{x}) = \frac{\exp\big(\beta\sum_{i=0}^{n}\xi_i(x_i)\big)}{Z_n^{\xi}(\beta)}\, P_n(\mathbf{x}). $$
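Before turning to the normalizing constant, here is a minimal computational sketch (ours, not from the paper; the lazy nearest-neighbour walk and the $\pm 1$ environment are placeholder choices) of the partition function $Z_n^{\xi}(\beta)$ of (1.6) below, computed exactly by a forward transfer recursion over point-to-point partition sums.

\begin{verbatim}
import numpy as np

# Minimal illustrative sketch (not from the paper): exact computation of the
# quenched partition function Z_n(beta) = E_0[exp(beta sum_{i=0}^n xi_i(X_i))]
# for a lazy simple random walk in an i.i.d. +-1 environment, via the forward
# recursion Z_i(y) = exp(beta*xi_i(y)) * sum_x Z_{i-1}(x) P(x, y).
def partition_function(n=50, beta=0.5, seed=1):
    rng = np.random.default_rng(seed)
    L = n + 1                                   # the walk stays within [-n, n]
    size = 2 * L + 1                            # sites -L..L; index L is the origin
    xi = rng.choice([-1.0, 1.0], size=(n + 1, size))   # environment xi_i(k)
    Z = np.zeros(size)
    Z[L] = np.exp(beta * xi[0, L])              # time 0: the walk sits at the origin
    for i in range(1, n + 1):
        PZ = 0.5 * Z + 0.25 * (np.roll(Z, 1) + np.roll(Z, -1))  # one step of P
        Z = PZ * np.exp(beta * xi[i])           # reweight by the environment
    return Z.sum(), Z                           # Z_n(beta), point-to-point sums

if __name__ == "__main__":
    Zn, _ = partition_function()
    print("Z_n(beta) =", Zn)
\end{verbatim}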
The measure $P_n^{\xi,\beta}$ is a probability measure on paths of length $n$ of a random walk following the transition probability $P$ and starting at $0$. The partition function for the inverse temperature $\beta$,
$$ Z_n(\beta) = Z_n^{\xi}(\beta) := E_0\Big[\exp\Big(\beta\sum_{i=0}^{n}\xi_i(X_i)\Big)\Big], \qquad (1.6) $$
is the normalizing constant so that $P_n^{\xi,\beta}(\mathbf{x})$ is a probability measure on paths of length $n$; here $E_0$ is the expectation over paths of the random walk starting at $0$. One is then interested in the behaviour of paths sampled from this new measure $P_n^{\xi,\beta}$. As the temperature becomes large, or alternatively as $\beta$ becomes small, one expects the paths to behave like those of a random walk with transition matrix $P$. On the other hand, as the temperature becomes small, the effect of the environment $\xi$ becomes important, since substantially more weight is placed on paths with larger energy $H_n(\xi) = \sum_{i=0}^{n}\xi_i(x_i)$. A big motivation for work in this area is to understand this competition between entropy (from the measure $P_n$) and energy $H_n(\xi)$. The competition between entropy and energy is characterized by the limit of the martingale
$$ M_n = \frac{Z_n(\beta)}{\big(\mathbb{E}[e^{\beta\xi}]\big)^{n+1}}. $$
If the limit is positive a.e. we say that the polymer measure is in weak disorder, while it is in strong disorder if the limit is $0$ a.e. The polymer paths are diffusive in the weak disorder regime (see [7]), while we see vastly different behavior in the strong disorder regime. One (as well as two) dimensional polymers are in the strong disorder regime for any $\beta>0$, while in higher spatial dimensions the polymer is in the strong disorder regime only if $\beta$ is large enough. It is believed that for one dimensional walks which are in the domain of attraction of Brownian motion, the typical fluctuation of the polymer paths for any $\beta>0$ would be of order $n^{2/3}$ (as opposed to $n^{1/2}$ when $\beta=0$), although this question is still open in general. One also expects that $\log Z_n$ has fluctuations of order $n^{1/3}$ (as opposed to $0$ when $\beta=0$), but this question is also open in general. We refer to [1,6,10] for a more detailed exposition on this subject. In this paper we shall consider (as in [1] and [4]) one dimensional polymers in the intermediate disorder regime, that is, when the inverse temperature goes to $0$ at a specified rate with $n$. With the choice of the inverse temperature $\beta(n) = \beta/n^{(\alpha-1)/2\alpha}$, we observe interesting behavior which we shall detail in the following section.

1.1 Main results

Let us denote by $\mu := \sum_{k\in\mathbb{Z}} k\cdot P(k)$ the mean of $X_1$. Let $\phi$ denote the characteristic function associated to the random walk:
$$ \phi(z) := \sum_{k\in\mathbb{Z}} e^{izk}\, P(k), \qquad z\in[-\pi,\pi]. $$
The centered characteristic function is
$$ \tilde\phi(z) := e^{-i\mu z}\,\phi(z). $$
We shall make the following assumption on the random walk.

Assumption 1.1  Assume that $\{z\in[-\pi,\pi] : |\phi(z)|=1\} = \{0\}$ and that there exists a constant $0<a<1$ such that
$$ \tilde\phi(z) = 1 - \nu|z|^{\alpha} + D(z), \qquad (1.7) $$
where $D(z) = O(|z|^{\alpha+a})$ as $|z|\to 0$.

Remark 1.2  The first part of the assumption means that the random walk is strongly aperiodic, and the assumption on $\tilde\phi$ necessarily implies that the centered random walk is in the domain of attraction of the strictly Stable($\alpha$) law.
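For illustration (this example is ours, not from the paper), the lazy simple random walk with $P(0)=\tfrac12$, $P(1)=P(-1)=\tfrac14$ satisfies Assumption 1.1: here $\mu=0$, so
$$ \tilde\phi(z) = \phi(z) = \tfrac12 + \tfrac12\cos z = 1 - \tfrac{z^2}{4} + O(z^4), $$
that is, $\alpha=2$, $\nu=\tfrac14$ and $D(z)=O(|z|^{2+a})$ for any $0<a<1$; moreover $\phi(z)\in[0,1]$ with $\phi(z)=1$ only at $z=0$ on $[-\pi,\pi]$, so the walk is strongly aperiodic.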
Remark 1.3  The strong aperiodicity assumption is used in the local limit theorem A.1 below, and it should be possible to modify the theorem to remove this assumption. In particular, the results of this paper are valid for the simple random walk although it is not strongly aperiodic.

Let us now state the main result of this paper. Let
$$ \bar u^{(n)}_t(x) := u^{(n)}_{[nt]}\big([xn^{1/\alpha}] - [\mu nt]\big). \qquad (1.8) $$
We choose our initial profile
$$ u_0^{(n)}(k) = v_0\Big(\frac{k}{n^{1/\alpha}}\Big) \qquad (1.9) $$
for a continuous $v_0$. We allow $v_0$ to be random, but it should be independent of the white noise $\dot W$ and the random field $\xi$. Our main result is the following.

Theorem 1.1  Let the conditions in Assumption 1.1 hold, and fix $m\in\mathbb{N}$ so that $2m < 2+\kappa$. Let $v_0$ be a continuous (random) function so that $\sup_x\mathbb{E}|v_0(x)|^{2m} < \infty$, that is independent of $\xi$ and $\dot W$. Let $u^{(n)}$ be the solution to (1.4) with initial profile $u_0^{(n)}$. Then for each $t>0$, $x\in\mathbb{R}$ we have $\bar u^{(n)}_t(x) \Rightarrow v_t(x)$, where $v$ is the solution to (1.5) with initial profile $v_0$. Furthermore, we have
$$ \mathbb{E}\big[\bar u^{(n)}_t(x)\big]^{2m} \to \mathbb{E}\big[v_t(x)\big]^{2m} \quad \text{as } n\to\infty. \qquad (1.10) $$
Remark 1.4  Above and for the rest of the paper we use E to denote the expectation over all random quantities involved. For ease of notation, we will often drop the superscript $n$ when referring to $u^{(n)}$ and $\bar u^{(n)}$.

Remark 1.5  The proof also shows the convergence of all moments of order less than $2m$. Our proof can also be used to give an upper bound on the rate of convergence in (1.10). This will of course depend on the initial profile $v_0$.

Remark 1.6  In the case that $\mathbb{E}\xi_i^2(k) = \rho > 0$ one has a factor of $\sqrt\rho$ with the last term in (1.5).

We next focus on the case that $\sigma$ is uniformly bounded, that is $\sup_x|\sigma(x)|<\infty$. In this situation we can consider more general initial profiles, not necessarily those that are uniformly bounded in $L^{2m}$. We state a general result for the special case $\alpha=2$. Let $\eta(k)$, $k\in\mathbb{Z}$, be i.i.d. random variables such that
$$ \mathbb{E}\,\eta(k) = 0, \qquad \mathbb{E}\,\eta^2(k) = \lambda, \qquad \mathbb{E}\,|\eta(k)|^{2+\kappa'} < \infty \ \text{ for some } \kappa' > 0. \qquad (1.11) $$
We further assume that the random variables $\eta = \{\eta(k), k\in\mathbb{Z}\}$ are independent of the noise $\xi$. Let
$$ u_0^{(n),\eta}(l) = \begin{cases} 0 & \text{if } l = 0,\\ n^{-1/4}\sum_{k=1}^{l}\eta(k) & \text{if } l > 0,\\ n^{-1/4}\sum_{k=l}^{0}\eta(k) & \text{if } l < 0. \end{cases} $$
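As an illustration (ours, not from the paper; the helper and its arguments are placeholder names), the following sketch builds this initial profile from rescaled partial sums of i.i.d. variables $\eta$; at spatial scale $\sqrt n$ it approximates a two-sided Brownian motion with variance $\lambda$, which is the initial condition appearing in Theorem 1.2 below.

\begin{verbatim}
import numpy as np

# Minimal illustrative sketch (not from the paper): the initial profile
# u_0^{(n),eta}(l) built from rescaled partial sums of i.i.d. mean-zero
# variables eta(k).  Naming convention (ours): eta_pos[k-1] = eta(k) for
# k >= 1, and eta_neg[j] = eta(-j) for j >= 0.
def initial_profile(n, eta_pos, eta_neg):
    scale = n ** -0.25
    right = scale * np.cumsum(eta_pos)   # right[l-1] = u_0(l) for l = 1, 2, ...
    left = scale * np.cumsum(eta_neg)    # left[j] = scale*(eta(0)+...+eta(-j)) = u_0(-j), j >= 1
    return left, right                   # u_0(0) = 0 by definition

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    lam = 1.0
    left, right = initial_profile(10_000,
                                  rng.normal(0, np.sqrt(lam), 200),
                                  rng.normal(0, np.sqrt(lam), 200))
    print(right[:3], left[:3])
\end{verbatim}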
We have the following.

Theorem 1.2 ($\alpha=2$)  Assume that $\sup_x|\sigma(x)|<\infty$, and that the random variables $\eta$ satisfy (1.11) and are independent of $\xi$ and $\dot W$. Let the conditions in Assumption 1.1 hold, and fix $m\in\mathbb{N}$ such that $2m < 2+\min(\kappa,\kappa')$. Let $u^{(n)}$ be the solution to (1.4) with initial profile $u_0^{(n),\eta}$. Then for each $t>0$, $x\in\mathbb{R}$ we have $\bar u^{(n)}_t(x) \Rightarrow v_t(x)$, where $v$ is the solution to (1.5) with initial profile $B$, a two-sided Brownian motion with variance $\lambda$. Furthermore, we have
$$ \mathbb{E}\big[\bar u^{(n)}_t(x)\big]^{2m} \to \mathbb{E}\big[v_t(x)\big]^{2m} \quad \text{as } n\to\infty. $$

Remark 1.7  The above theorem explains the height fluctuations of the harness process obtained in [27]. This is the system $h_{i+1}(k) = \sum_l P(k,l)\cdot h_i(l) + \xi_i(k)$, $i\ge 0$, $k\in\mathbb{Z}$. Let us consider the simplest case where the initial height profile satisfies $h_0(0)=0$, and the increments $h_0(k+1)-h_0(k)$, $k\in\mathbb{Z}$, are i.i.d. random variables with finite $2+\kappa$ moments and mean $\rho_0$ (say). Then the transformation $u_i(k) = n^{-1/4}\cdot[h_i(k) - \rho_0\mu i - \rho_0 k]$ satisfies (1.4), and the initial profile $u_0$ is of the form above. We then have that $n^{-1/4}\cdot\big(h_{[nt]}([x\sqrt n]-[\mu nt]) - \rho_0[x\sqrt n]\big) \Rightarrow v_t(x)$, where $\partial_t v = \nu\,\partial_x^2 v + \dot W$, which implies that the harness process is in the Edwards–Wilkinson class; see [9] for an explanation of this class and the related KPZ class.

We next consider the scaled partition function of the directed random polymer in the intermediate disorder regime:
$$ M_n^{(\tilde\xi)} = \frac{E_0\big[\exp\big(\beta\sum_{i=1}^{n}\tilde\xi_i(X_i)\big)\big]}{\big[\mathbb{E}\,e^{\beta\tilde\xi}\big]^{n+1}}, \qquad (1.12) $$
where $\tilde\xi$ is the scaled disorder
$$ \tilde\xi_i(k) = \frac{\xi_i(k)}{n^{(\alpha-1)/2\alpha}}. $$
We shall also be interested in the scaled point-to-point partition function given by
$$ M_n^{(\tilde\xi,\,x)} = \frac{E_0\big[\mathbf{1}\{X_n=x\}\cdot\exp\big(\beta\sum_{i=1}^{n}\tilde\xi_i(X_i)\big)\big]}{\big[\mathbb{E}\,e^{\beta\tilde\xi}\big]^{n+1}}. \qquad (1.13) $$
Note that $M_n^{(\tilde\xi,x)}/M_n^{(\tilde\xi)} = P_n^{\tilde\xi,\beta}(x)$ is the probability for the intermediate disorder polymer to be at $x$ at time $n$. We rederive the following theorem from [1] and [4]. An advantage of our method is that we are able to show the convergence of all moments; see the first two statements of the theorem. The key idea is the observation that (1.12) and (1.13) can be related to (1.4) with $\sigma(x)=x$; see Sects. 6 and 7.3.

Theorem 1.3  Assume that the environment variables $\xi_i(k)$ are i.i.d. and have exponential moments: $\mathbb{E}\,e^{C|\xi_i(k)|} < \infty$ for all small $C>0$. Suppose that $\mu=0$. Then the following statements hold.
Stoch PDE: Anal Comp (ξ˜ ) 1. Mn converges in distribution to v1 (0), where ∂t v = −ν(−)α/2 v + βv W˙ with (ξ˜ )
initial profile v0 ≡ 1. Furthermore the moments of Mn converge to those of v1 (0). 2.
(ξ˜ , [xn 1/α ]) n 1/α Mn g (x) + βg (x) W˙ (ξ˜ , [xn 1/α ])
Mn
3.
(x)
converges in distribution to g1 (0), where ∂t g (x) = −ν(−)α/2 (x) with initial profile g0 = δx . Furthermore the moments of (x)
converge to those of g1 (0).
ξ˜ ,β n 1/α Pn ([xn 1/α ])
(x)
converges in distribution to g1 (0)/v1 (0), where g (x) and v are as above and driven by the same white noise W˙ .
(x) Remark 1.8 One can show that [g1 (0)/v1 (0)] dx = 1. The first statement in the theorem implies that log Z n has fluctuations of order 1 in the intermediate disorder regime. The last statement says that the polymer paths have fluctuations of order n 1/α and satisfy a random local limit theorem. Recall that when α = 2 and the disorder is not scaled one expects n 1/3 fluctuations for log Z n and n 2/3 fluctuations for the polymer paths. We now provide a brief description of the notation and outline of the paper. We will use indices i, j ∈ Z+ for discrete time, k, l ∈ Z for discrete space. The letters q, r, s, t will usually be used for continuous time R+ ,while the variables x, y, z, w will usually be used for continuous space R. The notation P, E is used to denote probability and expectation for the random variables ξ , while P, E will be used for the probability and expectation over paths of random walks with transition kernel P. As already indicated P, E will be used when we integrate out all the random variables involved. We will denote by · m the norm in L m (P), 1 m < ∞. Constants C could vary from line to line. We will use the notation c, c1 , c2 , . . . to represent constants which remain constant within a proof but might change across different proofs. In Sect. 2 we prove existence and uniqueness for solutions to (1.4), and obtain moment bounds required in the proof of Theorem 1.1 Section 3 deals with the proof of our main Theorem 1.1. Section 4 deals with the proof of Theorem 1.2. In Sect. 5 we provide conditions for tightness to hold in Theorems 1.1 and 1.2. The proof of the first statement in Theorem 1.3 is in Sect. 6. Section 7 is devoted to several extensions including addition of a drift in (1.4), Dirac initial condition in (1.4) and the proofs of the last two statements in Theorem 1.3. The appendices contain a local limit theorem crucial for our arguments, as well as bounds needed for the proof of Proposition 3.5. 1.2 Strategy for the proof of Theorem 1.1 Here we describe the strategy for our main theorem proved in Sect. 3. Consider the solution to (1.4) given by (2.1). Let us simplify things and consider the case when v0 ≡ 0, so that the solution is simply the second term. To obtain a Gaussian random noise in the limit one needs to add many (scaled) ξ ’s. To this end we divide the space– time lattice into reasonably big blocks (the size of the block depends on n) and add the (scaled) ξ ’s in each of them, thus creating random variables ζ indexed by the blocks. If the blocks are not too big we show that the value of u at any space–time point (l, j) in a block is not far from the value of u at the center of the block. One
can also then approximate Pi− j (k, l), the transition probability of the random walk from the space–time point (k, i) to the space–time point (l, j), with the transition probability from the center of the block containing (k, i) to the center of the block containing (l, j). These two observations allow us to approximate the second term in (2.1) by a sum over blocks involving the ζ random variables, the values of u at the centers of the blocks, and the transition probabilities between centres of blocks. Next we prove a coupling result which allows us to replace the ζ ’s by independent Gaussian random variables, and use a local limit theorem to replace the transition probability by a Gaussian density. Due to the coupling we are able to approximate the above sum over blocks by a discretisation of the stochastic heat equation (1.5), where the white noise is created from the Gaussian random variables. Simple Hölder continuity estimates finish the proof from there.
2 Existence and uniqueness

In this section we show the existence and uniqueness of solutions to (1.4). The ideas are quite standard, see for example [15], but we present them here to show that we can obtain moment bounds which are uniform in $n$. We shall need these in the proof of Theorem 1.1. Recall that a solution to (1.4) satisfies
$$ u_{i+1}(k) = \sum_l P_{i+1}(k,l)\, u_0(l) + \sum_{j=0}^{i}\sum_l P_{i-j}(k,l)\cdot\sigma\big(u_j(l)\big)\cdot\frac{\xi_j(l)}{n^{(\alpha-1)/2\alpha}}. \qquad (2.1) $$
We call the second term the noise term because it depends on the noise $\xi$. We call the first term the non-noise term, although the initial profile $u_0$ is allowed to be random.

Theorem 2.1  For initial profiles $u_0 = u_0^{(n)}$ with $\sup_{k,n}\mathbb{E}|u_0(k)|^2 < \infty$, the following statements hold.
1. There exists a unique solution to equation (1.4) such that for any $T>0$
$$ \sup_{i\le[nT],\,k\in\mathbb{Z}}\mathbb{E}|u_i(k)|^2 < \infty. \qquad (2.2) $$
Furthermore, the above bound holds uniformly in $n\ge N$ for some $N$ large enough.
2. Let $m\in\mathbb{N}$ and suppose further that $\xi$ has $2m$ moments. Assume also that $\sup_{k,n}\mathbb{E}|u_0(k)|^{2m} < \infty$. Then for any $T>0$ the following holds uniformly in $n\ge N$ for some $N$ large enough:
$$ \sup_{i\le[nT],\,k\in\mathbb{Z}}\mathbb{E}|u_i(k)|^{2m} < \infty. \qquad (2.3) $$
Proof  We use Picard's iteration scheme to show the existence of a solution which satisfies the required bound. Therefore, let $w_i^{(0)}(k) = u_0(k)$ and let
$$ w_{i+1}^{(p+1)}(k) = \sum_l P_{i+1}(k,l)\,u_0(l) + \sum_{j=0}^{i}\sum_l P_{i-j}(k,l)\cdot\sigma\big(w_j^{(p)}(l)\big)\cdot\frac{\xi_j(l)}{n^{(\alpha-1)/2\alpha}}. \qquad (2.4) $$
In the case $\mathrm{Lip}_\sigma = 0$ the convergence of the Picard iterates is immediate, so let us assume that $\mathrm{Lip}_\sigma > 0$. From the above one has
$$ \mathbb{E}\big|w_{i+1}^{(p+1)}(k) - w_{i+1}^{(p)}(k)\big|^{2} \le \mathrm{Lip}_\sigma^{2}\sum_{j=0}^{i}\sum_{l\in\mathbb{Z}} P_{i-j}^{2}(l-k)\cdot\frac{\mathbb{E}\big|w_j^{(p)}(l)-w_j^{(p-1)}(l)\big|^{2}}{n^{(\alpha-1)/\alpha}}. $$
For a parameter $\delta>0$ define
$$ W^{2}(p) = \sup_{k\in\mathbb{Z},\,i\le[nT]} e^{-\delta i/n}\cdot\mathbb{E}\big|w_i^{(p)}(k)-w_i^{(p-1)}(k)\big|^{2}. \qquad (2.5) $$
The above inequality gives
$$ W^{2}(p+1) \le \mathrm{Lip}_\sigma^{2}\cdot W^{2}(p)\sum_{j=0}^{[nT]}\frac{e^{-\delta j/n}}{n^{(\alpha-1)/\alpha}}\cdot P\big(X_j=\tilde X_j\big), \qquad (2.6) $$
where $X$ and $\tilde X$ are independent walks with the same transition probabilities $P$. We now choose $\delta$ to be large enough so that
$$ \sum_{i=0}^{[nT]}\frac{e^{-\delta i/n}}{n^{(\alpha-1)/\alpha}}\cdot P\big(X_i=\tilde X_i\big) < \frac{1}{2\,\mathrm{Lip}_\sigma^{2}} $$
for all large $n$. The reason we can choose such a $\delta$ is that
$$ \sup_{n\ge 1}\sum_{i=0}^{[nT]}\frac{P\big(X_i=\tilde X_i\big)}{n^{(\alpha-1)/\alpha}} < \infty \qquad (2.7) $$
by Corollary A.2 (see Appendix A) applied to $X-\tilde X$; a similar statement to Theorem A.1 holds for $X-\tilde X$. From here it follows that for this choice of $\delta$,
$$ \sum_{p\ge 1} W(p) < \infty, $$
uniformly over $n$. This implies the convergence of $w^{(p)}$ to a random field $u^*$ uniformly in the region $k\in\mathbb{Z}$, $i\le[nT]$. One can then argue that $u^*$ satisfies (1.4) in this region
by considering the limit of both sides of (2.4).

To prove uniqueness, assume that there are two solutions $u$ and $\tilde u$ to (1.4) satisfying (2.2). Then
$$ \mathbb{E}\,|u_{i+1}(k)-\tilde u_{i+1}(k)|^{2} \le \mathrm{Lip}_\sigma^{2}\sum_{j=0}^{i}\sum_{l\in\mathbb{Z}} P_{i-j}^{2}(l-k)\cdot\frac{\mathbb{E}|u_j(l)-\tilde u_j(l)|^{2}}{n^{(\alpha-1)/\alpha}}. $$
As before, multiply both sides by $e^{-\delta i/n}$ and take the supremum over $k\in\mathbb{Z}$ and $i\le[nT]$ to get a relation similar to (2.6). If we choose $\delta$ to be large enough we get $\mathbb{E}|u_i(k)-\tilde u_i(k)|^{2}=0$ for $k\in\mathbb{Z}$, $i\le[nT]$, thus proving uniqueness.

Let us now prove the second part of the theorem. An application of Burkholder's inequality for discrete time martingales gives
$$ \|u_{i+1}(k)\|_{2m}^{2} \le c_1 + c_2\Big\|\sum_{j=0}^{i}\sum_{l,l'\in\mathbb{Z}} P_{i-j}(l-k)\,P_{i-j}(l'-k)\cdot\sigma\big(u_j(l)\big)\,\sigma\big(u_j(l')\big)\cdot\frac{\xi_j(l)\,\xi_j(l')}{n^{(\alpha-1)/\alpha}}\Big\|_{m} \le c_1 + c_2\sum_{j=0}^{i}\Big\|\Big(\sum_{l\in\mathbb{Z}} P_{i-j}(l-k)\cdot\sigma\big(u_j(l)\big)\cdot\frac{\xi_j(l)}{n^{(\alpha-1)/2\alpha}}\Big)^{2}\Big\|_{m}, \qquad (2.8) $$
where the last step follows from Minkowski's inequality. Now,
$$ \mathbb{E}\Big[\sum_{l\in\mathbb{Z}} P_{i-j}(l-k)\cdot\sigma\big(u_j(l)\big)\cdot\frac{\xi_j(l)}{n^{(\alpha-1)/2\alpha}}\Big]^{2m} = \mathbb{E}\sum_{l_1,l_2,\ldots,l_{2m}}\prod_{a=1}^{2m} P_{i-j}(l_a-k)\cdot\sigma\big(u_j(l_a)\big)\cdot\frac{\xi_j(l_a)}{n^{(\alpha-1)/2\alpha}}. \qquad (2.9) $$
Clearly $\xi_i(k)$ are independent of $u_j(l)$, $l\in\mathbb{Z}$, $j\le i$. Consider now the expectation of each product term in the above expression. Our assumption that $\mathbb{E}\xi_j(l)=0$ implies that the only product terms which have a nonzero contribution contain only powers of $\xi_j(l)$ greater than $1$, for any $l$. Thus the above can be bounded by a constant multiple of
$$ \frac{1}{n^{m(\alpha-1)/\alpha}}\sum_{b=1}^{m}\;\sum_{\substack{(\beta_1,\ldots,\beta_b)\in\mathbb{N}^b\\ \text{each }\beta_q\ge 2,\ \beta_1+\cdots+\beta_b=2m}}\;\sum_{l_1,l_2,\ldots,l_b}\prod_{a=1}^{b} P_{i-j}^{\beta_a}(l_a-k)\cdot\mathbb{E}\prod_{a=1}^{b}\sigma^{\beta_a}\big(u_j(l_a)\big). $$
Note also, by Hölder's inequality, for any $l_1,l_2,\ldots,l_b\in\mathbb{Z}$,
$$ \mathbb{E}\prod_{a=1}^{b}\sigma^{\beta_a}\big(u_j(l_a)\big) \le \prod_{a=1}^{b}\big\|\sigma\big(u_j(l_a)\big)\big\|_{2m}^{\beta_a}. $$
We also need the following:
$$ \prod_{a=1}^{b}\Big(\sum_{l_a} P_{i-j}^{\beta_a}(l_a-k)\,\big\|\sigma\big(u_j(l_a)\big)\big\|_{2m}^{\beta_a}\Big) \le \prod_{a=1}^{b}\Big(\sum_{l_a} P_{i-j}^{2}(l_a-k)\,\big\|\sigma\big(u_j(l_a)\big)\big\|_{2m}^{2}\Big)^{\beta_a/2}, $$
which follows from the relation $\sum_i x_i^{r} \le \big(\sum_i x_i\big)^{r}$, valid for any $r\ge 1$ and nonnegative sequences $x_i$. These observations imply that (2.9) can be bounded by a constant multiple of
$$ \Big[\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{l} P_{i-j}^{2}(l-k)\cdot\big\|\sigma\big(u_j(l)\big)\big\|_{2m}^{2}\Big]^{m}. $$
If we plug this into (2.8) one gets
$$ \|u_{i+1}(k)\|_{2m}^{2} \le c_1 + c_3\sum_{j=0}^{i}\sum_{l} P_{i-j}^{2}(l-k)\,\frac{\|\sigma(u_j(l))\|_{2m}^{2}}{n^{(\alpha-1)/\alpha}} \le c_1 + c_4\sum_{j=0}^{i}\sum_{l} P_{i-j}^{2}(l-k)\cdot\frac{1+\|u_j(l)\|_{2m}^{2}}{n^{(\alpha-1)/\alpha}}. $$
Now, for a fixed $\delta>0$ let
$$ X(i) := e^{-\delta i/n}\,\sup_{k\in\mathbb{Z}}\|u_i(k)\|_{2m}^{2}. $$
Our calculations above give us
$$ X(i+1) \le c_1 + c_4\,\sup_{q\le i}[1+X(q)]\cdot\sum_{j=0}^{[nT]}\frac{e^{-\delta j/n}\,P(X_j=\tilde X_j)}{n^{(\alpha-1)/\alpha}}. \qquad (2.10) $$
By choosing a $\delta$ large enough so that
$$ c_4\sum_{j=0}^{[nT]}\frac{e^{-\delta j/n}\,P(X_j=\tilde X_j)}{n^{(\alpha-1)/\alpha}} < \frac{1}{2}, $$
one can obtain (2.3) by a recursive application of (2.10). This completes the proof of the theorem.
3 Proof of Theorem 1.1

Fix $m\in\mathbb{N}$ so that $2m < 2+\kappa$. For each $n$ we shall construct a probability space containing copies of the random variables $\xi$, the white noise $\dot W$, and the initial profile $v_0$ independent of $\xi$ and $\dot W$, so that
$$ \mathbb{E}\big|\bar u_t(x) - v_t(x)\big|^{2m} \to 0 \quad\text{as } n\to\infty. \qquad (3.1) $$
This would imply the weak convergence stated in the main theorem. We first prove the result under the assumption that the initial profile $v_0\equiv 0$; this shall be relaxed later. The proof will involve several steps which we divide into subsections. To find an upper bound on the rate of convergence in (3.1) we shall need to optimize several quantities over $0<\gamma<\theta<\frac{1}{\alpha}$ such that
$$ \theta+\gamma < \frac{1}{\alpha}\,\min(a,\alpha-1) \quad\text{and}\quad \gamma < \frac{(\alpha-1)\wedge\theta}{\alpha}. \qquad (3.2) $$
3.1 Adding ξ over blocks

In the first step we apply a coarse graining procedure and compare the solution to (1.4) to a random field on a coarser lattice. Each site in the coarser lattice corresponds to a block of size $[n^\theta]\times[n^\gamma]$ in the original lattice. More precisely, site $(j,l)$ (we shall use the first coordinate for time and the second for space) in the coarser lattice will correspond to
$$ B_j(l) := \big[j[n^\theta],\,(j+1)[n^\theta]\big)\times\big[l[n^\gamma],\,(l+1)[n^\gamma]\big) \qquad (3.3) $$
in the original lattice. For $i\in\mathbb{Z}_+$, $k\in\mathbb{Z}$ define
$$ U_{i[n^\theta]}\big(k[n^\gamma]\big) = \sum_{j=0}^{i-1}\sum_{l\in\mathbb{Z}} P_{(i-1-j)[n^\theta]}\big((l-k)[n^\gamma]\big)\cdot\sigma\big(u_{j[n^\theta]}(l[n^\gamma])\big)\cdot\zeta_j(l), \qquad (3.4) $$
where the random variables $\zeta$ are obtained by summing the contribution of the scaled $\xi$ variables in these blocks:
$$ \zeta_j(l) = \sum_{(s,y)\in B_j(l)}\frac{\xi_s(y)}{n^{(\alpha-1)/2\alpha}}. \qquad (3.5) $$
Here again we are suppressing the dependence of $U = U^{(n)}$ on $n$ in the notation.
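To illustrate the coarse-graining step, the following sketch (ours, not from the paper; all parameter values are placeholders and the spatial block index is restricted to a finite window) forms the block variables $\zeta_j(l)$ of (3.5) by reshaping and summing the scaled noise array.

\begin{verbatim}
import numpy as np

# Minimal illustrative sketch (not from the paper): forming the block variables
# zeta_j(l) of (3.5) by summing the scaled xi over blocks of size
# [n^theta] x [n^gamma]; the spatial index is restricted to a finite window.
def block_sums(xi, n, theta, gamma, alpha=2.0):
    bt, bs = int(n ** theta), int(n ** gamma)      # temporal / spatial block sizes
    T, K = xi.shape
    nt, nk = T // bt, K // bs                      # number of whole blocks
    scaled = xi[: nt * bt, : nk * bs] / n ** ((alpha - 1.0) / (2.0 * alpha))
    # axes 1 and 3 run over the interior of each block; sum them out
    return scaled.reshape(nt, bt, nk, bs).sum(axis=(1, 3))   # zeta[j, l]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 1000
    xi = rng.choice([-1.0, 1.0], size=(n, 2 * n))
    print(block_sums(xi, n, theta=0.3, gamma=0.2).shape)
\end{verbatim}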
The main result of this section is the following theorem, where we show that $u$ is close to $U$. Choose $r\in\mathbb{Z}_+$ and $z\in\mathbb{Z}$ such that
$$ r[n^\theta] \le [nt] < (r+1)[n^\theta], \qquad z[n^\gamma] \le [xn^{1/\alpha}]-[\mu nt] < (z+1)[n^\gamma]. \qquad (3.6) $$
Recall the definition of $\bar u$ in (1.8).

Theorem 3.1  Fix $T>0$. We have, for $\frac{[n^\theta]}{n}\le t\le T$, $x\in\mathbb{R}$,
$$ \big\|\bar u_t(x) - U_{r[n^\theta]}(z[n^\gamma])\big\|_{2m}^{2} \lesssim \frac{n^{\gamma}}{n^{(\alpha-1)\theta}} + \frac{n^{\theta+\gamma+o(1)}}{n^{(\alpha-1)/\alpha}}. $$

Proof  The proof is obtained by combining Propositions 3.4 and 3.5 below, and our restrictions on the parameters $\theta$ and $\gamma$ in (3.2).
We first recall a few facts about general one-dimensional recurrent random walks, which we shall apply to $Y = X - \tilde X$, where $X$ and $\tilde X$ are independent random walks with transition probabilities $P(k,l)$. Note first that the random walk $Y$ is symmetric. The potential kernel of the random walk $Y$ is the function $\bar a(x) := \lim_{n\to\infty}\bar a_n(x)$, where
$$ \bar a_n(x) := \sum_{j=0}^{n} P(Y_j=0) - \sum_{j=0}^{n} P(Y_j=x). $$
One can show that the function $\bar a(x)$ exists (Propositions 28.5 and 28.8 in [28]) and, further, that $\bar a_n(x)$ is increasing in $n$ for each $x$. We will need the following lemmas.

Lemma 3.2  The potential kernel $\bar a$ of the $Y$ walk satisfies $|\bar a(x)| \lesssim |x|^{\alpha-1}$.

Proof  The characteristic function of $Y_1$ is $|\phi(z)|^2$. By Fourier inversion,
$$ \bar a_n(x) = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1-e^{-ixz}}{1-|\phi(z)|^{2}}\cdot\big(1-|\phi(z)|^{2(n+1)}\big)\,dz = \frac{1}{2\pi}\int_{-\pi}^{\pi}\frac{1-\cos(xz)}{1-|\phi(z)|^{2}}\cdot\big(1-|\phi(z)|^{2(n+1)}\big)\,dz. $$
By the assumptions on the characteristic function $\phi$, we have the bound $c_1|z|^{\alpha}\le 1-|\phi(z)|^{2}\le c_2|z|^{\alpha}$ for $|z|\le\pi$. Therefore, uniformly in $n$,
$$ |\bar a_n(x)| \lesssim \int_{-\pi}^{\pi}\frac{|1-\cos(xz)|}{|z|^{\alpha}}\,dz, $$
which is of order $|x|^{\alpha-1}$.
Lemma 3.3  The following holds:
$$ \mathbb{E}\big[|X_n-\mu n|^{\alpha-1}\big] \lesssim n^{(\alpha-1)/\alpha}. \qquad (3.7) $$
The above also holds when $X$ is replaced by $Y$ and $\mu$ by $0$.

Proof  Let us denote
$$ R_n = \frac{X_n - n\mu}{n^{1/\alpha}}. $$
One can check that for any $\delta>0$,
$$ \int_0^{\infty}\frac{1-\cos(zr)}{z^{1+\delta}}\,dz = c(\delta)\,|r|^{\delta}, \qquad r\in\mathbb{R}, $$
for some constant $c(\delta)$. Thus
$$ \mathbb{E}\big[|R_n|^{\delta}\big] = \frac{1}{c(\delta)}\int_0^{\infty}\frac{1-\mathbb{E}\cos(zR_n)}{z^{1+\delta}}\,dz = \frac{1}{c(\delta)}\int_0^{\infty}\frac{1-\mathrm{Re}\,\mathbb{E}\,e^{izR_n}}{z^{1+\delta}}\,dz = \frac{1}{c(\delta)}\int_0^{\infty}\frac{1-\mathrm{Re}\,\tilde\phi(z/n^{1/\alpha})^{n}}{z^{1+\delta}}\,dz. $$
By our assumption (1.7) on the characteristic function, one can argue that for $0<\delta<\alpha$ we have $\mathbb{E}[|R_n|^{\delta}]<\infty$ for all $n$, and further
$$ \lim_{n\to\infty}\mathbb{E}\big[|R_n|^{\delta}\big] = \frac{1}{c(\delta)}\int_0^{\infty}\frac{1-e^{-\nu|z|^{\alpha}}}{z^{1+\delta}}\,dz. $$
This of course implies $\sup_n\mathbb{E}\big[|R_n|^{\delta}\big]<\infty$. We can choose $\delta=\alpha-1$ to complete the proof of (3.7). We have a similar bound for $Y$ with $\mu$ replaced by $0$, because the
characteristic function of $Y_1$ satisfies (1.7) with $\nu$ replaced by $2\nu$.

The following should be thought of as a Hölder continuity estimate for $u$. It is the first step in our proof of Theorem 3.1.

Proposition 3.4  We have the following bound, valid for $\frac{[n^\theta]}{n}\le t\le T$:
$$ \big\|\bar u_t(x) - u_{r[n^\theta]}(z[n^\gamma])\big\|_{2m}^{2} \lesssim \frac{1}{n^{(\frac{1}{\alpha}-\theta)(\alpha-1)}}. $$
Proof We write [nt]−1
u¯ t (x) − u r [n θ ] (z[n γ ]) =
P[nt]− j−1 l + [μnt] − [xn 1/α ] · σ u j (l)
j=r [n θ ] l∈Z
·
ξ j (l) n (α−1)/2α
· σ u j (l) ·
r [n θ ]−1
% & P[nt]− j−1 l + [μnt] − [xn 1/α ] − Pr [n θ ]− j−1 l − z[n γ ]
+
j=0
ξ j (l) n (α−1)/2α
l∈Z
.
An application of Burkholder’s inequality along with the bound in Theorem 2.1 then gives u¯ t (x) − u r [n θ ] z[n γ ] 2
2m
[nt]−1
2 1/α ] P[nt]− j−1 l + [μnt] − [xn n (α−1)/α
j=r [n θ ] l∈Z
+
θ ]−1 r [n
j=0
(3.8)
2 P[nt]− j−1 l + [μnt] − [xn 1/α ] − Pr [n θ ]− j−1 l − z[n γ ] n (α−1)/α
l∈Z
For fixed time j and spatial point k the sum l P j (l)P j (l + k) = P(X j = X˜ j + k) where X and X˜ are independent random walks. Use (a − b)2 = a 2 − 2ab + b2 to expand the square of the second term in (3.8) to get the following bound
θ
1 n (α−1)/α
+
+
−
[n ] P X j = X˜ j j=0
1 n (α−1)/α 1 n (α−1)/α 2 n (α−1)/α
θ ]−1 r [n
P X [nt]− j−1 = X˜ [nt]− j−1
j=0 θ ]−1 r [n
(3.9)
P X r [n θ ]− j−1 = X˜ r [n θ ]− j−1
j=0 θ ]−1 r [n
P X [nt]− j−1 = X˜ r [n θ ]− j−1 + [xn 1/α ] − [μnt] − z[n γ ] .
j=0
Note that we have used 0 [nt] − j − 1 r [n θ ] so that the first term is a bound on the first term in (3.8). The sum of the last three terms is equal to the second term in (3.8). As already observed in (2.7) one can use Corollary A.2 with Y = X − X˜ to get
Stoch PDE: Anal Comp θ
[n ]
1 n (α−1)/α
P X j = X˜ j
j=0
1 . n (1−θ)(α−1)/α
(3.10)
We now proceed with (3.9). An application of the Markov property gives u¯ t (x) − u r [n θ ] z[n γ ] 2
2m
1 n (1−θ)(α−1)/α
−
2 n (α−1)/α
+
θ ]−1 r [n
1 n (α−1)/α
θ ]−1 r [n
P Y j = Z [nt]−r [n θ ] + P(Y j = 0)
j=0
P Y j = W[nt]−r [n θ ] − [xn 1/α ] + [μnt] + z[n γ ] ,
j=0
d d where Z = Y and W = X , and both Z , W are independent of the walks X and X˜ . Thus
u¯ t (x) − u r [n θ ] z[n γ ] 2
2m
1 E a¯ Z [nt]−r [n θ ] n (α−1)/α & +E a¯ W[nt]−r [n θ ] − [xn 1/α ] + [μnt] + z[n γ ] % & 1 1 (1−θ)(α−1)/α + (α−1)/α n θ(α−1) + n γ (α−1) n n 1 1 n ( α −θ)(α−1)
1
n (1−θ)(α−1)/α
+
(3.11)
In the second last inequality we use Lemmas 3.2 and 3.3. Note also that here we have used that γ θ < α −1 from (3.2). This completes the proof of the proposition.
To complete the proof of Theorem 3.1 we estimate the difference between $u_{r[n^\theta]}(z[n^\gamma])$ and $U_{r[n^\theta]}(z[n^\gamma])$ in the following proposition.

Proposition 3.5  Fix $T>0$. We have, for $1\le r\le[Tn^{1-\theta}]$,
$$ \big\|u_{r[n^\theta]}(z[n^\gamma]) - U_{r[n^\theta]}(z[n^\gamma])\big\|_{2m}^{2} \lesssim \frac{n^{\gamma}}{n^{(\alpha-1)\theta}} + \frac{n^{\theta+\gamma+o(1)}}{n^{(\alpha-1)/\alpha}}. $$
Proof Recall the definition of the random field U in (3.4). By writing the ζ variables as the sum of the ξ variables in the blocks we can write the above difference as
Stoch PDE: Anal Comp θ
γ (k+1)[n ]−1
r −1 (i+1)[n ]−1 i=0 k∈Z
l=k[n γ ]
j=i[n θ ] θ
−
r −1 (i+1)[n ]−1 i=0 k∈Z
·
j=i[n θ ]
Pr [n θ ]− j−1 l − z[n γ ] · σ u j (l) ·
γ (k+1)[n ]−1
l=k[n γ ]
ξ j (l) (α−1)/2α n
P(r −1−i)[n θ ] (k − z)[n γ ] · σ u i[n θ ] (k[n γ ])
ξ j (l) . n (α−1)/2α
(3.12)
At this stage it is convenient to split σ u j (l) in the first term as σ u j (l) = σ u j (l) − σ u i[n θ ] (k[n γ ]) + σ u i[n θ ] (k[n γ ]) . An application of Burkholder’s inequality then gives u r [n θ ] z[n γ ] − Ur [n θ ] z[n γ ] 2 2m 2 1 2 (α−1)/α Pr [n θ ]− j−1 l − z[n γ ] · u j (l) − u i[n θ ] k[n γ ] 2m n i, j k,l % &2 (3.13) 1 Pr [n θ ]− j−1 l −z[n γ ] −P(r −1−i)[n θ ] (k − z)[n γ ] + (α−1)/α n i, j k,l % γ γ & 1 1 2 P2j (l)+Pi[n 1 + (α−1)/α θ ] k[n ] −2P j (l)Pi[n θ ] k[n ] , n ( α −θ)(α−1) n i, j k,l
where the limits of the sums are just as earlier; we did this to simplify notation. The first term in the last line is a bound on the first term of the previous step, and this comes from the Hölder continuity estimate in Proposition 3.4 and (2.7). From this point on we focus on the last term in (3.13) which can be written as 1 n
(α−1) α
%
i, j
k,l
% & P j (l) · P j (l) − Pi[n θ ] k[n γ ] +
& · Pi[n θ ] k[n γ ] − P j (l) .
1 n
(α−1) α
i, j
k,l
Pi[n θ ] k[n γ ] (3.14)
In order to keep the exposition simpler for the reader we bound the above two terms in Lemmas B.4 and B.5 in Appendix B. The proof of this Proposition is complete once we take into account our restrictions on γ and θ in (3.2).
3.2 Coupling with white noise

Here we approximate $\zeta_j(l)$ in the block $B_j(l)$ by replacing it by white noise in the rescaled box
$$ \tilde B_j(l) := \Big[\frac{j[n^\theta]}{n},\frac{(j+1)[n^\theta]}{n}\Big)\times\Big[\frac{l[n^\gamma]}{n^{1/\alpha}},\frac{(l+1)[n^\gamma]}{n^{1/\alpha}}\Big), \qquad j\in\mathbb{Z}_+,\ l\in\mathbb{Z}. \qquad (3.15) $$
It is this step which allows us to construct copies of $u$ and $v$ on the same space as in (3.1). For this we shall need the following coupling theorem.

Theorem 3.6  We can construct random variables $\xi'_i(k)$, $i\in\mathbb{Z}_+$, $k\in\mathbb{Z}$, and $\zeta'_j(l)$, $j\in\mathbb{Z}_+$, $l\in\mathbb{Z}$, and a white noise $W$ on the same probability space such that the following hold.
1. $\xi'$ is distributed as $\xi$ and $\zeta'$ is distributed as $\zeta$.
2. For fixed $j\in\mathbb{Z}_+$, $l\in\mathbb{Z}$, we have that $\big(W_j(l),\zeta'_j(l)\big)$ is independent of the collection $\big(W_i(k),\zeta'_i(k)\big)$, $i\ne j$, $k\ne l$, where
$$ W_j(l) := W\big(\tilde B_j(l)\big). $$
3. For any real number $m$ such that $0<2m<2+\kappa$ and $j\in\mathbb{Z}_+$, $l\in\mathbb{Z}$ we have
$$ \mathbb{E}\big|\zeta'_j(l) - n^{1/\alpha}W_j(l)\big|^{2m} \lesssim n^{-m\{\alpha-1-\alpha(\theta+\gamma)\}/\alpha}\,n^{-(\theta+\gamma)\cdot\min(1,\kappa)/2}. \qquad (3.16) $$

Proof  Parts 1 and 2 can be proved similarly to Lemma 1 in [22]. Start with a white noise $\hat W = (\hat W(e_1),\hat W(e_2),\cdots)$ on $\tilde B_0(0)$ constructed on some space $(\hat\Omega,\hat Q)$. Here $e_1,e_2,\ldots$ form a basis of $L^2\big(\tilde B_0(0)\big)$. Denote by $F$ the distribution of $\zeta_j(l)$ and let $\Phi$ denote the standard normal distribution. Define the inverse function of $F$ by
$$ F^{-1}(x) = \sup_{F(u)\le x} u. $$
For a standard normal random variable $Z$ we have $\Phi(Z)\stackrel{d}{=}U(0,1)$, and therefore
$$ \hat\zeta_0(0) := F^{-1}\bigg(\Phi\bigg(\sqrt{\frac{n^{1+\frac{1}{\alpha}}}{[n^\theta][n^\gamma]}}\;\hat W\big(\tilde B_0(0)\big)\bigg)\bigg) \stackrel{d}{=} F. $$
This gives a way of coupling $\zeta_0(0)$ and $W_0(0)$. We now construct the random variables $\xi$ in the box $B_0(0)$ as follows. Define probability measures $R$ on $\mathbb{R}^{[n^\theta]\cdot[n^\gamma]}\times\mathbb{R}$ and $\bar R$ on $\mathbb{R}\times\mathbb{R}^{\mathbb{N}}$ by
$$ R(C\times D) = \mathbb{P}\big((\xi_i(k))_{(i,k)\in B_0(0)}\in C,\ \zeta_0(0)\in D\big), \qquad \bar R(D\times E) = \mathbb{P}\big(\hat\zeta_0(0)\in D,\ \hat W\in E\big). $$
Since $R(C\times\cdot)$ and $\bar R(\cdot\times E)$ are absolutely continuous with respect to the probability measure $S(\cdot) = R\big(\mathbb{R}^{[n^\theta]\cdot[n^\gamma]}\times\cdot\big) = \bar R\big(\cdot\times\mathbb{R}^{\mathbb{N}}\big)$, we have
$$ R(C\times D) = \int_D p_C(y)\,S(dy), \qquad \bar R(D\times E) = \int_D q_E(y)\,S(dy) $$
for measurable functions $p_C$ and $q_E$. We now construct the probability measure $\bar Q$ on $\bar\Omega = \mathbb{R}^{[n^\theta]\cdot[n^\gamma]}\times\mathbb{R}\times\mathbb{R}^{\mathbb{N}}$ by
$$ \bar Q(C\times D\times E) := \int_D p_C(y)\,q_E(y)\,S(dy). $$
We have constructed random variables $\xi'$, $\zeta'$, $W$ corresponding to the box $B_0(0)$. To construct them for the whole space we consider $\mathbb{Z}_+\times\mathbb{Z}$ i.i.d. copies of $(\bar\Omega,\bar Q)$. The $\zeta$'s and $\xi$'s constructed for the $(j,l)$-th copy correspond to $B_j(l)$; similarly, the white noise for the $(j,l)$-th copy corresponds to $\tilde B_j(l)$. The white noise on $\mathbb{R}_+\times\mathbb{R}$ is created by “gluing” together the white noise in the different boxes. To be precise, for any $f\in L^2(\mathbb{R}_+\times\mathbb{R})$ define
$$ W(f) = \sum_{(j,l)\in\mathbb{Z}_+\times\mathbb{Z}} W\big(f\cdot\mathbf{1}_{\tilde B_j(l)}\big). $$
One can check that this is a white noise on $\mathbb{R}_+\times\mathbb{R}$. Let us next prove part 3 of the theorem. Denote by $\tilde F$ the distribution function of the random variable
$$ Y := \frac{1}{\sqrt{[n^\theta][n^\gamma]}}\sum_{(i,k)\in B_j(l)}\xi_i(k) = \frac{n^{(\alpha-1)/2\alpha}}{\sqrt{[n^\theta][n^\gamma]}}\cdot\zeta_j(l). $$
Note that this random variable has mean $0$ and variance $1$, since we have normalized by the square root of the size of the box $B_j(l)$. We shall compare $Y$ with
$$ Z = \frac{n^{(\alpha+1)/2\alpha}\,W_j(l)}{\sqrt{[n^\theta][n^\gamma]}}, $$
which has the standard normal distribution. A modified version of the Berry–Esseen theorem (see [3,25,26]) gives
$$ \big|\tilde F(x)-\Phi(x)\big| \lesssim \frac{1}{[1+|x|^{2+\kappa}]\cdot n^{(\theta+\gamma)\cdot\min(1,\kappa)/2}}, \qquad x\in\mathbb{R}. $$
As observed in [12], for $2m<2+\kappa$,
$$ \mathbb{E}|Y-Z|^{2m} \lesssim \int_{\mathbb{R}}|x|^{2m-1}\,\big|\tilde F(x)-\Phi(x)\big|\,dx \lesssim \frac{1}{n^{(\theta+\gamma)\min(1,\kappa)/2}}. $$
The inequality (3.16) follows easily from this.
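The quantile coupling used above can be illustrated by the following sketch (ours, not from the paper; the standardized block sum of $\pm 1$ variables and the empirical estimate of its distribution function are placeholder choices): a standard normal $Z$ is pushed through $\Phi$ and then through $F^{-1}$, producing a variable with law (approximately) $F$ that stays close to $Z$.

\begin{verbatim}
import numpy as np
from scipy import stats

# Minimal illustrative sketch (not from the paper): quantile coupling of a
# standardized block sum Y with a Gaussian Z, via Y := F^{-1}(Phi(Z)).
# Here F, the law of Y, is approximated by an empirical sample.
def quantile_coupling(block_size=400, n_samples=50_000, seed=4):
    rng = np.random.default_rng(seed)
    # standardized sums of block_size i.i.d. +-1 variables (law F)
    sums = 2.0 * rng.binomial(block_size, 0.5, size=n_samples) - block_size
    samples = np.sort(sums / np.sqrt(block_size))
    Z = rng.standard_normal(10_000)                # the Gaussian ingredient
    Y = np.quantile(samples, stats.norm.cdf(Z))    # Y = F^{-1}(Phi(Z)), empirically
    return np.mean(np.abs(Y - Z))                  # small average coupling error

if __name__ == "__main__":
    print("mean coupling error:", quantile_coupling())
\end{verbatim}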
Remark 3.1  From this point on in the proof of Theorem 1.1 we shall assume that we are working on the probability space constructed above. We shall also ignore the superscript in the random fields $\xi$, $\zeta$ and $W$.

In the next approximating process we replace $\zeta$ by the coupled white noise. Again the approximating process will be defined on the coarser lattice. Let
$$ V_{i[n^\theta]}\big(k[n^\gamma]\big) = \sum_{j=0}^{i-1}\sum_{l\in\mathbb{Z}} P_{(i-1-j)[n^\theta]}\big((l-k)[n^\gamma]\big)\cdot\sigma\big(u_{j[n^\theta]}(l[n^\gamma])\big)\cdot n^{1/\alpha}W_j(l). \qquad (3.17) $$
We show now that $V$ is close to $U$.

Proposition 3.7  Fix any $T>0$. We have, uniformly for $1\le r\le[Tn^{1-\theta}]$, $k\in\mathbb{Z}$,
$$ \big\|V_{r[n^\theta]}(k[n^\gamma]) - U_{r[n^\theta]}(k[n^\gamma])\big\|_{2m}^{2} \lesssim \frac{1}{n^{(\theta+\gamma)\cdot\min(1,\kappa)/(2m)}}. $$

Proof  By our assumption (3.2) on $\theta$ and $\gamma$ we have $\theta+\gamma<(\alpha-1)/\alpha$. Therefore, by Burkholder's inequality and (3.16),
$$ \big\|V_{r[n^\theta]}(k[n^\gamma]) - U_{r[n^\theta]}(k[n^\gamma])\big\|_{2m}^{2} \lesssim \sum_{j=0}^{r-1}\sum_{l\in\mathbb{Z}} P^{2}_{(r-1-j)[n^\theta]}\big((l-k)[n^\gamma]\big)\cdot\frac{n^{\theta+\gamma}}{n^{(\alpha-1)/\alpha}\,n^{(\theta+\gamma)\cdot\min(1,\kappa)/2m}} \lesssim \frac{1}{n^{(\theta+\gamma)\cdot\min(1,\kappa)/2m}}, $$
where the last line follows because of (2.7) and the fact that the second term on the second line of (3.13) goes to $0$.
3.3 Replacing transition probabilities by the Stable(α) density

Next we replace the random walk kernel $P$ by the transition density $p$ for the Stable($\alpha$) process. We can choose integers $a_j$ such that
$$ a_j[n^\gamma] \le \mu j[n^\theta] < (a_j+1)[n^\gamma], $$
and on the coarser lattice for i 2, k ∈ Z let i−2 V¯i[n θ ] k[n γ ] = p (i−1− j)[nθ ] j=0 l∈Z
n
l[n γ ] − (k + ai−1 − a j )[n γ ] n 1/α
· σ u j[n θ ] l[n γ ] · W j (l) γ i−2 l[n ] − (k + ai−1 )[n γ ] = p (i−1− j)[nθ ] n 1/α n j=0 l∈Z · σ u j[n θ ] (l − a j )[n γ ] · W j (l − a j ),
(3.18)
the second line following from the first by a simple translation. Note that the sum over j runs from 0 to i − 2 ; this avoids the singularity of p at t = 0, x = 0. Proposition 3.8 Fix T > 0. We have uniformly for 2 r [T n 1−θ ] Vr [n θ ] z[n γ ] − V¯r [n θ ] z[n γ ] 2 2m
n θ+γ +o(1) n min(α−1, a)/α
.
Proof Indeed using the first expression in (3.18) Vr [n θ ] z[n γ ] − V¯r [n θ ] z[n γ ] 2 2m r −1 θ+γ θ+γ γ 1 (l − ar −1 + ar −1− j )[n γ ] 2 n n αP n l[n + ] − p j[n θ ] j[n θ ] 1 1 n 1/α n n 1− α n 1+ α j=1 l∈Z Let us focus on the second term. This is equal to r −1 γ (l − ar −1 + ar −1− j )[n γ ] n θ+γ 2 2 2 αP n l[n ] + p 1 j[n θ ] j[n θ ] n 1/α n n 1+ α j=1 l∈Z 1 (l − ar −1 + ar −1− j )[n γ ] . − 2n α P j[n θ ] l[n γ ] p j[nθ ] n 1/α n Firstly taking (1 − b)/α = γ in Theorem A.1 (see Appendix A) r −1 n θ+γ
n
1+ α1
1 n α P j[n θ ] l[n γ ]
j=1 l∈Z
1 (l − ar −1 + ar −1− j )[n γ ] · n α P j[n θ ] l[n γ ] − p j[nθ ] n 1/α n r −1 n θ+γ 1 nγ (α−1)/α + ( jn θ )2/α n ( jn θ )(1+a)/α j=1
n γ +o(1) n min(α−1, a)/α
+
n 2γ +o(1) . n (α−1)/α
Also we control
(l − ar −1 + a j )[n γ ] 1 · n α P j[n θ ] l[n γ ] p j[nθ ] 1/α 1+ α1 n n n j=1 l∈Z (l − ar −1 + a j )[n γ ] − p j[nθ ] n 1/α n γ r −1
nθ 1 n nγ γ −1/α (α−1)/α l[n + p ]n θ j[n ] ( jn θ )2/α n 1/α n n ( jn θ )(1+a)/α r −1 n θ+γ
j=1
l∈Z
r −1
nθ n (α−1)/α
j=1
n o(1) n min(α−1, a)/α
1 ( jn θ )(1+a)/α
+
nγ ( jn θ )2/α
n γ +o(1) . n (α−1)/α
+
For the Riemann sum approximation of the density p we need γ θ/α which we have assumed in (3.2).
3.4 Comparison with the stochastic heat equation

In this last step we compare our approximation $\bar V$ with the stochastic heat equation (1.5) with initial profile $0$, with respect to a white noise $\mathcal{W}$ constructed as
$$ \mathcal{W}_j(l) = W_j(l-a_j). $$
Thus $\mathcal{W}$ is constructed from $W$ by spatially shifting the noise $W$ in the time region $\big[j[n^\theta]/n,\,(j+1)[n^\theta]/n\big)$ by $a_j[n^\gamma]$. One can check that $\mathcal{W}$ is a white noise as follows. For $f\in L^2(\mathbb{R}_+\times\mathbb{R})$ write
$$ \mathcal{W}(f) = \sum_{j=0}^{\infty}\int_{\frac{j[n^\theta]}{n}}^{\frac{(j+1)[n^\theta]}{n}}\int_{\mathbb{R}} f(t,x)\,\mathcal{W}(dx,ds) = \sum_{j=0}^{\infty}\int_{\frac{j[n^\theta]}{n}}^{\frac{(j+1)[n^\theta]}{n}}\int_{\mathbb{R}} f(t,x+a_j)\,W(dx,ds). $$
t 0
123
R
pt−s (y − x)σ vs (y) W(ds dy)
(3.19)
Stoch PDE: Anal Comp
and satisfies supx,tT E[|vt (x)|m ] < ∞ for any T > 0 and m ∈ Z+ . The following Hölder continuity estimate will be very useful for us. Lemma 3.9 ([14]) For any p 2
vt (x) − vs (y) 2p |x − y|α−1 + |s − t|
α−1 α
.
Remark 3.2 For general initial profile v0 , the solution vt (x) is obtained by adding ( pt ∗ v0 )(x) to the right hand side of (3.19). Then the conclusion of Lemma 3.9 is valid only for the noise term (3.19). We shall first consider a spatial discretisation of v as in [13] and [21], and divide space into intervals of length [n γ ]/n 1/α . Let v˜t
k[n γ ] n 1/α
=
θ
t− [nn ]
0
l∈Z
pt−s
γ (l − k)[n γ ] l[n ] σ vs dBs(n) (l) 1/α n n 1/α (3.20)
where Bs(n) (l)
γ l[n ] (l + 1)[n γ ] . = W [0, s] × , n 1/α n 1/α
One has the following Lemma 3.10 We have γ 1/α 2 θ(α−1)/α vt (x) − v˜t [n ] xn n . n 1/α [n γ ] n (α−1)/α 2m
(3.21)
Proof In [13] and [21] we consider, instead of v, ˜ the random field vˆ which is of the αγ same form as v˜ but the integral in (3.20) is from 0 to t − nn . We have nθ γ γ γ 2 n nγ 2 (l − k)[n ] v˜t k[n ] − vˆt k[n ] · p ds s n αγ n 1/α n 1/α n 1/α n 1/α 2m n l
nθ n
ds
0 R n θ(α−1)/α
n (α−1)/α
dy ps2 (y)
.
The above along with the bound on the difference v − vˆ obtained in [21] completes the proof.
123
Stoch PDE: Anal Comp
Next we discretise v˜ in time. Let v¯ i[nθ ] n
k[n γ ] n 1/α
=
i−2
p (i−1− j)[nθ ] n
j=0 l∈Z
(l − k)[n γ ] n 1/α
γ l[n ] · σ v j[nθ ] · W j (l) n 1/α n (3.22)
for i 2 and 0 otherwise, where W j (l) = W
j[n θ ] ( j + 1)[n θ ] , n n
l[n γ ] (l + 1)[n γ ] × , n 1/α n 1/α
.
We will need the following lemma.

Lemma 3.11 ([23])  We have
$$ \Big|\frac{\partial p_t(x)}{\partial t}\Big| \lesssim \frac{p_t(x/2)}{t}. $$

Proof  For $\alpha<2$ one just has to look at estimate (3.7) in [23]; in this case the inequality is true even without the factor of $1/2$ on the right. For $\alpha=2$ this is just a simple computation involving the Gaussian density.
The following gives an error bound on the temporal discretisation of v. ˜ Lemma 3.12 We have for any k ∈ Z and r ∈ Z+ satisfying (3.6) γ γ θ(α−1)/α 2 v˜t k[n ] − v¯ r [nθ ] k[n ] n . n 1/α n 1/α n (α−1)/α n 2m Proof Compare (3.20) with (3.22) by writing γ γ γ l[n ] l[n ] l[n ] σ vs = σ v − σ v θ s j[n ] n 1/α n 1/α n 1/α n γ l[n ] +σ v j[nθ ] n 1/α n Now use the temporal Hölder continuity supplied in Lemma 3.9 to get γ γ 2 v˜t k[n ] − v¯ r [nθ ] k[n ] n 1/α n 1/α 2m n γ θ α−1 t− [nθ ] α [n γ ] n l[n ] n 2 ds · pt−s 1/α n n n 1/α 0 l∈Z
+
r i=1
123
(i+1)[n θ ] n i[n θ ] n
ds
γ 2 [n γ ] l[n γ ] l[n ] p − p θ s i[n ] n 1/α n 1/α n 1/α n l∈Z
(3.23)
Stoch PDE: Anal Comp
The first term in (3.23) can be bound by
nθ n nθ n
α−1 α α−1 α
t [n θ ] n
t [n θ ] n
ds R
dx ps2 (x)
ds s 1/α
n θ(α−1)/α . n (α−1)/α
Using Lemma 3.11 we can bound the second term in (3.23) by
r
i[n θ ] n
i=1
n
n
nθ
(i+1)[n θ ] n i[n θ ] n
i=1
nθ
γ 2 [n γ ] s dq l[n ] pq ds θ] 1/α i[n n q 2n 1/α n l∈Z
r nθ
(i+1)[n θ ] n
2
n
γ [n γ ] s dq 2 l[n ] p n 1/α i[nnθ ] q 2 q 2n 1/α l∈Z
2 r i=1
ds
(i+1)[n θ ] n i[n θ ] n
dq [n γ ] 2 l[n γ ] p q2 n 1/α q 2n 1/α l∈Z
t
dq
[n θ ] n
q 2+ α
1
n θ(α−1)/α . n (α−1)/α
The condition γ θ/α allows us to remove the Riemann sum approximation in the third last line. This completes the proof of the lemma.
We now collect all our estimates in this section to give a bound on the difference (n) between u¯ t (x) and vt (x). Theorem 3.13 Suppose that v0 ≡ 0, and fix T 0. We then have the following θ uniform bound valid for 2 [nn ] t T 2 (n) u¯ t (x) − vt (x) 2m
nγ n (α−1)θ
+
n θ+γ +o(1) n min(a,α−1)/α
+
1 . n (θ+γ )·min(1, κ)/(2m) (3.24)
Proof Recall r, z from (3.6). Firstly we apply Minkowski’s inequality γ 2 u r [n θ ] z[n γ ] − v r [nθ ] (z + ar )[n ] n 1/α n 2m γ 2 u r [n θ ] z[n ] − V¯r [n θ ] (z[n γ ])2m 2 (z + ar −1 )[n γ ] (z + ar )[n γ ] − v r [nθ ] + v r [nθ ] n 1/α n 1/α n n 2m γ ] γ ] 2 (z + a (z + a )[n )[n r −1 r −1 v − v ¯ + θ θ r [n ] r [nn ] n 1/α n 1/α n 2m γ ] 2 (z + a )[n r −1 γ ¯ + Vr [n θ ] (z[n ]) − v¯ r [nnθ ] n 1/α 2m
(3.25)
By Lemmas 3.10 and 3.12 of this subsection as well as Propositions 3.5, 3.7, 3.8 we can bound the above by
nγ
n θ+γ +o(1)
1 n (θ+γ )·min(1, κ)/(2m) r −2 (l − k)[n γ ] n γ +θ 2 + p (r −1− j)[nθ ] 1 n 1/α n n 1+ α j=0 l∈Z γ 2 l[n ] γ − u j[n θ ] (l − a j )[n ] . · v j[nθ ] n 1/α n 2m
n (α−1)θ
+
n min(a,α−1)/α
+
(3.26)
An application of Gronwall's inequality then gives
$$ \Big\|u_{r[n^\theta]}\big(z[n^\gamma]\big) - v_{\frac{r[n^\theta]}{n}}\Big(\frac{(z+a_r)[n^\gamma]}{n^{1/\alpha}}\Big)\Big\|_{2m}^{2} \lesssim \frac{n^{\gamma}}{n^{(\alpha-1)\theta}} + \frac{n^{\theta+\gamma+o(1)}}{n^{\min(a,\alpha-1)/\alpha}} + \frac{1}{n^{(\theta+\gamma)\cdot\min(1,\kappa)/(2m)}}. $$
We can now use the Hölder continuity estimates in Lemma 3.9 and Proposition 3.4 to conclude the proof.
We now complete the proof of Theorem 1.1. Proof of Theorem 1.1 The case of v0 ≡ 0 has already been covered. For general v0 , the solution to (1.5) with white noise W is vt (x) = ( pt ∗ v0 )(x) +
t 0
pt−s (y − x)σ vs (y) W(ds dy).
R
We have already shown in (3.24) that the difference of the noise terms above and in (2.1) goes to 0 in · 2m norm. It is thus enough to show that the difference of the
123
Stoch PDE: Anal Comp
· 2m of the non-noise terms goes to 0, or (z + a )[n γ ] 2 X r [n θ ] r → 0. E z[n γ ] v0 − p θ ] ∗ v0 r [n n 1/α n 1/α n 2m
To see this first note that the expression inside · 2m goes to 0 almost surely by the weak convergence of the centered X n to a Stable(α) random variable, and by the smoothing properties of the Stable(α) density. The dominated convergence theorem can then be applied to show the above. This then implies γ 2 u r [n θ ] z[n γ ] − v r [nθ ] (z + ar )[n ] → 0 n 1/α n 2m and the conclusion of the theorem follows from Lemma 3.9 and Proposition 3.4.
4 Proof of Theorem 1.2 In order to couple the initial profile η with a Brownian motion we need the following proposition whose proof is similar to that of Theorem 3.6. For 0 < θ < 1/2 let
l[n θ ] , tl = n
ζ¯l =
θ ]−1 l[n
k=(l−1)[n θ ]
η(k) √ . n
We have Proposition 4.1 ([22]) For each n ∈ N, one can construct a copy of η and a two sided Brownian motion B with variance λ on a probability space, so that for m ∈ N with 2m < 2 + κ % 2m & min(1,κ ) n −m(1−θ ) n −θ · 2 . E ζ¯l − Btl − Btl−1 (4.1) We can now provide the Proof of Theorem 1.2 A look at the proof of Theorem 2.1 tells us that existence and uniqueness of u = u (n) holds. However the bounds in (2.2) and (2.3) do not hold uniformly in i [nT ], k ∈ Z, although it is easily checked that it continues to hold for the noise term. Let us show instead that the · 2m norm of the non-noise term is finite. We split
Pi+1 (k, l) · u 0 (l) =
l
Pi+1 (k, l) · u 0 (l) − u 0 (k + [iμ]) + u 0 (k + [iμ]) .
l
Therefore 2 2 |l − k − [iμ]| Pi+1 (k, l) · u 0 (l)2m u 0 (k + [iμ])2m + Pi+1 (k, l) · , √ n l
l
123
Stoch PDE: Anal Comp
which is finite thanks to our assumptions (1.11) and Lemma 3.3. A careful reader of the proof of Theorem 1.1 would have observed that our assumption of bounded initial profile was only needed so that we had σ u j (l) < ∞. sup 2m j[nT ], l∈Z
While this does not hold for general Lipschitz σ if u 0 is unbounded in L 2m , it clearly holds in the case σ (x) is bounded uniformly in x. Therefore to complete the proof of Theorem 1.2 we just need to approximate the non-noise term. As before we divide the proof into several steps. Step 1: First we collect the initial random variables n −1/4 η into groups of size [n θ ]. Thus
l √ −1/4 P[nt] [x n] − [μnt], l · n η(i)
l
i=1
⎧ ⎫ θ ⎬ ] √ ⎨ −1/4 k[n P[nt] [x n] − [μnt], l · n η(i) ⎩ ⎭
=
θ (k+1)[n ]−1
l=k[n θ ]
k
+
i=1
(k+1)[n θ ]−1
√ P[nt] [x n] − [μnt], l · n −1/4
η(i)
.
l=k[n θ ]
k
l i=k[n θ ]+1
Consider the second term in the right hand side. The expression in square brackets is independent of k. Thus using Burkholder’s inequality one gets the following bound for · 22m of the second term θ
(k+1)[n ]−1 √ √ 1 P[nt] [x n] − [μnt], l · P[nt] [x n] − [μnt], l √ n θ k
×
l,l =k[n ]
l
η(i) ·
l j=k[n θ ]+1
i=k[n θ ]+1
η( j)
m
n √ P X [nt] − X˜ [nt] n θ , n θ
where one uses the Cauchy–Schwarz inequality to obtain the last step. This goes to 0 as n → ∞ since θ < 1/2. Step 2: Next we replace the sum over η in blocks by the Brownian motion constructed in Proposition 4.1. θ
]−1 (k+1)[n k
l=k[n θ ]
θ k[n ] √ −1/4 P[nt] [x n] − [μnt], l · n η(i)
i=1
=
√ n 1/4 P[nt] [x n] − [μnt], l · B k[nθ ] n
k,l
+
k √ n 1/4 P[nt] [x n] − [μnt], l ζ¯i − B k[nθ ]
k,l
n
i=1
where the limits of the sum are as in the first line. Using the Cauchy–Schwarz inequality we bound · 22m of the second term as follows
√
n
k √ ¯i − B k[nθ ] ζ P[nt] [x n] − [μnt], l ·
k,l
n
i=1
2
2m
2 k √ √ ¯ ζi − B k[nθ ] n P[nt] [x n] − [μnt], l · . k,l
n
i=1
2m
We now use (4.1) to bound this
√ √ n θ (1− n P[nt] [x n] − [μnt], l · |k| · k,l
1
n
θ min(1,κ ) 2m
n
θ min(1,κ ) 2m
1
min(1,κ ) ) 2m
n
# $ X [nt] − [μnt] · E [x √n] √ n ,
where we used Lemma 3.3 in the last line. The above also goes to 0 with n. Step 3: As in the proof of Theorem 1.1 we now substitute the transition probability of the random walk by the heat kernel. θ
]−1 (k+1)[n k
√ n 1/4 P[nt] [x n] − [μnt], l · B k[nθ ] n
l=k[n θ ]
= [n θ ]
# pt
k
√ $ k[n θ ] − [x n] · n −1/4 B k[nθ ] √ n n
# θ √ $ ]−1 √ (k+1)[n √ k[n θ ] − [x n] + n P[nt] [x n] − [μnt], l − pt √ n θ k
·n
−1/4
l=k[n ]
B k[nθ ] . n
123
Stoch PDE: Anal Comp
We now bound · 22m of the second term on the right hand side. This gives an upper bound of 2 √ √ k[n θ ] − [x n] 1 √ · B k[nθ ] n P[nt] [x n] − [μnt], l − pt √ √ n n n 2m k,l (4.2) √ √ √ k[n θ ] − [x n] |k|n θ √ · n n P[nt] [x n] − [μnt], l − pt n k,l 2 where we used the Cauchy–Schwarz inequality along with the bound B k[nθ ] 2m kn θ n
n
. Fix > 0 such that 2 < min(a/2, we use Theorem A.1 to get
k
1 √ |l−[x n]|n 2 +
+
(1−2θ )/2). Continuing with our bound (4.2)
1 |k|n θ 1 · a/2 + (1−2θ )/2 n n n
k
1 √ |l−[x n]|>n 2 +
√ √ |k|n θ n P[nt] [x n] − [μnt], l · n
√ $ k[n θ ] − [x n] + pt √ n n 1+2 1 1 · a/2 + (1−2θ )/2 + n n n #
1 √ k: |kn θ −x n|>n 2 +
n θ |k|n θ √ · √ n n
√ $ k[n θ ] − [x n] · pt √ n 0 % 1& 1 1 + √ E X [nt] − μ[nt] + n θ · 1 |X [nt] − μ[nt]| > n 2 + n #
A bound for the third term comes from applying Hölder’s inequality along with Chebyshev’s inequality: % 0 1& 1 1 √ E |X [nt] − μnt · 1 |X [nt] − μnt| > n 2 + n % 2 &1/2 1/2 1 1 · P |X [nt] − μnt| > n 2 + √ E |X [nt] − μnt n − n , while the second term decays much more quickly because of the exponential decay of the heat kernel.
Step 4: To complete the proof of the theorem it is enough for us to show θ
[n ]
# pt
k
√ $ B k[nθ ] k[n θ ] − [x n] n ˜ pt (x − y) B(y) dy · 1/4 ≈ √ n n R
where ˜ B(y) = n 4 B √y 1
n
is another Brownian motion. As the reader might have guessed this is just a Riemann sum argument. We just need to bound θ ] 2 (k+1)[n √ n k[n θ ] ˜ ˜ dy p (y − x) · B(y) − B √ t θ k[n n √ ]
k
2m
n
(4.3) θ ] (k+1)[n √ θ ] − [x √n] θ ] 2 n k[n k[n · B˜ √ dy pt (y − x) − pt + √ . θ k[n n n √ ] k
2m
n
For the first term we use the independence of Brownian increments as well as Burkholder’s inequality to get a bound of θ ] 2 (k+1)[n √ n k[n θ ] ˜ ˜ dy pt (y − x) B(y) − B √ θ ] k[n n √
k
m
n
(k+1)[n √ n nθ √ θ ] k[n n √ k
n
2θ
n
n
θ
]
2 k[n θ ] dy ˜ ˜ pt (y − x) B(y) − B √ n 2m
.
Next we have the following bound on · 22m for the second term in (4.3)
k
θ ] (k+1)[n √ n θ k[n √ ] n
θ − [x √n] k[n θ ] · B˜ √ pt (y − x) − pt k[n ] √ n n
2 dy 2m
θ ] (k+1)[n √ n
√ 2 √ k[n θ ] − [x n] n θ /2 |k| √ · n 1/4 dy pt (y − x) − pt θ k[n n √ ] k n 2 √ 1 n 2θ · pt (y − x)/2 y dy t n
n 2θ . n
We have shown that (4.3) goes to 0 with n and this completes the proof of Theorem 1.2.
5 Tightness

In this section we discuss the issue of tightness in Theorem 1.1.

Theorem 5.1  Suppose the initial profile $v_0$ satisfies
$$ \|v_0(x)-v_0(y)\|_{2m} \lesssim |x-y|^{\zeta} $$
for some $0<\zeta<\alpha/2$, all $x,y\in\mathbb{R}$, and some
$$ m > \frac{\max(2\alpha,\alpha+1)}{\min(2\zeta,\alpha-1)}. \qquad (5.1) $$
Then the process $\bar u_t(x)$, $t\in\mathbb{R}_+$, $x\in\mathbb{R}$, is tight in every compact subset of $\mathbb{R}_+\times\mathbb{R}$.

Remark 5.1  One can check that the following tightness arguments will hold for Theorem 1.2 with $\zeta=1/2$.

We show tightness for the process $\bar u_t(x) = \bar u^{(n)}_t(x)$ in the time-space box $[0,1]\times[0,1]$, although the argument is valid for any compact subset of $\mathbb{R}_+\times\mathbb{R}$. Define the modulus of continuity
$$ w_\delta(\bar u) = \sup_{(t,x),(s,y)\in[0,1]^2,\ |(t,x)-(s,y)|<\delta}|\bar u_t(x)-\bar u_s(y)|. $$
Tightness of the process $\bar u$ will follow from the following lemma.

Lemma 5.2 ([24])  Suppose there is a sequence $\delta_n\downarrow 0$ such that the following hold.
1. There are $\psi>0$, $\lambda>2$ and a constant $c_1$ such that for all large enough $n$,
$$ \mathbb{E}\,|\bar u_t(x)-\bar u_s(y)|^{\psi} \le c_1|(t,x)-(s,y)|^{\lambda}, $$
for all $(t,x),(s,y)\in[0,1]^2$ with $|(t,x)-(s,y)|>\delta_n$.
2. For all $\epsilon,\rho>0$,
$$ \mathbb{P}\big(w_{\delta_n}(\bar u)>\epsilon\big)<\rho $$
for all large $n$.
Then for all $\epsilon,\rho>0$ there is a $0<\delta<1$ such that
$$ \mathbb{P}\big[w_{\delta}(\bar u)>\epsilon\big]<\rho $$
for all large $n$.
Proof of Theorem 5.1 Let us check the first condition in Lemma 5.2. We use the triangle inequality u¯ t (x) − u¯ s (y)2 2u¯ t (x) − u¯ t (y)2 + 2u¯ t (y) − u¯ s (y)2 2m 2m 2m
(5.2)
and bound each of the two terms on the right, starting with the first. Recall from (2.1) that the solution u is the sum of two terms, and it is enough to look at each of them separately. The contribution of the noise term, the one involving ξ , gives a bound
[nt]−1 %
1
P j (k − [xn 1/α ]) − P j (k − [yn 1/α ])
n (α−1)/α
j=0
&2
k
a¯ [xn 1/α ] − [yn 1/α ])
1 n (α−1)/α
|x − y|α−1 +
1 n (α−1)/α
.
The contribution of the non-noise term in (2.1) corresponding to the initial profile is 2 % & 1/α 1/α P[nt] (l + [μnt]) · u 0 l + [xn ] − u 0 l + [yn ] l
2m
|x − y|
2ζ
+
1 n 2ζ /α
.
We have obtained u¯ t (x) − u¯ t (y)2 |x − y|α−1 + 2m
1 1 + |x − y|2ζ + 2ζ /α . n n (α−1)/α
(5.3)
Let us next consider the second term in (5.2). By arguments similar to those in Proposition 3.4 one can see that the contribution from the noise term
[nt]−1
1 n (α−1)/α +
P(Y j = 0)
j=[ns]
1 n (α−1)/α
[ns]−1 j=0
|t − s|(α−1)/α +
2
k
(α−1)/α
[nt] − [ns] n & + E a¯ Z [nt]−[ns]
P[nt]− j−1 (l + [μnt]) − P[ns]− j−1 (l + [μns]) +
1 n (α−1)/α
1 n (α−1)/α
%
E a¯ W[nt]−[ns] + [μnt] − [μns]
.
123
Stoch PDE: Anal Comp
The contribution of the non-noise term
l
&2 P[nt]−[ns] [yn 1/α ] − [μnt], l · u [ns] l − u [ns] [yn 1/α ] − [μns] 2m
%
2 P[nt]−[ns] [yn 1/α ] − [μnt], l · u [ns] l − u [ns] [yn 1/α ] − [μns] 2m
l
α−1 + E − [μnt] + [μns] X [nt]−[ns] 2ζ /α (α−1)/α n n 2ζ + E X [nt]−[ns] − [μnt] + [μns]
1
+
1
|t − s|(α−1)/α +
1 n (α−1)/α
+ |t − s|2ζ /α +
1 n 2ζ /α
.
The last line is because (3.7) also holds with α − 1 replaced by 2ζ ; this can be seen from following the proof of Lemma 3.3. We have obtained u¯ t (y) − u¯ s (y)2 |t − s|(α−1)/α + 2m
1 1 + |t − s|2ζ /α + 2ζ /α . (5.4) n n (α−1)/α
Combining (5.3) and (5.4) we obtain m·min[2ζ, α−1]/α 2m E u¯ t (x) − u¯ s (y) (t, x) − (s, y) when |(t, x) − (s, y)| > δn = n −1 . Because of our condition on m the exponent is greater than 2 and this verifies condition 1 in Lemma 5.2. We next check the second condition in Lemma 5.2. Since we have chosen δn = n −1 we just need to consider the maximum over u i (k)−u j (l) where |i − j| 1, |k−l| 1. There are of the order n × n 1/α such points. Thus 1 ¯ > n 1+ α P wn −1 (u) n
1+ α1
sup
|i− j|1, |k−l|1
P u i (k) − u j (l) > % 2m & E u i (k) − u j (l)
sup
|i− j|1, |k−l|1
2m
1
n 1+ α 1 · , 2m n m·min[2ζ,α−1]/α
where we used (5.3) and (5.4) for the last line. The above goes to 0 with n due to our restriction on m. Condition 2 of Lemma 5.2 is verified.
123
Stoch PDE: Anal Comp
6 Proof of part 1 of Theorem 1.3 Consider the martingale Mn in (1.12) (we ignore the superscript ξ˜ ). By the independence of the environment this has the same distribution as the ← M(n,k) =
E k exp
n
i=0 β
· ξ˜n−i (X i )
% &n+1 Eeβ ξ˜
,
(6.1)
where Ek is the expectation of paths of the random walk starting at k. It is therefore ← . Consider the random field enough to prove Theorem 1.3 for M(n,k) E k exp
wi (k) =
i
˜ β · ξ (X ) i− j j j=0 , i 0, k ∈ Z. % &i+1 Eeβ ξ˜
(6.2)
Using the Markov property we obtain wi+1 (k) =
# P(k, l) wi (l)
l
=
P(k, l) wi (l) +
l
˜
eβ ξi+1 (k)
$
Eeβ ξ˜
#
P(k, l) wi (l)
l
˜
eβ ξi+1 (k) Eeβ ξ˜
$
(6.3)
−1 .
with initial profile w0 (k) =
˜
eβ ξ0 (k) Eeβ ξ˜
.
Although (6.3) is not of the form (1.4) one can check that the statements of Theorem 2.1 hold for w; to see this one could follow the arguments of the proof of Theorem 2.1. Indeed a solution to (6.3) is wi+1 (k) =
l
i % eβ ξ˜ j+1 (y) & Pi+1 (k, l)w0 (l)+ Pi− j (k, l) P(l, y)w j (y) · −1 . ˜ Eeβ ξ y j=0 l
Therefore by Burkholder’s inequality wi+1 (k)2 2m
l
i 2 2 Pi+1 (k, l) · w0 (l)2m + Pi− (k, l) · P(l, y)w j (y) j j=0 l
y
123
Stoch PDE: Anal Comp
·
% eβ ξ˜ j+1 (y)
&2 −1
2m Eeβ ξ˜ 2 i 2 sup y w j (y)2m 2 Pi+1 (k, l) · w0 (l) 2m + Pi− j (k, l) , n (α−1)/α l
(6.4)
j=0 l
thanks to the independence of w j (y) and ξ˜ j+1 (y), and the first bound in (6.9). This argument shows that we obtain the same moment bounds for w as in Theorem 2.1. To obtain the first part of Theorem 1.3 we shall show that our random field w is close to the solution u of P(k, l) u i (l) + β ξ˜i+1 (k) · u i (k) u i+1 (k) = (6.5) l u 0 (k) = 1. We start with an estimate of the spatial Hölder continuity of u. Lemma 6.1 The following holds 2 |l − k|α−1 sup u i (k) − u i (l)2m (α−1)/α . n in Proof From (2.3) the 2m’th moment of u i (k) is bounded for i n, k ∈ Z. Therefore one can argue u i (k) − u i (l)2 2m
1 n (α−1)/α
i−1 2 Pi−1− j (k, y) − Pi−1− j (l, y) j=0 y
a(l ¯ − k) . n (α−1)/α
To get the last inequality we again expand the square and use the relation ˜ l P j (l)P j (l + k) = P(Y j = k) where Y = X − X is the difference of two random walks X, X˜ with transition P. The result follows from Lemma 3.2.
Consider now the random field w˜ defined as P(k, l) w˜ i (l) + P(k, l) w˜ i (l) · β ξ˜i+1 (k) w˜ i+1 (k) = l
(6.6)
l
with initial profile 1. The statements of Theorem 2.1 hold for w˜ also. We first compare w˜ with u. Lemma 6.2 The following holds sup in, k∈Z
123
u i (k) − w˜ i (k)2 2m
1 . n (α−1)/α
Stoch PDE: Anal Comp
Proof We can check that w˜ i+1 (k) − u i+1 (k) = β
i
Pi− j (k, l) · ξ˜ j+1 (l)
P(l, y) w˜ j (y) − u j (l) .
y
j=0 l
Split w˜ j (y) − u j (l) = [w˜ j (y) − u j (y)] + [u j (y) − u j (l)] to get 2 sup u i+1 (k) − w˜ i+1 (k)2m k∈Z
i
1 n (α−1)/α
2 Pi− j (k, l)
2 · u j (y) − u j (l)2m i
1
+
P(l, y)
y
j=0 l
(6.7)
2 Pi− j (k,
n (α−1)/α j=0 l 2 · sup u j (y) − w˜ j (y)2m .
l)
y∈Z
The lemma follows by using Gronwall’s inequality and Lemma 6.1. Next consider the random field w∗ given by ∗ (k) wi+1
=
P(k, l) wi∗ (l) +
l
P(k, l) wi∗ (l) ·
˜
eβ ξi+1 (k)
l
Eeβ ξ˜
−1
(6.8)
with initial profile 1. Once again Theorem 2.1 holds for w ∗ also. We compare w ∗ with w˜ below. Lemma 6.3 The following holds sup in, k∈Z
w˜ i (k) − w ∗ (k)2 i 2m
1 n (α−1)/α
.
Proof We write ∗ w˜ i+1 (k) − wi+1 (k)
=
i
˜
eβ ξ j+1 (l)
Pi− j (k, l) β ξ˜ j+1 (l) + 1 −
Eeβ ξ˜
j=0 l
·
P(l, y)w˜ j (y)
y
+
i j=0 l
·
Pi− j (k, l)
˜
eβ ξ j+1 (l) Eeβ ξ˜
−1
% & P(l, y) w ∗j (y) − w˜ j (y)
y
123
Stoch PDE: Anal Comp
One can check that for any m 1 2 eβ ξ˜ 1 − 1 (α−1)/α , and ˜ Eeβ ξ n 2m 2 eβ ξ˜ 1 ˜ − 1 − β ξ 2(α−1)/α . ˜ Eeβ ξ n
(6.9)
2m
Using this we obtain for i n 2 ∗ sup w˜ i+1 (k) − wi+1 (k)2m k∈Z
1
1
+
i
n (α−1)/α n (α−1)/α j=0 l 2 · sup w˜ j (y) − w ∗j (y)2m .
2 Pi− j (k, l)
y∈Z
The lemma follows by an application of Gronwall’s inequality. Finally we compare w∗ with w in (6.3). Lemma 6.4 The following holds sup in, k∈Z
∗ w (k) − wi (k)2 i 2m
1 . n (α−1)/α
Proof We now have
\[
w_{i+1}(k) - w^*_{i+1}(k)
= \sum_{l} P_{i+1}(k,l)\Big[\frac{e^{\beta\tilde\xi_0(l)}}{E e^{\beta\tilde\xi}} - 1\Big]
+ \sum_{j=0}^{i}\sum_{l} P_{i-j}(k,l)\Big[\frac{e^{\beta\tilde\xi_{j+1}(l)}}{E e^{\beta\tilde\xi}} - 1\Big]\cdot\sum_{y} P(l,y)\big[w_j(y) - w^*_j(y)\big].
\]
Burkholder's inequality along with the first inequality in (6.9) shows
\[
\sup_{k\in\mathbb Z}\|w_{i+1}(k) - w^*_{i+1}(k)\|_{2m}^2
\lesssim \frac{1}{n^{(\alpha-1)/\alpha}}
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{j=0}^{i}\sum_{l} P_{i-j}^2(k,l)\cdot\sup_{y\in\mathbb Z}\|w_j(y) - w^*_j(y)\|_{2m}^2.
\]
We can now use Gronwall’s inequality to complete the proof.
Our main result Theorem 1.1 shows that u_1(0) of (6.5) converges in distribution to v_1(0) of (1.5) with σ(x) = βx and initial profile 1. Furthermore the moments of u converge to those of v. As a consequence of the previous lemmas and the discussion at the beginning of this section, the same holds for M_n. This completes the proof of Theorem 1.3.
7 Extensions

7.1 Addition of a drift term

We describe in this section an approximation to
\[
\partial_t v = -\nu(-\Delta)^{\alpha/2} v_t(x) + b(v_t(x)) + \sigma(v_t(x))\cdot \dot W(t,x),
\tag{7.1}
\]
where b : R → R is also Lipschitz continuous with Lipschitz coefficient Lip_b. Recall that we scale time by n, which suggests an approximation of the form
\[
u^{(n)}_{i+1}(k) = \sum_{l\in\mathbb Z} P(k,l)\, u^{(n)}_i(l) + b\big(u^{(n)}_i(k)\big)\cdot\frac{1}{n} + \sigma\big(u^{(n)}_i(k)\big)\cdot\frac{\xi_i(k)}{n^{(\alpha-1)/2\alpha}}.
\tag{7.2}
\]
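As a quick illustration of the scheme (7.2), here is a minimal numerical sketch; it is not the paper's code, and the concrete choices below (α = 2, a lazy nearest-neighbour kernel, Gaussian ξ, and the drift and noise coefficients b(x) = −x, σ(x) = √(1+x²)) are assumptions made purely for the example.

```python
import numpy as np

# Minimal sketch of the drift-augmented scheme (7.2) -- illustrative choices only.

rng = np.random.default_rng(1)

n = 500                          # time scaling parameter
L = 6 * int(np.sqrt(n))          # periodic lattice, a few diffusive lengths wide
alpha = 2.0

def P(u):
    """One application of a lazy nearest-neighbour kernel."""
    return 0.5 * u + 0.25 * (np.roll(u, 1) + np.roll(u, -1))

b = lambda x: -x                         # Lipschitz drift coefficient (illustrative)
sigma = lambda x: np.sqrt(1.0 + x**2)    # Lipschitz noise coefficient (illustrative)

u = np.ones(L)                   # flat initial profile
for i in range(n):
    xi = rng.standard_normal(L)
    u = P(u) + b(u) / n + sigma(u) * xi / n ** ((alpha - 1) / (2 * alpha))

print("sample of u at time n:", u[:5])
```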
We have the following

Theorem 7.1 Let the conditions in Assumption 1.1 hold, and fix an integer m ≥ 1 so that 2m < 2 + κ. Let v_0 be a continuous (random) function with sup_x E|v_0(x)|^{2m} < ∞ that is independent of ξ and Ẇ. Let u^{(n)} be the solution to (7.2) with initial profile u_0^{(n)}. Then for each t > 0, x ∈ R we have ū_t^{(n)}(x) ⇒ v_t(x), where v is the solution to (7.1) with initial profile v_0. Furthermore we have
\[
E\big|\bar u^{(n)}_t(x)\big|^{2m} \to E\big|v_t(x)\big|^{2m} \quad\text{as } n\to\infty.
\]

The proof of this theorem follows closely the proof of Theorem 1.1. We provide an outline of the proof, keeping the notation the same as in the proof of that theorem, and leave the details to the reader. It can be shown that the conclusions of Theorem 2.1 continue to hold in our case; one needs to follow the proof of the theorem with minor modifications. As in the proof of Theorem 1.1 we start with initial profile v_0 ≡ 0, and impose conditions on γ and θ as in (3.2). Our definition of U in (3.4) now includes an additional term
\[
\sum_{j=0}^{i-1}\sum_{l\in\mathbb Z} P_{(i-1-j)[n^\theta]}\big((l-k)[n^\gamma]\big)\cdot b\big(u_{j[n^\theta]}(l[n^\gamma])\big)\cdot\frac{[n^\theta][n^\gamma]}{n}.
\tag{7.3}
\]
We first consider Proposition 3.4. With the addition of the drift in (7.2), the difference between ū_t(x) and u_{r[n^θ]}(z[n^γ]) gives the extra terms
\[
\frac{1}{n}\sum_{j=r[n^\theta]}^{[nt]-1}\sum_{l\in\mathbb Z} P_{[nt]-j-1}\big(l + [\mu nt] - [xn^{1/\alpha}]\big)\cdot b\big(u_j(l)\big)
+ \frac{1}{n}\sum_{j=0}^{r[n^\theta]-1}\sum_{l\in\mathbb Z}\Big[P_{[nt]-j-1}\big(l + [\mu nt] - [xn^{1/\alpha}]\big) - P_{r[n^\theta]-j-1}\big(l - z[n^\gamma]\big)\Big]\cdot b\big(u_j(l)\big).
\]
The ‖·‖_{2m} norm of the first term is of order n^θ/n, while that of the second term is of order
\[
\frac{1}{n}\sum_{j=0}^{r[n^\theta]-1}\sum_{l\in\mathbb Z}\Big|P_{[nt]-r[n^\theta]+j}\big(l + [\mu nt] - [xn^{1/\alpha}]\big) - P_j\big(l - z[n^\gamma]\big)\Big|
\le \frac{1}{n}\sum_{j=0}^{r[n^\theta]-1}\sum_{w\in\mathbb Z} P_{[nt]-r[n^\theta]}(w)\sum_{l\in\mathbb Z}\Big|P_j\big(l + [\mu nt] - [xn^{1/\alpha}] - w\big) - P_j\big(l - z[n^\gamma]\big)\Big|.
\]
Fix any ε > 0. We can remove the terms from j = 0 to j = [εn] with an error of order ε. Also, due to Lemma B.3, we may restrict to |w| ≤ c_3 n^θ up to an error vanishing in n. Now consider the above expression over the range j ≥ [εn] and |w| ≤ c_3 n^θ. By Theorem A.1, for each l ∈ Z and j, w in the above ranges, the difference goes to 0 with n. Moreover we can bound the difference by the sum and get an expression which is uniformly bounded in n. Therefore, by the dominated convergence theorem and the arbitrariness of ε, we conclude that the above expression goes to 0 with n.

Before we move on to Proposition 3.5 we state a lemma we shall need.

Lemma 7.2 Suppose additionally that γ < aθ. Then for i ≥ 1 we have
\[
\sum_{k\in\mathbb Z} P_{i[n^\theta]}\big(k[n^\gamma]\big) \lesssim \frac{1}{n^\gamma}.
\]
Proof We shall use Theorem A.1 for |k| ≤ n^{-γ}(i n^θ)^{(1+a)/α} and a large deviation estimate for |k| > n^{-γ}(i n^θ)^{(1+a)/α}. One checks that
\[
\sum_{k\in\mathbb Z} P_{i[n^\theta]}\big(k[n^\gamma]\big)
\lesssim \sum_{|k|\le n^{-\gamma}(i n^\theta)^{(1+a)/\alpha}}\Big[\frac{1}{(i n^\theta)^{1/\alpha}}\, p_1\Big(\frac{k n^\gamma}{(i n^\theta)^{1/\alpha}}\Big) + \frac{1}{(i n^\theta)^{(1+a)/\alpha}}\Big]
+ P\big(|X_{i[n^\theta]}| \ge (i n^\theta)^{(1+a)/\alpha}\big)
\lesssim \frac{1}{n^{\gamma}} + P\big(|X_{i[n^\theta]}| \ge (i n^\theta)^{(1+a)/\alpha}\big).
\]
For 1 < α < 2 one can use the result in [20] along with Lemma B.2 to conclude
\[
P\big(|X_{i[n^\theta]}| \ge (i n^\theta)^{(1+a)/\alpha}\big)
\lesssim (i n^\theta)\, P\big(|X_1| \ge (i n^\theta)^{(1+a)/\alpha}\big)
\lesssim \frac{1}{(i n^\theta)^{a}} \lesssim \frac{1}{n^{\gamma}}.
\]
One can use Chebyshev’s inequality to get a similar bound when α = 2.
We now move on to Proposition 3.5. The ‖·‖_{2m} norm of the difference of u_{r[n^θ]}(z[n^γ]) and U_{r[n^θ]}(z[n^γ]) arising from the drift terms in (7.2) and (7.3) is bounded above by
\[
\frac{1}{n}\sum_{i,j}\sum_{k,l} P_{r[n^\theta]-j-1}\big(l - z[n^\gamma]\big)\cdot\big\|u_j(l) - u_{i[n^\theta]}(k[n^\gamma])\big\|_{2m}
+ \frac{1}{n}\sum_{i,j}\sum_{k,l}\Big|P_{r[n^\theta]-j-1}\big(l - z[n^\gamma]\big) - P_{(r-1-i)[n^\theta]}\big((k-z)[n^\gamma]\big)\Big|.
\tag{7.4}
\]
The first term goes to 0 by the Hölder continuity of u argued above and earlier in Proposition 3.4. The second term also goes to 0 by an argument similar to the above. Due to Lemma 7.2, the second expression in (7.4) is uniformly bounded in n. Therefore we can apply the dominated convergence theorem to conclude that it goes to 0. To the random field V in (3.17) we add the drift term (7.3); therefore the difference in Proposition 3.7 arising from the drift terms is 0. The random field V̄ has an additional term
\[
\sum_{j=0}^{i-2}\sum_{l\in\mathbb Z}\frac{1}{n^{1/\alpha}}\, p_{\frac{(i-1-j)[n^\theta]}{n}}\Big(\frac{\big(l - (k + a_{i-1} - a_j)\big)[n^\gamma]}{n^{1/\alpha}}\Big)\cdot b\big(u_{j[n^\theta]}(l[n^\gamma])\big)\cdot\frac{[n^\theta][n^\gamma]}{n}.
\]
The bound on the ‖·‖_{2m} norm arising from the difference of this from the drift term in V is
\[
\frac{n^{\theta+\gamma}}{n^{1+\frac1\alpha}}\sum_{j=1}^{r-1}\sum_{l\in\mathbb Z}\Big|n^{\frac1\alpha} P_{j[n^\theta]}\big(l[n^\gamma]\big) - p_{\frac{j[n^\theta]}{n}}\Big(\frac{(l - a_{r-1} + a_{r-1-j})[n^\gamma]}{n^{1/\alpha}}\Big)\Big|.
\tag{7.5}
\]
We claim that the expression is uniformly bounded. Indeed, the sum of the terms involving the density p is uniformly bounded by a Riemann sum argument. As for the sum of the terms involving the transition kernel P, it is also uniformly bounded by Lemma 7.2 if we assume γ < αθ. The convergence of (7.5) to 0 thus follows from the dominated convergence theorem, similarly to what was argued earlier in this outline. Finally we consider
\[
v_t(x) = \int_0^t\!\!\int_{\mathbb R} p_{t-s}(y-x)\,\sigma(v_s(y))\, W(ds\, dy)
+ \int_0^t\!\!\int_{\mathbb R} p_{t-s}(y-x)\, b(v_s(y))\, ds\, dy.
\]
We can similarly define the random fields ṽ and v̄ by adding a drift term to each of them. Using nothing more than Hölder continuity estimates for v one can show that the differences v − ṽ and ṽ − v̄ are small. Collecting all our bounds we can then use (3.25) and apply Gronwall's inequality to prove Theorem 7.1 in the case v_0 ≡ 0. The case of general v_0 follows as earlier.

7.2 Dirac initial condition

Consider (1.4) with initial profile
\[
u^{(n)}_0(k) = n^{1/\alpha}\cdot 1\{k = 0\}.
\tag{7.6}
\]
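For intuition, the following is a minimal numerical sketch of the scheme (1.4) started from the Dirac-type profile (7.6); it is not from the paper, and it assumes for illustration that α = 2, that P is a lazy nearest-neighbour kernel, that σ(x) = x (so that |σ(x)| ≤ Lip_σ|x|, as required by Theorem 7.3 below), and that the ξ are standard Gaussians.

```python
import numpy as np

# Minimal sketch of (1.4) started from the profile (7.6) -- illustrative only.

rng = np.random.default_rng(2)

n = 400
alpha = 2.0
L = 8 * int(np.sqrt(n))        # periodic lattice, several diffusive lengths wide

def P(u):
    return 0.5 * u + 0.25 * (np.roll(u, 1) + np.roll(u, -1))

u = np.zeros(L)
u[L // 2] = n ** (1 / alpha)   # initial profile (7.6): n^{1/alpha} at the origin

for i in range(n):
    xi = rng.standard_normal(L)
    u = P(u) + u * xi / n ** ((alpha - 1) / (2 * alpha))   # sigma(x) = x

# Near time n the profile should resemble the (random) fundamental solution at t = 1.
print("total mass / n^{1/alpha}:", u.sum() / n ** (1 / alpha))
```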
Our main result of this section is the following

Theorem 7.3 Let the conditions in Assumption 1.1 hold, and suppose |σ(x)| ≤ Lip_σ|x| for all x. Fix an integer m ≥ 1 so that 2m < 2 + κ. Let u^{(n)} be the solution to (1.4) with initial profile (7.6). Then for each t > 0, x ∈ R we have ū^{(n)}_t(x) ⇒ v_t(x), where v is the solution to (1.5) with initial profile δ_0. Furthermore we have
\[
E\big|\bar u^{(n)}_t(x)\big|^{2m} \to E\big|v_t(x)\big|^{2m} \quad\text{as } n\to\infty.
\]

To avoid unnecessarily complicated notation we assume from now on that μ = 0. The reader can convince themselves that the proof of the above theorem continues to hold for nonzero μ. Due to the singularity of the density p at time 0 we can obtain uniform moment bounds on u only if we consider a compact time interval away from 0. More precisely we have the following

Proposition 7.4 Let m ∈ N be such that 2m < 2 + κ. Fix ε > 0. We have
\[
\sup_{\varepsilon n\le i\le n,\ k\in\mathbb Z} E|u_i(k)|^{2m} < \infty,
\]
uniformly in n.

Proof Burkholder's inequality gives, for i ≥ 1,
\[
\|u_i(k)\|_{2m}^2 \le c_0\, n^{2/\alpha} P_i^2(k,0)
+ c_0\sum_{j_1=0}^{i-1}\sum_{l_1}\frac{n^{2/\alpha} P_{i-j_1}^2(k,l_1)}{n^{(1+\alpha)/\alpha}}\cdot\|u_{j_1}(l_1)\|_{2m}^2.
\]
If we apply the above recursively we obtain
\[
\|u_i(k)\|_{2m}^2 \le c_0\, n^{2/\alpha} P_i^2(k,0)
+ c_0^2\sum_{j_1=0}^{i-1}\sum_{l_1}\frac{n^{2/\alpha} P_{i-j_1}^2(k,l_1)}{n^{(1+\alpha)/\alpha}}\cdot n^{2/\alpha} P_{j_1}^2(l_1,0)
+ c_0^3\sum_{0\le j_2<j_1}\sum_{l_1,l_2}\frac{n^{2/\alpha} P_{i-j_1}^2(k,l_1)}{n^{(1+\alpha)/\alpha}}\cdot\frac{n^{2/\alpha} P_{j_1-j_2}^2(l_1,l_2)}{n^{(1+\alpha)/\alpha}}\cdot n^{2/\alpha} P_{j_2}^2(l_2,0) + \cdots
\tag{7.7}
\]
Suppose i = [c_1 n], k = [c_2 n^{1/α}] for some c_1 > 0 and c_2 ∈ R. One can follow the argument in Section 7 of [4] to deduce that each of the terms converges to the corresponding iterated integral involving the density p. For example, the second term on the right would converge to
\[
c_0^2\int_0^{c_1} ds\int_{\mathbb R} dy\; p_{c_1-s}^2(c_2 - y)\, p_s^2(y).
\tag{7.8}
\]
Furthermore the tail sums are uniformly bounded in n. Now when we consider the moments of the parabolic Anderson model ∂_t v = −ν(−Δ)^{α/2} v + c_0 v Ẇ we get an infinite series involving integrals of p². Therefore we can bound the moments of u_i(k) from above by the moments of v_{c_1}(c_2). Equation (4.15) in [5] states that the moments of v are uniformly bounded on compact subsets of R_+ × R. We next show that the iterated integrals of p above are maximized when c_2 = 0. Indeed, let us consider (7.8). By Plancherel's theorem this is equal to a constant multiple of
\[
\int_0^{c_1} ds\int_{\mathbb R} d\xi\,\Big|\mathcal F\big[p_{c_1-s}(c_2-\cdot)\, p_s(\cdot)\big](\xi)\Big|^2,
\]
where F denotes the Fourier transform in the spatial variable. Using F(fg) = F(f) ∗ F(g) for L² functions f and g one obtains
\[
\int_0^{c_1} ds\int_{\mathbb R} d\xi\,\Big|\big[\mathcal F p_{c_1-s}(c_2-\cdot)\big] * \big[\mathcal F p_s(\cdot)\big](\xi)\Big|^2
= \int_0^{c_1} ds\int_{\mathbb R} d\xi\,\Big|\big[e^{-ic_2\,\cdot}\, e^{-\nu(c_1-s)|\cdot|^\alpha}\big] * \big[e^{-\nu s|\cdot|^\alpha}\big](\xi)\Big|^2
\]
\[
\le \int_0^{c_1} ds\int_{\mathbb R} d\xi\,\Big|\big[e^{-\nu(c_1-s)|\cdot|^\alpha}\big] * \big[e^{-\nu s|\cdot|^\alpha}\big](\xi)\Big|^2
= \int_0^{c_1} ds\int_{\mathbb R} d\xi\,\Big|\mathcal F\big[p_{c_1-s}(\cdot)\, p_s(\cdot)\big](\xi)\Big|^2.
\]
A similar argument holds for the higher iterated integrals. Therefore the moments of v are in fact uniformly bounded on a compact subset of time. This proves the proposition.
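As a sanity check, not part of the paper, one can verify numerically that the iterated integral (7.8) is largest at c_2 = 0; the sketch below assumes α = 2 and ν = 1/2, so that p_t is the standard Gaussian heat kernel, and uses scipy's dblquad purely for convenience.

```python
import numpy as np
from scipy.integrate import dblquad

# Quick numerical illustration that (7.8) is maximized at c2 = 0
# (illustrative assumptions: alpha = 2, nu = 1/2, Gaussian kernel p_t).

def p(t, x):
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

def iterated_integral(c1, c2):
    # int_0^{c1} ds int_R dy  p_{c1-s}(c2 - y)^2 * p_s(y)^2
    inner = lambda y, s: p(c1 - s, c2 - y) ** 2 * p(s, y) ** 2
    val, _ = dblquad(inner, 1e-3, c1 - 1e-3, lambda s: -10.0, lambda s: 10.0)
    return val

for c2 in [0.0, 0.5, 1.0, 2.0]:
    print(c2, iterated_integral(1.0, c2))
# The printed values should decrease as |c2| grows, consistent with the
# Plancherel argument above.
```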
Proof of Theorem 7.3 We aim to use the arguments in the proof of our main Theorem 1.1 here also. For this we remove some terms from the noise term, since the moments of u, v are "large" near time 0. Let us make this more precise. Fix a small ε > 0. We claim that the random field
\[
v_t(x) = p_t(x) + \int_0^t\!\!\int_{\mathbb R} p_{t-s}(x-y)\,\sigma(v_s(y))\, W(ds\, dy)
\]
is close to
\[
\tilde v_t(x) = p_t(x) + \int_\varepsilon^t\!\!\int_{\mathbb R} p_{t-s}(x-y)\,\sigma(v_s(y))\, W(ds\, dy)
\]
in the sense that ‖v_t(x) − ṽ_t(x)‖²_{2m} → 0 as ε → 0. Note that
\[
\|v_t(x)\|_{2m}^2 \le C\, p_t^2(x) + C\int_0^t\!\!\int_{\mathbb R} p_{t-s}^2(x-y)\,\|v_s(y)\|_{2m}^2\, ds\, dy,
\]
which when iterated gives an infinite series which can be shown to be finite, see [5]. In particular
\[
\int_0^t\!\!\int_{\mathbb R} p_{t-s}^2(x-y)\,\|v_s(y)\|_{2m}^2\, ds\, dy < \infty,
\]
from which our claim follows. Similarly
\[
u_{[nt]}\big([xn^{1/\alpha}]\big) = n^{1/\alpha} P_{[nt]}\big([xn^{1/\alpha}]\big)
+ \sum_{j=0}^{[nt]-1}\sum_{l} P_{[nt]-j-1}\big([xn^{1/\alpha}], l\big)\cdot\sigma\big(u_j(l)\big)\cdot\frac{\xi_j(l)}{n^{(\alpha-1)/2\alpha}}
\]
is close to
\[
\tilde u_{[nt]}\big([xn^{1/\alpha}]\big) = n^{1/\alpha} P_{[nt]}\big([xn^{1/\alpha}]\big)
+ \sum_{j=[\varepsilon n]}^{[nt]-1}\sum_{l} P_{[nt]-j-1}\big([xn^{1/\alpha}], l\big)\cdot\sigma\big(u_j(l)\big)\cdot\frac{\xi_j(l)}{n^{(\alpha-1)/2\alpha}}
\]
in the sense that sup_n ‖u_{[nt]}([xn^{1/α}]) − ũ_{[nt]}([xn^{1/α}])‖²_{2m} → 0 as ε → 0. This can be seen by following the argument in Proposition 7.4. Note first that n^{1/α}P_{[nt]}([xn^{1/α}]) → p_t(x) as n → ∞. Away from time t = 0 the moments of u and v are uniformly bounded; that is, for each ε > 0 and uniformly in n,
\[
\sup_{\varepsilon n\le i\le n,\ k\in\mathbb Z}\|u_i(k)\|_{2m}^2 < \infty,
\qquad
\sup_{\varepsilon n\le i\le n,\ k\in\mathbb Z}\big\|v_{\frac{i}{n}}\big(k n^{-1/\alpha}\big)\big\|_{2m}^2 < \infty.
\]
Thus the arguments in the proof of Theorem 1.1 can be used and we can show that
\[
\big\|\tilde u_{[nt]}\big([xn^{1/\alpha}]\big) - \tilde v_t(x)\big\|_{2m}^2 \to 0 \quad\text{as } n\to\infty.
\]
The theorem follows from this.
7.3 Proof of parts 2 and 3 in Theorem 1.3

Consider now the random field
\[
w_i(k) = \frac{n^{1/\alpha}\, E_k\Big[1\{X_i = 0\}\cdot\exp\Big(\beta\sum_{j=0}^{i}\tilde\xi_{i-j}(X_j)\Big)\Big]}{\big(E e^{\beta\tilde\xi}\big)^{i+1}},
\qquad i \ge 0,\ k\in\mathbb Z.
\tag{7.9}
\]
This satisfies (6.3) but now with initial profile
\[
w_0(k) = n^{1/\alpha}\, 1\{k = 0\}\cdot\frac{e^{\beta\tilde\xi_0(k)}}{E e^{\beta\tilde\xi}}.
\]
2m
2 i−1 2/α 2 Pi−1− j (k, y) − Pi−1− j (l, y) n Pi (k, 0) − Pi (l, 0) + n n (α+1)/α j=0 y 2 · u j (y)2m . 2/α
If we iterate this we obtain u i (k) − u i (l)2
2m
n
2/α
2 i−1 2/α 2 n Pi−1− j1 (k, y1 ) − Pi−1− j1 (l, y1 ) Pi (k, 0) − Pi (l, 0) + n (α+1)/α y j1 =0
1
· n 2/α P2j1 (y1 , 0) +
0 j2 < j1
(7.10) 2 2 2/α n 2/α Pi−1− j1 (k, y1 ) − Pi−1− j1 (l, y1 ) n P j1 − j2 (y1 , y2 ) · (α+1)/α n n (α+1)/α y ,y 1
2
· n 2/α P2j2 (y2 , 0) + · · · From this one can show Lemma 7.5 For each > 0 2 sup P(k, l) · u i (k) − u i (l)2m = o(1) as n → ∞ nin, k∈Z l
Proof It is enough to show that ‖u_i(k) − u_i(l)‖²_{2m} = o(1) for fixed k and l when εn ≤ i ≤ n. The arguments in Section 7 of [4] show that the non-vanishing contributions to (7.10) come from terms where the successive differences i − j_1, j_1 − j_2, … are of order n. Due to Theorem A.1, n^{2/α}[P_i(k,y) − P_i(l,y)]² is of order o(1) if i is of order n. This, along with the uniform moment bound on u obtained from (7.7), is enough to prove the lemma.

As before we compare first u with w̃, where w̃ now satisfies (6.6) but with initial profile w̃_0(k) = n^{1/α} 1{k = 0}. We have

Lemma 7.6 We have
\[
\sup_{k\in\mathbb Z}\|u_n(k) - \tilde w_n(k)\|_{2m}^2 = o(1) \quad\text{as } n\to\infty.
\]
Proof We iterate (6.7), so as to obtain a bound which does not involve the difference between u and w̃. Thus we obtain
\[
\sup_{k\in\mathbb Z}\|u_n(k) - \tilde w_n(k)\|_{2m}^2
\lesssim \sum_{j_1=0}^{n-1}\sum_{l_1}\frac{n^{2/\alpha} P_{n-1-j_1}^2(k,l_1)}{n^{(\alpha+1)/\alpha}}\sum_{y} P(l_1,y)\,\|u_{j_1}(y) - u_{j_1}(l_1)\|_{2m}^2
\]
\[
+ \sum_{0\le j_2<j_1\le n-1}\sum_{l_1,l_2}\frac{n^{2/\alpha} P_{n-1-j_1}^2(k,l_1)}{n^{(\alpha+1)/\alpha}}\cdot\frac{n^{2/\alpha} P_{j_1-j_2}^2(l_1,l_2)}{n^{(\alpha+1)/\alpha}}\sum_{y} P(l_2,y)\,\|u_{j_2}(y) - u_{j_2}(l_2)\|_{2m}^2 + \cdots
\tag{7.11}
\]
Consider the q'th sum above. We have the trivial bound
\[
\|u_{j_q}(y) - u_{j_q}(l_q)\|_{2m}^2 \lesssim \|u_{j_q}(y)\|_{2m}^2 + \|u_{j_q}(l_q)\|_{2m}^2.
\]
If we use this bound in each of the terms in (7.11), and then use the bound (7.7), we get an expression which is similar to the right-hand side of (7.7) minus the first term. Using arguments similar to those in [4] one can conclude that this expression is uniformly bounded. Moreover each of the terms in (7.11) goes to 0, thanks to Lemma 7.5 and the arbitrariness of ε. Thus, by the dominated convergence theorem, the lemma is proved.
Next we compare w̃ with w* in (6.8) with initial profile w*_0(k) = n^{1/α} 1{k = 0}. We have

Lemma 7.7 We have
\[
\sup_{k\in\mathbb Z}\|w^*_n(k) - \tilde w_n(k)\|_{2m}^2 = o(1) \quad\text{as } n\to\infty.
\]
Proof We now have
\[
\sup_{k\in\mathbb Z}\|\tilde w_{i+1}(k) - w^*_{i+1}(k)\|_{2m}^2
\lesssim \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{j=0}^{i}\sum_{l}\frac{n^{2/\alpha} P_{i-j}^2(k,l)}{n^{(\alpha+1)/\alpha}}\cdot\Big\|\sum_{y} P(l,y)\, w^*_j(y)\Big\|_{2m}^2
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{j=0}^{i}\sum_{l} P_{i-j}^2(k,l)\cdot\sup_{y\in\mathbb Z}\|\tilde w_j(y) - w^*_j(y)\|_{2m}^2.
\]
Again one can check that the moment bounds of w* satisfy a relation similar to (7.7). Iterating the above proves the lemma.
Finally we compare w* with w.

Lemma 7.8 We have
\[
\sup_{k\in\mathbb Z}\|w^*_n(k) - w_n(k)\|_{2m}^2 = o(1) \quad\text{as } n\to\infty.
\]

Proof We now have
\[
\sup_{k\in\mathbb Z}\|w_{i+1}(k) - w^*_{i+1}(k)\|_{2m}^2
\lesssim \frac{n^{2/\alpha} P_{i+1}^2(k,0)}{n^{(\alpha-1)/\alpha}}
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{j=0}^{i}\sum_{l} P_{i-j}^2(k,l)\cdot\sup_{y\in\mathbb Z}\|w_j(y) - w^*_j(y)\|_{2m}^2.
\]
Iterating the above proves the result.
Thanks to Theorem 7.3 we know that u_n([xn^{1/α}]) converges in distribution to g_1(x), where ∂_t g = −ν(−Δ)^{α/2} g + βg Ẇ and g_0 = δ_0. The above lemmas then imply that the same holds for w_n([xn^{1/α}]), and consequently for n^{1/α} M_n^{(ξ̃,[−xn^{1/α}])}. The second part of Theorem 1.3 follows by a spatial translation of the noise.

Let u^{(1)} be the solution to (6.5) with initial profile 1 and let u^{(2)} be the solution to (6.5) with initial profile δ_0. Using the same probability space constructed in Theorem 3.6 for both u^{(1)} and u^{(2)}, we see that (u^{(1)}_n([xn^{1/α}]), u^{(2)}_n([xn^{1/α}])) converges jointly in distribution to (v_1(x), g_1(x)). Because of the previous lemmas the same holds for (w^{(1)}_n([xn^{1/α}]), w^{(2)}_n([xn^{1/α}])), where w^{(1)} solves (6.2) and w^{(2)} solves (7.9). By time reversal and independence of the random variables ξ̃ we can conclude that (M_n^{(ξ̃)}, n^{1/α} M_n^{(ξ̃,[xn^{1/α}])}) converges jointly to (v_1(−x), g_1(−x)). The third statement of Theorem 1.3 follows from this and a spatial translation of the noise, since n^{1/α} P_n^{ξ̃,β}([xn^{1/α}]) is simply the ratio of n^{1/α} M_n^{(ξ̃,[xn^{1/α}])} and M_n^{(ξ̃)}.
7.4 Some more extensions

There are a few other possible extensions of the above results.

1. The conditions on ξ can be relaxed. For example it is not necessary for them to be i.i.d.; it is enough that they are independent and satisfy the conditions in (1.2) for the weak convergence in Theorem 1.1 to hold, see for example [3]. However we need to assume stronger conditions to obtain convergence of high moments.
2. We could relax the conditions on η; for example we could take them to be independent with slowly varying variances (as in equation 2.10 in [2]). The arguments in Sect. 4 show that the contributions of the noise ξ and the initial configuration η can be treated separately, so one could even consider correlated η as in [27].
3. With some work, it should be possible to extend Theorem 7.3 to more general compactly supported measures.
4. We can consider weak approximations of the stochastic heat equation with spatially colored noise; depending on the noise these can exist in higher dimensions (see [13]). The key ingredient in the proof of our main Theorem 1.1 was the coupling Theorem 3.6. A starting point would be Section 3.1 of [8], where a large family of colored noises are constructed by convolving appropriate functions with white noise. These include Riesz noises (see [13]), which interpolate between spatially smooth noise and white noise. The construction also suggests that one should then consider random variables of the form Σ_x a_x · ξ_i(k + x) for appropriate a_x, x ∈ Z^d, as our discrete noise; here ξ_i(k) are i.i.d. random variables satisfying (1.2) as before. We could also consider temporally correlated noise. One could then apply the above to study intermediate disorder regimes for directed polymers in a correlated random environment.

Acknowledgements The author thanks the referee for a careful reading of the paper and for several comments. He expresses his gratitude to Davar Khoshnevisan for providing the proof of Lemma 3.3. He also thanks David Applebaum and Timo Seppäläinen for comments on an earlier version of the paper. This work was done while the author was at the University of Sheffield, and he thanks the School of Mathematics and Statistics for a supportive environment. Partial support from the Engineering and Physical Sciences Research Council (EPSRC) through Grant EP/N028457/1 is gratefully acknowledged.
A Appendix: A local limit theorem

The following local limit theorem is a modification of Proposition 3.3 in [13]. Below p_t is the density of the Stable(α) process with generator −ν(−Δ)^{α/2}.

Theorem A.1 Suppose Assumption 1.1 holds. Then for any 0 ≤ b ≤ 1 and any c > 0
\[
\sup_{k\in\mathbb Z}\ \sup_{|x-(k-\mu[nt])|\le c\, n^{(1-b)/\alpha}}
\Big|n^{\frac1\alpha} P_{[nt]}(k) - p_{\frac{[nt]}{n}}\big(x n^{-\frac1\alpha}\big)\Big|
\lesssim \frac{1}{n^{a/\alpha}\, t^{(1+a)/\alpha}} + \frac{1}{n^{b/\alpha}\, t^{2/\alpha}},
\tag{A.1}
\]
uniformly for 1/n ≤ t ≤ T.
Proof To simplify notation we denote t̃ = [nt]/n. We first bound the expression on the left for x = k − μ[nt]. Fourier inversion gives
\[
p_{\tilde t}(x) = \frac{1}{2\pi}\int_{\mathbb R} e^{-ixz}\, e^{-\nu\tilde t|z|^\alpha}\, dz,
\qquad\text{and}\qquad
P_{[nt]}(k) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-ikz}\,[\phi(z)]^{[nt]}\, dz.
\]
Therefore
\[
(2\pi)\,\Big|n^{\frac1\alpha} P_{[nt]}(k) - p_{\tilde t}\big((k-\mu[nt])\, n^{-\frac1\alpha}\big)\Big|
\le \int_{[-\pi n^{1/\alpha},\,\pi n^{1/\alpha}]^c} e^{-\nu\tilde t|z|^\alpha}\, dz
+ \int_{-\pi n^{1/\alpha}}^{\pi n^{1/\alpha}}\Big|\tilde\phi\Big(\frac{z}{n^{1/\alpha}}\Big)^{[nt]} - e^{-\nu\tilde t|z|^\alpha}\Big|\, dz.
\]
The first term can be bounded just as term I_2 in Proposition 3.3 of [13], and we get
\[
\int_{[-\pi n^{1/\alpha},\,\pi n^{1/\alpha}]^c} e^{-\nu\tilde t|z|^\alpha}\, dz
\lesssim \frac{1}{n^{a/\alpha}\, t^{(1+a)/\alpha}}.
\]
For the second term we split the region of integration depending on whether or not z is in
\[
A_{t,n} := \Big\{z\in\mathbb R;\ |z| \le \frac{n^{a/\{\alpha(a+\alpha)\}}}{t^{1/(a+\alpha)}}\Big\}.
\]
Our assumptions on the characteristic function φ imply
\[
\Big|\tilde\phi\Big(\frac{z}{n^{1/\alpha}}\Big)\Big| \le 1 - c_1\frac{|z|^\alpha}{n}
\quad\text{on } |z|\le\pi n^{1/\alpha},
\]
and so it follows as in [13] that
\[
\int_{A_{t,n}^c\cap[-\pi n^{1/\alpha},\,\pi n^{1/\alpha}]}\Big|\tilde\phi\Big(\frac{z}{n^{1/\alpha}}\Big)^{[nt]} - e^{-\nu\tilde t|z|^\alpha}\Big|\, dz
\lesssim \frac{1}{n^{a/\alpha}\, t^{(1+a)/\alpha}}.
\]
We next need to consider the above integrand over the region z ∈ A_{t,n}. First observe
\[
e^{-\nu\tilde t|z|^\alpha} - \tilde\phi\Big(\frac{z}{n^{1/\alpha}}\Big)^{[nt]}
= e^{-\nu\tilde t|z|^\alpha} - \exp\Big([nt]\log\tilde\phi\Big(\frac{z}{n^{1/\alpha}}\Big)\Big)
= e^{-\nu\tilde t|z|^\alpha}\Big[1 - \exp\Big([nt]\log\tilde\phi\Big(\frac{z}{n^{1/\alpha}}\Big) + \nu\tilde t|z|^\alpha\Big)\Big]
\]
\[
= e^{-\nu\tilde t|z|^\alpha}\Big[1 - \exp\Big([nt]\log\Big(1 - \nu\frac{|z|^\alpha}{n} + D\Big(\frac{z}{n^{1/\alpha}}\Big)\Big) + \nu\tilde t|z|^\alpha\Big)\Big].
\]
It is easy to see that [nt]\,D(z/n^{1/α}) is bounded on A_{t,n}, and so
\[
\int_{A_{t,n}\cap[-\pi n^{1/\alpha},\,\pi n^{1/\alpha}]}\Big|e^{-\nu\tilde t|z|^\alpha} - \tilde\phi\Big(\frac{z}{n^{1/\alpha}}\Big)^{[nt]}\Big|\, dz
\lesssim \int_{\mathbb R} e^{-\nu\tilde t|z|^\alpha}\cdot (nt)\,\Big|\frac{z}{n^{1/\alpha}}\Big|^{a+\alpha}\, dz
\lesssim \frac{1}{n^{a/\alpha}\, t^{(1+a)/\alpha}}.
\]
We thus have the required bound in the case x = k − μ[nt]. In the case of a general x such that |x − (k − μ[nt])| ≤ c n^{(1−b)/α} we have
\[
\big|p_{\tilde t}(x n^{-\frac1\alpha}) - p_{\tilde t}\big((k-\mu[nt])\, n^{-\frac1\alpha}\big)\big|
\le \frac{1}{2\pi}\int_{\mathbb R}\Big|e^{-izx n^{-1/\alpha}} - e^{-iz(k-\mu[nt]) n^{-1/\alpha}}\Big|\, e^{-\nu\tilde t|z|^\alpha}\, dz
\lesssim \int_{\mathbb R}\Big(1\wedge\frac{|z|}{n^{b/\alpha}}\Big)\, e^{-\nu\tilde t|z|^\alpha}\, dz
\]
\[
\lesssim \frac{1}{n^{b/\alpha}\, t^{2/\alpha}}\int_{|w|\le n^{b/\alpha} t^{1/\alpha}}|w|\, e^{-\nu|w|^\alpha}\, dw
+ \frac{1}{t^{1/\alpha}}\int_{|w|> n^{b/\alpha} t^{1/\alpha}} e^{-\nu|w|^\alpha}\, dw
\lesssim \frac{1}{n^{b/\alpha}\, t^{2/\alpha}}.
\]
This completes the proof. The local limit theorem has the following consequence.
Corollary A.2 Let Assumption 1.1 hold. Suppose a_n is an integer-valued sequence such that
\[
\frac{a_n - \mu[nt]}{n^{1/\alpha}} \to a \quad\text{as } n\to\infty
\]
for some constant a. Then for fixed t > 0
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=0}^{[nt]} P_i(a_n) \to \int_0^t \frac{1}{s^{1/\alpha}}\, p_1\Big(\frac{a}{s^{1/\alpha}}\Big)\, ds \quad\text{as } n\to\infty.
\]
Proof We write
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{[nt]} P_i(a_n)
= \frac{1}{n}\sum_{i=1}^{[nt]} n^{\frac1\alpha} P_i(a_n)
= \frac{1}{n}\sum_{i=1}^{[nt]}\Big[p_{\frac{i}{n}}\Big(\frac{a_n - \mu[nt]}{n^{1/\alpha}}\Big) + O\Big(\frac{n^{1/\alpha}}{i^{(1+a)/\alpha}}\Big)\Big].
\]
Use the scaling property: for any c > 0
\[
p_t(x) = c\, p_{c^\alpha t}(cx)
\tag{A.2}
\]
to write the above as
\[
\frac{1}{n}\sum_{i=1}^{[nt]}\frac{1}{(i/n)^{1/\alpha}}\cdot p_1\Big(\frac{a_n - \mu[nt]}{n^{1/\alpha}}\cdot\frac{1}{(i/n)^{1/\alpha}}\Big) + o(1).
\]
The rest is a Riemann sum approximation.
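The local limit theorem behind this Riemann-sum argument is easy to illustrate numerically; the sketch below is not from the paper and assumes a concrete walk: the lazy simple random walk with P(0) = 1/2, P(±1) = 1/4, so α = 2, μ = 0 and ν = 1/4, for which p_t is the Gaussian density with variance t/2.

```python
import numpy as np

# Numerical illustration of Theorem A.1 for the lazy simple random walk
# (illustrative assumptions: alpha = 2, mu = 0, nu = 1/4).

n, t = 400, 1.0
m = int(n * t)                       # number of steps [nt]
K = 4 * int(np.sqrt(m))              # spatial window

# exact m-step transition probabilities P_m(k) by repeated convolution
pmf = np.zeros(2 * m + 1)
pmf[m] = 1.0
step = np.array([0.25, 0.5, 0.25])
for _ in range(m):
    pmf = np.convolve(pmf, step, mode="same")

def p(t, x):                         # stable(2) density with nu = 1/4
    return np.exp(-x**2 / t) / np.sqrt(np.pi * t)

err = 0.0
for k in range(-K, K + 1):
    lhs = np.sqrt(n) * pmf[m + k]            # n^{1/alpha} P_{[nt]}(k)
    rhs = p(m / n, k / np.sqrt(n))           # p_{[nt]/n}(k n^{-1/alpha})
    err = max(err, abs(lhs - rhs))

print("max local-limit error over |k| <= %d:" % K, err)
```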
B Appendix: Bounds required for Proposition 3.5

The main results of this section are Lemmas B.4 and B.5, which were needed in the proof of Proposition 3.5.

Lemma B.1 The following holds:
\[
\sup_n\sup_{w\in\mathbb Z}\frac{[n^\theta]}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1} P\big(Y_{i[n^\theta]} = w\big)
\le \sup_n\frac{[n^\theta]}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1} P\big(Y_{i[n^\theta]} = 0\big) < \infty.
\]

Proof The first inequality is a simple consequence of the fact that
\[
P(Y_j = w) = \frac{1}{2\pi}\int_{-\pi}^{\pi} e^{-iwz}\,|\phi(z)|^{2j}\, dz \le P(Y_j = 0)
\]
for any w, since the characteristic function of Y_j is |φ(z)|^{2j}. As for the finiteness of the sum, observe that P(Y_j = 0) is decreasing in j and therefore
\[
[n^\theta]\sum_{i=1}^{r-1} P\big(Y_{i[n^\theta]} = 0\big) \le \sum_{j=0}^{r[n^\theta]} P(Y_j = 0) \lesssim n^{(\alpha-1)/\alpha},
\]
the last inequality following from (2.7).
It follows from Assumption 1.1 on the characteristic function that the distribution function F(x) of X_1 belongs to the domain of normal attraction of the symmetric stable law with exponent α (see Section 35 of [17]). This means that one has to scale the centered X_n − μn by a constant multiple of n^{1/α}. One can characterize such distribution functions.

Lemma B.2 ([17]) A necessary and sufficient condition for F to be in the domain of normal attraction of a Stable(α) law, 0 < α < 2, is the existence of constants c_1, c_2 ≥ 0, c_1 + c_2 > 0, such that
\[
F(x) = \big[c_1 + f_1(x)\big]\frac{1}{|x|^\alpha} \quad\text{for } x < 0,
\qquad
F(x) = 1 - \big[c_2 + f_2(x)\big]\frac{1}{|x|^\alpha} \quad\text{for } x > 0,
\]
where the functions f_1 and f_2 satisfy
\[
\lim_{x\to-\infty} f_1(x) = \lim_{x\to\infty} f_2(x) = 0.
\]

We use the above lemma to deduce the following large deviation estimate.

Lemma B.3 There exists a constant c_3 > 0 such that
\[
P\big(|X_{[nt]-r[n^\theta]}| \ge c_3 n^\theta\big) \lesssim \frac{1}{n^{(\alpha-1)\theta}}.
\]
Proof For 1 < α < 2 we use the result in [20]. This gives, for c_3 > |μ|,
\[
P\big(|X_{[nt]-r[n^\theta]}| \ge c_3 n^\theta\big)
\lesssim n^\theta\, P\big(|X_1| \ge c_3 n^\theta\big)
\lesssim \frac{n^\theta}{n^{\alpha\theta}},
\]
from the previous lemma. For α = 2, one can use Exercise 3.3.19 in [11] to conclude that X_1 has second moments. One then uses Chebyshev's inequality to prove the lemma.
For the rest of this section we provide the proofs of the bounds required for the two terms in (3.14). Below is the bound on the first term.

Lemma B.4 The first term in (3.14) has the bound
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j}\sum_{k,l} P_j(l)\cdot\Big[P_j(l) - P_{i[n^\theta]}\big(k[n^\gamma]\big)\Big]
\lesssim \frac{1}{n^{(\alpha-1)\theta}} + \frac{n^{\theta+o(1)}}{n^{(\alpha-1)/\alpha}},
\]
where the limits of the summation are as in (3.12).
Proof We write
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j}\sum_{k,l} P_j(l)\cdot\Big[P_j(l) - P_{i[n^\theta]}\big(k[n^\gamma]\big)\Big]
= \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j} P\big(X_j = \tilde X_j\big)
- \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j}\sum_{k,l}\sum_{w\in\mathbb Z} P\big(X_{i[n^\theta]} = l - w,\ \tilde X_{i[n^\theta]} = k[n^\gamma]\big)\cdot P_{j-i[n^\theta]}(w),
\tag{B.1}
\]
the equality holding by an application of the Markov property. We focus on the second term for now; we will return to the first term later. We split the sum according to whether i = 0 or not. The i = 0 term is of order n^θ n^{−(α−1)/α}. For the terms corresponding to i ≥ 1, replace X̃_{i[n^θ]} = k[n^γ] by X̃_{i[n^θ]} = l by using Theorem A.1 with b = 1 − αγ to obtain
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j}\sum_{k,l}\sum_{w} P\big(X_{i[n^\theta]} = l-w,\ \tilde X_{i[n^\theta]} = k[n^\gamma]\big)\cdot P_{j-i[n^\theta]}(w)
\]
\[
= O\Big(\frac{n^{\theta}}{n^{(\alpha-1)/\alpha}}\Big)
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\sum_{j}\sum_{k,l}\sum_{w}\Big[P\big(X_{i[n^\theta]} = l-w,\ \tilde X_{i[n^\theta]} = l\big)
+ O\Big(\frac{1}{(i n^\theta)^{(1+a)/\alpha}}\Big) + O\Big(\frac{n^{\gamma}}{(i n^\theta)^{2/\alpha}}\Big)\Big]\cdot P_{j-i[n^\theta]}(w)
\]
\[
= O\Big(\frac{n^{\theta}}{n^{(\alpha-1)/\alpha}}\Big) + O\Big(\frac{n^{o(1)}}{n^{\min(a,\,\alpha-1)/\alpha}}\Big)
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\sum_{j}\sum_{w} P\big(X_{i[n^\theta]} - \tilde X_{i[n^\theta]} = w\big)\cdot P_{j-i[n^\theta]}(w).
\]
Returning to our expression (B.1) we can now write it as
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j} P\big(X_j = \tilde X_j\big)
- \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\sum_{j}\sum_{|w|\le c_3 n^\theta} P\big(X_{i[n^\theta]} - \tilde X_{i[n^\theta]} = w\big)\cdot P_{j-i[n^\theta]}(w)
+ O\Big(\frac{1}{n^{(\alpha-1)\theta}}\Big) + O\Big(\frac{n^{\theta}}{n^{(\alpha-1)/\alpha}}\Big) + O\Big(\frac{n^{o(1)}}{n^{\min(a,\,\alpha-1)/\alpha}}\Big),
\]
where we have used Lemma B.1 and Lemma B.3 with the constant c_3 from there. Let us now consider each of the first two terms above. Using Theorem A.1 as well as (A.2), one gets for the first term
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j} P\big(X_j = \tilde X_j\big)
= \frac{\alpha\,\tilde p_1(0)}{\alpha-1}\cdot\Big(\frac{r[n^\theta]-1}{n}\Big)^{(\alpha-1)/\alpha} + O\Big(\frac{n^{o(1)}}{n^{\min(a,\,\alpha-1)/\alpha}}\Big),
\]
where p̃_1(·) is the transition kernel of −2ν(−Δ)^{α/2}. Using Theorem A.1 again with b = 1 − αθ, we get for |w| ≤ c_3[n^θ]
\[
\frac{[n^\theta]}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\sum_{|w|\le c_3[n^\theta]} P\big(X_{i[n^\theta]} - \tilde X_{i[n^\theta]} = w\big)\cdot P_{j-i[n^\theta]}(w)
= \sum_{i=1}^{r-1}\bigg[\frac{[n^\theta]}{n^{(\alpha-1)/\alpha}}\cdot\frac{\tilde p_1(0)}{(i[n^\theta])^{1/\alpha}}
+ O\Big(\frac{n^{\theta}}{(i n^\theta)^{2/\alpha}}\Big) + O\Big(\frac{1}{(i n^\theta)^{(1+a)/\alpha}}\Big)\bigg]
\]
\[
= \frac{\alpha\,\tilde p_1(0)}{\alpha-1}\cdot\Big(\frac{r[n^\theta]-1}{n}\Big)^{(\alpha-1)/\alpha}
+ O\Big(\frac{n^{\theta(\alpha-1)/\alpha}}{n^{(\alpha-1)/\alpha}}\Big)
+ O\Big(\frac{n^{o(1)}}{n^{\min(a,\,\alpha-1)/\alpha}}\Big)
+ O\Big(\frac{n^{\theta+o(1)}}{n^{(\alpha-1)/\alpha}}\Big).
\]
Collecting all our estimates and recalling our conditions (3.2) on γ and θ completes the proof.
We next bound the second term in (3.14).

Lemma B.5 The second term in (3.14) has the bound
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j}\sum_{k,l} P_{i[n^\theta]}\big(k[n^\gamma]\big)\cdot\Big[P_{i[n^\theta]}\big(k[n^\gamma]\big) - P_j(l)\Big]
\lesssim \frac{n^{\gamma}}{n^{(\alpha-1)\theta}} + \frac{n^{\theta+\gamma+o(1)}}{n^{(\alpha-1)/\alpha}},
\]
where the limits in the summation are as in (3.14).

Proof We separate out the i = 0 term and obtain
\[
\frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i,j}\sum_{k,l} P_{i[n^\theta]}\big(k[n^\gamma]\big)\cdot\Big[P_{i[n^\theta]}\big(k[n^\gamma]\big) - P_j(l)\Big]
= O\Big(\frac{n^{\theta+\gamma}}{n^{(\alpha-1)/\alpha}}\Big)
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\sum_{j,k,l}\sum_{w} P_{i[n^\theta]}\big(k[n^\gamma]\big)\cdot\Big[P_{i[n^\theta]}\big(k[n^\gamma]\big) - P_{i[n^\theta]}(l-w)\Big]\cdot P_{j-i[n^\theta]}(w).
\]
The reason for the error bound on the first term is that restricting to i = 0 forces k = 0, and the number of terms in the summation over j and l is of order n^{θ+γ}. As in Lemma B.4 we split the sum over w according to whether |w| ≤ c_3[n^θ] or not. In the case |w| ≤ c_3[n^θ] we use Theorem A.1 with b = 1 − αθ. Thus the above is
\[
= O\Big(\frac{n^{\theta+\gamma}}{n^{(\alpha-1)/\alpha}}\Big)
+ \frac{n^{\gamma+\theta}}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\Big[O\Big(\frac{1}{(i n^\theta)^{(1+a)/\alpha}}\Big) + O\Big(\frac{n^{\theta}}{(i n^\theta)^{2/\alpha}}\Big)\Big]
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\sum_{j,k,l}\sum_{|w|> c_3 n^\theta} P_{i[n^\theta]}\big(k[n^\gamma]\big)\cdot\Big[P_{i[n^\theta]}\big(k[n^\gamma]\big) - P_{i[n^\theta]}(l-w)\Big]\cdot P_{j-i[n^\theta]}(w)
\]
\[
= O\Big(\frac{n^{\gamma+o(1)}}{n^{\min(a,\,\alpha-1)/\alpha}}\Big) + O\Big(\frac{n^{\gamma+\theta+o(1)}}{n^{(\alpha-1)/\alpha}}\Big)
+ \frac{1}{n^{(\alpha-1)/\alpha}}\sum_{i=1}^{r-1}\sum_{j,k,l}\sum_{|w|> c_3 n^\theta} P_{i[n^\theta]}\big(k[n^\gamma]\big)\cdot\Big[P_{i[n^\theta]}\big(k[n^\gamma]\big) - P_{i[n^\theta]}(l-w)\Big]\cdot P_{j-i[n^\theta]}(w).
\]
To bound the last term we ignore the difference in the expression and instead bound the sum. By Lemmas B.1 and B.3,
1 n (α−1)/α
i=1 j,k,l |w|c3 n θ
2 γ Pi[n θ ] (k[n ]) · P j−i[n θ ] (w)
nγ n (α−1)θ
,
and r −1
1 n (α−1)/α
Pi[n θ ] (k[n γ ]) · Pi[n θ ] (l − w) · P j−i[n θ ] (w)
i=1 j,k,l |w|c3 n θ γ
]−1 r −1 [n
1 n (α−1)/α
n (α−1)/α n (α−1)θ
j
i=1
j
P X˜ i[n θ ] − X i[n θ ] = y − w · P j−i[n θ ] (w)
y=0 |w|c3 n θ
γ ]−1 r −1 [n
1 nγ
i=1
P X˜ i[n θ ] − X i[n θ ] = 0 · P j−i[n θ ] (w)
y=0 |w|c3 n θ
,
where we used Lemma B.1 in the last step. Collecting our estimates completes the proof.
References

1. Alberts, T., Khanin, K., Quastel, J.: The intermediate disorder regime for directed polymers in dimension 1 + 1. Ann. Probab. 42(3), 1212–1256 (2014)
2. Balázs, M., Rassoul-Agha, F., Seppäläinen, T.: The random average process and random walk in a space–time random environment in one dimension. Commun. Math. Phys. 266, 499–545 (2006)
3. Borovkov, A.A.: On the rate of convergence for the invariance principle. Theory Probab. Appl. 18, 207–225 (1973)
4. Caravenna, F., Sun, R., Zygouras, N.: Polynomial chaos and scaling limits of disordered systems. J. Eur. Math. Soc. (JEMS) 19(1), 1–65 (2017)
5. Chen, L., Dalang, R.C.: Moments, intermittency and growth indices for the nonlinear fractional stochastic heat equation. Stoch. Partial Differ. Equ. Anal. Comput. 3(3), 360–397 (2015)
6. Comets, F.: Directed Polymers in Random Environments. Volume 2175 of Lecture Notes in Mathematics. Springer, Cham (2017). Lecture notes from the 46th Probability Summer School held in Saint-Flour (2016)
7. Comets, F., Yoshida, N.: Directed polymers in random environment are diffusive at weak disorder. Ann. Probab. 34(5), 1746–1770 (2006)
8. Conus, D., Joseph, M., Khoshnevisan, D., Shiu, S.-Y.: On the chaotic character of the stochastic heat equation, II. Probab. Theory Relat. Fields 156(3–4), 483–533 (2013)
9. Corwin, I.: The Kardar–Parisi–Zhang equation and universality class. Random Matrices Theory Appl. 1(1), 1130001 (2012)
10. den Hollander, F.: Random Polymers. Volume 1974 of Lecture Notes in Mathematics. Springer, Berlin (2009). Lectures from the 37th Probability Summer School held in Saint-Flour (2007)
11. Durrett, R.: Probability: Theory and Examples. Cambridge Series in Statistical and Probabilistic Mathematics, 4th edn. Cambridge University Press, Cambridge (2010)
12. Èbralidze, Š.S.: Inequalities for probabilities of large deviations in the multidimensional case. Theory Probab. Appl. 16, 733–741 (1971)
13. Foondun, M., Joseph, M., Li, S.-T.: An approximation result for a class of stochastic heat equations with colored noise. arXiv:1611.06829
14. Foondun, M., Khoshnevisan, D.: Intermittence and nonlinear parabolic stochastic partial differential equations. Electron. J. Probab. 14(21), 548–568 (2009)
15. Foondun, M., Khoshnevisan, D.: An asymptotic theory for randomly forced discrete nonlinear heat equations. Bernoulli 18(3), 1042–1060 (2012)
16. Funaki, T.: Random motion of strings and related stochastic evolution equations. Nagoya Math. J. 89, 129–193 (1983)
17. Gnedenko, B.V., Kolmogorov, A.N.: Limit Distributions for Sums of Independent Random Variables. Addison-Wesley, Cambridge (1954). Translated and annotated by K. L. Chung, with an Appendix by J. L. Doob
18. Gyöngy, I.: Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space–time white noise I. Potential Anal. 9(1), 1–25 (1998)
19. Gyöngy, I.: Lattice approximations for stochastic quasi-linear parabolic partial differential equations driven by space–time white noise II. Potential Anal. 11(1), 1–37 (1999)
20. Heyde, C.C.: On large deviation probabilities in the case of attraction to a non-normal stable law. Sankhyā Ser. A 30, 253–258 (1968)
21. Joseph, M., Khoshnevisan, D., Mueller, C.: Strong invariance and noise-comparison principles for some parabolic stochastic PDEs. Ann. Probab. 45(1), 377–403 (2017)
22. Kanagawa, S.: The rate of convergence for approximate solutions of stochastic differential equations. Tokyo J. Math. 12(1), 33–48 (1989)
23. Kolokoltsov, V.: Symmetric stable laws and stable-like jump-diffusions. Proc. Lond. Math. Soc. (3) 80(3), 725–768 (2000)
24. Kumar, R.: Space–time current process for independent random walks in one dimension. ALEA Lat. Am. J. Probab. Math. Stat. 4, 307–336 (2008)
25. Nagaev, S.V.: Large deviations of sums of independent random variables. Ann. Probab. 7(5), 745–789 (1979)
26. Osipov, L.V.: Asymptotic expansions in the central limit theorem. Vestnik Leningrad. Univ. 22(19), 45–62 (1967)
27. Seppäläinen, T., Zhai, Y.: Hammersley's harness process: invariant distributions and height fluctuations. Ann. Inst. Henri Poincaré Probab. Stat. 53(1), 287–321 (2017)
28. Spitzer, F.: Principles of Random Walks, 2nd edn. Graduate Texts in Mathematics, Vol. 34. Springer, New York (1976)