Commun. Math. Phys. 230, 181–199 (2002) Digital Object Identifier (DOI) 10.1007/s00220-002-0704-5
On the Approximation of the Stochastic Burgers Equation

Christoph Gugg¹, Hansjörg Kielhöfer¹, Michael Niggemann²

¹ Institut für Mathematik, Universität Augsburg, 86135 Augsburg, Germany. E-mail: [email protected]
² Fachhochschule Würzburg-Schweinfurt, 97070 Würzburg, Germany

Received: 10 October 2001 / Accepted: 21 May 2002
Published online: 6 August 2002 – © Springer-Verlag 2002
Abstract: We prove mathematical approximation results for the (hyperviscous) Burgers equation driven by additive Gaussian noise. In particular, we show that solutions of "approximating equations" driven by a discretized noise converge towards the solution of the original equation when the discretization parameter gets small. The convergence takes place in the expected value of arbitrary powers of certain norms; i.e., all moments of the difference of the solutions tend to zero in certain function spaces. For the hyperviscous Burgers equation, these results are applied to justify the approximation of certain correlation functions that play a major role in statistical turbulence theory.

1. Introduction

During the last decade, the one-dimensional Burgers equation with stochastic noise has attracted considerable attention as a simplified model of fluid turbulence [Y-S, C-Y-95a, C-Y-95b, P, Y-C-96, H-J96, H-J97, H-J97a]. This is mainly due to the findings in [C-Y-95a]; there the hyperviscous Burgers equation driven by a random force η(x, t) with suitably chosen spatial correlations was treated numerically and by means of the dynamic renormalization group method (RG), and it was found that the statistical properties of its solution are surprisingly similar to those experimentally known from real three-dimensional turbulence. To be more specific, in the references cited above a Burgers-type equation

    \frac{\partial}{\partial t}u + u\frac{\partial}{\partial x}u = (-1)^{p+1}\nu\,\frac{\partial^{2p}}{\partial x^{2p}}u + \eta,    (1)
where u(x, t) is the one-dimensional velocity field and ν the kinematic viscosity, is considered (with p = 1 corresponding to the original Burgers equation and p ≥ 2 corresponding to the hyperviscous Burgers equation, respectively). It is driven by a Gaussian
noise η(x, t) of zero mean, with its spatial Fourier modes η(k, t) obeying a correlation function of the type

    \langle \eta(k,t)\,\eta(k',t') \rangle = D(k)\,\delta(k+k')\,\delta(t-t'),

where δ stands for the Dirac "function"; i.e., the noise term is white in time and homogeneously distributed in space. Its variance possesses a power-law spectrum D(k) ∼ |k|^{−y} with wavenumber k (with the exception of the field-theoretic work in [P], where D(k) is assumed to be of compact support). In this power law, the range of physically relevant exponents y is given by the interval −2 ≤ y ≤ d (with d = 1 being the space dimension). The case y = −2 corresponds to thermal equilibrium [F-N-S, Y-S], whereas for y = d = 1 the energy flux (as a function of the wavenumber k) is "almost" constant (i.e. constant to logarithmic accuracy). This latter case is the one treated in [C-Y-95a], where numerical evidence was found:

(i) for the energy spectrum E(k) = \langle u(k)u(-k)\rangle to take the Kolmogorov form, E(k) ∼ |k|^{−σ} with σ = 5/3 ± 0.02, in a range of wavenumbers being "almost" an inertial range, see also [C-Y-95b],

(ii) for the velocity structure functions S_m(r) = \langle |u(x+r)-u(x)|^m \rangle to show intermittent scaling behaviour (more precisely, "normal" Kolmogorov scaling, S_m(r) ∼ r^{m/3}, occurred for m ≤ 3, whereas higher-order structure functions scaled like S_m(r) ∼ r^{ξ_m} with ξ_m ≈ 1 for m > 3, see also [C-Y-95b]),

(iii) for the energy dissipation rate to be intermittent as well, with the rate of energy dissipation defined by ε = ν(∂^p u/∂x^p)² (the dissipation rate correlation function computed from the numerical data is reported to scale like \langle \varepsilon(x+r)\varepsilon(x)\rangle ∼ r^{−µ} with intermittency exponent µ = 0.2 ± 0.05).
The above results for y = d = 1 have been confirmed (with σ = 1.65 ± 0.05 and also with slightly different values for ξ_m and µ, respectively) in [H-J96], where numerical experiments were performed for the Burgers equation without hyperviscosity and for noise spectra with y ∈ [−2, d]; in [H-J97, H-J97a] the scaling behaviour of the velocity structure functions and the effect of a cutoff in the noise variance were investigated in more detail. So the random-force-driven Burgers equation has been extensively studied numerically, and it seems to provide a comparatively simple, but nevertheless surprisingly realistic model of real-life turbulence. This stresses its importance as a test case for checking physical turbulence theories. In fact, the latter aspect plays a major role in the references above (see especially [F-N-S, Y-S, C-Y-95a, H-J96]) for checking turbulence statistics predicted by the RG theory. The dynamic renormalization group theory was first applied to the randomly stirred Navier-Stokes equations in [F-N-S] and since then has been used for deriving turbulence statistics and turbulence models, which are very successful in describing large-scale properties of complex turbulent flows and which are therefore of practical importance in engineering (see e.g. [Y-O, Y-O-T-S-G, Or]); an application to viscoelastic flows can be found in [N-H-S-S]. To our knowledge, the application of RG theory to turbulence has not yet been rigorously justified (see e.g. [Or]). Therefore the good agreement of RG predictions with numerical results for the stochastic Burgers equation, which is reported in [Y-S, C-Y-95a, H-J96], is of special importance to the theory.

In this article we prove (mathematically) rigorous approximation theorems for Eq. (1) with a class of very irregular stochastic noises η, which covers the type of noise considered above and also includes space-time white noise.
In the hyperviscous case p ≥ 2 (note that in [C-Y-95a] p = 6 has been used), our approximation results are then applied to justify the computation of the velocity correlation function and the velocity
structure functions, which led to the findings (i) and (ii) listed above in [C-Y-95a]; in fact, our regularity condition on the noise η is satisfied for arbitrary y ∈ [−2, d] (see Corollary 2.6 and Remark 2.8). For the approximation of the dissipation rate correlation function \langle \varepsilon(x+r)\varepsilon(x)\rangle in (iii), our regularity condition on the noise, which is in some sense optimal, requires y > 1 (see Corollary 2.7 and the discussion in Remark 2.8).

More precisely, we treat Eq. (1) with u : [0,T] × (0,1) → ℝ, (t,x) ↦ u(t,x), periodic in the spatial variable x with period 1, initial condition u(0,·) = u₀(·), ν = 1, and with the Gaussian noise η defined as a (generalized) time derivative of a Wiener process W. All results hold for arbitrary kinematic viscosity ν > 0 and arbitrary domain of definition [0,T] × (0,L) as well. Our solution concept is that of the mild solution, which is described in more detail in Sect. 2. Applying the variation of constants formula to Eq. (1) we obtain

    u(t) = e^{tA}u_0 - \int_0^t e^{(t-s)A} u(s)\frac{\partial}{\partial x}u(s)\,ds + \int_0^t e^{(t-s)A}\,dW(s),    (2)

where \{e^{tA}\}_{t\ge 0} is the semigroup generated by A = (-1)^{p+1}\,\partial^{2p}/\partial x^{2p}. The stochastic integral W_A(t) = \int_0^t e^{(t-s)A}\,dW(s) is the mild solution of the linearized Eq. (1) with zero initial condition (see (9) and (10) below) and is called the stochastic convolution. In fact, our regularity conditions on the noise are the minimal requirements for the Kolmogorov test method to yield continuity in time and continuity or differentiability in space of the stochastic convolution.

We consider a piecewise affine time discretization of the underlying Wiener process W, denoted by W^δ, where δ is the (equidistant) time step. The stochastic convolution is then approximated by W_A^δ(t) = \int_0^t e^{(t-s)A}\,dW^δ(s), which is a Riemann-Stieltjes integral whose sample paths can be simulated. Our approximation results for the mild solution u^δ of Eq. (2) driven by W_A^δ instead of W_A are of the form

    E \sup_{t\le T}\|(u-u^\delta)(t)\|^m_{L^2(0,1)} \longrightarrow 0
(m ≥ 1) when the approximation parameter δ tends to zero. Moreover, in the hyperviscous case p ≥ 2 we obtain this convergence in spaces of spatially continuous or differentiable functions. In the proof of these statements, convergence results for the stochastic convolution play an essential part. They are shown by an extension of the idea of the Kolmogorov test and they are of interest on their own, since the stochastic convolution occurs in the mild formulation of all SPDEs with additive noise. For the nonlinear results we start with the approximation in probability, using an idea of [G-S] applied to the two-dimensional Navier-Stokes equation driven by multiplicative, finite-dimensional noise. We extend this result to convergence “in expectation” using a well-known theorem of Vitali, which requires that moments of solutions are bounded uniformly in the corresponding norms. This, in turn, is the consequence of a-priori bounds for the solutions and a theorem of Fernique. The stochastic Navier-Stokes equation in two dimensions is closely related to Eq. (1). In fact, we applied our method to this case as well, see [G]. Let us therefore briefly summarize other available approximation results for the stochastic two-dimensional Navier-Stokes equation: [S-90] gives results for a certain semi-implicit Euler-Galerkin scheme applied to the equation with additive noise. It is assumed that the noise
corresponds to a spatially L²-valued Wiener process. It is then shown that the distributions of the sequence defined by the approximation method converge to the solution of the equation. Here convergence means convergence of measures on spaces such as C([0,T], H^{−s}) ∩ L²(0,T, H) of square-integrable and continuous distribution-valued functions. If the trajectories of the Wiener process take values in a space C(0,T, H¹) of continuous functions with values in a space with weak first derivatives, Chapter 3 of [S-85] proves L^p-convergence or convergence with probability 1 of a Galerkin approximation in a space of square-integrable functions. Note that in the case of a spatially L²-valued Wiener process, an application of the Ito formula yields a considerably easier a-priori estimate in our case (the nonlinearity drops out and the difficulties treated in Theorem 2.4 do not occur). Multiplicative noise is considered in [Br] and [Tw]. In the first reference it is shown for finite-dimensional noise that a sequence of solutions of recursively defined linear equations converges in expectation towards the solution. In the second reference, the same convergence is proved in a situation similar to that of [G-S]. But again the multiplicative noise requires strong regularity of the noise. To our knowledge, the only work that permits Wiener processes with less spatial regularity than L² is [F-T]. There, convergence with probability 1 of an approximating sequence defined by a semi-implicit time discretization is proved in a space L²(0,T, H) of square-integrable functions. Approximation theory for the stochastic convolution is developed by a partial integration method; this requires more regularity of the noise than is needed for the Kolmogorov test method used here. However, none of these works provides approximation results in spaces of functions that are bounded in time and continuous or differentiable in space.
But this is required to justify the physical results in [C-Y-95a, C-Y-95b, Y-C-96], where, however, hyperviscous dissipation terms are employed in Eq. (1).

2. Main Results

We rewrite Eq. (1) as an abstract evolution equation,

    \frac{du}{dt} = Au - u\frac{\partial}{\partial x}u + \eta, \qquad u(0) = u_0,    (3)

where A = (-1)^{p+1}\,\partial^{2p}/\partial x^{2p}, subject to periodic boundary conditions. The domain of definition of A is

    H^s_{per}(0,1) := \Big\{ u = \sum_{k\in\mathbb Z} u_k \exp(2\pi i k x),\; u_{-k} = \overline{u_k},\; \sum_{k\in\mathbb Z} (1+|k|^2)^s |u_k|^2 < \infty \Big\}    (4)

for s = 2p. H^s_{per}(0,1) is a Hilbert space with norm

    \|u\|_{H^s_{per}(0,1)} = \Big( \sum_{k\in\mathbb Z} (1+|k|^2)^s |u_k|^2 \Big)^{1/2}

for all s ∈ ℝ, which, for s ∈ ℕ₀, is equivalent to the classically defined norms, cf. e.g. [T]. The functions

    e_0 = 1, \quad e_{-k}(x) = \sqrt{2}\cos(2\pi k x), \quad e_k(x) = \sqrt{2}\sin(2\pi k x),    (5)
for k ∈ ℕ, form a complete orthonormal system of eigenfunctions of A in L²(0,1) with corresponding eigenvalues λ₀ = 0, λ_{−k} = λ_k = −(2πk)^{2p}, k ∈ ℕ. Therefore

    u = \sum_{k\in\mathbb Z} u_k \exp(2\pi i k x) = \sum_{k\in\mathbb Z} \alpha_k e_k, \qquad u_k = \frac{1}{\sqrt 2}(\alpha_{-k} - i\alpha_k),\; k \in \mathbb N,    (6)

where, as before, we set u_{−k} = \overline{u_k}. The Gaussian noise η is defined as the (generalized) time derivative of a Wiener process, η = ∂W/∂t, where

    W(x,t) = \sum_{k\in\mathbb Z} W_k(t)\exp(2\pi i k x) = \sum_{k\in\mathbb Z} \alpha_k \beta_k(t) e_k(x),    (7)

with α_k ∈ ℝ and stochastically independent real standard Brownian motions β_k on a probability space (Ω, F, P). The decay of the coefficients as |k| → ∞ determines the regularity of the noise, cf. Assumption 2.1 below. If and only if η = ∂W/∂t is a Gaussian noise with zero mean and covariance E[η(x,t)η(y,s)] = δ(t−s) q(|x−y|) with a real positive definite function or functional q, then q(x) = α₀² + 2\sum_{k\in\mathbb N} α_k² \cos(2kπx) and α_{−k} = α_k in (7), see [Bl], Sect. 2.1.3. Note that for α_k = 1 for all k ∈ ℤ we obtain q(x) = δ(x), i.e. space-time white noise. We treat Eq. (3) in its mild form

    u(t) = e^{tA}u_0 - \frac12 \int_0^t e^{(t-s)A}\frac{\partial}{\partial x}(u^2(s))\,ds + W_A(t),    (8)

where \{e^{tA}\}_{t\ge 0} is the semigroup generated by the operator A and

    W_A(t) := \int_0^t e^{(t-s)A}\,dW(s)    (9)
is the so-called stochastic convolution, defined as a generalization of the classical Riemann-Stieltjes integral by means of the Ito isometry, see [P-Z-92], Sect. 4.2. Note that it is not possible to define (9) as a Riemann-Stieltjes integral for P-almost all ω ∈ Ω, since the trajectories of W are P-almost surely not of bounded variation, cf. (14) below. Here and in the sequel Ω denotes the underlying probability space with probability measure P. W_A is not only the mild but even the weak solution of the linear equation

    \frac{du}{dt} = Au + \eta, \qquad u(0) = 0,    (10)
see, e.g., [P-Z-92]. Now we introduce our approximation of the Burgers equation (3). First, we discretize the basic Wiener process. For fixed T > 0 set

    t_j := \frac{j}{2^n}T, \quad j = 0, \dots, 2^n, \qquad \delta := \frac{T}{2^n},

    W_k^\delta(t) := W_k(t_j) + \frac{W_k(t_{j+1}) - W_k(t_j)}{\delta}(t - t_j) \quad \text{for } t \in (t_j, t_{j+1}],

and

    W^\delta(t) = \sum_{k\in\mathbb Z} W_k^\delta(t)\exp(2\pi i k x), \qquad \eta^\delta(t) = \sum_{k\in\mathbb Z} \frac{W_k(t_{j+1}) - W_k(t_j)}{\delta}\exp(2\pi i k x) \quad \text{for } t \in (t_j, t_{j+1}].    (11)
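For a single Fourier mode, the piecewise affine interpolation in (11) is elementary to implement. The following sketch (the function and variable names are our own, not from the paper) builds W_k^δ from the grid values W_k(t_j); the discretized noise η_k^δ is then simply the slope on each subinterval.

```python
def wiener_interpolant(w_nodes, T):
    """Piecewise affine interpolant W_k^delta of (11) for one mode, given the
    values w_nodes[j] = W_k(t_j) on the grid t_j = j*T/2^n (len(w_nodes) = 2^n + 1).
    Returns a function of t; its slope on (t_j, t_{j+1}] is the noise eta_k^delta."""
    n_int = len(w_nodes) - 1
    delta = T / n_int

    def w_delta(t):
        # index j with t in (t_j, t_{j+1}]; the last node is clamped
        j = min(int(t / delta), n_int - 1)
        slope = (w_nodes[j + 1] - w_nodes[j]) / delta
        return w_nodes[j] + slope * (t - j * delta)

    return w_delta
```

By construction w_delta(t_j) = W_k(t_j), and the derivative on each open subinterval is the increment quotient (W_k(t_{j+1}) − W_k(t_j))/δ appearing in (11).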
An analogous discretization of the Brownian motions β_k in (7) will be denoted by β_k^δ. Note that the approximation index n is suppressed in this notation. Our "approximating equations" are then

    u^\delta(t) = e^{tA}u_0 - \frac12 \int_0^t e^{(t-s)A}\frac{\partial}{\partial x}\big(u^\delta(s)\big)^2\,ds + W_A^\delta(t)    (12)

or

    u^{\delta,M}(t) = e^{tA}P_M u_0 - \frac12 \int_0^t e^{(t-s)A}P_M\frac{\partial}{\partial x}\big(u^{\delta,M}(s)\big)^2\,ds + P_M W_A^\delta(t),    (13)

where P_M is the orthogonal projection operator onto the span of \exp(2\pi i k x), k = −M, …, M, in L²(0,1). Since W^δ is piecewise linear and therefore of bounded variation,

    W_A^\delta(t) := \int_0^t e^{(t-s)A}\,dW^\delta(s)    (14)

is defined as a classical Riemann-Stieltjes integral. For the Fourier modes u_k^{δ,M}, k = −M, …, M, of (13) it can be verified by a straightforward calculation:

    u_k^{\delta,M}(t_{j+1}) = e^{-\delta(2k\pi)^{2p}} u_k^{\delta,M}(t_j) - ik\pi \sum_{l=-M}^{M} \int_{t_j}^{t_{j+1}} e^{-(2k\pi)^{2p}(t_{j+1}-s)} u_l^{\delta,M}(s)\,u_{k-l}^{\delta,M}(s)\,ds + \frac{W_k(t_{j+1}) - W_k(t_j)}{\delta} \int_{t_j}^{t_{j+1}} e^{-(2k\pi)^{2p}(t_{j+1}-s)}\,ds,    (15)

where k = −M, …, M, j = 0, …, 2^n − 1. This suggests a possible numerical scheme for Eq. (13), see also [H-J96, H-J97, Ba].

Unless otherwise stated, the regularity assumption on the noise (7) is as follows:

Assumption 2.1. Let \sum_{k\in\mathbb N} \alpha_k^2 k^{-2p+\varepsilon} < \infty for arbitrary ε > 0, where 2p is the order of the linear operator A.

In [P-Z-92], Sect. 5.5.1, it is shown that (for periodic boundary conditions) this condition is minimal for the stochastic convolutions (9) and (14) to have continuous paths in space and time. Assumption 2.1 covers space-time white noise.

First we approximate solutions of the "linearized equation". We show convergence of the stochastic convolutions W_A^δ (see (14)), i.e. solutions of \partial u^\delta/\partial t = Au^\delta + \eta^\delta, u^δ(0) = 0, with a noise η^δ defined by W^δ (see (11)), to the stochastic convolution W_A (see (9)) solving (10), as δ tends to zero. Since the stochastic convolution is an additive term in the mild formulation of all SPDEs with additive noise, the following result is of interest on its own.

Lemma 2.2. Let Assumption 2.1 be fulfilled and T > 0 be arbitrary. Then for all m ∈ ℕ,

    E\|W_A^\delta - W_A\|^m_{C([0,T]\times[0,1])} \xrightarrow{\delta\to 0} 0.

The proof is postponed to Sect. 3. The following corollary follows from the proof as well.
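The one-step relation (15) above translates directly into code once a quadrature for the nonlinear integral is fixed. In the sketch below (all names are ours) we freeze u^{δ,M} at the left endpoint t_j (a quadrature choice of ours; (15) itself leaves it open) and advance the modes u_k^{δ,M}, stored at index k + M for k = −M, …, M.

```python
import numpy as np

def burgers_mode_step(u_hat, dW, delta, p, M):
    """One step of the scheme suggested by (15). u_hat[k+M] is the mode
    u_k^{delta,M}(t_j); dW[k+M] is the increment W_k(t_{j+1}) - W_k(t_j)."""
    k = np.arange(-M, M + 1)
    lam = (2.0 * np.pi * np.abs(k)) ** (2 * p)        # -lambda_k for exp(2 pi i k x)
    decay = np.exp(-lam * delta)
    # phi_k = int_{t_j}^{t_{j+1}} e^{-lam_k (t_{j+1}-s)} ds = (1 - e^{-lam_k delta})/lam_k
    phi = np.where(k == 0, delta, (1.0 - decay) / np.where(k == 0, 1.0, lam))
    # truncated convolution sum_l u_l u_{k-l}: Fourier modes of (u^{delta,M})^2
    conv = np.array([sum(u_hat[l + M] * u_hat[kk - l + M]
                         for l in range(max(-M, kk - M), min(M, kk + M) + 1))
                     for kk in k])
    return decay * u_hat - 1j * np.pi * k * phi * conv + (dW / delta) * phi
```

Each step costs O(M²) through the convolution; an FFT-based evaluation of the nonlinear term is the standard optimization for larger M.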
Corollary 2.3. For some α ≥ 0, let W_{A−α} and W_{A−α}^δ be defined analogously to W_A and W_A^δ with the operator A − αI instead of A. Then under Assumption 2.1 we obtain for arbitrary m ∈ ℕ, T > 0,

    E\|W_{A-\alpha}^\delta\|^m_{C([0,T]\times[0,1])} \le \mathrm{const}(\alpha, m, T) \xrightarrow{\alpha\to\infty} 0

uniformly in δ. Furthermore, W_{A−α}^δ has a symmetric Gaussian distribution in C([0,T]×[0,1]). Analogous results hold for W_{A−α}.
It is not our goal to study the existence theory of Eqs. (8), (12), and (13), which (for (8) with p = 1 and space-time white noise) is given in [P-D-T] and [P-Z-96]; we refer to Theorem 4.1 in Sect. 4. Now we are ready to state our main result, which will be proved in Sect. 5.

Theorem 2.4. Assume 2.1, T > 0, and the initial condition u₀ ∈ L²(0,1). Then the following convergence "in expectation" holds for the solutions u and u^δ of (8) and (12):

    E\sup_{t\le T}\|(u-u^\delta)(t)\|^m_{L^2(0,1)} \xrightarrow{\delta\to 0} 0, \qquad E\int_0^T \|(u-u^\delta)(t)\|^2_{L^\infty(0,1)}\,dt \xrightarrow{\delta\to 0} 0.

Here m ∈ ℕ is arbitrary. The same approximation is valid for the solutions u and u^{δ,M} of (8) and (13), when δ → 0 and M → ∞ simultaneously.

In Sect. 6 we show that, under more restrictive assumptions on the regularity of the noise and the initial condition, the convergence for the hyperviscous equations (8) and (12) with p ≥ 2 is much sharper:

Theorem 2.5. Assume u₀ ∈ C^j_{per}(0,1), j ∈ ℕ₀, and \sum_{k\in\mathbb N} \alpha_k^2 k^{2j-2p+\varepsilon} < \infty, with arbitrary ε > 0. Furthermore impose p ≥ 2, i.e. only the hyperviscous Burgers equation is considered here. Then for all m ∈ ℕ:

    E\|u-u^\delta\|^m_{L^\infty([0,T],\,C^j_{per}(0,1))} \xrightarrow{\delta\to 0} 0

for the mild solutions u and u^δ of (8) and (12). The same holds for the solutions u and u^{δ,M} of (8) and (13), when (δ, M) → (0, ∞).

The following corollaries provide the approximation results concerning the statistical properties which were announced in the introduction. The proofs follow from Theorem 2.5.

Corollary 2.6. With u and u^δ from (8) and (12) set S_m(x,r,t) := E|u(t, x+r) − u(t, x)|^m, m ∈ ℕ, which is the m-th order velocity structure function defined in Sect. 1, and let S_m^δ(x,r,t) := E|u^δ(t, x+r) − u^δ(t, x)|^m be its approximation. Furthermore, impose the conditions of Theorem 2.5 for j = 0. Then

    S_m^\delta \to S_m \quad \text{for } \delta \to 0

uniformly in x, r and t. An analogous result holds for the velocity structure functions of u and u^{δ,M} in (8) and (13), when (δ, M) → (0, ∞).
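Corollary 2.6 is what licenses estimating the structure functions from simulated paths u^δ. A minimal Monte Carlo estimator on a periodic grid (layout and names are our own) reads:

```python
import numpy as np

def structure_functions(samples, shift, orders):
    """Estimate S_m(x, r, t) = E|u(t, x+r) - u(t, x)|^m at a fixed time t.

    samples: array (n_realizations, n_grid) of independent realizations of
             u^delta(t, .) on a uniform periodic grid;
    shift:   separation r measured in grid points;
    orders:  iterable of exponents m.
    Returns a dict m -> array of estimates over the grid points x."""
    incr = np.roll(samples, -shift, axis=1) - samples    # u(., x+r) - u(., x), periodic
    return {m: np.mean(np.abs(incr) ** m, axis=0) for m in orders}
```

The scaling exponents ξ_m discussed in Sect. 1 are then read off from log-log slopes of S_m against r; by Corollary 2.6 the estimates converge uniformly in x, r and t as δ → 0 (for j = 0 and p ≥ 2).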
Corollary 2.7. Let

    \varepsilon^\delta(t,x) := \Big(\frac{\partial^p}{\partial x^p}u^\delta(t,x)\Big)^2 \quad \text{and} \quad \varepsilon(t,x) := \Big(\frac{\partial^p}{\partial x^p}u(t,x)\Big)^2

be the energy dissipation rates, and let the conditions of Theorem 2.5 be fulfilled for j = p. Then the dissipation rate correlation function (t,x,r) ↦ E[ε(t, x+r) ε(t, x)] can be approximated as follows:

    E[\varepsilon(t, x+r)\,\varepsilon(t, x)] - E[\varepsilon^\delta(t, x+r)\,\varepsilon^\delta(t, x)] \xrightarrow{\delta\to 0} 0

uniformly in x, r and t. Again, we obtain a similar result for the energy dissipation rate of u and u^{δ,M} in (8) and (13), when (δ, M) → (0, ∞).

Remark 2.8 (Comparison with [Y-C-96, C-Y-95a, C-Y-95b]). For an easy numerical implementation and for a better comparison we investigate the noise η_k^δ(t) := (W_k(t_{j+1}) − W_k(t_j))/δ, t ∈ (t_j, t_{j+1}], occurring in (15), see also (7), (11). Using (7), we obtain

    \eta_k^\delta(t) = \frac{1}{\sqrt 2}\frac{\alpha_{-k}}{\sqrt\delta}\,\sigma_{-k,j} - i\,\frac{1}{\sqrt 2}\frac{\alpha_k}{\sqrt\delta}\,\sigma_{k,j} \quad \text{for } t \in (t_j, t_{j+1}], \; k \in \mathbb N,

its complex conjugate for k ∈ ℤ∖ℕ₀, and \frac{\alpha_0}{\sqrt\delta}\sigma_{0,j} for k = 0, with independent N(0,1)-distributed real random variables σ_{k,j}, j = 0, …, 2^n − 1. This is the discretized noise used in the above references. If we impose α_k ∼ |k|^{−y/2}, as the authors do, our condition for Corollary 2.6 becomes y > 1 + ε − 2p, p ≥ 2, which covers all their physically relevant values −2 ≤ y ≤ d = 1 even in the case p = 2. However, if α_k ∼ |k|^{−y/2}, Corollary 2.7 requires y > 1 + ε, ε > 0, which excludes the relevant values. But, as described after Assumption 2.1, the regularity conditions of Corollary 2.7 are optimal. This seems to be in accordance with observations described in the above references, see a commentary in [C-Y-95a], p. 2741.
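The explicit form of η_k^δ in Remark 2.8 can be sampled directly; with α_{−k} = α_k = |k|^{−y/2} each mode is a complex Gaussian with E|η_k^δ|² = α_k²/δ. A sketch (names are our own):

```python
import numpy as np

def sample_noise_modes(y, K, delta, n_steps, rng):
    """Draw eta_k^delta for k = 1..K and j = 0..n_steps-1 as in Remark 2.8,
    with power-law amplitudes alpha_k = |k|^(-y/2), alpha_{-k} = alpha_k.
    Returns a complex array of shape (n_steps, K); the modes for k < 0 are
    the complex conjugates of these."""
    k = np.arange(1, K + 1)
    alpha = k ** (-y / 2.0)
    sigma_m = rng.standard_normal((n_steps, K))   # sigma_{-k,j}
    sigma_p = rng.standard_normal((n_steps, K))   # sigma_{k,j}
    return (alpha / np.sqrt(2.0 * delta)) * (sigma_m - 1j * sigma_p)
```

Averaging |η_k^δ|² over many intervals recovers α_k²/δ, which is the white-in-time scaling of the variance as δ → 0.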
3. Linear Results

In this section we prove Lemma 2.2 and Corollary 2.3. Since for the stochastic convolutions a martingale theory, in particular Doob inequalities, is not available in general, we generalize the method that is used for a proof of the Kolmogorov test in [P-Z-92], Theorem 3.4 and Sect. 5.5.1.

Proof of Lemma 2.2. (i) The proof relies on a Sobolev imbedding theorem: W^{\tilde\alpha,q}([0,T]\times[0,1]) \subset C([0,T]\times[0,1]) for \tilde\alpha q > 2, see [A], p. 217. We assume 0 < \tilde\alpha < \frac{\varepsilon}{4p} and q > m with \tilde\alpha q > 2 in the sequel. The intrinsic norm of W^{\tilde\alpha,q}([0,T]\times[0,1]) is given by

    \|u\|^q_{W^{\tilde\alpha,q}} = \|u\|^q_{L^q} + \int_{[0,T]^2\times[0,1]^2} \frac{|u(t,x)-u(t',x')|^q}{\big((t-t')^2+(x-x')^2\big)^{1+\tilde\alpha q/2}}\, d(t,t',x,x').

We set v^δ = W_A^δ − W_A. Since the first part of the norm is considerably easier to handle than the second, it suffices to show that

    E\int_{[0,T]^2\times[0,1]^2} \frac{|v^\delta(t,x)-v^\delta(t',x')|^q}{\big((t-t')^2+(x-x')^2\big)^{1+\tilde\alpha q/2}}\, d(t,t',x,x') \xrightarrow{\delta\to 0} 0.    (16)

This is shown by an application of Lebesgue's theorem: for δ → 0 the integrand converges to 0 a.e. in [0,T]²×[0,1]², as shown in steps (ii) and (iii). Furthermore, in step (iv) it will be proved that the difference in the numerator has a symmetric Gaussian distribution, whence E|v^δ(t,x) − v^δ(t',x')|^q ≤ c₁(q) (E|v^δ(t,x) − v^δ(t',x')|²)^{q/2}, cf. [P-Z-92], p. 57. In steps (ii) and (iii) we derive the estimate

    E|v^\delta(t,x)-v^\delta(t',x')|^2 \le c_2 \Big(\sum_{k\in\mathbb N_0} \alpha_k^2(-\lambda_k)^{\gamma-1}\Big)\big(|t-t'|^\gamma + |x-x'|^\gamma\big).    (17)

Here, 2\tilde\alpha < \gamma < \varepsilon, which according to Assumption 2.1 yields the convergence of the sum. By the inequality a^\gamma + b^\gamma \le 2(a^2+b^2)^{\gamma/2} for a, b ≥ 0, we obtain an integrable upper bound for the integrand in (16), which does not depend on δ. This finishes the proof, up to steps (ii)-(iv).

(ii) Estimation of E|v^δ(t,x) − v^δ(t',x)|². Without loss of generality assume t > t'. Set

    f_k(t,t',s) := \chi_{[0,t]}(s)e^{(t-s)\lambda_k} - \chi_{[0,t']}(s)e^{(t'-s)\lambda_k},

    I_k^\delta(t,t',s) := \sum_{j=0}^{2^n-1} \chi_{[t_j,t_{j+1})}(s)\,\frac{1}{\delta}\int_{t_j}^{t_{j+1}} f_k(t,t',\tilde s)\,d\tilde s \;-\; f_k(t,t',s).

Inserting the definitions (7) and (11) in (9) and (14), respectively, we obtain

    E|v^\delta(t,x)-v^\delta(t',x)|^2
    = E\Big|\sum_{k\in\mathbb N_0} \alpha_k e_k(x) \int_0^T I_k^\delta(t,t',s)\,d\beta_k(s)\Big|^2
    = \sum_{k\in\mathbb N_0} \alpha_k^2 e_k^2(x) \int_0^T \big(I_k^\delta(t,t',s)\big)^2\,ds \qquad \text{(by Ito's isometry, cf. [O], p. 26)}
    = \sum_{k\in\mathbb N_0} \alpha_k^2 e_k^2(x) \sum_{j=0}^{2^n-1}\Big[\int_{t_j}^{t_{j+1}} \big(f_k(t,t',s)\big)^2\,ds - \frac{1}{\delta}\Big(\int_{t_j}^{t_{j+1}} f_k(t,t',s)\,ds\Big)^2\Big]
    =: \sum_{k\in\mathbb N_0} \alpha_k^2 e_k^2(x)\,\big(B^1_{k,\delta} - B^2_{k,\delta}\big).

Observe that B^1_{k,\delta} \ge 0, B^2_{k,\delta} \ge 0, and B^1_{k,\delta} \ge B^1_{k,\delta} - B^2_{k,\delta} \ge 0. Furthermore we compute

    B^1_{k,\delta} = \frac{2\big(1-e^{(t-t')\lambda_k}\big) - \big(e^{t'\lambda_k}-e^{t\lambda_k}\big)^2}{-2\lambda_k} \le |\lambda_k|^{\gamma-1}|t-t'|^\gamma,

because |e^{-\xi}-e^{-\eta}| \le |\xi-\eta|^\gamma for all ξ, η ≥ 0. This, together with the boundedness |e_k^2(x)| \le 2 of the eigenfunctions, gives the first part of the estimate (17). An elementary calculation shows that B^1_{k,\delta} - B^2_{k,\delta} \to 0 for δ → 0 (for all t, t' ∈ [0,T] and all k ∈ ℕ₀), and therefore, by Lebesgue's theorem for series, E|v^δ(t,x) − v^δ(t',x)|² → 0 for δ → 0 for all x ∈ [0,1].
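The elementary calculation that B^1_{k,δ} − B^2_{k,δ} → 0 can also be checked numerically. The sketch below (our own construction with a simple midpoint quadrature; it is a sanity check, not part of the proof) evaluates both terms for f_k and exhibits the Cauchy-Schwarz ordering B¹ ≥ B¹ − B² ≥ 0 together with the decay under grid refinement.

```python
import numpy as np

def b_terms(lam, t1, t2, T, n):
    """Quadrature approximation of B^1_{k,delta} = int_0^T f_k^2 ds and
    B^2_{k,delta} = sum_j (1/delta) (int_{t_j}^{t_{j+1}} f_k ds)^2 for
    f_k(t1, t2, s) = chi_[0,t1](s) e^{(t1-s) lam} - chi_[0,t2](s) e^{(t2-s) lam},
    with lam = lambda_k < 0, on the grid t_j = j T / 2^n."""
    delta = T / 2 ** n
    sub = 400                                  # midpoint-rule points per subinterval
    ds = delta / sub
    B1, B2 = 0.0, 0.0
    for j in range(2 ** n):
        s = j * delta + (np.arange(sub) + 0.5) * ds
        # evaluate the exponentials only where the indicator is 1 (avoids overflow)
        f = (np.exp(np.where(s <= t1, (t1 - s) * lam, 0.0)) * (s <= t1)
             - np.exp(np.where(s <= t2, (t2 - s) * lam, 0.0)) * (s <= t2))
        B1 += np.sum(f ** 2) * ds
        B2 += (np.sum(f) * ds) ** 2 / delta
    return B1, B2
```

Per-interval Cauchy-Schwarz gives (1/δ)(∫ f ds)² ≤ ∫ f² ds, so B¹ − B² ≥ 0 holds exactly, and refining the grid (larger n) drives the difference to zero.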
(iii) Estimation of E|v^δ(t,x) − v^δ(t,x')|². Set this time

    f_k(t,s) := \chi_{[0,t]}(s)e^{(t-s)\lambda_k},

    I_k^\delta(t,s) := \sum_{j=0}^{2^n-1} \chi_{[t_j,t_{j+1})}(s)\,\frac{1}{\delta}\int_{t_j}^{t_{j+1}} f_k(t,\tilde s)\,d\tilde s \;-\; f_k(t,s).

As in step (ii) we obtain, again using Ito's isometry,

    E|v^\delta(t,x)-v^\delta(t,x')|^2
    = \sum_{k\in\mathbb N} \alpha_k^2 \big(e_k(x)-e_k(x')\big)^2 \int_0^T \big(I_k^\delta(t,s)\big)^2\,ds
    = \sum_{k\in\mathbb N} \alpha_k^2 \big(e_k(x)-e_k(x')\big)^2 \sum_{j=0}^{2^n-1}\Big[\int_{t_j}^{t_{j+1}} \big(f_k(t,s)\big)^2\,ds - \frac{1}{\delta}\Big(\int_{t_j}^{t_{j+1}} f_k(t,s)\,ds\Big)^2\Big]
    =: \sum_{k\in\mathbb N} \alpha_k^2 \big(e_k(x)-e_k(x')\big)^2\,\big(B^1_{k,\delta} - B^2_{k,\delta}\big).

Again, B^2_{k,\delta} \ge 0 and B^1_{k,\delta} \ge B^1_{k,\delta} - B^2_{k,\delta} \ge 0. We have B^1_{k,\delta} = \frac{1-e^{2t\lambda_k}}{-2\lambda_k} \le \frac{1}{-2\lambda_k}. Then |e_k(x)-e_k(x')| \le c_3|k|^{\gamma/2}|x-x'|^{\gamma/2} implies

    E|v^\delta(t,x)-v^\delta(t,x')|^2 \le c_4 \sum_{k\in\mathbb N} \alpha_k^2(-\lambda_k)^{\gamma-1}\,|x-x'|^\gamma,

which yields the second part of the estimate (17). Again, using B^1_{k,\delta} - B^2_{k,\delta} \xrightarrow{\delta\to 0} 0, it is not difficult to show that E|v^δ(t,x) − v^δ(t,x')|² → 0 for δ → 0, for all t ∈ [0,T] and all x, x' ∈ [0,1].

(iv) v^δ(t,x) − v^δ(t',x') has a symmetric Gaussian distribution on ℝ: Let P_N be the orthogonal projection operator onto span{e₀, …, e_N}. Then as in steps (ii) and (iii) (using definitions (7) and (11) for (9) and (14) in v^δ = W_A^δ − W_A)

    P_N v^\delta(t,x) - P_N v^\delta(t',x') = \sum_{k=0}^{N} \alpha_k \int_0^T g_k\,d\beta_k(s)

with suitable deterministic integrands g_k = g_k(δ, x, x', t, t', s). This sum of independent ℝ-valued random variables with symmetric Gaussian distribution has a symmetric Gaussian distribution as well. Furthermore, P_N v^δ(t,x) − P_N v^δ(t',x') converges to v^δ(t,x) − v^δ(t',x') in L²(Ω, ℝ), which follows from (17):

    E\Big(v^\delta(t,x) - v^\delta(t',x') - \big(P_N v^\delta(t,x) - P_N v^\delta(t',x')\big)\Big)^2 \le c_2 \sum_{k=N+1}^{\infty} \alpha_k^2(-\lambda_k)^{\gamma-1}\big(|t-t'|^\gamma + |x-x'|^\gamma\big) \xrightarrow{N\to\infty} 0.

According to [O], Theorem A.7, v^δ(t,x) − v^δ(t',x') has a symmetric Gaussian distribution. □
Proof of Corollary 2.3. Replace W_A by W_{A−α} and W_A^δ by W_{A−α}^δ in the above proof and set one of them to zero. By Assumption 2.1 and Lebesgue's theorem the series in estimate (17) tends to zero as α → ∞ (replace −λ_k by α − λ_k everywhere in the above proof). The arguments of (i) in the proof of Lemma 2.2 then prove the first part of Corollary 2.3.

To show that W_{A−α}^δ has a symmetric Gaussian distribution in E := C([0,T]×[0,1]), we verify the definition in [P-Z-92], p. 37. It has to be shown that for arbitrary w in the dual space E' the real random variable ⟨W_{A−α}^δ, w⟩ has a symmetric Gaussian distribution, where ⟨·,·⟩ is the duality pairing. For this purpose, set u := W_{A−α}^δ. As in step (iv) of the preceding proof, we obtain

    P_N u(t,x) = \sum_{k=0}^{N} \alpha_k \int_0^T g_k\,d\beta_k(s)

with suitable deterministic g_k = g_k(t, x, s). Therefore P_N u has a symmetric Gaussian distribution in E. The proof of Lemma 2.2 implicitly shows that P_N u tends to u in L²(Ω, E). Thus ⟨P_N u, w⟩ converges towards ⟨u, w⟩ in L²(Ω, ℝ), as is seen from

    \|\langle P_N u - u, w\rangle\|^2_{L^2(\Omega,\mathbb R)} = E|\langle P_N u - u, w\rangle|^2 \le E\big(\|w\|^2_{E'}\,\|P_N u - u\|^2_{E}\big) = \|w\|^2_{E'}\,\|P_N u - u\|^2_{L^2(\Omega,E)} \xrightarrow{N\to\infty} 0.

Again, Theorem A.7 in [O] shows that u = W_{A−α}^δ has a symmetric Gaussian distribution in E = C([0,T]×[0,1]). The analogous result holds for W_{A−α}. □

For the approximation theory of the hyperviscous Burgers equation in more regular spaces as presented in Sect. 6, the following corollary is necessary. It is an easy extension of the main Lemma 2.2.

Corollary 3.1. Suppose \sum_{k\in\mathbb N} \alpha_k^2 k^{2j-2p+\varepsilon} < \infty for arbitrary ε > 0, j ∈ ℕ. Then for all m ∈ ℕ,

    E\|W_A^\delta - W_A\|^m_{C([0,T],\,C^j_{per}(0,1))} \xrightarrow{\delta\to 0} 0.

Proof. In the proof of Lemma 2.2 replace the estimates |e_k^2(x)| \le 2 by \big|\frac{\partial^j}{\partial x^j}e_k(x)\big|^2 \le c\,k^{2j} and |e_k(x)-e_k(x')| \le c\,k^{\gamma/2}|x-x'|^{\gamma/2} by \big|\frac{\partial^j}{\partial x^j}e_k(x)-\frac{\partial^j}{\partial x^j}e_k(x')\big| \le c\,k^{j+\gamma/2}|x-x'|^{\gamma/2}, see (5). □
4. A-Priori Estimates

We need the following a-priori estimate on the solutions of Eqs. (8), (12), and (13). We restrict ourselves to the first equation; the others are treated in the same way. Introducing a parameter α ≥ 0 for purposes which will become clear in the proof of Theorem 2.4, we rewrite Eq. (8) in the following form:

    \frac{du}{dt} = (A-\alpha)u - u\frac{\partial}{\partial x}u + \alpha u + \eta,    (18)
and pass to the mild form

    u(t) = e^{t(A-\alpha)}u_0 - \frac12\int_0^t e^{(t-s)(A-\alpha)}\frac{\partial}{\partial x}(u^2(s))\,ds + \alpha\int_0^t e^{(t-s)(A-\alpha)}u(s)\,ds + W_{A-\alpha}(t).    (19)
Theorem 4.1. Assume 2.1, α ≥ 0, T > 0, and the initial condition u₀ ∈ L²(0,1). Then there exists a unique mild solution of Eq. (19) in C([0,T], L²(0,1)) ∩ L²([0,T], C_{per}(0,1)), for P-almost all ω ∈ Ω, which P-almost surely equals the solution of (8). For ũ_α := u − W_{A−α} the a-priori estimates

    \|\tilde u_\alpha(t)\|^2_{L^2} \le e^{c\int_0^t(1+\|W_{A-\alpha}(s)\|^2_{L^\infty})\,ds}\Big(\|u_0\|^2_{L^2} + \frac12\int_0^t \|W_{A-\alpha}(s)\|^4_{L^4}\,ds + \alpha^2\int_0^t \|W_{A-\alpha}(s)\|^2_{L^2}\,ds\Big)

and

    \int_0^t \|\tilde u_\alpha(s)\|^2_{H^p_{per}}\,ds \le c\Big(\int_0^t \big(1+\|W_{A-\alpha}(s)\|^2_{L^\infty}\big)\|\tilde u_\alpha(s)\|^2_{L^2}\,ds + \|u_0\|^2_{L^2} + \frac12\int_0^t \|W_{A-\alpha}(s)\|^4_{L^4}\,ds + \alpha^2\int_0^t \|W_{A-\alpha}(s)\|^2_{L^2}\,ds\Big)

are valid for all t ∈ [0,T]. (The constant c does not depend on ũ_α.) Analogous results hold for the solutions of (12) and (13).

Proof. ũ_α := u − W_{A−α} (formally) fulfills

    \frac{\partial}{\partial t}\tilde u_\alpha = (A-\alpha)\tilde u_\alpha - \frac12\frac{\partial}{\partial x}\big(\tilde u_\alpha + W_{A-\alpha}\big)^2 + \alpha\tilde u_\alpha + \alpha W_{A-\alpha}.

(ũ_α is not differentiable with respect to t and x, but this is not used in the claim of Theorem 4.1. Replacing ũ_α by a smooth approximation, i.e. by a smooth approximation of η in (18) and (10), the following arguments prove Theorem 4.1 for any smooth approximation and therefore for ũ_α itself.) Multiplication by ũ_α and integration over (0,1) gives

    \frac12\frac{\partial}{\partial t}\|\tilde u_\alpha\|^2_{L^2} = (A\tilde u_\alpha, \tilde u_\alpha)_{L^2} + \int_0^1 \tilde u_\alpha W_{A-\alpha}\frac{\partial}{\partial x}\tilde u_\alpha\,dx + \frac12\int_0^1 W_{A-\alpha}^2\frac{\partial}{\partial x}\tilde u_\alpha\,dx + \alpha\int_0^1 \tilde u_\alpha W_{A-\alpha}\,dx,

where we use \int_0^1 \tilde u_\alpha^2\,\frac{\partial}{\partial x}\tilde u_\alpha\,dx = 0, due to the periodic boundary conditions. Because of

    \Big|\int_0^1 \tilde u_\alpha W_{A-\alpha}\frac{\partial}{\partial x}\tilde u_\alpha\,dx\Big| \le \|W_{A-\alpha}\|^2_{L^\infty}\|\tilde u_\alpha\|^2_{L^2} + \frac14\Big\|\frac{\partial}{\partial x}\tilde u_\alpha\Big\|^2_{L^2}

we obtain

    \frac12\frac{\partial}{\partial t}\|\tilde u_\alpha\|^2_{L^2} + \Big\|\frac{\partial^p}{\partial x^p}\tilde u_\alpha\Big\|^2_{L^2} \le \Big(\frac12 + \|W_{A-\alpha}\|^2_{L^\infty}\Big)\|\tilde u_\alpha\|^2_{L^2} + \frac12\Big\|\frac{\partial}{\partial x}\tilde u_\alpha\Big\|^2_{L^2} + \frac14\|W_{A-\alpha}\|^4_{L^4} + \frac12\alpha^2\|W_{A-\alpha}\|^2_{L^2}.
If p > 1, the interpolation inequality \|\frac{\partial}{\partial x}\tilde u_\alpha\|^2_{L^2} \le \|\frac{\partial^p}{\partial x^p}\tilde u_\alpha\|^2_{L^2} + c_1\|\tilde u_\alpha\|^2_{L^2} yields

    \frac12\frac{\partial}{\partial t}\|\tilde u_\alpha\|^2_{L^2} + \frac12\Big\|\frac{\partial^p}{\partial x^p}\tilde u_\alpha\Big\|^2_{L^2} \le c_2\big(1 + \|W_{A-\alpha}\|^2_{L^\infty}\big)\|\tilde u_\alpha\|^2_{L^2} + \frac12\|W_{A-\alpha}\|^4_{L^4} + \alpha^2\|W_{A-\alpha}\|^2_{L^2}.

The first estimate of Theorem 4.1 then follows by an application of the Gronwall lemma. The second is obtained by integrating the last inequality over [0,t]. □

5. Convergence

We start with the convergence in probability of solutions of the "approximating equations" (12) towards the solution of Eq. (8). A convergence theorem of Vitali then proves the convergence "in expectation" that is claimed in Theorem 2.4. For the proof of convergence in probability we adopt an idea of [G-S], see Sect. 1.

Theorem 5.1. Under the assumptions of Theorem 2.4, the following convergence
    \sup_{t\le T}\|(u-u^\delta)(t)\|_{L^2(0,1)} \xrightarrow{\delta\to 0} 0, \qquad \int_0^T \|(u-u^\delta)(t)\|^2_{L^\infty(0,1)}\,dt \xrightarrow{\delta\to 0} 0
holds in probability. The same approximation is valid for the solutions u and u^{δ,M} of (8) and (13), when δ → 0 and M → ∞ simultaneously.

Proof. (i) We follow the lines of Sect. 4. With ũ^δ = u^δ − W_A^δ and ũ = u − W_A we obtain again formally (with α = 0):

    \frac{\partial}{\partial t}\big(\tilde u(t) - \tilde u^\delta(t)\big) = A(\tilde u - \tilde u^\delta)(t) - \frac12\frac{\partial}{\partial x}\Big[(\tilde u + W_A)^2 - (\tilde u^\delta + W_A^\delta)^2\Big](t).

Multiplication by (ũ − ũ^δ)(t) and integration over (0,1) give

    \frac12\frac{\partial}{\partial t}\|(\tilde u - \tilde u^\delta)(t)\|^2_{L^2} + \Big\|\frac{\partial^p}{\partial x^p}(\tilde u - \tilde u^\delta)(t)\Big\|^2_{L^2}
    = \frac12\int_0^1 \Big[(\tilde u + W_A)^2 - (\tilde u^\delta + W_A^\delta)^2\Big]\frac{\partial}{\partial x}(\tilde u - \tilde u^\delta)(t)\,dx
    \le \frac14\|\tilde u + \tilde u^\delta + W_A + W_A^\delta\|^2_{L^\infty}\,\|\tilde u - \tilde u^\delta + W_A - W_A^\delta\|^2_{L^2} + \frac14\Big\|\frac{\partial}{\partial x}(\tilde u - \tilde u^\delta)\Big\|^2_{L^2}.
δ
)(t)2L2
≤ c1 e
c1
t
0 (g(s)+1)ds
t 0
g(s)(WA − WAδ )(s)2L2 ds
with g(s) := \|\tilde u + \tilde u^\delta + W_A + W_A^\delta\|^2_{L^\infty}(s). We now introduce the four stopping times

    \tau_1^\delta := \inf\Big\{t \in [0,T] : \int_0^t \|\tilde u^\delta(s)\|^2_{L^\infty}\,ds \ge N\Big\},
    \tau_2^\delta := \inf\Big\{t \in [0,T] : \int_0^t \|W_A^\delta(s)\|^2_{L^\infty}\,ds \ge N\Big\},

and analogously defined τ₁ and τ₂. Set τ = τ(δ, N) := min{τ₁^δ, τ₂^δ, τ₁, τ₂}. Passing from ũ^δ to u^δ = ũ^δ + W_A^δ and from ũ to u = ũ + W_A, we deduce the following estimate, which is valid up to the stopping time τ:

    \sup_{t\le\tau}\|(u-u^\delta)(t)\|^2_{L^2} \le c_2(N)\,\|W_A - W_A^\delta\|^2_{C([0,T]\times[0,1])}.    (20)
(ii) We show lim_{N→∞} P(τ < T) = 0 uniformly for all possible values of δ, cf. (11). For this purpose, it suffices to prove P(τ_i^δ < T) → 0 as N → ∞, i = 1, 2, uniformly in δ (and the analogous assertions without δ). Here only the case i = 1 is shown; the case i = 2 is an easy consequence of Corollary 2.3 and the Chebyshev inequality. Because t ↦ \int_0^t \|\tilde u^\delta(s)\|^2_{L^\infty}\,ds is continuous and monotonically increasing, we obtain

    P(\tau_1^\delta < T) = P\Big(\int_0^T \|\tilde u^\delta(s)\|^2_{L^\infty}\,ds > N\Big).

From the a-priori estimates in Theorem 4.1 we infer (with a polynomial d and a constant c₃ independent of N and δ)

    P(\tau_1^\delta < T) \le P\Big(d\big(\|W_A^\delta\|^2_{C([0,T]\times[0,1])}, \|u_0\|^2_{L^2}\big)\, e^{c_3(1+\|W_A^\delta\|^2_{C([0,T]\times[0,1])})} \ge N\Big).

Taking the logarithm and then using the Chebyshev inequality, we conclude

    P(\tau_1^\delta < T) \le \frac{E\Big[d\big(\|W_A^\delta\|^2_{C([0,T]\times[0,1])}, \|u_0\|^2_{L^2}\big) + c_3\big(1+\|W_A^\delta\|^2_{C([0,T]\times[0,1])}\big)\Big]}{\ln(N)} \xrightarrow{N\to\infty} 0,

uniformly in δ due to the uniform estimate of Corollary 2.3 with α = 0.

(iii) Proof of the claim of Theorem 5.1. Let a > 0. Then

    P\Big(\sup_{t\le T}\|(u-u^\delta)(t)\|^2_{L^2} > a\Big) \le P\Big(\sup_{t\le T}\|(u-u^\delta)(t)\|^2_{L^2} > a \text{ and } \tau = T\Big) + P(\tau < T).
The second probability can be made small uniformly in $\delta$ by a suitable choice of $N$, due to step (ii). We replace $\tau$ by $T$ in (20) and apply the Chebyshev inequality to obtain
\[
P\Big(\sup_{t\le T}\|(u-u^\delta)(t)\|_{L^2}^2 > a \ \text{and}\ \tau = T\Big)
\;\le\; \frac{E\big[c_2(N)\,\|W_A-W_A^\delta\|_{C([0,T]\times[0,1])}^2\big]}{a}
\;\xrightarrow{\;\delta\to 0\;}\; 0,
\]
as shown in Lemma 2.2. This proves the first part of the assertion. The second part is shown analogously. The approximation result for the solutions $u$ and $u^{\delta,M}$ of (8) and (13) is proved completely analogously: replace $u^\delta$ by $u^{\delta,M}$ and $W_A^\delta$ by $P_M W_A^\delta$; the result
\[
E\,\|W_A - P_M W_A^\delta\|_{C([0,T]\times[0,1])}^2 \;\xrightarrow{\;\delta\to 0,\ M\to\infty\;}\; 0
\]
follows by Lemma 2.2 and the fact that $E\,\|W_A^\delta - P_M W_A^\delta\|_{C([0,T]\times[0,1])}^2 \to 0$ as $M\to\infty$, uniformly in $\delta$.

Before we prove Theorem 2.4, we cite the following theorem of Fernique, which can be found, e.g., in [P-Z-92], p. 37.

Theorem 5.2 (Fernique). Let $\mu$ be a symmetric Gaussian measure on a Banach space $E$ with Borel $\sigma$-algebra $\mathcal B(E)$; let $r>0$ and $\lambda>0$ fulfill
\[
\ln\frac{1-\mu(\bar B_r)}{\mu(\bar B_r)} + 32\lambda r^2 \;\le\; -1,
\]
where $\bar B_r = \{x\in E : \|x\|\le r\}$. Then
\[
\int_E e^{\lambda\|x\|^2}\,d\mu(x) \;\le\; e^{16\lambda r^2} + \frac{e^2}{e^2-1}.
\]
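The integrability that Theorem 5.2 guarantees can be illustrated on a concrete Gaussian measure, namely the law of Brownian motion on $C[0,1]$ with the sup-norm. The sketch below is our own illustration; the parameters $\lambda$, $r$ and the path discretization are arbitrary choices, not taken from the paper. It estimates Fernique's condition and both sides of the bound by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample sup-norms ||W||_{C[0,1]} of discretized Brownian paths: the law of W
# is a symmetric Gaussian measure on E = C[0,1].
n_paths, n_steps = 20000, 500
dW = rng.normal(0.0, np.sqrt(1.0 / n_steps), size=(n_paths, n_steps))
sup_norm = np.abs(np.cumsum(dW, axis=1)).max(axis=1)

lam, r = 0.005, 2.0
mu_ball = np.mean(sup_norm <= r)                       # estimate of mu(B_r)
condition = np.log((1.0 - mu_ball) / mu_ball) + 32 * lam * r ** 2

lhs = np.mean(np.exp(lam * sup_norm ** 2))             # estimate of E exp(lam ||W||^2)
rhs = np.exp(16 * lam * r ** 2) + np.e ** 2 / (np.e ** 2 - 1)
print(condition, lhs, rhs)
```

For this small $\lambda$ the condition evaluates to a value below $-1$, and the empirical exponential moment stays well under the Fernique bound, which is how the theorem is used below: a single $(\lambda, r)$ pair valid for a whole family of measures yields an exponential moment bound uniform over that family.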
We now turn to the proof of Theorem 2.4. It follows from Theorem 5.1 by an application of the theorem of Vitali, see, e.g., [H-T], Satz 1.8. Since, by Theorem 5.1,
\[
\sup_{t\le T}\|(u-u^\delta)(t)\|_{L^2(0,1)}^m \;\xrightarrow{\;\delta\to 0\;}\; 0
\]
in probability for all $m\in\mathbb N$, it suffices to prove the boundedness of all moments of $\sup_{t\le T}\|(u-u^\delta)(t)\|_{L^2(0,1)}$. That means we have to show, for all $m\in\mathbb N$,
\[
E\sup_{t\le T}\|u(t)\|_{L^2(0,1)}^m \le c_m
\quad\text{and}\quad
E\sup_{t\le T}\|u^\delta(t)\|_{L^2(0,1)}^m \le c_m, \tag{21}
\]
where $c_m$ does not depend on $\delta$.
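Why the uniform moment bound (21) matters can be seen in a toy example of our own (not from the paper): a sequence that converges to $0$ in probability but whose first moments do not converge, precisely because no uniform higher-moment bound holds.

```python
import numpy as np

rng = np.random.default_rng(2)

# Y_n = n with probability 1/n, else 0: Y_n -> 0 in probability, yet
# E[Y_n] = 1 for every n (and E[Y_n^2] = n, so no uniform moment bound).
n = 1000
samples = n * (rng.random(200_000) < 1.0 / n)

prob_large = np.mean(samples > 0.1)   # ~ 1/n: convergence in probability
mean = samples.mean()                 # ~ 1: the first moment does not vanish
print(prob_large, mean)
```

Vitali's theorem rules out exactly this behaviour: the uniform bound (21) on all moments provides the uniform integrability that upgrades convergence in probability to convergence of all moments.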
We use Theorem 4.1 and Theorem 5.2 of Fernique. Taking Corollary 2.3 into account, we only have to show (21) for $\tilde u_\alpha = u - W_{A-\alpha}$ and $\tilde u_\alpha^\delta = u^\delta - W_{A-\alpha}^\delta$. We confine ourselves to the latter case. From Theorem 4.1 we infer
\[
E\sup_{t\le T}\|\tilde u_\alpha^\delta\|_{L^2}^{2m}
\;\le\; c_1\, E\, e^{2mc\int_0^T\big(1+\|W_{A-\alpha}^\delta(s)\|_{L^\infty}^2\big)ds}
+ c_1\, E\Big(\|u_0\|_{L^2}^2 + \frac12\int_0^T\|W_{A-\alpha}^\delta(s)\|_{L^4}^4\,ds + \alpha^2\int_0^T\|W_{A-\alpha}^\delta(s)\|_{L^2}^2\,ds\Big)^{2m},
\]
where $c_1$ depends on $m$ and $T$ only. Corollary 2.3 shows that the second term remains bounded uniformly in $\delta$ for fixed $\alpha$, $T$, and $m$. The first one can be estimated by
\[
E\, e^{2mc\int_0^T\big(1+\|W_{A-\alpha}^\delta(s)\|_{L^\infty}^2\big)ds} \;\le\; c_2(c,m,T)\; E\, e^{c_2\|W_{A-\alpha}^\delta\|_{C([0,T]\times[0,1])}^2}.
\]
It remains to show: for every constant $\tilde c>0$ there exists an $\alpha>0$ such that $E\, e^{\tilde c\|W_{A-\alpha}^\delta\|_{C([0,T]\times[0,1])}^2} \le \mathrm{const}(\tilde c)$, independently of $\delta$. We verify the conditions of Theorem 5.2 with $E = C([0,T]\times[0,1])$, $\lambda = \tilde c$, and $\mu = P^{W_{A-\alpha}^\delta}$, defined by $\mu(B) := P(W_{A-\alpha}^\delta \in B)$ for all $B$ in the Borel $\sigma$-field on $E$. The measure $\mu$ is a symmetric Gaussian measure on $E$, according to Corollary 2.3. From this corollary we infer
\[
1-\mu(\bar B_1) = P\big(\|W_{A-\alpha}^\delta\|_{C([0,T]\times[0,1])}^2 > 1\big) \le E\big[\|W_{A-\alpha}^\delta\|_{C([0,T]\times[0,1])}^2\big] \le c(\alpha) \xrightarrow{\;\alpha\to\infty\;} 0
\]
\[
\Rightarrow\quad \mu(\bar B_1) \ge 1-c(\alpha) > 0 \quad\text{for sufficiently large } \alpha
\]
\[
\Rightarrow\quad \frac{1-\mu(\bar B_1)}{\mu(\bar B_1)} \le \frac{c(\alpha)}{1-c(\alpha)} \le e^{-1-32\tilde c} \quad\text{for sufficiently large } \alpha.
\]
For this choice of $\alpha$, the assumption of Theorem 5.2 is fulfilled (with $r=1$, $\lambda=\tilde c$) independently of $\delta$, and
\[
E\big[e^{\tilde c\|W_{A-\alpha}^\delta\|_{C([0,T]\times[0,1])}^2}\big]
= \int_E e^{\tilde c\|x\|_{C([0,T]\times[0,1])}^2}\,dP^{W_{A-\alpha}^\delta}(x) \;\le\; e^{16\tilde c} + \frac{e^2}{e^2-1},
\]
which implies the claim.

The proof of $E\int_0^T\|(u-u^\delta)(t)\|_{L^\infty}^2\,dt \xrightarrow{\delta\to 0} 0$ is carried out analogously, using the second a-priori estimate of Theorem 4.1. The approximation result for the solutions $u$ and $u^{\delta,M}$ of (8) and (13) is again completely analogous: replace $u^\delta$ by $u^{\delta,M}$ and $W_A^\delta$ by $P_M W_A^\delta$; it is a straightforward extension of Corollary 2.3 that $E\,\|P_M W_{A-\alpha}^\delta\|_{C([0,T]\times[0,1])}^m \le \mathrm{const}(\alpha,m,T)$ uniformly in $\delta$ and $M$, with the corresponding $c(\alpha)\to 0$ as $\alpha\to\infty$.
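To close this section, the convergence asserted by Theorem 5.1 can be explored numerically. The sketch below is entirely our own illustration, in the spirit of the approximating equations with time-discretized noise: a semi-implicit Fourier-spectral scheme for the hyperviscous equation ($p=2$) with two randomly forced low modes. The grid, time step, viscosity, and forcing are arbitrary choices, not the paper's. It drives the same Brownian realization once with noise step $\delta$ and once with step $\delta/4$ (piecewise-constant increments) and compares the solutions in $L^2$:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical discretization choices (ours, not the paper's):
N, nu, p = 128, 1e-4, 2                        # grid points, viscosity, hyperviscosity order
dt, T, delta = 1e-4, 0.2, 1e-2                 # time step, horizon, noise step
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)   # spectral wavenumbers on [0, 1)
x = np.arange(N) / N
forcing_modes = np.stack([np.sin(2 * np.pi * x), np.cos(4 * np.pi * x)])

def solve(increments, step):
    """Semi-implicit Euler: implicit hyperviscous term, explicit -u u_x and
    piecewise-constant noise eta = (Brownian increment) / step."""
    u = np.sin(2 * np.pi * x)                  # initial condition u0
    denom = 1.0 + dt * nu * k ** (2 * p)
    for n in range(round(T / dt)):
        i = min(int(n * dt / step), len(increments) - 1)
        eta = (increments[i] / step) @ forcing_modes
        nonlin = -0.5 * np.real(np.fft.ifft(1j * k * np.fft.fft(u ** 2)))
        u = np.real(np.fft.ifft(np.fft.fft(u + dt * (nonlin + eta)) / denom))
    return u

# One Brownian path per forced mode, sampled at the fine scale delta/4,
# then aggregated to the coarse scale delta.
fine = rng.normal(0.0, np.sqrt(delta / 4), size=(round(T / (delta / 4)), 2))
coarse = fine.reshape(-1, 4, 2).sum(axis=1)

u_coarse = solve(coarse, delta)
u_fine = solve(fine, delta / 4)
err = np.sqrt(np.mean((u_coarse - u_fine) ** 2))   # discrete L^2(0,1) difference
print(err)
```

With the same driving path, refining the noise step shrinks the $L^2$ difference, mirroring the $\delta\to 0$ statement of Theorem 5.1; scheme and parameters here are illustrative only.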
6. Regularity

This section is devoted to the proof of Theorem 2.5. We need some estimates on the nonlinearity in (8) and (12):
\[
\Big\|\frac12\frac{\partial}{\partial x}(u^2)-\frac12\frac{\partial}{\partial x}(v^2)\Big\|_{H^{-\rho}_{per}(0,1)} \le c\big(\|u\|_{H^0(0,1)}+\|v\|_{H^0(0,1)}\big)\,\|u-v\|_{H^0(0,1)} \tag{22}
\]
for $\rho > 3/2$ and for all $u$ and $v$ in $H^0(0,1)$, and
\[
\Big\|\frac12\frac{\partial}{\partial x}(u^2)-\frac12\frac{\partial}{\partial x}(v^2)\Big\|_{H^{j}_{per}(0,1)} \le c\big(\|u\|_{H^{j+1}_{per}(0,1)}+\|v\|_{H^{j+1}_{per}(0,1)}\big)\,\|u-v\|_{H^{j+1}_{per}(0,1)} \tag{23}
\]
for $j\in\mathbb N_0$ and for all $u$ and $v$ in $H^{j+1}_{per}(0,1)$. (For the definition of the function spaces $H^r_{per}(0,1)$, $r\in\mathbb R$, see (4).)
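Estimate (23) with $j=0$ can be spot-checked numerically on trigonometric polynomials. The sketch below is our own test (the threshold 10 is an arbitrary sanity bound, not the sharp constant $c$): it evaluates both sides with spectral differentiation on random periodic functions.

```python
import numpy as np

rng = np.random.default_rng(4)

N = 256
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
x = np.arange(N) / N

def deriv(f):                      # spectral derivative on [0, 1)
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def l2(f):                         # discrete L^2(0,1) norm
    return np.sqrt(np.mean(f ** 2))

def h1(f):                         # H^1_per(0,1) norm
    return np.sqrt(l2(f) ** 2 + l2(deriv(f)) ** 2)

def random_trig(modes=8):
    a, b = rng.normal(size=modes), rng.normal(size=modes)
    return sum(a[m] * np.cos(2 * np.pi * (m + 1) * x)
               + b[m] * np.sin(2 * np.pi * (m + 1) * x) for m in range(modes))

ratios = []
for _ in range(50):
    u, v = random_trig(), random_trig()
    lhs = l2(0.5 * deriv(u ** 2) - 0.5 * deriv(v ** 2))  # = ||u u_x - v v_x||_{L^2}
    rhs = (h1(u) + h1(v)) * h1(u - v)
    ratios.append(lhs / rhs)

print(max(ratios))
```

In the trials the ratio stays well below 1, consistent with the embedding constant of $H^1_{per}(0,1)\subset L^\infty(0,1)$ entering the proof below.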
Proof of (22). Using the fact that $H^{-r}_{per}(0,1)$ and the dual space of $H^{r}_{per}(0,1)$ are isomorphic for $r\in\mathbb R$, we obtain
\[
\Big\|\frac{\partial}{\partial x}(u^2)-\frac{\partial}{\partial x}(v^2)\Big\|_{H^{-\rho}_{per}(0,1)}
= \sup\Big\{\int_0^1\Big(\frac{\partial}{\partial x}(u^2)-\frac{\partial}{\partial x}(v^2)\Big)\varphi\,dx : \|\varphi\|_{H^\rho_{per}(0,1)}=1\Big\}
\]
\[
= \sup\Big\{\int_0^1 (u+v)(u-v)\,\frac{\partial}{\partial x}\varphi\,dx : \|\varphi\|_{H^\rho_{per}(0,1)}=1\Big\}
\]
\[
\le \sup\Big\{\|u+v\|_{H^0(0,1)}\,\|u-v\|_{H^0(0,1)}\,\|\varphi\|_{C^1_{per}(0,1)} : \|\varphi\|_{H^\rho_{per}(0,1)}=1\Big\}
\le c\big(\|u\|_{H^0(0,1)}+\|v\|_{H^0(0,1)}\big)\,\|u-v\|_{H^0(0,1)},
\]
where we use the continuous imbedding $H^\rho_{per}(0,1)\subset C^1_{per}(0,1)$ for $\rho > 3/2$.

Proof of (23). If $j = 0$, then we estimate, with the continuous imbedding $H^1_{per}(0,1)\subset L^\infty(0,1)$,
\[
\Big\|u\frac{\partial}{\partial x}u - v\frac{\partial}{\partial x}v\Big\|_{H^0(0,1)}
\le \Big\|(u-v)\frac{\partial}{\partial x}u\Big\|_{H^0(0,1)} + \Big\|v\Big(\frac{\partial}{\partial x}u-\frac{\partial}{\partial x}v\Big)\Big\|_{H^0(0,1)}
\]
\[
\le \|u-v\|_{L^\infty(0,1)}\Big\|\frac{\partial}{\partial x}u\Big\|_{H^0(0,1)} + \|v\|_{L^\infty(0,1)}\Big\|\frac{\partial}{\partial x}u-\frac{\partial}{\partial x}v\Big\|_{H^0(0,1)}
\le c\,\|u-v\|_{H^1_{per}(0,1)}\big(\|u\|_{H^1_{per}(0,1)}+\|v\|_{H^1_{per}(0,1)}\big).
\]
For $j\in\mathbb N$ we obtain
\[
\Big\|u\frac{\partial}{\partial x}u - v\frac{\partial}{\partial x}v\Big\|_{H^j_{per}(0,1)}
\le \Big\|(u-v)\frac{\partial}{\partial x}u\Big\|_{H^j_{per}(0,1)} + \Big\|v\Big(\frac{\partial}{\partial x}u-\frac{\partial}{\partial x}v\Big)\Big\|_{H^j_{per}(0,1)}
\le c\,\|u-v\|_{H^{j+1}_{per}(0,1)}\big(\|u\|_{H^{j+1}_{per}(0,1)}+\|v\|_{H^{j+1}_{per}(0,1)}\big),
\]
using that $H^j_{per}(0,1)$ is a Banach algebra for $j\in\mathbb N$.

These estimates enable us to prove the claim by a boot-strapping argument.

The base clause $j = 0$. From (8) and (12) we infer that
\[
\|u-u^\delta\|_{L^\infty([0,T],C_{per}(0,1))}
\le \sup_{t\in[0,T]}\Big\|\int_0^t e^{(t-s)A}\,\frac12\frac{\partial}{\partial x}\big(u^2-(u^\delta)^2\big)(s)\,ds\Big\|_{C_{per}(0,1)}
+ \|W_A-W_A^\delta\|_{C([0,T],C_{per}(0,1))}.
\]
For the first summand we compute, by (22) and the imbedding $H^\sigma_{per}(0,1)\subset C_{per}(0,1)$ for $\sigma > 1/2$,
\[
\sup_{t\in[0,T]}\Big\|\int_0^t e^{(t-s)A}\,\frac12\frac{\partial}{\partial x}\big(u^2-(u^\delta)^2\big)(s)\,ds\Big\|_{H^\sigma_{per}(0,1)}
\le \sup_{t\in[0,T]}\int_0^t C(T)(t-s)^{-\frac{\sigma+\rho}{2p}}\Big[\|u-u^\delta\|_{H^0(0,1)}\big(\|u\|_{H^0(0,1)}+\|u^\delta\|_{H^0(0,1)}\big)\Big](s)\,ds
\]
\[
\le c\,\|u-u^\delta\|_{L^\infty([0,T],H^0(0,1))}\big(\|u\|_{L^\infty([0,T],H^0(0,1))}+\|u^\delta\|_{L^\infty([0,T],H^0(0,1))}\big),
\]
since $\rho$ and $\sigma$ can be chosen such that $(\rho+\sigma)/(2p)\le 3/(2p)\le 3/4$ because of $p \ge 2$. Taking the $m$th power and the expectation value then yields
\[
E\sup_{t\in[0,T]}\Big\|\int_0^t e^{(t-s)A}\,\frac12\frac{\partial}{\partial x}\big(u^2-(u^\delta)^2\big)(s)\,ds\Big\|_{C_{per}(0,1)}^m
\le c\,E\Big[\|u-u^\delta\|_{L^\infty([0,T],H^0(0,1))}^m\big(\|u\|_{L^\infty([0,T],H^0(0,1))}+\|u^\delta\|_{L^\infty([0,T],H^0(0,1))}\big)^m\Big],
\]
which tends to zero for $\delta\to 0$, as is easily inferred from Theorem 2.4. Finally, Lemma 2.2 implies the analogous assertion for the second summand. That proves the base clause.

The proof of the recursion clause $j \to j+1$ is then carried out in a similar way:
\[
\|u-u^\delta\|_{L^\infty([0,T],C^{j+1}_{per}(0,1))}
\le \sup_{t\in[0,T]}\Big\|\int_0^t e^{(t-s)A}\,\frac12\frac{\partial}{\partial x}\big(u^2-(u^\delta)^2\big)(s)\,ds\Big\|_{C^{j+1}_{per}(0,1)}
+ \|W_A-W_A^\delta\|_{C([0,T],C^{j+1}_{per}(0,1))}.
\]
Estimate (23) and the imbedding $H^{j+\sigma}_{per}(0,1)\subset C^{j+1}_{per}(0,1)$ for $\sigma > 3/2$ then imply for the first summand ($j \ge 1$)
\[
\sup_{t\in[0,T]}\Big\|\int_0^t e^{(t-s)A}\,\frac12\frac{\partial}{\partial x}\big(u^2-(u^\delta)^2\big)(s)\,ds\Big\|_{H^{j+\sigma}_{per}(0,1)}
\le \sup_{t\in[0,T]}\int_0^t C(T)(t-s)^{-\frac{j+\sigma-(j-1)}{2p}}\,ds\;
\|u-u^\delta\|_{L^\infty([0,T],H^j_{per}(0,1))}\big(\|u\|_{L^\infty([0,T],H^j_{per}(0,1))}+\|u^\delta\|_{L^\infty([0,T],H^j_{per}(0,1))}\big).
\]
Because w.l.o.g. $\sigma + 1 < 3$, the claim follows from the induction hypothesis and Corollary 3.1. For the remaining case $j = 0 \to j+1 = 1$ note that under the new conditions the proof of the base clause shows the claim of Theorem 2.5 for $j = 1$ in the $L^\infty([0,T],C^1_{per}(0,1))$-norm as well. The proof for the solutions $u$ and $u^{\delta,M}$ of (8) and (13) is again the same, by an easy extension of Corollary 3.1.

References

[A] Adams, R.: Sobolev Spaces. Volume 65 of Pure and Applied Mathematics. London-New York: Academic Press Inc., 1978
[Ba] Basdevant, C., Deville, M., Haldenwang, P., Lacroix, J., Quazzani, J., Peyret, R., Orlandi, P., Patera, A.: Spectral and finite difference solutions of the Burgers equation. Computers and Fluids 14(1), 23 (1986)
[Bl] Blömker, D.: Stochastic Partial Differential Equations and Surface Growth. Volume 35 of Augsburger Mathematisch-Naturwissenschaftliche Schriften. Augsburg: Wißner-Verlag, 2000
[Br] Breckner, H.: Approximation of the Solution of the Stochastic Navier-Stokes Equation. Report, Martin-Luther-Universität Halle-Wittenberg, 1998
[C-Y-95a] Chekhlov, A., Yakhot, V.: Kolmogorov turbulence in a random-force-driven Burgers equation. Phys. Rev. E 51(4), 2739–2742 (1995)
[C-Y-95b] Chekhlov, A., Yakhot, V.: Kolmogorov turbulence in a random-force-driven Burgers equation: anomalous scaling and probability density functions. Phys. Rev. E 52(5), 5681–5684 (1995)
[D-E] Duan, J., Ervin, V.: On the stochastic Kuramoto-Sivashinsky equation. Nonlinear Anal. 44, 205–216 (2001)
[F-N-S] Forster, D., Nelson, D., Stephen, M.: Large distance and long time properties of a randomly stirred fluid. Phys. Rev. A 16, 732–749 (1977)
[F-T] Flandoli, F., Tortorelli, V.: Time discretization of Ornstein-Uhlenbeck equations and stochastic Navier-Stokes equations with a generalized noise. Stochastics and Stochastics Reports 55, 141–165 (1995)
[G-S] Grecksch, W., Schmalfuß, B.: Approximation of the stochastic Navier-Stokes equation. Comp. Appl. Math. 15(3), 227–239 (1996)
[G] Gugg, C.: Approximation of Stochastic Partial Differential Equations and Turbulence in Fluids. PhD Thesis, Universität Augsburg, 2001
[H-T] Hackenbroch, W., Thalmair, A.: Stochastische Analysis. Teubner, 1994
[H-J96] Hayot, F., Jayaprakash, C.: Multifractality in the stochastic Burgers equation. Phys. Rev. E 54(5), 4681 (1996)
[H-J97] Hayot, F., Jayaprakash, C.: From scaling to multiscaling in the stochastic Burgers equation. Phys. Rev. E 56(4), 4259 (1997)
[H-J97a] Hayot, F., Jayaprakash, C.: Structure functions in the stochastic Burgers equation. Phys. Rev. E 56(1), 227 (1997)
[N-H-S-S] Niggemann, M., Holzmann, M., Schmidt, D., Soldner, K.: RNG-based viscoelastic turbulence model and numerical applications. Proceedings ACFD 2000 Beijing, Int. Conf. on Appl. CFD, organized by FES-CAST, Beijing, 2000, pp. 525–534
[O] Øksendal, B.: Stochastic Differential Equations. An Introduction with Applications. Fourth edition. Berlin-Heidelberg-New York: Springer, 1995
[Or] Orszag, S., et al.: Introduction to renormalization group modeling of turbulence. In: Simulation and Modeling of Turbulent Flows, ICASE/LaRC Series in Comp. Sci. and Eng., T. Gatski, M. Hussaini, J. Lumley, eds., New York: Oxford University Press, 1996
[P] Polyakov, A.: Turbulence without pressure. Phys. Rev. E 52(6), 6183–6188 (1995)
[P-D-T] da Prato, G., Debussche, A., Temam, R.: Stochastic Burgers equation. Nonlinear Differ. Equ. Appl. 1(4), 389–402 (1994)
[P-Z-92] da Prato, G., Zabczyk, J.: Stochastic Equations in Infinite Dimensions. Volume 44 of Encyclopedia of Mathematics and its Applications. Cambridge: Cambridge University Press, 1992
[P-Z-96] da Prato, G., Zabczyk, J.: Ergodicity for Infinite Dimensional Systems. Volume 229 of London Mathematical Society Lecture Note Series. Cambridge: Cambridge University Press, 1996
[S-85] Schmalfuß, B.: Zur Approximation der stochastischen Navier-Stokesschen Gleichungen. Wissenschaftliche Zeitschrift TH Leuna-Merseburg 27(5), 605–612 (1985)
[S-90] Schmalfuß, B.: Endlichdimensionale Approximation der Lösung der stochastischen Navier-Stokes-Gleichung. Statistics 21(1), 149–157 (1990)
[T] Temam, R.: Infinite-dimensional Dynamical Systems in Mechanics and Physics. Volume 68 of Applied Mathematical Sciences. Second edition. Berlin-Heidelberg-New York: Springer, 1997
[Tw] Twardowska, K.: An approximation theorem of Wong-Zakai type for stochastic Navier-Stokes equations. Rend. Sem. Mat. Univ. Padova 96, 15–36 (1996)
[Y-C-96] Yakhot, V., Chekhlov, A.: Algebraic tails of probability density functions in the random-force-driven Burgers turbulence. Phys. Rev. Lett. 77(15), 3118–3121 (1996)
[Y-O] Yakhot, V., Orszag, S.: Renormalization group (RNG) methods for turbulence closure. J. Scientific Computing 1, 3–52 (1986)
[Y-O-T-S-G] Yakhot, V., Orszag, S., Thangam, S., Speziale, C., Gatski, T.: Development of turbulence models for shear flows by a double expansion technique. Phys. Fluids A 4(7), 1510 (1992)
[Y-S] Yakhot, V., She, Z.-S.: Long-time, large-scale properties of the random-force-driven Burgers equation. Phys. Rev. Lett. 60, 1840–1843 (1988)

Communicated by P. Constantin