Guo Advances in Difference Equations (2017) 2017:386 https://doi.org/10.1186/s13662-017-1442-5
RESEARCH
Open Access
$L^p$ ($p \ge 2$)-strong convergence in averaging principle for multivalued stochastic differential equation with non-Lipschitz coefficients

Zhongkai Guo*
Correspondence:
[email protected] School of Mathematics and Statistics, South-Central University for Nationalities, Wuhan, 430074, China
Abstract
We investigate the averaging principle for multivalued stochastic differential equations (MSDEs) driven by a random process under non-Lipschitz conditions. We consider the convergence of solutions in $L^p$ ($p \ge 2$) and in probability between the MSDEs and the corresponding averaged MSDEs.

Keywords: multivalued stochastic differential equations; non-Lipschitz; averaging principle; $L^p$ ($p \ge 2$)-strong convergence
1 Introduction
Most systems in science and industry are perturbed by random environmental effects and are described by stochastic differential equations driven by (fractional) Brownian motion, Lévy processes, Poisson processes, and so on. A series of useful theories and methods have been proposed to explore stochastic differential equations, such as invariant manifolds [1–3], the averaging principle [3–12], the homogenization principle, and so on. All these theories and methods have been developed to extract effective dynamics from stochastic differential equations, which is more convenient for analysis and simulation.

The averaging principle is often used to approximate dynamical systems with random fluctuations and provides a powerful tool for simplifying nonlinear dynamical systems. Its essence is to establish an approximation theorem in which a simplified stochastic differential equation replaces the original one in a suitable sense, together with the corresponding optimal order of convergence. The theory of the stochastic averaging principle has a long and rich history. It was first introduced by Khasminskii [13] in 1968, and since then the principle for stochastic differential equations has been studied intensively and extensively. Stoyanov and Bainov [11] investigated the averaging method for a class of stochastic differential equations with Poisson noise, proving that under some conditions the solutions of the averaged systems converge to the solutions of the original systems in mean square and in probability. Xu, Duan, and Xu [4] established an averaging principle for stochastic differential equations with general non-Gaussian Lévy noise. Quite recently, an $L^2$ (mean square) strong averaging principle for multivalued stochastic differential equations with Brownian motion was established by
Xu and Liu [14]. Note that all the works mentioned above assume a Lipschitz condition; however, in real-world applications the Lipschitz condition is often too restrictive, so it is necessary and meaningful to consider non-Lipschitz settings; see [15]. In [16], the author discussed the existence and uniqueness of a solution in the $L^p$ ($p$th moment) sense for some multivalued stochastic differential equations under a non-Lipschitz condition. From the dynamical point of view, we are concerned with the $p$th moment averaging principle for multivalued stochastic differential equations under a non-Lipschitz condition, in contrast to [14], where a Lipschitz condition is assumed. Although the method used in this paper is similar to that of [14], our conclusion is more general, since $L^2$-strong convergence does not imply $L^p$ ($p \ge 2$) strong convergence in general. Meanwhile, results for higher-order moments are needed: they possess good robustness and can be applied to computations in statistics, finance, and other areas. Recently, many authors have considered multivalued and set-valued stochastic differential equations; see, for example, [17–20].

In this article, we study the averaging principle for MSDEs of the form
$$ dX_t + A(X_t)\,dt \ni f(t, X_t)\,dt + g(t, X_t)\,dB_t, \qquad X_0 = x \in D(A), \tag{1.1} $$
with $t \in [0, T]$, where $A$ is a multivalued maximal monotone operator, which we introduce in the next section, and $f : [0, T] \times \mathbb{R}^d \to \mathbb{R}^d$ and $g : [0, T] \times \mathbb{R}^d \to \mathbb{R}^d$ are measurable functions satisfying non-Lipschitz conditions with respect to $x$. To derive the averaging principle for non-Lipschitz multivalued stochastic differential equations, we need the assumptions given in the next section.

This paper is organized as follows. In Section 2, we state the assumptions for our theory and then introduce the definition of a multivalued maximal monotone operator and related results. The convergence of solutions in $L^p$ ($p \ge 2$) and in probability between the MSDEs and the corresponding averaged MSDEs is considered in Section 3. Throughout this paper, the letter $C$ denotes a positive constant whose value may change from line to line. When necessary, we explicitly indicate the dependence of constants on parameters.
2 Framework and preliminaries
2.1 Basic hypothesis
In this paper, we impose the following assumptions.

H1 (Non-Lipschitz condition) Suppose that $f$ and $g$ are bounded and satisfy the following conditions: for any $x, y \in \mathbb{R}^d$ and $t \in [0, T]$,
$$ \bigl|g(t,x) - g(t,y)\bigr|^2 \le \rho_{2,\eta}^2\bigl(|x - y|\bigr) $$
and
$$ \bigl|f(t,x) - f(t,y)\bigr| \le \rho_{1,\eta}\bigl(|x - y|\bigr), $$
where, for $0 < \eta < \frac{1}{e}$, $\rho_{1,\eta}$ and $\rho_{2,\eta}$ are the two concave functions defined by
$$ \rho_{j,\eta}(x) := \begin{cases} x\bigl[\log x^{-1}\bigr]^{\frac{1}{j}}, & x \le \eta, \\[4pt] \Bigl(\bigl[\log \eta^{-1}\bigr]^{\frac{1}{j}} - \frac{1}{j}\bigl[\log \eta^{-1}\bigr]^{\frac{1}{j}-1}\Bigr)x + \frac{1}{j}\bigl[\log \eta^{-1}\bigr]^{\frac{1}{j}-1}\eta, & x > \eta. \end{cases} \tag{2.1} $$
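As a side remark (this calculation is not written out in the paper), the modulus $\rho_{1,\eta}$ satisfies the Osgood condition, which is the standard reason why uniqueness results such as those of [16, 22] are available under H1:
$$ \int_{0^+} \frac{du}{\rho_{1,\eta}(u)} = \int_0^{\eta} \frac{du}{u \log u^{-1}} = \lim_{r \downarrow 0} \bigl[ \log\log r^{-1} - \log\log \eta^{-1} \bigr] = +\infty. $$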
Let $\bar f : \mathbb{R}^d \to \mathbb{R}^d$ and $\bar g : \mathbb{R}^d \to \mathbb{R}^d$ be measurable functions satisfying the same non-Lipschitz conditions with respect to $x$ as $f(t,x)$ and $g(t,x)$. Moreover, we assume that $f(t,x)$, $\bar f(x)$, $g(t,x)$, and $\bar g(x)$ satisfy the following conditions:

H2
$$ \frac{1}{T}\int_0^T \bigl|f(s,x) - \bar f(x)\bigr|^2\,ds \le \varphi_1(T)\bigl(1 + |x|^2\bigr), \tag{2.2} $$

H3
$$ \frac{1}{T}\int_0^T \bigl|g(s,x) - \bar g(x)\bigr|^2\,ds \le \varphi_2(T)\bigl(1 + |x|^2\bigr), \tag{2.3} $$

where $\varphi_i(T)$, $i = 1, 2$, are positive bounded functions; moreover, for fixed $T$, $\varphi_i(T)$ is a constant, that is, $\varphi_i(\cdot)$ depends only on the time horizon.

H4 The operator $A$ is a maximal monotone operator with $D(A) = \mathbb{R}^d$.
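As a concrete illustration of H2 (an example added here, not taken from the paper), take $f(t,x) = \sin^2(t)\,h(x)$ and $\bar f(x) = \frac{1}{2}h(x)$ with $|h(x)|^2 \le C(1 + |x|^2)$; then
$$ \frac{1}{T}\int_0^T \bigl|f(s,x) - \bar f(x)\bigr|^2\,ds = \frac{|h(x)|^2}{T} \int_0^T \Bigl( \sin^2 s - \frac{1}{2} \Bigr)^2 ds = \frac{|h(x)|^2}{4T} \int_0^T \cos^2(2s)\,ds \le \frac{C}{4}\bigl(1 + |x|^2\bigr), $$
so H2 holds with $\varphi_1(T) \equiv \frac{C}{4}$.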
2.2 Multivalued operators and MSDEs
A map $A : \mathbb{R}^d \to 2^{\mathbb{R}^d}$ is called a multivalued operator. Define the domain and image of $A$ as
$$ D(A) := \bigl\{ x \in \mathbb{R}^d : A(x) \ne \emptyset \bigr\}, \qquad \operatorname{Im}(A) := \bigcup_{x \in D(A)} A(x), $$
and the graph of $A$ as
$$ \operatorname{Gr}(A) := \bigl\{ (x, y) \in \mathbb{R}^{2d} : x \in \mathbb{R}^d,\ y \in A(x) \bigr\}. $$

Definition 2.1
(1) A multivalued operator $A$ is called monotone if
$$ \langle y_1 - y_2, x_1 - x_2 \rangle \ge 0 \quad \text{for all } (x_1, y_1), (x_2, y_2) \in \operatorname{Gr}(A). $$
(2) A monotone operator $A$ is called maximal monotone if and only if
$$ (x_1, y_1) \in \operatorname{Gr}(A) \quad \Longleftrightarrow \quad \langle y_1 - y_2, x_1 - x_2 \rangle \ge 0 \ \text{ for all } (x_2, y_2) \in \operatorname{Gr}(A). $$
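A standard one-dimensional example (added for orientation) is the subdifferential of the convex function $x \mapsto |x|$:
$$ A(x) = \partial |x| = \begin{cases} \{-1\}, & x < 0, \\ [-1, 1], & x = 0, \\ \{+1\}, & x > 0, \end{cases} $$
which is maximal monotone with $D(A) = \mathbb{R}$, so it also satisfies H4.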
Now we give a precise definition of a solution of equation (1.1).

Definition 2.2 A pair of continuous and $\mathcal{F}_t$-adapted processes $(X, K)$ is called a strong solution of equation (1.1) if:
• $X_0 = x$ and $X(t) \in D(A)$ a.s.;
• $K = \{K(t), \mathcal{F}_t;\ t \in \mathbb{R}_+\}$ is of finite variation and $K(0) = 0$ a.s.;
• $dX_t = f(t, X_t)\,dt + g(t, X_t)\,dB_t - dK_t$, $t \in \mathbb{R}_+$, a.s.;
• for any pair of continuous processes $(\alpha(t), \beta(t))$ satisfying
$$ \bigl(\alpha(t), \beta(t)\bigr) \in \operatorname{Gr}(A), \quad t \in \mathbb{R}_+, $$
the measure
$$ \bigl\langle X(t) - \alpha(t),\ dK(t) - \beta(t)\,dt \bigr\rangle \ge 0. $$
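For intuition (a standard observation, not used in the sequel): when $A = \partial I_{\bar{\mathcal{O}}}$ is the subdifferential of the indicator function of a closed convex set $\bar{\mathcal{O}} \subset \mathbb{R}^d$,
$$ A(x) = \begin{cases} \{0\}, & x \in \operatorname{int}\mathcal{O}, \\ N_x(\bar{\mathcal{O}}) \ \text{(the outward normal cone)}, & x \in \partial\mathcal{O}, \\ \emptyset, & x \notin \bar{\mathcal{O}}, \end{cases} $$
Definition 2.2 describes a diffusion reflected at the boundary of $\mathcal{O}$, and $K$ is the boundary-pushing process that increases only when $X$ is on $\partial\mathcal{O}$. Note that this particular $A$ has $D(A) = \bar{\mathcal{O}} \ne \mathbb{R}^d$, so it illustrates Definition 2.2 rather than assumption H4.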
Also, we need the following lemma from [21].

Lemma 2.1 Let $A$ be a multivalued maximal monotone operator, let $t \mapsto (X(t), K(t))$ and $t \mapsto (\tilde X(t), \tilde K(t))$ be continuous functions with $X(t), \tilde X(t) \in D(A)$, and let $t \mapsto K(t), \tilde K(t)$ be of finite variation. Let $(\alpha(t), \beta(t))$ be continuous functions satisfying
$$ \bigl(\alpha(t), \beta(t)\bigr) \in \operatorname{Gr}(A), \quad t \in \mathbb{R}_+. $$
If
$$ \bigl\langle X(t) - \alpha(t),\ dK(t) - \beta(t)\,dt \bigr\rangle \ge 0 \quad \text{and} \quad \bigl\langle \tilde X(t) - \alpha(t),\ d\tilde K(t) - \beta(t)\,dt \bigr\rangle \ge 0, $$
then
$$ \bigl\langle X(t) - \tilde X(t),\ dK(t) - d\tilde K(t) \bigr\rangle \ge 0. $$

Lemma 2.2 ([22]) Under H1 and H4, let the initial condition satisfy $E|x|^{2p} < +\infty$. For any $p \ge 1$ and $0 \le t \le T$, equation (1.1) has a unique solution satisfying
$$ E \sup_{0 \le t \le T} |X_t|^{2p} \le C_T^{(p,x)} < +\infty. $$
The following example and two lemmas are taken from [22].

Lemma 2.3 Let $\rho : \mathbb{R}_+ \to \mathbb{R}_+$ be a continuous nondecreasing function. If $g(s)$ and $q(s)$ are two strictly positive functions on $\mathbb{R}_+$ such that
$$ g(t) \le g(0) + \int_0^t q(s)\,\rho\bigl(g(s)\bigr)\,ds, \quad t \ge 0, $$
then
$$ g(t) \le f^{-1}\biggl( f\bigl(g(0)\bigr) + \int_0^t q(s)\,ds \biggr), $$
where $f(x) := \int_{x_0}^x \frac{dy}{\rho(y)}$ is well-defined for some $x_0 > 0$.

Example 2.1 For $0 < \eta < \frac{1}{e}$, define a concave function
$$ \rho_\eta(x) := \begin{cases} x \log x^{-1}, & x \le \eta, \\ \eta \log \eta^{-1} + (\log \eta^{-1} - 1)(x - \eta), & x > \eta. \end{cases} $$
Choosing $x_0 = \eta$, we have
$$ f(x) = \log\frac{\log \eta}{\log x}, \quad 0 < x < \eta, \tag{2.4} $$
$$ f^{-1}(x) = \exp\bigl\{ \log \eta \cdot \exp\{-x\} \bigr\}, \quad x < 0. $$
If $g(0) < \eta$, then substituting these expressions into Lemma 2.3, we obtain
$$ g(t) \le g(0)^{\exp\{-\int_0^t q(s)\,ds\}}. \tag{2.5} $$
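For completeness, the substitution behind (2.5) reads
$$ f^{-1}\Bigl( f\bigl(g(0)\bigr) + \int_0^t q(s)\,ds \Bigr) = \exp\Bigl\{ \log\eta \cdot \frac{\log g(0)}{\log\eta}\, e^{-\int_0^t q(s)\,ds} \Bigr\} = \exp\Bigl\{ \log g(0)\, e^{-\int_0^t q(s)\,ds} \Bigr\} = g(0)^{\exp\{-\int_0^t q(s)\,ds\}}, $$
since $\exp\{-f(g(0))\} = \log g(0) / \log\eta$.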
Lemma 2.4
• For $j = 1, 2$, $\rho_{j,\eta}$ is decreasing in $\eta$, that is, $\rho_{j,\eta_1} \le \rho_{j,\eta_2}$ if $1 > \eta_1 > \eta_2$.
• For any $p \ge 0$ and $\eta$ sufficiently small, we have
$$ x^p \rho_{j,\eta}^j(x) \le \frac{1}{j+p}\,\rho_{1,\eta}\bigl(x^{j+p}\bigr), \quad j = 1, 2. $$
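A quick check of the second inequality on the range $x \le \eta$ (added for the reader; there it is in fact an equality): since $x^{j+p} \le x \le \eta$, the first branch of (2.1) applies to both sides, and
$$ x^p \rho_{j,\eta}^j(x) = x^p \cdot x^j \log x^{-1} = x^{j+p} \log x^{-1} = \frac{1}{j+p}\, x^{j+p} \log x^{-(j+p)} = \frac{1}{j+p}\, \rho_{1,\eta}\bigl(x^{j+p}\bigr). $$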
3 Averaging principle for MSDEs
In this section, we prove an averaging principle for multivalued stochastic differential equations (MSDEs) driven by a random process under non-Lipschitz conditions. We consider the convergence of solutions in $L^p$ ($p \ge 2$) and in probability between the MSDEs and the corresponding averaged MSDEs.

For $t \in [0, T]$, consider
$$ dX_t^\epsilon + A\bigl(X_t^\epsilon\bigr)\,dt \ni \epsilon f\bigl(t, X_t^\epsilon\bigr)\,dt + \sqrt{\epsilon}\, g\bigl(t, X_t^\epsilon\bigr)\,dB_t, \qquad X_0^\epsilon = x \in D(A). \tag{3.1} $$
The standard form of (3.1) is
$$ X_t^\epsilon = X^\epsilon(0) + \epsilon \int_0^t f\bigl(s, X_s^\epsilon\bigr)\,ds + \sqrt{\epsilon} \int_0^t g\bigl(s, X_s^\epsilon\bigr)\,dB_s - K^\epsilon(t), \quad t \in [0, T], \tag{3.2} $$
and the corresponding averaged MSDE of (3.2) is
$$ Y_t^\epsilon = Y^\epsilon(0) + \epsilon \int_0^t \bar f\bigl(Y_s^\epsilon\bigr)\,ds + \sqrt{\epsilon} \int_0^t \bar g\bigl(Y_s^\epsilon\bigr)\,dB_s - \bar K^\epsilon(t), \quad t \in [0, T]. \tag{3.3} $$
Here $\bar f : \mathbb{R}^d \to \mathbb{R}^d$ and $\bar g : \mathbb{R}^d \to \mathbb{R}^d$ are measurable functions satisfying the same non-Lipschitz conditions with respect to $x$ as $f(t,x)$ and $g(t,x)$, $Y^\epsilon(0) = X^\epsilon(0) = x$, and $f$, $\bar f$, $g$, $\bar g$ satisfy H2 and H3. Now we are in a position to investigate the relationship between the processes $X_t^\epsilon$ and $Y_t^\epsilon$.

Theorem 3.1 Suppose that conditions H1-H4 hold. Then, for a given arbitrarily small number $\delta > 0$ and for $\alpha \in (0, \frac{1}{2})$, there exists a number $\tilde\epsilon \in (0, \epsilon_0]$ ($\epsilon_0 = \frac{1}{16 p^2}$) such that, for all $\epsilon \in (0, \tilde\epsilon)$ and $p \ge 1$, we have
$$ E \sup_{t \in [0,\, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})]} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le \delta. $$
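Before turning to the proof, we record a purely illustrative numerical sketch of Theorem 3.1 (it is not part of the paper's argument). It assumes the simplest maximal monotone operator, the normal cone of $[0, +\infty)$, so that $(X^\epsilon, K^\epsilon)$ is a diffusion reflected at $0$ and a projected Euler scheme applies; the coefficients f, f_bar, g, g_bar below are hypothetical choices consistent with H1-H3.

import numpy as np

# Illustrative sketch only: A = normal cone of [0, +infinity), i.e. reflection at 0,
# simulated with a projected Euler scheme; the coefficients are hypothetical.
rng = np.random.default_rng(0)

eps = 1e-3                       # small parameter epsilon
alpha = 0.25                     # exponent from Theorem 3.1, alpha in (0, 1/2)
T = eps ** (alpha - 0.5)         # time horizon of order eps^(alpha - 1/2)
n = 20000
h = T / n
t = np.linspace(0.0, T, n + 1)

def f(s, x):                     # time-dependent drift (hypothetical)
    return np.sin(s) ** 2 * (1.0 - x)

def f_bar(x):                    # its time average: (1/T) * int_0^T sin^2(s) ds -> 1/2
    return 0.5 * (1.0 - x)

g = g_bar = 0.5                  # constant diffusion coefficient, so H3 holds trivially

dB = rng.normal(0.0, np.sqrt(h), n)   # one Brownian path shared by both equations

X = np.empty(n + 1)
Y = np.empty(n + 1)
X[0] = Y[0] = 1.0
for k in range(n):
    # the projection onto [0, +infinity) plays the role of the term -dK in Definition 2.2
    X[k + 1] = max(0.0, X[k] + eps * f(t[k], X[k]) * h + np.sqrt(eps) * g * dB[k])
    Y[k + 1] = max(0.0, Y[k] + eps * f_bar(Y[k]) * h + np.sqrt(eps) * g_bar * dB[k])

print("sup_t |X_t - Y_t| on the interval of Theorem 3.1:", np.max(np.abs(X - Y)))

For small eps the printed supremum is small, in line with the $L^{2p}$ bound of the theorem.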
Proof Consider the difference $X_t^\epsilon - Y_t^\epsilon$. From (3.2) and (3.3) we have
$$ X_t^\epsilon - Y_t^\epsilon = \epsilon \int_0^t \bigl( f\bigl(s, X^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr) \bigr)\,ds + \sqrt{\epsilon} \int_0^t \bigl( g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr)\,dB_s - \bigl( K^\epsilon(t) - \bar K^\epsilon(t) \bigr). $$
By Itô's formula [23],
$$
\begin{aligned}
\bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} = {}& -2p \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle X_s^\epsilon - Y_s^\epsilon,\ dK^\epsilon(s) - d\bar K^\epsilon(s) \bigr\rangle \\
& + 2p\epsilon \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle f\bigl(s, X^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,ds \\
& + 2p\sqrt{\epsilon} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,dB_s \\
& + p\epsilon \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl| g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds \\
& + 2p(p-1)\epsilon \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-4} \bigl\langle X_s^\epsilon - Y_s^\epsilon,\ g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr\rangle^2\,ds.
\end{aligned}
$$
Since, by Definition 2.2 and Lemma 2.1, the measure $\langle X_s^\epsilon - Y_s^\epsilon,\ dK^\epsilon(s) - d\bar K^\epsilon(s)\rangle$ is nonnegative, the first term above is nonpositive, and we get
$$
\begin{aligned}
\bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le {}& 2p\epsilon \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle f\bigl(s, X^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,ds \\
& + 2p\sqrt{\epsilon} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,dB_s \\
& + p\epsilon \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl| g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds \\
& + 2p(p-1)\epsilon \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-4} \bigl\langle X_s^\epsilon - Y_s^\epsilon,\ g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr\rangle^2\,ds.
\end{aligned}
$$
Then
$$
\begin{aligned}
E \sup_{0 \le t \le T} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le {}& 2p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle f\bigl(s, X^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,ds \\
& + 2p\sqrt{\epsilon}\, E \sup_{0 \le t \le T} \biggl| \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,dB_s \biggr| \\
& + p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl| g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds \\
& + 2p(p-1)\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-4} \bigl\langle X_s^\epsilon - Y_s^\epsilon,\ g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr\rangle^2\,ds \\
=: {}& I_1 + I_2 + I_3 + I_4.
\end{aligned}
$$
We now estimate $I_1$, $I_2$, $I_3$, $I_4$ separately.
Estimate of $I_1$. Using the triangle inequality, we have
$$
\begin{aligned}
I_1 &= 2p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle f\bigl(s, X^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,ds \\
&\le 2p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle f\bigl(s, X^\epsilon(s)\bigr) - f\bigl(s, Y^\epsilon(s)\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,ds \\
&\quad + 2p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle f\bigl(s, Y^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,ds \\
&=: I_{11} + I_{12}.
\end{aligned}
$$
For $I_{11}$, using the non-Lipschitz condition on $f$ and the Cauchy-Schwarz inequality, we have
$$ I_{11} \le 2p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-1} \rho_{1,\eta}\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds \le 2p\epsilon \int_0^T E |X_s^\epsilon - Y_s^\epsilon|^{2p-1} \rho_{1,\eta}\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds. $$
For $I_{12}$, using the Hölder and Young inequalities, we deduce
$$
\begin{aligned}
I_{12} &= 2p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle f\bigl(s, Y^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,ds \\
&\le p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \Bigl( |X_s^\epsilon - Y_s^\epsilon|^2 + \bigl| f\bigl(s, Y^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr) \bigr|^2 \Bigr)\,ds \\
&\le p\epsilon\, E \sup_{0 \le t \le T} \int_0^t \Bigl( \frac{2p-2}{2p} |X_s^\epsilon - Y_s^\epsilon|^{2p} + \frac{1}{p} \bigl| f\bigl(s, Y^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr) \bigr|^2 \Bigr)\,ds \\
&\le \epsilon \bigl( (p-1)C_T^{(p,x)} + 1 \bigr)\, E \sup_{0 \le t \le T} \biggl[ t \cdot \frac{1}{t} \int_0^t \bigl| f\bigl(s, Y^\epsilon(s)\bigr) - \bar f\bigl(Y^\epsilon(s)\bigr) \bigr|^2\,ds \biggr] + p\epsilon \int_0^t E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p}\,ds.
\end{aligned}
$$
Taking condition H2, Lemma 2.2, and the Young inequality into account, we have
$$
\begin{aligned}
I_{12} &\le \epsilon \bigl( (p-1)C_T^{(p,x)} + 1 \bigr) \sup_{0 \le t \le T} \Bigl[ t\, \varphi_1(t) \Bigl( 1 + E \sup_{0 \le s \le T} |Y_s^\epsilon|^{2p} \Bigr) \Bigr] + p\epsilon \int_0^t E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p}\,ds \\
&\le p\epsilon \int_0^t E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p}\,ds + \epsilon\, C(p, T, x)\, T.
\end{aligned}
$$
Finally, we have
$$ I_1 \le T\epsilon\, C(p, T, x) + p\epsilon \int_0^t E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p}\,ds + 2p\epsilon \int_0^T E \sup_{0 \le u \le s} |X_u^\epsilon - Y_u^\epsilon|^{2p-1} \rho_{1,\eta}\bigl(|X_u^\epsilon - Y_u^\epsilon|\bigr)\,ds. $$
Estimate of $I_2$. Using the Burkholder-Davis-Gundy and Young inequalities, we have
$$
\begin{aligned}
I_2 &= 2p\sqrt{\epsilon}\, E \sup_{0 \le t \le T} \biggl| \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl\langle g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle\,dB_s \biggr| \\
&\le 8p\sqrt{\epsilon}\, E \biggl[ \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{4p-4} \bigl\langle g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr),\ X_s^\epsilon - Y_s^\epsilon \bigr\rangle^2\,ds \biggr]^{\frac{1}{2}} \\
&\le 8p\sqrt{\epsilon}\, E \biggl[ \sup_{0 \le s \le T} |X_s^\epsilon - Y_s^\epsilon|^{2p} \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl| g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds \biggr]^{\frac{1}{2}} \\
&\le 4p\sqrt{\epsilon}\, E \sup_{0 \le t \le T} |X_t^\epsilon - Y_t^\epsilon|^{2p} + 4p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl| g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds \\
&=: I_{21} + I_{22}.
\end{aligned}
$$
In the following, we estimate
$$ I_{22} = 4p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl| g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds. $$
Using conditions H1 and H3 and the Young inequality, we get
$$
\begin{aligned}
I_{22} \le {}& 8p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \Bigl( \bigl| g\bigl(s, X_s^\epsilon\bigr) - g\bigl(s, Y_s^\epsilon\bigr) \bigr|^2 + \bigl| g\bigl(s, Y_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2 \Bigr)\,ds \\
\le {}& 8p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \Bigl( \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr) + \bigl| g\bigl(s, Y_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2 \Bigr)\,ds \\
\le {}& 8p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds \\
& + 8p\sqrt{\epsilon}\, E \int_0^T \Bigl( \frac{2p-2}{2p} |X_s^\epsilon - Y_s^\epsilon|^{2p} + \frac{1}{p} \bigl| g\bigl(s, Y_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2 \Bigr)\,ds \\
\le {}& 8p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds + \sqrt{\epsilon}\, C(p, T, x)\, E \sup_{0 \le t \le T} \biggl[ t \cdot \frac{1}{t} \int_0^t \bigl| g\bigl(s, Y_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds \biggr] \\
\le {}& \sqrt{\epsilon}\, T C_2(p, T, x) + 8p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds.
\end{aligned}
$$
Combining the estimates of $I_{21}$ and $I_{22}$, we conclude that
$$ I_2 \le 4p\sqrt{\epsilon}\, E \sup_{0 \le t \le T} |X_t^\epsilon - Y_t^\epsilon|^{2p} + \sqrt{\epsilon}\, T C_2(p, T, x) + 8p\sqrt{\epsilon}\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds. $$
Estimate of $I_3$. Note that
$$ I_3 = p\epsilon\, E \sup_{0 \le t \le T} \int_0^t |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \bigl| g\bigl(s, X_s^\epsilon\bigr) - \bar g\bigl(Y_s^\epsilon\bigr) \bigr|^2\,ds. $$
Using the same estimate as for $I_{22}$, we have
$$ I_3 \le T\epsilon\, C_3(p, T, x) + 2p\epsilon\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds. $$
Estimate of $I_4$. Using the Cauchy-Schwarz inequality, the term $I_4$ takes the same form as $I_3$ with a different constant:
$$ I_4 \le T\epsilon\, C_4(p, T, x) + 4p(p-1)\epsilon\, E \int_0^T |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds. $$
Combining the estimates of $I_1$, $I_2$ and $I_3$, $I_4$, we have
$$
\begin{aligned}
E \sup_{0 \le t \le T} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le {}& T\epsilon\, C_1(p, T, x) + C_1(p)\,\epsilon \int_0^T E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p}\,dt \\
& + 2p\epsilon \int_0^T E |X_t^\epsilon - Y_t^\epsilon|^{2p-1} \rho_{1,\eta}\bigl(|X_t^\epsilon - Y_t^\epsilon|\bigr)\,dt \\
& + 4p\sqrt{\epsilon}\, E \sup_{0 \le t \le T} |X_t^\epsilon - Y_t^\epsilon|^{2p} + \sqrt{\epsilon}\, T C_2(p, T, x) \\
& + 8p\sqrt{\epsilon} \int_0^T E |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds \\
& + T\epsilon\, C_3(p, T, x) + 2p\epsilon \int_0^T E |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds \\
& + T\epsilon\, C_4(p, T, x) + 4p(p-1)\epsilon \int_0^T E |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds.
\end{aligned}
$$
Taking $4p\sqrt{\epsilon} < 1$, that is, $\epsilon < \frac{1}{16 p^2}$, we may move the term $4p\sqrt{\epsilon}\, E \sup_{0 \le t \le T} |X_t^\epsilon - Y_t^\epsilon|^{2p}$ to the left-hand side and divide by $1 - 4p\sqrt{\epsilon}$, which gives
$$
\begin{aligned}
E \sup_{0 \le t \le T} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le {}& \frac{\sqrt{\epsilon}\, T C_5(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} + \frac{C_1(p)\,\epsilon}{1 - 4p\sqrt{\epsilon}} \int_0^T E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p}\,dt \\
& + \frac{2p\epsilon}{1 - 4p\sqrt{\epsilon}} \int_0^T E |X_t^\epsilon - Y_t^\epsilon|^{2p-1} \rho_{1,\eta}\bigl(|X_t^\epsilon - Y_t^\epsilon|\bigr)\,dt \\
& + \frac{8p\sqrt{\epsilon}}{1 - 4p\sqrt{\epsilon}} \int_0^T E |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds \\
& + \frac{2p\epsilon}{1 - 4p\sqrt{\epsilon}} \int_0^T E |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds \\
& + \frac{4p(p-1)\epsilon}{1 - 4p\sqrt{\epsilon}} \int_0^T E |X_s^\epsilon - Y_s^\epsilon|^{2p-2} \rho_{2,\eta}^2\bigl(|X_s^\epsilon - Y_s^\epsilon|\bigr)\,ds,
\end{aligned}
$$
where $C_5(p, x, T, \epsilon)$ collects the constants $C_1, C_2, C_3, C_4$ from the previous step (using $\epsilon \le \sqrt{\epsilon}$).
By Lemma 2.4 and the concavity of the function $\rho_{1,\eta}$ we have
$$
\begin{aligned}
E \sup_{0 \le t \le T} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le {}& \frac{\sqrt{\epsilon}\, T C_5(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} + \frac{C_1(p)\,\epsilon}{1 - 4p\sqrt{\epsilon}} \int_0^T E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p}\,dt \\
& + \frac{\sqrt{\epsilon}\, C_6(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} \int_0^T \rho_{1,\eta}\Bigl( E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p} \Bigr)\,dt \\
\le {}& \frac{\sqrt{\epsilon}\, T C_5(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} + \frac{\sqrt{\epsilon}\, C_7(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} \int_0^T \Bigl[ E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p} + \rho_{1,\eta}\Bigl( E \sup_{0 \le s \le t} |X_s^\epsilon - Y_s^\epsilon|^{2p} \Bigr) \Bigr]\,dt.
\end{aligned}
$$
Note that, for sufficiently small $\epsilon$, we have $g(0) = \frac{\sqrt{\epsilon}\, T C_5(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} \le \eta < \frac{1}{e}$, and from Lemma 2.3 and Example 2.1 we get the following estimate:
$$ E \sup_{0 \le t \le T} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le \frac{\sqrt{\epsilon}\, T C_5(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} \exp\biggl\{ (1 - \ln \eta) \exp\biggl\{ -\frac{\sqrt{\epsilon}\, T C_7(p, x, T, \epsilon)}{1 - 4p\sqrt{\epsilon}} \biggr\} \biggr\}. $$
Choose $\alpha \in (0, \frac{1}{2})$ such that, for every $t \in [0, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})] \subseteq [0, T]$, we have
$$ E \sup_{t \in [0,\, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})]} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le C \epsilon^{\alpha}, $$
where
$$ C = C_5(p, x, T, \epsilon) \exp\bigl\{ (1 - \ln \eta) \exp\bigl\{ -\epsilon^{\alpha} C_7(p, x, T, \epsilon) \bigr\} \bigr\}. $$
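Indeed, replacing $T$ by the horizon $\epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})$ in the previous estimate gives
$$ \frac{\sqrt{\epsilon}\, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})}{1 - 4p\sqrt{\epsilon}} = \epsilon^{\alpha}, $$
which produces both the factor $\epsilon^{\alpha}$ in front of $C_5$ and the exponent $-\epsilon^{\alpha} C_7(p, x, T, \epsilon)$ inside $C$.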
Consequently, given any number $\delta > 0$, we can choose $\tilde\epsilon \in (0, \epsilon_0]$ ($\epsilon_0 = \frac{1}{16 p^2}$) such that, for each $\epsilon \in (0, \tilde\epsilon)$ and every $t \in [0, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})]$,
$$ E \sup_{t \in [0,\, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})]} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le \delta, $$
which completes the proof of the theorem. □
Using the Chebyshev-Markov inequality, we can also obtain convergence in probability.

Theorem 3.2 Suppose that conditions H1-H4 hold. Then, for a given arbitrarily small number $\theta > 0$ and for $\alpha \in (0, \frac{1}{2})$, there exists a number $\tilde\epsilon \in (0, \epsilon_0]$ ($\epsilon_0 = \frac{1}{16 p^2}$) such that, for all $\epsilon \in (0, \tilde\epsilon)$ and $p \ge 1$, we have
$$ \lim_{\epsilon \to 0} P\Bigl( \sup_{t \in [0,\, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})]} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} > \theta \Bigr) = 0. $$
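Indeed, by the Chebyshev-Markov inequality and the bound obtained in the proof of Theorem 3.1,
$$ P\Bigl( \sup_{t \in [0,\, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})]} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} > \theta \Bigr) \le \frac{1}{\theta}\, E \sup_{t \in [0,\, \epsilon^{\alpha - \frac{1}{2}}(1 - 4p\sqrt{\epsilon})]} \bigl|X_t^\epsilon - Y_t^\epsilon\bigr|^{2p} \le \frac{C \epsilon^{\alpha}}{\theta} \longrightarrow 0 \quad \text{as } \epsilon \to 0. $$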
Acknowledgements
We would like to thank the anonymous referee for careful reading of the manuscript and valuable comments and suggestions. This work was supported by NSFs of China (No. 11526196) and the Fundamental Research Funds for the Central Universities (SCUEC: CZQ17005; CZW15124).

Competing interests
The author declares that they have no competing interests.

Author's contributions
The author read and approved the final manuscript.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Received: 1 May 2017   Accepted: 11 December 2017

References
1. Duan, J, Lu, K, Schmalfuss, B: Smooth stable and unstable manifolds for stochastic evolutionary equations. J. Dyn. Differ. Equ. 16(4), 949-972 (2004)
2. Duan, J, Lu, K, Schmalfuss, B: Invariant manifolds for stochastic partial differential equations. Ann. Probab. 31(4), 2109-2135 (2003)
3. Duan, J, Wang, W: Effective Dynamics of Stochastic Partial Differential Equations. Elsevier, Amsterdam (2014)
4. Xu, Y, Duan, J, Xu, W: An averaging principle for stochastic dynamical systems with Lévy noise. Physica D 240, 1395-1401 (2011)
5. Wang, W, Roberts, AJ: Average and deviation for slow-fast stochastic differential equations. J. Differ. Equ. 253, 1265-1286 (2012)
6. Cerrai, S: Stochastic reaction-diffusion systems with multiplicative noise and non-Lipschitz reaction term. Probab. Theory Relat. Fields 125, 271-304 (2003)
7. Cerrai, S: A Khasminskii type averaging principle for stochastic reaction-diffusion equations. Ann. Appl. Probab. 19(3), 899-948 (2009)
8. Cerrai, S, Freidlin, MI: Averaging principle for a class of stochastic reaction-diffusion equations. Probab. Theory Relat. Fields 144, 137-177 (2009)
9. Fu, H, Liu, J: Strong convergence in stochastic averaging for two time-scales stochastic partial differential equations. J. Math. Anal. Appl. 384, 70-86 (2011)
10. Liu, D: Strong convergence of principle of averaging for multiscale stochastic dynamical systems. Commun. Math. Sci. 8(4), 999-1020 (2010)
11. Stoyanov, IM, Bainov, DD: The averaging method for a class of stochastic differential equations. Ukr. Math. J. 26(2), 186-194 (1974)
12. Pei, B, Xu, Y: Lp (p > 2)-strong convergence in averaging principle for two time-scales stochastic evolution equations driven by Lévy process. arXiv:1511.03438v3
13. Khasminskii, RZ: On the principle of averaging the Itô stochastic differential equations. Kibernetika 4, 260-279 (1968)
14. Xu, J, Liu, J: An averaging principle for multivalued stochastic differential equations. Stoch. Anal. Appl. 32(6), 962-974 (2014)
15. Tian, L, Lei, D: The averaging method for stochastic differential delay equations under non-Lipschitz conditions. Adv. Differ. Equ. 2013, 38 (2013)
16. Xu, S: Multivalued stochastic differential equations with non-Lipschitz coefficients. Chin. Ann. Math. 30B(3), 321-332 (2009)
17. Malinowski, MT: The narrowing set-valued stochastic integral equations. Dyn. Syst. Appl. 24(4), 399-419 (2015)
18. Malinowski, MT: Fuzzy and set-valued stochastic differential equations with local Lipschitz condition. IEEE Trans. Fuzzy Syst. 23(5), 1891-1898 (2015)
19. Malinowski, MT: Set-valued and fuzzy stochastic differential equations in M-type 2 Banach spaces. Tohoku Math. J. 67(3), 349-381 (2015)
20. Malinowski, MT, Agarwal, RP: On solutions set of a multivalued stochastic differential equation. Czechoslov. Math. J. 67(1), 11-28 (2017)
21. Cépa, E: Équations différentielles stochastiques multivoques. In: Séminaire de Probabilités XXIX. Lecture Notes in Math., vol. 1631, pp. 86-107. Springer, Berlin (1995)
22. Ren, J, Zhang, X: Stochastic flows for SDEs with non-Lipschitz coefficients. Bull. Sci. Math. 127(8), 739-754 (2003)
23. Chow, PL: Stochastic Partial Differential Equations. Chapman Hall/CRC, New York (2007)