International Journal of Machine Learning and Cybernetics https://doi.org/10.1007/s13042-017-0775-4
ORIGINAL ARTICLE
Stability for a class of generalized reaction–diffusion uncertain stochastic neural networks with mixed delays

Tianshi Lv · Qintao Gan · Feng Xiao
Department of Basic Science, Shijiazhuang Mechanical Engineering College, Shijiazhuang 050003, People's Republic of China

Received: 24 June 2016 / Accepted: 26 December 2017
© Springer-Verlag GmbH Germany, part of Springer Nature 2018
Abstract
In this paper, the global robust asymptotic stability problem for a class of generalized reaction–diffusion uncertain stochastic neural networks with mixed delays is investigated under Dirichlet boundary conditions and Neumann boundary conditions, respectively. The proposed generalized neural networks model includes reaction–diffusion local field neural networks and reaction–diffusion static neural networks as its special cases. By using stochastic analysis approaches and constructing a suitable Lyapunov–Krasovskii functional, some simple and useful criteria for global robust asymptotic stability of the neural networks are obtained. According to the theoretical results, the influences of diffusion coefficients, diffusion spaces, stochastic perturbation, and uncertain parameters are analyzed. Finally, numerical examples are provided to show the feasibility and efficiency of the proposed methods, and by choosing different diffusion coefficients and diffusion spaces, different stability states can be achieved.

Keywords Global robust asymptotic stability · Generalized neural networks · Reaction–diffusion · Mixed delays · Dirichlet boundary conditions · Neumann boundary conditions
1 Introduction

In recent years, neural networks have been extensively investigated due to their fruitful applications in numerous areas such as pattern recognition, signal processing, quadratic optimization, automatic control, and so on. Such applications heavily depend on the dynamical behaviors of the neural networks. Therefore, the study of dynamical behaviors is a necessary step in the practical design of neural networks (see [1–7]). As mentioned in [8], neural networks can usually be divided into local field neural networks models and static neural networks models, depending on whether the neuron states (the external states of neurons) or the local field states (the internal states of neurons) are taken as basic variables. By now, the study of local field neural networks has achieved significant results. Compared with the rich results for local field neural networks, results for static neural networks are much more scarce. Although the results on local field neural networks can be applied to static neural networks under some assumptions, these assumptions cannot always be satisfied. In order to solve this problem, Gan and Liu [8] investigated a class of generalized neural networks models containing both local field neural networks and static neural networks as special cases. Their results provide a unified frame suitable for both local field neural networks and static neural networks.

In practice, diffusion effects cannot be avoided: when electrons move in nonuniform electromagnetic fields, the diffusion phenomena cannot be ignored in electric circuits and neural networks. Therefore, it is desirable to consider the activations of neurons varying in space as well as in time. The stability of reaction–diffusion neural networks was investigated in [9–12], where some diffusion-independent criteria were presented. These criteria cannot reflect the influence of diffusion coefficients and diffusion spaces on stability, and therefore they are generally more conservative than diffusion-dependent criteria. On the other hand, practical systems almost always present some uncertainties, because it is very difficult to obtain an exact mathematical model due to environmental noise, uncertain or slowly varying parameters, and so on. Therefore,
the problems of robust stability and stabilization for uncertain time-delay systems have been of great importance, and considerable effort has been devoted to their resolution during the last decades. In addition, stochastic systems have received much attention, since stochastic modeling has come to play an important role in many branches of science and engineering, and much effort has been devoted to extending fundamental results for deterministic systems to the stochastic setting (see [11] and the references therein). At present, a great number of robust stability criteria for uncertain stochastic neural networks have been proposed. For example, [12] investigated the exponential stability of stochastic Cohen–Grossberg BAM neural networks with discrete and distributed time-varying delays, [13] studied a new condition for robust stability of uncertain neural networks with time delays, and [14] discussed robust stability analysis for discrete-time uncertain neural networks with leakage time-varying delay. So far, the robust stability problem for uncertain stochastic static neural networks, and especially for reaction–diffusion uncertain stochastic static neural networks, has not been investigated in the literature. Therefore, it is interesting to study this problem both in theory and in applications.

In this paper, we consider the problem of global robust asymptotic stability for a class of generalized reaction–diffusion uncertain stochastic neural networks with mixed delays, which includes reaction–diffusion local field neural networks and reaction–diffusion static neural networks as its special cases.

The organization of this paper is as follows: in the next section, the problem description and preliminaries are presented; in Sect. 3, by using stochastic analysis approaches and constructing a suitable Lyapunov–Krasovskii functional, we investigate the robust stability of the considered model under Dirichlet boundary conditions and Neumann boundary conditions, respectively, in terms of the Euclidean norm; numerical simulations are given in Sect. 4 to demonstrate the effectiveness and feasibility of the results. Finally, in Sect. 5, conclusions are drawn.

Notations. Throughout this paper, for real symmetric matrices $X$ and $Y$, the notation $\lambda_{\max}(X)$ denotes the maximum eigenvalue of $X$, and $X \geq Y$ (respectively, $X > Y$) means that the matrix $X - Y$ is positive semidefinite (respectively, positive definite); the superscripts $T$ and $-1$ stand for matrix transposition and matrix inversion, respectively; $\mathbb{R}^{n}$ and $\mathbb{R}^{n \times n}$ represent the $n$-dimensional Euclidean space and the set of all $n \times n$ real matrices, respectively; $e_{i}$, $i = 1, 2, \ldots, n$, represents the $n$-dimensional column vector with the $i$th element equal to 1 and 0 elsewhere; $\mathcal{C}^{2,1}(\mathbb{R}^{+} \times \mathbb{R}^{n}; \mathbb{R}^{+})$ denotes the family of all nonnegative functions $V(t, y(t,x))$ on $\mathbb{R}^{+} \times \mathbb{R}^{n}$ which are continuously twice differentiable in $y(t,x)$ and once differentiable in $t$; $(\Omega, \mathcal{F}, \mathcal{P})$ is a complete probability space, where $\Omega$ is the sample space, $\mathcal{F}$ is the $\sigma$-algebra of subsets of the sample space and $\mathcal{P}$ is the probability measure on $\mathcal{F}$; $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $\mathcal{P}$; the shorthand $\mathrm{diag}\{\cdots\}$ denotes a block diagonal matrix; $\|\cdot\|$ stands for the Euclidean norm; $I$ is the identity matrix of appropriate dimension; the symmetric terms in a symmetric matrix are denoted by $*$.

2 Problem formulation and preliminaries

In this paper, we consider the following generalized reaction–diffusion uncertain stochastic neural networks with mixed delays:
$$
\begin{aligned}
dy(t,x) ={} & \Bigg[ \sum_{k=1}^{l} \frac{\partial}{\partial x_{k}}\left(D_{k}\frac{\partial y(t,x)}{\partial x_{k}}\right) - A(t)y(t,x) + w_{1}(t)f\big(w_{0}y(t,x)\big) \\
& + w_{2}(t)f\big(w_{0}y(t-\tau,x)\big) + w_{3}(t)f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right) \Bigg] dt \\
& + \sigma\big(t, y(t,x), y(t-\tau,x)\big)\,d\omega(t),
\end{aligned}
\tag{1}
$$
where $D_{k} = \mathrm{diag}(D_{1k}, D_{2k}, \ldots, D_{nk})$, $D_{ik} \geq 0$ $(i = 1, 2, \ldots, n)$, represents the transmission diffusion operator; $y(t,x) = \big(y_{1}(t,x), y_{2}(t,x), \ldots, y_{n}(t,x)\big)^{T}$ is the state variable of the neural network; $f(y(t,x)) = \big(f_{1}(y_{1}(t,x)), f_{2}(y_{2}(t,x)), \ldots, f_{n}(y_{n}(t,x))\big)^{T}$, where $f_{i}(y_{i}(t,x))$ denotes the neuron activation function; $A(t) = \mathrm{diag}\big(a_{1}(t), a_{2}(t), \ldots, a_{n}(t)\big)$, $a_{i}(t) > 0$, stands for the neuron charging time constant; $w_{l}(t) = \big(w_{l}^{ij}(t)\big)_{n \times n}$ $(l = 1, 2)$, $w_{3}(t) = \big[w_{3}^{11}(t)e_{1} \cdots w_{3}^{1n}(t)e_{1} \cdots w_{3}^{n1}(t)e_{n} \cdots w_{3}^{nn}(t)e_{n}\big]_{n \times n^{2}}$, and $w_{l}^{ij}(t)$ $(l = 1, 2)$ and $w_{3}^{ij}$, $1 \leq i, j \leq n$, are the connection weights of the neurons; $w_{0} = \big(w_{0}^{ij}\big)_{n \times n}$, where $w_{0}^{ij}$ is a constant coefficient; $K(\cdot) = \big[K_{11}(\cdot)e_{1} \cdots K_{1n}(\cdot)e_{n} \cdots K_{n1}(\cdot)e_{1} \cdots K_{nn}(\cdot)e_{n}\big]^{T}_{n \times n^{2}}$, where the delay kernel $K_{ij}(\cdot)$ is a real-valued nonnegative continuous function satisfying $\int_{0}^{\infty} K_{ij}(s)\,ds = 1$; $\omega(t) = \big(\omega_{1}(t), \omega_{2}(t), \ldots, \omega_{n}(t)\big)^{T}$ is an $n$-dimensional Brownian motion defined on a complete probability space $(\Omega, \mathcal{F}, \mathcal{P})$ with a natural filtration $\{\mathcal{F}_{t}\}_{t \geq 0}$ and $\mathbb{E}\{d\omega(t)\} = 0$; $\sigma(\cdot) = \big(\sigma_{1}(\cdot), \sigma_{2}(\cdot), \ldots, \sigma_{n}(\cdot)\big)^{T}$ is the noise intensity matrix; $x = (x_{1}, x_{2}, \ldots, x_{l})^{T} \in \Omega \subset \mathbb{R}^{l}$, where $\Omega = \big\{x = (x_{1}, x_{2}, \ldots, x_{l})^{T} \,\big|\, \varepsilon_{k} \leq x_{k} \leq \eta_{k},\ k = 1, 2, \ldots, l\big\}$ is a bounded compact set with smooth boundary $\partial\Omega$ and $\operatorname{mes}\Omega > 0$; and $\tau > 0$ is a constant transmission delay.
Remark 2.1 It is easy to see that

1. if we take $w_{1}(t) = w_{2}(t) = w_{3}(t) = I$, system (1) reduces to a class of reaction–diffusion static neural networks models;
2. if we take $w_{0} = I$, system (1) reduces to a class of reaction–diffusion local field neural networks models.

Therefore, (1) can be called a generalized reaction–diffusion neural networks model, based on which stability analysis for both the reaction–diffusion local field neural networks model and the reaction–diffusion static neural networks model can be carried out in a unified frame.

In this paper, we consider two types of boundary conditions as follows:

1. Dirichlet boundary conditions
$$
y(t,x)\big|_{\partial\Omega} = 0, \qquad t > 0,\ x \in \partial\Omega,
\tag{2}
$$

2. Neumann boundary conditions
$$
\frac{\partial y(t,x)}{\partial \bar{n}} = \left(\frac{\partial y(t,x)}{\partial x_{1}}, \frac{\partial y(t,x)}{\partial x_{2}}, \ldots, \frac{\partial y(t,x)}{\partial x_{l}}\right)^{T} = 0, \qquad (t,x) \in [-\tau, +\infty) \times \partial\Omega.
\tag{3}
$$

The initial value of system (1) is
$$
y(s,x) = \varphi(s,x), \qquad (s,x) \in [-\tau, 0] \times \Omega,
\tag{4}
$$
where $\varphi(s,x) = \big(\varphi_{1}(s,x), \varphi_{2}(s,x), \ldots, \varphi_{n}(s,x)\big)^{T}$ is bounded and continuous on $[-\tau, 0] \times \Omega$.

The following assumptions, lemmas and definition are essential for the proofs in what follows.

Assumption 2.1 $\sigma : \mathbb{R}^{+} \times \mathbb{R}^{n} \times \mathbb{R}^{n} \to \mathbb{R}^{n}$ satisfies the linear growth condition, is locally Lipschitz continuous, and satisfies
$$
\operatorname{trace}\big[\sigma^{T}(t, y(t,x), y(t-\tau,x))\,\sigma(t, y(t,x), y(t-\tau,x))\big] \leq \|M_{1}y(t,x)\|^{2} + \|M_{2}y(t-\tau,x)\|^{2},
\tag{5}
$$
where $M_{1}$ and $M_{2}$ are constant matrices.

Assumption 2.2 The uncertain matrices $A(t)$, $w_{1}(t)$, $w_{2}(t)$ and $w_{3}(t)$ satisfy
$$
A(t) = A + \Delta A(t), \quad w_{1}(t) = w_{1} + \Delta w_{1}(t), \quad w_{2}(t) = w_{2} + \Delta w_{2}(t), \quad w_{3}(t) = w_{3} + \Delta w_{3}(t),
\tag{6}
$$
where $A$, $w_{1}$, $w_{2}$, $w_{3}$ are constant matrices, and $\Delta A(t)$, $\Delta w_{1}(t)$, $\Delta w_{2}(t)$ and $\Delta w_{3}(t)$ are time-varying uncertain matrices with
$$
\big[\Delta A(t), \Delta w_{1}(t), \Delta w_{2}(t), \Delta w_{3}(t)\big] = \Lambda F(t)\big[E_{1}, E_{2}, E_{3}, E_{4}\big],
\tag{7}
$$
where $\Lambda$ and $E_{i}$ $(i = 1, 2, 3, 4)$ are known real matrices, and $F(t)$ is a time-varying uncertain matrix satisfying
$$
F^{T}(t)F(t) \leq I.
\tag{8}
$$

Assumption 2.3 The activation function $f(y)$ is bounded and satisfies the growth condition
$$
\|f(y)\| \leq \|Gy\|,
\tag{9}
$$
where $G \in \mathbb{R}^{n \times n}$ is a known constant matrix.

Lemma 2.1 (see [15]) Let $\Omega = \big\{x = (x_{1}, x_{2}, \ldots, x_{l})^{T} \,\big|\, \varepsilon_{k} \leq x_{k} \leq \eta_{k},\ k = 1, 2, \ldots, l\big\}$ be a bounded compact set with smooth boundary $\partial\Omega$ and $\operatorname{mes}\Omega > 0$ in $\mathbb{R}^{l}$, and let $\varphi(x)$ be a real-valued function belonging to $\mathcal{C}^{1}(\Omega)$ which vanishes on the boundary $\partial\Omega$ of $\Omega$, i.e., $\varphi(x)|_{\partial\Omega} = 0$. Then
$$
\int_{\Omega} \varphi^{2}(x)\,dx \leq \left(\frac{\eta_{k} - \varepsilon_{k}}{\pi}\right)^{2} \int_{\Omega} \left(\frac{\partial\varphi}{\partial x_{k}}\right)^{2} dx.
\tag{10}
$$

Lemma 2.2 (see [16]) Assume that $\Omega_{1}$, $\Omega_{2}$ and $\Omega_{3}$ are real matrices with appropriate dimensions and $\Omega_{3} > 0$. Then the following inequality holds
$$
2x^{T}\Omega_{1}^{T}\Omega_{2}y \leq x^{T}\Omega_{1}^{T}\Omega_{3}\Omega_{1}x + y^{T}\Omega_{2}^{T}\Omega_{3}^{-1}\Omega_{2}y,
\tag{11}
$$
where $x$ and $y$ are vectors with appropriate dimensions.

Lemma 2.3 (Cauchy–Schwartz inequality, see [17]) Assume that $f(x)$ and $g(x) : \Theta \to \mathbb{R}$ are continuous functions. Then the following inequality holds
$$
\left(\int_{\Theta} |f(x)g(x)|\,dx\right)^{2} \leq \int_{\Theta} |f(x)|^{2}\,dx \int_{\Theta} |g(x)|^{2}\,dx.
\tag{12}
$$

Lemma 2.4 (see [16]) Assume that $A$, $D$, $E$ and $F$ are real matrices with appropriate dimensions and $F^{T}F \leq I$. Then for any symmetric matrix $P > 0$ and scalar $\varepsilon > 0$, the following two conclusions hold:

1. If $\varepsilon I - EPE^{T} > 0$, then
$$
(A + DFE)P(A + DFE)^{T} \leq APA^{T} + APE^{T}(\varepsilon I - EPE^{T})^{-1}EPA^{T} + \varepsilon DD^{T},
\tag{13}
$$

2. If $P - \varepsilon DD^{T} > 0$, then
$$
(A + DFE)^{T}P^{-1}(A + DFE) \leq A^{T}(P - \varepsilon DD^{T})^{-1}A + \varepsilon^{-1}E^{T}E.
\tag{14}
$$

Lemma 2.5 (see [16]) Assume that $F^{T}F \leq I$ and let $\varepsilon > 0$ be a scalar. Then the following inequality holds
$$
DFE + E^{T}F^{T}D^{T} \leq \varepsilon DD^{T} + \varepsilon^{-1}E^{T}E,
\tag{15}
$$
where $D$, $E$ and $F$ are real matrices with appropriate dimensions.
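Lemmas 2.2, 2.4 and 2.5 are the algebraic tools used below to absorb the uncertainty blocks $\Lambda F(t)E_{i}$. As a quick illustration, the following sketch checks Lemma 2.5 numerically for randomly generated matrices; the matrices and the scalar $\varepsilon$ are arbitrary test values, not quantities taken from this paper, and the difference between the right- and left-hand sides should be positive semidefinite up to rounding.

```python
# Numerical sanity check of Lemma 2.5: for F with F^T F <= I and eps > 0,
#   D F E + E^T F^T D^T  <=  eps*D*D^T + (1/eps)*E^T*E
# in the positive semidefinite ordering.  Random test matrices only.
import numpy as np

rng = np.random.default_rng(0)
n = 4
D = rng.standard_normal((n, n))
E = rng.standard_normal((n, n))
F = rng.standard_normal((n, n))
F /= np.linalg.norm(F, 2)          # rescale so that F^T F <= I
eps = 0.7

lhs = D @ F @ E + E.T @ F.T @ D.T
rhs = eps * D @ D.T + (1 / eps) * E.T @ E
gap = rhs - lhs                    # should be positive semidefinite
print("smallest eigenvalue of (rhs - lhs):",
      np.linalg.eigvalsh((gap + gap.T) / 2).min())   # nonnegative up to rounding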
Lemma 2.6 (Poincaré integral inequality, see [18]) Let $\Omega$ be a bounded compact set with smooth boundary $\partial\Omega$ and $\operatorname{mes}\Omega > 0$ in $\mathbb{R}^{l}$, and let $\varphi(x)$ be a real-valued function belonging to $\mathcal{C}^{1}(\Omega)$ with $\partial\varphi(x)/\partial l\,|_{\partial\Omega} = 0$. Then
$$
\int_{\Omega} \varphi^{2}(x)\,dx \leq \frac{1}{\lambda_{1}} \int_{\Omega} \sum_{m=1}^{l} \left(\frac{\partial\varphi}{\partial x_{m}}\right)^{2} dx,
\tag{16}
$$
where $\lambda_{1}$ is the smallest positive eigenvalue of the Neumann boundary problem
$$
\begin{cases}
-\Delta\Psi(x) = \lambda\Psi(x), & x \in \Omega, \\[2pt]
\dfrac{\partial\Psi(x)}{\partial l} = 0, & x \in \partial\Omega.
\end{cases}
\tag{17}
$$

Remark 2.2 Lemma 2.6 also holds when $\Omega$ is bounded, or at least bounded in one direction. The smallest eigenvalue $\lambda_{1}$ of problem (17) is determined only by $\Omega$. For example, if $\Omega = \big\{x = (x_{1}, x_{2}, \ldots, x_{l})^{T} \,\big|\, \varepsilon_{k} \leq x_{k} \leq \eta_{k},\ k = 1, 2, \ldots, l\big\}$, then
$$
\lambda_{1} = \min\left\{\left(\frac{\pi}{\eta_{1} - \varepsilon_{1}}\right)^{2}, \left(\frac{\pi}{\eta_{2} - \varepsilon_{2}}\right)^{2}, \ldots, \left(\frac{\pi}{\eta_{l} - \varepsilon_{l}}\right)^{2}\right\}.
$$

Now, we give the following definition of robust stability for system (1).

Definition 2.1 The trivial solution of the generalized reaction–diffusion neural networks model (1) is said to be globally robustly asymptotically stable in the mean square for all admissible uncertainties satisfying (7) and (8) if
$$
\lim_{t \to +\infty} \mathbb{E}\big\{\|y(t,x)\|^{2}\big\} = 0.
\tag{18}
$$

3 Main results

Theorem 3.1 The generalized reaction–diffusion neural networks model (1) under Dirichlet boundary conditions is globally robustly asymptotically stable in the mean square if there exist a diagonal positive definite matrix $P$, a diagonal matrix $R$, a symmetric positive definite matrix $Q$ and positive scalars $\mu$, $\varepsilon$, $\varepsilon_{1}$, $\varepsilon_{2}$, $\varepsilon_{3}$, $\alpha$ and $\beta$ such that the following linear matrix inequalities hold:
$$
P < \mu I, \qquad
\Xi = \begin{bmatrix}
\Xi_{11} & 0 & 0 & 0 \\
0 & -Q + \mu M_{2}^{T}M_{2} + \beta G^{T}G & 0 & 0 \\
0 & 0 & \Xi_{33} & 0 \\
0 & 0 & 0 & \Xi_{44}
\end{bmatrix} < 0,
\tag{19}
$$
where
$$
\begin{aligned}
\Xi_{11} ={} & R^{T}R + 2PP^{T} - 2PD_{\pi} + Q + \mu M_{1}^{T}M_{1} - PA - A^{T}P + \varepsilon P\Lambda\Lambda^{T}P^{T} + \varepsilon^{-1}E_{1}^{T}E_{1} \\
& + P\Big(w_{3}w_{3}^{T} + w_{3}E_{4}^{T}\big(\varepsilon_{3}I - E_{4}E_{4}^{T}\big)^{-1}E_{4}w_{3}^{T} + \varepsilon_{3}^{-1}\Lambda\Lambda^{T}\Big)P^{T} + \alpha G^{T}G, \\
\Xi_{33} ={} & w_{1}^{T}\big(I - \varepsilon_{1}\Lambda\Lambda^{T}\big)^{-1}w_{1} + \varepsilon_{1}^{-1}E_{2}^{T}E_{2} - \alpha I, \\
\Xi_{44} ={} & w_{2}^{T}\big(I - \varepsilon_{2}\Lambda\Lambda^{T}\big)^{-1}w_{2} + \varepsilon_{2}^{-1}E_{3}^{T}E_{3} - \beta I, \\
D_{\pi} ={} & \mathrm{diag}\left(\sum_{k=1}^{l}\left(\frac{\pi}{\eta_{k} - \varepsilon_{k}}\right)^{2}D_{1k},\ \sum_{k=1}^{l}\left(\frac{\pi}{\eta_{k} - \varepsilon_{k}}\right)^{2}D_{2k},\ \ldots,\ \sum_{k=1}^{l}\left(\frac{\pi}{\eta_{k} - \varepsilon_{k}}\right)^{2}D_{nk}\right),
\end{aligned}
$$
and $D_{ik}$ $(i = 1, 2, \ldots, n)$ represents a diffusion coefficient, while $\eta_{k} - \varepsilon_{k}$ represents the diffusion space.
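Although the conditions of Theorem 3.1 are stated as matrix inequalities, for fixed scalars $\mu$, $\varepsilon$, $\varepsilon_{1}$, $\varepsilon_{2}$, $\varepsilon_{3}$, $\alpha$, $\beta$ and candidate matrices $P$, $Q$, $R$ they can be checked directly by assembling the diagonal blocks of $\Xi$ and testing their eigenvalues. The sketch below does exactly that. It is only a feasibility check for given candidates, not a synthesis procedure; in the usage example, $A$, $w_{1}$, $w_{2}$, $\Lambda$, $E_{1}$–$E_{3}$, $G$ and $D_{\pi}$ are taken from the example in Sect. 4, while all remaining values are hypothetical placeholders.

```python
# Numerical check of the conditions of Theorem 3.1 for given candidate
# matrices and scalars.  Sketch only: it does not search for feasible
# P, Q, R, mu, eps_i, alpha, beta.
import numpy as np

def is_negative_definite(M):
    """True if the symmetric part of M is negative definite."""
    return np.max(np.linalg.eigvalsh((M + M.T) / 2)) < 0

def theorem_3_1_blocks(P, Q, R, A, w1, w2, w3, Lam, E1, E2, E3, E4,
                       M1, M2, G, D_pi, mu, eps, e1, e2, e3, alpha, beta):
    n = A.shape[0]
    I = np.eye(n)
    inv = np.linalg.inv
    Xi11 = (R.T @ R + 2 * P @ P.T - 2 * P @ D_pi + Q + mu * M1.T @ M1
            - P @ A - A.T @ P + eps * P @ Lam @ Lam.T @ P.T
            + (1 / eps) * E1.T @ E1
            + P @ (w3 @ w3.T
                   + w3 @ E4.T @ inv(e3 * np.eye(E4.shape[0]) - E4 @ E4.T) @ E4 @ w3.T
                   + (1 / e3) * Lam @ Lam.T) @ P.T
            + alpha * G.T @ G)
    Xi22 = -Q + mu * M2.T @ M2 + beta * G.T @ G
    Xi33 = w1.T @ inv(I - e1 * Lam @ Lam.T) @ w1 + (1 / e1) * E2.T @ E2 - alpha * I
    Xi44 = w2.T @ inv(I - e2 * Lam @ Lam.T) @ w2 + (1 / e2) * E3.T @ E3 - beta * I
    return Xi11, Xi22, Xi33, Xi44

if __name__ == "__main__":
    n = 2
    I2 = np.eye(n)
    # System data from the example in Sect. 4 (it has no w3/E4 term, so zeros
    # are used); everything else below is a hypothetical candidate.
    A = np.diag([3.0, 4.0])
    w1 = np.array([[-0.6, 0.8], [1.2, -0.2]])
    w2 = np.array([[1.3, 0.6], [0.5, 1.7]])
    w3, E4 = np.zeros((n, n)), np.zeros((n, n))
    Lam, G = 0.3 * I2, I2
    E1 = E2 = E3 = 0.1 * I2
    D_pi = (np.pi / 1.0) ** 2 * 0.1 * I2      # D = 0.1 I on 0 <= x <= 1
    P, Q, R = 0.1 * I2, 0.5 * I2, 0.1 * I2    # candidate matrices (placeholders)
    M1, M2 = 0.3 * I2, 0.6 * I2               # placeholder noise bounds
    mu, eps, e1, e2, e3 = 0.2, 1.0, 1.0, 1.0, 1.0
    alpha, beta = 2.0, 4.0
    blocks = theorem_3_1_blocks(P, Q, R, A, w1, w2, w3, Lam, E1, E2, E3, E4,
                                M1, M2, G, D_pi, mu, eps, e1, e2, e3, alpha, beta)
    ok = all(is_negative_definite(B) for B in blocks) and \
         is_negative_definite(P - mu * I2)
    print("candidate satisfies P < mu*I and Xi < 0:", ok)
```

Because $\Xi_{11}$ contains products such as $2PP^{T}$, the conditions are not jointly linear in all unknowns; a semidefinite-programming formulation would require a further transformation (for example a Schur complement), so the sketch above only verifies a given candidate rather than synthesizing one.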
Proof Define a Lyapunov–Krasovskii functional $V(t, y(t,x)) \in \mathcal{C}^{2,1}(\mathbb{R}^{+} \times \mathbb{R}^{n}; \mathbb{R}^{+})$ for system (1) as
$$
V(t, y(t,x)) = \int_{\Omega} \big\{V_{1}(t, y(t,x)) + V_{2}(t, y(t,x)) + V_{3}(t, y(t,x))\big\}\,dx,
\tag{20}
$$
where
$$
V_{1}(t, y(t,x)) = y^{T}(t,x)Py(t,x),
\tag{21}
$$
$$
V_{2}(t, y(t,x)) = \int_{t-\tau}^{t} y^{T}(s,x)Qy(s,x)\,ds,
\tag{22}
$$
$$
V_{3}(t, y(t,x)) = \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{0}^{\infty}\!\!\int_{t-s}^{t} K_{ij}(s)\,y_{j}^{2}(\theta,x)\,d\theta\,ds.
\tag{23}
$$

By the Itô differential formula, calculating the stochastic derivative of $V(t, y(t,x))$ along the trajectories of (1), we obtain
$$
dV(t, y(t,x)) = \mathcal{L}V(t, y(t,x))\,dt + \int_{\Omega} \sum_{i=1}^{3} \frac{\partial V_{i}(t, y(t,x))}{\partial y}\,\sigma(y(t,x), y(t-\tau,x))\,d\omega(t)\,dx,
\tag{24}
$$
where
$$
\mathcal{L}V(t, y(t,x)) = \int_{\Omega} \big\{\mathcal{L}V_{1}(t, y(t,x)) + \mathcal{L}V_{2}(t, y(t,x)) + \mathcal{L}V_{3}(t, y(t,x))\big\}\,dx,
\tag{25}
$$
$$
\mathcal{L}V_{i}(t, y(t,x)) = \frac{\partial V_{i}(t, y(t,x))}{\partial t} + \frac{\partial V_{i}(t, y(t,x))}{\partial y}\,\Phi + \frac{1}{2}\operatorname{trace}\left[\sigma^{T}(y(t,x), y(t-\tau,x))\,\frac{\partial^{2} V_{i}(t, y(t,x))}{\partial y^{2}}\,\sigma(y(t,x), y(t-\tau,x))\right], \quad i = 1, 2, 3,
\tag{26}
$$
with
$$
\Phi = \sum_{k=1}^{l} \frac{\partial}{\partial x_{k}}\left(D_{k}\frac{\partial y(t,x)}{\partial x_{k}}\right) - A(t)y(t,x) + w_{1}(t)f\big(w_{0}y(t,x)\big) + w_{2}(t)f\big(w_{0}y(t-\tau,x)\big) + w_{3}(t)f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right).
$$

After some computations, we get
$$
\sum_{i=1}^{3} \frac{\partial V_{i}(t, y(t,x))}{\partial y} = 2y^{T}(t,x)P,
\tag{27}
$$
$$
\sum_{i=1}^{3} \frac{\partial^{2} V_{i}(t, y(t,x))}{\partial y^{2}} = 2P,
\tag{28}
$$
$$
\begin{aligned}
\sum_{i=1}^{3} \frac{\partial V_{i}(t, y(t,x))}{\partial t} ={} & y^{T}(t,x)Qy(t,x) - y^{T}(t-\tau,x)Qy(t-\tau,x) \\
& + \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t,x)\,ds - \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t-s,x)\,ds
\end{aligned}
\tag{29}
$$
$$
\leq y^{T}(t,x)Qy(t,x) - y^{T}(t-\tau,x)Qy(t-\tau,x) + y^{T}(t,x)R^{T}Ry(t,x) - \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t-s,x)\,ds,
\tag{30}
$$
where $R = \mathrm{diag}(r_{1}, r_{2}, \ldots, r_{n})$. Substituting the right-hand terms of (26)–(30) into $\mathcal{L}V(t, y(t,x))$ in (25), we obtain
$$
\begin{aligned}
\mathcal{L}V(t, y(t,x)) \leq{} & \int_{\Omega} y^{T}(t,x)R^{T}Ry(t,x)\,dx + \int_{\Omega} y^{T}(t,x)Qy(t,x)\,dx - \int_{\Omega} y^{T}(t-\tau,x)Qy(t-\tau,x)\,dx \\
& - \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{\Omega}\int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t-s,x)\,ds\,dx + 2\int_{\Omega} y^{T}(t,x)P\Phi\,dx \\
& + \int_{\Omega} \operatorname{trace}\big[\sigma^{T}(y(t,x), y(t-\tau,x))\,P\,\sigma(y(t,x), y(t-\tau,x))\big]\,dx.
\end{aligned}
\tag{31}
$$

From the Dirichlet boundary conditions (2), the Green formula [18] and Lemma 2.1, we obtain
$$
\int_{\Omega} y^{T}(t,x)P \sum_{k=1}^{l} \frac{\partial}{\partial x_{k}}\left(D_{k}\frac{\partial y(t,x)}{\partial x_{k}}\right)dx \leq -\int_{\Omega} y^{T}(t,x)PD_{\pi}y(t,x)\,dx.
\tag{32}
$$

According to Assumption 2.1, we get
$$
\begin{aligned}
\int_{\Omega} \operatorname{trace}\big[\sigma^{T}(y(t,x), y(t-\tau,x))\,P\,\sigma(y(t,x), y(t-\tau,x))\big]\,dx
&\leq \int_{\Omega} \lambda_{\max}(P)\,\|M_{1}y(t,x)\|^{2}\,dx + \int_{\Omega} \lambda_{\max}(P)\,\|M_{2}y(t-\tau,x)\|^{2}\,dx \\
&\leq \int_{\Omega} \mu\, y^{T}(t,x)M_{1}^{T}M_{1}y(t,x)\,dx + \int_{\Omega} \mu\, y^{T}(t-\tau,x)M_{2}^{T}M_{2}y(t-\tau,x)\,dx.
\end{aligned}
\tag{33}
$$

Therefore
$$
\begin{aligned}
\mathcal{L}V(t, y(t,x)) \leq{} & \int_{\Omega} y^{T}(t,x)R^{T}Ry(t,x)\,dx - 2\int_{\Omega} y^{T}(t,x)PD_{\pi}y(t,x)\,dx - 2\int_{\Omega} y^{T}(t,x)PA(t)y(t,x)\,dx \\
& + \int_{\Omega} y^{T}(t,x)Qy(t,x)\,dx - \int_{\Omega} y^{T}(t-\tau,x)Qy(t-\tau,x)\,dx \\
& + \int_{\Omega} \mu\,y^{T}(t,x)M_{1}^{T}M_{1}y(t,x)\,dx + \int_{\Omega} \mu\,y^{T}(t-\tau,x)M_{2}^{T}M_{2}y(t-\tau,x)\,dx \\
& + 2\int_{\Omega} y^{T}(t,x)Pw_{1}(t)f\big(w_{0}y(t,x)\big)\,dx + 2\int_{\Omega} y^{T}(t,x)Pw_{2}(t)f\big(w_{0}y(t-\tau,x)\big)\,dx \\
& + 2\int_{\Omega} y^{T}(t,x)Pw_{3}(t)f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right)dx \\
& - \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{\Omega}\int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t-s,x)\,ds\,dx.
\end{aligned}
\tag{34}
$$

From Lemma 2.2, we know that
$$
2\int_{\Omega} y^{T}(t,x)Pw_{1}(t)f\big(w_{0}y(t,x)\big)\,dx \leq \int_{\Omega} y^{T}(t,x)PP^{T}y(t,x)\,dx + \int_{\Omega} f^{T}\big(w_{0}y(t,x)\big)w_{1}^{T}(t)w_{1}(t)f\big(w_{0}y(t,x)\big)\,dx,
\tag{35}
$$
$$
2\int_{\Omega} y^{T}(t,x)Pw_{2}(t)f\big(w_{0}y(t-\tau,x)\big)\,dx \leq \int_{\Omega} y^{T}(t,x)PP^{T}y(t,x)\,dx + \int_{\Omega} f^{T}\big(w_{0}y(t-\tau,x)\big)w_{2}^{T}(t)w_{2}(t)f\big(w_{0}y(t-\tau,x)\big)\,dx,
\tag{36}
$$
and
$$
\begin{aligned}
2\int_{\Omega} y^{T}(t,x)Pw_{3}(t)f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right)dx \leq{} & \int_{\Omega} y^{T}(t,x)Pw_{3}(t)w_{3}^{T}(t)P^{T}y(t,x)\,dx \\
& + \int_{\Omega} f^{T}\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right) f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right)dx.
\end{aligned}
\tag{37}
$$

On the other hand, it can be seen from Assumption 2.3 that, for the $r_{j}$ $(j = 1, 2, \ldots, n)$ given in (23), the following inequality holds
$$
\int_{\Omega} f^{T}\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right) f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right)dx \leq \int_{\Omega} \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \left(\int_{0}^{\infty} K_{ij}(s)\,y_{j}(t-s,x)\,ds\right)^{2} dx.
\tag{38}
$$

In addition, on the basis of Lemma 2.3, we have
$$
\begin{aligned}
\int_{\Omega} \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \left(\int_{0}^{\infty} K_{ij}(s)\,y_{j}(t-s,x)\,ds\right)^{2} dx
&= \int_{\Omega} \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \left(\int_{0}^{\infty} K_{ij}^{1/2}(s)\,K_{ij}^{1/2}(s)\,y_{j}(t-s,x)\,ds\right)^{2} dx \\
&\leq \int_{\Omega} \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{0}^{\infty} K_{ij}(s)\,ds \int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t-s,x)\,ds\,dx \\
&= \int_{\Omega} \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t-s,x)\,ds\,dx.
\end{aligned}
\tag{39}
$$

Thus
$$
\begin{aligned}
2\int_{\Omega} y^{T}(t,x)Pw_{3}(t)f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right)dx \leq{} & \int_{\Omega} y^{T}(t,x)Pw_{3}(t)w_{3}^{T}(t)P^{T}y(t,x)\,dx \\
& + \int_{\Omega} \sum_{i=1}^{n}\sum_{j=1}^{n} r_{j}^{2} \int_{0}^{\infty} K_{ij}(s)\,y_{j}^{2}(t-s,x)\,ds\,dx.
\end{aligned}
\tag{40}
$$

From Lemma 2.4 and Assumption 2.2, we can obtain that there exist positive scalars $\varepsilon_{1}$, $\varepsilon_{2}$, $\varepsilon_{3}$ such that the following inequalities hold
$$
\begin{aligned}
\int_{\Omega} f^{T}\big(w_{0}y(t,x)\big)w_{1}^{T}(t)w_{1}(t)f\big(w_{0}y(t,x)\big)\,dx
&= \int_{\Omega} f^{T}\big(w_{0}y(t,x)\big)\big(w_{1} + \Delta w_{1}(t)\big)^{T}\big(w_{1} + \Delta w_{1}(t)\big)f\big(w_{0}y(t,x)\big)\,dx \\
&= \int_{\Omega} f^{T}\big(w_{0}y(t,x)\big)\big(w_{1} + \Lambda F(t)E_{2}\big)^{T}\big(w_{1} + \Lambda F(t)E_{2}\big)f\big(w_{0}y(t,x)\big)\,dx \\
&\leq \int_{\Omega} f^{T}\big(w_{0}y(t,x)\big)\Big(w_{1}^{T}\big(I - \varepsilon_{1}\Lambda\Lambda^{T}\big)^{-1}w_{1} + \varepsilon_{1}^{-1}E_{2}^{T}E_{2}\Big)f\big(w_{0}y(t,x)\big)\,dx,
\end{aligned}
\tag{41}
$$
$$
\begin{aligned}
\int_{\Omega} f^{T}\big(w_{0}y(t-\tau,x)\big)w_{2}^{T}(t)w_{2}(t)f\big(w_{0}y(t-\tau,x)\big)\,dx
&= \int_{\Omega} f^{T}\big(w_{0}y(t-\tau,x)\big)\big(w_{2} + \Delta w_{2}(t)\big)^{T}\big(w_{2} + \Delta w_{2}(t)\big)f\big(w_{0}y(t-\tau,x)\big)\,dx \\
&= \int_{\Omega} f^{T}\big(w_{0}y(t-\tau,x)\big)\big(w_{2} + \Lambda F(t)E_{3}\big)^{T}\big(w_{2} + \Lambda F(t)E_{3}\big)f\big(w_{0}y(t-\tau,x)\big)\,dx \\
&\leq \int_{\Omega} f^{T}\big(w_{0}y(t-\tau,x)\big)\Big(w_{2}^{T}\big(I - \varepsilon_{2}\Lambda\Lambda^{T}\big)^{-1}w_{2} + \varepsilon_{2}^{-1}E_{3}^{T}E_{3}\Big)f\big(w_{0}y(t-\tau,x)\big)\,dx,
\end{aligned}
\tag{42}
$$
and
$$
\begin{aligned}
\int_{\Omega} y^{T}(t,x)Pw_{3}(t)w_{3}^{T}(t)P^{T}y(t,x)\,dx
&= \int_{\Omega} y^{T}(t,x)P\big(w_{3} + \Delta w_{3}(t)\big)\big(w_{3} + \Delta w_{3}(t)\big)^{T}P^{T}y(t,x)\,dx \\
&= \int_{\Omega} y^{T}(t,x)P\big(w_{3} + \Lambda F(t)E_{4}\big)\big(w_{3} + \Lambda F(t)E_{4}\big)^{T}P^{T}y(t,x)\,dx \\
&\leq \int_{\Omega} y^{T}(t,x)P\Big(w_{3}w_{3}^{T} + w_{3}E_{4}^{T}\big(\varepsilon_{3}I - E_{4}E_{4}^{T}\big)^{-1}E_{4}w_{3}^{T} + \varepsilon_{3}^{-1}\Lambda\Lambda^{T}\Big)P^{T}y(t,x)\,dx.
\end{aligned}
\tag{43}
$$

Since $P$ is a diagonal matrix, the following equality holds:
$$
-2\int_{\Omega} y^{T}(t,x)PA(t)y(t,x)\,dx = -\int_{\Omega} y^{T}(t,x)\big(PA(t) + A^{T}(t)P^{T}\big)y(t,x)\,dx = -\int_{\Omega} y^{T}(t,x)\big(PA(t) + A^{T}(t)P\big)y(t,x)\,dx.
\tag{44}
$$

From Lemma 2.5, we find that there exists a positive scalar $\varepsilon$ such that
$$
\begin{aligned}
-\int_{\Omega} y^{T}(t,x)\big(PA(t) + A^{T}(t)P\big)y(t,x)\,dx
&= -\int_{\Omega} y^{T}(t,x)\big(PA + A^{T}P + P\Delta A(t) + \Delta A^{T}(t)P\big)y(t,x)\,dx \\
&= -\int_{\Omega} y^{T}(t,x)\Big(PA + A^{T}P + P\Lambda F(t)E_{1} + \big(P\Lambda F(t)E_{1}\big)^{T}\Big)y(t,x)\,dx \\
&\leq \int_{\Omega} y^{T}(t,x)\big(-PA - A^{T}P + \varepsilon P\Lambda\Lambda^{T}P^{T} + \varepsilon^{-1}E_{1}^{T}E_{1}\big)y(t,x)\,dx.
\end{aligned}
\tag{45}
$$
Then, it follows from (25)–(45) that
$$
\mathcal{L}V(t, y(t,x)) \leq \int_{\Omega} \eta^{T}(t,x)\,\Pi\,\eta(t,x)\,dx,
\tag{46}
$$
where
$$
\eta^{T}(t,x) = \Big[y^{T}(t,x),\ y^{T}(t-\tau,x),\ f^{T}\big(w_{0}y(t,x)\big),\ f^{T}\big(w_{0}y(t-\tau,x)\big)\Big],
\tag{47}
$$
$$
\Pi = \begin{bmatrix}
\Pi_{11} & 0 & 0 & 0 \\
0 & -Q + \mu M_{2}^{T}M_{2} & 0 & 0 \\
0 & 0 & \Pi_{33} & 0 \\
0 & 0 & 0 & \Pi_{44}
\end{bmatrix},
\tag{48}
$$
with
$$
\begin{aligned}
\Pi_{11} ={} & R^{T}R + 2PP^{T} - 2PD_{\pi} + Q + \mu M_{1}^{T}M_{1} - PA - A^{T}P + \varepsilon P\Lambda\Lambda^{T}P^{T} + \varepsilon^{-1}E_{1}^{T}E_{1} \\
& + P\Big(w_{3}w_{3}^{T} + w_{3}E_{4}^{T}\big(\varepsilon_{3}I - E_{4}E_{4}^{T}\big)^{-1}E_{4}w_{3}^{T} + \varepsilon_{3}^{-1}\Lambda\Lambda^{T}\Big)P^{T}, \\
\Pi_{33} ={} & w_{1}^{T}\big(I - \varepsilon_{1}\Lambda\Lambda^{T}\big)^{-1}w_{1} + \varepsilon_{1}^{-1}E_{2}^{T}E_{2}, \\
\Pi_{44} ={} & w_{2}^{T}\big(I - \varepsilon_{2}\Lambda\Lambda^{T}\big)^{-1}w_{2} + \varepsilon_{2}^{-1}E_{3}^{T}E_{3}.
\end{aligned}
$$

From Assumption 2.3, the following inequalities can be obtained easily:
$$
f^{T}\big(w_{0}y(t,x)\big)f\big(w_{0}y(t,x)\big) - y^{T}(t,x)G^{T}Gy(t,x) \leq 0,
\tag{49}
$$
$$
f^{T}\big(w_{0}y(t-\tau,x)\big)f\big(w_{0}y(t-\tau,x)\big) - y^{T}(t-\tau,x)G^{T}Gy(t-\tau,x) \leq 0.
\tag{50}
$$
Noticing that, for any scalars $\alpha > 0$ and $\beta > 0$,
$$
-\alpha\Big[f^{T}\big(w_{0}y(t,x)\big)f\big(w_{0}y(t,x)\big) - y^{T}(t,x)G^{T}Gy(t,x)\Big] \geq 0,
\tag{51}
$$
$$
-\beta\Big[f^{T}\big(w_{0}y(t-\tau,x)\big)f\big(w_{0}y(t-\tau,x)\big) - y^{T}(t-\tau,x)G^{T}Gy(t-\tau,x)\Big] \geq 0,
\tag{52}
$$
and adding (51) and (52) to $\mathcal{L}V(t, y(t,x))$, we get
$$
\mathcal{L}V(t, y(t,x)) \leq \int_{\Omega} \eta^{T}(t,x)\,\Xi\,\eta(t,x)\,dx.
\tag{53}
$$
In view of $\Xi < 0$, there exists a scalar $\kappa > 0$ such that $\Xi + \mathrm{diag}\{\kappa I, 0, 0, 0\} < 0$, which indicates that
$$
\mathcal{L}V(t, y(t,x)) \leq -\kappa \int_{\Omega} \|y(t,x)\|^{2}\,dx,
\tag{54}
$$
and then
$$
dV(t, y(t,x)) \leq -\kappa \int_{\Omega} \|y(t,x)\|^{2}\,dx\,dt + \int_{\Omega} \sum_{i=1}^{3} \frac{\partial V_{i}(t, y(t,x))}{\partial y}\,\sigma(y(t,x), y(t-\tau,x))\,d\omega(t)\,dx.
\tag{55}
$$
Taking the mathematical expectation on both sides of (55), the following inequality can be obtained easily:
$$
\frac{d\,\mathbb{E}\{V(t, y(t,x))\}}{dt} \leq -\kappa\, \mathbb{E}\left\{\int_{\Omega} \|y(t,x)\|^{2}\,dx\right\}.
\tag{56}
$$
Based upon Definition 2.1, the generalized reaction–diffusion neural networks model (1) is globally robustly asymptotically stable in the mean square. This completes the proof.

In fact, on the basis of the Lyapunov–Krasovskii functional (20), we can prove that system (1) is also globally robustly asymptotically stable under the Neumann boundary conditions (3) by using Lemma 2.6.

Remark 3.1 Inspired by [10, 12], we obtain the inequality
$$
\int_{\Omega} y^{T}(t,x)P \sum_{k=1}^{l} \frac{\partial}{\partial x_{k}}\left(D_{k}\frac{\partial y(t,x)}{\partial x_{k}}\right)dx \leq -\int_{\Omega} y^{T}(t,x)PD_{\pi}y(t,x)\,dx,
$$
so that the diffusion term $\sum_{k=1}^{l} \frac{\partial}{\partial x_{k}}\big(D_{k}\frac{\partial y(t,x)}{\partial x_{k}}\big)$ is cleverly replaced by $-D_{\pi}y(t,x)$ in the estimates.

Theorem 3.2 The generalized reaction–diffusion neural networks model (1) under Neumann boundary conditions is globally robustly asymptotically stable in the mean square if there exist a diagonal positive definite matrix $P$, a diagonal matrix $R$, a symmetric positive definite matrix $Q$ and positive scalars $\mu$, $\varepsilon$, $\varepsilon_{1}$, $\varepsilon_{2}$, $\varepsilon_{3}$, $\alpha$, $\beta$ such that the following linear matrix inequalities hold:
$$
P < \mu I, \qquad
\Xi = \begin{bmatrix}
\Xi_{11} & 0 & 0 & 0 \\
0 & -Q + \mu M_{2}^{T}M_{2} + \beta G^{T}G & 0 & 0 \\
0 & 0 & \Xi_{33} & 0 \\
0 & 0 & 0 & \Xi_{44}
\end{bmatrix} < 0,
\tag{57}
$$
where
$$
\begin{aligned}
\Xi_{11} ={} & R^{T}R + 2PP^{T} - 2P\lambda_{1} + Q + \mu M_{1}^{T}M_{1} - PA - A^{T}P + \varepsilon P\Lambda\Lambda^{T}P^{T} + \varepsilon^{-1}E_{1}^{T}E_{1} \\
& + P\Big(w_{3}w_{3}^{T} + w_{3}E_{4}^{T}\big(\varepsilon_{3}I - E_{4}E_{4}^{T}\big)^{-1}E_{4}w_{3}^{T} + \varepsilon_{3}^{-1}\Lambda\Lambda^{T}\Big)P^{T} + \alpha G^{T}G,
\end{aligned}
$$
$$
\begin{aligned}
\Xi_{33} ={} & w_{1}^{T}\big(I - \varepsilon_{1}\Lambda\Lambda^{T}\big)^{-1}w_{1} + \varepsilon_{1}^{-1}E_{2}^{T}E_{2} - \alpha I, \\
\Xi_{44} ={} & w_{2}^{T}\big(I - \varepsilon_{2}\Lambda\Lambda^{T}\big)^{-1}w_{2} + \varepsilon_{2}^{-1}E_{3}^{T}E_{3} - \beta I,
\end{aligned}
$$
and $\lambda_{1}$ is the smallest positive eigenvalue of the Neumann boundary problem (17).

Based on the proof of Theorem 3.1, the following results can be easily obtained.

Case 1 If there is no stochastic perturbation, system (1) can be described as
$$
\begin{aligned}
dy(t,x) ={} & \Bigg[ \sum_{k=1}^{l} \frac{\partial}{\partial x_{k}}\left(D_{k}\frac{\partial y(t,x)}{\partial x_{k}}\right) - A(t)y(t,x) + w_{1}(t)f\big(w_{0}y(t,x)\big) \\
& + w_{2}(t)f\big(w_{0}y(t-\tau,x)\big) + w_{3}(t)f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right) \Bigg] dt,
\end{aligned}
\tag{58}
$$
where the parameters are the same as in (1). Then we obtain the following corollary.

Corollary 3.1 The neural networks model (58) under Dirichlet boundary conditions is globally robustly asymptotically stable if there exist a diagonal positive definite matrix $P$, a diagonal matrix $R$, a symmetric positive definite matrix $Q$ and positive scalars $\varepsilon$, $\varepsilon_{1}$, $\varepsilon_{2}$, $\varepsilon_{3}$, $\alpha$, $\beta$ such that the following linear matrix inequality holds:
$$
\Xi = \begin{bmatrix}
\Xi_{11} & 0 & 0 & 0 \\
0 & -Q + \beta G^{T}G & 0 & 0 \\
0 & 0 & \Xi_{33} & 0 \\
0 & 0 & 0 & \Xi_{44}
\end{bmatrix} < 0,
\tag{59}
$$
where
$$
\begin{aligned}
\Xi_{11} ={} & R^{T}R + 2PP^{T} - 2PD_{\pi} + Q - PA - A^{T}P + \varepsilon P\Lambda\Lambda^{T}P^{T} + \varepsilon^{-1}E_{1}^{T}E_{1} \\
& + P\Big(w_{3}w_{3}^{T} + w_{3}E_{4}^{T}\big(\varepsilon_{3}I - E_{4}E_{4}^{T}\big)^{-1}E_{4}w_{3}^{T} + \varepsilon_{3}^{-1}\Lambda\Lambda^{T}\Big)P^{T} + \alpha G^{T}G, \\
\Xi_{33} ={} & w_{1}^{T}\big(I - \varepsilon_{1}\Lambda\Lambda^{T}\big)^{-1}w_{1} + \varepsilon_{1}^{-1}E_{2}^{T}E_{2} - \alpha I, \\
\Xi_{44} ={} & w_{2}^{T}\big(I - \varepsilon_{2}\Lambda\Lambda^{T}\big)^{-1}w_{2} + \varepsilon_{2}^{-1}E_{3}^{T}E_{3} - \beta I.
\end{aligned}
$$

Remark 3.2 It is easy to see that in Corollary 3.1 the restriction $P < \mu I$ and the terms $\mu M_{1}^{T}M_{1} > 0$, $\mu M_{2}^{T}M_{2} > 0$ are removed. Therefore, compared with Theorem 3.1, the conditions of Corollary 3.1 are easier to satisfy; that is, stability is easier to achieve when there is no stochastic perturbation in the neural networks model.

Case 2 If there are no parameter uncertainties, system (1) simplifies to
$$
\begin{aligned}
dy(t,x) ={} & \Bigg[ \sum_{k=1}^{l} \frac{\partial}{\partial x_{k}}\left(D_{k}\frac{\partial y(t,x)}{\partial x_{k}}\right) - Ay(t,x) + w_{1}f\big(w_{0}y(t,x)\big) \\
& + w_{2}f\big(w_{0}y(t-\tau,x)\big) + w_{3}f\left(\int_{-\infty}^{t} K(t-s)\big(w_{0}y(s,x)\big)\,ds\right) \Bigg] dt + \sigma\big(t, y(t,x), y(t-\tau,x)\big)\,d\omega(t),
\end{aligned}
\tag{60}
$$
where the parameters are the same as in (1). Then we get the following corollary.

Corollary 3.2 The neural networks model (60) under Dirichlet boundary conditions is globally asymptotically stable in the mean square if there exist a positive definite matrix $P$, a diagonal matrix $R$, a symmetric positive definite matrix $Q$ and positive scalars $\mu$, $\alpha$, $\beta$ such that the following linear matrix inequalities hold:
$$
P < \mu I, \qquad
\Xi = \begin{bmatrix}
\Xi_{11} & 0 & 0 & 0 \\
0 & -Q + \mu M_{2}^{T}M_{2} + \beta G^{T}G & 0 & 0 \\
0 & 0 & \Xi_{33} & 0 \\
0 & 0 & 0 & \Xi_{44}
\end{bmatrix} < 0,
\tag{61}
$$
where
$$
\begin{aligned}
\Xi_{11} ={} & R^{T}R + 2PP^{T} - 2PD_{\pi} + Q + \mu M_{1}^{T}M_{1} - PA - A^{T}P + Pw_{3}w_{3}^{T}P^{T} + \alpha G^{T}G, \\
\Xi_{33} ={} & w_{1}^{T}w_{1} - \alpha I, \\
\Xi_{44} ={} & w_{2}^{T}w_{2} - \beta I.
\end{aligned}
$$

Remark 3.3 Comparing Theorem 3.1 and Corollary 3.2, we can see that parameter uncertainties have an effect on the stability of the neural networks model. Reference [19] investigated the robust stability of uncertain stochastic neural networks with distributed and interval time-varying delays, and its Corollary 3 gives global asymptotic stability criteria for neural networks without stochastic noise and parameter uncertainties. In our paper, Corollary 3.1 and Corollary 3.2 give stability criteria for neural networks with uncertain parameters and for neural networks with stochastic perturbation, respectively. Compared with [19], our results can reflect the influence of uncertain parameters and of stochastic perturbation on the stability of neural networks separately.

4 Numerical simulations

In this section, some numerical examples are given to demonstrate the effectiveness of Theorem 3.1. In particular, the effects of diffusion coefficients and diffusion spaces are considered, and the corresponding simulations are presented. For the sake of simplicity, we consider the following 2-D reaction–diffusion uncertain stochastic neural networks model:
$$
\begin{aligned}
dy(t,x) ={} & \Bigg[\frac{\partial}{\partial x}\left(D\frac{\partial y(t,x)}{\partial x}\right) - \big(A + \Lambda F(t)E_{1}\big)y(t,x) + \big(w_{1} + \Lambda F(t)E_{2}\big)f\big(w_{0}y(t,x)\big) \\
& + \big(w_{2} + \Lambda F(t)E_{3}\big)f\big(w_{0}y(t-\tau,x)\big)\Bigg]dt + \sigma\big(y(t,x), y(t-\tau,x)\big)\,d\omega(t),
\end{aligned}
\tag{62}
$$
where
$$
\begin{gathered}
D = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix}, \quad
A = \begin{bmatrix} 3 & 0 \\ 0 & 4 \end{bmatrix}, \quad
w_{0} = \begin{bmatrix} 0.2 & 0.5 \\ 0.3 & 1.1 \end{bmatrix}, \quad
w_{1} = \begin{bmatrix} -0.6 & 0.8 \\ 1.2 & -0.2 \end{bmatrix}, \quad
w_{2} = \begin{bmatrix} 1.3 & 0.6 \\ 0.5 & 1.7 \end{bmatrix}, \\
F(t) = \begin{bmatrix} \sin(t) & 0 \\ 0 & \cos(t) \end{bmatrix}, \quad
\Lambda = 0.3I, \quad
E_{i} = \begin{bmatrix} 0.1 & 0 \\ 0 & 0.1 \end{bmatrix},\ i = 1, 2, 3, \quad
f(u) = \tanh(u), \quad 0 \leq x \leq 1, \quad \tau = 2.05,
\end{gathered}
\tag{63}
$$
$\sigma_{11} = 0.2y(t,x) + 0.5y(t-\tau,x)$, $\sigma_{12} = 0$, $\sigma_{21} = 0$, $\sigma_{22} = 0.3y(t,x) + 0.2y(t-\tau,x)$, and it is easy to see that $G = I$. The initial conditions of system (62) are chosen as
$$
y_{1}(t,x) = 0.5\left(1 + \frac{t}{\pi}\right)\cos(\pi x), \qquad y_{2}(t,x) = \left(1 + \frac{t}{\pi}\right)\cos(\pi x), \qquad (t,x) \in [-\tau, 0] \times \Omega.
\tag{64}
$$

The dynamical behavior of system (62) with parameters (63) and initial conditions (64) is shown in Fig. 1. The simulation results show the feasibility and efficiency of the proposed methods.

Fig. 1 The dynamical behaviors of the neurons $y(t,x) = [y_{1}(t,x), y_{2}(t,x)]^{T}$

1. Discussion of diffusion coefficients: the result in Theorem 3.1 shows that the stability criteria for system (1) depend on the diffusion effects. Furthermore, we can observe a very interesting fact: as long as the diffusion coefficients $D_{ik}$ are large enough, the condition $\Xi_{11} < 0$ can always be satisfied. This implies that the larger the diffusion coefficients are, the better the stability behavior is. Dynamical behaviors of the neurons $y(t,x) = [y_{1}(t,x), y_{2}(t,x)]^{T}$ with different diffusion coefficients ($D_{1} = D_{2} = 0.08$ and $D_{1} = D_{2} = 0.12$, respectively) are shown in Fig. 2.

Fig. 2 The dynamical behaviors of the neurons $y(t,x) = [y_{1}(t,x), y_{2}(t,x)]^{T}$ with different diffusion coefficients

2. Discussion of diffusion spaces: from Theorem 3.1 we can see that, for given diffusion coefficients $D_{ik}$, as long as the diffusion space $\Omega$ is small enough, the condition $\Xi_{11} < 0$ can always be satisfied. This implies that the smaller the diffusion space is, the better the stability behavior is. Dynamical behaviors of the neurons $y(t,x) = [y_{1}(t,x), y_{2}(t,x)]^{T}$ with different diffusion spaces ($0 \leq x \leq 1.3$ and $0 \leq x \leq 0.7$, respectively) are shown in Fig. 3.

Fig. 3 The dynamical behaviors of the neurons $y(t,x) = [y_{1}(t,x), y_{2}(t,x)]^{T}$ with different space variables

Remark 4.1 Reference [3] studied the dynamics of bidirectional associative memory networks with distributed delays and reaction–diffusion terms. In order to discuss the influence of the reaction–diffusion terms, Song and Cao investigated the neural networks model with reaction–diffusion terms and the model without reaction–diffusion terms separately. In this paper, we further investigate the influence of the diffusion coefficients and of the diffusion space on the stability of neural networks, respectively.
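For reproducibility, the following sketch integrates the example system (62) with the parameters (63) and the initial data (64) under Dirichlet boundary conditions, using central finite differences in space and an Euler–Maruyama step in time. The grid, step sizes, horizon and the way the delayed state is buffered are implementation choices made for illustration only; they are not prescribed by the paper.

```python
# Finite-difference / Euler-Maruyama sketch of system (62)-(64) under
# Dirichlet boundary conditions.  Discretization parameters are illustrative.
import numpy as np

# parameters from (63)
D  = np.array([0.1, 0.1])
A  = np.diag([3.0, 4.0])
w0 = np.array([[0.2, 0.5], [0.3, 1.1]])
w1 = np.array([[-0.6, 0.8], [1.2, -0.2]])
w2 = np.array([[1.3, 0.6], [0.5, 1.7]])
Lam = 0.3 * np.eye(2)
E1 = E2 = E3 = 0.1 * np.eye(2)
tau = 2.05

dx, dt = 0.05, 1e-3
xs = np.arange(0.0, 1.0 + dx, dx)               # spatial grid on [0, 1]
n_delay = int(round(tau / dt))                   # number of stored past steps

def initial(t, x):
    # initial data (64) on [-tau, 0] x [0, 1]
    return np.stack([0.5 * (1 + t / np.pi) * np.cos(np.pi * x),
                     (1 + t / np.pi) * np.cos(np.pi * x)])

history = [initial(-tau + k * dt, xs) for k in range(n_delay + 1)]
y = history[-1]                                  # state at t = 0
rng = np.random.default_rng(1)

for step in range(int(5.0 / dt)):                # integrate on [0, 5]
    t = step * dt
    y_tau = history[0]                           # state at t - tau
    F = np.diag([np.sin(t), np.cos(t)])
    lap = np.zeros_like(y)
    lap[:, 1:-1] = (y[:, 2:] - 2 * y[:, 1:-1] + y[:, :-2]) / dx**2
    drift = (D[:, None] * lap
             - (A + Lam @ F @ E1) @ y
             + (w1 + Lam @ F @ E2) @ np.tanh(w0 @ y)
             + (w2 + Lam @ F @ E3) @ np.tanh(w0 @ y_tau))
    sigma = np.stack([0.2 * y[0] + 0.5 * y_tau[0],
                      0.3 * y[1] + 0.2 * y_tau[1]])
    dW = np.sqrt(dt) * rng.standard_normal(2)    # increment of omega(t), shared by all x
    y = y + drift * dt + sigma * dW[:, None]
    y[:, 0] = y[:, -1] = 0.0                     # Dirichlet boundary conditions
    history.append(y.copy())
    history.pop(0)

print("max |y(5, x)| over the grid:", float(np.abs(y).max()))
```

Re-running the same script with $D_{1} = D_{2} = 0.08$ or $0.12$, or with the spatial interval changed to $[0, 1.3]$ or $[0, 0.7]$, reproduces the qualitative comparisons discussed for Figs. 2 and 3.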
5 Conclusion

In this paper, several sufficient conditions guaranteeing the global robust asymptotic stability of a class of generalized reaction–diffusion uncertain stochastic neural networks with mixed delays under Dirichlet boundary conditions and Neumann boundary conditions have been given. The neural networks model is quite general and can be used to describe some well-known neural networks, including local field neural networks and static neural networks. Our results show that the stability criteria depend on the diffusion coefficients, the diffusion spaces, the stochastic perturbation, and the uncertain parameters, so the problem considered in this paper is more general in many aspects. Finally, numerical examples have been given to illustrate the usefulness of the obtained results. This has practical benefits, since easily verifiable conditions for robust stability are important in the design and applications of neural networks.

Acknowledgements This work was supported by the National Natural Science Foundation of China (no. 61305076) and the Funds for Basic Creative Research of the Department of Basic Science, Shijiazhuang Mechanical Engineering College (no. JCCX1601).
References

1. Cao J, Rakkiyappan R, Maheswari K, Chandrasekar A (2016) Exponential H∞ filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities. Sci China Technol Sci 59(3):387–402
2. Zhao H, Yuan J, Zhang X (2015) Stability and bifurcation analysis of reaction-diffusion neural networks with delays. Neurocomputing 147(1):280–290
3. Dong T, Liao X, Wang A (2015) Stability and Hopf bifurcation of a complex-valued neural network with two time delays. Nonlinear Dyn 82(1):1–12
4. Wu H, Zhang X, Li R, Yao R (2014) Adaptive exponential synchronization of delayed Cohen-Grossberg neural networks with discontinuous activations. Int J Mach Learn Cybern 6(2):253–263
5. Xia J, Ju HP, Zeng H (2015) Improved delay-dependent robust stability analysis for neutral-type uncertain neural networks with Markovian jumping parameters and time-varying delays. Neurocomputing 149:1198–1205
6. Wang T, Zhao S, Zhou W, Yu W (2015) Finite-time state estimation for delayed Hopfield neural networks with Markovian jump. Neurocomputing 156:193–198
7. Li Y, Meng X (2015) Synchronization of generalized stochastic neural networks with delays and reaction-diffusion terms on timescales. Int J Dyn Syst Differ Equ 5(3):403–416
8. Gan Q, Liu T, Liu C, Lv T (2016) Synchronization for a class of generalized neural networks with interval time-varying delays and reaction-diffusion terms. Nonlinear Anal Model Control 21(3):379–399
9. Li R, Cao J (2016) Stability analysis of reaction-diffusion uncertain memristive neural networks with time-varying delays and leakage term. Appl Math Comput 278:54–69
10. Yang X, Cao J, Yang Z (2013) Synchronization of coupled reaction-diffusion neural networks with time-varying delays via pinning-impulsive controller. SIAM J Control Optim 51(5):3486–3510
11. Wang Y, Cao J (2013) Exponential stability of stochastic higher-order BAM neural networks with reaction-diffusion terms and mixed time-varying delays. Neurocomputing 119(16):192–200
12. Du Y, Zhong S, Zhou N, Shi K, Cheng J (2014) Exponential stability for stochastic Cohen-Grossberg BAM neural networks with discrete and distributed time-varying delays. Neurocomputing 127(3):144–151
13. Arik S (2014) A new condition for robust stability of uncertain neural networks with time delays. Neurocomputing 128(5):476–482
14. Banu LJ, Balasubramaniam P, Ratnavelu K (2014) Robust stability analysis for discrete-time uncertain neural networks with leakage time-varying delay. Neurocomputing 151:808–816
15. Zhou J, Xu S, Shen H, Zhang B (2013) Passivity analysis for uncertain BAM neural networks with time delays and reaction-diffusions. Int J Syst Sci 44(8):1494–1503
16. Wang Y, Xie L, Souza CED (1992) Robust control of a class of uncertain nonlinear systems. Syst Control Lett 19(2):139–149
17. Toumi A, Toumi MA, Toumi N (2009) Simple proofs of the Cauchy-Schwartz inequality and the negative discriminant property in Archimedean almost f-algebras. Anal Theory Appl 25(2):117–124
18. Pan J, Zhong S (2010) Delay-dependent stability criteria for reaction-diffusion neural networks with time-varying delays. Neurocomputing 73:1344–1351
19. Feng W, Yang SX, Wu H (2009) On robust stability of uncertain stochastic neural networks with distributed and interval time-varying delays. Chaos Solitons Fractals 42(4):2095–2104