Int. J. Mach. Learn. & Cyber. DOI 10.1007/s13042-015-0373-2
ORIGINAL ARTICLE
State estimation for uncertain discrete-time stochastic neural networks with Markovian jump parameters and time-varying delays

Mingang Hua, Huasheng Tan, Juntao Fei

Received: 20 October 2014 / Accepted: 24 April 2015
© Springer-Verlag Berlin Heidelberg 2015
Abstract The state estimation problem is considered for a class of discrete-time stochastic neural networks with Markovian jumping parameters. Norm-bounded parameter uncertainties in both the state and measurement equations, together with time-varying delays, are investigated. The neuron activation function satisfies a sector-bounded condition, and the nonlinear perturbation of the measurement equation satisfies either a standard Lipschitz condition or a sector-bounded condition. By constructing a proper Lyapunov–Krasovskii functional, delay-dependent conditions are developed in terms of linear matrix inequalities (LMIs) to estimate the neuron state such that the dynamics of the estimation error system are asymptotically stable. Finally, numerical examples are presented to demonstrate the effectiveness and applicability of the proposed design method.

Keywords State estimation; Parameter uncertainties; Discrete-time stochastic neural networks; Markovian jumping parameters; Time-varying delay; LMI
Correspondence: Mingang Hua, [email protected]
1 College of Internet of Things Engineering, Hohai University, Changzhou 213022, China
2 Jiangsu Key Laboratory of Power Transmission and Distribution Equipment Technology, Changzhou 213022, China

1 Introduction

In the past decade, substantial attention has been devoted to the investigation of various kinds of neural networks, such as recurrent neural networks, Hopfield neural networks, bidirectional associative memory neural networks, Cohen–Grossberg neural networks and so forth, owing to their extensive applications in engineering fields such as signal and image processing, data handling, associative memory and optimization. However, in the dynamic behavior of many biological and artificial neural networks there exist inherent uncertainties and time delays, which may cause poor performance such as oscillation and instability. For example, in the control design of an active suspension system, the uncertainty and delay of the vehicle sprung and unsprung masses should be taken into account to meet vehicle travel performance criteria; the H∞ controller design for active suspension systems with delay is considered in [20, 21]. Moreover, electronic fluctuations, such as random currents or voltages, can hardly be avoided in the electrical circuit hardware implementation of neural networks on medium- and large-scale integrated circuits. In addition, some neural networks that are stable in the absence of delay lose stability once time delays are introduced. It is thus of great importance to study the stability of neural networks with parameter uncertainties or time delays, and several results are reported in [3, 9, 11, 17, 18, 31, 34, 37, 48] and the references therein. In recent years, neural-network-based control has developed rapidly and become an active research area; for example, control problems for complex nonlinear systems have been successfully solved via neural networks [16]. As a dual of control design, robust filtering and state estimation have also received considerable attention; see [1, 2, 4–7, 12–15, 19, 22, 24–30, 32, 33, 38–40, 42–48] and the references cited therein.
The aim of state estimation is to estimate the neuron states through available output measurements such that the dynamics of the estimation error system are asymptotically stable, and then to employ the estimated neuron states to achieve certain aims, such as the practical
performance of controllers. However, as shown in [10, 14, 39], for relatively large-scale neural networks, the neuron states are often not fully available in the network outputs. Therefore, it is of great significance to estimate the neuron states through available measurements in many applications. For example, artificial neural networks were utilized to model the behavior of supercapacitors and then to control the supercapacitor voltage [8]. The state estimation problem was initially studied for neural networks with time-varying delays in [39], and was further improved in [10] with the aid of the free-weighting matrix approach. However, the condition formulated in [10] is a matrix inequality rather than an LMI, which corresponds to a tough nonlinear programming problem. Therefore, based on a new bounding technique, a delay-dependent LMI approach was developed to design the robust state estimator for delayed neural networks [14]. Unfortunately, the parameter uncertainties considered in [14] exist only in the state equation. On the other hand, Markovian jumping systems have been widely studied because systems with Markovian jumping parameters are appropriate for modeling practical systems whose structure and parameters are subject to random abrupt changes, such as random failures. Neural networks with Markovian jumping parameters have also been a hot research field in recent years. For instance, the robust stability of nonlinear delayed Hopfield neural networks with Markovian jumping parameters is addressed via the Takagi–Sugeno (T–S) fuzzy model [18]. Via a delay partitioning technique combined with the free-weighting matrix method, the stability of a class of discrete-time stochastic neural networks with mode-dependent delay and Markovian jumping parameters is investigated [31].
The state estimation for discrete-time neural networks with Markovian jumping parameters and time-varying delays is considered in [45]. Stochastic state estimation for neural networks with distributed delays and Markovian jumping parameters is studied in [6] and has been extended to the discrete-time stochastic setting in [7]. The robust H∞ state estimation problem is investigated for a class of discrete-time stochastic genetic regulatory networks with probabilistic measurement delays, characterized by a binary switching sequence satisfying a Bernoulli probability distribution [38]. Meanwhile, the stability of Markovian jumping neural networks with discrete and distributed time-varying delays is considered in [35, 36]. It should also be noted that state estimation problems have been studied with stochastic perturbations [4, 7, 23, 24, 38, 41, 48]. However, in [4, 7, 24, 28] the parameter uncertainties are not taken into account, and Markovian jumping parameters are not included in [4, 23, 24, 38, 41, 48]. Besides, the stochastic term coefficient in [7, 38] is characterized by a nonlinear noise intensity function. In fact, the
continuous-time neural networks usually must be discretized for computer-based simulation, experimentation or computation. Therefore, the state estimation problem for uncertain discrete-time stochastic neural networks with Markovian switching remains open and essential, especially when the stochastic term coefficient is of linear form, and this motivates the present work. Taking all the aforementioned elements into account, this paper discusses the state estimation for uncertain discrete-time stochastic neural networks with Markovian jumping parameters and time-varying delays. A delay-dependent condition is formulated in terms of an LMI such that the dynamics of the estimation error system are asymptotically stable. Finally, numerical examples are given to show the effectiveness of the proposed method. The main contributions of this paper are summarized as follows:

• The state estimation for uncertain discrete-time stochastic neural networks with Markovian jumping parameters and sector-bounded nonlinearity is investigated for the first time. Parameter uncertainties exist in both the state and measurement output.
• Based on a Lyapunov–Krasovskii functional, delay-dependent sufficient conditions are presented to guarantee the existence of the desired estimator for discrete-time stochastic neural networks with Markovian jumping parameters and sector-bounded nonlinearity.
• Numerical examples are given to demonstrate the effectiveness of the proposed estimator design.
Notations: $\mathbb{R}^n$ denotes the $n$-dimensional Euclidean space and $\mathbb{R}^{n\times m}$ the set of $n\times m$ real matrices. $I$ is the identity matrix. $|\cdot|$ denotes the Euclidean norm for vectors and $\|\cdot\|$ the spectral norm of matrices. $\mathbb{N}$ denotes the set of all natural numbers, i.e., $\mathbb{N}=\{0,1,2,\ldots\}$. $(\Omega,\mathcal{F},\{\mathcal{F}_k\}_{k\ge 0},\mathcal{P})$ is a complete probability space with a filtration $\{\mathcal{F}_k\}_{k\ge 0}$ satisfying the usual conditions. $M^T$ stands for the transpose of the matrix $M$. For symmetric matrices $X$ and $Y$, the notation $X>Y$ (respectively $X\ge Y$) means that $X-Y$ is positive definite (respectively, positive semi-definite). $\lambda_{\max}(\cdot)$ represents the maximum eigenvalue of a real symmetric matrix. $*$ denotes a block that is readily inferred by symmetry. $\mathbb{E}\{\cdot\}$ stands for the mathematical expectation operator with respect to the given probability measure $\mathcal{P}$.
2 Problem description

Consider the following uncertain discrete-time stochastic delayed neural network with Markovian jump parameters:
$$\begin{aligned} x(k+1) ={}& A(r_k,k)x(k)+A_1(r_k,k)F(x(k))+A_2(r_k,k)F(x(k-d(k)))\\ &+[D(r_k,k)x(k)+D_1(r_k,k)F(x(k))+D_2(r_k,k)F(x(k-d(k)))]w(k) \end{aligned} \qquad (1)$$

where $x(k)=[x_1(k),x_2(k),\ldots,x_n(k)]^T\in\mathbb{R}^n$ is the state vector of the neural network associated with $n$ neurons, and $F(x(\cdot))=[F_1(x(\cdot)),F_2(x(\cdot)),\ldots,F_n(x(\cdot))]^T\in\mathbb{R}^n$ denotes the neuron activation functions with $F(0)=0$. $A(r_k,k)$, $A_1(r_k,k)$, $A_2(r_k,k)$, $D(r_k,k)$, $D_1(r_k,k)$, $D_2(r_k,k)$ are matrix functions of the random jumping process $r_k$, where $r_k$ is a finite-state Markovian jump process representing the system mode, i.e., $r_k$ takes discrete values in a given finite set $S=\{1,2,\ldots,N\}$. For simplicity, when $r_k=i\in S$, these matrices are denoted by $A_i(k)$, $A_{1i}(k)$, $A_{2i}(k)$, $D_i(k)$, $D_{1i}(k)$, $D_{2i}(k)$, respectively. $w(k)$ is a scalar Brownian motion (Wiener process) defined on the complete probability space $(\Omega,\mathcal{F},\{\mathcal{F}_k\}_{k\ge 0},\mathcal{P})$ with

$$\mathbb{E}\{w(k)\}=0,\qquad \mathbb{E}\{w^2(k)\}=1,\qquad \mathbb{E}\{w(i)w(j)\}=0\ (i\ne j) \qquad (2)$$

In neural network (1), $d(k)$ represents the time-varying delay, satisfying

$$0\le d_m\le d(k)\le d_M,\qquad k=0,1,2,\ldots \qquad (3)$$
where $d_m$ and $d_M$ are known positive integers representing the lower and upper bounds of $d(k)$, respectively.

The transition probability matrix $P=(p_{ij})_{N\times N}$ is given by

$$\Pr(r_{k+1}=j\mid r_k=i)=p_{ij},\qquad \forall\, i,j\in S \qquad (4)$$

where $p_{ij}\ge 0$ is the transition probability from mode $i$ to mode $j$ and satisfies

$$\sum_{j=1}^{N}p_{ij}=1,\qquad \forall\, i\in S \qquad (5)$$
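The mode process defined by (4)–(5) can be simulated directly by sampling from the rows of the transition probability matrix. The following sketch (with illustrative, hypothetical transition probabilities, not values from the paper) generates one realization of $r_k$:

```python
import numpy as np

# Hypothetical 2-mode transition probability matrix; each row sums to 1, as in (5).
P = np.array([[1/3, 2/3],
              [3/4, 1/4]])

def sample_modes(P, r0, steps, rng):
    """Sample r_0, ..., r_steps of the chain Pr(r_{k+1}=j | r_k=i) = p_ij from (4)."""
    modes = [r0]
    for _ in range(steps):
        # Next mode is drawn from row r_k of the transition matrix.
        modes.append(rng.choice(len(P), p=P[modes[-1]]))
    return np.array(modes)

rng = np.random.default_rng(0)
traj = sample_modes(P, r0=0, steps=5000, rng=rng)
# Empirical mode frequencies approach the chain's stationary distribution.
print(np.bincount(traj) / len(traj))
```

For this particular matrix, the stationary distribution solves $\pi P=\pi$, giving $\pi=(9/17,\ 8/17)$, and the empirical frequencies converge to it as the trajectory lengthens.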
The parameter uncertainties in (1) are described as follows:

$$\begin{aligned} &A_i(k)=A_i+\Delta A_i(k),\quad A_{1i}(k)=A_{1i}+\Delta A_{1i}(k),\quad A_{2i}(k)=A_{2i}+\Delta A_{2i}(k),\\ &D_i(k)=D_i+\Delta D_i(k),\quad D_{1i}(k)=D_{1i}+\Delta D_{1i}(k),\quad D_{2i}(k)=D_{2i}+\Delta D_{2i}(k) \end{aligned} \qquad (6)$$

where $A_i$ is the state feedback coefficient matrix with $A_i=\mathrm{diag}\{a_{i1},a_{i2},\ldots,a_{in}\}$, $|a_{ij}|<1$, $j=1,2,\ldots,n$. $A_{1i},D_{1i}$ and $A_{2i},D_{2i}$ are, respectively, the connection weight matrices and the delayed connection weight matrices. $D_i$ is a known constant matrix. The norm-bounded parameter uncertainties $\Delta A_i(k)$, $\Delta A_{1i}(k)$, $\Delta A_{2i}(k)$, $\Delta D_i(k)$, $\Delta D_{1i}(k)$, $\Delta D_{2i}(k)$ are assumed to satisfy

$$\begin{bmatrix}\Delta A_i(k) & \Delta A_{1i}(k) & \Delta A_{2i}(k)\\ \Delta D_i(k) & \Delta D_{1i}(k) & \Delta D_{2i}(k)\end{bmatrix} = \begin{bmatrix}M_{1i}\\ M_{2i}\end{bmatrix} F_i(k)\begin{bmatrix}N_{1i} & N_{2i} & N_{3i}\end{bmatrix} \qquad (7)$$

where $M_{1i}$, $M_{2i}$, $N_{1i}$, $N_{2i}$, $N_{3i}$ are known real constant matrices and $F_i(\cdot):\mathbb{R}\to\mathbb{R}^{k\times l}$ is an unknown time-varying matrix function satisfying

$$F_i^T(k)F_i(k)\le I,\qquad \forall\, k. \qquad (8)$$
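The structure (7)–(8) simply says that every admissible uncertainty is a known outer factor multiplied by a time-varying contraction $F_i(k)$. A minimal sketch (all matrix values hypothetical, chosen only for illustration) that generates one admissible uncertainty sample and verifies the contraction condition:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical structure matrices for one mode i (not taken from the paper's examples).
M1 = 0.2 * np.eye(3)                                  # outer factor in (7)
N1, N2, N3 = 0.1*np.eye(3), 0.15*np.eye(3), 0.1*np.eye(3)

def random_contraction(n, rng):
    """Random F with F^T F <= I, as required by (8)."""
    F = rng.standard_normal((n, n))
    return F / max(1.0, np.linalg.norm(F, 2))         # spectral norm at most 1

F = random_contraction(3, rng)
# One admissible instance of the uncertainties in (7).
dA, dA1, dA2 = M1 @ F @ N1, M1 @ F @ N2, M1 @ F @ N3

# Check (8): I - F^T F must be positive semidefinite (up to rounding).
assert np.all(np.linalg.eigvalsh(np.eye(3) - F.T @ F) >= -1e-12)
print(np.linalg.norm(dA, 2))
```

Since $\|F_i(k)\|\le 1$, every such sample obeys the norm bound $\|\Delta A_i(k)\|\le\|M_{1i}\|\,\|N_{1i}\|$, which is what makes the robust analysis tractable.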
The parameter uncertainties $\Delta A_i(k)$, $\Delta A_{1i}(k)$, $\Delta A_{2i}(k)$, $\Delta D_i(k)$, $\Delta D_{1i}(k)$ and $\Delta D_{2i}(k)$ are said to be admissible if both (6) and (7) hold.

In this paper, we suppose the measurements of neural network (1) are described by

$$y(k)=E_i(k)x(k)+\phi(k,x(k)) \qquad (9)$$

where

$$E_i(k)=E_i+\Delta E_i(k) \qquad (10)$$

$y(k)=[y_1(k),y_2(k),\ldots,y_m(k)]^T\in\mathbb{R}^m$ is the measurement output, $\phi(k,x(k))\in\mathbb{R}^m$ is a nonlinear disturbance on the neural network outputs, and $E_i\in\mathbb{R}^{m\times n}$ is a known constant matrix. The norm-bounded parameter uncertainty $\Delta E_i(k)$ is supposed to satisfy

$$\Delta E_i(k)=M_{3i}F_{1i}(k)N_{4i} \qquad (11)$$

where $F_{1i}(\cdot):\mathbb{R}\to\mathbb{R}^{k\times l}$ is an unknown time-varying matrix function satisfying

$$F_{1i}^T(k)F_{1i}(k)\le I,\qquad \forall\, k. \qquad (12)$$
Remark 1 The parameter uncertainties discussed in [6, 15, 32, 41] exist only in the state equation. However, owing to variability in measurement errors or quantization errors, parameter uncertainties can also appear in the measurement of the neural network. Therefore, in this paper the parameter uncertainties are allowed to enter both the state and the measurement equations, so the network measurement model $y(k)$ considered here is more general than those in [6, 15, 32, 41].

The following assumptions are essential in establishing our results.

Assumption 1 The neuron-dependent nonlinear disturbance $\phi(k,x(k))$ in (9) is assumed to satisfy the following Lipschitz condition:

$$|\phi(k,s_1)-\phi(k,\tilde{s}_1)|\le |W(s_1-\tilde{s}_1)| \qquad (13)$$

where $W$ is a known constant matrix.

Assumption 2 The neuron activation function $F(\cdot)$ in (1) is supposed to satisfy the following sector-bounded condition:
$$[F(x)-F(y)-R_1(x-y)]^T[F(x)-F(y)-R_2(x-y)]\le 0,\qquad \forall\, x,y\in\mathbb{R}^n \qquad (14)$$

where $R_1$ and $R_2$ are known constant matrices.

Remark 2 The neuron activation functions $F(\cdot)$ in [1, 2, 10, 13–15, 22–24, 27, 30, 32, 33, 39, 40] satisfy Assumption 1, while Assumption 2 is discussed in [6, 38, 41]. As pointed out in [6], when $R_1=R_2=R$, condition (14) becomes

$$[F(x)-F(y)]^T[F(x)-F(y)]\le (x-y)^TR^TR(x-y),\qquad \forall\, x,y\in\mathbb{R}^n \qquad (15)$$

Obviously, condition (15) is equivalent to (13); that is, Assumption 1 is only a special case of Assumption 2. Therefore, we choose the less restrictive and more general sector-bounded condition in this paper.

To estimate the neuron state, we construct the following full-order state estimator for neural network (1):

$$\begin{aligned} \hat{x}(k+1) ={}& A_i(k)\hat{x}(k)+A_{1i}(k)F(\hat{x}(k))+A_{2i}(k)F(\hat{x}(k-d(k)))\\ &+[D_i(k)\hat{x}(k)+D_{1i}(k)F(\hat{x}(k))+D_{2i}(k)F(\hat{x}(k-d(k)))]w(k)\\ &+K_i[y(k)-E_i(k)\hat{x}(k)-\phi(k,\hat{x}(k))] \end{aligned} \qquad (16)$$

where $\hat{x}(k)$ is the state of the estimator and $K_i\in\mathbb{R}^{n\times m}$ are the estimator gain matrices to be determined. The objective of this paper is to design suitable $K_i$ so that $\hat{x}(k)$ asymptotically approaches $x(k)$.

Define the estimation error $e(k)=x(k)-\hat{x}(k)$. Then, by virtue of (1), (9) and (16), we obtain the following estimation error system:

$$\begin{aligned} e(k+1) ={}& (A_i(k)-K_iE_i(k))e(k)+A_{1i}(k)F(e(k))+A_{2i}(k)F(e(k-d(k)))\\ &+[D_i(k)e(k)+D_{1i}(k)F(e(k))+D_{2i}(k)F(e(k-d(k)))]w(k)-K_i\phi(k,e(k)) \end{aligned} \qquad (17)$$

where

$$F(e(k))=F(x(k))-F(\hat{x}(k)),\quad F(e(k-d(k)))=F(x(k-d(k)))-F(\hat{x}(k-d(k))),\quad \phi(k,e(k))=\phi(k,x(k))-\phi(k,\hat{x}(k))$$

Here $\phi(k,e(k))$ satisfies Assumption 1, and $F(e(k))$, $F(e(k-d(k)))$ satisfy the following assumption.

Assumption 3 The functions $F(e(k))$ and $F(e(k-d(k)))$ satisfy the sector-bounded condition (14) and

$$F(0)=0 \qquad (18)$$

It is obvious that, under (18), system (17) admits the trivial solution $e(k)\equiv 0$.

Definition 1 The estimation error system (17) is said to be asymptotically mean-square stable if

$$\lim_{k\to\infty}\mathbb{E}\{|e(k)|^2\}=0$$

To obtain our results in terms of LMIs, we need the following lemma.

Lemma 1 [4] For any vectors $x,y\in\mathbb{R}^n$, any scalar $\varepsilon>0$, and real matrices $D$, $F$, $E$ of appropriate dimensions with $F^TF\le I$, the following inequality holds:

$$2x^TDFEy\le \varepsilon^{-1}x^TDD^Tx+\varepsilon\, y^TEE^Ty$$

3 Main results

Theorem 1 Consider the neural network (1). The corresponding estimation error system (17) is asymptotically mean-square stable for all admissible uncertainties satisfying (7) and (11) if there exist symmetric positive definite matrices $P_i>0$, $Q>0$, matrices $T_i$, and scalars $q_i>0$, $\varepsilon_{1i}>0$, $\varepsilon_{2i}>0$, $k_{1i}>0$, $k_{2i}>0$ such that the following LMI holds:

$$\Xi_i=\begin{bmatrix} \Xi_{1i} & \Xi_{2i} & \Xi_{4i} & \Xi_{6i} & \Xi_{6i}\\ * & \Xi_{3i} & \Xi_{5i} & \Xi_{7i} & \Xi_{9i}\\ * & * & \Xi_{3i} & \Xi_{8i} & \Xi_{10i}\\ * & * & * & -\varepsilon_{1i}I & 0\\ * & * & * & * & -\varepsilon_{2i}I \end{bmatrix}<0 \qquad (19)$$

where

$$\Xi_{1i}=\begin{bmatrix} \Xi_{11i} & 0 & k_{1i}H_2+\varepsilon_{1i}N_{1i}^TN_{2i} & \varepsilon_{1i}N_{1i}^TN_{3i} & 0\\ * & -Q-k_{2i}H_1 & 0 & k_{2i}H_2 & 0\\ * & * & -k_{1i}I+\varepsilon_{1i}N_{2i}^TN_{2i} & \varepsilon_{1i}N_{2i}^TN_{3i} & 0\\ * & * & * & -k_{2i}I+\varepsilon_{1i}N_{3i}^TN_{3i} & 0\\ * & * & * & * & -q_iI \end{bmatrix}$$

$$\Xi_{11i}=(d_M-d_m+1)Q-P_i-k_{1i}H_1+q_iW^TW+\varepsilon_{1i}N_{1i}^TN_{1i}+\varepsilon_{2i}N_{4i}^TN_{4i}$$

$$\Xi_{2i}=\begin{bmatrix} A_i^TP_i-E_i^TT_i^T\\ 0\\ A_{1i}^TP_i\\ A_{2i}^TP_i\\ -T_i^T \end{bmatrix}[I,I,\ldots,I],\qquad \Xi_{4i}=\begin{bmatrix} D_i^TP_i\\ 0\\ D_{1i}^TP_i\\ D_{2i}^TP_i\\ 0 \end{bmatrix}[I,I,\ldots,I]$$

$$\Xi_{3i}=\mathrm{diag}\{p_{i1}^{-1}(-2P_i+P_1),\ p_{i2}^{-1}(-2P_i+P_2),\ \ldots,\ p_{iN}^{-1}(-2P_i+P_N)\}$$

$$\Xi_{5i}=0_{N\times N},\qquad \Xi_{6i}=[0\ 0\ 0\ 0\ 0]^T$$

$$\Xi_{7i}=P_iM_{1i}[I,I,\ldots,I]^T,\qquad \Xi_{8i}=P_iM_{2i}[I,I,\ldots,I]^T,\qquad \Xi_{9i}=-T_iM_{3i}[I,I,\ldots,I]^T,\qquad \Xi_{10i}=0_{N\times 1}$$

Moreover, the estimator gain matrices $K_i$ are given by

$$K_i=P_i^{-1}T_i \qquad (20)$$

Proof For convenience, the following notations are adopted:
$$\bar{P}_i=\sum_{j=1}^{N}p_{ij}P_j \qquad (21)$$

$$f(k)=(A_i(k)-K_iE_i(k))e(k)+A_{1i}(k)F(e(k))+A_{2i}(k)F(e(k-d(k)))-K_i\phi(k,e(k)) \qquad (22)$$

$$g(k)=D_i(k)e(k)+D_{1i}(k)F(e(k))+D_{2i}(k)F(e(k-d(k))) \qquad (23)$$

Choose the following Lyapunov functional:

$$V(k)=V_1(k)+V_2(k)+V_3(k) \qquad (24)$$

where

$$V_1(k)=e^T(k)P_ie(k),\qquad V_2(k)=\sum_{i=k-d(k)}^{k-1}e^T(i)Qe(i),\qquad V_3(k)=\sum_{j=k-d_M+1}^{k-d_m}\sum_{i=j}^{k-1}e^T(i)Qe(i)$$

Calculating the difference of $V(k)$ and taking the mathematical expectation, we have

$$\mathbb{E}\{\Delta V(k)\}=\mathbb{E}\{\Delta V_1(k)\}+\mathbb{E}\{\Delta V_2(k)\}+\mathbb{E}\{\Delta V_3(k)\} \qquad (25)$$

where

$$\begin{aligned} \mathbb{E}\{\Delta V_1(k)\} &=\mathbb{E}[V_1(e(k+1),r_{k+1})\mid e(k),r_k=i]-V_1(e(k),r_k=i)\\ &=\mathbb{E}\{f^T(k)\bar{P}_if(k)+g^T(k)\bar{P}_ig(k)-e^T(k)P_ie(k)\}\\ &=\mathbb{E}\{\xi^T(k)\Upsilon_1^T\bar{P}_i\Upsilon_1\xi(k)+\xi^T(k)\Upsilon_2^T\bar{P}_i\Upsilon_2\xi(k)-e^T(k)P_ie(k)\} \end{aligned} \qquad (26)$$

$$\begin{aligned} \mathbb{E}\{\Delta V_2(k)\} &=\mathbb{E}\Big\{e^T(k)Qe(k)-e^T(k-d(k))Qe(k-d(k))+\sum_{i=k+1-d(k+1)}^{k-1}e^T(i)Qe(i)-\sum_{i=k-d(k)+1}^{k-1}e^T(i)Qe(i)\Big\}\\ &\le \mathbb{E}\Big\{e^T(k)Qe(k)-e^T(k-d(k))Qe(k-d(k))+\sum_{i=k-d_M+1}^{k-d_m}e^T(i)Qe(i)\Big\} \end{aligned} \qquad (27)$$

$$\mathbb{E}\{\Delta V_3(k)\}=\mathbb{E}\Big\{(d_M-d_m)e^T(k)Qe(k)-\sum_{j=k-d_M+1}^{k-d_m}e^T(j)Qe(j)\Big\} \qquad (28)$$

and

$$\Upsilon_1=[\,A_i(k)-K_iE_i(k)\quad 0\quad A_{1i}(k)\quad A_{2i}(k)\quad -K_i\,] \qquad (29)$$

$$\Upsilon_2=[\,D_i(k)\quad 0\quad D_{1i}(k)\quad D_{2i}(k)\quad 0\,] \qquad (30)$$

$$\xi(k)=[\,e^T(k)\quad e^T(k-d(k))\quad F^T(e(k))\quad F^T(e(k-d(k)))\quad \phi^T(k,e(k))\,]^T \qquad (31)$$

From Assumptions 1 and 2, for any scalars $q_i>0$, $k_{1i}>0$ and $k_{2i}>0$, we have

$$q_i[\phi^T(k,e(k))\phi(k,e(k))-e^T(k)W^TWe(k)]\le 0 \qquad (32)$$

$$k_{1i}\begin{bmatrix} e(k)\\ F(e(k)) \end{bmatrix}^T \begin{bmatrix} H_1 & -H_2\\ * & I \end{bmatrix} \begin{bmatrix} e(k)\\ F(e(k)) \end{bmatrix}\le 0 \qquad (33)$$

$$k_{2i}\begin{bmatrix} e(k-d(k))\\ F(e(k-d(k))) \end{bmatrix}^T \begin{bmatrix} H_1 & -H_2\\ * & I \end{bmatrix} \begin{bmatrix} e(k-d(k))\\ F(e(k-d(k))) \end{bmatrix}\le 0 \qquad (34)$$

where

$$H_1=\frac{R_1^TR_2+R_2^TR_1}{2},\qquad H_2=\frac{R_1^T+R_2^T}{2} \qquad (35)$$

According to (21)–(34), we have

$$\mathbb{E}[V(e(k+1),r_{k+1})\mid e(k),r_k=i]-V(e(k),r_k=i)\le \xi^T(k)\Theta_i\xi(k) \qquad (36)$$

where

$$\Theta_i=\Theta_{1i}-\Theta_{2i}\Theta_{3i}^{-1}\Theta_{2i}^T-\Theta_{4i}\Theta_{3i}^{-1}\Theta_{4i}^T$$

$$\Theta_{1i}=\begin{bmatrix} \Theta_{11i} & 0 & k_{1i}H_2 & 0 & 0\\ * & -Q-k_{2i}H_1 & 0 & k_{2i}H_2 & 0\\ * & * & -k_{1i}I & 0 & 0\\ * & * & * & -k_{2i}I & 0\\ * & * & * & * & -q_iI \end{bmatrix}$$

$$\Theta_{11i}=(d_M-d_m+1)Q-P_i-k_{1i}H_1+q_iW^TW$$

$$\Theta_{2i}=\begin{bmatrix} (A_i(k)-K_iE_i(k))^T\\ 0\\ A_{1i}^T(k)\\ A_{2i}^T(k)\\ -K_i^T \end{bmatrix}[I,I,\ldots,I],\qquad \Theta_{4i}=\begin{bmatrix} D_i^T(k)\\ 0\\ D_{1i}^T(k)\\ D_{2i}^T(k)\\ 0 \end{bmatrix}[I,I,\ldots,I]$$

$$\Theta_{3i}=-\mathrm{diag}\{p_{i1}^{-1}P_1^{-1},\ p_{i2}^{-1}P_2^{-1},\ \ldots,\ p_{iN}^{-1}P_N^{-1}\},\qquad \Theta_{5i}=0_{N\times N}$$

It remains to show that $\Theta_i<0$. By the Schur complement, $\Theta_i<0$ is equivalent to

$$\begin{bmatrix} \Theta_{1i} & \Theta_{2i} & \Theta_{4i}\\ * & \Theta_{3i} & \Theta_{5i}\\ * & * & \Theta_{3i} \end{bmatrix}<0 \qquad (37)$$

From (20), we have $T_i=P_iK_i$. Noting that $P_j>0$, we obtain $(P_j-P_i)P_j^{-1}(P_j-P_i)\ge 0$, which is equivalent to

$$-P_iP_j^{-1}P_i\le -2P_i+P_j,\qquad j=1,2,\ldots,N$$

Define the matrix $J=\mathrm{diag}\{J_1,J_2,J_2\}$, where

$$J_1=\mathrm{diag}\{I,I,I,I,I\},\qquad J_2=\mathrm{diag}\{P_i,P_i,\ldots,P_i\}$$

Pre- and post-multiplying (37) by $J$, using the bound above, and separating the parameter uncertainties, (37) holds if

$$\begin{bmatrix} \Theta_{1i} & \Xi_{2i} & \Xi_{4i}\\ * & \Xi_{3i} & \Xi_{5i}\\ * & * & \Xi_{3i} \end{bmatrix}+\Delta\Theta_i<0 \qquad (38)$$

where

$$\Delta\Theta_i=\begin{bmatrix} 0_{5\times 5} & \Delta\Theta_{1i} & \Delta\Theta_{2i}\\ * & 0_{N\times N} & 0_{N\times N}\\ * & * & 0_{N\times N} \end{bmatrix}$$

$$\Delta\Theta_{1i}=\begin{bmatrix} \Delta A_i^T(k)P_i-\Delta E_i^T(k)T_i^T\\ 0\\ \Delta A_{1i}^T(k)P_i\\ \Delta A_{2i}^T(k)P_i\\ 0 \end{bmatrix}[I,I,\ldots,I],\qquad \Delta\Theta_{2i}=\begin{bmatrix} \Delta D_i^T(k)P_i\\ 0\\ \Delta D_{1i}^T(k)P_i\\ \Delta D_{2i}^T(k)P_i\\ 0 \end{bmatrix}[I,I,\ldots,I]$$

On the basis of (7), (10) and Lemma 1,

$$\Delta\Theta_i=v_1F_i(k)v_2+v_2^TF_i^T(k)v_1^T+v_3F_{1i}(k)v_4+v_4^TF_{1i}^T(k)v_3^T \le \varepsilon_{1i}^{-1}v_1v_1^T+\varepsilon_{1i}v_2^Tv_2+\varepsilon_{2i}^{-1}v_3v_3^T+\varepsilon_{2i}v_4^Tv_4 \qquad (39)$$

where

$$v_1=[\,\Xi_{6i}^T\quad M_{1i}^TP_i\ \cdots\ M_{1i}^TP_i\quad M_{2i}^TP_i\ \cdots\ M_{2i}^TP_i\,]^T,\qquad v_2=[\,N_{1i}\ \ 0\ \ N_{2i}\ \ N_{3i}\ \ 0\quad 0\ \cdots\ 0\,]$$

$$v_3=[\,\Xi_{6i}^T\quad -M_{3i}^TT_i^T\ \cdots\ -M_{3i}^TT_i^T\quad 0_{1\times N}\,]^T,\qquad v_4=[\,N_{4i}\ \ 0\ \ 0\ \ 0\ \cdots\ 0\,]$$

Substituting (39) into (38) and applying the Schur complement, the resulting condition is equivalent to (19), which guarantees $\Theta_i<0$. Let $\beta=\max_{i\in S}\{\lambda_{\max}(\Theta_i)\}$; obviously $\beta<0$. From (36), we have

$$\mathbb{E}[V(e(k+1),r_{k+1})\mid e(k),r_k=i]-V(e(k),r_k=i)\le \beta|e(k)|^2 \qquad (40)$$

which implies

$$\mathbb{E}[V(e(k+1),r_{k+1})]-\mathbb{E}[V(e(k),r_k)]\le \beta\,\mathbb{E}[|e(k)|^2] \qquad (41)$$

For any positive integer $s$, it follows from (41) that

$$\mathbb{E}[V(e(s+1),s+1)]-\mathbb{E}[V(e(0),0)]\le \beta\sum_{k=0}^{s}\mathbb{E}[|e(k)|^2] \qquad (42)$$

which indicates that

$$\sum_{k=0}^{s}\mathbb{E}[|e(k)|^2]\le \beta^{-1}\big(\mathbb{E}[V(e(s+1),s+1)]-\mathbb{E}[V(e(0),0)]\big)\le -\beta^{-1}\mathbb{E}[V(e(0),0)] \qquad (43)$$

This implies that $\sum_{k=0}^{\infty}\mathbb{E}[|e(k)|^2]$ is convergent, and hence $\lim_{k\to\infty}\mathbb{E}[|e(k)|^2]=0$. This completes the proof. □

Remark 3 As is well known, stochastic systems are extensively used in weather forecasting, astrophysics, demographic theory, operational research and the like; hence, Theorem 1 is established in a stochastic setting. However, the state estimators studied for neural networks in [1, 2, 6, 10, 13–15, 19, 22, 25, 27, 30, 32, 33, 39, 40, 42, 45, 47] do not concern stochastic neural networks. In addition, it should be pointed out that the results in [19, 25, 32, 39, 48] are developed in the context of continuous-time systems; by contrast, the corresponding works for the discrete-time case are relatively few [4, 7, 23, 24, 27, 30, 38, 41, 45]. In practical applications, sampled-data control methods are widely adopted in many control systems, such as industrial furnace control systems, and these require a discretization of the continuous system. Thus, it is of great practical value to consider discrete-time neural networks.

Remark 4 As pointed out in [14], in many practical realizations of neural networks, the firing rates and the weight coefficients of the neurons hinge on certain resistance and capacitance values, which are subject to uncertainties. It is hence of great significance to take parameter uncertainties into consideration when analyzing neural networks. However, the results in [1, 2, 4, 7, 10, 13, 15, 19, 24, 25, 27, 30, 32, 33, 39, 40, 42, 45, 48] fail to deal with this case. In this sense, the proposed Theorem 1 is more general and practical than these existing results.

If the parameter uncertainties are neglected, neural network (1) reduces to the following nominal system:

$$x(k+1)=A_ix(k)+A_{1i}F(x(k))+A_{2i}F(x(k-d(k)))+[D_ix(k)+D_{1i}F(x(k))+D_{2i}F(x(k-d(k)))]w(k) \qquad (44)$$

the state estimator (16) becomes

$$\begin{aligned} \hat{x}(k+1) ={}& A_i\hat{x}(k)+A_{1i}F(\hat{x}(k))+A_{2i}F(\hat{x}(k-d(k)))\\ &+[D_i\hat{x}(k)+D_{1i}F(\hat{x}(k))+D_{2i}F(\hat{x}(k-d(k)))]w(k)+K_i[y(k)-E_i\hat{x}(k)-\phi(k,\hat{x}(k))] \end{aligned} \qquad (45)$$

and the estimation error system (17) takes the form

$$\begin{aligned} e(k+1) ={}& (A_i-K_iE_i)e(k)+A_{1i}F(e(k))+A_{2i}F(e(k-d(k)))\\ &+[D_ie(k)+D_{1i}F(e(k))+D_{2i}F(e(k-d(k)))]w(k)-K_i\phi(k,e(k)) \end{aligned} \qquad (46)$$

Therefore, we can obtain the following state estimation result for neural network (44).

Corollary 1 Consider the neural network (44) [neural network (1) without parameter uncertainties]. The corresponding estimation error system (46) is asymptotically mean-square stable if there exist symmetric positive definite matrices $P_i>0$, $Q>0$, matrices $T_i$, and scalars $q_i>0$, $k_{1i}>0$, $k_{2i}>0$ such that the following LMI holds:

$$\Gamma_i=\begin{bmatrix} \Gamma_{1i} & \Xi_{2i} & \Xi_{4i}\\ * & \Xi_{3i} & \Xi_{5i}\\ * & * & \Xi_{3i} \end{bmatrix}<0 \qquad (47)$$

where

$$\Gamma_{1i}=\begin{bmatrix} \Gamma_{11i} & 0 & k_{1i}H_2 & 0 & 0\\ * & -Q-k_{2i}H_1 & 0 & k_{2i}H_2 & 0\\ * & * & -k_{1i}I & 0 & 0\\ * & * & * & -k_{2i}I & 0\\ * & * & * & * & -q_iI \end{bmatrix}$$

$$\Gamma_{11i}=(d_M-d_m+1)Q-P_i-k_{1i}H_1+q_iW^TW$$

and $\Xi_{2i}$, $\Xi_{3i}$, $\Xi_{4i}$, $\Xi_{5i}$ are defined in Theorem 1. Moreover, the estimator gain matrices $K_i$ are given by (20).

Proof The proof can be carried out along the same lines as that of Theorem 1. □
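Once gain matrices are available, the mean-square behavior of an error recursion of the form (46) can be checked by direct Monte Carlo simulation of the stochastic difference equation. The sketch below uses hypothetical single-mode matrices (no Markovian jumps, a constant delay, and φ ≡ 0, purely to keep the illustration short), with the sector-bounded activation F(x) = ¼(|x+1| − |x−1|) used later in the examples; the gain K is an assumed value, not one computed from the paper's LMIs:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical single-mode data (illustrative only, not the paper's example values).
A  = np.diag([0.4, 0.3]);  A1 = 0.1*np.ones((2, 2)); A2 = 0.1*np.eye(2)
D  = np.diag([0.1, 0.2]);  D1 = 0.05*np.eye(2);      D2 = 0.05*np.eye(2)
E  = np.diag([0.2, 0.1]);  K  = np.diag([0.3, 0.3])  # assumed estimator gain
F  = lambda x: 0.25*(np.abs(x + 1) - np.abs(x - 1))  # sector-bounded activation
d  = 3                                               # constant delay, for brevity

def run(e0, steps):
    hist = [e0]*(d + 1)                              # e(k-d), ..., e(k)
    for _ in range(steps):
        e, ed = hist[-1], hist[-d-1]
        w = rng.standard_normal()                    # E{w}=0, E{w^2}=1 as in (2)
        e_next = ((A - K @ E) @ e + A1 @ F(e) + A2 @ F(ed)
                  + (D @ e + D1 @ F(e) + D2 @ F(ed)) * w)
        hist.append(e_next)
    return hist

hist = run(np.array([2.0, -1.5]), steps=200)
# Mean-square stability: the error norm should decay toward zero.
print(np.linalg.norm(hist[-1]))
```

With these (deliberately well-conditioned) values the linear gains are small enough that the error contracts at every step, so the trajectory decays rapidly; an unstable choice of K would make the printed norm grow instead.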
Remark 5 Theorem 1 investigates the state estimation for uncertain discrete-time stochastic delayed neural networks with Markovian jumping parameters. In practice, Markov chains are often used for load prediction of power systems and for premium adjustment in bonus-malus systems, yet only [2, 6, 7, 22, 45] investigate state estimators for neural networks with Markovian jumping parameters. Hence, Theorem 1 is more general and practical than the results of [1, 13, 15, 25, 33, 39, 40, 42, 47].

Remark 6 The neuron-dependent nonlinear disturbance $\phi(k,x(k))$ in Theorem 1 is assumed to satisfy Assumption 1. Suppose instead that $\phi(k,x(k))$ satisfies Assumption 2; in other words, $\phi(k,e(k))$ in (9) satisfies

$$[\phi(k,e(k))-R_3e(k)]^T[\phi(k,e(k))-R_4e(k)]\le 0 \qquad (48)$$

or, equivalently, for any scalar $q_i>0$,

$$-q_i\begin{bmatrix} e(k)\\ \phi(k,e(k)) \end{bmatrix}^T \begin{bmatrix} G_1 & -G_2\\ * & I \end{bmatrix} \begin{bmatrix} e(k)\\ \phi(k,e(k)) \end{bmatrix}\ge 0 \qquad (49)$$

where

$$G_1=\frac{R_3^TR_4+R_4^TR_3}{2},\qquad G_2=\frac{R_3^T+R_4^T}{2} \qquad (50)$$

and $R_3$ and $R_4$ are known matrices. Then we can obtain the following state estimation result for neural network (1), whose neuron activation function and neuron-dependent nonlinear disturbance $\phi(k,x(k))$ in the measurement equation (9) both satisfy Assumption 2 (sector-bounded conditions).

Theorem 2 Consider the neural network (1). The corresponding estimation error system (17) is asymptotically mean-square stable for all admissible uncertainties satisfying (7) and (11) if there exist symmetric positive definite matrices $P_i>0$, $Q>0$, matrices $T_i$, and scalars $q_i>0$, $\varepsilon_{1i}>0$, $\varepsilon_{2i}>0$, $k_{1i}>0$, $k_{2i}>0$ such that the following LMI holds:

$$\Omega_i=\begin{bmatrix} \Omega_{1i} & \Omega_{2i} & \Omega_{4i} & \Omega_{6i} & \Omega_{6i}\\ * & \Omega_{3i} & \Omega_{5i} & \Omega_{7i} & \Omega_{9i}\\ * & * & \Omega_{3i} & \Omega_{8i} & \Omega_{10i}\\ * & * & * & -\varepsilon_{1i}I & 0\\ * & * & * & * & -\varepsilon_{2i}I \end{bmatrix}<0 \qquad (51)$$

where

$$\Omega_{1i}=\begin{bmatrix} \Omega_{11i} & 0 & k_{1i}H_2+\varepsilon_{1i}N_{1i}^TN_{2i} & \varepsilon_{1i}N_{1i}^TN_{3i} & q_iG_2\\ * & -Q-k_{2i}H_1 & 0 & k_{2i}H_2 & 0\\ * & * & -k_{1i}I+\varepsilon_{1i}N_{2i}^TN_{2i} & \varepsilon_{1i}N_{2i}^TN_{3i} & 0\\ * & * & * & -k_{2i}I+\varepsilon_{1i}N_{3i}^TN_{3i} & 0\\ * & * & * & * & -q_iI \end{bmatrix}$$

$$\Omega_{11i}=(d_M-d_m+1)Q-P_i-k_{1i}H_1-q_iG_1+\varepsilon_{1i}N_{1i}^TN_{1i}+\varepsilon_{2i}N_{4i}^TN_{4i}$$

$$\Omega_{2i}=\begin{bmatrix} A_i^TP_i-E_i^TT_i^T\\ 0\\ A_{1i}^TP_i\\ A_{2i}^TP_i\\ -T_i^T \end{bmatrix}[I,I,\ldots,I],\qquad \Omega_{4i}=\begin{bmatrix} D_i^TP_i\\ 0\\ D_{1i}^TP_i\\ D_{2i}^TP_i\\ 0 \end{bmatrix}[I,I,\ldots,I]$$

$$\Omega_{3i}=\mathrm{diag}\{p_{i1}^{-1}(-2P_i+P_1),\ \ldots,\ p_{iN}^{-1}(-2P_i+P_N)\},\qquad \Omega_{5i}=0_{N\times N},\qquad \Omega_{6i}=[0\ 0\ 0\ 0\ 0]^T$$

$$\Omega_{7i}=P_iM_{1i}[I,I,\ldots,I]^T,\qquad \Omega_{8i}=P_iM_{2i}[I,I,\ldots,I]^T,\qquad \Omega_{9i}=-T_iM_{3i}[I,I,\ldots,I]^T,\qquad \Omega_{10i}=0_{N\times 1}$$

Moreover, the estimator gain matrices $K_i$ are given by (20).

Proof Replace (32) with (49); the rest of the proof follows the same lines as that of Theorem 1. □

Remark 7 The results in Theorem 2 can easily be extended to the case where parameter uncertainties are ignored and to the case where Markovian jumping parameters are not taken into account, respectively. Thus, we have the following corollary.

Corollary 2 Consider the neural network (44) [neural network (1) without parameter uncertainties]. The corresponding estimation error system (46) is asymptotically mean-square stable if there exist symmetric positive definite matrices $P_i>0$, $Q>0$, matrices $T_i$, and scalars $q_i>0$, $k_{1i}>0$, $k_{2i}>0$ such that the following LMI holds:

$$\Lambda_i=\begin{bmatrix} \Lambda_{1i} & \Omega_{2i} & \Omega_{4i}\\ * & \Omega_{3i} & \Omega_{5i}\\ * & * & \Omega_{3i} \end{bmatrix}<0 \qquad (52)$$

where

$$\Lambda_{1i}=\begin{bmatrix} \Lambda_{11i} & 0 & k_{1i}H_2 & 0 & q_iG_2\\ * & -Q-k_{2i}H_1 & 0 & k_{2i}H_2 & 0\\ * & * & -k_{1i}I & 0 & 0\\ * & * & * & -k_{2i}I & 0\\ * & * & * & * & -q_iI \end{bmatrix}$$

$$\Lambda_{11i}=(d_M-d_m+1)Q-P_i-k_{1i}H_1-q_iG_1$$

and $\Omega_{2i}$, $\Omega_{3i}$, $\Omega_{4i}$, $\Omega_{5i}$ are defined in Theorem 2. Moreover, the estimator gain matrices $K_i$ are given by (20).

Remark 8 It should be noted that all of the conditions proposed above are formulated in terms of LMIs. Hence, the estimator gain matrices $K_i$ can be effectively obtained by solving the corresponding LMI, which can readily be done with the Matlab LMI Control Toolbox.
4 Numerical examples

Example 1 Consider the neural network (44) with

$$A_1=\begin{bmatrix}0.4&0&0\\0&0.4&0\\0&0&0.4\end{bmatrix},\quad A_{11}=\begin{bmatrix}0.2&0.1&0.2\\0&0.2&0.2\\0.1&0.1&0.2\end{bmatrix},\quad A_{21}=\begin{bmatrix}0.2&0.2&0.1\\0.2&0.1&0.1\\0.1&0.2&0.3\end{bmatrix}$$

$$D_1=\begin{bmatrix}0.2&0&0\\0&0.3&0\\0&0&0.2\end{bmatrix},\quad D_{11}=\begin{bmatrix}0.1&0.2&0.1\\0.3&0.2&0.1\\0.2&0.1&0.3\end{bmatrix},\quad D_{21}=\begin{bmatrix}0.1&0&0.2\\0&0.1&0\\0.2&0.3&0.1\end{bmatrix}$$

$$A_2=\begin{bmatrix}0.4&0&0\\0&0.3&0\\0&0&0.5\end{bmatrix},\quad A_{12}=\begin{bmatrix}0.2&0.2&0.1\\0&0.3&0.2\\0.2&0.1&0.2\end{bmatrix},\quad A_{22}=\begin{bmatrix}0.2&0.1&0.2\\0.1&0.3&0.2\\0.3&0.1&0.2\end{bmatrix}$$

$$D_2=\begin{bmatrix}0.1&0&0\\0&0.2&0\\0&0&0.3\end{bmatrix},\quad D_{12}=\begin{bmatrix}0.2&0&0.1\\0.1&0.3&0.2\\0.1&0.2&0\end{bmatrix},\quad D_{22}=\begin{bmatrix}0.1&0.1&0.2\\0.2&0&0.1\\0&0.1&0.1\end{bmatrix}$$

$$E_1=\begin{bmatrix}0.2&0&0.1\\0&0.1&0.1\\0.1&0&0.2\end{bmatrix},\quad E_2=\begin{bmatrix}0.1&0&0\\0&0.1&0\\0&0&0.2\end{bmatrix}$$

The transition probability matrix of the stochastic process $r_k$ is

$$P=\begin{bmatrix}1/3&2/3\\3/4&1/4\end{bmatrix}$$

When the neuron activation function is chosen as $F(x)=\frac{1}{4}(|x+1|-|x-1|)$, we can take $R_1=0$ and $R_2=0.5I$; from (35) we have $H_1=0$ and $H_2=0.25I$. The neuron-dependent nonlinear disturbance is chosen as $\phi(k,e(k))=\tanh(e(k))$.

Case 1 The neuron-dependent nonlinear disturbance $\phi(k,x(k))$ in (9) satisfies the sector-bounded condition (48) with $R_3=0$, $R_4=I$; from (50) we get $G_1=0$ and $G_2=0.5I$. When $d_m=1$, by solving LMI (52), we obtain the maximum allowable value $d_M=6$ and the following estimator gain matrices from (20):

$$K_1=\begin{bmatrix}0.1822&0.0214&0.0281\\0.0026&0.2180&0.0104\\0.0713&0.0002&0.1064\end{bmatrix},\quad K_2=\begin{bmatrix}0.4227&0.1057&0.0338\\0.0042&0.4607&0.0255\\0.0453&0.0059&0.7099\end{bmatrix}$$

Case 2 The neuron-dependent nonlinear disturbance $\phi(k,x(k))$ in (9) satisfies the Lipschitz condition (13) with $W=I$. When $d_m=1$, by solving LMI (47), we obtain the maximum allowable value $d_M=5$ and the following estimator gain matrices from (20):

$$K_1=\begin{bmatrix}0.0056&0.0001&0.0013\\0.0014&0.0019&0.0005\\0.0024&0.0009&0.0019\end{bmatrix},\quad K_2=\begin{bmatrix}0.0016&0.0003&0.0005\\0.0000&0.0010&0.0003\\0.0000&0.0000&0.0021\end{bmatrix}$$
Notice that the maximum allowable value for the Lipschitz condition is $d_M=5$, while for the sector-bounded condition it is $d_M=6$, which illustrates that the sector-bounded condition is less conservative than the Lipschitz condition in this example.

Example 2 Consider neural network (1) with

$$A_1=\begin{bmatrix}0.25&0\\0&0.3\end{bmatrix},\quad A_{11}=\begin{bmatrix}0&0.5\\0.2&0\end{bmatrix},\quad A_{21}=\begin{bmatrix}0.15&0.1\\0.1&0.1\end{bmatrix},\quad D_1=\begin{bmatrix}0.1&0.2\\0.15&0.15\end{bmatrix},\quad D_{11}=\begin{bmatrix}0&0.32\\0.15&0.2\end{bmatrix},\quad D_{21}=\begin{bmatrix}0.1&0.2\\0.05&0.1\end{bmatrix}$$

$$A_2=\begin{bmatrix}0.2&0\\0&0.2\end{bmatrix},\quad A_{12}=\begin{bmatrix}0.1&0.2\\0.15&0.2\end{bmatrix},\quad A_{22}=\begin{bmatrix}0.2&0.1\\0.2&0.1\end{bmatrix},\quad D_2=\begin{bmatrix}0.15&0\\0&0.25\end{bmatrix},\quad D_{12}=\begin{bmatrix}0.22&0\\0.1&0.3\end{bmatrix},\quad D_{22}=\begin{bmatrix}0.1&0.15\\0.2&0\end{bmatrix}$$

$$A_3=\begin{bmatrix}0.2&0\\0&0.3\end{bmatrix},\quad A_{13}=\begin{bmatrix}0.2&0.1\\0.1&0.1\end{bmatrix},\quad A_{23}=\begin{bmatrix}0.1&0.1\\0&0.1\end{bmatrix},\quad D_3=\begin{bmatrix}0.1&0\\0&0.2\end{bmatrix},\quad D_{13}=\begin{bmatrix}0.1&0\\0.1&0.2\end{bmatrix},\quad D_{23}=\begin{bmatrix}0.1&0.2\\0.1&0.1\end{bmatrix}$$

$$E_1=\begin{bmatrix}0.2&0\\0&0.15\end{bmatrix},\quad E_2=\begin{bmatrix}0.1&0\\0&0.1\end{bmatrix},\quad E_3=\begin{bmatrix}0.25&0\\0&0.1\end{bmatrix}$$

$$M_{11}=0.2I,\quad M_{12}=0.15I,\quad M_{13}=0.1I,\qquad M_{21}=0.1I,\quad M_{22}=0.1I,\quad M_{23}=0.15I,\qquad M_{31}=0.1I,\quad M_{32}=0.1I,\quad M_{33}=0.15I$$

$$N_{11}=0.2I,\quad N_{12}=0.18I,\quad N_{13}=0.1I,\qquad N_{21}=0.1I,\quad N_{22}=0.12I,\quad N_{23}=0.15I$$

$$N_{31}=0.15I,\quad N_{32}=0.11I,\quad N_{33}=0.13I,\qquad N_{41}=0.1I,\quad N_{42}=0.05I,\quad N_{43}=0.1I$$

The transition probability matrix of the stochastic process $r_k$ is
$$P=\begin{bmatrix}0.2&0.3&0.5\\0.4&0.2&0.4\\0.1&0.5&0.4\end{bmatrix}$$
2 x2 (k)
1.5
x ˆ2 (k) 1
When the neuron activation function is chosen as FðxÞ ¼ 14 ðjx þ 1j jx 1jÞ, we can obtain R1 ¼ 0, R2 ¼ 0:5I, from (35) we have H1 ¼ 0, H2 ¼ 0:25I. The neuron-dependent nonlinear disturbance is chosen as /ðk; eðkÞÞ ¼ tanhðkÞ. Case 1 The neuron-dependent nonlinear disturbance /ðk; xðkÞÞ in (9) satisfies sector-bounded condition (48), then we have R3 ¼ 0, R4 ¼ I. From (50) we get G1 ¼ 0, G2 ¼ 0:5I. When dm ¼ 1, by solving LMI (51), we get the maximum allowance value of dM ¼ 7 and the following estimator gain matrices from (20): 0:3320 0:0516 K1 ¼ ; 0:0091 0:3270 0:1600 0:1091 ; K2 ¼ 0:0067 0:1332 0:1009 0:0307 K3 ¼ 0:0019 0:3599 Case 2 The neuron-dependent nonlinear disturbance /ðk; xðkÞÞ in (9) satisfies Lipschitz condition (13), then we have W ¼ I. When dm ¼ 1, by solving LMI (19), we get the maximum allowance value of dM ¼ 7 and the following estimator gain matrices from (20): 0:0039 0:0002 K1 ¼ ; 0:0003 0:0006 0:0042 0:0040 ; K2 ¼ 0:0006 0:0047 0:0059 0:0000 K3 ¼ 0:0004 0:0024
0.5 0 −0.5 −1 −1.5 −2 −2.5
0
5
10
15
20 k
25
30
35
40
Fig. 2 State x2 ðkÞ and its estimation x^2 ðkÞ 2.5 e1 (k)
Fig. 3 The error response e1(k) and e2(k)
Fig. 1 State x1(k) and its estimation x̂1(k)
Fig. 4 The modes
The simulation results are plotted in Figs. 1, 2, 3 and 4. Figures 1 and 2 depict the true states x1(k), x2(k) and their estimations x̂1(k), x̂2(k), respectively. Figure 3 shows the estimation errors e1(k) and e2(k). Figure 4 draws one possible realization of the Markovian jumping mode. It is clearly observed from the simulation results that all the expected objectives are well achieved.
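Asymptotic stability of the estimation error system is certified by a Lyapunov-type condition of the kind the LMIs encode. As a hedged numerical illustration (the error-system matrix Ae below is a hypothetical Schur-stable example, not a matrix from the paper), one can verify the discrete-time inequality AeᵀPAe − P < 0 by solving the corresponding Lyapunov equation with scipy:

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Hypothetical Schur-stable error-system matrix (illustrative only).
Ae = np.array([[0.5, 0.1],
               [0.0, 0.4]])
assert np.max(np.abs(np.linalg.eigvals(Ae))) < 1.0  # Schur stability

# Solve Ae^T P Ae - P = -Q for P, with Q = I.
Q = np.eye(2)
P = solve_discrete_lyapunov(Ae.T, Q)

# A symmetric positive definite P satisfying this equation certifies
# asymptotic stability of e(k+1) = Ae e(k).
assert np.allclose(P, P.T)
assert np.all(np.linalg.eigvalsh(P) > 0)
```

The paper's LMI conditions play the same role for the full uncertain, delayed, mode-dependent error dynamics, where a closed-form Lyapunov solve is not available and a feasibility problem is solved instead.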
5 Conclusions

In this paper, the state estimation problem for uncertain discrete-time stochastic neural networks with Markovian switching and time-varying delays has been investigated. The neuron activation function satisfies a sector-bounded condition, and the nonlinear perturbation of the measurement equation satisfies a standard Lipschitz condition; the case in which this perturbation satisfies a sector-bounded condition has also been discussed. Delay-dependent conditions are formulated in terms of LMIs. Finally, numerical examples have been provided to demonstrate the usefulness of the proposed design method and the reduced conservatism of the sector-bounded condition compared with the Lipschitz condition. It should be mentioned that the results obtained in this paper can be extended to fuzzy systems with Markovian jumping parameters [18].

Acknowledgments The authors wish to thank the editor and the anonymous reviewers for their valuable comments and suggestions, which have led to significant improvements in the quality of this manuscript. This work was supported by the Natural Science Foundation of Jiangsu Province (No. BK20130239) and the Research Fund for the Doctoral Program of Higher Education of China (No. 20130094120015).
References

1. Balasubramaniam P, Kalpana M, Rakkiyappan R (2011) State estimation for fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays. Comput Math Appl 62:3959–3972
2. Balasubramaniam P, Lakshmanan S, Theesar S (2010) State estimation for Markovian jumping recurrent neural networks with interval time-varying delays. Nonlinear Dyn 60:661–675
3. Balasubramaniam P, Rakkiyappan R (2008) Global asymptotic stability of stochastic recurrent neural networks with multiple discrete delays and unbounded distributed delays. Appl Math Comput 204:680–686
4. Bao H, Cao J (2011) Delay-distribution-dependent state estimation for discrete-time stochastic neural networks with random delay. Neural Netw 24:19–28
5. Chen B, Yu L, Zhang W (2011) H∞ filtering for Markovian switching genetic regulatory networks with time-delays and stochastic disturbances. Circ Syst Signal Process 30:1231–1252
6. Chen Y, Zheng W (2012) Stochastic state estimation for neural networks with distributed delays and Markovian jump. Neural Netw 25:14–20
7. Chu H, Gao L (2009) State estimation for discrete-time Markov jumping stochastic neural networks with mixed time-delays. In: Proceedings of the Pacific-Asia Conference on Circuits, Communications and System, Chengdu, China, pp 717–721
8. Eddahech A, Briat O, Ayadi M, Vinassa J (2014) Modeling and adaptive control for supercapacitor in automotive applications based on artificial neural networks. Electr Power Syst Res 106:134–141
9. He Q, Liu D, Wu H, Ding S (2014) Robust exponential stability analysis for interval Cohen-Grossberg type BAM neural networks with mixed time delays. Int J Mach Learn Cybern 5:23–38
10. He Y, Wang Q, Wu M, Lin C (2006) Delay-dependent state estimation for delayed neural networks. IEEE Trans Neural Netw 17:1077–1081
11. Hua M, Liu X, Deng F, Fei J (2010) New results on robust exponential stability of uncertain stochastic neural networks with mixed time-varying delays. Neural Process Lett 32:219–233
12. Huang H, Feng G (2009) Delay-dependent H∞ and generalized H2 filtering for delayed neural networks. IEEE Trans Circuits Syst I 56:846–857
13. Huang H, Feng G (2011) State estimation of recurrent neural networks with time-varying delay: a novel delay partition approach. Neurocomputing 74:792–796
14. Huang H, Feng G, Cao J (2008) Robust state estimation for uncertain neural networks with time-varying delay. IEEE Trans Neural Netw 19:1329–1339
15. Huang H, Feng G, Cao J (2010) State estimation for static neural networks with time-varying delay. Neural Netw 23:1202–1207
16. Hunt K, Sbarbaro D, Zbikowski R, Gawthrop P (1992) Neural networks for control systems: a survey. Automatica 28:1083–1112
17. Kwon O, Park M, Park J, Lee S, Cha E (2013) New criteria on delay-dependent stability for discrete-time neural networks with time-varying delays. Neurocomputing 121:185–194
18. Li H, Chen B, Zhou Q, Qian W (2009) Robust stability for uncertain delayed fuzzy Hopfield neural networks with Markovian jumping parameters. IEEE Trans Syst Man Cybern B 39:94–102
19. Li T, Fei S (2007) Exponential state estimation for recurrent neural networks with distributed delays. Neurocomputing 71:428–438
20. Li H, Jing X, Karimi H (2014) Output-feedback-based H∞ control for vehicle suspension systems with control delay. IEEE Trans Ind Electron 61:436–446
21. Li H, Liu H, Gao H, Shi P (2012) Reliable fuzzy control for active suspension systems with actuator delay and fault. IEEE Trans Fuzzy Syst 20:342–357
22. Liang J, Lam J, Wang Z (2009) State estimation for Markov-type genetic regulatory networks with delays and uncertain mode transition rates. Phys Lett A 373:4328–4337
23. Liang J, Wang Z, Liu X (2009) State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: the discrete-time case. IEEE Trans Neural Netw 20:781–793
24. Liao C, Lu C, Zheng K, Ting C (2009) A delay-dependent approach to design state estimator for discrete stochastic recurrent neural network with interval time-varying delays. ICIC Express Lett 3:465–470
25. Liu Y, Wang Z, Liu X (2007) Design of exponential state estimators for neural networks with mixed time delays. Phys Lett A 364:401–412
26. Liu Y, Wang Z, Liu X (2008) Robust H∞ filtering for discrete nonlinear stochastic systems with time-varying delay. J Math Anal Appl 341:318–336
27. Lu C (2008) A delay-range-dependent approach to design state estimation for discrete-time recurrent neural networks with interval time-varying delay. IEEE Trans Circuits Syst II 55:1163–1167
28. Luan X, Liu F, Shi P (2010) H∞ filtering for nonlinear systems via neural networks. J Frankl Inst 347:1035–1046
29. Mohammadian M, Abolmasoumi A, Momeni H (2012) H∞ mode-independent filter design for Markovian jump genetic regulatory networks with time-varying delays. Neurocomputing 87:10–18
30. Mou S, Gao H, Qiang W, Fei Z (2008) State estimation for discrete-time neural networks with time-varying delays. Neurocomputing 72:643–647
31. Ou Y, Shi P, Liu H (2010) A mode-dependent stability criterion for delayed discrete-time stochastic neural networks with Markovian jumping parameters. Neurocomputing 73:1491–1500
32. Park J, Kwon O (2009) Further results on state estimation for neural networks of neutral-type with time-varying delay. Appl Math Comput 208:65–75
33. Park J, Kwon O, Lee S (2008) State estimation for neural networks of neutral-type with interval time-varying delays. Appl Math Comput 203:217–223
34. Syed Ali M (2014) Robust stability of stochastic uncertain recurrent neural networks with Markovian jumping parameters and time-varying delays. Int J Mach Learn Cybern 5(1):13–22
35. Syed Ali M (2014) Stability analysis of Markovian jumping stochastic Cohen-Grossberg neural networks with discrete and distributed time varying delays. Chin Phys B 23(6):060702
36. Syed Ali M (2015) Stability of Markovian jumping recurrent neural networks with discrete and distributed time-varying delays. Neurocomputing 149:1280–1285
37. Syed Ali M, Marudai M (2011) Stochastic stability of discrete-time uncertain recurrent neural networks with Markovian jumping and time-varying delay. Math Comput Model 54(9–10):1979–1988
38. Wang T, Ding Y, Zhang L, Hao K (2013) Robust state estimation for discrete-time stochastic genetic regulatory networks with probabilistic measurement delays. Neurocomputing 111:1–12
39. Wang Z, Ho D, Liu X (2005) State estimation for delayed neural networks. IEEE Trans Neural Netw 16:279–284
40. Wang Z, Liu Y, Liu X (2009) State estimation for jumping recurrent neural networks with discrete and distributed delays. Neural Netw 22:41–48
41. Wang Z, Liu Y, Liu X, Shi Y (2010) Robust state estimation for discrete-time stochastic neural networks with probabilistic measurement delays. Neurocomputing 74:256–264
42. Wang H, Song Q (2010) State estimation for neural networks with mixed interval time-varying delays. Neurocomputing 73:1281–1288
43. Wang W, Zhong S, Liu F (2012) Robust filtering of uncertain stochastic genetic regulatory networks with time-varying delays. Chaos Soliton Fract 45:915–929
44. Wei G, Wang Z, Lam J, Fraser K, Rao G, Liu X (2009) Robust filtering for stochastic genetic regulatory networks with time-varying delay. Math Biosci 220:73–80
45. Wu Z, Su H, Chu J (2010) State estimation for discrete Markovian jumping neural networks with time delay. Neurocomputing 73:2247–2254
46. Zhang C, Chen Y, Wang J (2012) A state estimator of stochastic delayed neural networks. In: Proceedings of 24th Chinese Control and Decision Conference, Taiyuan, China, pp 2829–2832
47. Zhang F, Zhang Y (2013) State estimation of neural networks with both time-varying delays and norm-bounded parameter uncertainties via a delay decomposition approach. Commun Nonlinear Sci Numer Simul 18:3517–3529
48. Zheng C, Zhang Y, Wang Z (2014) Stability analysis of stochastic reaction-diffusion neural networks with Markovian switching and time delays in the leakage terms. Int J Mach Learn Cybern 5:3–12