J Control Theory Appl 2010 8 (1) 40–51 DOI 10.1007/s11768-010-9179-7
Robust disturbance attenuation for uncertain nonlinear networked control systems

Dan HUANG, Sing Kiong NGUANG
(Department of Electrical and Computer Engineering, University of Auckland, Private Bag 92019, Auckland, New Zealand)

Abstract: This paper considers the problem of robust disturbance attenuation for a class of uncertain nonlinear networked control systems. Takagi-Sugeno fuzzy models are first employed to describe the nonlinear plant, and Markov processes are used to model the random network-induced delays and data packet dropouts. Using the Lyapunov-Razumikhin method, a controller is derived for this class of nonlinear systems such that the closed-loop system is stochastically stabilizable with a prescribed disturbance attenuation level. Sufficient conditions for the existence of such a controller are given in terms of the solvability of bilinear matrix inequalities. An iterative algorithm is proposed to convert this non-convex problem into quasi-convex optimization problems, which can be solved effectively by available mathematical tools. The effectiveness of the proposed design methodology is verified by a numerical example.

Keywords: Networked control system; Takagi-Sugeno fuzzy model; Robust disturbance attenuation
1 Introduction
In recent years, due to the expansion of system physical setups and functionality, networked control systems (NCSs) have been introduced into the design of control systems. NCSs are a type of distributed control system in which sensors, actuators, and controllers are interconnected by communication networks. NCSs can improve the efficiency, flexibility, and reliability of integrated applications and reduce the time and cost of installation, reconfiguration, and maintenance. Owing to their low cost, flexibility, and reduced wiring, the use of NCSs is rapidly increasing in industrial applications, including telecommunications, remote process control, altitude control of airplanes, and so on; considerable attention has therefore been devoted to the problem of NCSs [1∼9].

Network-induced delays and data packet dropouts are the two main issues raised in the research on NCSs; see [1, 7∼9]. In an NCS, data are sent through the network in packets. Due to this network characteristic, any continuous-time signals from the plant must first be sampled before being carried over the communication network. Packets can be lost during transmission because of uncertainty and noise in communication channels; packet loss may also occur at the destination when out-of-order delivery takes place. Furthermore, the network-induced delays, which occur while exchanging data among devices connected by the communication network, are also a challenging problem in the control of NCSs. Depending on network characteristics, such as topologies and routing schemes, these delays can be constant, time varying, or even random. They can degrade the performance of control systems and can even destabilize the system. The severity of the network-induced delays is aggravated when data packet dropouts occur during a network transmission.

On the other hand, the study of Markovian jump linear systems has attracted a great deal of attention; see [10∼17].
This class of systems is normally used to model stochastic systems that change from one mode to another randomly or according to certain probabilities. Some of these results [14∼17] were applied to Markovian jump linear systems with mode-dependent time delays. In [17], stabilization of NCSs with sensor-to-controller and controller-to-actuator delays was considered in the discrete-time domain. Given the characteristics of NCSs, the Markov process is an ideal model of the random time delays occurring in the communication network. Furthermore, due to the characteristics of the communication network, network-induced time delays are input delays. It should be noted that for systems with time-varying input delays, it is difficult to analyze H∞ performance or disturbance attenuation based on the gain characterization, because the state variation depends not only on the current but also on the history of the exterior disturbance input.

Motivated by the aforementioned issues, this paper first introduces a new disturbance attenuation notion for systems with input delays. We approximate the nonlinear plant by a Takagi-Sugeno model [18]. This fuzzy modeling is simple and natural. The system dynamics are captured by a set of fuzzy implications that characterize local relations in the state space. The main feature of a Takagi-Sugeno fuzzy model is to express the local dynamics of each fuzzy implication (rule) by a linear system model; the overall fuzzy model of the system is achieved by fuzzy "blending" of these linear system models. In light of such a formulation, this paper proposes a dynamic output feedback controller with robust disturbance attenuation and stability, irrespective of the uncertainties and the network-induced effects, i.e., the network-induced delays and data packet dropouts in the communication channels, which are modeled by Markov processes.
Based on the Lyapunov-Razumikhin method, the existence of a delay-dependent controller for the nonlinear plant is given in terms of the solvability of bilinear matrix inequalities (BMIs). An iterative algorithm is proposed to convert this non-convex problem into quasi-convex optimization problems, which can be solved effectively by available mathematical tools.

This paper is organized as follows. Problem formulation and preliminaries are given in Section 2. Section 3 gives the main results of this paper. An illustrative example is presented in Section 4. Finally, concluding remarks are drawn in Section 5.

Received 4 September 2009. © South China University of Technology and Academy of Mathematics and Systems Science, CAS and Springer-Verlag Berlin Heidelberg 2010
2 Problem formulation and preliminaries
We describe the nonlinear NCS as follows:

$$
\begin{cases}
\dot x(t) = \sum\limits_{i=1}^{r}\mu_i(\nu(t))\big[(A_i+\Delta A_i)x(t) + (B_{1i}+\Delta B_{1i})w(t) + (B_{2i}+\Delta B_{2i})u(t)\big],\\[1mm]
z(t) = \sum\limits_{i=1}^{r}\mu_i(\nu(t))\big[(C_{1i}+\Delta C_{1i})x(t) + (D_{1i}+\Delta D_{1i})u(t)\big],\\[1mm]
y(t) = \sum\limits_{i=1}^{r}\mu_i(\nu(t))\big[(C_{2i}+\Delta C_{2i})x(t) + (D_{2i}+\Delta D_{2i})w(t)\big],
\end{cases}\tag{1}
$$

where x(t) ∈ Rⁿ is the state vector, u(t) ∈ Rᵐ is the control input, w(t) ∈ Rᵖ is the exogenous disturbance input and/or measurement noise, and y(t) ∈ Rˡ and z(t) ∈ Rˢ denote the measurement and regulated outputs, respectively. Furthermore, i ∈ I_R = {1, · · · , r}, where r is the number of fuzzy rules; ν_k(t) are the premise variables, M_{ik} are fuzzy sets, k = 1, · · · , p, p is the number of premise variables, and ν(t) = [ν₁(t), ν₂(t), · · · , ν_p(t)]ᵀ, while

$$
\omega_i(\nu(t)) = \prod_{k=1}^{p}M_{ik}(\nu_k(t)),\qquad \omega_i(\nu(t)) \ge 0,\qquad \sum_{i=1}^{r}\omega_i(\nu(t)) > 0,
$$

$$
\mu_i(\nu(t)) = \frac{\omega_i(\nu(t))}{\sum_{i=1}^{r}\omega_i(\nu(t))},\qquad \mu_i(\nu(t)) \ge 0,\qquad \sum_{i=1}^{r}\mu_i(\nu(t)) = 1.
$$

Here, M_{ik}(ν_k(t)) denotes the grade of membership of ν_k(t) in M_{ik}.
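As a quick illustration of the fuzzy blending above, the following sketch evaluates the normalized weights μᵢ from the rule firing strengths ωᵢ and forms the blended dynamics of (1); the matrices and grades below are hypothetical stand-ins, not taken from the paper:

```python
import numpy as np

def blended_dynamics(x, u, w, grades, A, B1, B2):
    """Evaluate the T-S blended dynamics x_dot = sum_i mu_i (A_i x + B1_i w + B2_i u).

    grades : rule firing strengths omega_i >= 0 (not all zero)
    A, B1, B2 : lists of local model matrices, one triple per rule
    """
    omega = np.asarray(grades, dtype=float)
    mu = omega / omega.sum()              # normalized weights, sum to 1
    xdot = np.zeros_like(x, dtype=float)
    for mu_i, Ai, B1i, B2i in zip(mu, A, B1, B2):
        xdot += mu_i * (Ai @ x + B1i @ w + B2i @ u)
    return xdot

# Two-rule example with 2-state local models (assumed values)
A = [np.array([[0.0, 1.0], [-1.0, -0.5]]), np.array([[0.0, 1.0], [-2.0, -0.5]])]
B1 = [np.array([[0.0], [1.0]])] * 2
B2 = [np.array([[0.0], [1.0]])] * 2
x = np.array([0.1, 0.0]); u = np.array([0.0]); w = np.array([0.0])
xdot = blended_dynamics(x, u, w, grades=[0.7, 0.3], A=A, B1=B1, B2=B2)
```

Since the μᵢ sum to one, the blended model is a convex combination of the local linear models, which is exactly the "blending" property used throughout the paper.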
In addition, the matrices ΔA_i, ΔB_{1i}, ΔB_{2i}, ΔC_{1i}, ΔC_{2i}, ΔD_{1i}, and ΔD_{2i} characterize the uncertainties in the system and satisfy the following assumption:

Assumption 1
$$
[\Delta A_i\ \ \Delta B_{1i}\ \ \Delta B_{2i}] = H_{1i}F(t)[E_{1i}\ \ E_{2i}\ \ E_{3i}],
$$
$$
[\Delta C_{1i}\ \ \Delta D_{1i}] = H_{2i}F(t)[E_{1i}\ \ E_{3i}],
$$
$$
[\Delta C_{2i}\ \ \Delta D_{2i}] = H_{3i}F(t)[E_{1i}\ \ E_{2i}],
$$
where H_{1i}, H_{2i}, H_{3i}, E_{1i}, E_{2i}, and E_{3i} are known real constant matrices of appropriate dimensions, and F(t) is an unknown matrix function with Lebesgue-measurable elements satisfying Fᵀ(t)F(t) ≤ I, in which I is the identity matrix of appropriate dimension.

We consider a nonlinear NCS whose plant is described by the T-S model (1). The setup of the overall control system is depicted in Fig. 1, where τ(t) ≥ 0 is the random time delay from sensor to controller and ρ(t) ≥ 0 is the random time delay from controller to actuator. These delays are assumed to be upper bounded. In the system setup, the premise vector ν(t) is connected to the controller and actuators via a point-to-point architecture, which is immune to network-induced delays. The plant outputs are sampled with a periodic sampling interval hˢ and sent through the network at times khˢ, k ∈ N. In the absence of data dropouts, the measurement signals {y(khˢ), k ∈ N} are received at the controller side at times khˢ + τ_kˢ, where τ_kˢ is the delay experienced by the measurement sent at khˢ. A controller is constructed as follows:
$$
\begin{cases}
\dot{\hat x}(t) = \sum\limits_{i=1}^{r}\sum\limits_{j=1}^{r}\mu_i(\nu(t))\mu_j(\nu(t))\big[\hat A_{ij}\hat x(t) + \hat B_i\hat y(kh^s)\big],\\[1mm]
u(t) = \sum\limits_{i=1}^{r}\mu_i(\nu(t))\hat C_i\hat x(t)
\end{cases}\tag{2}
$$

for all t ∈ [khˢ + τ_kˢ, (k+1)hˢ + τ_{k+1}ˢ], where x̂(t) is the controller state, ŷ(khˢ) is equal to the last successfully received measurement signal, and the matrices Â_{ij}, B̂_i, Ĉ_i are the controller's parameters.
Fig. 1 Block diagram of a nonlinear networked control system.
The controller sends control signals at times lhᵃ. Assuming no dropouts, these signals arrive at the plant side at time lhᵃ + τ_lᵃ, where τ_lᵃ is the delay experienced by the control signal sent at lhᵃ. This leads to
$$
u(t) = \sum_{i=1}^{r}\mu_i(\nu(t))\hat C_i\hat x(lh^a)\tag{3}
$$
for all t ∈ [lhᵃ + τ_lᵃ, (l+1)hᵃ + τ_{l+1}ᵃ]. Defining
$$
\tau(t) := t - kh^s,\quad \forall t \in [kh^s+\tau_k^s,\ (k+1)h^s+\tau_{k+1}^s],\tag{4}
$$
$$
\rho(t) := t - lh^a,\quad \forall t \in [lh^a+\tau_l^a,\ (l+1)h^a+\tau_{l+1}^a],\tag{5}
$$
(2) and (3) can be rewritten as
$$
\begin{cases}
\dot{\hat x}(t) = \sum\limits_{i=1}^{r}\sum\limits_{j=1}^{r}\mu_i(\nu(t))\mu_j(\nu(t))\big[\hat A_{ij}\hat x(t) + \hat B_i y(t-\tau(t))\big],\\[1mm]
u(t) = \sum\limits_{i=1}^{r}\mu_i(\nu(t))\hat C_i\hat x(t-\rho(t)),
\end{cases}\tag{6}
$$
and
$$
\tau(t) \in [\min_k\{\tau_k^s\},\ h^s+\max_k\{\tau_{k+1}^s\}],\quad \forall k \in \mathbb N,\quad \dot\tau(t) = 1,\tag{7}
$$
$$
\rho(t) \in [\min_l\{\tau_l^a\},\ h^a+\max_l\{\tau_{l+1}^a\}],\quad \forall l \in \mathbb N,\quad \dot\rho(t) = 1.\tag{8}
$$
Fig. 2 shows τ(t) with respect to time, where τ_kˢ = τˢ for all k and the sampling interval hˢ is constant, with T = khˢ + τˢ. The derivative of τ(t) is almost always one, except at the sampling instants, where τ(t) drops back to τˢ.
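The sawtooth evolution of τ(t) defined in (4) can be reproduced numerically; the sketch below assumes a constant sensor delay τˢ = 0.02 s and sampling interval hˢ = 0.1 s (illustrative values only):

```python
def tau(t, hs=0.1, tau_s=0.02):
    """Network-induced delay tau(t) = t - k*hs on [k*hs + tau_s, (k+1)*hs + tau_s).

    Between arrivals the delay grows at rate 1 (dot tau = 1); at each arrival
    time k*hs + tau_s it drops back to tau_s, giving a sawtooth waveform.
    """
    k = int((t - tau_s) // hs)   # index of the latest measurement in use
    return t - k * hs

# In the dropout-free case the delay stays within [tau_s, hs + tau_s)
samples = [tau(0.001 * n + 0.02) for n in range(1, 500)]
```

The bounds recovered numerically match (7): τ(t) never falls below τˢ and never reaches hˢ + τˢ while no packet is lost.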
Fig. 2 Evolution of τ(t) with respect to time without packet dropout.

Furthermore, a data packet dropout can be viewed as a delay growing beyond the bounds defined in (7) and (8). Let us define nˢ and nᵃ as the numbers of consecutive dropouts in the sensor and actuator channels, respectively. Then
$$
\tau(t) \in [\min_k\{\tau_k^s\},\ (n^s+1)h^s+\max_k\{\tau_{k+1}^s\}],\quad \forall k \in \mathbb N,\quad \dot\tau(t) = 1,\tag{9}
$$
$$
\rho(t) \in [\min_l\{\tau_l^a\},\ (n^a+1)h^a+\max_l\{\tau_{l+1}^a\}],\quad \forall l \in \mathbb N,\quad \dot\rho(t) = 1.\tag{10}
$$
If the measurement packet sent at khˢ is lost, then τ(t) increases up to 2hˢ + τˢ. This scenario is shown in Fig. 3.

Fig. 3 Evolution of τ(t) with respect to time with packet dropout at khˢ.

In [19], a Markov chain is utilized to model network delays. Modes of the Markov chain are defined as different network load conditions. For each mode of the Markov chain, a corresponding delay is assumed to be time varying but upper bounded by a known constant. Following the same line as [19], we use two Markov chains {η₁(t)} and {η₂(t)} to model τ_kˢ and τ_lᵃ, respectively. {η₁(t)} is a continuous-time discrete-state Markov process taking values in a finite set S = {1, 2, · · · , s}. In some small increment of time from t to t + Δ, the probability that the process makes a transition to some state j, given that it started in some state i ≠ j at time t, is given by the transition probability matrix
$$
\Pr\{\eta_1(t+\Delta) = j \mid \eta_1(t) = i\} =
\begin{cases}
\lambda_{ij}\Delta + o(\Delta), & i \ne j,\\
1 + \lambda_{ii}\Delta + o(\Delta), & i = j,
\end{cases}
$$
where Δ > 0, λ_{ij}Δ < 1, and lim_{Δ→0} o(Δ)/Δ = 0; that is, o(Δ) represents a quantity that goes to zero faster than Δ as Δ goes to zero. Hence, over a sufficiently small interval of time, the probability of a particular transition is roughly proportional to the duration of that interval. Here λ_{ij} ≥ 0 is the transition rate from mode i to mode j (i ≠ j), and λ_{ii} = −∑_{j=1, j≠i}^{s} λ_{ij}. {η₂(t)} takes values in W = {1, 2, · · · , w} with transition probability matrix
$$
\Pr\{\eta_2(t+\Delta) = l \mid \eta_2(t) = k\} =
\begin{cases}
\pi_{kl}\Delta + o(\Delta), & k \ne l,\\
1 + \pi_{kk}\Delta + o(\Delta), & k = l,
\end{cases}
$$
with π_{kl} ≥ 0 and π_{kk} = −∑_{l=1, l≠k}^{w} π_{kl}. Together with each mode in the Markov chain, the corresponding delay is assumed to be time varying but upper bounded by a known constant. Furthermore, we assume that the mode of the Markov chain, i.e., the state of the network load condition, is accessible to the controller and the sensor. The sensor sends the mode of the network load condition together with the measurement to the controller. These assumptions are reasonable, and they are employed in [19]. From (9), {η₁(t)} can be regarded, without loss of generality, as the model of τ(t). Therefore, for the nonlinear plant represented by (1), the fuzzy dynamic output feedback controller at time t is inferred as follows:
$$
\begin{aligned}
\dot{\hat x}(t) &= \sum_{i=1}^{r}\sum_{j=1}^{r}\mu_i(\nu(t))\mu_j(\nu(t))\big[\hat A_{ij}(\eta_1(t),\eta_2(t))\hat x(t) + \hat B_i(\eta_1(t),\eta_2(t))y(t-\tau(\eta_1(t),t))\big]\\
&= \sum_{i=1}^{r}\sum_{j=1}^{r}\mu_i(\nu(t))\mu_j(\nu(t))\big[\hat A_{ij}(\eta_1(t),\eta_2(t))\hat x(t)\\
&\qquad + \hat B_i(\eta_1(t),\eta_2(t))(C_{2j}+\Delta C_{2j})x(t-\tau(\eta_1(t),t))\\
&\qquad + \hat B_i(\eta_1(t),\eta_2(t))(D_{2j}+\Delta D_{2j})w(t-\tau(\eta_1(t),t))\big],\\
u(t) &= \sum_{i=1}^{r}\mu_i(\nu(t))\hat C_i(\eta_1(t),\eta_2(t))\hat x(t-\rho(\eta_2(t),t)),
\end{aligned}\tag{11}
$$
where Â_{ij}(η₁(t),η₂(t)), B̂_i(η₁(t),η₂(t)), Ĉ_i(η₁(t),η₂(t)) in each plant rule are the controller parameters to be designed.
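The two delay-mode processes {η₁(t)} and {η₂(t)} described above can be simulated directly from their transition-rate matrices. The following sketch uses a standard Gillespie-style simulation of a continuous-time Markov chain; the two-mode rate values are assumed for illustration and do not come from the paper:

```python
import random

def simulate_ctmc(rates, x0, t_end, rng=random.Random(0)):
    """Simulate a continuous-time Markov chain from a transition-rate matrix.

    rates[i][j] (i != j) is the rate lambda_ij of jumping from mode i to mode j;
    the diagonal lambda_ii = -sum_{j != i} lambda_ij, so each row sums to zero.
    Returns the list of (time, mode) jump events.
    """
    t, x, path = 0.0, x0, [(0.0, x0)]
    while True:
        out = -rates[x][x]                 # total exit rate from mode x
        if out <= 0.0:
            break                          # absorbing mode
        t += rng.expovariate(out)          # exponential holding time
        if t >= t_end:
            break
        u = rng.random() * out             # pick the next mode proportionally
        acc = 0.0
        for j, r in enumerate(rates[x]):
            if j == x:
                continue
            acc += r
            if u <= acc:
                x = j
                break
        path.append((t, x))
    return path

# Two network-load modes (assumed rates, e.g. "low load" <-> "high load")
rates = [[-0.8, 0.8],
         [ 0.5, -0.5]]
path = simulate_ctmc(rates, x0=0, t_end=50.0)
```

Each mode of the simulated chain would then select the corresponding bound on the time-varying delay, as assumed in the modeling above.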
Substituting (11) into (1) yields
$$
\begin{aligned}
\dot{\tilde x}(t) = \sum_{i=1}^{r}\sum_{j=1}^{r}\mu_i(\nu(t))\mu_j(\nu(t))\big[&A_{ij}(\eta_1(t),\eta_2(t))\tilde x(t) + B_{ij}(\eta_1(t),\eta_2(t))\tilde x(t-\tau(\eta_1(t),t))\\
&+ C_{ij}(\eta_1(t),\eta_2(t))\tilde x(t-\rho(\eta_2(t),t)) + D_{ij}(\eta_1(t),\eta_2(t))\omega(t)\big],
\end{aligned}\tag{12}
$$
for η₁(t) ∈ S and η₂(t) ∈ W, where
$$
\tilde x(t) = \begin{bmatrix} x(t)\\ \hat x(t)\end{bmatrix},\qquad
\omega(t) = \begin{bmatrix} w(t)\\ w(t-\tau(\eta_1(t),t))\end{bmatrix},
$$
$$
A_{ij}(\eta_1(t),\eta_2(t)) = \begin{bmatrix} A_i+\Delta A_i & 0\\ 0 & \hat A_{ij}(\eta_1(t),\eta_2(t))\end{bmatrix},\qquad
B_{ij}(\eta_1(t),\eta_2(t)) = \begin{bmatrix} 0 & 0\\ \hat B_i(\eta_1(t),\eta_2(t))(C_{2j}+\Delta C_{2j}) & 0\end{bmatrix},
$$
$$
C_{ij}(\eta_1(t),\eta_2(t)) = \begin{bmatrix} 0 & (B_{2i}+\Delta B_{2i})\hat C_j(\eta_1(t),\eta_2(t))\\ 0 & 0\end{bmatrix},\qquad
D_{ij}(\eta_1(t),\eta_2(t)) = \begin{bmatrix} B_{1i}+\Delta B_{1i} & 0\\ 0 & \hat B_i(\eta_1(t),\eta_2(t))(D_{2j}+\Delta D_{2j})\end{bmatrix}.
$$
Let C^{2,1}(Rⁿ × S × W × [−τ, ∞); R₊) denote the family of all nonnegative functions V(x(t), η₁(t), η₂(t), t) on Rⁿ × S × W × [−τ, ∞) that are continuously twice differentiable in x and once differentiable in t. Also, let τ > 0 and let C([−τ, 0]; Rⁿ) denote the family of continuous functions ϕ from [−τ, 0] to Rⁿ with the norm ‖ϕ‖ = sup_{−τ≤θ≤0} |ϕ(θ)|.

Let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions. Denote by L²_{F_t}([−τ, 0]; Rⁿ) the family of all {F_t}-measurable C([−τ, 0]; Rⁿ)-valued random variables φ = {φ(θ): −τ ≤ θ ≤ 0} such that sup_{−τ≤θ≤0} E|φ(θ)|² < ∞.
We now cite the Razumikhin-type theorem established in [21] for stochastic systems with Markovian jumps.

Definition 1  Let ζ, α₁, α₂ be positive numbers and δ > 1. Assume that there exists a function V ∈ C^{2,1}(Rⁿ × S × W × [−χ, ∞); R₊) such that
$$
\alpha_1\|x(t)\|^2 \le V(x(t),\eta_1(t),\eta_2(t),t) \le \alpha_2\|x(t)\|^2\tag{13}
$$
for all (x(t), η₁(t), η₂(t), t) ∈ Rⁿ × S × W × [−χ, ∞), and also, for system (12), that its zero-state response (x(φ) = 0, ω(φ) = 0, −χ ≤ φ ≤ 0) satisfies
$$
\mathbb E\Big\{\int_0^{T_f} z^T(t)z(t)\,dt\Big\} \le \gamma^2\,\mathbb E\Big\{\int_0^{T_f}\sup_{-\chi\le\phi\le 0}\omega^T(t+\phi)\omega(t+\phi)\,dt\Big\}\tag{14}
$$
for any nonzero ω(t) ∈ L₂[0, T_f] and T_f ≥ 0, provided x = {x(ξ): t − 2χ ≤ ξ ≤ t} ∈ L²_{F_t}([−2χ, 0]; Rⁿ) satisfies
$$
\mathbb E\Big\{\min_{\eta_1(t)\in S,\,\eta_2(t)\in W} V(x(\xi),\eta_1(\xi),\eta_2(\xi),\xi)\Big\} < \delta\,\mathbb E\Big\{\max_{\eta_1(t)\in S,\,\eta_2(t)\in W} V(x(t),\eta_1(t),\eta_2(t),t)\Big\}\tag{15}
$$
for all t − 2χ ≤ ξ ≤ t. Then system (12) is said to be stochastically stabilizable with a disturbance attenuation level γ.

Remark 1  From Definition 1, it is easy to see that once there is no time delay in the system, i.e., φ = 0, (14) reduces to
$$
\mathbb E\Big\{\int_0^{T_f} z^T(t)z(t)\,dt\Big\} \le \gamma^2\,\mathbb E\Big\{\int_0^{T_f}\omega^T(t)\omega(t)\,dt\Big\},
$$
which is the standard H∞ control problem.

In this paper, we assume u(t) = 0 before the first control signal reaches the plant. From here on, μ_i(ν(t)) and μ_j(ν(t)) are denoted as μ_i and μ_j, respectively, for notational convenience. In symmetric block matrices, we use (∗) as an ellipsis for terms induced by symmetry. Â_{ij}(η₁(t), η₂(t)) is denoted as Â_{ij}(ι, κ) if η₁(t) = ι and η₂(t) = κ.
3 Main result

The following theorem provides sufficient conditions for the existence of a mode-dependent dynamic output feedback controller for system (12) that guarantees a disturbance attenuation level γ.

Theorem 1  Consider system (12) satisfying Assumption 1. Given positive scalars ℏ_{ικ}, ε_{1ijικ}, ε_{2ijικ}, ε_{3ijικ}, ε_{4ijικ}, ε_{5ijικ}, ε_{6ijικ}, ε_{7ijικ}, ε_{8ijικ}, ε_{9ijικ}, and ε_{10ijικ}, if there exist symmetric matrices X(ι, κ), Y(ι, κ), matrices L_i(ι, κ), F_i(ι, κ), and positive scalars β_{1ικ}, β_{2ικ} such that the following inequalities hold for all ι ∈ S and κ ∈ W:
$$
\begin{bmatrix} Y(\iota,\kappa) & I\\ I & X(\iota,\kappa)\end{bmatrix} > 0,\tag{16}
$$
$$
\Omega_{ii}(\iota,\kappa) < 0 \ \text{ for } i \in I_R,\tag{17}
$$
$$
\Omega_{ij}(\iota,\kappa) + \Omega_{ji}(\iota,\kappa) < 0 \ \text{ for } i < j \le r,\tag{18}
$$
$$
\Upsilon_{ij}(\iota,\kappa) < 0 \ \text{ for } \{i,j\} \in I_R\times I_R,\tag{19}
$$
$$
\begin{bmatrix} R_{4\iota\kappa} & (*)^T\\ \Lambda_\iota^T & Q_{1\iota\kappa}\end{bmatrix} > 0,\tag{20}
$$
$$
\begin{bmatrix} R_{5\iota\kappa} & (*)^T\\ \Pi_\kappa^T & Q_{2\iota\kappa}\end{bmatrix} > 0,\tag{21}
$$
$$
\begin{bmatrix}
-R_{1\iota\kappa} & (*)^T & (*)^T & (*)^T\\
0 & -I & (*)^T & (*)^T\\
0 & -Y(\iota,\kappa) & -R_{2\iota\kappa} & (*)^T\\
0 & 0 & 0 & -R_{3\iota\kappa}
\end{bmatrix} < 0,\tag{22}
$$
$$
\begin{bmatrix}
-\beta_{2\iota\kappa}Y(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
-\beta_{2\iota\kappa}I & -\beta_{2\iota\kappa}X(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T\\
L_j^T(\iota,\kappa)B_{2i}^T & L_j^T(\iota,\kappa)B_{2i}^TX(\iota,\kappa) & -Y(\iota,\kappa) & (*)^T & (*)^T & (*)^T\\
0 & 0 & -I & -X(\iota,\kappa) & (*)^T & (*)^T\\
\varepsilon_{6ij\iota\kappa}H_{1i}^T & \varepsilon_{6ij\iota\kappa}H_{1i}^TX(\iota,\kappa) & 0 & 0 & -\varepsilon_{6ij\iota\kappa}I & (*)^T\\
E_{3i}L_j(\iota,\kappa) & 0 & 0 & 0 & 0 & -\varepsilon_{6ij\iota\kappa}I
\end{bmatrix} < 0 \ \text{ for } \{i,j\} \in I_R\times I_R,\tag{23}
$$
$$
\begin{bmatrix}
-\beta_{2\iota\kappa}Y(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
-\beta_{2\iota\kappa}I & -\beta_{2\iota\kappa}X(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T\\
0 & Y(\iota,\kappa)C_{2j}^TF_i^T(\iota,\kappa) & -Y(\iota,\kappa) & (*)^T & (*)^T & (*)^T\\
0 & 0 & -I & -X(\iota,\kappa) & (*)^T & (*)^T\\
0 & \varepsilon_{7ij\iota\kappa}H_{3j}^TF_i^T(\iota,\kappa) & 0 & 0 & -\varepsilon_{7ij\iota\kappa}I & (*)^T\\
E_{1j}Y(\iota,\kappa) & 0 & 0 & 0 & 0 & -\varepsilon_{7ij\iota\kappa}I
\end{bmatrix} < 0 \ \text{ for } \{i,j\} \in I_R\times I_R,\tag{24}
$$
$$
\begin{bmatrix}
-Y(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
-I & -X(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
B_{1i}^T & B_{1i}^TX(\iota,\kappa) & -I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
0 & D_{2i}^TF_j^T(\iota,\kappa) & 0 & -I & (*)^T & (*)^T & (*)^T & (*)^T\\
\varepsilon_{8ij\iota\kappa}H_{1i}^T & \varepsilon_{8ij\iota\kappa}H_{1i}^TX(\iota,\kappa) & 0 & 0 & -\varepsilon_{8ij\iota\kappa}I & (*)^T & (*)^T & (*)^T\\
0 & 0 & E_{2i} & 0 & 0 & -\varepsilon_{8ij\iota\kappa}I & (*)^T & (*)^T\\
0 & \varepsilon_{9ij\iota\kappa}H_{3i}^TF_j^T(\iota,\kappa) & 0 & 0 & 0 & 0 & -\varepsilon_{9ij\iota\kappa}I & (*)^T\\
0 & 0 & 0 & E_{2i} & 0 & 0 & 0 & -\varepsilon_{9ij\iota\kappa}I
\end{bmatrix} < 0 \ \text{ for } \{i,j\} \in I_R\times I_R,\tag{25}
$$
where Υ_{ij}(ι,κ) and Ω_{ij}(ι,κ) are given in (26) and (27):
$$
\Upsilon_{ij}(\iota,\kappa) = \begin{bmatrix}
-\beta_{1\iota\kappa}Y(\iota,\kappa)+2R_{1\iota\kappa} & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
-\beta_{1\iota\kappa}I & -\beta_{1\iota\kappa}X(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
-A_i^T-L_j^T(\iota,\kappa)B_{2i}^TX(\iota,\kappa)-Y(\iota,\kappa)C_{2j}^TF_i^T(\iota,\kappa)-(\lambda_{\iota\iota}+\pi_{\kappa\kappa})I & Y(\iota,\kappa)A_i^T & -Y(\iota,\kappa)+2R_{2\iota\kappa} & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
A_i^TX(\iota,\kappa) & A_i^T & -I & -X(\iota,\kappa)+2R_{3\iota\kappa} & (*)^T & (*)^T & (*)^T & (*)^T\\
0 & R_{4\iota\kappa} & 0 & 0 & -I & (*)^T & (*)^T & (*)^T\\
0 & R_{5\iota\kappa} & 0 & 0 & 0 & -I & (*)^T & (*)^T\\
\varepsilon_{5ij\iota\kappa}H_{1i}^T & \varepsilon_{5ij\iota\kappa}H_{1i}^TX(\iota,\kappa) & 0 & 0 & 0 & 0 & -\varepsilon_{5ij\iota\kappa}I & (*)^T\\
0 & 0 & E_{1i}Y(\iota,\kappa) & E_{1i} & 0 & 0 & 0 & -\varepsilon_{5ij\iota\kappa}I
\end{bmatrix},\tag{26}
$$
$$
\Omega_{ij}(\iota,\kappa) = \begin{bmatrix}
\Xi_{1ij}(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
(\beta_{1\iota\kappa}+6\beta_{2\iota\kappa})\hbar_{\iota\kappa}I & \Xi_{2ij}(\iota,\kappa) & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
B_{1i}^T & B_{1i}^TX(\iota,\kappa) & -\gamma_{df}I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
0 & D_{2i}^TF_j^T(\iota,\kappa) & 0 & -\gamma_{df}I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
C_{1i}Y(\iota,\kappa)+D_{1i}L_j(\iota,\kappa) & C_{1i} & 0 & 0 & -I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
E_{1i}Y(\iota,\kappa)+E_{3i}L_j(\iota,\kappa) & 0 & E_{2i} & 0 & 0 & -\varepsilon_{1ij\iota\kappa}I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
\varepsilon_{1ij\iota\kappa}H_{1i}^T & 0 & 0 & 0 & 0 & 0 & -\varepsilon_{1ij\iota\kappa}I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
E_{1i}Y(\iota,\kappa) & E_{1i} & E_{2i} & 0 & 0 & 0 & 0 & -\varepsilon_{2ij\iota\kappa}I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
0 & \varepsilon_{2ij\iota\kappa}H_{1i}^TX(\iota,\kappa) & 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon_{2ij\iota\kappa}I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
0 & 0 & E_{1i} & 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon_{3ij\iota\kappa}I & (*)^T & (*)^T & (*)^T & (*)^T & (*)^T\\
0 & \varepsilon_{3ij\iota\kappa}H_{3i}^TF_j^T(\iota,\kappa) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon_{3ij\iota\kappa}I & (*)^T & (*)^T & (*)^T & (*)^T\\
E_{1i}Y(\iota,\kappa)+E_{3i}L_j(\iota,\kappa) & E_{1i} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon_{4ij\iota\kappa}I & (*)^T & (*)^T & (*)^T\\
0 & 0 & 0 & 0 & \varepsilon_{4ij\iota\kappa}H_{2i}^T & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -\varepsilon_{4ij\iota\kappa}I & (*)^T & (*)^T\\
S^T(\iota,\kappa) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -Q_{1\iota\kappa} & (*)^T\\
Z^T(\iota,\kappa) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -Q_{2\iota\kappa}
\end{bmatrix},\tag{27}
$$
with
$$
\Xi_{1ij}(\iota,\kappa) = A_iY(\iota,\kappa) + Y(\iota,\kappa)A_i^T + B_{2i}L_j(\iota,\kappa) + L_j^T(\iota,\kappa)B_{2i}^T + (\beta_{1\iota\kappa}+6\beta_{2\iota\kappa})\hbar_{\iota\kappa}Y(\iota,\kappa) + (\lambda_{\iota\iota}+\pi_{\kappa\kappa})Y(\iota,\kappa),
$$
$$
\Xi_{2ij}(\iota,\kappa) = X(\iota,\kappa)A_i + A_i^TX(\iota,\kappa) + F_i(\iota,\kappa)C_{2j} + C_{2j}^TF_i^T(\iota,\kappa) + (\beta_{1\iota\kappa}+6\beta_{2\iota\kappa})\hbar_{\iota\kappa}X(\iota,\kappa) + \sum_{j=1}^{s}\lambda_{\iota j}X(j,\kappa) + \sum_{\ell=1}^{w}\pi_{\kappa\ell}X(\iota,\ell),
$$
and
$$
S(\iota,\kappa) = [\sqrt{\lambda_{\iota 1}}\,Y(\iota,\kappa)\ \cdots\ \sqrt{\lambda_{\iota(\iota-1)}}\,Y(\iota,\kappa)\ \ \sqrt{\lambda_{\iota(\iota+1)}}\,Y(\iota,\kappa)\ \cdots\ \sqrt{\lambda_{\iota s}}\,Y(\iota,\kappa)],
$$
$$
Z(\iota,\kappa) = [\sqrt{\pi_{\kappa 1}}\,Y(\iota,\kappa)\ \cdots\ \sqrt{\pi_{\kappa(\kappa-1)}}\,Y(\iota,\kappa)\ \ \sqrt{\pi_{\kappa(\kappa+1)}}\,Y(\iota,\kappa)\ \cdots\ \sqrt{\pi_{\kappa w}}\,Y(\iota,\kappa)],
$$
$$
\Lambda_\iota = [\sqrt{\lambda_{\iota 1}}\,I\ \cdots\ \sqrt{\lambda_{\iota(\iota-1)}}\,I\ \ \sqrt{\lambda_{\iota(\iota+1)}}\,I\ \cdots\ \sqrt{\lambda_{\iota s}}\,I],
$$
$$
\Pi_\kappa = [\sqrt{\pi_{\kappa 1}}\,I\ \cdots\ \sqrt{\pi_{\kappa(\kappa-1)}}\,I\ \ \sqrt{\pi_{\kappa(\kappa+1)}}\,I\ \cdots\ \sqrt{\pi_{\kappa w}}\,I],
$$
$$
Q_{1\iota\kappa} = \mathrm{diag}\{Y(1,\kappa),\cdots,Y(\iota-1,\kappa),Y(\iota+1,\kappa),\cdots,Y(s,\kappa)\},
$$
$$
Q_{2\iota\kappa} = \mathrm{diag}\{Y(\iota,1),\cdots,Y(\iota,\kappa-1),Y(\iota,\kappa+1),\cdots,Y(\iota,w)\},
$$
then system (12) achieves stochastic stability via controller (11) for all delays τ(ι,t) and ρ(κ,t) satisfying 0 ≤ τ(ι,t) + ρ(κ,t) ≤ ℏ_{ικ}. Furthermore, the mode-dependent controller is of the form (11) with
$$
\hat A_{ij}(\iota,\kappa) = [Y^{-1}(\iota,\kappa)-X(\iota,\kappa)]^{-1}\Big[-A_i^T - X(\iota,\kappa)A_iY(\iota,\kappa) - F_i(\iota,\kappa)C_{2j}Y(\iota,\kappa) - X(\iota,\kappa)B_{2i}L_j(\iota,\kappa) - \sum_{j=1}^{s}\lambda_{\iota j}Y^{-1}(j,\kappa)Y(\iota,\kappa) - \sum_{\ell=1}^{w}\pi_{\kappa\ell}Y^{-1}(\iota,\ell)Y(\iota,\kappa)\Big]Y^{-1}(\iota,\kappa),\tag{28}
$$
$$
\hat B_i(\iota,\kappa) = [Y^{-1}(\iota,\kappa)-X(\iota,\kappa)]^{-1}F_i(\iota,\kappa),\tag{29}
$$
$$
\hat C_i(\iota,\kappa) = L_i(\iota,\kappa)Y^{-1}(\iota,\kappa).\tag{30}
$$

Proof  Note that, for each η₁(t) = ι ∈ S and η₂(t) = κ ∈ W, for system (12) at time t it follows from the Leibniz-Newton formula that
$$
\tilde x(t-\tau(\iota,t)) = \tilde x(t) - \int_{-\tau(\iota,t)}^{0}\dot{\tilde x}(t+\theta)\,d\theta
= \tilde x(t) - \int_{-\tau(\iota,t)}^{0}\sum_{i=1}^{r}\sum_{j=1}^{r}\mu_i\mu_j\big[A_{ij}(\iota,\kappa)\tilde x(t+\theta) + B_{ij}(\iota,\kappa)\tilde x(t-\tau(\iota,t)+\theta) + C_{ij}(\iota,\kappa)\tilde x(t-\rho(\kappa,t)+\theta) + D_{ij}(\iota,\kappa)\omega(t+\theta)\big]\,d\theta.
$$
Applying the same transformation to x̃(t − ρ(κ,t)), the closed-loop system (12) can be rewritten as
$$
\begin{aligned}
\dot{\tilde x}(t) = \sum_{i=1}^{r}\sum_{j=1}^{r}\sum_{k=1}^{r}\sum_{l=1}^{r}\mu_i\mu_j\mu_k\mu_l\Big\{&E_{ij}\tilde x(t) - B_{ij}\int_{-\tau(\iota,t)}^{0}\big[A_{kl}\tilde x(t+\theta) + B_{kl}\tilde x(t-\tau(\iota,t)+\theta) + C_{kl}\tilde x(t-\rho(\kappa,t)+\theta) + D_{kl}\omega(t+\theta)\big]\,d\theta\\
&- C_{ij}\int_{-\rho(\kappa,t)}^{0}\big[A_{kl}\tilde x(t+\sigma) + B_{kl}\tilde x(t-\tau(\iota,t)+\sigma) + C_{kl}\tilde x(t-\rho(\kappa,t)+\sigma) + D_{kl}\omega(t+\sigma)\big]\,d\sigma + D_{ij}\omega(t)\Big\},
\end{aligned}\tag{31}
$$
where τ(ι,t) and ρ(κ,t) are constant and E_{ij} = A_{ij}(ι,κ) + B_{ij}(ι,κ) + C_{ij}(ι,κ) for the sake of notational simplicity.

Select a stochastic Lyapunov function candidate
$$
V(\tilde x(t),\eta_1(t),\eta_2(t),t) = \tilde x^T(t)P(\eta_1(t),\eta_2(t))\tilde x(t),\tag{32}
$$
where P(η₁(t), η₂(t)) is a positive constant symmetric matrix for each η₁(t) = ι ∈ S and η₂(t) = κ ∈ W. It follows that
$$
\alpha_1\|\tilde x(t)\|^2 \le V(\tilde x(t),\eta_1(t),\eta_2(t),t) \le \alpha_2\|\tilde x(t)\|^2,\tag{33}
$$
where α₁ = λ_min(P(η₁(t), η₂(t))) and α₂ = λ_max(P(η₁(t), η₂(t))).

The weak infinitesimal operator Ã can be considered as the derivative of the function V(x̃(t), η₁(t), η₂(t), t)
along the trajectory of the joint Markov process {x̃(t), η₁(t), η₂(t), t ≥ 0} at the point {x̃(t), η₁(t) = ι, η₂(t) = κ} at time t; see [10] and [20]:
$$
\tilde{\mathcal A}V(\tilde x(t),\eta_1(t),\eta_2(t),t) = \left[\frac{\partial V(\cdot)}{\partial t} + \dot{\tilde x}^T(t)\frac{\partial V(\cdot)}{\partial\tilde x}\right]_{\eta_1=\iota,\,\eta_2=\kappa} + \sum_{j=1}^{s}\lambda_{\iota j}V(\tilde x(t),j,\kappa,t) + \sum_{\ell=1}^{w}\pi_{\kappa\ell}V(\tilde x(t),\iota,\ell,t).\tag{34}
$$
Following (34), we can get
$$
\begin{aligned}
&\tilde{\mathcal A}V(\tilde x(t),\eta(t),t) = \dot{\tilde x}^T(t)P(\iota,\kappa)\tilde x(t) + \tilde x^T(t)P(\iota,\kappa)\dot{\tilde x}(t) + \sum_{j=1}^{s}\lambda_{\iota j}\tilde x^T(t)P(j,\kappa)\tilde x(t) + \sum_{\ell=1}^{w}\pi_{\kappa\ell}\tilde x^T(t)P(\iota,\ell)\tilde x(t)\\
&\le \sum_{i=1}^{r}\sum_{j=1}^{r}\sum_{k=1}^{r}\sum_{l=1}^{r}\mu_i\mu_j\mu_k\mu_l\Big\{\tilde x^T(t)\big[E_{ij}^TP(\iota,\kappa)+P(\iota,\kappa)E_{ij}\big]\tilde x(t) + \tilde x^T(t)P(\iota,\kappa)D_{ij}\omega(t) + \omega^T(t)D_{ij}^TP(\iota,\kappa)\tilde x(t)\\
&\quad + \tau(\iota,t)\Big[\tfrac{1}{\beta_{1\iota\kappa}}\tilde x^T(t)P(\iota,\kappa)B_{ij}A_{kl}P^{-1}(\iota,\kappa)A_{kl}^TB_{ij}^TP(\iota,\kappa)\tilde x(t) + \beta_{1\iota\kappa}\tilde x^T(t+\theta)P(\iota,\kappa)\tilde x(t+\theta)\\
&\quad + \tfrac{1}{\beta_{2\iota\kappa}}\tilde x^T(t)P(\iota,\kappa)B_{ij}B_{kl}P^{-1}(\iota,\kappa)B_{kl}^TB_{ij}^TP(\iota,\kappa)\tilde x(t) + \beta_{2\iota\kappa}\tilde x^T(t-\tau(\iota,t)+\theta)P(\iota,\kappa)\tilde x(t-\tau(\iota,t)+\theta)\\
&\quad + \tfrac{1}{\beta_{2\iota\kappa}}\tilde x^T(t)P(\iota,\kappa)B_{ij}C_{kl}P^{-1}(\iota,\kappa)C_{kl}^TB_{ij}^TP(\iota,\kappa)\tilde x(t) + \beta_{2\iota\kappa}\tilde x^T(t-\rho(\kappa,t)+\theta)P(\iota,\kappa)\tilde x(t-\rho(\kappa,t)+\theta)\\
&\quad + \tilde x^T(t)P(\iota,\kappa)B_{ij}D_{kl}D_{kl}^TB_{ij}^TP(\iota,\kappa)\tilde x(t) + \omega^T(t+\theta)\omega(t+\theta)\Big]\\
&\quad + \rho(\kappa,t)\Big[\tfrac{1}{\beta_{1\iota\kappa}}\tilde x^T(t)P(\iota,\kappa)C_{ij}A_{kl}P^{-1}(\iota,\kappa)A_{kl}^TC_{ij}^TP(\iota,\kappa)\tilde x(t) + \beta_{1\iota\kappa}\tilde x^T(t+\sigma)P(\iota,\kappa)\tilde x(t+\sigma)\\
&\quad + \tfrac{1}{\beta_{2\iota\kappa}}\tilde x^T(t)P(\iota,\kappa)C_{ij}B_{kl}P^{-1}(\iota,\kappa)B_{kl}^TC_{ij}^TP(\iota,\kappa)\tilde x(t) + \beta_{2\iota\kappa}\tilde x^T(t-\tau(\iota,t)+\sigma)P(\iota,\kappa)\tilde x(t-\tau(\iota,t)+\sigma)\\
&\quad + \tfrac{1}{\beta_{2\iota\kappa}}\tilde x^T(t)P(\iota,\kappa)C_{ij}C_{kl}P^{-1}(\iota,\kappa)C_{kl}^TC_{ij}^TP(\iota,\kappa)\tilde x(t) + \beta_{2\iota\kappa}\tilde x^T(t-\rho(\kappa,t)+\sigma)P(\iota,\kappa)\tilde x(t-\rho(\kappa,t)+\sigma)\\
&\quad + \tilde x^T(t)P(\iota,\kappa)C_{ij}D_{kl}D_{kl}^TC_{ij}^TP(\iota,\kappa)\tilde x(t) + \omega^T(t+\sigma)\omega(t+\sigma)\Big]\\
&\quad + \sum_{j=1}^{s}\lambda_{\iota j}\tilde x^T(t)P(j,\kappa)\tilde x(t) + \sum_{\ell=1}^{w}\pi_{\kappa\ell}\tilde x^T(t)P(\iota,\ell)\tilde x(t)\Big\}.
\end{aligned}\tag{35}
$$
Suppose that
$$
A_{kl}P^{-1}(\iota,\kappa)A_{kl}^T < \beta_{1\iota\kappa}P^{-1}(\iota,\kappa),\tag{36}
$$
$$
B_{kl}P^{-1}(\iota,\kappa)B_{kl}^T < \beta_{2\iota\kappa}P^{-1}(\iota,\kappa),\tag{37}
$$
$$
C_{kl}P^{-1}(\iota,\kappa)C_{kl}^T < \beta_{2\iota\kappa}P^{-1}(\iota,\kappa),\tag{38}
$$
$$
D_{kl}D_{kl}^T < P^{-1}(\iota,\kappa).\tag{39}
$$
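For given matrices, each bound of the form A P⁻¹Aᵀ < β P⁻¹ can be verified numerically by checking positive definiteness of the difference. The snippet below is a small illustration with assumed matrix values; it is not part of the design procedure:

```python
import numpy as np

def satisfies_bound(A, P, beta):
    """Check A P^{-1} A^T < beta * P^{-1} by verifying that
    beta*P^{-1} - A P^{-1} A^T is positive definite (all eigenvalues > 0)."""
    Pinv = np.linalg.inv(P)
    gap = beta * Pinv - A @ Pinv @ A.T   # symmetric by construction
    return bool(np.all(np.linalg.eigvalsh(gap) > 0))

# Assumed example data: a 2x2 matrix and P = I
A = np.array([[0.0, 1.0], [-0.5, -1.0]])
P = np.eye(2)
ok_large_beta = satisfies_bound(A, P, beta=10.0)   # generous bound holds
ok_small_beta = satisfies_bound(A, P, beta=0.01)   # too tight a bound fails
```

The scalars β₁ικ and β₂ικ in (36)∼(38) play exactly this role of scaling how conservative the cross-term bounds in (35) are.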
Furthermore, by adding and subtracting −zᵀ(t)z(t) + γ_df ωᵀ(t)ω(t) to and from (35), it becomes
$$
\begin{aligned}
\tilde{\mathcal A}V(\tilde x(t),\eta(t),t) &\le \tilde x_e^T(t)M_{\iota\kappa}((\tau(\iota,t)+\rho(\kappa,t)),\delta)\tilde x_e(t) + (\tau(\iota,t)+\rho(\kappa,t))\sup_{-\chi\le\phi\le0}\omega^T(t+\phi)\omega(t+\phi)\\
&\quad + \tau(\iota,t)\big[4\beta_{2\iota\kappa}\tilde x^T(t)P(\iota,\kappa)\tilde x(t) + \beta_{1\iota\kappa}\tilde x^T(t+\theta)P(\iota,\kappa)\tilde x(t+\theta)\\
&\qquad + \beta_{2\iota\kappa}\tilde x^T(t-\tau(\iota,t)+\theta)P(\iota,\kappa)\tilde x(t-\tau(\iota,t)+\theta) + \beta_{2\iota\kappa}\tilde x^T(t-\rho(\kappa,t)+\theta)P(\iota,\kappa)\tilde x(t-\rho(\kappa,t)+\theta)\big]\\
&\quad + \rho(\kappa,t)\big[4\beta_{2\iota\kappa}\tilde x^T(t)P(\iota,\kappa)\tilde x(t) + \beta_{1\iota\kappa}\tilde x^T(t+\sigma)P(\iota,\kappa)\tilde x(t+\sigma)\\
&\qquad + \beta_{2\iota\kappa}\tilde x^T(t-\tau(\iota,t)+\sigma)P(\iota,\kappa)\tilde x(t-\tau(\iota,t)+\sigma) + \beta_{2\iota\kappa}\tilde x^T(t-\rho(\kappa,t)+\sigma)P(\iota,\kappa)\tilde x(t-\rho(\kappa,t)+\sigma)\big]\\
&\quad - z^T(t)z(t) + \gamma_{df}\,\omega^T(t)\omega(t),
\end{aligned}\tag{40}
$$
where x̃_e(t) = [x̃ᵀ(t) ωᵀ(t)]ᵀ, χ = max(τ(ι,t), ρ(κ,t)), and M_{ικ}(·,·) is given by
$$
M_{\iota\kappa}((\tau(\iota,t)+\rho(\kappa,t)),\delta) = \sum_{i=1}^{r}\sum_{j=1}^{r}\mu_i\mu_j\begin{bmatrix} \mathcal E & P(\iota,\kappa)D_{ij}\\ D_{ij}^TP(\iota,\kappa) & -\gamma_{df}I\end{bmatrix}
$$
with
$$
\mathcal E = E_{ij}^TP(\iota,\kappa) + P(\iota,\kappa)E_{ij} + 4(\tau(\iota,t)+\rho(\kappa,t))\beta_{2\iota\kappa}P(\iota,\kappa) + \hbar_{\iota\kappa}(\beta_{1\iota\kappa}+2\beta_{2\iota\kappa})\delta P(\iota,\kappa) + \sum_{j=1}^{s}\lambda_{\iota j}P(j,\kappa) + \sum_{\ell=1}^{w}\pi_{\kappa\ell}P(\iota,\ell) + F_{ij}^TF_{ij},
$$
$$
F_{ij} = [C_{1i}+\Delta C_{1i}\quad (D_{1i}+\Delta D_{1i})\hat C_j].
$$
We denote
$$
\nabla_\iota(i,j) = \begin{bmatrix} \mathcal E & P(\iota,\kappa)D_{ij}\\ D_{ij}^TP(\iota,\kappa) & -\gamma_{df}I\end{bmatrix}.\tag{41}
$$
Then
$$
M_\iota(\tau(\iota,t),\delta) = \sum_{i=1}^{r}\mu_i^2\nabla_\iota(i,i) + 2\sum_{i=1}^{r}\sum_{i<j}\mu_i\mu_j\cdot\frac{1}{2}\big[\nabla_\iota(i,j)+\nabla_\iota(j,i)\big].\tag{42}
$$
Integrating (40) from 0 to T_f and taking expectations gives
$$
\begin{aligned}
&\mathbb E\{V(\tilde x(T_f),\eta_1(T_f),\eta_2(T_f),T_f)\} - \mathbb E\{V(\tilde x(0),\eta_1(0),\eta_2(0),0)\}\\
&\le \mathbb E\Big\{\int_0^{T_f}\tilde x_e^T(t)M_{\iota\kappa}(\hbar_{\iota\kappa},\delta)\tilde x_e(t)\,dt\Big\}\\
&\quad + \int_{-\tau(\iota,t)}^{0}\Big[\beta_{1\iota\kappa}\mathbb E\Big\{\int_0^{T_f}\tilde x^T(t+\theta)P(\iota,\kappa)\tilde x(t+\theta)\,dt\Big\} + \beta_{2\iota\kappa}\mathbb E\Big\{\int_0^{T_f}\tilde x^T(t-\tau(\iota,t)+\theta)P(\iota,\kappa)\tilde x(t-\tau(\iota,t)+\theta)\,dt\Big\}\\
&\qquad + \beta_{2\iota\kappa}\mathbb E\Big\{\int_0^{T_f}\tilde x^T(t-\rho(\kappa,t)+\theta)P(\iota,\kappa)\tilde x(t-\rho(\kappa,t)+\theta)\,dt\Big\}\Big]\,d\theta\\
&\quad + \int_{-\rho(\kappa,t)}^{0}\Big[\beta_{1\iota\kappa}\mathbb E\Big\{\int_0^{T_f}\tilde x^T(t+\sigma)P(\iota,\kappa)\tilde x(t+\sigma)\,dt\Big\} + \beta_{2\iota\kappa}\mathbb E\Big\{\int_0^{T_f}\tilde x^T(t-\tau(\iota,t)+\sigma)P(\iota,\kappa)\tilde x(t-\tau(\iota,t)+\sigma)\,dt\Big\}\\
&\qquad + \beta_{2\iota\kappa}\mathbb E\Big\{\int_0^{T_f}\tilde x^T(t-\rho(\kappa,t)+\sigma)P(\iota,\kappa)\tilde x(t-\rho(\kappa,t)+\sigma)\,dt\Big\}\Big]\,d\sigma\\
&\quad - \hbar_{\iota\kappa}(\beta_{1\iota\kappa}+2\beta_{2\iota\kappa})\delta\,\mathbb E\Big\{\int_0^{T_f}\tilde x^T(t)P(\iota,\kappa)\tilde x(t)\,dt\Big\} - \mathbb E\Big\{\int_0^{T_f}z^T(t)z(t)\,dt\Big\} + \gamma^2\mathbb E\Big\{\int_0^{T_f}\sup_{-\chi\le\phi\le0}\omega^T(t+\phi)\omega(t+\phi)\,dt\Big\}
\end{aligned}\tag{43}
$$
with γ² = max_{ι,κ}(ℏ_{ικ}) + γ_df. Now applying the Razumikhin-type theorem for stochastic systems with Markovian jumps from [21] (see the proof on page 6 of [21]), we know that for x = {x(ξ): t − 2χ ≤ ξ ≤ t} ∈ L²_{F_t}([−2χ, 0]; Rⁿ) and any δ > 1, the following inequality holds:
$$
\mathbb E\Big\{\min_{\eta_1(t)\in S,\,\eta_2(t)\in W} V(x(\xi),\eta_1(\xi),\eta_2(\xi),\xi)\Big\} < \delta\,\mathbb E\Big\{\max_{\eta_1(t)\in S,\,\eta_2(t)\in W} V(x(t),\eta_1(t),\eta_2(t),t)\Big\}.\tag{44}
$$
Using (44), the facts that x̃(0) = 0 and V(x̃(T_f)) ≥ 0 for all T_f ≥ 0, and recalling the assumption that τ(ι,t) + ρ(κ,t) ≤ ℏ_{ικ}, it is easy to get from (43) that
$$
\mathbb E\Big\{\int_0^{T_f}z^T(t)z(t)\,dt\Big\} \le \gamma^2\,\mathbb E\Big\{\int_0^{T_f}\sup_{-\chi\le\phi\le0}\omega^T(t+\phi)\omega(t+\phi)\,dt\Big\}.
$$
This satisfies the conditions set in Definition 1, and we conclude that system (1) is stochastically stabilizable with a disturbance attenuation level γ.

Hereinafter, we show that (17) and (18) guarantee M_{ικ}(ℏ_{ικ}, 1) < 0. Applying the Schur complement to M_{ικ}(ℏ_{ικ}, 1) < 0, we have
$$
\begin{bmatrix} \mathcal E & (*)^T & (*)^T\\ D_{\iota\kappa}^TP(\iota,\kappa) & -\gamma_{df}I & 0\\ F_{\iota\kappa} & 0 & -I\end{bmatrix} < 0,\tag{45}
$$
where
$$
\mathcal E = E_{\iota\kappa}^TP(\iota,\kappa) + P(\iota,\kappa)E_{\iota\kappa} + \hbar_{\iota\kappa}(\beta_{1\iota\kappa}+6\beta_{2\iota\kappa})P(\iota,\kappa) + \sum_{j=1}^{s}\lambda_{\iota j}P(j,\kappa) + \sum_{\ell=1}^{w}\pi_{\kappa\ell}P(\iota,\ell).
$$
Using the partition
$$
P(\iota,\kappa) = \begin{bmatrix} X(\iota,\kappa) & Y^{-1}(\iota,\kappa)-X(\iota,\kappa)\\ Y^{-1}(\iota,\kappa)-X(\iota,\kappa) & X(\iota,\kappa)-Y^{-1}(\iota,\kappa)\end{bmatrix},
$$
multiplying (45) on the left by Γ_{ικ}ᵀ and on the right by Γ_{ικ}, where Γ_{ικ} = diag{J_{ικ}, I, I} with
$$
J_{\iota\kappa} = \begin{bmatrix} Y(\iota,\kappa) & I\\ Y(\iota,\kappa) & 0\end{bmatrix},
$$
using Assumption 1 and the Schur complement, and applying the controller parameters defined in (28)∼(30), we can see that (17) guarantees M_{ικ}(ℏ_{ικ}, 1) < 0. Using the continuity of the eigenvalues of M_{ικ}(·,·) with respect to δ, there exists a sufficiently small ϵ > 0 such that M_{ικ}(ℏ_{ικ}, 1+ϵ) < 0. Hence, there exists a δ > 1 such that M_{ικ}(ℏ_{ικ}, δ) < 0 still holds.

Next, it will be shown that (19)∼(25) can be derived from (36)∼(39). Firstly, inequality (36) can be rewritten as follows by applying the Schur complement:
$$
\begin{bmatrix} -\beta_{1\iota\kappa}P^{-1}(\iota,\kappa) & A_{ij}\\ A_{ij}^T & -P(\iota,\kappa)\end{bmatrix} < 0.\tag{46}
$$
Using Assumption 1, multiplying (46) on the left by diag{J_{ικ}ᵀP(ι,κ), J_{ικ}ᵀ} and on the right by diag{P(ι,κ)J_{ικ}, J_{ικ}}, and using the controller parameters defined in (28)∼(30) yields
$$
\begin{bmatrix}
-\beta_{1\iota\kappa}Y(\iota,\kappa) & (*)^T & (*)^T & (*)^T\\
-\beta_{1\iota\kappa}I & -\beta_{1\iota\kappa}X(\iota,\kappa) & (*)^T & (*)^T\\
\mathcal A & Y(\iota,\kappa)A_i^T & -Y(\iota,\kappa) & (*)^T\\
A_i^T & A_i^TX(\iota,\kappa) & -I & -X(\iota,\kappa)
\end{bmatrix}
+ \begin{bmatrix}H_{1i}\\ X(\iota,\kappa)H_{1i}\\ 0\\ 0\end{bmatrix}F(t)\begin{bmatrix}0 & 0 & E_{1i}Y(\iota,\kappa) & E_{1i}\end{bmatrix}
+ \begin{bmatrix}0 & 0 & E_{1i}Y(\iota,\kappa) & E_{1i}\end{bmatrix}^TF^T(t)\begin{bmatrix}H_{1i}\\ X(\iota,\kappa)H_{1i}\\ 0\\ 0\end{bmatrix}^T < 0,\tag{47}
$$
where 𝒜 = −A_i − Y(ι,κ)C_{2j}ᵀF_iᵀ(ι,κ) − ∑_{j=1}^{s} λ_{ιj}Y(ι,κ)Y⁻¹(j). To address the term containing −∑_{j=1}^{s} λ_{ιj}Y(ι,κ)Y⁻¹(j), we first rewrite (47) in the following equivalent form:
$$
\begin{bmatrix}
-\beta_{1\iota\kappa}Y(\iota,\kappa)+R_{1\iota\kappa} & (*)^T & (*)^T & (*)^T\\
-\beta_{1\iota\kappa}I & -\beta_{1\iota\kappa}X(\iota,\kappa)+\mathcal Y & (*)^T & (*)^T\\
-A_i-Y(\iota,\kappa)C_{2j}^TF_i^T(\iota,\kappa)-\lambda_{\iota\iota}I & Y(\iota,\kappa)A_i^T & -Y(\iota,\kappa)+R_{2\iota\kappa} & (*)^T\\
A_i^T & A_i^TX(\iota,\kappa) & -I & -X(\iota,\kappa)+R_{3\iota\kappa}
\end{bmatrix}
+ \begin{bmatrix}
-R_{1\iota\kappa} & (*)^T & (*)^T & (*)^T\\
0 & -\mathcal Y & (*)^T & (*)^T\\
0 & -\sum\limits_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j) & -R_{2\iota\kappa} & (*)^T\\
0 & 0 & 0 & -R_{3\iota\kappa}
\end{bmatrix}
+ \begin{bmatrix}H_{1i}\\ X(\iota,\kappa)H_{1i}\\ 0\\ 0\end{bmatrix}F(t)\begin{bmatrix}0 & 0 & E_{1i}Y(\iota,\kappa) & E_{1i}\end{bmatrix}
+ \begin{bmatrix}0 & 0 & E_{1i}Y(\iota,\kappa) & E_{1i}\end{bmatrix}^TF^T(t)\begin{bmatrix}H_{1i}\\ X(\iota,\kappa)H_{1i}\\ 0\\ 0\end{bmatrix}^T < 0,\tag{48}
$$
where
$$
\mathcal Y = \Big(\sum_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j)\Big)\Big(\sum_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j)\Big).
$$
If the second term on the left-hand side of (48) is negative definite, we get
$$
\begin{bmatrix}
-\beta_{1\iota\kappa}Y(\iota,\kappa)+R_{1\iota\kappa} & (*)^T & (*)^T & (*)^T\\
-\beta_{1\iota\kappa}I & -\beta_{1\iota\kappa}X(\iota,\kappa)+\mathcal Y & (*)^T & (*)^T\\
-A_i-Y(\iota,\kappa)C_{2j}^TF_i^T(\iota,\kappa)-\lambda_{\iota\iota}I & Y(\iota,\kappa)A_i^T & -Y(\iota,\kappa)+R_{2\iota\kappa} & (*)^T\\
A_i^T & A_i^TX(\iota,\kappa) & -I & -X(\iota,\kappa)+R_{3\iota\kappa}
\end{bmatrix}
+ \begin{bmatrix}H_{1i}\\ X(\iota,\kappa)H_{1i}\\ 0\\ 0\end{bmatrix}F(t)\begin{bmatrix}0 & 0 & E_{1i}Y(\iota,\kappa) & E_{1i}\end{bmatrix}
+ \begin{bmatrix}0 & 0 & E_{1i}Y(\iota,\kappa) & E_{1i}\end{bmatrix}^TF^T(t)\begin{bmatrix}H_{1i}\\ X(\iota,\kappa)H_{1i}\\ 0\\ 0\end{bmatrix}^T < 0.\tag{49}
$$
By defining the new variable R_{4ικ} and using (20), we get
$$
R_{4\iota\kappa} > \sum_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j),
$$
which also implies
$$
R_{4\iota\kappa}R_{4\iota\kappa} > \Big(\sum_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j)\Big)\Big(\sum_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j)\Big).
$$
Therefore, by applying Lemma 2.1 of [22] and the Schur complement, it is not hard to see that if (19) holds, then (49) is guaranteed and (36) is thereby satisfied. Furthermore, we address the negative definiteness of the second term on the left-hand side of (48); we require
$$
\begin{bmatrix}
-R_{1\iota\kappa} & (*)^T & (*)^T & (*)^T\\
0 & -\mathcal Y & (*)^T & (*)^T\\
0 & -\sum\limits_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j) & -R_{2\iota\kappa} & (*)^T\\
0 & 0 & 0 & -R_{3\iota\kappa}
\end{bmatrix} < 0.\tag{50}
$$
Multiplying (50) on both sides by
$$
\begin{bmatrix}
I & 0 & 0 & 0\\
0 & \Big(\sum\limits_{j=1,j\ne\iota}^{s}\lambda_{\iota j}Y^{-1}(j)\Big)^{-1} & 0 & 0\\
0 & 0 & I & 0\\
0 & 0 & 0 & I
\end{bmatrix},
$$
we can see that if (22) holds, then (50) holds. It is straightforward to show that the third term is negative as well if (22) holds. The inequalities (23)∼(25) can be derived from (37)∼(39) using the same procedure. Besides, P(ι,κ) > 0 is equivalent to
$$
J_{\iota\kappa}^TP(\iota,\kappa)J_{\iota\kappa} = \begin{bmatrix} Y(\iota,\kappa) & I\\ I & X(\iota,\kappa)\end{bmatrix} > 0.\tag{51}
$$
We therefore have the inequality condition (16). This completes the proof.

It should be noted that the terms Y(ι,κ)C_jᵀF_iᵀ(ι,κ), β_{1ικ}X(ι,κ), and β_{1ικ}Y(ι,κ) in (17)∼(25) are not convex constraints and are therefore difficult to solve. We thus propose the following algorithm to convert this non-convex feasibility problem into quasi-convex optimization problems [23].

Iterative linear matrix inequality (ILMI) algorithm

Step 1  Find X(ι,κ), Y(ι,κ), F_i(ι,κ), and L_i(ι,κ) subject to (16) and (17) with ℏ_{ικ} = 0. Let n = 1, X_n(ι,κ) = X(ι,κ), and Y_n(ι,κ) = Y(ι,κ).

Step 2  Solve the following optimization problem for α_n, F_i(ι,κ), and L_i(ι,κ), with the given ℏ_{ικ} and the X_n(ι,κ) and Y_n(ι,κ) obtained in the previous step:
OP1: Minimize α_n subject to the LMI constraints
$$
\text{Left-hand side of (17)} - \alpha_n\begin{bmatrix} Y_n(\iota,\kappa) & I & 0\\ I & X_n(\iota,\kappa) & 0\\ 0 & 0 & 0\end{bmatrix} < 0\tag{52}
$$
and (16), (19)∼(25).

Step 3  If α_n < 0, then X_n(ι,κ), Y_n(ι,κ), F_i(ι,κ), and L_i(ι,κ) are a feasible solution to the BMIs; stop.

Step 4  Set n = n + 1. Solve the following optimization problem for α_n, X_n(ι,κ), and Y_n(ι,κ), with the F_i(ι,κ) and L_i(ι,κ) obtained in the previous step:
OP2: Minimize α_n subject to the LMI constraints (52), (16), and (19)∼(25).

Step 5  If α_n < 0, then X_n(ι,κ), Y_n(ι,κ), F_i(ι,κ), and L_i(ι,κ) are a feasible solution to the BMIs; stop.

Step 6  Set n = n + 1. Solve the following optimization problem for X_n(ι,κ) and Y_n(ι,κ), with the α_n, F_i(ι,κ), and L_i(ι,κ) obtained in the previous step:
OP3: Minimize trace of
$$
\begin{bmatrix} Y_n(\iota,\kappa) & I\\ I & X_n(\iota,\kappa)\end{bmatrix}
$$
subject to the LMI constraints (52), (16), and (19)∼(25).

Step 7  Let
$$
T_n = \begin{bmatrix} Y_n(\iota,\kappa) & I\\ I & X_n(\iota,\kappa)\end{bmatrix}.
$$
If ‖T_n − T_{n−1}‖/‖T_n‖ < ζ, where ζ is a prescribed tolerance, go to Step 8. Else, set n = n + 1, X_n(ι,κ) = X_{n−1}(ι,κ), Y_n(ι,κ) = Y_{n−1}(ι,κ), and go to Step 2.

Step 8  A controller for the system may not be found; stop.
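The role of the relaxation scalar α_n in (52) can be illustrated in isolation. For a fixed left-hand side and T_n = I, the smallest feasible α_n is simply the largest eigenvalue of the symmetrized matrix, so α_n < 0 certifies that the original LMI already holds (the stopping test of Steps 3 and 5). The matrices below are hypothetical stand-ins for the paper's blocks:

```python
import numpy as np

def alpha_measure(lmi_lhs):
    """Smallest alpha_n such that lmi_lhs - alpha_n * I < 0:
    the largest eigenvalue of the symmetrized left-hand side.
    A negative value certifies that the original LMI holds."""
    sym = 0.5 * (lmi_lhs + lmi_lhs.T)
    return float(np.linalg.eigvalsh(sym).max())

# Hypothetical stand-in for the left-hand side of (17) at a trial pair (X_n, Y_n)
A = np.array([[-1.0, 0.2], [0.2, -2.0]])   # assumed stable test matrix
Y_trial = np.eye(2)
lhs = A @ Y_trial + Y_trial @ A.T           # Lyapunov-type block
alpha_n = alpha_measure(lhs)                # negative => trial pair feasible
```

In practice each OP step is a semidefinite program (a generalized eigenvalue problem for OP1/OP2) that would be handed to an LMI solver; the sketch only shows what the α_n measure certifies at each iterate.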
2) The term −α_n [ Y_n(ι, κ)  I  0 ; I  X_n(ι, κ)  0 ; 0  0  0 ] is introduced in (17) to relax the LMI constraints. It is referred to as the α/2-stabilizable problem in [24]. If an α_n < 0 can be found, the robust controller can be obtained. The rationale behind this concept can also be found in [25].
3) The optimization problems in Step 2 and Step 4 are generalized eigenvalue minimization problems. These two steps guarantee the progressive reduction of α_n. Step 6 guarantees the convergence of the algorithm.
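The control flow of Steps 1∼8 can be sketched as an alternating loop. The following Python skeleton is illustrative only: `solve` callbacks `op1`, `op2`, and `op3` are hypothetical stand-ins for the LMI subproblems OP1∼OP3 (in practice each would invoke an SDP solver), and X_n, Y_n are reduced to scalars so that trace([Y_n I; I X_n]) = X_n + Y_n.

```python
def ilmi(op1, op2, op3, x, y, tol=1e-6, max_iter=100):
    """Alternate between the quasi-convex subproblems until alpha_n < 0
    (feasible, Steps 3/5) or the iterates stop changing (Steps 7/8)."""
    prev = None
    for _ in range(max_iter):
        # Step 2 / OP1: minimize alpha with X_n, Y_n fixed.
        alpha, f, l = op1(x, y)
        if alpha < 0:                       # Step 3: feasible solution found
            return "feasible", x, y, f, l
        # Step 4 / OP2: minimize alpha with F_i, L_i fixed.
        alpha, x, y = op2(x, y, f, l)
        if alpha < 0:                       # Step 5
            return "feasible", x, y, f, l
        # Step 6 / OP3: minimize trace with alpha, F_i, L_i fixed.
        x, y = op3(x, y, alpha, f, l)
        t = x + y                           # trace([[y, 1], [1, x]]), scalar case
        if prev is not None and abs(t - prev) < tol * max(abs(t), 1.0):
            return "infeasible", x, y, f, l  # Step 7 -> Step 8: give up
        prev = t
    return "infeasible", x, y, f, l
```

With toy callbacks on a scalar bilinear constraint (e.g. feasibility when x·y > 1), the loop terminates at Step 3 or Step 5 once the relaxation parameter turns negative, mirroring the progressive reduction of α_n noted in Remark 2.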
4  Numerical example
To illustrate the validity of the results obtained previously, we consider the problem of balancing an inverted pendulum on a cart. The equations of motion of the pendulum are described as follows:
x˙1 = x2,
x˙2 = [ g sin x1 − (amlx2²/2) sin(2x1) − a(cos x1)u ] / ( 4l/3 − aml cos²x1 ) + w, (53)
where x1 denotes the angle of the pendulum from the vertical position, and x2 is the angular velocity. g = 9.8 m/s² is the gravity constant, m is the mass of the pendulum, M is the mass of the cart, a = 1/(m + M), 2l is the length of the pendulum, and u is the force applied to the cart. In the simulation, the pendulum parameters are chosen as m = 2 kg, M = 8 kg, and 2l = 1.0 m. We approximate system (53) by the following T-S fuzzy model:
Rule 1: If x1(t) is M1, then
x˙(t) = (A1 + ΔA1)x(t) + B1w(t) + (B21 + ΔB21)u(t),
z(t) = C1x(t) + D12u(t),
y(t) = C2x(t);
Rule 2: If x1(t) is M2, then
x˙(t) = (A2 + ΔA2)x(t) + B1w(t) + (B22 + ΔB22)u(t),
z(t) = C1x(t) + D12u(t),
y(t) = C2x(t),
where
A1 = [ 0  1 ; g/(4l/3 − aml)  0 ],  B21 = [ 0 ; −a/(4l/3 − aml) ],
A2 = [ 0  1 ; 2g/(π(4l/3 − amlβ²))  0 ],  B22 = [ 0 ; −aβ/(4l/3 − amlβ²) ],
B1 = [ 0 ; 1 ],  C1 = [ 1  0.3 ],  D12 = 0.01,  C2 = [ 9  0.1 ],
H11 = H12 = [ 0.3  0 ; 0  0.3 ],
E11 = E12 = [ 0.5  0 ; 0  0.5 ],  E21 = E22 = [ 0 ; 0.2 ],
and β = cos 88°. The disturbance attenuation level γ is set to 1 in this example, and ε1 = ε2 = 1. The membership functions for Rule 1 and Rule 2 are shown in Fig. 4.
Fig. 4 Membership function.
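The dynamics (53) with the stated parameter values can be coded directly; the following is a minimal numerical sketch, where the function name and state conventions are ours, not the paper's. Note that the small-angle slope of x˙2 recovers the (2,1) entry g/(4l/3 − aml) of the local model A1.

```python
from math import sin, cos

# Parameters from the example: m = 2 kg, M = 8 kg, 2l = 1.0 m.
G = 9.8                     # gravity constant [m/s^2]
M_PEND, M_CART = 2.0, 8.0   # pendulum and cart masses [kg]
L = 0.5                     # half pole length, so 2l = 1.0 m
A = 1.0 / (M_PEND + M_CART) # a = 1/(m + M)

def pendulum_rhs(x1, x2, u, w=0.0):
    """Right-hand side of (53): returns (x1_dot, x2_dot).

    x1: pendulum angle from vertical, x2: angular velocity,
    u: cart force, w: disturbance input."""
    num = (G * sin(x1)
           - 0.5 * A * M_PEND * L * x2 ** 2 * sin(2.0 * x1)
           - A * cos(x1) * u)
    den = 4.0 * L / 3.0 - A * M_PEND * L * cos(x1) ** 2
    return x2, num / den + w
```

At the upright equilibrium (x1 = x2 = u = w = 0) both derivatives vanish, and for small x1 the angular acceleration is approximately g·x1/(4l/3 − aml) ≈ 17.29·x1, consistent with A1.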
In our simulation, we assume that the sampling period is 0.005 s for both the sensor and actuation channels, that is, ha = hs = 0.005, and ns = na = 0, which means that no data packet dropouts occur in the communication channel. The delay-free attenuation constant γdf is set to 1, while the constants ε1ijικ, ε2ijικ, ε3ijικ, ε4ijικ, ε5ijικ, ε6ijικ, ε7ijικ, ε8ijικ, and ε9ijικ are all set to 1. In the following simulation, we assume F(t) = sin t, so that ‖F(t)‖ ≤ 1. The random time delays evolve on S = {1, 2} and W = {1, 2}, and their transition rate matrices are given by
Λ = [ −3  3 ; 2  −2 ],  Π = [ −1  1 ; 2  −2 ].
Furthermore, we assume that the sensor-to-controller communication delays for the two Markovian modes satisfy |τ1s| < 0.01 and |τ2s| < 0.008, while the controller-to-actuator delays satisfy |τ1a| < 0.007 and |τ2a| < 0.012; therefore, by (7) and (8), we have 11 = 0.027, 12 = 0.032, 21 = 0.025, and 22 = 0.03. By applying Theorem 1 and the iterative algorithm, we obtain the following controller gains from the calculation of (28)∼(30):
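The random delay modes governed by these rate matrices can be sampled directly. The sketch below simulates the sensor-side chain with generator Λ and checks its long-run mode occupancy, which for this Λ has stationary distribution (2/5, 3/5); the function name and simulation setup are illustrative, not from the paper.

```python
import random

# Generator of the sensor-side Markov chain from the example; the
# controller-side chain with generator Pi is handled identically.
LAMBDA = [[-3.0, 3.0],
          [2.0, -2.0]]

def simulate_ctmc(q, t_end, state=0, rng=random):
    """Return the time spent in each state of a 2-state continuous-time
    Markov chain with generator q, simulated up to t_end."""
    t, dwell = 0.0, [0.0] * len(q)
    while t < t_end:
        rate = -q[state][state]          # total exit rate of current state
        stay = rng.expovariate(rate)     # exponential holding time
        stay = min(stay, t_end - t)      # clip the final sojourn
        dwell[state] += stay
        t += stay
        state = 1 - state                # two states: jump to the other
    return dwell

random.seed(0)
dwell = simulate_ctmc(LAMBDA, 2000.0)
frac0 = dwell[0] / sum(dwell)   # should approach 2/5 = 0.4
```

Over a long horizon the empirical fraction of time in mode 1 approaches 0.4, matching the stationary distribution obtained from πΛ = 0.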
Â11(1, 1) = [ −20.589  −56.924 ; 172.94  −9.0175 ],
Â12(1, 1) = [ −21.434  −55.46 ; 175.543  −9.554 ],
Â21(1, 1) = [ −20.964  −53.3222 ; 170.22  −10.21 ],
Â22(1, 1) = [ −21.2219  −57.14 ; 169.58  −9.15 ],
B̂1(1, 1) = [ 1.9634 ; −3.0617 ],  B̂2(1, 1) = [ 4.4349 ; −9.6547 ],
Ĉ1(1, 1) = [ 3.4116  −9.9555 ],  Ĉ2(1, 1) = [ 2.9754  −10.5743 ],
Â11(1, 2) = [ −34.519  −57.438 ; 226.33  −5.0388 ],
Â12(1, 2) = [ −35.002  −59.545 ; 254.678  −5.22 ],
Â21(1, 2) = [ −39.543  −56.412 ; 244.4906  −6.7648 ],
Â22(1, 2) = [ −38.719  −58.087 ; 245.234  −7.031 ],
B̂1(1, 2) = [ 2.3327 ; −4.4194 ],  B̂2(1, 2) = [ 5.6578 ; −6.535 ],
Ĉ1(1, 2) = [ 3.3543  −9.844 ],  Ĉ2(1, 2) = [ 1.9994  −5.541 ],
Â11(2, 1) = [ −14.197  −56.603 ; 148.01  −10.9 ],
Â12(2, 1) = [ −15.4276  −55.434 ; 143.998  −10.843 ],
Â21(2, 1) = [ −17.095  −59.483 ; 145.321  −11.5743 ],
Â22(2, 1) = [ −16.6546  −57.9798 ; 148.01  −10.439 ],
B̂1(2, 1) = [ 1.7708 ; −2.4243 ],  B̂2(2, 1) = [ 2.584 ; −6.49 ],
Ĉ1(2, 1) = [ 3.3808  −10.01 ],  Ĉ2(2, 1) = [ 3.3789  −12.6654 ],
Â11(2, 2) = [ −23.167  −56.865 ; 182.63  −8.2498 ],
Â12(2, 2) = [ −27.4833  −54.238 ; 187.493  −10.4738 ],
Â21(2, 2) = [ −25.096  −56.0943 ; 181.48  −9.924 ],
Â22(2, 2) = [ −24.8540  −55.5496 ; 189.9403  −8.933 ],
B̂1(2, 2) = [ 2.0131 ; −3.2839 ],  B̂2(2, 2) = [ 2.438 ; −3.3444 ],
Ĉ1(2, 2) = [ 3.3512  −9.9144 ],  Ĉ2(2, 2) = [ 5.0433  −5.9403 ].
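The mode-dependent gains above assemble into a rule-blended dynamic output-feedback controller in the standard T-S form x̂˙ = Σ_i Σ_j h_i h_j Â_ij x̂ + Σ_i h_i B̂_i y, u = Σ_i h_i Ĉ_i x̂; this blending structure is assumed here (the paper's exact controller equation is outside this excerpt), and the function and dictionary names are ours. The sketch uses the gains computed for mode (1, 1), with h = (h1, h2) the membership grades, h1 + h2 = 1.

```python
# Gains for Markov mode (iota, kappa) = (1, 1) as listed above.
A_HAT = {(1, 1): [[-20.589, -56.924], [172.94, -9.0175]],
         (1, 2): [[-21.434, -55.46],  [175.543, -9.554]],
         (2, 1): [[-20.964, -53.3222], [170.22, -10.21]],
         (2, 2): [[-21.2219, -57.14], [169.58, -9.15]]}
B_HAT = {1: [1.9634, -3.0617], 2: [4.4349, -9.6547]}
C_HAT = {1: [3.4116, -9.9555], 2: [2.9754, -10.5743]}

def controller_step(xc, y, h, dt=0.005):
    """One explicit-Euler step of the fuzzy controller state xc driven by
    measurement y; returns the new state and the blended control u."""
    dx = [0.0, 0.0]
    for i in (1, 2):
        for j in (1, 2):
            a = A_HAT[(i, j)]
            wgt = h[i - 1] * h[j - 1]        # rule-pair membership weight
            dx[0] += wgt * (a[0][0] * xc[0] + a[0][1] * xc[1])
            dx[1] += wgt * (a[1][0] * xc[0] + a[1][1] * xc[1])
        dx[0] += h[i - 1] * B_HAT[i][0] * y  # measurement injection
        dx[1] += h[i - 1] * B_HAT[i][1] * y
    xc = [xc[0] + dt * dx[0], xc[1] + dt * dx[1]]
    u = sum(h[i - 1] * (C_HAT[i][0] * xc[0] + C_HAT[i][1] * xc[1])
            for i in (1, 2))
    return xc, u
```

When a mode transition occurs, the dictionaries are simply re-keyed to the gains of the new mode (ι, κ).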
The ratio of the regulated output energy to the disturbance input noise energy is depicted in Fig. 5. In our simulation, we use a uniformly distributed random disturbance input signal w(t) with maximum value 2. It can be seen that the ratio tends to a constant value of about 0.05, which means that the attenuation level equals √0.05 ≈ 0.22, less than the prescribed level γ = √(γdf + max (ικ)) = √(1 + 0.032) ≈ 1.016.
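The quantity plotted in Fig. 5 can be checked with a few lines: the empirical attenuation level is the square root of the ratio of regulated-output energy to disturbance energy. The signal names and the rectangle-rule integration below are our assumptions, not the paper's code.

```python
def attenuation_level(z, w, dt=0.005):
    """Return sqrt( integral(z^2 dt) / integral(w^2 dt) ) for sampled
    signals z (regulated output) and w (disturbance), sample time dt."""
    ez = sum(v * v for v in z) * dt   # regulated output energy
    ew = sum(v * v for v in w) * dt   # disturbance input energy
    return (ez / ew) ** 0.5
```

If the energy ratio settles at about 0.05, the achieved level is about 0.22, well below the prescribed γ ≈ 1.016.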
In conclusion, the designed controller meets the performance requirements.
5  Conclusions
In this paper, we have proposed a technique for designing a delay-dependent dynamic output feedback controller with robust disturbance attenuation and stability for an uncertain NCS with random network-induced delays and data packet dropouts. The main contribution of this work is that both the sensor-to-controller and controller-to-actuator delays/dropouts have been taken into account. Furthermore, these delays are regarded as input delays and are dealt with in the scope of disturbance attenuation. The Lyapunov-Razumikhin method has been employed to derive such a controller for this class of systems, and sufficient conditions for the existence of such a controller were derived. Finally, a numerical example was used to demonstrate the effectiveness of the proposed methodology.
References
[1] W. Zhang, M. S. Branicky, S. M. Phillips. Stability of networked control systems[J]. IEEE Control Systems Magazine, 2001, 21(1): 84 – 99.
[2] M. S. Branicky, S. M. Phillips, W. Zhang. Stability of networked control systems: Explicit analysis of delay[C]//Proceedings of the American Control Conference. New York: IEEE, 2000: 2352 – 2357.
[3] G. C. Walsh, H. Ye, L. Bushnell. Stability analysis of networked control systems[C]//Proceedings of the American Control Conference. New York: IEEE, 1999: 2876 – 2880.
[4] H. Lin, G. Zhai, P. J. Antsaklis. Robust stability and disturbance attenuation analysis of a class of networked control systems[C]//Proceedings of the 42nd IEEE Conference on Decision and Control. Piscataway: IEEE, 2003: 1182 – 1187.
[5] I. R. Petersen, A. V. Savkin. Multi-rate stabilization of multivariable discrete-time linear systems via a limited capacity communication channel[C]//Proceedings of the 40th IEEE Conference on Decision and Control. Piscataway: IEEE, 2001: 304 – 309.
[6] N. Elia, S. K. Mitter. Stabilization of linear systems with limited information[J]. IEEE Transactions on Automatic Control, 2001, 46(9): 1384 – 1400.
[7] M. Chow, Y. Tipsuwan. Network-based control systems: A tutorial[C]//Proceedings of the 27th Annual Conference of the IEEE Industrial Electronics Society. Piscataway: IEEE, 2001: 1794 – 1799. [8] D. Huang, S. K. Nguang. State feedback control of uncertain networked control systems with random time delays[J]. IEEE Transactions on Automatic Control, 2008, 53(3): 829 – 834. [9] P. Naghshtabrizi, J. P. Hespanha. Designing an observer-based controller for a network control system[C]//Proceedings of the 44th IEEE Conference on Decision and Control. New York: IEEE, 2005: 848 – 853. [10] H. J. Kushner. Stochastic Stability and Control[M]. New York: Academic Press, 1967. [11] S. K. Nguang, W. Assawinchaichote, P. Shi, et al. Robust H∞ control design for uncertain fuzzy systems with Markovian jumps: An LMI approach[C]//Proceedings of the American Control Conference. New York: IEEE, 2005: 1805 – 1810.
Fig. 5 The ratio of the regulated output energy to the disturbance input noise without data dropouts.
[12] W. Assawinchaichote, S. K. Nguang, P. Shi, et al. Robust H∞ control design for fuzzy singularly perturbed systems with Markovian jumps: an LMI approach[C]//Proceedings of the 43rd IEEE Conference on Decision and Control. Piscataway: IEEE, 2003: 803 – 808.
[13] Y. Ji, H. J. Chizeck. Controllability, stabilizability, and continuous-time Markovian jump linear quadratic control[J]. IEEE Transactions on Automatic Control, 1990, 35(7): 777 – 788.
[23] D. Huang, S. K. Nguang. Static output feedback controller design for fuzzy systems: An ILMI approach[J]. Information Sciences, 2007, 177(14): 3005 – 3015.
[14] S. Xu, T. Chen, J. Lam. Robust H∞ filtering for uncertain Markovian jump systems with mode-dependent time delays[J]. IEEE Transactions on Automatic Control, 2003, 48(5): 900 – 907.
[24] Y. Y. Cao, J. Lam, Y. X. Sun. Static output feedback stabilization: An ILMI approach[J]. Automatica, 1998, 34(12): 1641 – 1645.
[15] E. K. Boukas, Z. K. Liu. Robust stability and stabilizability of Markov jump linear uncertain systems with mode-dependent time delays[J]. Journal of Optimization Theory and Applications, 2001, 109(3): 587 – 600. [16] Y. Y. Cao, J. Lam. Robust H∞ control of uncertain Markovian jump systems with time-delay[J]. IEEE Transactions on Automatic Control, 2000, 45(1): 77 – 83. [17] L. Zhang, Y. Shi, T. Chen, et al. A new method for stabilisation of networked control systems with random delays[J]. IEEE Transactions on Automatic Control, 2005, 50(8): 1177 – 1181. [18] T. Takagi, M. Sugeno. Fuzzy identification of systems and its applications to modeling and control[J]. IEEE Transactions on Systems, Man, and Cybernetics, 1985, 15(1): 116 – 132. [19] J. Nilsson, B. Bernhardsson. LQG control over a Markov communication network[C]//Proceedings of the 36th IEEE Conference on Decision and Control. New York: IEEE, 1997: 4586 – 4591. [20] R. Srichander, B. K. Walker. Stochastic stability analysis for continuous-time fault tolerant control systems[J]. International Journal of Control, 1993, 57(2): 433 – 452. [21] X. Mao. Stochastic functional differential equations with Markovian switching[J]. Functional Differential Equations, 1999, 6(3/4): 375 – 396. [22] Y. Wang, L. Xie, C. E. de Souza. Robust control of a class of uncertain nonlinear systems[J]. Systems & Control Letters, 1992, 19(2): 139 – 149.
[25] S. Boyd, L. El Ghaoui, E. Feron, et al. Linear Matrix Inequalities in System and Control Theory[M]. Philadelphia: SIAM, 1994. Dan HUANG received the B.E. degree from Shanghai Jiao Tong University, Shanghai, China, in 2002, and the M.E. (with first class honors) and Ph.D. degrees from The University of Auckland, Auckland, New Zealand, in 2004 and 2008, respectively. Currently, he is a lecturer at the Department of Electrical and Computer Engineering, University of Auckland, Auckland, New Zealand. His current research interests include nonlinear modeling and analysis, fuzzy logic control, and networked control systems. E-mail:
[email protected]. Sing Kiong NGUANG received the B.E. (with first class honors) and the Ph.D. degrees from the Department of Electrical and Computer Engineering of the University of Newcastle, Callaghan, Australia, in 1992 and 1995, respectively. Currently, he is an associate professor at the Department of Electrical and Computer Engineering, University of Auckland, Auckland, New Zealand. He has published over 80 journal papers and over 60 conference papers/presentations on nonlinear control design, nonlinear H∞ control systems, nonlinear time-delay systems, nonlinear sampled-data systems, biomedical systems modeling, fuzzy modeling and control, biological systems modeling and control, and food and bioproduct processing. E-mail:
[email protected].