Dynamics and Control, 9, 135–148 (1999)
© 1999 Kluwer Academic Publishers, Boston. Manufactured in The Netherlands.
Optimal Compensation by Linear Robust Control for Uncertain Systems

Y. H. CHEN  [email protected]
The George W. Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0405, USA

Editor: G. Leitmann

Received March 13, 1998; Revised October 28, 1998; Accepted November 3, 1998

Abstract. Linear control design for nonlinear discrete-time uncertain systems with (possibly fast) time-varying uncertain parameters is considered. The uncertainty bound must lie within a threshold, and the threshold in turn depends on a design parameter. To achieve the maximum threshold by an optimal choice of the design parameter, a constrained optimization problem is proposed. The resulting optimal compensation of the uncertainty is, in general, a partial compensation.

Keywords: stability, robust control, Lyapunov approach, discrete system
1. Introduction
We investigate the control design problem for a class of discrete-time nonlinear uncertain systems. The uncertainty is (possibly fast) time-varying, and only a bound on it is known. The design methodology is the Lyapunov minimax approach (see, e.g., [5], [13]). Historically, the approach was first applied to control design in the continuous-time case ([3], [7], [9], [10]). The control, which is based on the bound of the uncertainty, compensates the worst-case uncertainty. The resulting closed-loop system possesses certain properties (e.g., practical stability) regardless of the realization of the uncertainty. The bound of the uncertainty can be arbitrarily large (provided it is finite). A control that compensates the (full) bound of the uncertainty is called a full compensation.

The development of control design for the discrete-time case came later (see, e.g., [5], [8], [12], [14], [15], [16], [17]). Full compensation of the uncertainty is also used there. The uncertainty magnitude, however, needs to lie within a threshold (as opposed to being arbitrarily large in the continuous case) in order that certain system properties (such as practical stability) are still guaranteed.

In this article, through a detailed analysis (in the proof of Theorem 1), we show that the control in fact serves two roles: it both stabilizes and destabilizes the system (at least in the worst case). As a result, the threshold for the uncertainty bound depends on the choice of a design parameter. Since the early work is based on full compensation, the design parameter is preempted, which in turn means that the size of the threshold is predetermined. On the other hand, the detailed analysis also reveals an interesting connection between the size of the threshold and the design parameter. It is through this connection that an optimal
choice of the design parameter is proposed: one that maximizes the threshold. This in general means that the control does not compensate the worst-case bound of the uncertainty; rather, it compensates only a portion of the uncertainty bound, hence a partial compensation. This is believed to be the first time that partial compensation has been proposed and argued to be more advantageous than full compensation.

In this article, we first construct a threshold for the uncertainty bound when the system is under a linear robust control. Second, we propose to maximize the threshold by an optimal choice of a design parameter; the problem is formulated as a constrained optimization problem, and a solution procedure is demonstrated. Third, we illustrate the control design with a prototype macroeconometric model for policy evaluation.
2. Uncertain Systems
Consider a class of discrete-time dynamical systems modeled by the difference equation

x(k+1) = A x(k) + B u(k) + C v(x(k), σ(k), k),  x(k0) = x0,   (2.1)

where k ∈ K := {0, 1, ...}, the state x(k) ∈ R^n, the control u(k) ∈ R^m, the disturbance v(x(k), σ(k), k) ∈ R^p, and the uncertainty σ(k) ∈ R^s. The matrices A, B, and C are of appropriate dimensions. The value of the mapping σ(·) is unknown for each k. The following assumptions are made.

Assumption 1: The pair (A, B) is stabilizable. That is, there is a constant gain matrix K ∈ R^{m×n} such that all eigenvalues of Ā := A + BK are strictly inside the unit disc ([1]).

Assumption 2: The matrix B is of full rank: rank(B) = m.

Assumption 3: The mapping v(·) is continuous.

Assumption 4: There is a prescribed compact set Σ ⊂ R^s such that σ(k) ∈ Σ for all k.

Assumption 5: There is a matrix D such that

C = BD.   (2.2)

Furthermore, there are constants γ1, γ2 ≥ 0 such that, for all x ∈ R^n, σ ∈ Σ, k ∈ K,

‖v(x, σ, k)‖ ≤ γ1‖x‖ + γ2.   (2.3)
The vector norm is the Euclidean norm, and matrix norms are the corresponding induced ones; hence, for a real matrix M, ‖M‖ = [λmax(M^T M)]^{1/2}, where λmax(·) (resp. λmin(·)) denotes the maximum (resp. minimum) eigenvalue of the designated matrix.
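As a quick numerical check of this definition (a sketch using NumPy; the matrix below is an arbitrary illustration, not taken from the paper):

```python
import numpy as np

# Arbitrary real matrix for illustration.
M = np.array([[1.0, 2.0],
              [0.0, 3.0]])

# Induced 2-norm via the eigenvalue characterization in the text:
# ||M|| = [lambda_max(M^T M)]^(1/2).
norm_from_eigs = np.sqrt(np.max(np.linalg.eigvalsh(M.T @ M)))

# NumPy's spectral norm of a matrix computes the same quantity.
norm_builtin = np.linalg.norm(M, 2)
```

The two values agree to machine precision: the induced Euclidean norm is the largest singular value of M.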
Definition. Consider the dynamical system

x(k+1) = f(x(k), u(k), k),  x(k0) = x0.   (2.4)

A control u(k) = p(x(k), k) renders the system uniformly bounded (u.b.) and uniformly ultimately bounded (u.u.b.) iff the following hold. The solution x(·): [k0, ∞) → R^n of the controlled system

x(k+1) = f(x(k), p(x(k), k), k)   (2.5)

is uniformly bounded; that is, for any s > 0 with ‖x0‖ ≤ s, there is a constant d(s) such that

‖x(k)‖ ≤ d(s)   (2.6)

for all k ≥ k0. In addition, the solution of (2.5) is uniformly ultimately bounded; that is, there is a constant d ≥ 0 such that, for any d̄ > d and any s, there is a finite K̂(d̄, s) ≥ 0 such that

‖x(k)‖ ≤ d̄   (2.7)

for all k ≥ k0 + K̂(d̄, s).

The objective is to design a control u(k), linear in x(k), such that the solution of the resulting closed-loop system is uniformly bounded and uniformly ultimately bounded. The information on the uncertainty available to the designer is Σ, not σ(·).

3. Control Scheme
The control design is based only on the bound of the uncertainty (hence Σ). For the uncertain system (2.1), we propose the following robust control scheme:

u(k) = K x(k) − γ B^T P Ā x(k),   (3.1)

where P > 0 is the unique solution of the Lyapunov equation

Ā^T P Ā − P + Q = 0,  Q > 0,   (3.2)

and

γ ≥ 0.   (3.3)

The specific choice of γ will be decided later.

Remark. The control scheme (3.1) consists of two parts: Kx and −γB^T P Ā x. The first part stabilizes the nominal system (i.e., x(k+1) = Ax(k) + Bu(k)), while the second part is meant to compensate the effect of the uncertainty.

Remark. The control scheme (3.1) is motivated by a number of previous works, such as [15] and [8]. While the structure of the control is preempted by the choice of K and Q (hence P), the choice of the magnitude γ remains flexible. We shall take advantage of this to pursue an optimal choice.
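As a sketch of how K, Q, P, and the control (3.1) fit together (borrowing, for concreteness, the numerical model that appears later in Section 5; γ is left as a free parameter):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

# Nominal pair (A, B) and stabilizing gain K, taken from the example in Section 5.
A = np.array([[0.0, 1.0],
              [-1.25, 2.0]])
B = np.array([[0.0],
              [1.0]])
K = np.array([[1.04, -1.38]])

Abar = A + B @ K    # eigenvalues of Abar lie strictly inside the unit disc
Q = np.eye(2)

# solve_discrete_lyapunov(a, q) solves a X a^T - X + q = 0; with a = Abar^T
# this is exactly the Lyapunov equation (3.2): Abar^T P Abar - P + Q = 0.
P = solve_discrete_lyapunov(Abar.T, Q)

def u(x, gamma):
    """Robust control (3.1): u(k) = K x(k) - gamma * B^T P Abar x(k)."""
    return float(K @ x - gamma * B.T @ P @ Abar @ x)
```

Since P solves (3.2) exactly, the residual of the Lyapunov equation vanishes to numerical precision, and the spectral radius of Ā is below one as Assumption 1 requires.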
Theorem 1. Consider the uncertain discrete-time system (2.1). Suppose that Assumptions 1–5 are met. The control scheme (3.1) renders the system uniformly bounded and uniformly ultimately bounded if γ > 0 and

2γ λmin(Q) > 2γ λmax(B^T P B)(γ‖B^T P Ā‖ + γ1)^2 + γ1^2.   (3.4)

In particular, we have

d = [λmax(P)/λmin(P)]^{1/2} η0,   (3.5)

where η0 is the larger root of κ(ω) = 0,

K̂(d̄, s) = 0 if s ≤ η̄0;  K̂(d̄, s) = ⌈(λmax(P)s^2 − λmin(P)η̄0^2)/κ(η̄0)⌉ if s > η̄0,   (3.6)

κ(ω) = [λmin(Q) − λmax(B^T P B)(γ‖B^T P Ā‖ + γ1)^2 − γ1^2/(2γ)] ω^2
  − [2λmax(B^T P B)(γ1 + γ‖B^T P Ā‖)γ2 + γ1γ2/γ] ω
  − λmax(B^T P B)γ2^2 − γ2^2/(2γ),   (3.7)

η̄0 = [λmin(P)/λmax(P)]^{1/2} d̄,   (3.8)

where ⌈·⌉: R → N is the ceiling function (hence, for example, ⌈1.14⌉ = 2).

Proof: Let the Lyapunov function candidate be given by

V(x(k)) = x^T(k) P x(k).   (3.9)
Substituting (2.1) into (3.9), we have, for each σ(·),

ΔV(x(k)) = V(x(k+1)) − V(x(k))
 = x(k+1)^T P x(k+1) − x(k)^T P x(k)
 = [Ax(k) + Bu(k) + Cv(x(k), σ(k), k)]^T P [Ax(k) + Bu(k) + Cv(x(k), σ(k), k)] − x(k)^T P x(k).

By (2.2) and (3.1) (from now on, arguments are sometimes omitted when no confusion is likely to arise), this in turn means that

ΔV = [Ax + B(Kx − γB^T P Ā x) + BDv]^T P [Ax + B(Kx − γB^T P Ā x) + BDv] − x^T P x
 = [(A + BK)x − γBB^T P Ā x + BDv]^T P [(A + BK)x − γBB^T P Ā x + BDv] − x^T P x.   (3.10)
By the Lyapunov equation (3.2), this results in

ΔV = −x^T Q x + [B(−γB^T P Ā x + Dv)]^T P [B(−γB^T P Ā x + Dv)] + 2x^T Ā^T P B(−γB^T P Ā x + Dv).   (3.11)
Using Rayleigh's principle that λmin(M)‖x‖^2 ≤ x^T M x ≤ λmax(M)‖x‖^2, which holds for any real symmetric matrix M, we obtain

−x^T Q x ≤ −λmin(Q)‖x‖^2.   (3.12)
In addition (notice that λmax(B^T P B) > 0 since B is of full rank),

[B(−γB^T P Ā x + Dv)]^T P [B(−γB^T P Ā x + Dv)]
 ≤ λmax(B^T P B)‖−γB^T P Ā x + Dv‖^2
 ≤ λmax(B^T P B)[(γ‖B^T P Ā‖‖x‖)^2 + (γ1‖x‖ + γ2)^2 + 2γ‖B^T P Ā‖‖x‖(γ1‖x‖ + γ2)]
 =: η2(γ)‖x‖^2 + η1(γ)‖x‖ + η0(γ),   (3.13)

where

η2(γ) := λmax(B^T P B)(γ‖B^T P Ā‖ + γ1)^2,   (3.14)
η1(γ) := 2λmax(B^T P B)(γ1 + γ‖B^T P Ā‖)γ2,   (3.15)
η0(γ) := λmax(B^T P B)γ2^2.
This reveals the "negative" contribution of the control toward u.b. and u.u.b. Next, we analyze the last term on the right-hand side of (3.11). By (2.3), and since 2ab ≤ 2γa^2 + b^2/(2γ) for a, b ≥ 0 and γ > 0,

2x^T Ā^T P B(−γB^T P Ā x + Dv)
 ≤ −2γ‖B^T P Ā x‖^2 + 2‖B^T P Ā x‖(γ1‖x‖ + γ2)
 ≤ (γ1‖x‖ + γ2)^2/(2γ)
 =: δ2(γ)‖x‖^2 + δ1(γ)‖x‖ + δ0(γ),   (3.16)

where

δ2(γ) := γ1^2/(2γ),   (3.17)
δ1(γ) := γ1γ2/γ,   (3.18)
δ0(γ) := γ2^2/(2γ).   (3.19)
Using (3.12), (3.13), and (3.16) in (3.11), we have

ΔV ≤ −(λmin(Q) − η2 − δ2)‖x‖^2 + (η1 + δ1)‖x‖ + η0 + δ0.   (3.20)
That

λmin(Q) − η2(γ) − δ2(γ) > 0   (3.21)

is equivalent to (3.4). We conclude that ΔV is then negative definite in the region outside a sphere around the origin. Uniform boundedness and uniform ultimate boundedness are thus proven ([8], [14]).

Remark. The inequality (3.4) imposes an upper bound on γ1 for any choice of γ. In the early work (such as [5], [14], [15]) one chose γ = γ1; this is considered a full compensation. It is interesting to note that the condition (3.4) does not involve γ2. Certainly, by (3.7), the magnitude γ2 still influences the size of the ultimate boundedness region.

Remark. If γ1 = 0, then it can be shown, in a way similar to the proof, that the controlled system is still u.b. and u.u.b. with γ = 0. One still has the discretion to choose γ > 0 to manipulate the size of the uniform ultimate boundedness region.

Let us rewrite (3.4) as

0 > β2(γ)γ1^2 + 2β1(γ)γ1 − β0(γ),   (3.22)
where

β2(γ) := 2γ λmax(B^T P B) + 1,   (3.23)
β1(γ) := 2γ^2 λmax(B^T P B)‖B^T P Ā‖,   (3.24)
β0(γ) := −2γ^3 λmax(B^T P B)‖B^T P Ā‖^2 + 2γ λmin(Q).   (3.25)
¯ > 0. This in turn means that γ1 ≥ 0 must In this article, we only consider that kB T P Ak be upper bounded: q −β1 (γ ) + β12 (γ ) + β2 (γ )β0 (γ ) γ1 < β2 (γ ) =: φ(γ ).
(3.26)
The purpose of applying robust control is the hope that it may tolerate a certain amount of uncertainty bound γ1, preferably as large as possible. In other words, one seeks to maximize φ(γ) over all admissible choices of γ. Notice that β2 > 0 and β1 ≥ 0. One observes that φ(γ) > 0 if and only if β0 > 0; furthermore, φ(γ) = 0 if and only if β0 = 0. That β0 > 0 occurs for

0 < γ < [λmin(Q)/(λmax(B^T P B)‖B^T P Ā‖^2)]^{1/2} =: γ̄,   (3.27)
and β0 = 0 occurs for

γ = 0   (3.28)

or

γ = γ̄.   (3.29)
In light of these, we propose the following constrained optimization problem:

max_{γ ∈ [0, γ̄]} φ(γ).   (3.30)
We proceed to solve this problem. By (3.26), φ(γ) and γ are related by

β2(γ)φ^2(γ) + 2β1(γ)φ(γ) − β0(γ) = 0.   (3.31)

Differentiating (3.31) with respect to γ yields

β2′(γ)φ^2(γ) + 2β2(γ)φ(γ)φ′(γ) + 2β1′(γ)φ(γ) + 2β1(γ)φ′(γ) − β0′(γ) = 0,   (3.32)

where φ′(γ) ≡ dφ(γ)/dγ and βi′(γ) ≡ dβi(γ)/dγ, i = 0, 1, 2. An extremum of φ(γ) occurs when φ′(γ) = 0 and hence

β2′(γ)φ^2(γ) + 2β1′(γ)φ(γ) − β0′(γ) = 0.   (3.33)

This is another constraint relating φ(γ) and γ. Certainly, by (3.33), we have

φ(γ) = [−β1′(γ) + (β1′^2(γ) + β2′(γ)β0′(γ))^{1/2}]/β2′(γ).   (3.34)
The extremal φ(γ) and the corresponding γ must satisfy both (3.31) and (3.33) (or (3.26) and (3.34)). Multiplying (3.33) by γ/2 results in

(γ/2)β2′(γ)φ^2(γ) + γβ1′(γ)φ(γ) − (γ/2)β0′(γ) = 0.   (3.35)

Subtracting (3.35) from (3.31) yields

(λmax(B^T P B)γ + 1)φ^2(γ) = γλmin(Q) + λmax(B^T P B)‖B^T P Ā‖^2 γ^3.   (3.36)

Solving for φ(γ) ≥ 0:

φ(γ) = [(γλmin(Q) + λmax(B^T P B)‖B^T P Ā‖^2 γ^3)/(λmax(B^T P B)γ + 1)]^{1/2}.   (3.37)
Plugging (3.37) into (3.33) shows that

2λmax(B^T P B) (γλmin(Q) + λmax(B^T P B)‖B^T P Ā‖^2 γ^3)/(λmax(B^T P B)γ + 1)
 + 8γλmax(B^T P B)‖B^T P Ā‖ [(γλmin(Q) + λmax(B^T P B)‖B^T P Ā‖^2 γ^3)/(λmax(B^T P B)γ + 1)]^{1/2}
 + 6λmax(B^T P B)‖B^T P Ā‖^2 γ^2 − 2λmin(Q) = 0.   (3.38)
Let

Ψ(γ) := 2λmax(B^T P B) (γλmin(Q) + λmax(B^T P B)‖B^T P Ā‖^2 γ^3)/(λmax(B^T P B)γ + 1)
 + 8γλmax(B^T P B)‖B^T P Ā‖ [(γλmin(Q) + λmax(B^T P B)‖B^T P Ā‖^2 γ^3)/(λmax(B^T P B)γ + 1)]^{1/2}
 + 6λmax(B^T P B)‖B^T P Ā‖^2 γ^2 − 2λmin(Q).   (3.39)
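As a numerical sanity check of this construction (a sketch; the constants below are the rounded values that arise in the example of Section 5): at a root of Ψ(γ) = 0, the closed form (3.37) must agree with the quadratic-root expression (3.26), since (3.31) is recovered as the combination of (3.36) and (3.33).

```python
import numpy as np
from scipy.optimize import brentq

a = 2.85   # lambda_max(B^T P B), rounded value from the Section 5 example
b = 1.57   # ||B^T P Abar||
q = 1.0    # lambda_min(Q)

def phi_closed(g):
    # (3.37)
    return np.sqrt((g * q + a * b**2 * g**3) / (a * g + 1.0))

def phi_quadratic(g):
    # (3.26): nonnegative root of beta2*phi^2 + 2*beta1*phi - beta0 = 0
    beta2 = 2.0 * g * a + 1.0
    beta1 = 2.0 * g**2 * a * b
    beta0 = 2.0 * g * q - 2.0 * g**3 * a * b**2
    return (-beta1 + np.sqrt(beta1**2 + beta2 * beta0)) / beta2

def Psi(g):
    # (3.39), with phi^2 substituted from (3.37)
    f = phi_closed(g)
    return 2*a*f**2 + 8*g*a*b*f + 6*a*b**2*g**2 - 2*q

gbar = np.sqrt(q / (a * b**2))            # (3.27)
g_star = brentq(Psi, 1e-9, gbar - 1e-9)   # root of Psi on (0, gbar)
```

At the computed root the two expressions for φ coincide, confirming that the extremal pair (γ, φ(γ)) satisfies both (3.31) and (3.33).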
To show that the extremum is a maximum, we consider the second-order derivative with respect to γ. Differentiating (3.32) with respect to γ, we have

β2″(γ)φ^2(γ) + 4β2′(γ)φ(γ)φ′(γ) + 2β2(γ)φ′^2(γ) + 2β2(γ)φ(γ)φ″(γ)
 + 2β1″(γ)φ(γ) + 4β1′(γ)φ′(γ) + 2β1(γ)φ″(γ) − β0″(γ) = 0.   (3.40)

When φ′(γ) = 0, the equation reduces to

β2″(γ)φ^2(γ) + 2β2(γ)φ(γ)φ″(γ) + 2β1″(γ)φ(γ) + 2β1(γ)φ″(γ) − β0″(γ) = 0,   (3.41)

where the double prime denotes the second-order derivative with respect to γ. Since β2″(γ) = 0, β1″(γ) = 4λmax(B^T P B)‖B^T P Ā‖, and β0″(γ) = −12λmax(B^T P B)‖B^T P Ā‖^2 γ, this in turn results in

φ″(γ) = −(β2″(γ)φ^2(γ) + 2β1″(γ)φ(γ) − β0″(γ))/(2β2(γ)φ(γ) + 2β1(γ))
 = −(8λmax(B^T P B)‖B^T P Ā‖φ(γ) + 12λmax(B^T P B)‖B^T P Ā‖^2 γ)/(2β2(γ)φ(γ) + 2β1(γ))
 < 0   (3.42)
for all γ ∈ (0, γ̄]. This proves that the extremum is indeed the maximum. The solution γ ∈ [0, γ̄] of the equation

Ψ(γ) = 0   (3.43)

then solves the constrained optimization problem.

Remark. The analysis in this section suggests choosing γ in an alternative way (as compared with all past work in this area). This in fact shows that, in general, partial compensation is more advantageous than full compensation.

4. Design Procedure
For the choice of the gain γ, one may solve the equation (3.43) for γ ∈ [0, γ̄]. This can be performed via any of a number of numerical root-finding techniques, such as the bisection method, Newton–Raphson iteration, or the secant method ([2]). The corresponding φ(γ) is then given by (3.37). We summarize the procedure as follows:

Step 1: Obtain K for a stable Ā (= A + BK).
Step 2: For a given Q > 0, solve the Lyapunov equation (3.2) for P.
Step 3: Obtain γ̄ from (3.27).
Step 4: Solve (3.43) for γ ∈ [0, γ̄].
Step 5: Obtain the corresponding φ(γ) from (3.37).

Remark. The solutions φ(γ) and γ can also be obtained simultaneously from any two of eqs. (3.31) (or (3.26)), (3.33) (or (3.34)), and (3.43). This in turn means that Step 4 has alternatives; the designer has the discretion to use the most effective search.
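Assuming SciPy is available, the five steps can be sketched as follows (Step 1 simply reuses the stabilizing gain of the example in Section 5; since m = 1 there, B^T P B is scalar and equals its own largest eigenvalue):

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov
from scipy.optimize import brentq

# Step 1: a stabilizing gain K for (A, B) (here: the gain of the Section 5 example).
A = np.array([[0.0, 1.0], [-1.25, 2.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[1.04, -1.38]])
Abar = A + B @ K

# Step 2: solve the Lyapunov equation (3.2) for a chosen Q > 0.
Q = np.eye(2)
P = solve_discrete_lyapunov(Abar.T, Q)

a = np.max(np.linalg.eigvalsh(B.T @ P @ B))   # lambda_max(B^T P B)
b = np.linalg.norm(B.T @ P @ Abar, 2)         # ||B^T P Abar||
q = np.min(np.linalg.eigvalsh(Q))             # lambda_min(Q)

# Step 3: the interval endpoint gamma_bar of (3.27).
gbar = np.sqrt(q / (a * b**2))

# Step 4: solve Psi(gamma) = 0, eq. (3.43), by bracketing root finding.
def Psi(g):
    f2 = (g * q + a * b**2 * g**3) / (a * g + 1.0)   # phi^2(gamma) from (3.37)
    return 2*a*f2 + 8*g*a*b*np.sqrt(f2) + 6*a*b**2*g**2 - 2*q

g_opt = brentq(Psi, 1e-9, gbar * (1.0 - 1e-9))

# Step 5: the maximized uncertainty threshold phi(gamma) from (3.37).
phi_opt = np.sqrt((g_opt * q + a * b**2 * g_opt**3) / (a * g_opt + 1.0))
```

With these data the procedure returns γ ≈ 0.103 and φ(γ) ≈ 0.292, matching the values reported for the example in Section 5.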
5. Illustrative Example
We consider a prototype macroeconometric model for policy evaluation ([11]). Let k denote a period. Let C(k) denote the consumption, I(k) the investment, Y(k) the income, and G(k) the level of government spending decided at the end of period k. The model is given by

C(k) = θ1(k)Y(k−1) + δ1(k) + ε1(k),   (5.1)
I(k) = θ2(k)(C(k) − C(k−1)) + δ2(k) + ε2(k),   (5.2)
Y(k) = C(k) + I(k) + G(k−1),   (5.3)

where θ1(k), θ2(k), δ1(k), and δ2(k) are structural parameters, and ε1(k) and ε2(k) are disturbance terms for consumption and investment, respectively. Substituting (5.1) into (5.2):

I(k) = θ1(k)θ2(k)(Y(k−1) − Y(k−2)) + θ2(k)(δ1(k) + ε1(k)) − θ2(k)(δ1(k−1) + ε1(k−1)) + δ2(k) + ε2(k).   (5.4)

Using (5.1) and (5.4) in (5.3) yields

Y(k) = θ1(k)Y(k−1) + θ1(k)θ2(k)(Y(k−1) − Y(k−2)) + θ2(k)(δ1(k) + ε1(k)) − θ2(k)(δ1(k−1) + ε1(k−1)) + δ2(k) + ε2(k) + G(k−1)
 = θ1(k)(1 + θ2(k))Y(k−1) − θ1(k)θ2(k)Y(k−2) + G(k−1) + θ2(k)(δ1(k) + ε1(k)) − θ2(k)(δ1(k−1) + ε1(k−1)) + δ2(k) + ε2(k).   (5.5)
The structural parameters are uncertain. They are decomposed as

θi(k) = θ̄i + Δθi(k),  i = 1, 2,   (5.6)
δi(k) = δ̄i + Δδi(k),  i = 1, 2.   (5.7)

Here θ̄i and δ̄i are the nominal portions, and Δθi(k) and Δδi(k) are the uncertain portions. We assume that there are known constants li ≥ 0 and mi ≥ 0 such that

|Δθi(k)| ≤ li,   (5.8)
|Δδi(k)| ≤ mi.   (5.9)

In addition, there are known constants ni ≥ 0 such that

|εi(k)| ≤ ni.   (5.10)

The macroeconomic policy is designed so that the income stays close to a set point Ȳ. Let

ΔY(k) := Y(k) − Ȳ.   (5.11)

This is the deviation from the set point. The dynamics in (5.5) can be expressed in terms of ΔY(k):

ΔY(k) = θ1(k)(1 + θ2(k))ΔY(k−1) − θ1(k)θ2(k)ΔY(k−2) + G(k−1)
 + θ2(k)(δ1(k) + ε1(k)) − θ2(k)(δ1(k−1) + ε1(k−1)) + δ2(k) + ε2(k)
 + θ1(k)(1 + θ2(k))Ȳ − θ1(k)θ2(k)Ȳ − Ȳ.   (5.12)
The task is to design the government spending strategy G(k) so that ΔY(k) is regulated to be reasonably small. The system (5.12) is put in the form of (2.1) by letting

x(k) = [x1(k)  x2(k)]^T = [ΔY(k−1)  ΔY(k)]^T,   (5.13)
u(k) = G(k),  σ(k) = [θ1(k)  θ2(k)  δ1(k)  δ2(k)  ε1(k)  ε2(k)]^T,   (5.14)

A = [0  1; −θ̄1θ̄2  θ̄1(1 + θ̄2)],   (5.15)
ΔA(σ(k)) = [0  0; −(θ1(k)θ2(k) − θ̄1θ̄2)  θ1(k)(1 + θ2(k)) − θ̄1(1 + θ̄2)],   (5.16)
B = [0  1]^T,  C = [0  1]^T,   (5.17)
v(σ(k)) = θ2(k)(δ1(k) + ε1(k)) − θ2(k)(δ1(k−1) + ε1(k−1)) + δ2(k) + ε2(k) + θ1(k)(1 + θ2(k))Ȳ − θ1(k)θ2(k)Ȳ − Ȳ.   (5.18)

Assumptions 1–4 are met by the aforementioned prescriptions. Eq. (2.2) of Assumption 5 is met by letting

D = 1.   (5.19)
For illustrative purposes, we adopt ([4]) θ̄1 = 3/4 and θ̄2 = 5/3. Choosing K = [1.04  −1.38] (via LQR design), the eigenvalues of Ā are 0.31 ± 0.33i, which are within the unit disc. Let Q = I. We then obtain λmax(B^T P B) = 2.85, ‖B^T P Ā‖ = 1.57, and λmin(Q) = 1. With these in (3.43), we can solve for γ = 0.103; (3.37) then yields φ(0.103) = 0.292. For the simulations, the numerical values of the disturbance terms were chosen as ε1(k) = 0.2 sin(5k), ε2(k) = 0.2 cos(10k), w = random(k). We also took x1(0) = 2, x2(0) = −1.5. Computer simulations were performed. Figures 1 and 2 depict the x1(k) histories for γ = 0.103 (hence optimal compensation) and γ = 1 (hence "over" compensation), respectively; the performance with γ = 0.103 is superior. Figures 3 and 4 are the corresponding control histories for γ = 0.103 and γ = 1.
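A sketch of the closed-loop simulation is given below. The uncertainty realization is an assumption for illustration: only the disturbance terms ε1(k) and ε2(k) above are active, while the uncertain portions Δθi, Δδi and the set-point contribution to v are set to zero.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.0, 1.0], [-1.25, 2.0]])   # theta1_bar = 3/4, theta2_bar = 5/3
B = np.array([[0.0], [1.0]])
C = np.array([[0.0], [1.0]])
K = np.array([[1.04, -1.38]])
Abar = A + B @ K
P = solve_discrete_lyapunov(Abar.T, np.eye(2))

def simulate(gamma, steps=60):
    x = np.array([[2.0], [-1.5]])   # x1(0) = 2, x2(0) = -1.5
    norms = []
    for k in range(steps):
        # Assumed disturbance realization: eps1 + eps2 only.
        v = 0.2 * np.sin(5 * k) + 0.2 * np.cos(10 * k)
        u = K @ x - gamma * B.T @ P @ Abar @ x   # control (3.1)
        x = A @ x + B @ u + C * v                # dynamics (2.1) with D = 1
        norms.append(float(np.linalg.norm(x)))
    return norms

h_opt = simulate(0.103)   # optimal (partial) compensation
h_over = simulate(1.0)    # "over" compensation, gamma outside [0, gamma_bar]
```

Under this assumed realization the γ = 0.103 trajectory settles into a small bounded oscillation, while γ = 1 destabilizes the loop, consistent with the comparison shown in Figures 1–4.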
Figure 1. x1 (k) history: γ = 0.103.
Figure 2. x1 (k) history: γ = 1.
Figure 3. u(k) history: γ = 0.103.
Figure 4. u(k) history: γ = 1.
6. Conclusion

In the analysis of the controlled system performance, the control is found to serve two contradictory roles: stabilization and destabilization. As a result, there is an upper bound on the uncertainty that the system can tolerate. Furthermore, this upper bound is affected by the choice of the gain γ. It is crucial to choose the control magnitude γ properly so that this upper bound is maximized. We formulate this as a constrained optimization problem; after some analysis, standard numerical techniques can be applied to solve it. This is believed to be the first attempt to suggest the use of partial compensation, as opposed to full compensation. The design is then applied to a prototype macroeconometric model for policy evaluation.

References

1. Åström, K. J. and Wittenmark, B., Computer-Controlled Systems: Theory and Design, Prentice-Hall: Englewood Cliffs, NJ, 1990.
2. Ayyub, B. M. and McCuen, R. H., Numerical Methods for Engineers, Prentice-Hall: Upper Saddle River, NJ, 1996.
3. Barmish, B. R., Corless, M., and Leitmann, G., "A new class of stabilizing controllers for uncertain dynamical systems," SIAM Journal on Control and Optimization, vol. 21, pp. 246–255, 1983.
4. Caravani, P., "On H∞ criteria for macroeconomic policy evaluation," Journal of Economic Dynamics and Control, vol. 19, pp. 961–984, 1995.
5. Corless, M., "Stabilization of uncertain discrete-time systems," in Proc. of the IFAC Workshop on Model Error Concepts and Compensation, Boston, MA, pp. 125–128, 1985.
6. Corless, M., "Control of uncertain nonlinear systems," Journal of Dynamic Systems, Measurement, and Control, vol. 115, pp. 362–372, 1993.
7. Corless, M. J. and Leitmann, G., "Continuous state feedback guaranteeing uniform ultimate boundedness for uncertain dynamic systems," IEEE Transactions on Automatic Control, vol. AC-26, pp. 1139–1144, 1981.
8. Corless, M. and Manela, J., "Control of uncertain discrete-time systems," in Proc. of the American Control Conference, Seattle, WA, pp. 515–520, 1986.
9. Gutman, S., "Uncertain dynamical systems – Lyapunov min-max approach," IEEE Transactions on Automatic Control, vol. AC-24, pp. 437–443, 1979.
10. Gutman, S. and Leitmann, G., "Stabilizing feedback control for dynamic systems with bounded uncertainty," in Proc. of the 15th IEEE Conference on Decision and Control, 1976.
11. Intriligator, M. D., Bodkin, R. G., and Hsiao, C., Econometric Models, Techniques, and Applications, Prentice-Hall: Upper Saddle River, NJ, 1996.
12. Kaitala, V. and Leitmann, G., "Control of uncertain discrete systems: an application to resource management," in Proc. of the 27th IEEE Conference on Decision and Control, Austin, TX, pp. 497–502, 1988.
13. Leitmann, G., "On one approach to the control of uncertain systems," Journal of Dynamic Systems, Measurement, and Control, vol. 115, pp. 373–380, 1993.
14. Magaña, M. E. and Żak, S. H., "Robust state feedback stabilization of discrete-time uncertain dynamical systems," IEEE Transactions on Automatic Control, vol. AC-33, pp. 887–891, 1988.
15. Manela, J., "Deterministic control of uncertain linear discrete and sampled-data systems," Ph.D. dissertation, University of California, Berkeley, 1985.
16. Sharav-Schapiro, N., Palmor, Z. J., and Steinberg, A., "Robust output feedback stabilizing control for discrete uncertain SISO systems," IEEE Transactions on Automatic Control, vol. AC-41, pp. 1377–1391, 1996.
17. Sharav-Schapiro, N., Palmor, Z. J., and Steinberg, A., "Discrete positive realness of the discrete min-max control law," Dynamics and Control, vol. 7, pp. 135–142, 1997.