Journal of Mechanical Science and Technology 25 (1) (2011) 233~238 www.springerlink.com/content/1738-494x
DOI 10.1007/s12206-010-1107-8
On the stability of receding horizon control based on horizon size for linear discrete systems†

Myung-Hwan Oh1,* and Jun-Ho Oh2

1 Samsung Corning Precision Glass, Asan-City, ChungNam, 336-840, Korea
2 Humanoid Robot Research Center, KAIST, 373-1, Daejeon, 305-701, Korea

(Manuscript Received August 19, 2009; Revised August 14, 2010; Accepted September 4, 2010)
Abstract

In this study, stability conditions of receding horizon control (RHC) based on horizon size are proposed for linear discrete systems. The proposed conditions give a horizon size that guarantees the stability of RHC even when the final state weighting matrix does not satisfy the non-increasing monotonicity of the optimal cost. The admissible range of the final state weighting matrix ensuring the stability of RHC is thereby extended down to zero, and the proposed conditions can be applied to the stability problems of other forms of model predictive control in the same way as the conventional conditions.

Keywords: Receding horizon control; Horizon size; Final state weighting matrix; Stability
† This paper was recommended for publication in revised form by Associate Editor Hyoun Jin Kim.
* Corresponding author. Tel.: +82 41 520 2038, Fax: +82 41 520 1091. E-mail address: [email protected]
© KSME & Springer 2011

1. Introduction

It is well known that the condition on the final state weighting matrix is important for the stability of RHC, and much research on the stability conditions of RHC has been based on this matrix. Early methods for guaranteeing the stability of RHC through the final state weighting matrix focused on the terminal equality condition, in which stability is guaranteed by adding a zero final state constraint [1-4]. The terminal equality condition degrades system flexibility, however, because it requires an infinite final state weighting matrix. The matrix inequality condition was therefore proposed [5, 6], and the value of the final state weighting matrix ensuring the stability of RHC decreased from infinity to a manageable finite value. The matrix inequality condition is more flexible and realizable than the terminal equality condition, but the range of final state weighting matrices satisfying it is still restricted, and complex methods are needed to find such a matrix. To find a relevant final state weighting matrix, a linear matrix inequality (LMI) was used in [7], and iterative methods in which the final state weighting matrix is updated until it converges to the solution of the infinite horizon problem were investigated [8, 9]. These methods of updating the final state weighting matrix are similar to increasing the horizon size until the solution converges to the steady-state LQ optimal control.

These stability conditions of RHC for linear systems have been extended to cover the stability problems of various control methods: the matrix inequality condition was applied to tracking control problems [10, 11] and robust control problems [12, 13]. Because of this wide variety of uses, any proposed stability criterion is significant even if it deviates only slightly from the conventional method of guaranteeing stability. Thus, to extend the available range of the final state weighting matrix ensuring the stability of RHC, stability conditions based on horizon size were proposed for linear continuous and discrete systems in [14] and [15], respectively, with which the stability of RHC can be ensured regardless of the final state weighting matrix. However, the result of [15] applies only to time-invariant systems because its stability-guaranteed horizon is derived from eigenvalue conditions on the closed-loop system, that is, pointwise stability; additional conditions are needed to apply it to time-varying systems. In this study, a lower bound of the stability-guaranteed horizon size, derived from the Lyapunov stability theorem, is proposed for linear discrete time-varying and time-invariant systems. The proposed stability conditions present a horizon size that guarantees the stability of RHC even when the final state weighting matrix does not satisfy the matrix inequality condition.
Section 2 describes the conventional method and the newly proposed method of checking the stability of RHC for linear discrete time-varying and time-invariant systems. In Section 3, the proposed methods are verified by a numerical simulation, and their performance is compared with the result of [15] for a time-invariant system. Conclusions and proposals for future work are discussed in the final section.
2. The stability of RHC for linear discrete systems

Consider a linear discrete time-varying system

x(t+1) = A(t)x(t) + B(t)u(t)    (1)

where x(t) ∈ R^n and u(t) ∈ R^m, and a cost function

J(x(t), t, t+T) = Σ_{i=t}^{t+T−1} [x^T(i)Q(i)x(i) + u^T(i)R(i)u(i)] + x^T(t+T)Q_f(t+T)x(t+T)    (2)

where Q(⋅) = C^T(⋅)C(⋅) ≥ 0, R(⋅) > 0, and Q_f(⋅) ≥ 0. In this study, the matrices A(⋅), B(⋅), Q(⋅), R(⋅), and Q_f(⋅) are assumed to be bounded. The RHC law at the current time t is found by minimizing the cost function (2) over the input u(t) and is given by

u*(t) = −R^{-1}(t)B^T(t)[I + K(t+1, t+T)B(t)R^{-1}(t)B^T(t)]^{-1} K(t+1, t+T)A(t)x(t)    (3)

where K(τ, σ) satisfies

K(τ, σ) = A^T(τ)K(τ+1, σ)[I + B(τ)R^{-1}(τ)B^T(τ)K(τ+1, σ)]^{-1} A(τ) + Q(τ),  τ ≤ σ,    (4)

with the boundary condition

K(t+T, t+T) = Q_f(t+T).    (5)

If the horizon size T is fixed, the stability of the system (1) with the control law (3) can be guaranteed by selecting a pertinent value of the final state weighting matrix Q_f(⋅); the terminal equality condition and the matrix inequality condition are typically applied for this purpose. Since the terminal equality condition can be considered a special case of the matrix inequality condition, it is not covered separately in this study. The matrix inequality condition is stated in Theorem 2.1.

Theorem 2.1 [6]: Assume that the pairs (A(⋅), B(⋅)) and (A(⋅), C(⋅)) in (1) are uniformly completely controllable and observable, respectively. If the final state weighting matrix Q_f(⋅) in (2) satisfies the following inequality

A^T(σ)Q_f(σ+1)A(σ) − A^T(σ)Q_f(σ+1)B(σ)[R(σ) + B^T(σ)Q_f(σ+1)B(σ)]^{-1} B^T(σ)Q_f(σ+1)A(σ) + Q(σ) − Q_f(σ) ≤ 0,    (6)

then the system (1) with the control law (3) is uniformly asymptotically stable for an arbitrary T ≥ l + 2. ■

The inequality (6) is the matrix inequality condition for linear discrete time-varying systems and is equivalent to K(σ, σ+1) − K(σ, σ) ≤ 0, which implies the non-increasing monotonicity of the optimal cost and of the solution of the Riccati equation [6]. Therefore, if Q_f(⋅) satisfies the matrix inequality condition (6), the solution of the Riccati equation monotonically decreases and the stability of the closed-loop system is guaranteed. The constant l in Theorem 2.1 is defined as max(l_c, l_o), where l_c and l_o are positive values satisfying α_1 I ≤ W(t, t+l_c) ≤ α_2 I and α_3 I ≤ G(t, t+l_o) ≤ α_4 I (α_1, α_2, α_3, and α_4 are positive constants), with W(t, t+l_c) and G(t, t+l_o) the controllability and observability Grammians, respectively [2].

Fig. 1. The relation between the stability of RHC and the final state weighting matrix.

The relation between the final state weighting matrix and the monotonicity of the solution of the Riccati equation is illustrated in Fig. 1. K(τ) in Fig. 1 is the solution of the Riccati equation for the steady-state LQ optimal control, and it is assumed that K(τ) is available. If the inequality Q_f(σ) ≥ K(τ) holds, the solution of the Riccati equation monotonically decreases; that is, K(τ, σ+1) − K(τ, σ) ≤ 0 holds and the stability of RHC is ensured by Theorem 2.1. On the other hand, if the inequality Q_f(σ) ≤ K(τ) holds, the solution of the Riccati equation monotonically increases; that is, K(τ, σ+1) − K(τ, σ) ≥ 0, and the stability of RHC cannot be ensured by Theorem 2.1. Therefore, only a Q_f(σ) satisfying Q_f(σ) ≥ K(τ) can satisfy the inequality (6) and ensure the stability of RHC by Theorem 2.1. However, if T is made sufficiently large, as in Fig. 1, K(τ, σ) converges to K(τ) and the stability of RHC can be guaranteed even though Q_f(σ) satisfies K(τ, σ+1) − K(τ, σ) ≥ 0. This is summarized in the following Remark 2.1.
Remark 2.1 [10, 15]: If an arbitrary Q_f(⋅) ≥ 0 satisfies the following non-decreasing monotonicity of the cost function

A^T(σ)Q_f(σ+1)A(σ) − A^T(σ)Q_f(σ+1)B(σ)[R(σ) + B^T(σ)Q_f(σ+1)B(σ)]^{-1} B^T(σ)Q_f(σ+1)A(σ) + Q(σ) − Q_f(σ) ≥ 0,    (7)

or equivalently, K(σ, σ+1) − K(σ, σ) ≥ 0, then there exists a horizon size T ≥ l + 2 such that the system (1) with the control law (3) is uniformly asymptotically stable. ■

Therefore, there exists a lower bound of the horizon T guaranteeing the stability of RHC for an arbitrary Q_f(⋅) that does not satisfy the non-increasing monotonicity (6), and a condition for finding an explicit lower bound of the horizon size is presented in Theorem 2.2.

Theorem 2.2: Assume that the pairs (A(⋅), B(⋅)) and (A(⋅), C(⋅)) in (1) are uniformly completely controllable and observable, respectively. If Q_f(⋅) in (2) satisfies K(σ, σ+1) − K(σ, σ) ≥ 0 and K(σ, σ+2) − 2K(σ, σ+1) + K(σ, σ) ≤ 0 for all σ ≥ 0, and K(t, t+T) for T ≥ l + 2 satisfies the following inequality

K(t, t+T) ≤ Q_f(t) + TQ(t),    (8)

then the system (1) with the control law (3) is uniformly asymptotically stable.

Proof: The theorem is proved by the Lyapunov stability theorem. Consider the closed-loop system

x(t+1) = [A(t) − B(t)[R(t) + B^T(t)K(t+1, t+T)B(t)]^{-1} B^T(t)K(t+1, t+T)A(t)]x(t).

V(t, x(t)) = x^T(t)K(t, t+T−1)x(t) is defined as a Lyapunov function; since Q(⋅) and R(⋅) are bounded, K(t, t+T−1) is also bounded. The difference of the Lyapunov function is

V(t+1, x(t+1)) − V(t, x(t))
= x^T(t+1)K(t+1, t+T)x(t+1) − x^T(t)K(t, t+T−1)x(t)
= x^T(t)[K(t, t+T) − K(t, t+T−1) − Q(t) − H(t)]x(t)

where

H(t) = A^T(t)K(t+1, t+T)B(t)[R(t) + B^T(t)K(t+1, t+T)B(t)]^{-1} R(t)[R(t) + B^T(t)K(t+1, t+T)B(t)]^{-1} B^T(t)K(t+1, t+T)A(t)

and H(t) > 0. In order to guarantee Lyapunov stability, the difference of the Lyapunov function must satisfy V(t+1, x(t+1)) − V(t, x(t)) ≤ −α‖x(t)‖² (α a positive constant), and this can be expressed conservatively by the inequality K(t, t+T) − K(t, t+T−1) ≤ Q(t). Since K(σ, σ+2) − 2K(σ, σ+1) + K(σ, σ) ≤ 0 holds by assumption, K(τ, σ+2) − 2K(τ, σ+1) + K(τ, σ) ≤ 0 holds for all τ and σ with τ ≤ σ [16], and the following inequality

Σ_{i=t}^{t+T−1} [(K(t, i+1) − K(t, i)) − (K(t, t+T) − K(t, t+T−1))] ≥ 0,

or equivalently,

K(t, t+T) − K(t, t+T−1) ≤ [K(t, t+T) − Q_f(t)] / T,

is also satisfied. Therefore, if the inequality

[K(t, t+T) − Q_f(t)] / T ≤ Q(t),

or equivalently, the inequality (8), is satisfied, then K(t, t+T) − K(t, t+T−1) ≤ Q(t) holds and the closed-loop system is uniformly asymptotically stable. ■

Theorem 2.2 presents the inequality condition (8) for finding a lower bound of the horizon size guaranteeing the stability of RHC when Q_f(⋅) does not satisfy the matrix inequality condition (6). The stability of RHC can then be checked online, while calculating the control law, by increasing T until K(t, t+T) satisfies (8). Since the inequality (8) was used in the proof only to guarantee its consequence K(t, t+T) − K(t, t+T−1) ≤ Q(t), replacing (8) by this weaker condition in Theorem 2.2 yields a smaller lower bound of the horizon size; this is introduced in Corollary 2.1.

Corollary 2.1: Assume that the pairs (A(⋅), B(⋅)) and (A(⋅), C(⋅)) in (1) are uniformly completely controllable and observable, respectively. If Q_f(⋅) in (2) satisfies K(σ, σ+1) − K(σ, σ) ≥ 0 and K(σ, σ+2) − 2K(σ, σ+1) + K(σ, σ) ≤ 0 for all σ ≥ 0, and K(t, t+T) for T ≥ l + 2 satisfies the following inequality

K(t, t+T) − K(t, t+T−1) ≤ Q(t),    (9)

then the system (1) with the control law (3) is uniformly asymptotically stable. ■

Corollary 2.1 gives a tighter condition than Theorem 2.2. However, in the case of time-varying systems, the inequality (9) requires more computation than the inequality (8), because two different Riccati equations must be solved to obtain K(t, t+T) and K(t, t+T−1) at the same time. The inequality (9) is therefore more suitable for time-invariant systems, where K(t, t+T) and K(t, t+T−1) are easily calculated from the same Riccati equation.

To apply the inequalities (8) and (9) to real physical systems conveniently, a simple method for selecting an appropriate Q_f(⋅) satisfying the assumptions of Theorem 2.2 and Corollary 2.1 is needed, because it is very difficult to find such a Q_f(⋅) by random search. A simple selection method is therefore suggested as follows. If the horizon size is increased sufficiently, the solution of the Riccati equation K(τ, σ) converges to the optimal LQ solution K(τ) in Fig. 1; that is, K(τ, σ+1) − K(τ, σ) approaches zero. Thus, even if K(σ, σ+1) − K(σ, σ) ≥ 0 is satisfied at the initial step and remains so for all time, there exists a horizon size satisfying K(τ, σ+2) − 2K(τ, σ+1) + K(τ, σ) ≤ 0, because the increment K(τ, σ+1) − K(τ, σ) must decrease gradually. This relation is explained in Lemma 2.1.

Lemma 2.1: Assume that an arbitrary Q_f(⋅) ≥ 0 satisfies the inequality (7), or equivalently, K(σ, σ+1) − K(σ, σ) ≥ 0. If T is increased, then there exists T_f ≤ T which satisfies

K(σ+T_f, σ+T+2) − 2K(σ+T_f, σ+T+1) + K(σ+T_f, σ+T) ≤ 0.

Proof: If the horizon size T is increased, K(σ, σ+T) converges to K(σ), and even though K(σ+T, σ+T+1) − K(σ+T, σ+T) ≥ 0 at the initial time, K(σ, σ+T+1) − K(σ, σ+T) ≈ 0 is eventually satisfied. For this to occur, the increment K(σ+T_f, σ+T+1) − K(σ+T_f, σ+T) must decrease, so there exists T_f ≤ T which satisfies K(σ+T_f, σ+T+2) − 2K(σ+T_f, σ+T+1) + K(σ+T_f, σ+T) ≤ 0. ■

The inequality in Lemma 2.1 has a monotonicity property, which is shown in Lemma 2.2.

Lemma 2.2 [16]: If an arbitrary Q_f(⋅) ≥ 0 satisfies the non-increasing monotonicity K(σ, σ+2) − 2K(σ, σ+1) + K(σ, σ) ≤ 0, then the inequality K(τ, σ+2) − 2K(τ, σ+1) + K(τ, σ) ≤ 0 is also satisfied for all τ ≤ σ. ■

Therefore, Q_f(⋅) in Theorem 2.2 and Corollary 2.1 can be replaced by K(σ+T_f, σ+T) from Lemma 2.1, which is obtained by solving the Riccati equation (4) with an arbitrary Q_f(⋅). This result is summarized in Remark 2.2.

Remark 2.2: Assume that the pairs (A(⋅), B(⋅)) and (A(⋅), C(⋅)) in (1) are uniformly completely controllable and observable, respectively. If K(σ+T_f, σ+T) for T_f ≤ T satisfies K(σ+T_f, σ+T+2) − 2K(σ+T_f, σ+T+1) + K(σ+T_f, σ+T) ≤ 0 for all σ ≥ 0, and K(σ, σ+T) for T ≥ l + 2 satisfies the inequality (8) or (9), then the system (1) with the control law (3) is uniformly asymptotically stable for an arbitrary Q_f(⋅) ≥ 0 satisfying K(σ+T, σ+T+1) − K(σ+T, σ+T) ≥ 0. ■

Remark 2.2 suggests an easier method of selecting Q_f(⋅), which may be helpful for applying the results of Theorem 2.2 or Corollary 2.1 to real physical systems. The condition K(σ, σ+1) − K(σ, σ) ≥ 0 in Theorem 2.2 and Corollary 2.1 can be divided into the cases K(σ, σ+1) − K(σ, σ) > Q(σ) and K(σ, σ+1) − K(σ, σ) ≤ Q(σ). The latter case was already handled in [16], where the stability of the system (1) with the control law (3) is guaranteed without the inequality (8) or (9). Therefore, the results of Theorem 2.2 and Corollary 2.1 contribute to solving the stability problem of the former case, K(σ, σ+1) − K(σ, σ) > Q(σ), effectively. The latter case is introduced in Corollary 2.2.

Corollary 2.2: If an arbitrary Q_f(⋅) ≥ 0 satisfies the inequalities K(σ, σ+1) − K(σ, σ) ≤ Q(σ) and K(σ, σ+2) − 2K(σ, σ+1) + K(σ, σ) ≤ 0, then the system (1) with the control law (3) is uniformly asymptotically stable for an arbitrary T ≥ l + 2. ■
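To illustrate how the online check of Theorem 2.2 might run in practice, the following sketch (plain Python; the mildly time-varying toy system A(t), B(t), the choices Q = I, R = 0.1, Q_f = 0, the value l = 2, and all helper names are our own illustrative assumptions, not from the paper) solves the backward Riccati recursion (4) with boundary condition (5) and increases the horizon from T = l + 2 until the inequality (8), K(t, t+T) ≤ Q_f(t) + TQ(t), is met. Note that with Q_f = 0, the assumption K(σ, σ+1) − K(σ, σ) ≥ 0 of the theorem holds automatically, since K(σ, σ+1) = Q(σ) ≥ 0.

```python
# Sketch of the online horizon-size check of Theorem 2.2 (inequality (8)).
# The 2x2 toy system below and all helper names are illustrative assumptions.

R = 0.1
Qm = [[1.0, 0.0], [0.0, 1.0]]      # Q(t) = I (held constant for simplicity)
Qf = [[0.0, 0.0], [0.0, 0.0]]      # Q_f = 0, so K(s,s+1) - K(s,s) = Q >= 0

def A(t):                          # assumed mildly time-varying dynamics
    return [[0.0, 1.0], [1.0, 1.2 + 0.1 * (-1) ** t]]

def B(t):
    return [[0.0], [1.0]]

def mul(X, Y):                     # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(X, Y, s=1.0):              # X + s*Y
    return [[X[i][j] + s * Y[i][j] for j in range(2)] for i in range(2)]

def inv2(M):                       # inverse of a 2x2 matrix
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def neg_semidef(S, tol=1e-9):
    # symmetric 2x2: both eigenvalues <= 0 iff trace <= 0 and det >= 0
    return S[0][0] + S[1][1] <= tol and S[0][0] * S[1][1] - S[0][1] * S[1][0] >= -tol

def K_horizon(t, T):
    # backward Riccati recursion (4) with boundary condition K(t+T, t+T) = Q_f
    K = [row[:] for row in Qf]
    for tau in range(t + T - 1, t - 1, -1):
        At, Bt = A(tau), B(tau)
        BRB = [[Bt[i][0] * Bt[j][0] / R for j in range(2)] for i in range(2)]
        M = add([[1.0, 0.0], [0.0, 1.0]], mul(BRB, K))
        AtT = [[At[j][i] for j in range(2)] for i in range(2)]
        K = add(mul(mul(AtT, mul(K, inv2(M))), At), Qm)
    return K

t0, l = 0, 2                       # l = max(l_c, l_o); l = 2 assumed for this toy system
T_stab = None
for T in range(l + 2, 51):         # increase T until inequality (8) holds
    K = K_horizon(t0, T)
    if neg_semidef(add(add(K, Qf, -1.0), Qm, -float(T))):   # K - Q_f - T*Q <= 0
        T_stab = T
        break
print(T_stab)
```

Because K(t, t+T) stays bounded under the controllability assumption while TQ(t) grows linearly, the loop is guaranteed to terminate whenever Q(t) is positive definite; for this toy system the very first candidate, T = l + 2 = 4, already satisfies (8).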
The stability condition (9) is more effective in the case of time-invariant RHC, because the Riccati equation is solved by forward iteration, so K(t, t+T) and K(t, t+T−1) are obtained from the same Riccati equation by increasing the horizon size. The smallest lower bound of the horizon size T ensuring the stability of RHC for a time-invariant system can thus be acquired by increasing the horizon size from zero while iterating the forward Riccati equation until the stability condition (9) is satisfied. This is introduced as follows. Consider a linear discrete time-invariant system

x(t+1) = Ax(t) + Bu(t)    (10)

where x(t) ∈ R^n and u(t) ∈ R^m, and a cost function

J(x(t), t, t+T) = Σ_{i=t}^{t+T−1} [x^T(i)Qx(i) + u^T(i)Ru(i)] + x^T(t+T)Q_f x(t+T)    (11)

where Q = C^T C ≥ 0, R > 0, and Q_f ≥ 0. The RHC law at the current time t is found by minimizing the cost function (11) over the input u(t) and is given by

u*(t) = −[R + B^T K(T−1)B]^{-1} B^T K(T−1)Ax(t)    (12)

where K(τ) satisfies

K(τ+1) = A^T K(τ)[I + BR^{-1}B^T K(τ)]^{-1} A + Q    (13)

with the boundary condition

K(0) = Q_f.    (14)

The stability condition based on horizon size for the time-invariant system (10) with the RHC law (12) can be obtained from the stability conditions for time-varying systems in Theorem 2.2 and Corollary 2.1, and is presented in Theorem 2.3.

Theorem 2.3: Assume that the pairs (A, B) and (A, C) in (10) are controllable and observable, respectively. If Q_f ≥ 0 satisfies K(1) − K(0) ≥ 0 and K(2) − 2K(1) + K(0) ≤ 0, and K(T) for T ≥ n − rank(B) + 2 satisfies the following inequality

K(T) − K(T−1) ≤ Q,    (15)

or its sufficient condition

K(T) ≤ Q_f + TQ,    (16)

then the system (10) with the control law (12) is asymptotically stable.

Proof: The theorem is proved by the Lyapunov stability theorem. Consider the closed-loop system

x(t+1) = [A − B[R + B^T K(T−1)B]^{-1} B^T K(T−1)A]x(t).

V(t, x(t)) = x^T(t)K(T−1)x(t) is the Lyapunov function, and it is positive definite and bounded. Then

V(t+1, x(t+1)) − V(t, x(t))
= x^T(t+1)K(T−1)x(t+1) − x^T(t)K(T−1)x(t)
= x^T(t)[K(T) − K(T−1) − Q − H]x(t)

where H = A^T K(T−1)B[R + B^T K(T−1)B]^{-1} R[R + B^T K(T−1)B]^{-1} B^T K(T−1)A and H > 0. In order to guarantee Lyapunov stability, the difference of the Lyapunov function must satisfy V(t+1, x(t+1)) − V(t, x(t)) ≤ −β‖x(t)‖² (β a positive constant), and this can be expressed conservatively by the inequality (15), K(T) − K(T−1) ≤ Q. Since K(2) − 2K(1) + K(0) ≤ 0 holds by assumption at the initial time, K(τ+2) − 2K(τ+1) + K(τ) ≤ 0 holds for all τ ≥ 0 [16], and the following inequality

Σ_{i=0}^{T−1} [(K(i+1) − K(i)) − (K(T) − K(T−1))] ≥ 0,

or equivalently,

K(T) − K(T−1) ≤ [K(T) − K(0)] / T = [K(T) − Q_f] / T,

is also satisfied. Therefore, if the inequality

[K(T) − Q_f] / T ≤ Q,

or equivalently, the inequality (16), K(T) ≤ Q_f + TQ, is satisfied, then K(T) − K(T−1) ≤ Q holds and the closed-loop system is asymptotically stable. ■

In the case of linear time-invariant systems, a T satisfying (15) or (16) can be obtained easily by forward iteration of the Riccati equation (13), increasing T until K(T) satisfies the inequality (15) or (16) in the middle of calculating the control law. A simple method for selecting a Q_f satisfying the assumptions of Theorem 2.3 is introduced in Lemma 2.3, which is the time-invariant version of Lemma 2.1.

Lemma 2.3: If an arbitrary Q_f ≥ 0 satisfies K(1) − K(0) ≥ 0, then there exists a horizon size T_f ≥ 0 which satisfies

K(T_f+2) − 2K(T_f+1) + K(T_f) ≤ 0. ■

The inequality condition K(T_f+2) − 2K(T_f+1) + K(T_f) ≤ 0 also has the monotonicity property and remains valid for all later times. Therefore, even if Q_f does not satisfy the assumption K(2) − 2K(1) + K(0) ≤ 0, there exists a K(T_f) satisfying Lemma 2.3, and it can replace Q_f in Theorem 2.3, as in the time-varying case of Remark 2.2.

The inequality (15) yields a smaller horizon size than the inequality (16). Also, since K(T) and K(T−1) are calculated from the same Riccati equation, the inequality (15) is more effective than the inequality (16). In the time-varying case, however, the inequality (9) requires solving two Riccati equations at the same time, so its efficiency is not better than that of the inequality (8). Therefore, the inequalities (15) and (8) are recommended to guarantee the stability of time-invariant and time-varying systems, respectively.

3. Numerical simulation

A numerical example for the stability of linear time-invariant RHC is executed using the system in [15], and the stability condition (15) in Theorem 2.3 is compared with the result of [15]. The example time-invariant system is as follows:

A = [0 3; 4 5],  B = [2; 2],  Q = [1.3 0; 0 1.4],  R = 0.1,  Q_f = [0 0; 0 0].    (17)

Since Q_f satisfies K(1) − K(0) ≥ 0 but does not satisfy K(2) − 2K(1) + K(0) ≤ 0 in Theorem 2.3, Lemma 2.3 is applied first to find K(T_f). If the Riccati equation (13) is solved while increasing the horizon size from T = 0 until Lemma 2.3 and the inequality (15) are satisfied, T = 1 and T = 4 satisfy Lemma 2.3 and the inequality (15), respectively, and thus T_f = 1. At T = 1, K(3) − 2K(2) + K(1) ≤ 0 is satisfied and its eigenvalues are [−10.5629, −0.1689]^T ≤ 0. Therefore, if Q_f is replaced by K(T_f), that is, K(1), the assumptions of Theorem 2.3 are satisfied. If the Riccati equation is then solved continuously from T = 1, the inequality (15), K(T) − K(T−1) − Q ≤ 0, is satisfied at T = 4 and its eigenvalues are [−1.3886, −1.1781]^T ≤ 0. Thus, the closed-loop system is asymptotically stable for all T ≥ 4. At this horizon, the eigenvalues of the closed-loop system are [−0.1792, 0.0493]^T, which lie inside the unit circle. On the other hand, the stability condition of [15] suggests T = 5 as the lower bound of the horizon size guaranteeing the stability of the closed-loop system. The inequality (15) therefore shows slightly better performance than the stability condition of [15] for the system (17), although this superiority may vary from system to system.
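The forward-iteration procedure above can be reproduced with a short script. The sketch below (plain Python with 2×2 matrices; the helper names are our own) iterates the Riccati recursion (13) from K(0) = Q_f = 0 for the system (17) and reports the first T_f satisfying Lemma 2.3 and the first T ≥ n − rank(B) + 2 = 3 satisfying the condition (15); they match the values T_f = 1 and T = 4 given above.

```python
# Sketch reproducing the Section 3 computation: forward Riccati iteration (13)
# for the example system (17), checking Lemma 2.3 and the condition (15).

A = [[0.0, 3.0], [4.0, 5.0]]
Q = [[1.3, 0.0], [0.0, 1.4]]
R = 0.1
# B R^{-1} B^T with B = [2; 2] is the constant matrix [[40, 40], [40, 40]]
BRB = [[2.0 * 2.0 / R, 2.0 * 2.0 / R], [2.0 * 2.0 / R, 2.0 * 2.0 / R]]

def mul(X, Y):                     # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def add(X, Y, s=1.0):              # X + s*Y
    return [[X[i][j] + s * Y[i][j] for j in range(2)] for i in range(2)]

def inv2(M):                       # inverse of a 2x2 matrix
    d = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [[M[1][1] / d, -M[0][1] / d], [-M[1][0] / d, M[0][0] / d]]

def neg_semidef(S, tol=1e-9):
    # symmetric 2x2: both eigenvalues <= 0 iff trace <= 0 and det >= 0
    return S[0][0] + S[1][1] <= tol and S[0][0] * S[1][1] - S[0][1] * S[1][0] >= -tol

def step(K):
    # K(T+1) = A^T K(T) [I + B R^{-1} B^T K(T)]^{-1} A + Q   -- recursion (13)
    M = add([[1.0, 0.0], [0.0, 1.0]], mul(BRB, K))
    At = [[A[j][i] for j in range(2)] for i in range(2)]
    return add(mul(mul(At, mul(K, inv2(M))), A), Q)

Ks = [[[0.0, 0.0], [0.0, 0.0]]]    # K(0) = Q_f = 0, boundary condition (14)
for _ in range(10):
    Ks.append(step(Ks[-1]))

# Lemma 2.3: first Tf with K(Tf+2) - 2K(Tf+1) + K(Tf) <= 0
Tf = next(t for t in range(9) if neg_semidef(add(add(Ks[t + 2], Ks[t + 1], -2.0), Ks[t])))
# Condition (15): first T >= n - rank(B) + 2 = 3 with K(T) - K(T-1) - Q <= 0
Tmin = next(T for T in range(3, 11) if neg_semidef(add(add(Ks[T], Ks[T - 1], -1.0), Q, -1.0)))
print(Tf, Tmin)
```

The semidefiniteness test uses the fact that a symmetric 2×2 matrix has only non-positive eigenvalues exactly when its trace is non-positive and its determinant is non-negative, which avoids any eigenvalue routine.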
4. Conclusion

We have developed new stability conditions of RHC based on the horizon size for linear discrete systems. The proposed conditions present an explicit lower bound of the horizon size guaranteeing the stability of RHC for a final state weighting matrix that does not satisfy the conventional stability condition. They can also be checked easily by solving the Riccati equation while increasing the horizon size until the proposed condition is satisfied.
References

[1] D. L. Kleinman, An easy way to stabilize a linear constant system, IEEE Trans. Automat. Contr., 15 (6) (1970) 692.
[2] W. H. Kwon and A. E. Pearson, A modified quadratic cost problem and feedback stabilization of a linear system, IEEE Trans. Automat. Contr., 22 (5) (1977) 838-842.
[3] G. D. Nicolao and S. Strada, On the stability of receding-horizon LQ control with zero-state terminal constraint, IEEE Trans. Automat. Contr., 42 (2) (1997) 257-260.
[4] J. B. Rawlings and K. R. Muske, The stability of constrained receding horizon control, IEEE Trans. Automat. Contr., 38 (10) (1993) 1512-1516.
[5] J. W. Lee, W. H. Kwon and J. H. Choi, On stability of constrained receding horizon control with finite terminal weighting matrix, Automatica, 34 (12) (1998) 1607-1612.
[6] K. B. Kim, On stabilizing receding horizon controls for linear systems, Ph.D. dissertation, SNU, Seoul, Korea (1999).
[7] K. B. Kim, Implementation of stabilizing receding horizon controls for time-varying systems, Automatica, 38 (10) (2002) 1705-1711.
[8] H. Zhang, J. Huang and F. L. Lewis, Algorithm and stability of ATC receding horizon control, IEEE Symposium on Adaptive Dynamic Programming and Reinforcement Learning, Nashville, Tennessee, USA (2009) 28-35.
[9] H. Zhang, J. Huang and F. L. Lewis, An improved method in receding horizon control with updating of terminal cost function, Applications of Intelligent Control to Engineering Systems, 39 (4) (2009) 365-393.
[10] W. H. Kwon and D. G. Byun, Receding horizon tracking control as a predictive control and its stability properties, International Journal of Control, 50 (5) (1989) 1807-1824.
[11] J. H. Park, S. Han and W. H. Kwon, LQ tracking controls with fixed terminal states and their application to receding horizon controls, Systems and Control Letters, 57 (9) (2008) 772-777.
[12] K. B. Kim, T. W. Yoon and W. H. Kwon, Stabilizing receding horizon H-infinity controls for linear continuous time-varying systems, IEEE Trans. Automat. Contr., 46 (8) (2001) 1273-1279.
[13] K. B. Kim and W. H. Kwon, Stabilizing receding horizon H-infinity controls for linear discrete time-varying systems, International Journal of Control, 75 (18) (2002) 1449-1456.
[14] M. H. Oh and J. H. Oh, On the stability of receding horizon control based on horizon size, IEICE Trans. Fundamentals, E87-A (2) (2004) 505-508.
[15] Z. Quan, S. B. Han and W. H. Kwon, Stability-guaranteed horizon size for receding horizon control, IEICE Trans. Fundamentals, E90-A (2) (2007) 523-525.
[16] E. de Souza, On stabilizing properties of solutions of the Riccati difference equation, IEEE Trans. Automat. Contr., 34 (12) (1989) 1313-1316.
Myung-Hwan Oh received the BS, MS, and PhD degrees in Mechanical Engineering from KAIST. He is a senior researcher in Samsung Corning Precision Glass. His research interests include humanoid robots and predictive control.
Jun-Ho Oh received the BS and MS degrees in Mechanical Engineering from Yonsei University, Seoul, South Korea and PhD degree in Mechanical Engineering from the University of California, Berkeley, USA, in 1977, 1979 and 1985, respectively. He was a Researcher with Korea Atomic Energy Research Institute from 1979 to 1981. Since 1985, he has been with the Department of Mechanical Engineering, KAIST, where he is currently a professor. He was a Visiting Research Scientist in University of Texas, Austin, TX, USA, from 1996 to 1997. His research interests include humanoid robots, adaptive control, intelligent control, non-linear control, biomechanics, sensors, actuators and application of micro processors. He is a member of the IEEE, KSME, KSPE and ICROS.