Integr. Equ. Oper. Theory 47 (2003), 197–216. 0378-620X/020197-20, DOI 10.1007/s00020-002-1159-y. © 2003 Birkhäuser Verlag, Basel/Switzerland
Integral Equations and Operator Theory
Minimal Nonsquare J-Spectral Factorization, Generalized Bezoutians and Common Zeros for Rational Matrix Functions

Mark A. Petersen and André C.M. Ran

Abstract. The problem that we solve in this paper is to find (square or nonsquare) minimal J-spectral factors of a rational matrix function with constant signature. Explicit formulas for these J-spectral factors are given in terms of a solution of a particular algebraic Riccati equation. Also, we discuss the common zero structure of rational matrix functions that arise from the analysis of nonsquare J-spectral factors. This zero structure is obtained in terms of the kernel of a generalized Bezoutian.

Mathematics Subject Classification (2000). Primary 47A68, 47A56; Secondary 15A24.

Keywords. J-spectral factorization, algebraic Riccati equations, Bezoutians, common zeros.
1. Introduction

The problem of finding the symmetric factors of selfadjoint rational matrix functions that are square has been studied in several contributions (see [5, 6, 7, 8, 9, 14, 15, 23, 24, 25, 27] and [28]). In particular, in [14] necessary and sufficient conditions are given for the existence of a complete set of minimal J-spectral factorizations of a selfadjoint rational matrix function with constant signature. Also, in [15] J-spectral factorization is discussed in the case of matrix polynomials.

Recall that if Φ is a rational matrix valued function taking Hermitian values on the imaginary axis, then a factorization Φ(λ) = W(λ)JW(−λ̄)*, where J = J* = J^{-1}, is called a J-spectral factorization. This factorization is called a minimal J-spectral factorization if the McMillan degree δ(Φ) of Φ is twice the McMillan degree δ(W) of W. In case J = I, the factorization is simply called a spectral factorization, which again is called minimal if δ(Φ) = 2δ(W).
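For orientation, here is a minimal scalar illustration of these notions (our own example, not taken from the paper), with J = 1 and m = 1:

```latex
% Take J = 1 and W(\lambda) = (\lambda+1)/(\lambda+2).  Then
\[
  \Phi(\lambda) = W(\lambda)\, J\, W(-\bar{\lambda})^{*}
    = \frac{\lambda+1}{\lambda+2}\cdot\frac{1-\lambda}{2-\lambda}
    = \frac{1-\lambda^{2}}{4-\lambda^{2}} ,
\]
% which is Hermitian (indeed positive) on the imaginary axis:
%   \Phi(i\omega) = (1+\omega^{2})/(4+\omega^{2}) > 0.
% Since \delta(\Phi) = 2 = 2\,\delta(W), this spectral factorization
% (here J = I) is minimal.
```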
In the present paper, we discuss the J-spectral factorization of a rational matrix function with constant signature into nonsquare J-spectral factors. In particular, we take our lead from the parametrization of nonsquare spectral factors of rational matrix functions that are positive semi-definite in [24]. We recall from the main result of the latter paper that the unique minimal square spectral factor with pole pair (C1, A1) and with all its zeros in the closed left half plane (i.e., σ(A1 − B̃1C1) ⊂ C̄−) is given by

W1(λ) = I_m + C1(λI − A1)^{-1}B̃1.    (1)
Minimal nonsquare spectral factors with the same pole pair as W1 are then given by

W(λ) = [I_m 0] + C1(λI − A1)^{-1}[XC1* + B̃1  B̂1],    (2)

where X solves a certain Riccati inequality. More precisely, we have the following result.

Theorem 1.1. Suppose that a positive semidefinite rational matrix function Φ has a realization Φ(λ) = I_m + C(λI − A)^{-1}B. There is a one-to-one correspondence between the set of minimal spectral factors W(λ) of Φ(λ) such that W(∞) = [I_m 0] and the set of triples {M, X, B̂1}. Here M is an A-invariant H-Lagrangian subspace. To describe X and B̂1, let A1 and C1 be given by A1 = A|M and C1 = C|M. Furthermore, suppose that M× is the A× = (A − BC)-invariant, H-Lagrangian subspace such that σ(A×|M×) ⊂ C−. Let π be the projection onto M along M×, and denote a matrix representation of πB by B̃1. Then X solves the Riccati inequality

XC1*C1X − X(A1 − B̃1C1)* − (A1 − B̃1C1)X ≤ 0

and B̂1 satisfies

XC1*C1X − X(A1 − B̃1C1)* − (A1 − B̃1C1)X = −B̂1B̂1*.    (3)

This correspondence is given by

W(λ) = [I_m 0] + C1(λI − A1)^{-1}[XC1* + B̃1  B̂1].
In the present paper, we consider J-spectral factorizations of the rational matrix function Φ of the form

Φ(λ) = W1(λ)JW1(−λ̄)* = W(λ)J̃W(−λ̄)*,    (4)

where W1 is a minimal square J-spectral factor and W is a nonsquare J̃-spectral factor. Here Φ is a regular rational matrix function taking Hermitian values on the imaginary axis. For such a minimal J-spectral factorization to exist, the number of positive and negative eigenvalues of the matrix Φ(λ) must be the same (i.e., Φ has constant signature) for all imaginary λ, except for the poles and zeros of Φ.
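To illustrate the constant signature condition, consider the following toy example (ours, not from the paper) with J = diag(1, −1):

```latex
% With J = \operatorname{diag}(1,-1) and
%   W(\lambda) = \begin{pmatrix} 1 & (\lambda+1)^{-1} \\ 0 & 1 \end{pmatrix},
% one computes
\[
  \Phi(\lambda) = W(\lambda)\, J\, W(-\bar{\lambda})^{*}
  = \begin{pmatrix}
      1 - \frac{1}{(\lambda+1)(1-\lambda)} & -\frac{1}{\lambda+1} \\
      -\frac{1}{1-\lambda} & -1
    \end{pmatrix},
  \qquad \det \Phi(\lambda) \equiv -1 .
\]
% On the imaginary axis \Phi(i\omega) is Hermitian with negative
% determinant, so it has exactly one positive and one negative
% eigenvalue for every \omega: the signature is constant.
```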
See [26] for this and other necessary conditions for the existence of a minimal square J-spectral factorization. Compare also [14]. We shall assume throughout that Φ(∞) = J, and that W is proper, i.e., W has no pole at infinity, so that W(∞) is well-defined. We shall be particularly interested in finding all minimal nonsquare J̃-spectral factors with the same pole pair as a given minimal square J-spectral factor. In contrast with [25], nonsquare factors are studied here, while in contrast with [23, 24], where positive semidefinite functions are factored, the present paper deals with the case where the function is Hermitian but indefinite, under certain additional hypotheses.

In this paper we also investigate the common zero structure of the square J-spectral factor W1 and the nonsquare J̃-spectral factor W with the same pole pair. Moreover, we study the zero data of rational matrix functions arising from the parametrization of nonsquare spectral factors in [23] and [24]. More specifically, we consider the unique square spectral factor W1 and the nonsquare spectral factor W appearing in the statement and the proof of Theorem 1.1.

The common zero (null) structure of two rational matrix functions that are square has been studied in several contributions (see [16, 17, 18, 19] and [20]). In this case, a characterization of common zeros may be obtained via the kernel of their Bezoutian (see [11, 13, 16, 17, 18, 19] and [20]). Of particular importance to us is the idea of a generalized Bezoutian for certain classes of rational matrix functions that was first introduced in [11] and studied further in [13]. Although Bezoutians have not been widely studied in the nonsquare case, some results on this topic can be found in [21, 22] and [10]. In work on the null structure of rational matrix functions, Lerer and Rodman (see [16, 17] and [18]) provide a description of the zeros, null functions, null chains and null pairs associated with the square case.
Furthermore, attention is given to the idea of a common zero structure for two square rational matrix functions, say F (λ) and G(λ), at a common zero λ0 ∈ C. This structure is expressed in terms of common null functions in the following way. We choose a canonical sequence of common null functions that under certain conditions lead to a canonical set of common null chains for F (λ) and G(λ) at λ0 . We associate a pair of matrices with this set and a related Jordan canonical form that is known as the common null pair of F (λ) and G(λ) at λ0 . Under a minimality condition for the controllable (or observable) realization constituted by F (λ) and G(λ), this common null pair may be expressed in terms of the realization of F (λ) or G(λ) and a maximal invariant subspace that coincides with the kernel of a Bezoutian. In particular, the total number of common zeros of F (λ) and G(λ) is equal to the dimension of the kernel of the Bezoutian corresponding to these two square rational matrix functions. We would like to extend the discussion in the previous paragraph to a situation where either F (λ) or G(λ) is a nonsquare rational matrix function. This would necessitate an analysis of the zero structure of nonsquare rational matrices as given, for instance, in [1, 2, 3] and [4]. A new feature here is the existence of a
(left kernel) polynomial matrix, say P(λ), which annihilates the original nonsquare rational matrix function, say Q(λ). In this case, we have P(λ)Q(λ) = 0 at all points of analyticity of Q in C. Zeros associated with such a condition are called generic zeros. P may be assumed to be of full row rank and to be row-reduced. Furthermore, P is uniquely determined by Q up to a certain type of unimodular matrix polynomial (left) factor. In addition, related concepts, like null pair and null function, have been extended to general rectangular nonsquare rational matrix functions (see [1, 2] and [3]).

The paper is organized as follows. In Section 2 we elucidate the relationship between the J-matrix and the J̃-matrix appearing in the factorizations of Φ in (4). Also, this section provides a characterization of all minimal nonsquare J̃-spectral factors. The formula for the nonsquare factor is given explicitly in terms of the components of an algebraic Riccati equation. In Section 3 we show that the generalized Bezoutian for the class of rational matrix functions arising from [23] (see Theorem 1.1 above) corresponds to a solution of the algebraic Riccati equation (3) used there. Our analysis depends largely on the work done in [11] and [13] on the properties of generalized Bezoutians. Also, this section provides an analogue of one of the main results in [18] (Theorem 1.2 there), which asserts that the kernel of the Bezoutian coincides with the maximal subspace invariant under an associated operator arising from the realization of a given nonsquare rational matrix function. Section 4 discusses the common zero structure of a square J-spectral factor and a nonsquare J̃-spectral factor with the same pole pair. Section 5 treats the general zero structure of a nonsquare J̃-spectral factor. The analysis regarding the J-spectral factorization problem can be extended to arbitrary rational matrix functions that may not be analytic or invertible at infinity [12].
2. Minimal Nonsquare J-Spectral Factorization

In this section we discuss the relationship between the J-matrix and the J̃-matrix appearing in the respective factorizations of Φ in (4). Also, we describe explicitly all minimal nonsquare J̃-spectral factors of the rational matrix function Φ with constant signature matrix J, and with the same pole pair as a given minimal square J-spectral factor. The formulas for these J̃-spectral factors are given in terms of the components of an algebraic Riccati equation and a given minimal square J-spectral factor.

2.1. The J-matrix and the J̃-matrix

Our first observation is that, without loss of generality, we may take J̃ in the J̃-spectral factorization

Φ(λ) = W(λ)J̃W(−λ̄)*    (5)
to be of the form

J̃ = [J 0; 0 J̃22],    (6)
and at the same time we may assume that W(∞) = [I 0]. Indeed, observe that in (5) we may multiply W(λ) on the right by a constant invertible matrix V at the expense of changing J̃ to V^{-1}J̃V^{-*}. As Φ(∞) = J, and W is proper, we can find an invertible matrix V such that W(∞)V = [I 0]. Replacing W by WV and making the appropriate change to J̃ as we indicated, we see that we may assume without loss of generality that W(∞) = [I 0].

Now put J̃ = [J11 J12; J12* J22]. Let λ go to ∞ in (5). Since W(∞) = [I 0], we see that J11 = Φ(∞) = J. Now, as J̃ = [J J12; J12* J22], it follows that

[J J12; J12* J22] = [I 0; J12*J^{-1} I] [J 0; 0 J22 − J12*J^{-1}J12] [I J^{-1}J12; 0 I].

So it is clear that we may always J̃-factorize Φ in the following way:

Φ = [W11 W12] [J J12; J12* J22] [W11*; W12*]
  = ([W11 W12] [I 0; J12*J^{-1} I]) [J 0; 0 J22 − J12*J^{-1}J12] ([I J^{-1}J12; 0 I] [W11*; W12*])
  = [V11 V12] [J 0; 0 Ĵ22] [V11*; V12*],

where [V11 V12] = [W11 W12][I 0; J12*J^{-1} I] and Ĵ22 = J22 − J12*J^{-1}J12. Observe that

[V11(∞) V12(∞)] = [I 0][I 0; J12*J^{-1} I] = [I 0],

so that the value at infinity is still [I 0]. This proves the claim.

2.2. Minimal Nonsquare J-Spectral Factors

As before, let Φ be a rational matrix function with constant signature, for which we assume the existence of a square minimal J-spectral factorization Φ(λ) = W1(λ)JW1(−λ̄)*. In the main result of this subsection, we describe explicitly all minimal nonsquare J̃-spectral factors W of Φ for which W(∞) = [I 0] and with the same pole pair as W1. Here, we assume that J̃ = [J 0; 0 J̃22].
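Before proceeding, we note that the block factorization of J̃ used in Section 2.1 above can be confirmed numerically. The sketch below is our own illustration with made-up blocks J, J12, J22 (none of the data comes from the paper):

```python
# Verify the congruence
#   [J J12; J12* J22]
#     = [I 0; J12* J^{-1} I] [J 0; 0 J22 - J12* J^{-1} J12] [I J^{-1} J12; 0 I]
# for an arbitrarily chosen signature matrix J and blocks J12, J22.
import numpy as np

rng = np.random.default_rng(0)
m, q = 3, 2
J = np.diag([1.0, 1.0, -1.0])                 # J = J* = J^{-1}
J12 = rng.standard_normal((m, q))
J22 = rng.standard_normal((q, q)); J22 = J22 + J22.T

Jt = np.block([[J, J12], [J12.T, J22]])       # candidate \tilde{J}

Jinv = np.linalg.inv(J)
L = np.block([[np.eye(m), np.zeros((m, q))],
              [J12.T @ Jinv, np.eye(q)]])
D = np.block([[J, np.zeros((m, q))],
              [np.zeros((q, m)), J22 - J12.T @ Jinv @ J12]])

err = np.linalg.norm(Jt - L @ D @ L.T)        # should be ~ machine precision
```

Here Ĵ22 = J22 − J12*J^{-1}J12 appears as the lower right block of D.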
Theorem 2.1. Suppose that the rational matrix function Φ with constant signature matrix J has a realization Φ(λ) = J + C(λI − A)^{-1}B and a minimal square J-spectral factor W1 given by the minimal realization

W1(λ) = I_m + C1(λI − A1)^{-1}B̃1.    (7)

Put Z = A1 − B̃1C1. For any X = X* form XZ* + ZX − XC1*JC1X, and let X2 and J̃22 = J̃22* be any matrices such that

XZ* + ZX − XC1*JC1X = X2J̃22X2*.    (8)

Then for any such X, X2 and J̃22 the function

W(λ) = [I_m 0] + C1(λI − A1)^{-1}[XC1*J + B̃1  X2]    (9)

is a J̃-spectral factor of Φ, where J̃ is given by (6). Conversely, given J̃ as in (6), all J̃-spectral factors of Φ are given by (9), where X and X2 satisfy (8).

Proof. We start by proving the converse statement. We consider a nonsquare rational matrix function of the form

W(λ) = [I_m 0] + C1(λI − A1)^{-1}[X1 + B̃1  X2].    (10)

We can rewrite (10) in terms of the square J-spectral factor (7) as

W(λ) = [W1(λ) + R1(λ)  R2(λ)],    (11)

where R1(λ) = C1(λ − A1)^{-1}X1 and R2(λ) = C1(λ − A1)^{-1}X2. If we form a J̃-spectral product with W(λ) in the form given by (11), we obtain

W(λ)J̃W(−λ̄)* = [W1(λ) + R1(λ)  R2(λ)] J̃ [W1(−λ̄)* + R1(−λ̄)*; R2(−λ̄)*]
  = [W1(λ) + R1(λ)  R2(λ)] [J 0; 0 J̃22] [W1(−λ̄)* + R1(−λ̄)*; R2(−λ̄)*]
  = (W1(λ) + R1(λ))J(W1(−λ̄)* + R1(−λ̄)*) + R2(λ)J̃22R2(−λ̄)*
  = W1(λ)JW1(−λ̄)* + R1(λ)JW1(−λ̄)* + W1(λ)JR1(−λ̄)* + R1(λ)JR1(−λ̄)* + R2(λ)J̃22R2(−λ̄)*
  = Φ(λ) + R1(λ)JW1(−λ̄)* + W1(λ)JR1(−λ̄)* + R1(λ)JR1(−λ̄)* + R2(λ)J̃22R2(−λ̄)*.

Thus we have that Φ(λ) = W(λ)J̃W(−λ̄)* if and only if

R1(λ)JW1(−λ̄)* + W1(λ)JR1(−λ̄)* + R1(λ)JR1(−λ̄)* + R2(λ)J̃22R2(−λ̄)* = 0.    (12)
Next, we multiply (12) on the left by W1(λ)^{-1} and on the right by W1(−λ̄)^{-*} and use that

W1(λ)^{-1}C1(λ − A1)^{-1} = C1(λ − Z)^{-1},

where Z = A1 − B̃1C1. This yields that (12) is equivalent to

C1(λ − Z)^{-1}X1J − JX1*(λ + Z*)^{-1}C1* = C1(λ − Z)^{-1}(X1JX1* + X2J̃22X2*)(λ + Z*)^{-1}C1*.    (13)

Notice that (13) implies that

C1X1J − JX1*C1* = 0    (14)

and

C1(λ − Z)^{-1}X1Ju = 0,  (u ∈ Ker C1*).    (15)

The pair (C1, Z) is a zero kernel pair, and hence (15) can be rewritten as

X1Ju = 0,  (u ∈ Ker C1*).    (16)

From (14) and (15) it follows that there exists a selfadjoint matrix X such that XC1* = X1J. Indeed, we define X on Im C1* by setting

XC1*u = X1Ju,  (u ∈ C^m).    (17)

From (16) it follows that X is well-defined and uniquely determined on Im C1*. Consider the orthogonal decomposition C^n = Im C1* ⊕ Ker C1, and the following partitioning of X:

X = [X11; X21] : Im C1* → [Im C1*; Ker C1].

Notice that

C1X1J = C1XC1* = C1X11C1*.

Since C1X1J is selfadjoint by (14), we conclude that C1X11C1* is selfadjoint. This implies that X11 is selfadjoint. Indeed, for each x = C1*u in Im C1* we have (X11x, x) = (C1X11C1*u, u), and thus X11 is selfadjoint. Now define X on C^n by

X = [X11 X21*; X21 X22] on [Im C1*; Ker C1],

where X22 is an arbitrary selfadjoint linear transformation on Ker C1. Using (14) and (15) and the fact that there exists an X such that (17) holds, we can rewrite (13) in the following equivalent form:

C1(λ − Z)^{-1}{XZ* + ZX − XC1*JC1X − X2J̃22X2*}(λ + Z*)^{-1}C1* = 0.    (18)

By using the fact that (C1, Z) is a zero kernel pair, we see that (18) is equivalent to

XZ* + ZX − XC1*JC1X − X2J̃22X2* = 0.
We note that the argument above suggests that there is freedom in the choice of X. That this is not the case can be seen as follows (compare also [23]). Assume that

C1(λ − A1)^{-1}XC1*J = C1(λ − A1)^{-1}YC1*J

for some selfadjoint Y, and that also

XZ* + ZX − XC1*JC1X = YZ* + ZY − YC1*JC1Y.

Then by the observability of (C1, A1) we have that XC1* = YC1*. In other words, Im (X − Y) ⊂ Ker C1. But this implies XC1*JC1X = YC1*JC1Y, and thus

(X − Y)Z* + Z(X − Y) = 0.

From this we see that Im (X − Y) is Z-invariant. As it is also contained in Ker C1, and as (C1, Z) is observable, we see that Im (X − Y) = (0), i.e., X = Y.

It remains to prove the direct statement. Given the formula for W(λ) one easily computes that

W(λ)J̃W(−λ̄)* = J + C1(λ − A1)^{-1}(XC1* + B̃1J) − (JB̃1* + C1X)(λ + A1*)^{-1}C1*
  − C1(λ − A1)^{-1}{(XC1* + B̃1J)(B̃1* + JC1X) + X2J̃22X2*}(λ + A1*)^{-1}C1*.

Using (8) we see that

(XC1* + B̃1J)(B̃1* + JC1X) + X2J̃22X2* = XA1* + A1X + B̃1JB̃1*
  = X(λ + A1*) − (λ − A1)X + B̃1JB̃1*.

Inserting this in the formula above easily leads to

W(λ)J̃W(−λ̄)* = W1(λ)JW1(−λ̄)* = Φ(λ).
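Theorem 2.1 (direct statement) can be sanity checked numerically. In the sketch below all matrices are made-up test data of our own choosing: starting from an arbitrary W1 and an arbitrary selfadjoint X, we factor the left-hand side of (8) by an eigendecomposition to obtain X2 and J̃22 (possibly indefinite), and compare Φ = W1 J W1(−λ̄)* with W J̃ W(−λ̄)* at a point of the imaginary axis:

```python
# Numerical sketch of Theorem 2.1 with made-up data.
import numpy as np

rng = np.random.default_rng(1)
p, m = 3, 2                                   # state and I/O dimensions
A1 = rng.standard_normal((p, p)) - 3*np.eye(p)
B1t = rng.standard_normal((p, m))             # plays the role of \tilde{B}_1
C1 = rng.standard_normal((m, p))
J = np.diag([1.0, -1.0])

X = rng.standard_normal((p, p)); X = X + X.T  # arbitrary X = X*
Z = A1 - B1t @ C1
R = X @ Z.T + Z @ X - X @ C1.T @ J @ C1 @ X   # left-hand side of (8)

# factor R = X2 J22t X2* via an eigendecomposition (J22t may be indefinite)
w, U = np.linalg.eigh(R)
keep = np.abs(w) > 1e-9
X2, J22t = U[:, keep], np.diag(w[keep])
q = X2.shape[1]

def W1(s):
    return np.eye(m) + C1 @ np.linalg.inv(s*np.eye(p) - A1) @ B1t

def W(s):                                     # the factor (9)
    return (np.hstack([np.eye(m), np.zeros((m, q))])
            + C1 @ np.linalg.inv(s*np.eye(p) - A1)
                 @ np.hstack([X @ C1.T @ J + B1t, X2]))

Jt = np.block([[J, np.zeros((m, q))], [np.zeros((q, m)), J22t]])

s = 0.7j                                      # sample point on the imaginary axis
Phi = W1(s) @ J @ W1(-np.conj(s)).conj().T
Phi2 = W(s) @ Jt @ W(-np.conj(s)).conj().T
err = np.linalg.norm(Phi - Phi2) / (1 + np.linalg.norm(Phi))
```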
In the next corollary we look at the relationship between special choices of J̃ and J̃-spectral factors of Φ.

Corollary 2.2. Let J̃ be given by (6). Under the assumptions of Theorem 2.1 the following hold.

(a) Let Π+(J̃) = Π+(J), where Π+(J) (resp., Π+(J̃)) denotes the number of positive eigenvalues of J (resp., J̃). There is a one-to-one correspondence between J̃-spectral factors of Φ with pole pair (C1, A1) and with value [I 0] at infinity, and pairs of matrices (X, X2) satisfying

XZ* + ZX − XC1*JC1X ≤ 0

and

XZ* + ZX − XC1*JC1X = X2J̃22X2*.

This one-to-one correspondence is given by (9).
(b) Let Π−(J̃) = Π−(J), where Π−(J) (resp., Π−(J̃)) denotes the number of negative eigenvalues of J (resp., J̃). There is a one-to-one correspondence between J̃-spectral factors of Φ with pole pair (C1, A1) and with value [I 0] at infinity, and pairs of matrices (X, X2) satisfying

XZ* + ZX − XC1*JC1X ≥ 0

and

XZ* + ZX − XC1*JC1X = X2J̃22X2*.

This one-to-one correspondence is given by (9).

(c) Let Π+(J̃) = Π+(J) and Π−(J̃) = Π−(J). There is a one-to-one correspondence between J-spectral factors of Φ with pole pair (C1, A1) and with value [I 0] at infinity, and matrices X satisfying

XZ* + ZX − XC1*JC1X = 0.

This one-to-one correspondence is given by (9).

Part (c) of the above corollary corresponds to the square case which, for instance, is discussed in [14]. We note from the J̃-spectral factorization in (5) and the proof of Theorem 2.1 that

W(λ)J̃W(−λ̄)* = (W1(λ) + R1(λ))J(W1(−λ̄)* + R1(−λ̄)*) + R2(λ)J̃22R2(−λ̄)*,    (19)

where W1 + R1 has a fixed pole pair (C1, A1). If we make the assumption that J̃22 is positive definite, i.e., J̃22 > 0, then it follows that (W1 + R1)J(W1 + R1)* ≤ Φ on the imaginary axis. On the other hand, if J̃22 is negative definite, i.e., J̃22 < 0, then it follows that (W1 + R1)J(W1 + R1)* ≥ Φ. This motivates the following corollary, which generalizes the observation made above.

Corollary 2.3. Assume Φ has a minimal square J-spectral factor

W1(λ) = I_m + C1(λ − A1)^{-1}B̃1.

All square rational matrix functions V such that
1. VJV* ≤ Φ on the imaginary axis,
2. V has a pole pair of the form (C1, A1),
3. V(∞) = I, and
4. Φ − VJV* has a spectral factor R2 with a pole pair that is a restriction of (C1, A1) and R2(∞) = 0,
are given by

V(λ) = I_m + C1(λI − A1)^{-1}(XC1*J + B̃1),    (21)

where X solves

XZ* + ZX − XC1*JC1X ≥ 0.    (22)
Here Z = A1 − B̃1C1.

Proof. The proof of the corollary is similar to the proof of Theorem 2.1. We write V(λ) = I + C1(λ − A1)^{-1}(X1 + B̃1) and R2(λ) = C1(λ − A1)^{-1}X2 and consider

Φ(λ) = [V(λ) R2(λ)] [J 0; 0 I] [V(−λ̄)*; R2(−λ̄)*],

where [V(λ) R2(λ)] = [I_m 0] + C1(λI − A1)^{-1}[X1 + B̃1  X2]. We see that X1 = XC1*J with X = X* satisfying

XZ* + ZX − XC1*JC1X = X2X2* ≥ 0.

The converse is proved by taking a V as in (21) and then forming R2(λ) as above with J̃22 = I. It follows that VJV* ≤ Φ, V has a pole pair of the form (C1, A1), V(∞) = I, and Φ − VJV* has a spectral factor R2 with a pole pair that is a restriction of (C1, A1), R2(∞) = 0 and R2(λ) = C1(λ − A1)^{-1}X2.
3. Generalized Bezoutians

In this section we show that the solutions of the particular algebraic Riccati equation arising in the parametrization of all nonsquare J̃-spectral factors (see Theorem 2.1) can be interpreted as generalized Bezoutians in the sense of [11] and [13]. Also, we describe the kernel of the generalized Bezoutian X in terms of a maximal invariant subspace arising from the realizations (23) and (24) in the case where J̃22 > 0.

3.1. Generalized Bezoutians and solutions of Riccati equations

Our starting point is the same as before. We keep in place the assumptions of Theorem 2.1, and we shall use the notation of that theorem. We recall that the minimal square J-spectral factor and the corresponding minimal nonsquare J̃-spectral factor of Φ are given by (7) and (9), respectively. Since

Φ = W1JW1* = WJ̃W*

(here and below W*(λ) stands for W(−λ̄)*), it follows that

[W1J  WJ̃] [W1*; −W*] = 0.
Let an m × r matrix function L(λ) and an r × m matrix function M(λ) be given by L(λ) = [W1J  WJ̃] and M(λ) = [W1*; −W*], respectively. Then we have

L(λ) = [W1J  WJ̃] = [J  J  0] + C1(λI − A1)^{-1}[B̃1J  XC1* + B̃1J  X2J̃22]    (23)

and

M(λ) = [W1*; −W*] = [I_m; −I_m; 0] + [−B̃1*; JC1X + B̃1*; X2*](λI + A1*)^{-1}C1*.    (24)

We note that L(λ) is analytic at infinity, that L(∞), C1, [B̃1J  XC1* + B̃1J  X2J̃22], and A1 are m × r, m × p, p × r, and p × p matrices, respectively, and that the pair (C1, A1) is observable. Also, M(λ) is analytic at infinity and M(∞), [−B̃1*; JC1X + B̃1*; X2*], C1*, and A1* are r × m, r × p, p × m, and p × p matrices, respectively, with (A1*, C1*) being controllable. Importantly, as was seen before, we have

L(λ)M(λ) = 0.    (25)
From [13] (see also [11]) it follows that, with L and M described as above, there exists a unique matrix B such that

L(λ)M(ν)/(λ − ν) = C1(λ − A1)^{-1}B(ν + A1*)^{-1}C1*    (26)

for all (λ, ν) ∈ C². The matrix B is known as the generalized Bezoutian (see [13] and [11]) corresponding to (25) and the realizations (23) and (24). Our next result shows that, in fact, B coincides exactly with the solution X of the algebraic Riccati equation (8).

Proposition 3.1. Suppose that X is a solution of the algebraic Riccati equation (8) and that B is the generalized Bezoutian associated with (25) and the realizations (23) and (24). Then B = X.

Proof. We compute, by using the algebraic Riccati equation (8), that

L(λ)M(ν) = −C1(λ − A1)^{-1}XC1* + C1X(ν + A1*)^{-1}C1*
    + C1(λ − A1)^{-1}[XC1*JC1X + XC1*B̃1* + B̃1C1X + X2J̃22X2*](ν + A1*)^{-1}C1*
  = −C1(λ − A1)^{-1}XC1* + C1X(ν + A1*)^{-1}C1*
    + C1(λ − A1)^{-1}[XC1*JC1X + X(A1 − Z)* + (A1 − Z)X + X2J̃22X2*](ν + A1*)^{-1}C1*
  = −C1(λ − A1)^{-1}XC1* + C1X(ν + A1*)^{-1}C1* + C1(λ − A1)^{-1}[XA1* + A1X](ν + A1*)^{-1}C1*
  = −C1(λ − A1)^{-1}XC1* + C1X(ν + A1*)^{-1}C1*
    + C1(λ − A1)^{-1}[X(ν + A1*) − νX + (A1 − λ)X + λX](ν + A1*)^{-1}C1*
  = (λ − ν)C1(λ − A1)^{-1}X(ν + A1*)^{-1}C1*.

Thus, we have

L(λ)M(ν)/(λ − ν) = C1(λ − A1)^{-1}X(ν + A1*)^{-1}C1*,
and the result follows when we compare with B in (26).
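Proposition 3.1 can likewise be confirmed numerically; the following self-contained sketch (made-up data, with X2 and J̃22 constructed so that (8) holds) evaluates L(λ)M(ν)/(λ − ν) and compares it with C1(λ − A1)^{-1}X(ν + A1*)^{-1}C1*:

```python
# Numerical sketch of Proposition 3.1 with made-up data.
import numpy as np

rng = np.random.default_rng(2)
p, m = 3, 2
A1 = rng.standard_normal((p, p)) - 3*np.eye(p)
B1t = rng.standard_normal((p, m))
C1 = rng.standard_normal((m, p))
J = np.diag([1.0, -1.0])
X = rng.standard_normal((p, p)); X = X + X.T
Z = A1 - B1t @ C1
R = X @ Z.T + Z @ X - X @ C1.T @ J @ C1 @ X
w, U = np.linalg.eigh(R)                      # factor R = X2 J22t X2* (equation (8))
keep = np.abs(w) > 1e-9
X2, J22t = U[:, keep], np.diag(w[keep])
q = X2.shape[1]

inv = np.linalg.inv

def L(lam):                                   # realization (23)
    return (np.hstack([J, J, np.zeros((m, q))])
            + C1 @ inv(lam*np.eye(p) - A1)
                 @ np.hstack([B1t @ J, X @ C1.T + B1t @ J, X2 @ J22t]))

def M(nu):                                    # realization (24)
    return (np.vstack([np.eye(m), -np.eye(m), np.zeros((q, m))])
            + np.vstack([-B1t.T, J @ C1 @ X + B1t.T, X2.T])
              @ inv(nu*np.eye(p) + A1.T) @ C1.T)

lam, nu = 1.3, -0.4
lhs = L(lam) @ M(nu) / (lam - nu)
rhs = C1 @ inv(lam*np.eye(p) - A1) @ X @ inv(nu*np.eye(p) + A1.T) @ C1.T
err = np.linalg.norm(lhs - rhs) / (1 + np.linalg.norm(rhs))
```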
3.2. The Kernel of the Generalized Bezoutian

The next result describes the kernel of the generalized Bezoutian X in terms of a maximal invariant subspace arising from the realizations (23) and (24). Here we assume that J̃22 > 0. A similar result holds when J̃22 < 0.

Theorem 3.2. Suppose that X is the solution of the algebraic Riccati equation (8) that is the generalized Bezoutian associated with (25) and the realizations (23) and (24). Then Ker X coincides with the maximal (A1* − C1*B̃1*)-invariant subspace N contained in Ker [C1X; X2*].

Proof. First we show that Ker X ⊂ Ker [C1X; X2*] and that it is (A1* − C1*B̃1*)-invariant. Indeed, if x ∈ Ker X then clearly x ∈ Ker C1X, and from (8) we have that x*X2J̃22X2*x = 0. Using J̃22 > 0 we see that x ∈ Ker X2*. Then, again by (8), we see that XZ*x = 0, i.e., Ker X is Z*-invariant.

Now, suppose that Ker X does not coincide with the maximal (A1* − C1*B̃1*)-invariant subspace N. Then there exists a nontrivial subspace N1 such that we can decompose the state space C^n as

C^n = Ker X ⊕ N1 ⊕ N^⊥ = N ⊕ N^⊥.    (27)

We may express X, C1, A1* − C1*B̃1* and X2* in terms of the decomposition (27) as follows:

X = [0 0 0; 0 X22 X23; 0 X23* X33],  C1 = [C11 C12 C13],
A1* − C1*B̃1* = [A11* A12* A13*; 0 A22* A23*; 0 0 A33*],  X2* = [0 0 X13*].    (28)

From the above and (28) it immediately follows that

C1X = [0  C12X22 + C13X23*  C12X23 + C13X33] = [0  0  C12X23 + C13X33],

since N = Ker X ⊕ N1 ⊂ Ker C1X. Computing the algebraic Riccati equation (8) on a vector x = [0; x2; 0] with x2 ∈ N1 yields

0 = (A1 − B̃1C1)Xx + X(A1* − C1*B̃1*)x
  = [A11 0 0; A12 A22 0; A13 A23 A33][0; X22x2; X23*x2] + [0 0 0; 0 X22 X23; 0 X23* X33][A12*x2; A22*x2; 0].

This results in

(A1 − B̃1C1)X[0; x2; 0] = −X[0; A22*x2; 0].

Thus XN1 is (A1 − B̃1C1)-invariant. It is also contained in Ker C1, as N1 ⊂ N ⊂ Ker C1X. It is important to note that, in fact, the latter conclusion reduces to XN1 being A1-invariant and contained in Ker C1. But observability then ensures that XN1 = (0), which in turn gives N1 ⊂ Ker X. This leads to a contradiction unless N1 = (0).

The result above is an analogue of one of the main results in [16], where the kernel of the Bezoutian of two square rational matrix functions is described in terms of the coefficients of a controllable realization. We note that there the proof of the result depends on several linear matrix equations of the intertwining type valid for the Bezoutian under investigation.
4. Common Zero Structure of W1(−λ̄)* and W(−λ̄)*

In the literature, a (generalized) Bezoutian is used to count the number of common zeros of two matrix functions; see, e.g., [11, 18, 19, 20] and the literature cited there. Usually, the dimension of the kernel of the Bezoutian gives the number of common zeros. Thus, it is natural to investigate whether that is also the case for the functions under consideration here and the corresponding generalized Bezoutian X, which is a solution of (8). It turns out that the relation is indeed as expected. So, in this section, we focus on the common zero data of the rational matrix functions W1(−λ̄)* and W(−λ̄)*, given by

W1(−λ̄)* = I_m − B̃1*(λI + A1*)^{-1}C1*    (29)

and

W(−λ̄)* = [I_m; 0] − [JC1X + B̃1*; X2*](λI + A1*)^{-1}C1*,    (30)
respectively. We shall assume throughout that J̃22 > 0 (similar results may be obtained if J̃22 < 0). From [29] we know that the zero structure of W1*, respectively W*, is equal to the zero structure of the pencil

[λI + A1*  −C1*; −B̃1*  I],    (31)

respectively,

[λI + A1*  −C1*; −(B̃1* + JC1X)  I; −X2*  0].    (32)
As a matter of fact, the eigenvalues and Jordan chains of the former pencil are easily seen to be in one-to-one correspondence with those of the matrix −A1* + C1*B̃1*. The eigenvalues of the pencils above are called the invariant zeros of the functions W1* and W*, respectively (in contrast to the so-called transmission zeros). As we are interested in common zeros of W1* and W*, it is clear that we are only interested in finite zeros of W*. Recall that a set of vectors [x0; y0], ..., [x_{k−1}; y_{k−1}] is called a Jordan chain for the pencil (31), respectively (32), corresponding to the eigenvalue λ0, if the following holds for i = 0, ..., k−2:

[λ0I + A1*  −C1*; −B̃1*  I][x_{i+1}; y_{i+1}] + [I 0; 0 0][x_i; y_i] = [0; 0],    (33)

[λ0I + A1*  −C1*; −B̃1*  I][x0; y0] = [0; 0],    (34)

respectively,

[λ0I + A1*  −C1*; −(B̃1* + JC1X)  I; −X2*  0][x_{i+1}; y_{i+1}] + [I 0; 0 0; 0 0][x_i; y_i] = [0; 0; 0],    (35)

[λ0I + A1*  −C1*; −(B̃1* + JC1X)  I; −X2*  0][x0; y0] = [0; 0; 0].    (36)
A number λ0 is called a common zero of W1* and W* if there is a set of vectors {[x_i; y_i]}_{i=0}^{k−1} that is a common Jordan chain for the two pencils (31) and (32). The following theorem describes the common zeros and the common Jordan chains.

Theorem 4.1. A Jordan chain [x0; y0], ..., [x_{k−1}; y_{k−1}] for the pencil (31) is a common Jordan chain for the pencils (31) and (32) if and only if x_i ∈ Ker X for i = 0, ..., k−1. Likewise, a Jordan chain [x0; y0], ..., [x_{k−1}; y_{k−1}] for the pencil (32) is a common Jordan chain for the pencils (31) and (32) if and only if x_i ∈ Ker X for i = 0, ..., k−1.

Proof. First assume that x_i ∈ Ker X for i = 0, ..., k−1. One easily sees, using (8) and the fact that J̃22 > 0, that this implies that X2*x_i = 0 for i = 0, ..., k−1. Suppose now that, in addition, the vectors [x0; y0], ..., [x_{k−1}; y_{k−1}] form a Jordan chain of the pencil (31) at eigenvalue λ0. Then, by our observation in the previous
paragraph, we see that

[λ0I + A1*  −C1*; −(B̃1* + JC1X)  I; −X2*  0][x_i; y_i] = [(λ0I + A1*)x_i − C1*y_i; −B̃1*x_i + y_i; 0] = −[I 0; 0 0; 0 0][x_{i−1}; y_{i−1}]

(where x_{−1} = y_{−1} = 0). Thus [x0; y0], ..., [x_{k−1}; y_{k−1}] is a Jordan chain of the pencil (32) corresponding to the eigenvalue λ0.

Conversely, suppose that the vectors [x0; y0], ..., [x_{k−1}; y_{k−1}] form a common Jordan chain of the pencils (31) and (32) corresponding to the eigenvalue λ0. Then (33) and (35) hold. It follows from the invertibility of J that x_i ∈ Ker [C1X; X2*] for all i. We have to show that it follows from this that x_i ∈ Ker X. By (8) we have

0 = X(A1 − B̃1C1)*x_i + (A1 − B̃1C1)Xx_i = X(A1 − B̃1C1)*x_i + A1Xx_i.

Now from (33) we have

(A1 − B̃1C1)*x_i = A1*x_i − C1*B̃1*x_i = A1*x_i − C1*y_i = A1*x_i − (λ0I + A1*)x_i − x_{i−1} = −λ0x_i − x_{i−1},

where x_{−1} = 0. Combining the latter two equations we see that

0 = −λ0Xx_i + A1Xx_i − Xx_{i−1}.

In other words, Xx0, ..., Xx_{k−1} is a Jordan chain for A1 at λ0 (possibly consisting of zero vectors). Thus we see that span{Xx0, ..., Xx_{k−1}} is A1-invariant and contained in Ker C1. By observability it follows that Xx_i = 0 for all i = 0, ..., k−1. The second part of the theorem is proved in a similar manner.

Next, we show that if [x0; y0], ..., [x_{k−1}; y_{k−1}] is a common Jordan chain of the pencils (31) and (32), corresponding to λ0, then x0, ..., x_{k−1} are linearly independent. This is almost trivial, as the latter vectors form a Jordan chain of −A1* + C1*B̃1* corresponding to the eigenvalue λ0. Moreover, y_i = B̃1*x_i.

Conversely, if a subspace N is invariant under −A1* + C1*B̃1* and contained in Ker [C1X; X2*], then it has a basis of Jordan chains. Let x0, ..., x_{k−1} be one such Jordan chain from this basis, say corresponding to the eigenvalue λ0, and put y_i = B̃1*x_i. Then the vectors [x0; y0], ..., [x_{k−1}; y_{k−1}] form a common Jordan chain of the pencils (31) and (32), corresponding to λ0.

As a corollary of these observations (compare also Theorem 3.2), we have the following theorem.

Theorem 4.2. The number of common zeros of W* and W1* (multiplicities taken into account) is equal to dim Ker X.
5. Zero Structure of W1(−λ̄)* and W(−λ̄)*

While we focused on common zeros in the previous section, in this section we describe the full zero structure of the adjoints of the minimal spectral factors (7) and (9), given by (29) and (30). As is well known, the zeros of W1* are the eigenvalues of −A1* + C1*B̃1*.

To describe the zero structure of W*, we first investigate the left annihilating polynomial P = [P1 P2] that appears in a description of the null structure of the nonsquare spectral factor W* (see [2]). We may rewrite W* in terms of W1* as

W(−λ̄)* = [W1(−λ̄)* − JC1X(λI + A1*)^{-1}C1*; −X2*(λI + A1*)^{-1}C1*] =: [W11(λ); W2(λ)].    (37)

A matrix polynomial P is called a left annihilating polynomial if

[P1(λ) P2(λ)][W11(λ); W2(λ)] = 0,    (38)

or, equivalently, P1(λ) = −P2(λ)W2(λ)W11(λ)^{-1}. We note that P1(λ) is a polynomial and hence P2(λ)W2(λ)W11(λ)^{-1} is a polynomial. We can compute W2(λ)W11(λ)^{-1} explicitly in terms of the realization (37) as

W2(λ)W11(λ)^{-1} = −X2*(λI + A1*)^{-1}C1*[I_m + (JC1X + B̃1*)(λI + A1^{×*})^{-1}C1*],

where A1^{×*} = A1* − C1*JC1X − C1*B̃1*. Moreover, the expression for W2(λ)W11(λ)^{-1} simplifies even further to

W2(λ)W11(λ)^{-1} = −X2*(λI + A1^{×*})^{-1}C1*.    (39)
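The simplification (39) rests on a standard resolvent identity for the closed-loop matrix A1^{×*}; the sketch below checks it numerically on made-up data (X and X2 are arbitrary here, whereas in the paper they come from Theorem 2.1):

```python
# Check (39): W2(lam) W11(lam)^{-1} = -X2* (lam I + A1^{x*})^{-1} C1*.
import numpy as np

rng = np.random.default_rng(5)
p, m, q = 3, 2, 2
A1 = rng.standard_normal((p, p))
B1t = rng.standard_normal((p, m))
C1 = rng.standard_normal((m, p))
J = np.diag([1.0, -1.0])
X = rng.standard_normal((p, p)); X = X + X.T
X2 = rng.standard_normal((p, q))
inv = np.linalg.inv

A1x_star = A1.T - C1.T @ J @ C1 @ X - C1.T @ B1t.T   # A_1^{x*}

lam = 50.0                                    # sample point away from the poles
res = inv(lam*np.eye(p) + A1.T) @ C1.T
W11 = np.eye(m) - (B1t.T + J @ C1 @ X) @ res  # first block of (37)
W2 = -X2.T @ res                              # second block of (37)

lhs = W2 @ inv(W11)
rhs = -X2.T @ inv(lam*np.eye(p) + A1x_star) @ C1.T
err = np.linalg.norm(lhs - rhs) / (1 + np.linalg.norm(rhs))
```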
From this it follows that

P1(λ) = P2(λ)X2*(λI + A1^{×*})^{-1}C1*.

Our conclusion is that P2(λ) has to cancel the poles of X2*(λI + A1^{×*})^{-1}C1*; by observability of (C1, A1^×), these occur at the eigenvalues of −A1^{×*}. For these polynomials we have the following result.

Theorem 5.1. A matrix polynomial P(λ) = [P1(λ) P2(λ)] is an annihilating polynomial for W(−λ̄)* if and only if one of the following equivalent conditions holds:

(a) P2 is a polynomial for which P2(λ)X2*(λI + A1^{×*})^{-1} is a polynomial. In this case, we have

P1(λ) = P2(λ)X2*(λI + A1^{×*})^{-1}C1*.    (40)
(b) $P_2$ is expressible as
\[
P_2(\lambda) = P_0 + \lambda P_1 + \cdots + \lambda^k P_k, \tag{41}
\]
where $P_0, \ldots, P_k$ satisfy
\[
\sum_{j=0}^{k} P_j X_2^* (-A_1^{\times *})^j = 0. \tag{42}
\]
In that case
\[
P_1(\lambda) = \sum_{s=0}^{k-1} \lambda^s Q_s, \tag{43}
\]
where
\[
Q_s = \sum_{j=s+1}^{k} P_j X_2^* (-A_1^{\times *})^{-s-1+j} C_1^*.
\]
Proof. (a) The proof of part (a) follows from the discussion that immediately preceded this theorem.

(b) From (41) and the fact that
\[
X_2^*(\lambda I + A_1^{\times *})^{-1}C_1^* = \sum_{n=0}^{\infty} \lambda^{-n-1}\, X_2^* (-A_1^{\times *})^n C_1^*, \qquad |\lambda| > \|A_1^{\times *}\|,
\]
we compute that
\[
P_2(\lambda)X_2^*(\lambda I + A_1^{\times *})^{-1}C_1^* = \sum_{j=0}^{k} \sum_{n=0}^{\infty} \lambda^{j-n-1}\, P_j X_2^* (-A_1^{\times *})^n C_1^*.
\]
Putting $n = -s-1+j$ we have
\[
P_2(\lambda)X_2^*(\lambda I + A_1^{\times *})^{-1}C_1^*
= \sum_{j=0}^{k} \sum_{s=-\infty}^{j-1} \lambda^s\, P_j X_2^* (-A_1^{\times *})^{-s-1+j} C_1^*
= \sum_{s=-\infty}^{-1} \lambda^s \sum_{j=0}^{k} P_j X_2^* (-A_1^{\times *})^{-s-1+j} C_1^* + \sum_{s=0}^{k-1} \lambda^s Q_s.
\]
This is a polynomial if and only if for all $s \le -1$
\[
0 = \sum_{j=0}^{k} P_j X_2^* (-A_1^{\times *})^{-s-1+j} C_1^*
= \Bigl(\sum_{j=0}^{k} P_j X_2^* (-A_1^{\times *})^{j}\Bigr)(-A_1^{\times *})^{-s-1} C_1^*.
\]
Observability of $(C_1, A_1)$ (hence of $(C_1, A_1^{\times})$) implies that (42) holds.
In that case $P_1(\lambda) = P_2(\lambda)X_2^*(\lambda I + A_1^{\times *})^{-1}C_1^*$ is given by $\sum_{s=0}^{k-1} \lambda^s Q_s$, with $Q_s$ as in the statement of the theorem. $\square$
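Condition (42) and the formula for the $Q_s$ can be checked numerically on a small example. The sketch below (numpy; all matrices invented, with the stand-in for $X_2^*$ taken square and invertible purely so that (42) is easy to solve for degree $k = 1$) builds $P_2(\lambda) = P_0 + \lambda P_k$ satisfying (42) and verifies that $P_2(\lambda)X_2^*(\lambda I + A_1^{\times *})^{-1}C_1^*$ is then the constant polynomial $Q_0$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 2                           # illustrative sizes only
Axs = rng.standard_normal((n, n))     # plays the role of A_1^{x*}
C1s = rng.standard_normal((n, m))     # plays the role of C_1^*
X2s = rng.standard_normal((n, n))     # stand-in for X_2^*, square invertible for simplicity

# Degree k = 1: P_2(lam) = P0 + lam * Pk. Condition (42) reads
#   P0 @ X2s + Pk @ X2s @ (-Axs) = 0, i.e. P0 = Pk @ X2s @ Axs @ X2s^{-1}.
Pk = rng.standard_normal((m, n))
P0 = Pk @ X2s @ Axs @ np.linalg.inv(X2s)

# Q_s = sum_{j=s+1}^k P_j X2s (-Axs)^{-s-1+j} C1s; here only Q_0 = Pk X2s C1s.
Q0 = Pk @ X2s @ C1s

# Check: P_2(lam) X2s (lam I + Axs)^{-1} C1s is constant in lam, equal to Q_0.
for lam in (0.3, -1.1, 2.5):
    val = (P0 + lam * Pk) @ X2s @ np.linalg.solve(lam * np.eye(n) + Axs, C1s)
    assert np.allclose(val, Q0)
```

The check works because $(P_0 + \lambda P_k)X_2^* = P_k X_2^*(\lambda I + A_1^{\times *})$ under (42), so the resolvent factor cancels exactly.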
We can describe general zeros of $W^*$ according to one of the main results in [2] (Theorem 3.1 there), which deals with the direct problem of calculating a null-pole triple at any point $\lambda \in \mathbb{C}$ of an injective rational matrix function without a zero at infinity. Note that $W^*$ is injective. In particular, we can choose a generalized inverse $D^{\dagger} = \begin{bmatrix} I_m & 0 \end{bmatrix}$ for $D = \begin{bmatrix} I_m \\ 0 \end{bmatrix}$ in $W^*$ whose row span has trivial intersection with the row span of the left annihilating polynomial $P(\lambda)$ discussed in Theorem 5.1 above. One easily checks that indeed the row span of $D^{\dagger} = \begin{bmatrix} I_m & 0 \end{bmatrix}$ has trivial intersection with the row span of $P(\lambda) = \begin{bmatrix} P_1(\lambda) & P_2(\lambda) \end{bmatrix}$, because of (40). By considering a similarity transformation of the null-pole triple arising from the minimal realization (30) and
\[
-A_1^{\times *} = -A_1^* + C_1^* \begin{bmatrix} I_m & 0 \end{bmatrix} \begin{bmatrix} JC_1X + \widetilde B_1^* \\ X_2^* \end{bmatrix} = -A_1^* + C_1^*(JC_1X + \widetilde B_1^*),
\]
we can obtain a global null-pole triple for $W^*$. From the aforementioned result we have that a global null-pole triple for $W^*$ in (30) is given by
\[
\omega = \bigl( (-(\widetilde B_1^* + JC_1X),\; -A_1^* + C_1^*\widetilde B_1^* + C_1^*JC_1X);\; (-A_1^*, C_1^*);\; I \bigr). \tag{44}
\]
Also we know that a global null-pole triple for $W_1^*$ in (29) is given by
\[
\omega_1 = \bigl( (-\widetilde B_1^*,\; -A_1^* + C_1^*\widetilde B_1^*);\; (-A_1^*, C_1^*);\; I \bigr).
\]
(45)
From the results of the previous section we know that the common null pair of $W_1^*$ and $W^*$ is
\[
\bigl( -\widetilde B_1^*|_N,\; (-A_1^* + C_1^*\widetilde B_1^*)|_N \bigr) \tag{46}
\]
\[
= \bigl( -(\widetilde B_1^* + JC_1X)|_N,\; (-A_1^* + C_1^*\widetilde B_1^* + C_1^*JC_1X)|_N \bigr), \tag{47}
\]
where $N$ is the maximal $(A_1^* - C_1^*\widetilde B_1^*)$-invariant subspace in $\operatorname{Ker} \begin{bmatrix} C_1X \\ X_2^* \end{bmatrix}$, i.e., $N = \operatorname{Ker} X$ (see Theorems 4.2 and 3.2).
References

[1] J.A. Ball, N. Cohen and L. Rodman. Zero data and interpolation problems for rectangular matrix polynomials, Linear and Multilinear Algebra 29 (1991), 53–78.
[2] J.A. Ball, I. Gohberg and M. Rakowski. Reconstruction of a rational nonsquare matrix function from local data, Integral Equations and Operator Theory 20 (1994), 249–305.
[3] J.A. Ball and M. Rakowski. Null-pole subspaces of nonregular rational matrix functions, Linear Algebra and its Applications 159 (1991), 81–120.
[4] N. Cohen. On minimal factorizations of rational matrix functions, Integral Equations and Operator Theory 6 (1983), 647–671.
[5] L. Finesso and G. Picci. A characterization of minimal spectral factors, IEEE Transactions on Automatic Control AC-27 (1982), 122–127.
[6] P. Fuhrmann. On symmetric rational transfer functions, Linear Algebra and its Applications 50 (1983), 167–250.
[7] P. Fuhrmann. On the characterization and parametrization of minimal spectral factors, Journal of Mathematical Systems, Estimation and Control 5 (1995), 383–444.
[8] I. Gohberg, P. Lancaster and L. Rodman. Matrices and Indefinite Scalar Products, Operator Theory: Advances and Applications, OT 8, Birkhäuser Verlag, Basel, 1983.
[9] I. Gohberg, P. Lancaster and L. Rodman. Factorization of selfadjoint matrix polynomials with constant signature, Linear and Multilinear Algebra 11 (1982), 209–224.
[10] I. Gohberg and T. Shalom. On Bezoutians of nonsquare matrix polynomials and inversion of matrices with nonsquare blocks, Linear Algebra and its Applications 137/138 (1990), 249–323.
[11] G. Gomez and L. Lerer. Generalized Bezoutian for analytic operator functions and inversion of structured operators, in U. Helmke, R. Mennicken and J. Sauer (eds), Systems and Networks: Mathematical Theory and Applications, vol. II, Mathematical Research, vol. 79, Akademie Verlag, Berlin, 1994, 691–696.
[12] G.J. Groenewald and M.A. Petersen. J-spectral factorization of rational matrix functions with alternative realization, in preparation.
[13] I. Karelin and L. Lerer. Generalized Bezoutian for analytic operator functions, factorization of rational matrix functions and matrix quadratic equations, in H. Bart, I. Gohberg and A.C.M. Ran (eds), Operator Theory and Analysis, The M.A. Kaashoek Anniversary Volume, Operator Theory: Advances and Applications, OT 122, Birkhäuser Verlag, Basel, 2001, 303–320.
[14] I. Karelin, L. Lerer and A.C.M. Ran. J-symmetric factorizations and algebraic Riccati equations, in A. Dijksma, M.A. Kaashoek and A.C.M. Ran (eds), Recent Advances in Operator Theory, The Israel Gohberg Anniversary Volume, Operator Theory: Advances and Applications, OT 124, Birkhäuser Verlag, Basel, 2001, 319–360.
[15] L. Lerer and A.C.M. Ran. J-pseudo-spectral and J-inner-pseudo-outer factorization for matrix polynomials, Integral Equations and Operator Theory 29 (1997), 23–51.
[16] L. Lerer and L. Rodman. Bezoutians of rational matrix functions, Journal of Functional Analysis 141 (1996), 1–36.
[17] L. Lerer and L. Rodman. Bezoutians of rational matrix functions, matrix equations and factorizations, Linear Algebra and its Applications 302/305 (1999), 105–135.
[18] L. Lerer and L. Rodman. Common zero structure of rational matrix functions, Journal of Functional Analysis 136 (1996), 1–38.
[19] L. Lerer and L. Rodman. Symmetric factorizations and locations of zeroes of rational matrix functions, Linear and Multilinear Algebra 40 (1996), 259–281.
[20] L. Lerer and M. Tismenetsky. The Bezoutian and the eigenvalue separation problem, Integral Equations and Operator Theory 5 (1982), 386–445.
[21] L. Lerer and M. Tismenetsky. Generalized Bezoutian and matrix equations, Linear Algebra and its Applications 99 (1988), 123–160.
[22] L. Lerer and M. Tismenetsky. Generalized Bezoutian and the inversion problem for block matrices, I. General scheme, Integral Equations and Operator Theory 9 (1986), 790–819.
[23] M.A. Petersen and A.C.M. Ran. Minimal nonsquare spectral factors, Linear Algebra and its Applications 351–352 (2002), 553–565.
[24] M.A. Petersen and A.C.M. Ran. Minimal nonsquare spectral factors via factorizations of unitary functions, Linear Algebra and its Applications 351–352 (2002), 567–583.
[25] M.A. Petersen and A.C.M. Ran. Minimal square spectral factors via triples, SIAM Journal on Matrix Analysis and Applications 22 no. 4 (2001), 1222–1244.
[26] A.C.M. Ran and L. Rodman. On symmetric factorizations of rational matrix functions, Linear and Multilinear Algebra 29 (1991), 243–261.
[27] A.C.M. Ran and L. Rodman. Stability of invariant maximal semidefinite subspaces I, Linear Algebra and its Applications 62 (1984), 51–86.
[28] A.C.M. Ran and P. Zizler. On selfadjoint matrix polynomials with constant signature, Linear Algebra and its Applications 259 (1997), 133–153.
[29] H.H. Rosenbrock. State-space and Multivariable Theory, John Wiley & Sons, Inc., New York, 1970.

Mark A. Petersen
Department of Mathematics and Applied Mathematics, Potchefstroom University for CHE, Potchefstroom X 6001, South Africa
E-mail: [email protected]

André C.M. Ran
Divisie Wiskunde en Informatica, Faculteit Wiskunde en Informatica, Vrije Universiteit Amsterdam, De Boelelaan 1081a, 1081 HV Amsterdam, The Netherlands
E-mail: [email protected]

Submitted: March 15, 2002
Revised: July 15, 2002