Mathematical Notes, vol. 76, no. 1, 2004, pp. 48–61. Translated from Matematicheskie Zametki, vol. 76, no. 1, 2004, pp. 52–65. Original Russian Text Copyright © 2004 by V. P. Derevenskii.
Integrable Systems of Linear Ordinary Differential Equations of General Order

V. P. Derevenskii

Received May 20, 2002
Abstract—In this paper, we define sufficient conditions for the solvability in quadratures of systems of linear ordinary differential equations of higher orders with variable coefficients. The method of variation of constants for equations of higher orders and for systems of equations of first order is generalized to the case of such systems. Results are obtained by reducing the systems under consideration to matrix equations of higher orders studied by methods of the theory of Lie algebras. Key words: linear ordinary differential equation of general order, matrix equation, Lie algebra, method of variation of constants.
In this paper, we define sufficient conditions for the solvability in quadratures of systems of linear ordinary differential equations (ODEs) of higher orders with variable coefficients.

1. Statement of the problem. Suppose that $M_n$ is the set of $n \times n$ matrices over the real field $\mathbb{R}$, $M_n^k(t)$ is the set of elements of $M_n$ whose entries are functions of $t$ of class $C^k(t)$, $\mathbf{E}$ is the unit matrix of $M_n$, and $L_N$ is the representation in $M_n$ of the $N$-dimensional Lie algebra with basis $(\mathbf{E}_\alpha)$ and structural constants $C_{\alpha\beta}^\gamma$ ($\alpha, \beta, \gamma = 1, \dots, N \le n^2 < \infty$). Suppose that $R_K$ is the radical of $L_N$ ($K \le N$), i.e., (by definition) its largest solvable subalgebra possessing the following properties:

(1) $[R_K L_N] \subset R_K$;

(2) $\exists\, M < \infty$: $R_K^{(M)} = 0$, where $R_K^{(m)} \equiv [R_K^{(m-1)} R_K^{(m-1)}]$ and $[AB]$ is the commutant of the sets $A$ and $B$.

Let us number the basis matrices of $L_N$ so that $\mathbf{E}_\alpha \in R_K$ for $\alpha = 1, \dots, K$ and $\mathbf{E}_\alpha \in U_{N-K} \equiv L_N \setminus R_K$ for $\alpha = K+1, \dots, N$.

Consider the normal system of linear ODEs of $K$th order (ODEs.K)
$$\sum_{s=K}^{0} A_{is}^j x_j^{(s)} = f_i, \qquad A_{iK}^j = \delta_i^j, \qquad i, j = 1, \dots, n, \eqno(1)$$
where the $\mathbf{A}_s \equiv (A_{is}^j) \in M_n^0(t)$ and $(\delta_i^j) = \mathbf{E}$ are matrices, summation is performed over repeated superscripts and subscripts, $t \in T$, and $T$ is the domain of existence of the $A_{is}^j$. A method of reducing scalar linear ODEs.K to a linear system of first order [1, p. 139] allows us to state the following proposition.
0001-4346/2004/7612-0048 © 2004 Plenum Publishing Corporation
Proposition 1. System (1) is equivalent to the system of $Kn$ linear ODEs.1
$$\dot y_p = B_p^q y_q + C_p, \qquad p, q = 1, \dots, Kn, \eqno(2)$$
with matrix coefficient $\mathbf{B} \equiv (B_p^q) \in M_{Kn}^0(t)$ consisting of $n^2$ ($K \times K$)-dimensional blocks $\mathbf{B}_i^j$ of the form
$$\mathbf{B}_i^i = \begin{pmatrix} 0 & 1 & 0 & \dots & 0 \\ 0 & 0 & 1 & \dots & 0 \\ \dots & \dots & \dots & \dots & \dots \\ 0 & 0 & 0 & \dots & 1 \\ -A_{i0}^i & -A_{i1}^i & -A_{i2}^i & \dots & -A_{iK-1}^i \end{pmatrix}, \qquad \mathbf{B}_i^j = \begin{pmatrix} 0 & 0 & \dots & 0 \\ \dots & \dots & \dots & \dots \\ 0 & 0 & \dots & 0 \\ -A_{i0}^j & -A_{i1}^j & \dots & -A_{iK-1}^j \end{pmatrix}, \quad i \ne j, \eqno(3)$$
and $C_p = 0$ for $p \ne Ki$, $C_{Ki} = f_i$.

Proof. We introduce the new unknowns
$$y_p \equiv y_{K(i-1)+k} \equiv x_i^{(k-1)}, \qquad k = 1, \dots, K. \eqno(4)$$
Then for $k \ne 1$ we have $y_p = \dot y_{p-1}$, and for $k = K$, i.e., $p = Ki$, system (1) enables us to determine $\dot y_p$ as
$$\dot y_p = f_p - \sum_{s=0}^{K-1} \bigl( A_{ps}^1 y_s + A_{ps}^2 y_{K+s} + \dots + A_{ps}^n y_{(n-1)K+s} \bigr).$$
The set of such relations defines the system of linear ODEs.1 (2) in $M_{Kn}^0$.
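The block structure (3) can be sketched numerically. The helper name `companion_block_matrix` and the concrete coefficient matrices below are illustrative, not from the paper; this is a minimal sketch of the reduction (4) for a second-order example ($K = 2$, $n = 2$):

```python
import numpy as np

def companion_block_matrix(A):
    """Assemble the Kn x Kn coefficient matrix B of system (2) from the
    list A = [A_0, ..., A_{K-1}] of n x n coefficient matrices of (1)
    (the normalization A_K = E is implicit).  Unknowns are ordered as
    in (4): y_{K(i-1)+k} = x_i^{(k-1)}, k = 1, ..., K."""
    K, n = len(A), A[0].shape[0]
    B = np.zeros((K * n, K * n))
    for i in range(n):
        for k in range(K - 1):                # chain rows: d/dt y_p = y_{p+1}
            B[K * i + k, K * i + k + 1] = 1.0
        for j in range(n):                    # last row of each block of (3)
            for s in range(K):
                B[K * i + K - 1, K * j + s] = -A[s][i, j]
    return B

# second-order example (K = 2, n = 2): x'' + A1 x' + A0 x = f
A0 = np.array([[2.0, 0.0], [0.0, 2.0]])
A1 = np.array([[3.0, 1.0], [0.0, 3.0]])
B = companion_block_matrix([A0, A1])
print(B)
```

Each diagonal block is a companion matrix and each off-diagonal block carries only the last row, exactly as in (3).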
Proposition 1 yields the following corollary.

Corollary. In the domain $T$, there exists a solution of system (1) which is unique under given initial conditions.

Indeed, since system (1) is equivalent to system (2), which is uniquely solvable in the domain $T$ [1, p. 22], it follows that the solution of (2) is also a solution of system (1).

In the definition of system (1), the condition $\mathbf{A}_K = \mathbf{E}$ is essential. It not only normalizes each of the equations in the system of linear ODEs.K, but also ensures their simultaneous solvability with respect to the derivatives of higher orders of the unknown functions. This presupposes the study of normal systems of general order; otherwise, the questions of the existence and uniqueness of solutions of system (1) remain open. Given a positive answer to these questions, we are confronted with the main problem of the theory of differential equations, namely, the problem of integrating a system of linear ODEs.K in quadratures. However, it is well known that there are cases of linear ODEs.2 that cannot be integrated in quadratures in the general case; only sufficient conditions for their solvability and the corresponding methods of integration are known. The problem of finding conditions for solvability and methods of integrating systems of linear ODEs.K naturally arises. The present paper deals with the problem of finding sufficient conditions for the integrability in quadratures of a normal system of linear ODEs.K and of deriving the corresponding methods for determining their solutions.
2. The reduction of solutions of inhomogeneous systems to those of homogeneous ones in the case of system (1) is, obviously, problem no. 1. Its solution must lead to a generalization of the classical "methods of variation of constants" for systems of inhomogeneous linear ODEs.1 and for equations of higher orders [1, pp. 135 and 143] to systems of linear ODEs.K. The simplest version of such a generalization obviously follows from Proposition 1.

Proposition 2. If $\mathbf{Y} \equiv (Y_p^q)$ is the fundamental system of solutions of the homogeneous system (2), where
$$\dot Y_p^q = B_p^r Y_r^q, \qquad p, q, r = 1, \dots, Kn, \eqno(5)$$
then the general solution of system (1) is
$$x_i = Y_{K(i-1)+1}^q \int (\mathbf{Y}^{-1})_q^{Kj} f_j \, dt, \qquad i, j = 1, \dots, n. \eqno(6)$$

Proof. It follows from (4) that for $k = 1$ we have $x_i = y_{K(i-1)+1}$. Therefore, the general solution of system (2) can be written as
$$y_p = Y_p^q \int (\mathbf{Y}^{-1})_q^r C_r \, dt. \eqno(7)$$
Thus,
$$x_i = y_{K(i-1)+1} = Y_{K(i-1)+1}^q \int (\mathbf{Y}^{-1})_q^r C_r \, dt.$$
However, the nonzero components of the $Kn$-dimensional vector $(C_r)$ are $C_{Ki} = f_i$, and this allows us to obtain (6) from the last relation. The indefinite integral on the right-hand side of this formula is given up to a constant $Kn$-dimensional vector. Thus, the equivalence of systems (1) and (2) and the generality of the solution (7) of the second system ensure the generality of the solution (6) of the system of linear ODEs.K.

This assertion has several shortcomings, the most important of which are a far greater working time in $M_{Kn}$ as compared to $M_n$ and the loss of the easily visualized dependence of the solution of system (1) on its matrix parameters. Therefore, for such a system it is appropriate to present analogs of well-known methods of variation of constants. Such a generalization of the method used above for systems of linear ODEs.1 is provided by the following theorem.

Theorem 1. The general solution of system (1) is
$$x_i = x_{Ki}^j \int x_{K-1\,j}^l \int x_{K-2\,l}^m \cdots \int x_{1m}^p \int \bigl[ (\mathbf{X}_K \mathbf{X}_{K-1} \cdots \mathbf{X}_1)^{-1} \bigr]_p^q f_q \, dt \cdots dt, \eqno(8)$$
where $i, j, l, m, p, q = 1, \dots, n$ and the $\mathbf{X}_k \equiv (x_{ki}^j) \in M_n^k(t)$ are the nondegenerate solutions of the matrix left-sided equations
$$\sum_{s=K-q}^{0} \mathbf{A}_{q,s} \mathbf{X}_{K-q}^{(s)} = 0, \qquad q = 0, \dots, K-1, \qquad |\mathbf{X}_k| \ne 0, \eqno(9)$$
whose parameters are defined by the recurrence relations
$$\mathbf{A}_{q,s} = \sum_{i=0}^{K-q-s} \binom{K-q-i+1}{s+1} \mathbf{A}_{q-1,K-q-i+1} \mathbf{X}_{K-q+1}^{(K-q-s-i)}, \qquad \binom{m}{n} = \frac{m!}{n!\,(m-n)!}. \eqno(10)$$
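The first-order variation-of-constants formula (7), on which Proposition 2 rests, can be checked numerically. The sketch below assumes constant coefficients (so that the fundamental matrix is a matrix exponential); the names `particular` and the concrete matrices are illustrative, not from the paper:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

# Constant coefficients are an assumption of this sketch: then the
# fundamental matrix of y' = B y is Y(t) = expm(B t).
B = np.array([[0.0, 1.0], [-2.0, -3.0]])
C = np.array([1.0, 0.0])

def particular(t, steps=4001):
    """Particular solution y(t) = Y(t) * int_0^t Y(s)^{-1} C ds, as in (7)."""
    s = np.linspace(0.0, t, steps)
    integrand = np.array([expm(-B * si) @ C for si in s])
    return expm(B * t) @ trapezoid(integrand, s, axis=0)

# reference: direct numerical integration of y' = B y + C, y(0) = 0
sol = solve_ivp(lambda t, y: B @ y + C, (0.0, 1.0), np.zeros(2),
                rtol=1e-10, atol=1e-12, dense_output=True)
err = np.max(np.abs(particular(1.0) - sol.sol(1.0)))
print(err)  # small (quadrature discretization error only)
```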
Proof. Let us write the matrix left-sided linear ODE.K satisfied by any $n$ solutions of system (1) reindexed by the superscript $j$, $\mathbf{X} \equiv (x_i^j)$:
$$\sum_{s=K}^{0} \mathbf{A}_s \mathbf{X}^{(s)} = \mathbf{F}, \qquad \mathbf{A}_K = \mathbf{E}, \eqno(11)$$
where $\mathbf{F} \equiv (f_i^j)$ is a matrix with identical row elements, $f_i^j = f_i$. Suppose that
$$\mathbf{X} = \mathbf{X}_K \int \mathbf{Y}_1 \, dt, \qquad \mathbf{X}_K \in M_n^K(t), \quad \mathbf{Y}_1 \in M_n^{K-1}(t), \eqno(12)$$
where $\mathbf{X}_K$ is a nondegenerate solution of the homogeneous equation (11),
$$\sum_{s=K}^{0} \mathbf{A}_s \mathbf{X}_K^{(s)} = 0, \qquad |\mathbf{X}_K| \ne 0. \eqno(13)$$
Let us find the constraint imposed on $\mathbf{Y}_1$ by this assumption. To this end, we verify the Leibniz formula for multiple differentiation of a matrix product by induction. Obviously, for $k = 1, 2$,
$$(\mathbf{X}\mathbf{Y})^{(k)} = \sum_{s=0}^{k} \binom{k}{s} \mathbf{X}^{(s)} \mathbf{Y}^{(k-s)}.$$
Assume that this relation is also valid for some $k > 2$. Then
$$(\mathbf{X}\mathbf{Y})^{(k+1)} = \frac{d}{dt}(\mathbf{X}\mathbf{Y})^{(k)} = \sum_{s=0}^{k} \binom{k}{s} \bigl( \mathbf{X}^{(s+1)} \mathbf{Y}^{(k-s)} + \mathbf{X}^{(s)} \mathbf{Y}^{(k-s+1)} \bigr) = \mathbf{X}^{(k+1)} \mathbf{Y} + \sum_{s=0}^{k-1} \left[ \binom{k}{s} + \binom{k}{s+1} \right] \mathbf{X}^{(s+1)} \mathbf{Y}^{(k-s)} + \mathbf{X} \mathbf{Y}^{(k+1)}.$$
However [2, p. 11 of the Russian translation],
$$\binom{k}{s} + \binom{k}{s+1} = \binom{k+1}{s+1}.$$
Therefore, the Leibniz formula is also valid for a matrix product. Thus, substituting $\mathbf{X}$ of the form (12) into (11), we obtain
$$\sum_{s=K}^{0} \mathbf{A}_s \sum_{r=0}^{s} \binom{s}{r} \mathbf{X}_K^{(s-r)} \Bigl( \int \mathbf{Y}_1 \, dt \Bigr)^{(r)} = \mathbf{F}. \eqno(14)$$
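The matrix Leibniz formula verified above can be checked symbolically on a noncommuting example (the concrete matrices are illustrative):

```python
import sympy as sp

t = sp.symbols('t')
# two noncommuting 2x2 matrix functions of t
X = sp.Matrix([[sp.exp(t), t], [0, 1]])
Y = sp.Matrix([[1, sp.sin(t)], [t**2, 1]])

k = 3
lhs = (X * Y).diff(t, k)
rhs = sp.zeros(2, 2)
for s in range(k + 1):
    rhs += sp.binomial(k, s) * X.diff(t, s) * Y.diff(t, k - s)
print(sp.simplify(lhs - rhs))  # zero matrix
```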
By (13), the sum of the coefficients of $\int \mathbf{Y}_1 \, dt$ in Eq. (14) is zero. This makes it a linear equation of $(K-1)$th order in $\mathbf{Y}_1$. Adding together the coefficients of the derivatives of each order of the matrix $\mathbf{Y}_1$, we obtain
$$\sum_{s=K-1}^{0} \sum_{r=0}^{s} \binom{K-r}{K-s} \mathbf{A}_{K-r} \mathbf{X}_K^{(s-r)} \mathbf{Y}_1^{(K-s-1)} = \mathbf{F}. \eqno(15)$$
Using the notation
$$\mathbf{A}_{1,K-s-1} \equiv \sum_{r=0}^{s} \binom{K-r}{K-s} \mathbf{A}_{0,K-r} \mathbf{X}_K^{(s-r)}, \qquad \mathbf{A}_{0,s} \equiv \mathbf{A}_s, \qquad \mathbf{Y}_0 \equiv \mathbf{X}, \eqno(16)$$
we can write the original matrix equation (11) as
$$\sum_{s=K}^{0} \mathbf{A}_{0,s} \mathbf{Y}_0^{(s)} = \mathbf{F}, \eqno(17)$$
and its solution as
$$\mathbf{Y}_0 = \mathbf{X}_K \int \mathbf{Y}_1 \, dt, \qquad \sum_{s=K}^{0} \mathbf{A}_{0,s} \mathbf{X}_K^{(s)} = 0, \qquad |\mathbf{X}_K| \ne 0, \eqno(18)$$
where $\mathbf{Y}_1$ satisfies the inhomogeneous matrix left-sided linear ODE.K−1
$$\sum_{s=K-1}^{0} \mathbf{A}_{1,s} \mathbf{Y}_1^{(s)} = \mathbf{F}. \eqno(19)$$
The coefficients of this equation depend on the parameters of the original matrix equation and on the solution of the homogeneous matrix left-sided linear ODE.K (18). This dependence is given by formulas (16). Since (17) and (19) differ only by $K$ and by the notation of the matrix coefficients, the procedure of reducing the order of the inhomogeneous matrix left-sided linear ODE can be repeated. As in (18), we denote
$$\mathbf{Y}_1 \equiv \mathbf{X}_{K-1} \int \mathbf{Y}_2 \, dt, \qquad \sum_{s=K-1}^{0} \mathbf{A}_{1,s} \mathbf{X}_{K-1}^{(s)} = 0, \qquad |\mathbf{X}_{K-1}| \ne 0, \qquad \sum_{s=K-2}^{0} \mathbf{A}_{2,s} \mathbf{Y}_2^{(s)} = \mathbf{F}, \eqno(20)$$
where the $\mathbf{A}_{2,s}$ are expressed in terms of the $\mathbf{A}_{1,s}$ by relations of the form (16). Relations (20) can be obtained from formulas (18) and (19) by the substitution $(\mathbf{X}_K, \mathbf{Y}_0, \mathbf{Y}_1) \to (\mathbf{X}_{K-1}, \mathbf{Y}_1, \mathbf{Y}_2)$. In contrast to Eq. (11), whose coefficient of the highest derivative of the unknown matrix is the unit matrix, the matrix left-sided linear ODEs.K−i ($i = 1, 2$) are not normal; the coefficients of $\mathbf{Y}_i^{(K-i)}$ are $\mathbf{X}_K$ and $\mathbf{X}_K \mathbf{X}_{K-1}$, respectively. Thus, in the process of reducing the orders of inhomogeneous matrix linear ODEs, the solution of Eq. (11) can be reduced to the solution of $K$ homogeneous matrix left-sided linear ODEs.$k$, $k = 1, \dots, K$, of the form (9). The coefficients of these equations can be expressed in terms of all the coefficients and solutions of the homogeneous equations of higher orders; they are connected by the recurrence formulas (10). The last in this sequence of equations is the algebraic matrix equation
$$\mathbf{X}_K \mathbf{X}_{K-1} \cdots \mathbf{X}_1 \mathbf{Y}_K = \mathbf{F}.$$
Since the $\mathbf{X}_k$ are nondegenerate, we have
$$\mathbf{Y}_K = (\mathbf{X}_K \mathbf{X}_{K-1} \cdots \mathbf{X}_1)^{-1} \mathbf{F}.$$
Therefore,
$$\mathbf{X} = \mathbf{X}_K \int \mathbf{X}_{K-1} \int \cdots \int \mathbf{X}_1 \int (\mathbf{X}_K \mathbf{X}_{K-1} \cdots \mathbf{X}_1)^{-1} \mathbf{F} \, dt \cdots dt, \eqno(21)$$
which allows us to write the solution of system (1) as any column of this matrix. Discarding the number of the chosen column, we can present relation (21), written componentwise, as (8).
Let us show that the solution (8) is general. To do this, it suffices to verify that $\mathbf{X}$ of the form (21) is a general solution of Eq. (11). Taking
$$\mathbf{X}^{(q)}(t_0) = \mathbf{R}_q, \qquad \dot{\mathbf{R}}_q = 0, \qquad q = 0, \dots, K-1,$$
as the initial data of Eq. (11), we can express the solution (21) as
$$\mathbf{X} = \mathbf{X}_K \biggl[ \int_{t_0}^{t} \mathbf{X}_{K-1} \biggl[ \int_{t_0}^{t_1} \mathbf{X}_{K-2} \cdots \biggl[ \int_{t_0}^{t_{K-2}} \mathbf{X}_1 \biggl[ \int_{t_0}^{t_{K-1}} (\mathbf{X}_K \mathbf{X}_{K-1} \cdots \mathbf{X}_1)^{-1} \mathbf{F} \, dt_K + \mathbf{Q}_K \biggr] dt_{K-1} + \mathbf{Q}_{K-1} \biggr] \cdots dt_2 + \mathbf{Q}_2 \biggr] dt_1 + \mathbf{Q}_1 \biggr], \qquad \dot{\mathbf{Q}}_k = 0.$$
Therefore,
$$(\mathbf{X}(t_0) = \mathbf{R}_0) \Longrightarrow (\mathbf{X}_K(t_0) \mathbf{Q}_1 = \mathbf{R}_0) \Longrightarrow (\mathbf{Q}_1 = \mathbf{X}_K^{-1}(t_0) \mathbf{R}_0).$$
Taking the derivative of $\mathbf{X}$, we can impose the following condition on $\mathbf{Q}_2$:
$$(\dot{\mathbf{X}}(t_0) = \mathbf{R}_1) \Longrightarrow (\dot{\mathbf{X}}_K(t_0) \mathbf{Q}_1 + \mathbf{X}_K(t_0) \mathbf{X}_{K-1}(t_0) \mathbf{Q}_2 = \mathbf{R}_1) \Longrightarrow (\mathbf{Q}_2 = \mathbf{X}_{K-1}^{-1}(t_0) \mathbf{X}_K^{-1}(t_0) (\mathbf{R}_1 - \dot{\mathbf{X}}_K(t_0) \mathbf{Q}_1)).$$
Continuing this process, we obtain an algebraic linear triangular matrix system from which the $\mathbf{Q}_k$ are easily found. Thus, the classical method of variation of constants for a matrix left-sided linear ODE.K can be generalized by formula (8) to the case of a system of equations of higher orders.

Corollary. The general solutions of the systems
$$\ddot x_i + A_{i1}^j \dot x_j + A_{i0}^j x_j = f_i, \qquad i, j = 1, \dots, n, \qquad \mathbf{A}_s \equiv (A_{is}^j) \in M_n^0(t), \quad s = 0, \dots, 2, \eqno(22)$$
and
$$\dddot x_i + A_{i2}^j \ddot x_j + A_{i1}^j \dot x_j + A_{i0}^j x_j = f_i \eqno(23)$$
are, respectively,
$$x_i = x_{2i}^j \int x_{1j}^l \int \bigl[ (\mathbf{X}_2 \mathbf{X}_1)^{-1} \bigr]_l^m f_m \, dt \, dt, \qquad \mathbf{X}_k \equiv (x_{ki}^j), \qquad |\mathbf{X}_k| \ne 0, \eqno(24)$$
where
$$\ddot{\mathbf{X}}_2 + \mathbf{A}_1 \dot{\mathbf{X}}_2 + \mathbf{A}_0 \mathbf{X}_2 = 0, \qquad \mathbf{X}_2 \dot{\mathbf{X}}_1 + (2 \dot{\mathbf{X}}_2 + \mathbf{A}_1 \mathbf{X}_2) \mathbf{X}_1 = 0, \eqno(25)$$
and
$$x_i = x_{3i}^j \int x_{2j}^l \int x_{1l}^m \int \bigl[ (\mathbf{X}_3 \mathbf{X}_2 \mathbf{X}_1)^{-1} \bigr]_m^p f_p \, dt \, dt \, dt,$$
where
$$\dddot{\mathbf{X}}_3 + \mathbf{A}_2 \ddot{\mathbf{X}}_3 + \mathbf{A}_1 \dot{\mathbf{X}}_3 + \mathbf{A}_0 \mathbf{X}_3 = 0, \qquad \mathbf{X}_3 \ddot{\mathbf{X}}_2 + (3 \dot{\mathbf{X}}_3 + \mathbf{A}_2 \mathbf{X}_3) \dot{\mathbf{X}}_2 + (3 \ddot{\mathbf{X}}_3 + 2 \mathbf{A}_2 \dot{\mathbf{X}}_3 + \mathbf{A}_1 \mathbf{X}_3) \mathbf{X}_2 = 0, \qquad \mathbf{X}_3 \mathbf{X}_2 \dot{\mathbf{X}}_1 + (2 \mathbf{X}_3 \dot{\mathbf{X}}_2 + 3 \dot{\mathbf{X}}_3 \mathbf{X}_2 + \mathbf{A}_2 \mathbf{X}_3 \mathbf{X}_2) \mathbf{X}_1 = 0. \eqno(26)$$
Proof. If $K = 2$ in Theorem 1, then $q = 0, 1$. For $q = 0$, we have $s = 2, 1, 0$ and $\mathbf{A}_{0,s} = \mathbf{A}_s$; this gives the matrix left-sided linear ODE.2 in (25). For $q = 1$, we have $s = 1, 0$, while $\mathbf{A}_{1,s}$ is given by (10):
$$\mathbf{A}_{1,1} = \binom{2}{2} \mathbf{A}_{0,2} \mathbf{X}_2 = \mathbf{X}_2, \qquad \mathbf{A}_{1,0} = \sum_{i=0}^{1} \binom{2-i}{1} \mathbf{A}_{0,2-i} \mathbf{X}_2^{(1-i)} = 2 \dot{\mathbf{X}}_2 + \mathbf{A}_1 \mathbf{X}_2.$$
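The reduction of order encoded in these coefficients can be checked symbolically in the scalar case $n = 1$, where (25) becomes the classical reduction of order. The concrete equation $x'' + 3x' + 2x = 0$ and the names below are illustrative, not from the paper:

```python
import sympy as sp

t = sp.symbols('t')
a1, a0 = 3, 2                    # scalar case n = 1:  x'' + 3 x' + 2 x = 0
x2 = sp.exp(-t)                  # a nondegenerate solution of the ODE.2

# reduced first-order equation from (25): x2 * u' + (2 x2' + a1 x2) * u = 0
u = sp.Function('u')
u_sol = sp.dsolve(sp.Eq(x2 * u(t).diff(t) + (2 * x2.diff(t) + a1 * x2) * u(t), 0),
                  u(t)).rhs.subs('C1', 1)
x_second = sp.simplify(x2 * sp.integrate(u_sol, t))   # x = x2 * int u dt, as in (12)

# the result is again a solution of the original equation
residual = sp.simplify(x_second.diff(t, 2) + a1 * x_second.diff(t) + a0 * x_second)
print(x_second, residual)   # proportional to exp(-2*t); residual 0
```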
If $K = 3$, then $q = 0, 1, 2$; $(q = 0) \Rightarrow (s = 3, \dots, 0)$, which yields the homogeneous equation of 3rd order in (26). For $q = 1$, we have $s = 2, \dots, 0$, and, by (10),
$$\mathbf{A}_{1,2} = \binom{3}{3} \mathbf{A}_{0,3} \mathbf{X}_3 = \mathbf{X}_3, \qquad \mathbf{A}_{1,1} = \sum_{i=0}^{1} \binom{3-i}{2} \mathbf{A}_{0,3-i} \mathbf{X}_3^{(1-i)} = 3 \dot{\mathbf{X}}_3 + \mathbf{A}_2 \mathbf{X}_3, \qquad \mathbf{A}_{1,0} = \sum_{i=0}^{2} \binom{3-i}{1} \mathbf{A}_{0,3-i} \mathbf{X}_3^{(2-i)} = 3 \ddot{\mathbf{X}}_3 + 2 \mathbf{A}_2 \dot{\mathbf{X}}_3 + \mathbf{A}_1 \mathbf{X}_3.$$
These are, indeed, the coefficients of $\mathbf{X}_2^{(s)}$ in the second matrix equation in (26). For $q = 2$, we have $s = 1, 0$, so that, by (10),
$$\mathbf{A}_{2,1} = \binom{2}{2} \mathbf{A}_{1,2} \mathbf{X}_2 = \mathbf{X}_3 \mathbf{X}_2, \qquad \mathbf{A}_{2,0} = \sum_{i=0}^{1} \binom{2-i}{1} \mathbf{A}_{1,2-i} \mathbf{X}_2^{(1-i)} = 2 \mathbf{X}_3 \dot{\mathbf{X}}_2 + (3 \dot{\mathbf{X}}_3 + \mathbf{A}_2 \mathbf{X}_3) \mathbf{X}_2.$$
This yields the matrix left-sided linear ODE.1 in (26).

The second version of the method of variation of constants for (1) arises as a generalization to the matrix case of the method of variation of constants for scalar linear ODEs of higher orders. To perform it, it is necessary to use Gauss operators, which enable one to solve linear algebraic matrix systems [3]. By a left Gauss operator we mean a linear operator $\Gamma(\mathbf{A}_i)$ defined on a sequence of nondegenerate matrices $(\mathbf{A}_i)$, $i = 1, \dots, I$, which acts on another matrix sequence $(\mathbf{Y}_j)$, $j = 1, \dots, J$, and is multiplied by the rules
$$\Gamma(\mathbf{A}_i) \mathbf{Y}_j \equiv \mathbf{Y}_j - \mathbf{A}_j (\mathbf{A}_i)^{-1} \mathbf{Y}_i, \eqno(27)$$
$$\Gamma(\mathbf{B}_k) \Gamma(\mathbf{A}_i) \mathbf{Y}_j \equiv \Gamma(\Gamma(\mathbf{A}_i) \mathbf{B}_k)(\Gamma(\mathbf{A}_i) \mathbf{Y}_j). \eqno(28)$$
In the symbols of these operators, the solution of the system of matrix left-sided linear equations
$$\mathbf{A}_i^j \mathbf{X}_j = \mathbf{B}_i, \qquad i, j = 1, \dots, K,$$
where, just as above, summation is performed over repeated superscripts and subscripts, is given by the formula
$$\mathbf{X}_i = \biggl[ \prod_{k=1}^{i-1} \Gamma(\mathbf{A}_k^k) \mathbf{A}_i^i \biggr]^{-1} \prod_{\tau=K}^{i+1} \Gamma(\mathbf{A}_\tau^\tau) \prod_{k=1}^{i-1} \Gamma(\mathbf{A}_k^k) \mathbf{B}_i \equiv \mathbf{W}_i(\mathbf{A}, \mathbf{B}), \qquad \mathbf{A} \equiv (\mathbf{A}_i^j), \quad \mathbf{B} \equiv (\mathbf{B}_i). \eqno(29)$$
This relation assumes that the system of matrix left-sided linear equations is nondegenerate, which is defined by the condition
$$d(\mathbf{A}) \equiv |D(\mathbf{A})| \equiv \biggl| \mathbf{A}_1^1 \prod_{i=2}^{K} \prod_{k=i-1}^{1} \Gamma(\mathbf{A}_{i-k}^{i-k}) \mathbf{A}_i^i \biggr| \ne 0. \eqno(30)$$
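In computational terms, the operators (27)–(28) are the elimination steps of block Gaussian elimination on a system with matrix blocks. The following sketch (the name `solve_matrix_system` and the random data are illustrative) solves $\mathbf{A}_i^j \mathbf{X}_j = \mathbf{B}_i$ by exactly this process; (29) is its closed form:

```python
import numpy as np

def solve_matrix_system(A, B):
    """Solve sum_j A[i][j] @ X_j = B[i] for square matrix blocks X_j by
    block Gaussian elimination; each elimination step mirrors the left
    Gauss operator (27): row_i <- row_i - A_i^p (A_p^p)^{-1} row_p."""
    K = len(B)
    A = [[blk.copy() for blk in row] for row in A]
    B = [blk.copy() for blk in B]
    for p in range(K):                        # forward elimination
        piv = np.linalg.inv(A[p][p])
        for i in range(p + 1, K):
            m = A[i][p] @ piv                 # multiplier block
            for j in range(p, K):
                A[i][j] -= m @ A[p][j]
            B[i] -= m @ B[p]
    X = [None] * K
    for i in reversed(range(K)):              # back substitution
        acc = B[i].copy()
        for j in range(i + 1, K):
            acc -= A[i][j] @ X[j]
        X[i] = np.linalg.inv(A[i][i]) @ acc
    return X

rng = np.random.default_rng(0)
A = [[rng.standard_normal((2, 2)) for _ in range(3)] for _ in range(3)]
B = [rng.standard_normal((2, 2)) for _ in range(3)]
X = solve_matrix_system(A, B)
res = max(np.max(np.abs(sum(A[i][j] @ X[j] for j in range(3)) - B[i]))
          for i in range(3))
print(res)  # residual near machine precision
```

Because the unknowns multiply the coefficient blocks from the right, left-multiplying a whole block row preserves the solution, which is why the left operators suffice here.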
Using the Gauss operators, we can prove the following assertion.
Theorem 2. The general solution of system (1) is $\operatorname{colon} \mathbf{X}$, where
$$\mathbf{X} = \sum_{k=1}^{K} \mathbf{X}_k \mathbf{C}_k, \qquad \sum_{s=K}^{0} \mathbf{A}_s \mathbf{X}_k^{(s)} = 0, \eqno(31)$$
$$\mathbf{C}_1 = - \int \mathbf{X}_1^{-1} \sum_{i=2}^{K} \mathbf{X}_i \dot{\mathbf{C}}_i \, dt, \qquad \mathbf{C}_l = - \int \biggl[ \prod_{m=1}^{l-1} \Gamma(\mathbf{X}_m^{(m-1)}) \mathbf{X}_l^{(l-1)} \biggr]^{-1} \sum_{i=l+1}^{K} \prod_{m=1}^{l-1} \Gamma(\mathbf{X}_m^{(m-1)}) \mathbf{X}_i^{(l-1)} \dot{\mathbf{C}}_i \, dt, \qquad \mathbf{C}_K = \int \biggl[ \prod_{m=1}^{K-1} \Gamma(\mathbf{X}_m^{(m-1)}) \mathbf{X}_K^{(K-1)} \biggr]^{-1} \mathbf{F} \, dt, \eqno(32)$$
and the $\mathbf{X}_k$ are numbered in such a way that
$$d(\mathbf{V}) \ne 0, \qquad \mathbf{V} \equiv (\mathbf{X}_k^{(q)}), \qquad q = 0, \dots, K-1, \quad k = 1, \dots, K. \eqno(33)$$
Proof. Since the solutions of system (1) and of the matrix equation (11) are related by $(x_i) = \operatorname{colon} \mathbf{X}$, it suffices to show that $\mathbf{X}$ of the form (31) is a general solution of a matrix left-sided linear ODE.K. To do this, consider the fundamental system of solutions $(\mathbf{X}_k)$ of the homogeneous equation (11), i.e., the set of $K$ solutions numbered in such a way that condition (33) holds. Suppose that $\mathbf{X}$ of the form (31) satisfies the conditions
$$\mathbf{X}^{(q)} = \sum_{k=1}^{K} \mathbf{X}_k^{(q)} \mathbf{C}_k, \qquad q = 1, \dots, K-1. \eqno(34)$$
Differentiating $\mathbf{X}$ successively $K - 2$ times, we can easily verify that this set of requirements imposes the constraints
$$\sum_{k=1}^{K} \mathbf{X}_k^{(q)} \dot{\mathbf{C}}_k = 0, \qquad q = 0, \dots, K-2. \eqno(35)$$
We substitute $\mathbf{X}$ into (11). Then
$$\sum_{s=K}^{0} \mathbf{A}_s \mathbf{X}^{(s)} = \frac{d}{dt} \mathbf{X}^{(K-1)} + \sum_{s=K-1}^{0} \mathbf{A}_s \mathbf{X}^{(s)} = \frac{d}{dt} \sum_{k=1}^{K} \mathbf{X}_k^{(K-1)} \mathbf{C}_k + \sum_{s=K-1}^{0} \mathbf{A}_s \sum_{k=1}^{K} \mathbf{X}_k^{(s)} \mathbf{C}_k = \sum_{k=1}^{K} \biggl( \sum_{s=K}^{0} \mathbf{A}_s \mathbf{X}_k^{(s)} \biggr) \mathbf{C}_k + \sum_{k=1}^{K} \mathbf{X}_k^{(K-1)} \dot{\mathbf{C}}_k = \sum_{k=1}^{K} \mathbf{X}_k^{(K-1)} \dot{\mathbf{C}}_k.$$
Thus, the following $K$ conditions (similar to those occurring in the method of variation of constants for scalar linear ODEs) are imposed on the matrices $\mathbf{C}_k$:
$$\sum_{k=1}^{K} \mathbf{X}_k^{(q)} \dot{\mathbf{C}}_k = 0, \qquad q = 0, \dots, K-2, \qquad \sum_{k=1}^{K} \mathbf{X}_k^{(K-1)} \dot{\mathbf{C}}_k = \mathbf{F}. \eqno(36)$$
By condition (33), the resulting system of matrix left-sided linear equations in the $\dot{\mathbf{C}}_k$ is nondegenerate. Thus, there exists an upper triangulator [4] in $M_{Kn}$, i.e., the diagonal operator matrix $\mathbf{T}$ with elements $T_i^i$:
$$T_1^1 = \mathbf{E}, \qquad T_i^i = \prod_{k=i-1}^{1} \Gamma(\mathbf{X}_k^{(k-1)}), \qquad i = 2, \dots, K, \eqno(37)$$
which brings system (36) to the upper-triangular form
$$\sum_{k=1}^{K} \mathbf{X}_k \dot{\mathbf{C}}_k = 0, \qquad \sum_{k=i}^{K} \prod_{l=1}^{i-1} \Gamma(\mathbf{X}_l^{(l-1)}) \mathbf{X}_k^{(i-1)} \dot{\mathbf{C}}_k = 0, \quad i = 2, \dots, K-1, \qquad \prod_{l=1}^{K-1} \Gamma(\mathbf{X}_l^{(l-1)}) \mathbf{X}_K^{(K-1)} \dot{\mathbf{C}}_K = \mathbf{F}.$$
If each of the equations of this system is solved for the $\dot{\mathbf{C}}_k$ with least index, which is admissible by condition (33), then
$$(\dot{\mathbf{C}}_k \equiv \mathbf{S}_k) \Longrightarrow \mathbf{S}_l = - \biggl[ \prod_{m=1}^{l-1} \Gamma(\mathbf{X}_m^{(m-1)}) \mathbf{X}_l^{(l-1)} \biggr]^{-1} \sum_{i=l+1}^{K} \prod_{m=1}^{l-1} \Gamma(\mathbf{X}_m^{(m-1)}) \mathbf{X}_i^{(l-1)} \mathbf{S}_i, \qquad l = 2, \dots, K-1,$$
$$\mathbf{S}_1 = - \mathbf{X}_1^{-1} \sum_{i=2}^{K} \mathbf{X}_i \mathbf{S}_i, \qquad \mathbf{S}_K = \biggl[ \prod_{m=1}^{K-1} \Gamma(\mathbf{X}_m^{(m-1)}) \mathbf{X}_K^{(K-1)} \biggr]^{-1} \mathbf{F}.$$
This yields (32). To verify the generality of the resulting solution, we can write $\mathbf{C}_k$ as
$$\mathbf{C}_k = \mathbf{J}_k(t) - \mathbf{J}_k(t_0) + \mathbf{Q}_k, \qquad \mathbf{J}_k(t) \equiv \int \mathbf{S}_k \, dt, \qquad \dot{\mathbf{Q}}_k = 0.$$
Then
$$\mathbf{X} = \sum_{k=1}^{K} \mathbf{X}_k (\mathbf{J}_k(t) - \mathbf{J}_k(t_0) + \mathbf{Q}_k),$$
and the set of initial conditions $\mathbf{X}^{(q)}(t_0) = \mathbf{R}_q$ ($\dot{\mathbf{R}}_q = 0$) is written as
$$\sum_{k=1}^{K} \mathbf{X}_k^{(q)}(t_0) \mathbf{Q}_k = \mathbf{R}_q - \frac{d^q}{dt^q} \sum_{k=1}^{K} \mathbf{X}_k \mathbf{J}_k(t) \bigg|_{t=t_0}.$$
By (33), this algebraic system is always solvable.

Remark. As in the scalar case, by introducing the unknowns $\mathbf{Y}_k \equiv \mathbf{X}^{(k-1)}$, any matrix left-sided linear ODE.K can be reduced to the system of matrix left-sided linear ODEs.1
$$\frac{d}{dt} \begin{pmatrix} \mathbf{Y}_1 \\ \mathbf{Y}_2 \\ \vdots \\ \mathbf{Y}_K \end{pmatrix} = \begin{pmatrix} 0 & \mathbf{E} & 0 & \dots & 0 \\ 0 & 0 & \mathbf{E} & \dots & 0 \\ \dots & \dots & \dots & \dots & \dots \\ -\mathbf{A}_0 & -\mathbf{A}_1 & -\mathbf{A}_2 & \dots & -\mathbf{A}_{K-1} \end{pmatrix} \begin{pmatrix} \mathbf{Y}_1 \\ \mathbf{Y}_2 \\ \vdots \\ \mathbf{Y}_K \end{pmatrix} + \begin{pmatrix} 0 \\ 0 \\ \vdots \\ \mathbf{F} \end{pmatrix}. \eqno(38)$$
If the $\mathbf{Y}_{k0}^l$ are $K$ solutions of the homogeneous system (38), then from $\mathbf{Y}_k = \mathbf{Y}_{k0}^l \mathbf{V}_l$ we obtain
$$\mathbf{Y}_{k0}^l \dot{\mathbf{V}}_l = \mathbf{F}_k, \qquad \mathbf{F}_k = 0, \quad k = 1, \dots, K-1, \qquad \mathbf{F}_K = \mathbf{F}.$$
Since $\mathbf{Y}_{k0}^l \equiv \mathbf{X}_l^{(k-1)}$, the last system can be written as
$$\sum_{l=1}^{K} \mathbf{X}_l^{(k-1)} \dot{\mathbf{V}}_l = \mathbf{F}_k.$$
Under the substitution $(l, k, \mathbf{V}) \to (k, q, \mathbf{C})$, this system takes the form (36). Therefore, the additional version of the method of variation of constants for Eq. (11), which is a consequence of the method of variation for the equivalent system of matrix left-sided linear ODEs.1, coincides with what Theorem 2 yields.
Vol. 76
No. 1
2004
INTEGRABLE SYSTEMS OF LINEAR ODEs OF GENERAL ORDER
57
Just as in the case of Theorem 1, we shall illustrate the significance and notation of Theorem 2.

Corollary. The general solutions of Eqs. (22) and (23) are $\operatorname{colon} \mathbf{X}$, where
$$\mathbf{X} = - \mathbf{X}_1 \int \mathbf{X}_1^{-1} \mathbf{X}_2 [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2]^{-1} \mathbf{F} \, dt + \mathbf{X}_2 \int [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2]^{-1} \mathbf{F} \, dt, \eqno(39)$$
$$\ddot{\mathbf{X}}_k + \mathbf{A}_1 \dot{\mathbf{X}}_k + \mathbf{A}_0 \mathbf{X}_k = 0, \qquad k = 1, 2, \qquad |\mathbf{X}_1 \, \Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2| \ne 0,$$
and
$$\mathbf{X} = - \mathbf{X}_1 \int \mathbf{X}_1^{-1} \{ \mathbf{X}_2 [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2]^{-1} [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_3] - \mathbf{X}_3 \} [\Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3]^{-1} \mathbf{F} \, dt - \mathbf{X}_2 \int [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2]^{-1} [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_3] [\Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3]^{-1} \mathbf{F} \, dt + \mathbf{X}_3 \int [\Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3]^{-1} \mathbf{F} \, dt, \eqno(40)$$
$$\dddot{\mathbf{X}}_k + \mathbf{A}_2 \ddot{\mathbf{X}}_k + \mathbf{A}_1 \dot{\mathbf{X}}_k + \mathbf{A}_0 \mathbf{X}_k = 0, \qquad k = 1, \dots, 3,$$
where
$$\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_i \equiv \dot{\mathbf{X}}_i - \dot{\mathbf{X}}_1 \mathbf{X}_1^{-1} \mathbf{X}_i, \qquad i = 2, 3,$$
$$\Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3 \equiv \Gamma(\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2)(\Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3) = \ddot{\mathbf{X}}_3 - \ddot{\mathbf{X}}_1 \mathbf{X}_1^{-1} \mathbf{X}_3 - (\ddot{\mathbf{X}}_2 - \ddot{\mathbf{X}}_1 \mathbf{X}_1^{-1} \mathbf{X}_2)(\dot{\mathbf{X}}_2 - \dot{\mathbf{X}}_1 \mathbf{X}_1^{-1} \mathbf{X}_2)^{-1} (\dot{\mathbf{X}}_3 - \dot{\mathbf{X}}_1 \mathbf{X}_1^{-1} \mathbf{X}_3).$$
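Formula (39) can be verified symbolically in the scalar case $n = 1$, where it reduces to the classical variation of constants. The concrete equation $x'' + 3x' + 2x = 1$ is illustrative, not from the paper:

```python
import sympy as sp

t = sp.symbols('t')
a1, a0, f = 3, 2, 1                 # scalar case n = 1:  x'' + 3 x' + 2 x = 1
x1, x2 = sp.exp(-t), sp.exp(-2*t)   # independent homogeneous solutions

g = sp.simplify(x2.diff(t) - x1.diff(t) / x1 * x2)   # Gamma(x1) x2' by (27)
X = (- x1 * sp.integrate(x2 / x1 / g * f, t)
     + x2 * sp.integrate(f / g, t))                  # formula (39) for n = 1
residual = sp.simplify(X.diff(t, 2) + a1 * X.diff(t) + a0 * X - f)
print(sp.simplify(X), residual)   # the particular solution 1/2; residual 0
```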
Proof. If we set $K = 2$ in Theorem 2, then the general solution of system (1) of the form (22) can be found as $\operatorname{colon} \mathbf{X}$, $\mathbf{X} = \mathbf{X}_1 \mathbf{C}_1 + \mathbf{X}_2 \mathbf{C}_2$, where
$$\ddot{\mathbf{X}}_k + \mathbf{A}_1 \dot{\mathbf{X}}_k + \mathbf{A}_0 \mathbf{X}_k = 0, \qquad |\mathbf{X}_1 \, \Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2| \ne 0,$$
while the $\mathbf{C}_k$ are determined by relations (32). The first of these relations degenerates into
$$\mathbf{C}_1 = - \int \mathbf{X}_1^{-1} \mathbf{X}_2 \dot{\mathbf{C}}_2 \, dt,$$
and the second one takes the form
$$\mathbf{C}_2 = \int [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2]^{-1} \mathbf{F} \, dt.$$
Condition (33) means that the matrices $\mathbf{X}_1$ and $\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2$ are nondegenerate, so that the $\mathbf{C}_k$ are defined in the part of $T$ where this holds. Differentiating $\mathbf{C}_2$ and substituting $\dot{\mathbf{C}}_2$ and the $\mathbf{C}_k$ into $\mathbf{X}$, we obtain (39). Moreover, formula (27) yields
$$\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2 = \dot{\mathbf{X}}_2 - \dot{\mathbf{X}}_1 \mathbf{X}_1^{-1} \mathbf{X}_2.$$
The solution of Eq. (23) can be expressed in terms of the solutions of homogeneous matrix left-sided linear ODEs.3 as $\operatorname{colon} \mathbf{X}$:
$$\mathbf{X} = \sum_{k=1}^{3} \mathbf{X}_k \mathbf{C}_k, \qquad \sum_{s=3}^{0} \mathbf{A}_s \mathbf{X}_k^{(s)} = 0, \qquad |\mathbf{X}_1 \, \Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2 \, \Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3| \ne 0.$$
Moreover, formulas (32) can be written as
$$\mathbf{C}_1 = - \int \mathbf{X}_1^{-1} (\mathbf{X}_2 \dot{\mathbf{C}}_2 + \mathbf{X}_3 \dot{\mathbf{C}}_3) \, dt, \qquad \mathbf{C}_2 = - \int [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2]^{-1} [\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_3] \dot{\mathbf{C}}_3 \, dt, \qquad \mathbf{C}_3 = \int [\Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3]^{-1} \mathbf{F} \, dt.$$
Differentiating $\mathbf{C}_3$ and substituting $\dot{\mathbf{C}}_3$ into $\mathbf{C}_2$, we obtain the right factor in formula (40). If we substitute $\dot{\mathbf{C}}_2$ and $\dot{\mathbf{C}}_3$ into $\mathbf{C}_1$, then, by identical transformations, we can find the coefficient of $\mathbf{X}_1$ in (40). By the definition (28), we have
$$\Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3 = \Gamma(\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2)(\Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3).$$
If we use (27) to write out $\Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_2$ and $\Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3$ and introduce the sequences of matrices
$$\mathbf{A}_i = \Gamma(\mathbf{X}_1) \dot{\mathbf{X}}_i, \qquad \mathbf{B}_i = \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_i, \qquad i = 2, 3,$$
then the definition (28) of the product of Gauss operators allows us to express $\Gamma(\dot{\mathbf{X}}_2) \Gamma(\mathbf{X}_1) \ddot{\mathbf{X}}_3$ in the form indicated above.

3. Sufficient conditions for the solvability in quadratures of systems of the form (1) are by far more difficult to determine than in the case of specific equations. This is related to the fact that, in addition to the problems of integrating the scalar linear ODEs.K generalized by systems (1), one encounters specific problems involving the matrix equation (11). Only under very rigid constraints on the parameters of a matrix left-sided linear ODE.K can its solution be reduced to the solution of a set of scalar equations of higher orders. An example of such constraints is, obviously, the condition that the matrices $\mathbf{A}_s$ be diagonal. The generalization of this case yields the following assertion.

Proposition 3. If the matrices $\mathbf{A}_s$ in system (1) over the complex field belong to a solvable Lie algebra, then the solution of the system can be reduced to the solution of homogeneous linear ODEs.K
$$\sum_{s=K}^{0} a_{is} y_i^{(s)} = 0, \qquad a_{is} \in C^0(t). \eqno(41)$$
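The simultaneous triangularization underlying Proposition 3 can be illustrated numerically in the simplest solvable case, a commuting family (an abelian, hence solvable, algebra); Lie's theorem extends this to general solvable algebras over the complex field. The data below are illustrative:

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
B = A @ A + 2.0 * A + np.eye(3)          # a polynomial in A, so [A, B] = 0

T, Z = schur(A, output='complex')         # Z^H A Z = T is upper triangular
B_t = Z.conj().T @ B @ Z                  # the same basis triangularizes B
print(np.max(np.abs(np.tril(B_t, -1))))  # strictly lower part vanishes
```

With all coefficients simultaneously triangular, the transformed system couples each unknown only to those already solved for, which is exactly what reduces (1) to the scalar equations (41).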
Proof. If all the $\mathbf{A}_s$ belong to a solvable Lie algebra $L_N$ ($U_{N-K} = \varnothing$), then, by Lie's theorem [4, p. 62 of the Russian translation], there exists a nondegenerate matrix $\mathbf{T}$ such that the similarity transformation $\mathbf{T} \mathbf{A}_s \mathbf{T}^{-1}$ brings all of them simultaneously to triangular form. Thus, multiplying (1) on the left by $\mathbf{T}$, we obtain the system
$$\sum_{s=K}^{0} (\mathbf{T} \mathbf{A}_s \mathbf{T}^{-1})_i^j y_j^{(s)} = T_i^j f_j, \qquad y_j \equiv T_j^k x_k, \qquad \dot T_i^j = 0, \qquad \mathbf{T} \equiv (T_i^j).$$
If $(\mathbf{T} \mathbf{A}_s \mathbf{T}^{-1})_i^i = a_{is}$, then the resulting relation is equivalent to the system
$$\sum_{s=K}^{0} a_{is} y_i^{(s)} = T_i^j f_j - \sum_{s=K}^{0} (\mathbf{T} \mathbf{A}_s \mathbf{T}^{-1})_i^q y_q^{(s)},$$
where $q$, depending on the form of triangularity, is either less or greater than $i$. Thus, the method of variation of constants allows us to reduce the solution of system (1) to the solution of the set of Eqs. (41).

The noncommutativity of matrix multiplication essentially complicates the procedure of integrating matrix left-sided linear ODEs.K, often leading to a number of unsolved problems of matrix theory. Nevertheless, some methods of the classical theory of ODEs can be generalized to the matrix case. This allows us to formulate additional conditions for the solvability of systems of the form (1). Thus, the theorem on the factorization of matrix left-sided linear ODEs.K given in [5] allows us to state the following proposition.

Proposition 4. If in system (1) $\mathbf{A}_s = \mathbf{P}_{K,s}$ and the matrices $\mathbf{P}_{k,s}$, $k = 1, \dots, K$, are determined inductively in the form
$$\mathbf{P}_{k,k} = \mathbf{E}, \qquad \mathbf{P}_{k,k-1} = \mathbf{B}_k + \mathbf{P}_{k-1,k-2}, \qquad \mathbf{P}_{k,j} = \dot{\mathbf{P}}_{k-1,j} + \mathbf{B}_k \mathbf{P}_{k-1,j} + \mathbf{P}_{k-1,j-1}, \quad j = 1, \dots, K-2, \qquad \mathbf{P}_{k,0} = \dot{\mathbf{P}}_{k-1,0} + \mathbf{B}_k \mathbf{P}_{k-1,0}, \qquad \mathbf{P}_{1,0} = \mathbf{B}_1, \eqno(42)$$
where $\mathbf{B}_k = b_k^\alpha \mathbf{E}_\alpha$, $\alpha = 1, \dots, M$, $b_k^\alpha \in C^k(t)$, $\|\mathbf{B}_k\| < \infty$, and the $\mathbf{E}_\alpha$ are constant matrices of the basis of the $M$-dimensional radical $R_M$ of some representation in $M_n$ of the Lie algebra $L_N$, then the general solution of system (1) is $\operatorname{colon} \mathbf{X}$:
$$\mathbf{X} = \mathbf{Y}_K \int \mathbf{Y}_K^{-1} \mathbf{Y}_{K-1} \cdots \int \mathbf{Y}_2^{-1} \mathbf{Y}_1 \int \mathbf{Y}_1^{-1} \mathbf{F} \, dt \cdots dt, \eqno(43)$$
where $\mathbf{Y}_k = \exp(y_k^\alpha \mathbf{E}_\alpha)$, $\mathbf{Y}_k(0) = \mathbf{E}$, while the $y_k^\alpha$ are solutions in quadratures of the system
$$\dot y_k^\beta = - b_k^\gamma \biggl[ \sum_{n=0}^{\infty} \frac{1}{(n+1)!} (y_k^\alpha C_{\alpha\beta}^\gamma)^n \biggr]^{-1}, \qquad y_k^\alpha(0) = 0. \eqno(44)$$
Proof. It was shown in [6] that the matrix $\mathbf{X}$ from (43) is a solution of the matrix left-sided linear ODE.K (11) with $\mathbf{A}_s$ of the form (42). Therefore, the solution of system (1) is any column of this matrix. To verify its generality, we write (43) as
$$\mathbf{X} = \mathbf{Y}_K \biggl[ \int_0^{t} \mathbf{Y}_K^{-1} \mathbf{Y}_{K-1} \biggl[ \int_0^{t_1} \cdots \biggl[ \int_0^{t_{K-2}} \mathbf{Y}_2^{-1} \mathbf{Y}_1 \biggl[ \int_0^{t_{K-1}} \mathbf{Y}_1^{-1} \mathbf{F} \, dt_K + \mathbf{Q}_1 \biggr] dt_{K-1} + \mathbf{Q}_2 \biggr] \cdots \biggr] dt_1 + \mathbf{Q}_K \biggr].$$
Then the relations $\mathbf{X}^{(q)}(0) = \mathbf{R}_q$ ($\dot{\mathbf{R}}_q = 0$) allow us to successively express the $\mathbf{Q}_k$ explicitly in terms of the $\mathbf{R}_q$ and the $\mathbf{B}_k^{(q)}(0)$. So $(\mathbf{X}(0) = \mathbf{R}_0) \Rightarrow (\mathbf{R}_0 = \mathbf{Q}_K)$,
$$(\dot{\mathbf{X}}(0) = \mathbf{R}_1) \Longrightarrow (\mathbf{B}_K(0) \mathbf{Q}_K + \mathbf{Q}_{K-1} = \mathbf{R}_1) \Longrightarrow (\mathbf{Q}_{K-1} = \mathbf{R}_1 - \mathbf{B}_K(0) \mathbf{R}_0),$$
$$(\ddot{\mathbf{X}}(0) = \mathbf{R}_2) \Longrightarrow (\mathbf{R}_2 = (\dot{\mathbf{B}}_K(0) + \mathbf{B}_K^2(0)) \mathbf{Q}_K + \mathbf{B}_{K-1}(0) \mathbf{Q}_{K-1} + \mathbf{Q}_{K-2}) \Longrightarrow (\mathbf{Q}_{K-2} = \mathbf{R}_2 - (\dot{\mathbf{B}}_K(0) + \mathbf{B}_K^2(0)) \mathbf{R}_0 - \mathbf{B}_{K-1}(0)(\mathbf{R}_1 - \mathbf{B}_K(0) \mathbf{R}_0)), \quad \dots$$
It should be noted that the matrices $\mathbf{Y}_k$ can be found not only in exponential but also in multiplicative exponential form [6]:
$$\mathbf{Y}_k = \prod_{\alpha=1}^{M} \exp(g_k^\alpha \mathbf{E}_\alpha), \qquad \mathbf{Y}_k(0) = \mathbf{E}, \qquad \dot g_k^\alpha = - b_k^\beta \biggl[ \prod_{\gamma=1}^{\alpha-1} \exp(g_k^\gamma C_{\gamma\alpha}^\beta) \biggr]^{-1}, \qquad g_k^\alpha(0) = 0;$$
this form is easier to use in a number of cases. The cases $K = 2, 3$ for matrix left-sided linear ODEs.K were studied in [5], where an example was also presented.

Conditions for the solvability of systems of the form (1) which are more general than those in Proposition 4 can be established in the study of matrix left-sided linear ODEs.K equivalent to triangulated systems of matrix equations [7]. To obtain such constraints, it is necessary to use formulas for the solution of left- and right-sided systems of algebraic linear equations. The first of these are of the form (29), and the second ones are solved by similar formulas containing right Gauss operators [3]. In view of the awkwardness of the expressions for such constraints, let us show their form using a second-order system as an example.

Proposition 5. If in system (22) $f_i = 0$ and the matrices $\mathbf{A}_s$ are of the form
$$\mathbf{A}_1 = -(\dot{\mathbf{B}}_1^2 + \mathbf{B}_1^1 \mathbf{B}_1^2 + \mathbf{B}_1^2 \mathbf{B}_2^2)(\mathbf{B}_1^2)^{-1}, \qquad \mathbf{A}_0 = -\bigl( \mathbf{A}_1 \mathbf{B}_1^1 + \dot{\mathbf{B}}_1^1 + (\mathbf{B}_1^1)^2 + \mathbf{B}_1^2 \mathbf{B}_2^1 \bigr), \eqno(45)$$
where $\mathbf{B}_k^l \in M_n^1(t)$, $k, l = 1, 2$, $|\mathbf{B}_1^2| \ne 0$,
$$\mathbf{B}_1^1 = [(\mathbf{T}_1^1 \mathbf{C}_1^1 + \mathbf{T}_1^2 \mathbf{C}_2^1)(\mathbf{E} + (\mathbf{T}_1^1)^{-1} \mathbf{T}_1^2 \mathbf{H}^{-1} \mathbf{T}_2^1) - \mathbf{T}_1^2 \mathbf{C}_2^2 \mathbf{H}^{-1} \mathbf{T}_2^1](\mathbf{T}_1^1)^{-1},$$
$$\mathbf{B}_1^2 = -(\mathbf{T}_1^1 \mathbf{C}_1^1 + \mathbf{T}_1^2 \mathbf{C}_2^1)(\mathbf{T}_1^1)^{-1} \mathbf{T}_1^2 \mathbf{H}^{-1} + \mathbf{T}_1^2 \mathbf{C}_2^2 \mathbf{H}^{-1}, \eqno(46)$$
$$\mathbf{B}_2^1 = (\mathbf{T}_2^1 \mathbf{C}_1^1 + \mathbf{T}_2^2 \mathbf{C}_2^1)[\mathbf{E} + (\mathbf{T}_1^1)^{-1} \mathbf{T}_1^2 \mathbf{H}^{-1} \mathbf{T}_2^1](\mathbf{T}_1^1)^{-1} - \mathbf{T}_2^2 \mathbf{C}_2^2 \mathbf{H}^{-1} \mathbf{T}_2^1,$$
$$\mathbf{B}_2^2 = -(\mathbf{T}_2^1 \mathbf{C}_1^1 + \mathbf{T}_2^2 \mathbf{C}_2^1)(\mathbf{T}_1^1)^{-1} \mathbf{T}_1^2 \mathbf{H}^{-1} + \mathbf{T}_2^2 \mathbf{C}_2^2 \mathbf{H}^{-1},$$
where $\mathbf{H} = \Gamma(\mathbf{T}_1^1) \mathbf{T}_2^2$, $|\mathbf{H}| \ne 0$, $\dot{\mathbf{T}}_k^l = 0$, $\mathbf{C}_k^k = c_k^\alpha \mathbf{E}_\alpha$, and the $\mathbf{E}_\alpha$ form the basis of the radical $R_K$ of the Lie algebra $L_N$ over $M_n$, then the solution of this system is $\operatorname{colon}(\mathbf{T}_1^l \mathbf{Y}_l)$, where
$$\mathbf{Y}_1 \equiv \mathbf{Y}_{10} \mathbf{K}_1, \qquad \dot{\mathbf{K}}_1 = 0, \qquad \mathbf{Y}_2 = \mathbf{Y}_{20} \int \mathbf{Y}_{20}^{-1} \mathbf{C}_2^1 \mathbf{Y}_1 \, dt, \qquad \mathbf{Y}_{k0} = \exp(y_k^\alpha \mathbf{E}_\alpha),$$
while the $y_k^\alpha$ are solutions in quadratures of system (44) in which the substitution $(b_k^\alpha) \to (-c_k^\alpha)$ was performed.

Proof. Consider the system of homogeneous matrix left-sided linear ODEs.1
$$\dot{\mathbf{X}}_k = \mathbf{B}_k^l \mathbf{X}_l, \qquad \mathbf{B}_k^l \in M_n^1(t), \qquad k, l = 1, 2, \qquad |\mathbf{B}_k^{\bar k}| \ne 0, \qquad k \ne \bar k = 1, 2. \eqno(47)$$
This system is equivalent to two matrix equations of second order
$$\ddot{\mathbf{X}}_k + \mathbf{A}_k^1 \dot{\mathbf{X}}_k - (\mathbf{A}_k^1 \mathbf{B}_k^k + \dot{\mathbf{B}}_k^k + \mathbf{B}_k^l \mathbf{B}_l^k) \mathbf{X}_k = 0, \qquad \mathbf{A}_k^1 = -(\dot{\mathbf{B}}_k^{\bar k} + \mathbf{B}_k^k \mathbf{B}_k^{\bar k} + \mathbf{B}_k^{\bar k} \mathbf{B}_{\bar k}^{\bar k})(\mathbf{B}_k^{\bar k})^{-1}. \eqno(48)$$
For $k = 1$, the coefficients of the matrix left-sided linear ODE.2 take the form (45). Thus, by Proposition 2 and Theorems 1 and 2, the integrability of system (1) with the above-mentioned constraints on the $\mathbf{A}_s$ is guaranteed by the solvability of system (47). However, if in the latter system we make the substitution $\mathbf{X}_k = \mathbf{T}_k^l \mathbf{Y}_l$, where $d(\mathbf{T}) \ne 0$, $\dot{\mathbf{T}} \equiv (\dot{\mathbf{T}}_k^l) = 0$, and $\mathbf{Y}_l$ satisfies the matrix system $\dot{\mathbf{Y}}_l = \mathbf{C}_l^k \mathbf{Y}_k$ in which $\mathbf{C}_1^2 = 0$ and the $\mathbf{C}_k^k$ belong to $R_K$, then the $\mathbf{Y}_k$ can be obtained in quadratures. The second equation in this system, $\dot{\mathbf{Y}}_2 = \mathbf{C}_2^1 \mathbf{Y}_1 + \mathbf{C}_2^2 \mathbf{Y}_2$, has the solution
$$\mathbf{Y}_2 = \mathbf{Y}_{20} \int \mathbf{Y}_{20}^{-1} \mathbf{C}_2^1 \mathbf{Y}_1 \, dt,$$
in which $\dot{\mathbf{Y}}_{20} = \mathbf{C}_2^2 \mathbf{Y}_{20}$. If $\mathbf{Y}_{k0}$ is expressed in the exponential form $\mathbf{Y}_{k0} = \exp(y_k^\alpha \mathbf{E}_\alpha)$, then the $y_k^\alpha$ are subject to the requirements of the nonlinear system (44), where the functions $b_k^\alpha$ must be replaced by $-c_k^\alpha$ [6]. For $R_K$, this system is solvable in quadratures. Thus, the integrability of system (47) is guaranteed by the requirement $\mathbf{B}_k^l \mathbf{T}_l^m = \mathbf{T}_k^l \mathbf{C}_l^m$ following from (47) after the substitution of the matrices $\mathbf{T}_k^l \mathbf{Y}_l$ for the $\mathbf{X}_k$. Since relations (46) represent the solution of this last algebraic system by means of the Frobenius formulas [8, p. 59], it follows that conditions (46) ensure the triangulation and integrability of system (47). Therefore, $\operatorname{colon} \mathbf{X}_1$ is a solution of the homogeneous system (22).

It should be noted that the definition of the $\mathbf{B}_k^l$, up to numbering, allows us to consider both forms of the matrix left-sided linear ODE.2 (48) and both forms of the triangularity of the matrix $\mathbf{C} = (\mathbf{C}_k^l)$ in the assertion of Proposition 5. The example from [7], written out componentwise, can serve as an illustration of Proposition 5. A generalization of this result to the case of systems of higher order can be stated in the symbols of the left and right Gauss operators [3].

REFERENCES

1. L. S. Pontryagin, Ordinary Differential Equations [in Russian], Nauka, Moscow, 1970.
2. J. Riordan, Combinatorial Identities, Krieger Publ., Huntington, NY, 1979; Russian translation: Nauka, Moscow, 1982.
3. V. P. Derevenskii, "Solution of systems of matrix algebraic linear equations," Izv. Vyssh. Uchebn. Zaved. Mat. [Russian Math. (Iz. VUZ)] (1998), no. 10 (437), 32–36.
4. N. Jacobson, Lie Algebras, Interscience Publ., New York–London, 1962; Russian translation: Mir, Moscow, 1964.
5. V. P. Derevenskii, "Matrix linear differential equations of higher orders," Differentsial'nye Uravneniya [Differential Equations], 29 (1993), no. 4, 711–714.
6. V. P. Derevenskii, "The exponential solution of matrix linear differential equations of first order," Izv. Vyssh. Uchebn. Zaved. Mat. [Soviet Math. (Iz. VUZ)] (1981), no. 7, 31–33.
7. V. P. Derevenskii, "Systems of matrix linear differential equations of first order," Mat. Zametki [Math. Notes], 66 (1999), no. 1, 63–75.
8. F. R. Gantmakher (F. R. Gantmacher), The Theory of Matrices [in Russian], Nauka, Moscow, 1967.

Kazan State Academy of Architecture and Construction