J. Appl. Math. Comput. DOI 10.1007/s12190-015-0970-y

ORIGINAL RESEARCH

New results on condensed Cramer's rule for the general solution to some restricted quaternion matrix equations

Guang-Jing Song¹ · Chang-Zhou Dong²

Received: 30 September 2015
© Korean Society for Computational and Applied Mathematics 2015
Abstract In this paper, we derive some condensed Cramer's rules for the general solution, the least squares solution and the least norm solution of some restricted quaternion matrix equations, respectively. The findings of this paper extend some known results in the literature.

Keywords Quaternion matrix · Cramer's rule · Generalized inverse A^{(2)}_{T,S} · Determinant

Mathematics Subject Classification 15A09 · 15A24
1 Introduction

Throughout, we denote the real number field by R, the set of all m × n matrices over the quaternion algebra H = {a0 + a1 i + a2 j + a3 k | i² = j² = k² = ijk = −1, a0, a1, a2, a3 ∈ R} by H^{m×n}, and the identity matrix of appropriate size by I. For A ∈ H^{m×n}, the symbols A*, R_r(A) (R_l(A)) and N_r(A) (N_l(A)) stand for the conjugate transpose, the right (left) range space and the right (left) null space of A, respectively.
✉ Guang-Jing Song, [email protected]

1 School of Mathematics and Information Sciences, Weifang University, Weifang 261061, People's Republic of China
2 School of Mathematics and Science, Shijiazhuang University of Economics, Shijiazhuang 050031, People's Republic of China
The Moore–Penrose inverse of A, denoted by A†, is the unique matrix X ∈ H^{n×m} satisfying the Penrose equations

(1) AXA = A, (2) XAX = X, (3) (AX)* = AX, (4) (XA)* = XA.

Any matrix X satisfying equation (1) is called an inner inverse of A and is often denoted by A⁻. L_A and R_A stand for the two projectors L_A = I − A†A and R_A = I − AA† induced by A. If L is a subspace of H^n, the orthogonal projector onto L is denoted by P_L = P_{L,L⊥}, where L⊥ is the orthogonal complement of L.

Quaternions were discovered by Sir William R. Hamilton in 1843, the first algebraic system in mathematics forming a skew field, that is, a noncommutative division algebra [1–4]. Nowadays quaternions are not only part of contemporary mathematics but are also widely and heavily used in rotation theory, signal and color image processing, and so on (see, e.g. [5–8]). The most important property of quaternions is that every unit quaternion represents a rotation, which plays a special role in the study of rotations of three-dimensional vector spaces. Gupta [8] presented an application of linear quaternion equations to spacecraft attitude propagation. Moreover, in real-life problems one needs to deal with constant-coefficient quaternion differential equations; such differential equations can be transformed into linear quaternion equations, and their initial conditions become restrictions on the corresponding linear equations. It is therefore important and interesting to investigate solutions of linear quaternion matrix equations under restrictions. In 1970, S. M. Robinson [9] gave an elegant proof of Cramer's rule over the complex number field. Since then, Cramer's rules for generalized inverses and for solutions of various restricted equations have been studied by many authors [10–17].
However, the existing Cramer's rules cannot be generalized to the quaternion skew field directly, since the multiplication of quaternions is not commutative and the determinant of a quaternion matrix differs from that of a complex matrix. Kyrchei [19–23] proved Cramer's rules for the unique solution and the minimum norm least squares solution of some quaternion matrix equations within the framework of the theory of row and column determinants given in [24]. Motivated by this work, we [25,26] derived Cramer's rules for the unique solution of the following restricted quaternion matrix equations:

$$AXB=C,\quad R_r(X)\subset T_1,\ N_r(X)\supset S_1,\tag{1.1}$$

$$AXB=C,\quad R_l(X)\subset T_2,\ N_l(X)\supset S_2,\tag{1.2}$$

by the row–column determinantal expressions of the generalized inverses A^{(2)}_{rT_1,S_1} and A^{(2)}_{lT_2,S_2} with prescribed right (left) range space T1 (T2) and right (left) null space S1 (S2), respectively. However, if the solutions of the restricted equations (1.1) and (1.2) are not unique, or the two equations are not consistent, the general solution and the least squares solution cannot be obtained from the existing Cramer's rules.
In this paper, we consider condensed Cramer's rules for the general solution, the least squares solution and the least norm solution of (1.1)–(1.2), respectively. The paper is organized as follows. In Sect. 2, we recall some basic concepts and results about the row and column determinants of a square matrix over the quaternion skew field. In Sect. 3, when (1.1) and (1.2) are consistent, we derive condensed Cramer's rules for the unique solution and the general solution, respectively; as applications, we give determinantal expressions of A^{(1,3)}- and A^{(1,4)}-inverses. In Sect. 4, we show a set of Cramer's rules for the least squares solution, the least norm solution and the best approximate solution of the restricted quaternion matrix equations (1.1) and (1.2), respectively; some of these results are new even in the complex matrix case. In Sect. 5, we give a numerical example illustrating the main results. To conclude, in Sect. 6 we propose some further research topics.
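Before turning to the preliminaries, the defining relations of H are easy to check numerically. The following minimal Python sketch (a helper of our own, not part of the paper) implements the Hamilton product on coefficient vectors and verifies i² = j² = k² = ijk = −1 together with the noncommutativity ij = k ≠ ji = −k that drives everything below:

```python
import numpy as np

# A quaternion a0 + a1*i + a2*j + a3*k is stored as a length-4 real vector.
def qmul(p, q):
    """Hamilton product of two quaternions (noncommutative)."""
    a0, a1, a2, a3 = p
    b0, b1, b2, b3 = q
    return np.array([
        a0*b0 - a1*b1 - a2*b2 - a3*b3,
        a0*b1 + a1*b0 + a2*b3 - a3*b2,
        a0*b2 - a1*b3 + a2*b0 + a3*b1,
        a0*b3 + a1*b2 - a2*b1 + a3*b0,
    ])

one = np.array([1., 0, 0, 0])
i, j, k = np.array([0., 1, 0, 0]), np.array([0., 0, 1, 0]), np.array([0., 0, 0, 1])

print(qmul(i, j))           # equals k:  ij = k
print(qmul(j, i))           # equals -k: ji = -k, so ij != ji
print(qmul(qmul(i, j), k))  # equals -1: ijk = -1
```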
2 Preliminaries

Unlike multiplication of real or complex numbers, multiplication of quaternions is not commutative. Many authors [27–33] have tried to define the determinant of a quaternion matrix; unfortunately, none of these definitions makes it possible to give a determinantal representation of an inverse matrix. In 2008, Kyrchei [24] defined the row and column determinants of a square matrix over the quaternion skew field, and by these new definitions he also derived determinantal representations of the inverse matrix. Suppose S_n is the symmetric group on the set I_n = {1, ..., n}.

Definition 2.1 (Definitions 2.4–2.5 of [24]) (1) The ith row determinant of A = (a_ij) ∈ H^{n×n} is defined by

$$\operatorname{rdet}_i A=\sum_{\sigma\in S_n}(-1)^{n-r}a_{ii_{k_1}}a_{i_{k_1}i_{k_1+1}}\cdots a_{i_{k_1+l_1}i}\cdots a_{i_{k_r}i_{k_r+1}}\cdots a_{i_{k_r+l_r}i_{k_r}}$$

for all i = 1, ..., n. The elements of the permutation σ are indices of each monomial; the left-ordered cycle notation of σ is written as

$$\sigma=\bigl(i\,i_{k_1}i_{k_1+1}\cdots i_{k_1+l_1}\bigr)\bigl(i_{k_2}i_{k_2+1}\cdots i_{k_2+l_2}\bigr)\cdots\bigl(i_{k_r}i_{k_r+1}\cdots i_{k_r+l_r}\bigr).$$

The index i opens the first cycle from the left, and the other cycles satisfy i_{k_2} < i_{k_3} < ··· < i_{k_r} and i_{k_t} < i_{k_t+s} for all t = 2, ..., r and s = 1, ..., l_t.

(2) The jth column determinant of A = (a_ij) ∈ H^{n×n} is defined by

$$\operatorname{cdet}_j A=\sum_{\tau\in S_n}(-1)^{n-r}a_{j_{k_r}j_{k_r+l_r}}\cdots a_{j_{k_r+1}j_{k_r}}\cdots a_{j\,j_{k_1+l_1}}\cdots a_{j_{k_1+1}j_{k_1}}a_{j_{k_1}j}$$

for all j = 1, ..., n. The elements of the permutation τ are indices of each monomial; the right-ordered cycle notation of τ is written as

$$\tau=\bigl(j_{k_r+l_r}\cdots j_{k_r+1}j_{k_r}\bigr)\cdots\bigl(j_{k_2+l_2}\cdots j_{k_2+1}j_{k_2}\bigr)\bigl(j_{k_1+l_1}\cdots j_{k_1+1}j_{k_1}\,j\bigr).$$
The index j opens the first cycle from the right, and the other cycles satisfy j_{k_2} < j_{k_3} < ··· < j_{k_r} and j_{k_t} < j_{k_t+s} for all t = 2, ..., r and s = 1, ..., l_t.

Definition 2.2 (Definition 7.2 of [24]) For A ∈ H^{n×n}, the determinant of its corresponding Hermitian matrix is called its double determinant, i.e. ddet A := det(A*A) = det(AA*).

Suppose that A_{.j}(b) denotes the matrix obtained from A by replacing its jth column with the column b, and A_{i.}(b) denotes the matrix obtained from A by replacing its ith row with the row b.

Theorem 2.1 (Theorem 8.1 of [24]) A necessary and sufficient condition for the invertibility of A ∈ H^{n×n} is ddet A ≠ 0. In this case A^{-1} = (LA)^{-1} = (RA)^{-1}, where

$$(LA)^{-1}=\frac{1}{\operatorname{ddet}A}\begin{pmatrix}L_{11}&L_{21}&\cdots&L_{n1}\\L_{12}&L_{22}&\cdots&L_{n2}\\\vdots&\vdots&\ddots&\vdots\\L_{1n}&L_{2n}&\cdots&L_{nn}\end{pmatrix},\qquad(RA)^{-1}=\frac{1}{\operatorname{ddet}A}\begin{pmatrix}R_{11}&R_{21}&\cdots&R_{n1}\\R_{12}&R_{22}&\cdots&R_{n2}\\\vdots&\vdots&\ddots&\vdots\\R_{1n}&R_{2n}&\cdots&R_{nn}\end{pmatrix},$$

and L_{ij} = cdet_j((A*A)_{.j}(a*_{.i})), R_{ij} = rdet_i((AA*)_{i.}(a*_{j.})) for all i, j = 1, ..., n.
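For numerical purposes the double determinant can be evaluated through the familiar complex adjoint map: write A = A1 + A2 j with complex A1, A2 and set χ(A) = [[A1, A2], [−Ā2, Ā1]]. The map χ is multiplicative and *-preserving, and we use below the standard representation-theoretic fact (an assumption of this sketch, not proved in the paper) that det χ(A) is real, nonnegative and agrees with ddet A, so Theorem 2.1's invertibility test becomes an ordinary complex determinant. The helper names are our own:

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint chi(A) of the quaternion matrix A = A1 + A2*j."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

def ddet(A1, A2):
    # ddet A = det(A*A); numerically it coincides with det(chi(A)),
    # which is real up to rounding error.
    return np.linalg.det(chi(A1, A2)).real

# A = [[1, i], [j, 1]]: A1 = [[1, i], [0, 1]], A2 = [[0, 0], [1, 0]].
A1 = np.array([[1, 1j], [0, 1]], dtype=complex)
A2 = np.array([[0, 0], [1, 0]], dtype=complex)

# ddet A = det(A*A) = det([[2, i-j], [j-i, 2]]) = 4 - |i-j|^2 = 2,
# so A is invertible by Theorem 2.1.
print(ddet(A1, A2))  # 2.0 (up to rounding)
```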
3 Cramer's rules for the unique and general solution of (1.1)–(1.2)

Using Cramer's rules to represent generalized inverses and solutions of restricted equations has been studied by many authors (see, e.g. [10–18]). In that work, Cramer's rule serves only as a basic method to express the unique solution of a consistent matrix equation or the best approximate solution of an inconsistent one. In this section, we aim to find the general solution of (1.1)–(1.2) over the quaternion skew field by some condensed Cramer's rules. Concerning the solvability conditions and general solutions of matrix equations, the following result is known.

Lemma 3.1 [34] Let A ∈ H^{m×n}, B ∈ H^{p×q}, C ∈ H^{m×q} be known and X ∈ H^{n×p} be unknown. The matrix equation AXB = C is consistent if and only if AA⁻CBB⁻ = C. In this case, its general solution can be expressed as

$$X=A^{\dagger}CB^{\dagger}+L_AU+VR_B=A^{\dagger}CB^{\dagger}+Z-A^{-}AZBB^{-},$$
where U, V and Z are arbitrary matrices over H with appropriate dimensions.

Then we can get the following.

Theorem 3.2 Suppose that A ∈ H^{m×n}, B ∈ H^{p×q}, C ∈ H^{m×q}, T1 ⊂ H^n, T2 ⊂ H^{1×n}, S1 ⊂ H^n, and S2 ⊂ H^{1×p} are known. Denote T11 = R_r(P_{T1}A*), S11 = N_r(P_{S1⊥}B), T22 = R_l(B*Q_{T2}), S22 = N_l(AQ_{S2⊥}).

(1) Equation (1.1) is consistent if and only if

$$R_r(C)\subseteq AT_1,\qquad N_r(C)\supseteq N_r(P_{S_1^{\perp}}B).\tag{3.1}$$

In this case, the general solution of (1.1) can be expressed as

$$\begin{aligned}X&=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger}+P_{T_1}L_{AP_{T_1}}U_1P_{S_1^{\perp}}+P_{T_1}V_1R_{P_{S_1^{\perp}}B}P_{S_1^{\perp}}\\&=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger}+P_{T_1}Z_1P_{S_1^{\perp}}-P_{T_{11}}Z_1P_{S_{11}^{\perp}}\\&=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger}+P_{T_1\cap N_r(A)}U_1P_{S_1^{\perp}}+P_{T_1}V_1P_{S_1^{\perp}\cap N_r(B^*)},\end{aligned}\tag{3.2}$$

where U1, V1 and Z1 are arbitrary matrices over H with appropriate dimensions.

(2) Equation (1.2) is consistent if and only if

$$R_l(C)\subseteq T_2B,\qquad N_l(C)\supseteq N_l(AQ_{S_2^{\perp}}).\tag{3.3}$$

In this case, the general solution of (1.2) can be expressed as

$$\begin{aligned}X&=(AQ_{S_2^{\perp}})^{\dagger}C(Q_{T_2}B)^{\dagger}+Q_{S_2^{\perp}}L_{AQ_{S_2^{\perp}}}U_2Q_{T_2}+Q_{S_2^{\perp}}V_2R_{Q_{T_2}B}Q_{T_2}\\&=(AQ_{S_2^{\perp}})^{\dagger}C(Q_{T_2}B)^{\dagger}+Q_{S_2^{\perp}}Z_2Q_{T_2}-Q_{S_{22}^{\perp}}Z_2Q_{T_{22}}\\&=(AQ_{S_2^{\perp}})^{\dagger}C(Q_{T_2}B)^{\dagger}+Q_{S_2^{\perp}\cap N_l(A^*)}U_2Q_{T_2}+Q_{S_2^{\perp}}V_2Q_{T_2\cap N_l(B)},\end{aligned}\tag{3.4}$$

where U2, V2 and Z2 are arbitrary matrices over H with appropriate dimensions.

Proof (1) Note that

$$R_r(X)\subset T_1,\ N_r(X)\supset S_1\iff X=P_{T_1}WP_{S_1^{\perp}}.$$

Substituting this into (1.1), by Lemma 3.1 the equation

$$AP_{T_1}WP_{S_1^{\perp}}B=C\tag{3.5}$$
is consistent if and only if (3.1) is satisfied, and in that case the general solution of (3.5) can be expressed as

$$\begin{aligned}W&=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger}+L_{AP_{T_1}}U_1+V_1R_{P_{S_1^{\perp}}B}\\&=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger}+Z_1-(AP_{T_1})^{\dagger}AP_{T_1}Z_1P_{S_1^{\perp}}B(P_{S_1^{\perp}}B)^{\dagger},\end{aligned}$$

where U1, V1 and Z1 are arbitrary matrices over H with appropriate dimensions. Moreover,

$$P_{T_1}(AP_{T_1})^{\dagger}=(AP_{T_1})^{\dagger},\qquad(P_{S_1^{\perp}}B)^{\dagger}P_{S_1^{\perp}}=(P_{S_1^{\perp}}B)^{\dagger},$$
$$(AP_{T_1})^{\dagger}AP_{T_1}=P_{T_{11}},\qquad P_{S_1^{\perp}}B(P_{S_1^{\perp}}B)^{\dagger}=P_{S_{11}^{\perp}},$$

so we can rewrite the general solution of (1.1) as

$$X=X_0+P_{T_1}L_{AP_{T_1}}U_1P_{S_1^{\perp}}+P_{T_1}V_1R_{P_{S_1^{\perp}}B}P_{S_1^{\perp}}=X_0+P_{T_1}Z_1P_{S_1^{\perp}}-P_{T_{11}}Z_1P_{S_{11}^{\perp}}.$$

Note that

$$P_{T_1}L_{AP_{T_1}}=P_{T_1}-P_{T_1}(AP_{T_1})^{\dagger}AP_{T_1}=P_{T_1\cap N_r(A)},$$
$$R_{P_{S_1^{\perp}}B}P_{S_1^{\perp}}=P_{S_1^{\perp}}-P_{S_1^{\perp}}B(P_{S_1^{\perp}}B)^{\dagger}P_{S_1^{\perp}}=P_{S_1^{\perp}\cap N_r(B^*)},$$

thus the third equality in (3.2) follows immediately. Similarly, we can show (2). ∎
In [19], Kyrchei gave a Cramer's rule for the unique solution of a nonsingular quaternion matrix equation as follows.

Lemma 3.3 ([19]) Suppose that A, B, C ∈ H^{n×n} are given and X ∈ H^{n×n} is unknown. If det(A*A) ≠ 0 and det(BB*) ≠ 0, then AXB = C has a unique solution, which can be written as

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*)_{j.}(c^A_{i.})\bigr)}{\det(A^*A)\det(BB^*)},\quad i,j=1,\ldots,n,$$

or

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A)_{.i}(c^B_{.j})\bigr)}{\det(A^*A)\det(BB^*)},\quad i,j=1,\ldots,n,$$

where

$$c^A_{i.}:=\bigl(\operatorname{cdet}_i((A^*A)_{.i}(d_{.1})),\ldots,\operatorname{cdet}_i((A^*A)_{.i}(d_{.n}))\bigr),$$
$$c^B_{.j}:=\bigl(\operatorname{rdet}_j((BB^*)_{j.}(d_{1.})),\ldots,\operatorname{rdet}_j((BB^*)_{j.}(d_{n.}))\bigr)^T,$$
with d_{i.}, d_{.j} the ith row vector and the jth column vector of A*CB*, respectively, for all i, j = 1, ..., n.

In [25,26], we showed necessary and sufficient conditions for equations (1.1) and (1.2) to have a unique solution, respectively, and derived the corresponding Cramer's rules via the determinantal representations of the generalized inverses A^{(2)}_{rT_1,S_1} and A^{(2)}_{lT_2,S_2}. In this section, we study new condensed Cramer's rules for the unique and general solutions of (1.1) and (1.2), respectively.

Theorem 3.4 Suppose that A, B, C, T1, T2, S1 and S2 are known and defined as in Theorem 3.2.

(1) Let E1*, F1 be two full column rank matrices such that T1 = N_r(E1), S1 = R_r(F1). Then (1.1) has a unique solution if and only if T1 ∩ N_r(A) = 0 and S1⊥ ∩ N_r(B*) = 0. In this case, the unique solution can be expressed as

$$X_0=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger}=(A^*A+E_1^*E_1)^{-1}A^*CB^*(BB^*+F_1F_1^*)^{-1},$$

and X0 = (x_ij) ∈ H^{n×p} has the determinantal representations

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*+F_1F_1^*)_{j.}(c^A_{i.})\bigr)}{\det(A^*A+E_1^*E_1)\det(BB^*+F_1F_1^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.6}$$

or

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+E_1^*E_1)_{.i}(c^B_{.j})\bigr)}{\det(A^*A+E_1^*E_1)\det(BB^*+F_1F_1^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.7}$$

where

$$c^A_{i.}:=\bigl(\operatorname{cdet}_i((A^*A+E_1^*E_1)_{.i}(d_{.1})),\ldots,\operatorname{cdet}_i((A^*A+E_1^*E_1)_{.i}(d_{.p}))\bigr),$$
$$c^B_{.j}:=\bigl(\operatorname{rdet}_j((BB^*+F_1F_1^*)_{j.}(d_{1.})),\ldots,\operatorname{rdet}_j((BB^*+F_1F_1^*)_{j.}(d_{n.}))\bigr)^T,$$

with d_{i.}, d_{.j} the ith row vector and the jth column vector of A*CB*, respectively, for all i = 1, ..., n, j = 1, ..., p.

(2) Let E2, F2* be two full row rank matrices such that T2 = N_l(F2), S2 = R_l(E2). Then (1.2) has a unique solution if and only if T2 ∩ N_l(B) = 0 and S2⊥ ∩ N_l(A*) = 0.
In this case, the unique solution can be expressed as

$$X_0=(AQ_{S_2^{\perp}})^{\dagger}C(Q_{T_2}B)^{\dagger}=(A^*A+E_2^*E_2)^{-1}A^*CB^*(BB^*+F_2F_2^*)^{-1},$$

and X0 = (x_ij) ∈ H^{n×p} has the determinantal representations

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*+F_2F_2^*)_{j.}(c^A_{i.})\bigr)}{\det(A^*A+E_2^*E_2)\det(BB^*+F_2F_2^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.8}$$

or

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+E_2^*E_2)_{.i}(c^B_{.j})\bigr)}{\det(A^*A+E_2^*E_2)\det(BB^*+F_2F_2^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.9}$$

where c^A_{i.} and c^B_{.j} are defined as in part (1) with E2*E2, F2F2* in place of E1*E1, F1F1*, and d_{i.}, d_{.j} are the ith row vector and the jth column vector of A*CB*, respectively, for all i = 1, ..., n, j = 1, ..., p.

Proof (1) First we show that if (1.1) is consistent, then it has the same solutions as the restricted equation

$$A^*AXBB^*=A^*CB^*,\quad R_r(X)\subset T_1,\ N_r(X)\supset S_1.\tag{3.10}$$

Obviously, every solution of (1.1) satisfies (3.10). For the other direction, suppose that (3.10) is consistent. By Theorem 3.2,

$$R_r(C)\subset R_r(AP_{T_1}),\qquad N_r(C)\supset N_r(P_{S_1^{\perp}}B),$$

which is equivalent to the existence of a matrix C1 such that C = AP_{T1}C1P_{S1⊥}B. Suppose X0 is an arbitrary solution of (3.10); then

$$A^*AX_0BB^*=A^*AP_{T_1}C_1P_{S_1^{\perp}}BB^*.$$

By the cancellation rules we get AX0B = AP_{T1}C1P_{S1⊥}B = C, that is, X0 satisfies (1.1). Note that

$$R_r(X)\subset T_1\iff E_1X=0,\qquad N_r(X)\supset S_1\iff XF_1=0,$$
thus (3.10) can be rewritten as

$$\begin{pmatrix}A^*A&E_1^*\\E_1&0\end{pmatrix}\begin{pmatrix}X&0\\0&0\end{pmatrix}\begin{pmatrix}BB^*&F_1\\F_1^*&0\end{pmatrix}=\begin{pmatrix}A^*CB^*&0\\0&0\end{pmatrix}.$$

Multiplying by (I  E1*) and (I; F1*) from the two sides, respectively, we have

$$(A^*A+E_1^*E_1)X(BB^*+F_1F_1^*)=A^*CB^*.\tag{3.11}$$

Moreover, T1 ∩ N_r(A) = 0 and S1⊥ ∩ N_r(B*) = 0, so A*A + E1*E1 and BB* + F1F1* are nonsingular. By Lemma 3.3, the unique solution of (3.10) can be written as (3.6)–(3.7).

For (2), by the above argument, (1.2) and the restricted equation

$$A^*AXBB^*=A^*CB^*,\quad R_l(X)\subset T_2,\ N_l(X)\supset S_2\tag{3.12}$$

have the same solutions when (1.2) is consistent. Note that

$$R_l(X)\subset T_2\iff XF_2=0,\qquad N_l(X)\supset S_2\iff E_2X=0,$$

so (3.12) can be written as

$$\begin{pmatrix}A^*A&E_2^*\\E_2&0\end{pmatrix}\begin{pmatrix}X&0\\0&0\end{pmatrix}\begin{pmatrix}BB^*&F_2\\F_2^*&0\end{pmatrix}=\begin{pmatrix}A^*CB^*&0\\0&0\end{pmatrix}.$$

Multiplying by (I  E2*) and (I; F2*) from the two sides, respectively, we get

$$(A^*A+E_2^*E_2)X(BB^*+F_2F_2^*)=A^*CB^*.\tag{3.13}$$

Note that T2 ∩ N_l(B) = 0 and S2⊥ ∩ N_l(A*) = 0, so A*A + E2*E2 and BB* + F2F2* are nonsingular. By Lemma 3.3 the unique solution of (3.12) can be written as (3.8)–(3.9). ∎
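Numerically, the point of (3.11) is that no quaternion determinants are needed at all: once the bordering makes the Hermitian coefficients nonsingular, the unique solution is X = (A*A + E1*E1)^{-1} A*CB* (BB* + F1F1*)^{-1}, computable in the complex adjoint representation χ(A) = [[A1, A2], [−Ā2, Ā1]]. A sketch with randomly generated matrices of our own choosing, in the unrestricted special case where E1 and F1 are empty (so the condensed equation is just A*AX BB* = A*CB*):

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint of the quaternion matrix A = A1 + A2*j."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

rng = np.random.default_rng(0)
def rand_q(n):
    """A random n x n quaternion matrix, stored as its complex adjoint."""
    z = lambda: rng.standard_normal((n, n)) + 1j*rng.standard_normal((n, n))
    return chi(z(), z())

A, B, C = rand_q(2), rand_q(2), rand_q(2)

# Condensed equation (3.11) with E1, F1 empty: (A*A) X (BB*) = A*CB*.
M = A.conj().T @ A                 # chi(A*A): Hermitian, (a.s.) nonsingular
N = B @ B.conj().T                 # chi(BB*)
X = np.linalg.solve(M, A.conj().T @ C @ B.conj().T) @ np.linalg.inv(N)

print(np.linalg.norm(A @ X @ B - C))  # tiny residual: X solves AXB = C
```

Because χ is a *-preserving homomorphism, all operations stay inside the image of χ, so the computed complex matrix is automatically the adjoint of a quaternion solution.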
Remark 3.1 Compared with Theorem 4.2 in Song [25], the new result makes a major improvement: we only need to find two matrices to construct nonsingular Hermitian coefficient matrices, whose determinants are easy to compute.
Note that if (1.1) and (1.2) are consistent but their solutions are not unique, the determinantal representation of the general solution cannot be derived from the existing Cramer's rules. The main result of this section follows.

Theorem 3.5 (1) Suppose that (1.1) is defined as in Theorem 3.2 and (3.1) is satisfied, and let E1*, K1*, F1, L1 be full column rank matrices such that

$$T_1=N_r(E_1),\quad T_1\cap N_r(A)=R_r(K_1^*),\quad S_1^{\perp}=N_r(F_1^*),\quad S_1^{\perp}\cap N_r(B^*)=R_r(L_1).$$

Then every solution X0 = (x_ij) ∈ H^{n×p} of (1.1) has the determinantal representations

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*+F_1F_1^*+L_1L_1^*)_{j.}(c^A_{i.})\bigr)}{\det(A^*A+E_1^*E_1+K_1^*K_1)\det(BB^*+F_1F_1^*+L_1L_1^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.14}$$

or

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+E_1^*E_1+K_1^*K_1)_{.i}(c^B_{.j})\bigr)}{\det(A^*A+E_1^*E_1+K_1^*K_1)\det(BB^*+F_1F_1^*+L_1L_1^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.15}$$

where

$$c^A_{i.}:=\bigl(\operatorname{cdet}_i((A^*A+E_1^*E_1+K_1^*K_1)_{.i}(d_{.1})),\ldots,\operatorname{cdet}_i((A^*A+E_1^*E_1+K_1^*K_1)_{.i}(d_{.p}))\bigr),$$
$$c^B_{.j}:=\bigl(\operatorname{rdet}_j((BB^*+F_1F_1^*+L_1L_1^*)_{j.}(d_{1.})),\ldots,\operatorname{rdet}_j((BB^*+F_1F_1^*+L_1L_1^*)_{j.}(d_{n.}))\bigr)^T,$$

with d_{i.}, d_{.j} the ith row vector and the jth column vector of

$$A^*CB^*+A^*A(I-E_1^{\dagger}E_1)ZL_1L_1^*+K_1^*K_1Z(I-F_1F_1^{\dagger})BB^*+K_1^*K_1ZL_1L_1^*,$$

respectively, for all i = 1, ..., n, j = 1, ..., p, and Z is arbitrary.

(2) Suppose that (1.2) is defined as in Theorem 3.2 and (3.3) is satisfied, and let E2, K2, F2*, L2* be full row rank matrices such that

$$T_2=N_l(F_2),\quad T_2\cap N_l(A)=R_l(L_2^*),\quad S_2^{\perp}=N_l(E_2^*),\quad S_2^{\perp}\cap N_l(B^*)=R_l(K_2).$$

Then every solution X0 = (x_ij) ∈ H^{n×p} of (1.2) has the determinantal representations

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*+F_2F_2^*+L_2L_2^*)_{j.}(c^A_{i.})\bigr)}{\det(A^*A+E_2E_2^*+K_2K_2^*)\det(BB^*+F_2F_2^*+L_2L_2^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.16}$$

or

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+E_2E_2^*+K_2K_2^*)_{.i}(c^B_{.j})\bigr)}{\det(A^*A+E_2E_2^*+K_2K_2^*)\det(BB^*+F_2F_2^*+L_2L_2^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{3.17}$$

where c^A_{i.} and c^B_{.j} are defined as in part (1) with E2E2*, K2K2*, F2F2*, L2L2* in place of E1*E1, K1*K1, F1F1*, L1L1*, and d_{i.}, d_{.j} are the ith row vector and the jth column vector of

$$A^*CB^*+A^*A(I-E_2E_2^{\dagger})ZL_2L_2^*+K_2K_2^*Z(I-F_2F_2^{\dagger})BB^*+K_2K_2^*ZL_2L_2^*,$$

respectively, for all i = 1, ..., n, j = 1, ..., p, and Z is arbitrary.

Proof (1) It follows from Theorems 3.2 and 3.4 that when (1.1) is consistent and T1 ∩ N_r(A) ≠ 0 or S1⊥ ∩ N_r(B*) ≠ 0, the solution of (1.1) is not unique. Suppose that E1*, K1*, F1, L1 are full column rank matrices chosen as above. Then

$$T_{11}=N_r\begin{pmatrix}E_1\\K_1\end{pmatrix},\qquad S_{11}=N_r\begin{pmatrix}F_1^*\\L_1^*\end{pmatrix},$$

$$(E_1^*E_1+K_1^*K_1)P_{T_{11}}=0,\qquad P_{S_{11}}(F_1F_1^*+L_1L_1^*)=0,$$
$$K_1P_{T_1}=K_1,\quad P_{S_1^{\perp}}L_1=L_1,\quad E_1P_{T_1}=0,\quad P_{S_1^{\perp}}F_1=0.$$

By Theorem 3.2, the general solution of (1.1) can be expressed as

$$X=X_0+P_{T_1}Z_1P_{S_1^{\perp}}-P_{T_{11}}Z_1P_{S_{11}^{\perp}},$$
where Z1 is arbitrary. Thus

$$\begin{aligned}&(A^*A+E_1^*E_1+K_1^*K_1)\bigl(X_0+P_{T_1}Z_1P_{S_1^{\perp}}-P_{T_{11}}Z_1P_{S_{11}^{\perp}}\bigr)(BB^*+F_1F_1^*+L_1L_1^*)\\&\quad=A^*CB^*+A^*A(I-E_1^{\dagger}E_1)Z_1L_1L_1^*+K_1^*K_1Z_1(I-F_1F_1^{\dagger})BB^*+K_1^*K_1Z_1L_1L_1^*\\&\quad=A^*CB^*+W,\end{aligned}$$

where

$$W=A^*A(I-E_1^{\dagger}E_1)Z_1L_1L_1^*+K_1^*K_1Z_1(I-F_1F_1^{\dagger})BB^*+K_1^*K_1Z_1L_1L_1^*.$$

It follows from T11 ∩ N_r(A) = 0 and S11 ∩ N_r(B*) = 0 that A*A + E1*E1 + K1*K1 and BB* + F1F1* + L1L1* are nonsingular, and X can be written as

$$X=(A^*A+E_1^*E_1+K_1^*K_1)^{-1}(A^*CB^*+W)(BB^*+F_1F_1^*+L_1L_1^*)^{-1}.$$

By Lemma 3.3 any solution of (1.1) can be expressed as (3.14)–(3.15). Similarly, we can get (2). ∎
Corollary 3.6 (1) Suppose that A ∈ H^{m×n}, C ∈ H^{m×q}, T1 ⊂ H^n and S1 ⊂ H^q. Denote T11 = R_r(P_{T1}A*). Then the restricted matrix equation

$$AX=C,\quad R_r(X)\subset T_1,\ N_r(X)\supset S_1$$

is consistent if and only if R_r(C) ⊆ AT1, and the general solution can be expressed as

$$\begin{aligned}X&=(AP_{T_1})^{\dagger}CP_{S_1^{\perp}}+P_{T_1}L_{AP_{T_1}}U_1P_{S_1^{\perp}}+P_{T_1}V_1P_{S_1^{\perp}}\\&=(AP_{T_1})^{\dagger}CP_{S_1^{\perp}}+P_{T_1}Z_1P_{S_1^{\perp}}-P_{T_{11}}Z_1P_{S_1^{\perp}}\\&=(AP_{T_1})^{\dagger}CP_{S_1^{\perp}}+P_{T_1\cap N_r(A)}U_1P_{S_1^{\perp}}+P_{T_1}V_1P_{S_1^{\perp}},\end{aligned}$$

with the determinantal representation

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+E_1^*E_1+K_1^*K_1)_{.i}(d_{.j})\bigr)}{\det(A^*A+E_1^*E_1+K_1^*K_1)},\quad i=1,\ldots,n,\ j=1,\ldots,m,$$

where d_{.j} is the jth column vector of A*C + K1*K1Z, for all i = 1, ..., n, j = 1, ..., m, and U1, V1, Z1 and Z are arbitrary matrices over H with appropriate dimensions.

(2) Suppose that B ∈ H^{n×q}, C ∈ H^{m×q}, T2 ⊂ H^{1×n} and S2 ⊂ H^{1×q}. Denote T22 = R_l(B*Q_{T2}). Then the restricted matrix equation

$$XB=C,\quad R_l(X)\subset T_2,\ N_l(X)\supset S_2$$

is consistent if and only if R_l(C) ⊆ T2B, and the general solution can be expressed as

$$\begin{aligned}X&=Q_{S_2^{\perp}}C(Q_{T_2}B)^{\dagger}+Q_{S_2^{\perp}}U_2Q_{T_2}+Q_{S_2^{\perp}}V_2R_{Q_{T_2}B}Q_{T_2}\\&=Q_{S_2^{\perp}}C(Q_{T_2}B)^{\dagger}+Q_{S_2^{\perp}}Z_2Q_{T_2}-Q_{S_2^{\perp}}Z_2Q_{T_{22}}\\&=Q_{S_2^{\perp}}C(Q_{T_2}B)^{\dagger}+Q_{S_2^{\perp}}U_2Q_{T_2}+Q_{S_2^{\perp}}V_2Q_{T_2\cap N_l(B)},\end{aligned}$$

with the determinantal representation

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*+F_2F_2^*+L_2L_2^*)_{j.}(d_{i.})\bigr)}{\det(BB^*+F_2F_2^*+L_2L_2^*)},\quad i=1,\ldots,m,\ j=1,\ldots,n,$$

where d_{i.} is the ith row vector of CB* + ZL2L2*, for all i = 1, ..., m, j = 1, ..., n, and U2, V2, Z2 and Z are arbitrary matrices over H with appropriate dimensions.

Corollary 3.7 (1) Suppose that A ∈ H^{m×n} and N_r(A) = R_r(K1*). Then every X0 ∈ A{1,3} has the determinantal representation

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+K_1^*K_1)_{.i}(d_{.j})\bigr)}{\det(A^*A+K_1^*K_1)},\quad i=1,\ldots,n,\ j=1,\ldots,m,$$

where d_{.j} is the jth column vector of A* + K1*K1Z, for all i = 1, ..., n, j = 1, ..., m, and Z is arbitrary.

(2) Suppose that A ∈ H^{m×n} and N_l(A) = R_l(K2*). Then every X0 ∈ A{1,4} has the determinantal representation

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((AA^*+K_2K_2^*)_{j.}(d_{i.})\bigr)}{\det(AA^*+K_2K_2^*)},\quad i=1,\ldots,n,\ j=1,\ldots,m,$$

where d_{i.} is the ith row vector of A* + ZK2K2*, for all i = 1, ..., n, j = 1, ..., m, and Z is arbitrary.
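Corollary 3.7(1) is easy to exercise numerically. For the illustrative matrix A = (1  i) ∈ H^{1×2} (our own choice, not from the paper), the right null space is spanned by K1* = (−i, 1)^T, and taking Z = 0 the corollary yields X = (A*A + K1*K1)^{-1}A*, which should satisfy Penrose equations (1) and (3). A sketch in the complex adjoint representation:

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint of the quaternion matrix A = A1 + A2*j."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

# A = (1  i), so Nr(A) = Rr(K1*) with K1* = (-i, 1)^T, i.e. K1 = (i  1).
A  = chi(np.array([[1, 1j]]), np.zeros((1, 2)))
K1 = chi(np.array([[1j, 1]]), np.zeros((1, 2)))

# Corollary 3.7(1) with Z = 0: X = (A*A + K1*K1)^{-1} A* is a {1,3}-inverse.
M = A.conj().T @ A + K1.conj().T @ K1
X = np.linalg.solve(M, A.conj().T)

print(np.linalg.norm(A @ X @ A - A))             # ~ 0 : AXA = A
print(np.linalg.norm(A @ X - (A @ X).conj().T))  # ~ 0 : (AX)* = AX
```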
4 Cramer's rules for the least squares solution and least norm solution of (1.1)–(1.2)

In the study of the theory and numerical computation of quaternionic quantum theory, for instance the perturbation theory [35], experimental proposals [36] and theoretical discussions [37] underlying quaternionic formulations of the Schrödinger equation, one often meets approximate solutions of quaternion problems. Via the complex and real representations of quaternion matrices, Jiang [38] and Wang [39] discussed algebraic algorithms for the quaternion least squares problem and derived some important theoretical results. In this section, we consider condensed Cramer's rules for the least F-squares and the least F-norm solutions of (1.1)–(1.2), respectively.

We first recall (in simplified form) the definition of a quaternionic inner product space given in [40]. A right H-vector space V is a quaternionic inner product space if there is a function (·,·): V × V → H such that for all q1, q2 ∈ H and ζ, ζ1, ζ2 ∈ V:

(1) (ζ1q1 + ζ2q2, ζ) = q̄1(ζ1, ζ) + q̄2(ζ2, ζ);
(2) (ζ1, ζ2) = (ζ2, ζ1)‾;
(3) (ζ, ζ) ≥ 0, and (ζ, ζ) = 0 if and only if ζ = 0.

It is easy to verify that H^{m×n} is a quaternionic inner product space under the inner product (A, B) = tr(B*A), where A, B ∈ H^{m×n}. The matrix norm ‖A‖_F = (tr(A*A))^{1/2} is called the Frobenius norm of A.

Theorem 4.1 (1) Suppose that E1, F1, T11, S11 are known and defined as in Theorem 3.4. If (1.1) is consistent, then the least F-norm solution of (1.1) is the unique solution of

$$AXB=C,\quad R_r(X)\subset T_{11},\ N_r(X)\supset S_{11},\tag{4.1}$$

which can be written as X0 = (x_ij)_{n×p} = (AP_{T1})†C(P_{S1⊥}B)† and has the determinantal representations

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*+F_1F_1^*+L_1L_1^*)_{j.}(c^A_{i.})\bigr)}{\det(A^*A+E_1^*E_1+K_1^*K_1)\det(BB^*+F_1F_1^*+L_1L_1^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{4.2}$$

or

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+E_1^*E_1+K_1^*K_1)_{.i}(c^B_{.j})\bigr)}{\det(A^*A+E_1^*E_1+K_1^*K_1)\det(BB^*+F_1F_1^*+L_1L_1^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{4.3}$$

where c^A_{i.} and c^B_{.j} are built as in (3.14)–(3.15), now with d_{i.}, d_{.j} the ith row vector and the jth column vector of A*CB*, respectively, for all i = 1, ..., n, j = 1, ..., p.
(2) Suppose that E2, F2, T22, S22 are known and defined as in Theorem 3.4. If (1.2) is consistent, then the least F-norm solution of (1.2) is the unique solution of

$$AXB=C,\quad R_l(X)\subset T_{22},\ N_l(X)\supset S_{22},$$

which can be expressed as X0 = (x_ij)_{n×p} = (AQ_{S2⊥})†C(Q_{T2}B)† and has the determinantal representations

$$x_{ij}=\frac{\operatorname{rdet}_j\bigl((BB^*+F_2F_2^*+L_2L_2^*)_{j.}(c^A_{i.})\bigr)}{\det(A^*A+E_2E_2^*+K_2K_2^*)\det(BB^*+F_2F_2^*+L_2L_2^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{4.4}$$

or

$$x_{ij}=\frac{\operatorname{cdet}_i\bigl((A^*A+E_2E_2^*+K_2K_2^*)_{.i}(c^B_{.j})\bigr)}{\det(A^*A+E_2E_2^*+K_2K_2^*)\det(BB^*+F_2F_2^*+L_2L_2^*)},\quad i=1,\ldots,n,\ j=1,\ldots,p,\tag{4.5}$$

where c^A_{i.} and c^B_{.j} are built as in (3.16)–(3.17), now with d_{i.}, d_{.j} the ith row vector and the jth column vector of A*CB*, respectively, for all i = 1, ..., n, j = 1, ..., p.

Proof (1) Assume that X1 is an arbitrary solution of the consistent equation (1.1). It follows from Theorem 3.2 that the general solution of (1.1) can be expressed as

$$X=X_1+P_{T_1\cap N_r(A)}UP_{S_1^{\perp}}+P_{T_1}VP_{S_1^{\perp}\cap N_r(B^*)},$$

where U and V are arbitrary. Note that X1 is the least F-norm solution of (1.1) if and only if

$$\bigl(X_1,\ P_{T_1\cap N_r(A)}UP_{S_1^{\perp}}+P_{T_1}VP_{S_1^{\perp}\cap N_r(B^*)}\bigr)=0,$$

which is equivalent to

$$\bigl(X_1,P_{T_1\cap N_r(A)}UP_{S_1^{\perp}}\bigr)=0\quad\text{and}\quad\bigl(X_1,P_{T_1}VP_{S_1^{\perp}\cap N_r(B^*)}\bigr)=0.$$

By

$$R_r(X)\subset T_1,\ N_r(X)\supset S_1\iff P_{T_1}X_1=X_1,\ X_1P_{S_1^{\perp}}=X_1,$$

this holds for arbitrary U and V if and only if

$$\bigl(X_1,P_{T_1\cap N_r(A)}U\bigr)=0,\quad\bigl(X_1,VP_{S_1^{\perp}\cap N_r(B^*)}\bigr)=0$$
$$\iff X_1^*P_{T_1\cap N_r(A)}=0,\quad P_{S_1^{\perp}\cap N_r(B^*)}X_1^*=0$$
$$\iff R_r(X_1)\subset T_1,\ R_r(X_1)\perp\bigl(T_1\cap N_r(A)\bigr),\ N_r(X_1)\supset S_1,\ N_r(X_1)\supset S_1^{\perp}\cap N_r(B^*)$$
$$\iff R_r(X_1)\subset T_1\cap\bigl(T_1\cap N_r(A)\bigr)^{\perp}=T_{11},\quad N_r(X_1)\supset S_1\oplus\bigl(S_1^{\perp}\cap N_r(B^*)\bigr)=S_{11}.$$

Moreover,

$$AT_{11}=AT_1,\quad B^*S_{11}^{\perp}=B^*S_1^{\perp},\quad\text{and}\quad T_{11}\cap N_r(A)=0,\quad S_{11}^{\perp}\cap N_r(B^*)=0,$$

thus (4.1) has a unique solution

$$X=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger},$$

which has the determinantal representations (4.2)–(4.3). Similarly, we can get (2). ∎
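The F-norm minimized in this section is also convenient to evaluate through the complex adjoint map of Sect. 2's sketches: since χ(A) stores each of the two complex components of A = A1 + A2 j twice, one has ‖A‖_F² = tr(A*A) = ‖A1‖_F² + ‖A2‖_F² = ½‖χ(A)‖_F². A quick numerical check of this identity (helper names and test matrices our own):

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint of the quaternion matrix A = A1 + A2*j."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

rng = np.random.default_rng(1)
A1 = rng.standard_normal((3, 2)) + 1j*rng.standard_normal((3, 2))
A2 = rng.standard_normal((3, 2)) + 1j*rng.standard_normal((3, 2))

fro_q   = np.sqrt(np.linalg.norm(A1)**2 + np.linalg.norm(A2)**2)  # ||A||_F
fro_chi = np.linalg.norm(chi(A1, A2)) / np.sqrt(2)                # ||chi(A)||_F / sqrt(2)

print(abs(fro_q - fro_chi))  # ~ 0: the two norms agree
```

This makes it straightforward to compare candidate solutions of (1.1) by F-norm entirely in complex arithmetic.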
Theorem 4.2 (1) Suppose that (1.1) is not consistent, and denote T11 = R_r(P_{T1}A*), S11⊥ = R_r(P_{S1⊥}B). Then the least F-squares solutions of (1.1) are exactly the general solution of

$$AXB=P_{AT_1}CP_{B^*S_1^{\perp}},\quad R_r(X)\subset T_1,\ N_r(X)\supset S_1,\tag{4.6}$$

which can be expressed as

$$X=(AP_{T_1})^{\dagger}C(P_{S_1^{\perp}}B)^{\dagger}+P_{T_1}Z_1P_{S_1^{\perp}}-P_{T_{11}}Z_1P_{S_{11}^{\perp}}\tag{4.7}$$

and has the determinantal representations (3.14)–(3.15).

(2) Suppose that (1.2) is not consistent, and denote T22 = R_l(A*Q_{T2}), S22⊥ = R_l(BQ_{S2⊥}). Then the least F-squares solutions of (1.2) are exactly the general solution of the corresponding restricted equation

$$AXB=P_{AT_1}CP_{B^*S_1^{\perp}},\quad R_l(X)\subset T_2,\ N_l(X)\supset S_2,\tag{4.8}$$

which can be expressed as

$$X=(AQ_{S_2^{\perp}})^{\dagger}C(Q_{T_2}B)^{\dagger}+Q_{S_2^{\perp}}Z_2Q_{T_2}-Q_{S_{22}^{\perp}}Z_2Q_{T_{22}}\tag{4.9}$$
and has the determinantal representations (3.16)–(3.17).

Proof Write

$$AXB-C=\bigl(AXB-P_{AT_1}CP_{B^*S_1^{\perp}}\bigr)+\bigl(P_{AT_1}CP_{B^*S_1^{\perp}}-C\bigr).$$

Then

$$\begin{aligned}&\bigl(AXB-P_{AT_1}CP_{B^*S_1^{\perp}},\ P_{AT_1}CP_{B^*S_1^{\perp}}-C\bigr)\\&\quad=\bigl(AXB,P_{AT_1}CP_{B^*S_1^{\perp}}\bigr)-(AXB,C)-\bigl(P_{AT_1}CP_{B^*S_1^{\perp}},P_{AT_1}CP_{B^*S_1^{\perp}}\bigr)+\bigl(P_{AT_1}CP_{B^*S_1^{\perp}},C\bigr)\\&\quad=\operatorname{tr}\bigl(B^*X^*A^*P_{AT_1}CP_{B^*S_1^{\perp}}\bigr)-\operatorname{tr}\bigl(B^*X^*A^*C\bigr)-\operatorname{tr}\bigl(P_{B^*S_1^{\perp}}C^*P_{AT_1}CP_{B^*S_1^{\perp}}\bigr)+\operatorname{tr}\bigl(P_{B^*S_1^{\perp}}C^*P_{AT_1}C\bigr).\end{aligned}$$

Since R_r(X) ⊂ T1 and N_r(X) ⊃ S1, we get

$$\operatorname{tr}\bigl(B^*X^*A^*P_{AT_1}CP_{B^*S_1^{\perp}}\bigr)=\operatorname{tr}\bigl(B^*X^*A^*C\bigr),\qquad\operatorname{tr}\bigl(P_{B^*S_1^{\perp}}C^*P_{AT_1}CP_{B^*S_1^{\perp}}\bigr)=\operatorname{tr}\bigl(P_{B^*S_1^{\perp}}C^*P_{AT_1}C\bigr),$$

so the cross term vanishes. Thus

$$\|AXB-C\|_F^2=\bigl\|AXB-P_{AT_1}CP_{B^*S_1^{\perp}}\bigr\|_F^2+\bigl\|P_{AT_1}CP_{B^*S_1^{\perp}}-C\bigr\|_F^2,$$

and the least F-squares solutions of (1.1) are exactly the solutions of the restricted matrix equation (4.6). By Theorems 3.2 and 3.4, the general solution of (4.6) can be expressed as (4.7), with the determinantal representations (3.14)–(3.15). ∎
Combining Theorems 4.1 and 4.2, the least F-norm least F-squares solutions of (1.1) and (1.2) can be expressed as follows.

Theorem 4.3 (1) Suppose that (1.1) is not consistent, and

$$T_{11}=R_r(P_{T_1}A^*)=N_r(E_1),\qquad S_{11}=N_r(P_{S_1^{\perp}}B)=R_r(F_1).$$

Then the least F-norm least F-squares solution of (1.1) is the unique solution of

$$AXB=P_{AT_1}CP_{B^*S_1^{\perp}},\quad R_r(X)\subset T_{11},\ N_r(X)\supset S_{11},$$

which can be expressed as X = (AP_{T11})†C(P_{S11⊥}B)†, and has the determinantal representations (4.2)–(4.3).

(2) Suppose that (1.2) is not consistent, and

$$T_{22}=R_l(B^*Q_{T_2})=N_l(E_2),\qquad S_{22}=N_l(AQ_{S_2^{\perp}})=R_l(F_2).$$

Then the least F-norm least F-squares solution of (1.2) is the unique solution of the corresponding restricted equation with R_l(X) ⊂ T22 and N_l(X) ⊃ S22, which can be expressed as X = (AQ_{S22⊥})†C(Q_{T22}B)†, and has the determinantal representations (4.4)–(4.5).
5 Example

Let us consider the restricted matrix equation (1.1), where

$$A=\begin{pmatrix}1&i&1\\0&1&0\\1&i&1\end{pmatrix},\quad B=\begin{pmatrix}1&j&-1\\0&1&0\\-1&-j&1\end{pmatrix},\quad C=\begin{pmatrix}0&j&0\\0&i&0\\0&j&0\end{pmatrix},$$

$$T=R_r\begin{pmatrix}1&1\\i&0\\j&-1\end{pmatrix},\quad S=R_r\begin{pmatrix}1\\0\\-1\end{pmatrix},\quad A^*CB^*=\begin{pmatrix}2&2j&-2\\-k-2i&i-2k&k+2i\\2&2j&-2\end{pmatrix}.$$

We can get

$$T=N_r(E_1)=N_r\begin{pmatrix}1&i-k&1\end{pmatrix},\qquad S^{\perp}=N_r(F_1^*)=N_r\begin{pmatrix}1&0&-1\end{pmatrix},$$

$$T\cap N_r(A)=R_r\begin{pmatrix}1\\0\\-1\end{pmatrix}=R_r(K_1^*),\qquad S^{\perp}\cap N_r(B^*)=R_r\begin{pmatrix}1\\0\\1\end{pmatrix}=R_r(L_1).$$

Denote

$$T_{11}=N_r\begin{pmatrix}E_1\\K_1\end{pmatrix}=N_r\begin{pmatrix}1&i-k&1\\1&0&-1\end{pmatrix},\qquad S_{11}=N_r\begin{pmatrix}1&0&-1\\1&0&1\end{pmatrix}.$$

Then

$$\det\bigl(A^*A+E_1^*E_1+K_1^*K_1\bigr)=\det\begin{pmatrix}4&3i-k&2\\-3i+k&5&-3i+k\\2&3i-k&4\end{pmatrix}=20$$

and

$$\det\bigl(BB^*+F_1F_1^*+L_1L_1^*\bigr)=\det\begin{pmatrix}3&j&-3\\-j&1&j\\-3&-j&5\end{pmatrix}=4.$$

It follows from Theorem 4.1 that

$$\operatorname{cdet}_1\bigl((A^*A+E_1^*E_1+K_1^*K_1)_{.1}(d_{.1})\bigr)=\operatorname{cdet}_1\begin{pmatrix}2&3i-k&2\\-k-2i&5&-3i+k\\2&3i-k&4\end{pmatrix}=10-10j.$$

Similarly, we can get

$$c^A_{1.}=(10-10j,\ 10+10j,\ -10+10j),\quad c^A_{2.}=(-20k,\ 20i,\ 20k),\quad c^A_{3.}=(10-10j,\ 10+10j,\ -10+10j).$$

It follows from Theorem 4.1 that

$$x_{11}=\frac{1}{80}\operatorname{rdet}_1\begin{pmatrix}10-10j&10+10j&-10+10j\\-j&1&j\\-3&-j&5\end{pmatrix}=0,$$

$$x_{21}=x_{31}=x_{13}=x_{23}=x_{33}=0,\qquad x_{12}=\frac{1+j}{2},\quad x_{22}=i,\quad x_{32}=\frac{1+j}{2}.$$

Then

$$X=\begin{pmatrix}0&\frac{1+j}{2}&0\\0&i&0\\0&\frac{1+j}{2}&0\end{pmatrix}$$

is the least F-norm solution of (1.1).
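The example can be verified numerically in the complex adjoint representation (A = A1 + A2 j ↦ [[A1, A2], [−Ā2, Ā1]]), with A, B, C, E1, F1 and X entered as above. The check below confirms that the computed X satisfies AXB = C as well as the restrictions E1X = 0 and XF1 = 0:

```python
import numpy as np

def chi(A1, A2):
    """Complex adjoint of the quaternion matrix A = A1 + A2*j."""
    return np.block([[A1, A2], [-A2.conj(), A1.conj()]])

Z3 = np.zeros((3, 3))
A  = chi(np.array([[1, 1j, 1], [0, 1, 0], [1, 1j, 1]], dtype=complex), Z3)
B  = chi(np.array([[1, 0, -1], [0, 1, 0], [-1, 0, 1]], dtype=complex),
         np.array([[0, 1, 0], [0, 0, 0], [0, -1, 0]], dtype=complex))
C  = chi(np.array([[0, 0, 0], [0, 1j, 0], [0, 0, 0]], dtype=complex),
         np.array([[0, 1, 0], [0, 0, 0], [0, 1, 0]], dtype=complex))
X  = chi(np.array([[0, .5, 0], [0, 1j, 0], [0, .5, 0]], dtype=complex),
         np.array([[0, .5, 0], [0, 0, 0], [0, .5, 0]], dtype=complex))
E1 = chi(np.array([[1, 1j, 1]], dtype=complex),       # E1 = (1, i-k, 1)
         np.array([[0, -1j, 0]], dtype=complex))
F1 = chi(np.array([[1], [0], [-1]], dtype=complex), np.zeros((3, 1)))

print(np.linalg.norm(A @ X @ B - C))  # ~ 0 : X solves (1.1)
print(np.linalg.norm(E1 @ X))         # ~ 0 : Rr(X) is contained in T
print(np.linalg.norm(X @ F1))         # ~ 0 : Nr(X) contains S
```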
6 Conclusion

In this paper, we considered Cramer's rules for the unique solution, the general solution, the least squares solution and the least norm solution of the restricted quaternion matrix equations (1.1) and (1.2), respectively. Corresponding results on some special cases were also given. Motivated by the work in this paper, it would be of interest to investigate Cramer's rules for the following consistent system of quaternion matrix equations:

$$A_1XB_1=C_1,\quad A_2XB_2=C_2,\qquad R_r(X)\subset T_1,\ N_r(X)\supset S_1.$$

We will present these results in forthcoming papers.

Acknowledgments This research was supported by the Post Doctoral Fund of China (2015M571539), the Doctoral Program of Shandong Province (BS2013SF011), the Scientific Research Foundation of Shandong University (J14LI01), the Education Department Foundation of Hebei Province (QN2015218), the Natural Science Foundation of Hebei Province (A2015403050) and the Scientific Research Foundation of Weifang (2014GX027).
References

1. Zhang, F.: Quaternions and matrices of quaternions. Linear Algebra Appl. 251, 21–57 (1997)
2. Huang, L., So, W.: On left eigenvalues of a quaternionic matrix. Linear Algebra Appl. 323, 105–116 (2001)
3. Zhang, F.: Geršgorin type theorems for quaternionic matrices. Linear Algebra Appl. 424, 139–153 (2007)
4. Frenkel, I., Libine, M.: Quaternionic analysis, representation theory and physics. Adv. Math. 218, 1806–1877 (2008)
5. Zupan, E., Saje, M., Zupan, D.: The quaternion-based three-dimensional beam theory. Comput. Methods Appl. Mech. Eng. 198, 3944–3956 (2009)
6. Song, C., Sang, J., Seung, H., Nam, H.S.: Robust control of the missile attitude based on quaternion feedback. Control Eng. Pract. 14, 811–818 (2006)
7. Wie, B., Weiss, H., Arapostathis, A.: Quaternion feedback regulator for spacecraft eigenaxis rotations. J. Guidance 12, 375–380 (1989)
8. Gupta, S.: Linear quaternion equations with application to spacecraft attitude propagation. IEEE Proc. Aerosp. Conf. 1, 69–76 (1998)
9. Robinson, S.M.: A short proof of Cramer's rule. Math. Mag. 43, 94–95 (1970)
10. Ben-Israel, A.: A Cramer rule for least-norm solution of consistent linear equations. Linear Algebra Appl. 43, 223–236 (1982)
11. Verghese, G.C.: A Cramer rule for least-norm least-square-error solution of inconsistent linear equations. Linear Algebra Appl. 48, 315–316 (1982)
12. Wang, G.: A Cramer rule for minimum-norm (T) least-square (S) solution of inconsistent equations. Linear Algebra Appl. 74, 213–218 (1986)
13. Wang, G.: A Cramer rule for finding the solution of a class of singular equations. Linear Algebra Appl. 116, 27–34 (1989)
14. Yu, Y., Wei, Y.: Determinantal representation of the generalized inverse $A^{(2)}_{T,S}$ over integral domains and its applications. Linear Multilinear Algebra 57, 547–559 (2009)
15. Cai, J., Chen, G.: On determinantal representation for the generalized inverse $A^{(2)}_{T,S}$ and its applications. Numer. Linear Algebra Appl. 14, 169–182 (2007)
16. Wang, G., Wei, Y., Qiao, S.: Generalized Inverses: Theory and Computations. Science Press, Beijing (2004)
17. Chen, Y.L.: A Cramer rule for solution of the general restricted linear equation. Linear Multilinear Algebra 40, 61–68 (1995)
18. Ji, J.: A condensed Cramer's rule for the minimum-norm least-squares solution of linear equations. Appl. Math. Comput. 437, 2173–2178 (2012)
19. Kyrchei, I.I.: Cramer's rule for some quaternion matrix equations. Appl. Math. Comput. 217, 2024–2030 (2010)
20. Kyrchei, I.I.: Determinantal representations of the Moore–Penrose inverse over the quaternion skew field and corresponding Cramer's rules. Linear Multilinear Algebra 59(4), 413–431 (2011)
21. Kyrchei, I.I.: Determinantal representation of the Moore–Penrose inverse matrix over the quaternion skew field. J. Math. Sci. 180(1), 23–33 (2012)
22. Kyrchei, I.I.: Analogs of Cramer's rule for the minimum norm least squares solutions of some matrix equations. Appl. Math. Comput. 218, 6375–6384 (2012)
23. Kyrchei, I.I.: Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations. Linear Algebra Appl. 438, 136–152 (2013)
24. Kyrchei, I.I.: Cramer's rule for quaternionic system of linear equations. J. Math. Sci. 6, 839–858 (2008)
25. Song, G., Wang, Q., Chang, H.: Cramer rule for the unique solution of restricted matrix equations over the quaternion skew field. Comput. Math. Appl. 61, 1576–1589 (2011)
26. Song, G., Wang, Q.: Condensed Cramer rule for some restricted quaternion linear equations. Appl. Math. Comput. 208, 3110–3121 (2011)
27. Aslaksen, H.: Quaternionic determinants. Math. Intell. 3, 57–65 (1996)
28. Cohen, N., De Leo, S.: The quaternionic determinant. Electron. J. Linear Algebra 7, 100–111 (2000)
29. Dyson, F.J.: Quaternion determinants. Helv. Phys. Acta 45, 289–302 (1972)
30. Gelfand, I., Retakh, V.: Determinants of matrices over noncommutative rings. Funkts. Anal. Prilozh. 2, 13–35 (1991)
31. Gelfand, I., Retakh, V.: A theory of noncommutative determinants and characteristic functions of graphs. Funkts. Anal. Prilozh. 4, 33–45 (1992)
32. Chen, L.: Definition of determinant and Cramer solutions over quaternion field. Acta Math. Sin. (N.S.) 2, 171–180 (1991)
33. Chen, L.: Inverse matrix and properties of double determinant over quaternion field. Sci. China Ser. A 34, 528–540 (1991)
34. Wang, Q.: A system of four matrix equations over von Neumann regular rings and its applications. Acta Math. Sin. (Engl. Ser.) 21(2), 323–334 (2005)
35. Adler, S.L.: Quaternionic Quantum Mechanics and Quantum Fields. Oxford University Press, New York (1994)
36. Klein, A.G.: Schrödinger inviolate: neutron optical searches for violations of quantum mechanics. Physica B 151, 44–49 (1988)
37. Davies, A.J., McKellar, B.H.: Observability of quaternionic quantum mechanics. Phys. Rev. A 46, 3671–3675 (1992)
38. Jiang, T., Wei, M.: Equality constrained least squares problem over quaternion field. Appl. Math. Lett. 16, 883–888 (2003)
39. Ling, S., Wang, M., Wei, M.: Hermitian tridiagonal solution with the least norm to quaternionic least squares problem. Comput. Phys. Commun. 181, 481–488 (2010)
40. Farenick, D.R., Pidkowich, B.A.F.: The spectral theorem in quaternions. Linear Algebra Appl. 371, 75–102 (2003)