Numer. Math. (2013) 124:279–307 DOI 10.1007/s00211-013-0514-z
Numerische Mathematik
Numerical integration of positive linear differential-algebraic systems
A. K. Baum · V. Mehrmann
Received: 28 January 2012 / Revised: 7 November 2012 / Published online: 8 March 2013 © Springer-Verlag Berlin Heidelberg 2013
Abstract In the simulation of dynamical processes in economy, social sciences, biology or chemistry, the analyzed values often represent non-negative quantities like the amount of goods or individuals or the density of a chemical or biological species. Such systems are typically described by positive ordinary differential equations (ODEs) that have a non-negative solution for every non-negative initial value. Besides positivity, these processes are often subject to algebraic constraints that result from conservation laws, limitation of resources, or balance conditions, and thus the models are differential-algebraic equations (DAEs). In this work, we present conditions under which both properties, the positivity as well as the algebraic constraints, are preserved in the numerical simulation by Runge–Kutta or multistep discretization methods. Using a decomposition approach, we separate the dynamic and the algebraic equations of a given linear, positive DAE and give positivity preserving conditions for each part separately. For the dynamic part, we generalize the results for positive ODEs to DAEs using the solution representation via Drazin inverses. For the algebraic part, we use the consistency conditions of the discretization method to derive conditions under which this part of the approximation overestimates the exact solution and thus is non-negative. We analyze these conditions for some common Runge–Kutta and multistep methods and observe that for index-1 systems and stiffly accurate Runge–Kutta methods, positivity is conditionally preserved under similar conditions as for ODEs. For higher index problems, however, none of the common methods is suitable.

V. Mehrmann was supported by ERC Advanced Grant MODSIMCONMP. This study was supported by DFG Research Center Matheon, Mathematics for Key Technologies in Berlin.
A. K. Baum (B) · V. Mehrmann
Institut für Mathematik, MA 4-5, TU Berlin, Str. des 17. Juni 136, 10623 Berlin, Germany
e-mail: [email protected]

1 Introduction

We consider the numerical solution of initial value problems for linear differential-algebraic equations (DAEs) with constant coefficients
$$E\dot{x} = Ax + f, \qquad x(t_0) = x_0, \tag{1}$$
where $E, A \in \mathbb{R}^{n\times n}$. We assume that the system is positive, i.e., that every solution $x(t)$ of (1) that starts with an element-wise non-negative initial value $x_0$ stays non-negative for all times $t \ge t_0$. Positive systems arise in every application in which $x(t)$ models a quantity that does not take negative values, like the concentration of chemical and biological species or the amount of goods and individuals in economics and the social sciences. Examples are Leontief and Leslie models [14,15], compartment models [2,5,11,14,30], or semi-discretized advection-diffusion equations [1,2,4,6,12,14,26–28,37,38]. Besides positivity, these processes usually have to satisfy additional algebraic conditions resulting from limitation of resources, conservation laws or balance conditions, which extend the differential system by accessory algebraic equations. The description of the system by an input-output map or its linearization around equilibrium solutions [23] then leads to linear models of the type (1).

In order to obtain a physically meaningful simulation of such processes, one has to ensure that the numerical approximations stay non-negative and satisfy the algebraic constraints. For unconstrained problems, i.e., problems in which $E$ is invertible, this topic is well studied, see e.g. [23–25,27] and the references therein. For systems in which additional conditions like balance equations or conservation laws lead to DAEs, this question has not been analyzed so far.

The paper is organized as follows. In Sect. 2, we introduce the basic concepts for the analysis of linear, time-invariant DAEs and their discretization by one- and multistep methods. In Sect. 3, we explain and prepare the decomposition of such systems by the Drazin inverse. In Sect. 4, we present the main result for positivity preserving discretizations of ordinary differential equations (ODEs). In Sect. 5, we generalize these results to DAEs, first for the dynamic components in Sect. 5.1, then for the algebraic part in Sect. 5.2.
We illustrate our results with two examples in Sect. 6.

2 Preliminaries

A matrix $A \in \mathbb{R}^{n\times n}$ is called a $-Z$-matrix if there exists $\mu > 0$ such that $A \ge -\mu I$, where the inequality is considered entry-wise. If the eigenvalues $\lambda$ of $A$ additionally satisfy $\max_{\lambda\in\sigma(A)} |\mu + \lambda| \le \mu$, where $\sigma(A)$ denotes the spectrum of $A$, then $A$ is called a $-M$-matrix. If this inequality is strict, then $A$ is called a nonsingular $-M$-matrix and $(-A)^{-1}$ is non-negative, see e.g. [22].

A matrix pair $(E, A)$, $E, A \in \mathbb{R}^{n\times n}$, is called regular if there exist nonsingular matrices $S, T \in \mathbb{R}^{n\times n}$ such that $(E, A)$ can be written in Weierstraß canonical form
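To make the two definitions concrete, here is a small numerical check (an illustrative sketch, not part of the paper; the matrices, the helper names and the value of $\mu$ are hypothetical choices of ours):

```python
import numpy as np

def is_minus_z(A, mu, tol=1e-12):
    """The paper's -Z-matrix property for a given mu: A >= -mu*I entrywise."""
    n = A.shape[0]
    return bool(np.all(A + mu * np.eye(n) >= -tol))

def is_minus_m(A, mu, tol=1e-9):
    """-M-matrix: a -Z-matrix whose spectrum lies in the disk |mu + lambda| <= mu."""
    lam = np.linalg.eigvals(A)
    return is_minus_z(A, mu) and bool(np.max(np.abs(mu + lam)) <= mu + tol)

# compartment-type example: nonnegative off-diagonal entries, stable spectrum
A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])
B = np.array([[1.0, 0.0],
              [0.0, -1.0]])        # eigenvalue +1 escapes every disk
print(is_minus_m(A, mu=5.0))       # True
print(is_minus_m(B, mu=5.0))       # False
```

For `A`, the eigenvalues $(-5 \pm \sqrt{5})/2$ both lie in the disk of radius 5 around $-5$, so the test succeeds; the positive eigenvalue of `B` makes it fail for every $\mu$.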
$$(E, A) = \left( S \begin{bmatrix} I_d & 0 \\ 0 & N_a \end{bmatrix} T,\; S \begin{bmatrix} J_d & 0 \\ 0 & I_a \end{bmatrix} T \right),$$
where $J_d \in \mathbb{R}^{d\times d}$ is a matrix in Jordan canonical form [16], associated with the finite generalized eigenvalues $\lambda \in \sigma_{fin}(E, A)$, i.e., those $\lambda \in \mathbb{C}$ for which there exists $0 \neq v \in \mathbb{R}^n$ with $\lambda E v = A v$, and $N_a \in \mathbb{R}^{a\times a}$ is a nilpotent matrix in Jordan canonical form associated with the infinite eigenvalue of $(E, A)$. For a regular pair $(E, A)$, the index of nilpotency of $N_a$, i.e., the smallest $\nu \in \mathbb{N}_0$ with $N_a^\nu = 0$, $N_a^{\nu-1} \neq 0$, is called the (Kronecker) index $\mathrm{ind}(E, A) := \nu$ of $(E, A)$. This definition is independent of the choice of $S$ and $T$, see [32].

To give an explicit solution representation of (1), we use the Drazin inverse $E^D$, which is characterized by the following properties, see e.g. [10]:
(i) $E^D E = E E^D$,
(ii) $E^D E E^D = E^D$,
(iii) $E^D E^{\nu+1} = E^\nu$, $\nu = \mathrm{ind}(E)$,
where the index of a single matrix $E$ is defined by $\mathrm{ind}(E) := \mathrm{ind}(E, I_n)$. For every $E \in \mathbb{R}^{n\times n}$, there exists a unique Drazin inverse, and if $E$ is nonsingular, then $E^D = E^{-1}$, see [10].

For the numerical solution of (1), we denote by $x_N$ the approximation of the exact solution $x(t)$ at time $t = t_N$. As schemes, we consider implicit Runge–Kutta and linear multistep methods. For a given initial value $x_0$, an implicit Runge–Kutta method has the form, see e.g. [8,20],
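The Drazin inverse can be computed numerically, e.g. via the well-known identity $E^D = E^k (E^{2k+1})^+ E^k$ for any $k \ge \mathrm{ind}(E)$, where $(\cdot)^+$ is the Moore–Penrose pseudoinverse (a hedged sketch; the helper name `drazin` and the example matrix are ours, not the paper's):

```python
import numpy as np

def drazin(E):
    """Drazin inverse via E^D = E^k ((E^(2k+1))^+) E^k with k >= ind(E).
    k = n is always admissible since ind(E) <= n."""
    k = E.shape[0]
    Ek = np.linalg.matrix_power(E, k)
    return Ek @ np.linalg.pinv(np.linalg.matrix_power(E, 2 * k + 1)) @ Ek

E = np.array([[1.0, 1.0],
              [0.0, 0.0]])        # singular with ind(E) = 1
ED = drazin(E)

# the three defining properties (i)-(iii), here with nu = ind(E) = 1
assert np.allclose(ED @ E, E @ ED)            # (i)
assert np.allclose(ED @ E @ ED, ED)           # (ii)
assert np.allclose(ED @ E @ E, E)             # (iii)
```

Since this particular `E` is idempotent, its Drazin inverse coincides with `E` itself, which the three asserted properties confirm.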
$$x_{N+1} = x_N + \tau \sum_{i=1}^{s} \beta_i \dot{X}_{N,i}, \qquad N \in \mathbb{N}, \tag{2a}$$
$$E \dot{X}_{N,i} = A x_N + \tau \sum_{j=1}^{s} \alpha_{ij} A \dot{X}_{N,j} + f(t_N + \tau \gamma_i), \qquad i = 1, \dots, s, \tag{2b}$$
where $(\mathcal{A}, \beta, \gamma)$ denote the coefficients $\mathcal{A} := [\alpha_{ij}]$, $\beta := [\beta_i]$, $\gamma := [\gamma_i]$, $i, j = 1, \dots, s$. This can be rewritten as
$$x_{N+1} = \left[ I_n + (\tau \beta^T \otimes I_n)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} (\mathbb{1} \otimes A) \right] x_N + \sum_{i=1}^{s} (\tau \beta^T \otimes I_n)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} (e_i \otimes I_n)\, f(t_N + \tau \gamma_i), \tag{3}$$
where $\otimes$ denotes the Kronecker product defined by $A \otimes B := [a_{ij} B] \in \mathbb{R}^{rk \times ml}$ for $A \in \mathbb{R}^{r\times m}$, $B \in \mathbb{R}^{k\times l}$, see [34]. Throughout the paper, $e_i$ denotes the $i$-th unit vector and $\mathbb{1} := [1, \dots, 1]^T$ of appropriate size. If $(E, A)$ is a regular matrix pair and the coefficient matrix $\mathcal{A}$ is nonsingular, then the inverse in (3) is always well defined for small $\tau > 0$.
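As a sanity check of the Kronecker-product form (3) (an illustrative sketch with hypothetical data, not taken from the paper), the implicit Euler method ($s = 1$, $\alpha_{11} = \beta_1 = \gamma_1 = 1$) should reduce to $x_{N+1} = (E - \tau A)^{-1} E\, x_N$:

```python
import numpy as np

def rk_iteration_matrix(E, A, tau, RKA, beta):
    """Iteration matrix I_n + (tau*beta^T (x) I_n)(I_s (x) E - tau*RKA (x) A)^{-1}(1 (x) A)
    of formula (3); RKA is the s x s Runge-Kutta coefficient matrix."""
    s, n = RKA.shape[0], E.shape[0]
    one = np.ones((s, 1))
    M = np.linalg.inv(np.kron(np.eye(s), E) - tau * np.kron(RKA, A))
    return np.eye(n) + tau * np.kron(beta.T, np.eye(n)) @ M @ np.kron(one, A)

E = np.array([[2.0, 0.0], [0.0, 1.0]])
A = np.array([[-1.0, 0.5], [0.5, -1.0]])
tau = 0.1
R = rk_iteration_matrix(E, A, tau, RKA=np.array([[1.0]]), beta=np.array([[1.0]]))
# implicit Euler: E(x_{N+1} - x_N) = tau * A * x_{N+1}
assert np.allclose(R, np.linalg.inv(E - tau * A) @ E)
```

The identity used here, $(E - \tau A)^{-1}E = I + \tau(E - \tau A)^{-1}A$, follows by writing $E = (E - \tau A) + \tau A$.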
For initial values $x_0, \dots, x_k$, a linear multistep method is given by, see e.g. [13,20],
$$\sum_{j=0}^{k} \alpha_{k-j} E x_{N-j} = \tau \sum_{j=0}^{k} \beta_{k-j} A x_{N-j} + \tau \sum_{j=0}^{k} \beta_{k-j} f(t_{N-j}), \qquad N \in \mathbb{N}, \tag{4}$$
where $(\alpha, \beta)$ denotes the coefficients $\alpha := [\alpha_j]$, $\beta := [\beta_j]$, $j = 0, \dots, k$. This can be rewritten as
$$x_N = \sum_{j=0}^{k-1} (\alpha_k E - \beta_k \tau A)^{-1} (\beta_j \tau A - \alpha_j E)\, x_{N-k+j} + \tau \sum_{j=0}^{k} \beta_j (\alpha_k E - \beta_k \tau A)^{-1} f(t_{N-k+j}), \tag{5}$$
and this is well defined for small $\tau > 0$ if $\alpha_k \neq 0$ and $(E, A)$ is a regular pair.

Thus, for linear problems the discretizations correspond to linear iterations of the previous values $x_N$ and $x_{N-1}, \dots, x_{N-k}$, respectively. The iteration matrices, i.e., the matrices acting on the previous values and the inhomogeneities involving the input $f$, are denoted by
$$R(E, \tau A) := I_n + (\tau \beta^T \otimes I_n)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} (\mathbb{1} \otimes A), \tag{6a}$$
$$Q_i(E, \tau A) := (\tau \beta^T \otimes I_n)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} (e_i \otimes I_n), \qquad i = 1, \dots, s, \tag{6b}$$
for Runge–Kutta methods and by
$$r_j(E, \tau A) := -(\alpha_k E - \beta_k \tau A)^{-1} (\alpha_j E - \beta_j \tau A), \qquad j = 0, \dots, k-1, \tag{7a}$$
$$q_j(E, \tau A) := \tau \beta_j (\alpha_k E - \beta_k \tau A)^{-1}, \qquad j = 0, \dots, k, \tag{7b}$$
for multistep methods. For ODEs, the matrices (6a) and (7a) correspond to the stability functions of the respective method, and we set $R_s(\tau A) := R(I_n, \tau A)$, $Q_{s,i}(\tau A) := Q_i(I_n, \tau A)$ and $r_{s,j}(\tau A) := r_j(I_n, \tau A)$, $q_{s,j}(\tau A) := q_j(I_n, \tau A)$.

We say that a Runge–Kutta or multistep method is positive for (1) if it provides non-negative approximations $x_N$, $N \in \mathbb{N}$, for every set of non-negative starting values $x_0, \dots, x_k$. Considering the initial values $e_i$ or $0$, we see that the iterations (3), (5) are positive if and only if the iteration matrices and the inhomogeneous part are element-wise nonnegative, see e.g. [14].

3 Decomposition concept

The crucial point in our analysis is the decomposition of (1) into its differential and algebraic components. In preparation, we present some results about Drazin inverses and Kronecker products.

Lemma 1
1. Let $E, T \in \mathbb{R}^{n\times n}$. If $T$ is nonsingular, then $(T^{-1} E T)^D = T^{-1} E^D T$.
2. Let $E, C \in \mathbb{R}^{n\times n}$. If $C$ is nonsingular and $EC = CE$, then $(EC)^D = C^{-1} E^D = E^D C^{-1}$.

Proof We prove the assertions by verifying the properties (i)–(iii) of the Drazin inverse.
1. (i), (ii) follow from the corresponding properties of $E^D$ and $T^D = T^{-1}$. For (iii), we note that $(T^{-1} E T)^k = T^{-1} E^k T$ for $k \in \mathbb{N}$. Since $T$ is nonsingular, $\dim\ker((T^{-1} E T)^k) = \dim\ker(E^k)$, because $(T^{-1} E T)^k v = 0$ for $v \neq 0$ if and only if $E^k T v = 0$, i.e., $T v \in \ker(E^k)$. Furthermore, if $\nu = \mathrm{ind}(E)$, then $\ker(E^\nu) = \ker(E^{\nu+1})$, but $\ker(E^{\nu-1}) \neq \ker(E^\nu)$. This follows from the Jordan canonical form $E = V^{-1} \begin{bmatrix} J_E & 0 \\ 0 & N_E \end{bmatrix} V$ with $J_E$ nonsingular and $N_E$ nilpotent, i.e., $E^k = V^{-1} \begin{bmatrix} J_E^k & 0 \\ 0 & N_E^k \end{bmatrix} V$ and $\ker(E^k) = \{x \in \mathbb{R}^n \,|\, x = V^{-1} \begin{bmatrix} 0 \\ \tilde{x}_2 \end{bmatrix},\ \tilde{x}_2 \in \ker(N_E^k)\}$. If $k \ge \nu$, then $N_E^k = 0$, and $\ker(E^k)$ does not depend on $k$ for $k \ge \nu$, whereas $\dim(\ker(E^k)) < \dim(\ker(E^{k+1}))$ if $k < \nu$. Thus, $\mathrm{ind}(T^{-1} E T) = \mathrm{ind}(E)$, and we obtain $(T^{-1} E T)^D (T^{-1} E T)^{\nu+1} = T^{-1} E^D E^{\nu+1} T = T^{-1} E^\nu T = (T^{-1} E T)^\nu$.
2. If $EC = CE$, then $E, C, E^D, C^{-1}$ commute as well, see [32]. Then (i), (ii) follow from the corresponding properties of $E^D$ and $C^{-1}$. For (iii), we note that if $EC = CE$, then $(EC)^k = E^k C^k$ for $k \in \mathbb{N}$. Since $C$ is nonsingular, we have $\ker((EC)^k) = \ker(E^k)$, as $(EC)^k v = 0$ for $v \neq 0$ if and only if $E^k v = 0$. If $\nu = \mathrm{ind}(E)$, then $\ker((EC)^k) = \ker((EC)^{k+1})$ for $k \ge \nu$ and $\ker((EC)^k) \neq \ker((EC)^{k+1})$ for $k < \nu$, i.e., $\mathrm{ind}(EC) = \mathrm{ind}(E)$, and we get $(EC)^D (EC)^{\nu+1} = C^{-1} E^D E^{\nu+1} C^{\nu+1} = E^\nu C^\nu = (EC)^\nu$. □

Let now $A, B, C, D$ be matrices such that $AC$ and $BD$ exist; then $(A \otimes B)(C \otimes D) = AC \otimes BD$, see [33]. If $A, C$ and $B, D$ commute, respectively, then the Kronecker products commute as well. The factors in $A \otimes B$ can be reversed by the perfect shuffle matrices $\Pi_r$ and $\Pi_c$, i.e., $\Pi_c^T (A \otimes B) \Pi_r = B \otimes A$.
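Both parts of Lemma 1 are easy to test numerically (an illustrative sketch; the pseudoinverse-based Drazin routine and all matrices are our own hypothetical choices, not the paper's):

```python
import numpy as np

def drazin(M):
    # Drazin inverse via M^D = M^k ((M^(2k+1))^+) M^k with k = n >= ind(M)
    k = M.shape[0]
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

E = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])   # singular, ind(E) = 2
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])   # nonsingular
Ti = np.linalg.inv(T)

# Lemma 1.1: (T^{-1} E T)^D = T^{-1} E^D T
assert np.allclose(drazin(Ti @ E @ T), Ti @ drazin(E) @ T)

# Lemma 1.2 with a commuting nonsingular C (scalar matrix): (EC)^D = C^{-1} E^D
C = 2.0 * np.eye(3)
assert np.allclose(drazin(E @ C), np.linalg.inv(C) @ drazin(E))
```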
If $A, B$ are nonsingular, then $(A \otimes B)^{-1} = A^{-1} \otimes B^{-1}$. The same result holds for the Drazin inverse.

Lemma 2 Let $E \in \mathbb{R}^{n\times n}$ and $D \in \mathbb{R}^{m\times m}$. Then $(E \otimes D)^D = E^D \otimes D^D$.

Proof Properties (i), (ii) of the Drazin inverse are implied by the corresponding properties of $E^D$ and $D^D$, i.e., $(E^D \otimes D^D)(E \otimes D) = (E \otimes D)(E^D \otimes D^D)$ and $(E^D \otimes D^D)(E \otimes D)(E^D \otimes D^D) = E^D \otimes D^D$. For (iii), we determine the index of $E \otimes D$ as in Lemma 1. With $\ker(E \otimes D) = \{v \otimes w \,|\, v \in \ker(E),\ w \in \mathbb{R}^m\} \cup \{v \otimes w \,|\, v \in \mathbb{R}^n,\ w \in \ker(D)\}$ and transforming $E \otimes D$ to Jordan canonical form [33], i.e.,
$$E \otimes D = (T^{-1} \otimes S^{-1}) \left( \begin{bmatrix} J_E & 0 \\ 0 & N_E \end{bmatrix} \otimes \begin{bmatrix} J_D & 0 \\ 0 & N_D \end{bmatrix} \right) (T \otimes S),$$
with $J_E, J_D$ nonsingular and $N_E, N_D$ nilpotent with $\nu_E = \mathrm{ind}(E)$, $\nu_D = \mathrm{ind}(D)$, we obtain
$$E^k \otimes D^k = (T^{-1} \otimes S^{-1}) \left( \begin{bmatrix} J_E^k & 0 \\ 0 & N_E^k \end{bmatrix} \otimes \begin{bmatrix} J_D^k & 0 \\ 0 & N_D^k \end{bmatrix} \right) (T \otimes S)$$
and this means that
$$\ker(E^k \otimes D^k) = \left\{v \otimes w \,\middle|\, v = T^{-1} \begin{bmatrix} 0 \\ \tilde{v}_2 \end{bmatrix},\ \tilde{v}_2 \in \ker(N_E^k),\ w \in \mathbb{R}^m\right\} \cup \left\{v \otimes w \,\middle|\, w = S^{-1} \begin{bmatrix} 0 \\ \tilde{w}_2 \end{bmatrix},\ \tilde{w}_2 \in \ker(N_D^k),\ v \in \mathbb{R}^n\right\}.$$
Thus, $k = \max\{\nu_E, \nu_D\}$ is the smallest number with $\ker(E^k \otimes D^k) = \ker(E^{k+1} \otimes D^{k+1})$, i.e., $\nu := \mathrm{ind}(E \otimes D) = k$, and $(E \otimes D)^D (E \otimes D)^{\nu+1} = E^D E^{\nu+1} \otimes D^D D^{\nu+1} = E^\nu \otimes D^\nu = (E \otimes D)^\nu$. □

We now turn to the decomposition of (1) into its differential and algebraic components. For a regular pair $(E, A)$, $E^D E$ is a projection onto the differential components of (1), while $P_\infty := I - E^D E$ picks out the algebraic equations, see [10]. This provides an explicit solution formula for (1).

Theorem 1 (See e.g. [32]) Let $E, A \in \mathbb{R}^{n\times n}$ be a regular, commuting matrix pair with $\mathrm{ind}(E, A) = \nu$, $\nu \in \mathbb{N}_0$, and let $f \in C^\nu(\mathbb{R}, \mathbb{R}^n)$. The initial value problem $E\dot{x} = Ax + f$, $x(t_0) = x_0$ is uniquely solvable if there exists $v_0 \in \mathbb{R}^n$ with
$$x_0 = E^D E v_0 - P_\infty \sum_{\ell=0}^{\nu-1} (A^D E)^\ell A^D f^{(\ell)}(t_0).$$
If this is the case, then the solution is given for $t \ge t_0$ by
$$x(t) = e^{(t-t_0) E^D A} E^D E v_0 + \int_{t_0}^{t} e^{(t-s) E^D A} E^D f(s)\, ds - P_\infty \sum_{\ell=0}^{\nu-1} (A^D E)^\ell A^D f^{(\ell)}(t).$$

The differential part of $x$ is then given by
$$E^D E\, x(t) = e^{(t-t_0) E^D A} E^D E v_0 + \int_{t_0}^{t} e^{(t-s) E^D A} E^D f(s)\, ds,$$
using $EA = AE$ and $(E^D E)^2 = E^D E$. This corresponds to the solution of $\dot{x}_d = E^D A x_d + E^D f$ with initial condition $x_d(t_0) = E^D E v_0$. The algebraic part of $x$ is given by
$$P_\infty x(t) = -\sum_{\ell=0}^{\nu-1} P_a^\ell \bar{f}^{(\ell)}(t),$$
with $P_a := P_\infty A^D E$ and $\bar{f} := P_\infty A^D f$. This corresponds to the solution of $P_a \dot{x}_a = x_a + \bar{f}$. The initial value is fixed by the consistency condition $P_\infty x_0 = -P_\infty \sum_{\ell=0}^{\nu-1} (A^D E)^\ell A^D f^{(\ell)}(t_0)$.

Thus, $E\dot{x} = Ax + f$ is equivalent to $\dot{x}_d = E^D A x_d + E^D f$ and $P_a \dot{x}_a = x_a + \bar{f}$, provided that $(E, A)$ is regular and commuting, see also [32]. The dynamic behavior of the solution is determined by the matrix $E^D A$, which corresponds to the Jordan
matrix $J_d$ associated with the generalized finite eigenvalues, and vanishes on the algebraic components. The constrained components are fixed by the function $f$ and its derivatives and by the matrix $P_\infty A^D E$, which corresponds to the nilpotent matrix $N_a$ on $\mathrm{im}(P_\infty)$.

The commutativity assumption in Theorem 1 can be obtained for any regular pair $(\tilde{E}, \tilde{A})$ by a left transformation with $(\lambda \tilde{E} - \tilde{A})^{-1}$, where $\lambda \in \mathbb{C}$ is such that $\det(\lambda \tilde{E} - \tilde{A}) \neq 0$. More exactly, if
$$E := (\lambda \tilde{E} - \tilde{A})^{-1} \tilde{E}, \quad A := (\lambda \tilde{E} - \tilde{A})^{-1} \tilde{A}, \quad f := (\lambda \tilde{E} - \tilde{A})^{-1} \tilde{f}, \tag{8}$$
then $E, A$ commute, see [9], and $E\dot{x} = Ax + f$ has the same solution set as $\tilde{E}\dot{x} = \tilde{A}x + \tilde{f}$, see [32]. In particular, the solution formula of $E\dot{x} = Ax + f$ does not depend on the choice of $\lambda \in \mathbb{C}$.

Lemma 3 Let $(\tilde{E}, \tilde{A})$ be a regular pair with $\tilde{E}, \tilde{A} \in \mathbb{R}^{n\times n}$, $\mathrm{ind}(\tilde{E}, \tilde{A}) = \nu$. Let $E_1, A_1$ and $E_2, A_2$ be defined by (8) with $\lambda_1, \lambda_2$, respectively. Then $E_1^D E_1 = E_2^D E_2$, $E_1^D A_1 = E_2^D A_2$, $(I - E_1^D E_1) A_1^D E_1 = (I - E_2^D E_2) A_2^D E_2$ and $A_1^D E_1 = A_2^D E_2$.

Proof If $\tilde{E} = S \begin{bmatrix} I_d & 0 \\ 0 & N_a \end{bmatrix} T$, $\tilde{A} = S \begin{bmatrix} J_d & 0 \\ 0 & I_a \end{bmatrix} T$ denotes the Weierstraß canonical form of $(\tilde{E}, \tilde{A})$, then the scaled pair $(E_1, A_1)$ can be written as, see [43],
$$E_1 = T^{-1} \begin{bmatrix} \Delta_d^{-1} & 0 \\ 0 & \Delta_a^{-1} N_a \end{bmatrix} T, \quad A_1 = T^{-1} \begin{bmatrix} \Delta_d^{-1} J_d & 0 \\ 0 & \Delta_a^{-1} \end{bmatrix} T, \tag{9}$$
where $\Delta_d := (I_d - \lambda_1 J_d)$ and $\Delta_a := (I_a - \lambda_1 N_a)$. The Drazin inverses are
$$E_1^D = T^{-1} \begin{bmatrix} \Delta_d & 0 \\ 0 & 0 \end{bmatrix} T, \quad A_1^D = T^{-1} \begin{bmatrix} J_d^D \Delta_d & 0 \\ 0 & \Delta_a \end{bmatrix} T, \tag{10}$$
from which we obtain
$$E_1^D E_1 = T^{-1} \begin{bmatrix} I_d & 0 \\ 0 & 0 \end{bmatrix} T, \quad E_1^D A_1 = T^{-1} \begin{bmatrix} J_d & 0 \\ 0 & 0 \end{bmatrix} T. \tag{11}$$
Thus, $E_1^D E_1$, $I_n - E_1^D E_1$ and $E_1^D A_1$ are independent of the chosen $\lambda_1 \in \mathbb{C}$. The proof is similar for the other terms. □

Thus, besides achieving commutativity, the scaling by $(\lambda \tilde{E} - \tilde{A})^{-1}$ allows to construct a simultaneous similarity transformation of $(E, A)$ to block diagonal form. We therefore assume in the following that $(E, A)$ is a regular, commuting pair and refer to the scaled quantities in (9). Lemma 3, furthermore, implies that the quantities arising in Runge–Kutta and multistep methods are also invariant under scaling from the left, i.e., with $E, A, \tilde{E}, \tilde{A}$ as in (8), we have $R(\tilde{E}, \tau \tilde{A}) = R(E, \tau A)$ and $Q_i(\tilde{E}, \tau \tilde{A})\tilde{f} = Q_i(E, \tau A) f$, see [3]. For the projection of the iteration matrices on the differential and algebraic components, we observe the following.
Lemma 4 Let $E \in \mathbb{R}^{n\times n}$. Then $E^D E$ and $P_\infty$ are invariant under application of the Drazin inverse, i.e., $E^D E = (E^D E)^D$ and $P_\infty = P_\infty^D$.

Proof We verify the properties (i)–(iii) of the Drazin inverse for the candidate $(E^D E)^D = E^D E$. Properties (i) and (ii) are immediate, since $E^D E$ is idempotent: $(E^D E)(E^D E) = (E^D E)(E^D E)$ and $(E^D E)(E^D E)(E^D E) = (E^D E)^3 = E^D E$. For (iii), we note that the Jordan canonical form of $E$ is given by $E = T^{-1} \begin{bmatrix} J_E & 0 \\ 0 & N_E \end{bmatrix} T$ with $J_E$ nonsingular and $N_E$ nilpotent. Then $E^D = T^{-1} \begin{bmatrix} J_E^{-1} & 0 \\ 0 & 0 \end{bmatrix} T$ and $E^D E = T^{-1} \begin{bmatrix} I & 0 \\ 0 & 0 \end{bmatrix} T$, i.e., $\mathrm{ind}(E^D E) \le 1$, and hence $(E^D E)(E^D E)^2 = E^D E\, E^D E = E^D E$. The proof for $P_\infty$ is analogous. □
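To close this section, the decomposition of Theorem 1 can be sanity-checked on a small index-1 example (an illustrative sketch with hypothetical data, not an example from the paper): for $E = \mathrm{diag}(1,0)$, $A = -I$ and constant $f$, the algebraic part must reproduce the constraint $x_2 = f_2$.

```python
import numpy as np

# index-1 commuting pair:  x1' = -x1 + f1,  0 = -x2 + f2
E = np.diag([1.0, 0.0])
A = -np.eye(2)
f = np.array([1.0, 2.0])          # constant inhomogeneity, all derivatives 0

ED = np.diag([1.0, 0.0])          # Drazin inverse of E (E is idempotent)
Pinf = np.eye(2) - ED @ E         # projector onto the algebraic components
AD = np.linalg.inv(A)             # A nonsingular => A^D = A^{-1}

# algebraic part for nu = 1:  P_inf x(t) = -P_inf A^D f(t)
x_alg = -Pinf @ AD @ f
assert np.allclose(x_alg, [0.0, 2.0])     # constraint x2 = f2 is recovered
```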
4 Positivity preservation for ODEs

For ODEs, positivity is characterized by the sign pattern of the system matrix $A$. Since $e^{tA} \ge 0$ for $t \ge 0$ if and only if $A \in \mathbb{R}^{n\times n}$ is a $-Z$-matrix, see e.g. [42], $\dot{x} = Ax + f$ is positive if and only if $A$ is a $-Z$-matrix and $f \ge 0$, see e.g. [14,27]. This follows because the evolution operator $e^{tA}$ can always be approximated by the linearization $I_n + \tau A$, $\tau = \frac{t}{N}$, $N \in \mathbb{N}$, since $e^{tA} = (e^{\tau A})^N$. Thus, $A \ge -\mu I_n$ for some $\mu > 0$ is sufficient and necessary for $e^{tA} \ge 0$.

For the rational functions arising in Runge–Kutta and multistep methods this property is lost, and their full analytic expansion must be shown to be nonnegative. However, it is sufficient that the discretization stepsize is bounded by the radius of absolute monotonicity, see [7]. This is defined as the largest real number $\gamma_+ \ge 0$ such that the stability functions are absolutely monotonic on the interval $[-\gamma_+, 0]$, i.e., the stability functions and their derivatives are nonnegative and have no poles in $[-\gamma_+, 0]$.

Theorem 2 [7,27] Let $\dot{x} = Ax + f$, where $A$ is a $-M$-matrix with $\mu > 0$.
1. Consider a Runge–Kutta method with coefficients $(\mathcal{A}, \beta, \gamma)$ and stability function $R_s(z) = 1 + z\beta^T(I_s - z\mathcal{A})^{-1}\mathbb{1}$ that is absolutely monotonic on $[-\gamma_+, 0]$ for $\gamma_+ \ge 0$. Then the method is positive for $\dot{x} = Ax$ if $\tau \le \frac{\gamma_+}{\mu}$. If additionally $Q_{s,i}(z) = \beta^T(I_s - z\mathcal{A})^{-1}e_i$ is absolutely monotonic on $[-\gamma_+, 0]$ for $i = 1, \dots, s$, then the method is also positive for $\dot{x} = Ax + f$ if $\tau \le \frac{\gamma_+}{\mu}$.
2. Consider a multistep method with coefficients $(\alpha, \beta)$, $\alpha_k, \beta_k \neq 0$, for which the stability functions $r_{s,j}(z) = -\frac{\alpha_j - \beta_j z}{\alpha_k - \beta_k z}$, $j = 0, \dots, k-1$, are absolutely monotonic on $[-\gamma_+, 0]$ for $\gamma_+ \ge 0$. Then the method is positive for $\dot{x} = Ax$ if $\tau \le \frac{\gamma_+}{\mu}$. If additionally $\frac{\beta_k}{\alpha_k} \ge 0$ and $\frac{\beta_{j-1}}{\alpha_k} \ge 0$ hold for $j = 1, \dots, k$, then the method is also positive for $\dot{x} = Ax + f$ if $\tau \le \frac{\gamma_+}{\mu}$.

The concept of absolute monotonicity is further discussed in Sect. 5.1, where Theorem 2 is generalized to DAEs.
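A minimal numerical illustration of Theorem 2 (our own sketch, not from the paper): the backward Euler stability function $R_s(z) = 1/(1-z)$ is absolutely monotonic on $(-\infty, 0]$, i.e. $\gamma_+ = \infty$, so $(I - \tau A)^{-1} \ge 0$ for every $\tau > 0$, whereas explicit Euler ($R_s(z) = 1 + z$, $\gamma_+ = 1$) is positive only for $\tau \le 1/\mu$.

```python
import numpy as np

A = np.array([[-2.0, 1.0],
              [1.0, -3.0]])                    # -M-matrix with mu = 3
for tau in [0.1, 1.0, 10.0, 100.0]:
    R = np.linalg.inv(np.eye(2) - tau * A)     # backward Euler iteration matrix
    assert np.all(R >= 0)                      # unconditionally nonnegative

# explicit Euler: nonnegative exactly up to the bound tau <= gamma_+/mu = 1/3
assert np.all(np.eye(2) + (1.0 / 3.0) * A >= 0)
assert not np.all(np.eye(2) + 0.5 * A >= 0)
```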
5 Positivity preservation for DAEs

To get an idea which properties characterize positivity, we first consider the continuous problem.

Definition 1 A matrix pair $(E, A)$ in $\mathbb{R}^{n\times n}$ is called a Z-matrix pair if $E^D E \ge 0$ and if there exists $\mu > 0$ with $E^D A \le \mu E^D E$. Moreover, $(E, A)$ is called a $-Z$-matrix pair if $(E, -A)$ is a Z-matrix pair.

Like for single matrices, the $-Z$ property ensures the nonnegativity of the evolution operator $e^{t E^D A} E^D E$.

Lemma 5 [43] Let $(E, A)$ be a regular, commuting matrix pair with $E, A \in \mathbb{R}^{n\times n}$. Then $e^{t E^D A} E^D E \ge 0$ for $t \ge 0$ if and only if $(E, A)$ is a $-Z$-matrix pair.

With Lemma 5, the positivity of DAEs is characterized similarly to ODEs.

Theorem 3 [43] Let $(E, A)$ with $E, A \in \mathbb{R}^{n\times n}$ be a regular, commuting matrix pair with $\mathrm{ind}(E, A) = \nu$, $\nu \in \mathbb{N}_0$, and let $f \in C^\nu(\mathbb{R}, \mathbb{R}^n)$. If $E^D E \ge 0$ and $P_\infty \sum_{\ell=0}^{\nu-1} (A^D E)^\ell A^D f^{(\ell)}(t) \le 0$ for $t \ge 0$, then $E\dot{x} = Ax + f$ is positive if and only if $(E, A)$ is a $-Z$-matrix pair and $E^D f(t) \ge 0$ for $t \ge 0$.

By assuming that $E^D E \ge 0$, we ensure that the differential components $E^D E x$ are nonnegative if $x \ge 0$. The algebraic components $P_\infty x$ are nonnegative by assumption.

5.1 Positivity preservation for the differential part

To obtain an explicit formula for $E^D E x_{N+1}$, we write the projected inverses in (3), (5) in terms of the Drazin inverse.

Lemma 6 Let $(E, A)$ be a regular, commuting pair with $\mathrm{ind}(E, A) = \nu$.
1. If $\mathcal{A} \in \mathbb{R}^{s\times s}$, then $(I_s \otimes E^D E)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} = \left((I_s \otimes E^D E)(I_s \otimes E - \tau \mathcal{A} \otimes A)\right)^D = (I_s \otimes E^D)(I_s \otimes E^D E - \tau \mathcal{A} \otimes E^D A)^D$.
2. If $\alpha_k, \beta_k \neq 0$, then $E^D E(\alpha_k E - \beta_k \tau A)^{-1} = \left(E^D E(\alpha_k E - \beta_k \tau A)\right)^D = E^D (\alpha_k E - \beta_k \tau A)^D$.

Proof 1. For the first identity, we note that $(E^D E)^D = E^D E$ and $(I_s \otimes E^D E) = (I_s \otimes E^D E)^D$ by Lemma 2. Since $(I_s \otimes E^D E)$ and $(I_s \otimes E - \tau \mathcal{A} \otimes A)$ commute, we see from Lemma 1 that
$$(I_s \otimes E^D E)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} = \left((I_s \otimes E^D E)(I_s \otimes E - \tau \mathcal{A} \otimes A)\right)^D = \left((I_s \otimes E)(I_s \otimes E^D E - \tau \mathcal{A} \otimes E^D A)\right)^D. \tag{12}$$
For the second identity, we insert the transformation (9) into (12) and obtain
$$\left((I_s \otimes E)(I_s \otimes E^D E - \tau \mathcal{A} \otimes E^D A)\right)^D = \left( (I_s \otimes T^{-1}) \left( I_s \otimes \begin{bmatrix} \Delta_d^{-1} & 0 \\ 0 & \Delta_a^{-1} N_a \end{bmatrix} \right) \left( I_s \otimes \begin{bmatrix} I_d & 0 \\ 0 & 0 \end{bmatrix} - \tau \mathcal{A} \otimes \begin{bmatrix} J_d & 0 \\ 0 & 0 \end{bmatrix} \right) (I_s \otimes T) \right)^D.$$
Permuting the Kronecker factors by the perfect shuffle matrix $\Pi$, we obtain
$$\left( (I_s \otimes T^{-1}) \Pi^T \left( \begin{bmatrix} \Delta_d^{-1} & 0 \\ 0 & \Delta_a^{-1} N_a \end{bmatrix} \otimes I_s \right) \left( \begin{bmatrix} I_d & 0 \\ 0 & 0 \end{bmatrix} \otimes I_s - \begin{bmatrix} J_d & 0 \\ 0 & 0 \end{bmatrix} \otimes \tau \mathcal{A} \right) \Pi (I_s \otimes T) \right)^D = (I_s \otimes T^{-1}) \Pi^T \begin{bmatrix} (\Delta_d^{-1} \otimes I_s)(I_{ds} - J_d \otimes \tau \mathcal{A}) & 0 \\ 0 & 0 \end{bmatrix}^D \Pi (I_s \otimes T), \tag{13}$$
with $I_{ds} := I_d \otimes I_s$, where Lemma 1 allows to move the similarity transformation $\Pi(I_s \otimes T)$ out of the Drazin inverse. Since $\Delta_d^{-1} \otimes I_s$ and $I_{ds} - J_d \otimes \tau \mathcal{A}$ are both nonsingular, the Drazin inverse can be written as
$$\begin{bmatrix} (\Delta_d^{-1} \otimes I_s)(I_{ds} - J_d \otimes \tau \mathcal{A}) & 0 \\ 0 & 0 \end{bmatrix}^D = \begin{bmatrix} \Delta_d \otimes I_s & 0 \\ 0 & 0 \end{bmatrix} \begin{bmatrix} I_{ds} - J_d \otimes \tau \mathcal{A} & 0 \\ 0 & 0 \end{bmatrix}^D,$$
which is verified by checking the properties of the Drazin inverse. Inserting
$$(I_s \otimes T^{-1}) \Pi^T \begin{bmatrix} \Delta_d \otimes I_s & 0 \\ 0 & 0 \end{bmatrix} \Pi (I_s \otimes T) = I_s \otimes T^{-1} \begin{bmatrix} \Delta_d & 0 \\ 0 & 0 \end{bmatrix} T = I_s \otimes E^D$$
and
$$(I_s \otimes T^{-1}) \Pi^T \begin{bmatrix} I_{ds} - J_d \otimes \tau \mathcal{A} & 0 \\ 0 & 0 \end{bmatrix}^D \Pi (I_s \otimes T) = \left( I_s \otimes T^{-1} \begin{bmatrix} I_d & 0 \\ 0 & 0 \end{bmatrix} T - \tau \mathcal{A} \otimes T^{-1} \begin{bmatrix} J_d & 0 \\ 0 & 0 \end{bmatrix} T \right)^D = (I_s \otimes E^D E - \tau \mathcal{A} \otimes E^D A)^D$$
into (12) finally yields the second identity. Note that for general $E, C \in \mathbb{R}^{n\times n}$ it is not true that $(EC)^D = E^D C^D$, see [41]; here we exploit the block diagonal structure that is given for $E$ and $A$ by (9).
2. Setting $s = 1$, $\mathcal{A} := \frac{\beta_k}{\alpha_k}$, the assertion follows from 1. with $E^D E(\alpha_k E - \beta_k \tau A)^{-1} = \frac{1}{\alpha_k} E^D E\left(E - \frac{\tau\beta_k}{\alpha_k} A\right)^{-1}$. □
As for ODEs, we restrict the class of problems regarding the eigenvalues. Definition 2 A matrix pair (E, A) in Rn×n is called M-matrix pair, if (E, A) is a Z-matrix pair and there exists μ > 0 such that maxλ∈σ f in (E,A) |μ − λ| ≤ μ. Moreover, (E, A) is called -M-matrix pair, if (E, −A) is an M-matrix pair.
If the inequality on the finite eigenvalues is strict, then $(E, A)$ is called a strict M-matrix pair, and we can generalize the property of a nonnegative inverse in terms of the Drazin inverse.

Lemma 7 If $(E, A)$ is a regular, commuting, strict M-matrix pair in $\mathbb{R}^{n\times n}$, then $(E^D A)^D \ge 0$.

Proof If $(E, A)$ is a strict M-matrix pair, then there exists $\mu > 0$ such that $\mu E^D E - E^D A \ge 0$ and $\max_{\lambda\in\sigma_{fin}(E,A)} |\mu - \lambda| < \mu$. Then $B := \mu E^D E - E^D A \ge 0$ and $\rho(B) < \mu$, such that $(\mu I_n - B)^{-1} = \sum_{\ell=0}^{\infty} \frac{B^\ell}{\mu^{\ell+1}} \ge 0$ for $\mu \neq 0$. With $E^D E \ge 0$, we have that $E^D E(\mu I_n - B)^{-1} \ge 0$, and since $E^D E = (E^D E)^D$, it follows that $0 \le E^D E\left(\mu I_n - (\mu E^D E - E^D A)\right)^{-1} = \left(E^D E(\mu I_n - \mu E^D E + E^D A)\right)^D = (E^D A)^D$. □

Lemma 8 If $(E, A)$ is a $-M$-matrix pair with $\mu > 0$, then $(E, \kappa E - A)$ is a strict M-matrix pair with $\mu + \kappa$ for any $\kappa > 0$.

Proof If $(E, A)$ is a $-M$-matrix pair, then there exists $\mu > 0$ such that $\kappa E^D E - E^D A \le \mu E^D E + \kappa E^D E$ for $\kappa > 0$, i.e., $E^D(\kappa E - A) \le (\kappa + \mu) E^D E$. Furthermore, $\max_{\lambda\in\sigma_{fin}(E,A)} |\mu + \lambda| \le \mu$, such that $\max_{\lambda\in\sigma_{fin}(E,A)} |\mu + \lambda| < \mu + \kappa$ for $\kappa > 0$; note that the finite eigenvalues of $(E, \kappa E - A)$ are $\kappa - \lambda$, $\lambda \in \sigma_{fin}(E, A)$, and $|(\mu + \kappa) - (\kappa - \lambda)| = |\mu + \lambda|$. Hence, $(E, \kappa E - A)$ is a strict M-matrix pair with $\mu + \kappa > 0$ for $\kappa > 0$. □

We now give conditions for a positive approximation of $E^D E x$.

Theorem 4 Let $E\dot{x} = Ax + f$ be positive, $(E, A)$ a regular, commuting matrix pair with $\nu = \mathrm{ind}(E, A)$ and $f \in C^\nu(\mathbb{R}, \mathbb{R}^n)$. If $(E, A)$ is a $-M$-matrix pair with $\mu > 0$, then the following assertions hold:
1. Consider a Runge–Kutta method with coefficients $(\mathcal{A}, \beta, \gamma)$, $\mathcal{A}$ nonsingular, and stability function $R_s(z) = 1 + z\beta^T(I_s - z\mathcal{A})^{-1}\mathbb{1}$ that is absolutely monotonic on $[-\gamma_+, 0]$ for $\gamma_+ \ge 0$. Then the method is positive for $E^D E\dot{x} = E^D Ax$ if $\tau \le \frac{\gamma_+}{\mu}$. If additionally $Q_{s,i}(z) = \beta^T(I_s - z\mathcal{A})^{-1}e_i$ is absolutely monotonic on $[-\gamma_+, 0]$ for $i = 1, \dots, s$, then the method is positive for $E^D E\dot{x} = E^D Ax + E^D f$ if $\tau \le \frac{\gamma_+}{\mu}$.
2. Consider a multistep method with coefficients $(\alpha, \beta)$, $\alpha_k, \beta_k \neq 0$, for which the stability functions $r_{s,j}(z) = -\frac{\alpha_j - \beta_j z}{\alpha_k - \beta_k z}$, $j = 0, \dots, k-1$, are absolutely monotonic on $[-\gamma_+, 0]$ for $\gamma_+ \ge 0$. Then the method is positive for $E^D E\dot{x} = E^D Ax$ if the stepsize satisfies $\tau \le \frac{\gamma_+}{\mu}$. If additionally $\frac{\beta_j}{\alpha_k} \ge 0$ for $j = 0, \dots, k-1$ and $\frac{\beta_k}{\alpha_k} > 0$ hold, then the method is positive for $E^D E\dot{x} = E^D Ax + E^D f$ if $\tau \le \frac{\gamma_+}{\mu}$.
Proof As for ODEs, we use the concept of absolute monotonicity and identify the iteration matrices with a nonnegative Taylor series, cp. [27].
1. The differential part of a Runge–Kutta discretization is given by the projection $E^D E x_{N+1} = E^D E R(E, \tau A) x_N + \sum_{i=1}^{s} E^D E Q_i(E, \tau A) f(t_N + \tau\gamma_i)$, with $R(E, \tau A)$, $Q_i(E, \tau A)$ as in (6). Using Lemma 6, we obtain
$$E^D E R(E, \tau A) = E^D E + (\tau\beta^T \otimes A)(I_s \otimes E^D E)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1}(\mathbb{1} \otimes E^D E) = E^D E + (\tau\beta^T \otimes E^D A)(I_s \otimes E^D E - \tau \mathcal{A} \otimes E^D A)^D (\mathbb{1} \otimes E^D E).$$
Thus, for a given problem $E\dot{x} = Ax + f$, we can consider $E^D E R(E, \tau A)$ as a function in the scaled matrix $\tau E^D A$. Setting $R_d(\tau E^D A) := E^D E R(E, \tau A)$, we can thus proceed analogously to the ODE case, see e.g. [27], and formally expand $R_d(\tau E^D A)$ into a Taylor series. Due to the projection onto $\mathrm{im}(E^D E)$, we center the series in $-\tau\mu E^D E$, and with $B := \mu E^D E + E^D A$ we make the ansatz
$$R_d(\tau E^D A) = \sum_{k=0}^{\infty} \frac{1}{k!} R_d^{(k)}(-\tau\mu E^D E)\,(\tau B)^k. \tag{14}$$
Since
$$R_d(-\tau\mu E^D E) = E^D E - (\tau\mu\beta^T \otimes E^D E)(I_s \otimes E^D E + \tau\mu \mathcal{A} \otimes E^D E)^D (\mathbb{1} \otimes I_n) = E^D E - (\tau\mu\beta^T \otimes E^D E)\left((I_s + \tau\mu \mathcal{A})^{-1} \otimes (E^D E)^D\right)(\mathbb{1} \otimes I_n) = \left(1 - \tau\mu\beta^T(I_s + \tau\mu \mathcal{A})^{-1}\mathbb{1}\right) E^D E,$$
it follows that $R_d(-\tau\mu E^D E) = R_s(-\tau\mu) E^D E$. With $E^D E B = B$, we thus can write (14) as a matrix power series with scalar coefficients, i.e.,
$$\sum_{k=0}^{\infty} \frac{1}{k!} R_d^{(k)}(-\tau\mu E^D E)\,(\tau B)^k = \sum_{k=0}^{\infty} \frac{1}{k!} R_s^{(k)}(-\tau\mu)\,(\tau B)^k.$$
This series converges if $\rho(\tau B) < r$, where $r \ge 0$ denotes the radius of convergence of the scalar series $\sum_{k=0}^{\infty} R_s^{(k)}(-\tau\mu) \frac{\xi^k}{k!}$, $\xi \in \mathbb{C}$, see e.g. [40]. By the $-M$-matrix pair property of $(E, A)$ we know that $\rho(B) = \max_{\lambda\in\sigma_{fin}(E,A)} |\mu + \lambda| \le \mu$, i.e., $\rho(\tau B) < r$ holds for $\tau\mu \le \gamma_+$ if $R_s$ is absolutely monotonic on $[-\gamma_+, 0]$. Moreover, $B \ge 0$ if $(E, A)$ is a $-M$-matrix pair, and $R_s^{(k)}(-\tau\mu) \ge 0$ for $\tau\mu \le \gamma_+$ by the monotonicity assumption. This proves that $R_d(\tau E^D A) \ge 0$ for $\tau\mu \le \gamma_+$.

The nonnegativity of $E^D E Q_i(E, \tau A)$ is proved similarly. With $E^D E Q_i(E, \tau A) = (\tau\beta^T \otimes E^D)(I_s \otimes E^D E - \tau \mathcal{A} \otimes E^D A)^D (e_i \otimes E^D E)$, the projection $E^D E Q_i(E, \tau A)$ is a matrix function in $\tau E^D A$ as well. Setting $Q_{d,i}(\tau E^D A) := E^D E Q_i(E, \tau A)$ and noting that $Q_{d,i}(-\tau\mu E^D E) = \left(\beta^T(I_s + \tau\mu \mathcal{A})^{-1} e_i\right) E^D = Q_{s,i}(-\tau\mu) E^D$, we obtain a similar expansion as above, i.e.,
$$Q_{d,i}(\tau E^D A) = \sum_{k=0}^{\infty} \frac{1}{k!} Q_{d,i}^{(k)}(-\tau\mu E^D E)\,(\tau B)^k = \sum_{k=0}^{\infty} \frac{1}{k!} Q_{s,i}^{(k)}(-\tau\mu)\,(\tau B)^k E^D.$$
The convergence and nonnegativity of this power series is again implied by the $-M$-matrix pair property of $(E, A)$ and the absolute monotonicity of $Q_{s,i}(z)$. Then the overall inhomogeneity is given by
$$\sum_{i=1}^{s} E^D E Q_i(E, \tau A) f(t_N + \tau\gamma_i) = \sum_{i=1}^{s} \sum_{k=0}^{\infty} \frac{1}{k!} Q_{s,i}^{(k)}(-\tau\mu)\,(\tau B)^k E^D f(t_N + \tau\gamma_i),$$
which is nonnegative, since $E^D f \ge 0$ for a positive DAE.
2. For a multistep method, the differential part of the iteration (5) is given by $E^D E x_N = \sum_{j=0}^{k-1} E^D E r_j(E, \tau A) x_{N-k+j} + \sum_{j=0}^{k} E^D E q_j(E, \tau A) f(t_{N-k+j})$, where $r_j(E, \tau A)$ and $q_j(E, \tau A)$ are given by (7). By Lemma 6, we obtain
$$E^D E r_j(E, \tau A) = -(\alpha_j E^D E - \beta_j \tau E^D A)(\alpha_k E^D E - \beta_k \tau E^D A)^D.$$
Again, we consider this as a function in $\tau E^D A$, and with $r_j(-\tau\mu E^D E) = r_{s,j}(-\tau\mu) E^D E$, we obtain the expansion
$$E^D E r_j(E, \tau A) = \sum_{l=0}^{\infty} \frac{1}{l!} r_{s,j}^{(l)}(-\tau\mu)\,(\tau B)^l,$$
where $r_{s,j}$ is the $j$-th stability function. Like for Runge–Kutta methods, the convergence and nonnegativity of this series follows from the $-M$-matrix pair property of $(E, A)$ and the absolute monotonicity of $r_{s,j}$. For $q_j(E, \tau A)$, the inverse can be written as
$$E^D E(\alpha_k E - \beta_k \tau A)^{-1} = \frac{1}{\alpha_k} E^D \left(E^D E - \frac{\beta_k}{\alpha_k} \tau E^D A\right)^D.$$
If $\frac{\beta_k}{\alpha_k} > 0$, then this is nonnegative for $\tau > 0$: as $(E, A)$ is a $-M$-matrix pair, $\left(E, E - \frac{\beta_k}{\alpha_k}\tau A\right)$ is a strict M-matrix pair with a nonnegative Drazin inverse, cp. Lemmas 7 and 8. If $\frac{\beta_j}{\alpha_k} \ge 0$, then $E^D E q_j(E, \tau A) f(t_{N-k+j}) \ge 0$, since $E^D f \ge 0$ for a positive DAE. □
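The nonnegativity mechanism of Lemma 7 used in this proof can be verified numerically on a small commuting pair (an illustrative sketch; the pseudoinverse-based `drazin` helper and the chosen pair are ours, not the paper's):

```python
import numpy as np

def drazin(M):
    # Drazin inverse via M^D = M^k ((M^(2k+1))^+) M^k with k = n >= ind(M)
    k = M.shape[0]
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

# (E, A) is regular, commuting and a strict M-matrix pair for mu = 3:
# E^D E = diag(1,0) >= 0, E^D A = diag(3,0) <= 3*E^D E, and |3 - 3| < 3.
E = np.diag([1.0, 0.0])
A = np.diag([3.0, 1.0])
EDA = drazin(E) @ A
assert np.all(drazin(EDA) >= -1e-12)     # (E^D A)^D = diag(1/3, 0) >= 0
```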
In Table 1, the absolute monotonicity radii of some commonly used Runge–Kutta and multistep methods are presented. The values for the Runge–Kutta methods are taken from [18], where the monotonicity radius is computed for general Padé approximations. In [7,18], it is shown that $\gamma_+ = 0$ for stage numbers $s = 2l$, $l \in \mathbb{N}$, and that $\gamma_+ = \infty$ only holds for methods of convergence order $p = 1$. By Theorem 4, we are restricted to implicit methods.

To explain the failure of the BDF methods, we note that for BDF we have $\beta_j = 0$ for $j < k$, so the stability functions are given by $r_j(z) = -\frac{\alpha_j}{\alpha_k - \beta_k z}$. Since $\alpha_k > 0$ for $k = 1, \dots, 6$, the denominator is nonnegative for any $z \le 0$, while the $\alpha_j$ have an alternating sign pattern. Thus, $\gamma_+ = 0$ for every BDF method with $k \ge 2$.
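The failure of BDF2 ($\gamma_+ = 0$) can be seen directly on the scalar test problem $\dot{x} = -x$ (an illustrative sketch of ours, not an example from the paper):

```python
import numpy as np

lam, tau = 1.0, 2.0
x0, x1 = 1.0, np.exp(-lam * tau)          # exact, nonnegative starting values
# BDF2 for x' = -lam*x:  (3/2)x_N - 2 x_{N-1} + (1/2)x_{N-2} = -tau*lam*x_N
x2 = (2.0 * x1 - 0.5 * x0) / (1.5 + tau * lam)
assert x2 < 0                             # positivity is lost in one step

# backward Euler (gamma_+ = infinity) stays nonnegative for the same stepsize
assert x1 / (1.0 + tau * lam) >= 0
```

Whenever $x_{N-1}/x_{N-2} < 1/4$, which here happens because $e^{-\lambda\tau} < 1/4$ for $\lambda\tau > \ln 4$, the BDF2 step turns negative.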
Table 1 Absolute monotonicity radius of Radau IIA, Lobatto IIIC and BDF methods

                 s, k = 1     2          3              4          5              6
Radau IIA        γ+ = ∞    γ+ = 0    γ+ = 1.7034    γ+ = 0    γ+ = 1.7940    γ+ = 0
Lobatto IIIC        –      γ+ = 0    γ+ = 1.1954    γ+ = 0    γ+ = 1.4242    γ+ = 0
BDF              γ+ = ∞    γ+ = 0    γ+ = 0         γ+ = 0    γ+ = 0         γ+ = 0
For many problems, the assumption of absolute monotonicity is too severe, see Sect. 6. There are several approaches to avoid these stepsize restrictions, e.g. by splitting methods [23] or by including information about the initial values [24,25]. Using the decomposition approach, these ideas can be extended to DAEs by applying them to the differential components. However, there are problems for which the stepsize condition is necessary, see e.g. [27]. The monotonicity radius also arises in the context of contractivity preserving one- and multistep methods, see [17,18,21,29,31,35,36,39].
5.2 Positivity preservation for the algebraic part

To obtain an explicit formula for $P_\infty x_{N+1}$, we expand the projected inverses in (3), (5) in terms of the Drazin inverse.

Lemma 9 Let $(E, A)$ be a regular, commuting pair with $\mathrm{ind}(E, A) = \nu$.
1. Let $(\mathcal{A}, \beta, \gamma)$ be the coefficients of a Runge–Kutta method with $\mathcal{A}$ nonsingular. Then $(I_s \otimes P_\infty)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} = -\sum_{\ell=0}^{\nu-1} (\tau \mathcal{A})^{-(\ell+1)} \otimes P_\infty P_a^\ell A^D$.
2. Let $(\alpha_k, \beta_k)$ be the coefficients of a multistep method with $\alpha_k, \beta_k \neq 0$. Then $P_\infty(\alpha_k E - \beta_k \tau A)^{-1} = -\frac{1}{\alpha_k} \sum_{\ell=0}^{\nu-1} \left(\frac{\alpha_k}{\tau\beta_k}\right)^{\ell+1} P_\infty P_a^\ell A^D$.

Proof 1. As for Lemma 6, we write the projected inverse in terms of the Drazin inverse. Using $P_\infty^D = P_\infty$ and $P_\infty A^D A = P_\infty$, which hold for every regular, commuting matrix pair $(E, A)$ [32], we obtain
$$(I_s \otimes P_\infty)(I_s \otimes E - \tau \mathcal{A} \otimes A)^{-1} = \left((I_s \otimes P_\infty A^D A)(I_s \otimes E - \tau \mathcal{A} \otimes A)\right)^D = \left((I_s \otimes P_\infty A)(I_s \otimes P_a - \tau \mathcal{A} \otimes P_\infty)\right)^D = (I_s \otimes P_\infty A^D)(I_s \otimes P_a - \tau \mathcal{A} \otimes P_\infty)^D. \tag{15}$$
To filter out the components of $(I_s \otimes P_a - \tau \mathcal{A} \otimes P_\infty)^D$ lying in $\mathrm{im}(P_\infty)$, we write $P_\infty = T^{-1} \begin{bmatrix} 0 & 0 \\ 0 & I_a \end{bmatrix} T$ and $P_a = T^{-1} \begin{bmatrix} 0 & 0 \\ 0 & N_a \end{bmatrix} T$, cp. (10), (11), and using Lemma 1, this implies that
(I_s ⊗ P_a − τA ⊗ P∞)^D = ( I_s ⊗ T^{−1} [0 0; 0 N_a] T − τA ⊗ T^{−1} [0 0; 0 I_a] T )^D
 = (I_s ⊗ T^{−1}) ( I_s ⊗ [0 0; 0 N_a] − τA ⊗ [0 0; 0 I_a] )^D (I_s ⊗ T).   (16)

To separate the differential and algebraic parts, we use the perfect shuffle Π, i.e.,

(I_s ⊗ P_a − τA ⊗ P∞)^D = (I_s ⊗ T^{−1}) Π^T ( [0 0; 0 N_a] ⊗ I_s − [0 0; 0 I_a] ⊗ τA )^D Π (I_s ⊗ T),

and expand N := (N_a ⊗ I_s − I_a ⊗ τA)^D in a power series. Since A is nonsingular, we have that N = −(I_a ⊗ (τA)^{−1})(I_a ⊗ I_s − N_a ⊗ (τA)^{−1})^D. The eigenvalues of N_a ⊗ (τA)^{−1} are given by η_ϑ/(τσ_i), ϑ = 1, …, a, i = 1, …, s, where η_ϑ and σ_i are the eigenvalues of N_a and A. But since N_a is nilpotent, i.e., η_ϑ = 0, ϑ = 1, …, a, we obtain ρ(N_a ⊗ (τA)^{−1}) = 0. By the Neumann Lemma, see e.g. [42], (I_a ⊗ I_s − N_a ⊗ (τA)^{−1})^{−1} exists and

(I_a ⊗ I_s − N_a ⊗ (τA)^{−1})^{−1} = Σ_{ℓ=0}^{∞} N_a^ℓ ⊗ (τA)^{−ℓ}.

Since N_a is nilpotent, we thus obtain

(N_a ⊗ I_s − I_a ⊗ τA)^D = −(I_a ⊗ (τA)^{−1}) Σ_{ℓ=0}^{ν−1} N_a^ℓ ⊗ (τA)^{−ℓ} = −Σ_{ℓ=0}^{ν−1} N_a^ℓ ⊗ (τA)^{−(ℓ+1)}.

Inserting this into (16) yields

(I_s ⊗ P_a − τA ⊗ P∞)^D = −Σ_{ℓ=0}^{ν−1} (I_s ⊗ T^{−1}) ( (τA)^{−(ℓ+1)} ⊗ [0 0; 0 N_a^ℓ] ) (I_s ⊗ T)
 = −Σ_{ℓ=0}^{ν−1} (τA)^{−(ℓ+1)} ⊗ T^{−1} [0 0; 0 N_a^ℓ] T
 = −Σ_{ℓ=0}^{ν−1} (τA)^{−(ℓ+1)} ⊗ P_a^ℓ,

since T^{−1} [0 0; 0 N_a^ℓ] T = (T^{−1} [0 0; 0 N_a] T)^ℓ = P_a^ℓ. Inserting into (15) yields the desired expansion.
2. Setting s = 1 and A = β_k/α_k in 1. implies the assertion, with P∞ (α_k E − β_k τA)^{−1} = (A(α_k P∞ A^D E − β_k τ P∞ A^D A))^D. □
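The Drazin-inverse expansion of Lemma 9 can be checked numerically. The following sketch is ours, not the authors' code: it uses the known identity M^D = M^k (M^{2k+1})^+ M^k for k ≥ ind(M), and a toy commuting pair (E nilpotent, A = I) that is not taken from the paper.

```python
import numpy as np

def drazin(M, k):
    # Drazin inverse via M^D = M^k (M^(2k+1))^+ M^k, valid for any k >= ind(M).
    Mk = np.linalg.matrix_power(M, k)
    return Mk @ np.linalg.pinv(np.linalg.matrix_power(M, 2 * k + 1)) @ Mk

# Toy commuting pair of index nu = 2: E nilpotent, A = I, hence
# A^D = I, Pa = A^D E = E, and P_inf = I - E^D E = I (since E^D = 0).
E = np.array([[0.0, 1.0], [0.0, 0.0]])
A = np.eye(2)
AD = drazin(A, 1)
P_inf = np.eye(2) - drazin(E, 2) @ E
Pa = AD @ E

# Lemma 9.2 with alpha_k = beta_k = 1 (implicit Euler), nu = 2:
# P_inf (E - tau*A)^(-1) = -sum_{l=0}^{nu-1} tau^(-(l+1)) P_inf Pa^l A^D
tau = 0.1
lhs = P_inf @ np.linalg.inv(E - tau * A)
rhs = -sum(tau ** -(l + 1) * P_inf @ np.linalg.matrix_power(Pa, l) @ AD
           for l in range(2))
print(np.allclose(lhs, rhs))  # True
```

For this pair the projected inverse is a pure polynomial in 1/τ, which is exactly what the expansion asserts; at τ = 0.1 both sides equal −10·I − 100·E.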
For the differential components, we require that the projected iteration matrix and inhomogeneity be nonnegative, since the evolved value E^D E v_0 can take any nonnegative value in im(E^D E). For the algebraic components, the evolved value and its iteration should lie on the constraints, which are nonnegative for a positive DAE. Thus, we analyze the local approximation properties of the methods.
Theorem 5 [32] Consider E ẋ = Ax + f, where (E, A) is a regular matrix pair with ind(E, A) = ν and f ∈ C^ν(R, R^n).
1. Let (A, β, γ) be the coefficients of a Runge–Kutta method with nonsingular A. If there exist κ_0, …, κ_{ν−1} ∈ N such that (A, β, γ) satisfy
(i) β^T A^{−m} 1 = β^T A^{−(ℓ+1)} γ^{ℓ+1−m} / (ℓ+1−m)! for m = 1, …, ℓ,
(ii) β^T A^{−(ℓ+1)} γ^m = m!/(m−ℓ)! for m = ℓ+1, …, κ_ℓ,
for ℓ = 0, …, ν−1, where γ^m := [γ_1^m … γ_s^m]^T, then the method is consistent and, if x(t_N) = x_N, the local error satisfies x(t_{N+1}) − x_{N+1} = O(τ^{κ_0+1}) + ⋯ + O(τ^{κ_{ν−1}−ν+2}).
2. Let (α, β) be the coefficients of a multistep method with α_k, β_k ≠ 0. If there exists p ∈ N such that (α, β) satisfy Σ_{j=0}^{k} j^m α_j = m Σ_{j=0}^{k} j^{m−1} β_j for m = 0, …, p, then the method is consistent of order p, i.e., if x(t_{N−k+j}) = x_{N−k+j} for j = 0, …, k−1, then the local error satisfies x(t_{N+1}) − x_{N+1} = O(τ^{p+1}).
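For a concrete method, the exponents κ_ℓ can be determined by evaluating condition (ii) directly. A minimal sketch (our helper `theta` is hypothetical; the Butcher tableau of the 2-stage Radau-IIA method is assumed standard):

```python
import numpy as np
from math import factorial

# Butcher tableau of the 2-stage Radau-IIA method.
A = np.array([[5 / 12, -1 / 12], [3 / 4, 1 / 4]])
beta = np.array([3 / 4, 1 / 4])
gamma = np.array([1 / 3, 1.0])

def theta(l, m):
    # theta_{l,m} = beta^T A^(-(l+1)) gamma^m - m!/(m-l)!; condition (ii)
    # of Theorem 5 requires theta_{l,m} = 0 for m = l+1, ..., kappa_l.
    Am = np.linalg.matrix_power(np.linalg.inv(A), l + 1)
    return beta @ Am @ gamma**m - factorial(m) / factorial(m - l)

# For l = 1 the condition holds at m = 2 but fails at m = 3, i.e. kappa_1 = 2,
# with theta_{1,3} = -2/3 and theta_{1,4} = -14/9.
print(abs(theta(1, 2)) < 1e-12, theta(1, 3), theta(1, 4))
```

Since the method is stiffly accurate (β^T = e_s^T A, γ_s = 1), condition (ii) for ℓ = 0 holds for every m, i.e. κ_0 = ∞; the values ϑ_{1,3}, ϑ_{1,4} computed here reappear in the example of Sect. 6.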
For the convergence analysis, we assume that ℓ < κ_ℓ and |1 − β^T A^{−1} 1| < 1, see [32]. Note that the κ_ℓ-values are implicitly determined as the number of vanishing terms in the Taylor expansions of the methods, see [32]. With these results, we compute an explicit representation of the local discretization error. We set f_{N,i} := f(t_N + τγ_i) and f_{N−j} := f(t_{N−j}), and denote by R_κ f_N(τ) the remainder of the Taylor expansion f(t_N + τ) = Σ_{m=0}^{κ} (τ^m/m!) f_N^{(m)} + R_κ f_N(τ). Recall that f̄ = P∞ A^D f.

Lemma 10 Consider E ẋ = Ax + f, where (E, A) is a regular, commuting matrix pair with ind(E, A) = ν and f ∈ C^ν(R, R^n).
1. Let (A, β, γ), A nonsingular, denote a Runge–Kutta method that is consistent with exponents κ_0, …, κ_{ν−1}. If P∞ x(t_N) = P∞ x_N holds for some N ∈ N, then P∞ x_{N+1} is given by

P∞ x_{N+1} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=0}^{κ_ℓ−ℓ} (τ^m/m!) f̄_N^{(ℓ+m)} − Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) R_{κ_ℓ} f̄_N(τγ_i),

and the local error P∞ ε_{N,loc} = P∞ x_{N+1} − P∞ x(t_{N+1}) is given by

P∞ ε_{N,loc} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ ( Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) R_{κ_ℓ} f̄_N(τγ_i) − R_{κ_ℓ−ℓ} f̄_N^{(ℓ)}(τ) ).

2. Let (α, β), α_k, β_k ≠ 0, denote a multistep method that is consistent of order p ∈ N. If P∞ x(t_{N−k+j}) = P∞ x_{N−k+j} for j = 0, …, k−1 and some N ∈ N, then P∞ x_N is given by

P∞ x_N = −Σ_{ℓ=0}^{ν−1} P_a^ℓ f̄_N^{(ℓ)} − (1/(τβ_k)) Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=0}^{ℓ−1} (α_k/(τβ_k))^{ℓ−m−1} R_{p,m},
Proof 1. The algebraic part of a Runge–Kutta iteration is given by P∞ x_{N+1} = P∞ R(E, τA) x_N + Σ_{i=1}^{s} P∞ Q_i(E, τA) f_{N,i}. Using Lemma 9, the projected iteration matrix is given by

P∞ R(E, τA) = P∞ − (τβ^T ⊗ I_n) ( Σ_{ℓ=0}^{ν−1} (τA)^{−(ℓ+1)} ⊗ P∞ P_a^ℓ A^D ) (1 ⊗ A)
 = P∞ − Σ_{ℓ=0}^{ν−1} (β^T A^{−(ℓ+1)} 1 / τ^ℓ) P_a^ℓ,   (17)

since P∞ P_a^ℓ A^D A = P_a^ℓ. Similarly, P∞ Q_i(E, τA) = −Σ_{ℓ=0}^{ν−1} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) P_a^ℓ A^D, such that with f̄_{N,i} = P∞ A^D f(t_N + γ_i τ) the inhomogeneous part is given by

Σ_{i=1}^{s} P∞ Q_i(E, τA) f_{N,i} = −Σ_{ℓ=0}^{ν−1} Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) P_a^ℓ f̄_{N,i}.   (18)

Inserting (17), (18) and assuming consistent initial values P∞ x_N = −Σ_{m=0}^{ν−1} P_a^m f̄_N^{(m)}, we obtain

P∞ x_{N+1} = −Σ_{m=0}^{ν−1} P_a^m f̄_N^{(m)} + Σ_{ℓ,m=0}^{ν−1} (β^T A^{−(ℓ+1)} 1 / τ^ℓ) P_a^{ℓ+m} f̄_N^{(m)} − Σ_{ℓ=0}^{ν−1} Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) P_a^ℓ f̄_{N,i}.

With P_a^j = 0 for j ≥ ν, reordering of the sums in powers of P_a yields

P∞ x_{N+1} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ ( f̄_N^{(ℓ)} + Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) f̄_{N,i} − Σ_{m=0}^{ℓ} (β^T A^{−(ℓ+1−m)} 1 / τ^{ℓ−m}) f̄_N^{(m)} ).

If f is sufficiently smooth, then we can expand f̄_{N,i} in a Taylor series, i.e.,

P∞ x_{N+1} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ ( f̄_N^{(ℓ)} + Σ_{m=0}^{κ_ℓ} (β^T A^{−(ℓ+1)} γ^m / (m! τ^{ℓ−m})) f̄_N^{(m)} − Σ_{m=0}^{ℓ} (β^T A^{−(ℓ+1−m)} 1 / τ^{ℓ−m}) f̄_N^{(m)} + Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) R_{κ_ℓ} f̄_N(τγ_i) ).

Since the method is consistent with κ_0, …, κ_{ν−1}, we have that β^T A^{−(ℓ+1−m)} 1 = β^T A^{−(ℓ+1)} γ^m / m! for m = 1, …, ℓ, and we can combine the first and the last sum as

P∞ x_{N+1} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ ( f̄_N^{(ℓ)} + Σ_{m=ℓ+1}^{κ_ℓ} (β^T A^{−(ℓ+1)} γ^m / (m! τ^{ℓ−m})) f̄_N^{(m)} + Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) R_{κ_ℓ} f̄_N(τγ_i) ).

Moreover, β^T A^{−(ℓ+1)} γ^m = m!/(m−ℓ)! for m = ℓ+1, …, κ_ℓ, such that we obtain

P∞ x_{N+1} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ ( f̄_N^{(ℓ)} + Σ_{m=ℓ+1}^{κ_ℓ} (τ^{m−ℓ}/(m−ℓ)!) f̄_N^{(m)} + Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) R_{κ_ℓ} f̄_N(τγ_i) ).

Shifting the summation index, the new approximation is given by

P∞ x_{N+1} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=0}^{κ_ℓ−ℓ} (τ^m/m!) f̄_N^{(ℓ+m)} − Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) R_{κ_ℓ} f̄_N(τγ_i).

For the local error, we compare this expression with the exact solution. Expanding each f̄_{N+1}^{(ℓ)} in a Taylor series, P∞ x(t_{N+1}) = −Σ_{ℓ=0}^{ν−1} P_a^ℓ f̄_{N+1}^{(ℓ)} becomes

P∞ x(t_{N+1}) = −Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=0}^{κ_ℓ−ℓ} (τ^m/m!) f̄_N^{(ℓ+m)} − Σ_{ℓ=0}^{ν−1} P_a^ℓ R_{κ_ℓ−ℓ} f̄_N^{(ℓ)}(τ),
and the local error ε_{N,loc} = P∞ x_{N+1} − P∞ x(t_{N+1}) has the proposed form.
2. For a multistep method, the algebraic components are given by P∞ x_N = Σ_{j=0}^{k−1} P∞ r_j x_{N−k+j} + Σ_{j=0}^{k} P∞ q_j f_{N−k+j}. Using Lemma 9, the projected iteration matrices are given by

P∞ r_j(E, τA) = −(1/α_k) Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ+1} P_a^ℓ A^D (β_j τA − α_j E)
 = (1/(τβ_k)) Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ} P_a^ℓ ( α_j P_a − β_j τ P∞ ),   (19)
and the inhomogeneities by

P∞ q_j(E, τA) = −(τβ_j/α_k) Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ+1} P_a^ℓ A^D = −(β_j/β_k) Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ} P_a^ℓ A^D.   (20)

Inserting (19), (20) into the projected iteration, we obtain

P∞ x_N = (1/(τβ_k)) Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ} Σ_{j=0}^{k−1} ( α_j P_a − β_j τ P∞ ) P_a^ℓ x_{N−k+j} − (1/β_k) P∞ Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ} Σ_{j=0}^{k} β_j P_a^ℓ f̄_{N−k+j}.

Assuming consistent initial values, P∞ x_{N−k+j} = −Σ_{m=0}^{ν−1} P_a^m f̄_{N−k+j}^{(m)}, this is

P∞ x_N = −(1/(τβ_k)) Σ_{ℓ,m=0}^{ν−1} (α_k/(τβ_k))^{ℓ} Σ_{j=0}^{k−1} ( α_j P_a − β_j τ P∞ ) P_a^{ℓ+m} f̄_{N−k+j}^{(m)} − (1/β_k) P∞ Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ} Σ_{j=0}^{k} β_j P_a^ℓ f̄_{N−k+j}.
Reordering the terms and sorting in powers of P_a yields

P∞ x_N = P∞ Σ_{ℓ=0}^{ν−1} Σ_{m=0}^{ℓ} (α_k/(τβ_k))^{m} Σ_{j=0}^{k−1} (β_j/β_k) P_a^ℓ f̄_{N−k+j}^{(ℓ−m)}
 − (1/τ) P∞ Σ_{ℓ=0}^{ν−1} Σ_{m=0}^{ℓ} (α_k/(τβ_k))^{m} Σ_{j=0}^{k−1} (α_j/β_k) P_a^{ℓ+1} f̄_{N−k+j}^{(ℓ−m)}
 − P∞ Σ_{ℓ=0}^{ν−1} (α_k/(τβ_k))^{ℓ} Σ_{j=0}^{k} (β_j/β_k) P_a^ℓ f̄_{N−k+j}.

Transforming summation indices and combining the first and last double sum, and noting that for ℓ = 0 the corresponding terms add up to −P∞ f̄_N, this is

P∞ x_N = −P∞ f̄_N − P∞ Σ_{ℓ=1}^{ν−1} P_a^ℓ ( −Σ_{m=0}^{ℓ−1} (α_k/(τβ_k))^{m} Σ_{j=0}^{k−1} (β_j/β_k) f̄_{N−k+j}^{(ℓ−m)} + (1/τ) Σ_{m=1}^{ℓ} (α_k/(τβ_k))^{m−1} Σ_{j=0}^{k−1} (α_j/β_k) f̄_{N−k+j}^{(ℓ−m)} + (α_k/(τβ_k))^{ℓ} Σ_{j=0}^{k} (β_j/β_k) f̄_{N−k+j} ).
For m = 1, …, ℓ, several summands cancel each other out, and combining the remaining terms, we compare with the exact solution via the difference

δ_ℓ := −Σ_{m=0}^{ℓ−1} (α_k/(τβ_k))^{m} Σ_{j=0}^{k} (β_j/β_k) f̄_{N−k+j}^{(ℓ−m)} + (1/τ) Σ_{m=1}^{ℓ} (α_k/(τβ_k))^{m−1} Σ_{j=0}^{k} (α_j/β_k) f̄_{N−k+j}^{(ℓ−m)},

such that P∞ x_N = −P∞ Σ_{ℓ=0}^{ν−1} P_a^ℓ ( f̄_N^{(ℓ)} + δ_ℓ ). For convenience, we multiply by τ^ℓ β_k and combine the first two sums, i.e.,

τ^ℓ β_k δ_ℓ = −Σ_{m=0}^{ℓ−1} τ^{ℓ−m} (α_k/β_k)^{m} Σ_{j=0}^{k} β_j f̄_{N−k+j}^{(ℓ−m)} + Σ_{m=0}^{ℓ−1} τ^{ℓ−m−1} (α_k/β_k)^{m} Σ_{j=0}^{k} α_j f̄_{N−k+j}^{(ℓ−m−1)},

and combining the sums and transforming the summation indices yields

τ^ℓ β_k δ_ℓ = Σ_{m=0}^{ℓ−1} τ^{m} (α_k/β_k)^{ℓ−m−1} Σ_{j=0}^{k} ( α_j f̄_{N−k+j}^{(m)} − τβ_j f̄_{N−k+j}^{(m+1)} ).

If f̄ is sufficiently smooth, then we can expand each derivative into a Taylor series centered at t_{N−k}, and for equidistant steps t_{N−k+j} = t_{N−k} + jτ we obtain
τ^ℓ β_k δ_ℓ = Σ_{m=0}^{ℓ−1} τ^{m} (α_k/β_k)^{ℓ−m−1} Σ_{j=0}^{k} ( α_j Σ_{h=0}^{p−m} ((jτ)^h/h!) f̄_{N−k}^{(m+h)} − τβ_j Σ_{h=0}^{p−m−1} ((jτ)^h/h!) f̄_{N−k}^{(m+h+1)} )
 + Σ_{m=0}^{ℓ−1} τ^{m} (α_k/β_k)^{ℓ−m−1} Σ_{j=0}^{k} ( α_j R_{p−m} f̄_{N−k}^{(m)}(jτ) − τβ_j R_{p−m−1} f̄_{N−k}^{(m+1)}(jτ) ),

or, by sorting the terms in orders of f̄^{(m+h)},

τ^ℓ β_k δ_ℓ = Σ_{m=0}^{ℓ−1} τ^{m} (α_k/β_k)^{ℓ−m−1} ( Σ_{j=0}^{k} α_j f̄_{N−k}^{(m)} + Σ_{h=1}^{p−m} (τ^h/h!) Σ_{j=0}^{k} ( j^h α_j − h j^{h−1} β_j ) f̄_{N−k}^{(m+h)} )
 + Σ_{m=0}^{ℓ−1} τ^{m} (α_k/β_k)^{ℓ−m−1} Σ_{j=0}^{k} ( α_j R_{p−m} f̄_{N−k}^{(m)}(jτ) − τβ_j R_{p−m−1} f̄_{N−k}^{(m+1)}(jτ) ).

If the method is consistent of order p, this reduces to

τ^ℓ β_k δ_ℓ = Σ_{m=0}^{ℓ−1} τ^{m} (α_k/β_k)^{ℓ−m−1} Σ_{j=0}^{k} ( α_j R_{p−m} f̄_{N−k}^{(m)}(jτ) − τβ_j R_{p−m−1} f̄_{N−k}^{(m+1)}(jτ) ),

and with P∞ x_N = −P∞ Σ_{ℓ=0}^{ν−1} P_a^ℓ ( f̄_N^{(ℓ)} + δ_ℓ ), the new approximation is given by

P∞ x_N = −P∞ Σ_{ℓ=0}^{ν−1} P_a^ℓ f̄_N^{(ℓ)} − (1/(τβ_k)) Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=0}^{ℓ−1} (α_k/(τβ_k))^{ℓ−m−1} R_{p,m},

where R_{p,m} := Σ_{j=0}^{k} ( α_j R_{p−m} f̄_{N−k}^{(m)}(jτ) − τβ_j R_{p−m−1} f̄_{N−k}^{(m+1)}(jτ) ). Compared with the exact solution, this yields the proposed form of ε_{N,loc}. □

From Lemma 10, we see that a Runge–Kutta method approximates the ℓ-th derivative occurring in the exact solution with the order κ_ℓ, respectively. For multistep methods, all derivatives are approximated up to the same order p. In particular, ε_{N,loc} = 0 if ν ≤ 1, i.e., multistep methods exactly discretize the algebraic components of DAEs with index at most one. For Runge–Kutta methods that are consistent, stiffly accurate and invariant under autonomization, i.e., (A, β, γ) satisfy β^T 1 = 1, β^T = e_s^T A, γ = A1, this holds as well, since κ_0 = ∞, see [32]. Hence, ε_{N,loc} = 0. To obtain positivity for the algebraic components, we require that the method locally overestimates the exact solution. By continuity, this implies that for a convergent method the discretization is always bounded from below by the nonnegative exact solution.

Theorem 6 Let E ẋ = Ax + f be positive with regular, commuting matrix pair (E, A) in R^{n×n}, ind(E, A) = ν, and let f ∈ C^ν(R, R^n), f̄ ∈ C^∞(R, R^n).
1. Let (A, β, γ), A nonsingular, denote a Runge–Kutta method that is consistent with exponents κ_0, …, κ_{ν−1}. If β^T A^{−(ℓ+1)} γ^m ≥ m!/(m−ℓ)! and P_a^ℓ f̄^{(m)} ≤ 0 for m ≥ κ_ℓ + 1, ℓ = 0, …, ν−1, then P∞ x_{N+1} ≥ P∞ x(t_{N+1}) for τ > 0.
2. Let (α, β), α_k, β_k ≠ 0, denote a multistep method that is consistent of order p. If Σ_{j=0}^{k} j^m α_j ≥ m Σ_{j=0}^{k} j^{m−1} β_j and P_a^ℓ f̄^{(m)} ≤ 0 for m ≥ p + 1, ℓ = 0, …, ν−1, then P∞ x_N ≥ P∞ x(t_N) for τ > 0.

Proof 1. If the method is consistent with exponents κ_0, …, κ_{ν−1} and f̄ ∈ C^∞(R, R^n), then, provided that P∞ x(t_N) = P∞ x_N, the local error is given by

P∞ ε_{N,loc} = −Σ_{ℓ=0}^{ν−1} P_a^ℓ ( Σ_{i=1}^{s} (β^T A^{−(ℓ+1)} e_i / τ^ℓ) Σ_{m=κ_ℓ+1}^{∞} ((γ_i τ)^m/m!) f̄_N^{(m)} − Σ_{m=κ_ℓ+1−ℓ}^{∞} (τ^m/m!) f̄_N^{(ℓ+m)} )
 = −Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=κ_ℓ+1}^{∞} (τ^{m−ℓ}/m!) ( β^T A^{−(ℓ+1)} γ^m − m!/(m−ℓ)! ) f̄_N^{(m)}.

If P_a^ℓ f̄^{(m)} ≤ 0 and β^T A^{−(ℓ+1)} γ^m ≥ m!/(m−ℓ)! for m ≥ κ_ℓ + 1, then P∞ ε_{N,loc} ≥ 0. Since P∞ x_{N+1} = P∞ x(t_{N+1}) + P∞ ε_{N,loc}, this successively proves that P∞ x_M ≥ 0 for M ≥ N.
2. If the method is consistent of order p and f̄ ∈ C^∞(R, R^n), then

R_{p,m} = Σ_{h=p−m}^{∞} (τ^{h+1}/(h+1)!) Σ_{j=0}^{k} ( j^{h+1} α_j − (h+1) j^h β_j ) f̄_{N−k}^{(m+h+1)}.

Provided that P∞ x(t_N) = P∞ x_N, transforming the summation index shows that the local error is given by

P∞ ε_{N,loc} = −(1/(τβ_k)) Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=0}^{ℓ−1} (α_k/(τβ_k))^{ℓ−m−1} Σ_{h=p−m+1}^{∞} (τ^h/h!) Σ_{j=0}^{k} ( j^h α_j − h j^{h−1} β_j ) f̄_{N−k}^{(m+h)}.

If −P_a^ℓ f̄^{(h)} ≥ 0, α_k, β_k > 0 and Σ_{j=0}^{k} j^h α_j ≥ h Σ_{j=0}^{k} j^{h−1} β_j for h ≥ p + 1, then P∞ ε_{N,loc} ≥ 0, the terms with h ≤ p vanishing by consistency. Since P∞ x_N = P∞ x(t_N) + P∞ ε_{N,loc}, this successively proves that P∞ x_M ≥ 0 for M ≥ N. □

For positive systems with polynomial inhomogeneity, the conditions on the methods are even necessary. We denote by Π_r(R, R^n) := { Σ_{θ=0}^{r} c_θ t^θ | c_θ ∈ R^n, t ∈ R } the set of vector-valued polynomials of maximal degree r.

Lemma 11 Let E ẋ = Ax + f be positive with regular, commuting pair (E, A) in R^{n×n}, ind(E, A) = ν, and f ∈ C^ν(R, R^n), f̄ = Σ_{θ=0}^{r} c_θ t^θ ∈ Π_r(R, R^n).
1. Let (A, β, γ), A nonsingular, denote a Runge–Kutta method that is consistent with κ_0, …, κ_{ν−1}. Let P_a^ς f̄_N^{(κ_ς+1)} ≤ 0, where κ_ς := min{κ_0, …, κ_{ν−1}}, and P∞ ε_{N,loc} ≥ 0 for some N ∈ N. Then β^T A^{−(ς+1)} γ^{κ_ς+1} ≥ (κ_ς+1)!/(κ_ς+1−ς)!.
2. Let (α, β), α_k, β_k ≠ 0, denote a multistep method that is consistent of order p ∈ N. Let P_a^{ν−1} f̄_{N−k}^{(p+1)} ≤ 0 and P∞ ε_{N,loc} ≥ 0 for some N ∈ N. Then Σ_{j=0}^{k} j^{p−ν+3} α_j ≥ (p−ν+3) Σ_{j=0}^{k} j^{p−ν+2} β_j.

Proof 1. For f̄(t) = Σ_{θ=0}^{r} c_θ t^θ, it follows that f̄^{(m)}(t) = Σ_{θ=0}^{r−m} ((θ+m)!/θ!) c_{θ+m} t^θ. Provided that P∞ x(t_N) = P∞ x_N for some N ∈ N, ε_{N,loc} is given by

P∞ ε_{N,loc} = −P∞ Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=κ_ℓ+1}^{r} (τ^{m−ℓ}/m!) ϑ_{ℓ,m} Σ_{θ=0}^{r−m} ((θ+m)!/θ!) c_{θ+m} t_N^θ,

where ϑ_{ℓ,m} := β^T A^{−(ℓ+1)} γ^m − m!/(m−ℓ)!. Dividing by the smallest power of τ > 0 and considering small τ > 0, P∞ ε_{N,loc} ≥ 0 implies that

−(ϑ_{ς,κ_ς+1}/(κ_ς+1)!) P∞ P_a^ς f̄_N^{(κ_ς+1)} ≥ 0.

Since P_a^ς f̄_N^{(κ_ς+1)} ≤ 0 by assumption, this proves that ϑ_{ς,κ_ς+1} ≥ 0.
2. Provided that P∞ x(t_N) = P∞ x_N for some N ∈ N, we obtain

P∞ ε_{N,loc} = −(1/(τβ_k)) Σ_{ℓ=0}^{ν−1} P_a^ℓ Σ_{m=0}^{ℓ−1} (α_k/(τβ_k))^{ℓ−m−1} Σ_{h=p−m+1}^{∞} (τ^h/h!) φ_h f̄_{N−k}^{(m+h)}

with φ_h := Σ_{j=0}^{k} ( j^h α_j − h j^{h−1} β_j ) and f̄_{N−k}^{(m+h)} = Σ_{θ=0}^{r−m−h} ((θ+m+h)!/θ!) c_{θ+m+h} t_{N−k}^θ. The smallest power of τ > 0 is obtained for ℓ = ν−1, m = ν−2 and h = p−ν+3; dividing by this power and letting τ tend to 0, ε_{N,loc} ≥ 0 implies that

−(1/β_k) P_a^{ν−1} (φ_{p−ν+3}/(p−ν+3)!) f̄_{N−k}^{(p+1)} ≥ 0.

Since P_a^{ν−1} f̄_{N−k}^{(p+1)} ≤ 0 and β_k > 0, it must hold that φ_{p−ν+3} ≥ 0. □
We give an example of Lemma 11 in Sect. 6. In Table 2, the conditions of Theorem 6 are checked for Radau-IIA and Lobatto-IIIC methods with s = 2, 3 and for BDF methods with k = 1, …, 4, respectively. For index-one problems, the values ϑ_{0,m} and φ_m are nonnegative, as in this case the methods yield an exact discretization of P∞ x.

Corollary 1 Let E ẋ = Ax + f be positive with regular, commuting matrix pair (E, A) in R^{n×n}, ind(E, A) = 1, and f ∈ C^ν(R, R^n), f̄ ∈ C^∞(R, R^n).
Table 2 Positivity conditions of Theorem 6 for Radau-IIA, Lobatto-IIIC and BDF methods, with ϑ_{ℓ,m} = β^T A^{−(ℓ+1)} γ^m − m!/(m−ℓ)! and φ_m = Σ_{j=0}^{k} j^m α_j − m Σ_{j=0}^{k} j^{m−1} β_j
1. Let (A, β, γ), A nonsingular, denote a consistent, stiffly accurate Runge–Kutta method that is invariant under autonomization. Then the method is positive for P_a ẋ = P∞ x + f̄ for τ > 0.
2. Let (α, β) denote a multistep method with α_k, β_k > 0. Then the method is positive for P_a ẋ = P∞ x + f̄ for τ > 0.

Proof 1. As noted above, if (A, β, γ) is consistent, stiffly accurate and invariant under autonomization, then κ_0 = ∞, see [32]. For an index-1 DAE, this implies that P∞ x_{N+1} = −P∞ Σ_{m=0}^{∞} (τ^m/m!) f̄_N^{(m)}, provided that P∞ x_N = P∞ x(t_N). If f̄ ∈ C^∞(R, R^n), then P∞ x_{N+1} = −P∞ f̄_{N+1} = P∞ x(t_{N+1}), which is nonnegative for a positive system.
2. If ind(E, A) = 1, then P∞ x_N = −P∞ f̄_N = P∞ x(t_N), provided that P∞ x_{N−k+j} = P∞ x(t_{N−k+j}), which is nonnegative for a positive system. □

For higher index problems, none of the commonly used methods seems to satisfy the conditions of Theorem 6, see Table 2. This is probably related to the consistency issues of these methods when applied to higher index problems, see e.g. [19,32]. For special problems a positive discretization is sometimes possible if the approximation accuracy is high enough and the considered methods converge, see Sect. 6. However, regarding the various problems arising in the discretization of higher index problems, such as drift-off or order reduction, see e.g. [19,21,32], these results emphasize that the problem should be remodeled as an index-one problem prior to discretization, see e.g. [32].
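The multistep condition of Theorem 6 is easy to check mechanically. A minimal sketch for the BDF2 method (standard coefficients assumed; the helper `phi` is ours):

```python
import numpy as np

# BDF2 in the form sum_j alpha_j x_{N-2+j} = tau * sum_j beta_j f_{N-2+j}:
# (3/2) x_N - 2 x_{N-1} + (1/2) x_{N-2} = tau f_N.
alpha = np.array([0.5, -2.0, 1.5])  # j = 0, 1, 2
beta = np.array([0.0, 0.0, 1.0])

def phi(m):
    # phi_m = sum_j j^m alpha_j - m * sum_j j^(m-1) beta_j
    j = np.arange(3.0)
    return j**m @ alpha - (0.0 if m == 0 else m * (j ** (m - 1) @ beta))

print([float(phi(m)) for m in range(4)])  # [0.0, 0.0, 0.0, -2.0]
```

φ_0 = φ_1 = φ_2 = 0 reflects consistency of order p = 2, while φ_3 = −2 < 0 violates the sign condition φ_m ≥ 0 for m ≥ p + 1, in line with the observation that the common methods fail the higher-index positivity conditions.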
6 Examples

To illustrate our results, we consider two examples.

I. Let us consider the pair (E, A) with

E = [ −e11 −1 −1 −1  0  0  0
        0  −1 −1 −1  0  0  0
        0   0 −1 −1  0  0  0
        0   0  0 −1  0  0  0
        0   0  0  0  0  1  0
        0   0  0  0  0  0  1
        0   0  0  0  0  0  0 ],   A = diag(1, 1, 1, 1, −1, −1, −1),

where e11 > 0, and the inhomogeneity is given by

f = [ −1, 0, 0, 0, (t − t_0 − 0.1)^2, (1/100)(t + 0.3)^{−2}, (1/100)(t + 0.1)^{−2} ]^T.

For positivity, we observe that E^D E ≥ 0, that E^D A ≥ −μ E^D E if μ ≥ max{e11^{−1}, 1}, and that E^D f ≥ 0. Since −P∞ Σ_{ℓ=0}^{2} (A^D E)^ℓ A^D f^{(ℓ)} ≥ 0, it holds that E ẋ = Ax + f is positive. For the positivity of E^D E x_N, we first check if (E, A) is a −M-matrix pair. The finite eigenvalues of (E, A) are given by σ_fin = {−e11^{−1}, −1}, such that |μ + λ| ≤ μ, λ ∈ σ_fin, if μ ≥ max{e11^{−1}, 1}/2. Since E^D A ≥ −μ E^D E must hold as well, (E, A) is a −M-matrix pair with μ ≥ max{e11^{−1}, 1}. Thus, a method with monotonicity radius γ+ yields a positive approximation if the stepsize τ is chosen by τ ≤ γ+ / max{e11^{−1}, 1}. This threshold is not strict, as we see from Table 3, where the projected iteration matrix is computed for the Radau-IIA and Lobatto-IIIC methods and e11 = 0.1, 1. In particular, for the 2-stage methods we note that a positive discretization is possible even though γ+ = 0, cp. Table 1. This may happen because possible negative summands in the expansion of R_d(τ E^D A) are balanced by nonnegative higher and lower order terms if τ is slightly larger than γ+. The amount of this balancing depends on the particular structure of E^D A, see [3]. For problems with a known structure, this stepsize relaxation can almost always be achieved. Only if E^D A equals a Jordan block of dimension n ≥ 2 is the assumption of absolute monotonicity necessary, see [27]. For the algebraic solution components, we note that −P∞ P_a^ℓ A^D f^{(ℓ)} ≥ 0, ℓ = 0, 1, 2. The tested methods are stiffly accurate, so we obtain a positive discretization of the last component x7 for any τ > 0, as x7 does not involve derivatives of f. For sufficiently small τ > 0, we also obtain a positive discretization of x5, x6, see Table 4, even though none of the methods satisfy the assumptions of Theorem 6. This follows because the considered methods still converge for this problem, see [32], such that the approximation stays sufficiently close to the exact solution if the stepsize is sufficiently small.
For larger stepsizes, the approximation error becomes too large and the numerical solution enters the negative orthant, see Fig. 1.
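The entrywise positivity of R_d(τ E^D A) reported in Table 3 can be explored numerically. The sketch below is ours (the helper `rk_R` and the 2×2 matrix M are illustrative stand-ins, not the 7×7 example above); it evaluates the matrix stability function R(Z) = I + (β^T ⊗ I)(I − A ⊗ Z)^{−1}(1 ⊗ Z):

```python
import numpy as np

def rk_R(A_rk, b, Z):
    # Matrix stability function R(Z) = I + (b^T kron I)(I - A kron Z)^(-1)(1 kron Z).
    s, n = len(b), Z.shape[0]
    K = np.eye(s * n) - np.kron(A_rk, Z)
    X = np.linalg.solve(K, np.kron(np.ones((s, 1)), Z))
    return np.eye(n) + np.kron(b.reshape(1, -1), np.eye(n)) @ X

# Implicit Euler (s = 1) gives R(tau*M) = (I - tau*M)^(-1), which is
# entrywise nonnegative for every tau > 0 when -M is an M-matrix:
M = np.array([[-2.0, 1.0], [1.0, -2.0]])
for tau in (0.1, 1.0, 10.0):
    R = rk_R(np.array([[1.0]]), np.array([1.0]), tau * M)
    print(bool((R >= 0).all()))  # True
```

For n = 1 the helper reduces to the scalar stability function, e.g. R(−1) = 1/2 for the implicit Euler method and R(−1) = 4/11 for the 2-stage Radau-IIA method, so scanning stepsizes τ for the first entrywise-negative entry of R(τ E^D A), as in Table 3, is straightforward.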
Table 3 R_d(τ E^D A) for the Radau-IIA and Lobatto-IIIC methods

Table 4 Positivity preserving stepsizes for the algebraic part of the discretization with Radau-IIA and Lobatto-IIIC methods

          Radau-IIA (s = 2)   Radau-IIA (s = 3)   Lobatto-IIIC (s = 2)   Lobatto-IIIC (s = 3)
x5 ≥ 0    τ ∈ (0, 0.19]       τ ∈ (0, 0.05]       τ ∈ (0, 5]             τ ∈ (0, 5]
x6 ≥ 0    τ ∈ (0, 0.08]       τ ∈ (0, 5]          τ ∈ (0, 5]             τ ∈ (0, 0.10]
x7 ≥ 0    τ ∈ (0, 5]          τ ∈ (0, 5]          τ ∈ (0, 5]             τ ∈ (0, 5]
II. To illustrate Lemma 11 and to show that there exist problems for which it is necessary that the discretization overestimates the exact solution in the algebraic components, we consider the pair

E = [ 0 1; 0 0 ],   A = [ 1 0; 0 1 ],

and the inhomogeneity

f = [ 4(t − 0.1)^3 + (5/4)(t − 0.1)^2, −(t − 0.1)^4 − (5/12)(t − 0.1)^3 − 0.2 ]^T.

Then, E^D = 0 and the assumption E^D A ≥ −μ E^D E is trivially satisfied. With P∞ = I_2, P_a = E, the exact solution is given by

x(t) = −[ f_1(t) + ḟ_2(t); f_2(t) ] = [ 0; (t − 0.1)^4 + (5/12)(t − 0.1)^3 + 0.2 ],
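The algebra behind this solution formula can be verified with exact rational arithmetic. A quick stdlib-only check, representing f as coefficient lists in u = t − 0.1 (our own restatement of the computation above):

```python
from fractions import Fraction as F

# Coefficients of f1 and f2 in powers of u = t - 1/10 (degrees 0..4).
f1 = [F(0), F(0), F(5, 4), F(4), F(0)]
f2 = [F(-1, 5), F(0), F(0), F(-5, 12), F(-1)]

def ddu(c):
    # Derivative of a coefficient list with respect to u.
    return [k * c[k] for k in range(1, len(c))] + [F(0)]

# For E = [[0,1],[0,0]], A = I the DAE gives x1 = -(f1 + f2'), x2 = -f2,
# and the first equation of E x' = A x + f reduces to f1 + f2' = 0:
print(all(c == 0 for c in [a + b for a, b in zip(f1, ddu(f2))]))  # True

# x2 = -f2 = u^4 + (5/12) u^3 + 1/5 has nonnegative coefficients:
print(all(-c >= 0 for c in f2))  # True
```

Nonnegative coefficients give x_2(t) ≥ 0 directly for t ≥ 0.1; for 0 ≤ t < 0.1 positivity follows from a direct evaluation, since the constant term 1/5 dominates the small negative cubic contribution there.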
Fig. 1 Algebraic part of the discretization with Radau-IIA and Lobatto-IIIC methods
which is nonnegative for t ≥ 0. In particular, the first component of x vanishes identically, such that for t_0 < t_1 = 0.1 the first component of x_1 = x(t_1) + ε_{loc,0} is given by ε_{loc,0} alone. Choosing, e.g., the Radau-IIA method with s = 2, we obtain

ε_{loc,0} = −(1/τ) [ (τ^3/3!) ϑ_{1,3} f_{2,0}^{(3)} + (τ^4/4!) ϑ_{1,4} f_{2,0}^{(4)}; 0 ]
 = [ (τ^2/3!) ϑ_{1,3} (24(t_0 − 0.1) + 5/2) + (τ^3/4!) ϑ_{1,4} · 24; 0 ],

since κ_0 = ∞ and κ_1 = 2. With ϑ_{1,3} = −0.6̄ and ϑ_{1,4} = −1.5̄, it follows that the first component of x_1 is negative for every t_0 < 0.1.

7 Conclusion

We have discussed the positivity of Runge–Kutta and linear multistep methods for linear differential-algebraic equations with constant coefficients. The crucial tool in our analysis is the decomposition of E ẋ = Ax + f into its differential and algebraic components, which allows us to discuss the positivity of each of these components separately. For the differential part, we have generalized the concept of positivity preservation by absolute monotonicity by extending the corresponding definitions and notations from single matrices to matrix pairs. For the algebraic part, we have given
conditions under which the discretization overestimates the exact solution and thus is ensured to be nonnegative. Conveniently, this condition holds if the considered method not only satisfies the consistency conditions up to a certain order, but overestimates them for higher orders. For polynomial input functions f, we have shown that this condition is even necessary. Checking the positivity conditions, we have shown that the assumptions for the differential part are as strict as for ODEs, and a positive discretization is surely possible only if the stepsize is smaller than the radius of absolute monotonicity of the considered method. We have demonstrated by an example that this restriction can be relaxed if the structure of the problem is taken into account. Moreover, due to the decomposition approach it should be possible to apply splitting methods [23] or special initial values [24,25]. For the algebraic part, we have checked the positivity conditions for some common methods, and shown that they are satisfied in general only for DAEs of index one. For higher index problems, we have presented an example and illustrated that if we take into account the structure and the accuracy of the problem and the method, we may still achieve a positive discretization, at least for small stepsizes.

References

1. Anderson, A., Chaplain, M., Newmann, E., Steele, R., Thompson, A.: Mathematical modelling of tumour invasion and metastasis. J. Theor. Med. 2, 129–154 (2000)
2. Anderson, B.D.O.: New developments in the theory of positive systems. In: Byrnes, C. (ed.) Systems and Control in the Twenty-First Century, Progr. Systems Control Theory, vol. 22, pp. 17–36. Birkhäuser, Boston (1997)
3. Baum, A., Mehrmann, V.: Numerical integration of positive linear differential-algebraic systems. Preprint, Institut für Mathematik, TU Berlin, Berlin (2012)
4. Benvenuti, L., De Santis, A., Farina, L.: Positive Systems, vol. 294. Springer, Berlin (2003)
5. Benvenuti, L., Farina, L.: Positive and compartmental systems. IEEE Trans. Autom. Control 47, 370–373 (2002)
6. Birkhoff, G., Varga, R.S.: Reactor criticality and non-negative matrices. J. Soc. Ind. Appl. Math. 6, 354–377 (1958)
7. Bolley, C., Crouzeix, M.: Conservation de la positivité lors de la discrétisation des problèmes d'évolution paraboliques. R.A.I.R.O. Analyse Numérique 12, 81–88 (1978)
8. Butcher, J.: The Numerical Analysis of Ordinary Differential Equations: Runge-Kutta and General Linear Methods. Wiley, Chichester (1987)
9. Campbell, S.: Singular Systems of Differential Equations I. Pitman, San Francisco (1980)
10. Campbell, S., Meyer, C.: Generalized Inverses of Linear Transformations. Pitman, San Francisco (1979)
11. Capasso, V.: Mathematical Structures of Epidemic Systems. Springer, Berlin (1993)
12. Commault, C., Marchand, N.: Positive Systems, vol. 341. Springer, Berlin (2006)
13. Dahlquist, G.: Convergence and stability in the numerical integration of ordinary differential equations. Math. Scand. 4, 33–53 (1956)
14. Farina, L., Rinaldi, S.: Positive Linear Systems: Theory and its Applications. Wiley, New York (2000)
15. Gandolfo, G.: Economic Dynamics. Springer, Heidelberg (1997)
16. Gantmacher, F.: The Theory of Matrices, vol. 1. Chelsea Publishing Company, New York (1959)
17. Gottlieb, S., Ketcheson, D., Shu, C.: Strong Stability Preserving Runge-Kutta and Multistep Time Discretizations. World Scientific Publishing Company, Singapore (2011)
18. Griend, J., Kraaijevanger, J.: Absolute monotonicity of rational functions occurring in the numerical study of initial value problems. Numer. Math. 49, 413–424 (1986)
19. Hairer, E., Lubich, C., Roche, M.: The Numerical Solution of Differential-Algebraic Systems by Runge–Kutta Methods. Springer, Berlin (1989)
20. Hairer, E., Nørsett, S., Wanner, G.: Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd edn. Springer, Berlin (1993)
21. Hairer, E., Wanner, G.: Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 2nd edn. Springer, Berlin (1996)
22. Horn, R., Johnson, C.: Topics in Matrix Analysis. Cambridge University Press, Cambridge (1991)
23. Horvath, Z.: Positivity of Runge–Kutta and diagonally split Runge–Kutta methods. Appl. Numer. Math. 28, 309–326 (1998)
24. Horvath, Z.: Positively invariant cones of dynamical systems under Runge–Kutta and Rosenbrock-type discretization. Proc. Appl. Math. Mech. (PAMM) 4, 688–689 (2004). doi:10.1002/pamm.200410325
25. Horvath, Z.: On the positivity step size threshold of Runge–Kutta methods. Appl. Numer. Math. 53, 341–356 (2005)
26. Hundsdorfer, W., Koren, B., van Loon, M., Verwer, J.: A positive finite-difference advection scheme. J. Comput. Phys. 117, 35–46 (1994)
27. Hundsdorfer, W., Verwer, J.G.: Numerical Solution of Time-Dependent Advection-Diffusion-Reaction Equations. Springer, Berlin (2003)
28. Kaczorek, T.: Positive 1D and 2D Systems. Springer, London (2002)
29. Ketcheson, D.: Computation of optimal monotonicity preserving general linear methods. Math. Comp. 78(267), 1497–1513 (2009)
30. King, J., Unterkofler, K., Teschl, G., Teschl, S., Koc, H., Hinterhuber, H., Amann, A.: A mathematical model for breath gas analysis of volatile organic compounds with special emphasis on acetone. J. Math. Biol. 63(5), 959–999 (2011)
31. Kraaijevanger, J.: Absolute monotonicity of polynomials occurring in the numerical solution of initial value problems. Numer. Math. 48, 303–322 (1986)
32. Kunkel, P., Mehrmann, V.: Differential-Algebraic Equations: Analysis and Numerical Solution. EMS Publishing House, Zürich (2006)
33. Lancaster, P., Tismenetsky, M.: The Theory of Matrices. Academic Press, New York (1985)
34. Laub, A.: Matrix Analysis for Scientists and Engineers. Society for Industrial and Applied Mathematics, Philadelphia (2005)
35. Lenferink, H.: Contractivity preserving explicit linear multistep methods. Numer. Math. 55, 213–223 (1989)
36. Lenferink, H.: Contractivity preserving implicit linear multistep methods. Math. Comp. 56, 177–199 (1991)
37. Luenberger, D.G.: Introduction to Dynamic Systems. Wiley, New York (1979)
38. Murray, J., Lubkin, S., Tyson, R.: Model and analysis of chemotactic bacterial patterns in a liquid medium. J. Math. Biol. 38, 359–375 (1999)
39. Spijker, M.: Stepsize conditions for general monotonicity in numerical initial value problems. SIAM J. Numer. Anal. 45, 1226–1245 (2007)
40. Stoer, J., Bulirsch, R.: Introduction to Numerical Analysis. Springer, Berlin (2002)
41. Tian, H.: On the reverse order law (AB)^D = B^D A^D. Numer. Math. J. Chin. Univ. 9(1), 355–358 (2000)
42. Varga, R.: Matrix Iterative Analysis, 2nd edn. Springer, Berlin (2000)
43. Virnik, E.: Stability analysis of positive descriptor systems. Linear Algebra Appl. 429, 2640–2659 (2008)