G, maps S₀ into itself and is contracting. In fact, comparing (2.13), (2.14) with (2.10), (2.11) and using (2.3) and the bounds on D⁻¹ and L⁺, we obtain

  G(w, η) = (w₀, η₀) + O(ε),

and

  ‖G(w, η) − G(v, μ)‖_X ≤ |ε| C ‖(w, η) − (v, μ)‖_X,

for (w, η) and (v, μ) in S₀, and some C > 0. Although the initial iterates (2.12) need not be in S₀, we easily verify as above that

  ‖(w⁽¹⁾, η⁽¹⁾) − (w₀, η₀)‖_X = O(ε),

so that (w⁽¹⁾, η⁽¹⁾) is in S₀ for sufficiently small |ε|. Now the existence and convergence conclusions of the theorem follow immediately from the contraction mapping theorem. The continuity and uniqueness statements similarly follow by standard arguments which need not be repeated here (see [12, 5]). The magnitude of the constant C is of concern in applications. It is determined by the bounds on the second and third derivatives of f and the bounds on D⁻¹ and L⁺. If any of these is large, convergence will be slower, as seen to some degree in our second example (§6). On the other hand, convergence may proceed in higher powers of ε than indicated by (2.15), as proven in [17], and demonstrated by the example of Section 4.
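The contraction estimate above is what drives the rapid convergence reported later. As a toy illustration only, the sketch below iterates a hypothetical scalar contraction G (not the operator G of the theorem; the map and constants are invented for the demonstration) and exhibits the geometric decay of successive differences, with ratio on the order of the contraction constant.

```python
# Minimal illustration of the contraction-mapping iteration behind Theorem 1:
# x_{p+1} = G(x_p) with |G(x) - G(y)| <= C |x - y|, C < 1.
# G here is a hypothetical scalar map, not the operator G of the text.
import math

def G(x, eps=0.1):
    # |G'(x)| <= eps, so G is contracting with constant about eps.
    return eps * math.sin(x) + 0.5 * eps

x = 1.0                       # the initial iterate need not be near the fixed point
diffs = []
for p in range(8):
    x_new = G(x)
    diffs.append(abs(x_new - x))
    x = x_new

# Successive differences decay geometrically, with ratio about eps = 0.1.
ratios = [diffs[k + 1] / diffs[k] for k in range(len(diffs) - 1)]
print(x, ratios)
```

The observed ratio plays the role of |ε| C in the theorem: a larger bound on the derivatives of G slows the decay but does not destroy it.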
3. Numerical Implementation

The major difficulty in the numerical implementation of the above iteration scheme lies in solving the two-point boundary value problem (2.14). Since the
176
W.F. Langford
differential operator L of (2.14) is not classically invertible, standard numerical methods as described for example in [2] or [13] are not applicable. However, the function w⁽ᵖ⁺¹⁾ satisfying (2.14) and its associated conditions is precisely the best least squares solution (BLSS) discussed in [19]. We have accomplished the solution of (2.14) using an improved version of the algorithm announced in [19]. The present algorithm can be summarized in the following steps:

Step 1. Compute a fundamental matrix by solving numerically on [a, b] the n initial value problems

  Y′ − A₀Y = 0,  Y(a) = I,   (3.1)

where I is the n × n identity matrix.

Step 2. Compute the characteristic matrix

  B = M + N Y(b),   (3.2)

find its nullspace, and hence the nullspaces of L and L*. Set p = 0.

Step 3. Solve equations (2.13) for η⁽ᵖ⁺¹⁾. This has the effect of projecting the right hand side of (2.14) on the range of L.

Step 4. Solve the initial value problem

  [d/dt − A₀] x⁽ᵖ⁺¹⁾ = f⁽ᵖ⁺¹⁾,  x⁽ᵖ⁺¹⁾(a) = 0,   (3.3)

where f⁽ᵖ⁺¹⁾ is the right hand side in (2.14), containing η⁽ᵖ⁺¹⁾ as determined in Step 3.

Step 5. Compute the initial vector of the BLSS of (2.14) using the formula

  w⁽ᵖ⁺¹⁾(a) = −P_{S,T} B† N x⁽ᵖ⁺¹⁾(b) − P_{T,S} G⁻¹ (x⁽ᵖ⁺¹⁾, Y),   (3.4)

where

  G = ∫ₐᵇ Y*(t) Y(t) dt = Gram matrix,  (x, Y) = ∫ₐᵇ Y*(t) x(t) dt,
  S = R(G⁻² B†),  T = N(B),
  P_{T,S} = projector onto T along S,  P_{S,T} = I − P_{T,S},

and B† = Moore-Penrose inverse of the matrix B.

Step 6. The solution of (2.14) is then given by

  w⁽ᵖ⁺¹⁾(t) = Y(t) w⁽ᵖ⁺¹⁾(a) + x⁽ᵖ⁺¹⁾(t),  a ≤ t ≤ b.   (3.5)

Step 7. Increment p and return to Step 3.
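Steps 1, 2 and the projector machinery of Step 5 can be sketched compactly in NumPy. The fragment below is a sketch under simplifying assumptions (a constant coefficient matrix A₀, trapezoidal quadrature for the Gram matrix, and NumPy's `pinv` in place of the DSVD subroutine); the data are a hypothetical periodic boundary value problem with a singular characteristic matrix, not taken from the paper.

```python
# Sketch of Steps 1, 2 and the Step-5 projectors of the BLSS shooting
# algorithm, assuming a constant A0 and trapezoidal quadrature; the example
# data (A0, M, N) are hypothetical, chosen so that B = M + N Y(b) is singular.
import numpy as np
from scipy.linalg import expm

a, b, nt = 0.0, 2.0 * np.pi, 801
A0 = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])
M, N = np.eye(3), -np.eye(3)          # periodicity conditions y(a) - y(b) = 0

# Step 1: fundamental matrix Y(t) = expm(A0 (t - a)), so Y(a) = I.
t = np.linspace(a, b, nt)
Y = np.array([expm(A0 * (tk - a)) for tk in t])

# Step 2: characteristic matrix B and its Moore-Penrose inverse.
B = M + N @ Y[-1]
Bdag = np.linalg.pinv(B)

# Gram matrix G = integral of Y*(t) Y(t) over [a, b], trapezoidal rule.
wts = np.full(nt, t[1] - t[0]); wts[[0, -1]] *= 0.5
G = np.einsum('t,tji,tjk->ik', wts, Y, Y)

# T = N(B) and S = R(G^-2 B-dagger), the subspaces entering formula (3.4).
U, s, Vt = np.linalg.svd(B)
k = int(np.sum(s < 1e-8 * s[0]))      # dimension of the nullspace of B
T_basis = Vt[-k:].T
Us, ss, _ = np.linalg.svd(np.linalg.solve(G @ G, Bdag))
S_basis = Us[:, :int(np.sum(ss > 1e-8 * ss[0]))]

# Oblique projectors: P_TS projects onto T along S, and P_ST = I - P_TS.
W = np.hstack([T_basis, S_basis])
P_TS = np.hstack([T_basis, np.zeros_like(S_basis)]) @ np.linalg.inv(W)
P_ST = np.eye(3) - P_TS
print(np.round(P_TS, 6))
```

The remaining pieces of Step 5 (the particular solution x⁽ᵖ⁺¹⁾ and the evaluation of (3.4)) would be supplied by the Adams integrator of the production code; they are omitted here.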
Numerical Solution of Bifurcation Problems
177
Note that, after initialization, only one initial value problem is solved per iteration, so that iterations are relatively inexpensive. Equation (3.4) is an improved version of that given in [19], in that the new version is less expensive to evaluate. Whereas the previous version reduces to the method of adjoints for invertible problems, the present formula reduces to the method of complementary functions. (For the invertible case, H.B. Keller has shown the superiority of the method of complementary functions to the method of adjoints in [16].) The significance of formula (3.4) is that it reduces the problem of calculating a generalized inverse of L to that of calculating the Moore-Penrose inverse B† of the characteristic matrix B. Due to recent developments in numerical linear algebra [2, 10] this is now a routine task. It was accomplished in the computations presented here by means of the subroutine DSVD. The discretization formulae selected to compute the integrals (in the inner products and for G), and to solve the initial value problems (3.1) and (3.3), should be compatible in two ways. First, they have to work on the same set of net points, and second, they should give comparable truncation errors. Because the functions appearing in the examples are analytic, we have chosen to use formulae with high order truncation errors. The integrals were evaluated with the IMSL Romberg extrapolation subroutine DQATR, and the initial value problems were solved using standard high-order Adams predictor-corrector formulae [21] together with the iterative Milne-Rosser method [29] for the starting data. This non-standard starting procedure was chosen because at each iteration the right side of (3.3) is known only on the net points.
The Milne-Rosser method uses known information from forward net points to efficiently obtain high order accurate solutions at the first few starting points, thus eliminating the need to reduce the initial step-size, as is often done for high-order Adams methods [32]. Furthermore, it is easy to match the truncation errors of the Milne-Rosser and Adams formulae. For most of our computations, we chose formulae with an overall truncation error of h⁷ (h = stepsize) and 129 = 2⁷ + 1 equally spaced net points. The number of steps was a power of 2 to facilitate the Romberg integration. The variation of accuracy and cost with the number of steps is shown clearly in Table 2. All computations were performed on an IBM 370 model 158 computer in double precision arithmetic.

4. Buckled Rod Problem

Probably the earliest successful treatment of a bifurcation problem was Leonhard Euler's study of the buckling of an inextensible elastica with prescribed end thrust, or the buckled rod problem. An excellent recent survey of this problem is given by Reiss [28] (see also [7]). Under standard physical assumptions (pinned uniform thin inextensible rod, axial thrust) one obtains the model equations

  θ″ + λ sin θ = 0,  θ′(0) = 0 = θ′(1),   (4.1)
where θ(x) is the angle between the centerline of the deformed rod and the x-axis, λ is proportional to the applied thrust, and the length of the rod is 1 unit. One can obtain the actual displacements u(x) and v(x) parallel and perpendicular to the x-axis from u′(x) = cos θ − 1, v′(x) = sin θ, u(0) = v(0) = v(1) = 0. Problem (4.1) is fairly easy to solve exactly in terms of elliptic integrals. Even so, for numerical purposes it may be easier to work directly from (4.1) than to evaluate the elliptic integrals numerically. In our work, the exact solution was useful as a check on the numerical solution by iteration. For the actual computations, problem (4.1) was converted to a first order system, but for this discussion it is convenient to retain the form (4.1). The linearized problem is

  φ″ + λφ = 0,  φ′(0) = φ′(1) = 0,   (4.2)

which has eigenvalues and normalized eigenfunctions

  λₙ = n²π²,  φₙ(x) = √2 cos nπx,  n = 1, 2, ….   (4.3)

We will consider only the lowest eigenvalue λ₁ = π² and seek buckled solutions (λ, θ) near (π², 0). Problem (4.2) is selfadjoint, and so the orthogonal complement of the range is also spanned by φ(x) = φ₁(x). The bifurcation matrix of Theorem 1 is here 1 × 1 and given by

  D = ⟨φ, φ⟩ = 1.   (4.4)
It follows that bifurcation occurs at (π², 0), and, for sufficiently small ε, the buckled solutions are given by the limit of the iteration scheme of §2. For this simple example with λ appearing linearly, the iteration equations can be written:

  λ⁽⁰⁾ = π²,  w⁽⁰⁾ = 0,
  θ⁽ᵖ⁾(x) = ε φ(x) + ε² w⁽ᵖ⁾(x),
  λ⁽ᵖ⁺¹⁾ = π² ε [∫₀¹ φ(x) sin θ⁽ᵖ⁾(x) dx]⁻¹,
  [d²/dx² + π²] w⁽ᵖ⁺¹⁾ = [π² θ⁽ᵖ⁾(x) − λ⁽ᵖ⁺¹⁾ sin θ⁽ᵖ⁾(x)]/ε²,
  w⁽ᵖ⁺¹⁾′(0) = 0 = w⁽ᵖ⁺¹⁾′(1),  ∫₀¹ φ(x) w⁽ᵖ⁺¹⁾(x) dx = 0,
  p = 0, 1, 2, ….   (4.5)
Buckled states associated with eigenvalues other than π² could be computed similarly. Some typical results are given in Tables 1 and 2. Convergence was observed to proceed in powers of ε² rather than ε, just as predicted by the theory in [17]. The constant C in formula (2.15) can be estimated a posteriori for this example to be about 0.04. Thus the convergence is exceptionally rapid.
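The scheme (4.5) is simple enough to reproduce directly. The sketch below inverts the singular operator d²/dx² + π² by a cosine series on the complement of the nullspace, a shortcut available here because the eigenfunctions are explicit; it stands in for the BLSS algorithm of Section 3 and is not the production code of the paper. Up to discretization error it should reproduce the λ values of Table 1.

```python
# Iteration (4.5) for the buckled rod, with the singular operator
# d^2/dx^2 + pi^2 inverted mode by mode in a cosine series (a stand-in for
# the BLSS algorithm of Section 3; grid sizes and mode counts are arbitrary).
import numpy as np

def buckled_rod(eps, n_grid=1024, n_modes=40, n_iter=40):
    x = np.linspace(0.0, 1.0, n_grid + 1)
    wts = np.full(x.size, 1.0 / n_grid); wts[[0, -1]] *= 0.5   # trapezoid
    phi = np.sqrt(2.0) * np.cos(np.pi * x)      # normalized eigenfunction
    modes = [np.sqrt(2.0) * np.cos(n * np.pi * x) for n in range(n_modes)]
    modes[0] = np.ones_like(x)                  # the n = 0 mode is constant
    lam, w = np.pi ** 2, np.zeros_like(x)
    for _ in range(n_iter):
        theta = eps * phi + eps ** 2 * w
        # Solvability condition (projection on the range of L) fixes lambda:
        lam = np.pi ** 2 * eps / np.sum(wts * phi * np.sin(theta))
        rhs = (np.pi ** 2 * theta - lam * np.sin(theta)) / eps ** 2
        # Invert d^2/dx^2 + pi^2 mode by mode, skipping the nullspace n = 1,
        # which also enforces the orthogonality <w, phi> = 0:
        w = np.zeros_like(x)
        for n in range(n_modes):
            if n == 1:
                continue
            cn = np.sum(wts * modes[n] * rhs)   # cosine coefficient of rhs
            w += cn / (np.pi ** 2 * (1 - n ** 2)) * modes[n]
    return lam, eps * phi + eps ** 2 * w

lam, theta = buckled_rod(0.5)
print(lam, theta[0])   # compare the eps = 0.5 row of Table 1
```

The Neumann boundary conditions are satisfied automatically by every cosine mode, so no shooting is needed in this special case.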
Table 1. Solution of buckled rod problem. Stepsize = 1/128

  ε       Iterations   θ(0)             θ(¼)             λ                  Absolute error in λ
  0.01     2           0.014142120892   0.010000010417    9.869851145051    3 × 10⁻¹³
  0.1      3           0.141406613788   0.100010424485    9.894317015898    4 × 10⁻¹³
  0.25     4           0.353322127970   0.250163526830   10.025335401829    7 × 10⁻¹³
  0.5      6           0.705230092211   0.501326947950   10.511366212101    5 × 10⁻¹³
  0.75     7           1.054170207686   0.754587795604   11.389179765930    3 × 10⁻¹²
  1.0      9           1.398276152596   1.011258763577   12.780256807841    4 × 10⁻¹²
  1.125   10           1.567787687377   1.141380881815   13.731505354089    5 × 10⁻¹²
Table 2. Accuracy test. ε = 1/2

  Number of steps   Absolute error in λ   Iterations   Execution cost (CPU seconds)
   16               1.7 × 10⁻⁴            2            0.54
   32               7.7 × 10⁻⁷            3            0.68
   64               4.8 × 10⁻¹⁰           5            1.32
  128               5.3 × 10⁻¹³           6            3.17
  256               6.0 × 10⁻¹³           6            5.79
The calculated deflections θ for x > ½ are mirror images of those for x < ½ with an error less than 10⁻¹¹, and the calculated values of θ(½) ranged between 10⁻¹⁷ and 10⁻¹¹. (Theoretically θ(½) = 0.) The maximum angular deflection occurs at the ends of the rod, so θ(0) is identical to the sup norm of the solution. The "absolute error" shown in Tables 1 and 2 is the absolute difference between the computed λ and the exact λ (expressed as an elliptic integral [28]) corresponding to the computed initial deflection θ(0). If instead one fixed λ and compared corresponding exact and computed values of θ(0), the absolute error would have the same order of magnitude for 0.1 ≤ ε ≤ 1.25. Table 2 shows that one can control the accuracy and cost of the computation by adjusting the stepsize on the interval [0, 1]. The number of iterations shown is the optimum number to attain the stated error; additional iteration gave no increase in accuracy. The last line of Table 2 indicates that the limit of precision was effectively reached with stepsize 1/128. For small values of ε, an asymptotic expansion of the solution, calculated by classical perturbation methods [23], is
  θ(x, ε) = ε√2 cos πx − (ε³√2/96) cos 3πx + O(ε⁵),
  λ(ε) = π²[1 + ¼ε² + O(ε⁴)].   (4.6)
In comparison with the numerical solution in Table 1, this asymptotic formula gives at least 5 figures of accuracy for |ε| ≤ 0.1, but its accuracy deteriorates rapidly for larger values of ε.
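The exact λ mentioned above has a classical closed form: for the first buckling mode, half a pendulum period fits in [0, 1], which gives λ = 4K(k)² with k = sin(θ(0)/2) and K the complete elliptic integral of the first kind. A quick check against Table 1 (note the assumption that SciPy's `ellipk` takes the parameter m = k², not the modulus k):

```python
# Exact lambda for the buckled rod via the complete elliptic integral K:
# half a pendulum period on [0, 1] gives sqrt(lambda) = 2 K(k), k = sin(theta0/2).
import numpy as np
from scipy.special import ellipk

def lam_exact(theta0):
    k = np.sin(theta0 / 2.0)
    return 4.0 * ellipk(k ** 2) ** 2       # SciPy's ellipk takes m = k^2

# theta(0) values from Table 1 and the computed lambdas they correspond to.
for theta0, lam_table in [(0.705230092211, 10.511366212101),
                          (1.398276152596, 12.780256807841)]:
    print(theta0, lam_exact(theta0), lam_table)
```

As θ(0) → 0 the formula reduces to λ → 4(π/2)² = π², the lowest eigenvalue of the linearized problem (4.2).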
5. Hopf Bifurcation

Hopf bifurcation is the bifurcation of a family of periodic solutions from a stationary solution of an autonomous system of differential equations. We therefore consider two-point boundary value problems

  dy/ds = F(y, μ),  y(0) = y(T),   (5.1)
where F: ℝⁿ × ℝ → ℝⁿ, T is the unknown period, μ is the bifurcation parameter, and F satisfies F(0, μ) = 0 and is three times continuously differentiable in a neighbourhood of (y, μ) = (0, 0). It follows from standard theorems on differential equations that if y(s) is a solution of (5.1) on [0, T], then the T-periodic extension of y is a solution of the differential equation for all s ∈ ℝ. First we change variables to get the parameter T out of the boundary conditions and into the differential equation, by defining t = 2πs/T and redefining y to obtain

  dy/dt = (T/2π) F(y, μ),  y(0) − y(2π) = 0.   (5.2)

Clearly (5.2) is a special case of (2.1) with λ = (μ, T), M = I and N = −I. The assumptions on F allow us to write

  F(y, μ) = A₀y + μA₁y + Q(y) + R(y, μ)   (5.3)
where A₀, A₁, Q and R are as in (2.2) and (2.3), except that now they are all independent of t. The linearized problem corresponding to (5.2) is

  dφ/dt − (T/2π) A₀φ = 0,  φ(0) = φ(2π).   (5.4)

Problem (5.4) has a nonconstant solution if and only if A₀ has a pair of complex conjugate pure imaginary eigenvalues ±iβ, β > 0. Suppose this is true and let T₀ = 2π/β; then with T = T₀, Equation (5.4) has a linearly independent pair of 2π-periodic solutions. Assume that iβ is an algebraically simple eigenvalue of A₀, and that A₀ has no eigenvalues of the form kiβ, k ∈ {0, 2, 3, 4, …}. Then the nullspace of (5.4) is two-dimensional. In fact if c = a + ib is the (complex) eigenvector of A₀ corresponding to the eigenvalue iβ, then a real basis of the solution space of (5.4) is given by the pair

  φ₁(t) = a cos t − b sin t = Re[eⁱᵗ c],
  φ₂(t) = a sin t + b cos t = Im[eⁱᵗ c].   (5.5)
Let us normalize c by requiring

  a*a + b*b = 2.   (5.6)

The set {a, b} is necessarily linearly independent. Let e ∈ ℝⁿ be any vector with a nonzero projection on span{a, b}. Then there is a unique (except for sign)
solution φ of (5.4) normalized by ‖φ‖ = 1 and satisfying

  e*φ(0) = 0.   (5.7)
Note that, since (5.2) and (5.4) are autonomous, if y(t) is a solution of either, then so is y(t + γ) for any real constant γ. In fact, the general solution α₁φ₁(t) + α₂φ₂(t) of (5.4) could equally be parametrized as αφ₁(t + γ). The parameter γ just corresponds to a time shift; it gives no essentially new solutions. The desire to remove this redundancy is the motivation behind condition (5.7). Now incorporate (5.7) into the boundary conditions by defining

  M = [I; e*],  N = [−I; 0*],   (5.8)

where 0* is the 1 × n null row vector, and replace problems (5.2) and (5.4) by

  dy/dt − (T/2π) F(y, μ) = 0,  My(0) + Ny(2π) = 0,   (5.9)

  dφ/dt − (1/β) A₀φ = 0,  Mφ(0) + Nφ(2π) = 0.   (5.10)
Since e ≠ 0, we have rank[M N] = n + 1, and the adjoint problem for (5.10) is

  dψ/dt + (1/β) A₀*ψ = 0,  Uψ(0) + Vψ(2π) = 0,   (5.11)
where U and V are (n − 1) × n, the rows of U span e⊥, and V = −U. Problem (5.11) necessarily has a two-dimensional nullspace, which orthogonally complements the range of the linear operator of (5.10). A convenient basis of the nullspace of (5.11) is given by Lemma 1 in the Appendix as

  ψ₁ = (SS*)⁻¹ φ,  ψ₂ = (SS*)⁻¹ dφ/dt,   (5.12)

where S is a nonsingular matrix which transforms A₀ to canonical form (see Appendix) and φ is a normalized solution of (5.10). Recall that this norm is induced by the inner product

  ⟨u, v⟩ = (1/2π) ∫₀²ᵖⁱ u*(t) v(t) dt.
By our hypotheses, the Jacobian matrix F_y(0, μ) is continuously differentiable in μ near μ = 0. Let δ(μ) denote the differentiable simple eigenvalue of F_y(0, μ) near μ = 0 which coincides with iβ at μ = 0. Now define

  α′ = Re[δ′(0)],  β′ = Im[δ′(0)].   (5.13)
The following identities are derived in Lemma 2 of the Appendix:

  ⟨φ, ψ₁⟩ = 1,   ⟨φ, ψ₂⟩ = 0,
  ⟨A₀φ, ψ₁⟩ = 0,  ⟨A₀φ, ψ₂⟩ = β,
  ⟨A₁φ, ψ₁⟩ = α′,  ⟨A₁φ, ψ₂⟩ = β′.   (5.14)
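These identities are easy to verify numerically for a concrete matrix. The sketch below uses a hypothetical 3 × 3 example (A₀ normal with eigenvalues ±2i and −1, so that ψ₁ = φ and ψ₂ = φ′ by Lemma 1 of the Appendix; A₁ is an arbitrary perturbation matrix invented for the test) and checks ⟨A₁φ, ψ₁⟩ = α′ and ⟨A₁φ, ψ₂⟩ = β′ against a finite-difference derivative of the eigenvalue branch δ(μ).

```python
# Numerical check of the identities (5.14) on a hypothetical example:
# A0 normal with eigenvalues +-2i and -1 (so beta = 2, SS* = I, psi1 = phi,
# psi2 = phi'), A1 an arbitrary perturbation matrix.
import numpy as np

beta = 2.0
A0 = np.array([[0.0, -beta, 0.0], [beta, 0.0, 0.0], [0.0, 0.0, -1.0]])
A1 = np.array([[0.3, 0.1, 0.0], [-0.4, 0.5, 0.0], [0.2, 0.0, 0.7]])

# Eigenvector c = a + ib of A0 for i*beta, normalized by a*a + b*b = 2; the
# phase condition e1 . phi(0) = 0 is satisfied by phi = a sin t + b cos t.
a_vec, b_vec = np.array([1.0, 0.0, 0.0]), np.array([0.0, -1.0, 0.0])
t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
phi  = np.outer(np.sin(t), a_vec) + np.outer(np.cos(t), b_vec)   # = psi1
dphi = np.outer(np.cos(t), a_vec) - np.outer(np.sin(t), b_vec)   # = psi2

def ip(u, v):
    # <u, v> = (1/2 pi) * integral of u . v over one period
    return float(np.mean(np.sum(u * v, axis=1)))

# alpha' + i beta' = delta'(0), by centered differences on the eigenvalue
# branch of A0 + mu A1 passing through 2i at mu = 0.
def eig_near_2i(mu):
    w = np.linalg.eigvals(A0 + mu * A1)
    return w[np.argmin(np.abs(w - 2j))]

h = 1e-5
dprime = (eig_near_2i(h) - eig_near_2i(-h)) / (2.0 * h)

A0phi, A1phi = phi @ A0.T, phi @ A1.T
print(ip(phi, phi), ip(A0phi, dphi), ip(A1phi, phi), ip(A1phi, dphi), dprime)
# expected: <phi,psi1> = 1, <A0 phi,psi2> = beta,
#           <A1 phi,psi1> = Re delta'(0), <A1 phi,psi2> = Im delta'(0)
```

The same check, with the (SS*)⁻¹ factors restored, applies to non-normal A₀.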
In Hopf's original paper [12], the essential conditions for bifurcation were β > 0 and α′ ≠ 0. We now see that these are equivalent to the nonsingularity of the bifurcation matrix in our Theorem 1.

Theorem 2. If β > 0, α′ ≠ 0 and F has the properties assumed above, then problem (5.9) has a continuous family of solutions of the form
  y(t, ε) = ε φ(t) + ε² w(t, ε),  ⟨w, φ⟩ = 0,
  μ(ε) = ε η(ε),  T(ε) = (2π/β)[1 + ε τ(ε)],   (5.15)

for |ε| less than some ε₀ > 0. These solutions are given by the limit of the iteration scheme of Theorem 1, with the identifications

  λ = (μ, T − 2π/β),  f(t, y, λ) = (T/2π) F(y, μ),

and k = 2.

Proof. We need only verify that D is nonsingular. But from Lemma 2 (or (5.14)) we find

  det D = α′β ≠ 0,   (5.16)

so the conclusions follow from Theorem 1.

It also follows from Theorem 1 that for a given choice of e in (5.7), the bifurcating family is unique. Hopf [12] and others [5, 25] have proven the stronger result that even varying the choice of e gives nothing essentially new; we omit the details here. From the discussion preceding Theorem 1, it is clear that a first approximation to (η, τ) is determined by the integrals ⟨Q(φ), ψᵢ⟩, i = 1, 2. For the case of Hopf bifurcation, these are integrals over one period of homogeneous cubic polynomials in sin t and cos t. Hence they vanish. It follows that the right-hand side of (2.13) is O(ε) for Hopf bifurcation. By similar arguments to the proof of Theorem 1, we get the following strengthened version of Theorem 2, in which we redefine η and τ.

Theorem 3. With the same hypotheses as Theorem 2, problem (5.9) has a continuous family of solutions of the form
  y(t, ε) = ε φ(t) + ε² w(t, ε),  ⟨w, φ⟩ = 0,
  μ(ε) = ε² η(ε),  T(ε) = (2π/β)[1 + ε² τ(ε)],   (5.17)
for |ε| less than some ε₀ > 0, given by the limit of the iteration scheme

  y⁽ᵖ⁾ = ε φ + ε² w⁽ᵖ⁾,  μ⁽ᵖ⁾ = ε² η⁽ᵖ⁾,  T⁽ᵖ⁾ = (2π/β)[1 + ε² τ⁽ᵖ⁾],

  D (η⁽ᵖ⁺¹⁾, τ⁽ᵖ⁺¹⁾)ᵀ = −⟨ ε(η⁽ᵖ⁾A₁ + τ⁽ᵖ⁾A₀) w⁽ᵖ⁾ + ε τ⁽ᵖ⁾η⁽ᵖ⁾ A₁ y⁽ᵖ⁾
      + ε⁻³(1 + ε²τ⁽ᵖ⁾)[Q(y⁽ᵖ⁾) + R(y⁽ᵖ⁾, μ⁽ᵖ⁾)], ψᵢ ⟩,  i = 1, 2,   (5.18)

  [d/dt − (1/β) A₀] w⁽ᵖ⁺¹⁾ = (1/β){ ε(η⁽ᵖ⁺¹⁾A₁ + τ⁽ᵖ⁺¹⁾A₀) φ
      + ε²(η⁽ᵖ⁾A₁ + τ⁽ᵖ⁾A₀) w⁽ᵖ⁾ + ε² τ⁽ᵖ⁾η⁽ᵖ⁾ A₁ y⁽ᵖ⁾
      + ε⁻²(1 + ε²τ⁽ᵖ⁾)[Q(y⁽ᵖ⁾) + R(y⁽ᵖ⁾, μ⁽ᵖ⁾)] },
  M w⁽ᵖ⁺¹⁾(0) + N w⁽ᵖ⁺¹⁾(2π) = 0,  ⟨w⁽ᵖ⁺¹⁾, φ⟩ = 0.   (5.19)
The limiting values (w₀, η₀, τ₀) of (w, η, τ) in (5.17) as ε → 0 are of great practical interest, and it is possible to solve for them exactly. If Q̂(x, y) denotes the bilinear operator ½F_yy(0, 0)xy and K(y) denotes the cubic operator (1/3!)F_yyy(0, 0)yyy (second and third Fréchet derivatives of F with respect to y at (0, 0), respectively), then (w₀, η₀, τ₀) satisfies

  [d/dt − (1/β) A₀] w₀ = (1/β) Q̂(φ, φ),
  M w₀(0) + N w₀(2π) = 0,  ⟨w₀, φ⟩ = 0,
  D (η₀, τ₀)ᵀ = −⟨ 2Q̂(φ, w₀) + K(φ), ψᵢ ⟩,  i = 1, 2,   (5.20)

where D = (α′ 0; β′ β). At least in principle, the exact solution of (5.20) for (w₀, η₀, τ₀) is an easy hand calculation, since it involves only trigonometric functions and their integrals. Thus one obtains the asymptotic approximation to the branch of solutions of (5.9):

  y(t, ε) = ε φ(t) + ε² w₀(t) + O(ε³),
  μ(ε) = ε² η₀ + O(ε³),
  T(ε) = (2π/β)[1 + ε² τ₀ + O(ε³)].   (5.21)
This asymptotic solution was known to Poincaré [24], and also given by Hopf [12], so it will be referred to here as the Poincaré-Hopf solution. If η₀ > 0 (η₀ < 0) then there is a neighbourhood of the bifurcation point in which the bifurcating solutions exist only for μ > 0 (μ < 0). Pathological examples are known for the exceptional case η₀ = 0 (see [5]). It should be pointed out that even in the generic case η₀ ≠ 0, this neighbourhood may be extremely small, so it is dangerous to rely upon (5.21) for finite values of ε ≠ 0. As shown by the
example in the next section, the Poincaré-Hopf solution may be grossly in error even for values of ε which seem reasonably small. The solution calculated by the iteration procedure is accurate for a much wider range of ε values.
6. Feedback-Inhibition Oscillator
Glass [8, 9] has recently proposed a model for feedback-inhibition oscillations in a biological control system, and he observes that the onset of these oscillations is an example of Hopf bifurcation. We first translate his variables to put his equilibrium point at our (0, 0, 0), and thus obtain the equations
  dy/ds = A y + G(y, μ),   (6.1)

where

  A = ⎛ −1   0  −2 ⎞
      ⎜ −2  −1   0 ⎟,   G(y, μ) = ( g(y₃, μ), g(y₁, μ), g(y₂, μ) )ᵀ,
      ⎝  0  −2  −1 ⎠

y takes values inside the cube |yᵢ| < ½ in ℝ³, and μ > 0. Bifurcation occurs at (y, μ) = (0, 4). Clearly g is analytic here, with expansion

  g(yᵢ, μ) = ½(4 − μ) yᵢ − 2yᵢ² − 8yᵢ³ + ⋯.   (6.2)
The eigenvalues of A are −3, i√3 and −i√3, so the limiting value of the period is T₀ = 2π/√3. We choose the normalized time variable t = 2πs/T and consider the two-point boundary value problem

  dy/dt − (1/√3) A y = (T/2π) G(y, μ) + [(T/2π) − (1/√3)] A y,
  M y(0) + N y(2π) = 0,   (6.3)
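The spectrum just quoted is easy to confirm for the cyclic inhibition matrix A (the exact entries of A are an assumption of this sketch, chosen to be consistent with the stated eigenvalues and the cyclic structure of the model):

```python
# Consistency check for the feedback-inhibition example: the assumed cyclic
# matrix A has eigenvalues -3 and +-i*sqrt(3), so the limiting period of the
# bifurcating orbits is T0 = 2*pi/sqrt(3).
import numpy as np

A = np.array([[-1.0,  0.0, -2.0],
              [-2.0, -1.0,  0.0],
              [ 0.0, -2.0, -1.0]])
eig = np.linalg.eigvals(A)
T0 = 2.0 * np.pi / np.sqrt(3.0)
print(np.sort_complex(eig), T0)
```

The value T₀ ≈ 3.6276 is the period against which the computed periods of Figure 1 are measured.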
and corresponding linearized problem

  dφ/dt − (1/√3) A φ = 0,  Mφ(0) + Nφ(2π) = 0.   (6.4)

Here M = [I; e*] and N = [−I; 0*] as in (5.8), with e = (1, 0, 0)*, so that the normalization (5.7) becomes y₁(0) = 0.
Problem (6.4) has a one-dimensional nullspace spanned by the normalized eigenfunction

  φ(t) = √(2/3) ( sin t, sin(t + 2π/3), sin(t − 2π/3) )ᵀ.   (6.5)

Since the matrix A is normal, it follows from Lemma 1 in the Appendix that the nullspace of the problem adjoint to (6.4) is spanned by

  ψ₁(t) = φ(t),  ψ₂(t) = φ′(t).   (6.6)
The bifurcation matrix is easily calculated as

  D = ⎛α′  0⎞,  with β = √3,   (6.7)
      ⎝β′  β⎠

which is clearly nonsingular, since α′β ≠ 0. Therefore the Hopf bifurcation theorem applies, and the branch of periodic solutions can be computed by our iteration scheme. This computation has been carried out, using the algorithm of Section 3 and the formulae of Theorem 3. The results are shown in Table 3 and Figures 1 and 2. For comparison purposes, the Poincaré-Hopf solution has also been calculated for this example. It is expressed by formulae (5.21) with φ as defined in (6.5), and
  w₀(t) = an explicit trigonometric vector polynomial in cos t, sin t, cos 2t and sin 2t with rational coefficients,   (6.8)

  η₀ = 9 + 505/567 = 9.89065256…,  τ₀ = 172/1701 = 0.1011169900….

Table 3 shows that η(ε) and τ(ε) (defined by (5.17)) agree well with η₀ and τ₀ for very small values of ε. However, η increases with ε while τ decreases and turns negative, so that the periods of the computed solutions rapidly diverge from those predicted by the Poincaré-Hopf solution, as shown in Figure 1. The last two columns of Table 3 give the "missing" components of the initial vector y(0) (remember that y₁(0) = 0 from our choice of M in (6.4)). Therefore, the solutions for all t can be computed from the data in the last three columns of
Table 3. Periodic solutions of feedback-inhibition equations

  ε      Iterations   η(ε)          τ(ε)           μ            y₂(0)          y₃(0)
  0.001   5            9.89070850    0.10111488    4.00000989   −0.000707057   −0.000707120
  0.01    6            9.89622788    0.10090582    4.00098962   −0.007066288   −0.007072095
  0.06    8           10.09632484    0.09333522    4.03634677   −0.04231153    −0.04241224
  0.10   10           10.48952412    0.07853284    4.10489524   −0.07057207    −0.07056073
  0.14   12           11.15718572    0.05367749    4.21868084   −0.09920252    −0.09846884
  0.18   14           12.23429263    0.01440984    4.39639108   −0.12869276    −0.12592576
  0.20   14           13.00720304   −0.01311693    4.52028812   −0.14399230    −0.13934930
  0.22   16           14.01609742   −0.04824106    4.67837912   −0.15983486    −0.15242457
  0.24   17           15.36948074   −0.09396743    4.88528209   −0.17638718    −0.16495494
  0.26   22           17.26151957   −0.15533067    5.16687872   −0.19384223    −0.17662808
  0.28   30           20.08613035   −0.24164649    5.57475262   −0.21240624    −0.18697363
  0.30   30           24.837623     −0.373599      6.235386     −0.232312      −0.195332
Fig. 1. Period of periodic solutions of feedback-inhibition equations (computed solutions, Glass's solutions, and the asymptotic Poincaré-Hopf prediction, against the bifurcation parameter μ)

Fig. 2. Amplitude of periodic solutions of feedback-inhibition equations (computed solutions and Glass's solutions, against the bifurcation parameter μ)
Table 3 using a standard IVP subroutine, and need not be presented here. It is estimated that the absolute error in the data of Table 3 is about 10⁻⁷, except for the last two lines, where the iterations were cut off at 30 and the error estimates are 10⁻⁶ and 10⁻³ respectively. These estimates are based on analyses of the truncation errors involved in the integration and IVP subroutines, as well as the observed changes in successive iterates. The superior performance of the solution by iteration compared to the Poincaré-Hopf solution is not surprising in
view of [17], where it was shown that the p-th iteration is at least as accurate as a p-term asymptotic expansion obtained by perturbation theory. The amplitude of the computed solutions, shown in Figure 2, is defined to be the maximum interpolated absolute value attained by any of the three components. In fact, this was invariably component y₁ and occurred for t near π/2. Figures 1 and 2 also show two solutions obtained by Glass [9], as follows. Since the periodic solution is asymptotically orbitally stable (i.e. an attracting limit cycle), any nearby solution eventually spirals into it. Therefore one can fix μ, choose an initial point, and then numerically integrate the resulting IVP (using e.g. the Runge-Kutta method) until periodic behaviour is observed. This is the desired solution. Glass' results agree perfectly with those presented here in their common μ-interval of validity. (He has presented solutions for μ = 8, 10, and 12 in addition to those shown.) However, it follows immediately from Hopf's "exchange of stabilities" formula [12, 25] and the principle of linearized stability that the periodic solution becomes more and more weakly attracting as μ → 4⁺, and this is verified numerically for this example. Thus Glass' procedure becomes more expensive and/or less accurate as μ → 4⁺, whereas the opposite is true of the present iterative method. Furthermore, there are other bifurcation problems of interest where at least part of the bifurcation branch is unstable, so that Glass' procedure would be inapplicable. In conclusion, an iterative algorithm has been presented which shows promise for computing bifurcation branches beyond the range of validity of asymptotic solutions, and to a point where conventional numerical techniques can take over.
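The brute-force procedure attributed to Glass (fix μ, integrate, wait for the attracting limit cycle, read off the period) can be sketched on any system with a stable Hopf orbit. The example below uses the planar Hopf normal form as a stand-in, since the Glass equations themselves are not reproduced here; the frequency ω = √3 mimics this section's T₀ = 2π/√3.

```python
# Sketch of the "integrate until periodic" approach: run past the transient,
# then measure the period from successive upward zero crossings.  Stand-in
# system: planar Hopf normal form with limit cycle r = sqrt(mu) and period
# exactly 2*pi/omega (hypothetical parameters, not the Glass model).
import numpy as np
from scipy.integrate import solve_ivp

mu, omega = 0.5, np.sqrt(3.0)

def rhs(t, z):
    x, y = z
    r2 = x * x + y * y
    return [mu * x - omega * y - x * r2,
            omega * x + mu * y - y * r2]

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, 0.0], dense_output=True,
                rtol=1e-10, atol=1e-12)

# Sample well past the transient and locate upward crossings of y = 0.
t = np.linspace(150.0, 200.0, 200001)
y = sol.sol(t)[1]
up = np.where((y[:-1] < 0.0) & (y[1:] >= 0.0))[0]
crossings = t[up] - y[up] * (t[up + 1] - t[up]) / (y[up + 1] - y[up])
period = np.mean(np.diff(crossings))
print(period, 2.0 * np.pi / omega)
```

The weakness noted in the text is visible in this setting too: as μ → 0⁺ the attraction rate (proportional to μ) vanishes, so the transient integration must be made longer and longer, exactly the regime where the iterative method is cheapest.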
Appendix
In this Appendix, we derive certain basic facts underlying Hopf bifurcation, which are used in Sections 5 and 6. If the matrix A₀ has the properties listed after Equation (5.4), then there exists a nonsingular n × n matrix S such that [3]

  S⁻¹ A₀ S = ⎛J  0⎞ = E,  J = ⎛0  β⎞,   (A.1)
             ⎝0  C⎠         ⎝−β 0⎠

where C is a real (n − 2) × (n − 2) matrix with no eigenvalues of the form kiβ, k = 0, 1, 2, …. Furthermore, the first two columns of S are the vectors a and b as defined after (5.4). We now transform problems (5.10) and (5.11) by defining u = S⁻¹φ, v = S*ψ, and get

  du/dt − (1/β) E u = 0,  MSu(0) + NSu(2π) = 0,   (A.2)

  dv/dt + (1/β) E* v = 0,  US*⁻¹v(0) + VS*⁻¹v(2π) = 0.   (A.3)
Define the n-vector functions u₁ and u₂ by

  u₁(t) = (cos t, −sin t, 0, …, 0)ᵀ,  u₂(t) = (sin t, cos t, 0, …, 0)ᵀ.   (A.4)

Then φ₁ and φ₂ in (5.5) are Su₁ and Su₂ respectively. Problem (A.2) has a one-dimensional nullspace spanned by u₀(t) = α₁u₁(t) + α₂u₂(t), where α₁ and α₂ satisfy

  α₁ e*a + α₂ e*b = 0,  α₁² + α₂² = 1.   (A.5)

(Recall a*a + b*b = 2 and e is defined above (5.7).) Then ‖u₀‖ = 1, and the normalized solution of (5.10) is

  φ(t) = S u₀(t).   (A.6)
The adjoint problem (A.3) has its nullspace spanned by u₁ and u₂, but a more convenient basis is given by the solutions u₀ = α₁u₁ + α₂u₂ and ū₀ = α₂u₁ − α₁u₂, which are clearly linearly independent, and in fact orthogonal. These in turn determine a basis of solutions of the original adjoint problem (5.11), namely

  ψ(t) = S*⁻¹ u₀(t)  and  S*⁻¹ ū₀(t).   (A.7)
Lemma 1. If 49 is a normalized solution of (5.10) then a basis for the nullspace of (5.11) is given by
Ol(t)=(SS*)-149(t),
02(t) = ( S S * ) - 149'(t).
(A.8)
In case A is normal (that is AA* = A ' A ) , Equations(A.8) reduce to 0~ =49,
02=49 '.
(A.9)
Proof Formulae (A.8) follow immediately from (A.5), (A.6) and (A.7). If A is normal, the vectors a and b are orthogonal, and in fact S can be chosen to be an orthogonal matrix [-11]. But then SS* =I, hence (A.9). Next we derive very useful identities involving 49, 01 and 02Lemma 2. I f 49, 01, 02 are as in Lemma l and ~', fl' are as defined in (5.13), then (49,r
(49,02)=0,
(Ao 49,02>=fl,
(Aa 49, 015 =ct',
(A149,02)=fl'.
Proof By direct calculation,
(A.10)
  ⟨φ, ψ₁⟩ = ⟨Su₀, S*⁻¹u₀⟩ = ⟨u₀, u₀⟩ = 1,

and the remaining identities in the first two lines of (A.10) follow in the same way, using u₀′ ⊥ u₀. Here A₁ = F_yμ(0, 0). By our hypotheses, there is a unique continuously differentiable branch δ(μ) of eigenvalues of F_y(0, μ) with δ(0) = iβ, and corresponding eigenvectors c(μ), in a neighbourhood of μ = 0; that is,

  F_y(0, μ) c(μ) = δ(μ) c(μ).   (A.11)
Now differentiate (A.11) and set μ = 0:

  A₁ c(0) + A₀ c′(0) = δ′(0) c(0) + iβ c′(0).

Taking real and imaginary parts and recalling the definition of φ, one finds

  A₁φ = α′φ + β′φ′ + [β d/dt − A₀] z(t),

where

  z(t) = (α₁ cos t + α₂ sin t) Re[c′(0)] + (α₂ cos t − α₁ sin t) Im[c′(0)].

Since [β d/dt − A₀] z lies in the range of the operator of (5.10), it is orthogonal to ψ₁ and ψ₂. Therefore

  ⟨A₁φ, ψ₁⟩ = α′⟨φ, ψ₁⟩ + β′⟨φ′, ψ₁⟩ = α′,

and similarly ⟨A₁φ, ψ₂⟩ = β′.
Acknowledgements. The author would like to thank Professor K. Tam for support and encouragement of this work, and Mr. Ralph Nobel for writing the computer programs for the examples.
References

1. Aziz, A.K.: Numerical solution of boundary value problems for ordinary differential equations. New York: Academic Press 1975
2. Ben-Israel, A., Greville, T.N.E.: Generalized inverses: Theory and applications. New York: Wiley 1974
3. Coddington, E.A., Levinson, N.: Theory of ordinary differential equations. New York: McGraw-Hill 1955
4. Cole, R.H.: Theory of ordinary differential equations. New York: Appleton-Century-Crofts 1968
5. Crandall, M.G., Rabinowitz, P.H.: The Hopf bifurcation theorem. MRC Tech. Summary Report No. 1604, Mathematics Research Center, University of Wisconsin-Madison, 1976
6. Demoulin, Y.J., Chen, Y.M.: An iteration method for studying the bifurcation of solutions of the nonlinear equations L(λ)u + εR(λ, u) = 0. Numerische Math. 23, 47-61 (1974)
7. Dickey, R.W.: Bifurcation problems in nonlinear elasticity. London: Pitman 1976
8. Glass, L.: Combinatorial and topological methods in nonlinear chemical kinetics. J. Chem. Phys. 63, 1325-1335 (1975)
9. Glass, L.: Global analysis of nonlinear chemical kinetics. In: Modern theoretical chemistry, Vol. 4, Statistical mechanics (B. Berne, ed.). New York: Plenum Press (to appear)
10. Golub, G.H., Reinsch, C.: Singular value decomposition and least squares solutions. Numerische Math. 14, 403-420 (1970)
11. Hoffman, K., Kunze, R.: Linear algebra. Englewood Cliffs, N.J.: Prentice-Hall 1961
12. Hopf, E.: Abzweigung einer periodischen Lösung von einer stationären Lösung eines Differentialsystems. Ber. Math.-Phys. Kl. Sächs. Akad. Wiss. Leipzig 94, 3-22 (1942)
13. Hsü, I., Kazarinoff, N.D.: An applicable Hopf bifurcation formula and instability of small periodic solutions of the Field-Noyes model. J. Math. Analysis Appl. 55, 61-89 (1976)
14. Keller, H.B.: Numerical methods for two-point boundary-value problems. Waltham, Mass.: Blaisdell 1968
15. Keller, H.B.: Nonlinear bifurcation. J. Diff. Equations 7, 417-434 (1970)
16. Keller, H.B.: Numerical solution of two-point boundary-value problems. CBMS Regional Conference Series 24, SIAM, Philadelphia (1975)
17. Keller, H.B., Langford, W.F.: Iterations, perturbations and multiplicities for nonlinear bifurcation problems. Arch. Rat. Mech. Analysis 48, 83-108 (1972)
18. Keller, J.B., Antman, S.: Bifurcation theory and nonlinear eigenvalue problems. New York: Benjamin 1969
19. Langford, W.F.: A shooting algorithm for the best least squares solution of two-point boundary value problems. SIAM J. Numer. Analysis 14, No. 3 (1977)
20. Langford, W.F.: The generalized inverse of an unbounded linear operator with unbounded constraints. SIAM J. Math. Analysis (submitted)
21. Lapidus, L., Seinfeld, J.H.: Numerical solution of ordinary differential equations. New York: Academic Press 1971
22. Milne, W.E.: Numerical solution of differential equations. New York: Dover 1970
23. Nayfeh, A.: Perturbation methods. New York: Wiley 1973
24. Poincaré, H.: New methods of celestial mechanics, Vols. 1-3, 1892. English transl. NASA TT F-450 (1967)
25. Poore, A.B.: On the theory and application of the Hopf-Friedrichs bifurcation theory. Arch. Rat. Mech. Analysis 60, 371-393 (1976)
26. Rabinowitz, P.H.: Nonuniqueness of rectangular solutions of the Benard problem. In: [18], pp. 359-394 (1969)
27. Reid, W.T.: Generalized inverses of differential and integral operators. Proc. Symp. on Theory and Appl. of Generalized Inverses of Matrices, Lubbock, Texas (T.L. Boullion, P.L. Odell, eds.), 1968
28. Reiss, E.L.: Column buckling, an elementary example of bifurcation. In: [18], pp. 1-16 (1969)
29. Rosser, J.B.: A Runge-Kutta for all seasons. SIAM Review 9, 417-452 (1967)
30. Sanchez, D.A.: An iteration scheme for boundary value/alternative problems. Numerische Math. 23, 223-230 (1975)
31. Sattinger, D.H.: Topics in stability and bifurcation theory. Lecture Notes in Mathematics, No. 309. Berlin-Heidelberg-New York: Springer 1973
32. Shampine, L.F., Gordon, M.K.: Computer solution of ordinary differential equations. San Francisco: Freeman 1975
33. Stakgold, I.: Branching of solutions of nonlinear equations. SIAM Review 13, 289-332 (1971)
34. Vainberg, M.M., Trenogin, V.A.: The methods of Liapunov and Schmidt in the theory of nonlinear equations and their further development. Russ. Math. Surveys 17, No. 2, 1-60 (1962)
35. Vainberg, M.M., Trenogin, V.A.: Theory of branching of solutions of nonlinear equations. Leyden: Noordhoff 1974
Received October 27, 1976