J Sci Comput (2012) 51:421–448 DOI 10.1007/s10915-011-9516-0
Richardson Extrapolation of Iterated Discrete Galerkin Method for Eigenvalue Problem of a Two Dimensional Compact Integral Operator Bijaya Laxmi Panigrahi · Gnaneshwar Nelakanti
Received: 22 November 2010 / Revised: 23 May 2011 / Accepted: 4 July 2011 / Published online: 3 August 2011 © Springer Science+Business Media, LLC 2011
Abstract We consider the approximation of eigenelements of a two-dimensional compact integral operator with a smooth kernel by discrete Galerkin and iterated discrete Galerkin methods. By choosing the numerical quadrature appropriately, we obtain superconvergence rates for the eigenvalues, for the iterated eigenvectors, and for the gap between the spectral subspaces. We derive an asymptotic error expansion of the iterated discrete Galerkin operator and an asymptotic error expansion of the approximate eigenvalues. We then apply Richardson extrapolation to obtain improved error bounds for the eigenvalues. Numerical examples are presented to illustrate the theoretical estimates.

Keywords Eigenvalues and eigenvectors · Compact integral operator · Richardson extrapolation · Superconvergence rates
1 Introduction

Let $X = L^\infty([a,b]\times[c,d])$ be a Banach space. Consider the following integral operator defined on $X$ by
\[
\mathcal{K}u(s,t) = \int_a^b\int_c^d K(s,t,x,y)\,u(x,y)\,dx\,dy, \qquad (s,t)\in D, \tag{1}
\]
where the kernel $K(\cdot,\cdot,\cdot,\cdot)\in\mathcal{C}(D)\times\mathcal{C}(D)$, $D=[a,b]\times[c,d]\subset\mathbb{R}^2$ is a given rectangular domain and $\mathcal{C}(D)$ is the set of all continuous functions on the domain $D$. Then $\mathcal{K}$ is a compact linear operator on $L^\infty(D)$.

B.L. Panigrahi · G. Nelakanti, Department of Mathematics, Indian Institute of Technology Kharagpur, Kharagpur 721302, India. e-mail: [email protected]
B.L. Panigrahi e-mail: [email protected]
We are interested in the following eigenvalue problem: find $u\in X$ and $\lambda\in\mathbb{C}\setminus\{0\}$ such that
\[
\mathcal{K}u = \lambda u, \qquad \|u\| = 1. \tag{2}
\]
Many practical problems in science and engineering are formulated as eigenvalue problems (2) for compact linear integral operators $\mathcal{K}$ (cf. [9]). The numerical solution of such eigenvalue problems has attracted much attention for many years, and significant work has been done on the numerical analysis of the one-dimensional eigenvalue problem for the compact integral operator $\mathcal{K}$. The Galerkin, Petrov-Galerkin, collocation, Nyström and degenerate kernel methods are the commonly used methods for the approximation of eigenelements of the compact integral operator $\mathcal{K}$. The convergence analysis of the Galerkin, Petrov-Galerkin, collocation, Nyström and degenerate kernel methods for one-dimensional eigenvalue problems is well documented in [1, 3-9, 16-18]. The general framework for the numerical solution of the eigenvalue problem for a compact integral operator was discussed in [5]. The convergence of eigenelements of a compact integral operator in the Nyström method was discussed in [4]. The degenerate kernel method for the eigenvalue problem for a compact integral operator was discussed in [16]. Projection methods have been used for a long time. The general framework for projection and Nyström methods for the approximation of eigenelements and their convergence rates for compact integral operators was given in [9, 18] under collectively compact convergence and norm convergence. In [1], the existence of eigenelements and their convergence rates were established under ν-convergence, which is weaker than norm convergence. A fast collocation method for the approximation of eigenelements of a compact integral operator with weakly singular kernels was studied in [11]. Superconvergence of the iterated eigenvector was first noted by I.H. Sloan in [20] for projection methods when the projection is an orthogonal projection. Superconvergence results for the iterated eigenvector of a compact integral operator with smooth and Green's kernels were studied in [9].
Replacing the various integrals appearing in these methods by numerical quadrature leads to discrete methods. In [15, 17], discrete and iterated discrete Galerkin methods were discussed for the one-dimensional eigenvalue problem (2) with a smooth kernel. In these references, it was shown that if $X_n$ is a space of piecewise polynomials of degree $\le p-1$ and the numerical quadrature rule employed has degree of precision $d-1$, then the errors in the eigenvectors and eigenvalues are of the orders $O(h^{\min\{p,d\}})$ and $O(h^{\min\{2p,d\}})$, respectively, and for the iterated eigenvector the error is of the order $O(h^{\min\{2p,d\}})$, where $h$ is the norm of the partition. To improve the convergence rates for the eigenvalues, iterated discrete extrapolation methods were discussed in [10], where it was shown that the error bounds for the eigenvalues improve to $O(h^{2p+2})$.

In this paper, we consider the eigenvalue problem for a two-dimensional compact integral operator $\mathcal{K}$ with a smooth kernel. Let $S^{(-1)}_{p-1}(\Delta^{(1)}_M)$ and $S^{(-1)}_{q-1}(\Delta^{(2)}_N)$ be the spaces of piecewise polynomials of degree $p-1$ and $q-1$ on $[a,b]$ and $[c,d]$, with norms of the partitions $h$ and $k$, respectively. Let $X_{MN}$ be the tensor product of $S^{(-1)}_{p-1}(\Delta^{(1)}_M)$ and $S^{(-1)}_{q-1}(\Delta^{(2)}_N)$, and let $P_{hk}: X\to X_{MN}$ be the orthogonal projection. Then the Galerkin method for solving the eigenvalue problem (2) is defined as follows: find $u_{hk}\in X_{MN}$ and $\lambda_{hk}\in\mathbb{C}\setminus\{0\}$ such that
\[
P_{hk}\mathcal{K}u_{hk} = \lambda_{hk}u_{hk}, \qquad \|u_{hk}\| = 1, \tag{3}
\]
and the Sloan or iterated Galerkin eigenvector is defined by
\[
\hat{u}_{hk} = \frac{1}{\lambda_{hk}}\mathcal{K}u_{hk}. \tag{4}
\]
To solve the eigenvalue problem (3), we need to evaluate the various integrals arising from the $L^2$-inner products and the integral operator $\mathcal{K}$. When we replace these integrals by numerical quadrature, the Galerkin method (3) and the iterated Galerkin eigenvector (4) lead to the discrete Galerkin method and the iterated discrete Galerkin eigenvector, respectively. By choosing composite Gauss quadrature rules of degrees of precision $2k-1$ and $2l-1$ on $[a,b]$ and $[c,d]$, respectively, we prove that the errors in the eigenvalues and in the gap between the spectral subspaces are of the orders $O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\})$ and $O(\max\{h^{\min\{p,2k\}}, k^{\min\{q,2l\}}\})$, respectively, and for the iterated discrete Galerkin eigenvector we prove that the error is of the order $O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\})$. As in the one-dimensional case, to improve the convergence rates for the eigenvalues, we derive an asymptotic expansion for the iterated discrete Galerkin operator by choosing $h=k$ and $p=q=k=l$, and then using Richardson extrapolation we improve the convergence rates for the eigenvalues from $O(h^{2p})$ to $O(h^{2p+2})$. In Sect. 2, we develop the discrete Galerkin and iterated discrete Galerkin methods and the theoretical framework for the eigenvalue problem using discrete orthogonal projections. In Sect. 3, we discuss the convergence rates of the approximate eigenelements to the exact eigenelements. In Sect. 4, we discuss Richardson extrapolation for the eigenvalue problem to improve the convergence rates. In Sect. 5, we present numerical results, which agree with the theoretical results.
2 Discrete and Iterated Discrete Galerkin Methods

Consider the following compact integral operator defined on $X = L^\infty(D)$ by
\[
\mathcal{K}u(s,t) = \int_a^b\int_c^d K(s,t,x,y)\,u(x,y)\,dx\,dy, \qquad (s,t)\in D. \tag{5}
\]
Let $BL(X)$ denote the space of all bounded linear operators from $X$ into $X$. The resolvent set of $\mathcal{K}$ is given by $\rho(\mathcal{K}) = \{z\in\mathbb{C} : (\mathcal{K}-zI)^{-1}\in BL(X)\}$, where $I$ is the identity operator on $X$. The spectrum of $\mathcal{K}$ is defined as $\sigma(\mathcal{K}) = \mathbb{C}\setminus\rho(\mathcal{K})$. We are interested in the following eigenvalue problem: find $u\in X$ and $\lambda\in\mathbb{C}\setminus\{0\}$ such that
\[
\mathcal{K}u = \lambda u, \qquad \|u\| = 1. \tag{6}
\]
Assume that $\lambda\neq 0$ is an eigenvalue of $\mathcal{K}$ with algebraic multiplicity $m$ and ascent $\ell$. Let $\Gamma\subset\rho(\mathcal{K})$ be a simple closed rectifiable curve such that $\sigma(\mathcal{K})\cap\operatorname{int}\Gamma = \{\lambda\}$ and $0\notin\operatorname{int}\Gamma$, where $\operatorname{int}\Gamma$ denotes the interior of $\Gamma$. Now we describe the Galerkin method for the eigenvalue problem (6). Let $\Delta^{(1)}_M$ and $\Delta^{(2)}_N$ be uniform partitions of the finite intervals $[a,b]$ and $[c,d]$, respectively, defined by
\[
\Delta^{(1)}_M : a = x_0 < x_1 < \cdots < x_M = b, \qquad
\Delta^{(2)}_N : c = y_0 < y_1 < \cdots < y_N = d,
\]
with $h = x_{m+1}-x_m = \frac{b-a}{M}$ and $k = y_{n+1}-y_n = \frac{d-c}{N}$, for $m=0,1,2,\dots,M-1$ and $n=0,1,2,\dots,N-1$. These partitions define a grid for $D$,
\[
\Delta_{MN} = \Delta^{(1)}_M\times\Delta^{(2)}_N = \{(x_m, y_n) : 0\le m\le M-1,\ 0\le n\le N-1\}.
\]
Set
\[
I^{(1)}_0 = [x_0,x_1], \quad I^{(1)}_m = (x_m, x_{m+1}], \quad m=1,2,\dots,M-1,
\]
\[
I^{(2)}_0 = [y_0,y_1], \quad I^{(2)}_n = (y_n, y_{n+1}], \quad n=1,2,\dots,N-1,
\]
and $I_{mn} = I^{(1)}_m\times I^{(2)}_n$, $m=0,1,\dots,M-1$ and $n=0,1,\dots,N-1$. For given positive integers $p$ and $q$, let $\mathcal{P}_{p-1,q-1}$ denote the space of polynomials of degree $p-1$ in the $x$-direction and $q-1$ in the $y$-direction. Then
\[
X_{MN} = S^{(-1)}_{p-1,q-1}(\Delta_{MN}) = \{u : u|_{I_{mn}} = u_{m,n}\in\mathcal{P}_{p-1,q-1},\ 0\le m\le M-1,\ 0\le n\le N-1\}
\]
is the finite element space of dimension $MNpq$, which is the tensor product of the univariate spline spaces $S^{(-1)}_{p-1}(\Delta^{(1)}_M)$ on $[a,b]$ and $S^{(-1)}_{q-1}(\Delta^{(2)}_N)$ on $[c,d]$. The superscript $(-1)$ in the notation for this finite element space emphasizes that it is not a subspace of $\mathcal{C}(D)$. Let
\[
L_0(t) = 1 \quad\text{and}\quad L_i(t) = \frac{1}{2^i\,i!}\frac{d^i}{dt^i}(t^2-1)^i, \quad i\ge 1,
\]
be the Legendre polynomials of degree $i$ on $[-1,1]$. Define
\[
\varphi_i(t) = \sqrt{2i+1}\,L_i(2t-1), \qquad t\in[0,1],\ i=0,1,2,\dots,p-1.
\]
Then $\varphi_0,\varphi_1,\dots,\varphi_{p-1}$ is the sequence of orthonormal polynomials associated with the one-dimensional $L^2$-inner product; that is, $\varphi_i(t)$ is a polynomial of degree $i$ and $(\varphi_i,\varphi_{i'}) = \delta_{ii'}$, $i,i'=0,1,\dots,p-1$. Similarly, set
\[
\psi_j(s) = \sqrt{2j+1}\,L_j(2s-1), \qquad s\in[0,1],\ j=0,1,2,\dots,q-1.
\]
Define
\[
\varphi_{im}(x) = \begin{cases} h^{-1/2}\varphi_i\!\left(\frac{x-x_m}{h}\right), & x\in[x_m,x_{m+1}],\\[2pt] 0, & \text{otherwise}, \end{cases} \tag{7}
\]
and
\[
\psi_{jn}(y) = \begin{cases} k^{-1/2}\psi_j\!\left(\frac{y-y_n}{k}\right), & y\in[y_n,y_{n+1}],\\[2pt] 0, & \text{otherwise}. \end{cases} \tag{8}
\]
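Implementations of such basis functions are easy to get wrong, so here is a quick numerical check (our illustration, not part of the paper) that the shifted, scaled Legendre polynomials $\varphi_i(t) = \sqrt{2i+1}\,L_i(2t-1)$ are indeed orthonormal on $[0,1]$; the helper name `phi` is ours:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def phi(i):
    """Orthonormal shifted Legendre polynomial on [0,1]:
    phi_i(t) = sqrt(2i+1) * L_i(2t - 1)."""
    Li = Legendre.basis(i)
    return lambda t: np.sqrt(2 * i + 1) * Li(2 * t - 1)

# Check (phi_i, phi_j) = delta_ij with a Gauss rule that is exact
# for the polynomial integrands involved (degree <= 6 here).
t, w = np.polynomial.legendre.leggauss(20)
t, w = 0.5 * (t + 1), 0.5 * w          # map nodes/weights to [0,1]
G = np.array([[np.sum(w * phi(i)(t) * phi(j)(t)) for j in range(4)]
              for i in range(4)])
print(np.allclose(G, np.eye(4)))        # Gram matrix is the identity
```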
Then the set $\{\varphi_{im}\psi_{jn} : 0\le m\le M-1,\ 0\le i\le p-1,\ 0\le n\le N-1,\ 0\le j\le q-1\}$ forms an orthonormal basis for $X_{MN}$. Let $P_{hk}: X\to X_{MN}$ be the orthogonal projection defined by
\[
P_{hk}u(s,t) = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}(\varphi_{im}\psi_{jn}, u)\,\varphi_{im}(s)\psi_{jn}(t), \tag{9}
\]
where $(\cdot,\cdot)$ denotes the usual inner product in $L^2(D)$.
Now the Galerkin method for solving the eigenvalue problem (6) is defined as follows: find $u_{hk}\in X_{MN}$ and $\lambda_{hk}\in\mathbb{C}\setminus\{0\}$ such that
\[
P_{hk}\mathcal{K}u_{hk} = \lambda_{hk}u_{hk}, \qquad \|u_{hk}\| = 1, \tag{10}
\]
which is equivalent to
\[
(\mathcal{K}u_{hk}, v) = \lambda_{hk}(u_{hk}, v), \qquad \forall v\in X_{MN}. \tag{11}
\]
Writing $u_{hk} = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}\alpha_{imjn}\varphi_{im}\psi_{jn}\in X_{MN}$, the eigenvalue problem (11) leads to the following matrix eigenvalue problem:
\[
\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}\alpha_{imjn}(\mathcal{K}\varphi_{im}\psi_{jn},\ \varphi_{i'm'}\psi_{j'n'})
= \lambda_{hk}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}\alpha_{imjn}(\varphi_{im}\psi_{jn},\ \varphi_{i'm'}\psi_{j'n'}), \tag{12}
\]
for $i'=0,1,\dots,p-1$, $m'=0,1,\dots,M-1$, $j'=0,1,\dots,q-1$, $n'=0,1,\dots,N-1$.
The Sloan or iterated Galerkin eigenvector is defined by
\[
\hat{u}_{hk} = \frac{1}{\lambda_{hk}}\mathcal{K}u_{hk}. \tag{13}
\]
From (10) and (13), it follows that $P_{hk}\hat{u}_{hk} = u_{hk}$. Using this, (13) leads to the following iterated Galerkin eigenvalue problem:
\[
\mathcal{K}P_{hk}\hat{u}_{hk} = \lambda_{hk}\hat{u}_{hk}, \qquad 0\neq\lambda_{hk}\in\mathbb{C},\ \hat{u}_{hk}\in X. \tag{14}
\]
To solve the matrix eigenvalue problem (12) and to evaluate the iterated eigenvector (13), we need to compute the various integrals related to the $L^2$-inner product and the integral operator $\mathcal{K}$. In practice, numerical quadrature has to be used to compute these integrals, which leads to discrete methods. To this end, for $g\in\mathcal{C}[0,1]$, let
\[
R(g) = \sum_{i=0}^{k-1}w_i\,g(c_i) \approx \int_0^1 g(s)\,ds \tag{15}
\]
be the numerical quadrature with weights $w_i>0$ and quadrature points $c_i$, $i=0,1,\dots,k-1$, chosen as the Gauss points in $[0,1]$, which satisfy $0<c_0<c_1<\cdots<c_{k-1}<1$. Note that this quadrature rule is exact for all polynomials of degree $\le 2k-1$. Let $x_{m,i} = x_m + c_ih$, $i=0,1,\dots,k-1$, be the quadrature points on $[x_m,x_{m+1}]$, $m=0,1,\dots,M-1$. Then the composite Gauss quadrature rule on $[a,b]$ is given by
\[
R_h(g) = h\sum_{m=0}^{M-1}\sum_{i=0}^{k-1}w_i\,g(x_{m,i}) \approx \int_a^b g(s)\,ds. \tag{16}
\]
Similarly, let
\[
S(g) = \sum_{j=0}^{l-1}\tilde{w}_j\,g(d_j) \approx \int_0^1 g(t)\,dt \tag{17}
\]
be the numerical quadrature with Gauss points $d_j$, $j=0,1,\dots,l-1$, satisfying $0<d_0<d_1<\cdots<d_{l-1}<1$, having degree of precision $2l-1$, and with weights $\tilde{w}_j>0$. Let $y_{n,j} = y_n + d_jk$, $j=0,1,\dots,l-1$, be the quadrature points on $[y_n,y_{n+1}]$, $n=0,1,\dots,N-1$. Then the composite Gauss quadrature rule on $[c,d]$ is given by
\[
S_k(g) = k\sum_{n=0}^{N-1}\sum_{j=0}^{l-1}\tilde{w}_j\,g(y_{n,j}) \approx \int_c^d g(t)\,dt. \tag{18}
\]
Now using the quadrature rules (16) and (18), we define the composite quadrature rule on $D=[a,b]\times[c,d]$ by
\[
R_hS_k(g) = hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_i\tilde{w}_j\,g(x_{m,i}, y_{n,j}), \qquad g\in\mathcal{C}(D). \tag{19}
\]
Using this quadrature rule, we define a discrete semi-definite inner product by
\[
\langle f, g\rangle_{hk} = hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_i\tilde{w}_j\,f(x_{m,i}, y_{n,j})\,g(x_{m,i}, y_{n,j}), \qquad f,g\in\mathcal{C}(D). \tag{20}
\]
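The composite rule $R_hS_k$ of (19) can be sketched in a few lines of NumPy (our implementation, not the paper's; the function name `composite_gauss_2d` and the test integrand are assumptions for illustration):

```python
import numpy as np

def composite_gauss_2d(f, a, b, c, d, M, N, k, l):
    """Composite Gauss rule R_h S_k(f) on [a,b] x [c,d]: M x N cells,
    k (resp. l) Gauss points per cell in x (resp. y)."""
    cx, wx = np.polynomial.legendre.leggauss(k)
    cy, wy = np.polynomial.legendre.leggauss(l)
    cx, wx = 0.5 * (cx + 1), 0.5 * wx       # nodes/weights on [0,1]
    cy, wy = 0.5 * (cy + 1), 0.5 * wy
    h, kk = (b - a) / M, (d - c) / N
    # all quadrature points x_{m,i} = x_m + c_i h and y_{n,j} = y_n + d_j k
    xs = (a + h * (np.arange(M)[:, None] + cx)).ravel()
    ys = (c + kk * (np.arange(N)[:, None] + cy)).ravel()
    wxs = np.tile(wx, M) * h                # the factor hk is absorbed
    wys = np.tile(wy, N) * kk               # into the 1D weight vectors
    return np.sum(np.outer(wxs, wys) * f(xs[:, None], ys[None, :]))

# sanity check: integral of e^{x+y} over [0,1]^2 equals (e-1)^2
val = composite_gauss_2d(lambda x, y: np.exp(x + y), 0, 1, 0, 1, 8, 8, 3, 3)
print(abs(val - (np.e - 1) ** 2) < 1e-10)   # True
```

The discrete inner product (20) is the same tensor-product sum with $g$ replaced by the product $f\cdot g$.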
Let $\mathcal{K}_{hk}: X\to X$ be the Nyström operator defined by
\[
\mathcal{K}_{hk}u(s,t) = hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_i\tilde{w}_j\,K(s,t,x_{m,i},y_{n,j})\,u(x_{m,i},y_{n,j}), \qquad u\in X. \tag{21}
\]
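To make the role of $\mathcal{K}_{hk}$ concrete: restricted to the quadrature nodes, the Nyström eigenvalue problem $\mathcal{K}_{hk}u = \lambda u$ is a matrix eigenvalue problem for the weighted kernel matrix. The sketch below is our own construction, not the paper's code; the rank-four test kernel $(1+sx)(1+ty)$ on $[0,1]^2$ is chosen so that the exact nonzero eigenvalues are products of the eigenvalues $(4\pm\sqrt{13})/6$ of the one-dimensional operator with kernel $1+sx$, and the quadrature is exact for this polynomial kernel:

```python
import numpy as np

def nystrom_eigs_2d(kernel, M, N, k, l):
    """Eigenvalues of the Nystrom operator K_hk on [0,1]^2:
    lambda * u(z_p) = sum_q w_q K(z_p, z_q) u(z_q) over all 2D nodes z."""
    cx, wx = np.polynomial.legendre.leggauss(k)
    cy, wy = np.polynomial.legendre.leggauss(l)
    cx, wx = 0.5 * (cx + 1), 0.5 * wx
    cy, wy = 0.5 * (cy + 1), 0.5 * wy
    h, kk = 1.0 / M, 1.0 / N
    xs = (h * (np.arange(M)[:, None] + cx)).ravel()    # x quadrature nodes
    ys = (kk * (np.arange(N)[:, None] + cy)).ravel()   # y quadrature nodes
    wxs = np.tile(wx, M) * h
    wys = np.tile(wy, N) * kk
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    s, t = X.ravel(), Y.ravel()                        # all 2D nodes
    w2 = np.outer(wxs, wys).ravel()                    # tensor weights
    A = kernel(s[:, None], t[:, None], s[None, :], t[None, :]) * w2[None, :]
    return np.linalg.eigvals(A)

eigs = nystrom_eigs_2d(lambda s, t, x, y: (1 + s * x) * (1 + t * y), 4, 4, 3, 3)
lam_max = np.max(np.abs(eigs))
# largest eigenvalue of the product operator: ((4 + sqrt(13))/6)^2
print(abs(lam_max - ((4 + np.sqrt(13)) / 6) ** 2) < 1e-10)
```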
Analogous to the orthogonal projection operator $P_{hk}$, we define the discrete orthogonal projection $Q_{hk}: X\to X_{MN}$ as follows: for $u\in L^\infty(D)$, let $Q_{hk}u$ be the unique element in $X_{MN}$ which satisfies
\[
\langle Q_{hk}u, v\rangle_{hk} = \langle u, v\rangle_{hk}, \qquad \forall v\in X_{MN}, \tag{22}
\]
so that
\[
Q_{hk}u = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}\langle\varphi_{im}\psi_{jn}, u\rangle_{hk}\,\varphi_{im}\psi_{jn}. \tag{23}
\]
It is well known that $Q_{hk}$ is uniformly bounded (see [2, 13]), $q = \sup_{h,k>0}\|Q_{hk}\| < \infty$, and for each $u\in\mathcal{C}(D)$,
\[
\rho_{hk}(u) = \min_{\tilde{u}_{hk}\in X_{MN}}\|u-\tilde{u}_{hk}\| \to 0, \quad\text{as } h,k\to 0.
\]
Note that for any $u\in\mathcal{C}(D)$ and $\tilde{u}_{hk}\in X_{MN}$, we have
\[
\|(I-Q_{hk})u\|_\infty = \|u-\tilde{u}_{hk} - Q_{hk}(u-\tilde{u}_{hk})\|_\infty \le (1+q)\|u-\tilde{u}_{hk}\|_\infty. \tag{24}
\]
As a consequence,
\[
\|(I-Q_{hk})u\|_\infty \le (1+q)\rho_{hk}(u) \to 0, \quad\text{as } h,k\to 0. \tag{25}
\]
Now using the Nyström operator (21) and the discrete inner product (20), the matrix eigenvalue problem (12) becomes
\[
\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}\beta_{imjn}\langle\mathcal{K}_{hk}\varphi_{im}\psi_{jn},\ \varphi_{i'm'}\psi_{j'n'}\rangle_{hk}
= \tilde{\lambda}_{hk}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}\beta_{imjn}\langle\varphi_{im}\psi_{jn},\ \varphi_{i'm'}\psi_{j'n'}\rangle_{hk}, \tag{26}
\]
for $i'=0,1,\dots,p-1$, $j'=0,1,\dots,q-1$, $m'=0,1,\dots,M-1$, $n'=0,1,\dots,N-1$. By solving this discrete matrix eigenvalue problem, we find the eigenvalue $\tilde{\lambda}_{hk}\in\mathbb{C}\setminus\{0\}$ and the eigenvector $\beta = [\beta_{imjn} : m=0,1,\dots,M-1,\ n=0,1,\dots,N-1,\ i=0,1,\dots,p-1,\ j=0,1,\dots,q-1]$. Then, setting
\[
\tilde{u}_{hk} = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{p-1}\sum_{j=0}^{q-1}\beta_{imjn}\varphi_{im}\psi_{jn}\in X_{MN},
\]
we obtain the discrete Galerkin eigenvector. Using $Q_{hk}$ and $\mathcal{K}_{hk}$, the matrix eigenvalue problem (26) can be written as the operator eigenvalue problem
\[
Q_{hk}\mathcal{K}_{hk}\tilde{u}_{hk} = \tilde{\lambda}_{hk}\tilde{u}_{hk}. \tag{27}
\]
This is the discrete Galerkin method. Next we define the iterated discrete Galerkin eigenvector by
\[
\hat{\tilde{u}}_{hk} = \frac{1}{\tilde{\lambda}_{hk}}\mathcal{K}_{hk}\tilde{u}_{hk}.
\]
Clearly $Q_{hk}\hat{\tilde{u}}_{hk} = \tilde{u}_{hk}$, and
\[
\mathcal{K}_{hk}Q_{hk}\hat{\tilde{u}}_{hk} = \tilde{\lambda}_{hk}\hat{\tilde{u}}_{hk}. \tag{28}
\]
This is the iterated discrete Galerkin method. Next we discuss the convergence of the approximate eigenvalues and eigenvectors to the exact eigenvalues and eigenvectors of the integral operator $\mathcal{K}$. To do this, we first set the following notation. Let $\mathcal{C}^{(i,j)}(D)$ denote the set of all continuously differentiable functions of order $i$ on $[a,b]$ and order $j$ on $[c,d]$. Set $K(s,t,x,y) = K_{s,t}(x,y)$, $(s,t,x,y)\in D\times D$. For $K(\cdot,\cdot,\cdot,\cdot)\in\mathcal{C}^{(i,j)}(D)\times\mathcal{C}^{(i',j')}(D)$, denote
\[
D^{(i,j)}\mathcal{K}u(s,t) = \int_a^b\int_c^d \frac{\partial^{i+j}}{\partial s^i\partial t^j}K(s,t,\xi,\eta)\,u(\xi,\eta)\,d\xi\,d\eta,
\]
\[
D^{(i,j,i',j')}K(s,t,\xi,\eta) = \frac{\partial^{i+j+i'+j'}}{\partial s^i\partial t^j\partial\xi^{i'}\partial\eta^{j'}}K(s,t,\xi,\eta).
\]
For any $\alpha,\beta,\gamma,\delta\in\mathbb{N}$, we set
\[
\|K\|_{\alpha,\beta,\gamma,\delta,\infty} = \sum_{i=0}^{\alpha}\sum_{j=0}^{\beta}\sum_{i'=0}^{\gamma}\sum_{j'=0}^{\delta}\|D^{(i,j,i',j')}K(s,t,\xi,\eta)\|_\infty.
\]
Then we have
\[
\|D^{(i,j)}(\mathcal{K}u)\|_\infty = \left\|\int_a^b\int_c^d D^{(i,j,0,0)}K(s,t,\xi,\eta)\,u(\xi,\eta)\,d\xi\,d\eta\right\|_\infty
\le (b-a)(d-c)\|K\|_{i,j,0,0,\infty}\|u\|_\infty, \tag{29}
\]
and
\[
\|\mathcal{K}u\|_{\alpha,\beta,\infty} = \sum_{i=0}^{\alpha}\sum_{j=0}^{\beta}\|D^{(i,j)}(\mathcal{K}u)\|_\infty. \tag{30}
\]
For any $u\in\mathcal{C}^{(\alpha,\beta)}(D)$, we let
\[
\|u\|_{\alpha,\beta,\infty} = \sum_{i=0}^{\alpha}\sum_{j=0}^{\beta}\|u^{(i,j)}\|_\infty,
\quad\text{where } u^{(i,j)}(x,y) = \frac{\partial^{i+j}}{\partial x^i\partial y^j}u(x,y),\ (x,y)\in D.
\]
Theorem 2.1 Let $f(\cdot,\cdot)\in\mathcal{C}^{(2k,2l)}(D)$. Then there holds
\[
\left|\int_a^b\int_c^d f(x,y)\,dx\,dy - hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_i\tilde{w}_j\,f(x_{m,i},y_{n,j})\right| \le C(h^{2k}+k^{2l})\|f\|_{2k,2l,\infty}, \tag{31}
\]
where $C$ is a constant independent of $h$ and $k$.

Proof Consider
\[
\int_a^b\int_c^d f(x,y)\,dx\,dy = \sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_{x_m}^{x_{m+1}}\int_{y_n}^{y_{n+1}}f(x,y)\,dx\,dy
= hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 f(x_m+\tau h,\,y_n+\theta k)\,d\tau\,d\theta. \tag{32}
\]
Now the Lagrange interpolation of $f(x_m+\tau h,\,y_n+\theta k)$ (see [14]) is given by
\[
f(x_m+\tau h,\,y_n+\theta k) = \sum_{\alpha=0}^{k-1}\sum_{\beta=0}^{l-1}f(x_{m,\alpha},y_{n,\beta})\,l_\alpha(\tau)\tilde{l}_\beta(\theta) + R(\tau,\theta), \tag{33}
\]
where $l_\alpha(\tau)$ and $\tilde{l}_\beta(\theta)$ are the Lagrange polynomials at $c_0,c_1,\dots,c_{k-1}$ and $d_0,d_1,\dots,d_{l-1}$, respectively, and
\[
R(\tau,\theta) = \omega_k(\tau)f[x_{m,0},x_{m,1},\dots,x_{m,k-1},\tau;\theta]
+ \omega_l(\theta)f[\tau;\,y_{n,0},y_{n,1},\dots,y_{n,l-1},\theta]
- \omega_k(\tau)\omega_l(\theta)f[x_{m,0},x_{m,1},\dots,x_{m,k-1},\tau;\,y_{n,0},y_{n,1},\dots,y_{n,l-1},\theta],
\]
with
\[
\omega_k(\tau) = \prod_{i=0}^{k-1}(\tau-c_i) \quad\text{and}\quad \omega_l(\theta) = \prod_{j=0}^{l-1}(\theta-d_j).
\]
Using the expansion (33) in (32), we have
\[
\int_a^b\int_c^d f(x,y)\,dx\,dy
= hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1\Biggl[\sum_{\alpha=0}^{k-1}\sum_{\beta=0}^{l-1}f(x_{m,\alpha},y_{n,\beta})\,l_\alpha(\tau)\tilde{l}_\beta(\theta) + R(\tau,\theta)\Biggr]d\tau\,d\theta
\]
\[
= hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{\alpha=0}^{k-1}\sum_{\beta=0}^{l-1}f(x_{m,\alpha},y_{n,\beta})\int_0^1 l_\alpha(\tau)\,d\tau\int_0^1\tilde{l}_\beta(\theta)\,d\theta
+ hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 R(\tau,\theta)\,d\tau\,d\theta.
\]
Hence, by letting $w_\alpha = \int_0^1 l_\alpha(\tau)\,d\tau$ and $\tilde{w}_\beta = \int_0^1\tilde{l}_\beta(\theta)\,d\theta$, the quadrature error is given by
\[
E_{MN}(f) = \int_a^b\int_c^d f(x,y)\,dx\,dy - hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{\alpha=0}^{k-1}\sum_{\beta=0}^{l-1}w_\alpha\tilde{w}_\beta\,f(x_{m,\alpha},y_{n,\beta})
= hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 R(\tau,\theta)\,d\tau\,d\theta
\]
\[
= hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 \omega_k(\tau)f[x_{m,0},x_{m,1},\dots,x_{m,k-1},\tau;\theta]\,d\tau\,d\theta
\]
\[
\quad + hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 \omega_l(\theta)f[\tau;\,y_{n,0},y_{n,1},\dots,y_{n,l-1},\theta]\,d\tau\,d\theta
\]
\[
\quad - hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 \omega_k(\tau)\omega_l(\theta)f[x_{m,0},\dots,x_{m,k-1},\tau;\,y_{n,0},\dots,y_{n,l-1},\theta]\,d\tau\,d\theta
\]
\[
= E_1 + E_2 + E_3. \tag{34}
\]
Now consider the first term $E_1$:
\[
E_1 = hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 f[x_{m,0},x_{m,1},\dots,x_{m,k-1},\tau;\theta]\,\omega_k(\tau)\,d\tau\,d\theta.
\]
Since $\omega_k(\tau)$ is a polynomial of degree $k$, it follows that $\int_0^1\omega_k(\tau)q(\tau)\,d\tau = 0$ for all polynomials $q$ of degree $<k$. Using this together with the properties of divided differences (see [12]), we have
\[
E_1 = hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1\int_0^1 f[x_{m,0},\dots,x_{m,k-1},x_{m,0},\dots,x_{m,k-1},\tau;\theta]\,(\omega_k(\tau))^2\,d\tau\,d\theta
\]
\[
= C_1\,hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1 f[x_{m,0},\dots,x_{m,k-1},x_{m,0},\dots,x_{m,k-1},\xi;\theta]\,d\theta
\le \frac{C_1\,hk}{(2k)!}\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\int_0^1 h^{2k}\,f^{(2k,0)}(\xi,\theta)\,d\theta
\le \frac{C_1(b-a)(d-c)h^{2k}}{(2k)!}\|f\|_{2k,0,\infty},
\]
where $C_1 = \int_0^1(\omega_k(\tau))^2\,d\tau$. Similarly, we can prove that
\[
E_2 \le Ck^{2l}\|f\|_{0,2l,\infty} \quad\text{and}\quad E_3 \le Ch^{2k}k^{2l}\|f\|_{2k,2l,\infty}. \tag{35}
\]
Using the estimates for $E_1$, $E_2$, and $E_3$ in (34), we obtain the desired result. This completes the proof.

In the following corollary we give the error bound for the Nyström operator.

Corollary 2.2 Let $\mathcal{K}$ be an integral operator with a kernel $K(\cdot,\cdot,\cdot,\cdot)\in\mathcal{C}^{(2k,2l)}(D)\times\mathcal{C}^{(2k,2l)}(D)$ and let $\mathcal{K}_{hk}$ be the Nyström operator defined by (21). Then for any $u\in\mathcal{C}^{(2k,2l)}(D)$, there holds
\[
\|(\mathcal{K}-\mathcal{K}_{hk})u\|_\infty \le C(h^{2k}+k^{2l})\|u\|_{2k,2l,\infty}, \tag{36}
\]
where $C$ is independent of $h$ and $k$.

Proof The proof follows by choosing $f(x,y) = K(s,t,x,y)u(x,y)$ in the above theorem.
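The $O(h^{2k})$ rate in Theorem 2.1 is easy to observe numerically. The sketch below (ours, reduced to one dimension for brevity) measures the empirical convergence order of a composite two-point Gauss rule, for which the theorem predicts order $2k=4$:

```python
import numpy as np

def comp_gauss_1d(f, M, kpts):
    """Composite Gauss rule on [0,1] with M cells, kpts points per cell."""
    c, w = np.polynomial.legendre.leggauss(kpts)
    c, w = 0.5 * (c + 1), 0.5 * w
    h = 1.0 / M
    x = h * (np.arange(M)[:, None] + c)   # quadrature points, cell by cell
    return h * np.sum(w * f(x))

f = lambda x: np.cos(5 * x)
exact = np.sin(5.0) / 5.0
e1 = abs(comp_gauss_1d(f, 4, 2) - exact)   # mesh h
e2 = abs(comp_gauss_1d(f, 8, 2) - exact)   # mesh h/2
order = np.log2(e1 / e2)                   # observed order, close to 4
print(3.5 < order < 4.5)
```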
Definition 2.3 Let $X$ be a Banach space, and let $T$ and $T_n$ be bounded linear operators from $X$ into $X$. Then $\{T_n\}$ is said to be ν-convergent to $T$ if
\[
\|T_n\| \le c, \qquad \|(T_n-T)T\| \to 0, \qquad \|(T_n-T)T_n\| \to 0 \quad\text{as } n\to\infty.
\]
We quote the following lemma, which is useful in proving the existence of eigenvalues and eigenvectors in the discrete and iterated discrete Galerkin methods.

Lemma 2.4 [2] Let $X$ be a Banach space and let $S\subset X$ be a relatively compact set. Assume that $T$ and $T_n$ are bounded linear operators from $X$ into $X$ satisfying $\|T_n\|\le c$ for all $n\in\mathbb{N}$, and for each $x\in S$, $\|T_nx - Tx\|\to 0$ as $n\to\infty$, where $c$ is a constant independent of $n$. Then $\|T_nx - Tx\|\to 0$ uniformly for all $x\in S$.

Theorem 2.5 $Q_{hk}\mathcal{K}_{hk}$ and $\mathcal{K}_{hk}Q_{hk}$ are ν-convergent to $\mathcal{K}$.
Proof Since $\mathcal{K}_{hk}$ and $Q_{hk}$ are uniformly bounded, it follows that
\[
\|Q_{hk}\mathcal{K}_{hk}\| \le \|Q_{hk}\|\|\mathcal{K}_{hk}\| < \infty, \qquad
\|\mathcal{K}_{hk}Q_{hk}\| \le \|\mathcal{K}_{hk}\|\|Q_{hk}\| < \infty.
\]
Now using (25) and Corollary 2.2, we see that
\[
\|(Q_{hk}\mathcal{K}_{hk}-\mathcal{K})u\| \le \|Q_{hk}\|\|(\mathcal{K}_{hk}-\mathcal{K})u\| + \|(Q_{hk}-I)\mathcal{K}u\| \to 0.
\]
This shows that $Q_{hk}\mathcal{K}_{hk}$ converges pointwise to $\mathcal{K}$. Let $B = \{x\in X : \|x\|\le 1\}$ be the closed unit ball in $X$. Since $\mathcal{K}$ is a compact operator, the set $S = \{\mathcal{K}x : x\in B\}$ is relatively compact in $X$. By Lemma 2.4, we have
\[
\|(Q_{hk}\mathcal{K}_{hk}-\mathcal{K})\mathcal{K}\| = \sup\{\|(Q_{hk}\mathcal{K}_{hk}-\mathcal{K})\mathcal{K}u\| : u\in B\}
= \sup\{\|(Q_{hk}\mathcal{K}_{hk}-\mathcal{K})u\| : u\in S\} \to 0 \quad\text{as } h,k\to 0.
\]
Since $Q_{hk}$ is bounded and $\mathcal{K}_{hk}$ is compact, $S' = \{Q_{hk}\mathcal{K}_{hk}x : x\in B\}$ is a relatively compact set. Thus
\[
\|(Q_{hk}\mathcal{K}_{hk}-\mathcal{K})Q_{hk}\mathcal{K}_{hk}\| = \sup\{\|(Q_{hk}\mathcal{K}_{hk}-\mathcal{K})Q_{hk}\mathcal{K}_{hk}u\| : u\in B\}
= \sup\{\|(Q_{hk}\mathcal{K}_{hk}-\mathcal{K})u\| : u\in S'\} \to 0 \quad\text{as } h,k\to 0.
\]
Combining these results shows that $Q_{hk}\mathcal{K}_{hk}$ is ν-convergent to $\mathcal{K}$. For the second claim, using (25) and Corollary 2.2, we obtain
\[
\|(\mathcal{K}_{hk}Q_{hk}-\mathcal{K})u\| \le \|\mathcal{K}_{hk}\|\|(Q_{hk}-I)u\| + \|(\mathcal{K}_{hk}-\mathcal{K})u\| \to 0,
\]
which shows that $\mathcal{K}_{hk}Q_{hk}$ converges pointwise to $\mathcal{K}$ on $X$. Using Lemma 2.4, we have
\[
\|(\mathcal{K}_{hk}Q_{hk}-\mathcal{K})\mathcal{K}\| = \sup\{\|(\mathcal{K}_{hk}Q_{hk}-\mathcal{K})\mathcal{K}u\| : u\in B\}
= \sup\{\|(\mathcal{K}_{hk}Q_{hk}-\mathcal{K})u\| : u\in S\} \to 0 \quad\text{as } h,k\to 0.
\]
Again, for the relatively compact set $S'' = \{\mathcal{K}_{hk}Q_{hk}x : x\in B\}$, we conclude that
\[
\|(\mathcal{K}_{hk}Q_{hk}-\mathcal{K})\mathcal{K}_{hk}Q_{hk}\| = \sup\{\|(\mathcal{K}_{hk}Q_{hk}-\mathcal{K})\mathcal{K}_{hk}Q_{hk}u\| : u\in B\}
= \sup\{\|(\mathcal{K}_{hk}Q_{hk}-\mathcal{K})u\| : u\in S''\} \to 0 \quad\text{as } h,k\to 0.
\]
This proves that $\mathcal{K}_{hk}Q_{hk}$ is ν-convergent to $\mathcal{K}$.
Since $Q_{hk}\mathcal{K}_{hk}$ and $\mathcal{K}_{hk}Q_{hk}$ are ν-convergent to $\mathcal{K}$, the spectrum of each of $Q_{hk}\mathcal{K}_{hk}$ and $\mathcal{K}_{hk}Q_{hk}$ inside $\Gamma$ consists of $m$ eigenvalues, say $\tilde{\lambda}_{hk,1},\tilde{\lambda}_{hk,2},\dots,\tilde{\lambda}_{hk,m}$, counted according to their algebraic multiplicities (see [9, 18]). Let
\[
\hat{\tilde{\lambda}}_{hk} = \frac{\tilde{\lambda}_{hk,1}+\tilde{\lambda}_{hk,2}+\cdots+\tilde{\lambda}_{hk,m}}{m}
\]
denote their arithmetic mean; we approximate $\lambda$ by $\hat{\tilde{\lambda}}_{hk}$. Let
\[
P^S = -\frac{1}{2\pi i}\int_\Gamma(\mathcal{K}-zI)^{-1}\,dz, \tag{37}
\]
\[
P^S_{hk} = -\frac{1}{2\pi i}\int_\Gamma(Q_{hk}\mathcal{K}_{hk}-zI)^{-1}\,dz, \tag{38}
\]
\[
\widetilde{P}^S_{hk} = -\frac{1}{2\pi i}\int_\Gamma(\mathcal{K}_{hk}Q_{hk}-zI)^{-1}\,dz, \tag{39}
\]
be the spectral projections of $\mathcal{K}$, $Q_{hk}\mathcal{K}_{hk}$ and $\mathcal{K}_{hk}Q_{hk}$, respectively, associated with their corresponding spectra inside $\Gamma$. To discuss the closeness of the eigenvectors of the integral operator $\mathcal{K}$ and those of the approximate operators, we recall (see [9]) the concept of the gap between closed subspaces. For nonzero closed subspaces $Y_1$ and $Y_2$ of $X$, let $\delta(Y_1,Y_2) = \sup\{\operatorname{dist}(y,Y_2) : y\in Y_1,\ \|y\|=1\}$; then
\[
\hat{\delta}(Y_1,Y_2) = \max\{\delta(Y_1,Y_2),\ \delta(Y_2,Y_1)\}
\]
is known as the gap between $Y_1$ and $Y_2$. We quote the following three theorems, which give error bounds for the eigenelements.

Theorem 2.6 [1, 17] Let $X$ be a Banach space and let $\mathcal{K}, Q_{hk}\mathcal{K}_{hk}\in BL(X)$ with $Q_{hk}\mathcal{K}_{hk}$ ν-convergent to $\mathcal{K}$. Let $R(P^S)$ and $R(P^S_{hk})$ be the ranges of the spectral projections $P^S$ and $P^S_{hk}$, respectively. Then for sufficiently large $M, N$, there exists a constant $c$ independent of $M, N$ such that
\[
\hat{\delta}(R(P^S_{hk}), R(P^S)) \le c\|(\mathcal{K}-Q_{hk}\mathcal{K}_{hk})\mathcal{K}\|_\infty.
\]
In particular, for any $\tilde{u}_{hk}\in R(P^S_{hk})$, we have
\[
\|\tilde{u}_{hk} - P^S\tilde{u}_{hk}\|_\infty \le c\|(\mathcal{K}-Q_{hk}\mathcal{K}_{hk})\mathcal{K}\|_\infty.
\]

Theorem 2.7 [1, 17] Let $X$ be a Banach space, let $\mathcal{K}, \mathcal{K}_{hk}Q_{hk}\in BL(X)$ for $M,N\in\mathbb{N}$, and let $\mathcal{K}_{hk}Q_{hk}$ be ν-convergent to $\mathcal{K}$. Let $R(P^S)$ and $R(\widetilde{P}^S_{hk})$ be the ranges of the spectral projections $P^S$ and $\widetilde{P}^S_{hk}$, respectively. Then for sufficiently large $M, N$, there exists a constant $c$ independent of $M, N$ such that
\[
\hat{\delta}(R(\widetilde{P}^S_{hk}), R(P^S)) \le c\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty.
\]
In particular, for any $\hat{\tilde{u}}_{hk}\in R(\widetilde{P}^S_{hk})$, we have
\[
\|\hat{\tilde{u}}_{hk} - P^S\hat{\tilde{u}}_{hk}\|_\infty \le c\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty.
\]

Theorem 2.8 [1, 17] Let $X$ be a Banach space and $\mathcal{K}, \mathcal{K}_{hk}Q_{hk}\in BL(X)$ with $\mathcal{K}_{hk}Q_{hk}$ ν-convergent to $\mathcal{K}$. Then for sufficiently large $M, N$, there exists a constant $c$ independent of $M, N$ such that for $j=1,2,\dots,m$,
\[
|\lambda - \hat{\tilde{\lambda}}_{hk}| \le c\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty, \qquad
|\lambda - \tilde{\lambda}_{hk,j}| \le c\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty.
\]
3 Convergence Rates

In this section we discuss the convergence rates of the approximate eigenvalues and eigenvectors to the exact eigenvalues and eigenvectors of the integral operator $\mathcal{K}$. To do this, we quote the following lemma.

Lemma 3.1 [19] Let $u(\cdot,\cdot)\in\mathcal{C}^{(p,q)}(D)$. Then there holds
\[
\inf_{\chi\in X_{MN}}\|u-\chi\|_\infty \le \beta\max\{h^p, k^q\},
\]
where $\beta = \max\bigl\{\frac{1}{4^p}\|u^{(p,0)}\|_\infty,\ \frac{1}{4^q}\|u^{(0,q)}\|_\infty,\ \frac{1}{4^p4^q}\|u^{(p,q)}\|_\infty\bigr\}$.
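The rate in Lemma 3.1 can be observed in the simplest case $p=1$ (piecewise constants). The following sketch (ours, one-dimensional) approximates $\sin x$ on $[0,1]$ by its cell-midpoint values and checks that the sup-norm error behaves like $O(h)$, halving when the mesh is halved:

```python
import numpy as np

def pc_midpoint_error(M, samples=4001):
    """Sup-norm error of the midpoint piecewise-constant approximation
    of sin on [0,1] with M uniform cells (sampled on a fine grid)."""
    h = 1.0 / M
    xs = np.linspace(0.0, 1.0, samples)
    cells = np.minimum((xs / h).astype(int), M - 1)   # cell index of each x
    approx = np.sin((cells + 0.5) * h)                # one constant per cell
    return np.max(np.abs(np.sin(xs) - approx))

e1, e2 = pc_midpoint_error(8), pc_midpoint_error(16)
print(1.7 < e1 / e2 < 2.3)    # error roughly halves when h halves: order h
```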
Theorem 3.2 Let $Q_{hk}$ be the discrete orthogonal projection from $X$ onto $X_{MN}$ and let $\mathcal{K}_{hk}$ be the Nyström operator defined by (21). Assume that $K(\cdot,\cdot,\cdot,\cdot)\in\mathcal{C}^{(2k,2l)}(D)\times\mathcal{C}^{(2k,2l)}(D)$, $k\ge p$, $l\ge q$. Then there holds
\[
\|(\mathcal{K}-\mathcal{K}_{hk})\mathcal{K}\|_\infty = O(\max\{h^{2k}, k^{2l}\}), \tag{40}
\]
\[
\|(I-Q_{hk})\mathcal{K}\|_\infty = O(\max\{h^p, k^q\}), \tag{41}
\]
\[
\|\mathcal{K}_{hk}(I-Q_{hk})\mathcal{K}\|_\infty = O(\max\{h^{2p}, k^{2q}\}). \tag{42}
\]

Proof Replacing $u$ by $\mathcal{K}u$ in (36) and using the estimate (30), we obtain
\[
\|(\mathcal{K}-\mathcal{K}_{hk})\mathcal{K}u\|_\infty \le C(h^{2k}+k^{2l})\|\mathcal{K}u\|_{2k,2l,\infty} \le C(h^{2k}+k^{2l})\|K\|_{2k,2l,0,0,\infty}\|u\|_\infty,
\]
where $C$ is a constant independent of $h$ and $k$. From this the estimate (40) follows. For any $u\in\mathcal{C}^{(p,q)}(D)$, using Lemma 3.1 and the uniform boundedness of $Q_{hk}$, we see that
\[
\|(I-Q_{hk})u\|_\infty \le (1+\|Q_{hk}\|_\infty)\inf_{\chi\in X_{MN}}\|u-\chi\|_\infty \le (1+q)\beta\max\{h^p, k^q\} \le C\max\{h^p, k^q\}\|u\|_{p,q,\infty}, \tag{43}
\]
where $C$ is a constant independent of $h$ and $k$. Now replacing $u$ by $\mathcal{K}u$ in (43) and using the estimate (30), we see that
\[
\|(I-Q_{hk})\mathcal{K}u\|_\infty \le C\max\{h^p, k^q\}\|\mathcal{K}u\|_{p,q,\infty} \le C\max\{h^p, k^q\}\|K\|_{p,q,0,0,\infty}\|u\|_\infty, \tag{44}
\]
which proves the estimate (41). Next, using the Nyström operator $\mathcal{K}_{hk}$, the orthogonality of $Q_{hk}$, and the estimates (43) and (44), it follows that
\[
|\mathcal{K}_{hk}(I-Q_{hk})\mathcal{K}u(s,t)| = \left|hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_i\tilde{w}_j\,K_{s,t}(x_{m,i},y_{n,j})\,(I-Q_{hk})\mathcal{K}u(x_{m,i},y_{n,j})\right|
\]
\[
= |\langle K_{s,t},\,(I-Q_{hk})\mathcal{K}u\rangle_{hk}| = |\langle(I-Q_{hk})K_{s,t},\,(I-Q_{hk})\mathcal{K}u\rangle_{hk}|
\le \|(I-Q_{hk})K_{s,t}\|_\infty\,\|(I-Q_{hk})\mathcal{K}u\|_\infty
\]
\[
\le C\|K\|_{0,0,p,q,\infty}\max\{h^p, k^q\}\cdot C\|K\|_{p,q,0,0,\infty}\|u\|_\infty\max\{h^p, k^q\}.
\]
Hence we obtain $\|\mathcal{K}_{hk}(I-Q_{hk})\mathcal{K}\|_\infty = O(\max\{h^{2p}, k^{2q}\})$. This completes the proof.

Theorem 3.3 Let $\mathcal{K}$ be a compact integral operator with a kernel $K(\cdot,\cdot,\cdot,\cdot)\in\mathcal{C}^{(2k,2l)}(D)\times\mathcal{C}^{(2k,2l)}(D)$, $k\ge p$, $l\ge q$. Then there holds
\[
\|(\mathcal{K}-Q_{hk}\mathcal{K}_{hk})\mathcal{K}\|_\infty = O(\max\{h^{\min\{p,2k\}}, k^{\min\{q,2l\}}\}),
\]
\[
\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}).
\]

Proof Since
\[
\|(\mathcal{K}-Q_{hk}\mathcal{K}_{hk})\mathcal{K}\|_\infty \le \|(I-Q_{hk})\mathcal{K}\|_\infty\|\mathcal{K}\| + \|Q_{hk}(\mathcal{K}-\mathcal{K}_{hk})\mathcal{K}\|_\infty,
\]
\[
\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty \le \|(\mathcal{K}-\mathcal{K}_{hk})\mathcal{K}\|_\infty + \|\mathcal{K}_{hk}(I-Q_{hk})\mathcal{K}\|_\infty,
\]
the proof follows from Theorem 3.2.
In the following theorem we give superconvergence results for the eigenvalues and eigenvectors.

Theorem 3.4 Suppose $\mathcal{K}$ is a compact integral operator with a kernel $K(\cdot,\cdot,\cdot,\cdot)\in\mathcal{C}^{(2k,2l)}(D)\times\mathcal{C}^{(2k,2l)}(D)$, $k\ge p$, $l\ge q$, and suppose that $\lambda$ is an eigenvalue of $\mathcal{K}$ with algebraic multiplicity $m$ and ascent $\ell$. Let $\{\mathcal{K}_{hk}Q_{hk}\}$ and $\{Q_{hk}\mathcal{K}_{hk}\}$ be sequences of bounded operators on $X$ which are ν-convergent to $\mathcal{K}$, and let $\hat{\tilde{\lambda}}_{hk}$ be the arithmetic mean of the eigenvalues $\tilde{\lambda}_{hk,j}$, $j=1,2,\dots,m$. Let $R(P^S)$, $R(P^S_{hk})$ and $R(\widetilde{P}^S_{hk})$ be the ranges of the spectral projections $P^S$, $P^S_{hk}$ and $\widetilde{P}^S_{hk}$, respectively. Then
\[
\hat{\delta}(R(P^S_{hk}), R(P^S)) = O(\max\{h^{\min\{p,2k\}}, k^{\min\{q,2l\}}\}), \tag{45}
\]
\[
\hat{\delta}(R(\widetilde{P}^S_{hk}), R(P^S)) = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}). \tag{46}
\]
In particular, for any $\tilde{u}_{hk}\in R(P^S_{hk})$ and $\hat{\tilde{u}}_{hk}\in R(\widetilde{P}^S_{hk})$, we have
\[
\|\tilde{u}_{hk} - P^S\tilde{u}_{hk}\|_\infty = O(\max\{h^{\min\{p,2k\}}, k^{\min\{q,2l\}}\}), \tag{47}
\]
\[
\|\hat{\tilde{u}}_{hk} - P^S\hat{\tilde{u}}_{hk}\|_\infty = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}). \tag{48}
\]
And for $j=1,2,\dots,m$,
\[
|\lambda - \hat{\tilde{\lambda}}_{hk}| = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}), \qquad
|\lambda - \tilde{\lambda}_{hk,j}| = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}).
\]
Proof It follows from Theorems 2.6 and 3.3 that
\[
\hat{\delta}(R(P^S_{hk}), R(P^S)) \le c\|(\mathcal{K}-Q_{hk}\mathcal{K}_{hk})\mathcal{K}\|_\infty = O(\max\{h^{\min\{p,2k\}}, k^{\min\{q,2l\}}\}),
\]
and for any $\tilde{u}_{hk}\in R(P^S_{hk})$, we have
\[
\|\tilde{u}_{hk} - P^S\tilde{u}_{hk}\| \le \hat{\delta}(R(P^S_{hk}), R(P^S)) = O(\max\{h^{\min\{p,2k\}}, k^{\min\{q,2l\}}\}).
\]
This proves the estimates (45) and (47). Again, using Theorems 2.7 and 3.3, we obtain
\[
\hat{\delta}(R(\widetilde{P}^S_{hk}), R(P^S)) \le c\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}),
\]
and for any $\hat{\tilde{u}}_{hk}\in R(\widetilde{P}^S_{hk})$,
\[
\|\hat{\tilde{u}}_{hk} - P^S\hat{\tilde{u}}_{hk}\| \le \hat{\delta}(R(\widetilde{P}^S_{hk}), R(P^S)) = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}),
\]
which completes the proof of the estimates (46) and (48). Similarly, combining Theorems 2.8 and 3.3, we obtain for $j=1,2,\dots,m$,
\[
|\lambda - \hat{\tilde{\lambda}}_{hk}| \le c\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}),
\]
\[
|\lambda - \tilde{\lambda}_{hk,j}| \le c\|(\mathcal{K}-\mathcal{K}_{hk}Q_{hk})\mathcal{K}\|_\infty = O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\}).
\]
This completes the proof.

Remark 3.5 From Theorem 3.4, we observe that the discrete Galerkin eigenvectors converge with order $O(\max\{h^{\min\{p,2k\}}, k^{\min\{q,2l\}}\})$, whereas the iterated discrete Galerkin eigenvectors and the eigenvalues converge with order $O(\max\{h^{\min\{2p,2k\}}, k^{\min\{2q,2l\}}\})$. This shows that the iterated discrete eigenvectors give superconvergence over the discrete Galerkin eigenvectors. By choosing the degrees of precision of the numerical quadrature rules sufficiently large, i.e., $2k\ge 2p$ on $[a,b]$ and $2l\ge 2q$ on $[c,d]$, we obtain superconvergence results for the eigenvalues and eigenvectors in the discrete Galerkin and iterated discrete Galerkin methods.

Remark 3.6 We note that, since the dimension of the approximating space $X_{MN}$ is $MNpq$, in order to solve the matrix eigenvalue problem (26) in the discrete Galerkin method we need to evaluate coefficient matrices of size $MNpq\times MNpq$, which are dense. Nevertheless, we obtain superconvergence rates for the eigenvectors and eigenvalues.
4 Richardson Extrapolation

In this section, we derive an asymptotic error expansion for the iterated discrete Galerkin operator $\mathcal{K}_{hk}Q_{hk}$ and an asymptotic error expansion of the approximate eigenvalues. We then apply Richardson extrapolation to obtain improved error bounds for the eigenvalues.
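The extrapolation step itself is a one-line formula: given eigenvalue approximations $\lambda_h$ and $\lambda_{h/2}$ with the expansion $\lambda_h = \lambda + c\,h^{2p} + O(h^{2p+2})$, the combination $(2^{2p}\lambda_{h/2}-\lambda_h)/(2^{2p}-1)$ eliminates the $h^{2p}$ term. The sketch below (ours, with a synthetic error expansion rather than an actual Galerkin computation) checks this cancellation:

```python
def richardson(lam_h, lam_h2, p):
    """Richardson extrapolation of an eigenvalue approximation with
    error expansion lam_h = lam + c*h**(2p) + O(h**(2p+2))."""
    return (2 ** (2 * p) * lam_h2 - lam_h) / (2 ** (2 * p) - 1)

# synthetic check: lam = 1, with both h^(2p) and h^(2p+2) error terms
lam, c, d, p, h = 1.0, 0.3, 0.7, 1, 0.1
approx = lambda h: lam + c * h ** (2 * p) + d * h ** (2 * p + 2)
err_plain = abs(approx(h / 2) - lam)
err_extrap = abs(richardson(approx(h), approx(h / 2), p) - lam)
print(err_extrap < err_plain)   # True: the h^(2p) term is eliminated
```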
Lemma 4.1 (Euler-Maclaurin summation formula; see [13]) Let $f(x,y)\in\mathcal{C}^{r+1}(D)$, $0\le\tau\le 1$, $0\le\theta\le 1$. Then
\[
hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}f(x_m+\tau h,\,y_n+\theta k)
= \sum_{a=0}^{r}\sum_{b=0}^{r-a}h^ak^b\,\frac{B_a(\tau)}{a!}\frac{B_b(\theta)}{b!}\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}f(x,y)\right]_{x=a,\,y=c}^{b,\,d} + O(h^{r+1}+k^{r+1}), \tag{49}
\]
where $B_i(t)$ are the Bernoulli polynomials of degree $i$. Here negative-order derivatives denote antiderivatives, so the $a=b=0$ term of (49) is the double integral of $f$ over $D$.

Theorem 4.2 Let $\mathcal{K}$ be a compact integral operator with a kernel $K(\cdot,\cdot,\cdot,\cdot)\in\mathcal{C}^{r+1}(D)\times\mathcal{C}^{r+1}(D)$ and let $\mathcal{K}_{hk}$ be the Nyström operator defined by (21). Then there holds
\[
\mathcal{K}_{hk}-\mathcal{K} = \sum_{i=k}^{[r/2]}h^{2i}D_{2i} + \sum_{j=l}^{[r/2]}k^{2j}E_{2j} + \sum_{i=k}^{[r/2]-l}\sum_{j=l}^{[r/2]-i}h^{2i}k^{2j}F_{2i,2j} + O(h^{r+1}+k^{r+1}),
\]
where $D_{2i}$, $E_{2j}$ and $F_{2i,2j}$ are bounded linear operators independent of $h$ and $k$, defined by
\[
D_{2i}u(s,t) = \frac{R(B_{2i})}{(2i)!}\left[\frac{\partial^{2i-2}}{\partial x^{2i-1}\partial y^{-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d},
\]
\[
E_{2j}u(s,t) = \frac{S(B_{2j})}{(2j)!}\left[\frac{\partial^{2j-2}}{\partial x^{-1}\partial y^{2j-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d},
\]
and
\[
F_{2i,2j}u(s,t) = \frac{R(B_{2i})}{(2i)!}\frac{S(B_{2j})}{(2j)!}\left[\frac{\partial^{2i+2j-2}}{\partial x^{2i-1}\partial y^{2j-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d}.
\]
Proof Using the Nyström operator (21) and the Euler-Maclaurin summation formula (49), we have
\[
\mathcal{K}_{hk}u(s,t) = \sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_i\tilde{w}_j\left[hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}K_{s,t}(x_m+c_ih,\,y_n+d_jk)\,u(x_m+c_ih,\,y_n+d_jk)\right]
\]
\[
= \sum_{a=0}^{r}\sum_{b=0}^{r-a}\frac{h^ak^b}{a!\,b!}\left(\sum_{i=0}^{k-1}w_iB_a(c_i)\right)\left(\sum_{j=0}^{l-1}\tilde{w}_jB_b(d_j)\right)\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d} + O(h^{r+1}+k^{r+1})
\]
\[
= \sum_{a=0}^{r}\sum_{b=0}^{r-a}\frac{h^ak^b}{a!\,b!}\,R(B_a)S(B_b)\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d} + O(h^{r+1}+k^{r+1}), \tag{50}
\]
where $R(B_a) = \sum_{i=0}^{k-1}w_iB_a(c_i)$ and $S(B_b) = \sum_{j=0}^{l-1}\tilde{w}_jB_b(d_j)$. Since $c_i$, $i=0,1,\dots,k-1$, are the Gauss points in the interval $(0,1)$ and the quadrature rule (15) is a symmetric quadrature rule, we have $c_{k-i-1} = 1-c_i$ and $w_i = w_{k-i-1}$ for $i=0,1,\dots,k-1$. Noting that the Bernoulli polynomials have the property $B_a(1-c) = (-1)^aB_a(c)$, we have
\[
R(B_a) = \sum_{i=0}^{k-1}w_iB_a(c_i) = \sum_{i=0}^{k-1}w_{k-i-1}B_a(1-c_{k-i-1}) = (-1)^aR(B_a).
\]
From this, it follows that $R(B_a)$ is zero when $a$ is odd. Also we have
\[
R(B_0) = \sum_{i=0}^{k-1}w_iB_0(c_i) = \int_0^1 B_0(t)\,dt = 1. \tag{51}
\]
By the same argument, $S(B_b)$ is zero when $b$ is odd and $S(B_0)=1$. Now since the degree of precision of the quadrature rule (15) is $2k-1$, it follows that for $1\le a\le 2k-1$,
\[
R(B_a) = \sum_{i=0}^{k-1}w_iB_a(c_i) = \int_0^1 B_a(t)\,dt = \left[\frac{B_{a+1}(t)}{a+1}\right]_{t=0}^{1} = 0.
\]
Similarly, $S(B_b) = 0$ for $1\le b\le 2l-1$, as the degree of precision of the quadrature rule (17) is $2l-1$. Combining all these results for $R(B_a)$ and $S(B_b)$ with (50), we obtain
\[
\mathcal{K}_{hk}-\mathcal{K} = \sum_{i=k}^{[r/2]}h^{2i}D_{2i} + \sum_{j=l}^{[r/2]}k^{2j}E_{2j} + \sum_{i=k}^{[r/2]-l}\sum_{j=l}^{[r/2]-i}h^{2i}k^{2j}F_{2i,2j} + O(h^{r+1}+k^{r+1}),
\]
where
\[
D_{2i}u(s,t) = \frac{R(B_{2i})}{(2i)!}\left[\frac{\partial^{2i-2}}{\partial x^{2i-1}\partial y^{-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d},
\]
\[
E_{2j}u(s,t) = \frac{S(B_{2j})}{(2j)!}\left[\frac{\partial^{2j-2}}{\partial x^{-1}\partial y^{2j-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d},
\]
\[
F_{2i,2j}u(s,t) = \frac{R(B_{2i})}{(2i)!}\frac{S(B_{2j})}{(2j)!}\left[\frac{\partial^{2i+2j-2}}{\partial x^{2i-1}\partial y^{2j-1}}\bigl(K_{s,t}(x,y)u(x,y)\bigr)\right]_{x=a,\,y=c}^{b,\,d}.
\]
This completes the proof.
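The expansion (49) can be spot-checked numerically in one dimension (our sketch, not from the paper). For the midpoint offset $\tau=1/2$, $B_1(1/2)=0$ and $B_2(1/2)=-1/12$, so the leading error of $h\sum_m f(x_m+h/2)$ against $\int_0^1 f$ is $\frac{h^2}{2!}B_2(\tfrac12)[f']_0^1 = -\frac{h^2}{24}\,(f'(1)-f'(0))$:

```python
import numpy as np

f = lambda x: np.exp(x)          # test function on [0,1]
fp = lambda x: np.exp(x)         # its derivative
M = 200
h = 1.0 / M
mids = (np.arange(M) + 0.5) * h  # nodes x_m + tau*h with tau = 1/2
riemann = h * np.sum(f(mids))
integral = np.e - 1.0
predicted = -(h ** 2 / 24.0) * (fp(1.0) - fp(0.0))  # a = 2 term of (49)
# after removing the h^2 term, the residual is O(h^4)
print(abs((riemann - integral) - predicted) < h ** 4)
```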
The following lemma gives the asymptotic expansion of the discrete orthogonal projection $Q_{hk}$ (see [13]).
Lemma 4.3 [13] Assume that $u(x,y)\in\mathcal{C}^{r+1}(D)$ and let $Q_{hk}$ be the discrete orthogonal projection defined by (22). Then for any $(x,y)\in I_{mn}$, there holds
\[
(Q_{hk}-I)u(x,y) = \sum_{\mu=p}^{r}h^\mu u^{(\mu,0)}(x,y)\,\Lambda_\mu(\tau)
+ \sum_{\nu=q}^{r}k^\nu u^{(0,\nu)}(x,y)\,\widetilde{\Lambda}_\nu(\theta)
+ \sum_{\mu=p}^{r-q}\sum_{\nu=q}^{r-\mu}h^\mu k^\nu u^{(\mu,\nu)}(x,y)\,\Lambda_\mu(\tau)\widetilde{\Lambda}_\nu(\theta) + O(h^{r+1}+k^{r+1}), \tag{52}
\]
where
\[
\Lambda_\mu(\tau) = \sum_{i'=0}^{p-1}\sum_{\alpha=0}^{p-1}w_\alpha\varphi_{i'}(\tau_\alpha)\varphi_{i'}(\tau)\,\frac{(\tau_\alpha-\tau)^\mu}{\mu!}, \qquad
\widetilde{\Lambda}_\nu(\theta) = \sum_{j'=0}^{q-1}\sum_{\beta=0}^{q-1}\tilde{w}_\beta\psi_{j'}(\theta_\beta)\psi_{j'}(\theta)\,\frac{(\theta_\beta-\theta)^\nu}{\nu!}.
\]
Theorem 4.4 Let $\mathcal{K}_{hk}$ and $\mathcal{Q}_{hk}$ be the Nyström operator and the discrete orthogonal projection defined by (21) and (22), respectively. Assume that the kernel $K(\cdot,\cdot,\cdot,\cdot)\in C^{r+1}(D)\times C^{r+1}(D)$ and $u\in C^{r+1}(D)$. Then there holds
\[
\mathcal{K}_{hk}(\mathcal{Q}_{hk}-I)=\sum_{i=p}^{[\frac{r}{2}]}h^{2i}R_{2i,0}+\sum_{j=q}^{[\frac{r}{2}]}k^{2j}S_{0,2j}+\sum_{i=p}^{[\frac{r}{2}]-l}\sum_{j=l}^{[\frac{r}{2}]-i}h^{2i}k^{2j}R_{2i,2j}+\sum_{i=k}^{[\frac{r}{2}]-q}\sum_{j=q}^{[\frac{r}{2}]-i}h^{2i}k^{2j}S_{2i,2j}+\sum_{i=p}^{[\frac{r}{2}]-q}\sum_{j=q}^{[\frac{r}{2}]-i}h^{2i}k^{2j}T_{2i,2j}+O(h^{r+1}+k^{r+1}),
\]
where $R_{2i,2j}$, $S_{2i,2j}$ and $T_{2i,2j}$ are bounded linear operators on $C^{r+1}(D)$.

Proof Using the definition (21) of $\mathcal{K}_{hk}$ and the estimate (52), we have
\[
\mathcal{K}_{hk}(\mathcal{Q}_{hk}-I)u(s,t)=hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_{i}\tilde{w}_{j}K_{s,t}(x_{m,i},y_{n,j})(\mathcal{Q}_{hk}-I)u(x_{m,i},y_{n,j})
\]
\[
=hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_{i}\tilde{w}_{j}K_{s,t}(x_{m,i},y_{n,j})\Bigl[\sum_{\mu=p}^{r}h^{\mu}u^{(\mu,0)}(x_{m,i},y_{n,j})\Phi_{\mu}(c_{i})+\sum_{\nu=q}^{r}k^{\nu}u^{(0,\nu)}(x_{m,i},y_{n,j})\Psi_{\nu}(d_{j})+\sum_{\mu=p}^{r-q}\sum_{\nu=q}^{r-\mu}h^{\mu}k^{\nu}u^{(\mu,\nu)}(x_{m,i},y_{n,j})\Phi_{\mu}(c_{i})\Psi_{\nu}(d_{j})\Bigr]+O(h^{r+1}+k^{r+1})
\]
\[
=I_{1}+I_{2}+I_{3}+O(h^{r+1}+k^{r+1}). \tag{53}
\]
Now we consider $I_{1}$:
\[
I_{1}=hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_{i}\tilde{w}_{j}K_{s,t}(x_{m,i},y_{n,j})\sum_{\mu=p}^{r}h^{\mu}u^{(\mu,0)}(x_{m,i},y_{n,j})\Phi_{\mu}(c_{i})
\]
\[
=\sum_{\mu=p}^{r}h^{\mu}\sum_{i=0}^{k-1}w_{i}\Phi_{\mu}(c_{i})\sum_{j=0}^{l-1}\tilde{w}_{j}\Bigl[hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}K_{s,t}(x_{m,i},y_{n,j})u^{(\mu,0)}(x_{m,i},y_{n,j})\Bigr].
\]
Using the Euler–Maclaurin summation formula (49) in $I_{1}$, we obtain
\[
I_{1}=\sum_{\mu=p}^{r}h^{\mu}\sum_{i=0}^{k-1}w_{i}\Phi_{\mu}(c_{i})\sum_{j=0}^{l-1}\tilde{w}_{j}\sum_{a=0}^{r-\mu}\sum_{b=0}^{r-a-\mu}h^{a}k^{b}\frac{B_{a}(c_{i})}{a!}\frac{B_{b}(d_{j})}{b!}\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u^{(\mu,0)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}+O(h^{r+1}+k^{r+1})
\]
\[
=\sum_{\mu=p}^{r}\sum_{a=0}^{r-\mu}\sum_{b=0}^{r-a-\mu}\frac{h^{\mu+a}k^{b}}{a!\,b!}\sum_{i=0}^{k-1}w_{i}\Phi_{\mu}(c_{i})B_{a}(c_{i})\sum_{j=0}^{l-1}\tilde{w}_{j}B_{b}(d_{j})\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u^{(\mu,0)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}+O(h^{r+1}+k^{r+1})
\]
\[
=\sum_{\mu=p}^{r}\sum_{a=0}^{r-\mu}\sum_{b=0}^{r-a-\mu}\frac{h^{\mu+a}k^{b}}{a!\,b!}\,A_{\mu,a}\,S(B_{b})\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u^{(\mu,0)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}+O(h^{r+1}+k^{r+1}), \tag{54}
\]
where $S(B_{b})$ is the numerical quadrature of $B_{b}$ defined as in (17) and
\[
A_{\mu,a}=\sum_{i=0}^{k-1}w_{i}\Phi_{\mu}(c_{i})B_{a}(c_{i})=\sum_{i=0}^{k-1}\sum_{\alpha=0}^{k-1}w_{i}w_{\alpha}B_{a}(c_{i})\,U_{p-1}(c_{\alpha},c_{i})\,\frac{(c_{\alpha}-c_{i})^{\mu}}{\mu!}, \tag{55}
\]
with $U_{p-1}(c_{\alpha},c_{i})=\sum_{i'=0}^{p-1}\phi_{i'}(c_{\alpha})\phi_{i'}(c_{i})$. Noting that $\phi_{i'}(1-c)=(-1)^{i'}\phi_{i'}(c)$ and $B_{a}(c)=(-1)^{a}B_{a}(1-c)$, we have
\[
U_{p-1}(1-c_{\alpha},1-c_{i})=\sum_{i'=0}^{p-1}\phi_{i'}(1-c_{\alpha})\phi_{i'}(1-c_{i})=\sum_{i'=0}^{p-1}(-1)^{2i'}\phi_{i'}(c_{\alpha})\phi_{i'}(c_{i})=U_{p-1}(c_{\alpha},c_{i}),
\]
\[
B_{a}(1-c_{i})\,U_{p-1}(1-c_{\alpha},1-c_{i})\,(1-c_{\alpha}-1+c_{i})^{\mu}=(-1)^{\mu+a}B_{a}(c_{i})\,U_{p-1}(c_{\alpha},c_{i})\,(c_{\alpha}-c_{i})^{\mu}.
\]
Using these estimates in (55), and noting that $c_{i}=1-c_{k-1-i}$ and $w_{k-i-1}=w_{i}$ for $0\le i\le k-1$, we have
\[
A_{\mu,a}=\sum_{i=0}^{k-1}\sum_{\alpha=0}^{k-1}w_{i}w_{\alpha}B_{a}(c_{i})\,U_{p-1}(c_{\alpha},c_{i})\,\frac{(c_{\alpha}-c_{i})^{\mu}}{\mu!}=(-1)^{\mu+a}A_{\mu,a}.
\]
From this, it follows that $A_{\mu,a}=0$ when $\mu+a$ is odd. On the other hand, using the Christoffel–Darboux identity
\[
\sum_{i'=0}^{p-1}\phi_{i'}(s)\phi_{i'}(t)=\frac{a_{p-1}}{a_{p}}\,\frac{\phi_{p}(s)\phi_{p-1}(t)-\phi_{p-1}(s)\phi_{p}(t)}{s-t},
\]
where $a_{p}$ denotes the leading coefficient of the Legendre polynomial $\phi_{p}$, together with the orthogonality of $\{\phi_{i'}(s)\}$, we obtain
\[
A_{\mu,a}=\sum_{i=0}^{k-1}\sum_{\alpha=0}^{k-1}w_{i}w_{\alpha}B_{a}(c_{i})\sum_{i'=0}^{p-1}\phi_{i'}(c_{\alpha})\phi_{i'}(c_{i})\frac{(c_{\alpha}-c_{i})^{\mu}}{\mu!}=\frac{1}{\mu!}\int_{0}^{1}\!\!\int_{0}^{1}\sum_{i'=0}^{p-1}\phi_{i'}(s)\phi_{i'}(t)\,(s-t)^{\mu}B_{a}(t)\,ds\,dt
\]
\[
=\frac{1}{\mu!}\int_{0}^{1}\!\!\int_{0}^{1}\frac{a_{p-1}}{a_{p}}\bigl[\phi_{p}(s)\phi_{p-1}(t)-\phi_{p}(t)\phi_{p-1}(s)\bigr](s-t)^{\mu-1}B_{a}(t)\,ds\,dt=0,\qquad\text{when }1\le\mu+a\le 2p-1,
\]
and $A_{0,0}=\int_{0}^{1}\!\int_{0}^{1}U_{p-1}(s,t)B_{0}(t)\,ds\,dt=\int_{0}^{1}1\,dt=1$. Substituting all these estimates for $A_{\mu,a}$ and $S(B_{b})$ in (54), we obtain
\[
I_{1}=\sum_{i=p}^{[\frac{r}{2}]}h^{2i}R_{2i,0}u+\sum_{i=p}^{[\frac{r}{2}]-l}\sum_{j=l}^{[\frac{r}{2}]-i}h^{2i}k^{2j}R_{2i,2j}u+O(h^{r+1}+k^{r+1}),
\]
where
\[
R_{2i,2j}u=\sum_{(\mu,a)\in Z(i)}R_{\mu+a,2j}u,\qquad Z(i)=\{(\mu,a):p\le\mu\le r,\ 0\le a\le r-\mu,\ \mu+a=2i\},
\]
\[
R_{\mu+a,2j}u=\frac{A_{\mu,a}}{a!}\,\frac{S(B_{2j})}{(2j)!}\left[\frac{\partial^{a+2j-2}}{\partial x^{a-1}\partial y^{2j-1}}\bigl(K_{s,t}(x,y)u^{(\mu,0)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}.
\]
Similarly, we can prove that
\[
I_{2}=hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_{i}\tilde{w}_{j}K_{s,t}(x_{m,i},y_{n,j})\sum_{\nu=q}^{r}k^{\nu}u^{(0,\nu)}(x_{m,i},y_{n,j})\Psi_{\nu}(d_{j})
\]
\[
=\sum_{\nu=q}^{r}\sum_{a=0}^{r-\nu}\sum_{b=0}^{r-a-\nu}\frac{h^{a}k^{\nu+b}}{a!\,b!}\,R(B_{a})\,B_{\nu,b}\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u^{(0,\nu)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}+O(h^{r+1}+k^{r+1}),
\]
where $R(B_{a})$ is the numerical quadrature of $B_{a}$ defined as in (15) and
\[
B_{\nu,b}=\sum_{j=0}^{l-1}\tilde{w}_{j}\Psi_{\nu}(d_{j})B_{b}(d_{j}). \tag{56}
\]
By the same argument as in the proof of the properties of $A_{\mu,a}$, we can prove that $B_{0,0}=1$ and that $B_{\nu,b}=0$ when $\nu+b$ is odd or $1\le\nu+b\le 2q-1$. Hence we obtain
\[
I_{2}=\sum_{j=q}^{[\frac{r}{2}]}k^{2j}S_{0,2j}u+\sum_{i=k}^{[\frac{r}{2}]-q}\sum_{j=q}^{[\frac{r}{2}]-i}h^{2i}k^{2j}S_{2i,2j}u+O(h^{r+1}+k^{r+1}),
\]
where
\[
S_{2i,2j}u=\sum_{(\nu,b)\in Z(j)}S_{2i,\nu+b}u,\qquad Z(j)=\{(\nu,b):q\le\nu\le r,\ 0\le b\le r-\nu,\ \nu+b=2j\},
\]
\[
S_{2i,\nu+b}u=\frac{R(B_{2i})}{(2i)!}\,\frac{B_{\nu,b}}{b!}\left[\frac{\partial^{2i+b-2}}{\partial x^{2i-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u^{(0,\nu)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}.
\]
Similarly, using the properties of $A_{\mu,a}$ and $B_{\nu,b}$ for $I_{3}$, we can prove that
\[
I_{3}=hk\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\sum_{i=0}^{k-1}\sum_{j=0}^{l-1}w_{i}\tilde{w}_{j}K_{s,t}(x_{m,i},y_{n,j})\sum_{\mu=p}^{r-q}\sum_{\nu=q}^{r-\mu}h^{\mu}k^{\nu}u^{(\mu,\nu)}(x_{m,i},y_{n,j})\Phi_{\mu}(c_{i})\Psi_{\nu}(d_{j})
\]
\[
=\sum_{\mu=p}^{r-q}\sum_{\nu=q}^{r-\mu}\sum_{a=0}^{r-\mu-\nu}\sum_{b=0}^{r-a-\mu-\nu}h^{\mu+a}k^{\nu+b}\,\frac{A_{\mu,a}}{a!}\,\frac{B_{\nu,b}}{b!}\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u^{(\mu,\nu)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}+O(h^{r+1}+k^{r+1})
\]
\[
=\sum_{i=p}^{[\frac{r}{2}]-q}\sum_{j=q}^{[\frac{r}{2}]-i}h^{2i}k^{2j}T_{2i,2j}u+O(h^{r+1}+k^{r+1}),
\]
where
\[
T_{2i,2j}u=\sum_{(\mu,a,\nu,b)\in Z(i,j)}T_{\mu+a,\nu+b}u,
\]
\[
Z(i,j)=\{(\mu,a,\nu,b):p\le\mu\le r-q,\ q\le\nu\le r-\mu,\ 0\le a\le r-\mu-\nu,\ 0\le b\le r-a-\mu-\nu,\ \mu+a=2i,\ \nu+b=2j\},
\]
\[
T_{\mu+a,\nu+b}u=\frac{A_{\mu,a}}{a!}\,\frac{B_{\nu,b}}{b!}\left[\frac{\partial^{a+b-2}}{\partial x^{a-1}\partial y^{b-1}}\bigl(K_{s,t}(x,y)u^{(\mu,\nu)}(x,y)\bigr)\right]_{x=a,\,y=c}^{x=b,\,y=d}.
\]
Now combining the estimates for $I_{1}$, $I_{2}$ and $I_{3}$ with (53), we obtain the asymptotic expansion
\[
\mathcal{K}_{hk}(\mathcal{Q}_{hk}-I)=\sum_{i=p}^{[\frac{r}{2}]}h^{2i}R_{2i,0}+\sum_{j=q}^{[\frac{r}{2}]}k^{2j}S_{0,2j}+\sum_{i=p}^{[\frac{r}{2}]-l}\sum_{j=l}^{[\frac{r}{2}]-i}h^{2i}k^{2j}R_{2i,2j}+\sum_{i=k}^{[\frac{r}{2}]-q}\sum_{j=q}^{[\frac{r}{2}]-i}h^{2i}k^{2j}S_{2i,2j}+\sum_{i=p}^{[\frac{r}{2}]-q}\sum_{j=q}^{[\frac{r}{2}]-i}h^{2i}k^{2j}T_{2i,2j}+O(h^{r+1}+k^{r+1}).
\]
This completes the proof.
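The properties of $A_{\mu,a}$ established in this proof ($A_{0,0}=1$, and $A_{\mu,a}=0$ when $\mu+a$ is odd or $1\le\mu+a\le 2p-1$) can also be checked numerically. The sketch below takes $p=k=2$ and assumes the projection quadrature uses the same Gauss nodes; `A`, `phi` and `bern` are illustrative names, not the paper's notation:

```python
import numpy as np
from math import factorial

p = k = 2
x, w = np.polynomial.legendre.leggauss(k)
c, wc = (x + 1) / 2, w / 2                       # Gauss points/weights on (0,1)

# normalized shifted Legendre polynomials phi_0, phi_1 on (0,1)
phi = [lambda t: np.ones_like(t), lambda t: np.sqrt(3.0) * (2 * t - 1)]
U = sum(np.outer(f(c), f(c)) for f in phi)       # U_{p-1}(c_alpha, c_i)

# Bernoulli polynomials B_0..B_3, coefficients in ascending order
bern = [np.polynomial.Polynomial([1.0]),
        np.polynomial.Polynomial([-0.5, 1.0]),
        np.polynomial.Polynomial([1/6, -1.0, 1.0]),
        np.polynomial.Polynomial([0.0, 0.5, -1.5, 1.0])]

def A(mu, a):
    """A_{mu,a} of (55), evaluated by the double quadrature sum."""
    total = 0.0
    for al in range(k):
        for i in range(k):
            total += (wc[al] * wc[i] * bern[a](c)[i] * U[al, i]
                      * (c[al] - c[i]) ** mu)
    return total / factorial(mu)
```

With these choices, `A(0, 0)` evaluates to 1, while every `A(mu, a)` with $1\le\mu+a\le 2p-1=3$ vanishes to machine precision, either by the symmetry argument or by the Christoffel–Darboux argument above.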
Theorem 4.5 Let $\mathcal{K}$ be a compact integral operator with the kernel $K(\cdot,\cdot,\cdot,\cdot)\in C^{r+1}(D)\times C^{r+1}(D)$, with $p=k$ and $q=l$. Let $\mathcal{K}_{hk}$ be the Nyström operator defined by (21) and $\mathcal{Q}_{hk}:X\to X_{MN}$ the discrete orthogonal projection defined by (22). Then there holds
\[
\mathcal{K}_{hk}\mathcal{Q}_{hk}-\mathcal{K}=\sum_{i=p}^{[\frac{r}{2}]}h^{2i}U_{2i}+\sum_{j=q}^{[\frac{r}{2}]}k^{2j}V_{2j}+\sum_{i=p}^{[\frac{r}{2}]-q}\sum_{j=q}^{[\frac{r}{2}]-i}h^{2i}k^{2j}W_{2i,2j}+O(h^{r+1}+k^{r+1}),
\]
where $U_{2i}$, $V_{2j}$ and $W_{2i,2j}$ are bounded linear operators on $C^{r+1}(D)$.

Proof Combining Theorems 4.2 and 4.4 with $p=k$ and $q=l$, we have
\[
\mathcal{K}_{hk}\mathcal{Q}_{hk}-\mathcal{K}=\mathcal{K}_{hk}\mathcal{Q}_{hk}-\mathcal{K}_{hk}+\mathcal{K}_{hk}-\mathcal{K}=\sum_{i=p}^{[\frac{r}{2}]}h^{2i}U_{2i}+\sum_{j=q}^{[\frac{r}{2}]}k^{2j}V_{2j}+\sum_{i=p}^{[\frac{r}{2}]-l}\sum_{j=l}^{[\frac{r}{2}]-i}h^{2i}k^{2j}W_{2i,2j}+O(h^{r+1}+k^{r+1}),
\]
where
\[
U_{2i}=R_{2i,0}+D_{2i},\qquad V_{2j}=S_{0,2j}+E_{2j},\qquad W_{2i,2j}=R_{2i,2j}+S_{2i,2j}+T_{2i,2j}+F_{2i,2j}
\]
are bounded linear operators on $C^{r+1}(D)$. This completes the proof.
In the rest of the paper, we choose the domain $D=[a,b]\times[a,b]$ and the partition $\Delta_{M,N}=\Delta_{M}^{(1)}\times\Delta_{M}^{(2)}$, i.e., $h=k$. Then $X_{MN}=S_{p-1,p-1}(\Delta_{MN})$ denotes the space of piecewise polynomials of degree $\le p-1$ on $D$. Also, we choose the numerical quadratures (16) and (18) to be the same, i.e., $k=l$. Then we have the following corollary.

Corollary 4.6 Assume that the kernel $K(\cdot,\cdot,\cdot,\cdot)\in C^{2p+2}(D)\times C^{2p+2}(D)$. Let $\mathcal{K}_{hk}$ be the Nyström operator defined by (21) and $\mathcal{Q}_{hk}:X\to X_{MN}$ the discrete orthogonal projection defined by (22) with $p=q=k=l$. Then there holds
\[
\mathcal{K}_{hk}\mathcal{Q}_{hk}-\mathcal{K}=h^{2p}C_{2p}+O(h^{2p+2}),
\]
where $C_{2p}=U_{2p}+V_{2p}$ is a bounded linear operator on $C^{2p+2}(D)$, with $D=[a,b]\times[a,b]$.

4.1 Richardson Extrapolation for Eigenvalue Problem

We first present an asymptotic expansion of the eigenvalue approximations, and then apply Richardson extrapolation to improve them. To do this, we first consider the spectral projections $\mathcal{P}^{S}$ and $\widehat{\mathcal{P}}^{S}_{hk}$ associated with $\mathcal{K}$ and $\mathcal{K}_{hk}\mathcal{Q}_{hk}$, defined by (37) and (39), respectively. Since the algebraic multiplicity of $\lambda$ is $m$ and $\mathcal{K}_{hk}\mathcal{Q}_{hk}$ converges to $\mathcal{K}$ in $\nu$-convergence, it follows that $\mathrm{rank}(\widehat{\mathcal{P}}^{S}_{hk})=\mathrm{rank}(\mathcal{P}^{S})=m$. Hence the traces satisfy
\[
\mathrm{trace}(\mathcal{K}_{hk}\mathcal{Q}_{hk}\widehat{\mathcal{P}}^{S}_{hk})=\sum_{j=1}^{m}\tilde{\lambda}_{hk,j},\qquad \mathrm{trace}(\mathcal{K}\mathcal{P}^{S})=m\lambda. \tag{57}
\]
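In the finite-dimensional setting, the trace relations (57) reduce to the familiar fact that the trace of a matrix composed with one of its spectral projections equals the sum of the enclosed eigenvalues. A toy illustration on a random matrix (the matrix and all names here are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))                  # stand-in for a discretized operator
vals, V = np.linalg.eig(A)
W = np.linalg.inv(V)                             # rows give the left eigenvectors
# spectral projection onto the two eigenvalues of largest modulus
idx = np.argsort(-np.abs(vals))[:2]
P = sum(np.outer(V[:, j], W[j, :]) for j in idx)
# trace(A P) equals the sum of the enclosed eigenvalues, as in (57)
assert np.allclose(np.trace(A @ P), vals[idx].sum())
```

The projector has trace equal to its rank (here 2), mirroring $\mathrm{rank}(\widehat{\mathcal{P}}^{S}_{hk})=m$.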
Let $\mathcal{S}$ and $\mathcal{D}$ be the reduced resolvent and nilpotent operators, respectively, associated with $\mathcal{K}$ and $\lambda$; that is, $\mathcal{S}=\bigl[(\mathcal{K}-\lambda I)|_{\mathcal{R}(I-\mathcal{P}^{S})}\bigr]^{-1}(I-\mathcal{P}^{S})$ and $\mathcal{D}^{m}=0$. We have $\mathcal{P}^{S}\mathcal{S}=\mathcal{S}\mathcal{P}^{S}=0$. The following theorem gives an asymptotic error expansion of the iterated discrete spectral projection.

Theorem 4.7 Let $\mathcal{K}$, $\mathcal{K}_{hk}$ and $\mathcal{Q}_{hk}$ be the operators defined by (1), (21) and (22), respectively, and let $\mathcal{P}^{S}$ and $\widehat{\mathcal{P}}^{S}_{hk}$ be the spectral projections of $\mathcal{K}$ and $\mathcal{K}_{hk}\mathcal{Q}_{hk}$, respectively, associated with their corresponding spectra inside $\Gamma$. Assume that $K(\cdot,\cdot,\cdot,\cdot)\in C^{2p+2}(D)\times C^{2p+2}(D)$. Then there holds
\[
\widehat{\mathcal{P}}^{S}_{hk}=\mathcal{P}^{S}-h^{2p}\,\mathcal{U}_{2p}+O(h^{2p+2}), \tag{58}
\]
where
\[
\mathcal{U}_{2p}=\sum_{k=1}^{m-1}\bigl(\mathcal{D}^{k}C_{2p}\mathcal{S}^{k+1}+\mathcal{S}^{k+1}C_{2p}\mathcal{D}^{k}\bigr)+\mathcal{P}^{S}C_{2p}\mathcal{S}+\mathcal{S}C_{2p}\mathcal{P}^{S}
\]
is a bounded linear operator from $C^{2p+2}(D)$ to $C^{2p+2}(D)$, independent of $h$.

Proof From the identity
\[
(\mathcal{K}_{hk}\mathcal{Q}_{hk}-zI)^{-1}-(\mathcal{K}-zI)^{-1}=(\mathcal{K}_{hk}\mathcal{Q}_{hk}-zI)^{-1}(\mathcal{K}-\mathcal{K}_{hk}\mathcal{Q}_{hk})(\mathcal{K}-zI)^{-1},
\]
we have
\[
(\mathcal{K}_{hk}\mathcal{Q}_{hk}-zI)^{-1}=(\mathcal{K}-zI)^{-1}+(\mathcal{K}-zI)^{-1}(\mathcal{K}-\mathcal{K}_{hk}\mathcal{Q}_{hk})(\mathcal{K}-zI)^{-1}+(\mathcal{K}_{hk}\mathcal{Q}_{hk}-zI)^{-1}(\mathcal{K}-\mathcal{K}_{hk}\mathcal{Q}_{hk})(\mathcal{K}-zI)^{-1}(\mathcal{K}-\mathcal{K}_{hk}\mathcal{Q}_{hk})(\mathcal{K}-zI)^{-1}.
\]
Using the asymptotic expansion of $\mathcal{K}-\mathcal{K}_{hk}\mathcal{Q}_{hk}$ from Corollary 4.6, we obtain
\[
(\mathcal{K}_{hk}\mathcal{Q}_{hk}-zI)^{-1}=(\mathcal{K}-zI)^{-1}-(\mathcal{K}-zI)^{-1}\{h^{2p}C_{2p}+O(h^{2p+2})\}(\mathcal{K}-zI)^{-1}+(\mathcal{K}_{hk}\mathcal{Q}_{hk}-zI)^{-1}\{h^{2p}C_{2p}+O(h^{2p+2})\}(\mathcal{K}-zI)^{-1}\{h^{2p}C_{2p}+O(h^{2p+2})\}(\mathcal{K}-zI)^{-1}. \tag{59}
\]
Note that the third term in the expression (59) is of order $O(h^{4p})$, which is at most $O(h^{2p+2})$ since $p\ge 1$. Hence we obtain
\[
(\mathcal{K}_{hk}\mathcal{Q}_{hk}-zI)^{-1}=(\mathcal{K}-zI)^{-1}-h^{2p}(\mathcal{K}-zI)^{-1}C_{2p}(\mathcal{K}-zI)^{-1}+O(h^{2p+2}). \tag{60}
\]
Multiplying both sides of (60) by $-\frac{1}{2\pi i}$ and integrating over the curve $\Gamma$, we obtain
\[
\widehat{\mathcal{P}}^{S}_{hk}=\mathcal{P}^{S}-h^{2p}\,\mathcal{V}_{2p}+O(h^{2p+2}), \tag{61}
\]
where
\[
\mathcal{V}_{2p}=-\frac{1}{2\pi i}\int_{\Gamma}(\mathcal{K}-zI)^{-1}C_{2p}(\mathcal{K}-zI)^{-1}\,dz. \tag{62}
\]
Note that the Laurent series expansion of $(\mathcal{K}-zI)^{-1}$ at $z=\lambda$ is given by
\[
(\mathcal{K}-zI)^{-1}=-\frac{\mathcal{P}^{S}}{z-\lambda}-\sum_{k=1}^{m-1}\frac{\mathcal{D}^{k}}{(z-\lambda)^{k+1}}+\sum_{k\in\mathbb{N}}(z-\lambda)^{k-1}\mathcal{S}^{k}. \tag{63}
\]
Using (63) and $\frac{1}{2\pi i}\int_{\Gamma}\frac{dz}{(z-\lambda)^{j}}=\delta_{1j}$ in (62), we obtain
\[
\mathcal{V}_{2p}=\mathcal{P}^{S}C_{2p}\mathcal{S}+\mathcal{S}C_{2p}\mathcal{P}^{S}+\sum_{k=1}^{m-1}\bigl(\mathcal{D}^{k}C_{2p}\mathcal{S}^{k+1}+\mathcal{S}^{k+1}C_{2p}\mathcal{D}^{k}\bigr)=\mathcal{U}_{2p}. \tag{64}
\]
Clearly, $\mathcal{U}_{2p}$ is a bounded linear operator on $C^{2p+2}(D)$, independent of $h$ and $k$. Using (64) in (61), we obtain
\[
\widehat{\mathcal{P}}^{S}_{hk}=\mathcal{P}^{S}-h^{2p}\,\mathcal{U}_{2p}+O(h^{2p+2}).
\]
This completes the proof.
Theorem 4.8 Assume that all the conditions of Theorem 4.7 hold. Let $\lambda$ be the eigenvalue of $\mathcal{K}$ with algebraic multiplicity $m$ and let $\widehat{\tilde{\lambda}}_{hk}$ be the arithmetic mean of the eigenvalues $\tilde{\lambda}_{hk,j}$, $j=1,2,\dots,m$. Then there holds
\[
\widehat{\tilde{\lambda}}_{hk}-\lambda=\frac{1}{m}h^{2p}\,\mathrm{trace}(Q_{2p})+O(h^{2p+2}), \tag{65}
\]
where $Q_{2p}=C_{2p}\mathcal{P}^{S}-\mathcal{K}\,\mathcal{U}_{2p}$ is a bounded linear operator independent of $h$.

Proof Applying $\mathcal{K}_{hk}\mathcal{Q}_{hk}$ to the asymptotic expansion (61), we obtain
\[
\mathcal{K}_{hk}\mathcal{Q}_{hk}\widehat{\mathcal{P}}^{S}_{hk}=\mathcal{K}_{hk}\mathcal{Q}_{hk}\mathcal{P}^{S}-h^{2p}\mathcal{K}_{hk}\mathcal{Q}_{hk}\,\mathcal{U}_{2p}+O(h^{2p+2}).
\]
Using $\mathcal{K}_{hk}\mathcal{Q}_{hk}=\mathcal{K}+h^{2p}C_{2p}+O(h^{2p+2})$ in the above estimate, we have
\[
\mathcal{K}_{hk}\mathcal{Q}_{hk}\widehat{\mathcal{P}}^{S}_{hk}=(\mathcal{K}+h^{2p}C_{2p})\mathcal{P}^{S}-h^{2p}(\mathcal{K}+h^{2p}C_{2p})\,\mathcal{U}_{2p}+O(h^{2p+2})=\mathcal{K}\mathcal{P}^{S}+h^{2p}(C_{2p}\mathcal{P}^{S}-\mathcal{K}\,\mathcal{U}_{2p})+O(h^{2p+2}).
\]
Taking the trace on both sides, we obtain
\[
\mathrm{trace}(\mathcal{K}_{hk}\mathcal{Q}_{hk}\widehat{\mathcal{P}}^{S}_{hk})=\mathrm{trace}(\mathcal{K}\mathcal{P}^{S})+h^{2p}\,\mathrm{trace}(C_{2p}\mathcal{P}^{S}-\mathcal{K}\,\mathcal{U}_{2p})+O(h^{2p+2}).
\]
Since $\mathrm{trace}(\mathcal{K}_{hk}\mathcal{Q}_{hk}\widehat{\mathcal{P}}^{S}_{hk})=\sum_{j=1}^{m}\tilde{\lambda}_{hk,j}=m\widehat{\tilde{\lambda}}_{hk}$ and $\mathrm{trace}(\mathcal{K}\mathcal{P}^{S})=m\lambda$, we obtain
\[
\widehat{\tilde{\lambda}}_{hk}=\lambda+\frac{1}{m}h^{2p}\,\mathrm{trace}(C_{2p}\mathcal{P}^{S}-\mathcal{K}\,\mathcal{U}_{2p})+O(h^{2p+2}),
\]
which completes the proof.
According to the asymptotic expansion (65), the Richardson extrapolation for the eigenvalue approximation proceeds as follows. We first divide each subinterval of the partitions $\Delta^{(1)}_{M}$ and $\Delta^{(2)}_{M}$ into two halves, which makes up new partitions, denoted by $\Delta^{(1)}_{2M}$ and $\Delta^{(2)}_{2M}$:
\[
\Delta^{(1)}_{2M}=\Delta^{(2)}_{2M}:\quad a=x_{0}<x_{\frac12}<x_{1}<\cdots<x_{M-\frac12}<x_{M}=b.
\]
Here $h=k=\frac{1}{2M}$ and $D=[a,b]\times[a,b]$. We then have the following asymptotic expansion for the eigenvalue approximation with respect to the new partitions:
\[
\widehat{\tilde{\lambda}}_{h/2,k/2}=\lambda+\frac{1}{m}\Bigl(\frac{h}{2}\Bigr)^{2p}\mathrm{trace}(Q_{2p})+O(h^{2p+2}). \tag{66}
\]
From the asymptotic expansions (65) and (66), the Richardson extrapolation for the eigenvalue approximation is defined by
\[
\widehat{\tilde{\lambda}}^{E}_{hk}=\frac{2^{2p}\,\widehat{\tilde{\lambda}}_{h/2,k/2}-\widehat{\tilde{\lambda}}_{hk}}{2^{2p}-1}. \tag{67}
\]
In the following theorem we give the superconvergence rate for the eigenvalue approximation using Richardson extrapolation.

Theorem 4.9 Assume that the conditions of Theorem 4.8 hold and that the Richardson extrapolation $\widehat{\tilde{\lambda}}^{E}_{hk}$ is defined by (67). Then there holds the error estimate
\[
|\widehat{\tilde{\lambda}}^{E}_{hk}-\lambda|=O(h^{2p+2}). \tag{68}
\]

Proof The result follows from the asymptotic error expansions (65) and (66).
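The extrapolation step itself is a two-line computation. A minimal sketch on a model sequence $\lambda(h)=\lambda+Ch^{2p}+Dh^{2p+2}$ with $p=1$; the constants are chosen arbitrarily for illustration:

```python
def richardson(lam_h, lam_h2, p):
    """Extrapolation (67): eliminate the O(h^{2p}) term from two
    approximations computed with mesh sizes h and h/2."""
    return (4**p * lam_h2 - lam_h) / (4**p - 1)

# model sequence lam(h) = lam + C*h^2 + D*h^4, mimicking expansion (65) with p = 1
lam, C, D = 2.0, 0.7, -0.3
approx = lambda h: lam + C * h**2 + D * h**4

h = 0.1
err_plain = abs(approx(h / 2) - lam)                             # O(h^2)
err_extra = abs(richardson(approx(h), approx(h / 2), 1) - lam)   # O(h^4)
```

On this model sequence the extrapolated error is smaller than the plain $O(h^2)$ error by roughly two orders of magnitude, consistent with the jump from order 2 to order 4 predicted by Theorem 4.9.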
Remark 4.10 The discrete collocation and iterated discrete collocation methods, and Richardson extrapolation, for the 2-D eigenvalue problem of a compact integral operator with a smooth kernel will be discussed in our future articles. We expect similar superconvergence results for both eigenvalues and eigenvectors as in the discrete Galerkin and iterated discrete Galerkin methods.
5 Numerical Examples

In this section we discuss the numerical results. Consider the eigenvalue problem (6),
\[
\mathcal{K}u=\lambda u,\qquad 0\neq\lambda\in\mathbb{C},\quad \|u\|=1,
\]
for the integral operator
\[
(\mathcal{K}u)(s,t)=\int_{0}^{1}\!\!\int_{0}^{1}K(s,t,x,y)u(x,y)\,dx\,dy,\qquad s,t\in[0,1],
\]
for various smooth kernels $K(s,t,x,y)$. Let $X_{MM}$ be the space of piecewise constant functions ($p=q=1$) on $[0,1]\times[0,1]$ with respect to the initial uniform partitions
\[
\Delta^{(1)}_{M}=\Delta^{(2)}_{M}:\quad 0<\frac{1}{M}<\frac{2}{M}<\cdots<\frac{M-1}{M}<1,
\]
\[
\Delta_{MN}=\Delta^{(1)}_{M}\times\Delta^{(2)}_{M}=\Bigl\{\Bigl(\frac{m}{M},\frac{n}{M}\Bigr):0\le m\le M,\ 0\le n\le M\Bigr\},\qquad h=k=\frac{1}{M}.
\]
We choose the numerical quadrature to be the one-point composite Gaussian quadrature formula, which is exact for all polynomials of degree less than 2, that is, $k=l=1$. The quadrature points and weights are given by
\[
x_{m,i}=y_{m,i}=\frac{2m+1}{2M},\qquad i=1,\ m=0,1,\dots,M-1,
\]
and $w_{m}=\tilde{w}_{n}=\frac{1}{M}$, $m,n=0,1,\dots,M-1$. As the exact eigenelements of $\mathcal{K}$ are not known, for the sake of comparison we replace the integral operator $\mathcal{K}$ by
\[
(\mathcal{K}^{N}u)(s,t)=\sum_{k=0}^{\mu r}\sum_{l=0}^{\mu r}w_{k}\tilde{w}_{l}K(s,t,s_{k},t_{l})u(s_{k},t_{l}),
\]
obtained by using composite Gaussian quadrature with $r$ points associated with a uniform partition of $[0,1]\times[0,1]$ with $\mu$ intervals in both directions, where $\mu\gg M$, the $w_{k},\tilde{w}_{l}$ are weights, and $s_{k},t_{l}$ are quadrature points on $[0,1]$. Let $\lambda$ be the eigenvalue of $\mathcal{K}^{N}$ and $\mathcal{P}^{S}$ the associated spectral projection. For different kernels and different values of $M$, we compute the discrete Galerkin eigenvector $u_{hk}$, the iterated eigenvector $\tilde{u}_{hk}$, the eigenvalue $\widehat{\tilde{\lambda}}_{hk}$, and the Richardson-extrapolated eigenvalue $\widehat{\tilde{\lambda}}^{E}_{hk}$. Denote
\[
|\lambda-\widehat{\tilde{\lambda}}_{hk}|=O(h^{\alpha}),\qquad \|u_{hk}-\mathcal{P}^{S}u_{hk}\|_{\infty}=O(h^{\beta}),\qquad \|\tilde{u}_{hk}-\mathcal{P}^{S}\tilde{u}_{hk}\|_{\infty}=O(h^{\gamma}),\qquad |\lambda-\widehat{\tilde{\lambda}}^{E}_{hk}|=O(h^{\delta}),
\]
where $h=k=\frac{1}{M}$ is the step length. For $M=2,4,8,16$, we compute $\alpha$, $\beta$, $\gamma$ and $\delta$, which are listed in the tables below. Since $k=l=1$, the theoretical order of convergence of the eigenvector is 1, the orders of the iterated eigenvector and of the eigenvalues are 2, and the order of the extrapolated eigenvalue is 4. In Tables 1, 2, 3 and 4 the numerical results agree with the theoretical results.

Example 1 $K(s,t,x,y)=s\sin(t)+xe^{y}$, $[a,b]=[0,1]$, $[c,d]=[0,1]$.

Table 1 Eigenvector error bounds

M  | $\|u_{hk}-\mathcal{P}^{S}u_{hk}\|_{\infty}$ | $\|\tilde{u}_{hk}-\mathcal{P}^{S}\tilde{u}_{hk}\|_{\infty}$ | $\beta$  | $\gamma$
2  | 2.052457e−01 | 7.279250e−03 | *        | *
4  | 9.147141e−02 | 1.615065e−03 | 1.165959 | 2.172197
8  | 3.960725e−02 | 3.824588e−04 | 1.207556 | 2.078216
16 | 1.442616e−02 | 9.319469e−05 | 1.457077 | 2.036984
Table 2 Eigenvalue error bounds

M  | $|\lambda-\widehat{\tilde{\lambda}}_{hk}|$ | $\alpha$ | $|\lambda-\widehat{\tilde{\lambda}}^{E}_{hk}|$ | $\delta$
2  | 2.249878e−02 | *        | *            | *
4  | 5.622068e−03 | 2.000674 | 4.516439e−06 | *
8  | 1.405277e−03 | 2.000246 | 3.867777e−07 | 3.545609
16 | 3.513045e−04 | 2.000060 | 2.494245e−08 | 3.954829
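The eigenvalue columns of Table 2 can be reproduced in a few lines. A minimal Python sketch (not the authors' code): with $p=q=k=l=1$ the discrete Galerkin eigenvalue problem reduces to a midpoint-rule Nyström matrix, and `galerkin_matrix`, `largest_eig` and `kern` below are illustrative names.

```python
import numpy as np

def galerkin_matrix(M, kern):
    """Discrete Galerkin matrix on the uniform M x M partition of [0,1]^2.

    With piecewise constants and the one-point Gauss rule this reduces to
    the Nystrom matrix at the cell midpoints, scaled by the cell area 1/M^2.
    """
    t = (2 * np.arange(M) + 1) / (2 * M)              # midpoints (2m+1)/(2M)
    S, T, X, Y = np.meshgrid(t, t, t, t, indexing="ij")
    return kern(S, T, X, Y).reshape(M * M, M * M) / M**2

def largest_eig(M, kern):
    """Approximation to the eigenvalue of largest modulus."""
    return max(np.linalg.eigvals(galerkin_matrix(M, kern)), key=abs).real

kern = lambda s, t, x, y: s * np.sin(t) + x * np.exp(y)   # kernel of Example 1
lam_h = largest_eig(8, kern)
lam_h2 = largest_eig(16, kern)
# Richardson extrapolation (67) with p = 1
lam_E = (4 * lam_h2 - lam_h) / 3
```

Halving the mesh once more and extrapolating again would exhibit the fourth-order rate seen in the $\delta$ column.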
Example 2 $K(s,t,x,y)=\sin(t+x)+\sin(s+y)+3$, $[a,b]=[0,1]$, $[c,d]=[0,1]$.

Table 3 Eigenvector error bounds

M  | $\|u_{hk}-\mathcal{P}^{S}u_{hk}\|_{\infty}$ | $\|\tilde{u}_{hk}-\mathcal{P}^{S}\tilde{u}_{hk}\|_{\infty}$ | $\beta$  | $\gamma$
2  | 7.296465e−02 | 4.016996e−04 | *        | *
4  | 3.621528e−02 | 9.765475e−05 | 1.010598 | 2.040354
8  | 1.634226e−02 | 2.422768e−05 | 1.147991 | 2.011033
16 | 6.046115e−03 | 6.043696e−06 | 1.434527 | 2.003153
Table 4 Eigenvalue error bounds

M  | $|\lambda-\widehat{\tilde{\lambda}}_{hk}|$ | $\alpha$ | $|\lambda-\widehat{\tilde{\lambda}}^{E}_{hk}|$ | $\delta$
2  | 2.624856e−03 | *        | *            | *
4  | 6.643366e−04 | 1.982252 | 8.688676e−05 | *
8  | 1.665823e−04 | 1.995678 | 5.134002e−06 | 4.08098068
16 | 4.167123e−05 | 1.999111 | 3.048527e−07 | 4.07389921
Acknowledgements The authors take this opportunity to thank the reviewers for their valuable suggestions, which improved the paper.
References

1. Ahues, M., Largillier, A., Limaye, B.V.: Spectral Computations for Bounded Operators. Chapman and Hall/CRC, New York (2001)
2. Atkinson, K.E.: The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press, Cambridge (1997)
3. Atkinson, K.E., Potra, F.A.: On the discrete Galerkin method for Fredholm integral equations of the second kind. IMA J. Numer. Anal. 9, 385–403 (1989)
4. Atkinson, K.E.: Convergence rates for approximate eigenvalues of compact integral operators. SIAM J. Numer. Anal. 12, 213–222 (1975)
5. Atkinson, K.E.: The numerical solution of the eigenvalue problem for compact integral operators. Trans. Am. Math. Soc. 129, 458–465 (1967)
6. Babuška, I., Osborn, J.E.: Finite element-Galerkin approximation of the eigenvalues and eigenvectors of selfadjoint problems. Math. Comput. 52, 275–297 (1989)
7. Babuška, I., Osborn, J.E.: Estimates for the errors in eigenvalue and eigenvector approximation by Galerkin methods, with particular attention to the case of multiple eigenvalues. SIAM J. Numer. Anal. 24, 1249–1276 (1987)
8. Bramble, J.H., Osborn, J.E.: Rate of convergence estimates for nonselfadjoint eigenvalue approximations. Math. Comput. 27, 525–549 (1973)
9. Chatelin, F.: Spectral Approximation of Linear Operators. Academic Press, New York (1983)
10. Chen, Z., Long, G., Nelakanti, G.: Richardson extrapolation of iterated discrete projection methods for eigenvalue approximation. J. Comput. Appl. Math. 223, 48–61 (2009)
11. Chen, Z., Nelakanti, G., Xu, Y., Zhang, Y.: A fast collocation method for eigen-problems of weakly singular integral operators. J. Sci. Comput. 41, 256–272 (2009)
12. Conte, S.D., de Boor, C.: Elementary Numerical Analysis. Tata McGraw-Hill, Noida (2005)
13. Han, G., Wang, R.: Richardson extrapolation of iterated discrete Galerkin solution for two-dimensional Fredholm integral equations. J. Comput. Appl. Math. 139, 49–63 (2002)
14. Isaacson, E., Keller, H.B.: Analysis of Numerical Methods. Wiley, New York (1966)
15. Kulkarni, R.P., Nelakanti, G.: Spectral approximation using iterated discrete Galerkin method. Numer. Funct. Anal. Optim. 23, 91–104 (2002)
16. Nelakanti, G.: A degenerate kernel method for eigenvalue problems of compact integral operators. Adv. Comput. Math. 27, 339–354 (2007)
17. Nelakanti, G.: Spectral approximation for integral operators. Ph.D. thesis, Indian Institute of Technology, Bombay, India (2003)
18. Osborn, J.E.: Spectral approximation for compact operators. Math. Comput. 29, 712–725 (1975)
19. Prenter, P.M.: Splines and Variational Methods. Wiley, New York (1975)
20. Sloan, I.H.: Iterated Galerkin method for eigenvalue problems. SIAM J. Numer. Anal. 13(5), 753–760 (1976)