Theoretical and Mathematical Physics, 130(1): 25–44 (2002)
INITIAL BOUNDARY VALUE PROBLEM FOR THE KdV EQUATION ON A SEMIAXIS WITH HOMOGENEOUS BOUNDARY CONDITIONS

I. T. Habibullin¹

We consider the Korteweg–de Vries equation on the semiaxis with zero boundary conditions at x = 0 and arbitrary smooth decreasing initial data. We show that the problem can be effectively integrated by the inverse scattering transform method if the associated linear equation has no discrete spectrum. Under these assumptions, we prove the global solvability of the problem.
1. Introduction

We consider the initial boundary value problem for the Korteweg–de Vries (KdV) equation,

    u_t = u_{xxx} - 6uu_x,    x > 0,  t > 0,        (1)
    u|_{x=0} = 0,    u_{xx}|_{x=0} = 0,        (2)
    u|_{t=0} = u_0(x),    u_0(x) → 0 as x → +∞.        (3)
All the conditions to be satisfied by the data in problem (1)–(3) are formulated as assumptions 1 and 2.

Assumption 1. The function u_0(x) that determines the initial condition is infinitely smooth and tends to zero at infinity in the Schwartz sense, i.e., the function u_0(x) and all its derivatives decrease faster than any power of x. At x = 0, the function u_0(x) vanishes together with all its derivatives.

Although the KdV equation is one of the most popular and best studied equations in the context of integrability theory, the classical setting of the initial boundary value problem on the semiaxis for this equation remains poorly investigated. Even the existence of a global solution in the general case is an open question.² We note that in a similar problem for the modified KdV equation, the global solvability theorem was proved by Khablov in 1979 (see [1]).

The inverse scattering transform method, which has allowed a detailed investigation of the Cauchy problem for Eq. (1) within different classes of the initial data, is difficult to transfer to initial boundary value problems with arbitrary boundary and initial conditions. Little success has been achieved here, even though these problems have been actively investigated, particularly in recent years (see, e.g., the work by Fokas [2], where original approaches were proposed). An exception is provided by the special types of boundary conditions called integrable. In particular, such are boundary conditions (2), which agree with the main integrability criteria: the problem in Eqs. (1) and (2) has infinitely many integrals of motion and higher symmetries [3]. The first three integrals of motion in this problem are given by (see [4])
    J_1 = \int_0^\infty u\,dx,    J_2 = \int_0^\infty (u_x^2 + 2u^3)\,dx,    J_3 = \int_0^\infty (u_{xx}^2 - 5u^2 u_{xx} + 5u^4)\,dx.
¹ Mathematical Institute, Ufa Science Center, RAS, Ufa, Russia, e-mail: [email protected].
² The author is grateful to A. V. Faminskii, who kindly pointed this out.

Translated from Teoreticheskaya i Matematicheskaya Fizika, Vol. 130, No. 1, pp. 31–53, January, 2002. Original article submitted March 27, 2001.
© 2002 Plenum Publishing Corporation 0040-5779/02/1301-0025$27.00
We note, however, that not all the conservation laws of the KdV equation are compatible with problem (1)–(3). For example, the well-known functional \int_0^\infty u^2\,dx does not preserve its value. An infinite set of conservation laws seems to allow proving that the problem in Eqs. (1) and (2) is a completely integrable Hamiltonian system, although this question has not been addressed anywhere. In principle, the compatibility of the boundary condition with the higher symmetries allows constructing partial solutions of the boundary problem (explicit partial solutions of the boundary problem for the KdV equation were described, e.g., in [4]). We note that thanks to Moses [5], the explicit solution

    u = -2 D_x^2 \log\bigl(x^3 - 12(t - t_0)\bigr)

of the problem in Eqs. (1) and (2) has long been known. Unfortunately, this solution has a second-order pole on the right semiaxis x > 0 for t > t_0. Incidentally, no decreasing explicit partial solution of problem (1), (2) is known that would be free of singularities for x > 0.

The aim in this work is to show that problem (1)–(3) is effectively included in the framework of the inverse scattering transform method. We recall that underlying this method is the existence of the so-called L–A pair: the KdV equation is the compatibility condition for the pair of linear differential equations

    -y_{xx} + uy = ξ^2 y,        (4)
    y_t = u_x y - 2(u + 2ξ^2) y_x.        (5)
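Both the Moses solution and boundary conditions (2) are easy to check numerically. The sketch below verifies Eq. (1) by finite differences; the sample point x = 2, t = 0 and the pole time t_0 = 1 are arbitrary choices made so that the solution is smooth on x > 0.

```python
T0 = 1.0  # pole time t_0 (ad hoc choice); for t < T0 the solution is smooth on x > 0

def u(x, t):
    # Moses solution u = -2 D_x^2 log(x^3 - 12(t - t_0)): with
    # f = x^3 - 12(t - t_0), f' = 3x^2, f'' = 6x, it equals
    # u = -2 (f''/f - (f'/f)^2)
    f = x**3 - 12.0 * (t - T0)
    return -2.0 * (6.0 * x / f - 9.0 * x**4 / f**2)

def d_dx(x, t, h=1e-4):
    return (u(x + h, t) - u(x - h, t)) / (2 * h)

def d_dt(x, t, h=1e-4):
    return (u(x, t + h) - u(x, t - h)) / (2 * h)

def d3_dx3(x, t, h=5e-3):
    # second-order central stencil for the third derivative
    return (u(x + 2*h, t) - 2*u(x + h, t) + 2*u(x - h, t) - u(x - 2*h, t)) / (2 * h**3)

x, t = 2.0, 0.0
residual = d_dt(x, t) - (d3_dx3(x, t) - 6.0 * u(x, t) * d_dx(x, t))
print(abs(residual) < 1e-3)        # Eq. (1) holds up to discretization error
print(u(0.0, t) == 0.0)            # boundary condition u(0, t) = 0
uxx0 = (u(2e-3, t) - 2 * u(1e-3, t) + u(0.0, t)) / 1e-6
print(abs(uxx0) < 1e-4)            # boundary condition u_xx(0, t) = 0
```

The same check also makes the pole visible: for t > t_0 the factor x^3 - 12(t - t_0) vanishes at a positive x.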
The main difficulty in transferring the inverse scattering transform method to mixed problems is that on the semiaxis, the t evolution of the scattering matrix for the problem in Eq. (4) is described not by explicit formulas but by a nonlinear differential equation (see Eq. (26) in what follows). In the case of an integrable boundary condition, this nonlinear equation has a nontrivial group of discrete symmetries. The existence of a discrete group allows calculating the scattering data for the second equation in the L–A pair from the scattering matrix for Eq. (4) known at t = 0. The potential and eigenfunctions of Eq. (5) at x = 0 are then reconstructed by solving the inverse scattering problem. This eventually allows effectively determining the time dependence of the scattering matrix because it satisfies essentially the same equation (5) (this point was addressed briefly in [6]).

We now discuss the contents of this work in more detail. In Sec. 2, we recall some known facts from the scattering theory for a second-order differential operator; these facts are reformulated for the case of a smooth potential supported on the semiaxis. We define the eigenfunctions, the scattering matrix, etc., and we then formulate a certain global requirement imposed on the initial function u_0(x) in addition to the requirements above (see assumption 2). This requirement states that the equation -y_{xx} + u_0(x) y = ξ^2 y must not have discrete eigenvalues. The basic idea of the work is explained in Sec. 3, where the evolution equation for the scattering matrix is reduced to a linear problem, the Riemann–Hilbert problem of the linear conjugation of piecewise analytic matrix functions. The unique solvability of the Riemann–Hilbert problem is proved by reducing it to a system of integral equations and by applying known theorems of functional analysis. Next, in Sec. 4, we verify that Eq. (4) has no discrete spectrum for t > 0.
This is far from obvious because in contrast to the Cauchy problem, the diagonal elements of the scattering matrix (whose zeros in fact give the discrete spectrum) are not integrals of motion in this case. In other words, the t evolution of Eq. (4) is not isospectral: the continuous spectrum changes nontrivially.

The functions r(ξ) and R(z), which are the scattering data for Eqs. (4) and (5) for t = 0 and x = 0 respectively (we recall that they are defined as the ratio of the reflection coefficient to the refraction coefficient), are related as r(-ξ) = R(z). This equality seems to be the correct generalization to the nonlinear (integrable) case (1)–(3) of the relation called the global relation by Fokas [2] in the case of a linear problem (and in a number of nonlinear problems). The global relation is essentially nonlinear for general boundary conditions for the KdV equation. The problem of describing the boundary conditions for which this relation becomes linear requires a special investigation.
The results in Secs. 3 and 4 allow deducing the theorem that the initial boundary value problem in Eqs. (1)–(3) under assumptions 1 and 2 has a global solution. A detailed proof of this theorem is given in Sec. 5. In constructing the common solution of the system of equations (4) and (5), we must solve the matrix linear conjugation problem in which the conjugation matrix satisfies a linear equation in t with variable coefficients,

    \frac{d}{dt} Φ(ξ,t) = X_-(ξ,t)\,Φ(ξ,t) - Φ(ξ,t)\,X(ξ,t)

(see Eqs. (62) and (64) in what follows). In Sec. 6, we investigate the behavior of the function u_x(0,t), which is the potential in Eq. (5) for x = 0. We show that in the general case, the asymptotic behavior is given by

    u_x(0,t) = \frac{1}{t} + o\Bigl(\frac{1}{t}\Bigr),    t → ∞.

With the Taylor formula, it is easy to obtain the asymptotic behavior u(x,t) ~ x/t of the solution in a neighborhood of the boundary because u(0,t) = u_{xx}(0,t) = 0 and u_x(0,t) ~ 1/t. As a rule, in the construction of solutions to nonlinear equations via the inverse scattering transform method, the common solution of the system in Eqs. (4) and (5) is simultaneously a solution of the scattering problem for the first equation in this pair. On the other hand, this is not so in our case: the solution of the scattering problem for Eq. (4) does not satisfy the second equation. In the last section, Sec. 7, we briefly discuss the problems related to the discrete spectrum of Eq. (4) and advance a hypothesis regarding the nature of the interaction of “solitons” with the boundary.
2. The direct scattering problem

The direct scattering problem consists in constructing a nonlinear functional whereby the potential u(x,t) in Eq. (4) is assigned the so-called scattering data. Because the scattering problem on the entire axis for a second-order equation is well studied, it is convenient to reduce the direct scattering problem on the semiaxis to a problem on the entire axis by continuing the potential u_0(x) from the positive semiaxis to the entire real axis. The potential is continued trivially by setting u_0(x) = 0 for all x < 0. This continuation is smooth in view of the conditions imposed on the initial data. We note that continuing the potential by zero does not agree with the dynamics of the equation and is only used here for convenience in presenting the direct scattering problem for t = 0.

At the initial time, the potential u(x,0) = u_0(x) is a smooth function and rapidly decreases, and it therefore follows from the general theory (see [7]) that for any ξ from the closed upper half-plane, there exists a solution of Eq. (4) that can be represented as

    e^+(x,ξ) = e^{iξx} + \int_x^\infty K(x,τ)\,e^{iξτ}\,dτ,        (6)

where the function K(x,τ) is determined from the linear integral equation of the Volterra type

    K(x,τ) = \frac{1}{2} \int_{(x+τ)/2}^\infty u(s)\,ds + \frac{1}{2} \int_x^\infty u(s) \int_{τ-s+x}^{τ+s-x} K(s,v)\,dv\,ds        (7)

with K(x,τ) = 0 for τ < x. Consequently, the estimate

    |K(x,τ)| ≤ C \int_{(x+τ)/2}^\infty |u(s)|\,ds

is satisfied. The solution e^-(x,ξ) is defined similarly,

    e^-(x,ξ) = e^{-iξx} + \int_{-\infty}^x K^-(x,τ)\,e^{-iξτ}\,dτ,

and it is then obvious that e^-(x,ξ) = e^{-iξx} for x ≤ 0. For real nonzero values of the parameter ξ, the functions e^+(x,ξ) and e^+(x,-ξ), as well as the functions e^-(x,ξ) and e^-(x,-ξ), constitute fundamental systems of solutions, and we can therefore write the decompositions

    e^+(x,ξ) = a(ξ)\,e^-(x,-ξ) + b(ξ)\,e^-(x,ξ),        (8)
    e^-(x,-ξ) = -b(-ξ)\,e^+(x,ξ) + a(ξ)\,e^+(x,-ξ).        (9)

The Wronskians of the basis solutions are obviously equal to 2iξ, and the coefficients in this decomposition are therefore explicitly given by

    a(ξ) = \frac{1}{2iξ}\bigl(e_x^+(0,ξ) + iξ\,e^+(0,ξ)\bigr),        (10)
    b(ξ) = \frac{1}{2iξ}\bigl(-e_x^+(0,ξ) + iξ\,e^+(0,ξ)\bigr),        (11)

where the subscript denotes the derivative with respect to x. The respective functions a(ξ) and b(ξ) are called the refraction and reflection coefficients. For all nonzero real values of the parameter ξ, the functions a(ξ) and b(ξ) satisfy the involutions \bar a(ξ) = a(-ξ) and \bar b(ξ) = b(-ξ) and the constraint |a(ξ)|^2 - |b(ξ)|^2 = 1. The bar here denotes complex conjugation. As is known, the squares of zeros of the function a(ξ) occurring in the upper half-plane Im ξ > 0 are eigenvalues of Eq. (4).

Assumption 2. Everywhere in what follows, we subject the initial function u_0(x) to the stringent restriction that the equation -y_{xx} + u_0(x)y = ξ^2 y must not have a discrete spectrum, i.e., we must have a(ξ) ≠ 0 for all ξ, Im ξ > 0.

Explicit formulas (10) and (11) allow deriving (see [7]) integral representations for the functions a(ξ) and b(ξ),

    a(ξ) = 1 - \frac{1}{2iξ}\Bigl(\int_0^\infty u_0(s)\,ds + \int_0^\infty e^{iξs} A(s)\,ds\Bigr),        (12)
    b(ξ) = \frac{1}{2iξ}\int_0^\infty e^{iξs} B(s)\,ds,        (13)

where A(s) = K_τ(0,s) - K_x(0,s) ∈ L_1(0,∞) and B(s) = -K_τ(0,s) - K_x(0,s) ∈ L_1(0,∞).

Lemma 2.1. The functions a(ξ) and b(ξ) are infinitely differentiable for ξ ≠ 0, Im ξ ≥ 0, and their derivatives satisfy the estimates as ξ → ∞

    |b^{(k)}(ξ)| ≤ C_{k,m}\,|ξ|^{-m},    k, m = 0, 1, 2, ...,        (14)
    |a^{(k)}(ξ)| ≤ C_k\,|ξ|^{-1},    k = 1, 2, ...,        (15)
    |a(ξ) - 1| ≤ C\,|ξ|^{-1}.        (16)
Outline of the proof. It follows from Eq. (7) that the function K(x,τ) and all its derivatives with respect to each variable decrease faster than any power of x and τ. The same equation also allows easily obtaining the formula

    B(τ) = \frac{1}{2}\,u_0\Bigl(\frac{τ}{2}\Bigr) + \int_0^\infty u_0(s)\,K(s, τ-s)\,ds.        (17)

This implies that B(0) = 0 because u_0(0) = 0 and K(s,-s) = 0 for s > 0. It can be shown by induction that B^{(k)}(0) = 0. Integrating Fourier integral (13) by parts m times, we see that b(ξ) is a rapidly decreasing function. The functions a(ξ) and b(ξ) are differentiable because the function K(x,τ) and its derivatives, and hence the functions A(τ) and B(τ), decrease faster than any power of τ. We note that because A(0) is in general nonzero, the function a(ξ) - 1 decreases only as the first power of ξ.

To complete the discussion of the direct scattering problem, we introduce the function r(ξ) = b(ξ)/a(ξ), which is a functional parameter allowing an unambiguous reconstruction of the potential in Eq. (4). In our case, in view of assumption 2, the full set of the scattering data of the equation consists of precisely one function r(ξ).

Proposition 2.1. The function r(ξ) is infinitely differentiable for all ξ, Im ξ ≥ 0, and decreases together with all its derivatives faster than any power of ξ.

For ξ ≠ 0, the validity of the proposition follows from Lemma 2.1. We now prove the smoothness at zero. Adding both sides of Eqs. (10) and (11), we obtain the relation e^+(0,ξ) = a(ξ) + b(ξ), which makes it clear that the “residues” of a(ξ) and b(ξ) at zero coincide up to multiplication by -1. If the “residues” are nonzero, ξa(ξ) and ξb(ξ) are everywhere smooth functions on the set Im ξ ≥ 0 with ξa(ξ) ≠ 0 on this set, and r(ξ) = ξb(ξ)/(ξa(ξ)) is therefore a smooth function (we here use that not only the functions ξa(ξ) and ξb(ξ) themselves but also their derivatives are continuous in the closed upper half-plane).
On the other hand, if the “residues” are equal to zero, the functions a(ξ) and b(ξ) are continuous (and moreover smooth) at zero, and the constraint condition |a(ξ)|^2 - |b(ξ)|^2 = 1 implies that |a(ξ)| ≥ 1 for ξ ∈ R. Therefore, r(ξ) is also a smooth function in this case. The proposition implies the following corollary.

Corollary. The function r(ξ) admits the integral representation

    r(ξ) = \int_0^\infty e^{iξx} q(x)\,dx,        (18)

where q(x) is a smooth rapidly decreasing function vanishing at x = 0 together with all its derivatives.

Proof. From the inverse Fourier transformation

    q(x) = \frac{1}{2π} \int_{-\infty}^{+\infty} e^{-iξx} r(ξ)\,dξ,

we have

    q^{(k)}(0) = \frac{1}{2π} \int_{-\infty}^{+\infty} (-iξ)^k r(ξ)\,dξ.

Because the function (-iξ)^k r(ξ) is analytic in the upper half-plane and rapidly decreases at infinity for any nonnegative integer k, the right-hand side of the last formula vanishes.
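For a concrete initial potential, the coefficients in Eqs. (10) and (11) and the ratio r(ξ) = b(ξ)/a(ξ) can be computed by integrating Eq. (4) from the region where the potential has already died out. The sketch below is a minimal illustration with the ad hoc nonnegative potential u_0(x) = 2e^{-(x-3)^2} (a positive potential has no discrete spectrum, so assumption 2 holds, and the values at x = 0 and x = 10 are numerically negligible); it checks the constraint |a(ξ)|^2 - |b(ξ)|^2 = 1 and the bound |r(ξ)| < 1 at a sample real ξ.

```python
import math, cmath

def u0(x):
    # ad hoc nonnegative potential: positive, hence no discrete spectrum
    # (assumption 2); numerically negligible at x = 0 and for x >= 10
    return 2.0 * math.exp(-(x - 3.0) ** 2)

def jost_at_zero(xi, L=10.0, n=4000):
    # integrate -y'' + u0(x) y = xi^2 y from x = L, where e+(x, xi) ~ e^{i xi x},
    # down to x = 0 with RK4 applied to the first-order system (y, y')
    h = -L / n
    x = L
    y = cmath.exp(1j * xi * L)
    yp = 1j * xi * y

    def f(x, y, yp):
        return yp, (u0(x) - xi * xi) * y

    for _ in range(n):
        k1 = f(x, y, yp)
        k2 = f(x + h / 2, y + h / 2 * k1[0], yp + h / 2 * k1[1])
        k3 = f(x + h / 2, y + h / 2 * k2[0], yp + h / 2 * k2[1])
        k4 = f(x + h, y + h * k3[0], yp + h * k3[1])
        y += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        yp += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        x += h
    return y, yp

xi = 1.0
e0, ex0 = jost_at_zero(xi)
a = (ex0 + 1j * xi * e0) / (2j * xi)   # Eq. (10), refraction coefficient
b = (-ex0 + 1j * xi * e0) / (2j * xi)  # Eq. (11), reflection coefficient
print(abs(abs(a) ** 2 - abs(b) ** 2 - 1.0) < 1e-6)  # |a|^2 - |b|^2 = 1
print(abs(b / a) < 1.0)                              # |r(xi)| < 1 on the real axis
```

The constraint |a|^2 - |b|^2 = 1 is conserved by the exact flow in x, so its numerical defect also serves as an accuracy check on the integrator.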
3. Time dynamics of the eigenfunctions and the scattering matrix

In this section, we derive a differential equation governing the dependence of the scattering matrix

    s(ξ) = \begin{pmatrix} a(ξ) & b(-ξ) \\ b(ξ) & a(-ξ) \end{pmatrix}        (19)

on the time t. Here, a(ξ) = a(ξ,t) and b(ξ) = b(ξ,t) are the refraction and reflection coefficients corresponding to the potential u(x,t) that is a solution of the initial boundary value problem in Eqs. (1)–(3). In deriving the main equations in what follows, we assume the existence of a solution of this problem in the class of rapidly decreasing functions.

We first determine the nature of the time evolution of the eigenfunctions e^±(x,ξ) = e^±(x,ξ,t). For convenience, we pass from the scalar L–A-pair equations (4) and (5) to the matrix equations

    Y_x = U Y,    Y = \begin{pmatrix} y \\ y_x \end{pmatrix},    U = \begin{pmatrix} 0 & 1 \\ u - λ & 0 \end{pmatrix},        (20)

    Y_t = A Y,    A = \begin{pmatrix} u_x & -2u - 4λ \\ u_{xx} - (4λ + 2u)(u - λ) & -u_x \end{pmatrix}        (21)

(the entry A_{21} is obtained by differentiating Eq. (5) with respect to x and using y_{xx} = (u - λ)y). It can be easily verified that the matrix function P(x,ξ,t) constructed in accordance with the rule

    P(x,ξ,t) = \begin{pmatrix} e^+(x,ξ,t) & e^-(x,ξ,t) \\ e_x^+(x,ξ,t) & e_x^-(x,ξ,t) \end{pmatrix}

satisfies only the first equation of the L–A pair in (20) and (21). We use right multiplication to correct this solution so that the second L–A-pair equation is also satisfied. The full statement is given by the following lemma.

Lemma 3.1. The function \mathcal{P}(x,ξ,t) = P(x,ξ,t)\,V(ξ,t) is a common solution of the pair of linear matrix differential equations of form (20) and (21) if the function V(ξ,t) satisfies the Cauchy problem

    \frac{dV(ξ,t)}{dt} = \begin{pmatrix} -4iξ^3 & \dfrac{u_x(0,t)}{a(ξ,t)} \\ 0 & 4iξ^3 - \dfrac{u_x(0,t)\,b(ξ,t)}{a(ξ,t)} \end{pmatrix} V(ξ,t),    V(ξ,0) = E,        (22)

where E is the unit matrix.

The lemma is proved by a direct calculation. We substitute the first column of the matrix V in the equation d(PV)/dt = A\,PV and set x = ∞ and then substitute the second column and set x = 0. In both cases, we obtain Eq. (22). The first column is evaluated in the explicit form v_{11}(ξ,t) = e^{-4iξ^3 t}, v_{21}(ξ,t) = 0. The full solution of Eq. (22) is given below (see Eq. (27)).

Lemma 3.1 implies that the t evolution of the eigenfunction e^+(0,ξ,t) is described by the equations

    \frac{d}{dt} e^+ = \bigl(4iξ^3 + u_x(0,t)\bigr) e^+ - 4ξ^2 e_x^+,
    \frac{d}{dt} e_x^+ = 4ξ^4 e^+ + \bigl(4iξ^3 - u_x(0,t)\bigr) e_x^+.        (23)

The scattering matrix s(ξ,t) satisfies a system of linear ordinary differential equations in t [6],

    s_t = 4iξ^3 [s, σ_3] + u_x(0,t)\,σ_1 s.        (24)
Here and everywhere in what follows, σ_i (i = 1, 2, 3) are the Pauli matrices

    σ_1 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},    σ_2 = \begin{pmatrix} 0 & i \\ -i & 0 \end{pmatrix},    σ_3 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}.

To verify Eq. (24), we take the t derivatives of Eqs. (10) and (11) using Eqs. (23). Then

    \dot a = u_x b,    \dot b = 8iξ^3 b + u_x a.        (25)

It remains to rewrite the system in Eqs. (25) as the matrix equation (24) for matrix (19).

In the general case, the evolution equation for the scattering matrix is given by [8]

    s_t = 4iξ^3 [s, σ_3] + P_1 σ_1 s + P_2 σ_2 s + P_3 σ_3 s,        (26)

where

    P_1 = u_x(0,t),    P_2 = \frac{-4u(0,t)\,ξ^2 + u_{xx}(0,t) - 2u^2(0,t)}{2ξ},    P_3 = \frac{2u^2(0,t) - u_{xx}(0,t)}{2ξ}.
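Substituting boundary conditions (2) reduces Eq. (26) to Eq. (24), i.e., to the scalar system (25). One property of this flow that is used in Sec. 4, the conservation of |a|^2 - |b|^2, is easy to observe numerically; in the sketch below the boundary datum u_x(0,t) = 1/(1+t), the value ξ = 1, and the initial data are ad hoc choices.

```python
import cmath

XI = 1.0                              # sample spectral parameter
ux0 = lambda t: 1.0 / (1.0 + t)       # ad hoc sample boundary datum u_x(0, t)

def rhs(t, a, b):
    # Eq. (25): a' = u_x b,  b' = 8 i xi^3 b + u_x a
    return ux0(t) * b, 8j * XI**3 * b + ux0(t) * a

# RK4 integration from t = 0 to t = 5
a, b = cmath.sqrt(2), 1.0 + 0j        # initial data with |a|^2 - |b|^2 = 1
t, h = 0.0, 1e-3
for _ in range(5000):
    k1 = rhs(t, a, b)
    k2 = rhs(t + h/2, a + h/2 * k1[0], b + h/2 * k1[1])
    k3 = rhs(t + h/2, a + h/2 * k2[0], b + h/2 * k2[1])
    k4 = rhs(t + h, a + h * k3[0], b + h * k3[1])
    a += h/6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    b += h/6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h
print(abs(abs(a)**2 - abs(b)**2 - 1.0) < 1e-6)   # |a|^2 - |b|^2 is conserved
```

The conservation follows directly from (25): d(|a|^2 - |b|^2)/dt = 2u_x Re(\bar a b - \bar b a) = 0 for real u_x and real ξ.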
It is clear that substituting boundary conditions (2) in Eq. (26), we again obtain Eq. (24). It is amusing that the system of equations (22) can be explicitly integrated in the sense that the result does not contain integrals. It can be verified by a direct substitution that the second column of the matrix V is given by

    v_{12}(ξ,t) = \frac{e^{4iξ^3 t} a(ξ,0)\,\bar b(ξ,t) - e^{-4iξ^3 t} a(ξ,t)\,\bar b(ξ,0)}{a(ξ,t)},    v_{22}(ξ,t) = \frac{e^{4iξ^3 t} a(ξ,0)}{a(ξ,t)}.        (27)

Equations (23) also imply that the function m(t,ξ) = e^+(0,ξ,t)\,e^{-4iξ^3 t} is a solution of the second-order equation

    m_{tt} = (w(t) - µ)\,m,        (28)

where w(t) = du_x(0,t)/dt + u_x^2(0,t) and µ = 16ξ^6. The same equation is satisfied by the p_{12} entry of the matrix P in Lemma 3.1.

It can be easily verified that Eq. (24) is invariant under the replacement of ξ with ωξ, where ω is a cubic root of unity. The existence of a discrete symmetry in the evolution equation singles out boundary condition (2) from the general class of boundary conditions for the KdV equation. A more general integrable boundary condition is given by u(0,t) = a_1 and u_{xx}(0,t) = a_2, where a_1 and a_2 are arbitrary real numbers [4]. With this boundary condition imposed, Eq. (26) also acquires a discrete symmetry group, but studying this case is beyond the scope of the present paper. We only note here that there exists a close relation between the compatibility property of the boundary condition and the integrability and discrete symmetries of the evolution equation for the scattering matrix.

It is natural to set z = ξ^3 in Eq. (24). Temporarily assuming the scattering matrix s(ξ,t) to be known for all t > 0, we construct special matrix solutions of Eq. (24) that depend analytically on the parameter z and are called eigenfunctions, or solutions to the scattering problem. For this, we use the fact that simultaneously with the function s(ξ,t), a solution of system (24) is also given by the function s(ωξ,t). Therefore, the function c_0(z,t) composed of the first and second columns s_1(ωξ,t) and s_2(ξ,t) of the respective matrices s(ωξ,t) and s(ξ,t),

    c_0(z,t) = \bigl(s_1(ωξ,t),\, s_2(ξ,t)\bigr),    ω = e^{2πi/3},        (29)
also satisfies Eq. (24). By virtue of the analytic properties of the scattering matrix, the function c_0(z,t) is analytic in z in the upper half-plane and is continuous in its closure.
In other words, the function c_0(z,t) is a solution of the scattering problem for Eq. (24) on the semiaxis t ≥ 0. But the solution of the scattering problem on the semiaxis is not uniquely defined. In what follows, it is convenient to use a different solution of the scattering problem given by

    c_+(z,t) = c_0(z,t)\,e^{-4izσ_3 t} M(z)\,e^{4izσ_3 t},        (30)

where

    M(z) = \begin{pmatrix} \dfrac{a(-ξ)}{\det c_0(z,0)} & 0 \\ -\dfrac{b(ωξ)}{\det c_0(z,0)} & \dfrac{1}{a(-ξ)} \end{pmatrix}

(it can be shown that in view of assumption 2, the determinant of the matrix c_0(z,t) is nonzero for Im z ≥ 0). At t = 0, the function c_+(z,t) has the upper-triangular matrix structure

    c_+(z,0) = \begin{pmatrix} 1 & R(z) \\ 0 & 1 \end{pmatrix},        (31)

where R(z) = b(-ξ)/a(-ξ), -2π/3 ≤ arg ξ ≤ -π/3. It is obvious that Eq. (24) is invariant with respect to the involution given by c(z,t) → σ_1 \overline{c(\bar z,t)}\,σ_1; therefore, the function c_-(z,t) = σ_1 \overline{c_+(\bar z,t)}\,σ_1 is a solution of the same equation and is analytic for Im z < 0. Therefore, the solution of the scattering problem for Eq. (24) is known for all z ∈ C at t = 0. The problem of reconstructing the potential in Eq. (24) is then effectively solved. By definition, we set p(z,t) = c_-^{-1}(z,t)\,c_+(z,t). Being the ratio of two solutions of linear system (24), the function p(z,t) depends on t trivially,

    p(z,t) = e^{-4izσ_3 t}\,p(z,0)\,e^{4izσ_3 t}.        (32)
As a result, the problem of finding the eigenfunctions c_±(z,t) for t > 0 is reduced to the linear conjugation problem (see, e.g., [9] on the linear conjugation problem), with the sought functions c_+(z,t) and c_-(z,t) analytic in the respective domains Im z > 0 and Im z < 0 and continuous and nondegenerate in the closures of these domains. The conjugation condition

    c_+(z,t) = c_-(z,t)\,p(z,t)        (33)

is satisfied on the real axis. The uniqueness of the solution of the problem follows from the additional normalization condition, which in our case takes the form

    c_+(∞,t) = E.        (34)

As can be seen from the formula

    p(z,t) = \begin{pmatrix} 1 & R(z)\,e^{-8izt} \\ -\overline{R(\bar z)}\,e^{8izt} & 1 - |R(z)|^2 \end{pmatrix},        (35)

the conjugation matrix p(z,t) is completely defined by the function R(z), and we therefore consider its properties in detail.
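The structure of Eqs. (32) and (35) can be checked directly: conjugating p(z,0) by e^{4izσ_3 t} only dresses the off-diagonal entries with the factors e^{∓8izt}, and det p ≡ 1. A sketch with an arbitrary sample value R(z) = 0.3 + 0.2i at a sample real point z:

```python
import cmath

# 2x2 complex matrices represented as nested tuples ((p11, p12), (p21, p22))
def mul(A, B):
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2))
                 for i in range(2))

z, t = 0.7, 1.3          # sample real z and time t (ad hoc)
R = 0.3 + 0.2j           # sample value of R(z) with |R| < 1

# p(z, 0) from Eq. (35) at t = 0
p0 = ((1.0, R), (-R.conjugate(), 1.0 - abs(R) ** 2))

# Eq. (32): p(z, t) = e^{-4 i z sigma_3 t} p(z, 0) e^{4 i z sigma_3 t}
e = cmath.exp(-4j * z * t)
E_minus = ((e, 0.0), (0.0, 1.0 / e))     # e^{-4 i z sigma_3 t}
E_plus = ((1.0 / e, 0.0), (0.0, e))      # e^{+4 i z sigma_3 t}
pt = mul(mul(E_minus, p0), E_plus)

# compare with Eq. (35) evaluated at time t
expected = ((1.0, R * cmath.exp(-8j * z * t)),
            (-R.conjugate() * cmath.exp(8j * z * t), 1.0 - abs(R) ** 2))
ok = all(abs(pt[i][j] - expected[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(ok)
det = pt[0][0] * pt[1][1] - pt[0][1] * pt[1][0]
print(abs(det - 1.0) < 1e-12)            # det p = (1 - |R|^2) + |R|^2 = 1
```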
Lemma 3.2. The function R(z) is continuous on the real axis, satisfies the estimate

    |R(z)| < 1    for 0 ≠ z ∈ R,        (36)

and admits the Fourier integral representation

    R(z) = \int_0^\infty e^{izy} k(y)\,dy,        (37)

where k(y) ∈ L_1(0,∞), |k(y)| < ∞, and the function k(y) is differentiable with (1 + |y|)\,k'(y) ∈ L_1(0,∞). In addition, the estimate |R(z)|^2 ≤ 1 - Cz^2/(1 + z^2) is satisfied for all z ∈ R.

Proof. We prove the first statement. From the relation |a(ξ)|^2 - |b(ξ)|^2 = 1, which is valid for all ξ ∈ R, we have |r(0)| ≤ 1 and |r(ξ)| < 1 for ξ ≠ 0. But the function r(ξ) is analytic for Im ξ > 0, continuous for Im ξ ≥ 0, and tends to zero as |ξ| → ∞. By the maximum principle, we therefore have |r(ξ)| < 1 for Im ξ > 0. Together with R(z) = b(-ξ)/a(-ξ) = r(-ξ), this implies Eq. (36). We now go to the second statement. By construction, the function R(z) admits the representation

    R(z) = \int_0^\infty e^{-iξx} q(x)\,dx.        (38)
The function k(y), which is defined via the inverse Fourier transform

    k(y) = \frac{1}{2π} \int_{-\infty}^{+\infty} e^{-izy} R(z)\,dz,        (39)

is continuous and bounded. We now show that it is absolutely summable. For this, we transform the last integral to the form

    2π k(y) = \int_{-\infty}^{+\infty} \int_0^{+\infty} e^{-izy - iξx}\,q(x)\,dx\,dz,    z = ξ^3,        (40)

and change the variables as λ = -ξ y^{1/3}. Then zy = -λ^3, and integral (40) becomes

    2π k(y) = \int_0^{+\infty} \frac{3}{y} \Bigl( \int_{-\infty}^{+\infty} λ^2 e^{i(λ^3 + λs)}\,dλ \Bigr) q(x)\,dx,        (41)
where s = x y^{-1/3}. The inner integral in (41) is closely related to the Airy function: the function v(s) defined by the integral

    v(s) = \int_{-\infty}^{+\infty} e^{i(λ^3 + λs)}\,dλ        (42)

is an Airy function up to scaling and satisfies the equation 3v''(s) = s v(s).
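For the normalization in Eq. (42), without the conventional factor 1/3 in front of λ^3, the integral satisfies 3v''(s) = sv(s) rather than the textbook Airy equation, and it decays exponentially as s → +∞. Both facts can be checked numerically by rotating the two half-axes of integration to the rays arg λ = π/6 and 5π/6, where the integrand decays like e^{-µ^3}; the truncation bound and step size below are ad hoc.

```python
import cmath, math

def v(s, upper=6.0, n=1200):
    # v(s) = \int_{-inf}^{inf} e^{i(lambda^3 + lambda s)} d lambda, evaluated by
    # contour rotation:  v(s) = 2 Re[ e^{i pi/6} \int_0^inf e^{-mu^3 + i s mu e^{i pi/6}} d mu ]
    w = cmath.exp(1j * math.pi / 6)
    h = upper / n
    total = 0 + 0j
    for k in range(n + 1):                         # composite Simpson rule
        mu = k * h
        weight = 1 if k in (0, n) else (4 if k % 2 else 2)
        total += weight * cmath.exp(-mu**3 + 1j * s * mu * w)
    return 2.0 * (w * total * h / 3.0).real

# v satisfies 3 v''(s) = s v(s): check with a central second difference
s, h = 1.0, 1e-2
vpp = (v(s + h) - 2.0 * v(s) + v(s - h)) / h**2
print(abs(3.0 * vpp - s * v(s)) < 1e-3)
# exponential decay at +infinity, an O(1) value at the origin
print(abs(v(10.0)) < 1e-4 and abs(v(0.0)) > 1.0)
```

The rotated representation converges rapidly because |e^{isµe^{iπ/6}}| = e^{-sµ/2} for s ≥ 0, so the decay only improves with growing s.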
Therefore, Eq. (41) can be rewritten as

    2π k(y) = -\int_0^{+\infty} \frac{3}{y}\,v''(s)\,q(x)\,dx = -\int_0^{+\infty} \frac{x}{y^{4/3}}\,v(s)\,q(x)\,dx.        (43)

This implies

    2π \int_0^{+\infty} |k(y)|\,dy ≤ \int_0^{+\infty} dy \int_0^{+\infty} \frac{x}{y^{4/3}}\,|v(s)|\,|q(x)|\,dx.        (44)

In the right-hand side of the inequality, we change the order of integrations and set y = x^3 s^{-3}. As a result, we obtain the inequality

    2π \int_0^{+\infty} |k(y)|\,dy ≤ 3 \int_0^{+\infty} |q(x)|\,dx \int_0^{+\infty} |v(s)|\,ds.        (45)

The function v(s) exponentially decreases at infinity (see [10]),

    v(s) = O\bigl(s^{-1/4} e^{-2 s^{3/2}/(3\sqrt{3})}\bigr),    s → ∞,
and the last integral on the right-hand side therefore converges. It can be similarly verified that the functions zR'(z) = -ξ r'(-ξ)/3 and zR(z) = ξ^3 r(-ξ) are also Fourier integrals of absolutely summable functions. The last inequality in the lemma can be deduced from a similar inequality for r(ξ) by an elementary argument. The lemma is proved.

Theorem 3.1. The linear conjugation problem in Eqs. (32)–(34) is uniquely solvable.

Proof. The problem under consideration is certainly solvable if the real part of the conjugation matrix is positive definite [11]. But in our case, this sufficient condition is not satisfied; more precisely, it is satisfied only in the particular case where |R(z)| < 1 for all z ∈ R. We prove the theorem by reducing the problem to a system of Fredholm integral equations. We represent the c_{ij}(z,t) entry of the matrix c_+(z,t) as the Fourier integral

    c_{ij}(z,t) = δ_{ij} + \int_0^\infty e^{izs} α_{ij}(s,t)\,ds,        (46)

where δ_{ij} is the Kronecker symbol. We substitute this expression in Eq. (33); after simple transformations, we obtain the Fredholm integral equations of the second kind

    α_{11}(θ,t) + \int_0^\infty \bar k(-s-θ+8t)\,\bar α_{21}(s)\,ds = 0,        (47)

    α_{21}(θ,t) + \int_0^\infty \bar k(-s-θ+8t)\,\bar α_{11}(s)\,ds = -\bar k(-θ+8t).        (48)
The kernel \bar k(y) of the convolution operator in Eqs. (47) and (48) is bounded and absolutely summable, and in accordance with the Fréchet–Kolmogorov theorem, the convolution operator is therefore compact in the space L_1(0,∞) (see, e.g., [12]). To prove the unique solvability of the system of equations under consideration, it therefore suffices to verify the uniqueness of the solution of the homogeneous problem. Let a pair of functions α_{11}(s), α_{21}(s) ∈ L_1(0,∞) satisfy the homogeneous equations

    α_{11}(θ) + \int_0^\infty \bar k(-s-θ+8t)\,\bar α_{21}(s)\,ds = 0,        (49)

    α_{21}(θ) + \int_0^\infty \bar k(-s-θ+8t)\,\bar α_{11}(s)\,ds = 0.        (50)

Because the kernel k(y) is bounded, Eqs. (49) and (50) directly imply the estimates |α_{11}| < ∞ and |α_{21}| < ∞, and therefore, obviously, α_{11}(s), α_{21}(s) ∈ L_2(0,∞). We transform the second equation of the system to the form

    α_{21}(y) - \int_0^\infty \bar k(-s-y+8t)\,ds \int_0^\infty k(-s'-s+8t)\,α_{21}(s')\,ds' = 0.
Multiplying it by \bar α_{21}(y) and integrating with respect to y, we obtain

    \int_0^\infty |α_{21}(y)|^2\,dy - \int_0^\infty ds \int_0^\infty \bar α_{21}(y)\,\bar k(-s-y+8t)\,dy \int_0^\infty α_{21}(s')\,k(-s'-s+8t)\,ds' = 0.

The equation thus obtained can be written as

    \int_0^\infty |α_{21}(y)|^2\,dy - \int_0^\infty |c(s)|^2\,ds = 0        (51)

if we set

    c(y) = \int_0^\infty \bar α_{21}(s')\,\bar k(-s'-y+8t)\,ds' = \frac{1}{2π} \int_{-\infty}^{+\infty} \bar c_{21}(z)\,\bar R(z)\,e^{iz(-y+8t)}\,dz.

We next use the Parseval equality

    \int_0^\infty |α_{21}(y)|^2\,dy = \frac{1}{2π} \int_{-\infty}^{+\infty} |c_{21}(z)|^2\,dz,

where c_{21}(z) = \int_0^\infty e^{izs} α_{21}(s)\,ds, and the Bessel inequality

    \int_0^\infty |c(s)|^2\,ds ≤ \frac{1}{2π} \int_{-\infty}^{+\infty} |c_{21}(z)|^2 |R(z)|^2\,dz.

With the last two relations taken into account, Eq. (51) implies the inequality

    \int_{-\infty}^{+\infty} |c_{21}(z)|^2\,dz ≤ \int_{-\infty}^{+\infty} |c_{21}(z)|^2 |R(z)|^2\,dz,

whence we have \int_{-\infty}^{+\infty} (1 - |R(z)|^2)\,|c_{21}(z)|^2\,dz ≤ 0. But 1 - |R(z)|^2 > 0 for z ≠ 0, and the function c_{21} is continuous; hence c_{21}(z) ≡ 0. We have thus proved that the homogeneous equation has only the trivial solution. By the Fredholm alternative, the system of equations (47) and (48) is uniquely solvable. Therefore, the linear conjugation problem in Eqs. (32)–(34) is also solvable. It remains to verify that the condition det c_+(z) ≠ 0 is satisfied for Im z ≥ 0. We take the determinant of both sides of Eq. (33); because det p(z,t) ≡ 1, the boundary values of det c_+ and det c_- coincide on the real axis. The function d(z), equal to det c_+(z) for Im z ≥ 0 and to det \overline{c_+(\bar z)} for Im z < 0, is therefore analytic in the entire complex plane and is equal to one at z = ∞. Therefore, d(z) ≡ 1 by the Liouville theorem.
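The proof above can be illustrated numerically: when the kernel satisfies ∫|k| < 1, the system (47), (48) is even solvable by simple fixed-point iteration, since the map is then a contraction. In the sketch below, the real kernel k(y) = e^{-y}/2 for y ≥ 0 (so the complex conjugations drop out) and the time t = 1 are ad hoc choices; the integrals are discretized by the trapezoid rule.

```python
import math

T = 1.0                      # sample time, so the kernel argument is 8 - s - theta
def k(y):                    # ad hoc real kernel with integral 1/2 < 1
    return 0.5 * math.exp(-y) if y >= 0 else 0.0

H, N = 0.05, 400             # grid theta = 0, H, ..., N*H (truncated at theta = 20)
grid = [j * H for j in range(N + 1)]

def conv(alpha, theta):
    # trapezoid rule for \int_0^inf k(8t - s - theta) alpha(s) ds
    vals = [k(8 * T - s - theta) * a for s, a in zip(grid, alpha)]
    return H * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

a11 = [0.0] * (N + 1)
a21 = [0.0] * (N + 1)
for _ in range(80):          # fixed-point iteration; contraction factor <= 1/2
    a11 = [-conv(a21, th) for th in grid]
    a21 = [-k(8 * T - th) - conv(a11, th) for th in grid]

# residuals of the discretized Eqs. (47) and (48)
r47 = max(abs(a11[j] + conv(a21, grid[j])) for j in range(N + 1))
r48 = max(abs(a21[j] + k(8 * T - grid[j]) + conv(a11, grid[j])) for j in range(N + 1))
print(r47 < 1e-10 and r48 < 1e-10)
print(max(abs(x) for x in a21) < 1.0)   # the solution stays bounded
```

For the actual kernel of Lemma 3.2 only ∫|k| < ∞ is guaranteed, which is why the paper invokes compactness and the Fredholm alternative instead of a contraction argument.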
4. Evaluation of the scattering data for t > 0

On the complex plane ξ ∈ C, we single out six equal sectors I_1, I_2, ..., I_6 using the rays ℓ_j = {ξ : arg ξ = π(j-1)/3} starting at the origin; thus, I_j is the sector confined between the rays ℓ_j and ℓ_{j+1} (see Fig. 1 below). The functions c_±(z,t) constructed in the previous section as solutions to the linear conjugation problem are fundamental solutions to the system of linear equations (24). Therefore, they allow, first, unambiguously defining the function u_x(0,t) as the coefficient of this system of equations and, second, writing explicit formulas for the scattering data a(ξ,t) and b(ξ,t). We express the scattering matrix elements s(ξ,t) through the fundamental solutions of Eq. (24). It is convenient to do this separately for each column s_1(ξ,t) and s_2(ξ,t) of the scattering matrix,

    s_1(ξ,t) = c(z,t)\,e^{-4iztσ_3} c^{-1}(z,0)\,s_1(ξ,0)\,e^{4izt},        (52)

    s_2(ξ,t) = c(z,t)\,e^{-4iztσ_3} c^{-1}(z,0)\,s_2(ξ,0)\,e^{-4izt},        (53)
Fig. 1. The six sectors I_1, ..., I_6 of the complex ξ plane bounded by the rays ℓ_1, ..., ℓ_6.
where c(z,t) is equal to either c_+(z,t) or c_-(z,t) depending on the sector to which the point ξ = z^{1/3} belongs. For example, for ξ ∈ I_2, we have

    s_1(ξ,t) = c_-(z,t)\,e^{-4izt(σ_3-E)} c_-^{-1}(z,0)\,s_1(ξ,0) = \begin{pmatrix} c_{-11}(z,t)\,a(ξ) \\ c_{-21}(z,t)\,a(ξ) \end{pmatrix},        (54)

whereas for ξ ∈ I_1, I_3,

    s_1(ξ,t) = c_+(z,t)\,e^{-4izt(σ_3-E)} c_+^{-1}(z,0)\,s_1(ξ,0).        (55)

Thus, s_1(ξ,t) is an analytic function in each of the sectors I_1–I_3 but is defined by different formulas in different sectors. The condition t ≥ 0 is important precisely here. For t < 0, the function e^{8izt} in Eq. (55) increases in the domain I_1, and the function s_1(ξ,t) is therefore no longer bounded at the ξ infinity. Using Eq. (33) rewritten as

    c_+(z,t) \begin{pmatrix} 1 & -R \\ 0 & e^{8izt} \end{pmatrix} = c_-(z,t) \begin{pmatrix} 1 & 0 \\ -\bar R e^{8izt} & e^{8izt} \end{pmatrix},

we can verify that at the interface boundaries between the sectors I_1, I_2 and I_2, I_3, the right and left limit values of s_1(ξ,t) coincide, and, in view of the analytic continuation principle, the obtained function is therefore analytic everywhere in the upper half-plane Im ξ > 0 and is continuous in the closed half-plane Im ξ ≥ 0, except at the point ξ = 0. In particular, for the refraction and reflection coefficients, we have the expressions

    a(ξ,t) = c_{-11}(z,t)\,a(ξ,0),    b(ξ,t) = c_{-21}(z,t)\,a(ξ,0),

which are valid for ξ ∈ I_2, ξ^3 = z, and t ≥ 0. In the remaining upper half-plane sectors I_1 and I_3, these functions are given by

    a(ξ,t) = c_{+11}(z,t)\bigl(a(ξ,0) - R(z)\,b(ξ,0)\bigr) + c_{+12}(z,t)\,e^{8izt}\,b(ξ,0),
    b(ξ,t) = c_{+21}(z,t)\bigl(a(ξ,0) - R(z)\,b(ξ,0)\bigr) + c_{+22}(z,t)\,e^{8izt}\,b(ξ,0).
The scattering matrix is thus completely determined for all t ≥ 0 by the initial data in Eq. (3). Theorem 3.1 implies that the function c_{-11}(z,t) admits the integral representation

    c_{-11}(z,t) - 1 = \int_0^\infty e^{-izy} k_1(y)\,dy.        (56)

We show that this function also admits the representation

    c_{-11}(z,t) - 1 = \int_0^\infty e^{iξx} q_1(x)\,dx,    z = ξ^3,        (57)
where q_1 ∈ L_1(0,∞). In other words, there exists a map that is inverse to the map defined by Eq. (41). From Eqs. (56) and (57), we find

    2π q_1(x) = \int_{-\infty}^{+\infty} e^{-iξx}\,dξ \int_0^\infty e^{-iξ^3 y} k_1(y)\,dy.

Changing the variable as ξ = -λ y^{-1/3} and setting s = x y^{-1/3}, we obtain

    2π q_1(x) = \int_0^\infty k_1(y)\,y^{-1/3}\,dy \int_{-\infty}^{+\infty} e^{i(λs + λ^3)}\,dλ.

The last integral on the right-hand side coincides with the function v (see (42)), and therefore

    2π q_1(x) = \int_0^\infty k_1(y)\,y^{-1/3}\,v\Bigl(\frac{x}{y^{1/3}}\Bigr)\,dy.

From this, it is easy to obtain an estimate of the L_1 norm of the function q_1,

    2π \int_0^\infty |q_1(x)|\,dx ≤ \int_0^\infty |k_1(y)|\,dy \int_0^\infty |v(s)|\,ds < ∞,
which implies that the function c_{-11}(z,t) can indeed be represented in form (57). A similar statement also holds for c_{-21}(z,t). It hence easily follows that the functions a(ξ,t) and b(ξ,t) admit the representations in Eqs. (12) and (13) for all t ≥ 0.

Lemma 4.1. The condition

    |a(ξ,t)| > 0        (58)

is satisfied for any t ≥ 0 and any ξ in the closed upper half-plane Im ξ ≥ 0.

Proof. For t = 0, the inequality is valid in view of assumption 2. Moreover, for t = 0, there is the condition |a(ξ,0)| > |b(ξ,0)|, which is satisfied for all ξ ≠ 0, Im ξ ≥ 0. This follows because the function r(ξ) = b(ξ)/a(ξ) is analytic for Im ξ > 0 and continuous up to the real axis, along which it satisfies the inequalities |r(ξ)| < 1 for ξ ≠ 0 and |r(0)| ≤ 1. By the maximum principle, |r(ξ)| < 1 for Im ξ > 0. The validity of the relation |a(ξ,t)|^2 - |b(ξ,t)|^2 = |a(ξ,0)|^2 - |b(ξ,0)|^2 for all real ξ ≠ 0 can be easily verified by directly taking the t derivative with Eqs. (25) taken into account; therefore, |a(ξ,t)| > |b(ξ,t)| for 0 ≠ ξ ∈ R.
We now suppose that the equality a(ξ0 , t0 ) = 0 is satisfied for a certain t0 > 0 at some point ξ0 , Im ξ0 > 0. Obviously, there then exists a point t1 , 0 < t1 < t0 , such that |a(0, t1 )| < ∞. In other words, the functions a(ξ, t1 ) and b(ξ, t1 ), which are analytic in the upper half-plane, are continuous down to the real axis and are also bounded at zero. Being solutions to a linear equation, these functions are then bounded at zero for all t. For any t, the inequality |a(ξ, t)| > |b(ξ, t)| is therefore satisfied everywhere on the real axis including the point ξ = 0. Hence, the number of zeros of a(ξ, t) in the upper half-plane cannot change with time, which contradicts the supposition. The lemma is proved. Because the elements of the scattering matrix s(ξ, t) can be expressed through the functions c± (z, t) that have been found in solving the linear conjugation problem in Eqs. (32)–(34) (see Eqs. (52) and (53)), some properties of these functions must be established for what follows. Lemma 4.2. For |z| → ∞, the functions c± (z, t) rapidly tend to the unit matrix E, |c± (z, t) − E| ≤ C|z|−m
∀m > 0,  z ∈ R.  (59)
Proof. The functions c±(z, t) − E satisfy the inhomogeneous linear conjugation problem c+ − E = (c− − E)p + p − E. For a known solution of the corresponding homogeneous problem, the solution is given by reducing the problem to that of the "jump" of an analytic function [13]. We multiply the last equation by c+^{−1} from the right,
(c+ − E)c+^{−1} = (c− − E)c−^{−1} + (p − E)c+^{−1},
whence we have
z^m (c±(z, t) − E) = (1/2πi) ∫_{−∞}^{+∞} [(p(s, t) − E)c+^{−1}(s, t)s^m / (s − (z ± i0))] ds · c±(z, t).  (60)
Because the function (p(s, t) − E)s^m is Hölder-continuous and tends to zero as s → ∞ for any integer m ≥ 0, the integral in (60) converges in the sense of principal value and decreases as z → ∞.
Lemma 4.3. The functions h+(ξ, t) = c+(ξ³, t) and h−(ξ, t) = c−(ξ³, t) are infinitely differentiable in the respective sectors I1, I3, I5 and I2, I4, I6, with the derivatives of these functions being continuous in the closures of these sectors.
Proof. It follows from dp(z, t)/dz = (z^{−2/3}/3) dr(−ξ)/dξ that the function z^{2/3} dp(z, t)/dz is continuous for z ∈ R. We formally differentiate both sides of Eq. (33) with respect to z and transform the result to
z^{2/3} (dc+(z, t)/dz) c+^{−1}(z, t) = z^{2/3} (dc−(z, t)/dz) c−^{−1}(z, t) + z^{2/3} c−(z, t) (dp/dz) c+^{−1}(z, t),
which immediately implies that the functions z^{2/3}(dc±(z, t)/dz)c±^{−1}(z, t) are continuous (as reconstructions of an analytic function from the given "jump" z^{2/3}c−(z, t)(dp/dz)c+^{−1}(z, t), which is Hölder-continuous on the real axis). Therefore, the functions dh±(ξ, t)/dξ = 3z^{2/3} dc±(z, t)/dz are also continuous. That the higher derivatives are also continuous is verified similarly.
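Both lemmas rest on the same mechanism: an analytic function is reconstructed from a prescribed Hölder-continuous "jump" across the real axis by a Cauchy-type integral, and the Sokhotski–Plemelj formulas guarantee that the boundary values differ exactly by the jump. A minimal numerical illustration, with a hypothetical scalar jump g(s) = 1/(1 + s²) standing in for the actual matrix data:

```python
import numpy as np

# Sample Hölder-continuous "jump" with decay at infinity (hypothetical
# scalar stand-in for the matrix data of the conjugation problem):
s = np.linspace(-200.0, 200.0, 400001)
h = s[1] - s[0]
g = 1.0 / (1.0 + s ** 2)

def cauchy(z):
    """Trapezoidal approximation of C(z) = (1/2πi) ∫ g(s)/(s − z) ds, Im z ≠ 0."""
    f = g / (s - z)
    return (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1]) * h / (2j * np.pi)

# Sokhotski–Plemelj: the boundary values satisfy C(x + i0) − C(x − i0) = g(x).
x0, eps = 0.7, 0.05
jump = cauchy(x0 + 1j * eps) - cauchy(x0 - 1j * eps)
err = abs(jump - 1.0 / (1.0 + x0 ** 2))
print(err)  # small: the jump of the Cauchy integral reproduces g
```

The finite eps plays the role of the limits z ± i0; shrinking eps (while refining the grid) drives the discrepancy to zero.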
Proposition 4.1. For all t ≥ 0, the function r(ξ, t) = b(ξ, t)/a(ξ, t) has the following properties:
1. r(ξ, t) is analytic in the domain Im ξ > 0 and is continuous in its closure;
2. r(ξ, t) satisfies the estimate |r(ξ, t)| < 1 for Im ξ ≥ 0, ξ ≠ 0;
3. r(ξ, t) is infinitely differentiable and decreases together with all its derivatives faster than any power of ξ as |ξ| → ∞.
In addition, for any t ≥ 0, the functions a(ξ, t) and b(ξ, t) admit integral representations of form (12) and (13). The proposition follows from Lemmas 4.1–4.3.
5. Proof of the main theorem
For each fixed t > 0, we reconstruct the potential u(x, t) in Eq. (4) from a given ratio of the reflection coefficient to the refraction coefficient, i.e., from the function r(ξ, t) = b(ξ, t)/a(ξ, t). We recall that in view of assumption 2 and Lemma 4.1, Eq. (4) has no discrete spectrum for t ≥ 0. It follows from Proposition 4.1 that all the solvability conditions of the inverse scattering problem (see, e.g., [7]) for the function r(ξ, t) are satisfied. The problem of reconstructing the potential u(x, t) from the scattering data r(ξ, t) is effectively solved, for example, by its reduction to the Gelfand–Levitan–Marchenko linear integral equation. It can be shown that because the given function r(ξ, t) has the additional properties of being smooth and decreasing (as listed in Proposition 4.1), the reconstructed potential u(x, t) is a smooth rapidly decreasing function.
It remains to verify that KdV equation (1), boundary conditions (2), and initial condition (3) are satisfied. Because the KdV equation is the compatibility condition for the overdetermined system in Eqs. (4) and (5), it suffices for verification of conditions (1) to produce a common solution of this system. We first construct eigenfunctions e±(x, ξ) = e±(x, ξ, t) of the first L–A-pair equation for an arbitrary time t > 0. The eigenfunctions e± satisfy the matrix relation
Φ+(x, ξ, t) = Φ−(x, ξ, t)Φ(ξ, t),  ξ ∈ R,  where  Φ(ξ, t) = (1/ā(ξ, t)) ( 1  −b̄(ξ, t); b(ξ, t)  1 ),  (61)
Φ+(x, ξ, t) = Φ−(x, −ξ, t)σ1 = ( e+  e−; e+_x  e−_x ).  (62)
Obviously, the matrix functions Φ±(x, ξ, t) are solutions to the system of equations (20), which is equivalent to Eq. (4), but they do not satisfy system (21), which is equivalent to Eq. (5), the second L–A-pair equation. We must therefore multiply these functions from the right by the factors V(ξ, t) and V−(ξ, t) = σ1V(−ξ, t)σ1, appropriately chosen such that the resulting functions Φ̃+(x, ξ, t) = Φ+(x, ξ, t)V(ξ, t) and Φ̃−(x, ξ, t) = Φ−(x, ξ, t)V−(ξ, t) satisfy both equations. More precisely, we have the following proposition.
Proposition 5.1. For solutions Φ̃±(x, ξ, t) of the system of equations (20) to simultaneously satisfy the system of equations (21), it is sufficient that the matrix function V(ξ, t) satisfy conditions (22).
Proof. The functions V(ξ, t) and V−(ξ, t) are solutions of the respective matrix differential equations (see (22))
dV/dt = XV  and  dV−/dt = X−V−,  (63)
where X−(ξ, t) = σ1X(−ξ, t)σ1 and
X(ξ, t) = ( −4iξ³  0; ux(0, t)/a  4iξ³ − ux(0, t)b/a ).
The t-derivative of the conjugation matrix Φ(ξ, t) is given by
dΦ(ξ, t)/dt = (1/ā²) ( −ux(0, t)b̄  8iξ³āb̄ + ux(0, t)(b̄² + ā²); 8iξ³āb − ux(0, t)(aā + bb̄)  −ux(0, t)b̄ )
and can be represented as
dΦ(ξ, t)/dt = X−(ξ, t)Φ(ξ, t) − Φ(ξ, t)X(ξ, t).  (64)
We differentiate Eq. (61) with respect to t and use Eq. (64). This gives the relation
Φ̇+ = Φ̇−Φ−^{−1}Φ+ + Φ−(X−Φ−^{−1}Φ+ − Φ−^{−1}Φ+X)
(with the dot over a symbol denoting the t derivative), which we consider in the symmetric form
F(x, ξ, t) := Φ̇+Φ+^{−1} + Φ+XΦ+^{−1} = Φ̇−Φ−^{−1} + Φ−X−Φ−^{−1}.
We show that the function F(x, ξ, t) is a polynomial in ξ. Indeed, by construction, the functions Φ±(x, ξ, t)e^{−iσ3xξ} are analytic in the respective half-planes ±Im ξ > 0 and are continuous for ±Im ξ ≥ 0, except at the infinitely remote point, where these functions have a first-order pole. More precisely, the asymptotic representations
Φ±(x, ξ, t) = [ iξ ( 0  0; 1  −1 ) + ( 1  1; q11(1)  −q12(1) ) + Σ_{k=1}^∞ (iξ)^{−k} ( q11(k)  q12(k); q21(k)  q22(k) ) ] e^{iσ3xξ}  (65)
are valid as |ξ| → ∞, ±Im ξ ≥ 0. The functions X and X− have a third-order pole at infinity. Therefore, the function F(x, ξ, t) grows as ξ⁴ at infinity. We now prove that it is regular in the finite part of the complex plane. We rewrite the expression for F as
F(x, ξ, t) := (dΦ̃+/dt) Φ̃+^{−1} = (dΦ̃−/dt) Φ̃−^{−1},  (66)
where Φ̃+ = Φ+V and Φ̃− = Φ−V−. We note that the tilded functions are no longer bounded in the half-planes ±Im ξ > 0, because V(ξ, t) exponentially grows for t > 0 in both the upper and the lower half-planes (see Eqs. (27)).
The determinants det Φ̃±(x, ξ, t) = −2iξa(±ξ, t)v22(±ξ, t) are in general nonzero in the domains ±Im ξ ≥ 0, except in the degenerate case, where |a(0)| < ∞. In that case, we have det Φ̃±(x, 0, t) = 0. We now investigate the nature of the degeneration of these matrices at ξ = 0. We let λ(t) denote the proportionality coefficient between the columns of the matrix Φ̃+(x, 0, t). It is obvious that λ(t) is independent of x; we now show that it is also independent of t. For definiteness, we set x = 0. The condition ξa(ξ, t)|ξ=0 = 0 and Eq. (10) imply that e+_x(0, 0, t) = 0 for t ≥ 0. Therefore, the second row of the matrix Φ̃+(0, 0, t) consists of only zeros, and λ(t) satisfies precisely the one scalar relation
λ(t)e+(0, 0, t) = e+(0, 0, t)v12(0, t) + v22(0, t).
It is now easy to derive the relation dλ/dt = 0 from Eqs. (22) and (23). Taking into account that
v12(ξ, 0) = 0,  v22(ξ, 0) = 1,  (67)
we find λ(t) = λ(0) = 1/e+(0, 0, 0). In the degenerate case, where |a(0, 0)| < ∞ (and the inequality |a(0, t)| < ∞ is then satisfied for any t > 0!), the additional condition
Φ̃+(x, 0, t) (λ, −1)^T = 0  (68)
is therefore satisfied for all times t. But this implies that we then have
(d/dt) Φ̃+(x, 0, t) (λ, −1)^T = 0  (69)
for the derivative, and the function F (x, ξ, t) therefore has no pole at ξ = 0. It now follows from the Liouville theorem that the function F (x, ξ, t) is a fourth-order polynomial in ξ. ˜ −1 ˜ + (x, ξ, t)σ1 and Φ ˜ ∗+ (x, −ξ, t) = σ3 Φ ˜ + (x, −ξ, t) = Φ Next, the evident involutions Φ + (x, ξ, t)σ1 , where ∗ denotes the Hermitian conjugation, imply that 2
F (x, ξ, t) = A4 λ + A2 λ + A0 ,
A4 =
0 0 4 0
,
where λ = ξ 2 and A2 and A0 are matrices with real elements. To further refine the form of F (x, ξ, t), it is most convenient to use the fact that the overdetermined system of equations
yx = Uy,  yt = F(x, ξ, t)y
(where the first equation coincides with (20)) is compatible for all ξ ∈ C. Simple calculations allow verifying that F(x, ξ, t) has the form of the matrix of coefficients for the system of equations (21). Therefore, the functions Φ̃±(x, ξ, t) are common solutions of the system of equations (20) and (21), where the coefficients are given by the function u(x, t) and its derivatives; this implies that u(x, t) is a solution of the KdV equation. It is obvious that the initial condition is satisfied. That the boundary conditions are satisfied follows because the scattering matrix evolves with respect to t in accordance with Eq. (24) (cf. the general equation (26)). This proves the following theorem.
Theorem 5.1. Let initial condition (3) satisfy all the smoothness and rapid decrease conditions listed in assumption 1, and let linear equation (4) with the potential given by the initial function have no discrete spectrum (see assumption 2). Then the initial boundary value problem is uniquely solvable in the class of smooth rapidly decreasing functions. The solution is defined for all t > 0.
Remark. The proof of Theorem 5.1 above shows that the common solution Φ̃± of the system of equations (20) and (21) is not analytic in the half-planes. In other words, the solution of the scattering problem for Eq. (4) does not satisfy Eq. (5).
6. The large-t behavior of the function ux(0, t)
We investigate the behavior of the function ux(0, t) as t → ∞. We first assume that the degenerate case applies, where ξa(ξ, t) → 0 as ξ → 0. It then follows from the results in [14] that the function ux(0, t) constructed in Sec. 4 by solving the Riemann problem in Eqs. (32)–(34) belongs to the class of absolutely summable functions. Indeed, the function R(z) is then in the Wiener class and satisfies the inequality |R(z)| < 1 for all real z, and the principal minors of the conjugation matrix p(z, t) and of the inverse matrix therefore have no zeros for real values of z. But then the condition [14]
∫_0^∞ |ux(0, t)| dt < ∞  (70)
is satisfied. Next, in the general case, the principal minor of the matrix p^{−1}(z, t), which is equal to 1 − |R(z)|², vanishes at z = 0, and the results in [14] are therefore no longer applicable here. We therefore use Eq. (28), for which the function R(z) constitutes the full set of scattering data. Indeed, knowing the solutions c±(z, t) of Riemann problem (32)–(34) that are simultaneously solutions of the system of linear equations (cf. (24))
ct = 4iξ³[c, σ3] + ux(0, t)σ1c,  (71)
it is easy to construct solutions of Eq. (28). For this, it suffices to set
m−(z, t) = (c+11 + c+21)e^{−4izt}  and  m+(z, t) = (c+12 + c+22)e^{4izt}.
Conjugation condition (33) can then be rewritten as the Gelfand–Levitan–Marchenko equation m+(z, t) = m−(z, t)R(z) + m+(−z, t), solving which we obtain eigenfunctions of Eq. (28). It readily follows from the properties of the function R(z) that the potential w(t) satisfies the estimate
∫_0^∞ (1 + t)|w(t)| dt < ∞.  (72)
To determine the function ux(0, t) from the found value of w(t), we must solve the Riccati equation u̇x(0, t) + u²x(0, t) = w(t), which is reduced to the linear second-order equation
ψ̈ = w(t)ψ  (73)
by the substitution ux(0, t) = ψ̇/ψ. Keeping in mind that ux(0, 0) = 0, we find
ψ(0) = 1,  ψ̇(0) = 0.  (74)
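The substitution can be checked directly: if ψ solves the linear equation (73) with conditions (74), then u = ψ̇/ψ solves the Riccati equation with u(0) = 0. A self-contained sketch, with a hypothetical integrable potential w(t) = e^{−t} standing in for the true w, integrates (73) by RK4 and verifies the Riccati residual:

```python
import numpy as np

# Hypothetical integrable stand-in for the potential w(t) of Eq. (73):
w = lambda t: np.exp(-t)

def f(t, y):
    # first-order system for (psi, psi'):  psi'' = w(t) * psi
    return np.array([y[1], w(t) * y[0]])

h, T = 1e-3, 10.0
ts = np.arange(0.0, T + h, h)
psi = np.empty(len(ts))
dpsi = np.empty(len(ts))
psi[0], dpsi[0] = 1.0, 0.0  # conditions (74): psi(0) = 1, psi'(0) = 0
for i in range(len(ts) - 1):
    t, y = ts[i], np.array([psi[i], dpsi[i]])
    k1 = f(t, y); k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2); k4 = f(t + h, y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    psi[i + 1], dpsi[i + 1] = y

u = dpsi / psi  # candidate solution of the Riccati equation, u(0) = 0
j = len(ts) // 2
residual = abs((u[j + 1] - u[j - 1]) / (2 * h) + u[j] ** 2 - w(ts[j]))
print(residual)  # small: u' + u**2 = w holds to discretization accuracy
```

Since w > 0 here, ψ is increasing and never vanishes, so the logarithmic derivative is well defined for all t.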
The fundamental system of solutions to Eq. (73) consists of the functions ψ1(t) and ψ2(t) such that
ψ1(t) = 1 + o(1),  ψ2(t) = t + o(1),  t → ∞,  (75)
with ψ1(t) = m+(0, t), where m+(z, t) is an eigenfunction of Eq. (28) with z² = µ.
Lemma 6.1. The coefficient C2 is nonzero in the expansion ψ(t) = C1ψ1(t, 0) + C2ψ2(t) of the solution ψ(t) with respect to the fundamental system of solutions.
We prove the lemma by supposing the contradiction. Let C2 = 0. We then have ux(0, t) = ψ̇1(t)/ψ1(t), and because ψ1(t) = 1 + o(1), t → ∞, and ψ̇1(t) = −∫_t^∞ w(s)ψ1(s) ds, we find
|ψ̇1(t)| ≤ 2 ∫_t^∞ |w(s)| ds  for t ≫ 1.
Therefore,
∫_t^∞ |ψ̇1(s)| ds ≤ 2 ∫_t^∞ (s − t)|w(s)| ds < ∞.
We then obviously have ux(0, t) ∈ L1(0, ∞) because for t ≫ 1,
∫_t^∞ |ux(0, s)| ds ≤ ∫_t^∞ |ψ̇1(s)/ψ1(s)| ds ≤ 4 ∫_t^∞ (s − t)|w(s)| ds < ∞.
This contradicts the condition |R(0)| = 1. Indeed, it follows from the results in [14] that the scattering problem for the system of equations (71) with an absolutely summable potential has a regular solution. In other words, the function c+(z, t) is such that we necessarily have |R(z)| < 1 for all z ∈ R in Eq. (31). The lemma is proved.
Proposition 6.1. In the general case, i.e., in the case where the condition ξa(ξ) ≠ 0 is satisfied at ξ = 0, the function ux(0, t) has the large-t asymptotic form
ux(0, t) = (1/t)(1 + o(1)),  t → ∞.  (76)
Otherwise, estimate (70) is satisfied.
The validity of the proposition follows from Lemma 6.1. Indeed, in view of the lemma, the coefficient C2 is nonzero in the formula
ux(0, t) = (C1ψ̇1(t, 0) + C2ψ̇2(t)) / (C1ψ1(t, 0) + C2ψ2(t)).  (77)
Together with Eqs. (75), this immediately implies Eq. (76).
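Proposition 6.1 can also be observed numerically: for a potential with (1 + t)w(t) summable and a generic solution (C2 ≠ 0), the logarithmic derivative ψ̇/ψ decays like 1/t. A sketch with the hypothetical choice w(t) = (1 + t)^{−4}:

```python
import numpy as np

# Hypothetical potential with (1 + t) * w(t) absolutely summable:
w = lambda t: (1.0 + t) ** (-4)

def f(t, y):
    return np.array([y[1], w(t) * y[0]])  # psi'' = w(t) * psi as a first-order system

h, T = 0.05, 2000.0
y = np.array([1.0, 0.0])  # psi(0) = 1, psi'(0) = 0, so ux(0, 0) = 0
t = 0.0
while t < T:
    k1 = f(t, y); k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2); k4 = f(t + h, y + h * k3)
    y = y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += h

ratio = t * y[1] / y[0]  # t * psi'(t)/psi(t), expected to approach 1
print(ratio)
```

Here ψ(t) ~ C1 + C2·t with C2 = ∫_0^∞ w(s)ψ(s) ds > 0, so t·ψ̇/ψ → 1, matching the asymptotic form (76).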
7. Discussion of the results
We have considered the initial boundary value problem in Eqs. (1)–(3) in the absence of the discrete spectrum of the associated linear operator. The spectrum of the second operator in the L–A pair taken along the line x = 0 is then also continuous. As a result, the potential ux(0, t) of the second operator is either localized (lies in L1(0, ∞)) or decreases as 1/t depending on whether the inequality |R(z)| < 1 is satisfied everywhere on R or is violated at z = 0.
The problem is considerably complicated if we assume that Eq. (4) has a discrete spectrum at the initial time. We recall the well-known tendency typical of the Cauchy problem: as time grows, the solitonic component of the solution of the KdV equation recedes to the right, whereas the radiation background corresponding to the continuous spectrum moves to the left. By analogy, we can expect that adding solitons to initial condition (3) should not strongly distort the picture. But this seems only partly correct. If the solitons are far from the point x = 0 at the initial time (which is mathematically expressed as the condition that the inequality |R(z)| < 1 holds for all real nonzero z), then
they indeed recede without considerable impact on the behavior of the solution in the neighborhood of the boundary. But if the distance between the soliton and the boundary is small, they actively interact, and a bound state is formed. In that case, the inequality |R(z)| < 1 is violated on a certain subset R0 ⊂ R. The second operator then acquires an eigenvalue at x = 0, and, in addition, the set R0 is a gap in the continuous spectrum of the second operator. We plan to analyze this case in more detail in a forthcoming paper.
Acknowledgments. This work was supported by the Russian Foundation for Basic Research (Grant No. 99-01-00431) and INTAS (Grant No. 99-01782).
REFERENCES
1. V. V. Khablov, Trudy Sem. Sobolev., 2, 137–148 (1979).
2. A. S. Fokas, Selecta Math. (NS), 4, No. 1, 31–68 (1998).
3. B. Gürel, M. Gürses, and I. Habibullin, J. Math. Phys., 36, 6809–6821 (1995).
4. V. E. Adler, I. T. Habibullin, and A. B. Shabat, Theor. Math. Phys., 110, 78–90 (1997).
5. H. E. Moses, J. Math. Phys., 17, No. 1, 73–75 (1976).
6. I. T. Habibullin, Theor. Math. Phys., 119, 712–718 (1999).
7. V. A. Marchenko, Sturm–Liouville Operators and Applications [in Russian], Naukova Dumka, Kiev (1977); English transl., Birkhäuser, Basel (1986).
8. I. Habibullin and A. Vil'danov, "The KdV equation on a half-line," solv-int/9910002 (1999).
9. N. P. Vekua, Systems of Singular Integral Equations and Some Boundary Problems [in Russian] (2nd ed.), Nauka, Moscow (1970); English transl. prev. ed.: Systems of Singular Integral Equations, Noordhoff, Groningen (1967).
10. V. M. Babich and V. S. Buldyrev, Asymptotic Methods in Problems of the Diffraction of Short Waves [in Russian], Nauka, Moscow (1972); English transl.: Short-Wavelength Diffraction Theory: Asymptotic Methods, Springer, Berlin (1991).
11. Yu. L. Shmul'yan, Usp. Mat. Nauk, 9, No. 4, 243–248 (1954).
12. K. Yosida, Functional Analysis, Springer, Berlin (1965).
13. M. A. Lavrent'ev and B. V. Shabat, Methods in the Theory of Functions of a Complex Variable [in Russian], Nauka, Moscow (1973).
14. A. B. Shabat, Diff. Equat., 15, 1299–1307 (1980).