Squared Eigenfunctions for the Sasa-Satsuma Equation

Jianke Yang^1 and D. J. Kaup^2

^1 Department of Mathematics and Statistics, University of Vermont, Burlington, VT 05401
^2 Department of Mathematics, University of Central Florida, Orlando, FL 32816
Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions
which satisfy the linearized equation of an integrable equation. They are needed for various studies
related to integrable equations, such as the development of its soliton perturbation theory. In this
article, squared eigenfunctions are derived for the Sasa-Satsuma equation whose spectral operator
is a 3 × 3 system, while its linearized operator is a 2 × 2 system. It is shown that these squared
eigenfunctions are sums of two terms, where each term is a product of a Jost function and an
adjoint Jost function. The procedure of this derivation consists of two steps: the first is to calculate the variations of the potentials via variations of the scattering data by the Riemann-Hilbert method; the second is to calculate the variations of the scattering data via the variations of the potentials
through elementary calculations. While this procedure has been used before on other integrable
equations, it is shown here, for the first time, that for a general integrable equation, the functions
appearing in these variation relations are precisely the squared eigenfunctions and adjoint squared
eigenfunctions satisfying respectively the linearized equation and the adjoint linearized equation of
the integrable system. This proof clarifies this procedure and provides a unified explanation for
previous results of squared eigenfunctions on individual integrable equations. This procedure uses
primarily the spectral operator of the Lax pair. Thus two equations in the same integrable hierarchy
will share the same squared eigenfunctions (except for a time-dependent factor). In the Appendix,
the squared eigenfunctions are presented for the Manakov equations whose spectral operator is
closely related to that of the Sasa-Satsuma equation.
I. INTRODUCTION
Squared eigenfunctions are quadratic combinations of Jost functions and adjoint Jost functions which satisfy the
linearized equation of an integrable system. This name was derived from the fact that, for many familiar integrable
equations such as the KdV and NLS equations, solutions of the linearized equations are squares of Jost functions of
the Lax pairs [1–7]. Squared eigenfunctions are intimately related to the integrable-equation theory. For instance,
they are eigenfunctions of the recursion operator of integrable equations [8–11]. They also appear as self-consistent
sources of integrable equations [13, 14]. A more important application of squared eigenfunctions is in the direct soliton
perturbation theory, where squared eigenfunctions and their closure relation play a fundamental role [5, 6, 12, 15].
Squared eigenfunctions have been derived for a number of integrable equations including the KdV hierarchy, the
AKNS hierarchy, the derivative NLS hierarchy, the sine-Gordon equation, the massive Thirring model, the Benjamin-Ono equation, the matrix NLS equations, the Kadomtsev-Petviashvili equation, etc. [4, 6, 9–11, 16–18]. Several
techniques have been used in those derivations. One is to notice the commutability relation between the linearized
operator and the recursion operator of the integrable equation [10, 11, 19]. Thus squared eigenfunctions are simply
eigenfunctions of the recursion operator. The drawback of this method is that one has to first derive the recursion
operator of the integrable equation and determine its eigenfunctions, which can be highly non-trivial for many equations (see [20] for instance). Another technique is to first calculate variations of the scattering data via the variations
of the potentials, then use the Wronskian relations to invert this relation to get the variations of the potentials via
variations of the scattering data [4]. This Wronskian-relation technique involves some ingenious steps and could
run into difficulties too if the problem is sufficiently complicated [4]. The third technique is related to the second
one, except that one directly calculates the variations of the potentials via variations of the scattering data by the
Riemann-Hilbert method [4, 6]. This third method is general and conceptually simpler. Regarding the second and
third methods however, one question which was never clarified is why the functions appearing in the variation relations are indeed squared eigenfunctions and adjoint squared eigenfunctions which satisfy respectively the linearized
equation and adjoint linearized equation of an integrable system. In all known examples, this was always found to be
true by direct verifications. But whether and why it would remain true for the general case was not known.
Recently, we were interested in the Sasa-Satsuma equation which is relevant for the propagation of ultra-short
optical pulses [21–23]. This equation is integrable [23]. Two interesting features about this equation are that its
solitons are embedded inside the continuous spectrum of the equation [15, 24], and their shapes can be double-humped for a wide range of soliton parameters [23]. One wonders how these double-humped solitons evolve when the
equation is perturbed — a question which is interesting and significant from both physical and mathematical points
of view. This question can be studied by a soliton perturbation theory. Due to the embedded nature of these solitons,
external perturbations will generally excite continuous-wave radiation which is in resonance with the soliton. So a key
component in the soliton perturbation theory would be to calculate this continuous-wave radiation, for which squared
eigenfunctions are needed (a similar situation occurs in the soliton perturbation theory for the Hirota equation [15]).
A special feature of the Sasa-Satsuma equation is that, while the linearized operator of this equation is 2 × 2, its
spectral operator is 3 × 3. How to build two-component squared eigenfunctions from three-component Jost functions
is an interesting and curious question. As we shall see, only certain components of the 3 × 3 Jost functions and their
adjoints are used to construct the squared eigenfunctions and the adjoint squared eigenfunctions of the Sasa-Satsuma
equation. To calculate these squared eigenfunctions for the Sasa-Satsuma equation, we have tried to directly use
the first method mentioned above. The recursion operator for the Sasa-Satsuma equation has been derived recently
[20]. However, that operator is quite complicated, thus its eigenfunctions are difficult to obtain. The second method
mentioned above is met with difficulties too. Thus we choose the third method for the Sasa-Satsuma equation and
demonstrate here certain advantages of this method.
In this paper, we further develop the third method, using the Sasa-Satsuma equation as an example. We show that
the functions appearing in the expansion of the variations of the potentials are always the squared eigenfunctions
which satisfy the linearized equation of an integrable system, and that the functions appearing in the formulae for
the variations of scattering data are always the adjoint squared eigenfunctions which satisfy the adjoint linearized
equation of an integrable system. In addition, given these two relations between the variations of the potentials and
the variations of scattering data, there naturally follows the closure relation for the squared eigenfunctions and their
adjoints, as well as all the inner-product relations between the squared eigenfunctions and their adjoints. Thus no
longer is it necessary to grind away at calculating these inner products from the asymptotics of the Jost functions.
Rather one can just read off the values of the nonzero inner products from these two variation relations. This clarifies
the long-standing question regarding squared eigenfunctions in connection with the linearized integrable equation,
and streamlines the third method as a general and conceptually-simple procedure for the derivation of squared
eigenfunctions. We apply this method to the Sasa-Satsuma equation, and find that it readily gives the squared
eigenfunctions and adjoint squared eigenfunctions. The squared eigenfunctions for the Sasa-Satsuma equation are
sums of two terms, with each term being a product of a component of a Jost function and a component of an
adjoint Jost function. This two-term structure of squared eigenfunctions is caused by the symmetry properties of
scattering data of the Sasa-Satsuma equation. It should be noted that our derivation uses almost exclusively the
spectral operator of the Lax pair, thus all integrable equations with the same spectral operator (such as the Sasa-Satsuma hierarchy) will share the same set of squared eigenfunctions as we derived here (except for a time-dependent
factor which is equation-specific). An additional benefit of the method we used is that for two integrable equations
with similar spectral operators, the derivation of their squared eigenfunctions will be essentially the same. Thus one
can get squared eigenfunctions for one equation by minor modifications for the other equation. As an example, we
demonstrate in the Appendix how squared eigenfunctions for the Manakov equations can be easily obtained by minor
modifications of our calculations for the Sasa-Satsuma equation.
II. THE RIEMANN-HILBERT PROBLEM
To start our analysis, we first formulate the Riemann-Hilbert problem for the Sasa-Satsuma equation which will be
needed for later calculations. The Sasa-Satsuma equation is

u_t + u_{xxx} + 6|u|^2 u_x + 3u(|u|^2)_x = 0.   (1)

Its spectral (scattering) problem of the Lax pair is [23]

Y_x = −iζΛY + QY,   (2)

where Y is a matrix function, Λ = diag(1, 1, −1),

Q = \begin{pmatrix} 0 & 0 & u \\ 0 & 0 & u^* \\ −u^* & −u & 0 \end{pmatrix}   (3)
is the potential matrix, the superscript “*” represents complex conjugation, and ζ is a spectral parameter. In this
paper, we always assume that the potential u(x) decays to zero sufficiently fast as x → ±∞. Notice that this matrix
Q has two symmetry properties. One is that it is anti-Hermitian, i.e. Q^† = −Q, where the superscript "†" represents the Hermitian conjugate of a matrix. The other one is that

σQσ = Q^*,   where   σ = σ^{−1} = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{pmatrix}.   (4)
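These two symmetries can be checked directly from (3). The following minimal Python sketch (not from the original text; the numerical value of u is an arbitrary illustration) verifies both of them at a single sample point:

import numpy as np

def Q_of_u(u):
    # potential matrix (3) of the Sasa-Satsuma spectral problem
    return np.array([[0, 0, u],
                     [0, 0, np.conj(u)],
                     [-np.conj(u), -u, 0]])

sigma = np.array([[0, 1, 0],
                  [1, 0, 0],
                  [0, 0, 1]])

u = 0.3 + 0.7j                                     # arbitrary sample value of u(x) at one point
Q = Q_of_u(u)
print(np.allclose(Q.conj().T, -Q))                 # anti-Hermiticity: Q^dagger = -Q
print(np.allclose(sigma @ Q @ sigma, Q.conj()))    # symmetry (4): sigma Q sigma = Q^*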
Introducing new variables

J = Y E^{−1},   E = e^{−iζΛx},   (5)

Eq. (2) becomes

J_x = −iζ[Λ, J] + QJ,   (6)

where [Λ, J] = ΛJ − JΛ. The matrix Jost solutions J_±(x, ζ) of Eq. (6) are defined by the asymptotics

J_± → I,   as   x → ±∞,   (7)

where I is the unit matrix. Here the subscripts in J_± refer to which end of the x-axis the boundary conditions are set. Since tr Q = 0, using Abel's formula on Eq. (6), we see that det J_± = 1 for all x. In addition, since J_±E are both solutions of the linear equation (2), they are not independent and are linearly related by the scattering matrix S(ζ) = [s_{ij}(ζ)]:

J_− = J_+ E S E^{−1},   det S = 1   (8)

for real ζ, i.e. ζ ∈ R (for non-real ζ, certain elements of S may not be well defined).
Due to the two symmetry properties of the potential matrix Q, the Jost solutions J_± and the scattering matrix S satisfy corresponding symmetry properties. One is the involution property. Since the potential Q is anti-Hermitian, we see that J_±^†(ζ^*) and J_±^{−1}(ζ) both satisfy the adjoint equation of (6). In addition, we see from Eq. (7) that J_±^†(ζ^*) and J_±^{−1}(ζ) have the same large-x asymptotics, thus they are equal to each other:

J_±^†(ζ^*) = J_±^{−1}(ζ).   (9)

Then in view of Eq. (8), we see that

S^†(ζ^*) = S^{−1}(ζ).   (10)

To derive the other symmetry properties of the Jost solutions and the scattering matrix, we notice that, due to the symmetry (4) of the potential Q, σJ_±^*(−ζ^*)σ also satisfies Eq. (6). Then in view of the asymptotics (7), we find that the Jost solutions possess the following additional symmetry:

J_±(ζ) = σJ_±^*(−ζ^*)σ.   (11)

From this symmetry and the relation (8), we see that the scattering matrix S possesses the additional symmetry

S(ζ) = σS^*(−ζ^*)σ.   (12)
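To illustrate how the scattering matrix and its symmetries can be computed in practice, the following Python sketch integrates the spectral problem (2) across a localized test potential and checks det S = 1 together with the relations (10) and (12) on the real line. The sech-shaped potential, the truncation half-width L and the step count n are arbitrary illustrative choices, not taken from the text.

import numpy as np

def Q_of_u(u):
    # potential matrix (3)
    return np.array([[0, 0, u],
                     [0, 0, np.conj(u)],
                     [-np.conj(u), -u, 0]])

Lam = np.diag([1.0, 1.0, -1.0]).astype(complex)
sigma = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]], dtype=complex)

def E(zeta, x):
    # E = exp(-i zeta Lambda x), Eq. (5)
    return np.diag(np.exp(-1j * zeta * np.diag(Lam) * x))

def scattering_matrix(zeta, u_func, L=20.0, n=4000):
    # Integrate Y_x = (-i zeta Lambda + Q) Y from x = -L to x = L with a fixed-step RK4.
    # Starting from Y(-L) = E(-L) makes Y the solution Phi = J_- E of Eq. (2);
    # since Phi -> E S as x -> +infinity, S is recovered as E(L)^{-1} Y(L).
    h = 2.0 * L / n
    x, Y = -L, E(zeta, -L)
    rhs = lambda x, Y: (-1j * zeta * Lam + Q_of_u(u_func(x))) @ Y
    for _ in range(n):
        k1 = rhs(x, Y)
        k2 = rhs(x + h / 2, Y + h / 2 * k1)
        k3 = rhs(x + h / 2, Y + h / 2 * k2)
        k4 = rhs(x + h, Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return np.linalg.inv(E(zeta, L)) @ Y

u_test = lambda x: 0.8 / np.cosh(x)       # localized test potential (illustration only)
zeta = 0.7                                # a real spectral parameter
S = scattering_matrix(zeta, u_test)
S_minus = scattering_matrix(-zeta, u_test)

print(abs(np.linalg.det(S) - 1))                          # det S = 1, Eq. (8)
print(np.max(np.abs(S.conj().T @ S - np.eye(3))))         # Eq. (10) on the real line: S is unitary
print(np.max(np.abs(S - sigma @ S_minus.conj() @ sigma))) # Eq. (12): S(zeta) = sigma S^*(-zeta) sigma

(All three printed numbers should be small, limited only by the truncation of the x-interval and the integration step.)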
Analytical properties of the Jost solutions play a fundamental role in the Riemann-Hilbert formulation for the scattering problem (6). For convenience, we express J_± as

J_− = ΦE^{−1},   Φ = [φ_1, φ_2, φ_3],   (13)

J_+ = ΨE^{−1},   Ψ = [ψ_1, ψ_2, ψ_3],   (14)

where Φ and Ψ are fundamental solutions of the original spectral problem (2) and are related by the scattering matrix as

Φ = ΨS.   (15)
Then in view of the boundary conditions (7), the spectral equation (6) for J_± can be rewritten into the following Volterra-type integral equations:

J_−(ζ; x) = I + ∫_{−∞}^{x} e^{iζΛ(y−x)} Q(y) J_−(ζ; y) e^{iζΛ(x−y)} dy,   (16)

J_+(ζ; x) = I − ∫_{x}^{∞} e^{iζΛ(y−x)} Q(y) J_+(ζ; y) e^{iζΛ(x−y)} dy.   (17)

These integral equations always have solutions when the integrals on their right-hand sides converge. Due to the structure (3) of the potential Q, we easily see that Eq. (16) for the first and second columns of J_− contains only the exponential factor e^{iζ(x−y)}, which decays when ζ is in the upper half plane C_+, and Eq. (17) for the third column of J_+ contains only the exponential factor e^{iζ(y−x)}, which also falls off for ζ ∈ C_+. Thus these three columns can be analytically extended to ζ ∈ C_+. In other words, the Jost solutions

P_+ = [φ_1, φ_2, ψ_3] e^{iζΛx} = J_− H_1 + J_+ H_2   (18)

are analytic in ζ ∈ C_+, where

H_1 = diag(1, 1, 0),   H_2 = diag(0, 0, 1).   (19)

Here the subscript in P refers to which half plane the functions are analytic in. From the Volterra integral equations for P_+, we see that

P_+(x, ζ) → I   as   ζ ∈ C_+ → ∞.   (20)

Similarly, the Jost functions [ψ_1, ψ_2, φ_3] e^{iζΛx} are analytic in ζ ∈ C_−, and their large-ζ asymptotics is

[ψ_1, ψ_2, φ_3] e^{iζΛx} → I   as   ζ ∈ C_− → ∞.   (21)
To obtain the analytic counterpart of P_+ in C_−, we consider the adjoint spectral equation of (6):

K_x = −iζ[Λ, K] − KQ.   (22)

The inverse matrices J_±^{−1} satisfy this adjoint equation. Notice that

J_−^{−1} = EΦ^{−1},   J_+^{−1} = EΨ^{−1}.   (23)

Let us express Φ^{−1} and Ψ^{−1} as collections of rows,

Φ^{−1} ≡ Φ̄ = \begin{pmatrix} φ̄_1 \\ φ̄_2 \\ φ̄_3 \end{pmatrix},   Ψ^{−1} ≡ Ψ̄ = \begin{pmatrix} ψ̄_1 \\ ψ̄_2 \\ ψ̄_3 \end{pmatrix},   (24)

where the overbar refers to the adjoint quantity. Then, by techniques similar to those used above, we can show that the first and second rows of J_−^{−1} and the third row of J_+^{−1} are analytic in ζ ∈ C_−, i.e. the adjoint Jost solutions

P_− = e^{−iζΛx} \begin{pmatrix} φ̄_1 \\ φ̄_2 \\ ψ̄_3 \end{pmatrix} = H_1 J_−^{−1} + H_2 J_+^{−1}   (25)

are analytic in ζ ∈ C_−. In addition, their large-ζ asymptotics is

P_−(x, ζ) → I   as   ζ ∈ C_− → ∞.   (26)

Similarly, the first and second rows of J_+^{−1} and the third row of J_−^{−1}, i.e. e^{−iζx}ψ̄_1, e^{−iζx}ψ̄_2, and e^{iζx}φ̄_3, are analytic in ζ ∈ C_+, and their large-ζ asymptotics is

e^{−iζΛx} \begin{pmatrix} ψ̄_1 \\ ψ̄_2 \\ φ̄_3 \end{pmatrix} → I   as   ζ ∈ C_+ → ∞.   (27)
In view of the involution property (9) of J_±, we see that the analytic solutions P_± satisfy the involution property as well:

P_+^†(ζ^*) = P_−(ζ).   (28)

This property can be taken as a definition of the analytic function P_− from the known analytic function P_+.
The analytic properties of the Jost functions described above have immediate implications for the analytic properties of the scattering matrix S. Let us denote

S^{−1}(ζ) ≡ S̄(ζ) = [s̄_{ij}(ζ)],   (29)

then since

S̄ = Φ^{−1}Ψ = \begin{pmatrix} φ̄_1 \\ φ̄_2 \\ φ̄_3 \end{pmatrix} [ψ_1, ψ_2, ψ_3],   S = Ψ^{−1}Φ = \begin{pmatrix} ψ̄_1 \\ ψ̄_2 \\ ψ̄_3 \end{pmatrix} [φ_1, φ_2, φ_3],   (30)

we see immediately that s_{11}, s_{12}, s_{21}, s_{22} and s̄_{33} can be analytically extended to the upper half plane ζ ∈ C_+, while s̄_{11}, s̄_{12}, s̄_{21}, s̄_{22} and s_{33} can be analytically extended to the lower half plane ζ ∈ C_−. In addition, their large-ζ asymptotics are

\begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix} → I,   s̄_{33} → 1,   as   ζ ∈ C_+ → ∞,   (31)

and

\begin{pmatrix} s̄_{11} & s̄_{12} \\ s̄_{21} & s̄_{22} \end{pmatrix} → I,   s_{33} → 1,   as   ζ ∈ C_− → ∞.   (32)
Hence we have constructed two matrix functions P_+ and P_− which are analytic in C_+ and C_− respectively. On the real line, using Eqs. (8), (18), (25), and (29), we easily see that

P_−(ζ)P_+(ζ) = G(ζ),   ζ ∈ R,   (33)

where

G = E(H_1 + H_2 S)(H_1 + S^{−1}H_2)E^{−1} = E \begin{pmatrix} 1 & 0 & s̄_{13} \\ 0 & 1 & s̄_{23} \\ s_{31} & s_{32} & 1 \end{pmatrix} E^{−1}.   (34)

Equation (33) determines a matrix Riemann-Hilbert problem. The normalization condition for this Riemann-Hilbert problem can be seen from (20) and (26) to be

P_±(x, ζ) → I   as   ζ ∈ C_± → ∞.   (35)

If this problem can be solved, then the potential Q can be reconstructed from an asymptotic expansion of its solution for large ζ. Indeed, writing P_+ as

P_+(x, ζ) = I + ζ^{−1}P^{(1)}(x) + ζ^{−2}P^{(2)}(x) + O(ζ^{−3}),   (36)

inserting it into Eq. (6) and comparing terms of the same order in ζ^{−1}, we find at O(1) that

Q = i[Λ, P^{(1)}].   (37)

Thus the potential Q can be reconstructed from P^{(1)}. At O(ζ^{−1}), we find that

P^{(1)}_x = −i[Λ, P^{(2)}] + QP^{(1)}.   (38)
From the above two equations as well as the large-x asymptotics of P_+(x, ζ) from Eq. (7), we see that the full matrix P^{(1)} is

P^{(1)}(x) = \frac{1}{2i} \begin{pmatrix} ∫_{−∞}^{x} |u(y)|^2 dy & ∫_{−∞}^{x} u^2(y) dy & u(x) \\ ∫_{−∞}^{x} u^{*2}(y) dy & ∫_{−∞}^{x} |u(y)|^2 dy & u^*(x) \\ u^*(x) & u(x) & 2∫_{x}^{∞} |u(y)|^2 dy \end{pmatrix}.   (39)
This matrix, together with Eq. (36), gives the leading-order asymptotic expansions for the analytical functions P+ .
Leading-order asymptotic expansions for P− can be similarly derived.
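As a quick consistency check of (37) with (39), spelled out here for convenience: since Λ = diag(1, 1, −1), the commutator acts entrywise as

i[Λ, P^{(1)}]_{jk} = i(λ_j − λ_k)P^{(1)}_{jk},

so the diagonal entries and the (1,2), (2,1) entries of P^{(1)} drop out, while

i[Λ, P^{(1)}]_{13} = 2i · u(x)/(2i) = u(x),   i[Λ, P^{(1)}]_{31} = −2i · u^*(x)/(2i) = −u^*(x),

and similarly the (2,3) and (3,2) entries give u^*(x) and −u(x), in agreement with the structure (3) of Q.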
Elements of the scattering matrix S are not all independent. Indeed, since the potential matrix Q contains only one independent function u(x), the matrix S should contain only one independent element as well. To get the dependence of the scattering coefficients, we first notice from the equation SS̄ = I that

s_{33}(ζ)s̄_{33}(ζ) = 1 − s_{31}(ζ)s̄_{13}(ζ) − s_{32}(ζ)s̄_{23}(ζ),   ζ ∈ R.   (40)

Since s_{33} and s̄_{33} are analytic in C_− and C_+ respectively, the above equation defines a (scalar) Riemann-Hilbert problem for (s_{33}, s̄_{33}), under the canonical normalization conditions which can be seen from (31)-(32). Thus (s_{33}, s̄_{33}) can be uniquely determined by the locations of their zeros in C_± together with the scattering data (s_{31}, s_{32}, s̄_{13}, s̄_{23}) on the real line ζ ∈ R through the Plemelj formula. Similarly, from the equation SS̄ = I we have

\begin{pmatrix} s̄_{11} & s̄_{12} \\ s̄_{21} & s̄_{22} \end{pmatrix} \begin{pmatrix} s_{11} & s_{12} \\ s_{21} & s_{22} \end{pmatrix} = I − \begin{pmatrix} s̄_{13} \\ s̄_{23} \end{pmatrix} (s_{31}, s_{32}),   ζ ∈ R,   (41)

which defines another (matrix) Riemann-Hilbert problem for (s̄_{11}, s̄_{12}, s̄_{21}, s̄_{22}) and (s_{11}, s_{12}, s_{21}, s_{22}), under the canonical normalization conditions which can be seen from (31)-(32). Thus these analytic scattering elements can also be determined by the scattering data (s_{31}, s_{32}, s̄_{13}, s̄_{23}) on the real line ζ ∈ R. The scattering data (s_{31}, s_{32}, s̄_{13}, s̄_{23}) themselves are related to each other by the symmetry relations (10) and (12). Specifically, for ξ ∈ R, s̄_{13}(ξ) = s_{31}^*(ξ), s̄_{23}(ξ) = s_{32}^*(ξ), and s_{32}(ξ) = s_{31}^*(−ξ). Thus the scattering matrix S indeed contains only a single independent element, as we would expect.
In the next section, we will derive squared eigenfunctions for the Sasa-Satsuma equation. For the convenience of that derivation, we introduce the following notations:

ρ_1 ≡ s_{31}/s_{33},   ρ_2 ≡ s_{32}/s_{33},   ρ̄_1 ≡ s̄_{13}/s̄_{33},   ρ̄_2 ≡ s̄_{23}/s̄_{33}.   (42)

Due to the symmetry conditions (10) and (12) of the scattering matrix S, ρ_k and ρ̄_k satisfy the symmetry conditions

ρ̄_k(ζ) = ρ_k^*(ζ), k = 1, 2;   ρ̄_1(ζ) = ρ_2(−ζ),   ρ̄_2(ζ) = ρ_1(−ζ),   ζ ∈ R.   (43)
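For the reader's convenience, here is how (43) follows from (10) and (12); these steps are implicit in the text. From (10), for real ξ one has s̄_{ij}(ξ) = s_{ji}^*(ξ), so

ρ̄_1(ξ) = s̄_{13}(ξ)/s̄_{33}(ξ) = s_{31}^*(ξ)/s_{33}^*(ξ) = ρ_1^*(ξ),

and similarly ρ̄_2(ξ) = ρ_2^*(ξ). From (12), s_{31}(ξ) = s_{32}^*(−ξ) and s_{33}(ξ) = s_{33}^*(−ξ), hence

ρ_1(ξ) = s_{32}^*(−ξ)/s_{33}^*(−ξ) = ρ_2^*(−ξ),

which, combined with the first relation, gives ρ̄_1(ξ) = ρ_1^*(ξ) = ρ_2(−ξ), and likewise ρ̄_2(ξ) = ρ_1(−ξ).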
In the next section, symmetry properties of Φ and Ψ will also be needed. Due to the symmetry conditions (9) and (11) of the Jost solutions, Φ and Ψ satisfy the symmetry conditions

Φ^†(ζ^*) = Φ^{−1}(ζ),   Ψ^†(ζ^*) = Ψ^{−1}(ζ),   (44)

and

Φ(ζ) = σΦ^*(−ζ^*)σ,   Ψ(ζ) = σΨ^*(−ζ^*)σ.   (45)

Symmetry (44) means that

φ_k^†(ζ^*) = φ̄_k(ζ),   ψ_k^†(ζ^*) = ψ̄_k(ζ),   k = 1, 2, 3,   (46)

while symmetry (45), together with (46), means that

σφ_1(ζ) = φ̄_2^T(−ζ),   σφ_2(ζ) = φ̄_1^T(−ζ),   σφ_3(ζ) = φ̄_3^T(−ζ).   (47)

Here the superscript "T" represents the transpose of a matrix. These symmetry properties will be important for deriving the final expressions of the squared eigenfunctions for the Sasa-Satsuma equation in the next section.
III. SQUARED EIGENFUNCTIONS AND THEIR CLOSURE RELATION
In this section, we calculate the variation of the potential via variations of the scattering data, then calculate
variations of the scattering data via the variation of the potential. The first step will yield squared eigenfunctions,
and it will be done by the Riemann-Hilbert method. The second step will yield adjoint squared eigenfunctions, and
it will be done using basic relations of the spectral problem (2). For the ease of presentation, we first assume that s33
and s̄33 have no zeros in their respective planes of analyticity, i.e. the spectral problem (2) has no discrete eigenvalues.
This facilitates the derivation of squared eigenfunctions. The results for the general case where s_{33} and s̄_{33} have zeros will be given at the end of this section.
A. Variation of the potential and squared eigenfunctions
In this subsection, we derive the variation of the potential via variations of the scattering data, which will readily
yield the squared eigenfunctions satisfying the linearized Sasa-Satsuma equation. Our derivation will be based on the
Riemann-Hilbert method.
To proceed, we define the following matrix functions:

F_+ = P_+ diag(1, 1, 1/s̄_{33}),   F_− = P_−^{−1} diag(1, 1, s_{33}).   (48)

The reason for introducing the diagonal matrices with s_{33} and s̄_{33} in F_± is to obtain a new Riemann-Hilbert problem (49) with a connection matrix G̃ which depends on ρ_k and ρ̄_k rather than on s_{ij} and s̄_{ij}. This way, the variation of the potential will be expressed in terms of variations in ρ_k and ρ̄_k. When s̄_{33} and s_{33} have no zeros in C_+ and C_− respectively, then F_± as well as F_±^{−1} are analytic in C_±. On the real line, they are related by

F_+(ζ) = F_−(ζ)G̃(ζ),   ζ ∈ R,   (49)

where

G̃ = diag(1, 1, 1/s_{33}) G diag(1, 1, 1/s̄_{33}) = E \begin{pmatrix} 1 & 0 & ρ̄_1 \\ 0 & 1 & ρ̄_2 \\ ρ_1 & ρ_2 & 1 + ρ_1ρ̄_1 + ρ_2ρ̄_2 \end{pmatrix} E^{−1}.   (50)

Here the relation (40) has been used. Eq. (49) defines a regular Riemann-Hilbert problem (i.e. one without zeros).
(50)
Here the relation (40) has been used. Eq. (49) defines a regular Riemann-Hilbert problem (i.e. without zeros).
Next we take the variation of the Riemann-Hilbert problem (49), and get
δF+ = δF− G̃ + F− δ G̃,
ζ ∈ R.
(51)
Utilizing Eq. (49), we can rewrite the above equation as
δF+ F+−1 = δF− F−−1 + F− δ G̃F+−1 ,
ζ ∈ R,
(52)
which defines yet another regular Riemann-Hilbert problem for δF F −1 . Unlike the previous Riemann-Hilbert problems
(33) and (49) which were in matrix product forms, the present Riemann-Hilbert problem (52) can be explicitly solved.
Using the Plemelj formula, the general solution of this Riemann-Hilbert problem is
Z ∞
Π(ξ; x)
1
dξ,
(53)
δF F −1 (ζ; x) = A0 (x) +
2πi −∞ ξ − ζ
where
Π(ξ; x) ≡ F− (ξ; x) δ G̃(ξ; x) F+−1 (ξ; x),
ξ ∈ R.
(54)
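The Plemelj formula used in (53) states that the Cauchy integral of a function defined on the real line has boundary values from C_± whose difference reproduces that function. The following toy Python sketch checks this jump numerically; the scalar Lorentzian stands in for the actual matrix Π purely for illustration, and the grid parameters are arbitrary.

import numpy as np

Pi = lambda xi: 1.0 / (xi**2 + 1.0)          # toy scalar "jump" function (illustration only)

xi = np.linspace(-50.0, 50.0, 1_000_001)     # quadrature grid on the real line
dxi = xi[1] - xi[0]

def cauchy(zeta):
    # (1 / 2 pi i) * integral of Pi(xi) / (xi - zeta) d xi, as in Eq. (53) with A_0 = 0
    return np.sum(Pi(xi) / (xi - zeta)) * dxi / (2j * np.pi)

x0, eps = 0.3, 5e-3
jump = cauchy(x0 + 1j * eps) - cauchy(x0 - 1j * eps)
print(jump.real, Pi(x0))                     # the jump across the real axis recovers Pi(x0)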
Now we consider the large-ζ asymptotics of this solution. Notice from the expansions (36), (31) and the relation (48) that as ζ → ∞,

F_+(ζ; x) → \begin{pmatrix} 1 + O(1/ζ) & O(1/ζ) & u(x)/(2iζ) \\ O(1/ζ) & 1 + O(1/ζ) & u^*(x)/(2iζ) \\ u^*(x)/(2iζ) & u(x)/(2iζ) & 1 + O(1/ζ) \end{pmatrix},   δF_+(ζ; x) → \begin{pmatrix} O(1/ζ) & O(1/ζ) & δu(x)/(2iζ) \\ O(1/ζ) & O(1/ζ) & δu^*(x)/(2iζ) \\ δu^*(x)/(2iζ) & δu(x)/(2iζ) & O(1/ζ) \end{pmatrix}.   (55)

In addition, it is easy to see that

∫_{−∞}^{∞} Π(ξ; x)/(ξ − ζ) dξ → −(1/ζ) ∫_{−∞}^{∞} Π(ξ; x) dξ,   ζ → ∞.   (56)

When these large-ζ expansions are substituted into Eq. (53), at O(1) we find that A_0(x) = 0. At O(ζ^{−1}), we get

δu(x) = −(1/π) ∫_{−∞}^{∞} Π_{13}(ξ; x) dξ,   δu^*(x) = −(1/π) ∫_{−∞}^{∞} Π_{31}(ξ; x) dξ.   (57)
Now we calculate the elements Π_{13} and Π_{31}. From Eq. (50), we see that

δG̃ = E \begin{pmatrix} 0 & 0 & δρ̄_1 \\ 0 & 0 & δρ̄_2 \\ δρ_1 & δρ_2 & ρ_1δρ̄_1 + ρ̄_1δρ_1 + ρ_2δρ̄_2 + ρ̄_2δρ_2 \end{pmatrix} E^{−1}.   (58)

Then in view of the F_± definitions (48) and the P_± expressions (18), (25), we find from (54) that

Π(ζ; x) = J_− E(H_1 + H_2S)^{−1} diag(1, 1, s_{33}) E^{−1} δG̃ E diag(1, 1, s̄_{33})(H_1 + S^{−1}H_2)^{−1} E^{−1} J_−^{−1},   (59)

which simplifies to

Π(ζ; x) = Φ \begin{pmatrix} 0 & 0 & δρ̄_1 \\ 0 & 0 & δρ̄_2 \\ δρ_1 & δρ_2 & 0 \end{pmatrix} Φ̄.   (60)

This relation, together with Eq. (57), shows that the variation of the potential δu can be expanded into quadratic combinations of the Jost solutions Φ and the adjoint Jost solutions Φ̄. This relation is generic in integrable systems.
Now we calculate explicit expressions for δu. Inserting (60) into (57) and recalling our notations (24), we readily find that

δu = −(1/π) ∫_{−∞}^{∞} [ φ_{31}φ̄_{13} δρ_1 + φ_{31}φ̄_{23} δρ_2 + φ_{11}φ̄_{33} δρ̄_1 + φ_{21}φ̄_{33} δρ̄_2 ] dξ.   (61)

Here the notations are

φ_k = (φ_{k1}, φ_{k2}, φ_{k3})^T,   ψ_k = (ψ_{k1}, ψ_{k2}, ψ_{k3})^T,   φ̄_k = (φ̄_{k1}, φ̄_{k2}, φ̄_{k3}),   ψ̄_k = (ψ̄_{k1}, ψ̄_{k2}, ψ̄_{k3}),   k = 1, 2, 3.   (62)

Utilizing the symmetry relations (43) and (47), the above δu formula reduces to

δu = −(1/π) ∫_{−∞}^{∞} [ (φ_{31}φ̄_{13} + φ_{33}φ̄_{12}) δρ_1 + (φ_{11}φ̄_{33} + φ_{13}φ̄_{32}) δρ̄_1 ] dξ.   (63)
This is an important step in our derivation, where the symmetry conditions play an important role. Regarding δu^*, its formula can be obtained by taking the complex conjugate of the above equation and simplifying it with the symmetry relations (46). Defining the functions

Z_1 = (φ_{31}φ̄_{13} + φ_{33}φ̄_{12}, φ_{32}φ̄_{13} + φ_{33}φ̄_{11})^T,   Z_2 = (φ_{11}φ̄_{33} + φ_{13}φ̄_{32}, φ_{12}φ̄_{33} + φ_{13}φ̄_{31})^T,   (64)

the final expression for the variation of the potential (δu, δu^*)^T is

(δu(x), δu^*(x))^T = −(1/π) ∫_{−∞}^{∞} [ Z_1(ξ; x)δρ_1(ξ) + Z_2(ξ; x)δρ̄_1(ξ) ] dξ.   (65)
Notice here that due to the symmetry relations (46), Z2 (ζ) is equal to Z1∗ (ζ ∗ ) with its two components swapped. Also
notice that Z1 and Z2 are the sum of two terms, where each term is a product of a component of a Jost function and a
component of an adjoint Jost function. The two-term feature of these functions is caused by the symmetry properties
of the scattering data and Jost functions, which enable us to combine terms together in the general expansion (61). In
the massive Thirring model, the counterparts of functions Z1,2 are also sums of two terms [4]. The two-term structure
there is not due to symmetry properties, but rather due to the asymptotics of its Jost functions in the spectral plane.
The feature of each term in Z1 and Z2 being a product between a Jost function and an adjoint Jost function, on the
other hand, is a generic feature in integrable systems. In previous studies on many integrable equations (such as the
KdV, NLS, derivative NLS, and massive Thirring equations), it was found that these functions were often “squares”
or products of Jost functions themselves (thus the name “squared eigenfunctions”) [1, 3, 4, 16]. This was so simply
because the spectral operators in those systems were 2 × 2, for which the adjoint Jost functions (rows of the inverse
of the Jost-function matrix) are directly proportional to Jost functions themselves. That is not generic however, and
does not hold for the Sasa-Satsuma equation, or in general for integrable equations whose spectral operator is 3 × 3
or higher.
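To make the structure of (64) concrete, the following Python sketch assembles Z_1 and Z_2 from a numerically computed Jost solution Φ and checks the property noted above, namely that on the real line Z_2 equals Z_1^* with its two components swapped. The sech test potential, the observation point and the numerical parameters are illustrative choices, not taken from the text.

import numpy as np

def Q_of_u(u):
    # potential matrix (3)
    return np.array([[0, 0, u],
                     [0, 0, np.conj(u)],
                     [-np.conj(u), -u, 0]])

Lam = np.diag([1.0, 1.0, -1.0]).astype(complex)

def Phi(zeta, x_end, u_func, L=20.0, n=4000):
    # Jost solution Phi = J_- E of Eq. (2): integrate from x = -L (where Phi ~ E) up to x_end with RK4.
    h = (x_end + L) / n
    x = -L
    Y = np.diag(np.exp(-1j * zeta * np.diag(Lam) * x))
    rhs = lambda x, Y: (-1j * zeta * Lam + Q_of_u(u_func(x))) @ Y
    for _ in range(n):
        k1 = rhs(x, Y)
        k2 = rhs(x + h / 2, Y + h / 2 * k1)
        k3 = rhs(x + h / 2, Y + h / 2 * k2)
        k4 = rhs(x + h, Y + h * k3)
        Y = Y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        x += h
    return Y

u_test = lambda x: 0.8 / np.cosh(x)   # localized test potential (illustration only)
zeta, x0 = 0.7, 0.5                   # real spectral parameter and observation point

P = Phi(zeta, x0, u_test)             # P[j-1, k-1] = phi_{kj}: j-th component of the k-th column phi_k
Pb = np.linalg.inv(P)                 # Pb[k-1, j-1] = phibar_{kj}: j-th entry of the k-th row of Phi^{-1}

Z1 = np.array([P[0, 2] * Pb[0, 2] + P[2, 2] * Pb[0, 1],
               P[1, 2] * Pb[0, 2] + P[2, 2] * Pb[0, 0]])   # first function in Eq. (64)
Z2 = np.array([P[0, 0] * Pb[2, 2] + P[2, 0] * Pb[2, 1],
               P[1, 0] * Pb[2, 2] + P[2, 0] * Pb[2, 0]])   # second function in Eq. (64)

print(np.max(np.abs(Z2 - np.conj(Z1[::-1]))))   # Z2 = Z1^* with components swapped (real zeta)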
The derivation of (65) for the expansion of (δu, δu^*)^T is an important result of this subsection. It readily gives the variations in the Sasa-Satsuma fields in terms of variations in the initial data. To show this, we first restore the time dependence in equation (65). An important fact to notice here is that the Jost solutions Φ as defined in (13) do not satisfy the time evolution equation of the Lax pair of the Sasa-Satsuma equation. Indeed, this time evolution equation of the Lax pair is [23]

Y_t = −4iζ^3ΛY + V(ζ, u)Y,   (66)

where the matrix V goes to zero as x approaches infinity. Obviously the large-x asymptotics of Φ [see (7) and (13)] cannot satisfy the above equation as |x| → ∞ where V vanishes. But this problem can be easily fixed. Defining the "time-dependent" Jost functions

Φ^{(t)} = Φ e^{−4iζ^3Λt},   (67)

these functions satisfy both parts of the Lax pair, (2) and (66). The reason they now satisfy the time evolution equation (66) is that they satisfy the asymptotic form of (66) as |x| → ∞ as well as the compatibility relation of the Lax pair. Similarly, we define the "time-dependent" adjoint Jost functions as

Φ̄^{(t)} = e^{4iζ^3Λt} Φ̄,   (68)

which satisfy both adjoint equations of the Lax pair. Now we use these new Jost functions and adjoint Jost functions to replace those in the definitions (64) and get the "time-dependent" (Z_1, Z_2) functions

Z_1^{(t)} = (φ_{31}^{(t)}φ̄_{13}^{(t)} + φ_{33}^{(t)}φ̄_{12}^{(t)}, φ_{32}^{(t)}φ̄_{13}^{(t)} + φ_{33}^{(t)}φ̄_{11}^{(t)})^T,   Z_2^{(t)} = (φ_{11}^{(t)}φ̄_{33}^{(t)} + φ_{13}^{(t)}φ̄_{32}^{(t)}, φ_{12}^{(t)}φ̄_{33}^{(t)} + φ_{13}^{(t)}φ̄_{31}^{(t)})^T.   (69)

In view of the relations (67) and (68), we see that these "time-dependent" (Z_1, Z_2) functions are related to the original ones as

Z_1^{(t)} = Z_1 e^{8iζ^3t},   Z_2^{(t)} = Z_2 e^{−8iζ^3t}.   (70)
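The scaling factors in (70) can be read off directly from (67)-(68): right-multiplying Φ by e^{−4iζ^3Λt} rescales its columns, while left-multiplying Φ̄ by e^{4iζ^3Λt} rescales its rows. Explicitly,

φ_3^{(t)} = φ_3 e^{4iζ^3t},   φ̄_1^{(t)} = e^{4iζ^3t} φ̄_1,   φ_1^{(t)} = φ_1 e^{−4iζ^3t},   φ̄_3^{(t)} = e^{−4iζ^3t} φ̄_3,

so every product entering Z_1 (a component of φ_3 times a component of φ̄_1) acquires the factor e^{8iζ^3t}, while every product entering Z_2 (a component of φ_1 times a component of φ̄_3) acquires e^{−8iζ^3t}.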
Another fact we need to notice, in conjunction with the time-restored equation (65), is that for the Sasa-Satsuma equation the time evolution of the scattering matrix S is given by [23]

S_t = −4iξ^3[Λ, S].   (71)

Thus

s_{33}′(t) = 0,   s_{31}′(t) = 8iξ^3 s_{31}(t),   (72)

where the prime indicates differentiation with respect to t. Then recalling the definition (42) of (ρ_1, ρ̄_1) as well as the symmetry property (10), we obtain the time evolution of the varied scattering data as

δρ_1(ξ, t) = δρ_1(ξ, 0)e^{8iξ^3t},   δρ̄_1(ξ, t) = δρ̄_1(ξ, 0)e^{−8iξ^3t}.   (73)
Now we insert the above time-dependence relations (70) and (73) into the time-restored expansion (65), and get a new expansion

(δu(x, t), δu^*(x, t))^T = −(1/π) ∫_{−∞}^{∞} [ Z_1^{(t)}(ξ; x, t)δρ_1(ξ, 0) + Z_2^{(t)}(ξ; x, t)δρ̄_1(ξ, 0) ] dξ.   (74)

The advantages of this expansion are that the Jost functions Φ^{(t)} in [Z_1^{(t)}, Z_2^{(t)}] satisfy both equations of the Lax pair, and the expansion coefficients [δρ_1, δρ̄_1] are time-independent.
Since δu and δu^* must satisfy the homogeneous linearized Sasa-Satsuma equation, it follows that the functions [Z_1^{(t)}, Z_2^{(t)}] appearing in the expansion (74) must also, and precisely, satisfy this same linearized Sasa-Satsuma equation. A similar fact for several other integrable equations has been noted before by direct verification (see [6] for instance). Here we will give a simple proof of this fact which holds for any integrable equation for which an expansion such as (74) exists. Suppose the linearized operator of the Sasa-Satsuma equation for [u(x, t), u^*(x, t)]^T is \mathcal{L}. Since the potential
u(x, t) satisfies the Sasa-Satsuma equation, and the variation of the potential [δu(x, t), δu^*(x, t)]^T is infinitesimal, it follows that

\mathcal{L} (δu(x, t), δu^*(x, t))^T = 0.   (75)

Inserting Eq. (74) into the above equation, we get

∫_{−∞}^{∞} [ \mathcal{L}Z_1^{(t)}(ξ; x, t)δρ_1(ξ, 0) + \mathcal{L}Z_2^{(t)}(ξ; x, t)δρ̄_1(ξ, 0) ] dξ = 0.   (76)

Since the initial variations of the scattering data (δρ_1, δρ̄_1)(ξ, 0) are arbitrary and linearly independent, it follows that

\mathcal{L}Z_1^{(t)}(ξ; x, t) = \mathcal{L}Z_2^{(t)}(ξ; x, t) = 0   (77)

for any ξ ∈ R. Thus Z_1^{(t)} and Z_2^{(t)} satisfy the linearized Sasa-Satsuma equation and are therefore the squared eigenfunctions of this equation.
Now, in the derivation of (74), from (52) onward, no constraints have been put on the variations of u and u^*, except for the implied assumption that the scattering data for u and u^* and its variations exist. Hence we expect that [δu, δu^*]^T in (74) can be considered arbitrary. If so, then due to the equality in (74), we can already expect {Z_1^{(t)}(ξ; x, t), Z_2^{(t)}(ξ; x, t), ξ ∈ R} to form a complete set in an appropriate functional space (such as L^1).
We remark that the squared eigenfunctions given here are all linearly independent. This can be easily verified upon recognizing that from (2), one may readily construct an integro-differential eigenvalue problem for which these squared eigenfunctions are the true eigenfunctions. Then, in the usual manner, it follows that these squared eigenfunctions are linearly independent. The same holds for the adjoint squared eigenfunctions. The linear independence of these squared eigenfunctions can also be easily established after we have obtained the dual relations of (74), which give variations of the scattering data due to arbitrary variations of the potentials (see the next subsection).
The squared eigenfunctions [Z_1^{(t)}, Z_2^{(t)}] have nice analytic properties. Indeed, recalling the analytic properties of the Jost functions discussed in Sec. II, we see that Z_1^{(t)}(ζ)e^{−2iζx} is analytic in C_−, while Z_2^{(t)}(ζ)e^{2iζx} is analytic in C_+. These analytic properties will be essential when we extend our results to the general case where s_{33} and s̄_{33} have zeros (see the end of this section).
It should be noted that in the expressions (69) for [Z_1^{(t)}, Z_2^{(t)}], if φ_3^{(t)} is replaced by another Jost function (such as φ_1^{(t)}), or if φ̄_1^{(t)} is replaced by another adjoint Jost function (such as φ̄_2^{(t)}), the resulting functions would still satisfy the linearized Sasa-Satsuma equation and are thus also squared eigenfunctions of this equation. These facts can be verified directly by inserting such functions into the linearized Sasa-Satsuma equation and noticing that Φ^{(t)} and Φ̄^{(t)} satisfy the Lax pair (2), (66) and the adjoint Lax pair respectively. Similar results also hold for other integrable equations. However, these other squared eigenfunctions in general do not have nice analytic properties in C_±, nor are they linearly independent of the set (69).
B. Variations of the scattering data and adjoint squared eigenfunctions
In this subsection, we calculate variations of the scattering data which occur due to arbitrary variations in the
potentials. These formulae, together with Eq. (65) or (74), will give the “adjoint” form of the squared eigenfunctions
as well as their closure relation.
We start with the spectral equation (2) for the Jost functions Φ, or specifically,

Φ_x = −iζΛΦ + QΦ.   (78)

Taking the variation of this equation, we get

δΦ_x = −iζΛδΦ + QδΦ + δQΦ.   (79)

Recalling that Φ → e^{−iζΛx} as x → −∞ [see Eqs. (7) and (13)], we have δΦ → 0 as x → −∞. As a result, the solution of the inhomogeneous equation (79) can be found by the method of variation of parameters as

δΦ(ζ; x) = Φ(ζ; x) ∫_{−∞}^{x} Φ^{−1}(ζ; y)δQ(y)Φ(ζ; y) dy.   (80)
Here

δQ = \begin{pmatrix} 0 & 0 & δu \\ 0 & 0 & δu^* \\ −δu^* & −δu & 0 \end{pmatrix}   (81)

is the variation of the potential. Now we take the limit x → ∞ in the above equation. Recalling Φ = ΨS and the asymptotics Ψ → E as x → ∞, we see that Φ → ES and δΦ → EδS as x → ∞. Thus in this limit, the above equation becomes

δS(ξ) = S(ξ) ∫_{−∞}^{∞} Φ^{−1}(ξ; x)δQ(x)Φ(ξ; x) dx,   ξ ∈ R.   (82)

Noticing that SΦ^{−1} = Ψ^{−1} ≡ Ψ̄, the above equation can be rewritten as

δS(ξ) = ∫_{−∞}^{∞} Ψ̄(ξ; x)δQ(x)Φ(ξ; x) dx,   ξ ∈ R.   (83)

This formula gives the variations of the scattering coefficients δs_{ij} in terms of the variation of the potential δQ. From it, the variation of the scattering data δρ_1 can be readily found. This δρ_1 formula contains both the Jost functions Φ and the adjoint Jost functions Ψ̄. When we further express Φ in terms of Ψ through the relation (15) (so that the boundary conditions of the Jost functions involved are all set at x = +∞), we obtain the expression for δρ_1 as

δρ_1 = (1/s_{33}^2) ∫_{−∞}^{∞} [ δu (ψ̄_{31}ϕ_3 − ψ̄_{33}ϕ_2) + δu^* (ψ̄_{32}ϕ_3 − ψ̄_{33}ϕ_1) ] dx,   (84)

where the column vector ϕ is defined as

ϕ = [ϕ_1, ϕ_2, ϕ_3]^T ≡ s̄_{22}ψ_1 − s̄_{21}ψ_2.   (85)

The variation δρ̄_1 can be derived by taking the complex conjugate of the above equation and utilizing the symmetry relations (43) and (46). Defining the row vector

ϕ̄ = [ϕ̄_1, ϕ̄_2, ϕ̄_3] ≡ s_{22}ψ̄_1 − s_{12}ψ̄_2,   (86)

the functions

Ω_1 = (ψ̄_{31}ϕ_3 − ψ̄_{33}ϕ_2, ψ̄_{32}ϕ_3 − ψ̄_{33}ϕ_1)^T,   Ω_2 = (ψ_{32}ϕ̄_3 − ψ_{33}ϕ̄_1, ψ_{31}ϕ̄_3 − ψ_{33}ϕ̄_2)^T,   (87)

and the inner product

⟨f, g⟩ = ∫_{−∞}^{∞} f^T(x)g(x) dx,   (88)

the final expressions for the variations of the scattering data δρ_1 and δρ̄_1 are

δρ_1(ξ) = (1/s_{33}^2(ξ)) ⟨Ω_1(ξ; x), (δu(x), δu^*(x))^T⟩,   (89)

δρ̄_1(ξ) = (1/s̄_{33}^2(ξ)) ⟨Ω_2(ξ; x), (δu(x), δu^*(x))^T⟩,   (90)

where ξ ∈ R.
The above two equations are associated with the expansion (65) of the previous subsection, where time is frozen in both cases. A small problem with these equations is that the Jost functions and adjoint Jost functions appearing in the definitions (87) of [Ω_1, Ω_2] do not satisfy the time evolution equation (66) of the Lax pair (see the previous subsection). To fix this problem, we repeat the practice of the previous subsection and introduce "time-dependent" versions of the functions [Ω_1, Ω_2] as

Ω_1^{(t)} = (ψ̄_{31}^{(t)}ϕ_3^{(t)} − ψ̄_{33}^{(t)}ϕ_2^{(t)}, ψ̄_{32}^{(t)}ϕ_3^{(t)} − ψ̄_{33}^{(t)}ϕ_1^{(t)})^T,   Ω_2^{(t)} = (ψ_{32}^{(t)}ϕ̄_3^{(t)} − ψ_{33}^{(t)}ϕ̄_1^{(t)}, ψ_{31}^{(t)}ϕ̄_3^{(t)} − ψ_{33}^{(t)}ϕ̄_2^{(t)})^T,   (91)
where ϕ^{(t)}(ξ, x, t) and ϕ̄^{(t)}(ξ, x, t) are defined as

ϕ^{(t)} ≡ s̄_{22}(ξ)ψ_1^{(t)} − s̄_{21}(ξ)ψ_2^{(t)},   ϕ̄^{(t)} ≡ s_{22}(ξ)ψ̄_1^{(t)} − s_{12}(ξ)ψ̄_2^{(t)},   (92)

Φ^{(t)} and Φ̄^{(t)} have been defined in (67)-(68), and Ψ^{(t)} and Ψ̄^{(t)} are defined as

Ψ^{(t)} = Ψ e^{−4iζ^3Λt},   Ψ̄^{(t)} = e^{4iζ^3Λt} Ψ̄.   (93)

Notice from (71) and the symmetry condition (10) that (s_{12}, s_{22}, s̄_{21}, s̄_{22}) are time-independent. Similarly to what we did in the previous subsection, we can show that ϕ^{(t)} and Ψ^{(t)} in the definition (91) are Jost functions satisfying both equations (2) and (66) of the Lax pair, while the functions ϕ̄^{(t)} and Ψ̄^{(t)} satisfy both equations of the adjoint Lax pair. In addition, [Ω_1^{(t)}, Ω_2^{(t)}] are related to [Ω_1, Ω_2] as

Ω_1^{(t)} = Ω_1 e^{−8iζ^3t},   Ω_2^{(t)} = Ω_2 e^{8iζ^3t}.   (94)

Inserting this relation and (73) into the time-restored equations (89) and (90), we get the new relations

δρ_1(ξ, 0) = (1/s_{33}^2(ξ)) ⟨Ω_1^{(t)}(ξ; x, t), (δu(x, t), δu^*(x, t))^T⟩,   (95)

δρ̄_1(ξ, 0) = (1/s̄_{33}^2(ξ)) ⟨Ω_2^{(t)}(ξ; x, t), (δu(x, t), δu^*(x, t))^T⟩.   (96)
These two new relations are associated with the new expansion (74) of the previous subsection.
The functions (Ω_1^{(t)}, Ω_2^{(t)}) appearing in the above variations-of-scattering-data formulae (95)-(96) are precisely adjoint squared eigenfunctions satisfying the adjoint linearized Sasa-Satsuma equation. Analogous facts for several other integrable equations have been noted before by direct verification (see [6]). Below we will present a general proof of this fact, which holds for those integrable equations where relations similar to (74), (95) and (96) exist. Along the way, we will also obtain the closure relation, the orthogonality relations and the inner products between the squared eigenfunctions and the adjoint squared eigenfunctions.
First, we insert Eqs. (95)-(96) into the expansion (74). Exchanging the order of integration, we get

(δu(x, t), δu^*(x, t))^T = −(1/π) ∫_{−∞}^{∞} ∫_{−∞}^{∞} [ (1/s_{33}^2(ξ)) Z_1^{(t)}(ξ; x, t)Ω_1^{(t)T}(ξ; x′, t) + (1/s̄_{33}^2(ξ)) Z_2^{(t)}(ξ; x, t)Ω_2^{(t)T}(ξ; x′, t) ] (δu(x′, t), δu^*(x′, t))^T dξ dx′.   (97)

Since we are considering (δu, δu^*) to be arbitrary localized functions (in an appropriate functional space such as L^1), in order for the above equation to hold, we must have

−(1/π) ∫_{−∞}^{∞} [ (1/s_{33}^2(ξ)) Z_1^{(t)}(ξ; x, t)Ω_1^{(t)T}(ξ; x′, t) + (1/s̄_{33}^2(ξ)) Z_2^{(t)}(ξ; x, t)Ω_2^{(t)T}(ξ; x′, t) ] dξ = δ(x − x′)I,   (98)

which is the closure relation (discrete-spectrum contributions are absent here due to our assumption that s_{33}, s̄_{33} have no zeros). Here δ(x) is the Dirac delta function. This closure relation has the usual symmetry, in that not only do the squared eigenfunctions {Z_1^{(t)}(ξ; x, t), Z_2^{(t)}(ξ; x, t), ξ ∈ R} form a complete set, but the adjoint functions {Ω_1^{(t)}(ξ; x, t), Ω_2^{(t)}(ξ; x, t), ξ ∈ R} form a complete set as well.
To obtain the inner products and orthogonality relations between these functions, we reverse the above procedure and insert the expansion (74) into Eqs. (95)-(96). Exchanging the order of integration, we get

(δρ_1(ξ, 0), δρ̄_1(ξ, 0))^T = ∫_{−∞}^{∞} M(ξ, ξ′, t) (δρ_1(ξ′, 0), δρ̄_1(ξ′, 0))^T dξ′,   (99)

where

M(ξ, ξ′, t) = \begin{pmatrix} −(1/(πs_{33}^2(ξ))) ⟨Ω_1^{(t)}(ξ; x, t), Z_1^{(t)}(ξ′; x, t)⟩ & −(1/(πs_{33}^2(ξ))) ⟨Ω_1^{(t)}(ξ; x, t), Z_2^{(t)}(ξ′; x, t)⟩ \\ −(1/(πs̄_{33}^2(ξ))) ⟨Ω_2^{(t)}(ξ; x, t), Z_1^{(t)}(ξ′; x, t)⟩ & −(1/(πs̄_{33}^2(ξ))) ⟨Ω_2^{(t)}(ξ; x, t), Z_2^{(t)}(ξ′; x, t)⟩ \end{pmatrix}.   (100)
Since the relation (99) is to hold for arbitrary functions δρ_1(ξ, 0) and δρ̄_1(ξ, 0), we must have

M(ξ, ξ′, t) = I δ(ξ − ξ′).   (101)

Consequently we get the inner products and orthogonality relations between the squared eigenfunctions [Z_1^{(t)}, Z_2^{(t)}] and the functions [Ω_1^{(t)}, Ω_2^{(t)}] as

⟨Ω_1^{(t)}(ξ; x, t), Z_1^{(t)}(ξ′; x, t)⟩ = −πs_{33}^2(ξ)δ(ξ − ξ′),   (102)

⟨Ω_2^{(t)}(ξ; x, t), Z_2^{(t)}(ξ′; x, t)⟩ = −πs̄_{33}^2(ξ)δ(ξ − ξ′),   (103)

⟨Ω_1^{(t)}(ξ; x, t), Z_2^{(t)}(ξ′; x, t)⟩ = ⟨Ω_2^{(t)}(ξ; x, t), Z_1^{(t)}(ξ′; x, t)⟩ = 0   (104)

for any ξ, ξ′ ∈ R.
To show that (Ω_1, Ω_2) are adjoint squared eigenfunctions which satisfy the adjoint linearized equation of the Sasa-Satsuma equation, we first separate the temporal derivatives from the spatial ones in the linearized operator \mathcal{L} as

\mathcal{L} = I∂_t + L,   (105)

where L involves only spatial derivatives. The adjoint linearized operator is then

\mathcal{L}^A = −I∂_t + L^A,   (106)

where L^A is the adjoint operator of L. Now we take the inner product between the equation \mathcal{L}Z_1^{(t)}(ξ′; x, t) = 0 and the function Ω_1^{(t)}(ξ; x, t). When Eq. (105) is inserted into it, we find that

⟨LZ_1^{(t)}(ξ′), Ω_1^{(t)}(ξ)⟩ − ⟨Z_1^{(t)}(ξ′), ∂_tΩ_1^{(t)}(ξ)⟩ + ∂_t⟨Z_1^{(t)}(ξ′), Ω_1^{(t)}(ξ)⟩ = 0.   (107)

Here and immediately below, the (x, t) dependence of Z_{1,2}^{(t)} and Ω_{1,2}^{(t)} is suppressed for notational simplicity. Recall that s_{33} is time-independent for the Sasa-Satsuma equation; thus from the inner-product equation (102) we see that ∂_t⟨Z_1^{(t)}(ξ′), Ω_1^{(t)}(ξ)⟩ = 0. Using integration by parts, the first term in the above equation can be rewritten as

⟨LZ_1^{(t)}(ξ′), Ω_1^{(t)}(ξ)⟩ = ⟨Z_1^{(t)}(ξ′), L^AΩ_1^{(t)}(ξ)⟩ + W(ξ′, ξ)|_{x=−L}^{x=L},   L → ∞,   (108)

where the function W(ξ′, ξ; x, t) contains the terms generated during integration by parts. These terms are quadratic combinations of Z_1^{(t)}(ξ′), Ω_1^{(t)}(ξ) and their spatial derivatives. When x = ±L → ±∞, each term has the form f(ξ′)g(ξ)exp(±iξ′L ± iξL ± 4iξ′^3t ± 4iξ^3t) due to the large-x asymptotics of the Jost functions Φ and Ψ. Here f(ξ′) and g(ξ) are related to the scattering coefficients and are continuous functions. Then, in the sense of generalized functions, the last term in Eq. (108) is zero by the Riemann-Lebesgue lemma. Inserting the above results into Eq. (107), we find that

⟨Z_1^{(t)}(ξ′), \mathcal{L}^AΩ_1^{(t)}(ξ)⟩ = 0,   ξ′, ξ ∈ R,   (109)

i.e. \mathcal{L}^AΩ_1^{(t)}(ξ) is orthogonal to all Z_1^{(t)}(ξ′). By taking the inner product between the equation \mathcal{L}Z_2^{(t)}(ξ′) = 0 and the function Ω_1^{(t)}(ξ) and doing similar calculations, we readily find that

⟨Z_2^{(t)}(ξ′), \mathcal{L}^AΩ_1^{(t)}(ξ)⟩ = 0,   ξ′, ξ ∈ R.   (110)

In other words, \mathcal{L}^AΩ_1^{(t)}(ξ) is also orthogonal to all Z_2^{(t)}(ξ′). Since {Z_1^{(t)}(ξ′), Z_2^{(t)}(ξ′), ξ′ ∈ R} forms a complete set, Eqs. (109)-(110) dictate that \mathcal{L}^AΩ_1^{(t)} must vanish for any ξ, x and t, i.e.

\mathcal{L}^AΩ_1^{(t)}(ξ; x, t) = 0,   ξ ∈ R.   (111)

Thus Ω_1^{(t)}(ξ) is an adjoint squared eigenfunction that satisfies the adjoint linearized Sasa-Satsuma equation for every ξ ∈ R. Similarly, it can be shown that Ω_2^{(t)}(ξ) is also an adjoint squared eigenfunction for every ξ ∈ R. In short, {Ω_1^{(t)}(ξ), Ω_2^{(t)}(ξ), ξ ∈ R} is a complete set of adjoint squared eigenfunctions.
Like the squared eigenfunctions (Z_1^{(t)}, Z_2^{(t)}), these adjoint squared eigenfunctions (Ω_1^{(t)}, Ω_2^{(t)}) are also quadratic combinations of Jost functions and adjoint Jost functions. In addition, they have nice analytic properties as well. To see this latter fact, notice from the definition (85) that ϕe^{iζx} is analytic in C_−, since s̄_{21}, s̄_{22}, ψ_1e^{iζx} and ψ_2e^{iζx} are analytic in C_−. Then recalling that ψ̄_3e^{iζx} is also analytic in C_−, we see that Ω_1^{(t)}(ζ)e^{2iζx} is analytic in C_−. Similarly, Ω_2^{(t)}(ζ)e^{−2iζx} is analytic in C_+.
It is worth pointing out that in the above adjoint squared eigenfunctions (91), if ϕ^{(t)} is replaced by the Jost function ψ_1^{(t)} (or ψ_2^{(t)}), or if ϕ̄^{(t)} is replaced by the adjoint Jost function ψ̄_1^{(t)} (or ψ̄_2^{(t)}), the resulting functions would still satisfy the adjoint linearized Sasa-Satsuma equation. Thus one may be tempted to simply take ϕ^{(t)} = ψ_k^{(t)}, ϕ̄^{(t)} = ψ̄_k^{(t)} (k = 1 or 2) rather than (92) as adjoint squared eigenfunctions. The problem with these "simpler" adjoint squared eigenfunctions is that their inner products with the squared eigenfunctions (69) are not what they are supposed to be [see (102)-(104)], nor does one obtain the closure relation (98). Thus the adjoint squared eigenfunctions (91), coming from our systematic procedure of variation calculations, are the correct ones to use.
C. Extension to the general case
In the previous subsections, the squared eigenfunctions and their closure relation were established under the assumption that s_{33} and s̄_{33} have no zeros, i.e. the spectral equation (2) has no discrete eigenvalues. In this subsection, we extend those results to the general case where s_{33} and s̄_{33} have zeros in their respective planes of analyticity C_±. Due to the symmetry properties (10) and (12), we have

s̄_{33}(ζ) = s_{33}^*(ζ^*),   s_{33}(ζ) = s_{33}^*(−ζ^*).   (112)

Thus if ζ_j ∈ C_− is a zero of s_{33}, i.e. s_{33}(ζ_j) = 0, then −ζ_j^* ∈ C_− is also a zero of s_{33}, and {ζ_j^*, −ζ_j} ∈ C_+ are both zeros of s̄_{33}. This means that the zeros of s_{33} and s̄_{33} always appear in quadruples. Suppose all the zeros of s_{33} and s̄_{33} are ζ_j ∈ C_− and ζ̄_j ∈ C_+, j = 1, ..., 2N, respectively. For simplicity, we also assume that all zeros are simple.
In this general case with zeros, the squared eigenfunctions Z_1^{(t)}, Z_2^{(t)} clearly still satisfy the linearized Sasa-Satsuma equation, and the adjoint squared eigenfunctions Ω_1^{(t)}, Ω_2^{(t)} still satisfy the adjoint linearized Sasa-Satsuma equation; these facts do not change when s_{33} and s̄_{33} have zeros. The main difference from the previous no-zero case is that the sets of (continuous) squared eigenfunctions {Z_1^{(t)}(ξ; x), Z_2^{(t)}(ξ; x), ξ ∈ R} and adjoint squared eigenfunctions {Ω_1^{(t)}(ξ; x), Ω_2^{(t)}(ξ; x), ξ ∈ R} are no longer complete, i.e. the closure relation (98) does not hold any more, and contributions from the discrete spectrum must now be included. To account for the discrete-spectrum contributions, one could proceed by adding variations of the discrete scattering data to the above derivations. But a much easier way is to simply pick up the pole contributions to the integrals in the closure relation (98), as we will do below. Recall from the previous subsections that Z_1^{(t)}(ζ; x, t)e^{−2iζx} and Ω_1^{(t)}(ζ; x, t)e^{2iζx} are analytic in C_−, while Z_2^{(t)}(ζ; x, t)e^{2iζx} and Ω_2^{(t)}(ζ; x, t)e^{−2iζx} are analytic in C_+. In addition, from the large-ζ asymptotics (31)-(32), (35), and the symmetry relations (46), we easily find that

Z_1^{(t)}(ζ; x, t) → e^{2iζx+8iζ^3t}(0, 1)^T,   Z_2^{(t)}(ζ; x, t) → e^{−2iζx−8iζ^3t}(1, 0)^T,   ζ → ∞,   (113)

Ω_1^{(t)}(ζ; x, t) → −e^{−2iζx−8iζ^3t}(0, 1)^T,   Ω_2^{(t)}(ζ; x, t) → −e^{2iζx+8iζ^3t}(1, 0)^T,   ζ → ∞.   (114)
Thus,

∫_{C^−} (1/s_{33}^2(ζ)) Z_1^{(t)}(ζ; x, t)Ω_1^{(t)T}(ζ; x′, t) dζ = −∫_{C^−} e^{2iζ(x−x′)} dζ diag(0, 1),   (115)

where the integration path C^− is the lower semi-circle of infinite radius, traversed anticlockwise. Since the function e^{2iζ(x−x′)} on the right-hand side of the above equation is analytic, its integration path can be brought up to the real axis ζ ∈ R. In the sense of generalized functions, that integral is equal to πδ(x − x′), thus

∫_{C^−} (1/s_{33}^2(ζ)) Z_1^{(t)}(ζ; x, t)Ω_1^{(t)T}(ζ; x′, t) dζ = −πδ(x − x′) diag(0, 1).   (116)

Performing the same calculation for the integral of Z_2^{(t)}Ω_2^{(t)T}/s̄_{33}^2 along the upper semi-circle of infinite radius C^+ in the clockwise direction, and combining it with the above equation, we get

−(1/π) ∫_{C^−} (1/s_{33}^2(ζ)) Z_1^{(t)}(ζ; x, t)Ω_1^{(t)T}(ζ; x′, t) dζ − (1/π) ∫_{C^+} (1/s̄_{33}^2(ζ)) Z_2^{(t)}(ζ; x, t)Ω_2^{(t)T}(ζ; x′, t) dζ = δ(x − x′)I.   (117)
Now we evaluate the two integrals in the above equation by bringing down the integration paths to the real axis and picking up the pole contributions by the residue theorem. Recalling that the zeros of s_{33} and s̄_{33} are simple, the poles in the above integrals are of second order. After simple calculations, we get the closure relation for the general case (with discrete-spectrum contributions) as

−(1/π) ∫_{−∞}^{∞} [ (1/s_{33}^2(ξ)) Z_1^{(t)}(ξ; x, t)Ω_1^{(t)T}(ξ; x′, t) + (1/s̄_{33}^2(ξ)) Z_2^{(t)}(ξ; x, t)Ω_2^{(t)T}(ξ; x′, t) ] dξ
− \sum_{j=1}^{2N} (2i/s_{33}′^2(ζ̄_j)) [ Z_1^{(t)}(ζ̄_j; x, t)Θ_1^{(t)T}(ζ̄_j; x′, t) + ∂Z_1^{(t)}/∂ζ (ζ̄_j; x, t) Ω_1^{(t)T}(ζ̄_j; x′, t) ]
+ \sum_{j=1}^{2N} (2i/s̄_{33}′^2(ζ_j)) [ Z_2^{(t)}(ζ_j; x, t)Θ_2^{(t)T}(ζ_j; x′, t) + ∂Z_2^{(t)}/∂ζ (ζ_j; x, t) Ω_2^{(t)T}(ζ_j; x′, t) ] = δ(x − x′)I,   (118)

where

Θ_1^{(t)}(ζ̄_j; x, t) = ∂Ω_1^{(t)}/∂ζ (ζ̄_j; x, t) − [ s_{33}″(ζ̄_j)/s_{33}′(ζ̄_j) ] Ω_1^{(t)}(ζ̄_j; x, t),   (119)

Θ_2^{(t)}(ζ_j; x, t) = ∂Ω_2^{(t)}/∂ζ (ζ_j; x, t) − [ s̄_{33}″(ζ_j)/s̄_{33}′(ζ_j) ] Ω_2^{(t)}(ζ_j; x, t),   (120)
which have been termed "derivative states" [1, 2] due to the differentiation with respect to ζ. Clearly, Z_1^{(t)}(ζ̄_j), ∂Z_1^{(t)}/∂ζ (ζ̄_j), Z_2^{(t)}(ζ_j), and ∂Z_2^{(t)}/∂ζ (ζ_j) satisfy the linearized Sasa-Satsuma equation and are thus discrete squared eigenfunctions, while Ω_1^{(t)}(ζ̄_j), Θ_1^{(t)}(ζ̄_j), Ω_2^{(t)}(ζ_j), and Θ_2^{(t)}(ζ_j) satisfy the adjoint linearized Sasa-Satsuma equation and are thus discrete adjoint squared eigenfunctions.
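For completeness, the residue computation behind (118)-(120), summarized in the text as "simple calculations", can be spelled out as follows. Near a simple zero ζ_0 of s_{33} one has s_{33}(ζ) = s_{33}′(ζ_0)(ζ − ζ_0) + (1/2)s_{33}″(ζ_0)(ζ − ζ_0)^2 + ..., so for a matrix function g(ζ) analytic at ζ_0,

Res_{ζ=ζ_0} [ g(ζ)/s_{33}^2(ζ) ] = [ g′(ζ_0) − (s_{33}″(ζ_0)/s_{33}′(ζ_0)) g(ζ_0) ] / s_{33}′^2(ζ_0).

Applying this with g(ζ) = Z_1^{(t)}(ζ; x, t)Ω_1^{(t)T}(ζ; x′, t), so that g′ = (∂Z_1^{(t)}/∂ζ)Ω_1^{(t)T} + Z_1^{(t)}(∂Ω_1^{(t)T}/∂ζ), and grouping the Ω-derivative with the second term of the residue, produces exactly the combination Z_1^{(t)}Θ_1^{(t)T} + (∂Z_1^{(t)}/∂ζ)Ω_1^{(t)T} with Θ_1^{(t)} as in (119); the analogous computation with s̄_{33} gives the Θ_2^{(t)} terms, and the factors 2i arise from combining the ±2πi of the residue theorem with the −1/π prefactor in (117).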
In soliton perturbation theories, explicit expressions for the squared eigenfunctions under soliton potentials are needed. For such potentials, s_{31} = s_{32} = s̄_{13} = s̄_{23} = 0, thus G = I in the Riemann-Hilbert problem (33). This Riemann-Hilbert problem has been solved completely for any number of zeros and any orders of their algebraic and geometric multiplicities [25]. In the simplest case where all zeros of the solitons are simple, the solutions P_± can be found in Eq. (76) of [25] (setting r(i) = r(j) = 1 there). One only needs to keep in mind that the zeros of s_{33} and s̄_{33} always appear in quadruples due to the symmetries (112). Once the solutions P_± are obtained, the solitons u(x, t) follow from Eq. (37). Taking the large-x asymptotics of P_± and utilizing the symmetry conditions (10), the full scattering matrix S can be readily obtained. Then, together with P_±, explicit expressions for all the Jost functions Φ, Ψ as well as the squared eigenfunctions can be derived.
IV. SUMMARY AND DISCUSSION
In this paper, squared eigenfunctions were derived for the Sasa-Satsuma equation. It was shown that these squared
eigenfunctions are sums of two terms, where each term is a product of a Jost function and an adjoint Jost function.
The procedure of this derivation consists of two steps: one is to calculate the variations of the potentials via variations
of the scattering data by the Riemann-Hilbert method. The other one is to calculate variations of the scattering data
via variations of the potentials through elementary calculations. It was proved that the functions appearing in these
variation relations are precisely the squared eigenfunctions and adjoint squared eigenfunctions satisfying respectively
the linearized equation and the adjoint linearized equation of the Sasa-Satsuma system. More importantly, our proof is quite general and holds for other integrable equations as well. Since the spectral operator of the Sasa-Satsuma equation is 3 × 3, while its linearized operator is 2 × 2, we demonstrated how the two-component squared
eigenfunctions are built from only selected components of the three-component Jost functions, and we have seen that
symmetry properties of Jost functions and the scattering data play an important role here.
The derivation used in this paper for squared eigenfunctions is a universal technique. Our main contributions to this method are threefold. First, we showed that for a general integrable system, if one can obtain the variational
relations between the potentials and the scattering data in terms of the components of the Jost functions and their
adjoints, then from these variational relations one can immediately identify the squared eigenfunctions and adjoint
squared eigenfunctions which satisfy respectively the linearized integrable equation and its adjoint equation. Second,
we showed that from these variational relations, one can read off the values of nonzero inner products between
squared eigenfunctions and adjoint squared eigenfunctions. Thus no longer is it necessary to calculate these inner
16
products from the Wronskian relations and the asymptotics of the Jost functions. Third, we showed that from these
variational relations, one can immediately obtain the closure relation of squared eigenfunctions and adjoint squared
eigenfunctions. After squared eigenfunctions of an integrable equation are obtained, one then can readily derive the
recursion operator for that integrable equation, since the squared eigenfunctions are eigenfunctions of the recursion
operator [8, 10, 11, 19].
An important remark about this derivation of squared eigenfunctions is that it uses primarily the spectral operator
of the Lax pair. Indeed, the key variation relations (65), (89) and (90) were obtained exclusively from the spectral
operator (2), while the “time-dependent” versions of these relations (74), (95) and (96) were obtained by using
the asymptotic form of the time-evolution equation in the Lax pair as |x| → ∞. This means that all integrable equations with the same spectral operator would share the same squared eigenfunctions (except for an exponential-in-time factor which is equation-dependent). For instance, a whole hierarchy of integrable equations would possess the
“same” squared eigenfunctions, since the spectral operators of a hierarchy are the same. This readily reproduces
earlier results in [10, 11, 19], where such results were obtained by the commutability relations between the linearized
operators and the recursion operator of a hierarchy.
Since the derivation in this paper uses primarily the spectral operator of the Lax pair, if two integrable equations
have similar spectral operators, derivations of their squared eigenfunctions would be similar. For instance, the spectral
operator of the Manakov equations is very similar to that of the Sasa-Satsuma equation (except that the potential
term Q in the Sasa-Satsuma spectral operator (2) possesses one more symmetry). Making very minor modifications
to the calculations for the Sasa-Satsuma equation in the text, we can obtain squared eigenfunctions for the Manakov
equations. This will be demonstrated in the Appendix.
With the squared eigenfunctions obtained for the Sasa-Satsuma equation, a perturbation theory for Sasa-Satsuma
solitons can now be developed. This problem lies outside the scope of the present article, and will be left for future
studies.
Acknowledgment
We thank Dr. Taras Lakoba for very helpful discussions. The work of J.Y. was supported in part by the Air Force
Office of Scientific Research under grant USAF 9550-05-1-0379. The work of D.J.K. has been supported in part by the
US National Science Foundation under grant DMS-0505566.
Appendix: Squared eigenfunctions for the Manakov equations
The Manakov equations are [26]

iu_t + u_{xx} + 2(|u|^2 + |v|^2)u = 0,   (121)

iv_t + v_{xx} + 2(|u|^2 + |v|^2)v = 0.   (122)

The spectral operator for the Manakov equations is Eq. (2) with

Q = \begin{pmatrix} 0 & 0 & u \\ 0 & 0 & v \\ −u^* & −v^* & 0 \end{pmatrix}.   (123)
Below we will use the same notations on Jost functions and the scattering matrix as in the main text. Since Q
above is also anti-Hermitian, Jost functions and the scattering matrix satisfy the involution properties (9) and (10)
as well. The main difference between the Sasa-Satsuma equation and the Manakov equations is that the potential Q above for the Manakov equations does not possess the additional symmetry (4); thus the Jost functions and the scattering matrix do not possess the symmetries (11) and (12). In addition, the asymptotics (55) also needs minor modification. It is noted that the spectral operator of the Manakov equations is 3 × 3, while the linearized operator of these equations is 4 × 4. So one needs to build four-component squared eigenfunctions from three-component Jost functions. This contrasts with the Sasa-Satsuma equation, where one builds two-component squared eigenfunctions from three-component Jost functions.
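The loss of the symmetry (4) is easy to see numerically. In the following short Python sketch (with arbitrary sample values of u and v, for illustration only), the first check passes while the second fails unless v = u^*, which is precisely the Sasa-Satsuma reduction:

import numpy as np

u, v = 0.3 + 0.7j, -0.2 + 0.5j                  # arbitrary sample values (v != u^*)
Q = np.array([[0, 0, u],
              [0, 0, v],
              [-np.conj(u), -np.conj(v), 0]])   # Manakov potential matrix (123)
sigma = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])

print(np.allclose(Q.conj().T, -Q))              # still anti-Hermitian, so (9)-(10) survive
print(np.allclose(sigma @ Q @ sigma, Q.conj())) # symmetry (4) fails, so (11)-(12) are lost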
Repeating the same calculations as in the main text, with the above minor modifications kept in mind, we still get Eq. (57) for the variations (δu, δu^*), while the expressions for the variations (δv, δv^*) are Eq. (57) with Π_{13} and Π_{31} replaced by Π_{23} and Π_{32}. The expression for the matrix Π is still given by Eq. (60). Thus the variations of the potentials via variations of the scattering data are found to be

[δu, δv, δu^*, δv^*]^T = −(1/π) ∫_{−∞}^{∞} [ Z_1(ξ; x)δρ_1(ξ) + Z_2(ξ; x)δρ_2(ξ) + Z_3(ξ; x)δρ̄_1(ξ) + Z_4(ξ; x)δρ̄_2(ξ) ] dξ,   (124)

where

Z_1 = (φ_{31}φ̄_{13}, φ_{32}φ̄_{13}, φ_{33}φ̄_{11}, φ_{33}φ̄_{12})^T,   Z_2 = (φ_{31}φ̄_{23}, φ_{32}φ̄_{23}, φ_{33}φ̄_{21}, φ_{33}φ̄_{22})^T,   Z_3 = (φ_{11}φ̄_{33}, φ_{12}φ̄_{33}, φ_{13}φ̄_{31}, φ_{13}φ̄_{32})^T,   Z_4 = (φ_{21}φ̄_{33}, φ_{22}φ̄_{33}, φ_{23}φ̄_{31}, φ_{23}φ̄_{32})^T.   (125)
The variations of the scattering matrix δS are still given by Eq. (83), with δQ modified in view of the form of (123). Then defining the vectors

ϕ_1 = (ϕ_{1j}) ≡ s̄_{22}ψ_1 − s̄_{21}ψ_2,   ϕ_2 = (ϕ_{2j}) ≡ −s̄_{12}ψ_1 + s̄_{11}ψ_2,   (126)

ϕ̄_1 = (ϕ̄_{1j}) ≡ s_{22}ψ̄_1 − s_{12}ψ̄_2,   ϕ̄_2 = (ϕ̄_{2j}) ≡ −s_{21}ψ̄_1 + s_{11}ψ̄_2,   (127)
we find that the variations of the scattering data are

δρ_1(ξ) = (1/s_{33}^2(ξ)) ⟨Ω_1(ξ; x), (δu(x), δv(x), δu^*(x), δv^*(x))^T⟩,   δρ_2(ξ) = (1/s_{33}^2(ξ)) ⟨Ω_2(ξ; x), (δu(x), δv(x), δu^*(x), δv^*(x))^T⟩,   (128)

δρ̄_1(ξ) = (1/s̄_{33}^2(ξ)) ⟨Ω_3(ξ; x), (δu(x), δv(x), δu^*(x), δv^*(x))^T⟩,   δρ̄_2(ξ) = (1/s̄_{33}^2(ξ)) ⟨Ω_4(ξ; x), (δu(x), δv(x), δu^*(x), δv^*(x))^T⟩,   (129)

where

Ω_1 = (ψ̄_{31}ϕ_{13}, ψ̄_{32}ϕ_{13}, −ψ̄_{33}ϕ_{11}, −ψ̄_{33}ϕ_{12})^T,   Ω_2 = (ψ̄_{31}ϕ_{23}, ψ̄_{32}ϕ_{23}, −ψ̄_{33}ϕ_{21}, −ψ̄_{33}ϕ_{22})^T,   Ω_3 = (−ϕ̄_{11}ψ_{33}, −ϕ̄_{12}ψ_{33}, ϕ̄_{13}ψ_{31}, ϕ̄_{13}ψ_{32})^T,   Ω_4 = (−ϕ̄_{21}ψ_{33}, −ϕ̄_{22}ψ_{33}, ϕ̄_{23}ψ_{31}, ϕ̄_{23}ψ_{32})^T.   (130)
For the Manakov equations, the time evolution operator of the Lax pair is

Y_t = −2iζ^2ΛY + V(ζ, u)Y,   (131)

where V goes to zero as x approaches infinity. Introducing "time-dependent" Jost functions and adjoint Jost functions such as

Φ^{(t)} = Φ e^{−2iζ^2Λt},   Φ̄^{(t)} = e^{2iζ^2Λt} Φ̄,   (132)

and using them to replace the original Jost functions and adjoint Jost functions in the Z_k and Ω_k definitions (125) and (130), the resulting functions Z_{1,2,3,4}^{(t)} and Ω_{1,2,3,4}^{(t)} are squared eigenfunctions and adjoint squared eigenfunctions of the Manakov equations. Equivalently, {Z_1 e^{−4iζ^2t}, Z_2 e^{−4iζ^2t}, Z_3 e^{4iζ^2t}, Z_4 e^{4iζ^2t}} are squared eigenfunctions, and {Ω_1 e^{4iζ^2t}, Ω_2 e^{4iζ^2t}, Ω_3 e^{−4iζ^2t}, Ω_4 e^{−4iζ^2t}} are adjoint squared eigenfunctions, of the Manakov equations.
It is noted that squared eigenfunctions for the Manakov equations have been given in [18] under different notations
and derivations. Our expressions above are more explicit and easier to use.
[1] D. J. Kaup, "A perturbation expansion for the Zakharov-Shabat inverse scattering transform", SIAM J. Appl. Math. 31, 121 (1976).
[2] D. J. Kaup, "Closure of the squared Zakharov-Shabat eigenstates", J. Math. Anal. Appl. 54, 849-864 (1976).
[3] A. C. Newell, "The inverse scattering transform", in Solitons, Topics in Current Physics, Vol. 17, edited by R. K. Bullough and P. J. Caudrey, Springer-Verlag, New York, pp. 177-242 (1980).
[4] D. J. Kaup and T. I. Lakoba, "Squared eigenfunctions of the massive Thirring model in laboratory coordinates", J. Math. Phys. 37, 308 (1996).
[5] X. Chen and J. Yang, "A direct perturbation theory for solitons of the derivative nonlinear Schroedinger equation and the modified nonlinear Schroedinger equation", Phys. Rev. E 65, 066608 (2002).
[6] D. J. Kaup, T. I. Lakoba and Y. Matsuno, "Perturbation theory for the Benjamin-Ono equation", Inverse Problems 15, 215-240 (1999).
[7] R. L. Herman, "A direct approach to studying soliton perturbations", J. Phys. A 23, 2327-2362 (1990).
[8] M. J. Ablowitz, D. J. Kaup, A. C. Newell, and H. Segur, "The inverse scattering transform: Fourier analysis for nonlinear problems", Stud. Appl. Math. 53, 249-336 (1974).
[9] A. S. Fokas and P. M. Santini, "The recursion operator of the Kadomtsev-Petviashvili equation and the squared eigenfunctions of the Schrödinger operator", Stud. Appl. Math. 75, 179-185 (1986).
[10] J. Yang, "Complete eigenfunctions of linearized integrable equations expanded around a soliton solution", J. Math. Phys. 41, 6614 (2000).
[11] J. Yang, "Eigenfunctions of linearized integrable equations expanded around an arbitrary solution", Stud. Appl. Math. 108, 145 (2002).
[12] D. J. Kaup, "Perturbation theory for solitons in optical fibers", Phys. Rev. A 42, 5689 (1990).
[13] Y. Zeng, W. Ma and R. Lin, "Integration of the soliton hierarchy with self-consistent sources", J. Math. Phys. 41, 5453-5489 (2000).
[14] H. Wang, X. Hu and Gegenhasi, "2D Toda lattice equation with self-consistent sources: Casoratian type solutions, bilinear Bäcklund transformation and Lax pair", J. Comput. Appl. Math. 202, 133-143 (2007).
[15] J. Yang, "Stable embedded solitons", Phys. Rev. Lett. 91, 143903 (2003).
[16] J. Yang and X. Chen, "Linearization operators of solitons in the derivative nonlinear Schroedinger hierarchy", in Trends in Soliton Research, edited by L. V. Chen, Nova Publishers, pp. 15-27 (2006).
[17] D. J. Kaup, "The squared eigenstates of the sine-Gordon eigenvalue problem", J. Math. Phys. 25, 2467-2471 (1984).
[18] V. S. Gerdjikov, "Basic aspects of soliton theory", in Geometry, Integrability and Quantization VI, edited by I. M. Mladenov and A. C. Hirshfeld, Softex, Sofia, pp. 78-122 (2005).
[19] A. S. Fokas, "A symmetry approach to exactly solvable evolution equations", J. Math. Phys. 21, 1318-1325 (1980).
[20] A. Sergyeyev and D. Demskoi, "Sasa-Satsuma (complex modified Korteweg-de Vries II) and the complex sine-Gordon II equation revisited: Recursion operators, nonlocal symmetries, and more", J. Math. Phys. 48, 042702 (2007).
[21] A. Hasegawa and Y. Kodama, Solitons in Optical Communications, Clarendon, Oxford (1995).
[22] G. P. Agrawal, Nonlinear Fiber Optics, Academic Press, San Diego (1989).
[23] N. Sasa and J. Satsuma, "New type of soliton solutions for a higher-order nonlinear Schrödinger equation", J. Phys. Soc. Japan 60, 409-417 (1991).
[24] J. Yang, B. A. Malomed, and D. J. Kaup, "Embedded solitons in second-harmonic-generating systems", Phys. Rev. Lett. 83, 1958 (1999).
[25] V. S. Shchesnovich and J. Yang, "General soliton matrices in the Riemann-Hilbert problem for integrable nonlinear equations", J. Math. Phys. 44, 4604 (2003).
[26] S. V. Manakov, "On the theory of two-dimensional stationary self-focusing of electromagnetic waves", Zh. Eksp. Teor. Fiz. 65, 1392 (1973) [Sov. Phys. JETP 38, 248-253 (1974)].