KRESS SMOOTHING TRANSFORMATION FOR WEAKLY SINGULAR
FREDHOLM INTEGRAL EQUATION OF SECOND KIND
HASSAN MOHAMED SAEED BAWAZIR
UNIVERSITI TEKNOLOGI MALAYSIA
PSZ 19:16 (Pind. 1/97)
UNIVERSITI TEKNOLOGI MALAYSIA
THESIS STATUS DECLARATION FORM

TITLE:            KRESS SMOOTHING TRANSFORMATION FOR WEAKLY SINGULAR
                  FREDHOLM INTEGRAL EQUATION OF SECOND KIND
ACADEMIC SESSION: 2005/2006

I, HASSAN MOHAMED SAEED BAWAZIR, agree that this thesis (PSM / Master's / Doctor of Philosophy)* is to be kept at the Library of Universiti Teknologi Malaysia under the following conditions of use:

1. The thesis is the property of Universiti Teknologi Malaysia.
2. The Library of Universiti Teknologi Malaysia may make copies for the purpose of study only.
3. The Library may make copies of this thesis as exchange material between institutions of higher learning.
4. ** Please tick one of the following:
   [ ] SULIT        (contains information of national security or Malaysian interest as set out in the OFFICIAL SECRETS ACT 1972)
   [ ] TERHAD       (contains restricted information as determined by the organisation/body where the research was carried out)
   [ ] TIDAK TERHAD

Certified by:

(SIGNATURE OF AUTHOR)                         (SIGNATURE OF SUPERVISOR)

Permanent address:                            ASSOC. PROF. DR. ALI BIN ABD RAHMAN
Department of Mathematics,                    Name of Supervisor
Faculty of Education - Seiyun,
Hadhramout University of Science
& Technology, Hadramout, Yemen.

Date: 27 MARCH 2006                           Date: 27 MARCH 2006

NOTES:
*  Delete whichever is not applicable.
** If the thesis is SULIT or TERHAD, please attach a letter from the relevant authority/organisation stating the reason and the period for which the thesis must be classified as SULIT or TERHAD.
i  "Thesis" means a thesis for the degree of Doctor of Philosophy or a Master's degree by research, a dissertation for study by coursework and research, or an Undergraduate Project Report (PSM).
"I hereby declare that I have read through this dissertation and in my opinion this dissertation is sufficient in terms of scope and quality for the award of the degree of Master of Science (Mathematics)."

Signature          :
Name of Supervisor :  Assoc. Prof. Dr. Ali bin Abd Rahman
Date               :  27 MARCH 2006
KRESS SMOOTHING TRANSFORMATION FOR WEAKLY SINGULAR
FREDHOLM INTEGRAL EQUATION OF SECOND KIND
HASSAN MOHAMED SAEED BAWAZIR
This dissertation is submitted in partial fulfillment of the
requirements for the award of the degree of
Master of Science (Mathematics)
Faculty of Science
Universiti Teknologi Malaysia
MARCH 2006
I declare that this dissertation entitled “Kress smoothing transformation for
weakly singular Fredholm integral equation of second kind ” is the result of my own
research except as cited in references. The dissertation has not been accepted for
any degree and is not concurrently submitted in candidature of any other degree.
Signature :
Name      :  HASSAN MOHAMED SAEED BAWAZIR
Date      :  27 MARCH 2006
To
my beloved family and friends,
especially
my sons: Faisal and Mohamed.
ACKNOWLEDGEMENT
In The Name Of ALLAH, The Most Beneficent, The Most Merciful
All praise is due only to ALLAH, the Lord of the Worlds. Ultimately, only
ALLAH has given us the strength and courage to proceed with our entire life.
His works are truly splendid and wholesome, and His knowledge is truly complete
with due perfection.
I am particularly appreciative of my supervisor, Assoc. Prof. Dr. Ali
bin Abd Rahman for his invaluable supervision, guidance and assistance. He
has provided me with some precious ideas and suggestions throughout this
dissertation.
In addition I would like to thank Hadhramout University of Science &
Technology for their support. I am also grateful to Al-Sheikh Eng. Abdullah
Ahmad Bogshan for his financial support.
Also I would like to thank the Mathematics Department, Faculty of
Science, UTM, for providing the facilities.
I am grateful for the help of my best friends. Among these are Dr.
Mohammed M. S. Nasser and Mr. Omer Abdulaziz Mohamed Ali.
Finally, I wish to express my heartfelt gratitude to my beloved parents, my
uncle, and my wife for their direct and indirect support and encouragement
during the completion of this dissertation.
ABSTRACT
This work investigates a numerical method for the second kind Fredholm integral equation with a weakly singular kernel k(x, y), in particular when k(x, y) = ln |x − y| and k(x, y) = |x − y|^{−α}, −1 ≤ x, y ≤ 1, 0 < α < 1. The solutions of such equations may exhibit singular behaviour in the neighbourhood of the endpoints x = ±1. We introduce a new smoothing transformation based on the Kress transformation for solving weakly singular Fredholm integral equations of the second kind, using the Hermite smoothing transformation as a standard of comparison. The transformation yields an equation which is still weakly singular, but whose solution is smoother. The transformed equation is then solved numerically by product integration methods with interpolating polynomials. Two types of interpolating polynomials, namely the Legendre and Chebyshev polynomials, have been used. Numerical examples are presented to investigate the performance of the new transformation.
ABSTRAK
Kajian ini menyelidiki kaedah berangka bagi persamaan kamiran Fredholm jenis kedua dengan inti singular secara lemah k(x, y), khususnya apabila k(x, y) = ln |x − y| dan k(x, y) = |x − y|^{−α}, −1 ≤ x, y ≤ 1, 0 < α < 1. Penyelesaian bagi persamaan ini mungkin mempamerkan perilaku singular dalam kejiranan titik hujung x = ±1. Diperkenalkan satu penjelmaan pelicinan baru berdasarkan penjelmaan Kress untuk menyelesaikan persamaan kamiran Fredholm singular secara lemah jenis kedua, dengan penjelmaan Hermite digunakan sebagai piawai. Dengan penjelmaan ini diperoleh persamaan yang masih singular secara lemah, tetapi penyelesaiannya lebih licin. Persamaan terjelma kemudian diselesaikan secara berangka dengan kaedah kamiran hasil darab bersama polinomial interpolasi. Dua jenis polinomial interpolasi, iaitu polinomial Legendre dan Chebyshev, telah digunakan. Contoh berangka diberikan untuk menunjukkan keberkesanan penjelmaan baru ini.
TABLE OF CONTENTS

CHAPTER   TITLE

          DISSERTATION STATUS DECLARATION
          SUPERVISOR'S DECLARATION
          TITLE PAGE
          DECLARATION
          DEDICATION
          ACKNOWLEDGEMENT
          ABSTRACT
          ABSTRAK
          TABLE OF CONTENTS
          LIST OF TABLES
          GLOSSARY
          LIST OF APPENDICES

1         PRELIMINARY REMARKS
          1.1  Introduction
          1.2  Problem Statement
          1.3  Objectives of the Study
          1.4  Scope of the Study
          1.5  Simulation Tool
          1.6  Dissertation's Plan

2         LITERATURE REVIEW
          2.1  Introduction
          2.2  Literature Review
          2.3  Solution Behaviour
          2.4  Hermite Transformation
          2.5  Kress Transformation

3         PRODUCT INTEGRATION METHOD
          3.1  Introduction
          3.2  Integration Rules
          3.3  Product Integration with Gaussian Abscissae and Weights
               3.3.1  Introduction
               3.3.2  Application to Fredholm Equation with Abel Kernels
               3.3.3  Application to Fredholm Equation with Logarithmic Kernels
          3.4  Product Integration with Curtis-Clenshaw Points
               3.4.1  Introduction
               3.4.2  Application to Fredholm Equation with Abel Kernels
               3.4.3  Application to Fredholm Equation with Logarithmic Kernels
          3.5  Concluding Remarks

4         QUADRATURE FORMULA
          4.1  Introduction
          4.2  An Integration Quadrature Formula
               4.2.1  Hermite Smoothing Transformation
               4.2.2  Kress Smoothing Transformation
               4.2.3  Modified Kress Transformation
          4.3  Application to Weakly Singular Fredholm Integral Equation of the Second Kind
               4.3.1  Fredholm Weakly Singular Integral Equations of the Second Kind with Abel Kernels
               4.3.2  Fredholm Weakly Singular Integral Equations of the Second Kind with Logarithmic Kernels

5         NUMERICAL RESULTS
          5.1  Introduction
          5.2  Product Integration with Gaussian Abscissae and Weights
               5.2.1  Weakly Singular Integral Equations with Abel Kernels
                      5.2.1.1  Matrix Elements
                      5.2.1.2  Examples
               5.2.2  Weakly Singular Integral Equations with Logarithmic Kernels
                      5.2.2.1  Matrix Elements
                      5.2.2.2  Examples
          5.3  Product Integration with Curtis-Clenshaw Points
               5.3.1  Weakly Singular Integral Equations with Abel Kernels
                      5.3.1.1  Matrix Elements
                      5.3.1.2  Examples
               5.3.2  Weakly Singular Integral Equations with Logarithmic Kernels
                      5.3.2.1  Matrix Elements
                      5.3.2.2  Examples
          5.4  Discussion

6         SUMMARY AND CONCLUSIONS
          6.1  Summary
          6.2  Conclusions
          6.3  Recommendations for Future Study

          REFERENCES

          APPENDICES
          APPENDIX A
          APPENDIX B
          APPENDIX C
          APPENDIX D
LIST OF TABLES

TABLE NO.   TITLE

3.1    Error Norm of Example 3.1
5.1    Error Norm of Example 5.1 with p = 2
5.2    Error Norm of Example 5.1 with p = 3
5.3    Error Norm of Example 5.2 with p = 3
5.4    The Values |θ_256(t) − θ_n(t)| of Example 5.3 with p = 2
5.5    The Values |θ_256(t) − θ_n(t)| of Example 5.3 with p = 3
5.6    The Values |θ_256(t) − θ_n(t)| of Example 5.3 with p = 4
5.7    The Values |θ_256(t) − θ_n(t)| of Example 5.4
5.8    The Values |θ_256(t) − θ_n(t)| of Example 5.5 with α_0 = α_2 = 4, α_1 = 9
5.9    The Values |θ_256(t) − θ_n(t)| of Example 5.5 with α_0 = α_1 = 3
5.10   The Values |θ_256(t) − θ_n(t)| of Example 5.6 with p = 1
5.11   The Values |θ_256(t) − θ_n(t)| of Example 5.6 with p = 2
5.12   The Values |θ_256(t) − θ_n(t)| of Example 5.6 with p = 3
5.13   The Values |θ_256(t) − θ_n(t)| of Example 5.6 with p = 4
5.14   The Values |θ_256(t) − θ_n(t)| of Example 5.7
5.15   The Values |θ_256(t) − θ_n(t)| of Example 5.8 with α_0 = α_2 = 4, α_1 = 9
5.16   The Values |θ_256(t) − θ_n(t)| of Example 5.8 with α_0 = α_1 = 3
5.17   The Values |θ_256(t) − θ_n(t)| of Example 5.9 with p = 2
5.18   The Values |θ_256(t) − θ_n(t)| of Example 5.9 with p = 3
5.19   Error Norm of Example 5.10 with p = 2
5.20   Error Norm of Example 5.10 with p = 3
5.21   Error Norm of Example 5.11
5.22   The Values |θ_256(t) − θ_n(t)| of Example 5.12
5.23   The Values |θ_128(t) − θ_n(t)| of Example 5.13 with α_0 = α_1 = 2
5.24   The Values |θ_128(t) − θ_n(t)| of Example 5.13 with α_0 = α_1 = 3
5.25   The Values |θ_128(t) − θ_n(t)| of Example 5.14 with p = 2
5.26   The Values |θ_128(t) − θ_n(t)| of Example 5.14 with p = 3
5.27   Error Norm of Example 5.15 with p = 2
5.28   Error Norm of Example 5.15 with p = 3
5.29   Error Norm of Example 5.15 with α_0 = α_1 = 2
5.30   Error Norm of Example 5.15 with α_0 = α_1 = 3
5.31   The Values |θ_256(t) − θ_n(t)| of Example 5.17 with p = 2
5.32   The Values |θ_256(t) − θ_n(t)| of Example 5.17 with p = 3
5.33   The Values |θ_256(t) − θ_n(t)| of Example 5.18
5.34   The Values |θ_256(t) − θ_n(t)| of Example 5.19 with p = 2
5.35   The Values |θ_256(t) − θ_n(t)| of Example 5.19 with p = 3
5.36   The Values |θ_256(t) − θ_n(t)| of Example 5.20
5.37   The Values |θ_256(t) − θ_n(t)| of Example 5.21 with α_0 = α_1 = 3
5.38   The Values |θ_256(t) − θ_n(t)| of Example 5.21 with α_0 = α_2 = 4, α_1 = 9
5.39   Error Norm of Example 5.22 with p = 2
5.40   Error Norm of Example 5.22 with p = 3
5.41   Error Norm of Example 5.23 with α_0 = α_1 = 2
5.42   Error Norm of Example 5.23 with α_0 = α_1 = 3
5.43   The Values |θ_256(t) − θ_n(t)| of Example 5.24 with p = 2
5.44   The Values |θ_256(t) − θ_n(t)| of Example 5.24 with p = 3
5.45   The Values |θ_256(t) − θ_n(t)| of Example 5.25
5.46   The Values |θ_256(t) − θ_n(t)| of Example 5.26 with p = 2
5.47   The Values |θ_256(t) − θ_n(t)| of Example 5.26 with p = 3
5.48   The Values |θ_256(t) − θ_n(t)| of Example 5.27
5.49   The Values |θ_256(t) − θ_n(t)| of Example 5.28 with α_0 = α_1 = 3
5.50   The Values |θ_256(t) − θ_n(t)| of Example 5.28 with α_0 = α_2 = 4, α_1 = 9
GLOSSARY

GM     Gauss method
CM     Clenshaw method
HT     Hermite transformation
KT     Kress transformation
MKT    Modified Kress transformation
LIST OF APPENDICES

APPENDIX   TITLE

A          MATLAB program for Gauss-Legendre method
B          MATLAB program for Clenshaw-Curtis method
C          MATLAB program which solves a weakly singular Fredholm integral equation with Abel kernel using Gauss-Legendre method
D          MATLAB program which solves a weakly singular Fredholm integral equation with logarithmic kernel using Clenshaw-Curtis method
CHAPTER 1
PRELIMINARY REMARKS
1.1 Introduction
An integral equation is an equation in which the unknown function f(x) to be determined appears under the integral sign. A typical form of an integral equation in f(x) is

    f(x) − λ ∫_{α(x)}^{β(x)} k(x, y) f(y) dy = g(x),                                   (1.1)

where k(x, y) is called the kernel of the integral equation, and α(x) and β(x) are the limits of integration. It is important to point out that the kernel k(x, y) and the function g(x) in (1.1) are given in advance; g(x) is called the input function.

The standard form of a Volterra linear integral equation, where the limits of integration are functions of x rather than constants, is

    φ(x) f(x) − λ ∫_{a}^{x} k(x, y) f(y) dy = g(x),    a ≤ x ≤ b,                      (1.2)

and the standard form of a Fredholm linear integral equation, where the limits of integration α(x) and β(x) are constants (say a and b), is

    φ(x) f(x) − λ ∫_{a}^{b} k(x, y) f(y) dy = g(x),    a ≤ x, y ≤ b,                   (1.3)

where the kernel of the integral equation, k(x, y), and the function g(x) are given in advance, and λ is a parameter. The equations (1.2) and (1.3) are called linear because the unknown function f(x) under the integral sign occurs linearly, i.e., the power of f(x) is one.
The value of φ(x) gives rise to the following kinds of Fredholm linear integral equations:

1. When φ(x) = 0, equation (1.3) becomes

    g(x) + λ ∫_{a}^{b} k(x, y) f(y) dy = 0,    a ≤ x, y ≤ b,                           (1.4)

and is called a Fredholm integral equation of the first kind.

2. When φ(x) = 1, equation (1.3) becomes

    f(x) − λ ∫_{a}^{b} k(x, y) f(y) dy = g(x),    a ≤ x, y ≤ b,                        (1.5)

and is called a linear Fredholm integral equation of the second kind. In fact, the form of equation (1.5) can be obtained from (1.3) by dividing both sides of (1.3) by φ(x), provided that φ(x) ≠ 0.

As a special case of equation (1.5), when g(x) = 0 we have the homogeneous equation

    f(x) − λ ∫_{a}^{b} k(x, y) f(y) dy = 0,    a ≤ x, y ≤ b.                           (1.6)
By a boundary value problem for an ordinary differential equation of nth
order, we mean the problem of determining the solution of the equation in a
certain interval, on the boundaries of which the solution and its derivatives of
order not higher than n − 1 take on prescribed values, or satisfy given relations.
These problems lead to Fredholm integral equations (see Pogorzelski (1966),
p.221).
The boundary value problems for partial differential equations of the parabolic and hyperbolic types lead to Volterra integral equations, while the boundary value problems for partial differential equations of the elliptic type yield Fredholm equations.
The solutions of the Dirichlet and Neumann problems are among the applications of the theory of Fredholm equations (see Pogorzelski (1966), p.230).
Equation (1.1) is called singular if the lower limit, the upper limit or both
limits of integration are infinite. In addition, the equation (1.1) is also called a
singular integral equation if the kernel k(x, y) becomes infinite at one or more
points in the domain of integration (see Wazwaz (1997), p.7).
Kernels which become unbounded at x = y, for example

    k(x, y) = |x − y|^{−α},  0 < α < 1,

or

    k(x, y) = ln |x − y|,

are said to have a weak singularity (see Baker (1977), p.68). In the case where k(x, y) and g(x) are piecewise-continuous, with finite jump discontinuities only on lines parallel to the coordinate axes, these 'singularities' are called 'mild' (see Baker (1977), p.526).
Supposing that our functions k(x, y) and g(x) are piecewise-continuous and bounded, then in solving (1.6) we seek values of the parameter λ for which (1.6) has a non-trivial solution f(x). Such a value λ is called a characteristic value and the corresponding solution is called an eigenfunction (see Baker (1977), p. 4).

In general we cannot guarantee the existence of any characteristic value for equation (1.6). However, if the kernel k(x, y) is not identically zero, real, and k(x, y) = k(y, x) (in this case k(x, y) is said to be real and symmetric), then there is at least one non-zero characteristic value and all of the characteristic values are real.
A value λ such that the equation (1.5) is uniquely solvable (when g(x)
is piecewise-continuous but otherwise arbitrary) is known as a regular value. If
λ is a characteristic value and ψ(x) a corresponding eigenfunction then to any
solution f (x) of equation (1.5) there corresponds another solution f (x) + αψ(x),
where α is arbitrary. Thus if λ is a characteristic value it cannot be a regular
value. Moreover, if λ is not a characteristic value it can be shown that equation
(1.5) has a unique solution, for arbitrary g(x), and hence that λ is a regular value
(see Baker (1977), p. 15).
The previous results, concerning the uniqueness and existence of the solution of Fredholm integral equations of the first and second kinds, are obtained under the supposition that the kernel k(x, y) and the input function g(x) are piecewise-continuous and bounded. The treatment of weakly singular Fredholm integral equations requires additional concepts, such as compact integral operators and Banach spaces, and theorems like the Fredholm Alternative. Consider a weakly singular Fredholm integral equation of the second kind of the form

    f(x) − λ ∫_{−1}^{1} k(x, y) f(y) dy = g(x),    −1 ≤ x, y ≤ 1,                      (1.7)

with

    k(x, y) = |x − y|^{−α},  0 < α < 1,

or

    k(x, y) = ln |x − y|.

It can be proved that (1.7) has a unique solution if and only if the corresponding homogeneous equation has only the trivial solution; for more details see Atkinson (1997), pages 6-13.
1.2 Problem Statement

This dissertation introduces a new smoothing transformation based on the Kress transformation for solving weakly singular Fredholm integral equations of the second kind and, using the Hermite smoothing transformation as a standard, investigates the performance of the new transformation.

Consider a weakly singular Fredholm integral equation of the second kind of the form

    f(x) − λ ∫_{−1}^{1} k(x, y) f(y) dy = g(x),    −1 ≤ x, y ≤ 1,                      (1.8)

with a weakly singular kernel of one of the following forms:

Abel kernel
    k(x, y) = |x − y|^{−α},  0 < α < 1,

logarithmic kernel
    k(x, y) = ln |x − y|,

where −1 ≤ x ≤ 1.

The numerical solution of (1.8) is closely related to the solution of a linear algebraic system. Indeed, the main goal of the numerical methods for solving (1.8) is to reduce it approximately to a linear algebraic system. The linear algebraic system is then solved to obtain an approximate solution of (1.8), as shown in the next chapters.
The numerical treatment of weakly singular integral equations should take
into account the nature of the singularities at the endpoints x = ±1. Some of the
techniques that can be used to solve these integral equations are as follows:
1. Canceling the singularity (of the kernel).
2. Modified quadrature method.
3. Smoothing the kernel.
4. Approximating the kernel by a degenerate kernel.
5. Expansion methods (Galerkin and collocation methods).
6. Product integration.
Kress (1990) introduces an algebraic transformation for smoothing the solution of a boundary Fredholm integral equation in domains with corners. The solution of this integral equation has a singularity at the corner point. He considers integral equations of the second kind in a slightly unconventional form and supposes that the input function is continuous, so we will focus on using his transformation when the input function g(x) is smooth. We will modify the Kress transformation so that it becomes applicable to non-smooth input functions. More details of these transformations will be given later.
Elliott and Prössdorf (1995) introduce a transformation of [0,1] onto itself
such that an arbitrary number of derivatives vanish at the end points 0 and 1. If
the transformed kernel is dominated near the origin by a Mellin kernel then they
give conditions under which the use of a modified Euler-Maclaurin quadrature
rule and the Nyström method gives an approximate solution which converges to
the exact solution of the original equation.
Monegato and Scuderi (1998) introduce a simple smoothing change of variable to solve one-dimensional linear weakly singular integral equations on bounded intervals, with input functions which may be smooth or not. In both cases, whether the input function is smooth or not, they define the smoothing transformation w = w(t) by using a piecewise Hermite interpolation polynomial H_M(t); we will therefore call this transformation the Hermite transformation. We will use the Hermite smoothing transformation in both cases as a standard. More details of this transformation will be given later.
1.3 Objectives of the Study

1. Using the Hermite smoothing transformation, reduce a second kind Fredholm integral equation with a weakly singular kernel, for both smooth and non-smooth input functions, to an equivalent equation with a smoother solution.

2. Using the Kress smoothing transformation, reduce a second kind Fredholm integral equation with a weakly singular kernel, for smooth input functions, to an equivalent equation with a smoother solution.

3. Introduce a new transformation by modifying the Kress transformation so that it can be applied to non-smooth input functions.

4. Using the modified Kress transformation, reduce a second kind Fredholm integral equation with a weakly singular kernel, for non-smooth input functions, to an equivalent equation with a smoother solution.
5. Solve the new transformed equation using the product integration method.
6. Compare the numerical results from the transformations.
1.4 Scope of the Study

This dissertation focuses on introducing a new usage of the Kress smoothing transformation for solving weakly singular Fredholm integral equations of the second kind and, using the Hermite smoothing transformation as a standard, investigates the performance of the new transformation.

Firstly, we shall introduce a quadrature formula for the numerical evaluation of integrals of the form

    ∫_{−1}^{1} f(x) dx,                                                                (1.9)

where the integrand is continuous on the interval (−1, 1) and has singularities at the endpoints ±1. The idea of the new quadrature formula is to use the Hermite and Kress smoothing transformations to reduce the integral (1.9) to an equivalent integral with a smooth integrand.
Next, each transformation will be used to reduce, respectively, a second
kind Fredholm integral equation with a weakly singular kernel to an equivalent
equation with smoother solution.
The new transformed equation will be discretized using the product
integration method to obtain an equivalent linear algebraic system. The following
product integration methods will be used:
1. Product integration with Gauss-Legendre points and weights.
2. Product integration with Clenshaw-Curtis (practical Chebyshev) points.
The linear system will be solved using the MATLAB software (refer to
Rosenberg (2001)) to obtain an approximate solution to the integral equation.
1.5 Simulation Tool

MATLAB is a language for mathematical computations whose fundamental data types are vectors and matrices. It is distinguished from languages such as FORTRAN and C/C++ by operating at a higher mathematical level, including hundreds of operations such as matrix inversion, the singular value decomposition, and the fast Fourier transform as built-in commands. It is also a problem-solving environment, processing top-level commands by an interpreter rather than a compiler and providing in-line access to 2D and 3D graphics.

MATLAB 7.0 is used in the present study, and the programs are written to reduce an integral equation to a linear algebraic system and to calculate the numerical solution of the algebraic problem. The calculations are done on an Intel Pentium 4 2.4 GHz personal computer.
1.6 Dissertation's Plan

This dissertation contains six chapters. Chapter 2 is a literature review of some important numerical methods, the solution behaviour, the Hermite smoothing transformation and the Kress smoothing transformation. Chapter 3 contains a discussion of the product integration method with Gaussian abscissae and the product integration method with Curtis-Clenshaw points, and the application of the two methods to solving weakly singular Fredholm integral equations of the second kind with Abel and logarithmic kernels. Chapter 4 discusses the quadrature formula used to obtain a numerical approximation of integrals with singularities at the endpoints of the interval of integration by using the smoothing transformations. Chapter 5 presents the numerical results of this study. Finally, a conclusion of the work is given in Chapter 6.
CHAPTER 2
LITERATURE REVIEW
2.1 Introduction

This chapter contains a background of the development of the numerical solution of the weakly singular Fredholm integral equation of the second kind

    f(x) − λ ∫_{−1}^{1} k(x, y) f(y) dy = g(x),    −1 ≤ x, y ≤ 1,                      (2.1)

where

    k(x, y) = |x − y|^{−α},  0 < α < 1,                                                (2.2)

or

    k(x, y) = ln |x − y|.                                                              (2.3)

This chapter contains four sections. Section 2.2 gives a literature review on the numerical treatment of weakly singular Fredholm integral equations of the second kind. Section 2.3 discusses the solution behaviour of the equation (2.1). Then in Section 2.4 we shall discuss the Hermite smoothing transformation in detail, and in Section 2.5 we shall discuss the Kress smoothing transformation in detail.
2.2 Literature Review

The product integration method is used for the numerical solution of weakly singular integral equations of the second kind. These equations often have solutions whose derivatives are singular at the end points of the range of integration. Therefore, the order-of-convergence results for smooth solutions do not hold in general. Schneider (1981) shows that the results may be regained for the general case by using an appropriate non-uniform mesh such that the spacing of the knot points is determined by the behaviour of the solution at the end points. If the solution is smooth enough the mesh becomes uniform.
Graham (1982) shows that the solution of a weakly singular Fredholm integral equation with weakly singular convolution kernel k(x − y) can be expressed as a linear combination (of arbitrary length) of singular terms plus an unknown smoother function. The singular terms are integrals which, in most practical cases, may be evaluated explicitly in terms of algebraic functions.
Graham (October, 1982) greatly improves the convergence of the Galerkin and iterated Galerkin methods by using splines based on a mesh which has been suitably graded to accommodate these singularities. In fact, under suitable conditions, the Galerkin method converges optimally, and the iterated Galerkin method is superconvergent.
Criscuolo et al. (July, 1990) examine the convergence of product quadrature formulas of interpolatory type, based on the zeros of certain generalized Jacobi polynomials, for the discretization of integrals of the type

    ∫_{−1}^{1} k(x, y) f(x) dx,    −1 ≤ y ≤ 1,                                         (2.4)

where the kernel k(x, y) is weakly singular and the function f(x) has singularities only at the end points ±1. In particular, when k(x, y) = ln |x − y| or k(x, y) = |x − y|^{α}, α > −1, and f(x) has algebraic singularities of the form (1 ± x)^{β}, β > −1, they prove that the uniform rate of convergence of the rules is O(m^{−2−2β} ln² m) in the case of the first kernel, and O(m^{−2−2β−2α} ln m) if α ≤ 0, or O(m^{−2−2β} ln m) if α > 0, for the second, where m is the number of points in the quadrature rule.
Kress (1990) introduces an algebraic transformation for smoothing the solution of a boundary Fredholm integral equation in domains with corners. The solution of this integral equation has a singularity at the corner point. He considers integral equations of the second kind in a slightly unconventional form and supposes that the input function is continuous, so we will focus on using his transformation when the input function g(x) is smooth. More details of this transformation will be given in Section 2.5.
Kaneko and Xu (1991) use some recent results concerning the regularity of the solution of weakly singular integral equations of the second kind to transform such equations into equivalent integro-differential equations, and then propose a numerical method which provides fast converging numerical approximations.

Kaneko and Xu (1994) establish Gauss-type quadrature formulas for weakly singular integrals and then obtain numerical solutions of the weakly singular Fredholm integral equation of the second kind. They call this method a discrete product-integration method since the weights involved in the standard product-integration method are computed numerically.
For solving an integral equation on [0,1] whose kernel has a fixed singularity at (0,0), Elliott and Prössdorf (1995) introduce a transformation of [0,1] onto itself such that an arbitrary number of derivatives vanish at the end points 0 and 1. If the transformed kernel is dominated near the origin by a Mellin kernel, they give conditions under which the use of a modified Euler-Maclaurin quadrature rule and the Nyström method gives an approximate solution which converges to the exact solution of the original equation.
Monegato and Scuderi (1998) introduce a simple change of variable for smoothing the solution of one-dimensional linear weakly singular Fredholm integral equations on bounded intervals, with input functions which may be smooth or not. The advantage of this approach is that the order of the methods can be arbitrarily high and that the associated linear systems one has to solve are very well-conditioned. More details of this transformation will be given in Section 2.4.
In this dissertation we shall use the Hermite and Kress smoothing transformations for smoothing the solution of the weakly singular Fredholm integral equation (2.1). Furthermore, we will introduce a new smoothing transformation by modifying the Kress transformation; the new transformation can be applied to weakly singular Fredholm integral equations with non-smooth input functions.
2.3 Solution Behaviour

Since the rate of convergence of a numerical method depends on the regularity of the solution of (2.1), knowledge of the behaviour of the solution is very important in the choice of the method; for this reason we shall discuss the analysis of the properties of the solution of (2.1).

When g(x) is sufficiently smooth, the solutions of (2.1) with kernel (2.2) or (2.3) have first derivatives which behave, respectively, like (x + 1)^{−α} and ln |x + 1| near x = −1, and have equivalent singularities near x = 1. When 0 < α < 1, the function (x + 1)^{−α} belongs to the space L^p[−1, 1] for any p in the range 1 ≤ p < 1/α, and ln |x + 1| is in the space L^p[−1, 1] for any p in the range 1 ≤ p < ∞; see Graham (1982).

When the input function g(x) is smooth, say g ∈ C^{p+1}[−1, 1], the solution f(x) has only mild endpoint singularities, that is, f ∈ C^p(−1, 1). In the case of the equation (2.1) with the kernel (2.3), the solution f(x) admits an expansion containing a finite number of terms of the form

    (1 ± x)^i ln^j |1 ± x|,    i, j = 1, 2, ..., p,  i ≥ j,                             (2.5)

plus a function of class C^p(−1, 1). If the input function or one of its first derivatives has, for example, simple jumps at a finite number of points in (−1, 1) and is smooth elsewhere, the solution f(x) may be expressed as a linear combination of g(x) and a finite number of terms which are mildly singular, as those in (2.5), either at ±1 or at the jump points of g(x), plus an unknown smooth function; see Monegato and Scuderi (1998).
2.4 Hermite Transformation
For smoothing the solution of one-dimensional linear weakly singular Fredholm integral equations on bounded intervals, with input functions which may be smooth or not, Monegato and Scuderi (1998) introduce a simple smoothing change of variable.

Firstly, consider the one-dimensional linear weakly singular Fredholm integral equation (2.1) with kernel (2.2) or (2.3), where the inhomogeneous term g(x) has finite jumps or singularities, possibly in one of its derivatives, at a finite number of points of (−1, 1), say −1 < x_1 < x_2 < ... < x_M < 1, and assume that g(x) is smooth (for simplicity C^∞) everywhere in [−1, 1] except at the x_k (k = 1, 2, ..., M), where its singularities satisfy the conditions

    |(x − x_k)^{i+1} g^{(i)}(x)| ≤ c,    i = 0, 1, ...,                                 (2.6)

in a neighbourhood of x_k. They then choose a nonlinear transformation y = w(t), where w(t) is a sufficiently smooth monotone function mapping [−1, 1] onto [−1, 1], having x_0 = −1, x_1, ..., x_M, x_{M+1} = 1 as fixed points, i.e., x_k = w(x_k), and whose leading derivatives vanish at x_k. The simplest choice, and the most efficient from the computational point of view, among those satisfying the above properties is the piecewise Hermite interpolation polynomial H_M(t) associated with the partition −1 = x_0, x_1, ..., x_M, x_{M+1} = 1 of [−1, 1] and defined in each subinterval by the conditions

    H_M(x_j) = x_j,         j = k, k + 1,
    H_M^{(i)}(x_j) = 0,     j = k, k + 1,  i = 1, ..., α_j − 1,  α_j ≥ 2.              (2.7)

The integers α_k, k = 0, ..., M + 1, are chosen according to the smoothing effect that w(t) ought to produce at the points x_k, k = 0, ..., M + 1. Notice that the smoothness of w(t) itself does not depend on the choice of α_0 and α_{M+1}.

The construction and evaluation of H_M(t) and H_M'(t) is not as trivial as it might appear at first, particularly if we want to have an automatic program where the α_k may be arbitrarily chosen. A numerically stable and efficient procedure is the following one.

Since we know a priori that in [x_k, x_{k+1}]

    H_M'(t) = c_k (t − x_k)^{α_k − 1} (x_{k+1} − t)^{α_{k+1} − 1},                     (2.8)

where c_k is a suitable constant, we can use this expression to derive the following representation for H_M(t):

    H_M(t) = c_k ∫_{x_k}^{t} (y − x_k)^{α_k − 1} (x_{k+1} − y)^{α_{k+1} − 1} dy + x_k,    t ∈ [x_k, x_{k+1}].    (2.9)

By imposing the conditions H_M(x_{k+1}) = x_{k+1}, k = 0, ..., M, we determine the coefficients c_k as

    c_k = (x_{k+1} − x_k)^{2 − α_k − α_{k+1}} (α_k + α_{k+1} − 1)! / [(α_k − 1)! (α_{k+1} − 1)!],    k = 0, ..., M.    (2.10)

Using (2.9), H_M(t) can be evaluated exactly (up to machine accuracy), without any loss of precision, by using an N-point Gauss-Legendre quadrature rule with N = ⌊(α_k + α_{k+1})/2⌋, where ⌊x⌋ denotes the greatest integer less than or equal to x.

Both cases, whether the input function g(x) is smooth or non-smooth, are included in the Hermite smoothing transformation (2.9); that is, M = 0 if g(x) is smooth, and M > 0 otherwise.
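To make the construction above concrete, here is a short MATLAB sketch (our own illustration, not taken from the appendices of this dissertation) that builds H_0(t) and H_0'(t) for the smooth case M = 0 on the single interval [−1, 1]; the smoothing orders α_0 = 3, α_1 = 5 and the helper gausslegendre are illustrative assumptions.

    % Hermite smoothing transformation H_0(t) on [-1,1] for the smooth case M = 0,
    % following (2.8)-(2.10); alpha0, alpha1 are illustrative choices.
    alpha0 = 3;  alpha1 = 5;                       % smoothing orders at t = -1 and t = +1
    c0 = 2^(2 - alpha0 - alpha1) * factorial(alpha0 + alpha1 - 1) / ...
         (factorial(alpha0 - 1) * factorial(alpha1 - 1));                    % (2.10)
    dH = @(t) c0 .* (t + 1).^(alpha0 - 1) .* (1 - t).^(alpha1 - 1);          % (2.8)

    % H_0(t) = c0 * int_{-1}^{t} (y+1)^(alpha0-1) (1-y)^(alpha1-1) dy - 1, see (2.9).
    % The integrand is a polynomial of degree alpha0+alpha1-2, so the N-point
    % Gauss-Legendre rule with N = floor((alpha0+alpha1)/2) is exact.
    N = floor((alpha0 + alpha1)/2);
    [z, wz] = gausslegendre(N);
    ts = linspace(-1, 1, 5);
    H  = zeros(size(ts));
    for k = 1:numel(ts)
        y = -1 + (ts(k) + 1)/2 * (z + 1);          % inner nodes mapped to [-1, ts(k)]
        H(k) = c0 * (ts(k) + 1)/2 * (wz' * ((y + 1).^(alpha0-1) .* (1 - y).^(alpha1-1))) - 1;
    end
    disp([ts; H])                                  % H(-1) = -1, H(1) = 1, H monotone
    disp([dH(-1) dH(1)])                           % both derivatives vanish at the endpoints

    function [x, w] = gausslegendre(N)
    % Gauss-Legendre nodes and weights on [-1,1] via the Golub-Welsch algorithm.
    beta = (1:N-1) ./ sqrt(4*(1:N-1).^2 - 1);
    J = diag(beta, 1) + diag(beta, -1);
    [V, D] = eig(J);
    [x, idx] = sort(diag(D));
    w = 2 * V(1, idx)'.^2;
    end

The same pieces, applied on each subinterval [x_k, x_{k+1}] of the partition, give the general piecewise transformation H_M(t).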
2.5 Kress Transformation

Kress (1990) investigates a Nyström method for the numerical solution of the double-layer boundary integral equation of the second kind for the plane harmonic Dirichlet problem in domains with corners. For domains with corners, however, due to the singularity of the solution to the boundary value problem at the corner, the equidistant trapezoidal rule yields poor convergence and therefore has to be replaced by a graded mesh quadrature. He suggests basing this grading upon the idea of substituting a new variable in such a way that the derivatives of the new integrand vanish up to a certain order at the endpoints of the integration interval. Proceeding this way he obtains a high order quadrature rule for integrals with endpoint singularities by using the trapezoidal rule for the transformed integral.

Kress describes a numerical quadrature rule for the integral

    ∫_{0}^{2π} T(x) dx,                                                                (2.11)

where the integrand T(x) is smooth in (0, 2π) but has singularities at the endpoints x = 0 and x = 2π. Let the function W : [0, 2π] → [0, 2π] be bijective, strictly monotonically increasing and infinitely differentiable. Then he substitutes x = W(t) and consequently obtains

    ∫_{0}^{2π} T(x) dx = ∫_{0}^{2π} h(t) dt,                                           (2.12)

where

    h(t) = W'(t) T(W(t)),    0 < t < 2π.                                               (2.13)

Applying the trapezoidal rule for 2n + 1 points to the transformed integral now yields the quadrature formula

    ∫_{0}^{2π} T(x) dx ≈ (π/n) Σ_{j=1}^{2n−1} a_j^{(n)} T(t_j^{(n)}),                  (2.14)

with the weights and mesh points given by

    a_j^{(n)} = W'(jπ/n),    t_j^{(n)} = W(jπ/n),    j = 1, 2, ..., 2n − 1.            (2.15)

A typical example of a substitution is given by

    W(t) = 2π [V(t)]^p / ([V(t)]^p + [V(2π − t)]^p),    0 ≤ t ≤ 2π,                    (2.16)

where

    V(t) = (1/p − 1/2) ((π − t)/π)^3 + (1/p)(t − π)/π + 1/2,                           (2.17)

and p is a positive integer. It can be shown that

    W'(t) = 2πp { [V(2π − t)]^p [V(t)]^{p−1} V'(t) + [V(t)]^p [V(2π − t)]^{p−1} V'(2π − t) } / ([V(t)]^p + [V(2π − t)]^p)^2,    0 ≤ t ≤ 2π.    (2.18)

From (2.16), (2.17), and (2.18), for p > 1 we obtain

    V(0) = 0,  V(2π) = 1,
    W(0) = 0,  W(2π) = 2π,                                                             (2.19)
    W'(0) = W'(2π) = 0,  W'(π) = 2.

It is noted that the cubic polynomial V is chosen such that V(0) = 0, V(2π) = 1, and W'(π) = 2. The latter property ensures, roughly speaking, that one half of the grid points is equally distributed over the total interval, whereas the other half is accumulated towards the two end points.
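As an illustration of the rule (2.14)-(2.15) with the substitution (2.16)-(2.18), the following MATLAB sketch (ours; the grading parameter p = 3, the test integrand and the node count are arbitrary illustrative choices) evaluates ∫_{0}^{2π} sqrt(x(2π − x)) dx, whose derivatives are unbounded at the endpoints and whose exact value is π³/2.

    % Kress quadrature (2.14)-(2.15) on [0,2*pi] with the substitution (2.16)-(2.18).
    p  = 3;                                       % grading parameter (illustrative choice)
    V  = @(t) (1/p - 1/2) .* ((pi - t)/pi).^3 + (1/p) .* (t - pi)/pi + 1/2;   % (2.17)
    dV = @(t) -(3/pi) * (1/p - 1/2) .* ((pi - t)/pi).^2 + 1/(p*pi);           % V'(t)
    W  = @(t) 2*pi .* V(t).^p ./ (V(t).^p + V(2*pi - t).^p);                  % (2.16)
    dW = @(t) 2*pi*p .* (V(2*pi - t).^p .* V(t).^(p-1) .* dV(t) + ...
                         V(t).^p .* V(2*pi - t).^(p-1) .* dV(2*pi - t)) ./ ...
                        (V(t).^p + V(2*pi - t).^p).^2;                        % (2.18)

    T = @(x) sqrt(x .* (2*pi - x));               % test integrand, exact integral pi^3/2
    n = 32;
    j = 1:2*n-1;
    tj = W(j*pi/n);  aj = dW(j*pi/n);             % mesh points and weights (2.15)
    I_approx = (pi/n) * sum(aj .* T(tj));         % quadrature formula (2.14)
    fprintf('approx = %.12f, exact = %.12f\n', I_approx, pi^3/2)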
We will use the same transformation, with a simple modification, on the interval [−1, 1], as follows. Let

    w(t) = a + b W(s(t)),    t ∈ [−1, 1],                                              (2.20)

with

    s(t) = c + d t,    t ∈ [−1, 1],

such that W = 0 implies w = −1, W = 2π implies w = 1, t = −1 implies s = 0, and t = 1 implies s = 2π. These give a = −1, b = 1/π, and c = d = π. Then the transformation (2.20) becomes

    w(t) = −1 + (1/π) W(π(t + 1)),    t ∈ [−1, 1].                                     (2.21)

If we define

    v(t) = V(π(1 + t)),

where V is as in (2.17), then the transformation (2.21) becomes

    w(t) = ([v(t)]^p − [v(−t)]^p) / ([v(t)]^p + [v(−t)]^p),    −1 ≤ t ≤ 1,             (2.22)

where

    v(t) = (1/2 − 1/p) t^3 + t/p + 1/2,                                                (2.23)

and p is a positive integer. We will call the transformation (2.22) the Kress smoothing transformation on the interval [−1, 1]. It can be shown that

    w'(t) = 2p { [v(t)]^p [v(−t)]^{p−1} v'(−t) + [v(−t)]^p [v(t)]^{p−1} v'(t) } / ([v(t)]^p + [v(−t)]^p)^2,    −1 ≤ t ≤ 1.    (2.24)

From (2.22), (2.23), and (2.24), for p > 1 we obtain

    v(−1) = 0,  v(1) = 1,
    w(−1) = −1,  w(1) = 1,                                                             (2.25)
    w'(−1) = w'(1) = 0,  w'(0) = 2.
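A minimal MATLAB sketch of the transformation (2.22)-(2.24) follows (our own illustration; p = 3 is an arbitrary choice). It simply checks the properties (2.25) numerically.

    % Kress smoothing transformation (2.22)-(2.24) on [-1,1].
    p  = 3;                                         % positive integer, p > 1
    v  = @(t) (1/2 - 1/p) .* t.^3 + t./p + 1/2;     % (2.23)
    dv = @(t) 3*(1/2 - 1/p) .* t.^2 + 1/p;          % v'(t)
    w  = @(t) (v(t).^p - v(-t).^p) ./ (v(t).^p + v(-t).^p);                   % (2.22)
    dw = @(t) 2*p .* (v(t).^p .* v(-t).^(p-1) .* dv(-t) + ...
                      v(-t).^p .* v(t).^(p-1) .* dv(t)) ./ ...
                     (v(t).^p + v(-t).^p).^2;                                 % (2.24)

    % numerical check of the properties (2.25)
    disp([v(-1) v(1) w(-1) w(1) dw(-1) dw(1) dw(0)])   % expect 0 1 -1 1 0 0 2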
CHAPTER 3
PRODUCT INTEGRATION METHOD
3.1 Introduction

This chapter describes the product integration method, which is a powerful method for the numerical calculation of integrals whose integrands have singularities. It contains five sections. In Section 3.2, we review the definition and the properties of the product integration method. In the product integration methods, we write the integrand as a product of a smooth function and a singular function; we then approximate the smooth factor of the integrand by interpolation at a set of node points. Product integration with Gaussian abscissae is reviewed in Section 3.3, and product integration with Curtis-Clenshaw points is reviewed in Section 3.4. Each of the Sections 3.3 and 3.4 contains applications to weakly singular Fredholm integral equations of the second kind with Abel and logarithmic kernels. A short conclusion is given in Section 3.5.
3.2 Integration Rules

The standard numerical integration rules, such as the trapezoidal and Simpson's rules, are constructed under the assumption that the integrand is at least bounded. When this is not the case such methods may not work. Even if the integrand is continuous, a great deal of accuracy is lost if higher derivatives fail to exist. Special methods are needed to handle such cases efficiently (Linz, 1985, p. 130).

One of the most powerful ways to deal with poorly behaved integrands is the method of product integration. Consider the integral

    I(ϕ) = ∫_{−1}^{1} ϕ(x) dx,                                                         (3.1)

where ϕ(x) is a real-valued absolutely integrable function which need not be continuous. Integrals with finite endpoints other than −1 and 1 can be transformed to the form (3.1) by a simple linear transformation. To evaluate the integral numerically using the usual interpolatory integration rules, we first replace ϕ(x) by some approximation ϕ̂(x), and then we compute

    Î(ϕ) = ∫_{−1}^{1} ϕ̂(x) dx.                                                        (3.2)

The approximating function ϕ̂(x) has to be chosen in such a way that the integral Î(ϕ) can be computed explicitly.

A similar approach is taken to construct product integration rules, but instead of approximating the whole integrand, we only approximate the well-behaved part. We write the integral as

    I(f) = ∫_{−1}^{1} k(x) f(x) dx,                                                    (3.3)

where f(x) is assumed to be continuous, and whatever singularities or poor behaviour there is in the integrand is included in k(x). The function k(x) is assumed to be a real-valued absolutely integrable function, but need not be continuous or of one sign. We then approximate f(x) by an interpolating function f_n(x), where f(x_i) = f_n(x_i), i = 0, 1, ..., n, and then compute the integral

    I_n(f) = ∫_{−1}^{1} k(x) f_n(x) dx.                                                (3.4)

The type of approximation must be chosen so that the integral in (3.4) can be evaluated (either explicitly or by an efficient numerical technique).

Let P_n be the space of all polynomials of degree less than or equal to n, and let φ_0(x), φ_1(x), ..., φ_n(x) be a basis for P_n. The functions φ_0(x), φ_1(x), ..., φ_n(x) will be called interpolating elements. In this dissertation, the interpolating function f_n(x) will be assumed to be the interpolating polynomial

    f_n(x) = L_n f(x) = Σ_{j=0}^{n} φ_j(x) f_n(x_j).                                   (3.5)

Substituting (3.5) into (3.4), we obtain

    I_n(f) = ∫_{−1}^{1} k(x) f_n(x) dx
           = ∫_{−1}^{1} k(x) Σ_{j=0}^{n} φ_j(x) f_n(x_j) dx
           = Σ_{j=0}^{n} [ ∫_{−1}^{1} k(x) φ_j(x) dx ] f(x_j).

Hence the product integration rule for I(f) is given by

    I_n(f) = Σ_{j=0}^{n} ω_j^{(k)} f(x_j),                                             (3.6)

where

    ω_j^{(k)} = ∫_{−1}^{1} k(x) φ_j(x) dx,    j = 0, 1, ..., n,                        (3.7)

and we call the ω_j^{(k)} the weights.

The integrals in (3.7) are assumed to be evaluable either explicitly or by an efficient numerical technique. Davis and Rabinowitz (1984, p. 74) prove that the replacement of the function f(x) by interpolating polynomials is equivalent to the choice of the weights ω_j^{(k)}, j = 0, 1, ..., n, in (3.6) such that the rule (3.6) is exact when f is any polynomial of degree less than or equal to n.

It is known that if the points x_i, i = 0, 1, ..., n, are chosen to be uniform mesh points, then the behaviour of the approximation can be bad indeed; Sloan (1980) gave an example of this case.
Example 3.1
Let

    k(x) = 1  and  f(x) = 1/(1 + 25x²),

so that the integral we are evaluating is merely

    I(f) = ∫_{−1}^{1} f(x) dx.

Then, using the equally spaced points x_i over the interval [−1, 1],

    x_i = −1 + 2(i − 1)/(n − 1),    i = 1, 2, ..., n,

Sloan obtained the following results.

Table 3.1: Error Norm of Example 3.1

    n        I_n(f)      ||I − I_n||
    6         0.46        9.00(−02)
    11        0.93        3.80(−01)
    16        0.83        2.80(−01)
    21       −5.37        5.92(+00)
    26       −5.40        5.95(+00)
    31       153.8        1.54(+02)
    Exact     0.55

It is clear from Table 3.1 that the approximate integrals I_n(f) are spectacularly bad: they show no sign of convergence, since the error norm increases, and sometimes they even have the wrong sign.
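The behaviour in Example 3.1 is easy to reproduce. The following MATLAB sketch (ours, not Sloan's original program) interpolates f at the equally spaced points by a single polynomial and integrates that interpolant exactly; the exact value of the integral is (2/5) arctan 5 ≈ 0.549. For the larger n, polyfit may also warn that the polynomial is badly conditioned, which is part of the same phenomenon.

    % Reproducing the behaviour in Example 3.1: interpolatory quadrature with
    % equally spaced nodes for f(x) = 1/(1+25x^2), k(x) = 1.
    f = @(x) 1 ./ (1 + 25*x.^2);
    I_exact = (2/5) * atan(5);                    % exact value of the integral

    for n = [6 11 16 21 26 31]
        xi = -1 + 2*(0:n-1)/(n-1);                % n equally spaced points on [-1,1]
        c  = polyfit(xi, f(xi), n-1);             % interpolating polynomial of degree n-1
        q  = polyint(c);                          % its antiderivative
        In = polyval(q, 1) - polyval(q, -1);      % I_n(f) = integral of the interpolant
        fprintf('n = %2d   I_n = %10.4f   |I - I_n| = %.2e\n', n, In, abs(I_exact - In));
    end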
From the above example we conclude that the points must be chosen carefully. In the following two sections, we shall discuss two cases of non-uniform mesh points, namely, when the x_i are the zeros of the Legendre polynomial and when they are the Curtis-Clenshaw points.
3.3 Product Integration with Gaussian Abscissae and Weights

3.3.1 Introduction
In this case, we choose x_i, i = 0, 1, ..., n, to be the zeros of the Legendre polynomial of degree n + 1, P_{n+1}. We need to determine suitable interpolating elements φ_j(x), j = 0, 1, ..., n, such that

    f_n(x) = L_n f(x) = Σ_{j=0}^{n} φ_j(x) f(x_j)                                      (3.8)

is the unique interpolating polynomial of degree n which interpolates f(x) at the points x_i, i = 0, 1, ..., n. Then we approximate the integral I(f) by

    I(f) ≈ I_n(f) = ∫_{−1}^{1} k(x) L_n f(x) dx,                                       (3.9)

such that the integral in (3.9) can be evaluated exactly.

From the properties of Legendre polynomials, P_0(x), P_1(x), ..., P_n(x), where P_j(x), 0 ≤ j ≤ n, is the j-th degree Legendre polynomial, form a basis for P_n. Hence, the interpolating polynomial f_n(x) = L_n f(x) can be written as

    L_n f(x) = Σ_{j=0}^{n} b_j P_j(x),                                                 (3.10)

where the b_j are real constants which we seek. Since

    ∫_{−1}^{1} P_m(x) L_n f(x) dx = Σ_{j=0}^{n} b_j ∫_{−1}^{1} P_m(x) P_j(x) dx = b_m 2/(2m + 1),

therefore

    b_m = [(2m + 1)/2] ∫_{−1}^{1} P_m(x) L_n f(x) dx,    0 ≤ m ≤ n.                    (3.11)

Because P_m(x) L_n f(x) is a polynomial of degree ≤ n + m ≤ 2n, the (n+1)-point Gauss-Legendre rule gives an exact result for (3.11). Therefore

    b_m = [(2m + 1)/2] Σ_{j=0}^{n} ω_j P_m(x_j) f(x_j),    0 ≤ m ≤ n,                  (3.12)

where ω_j, 0 ≤ j ≤ n, are the (n+1)-point Gauss-Legendre weights. Hence the interpolating polynomial L_n f(x) is given by

    L_n f(x) = Σ_{m=0}^{n} b_m P_m(x)
             = Σ_{m=0}^{n} [(2m + 1)/2] [ Σ_{j=0}^{n} ω_j P_m(x_j) f(x_j) ] P_m(x)
             = Σ_{j=0}^{n} [ ω_j Σ_{m=0}^{n} [(2m + 1)/2] P_m(x_j) P_m(x) ] f(x_j).

Consequently, L_n f(x) is given by

    L_n f(x) = Σ_{j=0}^{n} φ_j(x) f(x_j),                                              (3.13)

where the elements φ_j(x), j = 0, 1, ..., n, are given by

    φ_j(x) = ω_j Σ_{m=0}^{n} [(2m + 1)/2] P_m(x_j) P_m(x).                             (3.14)
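As a quick check of (3.14), the following MATLAB sketch (our own illustration; gausslegendre is an assumed helper computing the Gauss-Legendre nodes and weights) forms the matrix of values φ_j(x_i) and verifies that it is the identity matrix, i.e., that (3.13) does interpolate at the Gaussian abscissae.

    % Verify the Lagrange property phi_j(x_i) = delta_ij for the elements (3.14).
    n = 8;
    [x, w] = gausslegendre(n + 1);                % n+1 Gauss-Legendre nodes and weights

    P = zeros(n+1, n+1);                          % P(m+1,i) = P_m(x_i), three-term recurrence
    P(1, :) = 1;  P(2, :) = x';
    for m = 1:n-1
        P(m+2, :) = ((2*m+1) .* x' .* P(m+1, :) - m * P(m, :)) / (m + 1);
    end

    Phi = zeros(n+1, n+1);                        % Phi(i+1, j+1) = phi_j(x_i), see (3.14)
    for j = 0:n
        for i = 0:n
            Phi(i+1, j+1) = w(j+1) * sum(((2*(0:n)+1)/2)' .* P(:, j+1) .* P(:, i+1));
        end
    end
    disp(max(max(abs(Phi - eye(n+1)))))           % should be close to machine precision

    function [x, w] = gausslegendre(N)
    % Gauss-Legendre nodes and weights on [-1,1] (Golub-Welsch).
    beta = (1:N-1) ./ sqrt(4*(1:N-1).^2 - 1);
    J = diag(beta, 1) + diag(beta, -1);
    [V, D] = eig(J);
    [x, idx] = sort(diag(D));
    w = 2 * V(1, idx)'.^2;
    end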
3.3.2 Application to Fredholm Equation with Abel Kernels
Consider the Fredholm integral equation of the second kind with the Abel kernel

    f(x) − λ ∫_{−1}^{1} |x − y|^{−α} f(y) dy = g(x),    −1 ≤ x ≤ 1.                    (3.15)

We shall approximate the function f(x) by the n-th degree interpolating polynomial

    f_n(x) = L_n f(x) = Σ_{j=0}^{n} φ_j(x) f(x_j),                                     (3.16)

which interpolates f(x) at the Gaussian abscissae x_i, i = 0, 1, ..., n, where φ_j(x), j = 0, 1, ..., n, are given by (3.14).

Substituting (3.16) for f(y) in the integral in (3.15) and collocating at the points x_i, we obtain

    f(x_i) − Σ_{j=0}^{n} f(x_j) λ ∫_{−1}^{1} |x_i − y|^{−α} φ_j(y) dy = g(x_i),    i = 0, 1, ..., n.    (3.17)

If we define

    A_ij = ∫_{−1}^{1} |x_i − y|^{−α} φ_j(y) dy,                                        (3.18)

then the equation (3.17) can be written as the (n + 1) × (n + 1) linear system

    (I − λA) f_n = g_n,                                                                (3.19)

where f_n = (f(x_0), f(x_1), ..., f(x_n))^T, g_n = (g(x_0), g(x_1), ..., g(x_n))^T and A is the matrix whose (i, j)-th element is given by (3.18).

Substituting (3.14) into (3.18) gives

    A_ij = ω_j Σ_{m=0}^{n} [(2m + 1)/2] P_m(x_j) ∫_{−1}^{1} |x_i − y|^{−α} P_m(y) dy
         = ω_j Σ_{m=0}^{n} [(2m + 1)/2] P_m(x_j) a_m(x_i),                             (3.20)

where a_m(x_i), 0 ≤ m, i ≤ n, are defined by

    a_m(x_i) = ∫_{−1}^{1} |x_i − y|^{−α} P_m(y) dy.                                    (3.21)

From Baker (1977, pp. 560-561), a_m(x), m = 0, 1, ..., n, are given by the recurrence relation

    (m + 2 − α) a_{m+1}(x) = (2m + 1) x a_m(x) − (m − 1 + α) a_{m−1}(x),    m ≥ 1,     (3.22)

where a_0 and a_1 are given by

    a_0(x) = [ (1 − x)^{1−α} + (1 + x)^{1−α} ] / (1 − α),                              (3.23)

    a_1(x) = x a_0(x) + [ (1 − x)^{2−α} − (1 + x)^{2−α} ] / (2 − α).                   (3.24)
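To show how (3.19)-(3.24) fit together, here is a compact MATLAB sketch (our own; the complete programs used in this study are listed in Appendices A and C). It assembles A from the recurrence (3.22)-(3.24) and, as a self-check, uses the manufactured input g(x) = 1 − λ a_0(x), for which the exact solution of (3.15) is f(x) ≡ 1; the computed f_n should then equal 1 up to rounding. The values of α, λ and n, and the helper gausslegendre, are our own illustrative assumptions.

    % Product integration with Gaussian abscissae for the Abel-kernel equation (3.15).
    alpha  = 0.5;  lambda = 0.1;  n = 16;
    [x, w] = gausslegendre(n + 1);                 % collocation points and Gauss weights

    % a_m(x_i) = int_{-1}^{1} |x_i - y|^(-alpha) P_m(y) dy via (3.22)-(3.24)
    a = zeros(n+1, n+1);                           % a(m+1, i) = a_m(x_i)
    a(1, :) = ((1 - x').^(1-alpha) + (1 + x').^(1-alpha)) / (1 - alpha);
    a(2, :) = x' .* a(1, :) + ((1 - x').^(2-alpha) - (1 + x').^(2-alpha)) / (2 - alpha);
    for m = 1:n-1
        a(m+2, :) = ((2*m+1) .* x' .* a(m+1, :) - (m - 1 + alpha) * a(m, :)) / (m + 2 - alpha);
    end

    % Legendre values P_m(x_j) and the matrix A of (3.20)
    P = zeros(n+1, n+1);  P(1, :) = 1;  P(2, :) = x';
    for m = 1:n-1
        P(m+2, :) = ((2*m+1) .* x' .* P(m+1, :) - m * P(m, :)) / (m + 1);
    end
    A = zeros(n+1);
    for i = 1:n+1
        for j = 1:n+1
            A(i, j) = w(j) * sum(((2*(0:n)+1)/2)' .* P(:, j) .* a(:, i));
        end
    end

    % manufactured right-hand side: with g = 1 - lambda*a_0, the exact solution is f = 1
    g  = 1 - lambda * a(1, :)';
    fn = (eye(n+1) - lambda * A) \ g;              % linear system (3.19)
    disp(max(abs(fn - 1)))                         % should be very small (rounding level)

    function [x, w] = gausslegendre(N)
    beta = (1:N-1) ./ sqrt(4*(1:N-1).^2 - 1);      % Golub-Welsch
    J = diag(beta, 1) + diag(beta, -1);
    [V, D] = eig(J);
    [x, idx] = sort(diag(D));
    w = 2 * V(1, idx)'.^2;
    end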
3.3.3 Application to Fredholm Equation with Logarithmic Kernels
Consider the Fredholm integral equation of the second kind with the logarithmic kernel

    f(x) − λ ∫_{−1}^{1} ln |x − y| f(y) dy = g(x),    −1 ≤ x ≤ 1.                      (3.25)

We shall approximate the function f(x) by the n-th degree interpolating polynomial

    f_n(x) = Σ_{j=0}^{n} φ_j(x) f(x_j),                                                (3.26)

which interpolates f(x) at x_i, i = 0, 1, ..., n, where the φ_j(x) are given by (3.14).

Substituting (3.26) into (3.25) and collocating at the points x_i, we obtain

    f_n(x_i) − Σ_{j=0}^{n} f(x_j) λ ∫_{−1}^{1} ln |x_i − y| φ_j(y) dy = g(x_i),    i = 0, 1, ..., n.    (3.27)

Defining

    A_ij = ∫_{−1}^{1} ln |x_i − y| φ_j(y) dy,                                          (3.28)

the equation (3.27) can be written as the (n + 1) × (n + 1) linear system

    (I − λA) f_n = g_n,                                                                (3.29)

where f_n = (f(x_0), f(x_1), ..., f(x_n))^T, g_n = (g(x_0), g(x_1), ..., g(x_n))^T and A is the matrix whose (i, j)-th element is given by (3.28).

Substituting (3.14) into (3.28), we obtain

    A_ij = ω_j Σ_{m=0}^{n} [(2m + 1)/2] P_m(x_j) ∫_{−1}^{1} ln |x_i − y| P_m(y) dy
         = ω_j Σ_{m=0}^{n} [(2m + 1)/2] P_m(x_j) a_m(x_i),                             (3.30)

where a_m(x_i), 0 ≤ m, i ≤ n, are defined by

    a_m(x_i) = ∫_{−1}^{1} ln |x_i − y| P_m(y) dy.                                      (3.31)

From Baker (1977, pp. 560-561), a_m(x), m = 0, 1, ..., n, are given by the recurrence relation

    (m + 2) a_{m+1}(x) = (2m + 1) x a_m(x) − (m − 1) a_{m−1}(x),    m ≥ 2,             (3.32)

where a_0, a_1, and a_2 are given by

    a_0(x) = (1 + x) ln(1 + x) + (1 − x) ln(1 − x) − 2,
    a_1(x) = (1/2)(1 − x²) ln[(1 − x)/(1 + x)] − x,                                    (3.33)
    a_2(x) = x a_1(x) + 2/3.
3.4 Product Integration with Curtis-Clenshaw Points
3.4.1 Introduction
This integration method is based on approximating the function f(x) by a polynomial f_n(x) = L_n f(x) of degree n, which interpolates f(x) at the points

    x_i = cos(iπ/n),    i = 0, 1, ..., n,                                              (3.34)

and evaluating exactly the integral

    I(f) ≈ I_n(f) = ∫_{−1}^{1} k(x) L_n f(x) dx.                                       (3.35)

We first determine suitable interpolating elements φ_j(x), j = 0, 1, ..., n, such that

    f_n(x) = L_n f(x) = Σ_{j=0}^{n} φ_j(x) f(x_j).                                     (3.36)

From Clenshaw and Curtis (1960), the interpolating polynomial L_n f(x) which interpolates the function f(x) at the points (3.34) can be written as

    L_n f(x) = (2/n) Σ''_{j=0}^{n} Σ''_{i=0}^{n} T_i(x_j) T_i(x) f(x_j)
             = Σ_{j=0}^{n} [ (2γ_j/n) Σ''_{i=0}^{n} T_i(x_j) T_i(x) ] f(x_j),

where the double prime Σ'' denotes a sum whose first and last terms are halved, and T_i(x) is the Chebyshev polynomial of the first kind defined by

    T_i(cos θ) = cos(iθ).                                                              (3.37)

Hence

    L_n f(x) = Σ_{j=0}^{n} φ_j(x) f(x_j),                                              (3.38)

where

    φ_j(x) = (2γ_j/n) Σ_{i=0}^{n} γ_i T_i(x_j) T_i(x),                                 (3.39)

and

    γ_i = 1/2,   i = 0 or i = n,
    γ_i = 1,     i = 1, 2, ..., n − 1.                                                 (3.40)
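A small MATLAB sketch of the elements (3.39)-(3.40) follows (our own illustration); it checks the Lagrange property φ_j(x_i) = δ_ij at the points (3.34).

    % Interpolating elements (3.39) at the Curtis-Clenshaw points (3.34).
    n  = 8;
    i  = (0:n)';
    xi = cos(i*pi/n);                              % the points (3.34)
    gam = ones(n+1, 1);  gam([1 n+1]) = 1/2;       % gamma_i of (3.40)

    T = cos(i * acos(xi'));                        % T(m+1, k+1) = T_m(x_k), via (3.37)

    Phi = zeros(n+1);                              % Phi(k+1, j+1) = phi_j(x_k)
    for j = 0:n
        for k = 0:n
            Phi(k+1, j+1) = (2*gam(j+1)/n) * sum(gam .* T(:, j+1) .* T(:, k+1));
        end
    end
    disp(max(max(abs(Phi - eye(n+1)))))            % Lagrange property: should be ~eps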
3.4.2 Application to Fredholm Equation with Abel Kernels
Consider the Fredholm integral equation of the second kind with the Abel kernel

    f(x) − λ ∫_{−1}^{1} |x − y|^{−α} f(y) dy = g(x),    −1 ≤ x ≤ 1.                    (3.41)

We shall approximate the function f(x) by the n-th degree interpolating polynomial

    f_n(x) = L_n f(x) = Σ_{j=0}^{n} φ_j(x) f(x_j),                                     (3.42)

which interpolates f(x) at the points (3.34), where the φ_j(x) are given by (3.39).

Substituting (3.42) for f(y) in the integral in (3.41) and collocating at the points x_i, we obtain

    f(x_i) − Σ_{j=0}^{n} f(x_j) λ ∫_{−1}^{1} |x_i − y|^{−α} φ_j(y) dy = g(x_i),    i = 0, 1, ..., n.    (3.43)

If we define

    A_ij = ∫_{−1}^{1} |x_i − y|^{−α} φ_j(y) dy,                                        (3.44)

then the equation (3.43) can be written as the (n + 1) × (n + 1) linear system

    (I − λA) f_n = g_n,                                                                (3.45)

where f_n = (f(x_0), f(x_1), ..., f(x_n))^T, g_n = (g(x_0), g(x_1), ..., g(x_n))^T and A is the matrix whose (i, j)-th element is given by (3.44).

Substituting (3.39) into (3.44), we obtain

    A_ij = (2γ_j/n) Σ_{m=0}^{n} γ_m T_m(x_j) ∫_{−1}^{1} |x_i − y|^{−α} T_m(y) dy
         = (2γ_j/n) Σ_{m=0}^{n} γ_m T_m(x_j) a_m(x_i),                                 (3.46)

where a_m(x_i), 0 ≤ m, i ≤ n, are defined by

    a_m(x_i) = ∫_{−1}^{1} |x_i − y|^{−α} T_m(y) dy.                                    (3.47)

The constants a_m(x_i), 0 ≤ m, i ≤ n, can be evaluated from the recurrence relation for a_m(x): for |x| < 1,

    [1 + (1 − α)/(m + 1)] a_{m+1}(x) − 2x a_m(x) + [1 − (1 − α)/(m − 1)] a_{m−1}(x)
        = [2/(1 − m²)] [ (1 − x)^{1−α} − (−1)^m (1 + x)^{1−α} ],    m ≥ 2,             (3.48)

where the starting values for this recurrence relation are

    a_0(x) = [ (1 − x)^{1−α} + (1 + x)^{1−α} ] / (1 − α),                              (3.49)

    a_1(x) = x a_0(x) + [ (1 − x)^{2−α} − (1 + x)^{2−α} ] / (2 − α),                   (3.50)

    a_2(x) = 4x a_1(x) − (2x² + 1) a_0(x) + [2/(3 − α)] [ (1 − x)^{3−α} + (1 + x)^{3−α} ].    (3.51)
3.4.3 Application to Fredholm Equation with Logarithmic Kernels
Consider the Fredholm integral equation of the second kind with the logarithmic kernel

    f(x) − λ ∫_{−1}^{1} ln |x − y| f(y) dy = g(x),    −1 ≤ x ≤ 1.                      (3.52)

We shall approximate the function f(x) by the n-th degree interpolating polynomial

    f_n(x) = L_n f(x) = Σ_{j=0}^{n} φ_j(x) f(x_j),                                     (3.53)

which interpolates f(x) at the points (3.34), where the φ_j(x) are given by (3.39).

Substituting (3.53) for f(y) in the integral in (3.52) and collocating at the points (3.34), we obtain

    f(x_i) − Σ_{j=0}^{n} f(x_j) λ ∫_{−1}^{1} ln |x_i − y| φ_j(y) dy = g(x_i),    i = 0, 1, ..., n.    (3.54)

If we define

    A_ij = ∫_{−1}^{1} ln |x_i − y| φ_j(y) dy,                                          (3.55)

then the equation (3.54) can be written as the (n + 1) × (n + 1) linear system

    (I − λA) f_n = g_n,                                                                (3.56)

where f_n = (f(x_0), f(x_1), ..., f(x_n))^T, g_n = (g(x_0), g(x_1), ..., g(x_n))^T and A is the matrix whose (i, j)-th element is given by (3.55).

Substituting (3.39) into (3.55), we obtain

    A_ij = (2γ_j/n) Σ_{m=0}^{n} γ_m T_m(x_j) ∫_{−1}^{1} ln |x_i − y| T_m(y) dy
         = (2γ_j/n) Σ_{m=0}^{n} γ_m T_m(x_j) a_m(x_i),                                 (3.57)

where a_m(x_i), 0 ≤ m, i ≤ n, are defined by

    a_m(x_i) = ∫_{−1}^{1} ln |x_i − y| T_m(y) dy.                                      (3.58)

The constants a_m(x_i), 0 ≤ m, i ≤ n, can be evaluated from the recurrence relation for a_m(x): for |x| < 1,

    [1 + 1/(m + 1)] a_{m+1}(x) − 2x a_m(x) + [1 − 1/(m − 1)] a_{m−1}(x)
        = [2/(1 − m²)] [ (1 − x) ln |1 − x| − (−1)^m (1 + x) ln |1 + x| ]
          − 6(1 − (−1)^m) / [(m² − 1)(m² − 4)],    m ≥ 3,                              (3.59)

where the starting values for this recurrence relation are

    a_0(x) = (1 + x) ln(1 + x) + (1 − x) ln(1 − x) − 2,                                (3.60)

    a_1(x) = x (a_0(x) + 1) + (1/2) [ (1 − x)² ln(1 − x) − (1 + x)² ln(1 + x) ],       (3.61)

    a_2(x) = −(1 + 2x²) a_0(x) + 4x a_1(x)
             + (2/3) [ (1 − x)³ ln(1 − x) + (1 + x)³ ln(1 + x) ] − (4/9)(1 + 3x²),     (3.62)

    a_3(x) = 2x(3 + 2x²) a_0(x) − 3(4x² + 1) a_1(x) + 6x a_2(x)
             + (1 − x)⁴ ln(1 − x) − (1 + x)⁴ ln(1 + x) + 2x(1 + x²).                   (3.63)
3.5 Concluding Remarks

This chapter presented two product integration methods, product integration with Gaussian abscissae and product integration with Curtis-Clenshaw points, and applied them to weakly singular Fredholm integral equations of the second kind with Abel and logarithmic kernels.

An advantage of the product integration method is that it can be used to calculate integrals with singularities. The only assumption made is that the integrand is an absolutely integrable function.

A disadvantage is that the method is not generally applicable, since it requires a recurrence relation which depends on the kernel of the integral equation (Piessens and Branders, 1976). However, for some important kernels, such as the kernels k(x, y) = |x − y|^{−α}, 0 < α < 1, and k(x, y) = ln |x − y|, the recurrence relations for the Legendre and Chebyshev polynomials are given in equations (3.22), (3.32), (3.48), and (3.59) (Baker, 1977, pp. 560-561).
CHAPTER 4
QUADRATURE FORMULA
4.1 Introduction

Numerical quadrature rules are very important because even simple functions may not have exact formulas for their antiderivatives (indefinite integrals). Even when an exact formula for the antiderivative does exist, it may be difficult to find. In general a numerical quadrature formula approximates a definite integral by a weighted sum of function values at points within the interval of integration. A numerical quadrature rule has the form

    ∫_{a}^{b} f(x) dx ≈ Σ_{i=0}^{n} c_i f(x_i),                                        (4.1)

where the coefficients c_i depend on the particular method.

The basic procedure for approximating the definite integral of a function f on the interval [a, b] is to determine an interpolating polynomial that approximates f and then integrate this polynomial. If the integrand f is unbounded at the endpoints of the interval of integration, then we say the function has singularities at the endpoints a and b. For cases such as this, the normal rules of integration approximation must be modified.
In this chapter, we shall suggest a method for the numerical approximation of integrals with the above-mentioned singularities and the expected singularities of the input function g(x) of the weakly singular Fredholm integral equation of the second kind. The method is based on using a transformation such that the new integrand is smooth on the interval of integration. Without loss of generality, we shall assume that the interval of integration is [−1, 1]. Integrals over other intervals can be reduced to integrals over [−1, 1] by means of a linear transformation.
4.2 An Integration Quadrature Formula

In this section, we shall state the Hermite, Kress, and modified Kress nonlinear transformations, in such a way that the derivatives of the new integrand vanish up to a certain order at the endpoints ±1 for the Hermite, Kress, and modified Kress transformations, and at the singularities of the input function g(x) for the Hermite and modified Kress transformations. Proceeding this way we obtain a high order quadrature rule by using the Gauss-Legendre rule and the Curtis-Clenshaw rule for the transformed integral. We shall describe the numerical integration rule which we will use in some detail. Then in the subsections 4.2.1, 4.2.2, and 4.2.3, the Hermite, Kress, and modified Kress transformations, respectively, are applied to a class of weakly singular Fredholm integral equations of the second kind with singularities at the endpoints ±1 and at the singularities of the input function g(x).
4.2.1
Hermite Smoothing Transformation
For smoothing the solution of one-dimensional linear weakly singular
Fredholm integral equations on bounded intervals, with input functions which
36
may be smooth or not, Monegato and Scuderi (1998) introduce a simple
smoothing change of variable.
Firstly, consider the one-dimensional linear weakly singular Fredholm integral equation

\[ f(x) - \lambda \int_{-1}^{1} k(x, y)\,f(y)\,dy = g(x), \qquad -1 \le x, y \le 1, \tag{4.2} \]

where

\[ k(x, y) = |x - y|^{-\alpha}, \quad 0 < \alpha < 1, \tag{4.3} \]

or

\[ k(x, y) = \ln|x - y|. \tag{4.4} \]

Suppose that the inhomogeneous term g(x) has finite jumps or singularities, possibly in one of its derivatives, at a finite number of points of (−1, 1), say −1 < x_1 < x_2 < ... < x_M < 1, and assume that g(x) is smooth (for simplicity C^∞) everywhere in [−1, 1] except at the points x_k (k = 1, ..., M), where its singularities satisfy the conditions

\[ |(x - x_k)^{i+1} g^{(i)}(x)| \le c, \qquad i = 0, 1, \ldots, \tag{4.5} \]
in a neighbourhood of xk . Then they choose a nonlinear transformation y = w(t),
where w(t) is a sufficiently smooth monotone function mapping [−1, 1] onto
[−1, 1], having −1 = x0 , x1 , ..., xM , xM +1 = 1, as fixed points, i.e., xk = w(xk ) and
whose leading derivatives vanish at xk . The simplest, and most efficient from the
computational point of view, among those satisfying the above properties is the
piecewise Hermite interpolation polynomial H_M(t) associated with the partition −1 = x_0 < x_1 < x_2 < ... < x_M < x_{M+1} = 1 of [−1, 1], defined in each subinterval [x_k, x_{k+1}], k = 0, ..., M, by the conditions

\[ H_M(x_j) = x_j, \quad j = k, k+1, \qquad H_M^{(i)}(x_j) = 0, \quad j = k, k+1, \; i = 1, \ldots, \alpha_j - 1, \; \alpha_j \ge 2. \tag{4.6} \]
The integers α_k, k = 0, ..., M + 1, are chosen according to the smoothing effect that w(t) ought to produce at the points x_k, k = 0, ..., M + 1. Notice that the smoothness of w(t) itself does not depend on the choice of α_0 and α_{M+1}.
The construction and evaluation of H_M(t) and H_M'(t) is not as trivial as it might appear at first, particularly if we want to have an automatic program where the α_k may be arbitrarily chosen. A numerically stable and efficient procedure is the following one.
Since we know a priori that in [x_k, x_{k+1}]

\[ H_M'(t) = c_k\,(t - x_k)^{\alpha_k - 1}\,(x_{k+1} - t)^{\alpha_{k+1} - 1}, \tag{4.7} \]

where c_k is a suitable constant, we can use this expression to derive the following representation for H_M(t):

\[ H_M(t) = c_k \int_{x_k}^{t} (y - x_k)^{\alpha_k - 1}\,(x_{k+1} - y)^{\alpha_{k+1} - 1}\,dy + x_k, \qquad t \in [x_k, x_{k+1}]. \tag{4.8} \]

By imposing the conditions H_M(x_{k+1}) = x_{k+1}, k = 0, ..., M, we determine the coefficients c_k as

\[ c_k = (x_{k+1} - x_k)^{2 - \alpha_k - \alpha_{k+1}}\,\frac{(\alpha_k + \alpha_{k+1} - 1)!}{(\alpha_k - 1)!\,(\alpha_{k+1} - 1)!}, \qquad k = 0, \ldots, M. \tag{4.9} \]
The Hermite transformation can then be defined as

\[ H(t) = H_M(t), \qquad t \in [x_k, x_{k+1}], \quad k = 0, 1, \ldots, M, \tag{4.10} \]

where H_M is as in (4.8) and M is the number of singularities of the input function g(x); for more details refer to Monegato and Scuderi (1998).
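To make the construction concrete, the following MATLAB sketch (our own illustration, not the code of APPENDIX C; the function name, argument layout, and the use of numerical quadrature for (4.8) are assumptions) evaluates H_M(t) and H_M'(t) for a given partition x_k and exponents α_k.

```matlab
% Illustrative sketch: the piecewise Hermite smoothing transformation
% H_M(t) and its derivative H_M'(t) from (4.7)-(4.9).
function [H, Hd] = hermite_transform(t, xk, alphak)
% t      : evaluation points in [-1,1]
% xk     : partition -1 = x0 < x1 < ... < x_{M+1} = 1
% alphak : integer exponents alpha_0, ..., alpha_{M+1}, each >= 2
    H  = zeros(size(t));
    Hd = zeros(size(t));
    for k = 1:numel(xk)-1
        a = alphak(k);  b = alphak(k+1);
        ck = (xk(k+1)-xk(k))^(2-a-b) * factorial(a+b-1) / ...
             (factorial(a-1)*factorial(b-1));              % equation (4.9)
        idx = (t >= xk(k)) & (t <= xk(k+1));
        for ii = reshape(find(idx), 1, [])
            % H_M'(t) as in (4.7)
            Hd(ii) = ck * (t(ii)-xk(k))^(a-1) * (xk(k+1)-t(ii))^(b-1);
            % H_M(t) as in (4.8), here by numerical quadrature of (4.7)
            H(ii)  = ck * integral(@(y) (y-xk(k)).^(a-1).*(xk(k+1)-y).^(b-1), ...
                                   xk(k), t(ii)) + xk(k);
        end
    end
end
```

For the simple partitions used later (e.g. Example 5.2), the integral in (4.8) is elementary and could be written out in closed form instead of calling a quadrature routine.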
4.2.2   Kress Smoothing Transformation

Kress describes a numerical quadrature rule for the integral

\[ I = \int_0^{2\pi} T(x)\,dx, \tag{4.11} \]

where the integrand T(x) is smooth in (0, 2π) but has singularities at the endpoints x = 0 and x = 2π. Let the function W : [0, 2π] → [0, 2π] be bijective, strictly monotonically increasing, and infinitely differentiable, and in addition let the derivatives of W vanish up to a certain order at the endpoints. Substituting x = W(t) then gives

\[ I = \int_0^{2\pi} T(x)\,dx = \int_0^{2\pi} h(t)\,dt, \tag{4.12} \]

where

\[ h(t) = W'(t)\,T(W(t)), \qquad 0 < t < 2\pi. \tag{4.13} \]
Applying the trapezoidal rule for 2n + 1 points to the transformed integral now yields the quadrature formula

\[ I = \int_0^{2\pi} T(x)\,dx \approx \frac{\pi}{n} \sum_{j=1}^{2n-1} a_j^{(n)}\,T\bigl(t_j^{(n)}\bigr), \tag{4.14} \]

with the weights and mesh points given by

\[ a_j^{(n)} = W'\Bigl(\frac{j\pi}{n}\Bigr), \qquad t_j^{(n)} = W\Bigl(\frac{j\pi}{n}\Bigr), \qquad j = 1, \ldots, 2n-1. \tag{4.15} \]
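As a minimal sketch of how (4.14)–(4.15) would be applied, assuming function handles T, W, and Wd for the integrand, the substitution W, and its derivative W' (all supplied by the caller):

```matlab
% Trapezoidal-type rule (4.14)-(4.15) applied to the transformed integral.
function I = kress_rule(T, W, Wd, n)
    j  = 1:2*n-1;                  % interior mesh indices
    tj = W(j*pi/n);                % mesh points t_j^(n) = W(j*pi/n)
    aj = Wd(j*pi/n);               % weights     a_j^(n) = W'(j*pi/n)
    I  = (pi/n) * sum(aj .* T(tj));
end
```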
A typical example of such a substitution is given by

\[ W(t) = 2\pi\,\frac{[V(t)]^p}{[V(t)]^p + [V(2\pi - t)]^p}, \qquad 0 \le t \le 2\pi, \tag{4.16} \]

where

\[ V(t) = \Bigl(\frac{1}{p} - \frac{1}{2}\Bigr)\Bigl(\frac{\pi - t}{\pi}\Bigr)^3 + \frac{1}{p}\,\frac{t - \pi}{\pi} + \frac{1}{2}, \tag{4.17} \]

and p is a positive integer. It can be shown that

\[ W'(t) = 2\pi p\,\frac{[V(2\pi - t)]^p [V(t)]^{p-1} V'(t) + [V(t)]^p [V(2\pi - t)]^{p-1} V'(2\pi - t)}{\bigl([V(t)]^p + [V(2\pi - t)]^p\bigr)^2}, \qquad 0 \le t \le 2\pi. \tag{4.18} \]
From (4.16), (4.17), and (4.18), for p > 1 we obtain

\[ V(0) = 0, \quad V(2\pi) = 1, \qquad W(0) = 0, \quad W(2\pi) = 2\pi, \qquad W'(0) = W'(2\pi) = 0, \quad W'(\pi) = 2. \tag{4.19} \]
Note that the cubic polynomial V is chosen such that V(0) = 0, V(2π) = 1, and W'(π) = 2. The latter property ensures, roughly speaking, that one half of the grid points is equally distributed over the whole interval, whereas the other half accumulates towards the two endpoints; for more details see Kress (1990).
We will use the same transformation, with a simple modification, on the interval [−1, 1], as follows. Let

\[ w(t) = a + b\,W(s(t)), \qquad t \in [-1, 1], \tag{4.20} \]

with s(t) = c + dt, t ∈ [−1, 1], such that W = 0 implies w = −1, W = 2π implies w = 1, t = −1 implies s = 0, and t = 1 implies s = 2π. These conditions give a = −1, b = 1/π, and c = d = π. Then the transformation (4.20) becomes

\[ w(t) = -1 + \frac{1}{\pi}\,W\bigl(\pi(t + 1)\bigr), \qquad t \in [-1, 1]. \tag{4.21} \]

If we define v(t) = V(π(1 + t)), where V is as in (4.17), then the transformation (4.21) becomes

\[ w(t) = \frac{[v(t)]^p - [v(-t)]^p}{[v(t)]^p + [v(-t)]^p}, \qquad -1 \le t \le 1, \tag{4.22} \]

where

\[ v(t) = \Bigl(\frac{1}{2} - \frac{1}{p}\Bigr)t^3 + \frac{t}{p} + \frac{1}{2}, \tag{4.23} \]

and p is a positive integer. We will call the transformation (4.22) the Kress smoothing transformation on the interval [−1, 1].
It can be shown that

\[ w'(t) = 2p\,\frac{[v(t)]^p [v(-t)]^{p-1} v'(-t) + [v(-t)]^p [v(t)]^{p-1} v'(t)}{\bigl([v(t)]^p + [v(-t)]^p\bigr)^2}, \qquad -1 \le t \le 1. \tag{4.24} \]

From (4.22), (4.23), and (4.24), for p > 1 we obtain

\[ v(-1) = 0, \quad v(1) = 1, \qquad w(-1) = -1, \quad w(1) = 1, \qquad w'(-1) = w'(1) = 0, \quad w'(0) = 2. \tag{4.25} \]
Then (4.12) is modified to

\[ I = \int_{-1}^{1} T(x)\,dx = \int_{-1}^{1} h(t)\,dt, \tag{4.26} \]

where

\[ h(t) = w'(t)\,T(w(t)), \qquad -1 < t < 1. \tag{4.27} \]

The integral on the right-hand side of (4.26) has a smooth integrand (after the smoothing transformation). Hence, it can be evaluated using any standard quadrature formula, such as the Gauss formula or the Clenshaw formula, as will be done later.
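The transformation (4.22) and its derivative (4.24) are straightforward to evaluate; the following MATLAB sketch (our own helper with an assumed name, rather than the ‘w’ and ‘wd’ vectors of the appendix code) does so for a vector of points t in [−1, 1]:

```matlab
% The Kress smoothing transformation (4.22) and its derivative (4.24) on [-1,1].
function [w, wd] = kress_transform(t, p)
    v   = @(s) (1/2 - 1/p).*s.^3 + s./p + 1/2;       % equation (4.23)
    vd  = @(s) 3*(1/2 - 1/p).*s.^2 + 1/p;            % v'(s)
    den = v(t).^p + v(-t).^p;
    w   = (v(t).^p - v(-t).^p) ./ den;               % equation (4.22)
    wd  = 2*p*( v(t).^p  .* v(-t).^(p-1) .* vd(-t) ...
              + v(-t).^p .* v(t).^(p-1)  .* vd(t) ) ./ den.^2;   % equation (4.24)
end
```

For instance, with p = 2 at the five Gauss–Legendre nodes used in Example 5.1 below, this reproduces the tabulated values of w and w' (in particular w'(0) = 2, in agreement with (4.25)).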
4.2.3   Modified Kress Transformation

In this subsection we introduce a new smoothing transformation, obtained by modifying the Kress smoothing transformation so that it is applicable to weakly singular Fredholm integral equations with a non-smooth input function g(x). This transformation will be called the modified Kress transformation.
Suppose that g(x) has finite jumps or singularities, possibly in one of its derivatives, at a finite number of points of (−1, 1), say −1 < x_1 < x_2 < ... < x_M < 1, and let x_0 = −1, x_{M+1} = 1. We need to define a new one-to-one transformation w_k = w_k(t) on each interval [x_k, x_{k+1}], k = 0, 1, ..., M, such that the following conditions are satisfied:

1. w_k(x_k) = x_k, w_k(x_{k+1}) = x_{k+1}.
2. w_k'(x_k) = w_k'(x_{k+1}) = 0, w_k'((x_{k+1} + x_k)/2) = 2.
We will use the transformation (4.22) to define the new transformation. Let

\[ w_k(t) = a + b\,w(s(t)), \qquad t \in [x_k, x_{k+1}], \tag{4.28} \]

with s(t) = c + dt, t ∈ [x_k, x_{k+1}], such that w = −1 implies w_k = x_k, w = 1 implies w_k = x_{k+1}, t = x_k implies s = −1, and t = x_{k+1} implies s = 1. These conditions give

a = (x_{k+1} + x_k)/2,  b = (x_{k+1} − x_k)/2,  c = −(x_{k+1} + x_k)/(x_{k+1} − x_k),  d = 2/(x_{k+1} − x_k).

Then the transformation (4.28) becomes

\[ w_k(t) = \frac{1}{2}\Bigl[(x_{k+1} + x_k) + (x_{k+1} - x_k)\,w\Bigl(\frac{2t - (x_{k+1} + x_k)}{x_{k+1} - x_k}\Bigr)\Bigr], \qquad t \in [x_k, x_{k+1}], \quad k = 0, 1, \ldots, M, \tag{4.29} \]

and

\[ w_k'(t) = w'\Bigl(\frac{2t - (x_{k+1} + x_k)}{x_{k+1} - x_k}\Bigr), \qquad t \in [x_k, x_{k+1}], \quad k = 0, 1, \ldots, M. \tag{4.30} \]

Using (4.25), (4.29), and (4.30) we obtain the following:

1. w_k(x_k) = x_k, w_k(x_{k+1}) = x_{k+1}.
2. w_k'(x_k) = w'(−1) = 0, w_k'(x_{k+1}) = w'(1) = 0.
3. w_k'((x_{k+1} + x_k)/2) = w'(0) = 2.

The new transformation can then be defined on [−1, 1] as

\[ w(t) = w_k(t), \qquad t \in [x_k, x_{k+1}], \quad k = 0, 1, \ldots, M, \tag{4.31} \]

where w_k is as in (4.29) and M is the number of singularities of the input function g(x). This transformation will be called the modified Kress transformation.
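A corresponding sketch for (4.29)–(4.31), again with assumed names and building on the kress_transform helper above, is:

```matlab
% The modified Kress transformation (4.29)-(4.31) for a partition
% -1 = x0 < x1 < ... < x_{M+1} = 1 of the singular points of g.
function [w, wd] = modified_kress_transform(t, xk, p)
    w  = zeros(size(t));
    wd = zeros(size(t));
    for k = 1:numel(xk)-1
        idx = (t >= xk(k)) & (t <= xk(k+1));
        s   = (2*t(idx) - (xk(k+1) + xk(k))) / (xk(k+1) - xk(k));   % s(t)
        [ws, wds] = kress_transform(s, p);
        w(idx)  = ((xk(k+1) + xk(k)) + (xk(k+1) - xk(k)) .* ws) / 2; % (4.29)
        wd(idx) = wds;                                               % (4.30)
    end
end
```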
4.3   Application to Weakly Singular Fredholm Integral Equations of the Second Kind

In this section, we shall apply the Hermite, Kress, and modified Kress transformations (4.10), (4.22), and (4.31), respectively, to solve numerically the following Fredholm integral equation of the second kind:

\[ f(x) - \lambda \int_{-1}^{1} k(x, y)\,f(y)\,dy = g(x), \qquad -1 \le x \le 1, \tag{4.32} \]

where

\[ k(x, y) = |x - y|^{-\alpha}, \quad 0 < \alpha < 1, \tag{4.33} \]

or

\[ k(x, y) = \ln|x - y|. \tag{4.34} \]
The aim of this change of variables is to obtain an integral equation whose solution no longer has singularities in its lower-order derivatives.

Suppose we introduce the change of variable x = w(t) into (4.32), where w(t) is either the Hermite, Kress, or modified Kress transformation as in equations (4.10), (4.22), and (4.31), respectively; we get

\[ f(w(t)) - \lambda \int_{-1}^{1} k(w(t), y)\,f(y)\,dy = g(w(t)), \qquad -1 \le w(t) \le 1. \tag{4.35} \]

Setting y = w(s) in (4.35), we obtain

\[ f(w(t)) - \lambda \int_{-1}^{1} k(w(t), w(s))\,f(w(s))\,w'(s)\,ds = g(w(t)), \qquad -1 \le w(t) \le 1, \tag{4.36} \]

where −1 = w^{−1}(−1) ≤ t ≤ w^{−1}(1) = 1. Multiplying both sides of (4.36) by w'(t) and setting

\[ \theta(t) = w'(t)\,f(w(t)), \qquad \xi(t) = w'(t)\,g(w(t)), \tag{4.37} \]

we obtain

\[ \theta(t) - \lambda \int_{-1}^{1} k(w(t), w(s))\,w'(t)\,\theta(s)\,ds = \xi(t), \qquad -1 \le t \le 1. \tag{4.38} \]
In the sequel we shall apply the transformations (4.10), (4.22), and (4.31)
to solve numerically the following Fredholm integral equation of the second kind
with Abel kernel and then with logarithmic kernel.
4.3.1   Fredholm Weakly Singular Integral Equations of the Second Kind with Abel Kernels

Let us consider the Fredholm integral equation of the second kind with Abel kernel

\[ f(x) - \lambda \int_{-1}^{1} |x - y|^{-\alpha}\,f(y)\,dy = g(x), \qquad -1 \le x \le 1, \tag{4.39} \]

where 0 < α < 1. Suppose we introduce the change x = w(t) into (4.39), where w(t) is either the Hermite, Kress, or modified Kress transformation as in equations (4.10), (4.22), and (4.31), respectively; we get

\[ f(w(t)) - \lambda \int_{-1}^{1} |w(t) - y|^{-\alpha}\,f(y)\,dy = g(w(t)). \tag{4.40} \]

Setting y = w(s), we obtain

\[ f(w(t)) - \lambda \int_{-1}^{1} |w(t) - w(s)|^{-\alpha}\,f(w(s))\,w'(s)\,ds = g(w(t)). \tag{4.41} \]
Multiplying both sides of (4.41) by w'(t), we get

\[ f(w(t))\,w'(t) - \lambda \int_{-1}^{1} |w(t) - w(s)|^{-\alpha}\,f(w(s))\,w'(s)\,w'(t)\,ds = g(w(t))\,w'(t). \tag{4.42} \]

Setting

\[ \theta(t) = w'(t)\,f(w(t)), \qquad \xi(t) = w'(t)\,g(w(t)), \tag{4.43} \]

we obtain

\[ \theta(t) - \lambda \int_{-1}^{1} |w(t) - w(s)|^{-\alpha}\,\theta(s)\,w'(t)\,ds = \xi(t). \tag{4.44} \]

Then, defining for computational convenience

\[ \delta_\alpha(t, s) = \begin{cases} \bigl|\frac{w(t) - w(s)}{t - s}\bigr|^{-\alpha} w'(t), & t \ne s, \\ |w'(t)|^{-\alpha}\,w'(t), & t = s, \end{cases} \tag{4.45} \]

we rewrite (4.44) as

\[ \theta(t) - \lambda \int_{-1}^{1} \delta_\alpha(t, s)\,|t - s|^{-\alpha}\,\theta(s)\,ds = \xi(t). \tag{4.46} \]
4.3.2   Fredholm Weakly Singular Integral Equations of the Second Kind with Logarithmic Kernels

Let us consider the Fredholm integral equation of the second kind with logarithmic kernel

\[ f(x) - \lambda \int_{-1}^{1} \ln|x - y|\,f(y)\,dy = g(x), \qquad -1 \le x \le 1. \tag{4.47} \]

Suppose we introduce the change x = w(t) into (4.47), where w(t) is either the Hermite, Kress, or modified Kress transformation as in equations (4.10), (4.22), and (4.31), respectively; we get

\[ f(w(t)) - \lambda \int_{-1}^{1} \ln|w(t) - y|\,f(y)\,dy = g(w(t)). \tag{4.48} \]

Setting y = w(s), we obtain

\[ f(w(t)) - \lambda \int_{-1}^{1} \ln|w(t) - w(s)|\,f(w(s))\,w'(s)\,ds = g(w(t)). \tag{4.49} \]

Multiplying both sides of (4.49) by w'(t), we get

\[ f(w(t))\,w'(t) - \lambda \int_{-1}^{1} \ln|w(t) - w(s)|\,f(w(s))\,w'(s)\,w'(t)\,ds = g(w(t))\,w'(t). \tag{4.50} \]

Setting

\[ \theta(t) = w'(t)\,f(w(t)), \qquad \xi(t) = w'(t)\,g(w(t)), \tag{4.51} \]

we obtain

\[ \theta(t) - \lambda \int_{-1}^{1} \ln|w(t) - w(s)|\,\theta(s)\,w'(t)\,ds = \xi(t). \tag{4.52} \]

Then, defining for computational convenience

\[ \delta(t, s) = \begin{cases} \ln\bigl|\frac{w(t) - w(s)}{t - s}\bigr|\,w'(t), & t \ne s, \\ \ln|w'(t)|\,w'(t), & t = s, \end{cases} \tag{4.53} \]

and noting that

\[ \ln|w(t) - w(s)| = \ln\Bigl|\frac{w(t) - w(s)}{t - s}\,(t - s)\Bigr| = \ln\Bigl|\frac{w(t) - w(s)}{t - s}\Bigr| + \ln|t - s|, \tag{4.54} \]

we can use (4.53) and (4.54) to rewrite (4.52) as

\[ \theta(t) - \lambda \int_{-1}^{1} \bigl(\delta(t, s) + w'(t)\,\ln|t - s|\bigr)\,\theta(s)\,ds = \xi(t). \tag{4.55} \]
The transformed equations (4.46) and (4.55) will be solved numerically in the next chapter, using the product integration methods with Gaussian abscissae and weights, as well as with Curtis-Clenshaw points.
CHAPTER 5
NUMERICAL RESULTS
5.1   Introduction
In this chapter, we shall use the product integration methods described
in Chapter 3 together with the Hermite, Kress, and modified Kress smoothing
transformations introduced in Chapter 4 to solve numerically the weakly singular
Fredholm integral equation of the second kind
\[ f(x) - \lambda \int_{-1}^{1} k(x, y)\,f(y)\,dy = g(x), \qquad -1 \le x \le 1, \tag{5.1} \]

where

\[ k(x, y) = |x - y|^{-\alpha}, \quad 0 < \alpha < 1, \tag{5.2} \]

or

\[ k(x, y) = \ln|x - y|. \tag{5.3} \]
It is known that the solution of the integral equation (5.1) has singularities
at the endpoints x = ±1 (see Chapter 2). Hence, we shall use the Hermite, Kress,
and modified Kress smoothing transformations in the equations (4.10), (4.22),
and (4.31) respectively to reduce (5.1) to a weakly singular integral equation
whose solution is smooth at x = ±1. Then we shall use the product integration
method to solve the new equation numerically by reducing it approximately to the algebraic linear system

\[ (I - \lambda A)\,z_n = h_n, \tag{5.4} \]

where z_n = (z_0, z_1, ..., z_n)^T is an approximation to the values of the function f at the points x_0, x_1, ..., x_n, i.e., f(x_i) ≈ z_i, i = 0, 1, ..., n, h_n = (h(x_0), h(x_1), ..., h(x_n))^T, and A is an (n + 1) × (n + 1) matrix whose entries depend on the method used, as we shall determine. The linear system (5.4) is then solved with MATLAB's backslash operator, which uses Gaussian elimination.
5.2   Product Integration with Gaussian Abscissae and Weights
In this section, we shall use the Hermite, Kress, and modified Kress
transformations (4.10), (4.22), and (4.31) respectively to smooth the solution
of the weakly singular integral equations of the second kind with Abel and
logarithmic kernels and then we shall solve them numerically using the product
integration with Gaussian abscissae and weights.
5.2.1   Weakly Singular Integral Equations with Abel Kernels

Consider the weakly singular Fredholm integral equation of the second kind with Abel kernel

\[ f(x) - \lambda \int_{-1}^{1} |x - y|^{-\alpha}\,f(y)\,dy = g(x), \qquad -1 \le x \le 1. \tag{5.5} \]

Using the smoothing transformation x = w(t), where w(t) is either the Hermite, Kress, or modified Kress transformation as in (4.10), (4.22), and (4.31), respectively, equation (5.5) reduces, as shown in Chapter 4, to the equation

\[ \theta(t) - \lambda \int_{-1}^{1} \delta_\alpha(t, s)\,|t - s|^{-\alpha}\,\theta(s)\,ds = \xi(t), \tag{5.6} \]

where

\[ \delta_\alpha(t, s) = \begin{cases} \bigl|\frac{w(t) - w(s)}{t - s}\bigr|^{-\alpha} w'(t), & t \ne s, \\ |w'(t)|^{-\alpha}\,w'(t), & t = s. \end{cases} \tag{5.7} \]

Hence, for the Gaussian abscissae x_i, i = 0, 1, ..., n, we have

\[ \theta(x_i) - \lambda \int_{-1}^{1} \delta_\alpha(x_i, s)\,|x_i - s|^{-\alpha}\,\theta(s)\,ds = \xi(x_i). \tag{5.8} \]
5.2.1.1   The Matrix Elements

Since the kernel δ_α(t, s) is continuous in both variables, the function δ_α(x_i, s)θ(s) is continuous as a function of s. We shall approximate the function δ_α(x_i, s)θ(s) by the nth degree interpolating polynomial

\[ \delta_\alpha(x_i, s)\,\theta(s) \approx L\theta_n(s) = \sum_{j=0}^{n} \phi_j(s)\,\delta_\alpha(x_i, x_j)\,\theta(x_j), \tag{5.9} \]

which interpolates δ_α(x_i, s)θ(s) at x_i, i = 0, 1, ..., n, where φ_j(s) is given by

\[ \phi_j(s) = \omega_j \sum_{m=0}^{n} \frac{2m+1}{2}\,P_m(x_j)\,P_m(s), \qquad 0 \le j \le n. \tag{5.10} \]

Substituting (5.9) into (5.8) and collocating at the points x_i, we obtain

\[ \theta(x_i) - \sum_{j=0}^{n} \lambda\,\delta_\alpha(x_i, x_j)\,\theta(x_j) \int_{-1}^{1} |x_i - s|^{-\alpha}\,\phi_j(s)\,ds = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.11} \]

If we define

\[ b_{ij} = \int_{-1}^{1} |x_i - s|^{-\alpha}\,\phi_j(s)\,ds, \tag{5.12} \]

then equation (5.11) can be written as

\[ \theta(x_i) - \sum_{j=0}^{n} b_{ij}\,\lambda\,\delta_\alpha(x_i, x_j)\,\theta(x_j) = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.13} \]

Equation (5.13) can be written as the (n + 1) × (n + 1) linear system

\[ (I - \lambda A)\,\theta_n = \xi_n, \tag{5.14} \]

where θ_n = (θ(x_0), θ(x_1), ..., θ(x_n))^T, ξ_n = (ξ(x_0), ξ(x_1), ..., ξ(x_n))^T, and A = (a_{ij})_{(n+1)×(n+1)} is the matrix whose (i, j)th element is given by

\[ a_{ij} = b_{ij}\,\delta_\alpha(x_i, x_j). \tag{5.15} \]

The constants b_{ij} in (5.12) will be calculated as described in Chapter 3.
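The following MATLAB sketch shows how the pieces fit together for the Abel kernel; it is an illustration only. The routines gauss_points and abel_weights stand in for the appendix functions that return the Gauss–Legendre nodes/weights and the product-integration weights b_ij of (5.12) (computed as in Chapter 3); their names and signatures are assumptions, as is the choice of input function g.

```matlab
% Assemble and solve the collocation system (5.14) for the Abel kernel,
% using the Kress transformation.  x, wd, and the result are column vectors.
lambda = 1/pi;  alpha = 1/2;  n = 16;  p = 2;
g = @(x) abs(x);                        % example input function
[x, ~]  = gauss_points(n);              % Gauss-Legendre nodes   (assumed routine)
B       = abel_weights(n, alpha, x);    % b_ij of (5.12)         (assumed routine)
[w, wd] = kress_transform(x, p);        % w(x_i), w'(x_i) from (4.22), (4.24)
delta   = zeros(n+1);                   % delta_alpha(x_i, x_j) of (5.7)
for i = 1:n+1
    for j = 1:n+1
        if i == j
            delta(i,j) = abs(wd(i))^(-alpha) * wd(i);
        else
            delta(i,j) = abs((w(i)-w(j))/(x(i)-x(j)))^(-alpha) * wd(i);
        end
    end
end
A     = B .* delta;                     % a_ij = b_ij delta_alpha(x_i,x_j), (5.15)
xi    = wd .* g(w);                     % xi(x_i) = w'(x_i) g(w(x_i)), (4.43)
theta = (eye(n+1) - lambda*A) \ xi;     % solve (5.14) by Gaussian elimination
```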
5.2.1.2   Examples

We solve equation (5.6) with λ = 1/π and α = 1/2.

We will use the following abbreviations: GM for the Gauss method, CM for the Clenshaw method, HT for the Hermite transformation, KT for the Kress transformation, and MKT for the modified Kress transformation.
Example 5.1
Using GM with KT (p = 2, 3), exact solution f (x) = x3 , as shown in Tables 5.1,
and 5.2.
In this example we seek a comparison between the exact solution and the
approximate solution of the transformed equation (5.6). Since they depend on the
corresponding solution and approximate solution of the original equation (5.5),
let us start from the original equation. Suppose that the exact solution of the original equation (5.5) with λ = 1/π and α = 1/2 is f(x) = x^3; then we determine the input function g(x) as shown below.
Substituting f(x) = x^3 into (5.5) with λ = 1/π and α = 1/2 gives

\[ g(x) = x^3 - \frac{1}{\pi}\,I(x), \qquad I(x) = \int_{-1}^{1} |x - y|^{-\frac{1}{2}}\,y^3\,dy. \]

Now,

\[ I(x) = \int_{-1}^{1} |x - y|^{-\frac{1}{2}}\,y^3\,dy = \int_{-1}^{x} (x - y)^{-\frac{1}{2}}\,y^3\,dy + \int_{x}^{1} (y - x)^{-\frac{1}{2}}\,y^3\,dy. \]

Setting I(x) = I_1(x) + I_2(x), where

\[ I_1(x) = \int_{-1}^{x} (x - y)^{-\frac{1}{2}}\,y^3\,dy, \qquad I_2(x) = \int_{x}^{1} (y - x)^{-\frac{1}{2}}\,y^3\,dy, \]

and, for I_1(x), substituting x − y = u^2, we obtain

\[ I_1(x) = 2\int_{0}^{(1+x)^{1/2}} (x - u^2)^3\,du = 2\Bigl(-\tfrac{1}{7}x_1^7 + \tfrac{3}{5}x\,x_1^5 - x^2 x_1^3 + x^3 x_1\Bigr), \]

where x_1 = (1 + x)^{1/2}. In the same way,

\[ I_2(x) = 2\Bigl(\tfrac{1}{7}x_2^7 + \tfrac{3}{5}x\,x_2^5 + x^2 x_2^3 + x^3 x_2\Bigr), \]

where x_2 = (1 − x)^{1/2}. Then

\[ g(x) = x^3 - \frac{2}{\pi}\Bigl(-\tfrac{1}{7}x_1^7 + \tfrac{3}{5}x\,x_1^5 - x^2 x_1^3 + x^3 x_1 + \tfrac{1}{7}x_2^7 + \tfrac{3}{5}x\,x_2^5 + x^2 x_2^3 + x^3 x_2\Bigr), \]

where x_1 = (1 + x)^{1/2} and x_2 = (1 − x)^{1/2}.
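For use in a script such as the sketch in Section 5.2.1.1, the closed form above can be transcribed directly (our own transcription):

```matlab
% Input function of Example 5.1, with x1 = sqrt(1+x) and x2 = sqrt(1-x).
g = @(x) x.^3 - (2/pi) * ( -(1+x).^(7/2)/7 + (3/5)*x.*(1+x).^(5/2) ...
        - x.^2.*(1+x).^(3/2) + x.^3.*sqrt(1+x) ...
        + (1-x).^(7/2)/7 + (3/5)*x.*(1-x).^(5/2) ...
        + x.^2.*(1-x).^(3/2) + x.^3.*sqrt(1-x) );
```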
To solve (5.6), which is transformed from (5.5), the approximate solution is the solution of the linear algebraic system (5.14). Let us solve (5.6) in detail for n = 4, p = 2. The system is

\[ \Bigl(I_5 - \frac{1}{\pi} A_5\Bigr)\,\theta_4 = \xi_4, \]

where I_5 is the identity matrix of order 5 and

\[ \theta_4 = \bigl(\theta(x_0), \theta(x_1), \theta(x_2), \theta(x_3), \theta(x_4)\bigr)^T = \bigl(w'(x_0)f(w(x_0)),\, w'(x_1)f(w(x_1)),\, w'(x_2)f(w(x_2)),\, w'(x_3)f(w(x_3)),\, w'(x_4)f(w(x_4))\bigr)^T \]

is the approximate solution vector which we seek.
Notice that

\[ \xi_4 = \bigl(\xi(x_0), \xi(x_1), \xi(x_2), \xi(x_3), \xi(x_4)\bigr)^T = \bigl(w'(x_0)g(w(x_0)),\, w'(x_1)g(w(x_1)),\, w'(x_2)g(w(x_2)),\, w'(x_3)g(w(x_3)),\, w'(x_4)g(w(x_4))\bigr)^T \]

is the input vector, calculated from (4.43), and the matrix

\[ A_5 = \begin{pmatrix} a_{00} & a_{01} & a_{02} & a_{03} & a_{04} \\ a_{10} & a_{11} & a_{12} & a_{13} & a_{14} \\ a_{20} & a_{21} & a_{22} & a_{23} & a_{24} \\ a_{30} & a_{31} & a_{32} & a_{33} & a_{34} \\ a_{40} & a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \]

has entries

\[ a_{ij} = b_{ij}\,\delta_{\frac{1}{2}}(x_i, x_j), \qquad i, j = 0, 1, 2, 3, 4, \]

where δ_{1/2}(x_i, x_j) is calculated from (5.7) with α = 1/2 as follows:

\[ \delta_{\frac{1}{2}}(x_i, x_j) = \begin{cases} \bigl|\frac{w(x_i) - w(x_j)}{x_i - x_j}\bigr|^{-\frac{1}{2}} w'(x_i), & i \ne j, \\ |w'(x_i)|^{-\frac{1}{2}}\,w'(x_i), & i = j. \end{cases} \]
In the MATLAB code, δ_{1/2}(x_i, x_j) corresponds to the matrix ‘delta’; the b_{ij} are calculated as in (3.20) and stored as the entries of the matrix ‘B’, which is returned by the MATLAB function ‘[B,p1]=Gau Leg Abs Mat(n)’; refer to APPENDIX C, PART I.

In this example w is the Kress transformation (4.22) and w' its first derivative (4.24), with p = 2, stored in the vectors ‘w’ and ‘wd’, respectively, in the MATLAB code. The vector x = (x_0, x_1, x_2, x_3, x_4) contains the zeros of the Legendre polynomial of degree 5, P_5(x). In the MATLAB code this is the vector x returned by the MATLAB function ‘[x,wg]=gau point(n)’.
Now, for n = 4 and p = 2, we obtain

x^T = (−9.0618(−01), −5.3847(−01), 0, +5.3847(−01), +9.0618(−01)),
w^T = (−9.9517(−01), −8.3487(−01), 0, +8.3487(−01), +9.9517(−01)),
w'^T = (1.0784(−01), 8.5344(−01), 2.0000(+00), 8.5344(−01), 1.0784(−01)),

and

\[ A_5 = \begin{pmatrix}
4.516(-01) & 1.370(-01) & 5.968(-02) & 3.884(-02) & 1.791(-02) \\
5.333(-01) & 1.807(+00) & 5.623(-01) & 3.086(-01) & 1.512(-01) \\
4.648(-01) & 1.107(+00) & 3.017(+00) & 1.107(+00) & 4.648(-01) \\
1.512(-01) & 3.086(-01) & 5.623(-01) & 1.807(+00) & 5.333(-01) \\
1.791(-02) & 3.884(-02) & 5.968(-02) & 1.370(-01) & 4.516(-01)
\end{pmatrix}. \]

The approximate solution is

θ_4 = (−1.0849(−01), −5.2413(−01), −2.4748(−16), 5.2413(−01), 1.0849(−01))^T,

while the exact solution is

θ = (−1.0629(−01), −4.9663(−01), 0, 4.9663(−01), 1.0629(−01))^T.
Finally, we found that ||θ − θ4 ||∞ = 2.7503(−02).
For other values of n and p, refer to APPENDIX C, PART I, and Tables 5.1 and 5.2.
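The reported error norm can be reproduced from the computed vector by comparing it with the exact transformed solution θ(x_i) = w'(x_i) f(w(x_i)); a minimal sketch, assuming theta holds the solution of the linear system and w, wd hold the transformed nodes and derivative values:

```matlab
f        = @(x) x.^3;                     % exact solution of Example 5.1
theta_ex = wd .* f(w);                    % exact theta(x_i) = w'(x_i) f(w(x_i))
err      = norm(theta_ex - theta, inf);   % reported as ||theta - theta_n||_inf
```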
Example 5.2
Using GM with HT ( α0 = α2 = 4, α1 = 9 ), exact solution f (x) = x3 , as shown
in Table 5.3.
The method of solution is as in the previous example; we just replace the Kress transformation with the Hermite transformation.

To explain the Hermite transformation in this example: for α_0 = α_2 = 4, α_1 = 9, M = 1, x_0 = −1, x_1 = 0, x_2 = 1, we obtain c_0 = c_1 = 1980,

\[ H_1(t) = \begin{cases} 1980 \int_{-1}^{t} y^8 (1 + y)^3\,dy - 1, & t \in [-1, 0], \\ 1980 \int_{0}^{t} y^8 (1 - y)^3\,dy, & t \in [0, 1], \end{cases} \]

and

\[ H_1'(t) = \begin{cases} 1980\,t^8 (1 + t)^3, & t \in [-1, 0], \\ 1980\,t^8 (1 - t)^3, & t \in [0, 1]; \end{cases} \]

refer to the Hermite smoothing transformation (4.10).
Now, for n = 4, we obtain

x^T = (−9.0618(−01), −5.3847(−01), 0, +5.3847(−01), +9.0618(−01)),
H_1^T = (−9.7929(−01), −1.1783(−01), 0, +1.1783(−01), +9.7929(−01)),
H_1'^T = (7.4348(−01), 1.3758(+00), 0, 1.3758(+00), 7.4348(−01)),

and

\[ A_5 = \begin{pmatrix}
1.185(+00) & 4.075(-01) & 4.148(-01) & 3.459(-01) & 1.245(-01) \\
3.709(-01) & 2.295(+00) & 2.413(+00) & 1.324(+00) & 3.149(-01) \\
0 & 0 & 0 & 0 & 0 \\
3.149(-01) & 1.324(+00) & 2.413(+00) & 2.295(+00) & 3.709(-01) \\
1.245(-01) & 3.459(-01) & 4.148(-01) & 4.075(-01) & 1.185(+00)
\end{pmatrix}. \]

The approximate solution is

θ_4 = (−7.1907(−01), 9.3573(−03), 3.1731(−18), −9.3573(−03), 7.1907(−01))^T,

while the exact solution is

θ = (−6.9824(−01), −2.2507(−03), 0, 2.2507(−03), 6.9824(−01))^T.
Refer to APPENDIX C, PART I, replacing the MATLAB function
‘[w,wd]=w wd(x)’ with ‘[w,wd]=h hd(x)’.
Finally, we found that ||θ − θ4 ||∞ = 2.0832(−02).
For other values of n, refer to Table 5.3.
Example 5.3
Using GM with KT (p = 2, 3, 4), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Tables 5.4-5.6.
In most problems the exact solution is not known, so to estimate the efficiency of the method we take the approximate solution at some order n as a reference. In this example we take the approximate solution at n = 256 as the reference and then compute the absolute error between the approximate solution at order n and the reference. This absolute error is computed at a vector of points over the range of integration; in our example we choose the vector x = (0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9), which covers the positive part of the range of integration uniformly.

Since the vector x may not consist of the node points of the method, the approximate solution at x in this example is computed as in (3.13), which depends on (3.14), as follows:

\[ \theta_n(x) = \sum_{j=0}^{n} \phi_j(x)\,f(x_j) = \sum_{j=0}^{n} \Bigl(\omega_j \sum_{m=0}^{n} \frac{2m+1}{2}\,P_m(x_j)\,P_m(x)\Bigr)\,f(x_j); \]
refer to APPENDIX C, PART II.
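A sketch of this evaluation (our own routine, not the code of APPENDIX C, PART II) is given below; it takes the Gauss–Legendre nodes x, weights wg, and the nodal values of the computed solution, and returns the interpolant at arbitrary query points in [−1, 1]:

```matlab
function vals = eval_interpolant(xq, x, wg, theta)
% xq    : query points
% x, wg : Gauss-Legendre nodes x_j and weights w_j, j = 0..n
% theta : nodal values of the computed solution at x_j
    n    = numel(x) - 1;
    vals = zeros(size(xq));
    for q = 1:numel(xq)
        for j = 1:n+1
            % phi_j(xq) = w_j * sum_m ((2m+1)/2) P_m(x_j) P_m(xq), cf. (5.10)
            phi = 0;
            for m = 0:n
                phi = phi + (2*m+1)/2 * legP(m, x(j)) * legP(m, xq(q));
            end
            vals(q) = vals(q) + wg(j) * phi * theta(j);
        end
    end
end

function P = legP(m, t)
% Legendre polynomial P_m(t) by the three-term recurrence
    if m == 0, P = 1; return; end
    Pprev = 1;  P = t;
    for k = 2:m
        Pnew  = ((2*k-1)*t*P - (k-1)*Pprev)/k;
        Pprev = P;  P = Pnew;
    end
end
```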
Example 5.4
Using GM with MKT (p = 3, M = 1), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Table 5.7.

Example 5.5
Using GM with HT ((α0 = α2 = 4, α1 = 9), (α0 = α1 = 3)), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Tables 5.8 and 5.9.

Example 5.6
Using GM with KT (p = 1, 2, 3, 4), g(x) = x, taking the solution at n = 256 as a reference, as shown in Tables 5.10-5.13.

Example 5.7
Using GM with HT (α0 = α1 = 3), g(x) = x, taking the solution at n = 256 as a reference, as shown in Table 5.14.
Example 5.8
Using GM with HT ((α0 = α2 = 4, α1 = 9), (α0 = α1 = 3)), g(x) = x/√(2 − x²), taking the solution at n = 256 as a reference, as shown in Tables 5.15 and 5.16.

Example 5.9
Using GM with KT (p = 2, 3), g(x) = x/√(2 − x²), taking the solution at n = 256 as a reference, as shown in Tables 5.17 and 5.18.
Table 5.1: Error Norm of Example 5.1 with p = 2.

  n      ‖θ − θn‖∞
  4      2.7503(−02)
  8      3.0453(−03)
  16     4.0266(−06)
  32     9.7174(−08)
  64     6.6385(−09)
  128    4.3428(−10)
  256    2.7777(−11)

Table 5.2: Error Norm of Example 5.1 with p = 3.

  n      ‖θ − θn‖∞
  4      4.5997(−02)
  8      8.8590(−04)
  16     3.1764(−06)
  32     7.5340(−10)
  64     6.8719(−12)
  128    5.8155(−14)
  256    7.6848(−15)

Table 5.3: Error Norm of Example 5.2.

  n      ‖θ − θn‖∞
  4      2.0832(−02)
  8      8.9763(−02)
  16     1.3372(−03)
  32     3.2974(−07)
  64     2.8645(−10)
  128    3.4232(−13)
  256    8.3822(−15)
Table 5.4: The Values |θ256 (t) − θn (t)| of Example 5.3 with p = 2.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−5.83937557423138
−4.59406077822662
−3.29760538599995
−2.19156227174302
−1.36729193941544
−0.80760930429880
−0.44939854413178
−0.22717709413543
−0.08863675412977
5.6245(−03)
4.7834(−03)
2.8193(−03)
1.7898(−03)
3.7167(−03)
6.3456(−03)
3.3676(−04)
3.4711(−03)
2.8890(−03)
8.9863(−04)
6.9148(−04)
2.5836(−04)
9.5931(−04)
1.8613(−03)
2.9071(−06)
3.1088(−04)
2.1410(−04)
3.5444(−04)
Table 5.5: The Values |θ256 (t) − θn (t)| of Example 5.3 with p = 3.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−5.89029345423331
−4.72472398875914
−3.43967128944800
−2.26330205451274
−1.34307342163110
−0.71640457082352
−0.33606671856362
−0.12735966541007
−0.02774404425589
5.7169(−03)
5.0962(−03)
3.2877(−03)
2.1936(−03)
3.8477(−03)
6.1465(−03)
7.4427(−05)
3.0168(−03)
2.5849(−03)
9.1717(−04)
7.5468(−04)
3.5304(−04)
1.0405(−03)
1.8875(−03)
3.6035(−05)
2.2646(−04)
3.0575(−04)
4.1544(−04)
Table 5.6: The Values |θ256 (t) − θn (t)| of Example 5.3 with p = 4.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
5.8442(−03)
5.5182(−03)
3.8701(−03)
2.5723(−03)
3.7678(−03)
5.6939(−03)
5.8257(−04)
2.6743(−03)
2.4718(−03)
9.4271(−04)
8.3993(−04)
4.7069(−04)
1.1160(−03)
1.8704(−03)
1.2622(−04)
1.2204(−04)
3.7423(−04)
4.3705(−04)
Table 5.7: The Values |θ256 (t) − θn (t)| of Example 5.4.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−0.53608181204257
−2.18907101572013
−4.16063988216782
−5.09026519483316
−4.58067444538190
−3.19288487860264
−1.67472985666517
−0.61634082872987
−0.12313857715963
7.5754(−05)
8.8740(−05)
7.9118(−05)
4.5723(−05)
3.2700(−05)
1.1430(−04)
5.6234(−08)
6.6524(−05)
5.8880(−05)
7.7414(−06)
8.1481(−06)
6.0509(−06)
1.8871(−06)
8.3306(−06)
1.3285(−06)
2.1697(−06)
1.6266(−06)
1.9010(−06)
Table 5.8: The Values |θ256 (t) − θn (t)| of Example 5.5 with α0 = α2 = 4, α1 = 9.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−0.00004806650465
−0.00864205037672
−0.14830300295410
−0.92851011738586
−3.12500470199527
−6.28430508754176
−7.12021700311572
−3.82992846021788
−0.72767909728558
6.0104(−09)
1.6526(−08)
8.3742(−09)
1.4771(−08)
5.0488(−09)
6.7602(−09)
1.2036(−08)
3.4539(−09)
8.0900(−09)
1.9594(−11)
1.7595(−11)
2.2022(−11)
1.1881(−12)
8.3306(−11)
1.9257(−11)
8.8729(−12)
1.5370(−11)
7.7550(−12)
Table 5.9: The Values |θ256 (t) − θn (t)| of Example 5.5 with α0 = α1 = 3.
t
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
n = 64
n = 128
4.4448(−03)
3.9818(−03)
2.6133(−03)
1.9569(−03)
3.6876(−03)
5.8534(−03)
3.9234(−04)
2.9335(−03)
2.3586(−03)
6.8847(−04)
5.6180(−04)
2.5341(−04)
9.1712(−04)
1.7178(−03)
6.5616(−05)
2.9113(−04)
2.0742(−04)
3.4371(−04)
Table 5.10: The Values |θ256(t) − θn(t)| of Example 5.6 with p = 1.

  t     n = 8         n = 16        n = 32        n = 64        n = 128
  0.1   1.9318(−02)   1.1980(−02)   3.3458(−03)   4.0984(−03)   1.1185(−03)
  0.5   3.9565(−02)   1.5676(−02)   1.1242(−02)   2.2051(−02)   1.5491(−03)
  0.9   3.1144(−02)   1.0551(−02)   3.5703(−02)   9.4158(−02)   1.8133(−03)

Table 5.11: The Values |θ256(t) − θn(t)| of Example 5.6 with p = 2.

  t     n = 8         n = 16        n = 32        n = 64        n = 128
  0.1   9.4656(−02)   5.7343(−04)   1.3701(−09)   5.3073(−12)   7.1942(−14)
  0.5   7.6816(−02)   1.1418(−04)   4.6787(−09)   1.4150(−11)   2.9421(−13)
  0.9   5.6670(−02)   2.5463(−04)   1.3342(−09)   2.9720(−11)   1.2740(−14)
Table 5.12: The Values |θ256 (t) − θn (t)| of Example 5.6 with p = 3.
t
n=8
n = 16
n = 32
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.6128(−02)
2.1372(−02)
5.3755(−03)
2.5474(−02)
3.8652(−02)
6.9140(−03)
4.0743(−02)
2.6956(−02)
4.9437(−02)
3.0442(−04)
1.1871(−04)
2.6296(−04)
2.6301(−04)
8.4877(−05)
3.1895(−04)
2.7714(−04)
1.6512(−04)
2.3901(−04)
7.3332(−10)
1.5407(−09)
2.3092(−09)
2.9611(−09)
2.7874(−09)
1.1294(−09)
1.7903(−09)
6.9226(−10)
3.1120(−10)
5.0404(−14)
1.2212(−13)
2.1583(−13)
2.6912(−13)
4.9738(−14)
5.0382(−13)
2.7733(−13)
8.3034(−13)
1.7685(−12)
5.1070(−15)
9.1038(−15)
5.7732(−15)
2.2204(−15)
3.3307(−15)
4.8850(−15)
5.9952(−15)
5.9952(−15)
4.4895(−15)
Table 5.13: The Values |θ256 (t) − θn (t)| of Example 5.6 with p = 4.
t
n=8
n = 16
n = 32
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.1191(−01)
1.1460(−01)
3.0455(−02)
2.0260(−02)
1.5059(−03)
4.0493(−03)
4.7336(−02)
3.6747(−02)
7.2161(−02)
1.3985(−03)
3.8934(−04)
5.3037(−04)
8.2148(−06)
1.2112(−04)
9.7528(−04)
1.1353(−03)
7.8690(−04)
1.2377(−03)
8.4921(−09)
2.5694(−09)
3.2950(−08)
1.0242(−07)
1.6365(−07)
9.1167(−08)
1.8934(−07)
9.4219(−08)
5.5314(−08)
4.3299(−14)
3.1086(−14)
1.5987(−14)
1.4655(−14)
3.9968(−15)
0
1.1102(−16)
5.6344(−15)
4.9578(−15)
4.1078(−14)
3.2419(−14)
5.7732(−15)
9.7700(−15)
8.4377(−15)
2.2204(−16)
3.9968(−15)
3.0947(−15)
2.4928(−15)
Table 5.14: The Values |θ256 (t) − θn (t)| of Example 5.7.
t
n=8
n = 16
n = 32
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
2.3463(−02)
2.6791(−02)
6.8758(−03)
1.8629(−02)
2.5125(−02)
4.1290(−03)
2.2056(−02)
1.4086(−02)
2.5529(−02)
6.8358(−06)
2.5907(−06)
5.5832(−06)
5.3764(−06)
1.7046(−06)
6.4876(−06)
5.7381(−06)
3.5211(−06)
5.4749(−06)
1.3339(−10)
2.1175(−10)
4.6612(−10)
5.5212(−10)
7.7009(−10)
3.1141(−10)
8.1022(−10)
5.9831(−10)
5.9845(−10)
2.2415(−13)
4.4609(−13)
7.5562(−13)
8.9928(−13)
1.8785(−13)
1.6214(−12)
8.9362(−13)
2.7468(−12)
5.8055(−12)
3.5860(−14)
1.7764(−15)
1.1102(−15)
1.3101(−14)
1.4433(−14)
8.8818(−16)
1.0325(−14)
5.3846(−15)
5.3013(−15)
Table 5.15: The Values |θ256 (t)−θn (t)| of Example 5.8 with α0 = α2 = 4, α1 = 9.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
7.7574(−10)
2.1302(−09)
1.0975(−09)
1.9168(−09)
6.4345(−10)
8.8822(−10)
1.5513(−09)
4.5732(−10)
1.0282(−09)
2.5102(−12)
2.2588(−12)
2.8227(−12)
2.9243(−13)
1.5307(−12)
2.4633(−12)
1.1520(−12)
1.9993(−12)
1.0045(−12)
Table 5.16: The Values |θ256 (t) − θn (t)| of Example 5.8 with α0 = α1 = 3.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−0.13807324229446
0.48737593200470
0.95595352673831
1.19095915592691
1.17762606159100
0.96384746834087
0.64231202925019
0.31991945447268
0.08658512867079
1.7839(−13)
3.6326(−13)
5.9974(−13)
7.2098(−13)
1.4744(−13)
1.2749(−12)
7.0266(−13)
2.1594(−12)
4.5817(−12)
1.5821(−14)
2.4980(−15)
7.7716(−16)
1.1102(−15)
7.5495(−15)
4.5519(−15)
5.6621(−15)
6.2728(−15)
6.5226(−16)
Table 5.17: The Values |θ256 (t) − θn (t)| of Example 5.9 with p = 2.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
2.6118(−12)
5.9392(−12)
7.7636(−12)
8.5625(−12)
1.0510(−11)
1.4481(−11)
6.6781(−12)
1.5243(−11)
2.3390(−11)
2.2884(−14)
9.5812(−14)
1.3411(−13)
1.8363(−13)
2.2671(−13)
1.2468(−13)
1.6043(−13)
6.0285(−14)
8.6597(−15)
Table 5.18: The Values |θ256 (t) − θn (t)| of Example 5.9 with p = 3.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−0.09967483498219
0.60557053502114
1.10594976836864
1.29759387032449
1.17964684757411
0.86070233015847
0.49734767703076
0.21077314041164
0.04804619713681
1.8499(−14)
8.6042(−14)
1.7031(−13)
2.1028(−13)
4.3077(−14)
3.9790(−13)
2.1794(−13)
6.5448(−13)
1.4015(−12)
3.0420(−14)
1.5987(−14)
5.3291(−15)
3.7748(−15)
4.4409(−15)
6.2172(−15)
3.7192(−15)
4.3854(−15)
2.9976(−15)
5.2.2   Weakly Singular Integral Equations with Logarithmic Kernels

Consider the Fredholm integral equation of the second kind with logarithmic kernel

\[ f(x) - \lambda \int_{-1}^{1} \ln|x - y|\,f(y)\,dy = g(x), \qquad -1 \le x \le 1. \tag{5.16} \]

Using the smoothing transformation x = w(t), where w(t) is either the Hermite, Kress, or modified Kress transformation as in (4.10), (4.22), and (4.31), respectively, equation (5.16) reduces, as shown in Chapter 4, to the equation

\[ \theta(t) - \lambda \int_{-1}^{1} \bigl(\delta(t, s) + w'(t)\,\ln|t - s|\bigr)\,\theta(s)\,ds = \xi(t), \tag{5.17} \]

where

\[ \delta(t, s) = \begin{cases} \ln\bigl|\frac{w(t) - w(s)}{t - s}\bigr|\,w'(t), & t \ne s, \\ \ln|w'(t)|\,w'(t), & t = s. \end{cases} \tag{5.18} \]

Hence, for the Gaussian abscissae x_i, i = 0, 1, ..., n, we have

\[ \theta(x_i) - \int_{-1}^{1} \bigl(\lambda\,\delta(x_i, s) + \lambda\,w'(x_i)\,\ln|x_i - s|\bigr)\,\theta(s)\,ds = \xi(x_i). \tag{5.19} \]
5.2.2.1   The Matrix Elements

Since the kernels λδ(t, s) and λw'(t) are continuous in both variables, the functions λδ(x_i, s)θ(s) and λw'(x_i)θ(s) are continuous as functions of s. We shall approximate the functions θ_1(s) = λδ(x_i, s)θ(s) and θ_2(s) = λw'(x_i)θ(s) by the nth degree interpolating polynomials

\[ \lambda\,\delta(x_i, s)\,\theta(s) \approx L\theta_n^1(s) = \sum_{j=0}^{n} \phi_j(s)\,\lambda\,\delta(x_i, x_j)\,\theta(x_j), \tag{5.20} \]

\[ \lambda\,w'(x_i)\,\theta(s) \approx L\theta_n^2(s) = \sum_{j=0}^{n} \phi_j(s)\,\lambda\,w'(x_i)\,\theta(x_j), \tag{5.21} \]

which interpolate θ_1(s) = λδ(x_i, s)θ(s) and θ_2(s) = λw'(x_i)θ(s) at x_i, i = 0, 1, ..., n, where φ_j(s) is given by

\[ \phi_j(s) = \omega_j \sum_{m=0}^{n} \frac{2m+1}{2}\,P_m(x_j)\,P_m(s), \qquad 0 \le j \le n. \tag{5.22} \]

Substituting (5.20) and (5.21) into (5.19), we obtain

\[ \theta(x_i) - \sum_{j=0}^{n} \Bigl(\lambda\,\delta(x_i, x_j) \int_{-1}^{1} \phi_j(s)\,ds + \lambda\,w'(x_i) \int_{-1}^{1} \ln|x_i - s|\,\phi_j(s)\,ds\Bigr)\,\theta(x_j) = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.23} \]

If we define

\[ b_j = \int_{-1}^{1} \phi_j(s)\,ds, \tag{5.24} \]

\[ c_{ij} = \int_{-1}^{1} \ln|x_i - s|\,\phi_j(s)\,ds, \tag{5.25} \]

then equation (5.23) can be written as

\[ \theta(x_i) - \sum_{j=0}^{n} \bigl(b_j\,\lambda\,\delta(x_i, x_j) + c_{ij}\,\lambda\,w'(x_i)\bigr)\,\theta(x_j) = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.26} \]

Equation (5.26) can be written as the (n + 1) × (n + 1) linear system

\[ (I - \lambda A)\,\theta_n = \xi_n, \tag{5.27} \]

where θ_n = (θ(x_0), θ(x_1), ..., θ(x_n))^T, ξ_n = (ξ(x_0), ξ(x_1), ..., ξ(x_n))^T, and A = (a_{ij})_{(n+1)×(n+1)} is the matrix whose (i, j)th element is given by

\[ a_{ij} = b_j\,\delta(x_i, x_j) + c_{ij}\,w'(x_i). \tag{5.28} \]

The constants c_{ij} in (5.25) will be calculated as described in Chapter 3, while the constants b_j in (5.24) are calculated as follows:

\[ b_j = \int_{-1}^{1} \phi_j(s)\,ds = \omega_j \sum_{m=0}^{n} \frac{2m+1}{2}\,P_m(x_j) \int_{-1}^{1} P_m(s)\,ds. \tag{5.29} \]

Defining

\[ d_m = \int_{-1}^{1} P_m(s)\,ds, \]

we have d_m = 2 if m = 0, (5.30) and, from Davis & Rabinowitz (1984, p. 34), d_m = 0 if m ≥ 1; so (5.29) becomes

\[ b_j = \omega_j\,P_0(x_j); \tag{5.31} \]

since P_0(x) = 1 for −1 ≤ x ≤ 1, (5.31) becomes

\[ b_j = \omega_j. \tag{5.32} \]

Substituting (5.32) into (5.28) gives

\[ a_{ij} = \omega_j\,\delta(x_i, x_j) + c_{ij}\,w'(x_i). \tag{5.33} \]
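As with the Abel kernel, the assembled system can be sketched in a few lines of MATLAB; gauss_points and log_weights below are stand-ins for the appendix routines returning the Gauss–Legendre nodes/weights and the c_ij of (5.25), and all names (and the choice of g) are assumptions.

```matlab
% Assemble and solve the system (5.27) for the logarithmic kernel with
% Gaussian abscissae.  By (5.32), b_j is simply the Gauss weight wg(j).
% x, wg, wd are assumed column vectors.
lambda = 1/pi;  n = 16;  p = 2;
g = @(x) x;                             % example input function
[x, wg] = gauss_points(n);              % nodes and weights (assumed routine)
C       = log_weights(n, x);            % c_ij of (5.25)    (assumed routine)
[w, wd] = kress_transform(x, p);
delta = zeros(n+1);
for i = 1:n+1
    for j = 1:n+1
        if i == j
            delta(i,j) = log(abs(wd(i))) * wd(i);
        else
            delta(i,j) = log(abs((w(i)-w(j))/(x(i)-x(j)))) * wd(i);
        end
    end
end
A     = wg(:)' .* delta + C .* wd(:);   % a_ij = w_j delta(x_i,x_j) + c_ij w'(x_i), (5.33)
xi    = wd .* g(w);                     % xi(x_i) = w'(x_i) g(w(x_i)), (4.51)
theta = (eye(n+1) - lambda*A) \ xi;     % solve (5.27)
```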
5.2.2.2   Examples

We solve equation (5.17) with λ = 1/π.

We will use the following abbreviations: GM for the Gauss method, CM for the Clenshaw method, HT for the Hermite transformation, KT for the Kress transformation, and MKT for the modified Kress transformation.
Example 5.10
Using GM with KT (p = 2, 3), exact solution f(x) = 1, as shown in Tables 5.19 and 5.20.

Example 5.11
Using GM with HT (α0 = α1 = 2), exact solution f(x) = 1, as shown in Table 5.21.

Example 5.12
Using GM with KT (p = 3), g(x) = x, taking the solution at n = 256 as a reference, as shown in Table 5.22.

Example 5.13
Using GM with HT ((α0 = α1 = 2), (α0 = α1 = 3)), g(x) = x, taking the solution at n = 128 as a reference, as shown in Tables 5.23 and 5.24.

Example 5.14
Using GM with KT (p = 2, 3), g(x) = x, taking the solution at n = 128 as a reference, as shown in Tables 5.25 and 5.26.
Table 5.19: Error Norm of Example 5.10 with p = 2.

  n      ‖θ − θn‖∞
  32     2.0512(−10)
  64     3.6821(−12)
  128    6.1696(−14)
  256    5.1070(−15)

Table 5.20: Error Norm of Example 5.10 with p = 3.

  n      ‖θ − θn‖∞
  32     4.1920(−13)
  64     1.1102(−15)
  128    1.9984(−15)
  256    8.4377(−15)

Table 5.21: Error Norm of Example 5.11.

  n      ‖θ − θn‖∞
  32     1.8625(−09)
  64     3.3217(−11)
  128    5.5560(−13)
  256    8.9868(−15)
Table 5.22: The values |θ256(t) − θn(t)| of Example 5.12.

  t     n = 8         n = 16        n = 32        n = 64        n = 128
  0.1   1.0947(−02)   5.0809(−05)   4.0624(−11)   9.1593(−16)   8.6042(−16)
  0.5   8.7098(−03)   1.0608(−05)   1.8699(−10)   1.1102(−15)   1.1102(−16)
  0.9   2.1325(−03)   1.8569(−05)   3.7918(−11)   1.0783(−14)   4.6838(−15)

Table 5.23: The values |θ128(t) − θn(t)| of Example 5.13 with α0 = α1 = 2.

  t     n = 4         n = 8         n = 16        n = 32        n = 64
  0.1   1.3270(−03)   1.9885(−04)   4.4219(−07)   3.7250(−10)   1.5483(−12)
  0.5   9.4398(−05)   2.6838(−04)   1.5070(−07)   2.3382(−09)   2.0405(−12)
  0.9   5.6929(−04)   4.5672(−04)   1.3602(−06)   1.9407(−09)   3.7245(−11)
Table 5.24: The values |θ128 (t) − θn (t)| of Example 5.13 with α0 = α1 = 3.
t
n=4
n=8
n = 16
n = 32
n = 64
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
5.3733(−02)
9.2859(−02)
1.0501(−01)
8.2824(−02)
2.7437(−02)
4.7376(−02)
1.1175(−01)
1.1918(−01)
1.1592(−02)
1.5798(−03)
1.8370(−03)
4.7156(−04)
1.4090(−03)
1.9735(−03)
3.2607(−04)
1.9060(−03)
1.2394(−03)
2.2443(−03)
3.0738(−07)
1.1636(−07)
2.4673(−07)
2.3563(−07)
7.0856(−08)
2.5040(−07)
1.9336(−07)
8.9457(−08)
5.7983(−08)
5.1949(−12)
1.1271(−11)
1.8913(−11)
2.7457(−11)
3.1924(−11)
1.6353(−11)
3.7064(−11)
2.3774(−11)
2.2176(−11)
2.7756(−16)
3.8858(−15)
3.8858(−15)
8.2157(−15)
2.2204(−15)
6.3283(−15)
4.4409(−16)
1.4794(−14)
2.5417(−14)
Table 5.25: The values |θ128 (t) − θn (t)| of Example 5.14 with p = 2.
t
n=4
n=8
n = 16
n = 32
n = 64
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
9.3859(−02)
1.5371(−01)
1.5928(−01)
1.1115(−01)
2.8988(−02)
5.6034(−02)
1.0995(−01)
1.0228(−01)
8.8995(−03)
7.8554(−03)
8.7889(−03)
2.0669(−03)
6.0597(−03)
7.7787(−03)
1.1619(−03)
6.4605(−03)
3.9589(−03)
6.9561(−03)
1.2116(−05)
4.5389(−06)
9.4277(−06)
8.8748(−06)
2.6684(−06)
9.7093(−06)
8.2295(−06)
4.8547(−06)
7.0504(−06)
4.0555(−11)
8.5997(−11)
1.4625(−10)
2.0859(−10)
2.4524(−10)
1.2390(−10)
2.8654(−10)
1.9179(−10)
1.9545(−10)
1.6354(−13)
3.6587(−13)
6.0763(−13)
7.2486(−13)
2.1827(−13)
1.0239(−12)
6.9394(−13)
1.8409(−12)
4.0111(−12)
Table 5.26: The values |θ128 (t) − θn (t)| of Example 5.14 with p = 3.
t
n=4
n=8
n = 16
n = 32
n = 64
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
8.4265(−02)
1.4897(−01)
1.7276(−01)
1.3810(−01)
4.5234(−02)
7.5148(−02)
1.6493(−01)
1.5994(−01)
1.3922(−02)
1.0947(−02)
1.2036(−02)
2.6699(−03)
7.6253(−03)
8.7098(−03)
1.0284(−03)
4.8576(−03)
2.0748(−03)
2.1325(−03)
5.0809(−05)
1.9060(−05)
3.9468(−05)
3.6671(−05)
1.0608(−05)
3.6544(−05)
2.8359(−05)
1.4841(−05)
1.8569(−05)
4.0624(−11)
8.5745(−11)
1.3633(−10)
1.8215(−10)
1.8699(−10)
8.0410(−11)
1.4354(−10)
6.5915(−11)
3.7918(−11)
9.1593(−16)
3.4972(−15)
1.1102(−16)
3.8858(−15)
1.1102(−15)
4.4409(−16)
2.7200(−15)
9.7145(−17)
1.0783(−14)
5.3   Product Integration with Curtis-Clenshaw Points
In this section, we shall use the transformations (4.10), (4.22), and (4.31)
to smooth the solution of the weakly singular integral equations of the second
kind with Abel and logarithmic kernels and then we shall solve them numerically
using the product integration with Curtis-Clenshaw points.
5.3.1   Weakly Singular Integral Equations with Abel Kernels

Consider the weakly singular Fredholm integral equation of the second kind with Abel kernel

\[ f(x) - \lambda \int_{-1}^{1} |x - y|^{-\alpha}\,f(y)\,dy = g(x), \qquad -1 \le x \le 1. \tag{5.34} \]

Using the smoothing transformation x = w(t), where w(t) is either the Hermite, Kress, or modified Kress transformation as in (4.10), (4.22), and (4.31), respectively, equation (5.34) reduces, as shown in Chapter 4, to the equation

\[ \theta(t) - \lambda \int_{-1}^{1} \delta_\alpha(t, s)\,|t - s|^{-\alpha}\,\theta(s)\,ds = \xi(t), \tag{5.35} \]

where

\[ \delta_\alpha(t, s) = \begin{cases} \bigl|\frac{w(t) - w(s)}{t - s}\bigr|^{-\alpha} w'(t), & t \ne s, \\ |w'(t)|^{-\alpha}\,w'(t), & t = s. \end{cases} \tag{5.36} \]

Hence, for the Curtis-Clenshaw points x_i, i = 0, 1, ..., n, we have

\[ \theta(x_i) - \lambda \int_{-1}^{1} \delta_\alpha(x_i, s)\,|x_i - s|^{-\alpha}\,\theta(s)\,ds = \xi(x_i). \tag{5.37} \]
5.3.1.1   The Matrix Elements

Since the kernel δ_α(t, s) is continuous in both variables, the function δ_α(x_i, s)θ(s) is continuous as a function of s. We shall approximate the function δ_α(x_i, s)θ(s) by the nth degree interpolating polynomial

\[ \delta_\alpha(x_i, s)\,\theta(s) \approx L\theta_n(s) = \sum_{j=0}^{n} \phi_j(s)\,\delta_\alpha(x_i, x_j)\,\theta(x_j), \tag{5.38} \]

which interpolates δ_α(x_i, s)θ(s) at x_i, i = 0, 1, ..., n, where φ_j(s) is given by

\[ \phi_j(s) = \frac{2\gamma_j}{n} \sum_{i=0}^{n} \gamma_i\,T_i(x_j)\,T_i(s), \tag{5.39} \]

where T_i(cos θ) = cos(iθ). Substituting (5.38) into (5.37) and collocating at the points x_i, we obtain

\[ \theta(x_i) - \sum_{j=0}^{n} \lambda\,\delta_\alpha(x_i, x_j)\,\theta(x_j) \int_{-1}^{1} |x_i - s|^{-\alpha}\,\phi_j(s)\,ds = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.40} \]

If we define

\[ b_{ij} = \int_{-1}^{1} |x_i - s|^{-\alpha}\,\phi_j(s)\,ds, \tag{5.41} \]

then equation (5.40) can be written as

\[ \theta(x_i) - \sum_{j=0}^{n} b_{ij}\,\lambda\,\delta_\alpha(x_i, x_j)\,\theta(x_j) = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.42} \]

Equation (5.42) can be written as the (n + 1) × (n + 1) linear system

\[ (I - \lambda A)\,\theta_n = \xi_n, \tag{5.43} \]

where θ_n = (θ(x_0), θ(x_1), ..., θ(x_n))^T, ξ_n = (ξ(x_0), ξ(x_1), ..., ξ(x_n))^T, and A = (a_{ij})_{(n+1)×(n+1)} is the matrix whose (i, j)th element is given by

\[ a_{ij} = b_{ij}\,\delta_\alpha(x_i, x_j). \tag{5.44} \]

The constants b_{ij} in (5.41) will be calculated as described in Chapter 3.
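For completeness, here is a sketch of the basis function (5.39) itself; the convention γ_0 = γ_n = 1/2 and γ_i = 1 otherwise is an assumption on our part (Chapter 3 fixes the actual convention), and the routine name is our own.

```matlab
% phi_j(s) of (5.39) for s in [-1,1], with x_j = cos(j*pi/n).
function val = phi_clenshaw(j, s, n)
    gam = ones(1, n+1);  gam([1, n+1]) = 1/2;    % assumed gamma convention
    xj  = cos(j*pi/n);
    val = zeros(size(s));
    for i = 0:n
        val = val + gam(i+1) .* cos(i*acos(xj)) .* cos(i*acos(s));  % T_i(x_j) T_i(s)
    end
    val = 2*gam(j+1)/n .* val;
end
```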
5.3.1.2   Examples

We solve equation (5.35) with λ = 1/π and α = 1/2.

We will use the following abbreviations: GM for the Gauss method, CM for the Clenshaw method, HT for the Hermite transformation, KT for the Kress transformation, and MKT for the modified Kress transformation.
Example 5.15
Using CM with KT (p = 2, 3), exact solution f(x) = x^3, as shown in Tables 5.27 and 5.28.

Example 5.16
Using CM with HT ((α0 = α1 = 2), (α0 = α1 = 3)), exact solution f(x) = x^3, as shown in Tables 5.29 and 5.30.

Example 5.17
Using CM with KT (p = 2, 3), g(x) = x, taking the solution at n = 256 as a reference, as shown in Tables 5.31 and 5.32.

Example 5.18
Using CM with HT (α0 = α1 = 3), g(x) = x, taking the solution at n = 256 as a reference, as shown in Table 5.33.

Example 5.19
Using CM with KT (p = 2, 3), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Tables 5.34 and 5.35.

Example 5.20
Using CM with MKT (p = 3, M = 1), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Table 5.36.

Example 5.21
Using CM with HT ((α0 = α1 = 3), (α0 = α2 = 4, α1 = 9)), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Tables 5.37 and 5.38.
Table 5.27: Error Norm of Example 5.15 with p = 2.

  n      ‖θ − θn‖∞
  32     9.6178(−08)
  64     6.0261(−09)
  128    3.7686(−10)
  256    2.3557(−11)

Table 5.28: Error Norm of Example 5.15 with p = 3.

  n      ‖θ − θn‖∞
  32     1.0257(−09)
  64     8.0736(−12)
  128    6.3219(−14)
  256    4.2422(−15)

Table 5.29: Error Norm of Example 5.16 with α0 = α1 = 2.

  n      ‖θ − θn‖∞
  32     5.0514(−07)
  64     3.1397(−08)
  128    1.9595(−09)
  256    1.2243(−10)

Table 5.30: Error Norm of Example 5.16 with α0 = α1 = 3.

  n      ‖θ − θn‖∞
  32     3.2392(−09)
  64     2.5603(−11)
  128    2.0065(−13)
  256    2.4280(−14)
Table 5.31: The Values |θ256 (t) − θn (t)| of Example 5.17 with p = 2.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.95913365422293
1.65987139811794
1.96870514871904
1.91014442967586
1.60810176942442
1.20315345693091
0.80034672357158
0.45660351173114
0.19091749037686
5.0849(−11)
1.2182(−10)
2.3002(−10)
3.5902(−10)
3.5184(−10)
1.1973(−10)
2.5904(−10)
2.0122(−10)
4.3937(−10)
2.5467(−12)
6.0867(−12)
1.0320(−11)
8.0245(−12)
1.0005(−11)
7.8779(−12)
1.2972(−11)
9.8943(−12)
2.0697(−11)
Table 5.32: The Values |θ256 (t) − θn (t)| of Example 5.17 with p = 3.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.97079720217752
1.73197989212059
2.12312447868046
2.08890174338843
1.71016859897723
1.16729751992722
0.64800355765324
0.26907935068606
0.06089132057342
9.8832(−13)
2.5024(−12)
4.9516(−12)
7.9963(−12)
7.8446(−12)
3.3469(−12)
6.5637(−12)
4.3264(−12)
9.5067(−12)
1.1102(−14)
1.5543(−14)
1.6875(−14)
1.7764(−14)
3.1086(−14)
4.4187(−14)
3.1530(−14)
1.3989(−14)
6.7543(−14)
Table 5.33: The Values |θ256 (t) − θn (t)| of Example 5.18.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.85502964490873
1.53942672289507
1.93190095807136
1.99123121221341
1.75878214805490
1.33579700446511
0.84776386407514
0.41057028032138
0.10982862614238
3.2877(−12)
8.2188(−12)
1.6206(−11)
2.6009(−11)
2.5462(−11)
1.0870(−11)
2.1314(−11)
1.3952(−11)
3.0795(−11)
8.9595(−14)
1.1724(−13)
1.3811(−13)
1.0525(−13)
7.4829(−14)
1.1458(−13)
1.1668(−13)
9.0039(−14)
2.1455(−13)
Table 5.34: The Values |θ256 (t) − θn (t)| of Example 5.19 with p = 2.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
8.2077(−03)
7.4378(−03)
5.1169(−03)
2.5635(−03)
2.3101(−03)
4.2259(−03)
3.0882(−03)
5.9603(−04)
5.8983(−05)
1.3877(−03)
1.1640(−03)
5.3270(−04)
4.3170(−04)
1.7604(−03)
4.3189(−04)
7.6904(−04)
3.5177(−04)
2.4828(−04)
Table 5.35: The Values |θ256 (t) − θn (t)| of Example 5.19 with p = 3.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
8.3016(−03)
7.7627(−03)
5.6057(−03)
2.9868(−03)
2.4512(−03)
4.0215(−03)
2.6513(−03)
1.2454(−04)
3.7259(−04)
1.4065(−03)
1.2288(−03)
6.2996(−04)
5.1537(−04)
1.7874(−03)
3.9167(−04)
6.8211(−04)
2.5729(−04)
1.8535(−04)
Table 5.36: The Values |θ256 (t) − θn (t)| of Example 5.20.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−0.53608163694969
−2.18907097742637
−4.16064009097534
−5.09026513337578
−4.58067458936544
−3.19288487350754
−1.67473002008713
−0.61634070802009
−0.12313868635323
3.7554(−05)
4.4443(−05)
4.1594(−05)
3.3997(−05)
2.0721(−06)
7.0072(−05)
5.9575(−05)
5.1439(−06)
7.6804(−06)
5.0241(−06)
5.7561(−06)
5.3988(−06)
1.1361(−06)
7.4281(−06)
1.1446(−06)
4.4120(−06)
1.9976(−06)
1.2737(−06)
Table 5.37: The Values |θ256 (t) − θn (t)| of Example 5.21 with α0 = α1 = 3.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
6.6765(−03)
6.2893(−03)
4.6273(−03)
2.6525(−03)
2.4833(−03)
4.0193(−03)
2.7983(−03)
4.1985(−04)
2.2151(−04)
1.1138(−03)
9.7424(−04)
4.9455(−04)
4.5738(−04)
1.6320(−03)
4.4290(−04)
6.9283(−04)
2.8725(−04)
1.8307(−04)
Table 5.38: The Values |θ256 (t) − θn (t)| of Example 5.21 with α0 = α2 = 4, α1 =
9.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−0.00004806650461
−0.00864205037670
−0.14830300295414
−0.92851011738589
−3.12500470199518
−6.28430508754171
−7.12021700311564
−3.82992846021786
−0.72767909728557
3.3377(−09)
1.3291(−08)
8.8816(−10)
1.4303(−08)
1.4958(−08)
5.5378(−09)
9.1627(−09)
1.3127(−08)
5.1083(−09)
1.4721(−11)
1.0759(−11)
2.2396(−11)
1.1753(−11)
8.4115(−12)
1.5124(−11)
6.0130(−12)
7.0615(−12)
5.6299(−13)
5.3.2   Weakly Singular Integral Equations with Logarithmic Kernels

Consider the Fredholm integral equation of the second kind with logarithmic kernel

\[ f(x) - \lambda \int_{-1}^{1} \ln|x - y|\,f(y)\,dy = g(x), \qquad -1 \le x \le 1. \tag{5.45} \]

Using the smoothing transformation x = w(t), where w(t) is either the Hermite, Kress, or modified Kress transformation as in (4.10), (4.22), and (4.31), respectively, equation (5.45) reduces, as shown in Chapter 4, to the equation

\[ \theta(t) - \lambda \int_{-1}^{1} \bigl(\delta(t, s) + w'(t)\,\ln|t - s|\bigr)\,\theta(s)\,ds = \xi(t), \tag{5.46} \]

where

\[ \delta(t, s) = \begin{cases} \ln\bigl|\frac{w(t) - w(s)}{t - s}\bigr|\,w'(t), & t \ne s, \\ \ln|w'(t)|\,w'(t), & t = s. \end{cases} \tag{5.47} \]

Hence, for the Curtis-Clenshaw points x_i = cos(iπ/n), i = 0, 1, ..., n, we have

\[ \theta(x_i) - \int_{-1}^{1} \bigl(\lambda\,\delta(x_i, s) + \lambda\,w'(x_i)\,\ln|x_i - s|\bigr)\,\theta(s)\,ds = \xi(x_i). \tag{5.48} \]
5.3.2.1   The Matrix Elements

Since the kernels λδ(t, s) and λw'(t) are continuous in both variables, the functions λδ(x_i, s)θ(s) and λw'(x_i)θ(s) are continuous as functions of s. We shall approximate the functions θ_1(s) = λδ(x_i, s)θ(s) and θ_2(s) = λw'(x_i)θ(s) by the nth degree interpolating polynomials

\[ \lambda\,\delta(x_i, s)\,\theta(s) \approx L\theta_n^1(s) = \sum_{j=0}^{n} \phi_j(s)\,\lambda\,\delta(x_i, x_j)\,\theta(x_j), \tag{5.49} \]

\[ \lambda\,w'(x_i)\,\theta(s) \approx L\theta_n^2(s) = \sum_{j=0}^{n} \phi_j(s)\,\lambda\,w'(x_i)\,\theta(x_j), \tag{5.50} \]

which interpolate θ_1(s) = λδ(x_i, s)θ(s) and θ_2(s) = λw'(x_i)θ(s) at x_i, i = 0, 1, ..., n, where φ_j(s) is given by

\[ \phi_j(s) = \frac{2\gamma_j}{n} \sum_{i=0}^{n} \gamma_i\,T_i(x_j)\,T_i(s), \tag{5.51} \]

where T_i(cos θ) = cos(iθ). Substituting (5.49) and (5.50) into (5.48), we obtain

\[ \theta(x_i) - \sum_{j=0}^{n} \Bigl(\lambda\,\delta(x_i, x_j) \int_{-1}^{1} \phi_j(s)\,ds + \lambda\,w'(x_i) \int_{-1}^{1} \ln|x_i - s|\,\phi_j(s)\,ds\Bigr)\,\theta(x_j) = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.52} \]

If we define

\[ b_j = \int_{-1}^{1} \phi_j(s)\,ds, \tag{5.53} \]

\[ c_{ij} = \int_{-1}^{1} \ln|x_i - s|\,\phi_j(s)\,ds, \tag{5.54} \]

then equation (5.52) can be written as

\[ \theta(x_i) - \sum_{j=0}^{n} \bigl(b_j\,\lambda\,\delta(x_i, x_j) + c_{ij}\,\lambda\,w'(x_i)\bigr)\,\theta(x_j) = \xi(x_i), \qquad i = 0, 1, \ldots, n. \tag{5.55} \]

Equation (5.55) can be written as the (n + 1) × (n + 1) linear system

\[ (I - \lambda A)\,\theta_n = \xi_n, \tag{5.56} \]

where θ_n = (θ(x_0), θ(x_1), ..., θ(x_n))^T, ξ_n = (ξ(x_0), ξ(x_1), ..., ξ(x_n))^T, and A = (a_{ij})_{(n+1)×(n+1)} is the matrix whose (i, j)th element is given by

\[ a_{ij} = b_j\,\delta(x_i, x_j) + c_{ij}\,w'(x_i). \tag{5.57} \]

The constants c_{ij} in (5.54) will be calculated as described in Chapter 3, while the constants b_j in (5.53) are calculated as follows:

\[ b_j = \frac{2\gamma_j}{n} \sum_{m=0}^{n} \gamma_m\,T_m(x_j) \int_{-1}^{1} T_m(s)\,ds. \tag{5.58} \]

Davis (1984, p. 35) gives

\[ \int_{-1}^{1} T_m(s)\,ds = \begin{cases} 2/(1 - m^2), & m \text{ even}, \\ 0, & m \text{ odd}. \end{cases} \tag{5.59} \]

By substituting (5.59) into (5.58) we obtain

\[ b_j = \frac{4\gamma_j}{n} \sum_{m=0}^{[n/2]} \frac{\gamma_{2m}\,T_{2m}(x_j)}{1 - 4m^2}, \tag{5.60} \]

where [n/2] is the greatest integer less than or equal to n/2. From the properties T_i(cos θ) = cos(iθ) and x_i = cos(iπ/n), (5.60) becomes

\[ b_j = \frac{4\gamma_j}{n} \sum_{m=0}^{[n/2]} \frac{\gamma_{2m}\,\cos\bigl(\frac{2mj\pi}{n}\bigr)}{1 - 4m^2}. \tag{5.61} \]
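A direct transcription of (5.61) into MATLAB might look as follows (a sketch; the convention γ_0 = γ_n = 1/2, γ_i = 1 otherwise is an assumption here, as Chapter 3 and the appendix code fix the actual convention):

```matlab
% The Curtis-Clenshaw constants b_j of (5.61), j = 0,...,n.
function b = clenshaw_bj(n)
    gam = ones(1, n+1);  gam([1, n+1]) = 1/2;    % assumed gamma convention
    b = zeros(1, n+1);
    for j = 0:n
        s = 0;
        for m = 0:floor(n/2)
            s = s + gam(2*m+1) * cos(2*m*j*pi/n) / (1 - 4*m^2);
        end
        b(j+1) = 4*gam(j+1)/n * s;
    end
end
```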
5.3.2.2   Examples

We solve equation (5.46) with λ = 1/π.

We will use the following abbreviations: GM for the Gauss method, CM for the Clenshaw method, HT for the Hermite transformation, KT for the Kress transformation, and MKT for the modified Kress transformation.
Example 5.22
Using CM with KT (p = 2, 3 ), exact solution f (x) = 1, as shown in Tables 5.39,
and 5.40.
In this example we seek a comparison between the exact solution and
the approximate solution of the transformed equation (5.46). Since they depend
on the corresponding solution and approximate solution of the original equation
(5.45), let us start from the original equation. Suppose that the exact solution of
the original equation (5.45) with λ = 1/π is f(x) = 1; then we determine the input function g(x) in the same way as in Example 5.1, so that

\[ g(x) = 1 - \frac{1}{\pi}\Bigl[(1 + x)\ln(1 + x) + (1 - x)\ln(1 - x) - 2\Bigr]. \]
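Transcribed as a MATLAB handle (our own; note that at x = ±1 the products (1 ± x) ln(1 ± x) must be interpreted as 0, whereas MATLAB evaluates 0*log(0) as NaN):

```matlab
% Input function of Example 5.22.
g = @(x) 1 - ( (1+x).*log(1+x) + (1-x).*log(1-x) - 2 )/pi;
```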
To solve (5.46), which is transformed from (5.45), the approximate solution is the solution of the linear algebraic system (5.56). Let us solve (5.46) in detail for n = 4. The system is

\[ \Bigl(I_5 - \frac{1}{\pi} A_5\Bigr)\,\theta_4 = \xi_4, \]

where I_5 is the identity matrix of order 5 and

\[ \theta_4 = \bigl(\theta(x_0), \theta(x_1), \theta(x_2), \theta(x_3), \theta(x_4)\bigr)^T = \bigl(w'(x_0)f(w(x_0)),\, w'(x_1)f(w(x_1)),\, w'(x_2)f(w(x_2)),\, w'(x_3)f(w(x_3)),\, w'(x_4)f(w(x_4))\bigr)^T \]

is the approximate solution vector which we seek.
Notice that

\[ \xi_4 = \bigl(\xi(x_0), \xi(x_1), \xi(x_2), \xi(x_3), \xi(x_4)\bigr)^T = \bigl(w'(x_0)g(w(x_0)),\, w'(x_1)g(w(x_1)),\, w'(x_2)g(w(x_2)),\, w'(x_3)g(w(x_3)),\, w'(x_4)g(w(x_4))\bigr)^T \]

is the input vector, calculated from (4.51), and the matrix

\[ A_5 = \begin{pmatrix} a_{00} & a_{01} & a_{02} & a_{03} & a_{04} \\ a_{10} & a_{11} & a_{12} & a_{13} & a_{14} \\ a_{20} & a_{21} & a_{22} & a_{23} & a_{24} \\ a_{30} & a_{31} & a_{32} & a_{33} & a_{34} \\ a_{40} & a_{41} & a_{42} & a_{43} & a_{44} \end{pmatrix} \]

has entries

\[ a_{ij} = b_j\,\delta(x_i, x_j) + c_{ij}\,w'(x_i), \qquad i, j = 0, 1, 2, 3, 4, \]

where δ(x_i, x_j) is calculated from (5.47) as follows:

\[ \delta(x_i, x_j) = \begin{cases} \ln\bigl|\frac{w(x_i) - w(x_j)}{x_i - x_j}\bigr|\,w'(x_i), & i \ne j, \\ \ln|w'(x_i)|\,w'(x_i), & i = j. \end{cases} \]
In the MATLAB code, δ(x_i, x_j) corresponds to the matrix ‘delta’; the b_j are calculated as in (5.61) and stored as the entries of the matrix ‘B’, and the c_{ij} are calculated as in (3.57) and stored as the entries of the matrix ‘C’, where ‘C’ is returned by the MATLAB function ‘[C,x] = Cel Cur Log Mat(n)’; refer to APPENDIX D.

In this example w is the Kress transformation (4.22) and w' its first derivative (4.24), with p = 2, stored in the vectors ‘w’ and ‘wd’, respectively, in the MATLAB code. The entries of the vector x = (x_0, x_1, x_2, x_3, x_4) are defined by x_i = cos(iπ/4). In the MATLAB code this is the vector x returned by the MATLAB function ‘[C,x] = Cel Cur Log Mat(n)’.
Now, for n = 4 and p = 2, we obtain

x^T = (1.0000(+00), 7.0711(−01), 6.1232(−17), −7.0711(−01), −1.0000(+00)),
w^T = (1.0000(+00), 9.4281(−01), 1.1102(−16), −9.4281(−01), −1.0000(+00)),
w'^T = (0, 4.4444(−01), 2.0000(+00), 4.4444(−01), 0),

and

\[ A_5 = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 \\
-8.60(-02) & -7.48(-01) & -5.59(-02) & 1.69(-01) & 1.17(-02) \\
6.22(-02) & -2.62(-01) & -1.87(+00) & -2.62(-01) & 6.22(-02) \\
1.17(-02) & 1.69(-01) & -5.59(-02) & -7.48(-01) & -8.60(-02) \\
0 & 0 & 0 & 0 & 0
\end{pmatrix}. \]

The approximate solution is

θ_4 = (0, 4.4951(−01), 2.0018(+00), 4.4951(−01), 0)^T,

while the exact solution is

θ = (0, 4.4444(−01), 2.0000(+00), 4.4444(−01), 0)^T.
Finally, we found that ||θ − θ4 ||∞ = 5.0607(−03).
For other values of n and p, refer to APPENDIX D, and Tables 5.39 and 5.40.
Example 5.23
Using CM with HT ((α0 = α1 = 2), (α0 = α1 = 3)), exact solution f(x) = 1, as shown in Tables 5.41 and 5.42.

Example 5.24
Using CM with KT (p = 2, 3), g(x) = x, taking the solution at n = 256 as a reference, as shown in Tables 5.43 and 5.44.

Example 5.25
Using CM with HT (α0 = α1 = 3), g(x) = x, taking the solution at n = 256 as a reference, as shown in Table 5.45.

Example 5.26
Using CM with KT (p = 2, 3), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Tables 5.46 and 5.47.

Example 5.27
Using CM with MKT (p = 3, M = 1), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Table 5.48.

Example 5.28
Using CM with HT ((α0 = α1 = 3), (α0 = α2 = 4, α1 = 9)), g(x) = |x|, taking the solution at n = 256 as a reference, as shown in Tables 5.49 and 5.50.
Table 5.39: Error Norm of Example 5.22 with p = 2.

  n      ‖θ − θn‖∞
  4      5.0607(−03)
  8      4.3756(−04)
  16     1.9638(−07)
  32     2.5212(−10)
  64     3.9860(−12)
  128    6.2463(−14)
  256    7.5495(−15)

Table 5.40: Error Norm of Example 5.22 with p = 3.

  n      ‖θ − θn‖∞
  4      1.4475(−02)
  8      8.5867(−04)
  16     5.0980(−07)
  32     1.4011(−12)
  64     2.2204(−15)
  128    1.7764(−15)
  256    4.8850(−15)

Table 5.41: Error Norm of Example 5.23 with α0 = α1 = 2.

  n      ‖θ − θn‖∞
  32     2.3060(−09)
  64     3.6021(−11)
  128    5.6274(−13)
  256    8.7925(−15)

Table 5.42: Error Norm of Example 5.23 with α0 = α1 = 3.

  n      ‖θ − θn‖∞
  32     6.1521(−12)
  64     6.2090(−15)
  128    2.2204(−15)
  256    4.6629(−15)
Table 5.43: The Values |θ256 (t) − θn (t)| of Example 5.24 with p = 2.
t
n = 32
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
1.5751(−10)
3.9498(−10)
8.1604(−10)
1.4873(−09)
2.3945(−09)
2.9040(−09)
1.0222(−09)
3.6632(−09)
4.4391(−09)
2.5555(−12)
6.4197(−12)
1.2664(−11)
2.0295(−11)
1.9885(−11)
8.3604(−12)
1.6510(−11)
1.0956(−11)
2.4230(−11)
4.5991(−14)
1.0081(−13)
1.6098(−13)
1.1535(−13)
1.6065(−13)
1.2618(−13)
2.0528(−13)
1.6201(−13)
3.2971(−13)
Table 5.44: The Values |θ256 (t) − θn (t)| of Example 5.24 with p = 3.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.23250880470793
0.43469991418204
0.57446264928486
0.62350008246226
0.57130137030700
0.43779826821010
0.27034834794028
0.12237247211698
0.02935658005220
9.9643(−15)
7.2720(−15)
8.5487(−15)
2.1982(−14)
2.6645(−14)
3.8303(−15)
1.6320(−14)
1.1990(−14)
1.4704(−14)
8.0214(−15)
8.3267(−16)
1.6653(−15)
1.9984(−15)
5.4401(−15)
4.2744(−15)
1.8874(−15)
1.3878(−16)
6.5260(−15)
Table 5.45: The Values |θ256 (t) − θn (t)| of Example 5.25.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
0.20438696624436
0.38340091184633
0.51391294207212
0.57778165246629
0.56539014860469
0.47974384038945
0.33994544815180
0.18172582621680
0.05239261198952
1.9346(−14)
2.7534(−14)
5.2403(−14)
9.0927(−14)
9.6700(−14)
3.3251(−14)
6.4060(−14)
5.5317(−14)
1.0324(−13)
8.2989(−15)
1.2212(−15)
2.1094(−15)
0
6.1062(−15)
2.7756(−15)
6.9389(−15)
5.6899(−15)
2.4425(−15)
Table 5.46: The Values |θ256 (t) − θn (t)| of Example 5.26 with p = 2.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
4.0637(−04)
1.3777(−03)
2.4767(−03)
3.2897(−03)
2.0448(−03)
8.0817(−04)
8.5722(−04)
4.4743(−04)
4.3610(−04)
2.8483(−04)
5.6714(−04)
9.5689(−04)
7.7464(−04)
7.1775(−04)
1.9204(−04)
3.1647(−04)
1.0949(−04)
1.3693(−04)
Table 5.47: The Values |θ256 (t) − θn (t)| of Example 5.26 with p = 3.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
3.9890(−04)
1.3673(−03)
2.4699(−03)
3.2885(−03)
2.0483(−03)
8.0140(−04)
8.5322(−04)
4.4713(−04)
4.3489(−04)
2.8337(−04)
5.6523(−04)
9.5584(−04)
7.7502(−04)
7.1624(−04)
1.9311(−04)
3.1578(−04)
1.0940(−04)
1.3700(−04)
Table 5.48: The Values |θ256 (t) − θn (t)| of Example 5.27.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
9.8872(−07)
1.5000(−06)
2.0325(−06)
2.3724(−06)
1.6809(−06)
5.5096(−07)
7.1581(−07)
2.9881(−07)
3.1891(−07)
1.2624(−07)
1.6175(−07)
1.8510(−07)
1.0380(−07)
1.0653(−07)
4.8837(−08)
6.1989(−08)
2.8196(−08)
2.7981(−08)
Table 5.49: The Values |θ256 (t) − θn (t)| of Example 5.28 with α0 = α1 = 3.
t
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
3.6131(−04)
1.1967(−03)
2.1609(−03)
2.8805(−03)
1.7923(−03)
7.1180(−04)
7.5315(−04)
3.9378(−04)
3.8316(−04)
2.5165(−04)
4.9594(−04)
8.3815(−04)
6.7877(−04)
6.3234(−04)
1.6870(−04)
2.7819(−04)
9.6241(−05)
1.2049(−04)
Table 5.50: The Values |θ256 (t) − θn (t)| of Example 5.28 with α0 = α2 = 4, α1 =
9.
t
θ256 (t)
n = 64
n = 128
0.1
0.2
0.3
0.4
0.5
0.6
0.7
0.8
0.9
−0.00000120870968
−0.00021716139689
−0.00365618287254
−0.01928243024348
−0.01501984305641
0.24086273416531
1.01205607720613
1.59482865487222
0.70226573180964
8.3933(−11)
1.9900(−10)
3.1068(−10)
3.8254(−10)
2.8303(−10)
8.8108(−11)
1.2179(−10)
5.1527(−11)
5.7386(−11)
3.0582(−13)
4.7585(−13)
5.6272(−13)
2.9490(−13)
2.8337(−13)
1.5016(−13)
2.0184(−13)
7.3719(−14)
8.5820(−14)
5.4   Discussion

For solving the weakly singular Fredholm integral equation of the second kind (5.1) (we took λ = 1/π as a special case) with kernel (5.2) (with α = 1/2 as a special case) and with kernel (5.3), we used three types of smoothing transformation, namely the Hermite smoothing transformation (4.10), the Kress smoothing transformation (4.22), and the modified Kress transformation (4.31), to reduce the weakly singular Fredholm integral equation to an equivalent equation with a smoother solution; we then solved the new equation using two methods, namely product integration with Gaussian abscissae and weights, and product integration with Clenshaw-Curtis points.
We considered two cases for investigating the efficiency of the Kress and modified Kress transformations: firstly, the case in which the input function is smooth on the whole domain of integration; secondly, the case in which it has finite jumps or singular points.
We investigated the Kress and Hermite transformations in problems with a known exact solution (as special cases, the exact solutions f(x) = 1 and f(x) = x^3), as shown in Tables 5.1, 5.2, and 5.3 for solving equation (5.6) using the Gauss method, Tables 5.19, 5.20, and 5.21 for solving equation (5.17) using the Gauss method, Tables 5.27, 5.28, 5.29, and 5.30 for solving (5.35) using the Clenshaw method, and Tables 5.39, 5.40, 5.41, and 5.42 for solving (5.46) using the Clenshaw method. With the Kress transformation we obtain various degrees of accuracy for various values of the parameter p, and likewise for various choices of the parameters α_k in the Hermite transformation, so one can obtain the same accuracy by a suitable choice of the parameters. The best accuracy using the Kress transformation appears in Tables 5.2, 5.20, 5.28, and 5.40 for the different problems and methods, and the best accuracy using the Hermite transformation appears in Tables 5.3, 5.21, 5.30, and 5.42 for the corresponding problems and methods.
For an input function g(x) which is smooth on the whole domain of integration (as special cases, g(x) = x and g(x) = x/√(2 − x²)), and suitably chosen parameters, there is no difference between the Hermite and Kress transformations; refer to Tables 5.13, 5.22, 5.32, and 5.44 for the Kress transformation, and Tables 5.14, 5.24, 5.33, and 5.45 for the corresponding problems and methods using the Hermite transformation.
Using the Hermite transformation to solve equations (5.6), (5.17), (5.35), and (5.46) with g(x) = |x| as the input function of the corresponding original equations does not give good accuracy, as is clear from Tables 5.9, 5.37, and 5.49, in spite of taking various values of the parameters αk with M = 0. The reason is that the input function g(x) = |x| is not smooth at the point x = 0. Dividing the domain of integration [−1, 1] into the two subintervals [−1, 0] and [0, 1], i.e. choosing M = 1, so that the input function is treated as having the singular point x1 = 0 in the Hermite transformation (4.10), recovers the accuracy, as can be seen clearly in Tables 5.8, 5.38, and 5.50.
Likewise, using the Kress transformation to solve equations (5.6), (5.17), (5.35), and (5.46) with g(x) = |x| as the input function does not give good accuracy, as is clear from Tables 5.4, 5.5, 5.6, 5.34, 5.35, 5.46, and 5.47, in spite of taking various values of the parameter p. Again the reason is that the input function is not smooth at the point x = 0. Dividing the domain of integration [−1, 1] into the two subintervals [−1, 0] and [0, 1], i.e. choosing M = 1, and using the modified Kress transformation (4.31) restores some accuracy, as Tables 5.7, 5.36, and 5.48 show.
In the case of g(x) = |x|, we find that the Hermite transformation gives the best accuracy compared to the modified Kress transformation; refer to Tables 5.8, 5.38, and 5.50 for the Hermite transformation and Tables 5.7, 5.36, and 5.48 for the modified Kress transformation. This is because the Hermite transformation smooths the solution more strongly: its derivatives vanish up to order eight at the singular point x1 = 0, which corresponds to the choice α1 = 9, while the derivatives of the modified Kress transformation vanish only up to order two at the same singular point, which corresponds to p = 3. Choosing p > 3 leads to an unsolvable linear system, since the concentration of the nodes near the singular points becomes very high and increases further as n becomes large. Another reason is that the modified Kress transformation is rational, compared with the polynomial nature of the Hermite transformation, so that the calculations become more complicated.
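As a rough numerical illustration of this flattening (our own check, not one of the experiments reported above), the derivative of the Kress map computed by w_wd.m in Appendix C decays like (1 − t)^(p−1) as t approaches an endpoint, which is the same order of flattening that the modified transformation produces at an interior singular point:

% Rough check of the flattening order of the Kress map (uses w_wd.m, Appendix C).
t = 1 - 10.^(-(1:4));          % points approaching the endpoint t = 1
[w2,wd2] = w_wd(t,2);          % wd2 shrinks roughly like (1-t)      (p = 2)
[w3,wd3] = w_wd(t,3);          % wd3 shrinks roughly like (1-t)^2    (p = 3)
disp([1-t; wd2; wd3])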
In the other examples, where the input function is smooth on the whole domain of integration, we obtain the same accuracy with the Kress transformation as with the Hermite transformation by choosing a suitable value of the parameter p for the Kress transformation.
CHAPTER 6
SUMMARY AND CONCLUSIONS
6.1  Summary
In this dissertation, we have studied the numerical solution of Fredholm integral equations of the second kind with weakly singular kernels using product integration methods. Some concepts and remarks related to this problem were introduced in Chapter 1.
The literature review of the numerical methods for solving weakly singular Fredholm integral equations of the second kind and of smoothing transformations was presented in Chapter 2.
The product integration methods were presented in Chapter 3. We discussed product integration of interpolating polynomial type based on Gauss-Legendre abscissae and weights, as well as on Clenshaw-Curtis points, and we applied the product integration methods to the weakly singular Fredholm integral equations of the second kind.
Chapter 4 presented some discussion of the quadrature formula and introduced the Hermite, Kress, and modified Kress transformations, which reduce an integral whose integrand has singularities at the endpoints of the interval of integration to a new integral with a smoother integrand. Next, we applied the transformations to the weakly singular Fredholm integral equation of the second kind in order to smooth its solution.
In Chapter 5, we presented the numerical results for some problems in order to investigate the performance of the Kress transformation. These numerical results were obtained by solving the weakly singular Fredholm integral equation of the second kind with Abel or logarithmic kernels using the product integration methods described in Chapter 3, together with the transformations introduced in Chapter 4.
The simulations were written in MATLAB 7.0. The programs first build the approximation matrix using the product integration methods presented in Chapter 3, and then solve the resulting linear system numerically.
6.2  Conclusions
Throughout this work, product integration methods combined with the Hermite, Kress, and modified Kress transformations were used to solve numerically the weakly singular Fredholm integral equation with Abel or logarithmic kernels. The performance of the Kress transformation was investigated by using the Hermite transformation as a standard, that is, by comparing the results obtained from the product integration methods combined with the Kress transformation with those obtained from the product integration methods combined with the Hermite transformation.
The results of this study show that, for the case in which the input function of the weakly singular Fredholm integral equation of the second kind is smooth on the whole domain of integration, the product integration methods combined with the Kress transformation gave results as accurate as the product integration methods combined with the Hermite transformation. For the case in which the input function has a finite number of singular points in the domain of integration, the product integration methods combined with the modified Kress transformation gave more accurate results than the product integration methods combined with the Kress transformation, but the most accurate results were given by the product integration methods combined with the Hermite transformation.
The advantage of using the product integration method is that it can be used to calculate integrals with singularities under the sole assumption that the integrand is absolutely integrable. Thus, it can be used to solve the weakly singular Fredholm integral equations with Abel and logarithmic kernels. However, it is noteworthy that the disadvantage of the product integration methods is that they are not generally applicable, since they require a recurrence relation which depends on the kernel of the integral equation and is available only for some important kernels, such as k(x, y) = |x − y|^(−α) and k(x, y) = ln |x − y|, which are known in the literature.
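To make the last remark concrete, the recurrences implemented in Appendices A and B generate the modified moments of the kernel against the interpolating polynomials. In the notation of Appendix A, where the array entry a(k+1,i+1) stores $a_k(x_i)$, these moments are

\[ a_k(x) = \int_{-1}^{1} P_k(y)\,|x-y|^{-1/2}\, dy, \qquad a_0(x) = 2\bigl(\sqrt{1+x}+\sqrt{1-x}\bigr), \]
\[ a_k(x) = \int_{-1}^{1} P_k(y)\,\ln|x-y|\, dy, \qquad a_0(x) = (1+x)\ln(1+x)+(1-x)\ln(1-x)-2, \]

with Chebyshev polynomials $T_k$ in place of the Legendre polynomials $P_k$ for the Clenshaw-Curtis version of Appendix B; the higher moments follow from three-term recurrences in $k$. It is the availability of such closed forms and recurrences that ties the method to these particular kernels.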
6.3  Recommendations for Future Study
In this study, we used the Hermite, Kress, and modified Kress transformations to reduce the weakly singular Fredholm integral equation to another weakly singular integral equation, but with a smoother solution. The new transformed weakly singular integral equation was solved using the product integration methods with interpolating polynomials, namely Legendre polynomials and Chebyshev polynomials of the first kind. We suggest solving the new transformed weakly singular integral equations by using the product integration methods with piecewise polynomials. We also suggest extending this work to the weakly singular Fredholm integro-differential equations.
REFERENCES
Atkinson, Kendall E. (1967). The Numerical Solution of Fredholm Integral
Equation of Second Kind. SIAM J. Numer. Anal. 4: 337–348.
Atkinson, Kendall E. (1976). A Survey of Numerical Methods for Solution of
Fredholm Integral Equation of Second Kind. SIAM, Philadelphia.
Atkinson, Kendall E. (1997). The Numerical Solution of Integral Equations of the Second Kind. Cambridge University Press.
Baker, C.T.H. (1977). The Numerical Treatment of Integral Equations. Oxford
University Press.
Clenshaw, C. W. and Curtis, A.R. (1960). A Method for Numerical Integration
on an Automatic Computer. Numer. Math. 2: 197–205.
Criscuolo, G., Mastroianni, G. and Monegato, G. (1990). Convergence Properties
of A class of Product Formulas for Weakly Singular Integral Equations. Math.
Comp. 55: 213–230.
Davis, Philip F. and Rabinowitz, Philip. (1984). Methods of Numerical
Integration. 2nd ed. Academic Press Inc.
Delves, L.M. and Mohamed, J. L. (1985). Computational Methods for Integral
Equations. Cambridge University Press, Cambridge.
Elliott, David and Prössdorf, Siegfried. (1995). An algorithm for the approximate
solution of integral equations of Mellin type. Numer. Math. 70: 427–452.
Graham, Ivan G. (1982). Singularity expansions for the solutions of second kind
Fredholm integral equations with weakly singular convolution kernels. J.
Integral Equations 4: 1–30.
Graham, Ivan G. (1982). Galerkin Method for Second Kind Integral Equations
with Singularities. Math. Comp. 39: 519–533.
Graham, I. and Chandler, G. (1988). High order methods for linear functionals
of solutions of second kind integral equations. SIAM J. Numer. Anal. 25:
1118–1137.
Jerri, Abdul J. (1985). Introduction to Integral Equations with Applications.
Marcel Dekker, INC. New York.
Kaneko, Hideaki and Xu, Yuesheng. (1991). Numerical Solutions for Weakly
Singular Fredholm Integral Equations of the Second Kind. Appl. Numer.
Math. 7: 167–177.
Kaneko, Hideaki and Xu, Yuesheng. (1994). Numerical Gauss-Type Quadratures
for Weakly Singular Integrals and Their Application to Fredholm Integral
Equations of the Second Kind. Math. Comp. 62: 739-753.
Kress, Rainer. (1989). Linear Integral Equations. Berlin: Springer-Verlag.
Kress, Rainer. (1990). A Nyström method for boundary integral equations in
domains with corners. Numer. Math. 58:145–161.
Kythe, Prem K. and Puri, Pratap. (2002). Computational Methods for Linear
Integral Equations. Birkhauser Boston.
Linz, Peter. (1985). Analytical and Numerical Methods for Volterra Equations.
SIAM Pub., Philadelphia.
Monegato, G. and Scuderi, L. (1998). High Order Methods for Weakly Singular
Integral Equations with Non Smooth Input Function. Math. Comp. 67:
1493–1515.
Piessens, Robert and Branders, Maria. (1976). Numerical Solution of Integral
Equations of Mathematical Physics Using Chebyshev Polynomials. J. Comp.
Phys. 21: 178–196.
Pogorzelski, W. (1966). Integral Equation and Their Applications. Vol. 1. Oxford:
Pergamon Press.
Rosenberg, Jonathan M. (2001). A Guide to Matlab: For Beginners and
Experienced Users. Cambridge University Press.
Schneider, Claus. (1981). Product Integration for Weakly Singular Integral
Equations. Math. Comp. 36: 207–213.
Sloan, Ian H. (1980). On Choosing the Points in Product Integration. J. Math.
Phys. 21: 1032-1039.
Wazwaz, Abdul-Majid. (1997). A first course in integral equations. World
Scientific.
Young, A. (1954). Approximate Product-Integration. Proc. Roy. Soc. London
Ser. A 224: 552–561.
APPENDICES
APPENDIX A
MATLAB program to find the approximation
matrix using Gauss-Legendre method.
% gau_point.m
function [t,wg] = gau_point(n)
n1=n+1; e1=n1*(n1+1);
if mod(n1,2)==0
    m=n1/2
else
    m=(n1+1)/2
end
for i=1:m
    t=(4*i-1)*pi/(4*n1+2); xo=(1-(1-1/n1)/(8*n1^2))*cos(t);
    pkm1=1; pk=xo;
    for k=2:n1
        t1=xo*pk; pkp1=t1-pkm1-(t1-pkm1)/k+t1;
        pkm1=pk; pk=pkp1;
    end
    den=1-xo^2; d1=n1*(pkm1-xo*pk);
    dpn=d1/den; dpn2=(2*xo*dpn-e1*pk)/den;
    dpn3=(4*xo*dpn2+(2-e1)*dpn)/den;
    dpn4=(6*xo*dpn3+(6-e1)*dpn2)/den;
    u=pk/dpn; v=dpn2/dpn;
    h=-u*(1+0.5*u*(v+u*(v^2-dpn3/(3*dpn))));
    p=pk+h*(dpn+0.5*h*(dpn2+h/3*(dpn3+0.25*h*dpn4)));
    dp=dpn+h*(dpn2+0.5*h*(dpn3+h*dpn4/3));
    h=h-p/dp; x(i)=xo+h;
    fx=d1-h*e1*(pk+0.5*h*(dpn+h/3*(dpn2+0.25*h*(dpn3+0.2*h*dpn4))));
    w(i)=2*(1-x(i)^2)/(fx^2);
end
if (m+m > n1 )
    x(m)=0;
end
if (m+m > n1 )
    t(m)=0;
    wg(m)=w(m);
    for i=0:m-2
        t(i+1)=-x(i+1); t(i+m+1)=x(m-i-1);
        wg(i+1)=w(i+1); wg(i+m+1)=w(m-i-1);
    end
else
    for i=0:m-1
        t(i+1)=-x(i+1); t(i+m+1)=x(m-i);
        wg(i+1)=w(i+1); wg(i+m+1)=w(m-i);
    end
end
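A quick sanity check of this routine (our addition, not part of the thesis programs): the returned nodes and weights should integrate low-degree polynomials over [−1, 1] exactly, up to roundoff.

[t,wg] = gau_point(8);         % the 9 Gauss-Legendre nodes and weights on [-1,1]
sum(wg)                        % expected close to 2    (integral of 1)
sum(wg.*t.^2)                  % expected close to 2/3  (integral of y^2)
sum(wg.*t.^7)                  % expected close to 0    (odd power)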
1. For kernel k(x, y) = |x − y|^(−1/2)
% Gau_Leg_Abs_Mat.m
function [w, p] = Gau_Leg_Abs_Mat(n)
[t,wg] = gau_point(n);
for j=0:n
    p(1,j+1)=1;
    p(2,j+1)=t(j+1);
end
for i=1:n-1
    for j=0:n
        p(i+2,j+1)=((2*i+1)*t(j+1)*p(i+1,j+1)-i*p(i,j+1))/(i+1);
    end
end
for i=0:n
    a(1,i+1)=2*(sqrt(1+t(i+1))+sqrt(1-t(i+1)));
end
for i=0:n
    a(2,i+1)=t(i+1)*a(1,i+1)+2*((1-t(i+1))^1.5-(1+t(i+1))^1.5)/3;
end
for k=1:n-1
    for i=0:n
        a(k+2,i+1)=2*((2*k+1)*t(i+1)*a(k+1,i+1)-0.5*(2*k-1)*a(k,i+1))/...
            (2*k+3);
    end
end
for i=0:n
    for j=0:n
        sum1=0;
        for k=0:n
            sum1=sum1+(2*k+1)*p(k+1,j+1)*a(k+1,i+1)/2;
        end
        w(i+1,j+1)=wg(j+1)*sum1;
    end
end
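Since a(1,:) above is the zeroth moment a_0(x) = 2(√(1+x) + √(1−x)) = ∫_{−1}^{1} |x − y|^(−1/2) dy, the rows of the weight matrix must reproduce it when the method is applied to f ≡ 1. A small consistency check (our addition, not part of the thesis programs):

n = 16;
[W,P]   = Gau_Leg_Abs_Mat(n);
[t,wg]  = gau_point(n);
rowsums = W*ones(n+1,1);                   % approximates the integral of |t_i - y|^(-1/2) over [-1,1]
exact   = (2*(sqrt(1+t)+sqrt(1-t))).';     % zeroth moment a_0(t_i)
max(abs(rowsums-exact))                    % expected at roundoff level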
2. For kernel k(x, y) = ln |x − y|
% Gau_Leg_log_Mat.m
function [w,p] = Gau_Leg_Log_Mat(n)
[t,wg] = gau_point(n);
for j=0:n
    p(1,j+1)=1;
    p(2,j+1)=t(j+1);
end
for i=1:n-1
    for j=0:n
        p(i+2,j+1)=((2*i+1)*t(j+1)*p(i+1,j+1)-i*p(i,j+1))/(i+1);
    end
end
for i=0:n
    a(1,i+1)=(1+t(i+1))*log(1+t(i+1))+(1-t(i+1))*log(1-t(i+1))-2;
end
for i=0:n
    a(2,i+1)=0.5*(1-t(i+1)^2)*log((1-t(i+1))/(1+t(i+1)))-t(i+1);
end
for i=0:n
    a(3,i+1)=0.5*t(i+1)*(1-t(i+1)^2)*log((1-t(i+1))/(1+t(i+1)))...
        +(2-3*t(i+1)^2)/3;
end
for k=2:n-1
    for i=0:n
        a(k+2,i+1)=((2*k+1)*t(i+1)*a(k+1,i+1)-(k-1)*a(k,i+1))/(k+2);
    end
end
for i=0:n
    for j=0:n
        sum1=0;
        for k=0:n
            sum1=sum1+(2*k+1)*p(k+1,j+1)*a(k+1,i+1)/2;
        end
        w(i+1,j+1)=wg(j+1)*sum1;
    end
end
APPENDIX B
MATLAB program to find the approximation
matrix using Clenshaw-Curtis method
1. For kernel k(x, y) = |x − y|^(−1/2)
% Cel_Cur_Abs_Mat.m
function [w,t] = Cel_Cur_Abs_Mat(m)
for i=0:m
    t(i+1)=cos(i*pi/m);
end
for i=0:m
    a(1,i+1)=2*(sqrt(1+t(i+1))+sqrt(1-t(i+1)));
end
for i=0:m
    a(2,i+1)=t(i+1)*a(1,i+1)+2*((1-t(i+1))^1.5-(1+t(i+1))^1.5)/3;
end
for i=0:m
    a(3,i+1)=4*t(i+1)*a(2,i+1)-(2*(t(i+1))^2 +1)*a(1,i+1)...
        +4*((1-t(i+1))^(2.5)+(1+t(i+1))^(2.5))/5;
end
for j=2:m-1
    for i=0:m
        a(j+2,i+1)=(2*j+2)*(2*t(i+1)*a(j+1,i+1)-(2*j-3)*a(j,i+1)/(2*(j-1))...
            +2*(sqrt(1-t(i+1))-((-1)^j)*sqrt(1+t(i+1)))/(1-j^2))/(2*j+3);
    end
end
p(1)=0.5; p(m+1)=0.5;
for i=1:m-1
    p(i+1)=1;
end
for j=0:m
    for i=0:m
        sum=(a(1,i+1)+a(m+1,i+1)*cos(j*pi))/2;
        for k=1:m-1
            sum=sum+a(k+1,i+1)*cos(j*k*pi/m);
        end
        w(i+1,j+1)=2*p(j+1)*sum/m;
    end
end
2. For kernel k(x, y) = ln |x − y|
% Cel_Cur_log_Mat.m
function [w,t] = Cel_Cur_Log_Mat(n)
for i=0:n
    t(i+1)=cos((i*pi)/n);
end
a(1,1) =2*log(2)-2; a(1,n+1)=2*log(2)-2;
for j=1:n-1
    a(1,j+1)=(t(j+1)+1)*log(1+t(j+1))+(1-t(j+1))*log(1-t(j+1))-2;
end
a(2,1) =-a(1,1)-1+2*log(2); a(2,n+1)=a(1,n+1)+1-2*log(2);
for j=1:n-1
    a(2,j+1)=t(j+1)*(a(1,j+1)+1)+0.5*(((1-t(j+1))^2)*log(1-t(j+1))...
        -((1+t(j+1))^2)*log(1+t(j+1)));
end
a(3,1) =-3*a(1,1)-4*a(2,1)+16*(3*log(2)-1)/9;
a(3,n+1)=-3*a(1,n+1)+4*a(2,n+1)+16*(3*log(2)-1)/9;
for j=1:n-1
    a(3,j+1)=-(1+2*t(j+1)^2)*a(1,j+1)+4*t(j+1)*a(2,j+1)...
        +(6*(((1+t(j+1))^3)*log(1+t(j+1))...
        +((1-t(j+1))^3)*log(1-t(j+1)))-4*(1+3*(t(j+1))^2))/9;
end
a(4,1) =-10*a(1,1)-15*a(2,1)-6*a(3,1)+16*log(2)-4;
a(4,n+1)=10*a(1,n+1)-15*a(2,n+1)+6*a(3,n+1)-16*log(2)+4;
for j=1:n-1
    a(4,j+1)=2*t(j+1)*(3+2*t(j+1)^2)*a(1,j+1)-3*(1+4*t(j+1)^2)*a(2,j+1)...
        +6*t(j+1)*a(3,j+1)+(1-t(j+1))^4*log(1-t(j+1))-(1+t(j+1))^4*...
        log(1+t(j+1))+2*t(j+1)*(1+t(j+1)^2);
end
for i=3:n-1
    for j=0:n
        if( j==0)
            a(i+2,j+1)=(i+1)*(-2*a(i+1,j+1)-(i-2)*a(i,j+1)/(i-1)+4*log(2)/(1-i^2)...
                -6*(1-(-1)^i)/((i^2-1)*(i^2-4)))/(i+2);
        elseif (j==n)
            a(i+2,j+1)=(i+1)*(2*a(i+1,j+1)-(i-2)*a(i,j+1)/(i-1)-4*(-1)^i*log(2)/...
                (1-i^2)-6*(1-(-1)^i)/((i^2-1)*(i^2-4)))/(i+2);
        else
            a(i+2,j+1)=(i+1)*(2*t(j+1)*a(i+1,j+1)-(i-2)*a(i,j+1)/(i-1)...
                +2*((1-t(j+1))*log(1-t(j+1))-(-1)^i*(1+t(j+1))*log(1+t(j+1)))/(1-i^2)...
                -6*(1-(-1)^i)/((i^2-1)*(i^2-4)))/(i+2);
        end
    end
end
p(1)=0.5; p(n+1)=0.5;
for i=1:n-1
    p(i+1)=1;
end
for j=0:n
    for i=0:n
        sum1=(a(1,i+1)+a(n+1,i+1)*cos(j*pi))/2;
        for k=1:n-1
            sum1=sum1+a(k+1,i+1)*cos(j*k*pi/n);
        end
        w(i+1,j+1)=2*p(j+1)*sum1/n;
    end
end
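The same consistency check as in Appendix A applies here (and, with their respective zeroth moments, to Gau_Leg_Log_Mat.m and Cel_Cur_Abs_Mat.m): applied to f ≡ 1, the rows of w should return ∫_{−1}^{1} ln|x_i − y| dy = (1 + x_i)ln(1 + x_i) + (1 − x_i)ln(1 − x_i) − 2. A rough check at the interior Chebyshev points (our addition, not part of the thesis programs):

n = 16;
[W,t]   = Cel_Cur_Log_Mat(n);
rowsums = W*ones(n+1,1);
exact   = ((1+t).*log(1+t)+(1-t).*log(1-t)-2).';   % NaN at t = +-1, so compare interior points only
max(abs(rowsums(2:n)-exact(2:n)))                  % expected at roundoff level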
APPENDIX C
MATLAB program which solves a weakly singular Fredholm integral
equation with Abel kernel using Gauss-Legendre method
PART I Computation of the error norm between the exact and approximate
solutions.
% main.m
clear
n=256;                          % choose n
p=2;                            % choose p
[x,wg]=gau_point(n);
[B,p1]=Gau_Leg_Abs_Mat(n);
[w,wd]=w_wd(x,p);               % For Hermite, replace it by '[w,wd]=h_hd(x,n)'.
xi_n=(wd.*g(w)).';
% delta beginning
alpha=0.5;
for i=0:n
    for j=0:n
        if(i==j)
            if(wd(i+1)==0)
                delta(i+1,j+1)=0;
            else
                delta(i+1,j+1)=((abs(wd(i+1)))^(-alpha))*wd(i+1);
            end
        else
            delta(i+1,j+1)=((abs((w(i+1)-w(j+1))/(x(i+1)-x(j+1))))...
                ^(-alpha))*wd(i+1);
        end
    end
end
% delta end
A=B.*delta;
approximate_solution=(eye(n+1)-(1/pi).*A)\xi_n;
exact_solution=(wd.*f(w)).';
norm_infinity=norm(exact_solution-approximate_solution,inf)
clear
% w_wd.m
function [w,wd]=w_wd(t,p)
a1=v(t,p).^p; a2=v(-t,p).^p; b1=v(t,p).^(p-1); b2=v(-t,p).^(p-1);
w=(a1-a2)./(a1+a2);
wd=2.*p.*(a1.*b2.*vd(-t,p)+a2.*b1.*vd(t,p))./((a1+a2).^2);
function v=v(t,p)
v = (1/2-1/p).*t.^3+t./p+1/2;
function vd=vd(t,p)
vd = 3.*(1/2-1/p).*t.^2+1/p;
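A quick check of the transformation (our addition): w_wd should map [−1, 1] onto itself, with the derivative vanishing at both endpoints and equal to 2 at the centre.

[w,wd] = w_wd([-1 0 1],3);
% expected: w  = [-1 0 1]
%           wd = [ 0 2 0]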
% h_hd.m
function [h,hd]=h_hd(t,n)
syms y
h1=1980*y^8*(1+y)^3;
h2=1980*y^8*(1-y)^3;
for i=0:n
    if (t(i+1)<=0)
        h(i+1)=double(int(h1,-1,t(i+1))-1);
        hd(i+1)=hd1(t(i+1));
    else
        h(i+1)=double(int(h2,0,t(i+1)));
        hd(i+1)=hd2(t(i+1));
    end
end
function z=hd1(r)
z=1980*(r^8)*((1+r)^3);
function z=hd2(r)
z=1980*(r^8)*((1-r)^3);
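Similarly, h_hd (which needs the Symbolic Math Toolbox for the integration step) should fix the points −1, 0, and 1 with zero derivative at all three; the zero of order eight at t = 0 and of order three at t = ±1 correspond to the choices α1 = 9 and α0 = α2 = 4 discussed in Chapter 5. A quick check (our addition):

[h,hd] = h_hd([-1 0 1],2);
% expected: h  = [-1 0 1]
%           hd = [ 0 0 0]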
% g.m
function g=g(x)
x1=(1+x).^0.5; x2=(1-x).^0.5;
g=x.^3-(2/pi).*(((-1/7).*x1.^7+(3/5).*x.*(x1.^5)-(x.^2).*(x1.^3)+...
(x.^3).*x1)+((1/7).*x2.^7+(3/5).*x.*(x2.^5)+(x.^2).*(x2.^3)+(x.^3).*x2));
% f.m
function f=f(x);
f=x.^3;
PART II Computation of the absolute error between the reference and
approximate solutions.
% main.m
clear
n=128;                          % choose n
p=3;                            % choose p
[t,wg]=gau_point(n);
[B,p1]=Gau_Leg_Abs_Mat(n);
[w,wd]=w_wd(t,p);
xi_n=(wd.*g(w)).';
% delta beginning
alpha=0.5;
for i=0:n
    for j=0:n
        if(i==j)
            if(wd(i+1)==0)
                delta(i+1,j+1)=0;
            else
                delta(i+1,j+1)=((abs(wd(i+1)))^(-alpha))*wd(i+1);
            end
        else
            delta(i+1,j+1)=((abs((w(i+1)-w(j+1))/(t(i+1)-t(j+1))))...
                ^(-alpha))*wd(i+1);
        end
    end
end
% delta end
A=B.*delta;
approximate_solution=(eye(n+1)-(1/pi).*A)\xi_n;
% Computation of the approximate solution at the vector x
x = [0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9];
for m=0:n
    p2(m+1,:)=Leg(m,x);
end
for j=0:n
    sum=0;
    for m=0:n
        sum=sum+(m+0.5)*p1(m+1,j+1).*p2(m+1,:);
    end
    phi(j+1,:)=wg(j+1).*sum;
end
sum=0.*x;
for j=0:n
    sum=sum+approximate_solution(j+1).*phi(j+1,:);
end
theta_n = sum.';
theta_256=[-5.89029345423331
           -4.72472398875914
           -3.43967128944800
           -2.26330205451274
           -1.34307342163110
           -0.71640457082352
           -0.33606671856362
           -0.12735966541007
           -0.02774404425589];
absolute_error = abs(theta_256-theta_n)
clear
% w_wd.m
function [w,wd]=w_wd(t,p)
a1=v(t,p).^p; a2=v(-t,p).^p; b1=v(t,p).^(p-1); b2=v(-t,p).^(p-1);
w=(a1-a2)./(a1+a2);
wd=2.*p.*(a1.*b2.*vd(-t,p)+a2.*b1.*vd(t,p))./((a1+a2).^2);
function v=v(t,p)
v = (1/2-1/p).*t.^3+t./p+1/2;
function vd=vd(t,p)
vd = 3.*(1/2-1/p).*t.^2+1/p;
% g.m
function g=g(x)
g=abs(x);
% Leg.m
function y = Leg(n,x)
P3(1,:)=1+x-x;
P3(2,:)=x;
for i=1:n-1
    P3(i+1+1,:)=((2*i+1).*x.*P3(i+1,:)-i.*P3(i,:))/(i+1);
end
y=P3(n+1,:);
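A small check of Leg.m against the closed form P3(x) = (5x^3 − 3x)/2 (our addition):

Leg(3,[0 0.5 1])                   % expected: [0 -0.4375 1]
(5*[0 0.5 1].^3-3*[0 0.5 1])/2     % closed form, for comparison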
APPENDIX D
MATLAB program which solves a weakly singular Fredholm integral
equation with logarithmic kernel using Clenshaw-Curtis method
% main.m  % Computes the error norm between the exact and approximate solutions.
clear
n=256;                          % choose n
p=2;                            % choose p
[C,x] = Cel_Cur_Log_Mat(n);
[w,wd]=w_wd(x,p);
xi_n=(wd.*g(w)).';
% B beginning
gamma(1)=0.5;
gamma(n+1)=0.5;
for i=1:n-1
    gamma(i+1)=1;
end
for i=0:n
    for j=0:n
        sum=0;
        for m=0:floor(n/2)
            sum=sum+gamma(2*m+1)*cos((2*m*j*pi)/n)/(1-4*m^2);
        end
        B(i+1,j+1)=(4*gamma(j+1)/n)*sum;
    end
end
% B end
% delta beginning
for i=0:n
    for j=0:n
        if(i==j)
            if(wd(i+1)==0)
                delta(i+1,j+1)=0;
            else
                delta(i+1,j+1)=(log(abs(wd(i+1))))*wd(i+1);
            end
        else
            delta(i+1,j+1)=(log(abs((w(i+1)-w(j+1))/(x(i+1)-x(j+1)))))*wd(i+1);
        end
    end
end
% delta end
A=((wd.')*ones(1,n+1)).*C-B.*delta;
approximate_solution=(eye(n+1)-(1/pi).*A)\xi_n;
exact_solution=(wd.*f(w)).';
norm_infinity=norm(exact_solution-approximate_solution,inf)
clear
% w_wd.m
function [w,wd]=w_wd(t,p)
a1=v(t,p).^p; a2=v(-t,p).^p; b1=v(t,p).^(p-1); b2=v(-t,p).^(p-1);
w=(a1-a2)./(a1+a2);
wd=2.*p.*(a1.*b2.*vd(-t,p)+a2.*b1.*vd(t,p))./((a1+a2).^2);
function v=v(t,p)
v = (1/2-1/p).*t.^3+t./p+1/2;
function vd=vd(t,p)
vd = 3.*(1/2-1/p).*t.^2+1/p;
% g.m
function g=g(x)
g=1-(1/pi).*(log((x+1).^(x+1))+log((1-x).^(1-x))-2);
% f.m
function f=f(x)
f=x-x+1;