Second Order ODE Notes

Second and Higher Order
Differential Equations
1 Constant Coefficient Equations
The methods presented in this section work for nth order equations.
1.1 Homogeneous Equations
We consider the equation:
a_0 y^{(n)}(t) + a_1 y^{(n-1)}(t) + \cdots + a_n y(t) = 0   (1.1.1)
where the a_i are real numbers, and we attempt to find a solution of the form y(t) = e^{rt}. (Note the following notational convention: y^{(n)} := \frac{d^n y}{dt^n}, while y^n := y raised to the nth power.) This attempt leads to the characteristic equation (after dividing by e^{rt}):

P(r) := a_0 r^n + a_1 r^{n-1} + \cdots + a_{n-1} r + a_n = 0 .   (1.1.2)

The Fundamental Theorem of Algebra guarantees that we will have n (not necessarily distinct) roots, r_i, of our characteristic equation.
The type of solutions we get depends on whether each root of the characteristic equation is real or complex.
• Corresponding to a real root r̃ repeated ℓ times we get the ℓ solutions:
e^{r̃t}, t e^{r̃t}, . . . , t^{ℓ−1} e^{r̃t} .
• Corresponding to a complex root λ + iµ, we will always have its complex conjugate λ − iµ (we are using the fact that the a_i are real), and we get the solutions
e^{λt} sin(µt) and e^{λt} cos(µt) .
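For example (an equation chosen here just for illustration): y'' + 2y' + 5y = 0 has characteristic polynomial r^2 + 2r + 5 with roots r = −1 ± 2i, so λ = −1, µ = 2, and the two solutions above are e^{−t} cos(2t) and e^{−t} sin(2t).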
With this kind of problem, the only hard part is to find the zeros of our
characteristic equation (1.1.2). Here are two helpful facts from algebra:
1. If the a_i are integers, and p/q (in lowest terms) is a rational root, then p is a factor of a_n and q is a factor of a_0.
2. If r̃ is a root of P (r), our characteristic polynomial, then r − r̃ is a factor
of P (r). In other words, there is a polynomial Q(r) (which can be found
by long division) such that (r − r̃)Q(r) = P (r). (This fact is called
The Factor Theorem in some Precalculus texts.) Now the degree of Q
is smaller than the degree of P, so it will probably be easier to find the
roots of Q.
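For example, y''' − 2y'' − y' + 2y = 0 has characteristic polynomial P(r) = r^3 − 2r^2 − r + 2. By the first fact, any rational root p/q must have p a factor of 2 and q a factor of 1, so the only candidates are ±1 and ±2. Trying r̃ = 1 gives P(1) = 0, and long division by r − 1 gives Q(r) = r^2 − r − 2 = (r − 2)(r + 1). The roots are therefore 1, 2, and −1, and the general solution is y(t) = c_1 e^t + c_2 e^{2t} + c_3 e^{−t}.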
1.2 Nonhomogeneous Equations
Now we consider the equation:
a_0 y^{(n)}(t) + a_1 y^{(n-1)}(t) + \cdots + a_n y(t) = g_1(t) + \cdots + g_m(t)   (1.2.1)
where the a_i are real numbers, and each function g_i(t) is the product of some combination of
1. t to a positive whole number power,
2. e to a constant times t, and
3. a sine or cosine of a constant times t.
We attempt to solve equation (1.2.1) by guessing the form of the answer.
This method is called the method of undetermined coefficients. Rather than
completely random guesswork we have some guides:
1. We can solve (1.2.1) by adding our solutions to the m separate problems:
a_0 y^{(n)}(t) + a_1 y^{(n-1)}(t) + \cdots + a_n y(t) = g_i(t) .
2. We never guess a sine without a cosine and vice versa.
3. If A_0 t^ℓ f(t) is the leading term of our guess, then we should guess
A_0 t^ℓ f(t) + A_1 t^{ℓ-1} f(t) + \cdots + A_{ℓ-1} t f(t) + A_ℓ f(t).
4. Now if any of the terms of our guess are in fact solutions to the homogeneous problem (i.e. (1.1.1)), then we adjust our guess by multiplying it by t to a power large enough to ensure that all of the terms of our new guess will NOT be solutions of the homogeneous problem (1.1.1). (See the worked example below.)
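For example (with the right-hand side chosen here just to exercise guides 3 and 4): consider y'' − y = t e^t. The homogeneous solutions are e^t and e^{−t}, and the naive guess (A_0 t + A_1) e^t contains the homogeneous solution e^t, so we multiply by t and guess Y(t) = (A_0 t^2 + A_1 t) e^t instead. Substituting gives Y'' − Y = (4A_0 t + 2A_0 + 2A_1) e^t, and matching coefficients with t e^t yields A_0 = 1/4 and A_1 = −1/4, so Y(t) = (t^2/4 − t/4) e^t.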
2 Nastier Equations
Now we will present some methods to solve some nastier equations.
2.1 Variation of Parameters
If p, q, and g are continuous on the open interval I, and if y1 and y2 are
linearly independent solutions of the homogeneous equation
y''(t) + p(t)y'(t) + q(t)y(t) = 0 ,   (2.1.1)
then
Y(t) = -y_1(t) \int \frac{y_2(t)\, g(t)}{W(y_1, y_2)(t)}\, dt + y_2(t) \int \frac{y_1(t)\, g(t)}{W(y_1, y_2)(t)}\, dt   (2.1.2)
will be a solution of the inhomogeneous equation
y''(t) + p(t)y'(t) + q(t)y(t) = g(t) .   (2.1.3)
Of course this method assumes that you have already found two linearly independent solutions, y1 and y2 .
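As a worked example (the forcing term is chosen here so the integrals are elementary): for y'' + y = sec(t) we may take y_1 = cos(t) and y_2 = sin(t), so W(y_1, y_2)(t) = cos^2(t) + sin^2(t) = 1, and
Y(t) = -\cos(t) \int \tan(t)\, dt + \sin(t) \int 1\, dt = \cos(t) \ln|\cos(t)| + t \sin(t) ,
using \int \tan(t)\, dt = -\ln|\cos(t)|.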
2.2 When the Dependent Variable Is Missing
For a 2nd order ODE of the form
y''(t) = f(t, y') ,   (2.2.1)
the substitution v = y', v' = y'' leads to the 1st order ODE
v'(t) = f(t, v) .   (2.2.2)
In general, if we have
y^{(n)}(t) = f(t, y^{(n-1)}, y^{(n-2)}, \ldots, y^{(n-m)}) ,   (2.2.3)
where n − m > 0, then the substitution u = y^{(n-m)} will simplify things. (It will reduce the order from n to m.)
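For example, y'' = y' + t has no y term, so we set v = y'. The linear 1st order equation v' = v + t has solution v(t) = C e^t − t − 1, and integrating once more gives y(t) = C e^t − t^2/2 − t + D.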
2.3 Euler Equations
We consider the equation:
t^2 y''(t) + α t y'(t) + β y(t) = 0 .   (2.3.1)
The trick to solving this equation is to introduce the change of variables x = ln(t) (so \frac{dx}{dt} = \frac{1}{t}), and use the chain rule (and a bunch of scratch paper ⌣) to derive the following equation relating y and x:
y''(x) + (α − 1) y'(x) + β y(x) = 0 .   (2.3.2)
In this last equation we have y as a function of x, not t, and the derivatives are
with respect to x, not t. Now we have a constant coefficient equation whose
solution, say y(x) = c_1 y_1(x) + c_2 y_2(x), is easy to find. Our solution to (2.3.1) is then
y(t) = c_1 y_1(ln(t)) + c_2 y_2(ln(t)) .   (2.3.3)
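For example (with α and β chosen here to give integer roots): t^2 y'' + 2t y' − 6y = 0 has α = 2 and β = −6, so (2.3.2) becomes y'' + y' − 6y = 0, whose characteristic roots are 2 and −3. Hence y(x) = c_1 e^{2x} + c_2 e^{−3x}, and substituting x = ln(t) gives y(t) = c_1 t^2 + c_2 t^{−3}.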
Now for those of you who want practice with the chain rule, and those mathematicians among you who don’t want to accept stuff on faith, here is the
derivation:
We use the chain rule in the following form:
\frac{d}{dt} g(x(t)) = g'(x(t))\, x'(t)   or   \frac{dg}{dt} = \frac{dg}{dx} \cdot \frac{dx}{dt}   (2.3.4)
with g = y and then again with g = \frac{dy}{dx}.
Since x = ln(t) we get:

\frac{dy}{dt} = \frac{dy}{dx} \cdot \frac{dx}{dt} = \frac{dy}{dx} \cdot \frac{1}{t}   (2.3.5)

\frac{d^2 y}{dt^2} = \frac{d}{dt}\left[\frac{dy}{dt}\right] = \frac{d}{dt}\left[\frac{1}{t} \cdot \frac{dy}{dx}\right]
= -\frac{1}{t^2} \cdot \frac{dy}{dx} + \frac{1}{t} \cdot \frac{d}{dt}\left[\frac{dy}{dx}\right]   by the product rule
= -\frac{1}{t^2} \cdot \frac{dy}{dx} + \frac{1}{t} \cdot \frac{d^2 y}{dx^2} \cdot \frac{dx}{dt}   by the chain rule with g = \frac{dy}{dx}
= -\frac{1}{t^2} \cdot \frac{dy}{dx} + \frac{1}{t^2} \cdot \frac{d^2 y}{dx^2} .   (2.3.6)
Now we substitute into (2.3.1):
0 = t^2 \cdot \frac{d^2 y}{dt^2} + \alpha t \cdot \frac{dy}{dt} + \beta y
= t^2 \left[ -\frac{1}{t^2} \cdot \frac{dy}{dx} + \frac{1}{t^2} \cdot \frac{d^2 y}{dx^2} \right] + \alpha t \left[ \frac{1}{t} \cdot \frac{dy}{dx} \right] + \beta y
= y'' - y' + \alpha y' + \beta y = y'' + (\alpha - 1) y' + \beta y .   (2.3.7)
2.4 Reduction of Order
Suppose that we have one solution, y1 (t) (not identically zero), of
y''(t) + p(t)y'(t) + q(t)y(t) = 0 .   (2.4.1)
To find a second solution, we let y_2(t) := v(t) y_1(t). Substituting into (2.4.1) yields
0 = y_1 v'' + (2y_1' + p y_1) v' + (y_1'' + p y_1' + q y_1) v = y_1 v'' + (2y_1' + p y_1) v' ,   (2.4.2)
where the coefficient of v vanishes because y_1 solves (2.4.1).
So by letting u := v', we get
0 = y_1 u' + (2y_1' + p y_1) u ,   (2.4.3)
which is a linear 1st order equation for u, and hopefully not too hard to solve.
Since u = v ′ , we integrate u (our solution of the last ODE) to get v, and then
we multiply by y1 to get y2 .
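For example, y'' − 2y' + y = 0 has the solution y_1(t) = e^t. Here p = −2, so (2.4.3) reads 0 = e^t u' + (2e^t − 2e^t) u = e^t u', and we may take u = 1. Then v = \int u \, dt = t, and y_2(t) = t e^t.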
2.5 Series Solutions
We consider the equation
y''(x) + p(x) y'(x) + q(x) y(x) = 0   or   P(x) y''(x) + Q(x) y'(x) + R(x) y(x) = 0   (2.5.1)
where
p(x) = \frac{Q(x)}{P(x)}   and   q(x) = \frac{R(x)}{P(x)}
are rational functions. We look for solutions near a point x_0 where P(x_0) ≠ 0.
We look for solutions of the form
y(x) = \sum_{j=0}^{\infty} a_j (x - x_0)^j ,   (2.5.2)
i.e. we assume that our solution is analytic, and we plug in its power series.
A few facts:
1. With the conditions which we have imposed on P, Q, and R, we will
have two linearly independent analytic solutions of our equation. (The
important part of this statement is the analyticity.)
2. If
\sum_{j=0}^{\infty} a_j (x - x_0)^j = \sum_{j=0}^{\infty} b_j (x - x_0)^j   (2.5.3)
in an interval containing x_0, then a_j = b_j for all j.
3. If f(x) = \sum_{j=0}^{\infty} a_j (x - x_0)^j, then a_j = \frac{f^{(j)}(x_0)}{j!}.
Because of the second item above, when substituting into the ODE (2.5.1), one
typically has to “shift indices” so that like powers of (x − x0 ) can be grouped
together. In particular, the following shifts need to be done quite frequently.
y'(x) = \sum_{j=1}^{\infty} j a_j (x - x_0)^{j-1} = \sum_{j=0}^{\infty} (j+1) a_{j+1} (x - x_0)^j   (2.5.4)

y''(x) = \sum_{j=2}^{\infty} j(j-1) a_j (x - x_0)^{j-2} = \sum_{j=0}^{\infty} (j+2)(j+1) a_{j+2} (x - x_0)^j
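As a quick illustration, take y'' + y = 0 about x_0 = 0. Substituting the shifted series gives
\sum_{j=0}^{\infty} \left[ (j+2)(j+1) a_{j+2} + a_j \right] x^j = 0 ,
so by the second fact above a_{j+2} = -\frac{a_j}{(j+2)(j+1)} for all j; starting from a_0 and a_1, this recurrence reproduces the power series of cos(x) and sin(x).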
3 General Theory of nth Order Linear Equations
We consider the homogeneous equation:
L[y] := y^{(n)}(t) + p_1(t) y^{(n-1)}(t) + \cdots + p_{n-1}(t) y'(t) + p_n(t) y(t) = 0 ,   (3.0.1)
and the inhomogeneous equation:
L[y] = g(t)   (3.0.2)
each with the n initial conditions
y(t_0) = y_0 , y'(t_0) = y_0' , \ldots , y^{(n-1)}(t_0) = y_0^{(n-1)} .   (3.0.3)
3.1 Preliminary Definitions
The functions f1 , . . . , fn are linearly dependent on I if there are constants
k1 , . . . , kn , which are not all zero, such that k1 f1 (t) + · · · + kn fn (t) = 0 for all
t in I. Equivalently, the f_i are linearly dependent if one of them, say f_{i_0}, is a linear combination of the remaining f_i, i.e.
f_{i_0} = \sum_{i \neq i_0} c_i f_i
for some constants c_i. If the functions f_i are solutions to a linear ODE, then the last statement is equivalent to saying that the solution f_{i_0} can be obtained from the other f_i by the principle of superposition (see below).
The functions f1 , . . . , fn are linearly independent on I if they are not
linearly dependent. So, none of the fi are linear combinations of the other fi .
If the fi are solutions, then none of the fi are superpositions of the others.
3.2 Results
3.1 Theorem (Existence and Uniqueness). If pi (t) and g(t) are all continuous
on the open interval I (and t0 is in I), then there exists a unique solution of
(3.0.2) which satisfies (3.0.3) .
Trivial observations:
• The coefficient of y^{(n)} is 1, not a function of t.
• Equation (3.0.1) is a special case of equation (3.0.2) where g(t) ≡ 0, so
the theorem above for (3.0.2) with (3.0.3) applies to (3.0.1) with (3.0.3)
as well.
Now we start listing everything that we know about our homogeneous equation
(3.0.1) on an interval I where we assume that the pi (t) are continuous.
1. There exist n linearly independent solutions y_1, . . . , y_n of (3.0.1) on I.
(Note that the order of our ODE is n.)
2. Every solution y of (3.0.1) on I is a linear combination of the y1 , . . . , yn .
In other words, there exist constants c1 , . . . , cn such that y(t) =
c1 y1 (t) + · · · + cn yn (t).
3. The Principle of Superposition: All linear combinations of the yi are
solutions of (3.0.1) on I. In particular, sums, differences, and constant
multiples of solutions are solutions.
4. Given any n solutions of (3.0.1) , y1 , . . . , yn which are not necessarily
linearly independent, their Wronskian, W (y1 , . . . , yn )(t), which is a
function of t on I, turns out to be either identically zero (i.e. zero for all
t in I), in which case the yi are linearly dependent, or it is never zero in
I, in which case the yi are linearly independent.
5. As a consequence of the last point, it suffices to check the Wronskian
of n solutions at a single point to see if they are linearly independent.
Abel's formula:
W(y_1, \ldots, y_n)(t) = c \exp\left[ -\int p_1(t)\, dt \right] .
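For instance, for y'' + y = 0 (so p_1 ≡ 0) with solutions cos(t) and sin(t), Abel's formula gives W = c e^0 = c, a constant; indeed W(cos, sin)(t) = cos^2(t) + sin^2(t) = 1, which is never zero, confirming that the two solutions are linearly independent.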
In light of the first three points above, we can make more sense of the existence and uniqueness theorem: since the general solution of (3.0.1) has n undetermined constants, c_1, . . . , c_n, we need exactly n initial conditions to get n equations for our n unknowns, and hence exactly one solution. (3.0.3) supplies n conditions, as expected.
Now we turn to our inhomogeneous problem (3.0.2). We assume that y_1, . . . , y_n are linearly independent solutions of our homogeneous problem (3.0.1), and we assume that Y(t) and Z(t) are particular solutions of (3.0.2).
1. Any solution y(t) of (3.0.2) can be put into the form y(t) = c1 y1 (t) +
· · · + cn yn (t) + Y (t).
2. Anything of the form c1 y1 (t) + · · · + cn yn (t) + Y (t) is a solution of (3.0.2).
3. Y (t) − Z(t) is a solution of the homogeneous equation (3.0.1) , NOT the
inhomogeneous equation (3.0.2) .
4. All of these results are based on the linearity of L, our differential operator, which is due to the linearity of taking m derivatives:
\frac{d^m}{dx^m} \left[ c_1 f_1(x) + \cdots + c_n f_n(x) \right] = c_1 \frac{d^m}{dx^m} f_1(x) + \cdots + c_n \frac{d^m}{dx^m} f_n(x) .
To prove (2) for example we compute:
L[y] = L[c1 y1 + · · · + cn yn + Y ]
= c1 L[y1 ] + · · · + cn L[yn ] + L[Y ]
= c1 · 0 + · · · + cn · 0 + g(t) = g(t) .
Copyright © 1999 Ivan Blank