OUTLINE OF ELEMENTARY DIFFERENTIAL EQUATIONS
I. INTRODUCTION
An ordinary differential equation is like an algebraic equation for an
unknown, but the unknown is a function, and the equation involves derivatives.
Example you already studied: dy/dx = f(x), the search for an antiderivative.
The answer is a solution set, as usual.
II. INITIAL VALUE PROBLEM:
If we add conditions, we cut down the number of solutions. If, for example, we
require that y satisfy both the differential equation y' = x + 3 and the initial
condition y(0) = 1, there is a unique answer.
Second example:
a mass-one particle attached to a spring (harmonic oscillator):
x''(t) = -x(t).
The solution set is two-dimensional, A cos(t) + B sin(t), A and B arbitrary scalars.
One initial condition cuts it down to a one-dimensional set; two initial conditions,
such as x(0) = 0 and x'(0) = 2, cut it down to only x(t) = 2 sin(t).
IN GENERAL we could have dy/dx = f(x,y), (#)
and we will have to spend a good deal of time studying special cases of this;
many of them have solutions which cannot be expressed in formulas at all.
There is a big difference between y' = x^2 + 1 and y' = y^2 + 1. The first can be
solved by the indefinite integral and solutions are `global,' i.e., keep going in
both directions for ever. The second requires a new technique, and the solutions
don't last for very long before exploding.
III. EULER'S METHOD
To understand what a differential equation means, we concentrate on the first-order
case, (#) above: we draw direction fields. f(x,y) is the slope; at each point (x,y)
we draw a tiny arrow with that slope. The solution curves are curves which are
always following those directions, always tangent to those arrows at every point.
This method also allows us to approximately compute the answers, just as Riemann
sums did for antiderivatives. This method is used by our computer software.
IV. POLYNOMIAL OPERATORS
(D^2 - 3D + 2)f = 0, or, f'' - 3f' + 2f = 0, can be solved by factoring
the polynomial and using exponential functions. The solution to (D-1)f = 0 is
exp(x) and to (D-2)f = 0, exp(2x). These are a basis for the 2-dimensional solution
set, {A exp(x) + B exp(2x)} with A and B arbitrary constants. One must use linear
algebra to solve for A and B to satisfy two given initial conditions.
V. SPECIAL FIRST-ORDER EQUATIONS
Separable equations can be solved by a quadrature. Homogeneous equations become
separable after a change of variable. If a differential form is exact, one can find
a function whose level curves are the integral curves of the problem.
VI. Systems of First Order Differential Equations
A. A higher-order differential equation can always be turned into a system of
first-order differential equations.
1. Introduce the new variable z for y', w for y'' = z', etc., as needed.
2. If the order was n, there will now be n dependent variables, and so the
system will have n first-order equations (instead of one nth-order equation).
3. One initial condition for each dependent variable is needed, so, n initial
conditions total.
B. Euler's method can be used to study systems of first-order equations.
C. As long as the coefficients in the system are smooth, local solutions exist
no matter what the initial conditions are. And, given an initial condition for each
dependent variable, the solution is unique.
VII. Linearity
A. If the system is linear then it has especially nice properties. Most
physical systems are at least approximately linear.
B. True linearity means zero constant term: this is called 'linear homogeneous.'
C. If there's a constant term, the procedure is the same as in linear algebra.
1. Find a particular solution of the inhomogeneous system, form the associated
homogeneous system and find its n-dimensional family of solutions, and then
add the particular solution to the family in order to get the general solution.
2. There are no general rules for finding the particular solution. The
form of the particular solution must be guessed, based on the form of the constant
term. Then use operator methods or undetermined coefficients.
VIII. Constant Coefficients
A. A higher-order linear differential equation with constant coefficients is
given by a polynomial differential operator, but is still equivalent to a system of
first-order equations, also with constant coefficients.
B. Linear systems with constant coefficients can be solved explicitly.
1. The system is given by matrix multiplication by a matrix, A.
2. A rough idea of the behaviour comes from raising the matrix (I+A/n) to
higher powers as time goes on. (Euler's method with a step size of 1 gives, as an
approximation, raising (I+A) to higher powers as time goes on. If the step size is
1/n, it gives (I+A/n), raised to higher and higher powers.)
3. An exact solution comes from exponentiating the matrix A:
exp(tA) applied to the vector of initial conditions.
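The claim in B.2-B.3 can be illustrated with a small Python computation (my own example, not from the text): for A = [[0, 1], [-1, 0]], the matrix of the harmonic oscillator system x' = y, y' = -x, the matrix exp(A) is the rotation [[cos 1, sin 1], [-sin 1, cos 1]], and (I + A/n)^n approaches it as n grows.

```python
import math

def matmul(P, Q):
    # product of two 2x2 matrices
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def euler_power(A, n):
    # (I + A/n)^n: Euler's method with step 1/n, run for time 1
    B = [[1.0 + A[0][0] / n, A[0][1] / n],
         [A[1][0] / n, 1.0 + A[1][1] / n]]
    result = [[1.0, 0.0], [0.0, 1.0]]   # start from the identity
    for _ in range(n):
        result = matmul(result, B)
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]
approx = euler_power(A, 20000)
exact = [[math.cos(1.0), math.sin(1.0)],    # exp(A) for this particular A
         [-math.sin(1.0), math.cos(1.0)]]
err = max(abs(approx[i][j] - exact[i][j]) for i in range(2) for j in range(2))
```

The error shrinks roughly like 1/n, which is the familiar first-order behaviour of Euler's method.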
C. Inhomogeneous systems and particular solutions.
1. Everything in VII C is still true when coefficients are constant.
2. Since the coefficients are constant, matrix methods will find a
particular solution.
3. Undetermined coefficients is a confused version of matrix methods that
doesn't always work. But, as a skill, it carries over to many topics, not just ODE's.
IX. Laplace Transform Methods
A. Laplace transforms are like a dictionary, like logarithms.
1. The operation of Laplace transform takes a given function, called the
`original,' to a new function of a complex variable p, called the `image' function.
2. There are tables of Laplace transforms, like there used to be of logarithms,
which can be used like a dictionary: the original function is listed on one
side, and the image, on the other.
3. The Laplace transform is the unique operation which is linear, takes
derivatives to multiplication by p, and agrees with the table of transforms.
B. Laplace transforms turn differential equations into algebraic problems.
1. The rules for the Laplace transform include rules for the derivative and
antiderivative, and hence a table can be used to solve differential equations.
2. Often one needs to use a partial fraction expansion.
3. The Heaviside expansion theorem is a quick way to obtain the partial fraction
expansion, especially when there are no repeated factors.
4. The initial conditions appear as the coefficients of a polynomial in the
numerator.
C. The Dirac Delta Function
1. The derivative of the Heaviside function is the Dirac Delta function.
2. The area under the Delta Function is one, and integrating against any
function simply picks out its value at the origin.
3. The derivative of the Delta Function is hard to understand, but if it is
integrated against any smooth function f(t), the result is f'(0). Distributions
such as the Delta function and its derivatives are best understood in terms of the
results they yield when integrated against smooth functions.
4. An infinite wave has no Laplace transform since it is not zero for
negative time. Applying differential operators to H(t)cos(t) produces the Delta
function and its derivatives. This can be used to calculate the Laplace transform
of H(t)cos(t), H(t)sin(t), and H(t)exp(t).
D. Discontinuous forcing terms are more realistic for electric circuits.
1. Linear combinations of the Heaviside function describe square waves.
2. The Laplace transform of a square wave can be simplified using the
geometric series.
3. The initial values of the function and its derivative match automatically
at every point of discontinuity of the forcing term.
X. Convolution and Electric Circuits
A. An electric circuit is an inputoutput device. Given an input function, the
applied e.m.f., the resulting current as a function of t is the output function.
B. Every circuit is described by its descriptor function f(t). (Admittance.)
C. The convolution of f with g(t) is defined as the output of that circuit
when g(t) is the input. Notation: f*g(t).
1. Example of an oscillator: then f(t) is the natural frequency. But if an
applied frequency of g(t) is imposed, the result is f*g(t), e.g., cos(3t)*cos(2.9t).
2. L(f*g) = F(p)G(p). There are also integral formulas for L and for *.
CHAPTER I INTRODUCTION
Differential equations are the heart of applied mathematics. Most applications of
mathematics to the physical sciences involve the quantitative study of rates of
change, and hence use equations that involve derivatives. These are
called differential equations. Some of the social sciences use these as well, for
example growth theory in economics and population dynamics in sociology.
Suppose that a body is moving straight up and down with time and its height at time
t is given by the function s(t). Then its velocity is a derivative, s'(t) = v(t),
and its acceleration is the derivative of velocity, a(t) = v'(t) = s''(t). Newton
and Galileo found that (approximately) the acceleration due to gravity is a constant
-32.2 ft/sec/sec. This leads to a differential equation, dv/dt = -32.2, and even
another, d^2 s/dt^2 = -32.2; of course these two equations are closely related. As
we know from sad experience, many different motions are possible, so there are
infinitely many solutions.
These equations were solved in Calculus I using antiderivatives. Every function
v(t) whose derivative is -32.2 is of the form v(t) = -32.2 t + C where C is a
constant of integration. Integrating once more yields s(t) = -16.1 t^2 + Ct + D;
now we have two arbitrary constants, so we have a doubly infinite family of
functions, each of which is equally valid as a solution.
CHAPTER II INITIAL VALUE PROBLEMS
Differential equations have, as answer, a solution set. So did polynomial algebraic
equations, but the solution sets of differential equations are bigger and have more
structure to them. The solution set of a quadratic equation is merely two disjoint
points, but the solution set of a second-order differential equation is doubly
infinite, two-dimensional, and has extra structures. The most important of these
extra structures is what we call initial conditions.
A basic kind of initial value problem, that of finding a particular antiderivative,
was studied in Calc II. Given a differential equation of the simple type y' = f(x),
every solution has the form y(x) = F(x) + C where F is some antiderivative of f.
If we further specify that y(0) = 2, we can solve for C. For example, y' = sin(x).
If we have to satisfy the initial condition y(0) = 2, we get y(x) = -cos(x) + 3 as
the only possible answer.
If a mass-one particle is attached by a spring (with a certain stiffness) to a wall,
its oscillations about the spot where the spring is neither stretched nor
compressed, the so-called equilibrium position, are governed by Hooke's law:
F = -kx, where F is the force the spring imposes on the particle, k is the
stiffness constant, and x is the distance away from equilibrium. Assume k = 1.
Newton's law says that F = ma, mass times acceleration, and since we assumed m = 1,
we get F = x''(t). Putting these together we get x'' = -x. The solutions are
A cos(t) + B sin(t). (Check by plugging in.) Given initial conditions x(0) = a and
v(0) = b we get x(t) = a cos t + b sin t.
Suppose the differential equation is (x-1)dy/y = dx. Algebra makes this dy/y =
dx/(x-1). The rules of algebra are still true even though this is a course in
differential equations, and one of those basic rules is that if you have equals,
like dy/y and dx/(x-1), and do the exact same thing to both sides, you get equals.
Hence we may integrate both sides and get
the integral of dy/y = the integral of dx/(x-1),
or, log y = log(x-1). Exponentiating both sides we get y = x-1. But wait. The
indefinite integral is not just one function, but a family of functions. So really
the integral equation above means log y + C = log(x-1) + D, where each side means a
family of functions: we're saying the families of functions are equal, so we could
have log y = log(x-1) + D - C, i.e., y = exp(D-C) (x-1). Relabelling the constants,
we get in general the solution y = mx - m. This may be checked by plugging in. For
we get dy = m dx, so dy/y is m dx/(mx-m) and this is equal to dx/(x-1), Q.E.D.
As we have seen by now, a differential equation can be written in superficially
different ways. These both mean exactly the same thing: y''(t) + 9y(t) = 0;
d^2 y/dt^2 + 9y = 0. These are called ``second-order'' differential equations
because they have a second derivative in them. We will focus on first-order
differential equations for a little while. They may be written in three different
ways, which are all the same: y' = y^2 + 1; dy/dx = y^2 + 1; dy = (y^2 + 1) dx.
Let us integrate both sides in the last equation... whoops, this makes no sense.
It makes no sense to integrate a function of y with respect to x! Before we can
integrate both sides, we must divide both sides by y^2 + 1, getting
dy/(y^2 + 1) = dx. Now each side is an ordinary differential, and can be
integrated. We get arctan y = x + C. But we wanted y as a function of x, so we have
to take the tangent of both sides, and get y = tan(x+C). We check our work by
plugging in: dy = sec^2(x+C) dx. This doesn't look right, but there is a trig
identity: so dy = (1 + tan^2(x+C)) dx and this is (1 + y^2) dx.
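The same check can be done numerically; a short Python sketch (the constant C and the sample points are arbitrary choices of mine) compares a difference quotient of tan(x+C) with y^2 + 1:

```python
import math

C = 0.3                  # an arbitrary constant of integration
h = 1e-6
residual = 0.0
for x in (0.0, 0.5, 1.0):
    y = math.tan(x + C)
    # difference quotient approximating dy/dx
    dydx = (math.tan(x + h + C) - math.tan(x - h + C)) / (2 * h)
    residual = max(residual, abs(dydx - (y**2 + 1)))
```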
This is an extremely important example to be aware of. Notice that every solution
explodes within a short time (or a few inches, whatever x means...). I.e., because
tan has singularities at angles of +/- 90 degrees, y becomes plus or minus infinity.
Yet there was nothing visible in the equation to warn us of this... the denominator
never becomes zero (the sneaky denominator pretended to make the left-hand side
bounded, like the function 1/(1+y^2) is). So now we know we have to be alert for
unsuspected blowups. An initial condition for this equation could be any condition
like y(3) = 7. Initial conditions don't *have* to be at the time when x = 0; any
spot could be picked as the spot to impose a condition. First-order equations have
a unique solution obeying one arbitrary initial condition. But that solution might
be ``local'' just like tangent is. The solution is defined for all x near enough to
x = 3, but is not defined for x =, well, this is an exercise. When does y blow up
if y(3) = 7? (Solve the initial value problem, then graph it.)
CHAPTER III EULER'S METHOD
Euler's method (pronounced, ``Oiler'' since it is a Swiss name) is a method for
approximately solving first order differential equations. It is the basic idea used
by our computer software. But, more importantly, learning to use it gives you a
better intuition about how the solutions to differential equations behave. Euler's
method uses the tangent line approximation that you learned in Calc I. A smooth
curve has many tangent lines; it has a tangent line at each point. The tangent line
to the curve M at the point p is called Tp(M). The tangent line is a good
approximation to the function or curve, as long as we are near enough to the point
p. Far away from p, the tangent line may get further and further away from the
curve M, and so it may become a poor approximation. The advantage of it is that it
has a simple formula: since it is a straight line, its formula is linear, y = mx + b.
The formula becomes even simpler if we do as we always do in calculus: introduce
coordinates on the line which are zero at p. If p = (2,3), for example, we measure
how much change has occurred from 2 in the x-direction and call that dx. Also, dy
is the coordinate which is y-3. That way, dx = 0 at p, and dy = 0 at p. The
equation of the tangent line becomes simply dy = m dx, which makes sense since m,
the slope of the tangent line, is dy/dx: by definition of derivative, the
derivative is the slope of the tangent line.
If we are given a first-order equation such as
dy/dx = x - y,
and an initial condition, such as y(1) = 2, then we will show how to
estimate y(1.3) in steps, using the tangent line approximation three times in a
row, first at x = 1, then at x = 1.1, then x = 1.2; one more and we are done.
If y(1) = 2 then we are starting at the point p = (1,2). Since we are interested in
estimating y(1.1) as a first step, we will let dx = .1 and need to find dy. We use
the differential equation to tell us m, the slope, or dy/dx. It is
x - y = 1 - 2 = -1. Therefore, the tangent line approximation tells us that
dy = -.1 and hence y = 1.9, since it's changed -.1 from what it was at p (which
was 2).
This is the procedure over and over. We know x and y, so we figure out m from the
differential equation. We have ourselves chosen some step size dx, so we use the
tangent line approximation to deduce dy. Then we add that to y to get the next y.
And of course add dx to the old x to get the next x. Repeat as needed.
The next point is p = (1.1, 1.9) and here m = x - y = 1.1 - 1.9 = -.8, which is
used in the tangent line approximation, dy = m dx. We already chose dx = .1, so for
us, dy now equals -.08, so y changes from 1.9 to 1.82 when x changes from 1.1 to
1.2. That makes the next point p = (1.2, 1.82). It is not necessary to graph all
this, but it is helpful for the beginner to graph each point p, each tangent line
and each slope in order to see what is happening. And that makes it like direction
fields.
Now we only have one more step; we have reached x = 1.2 but we were really asked
for x = 1.3, so we have dx = .1 more to go. Here, m = x - y = 1.2 - 1.82 = -.62 and
so dy = -.62 dx. But since dx = .1, dy = -.062 and so the new y is
1.82 + (-.062) = 1.758. Our conclusion, then, is that y(1.3) is approximately
1.758. We made this estimate by obeying the instructions of the differential
equation, so to speak, at intervals of .1. A true solution to the differential
equation would obey them instantaneously instead of at intervals... but we can
always get an improved approximation if we want to do the extra work, by choosing
shorter intervals. Let us `walk' from 1.0 to 1.3 in intervals of .05 for a change,
and see that a better approximation is obtained.
We could organise our work into a chart:

x      y            m            dx    dy             next y
1      2            -1           .05   -.05           1.95
1.05   1.95         -.9          .05   -.045          1.905
(notice this is a change from our previous estimate of 1.9)
1.1    1.905        -.805        .05   -.04025        1.86475
1.15   1.86475      -.71475      .05   -.0357375      1.8290125
1.2    1.8290125    -.6290125    .05   -.031450625    1.797561875
1.25   1.797561875  -.547561875  .05   -.02737809375  1.77018378125

so this is the answer: when x reaches 1.3, y is 1.77018378125. This differs by
less than 1% from the previous estimate, which, remember, was 1.758.
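The walk just described is easy to automate; here is a minimal Python sketch (the function `euler` and its signature are my own) that reproduces both estimates:

```python
def euler(f, x0, y0, x_end, dx):
    # walk from x0 to x_end, obeying the slope field at intervals of dx
    x, y = x0, y0
    for _ in range(round((x_end - x0) / dx)):
        m = f(x, y)          # the differential equation gives the slope
        y += m * dx          # tangent line approximation: dy = m dx
        x += dx
    return y

f = lambda x, y: x - y       # the equation dy/dx = x - y

coarse = euler(f, 1.0, 2.0, 1.3, 0.1)    # three steps of .1
fine = euler(f, 1.0, 2.0, 1.3, 0.05)     # six steps of .05
```

The same function handles any step size, so the exercise with dx = .025 is one more call.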
We will later learn that the exact answer is 1.78164... To get this kind of
accuracy, we would have to choose more, but smaller, steps. Euler's approximation
is like the tangent line approximation in two respects: it gets worse the further
you go, and also it is worse the larger the step size. Notice that our
approximation for y(1.2), 1.8290125, is closer to the exact answer than the earlier
1.82. In fact, the exact answers and the previous estimates are:
x      y exact      estimate based on dx = .1    estimate based on dx = .05
1      2            2                            2
1.05   1.9524...                                 1.95
1.1    1.9096...    1.9                          1.905
1.15   1.8714...                                 1.86475
1.2    1.8374...    1.82                         1.8290125
1.25   1.8076...                                 1.797561875
1.3    1.7816...    1.758                        1.77018378125
You may have noticed that the estimates are always rational numbers, but the truth
is usually an irrational number. That is typical of engineering. I have calculated
each estimate exactly, even when it means a silly number of decimal places, in
order to make the difference clear. (In practice you would round off shrewdly.)
Exercise: choose a step size of .025 and get better approximations.
Exercise: Suppose f(x,y) = f(x) is a function of x only, and does not depend on y.
Show that Euler's method is then the exact same thing as Riemann sums.
Just as in Calc II there were integrals whose exact answer could not be found, and
one had to fall back on Riemann sums to get an approximate value, so too there are
many differential equations where there is no solution that is a formula. There's a
solution, but no way to write it down in a formula; only Euler's approximation
method is available.
CHAPTER IV POLYNOMIAL DIFFERENTIAL EQUATIONS
A linear ordinary differential equation whose coefficients are constants can be
solved relatively easily, and forms a worthwhile special topic for study. Worthwhile
because it is tractable, yet complicated enough to introduce to the student some of
the typical features of the vast topic of ordinary differential equations in general.
And also of considerable practical importance, for example simple electrical circuits
are modelled by secondorder linear ordinary differential equations whose coefficients
are the three constants, resistance, capacitance, and inductance.
Such an equation is necessarily given by a polynomial differential operator. For if
the equation is, for example, y''' - 3y'' + 2y' - 2y = 2, then introducing the
symbol D to stand for the `operator' d/dx, it becomes rewritten as
(D^3 - 3D^2 + 2D - 2)y = 2. This is because Dy means y', so D(Dy) means (y')'
which, of course, just means y''. And in general, D^n y = d^n y/dt^n. The operator
D is linear and behaves a lot like a variable like one would find in a polynomial:
it obeys the commutative law, the distributive law, etc., as long as there are only
constant coefficients. (It would get much more complicated if one had to study
D(xDxy) and things like that. Leibniz's rule would come into play, and make it
behave differently than the sorts of variables that one finds in polynomials.)
Polynomials made up out of the operator D are called polynomial differential
operators. The auxiliary equation is the polynomial equation obtained by replacing
D by the ordinary variable m, and setting it equal to zero, thus:
m^3 - 3m^2 + 2m - 2 = 0. The roots of this are the eigenvalues or characteristic
frequencies of the differential equation, and are 0.23931 +/- 0.857874 i and
2.52138.
These roots very easily give the solutions to, not the original equation, but the
associated homogeneous equation obtained by setting the right-hand side equal to
zero, (D^3 - 3D^2 + 2D - 2)y = 0. The solutions to this equation are spanned by
three fundamental solutions, cos(.857874t)exp(.23931t), sin(.857874t)exp(.23931t),
and exp(2.52138t). That means, concretely, that the general solution of this
homogeneous equation is the three-dimensional set of all
y(t) = A cos(.857874t)exp(.23931t) + B sin(.857874t)exp(.23931t) + C exp(2.52138t)
as A, B, and C vary through all possible real (or complex) numbers.
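As a sanity check (my own, not in the text), the quoted decimal roots can be substituted back into the auxiliary polynomial in Python; the residuals are small but not zero, because the roots are quoted to only about six figures:

```python
def p(m):
    # the auxiliary polynomial m^3 - 3m^2 + 2m - 2 (complex arithmetic works too)
    return m**3 - 3 * m**2 + 2 * m - 2

roots = [2.52138, 0.23931 + 0.857874j, 0.23931 - 0.857874j]
residual = max(abs(p(r)) for r in roots)
```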
We will address the issue of what to do about the original, nonhomogeneous, equation
in a minute.
Why does this procedure work? Consider an arbitrary polynomial P(D). The first
thing to notice is that the polynomial of course factors into
P(D)y = (D - a)(D - b)(D - c) ... (D - z) y = 0,
where a, b, c, ..., z are the roots of the polynomial. As usual, this can only be
zero if one of the factors gives zero, say (D - c)y = 0, but this is a differential
equation we learned to solve in Calc II; the solution set is A exp(ct). So y(t) is
either exp(at), exp(bt), exp(ct), ..., exp(zt), or some linear combination.
(Complication: if there are repeated roots, there are additional possibilities.)
Returning to our example, the problem is that if any such y(t) is plugged into the
left-hand side, we get zero instead of 2. We need to guess a particular solution of
y''' - 3y'' + 2y' - 2y = 2; we guess a polynomial would do, and in fact y(t) = -1
works perfectly!!! (Plug it in: the derivative terms vanish and -2(-1) = 2.) There
are infinitely many solutions, but we only need to guess one of them. Now because
the operator is linear,
(D^3 - 3D^2 + 2D - 2)(y(t) + z(t)) = (D^3 - 3D^2 + 2D - 2)y + (D^3 - 3D^2 + 2D - 2)z.
Hence, without even doing any recalculating,
(D^3 - 3D^2 + 2D - 2)(A cos(.857874t)exp(.23931t) + B sin(.857874t)exp(.23931t) +
C exp(2.52138t) - 1) = 0 + 2, no matter what A, B, and C are. So this is the
general solution.
There are two complications which we have touched on but not yet explained. The
first one is if some of the roots are complex. If some of the roots are complex,
they come in complex conjugate pairs, like this one did. And we showed that then
exp((0.23931 +/- 0.857874 i)t) is a solution. So where did the cosine come from?
This exponential involves complex numbers; i is the square root of negative one. In
this case, e^(a+bi) is defined by the same power series that we always use,
e^(a+bi) = 1 + (a+bi) + (a+bi)^2/2! + (a+bi)^3/3! + ... It obeys the same laws
of exponents as always, so we can divide this into two parts, since exp(a+bi) =
exp(a)exp(ib). The exp(a) part is well understood; we need to explain
e^(ib) = 1 + (bi) + (bi)^2/2! + (bi)^3/3! + ... = 1 + ib - b^2/2! - i b^3/3! + ...
since i^2 = -1, i^3 = -i, and i^4 = 1 repeats the cycle. Let us group the real and
imaginary terms together; we get
e^(ib) = (1 - b^2/2! + b^4/4! - ...) + i(b - b^3/3! + b^5/5! - ...) and recognise
the power series for cosine and sine!!! exp(ib) = cos(b) + i sin(b), Euler's
formula.
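Euler's formula can be spot-checked with Python's built-in cmath module (a check I am adding; it is not part of the text):

```python
import cmath
import math

# compare exp(ib) with cos(b) + i sin(b) at a few values of b
err = max(abs(cmath.exp(1j * b) - complex(math.cos(b), math.sin(b)))
          for b in (0.5, 1.0, 2.0))
```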
The other complication is that if there are repeated roots, there are fewer
linearly independent cosines or exponentials, but there are extra solutions. If the
root m is repeated n times, then not only is exp(mt) a solution, so is t exp(mt),
and higher powers too, up to y(t) = t^(n-1) exp(mt).
Example: (D^2 - 2D + 1)(D^2 - D + 1) y(t) = 0. The polynomial factors as
(m-1)(m-1)(m - 0.5 - 0.866025 i)(m - 0.5 + 0.866025 i). The repeated root is 1, so
there are two fundamental solutions from it, exp(t) and t exp(t). We also get
exp(t/2)sin(.866025t) and exp(t/2)cos(.866025t) from the complex exponentials.
CHAPTER V SPECIAL CASES OF FIRST-ORDER EQUATIONS
A differential form is an expression of the form M(x,y)dx + N(x,y)dy, such as occur
in line integrals in Calc III. The exterior differentiation operator, d, measures
the total change of a function as both x changes by dx, and y changes by dy. So a
function f(x,y) has an exterior (or `total') derivative df which is a differential
form, and the chain rule tells us that
df = D_x f(x,y) dx + D_y f(x,y) dy.
The rules for the d operator were given in Calc III; they are the Leibniz rule
d(uv) = du v + u dv, and change of sign upon change of orientation, dx^dy = -dy^dx.
It follows from this that dd = 0. For example, ddf = 0 since mixed partials
commute. Hence if M(x,y)dx + N(x,y)dy = df for some function f(x,y) then its total
derivative will be zero. Poincare proved the converse: if
d(M(x,y)dx + N(x,y)dy) = 0 then there exists some function f(x,y) such that
M(x,y)dx + N(x,y)dy = df.
Such a differential form is called exact. This means that there is a simple test
for exactness. We just have to see if d of it is zero. And this is simple: we just
have to see if D_y M(x,y) dy^dx + D_x N(x,y) dx^dy = 0. Since dy^dx = -dx^dy
(because of the reversal of orientation, the area gets counted negative), this is
the same as to see if D_x N(x,y) dx^dy - D_y M(x,y) dx^dy = 0.
Any first-order differential equation can be written in the form
M(x,y) dx + N(x,y) dy = 0.
If this differential form is exact and equal to df, then the differential equation
takes the simple form
df = 0
and obviously the solution curves to this are f = C, a constant, since we can just
integrate both sides.
Most differential forms are not exact, but one that is can be very easily solved.
The simple test for exactness is whether
D_y M = D_x N.
(This is because these would be the mixed partials of f, and since mixed partials
commute, they had better be equal.)
If the equation passes this test, then we can find f by two partial integrations
and solving for the constants of integration.
We must have D_x f = M and D_y f = N.
Integrating the first of these with respect to x and the second one of these with
respect to y, we must get the same answer. For example, let's test whether or not
x dy - (3y - 2x) dx is exact.
Here, M = 2x - 3y and N = x. In this differential form, M is attached to the dx,
so to get a *mixed* partial, we have to take the *other* derivative, i.e., the
partial derivative with respect to y. We get -3. Similarly with N, we have to
take the other derivative, which is with respect to x, so we get 1. These are not
equal, so the differential form is not exact.
Next, let us consider the problem:
Find the general solution to x dy = (2x - y + 3) dx.
This one has to be rewritten as a differential form = 0 before we do any testing.
It becomes (2x - y + 3) dx - x dy = 0. Hence M = 2x - y + 3 and its mixed
derivative is -1. Also, N = -x and its mixed derivative is -1. So they are okay:
it's exact. So we get to do the partial integrations. The partial integrations are
*not* mixed, they are matching. We integrate M with respect to x and get
f(x,y) = x^2 - xy + 3x + C(y). Also, N tells us f(x,y) = -xy + D(x).
These have to be equal, but D(x) could equal x^2 + 3x since it doesn't depend on y,
and then they would match if C = 0. So if f(x,y) = x^2 - xy + 3x, then
df = (2x - y + 3) dx - x dy, so the equation becomes df = 0, which has the obvious
solutions f(x,y) = C. It is easy to solve for y and we obtain y = x + 3 - C/x.
2
For another example, 2xy dx + (x^2 + 1) dy = 0
is exact, so we integrate 2xy with respect to x and get x^2 y + C = f(x,y).
Here, C is the constant of integration... but since y is being regarded as a
constant, C might be a function of y. So it is better to write it as C(y).
Then, x^2 y + C(y) = f(x,y).
Similarly, integrating the dy coefficient with respect to y, we get
y(x^2 + 1) + D(x) = f(x,y).
Setting these equal, we get x^2 y + C(y) = y(x^2 + 1) + D(x),
which can be solved if C(y) = y + C and D(x) is constant or even zero. We might
as well take C = 0, too (adding or subtracting a constant from f doesn't change
either df or the level curves f(x,y) = C), so we get
f(x,y) = x^2 y + y
and its level curves are the integral curves of the original equation.
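The exactness test D_y M = D_x N lends itself to a quick numerical check. This Python sketch (helper names are mine; the partial derivatives are approximated by difference quotients at a few sample points) runs the test on the two examples above:

```python
def d_dx(F, x, y, h=1e-6):
    # difference quotient for the partial derivative with respect to x
    return (F(x + h, y) - F(x - h, y)) / (2 * h)

def d_dy(F, x, y, h=1e-6):
    # difference quotient for the partial derivative with respect to y
    return (F(x, y + h) - F(x, y - h)) / (2 * h)

def looks_exact(M, N, points):
    # the test for exactness: D_y M = D_x N at every sample point
    return all(abs(d_dy(M, x, y) - d_dx(N, x, y)) < 1e-4 for x, y in points)

pts = [(0.3, 0.7), (1.2, -0.4), (2.0, 1.5)]

# x dy - (3y - 2x) dx: M = 2x - 3y, N = x  (fails the test)
test1 = looks_exact(lambda x, y: 2 * x - 3 * y, lambda x, y: x, pts)
# 2xy dx + (x^2 + 1) dy: M = 2xy, N = x^2 + 1  (passes the test)
test2 = looks_exact(lambda x, y: 2 * x * y, lambda x, y: x**2 + 1, pts)
```

Passing a finite-sample numerical test is of course only evidence, not a proof, of exactness.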
The next problem is to test whether or not x dy - (3y - 2x) dx becomes exact after
you multiply it by the function x^(-4). Upon multiplying, it becomes
(2x^(-3) - 3y x^(-4)) dx + x^(-3) dy,
so the mixed derivative of M is -3x^(-4), which equals the mixed derivative of N.
If u(x)sin(x)dy - u(x)(3y-2)dx is exact, what differential equation must u obey?
Notice that the dx coefficient is 2u(x) - 3y u(x). The mixed derivative is the
derivative with respect to y. Then 2u is regarded as a constant, so we get -3u.
Next, take the mixed derivative of N in order to make the test for exactness. Here
N = u sin, and its x-derivative, the mixed one, is u'(x) sin(x) + u(x) sin'(x) =
u' sin(x) + u cos(x). The test for exactness says the mixed derivatives must be
equal, so we equate them: u' sin + u cos = -3u, and since u is a function of x,
this is an ordinary differential equation for u which must be satisfied or else the
test for exactness will fail. Or u'/u = -cot - 3 csc, or
d log u = (-cot x - 3 csc x) dx.
Example of how to turn a non-exact equation into an exact one.
The equation 2xy dx + (x+1)dy = 0 isn't exact. But we can multiply it by a function
u(x) on both sides and the result will be exact, if we choose the function u very
cleverly. This will not change the solutions.
What sort of u(x) do we need? u(x) 2xy dx + u(x)(x+1) dy
will be exact if D_y(u(x) 2xy) = D_x(u(x)(x+1)), so we need to find u such that
2xu(x) = u'(x+1) + u. This is a differential equation for u, but it can be solved
more easily than the original equation for y... luckily. The procedure is to
isolate u'/u on one side, since it is d log(u)/dx.
We obtain u'/u = (2x-1)/(x+1). Hence d log(u) = (2x-1)/(x+1) dx,
so log u = 2x - 3 log(x+1) + C and u = A exp(2x)/(x+1)^3.
This procedure always works if M and N are linear in y. (Finding the logarithmic
derivative is the trick.)
Hence, an exact equation with the same solutions as the problem posed is
(2xy exp(2x)/(x+1)^3) dx + (exp(2x)/(x+1)^2) dy = 0.
We now have to follow the usual procedure for integrating an exact equation: we now
have to integrate the first coefficient with respect to x, and the second, with
respect to y. It might have been hard, but luckily the integral of the first
coefficient is staring us in the face: it is almost the second coefficient. Because
this equation is exact, the derivative of the first coefficient with respect to y
is the derivative of the second coefficient with respect to x. But the first
coefficient is linear in y, so its derivative with respect to y simply is
2x exp(2x)/(x+1)^3,
and what we just said is that this is the derivative of the second coefficient;
hence the second coefficient is the antiderivative of it.
Therefore the partial integral of the first coefficient is just
(exp(2x)/(x+1)^2) y + C(y), since y is treated as a constant.
(This could have been done by integration by parts.)
Hence the solution curves f(x,y) = C are just
(exp(2x)/(x+1)^2) y = C, or, y = C (x+1)^2 exp(-2x).
This is easily checked by plugging in. Every first-order equation whose coefficients
are linear in y can be solved this way. Sometimes the partial integrals that arise
cannot be done so easily, or even expressed by a formula at all.
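As a sanity check, here is a minimal Python sketch (the constant C = 1.7 is an arbitrary sample value) that verifies numerically that y = C (x+1)^2 exp(-2x) satisfies 2xy + (x+1) dy/dx = 0:

```python
import math

def y(x, C=1.7):
    # Candidate solution from the worked example: y = C (x+1)^2 exp(-2x).
    return C * (x + 1) ** 2 * math.exp(-2 * x)

def residual(x, h=1e-6):
    # 2xy + (x+1) y' should vanish identically if y really solves
    # 2xy dx + (x+1) dy = 0; y' is approximated by a central difference.
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return 2 * x * y(x) + (x + 1) * dydx

for x in [0.0, 0.5, 1.0, 2.0]:
    print(f"x = {x}: residual = {residual(x):.2e}")
```

The residuals should come out near roundoff level, as expected for an exact solution.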
CHAPTER VI SYSTEMS OF DIFFERENTIAL EQUATIONS
A. If an equation has extra derivatives we can replace them with extra variables.
If the unknown function is y, its first derivative is y' and that's okay. But if
there is an extra derivative somewhere, y'', we can replace this by z' if we just
introduce the new variable z=y'.
For example, consider the second-order equation

    (1-x^2) y'' - x y' + y = x^2

and the 'extra' derivative is y''. So we let z = y' and the equation becomes

    (1-x^2) z' - x z + y = x^2 .

This is now a first-order equation. We have to include the other equation we
introduced, which is y' = z, so the system becomes

    (1-x^2) z' - x z + y = x^2
    y' = z

Now for neatness, we arrange it with the derivatives on the left and everything else
on the right,

    z' = (x^2 - y + x z)/(1 - x^2)
    y' = z.
This system is equivalent to the original equation.
B. Euler's method for systems.
Euler's method can be used for systems of firstorder differential equations.
First consider the silly case of two equations which really have nothing to do with
each other; we just made them a system by stacking them on top of each other.
Suppose the initial conditions are y(1)=2 and z(1)=0.
y' = x  y
z' = cos(xz)
are really two separate equations: the first one doesn't have anything but x and y,
and the second one ignores y and just has x and z. (Such a system is called
`uncoupled'.) So it would be reasonable to estimate y(1.3) separately, using Euler's
method as before, and only when finished, look at z. But we could do both together.
We could put the charts together to save a little space, like this:

x     y      z        dy/dx   dz/dx     dx   dy      dz        new y   new z
1     2      0        -1      1         .1   -.1     .1        1.9     .1
1.1   1.9    .1       -.8     .99396    .1   -.08    .099396   1.82    .199396
1.2   1.82   .199396  -.62    .9714     .1   -.062   .09714    1.758   .2965...
Now the point is, this kind of chart works perfectly even if the systems are
coupled. Suppose the first equation does involve z, and the second one, y. It
doesn't change the logic of how we fill out the chart.
y(1) = 2, z(1) = 0,
y' = x  y + z/3
z' = cos(xz)  y/4
x     y         z     dy/dx      dz/dx            dx    dy         dz    new y       new z
1     2         0     -1         .5               .1    -.1        .05   1.9         .05
You see, only how we get the *entry* in the dy/dx or the dz/dx column has changed,
because the actual formula for y' or z' which we need to use has changed. What we
do with the entry is unchanged.
1.1   1.9       .05   -.7833..   cos(.055)-1.9/4  .1    -.07833..  ?     1.82166...  ???
1.2   1.82166.. ?     ?          ?                .1
An example of great theoretical importance is the following system:
y' = z
z' = -y
Suppose for example that the initial conditions are y(0) = 1 and z(0) = 0.
x    y      z      dy/dx   dz/dx   dx   dy      dz     new y   new z
0    1      0      0       -1      .1   0       -.1    1       -.1
.1   1      -.1    -.1     -1      .1   -.01    -.1    .99     -.2
.2   .99    -.2    -.2     -.99    .1   -.02    -.099  .97     -.299
.3   .97    -.299  -.299   -.97    .1   -.0299  -.097  .9401   -.396
etc. Of course this is the system for cosine, and the exact value is y(.3) = cos(.3) = .95534...
Exercise: use Euler's method to estimate y(.3) using a step size .05 instead of .1.
Exercise: what function is z(x)?
Exercise: find out whether this method of getting cosine is faster than power series.
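The chart above is just arithmetic in a loop; a minimal Python sketch of Euler's method for the system y' = z, z' = -y (step size .05, as in the exercise):

```python
import math

def euler_cosine(x_end=0.3, h=0.05):
    # y' = z, z' = -y with y(0) = 1, z(0) = 0; the exact solution is y = cos x.
    x, y, z = 0.0, 1.0, 0.0
    while x < x_end - 1e-12:
        # Both unknowns are updated from the same old values, as in the chart.
        y, z = y + h * z, z - h * y
        x += h
    return y, z

y_approx, _ = euler_cosine()
print(y_approx, math.cos(0.3))  # estimate vs the exact value
```

Halving the step size from .1 to .05 roughly halves the error, which is typical of Euler's method.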
Nowadays, we express Euler's idea as choosing a step size and then replacing the
system of differential equations by a system of difference equations. The exercise
above, for example, could be phrased as: find the exact value of y(.3) if y and z
satisfy the difference equations

    y(x+.05) - y(x) = .05 z(x)
    z(x+.05) - z(x) = -.05 y(x)

and in some branches of engineering, difference equations are more important than
differential equations.
C. Existence and Uniqueness
The most general possibility for a system of firstorder equations is something like
x'(t) = f(x,y,z,w,t)
(*) y'(t) = g(x,y,z,w,t)
z'(t) = h(x,y,z,w,t)
w'(t) = p(x,y,z,w,t)
But as long as f,g,h,p are reasonably smooth functions, it doesn't matter whether we
know how to solve it or not; maybe there doesn't exist a formula for the answer, but
at least we know that an answer exists. We also know more.
THEOREM. If f,g,h,p are differentiable functions whose derivatives are continuous,
then no matter what the initial conditions are, there is a unique local solution to
the initial value problem. Furthermore, if the initial values are changed
continuously, the solution changes continuously.
By `local' we mean the same thing we meant in Chapter II. The initial conditions are
of the form x(0)=a, y(0)=b, z(0)=c, w(0)=d, or at some fixed t not necessarily 0.
The proof, which is not covered in this course, basically consists in showing that
Euler's method gives improved approximations the smaller we make the step size. Each
choice of step size gives polygonal lines as the approximate solution to the system.
Careful research shows that in the limit as the step size approaches zero, the
polygonal lines we get approach definite and unique differentiable solutions. We
will give this proof later in the special case of linear systems of first-order
differential equations with constant coefficients, and that is already enough to get
the flavour of the proof. In this sense, the proof is not any different for systems
than it would have been for a single first-order differential equation.
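The flavour of that argument can be seen numerically. A small sketch (using the test problem y' = y, y(0) = 1, whose exact value at x = 1 is e): each halving of the step size roughly halves the error of the Euler polygon.

```python
import math

def euler(f, x0, y0, x_end, h):
    # Euler polygon for y' = f(x, y): follow the slope field in steps of h.
    x, y = x0, y0
    while x < x_end - 1e-12:
        y += h * f(x, y)
        x += h
    return y

errors = [abs(euler(lambda x, y: y, 0.0, 1.0, 1.0, h) - math.e)
          for h in (0.1, 0.05, 0.025)]
print(errors)  # the errors shrink roughly in proportion to h
```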
CHAPTER VII Linear Systems
A. Linearity means taking linear combinations of things. In this context it means:
if u(t) is one solution and v(t) is another, then ru+sv is always also a solution,
no matter what numbers r and s are. Hence, we say that the system (*) is a linear
system if whenever the vector-valued function (x1(t), y1(t), z1(t), w1(t)) is a
solution of the system, and whenever (x2(t), y2(t), z2(t), w2(t)) is another
solution, then no matter what numbers r and s are chosen, the linear combination of
the two vectors,

    (r x1(t) + s x2(t), r y1(t) + s y2(t), r z1(t) + s z2(t), r w1(t) + s w2(t)),

is also a solution.
B. This true linearity means the space of solutions forms a vector space. In this
course, it is enough if they form an affine linear space which is merely parallel to
some true vector space but does not have to actually go through the origin.
Another way to put it is: the system is linear homogeneous if it is linear in the
strict sense, and hence f, g, h, and p are all linear functions of x, y, z, and w,
without a constant term. But we allow there to be a constant term, and still call it
linear, just inhomogeneous, not homogeneous.
C. If the system is not homogeneous, we need to study both it and a closely related
system, the so-called associated homogeneous system: get it by just erasing the
constant terms from the inhomogeneous system. This is the same as in linear algebra.
The solution space of the associated homogeneous system is a vector space with n
linearly independent basis elements, called fundamental solutions. This is the space
that is a true vector space and goes through the origin.
If we find even one particular solution of the inhomogeneous system, any other must
differ from it by an element of that vector space. So by adding it to the vector
space, we obtain the entire affine linear space of solutions of the original system.
CHAPTER VIII LINEAR SYSTEMS WITH CONSTANT COEFFICIENTS: MATRIX EXPONENTIALS
In Calc II the exponential function exp(x) is defined by the property that
    d exp(x)/dx = exp(x)   and   exp(0) = 1.
I.e., it is defined as the unique solution of an initial value problem.
However, it follows easily from this definition that it is given by a power series
    exp(x) = 1 + x + x^2/2 + x^3/6 + x^4/24 + x^5/120 + ... + x^n/n! + ...
Matrix exponentials are defined for any square matrix A with this power series.
Exercises: calculate that

        / 0  -t  0 \     / cos t  -sin t  0 \
    exp | t   0  0 |  =  | sin t   cos t  0 |
        \ 0   0  0 /     \   0       0    1 /

and that matrix exponentials satisfy the same property, namely that the differential
equation in vectors

     d  / x(t) \       / x(t) \                     / x(t) \             / x(0) \
    --- | y(t) |  =  A | y(t) |   has the solution  | y(t) |  =  exp(tA) | y(0) | .
     dt \ z(t) /       \ z(t) /                     \ z(t) /             \ z(0) /
This method only works for truly linear systems, i.e., homogeneous systems.
If the matrix A is diagonal, its exponential is especially easy to calculate; the
power series becomes very simple:

                             / t  0  0 \     / exp t    0      0   \
    Exercise: calculate exp  | 0  u  0 |  =  |   0    exp u    0   | .
                             \ 0  0  v /     \   0      0    exp v /
If a matrix is not diagonal, maybe a change of basis will make it diagonal.
If so, change the basis, use the above formula, and then change the basis back
to what it used to be. The resulting matrix will be the answer.
                             / 0  t  0 \     / cosh t  sinh t    0   \
    Exercise: calculate exp  | t  0  0 |  =  | sinh t  cosh t    0   | .
                             \ 0  0  v /     \   0       0     exp v /
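A sketch of the power-series definition in plain Python (truncated after 30 terms, plenty for a small matrix); it reproduces the rotation-matrix identity from the first exercise above:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_exp(A, terms=30):
    # Sum the power series I + A + A^2/2! + A^3/3! + ...
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]   # A^0 = I
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)        # A^k
        fact *= k                        # k!
        for i in range(n):
            for j in range(n):
                result[i][j] += power[i][j] / fact
    return result

t = 0.7
A = [[0.0, -t, 0.0],
     [t, 0.0, 0.0],
     [0.0, 0.0, 0.0]]
E = mat_exp(A)
print(E[0][0], math.cos(t))  # (1,1) entry vs cos t
print(E[1][0], math.sin(t))  # (2,1) entry vs sin t
```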
Not every matrix is diagonalisable. Not every matrix has a basis of eigenvectors.
Sometimes, when the characteristic polynomial has a double root, there is only one
associated eigenvector. In that case (resonance, for example) one must fall back
on power series. The following system exhibits resonance:
y' = y + z
z' = z
This system is equivalent to a second-order differential equation in y:
(y'' - 2y' + y = (y+z)' - 2y' + y, but y = y' - z, so y'' - 2y' + y = (y+z)' - 2y' + y' - z
= y' + z' - 2y' + y' - z = z' - z = 0; so this system is equivalent to y'' - 2y' + y = 0.)
One must exponentiate the matrix

      / 1 1 \
    t |     |
      \ 0 1 /

using a power series to get the answer, which involves y(t) = t exp(t) among other
things. (To be precise, you will get the answer y(t) = y(0) exp(t) + z(0) t exp(t),
z(t) = z(0) exp(t).)
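A hedged numerical check of that stated answer, with arbitrary sample initial values y(0) = 2 and z(0) = 3; the derivatives are approximated by central differences:

```python
import math

def y(t, y0=2.0, z0=3.0):
    # Claimed solution of the resonant system: y(t) = y(0) e^t + z(0) t e^t.
    return y0 * math.exp(t) + z0 * t * math.exp(t)

def z(t, z0=3.0):
    # Claimed solution: z(t) = z(0) e^t.
    return z0 * math.exp(t)

def d(f, t, h=1e-6):
    # Central-difference derivative.
    return (f(t + h) - f(t - h)) / (2 * h)

for t in [0.0, 0.5, 1.0]:
    assert abs(d(y, t) - (y(t) + z(t))) < 1e-4   # y' = y + z
    assert abs(d(z, t) - z(t)) < 1e-4            # z' = z
print("y(t) = y(0) e^t + z(0) t e^t checks out")
```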
If you don't need an explicit formula for the answer giving the dependence on
the initial conditions, but only need the general solution or the fundamental
solution, then you don't need to change the basis back. If the matrix is
diagonalisable, all you need are the eigenvalues. If the characteristic polynomial
is too hard to factor, you can estimate the largest eigenvalue by either using
Euler's method or just raising A to a very high power. On average, you will find
    A^n v ~ m^n v for large n, for pretty much any random vector v,

where m is the largest eigenvalue in absolute value.
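That observation is the power-iteration method. A minimal sketch (the sample matrix [[2,1],[1,2]], with eigenvalues 3 and 1, and the starting vector are arbitrary choices for illustration):

```python
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def dominant_eigenvalue(A, steps=60):
    # Repeatedly apply A and renormalise; the growth factor settles down to
    # the eigenvalue of largest absolute value, for almost any starting v.
    v = [1.0] + [0.0] * (len(A) - 1)
    est = 1.0
    for _ in range(steps):
        w = mat_vec(A, v)
        est = max(abs(x) for x in w)
        v = [x / est for x in w]
    return est

A = [[2.0, 1.0],
     [1.0, 2.0]]           # eigenvalues 3 and 1
print(dominant_eigenvalue(A))
```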
The advantages of matrix exponentials are: if you need a formula that goes from the
initial values right to the final answer (skipping the general solution), if there
is resonance, if you need a quick approximation and can get your computer to do the
matrix algebra, or if you need a quick guide to the qualitative behaviour over
a long period of time of the solution. It works for any system without exception.
The disadvantages are: if you only wanted y anyway, and not z, w, etc. (as when the
problem came from a higher-order polynomial differential operator in the first
place), or if it is not an initial value problem, then it requires more linear
algebra than is really necessary to get the general solution, so using the auxiliary
polynomial would be quicker (if there is no resonance).
Higher order linear equations cannot always be solved with a formula except in the
simple case where the coefficients are constant. The procedure is as always, find
the general solution of the associated homogeneous equation. Find a particular
solution of the originally given (nonhomogeneous) equation. Add them to get the
general answer. Then fiddle with the constants, if necessary, to solve the initial
value problem.
Example: y'' - 5y' + 6y = 2.
The associated homogeneous equation is (D^2 - 5D + 6)y = 0. Since the polynomial
factors, the eigenvalues are easy to find: they are 2 and 3. So the fundamental
solutions are exp(2x) and exp(3x), and the general solution is A exp(2x) + B exp(3x).
Now we have to find a particular solution y_p(x) that satisfies the original equation.
FINDING PARTICULAR SOLUTIONS
MATRIX METHODS: guess a vector space V which: contains the constant term 2 (the
forcing term), is preserved by the differential operator D, and is small. Our first
guess is the one-dimensional space spanned by the function 2, and this makes things
easy, since the matrix of D on this space is zero. So the operator equation

    (D^2 - 5D + 6)y = 2

becomes

    (O - 5O + 6I)y = (1)

(since the vector 2 is the basis vector, its coordinate is 1).
But O - 5O + 6I = 6I, which is invertible, so 6Iy = (1) becomes y = (1)/6, which
means 1/3, since the coordinate 1/6 multiplies the basis function 2.
Hence y_p = 1/3, a constant function. This was too easy.
The general solution is A exp(2x) + B exp(3x) + 1/3.
Suppose the constant term was x^2 + 2.
We'd better choose V = <1, x, x^2>. Then, in this basis,

        / 0 1 0 \                           / 6  -5    2 \
    D = | 0 0 2 |   and so  D^2 - 5D + 6I = | 0   6  -10 | .
        \ 0 0 0 /                           \ 0   0    6 /

This is invertible, and since the constant term has coordinates (2,0,1), we get

          1 / 1  5/6  19/18 \ / 2 \     1 / 55/18 \
    y_p = - | 0   1    5/3  | | 0 |  =  - |  5/3  |
          6 \ 0   0     1   / \ 1 /     6 \   1   /

which are the coordinates in the basis for V of the quadratic polynomial
y_p = (1/6)(55/18 + 5x/3 + x^2). Exercise: check!!!
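The suggested check can be carried out with exact rational arithmetic. Note that the coordinate vector (55/18, 5/3, 1)/6 is read in the basis <1, x, x^2>, so the candidate polynomial is y_p = (1/6)(55/18 + (5/3)x + x^2):

```python
from fractions import Fraction as F

def deriv(p):
    # p = [c0, c1, c2, ...] stands for c0 + c1 x + c2 x^2 + ...
    return [F(i) * c for i, c in enumerate(p)][1:] or [F(0)]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    return [c * a for a in p]

# Candidate particular solution y_p = (1/6)(55/18 + (5/3) x + x^2):
yp = [F(55, 108), F(5, 18), F(1, 6)]
y1 = deriv(yp)
y2 = deriv(y1)

lhs = add(add(y2, scale(F(-5), y1)), scale(F(6), yp))  # y'' - 5y' + 6y
print(lhs)  # should be [2, 0, 1], the coefficients of x^2 + 2
```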
Suppose the constant term was exp(-2x). Choose V to be the one-dimensional space
spanned by exp(-2x). On this space, D is just the scalar -2, so the equation becomes

    ((-2)^2 - 5(-2) + 6)y = (1), which is y = (1)/20.

Since (1) means exp(-2x), y_p = (1/20) exp(-2x).
UNDETERMINED COEFFICIENTS: If the driving force is a polynomial, guess a polynomial.
If it is trigonometric with frequency w, guess A cos(wt) + B sin(wt) and solve for
A and B: plug in, expand out, collect like terms, and equate coefficients.
Unless there is resonance, in which case guess A cos(wt) + B sin(wt) plus polynomials
times cos(wt) and sin(wt).
If it is exponential, guess A exp(wt), unless there is resonance, in which case
guess a polynomial times exp(wt).
If it is a polynomial times an exp, guess something of the same form.
If it is a polynomial times a trig....
If it is a trig times an exponential, guess something of the same form...unless there
is resonance...then throw in some polynomials. Including too many possibilities is
not that bad; it just means you will get some zeroes as the answer for whatever terms
weren't really needed. It makes your work a little more complicated, but otherwise
does no harm. But if you do not include enough possibilities, the equations will be
inconsistent and have no solutions for A, B, C, D, etc.
EXAMPLE
Consider the equation y'' + 4y = sin(2x) + .4 cos(2x).
Guessing the form of the particular solution with unknown coefficients,

    A x cos(2x) + B x sin(2x),

is basically like guessing the vector space V which D and (D^2 + 4I) will act
on... V = < x cos(2x), x sin(2x), cos(2x), sin(2x) > ... and the work of plugging
in the guess, expanding out, collecting like terms, equating coefficients, and
solving the resulting simultaneous linear equations is basically like the matrix
method of calculating with the matrix of D on the space V in the basis given. This
is an example of resonance, so the matrix of D^2 + 4I will not be invertible, but
the equation will have lots of solutions (be underdetermined). (If it were
invertible, there would be only one solution.)
Exercise: find a particular solution using *both* methods, i.e., matrix
methods, and the method of undetermined coefficients. (If the answers disagree,
both are right! the disagreement will be by a function which is part of the
general solution anyway...)
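For reference, one pair of coefficients that works out (found by plugging the guess in and equating coefficients; the exercise asks you to derive them by both methods) is A = -1/4, B = 1/10. A numerical spot-check, approximating y'' by a central second difference:

```python
import math

def yp(x):
    # Candidate particular solution: A x cos 2x + B x sin 2x
    # with A = -1/4 and B = 1/10 (values computed by hand, to be checked).
    return -0.25 * x * math.cos(2 * x) + 0.1 * x * math.sin(2 * x)

def second_deriv(f, x, h=1e-5):
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

for x in [0.3, 1.0, 2.5]:
    lhs = second_deriv(yp, x) + 4 * yp(x)
    rhs = math.sin(2 * x) + 0.4 * math.cos(2 * x)
    assert abs(lhs - rhs) < 1e-4
print("y'' + 4y = sin 2x + .4 cos 2x is satisfied")
```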
CHAPTER IX LAPLACE TRANSFORMS
Laplace transforms and Fourier transforms are like logarithms. They are a dictionary
that can simplify a math problem. Logarithms were a dictionary that turned
multiplication into addition, and division into subtraction. Also raising to a power
got turned into multiplying by the exponent, and extracting an n-th root got turned
into dividing by n. Laplace transforms are the same way. They turn a differential
equation into a multiplication problem. (Unfortunately, they turn simple
multiplication problems into something tricky...)
How did logarithms work? (Your teacher maybe skipped this because of calculators).
If you have the problem 2.12 times 3.14, you translate 2.12 into its log, which is
.75142, and 3.14 into its log, which is 1.14422, and then you translate `multiply'
into `add', so you add them. You get 1.89564, but that is the log of the answer; we
are still in `log-language', so to speak. We have to use the antilog dictionary now
and find the antilog, or `unlogarithm', of 1.89564. That is 6.6568... and that is
our answer (approximately).
Only positive numbers have logarithms... not every number has a logarithm. Only
functions which are nicely behaved have Laplace transforms; it is the same thing.
If f(t) is a function defined for all t > 0 which does not grow too quickly as t
approaches either 0 or infinity (never mind the details), then its Laplace transform
is a function L(f). By tradition, we make the variable of L(f) a different letter
than t: we call it p, and actually p can even be a complex number or a negative
number. Engineers use a table of Laplace transforms as a dictionary to translate
functions of t into functions of p. Here is part of one, and this part must be
memorised:
f(t)                          L(f)

t^(n-1)/(n-1)!                1/p^n

sin(at)/a                     1/(p^2 + a^2)

-(.577...) - log(t)           log(p)/p
(the constant in parentheses is called Euler's constant, or Euler-Mascheroni)

H(t-k)                        exp(-kp)/p
where H is Heaviside's function, the discontinuous step function.
(Graph: H(t-k) is 0 to the left of the jump at t = k, then 1 to the right.)
Sometimes we use a notation with capital function letters to mean the Laplace trans
form, i.e., the Laplace transform of f(t) is F(p).
Now what is the translation of operations into the `Laplace transform' language?
Just like translating into log-language turned `multiply' into `add', etc., so does
translating into Laplace transforms turn addition into addition, scalar
multiplication into scalar multiplication (Laplace transforms are `linear'),
derivatives into multiplying by the variable p, and integrals into dividing by the
variable p. Unfortunately, there is no product rule for Laplace transforms.
RULES
L(af + bg) = a L(f) + b L(g)
L (f') = p L(f)  f(0)
If F is the antiderivative of f, then
L(F) = L(f) / p as long as F(0) = 0.
{L(f*g)}(p) = {L(f)}(p){L(g)}(p)
where * is a new symbol, read `convolution'. Convolution is a new way to combine
functions which is important in physics and differential equations. We will give its
definition later. Roughly, if f is a good description of an electric circuit, then if
the driving e.m.f. is g, the output is f*g. So this rule is why the Laplace
transform is good for circuit analysis. (So is the Fourier transform; Fourier and
Laplace hated each other...)
Anyway, just like the logarithm is the unique function which turns multiplication
into addition and satisfies log(e) = 1, so L is the only transform which obeys those
rules and takes the polynomials to what I said they go to: namely, it is linear,
takes derivatives to multiplication by the variable p (provided the function was
zero at 0), takes convolution to multiplication, and takes 1 to 1/p.
Theorem: L(t^n) = n!/p^(n+1). Since L(1) = 1/p, it is true when n = 0.
The integral of 1 is t, so by the rule, L(t) = 1/p^2. The integral of t is t^2/2, so
by the rule, L(t^2/2) = 1/p^3. Etc.
It follows from this that L(antiderivative of f) = L(f)/p (the antiderivative must
be zero at zero).
EXAMPLE of the use of the Laplace transform to solve a differential equation.
Solve the equation Dy = t.
Taking Laplace transforms, we get L(y') = L(t) = 1/p^2.
But since L(f') = pL(f), we get pL(y) = 1/p^2 and so L(y) = 1/p^3. But 2/p^3 is in
the table on the image side: L(t^2) = 2/p^3. So, by linearity, we get to multiply
both sides by 1/2 and get L(t^2/2) = 1/p^3, so the original, y, must be t^2/2.
Another example: Solve the equation (D^2 - 1)y = t. Taking transforms,
(p^2 - 1)F = 1/p^2, and hence

    F(p) = 1/((p^2 - 1) p^2) .

This is not in our table anywhere. But using partial fractions, we can rewrite it as
a linear combination of images in our table:

    F(p) = -1/p^2 + (1/2)/(p-1) - (1/2)/(p+1) .

So the original is -t + exp(t)/2 - exp(-t)/2.
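A quick numerical spot-check of that original, approximating y'' by a central second difference:

```python
import math

def y(t):
    # The original recovered from the partial fractions:
    # y = -t + exp(t)/2 - exp(-t)/2.
    return -t + math.exp(t) / 2 - math.exp(-t) / 2

def second_deriv(f, t, h=1e-5):
    return (f(t + h) - 2 * f(t) + f(t - h)) / (h * h)

for t in [0.0, 0.7, 1.5]:
    assert abs(second_deriv(y, t) - y(t) - t) < 1e-4   # y'' - y = t
print("y'' - y = t holds")
```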
MORE OPERATIONS
If f is a function, it can be translated left or right by, say, 2. That is, we could
define g(t) = f(t+2)...that is translation to the left. The graph gets pushed to the
left by two units. It is important to know what happens to the Laplace transform
under this situation: it is an interesting and useful formula.
Theorem: if L(f) = F(p), then L(f(t+a)) = exp(ap)F(p).
Because of Taylor's theorem, g = f + af' + a^2 f''/2! + a^3 f'''/3! + ...
Because of the rules, G = F + apF + a^2 p^2 F/2! + a^3 p^3 F/3! + ..., but now F(p)
is a common factor and we just get multiplication by F(p):

    G = F(p)(1 + ap + a^2 p^2/2! + a^3 p^3/3! + ...) = F(p) e^(ap).
Now for an important technicality: each of our functions f(t) is assumed to be zero
if t < 0. This is because we are interested in switching on circuits, and really t
will be time, so everything starts at t = 0 and we ignore the past. So, really, 1
means H(t), the function which is zero for past time and is 1 now and in the future.
Also, t is short for tH(t). Etc. This is not a problem for right translation but
it is a slight technicality for left translation:
We suppose all our functions vanish for negative numbers, f(t) = 0 for t < 0. But
g(t) = f(t + 2) does not. So the theorem above is really about H(t) f(t+2).
THE DIRAC DELTA FUNCTION AND ITS DERIVATIVES
By looking at the graph of H we can calculate its derivative. H' = 0 on the negative
half-line, zero on the positive half-line, but infinity at the spot, zero, where the
graph is completely vertical, because it has infinite slope there: the rise is 1
but the run is 0.
This function is called the Dirac delta function d(t), or, the unit impulse function.
Since H is its antiderivative, the integral of d on any interval (a,b) is H(b)H(a),
which is zero or one, depending on whether the interval contains the jump or not.
Similarly, the integral of d `against' a function f(t) is just f(0). Integration of
a function against another one, f(t), is important.

    integral of d(t) f(t) dt  =  f(0)
The derivative of the delta function is sort of a more complicated double spike,
first up and then down. It is hard to understand, but to be careful and understand
it well, we need to know what d' gives when integrated against some function f(t).
It gives -f'(0) (the minus sign comes from integrating by parts):

    integral of d'(t) f(t) dt  =  -f'(0)

Since L(H) = 1/p, taking derivatives, we get L(d) = p (1/p) = 1.
Taking derivatives again, we get L(d') = p, L(d'') = p^2, etc.
Functions such as cos(t) do not occur in Nature: there was a time before the
oscillator was built or the switch thrown on the circuit, so it is really H(t)cos(t)
which is more realistic, and so if it is put in a differential equation, we
necessarily get d and d' as transient spike responses to turning things on.
(This raises an issue: our proof of the existence and uniqueness theorem for
solutions assumed that all coefficients, including the driving force, were
continuous and had continuous first derivatives. What now? In the cases we study,
the solutions still exist anyway. Because the discontinuities only occur at isolated
times, not all the time, we can use the existence theorem to show that there exists
a solution for the time period when the driving force is continuous; then we stop,
figure out what the initial conditions have become at that point, and start a new
differential equation with those matching boundary conditions and do the next
interval. For example, y'' = H(t)cos(t), y(-1)=0, y'(-1)=1. The solution of y''=0
for all t<0 with those initial conditions is y(t) = 1+t. Switch at 0: now y(0)=1 and
y'(0)=1, so from then on, y(t) = -cos t + 2 + t (integrate cos(t) twice and use the
right constants of integration).)
Exercise: what does d' yield, when integrated against sin(t)cos(2t) ?
What does d' yield, when integrated against sin(t)f(t) ?
What does d' yield, when integrated against f(t)g(t)?
For short, the result of integrating d or d' or whatever against f(t) is notated
<d,f> or <d',f> or whatever, e.g., <2d-3d'+d'',f>.
Laplace transforms are useful for solving differential equations, especially initial
value problems with discontinuous forcing terms. But this is only true if you know
the Laplace transforms you need. On the other hand, if you already know the solution
to a differential equation, you can find out a useful Laplace transform.
COSINE
y = cos(t) satisfies (D^2 + 1)y = 0.
But this function has no Laplace transform; it is only H(t)cos(t) which is supported
on the right half-line. What happens if we plug in y(t) = H(t)cos(t) instead?

    (D^2 + 1)y = D(DH(t) cos(t) + H(t) D cos(t)) + y
    = D(d cos(t) - H sin(t)) + H cos(t)
    = d' cos(t) + d cos'(t) - H' sin(t) - H cos t + H cos t
    = d' cos(t) - d sin(t) - d sin(t)

but since sin(t) = 0 when d is nonzero, the d sin(t) terms are zero. So, we get
d' cos(0) = d':

    (D^2 + 1)y = d'.

On taking Laplace transforms of both sides, we get

    (p^2 + 1)F = p

and hence

    F = p/(p^2 + 1).  Q.E.D.
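The result can be spot-checked by doing the Laplace integral numerically; a crude trapezoid sketch (the truncation point T = 40 and sample value p = 2 are arbitrary choices) lands very close to p/(p^2 + 1):

```python
import math

def laplace(f, p, T=40.0, n=100_000):
    # Trapezoid approximation of the integral from 0 to T of exp(-p t) f(t) dt;
    # truncating at T is harmless here because exp(-p T) is negligible.
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-p * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-p * t) * f(t)
    return total * h

p = 2.0
approx = laplace(math.cos, p)   # L(H(t) cos t) evaluated at p
exact = p / (p * p + 1.0)
print(approx, exact)
```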
TRANSLATIONS AND EXPONENTIAL DECAY
Theorem: L^(-1)(1/(p+1)) = exp(-t).
Proof: By Taylor's theorem, or the geometric series,

    1/(p+1) = p^(-1) - p^(-2) + p^(-3) - p^(-4) + ... + (-1)^(n-1) p^(-n) + ...

By linearity of L^(-1), we need to add up all the L^(-1)(p^(-n)):

    t^0/0! - t^1/1! + t^2/2! - t^3/3! + ... + (-1)^(n-1) t^(n-1)/(n-1)! + ... = e^(-t).

But after all, we had a comparable result, L^(-1)(1/p) = 1, or H(t). It looks as
though translating the function 1/p to the left by 1 has produced a multiplication
by an exponential, in this case exp(-t). It suggests the general theorem:
If L(f) = F(p), then the new original exp(-at)f(t) has the image F(p+a).
What about the other way around? What if we translate the original? Then we just
multiply the image by an exponential decay factor.
Theorem: if L(f) = F(p), then L(f(t-a)) = exp(-ap)F(p).
Proof: By Taylor's theorem, f(t-a) = f(t) - aDf(t) + a^2 D^2 f(t)/2! - a^3 D^3 f(t)/3! + ...
Now if we take Laplace transforms, applying D becomes multiplication by p, so we get

    L(f(t-a)) = F(p) - apF(p) + a^2 p^2 F(p)/2! - a^3 p^3 F(p)/3! + ...
              = F(p)(1 - ap + a^2 p^2/2! - a^3 p^3/3! + ...) = F(p) exp(-ap), Q.E.D.
It is important to notice that going from image to original, the sign of a gets
reversed, but going from original to image, the sign of a stays the same. There is
also an unfortunate technicality if a is negative and you try to take L(f(ta)).
You can't quite do it since f(ta) is not supported on the right halfline if a is
negative. It is if a is positive, since f was and f got shifted further right.
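A numerical illustration of the right-translation theorem, with the sample original f(t) = exp(-t) (so F(p) = 1/(p+1)) translated right by a = 0.5; the transform is approximated by a trapezoid sum:

```python
import math

def laplace(f, p, T=40.0, n=100_000):
    # Trapezoid approximation of the integral from 0 to T of exp(-p t) f(t) dt.
    h = T / n
    total = 0.5 * (f(0.0) + math.exp(-p * T) * f(T))
    for k in range(1, n):
        t = k * h
        total += math.exp(-p * t) * f(t)
    return total * h

a, p = 0.5, 1.0
f = lambda t: math.exp(-t)                        # original; F(p) = 1/(p+1)
shifted = lambda t: f(t - a) if t >= a else 0.0   # supported on t >= a
lhs = laplace(shifted, p)
rhs = math.exp(-a * p) * laplace(f, p)            # exp(-ap) F(p)
print(lhs, rhs)  # agree up to the crudeness of the quadrature
```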
DISCONTINUOUS DRIVING FORCE
The method of Laplace transforms does not have many advantages except when the
driving force is discontinuous or when the initial values are given.
Example with initial values given.
Consider y'' + y = 0. Then (Hy)'' + Hy = y(0)d' + y'(0)d, as usual.
Hence the image of the solution Hy is

    (y(0)p + y'(0))/(p^2 + 1)

so the original of this is y(0)cos(t) + y'(0)sin(t).
Consider y'' + y = H(t) - H(t-1) + H(t-2) - H(t-3) + H(t-4) - H(t-5) + ...
(this is a square wave driving force).
On taking transforms, and supposing y(0) = 0 and y'(0) = 0, we get

    (p^2 + 1)F = (1 - exp(-p) + exp(-2p) - exp(-3p) + exp(-4p) - ...)/p
               = (1/p) (1/(1 + exp(-p)))   by the geometric series.

Therefore, by partial fractions,

    F = [1/(1 + exp(-p))] (1/p - p/(p^2 + 1)).

Obviously the original of the partial-fractions part is just H(t) - H(t)cos(t), but
we have to convolve it with the original of that first factor, 1/(1 + exp(-p)) =
1 - exp(-p) + exp(-2p) - ..., which is the alternating train of spikes
d(t) - d(t-1) + d(t-2) - ... (convolution with d(t-n) just translates by n)... so
we get

    y(t) = H(t) - H(t)cos t - H(t-1) + H(t-1)cos(t-1) + H(t-2) - H(t-2)cos(t-2) ...
The real convenience of Laplace transforms is that we are automatically assured that
at every point of discontinuity of the driving force, the solution is continuous and
its first derivative is continuous, so the different intervals patch together as
smoothly as possible (the second derivatives are discontinuous). We could
double-check this in this simple case. From 0 to 1, y(t) = 1 - cos t. From 1 to 2,
y(t) = -cos t + cos(t-1). From 2 to 3, y(t) = 1 - cos t + cos(t-1) - cos(t-2), etc.
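The patching can be double-checked numerically: the three pieces written above agree in value and in first derivative at the joints t = 1 and t = 2 (slopes taken by central differences):

```python
import math

# Pieces of the square-wave response, read off interval by interval:
y0 = lambda t: 1 - math.cos(t)                                      # 0 < t < 1
y1 = lambda t: -math.cos(t) + math.cos(t - 1)                       # 1 < t < 2
y2 = lambda t: 1 - math.cos(t) + math.cos(t - 1) - math.cos(t - 2)  # 2 < t < 3

def d(f, t, h=1e-6):
    return (f(t + h) - f(t - h)) / (2 * h)

assert abs(y0(1) - y1(1)) < 1e-12       # continuous value at t = 1
assert abs(y1(2) - y2(2)) < 1e-12       # continuous value at t = 2
assert abs(d(y0, 1) - d(y1, 1)) < 1e-6  # continuous slope at t = 1
assert abs(d(y1, 2) - d(y2, 2)) < 1e-6  # continuous slope at t = 2
print("the pieces patch together with continuous value and slope")
```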
Now suppose that y(0) = 1 and y'(0) = 0. Then

    (p^2 + 1)F(p) = (1/p) (1/(1 + exp(-p))) + p , and so

    F(p) = [1/(1 + exp(-p))] (1/p - p/(p^2 + 1)) + p/(p^2 + 1) ,

so all we have to do is add H(t)cos(t) to the previous answer; hence we get

    y(t) = H(t) - H(t-1) + H(t-1)cos(t-1) + H(t-2) - H(t-2)cos(t-2) ...
CHAPTER X CONVOLUTION AND ELECTRIC CIRCUITS
An electric circuit is modelled by a differential equation. The applied voltage is
the driving force, i.e., the nonhomogeneous part, the constant term. For example,
an oscillator without any dissipative resistance with a natural frequency of 3 would
be given by the differential equation (D^2 + 9)y = 0. But if it is driven by an
external power source with frequency 2.9, then the equation for the resulting
behaviour is (D^2 + 9)y = cos(2.9t). We can think of the circuit itself as an
integral transform which transforms the input cos(2.9t) into the output
(cos(3t)) * cos(2.9t). The function cos(3t) is the description of the circuit and *
means the convolution transform.
This is a process which gets applied to the input, which is cos(2.9t). There are
three ways to find the convolution of two functions, f(t) and g(t). One way is to
build the circuit which is described by f(t), just like our oscillator was described
by cos(3t); then input the voltage g(t) and measure the result: that will be the
answer. Another way is to multiply the images F(p)G(p) together and find the
original of the result. The third way is to calculate the integral

    f*g(t) = integral from u=0 to t of f(t-u) g(u) du .

Exercises: 1) If f = cos 3t and g = cos 2.9t, check that this is a particular
solution of the given differential equation. 2) For any g, d*g = g and d'*g = g'.
EXAMPLES TO ACCOMPANY THE OUTLINE OF ELEMENTARY DIFFERENTIAL EQUATIONS
I Find the general solution of y' = 1/(x^2 + 1)
Give an example of a third order equation with constant coefficients.
Check whether y = x exp(2x) is a solution to the equation y'' - 4y' = -4y
Find all possible s(t) if a(t) = -32.2, where a(t) is the acceleration of s.
II Find the solution to the initial value problem y' = x + 3, y(0) = 1.
Find the solution to the initial value problem, y'=1/(xx+1), y(0)= 2.
Solve y''= 4y, y(0) = 1/2, y'(0) = 1/3.
Solve y' = x^2 + 1, y(0) = 2.
Find the general solution of y' = y^2 + 1.
III Draw the direction field of y' = x^2 + 1 and sketch the solution curve that
passes through the point (0, 2).
Draw the direction field of y' = y^2 + 1 and sketch the family of solutions.
Use Euler's method to approximate y(1.2) if y' = y and y(1) = 1.
IV Solve the initial value problem, y'' - 5y' + 6y = 0, y(0) = 1, y'(0) = 0.
V Use separation of variables to find the general solution to y' = 2xy.
Find the general solution to
    x dy/dx = 2x + 3y
Test whether or not x dy - (3y - 2x) dx is exact.
Find the general solution to x dy = (2x - y + 3) dx
Test whether or not x dy - (3y - 2x) dx becomes exact after you multiply it by the
function 1/x^4.
If u(x)sin(x)dy - u(x)(3y-2)dx is exact, what differential equation must u obey?
VI Find the system of first-order equations equivalent to
    y''' + (1-x^2) y'' - x y' + x = cos(x)
Use Euler's method to approximate y(.3) if y(0) = 1, y'(0) = 0, and y''(0) = 1.
How could you use uniqueness of solutions of a system to prove that
    cos(x + a) = cos(x) cos(a) - sin(x) sin(a) ?
VII If p(x) and q(x) each satisfy (1-x^2)y'' - 2xy' + 6y = 0, then prove that
p(x) - 2q(x) satisfies the same differential equation (called Legendre's equation).
Since cos(2x) satisfies the equation y'' + 4y = 0, and since x^3 - 3 satisfies
y'' + 4y = 4x^3 + 6x - 12, check that 3 cos(2x) + x^3 - 3 satisfies this last
equation as well.
Which of the three linear equations just mentioned are homogeneous?
VIII Find the system of first order equations equivalent to y''' + y'' - y = 0.
Write them in matrix form.
Write the polynomial differential operator inherent in y''' + 2y'' - y' - 2y = x.
Factor it and write down the general solution.
Solve y'' + 9y = cos (2x), solve y'' + 9y = cos (3x).
Use Mathematica to graph the solutions of the initial value problems
y'' + 9y = cos (nx), y(0) = 2, y'(0) = .25, for n=2,2.1,2.2,2.3,...2.9,2.99,2.999,3.
Find the maximum value of each solution.
Consider the system of equations y' = y + z, z' = z. Write it in matrix form.
If y(0) = 2 and z(0) = 3, what does Euler's method with step size 1 yield for
y(1) and z(1)? y(2) and z(2)? y(n) and z(n), n arbitrary?
Same question with step size 1/5 instead.
Calculate the matrix exponential and use it to write down an explicit solution
for the system.
Use the method of undetermined coefficients to find any solution at all of
Legendre's equation, above, except the zero solution.
IX Use linearity to find the Laplace transform of t^2 + 2t - 3.
Use derivatives to find the Laplace transform of sin(t), given that
L(cos t) = p/(p^2 + 1).
Use antiderivatives to find the transform of t^3, and verify that it agrees with
the entry in the table.
Find the partial fraction expansion of 1/(p^2 (p^2 + 1)) and use it to solve
the differential equation f'' + f = t.
Find (D^2 + D + I)(H(t) sin(t)) and simplify the answer using the rules for δ' and δ.
Use Laplace transforms to solve the equation f'' + 9f = cos(wt), f(0) = 0, f'(0) = 1/4.
Same thing but with a dissipative term: f'' + 2f' + 10f = cos(wt), etc.
Write the square wave g(t)

     1   _______        _______        _______
         |     |        |     |        |     |
         |     |        |     |        |     |    etc.
   1/2   |     |        |     |        |     |
         |     |        |     |        |     |
         0     1        2     3        4     5    6

as an infinite sum of translates of various multiples of Heaviside functions, use
the geometric series to simplify, and then find its image function.
Use your answer to solve the initial value problem f''+9f=g, f(0)=f'(0)=2.
X What is the descriptor function for each of the differential equations in IX?
What is the differential equation and descriptor function of the series LCR
circuit with L = 2, C = 1/1000, and R = 10000?
Using Laplace transforms, calculate the convolution of J_0(t) with itself. Find
cos(ut) * cos(wt), exp(at) * sin(t), g(t) * cos(3t).
METHODS OF SOLUTION
IV. The equation y'' - 5y' + 6y = 0, y(0) = 1, y'(0) = 0 is an example of a
second order differential equation with constant coefficients. The polynomial
differential operator is D^2 - 5D + 6, so we have to find the roots of the
polynomial (m - 2)(m - 3) = 0, which are obviously 2 and 3. Therefore exp(2x) and
exp(3x) are the fundamental solutions. The general solution is
y = A exp(2x) + B exp(3x). To solve for A and B we have to plug the given
information into the formulas for y and y'. Plugging 0 into y, we get
1 = y(0) = A + B. We also need the formula for y', which is 2A exp(2x) + 3B exp(3x).
Plugging in, we get 0 = y'(0) = 2A + 3B. A little algebra gives A = 3 and B = -2.
We have y(x) = 3 exp(2x) - 2 exp(3x).
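This answer can be spot-checked numerically (a minimal sketch in Python, not part of the original outline): we verify both initial conditions and that y'' - 5y' + 6y vanishes at a few sample points.

```python
import math

def y(x):
    # candidate solution y = 3 exp(2x) - 2 exp(3x)
    return 3 * math.exp(2 * x) - 2 * math.exp(3 * x)

def yp(x):
    # first derivative, differentiated by hand
    return 6 * math.exp(2 * x) - 6 * math.exp(3 * x)

def ypp(x):
    # second derivative
    return 12 * math.exp(2 * x) - 18 * math.exp(3 * x)

# initial conditions y(0) = 1, y'(0) = 0
print(y(0.0))   # 1.0
print(yp(0.0))  # 0.0

# residual of y'' - 5y' + 6y should be (numerically) zero everywhere
for x in (0.0, 0.5, 1.0):
    print(abs(ypp(x) - 5 * yp(x) + 6 * y(x)) < 1e-9)  # True
```

The same three-line check works for any constant-coefficient problem in this outline.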
V. y' = 2xy becomes dy = 2xy dx. We need to isolate all the y terms on the left,
so we multiply both sides by 1/y, getting dy/y = 2x dx. Now we can integrate
both sides, getting log y + C = x^2, and hence, upon exponentiating both sides,
y = A exp(x^2).
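As a quick numerical spot-check (a sketch; the constant A = 1.7 is an arbitrary choice), we can confirm that y = A exp(x^2) satisfies y' = 2xy by comparing a finite-difference estimate of the derivative against 2xy.

```python
import math

A = 1.7  # any value of the arbitrary constant works

def y(x):
    # candidate solution from separation of variables: y = A exp(x^2)
    return A * math.exp(x ** 2)

def residual(x, h=1e-6):
    # central-difference estimate of y' compared with 2*x*y(x)
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return abs(dydx - 2 * x * y(x))

for x in (-1.0, 0.0, 0.5, 2.0):
    print(residual(x) < 1e-5)  # True at every sample point
```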
The equation x dy/dx = 2x + 3y is homogeneous, so we make the change of
variables u = y/x, so ux = y and dy = u dx + x du. Eliminating y and dy and replacing them
with the expressions for them we just got, we then get
x (u dx + x du)/dx = 2x + 3ux = x(2 + 3u), so
xu + x^2 du/dx = x(2 + 3u), and this is separable.
Multiplying both sides by dx, we get xu dx + x^2 du = x(2 + 3u) dx, or
x^2 du = [x(2 + 3u) - xu] dx = (2x + 2xu) dx. Dividing both sides by x^2, we get
du = (2 + 2u) dx/x. Now dividing both sides by (u + 1) we get du/(u + 1) = 2 dx/x.
Now the variables have been separated. So we can integrate both sides, and get
log(u + 1) = 2 log x + C.
As so often, we need to exponentiate both sides to solve for u:
u + 1 = A x^2, with A an arbitrary constant.
But u was y/x, so we get y/x = A x^2 - 1, and hence y = A x^3 - x.
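Again a quick spot-check (a sketch; A = 0.4 is an arbitrary choice of the constant): we confirm that y = A x^3 - x satisfies x y' = 2x + 3y.

```python
A = 0.4  # arbitrary constant in the general solution

def y(x):
    # candidate solution y = A x^3 - x
    return A * x ** 3 - x

def yp(x):
    # exact derivative: 3A x^2 - 1
    return 3 * A * x ** 2 - 1

# check that x y' equals 2x + 3y at several sample points
for x in (1.0, 2.0, -3.0, 0.5):
    print(abs(x * yp(x) - (2 * x + 3 * y(x))) < 1e-9)  # True
```

Indeed x(3Ax^2 - 1) = 3Ax^3 - x and 2x + 3(Ax^3 - x) = 3Ax^3 - x, so the residual is zero.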
VI. If an equation has extra derivatives we can replace them with extra variables.
If the unknown function is y, its first derivative is y' and that's okay. But if
there is an extra derivative somewhere, y'', we can replace this by z' if we just
introduce the new variable z=y'.
In the case at hand, we have
y''' + (1 - x) y'' - x^2 y' + x = cos(x)
and the 'extra' derivatives are y'' and y'''. So we let z = y' and the
equation becomes
z'' + (1 - x) z' - x^2 z + x = cos(x).
There is still an extra derivative, z'', so we let w = z' and so z'' = w'.
Now the equation becomes
w' + (1 - x) w - x^2 z + x = cos(x).
This is now a first order equation. We have to include the two equations
we introduced, which were y'=z and z'=w, so the system becomes
w' + (1 - x) w - x^2 z + x = cos(x)
y'=z
z'=w.
Now for neatness, we arrange it with the derivatives on the left and everything else
on the right,
w' = cos(x) - x + x^2 z + (x - 1) w
y' = z
z' = w.
This system is equivalent to the original equation. So are other systems, even
when they are less convenient. For example, we could let p be something unmotivated
like p = x y' + x - cos(x)
and q = p' - sin(x).
Then q = x y'' + y' + 1 + sin(x) - sin(x) = x y'' + y' + 1,
and so q' = y'' + x y''' + y'' = x y''' + 2y''.
Hence y''' = (q' - 2y'')/x,
but since y'' = (q - y' - 1)/x we have, in turn, y''' = (q' - 2(q - y' - 1)/x)/x,
so we can still write down a system
p = x y' + x - cos(x)
q = p' - sin(x)
(q' - 2(q - y' - 1)/x)/x + (1 - x)(q - y' - 1)/x - x^2 y' + x = cos(x),
simply in order to make the point that there are many systems which are equivalent
to each other and equivalent to the same higher order equation.
Next, we look at the use of Euler's method for these systems. Consider the initial
value problem given y(0) = 1, y'(0) = 0, y''(0) = -1.
In terms of our new variables, this becomes y(0) = 1, z(0) = 0, and w(0) = -1.
Using a step size of .1, we estimate each variable's change by using the tangent line
approximation. The change in y will be approximately its rate of change multiplied
by .1, and the same for z and w. These rates of change are y'(0), z'(0), and w'(0).
The first of these is z(0) = 0, the second of these is w(0) = -1, and the last of
these is cos(0) - 0 + (0)^2 z(0) + (0 - 1) w(0) = 1 - (-1) = 2. So the changes are
0, -.1, and .2. So our estimates are y(.1) = 1, z(.1) = -.1, and w(.1) = -.8.
Let us calculate, from the formulas of the system, the derivatives:
y'(.1) = z(.1) = -.1, z'(.1) = w(.1) = -.8, and
w'(.1) = cos(.1) - .1 + .01 z(.1) + (.1 - 1) w(.1) = .995 - .1 + .01(-.1) + (-.9)(-.8)
= 1.614. (This is not too far from 2, which was w'(0), so it is not unreasonable.)
Hence the changes are one-tenth of each of these, so the new values are
y(.2) = .99, z(.2) = -.18, w(.2) = -.639. (Not all of these were really needed...) Next,
we only need the change in y; now y'(.2) = z(.2) = -.18, so the change will be -.018,
so the new value will be y(.3) = .972. We were not asked for the others, so we stop.
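The hand computation above can be reproduced with a short script (a sketch; the right-hand side for w' comes straight from the system derived earlier, with initial data y(0) = 1, z(0) = 0, w(0) = -1).

```python
import math

def step(x, y, z, w, h=0.1):
    # one Euler step for the system
    #   y' = z,  z' = w,  w' = cos(x) - x + x^2 z + (x - 1) w
    wprime = math.cos(x) - x + x ** 2 * z + (x - 1) * w
    return y + h * z, z + h * w, w + h * wprime

x, (y, z, w) = 0.0, (1.0, 0.0, -1.0)
for _ in range(3):
    y, z, w = step(x, y, z, w)
    x = round(x + 0.1, 10)
    print(x, round(y, 4), round(z, 4), round(w, 4))
# after three steps y is approximately .972, matching the hand computation
```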
Because the theorem of uniqueness of solutions for systems is true, it follows that
the solution of the equivalent second order differential equation is also unique
once both initial conditions are specified.
Now cos(x) satisfies the differential equation y'' = -y, with initial conditions
y(0) = 1, y'(0) = 0. And cos(x + a) satisfies the same differential equation but with
initial conditions y(0) = cos(a), y'(0) = -sin(a). Therefore, any function y(x)
that satisfies the same differential equation with the same initial conditions
must be equal to cos(x + a). But the function cos(a) cos(x) - sin(a) sin(x) also does!
(Check by plugging in.) Therefore it must be equal to cos(x + a), Q.E.D.
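The uniqueness argument can be illustrated numerically (a sketch; the step count and tolerance are arbitrary choices): integrate y'' = -y by Euler's method with initial data y(0) = cos(a), y'(0) = -sin(a), and observe that the result stays close to cos(x + a), i.e. to cos(a)cos(x) - sin(a)sin(x).

```python
import math

def solve_and_compare(a, x_end=1.0, n=10000):
    # integrate y'' = -y as the system y' = v, v' = -y by Euler's method,
    # starting from y(0) = cos(a), y'(0) = -sin(a)
    h = x_end / n
    y, v = math.cos(a), -math.sin(a)
    for _ in range(n):
        y, v = y + h * v, v - h * y
    # distance from the claimed solution cos(x + a) at x = x_end
    return abs(y - math.cos(x_end + a))

for a in (0.3, 1.0, 2.5):
    print(solve_and_compare(a) < 1e-3)  # True: the two solutions agree
```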