Our goal in this note is to give some kind of answer to the question "When can we find a power series solution to a differential equation?" We begin by recalling a few basic facts about power series. We then analyze the case of second-order, linear, homogeneous differential equations. As we will eventually see, that situation is already complicated enough.
Analytic functions
There are two key properties of each power series: its center and its radius of convergence. A power series centered at $x_0$ is a series of the form
$$\sum_{n=0}^{\infty} a_n (x - x_0)^n.$$
A big part of Calculus III is building up the machinery to make sense of power series as functions. One of the central results is that a power series defines a function whose domain is always an interval centered at $x_0$, hence the name center. The radius of that interval of convergence is called the radius of convergence of the power series. If that radius $R$ is finite, then we are guaranteed that the series converges absolutely for all values of $x$ in the open interval $(x_0 - R, x_0 + R)$. The series might or might not converge at the endpoints. That depends on the particular series.
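For example, the power series $\sum_{n=1}^{\infty} \frac{(x-3)^n}{n \, 2^n}$ has center $3$, and the Ratio Test shows its radius of convergence is $2$. The series therefore converges absolutely for all $x$ in the open interval $(1, 5)$. At the endpoints, it converges at $x = 1$ (where it becomes the alternating harmonic series) but diverges at $x = 5$ (where it becomes the harmonic series).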
Part of the usefulness of power series is that "most" functions can be represented by power series, at least locally. More precisely, we say a function $f$ is analytic at $x_0$ if it can be represented by a power series near $x_0$, i.e., if there is a power series centered at $x_0$ such that $f(x) = \sum_{n=0}^{\infty} a_n (x - x_0)^n$ for all $x$ in some open interval containing $x_0$.
Warning
The word "analytic" can only be used to describe a function. It cannot be used to describe a differential equation.
In fact, we can say even more.
Taylor's Formula
If $f$ is analytic at $x_0$, then $f$ is infinitely differentiable at $x_0$ and there is an open interval $I$ containing $x_0$ such that for all $x$ in $I$ one has
$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} (x - x_0)^n.$$
In other words, if $f$ can be represented by a power series centered at $x_0$, then there is only one possible such power series and its coefficients are given by the above formula. The most famous examples of these formulas are probably the following three Maclaurin series:
$$e^x = \sum_{n=0}^{\infty} \frac{x^n}{n!}, \qquad \sin(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n+1)!} x^{2n+1}, \qquad \cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^n}{(2n)!} x^{2n}.$$
Each of these three equalities holds for all real $x$, a fact that is not part of Taylor's formula above but is wonderful nonetheless. Another famous power series representation comes from the geometric series formula
$$\frac{1}{1-x} = \sum_{n=0}^{\infty} x^n.$$
In contrast to the first three Maclaurin series, the above equality only holds for $x$ in the interval $(-1, 1)$. The function on the left is defined for all $x \neq 1$, but it only agrees with the power series on the right on the interval $(-1, 1)$, which happens to be the interval of convergence of that power series. What could we say about the function on the left at a point like $x_0 = 2$? Taylor's Formula gives us the recipe for the only possible power series centered at $2$ that could represent it. It turns out that we have
$$\frac{1}{1-x} = \sum_{n=0}^{\infty} (-1)^{n+1} (x - 2)^n,$$
with equality holding for all $x$ in the interval $(1, 3)$.
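Expansions like these are also easy to check by machine. Here is a small sketch using SymPy (our choice of tool; the notes themselves don't require any software):

```python
from sympy import symbols, series

x = symbols('x')
f = 1 / (1 - x)

# Maclaurin series (centered at 0); prints 1 + x + x**2 + x**3 + x**4 + O(x**5)
print(series(f, x, 0, 5))

# Series centered at 2, in powers of (x - 2); prints something like
# -1 + (x - 2) - (x - 2)**2 + (x - 2)**3 + O((x - 2)**4, (x, 2))
print(series(f, x, 2, 4))
```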
At this point you might be wondering how to tell when a given function $f$ is analytic at a point $x_0$. Technically, the answer is that the function must satisfy two properties: first, $f$ must be infinitely differentiable at $x_0$; second, the function must agree with its Taylor series centered at $x_0$ on some open interval around $x_0$. This is not always easy to verify, although you may have learned how to do so in Calculus III using Taylor's Remainder Estimate.
Fortunately there's good news. As far as we're concerned, "most" functions that we deal with are analytic at all points in their domain. Polynomials are analytic everywhere. So are the functions $e^x$, $\sin(x)$, and $\cos(x)$. Products of analytic functions are analytic, as are sums and differences. And ratios of analytic functions are analytic everywhere they're defined, i.e., everywhere except where the denominator vanishes. For our purposes we are going to be mostly concerned with rational functions, which are ratios of polynomials. For those functions, we conveniently have the following result.
Radius of convergence for rational functions
Suppose $f(x) = \frac{P(x)}{Q(x)}$, where $P$ and $Q$ are polynomials with no common zeroes. Let $x_0$ be any point where $Q(x_0) \neq 0$. Then $f$ is analytic at $x_0$, and the power series representation for $f$ at $x_0$ has radius of convergence $R$, where $R$ is the minimum distance (in the complex plane) from $x_0$ to the zeroes of $Q$.
The zeroes of the denominator $Q$ are called the poles of $f$. If we choose a center $x_0$ for our power series representation of $f$, then the radius of convergence of that series is the distance from $x_0$ to the nearest pole of $f$.
Example
Consider the function
$$f(x) = \frac{1}{(x - 2)(x^2 - 2x + 2)}.$$
Using the quadratic formula, the poles of $f$ are at $2$ and $1 \pm i$. It follows that $f$ is analytic at every point except those three. In particular, it is analytic at $0$, and the radius of convergence for the corresponding Taylor series is the minimum distance from $0$ to those three poles.
Using the distance formula, the distances from $0$ to those three poles are $2$, $\sqrt{2}$, and $\sqrt{2}$, respectively, so the minimum distance from $0$ to any of the poles is $\sqrt{2}$. By our above theorem, the Maclaurin series for $f$ thus has radius of convergence $\sqrt{2}$.
Note
The best part of the above theorem is that it tells us radii of convergence without having to work out explicit formulas for Taylor series and then run Ratio Tests, computations that could be exceedingly tedious.
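In fact, the pole-distance recipe is easy to automate if you ever want a sanity check. The following is a minimal sketch using SymPy; the helper name `series_radius` is our own, and the denominator shown is the one from the example above:

```python
from sympy import symbols, roots, Abs

x = symbols('x')

def series_radius(Q, center=0):
    """Radius of convergence of the Taylor series of P/Q at `center`
    (P, Q polynomials with no common zeroes): the minimum distance
    from the center to the complex zeroes of the denominator Q."""
    poles = roots(Q, x)  # dict mapping each root to its multiplicity
    return min((Abs(r - center) for r in poles), key=float)

# Denominator from the example above: poles at 2 and 1 +/- i
print(series_radius((x - 2) * (x**2 - 2*x + 2)))  # prints sqrt(2)
```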
Ordinary points of differential equations
For reasons that will become clear, we will restrict our attention to second-order, linear, homogeneous differential equations. Every such differential equation can be written in the form
$$y'' + p(x) y' + q(x) y = 0.$$
We call this the standard form of the differential equation.
Note
You don't have to put your differential equation into standard form when solving for a power series solution. This form is only needed when analyzing the differential equation, to determine whether looking for a power series solution is a good idea.
We can now answer the following:
Question
For which points $x_0$ does this differential equation possess a solution that is analytic at $x_0$, i.e., that can be represented by a power series centered at $x_0$?
The following theorem gives sufficient (but not necessary) conditions to guarantee success:
Radius of convergence of power series solutions
Suppose $p$ and $q$ are analytic at $x_0$. Then the general solution to the differential equation
$$y'' + p(x) y' + q(x) y = 0$$
is analytic at $x_0$. Moreover, the radius of convergence of the general power series solution is at least as large[1] as the minimum of the radii of convergence for the power series representations of $p$ and $q$.
In other words, as long as the coefficient functions $p$ and $q$ are analytic at our chosen center $x_0$, the general solution is analytic there as well. And that means we can find it using our power series method. Let's introduce a new term to describe this situation.
Definitions of ordinary and singular points
Consider the differential equation
$$y'' + p(x) y' + q(x) y = 0.$$
We say $x_0$ is an ordinary point of the differential equation if both $p$ and $q$ are analytic at $x_0$. We say $x_0$ is a singular point of the differential equation if either $p$ or $q$ (or both!) is not analytic at $x_0$.
Warning
Again note that a differential equation cannot be said to be analytic. The word "analytic" only applies to a function.
Example
Consider the differential equation we first looked at when trying for a power series solution:
$$y'' + y = 0.$$
This differential equation is already in standard form, so the coefficient functions are the constant functions $p(x) = 0$ and $q(x) = 1$. Since constant functions are analytic everywhere, every point $x_0$ is an ordinary point of this differential equation. By our theorem above, it follows that the general solution is analytic everywhere as well, so we could find it using a power series (centered wherever we liked). Moreover, the radii of convergence for the power series representations of $p$ and $q$ are both infinite, so the general solution to our differential equation is guaranteed to also have infinite radius of convergence.
Note
Polynomials can be viewed as finite power series. Since polynomials are defined everywhere, they have infinite radius of convergence when viewed as power series.
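Since the recursion for this equation is simple, we can generate the power series coefficients by machine. Here is a minimal sketch in Python, assuming the reconstructed equation $y'' + y = 0$ above, for which substituting $y = \sum a_n x^n$ gives the recursion $(n+1)(n+2)\,a_{n+2} = -a_n$ with $a_0$ and $a_1$ free:

```python
from fractions import Fraction

def coefficients(a0, a1, N):
    """Coefficients a_0, ..., a_N of a power series solution of y'' + y = 0,
    generated by the recursion a_{n+2} = -a_n / ((n+1)(n+2))."""
    a = [Fraction(a0), Fraction(a1)]
    for n in range(N - 1):
        a.append(-a[n] / ((n + 1) * (n + 2)))
    return a

# The two basic solutions: the Maclaurin coefficients of cos(x) and sin(x).
print([str(c) for c in coefficients(1, 0, 8)])  # ['1', '0', '-1/2', '0', '1/24', ...]
print([str(c) for c in coefficients(0, 1, 8)])  # ['0', '1', '0', '-1/6', '0', '1/120', ...]
```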
Example
Our second example earlier was the differential equation below, which is also already in standard form:
$$y'' - x y = 0.$$
Here we see $p(x) = 0$ and $q(x) = -x$. Both coefficient functions are polynomials and hence are analytic everywhere, with infinite radii of convergence. So once again every point $x_0$ is an ordinary point of the differential equation, and the general solution can be represented by a power series (with infinite radius of convergence) centered at whichever point we desire.
Example
Consider the differential equation
$$(1 - x^2) y'' - 2x y' + 2y = 0.$$
In standard form, this differential equation is
$$y'' - \frac{2x}{1 - x^2} y' + \frac{2}{1 - x^2} y = 0,$$
so $p(x) = -\frac{2x}{1 - x^2}$ and $q(x) = \frac{2}{1 - x^2}$. These coefficient functions are both analytic everywhere except at $x = \pm 1$. In our new terminology, we can say that every $x_0$ other than $\pm 1$ is an ordinary point of the differential equation, while $1$ and $-1$ are singular points. For a specific example, the point $x_0 = 0$ is an ordinary point of the differential equation, and the power series representations for $p$ and $q$ centered at $0$ have radii of convergence $1$. By our above theorem, it follows that the general solution to this differential equation can be represented by a power series centered at $0$ with radius of convergence at least $1$.
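To see the claimed radius directly for $q$, recall the geometric series: for $x$ in $(-1, 1)$,
$$q(x) = \frac{2}{1 - x^2} = 2 \sum_{n=0}^{\infty} x^{2n},$$
a power series centered at $0$ whose radius of convergence is exactly $1$, in agreement with the pole-distance theorem: the nearest poles, $\pm 1$, are at distance $1$ from $0$.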
There are a few items worth stressing here. First, we should be very clear about what our theorem does and does not say. It does say that if $x_0$ is an ordinary point of the differential equation, then it is guaranteed that the general solution to the differential equation can be represented by a power series centered at $x_0$. It does not say that if $x_0$ is a singular point of the differential equation, then there does not exist a solution to the differential equation that can be represented by a power series centered at $x_0$. Such a solution could still exist; it's just no longer guaranteed. We will see what can go wrong, and how things could still go right.
What can possibly go wrong?
Consider the differential equation
$$x^2 y'' + (x - 1) y = 0.$$
In standard form, this differential equation is
$$y'' + \frac{x - 1}{x^2} y = 0,$$
so the coefficient functions are $p(x) = 0$ and $q(x) = \frac{x - 1}{x^2}$. The first is analytic everywhere, while the second is analytic everywhere except at $x = 0$. So $0$ is a singular point of this differential equation. This means we are not guaranteed that we can find a power series solution centered at $0$. Let's try to find one anyway.
We will look for all solutions of the form $y = \sum_{n=0}^{\infty} a_n x^n$. If we substitute this into the left side of the current (standard) form of our differential equation, we will find
$$\sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} + \frac{x - 1}{x^2} \sum_{n=0}^{\infty} a_n x^n.$$
Let's pause for a moment and imagine how this is about to play out. We need to perform some basic arithmetic, reindex some series, and then combine like powers of $x$ to obtain a single power series. A power series is a linear combination of nonnegative integer powers of $x$, which means everything in our expression must be expressed as combinations of nonnegative integer powers of $x$. Expressions like $\frac{x - 1}{x^2}$ are not allowed, so they must be dealt with somehow. We essentially have three options:
Option 1
Algebraically manipulate the differential equation before substituting in our power series. In our current case, we could go back to the original version of the differential equation.
Option 2
Algebraically manipulate the offending expressions so as to rewrite them as linear combinations of powers of $x$. In our current example, we could write $\frac{x - 1}{x^2} = x^{-1} - x^{-2}$. In this case we've introduced negative powers of $x$, but it turns out that this is okay and everything will work out fine.
Option 3
Replace every offending expression with its corresponding power series. For example, if there had been an $e^x$ in our differential equation, we could replace it with its Maclaurin series. This is usually the most computationally painful option.
For this example let's go with the first option, and plug our generic power series into the original differential equation. We find
$$x^2 \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} + (x - 1) \sum_{n=0}^{\infty} a_n x^n = -a_0 + (a_0 - a_1) x + \sum_{n=2}^{\infty} \left[ (n^2 - n - 1) a_n + a_{n-1} \right] x^n.$$
So, $y = \sum_{n=0}^{\infty} a_n x^n$ is a solution of our differential equation exactly when
$$-a_0 = 0, \qquad a_0 - a_1 = 0, \qquad (n^2 - n - 1) a_n + a_{n-1} = 0 \text{ for all } n \geq 2.$$
The first two equations give $a_0 = 0$ and $a_1 = 0$. The last equation can be solved for $a_n$ to give
$$a_n = \frac{a_{n-1}}{1 + n - n^2}$$
for every $n \geq 2$. In particular, each $a_n$ is a multiple of the coefficient before it. Since $a_0 = 0$ and $a_1 = 0$, it now follows that every $a_n = 0$. Thus, the only power series solution centered at 0 is
$$y = \sum_{n=0}^{\infty} 0 \cdot x^n,$$
i.e., the trivial solution $y = 0$.
Warning
We will see in the next chapter that it can be critically important when solving a recursion equation like the one above to check whether we have any danger of dividing by zero. For this equation, that amounts to asking whether there are any values of $n$ for which $1 + n - n^2 = 0$. One can quickly find that the roots of this equation are $n = \frac{1 + \sqrt{5}}{2}$ and $n = \frac{1 - \sqrt{5}}{2}$. Since we are only considering integers $n \geq 2$, there is no danger here for us.
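If you'd rather not run the quadratic formula by hand, a one-line SymPy check (our own addition, using the recursion's denominator as written above) confirms the roots:

```python
from sympy import symbols, solve

n = symbols('n')
# Roots of the recursion's denominator 1 + n - n**2:
print(solve(1 + n - n**2, n))  # [1/2 - sqrt(5)/2, 1/2 + sqrt(5)/2]
```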
For a linear, homogeneous differential equation, this is the only thing that can "go wrong" when looking for a power series solution centered at 0. The trivial solution is always a solution, and in this case it was the only solution that could be represented by a power series centered at 0. Nothing especially catastrophic has happened, other than us wasting our time and coming up empty-handed. If you're a pessimist, this is what you should expect to happen when your desired center is a singular point of the differential equation. In the next example, however, we will see that there is always a chance that things might still work out.
So you're saying there's a chance
Let's take a look at the differential equation
$$x^2 y'' + x y' + (x - 1) y = 0.$$
In standard form this differential equation is
$$y'' + \frac{1}{x} y' + \frac{x - 1}{x^2} y = 0,$$
so we can see that $0$ is a singular point of this differential equation. Remember, this only means we are not guaranteed we can find a power series solution centered at $0$, but there might be one. So let's look and see. As usual, we'll substitute our general power series centered at 0 into the left side of the (original) differential equation and then simplify:
$$x^2 \sum_{n=2}^{\infty} n(n-1) a_n x^{n-2} + x \sum_{n=1}^{\infty} n a_n x^{n-1} + (x - 1) \sum_{n=0}^{\infty} a_n x^n = -a_0 + a_0 x + \sum_{n=2}^{\infty} \left[ (n^2 - 1) a_n + a_{n-1} \right] x^n.$$
So, our power series is a solution exactly when the following equations are satisfied:
$$-a_0 = 0, \qquad a_0 = 0, \qquad (n^2 - 1) a_n + a_{n-1} = 0 \text{ for all } n \geq 2.$$
The first two equations both imply $a_0 = 0$, and they place no restriction at all on $a_1$. We then find
$$a_n = -\frac{a_{n-1}}{n^2 - 1} \qquad \text{for every } n \geq 2,$$
so that $a_2 = -\frac{a_1}{3}$, $a_3 = \frac{a_1}{24}$, $a_4 = -\frac{a_1}{360}$, and so on, with every coefficient a multiple of $a_1$.
Thus, all power series solutions centered at 0 are of the form
$$y = a_1 \left( x - \frac{x^2}{3} + \frac{x^3}{24} - \frac{x^4}{360} + \cdots \right).$$
While we didn't find two linearly independent solutions to our differential equation, we did at least find one, which is certainly better than nothing! In the next note we will see how to find another solution, which is something a little more general than a power series.
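As a last sanity check, we can generate the coefficients from the recursion and confirm numerically that the truncated series nearly satisfies the equation. This is a sketch in Python, assuming the reconstructed equation $x^2 y'' + x y' + (x - 1) y = 0$ above and the choice $a_1 = 1$:

```python
from fractions import Fraction

# Build coefficients from the recursion a_n = -a_{n-1} / (n**2 - 1), n >= 2,
# with a_0 = 0 forced and a_1 = 1 chosen freely.
N = 12
a = [Fraction(0), Fraction(1)]
for n in range(2, N + 1):
    a.append(-a[n - 1] / (n**2 - 1))

def partial_sum(x, order, deriv=0):
    """Evaluate the `deriv`-th derivative of the truncated series at x."""
    total = 0.0
    for n in range(deriv, order + 1):
        coeff = float(a[n])
        for k in range(deriv):
            coeff *= (n - k)
        total += coeff * x ** (n - deriv)
    return total

# The truncated series should nearly satisfy x^2 y'' + x y' + (x - 1) y = 0.
x0 = 0.5
residual = (x0**2 * partial_sum(x0, N, 2)
            + x0 * partial_sum(x0, N, 1)
            + (x0 - 1) * partial_sum(x0, N, 0))
print(residual)  # tiny (around 1e-16 or smaller), limited only by truncation
```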