Let's begin with a test example, for which our proposed method is complete overkill:

$$y'' + y = 0.$$
Using the methods of Linear Analysis I (or simply by guessing) we can show that the general solution to this differential equation is

$$y = c_1\cos(x) + c_2\sin(x).$$
Let's see how our proposed power series method plays out. For this first example let's avoid using summation notation and write everything in expanded form. The general power series centered at 0 is of the form

$$y = a_0 + a_1x + a_2x^2 + a_3x^3 + a_4x^4 + a_5x^5 + \cdots.$$
Computing derivatives term by term, we then have

$$y' = a_1 + 2a_2x + 3a_3x^2 + 4a_4x^3 + 5a_5x^4 + \cdots$$
and

$$y'' = 2a_2 + 6a_3x + 12a_4x^2 + 20a_5x^3 + \cdots.$$
Note
You can see one disadvantage of not using summation notation: each time we take a derivative we have one fewer written term. In this current example, by the time we hit the sixth derivative we would have $y^{(6)} = \cdots$, which is not very much information at all! We can always go back and add more terms, but using summation notation at the outset is probably the ideal solution.
Plugging into the left side of our differential equation, we then have

$$y'' + y = (2a_2 + a_0) + (6a_3 + a_1)x + (12a_4 + a_2)x^2 + (20a_5 + a_3)x^3 + \cdots.$$
Warning
Note that we can't write down the terms for $x^4$ and beyond, since we haven't explicitly listed those terms in $y''$.
Our power series is a solution to the differential equation exactly when $y'' + y = 0$, and a power series equals zero (as a function) exactly when the coefficients of each power of $x$ vanish. Based on our work above, this means $y$ is a solution exactly when the following equations are satisfied:

$$2a_2 + a_0 = 0, \qquad 6a_3 + a_1 = 0, \qquad 12a_4 + a_2 = 0, \qquad 20a_5 + a_3 = 0, \qquad \ldots$$
So, our infinite collection of coefficients must satisfy an infinite collection of relations in order for the corresponding power series to be a solution to the differential equation. What is not entirely obvious from the limited list of equations above is that what we are dealing with is a recursion relation. Although there are general methods for solving such relations, those methods are beyond the scope of our current notes. Instead, we will resort to a crude but generally effective method of "bootstrapping" our way to a solution, in which we use each equation to write the unknown coefficients in terms of the coefficients with the lowest possible subscripts. In our current situation, we find that

$$a_2 = -\frac{a_0}{2}, \qquad a_3 = -\frac{a_1}{6}, \qquad a_4 = -\frac{a_2}{12} = \frac{a_0}{24}, \qquad a_5 = -\frac{a_3}{20} = \frac{a_1}{120}.$$
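If you'd like to let a computer handle this bookkeeping, here is a minimal sketch (a Python/sympy addition to these notes, not part of the original derivation) that solves the four relations above for the higher-subscript coefficients:

```python
import sympy as sp

# Free symbols for the first six coefficients a_0, ..., a_5.
a0, a1, a2, a3, a4, a5 = sp.symbols('a0:6')

# The four relations read off from y'' + y above (each expression = 0).
relations = [2*a2 + a0, 6*a3 + a1, 12*a4 + a2, 20*a5 + a3]

# Solve for the coefficients with the highest subscripts.
print(sp.solve(relations, [a2, a3, a4, a5]))
# {a2: -a0/2, a3: -a1/6, a4: a0/24, a5: a1/120}
```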
From the limited information we have obtained, it appears that every coefficient can be expressed in terms of $a_0$ and $a_1$, and that the coefficients $a_0$ and $a_1$ are "free." Assuming the trend continues---and we'll revisit this example shortly using summation notation---we have found that the power series solutions are those of the form

$$y = a_0 + a_1x - \frac{a_0}{2}x^2 - \frac{a_1}{6}x^3 + \frac{a_0}{24}x^4 + \frac{a_1}{120}x^5 + \cdots.$$
Continuing with the assumption that all $a_k$ can be written in terms of $a_0$ and $a_1$, we can express this more cleanly as

$$y = a_0\left(1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots\right) + a_1\left(x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\right).$$
This presentation exhibits $y$ as the linear combination of two specific power series solutions, namely

$$y_1 = 1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots \qquad \text{and} \qquad y_2 = x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots.$$
Now recall that the Maclaurin series for sine and cosine are

$$\sin(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \cdots = \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k+1)!}x^{2k+1}$$

and

$$\cos(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \cdots = \sum_{k=0}^{\infty}\frac{(-1)^k}{(2k)!}x^{2k}.$$
It looks like our function $y_1$ is the Maclaurin series for $\cos(x)$ and our solution $y_2$ is the Maclaurin series for $\sin(x)$, but our work is not sufficient to justify that claim. We haven't explicitly determined the general pattern for the coefficients in each of our solutions, so we don't know for sure that they forever match the terms in the two Maclaurin series.
This first example illustrates both the strengths and the weaknesses of writing power series in expanded form. Its greatest strength is in how it makes combining power series completely straightforward: we simply combine terms with like powers of $x$. As we will see shortly, combining power series requires a few additional steps when working with summation notation.
On the other hand, using expanded notation has several drawbacks. The most serious is the lack of explicit formulas in our process. We did not find a general recursion formula, just the first few instances; we did not find a general formula for the $a_k$, just the first few values. Our final answer contained only a few explicit terms, a fact which could cause us serious headaches in the real world if we needed to use our solution to model the behavior of a physical system, where accuracy would likely be a concern. Using summation notation can help us mitigate some of these issues.
Using summation notation
Let's revisit the previous example, only now using summation notation. If at any point you lose track of what we are doing, you can always expand the sums back out and compare with our work above. The general power series centered at 0 is of the form

$$y = \sum_{n=0}^{\infty} a_nx^n.$$
Taking derivatives term by term, we then have

$$y' = \sum_{n=1}^{\infty} na_nx^{n-1}$$
and

$$y'' = \sum_{n=2}^{\infty} n(n-1)a_nx^{n-2}.$$
Question
Why does the summation for $y'$ begin at $n = 1$? The term in $y$ when $n = 0$ was the constant $a_0$, the derivative of which is 0, so that term has "disappeared" from the sum for $y'$. You may see some authors insist that the sum for $y'$ starts at $n = 0$. Why is that also correct? Notice that if you substitute $n = 0$ into our formula for the sum for $y'$, the result is $0 \cdot a_0x^{-1}$, which just so happens to equal 0. So the result is technically correct, although it amounts to declaring that the derivative of 1 is $0 \cdot x^{-1}$. Is that something you really want to do? It's also a risky decision, since there are examples of manipulating series in which tricks like that do not work.
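If you want to double-check the term-by-term derivative by machine, here is a small sketch (a sympy addition to these notes; the truncation level N is an arbitrary choice for the check):

```python
import sympy as sp

x = sp.symbols('x')
a = sp.IndexedBase('a')
N = 8  # arbitrary truncation level for the check

# A truncated power series and the claimed term-by-term derivative.
y  = sum(a[n] * x**n for n in range(N))
yp = sum(n * a[n] * x**(n - 1) for n in range(1, N))

# The difference expands to 0, so starting the sum at n = 1 loses nothing.
print(sp.expand(sp.diff(y, x) - yp))  # 0
```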
The left side of our differential equation is then

$$y'' + y = \sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} + \sum_{n=0}^{\infty} a_nx^n.$$
We want to combine those two series into a single power series. There are two issues that prevent us from immediately doing so. First, the two sums do not begin at the same index value: the first begins at $n = 2$ and the second at $n = 0$. Second, the two series are not "aligned," in the sense that the index $n$ in each sum does not correlate directly with the exponent of $x$. Put another way, even if the two series did begin at the same value of $n$, if we merged them into one sum we would have

$$\sum_{n} \left(n(n-1)a_nx^{n-2} + a_nx^n\right),$$
which is not the standard representation of a power series. That's because we need to gather up the like powers of $x$. This step was simple when the series were written in expanded form, since we could visually match the terms with like powers. When using summation notation, however, we need to do one small extra step, called reindexing.
The process of reindexing a summation is analogous to a basic change of variables in an integral. We are going to shift the indices in each of our summations so that the new index exactly matches the power of $x$ inside the sum. In the first summation, we see that our terms are multiples of $x^{n-2}$, so we will switch to a new index $k$ defined by $k = n - 2$. We now replace every appearance of $n$ in that summation with $k + 2$:

$$\sum_{n=2}^{\infty} n(n-1)a_nx^{n-2} = \sum_{k=0}^{\infty} (k+2)(k+1)a_{k+2}x^{k}.$$
It is important to note that we haven't done anything to the series. Both summations above describe the series

$$2a_2 + 6a_3x + 12a_4x^2 + 20a_5x^3 + \cdots.$$
Notice how the index $n$ matches with the subscript on $a_n$, while the index $k$ matches with the power of $x$. If we wanted to combine terms with the same coefficient $a_n$, we would use the index $n$. Since we want to combine terms with the same power of $x$, we'll use the index $k$.
We do the same thing to the second sum above (the one representing $y$), although in that case we make the trivial reindexing $k = n$. Our work now looks like

$$y'' + y = \sum_{k=0}^{\infty} (k+2)(k+1)a_{k+2}x^k + \sum_{k=0}^{\infty} a_kx^k = \sum_{k=0}^{\infty} \left[(k+2)(k+1)a_{k+2} + a_k\right]x^k.$$
A happy coincidence
It is convenient that both sums now begin at $k = 0$, but that is only a happy coincidence in this example.
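As a sanity check on the reindexing (again a sketch added to these notes, with an arbitrary truncation level), we can confirm that both indexings of the first sum produce the same polynomial:

```python
import sympy as sp

x = sp.symbols('x')
a = sp.IndexedBase('a')
N = 10  # arbitrary truncation level

# Original indexing: sum over n >= 2 of n(n-1) a_n x^(n-2).
original  = sum(n*(n - 1) * a[n] * x**(n - 2) for n in range(2, N))
# Reindexed: sum over k >= 0 of (k+2)(k+1) a_{k+2} x^k, with k = n - 2.
reindexed = sum((k + 2)*(k + 1) * a[k + 2] * x**k for k in range(N - 2))

print(sp.expand(original - reindexed))  # 0: the series is unchanged
```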
We now have a general formula for $y'' + y$, as opposed to our previous method, which only provided the terms up through $x^3$. To confirm we have reproduced our previous calculation, we can expand out the last line above:

$$y'' + y = (2a_2 + a_0) + (6a_3 + a_1)x + (12a_4 + a_2)x^2 + (20a_5 + a_3)x^3 + \cdots.$$
It has taken a few more steps, but what we have gained is a precise formula for every term in the power series for $y'' + y$. Returning to the task at hand, $y$ is a solution to our differential equation exactly when $y'' + y = 0$. Based on our power series representation for $y'' + y$, this happens exactly when

$$(k+2)(k+1)a_{k+2} + a_k = 0$$
for every integer $k \geq 0$. This is what we will call our master relation. It dictates the relationships the coefficients $a_k$ must satisfy in order for our power series to be a solution to the differential equation.
At this point we sync up with our previous method, using the bootstrapping method to write each $a_k$ in terms of $a_0$ and $a_1$. However, we can now crank out as many $a_k$ as we desire. Just as before, we find

$$a_2 = -\frac{a_0}{2}, \qquad a_3 = -\frac{a_1}{6}, \qquad a_4 = \frac{a_0}{24}, \qquad a_5 = \frac{a_1}{120}.$$
Thus, our general power series solution is of the form

$$y = a_0\left(1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots\right) + a_1\left(x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\right).$$
If we needed another term, we could proceed to $a_6$, and so on.
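In fact, the master relation is easy to turn into a few lines of code. Here is a hedged sketch (not part of the original notes) that cranks out exact coefficients from any starting pair $a_0$, $a_1$:

```python
from fractions import Fraction

def coefficients(a0, a1, count):
    """Return [a_0, ..., a_{count-1}] using the master relation
    (k+2)(k+1) a_{k+2} + a_k = 0, i.e. a_{k+2} = -a_k / ((k+2)(k+1))."""
    a = [Fraction(a0), Fraction(a1)]
    for k in range(count - 2):
        a.append(-a[k] / ((k + 2) * (k + 1)))
    return a

# The a_0-series (a_0 = 1, a_1 = 0): 1, 0, -1/2, 0, 1/24, 0, -1/720, 0
print(coefficients(1, 0, 8))
# The a_1-series (a_0 = 0, a_1 = 1): 0, 1, 0, -1/6, 0, 1/120, 0, -1/5040
print(coefficients(0, 1, 8))
```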
Notice that our recursion relation tells us that each $a_{k+2}$ is a multiple of $a_k$. This proves[1] that every $a_k$ can be written as a multiple of either $a_0$ or $a_1$, and so we are justified in writing

$$y = a_0\left(1 - \frac{x^2}{2} + \frac{x^4}{24} - \cdots\right) + a_1\left(x - \frac{x^3}{6} + \frac{x^5}{120} - \cdots\right).$$
Before moving on to a second example, we should point out two features of the current example that are not representative of what we will see in general.
First, in this example each $a_k$ ended up equaling a multiple of either $a_0$ or $a_1$. This is not something you should generally expect. It is akin to computing a Taylor series and discovering that all even coefficients vanish. We will soon see that for a second-order, linear, homogeneous differential equation, we should instead expect each $a_k$ to be a linear combination of both $a_0$ and $a_1$.
Second, in this example (and many others) you might be able to deduce a pattern for the $a_k$. In our current example, for instance, it's not hard to show that

$$a_{2k} = \frac{(-1)^k}{(2k)!}a_0 \qquad \text{and} \qquad a_{2k+1} = \frac{(-1)^k}{(2k+1)!}a_1.$$
In terms of our particular solutions $y_1$ and $y_2$, this means we have explicit formulas

$$y_1 = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k)!}x^{2k}$$
and

$$y_2 = \sum_{k=0}^{\infty} \frac{(-1)^k}{(2k+1)!}x^{2k+1}.$$
Against all odds these are recognizable power series: they are the Maclaurin series for cosine and sine, respectively. We have thus actually shown that the general solution to the differential equation is

$$y = a_0\cos(x) + a_1\sin(x),$$
which at last aligns with our very first observations about this differential equation. This is nice but incredibly rare. You should consider the decision to work with power series akin to checking into Hotel California: you can check in, but you can never leave.
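For one final numerical reassurance (one more sketch that is not part of the original notes), we can compare truncated versions of $y_1$ and $y_2$ against the built-in cosine and sine:

```python
import math
from fractions import Fraction

def coefficients(a0, a1, count):
    # Same master relation as before: a_{k+2} = -a_k / ((k+2)(k+1)).
    a = [Fraction(a0), Fraction(a1)]
    for k in range(count - 2):
        a.append(-a[k] / ((k + 2) * (k + 1)))
    return a

def partial_sum(coeffs, x):
    # Evaluate the truncated power series at x.
    return sum(float(c) * x**k for k, c in enumerate(coeffs))

y1 = coefficients(1, 0, 20)  # should approximate cos
y2 = coefficients(0, 1, 20)  # should approximate sin
for x in (0.5, 1.0, 2.0):
    print(x, abs(partial_sum(y1, x) - math.cos(x)),
             abs(partial_sum(y2, x) - math.sin(x)))
# Each difference is tiny, on the order of float round-off for these x.
```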
For example, $a_6$ is a multiple of $a_4$, which in turn is a multiple of $a_2$, which is itself a multiple of $a_0$. So $a_6$ is ultimately a multiple of $a_0$. In general, if $k$ is even then $a_k$ is a multiple of $a_0$; if $k$ is odd, then $a_k$ is a multiple of $a_1$. ↩︎