Power Series Solutions IV - Analytic functions and ordinary points

Our goal in this note is to give some type of answer to the question "When can we find a power series solution to a differential equation?" We begin by recalling a few basic facts about power series. We then analyze the case of second-order, linear, homogeneous differential equations. As we will eventually see, that situation is already complicated enough.

Analytic functions


There are two key properties of each power series: its center and its radius of convergence. A power series centered at $x=x_0$ is a series of the form

$$P(x)=\sum_{n=0}^{\infty}a_n(x-x_0)^n=a_0+a_1(x-x_0)+a_2(x-x_0)^2+\cdots$$

A big part of Calculus III is building up the machinery to make sense of power series as functions. One of the central results is that a power series defines a function whose domain is always an interval centered at $x_0$, hence the name center. The radius $R$ of that interval of convergence is called the radius of convergence of the power series. Whenever $R>0$, the series $P(x)$ converges absolutely for all values of $x$ in the open interval $(x_0-R,x_0+R)$. If $R$ is finite, the series might or might not converge at the endpoints; that depends on the particular series.

intervalOfConvergence.png|300

Part of the usefulness of power series is that "most" functions can be represented by power series, at least locally. More precisely, we say a function $f$ is analytic at $x_0$ if it can be represented by a power series near $x_0$, i.e., if there is a power series $P(x)$ centered at $x_0$ such that $f(x)=P(x)$ for all $x$ in some open interval containing $x_0$.

Warning

The word "analytic" can only be used to describe a function. It cannot be used to describe a differential equation.

In fact, we can say even more.

Taylor's Formula

If $f$ is analytic at $x_0$, then $f$ is infinitely differentiable at $x_0$ and there is an open interval $I$ containing $x_0$ such that for all $x$ in $I$ one has

$$f(x)=\sum_{n=0}^{\infty}\frac{f^{(n)}(x_0)}{n!}(x-x_0)^n.$$

In other words, if $f$ can be represented by a power series centered at $x_0$, then there is only one possible such power series, and its coefficients are given by the above formula. The most famous examples of these formulas are probably the following three Maclaurin series:

$$e^x=\sum_{n=0}^{\infty}\frac{1}{n!}x^n=1+x+\frac{1}{2!}x^2+\frac{1}{3!}x^3+\cdots$$

$$\sin(x)=\sum_{\substack{n=0\\ n\text{ odd}}}^{\infty}\frac{(-1)^{(n-1)/2}}{n!}x^n=x-\frac{1}{3!}x^3+\frac{1}{5!}x^5-\cdots$$

$$\cos(x)=\sum_{\substack{n=0\\ n\text{ even}}}^{\infty}\frac{(-1)^{n/2}}{n!}x^n=1-\frac{1}{2!}x^2+\frac{1}{4!}x^4-\cdots$$
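These three expansions are easy to sanity-check numerically: partial sums of each series should converge to the corresponding built-in function. Here is a quick sketch in Python (the function names are ours, purely for illustration):

```python
import math

def maclaurin_exp(x, terms=30):
    # Partial sum of e^x = sum over n >= 0 of x^n / n!
    return sum(x**n / math.factorial(n) for n in range(terms))

def maclaurin_sin(x, terms=30):
    # Odd n only, with sign (-1)^((n-1)/2)
    return sum((-1)**((n - 1)//2) * x**n / math.factorial(n)
               for n in range(1, terms, 2))

def maclaurin_cos(x, terms=30):
    # Even n only, with sign (-1)^(n/2)
    return sum((-1)**(n//2) * x**n / math.factorial(n)
               for n in range(0, terms, 2))

# Thirty terms already match the built-ins to machine precision at x = 2
assert abs(maclaurin_exp(2.0) - math.exp(2.0)) < 1e-12
assert abs(maclaurin_sin(2.0) - math.sin(2.0)) < 1e-12
assert abs(maclaurin_cos(2.0) - math.cos(2.0)) < 1e-12
```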

Each of these three equalities holds for all real x, a fact that is not part of Taylor's formula above but is wonderful nonetheless. Another famous power series representation comes from the geometric series formula

$$\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n=1+x+x^2+\cdots$$

In contrast to the first three Maclaurin series, the above equality only holds for x in the interval (−1,1). The function on the left is defined for all x≠1, but it only agrees with the power series on the right on the interval (−1,1), which happens to be the interval of convergence of that power series. What could we say about the function on the left at a point like x=2? Taylor's Formula gives us the recipe for the only possible power series centered at x=2 that could represent it. It turns out that we have

$$\frac{1}{1-x}=\sum_{n=0}^{\infty}(-1)^{n+1}(x-2)^n=-1+(x-2)-(x-2)^2+\cdots,$$

with equality holding for all $x$ in the interval $(1,3)$.
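We can check this recentered expansion numerically at a sample point inside $(1,3)$; the choice $x=2.5$ below is ours, picked arbitrarily:

```python
# Partial sum of the series for 1/(1-x) centered at x0 = 2,
# evaluated at a point inside the interval of convergence (1, 3)
x = 2.5
partial = sum((-1)**(n + 1) * (x - 2)**n for n in range(200))
exact = 1 / (1 - x)  # = -2/3
assert abs(partial - exact) < 1e-12
```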

At this point you might be wondering how to tell when a given function $f$ is analytic at a point $x=x_0$. Technically, the answer is that the function $f$ must satisfy two properties: first, $f$ must be infinitely differentiable at $x_0$; second, $f$ must agree with its Taylor series centered at $x_0$ on some open interval around $x_0$. This is not always easy to verify, although you may have learned how to do so in Calculus III using Taylor's Remainder Estimate.

Fortunately there's good news. As far as we're concerned, "most" functions that we deal with are analytic at all points in their domain. Polynomials are analytic everywhere. So are the functions $e^x$, $\sin(x)$, and $\cos(x)$. Products of analytic functions are analytic, as are sums and differences. And ratios of analytic functions are analytic everywhere they're defined, i.e., everywhere except where the denominator vanishes. For our purposes we are going to be mostly concerned with rational functions, which are ratios of polynomials. For those functions, we conveniently have the following result.

Radius of convergence for rational functions

Suppose $f(x)=\frac{g(x)}{h(x)}$, where $g$ and $h$ are polynomials with no common zeroes. Let $x_0$ be any point where $h(x_0)\neq 0$. Then $f$ is analytic at $x_0$, and the power series representation for $f$ at $x_0$ has radius of convergence $R$, where $R$ is the minimum distance (in the complex plane) from $x_0$ to the zeroes of $h$.

The zeroes of the denominator h are called the poles of f. If we choose a center x0 for our power series representation of f, then the radius of convergence of that series is the distance from x0 to the nearest pole of f.

Example

Consider the function

$$f(x)=\frac{x-1}{(x^2+2x+2)(x-2)}.$$

Using the quadratic formula, the poles of f are at x=2 and x=−1±i. It follows that f is analytic at every point except those three. In particular, it is analytic at 0 and the radius of convergence for the corresponding Taylor series is the minimum distance from 0 to those three poles.

poles.png|300

Using the distance formula, the distances from $0$ to those three poles are $2$, $\sqrt{2}$, and $\sqrt{2}$, respectively, so the minimum distance from $0$ to any of the poles is $\sqrt{2}$. By our above theorem, the Maclaurin series for $f$ thus has radius of convergence $R=\sqrt{2}$.
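Since the computation amounts to taking complex absolute values, it is easy to replicate in a few lines of Python:

```python
# Poles of f: x = 2 (from x - 2) and x = -1 ± i (from x^2 + 2x + 2 = 0)
poles = [2 + 0j, -1 + 1j, -1 - 1j]
distances = [abs(p - 0) for p in poles]  # distances from the center x0 = 0
radius = min(distances)                  # = sqrt(2), the radius of convergence
```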

Note

The best part of the above theorem is that it tells us radii of convergence without having to work out explicit formulas for Taylor series and then run Ratio Tests, computations that could be exceedingly tedious.

Ordinary points of differential equations


For reasons that will become clear, we will restrict our attention to second-order, linear, homogeneous differential equations. Every such differential equation can be written in the form

$$y''+p(x)y'+q(x)y=0.$$

We call this the standard form of the differential equation.

Note

You don't have to put your differential equation into standard form when solving for a power series solution. This form is only needed when analyzing the differential equation, to determine whether looking for a power series solution is a good idea.

We can now answer the following:

Question

For which points x0 does this differential equation possess a solution that is analytic at x0, i.e., that can be represented by a power series centered at x0?

The following theorem gives sufficient (but not necessary) conditions to guarantee success:

Radius of convergence of power series solutions

Suppose $p$ and $q$ are analytic at $x_0$. Then the general solution to the differential equation

$$y''+p(x)y'+q(x)y=0$$

is analytic at x0. Moreover, the radius of convergence of the general power series solution is at least as large[1] as the minimum of the radii of convergence for the power series representations of p and q.

In other words, as long as the coefficient functions p and q are analytic at our chosen center x0, then so is the general solution. And that means we can find it using our power series method. Let's introduce a new term to describe this situation.

Definitions of ordinary and singular points

Consider the differential equation

$$y''+p(x)y'+q(x)y=0.$$

We say x0 is an ordinary point of the differential equation if both p and q are analytic at x0. We say x0 is a singular point of the differential equation if either p or q (or both!) is not analytic at x0.

Warning

Again note that a differential equation cannot be said to be analytic. The word "analytic" only applies to a function.

Example

Consider the differential equation we first looked at when trying for a power series solution:

$$y''+y=0.$$

This differential equation is already in standard form, so the coefficient functions are the constant functions $p(x)=0$ and $q(x)=1$. Since constant functions are analytic everywhere, every point $x_0$ is an ordinary point of this differential equation. By our theorem above, it follows that the general solution is analytic everywhere as well, so we could find it using a power series (centered wherever we like). Moreover, the radii of convergence for the power series representations of $p$ and $q$ are both infinite, so the general solution to our differential equation is guaranteed to also have infinite radius of convergence.
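Concretely, substituting $y=\sum a_nx^n$ into $y''+y=0$ forces the recursion $a_{n+2}=-a_n/\bigl((n+1)(n+2)\bigr)$, and iterating it reproduces the Maclaurin coefficients of cosine and sine. A short sketch (the function name is ours):

```python
from fractions import Fraction

def solution_coeffs(a0, a1, terms=12):
    # Coefficients of the power series solution of y'' + y = 0,
    # generated by the recursion a_{n+2} = -a_n / ((n+1)(n+2))
    a = [Fraction(a0), Fraction(a1)]
    for n in range(terms - 2):
        a.append(-a[n] / ((n + 1) * (n + 2)))
    return a

# a0 = 1, a1 = 0 gives the coefficients of cos(x): 1, 0, -1/2, 0, 1/24, ...
cos_coeffs = solution_coeffs(1, 0)
assert cos_coeffs[2] == Fraction(-1, 2)
assert cos_coeffs[4] == Fraction(1, 24)
```

Choosing $a_0=0$, $a_1=1$ instead produces the coefficients of $\sin(x)$.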

Note

Polynomials can be viewed as finite power series. Since polynomials are defined everywhere, they have infinite radius of convergence when viewed as power series.

Example

Our second example earlier was the differential equation below, which is also already in standard form:

$$y''+(2-4x^2)y'-8xy=0.$$

Here we see $p(x)=2-4x^2$ and $q(x)=-8x$. Both coefficient functions are polynomials and hence are analytic everywhere, with infinite radii of convergence. So once again every point $x_0$ is an ordinary point of the differential equation, and the general solution can be represented by a power series (with infinite radius of convergence) centered at whichever point $x_0$ we desire.

Example

Consider the differential equation

$$(1-x^2)y''-2xy'+6y=0.$$

In standard form, this differential equation is

$$y''-\frac{2x}{1-x^2}y'+\frac{6}{1-x^2}y=0,$$

so $p(x)=-\frac{2x}{1-x^2}$ and $q(x)=\frac{6}{1-x^2}$. These coefficient functions are both analytic everywhere except $x=\pm 1$. In our new terminology, we can say that every $x_0$ other than $\pm 1$ is an ordinary point of the differential equation, while $x_0=1$ and $x_0=-1$ are singular points. For a specific example, the point $x_0=0$ is an ordinary point of the differential equation, and the power series representations for $p$ and $q$ centered at $0$ both have radius of convergence $1$. By our above theorem, it follows that the general solution to this differential equation can be represented by a power series centered at $0$ with radius of convergence at least $1$.


There are a few items worth stressing here. First, we should be very clear about what our theorem does and does not say. It does say that if $x_0$ is an ordinary point of the differential equation, then the general solution is guaranteed to be representable by a power series centered at $x_0$. It does not say that if $x_0$ is a singular point of the differential equation, then there is no solution that can be represented by a power series centered at $x_0$. That could still happen; it's just no longer guaranteed. We will see what can go wrong, and how things could still go right.

What can possibly go wrong?


Consider the differential equation

$$4x^2y''-(3+4x)y=0.$$

In standard form, this differential equation is

$$y''-\frac{3+4x}{4x^2}y=0,$$

so the coefficient functions are $p(x)=0$ and $q(x)=-\frac{3+4x}{4x^2}$. The first is analytic everywhere, while the second is analytic everywhere except at $x_0=0$. So $x_0=0$ is a singular point of this differential equation. This means we are not guaranteed that we can find a power series solution centered at $0$. Let's try to find one anyway.

We will look for all solutions of the form $y=\sum_{n=0}^{\infty}a_nx^n$. If we substitute this into the left side of the current (standard) form of our differential equation, we will find

$$y''-\frac{3+4x}{4x^2}y=\sum_{n=2}^{\infty}n(n-1)a_nx^{n-2}-\frac{3+4x}{4x^2}\sum_{n=0}^{\infty}a_nx^n.$$

Let's pause for a moment and imagine how this is about to play out. We need to perform some basic arithmetic, reindex some series, and then combine like powers of $x$ to obtain a single power series. A power series is a linear combination of nonnegative integer powers of $x$, which means everything in our expression must be expressed as a combination of nonnegative integer powers of $x$. Expressions like $\frac{3+4x}{4x^2}$ are not allowed, so they must be dealt with somehow. We essentially have three options:

Option 1

Algebraically manipulate the differential equation before substituting in our power series. In our current case, we could go back to the original version of the differential equation.

Option 2

Algebraically manipulate the offending expressions so as to rewrite them as linear combinations of powers of $x$. In our current example, we could write $\frac{3+4x}{4x^2}=\frac{3}{4}x^{-2}+x^{-1}$. In this case we've introduced negative powers of $x$, but it turns out that this is okay and everything will work out fine.

Option 3

Replace every offending expression with its corresponding power series. For example, if there had been an ex in our differential equation, we could replace it with its Maclaurin series. This is usually the most computationally painful option.

For this example let's go with the first option, and plug our generic power series into the original differential equation. We find

$$\begin{aligned}
4x^2y''-(3+4x)y &= 4x^2\sum_{n=2}^{\infty}n(n-1)a_nx^{n-2}-(3+4x)\sum_{n=0}^{\infty}a_nx^n\\
&= \sum_{n=2}^{\infty}4n(n-1)a_nx^n-\sum_{n=0}^{\infty}3a_nx^n-\sum_{n=0}^{\infty}4a_nx^{n+1}\\
&= \sum_{m=2}^{\infty}4m(m-1)a_mx^m-\sum_{m=0}^{\infty}3a_mx^m-\sum_{m=1}^{\infty}4a_{m-1}x^m\\
&= \underbrace{-3a_0}_{m=0}+\underbrace{(-3a_1-4a_0)x}_{m=1}+\sum_{m=2}^{\infty}\bigl(4m(m-1)a_m-3a_m-4a_{m-1}\bigr)x^m
\end{aligned}$$

So, y is a solution of our differential equation exactly when

$$\begin{aligned}
m&=0: & -3a_0 &= 0\\
m&=1: & -3a_1-4a_0 &= 0\\
m&\geq 2: & 4m(m-1)a_m-3a_m-4a_{m-1} &= 0.
\end{aligned}$$

The first two equations give $a_0=0$ and $a_1=-\frac{4}{3}a_0=0$. The last equation can be solved for $a_m$ to give

$$a_m=\frac{4}{4m(m-1)-3}\,a_{m-1}$$

for every $m\geq 2$. In particular, each $a_m$ is a multiple of the coefficient before it. Since $a_0=0$ and $a_1=0$, it now follows that every $a_m=0$. Thus, the only power series solution centered at $0$ is

$$y(x)=0+0\cdot x+0\cdot x^2+\cdots,$$

i.e., the trivial solution y(x)=0.
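A few lines of Python confirm the collapse: iterating the recursion with $a_0=a_1=0$ never produces a nonzero coefficient. (This sketch uses exact rational arithmetic via `fractions.Fraction`.)

```python
from fractions import Fraction

def coeffs(terms=15):
    # a_0 = a_1 = 0 are forced by the m = 0 and m = 1 equations;
    # then a_m = 4 a_{m-1} / (4m(m-1) - 3) for m >= 2
    a = [Fraction(0), Fraction(0)]
    for m in range(2, terms):
        a.append(Fraction(4, 4 * m * (m - 1) - 3) * a[m - 1])
    return a

# Every coefficient is zero: only the trivial power series survives
assert all(c == 0 for c in coeffs())
```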

Warning

We will see in the next chapter that it can be critically important when solving a recursion like the one above to check whether we are in any danger of dividing by zero. For this equation, that amounts to asking whether there are any values of $m$ for which $4m(m-1)-3=0$. One quickly finds that the roots of this equation are $m=\frac{3}{2}$ and $m=-\frac{1}{2}$. Since we are only considering integers $m\geq 2$, there is no danger here for us.

For a linear, homogeneous differential equation, this is the only thing that can "go wrong" when looking for a power series solution centered at $0$. The trivial solution is always a solution, and in this case it was the only solution that could be represented by a power series centered at $0$. Nothing especially catastrophic has happened, other than us wasting our time and coming up empty-handed. If you're a pessimist, this is what you should expect to happen when your desired center is a singular point of the differential equation. In the next example, however, we will see that there is still a chance that things might work out.

So you're saying there's a chance


Let's take a look at the differential equation

$$x^2y''+xy'-(1+x)y=0.$$

In standard form this differential equation is

$$y''+\frac{1}{x}y'-\frac{1+x}{x^2}y=0,$$

so we can see that x0=0 is a singular point of this differential equation. Remember, this only means we are not guaranteed we can find a power series solution centered at 0, but there might be one. So let's look and see. As usual, we'll substitute our general power series centered at 0 into the left side of the differential equation and then simplify:

$$\begin{aligned}
x^2y''+xy'-(1+x)y &= x^2\sum_{n=2}^{\infty}n(n-1)a_nx^{n-2}+x\sum_{n=1}^{\infty}na_nx^{n-1}-(1+x)\sum_{n=0}^{\infty}a_nx^n\\
&= \sum_{n=2}^{\infty}n(n-1)a_nx^n+\sum_{n=1}^{\infty}na_nx^n-\sum_{n=0}^{\infty}a_nx^n-\sum_{n=0}^{\infty}a_nx^{n+1}\\
&= \sum_{m=2}^{\infty}m(m-1)a_mx^m+\sum_{m=1}^{\infty}ma_mx^m-\sum_{m=0}^{\infty}a_mx^m-\sum_{m=1}^{\infty}a_{m-1}x^m\\
&= \underbrace{-a_0}_{m=0}+\underbrace{(a_1-a_1-a_0)x}_{m=1}+\sum_{m=2}^{\infty}\bigl(m(m-1)a_m+ma_m-a_m-a_{m-1}\bigr)x^m\\
&= -a_0-a_0x+\sum_{m=2}^{\infty}\bigl((m^2-1)a_m-a_{m-1}\bigr)x^m.
\end{aligned}$$

So, our power series is a solution exactly when the following equations are satisfied:

$$\begin{aligned}
m&=0: & -a_0 &= 0\\
m&=1: & -a_0 &= 0\\
m&\geq 2: & (m^2-1)a_m-a_{m-1} &= 0.
\end{aligned}$$

The first two equations both imply a0=0. We then find

$$\begin{aligned}
m&=2: & a_2 &= \tfrac{1}{3}a_1\\
m&=3: & a_3 &= \tfrac{1}{8}a_2=\tfrac{1}{8\cdot 3}a_1\\
m&=4: & a_4 &= \tfrac{1}{15}a_3=\tfrac{1}{15\cdot 8\cdot 3}a_1\\
& & &\;\;\vdots
\end{aligned}$$

Thus, all power series solutions centered at 0 are of the form

$$y(x)=0+a_1x+\tfrac{1}{3}a_1x^2+\tfrac{1}{24}a_1x^3+\tfrac{1}{360}a_1x^4+\cdots=a_1\underbrace{\left(x+\tfrac{1}{3}x^2+\tfrac{1}{24}x^3+\tfrac{1}{360}x^4+\cdots\right)}_{y_1(x)}$$

While we didn't find two linearly independent solutions to our differential equation, we did at least find one, which is certainly better than nothing! In the next note we will see how to find another solution, which is something a little more general than a power series.
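As a numerical check on $y_1$, a short script can generate its coefficients from the recursion $a_m=a_{m-1}/(m^2-1)$ and confirm that the truncated series very nearly satisfies the differential equation. (The function names and the sample point $x=0.5$ are ours, for illustration only.)

```python
from fractions import Fraction

def y1_coeffs(terms=12):
    # a_0 = 0, and we choose a_1 = 1; then a_m = a_{m-1} / (m^2 - 1)
    a = [Fraction(0), Fraction(1)]
    for m in range(2, terms):
        a.append(a[m - 1] / (m * m - 1))
    return a

def residual(x, a):
    # Plug the truncated series into x^2 y'' + x y' - (1 + x) y
    y = sum(c * x**n for n, c in enumerate(a))
    yp = sum(n * c * x**(n - 1) for n, c in enumerate(a) if n >= 1)
    ypp = sum(n * (n - 1) * c * x**(n - 2) for n, c in enumerate(a) if n >= 2)
    return x * x * ypp + x * yp - (1 + x) * y

a = y1_coeffs()
# Coefficients 0, 1, 1/3, 1/24, 1/360, ... as computed above
assert a[2] == Fraction(1, 3) and a[4] == Fraction(1, 360)
# The residual is tiny: only the truncation keeps it from being exactly zero
assert abs(residual(0.5, a)) < 1e-9
```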

Suggested next notes


Frobenius Series Solutions I - Slightly generalizing power series


  1. We will see examples for which the power series solution has a larger radius of convergence than the series for either $p$ or $q$. ↩