Laplace transform I - Desperate times

We've defined the Fourier transform of a function f(t) by the integral formula

$$(\mathcal{F}f)(s)=\int_{-\infty}^{\infty} f(t)\,e^{-2\pi i t s}\,dt,$$

provided the integral converges. We then saw that this transform has loads of nice properties, with a nearly symmetric inverse transform and the potential to provide an incredible new way to solve differential equations. However, we also discovered one glaring issue:

Trouble in paradise

For most of our "familiar" functions, the above integral does not converge!

Our transform integral doesn't converge for polynomials, sine, cosine, basic exponential functions, etc. It doesn't even converge for the constant function f(t)=1!

You might argue that the "issue" is that the term $e^{-2\pi i t s}$ in the integrand does nothing to help convergence, since (for real values of $s$, at least) it's simply a complex number whirling around on the unit circle in the complex plane. Maybe we can tinker with things a bit, with the goal of improving convergence without losing the main properties of the Fourier transform.

Tinkering with our transform


Suppose we first considered tweaking the integrand by erasing that i, so that the integral became

$$\int_{-\infty}^{\infty} f(t)\,e^{-2\pi t s}\,dt.$$

Now the exponential function $e^{-2\pi t s}$ has the potential to "help" the integral converge. Unfortunately, that term actually makes things worse (i.e., it grows exponentially) as $t\to-\infty$ (at least when $s>0$; if we allow $s<0$, then we'd need to worry about $t\to\infty$). So let's hedge our bets and adjust the lower bound, as well:

$$\int_{0}^{\infty} f(t)\,e^{-2\pi t s}\,dt.$$

We could try to work with this, but let's make one last adjustment. That factor of $2\pi$ in the exponential function was originally there to help us create exponential functions of period 1 when working with Fourier series. Now that we've removed the $i$, there's no justifiable reason to keep it (although we certainly could, if we wanted). Let's drop it, leaving us with the integral

$$\int_{0}^{\infty} f(t)\,e^{-ts}\,dt.$$

And that's our new transform! Notice that it really only makes sense to apply it to functions defined on $[0,\infty)$, since the integral never sees the values of a function outside that interval:

Definition of the Laplace transform

For each function $f$ defined on $[0,\infty)$, the Laplace transform of $f$ is the function $(\mathcal{L}f)(s)$ defined by

$$(\mathcal{L}f)(s)=\int_{0}^{\infty} f(t)\,e^{-ts}\,dt,$$

if the integral converges.
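Since the transform is just an improper integral, we can approximate it numerically for any concrete choice of function and $s$. Here's a minimal sketch (not part of the development above, and the sample function $f(t)=e^{-t}$ is an arbitrary choice): truncate the integral at a large $T$ and apply the trapezoidal rule. The transform of $e^{-t}$ works out to $\frac{1}{s+1}$, which we can use as a check.

```python
import math

def laplace_numeric(f, s, T=60.0, n=200_000):
    """Approximate (Lf)(s) = integral from 0 to infinity of f(t) e^(-s t) dt
    by truncating at t = T and applying the trapezoidal rule with n steps."""
    h = T / n
    total = 0.5 * (f(0.0) + f(T) * math.exp(-s * T))  # halved endpoint terms
    for k in range(1, n):
        t = k * h
        total += f(t) * math.exp(-s * t)
    return h * total

# Sample check: f(t) = e^(-t), whose transform is 1/(s + 1).
approx = laplace_numeric(lambda t: math.exp(-t), s=2.0)
print(approx)  # close to 1/3
```

Truncating at $T=60$ is harmless here because the integrand decays exponentially; for slowly decaying integrands you'd need a larger $T$.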

Remember, our hopes for this new transform are:

  1. It exists for many more of our "familiar" functions.
  2. It possesses properties similar to the Fourier transform.

In particular, we hope it can help us solve differential equations!

Before we start investigating this new transform, let's point out that it is clearly still linear:

Linearity of the Laplace transform

Suppose $f$ and $g$ are functions on $[0,\infty)$ that have Laplace transforms, and $c$ is a constant. Then:

  1. $\mathcal{L}(f+g)=\mathcal{L}f+\mathcal{L}g$
  2. $\mathcal{L}(cf)=c\,\mathcal{L}f$

Examples


Let's get right to it and show that polynomials, sine and cosine, and basic exponential functions all have Laplace transforms.

The Laplace transform of polynomials

First consider the function f(t)=1. By definition,

$$(\mathcal{L}f)(s)=\int_{0}^{\infty} e^{-ts}\,dt=\lim_{b\to\infty}\int_{0}^{b} e^{-ts}\,dt.$$

When s=0, we have

$$(\mathcal{L}f)(0)=\lim_{b\to\infty}\int_{0}^{b} 1\,dt=\lim_{b\to\infty} b=\infty,$$

so the integral diverges. Assume now $s\neq 0$. Then we can make the substitution $u=-ts$, so that $du=-s\,dt$ and we have

$$(\mathcal{L}f)(s)=\lim_{b\to\infty}\int_{0}^{-bs} e^{u}\,\frac{du}{-s}=\lim_{b\to\infty}-\frac{1}{s}\left(e^{-bs}-1\right)=\begin{cases}\infty, & \text{when } s<0\\[4pt] \dfrac{1}{s}, & \text{when } s>0\end{cases}$$

So, we've proven that $(\mathcal{L}f)(s)=\frac{1}{s}$ with domain $s>0$. We will often write this more succinctly as:

$$\mathcal{L}(1)=\frac{1}{s}\qquad(\text{for } s>0)$$

If we repeat this same type of analysis for the function f(t)=t, we ultimately find a similar formula, namely that

$$\mathcal{L}(t)=\frac{1}{s^{2}}\qquad(\text{for } s>0)$$

In fact, it's not terribly hard to prove the following general rule:

The Laplace transform of monomials

For each integer $n\ge 0$, one has

$$\mathcal{L}(t^{n})=\frac{n!}{s^{n+1}}\qquad(\text{for } s>0)$$
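To sketch why this rule holds (filling in the argument alluded to above): integration by parts with $u=t^{n}$ and $dv=e^{-ts}\,dt$ gives, for $s>0$,

$$\int_{0}^{\infty} t^{n}e^{-ts}\,dt=\left[-\frac{t^{n}e^{-ts}}{s}\right]_{0}^{\infty}+\frac{n}{s}\int_{0}^{\infty} t^{n-1}e^{-ts}\,dt=\frac{n}{s}\,\mathcal{L}(t^{n-1}),$$

since $t^{n}e^{-ts}\to 0$ as $t\to\infty$ when $s>0$. Iterating this recursion down to $\mathcal{L}(1)=\frac{1}{s}$ produces the factor $n!$ and the power $s^{n+1}$.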

You can now combine the above formula with linearity to immediately compute the Laplace transform of any polynomial. For example,

$$\mathcal{L}(t^{2}-2t+3)=\mathcal{L}(t^{2})-2\,\mathcal{L}(t)+3\,\mathcal{L}(1)=\frac{2}{s^{3}}-\frac{2}{s^{2}}+\frac{3}{s},$$

with domain s>0.
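If you'd like to double-check computations like this one, SymPy (an external library, not something developed in these notes) can compute Laplace transforms symbolically. A short sketch:

```python
from sympy import symbols, laplace_transform, simplify

t, s = symbols('t s', positive=True)

# Symbolically compute L(t**2 - 2*t + 3); noconds=True drops the
# convergence conditions SymPy would otherwise return alongside the result.
F = laplace_transform(t**2 - 2*t + 3, t, s, noconds=True)
print(F)

# The result agrees (up to simplification) with 2/s**3 - 2/s**2 + 3/s.
```

SymPy may present the answer over a common denominator, which is why a `simplify`-based comparison is more robust than string equality.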

The Laplace transform of exponential functions

It's actually very easy to compute the Laplace transform of a basic exponential function like $f(t)=e^{at}$:

$$\mathcal{L}(e^{at})(s)=\int_{0}^{\infty} e^{at}e^{-ts}\,dt=\int_{0}^{\infty} e^{(a-s)t}\,dt=\lim_{b\to\infty}\frac{1}{a-s}\left(e^{(a-s)b}-1\right)=\frac{1}{s-a}\qquad(\text{for } s>a)$$

Let's add that to our official list:

Laplace transform of basic exponential functions

For each number $a$, one has

$$\mathcal{L}(e^{at})=\frac{1}{s-a}\qquad(\text{for } s>a).$$
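As a quick sanity check of this formula, here's a SymPy sketch (SymPy is an outside tool, and keeping $a$ symbolic is an assumption of this sketch; the library tracks the region of convergence separately):

```python
from sympy import symbols, exp, laplace_transform, simplify

t = symbols('t', positive=True)
s = symbols('s', positive=True)
a = symbols('a', real=True)

# Symbolically compute L(e^(a t)); noconds=True suppresses the
# region-of-convergence information (which requires s > a).
F = laplace_transform(exp(a*t), t, s, noconds=True)
print(F)
```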

The Laplace transform of basic trig functions

Let's round out the list of functions for which we know the Laplace transform by looking at some sine and cosine functions. Specifically, let's start with the Laplace transform of $f(t)=\sin(bt)$. We could do this directly, computing the somewhat annoying integral

$$\mathcal{L}(\sin(bt))=\int_{0}^{\infty}\sin(bt)\,e^{-ts}\,dt.$$

Or we could live life dangerously, by converting the sine function into complex exponential functions and then using the formula for the transform of the exponential function (even though we definitely intended for a to be a real number in that formula). Let's just do that and see what happens:

$$\mathcal{L}(\sin(bt))=\mathcal{L}\!\left(\frac{e^{ibt}-e^{-ibt}}{2i}\right)=\frac{1}{2i}\,\mathcal{L}(e^{ibt})-\frac{1}{2i}\,\mathcal{L}(e^{-ibt})=\frac{1}{2i}\cdot\frac{1}{s-ib}-\frac{1}{2i}\cdot\frac{1}{s-(-ib)}=\frac{1}{2i}\cdot\frac{(s+ib)-(s-ib)}{(s-ib)(s+ib)}=\frac{1}{2i}\cdot\frac{2ib}{s^{2}+b^{2}}=\frac{b}{s^{2}+b^{2}}.$$

It turns out this is correct, as is a very similar formula that's produced when one tries the same trick for $\cos(bt)$. So let's collect those two results here:
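If the "dangerous" step makes you nervous, the final bit of complex algebra is at least easy to spot-check numerically using Python's built-in complex numbers (the sample values of $s$ and $b$ below are arbitrary choices):

```python
# Check that (1/(2i)) * (1/(s - ib) - 1/(s + ib)) equals b/(s^2 + b^2)
# for a few sample (s, b) pairs; 1j is Python's imaginary unit.
diffs = []
for s, b in [(2.0, 3.0), (5.0, 1.0), (0.5, 4.0)]:
    lhs = (1 / (s - 1j * b) - 1 / (s + 1j * b)) / (2j)
    rhs = b / (s**2 + b**2)
    diffs.append(abs(lhs - rhs))
print(diffs)  # each entry is essentially zero
```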

Laplace transforms of sine and cosine functions

For each number $b$, one has

$$\mathcal{L}(\sin(bt))=\frac{b}{s^{2}+b^{2}}\qquad(\text{for } s>0)$$

$$\mathcal{L}(\cos(bt))=\frac{s}{s^{2}+b^{2}}\qquad(\text{for } s>0)$$
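Both formulas can also be confirmed symbolically with SymPy (again, an outside tool rather than part of these notes):

```python
from sympy import symbols, sin, cos, laplace_transform, simplify

t = symbols('t', positive=True)
s, b = symbols('s b', positive=True)

# Symbolically compute the transforms of sin(b t) and cos(b t);
# noconds=True drops the convergence conditions.
F_sin = laplace_transform(sin(b*t), t, s, noconds=True)
F_cos = laplace_transform(cos(b*t), t, s, noconds=True)
print(F_sin, F_cos)
```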

A quick note on existence


We tinkered with the Fourier transform to produce a transform that we hoped existed for more functions. Based on our examples above, that goal appears to have been achieved. In fact, it turns out that now any function f(t) that doesn't grow too fast has a Laplace transform. More precisely, we have the following:

Existence of the Laplace transform

Suppose $f(t)$ is a piecewise continuous function on $[0,\infty)$ and there are constants $M,\alpha$ such that

$$|f(t)|\le M e^{\alpha t}$$

for all sufficiently large $t$. Then $(\mathcal{L}f)(s)$ exists for $s>\alpha$.

In other words, so long as $f(t)$ doesn't grow faster than a standard exponential function, it will have a Laplace transform. But functions like $f(t)=e^{t^{2}}$ are still out of luck!
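To see the failure for $f(t)=e^{t^{2}}$ concretely, here's a small numerical sketch (the choice $s=10$ and the truncation points are arbitrary): no matter how large $s$ is, the truncated integrals $\int_{0}^{b} e^{t^{2}-st}\,dt$ blow up as $b$ grows, because $t^{2}-st$ is eventually positive and increasing.

```python
import math

s = 10.0  # sample value; any fixed s meets the same fate

def truncated_integral(b, n=100_000):
    """Trapezoidal approximation of the integral of e^(t^2 - s t) from 0 to b."""
    h = b / n
    total = 0.5 * (1.0 + math.exp(b * b - s * b))  # halved endpoint terms
    for k in range(1, n):
        t = k * h
        total += math.exp(t * t - s * t)
    return h * total

for b in (10.0, 20.0, 30.0):
    print(b, truncated_integral(b))  # grows astronomically with b
```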

Suggested next notes


Laplace transform II - The inverse Laplace transform