(last update: 17 Feb 2013)

The Fourier transform is a mathematical technique that decomposes a complicated waveform into a series of simpler waveforms which, when added together, reproduce the original. Being able to do this mathematically opens the door to a better understanding of waveform phenomena (sound waves, radio waves, etc.) and how to manipulate them in the physical world.

To get an idea of how this might work, Couch's book[1] has a good example, which I'll expand upon a little...

First up is exponential decay:

\label{eqn:DecayOfExponential}
w(t) = A_i e^{-t}

This equation is a simplistic form of how a voltage signal decays over time. The term A_i is the initial amplitude (the value at t = 0).
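As a quick numerical sketch of this decay (the initial amplitude and the sample times below are arbitrary choices, picked just for illustration):

```python
import numpy as np

# Decaying exponential w(t) = A_i * e^(-t).
# A_i = 5.0 is an arbitrary initial amplitude for illustration.
A_i = 5.0
t = np.linspace(0.0, 5.0, 6)   # a few sample times
w = A_i * np.exp(-t)

print(w[0])                    # at t = 0 the value is the initial amplitude A_i
print(np.all(np.diff(w) < 0))  # True: the signal decays monotonically
```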

Next, the Fourier transform of the function w(t) is defined. There are specific criteria which w(t) must satisfy, but the criteria are sufficiently broad that we can ignore them for now. For now, just presume that equation\ref{eqn:DecayOfExponential} satisfies all the necessary criteria.

\label{eqn:TheAnalyticFourierTransform}
W(f) = \int_{-\infty}^\infty [w(t)]e^{-j2\pi ft}dt

Keep in mind that the Fourier transform allows us to break up a complex waveform into a sum of simpler waveforms, in this case sinusoids. In this formulation we are using complex numbers, as denoted by the "j" in the exponent. The "j" is not an algebraic variable. In this representation j = \sqrt{-1} is a specific number, the imaginary unit, which has no counterpart on the real number line. We use this "spooky" representation because, believe it or not, it makes the math simpler.

Imaginary numbers are "spooky" because of equation\ref{eqn:jSquared}.

\label{eqn:jSquared}
-1 = j \cdot j

Think about it... How can two numbers multiplied by each other equal negative one?... Weird...

Regardless of how strange imaginary numbers are, they make the mathematics cleaner.
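Python, for one, has complex numbers built in (it spells the imaginary unit 1j rather than j), so the "spooky" property is easy to poke at directly:

```python
# Python writes the imaginary unit as 1j; squaring it gives -1,
# exactly the property of equation jSquared.
j = 1j
print(j * j)        # (-1+0j)
print(j * j == -1)  # True
```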

Remember we are using an analytical approach now. We will get to a numerical approach later.

One of the limitations of the analytical approach is seen in the limits of the integral in equation\ref{eqn:TheAnalyticFourierTransform}: the limits are -\infty and \infty.

The limits are saying "from all previous time (-\infty)" until "the end of all future time (\infty)", the relationship of equation\ref{eqn:TheAnalyticFourierTransform} holds.

I like to think I'm broad-minded, but those time limits are a bit too broad even for me... we'll let that slip for the time being and concentrate on the rest of equation\ref{eqn:TheAnalyticFourierTransform}.

This analytic equation is telling us that by integrating the product of w(t) and e^{-j2\pi ft}, in infinitely small pieces (dt), over all time, we can get an expression W(f), which represents all the simple sinusoid frequencies which are inside w(t).

For the time being, take it on faith that this assertion concerning equation\ref{eqn:TheAnalyticFourierTransform}, is true. There is quite a bit going on behind the scenes but let's not get too hung up on the gory derivation details.

Next we will apply equation\ref{eqn:DecayOfExponential} to equation\ref{eqn:TheAnalyticFourierTransform}. At the same time, let's remove the "from all previous time" limit, and confine our interest from now ("0") until the end of time. This will give us an expression for the frequency content of a decaying exponential.

W(f) = \int_0^\infty A_i e^{-t} e^{-j2\pi ft}dt

This equation can be tidied up a bit by setting A_i = 1, which will make things simpler in the long run.

\label{eqn:CouchEx2-2}
W(f) = \int_0^\infty e^{-t} e^{-j2\pi ft}dt

Let's gloss over the grunt work of performing the integration, and jump straight to the result.

\label{eqn:CouchEx2-2_integrRes}
W(f) = \frac{1}{1 + j 2 \pi f}
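As a sanity check, the claimed result can be verified numerically: integrate e^{-t} e^{-j 2 \pi f t} out to a large cutoff and compare with 1/(1 + j 2 \pi f). This is only a sketch; the test frequency f = 0.25 and the cutoff T = 50 (standing in for \infty) are arbitrary choices.

```python
import numpy as np

# Numerically approximate W(f): the integral of e^(-t) e^(-j 2 pi f t) dt
# from 0 to T, with T a finite stand-in for infinity (e^(-50) is negligible).
f = 0.25
T = 50.0
t = np.linspace(0.0, T, 200_001)
dt = t[1] - t[0]
integrand = np.exp(-t) * np.exp(-1j * 2 * np.pi * f * t)

W_numeric = np.sum(integrand) * dt            # simple Riemann sum
W_analytic = 1.0 / (1.0 + 1j * 2 * np.pi * f)
print(abs(W_numeric - W_analytic) < 1e-3)     # True: they agree closely
```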

Well... that's nice... but so what?

... let's go a little farther...

The result expressed in equation\ref{eqn:CouchEx2-2_integrRes} is a complex number. This means that the function W(f) may be expressed in the same way as any typical complex number, such as 5 + j3, as in,

W(f) = X(f) + jY(f)

This is called quadrature form.

If the correct math is done, W(f) may be expressed as two functions, X(f) and Y(f), which do not have the sometimes awkward (although useful) imaginary number j in them.

In the case of equation\ref{eqn:CouchEx2-2_integrRes}, the imaginary number j can go away by doing the following,

\label{eqn:Rationalize_CouchEx2-2_integrRes}
W(f) = \frac{1}{1 + j 2 \pi f} \bigg [ \frac{1 - j 2 \pi f}{1 - j 2 \pi f} \bigg ]

which is called rationalization,

W(f) = \frac{1 - j 2 \pi f}{(1 + j 2 \pi f)(1 - j 2 \pi f)}

W(f) = \frac{1 - j 2 \pi f}{(1 \cdot 1 + j 2 \pi f - j 2 \pi f + (-1)(j^2) (2 \pi f)(2 \pi f))}

in the denominator the +j 2 \pi f and -j 2 \pi f cancel out, and 1 \cdot 1 = 1,

W(f) = \frac{1 - j 2 \pi f}{(1 + (-1)(j^2) (2 \pi f)(2 \pi f))}

remember j = \sqrt{-1}, so j^2 = \sqrt{-1}\sqrt{-1} which is simply -1,

W(f) = \frac{1 - j 2 \pi f}{(1 + (-1)(-1) (2 \pi f)(2 \pi f))}

the product of two negative numbers is a positive number, so the two -1 terms go away, and the remaining product can be combined,

W(f) = \frac{1 - j 2 \pi f}{1 + (2 \pi f)^2}

This result can be put into quadrature notation by splitting the numerator over the common denominator, thus,

X(f) = \frac{1}{1 + (2 \pi f)^2}

Y(f) = \frac{- 2 \pi f}{1 + (2 \pi f)^2}
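A quick numerical check that quadrature form reassembles the original, i.e. that X(f) + jY(f) = W(f). Note the j multiplies Y(f) from outside, so Y(f) itself is purely real. The test frequency below is an arbitrary choice.

```python
import math

# W(f) = 1/(1 + j 2 pi f), split into quadrature form W = X + jY.
f = 1.5                                   # arbitrary test frequency
W = 1.0 / (1.0 + 1j * 2 * math.pi * f)
X = 1.0 / (1.0 + (2 * math.pi * f) ** 2)
Y = -2 * math.pi * f / (1.0 + (2 * math.pi * f) ** 2)
print(abs((X + 1j * Y) - W) < 1e-12)      # True: X + jY reassembles W
```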

There is one other way to represent the result W(f), and that is magnitude and phase form.

Magnitude is,

\label{eqn:MagnitudeForm}
|{W(f)}| = \sqrt{X(f)^2 + Y(f)^2}

and phase,

\label{eqn:PhaseForm}
\theta (f) = \tan^{-1} \bigg ( \frac{Y(f)}{X(f)} \bigg )

The magnitude form of equation\ref{eqn:MagnitudeForm} is most often what is shown graphically but we'll get to that shortly.

Note that the form of equation\ref{eqn:MagnitudeForm} is the same as the old friend the "triangle equation" or Pythagorean Theorem, otherwise written as A^2 + B^2 = C^2, solved for C.
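As a sketch, the magnitude and phase of equation\ref{eqn:CouchEx2-2_integrRes} can be computed both from the X(f) and Y(f) formulas above and directly from the complex value of W(f). I use atan2 rather than a bare arctangent so the quadrant comes out right; the test frequency is an arbitrary choice.

```python
import cmath
import math

# Magnitude and phase of W(f) = 1/(1 + j 2 pi f), computed two ways.
f = 0.5                                     # arbitrary test frequency
W = 1.0 / (1.0 + 1j * 2 * math.pi * f)

X = 1.0 / (1.0 + (2 * math.pi * f) ** 2)
Y = -2 * math.pi * f / (1.0 + (2 * math.pi * f) ** 2)

mag = math.sqrt(X**2 + Y**2)                # |W(f)| from the quadrature parts
theta = math.atan2(Y, X)                    # phase, quadrant-safe

print(abs(mag - abs(W)) < 1e-12)            # True
print(abs(theta - cmath.phase(W)) < 1e-12)  # True
```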

Before digging into numerical examples, there are one or two more analytical examples of the Fourier transform that should be looked into.

The math for the exponential decay equation was fairly straightforward, but the equation itself does not illustrate why the Fourier transform is said to decompose a complex signal into a set of simpler signals.

To help illustrate that, the next example will be a sinusoid. Again I will base this example on one found in Couch[1].

Earlier I glossed over the use of complex representation of signals. If you thought you might not have to bother with such a notion again, I have to tell you no such luck.

There are some identities which make dealing with sinusoids as complex numbers fairly straightforward. I'll use one now,

\label{eqn:EulerSinIdentity}
sin(x) = \frac{e^{jx} - e^{-jx}}{2j}

Equation\ref{eqn:EulerSinIdentity} is the Euler identity for the sine function.

It looks much more complicated than just writing sin(x), and appears to be a step backward.

However, using this complex notation, actually saves a great deal of mathematical hassle. If the typical sine, cosine, tangent... etc. forms are used the mathematics mushrooms into a nightmare of sines, cosines, tangents... and all manner of other gobbledy-gook.

... so hang in there, the complex representation isn't so bad...
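It's easy to convince yourself the identity holds by checking a handful of points numerically (a quick sketch; the sample points are arbitrary):

```python
import numpy as np

# Spot-check the Euler identity sin(x) = (e^(jx) - e^(-jx)) / (2j).
x = np.linspace(-np.pi, np.pi, 7)
lhs = np.sin(x)
rhs = (np.exp(1j * x) - np.exp(-1j * x)) / (2j)
print(np.allclose(lhs, rhs))  # True
```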

There is another shorthand that is often used,

\label{eqn:OmegaDef}
\omega_0 = 2 \pi f_0

This equation helps to reduce the clutter of 2 \pi f's, which often get in the way. The symbol on the left is the lower-case Greek letter omega. The subscript zero distinguishes this particular value of omega from other values of omega. Note also that the associated frequency f_0 shares the same zero subscript.

As an example of its utility, let's try it out on the Euler sine identity.

Neglecting any phase term, a sinusoid can be represented by,

\label{eqn:BasicSinusoid}
x(t) = A \; sin(2 \pi f t)

In this equation f is a frequency, typically in hertz (Hz, cycles per second), kilohertz (kHz, 10^3 cycles per second), megahertz (MHz, 10^6 cycles per second), gigahertz (GHz, 10^9 cycles per second), and so on, adding another 3 to the exponent of 10 each time.

The variable A, is the amplitude of the sine wave. The variable t is simply time.

The two other terms, 2 and \pi, are simply constants. If you need some background on these terms, go back and refresh your basic trigonometry and geometry.

Employing the shorthand omega, equation\ref{eqn:BasicSinusoid} looks like this,

\label{eqn:BasicSinusoidUsingOmega}
x(t) = A \; sin(\omega_0 t)

which is a bit more succinct, and will reduce the clutter, moving forward.

Applying this shorthand to the Euler identity,

\label{eqn:EulerSinIdentityOmega}
A \; sin(\omega_0 t) = A \; \frac{e^{j \omega_0 t} - e^{-j \omega_0 t}}{2j}

Since the Fourier transform introduced earlier is also built out of e, the Fourier transform of the sinusoid is obtained by putting equation\ref{eqn:EulerSinIdentityOmega} into equation\ref{eqn:TheAnalyticFourierTransform}. Don't forget that \omega can be used in the transform kernel as well (without the 0 subscript, because it's a different \omega: here \omega = 2 \pi f),

X(f) = \int_{-\infty}^\infty \bigg [ A \; \frac{e^{j \omega_0 t} - e^{-j \omega_0 t}}{2j} \bigg ] \; e^{-j \omega t}dt

which is simply distributing e^{-j \omega t} across the numerator,

X(f) = \int_{-\infty}^\infty \bigg [ A \; \frac{e^{j \omega_0 t}(e^{-j \omega t}) - e^{-j \omega_0 t}(e^{-j \omega t})}{2j} \bigg ] dt

using the 2j as a common denominator, the single integral can be broken up into two integrals,

X(f) = \int_{-\infty}^\infty \bigg [ A \; \frac{e^{j \omega_0 t}(e^{-j \omega t})}{2j} \bigg ] dt - \int_{-\infty}^\infty \bigg [ A \; \frac{e^{-j \omega_0 t} (e^{-j \omega t})}{2j} \bigg ]dt

Some simplification can be done, by pulling the terms A, and 2j outside the integrals,

X(f) = \frac{A}{2j}\int_{-\infty}^\infty e^{j \omega_0 t} (e^{-j \omega t}) dt - \frac{A}{2j} \int_{-\infty}^\infty e^{-j \omega_0 t} (e^{-j \omega t}) dt

Next the exponential multiplications can be simplified. To see what's going on look at the exponentials only.

The trick is to recognize that when multiplying powers of the same base, you just add the exponents. The tricky part here is that the exponents are complex numbers. To be sure this is done correctly, we can make the exercise more explicit.

Note that the exponents are really,

e^{0 + j \omega_0 t} (e^{0 - j \omega t})

or

e^{(0 + j \omega_0 t) + (0 - j \omega t)}

The exponents are now complete complex numbers. They are added together by adding the real and imaginary parts separately,

e^{([0 + 0] + [j \omega_0 t] + [ - j \omega t])}

e^{(0 + j [\omega_0 t - \omega t])}

e^{j (\omega_0 - \omega) t}

so,

X(f) = \frac{A}{2j}\int_{-\infty}^\infty e^{j (\omega_0 - \omega) t} dt - \frac{A}{2j} \int_{-\infty}^\infty e^{-j \omega_0 t} (e^{-j \omega t}) dt

The right hand term of the difference can be simplified in a similar manner,

e^{-j \omega_0 t - j\omega t}

pull the -j out of the difference, thus forming a summation,

e^{-j (\omega_0 t + \omega t)}

so the completed difference of integrals is,

X(f) = \frac{A}{2j}\int_{-\infty}^\infty e^{j (\omega_0 - \omega) t} dt - \frac{A}{2j} \int_{-\infty}^\infty e^{-j (\omega_0 + \omega)t} dt

moving the constant terms out of the difference,

X(f) = \frac{A}{2j} \bigg [ \int_{-\infty}^\infty e^{j (\omega_0 - \omega) t} dt - \int_{-\infty}^\infty e^{-j (\omega_0 + \omega)t} dt \bigg ]

which has an inverse imaginary term,

X(f) = \frac{1}{j} \bigg (\frac{A}{2} \bigg ) \bigg [ \int_{-\infty}^\infty e^{j (\omega_0 - \omega) t} dt - \int_{-\infty}^\infty e^{-j (\omega_0 + \omega)t} dt \bigg ]

and since \frac{1}{j} = \frac{j}{j^2} = \frac{j}{-1} = (-1) j,

X(f) = (-1) j \bigg (\frac{A}{2} \bigg ) \bigg [ \int_{-\infty}^\infty e^{j (\omega_0 - \omega) t} dt - \int_{-\infty}^\infty e^{-j (\omega_0 + \omega)t} dt \bigg ]

so the sense of the difference may be swapped,

X(f) = j \bigg (\frac{A}{2} \bigg ) \bigg [ \int_{-\infty}^\infty e^{-j (\omega_0 + \omega)t} dt - \int_{-\infty}^\infty e^{j (\omega_0 - \omega) t} dt \bigg ]

Now to make things really simple, you have to know of something called a delta function. The delta function \delta is zero everywhere except at zero, where it is an infinitely tall, infinitely narrow spike with unit area. Where the spike lands can be moved by adding or subtracting an appropriate value inside its argument. The delta function shows up here because of the identity,

\delta (\omega) = \int_{-\infty}^\infty e^{\pm j \omega t} dt

(Strictly speaking this identity carries a factor of 2\pi, but that factor washes out when the result is rewritten in terms of f, so it is dropped here to keep the bookkeeping light.)
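A crude numerical illustration of this spike-at-zero behaviour, using a finite window [-T, T] as a stand-in for the infinite limits (T and the nonzero test value of \omega are arbitrary choices):

```python
import numpy as np

# Integrate e^(j omega t) over [-T, T].  At omega = 0 the result grows
# with the window (here 2T); at omega != 0 it oscillates and stays small.
T = 200.0
t = np.linspace(-T, T, 400_001)
dt = t[1] - t[0]

def window_integral(omega):
    return np.sum(np.exp(1j * omega * t)) * dt

at_zero = abs(window_integral(0.0))   # ~2T = 400
off_zero = abs(window_integral(3.0))  # bounded by 2/|omega|, well under 1
print(at_zero > 100 * off_zero)       # True: the spike towers over the rest
```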

There is a problem though: the subtracted integral's exponent has the omegas in a strange order, (\omega_0 - \omega). Since j(\omega_0 - \omega) = -j(\omega - \omega_0), that exponent can be rewritten (and \omega_0 + \omega reordered to \omega + \omega_0) to match the delta-function form,

X(f) = j \bigg (\frac{A}{2} \bigg ) \bigg [ \int_{-\infty}^\infty e^{-j (\omega + \omega_0)t} dt - \int_{-\infty}^\infty e^{-j (\omega - \omega_0) t} dt \bigg ]

so,

X(\omega) = j \bigg (\frac{A}{2} \bigg ) \bigg [ \delta (\omega + \omega_0) - \delta (\omega - \omega_0) \bigg ]

Rewriting in terms of f, using \omega = 2 \pi f and \omega_0 = 2 \pi f_0,

X(f) = j \bigg (\frac{A}{2} \bigg ) \bigg [ \delta (f + f_0) - \delta (f - f_0) \bigg ]

We now have an expression of which frequencies are present in a sinusoid.

If we take the absolute value of this result, the imaginary term goes away, and the amplitude of the individual frequencies remains,

|X(f)| = \frac{A}{2} \delta (f - f_0) + \frac{A}{2} \delta (f + f_0)

This is a bit under-whelming, and a bit odd at the same time. We started with a single sinusoid, and ended up with two.

What is interesting is that the two frequencies are the same except for a factor of -1: one is a negative frequency (-f_0), and the other is the more familiar positive frequency (f_0). Each of the pair carries half the amplitude.

This means that, using this complex number based formulation, each individual sinusoid in the time domain representation, will have a frequency pair in the frequency domain. This pair of frequencies will consist of symmetrical positive and negative frequencies.
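This positive/negative pairing is easy to see with a discrete Fourier transform, a preview of the numerical approach promised earlier. The amplitude, tone frequency, sample rate, and record length below are arbitrary choices, picked so the tone lands exactly on a frequency bin:

```python
import numpy as np

# DFT of A sin(2 pi f0 t): expect a pair of spikes at +/- f0,
# each with magnitude A/2 (after normalizing by the number of samples).
A, f0 = 2.0, 5.0          # amplitude and tone frequency (Hz)
fs, N = 64.0, 64          # sample rate (Hz) and sample count: exactly 1 second
t = np.arange(N) / fs
x = A * np.sin(2 * np.pi * f0 * t)

X = np.fft.fft(x) / N                 # normalize so bin values are amplitudes
freqs = np.fft.fftfreq(N, d=1 / fs)   # bin frequencies, negatives included

pos = abs(X[freqs == f0][0])          # magnitude at +f0
neg = abs(X[freqs == -f0][0])         # magnitude at -f0
print(round(pos, 6), round(neg, 6))   # 1.0 1.0  (= A/2 each)
```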

[1]: "Digital and Analog Communication Systems", 4th ed., 1990, Macmillan