Simple FFT in Haskell

The article develops a simple implementation of the fast Fourier transform in Haskell.

Raw performance of the algorithm is explicitly not a goal here; for instance, I use things like nub, Writer, and lists for simplicity. On the other hand, I do pay attention to the algorithmic complexity in terms of the number of arithmetic operations performed; the analysis thereof will be done in a subsequent article.

Background

The discrete Fourier transform turns \(n\) complex numbers \(a_0,a_1,\ldots,a_{n-1}\) into \(n\) complex numbers

\[f_k = \sum_{l=0}^{n-1} e^{- 2 \pi i k l / n} a_l.\]

An alternative way to think about \(f_k\) is as the values of the polynomial

\[P(x)=\sum_{l=0}^{n-1} a_l x^l\]

at \(n\) points \(w^0,w^1,\ldots,w^{n-1}\), where \(w=e^{-2 \pi i / n}\) is a primitive \(n\)th root of unity.

The naive calculation requires \(\Theta(n^2)\) operations; our goal is to reduce that number to \(\Theta(n \log n)\).
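For reference, the naive calculation is just a direct transcription of the defining sum. A sketch (the name dftNaive is mine, not from the original code):

```haskell
import Data.Complex

-- Direct translation of the defining sum: for each of the n outputs we
-- sum n terms, giving Theta(n^2) arithmetic operations overall.
dftNaive :: [Complex Double] -> [Complex Double]
dftNaive as = [ sum [ a * w (k * l) | (l, a) <- zip [0 ..] as ]
              | k <- [0 .. n - 1] ]
  where
    n = length as
    -- w m = e^(-2*pi*i*m/n)
    w m = cis (-2 * pi * fromIntegral m / fromIntegral n)
```

For example, dftNaive [1, 1, 1, 1] is (up to rounding) [4, 0, 0, 0].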

An excellent explanation of the algorithm (which inspired this article in the first place) is given by Daniel Gusfield in his video lectures; he calls it “the most important algorithm that most computer scientists have never studied”. You only need to watch the first two lectures (and maybe the beginning of the third one) to understand the algorithm and this article.

Roots of unity

Roots of unity could in principle be represented in Cartesian form by the Complex a type. However, floating-point coordinates make it unreliable to compare them for equality, which is exactly what we are going to do to achieve subquadratic complexity.

So here’s a small module just for representing these special complex numbers in the polar form, taking advantage of the fact that their absolute values are always 1 and their phases are rational multiples of \(\pi\).
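The original module isn't reproduced here; a sketch of the idea might look like this (the names URoot, mkURoot, and the rest are my own guesses):

```haskell
import Data.Complex

-- A root of unity e^(-2*pi*i*q), represented by its phase q as a
-- fraction of the full turn. Rational arithmetic is exact, so equality
-- of two roots is just equality of their normalized phases.
newtype URoot = URoot Rational
  deriving (Eq, Ord, Show)

-- Smart constructor: reduce the phase modulo 1, so that equal roots
-- always compare equal.
mkURoot :: Rational -> URoot
mkURoot q = URoot (q - fromIntegral (floor q :: Integer))

-- Multiplying two roots of unity adds their phases.
uMul :: URoot -> URoot -> URoot
uMul (URoot p) (URoot q) = mkURoot (p + q)

-- Squaring, the operation the FFT recursion needs.
uSqr :: URoot -> URoot
uSqr u = u `uMul` u

-- Convert to Cartesian form once actual complex arithmetic is needed.
toComplex :: URoot -> Complex Double
toComplex (URoot q) = mkPolar 1 (-2 * pi * fromRational q)
```

Because the phase is a Rational, mkURoot (3/2) and mkURoot (1/2) are the same root, exactly.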

Fast Fourier transform

So we want to evaluate the polynomial \(P(x)=\sum_{l=0}^{n-1}a_lx^l\) at points \(w^k\). The trick is to represent \(P(x)\) as \(A_e(x^2) + x A_o(x^2)\), where \(A_e(x)=a_0+a_2 x + \ldots\) and \(A_o(x)=a_1+a_3 x + \ldots\) are polynomials constructed out of the even-numbered and odd-numbered coefficients of \(P\), respectively.

When \(x\) is a root of unity, so is \(x^2\); this allows us to apply the algorithm recursively to evaluate \(A_e\) and \(A_o\) for the squared numbers.

But the real boon comes when \(n\) is even; then there will be half as many of these squared numbers, because \(w^k\) and \(w^{k+n/2}\), when squared, both give the same number \(w^{2k}\). This is when the divide and conquer strategy really pays off.

We will represent a polynomial \(\sum_{l=0}^{n-1}a_lx^l\) in Haskell as a list of coefficients [a_0,a_1,...], starting with \(a_0\).

To be able to split a polynomial into the even and odd parts, let’s define a corresponding list function

(I think I learned the idea of this elegant implementation from Dominic Steinitz.)
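Assuming the function is the usual one-pass foldr trick, it might read:

```haskell
-- Split a list into its even-indexed and odd-indexed elements in a
-- single pass: each element is consed onto the "evens", while the
-- previous evens and odds swap roles.
split :: [a] -> ([a], [a])
split = foldr (\x (evens, odds) -> (x : odds, evens)) ([], [])
```

For example, split [a0, a1, a2, a3] gives ([a0, a2], [a1, a3]).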

Now, the core of the algorithm: a function that evaluates a polynomial at a given list of points on the unit circle. It tracks the number of performed arithmetic operations through a Writer monad over the Sum monoid.

If the polynomial is a constant, there’s not much to calculate. This is our base case.

Otherwise, use the recursive algorithm outlined above.

The actual FFT function is a thin wrapper around evalFourier that substitutes the specific points \(w^0,w^1,\ldots,w^{n-1}\) and performs some simple conversions. It returns the result of the DFT together with the number of operations performed.
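Putting the pieces together, the evaluator and its wrapper might be sketched as follows. All names are my assumptions rather than the original code, and the helper definitions mirror the earlier sketches (repeated here so the snippet compiles on its own):

```haskell
import Control.Monad (forM)
import Control.Monad.Writer
import Data.Complex
import Data.List (nub)
import Data.Monoid (Sum (..))

-- Roots of unity by their rational phase, as sketched earlier.
newtype URoot = URoot Rational deriving (Eq, Show)

mkURoot :: Rational -> URoot
mkURoot q = URoot (q - fromIntegral (floor q :: Integer))

uSqr :: URoot -> URoot
uSqr (URoot q) = mkURoot (2 * q)

toComplex :: URoot -> Complex Double
toComplex (URoot q) = mkPolar 1 (-2 * pi * fromRational q)

-- Even/odd split, as sketched earlier.
split :: [a] -> ([a], [a])
split = foldr (\x (evens, odds) -> (x : odds, evens)) ([], [])

-- Record one arithmetic operation in the Writer log.
op :: Writer (Sum Int) ()
op = tell (Sum 1)

-- Evaluate the polynomial with the given coefficients (assumed
-- non-empty) at the given points on the unit circle, counting
-- arithmetic operations along the way.
evalFourier :: [Complex Double] -> [URoot] -> Writer (Sum Int) [Complex Double]
evalFourier [c] pts = return (c <$ pts)  -- constant polynomial: base case
evalFourier coeffs pts = do
  let (evens, odds) = split coeffs
      -- Squaring the points; thanks to nub, w^k and w^(k+n/2) collapse
      -- into a single point when n is even.
      sqPts = nub (map uSqr pts)
  evalsE <- evalFourier evens sqPts  -- A_e at the squared points
  evalsO <- evalFourier odds  sqPts  -- A_o at the squared points
  let tblE = zip sqPts evalsE
      tblO = zip sqPts evalsO
  forM pts $ \p ->
    case (lookup (uSqr p) tblE, lookup (uSqr p) tblO) of
      (Just ae, Just ao) -> do
        op >> op  -- one multiplication, one addition
        return (ae + toComplex p * ao)  -- A_e(p^2) + p * A_o(p^2)
      _ -> error "impossible: squared point not found"

-- The wrapper: evaluate at w^0, ..., w^(n-1) and unwrap the Writer,
-- returning the DFT together with the operation count.
fft :: [Complex Double] -> ([Complex Double], Int)
fft as = getSum <$> runWriter (evalFourier as pts)
  where
    n = toInteger (length as)
    pts = [ mkURoot (toRational k / toRational n) | k <- [0 .. n - 1] ]
```

For instance, fft [1, 2, 3, 4] evaluates (up to rounding) to [10, -2+2i, -2, -2-2i], along with the count of additions and multiplications performed.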