Demystifying Curvelets
Signal-processing deep dive
Learn what curvelets are, how they are built, and what they can be used for

Introduction
Curvelets are multiscale, oriented, non-adaptive representations of images and multi-dimensional signals. If some of these words didn't make sense, you're in the right place.
Developed in the early 2000s by Emmanuel Candès and David Donoho [1] during the boom in wavelet-related signal processing, curvelets were designed to solve certain issues that plagued previous alternatives. Wavelets generalize the 1D Fourier transform by using arbitrary – but specially crafted – functions instead of complex exponentials. Like the Fourier transform, they can be readily extended to 2D by applying the 1D transform repeatedly along each axis. However, 2D wavelets constructed with this naïve, separable approach run into trouble when representing edges that are not exactly horizontal or exactly vertical. In practice, transforms that have trouble with edges can cause "blocky" artifacts in strongly compressed images. Many transforms suffer the same fate, including the discrete cosine transform (DCT), which powers the ubiquitous JPEG format (see Figure 1). Even JPEG2000, which relies on the wavelet transform to improve on JPEG, still suffers from artifacts around edges.

Curvelets don't have these issues – they can faithfully represent edges at any orientation. They also exhibit some nice properties, such as providing an optimal sparse representation of wave-like phenomena, conserving energy, and having their adjoint/transpose coincide with their inverse (a property akin to unitarity), among others.
Today, curvelets are used in a variety of image-processing tasks, including denoising, compression, inpainting, and smoothing. In the geophysical community, they have become a powerful workhorse for tasks such as adaptive subtraction, image matching, and preconditioning. In the medical community, they have been used for segmentation, diagnosis, and more.
While it is true that deep learning has quickly displaced classical algorithms for many of these tasks, curvelets (and other wavelet-like transforms) still have their uses. Whereas deep learning offers the possibility of learning adaptive, multiscale representations of signals, curvelets offer such representations out of the box: predictably, without requiring training, and optimally for many classes of signals. One may use curvelets directly as a substitute when no training data is available, or even employ them alongside deep learning methods, thereby reducing model complexity and data requirements.
In this deep dive, we will go over the building blocks of the curvelet transform, focusing on the intuition behind it. The goal is to provide a starting point for users interested in developing new applications which use curvelets. Find all the code for this tutorial in the Curvelops repository.
Before Curvelets: the Fourier Transform
To understand the curvelet transform, we first need to understand the Fourier transform. The 2D FFT (Fast Fourier Transform) tells us which spatial frequencies an image contains. The more energy at larger wavenumbers (spatial frequencies farther from the origin), the faster the image varies along the corresponding direction. In this section we will see how to find this direction, as well as how to quantify "fast".
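To make this concrete, here is a minimal sketch in plain NumPy (not taken from the Curvelops repository; the image and wavenumbers are made up for illustration). It builds a plane-wave image oscillating along a known direction and recovers that direction from the peak of the centered 2D FFT magnitude:

```python
import numpy as np

n = 128
y, x = np.mgrid[0:n, 0:n]

# Plane wave with kx = 10 and ky = 5 cycles across the image:
# it varies fastest along the direction of the vector (kx, ky)
kx, ky = 10, 5
img = np.cos(2 * np.pi * (kx * x + ky * y) / n)

# Centered 2D FFT: after fftshift, zero wavenumber sits at (n//2, n//2)
spectrum = np.fft.fftshift(np.fft.fft2(img))
magnitude = np.abs(spectrum)

# A real cosine has two conjugate-symmetric peaks at +/-(kx, ky);
# the strongest bin recovers the wavenumbers up to a sign
iy, ix = np.unravel_index(np.argmax(magnitude), magnitude.shape)
ky_found, kx_found = iy - n // 2, ix - n // 2
print(abs(kx_found), abs(ky_found))  # → 10 5
```

The magnitudes of the recovered wavenumbers tell us how "fast" the image varies (cycles per image width), and their ratio gives the orientation of that variation.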
Let's start by creating images which vary rapidly in one particular direction, but not at all in the perpendicular direction. These will be helpful in understanding what the