In many fields of study, from science and engineering to economics and psychology, we need to analyze data so that we can discover underlying patterns and information. A common way of doing this is to transform the data by applying mathematical functions.

One of the best-known processing techniques is Fourier analysis, in which you can approximate a real-world data stream by adding together a series of sine and cosine curves at different frequencies; the more curves you include in your approximation, the more closely you can replicate the original data. Since we know how to work with these well-defined trigonometric curves, we can often deduce patterns in the data that would otherwise remain hidden.
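The idea that adding more curves yields a closer approximation can be seen in a small sketch. Below, a square wave stands in for a real-world signal, and we sum the first few sine terms of its Fourier series; the function name and the choice of NumPy are illustrative, not part of the original article.

```python
import numpy as np

def square_wave_approx(t, n_terms):
    """Approximate a square wave by summing the first n_terms
    odd-harmonic sine curves of its Fourier series."""
    result = np.zeros_like(t)
    for k in range(n_terms):
        n = 2 * k + 1  # a square wave contains only odd harmonics
        result += (4 / np.pi) * np.sin(n * t) / n
    return result

t = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
target = np.sign(np.sin(t))  # the ideal square wave

# The mean error shrinks as more sine curves are included
for n_terms in (1, 5, 50):
    err = np.mean(np.abs(square_wave_approx(t, n_terms) - target))
    print(n_terms, err)
```

With one term the approximation is just a single sine curve; by fifty terms it hugs the flat tops of the square wave, illustrating how piling on well-understood trigonometric curves replicates the original data ever more closely.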

But Fourier analysis has limitations. It works best when the original data has features that repeat periodically, and it has trouble with transient signals or data that shows abrupt changes, such as the spoken word. Often, we need to be able to change our analytical representation depending on the actual data, so that we can resolve more detail in specific parts of the data stream. In essence, we need a way to change scale at various points, and scale is at the heart of wavelets.

The Notion of Scale

The following explanation is adapted from Dana Mackenzie's highly recommended article "Wavelets: Seeing the Forest and the Trees".

Consider how we view a landscape. If you're looking down from a jet airliner in summer, a forest appears as a solid canopy of green. If you're in a car driving by, however, you see individual trees. If you stop and move closer, you can make out individual branches and leaves. Up close, you may spot a dewdrop or an insect sitting on a leaf. With a magnifying glass, you can see structural details of the leaf and its veins.

As we get ever closer to an object, our view becomes narrower and we see finer and finer detail. In other words, as our scope becomes smaller, our resolution becomes greater.

Our eyes and mind adapt quickly to these changes in perspective, moving from the macro scale to the micro. Unfortunately, we can't apply this technique to a photograph or computerized digital image.

If you enlarged a picture of a forest (as if you were trying to get "closer" to a tree), all you'd see is a fuzzier image; you still wouldn't be able to make out the branch, the leaf or the dewdrop. Regardless of what you might see in the movies, no amount of "sharpening" or processing can help you see detail that hasn't already been encoded into the image. We can't see anything smaller than a pixel, and the camera can show us only one resolution at a time.

Wavelet algorithms allow us to record or process different areas of a scene at different levels of detail (resolution) and with different amounts of compression (scale). In essence, they let us take new photos at closer range. If you look at a collection of data (also called a signal) from a broad perspective, you'll notice large-scale features; using a smaller, closer perspective, you can observe much smaller features.
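A minimal sketch of this idea is the Haar wavelet, the simplest wavelet transform (not a method the article itself walks through): one step splits a signal into pairwise averages, the broad "airliner" view, and pairwise differences, the fine detail that view discards.

```python
import numpy as np

def haar_step(signal):
    """One level of the Haar wavelet transform: split a signal into
    pairwise averages (the coarse, far-away view) and pairwise
    differences (the fine detail lost at that scale)."""
    evens = signal[0::2]
    odds = signal[1::2]
    averages = (evens + odds) / 2
    details = (evens - odds) / 2
    return averages, details

signal = np.array([9.0, 7.0, 3.0, 5.0])
coarse, detail = haar_step(signal)
print(coarse)  # the broad view: [8. 4.]
print(detail)  # the fine detail: [1. -1.]
```

Nothing is lost: the original samples come back as `coarse + detail` and `coarse - detail`. Repeating the step on the averages yields ever coarser views, so one recording holds the forest, the trees, and the leaves at once.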

Enter Wavelets
