comp.compression Frequently Asked Questions (part 2/3)
Section - [72] What is wavelet theory?

Preprints and software are available by anonymous ftp from the
Yale Mathematics Department computer, in the /pub/software/ directory.

For source code of several wavelet coders, see item 15 in part one of
this FAQ.

A list of pointers, covering theory, papers, books, implementations,
resources and more can be found at

Bill Press of Harvard/CfA has also made some material available: a short
TeX article on wavelet theory (wavelet.tex, to be included in a future
edition of Numerical Recipes), some sample wavelet code (wavelet.f, in
FORTRAN - sigh), and a beta version of an astronomical image compression
program which he is currently developing (FITS format data files only).

The Rice Wavelet Toolbox Release 2.0 is available in /pub/dsp/papers/ .
This is a collection of MATLAB "mfiles" and "mex" files for two-band and
M-band filter bank/wavelet analysis from the DSP group and Computational
Mathematics Laboratory (CML) at Rice University, Houston, TX.  This
release includes application code for Synthetic Aperture Radar
despeckling and for deblocking of JPEG-decompressed images.
Contact: Ramesh Gopinath <>.

A wavelet transform coder construction kit is available at
Contact: Geoff Davis <>

A matlab toolbox for constructing multi-scale image representations, 
including Laplacian pyramids, QMFs, wavelets, and steerable pyramids, 
is available at
Contact: Eero Simoncelli <>.

A mailing list dedicated to research on wavelets has been set up at the
University of South Carolina. To subscribe to this mailing list, send a
message with "subscribe" as the subject to
For back issues and other information, check the Wavelet Digest home
page.
A tutorial by M. Hilton, B. Jawerth, and A. Sengupta, entitled
"Compressing Still and Moving Images with Wavelets", is available online.
The fig8 file, a comparison of JPEG and wavelet compressed images, could
take several hours to print. The tutorial is also available at

A page on wavelet-based HARC-C compression technology is available at

Commercial wavelet image compression software:

Details of the wavelet transform can be found in

A 5 minute course in wavelet transforms, by Richard Kirk <>:

Do you know what a Haar transform is? It's a transform to another orthonormal
space (like the DFT), but the basis functions are a set of square wave bursts
like this...

   +--+                         +------+
   +  |  +------------------    +      |      +--------------
      +--+                             +------+

         +--+                                 +------+
   ------+  |  +------------    --------------+      |      +
            +--+                                     +------+

               +--+             +-------------+
   ------------+  |  +------    +             |             +
                  +--+                        +-------------+

                     +--+       +---------------------------+
   ------------------+  |  +    +                           +
                        +--+

This is the set of functions for an 8-element 1-D Haar transform. You
can probably see how to extend this to higher orders and higher dimensions
yourself. This is dead easy to calculate, but it is not what is usually
understood by a wavelet transform.

If you look at the eight Haar functions you see we have four functions
that code the highest resolution detail, two functions that code the
coarser detail, one function that codes the coarser detail still, and the 
top function that codes the average value for the whole `image'.
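
If you want to play with this, here is a rough Python sketch of the
transform just described; the haar_transform name and the use of numpy
are illustrative choices only, not taken from any particular package.

    import numpy as np

    def haar_transform(x):
        # Orthonormal Haar transform of a length-2^k signal.  On return,
        # out[0] is the (scaled) overall average, out[1] the coarsest
        # detail, and the last n/2 entries are the finest details.
        out = np.asarray(x, dtype=float).copy()
        n = len(out)
        while n > 1:
            half = n // 2
            evens = out[:n:2].copy()
            odds  = out[1:n:2].copy()
            out[:half]  = (evens + odds) / np.sqrt(2)   # running averages
            out[half:n] = (evens - odds) / np.sqrt(2)   # detail at this scale
            n = half
        return out

    # Example: 8 coefficients, one per basis function drawn above.
    print(haar_transform([1, 2, 3, 4, 5, 6, 7, 8]))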

Haar functions can be used to code images instead of the DFT. With bilevel
images (such as text) the result can look better, and it is quicker to code.
Flattish regions, textures, and soft edges in scanned images get a nasty
`blocking' feel to them. This is obvious on hardcopy, but can be disguised on
color CRTs by the effects of the shadow mask. The DCT gives more consistent
results.

This connects up with another bit of maths sometimes called Multispectral
Image Analysis, sometimes called Image Pyramids.

Suppose you want to produce a discretely sampled image from a continuous 
function. You would do this by effectively `scanning' the function using a
sinc function [ sin(x)/x ] `aperture'. This was proved by Shannon in the 
`forties. You can do the same thing starting with a high resolution
discretely sampled image. You can then get a whole set of images showing 
the edges at different resolutions by differencing the image at one
resolution with another version at another resolution. If you have made this
set of images properly, they ought to all add together to give the
original image.

This is an expansion of data. Suppose you started off with a 1K*1K image.
You may now have a 64*64 low resolution image plus difference images at
128*128, 256*256, 512*512 and 1K*1K.
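
Here is a rough Python sketch of such a pyramid. For brevity it uses
plain 2x2 block averaging and nearest-neighbour upsampling in place of
the sinc `aperture' described above; the point is only that the
difference images add back up to the original exactly.

    import numpy as np

    def build_pyramid(img, levels):
        # Returns a low-resolution base image plus one difference
        # ("edge") image per level, finest first.  With a 1K*1K input
        # and levels=4 the base is 64*64 and the differences are at
        # 128*128, 256*256, 512*512 and 1K*1K.
        diffs = []
        current = np.asarray(img, dtype=float)
        for _ in range(levels):
            h, w = current.shape
            low = current.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
            up = np.kron(low, np.ones((2, 2)))   # back to current size
            diffs.append(current - up)           # detail lost at this level
            current = low
        return current, diffs

    def reconstruct(base, diffs):
        # Adding the differences back in, coarse to fine, recovers the
        # original image.
        current = base
        for d in reversed(diffs):
            current = np.kron(current, np.ones((2, 2))) + d
        return current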

Where has this extra data come from? If you look at the difference images you 
will see there is obviously some redundancy as most of the values are near 
zero. From the way we constructed the levels we know that locally the average
must approach zero in all levels but the top. We could then construct a set of
functions out of the sinc functions at any level so that their total value
at all higher levels is zero. This gives us an orthonormal set of basis 
functions for a transform. The transform resembles the Haar transform a bit,
but has symmetric wave pulses that decay away continuously in either direction
rather than square waves that cut off sharply. This transform is the
wavelet transform ( got to the point at last!! ).

These wavelet functions have been likened to the edge detecting functions
believed to be present in the human retina.

Loren I. Petrich <> adds that order 2 or 3 Daubechies discrete wavelet
transforms have a speed comparable to that of the DCT, and usually
achieve compression about a factor of 2 better than the JPEG 8*8 DCT for
the same image quality. (See item 25 in part 1 of this FAQ for
references on fast DCT algorithms.)
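
As a rough illustration, the following Python sketch performs one
analysis step of the Daubechies order-2 (D4) transform, assuming
periodic boundary handling; repeating the step on the approximation
half gives the full multi-level transform.

    import numpy as np

    SQ3 = np.sqrt(3.0)
    # D4 lowpass (scaling) filter and the matching highpass (wavelet) filter.
    H = np.array([1 + SQ3, 3 + SQ3, 3 - SQ3, 1 - SQ3]) / (4 * np.sqrt(2))
    G = np.array([H[3], -H[2], H[1], -H[0]])

    def d4_step(x):
        # One level of analysis with periodic extension; returns the
        # (approximation, detail) pair, each half the length of x.
        x = np.asarray(x, dtype=float)
        n = len(x)
        idx = (2 * np.arange(n // 2)[:, None] + np.arange(4)) % n
        windows = x[idx]          # rows of 4 consecutive (wrapped) samples
        return windows @ H, windows @ G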
