
WAVELET

A wavelet is a wave-like oscillation with an amplitude that starts out at zero, increases, and then decreases back to zero. It can typically be visualized as a "brief oscillation" like one might see recorded by a seismograph or heart monitor. Generally, wavelets are purposefully crafted to have specific properties that make them useful for signal processing. Wavelets can be combined, using a "reverse, shift, multiply and sum" technique called convolution, with portions of an unknown signal to extract information from that signal. As a mathematical tool, wavelets can be used to extract information from many different kinds of data, including - but certainly not limited to - audio signals and images.

Sets of wavelets are generally needed to analyze data fully. A set of "complementary" wavelets will deconstruct data without gaps or overlap so that the deconstruction process is mathematically reversible. Thus, sets of complementary wavelets are useful in wavelet-based compression/decompression algorithms where it is desirable to recover the original information with minimal loss. In formal terms, this representation is a wavelet series representation of a square-integrable function with respect to either a complete, orthonormal set of basis functions, or an overcomplete set or frame of a vector space, for the Hilbert space of square-integrable functions.
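As an illustration of the "reverse, shift, multiply and sum" idea, the following minimal sketch (assuming NumPy and a hand-rolled Mexican hat wavelet; the signal and all parameters are made up for the example) convolves a short wavelet with a signal and uses the peak of the response to locate a transient event in time.

```python
import numpy as np

def mexican_hat(t, sigma=1.0):
    """Mexican hat wavelet: zero at the ends, a brief oscillation around t = 0."""
    return (2 / (np.sqrt(3 * sigma) * np.pi ** 0.25)
            * (1 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2 * sigma ** 2)))

# Unknown signal: a slow oscillation plus a short transient "event" near t = 6.
t = np.linspace(0, 10, 1000)
dt = t[1] - t[0]
signal = np.sin(2 * np.pi * 0.3 * t) + np.exp(-((t - 6) / 0.05) ** 2)

# Wavelet sampled on the same grid spacing, with short support around zero.
tau = np.arange(-1.0, 1.0 + dt, dt)
w = mexican_hat(tau, sigma=0.05)

# "Reverse, shift, multiply and sum": np.convolve flips the kernel and slides it
# across the signal, summing the products at every shift.
response = np.convolve(signal, w, mode="same")

# The response magnitude peaks where the wavelet matches the transient, locating it in time.
print("event detected near t =", round(float(t[np.argmax(np.abs(response))]), 2))
```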
Contents

1 Name
2 Wavelet theory
  2.1 Continuous wavelet transforms (continuous shift and scale parameters)
  2.2 Discrete wavelet transforms (discrete shift and scale parameters)
  2.3 Multiresolution discrete wavelet transforms
3 Mother wavelet
4 Comparisons with Fourier transform (continuous-time)
5 Definition of a wavelet
  5.1 Scaling filter
  5.2 Scaling function
  5.3 Wavelet function
6 Applications of discrete wavelet transform
7 History
  7.1 Timeline
8 Wavelet transforms
  8.1 Generalized transforms
9 List of wavelets
  9.1 Discrete wavelets
  9.2 Continuous wavelets
    9.2.1 Real-valued
    9.2.2 Complex-valued
10 See also
11 Notes
12 References
13 External links
Wavelet theory
Wavelet theory is applicable to several subjects. All wavelet transforms may be considered forms of time-frequency representation for continuous-time (analog) signals and so are related to harmonic analysis. Almost all practically useful discrete wavelet transforms use discrete-time filter banks. In wavelet nomenclature, these filter banks produce the wavelet and scaling coefficients. These filter banks may contain either finite impulse response (FIR) or infinite impulse response (IIR) filters.

The wavelets forming a continuous wavelet transform (CWT) are subject to the uncertainty principle of Fourier analysis and the corresponding sampling theory: given a signal with some event in it, one cannot assign simultaneously an exact time and an exact frequency (response scale) to that event. The product of the uncertainties of time and frequency has a lower bound. Thus, in the scaleogram of a continuous wavelet transform of this signal, such an event marks an entire region in the time-scale plane, instead of just one point. Discrete wavelet bases may also be considered in the context of other forms of the uncertainty principle.

Wavelet transforms are broadly divided into three classes: continuous, discrete and multiresolution-based.
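The time-frequency lower bound can be checked numerically. The sketch below (NumPy; a Gaussian-windowed oscillation as the analyzing atom, with made-up parameters) estimates the RMS time spread and RMS frequency spread and prints their product, which for a Gaussian window sits at the lower bound of 1/2 in this convention.

```python
import numpy as np

# A Gaussian-windowed oscillation (a Gabor/Morlet-type atom) with made-up parameters.
t = np.linspace(-20, 20, 20001)
dt = t[1] - t[0]
psi = np.exp(1j * 5.0 * t) * np.exp(-t ** 2 / 2)

def rms_spread(values, weights):
    """RMS spread of `values` under the normalized energy density `weights`."""
    w = weights / np.sum(weights)
    mean = np.sum(values * w)
    return np.sqrt(np.sum((values - mean) ** 2 * w))

# Time spread from |psi(t)|^2.
sigma_t = rms_spread(t, np.abs(psi) ** 2)

# Frequency spread from |psi_hat(omega)|^2, with omega in radians per unit time.
psi_hat = np.fft.fftshift(np.fft.fft(psi))
omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(t), d=dt))
sigma_w = rms_spread(omega, np.abs(psi_hat) ** 2)

# The product cannot go below 1/2 in this convention; the Gaussian window attains it.
print(round(float(sigma_t * sigma_w), 3))
```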

Continuous wavelet transforms (continuous shift and scale parameters)


In continuous wavelet transforms, a given signal of finite energy is projected on a continuous family of frequency bands (or similar subspaces of the $L^2$ function space $L^2(\mathbb{R})$). For instance the signal may be represented on every frequency band of the form $[f, 2f]$ for all positive frequencies $f > 0$. Then, the original signal can be reconstructed by a suitable integration over all the resulting frequency components.

The frequency bands or subspaces (sub-bands) are scaled versions of a subspace at scale 1. This subspace in turn is in most situations generated by the shifts of one generating function $\psi \in L^2(\mathbb{R})$, the mother wavelet. For the example of the scale-one frequency band $[1, 2]$ this function is

$$\psi(t) = 2\,\operatorname{sinc}(2t) - \operatorname{sinc}(t) = \frac{\sin(2\pi t) - \sin(\pi t)}{\pi t}$$

with the (normalized) sinc function $\operatorname{sinc}(t) = \sin(\pi t)/(\pi t)$. Other example mother wavelets are the Meyer, Morlet and Mexican hat wavelets.
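For concreteness, here are standard closed forms for two of these mother wavelets (the Meyer wavelet is usually defined in the frequency domain and is omitted). This is a minimal NumPy sketch; the sampling grid and parameters are arbitrary choices for the example.

```python
import numpy as np

def mexican_hat(t, sigma=1.0):
    """Mexican hat wavelet: proportional to the second derivative of a Gaussian."""
    return (2 / (np.sqrt(3 * sigma) * np.pi ** 0.25)
            * (1 - (t / sigma) ** 2) * np.exp(-t ** 2 / (2 * sigma ** 2)))

def morlet(t, omega0=5.0):
    """Morlet wavelet (simplified form): a complex exponential under a Gaussian window."""
    return np.pi ** -0.25 * np.exp(1j * omega0 * t) * np.exp(-t ** 2 / 2)

t = np.linspace(-5, 5, 1001)
dt = t[1] - t[0]
psi_mh = mexican_hat(t)
psi_mo = morlet(t)

# Both are brief oscillations that decay to zero, with (approximately) zero mean,
# which is the basic admissibility property expected of a mother wavelet.
print(round(float(np.sum(psi_mh) * dt), 6), round(float(np.sum(psi_mo.real) * dt), 6))
```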

The subspace of scale $a$ or frequency band $[1/a, 2/a]$ is generated by the functions (sometimes called child wavelets)

$$\psi_{a,b}(t) = \frac{1}{\sqrt{a}}\,\psi\!\left(\frac{t-b}{a}\right),$$

where $a$ is positive and defines the scale and $b$ is any real number and defines the shift. The pair $(a, b)$ defines a point in the right half-plane $\mathbb{R}_{+} \times \mathbb{R}$.

The projection of a function $x$ onto the subspace of scale $a$ then has the form

$$x_a(t) = \int_{\mathbb{R}} WT_{\psi}\{x\}(a,b)\,\psi_{a,b}(t)\,db$$

with wavelet coefficients

$$WT_{\psi}\{x\}(a,b) = \langle x, \psi_{a,b} \rangle = \int_{\mathbb{R}} x(t)\,\overline{\psi_{a,b}(t)}\,dt.$$
See a list of some Continuous wavelets. For the analysis of the signal x, one can assemble the wavelet coefficients into a scaleogram of the signal.
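As a sketch of how the coefficients $WT_{\psi}\{x\}(a,b)$ can be approximated numerically (a Riemann-sum approximation with the Mexican hat mother wavelet; the signal and the (a, b) grid are made up for the example):

```python
import numpy as np

def mexican_hat(t):
    return (2 / (np.sqrt(3) * np.pi ** 0.25)) * (1 - t ** 2) * np.exp(-t ** 2 / 2)

def cwt_coefficient(x, t, a, b):
    """Approximate WT{x}(a, b) = integral of x(t) * conj(psi_{a,b}(t)) dt by a Riemann sum."""
    dt = t[1] - t[0]
    psi_ab = mexican_hat((t - b) / a) / np.sqrt(a)   # child wavelet psi_{a,b}
    return np.sum(x * np.conj(psi_ab)) * dt

# Example signal: a burst of oscillation centered at t = 0.
t = np.linspace(-10, 10, 4001)
x = np.cos(2 * np.pi * 1.5 * t) * np.exp(-t ** 2)

# Coefficients over a small (a, b) grid: rows of such values form a scaleogram.
for a in (0.1, 0.3, 1.0, 3.0):
    coeffs = [cwt_coefficient(x, t, a, b) for b in (-5.0, 0.0, 5.0)]
    print(a, [round(float(c), 3) for c in coeffs])
```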

Discrete wavelet transforms (discrete shift and scale parameters)


It is computationally impossible to analyze a signal using all wavelet coefficients, so one may wonder if it is sufficient to pick a discrete subset of the upper half-plane to be able to reconstruct a signal from the corresponding wavelet coefficients. One such system is the affine system for some real parameters $a > 1$, $b > 0$. The corresponding discrete subset of the half-plane consists of all the points $(a^m, n\,a^m b)$ with integers $m, n \in \mathbb{Z}$. The corresponding child wavelets are now given as

$$\psi_{m,n}(t) = a^{-m/2}\,\psi\!\left(a^{-m} t - n b\right).$$

A sufficient condition for the reconstruction of any signal $x$ of finite energy by the formula

$$x(t) = \sum_{m \in \mathbb{Z}} \sum_{n \in \mathbb{Z}} \langle x, \psi_{m,n} \rangle \cdot \psi_{m,n}(t)$$

is that the functions $\{\psi_{m,n} : m, n \in \mathbb{Z}\}$ form a tight frame of $L^2(\mathbb{R})$.
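A minimal sketch of such a discrete system for the typical dyadic choice a = 2, b = 1, again using the Mexican hat mother wavelet and a made-up signal (NumPy; the coefficients are approximated by Riemann sums rather than exact integrals):

```python
import numpy as np

def mexican_hat(t):
    return (2 / (np.sqrt(3) * np.pi ** 0.25)) * (1 - t ** 2) * np.exp(-t ** 2 / 2)

a, b = 2.0, 1.0                      # typical dyadic choice
t = np.linspace(-20, 20, 8001)
x = np.cos(2 * np.pi * 1.5 * t) * np.exp(-t ** 2)
dt = t[1] - t[0]

# Discrete family psi_{m,n}(t) = a^(-m/2) * psi(a^(-m) t - n b) on the grid (a^m, n a^m b).
coeffs = {}
for m in range(-2, 3):
    for n in range(-4, 5):
        psi_mn = a ** (-m / 2) * mexican_hat(a ** (-m) * t - n * b)
        coeffs[(m, n)] = np.sum(x * psi_mn) * dt   # <x, psi_{m,n}> (real wavelet)

# Each (m, n) corresponds to scale a^m and shift n * a^m * b in the half-plane.
largest = max(coeffs, key=lambda k: abs(coeffs[k]))
print("largest coefficient at (m, n) =", largest)
```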

Multiresolution discrete wavelet transforms

[Figure: the Daubechies D4 wavelet]

In any discretised wavelet transform, there are only a finite number of wavelet coefficients for each bounded rectangular region in the upper half-plane. Still, each coefficient requires the evaluation of an integral. To avoid this numerical complexity, one needs one auxiliary function, the father wavelet $\varphi \in L^2(\mathbb{R})$. Further, one has to restrict $a$ to be an integer. A typical choice is $a = 2$ and $b = 1$. The most famous pair of father and mother wavelets is the Daubechies 4-tap wavelet.

From the mother and father wavelets one constructs the subspaces

$$V_m = \operatorname{span}(\varphi_{m,n} : n \in \mathbb{Z}), \quad \text{where } \varphi_{m,n}(t) = 2^{-m/2}\,\varphi(2^{-m} t - n)$$

and

$$W_m = \operatorname{span}(\psi_{m,n} : n \in \mathbb{Z}), \quad \text{where } \psi_{m,n}(t) = 2^{-m/2}\,\psi(2^{-m} t - n).$$

From these one requires that the sequence

$$\{0\} \subset \cdots \subset V_1 \subset V_0 \subset V_{-1} \subset \cdots \subset L^2(\mathbb{R})$$

forms a multiresolution analysis of $L^2(\mathbb{R})$ and that the subspaces $\ldots, W_1, W_0, W_{-1}, \ldots$ are the orthogonal "differences" of the above sequence, that is, $W_m$ is the orthogonal complement of $V_m$ inside the subspace $V_{m-1}$:

$$V_m \oplus W_m = V_{m-1}.$$

In analogy to the sampling theorem one may conclude that the space $V_m$ with sampling distance $2^m$ more or less covers the frequency baseband from $0$ to $2^{-m-1}$. As orthogonal complement, $W_m$ roughly covers the band $[2^{-m-1}, 2^{-m}]$.

From those inclusions and orthogonality relations, especially $V_0 \oplus W_0 = V_{-1}$, follows the existence of sequences $h = \{h_n\}_{n \in \mathbb{Z}}$ and $g = \{g_n\}_{n \in \mathbb{Z}}$ that satisfy the identities

$$h_n = \langle \varphi_{0,0}, \varphi_{-1,n} \rangle \quad \text{and} \quad \varphi(t) = \sqrt{2} \sum_{n \in \mathbb{Z}} h_n\,\varphi(2t - n)$$

and

$$g_n = \langle \psi_{0,0}, \varphi_{-1,n} \rangle \quad \text{and} \quad \psi(t) = \sqrt{2} \sum_{n \in \mathbb{Z}} g_n\,\varphi(2t - n).$$

The second identity of the first pair is a refinement equation for the father wavelet $\varphi$. Both pairs of identities form the basis for the algorithm of the fast wavelet transform. Note that not every discrete wavelet orthonormal basis can be associated to a multiresolution analysis; for example, the Journé wavelet admits no multiresolution analysis.[2]
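As an illustration of how these filter relations turn into the fast wavelet transform, the sketch below (a minimal NumPy example using the Haar filters h = (1/√2, 1/√2) and g = (1/√2, −1/√2); the signal is made up) performs one analysis level by filtering and downsampling, then reconstructs the input exactly.

```python
import numpy as np

# Haar analysis filters, consistent with the refinement relations
#   phi(t) = sqrt(2) * (h_0 phi(2t) + h_1 phi(2t - 1)),  h = (1/sqrt(2), 1/sqrt(2))
#   psi(t) = sqrt(2) * (g_0 phi(2t) + g_1 phi(2t - 1)),  g = (1/sqrt(2), -1/sqrt(2))
h = np.array([1.0, 1.0]) / np.sqrt(2)   # scaling (low-pass) filter
g = np.array([1.0, -1.0]) / np.sqrt(2)  # wavelet (high-pass) filter

def fwt_level(x):
    """One analysis level of the fast wavelet transform: filter, then downsample by 2."""
    x = np.asarray(x, dtype=float)
    approx = x[0::2] * h[0] + x[1::2] * h[1]   # scaling coefficients (V space)
    detail = x[0::2] * g[0] + x[1::2] * g[1]   # wavelet coefficients (W space)
    return approx, detail

def ifwt_level(approx, detail):
    """Inverse of one level: upsample and combine with the synthesis filters."""
    x = np.empty(2 * len(approx))
    x[0::2] = approx * h[0] + detail * g[0]
    x[1::2] = approx * h[1] + detail * g[1]
    return x

x = np.array([4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0])
a1, d1 = fwt_level(x)
print(np.allclose(ifwt_level(a1, d1), x))   # perfect reconstruction -> True
```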

Definition of a wavelet
There are a number of ways of defining a wavelet (or a wavelet family).

Scaling filter
An orthogonal wavelet is entirely defined by the scaling filter, a low-pass finite impulse response (FIR) filter of length 2N and sum 1. In biorthogonal wavelets, separate decomposition and reconstruction filters are defined. For analysis with orthogonal wavelets, the high-pass filter is calculated as the quadrature mirror filter of the low-pass filter, and the reconstruction filters are the time reverse of the decomposition filters. Daubechies and Symlet wavelets can be defined by the scaling filter.
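A minimal sketch of these relations, assuming the Daubechies D4 scaling filter and NumPy (the normalization shown sums the taps to 1; other sources scale the same taps by √2):

```python
import numpy as np

# Daubechies D4 scaling (low-pass) filter, normalized here so the taps sum to 1
# (the orthonormal convention multiplies these taps by sqrt(2)).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / 8.0
print(h.sum())   # -> 1.0

# For orthogonal wavelets the analysis high-pass filter is the quadrature mirror
# of the low-pass: reverse the taps and alternate the signs.
g = ((-1) ** np.arange(len(h))) * h[::-1]

# Reconstruction filters are the time reverse of the decomposition filters.
h_rec, g_rec = h[::-1], g[::-1]
print(g)
```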

Scaling function
Wavelets are defined by the wavelet function $\psi(t)$ (i.e. the mother wavelet) and the scaling function $\varphi(t)$ (also called the father wavelet) in the time domain.

The wavelet function is in effect a band-pass filter, and scaling it for each level halves its bandwidth. This creates the problem that in order to cover the entire spectrum, an infinite number of levels would be required. The scaling function filters the lowest level of the transform and ensures all the spectrum is covered. See [1] for a detailed explanation. For a wavelet with compact support, $\varphi(t)$ can be considered finite in length and is equivalent to the scaling filter g.

Meyer wavelets can be defined by scaling functions.
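To see why the scaling function is needed to close off the bottom of the spectrum, the following sketch (Haar filters, NumPy; purely illustrative) evaluates the magnitude responses of the scaling (low-pass) and wavelet (high-pass) filters at a few frequencies:

```python
import numpy as np

# Haar filters: the scaling filter is low-pass, the wavelet filter is high-pass.
h = np.array([1.0, 1.0]) / np.sqrt(2)
g = np.array([1.0, -1.0]) / np.sqrt(2)

def magnitude_response(filt, n_freq=5):
    """Evaluate |H(e^{j*omega})| at a few frequencies between 0 and pi."""
    omega = np.linspace(0, np.pi, n_freq)
    z = np.exp(-1j * np.outer(omega, np.arange(len(filt))))
    return omega, np.abs(z @ filt)

w, H = magnitude_response(h)
_, G = magnitude_response(g)
# Near omega = 0 the scaling filter passes and the wavelet filter blocks; near
# omega = pi the roles are reversed. At each deeper level the low-pass output is
# downsampled, so the wavelet band only ever covers the top half of what remains,
# which is why the scaling function must cover the lowest level of the transform.
for wi, Hi, Gi in zip(w, H, G):
    print(f"omega={wi:.2f}  |H|={Hi:.3f}  |G|={Gi:.3f}")
```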

Wavelet function
The wavelet only has a time-domain representation, as the wavelet function $\psi(t)$.

For instance, Mexican hat wavelets can be defined by a wavelet function. See a list of a few Continuous wavelets.

Wavelet transforms
A wavelet is a mathematical function used to divide a given function or continuous-time signal into different scale components. Usually one can assign a frequency range to each scale component. Each scale component can then be studied with a resolution that matches its scale. A wavelet transform is the representation of a function by wavelets. The wavelets are scaled and translated copies (known as "daughter wavelets") of a finite-length or fast-decaying oscillating waveform (known as the "mother wavelet").

Wavelet transforms have advantages over traditional Fourier transforms for representing functions that have discontinuities and sharp peaks, and for accurately deconstructing and reconstructing finite, non-periodic and/or non-stationary signals.

Wavelet transforms are classified into discrete wavelet transforms (DWTs) and continuous wavelet transforms (CWTs). Note that both DWT and CWT are continuous-time (analog) transforms. They can be used to represent continuous-time (analog) signals. CWTs operate over every possible scale and translation, whereas DWTs use a specific subset of scale and translation values or a representation grid.

List of wavelets
Discrete wavelets

Beylkin (18)
BNC wavelets
Coiflet (6, 12, 18, 24, 30)
Cohen-Daubechies-Feauveau wavelet (sometimes referred to as CDF N/P or Daubechies biorthogonal wavelets)
Daubechies wavelet (2, 4, 6, 8, 10, 12, 14, 16, 18, 20)
Binomial-QMF (also referred to as Daubechies wavelet)
Haar wavelet
Mathieu wavelet
Legendre wavelet
Villasenor wavelet
Symlet[14]

Continuous wavelets

Real-valued
Beta wavelet
Hermitian wavelet
Hermitian hat wavelet
Mexican hat wavelet
Shannon wavelet

Complex-valued
Complex Mexican hat wavelet
Morlet wavelet
Shannon wavelet
Modified Morlet wavelet

In this paper, we propose a combined face and ear approach that uses 3D data. An annotated deformable model is constructed for each object class, face and ear. Each model is fitted to the corresponding 3D data sets using a subdivision-based deformable framework. Subsequently, the geometry image of the deformed model is computed, and wavelet coefficients are extracted. These coefficients form a multimodal biometric signature that achieves state-of-the-art performance. The method is automatic, robust and efficient, and it requires no training as it does not use statistical data. It is shown that each modality compensates for the shortcomings of the other, thus making 3D faces and ears a very accurate multimodal biometric.

The rest of the paper is organized as follows: Section 2 describes the methods we have developed, Section 3 describes the biometric databases, Section 4 presents our state-of-the-art performance, while Section 5 summarizes our work.

2. Methods
The proposed method processes each face and ear data set through a common pipeline of algorithms. The only difference between the processing of faces and ears is that each uses its own annotated model. This model is representative of the respective class (face or ear) and is purely geometrical. The model is used for registering each data set and then, through a fitting process, acquires its shape.

Preprocessing: Raw data are preprocessed and segmented.

Registration: The raw data are registered to the annotated model using a two-phase algorithm.

Deformable model fitting: The annotated model is fitted to the data. The fitted model is then converted to a geometry image.

Wavelet analysis: A wavelet transform is applied on the geometry image (and derived normal image) and the wavelet coefficients are exported and stored.
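The transform actually used is not spelled out in this excerpt, so the following is only an illustrative sketch: a single-level separable 2D Haar decomposition of a stand-in geometry image (random 64x64x3 coordinates), with the sub-band coefficients concatenated into a signature vector. The image size, the number of levels and the wavelet family are assumptions.

```python
import numpy as np

def haar_pairs(x):
    """One Haar analysis step along the last axis: pairwise averages and differences."""
    a = (x[..., 0::2] + x[..., 1::2]) / np.sqrt(2)
    d = (x[..., 0::2] - x[..., 1::2]) / np.sqrt(2)
    return a, d

def haar2d_level(img):
    """One level of a separable 2D Haar transform (columns within rows, then rows)."""
    lo, hi = haar_pairs(img)                          # pair adjacent columns in each row
    ll, lh = haar_pairs(np.swapaxes(lo, -1, -2))      # then pair adjacent rows
    hl, hh = haar_pairs(np.swapaxes(hi, -1, -2))
    return tuple(np.swapaxes(b, -1, -2) for b in (ll, lh, hl, hh))

# Stand-in geometry image: a 64x64 grid of (x, y, z) coordinates per pixel.
geometry_image = np.random.rand(64, 64, 3)

# Transform each coordinate channel and keep all sub-band coefficients as the signature.
bands = [haar2d_level(geometry_image[:, :, c]) for c in range(3)]
signature = np.concatenate([sub.ravel() for chan in bands for sub in chan])
print(signature.shape)   # all coefficients from one decomposition level
```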

2.1. Annotated models


We have constructed two annotated models, the annotated face model (AFM) and the annotated ear model (AEM), both depicted in Fig. 1. These models need to be created only once, and are used to process any number of data sets. Both models share some basic characteristics. They are both average shapes of their class, created from statistical data. They are represented by a 3D polygonal mesh that mainly has valence-6 vertices. This is due to the fact that these vertices define the control points of a subdivision surface as explained in Section 2.4. The AEM utilizes only the inner area of the ear (called concha) because the outer part of the ear is usually occluded by hair or other accessories, thus limiting its value as a biometric characteristic.

Certain areas are annotated on each model (see Fig. 1). The annotation was done empirically and is based on the physiology of the face and ear. These areas have different properties associated with them which are utilized by our method (e.g., rigidness, feature importance, resilience to noise).
2.2. Preprocessing
The purpose of preprocessing is mainly to eliminate sensor-specific problems and, in the case of the ear data sets, to segment them from the rest of the head (see Fig. 2). In general, modern 3D sensors output either a range image or 3D polygonal data, but in our experiments we used only range images from laser scanners. Therefore, the following preprocessing algorithms operate directly on the range data, before the conversion to polygonal data.

Segmentation: Only ear data sets need segmentation; the face data sets omit this step. We keep the 3D geometry that resides within a sphere of a certain radius that is centered roughly on the ear pit. Using a custom tool, a human operator places the center of this sphere, guided by information such as the center of mass and the average normal. This segmentation is the only part of our method that is not fully automatic and we are currently in the process of automating it.

Median cut: This filter is applied to remove spikes from the data. Spikes are common issues with laser range scanners, especially in certain areas such as the eyes or the ear pit. A 3×3 kernel is used.

Hole filling: Laser scanners usually produce holes in certain areas where hair is present, so a hole-filling procedure is applied.

Smoothing: A smoothing filter is applied to remove white noise, as most high-resolution scanners produce noisy data in real-life conditions. A 3×3 kernel is used. (A minimal sketch of the segmentation and filtering steps is given after this list.)

Subsampling: Only face data sets need subsampling; the ear data sets omit this step. Using data with resolutions higher than the AFM is a waste of computational time, as the AFM effectively resamples the data during fitting.
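The following is an illustrative sketch of the segmentation and filtering steps, not the authors' implementation: it assumes SciPy's ndimage filters, uses a simple 3×3 mean as a stand-in for the smoothing filter, replaces cropped-out points with zeros, and omits hole filling and subsampling. The ear-pit center, radius and image sizes are made up.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def preprocess_range_image(z, xy, ear_pit=None, radius=None):
    """Illustrative preprocessing: optional spherical crop, 3x3 median cut, 3x3 smoothing.

    z  : HxW range (depth) image
    xy : HxWx2 per-pixel x, y coordinates reported by the scanner
    """
    if ear_pit is not None and radius is not None:
        # Segmentation (ears only): keep the geometry inside a sphere centered on the
        # operator-selected ear pit; everything outside is discarded.
        pts = np.dstack([xy, z])                                   # HxWx3 point grid
        dist = np.linalg.norm(pts - np.asarray(ear_pit, dtype=float), axis=2)
        z = np.where(dist <= radius, z, 0.0)

    z = median_filter(z, size=3)      # median cut: removes spikes, 3x3 kernel
    z = uniform_filter(z, size=3)     # smoothing stand-in: 3x3 mean against white noise
    return z

# Toy example on a synthetic 128x128 range image (a shallow cone).
h, w = 128, 128
ys, xs = np.mgrid[0:h, 0:w].astype(float)
z = np.hypot(xs - w / 2, ys - h / 2) / 10.0
cleaned = preprocess_range_image(z, np.dstack([xs, ys]))
print(cleaned.shape)
```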
