
Lecture 8

Linear Filtering and Convolution

Linear Filtering and Convolution

• Linear filtering is used in a wide variety of applications including noise reduction, edge detection, and edge enhancement.

• Linear filtering is defined as an action in which the value of a pixel in the output image is a linear combination of corresponding neighboring pixels from the input image.

Linear Filtering and Convolution

• Linear filtering can be accomplished through convolution in the spatial domain. In the convolution process, each output pixel value is computed as a weighted sum of neighboring pixels from the input image.

• The matrix of weighting factors is referred to as the convolution kernel and represents the filter implementation. For our purposes these matrices will be of odd dimension (e.g. 3x3, 5x5, 7x7, etc.).

Linear Filtering and Convolution
• The weighting process may be expressed as:

R = w_1 I_1 + w_2 I_2 + … + w_mn I_mn = Σ_{i=1}^{mn} w_i I_i

Where m, n are the dimensions of the kernel (often a square matrix) and I represents the input image pixel intensities. The weighting factors are given by w.

Convolution
• As an example, consider a portion of an image (A) and
a convolution kernel (h):
A =
    17  24   1   8  15
    23   5   7  14  16
     4   6  13  20  22
    10  12  19  21   3
    11  18  25   2   9

h =
     8   1   6
     3   5   7
     4   9   2

Convolution

• Convolution steps:

– Slide the center element of the kernel to the pixel to be computed.

– Multiply the convolution weights in the kernel by the corresponding pixel values in the input image.

– Sum the individual products from the previous step. The result is the pixel value for the output image.

– Repeat over the entire image.
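The sliding-window steps above can be sketched directly in code. The lecture's tooling is MATLAB (e.g. conv2); the sketch below is an illustrative Python/NumPy version, with the name conv2d_valid our own choice:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Direct 2D convolution, 'valid' region only.

    The kernel is rotated 180 degrees (true convolution), then slid
    across the image; each output pixel is the sum of products of
    the kernel weights and the underlying image pixels.
    """
    kh, kw = kernel.shape
    k = np.flip(kernel)  # 180-degree rotation of the kernel
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):          # slide the kernel over the image ...
        for j in range(ow):
            # ... multiply weights by pixels and sum the products
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * k)
    return out
```

Only positions where the kernel fits entirely inside the image are computed, which is why the output shrinks relative to the input.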

Convolution

• Referring to the example, if the kernel is acting on A(2,4), the kernel weights (shown in parentheses, after the 180° rotation required by convolution) overlay the image as:

    17  24   1(2)   8(9)  15(4)
    23   5   7(7)  14(5)  16(3)
     4   6  13(6)  20(1)  22(8)
    10  12  19     21      3
    11  18  25      2      9

• The corresponding output pixel value is:

– 1x2 + 8x9 + 15x4 + 7x7 + 14x5 + 16x3 + 13x6 + 20x1 + 22x8 = 575
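As a numerical check (an illustrative NumPy computation, not part of the original lecture), the same 575 falls out of the weighted sum directly:

```python
import numpy as np

# The image block under the kernel (rows 1-3, columns 3-5 of A,
# in MATLAB's 1-based indexing) ...
block = np.array([[ 1,  8, 15],
                  [ 7, 14, 16],
                  [13, 20, 22]])
# ... and the kernel h, rotated 180 degrees as convolution requires
h = np.array([[8, 1, 6],
              [3, 5, 7],
              [4, 9, 2]])
value = np.sum(block * np.flip(h))  # np.flip with no axis flips both
print(value)  # 575
```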

Convolution

• This process is repeated for all of the pixels in the image, except those at the edges.

• That is, for a 3x3 convolution kernel, the output image is two pixels smaller in each dimension, owing to the need for the convolution kernel to act on neighboring pixels.

Convolution
• This method of “sliding” the kernel across the image
should be reminiscent of the classical definition of 2D
convolution:
g(x, y) ∗ f(x, y) = ∫∫ f(α, β) g(x − α, y − β) dα dβ

• Where α and β are the dummy variables of integration, the range of which is across the entire image.

Convolution
• This is simply an extension of 1D convolution as you
have encountered in signal analysis:
f_1(t) ∗ f_2(t) = ∫ f_1(τ) f_2(t − τ) dτ

• Here f_1 and f_2 are both functions of time, and τ is the independent variable of integration.
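A discrete analogue of this integral can be tried with np.convolve, which flips one sequence and slides it across the other exactly as the definition prescribes (an illustrative sketch; the ramp and rect lengths are arbitrary choices):

```python
import numpy as np

ramp = np.arange(5.0)            # linear ramp: [0, 1, 2, 3, 4]
rect = np.ones(3)                # rect (boxcar) of width 3
full = np.convolve(ramp, rect)   # 'full' output, length 5 + 3 - 1 = 7
print(full)                      # [0. 1. 3. 6. 9. 7. 4.]
```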
• This example graphically shows the convolution of a linear ramp and a rect function.

Convolution

• If the Fourier transforms of f(x,y) and g(x,y) are F(u,v) and G(u,v) respectively, then the convolution operation in frequency space is simply the point-by- point multiplication of the transforms:

g(x, y) ∗ f(x, y) ⇔ G(u, v) F(u, v)
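This identity is easy to verify numerically. One caution (our addition, not from the slide): the DFT version of the theorem corresponds to circular convolution, so the direct comparison below wraps indices around the image borders; the 8x8 size and random data are arbitrary:

```python
import numpy as np

n = 8
rng = np.random.default_rng(0)
f = rng.random((n, n))
g = rng.random((n, n))

# Point-by-point product of the transforms ...
prod = np.fft.fft2(f) * np.fft.fft2(g)

# ... versus direct circular convolution in the spatial domain
circ = np.zeros((n, n))
for x in range(n):
    for y in range(n):
        for a in range(n):
            for b in range(n):
                circ[x, y] += f[a, b] * g[(x - a) % n, (y - b) % n]

print(np.allclose(np.fft.ifft2(prod).real, circ))  # True
```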

Linear Filtering and Convolution

• Some practical considerations: application of kernels in the spatial domain avoids pixels near or at the edge of the image, since adequate neighbors do not exist.

• As an alternative, a one-sided kernel can be applied at the edge, or the edge can be regarded as a mirror.

Convolution

• Convolution can be accomplished in the frequency domain, but recall that the transform regards the image as one period of a periodic function of infinite extent.

• Regarding the image in this “wraparound” fashion will usually result in artifacts at the edges.

• The solution commonly used is to place the image within a larger array in which the borders are filled with a mean brightness value, or are smoothly interpolated from the values of the edge pixels.

Linear Filtering and Convolution

• In many cases, the convolution kernel is separable. That is, the matrix may be decomposed into a pair of 1D vectors that, applied individually to rows and columns, will produce the same result.

• If the separation operation can be performed, the computation time for filtering can be substantially reduced.

Linear Filtering and Convolution

• The requirement for separability is that the kernel matrix must be of rank = 1.

• If the separation is performed correctly, multiplication of the two vectors will return the original kernel.

• MATLAB provides functions for rank determination and separation.
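In NumPy the same check can be sketched with matrix_rank and the SVD (an illustration of the idea, not the MATLAB routine itself); a common weighted smoothing kernel serves as the example:

```python
import numpy as np

# A rank-1 (hence separable) weighted smoothing kernel
W = np.array([[1, 2, 1],
              [2, 4, 2],
              [1, 2, 1]]) / 16.0

print(np.linalg.matrix_rank(W))   # 1 -> separable

# Separate via SVD: the first singular triple carries everything
U, s, Vt = np.linalg.svd(W)
col = U[:, 0] * np.sqrt(s[0])     # 1D vector applied to the columns
row = Vt[0, :] * np.sqrt(s[0])    # 1D vector applied to the rows

# Multiplying the two vectors back together returns the kernel
print(np.allclose(np.outer(col, row), W))  # True
```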

Filtering

• In general, we will be dealing with finite impulse response (FIR) filters. These filters have a response to an impulse function that is finite in extent.

• FIR filters are simple to represent as matrices and are natural extensions of the 1D filters used in signal processing.

• There are many reliable methods for implementing FIR filters.

Filtering

• FIR filters can have linear phase response, which aids in preventing distortion (just as in the 1D signal case).

• Infinite impulse response (IIR) filters are difficult to implement, and tend to be unstable.

• The MATLAB image processing toolbox does not support IIR filters.

Low-pass Filtering

• Functions such as low-pass filtering would appear as obvious uses for convolution. There are a number of additional ones that include:

– Edge sharpening

– Edge detection

– Median filtering (strictly a nonlinear operation, implemented with the same sliding-neighborhood machinery rather than a convolution kernel)

Filtering

• Low pass filtering in medical imaging is most often used for noise reduction.

• Noise reduction is easily accomplished, but blurring will result as a byproduct of the filtering process, and is generally considered undesirable.

• Blurring can be useful however, for elimination of small details, and joining of gaps in boundaries prior to segmentation.

Filtering

• The simplest low pass filter implementation is the averaging filter.

• In this case the output pixel value is simply the average of the input pixel values over some neighborhood (e.g. 3x3, 5x5, etc.).

• Here, the elements of the convolution kernel all have a value of 1, scaled by the reciprocal of the number of elements so that the weights average rather than sum.
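A minimal sketch of the averaging filter (illustrative Python/NumPy; the function name and test image are our own):

```python
import numpy as np

def average_filter3(image):
    """3x3 averaging filter over the 'valid' region: every output
    pixel is the mean of its 3x3 input neighborhood."""
    kernel = np.ones((3, 3)) / 9.0   # all weights 1, normalized by 1/9
    oh, ow = image.shape[0] - 2, image.shape[1] - 2
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
    return out

# A single noise spike in a flat field is spread out and reduced:
img = np.zeros((5, 5))
img[2, 2] = 90.0
out = average_filter3(img)
print(out[1, 1])  # center drops from 90 to 10
```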

Averaging Filter

• The effect of the averaging filter is to reduce the magnitude of sharp transitions.

• These transitions can be either random noise, or edges – the filter cannot distinguish between the two.

• In general, blurring is desirable only when preparing images for segmentation.

Weighted Filtering

• The blurring problem can sometimes be mitigated using a weighted filter.

• In this case, the weighting factors are determined as a function of distance from the center (input pixel) of the convolution kernel.

• Weighting factors decrease with increasing distance from the center (input pixel).

Filtering

• Compare:

Non-weighted:                  Weighted:

          1  1  1                        1  2  1
W = 1/9 x 1  1  1             W = 1/16 x 2  4  2
          1  1  1                        1  2  1

• The coefficients in front of the kernels preserve the intensity scaling.
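The normalization can be checked directly: both sets of weights sum to one, so a constant-intensity region passes through either filter unchanged (illustrative NumPy sketch):

```python
import numpy as np

box      = np.ones((3, 3)) / 9.0          # non-weighted
weighted = np.array([[1, 2, 1],
                     [2, 4, 2],
                     [1, 2, 1]]) / 16.0   # weighted

# Each kernel's weights sum to 1 ...
print(np.isclose(box.sum(), 1.0), np.isclose(weighted.sum(), 1.0))  # True True

# ... so a flat patch keeps its intensity under either filter.
flat = np.full((3, 3), 200.0)
print(np.sum(flat * box))        # ~200.0
print(np.sum(flat * weighted))   # 200.0
```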

Linear Filtering and Convolution

• The general form of the convolution can be given as (where the denominator is the sum of the values in the kernel):

g(x, y) = [ Σ_{k=−a}^{a} Σ_{t=−b}^{b} w(k, t) f(x + k, y + t) ] / [ Σ_{k=−a}^{a} Σ_{t=−b}^{b} w(k, t) ]

where a = (m − 1)/2 and b = (n − 1)/2 for an m x n kernel.

Linear Filtering and Convolution

• Generally, 3x3 and 5x5 filters are used for noise reduction.

• Larger kernels are used for the deliberate elimination of small features prior to a segmentation effort or some other automated process, such as edge detection or region growing.

Linear Filtering and Convolution

• If you are beginning with a magnitude reconstructed image, the filtering steps in frequency space typically include:

– 2D FFT

– Center (fftshift) if necessary

– Apply filter function (scalar array multiplication)

– 2D IFT

– Magnitude

– Scaling
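The steps above can be strung together in a short sketch (illustrative Python/NumPy rather than the MATLAB toolbox; the ideal-cutoff filter shape and the radius parameter are our own choices):

```python
import numpy as np

def lowpass_frequency(image, radius):
    """Frequency-domain low-pass filtering, following the steps:
    FFT -> center -> apply filter array -> inverse FFT -> magnitude -> scale."""
    F = np.fft.fftshift(np.fft.fft2(image))        # 2D FFT, centered
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.hypot(y - rows // 2, x - cols // 2)  # distance from DC term
    H = (dist <= radius).astype(float)             # ideal low-pass filter array
    F = F * H                                      # point-by-point multiply
    out = np.fft.ifft2(np.fft.ifftshift(F))        # 2D inverse FFT
    out = np.abs(out)                              # magnitude
    return out / out.max() * 255.0                 # scale to display range

img = np.add.outer(np.arange(16.0), np.arange(16.0))
filtered = lowpass_frequency(img, radius=4)
print(filtered.shape, filtered.max())  # (16, 16) 255.0
```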

Linear Filtering and Convolution

• The filter function is implemented by computing an array of weighting factors on a pixel basis.

• This resulting array will have a pixel-by-pixel correspondence to the (centered) frequency domain representation of the input image.

• Scalar multiply the transform of the input image by this array (in MATLAB, use the .* syntax for multiplication, not * alone which performs true matrix multiplication).

Linear Filtering and Convolution

• Filter examples:

– Low Pass (typically for noise reduction)

– High Pass (typically for edge enhancement)

– High Emphasis