
ABSTRACT

Edge detection identifies points in a digital image where the image brightness changes

abruptly or has discontinuities, with the help of mathematical methods. Vision systems and object

recognition systems require edge detection. There are many edge detection operators, such as the

gradient operator and the Laplacian operator, which are based on the assumption that the edges in

an image are ideal step changes in intensity. Because of this assumption, thick and fragmented

edges are detected. Finding the true edges in an image remains a difficult task.

The quality of edge detection depends on many factors, such as lighting, the density of

edges in the scene and noise. The large search space is another problem with most existing

operators: without optimization, edge detection is time-consuming and exhausts memory.

To avoid this, several evolutionary algorithms are increasingly sought after for edge

detection, such as bacterial foraging search, particle swarm optimization, and

differential search algorithms. These algorithms do not require any preprocessing and are

therefore much more efficient than the earlier methods for edge detection.

INTRODUCTION

Edge detection is a major problem in image processing. It is a fundamental

technique for compressing the content of image data to simplify high-level image processing

tasks, since edges contain important information about the image and can help to

separate the shapes of objects. In an image, an edge marks the boundary

between two regions and is determined by a sharp discontinuity in the intensity values. Edges

can be determined in the spatial domain, by direct manipulation of the pixels of the image, or by

transforming the image into another domain (the frequency domain). To find edge pixels,

the position of the intensity discontinuity between neighbouring pixels must be detected. The

image resulting from edge detection is simply the outline of the input image, from which various

important features can be extracted, such as corners, lines and curves.

Much research has been done in the area of image segmentation by edge detection: the given

image is separated into object and background by detecting the edge between them.

Most edge detection techniques involve the calculation of a local first- or second-order

derivative, after which steps are taken to minimize the effects of noise. The literature

contains many edge detection techniques; Sobel, Prewitt and Roberts proposed some of the

first operators to detect edges in an image. These operators use gradient

methods to detect edges in a particular direction. Noise is the main factor that

inhibits these edge detection procedures. This problem was overcome by Canny, who convolved

the image with first-order derivatives of the Gaussian filter for smoothing in the direction of

the local gradient, followed by edge detection using thresholding.
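The gradient approach described above can be sketched with the classic 3×3 Sobel kernels. The following is a minimal, illustrative numpy implementation; the kernel values are the standard Sobel coefficients, while the threshold of 0.5 is an arbitrary illustrative choice, not a value taken from this report:

```python
import numpy as np

def sobel_edges(img, threshold=0.5):
    """Gradient-based edge detection with 3x3 Sobel kernels.

    img: 2-D float array; returns a boolean edge map.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel is the transpose
    h, w = img.shape
    gx = np.zeros((h, w))
    gy = np.zeros((h, w))
    # Cross-correlate interior pixels (borders left as zero for simplicity).
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    magnitude = np.hypot(gx, gy)   # gradient magnitude
    return magnitude > threshold
```

On a synthetic image with a vertical intensity step, the detected edge pixels line up along the step, which illustrates the directional, gradient-based nature of these operators.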

Ant colony optimization (ACO) is a nature-inspired algorithm based on the natural

behaviour of ant species: ants deposit pheromone on the ground while foraging for food.

ACO has been used to treat the problem of edge detection in images. In this approach, a pheromone

matrix is established based on the movements of a group of ants that travel over the image. This

matrix provides information about the edges present at each pixel location of the image. In

addition, the movement of the ants is driven by the local variation of the pixel intensity values.
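The two ingredients just described, a heuristic built from local intensity variation and a pheromone matrix that is evaporated and reinforced, can be sketched as follows. This is a simplified illustration only: the 4-neighbour variation measure, the evaporation rate `rho` and the deposit rule are illustrative assumptions, not the exact formulation used in the implemented ACO approach:

```python
import numpy as np

def local_variation(img):
    """Heuristic information: intensity variation at each pixel,
    approximated here by the absolute difference from the 4-neighbour mean."""
    padded = np.pad(img, 1, mode='edge')
    neigh = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
             padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    v = np.abs(img - neigh)
    return v / (v.max() + 1e-12)   # normalise to [0, 1]

def update_pheromone(tau, visited, eta, rho=0.1):
    """One global pheromone update: evaporate everywhere, then deposit
    an amount proportional to the heuristic on pixels visited by ants."""
    tau = (1 - rho) * tau          # evaporation
    tau[visited] += eta[visited]   # reinforcement on visited pixels
    return tau
```

After repeated updates, pheromone accumulates where the intensity variation (and hence the edge evidence) is high, which is exactly the role of the pheromone matrix described above.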

The PSO-based algorithm uses an optimization method to detect edges in images corrupted by

noise. Constraints are handled using a preservation method. The PSO-based algorithm

effectively maximizes the inter-set distance (i.e., the distance between the pixel intensities in

the two regions separated by a continuous edge) and minimizes the intra-set distance (the

distance between the pixel intensities within each region). It accurately detects fine,

continuous and smooth edges in complex images. A single initialization of the algorithm is

sufficient for all executions, making the algorithm faster. A coding scheme is used to represent

candidate edge curves in the search space. Each curve is evaluated with a fitness function

that measures how similar the pixels are within each sub-region and how different they are

between the two sub-regions.
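The inter-set/intra-set trade-off just described can be captured by a simple fitness function. The sketch below scores a candidate edge curve, represented here as a boolean mask splitting a region in two; the exact way the two terms are combined is an illustrative assumption, not the formula used in the implemented PSO approach:

```python
import numpy as np

def edge_fitness(region, mask):
    """Fitness of a candidate edge that splits `region` into two sub-regions.

    region: 2-D array of pixel intensities.
    mask:   boolean array, True for pixels on one side of the candidate curve.
    The score grows with the inter-set distance (difference between the two
    sub-region means) and shrinks with the intra-set distance (spread of
    intensities within each sub-region).
    """
    a, b = region[mask], region[~mask]
    inter = abs(a.mean() - b.mean())   # distance between the two regions
    intra = a.std() + b.std()          # spread within each region
    return inter / (1.0 + intra)
```

A mask that coincides with the true boundary separates two homogeneous sub-regions (high inter-set distance, zero intra-set distance) and therefore scores higher than a mask that cuts across both regions.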

We have implemented the work described above and tried to improve it with some

modifications.

LITERATURE

Digital Images

Digital images are representations of two-dimensional images as a set of pixel intensity values.

Digitization implies that an image is an approximation of a real scene. Mathematically,

a monochromatic image is a 2-D function f(x, y), where x and y are spatial coordinates. The

intensity, or gray level, of the image at any pair of coordinates (x, y) is the amplitude of f at

that point. There are three types of digital images, based on the number of bits needed to store

the intensity value for each pixel position:

Black and white (1 bit per pixel): These images can take only two values, black and white (0 or

1), and are the simplest type of image, so only 1 bit is required to represent each pixel. They

are generally used in applications where the required information is only the general form or

outline. Grayscale images are converted to binary images using a threshold operation, in which

all pixels above the threshold value are marked as white (1) and those below it are marked as

black (0).

2.1 Black and white image
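The threshold operation described above amounts to a single comparison per pixel; a minimal numpy sketch follows, where the default threshold of 128 is only a common midpoint choice for 8-bit images, not a value prescribed by the report:

```python
import numpy as np

def to_binary(gray, threshold=128):
    """Convert an 8-bit grayscale image to a binary image: pixels above
    the threshold become white (1), the rest black (0)."""
    return (gray > threshold).astype(np.uint8)
```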

Grayscale (8 bits per pixel): Grayscale images are classified as monochrome (single-color)

images. They provide gray-level information and no information about color. The available gray

levels are determined by the number of bits per pixel; the typical grayscale image uses 8 bits

per pixel, which allows 256 levels of gray.

2.2 Grayscale Image

RGB (24 bits per pixel): Color images are basically three-band monochromatic image data, where

each band represents a different color. The gray-level information in each spectral band is the

actual information stored in the image.

The bands are represented as Red, Green and Blue, hence RGB images. A color image has 24 bits

per pixel.

2.3 RGB Image

Noise

Noise means any unwanted signal. Noise in an image is a random variation of the brightness

levels or the color information that is not present in the original scene, and is usually an

aspect of electronic noise. Digital images can be contaminated by noise during

image acquisition and transmission.

Corrupted images must be restored in order to extract useful information, as noise severely

disrupts subsequent image processing operations. Digital image processing is therefore

performed to restore the images for subsequent use.

In most cases two types of noise are added to images: additive Gaussian

noise and impulse noise.

There are two types of impulse noise:

1) Fixed-valued impulse noise (salt-and-pepper).

2) Random-valued impulse noise.

Gaussian noise affects all pixels of the image. Such noise is introduced during the image

acquisition process; its probability density function is that of the normal distribution.
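The two noise models above can be simulated with numpy as follows; the parameter values (`sigma` for the Gaussian standard deviation, `amount` for the fraction of corrupted pixels) are illustrative defaults, and images are assumed to be floats in [0, 1]:

```python
import numpy as np

def add_gaussian_noise(img, sigma=0.05, rng=None):
    """Additive Gaussian noise: every pixel is perturbed by a sample
    drawn from N(0, sigma^2)."""
    rng = np.random.default_rng() if rng is None else rng
    return img + rng.normal(0.0, sigma, img.shape)

def add_salt_and_pepper(img, amount=0.05, rng=None):
    """Fixed-valued impulse noise: a fraction `amount` of pixels is set
    to the extreme values 0 (pepper) or 1 (salt)."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = img.copy()
    u = rng.random(img.shape)
    noisy[u < amount / 2] = 0.0        # pepper
    noisy[u > 1 - amount / 2] = 1.0    # salt
    return noisy
```

Note the difference the text points out: Gaussian noise perturbs every pixel, while impulse noise replaces only a fraction of pixels and leaves the rest untouched.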

Image features

An image has two types of features:


• Global features – these describe the image as a whole, including frequency-domain
descriptors, the intensity histogram, high-order statistics, the covariance
matrix, etc.
• Local features – these describe local regions with properties such as corners, edges,
curves, lines and regions with special properties.

Various features are useful, depending on the applications.

Image processing

The technique that converts an image into digital form and performs operations on it is

called image processing. It is done in order to obtain an enhanced image or to extract

information that is useful to us from it. In image processing the input is an image, such as a

video frame or a photograph, and the output may be an image or characteristics associated with

that image. It is a form of signal processing: an image processing system usually treats images

as two-dimensional signals and applies signal processing methods to them.

The two major tasks of image processing are: improvement of pictorial information to make it

suitable for human interpretation, and processing of image data for transmission, storage and

representation for autonomous machine perception.

Purpose of Image processing

The various purposes of image processing are divided into the following groups:

(1) Visualizing images – Observe objects that are not directly visible. Visualization is a

method for creating images, diagrams or animations to communicate a message.

(2) Sharpening and restoration of images – Image sharpening is a very important tool

that emphasizes texture and draws the viewer's focus, creating a better image. Digital camera

sensors and lenses always smooth an image to some extent, so correction is

required. The removal of degradations (motion blur, sensor noise, etc.) from images is called

image restoration. Filters such as median filters or low-pass filters are the

simplest possible approaches to noise removal; more sophisticated methods

use a model of the image to differentiate true detail from noise.

(3) Retrieval of images – Find the image of interest. A system for searching, browsing and

retrieving images from a large database of digital images is called an image retrieval system.

The most common and traditional techniques of image retrieval add captions, keywords or image

descriptors, so that retrieval can be performed over the annotation words.

(4) Recognition of images – It is used for identifying the objects present in an image. The

process begins with methods like noise removal, followed by basic-level feature extraction that

helps in locating regions, lines and areas with various texture levels.

Edge Detection

The mathematical method that identifies points in a digital image at which the image

brightness has discontinuities or changes sharply is called edge detection. The points of sharp

change are termed edges. Edge detection is an important tool used in the areas of feature

extraction and feature detection, and is widely used in image processing, machine

vision and computer vision.

Edges can be classified as follows based on intensity levels.

Step edges: The intensity in the image changes drastically from one value to a different value

at the point of discontinuity.

Ramp edges: A type of step edge in which the change in intensity is not instantaneous

but occurs over a certain distance.

Roof edges: A type of ridge edge in which the intensity rises gradually to a maximum and then

falls again; the change occurs over a certain distance rather than instantaneously.

The edges extracted from a 2-D image of a 3-D scene are classified as either viewpoint

dependent or viewpoint independent. An edge that depicts inherent properties of the 3-D

objects, such as surface shape and surface markings, is called a viewpoint-independent edge. An

edge that changes when the viewpoint changes, and that depicts the geometry of the scene, such

as the occlusion of one object by another, is called a viewpoint-dependent edge.

2.4 Intensity edges – (a) Step edges (b) Ramp edges (c) Roof edges


Desired characteristics of edges
• Accurate localization – It is often desired that an edge lie in a spatially

accurate position, separating the different regions in the best way possible. In many real

images, the edge position may be ambiguous. This is the case when the same pair of

dissimilar regions is separated by a collection of closely adjacent boundaries. The degree

of dissimilarity between the regions on either side of a boundary varies from boundary to

boundary, because each boundary in the collection has a unique spatial location. When the

edge coincides with the boundary that yields the maximum degree of dissimilarity, the

edge is said to be accurately localized.

• Thinness-It is often desired that edges (also considered as boundaries) form thin lines in

an image. Ideally, they should be only one pixel wide in the direction that is

perpendicular to the edge direction.

• Continuity – Edges must exhibit a continuity that reflects the nature of the

boundary, since physical boundaries are usually continuous in nature. Only the correct

edges should exhibit this property. However, there should be no constraint requiring edges

to form closed boundaries in an image.

• Length – The appearance of short, scattered edges of one or two pixels in length is caused

by noise and fine textures. We do not consider such short edges and restrict our concept of

edges to those that are at least three pixels long. There is always a trade-off between the

different desirable characteristics of an edge. Because of conflicting edge

requirements, there are many situations in which it is not possible to achieve

two or more characteristics simultaneously. For instance, requiring every edge to be long and

continuous can result in poor localization and the appearance of false boundaries.

Hence, it is appropriate to associate a measure of importance with each desirable edge

characteristic, so that situations with conflicting edge requirements

may be resolved. The importance of each characteristic is expressed by attaching a weight to

its associated cost factor, as seen in the formulation of the comparative cost

function.

Parameters for optimal detection of edges

Optimal detection: The edge detector should minimize the probability of detecting

spurious edges caused by noise, and should not miss real edges.

Optimal localization: The detected edges should be as close as possible to the true

edges.

Single response: A real edge should not result in more than one detected edge.

CONCLUSION

Edge detection is an important part of image processing. It is beneficial for many research areas
of computer vision and image segmentation, and it provides many important details for high-level
processing tasks such as feature detection. This report discusses the results obtained by
implementing an ACO-based approach for edge detection. Experimental results show the
feasibility of the method in identifying edges in images. The report also discusses the
implementation of a PSO-based approach for detecting edges in an image. Experimental
results show that the proposed algorithm, used in conjunction with ACO, effectively maximizes
the distance between the pixel intensities in the two regions (inter-set distance), minimizes the
distance between the pixel intensities within each region (intra-set distance), and detects thin,
smooth and continuous edges in complex as well as simple images.
