
OBJECT IDENTIFICATION

CHAPTER-1 OBJECT DETECTION
1.1 OVERVIEW


Given an image, object detection determines whether or not a specified object is present and, if it is, the location and size of each instance. Research on object detection and recognition focuses on three problems: 1) Representation: how to represent an object; 2) Learning: machine learning algorithms that learn the common properties of a class of objects; 3) Recognition: identifying the object in an image using the models learned in step 2. Depending on how an object is represented, detection methods can be divided into global representation and part-based representation. Depending on the machine learning algorithm used, detection methods can be divided into generative methods and discriminative methods.

Fig. 1.1 Overview of Object Detection Methods


Usually the recognition methods vary with the object detection application. Major object detection algorithms can therefore be roughly classified as shown in Fig. 1.1. In the present work we use a discriminative analysis method.

1.2 PART-BASED REPRESENTATION WITH GENERATIVE LEARNING


Deformable or articulated objects can be described as a combination of their parts, so these objects can be properly described using graphical models.

Graph Structures
Different objects, due to their distinct structure and texture properties, may result in different graph models, for example the composition model, the constellation model and the pictorial model.

1.2.1 Optimization of Computations
Computational speed is another issue in part-based object detection algorithms. Simple graph models usually result in faster computation, while complicated graph models need optimization of the computations. Dynamic programming and belief propagation algorithms have already been applied to accelerate the computation.

1.2.2 Definition
Articulated objects are usually defined as "a multi-body system composed of at least two rigid components and at most six independent degrees of freedom between any two components". For example, human bodies can be regarded as articulated objects in which the arms, legs, head and torso are rigid parts. Other examples of articulated objects include human hands and animals.

Issues
Despite their wide application, research on articulated object detection is still limited to experimental systems; there are still no reliable practical commercial systems on the market because of the difficulty of detecting articulated objects. The difficulty lies in two aspects: shape variance and self-occlusion. Because of the large number of degrees of freedom of articulated objects, it is hard to build a shape model that covers all possible shapes, although some researchers have built such models. The other factor is the self-occlusion of articulated objects.

Approaches
In order to deal with the large shape variance, previous articulated object detection systems take either "pose-based" approaches or part-based approaches.


1.3 APPLICATIONS
An object recognition method has the following applications:
Image panoramas
Image watermarking
Global robot localization
Face detection
Optical character recognition
Manufacturing quality control
Content-based image indexing
Object counting and monitoring
Automated vehicle parking systems
Visual positioning and tracking
Video stabilization


CHAPTER-2 INTRODUCTION

2.1 FUNDAMENTAL STEPS IN DIP


a) Applications of image processing
b) A simple image model
c) Fundamental steps in image processing
d) Elements of digital image processing systems

2.1.1 Applications of image processing


Interest in digital image processing methods stems from two principal application areas:

Fig 2.1 Block diagram of basic image processing

1. Improvement of pictorial information for human interpretation, and
2. Processing of scene data for autonomous machine perception.

In the second application area, interest focuses on procedures for extracting image information in a form suitable for computer processing.

Examples include automatic character recognition, industrial machine vision for product assembly and inspection, military reconnaissance, automatic fingerprint processing, etc.

What is an image?
1. An image refers to a 2-D light intensity function f(x, y), where (x, y) denote spatial coordinates and the value of f at any point (x, y) is proportional to the brightness or gray level of the image at that point.

2. A digital image is an image f (x, y) that has been discretized both in spatial
coordinates and brightness.

3. The elements of such a digital array are called image elements or pixels.

2.1.2 Simple image model
a) To be suitable for computer processing, an image f(x, y) must be digitized both spatially and in amplitude.

b) Digitization of the spatial coordinates (x, y) is called image sampling.
c) Amplitude digitization is called gray-level quantization.
d) The storage and processing requirements increase rapidly with the spatial resolution and the number of gray levels.
e) Example: A 256 gray-level image of size 256x256 occupies 64K bytes of memory.
f) Images of very low spatial resolution produce a checkerboard effect.
g) Using an insufficient number of gray levels in smooth areas of a digital image produces false contouring.
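As a quick check of the storage arithmetic in example (e), assuming 8 bits per pixel are needed for 256 gray levels (the variable names are illustrative):

rows = 256; cols = 256; bitsPerPixel = 8;       % 2^8 = 256 gray levels
bytes = rows * cols * bitsPerPixel / 8;         % 65536 bytes
fprintf('Storage required: %d bytes = %.0f KB\n', bytes, bytes/1024);   % 64 KB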

2.1.3 Fundamental Steps in Image Processing


1. Image acquisition: to acquire a digital image.
2. Image preprocessing: to improve the image in ways that increase the chances of success for the later processes.
3. Image segmentation: to partition an input image into its constituent parts or objects.


Fig 2.2 Images of different spatial resolution

4. Image representation: to convert the input data to a form suitable for computer processing.
5. Image description: to extract features that yield quantitative information of interest or that are basic for differentiating one class of objects from another.
6. Image recognition: to assign a label to an object based on the information provided by its descriptors.
7. Image interpretation: to assign meaning to an ensemble of recognized objects.

Knowledge about the problem domain is coded into an image processing system in the form of a knowledge database.

Elements of digital image processing systems
The basic operations performed in a digital image processing system are (1) acquisition, (2) storage, (3) processing, (4) communication and (5) display.
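As an illustration of how the first few steps map onto MATLAB Image Processing Toolbox functions, a minimal sketch follows; the file name, filter size and use of global thresholding are only assumptions for this example:

I = imread('scene.jpg');                        % 1. image acquisition (hypothetical file)
G = rgb2gray(I);                                % 2. preprocessing: colour to grayscale
G = medfilt2(G, [3 3]);                         %    preprocessing: noise smoothing
bw = imbinarize(G);                             % 3. segmentation by global thresholding
L = bwlabel(bw);                                % 4. representation: label the segmented regions
stats = regionprops(L, 'Area', 'Centroid');     % 5. description: extract region features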

2.1.4 Color Processing


Basics of color
Light and spectra

Color is the perceptual result of light in the visible region of the spectrum, roughly 400 nm to 700 nm, incident upon the retina. Visible light is a form of electromagnetic energy consisting of a spectrum of frequencies with wavelengths ranging from about 400 nm for violet light to about 700 nm for red light. Most light we see is a combination of many wavelengths.

Primaries
Any color can be matched by proper proportions of three component colors called primaries. The most common primaries are red, green and blue.
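For instance, a grayscale (luminance) image can be formed as a weighted combination of the three primaries. The weights below are the commonly used luminance coefficients (the same ones MATLAB's rgb2gray uses); peppers.png is a sample image shipped with MATLAB:

RGB = imread('peppers.png');
R = double(RGB(:,:,1)); G = double(RGB(:,:,2)); B = double(RGB(:,:,3));
Y = 0.2989*R + 0.5870*G + 0.1140*B;      % luminance from the red, green and blue primaries
figure, imshow(uint8(Y));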

2.2 IMAGE ACQUISITION


Digital imaging or digital image acquisition is the creation of digital images, typically from a physical scene. The term is often assumed to imply or include the processing, compression, storage, printing, and display of such images. The first stage of any vision system is the image acquisition stage. Image Acquisition enables you to acquire images and video from cameras and frame grabbers directly. Digital imaging was developed in the 1960s and 1970s, largely to avoid the operational weaknesses of film cameras, for scientific and military missions including the KH-11 program. As digital technology became cheaper in later decades it replaced the old film methods for many purposes.

2.2.1 Descriptions
A digital image may be created directly from a physical scene by a camera or similar device. Alternatively, it may be obtained from another image in an analog medium, such as a photograph, photographic film, or printed paper, by an image scanner or similar device. Many technical images, such as those acquired with tomographic equipment, side-scan sonar, or radio telescopes, are actually obtained by complex processing of non-image data. This digitization of analog real-world data is known as digitizing, and involves sampling (discretization) and quantization.

Finally, a digital image can also be computed from a geometric model or mathematical formula. In this case the name image synthesis is more appropriate, and it is more often known as rendering. Digital image authentication is an emerging issue for the providers and producers of high resolution digital images such as health care organizations, law enforcement agencies and insurance companies. There are methods emerging in forensic science to analyze a digital image and determine if it has been altered. After the image has been obtained, various methods of processing can be applied to the image to perform the many different vision tasks required today. However, if the image has not been acquired satisfactorily then the intended tasks may not be achievable, even with the aid of some form of image enhancement.

2.2.2 Methods of Acquisition


An image can be acquired or read by different methods. The types of images in which we are interested are generated by the combination of an "illumination" source and the reflection or absorption of energy from that source by the elements of the scene being imaged. Depending on the nature of the source, illumination energy is reflected from, or transmitted through, objects. There are three principal sensor arrangements used to transform illumination energy into digital images:
1. Image acquisition using a single sensor.
2. Image acquisition using sensor strips.
3. Image acquisition using sensor arrays.

Image acquisition using a single sensor
In this arrangement a single sensor is used to acquire the image. The single sensor is mounted on a lead screw that provides motion in the perpendicular direction.

Image acquisition using sensor strips
A geometry used much more frequently than the single sensor is an in-line arrangement of sensors in the form of a sensor strip. The strip provides imaging elements in one direction, and motion perpendicular to the strip provides imaging in the other

direction. This is the type of arrangement used in most flat-bed scanners. Sensing devices with 4,000 or more in-line sensors are possible.

Image acquisition using sensor arrays
In the present work we use sensor arrays to read or acquire the input image, because the camera used to acquire the image senses it with a sensor array. Here the individual sensors are arranged in the form of a 2-D array. Numerous electromagnetic and some ultrasonic sensing devices are frequently arranged in an array format. This is also the predominant arrangement found in digital cameras. A typical sensor for these cameras is a CCD array, which can be manufactured with a broad range of sensing properties and can be packaged in rugged arrays of 4000x4000 elements or more. CCDs are used widely in digital cameras and other light-sensing instruments. The response of each sensor is proportional to the integral of the light energy projected onto the surface of the sensor, a property that is used in astronomical and other applications requiring low-noise images. Noise reduction is achieved by letting the sensor integrate the input light signal over minutes or even hours. Since the sensor array is 2-D, its key advantage is that a complete image can be obtained by focusing the energy pattern onto the surface of the array; motion is obviously not necessary, as it is with the sensor arrangements discussed in the preceding two sections. The principal manner in which an array sensor is used is shown in the following figure. Here the energy from an illumination source is reflected from a scene element but, as mentioned at the beginning of this section, the energy could also be transmitted through the scene elements. The first function performed by the imaging system is to collect the incoming energy and focus it onto an image plane. If the illumination is light, the front end of the imaging system is a lens, which projects the viewed scene onto the lens focal plane. The sensor array, which is coincident with the focal plane, produces outputs proportional to the integral of the light received at each sensor. Digital and analog circuitry sweeps these outputs and converts them to a video signal, which is then digitized by another section of the imaging system. The output is a digital image.


Fig 2.3 An example of the digital image acquisition process

2.3 IMAGE SEGMENTATION


Segmentation refers to the process of partitioning a digital image into multiple segments, i.e., sets of pixels. The goal of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. Image segmentation is typically used to locate objects and boundaries (lines, curves, etc.) in images. More precisely, image segmentation is the process of assigning a label to every pixel in an image such that pixels with the same label share certain visual characteristics.

The purpose of image segmentation is to partition an image into meaningful regions with respect to a particular application.
The segmentation is based on measurements taken from the image, which might be grey level, color, texture, depth or motion.
Usually image segmentation is an initial and vital step in a series of processes aimed at overall image understanding.

2.3.1 Applications
Some of the practical applications of image segmentation are:
Medical imaging
Locating objects in satellite images (roads, forests, etc.)

Face recognition
Fingerprint recognition
Traffic control systems
Brake light detection
Machine vision:
a) Identifying objects in a scene for object-based measurements such as size and shape.
b) Identifying objects in a moving scene for object-based video compression (MPEG-4).
c) Identifying objects at different distances from a sensor using depth measurements from a laser range finder, enabling path planning for mobile robots.

2.3.2 Segmentation Techniques
Several general-purpose algorithms and techniques have been developed for image segmentation. Since there is no general solution to the image segmentation problem, these techniques often have to be combined with domain knowledge in order to solve the problem effectively for a particular problem domain.

Compression-based methods
Compression-based methods postulate that the optimal segmentation is the one that minimizes, over all possible segmentations, the coding length of the data. The connection between these two concepts is that segmentation tries to find patterns in an image, and any regularity in the image can be used to compress it. The method describes each segment by its texture and boundary shape.

Histogram-based methods
Histogram-based methods are very efficient compared to other image segmentation methods because they typically require only one pass through the pixels. In this technique, a histogram is computed from all of the pixels in the image, and the peaks and valleys in the histogram are used to locate the clusters in the image.
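A minimal sketch of histogram-based thresholding in MATLAB, using Otsu's method to pick the threshold from the histogram (coins.png is a grayscale sample image shipped with MATLAB):

I = imread('coins.png');
figure, imhist(I);              % histogram: peaks and valleys suggest the clusters
level = graythresh(I);          % Otsu's method chooses a threshold from the histogram
bw = imbinarize(I, level);      % pixels above the threshold become 1, the rest 0
figure, imshow(bw);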

Edge detection
Edge detection is a well-developed field on its own within image processing. Region boundaries and edges are closely related, since there is often a sharp adjustment in intensity at region boundaries. Edge detection techniques have therefore been used as the basis of another segmentation technique. The edges identified by edge detection are often disconnected; to segment an object from an image, however, one needs closed region boundaries.

Region growing methods
The first region growing method was the seeded region growing method. This method takes a set of seeds as input along with the image. The seeds mark each of the objects to be segmented. The regions are iteratively grown by comparing all unallocated neighboring pixels to the regions. The difference between a pixel's intensity value and the region's mean is used as a measure of similarity. The pixel with the smallest difference measured this way is allocated to the respective region. This process continues until all pixels are allocated to a region. Seeded region growing requires seeds as additional input, and the segmentation results depend on the choice of seeds; noise in the image can cause the seeds to be poorly placed. Unseeded region growing is a modified algorithm that does not require explicit seeds. It starts off with a single region A1; the pixel chosen here does not significantly influence the final segmentation. At each iteration it considers the neighboring pixels in the same way as seeded region growing. It differs from seeded region growing in that if the minimum difference is less than a predefined threshold T, the pixel is added to the respective region Aj; if not, the pixel is considered significantly different from all current regions Ai, and a new region A(n+1) is created with this pixel.

2.3.3 Implementation
Start-Points
Our final stage of development was to supply the region-growing program with an array of starting points rather than just one. The program starts the algorithm sequentially over each of these starting points and maintains a linked list of regions. We chose not to spend the time developing an automatic starting-point finder, although we had discussed several methods. The method that we felt would work best is the following: low-pass filter the image very heavily, perform a gradient operation, take the absolute value, and use local minima as starting points. Assuming (as we did above) that the intensity values were equal to or greater in the middle of the object than towards the edges, this method should give starting points close to the middle of regions.

Image Preprocessing
We used a median filter for low-pass filtering the image before finding its gradient. We decided to use a 7x7 median filter (the size was chosen through experimentation). We then found the gradient of the smoothed image, and then normalized and histogram-equalized it, simply for convenience. Then we found the places where the gradient was in the top 5% (only the edges of the objects) and averaged the intensity values at these points. This gave us an approximate lower bound on intensity for membership in the region.

Breadth-First Search Algorithm
We wrote a C program that takes as inputs an image and a set of points from which to start the region-growing process. The algorithm begins at each starting pixel and scans the neighboring 8 pixels in a circular fashion to determine membership of the region around the central pixel. As pixels are added, the process repeats with the newly added pixels as the center pixel. A list of tested pixels is maintained to avoid unnecessary duplication of tests. In this way the region is constructed, using the membership criteria already discussed; a sketch of this idea is given at the end of this section.

Region Merging
Because of the short-term nature of this project, we did not have the time to implement a region-merging technique. However, we did think about ways to do this. When we pick several points around which to start growing regions, there is a chance that two of these regions will eventually meet. If they do meet, we may not necessarily want to join them: there could be two objects very close to one another, even touching each other. To decide whether or not to merge two regions, we could compare their histograms to see if they are similar (in mean gray level and texture). If the histograms were similar (meaning the regions are either in the same object or in two different objects of the same type), and we had a priori information about the shape of an object, we could check whether the merger would produce a shape closer to, or farther from, what we are looking for. A good measure of shape is the ratio of the perimeter squared to the area of the region.
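A simplified MATLAB sketch of the breadth-first region-growing idea described above (the original was written in C; for brevity, a neighboring pixel is compared against the seed intensity rather than the running region mean, and the seed coordinates and threshold in the example call are placeholders):

function mask = growregion(I, seedR, seedC, thresh)
% Grow a region from a seed pixel using a breadth-first scan of the 8 neighbors.
I = double(I);
[rows, cols] = size(I);
mask    = false(rows, cols);            % pixels accepted into the region
visited = false(rows, cols);            % pixels already tested, to avoid duplicate tests
queue   = [seedR, seedC];
visited(seedR, seedC) = true;
seedVal = I(seedR, seedC);
while ~isempty(queue)
    r = queue(1,1); c = queue(1,2); queue(1,:) = [];
    if abs(I(r,c) - seedVal) < thresh   % membership criterion
        mask(r,c) = true;
        for dr = -1:1                   % enqueue the 8 neighbors of an accepted pixel
            for dc = -1:1
                rr = r + dr; cc = c + dc;
                if rr >= 1 && rr <= rows && cc >= 1 && cc <= cols && ~visited(rr,cc)
                    visited(rr,cc) = true;
                    queue(end+1,:) = [rr, cc]; %#ok<AGROW>
                end
            end
        end
    end
end
end

It could be called, for example, as mask = growregion(imread('coins.png'), 100, 120, 30);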

2.4 MORPHOLOGICAL PROCESSING
The identification of objects within an image can be a very difficult task. One way to simplify the problem is to change the grayscale image into a binary image, in which each pixel is restricted to a value of either 0 or 1. The techniques used on these binary images go by such names as: blob analysis, connectivity analysis, and morphological image processing. The foundation of morphological processing is in the mathematically rigorous field of set theory; however, this level of sophistication is seldom needed. Most morphological algorithms are simple logic operations and very ad hoc.

Fig 2.4 Morphological operations

In other words, each application requires a custom solution developed by trial and error. This is usually more of an art than a science: a bag of tricks is used rather than standard algorithms and formal mathematical properties. Here are some examples.


The figure above shows an example binary image. It might represent an enemy tank in an infrared image, an asteroid in a space photograph, or a suspected tumor in a medical x-ray. Each pixel in the background is displayed as white, while each pixel in the object is displayed as black. Frequently, binary images are formed by thresholding a grayscale image: pixels with a value greater than a threshold are set to 1, while pixels with a value below the threshold are set to 0. It is common for the grayscale image to be processed with linear techniques before thresholding; for instance, illumination flattening can often improve the quality of the initial binary image.

Figures (b) and (c) show how the image is changed by the two most common morphological operations, erosion and dilation. In erosion, every object pixel that is touching a background pixel is changed into a background pixel. In dilation, every background pixel that is touching an object pixel is changed into an object pixel. Erosion makes the objects smaller, and can break a single object into multiple objects. Dilation makes the objects larger, and can merge multiple objects into one. As shown in (d), opening is defined as erosion followed by dilation. Figure (e) shows the opposite operation, closing, defined as dilation followed by erosion. As illustrated by these examples, opening removes small islands and thin filaments of object pixels; likewise, closing removes islands and thin filaments of background pixels. These techniques are useful for handling noisy images where some pixels have the wrong binary value. For instance, it might be known that an object cannot contain a "hole", or that the object's border must be smooth.

Fig. 2.5 shows an example of morphological processing: part (a) is the binary image of a fingerprint. Algorithms have been developed to analyze these patterns, allowing individual fingerprints to be matched with those in a database. A common step in these algorithms is shown in (b), an operation called skeletonization. This simplifies the image by removing redundant pixels, that is, changing appropriate pixels from black to white. This results in each ridge being turned into a line only a single pixel wide.

Tables 25-1 and 25-2 describe the skeletonization program. Even though the fingerprint image is binary, it is held in an array where each pixel can run from 0 to 255: a black pixel is denoted by 0, while a white pixel is denoted by 255. As shown in Table 25-1, the algorithm is composed of 6 iterations that gradually erode the ridges into a thin line. The number of iterations is chosen by trial and error; an alternative would be to stop when an iteration makes no changes. During each iteration, every pixel in the image is evaluated for being removable, i.e., whether it meets a set of criteria for being changed from black to white. Lines 200-240 loop through each pixel in the image, while the subroutine in Table 25-2 makes the evaluation. If the pixel under consideration is not removable, the subroutine does nothing. If the pixel is removable, the subroutine changes its value from 0 to 1. This indicates that the pixel is still black, but will be changed to white at the end of the iteration. After all the pixels have been evaluated, lines 260-300 change the value of the marked pixels from 1 to 255. This two-stage process results in the thick ridges being eroded equally from all directions, rather than in a pattern based on how the rows and columns are scanned.

Fig 2.5 Binary image of a fingerprint

The pixel-removal decision is based on four rules, all of which must be satisfied for a pixel to be changed from black to white. The first three rules are rather simple, while the fourth is quite complicated. As shown in Fig. 2.5, a pixel at location [R, C] has eight neighbors. The four neighbors in the horizontal and vertical directions (labeled 2, 4, 6, and 8) are frequently called the close neighbors. The diagonal pixels (labeled 1, 3, 5, and 7) are correspondingly called the distant neighbors. The four rules are as follows:

Rule one: The pixel under consideration must presently be black. If the pixel is already white, no action needs to be taken.


Rule two: At least one of the pixel's close neighbors must be white. This ensures that the erosion of the thick ridges takes place from the outside. In other words, if a pixel is black and it is completely surrounded by black pixels, it is left alone on this iteration. Why use only the close neighbors, rather than all of the neighbors? The answer is simple: running the algorithm both ways shows that it works better. Remember, this is very common in morphological image processing; trial and error is used to find whether one technique performs better than another.

Rule three: The pixel must have more than one black neighbor. If it has only one, it must be the end of a line, and therefore shouldn't be removed.

Rule four: A pixel cannot be removed if removing it results in its neighbors being disconnected. This is so that each ridge is changed into a continuous line, not a group of interrupted segments. As shown by the examples in the figure, connected means that all of the black neighbors touch each other; likewise, unconnected means that the black neighbors form two or more groups.
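These operations can be sketched with MATLAB's morphological functions; the structuring-element size is arbitrary, bwmorph's 'skel' option plays the role of the iterative thinning described above, and circles.png is a binary sample image shipped with MATLAB:

bw = imread('circles.png');
se = strel('disk', 3);
er = imerode(bw, se);               % erosion: shrinks objects, can split them
di = imdilate(bw, se);              % dilation: grows objects, can merge them
op = imopen(bw, se);                % opening  = erosion followed by dilation
cl = imclose(bw, se);               % closing  = dilation followed by erosion
sk = bwmorph(bw, 'skel', Inf);      % skeletonization: thin each object to one-pixel-wide lines
figure, imshow(sk);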


CHAPTER-3 IMPLEMENTATION
3.1 BLOCK DIAGRAM

Fig 3.1 Block diagram of the coin detection

3.2 DESCRIPTION

Image acquisition


The first step of the processing chain is image acquisition. Digital imaging, or digital image acquisition, is the creation of digital images. In this step the image is read from a graphics file, which is done with the imread command in MATLAB. Image acquisition also makes it possible to acquire images and video directly from cameras and frame grabbers.

Segmentation
Segmentation refers to the process of partitioning a digital image into multiple segments, i.e., sets of pixels. The main purpose of segmentation is to simplify and/or change the representation of an image into something that is more meaningful and easier to analyze. There are different types of segmentation; in this work we use region-based segmentation, the first method of which was the seeded region growing method.

Circle detector

The circle detector detects the circular objects in the image. Identifying objects within an image can be a very difficult task; here, morphological processing is used to detect the circular objects by tracing out the boundaries and edges of the required object in the image, using MATLAB commands such as bwtraceboundary and imdilate.

Radius
The next stage is finding the radius of the object in the image. A simple method is used: horizontal and vertical scan lines are taken, and the point where they intersect is taken as the centre; the distance from the centre to the circumference is the radius.
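A sketch of this stage that is consistent with the distance-transform approach used in the Appendix B code (regionMask is assumed to be the binary mask of one segmented coin):

D = bwdist(~regionMask);            % distance of every coin pixel to the nearest background pixel
[radius, idx] = max(D(:));          % the largest distance approximates the radius
[cy, cx] = ind2sub(size(D), idx);   % and its location approximates the centre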

Matcher
The matcher compares the obtained radius with the pre-defined radius of the object. If the obtained radius matches the pre-defined radius, the object is detected.
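A minimal sketch of the matching step (the reference radius and tolerance below are illustrative values, not the ones used in the project):

radius = 63;                        % example radius obtained from the previous stage
refRadius = 62;                     % hypothetical pre-defined radius in pixels
tol = 2;                            % hypothetical matching tolerance
if abs(radius - refRadius) < tol
    disp('Object detected');
else
    disp('No match');
end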

3.3 FLOW CHART


Fig. 3.2 flow chart of the coin detection

3.4 DESCRIPTION

The flow chart follows the same stages described in Section 3.2: image acquisition, region-based segmentation, circle detection, radius estimation and matching. The only additional step is a color-space check before segmentation.

Is RGB?
Before segmenting the image we check whether it is an RGB image. If the image is already grayscale, it is passed directly to the next stage; if it is RGB, it is first converted to grayscale (with rgb2gray) and then passed on.


CHAPTER 4 RESULTS

Fig. 4.1 coins to be detected

Fig. 4.2 coins after edge detection


Fig. 4.3 after dilation operation

Fig. 4.4 after filling the holes


Fig. 4.5 first coin detection

Fig. 4.6 second coin detection


Fig. 4.7 third coin detection

Fig. 4.8 fourth coin detection


Fig. 4.9 fifth coin detection

Fig. 4.10 sixth coin detection


CHAPTER 5 CONCLUSION
5.1 SCOPE & FEATURES
In this project, we calculate object parameters such as radius and shape for detection. If more parameters are taken into account, the performance and efficiency can be increased further.

5.2 CONCLUSION
We conclude that by adopting this simple technique we can easily detect circular objects and implement the method for coin detection.


APPENDIX A COMMANDS
Imread:
IMREAD Read image from graphics file. A = IMREAD (FILENAME, FMT) reads a grayscale or color image from the file specified by the string FILENAME. If the file is not in the current directory, or in a directory on the MATLAB path, specify the full pathname.

Isrgb:
ISRGB Return true for RGB image. FLAG = ISRGB(A) returns 1 if A is an RGB true color image and 0 otherwise.

rgb2gray:
RGB2GRAY Convert RGB image or color map to grayscale. RGB2GRAY converts RGB images to grayscale by eliminating the hue and saturation information while retaining the luminance.

Edge:
EDGE Find edges in intensity image. EDGE takes intensity or a binary image I as its input, and returns a binary image BW of the same size as I, with 1's where the function finds edges in I and 0's elsewhere.

Strel:
STREL Creates a morphological structuring element. SE = STREL('arbitrary', NHOOD) creates a flat structuring element with the specified neighborhood. NHOOD is a matrix containing 1's and 0's; the location of the 1's defines the neighborhood for the morphological operation. The center (or origin) of NHOOD is its center element, given by FLOOR((SIZE(NHOOD) + 1)/2). You can also omit the 'arbitrary' string and just use STREL(NHOOD).

Imfill:
IMFILL Fill image regions and holes. BW2 = IMFILL (BW1, LOCATIONS) performs a flood-fill operation on background pixels of the input binary image BW1, starting from the points specified in LOCATIONS. LOCATIONS can be a P-by-1 vector, in which case it contains the linear indices of the starting locations. LOCATIONS can also be a P-by-NDIMS(IM1) matrix, in which case each row contains the array indices of one of the starting locations.

Bwlabel:
BWLABEL Label connected components in 2-D binary image. L = BWLABEL (BW, N) returns a matrix L, of the same size as BW, containing labels for the connected components in BW. N can have a value of either 4 or 8, where 4 specifies 4-connected objects and 8 specifies 8-connected objects; if the argument is omitted, it defaults to 8.

Bwdist:
BWDIST Distance transform of binary image. D = BWDIST (BW) computes the Euclidean distance transform of the binary image BW. For each pixel in BW, the distance transform assigns a number that is the distance between that pixel and the nearest nonzero pixel of BW. BWDIST uses the Euclidean distance metric by default. BW can have any dimension. D is the same size as BW.

Function:


FUNCTION adds a new function. New functions may be added to MATLAB's vocabulary if they are expressed in terms of other existing functions. The commands and functions that comprise the new function must be put in a file whose name defines the name of the new function, with a filename extension of '.m'. At the top of the file there must be a line that contains the syntax definition for the new function. For example, a new function can be defined by creating a file on disk called STAT.M.
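A minimal sketch of such a file, following the standard example from the MATLAB documentation for FUNCTION (the function name and body are illustrative):

function [avg, stdev] = stat(x)
% STAT  Return the mean and standard deviation of a vector.
n = length(x);
avg = sum(x) / n;
stdev = sqrt(sum((x - avg).^2) / n);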

Size:
SIZE size of array. D = SIZE (X), for M-by-N matrix X, returns the two-element row vector D = [M, N] containing the number of rows and columns in the matrix. For N-D arrays, SIZE(X) returns a 1-by-N vector of dimension lengths. Trailing singleton dimensions are ignored.

Imdilate:
IMDILATE Dilate image. IM2 = IMDILATE (IM, SE) dilates the grayscale, binary, or packed binary image IM, returning the dilated image, IM2. SE is a structuring element object, or array of structuring element objects, returned by the STREL function.

Imshow:
IMSHOW Display image in Handle Graphics figure. IMSHOW (I) displays the grayscale image I.

Figure:

FIGURE Create figure window. FIGURE, by itself, creates a new figure window, and returns its handle.


APPENDIX B MATLAB CODE


close all; clear all; clc;

file = input('please enter the image file name:', 's');
I = imread(file);

flg = isrgb(I);                     % test whether the image is in the RGB colour space
if flg == 1
    I = rgb2gray(I);
end

[h, w] = size(I);
figure; imshow(I);

c = edge(I, 'canny', 0.3);          % Canny edge detection (binary edge map)
figure; imshow(c);

se = strel('disk', 2);
I2 = imdilate(c, se);               % dilate the edges
imshow(I2);

d2 = imfill(I2, 'holes');           % fill the enclosed regions (coin areas)
figure, imshow(d2);

Label = bwlabel(d2, 4);             % label the connected components (4-connectivity)
a1 = (Label == 1);
a2 = (Label == 2);
a3 = (Label == 3);
a4 = (Label == 4);
a5 = (Label == 5);
a6 = (Label == 6);

D1 = bwdist(~a1);                   % minimal Euclidean distance to a non-region pixel
figure, imshow(D1, []);
[xc1 yc1 r1] = merkz(D1);
f1 = coindetect(r1)

D2 = bwdist(~a2);                   % minimal Euclidean distance to a non-region pixel
figure, imshow(D2, []);
[xc2 yc2 r2] = merkz(D2);
f2 = coindetect(r2)

D3 = bwdist(~a3);
figure, imshow(D3, []);
[xc3 yc3 r3] = merkz(D3);
f3 = coindetect(r3)

D4 = bwdist(~a4);
figure, imshow(D4, []);
[xc4 yc4 r4] = merkz(D4);
f4 = coindetect(r4)

D5 = bwdist(~a5);
figure, imshow(D5, []);
[xc5 yc5 r5] = merkz(D5);
f5 = coindetect(r5)

D6 = bwdist(~a6);
figure, imshow(D6, []);
[xc6 yc6 r6] = merkz(D6);
f6 = coindetect(r6)

function [centx, centy, r] = merkz(D)
% merkz: the maximum of the distance transform gives the radius,
% and its location gives the centre of the region.
[w h] = size(D');
mx = max(max(D));
r = mx;
for i = 1:h
    for j = 1:w
        if D(i, j) == mx
            centx = j;
            centy = i;
        end
    end
end
end

function f = coindetect(rad)
% coindetect: map the estimated radius (in pixels) to a coin value.
if rad > 70
    f = 200;
elseif (62 < rad) & (rad < 65)
    f = 100;
elseif (60 < rad) & (rad < 62)
    f = 500;
elseif (45 < rad) & (rad < 50)
    f = 25;
elseif (38 < rad) & (rad < 40)
    f = 10;
else
    f = 0;
end
end
