1. INTRODUCTION
People use a sensor to count the number of cars that enter the parking area. The sensor must be able to detect a car and distinguish it from other objects, which conventional sensors cannot do well. This research uses a camera as a sensor to recognize the car visually. In this case, the car is moving into the parking area.

The presented approach uses Gabor filter responses to represent the object image effectively. The choice of Gabor filter responses is biologically motivated because they model the response properties of human visual cortical cells [1]. Further, feature extraction based on Gabor filtering has been deployed successfully for texture analysis, character recognition, fingerprint recognition, as well as face recognition. The essence of the success of Gabor filters is that they remove most of the variability in images due to variation in lighting and contrast. At the same time they are robust against small shifts and deformations. The Gabor filter representation in fact increases the dimensionality of the feature space such that salient features can be discriminated effectively in the extended feature space.

Generally, there are four stages in car recognition, i.e. object detection, object segmentation, feature extraction and matching.

Figure 1. Block diagram of the system

2. OBJECT DETECTION

Before the object detection process, the system performs image preprocessing, i.e. converting the image from RGB to gray scale, histogram equalization and noise removal. This preprocessing minimizes errors in the image.

The gray scale conversion uses the equation [2][3]:

    Gray = 0.299 R + 0.587 G + 0.114 B    (1)

After converting the image to gray scale, the system performs histogram equalization. This process adjusts the brightness and contrast of the image using the equation:

    g(x, y) = c · f(x, y) + b    (2)

where f(x, y) is the original gray level image, g(x, y) is the result of the histogram equalization process, c is a contrast constant and b is a brightness constant.

The system then performs noise removal using a low pass filter with a 3x3 neighborhood. The result is obtained by convolving the low pass filter kernel with the original image:

    g(x, y) = h(x, y) * f(x, y)    (3)

where h(x, y) is the low pass filter kernel.

Figure 3. a) The object b) Object segmentation area c) Object segmentation result
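The preprocessing chain of equations (1)-(3) can be sketched in NumPy as follows; the contrast, brightness and kernel values are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def to_gray(rgb):
    # Eq. (1): luminance-weighted RGB-to-gray conversion
    return 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]

def adjust(gray, c=1.2, b=10.0):
    # Eq. (2): g(x, y) = c * f(x, y) + b, clipped to the 8-bit range
    return np.clip(c * gray + b, 0, 255)

def low_pass(gray):
    # Eq. (3): convolution with a 3x3 averaging (low pass) kernel
    kernel = np.ones((3, 3)) / 9.0
    h, w = gray.shape
    padded = np.pad(gray, 1, mode='edge')
    out = np.zeros_like(gray)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out
```

A uniform averaging kernel is only one possible choice of low pass filter; the paper does not specify the kernel coefficients.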
After image preprocessing, the system performs the object detection process. This process is done in a predefined area of the image. To detect the existence of the car, the system subtracts the background image from the current image. If the color of the image differs from the color of the background image, then there is an object in the image. On the contrary, if the image and the background image have the same color, there is no object in the image. This process is represented by the equation:

    g(x, y) = 0         if |f(x, y) − b(x, y)| < error
    g(x, y) = f(x, y)   otherwise                        (4)

Figure 2 shows the object detection area and an example of an object detection result.

3. OBJECT SEGMENTATION

There are two stages in the object segmentation process. The first stage discards the image background, so that the image shows the object only. To discard the background, the system performs subtraction, as in the object detection process, using equation 4, followed by a morphology operation.

The type of morphology operation used in this research is the opening operation, which smooths the edges of the object. Opening is represented by the equation:

    G = (X ⊖ B) ⊕ B    (5)

where X is the image and B is the structuring element.

In the second stage, the system seeks the optimal position of the object in the image. This is done by calculating the cumulative histogram value for each possible position of the object in the image. The area with the maximum cumulative histogram value gives the optimal position of the object. Figure 3 shows an example of an object segmentation result. After this process, the system clips the object at that optimal area.
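A minimal sketch of the background subtraction of equation (4) and the opening of equation (5), assuming gray-scale arrays, a hypothetical threshold `error`, and a 3x3 structuring element; this illustrates the operations, not the paper's implementation:

```python
import numpy as np

def detect_mask(f, b, error=20.0):
    # Eq. (4): pixels close to the background are zeroed,
    # all other pixels keep their original gray value
    is_background = np.abs(f - b) < error
    return np.where(is_background, 0.0, f)

def erode(mask):
    # Binary erosion with a 3x3 structuring element
    h, w = mask.shape
    p = np.pad(mask, 1, mode='constant', constant_values=False)
    out = np.ones_like(mask, dtype=bool)
    for dy in range(3):
        for dx in range(3):
            out &= p[dy:dy + h, dx:dx + w]
    return out

def dilate(mask):
    # Binary dilation with a 3x3 structuring element
    h, w = mask.shape
    p = np.pad(mask, 1, mode='constant', constant_values=False)
    out = np.zeros_like(mask, dtype=bool)
    for dy in range(3):
        for dx in range(3):
            out |= p[dy:dy + h, dx:dx + w]
    return out

def opening(mask):
    # Eq. (5): G = (X eroded by B) dilated by B
    return dilate(erode(mask))
```

Opening removes isolated noise pixels that survive the subtraction while preserving larger connected regions such as the car.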
4. FEATURE EXTRACTION
The object segmentation process performs image segmentation to obtain the detected car and discard the other parts. This process is done in the predefined area of the image where the car is certain to be.

The feature extraction stage uses the two-dimensional Gabor filter:

    f(x, y, θk, λ) = exp{ −(1/2) [ (x cos θk + y sin θk)² / σx² + (−x sin θk + y cos θk)² / σy² ] } · exp{ i 2π (x cos θk + y sin θk) / λ }    (6)
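Equation (6) can be realized as a complex-valued kernel sampled on a square grid; a NumPy sketch with illustrative parameter values (the paper's kernel size and σ values are not given in this excerpt):

```python
import numpy as np

def gabor_kernel(size, theta, lam, sigma_x, sigma_y):
    # Eq. (6): complex Gabor filter with orientation theta,
    # wavelength lam, and Gaussian envelope widths sigma_x, sigma_y
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-0.5 * (xr**2 / sigma_x**2 + yr**2 / sigma_y**2))
    carrier = np.exp(1j * 2.0 * np.pi * xr / lam)
    return envelope * carrier
```

Varying θk over several orientations produces a bank of kernels like those shown in figure 4.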
The Gabor filter kernels are shown in figure 4. For a sampling point (X, Y), the Gabor filter response, denoted as g(.), is defined as:

    g(X, Y, θk, λ) = Σ(x = −X .. N−X−1) Σ(y = −Y .. N−Y−1) I(X + x, Y + y) f(x, y, θk, λ)    (8)

In the matching stage, the system selects the template whose Gabor jet is most similar to that of the image:

    max over all J' of Sa(J, J')    (10)

where J is the Gabor jet of the image and J' is the Gabor jet of the template image. Sa(J, J') is the Gabor jet similarity measure.
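The response of equation (8) and the maximization of equation (10) can be sketched as follows. Since the paper's Sa(J, J') is not given in this excerpt, `jet_similarity` uses the normalized inner product of response magnitudes, a common Gabor jet similarity; treat it as an assumption:

```python
import numpy as np

def gabor_response(image, kernel, X, Y):
    # Eq. (8): response at sampling point (X, Y); the kernel has
    # finite support, so the sum runs over the kernel window only
    half = kernel.shape[0] // 2
    patch = image[Y - half:Y + half + 1, X - half:X + half + 1]
    return np.sum(patch * kernel)

def jet_similarity(J, Jp):
    # Assumed similarity: normalized dot product of response magnitudes
    a, b = np.abs(J), np.abs(Jp)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_template(J, templates):
    # Eq. (10): choose the template jet J' maximizing Sa(J, J')
    return max(range(len(templates)),
               key=lambda i: jet_similarity(J, templates[i]))
```

A Gabor jet here is the vector of responses at one sampling point over all kernel orientations and wavelengths.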
Figure 4. Gabor Filter Kernels

[Table: similarity values between an unknown object and each template, with and without the Gabor filter; columns: Template, Similarity w/o Gabor Filter, Similarity w/ Gabor Filter. The individual values are not recoverable from this extraction.]

Table 2. Similarity value of an unknown object with gabor filter
[Columns: Unknown object, Template, Similarity; values not recoverable from this extraction.]

Table 4. Several experiment results of unknown object
[Columns: Unknown object, Template, Similarity; one legible row shows a Van matched to the Van template with similarity 0,68042.]
7. CONCLUSION
REFERENCE