International Journal of Imaging and Robotics

2016, Volume 16, Issue Number 1

Color Based Omnidirectional Target Tracking

Z. EL KADMIRI, O. EL KADMIRI, S. EL JOUMANI, Z. Kaddouri and L. MASMOUDI

Faculty of sciences, Mohammed V University, Morocco


B.P N° 1014 Av. Ibn Battouta, Rabat, Morocco
{zakariaelkadmiri;omar.elkadmiri;jomanisalh;kaddouri.zakaria;lhmasmoudi}@gmail.com

ABSTRACT

The main drawback of any detection or object-tracking algorithm based on color indices
is its sensitivity to lighting changes. In fact, the representation of a color by its coordinates
in a color space such as RGB, L*a*b*, or HSV implicitly includes information about the color's
illumination. Another limitation that restricts performance in object-tracking applications
is the narrow field of view (FOV) of conventional cameras. This paper addresses the problem
of real-time moving-target detection and tracking in dynamic environments. In this scenario,
a mobile robot uses an omnidirectional camera to recognize, track, and navigate toward a
moving target. We propose a color-detection technique that is less sensitive to illumination
variations of the environment, using an omnidirectional camera with a 360° FOV.
Experimental results indicate that the proposed chromatic invariant method can effectively
distinguish and track moving targets in indoor and outdoor environments.

Keywords: Omnidirectional vision, Color indices, Object-tracking, HSV

Computing Classification System: I.4.6, I.5.4, I.2.9

1. INTRODUCTION

In recent years, object tracking using computer vision systems has attracted considerable interest
in studies related to visual servoing and artificial vision [Kobilarov et al. 2006] [Basso et al. 2013].
Computer vision gives robotic systems the ability to recognize and reconstruct the environments in
which robots operate without requiring a model of the environment. This is particularly important
when tasks have to be accomplished in unknown or dynamic environments. These systems are very
useful for tracking applications, obstacle detection [Myint 2013], and autonomous navigation
[Poza-Luján 2014].
Autonomous robots need the ability to detect and track movements over a large 3D space in various
illumination conditions. Moreover, the limited field of view of conventional cameras restricts their
performance. By offering a wide field of view, omnidirectional cameras prove to be a very valuable
solution. They have been widely used in many applications such as automated video surveillance
and 3D reconstruction.
There are many ways to enlarge the field of view, such as replacing the camera's classical optics
with a very short focal length (fisheye) lens [Xiong et al. 1997], using multiple-camera devices
[Kangni et al. 2006] [Sato et al. 2005] [Cutler et al. 2002], or using moving camera systems
[Benosman et al. 2001]. All these techniques have advantages in typical applications and are
limited in others [Fermüller et al. 2000]. However, a compromise has to be made, depending on the
application, between high-resolution images and real-time processing or video rate. Yagi [Yagi 1999]
described the different techniques for building wide-field-of-view cameras, and Svoboda proposed in
[Pajdla et al. 2000] several classifications of omnidirectional cameras.
Catadioptric imaging is a common approach to instantaneous omnidirectional image acquisition,
providing a 360° FOV. Such a system is obtained by aligning a convex mirror with a standard camera.
Spherical, conic, parabolic, or hyperbolic mirrors [Baker et al. 1999] [Nayar 1997] can be used in
this case.
In our case, we adopted an omnidirectional system with a single camera and a spherical mirror
mounted on a mobile robot. The main contribution of this work is a tracking algorithm for visual
servoing experiments based on color detection in the HSV color space. The goal is to detect a target
in an observed scene simply by its color, using an omnidirectional image acquired by the catadioptric
system.
The major drawback of any detection or object-tracking algorithm based on color indices is its
sensitivity to lighting changes. Indeed, the representation of a color by its coordinates in the RGB,
L*a*b*, or HSV color space implicitly includes information about the color's illumination.
Depending on the selected color space, this information is often correlated with that of the color's
chrominance. Nevertheless, some color representation spaces are more discriminating than others.
For example, in RGB space the intensity of a color is carried by all three coordinates red, green,
and blue: any illumination variation affects the R, G, and B coordinates simultaneously. Other
representation spaces try to separate information about the color's intensity from information about
its chrominance. In the HSV space, the chrominance is represented by two coordinates, H (the color
hue) and S (its saturation). The intensity is represented by the third component, the value V.
The detection of an object by its color indices is mainly based on defining intervals that delimit its
color coordinates in a given representation space for different lighting conditions.
Unfortunately, the aforementioned spaces do not provide satisfactory robustness against illumination
variations of the scene in real conditions of use. This results in false detections or false rejections.
Therefore, we chose to work on a modified color representation based on the HSV space.
The algorithm we developed improves robustness and performance, enabling target detection even
under varying brightness conditions.
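
To make the RGB/HSV distinction concrete, the short sketch below (ours, not from the paper; it uses only Python's standard colorsys module and illustrative color values) shows that a pure illumination change moves all three RGB coordinates, whereas in HSV only the V component moves:

```python
import colorsys

# A reference color and the same color under halved illumination
# (illustrative values, not measurements from the paper).
bright = (0.80, 0.20, 0.10)              # RGB in [0, 1]
dim = tuple(0.5 * c for c in bright)     # uniform intensity drop

for name, rgb in (("bright", bright), ("dim", dim)):
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    print(f"{name}: RGB={rgb} -> H={h:.3f} S={s:.3f} V={v:.3f}")

# All three RGB coordinates change with the illumination, while in
# HSV only V changes: H and S are untouched by this pure scaling.
```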

2. EXPERIMENTAL SETUP

2.1. The omnidirectional vision sensor

A vision system is a rich source of information, but the narrow field of view offered by standard
cameras limits the range of possible applications. The catadioptric sensor solves this problem and is
a reliable solution for omnidirectional image acquisition.
For this study, we adopted an omnidirectional vision system mounted on a mobile robot, combining a
standard camera and a spherical mirror to acquire real-time omnidirectional images of the surrounding
scene. The sensor combines a Logitech C310 camera, with a resolution of 1280x720 pixels, a USB 2.0
interface, a video stream rate of 30 fps, and 24-bit color depth, with a spherical stainless steel
mirror of 6 cm radius whose optical axis is vertically aligned with that of the camera (Figure 1).

Figure 1. The catadioptric omnidirectional vision system.

2.2. Vehicle

Based on the “PackBot” robot concept, the vehicle used for this research, named ESCALADE360 v2.0,
has a mass of approximately 31 kg and overall dimensions of 90 cm x 53 cm x 30 cm (see Figure 2). It
has three kinematic degrees of freedom. Two independently controlled motors move the main tracks
on the sides of the vehicle, and two synchronized motors rotate both flippers about a horizontal axis at
the front of the vehicle. The design of this robot is part of our research project, which aims to develop
and test new algorithms for object tracking and servoing using omnidirectional cameras. The choice of
the “PackBot” concept is justified by the need for a vehicle capable of crossing small obstacles and
offering greater mobility on rough terrain.

Figure 2. View of the “ESCALADE360 v2.0” robot with obstacle-crossing flippers and the embedded
omnidirectional camera system.

3. TARGET TRACKING
As described above, the developed algorithm tracks a target by detecting its color, even under
varying illumination conditions. The tracking process operates as follows:

3.1. Target color parameters determination


In this first step, the target is placed in the field of view of the omnidirectional vision system under
low illumination conditions.
From a first acquired omnidirectional image, the three HSV color components of five pixels from the
target are sampled and defined as:

$(h_i, s_i, v_i)$ where $i = 1, 2, \ldots, 5$

Under high illumination conditions, the same process is repeated: the three color components of five
pixels are sampled from a second image:

$(H_i, S_i, V_i)$ where $i = 1, 2, \ldots, 5$

To obtain a robust color-detection algorithm that is less sensitive to illumination variations of the
surrounding scene, a correlation must be established between the color components obtained under
both low and high illumination conditions. Creating this correlation consists of computing the
parameters below:
$$
T_1' = \frac{1}{5}\sum_{i=1}^{5}\frac{h_i}{s_i\,v_i}, \qquad
T_2' = \frac{1}{5}\sum_{i=1}^{5}\frac{s_i}{h_i\,v_i}, \qquad
T_3' = \frac{1}{5}\sum_{i=1}^{5}\frac{v_i}{s_i\,h_i} \tag{1}
$$

$$
T_1'' = \frac{1}{5}\sum_{i=1}^{5}\frac{H_i}{S_i\,V_i}, \qquad
T_2'' = \frac{1}{5}\sum_{i=1}^{5}\frac{S_i}{H_i\,V_i}, \qquad
T_3'' = \frac{1}{5}\sum_{i=1}^{5}\frac{V_i}{S_i\,H_i} \tag{2}
$$

$$
T_1 = \frac{T_1' + T_1''}{2}, \qquad
T_2 = \frac{T_2' + T_2''}{2}, \qquad
T_3 = \frac{T_3' + T_3''}{2} \tag{3}
$$
$T_1'$, $T_2'$, and $T_3'$ are the correlation parameters under low illumination conditions, while
$T_1''$, $T_2''$, and $T_3''$ are those computed under high illumination. $T_1$, $T_2$, and $T_3$ are
the means of the parameters calculated with (1) and (2); they are used in the automatic target
detection step. This step is performed only once, at the start of the tracking process, and does not
need to be repeated unless another target is chosen.
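
As a minimal sketch of this calibration step (ours; the pixel values are placeholders, and the epsilon guard against division by zero is our addition, not discussed in the paper), the parameters of Eqs. (1)-(3) and the thresholds later used in Eq. (5) can be computed as:

```python
import numpy as np

def correlation_params(hsv_pixels, eps=1e-6):
    """Eqs. (1)/(2): average h/(s*v), s/(h*v), and v/(s*h) over the
    five sampled pixels. eps avoids division by zero (our addition)."""
    h, s, v = (np.asarray(hsv_pixels, dtype=float) + eps).T
    return (np.mean(h / (s * v)),
            np.mean(s / (h * v)),
            np.mean(v / (s * h)))

# Five target pixels sampled under low and under high illumination
# (placeholder values for illustration only).
low = [(0.05, 0.90, 0.30), (0.06, 0.88, 0.28), (0.05, 0.91, 0.31),
       (0.04, 0.89, 0.29), (0.05, 0.90, 0.30)]
high = [(0.05, 0.80, 0.90), (0.06, 0.78, 0.88), (0.05, 0.81, 0.91),
        (0.04, 0.79, 0.89), (0.05, 0.80, 0.90)]

t_low, t_high = correlation_params(low), correlation_params(high)
T = tuple((a + b) / 2 for a, b in zip(t_low, t_high))      # Eq. (3)
thresh = tuple(abs(a - b) for a, b in zip(t_low, t_high))  # alpha, beta, gamma of Eq. (5)
```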

3.2. Target detection


Once the target color parameters are set, the next step allows the navigating robot to automatically
detect the chosen moving target. Acquired omnidirectional images are swept, and for each pixel $p_i$
we compute three parameters $\lambda_{i1}$, $\lambda_{i2}$, $\lambda_{i3}$, given below, where
$i = 1, 2, \ldots, N$ and $N$ is the total number of pixels in the image.

$$
\lambda_{i1} = \frac{h_i}{s_i\,v_i}, \qquad
\lambda_{i2} = \frac{s_i}{h_i\,v_i}, \qquad
\lambda_{i3} = \frac{v_i}{s_i\,h_i} \tag{4}
$$
If a pixel $p_i$ satisfies the three following conditions, it is considered a point of the chosen
target.

$$
\begin{cases}
|\lambda_{i1} - T_1| < \alpha \\
|\lambda_{i2} - T_2| < \beta \\
|\lambda_{i3} - T_3| < \gamma
\end{cases} \tag{5}
$$

where $\alpha = |T_1' - T_1''|$, $\beta = |T_2' - T_2''|$, and $\gamma = |T_3' - T_3''|$.
For each detected target pixel, its orientation angle $\theta_i$ relative to the robot frame is
calculated. Finally, the centroid orientation $\theta_g$ (see Figure 3) of the observed target is
computed as follows:

$$
\theta_g = \frac{1}{n}\sum_{i=1}^{n}\theta_i \tag{6}
$$

where $n$ is the total number of detected target pixels.

Figure 3. Illustration of target centroid marking.
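
A vectorized sketch of this detection step follows (ours, assuming NumPy, an HSV image stored as a float array, and pixel angles measured about the image center, which we take as the projection of the sensor axis; the epsilon guard is again our addition):

```python
import numpy as np

def detect_target(hsv_img, T, thresh, eps=1e-6):
    """Apply Eqs. (4)-(5) to every pixel; return a boolean target mask.

    hsv_img: H x W x 3 float array of (h, s, v) values.
    T: (T1, T2, T3); thresh: (alpha, beta, gamma).
    """
    h, s, v = (hsv_img.astype(float) + eps).transpose(2, 0, 1)
    lambdas = (h / (s * v), s / (h * v), v / (s * h))     # Eq. (4)
    mask = np.ones(h.shape, dtype=bool)
    for lam, t, bound in zip(lambdas, T, thresh):         # Eq. (5)
        mask &= np.abs(lam - t) < bound
    return mask

def centroid_angle(mask, center):
    """Eq. (6): mean orientation of the detected pixels, measured
    about the image center (our assumption about the robot frame)."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None                       # target not visible
    thetas = np.arctan2(ys - center[1], xs - center[0])
    return float(thetas.mean())           # plain mean, as in Eq. (6)
```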

3.3. Vehicle servoing and accurate orientation recovery


The target tracking process (see Figure 4) can be summarized in the three following steps: detection
and identification of the target as described above, determination of the accurate orientation, and
robot servoing according to the estimated direction.
The robot orientation step includes a correction-loop sub-process, which allows the robot to
determine the orientation error generated by mechanical and electrical issues. The exact orientation
is then redefined.
Figure 4. Target tracking process
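
A schematic version of this loop (ours; `camera.read_hsv()` and `robot.rotate()` are hypothetical driver interfaces standing in for the hardware, and the sketch reuses `detect_target` and `centroid_angle` from the previous section):

```python
import time

def track(camera, robot, T, thresh, tol=0.05):
    """Detection -> orientation -> servoing loop with a one-shot
    correction step, mirroring the process of Figure 4."""
    while True:
        img = camera.read_hsv()                  # hypothetical driver call
        center = (img.shape[1] // 2, img.shape[0] // 2)
        theta = centroid_angle(detect_target(img, T, thresh), center)
        if theta is None:
            continue                             # target lost: keep scanning
        robot.rotate(theta)                      # servo toward the target
        # Correction loop: re-measure the residual orientation error.
        img = camera.read_hsv()
        err = centroid_angle(detect_target(img, T, thresh), center)
        if err is not None and abs(err) > tol:
            robot.rotate(err)                    # compensate the residue
        time.sleep(1 / 30)                       # match the 30 fps camera
```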

4. EXPERIMENTAL RESULTS

As described above, the hardware used to implement the developed algorithm is the
ESCALADE360 v2.0 robot with an omnidirectional vision system. Figure 2 shows the robot with the
omnidirectional vision system mounted on top of it.
For the proposed method, experiments were carried out by choosing the target and determining its
color parameters under different illumination conditions, as illustrated in Figures 5 and 6. In the
scenario shown in Figure 7, the chosen target is a person wearing a vest whose color had already
been parameterized.
The experiments were conducted under natural and artificial lighting. The proposed approach was
compared to the following colorimetric representation methods:
 Color indices delimitation in the RGB, HSV, YCbCr, and L*a*b* color spaces.
 Combinations of pairs of the above-mentioned color spaces.
 Subtraction of the mean image.
 Normalized RGB, obtained by dividing the R, G, and B coordinates by the sum R + G + B (a sketch of this baseline is given below).
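
As a minimal sketch of that last baseline (ours, assuming NumPy images; the epsilon avoids division by zero on black pixels):

```python
import numpy as np

def normalized_rgb(img, eps=1e-6):
    """Divide each channel by R+G+B, discarding first-order
    intensity changes while keeping chromaticity."""
    img = img.astype(float)
    return img / (img.sum(axis=2, keepdims=True) + eps)
```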
For those methods, the color parameters characterizing the target were determined under normal
lighting conditions. After setting the target color, the robot was supposed to follow the target over a
path under varying illumination conditions. Evaluated methods that led to inaccurate tracking or
false orientation movements were considered ineffective.
As shown in Figure 7, using the proposed color indexing approach, the robot was able to track the
chosen target under various illumination conditions with accurate orientation determination, owing to
the robustness of the color-detection approach. The experiments showed that this approach provides
the best illumination-invariance performance among the evaluated models.
Figure 5. HSV omnidirectional image under low illumination conditions (target marked).

Figure 6. HSV omnidirectional image under high illumination conditions (target marked).

Figure 7. Sample target-tracking experiments.

5. CONCLUSION

This paper presented an efficient color indexing algorithm for real-time target tracking by an
autonomous mobile robot, based on color detection in the HSV color space. The main advantage of the
proposed approach is its robustness against illumination variations, which deform the chromaticity
distribution and thereby degrade the performance of color recognition. The experiments were
conducted under natural and artificial lighting, and the proposed approach was compared to several
colorimetric representation methods. Test results show that the chromatic invariant yields an
excellent recognition rate even when the illuminant color conditions vary substantially.

6. REFERENCES

Baker, S., Nayar, S., 1999, A Theory of Single-Viewpoint Catadioptric Image Formation,
International Journal of Computer Vision, 35, 175-196.

Basso, F., Munaro, M., Michieletto, S., Pagello, E., Menegatti, E., 2013, Fast and Robust
Multi-People Tracking from RGB-D Data for a Mobile Robot, Intelligent Autonomous Systems 12,
Springer Berlin Heidelberg, 265-276.

Benosman, R., Kang, S. B., 2001, Panoramic Vision: Sensors, Theory and Applications,
Springer Verlag.

Cutler, R., Rui, Y., Gupta, A., Cadiz, J., Tashev, I., Wei He, L., Colburn, A., Zhang, Z.,
Liu, Z., Silverberg, S., 2002, Distributed Meetings: A Meeting Capture and Broadcasting
System, ACM Multimedia.

Fermüller, C., Aloimonos, Y., 2000, Geometry of Eye Design: Biology and Technology,
Theoretical Foundations of Computer Vision, 22-38.

Kangni, F., Laganière, R., 2006, Epipolar Geometry for the Rectification of Cubic
Panoramas, Third Canadian Conference on Computer and Robot Vision, CRV 2006.

Kobilarov, M., Sukhatme, G., Hyams, J., Batavia, P., 2006, People Tracking and Following
with Mobile Robot Using an Omnidirectional Camera and a Laser, Proceedings of the 2006 IEEE
International Conference on Robotics and Automation, Orlando, Florida.

Myint, Y. M., 2013, Development of Process Control with Obstacle Avoidance Behavior in
Autonomous Mobile Model, International Journal of Imaging and Robotics, 10(2), 30-43.

Nayar, S., 1997, Catadioptric Omnidirectional Camera, Proc. of IEEE Conf. on Computer
Vision and Pattern Recognition, 482-488.

Pajdla, T., Svoboda, T., 2000, Panoramic Cameras for 3D Computation, Proceedings of the
Czech Pattern Recognition Workshop, 63-70.

Poza-Luján, J. L., Posadas-Yagüe, J. L., Simó-Ten, J. E., 2014, Quality of Control and
Quality of Service in Mobile Robot Navigation, International Journal of Imaging and Robotics, 8,
81-89.

Sato, T., Yokoya, N., 2005, Omni-directional Multi-baseline Stereo without Similarity
Measures, IEEE Workshop on Omnidirectional Vision and Camera Networks, OMNIVIS 2005.

Xiong, Y., Turkowski, K., 1997, Creating Image-Based VR Using a Self-Calibrating Fisheye
Lens, CVPR 1997, 237-243.

Yagi, Y., 1999, Omnidirectional Sensing and Its Applications, IEICE Trans. on Information and
Systems, E82-D(3), 568-578.
