
IEEE ISCE 2014 1569960647

Automatic Top-View Transformation for Vehicle Backup Rear-View Camera

Vivek Maik
Department of Electronics and Communication
The Oxford College of Engineering, Bangalore, India
vivek5681@gmail.com

Daehee Kim, Hyungtae Kim, Jinho Park, Donggun Kim, and Joonki Paik
Department of Image
Chung-Ang University, Seoul, Korea
wangcho100@gmail.com, ipis.hyungtae@gmail.com, dkskzmffps@gmail.com, deepain83@gmail.com, paikj@cau.ac.kr

Abstract: An automatic top-view transformation method is presented for a vehicle backup rear-view camera. The proposed method consists of two steps: i) automatic corresponding point estimation based on the lens specification and ii) view transformation based on the direct linear transform algorithm. The major contribution of this work is an automatic view transformation that is optimized for the vehicle rear-view camera system. The proposed method can be applied to various imaging systems, such as automotive imaging systems, intelligent surveillance systems, and vehicle rear-view cameras.

Keywords: View transformation, direct linear transform, vehicle rear-view camera

I. INTRODUCTION

For the past few years, vehicles have been equipped with various cameras for driver safety, convenience, and video event recording to provide extended visual information. As the demand for vehicle vision systems increases, related research has become popular, such as lane detection [1], video event data recording, and inattentive driver monitoring [2].

A vehicle rear-view camera provides users with geometrically distorted images because a fisheye lens is used for a wide field of view. Drivers are prone to accidents because objects appear at unrealistic distances. Existing geometric distortion correction algorithms require user input for the corresponding coordinates. However, it is difficult to provide a highly accurate correction result because of the estimation error of corresponding point pairs.

In this paper, an automatic top-view transformation method is presented for a vehicle backup rear-view camera. The proposed method consists of automatic correspondence estimation using lens specifications and view transformation using the direct linear transform (DLT) algorithm.

The coordinates of the target view are estimated using the external camera parameters. The three-dimensional (3D) real-world coordinates are first estimated using four corresponding point pairs to calculate the projection transformation matrix. The top-view transformation is then obtained using the projection matrix.

II. AUTOMATIC CORRESPONDING COORDINATE ESTIMATION

At least four corresponding point pairs are needed for view transformation. The target coordinates are estimated using the lens specification and the installation information. Fig. 1 shows the installation information of the camera and the reference points in the 3D world coordinate system.

Fig. 1. The installation information of the camera and reference points in the 3D coordinate system.

The real position of the camera is given by the installation information in the vehicle, and the virtual or target position of the camera is specified by a user. The reference points in the 3D space are not necessarily coplanar. The projection of the reference points onto the image plane is estimated using the lens angle.

Angles \theta and \varphi are first calculated between the camera position and a reference point:

\theta = \cos^{-1}((z_c - z_p)/d),
\varphi = \tan^{-1}((y_c - y_p)/(x_c - x_p)),    (1)

where (x_c, y_c, z_c) represents the camera coordinates, (x_p, y_p, z_p) a reference point, and d the distance between the camera and the reference point. Next, the coordinates of the projected point on the image I(x, y) are given as

x_I = (w/2) + (\theta / N_{FOV}) \cos(\varphi),
y_I = (h/2) - (\theta / N_{FOV}) \sin(\varphi),    (2)

where w and h are the horizontal and vertical sizes of the image, and N_{FOV} the field of view of the camera.

The virtual camera setting data used in the experiment are x_r = 0, y_r = 151, z_r = 150, and \theta_v = 90°; Fig. 3 shows the result of the proposed method.
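The corresponding-point estimation in (1) and (2) can be sketched as follows. This is a minimal Python sketch, not the authors' implementation: the camera position, reference point, and image parameters in the usage line are illustrative, atan2 is used as a quadrant-safe form of the arctangent in (1), and the polar mapping is implemented literally as written in (2).

```python
import math

def project_reference_point(cam, ref, w, h, n_fov):
    """Project a 3D reference point into the image plane following
    Eqs. (1)-(2): angles (theta, phi) between the camera and the point,
    then a polar mapping scaled by the field of view N_FOV."""
    xc, yc, zc = cam
    xp, yp, zp = ref
    d = math.dist(cam, ref)                 # distance camera <-> reference point
    theta = math.acos((zc - zp) / d)        # Eq. (1), first angle
    phi = math.atan2(yc - yp, xc - xp)      # Eq. (1), quadrant-safe arctangent
    x_i = (w / 2) + (theta / n_fov) * math.cos(phi)   # Eq. (2)
    y_i = (h / 2) - (theta / n_fov) * math.sin(phi)   # Eq. (2)
    return x_i, y_i

# Illustrative values: camera 100 units above a ground point, 720x480 image,
# 134-degree field of view (converted to radians).
x_i, y_i = project_reference_point((0.0, 0.0, 100.0), (50.0, 50.0, 0.0),
                                   720, 480, math.radians(134))
```

Repeating this for each of the four (or more) reference points yields the image-side coordinates paired with the known 3D positions.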
III. TOP-VIEW TRANSFORMATION

The estimated corresponding coordinates are used to generate a transformation matrix defined as

H = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix},    (3)

where H is calculated using the DLT algorithm, which solves the following linear equations:

\begin{bmatrix} \mathbf{0}^T & -w_i' \mathbf{x}_i^T & y_i' \mathbf{x}_i^T \\ w_i' \mathbf{x}_i^T & \mathbf{0}^T & -x_i' \mathbf{x}_i^T \\ -y_i' \mathbf{x}_i^T & x_i' \mathbf{x}_i^T & \mathbf{0}^T \end{bmatrix} \begin{bmatrix} \mathbf{h}^1 \\ \mathbf{h}^2 \\ \mathbf{h}^3 \end{bmatrix} = \mathbf{0},    (4)

where \mathbf{x}_i = (x_i, y_i, w_i)^T and \mathbf{x}_i' = (x_i', y_i', w_i')^T are the corresponding pair of the real and virtual positions, respectively, and \mathbf{h}^j denotes the j-th row of H. Although there are three equations in (4), only two of them are linearly independent. Since each point correspondence gives two equations, it is possible to solve for H without the third equation. One may choose w_i' = 1, which means that (x_i', y_i') are the coordinates measured in the image.
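The system (4) and the rank argument above translate directly into a small solver. The following NumPy sketch (not the authors' implementation; the point coordinates in the usage lines are illustrative) stacks the two linearly independent rows per correspondence with w_i = w_i' = 1 and takes the null vector of the stacked system via SVD:

```python
import numpy as np

def dlt_homography(src_pts, dst_pts):
    """Estimate H from >= 4 point pairs with the DLT algorithm:
    stack the two independent rows of Eq. (4) per correspondence and
    take the right singular vector of the smallest singular value."""
    rows = []
    for (x, y), (xp, yp) in zip(src_pts, dst_pts):
        X = np.array([x, y, 1.0])                       # x_i with w_i = 1
        # Two independent equations per pair (w_i' = 1):
        rows.append(np.concatenate([np.zeros(3), -X, yp * X]))
        rows.append(np.concatenate([X, np.zeros(3), -xp * X]))
    A = np.array(rows)
    _, _, vt = np.linalg.svd(A)
    H = vt[-1].reshape(3, 3)        # null vector of A, reshaped to 3x3
    return H / H[2, 2]              # normalize so that h33 = 1

# Illustrative check: a unit square mapped to a square of side 2.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]
H = dlt_homography(src, dst)
```

For the image-level warp in (5), each target pixel is in practice mapped through the homography (e.g., by inverse mapping with H^{-1}) rather than by a literal matrix product with the image.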
Finally, the top-view image is generated as

I_v = H I_r,    (5)

where I_r and I_v respectively represent the real and virtual views.

IV. EXPERIMENTAL RESULTS

In order to evaluate the performance of the proposed system, we used 720×480 test images taken by a fisheye lens camera installed in a vehicle rear-view system. The real camera installation data used in the experiment are x_r = 0, y_r = 91, z_r = 0, \theta_v = 35°, and FOV = 134°.

Fig. 3. Results of the proposed method: (a) the result of the transformation, (b) the result of the crop.

V. CONCLUSION

An automatic top-view transformation method was presented for a vehicle backup rear-view camera. The proposed method transforms the view to a top view automatically based on the virtual camera information defined by a user, providing convenience to the driver. The proposed method can be applied to image transform systems, automotive imaging systems, and intelligent surveillance, as well as vehicle rear-view backup cameras.

ACKNOWLEDGMENT

This research was supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science and Technology (2013R1A1A2060470), by the Ministry of Science, ICT & Future Planning as Software Grand Challenge Project (grant no. 14-824-09-003), and by the Technology Innovation Program (Development of Super Resolution Image Scaler for 4K UHD) under Grant K10041900.

Fig. 2. Input test images of a rear-view camera.

REFERENCES

[1] R. Danescu and S. Nedevschi, "Probabilistic lane tracking in difficult road scenarios using stereovision," IEEE Trans. Intell. Transp. Syst., vol. 10, no. 2, pp. 272-282, June 2009.
[2] Y. Dong, Z. Hu, K. Uchimura, and N. Murayama, "Driver inattention monitoring system for intelligent vehicles: a review," IEEE Trans. Intell. Transp. Syst., vol. 12, no. 2, pp. 596-614, June 2011.
[3] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd ed., Cambridge University Press, 2003, pp. 88-92.
