
Transportation Research Part C 9 (2001) 231–247

www.elsevier.com/locate/trc

Automatic vehicle classification system with range sensors


Charles Harlow *, Shiquan Peng
Department of Electrical and Computer Engineering, Louisiana State University, Baton Rouge, LA 70803, USA
Received 20 February 1998; accepted 21 June 2000

* Corresponding author. Tel.: +1-225-388-6796; fax: +1-225-388-5263. E-mail address: ch@rsip.lsu.edu (C. Harlow).

Abstract
Traffic management systems use inductive loop detectors and, more recently, video cameras to detect vehicles. Loop detectors are expensive to maintain, and video-based systems are sensitive to environmental conditions and do not perform well in vehicle classification. Cameras based upon range sensors are not sensitive to lighting and may be less sensitive to other environmental conditions. In addition, range imagery should provide data that form a good basis for vehicle classification applications. In this paper, we describe methods for processing range imagery and performing vehicle detection and classification. A vehicle classification accuracy of over 92% was obtained in classifying vehicles into different vehicle classes. © 2001 Elsevier Science Ltd. All rights reserved.

Keywords: Vehicle; Classification; Image processing; Laser sensor

1. Introduction
Congestion is a major problem on our busy highways. Building new roads to relieve congestion is very expensive. For this reason the US Federal Highway Administration (FHWA) is looking at intelligent roadway management as a means to relieve congestion (Carlson, 1997). The development of advanced traffic management systems (ATMS) is an important area of intelligent transportation systems (ITS) (Michalopoulos, 1991). The activities of detection, classification, and surveillance of traffic are critical to the efficient operation of streets and highways. Timely information on traffic patterns is required for signal control (Rouphail and Dutt, 1995), ramp metering (Kunigahalli, 1995; Tanaka et al., 1995), incident response teams (Ritchie et al., 1995), driver information systems (Nojima et al., 1995; Eccles, 1995), and traffic control center operation (Dillenburg et al., 1995; Jansma and Kaiser, 1995). Video units can detect vehicle presence, speed,
stopped vehicles, wrong direction, and queue lengths, and can measure travel times automatically (Washburn and Nihan, 1999).

Actuated intersections need vehicle detectors to show the amount and location of vehicle traffic. Traditional vehicle detectors are inductive loop detectors (ILD) that sense a change in inductance when a (metal) vehicle is above the loop. These detectors work well in all types of weather and for all types of vehicles (Juba, 1996). Detection accuracy has been reported at over 99%. Loop detectors make errors such as detecting truck axles as separate vehicles and missing small vehicles with little metal content. These detectors are embedded in the roadway surface, where heavy traffic and construction can cause damage to the detectors. They have high failure rates due to pavement failures and poor maintenance. Repair work requires closing traffic lanes and is expensive. High maintenance costs are causing agencies to find above-ground replacement technologies (Carlson, 1997).

Vision and other overhead-mounted traffic monitoring systems can be expected to provide several benefits. One benefit is reduced operation and maintenance costs, since maintenance can be performed off the traveled way, which reduces lane closures and costs. The systems can be reconfigured quickly and cheaply. There is a disadvantage of higher initial capital cost, but computer system prices will continue to decrease and reduce these costs. Another benefit is automatic traffic surveillance. Video detection units can automatically detect traffic incidents. This allows a traffic control system to limit vehicle access to accident areas, send information to rescue services, send road condition information to drivers, and provide a more regular flow of traffic. Another application area is vehicle classification. This should help in improved system operation with better traffic enforcement, regulation, and development of new methods for fee collection.
2. Background
There are a number of sensing technologies relevant to vehicle detection (FHWA, 1997). These include loop detectors, infrared, ultrasonic, radar, microwave, and video detectors. Active infrared detectors operate on the Doppler principle. A receiving unit records the reflected wave. A vehicle interrupts the wave, which leads to vehicle detection. Passive units note the presence of a vehicle by the change in wavelength reflected from the pavement. Ultrasonic detectors emit signals that are interrupted by vehicles, which permits vehicle detection. Microwave detectors utilize the Doppler-shift principle. The unit sends out a signal toward the roadway. When a vehicle passes through this pattern, some of the energy is reflected back to the unit at a different frequency. The detector senses the change in the frequency, thus detecting the vehicle (Clippard, 1996). The remote traffic microwave sensor (RTMS) is a miniature radar employing the frequency-modulated continuous-wave principle; any non-background targets reflect the signal back to the RTMS, where the targets are detected and their range measured. RTMS radar is a low-cost, general-purpose, all-weather traffic sensor which detects presence and measures traffic parameters in multiple independent zones (Manor, 1996). These units show promise for traffic surveillance, vehicle detection, incident detection, and roadside technology. Microwave radar has been reported to be a superior technology for traffic management (FHWA, 1995). An exception was in the area of vehicle classification. Radar systems are also able to compute flow and queue length. For example, the BEATRICS radar system for
automatic incident detection (Roussel et al., 1996) records speed through Doppler processing and detects incidents through speed pattern analysis. Incident detection is based on vehicle stoppage and other traffic parameter measurements such as average and instantaneous speed and lane occupation rate. In Europe, the use of photo-radar systems to catch speeders is common. Two US firms have developed a system consisting of a laser speed gun and a CCD progressive-scan camera to identify speeders (Jones-Bey, 1997). The laser gun detects speeders and signals the camera to take a picture of the vehicle for identification. It is pointed out that in the USA, laws may have to be changed to use such information as evidence.

Current research and development on overhead vehicle detection systems has concentrated upon systems and processing techniques related to video systems (Coifman et al., 1998). These systems are relatively inexpensive and the technologies are familiar to DOT personnel. These systems can detect vehicles and measure their speed under good environmental and traffic conditions. Environmental variables such as rain, fog, lighting conditions, and shadows can affect the performance of these systems. They have problems at night, with lighting changes at sunrise and sunset, with shadows cast by other vehicles or objects, with headlight reflections from wet roadways, and with glare from a setting sun reflected from the roadway (Juba, 1996; FHWA, 1997). Traffic monitoring systems with a license plate reading capability (ALPR) are useful in compliance applications. Conditions that affect ALPR are vehicle speed and spacing, weather, lighting, traffic conditions, plate materials, plate condition, plate format including color and character fonts, mounting of the plate on the vehicle, and an obscured view caused by dirt, snow, or structures such as a trailer hitch. One can measure travel time by matching the same vehicle as it passes under different video stations.
Camera selection is an important issue for traffic applications (Juba, 1997). Monochrome CCD cameras are more sensitive to light energy than color CCD cameras. This is important for traffic images acquired at dusk or night. For night viewing with roadways illuminated by lights, one should match the spectral response of the camera to that of the lighting. In addition, a camera tuned to acquiring images in low-light conditions may be "blinded" (saturated) by bright light. Daylight scenes can have a scene luminance of 10,000 lux, while nighttime freeway scenes may have a luminance of 0.1 lux. One needs to examine a vision system under all operational lighting conditions. The required detection accuracy is important (Juba, 1996). Transportation personnel were surveyed and it was determined that a minimum level of performance for acceptance is 99% detection accuracy with no more than 3% false detections when no vehicle is present. One needs to test systems under a variety of environmental conditions. It is difficult to obtain good ground truth because all sensors have errors, including human observers. Traffic flow may not be a good indicator. At busy intersections with multiple detectors on different lanes, some sensors can fail and there will still be good traffic flow because the other sensors are detecting traffic. One will get a better indication of performance at intersections where vehicles arrive intermittently. Camera location was reported to be a critical issue.
Extensive tests of systems for non-intrusive monitoring of traffic have been conducted (FHWA, 1997; Bahler et al., 1998; Kranig, 1998). They compared passive infrared, passive magnetic, radar, Doppler microwave, passive acoustic, pulse ultrasonic, and video detectors. Cold temperatures and weather conditions affected passive acoustic and magnetic detectors. Video and passive acoustic detectors counted within 10% of loops, while ultrasonic, passive and active infrared, Doppler microwave and magnetic devices counted within 3% of loops. Active infrared devices were
affected by snow. Passive infrared detectors performed well for vehicle detection. It was also reported that lighting conditions affected the performance of video devices. Snow was the environmental condition that had the most impact on the performance of the devices (FHWA, 1997). No device could classify vehicles into the different classes defined by the FHWA. Some devices will perform better in detection studies than in classification studies since their data are more appropriate for detection.
In Kehtarnavaz et al. (1995) a video system for classifying vehicles is given. The system classifies vehicles according to length. Vehicles with lengths less than 5 m are classified as passenger cars and others as long vehicles or trucks. A classification accuracy of 90% was reported. Video systems to characterize vehicles from standard optical camera images have been developed (Jolly et al., 1996). A camera is located at the side of the roadway to capture images of moving vehicles. The classes used were sedans, pickup trucks, hatchbacks, station wagons, 4-by-4s, and vans. Vehicles were identified by matching the vehicle profile. A classification accuracy of 91.9% was reported on a data set of 393 vehicle images. Doppler radar has also been used for vehicle classification (Bullimore and Hutchinson, 1996). The system makes length and speed calculations. Vehicles are classified by type according to length. Count accuracy was reported to be 99%, but no classification accuracy was given. Another project constructed a system for vehicle classification using neural networks based upon acoustic signals. The classification accuracy is reported to be 96% for two vehicle classes consisting of passenger cars and trucks, but it is difficult for this system to classify four or more vehicle classes (James and Sampan, 1995). Factors such as speed and road conditions affect the acoustic signal and may distort the results. This study was limited to dry road conditions. A traffic monitoring system based on the AUTOSENSE II laser imaging system has been developed (Myers, 1996). A high accuracy is reported but few details are given about the processing or classification system.

The work reported in the remainder of this paper gives details on processing of range imagery and also considers a number of vehicle classes. Laser imaging systems are considered because the data are less environmentally sensitive. In addition, the range data provided by these sensors should be useful for obtaining good vehicle classification.
3. Methods
In this section, we describe the data sources and the processing methods developed for vehicle location and classification (Peng and Harlow, 1996). In order to classify the vehicle, we have developed features in the form of an n-dimensional vector suitable for characterizing the vehicle type. We have also developed classification methods that use the feature vector for vehicle classification.

The data source utilized was a laser sensor that returns range and intensity information. The AUTOSENSE II laser range image system was manufactured by Schwartz Electro-Optics. This system operates in line-scan mode. The range accuracy was 3 in. The system was located above the traffic lanes. The scan rate is 720 lines per second. There were 30 range measurements (pixels) collected on each scan-line. The resolution of the system was 1°. The system produces two scan-lines that are separated by 10°. The system produced an intensity image and a range image where each pixel value is the distance of the point from the sensor. Each pixel value in the intensity image
represents the reflectance of the point. A data set of range and video imagery was also acquired from Schwartz Electro-Optics.
3.1. Geometric correction
Now let us examine the calculation of the pixel values for the range image (Inigo, 1989; Verly and Delanoy, 1993). First consider the imaging geometry. The range camera is a line scanner as shown in Fig. 1. In order to determine the position of each pixel on a line-scan, one must consider the geometry. Since we are considering one scan-line at a time, the y value is fixed. If one reads a range value at pixel x on a scan-line, then the true position x0 is related to the height of the object as shown in Fig. 2. Here hs is the height of the scanner above the roadway, xc is the position directly below the scanner, α is the scan angle of the ray measured from the vertical, and r is the measured range. The following relations hold:

tan α = (x − xc)/hs,

hp = hs − r cos α,

(x − x0)/(x − xc) = (hs − r cos α)/hs,

x0 = x − (hs − r cos α)(x − xc)/hs.

The position x0 is the correct pixel location associated with the range value r. The correct height associated with pixel x0 is hp.

Fig. 1. Geometry for range scanner.

Fig. 2. Geometric corrections.
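As an illustration of the correction, the following short Python sketch applies the relations above to one range sample. It assumes the scanner height hs, the center position xc, and the ray's scan angle are known for each sample; the function name and the example numbers are ours, not part of the original system.

import math

def correct_pixel(x, r, x_c, h_s, alpha_deg):
    """Apply the geometric correction for one range sample.

    x         : apparent position of the sample along the scan-line
    r         : measured range from the scanner to the surface
    x_c       : position directly below the scanner (scan-line center)
    h_s       : height of the scanner above the roadway
    alpha_deg : scan angle of the ray, in degrees from the vertical

    Returns (x0, h_p): the corrected position, x0 = x - (h_s - r cos a)(x - x_c)/h_s,
    and the height of the hit point above the roadway, h_p = h_s - r cos a.
    """
    cos_a = math.cos(math.radians(alpha_deg))
    h_p = h_s - r * cos_a
    x0 = x - (h_s - r * cos_a) * (x - x_c) / h_s
    return x0, h_p

if __name__ == "__main__":
    # Example: scanner 22.5 ft above the road (the mounting height used in
    # Section 3.4.2); a ray 5 degrees off vertical returns a range of 18 ft.
    x0, h_p = correct_pixel(x=20.0, r=18.0, x_c=15.0, h_s=22.5, alpha_deg=5.0)
    print(round(x0, 3), round(h_p, 3))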
3.2. Segmentation
In order to determine features, it is necessary to segment the image. By this process, the pixels in the image corresponding to the vehicle are isolated from the rest of the pixels. The histogram of the image is computed and analyzed for the presence of a vehicle. The presence of a vehicle is indicated by the modes of the histogram: there should be modes for the roadway and for the vehicle. The vehicle is indicated by the fact that it is closer to the range sensor than the roadway. If a mode is present indicating a vehicle, then a threshold tseg is selected and used to segment the image into a background region and a vehicle region as shown in Fig. 3. Range values greater than tseg are in the background region and range values less than tseg are in the vehicle region.

Once the vehicle and roadway regions have been determined, we can obtain the height of the vehicle at each pixel in the vehicle region by subtracting the pixel value from the average background value. Let Rb be the region corresponding to the roadway and Rv be the region corresponding to the vehicle (Fig. 3). Let b be the average pixel value in Rb. If gr(p) is the range value of point p = (x, y), then the height pixel value is given by gh(p) = b − gr(p). The values given by gh form a height image. Each pixel value is the height above the roadway and gives the height of the vehicle for pixels in the vehicle region.
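The thresholding and height-image steps can be sketched as follows. This is a minimal illustration assuming the range image is a NumPy array and that the threshold tseg has already been selected from the histogram; the helper name and the toy numbers are ours.

import numpy as np

def segment_and_height(range_img, t_seg):
    """Split a range image into background/vehicle regions and build the height
    image g_h(p) = b - g_r(p), where b is the average background range."""
    vehicle_mask = range_img < t_seg        # vehicle pixels are closer than t_seg
    background_mask = ~vehicle_mask         # roadway pixels
    b = range_img[background_mask].mean()   # average background range value
    height_img = np.where(vehicle_mask, b - range_img, 0.0)
    return vehicle_mask, height_img

if __name__ == "__main__":
    # Toy 4 x 6 range image: roadway at 22.5 ft, a "vehicle" patch at 17 ft.
    img = np.full((4, 6), 22.5)
    img[1:3, 2:5] = 17.0
    mask, heights = segment_and_height(img, t_seg=20.0)
    print(int(mask.sum()), round(float(heights.max()), 2))   # 6 pixels, 5.5 ft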
3.3. Data correction
Fig. 3. Vehicle and background regions.

Fig. 4. Neighborhood of a pixel.

Some of the data obtained from the laser scanner are erroneous. This results from weak reflections from the vehicle surface. The bad data points result in very large or very small range values; the bounds on reasonable values are the parameters kbad1 and kbad2. The values are first corrected before any image processing. In order to correct the data, at point p = (x, y), the neighborhood Rp of the pixel is considered (Fig. 4). If p has an erroneous range value gr(p), and p is in Rb, then it is replaced with

k0 = average{ gr(p) | p in Rp and kbad1 < gr(p) < kbad2 }.

This ensures that only pixels which have reasonable range values are included in the calculation. An alternative to this approach would be to use a median filter that is not sensitive to extreme values.
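A sketch of this correction in Python, assuming NumPy and SciPy are available and using illustrative values for kbad1 and kbad2; the median-filter alternative mentioned above is included for comparison.

import numpy as np
from scipy.ndimage import median_filter

def correct_bad_points(range_img, k_bad1, k_bad2, radius=1):
    """Replace out-of-range samples by the average of the reasonable samples
    in the surrounding neighborhood R_p (a (2*radius + 1)^2 window)."""
    out = range_img.astype(float).copy()
    bad = (range_img <= k_bad1) | (range_img >= k_bad2)
    for i, j in zip(*np.nonzero(bad)):
        r0, r1 = max(i - radius, 0), min(i + radius + 1, range_img.shape[0])
        c0, c1 = max(j - radius, 0), min(j + radius + 1, range_img.shape[1])
        patch = range_img[r0:r1, c0:c1]
        valid = patch[(patch > k_bad1) & (patch < k_bad2)]
        if valid.size:                    # average only the reasonable values
            out[i, j] = valid.mean()
    return out

def correct_with_median(range_img, size=3):
    """The alternative noted in the text: a median filter is not sensitive to
    the extreme values, so no explicit validity test is needed."""
    return median_filter(range_img, size=size)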
3.4. Feature extraction
It is useful to calculate the absolute dimensions of a vehicle. This includes the calculation of a
vehicle's length, width and height. Firstly, let us consider the height.
3.4.1. Height
We can obtain the height of the vehicle at each pixel in the vehicle region by subtracting the
pixel value from the average background value.
3.4.2. Width
Over the vehicle region Rv, let (xl, y) be the leftmost pixel in Rv and (xr, y) be the rightmost pixel in Rv for a given y. Then the width of the object can be determined from the following formula:

Pw = max{ |xr − xl| over all (xl, y) and (xr, y) in Rv }.

The unit of Pw is pixels; to transform it into feet, we must consider the range finder imaging geometry (Fig. 5).

Fig. 5. Image geometry and width calculation.

The range scanner sends out 30 rays that impinge upon the surface of a scene. This gives ray i (0 ≤ i ≤ 29) at spot i. The separation between rays is 1°. Let wi,i+1 (0 ≤ i ≤ 28) represent the width between spot i and spot i+1; the 14th ray and the 15th ray are the two rays closest to the center line (represented by the dashed line in Fig. 5). Assuming that the distance between the scanner and the roadway is 22.5 feet (see Fig. 5), we can calculate the values of w14,15 and w0,1 as

w14,15 = 2 × 22.5 × tan(0.5°) = 0.3927 feet,
w0,1 = 22.5 × (tan(14.5°) − tan(13.5°)) = 0.4185 feet.

The width between spot 0 and spot 29 is

w0,29 = 2 × 22.5 × tan(14.5°) = 11.6378 feet.

Thus the average width between two successive spots is waverage = 11.6378/29 = 0.4013 feet. The quantity waverage is used to calculate the vehicle width. The largest relative error is

Er = max{ (w0,1 − waverage)/waverage, (waverage − w14,15)/waverage } = (w0,1 − waverage)/waverage = 4.2%.

The largest absolute error is

Ea = max{ (w0,1 − waverage) × 29, (waverage − w14,15) × 29 } = (w0,1 − waverage) × 29 = 0.4988 feet.

Through the analysis above, we know that the following formula

w = Pw × waverage

gives a reasonable calculation of the width of a vehicle in feet. For example, if we find that for a vehicle Pw equals 20 pixels, then we know that the vehicle is 20 × 0.4013 = 8.026 feet in width.
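A minimal sketch of the width computation in Python, using the average spot spacing of 0.4013 feet derived above; the vehicle region is assumed to be given as a boolean mask with one row per scan-line, and the helper name is ours.

import numpy as np

W_AVERAGE_FT = 0.4013   # average spacing between adjacent spots (Section 3.4.2)

def vehicle_width_feet(vehicle_mask):
    """P_w = max |x_r - x_l| over all scan-lines of the vehicle region;
    the width in feet is w = P_w * w_average."""
    p_w = 0
    for row in vehicle_mask:                    # one scan-line at a time
        cols = np.nonzero(row)[0]
        if cols.size:
            p_w = max(p_w, int(cols[-1] - cols[0]))
    return p_w * W_AVERAGE_FT

if __name__ == "__main__":
    mask = np.zeros((5, 30), dtype=bool)
    mask[1:4, 5:26] = True                      # spans spots 5..25, so |x_r - x_l| = 20
    print(round(vehicle_width_feet(mask), 3))   # 20 * 0.4013 = 8.026 ft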
3.4.3. Length and speed
In order to calculate the length, one must have two scan-lines taken at different positions. The system has two laser beams separated by 10° in the y-direction (Fig. 6). This allows the speed and length of the vehicle to be determined (Verly and Delanoy, 1993). Beam 1 scans a point p of the object into the y1 line, while beam 2 scans the same point into the y2 line. The distance d between the two beams in feet is

d = 0.25 × r × tan(10°).

The time required to travel this distance in seconds is t = (y2 − y1)/sr, where sr is the number of lines scanned per second by beam 2. The scan rate for our system is 720 lines per second total, or 360 lines per second for each individual beam. The speed in feet per second is then s = d/t. The final equation for the speed is

s = 0.25 × r × tan(10°) × sr/(y2 − y1).

If there are ny lines in Rv, then it takes ny/sr seconds for the vehicle to travel its length. Therefore, the length of the vehicle is l = s × ny/sr in feet.
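The speed and length equations translate directly into code. A sketch assuming the quantities above (range r in feet, scan-line indices y1 and y2, the number of vehicle scan-lines ny, and sr = 360 lines per second per beam); the example values are illustrative only.

import math

SR = 360.0   # scan-lines per second for each beam (720 lines/s for the system)

def beam_separation_feet(r):
    """Ground separation d of the two beams at range r (d = 0.25 * r * tan 10 deg)."""
    return 0.25 * r * math.tan(math.radians(10.0))

def speed_and_length(r, y1, y2, n_y, sr=SR):
    """Speed s = d/t with t = (y2 - y1)/sr, and length l = s * n_y / sr,
    where n_y is the number of scan-lines covering the vehicle region."""
    d = beam_separation_feet(r)
    t = (y2 - y1) / sr          # seconds for the vehicle to cover distance d
    s = d / t                   # speed in feet per second
    l = s * n_y / sr            # vehicle length in feet
    return s, l

if __name__ == "__main__":
    s, l = speed_and_length(r=22.5, y1=100, y2=110, n_y=160)
    print(round(s, 2), "ft/s,", round(l, 2), "ft")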
3.4.4. Feature vector
Fig. 6. Geometry for speed measurement.

Fig. 7. Subregions for feature extraction.

The vehicles can be placed into basic subclasses using the feature vector (l, w, h). In order to distinguish the type of vehicle more completely, one needs more features. In addition to the length, width, and height features, 16 other features are extracted. The features are extracted from subregions of the vehicle region Rv (Fig. 7). For each subregion Ri a feature bi is calculated as the average value of gh over Ri, i.e.,

bi = (1/|Ri|) Σ { gh(p) | p ∈ Ri }.

The feature b16 is the average value over the region Rv. The feature vector is then (l, w, h, b1, b2, ..., b16). These features are chosen for their ability to distinguish vehicle type. The types of vehicles such as automobile, van, and pickup truck will be determined by differences in their heights above the roadway in certain subregions. The measures extracted from R1, ..., R8 give an indication of the height distribution of the vehicle from front to back. This is useful for discriminating cars, vans, and trucks. It is also useful for determining whether a vehicle has a trailer. The measures extracted from R9, ..., R13 give an indication of the height distribution across the back of the vehicle. An empty pickup will have high sides and be flat in the middle, while an empty flat-bed truck will be flat across the back of the vehicle. Also, trucks hauling irregularly shaped loads will be reflected in these measures. The measures extracted from R14 and R15 give an indication of the height distribution at the back of the vehicle. Some empty trucks such as pickups and dump trucks will have a high back and be flat on the rest of the truck bed. More features are extracted from the back of the vehicle because this area can be an important indicator of vehicle type.
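A sketch of assembling the feature vector, assuming the 15 subregions R1, ..., R15 of Fig. 7 are supplied as boolean masks over the height image; the exact subregion layout is defined by the figure and is not reproduced here, so the demo masks are placeholders.

import numpy as np

def feature_vector(height_img, vehicle_mask, subregion_masks, length, width, height):
    """Build (l, w, h, b1, ..., b16): b_i is the mean height over subregion R_i
    (i = 1..15) and b16 is the mean height over the whole vehicle region R_v."""
    b = [height_img[m].mean() if m.any() else 0.0 for m in subregion_masks]
    b16 = height_img[vehicle_mask].mean()
    return np.array([length, width, height] + b + [b16])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h_img = rng.uniform(0.0, 6.0, size=(16, 30))   # toy height image (feet)
    veh = h_img > 1.0                               # stand-in vehicle region R_v
    # Placeholder subregions: 15 horizontal bands intersected with R_v.
    bands = []
    for i in range(15):
        m = np.zeros_like(veh)
        m[i:i + 2, :] = True
        bands.append(m & veh)
    fv = feature_vector(h_img, veh, bands, length=15.0, width=6.0, height=5.0)
    print(fv.shape)   # (19,): (l, w, h, b1, ..., b16)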
3.4.5. Classification
The vehicle types considered are given in Table 1. For different applications one might have different classes of vehicles, but these classes are typical of the types of vehicles on the road. Some of the types are very similar, such as mini-vans, vans, and sport utility vehicles. The images were obtained in a variety of conditions, including daytime, nighttime, and rain. The vehicles can be placed into basic subclasses using the feature vector (l, w, h). Vehicles have basic dimensions (AASHTO, 1990). The features l, w, and h are the length, width, and height of the vehicle. With this feature vector, one can determine if the vehicle is in one of the subclasses
type 1: small vehicles such as motorcycles,
type 2: medium size vehicles such as automobiles and light trucks, or
type 3: large vehicles such as large trucks and buses.
In order to determine the vehicle type more accurately, additional features are used. In addition to length, width, and height, 16 other features are used. Our classifier is a rule-based classifier (Trucco and Fisher, 1995; Lovell and Bradley, 1996), which operates upon the feature vector (l, w, h, b1, b2, ..., b16) and determines the vehicle type (Peng and Harlow, 1996). The units in the parameters and rules are in feet. The rules were determined by knowledge of the relative sizes of different types of vehicles and by examining a small number of vehicles from each class.
3.4.6. Classification parameters
Table 1
Data set for classification

Vehicle class           Number in class    Percentage correct classification
Motorcycle              5                  100
Passenger car           445                98
Pickup                  233                89
Mini-van                45                 78
Van                     67                 92
Sport utility           62                 76
Cargo van               16                 93
Delivery truck          28                 73
Tractor truck           6                  100
Tractor with trailer    51                 87
Garbage truck           3                  50
Dump truck              6                  50
Bus                     10                 33
Recreation vehicle      5                  33

Several parameters are defined that are useful in the classification process. One parameter is related to the characterization of pickup vehicles. The shape of a loaded pickup is different from the shape when it is empty. This is reflected in the average height of each section of the rear part of the pickup: the heights will be irregular. A parameter like_pickup is used in the characterization of pickups. When the back part of a vehicle has a shape like those shown in Fig. 8, the parameter like_pickup is set to 1. Refer to Fig. 7 also.

Fig. 8. Pickup characterization.
The classification rules are given in a pseudo-code notation. The term && means logical and, the term || means logical or, and the term ∪ means set union. The like_pickup parameter is given by the following rule:

if (there exist at least two local maxima in the sequence {R9, R10, R11, R12, R13}) or
   (there exist at least two local maxima in the sequence {R3, R4, R5, R6, R7, R8})
then like_pickup = 1
Another determination is whether the rear part of a vehicle is flat. This parameter records whether the rear part of an object is flat (flat = 1) or not (flat = 0). It is particularly useful when we classify vans, mini-vans, buses, cargo vans, and even pickups. The rule is

let max_y2 = max{R4, R5, R6, R7}
if (max_y2 − R4 < 0.25 && max_y2 − R5 < 0.25 &&
    max_y2 − R6 < 0.25 && max_y2 − R7 < 0.25)
then flat = 1
Another useful parameter relates to trailers. Before the system classifies the vehicle, it first analyzes whether the vehicle has a trailer. For the vehicle region Rv in an image, if there exists a subset

Rhitch ⊆ R2 ∪ R3 ∪ R4 ∪ R5 ∪ R6 ∪ R7

with the following dimension properties:
the length of Rhitch > 11 in.,
the width of Rhitch < 2/3 × width,
the height of Rhitch < 3 feet,
then trailer = 1.

Because it is possible for many types of vehicles to have a trailer, this parameter is useful to classify these cases. Some types of vehicles with a trailer are shown in Fig. 9.
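The three parameters can be written compactly in code. The sketch below assumes the section averages R1, ..., R15 are passed as a 1-indexed Python list and uses a simple strict local-maximum test; the hitch-segment search itself is omitted, and the demo numbers are made up for illustration.

def count_local_maxima(seq):
    """Number of strict local maxima in a sequence of section heights."""
    return sum(1 for i in range(1, len(seq) - 1)
               if seq[i] > seq[i - 1] and seq[i] > seq[i + 1])

def like_pickup(R):
    """like_pickup = 1 if either height sequence has at least two local maxima."""
    back = [R[9], R[10], R[11], R[12], R[13]]
    side = [R[3], R[4], R[5], R[6], R[7], R[8]]
    return 1 if (count_local_maxima(back) >= 2 or count_local_maxima(side) >= 2) else 0

def flat(R, tol=0.25):
    """flat = 1 if R4..R7 all lie within tol feet of their maximum."""
    max_y2 = max(R[4], R[5], R[6], R[7])
    return 1 if all(max_y2 - R[i] < tol for i in (4, 5, 6, 7)) else 0

def trailer(hitch_length_ft, hitch_width_ft, hitch_height_ft, vehicle_width_ft):
    """trailer = 1 if a low, narrow hitch-like segment was found inside R2..R7;
    the search for that segment is omitted from this sketch."""
    return 1 if (hitch_length_ft > 11.0 / 12.0 and                 # 11 in. in feet
                 hitch_width_ft < (2.0 / 3.0) * vehicle_width_ft and
                 hitch_height_ft < 3.0) else 0

# Usage with a 1-indexed list of section averages (index 0 unused, heights in feet).
R = [0.0, 1.8, 2.5, 2.6, 2.4, 2.4, 2.5, 2.4, 3.1, 3.0, 1.2, 1.1, 1.3, 3.0, 0.0, 0.0]
print(like_pickup(R), flat(R))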

Fig. 9. Typical vehicles with trailer.

3.4.7. Rules
After these parameters have been obtained, the system will classify the vehicles into certain types according to the classification rules. A few of the rules are described below.

One rule is

1: length < 11.0 && width < 4.5

If the length and the width of the object are less than 3.35 m (11 feet) and 1.37 m (4.5 feet), respectively, then it is classified into the MOTORCYCLE type; otherwise it is classified into the other types. Our system can classify this type of vehicle with high accuracy.
Another rule is

2: max_y < 3.5 || R1 < 1.75 && length < 21.5 || max_y < 4 &&
   R1 < 2.5 || like_pickup = 0 && length < 21

Some passenger cars are very small, but their size is larger than that of motorcycles. It is not unusual for a passenger car that the average height of its bumper and part of its hood is less than 0.533 m (1.75 feet), i.e., R1 < 1.75 feet. These constraints are shown in Fig. 10. The system classifies a vehicle as a passenger car if this rule is satisfied.
Fig. 10. Rule 2.

Another rule is

3: max_y < 4.5 && R1 < 2 || like_pickup = 0 && length < 21.0

Fig. 11. Rule 3.

All vehicles satisfying this rule (Fig. 11) are also small, but are larger than those satisfying Rules 1 and 2. No vehicles other than pickups and passenger cars have such dimensional characteristics. It is to be noted that, for some passenger cars and some pickups, the average height of their bumpers and parts of their hoods is less than 0.61 m (2.0 feet), i.e., R1 < 2.0 feet; thus all vehicles satisfying this rule are either a PICKUP or a PASSENGER CAR. These two classes are then classified further by the following Rule 3A:

3A: R8 > 4 || R4 = R5 = R6 = R7 || (R1 > R6 && R1 > R7) || R4 + R5 + R6 + R7 > 18
If any one of the following constraint conditions is satisfied, the system classifies an object into PICKUP; otherwise the object is classified as a PASSENGER CAR:
1. The average height of section 8 of the vehicle is greater than 1.22 m (4 feet). Section 8 is the rear of a vehicle. For a passenger car, the average height of this section varies from 0.61 m (2 feet) to 1.07 m (3.5 feet), but is not higher than 1.22 m (4 feet). Thus, for these two classes, if R8 > 4, it is a PICKUP.
2. The average heights of sections 4, 5, 6 and 7 of the vehicle are equal. For a passenger car, its rear half includes both half of its passenger compartment and the trunk, so it is not flat. Thus, for these two classes, if R4 = R5 = R6 = R7, it must be a PICKUP.
3. The average height of section 1 of the vehicle is greater than the average height of sections 6 or 7 of the vehicle. For a passenger car, the front end is not higher than the rear end. For a pickup, its bed is lower than its front part (Fig. 12).
4. The sum of the average heights of sections 4, 5, 6 and 7 of the vehicle is greater than 5.49 m (18 feet). This condition allows for the fact that a pickup can be loaded (Fig. 13).

Fig. 12. Pickup.

Fig. 13. A pickup with a load.
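For illustration, Rules 1, 2, 3 and 3A can be combined into a small rule-based classifier in Python. This is only a sketch of the subset of rules quoted above (the full system has further rules for the remaining classes in Table 1), and the grouping of the && and || terms in Rules 2 and 3 follows ordinary operator precedence, which is our reading of the published rules.

def classify(length, width, max_y, R, like_pickup):
    """Apply Rules 1, 2, 3 and 3A from Section 3.4.7. length, width, max_y and
    the section averages R[1..8] are in feet; R is a 1-indexed list."""
    # Rule 1: very small object -> motorcycle.
    if length < 11.0 and width < 4.5:
        return "MOTORCYCLE"
    # Rule 2: small, low object -> passenger car.
    if (max_y < 3.5 or R[1] < 1.75 and length < 21.5 or
            max_y < 4 and R[1] < 2.5 or like_pickup == 0 and length < 21):
        return "PASSENGER CAR"
    # Rule 3: slightly larger, still low -> passenger car or pickup.
    if max_y < 4.5 and R[1] < 2 or like_pickup == 0 and length < 21.0:
        # Rule 3A separates the two classes.
        if (R[8] > 4 or
                R[4] == R[5] == R[6] == R[7] or
                (R[1] > R[6] and R[1] > R[7]) or
                R[4] + R[5] + R[6] + R[7] > 18):
            return "PICKUP"
        return "PASSENGER CAR"
    return "OTHER"   # the remaining classes are handled by further rules (not shown)

# Example: a low 16-ft vehicle with a tall rear section (R8 > 4 ft).
R = [0.0, 1.9, 2.8, 3.0, 3.1, 3.0, 2.9, 2.8, 4.3]
print(classify(length=16.0, width=6.2, max_y=4.2, R=R, like_pickup=1))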
3.4.8. Classification results
We ran several tests with the rule-based classifier. One test involved the data set given in Table 1. Since we are using a rule-based classifier, there was no training stage where parameters were automatically set from training data. The data were chosen from a variety of vehicle types, weather conditions, and lighting conditions. The percentage of correct classification for passenger cars was 98%, for pickups 89%, for vans 92%, for sport utility vehicles 76%, for tractors with trailers 87%, and for mini-vans 78%. The rest of the classes had small populations, with correct classifications ranging from 33% to 100% (Table 1). The overall correct classification rate was 92%. We ran another test with 2,000 vehicles. The classification accuracy obtained was again 92%.

4. Conclusions
This research has demonstrated that a system for real-time classification of vehicles in traffic situations can be implemented with a laser range imaging system. In this study, the laser imaging system was mounted over a traffic lane. We did not conduct any studies involving different road types and the required imaging geometry. The number of lanes the sensor could cover depends upon the resolution of the sensor and the mounting geometry. The resolution used in this study of 30 points per lane was satisfactory. It would be useful to conduct further studies emphasizing vehicle classification like those conducted for vehicle counting (FHWA, 1997). These studies might have larger databases and involve field tests of prototype units. This would allow one to determine the influence of environmental conditions on classification accuracy and to determine an appropriate imaging geometry for different road configurations.

A traffic monitoring system of this type offers potential for traffic controllers where the traffic control reflects the types of vehicles in the traffic pattern. It also has potential for improving highway safety, because highway safety is related to the types of vehicles and their observance of speed limits and traffic rules. One could proceed with studies examining the relationship between vehicle types (trucks, automobiles) and highway safety. This work might also provide a method for unique vehicle identification, which would be an alternative to methods that read vehicle license plates. This would aid in improved traffic management.

Acknowledgements
This research was supported by the Louisiana Transportation Research Center. Schwartz
Electro-Optics provided a database used in the research.

References
AASHTO, 1990. A policy on geometric design of highways and streets. American Association of State Highway and Transportation Officials (AASHTO), Washington, DC.
Bahler, S.J., Minge, E.D., Kranig, J.M., 1998. Field test of non-intrusive traffic detection technologies. In: Proceedings of the 77th Annual Meeting, Transportation Research Board, Washington, DC.
Bullimore, D.E.D., Hutchinson, P.M., 1996. Life without loops. Traffic Technol. Int. 96, 106–112.
Carlson, B., 1997. Clearing the congestion: vision makes traffic control intelligent. Advanced Imaging, February 1997, pp. 54–57.
Clippard, D., 1996. Overhead microwave detectors cut out the loop. Traffic Technol. Int. 96, 118–120.
Coifman, B., Beymer, D., McLauchlan, P., Malik, J., 1998. A real-time computer vision system for vehicle tracking and traffic surveillance. Transportation Research Part C: Emerging Technologies 6 (4), 271–278.
Dillenburg, J., Lain, C., Nelson, P.C., Rorem, D., 1995. Design of the ADVANCE traffic information center. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 321–328.
Eccles, W., 1995. Implementation of a mobile data system. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 217–224.
FHWA, 1995. Detection Technologies for IVHS. DTFH61-91-C00076, Hughes Aircraft Report, US FHWA.
FHWA, 1997. Field test of monitoring of urban vehicle operations using non-intrusive technologies. Minnesota Department of Transportation, US FHWA, May 1997.
Inigo, R.M., 1989. Application of machine vision to traffic monitoring and control. IEEE Trans. Veh. Technol., August, pp. 112–122.
James, R.D., Sampan, S., 1995. Vehicle classification using neural networks based upon acoustic signals. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 975–982.
Jansma, G.L., Kaiser, G., 1995. TRAFFICALERT: a commercial application of real-time traffic information. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 335–340.
Jolly, M.-P.D., Lakshmanan, S., Jain, A.K., 1996. Vehicle segmentation and classification using deformable templates. IEEE Trans. Pattern Anal. Mach. Intelligence 18 (3), 293–308.
Jones-Bey, H., 1997. Digital data acquisition bolsters laser-based traffic enforcement. Laser Focus World, November 1997, pp. 161–163.
Juba, M., 1996. Succeeding with video detection. Traffic Technol. Int., Oct/Nov 1996, pp. 33–36.
Juba, M., 1997. Sometimes color isn't the answer: selecting video cameras for traffic applications. Traffic Technol. Int. 1997, pp. 142–146.
Kehtarnavaz, N., Huang, C., Urbanik, T., 1995. Video image sensing for a smart controller at diamond interchanges. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, Washington, DC.
Kranig, J., 1998. Evaluation of non-intrusive detection technologies for traffic detection and data collection. In: Proceedings of NATMEC 98, Charlotte, North Carolina, 11–15 May 1998.
Kunigahalli, R., 1995. A multidimensional geometric approach to detection of freeway congestion. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 597–604.
Lovell, B.C., Bradley, A.P., 1996. The multiscale classifier. IEEE Trans. Pattern Anal. Mach. Intelligence 18 (2), 124–137.
Manor, D., 1996. Multiple zone radar detection by RTMS. Traffic Technol. Int. 96, 126–131.
Michalopoulos, P.G., 1991. Vehicle detection through video image processing: the Autoscope system. IEEE Trans. Veh. Technol. 40 (1), 21–29.
Myers, T., 1996. Laser sensors for traffic monitoring and control. Traffic Technol. Int. 96, 132–138.
Nojima, A., Iwata, Y., Hirano, K., 1995. An in-vehicle information system using simple deformed map displays and two-way infra-red beacons. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 201–206.
Peng, S., Harlow, C.A., 1996. Automatic vehicle classification using range imagery. In: Proceedings of the Twenty-Eighth Southeastern Symposium on System Theory, 31 March–2 April 1996, pp. 327–331.
Ritchie, S.G., Abdulhai, B., Parkany, A.E., Sheu, J.-B., Cheu, R.L., Khan, S.I., 1995. A comprehensive system for incident detection on freeways and arterials. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 617–622.
Rouphail, N.M., Dutt, N., 1995. Estimating travel-time distributions for signalized links: model development and potential ITS applications. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 623–632.
Roussel, J.-C., Petrucci, J., Lion, D., Jaouen, R., 1996. BEATRICS radar system for automatic incident detection. Traffic Technol. Int. 96, 121–125.
Tanaka, R., Maeoka, A., Taga, H., Seki, F., Sugimoto, A.M., 1995. Dispersion of traffic flow using travel-time information on routes connecting two cities. In: Proceedings of the 1995 Annual Meeting of ITS AMERICA, pp. 413–418.
Trucco, E., Fisher, R.B., 1995. Experiments in curvature-based segmentation of range data. IEEE Trans. Pattern Anal. Mach. Intelligence 17 (2), 177–182.
Verly, J.G., Delanoy, R.L., 1993. Adaptive mathematical morphology for range imagery. IEEE Trans. Image Process. 2 (2), 272–275.
Washburn, S.S., Nihan, N.L., 1999. Estimating link travel time with the Mobilizer video image tracking system. J. Transp. Eng. 125 (1), 15–20.
