
ARTICLE IN PRESS

Robotics and Computer-Integrated Manufacturing 26 (2010) 403–413

Contents lists available at ScienceDirect

Robotics and Computer-Integrated Manufacturing


journal homepage: www.elsevier.com/locate/rcim

Review

Advances in 3D data acquisition and processing for industrial applications


Z.M. Bi a,*, Lihui Wang b

a Department of Engineering, Indiana University Purdue University Fort Wayne, 2101 E. Coliseum Blvd., Fort Wayne, IN 46805, USA
b Centre for Intelligent Automation, University of Skövde, PO Box 408, 541 28 Skövde, Sweden

article info

abstract

Article history:
Received 15 March 2009
Accepted 25 March 2010

A critical task of vision-based manufacturing applications is to generate a virtual representation of a physical object from a dataset of point clouds. Its success relies on reliable algorithms and tools. Many effective technologies have been developed to solve the various problems involved in data acquisition and processing, and some articles are available that evaluate and review these technologies and their underlying methodologies. However, for most practitioners who lack a strong background in mathematics and computer science, the theoretical fundamentals of these methodologies are hard to understand. In this paper, we survey and evaluate recent advances in data acquisition and processing, and provide an overview from a manufacturing perspective. Some potential manufacturing applications are introduced, the technical gaps between practical requirements and existing technologies are discussed, and research opportunities are identified.
© 2010 Elsevier Ltd. All rights reserved.

Keywords:
Vision-based system
Data acquisition
Data processing
3D images
Point clouds
Surface reconstruction

Contents

1. Introduction . . . 404
2. Development of data acquisition and processing systems . . . 404
   2.1. Hardware systems . . . 404
   2.2. Software systems for data processing . . . 404
3. Overview of hardware systems . . . 405
   3.1. Classifications . . . 405
        3.1.1. Passive systems . . . 405
        3.1.2. Active systems . . . 405
   3.2. Control of data acquisition systems . . . 406
   3.3. Available systems . . . 406
4. Overview of software tools for data processing . . . 408
   4.1. Data filtering . . . 408
   4.2. Data registration and integration . . . 408
   4.3. Feature detection . . . 409
   4.4. 3D reconstruction . . . 409
   4.5. Surface simplification . . . 409
   4.6. Segmentation . . . 409
   4.7. Other relevant work . . . 410
   4.8. Available software tools . . . 410
5. Applications . . . 411
   5.1. Manufacturing applications . . . 411
   5.2. Technical gaps . . . 411
6. Summary and research trends . . . 412
   6.1. Hardware systems . . . 412
   6.2. Software systems . . . 412
   6.3. Applications . . . 412
References . . . 412

* Corresponding author. Tel.: +1 260 481 5711; fax: +1 260 481 6281.
E-mail addresses: biz@ipfw.edu (Z.M. Bi), lihui.wang@his.se (L. Wang).

0736-5845/$ - see front matter © 2010 Elsevier Ltd. All rights reserved.
doi:10.1016/j.rcim.2010.03.003


1. Introduction
The current manufacturing environment is highly competitive and uncertain. A manufacturing system has to be flexible or reconfigurable so that its structure and components can adapt to a dynamic environment. The critical task in implementing a flexible/reconfigurable manufacturing system is to identify or define the changes and uncertainties that occur during production processes. When changes and uncertainties affect the geometric shape of a part, a vision-based system is usually required to capture its surfaces or features. The process of transforming the acquired data into a virtual representation is complex, and current technologies still face challenges in applying 3D vision in a manufacturing environment.
In this paper, available technologies are examined from the viewpoint of a manufacturing engineer, and their limitations in actual manufacturing applications are discussed. The rest of this paper is organized as follows. After the critical issues involved in data acquisition and processing are explored in Section 2, the different data acquisition hardware systems and their working principles are reviewed in Section 3. In Section 4, software tools for data acquisition and processing are reviewed and classified; their advantages and disadvantages, as well as their application scopes, are investigated and compared. Section 5 is concerned with current applications in a dynamic manufacturing environment, and the technical gaps between industrial needs and existing technologies are discussed. Finally, Section 6 provides a summary together with future research trends for making data acquisition and processing systems more applicable to real manufacturing environments.

2. Development of data acquisition and processing systems


A data acquisition and processing system includes both hardware and software. The hardware acquires point clouds or volumetric data by using established mechanisms or phenomena to interact with the surface or volume of an object of interest. The software processes the raw point clouds or volumetric data and transforms them into a virtual representation of the object, such as surfaces and features. In a manufacturing environment, these virtual representations can be easily understood and used by practitioners.
2.1. Hardware systems
Over the last decade, the extensive use of digital technology in both design and production has yielded enormous cost and quality benefits. However, the ability to leverage digital technology for inspection and quality control without incurring expensive time penalties has not kept pace with the ability to produce complex, close-tolerance parts. A digital inspection technique must embody a number of simultaneous characteristics: high speed, high accuracy, large measurement volume, the ability to measure features on both simple and complex parts, the ability to measure parts with surface finishes ranging from freshly machined to dark paint, operation in ambient factory lighting conditions, real-time data analysis, and operational simplicity [1].
Many types of vision systems are available today. However, the design of a vision system needs to take into consideration many factors, such as accuracy, speed, working volume, reliability, and cost [1]. These factors often conflict and need to be carefully balanced for a particular application. Moreover, most hardware systems have environmental operating constraints such as lighting, temperature, and humidity. The performance of a vision system also depends on part attributes such as color and reflectivity, and can change significantly from one part to another. No industrial vision system is capable of handling all tasks in every application domain; only after the requirements of a particular application are specified can the appropriate decisions for the design and development of such a system be taken. The first problem to solve in automating a machine vision system is to acquire the data and transform them into measurements or features extracted from images [2].

2.2. Software systems for data processing


When raw data are acquired by the hardware of a vision system, they have to be processed and transformed into information that is useful for an application. For example, the sequential surface treatment of an automated system requires a part surface model; the task of defect detection needs to identify the features related to a defect; and automated alignment requires a part model that can be compared with a CAD model, so that the part can be located and fixed at an optimized position. The interpretation of raw data into the required computer model is a complicated process [3], and it involves the following typical issues.
Data filtering. Raw data include noise, distorted data, and invalid data caused by the hardware system and/or the environment. Raw data need to be filtered to remove unwanted and noisy data [2].
Data registration and integration. A vision device such as a laser scanner can only capture the surface facing the scanner in its field of view. Therefore, multiple views are needed to acquire data over the entire surface, and the data from different views have to be integrated. Registration determines the transformation between the data from two different views so that they can be integrated in the same coordinate system; integration is the process of creating a single surface representation from the sample points of two or more range images [4].
Surface reconstruction. Using the raw point clouds or volumetric data acquired from an unknown surface, an approximation of the surface can be constructed [5]. The reconstructed surface can be compared with CAD models or used for surface-based automated programming.
Data simplification and smoothing. A compact approximation of a shape can reduce memory requirements and accelerate data processing. It can also accelerate computations involving shape information, such as finite element analysis, collision detection, visibility testing, shape recognition, and display. Simplification makes storage, transmission, computation, and display more efficient [6].
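One common simplification strategy is to keep a single representative point per spatial cell. The sketch below is illustrative only (a voxel-grid approach, not a method taken from the surveyed tools), assuming points are given as (x, y, z) tuples:

```python
from collections import defaultdict

def voxel_downsample(points, cell):
    """Replace all points falling in the same cubic cell of side `cell`
    with their centroid, thinning dense regions while keeping coverage."""
    bins = defaultdict(list)
    for x, y, z in points:
        bins[(int(x // cell), int(y // cell), int(z // cell))].append((x, y, z))
    # one centroid per occupied cell
    return [tuple(sum(c) / len(pts) for c in zip(*pts)) for pts in bins.values()]
```

For example, two points inside the same unit cell collapse to their centroid, while a distant point survives unchanged.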
Data segmentation. Segmentation refers to the process of extracting selected regions of interest from the rest of the data using automated or manual techniques. Data filtering and segmentation are two aspects of the same problem: a good filtering process should distinguish between a set of significant regions and the borders between them, and such a filtering process implicitly assumes that the segmentation is known [7].
Feature detection. Defects or basic elements on a surface can be treated as features; examples include size, position, and contour measurement via edge detection and linking, as well as texture measurements on regions [2]. Feature detection is used to identify defects with certain features or to validate whether the acquired data fit a specific feature.
Data comparison. A reference model is usually available for data comparison. Data comparison calculates the deviations or differences between the physical model and the reference model. It can be applied to inspection, surface control, or CAD model comparison. For example, (i) in feature detection, point clouds can be used to measure geometric elements such as planes, cylinders, circles, spheres, and boundaries; and (ii) in monitoring and control, as-designed and as-built models are compared so that the deviation (average error), tolerance, and distribution can be evaluated.
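As a concrete illustration of the as-designed vs. as-built comparison, the minimal sketch below evaluates measured points against a reference plane (a deliberately simple stand-in for the full CAD surfaces that commercial tools compare against; the function name and parameters are illustrative):

```python
import math

def plane_deviations(points, plane_point, plane_normal):
    """Signed distance of each measured point from a reference plane,
    plus the average error -- the simplest as-built vs. as-designed check."""
    norm = math.sqrt(sum(n * n for n in plane_normal))
    devs = [
        sum((p[i] - plane_point[i]) * plane_normal[i] for i in range(3)) / norm
        for p in points
    ]
    avg_error = sum(abs(d) for d in devs) / len(devs)
    return devs, avg_error
```

For points measured 0.02 above and 0.02 below the plane z = 0, the signed deviations are +0.02 and -0.02, and the average error is 0.02.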

3. Overview of hardware systems


A 3D data-acquisition device is an instrument that collects the 3D coordinates of a given region of a surface [8]. Different technical principles can be applied to measure the elements needed to compute a 3D geometry; each method uses some mechanism or phenomenon to interact with the surface or volume of an object of interest.
3.1. Classifications

Researchers have classified data-acquisition systems from different viewpoints. For example, Varady et al. [9] classified data acquisition devices in terms of the technical principle a device applies. As shown in Fig. 1, devices are classified into non-contact and contact. A non-contact device uses a medium such as light, sound, or magnetic fields, while a contact device touches the surface of an object with a mechanical probe at the end of an arm (tactile methods). In either case, an appropriate analysis must be performed to determine the positions of points on the object's surface from the physical readings obtained. For efficient and practical applications in manufacturing, only non-contact devices are reviewed in this paper.
Isgro et al. [10] classified non-contact devices into passive and active systems. Passive systems work only on naturally occurring images produced by light reflected from a natural or man-made source; they do not project any kind of energy to assist the sensors. Active systems project energy (e.g. a pattern of light or sonar pulses) onto the scene and detect its position to perform the measurement, or exploit the effect of controlled changes of some sensor parameter (e.g. focus).

Fig. 1. Classification of data acquisition devices [9].

3.1.1. Passive systems

A passive system uses shape-from-shading, shape-from-motion, or passive stereo vision to acquire 3D data. Shape-from-shading uses a single camera to image the object under different conditions; by studying the changes in brightness over a surface and employing constraints on the orientations of surfaces, certain depth information is calculated. Shape-from-motion uses motion sequences of the object, obtained by moving either the object or the camera. Passive stereo vision adopts two or more cameras to view the same object and acquires 3D data by triangulation.
The advantages of a passive system are: (i) it is less sensitive to the environment; and (ii) it is suitable for a mobile vision platform and requires no external energy source. Shape-from-shading and shape-from-motion methods are not well suited for general 3D data acquisition because of: (i) sensitivity to the illumination and surface reflectance properties of an object; (ii) limited ability to cope with non-uniform surface textures; and (iii) the difficulty of inferring absolute depth. Passive stereo vision has the critical issue of finding the pixel correspondences between two images. To solve this problem, features (such as edges and points) have to be extracted and matched correctly. Both feature extraction and matching are complex and computationally intensive; therefore, a depth map may not be generated in a reasonable time.

3.1.2. Active systems

Active systems can be further classified in terms of their underlying physical principles: triangulation, time-of-flight or laser pulse, and interferometry.
Triangulation is based on the principle of triangulating a measurement spot on the object from a physically separated optical source and detector; by simple geometry, the (x, y, z) coordinates of the illuminated spot on the object are calculated [11]. As shown in Fig. 2, triangulation-based sensors can be single-point or slit. A single-point sensor acquires range information point by point, whereas a slit scanner projects a laser line and simultaneously detects a complete profile of points. A slit scanner must compromise between the field of view and depth resolution, and it has relatively poor immunity to ambient light.

Fig. 2. Examples of sensor systems, adapted from [12]: (a) single-point sensor and (b) slit sensor.
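Both passive stereo and active triangulation recover depth from the same planar geometry. A minimal sketch, assuming a rectified stereo pair and a single-point laser setup in a plane (the function names and parameter values are illustrative, not from the cited systems):

```python
import math

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Passive stereo: depth of a matched pixel pair by similar triangles,
    Z = f * B / d (focal length f in pixels, baseline B in metres,
    disparity d in pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def laser_triangulation_range(baseline_m: float, alpha_rad: float, beta_rad: float) -> float:
    """Active single-point triangulation: the laser leaves the source at angle
    alpha to the baseline and the camera sees the spot at angle beta; the
    perpendicular range to the spot follows from the triangle they form."""
    ta, tb = math.tan(alpha_rad), math.tan(beta_rad)
    return baseline_m * ta * tb / (ta + tb)
```

For example, with f = 800 px, B = 0.12 m and d = 16 px the stereo depth is 6 m, and with equal 45 degree angles the triangulated range is half the baseline.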
Time-of-flight or laser pulse scanners work on the principle that the surface of an object reflects light back towards a receiver, which then measures the time (or phase difference) between transmission and reception; the range is measured as a direct consequence of the propagation delay of the electromagnetic wave. Time-of-flight systems require very accurate timing sources and typically provide modest resolution (generally centimeter, occasionally millimeter, accuracy) for longer-range applications. They are best suited to distance measurement and environment modeling at medium to long ranges. Fig. 3 shows an example system with an acquisition speed of 13.8 kHz and a precision of 1 cm.
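The range computation itself is a one-liner; the sketch below also translates the 1 cm precision figure above into the timing resolution it implies, which is why accurate timing sources dominate the design of these scanners:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_range(round_trip_s: float) -> float:
    """Range from the round-trip propagation delay of a light pulse."""
    return C * round_trip_s / 2.0

# A 1 cm range precision requires resolving round-trip delays of about
# 2 * 0.01 / C, i.e. roughly 67 picoseconds.
```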
Interferometry scanners project multiple stripes or patterns simultaneously onto an object. Using two precisely matched pairs of gratings, the projected light is spatially amplitude-modulated by one grating, and the camera grating demodulates the viewed pattern, creating interference fringes whose phases are proportional to the range. Interferometry methods are most useful with objects having relatively large flat surfaces and small depth variations. Although the accuracy for an equivalent depth of view is lower than that of a slit or laser-point scanner, the use of incoherent light removes the speckle noise associated with lasers, resulting in smooth data and the possibility of acquiring color texture. Fig. 4 shows an example image acquired by an interferometry scanner.
In choosing a vision system for a particular application, many factors have to be taken into consideration, including accuracy, speed, resolution and spot size, range limits and the influence of interfering light, field of view, registration devices, and imaging cameras.

3.2. Control of data acquisition systems


No control is needed for a fixed scanner with a pre-defined window and resolution. However, a more sophisticated sensor provides a range of possible resolutions and the flexibility to select one or more individual scan windows, and it may be mounted on actuators whose motions can be manipulated. The scanning process should be controlled to achieve high precision and to acquire a complete image efficiently from a complex object. The survey by Scott et al. [11] concluded that view planning for high-quality object reconstruction/inspection has no general-purpose solution. The remaining open problems are simultaneously both intriguingly simple and exceptionally complex, needing to consider efficiency, accuracy, and robustness. The efficiency issue relates to the computational complexity of the view planning algorithm in terms of both time and memory, although timeliness is probably the more important factor. No method proposed to date is able to provide both performance and efficiency at the same time. View planning algorithms and imaging environments restricted to a one- or two-dimensional viewpoint space can at best provide limited-quality, limited-coverage models for a restricted class of objects. There is a need to measure and model object reflectance in order to handle shiny surfaces and compensate for limited camera dynamic range. View planning techniques are also required to avoid or compensate for the effects of geometric step edges, reflectance step edges, and multiple reflections.
In manufacturing processes, actual parts often differ from their CAD models when manual operations or adjustments are needed. For example, a chassis can include some missing or additional parts, and the attached brackets or suspended parts can be in arbitrary orientations. These deviations and uncertainties have a great impact on the trajectory planning of robotic systems, which must be collision-free. To automatically identify the deviations and uncertainties, Larsson and Kjellander [14] used a standard industrial robot carrying a laser profile scanner, which was proposed for use in a future automation system for the reverse engineering of unknown objects.
3.3. Available systems

Fig. 3. Time-of-flight based system shown in [13].

Although many techniques can be applied to acquire images of a 3D object, the automation of data acquisition depends on the requirements on time, accuracy, and throughput. Well-known technologies such as coordinate measuring machines, light curtains, shape from shading, and computed tomography are insufficient in meeting these requirements. These techniques are therefore beyond the scope of our discussion.
Fig. 4. An image example acquired by an interferometry sensor (http://www.capture3d.com).


Table 1
Some manufacturers and data acquisition devices.

Manufacturer | Type | Range (m) | Software | Web site
Optech | ILRIS-3D | 0.8–1.5, 3–1500 | Polyworks, ILRIS-3D software | http://www.optech.ca/, http://www.geo-konzept.de
ShapeGrabber | Scan heads, portable classic, automated system; 3D laser profiling sensor | 0.33–1.75 | ShapeGrabber SGCentral | http://www.shapegrabber.com/
INO | ModelMaker D50, 100, 200 | 0.3 | INO software | http://www.ino.ca/
3D Scanners | ModelMaker Z35, 70, 140 (handheld scanner) | 0.05–0.2 | KUBE CAS, Corrosion Analysis Software | http://www.3dscanners.com/
Tyzx | DeepSea camera 3 cm, 6 cm, 22 cm; DeepSea G2 | 0.2–2.7 | SEER Software | http://www.tyzx.com/
Point Grey Research | Bumblebee 2, Bumblebee XB2 | n/a | FlyCapture SDK, Triclops SDK, Censys3D SDK, Multiclops | http://www.ptgrey.com/
Konica Minolta | VIVID 910, VIVID 9i | 0.5–2.5 | Polygon Editing Tool Software | http://se.konicaminolta.us/
Genex Technologies | Rainbow 3D Camera | 0.515 | Photogrammetry PSC-1, 3D Mosaic, 3D Surgeon | http://www.genextech.com/pages/601/Rainbow_3D_Camera.htm
3rdTech | DeltaSphere-3000 3D Digitizer | 0.18 | SceneVision-3D | http://www.3rdtech.com
CALLIDUS Precision Systems | CALLIDUS CT 180, 900, 3200 | n/a | 3D-Extractor | http://www.callidus.de
Leica Geosystems | ScanStation | 0.3 | Cyclone, CloudWorx | http://www.leica-geosystems.com
FARO | LS 420, LS 840, LS 880 | 20, 40, 80 | FARO Scene | http://www.faro.com
I-SiTE Pty Ltd | I-SiTE 4400 LR | 0.15–0.7 | I-SiTE Studio | http://www.isite3d.com
MetricVision | MV224, MV260 | 0.07–0.195 | CMM Software | http://www.metris.com
Metris | XC50 Cross Scanner, XC50-LS Cross Scanner, LC50, LC15 | n/a | n/a | http://www.metris.com
Riegl Laser Measurement Systems | LMS-Zxxx series | 0.35–1.0 | RiSCAN PRO | http://www.riegl.com
Zoller + Froehlich GmbH | IMAGER 5006, IMAGER 5003 | 53.5, 0.2–0.37 | Z+F laser control | http://www.zf-laser.com
RSI GmbH | DigiScan2000 | 0.4 | Light Form Modeller (LFM), Visual Sensor Fusion (VSF), A+F ProjectView, 3D Reconstructor | http://www.rsi.gmbh.de/maine.htm
Roland Corp | LPX-60/600 3D | 0.3–0.4 | n/a | http://www.rolanddga.com/
Inus Technology Inc. | n/a | n/a | RapidForm2000, PhotoModeler Pro 5, Rapid 2007 Webinar | http://www.rapidform.com
Bytewise Measurement Systems | Profile360 Profile Measurement System | 0.032 | CTWIST | http://www.bytewise.com/
Micro-Epsilon | LLT2800-25, 100 | 0.025–0.1 | scanControl 2800 | http://www.me-us.com/
NextEngine, Inc. | Desktop 3D Scanner | n/a | ScanStudio CORE | https://www.nextengine.com/
Steinbichler | COMET 5 | 0.42–1.7 | T-Scan Software system | http://www.steinbichler.de
ARIUS3D | ARIUS3D | 6 | Pointstream 3D | http://www.arius3d.com/
Breuckmann | OptoScan, Smartscan | 0.36, 0.72 | OPTOCAT for Windows, 3D-Alignment | http://www.breuckmann.com/
MicroScribe | MicroScribe MX, MicroScribe MLX | 0.63, 0.84 | MicroScribe Utility Software | http://www.emicroscribe.com/
GOM mbH | ATOS 3D Digitizer | 1.6 | ATOS software | http://www.gom.com/EN/
MEL Mikroelektronik GmbH | M2D, M2DW, M20D-XF | 1.2 | MEL Software | http://www.melsensor.de/
Cyberware | Model Shop, Whole Body 4 and X | n/a | Cyberware software | http://www.cyberware.com/
Laser Design | DS, RE, PS | n/a | Surveyor Scan Control, RealScan | http://www.laserdesign.com/
Vitronic | Vitus | 1 | Vitus | http://www.vitronic.de/
Polhemus | FastScan | 0.8 | FastScan software | http://www.fastscan3d.com/
TC2 | Body scanner | 0.8 | 3D Body Measurement software | http://www.tc2.com/
Nextec | Hawk | 0.3 | Hawk software | http://www.nextec-wiz.com/
Kreon | ZEPHYR KZ 25, 50, 100 | 0.2 | Polygonia | http://www.kreon3d.com/
Perceptron | Contour Probe Sensor | 0.1 | ScanWorks | http://www.perceptron.com/

In order to meet these requirements, the first problem to solve in automating a machine vision task is to understand what kind of information

that the vision system is to retrieve and how this is transformed into measurements or features extracted from the obtained images [2].


Several vertically integrated industrial 3D solutions are reported in the literature. Biegebauer et al. [15] used laser triangulation sensors from IVP (Sweden) to obtain a range image of parts moving on a conveyor belt; the sample parts included a gearbox with motor, a compressor tank, the steering column of a truck, and a rear-view mirror. Up to 600 scans per second can be taken, and the resolution for a scan line width of 2 m is 1.2 mm. Teutsch et al. [16] developed a robust optical laser scanner for the digitalization of arbitrary objects, primarily industrial workpieces; as the measuring principle, they use triangulation with structured lighting and a multi-axis locomotor system. Measurements on the generated data may lead to incorrect results if the contained error is too high. Many existing methods for polygonal mesh optimization produce aesthetically pleasing 3D models, but they often require user interaction and are limited in processing speed and/or accuracy. Furthermore, operations on optimized meshes consider the entire model and pay little attention to individual measurements. Also, most algorithms consider unsorted point clouds, although the scanned data are structured through device properties and measuring principles. Table 1 summarizes some available 3D scanners and their manufacturers, from [8,17].

4. Overview of software tools for data processing


4.1. Data filtering
The acquired data must be filtered to eliminate invalid data. Point data can be invalid for many reasons, for example: (i) reflections from an object in the background; (ii) reflections originating in the space between the scanner and the object (trees and other objects in the foreground, moving persons or traffic, atmospheric effects such as dust or rain); (iii) partial reflections of the laser spot at edges; (iv) multiple reflections of the laser beam; (v) range differences originating from systematic range errors caused by the different reflectivity of surface elements; and (vi) erroneous points caused by bright objects. The elimination process, however, has to be done interactively, since no automatic method can foresee all possible constellations [3].
The acquired data may be filtered to reduce the level of noise caused by the precision of the data acquisition system. If the object is known to be smooth, the application of a low-pass or median filter can improve the data considerably. It should be noted, however, that filtering will influence the entire object in the same way; if the object consists of smooth sections and edges, filtering may not be advisable at the early stages of processing. The data may also be filtered to reduce the number of points in a dense area. The surface is often scanned on a grid denser than required, which makes manipulating the scanned image slow and difficult. Point thinning can have an effect similar to filtering if the points with large deviations from an intermediate surface are preferentially deleted. If several scans were taken from different observation stations, it may be advisable to delay point thinning until all measurements are combined in the registration process [3].
Most of the available software tools have basic filtering functions. For example, Matlab toolboxes include filters to reduce the number of points, remove isolated points, and reduce blurring. The filters that remove isolated points are based on various criteria; for example, given a neighborhood, the software can determine the number of neighboring points for each point of the scanned image, and if this number is under a specified threshold, the point is considered isolated and removed from the contour matrix. The group of filters that reduce blurring consists of assorted functions for smoothing data: for each point, a neighboring block is determined and a new value is generated based on the chosen criteria.
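The neighbor-count criterion just described can be sketched in a few lines of Python (a brute-force stand-in for the Matlab filters mentioned above; a production tool would use a k-d tree for the neighbor query):

```python
import math

def remove_isolated(points, radius, min_neighbors):
    """Keep only points that have at least `min_neighbors` other points
    within `radius`; everything else is treated as an isolated point."""
    kept = []
    for i, p in enumerate(points):
        count = sum(
            1 for j, q in enumerate(points)
            if i != j and math.dist(p, q) <= radius
        )
        if count >= min_neighbors:
            kept.append(p)
    return kept
```

Applied to a cloud containing one far-away outlier, the outlier is dropped while the clustered points survive.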

4.2. Data registration and integration


Registration is needed for two different purposes: (i) combination of several point clouds taken from different observation
points; and (ii) referencing the object in a global coordinate
system [3].
The key step in registration is to define a distance between two images; a distance of zero corresponds to a perfect overlap of the two images. The most successful methodology for data registration and integration is the iterative closest point (ICP) algorithm, in which the distance between two objects is defined as the closest Cartesian distance between them. Since the introduction of ICP by Chen and Medioni [18] and Besl and McKay [19], many variants have been introduced on the basic ICP concept. Rusinkiewicz and Levoy [20] have classified and compared several ICP variants based on the effect that each has on convergence speed.
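A minimal point-to-point ICP iteration can be sketched as follows. It assumes the two clouds are roughly pre-aligned and uses the standard SVD-based closed-form solution for the rigid motion at each step; the iteration limit and tolerance are illustrative choices:

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch/SVD)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def icp(src, dst, iterations=50, tol=1e-8):
    """Iteratively match each source point to its closest destination point."""
    tree = cKDTree(dst)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(iterations):
        dist, idx = tree.query(cur)           # closest-point correspondences
        R, t = best_rigid_transform(cur, dst[idx])
        cur = cur @ R.T + t
        err = dist.mean()
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return cur

# Recover a small known rotation and translation.
rng = np.random.default_rng(0)
dst = rng.random((200, 3))
angle = 0.05
R = np.array([[np.cos(angle), -np.sin(angle), 0],
              [np.sin(angle),  np.cos(angle), 0],
              [0, 0, 1]])
src = dst @ R.T + 0.02
aligned = icp(src, dst)
print(np.abs(aligned - dst).max())            # small residual after alignment
```

The sketch illustrates why ICP needs a close initial transformation: the nearest-neighbor correspondences are only reliable when the two clouds already overlap well.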
Registration approaches can be pair-wise or global [21]. Pair-wise registration is simple since it works with only two views at a time; the result is then registered with another view until all views are registered. However, pair-wise registration can accumulate the errors incurred in minimizing the error for each pair. In addition, the error defined for a pair has no significance for the entire surface. Global registration deals with all related data simultaneously to approximate an unknown surface.
Studies are ongoing to develop a general registration algorithm based on ICP. For example, Liu et al. [22] have proposed to use a combination of surface fitting and registration within the geometric optimization framework of squared distance minimization (SDM). In this way, they obtain a quasi-Newton-like optimization algorithm in which each iteration simultaneously registers the data set to the fitting surface with a rigid motion and adapts the shape of the fitting surface. This algorithm can combine the registration of multiple scans of an object and model fitting into a single optimization process, which is shown to be superior to the traditional procedure of first registering the data and then fitting a model to it. Pottmann et al. [23] have presented a new approach to the geometric alignment of a point cloud to a surface and to related registration problems. They have provided an alternative ICP concept that relies on instantaneous kinematics and on the geometry of the squared distance function of a surface. The proposed algorithm has exhibited faster convergence than ICP.
The greatest difficulty in performing registration is dealing with outliers and local minima while remaining efficient. The commonly used iterative closest point algorithm is efficient but unable to deal with outliers or avoid local minima. Another commonly used optimization algorithm, simulated annealing, is effective at dealing with local minima but is slow. Luck et al. [24] have developed an algorithm that combines the speed of iterative closest point with the robustness of simulated annealing. Additionally, a robust error function is incorporated to deal with outliers. This algorithm is incorporated into a complete modeling system that inputs two sets of range data, registers the sets, and outputs a composite model. ICP is successful when a close initial transformation between the coordinates of the two images is provided. It should be noted that the determination of an initial transformation is not an easy task. In Murino et al. [25], this pre-alignment is based on matching between the branches of 3D skeletons extracted from the two images. They have demonstrated that 3D skeletons can be successfully used to obtain a coarse alignment

sufficient to make ICP converge. Gelfand et al. [26] have developed an algorithm for automated registration without any assumptions about the initial positions. A descriptor is defined and computed for each data point based on local geometry; a few feature points are automatically picked from the data shape according to the uniqueness of the descriptor value at each point; and a branch-and-bound algorithm is used to define the initial correspondence between two views.
Turk and Levoy [4] have presented a method consisting of the following steps: (i) align the meshes with each other using a modified iterated closest point algorithm; (ii) zip together adjacent meshes to form a continuous surface that correctly captures the topology of the object; and (iii) compute local weighted averages of surface positions on all meshes to form a consensus surface geometry.
4.3. Feature detection
Feature detection is used to recover a high-level geometric description from the lower-level geometric representation of a part. Several methods have been proposed for automatically extracting a high-level, feature-based description from lower-level models of part geometry [27-29].
Thompson et al. [30] have used geometric primitives for reverse engineering. The resulting models can be directly imported into feature-based CAD systems without loss of the semantics and topological information inherent in feature-based representations. Schindler and Bauer [31] have applied model-based methods for building reconstruction. The 3D points are segmented into a coarse polyhedral model with a robust regression algorithm, and the geometry of this model is refined with pre-defined shape templates in order to automatically recover a CAD-like model of the building surface.
4.4. 3D reconstruction
Surface reconstruction from point clouds is fundamental in many applications. Some literature reviews on surface reconstruction are available [32-34].
Azernikov et al. [34] classified reconstruction methods into two types: the computational geometry approach and the computer graphics approach.
The first type of approach focuses on piecewise-linear interpolation of unorganized points, and defines the surface as a carefully chosen subset of the Delaunay triangulation in a Cartesian coordinate system. Cazals and Giesen [35] gave a short survey of surface reconstruction methods based on Delaunay triangulation. Amenta et al. [36] used a medial axis transform (MAT) for the approximation of a surface, where an MAT is a representation of an object as an infinite union of balls; the surface approximation is called a power crust. Bernardini et al. [37] developed a Ball-Pivoting Algorithm (BPA) for surface reconstruction from a given point cloud. The principle is simple: three points form a triangle if a ball of a user-specified radius touches them without containing any other point. Therefore, starting with a seed triangle, the ball pivots around an edge until it touches another point, forming another triangle. Abdel-Wahab et al. [38] compared several computational-geometry-based approaches, including the crust, the power crust, the tight cocoon, and the ball-pivoting algorithm, in terms of reconstruction quality, memory usage, and time. They concluded that the crust and power crust algorithms showed a balanced trade-off between execution time and memory usage, while the ball-pivoting algorithm exhibited the minimum execution time and memory usage, followed by the tight cocoon. The experiments showed that applying any of the four algorithms to a non-uniformly distributed cloud may create a poor-quality surface. In general, the computational cost of these algorithms is determined by the generation of the Delaunay triangulation.
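For the special case of a 2.5D height field, the Delaunay-based idea reduces to triangulating the (x, y) projection of the samples, which already produces an interpolating piecewise-linear surface; general unorganized clouds require selecting triangles from the full 3D Delaunay complex, as the crust and ball-pivoting algorithms do. A minimal sketch of the 2.5D case:

```python
import numpy as np
from scipy.spatial import Delaunay

# Scattered samples of a single-valued height field z = f(x, y).
rng = np.random.default_rng(1)
xy = rng.random((100, 2))
z = np.sin(2 * xy[:, 0]) * np.cos(2 * xy[:, 1])
points = np.column_stack([xy, z])

# Triangulating the planar (x, y) projection yields a piecewise-linear
# surface that interpolates every sample point.
tri = Delaunay(xy)
mesh = points[tri.simplices]      # (n_triangles, 3 vertices, 3 coordinates)
print(tri.simplices.shape)
```

The resulting triangle index array is the interpolating mesh; the harder, fully 3D algorithms surveyed above differ mainly in how they prune the Delaunay complex down to the true surface.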
The second type of approach focuses on the visual quality of the resulting model. Approaches of this type do not constrain the surface to interpolate the sampled points. Fundamental work on surface reconstruction was done by Hoppe et al. [39]. Their approach is based on the idea of determining the zero set of an estimated signed distance function. It is capable of automatically inferring the topological type of the surface, including the presence of boundary curves. The method of defining a signed distance function was also adopted by Neugebauer and Klein [40]. Freedman [41] proposed an incremental technique for surface reconstruction; the algorithm does not depend on the surface's embedding space, and the dimension of the embedding space may vary arbitrarily without substantially affecting the complexity of the algorithm. Curless and Levoy [42] proposed a volumetric approach, which exploits the fact that the cloud of points is a collection of laser range images; unfortunately, it is restricted to devices where the projection plane is known. Zhao et al. [43] introduced an algorithm based on variational and partial differential equation (PDE) methods.
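The zero-set idea behind this family of methods can be sketched as follows: tangent planes are estimated from local neighborhoods by principal component analysis, and the signed distance of a query point is its offset along the normal of the nearest tangent plane. Consistent normal orientation, a central difficulty addressed in [39], is sidestepped here by orienting normals outward on a known sphere:

```python
import numpy as np
from scipy.spatial import cKDTree

def estimate_normals(points, k=10):
    """Per-point tangent-plane normal via PCA of the k nearest neighbors."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    normals = np.empty_like(points)
    for i, nb in enumerate(idx):
        nbrs = points[nb] - points[nb].mean(axis=0)
        # The normal is the singular vector of the smallest singular value.
        _, _, Vt = np.linalg.svd(nbrs)
        normals[i] = Vt[-1]
    return normals, tree

def signed_distance(query, points, normals, tree):
    """Offset of query along the normal of its nearest sample's tangent plane."""
    _, i = tree.query(query)
    return np.dot(query - points[i], normals[i])

# Points sampled on the unit sphere; orientation is known for this shape.
rng = np.random.default_rng(2)
pts = rng.normal(size=(500, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
normals, tree = estimate_normals(pts)
normals *= np.sign(np.sum(normals * pts, axis=1))[:, None]   # point outward

d_out = signed_distance(np.array([0.0, 0.0, 1.5]), pts, normals, tree)
d_in = signed_distance(np.array([0.0, 0.0, 0.5]), pts, normals, tree)
print(d_out, d_in)   # positive outside the sphere, negative inside
```

Extracting the zero set of this function on a volumetric grid (e.g., by marching cubes) then yields the approximating surface, which is the step the methods above automate.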
The size of a dataset is a primary concern for reconstruction
and visualization, in particular for web-based applications. Engel
et al. [44] introduced an adaptive and hierarchical concept to
minimize the number of vertices that have to be reconstructed,
transmitted and rendered; the resulting system is able to directly
generate stripped surface representations in a web-based
application.
4.5. Surface simplification
While most surveys ignored the issue of surface simplification, Heckbert and Garland [45] attempted to summarize the work related to this issue, classified in terms of the technical problems solved rather than the targeted applications.
Schroeder et al. [46] introduced an algorithm using local operations on geometry and topology to reduce the number of triangles in a triangle mesh. The implementation is for triangle meshes but is applicable to other types of polygon meshes. Hoppe et al. [47] developed an energy minimization approach, in which the energy function consists of three terms: a distance energy that measures the closeness of fit, a representation energy that penalizes meshes with a large number of vertices, and a regularizing term that conceptually places springs of rest length zero on the edges of the mesh. The minimization algorithm partitions the problem into two nested subproblems: an inner continuous minimization and an outer discrete minimization. The search space consists of all meshes homeomorphic to the starting mesh. Pauly et al. [48] analyzed and quantitatively compared a number of surface simplification methods for point-sampled geometry. Based on the study, they implemented incremental and hierarchical clustering, iterative simplification, and particle simulation algorithms to create approximations of point-based models with lower sampling density. To compare the quality of the simplified surfaces, they also designed a new method for computing numerical and visual error estimates for point-sampled surfaces.
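Clustering-based simplification of point-sampled geometry, one of the method families compared in [48], can be sketched as follows; the grid cell size is an illustrative parameter:

```python
import numpy as np

def simplify_by_clustering(points, cell_size=0.25):
    """Replace all points falling in one uniform grid cell by their centroid."""
    keys = np.floor(points / cell_size).astype(np.int64)
    # Group points by cell via a lexicographic sort of the integer cell keys.
    order = np.lexsort(keys.T)
    keys, points = keys[order], points[order]
    boundaries = np.any(np.diff(keys, axis=0) != 0, axis=1)
    starts = np.concatenate([[0], np.flatnonzero(boundaries) + 1, [len(points)]])
    return np.array([points[a:b].mean(axis=0)
                     for a, b in zip(starts[:-1], starts[1:])])

rng = np.random.default_rng(3)
dense = rng.random((2000, 3))
sparse = simplify_by_clustering(dense)
print(len(dense), '->', len(sparse))  # at most 4**3 = 64 occupied cells remain
```

This is the crudest member of the family: it bounds the output size by the number of occupied cells but, unlike the error-driven methods above, takes no account of curvature or fitting error.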
4.6. Segmentation
Segmentation involves partitioning a given image into a number of homogeneous segments such that the union of any two neighboring segments yields a heterogeneous segment.


Numerous methods have been developed to deal with segmentation. Li et al. [49] classified the methods into histogram-based techniques, edge-based techniques, region-based techniques, Markov random field-based techniques, hybrid methods that combine edge and region methods, the level set method, and the morphological watershed transform. Some significant works were reviewed by Zhang [50] and by Chan and Zhu [51]. Cremers et al. [7] proposed a variational approach in which, besides the level set function for segmentation, a new function called the labeling function is introduced to indicate the regions in which shape priors should be enforced.
If the data in a segment is treated as a feature, segmentation relates closely to feature extraction or fitting in some ways. Some research studies have focused on the fitting issue. Many of the fitting methods [9,52-54] have focused on recovering patches of simple geometric surfaces, which are then connected together to form a B-rep (Boundary Representation) model. Rabbani and van den Heuvel [55] presented methods for fitting CAD models to point clouds. Constructive Solid Geometry (CSG) is used to represent the models due to its flexibility and compactness. A CSG tree is converted to a B-rep (or a triangular mesh, or a point cloud) for approximating the orthogonal distance of a given point from the model surface. The notion of internal constraints was introduced to represent the geometric relationships among the constituent components of a CSG tree.
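As a minimal illustration of fitting a simple geometric surface to noisy data with outliers, the following sketch recovers a dominant plane with a RANSAC-style consensus loop; the iteration count and inlier threshold are illustrative assumptions:

```python
import numpy as np

def ransac_plane(points, n_iter=200, threshold=0.02, rng=None):
    """Fit a plane n.x + d = 0 to the largest consensus set of points."""
    rng = rng or np.random.default_rng()
    best_inliers = np.zeros(len(points), dtype=bool)
    best_plane = (np.array([0.0, 0.0, 1.0]), 0.0)
    for _ in range(n_iter):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = -np.dot(n, p0)
        inliers = np.abs(points @ n + d) < threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_plane = inliers, (n, d)
    return best_plane, best_inliers

# 300 points near the plane z = 0 plus 60 uniformly random outliers.
rng = np.random.default_rng(4)
plane_pts = np.column_stack([rng.random((300, 2)),
                             rng.normal(0, 0.005, 300)])
outliers = rng.random((60, 3))
(n, d), inliers = ransac_plane(np.vstack([plane_pts, outliers]), rng=rng)
print(abs(n[2]), inliers.sum())   # normal near (0, 0, +/-1); most plane points kept
```

The inlier mask is one recovered segment; repeating the loop on the remaining points yields further patches, which is the spirit of the patch-by-patch fitting methods cited above.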

4.7. Other relevant work

Dorai et al. [56] implemented new techniques for robust estimation of the transformations relating multiple views and seamless integration of the registered data to form an unbroken surface. They have also validated the system performance experimentally. Vincze et al. [57] presented methods to detect relevant geometric features such as freeform surfaces, cavities, and rib sections. The range image processing requires about 30 s on a standard PC, and path planning can be executed in another 30 s on a high-end PC. Rusinkiewicz and Levoy [20] applied a structured-light range finder, a real-time alignment algorithm, and a voxel-based merging algorithm to scan and model (in 4 minutes) a turtle 18 cm long at a resolution of 0.5 mm, containing approximately 200,000 polygons. Hutterer et al. [58] used a data acquisition and processing system for modeling of the forming process. Product models are digitized by means of a 3D sensor during the controlled process. The determination of geometrical deviations is based on feature extraction in these areas. The use of form features speeds up the geometrical data processing, thus allowing online process control. Furthermore, a manufacturing process adhering closely to specifications without computing complete surface geometries is possible. This has allowed a control loop to be developed that greatly enhances the reproducibility of flexible forming processes by using sensors. The work focuses on efficient, online-capable 3D sensor data processing, which can be structured according to feature extraction, segmentation, and registration.
Tait et al. [59] developed an automated visual inspection framework implemented on a blackboard architecture. As the choice of resolution determines the smallest size of detectable defect, high resolutions are used to prevent loss of detail during the processing of an image difference. The image difference is produced by subtracting fixed and moving images after registration into a common coordinate system. Preliminary testing of the intensity-based registration algorithm showed that, to obtain reasonable processing speeds, images needed to be small or down-sampled in size. Large high-resolution images were found to take more than a minute to register; in contrast, smaller images required only a few seconds.
Shih and Wang [60] applied 3D acquisition and processing techniques for the comparison of as-designed and as-built construction. The comparison was used to quantify the differences as a reference for the verification and modification of the original building design. This study compared 3D computer models with 3D point clouds of a campus building. The differences could easily be identified from the perspectives and elevations of different views. Gerth [61] described a system capable of Virtual Functional Build (VFB), where point cloud representations of parts are assembled virtually using assembly modeling software. The potential benefits of VFB over traditional techniques are reduced capital investment in tooling and, more importantly, reduced lead time due to earlier collaboration between geographically remote part and sub-assembly suppliers. Pernkopf [62] proposed an approach for the acquisition and reconstruction of 3D surface data of raw steel blocks in harsh industrial environments. The 3D surface is reconstructed by using geometric transformations, and methods based on spline interpolation and singular value decomposition have been recommended for recovering the depth map of the inspected goods. More than 98% of the surface segments were classified correctly.

4.8. Available software tools

Software tools for data processing include an extensive collection of modules for different purposes ranging from scanner
control to 3D modeling. For example, Rocchini et al. [63] introduced a 3D scanning software suite for range map alignment, range map merging, mesh editing, and mesh simplification.
All vendors of 3D scanners (see [8] for a list) provide software tools to support data processing; however, most of them are rudimentary. The demand for greater capability and growth in the market has led to stand-alone software products for 3D scanning. Table 2 lists some of the available stand-alone 3D data processing software providers (from [3] and http://scanning.fh-mainz.de/scanninglist.php). While the market is growing, it is likely that consolidation will take place given the number of competitors and the move by large CAD and 3D modeling
Table 2
Independent 3D scanning software tools.

Producer              | Software tool                | Hyperlink
Braintech Canada Inc. | eVisionFactory 4.0, VOLTS-IW | http://www.braintech.com/
InnovMetric           | Polyworks                    | http://www.innovmetric.com/Manufacturing/home.aspx
3D Veritas            | 3D-Veritas                   | http://www.3dveritas.com/
Metrologic Group      | Metrolog II                  | http://www.metrologic.fr/
Octocom AG            | OctoCAD                      | http://www.zf-laser.com/e_octocad.html
SDRC                  | Imageware Surfacer           | http://www.mayametrix.com/surfacer
Z+F UK Ltd.           | Light Form Modeller          | http://praxis.zf-uk.com/index.html
Inn.Tec s.r.l.        | Reconstructor                | www.reconstructor.it
INUS Technology       | RapidForm                    | www.rapidform.com
Kubit GmbH            | PointCloud                   | www.kubit.de
Phocad GmbH           | Phidias                      | www.phocad.de
UGS                   | Imageware                    | www.ugsplm.de
Raindrop Geomagic     | Geomagic Studio              | www.geomagic.com
Pointools             | Pointools View               | www.pointools.com
Free Open Source      | MeshLab                      | meshlab.sourceforge.net
3D3 Solutions         | FlexScan3D                   | http://www.3d3solutions.com/


software houses to compete in this market by upgrading their products to import and handle large numbers of range points for surface construction [3].

5. Applications
There is an increasing demand for accurate, as-built 3D models of existing industrial sites in many sectors. For example, Rabbani and van den Heuvel [64] recommended application areas including (i) planning (clash detection, decommissioning, design changes); (ii) revamping and retrofitting of old sites; (iii) implementation of services based on virtual and augmented reality; (iv) off-site training; (v) safety analysis; and (vi) change detection. In fact, many successful applications have been found in reverse engineering, design and manufacturing, inspection and measurement, digital mock-up and simulation, medical applications, multimedia, and art and museums (http://www.emicroscribe.com/products/index.htm). In the following, some applications from a manufacturing perspective are introduced.
5.1. Manufacturing applications
Reverse engineering and rapid prototyping: Products in single units or small volumes are required to be manufactured for special purposes, such as demonstrating a new product before the product is finalized and the production line launched. It is a common situation that the concept of the product comes from an object without a computer model; a vision system is then applied to capture data from the existing object and generate its surface model.
Part location and alignment: High-precision machining operations need to know the exact location where a part is positioned. A vision system can be applied to detect the position when a part is fixtured, and the detection result can be used to modify the corresponding machining program or to align the part on a machine tool to an ideal position, so that the tolerance of part positioning can be accommodated. In some situations, such as the milling of a cast part, the machining margin can change from one part to another; a vision system can be applied to capture the real dimensions of an actual part so that an optimized fixturing position can be determined to achieve the required machining quality.
Inspection: Inspection is a critical step in the development of an entire production line. Coordinate Measuring Machines (CMMs) are widely used for fulfilling the inspection task. However, CMMs have some limitations: (i) a part has to be placed on the CMM to proceed with the inspection, in many cases off the production line; (ii) inspection takes a long time, and 100% inspection is impossible for most products; and (iii) contact inspection may damage the part surface. A vision-based inspection system is expected to address all the aforementioned issues.
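A vision-based dimensional check of this kind can be sketched as a nearest-neighbor deviation map between a registered scan and the as-designed reference cloud; the tolerance value and the simulated defect below are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def inspect(scanned, reference, tolerance=0.05):
    """Per-point deviation of a scanned cloud from a reference (as-designed) cloud."""
    dist, _ = cKDTree(reference).query(scanned)
    return dist, dist > tolerance

# Reference: a dense grid on the plane z = 0; scan: same grid with a local bump.
g = np.linspace(0.0, 1.0, 50)
X, Y = np.meshgrid(g, g)
reference = np.column_stack([X.ravel(), Y.ravel(), np.zeros(X.size)])
scanned = reference.copy()
bump = np.hypot(scanned[:, 0] - 0.5, scanned[:, 1] - 0.5) < 0.1
scanned[bump, 2] += 0.2            # simulated surface defect

dev, out_of_tol = inspect(scanned, reference)
print(out_of_tol.sum(), 'points out of tolerance')
```

Because the comparison is non-contact and runs in a fraction of a second even for dense clouds, it suggests how a vision system could approach 100% inspection where a CMM cannot.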
Virtual assembly: In developing a new product, prototype parts need to be assembled together to validate the feasibility of the product. These prototype parts are usually fabricated individually, with little or no consideration of the complete product. The assembly therefore requires a process of trial and error for re-ordering the assembly sequence, relocating and reorienting parts in an assembly operation, and changing the physical parts to fit them into the assembly. A vision-based system can acquire and generate CAD models for these prototype parts, identify the critical problems, and accelerate the assembly process in a virtual environment.
Flexible robot automation in assembly, welding, and surface treatment: Automation relies on industrial robots. Robots have to be programmed to execute tasks, and a robot program is usually dedicated to one task. It is therefore desirable that a vision-based system be applied to capture the changes of a task, and that a software tool respond to the changes automatically by generating new programs for the task.

5.2. Technical gaps


Vision-based systems have been applied successfully in many applications, particularly in medical applications, historical recoveries, and reverse engineering. In the manufacturing domain, however, critical requirements must be met. To be practical for broad manufacturing applications, a vision system must embody a number of simultaneous characteristics, including high-speed and real-time data analysis, high precision, a large measurement volume, reliability and robustness in measuring features on both simple and complex parts, the ability to measure parts with complex surface finishes ranging from freshly machined to dark paint, operation in normal lighting conditions, and simplicity of operation [1]. All the aforementioned requirements must be taken into consideration in developing a vision-based application system.
The technical gap in this context is to meet these requirements simultaneously and cost-effectively, because any one of them can become a critical barrier to a practical implementation.
High-speed and real-time data analysis: Vision data-acquisition hardware systems use cutting-edge technologies, and the speed of data acquisition can be fast. The bottleneck, however, is the data processing system; it usually takes a relatively long time to process the acquired data due to the large size of point cloud data. The development of a real-time and efficient data processing system still has a long way to go.
High precision: A vision system can be manufactured to the high precision required by many manufacturing applications. The difficulty, however, is that high precision sacrifices other desirable characteristics such as working volume, scanning speed, and cost. A trade-off has to be made to balance all of these requirements.
Large measurement volume: In comparison with the working volume of a vision sensor, many parts in manufacturing are large. To acquire data over a large part, multiple sensors have to work together and/or vision sensors have to be integrated with moving devices such as industrial robots; this brings challenges in coordinating the sensors and moving devices and in merging multi-source data under the same reference coordinate system. Most small sensor providers and manufacturing companies lack this expertise.
Reliability and robustness in actual environments: Vision systems can work well for smooth and bright parts in a static, clean, and well-lighted environment. It becomes a great challenge to achieve consistent quality for complex parts with unclean surfaces and dark or shiny colors or textures in an actual manufacturing environment under normal lighting conditions. Another point is that a vision system is very sensitive to its surrounding environment and must be customized accordingly; however, it is difficult to maintain consistent conditions in a working environment.
Customization, integration, and ease of operation: Every application has unique requirements of a vision data-acquisition and vision-based automated system. Extra work is required to determine these requirements, such as working volume, speed, and time, with consideration of cost. In addition, the result of data processing has to be utilized in a certain way, and integration with other system components is required to develop a practical vision-based manufacturing system.


6. Summary and research trends

6.1. Hardware systems

Data acquisition and processing techniques are evolving rapidly. Active stereo vision is well suited to the recovery of surfaces that have low curvature, but it tends to result in sparse data collection when applied to a surface with high curvature. Various laser scanners with high precision and efficiency have become commercially available. However, process automation has critical requirements for data acquisition in terms of time, accuracy, and throughput rates. There exists no industrial vision system capable of handling all tasks in every application field. Only when the requirements of a particular application domain are specified can appropriate decisions for the design and development of the application system be taken. The ultimate goal is to satisfy increasingly demanding performance criteria against budget constraints from the viewpoint of end users [65]. The following three directions have been identified towards this goal: (a) the rapid development of semiconductor technology along with the development of multipurpose mainstream operating systems; (b) improvements in human-computer interfaces; and (c) advances in solid-state imaging sensors [2].

6.2. Software systems

Whereas 3D scanners have become powerful, the performance of the corresponding software tools is perceived as unsatisfactory; commercial 3D modeling tools lack the ability to deal with the large amounts of data in a vision-based system. The range of software to make full use of the hardware is still lacking in many cases, and underdeveloped [9]. This might be the result of the limited resources of the small companies who produce stand-alone software for the treatment of 3D scanner data [3]. Current commercial software systems allow point cloud processing and single surface fitting with manual involvement, and the generation of an accurate surface model is possible only for simple objects or polyhedral approximations. However, users wish to automatically process a wide range of objects, possibly from a variety of data capture devices with different characteristics, to produce models in a variety of representations and accuracies. It is expected that a certain consolidation and concentration may occur in the near future, hopefully leading to better software solutions (and support), thus promoting the acceptance of 3D scanning as a method for 3D modeling [3]. Further studies are required in key research areas including improved scanner calibration, noise reduction, view registration and integration, reliable segmentation and surface fitting, and feature extraction and comparison [9].

6.3. Applications

Applications are limited to structured environments where the time required for data acquisition and processing is not critical to the application. The implementation of 3D data processing generally requires many manual operations. In customizing a 3D data acquisition and processing system for a manufacturing application, study of both hardware and software systems is required, and the following aspects should be considered:

• cost-effective data acquisition sensors suitable to be applied in actual manufacturing environments;
• technologies for reducing the cycle time in data processing;
• approaches to deal with incomplete surfaces, noise, and distortion; and
• integration of a data acquisition and processing system with automated programming and planning systems.

References
[1] Johnston K. Automotive applications of 3D laser scanning. Whitepaper,
Metron Systems Incorporated, 2006, /http://www.metronsys.com/publica
tions/automotive-s.pdfS.
[2] Malamsa EN, Petrakis EGM, Zervakis M, Petit L. A survey on industrial
vision systems, applications and tools. Image and Vision Computing 2003;21:
17188.
[3] Boehler W, Heinz G, Marbs A, Siebold M. 3D scanning software: an
introduction. In: Proceedings of the International workshop on Scanning for
Cultural Heritage Recording, Corfu, Greece, Sept. 12, 2002, p. 4751.
[4] Turk G, Levoy M. Zippered polygon meshes from range images. In:
Proceedings of the 21st annual conference on Computer Graphics and
Interactive Techniques. 1994, p. 311318.
[5] Hoppe H, DeRose T, Duchamp T, McDonald J, Stuetzle W. Surface
reconstruction from unorganized points. ACM SIGGRAPH 1992;1992:718.
[6] Garland M, Heckbert PS. Surface simplication using quadric error metrics.
Proceedings of SIGGRAPH 1997:97. /http://citeseer.ist.psu.edu/garland97
surface.htmlS.
[7] Cremers D, Sochen N, Schnorr C. Towards recognition-based variational
segmentation using shape priors and dynamic labeling. In: Grifth L, editor.
In: LNCSInternational conference on Scale Space Theories in Computer
Vision, 2695, 2003, p. 388400.
[8] Boehler W, Marbs A. 3D scanning instruments. In: Proceedings of the
International workshop on Scanning for Cultural Heritage Recording, Corfu,
Greece, Sept. 12, 2002, p. 912.
[9] Varady T, Martin R, Cox J. Reverse engineering of geometric modelsan
introduction. Computer-Aided Design 1997;29(4):25568.
[10] Isgro F, Odone F, Verri A. An open system for 3D data acquisition from
multiple sensors. In: Proceedings of the seventh international workshop on
Computer Architecture for Machine Perception (CAMP05), Universita di
Genova, Italy, July 46, 2005, p. 5257.
[11] Scott WR, Roth G, Rivest J-F. View planning for automated three-dimensional
object reconstruction and inspection. ACM Computing Surveys
2003;35(1):6496.
[12] Blais F. Review of 20 years of range sensor development. Journal of Electronic
Imaging 2004;13(1):23140.
[13] Wulf O, Wagner B. Fast 3D-scanning methods for laser measurement
systems. In: Proceedings of the 14th International Conference on Control
Systems and Computer Science (CSCS14), July 25, 2003, Bucharest, Romania.
[14] Larsson S, Kjellander JAP. Motion control and data capturing for laser
scanning with an industrial robot. Robotics and Autonomous Systems
2006;54:45360.
[15] Biegebauer G, Pichler A, Vincze M. Detection of geometric features in range
images for automated robotic spray painting. Vision with Non-Traditional
Sensors, In: Proceedings of the 26th Workshop of the Austrian Association for
Pattern Recognition, September 1011, 2002, ISBN 3-85403-160-2.
[16] Teutsch C, Isenberg T, Trostmann W, Berndt MD, Strothotte T. Evaluation and
correction of laser-scanned point clouds. In: Beraldin J-A, El-Hakim SF, Gruen
A. Walton JS, editor. In: Proceedings of SPIE, 5665, Videometrics VIII, 2005,
p. 172183.
[17] Ingensand H. Metrological aspects in terrestrial laser-scanning technology.
In: Proceedings of the 3rd IAG/12th FIG symposium, Baden, May 2224, 2006.
[18] Chen Y, Medioni G. Object modeling by registration of multiple range images.
In: Proceedings of the IEEE Conference on Robotics and Automation, 1991.
[19] Besl PJ, Mckay ND. A method for registration of 3D shapes. IEEE Transactions
on Pattern Analysis and Machine Intelligence 1992;14(2):23955.
[20] Rusinkiewicz S, Levoy M. Efficient variants of the ICP algorithm. In:
Proceedings of the third international conference on 3D Digital Imaging
and Modeling, Quebec, Canada, 2001, p. 145–152.
[21] Pandzo H, Mahadevan S, Bennamoun M, Williams JA. A 3D acquisition and
modeling system. In: Proceedings of the IEEE international conference on
Acoustics, Speech and Signal Processing, Salt Lake City, Utah, USA, May 7–11,
2001, vol. 3, p. 1941–1944.
[22] Liu Y, Pottmann H, Wang W. Constrained 3D shape reconstruction using a
combination of surface fitting and registration. Computer-Aided Design
2006;38(6):572–83.
[23] Pottmann H, Leopoldseder S, Hofer M. Registration without ICP. Computer
Vision and Image Understanding 2004;95(1):54–71.
[24] Luck J, Little C, Hoff W. Registration of range data using a hybrid simulated
annealing and iterative closest point algorithm. In: Proceedings of the IEEE
International Conference on Robotics and Automation, San Francisco, April
24–28, 2000, p. 3739–3744.
[25] Murino V, Fusiello A, Castellani U, Ronchetti L. Pre-aligned ICP for the
reconstruction of complex objects. In: Proceedings of the Italy–Canada 2001
Workshop on 3D Digital Imaging and Modeling Applications of Heritage,
Industry, Medicine & Land, Padova, Italy, April 3–4, 2001.
[26] Gelfand N, Mitra NJ, Guibas L, Pottmann H. Robust global registration. In:
Desbrun M, Pottmann H, editors. Eurographics Symposium on Geometry
Processing, 2005.
[27] Fischer A, Shpitalni M. Encoding and recognition of features by applying
curvatures and torsion criteria to boundary representation. In: Proceedings of
ASME 1992 Winter Annual Meeting, Symposium on Concurrent Engineering,
1992, p. 69–84.
[28] Vandenbrande J, Requicha A. Spatial reasoning for the automatic recognition
of machinable features in solid models. IEEE Transactions on Pattern Analysis
and Machine Intelligence 1993:1269–85.
[29] Regli W, Gupta S, Nau D. Feature recognition for manufacturability analysis.
Technical Report ISR TR94-10, University of Maryland, 1994.
[30] Thompson WB, Owen JC, Germain HJ, Stark SR, Henderson TC. Feature-based
reverse engineering of mechanical parts. IEEE Transactions on Robotics and
Automation 1999;15(1):57–66.
[31] Schindler K, Bauer J. A model-based method for building reconstruction. In:
Proceedings of the first IEEE international workshop on Higher-Level
Knowledge in 3D Modeling and Motion Analysis, 2003, p. 74–82.
[32] Gois JP, Filho AC, Nonato LG, Biscaro HH. Surface reconstruction:
classification, comparisons, and applications. In: Proceedings of the XXV
Iberian Latin-American Congress on Computational Methods, 2004, p. 1–15.
[33] Schall O, Samozino M. Surface from scattered points: a brief survey of recent
developments. In: Proceedings of the first international workshop Towards
Semantic Virtual Environments, March 16–18, 2005, Villars, Switzerland,
p. 138–147.
[34] Azernikov S, Miropolsky A, Fischer A. Surface reconstruction of freeform objects
based on multiresolution volumetric method. Transactions of the ASME, Journal
of Computing and Information Science in Engineering 2003;3:334–7.
[35] Cazals F, Giesen J. Delaunay triangulation based surface reconstruction. In:
Boissonnat J-D, Teillaud M, editors. Effective computational geometry for
curves and surfaces. Springer-Verlag; 2006. p. 231–76.
[36] Amenta N, Choi S, Kolluri RK. The power crust, unions of balls and the
medial axis transform. Computational Geometry: Theory and Applications
2001;19:127–53.
[37] Bernardini F, Mittleman J, Rushmeier H, Silva C, Taubin G. The ball-pivoting
algorithm for surface reconstruction. IEEE Transactions on Visualization and
Computer Graphics 1999;5(4):349–59.
[38] Abdel-Wahab MS, Hussein AS, Taha I, Gaber MS. An enhanced algorithm for
surface reconstruction from a cloud of points. In: GVIP'05 Conference, Cairo,
Egypt, 19–21 December, 2005.
[39] Hoppe H, DeRose T, Duchamp T, Halstead M, Jin H, McDonald J, et al.
Piecewise smooth surface reconstruction. ACM SIGGRAPH 1994:295–302.
[40] Neugebauer PJ, Klein K. Adaptive triangulation of objects reconstructed from
multiple range images. In: Proceedings of the IEEE Visualization '97, Late
Breaking Hot Topics, Phoenix, Arizona, October 20–24, 1997.
[41] Freedman D. Surface reconstruction, one triangle at a time. In: Proceedings of
the 16th Canadian Conference on Computational Geometry, Concordia
University, Montreal, Quebec, August 9–11, 2004, p. 15–19.
[42] Curless B, Levoy M. A volumetric method for building complex models from
range images. In: Proceedings of SIGGRAPH '96, 1996.
[43] Zhao H-K, Osher S, Fedkiw R. Fast surface reconstruction using the level set
method. In: Proceedings of the IEEE workshop on Variational and Level Set
Methods in Computer Vision, July, 2001.
[44] Engel K, Westermann R, Ertl T. Isosurface extraction techniques for
web-based volume visualization. In: Proceedings of the conference on
Visualization '99, San Francisco, California, 1999, p. 139–146.
[45] Heckbert P, Garland M. Survey of polygonal surface simplification algorithms.
Technical Report, CMU-CS-95-194, Carnegie Mellon University, 1997,
⟨http://citeseer.ist.psu.edu/heckbert97survey.html⟩.
[46] Schroeder WJ, Zarge JA, Lorensen WE. Decimation of triangle meshes.
Computer Graphics 1992;26(2):65–70.
[47] Hoppe H, DeRose T, Duchamp T, McDonald J, Stuetzle W. Mesh optimization.
ACM SIGGRAPH 1993:19–26.
[48] Pauly M, Gross M, Kobbelt LP. Efficient simplification of point-sampled
surfaces. In: Proceedings of the conference on Visualization '02, Boston,
Massachusetts, 2002, p. 163–170.
[49] Li H, Elmoataz A, Fadili J, Ruan S. An improved image segmentation approach
based on level set and mathematical morphology. Proceedings of SPIE
2003;5286:851.
[50] Zhang Y. A review of recent evaluation methods for image segmentation. In:
Proceedings of the international symposium on Signal Processing and its
Applications, Kuala Lumpur, Malaysia, 13–16 August, 2001, p. 148–151.
[51] Chan T, Zhu W. Level set based shape prior segmentation. Technical Report,
UCLA, 2003.
[52] Fisher RB. Applying knowledge to reverse engineering problems. Proceedings
of Geometric Modeling and Processing 2002:149–55.
[53] Werghi N, Fisher RB, Ashbrook A, Robertson C. Shape reconstruction
incorporating multiple non-linear geometric constraints. Computer-Aided
Design 1999;31(6):363–99.
[54] Ahn SJ, Rauh W. Orthogonal distance fitting of implicit curves and surfaces.
IEEE Transactions on Pattern Analysis and Machine Intelligence
2002;24(5):620–38.
[55] Rabbani T, van den Heuvel F. Method for fitting CSG models to point clouds
and their comparison. In: Computer Graphics and Imaging, August 17–19,
2004, Kauai, Hawaii, USA.
[56] Dorai C, Wang G, Jain AK, Mercer C. Registration and integration of multiple
object views for 3D model construction. IEEE Transactions on Pattern
Analysis and Machine Intelligence 1998;20(1):83–9.
[57] Vincze M, Pichler A, Biegelbauer G. Detection of classes of features for
automated robot programming. In: Proceedings of the 2003 IEEE
international conference on Robotics and Automation, Taipei, Taiwan,
September 14–19, 2003, p. 151–155.
[58] Hutterer A, Menzel T, Otto A, Muller G. Feature extraction for advanced
control of flexible forming processes. Proceedings of the Vision Modeling and
Visualization Conference 2001:43–50.
[59] Tait RJ, Schaefer G, Hopgood AA, Nolle L. Automated visual inspection using a
distributed blackboard architecture. International Journal of Simulation
Systems, Science and Technology 2006;7(3):12–20.
[60] Shih N-J, Wang P-H. Point-cloud-based comparison between construction
schedule and as-built progress: long-range three-dimensional laser scanners
approach. Journal of Architectural Engineering 2004;10(3):98–102.
[61] Gerth RJ. Virtual functional build for body assembly. In: Proceedings of 2005
ASME international Mechanical Engineering Congress and Exposition,
November 5–11, Orlando, Florida, USA, 2005, IMECE2005-79884.
[62] Pernkopf F. 3D surface acquisition and reconstruction for inspection of raw
steel products. Computers in Industry 2005;56:876–85.
[63] Rocchini C, Cignoni P, Montani C, Pingi P, Scopigno R. A suite of tools for the
management of 3D scanned data. In: Workshop Proceedings of 3D Digital
Imaging and Modeling Applications: Heritage, Industry, Medicine & Land,
April 3–4, 2001.
[64] Rabbani T, van den Heuvel F. Automatic point cloud registration using
constrained search for corresponding objects. In: Proceedings of the 7th
Conference on Optical 3-D Measurement Techniques, October 3–5, 2005,
Vienna, Austria, p. 177–186.
[65] McKrory J, Daniels M. The impact of new technology in machine vision.
Sensor Review 1995;15:8–11.