
3D shape measurement of objects with high dynamic range of surface reflectivity

Gui-hua Liu,1,2,* Xian-Yong Liu,1 and Quan-Yuan Feng2


1 School of Information Engineering, Southwest University of Science and Technology, Mianyang 621010, China
2 School of Information Science & Technology, Southwest Jiaotong University, Chengdu 610031, China
*Corresponding author: fengquanyuan@163.com

Received 13 April 2011; revised 26 June 2011; accepted 26 June 2011; posted 27 June 2011 (Doc. ID 145846); published 4 August 2011

This paper presents a method that allows a conventional dual-camera structured light system to directly
acquire the three-dimensional shape of the whole surface of an object with high dynamic range of surface
reflectivity. To reduce the degradation in area-based correlation caused by specular highlights and dif-
fused darkness, we first disregard these highly specular and dark pixels. Then, to solve this problem and
further obtain unmatched area data, this binocular vision system was also used as two camera-projector
monocular systems operated from different viewing angles at the same time to fill in missing data of the
binocular reconstruction. This method involves producing measurable images by integrating such tech-
niques as multiple exposures and high dynamic range imaging to ensure the capture of high-quality
phase of each point. An image-segmentation technique was also introduced to distinguish which mono-
cular system is suitable to reconstruct a certain lost point accurately. Our experiments demonstrate that
these techniques extended the measurable areas on the high dynamic range of surface reflectivity such as
specular objects or scenes with high contrast to the whole projector-illuminated field. © 2011 Optical
Society of America
OCIS codes: 150.6910, 330.1400, 100.3010, 120.6650, 080.5084.

1. Introduction

A structured light binocular technique is becoming more and more popular to achieve a three-dimensional (3D) shape of surfaces due to its fast speed and high accuracy, as well as its nondestructive nature. But the conventional fringe projection [1–3] or Moiré [4,5] techniques can only measure diffuse surfaces. High dynamic range (HDR) of surface reflectivity, such as with specular objects or scenes with very bright and dark parts, may lead to dynamic ranges that the camera(s) cannot handle; thus, the optical signal cannot be properly retrieved. As a consequence, parts of the surface will go undetected or are misread. One traditional way to measure such objects involves spraying the surface with certain materials, such as developer; however, this is not suitable for many objects, such as clothing, sculptures, and cultural relics.

A series of papers [6–10] explore the use of a diffusive screen of a planar monitor coupled with a projector as a structured light source to measure specular surfaces so that information for the whole field can be obtained. The periodic fringe patterns are displayed on the screen, reflected by the measured specular surface, and the distorted patterns are captured with a camera. A 3D shape is obtained by integrating the slope distribution numerically, which can be reconstructed by analyzing these distorted patterns. However, the phase values generally depend on the surface slopes coupled with the surface heights and have ambiguities in their distributions. Hongwei Guo [11] improved this method by moving the diffusive light source vertically to two or more known positions and measuring the phase distribution of the deformed fringe pattern at each position to eliminate phase ambiguities and error accumulations. But this is a time-consuming procedure and always needs some known positions.


Scene-adapted structured light is another approach, making the colors, intensities, and shapes of the projected patterns adapt to the scene. Caspi et al. [12] presented structured light approaches that explicitly model the camera-projector path. In [13], the question of an optimal set of patterns was formulated for the first time. In order to avoid overexposure and underexposure in the image, Koninckx [14] proposed a more complete camera-projector model, which allows adapting patterns online on a per-pixel basis. This model is based on a crude estimate of the scene geometry and a reflectance characteristic, somewhat similar to the work proposed in [15], in which a projector was used to change an object's appearance. But it relies on prior knowledge of surface reflectance or on an assumed reflectance model. As a result, these methods may cause severe errors in the measurement of objects.

Our approach falls into another category: changing the viewpoint, the exposure time of the camera, or the light intensity of the projection. This type of approach seeks to avoid estimating surface roughness and scene geometry because measuring them is often impractical. Bhat and Nayar [16] used a trinocular stereo system as three binocular stereo systems and analyzed pairs of camera images to extract correct depth estimates of scene points from different pairs so as to yield an accurate depth map of the scene. The important characteristic of these configurations is that for each scene point in the common field of view of the sensors, at least one binocular pair provided the correct depth estimate. But it involved extra effort to determine a suitable trinocular configuration. Scharstein [17] used a camera translating on a linear stage to construct a binocular system, and one or two light projectors illuminated the scene from different directions. This approach enables high-confidence view disparities at points visible in both cameras and illuminated by at least one source. A number of unknown code values due to shadows in the scene could be reduced by using more than one illumination source. But this paper points out that partial occlusion (visibility in only one view) was unavoidable because of highly specular surfaces or very low albedo areas. This would ultimately lead to some holes in the reconstruction. To compensate for the influence of specular reflections or shadowed areas, Kowarschik et al. [18] used a projection system to project grating structures from up to 15 different directions, combined with a simultaneous variation of the intensity of the projected grating structures. Moreover, the object can be rotated, yielding other views of the object. A camera acquired different patches of the object that were transformed into a global coordinate system to achieve complete measurement. But the system has so many parameters that the calibration is a time-consuming procedure and is difficult to realize with high accuracy. This can result in poor conditioning of the coordinate calculation because of the errors and their propagation. Recently, Zhang and Yau [19] proposed an HDR 3D scanning technique based on multiple exposures and a phase-shifting method. A sequence of fringe images with different exposures was taken. The pixels saturated because of specular reflection in the higher exposure were replaced by the corresponding pixels of the lower exposures. Therefore, the specular or dark area can be properly measured without affecting the rest of the areas. However, most of the multiple-exposure fusion methods depend on certain image properties, like the scene irradiance map [20], etc. When the irradiance across a scene varies greatly, there will always be overexposed and underexposed areas in the captured image no matter what exposure time is used.

Our approach is to further optimize the third type of method to handle HDR surfaces. A conventional dual-camera structured light system is used in our research. The techniques of multiple exposures and HDR imaging are unified and generalized to lead to insights regarding solutions to the traditional problems of dealing with HDR surfaces. We further developed a practical technique combining both binocular and monocular reconstruction to deal with some parts of highly specular surfaces with large reflectivity variations that may be too shiny or dark to measure by all the mentioned methods. These improvements now make it suitable for a conventional structured light binocular vision system to perform high-quality whole-field surface measurements of objects with HDR of surface reflectivity without increasing the hardware cost.

The paper is organized as follows. Before giving a more detailed explanation of our techniques in Sections 3 and 4, Section 2 explains the effect of reflection mechanisms on structured light systems. Some experimental procedures and results for typical cases of measurement of objects are given in Section 5. The last section summarizes the main results of this work.

2. Effect of Reflection Mechanisms on Structured Light Binocular System

Briefly, in our paper, a series of digital fringe patterns composed of vertical straight stripes are generated by a computer and sent to a digital video projector, which casts the fringe images onto the object. These deformed fringe images are then recorded by two cameras from different angles. The digital fringe images acquired by the cameras are then processed to retrieve the phase using a phase-shifting algorithm. A four-step phase-shifting algorithm with a phase shift of π/2 was utilized to determine the relative phase in this research. The projector also illuminates the scene with a temporally varying pattern of light stripes. A temporal phase unwrapping algorithm is utilized to get the absolute phase map, which serves as an important matching constraint, together with the epipolar geometry, to get the corresponding points between the two images. The depth of corresponding points can be determined by triangulating between the projected light patterns and the observing camera viewpoint [21,22].
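For concreteness, the standard four-step relation can be sketched as below. This is a minimal illustration in our own notation, not the authors' implementation; the function name and the treatment of the unwrapping step are our assumptions.

```python
import numpy as np

def wrapped_phase(i1, i2, i3, i4):
    """Four-step phase shifting with shifts 0, pi/2, pi, 3*pi/2.

    With I_k = A + B*cos(phi + (k-1)*pi/2), the differences
    I4 - I2 = 2B*sin(phi) and I1 - I3 = 2B*cos(phi) give the
    wrapped (relative) phase in (-pi, pi].
    """
    i1, i2, i3, i4 = (np.asarray(i, dtype=float) for i in (i1, i2, i3, i4))
    return np.arctan2(i4 - i2, i1 - i3)

# A temporal phase unwrapping pass over a sequence of fringe densities
# then turns this wrapped phase into the absolute phase map that serves
# as the matching constraint between the two camera views.
```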


It is known that surface radiance may be decomposed into three primary reflection components: the diffuse lobe, the specular lobe, and the specular spike. Figure 1 shows the polar plots of these three components. The diffuse lobe represents both an internal scattering mechanism and multiple reflections on the surface in a random manner, and it reflects hemispherically in all directions. The specular lobe spreads over some range around the specular direction, the angle at which the incident angle equals the reflected angle. The specular spike represents a mirror-like reflection, and it is nearly zero in all directions, except for a very narrow range around the specular direction.

Fig. 1. (Color online) Polar plots of the three reflection components.

When the surface is smooth enough, the irradiance across a scene varies greatly. The specular spike areas are often too bright while the diffuse lobe areas are often too dark; the captured image may be overexposed or underexposed due to limitations on the measuring range of intensity of the camera. But an overexposed or underexposed image cannot represent the proper intensity of the fringe images, which can lead to serious phase error. These errors would result in the loss of matching points and leave blank areas or holes in the final 3D shape.

To reduce the degradation in area-based correlation caused by specular highlights and diffuse darkness, we first acquire measurable images by multiple exposures and an HDR imaging technique, disregarding those highly specular or dark points. Then we adopt a technique that combines the binocular and monocular vision stereo to further obtain unmatched data of the whole surface.

3. High Dynamic Range Imaging

To meet the luminous requirements of different surface properties, a multi-exposure technique is adopted for the cameras. The camera takes the fringe images with different exposures by adjusting the aperture of the camera lens or by changing the exposure time of the camera. A longer exposure may give good fringe quality in the darkest areas, but the brightest areas might be overexposed, while a shorter exposure may give good fringe quality in the brightest areas but leave the darkest areas invisible.

The HDR imaging technique is also introduced to merge each well-illuminated and unsaturated pixel from different exposures together to obtain the right/left synthesized images, and to disregard the highly specular and dark pixels. These techniques match the measuring range of intensity of the camera with the range of reflection intensity.

To start the procedure, a sequence of even white patterns with 256-level light intensities is first projected onto the inspected surface by turns. We suppose that the gray value of a given pixel in the captured white image is I, and we set the minimal and maximal high-precision measurable gray scale thresholds I_min and I_max by experiment. The thresholds can be used to reduce the effect of the other nearby pixels influenced by the overexposed and underexposed pixels.

The next step is to calculate the mask image sequence M, which serves as the definition of the measured validity of pixels in a certain image (here, 1 represents validity and 0 represents invalidity). A bright, even pattern is projected onto the inspected surface, and the images are captured using multiple exposure times. Assume the number of multiple exposures is N. The entire set of captured images forms an image sequence I_n, with n = 1, 2, ..., N, arranged from brighter to darker (i.e., from higher exposure to lower exposure). The mask image sequence M can be obtained from the following Eqs. (1) and (2). When n = 1,

M_n(x, y) = \begin{cases} 1, & I_{\min} \le I_n(x, y) \le I_{\max} \\ 0, & \text{otherwise} \end{cases}. (1)

When n > 1,

M_n(x, y) = \begin{cases} 1, & I_{\min} \le I_n(x, y) \le I_{\max} \text{ and } M_l(x, y) = 0 \ (l = 1, 2, ..., n-1) \\ 0, & \text{otherwise} \end{cases}. (2)
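A compact sketch of Eqs. (1) and (2) follows; the function name and the boolean bookkeeping are ours, assuming the white-pattern images arrive ordered from the highest to the lowest exposure.

```python
import numpy as np

def mask_sequence(white_images, i_min, i_max):
    """Mask sequence M_n of Eqs. (1)-(2).

    A pixel is valid (1) in the first image, counted from the brightest
    exposure, whose gray value lies in [i_min, i_max]; once claimed, it
    is invalid (0) in all darker exposures, so the masks are disjoint.
    """
    masks = []
    claimed = np.zeros(white_images[0].shape, dtype=bool)
    for img in white_images:
        in_range = (img >= i_min) & (img <= i_max)
        m = in_range & ~claimed   # for n > 1 this enforces M_l = 0, l < n
        claimed |= m
        masks.append(m.astype(np.uint8))
    return masks
```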


where n and l denote sequence numbers of I, I_n(x, y) represents the intensity of pixel (x, y) of image I_n, and M_n(x, y) and M_l(x, y) are the mask values of pixel (x, y) in I_n and I_l.

Next, sinusoidal phase-shifted fringe patterns are projected onto the surface under the same series of exposures as in the former step. A sequence of fringe images [P_n^k(x, y), with k = 1, 2, 3, 4 and n = 1, 2, 3, ..., N] is acquired for the measurement. For each set n, four fringe images with a phase shift of π/2 are captured under the same exposure. In other words, each set of fringe images can be used to independently reconstruct a 3D shape for good points.

We assume that the phases of pixels whose intensities lie in the range I_min to I_max can all be calculated accurately. If all pixels meet the requirement, the algorithm comes to an end. If not, we decrease the exposure time and take images again. The overexposed pixels in a higher exposure are replaced by the corresponding pixels in a lower exposure, while the rest of the pixels remain unaltered. The algorithm stops only when the whole surface is measurable or the exposure time reaches the camera's limit. Assume the brightness of the fringe image sets decreases from one exposure to the next, i.e.,

P_n^k(x, y) \ge P_{n+1}^k(x, y). (3)

Then the final fringe image pixel used for 3D measurement is

P_f^k(x, y) = P_m^k(x, y), \quad m = \min(n), (4)

with I_{\min} \le P_m^k(x, y) \le I_{\max} (k = 1, 2, 3, 4) and P_{m-1}^1 \notin [I_{\min}, I_{\max}] or P_{m-1}^2 \notin [I_{\min}, I_{\max}] or P_{m-1}^3 \notin [I_{\min}, I_{\max}] or P_{m-1}^4 \notin [I_{\min}, I_{\max}]. Here, m = min(n) is the minimum function of n.

The obtained high-quality pixels from all exposures are then merged together to generate the complete fringe image used to acquire the 3D coordinates. Each pixel of the fringe images H^k(x, y) (k = 1, 2, 3, 4) is generated by selecting the brightest unsaturated (within the range I_min to I_max) corresponding pixel from one set of fringe images, namely,

H^k(x, y) = \sum_{n=1}^{N} M_n(x, y)\, P_n^k(x, y), \quad k = 1, 2, 3, 4. (5)

For an arbitrary point on the image, the intensity values of its four fringe images come from a certain exposure such that all intensity values of this pixel are unsaturated, while the same pixel in the previous set of images with higher exposure is saturated in at least one fringe image. From both the right and the left synthesized images obtained by Eq. (5), the phase can be obtained pixel by pixel and further converted into 3D coordinates.

The multiple exposures and HDR imaging techniques take full advantage of the possibility of retrieving the phase values of different pixels from different shots of fringe images, to ensure the capture of a high-quality phase at each point with HDR of surface reflectivity.
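Because the masks of Eqs. (1) and (2) are disjoint, the fusion of Eq. (5) reduces to a per-pixel selection; a sketch under the same assumed naming:

```python
import numpy as np

def fuse_fringe_images(fringe_sets, masks):
    """Synthesized fringe images H^k of Eq. (5).

    fringe_sets[n][k] is the k-th (k = 0..3) phase-shifted fringe image
    at exposure n (ordered bright to dark), and masks[n] is M_n from
    Eqs. (1)-(2).  Since exactly one mask claims each measurable pixel,
    the sum picks the brightest unsaturated exposure per pixel, which is
    the selection expressed by Eqs. (3) and (4).
    """
    h = [np.zeros(masks[0].shape, dtype=float) for _ in range(4)]
    for fringe, m in zip(fringe_sets, masks):
        for k in range(4):
            h[k] += m * np.asarray(fringe[k], dtype=float)
    return h  # feed h[0..3] to wrapped_phase() for the synthesized phase
```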

4. Combination of Binocular and Monocular Vision Techniques

In reality, many observed intensities include not only body reflectance but also specular reflectance, interreflectance, shadows, etc. On specular objects, where the component of the specular spike dominates, a reflected light beam does not expand but concentrates in a small region around the specular direction. Hence, certain areas may produce very different light captures for the two cameras. The light intensity received from some points is often appropriate for only one of the two cameras, which makes it impossible to match these points by binocular reconstruction. To solve this problem and further obtain the unmatched area data of the whole surface, we adopted a system that combines the binocular vision and monocular vision techniques, as described below.

The setup of this system is shown in Fig. 2. Two cameras capture the deformed fringe images from different viewing angles. Different from the traditional binocular structure, this system is also divided into two monocular structures, each composed of a single camera and the same projector. Fringe images acquired by each monocular structure are used to reconstruct some specific patches of the 3D geometry through the phase-shifting algorithm, while the binocular vision structure functions normally at the same time.

Fig. 2. System setup: the two cameras form the binocular system, while each camera paired with the central projector forms the left or right monocular system.

A. System Calibration


System calibration requires geometric calibration of the cameras and the projector. The binocular stereo visual sensors are calibrated from images of a standard checkerboard shown at a few different unknown positions [23]. The calibration of the two camera-projector monocular systems uses the same standard checkerboard as that of the binocular system. The key is to treat the projector as a camera by projector calibration, which can be carried out with the phase-shifting method. This method [24] regards the projector as a camera, and the fringe images are used as a tool to establish the correspondence between the camera pixels and the projector pixels, as shown in Fig. 3. So the projector can "capture" images like a camera, making the calibration and reconstruction of a monocular system essentially the same as those of a binocular system.

Fig. 3. (Color online) Monocular stereo vision.

In order to further obtain the complete data of the whole projector-illuminated surface, the data from the three vision systems must be well aligned and registered. So the binocular and monocular systems are to be calibrated in the same world coordinate system. In the process of calibration, the world coordinate system may be set on the left camera coordinate system for the unification of the binocular and left monocular coordinate systems. Thus, only the right monocular coordinate system needs to be converted to the world coordinate system, which has been well studied and is not discussed here.
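As an illustration only: with OpenCV, the calibration of each pair in one world frame could look like the sketch below. The corner detection, the phase-based mapping of checkerboard corners into projector coordinates (the idea of [24]), and all names here are our assumptions, not the authors' code.

```python
import cv2

def calibrate_pair(obj_pts, img_pts_a, img_pts_b, image_size):
    """Calibrate two 'cameras' (a real camera, or the projector treated
    as a camera via phase-mapped checkerboard corners), then recover the
    rotation R and translation T between them."""
    _, k_a, d_a, _, _ = cv2.calibrateCamera(obj_pts, img_pts_a, image_size, None, None)
    _, k_b, d_b, _, _ = cv2.calibrateCamera(obj_pts, img_pts_b, image_size, None, None)
    # Keep the intrinsics fixed and solve only for the relative pose.
    _, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, img_pts_a, img_pts_b, k_a, d_a, k_b, d_b, image_size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return (k_a, d_a), (k_b, d_b), R, T
```

Calibrating the left-camera/right-camera, left-camera/projector, and right-camera/projector pairs this way, with the left camera chosen as the world frame, places the binocular and both monocular systems in one coordinate system, as required above.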
the right or left monocular system self-adaptively.
B. Compensation of Unmatched Points in Binocular The Ostu thresholding method [26] is the simplest
Reconstruction approach for image segmentation through threshold-
In highly specular objects, the component of the spec- ing, which is based on the analysis of the gray-levels
ular spike dominates, and the irradiance across this histogram. Thresholding can be explained in the fol-
area may vary so greatly that there will always be lowing simple procedure. Let N be the set of natural
overexposed and underexposed areas in the captured numbers and x; y be the spatial coordinate of a
image no matter what exposure time is used. digitized image. The thresholding operation is re-
Specular reflections can cause the intensity and color garded as the partitioning of the pixels into two
of corresponding points to change dramatically ac- groups, C0 f0; 1; ; tg and C1 ft 1; t 2; ;
cording to different viewpoints, thus producing se- L 1g, where L is the number of levels in the grays-
vere matching errors for various stereo algorithms. cale image, and f0; 1; ; m 1g is a set of
In order to solve the matching problem, we pro- positive integers of gray levels. Then, from the defi-
posed that construction of monocular vision system nitions above, an image function can be defined as
can resolve the matching problem and make up the mapping:

f x; yN N ;

where f x; y is the brightness (gray level) of a pixel


with coordinates x; y. Let t be a threshold level and
B fb0; b1g be a pair of binary gray levels, i.e.,
b0; b1 . An image function f :; : at gray level t
can be derived from a binary image function
f t x; yN N B:

b0; if f x; y < t;
f t x; y : 6
b1; otherwise

From the histogram of the image, let f i be the num-


ber of ith gray level points. t is a thresholding point.
0 t is the mean for the first group C0 and 1 t is the
mean for the second group C1. Their definitions are
Fig. 3. (Color online) Monocular stereo vision. given as follows:

10 August 2011 / Vol. 50, No. 23 / APPLIED OPTICS 4561


p_i = f_i \Big/ \sum_{i=0}^{L-1} f_i, \qquad q(t) = \sum_{i=0}^{t} p_i, (7)

\mu_0(t) = \frac{\sum_{i=0}^{t} i\,p_i}{q(t)}, \qquad \mu_1(t) = \frac{\sum_{i=t+1}^{L-1} i\,p_i}{1 - q(t)}, (8)

where p_1, ..., p_i represent the histogram probabilities of the observed gray values 1, ..., I. Let \sigma_t^2 be the between-group variance:

\sigma_t^2 = q(t)\,[1 - q(t)]\,[\mu_1(t) - \mu_0(t)]^2. (9)

Then the optimal threshold t^* can be determined by maximizing the between-group variance, as shown below:

\sigma_{t^*}^2 = \max_{0 \le t \le L-1} \sigma_t^2. (10)
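Equations (7)-(10) amount to the classic Otsu computation over a histogram; a vectorized sketch (our naming, 8-bit images assumed):

```python
import numpy as np

def otsu_threshold(image, levels=256):
    """Optimal threshold t* of Eq. (10), maximizing the between-group
    variance of Eq. (9) built from the quantities of Eqs. (7) and (8)."""
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    p = hist / hist.sum()                       # p_i, Eq. (7)
    q = np.cumsum(p)                            # q(t)
    cum_mean = np.cumsum(p * np.arange(levels))
    with np.errstate(divide="ignore", invalid="ignore"):
        mu0 = cum_mean / q                                   # Eq. (8)
        mu1 = (cum_mean[-1] - cum_mean) / (1.0 - q)
        sigma2 = q * (1.0 - q) * (mu1 - mu0) ** 2            # Eq. (9)
    return int(np.argmax(np.nan_to_num(sigma2)))             # Eq. (10)
```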
Thus, the set B can be decided according to the optimal threshold. All the image points in the right/left synthesized image labeled by the values b0 and b1 are called label_r/label_l, which are used to determine whether a pixel can reconstruct the lost points reliably (here, b0 represents unreliability and b1 represents reliability). If exactly one label label_r/label_l of an unmatched pixel in the right/left synthesized images is b1, the lost pixel P_lose of the binocular system can be filled in by the reconstructed pixel P_r/P_l of the right/left monocular system whose label is b1 at that pixel. Otherwise, P_lose cannot be recovered accurately, even if both label_r and label_l equal b1, because some error produced during phase unwrapping results in these unmatched points. So these points are omitted in this case:

P_{lose} = \begin{cases} P_r, & label_r = b1,\ label_l = b0 \\ P_l, & label_r = b0,\ label_l = b1 \\ \text{omitted}, & \text{otherwise} \end{cases}. (11)
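The decision rule of Eq. (11) is easily stated in code; a literal sketch with our own names:

```python
def recover_lost_point(label_r, label_l, p_r, p_l, b0=0, b1=1):
    """Eq. (11): fill an unmatched binocular point from the one
    monocular reconstruction labeled reliable; if both or neither
    labels are b1, the point is omitted (None)."""
    if label_r == b1 and label_l == b0:
        return p_r    # only the right monocular point is reliable
    if label_r == b0 and label_l == b1:
        return p_l    # only the left monocular point is reliable
    return None       # omitted: ambiguous labels or unreliable phase
```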
In practice, there are many factors affecting the accuracy of the results of the monocular systems. These factors can be classified as uncertainty due to sensor noise, or nonlinear response and gamma distortion in the camera-projector pair. The two patches from the two monocular systems are often not aligned well with the binocular reconstruction. Some techniques for the elimination of nonlinear luminance effects in digital video projection must be applied in our system to improve the accuracy of the two camera-projector systems. It is mainly the gamma distortion that causes nonlinear luminance effects in digital video projection. Gamma correction was used to greatly improve the measurement accuracy of the phase. Since the gamma value is not easily measured, the distorted phase was corrected by minimizing the energy in the harmonic components. This method involved applying a constraint derived from a gamma model according to the number of projected patterns. The details of this method were described in [27]. These techniques make the data from the monocular and binocular vision systems merge more smoothly.

5. Experiments

The proposed method was implemented and tested in our previously developed 3D shape measurement system, as shown in Fig. 4. This system includes a Digital Light Processing projector (MD-565X) with a resolution of 1024 × 768 pixels and two digital CCD cameras (DH-HV13202UM) with an image resolution of 1280 × 1024 pixels. The camera lenses are Schneider with a focal length of 23 mm. The processor used for computation is a dual-core one with a main frequency of 1.6 GHz and 2 GB of internal memory. To ensure good measurement results, the baseline of the two cameras is about 0.7-0.95 m, with the measurement distance being 0.9-1.2 m, and the angle between the optical axis of each camera and the baseline is around 40°.

Fig. 4. Experimental system.

Our approach consists of the following steps (a condensed code sketch of the pipeline follows the list):

System calibration. Calibrate the binocular and the two monocular vision systems in the same world coordinate system with the same checkerboard.

HDR imaging. The right/left synthesized images are obtained after merging fringe images under multiple exposures, while some highly specular and dark pixels are disregarded.

Phase solution. The right/left absolute phases of all points are acquired by unwrapping the right/left phase maps obtained from the right/left synthesized images. The absolute phases serve as a matching constraint, together with the epipolar geometry, to get the corresponding points in stereo pairs.

3D coordinate acquisition of matched points in the binocular system. The corresponding points are further converted to the x, y, z coordinates of such points in the binocular system.

Lost points recovered by the reliable points from monocular reconstruction. The image segmentation technique is introduced to distinguish the suitable monocular system to reconstruct each lost point accurately.
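Read end to end, the steps above correspond to the driver sketched below; the helper functions are the ones sketched in earlier sections and remain our assumptions, not the authors' released code.

```python
def measure_hdr_surface(white, fringes, i_min, i_max):
    """One camera side of the pipeline: masks (Eqs. 1-2), fusion
    (Eq. 5), and the four-step wrapped phase; `white` and `fringes`
    are the multi-exposure captures ordered bright to dark."""
    masks = mask_sequence(white, i_min, i_max)
    h = fuse_fringe_images(fringes, masks)
    phi = wrapped_phase(*h)   # temporal unwrapping then yields the
    return phi                # absolute phase used for matching

# Matched points come from binocular triangulation on the two absolute
# phase maps; the remaining holes are filled point by point with
# recover_lost_point(), using Otsu labels from otsu_threshold().
```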


We measured a china vase. The contrast between its different colors is very large, albeit its surface is almost diffuse. Figure 5(a) shows a photograph of the measured object. Three exposures from long to short were performed in this test. One image pair under a certain exposure is seen in Figs. 5(b) and 5(d). One pair of synthesized fringe images for a certain phase shift, after merging the results from all exposure images, is illustrated in Figs. 5(c) and 5(e); these were eventually used for both binocular and monocular reconstruction.

Fig. 5. The measured vase (a), its left and right fringe images (b, d), and its left and right synthesized fringe images (c, e).

Figure 6 illustrates the process of the reconstruction. The corresponding unwrapped phase maps obtained from the right or left synthesized fringe images are shown in Figs. 6(a) and 6(b). Figures 6(c) and 6(d) show the 3D results from the two monocular systems, respectively. The 3D result from the binocular reconstruction, jointly constructed from the two unwrapped phase maps, is shown in Fig. 6(e). Obviously, some holes still existed in the binocular reconstruction even though multiple exposures were used. As shown in Fig. 6(f), the eventual 3D result combining all the output from the three systems extends the measurable areas on the object to the whole projector-illuminated field. That is, all the holes in the binocular reconstruction are filled with data from the monocular reconstructions. The regular gamma distortion remaining in the monocular systems caused the stripes seen in Figs. 6(c) and 6(d): although gamma correction was performed, some gamma distortion still existed in the camera-projector pair, whereas it cancels out in the binocular system. The zoom-in views of a small region of the binocular reconstruction and of its compensation result through monocular reconstruction are illustrated in Fig. 7. Compared with the traditional binocular result using multiple exposures, it is clear that the combined result obtains a more complete 3D point cloud of the surface and solves the problem of loss of 3D data.

Fig. 6. Left and right unwrapped phase maps of the vase (a, b), the 3D results reconstructed from the left and right monocular systems (c, d) and the binocular system (e), respectively, and the compensation result from the combination of the binocular and monocular systems (f).

Fig. 7. Zoom-in view of a small region of the binocular reconstruction result (a) and the compensation result from the combination of the binocular and monocular systems (b).


Moreover, we measured a polished, highly specular surface: the metal turbine blade shown in Fig. 8(a). Both cameras captured images under four exposures. However, it should be noted that the reconstructed 3D results after a single exposure and after four exposures from the binocular system, depicted in Figs. 8(b) and 8(c), respectively, are incomplete, since only the overlapping field of view of the two cameras could be reconstructed by the binocular system, and the matching errors resulted in the loss of matching points, leaving blank areas or holes in the final 3D shape. It is obvious that the number of uncertain points decreases as the number of exposures increases.

Fig. 8. The turbine blade (a) and the reconstructed 3D results obtained after a single exposure from the binocular system (b), four exposures from the binocular system (c), and four exposures from the binocular and monocular systems (d), respectively.

In fact, some points, although not reconstructed, have good phases either from the left camera or from the right camera. Of note, the nonpublic areas, the root as well as the top of the object, which cannot be obtained from the binocular system, have good phases in the two monocular systems. These data could be used to compensate for the incomplete areas. As illustrated in Fig. 8(d), the final reconstructed result with complete 3D point-cloud data was obtained from the binocular and monocular vision systems. A comparison of the turbine blade reconstructed by a single exposure, by four exposures, and by the proposed method, including the number of acquired points and the computation time, is shown in Table 1. These experimental results confirmed that the proposed method could successfully measure full-field specular surfaces wherever the projector could project fringe images, without increasing the computation cost too much. The rms error for all these 3D points of the turbine blade is around 0.036 mm.

Table 1. Comparison of the Reconstructed Turbine Blade by the Three Methods

Method | Computation Time (s) | Number of Acquired Points
Single exposure | 2.76 | 311053
Multiple exposure | 4.61 | 563592
Multiple exposure and combined binocular and monocular technique | 5.73 | 694726

6. Conclusions

We have presented a method that allows a conventional structured light system with dual cameras to successfully measure the 3D shape of objects with a broad range of surface reflectivity, without knowing the properties of the surface reflectance or the scene geometry. While enjoying this advantage, as compared to using monocular and binocular reconstructions separately, the high precision of the binocular vision system is maintained, because the public (shared) field of view is reconstructed by the binocular vision system. Furthermore, as long as each pixel is caught by at least one of the cameras, its correspondence in the other image (or the lack of correspondence, which indicates occlusion) can be unambiguously determined, and the holes can be made up by the monocular stereo systems. In addition, we do not attempt to avoid specular reflections like strong highlights, but rather perform accurate compensation in their presence. Thus, preprocessing of the images, such as removal of highlights, is avoided. Our approach is not limited to any specific reflectance model or to any correspondence scheme, and the method does not require changing the relative position between the system and the object, so it is easy to incorporate into existing stereo algorithms. Our experiments demonstrated that the proposed method can successfully measure the full-field surfaces of typical objects with a large dynamic range of surface reflectivity without too much computation cost. However, enhancing the self-adaptability of measurable-image acquisition may be needed to make this method more practicable.

This work is supported by the National Natural Science Foundation of China (NSFC) under grant number 60990323, and the Sichuan Province International Cooperation Fund under grant number 2009HH0023.

References

1. V. Srinivasan, H. C. Liu, and M. Halioua, "Automated phase-measuring profilometry of 3-D diffuse objects," Appl. Opt. 23, 3105–3108 (1984).
2. E. Hu and Y. He, "Surface profile measurement of moving objects by using an improved phase-shifting Fourier transform profilometry," Opt. Lasers Eng. 47, 57–61 (2009).
3. J. Vanherzeele, S. Vanlanduit, and P. Guillaume, "Processing optical measurements using a regressive Fourier series: a review," Opt. Lasers Eng. 47, 461–472 (2009).
4. D. M. Meadows, W. O. Johnson, and J. B. Allen, "Generation of surface contours by Moiré patterns," Appl. Opt. 9, 942–947 (1970).
5. A. K. Asundi, "Moiré methods using computer-generated gratings," Opt. Eng. 32, 107–116 (1993).
6. P. Aswendt, R. Höfling, and S. Gärtner, "Industrial inspection of specular surfaces using a new calibration procedure," Proc. SPIE 5856, 393–400 (2005).
7. O. A. Skydan, M. J. Lalor, and D. R. Burton, "Three-dimensional shape measurement of non-full-field reflective surfaces," Appl. Opt. 44, 4745–4752 (2005).
8. O. A. Skydan, M. J. Lalor, and D. R. Burton, "3D shape measurement of automotive glass by using a fringe reflection technique," Meas. Sci. Technol. 18, 106–114 (2007).
9. W. Li, T. Bothe, C. von Kopylow, and W. P. O. Jüptner, "Evaluation methods for gradient measurement techniques," Proc. SPIE 5457, 300–311 (2004).
10. A. Moreno, J. Campos, and L. P. Yaroslavsky, "Frequency response of five integration methods to obtain the profile from its slope," Opt. Eng. 44, 033604 (2005).
11. H. W. Guo, F. Peng, and T. Tao, "Specular surface measurement by using least squares light tracking technique," Opt. Lasers Eng. 48, 166–171 (2010).
12. D. Caspi, N. Kiryati, and J. Shamir, "Range imaging with adaptive color structured light," IEEE Trans. Pattern Anal. Machine Intell. 20, 470–480 (1998).
13. E. Horn and N. Kiryati, "Towards optimal structured light patterns," Image Vision Comput. 17, 87–97 (1999).


14. T. P. Koninckx, P. Peers, P. Dutré, and L. Van Gool, "Scene-adapted structured light," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (IEEE, 2005), pp. 611–618.
15. M. Grossberg, H. Peri, S. Nayar, and P. Belhumeur, "Making one object look like another," in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2004), pp. I-452–I-459.
16. D. Bhat and S. Nayar, "Stereo in the presence of specular reflection," in Proceedings of IEEE International Conference on Computer Vision (IEEE, 1995), pp. 1086–1092.
17. D. Scharstein and R. Szeliski, "High-accuracy stereo depth maps using structured light," in Proceedings of IEEE International Conference on Computer Vision and Pattern Recognition (IEEE, 2003), pp. 195–202.
18. R. M. Kowarschik, J. Gerber, G. Notni, and W. Schreiber, "Adaptive optical three-dimensional measurement with structured light," Opt. Eng. 39, 150–158 (2000).
19. S. Zhang and S.-T. Yau, "High dynamic range scanning technique," Opt. Eng. 48, 033604 (2009).
20. A. R. Varkonyi-Koczy, "Improved fuzzy logic supported HDR colored information enhancement," in Proceedings of IEEE International Conference on Instrumentation and Measurement Technology (IEEE, 2009), pp. 361–366.
21. J. Batlle, E. Mouaddib, and J. Salvi, "Recent progress in coded structured light as a technique to solve the correspondence problem: a survey," Pattern Recogn. 31, 963–982 (1998).
22. J. Davis, D. Nehab, R. Ramamoorthi, and S. Rusinkiewicz, "Spacetime stereo: a unifying framework for depth from triangulation," IEEE Trans. Pattern Anal. Machine Intell. 27, 296–302 (2005).
23. Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Machine Intell. 22, 1330–1334 (2000).
24. S. Zhang and P. S. Huang, "Novel method for structured light system calibration," Opt. Eng. 45, 083601 (2006).
25. S. Zhang and S.-T. Yau, "Absolute phase assisted three-dimensional data registration for a dual-camera structured light system," Appl. Opt. 47, 3134–3142 (2008).
26. N. Otsu, "A threshold selection method from gray-level histograms," IEEE Trans. Syst. Man Cybern. 9, 62–66 (1979).
27. K. Liu, Y. Wang, D. L. Lau, Q. Hao, and L. G. Hassebrook, "Gamma model and its analysis for phase measuring profilometry," J. Opt. Soc. Am. A 27, 553–561 (2010).
