Illumination Insensitive Eigenspaces

Horst Bischof and Horst Wildenauer
Pattern Recognition and Image Processing Group
Vienna University of Technology
Favoritenstr. 9/183-2, 1040 Vienna, Austria
{bis,wilde}@prip.tuwien.ac.at

Aleš Leonardis
Faculty of Computer and Information Science
University of Ljubljana
Tržaška 25, 1001 Ljubljana, Slovenia
alesl@fri.uni-lj.si

Abstract

Variations in illumination can have a dramatic effect on the appearance of an object in an image. In this paper we propose how to deal with illumination variations in eigenspace methods. We demonstrate that the eigenimages obtained from a training set under a single illumination condition (ambient light) can be used for the recognition of objects taken under different illumination conditions. The major idea is to incorporate a set of gradient-based filter banks into the eigenspace recognition framework. This is possible because the eigenimage coefficients are invariant under linear filtering of the images (input and eigenimages). To achieve further illumination insensitivity we devised a robust procedure for coefficient recovery. The proposed approach has been extensively evaluated on a set of 2160 images and the results were compared to those of other approaches.

1. Introduction

The appearance of objects in an image depends on the combined effects of the objects' 3D geometry, pose in the scene, surface properties, and illumination conditions. All these properties contribute to the imaging process and can result in significantly different appearances of the same object in an image.

Illumination conditions in particular can have a dramatic effect on the appearance of an object. Several object recognition methods in the past have either ignored variations in illumination, performed simple normalization procedures, or searched for "illumination invariant" features in the image, such as edges.

Adini et al. [1] have demonstrated for faces that variations in illumination cause larger variations in the image than different subjects seen under the same illumination conditions. This is a clear indication that object recognition cannot be performed reliably without taking the variations in illumination into account, so as to prevent the within-class variation from becoming much larger than the in-between-class variation. Appearance-based approaches employing the eigenspace framework have attacked the problem of illumination variation by sampling [11], i.e., by generating many views of the object under different illumination conditions. However, in general, an object can produce so many different images that it is not clear how to sample all of them. Farid and Adelson [6] demonstrated that the effects of reflections and illumination can be separated by Independent Component Analysis; however, this method cannot be used for object recognition.

Shashua [12] has shown that the image space of a 3D Lambertian surface (ignoring cast shadows) is determined by a basis of three images. In the spirit of appearance-based methods, Belhumeur and Kriegman [3] have developed the illumination cone method. In [7] it was shown that the illumination cone can be used for face recognition from a single pose. The major problem with this approach is that when the training images contain multiple light sources, the illumination cone cannot be determined accurately. In a recent paper, Chen et al. [5] have shown that even for objects with Lambertian surfaces there are no discriminative functions of
images of objects that are illumination invariant. However, it was demonstrated in the same paper that from the image gradient a measure which is insensitive to illumination can be developed and used for probabilistic object recognition. In a similar spirit, Jacobs et al. [9] developed a measure based on the ratio of two images for comparing images under variable illumination. It was stated that this measure can be interpreted as a simple comparison between image edges. Similar results have been obtained in [14]. From these studies we can conclude that edges and gradients are useful measures for dealing with variations in illumination. However, in the approaches mentioned above, the authors did not take multiple poses into account. In particular, we would like to be able to deal with illumination variations in cases when a set of images is compressed in an eigenspace and the input objects to be recognized are taken under various illumination conditions. This would enable us to perform recognition in an efficient manner.

In this paper we propose an approach based on eigenimages filtered by gradient-based operators. Namely, we show how to recover eigenspace coefficients in a global eigenspace representation from the responses of local filter banks. Based on that, we discuss how this approach can be used for illumination insensitive object recognition. This work extends the approaches mentioned above in the direction that we can also deal with multiple instances of objects in multiple poses in a compressed representation.

To be more specific, let $y = [y_1, \dots, y_m]^T \in \mathbb{R}^m$ be an individual image, and let $Y = \{y_1, \dots, y_n\}$ be a set of images. To simplify the notation we assume $Y$ to be normalized, having zero mean. The set of eigenvectors obtained from $Y$ is denoted by $E = \{e_1, \dots, e_n\}$; $e_i = [e_{i1}, \dots, e_{im}]^T \in \mathbb{R}^m$. Let us denote the PCA transform by $PCA_p(Y) = E_p$, where $p$ indicates that usually only $p$, $p < n$, eigenvectors (those with the largest eigenvalues) are needed to represent the $y_i$ to a sufficient degree of accuracy as a linear combination of eigenvectors $e_i$:

$$\tilde{y} = \sum_{i=1}^{p} a_i e_i \,, \qquad (1)$$

where $\tilde{y}$ denotes the approximation to $y$.

In this paper we exploit the idea of filtering the eigenvectors by a set of linear filters. Fig. 1 schematically depicts the two different directions that one can take. We can either (as in our approach) first compress the set of images using PCA and then perform the filtering (shown on the left), or, alternatively, first transform the set of images using filter banks and then compress the resulting filtered images by PCA (shown on the right).¹ The latter approach has been systematically investigated in [8, 14]. The authors noted that the representation obtained by first filtering and then performing PCA is not efficient (in terms of the number of eigenimages needed). This can be explained by the fact that the filtered (edge) images are less correlated; therefore, more eigenimages are needed for an accurate representation.

[Figure 1. Different representations obtained by filtering and PCA.]

The representation based on first performing PCA and then filtering has the additional advantage of increased flexibility. Namely, once we have performed the PCA we can apply any linear filter without recalculating the eigenimages (therefore we also do not need access to the original training set). In the other approach, any time we require a new filter, we first have to filter the original training images and then perform the PCA again.

The main aim of this paper is to exploit the filtered eigenspace representation to obtain illumination insensitivity in object recognition. In section 2 we show how to obtain the coefficients of the eigenimage expansion from the filtered images. We demonstrate that by taking a filter bank of gradient filters, one can already achieve illumination insensitivity to a large degree. As an extension, we show how the coefficients can be robustly recovered in order to deal with cluttered background and other types of non-Gaussian noise. In section 3 we extensively evaluate the approach on object recognition under different illuminations and compare the results to those obtained with other approaches. In section 4 we discuss how this global-local representation can be used to further enhance the current capabilities of recognition methods based on eigenimage representations.

2. Eigenimages and Local Filters

In the standard approach, the parameters $a_i$ of the eigenimage expansion are obtained by projecting an image, in the form of a data vector $x$, onto the eigenspace:

$$a_i(x) = \langle x, e_i \rangle = \sum_{j=1}^{m} x_j e_{ij} \,, \quad i = 1, \dots, p. \qquad (2)$$

$a(x) = [a_1(x), \dots, a_p(x)]^T$ is the point in the eigenspace. Let us call the $a_i(x)$ the coefficients of $x$. In the following, we simply write $a$ instead of $a(x)$.

Instead of calculating the coefficients by a projection, we can also calculate the same coefficients by solving a system of linear equations [10]. The idea is based on a simple observation: namely, that Eq. (1) is valid point-wise. Thus, we only need $k \geq p$ points $r = (r_1, \dots, r_k)$ and simply solve the following system of linear equations²:

$$x(r_j) = \sum_{i=1}^{p} a_i \, e_i(r_j) \,, \quad j = 1, \dots, k. \qquad (3)$$

¹ Note that in general $f * PCA(Y) \neq PCA(f * Y)$; $*$ denotes the usual 2D convolution.
² Please note that when all the points are used, this is equivalent to Eq. (2).
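To make Eqs. (1)-(3) concrete, here is a minimal NumPy sketch (our illustration, not code from the paper; the synthetic data and all variable names are ours). It builds an eigenspace by PCA and recovers the coefficients of an image once by projection, Eq. (2), and once from $k \geq p$ randomly chosen pixels, Eq. (3):

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, p = 40, 1024, 18                    # images, pixels, kept eigenvectors
Y = rng.standard_normal((n, m))           # toy training set (stand-in data)
Y -= Y.mean(axis=0)                       # normalize to zero mean

# PCA via SVD: the rows of Vt are orthonormal eigenvectors e_i; keeping
# the p with the largest singular values gives PCA_p(Y) = E_p.
_, _, Vt = np.linalg.svd(Y, full_matrices=False)
E = Vt[:p]                                # shape (p, m)

# An input lying exactly in the eigenspace, x = sum_i a_i e_i (Eq. (1)).
a_true = rng.standard_normal(p)
x = E.T @ a_true

# Eq. (2): coefficients by projection, a_i = <x, e_i>.
a_proj = E @ x

# Eq. (3): the expansion holds point-wise, so k >= p pixels r_1..r_k
# already determine the same coefficients via a small linear system.
k = 3 * p
r = rng.choice(m, size=k, replace=False)
a_pts, *_ = np.linalg.lstsq(E[:, r].T, x[r], rcond=None)

print(np.allclose(a_proj, a_true), np.allclose(a_pts, a_true))  # True True
```

Because Eq. (3) only needs the eigenimages at the selected pixels, the same mechanism still works after the eigenimages have been filtered or subsampled, which is exactly what the next section exploits.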
This has the additional advantage that the eigenimages need not be orthogonal.

We take Eq. (3) as the starting point for the derivation of a new approach which preserves all the advantages of the eigenspace method and augments it with the rich local image structure. Due to the linearity of the equation, the following property holds:

$$(f * x)(r_j) = \sum_{i=1}^{p} a_i \, (f * e_i)(r_j) \,. \qquad (4)$$

This equation states that we can calculate the coefficients $a_i$ from the filtered eigenimages and the filtered input image. We have demonstrated [4] that the coefficients recovered from filtered and subsampled images can be calculated and remain stable, despite the fact that due to filtering and subsampling the eigenimages are no longer orthogonal.

We can now go one step further. Let $F = \{f_1, \dots, f_q\}$ denote a set of linear filters, $f * X = \{f * x_1, \dots, f * x_n\}$ the filtering of a set of images with a single filter, and $F * X = \{f_1 * X, \dots, f_q * X\} = \{f_1 * x_1, \dots, f_q * x_n\}$ the filtering of a set of images with a set of filters. Using a set of filters $F$ we can construct the system of equations

$$(f_s * x)(r_j) = \sum_{i=1}^{p} a_i \, (f_s * e_i)(r_j) \,, \quad s = 1, \dots, q, \; j = 1, \dots, k. \qquad (5)$$

We now have the following options to recover the coefficient vector $a$:

1. we can recover the coefficients from $k$ points;
2. we can recover the coefficients from a single point, using the $q$ filter responses at that point;
3. we can use a combination of the above two options.

As preliminary experiments have shown, we get the most reliable (numerically stable) results using the third option. We now have the freedom to choose a bank of linear filters which are insensitive to illumination variations. From [5, 9] we can conclude that gradient-based filters achieve the desired effect. In particular, we have selected a set of steerable filters [13]. Figure 2 depicts 8 eigenimages filtered with 6 derivative filters in different orientations.

[Figure 2. The first row shows 8 eigenimages. The rows below show, for each of the 8 eigenimages, 6 filter responses.]

Fig. 3 demonstrates the insensitivity of the filtered eigenspaces to illumination changes. Though there is a significant change in illumination conditions with respect to the images acquired in the training phase (see also section 3), we can still recover the correct coefficients, as can be seen from the reconstruction in Fig. 3(c).

[Figure 3. Demonstration of illumination insensitivity of filtered eigenspaces: (a) original image; (b) image illuminated from the right; (c) reconstruction obtained from (b).]
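The following sketch shows how the stacked system of Eq. (5) can be assembled and solved. It is again our illustration: the paper uses steerable filters [13], for which directional first derivatives computed with np.gradient serve as a simple linear stand-in here. Both the eigenimages and the input are filtered with the same $q$-filter gradient bank, all responses are sampled at the same $k$ random pixels (option 3 above), and one least-squares system is solved:

```python
import numpy as np

rng = np.random.default_rng(1)
h = w = 32                                   # toy image size
m, p, q, k = h * w, 8, 6, 60                 # pixels, eigenimages, filters, points

def grad_bank(img2d, angles):
    # Directional first derivatives: a simple linear stand-in for the
    # steerable derivative filters of [13].
    gy, gx = np.gradient(img2d)
    return [np.cos(t) * gx + np.sin(t) * gy for t in angles]

angles = np.linspace(0.0, np.pi, q, endpoint=False)

# Toy eigenimages E (p x m) and an input x lying exactly in their span.
E = np.linalg.qr(rng.standard_normal((m, p)))[0].T
a_true = rng.standard_normal(p)
x = E.T @ a_true

# Filter eigenimages and input with the same bank, sample all q
# responses at the same k random pixels, and stack the resulting
# q blocks of k equations each into one system, Eq. (5).
r = rng.choice(m, size=k, replace=False)
fe = [grad_bank(E[i].reshape(h, w), angles) for i in range(p)]
fx = grad_bank(x.reshape(h, w), angles)
A = np.vstack([np.column_stack([fe[i][s].ravel()[r] for i in range(p)])
               for s in range(q)])           # shape (q*k, p)
b = np.concatenate([fx[s].ravel()[r] for s in range(q)])
a_rec, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(a_rec, a_true))            # True: coefficients recovered
```

Since filtering is linear, the filtered input is exactly the same linear combination of the filtered eigenimages, which is why an ordinary least-squares solve suffices in this noise-free setting.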

2.1 Enhanced Robustness

The method, as presented so far, cannot deal with clutter in the background (i.e., when there are many additional edges in the background) or with occlusions of the object, since both generate additional image gradients. The reason is that the coefficients are calculated in a least-squares sense. Also, in the case of severe illumination changes, additional gradients are generated by cast shadows and highlights. Therefore, we have to estimate the coefficients in a robust way. This is achieved by solving Eq. (3) with a two-fold robust procedure [10] which combines a robust equation solver (an α-trimmed estimator or an M-estimator) with a RANSAC-like competition among multiple estimates.

The procedure executes the following three steps (for more details see [10]; a compact sketch follows the list):

1. Generate multiple hypotheses by randomly selecting $k$ points from the image.
2. For each hypothesis, solve the resulting system of equations robustly (with the α-trimmed estimator or M-estimator).
3. Let the hypotheses compete in a RANSAC-like manner and select the best estimate.
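The sketch below is a simplified reading of these steps, not the procedure of [10] itself: a trimmed least-squares refit stands in for the α-trimmed solver, and a trimmed-error competition stands in for the selection step. The rows of A hold the filtered eigenimage responses of Eq. (5) at all candidate points, and b the corresponding responses of the filtered input:

```python
import numpy as np

def robust_coefficients(A, b, n_hyp=50, k=300, trim=0.8, rng=None):
    """Simplified hypothesize-and-select recovery (sketch).
    A: (N, p) filtered-eigenimage responses at all candidate points.
    b: (N,)  corresponding responses of the filtered input image."""
    rng = rng or np.random.default_rng()
    N, p = A.shape
    best_a, best_err = None, np.inf
    for _ in range(n_hyp):
        # Step 1: hypothesis from a random subset of k points.
        idx = rng.choice(N, size=min(k, N), replace=False)
        a, *_ = np.linalg.lstsq(A[idx], b[idx], rcond=None)
        # Step 2: trimmed refit; drop the worst residuals and resolve.
        res = np.abs(A[idx] @ a - b[idx])
        keep = idx[np.argsort(res)[: int(trim * len(idx))]]
        a, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
        # Step 3: RANSAC-like competition; score each hypothesis by its
        # trimmed error over all points and keep the best one.
        res_all = np.sort(np.abs(A @ a - b))
        err = np.mean(res_all[: int(trim * N)])
        if err < best_err:
            best_a, best_err = a, err
    return best_a
```

The defaults n_hyp=50 and k=300 mirror the parameter values reported for the experiments in section 3.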
With this procedure one can tolerate a significant amount of (non-Gaussian) noise and considerable occlusions, and one does not need to pre-segment the images.

Fig. 4 shows the result of using the robust procedure for calculating the coefficients of the same object as in Fig. 3(a). Note that in this case, despite the fact that the frontal part of the image is already saturated, we were able to calculate the coefficients accurately, as can be seen from Fig. 4(b). Using the standard method (Eq. (2)) to recover the coefficients, we get the reconstruction depicted in Fig. 4(c). Fig. 5 depicts another example, where more than 60% of the figure is in shadow. In this case the reconstruction is not as good as in the previous case, but it is still sufficiently good for a reliable recognition.

[Figure 4. Demonstration of robustness and illumination insensitivity of filtered eigenspaces (specular illumination): (a) severe illumination from the front; (b) robustly reconstructed image obtained from (a); (c) reconstruction of (a) using the standard method.]

[Figure 5. Demonstration of robustness and illumination insensitivity of filtered eigenspaces (cast shadow): (a) lion with shadow; (b) robustly reconstructed image obtained from (a); (c) reconstruction of (a) using the standard method.]

3. Experimental Results

We have extensively evaluated our algorithm on a database of 5 objects (Fig. 6). Each object is represented by views from 72 orientations, obtained by placing the object on a turntable and taking a view every 5 degrees. 36 orientations of each object were used to build the eigenspace. The objects were acquired under ambient illumination. Unless specified otherwise, in all experiments reported in the paper we used an eigenspace of dimension 18, the number of hypotheses generated was 50, and for each hypothesis 300 points were sampled. The number of filters that we used was 6. As has been shown in [10], under certain conditions, variations in the number of hypotheses and the number of points do not change the results significantly. One should note that, from the point of view of solving Eq. (5), we can decrease the number of filters if we increase the number of points, and vice versa.

[Figure 6. A set of five different objects.]

Evaluation of Coefficients. To obtain a systematic performance evaluation, we acquired images of all objects in a setup similar to that used for the training images. The difference was that this time we used a point light source which illuminated the objects from the upper right (Fig. 7). We then used our robust filtered PCA method to recover the coefficients of the eigenimage expansion and compared them to the coefficients obtained by the standard eigenspace method (Eq. (2)). Since our method is based on a random selection of points, we repeated the experiment 10 times to also see the variation in the obtained solutions. Fig. 8 shows the statistics of this experiment. Image numbers are plotted on the x-axis and the mean squared coefficient errors of the two methods are plotted on the y-axis. For the robust filtered PCA method we also provide an error bar with the minimum and the maximum of the obtained error. From this plot one can see that in all cases our method performed better than the standard one. Also note that the variation of the produced results is rather small, so that even our worst-case result is always better than the result produced by the standard method.
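The error statistics described above reduce to a few lines of code. In the hypothetical harness below, recover_standard stands for coefficient recovery via Eq. (2) and recover_robust for the robust filtered procedure; both names, and the 10-trial protocol with min/max bounds for the error bars, mirror the description rather than any released code:

```python
import numpy as np

def coefficient_errors(a_true, recover_standard, recover_robust, trials=10):
    """Hypothetical harness mirroring the evaluation above: mean squared
    coefficient error of the standard method (deterministic) and the
    mean/min/max over `trials` runs of the robust method, whose result
    varies with the random point selection."""
    e_std = np.mean((recover_standard() - a_true) ** 2)
    e_rob = np.array([np.mean((recover_robust() - a_true) ** 2)
                      for _ in range(trials)])
    return e_std, e_rob.mean(), e_rob.min(), e_rob.max()
```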
[Figure 7. Test set of the "rhino" object under different illuminations.]

[Figure 8. Comparison between the standard and our robust method (x-axis: image number; y-axis: mean squared coefficient error).]

Recognition Evaluation. In this experiment we evaluated the recognition performance under different illumination conditions. For the training set we used the 5 objects under 36 rotations, acquired under ambient illumination conditions. These training images were represented in an eigenspace of dimension 30. To test the illumination insensitivity, we built a test database consisting of 2160 images (Fig. 9) in which a point light source was systematically pointed at the objects from different directions. The intensity of the point light source was also varied (e.g., Figs. 3, 4). In order to also test the ability to cope with larger shadows, we placed another object between the test object and the light source such that a shadow was cast on the test object (Fig. 5).

[Figure 9. Illumination conditions tested in the recognition experiment.]

In the following set of experiments we compared the recognition rates of four methods, namely: the standard method (using Eq. (2)); the robust filtered PCA method; the standard method without using the first three eigenvectors for the recognition; and the robust filtered PCA method without using the first three eigenvectors for the recognition.

The results are summarized in Table 1. It is evident that our method outperforms the standard one. In [2] it has been shown that the standard method achieves a certain degree of illumination insensitivity when the first three eigenvectors are not used in the recognition. While in this case the recognition rate of the standard method does indeed improve, it is still below the recognition rate obtained by our method. It is interesting to note that the recognition rate of the robust filtered PCA method does not improve when the first three eigenvectors are dropped, which indicates that the filtered images contain the appropriate discriminative information.

4. Summary and Conclusions

In this paper we proposed how to deal with illumination variations in the eigenspace recognition framework. We demonstrated that eigenimages obtained from a training set under a single illumination condition (ambient light) can be used for the recognition of objects taken under different illumination conditions. The major idea was to incorporate a set of gradient-based filter banks into the eigenspace recognition framework. This can be achieved since the eigenimage coefficients are invariant for linearly filtered images (input and eigenimages). To achieve further illumination insensitivity we devised a robust procedure for coefficient recovery, which considerably improved the overall performance of the method. The proposed method was extensively evaluated on a database of 2160 test images with varying illumination. We have demonstrated that we can achieve a recognition rate which is more than 25% better
in comparison with the standard method (Eq. (2)) and more than 10% better than the standard method without the first three eigenvectors.

The method that we have proposed to achieve illumination insensitivity also establishes a link between global eigenspace representations and local representations by a bank of filters. This duality of the local and global object representations opens up many avenues of further research that we are currently pursuing.

Table 1. Comparison of the recognition rates for different methods. The tables show the confusion matrix and the achieved recognition rate (%) for the individual objects. For those objects that were correctly recognized we also calculated the mean absolute error of the pose estimation (ang.). [The table is only partially legible in the source scan; unrecoverable entries are marked "…".]

Robust filtered method - all eigenvectors used.
obj. |   1    2    3    4    5 |     % |  ang.
   1 | 360    0    0    0    0 | 100.0 |  5.25
   … |   …    …    …    …    … |     … |     …
avg. |                         |     … |     …

Standard method - all eigenvectors used.
obj. |   1    2    3    4    5 |     % |  ang.
   1 | 141    0   14   26  179 |  39.2 | 10.50
   … |   …    …    …    …    … |     … |     …
avg. |                         |  70.3 |  8.53

Robust filtered method - w/o first three eigenvectors.
obj. |   1    2    3    4    5 |     % |  ang.
   1 | 359    1    0    0    0 |  99.7 |  4.87
   … |   …    …    …    …    … |     … |     …

Standard method - w/o first three eigenvectors.
obj. |   1    2    3    4    5 |     % |  ang.
   1 | 318   24    4    0   14 |     … |     …
   2 |  12  272   11    0   29 |     … | 22.24
   3 |   3    6  446    0   49 |     … |     …
   4 |  30   73    2  251    4 |     … |  4.18
   5 |  18    6   23    0  565 |     … |     …
avg. |                         |  85.8 |  8.38

Acknowledgments

H. B. and H. W. were supported by a grant from the Austrian National Fonds zur Förderung der wissenschaftlichen Forschung (P13981-INF). H. B. acknowledges the support of the K plus Competence Center ADVANCED COMPUTER VISION. A. L. acknowledges the support of the Ministry of Science and Technology of the Republic of Slovenia (Project J2-0414).

References

[1] Y. Adini, Y. Moses, and S. Ullman. Face recognition: The problem of compensating for changes in illumination direction. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7):721-732, July 1997.
[2] P. Belhumeur, J. Hespanha, and D. Kriegman. Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection. IEEE Trans. on Pattern Analysis and Machine Intelligence, 19(7):711-720, July 1997.
[3] P. Belhumeur and D. Kriegman. What is the set of images of an object under all possible illumination conditions? Int. J. Computer Vision, 28(3):245-260, July 1998.
[4] H. Bischof and A. Leonardis. Robust recognition of scaled eigenimages through a hierarchical approach. In Proc. of CVPR'98, pages 664-670, 1998.
[5] H. Chen, P. Belhumeur, and D. Jacobs. In search of illumination invariants. In CVPR'00, pages I:254-261, 2000.
[6] H. Farid and E. Adelson. Separating reflections and lighting using independent components analysis. In CVPR'99, pages I:262-267, 1999.
[7] A. Georghiades, D. Kriegman, and P. Belhumeur. Illumination cones for recognition under variable lighting: Faces. In Proc. of CVPR'98, pages 52-58, 1998.
[8] J. Hornegger, H. Niemann, and R. Risack. Appearance-based object recognition using optimal feature transforms. Pattern Recognition, 33(2):209-224, 2000.
[9] D. Jacobs, P. Belhumeur, and R. Basri. Comparing images under variable illumination. In Proceedings of the CVPR'98, pages 610-617, 1998.
[10] A. Leonardis and H. Bischof. Robust recognition using eigenimages. Computer Vision and Image Understanding, 78(1):99-118, 2000.
[11] H. Murase and S. Nayar. Illumination planning for object recognition using parametric eigenspaces. IEEE Trans. on Pattern Analysis and Machine Intelligence, 16(12):1219-1227, 1994.
[12] A. Shashua. On photometric issues in 3D visual recognition from a single 2D image. Int. J. Computer Vision, 21:99-122, 1997.
[13] E. Simoncelli and H. Farid. Steerable wedge filters for local orientation analysis. IEEE Trans. on Image Processing, pages 1-15, 1996.
[14] A. Yilmaz and M. Gokmen. Eigenhill vs. eigenface and eigenedge. Pattern Recognition, 34(1):181-184, January 2001.
