
Web Image Re-Ranking Using Query-Specific Semantic Signatures

1. Introduction:
The growth of the internet and digital technologies has created a need for systems that
organize the abundance of available digital images for easy categorization and retrieval.
The need for a versatile, general-purpose image retrieval (IR) system for very large image
databases has attracted the attention of many researchers at information-technology giants
and leading academic institutions. IR techniques encompass diverse areas, viz. image
segmentation, image feature extraction, representation, mapping of features to semantics,
storage and indexing, and image similarity-distance measurement and retrieval, making IR
system development a challenging task. Visual information retrieval requires a wide
variety of knowledge. The clues that must be pieced together when retrieving images
from a database include not only elements such as color, texture, and shape, but also
the relation of the image contents to alphanumeric information and the higher-level
meaning of the objects in the scene.
Image re-ranking, as an effective way to improve the results of web-based
image search, has been adopted by current commercial search engines. Given a query
keyword, a pool of images is first retrieved by the search engine based on textual
information. By asking the user to select a query image from the pool, the remaining
images are re-ranked based on their visual similarities with the query image. A major
challenge is that the similarities of visual features do not correlate well with the
images' semantic meanings, which reflect the user's search intention. On the other hand,
learning a universal visual semantic space to characterize the highly diverse images on
the web is difficult and inefficient. In this paper, we propose a novel image re-ranking
framework, which automatically learns, offline, different visual semantic spaces for
different query keywords through keyword expansions. The visual features of images
are projected into their related visual semantic spaces to obtain semantic signatures. At
the online stage, images are re-ranked by comparing their semantic signatures
obtained from the visual semantic space specified by the query keyword. The new
approach significantly improves both the accuracy and efficiency of image re-ranking.
The original visual features of thousands of dimensions can be projected to
semantic signatures as short as 25 dimensions. Experimental results show that a 20% to
35% relative improvement has been achieved in re-ranking precision compared with
state-of-the-art methods.

info@ocularsystems.in
Mobile No : 7385350430
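The signature idea above can be sketched in a few lines. The dimensions, random weights, and linear classifiers below are all invented for illustration; in the actual framework the reference-class classifiers are learned offline per query keyword.

```python
import numpy as np

def semantic_signature(feature, classifier_weights):
    """Project a raw visual feature vector onto the reference-class
    classifiers of a query keyword; the vector of classifier scores
    is the short semantic signature used for online re-ranking."""
    return classifier_weights @ feature

rng = np.random.default_rng(0)
# 25 reference classes, 1000-dimensional visual features (illustrative sizes).
weights = rng.standard_normal((25, 1000))

query_image = rng.standard_normal(1000)
candidates = [rng.standard_normal(1000) for _ in range(5)]

q_sig = semantic_signature(query_image, weights)
# Re-rank candidates by distance between 25-dim signatures, not raw features.
ranked = sorted(
    candidates,
    key=lambda f: np.linalg.norm(semantic_signature(f, weights) - q_sig),
)
print(q_sig.shape)  # (25,)
```

The point of the projection is efficiency: online matching compares 25-dimensional signatures instead of thousand-dimensional visual features.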

2. Literature Survey:
1. E. Bart and S. Ullman. Single-example learning of novel classes using representation
by similarity. In Proc. BMVC, 2005.
Summary : We develop an object classification method that can learn a novel class from a single training
example. In this method, experience with already learned classes is used to facilitate the
learning of novel classes. Our classification scheme employs features that discriminate
between class and non-class images. For a novel class, new features are derived by selecting
features that proved useful for already learned classification tasks, and adapting these features
to the new classification task. This adaptation is performed by replacing the features from
already learned classes with similar features taken from the novel class. A single example of a
novel class is sufficient to perform feature adaptation and achieve useful classification
performance. Experiments demonstrate that the proposed algorithm can learn a novel class
from a single training example, using 10 additional familiar classes. The performance is
significantly improved compared to using no feature adaptation. The robustness of the
proposed feature adaptation concept is demonstrated by similar performance gains across 107
widely varying object categories.

2. C. Lampert, H. Nickisch, and S. Harmeling. Learning to detect unseen object classes by
between-class attribute transfer. In Proc. CVPR, 2009.

Summary :
In this paper, we tackle the problem by introducing attribute-based classification. It performs
object detection based on a human-specified high-level description of the target objects
instead of training images. The description consists of arbitrary semantic attributes, like
shape, color or even geographic information. Because such properties transcend the specific
learning task at hand, they can be pre-learned, e.g. from image datasets unrelated to the
current task. Afterwards, new classes can be detected based on their attribute representation,
without the need for a new training phase. In order to evaluate our method and to facilitate
research in this area, we have assembled a new largescale dataset, Animals with Attributes,

of over 30,000 animal images that match the 50 classes in Osherson's classic table of how
strongly humans associate 85 semantic attributes with animal classes.
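The attribute-transfer idea summarized above can be illustrated with a minimal sketch: each unseen class is described by a binary attribute vector, attribute detectors score an image, and the class whose attribute description best agrees with the detected attributes wins. The class names, attributes, and scores below are invented for illustration.

```python
import numpy as np

# Hypothetical attribute descriptions for unseen classes (1 = has attribute).
# Attributes: [striped, has_hooves, aquatic]
class_attributes = {
    "zebra":   np.array([1, 1, 0]),
    "dolphin": np.array([0, 0, 1]),
}

def classify_by_attributes(attribute_scores, class_attributes):
    """Assign the class whose attribute vector best matches the
    per-attribute detector scores (higher score = attribute present)."""
    best_class, best_match = None, -np.inf
    for name, attrs in class_attributes.items():
        # Reward detected attributes the class has, penalize ones it lacks.
        match = np.sum(np.where(attrs == 1, attribute_scores, -attribute_scores))
        if match > best_match:
            best_class, best_match = name, match
    return best_class

# Detector outputs for an image: strongly striped, hooved, not aquatic.
scores = np.array([0.9, 0.8, 0.1])
print(classify_by_attributes(scores, class_attributes))  # zebra
```

Because the attribute detectors are pre-learned on unrelated data, no training images of the novel class are needed.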
3. G. Cauwenberghs and T. Poggio. Incremental and decremental support vector
machine learning. In Proc. NIPS, 2001.
4. J. Cui, F. Wen, and X. Tang. Intentsearch: Interactive on-line image search re-ranking.
In Proc. ACM Multimedia, 2008.
5. N. Dalal and B. Triggs. Histograms of oriented gradients for human detection.
In Proc. CVPR, 2005.

3. Problem Statement:
We propose a novel image re-ranking framework that automatically learns, offline,
different visual semantic spaces for different query keywords through keyword
expansions. The framework is extended with personalized search using agglomerative clustering.

4. Objective

To design the front end and store the cumulative results in the back end.
To develop and design code for semantic signatures of images.
To test the system and implement the algorithm.

5. Methodology:
Discovery of Reference Classes :
- Keyword Expansion
- Image retrieval
- Remove outlier images
- Remove redundant references
Query-specific reference classes :
Classifiers of reference classes :
Mining the keywords associated with images :
Creating semantic signatures :
Text-based image search :
Re-ranking based on semantic signatures :
Agglomerative clustering for personalized image search :
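The agglomerative clustering step in the methodology can be sketched as a bottom-up merge of semantic signatures. This is a minimal centroid-linkage version with toy 25-dimensional signatures; the data and cluster count are invented for illustration.

```python
import numpy as np

def agglomerative_cluster(points, n_clusters):
    """Bottom-up clustering: start with one cluster per image and
    repeatedly merge the two clusters whose centroids are closest."""
    clusters = [[i] for i in range(len(points))]
    while len(clusters) > n_clusters:
        best = None
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(points[clusters[a]].mean(axis=0)
                                   - points[clusters[b]].mean(axis=0))
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))
    return clusters

rng = np.random.default_rng(1)
# Toy semantic signatures: two well-separated groups of images,
# standing in for two different search intentions under one keyword.
signatures = np.vstack([rng.normal(0.0, 0.1, (4, 25)),
                        rng.normal(1.0, 0.1, (4, 25))])
groups = agglomerative_cluster(signatures, n_clusters=2)
print([sorted(g) for g in groups])  # [[0, 1, 2, 3], [4, 5, 6, 7]]
```

Grouping images with similar signatures this way lets the personalized search surface one cluster per user intention instead of a single mixed ranking.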
6. System Design and Architecture:
The diagram of the proposed approach is shown below.

7. Theoretical result:
The images for testing the performance of re-ranking and the images of the reference classes
can be collected at different times and from different search engines. Given a query keyword,
1,000 images are retrieved from the whole web using a given search engine. As summarized in
Table 1, we create three data sets to evaluate the performance of our approach in different
scenarios. In data set I, 120,000 testing images for re-ranking were collected from Bing
Image Search using 120 query keywords in July 2010. These query keywords cover diverse
topics including animals, plants, food, places, people, events, objects, scenes, etc. The images
of the reference classes were also collected from Bing Image Search around the same time.
Data set II uses the same testing images for re-ranking as data set I; however, its images of
the reference classes were collected from Google Image Search, also in July 2010.
8. Future work/ Own Contributions:
We propose a new algorithm for personalized image search using agglomerative clustering.
9. References:
1. E. Bart and S. Ullman. Single-example learning of novel classes using representation
by similarity. In Proc. BMVC, 2005.
2. Y. Cao, C. Wang, Z. Li, L. Zhang, and L. Zhang. Spatial-bag-of-features.
In Proc. CVPR, 2010.
3. G. Cauwenberghs and T. Poggio. Incremental and decremental support vector
machine learning. In Proc. NIPS, 2001.
4. J. Cui, F. Wen, and X. Tang. Intentsearch: Interactive on-line image search re-ranking.
In Proc. ACM Multimedia, 2008.
5. N. Dalal and B. Triggs. Histograms of oriented gradients for human detection.
In Proc. CVPR, 2005.
