ISSN (Online): 2349-7084
GLOBAL IMPACT FACTOR 0.238
DIIF 0.876
INTERNATIONAL JOURNAL OF COMPUTER ENGINEERING IN RESEARCH TRENDS
VOLUME 2, ISSUE 9, SEPTEMBER 2015, PP 608-611
IJCERT © 2015
http://www.ijcert.org
Augmenting Image Re-Ranking Using Semantic Signatures
¹Mubasheera Tazeen, ²G.Somasekhar, ³Dr.S.Prem Kumar
¹(M.Tech), CSE
²Assistant Professor, Department of Computer Science and Engineering
³Professor & HOD, Department of Computer Science and Engineering
G.Pullaiah College of Engineering and Technology, Kurnool, Andhra Pradesh, India.
Abstract:- Image re-ranking is an effective way to improve the results of web-based image search. In this paper, a new technique is proposed for web-scale image re-ranking. The proposed technique is very useful in giving specific results to users in just one click. In it, different semantic spaces for different query keywords are learned offline, independently and automatically. Semantic signatures of the images are acquired by projecting their visual features into their related semantic spaces, and these semantic signatures are compacted using hashing techniques. At the online stage, images are re-ranked by comparing their semantic signatures obtained from the visual semantic space specified by the query keyword.
Keywords: Image re-ranking, query keyword, query image, keyword expansion, visual query expansion, image search,
semantic space, semantic signature, Hashing.
—————————— ——————————
1 INTRODUCTION
Image re-ranking, as an effective way to improve the results of web-based image search, has been adopted by current commercial search engines. Given a query keyword, a pool of images is first retrieved by the search engine based on textual information. By asking the user to select a query image from the pool, the remaining images are re-ranked based on their visual similarities with the query image. A major challenge is that the similarities of visual features do not correlate well with the images' semantic meanings, which express users' search intention. On the other hand, learning a universal visual semantic space to characterize highly diverse images from the web is difficult and inefficient.

In the past few years, the internet has spread widely all over the world, and as a result the image database on the internet has become huge. Searching for the right image in such a huge database is a very difficult task. There are mainly two approaches used by internet-scale search engines.
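The one-click re-ranking step described above can be sketched as follows. This is a minimal illustration, assuming precomputed visual feature vectors and a cosine-similarity metric; the actual features and metric used by commercial engines are not specified here.

```python
import numpy as np

def rerank_by_visual_similarity(query_feature, pool_features):
    """Re-rank a pool of images by cosine similarity to the
    user-selected query image (one-click feedback).

    query_feature : 1-D visual feature vector of the clicked image
    pool_features : 2-D array, one feature vector per pooled image
    Returns pool indices, most similar first.
    """
    q = query_feature / np.linalg.norm(query_feature)
    p = pool_features / np.linalg.norm(pool_features, axis=1, keepdims=True)
    similarities = p @ q              # cosine similarity to the query
    return np.argsort(-similarities)  # descending order of similarity

# Toy example: four pooled images with 3-D features
pool = np.array([[1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])
query = np.array([1.0, 0.05, 0.0])
print(rerank_by_visual_similarity(query, pool))
```

Because the whole pool must be compared against the query at query time, the cost of this step grows with the size of the feature vectors, which motivates the compact signatures discussed later.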
The first is text-based image search. Many commercial internet-scale image search engines use this approach. They use only keywords as queries: users type query keywords in the hope of finding a certain type of images. The text-based search result is ambiguous, because keywords provided by users tend to be short and cannot describe the actual visual content of the target images. The text-based search results are therefore noisy and consist of images with quite different semantic meanings. For example, if "apple" is entered by the user into a search engine as a query keyword, the search results may belong to different categories such as "green apple," "red apple," "apple logo," "apple laptop" and "apple iphone" because of the ambiguity of the word "apple." To overcome this ambiguity of keywords, text-based image search alone is not enough; additional information has to be used to capture the user's search intention.

As a solution to this problem, the second approach, content-based image search with relevance feedback, was introduced. Here, multiple relevant and irrelevant image examples are selected by the users. Through online training, visual similarity metrics are learned from these examples, and the images are then re-ranked accordingly. However, a lot of user intervention is needed in this approach, so it is very time consuming and not appropriate for commercial web-scale search engines.

A combination of both approaches is useful, but to effectively improve the search results, online image re-ranking should limit the users' effort to just one-click feedback. A major challenge here is that the visual feature vectors are sometimes large, which slows down their matching. Also, to capture the users' search intentions, the low-level visual features and the images' high-level semantic meanings should correlate, but this does not always happen. Nevertheless, there have been many studies aiming to narrow this semantic gap.
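The core pipeline outlined in the abstract, projecting visual features into a keyword-specific semantic space and compacting the resulting signatures with hashing, can be sketched as follows. The reference-class scoring matrix and the sign-random-projection hash below are illustrative assumptions standing in for the offline-learned classifiers and the hashing technique; they are not the paper's exact components.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed offline setup for one query keyword: K reference classes
# over D-dimensional visual features, hashed down to BITS binary bits.
D, K, BITS = 64, 8, 16
W = rng.normal(size=(K, D))     # stand-in for offline per-keyword classifiers
R = rng.normal(size=(BITS, K))  # random projection used for hashing

def semantic_signature(visual_feature):
    """Project a visual feature into the keyword's semantic space:
    the vector of reference-class scores is the semantic signature."""
    return W @ visual_feature

def compact_hash(signature):
    """Compact a semantic signature into BITS bits via sign random
    projection, a simple locality-sensitive hashing scheme."""
    return (R @ signature > 0).astype(np.uint8)

def hamming_rerank(query_feature, pool_features):
    """Online stage: re-rank pooled images by Hamming distance
    between their compact semantic signatures and the query's."""
    hq = compact_hash(semantic_signature(query_feature))
    dists = [np.count_nonzero(hq != compact_hash(semantic_signature(f)))
             for f in pool_features]
    return np.argsort(dists)

pool = rng.normal(size=(5, D))
query = pool[2] + 0.01 * rng.normal(size=D)  # near-duplicate of image 2
print(hamming_rerank(query, pool)[0])
```

Comparing 16-bit codes with Hamming distance is far cheaper than matching the full D-dimensional feature vectors, which is the motivation for compacting the signatures in the first place.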