DOI: 10.4018/IJHISI.2018040102
International Journal of Healthcare Information Systems and Informatics
Volume 13 • Issue 2 • April-June 2018
Copyright © 2018, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Medical Image Retrieval in
Healthcare Social Networks
Riadh Bouslimi, ISG, Tunis, Tunisia
Mouhamed Gaith Ayadi, ISG, Tunis, Tunisia
Jalel Akaichi, ISG, Tunis, Tunisia
ABSTRACT
In this article, the authors present a multimodal retrieval model for searching medical images based on
multimedia information extracted from a collaborative radiological social network. The opinions
shared on a medical image in a medico-social network are textual descriptions which in most cases
require cleaning against a medical thesaurus. In addition, the authors represent both the textual
description and the medical image as TF-IDF weight vectors using a "bag-of-words" approach. They then use
latent semantic analysis to establish relationships between textual terms and visual terms in the shared
opinions on the medical image. The model is evaluated against the ImageCLEFmed baseline, which
serves as the ground truth for the experiments. The authors have conducted numerous experiments with
different descriptors and many combinations of modalities. The analysis of results shows that
combining the two modalities improves the performance of a retrieval system over one based on a
single modality, whether visual or textual.
KEYWORDS
Bag-of-Words, Latent Semantic Analysis, Medical Image Retrieval, Medical Social Network, Multimodal Fusion
1. INTRODUCTION
The explosion of medical information on the Internet over the last ten years has made information
seeking for both textual and visual objects a very active research topic. In the medical domain, in
particular, the vast volumes of visual information produced every day in hospitals, in connection
with digital Picture Archiving and Communication Systems (PACS), make advanced search methods
imperative, i.e., moving beyond conventional text-based searching towards combining both text and
visual features in search queries. Indeed, biomedical information comes in several forms: as text in
scientific articles and social networks, and as images or illustrations from databases and Electronic
Health Records (EHR). Although many methods and tools have been developed, we are still far from an
effective solution, especially in the case of image retrieval from large and heterogeneous databases.
One way to improve current retrieval systems is data fusion. Data fusion is generally defined as the
use of techniques that combine data from multiple sources in order to achieve inferences that are
more efficient and accurate than those obtained from a single source.
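To make the fusion idea concrete, the following is a minimal sketch (not the authors' implementation) of the pipeline the abstract outlines: textual terms and visual "words" are placed in one shared bag-of-words space, weighted by TF-IDF, and projected with latent semantic analysis (truncated SVD) so that queries and documents can be compared across modalities. The toy count matrix, the dimension `k`, and all function names here are illustrative assumptions.

```python
import numpy as np

def tfidf(counts):
    """counts: (n_docs, n_terms) raw term counts -> TF-IDF weights."""
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    df = (counts > 0).sum(axis=0)
    idf = np.log((1 + counts.shape[0]) / (1 + df)) + 1
    return tf * idf

def lsa(matrix, k):
    """Truncated SVD: return document coordinates and term loadings
    in a k-dimensional latent space."""
    u, s, vt = np.linalg.svd(matrix, full_matrices=False)
    return u[:, :k] * s[:k], vt[:k]

# Toy data: 3 shared "opinions"; the first 2 columns are textual terms,
# the last 2 are visual words from the image descriptor (assumed values).
counts = np.array([[2.0, 0.0, 1.0, 0.0],
                   [0.0, 3.0, 0.0, 2.0],
                   [1.0, 1.0, 1.0, 1.0]])
docs, terms = lsa(tfidf(counts), k=2)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# A query (here a raw text-only term vector; a full system would reuse
# the corpus IDF weights) is projected into the same latent space,
# then documents are ranked by cosine similarity.
query = np.array([1.0, 0.0, 0.0, 0.0]) @ terms.T
scores = [cosine(query, d) for d in docs]
```

Because text and visual columns share one latent space, a purely textual query can still retrieve images whose visual words co-occur with those terms, which is the cross-modal effect the latent semantic analysis is meant to capture.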