Abstract:
Large quantities of medical image data are steadily accumulating in social health networks. At the same time, radiologists are often faced with emergency situations in which they must make rapid, life-saving decisions. Given the sheer volume of image data and the heterogeneity of the images, it is necessary to develop effective medical image search systems based on both textual and visual modalities. Although substantial progress has been made in different domains of medical image retrieval, existing applications produce unsatisfactory results because of the unique characteristics of medical images. In this paper, we develop a new multi-modal approach for medical image retrieval based on deep learning. More precisely, we first propose a new latent semantic analysis model that integrates the visual and textual information of medical images to bridge the semantic gap. Experimental results on the online ImageCLEF dataset, which contains a large volume of real-world medical images, show that our approach is a promising solution for next-generation medical image indexing and retrieval systems.