This work is licensed under a Creative Commons Attribution 3.0 License. For more information, see http://creativecommons.org/licenses/by/3.0/.

This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/ACCESS.2019.2923552, IEEE Access

DeepStyle: Multimodal Search Engine for Fashion and Interior Design

IVONA TAUTKUTE 1,3, TOMASZ TRZCINSKI 2,3 (Member, IEEE), ALEKSANDER SKORUPA 3, LUKASZ BROCKI 1 and KRZYSZTOF MARASEK 1

1 Polish-Japanese Academy of Information Technology, Warsaw, Poland (e-mail: s16352 at pjwstk.edu.pl)
2 Warsaw University of Technology, Warsaw, Poland (e-mail: t.trzcinski at ii.pw.edu.pl)
3 Tooploox, Warsaw, Poland

Corresponding author: Ivona Tautkute (e-mail: s16352 at pjwstk.edu.pl).

ABSTRACT In this paper, we propose a multimodal search engine that combines visual and textual cues to retrieve items from a multimedia database that are aesthetically similar to the query. The goal of our engine is to enable intuitive retrieval of merchandise such as clothes or furniture. Existing search engines treat textual input only as an additional source of information about the query image and do not correspond to the real-life scenario where the user looks for "the same shirt but of denim". Our novel method, dubbed DeepStyle, mitigates those shortcomings by using a joint neural network architecture to model contextual dependencies between features of different modalities. We prove the robustness of this approach on two different challenging datasets of fashion items and furniture, where our DeepStyle engine outperforms baseline methods by more than 20% on the tested datasets. Our search engine is commercially deployed and available through a Web-based application.

INDEX TERMS Multimedia computing, Multi-layer neural network, Multimodal Search, Machine Learning
I. INTRODUCTION

A multimodal search engine retrieves a set of items from a multimedia database according to their similarity to the query in more than one feature space, e.g. textual and visual, or audiovisual (see Fig. 1). This problem can be divided into smaller subproblems by using a separate solution for each modality. The advantage of this approach is that both textual and visual search engines have been developed for several decades now and have reached a certain level of maturity. Traditional approaches such as Video Google [2] have been improved, adapted and deployed in industry, especially in the ever-growing domain of e-commerce. Major online retailers such as Zalando, Alibaba and ASOS already offer visual search engine functionalities to help users find products that they want to buy [3]. Furthermore, interactive multimedia search engines are omnipresent on mobile devices and allow for speech, text or visual queries [4]–[6]. Nevertheless, using a separate search engine for each modality suffers from one significant shortcoming: it prevents users from specifying a very natural query such as 'I want this type of dress but made of silk'. This is mainly because the notion of similarity in the separate spaces of different modalities differs from that in a single multimodal space. Furthermore, modeling this high-dimensional multimodal space requires more complex training strategies and thoroughly annotated datasets. Finally, the right balance between the importance of various modalities in the context of a user query is not obvious and is hard to estimate a priori. Although several multimodal representations have been proposed in the context of searching for fashion items, they typically use other modalities only as an additional source of information, e.g. to increase the classification accuracy of compatible and non-compatible outfits [7].
To address the above-mentioned shortcomings of the currently available search engines, we propose a novel end-to-end method that uses a neural network architecture to model the joint multimodal space of database objects. This method is an extension of our previous work [9], which blended multimodal results. Although in this paper we focus mostly on fashion items (clothes, accessories) and furniture, our search engine is in principle agnostic to object types and we see no limitations to applying it to other domains. We call our method DeepStyle and show that, thanks to its ability to jointly model both visual and textual modalities, it allows for more intuitive search queries while providing higher accuracy than the competing approaches. We prove the superiority
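To make the idea of a joint multimodal space concrete, the following is a minimal, illustrative sketch of retrieval in a shared visual-textual embedding. It is not the authors' actual DeepStyle architecture: the dimensions, the random (untrained) projection matrices, and the simple additive fusion of image and text features are all assumptions made purely for illustration. In a real system the projections would be learned end-to-end from annotated data.

```python
import numpy as np

rng = np.random.default_rng(42)

D_IMG, D_TXT, D_JOINT, N_ITEMS = 512, 300, 128, 10

# Hypothetical, untrained projection matrices; a trained model would
# learn these so that "same dress + 'silk'" lands near silk dresses.
W_img = rng.standard_normal((D_IMG, D_JOINT)) / np.sqrt(D_IMG)
W_txt = rng.standard_normal((D_TXT, D_JOINT)) / np.sqrt(D_TXT)

def embed(img_feat, txt_feat):
    """Fuse an (image, text) feature pair into the shared space
    by additive projection, then L2-normalize."""
    z = img_feat @ W_img + txt_feat @ W_txt
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# Toy database: each item has precomputed visual and textual features.
db = embed(rng.standard_normal((N_ITEMS, D_IMG)),
           rng.standard_normal((N_ITEMS, D_TXT)))

# Query of the form "this dress, but in silk": image features of the
# query photo combined with text features of the modifier words.
query = embed(rng.standard_normal(D_IMG), rng.standard_normal(D_TXT))

# Rank database items by cosine similarity (vectors are unit-norm).
ranking = np.argsort(-(db @ query))
print(ranking[:3])  # indices of the three most similar items
```

The key property this sketch illustrates is that both modalities influence a single similarity score, so a textual modifier can shift the neighborhood of a visual query, rather than being treated as a separate filter.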