SketchSPORE: A Sketch-Based Domain Separation and Recognition System for Interactive Interfaces

Danilo Avola 1, Luigi Cinque 2, and Giuseppe Placidi 1

1 Department of Life, Health and Environmental Sciences, University of L'Aquila, Via Vetoio Coppito 2, 67100 L'Aquila, Italy
{danilo.avola,giuseppe.placidi}@univaq.it
http://www.univaq.it/en/section.php?id=262
2 Department of Computer Science, Sapienza University, Via Salaria 113, 00198 Rome, Italy
cinque@di.uniroma1.it
http://w3.uniroma1.it/dipinfo/english/index.asp

Abstract. Multimodal interfaces are used to interact with devices and automata through different communication channels. In this context, the sketch modality plays a key role, since it allows users to convey concepts and/or commands through freehand drawing (graphical domain) and/or handwriting (textual domain). The sketch modality can be acquired using touch (e.g., touchscreen) or touchless (e.g., RGB-D camera) tools, supporting the development of versatile and powerful interactive interfaces. Domain separation and sketch recognition are two fundamental issues for these interfaces. This paper presents SketchSPORE, a novel framework designed both to automatically distinguish graphical from textual elements within the same sketch and to recognize freehand drawing as well as handwriting. The recognition processes support both on-line and off-line modes; moreover, their results can be suitably stored within an XML file, providing a means to maintain compatibility between the framework and target services and/or applications. Extensive experiments showing the effectiveness of the proposed method are reported and discussed.

Keywords: multimodal interfaces, sketch recognition, freehand drawing, handwriting, graphical domain, textual domain, SketchML.
1 Introduction

Multimodal interfaces allow users to interact with devices (e.g., tablets, smartphones, game consoles) using multiple modalities (e.g., sketch, gesture, speech) according to application requirements, environmental characteristics (e.g., indoor, outdoor), or device-dependent features (e.g., screen size, computational capacity). More recently, some human-oriented modalities (e.g., gestures) have been widely used to interact with advanced systems (e.g., Computer Assisted

A. Petrosino (Ed.): ICIAP 2013, Part II, LNCS 8157, pp. 181–190, 2013.
© Springer-Verlag Berlin Heidelberg 2013