Recognition of Multi-touch Drawn Sketches

Michael Schmidt and Gerhard Weber

Dresden University of Technology, Institute of Applied Science, Human-Computer Interaction, Nöthnitzer Straße 46, 01062 Dresden
{Michael.Schmidt1,Gerhard.Weber}@tu-dresden.de

Abstract. We present concepts and possible realizations for the classification of multi-touch drawn sketches. A gesture classifier is modified and integrated into a sketching tool. The applied routines are highly scalable and enable domain-independent sketching. Feasible classification rates are achieved without exploiting the full potential of the scheme. We demonstrate that the classifier is capable of identifying common basic primitives and gestures as well as complex drawings. Users define sketches via templates in their individual style and link them to constructed primitives. A pilot evaluation is conducted, and results regarding users' sketching techniques and classification rates are discussed.

Keywords: Sketch, recognition, classifier, survey, gestures, multi-touch.

1 Introduction and Motivation

Sketching software enhanced by recognition methods allows sketches to be interpreted, edited, searched, and neatened [1]. Since its early years (e.g., 'Sketchpad' [2], introduced in 1963), sketch recognition has reached manifold applications in different domains. Sketching applications exist for UML [3,4] and other diagrams [5,6,7], user interface design [8,9,10], mechanical schematic sketches [11], 3D curve modeling [12], and many more. Domain-independent approaches such as [13] exist, too. Additionally, gesture-based interfaces have become ubiquitous in recent years. They are mainly applied in direct manipulation, e.g., for scaling or reorienting graphical user interface elements. Nevertheless, new interaction techniques have evolved in recent developments.
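The idea of letting users define symbols by example, as mentioned in the abstract, can be illustrated with a minimal template matcher. The following sketch uses nearest-neighbor matching over resampled, normalized strokes, in the spirit of classical template recognizers; it is an illustrative assumption, not the classifier described in this paper, and omits refinements such as rotation invariance and multi-stroke handling.

```python
import math

def resample(points, n=32):
    """Resample a stroke (list of (x, y) tuples) to n equidistant points."""
    total = sum(math.dist(points[i - 1], points[i]) for i in range(1, len(points)))
    if total == 0:
        return [points[0]] * n
    interval = total / (n - 1)
    result = [points[0]]
    pts = list(points)
    acc = 0.0
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if acc + d >= interval:
            # Interpolate a new point at the exact interval boundary.
            t = (interval - acc) / d
            q = (pts[i - 1][0] + t * (pts[i][0] - pts[i - 1][0]),
                 pts[i - 1][1] + t * (pts[i][1] - pts[i - 1][1]))
            result.append(q)
            pts.insert(i, q)
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(result) < n:          # guard against floating-point shortfall
        result.append(points[-1])
    return result[:n]

def normalize(points):
    """Translate the centroid to the origin and scale to a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    pts = [(x - cx, y - cy) for x, y in points]
    w = max(p[0] for p in pts) - min(p[0] for p in pts)
    h = max(p[1] for p in pts) - min(p[1] for p in pts)
    s = max(w, h) or 1.0
    return [(x / s, y / s) for x, y in pts]

def classify(stroke, templates):
    """Return the label of the user-defined template closest to the stroke."""
    probe = normalize(resample(stroke))
    best_label, best_dist = None, float("inf")
    for label, template in templates:
        ref = normalize(resample(template))
        d = sum(math.dist(a, b) for a, b in zip(probe, ref)) / len(probe)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

In such a scheme, adding a new symbol in the user's individual drawing style amounts to appending one `(label, example_stroke)` pair to the template list, which is what makes template-based definitions attractive for domain-independent sketching.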
Aiming to create more natural sketching interfaces, both techniques are often combined, and switching of input modalities is required. We present a multi-touch sketching editor that extends sketch-based interaction techniques and overcomes the restriction to pen-based input. The gap between gestural interaction and sketching is bridged by allowing users to sketch with multiple fingers. Furthermore, domain dependency is diminished by enabling the definition of primitives and complex multi-stroke symbols by example.

M. Kurosu (Ed.): Human-Computer Interaction, Part IV, HCII 2013, LNCS 8007, pp. 479–490, 2013.
© Springer-Verlag Berlin Heidelberg 2013