An application to test the emotion conveyed by vocal and musical signals

Simone Carcone 1, Carlo Giovannella 2

1 ISIM_garage, Phys. Dept., University of Rome Tor Vergata, Italy
2 ISIM_garage, Phys. Dept. and Scuola IaD, University of Rome Tor Vergata, Italy

giovannella@scuolaiad.it

Abstract

We present an application that makes it straightforward to build, administer, and analyze tests designed to measure the emotion conveyed by multimodal and single-modal signals, including voice, music, and sounds. The application is available both as a stand-alone application and, partially, as a web service.

Index Terms: emotions conveyed, emotions perceived

1. Introduction and rationale

One of the main problems faced by designers and developers of applications that aim to support the emotional level of mediated communication is certainly the lack of a universally accepted model of emotions [1,9]. This is due in part to the complexity of human beings, and in part to the relatively short time that has elapsed since the emotional level began to be seriously considered and investigated within the domain of human-machine interaction [2]. However, we believe that the current situation is also partially due to the lack of tools that make it easy to build, administer, and analyze perception tests; in particular, tests that may help to reveal possible correlations between the physical characteristics of stimuli and human perception. The application presented here aims to help fill this gap and to enable more accurate and more extensive studies of the emotional level of human communication.

2. The application

The application was developed in Processing [3] (an open development environment based on a Java preprocessor) and is composed of two modules that can be used in a stand-alone configuration: (a) test and (b) analysis. The test module is also available as an on-line service, which implements most of the features offered by the stand-alone version.
At present, the on-line service is available as an internal service of the collaborative working and learning environment LIFE [4]. Soon it will be made available to external users as an open service from the ISIM_garage website [5].

2.1. Test module

The test module allows one to build, administer, and analyze in a quantitative manner the ability to perceive the emotional nuances conveyed by any sort of signal/stimulus (voice, sounds, words/text, images, single or multiple movies). The user may combine all the modal channels at will to produce single-modal or synaesthetic tests. It is important to stress that the design of a new test does not require any modification of the code; the user simply provides a text file, written according to a predefined format, containing:

a) the instructions to show to the subject (in one or more languages);
b) a list of the multimedia resources to be presented as a random sequence of stimuli (all resources should be made available in appropriate folders);
c) the physical location within the projection area (dark grey box in Fig. 1) where each resource has to be shown to the subject (this specification is necessary if one wishes to present more than one picture or video simultaneously).

Figure 1: Interface of the design and admin module of the application.

Figure 2: Example of the finite-state (above), bi-dimensional (below-left) and GEW (below-right) models of the emotions available in our application.

Copyright 2011 ISCA. INTERSPEECH 2011, 28-31 August 2011, Florence, Italy.
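To make the three parts of the test-specification file concrete, a file of the kind described above might look like the following sketch. The keywords, file names, and coordinate scheme here are purely illustrative assumptions; the paper does not document the actual syntax of the predefined format.

```
# Hypothetical test-specification sketch (field names are assumptions)

# a) instructions shown to the subject, per language
INSTRUCTIONS en "Listen to each stimulus and rate the emotion it conveys."
INSTRUCTIONS it "Ascolta ogni stimolo e valuta l'emozione che trasmette."

# b) multimedia resources, presented as a random sequence of stimuli
RESOURCE audio/voice_angry_01.wav
RESOURCE audio/music_sad_03.mp3

# c) optional position within the projection area (x y width height),
#    needed when two or more pictures/videos are shown simultaneously
RESOURCE images/face_happy_02.jpg POSITION 0 0 320 240
RESOURCE video/clip_04.mov POSITION 320 0 320 240
```

Under this reading, a single file fully defines a test: the application loads it, resolves the listed resources from their folders, shuffles them into a random presentation order, and places any simultaneously displayed items at the declared positions.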