Artificial Affective Listening towards a Machine Learning Tool for Sound-Based Emotion Therapy and Control

Alexis Kirke
Interdisciplinary Centre for Computer Music Research, Plymouth University, UK
alexis.kirke@plymouth.ac.uk

Eduardo Miranda
Interdisciplinary Centre for Computer Music Research, Plymouth University, UK
eduardo.miranda@plymouth.ac.uk

Slawomir J. Nasuto
University of Reading, Reading, UK
s.j.nasuto@reading.ac.uk

ABSTRACT

We are extending our work in EEG-based emotion detection for automated expressive performance of algorithmically composed music for affective communication and induction. This new system will involve music composed and expressively performed in real time to induce specific affective states, based on the detection of the affective state of a human listener. Machine learning algorithms will learn: (1) how to use EEG and other biosensors to detect the user's current emotional state; and (2) how to use algorithmic performance and composition to induce certain affective trajectories. In other words, the system will attempt to adapt so that it can, in real time, turn a given user from depressed to happy, or from stressed to relaxed, or (if they like horror movies!) from relaxed to fearful. As part of this we have developed a test-bed involving an artificial listening affective agent to examine key issues and test potential solutions. As well as giving a project overview, the prototype design and first experiments with this artificial agent are presented here.

1. INTRODUCTION

The aim of our research is to develop technology for implementing innovative intelligent systems that can monitor a person's affective state and induce a further specific affective state through music, automatically and adaptively. [1] investigates the use of EEG to detect emotion in an individual and then to generate emotional music based on this.
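The closed-loop idea described above — detect the listener's affective state, then adapt the music to steer that state toward a target — can be sketched in a few lines. Everything below is a hypothetical illustration: the function names, the proportional update rule, and the tempo/mode mapping are our own stand-ins, not the EEG-based machine learning components the project proposes.

```python
# Illustrative sketch of a closed-loop affective music system.
# All names and the simple proportional rule are hypothetical stand-ins;
# the actual project uses EEG classification and machine learning.

def detect_state(eeg_sample):
    """Stand-in for EEG affect detection: returns (valence, arousal) in [-1, 1]."""
    # A real system would extract and classify EEG features; here we echo the sample.
    return eeg_sample

def music_parameters(target, current, gain=0.5):
    """Adjust musical controls proportionally to the affective 'error'."""
    dv = target[0] - current[0]  # valence error
    da = target[1] - current[1]  # arousal error
    # Hypothetical mapping: arousal error drives tempo, valence error drives mode.
    tempo_bpm = 100 + 60 * gain * da
    mode = "major" if current[0] + gain * dv >= 0 else "minor"
    return {"tempo_bpm": round(tempo_bpm, 1), "mode": mode}

# One loop iteration: listener is stressed (low valence, high arousal),
# target is relaxed (positive valence, low arousal).
current = detect_state((-0.6, 0.8))
params = music_parameters(target=(0.5, -0.3), current=current)
```

In the full system, each such iteration would be repeated as new EEG data arrives, with the learned mapping replacing the fixed proportional rule.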
These ideas have been extended into a 4.5-year EPSRC research project [2] in which machine learning is used to learn, via EEG emotional feedback, what types of music evoke what emotions in the listener. This paper introduces the key background elements behind the project: music and emotion, emotional expressive performance and algorithmic composition, and EEG affective analysis; it then details some preparatory work being undertaken, together with the future project plans.

2. MUSIC AND EMOTION

Music is commonly known to evoke various affective states (popularly referred to as "emotions") [3]. A number of questionnaire studies support the notion that music communicates affective states (e.g., [4, 5]) and that music can be used for affect regulation and induction (e.g., [6, 7]). However, the exact nature of these phenomena is not fully understood. The literature makes a distinction between perceived and induced affectivity, with music being able to generate both types [4]. The differences between induced and perceived affective states have been discussed by Juslin and Sloboda [3]. For example, a listener may enjoy a piece of music like Barber's Adagio, which most people would describe as a "sad" piece of music. If they gain pleasure from listening, however, the induced affective state must be positive, while the perceived affective state is sadness; i.e., a negative state. Despite these differences, perceived and induced affective states are highly correlated [4, 8]. Zentner et al. [9] reported on research quantifying the relationship between perceived and induced affective states across music genres.

3. EMOTION-BASED ALGORITHMIC COMPOSITION

One area of algorithmic composition which has received more attention recently is affectively-based computer-aided composition.
A common theme running through some of the affective-based systems is the representation of the valence and arousal of a participant's affective state [11]. Valence refers to the positivity or negativity of an affective state; e.g., high-valence affective states include joy and contentment, while low-valence states include sadness and anger. Arousal refers to the energy level of the affective state; e.g., joy is a higher-arousal affective state than happiness. Until recently, the arousal-valence space was the dominant quantitative two-dimensional representation of emotion in research into musical affectivity. More recently, a new theory of emotion with a corresponding scale, referred to as GEMS (Geneva Emotional Music Scale), has been proposed [9]. Many of the affective-based systems are actually based around re-composition rather than composition; i.e., they focus on how to transform an already composed piece of music to give a different emotional effect, e.g., make it sadder, happier, etc. This is the case with the best known
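The valence-arousal representation and the re-composition idea described above can be made concrete with a minimal sketch. The quadrant labels and the tempo/mode transform below are illustrative assumptions of our own, not taken from any of the cited systems.

```python
# Minimal sketch of the two-dimensional valence-arousal space and of
# "re-composition": transforming an existing piece toward a target emotion.
# Quadrant labels and the tempo/mode mapping are illustrative assumptions.

def label_affect(valence, arousal):
    """Map a point in valence-arousal space (each in [-1, 1]) to a quadrant label."""
    if valence >= 0:
        return "joy" if arousal >= 0 else "contentment"
    return "anger" if arousal >= 0 else "sadness"

def recompose(piece, target_valence, target_arousal):
    """Transform the tempo and mode of an existing piece toward a target affect."""
    out = dict(piece)
    # Higher target arousal speeds the piece up, lower arousal slows it down.
    out["tempo_bpm"] = round(piece["tempo_bpm"] * (1.0 + 0.3 * target_arousal), 1)
    # Positive target valence selects the major mode, negative the minor.
    out["mode"] = "major" if target_valence >= 0 else "minor"
    return out

piece = {"tempo_bpm": 120, "mode": "major"}
sadder = recompose(piece, target_valence=-0.8, target_arousal=-0.5)
```

Real systems of this kind manipulate many more features (articulation, dynamics, harmony), but the structure — a point or trajectory in affect space driving a set of musical transformations — is the same.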