A Neural-Network-Based Interface for Real-Time Control of Musical Synthesis Processes

GIOVANNI COSTANTINI 1,2, MASSIMILIANO TODISCO 1, MASSIMO CAROTA 1

1 Department of Electronic Engineering
University of Rome “Tor Vergata”
Via del Politecnico, 1 – 00133 ROMA
ITALY

2 Institute of Acoustics “O. M. Corbino”
Via del Fosso del Cavaliere, 100 – 00133 ROMA
ITALY

Abstract: - In this paper, we present an innovative neural network interface that allows an electronic music composer to plan and conduct the musical expressivity of a performer. By musical expressivity we mean all those execution techniques and modalities that a performer must follow in order to satisfy common musical aesthetics, as well as the desiderata of the composer. The proposed interface, or virtual musical instrument, is able to transform two input parameters into many sound synthesis parameters. In particular, we focus our attention on neural-network-based mapping strategies to address the problem of expressivity in electronic music.

Key-Words: - Neural Network, Control, Musical Synthesis Process.

1 Introduction
Traditional musical sound is the direct result of the interaction between a performer and a musical instrument. This interaction rests on complex phenomena, such as creativity, feeling, skill, the actions of the muscular and nervous systems, and the movement of the limbs, all of which are the foundation of musical expressivity. In practice, musical instruments transduce the movements of a performer into sound. Moreover, they require two or more control inputs to generate a single sound. For example, the loudness of the sound can be controlled by means of a bow, a mouthpiece, or by plucking a string, while the pitch is controlled separately, for instance by fingerings that change the length of an air column or of a string. The sound produced is characteristic of the musical instrument itself and depends on a multitude of time-varying physical quantities, such as the frequencies, amplitudes, and phases of its sinusoidal partials [1].
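The two-to-many mapping at the heart of such an interface can be sketched as a small feedforward network that takes two control values (for example, an x-y position) and emits the amplitudes of a bank of sinusoidal partials. This is a minimal illustrative sketch, not the authors' trained network: the layer sizes, the tanh/sigmoid activations, and the random (untrained) weights are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PARTIALS = 16   # number of synthesis parameters produced (assumed)
HIDDEN = 8        # hidden-layer width (assumed)

# Random, untrained weights: a trained network would learn these
# from examples of the composer's desired control-to-sound mapping.
W1 = rng.standard_normal((HIDDEN, 2))
b1 = rng.standard_normal(HIDDEN)
W2 = rng.standard_normal((N_PARTIALS, HIDDEN))
b2 = rng.standard_normal(N_PARTIALS)

def map_controls(x, y):
    """Map two control values in [0, 1] to N_PARTIALS amplitudes in (0, 1)."""
    h = np.tanh(W1 @ np.array([x, y]) + b1)      # hidden activations
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))  # sigmoid output layer

amps = map_controls(0.3, 0.7)
print(amps.shape)  # (16,)
```

Because the network is a smooth function, a small gesture on the two-dimensional controller moves all sixteen synthesis parameters along a coherent trajectory, which is exactly what a one-to-one mapping cannot provide.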
The way music is composed and performed changes dramatically [2] when the synthesis parameters of a sound generator are controlled through human-computer interfaces, such as a mouse, keyboard, or touch screen, or through input devices such as kinematic and electromagnetic sensors or gestural control interfaces [3,4]. As regards musical expressivity, it is important to define how to map a few input data onto many synthesis parameters. It is now clear that the simple one-to-one mapping laws of traditional acoustic instruments give way to a wide range of mapping strategies.

The paper is organized as follows: in the second section we present some perceptual considerations; in the third section we describe the neural network structure; in the fourth section we describe and illustrate our interface and the mapping strategies adopted; finally, we show a real-time musical application of our interface.

2 Perceptual Considerations
To investigate the influence that mapping has on musical expression, let us consider some aspects of Information Theory and Perception Theory [5]:
• the quality of a message, in terms of the information it conveys, increases with its originality, that is, with its unpredictability;
• information is not the same as the meaning it conveys: a maximum-information message makes no sense if no listener exists who is able to decode it.

A perceptual paradox [6] illustrating how an analytic model fails to predict what we perceive from what our senses transduce is the following: both maximum predictability and maximum unpredictability imply minimum information, or even no information at all. A neural network approach is chosen to overcome the perceptual limits mentioned above.

Proceedings of the 11th WSEAS International Conference on CIRCUITS, Agios Nikolaos, Crete Island, Greece, July 23-25, 2007
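The information-theoretic side of this paradox can be made concrete with a toy entropy computation. Shannon entropy assigns zero bits to a fully predictable sequence and a maximum to a uniformly distributed one, yet perceptually neither extreme conveys meaning to a listener, which is precisely what the analytic model misses. The sequences below are illustrative assumptions, and the per-symbol estimate deliberately ignores symbol order.

```python
import math
from collections import Counter

def entropy_bits(seq):
    """Shannon entropy, in bits per symbol, of a discrete symbol sequence,
    estimated from the relative frequencies of its symbols."""
    n = len(seq)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(seq).values())

print(entropy_bits("aaaaaaaa"))  # 0.0 -> maximum predictability, no information
print(entropy_bits("adbcbadc"))  # 2.0 -> four equiprobable symbols, maximum entropy
```

The constant sequence carries no information at all, while the maximum-entropy sequence carries the most information in Shannon's sense but, to a listener, is indistinguishable from noise: both extremes are perceptually empty.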