Linear basis models for prediction and analysis of musical expression

Maarten Grachten 1, Gerhard Widmer 1,2

1 Department of Computational Perception, Johannes Kepler University, Linz, Austria
2 Austrian Research Institute for Artificial Intelligence, Vienna, Austria

Abstract

The quest to understand how pianists interpret notated music to turn it into a lively musical experience has led to numerous models of musical expression. Several models exist that explain expressive variations over the course of a performance, for example in terms of phrase structure or musical accent. Often, however, expressive markings are written explicitly in the score to guide performers. We present a modelling framework for musical expression that is especially suited to model the influence of such markings, along with any other information from the musical score. In two separate experiments, we demonstrate the modelling framework for both predictive and explanatory modelling. Together with the results of these experiments, we discuss our perspective on computational modelling of musical expression in relation to musical creativity.

1 Introduction and related work

When a musician performs a piece of notated music, the performed music typically shows large variations in expressive parameters such as tempo, dynamics, articulation, and, depending on the nature of the instrument, further dimensions such as timbre and note attack. It is generally acknowledged that one of the primary goals of such variations is to convey an expressive interpretation of the music to the listener. This interpretation may contain affective elements, as well as elements that convey musical structure (Clarke, 1988; Palmer, 1997). These insights have led to numerous models of musical expression.
The aim of these models is to explain the variations in expressive parameters as a function of the performer's interpretation of the music, and most of them can roughly be classified as either focusing on affective aspects of the interpretation,