arXiv:1104.3498v4 [nlin.CD] 17 May 2012

Mutual information rate and bounds for it

M. S. Baptista 1, R. M. Rubinger 2, E. R. V. Junior 2, J. C. Sartorelli 3, U. Parlitz 4, and C. Grebogi 1,5

1 Institute for Complex Systems and Mathematical Biology, SUPA, University of Aberdeen, AB24 3UE Aberdeen, United Kingdom
2 Federal University of Itajubá, Av. BPS 1303, Itajubá, Brazil
3 Institute of Physics, University of São Paulo, Rua do Matão, Travessa R, 187, 05508-090, São Paulo, Brazil
4 Biomedical Physics Group, Max Planck Institute for Dynamics and Self-Organization, Am Fassberg 17, 37077 Göttingen, Germany
5 Freiburg Institute for Advanced Studies (FRIAS), University of Freiburg, Albertstr. 19, 79104 Freiburg, Germany

(Dated: May 28, 2018)

The amount of information exchanged per unit of time between two nodes in a dynamical network or between two data sets is a powerful concept for analysing complex systems. This quantity, known as the mutual information rate (MIR), is calculated from the mutual information, which is rigorously defined only for random systems. Moreover, the definition of mutual information is based on probabilities of significant events. This work offers a simple alternative way to calculate the MIR in dynamical (deterministic) networks or between two data sets (not fully deterministic), and to calculate its upper and lower bounds without having to calculate probabilities, but rather in terms of well known and well defined quantities in dynamical systems. As possible applications of our bounds, we study the relationship between synchronisation and the exchange of information in a system of two coupled maps and in experimental networks of coupled oscillators.

I. INTRODUCTION

Shannon's entropy quantifies information [1]. It measures how much uncertainty an observer has about an event being produced by a random system. Another important concept in the theory of information is the mutual information [1].
It measures how much uncertainty an observer has about an event in a random system X after observing an event in a random system Y (or vice versa). Mutual information is an important quantity because it not only quantifies linear and non-linear interdependencies between two systems or data sets, but is also a measure of how much information two systems exchange or two data sets share. Due to these characteristics, it has become a fundamental quantity to understand the development and function of the brain [2, 3], to characterise [4, 5] and model complex systems [6–8] or chaotic systems, and to quantify the information capacity of a communication system [9]. When constructing a model of a complex system, the first step is to understand which are the most relevant variables to describe its behaviour. Mutual information provides a way to identify those variables [10].

However, the calculation of mutual information in dynamical networks or data sets faces three main difficulties [4, 11–13]. Mutual information is rigorously defined only for random memoryless processes. In addition, its calculation involves probabilities of significant events and a suitable space in which those probabilities are calculated. The events need to be significant in the sense that they contain as much information about the system as possible. But defining significant events, for example the fact that a variable has a value within some particular interval, is a difficult task because the interval that provides significant events is not always known. Finally, data sets have finite size, which prevents one from calculating probabilities correctly. As a consequence, mutual information can often only be calculated with a bias [4, 11–13].

In this work, we show how to calculate the amount of information exchanged per unit of time [Eq. (3)], the so-called mutual information rate (MIR), between two arbitrary nodes (or groups of nodes) in a dynamical network or between two data sets.
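The binning and finite-sample difficulties described above can be made concrete with a minimal plug-in estimator of the mutual information. This sketch is not the method proposed in this paper; it is the standard histogram-based estimate whose dependence on the (arbitrary) bin count illustrates why probability-based calculations of mutual information are biased for finite data sets. The function name and the choice of 16 bins are illustrative assumptions.

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in estimate of I(X;Y) in bits from a 2-D histogram.

    The bin count is a free parameter: too few bins hide the
    interdependence between x and y, too many bins leave most
    cells nearly empty and inflate the estimate (finite-size bias).
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                 # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)       # marginal of X
    py = pxy.sum(axis=0, keepdims=True)       # marginal of Y
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=10_000)
noise = rng.normal(size=10_000)
# A signal shares far more information with itself than with
# independent noise, yet the independent-noise estimate is not
# exactly zero -- the residual is the finite-sample bias.
print(mutual_information(x, x), mutual_information(x, noise))
```

Because the estimator is a Kullback-Leibler divergence between the empirical joint distribution and the product of its marginals, it is always non-negative; the bias therefore cannot average out and systematically overestimates the true mutual information of independent data.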
Each node represents a d-dimensional dynamical system with d state variables. The trajectory of the network, considering all the nodes in the full phase space, is called the "attractor" and is represented by Σ. Then, we propose an alternative method, similar to the ones proposed in Refs. [14, 15], to calculate significant upper and lower bounds for the MIR in dynamical networks or between two data sets, in terms of Lyapunov exponents, expansion rates, and capacity dimension. These quantities can be calculated without the use of probabilistic measures. As possible applications of our bounds calculation, we describe the relationship between synchronisation and the exchange of information in small experimental networks of coupled Double-Scroll circuits.

In the previous works of Refs. [14, 15], we proposed an upper bound for the MIR in terms of the positive conditional Lyapunov exponents of the synchronisation manifold. As a consequence, this upper bound could only be calculated in special complex networks that allow the existence of complete synchronisation. In the present work, the proposed upper bound can be calculated for any system (complex networks and data sets) that admits the calculation of Lyapunov exponents.

We assume that an observer can measure only one scalar time series for each one of two chosen nodes. These