1 Bias Management in Time-Changing Data Streams

João Gama (1) and Gladys Castillo (1,2)
(1) LIACC, University of Porto, Portugal
(2) Department of Mathematics, University of Aveiro, Portugal

1.1 Introduction

The term bias has been widely used in Machine Learning and Statistics with somewhat different meanings. In the context of Machine Learning, Mitchell [25] defines bias as any basis for choosing one generalization over another, other than strict consistency with the instances. In [17] the authors distinguish two major types of bias: representational and procedural. The former defines the states in a search space: it specifies the language used to represent generalizations of the examples. The latter determines the order of traversal of the states in the space defined by a representational bias. In Statistics, bias is used in a somewhat different way: given a learning problem, the bias of a learning algorithm is the persistent or systematic error the learning algorithm is expected to achieve when trained with different training sets of the same size. To summarize, while Machine Learning bias refers to restrictions on the search space, Statistics focuses on the error. Some authors [13, 19] have presented the so-called bias-variance error decomposition, which gives insight into a unified view of both perspectives. Powerful representation languages explore larger spaces, reducing the bias component of the error (although at the cost of increasing the variance). Less powerful representation languages incur large error due to a systematic component. Often, modifying some aspect of the learning algorithm has opposite effects on the bias and the variance. For example, as one increases the number of degrees of freedom in the algorithm, the bias error typically shrinks but the error due to variance increases.
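The trade-off just described can be made concrete with a small simulation, sketched below. The setup (target function, noise level, sample sizes, and the helper `bias_variance`) is illustrative and not taken from this chapter: it estimates the squared bias and the variance of polynomial least-squares fits of increasing degree by averaging predictions over many resampled training sets.

```python
# Illustrative sketch (not from the chapter): estimate bias^2 and variance
# of polynomial fits of increasing degree on a known target function.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # True regression function the learner tries to approximate.
    return np.sin(2.0 * np.pi * x)

def bias_variance(degree, n_train=30, n_runs=200, noise=0.3):
    x_test = np.linspace(0.0, 1.0, 50)       # fixed evaluation grid
    preds = np.empty((n_runs, x_test.size))
    for r in range(n_runs):
        # Fresh training set of the same size on each run.
        x = rng.uniform(0.0, 1.0, n_train)
        y = target(x) + rng.normal(0.0, noise, n_train)
        coefs = np.polyfit(x, y, degree)     # least-squares polynomial fit
        preds[r] = np.polyval(coefs, x_test)
    mean_pred = preds.mean(axis=0)
    # Squared bias: squared gap between the average model and the truth.
    bias2 = np.mean((mean_pred - target(x_test)) ** 2)
    # Variance: spread of individual models around the average model.
    variance = np.mean(preds.var(axis=0))
    return bias2, variance

for degree in (1, 3, 9):
    b2, var = bias_variance(degree)
    print(f"degree {degree}: bias^2 = {b2:.4f}, variance = {var:.4f}")
```

With more degrees of freedom (higher polynomial degree), the squared bias shrinks while the variance term grows, which is the behavior the paragraph above describes.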
The optimal number of degrees of freedom (as far as expected loss is concerned) is the one that optimizes this trade-off between bias and variance.

In this article we study the problem of bias management when there is a continuous flow of training examples, i.e., the number of training examples increases with time. We argue that, as the number of training examples grows, Machine Learning algorithms should strengthen bias management. We discuss methods that monitor the evolution of the error for incoming exam-