Editorial

Novel approaches in machine learning and computational intelligence

This special issue of Neurocomputing presents 18 original papers, which are extended versions of selected papers from the 20th European Symposium on Artificial Neural Networks (ESANN). ESANN is a single-track conference held annually in Bruges, Belgium, one of the most beautiful medieval towns in Europe and a UNESCO World Heritage site, whose atmosphere is favorable both to efficient work and to enjoyable cultural visits. ESANN is organized by Prof. Michel Verleysen from the Université catholique de Louvain, Belgium. In addition to regular sessions, the conference welcomes special sessions focused on particular topics such as machine learning for spectral data or multimedia applications, new trends in kernel design, motion recognition, the effects and handling of missing data, or interpretable models. The contributions in this special issue show that ESANN covers a broad range of topics in neuro-computing, machine learning and neuroscience, from theoretical aspects to state-of-the-art applications, together with many related themes in signal processing and computational intelligence. More than 130 researchers from 19 countries and five continents participated in the 20th ESANN in April 2012. They presented 105 contributions and enjoyed the especially communicative atmosphere in Bruges. Based on the recommendations of the special-session organizers, the reviews of the conference papers, and the quality of the presentations made at the conference, a number of authors were invited to submit an extended version of their conference paper for this special issue of Neurocomputing. All of these papers were thoroughly reviewed once more by at least two independent experts and, finally, the 18 papers presented in this volume were accepted for publication. In this special issue we find a multitude of examples of neuro-computing and related techniques applied in different branches of research.
The first six papers analyze theoretical aspects of different learning systems and present results on the learning dynamics, potential optimization schemes and novel strategies to improve learning under different constraints. The first paper, by Orrite et al., on Magnitude Sensitive Competitive Learning, presents a new view on competitive learning. Standard methods distribute the representative data points that the final model consists of according to the data density. The new method allows for additional flexibility during learning: any magnitude calculated from the input data inside a unit's Voronoi region can be used to control the competition process. An important topic in kernel machines is the iterative construction of more complex kernels from a collection of simpler ones. This topic is considered in the paper of Belanche et al., Averaging of Kernel Functions, which studies one particular way of building such compound kernels by generalized averaging of simpler kernels. The authors rigorously show a rather strong result: in general, the only feasible average for kernel learning is the arithmetic average. It is also shown that the geometric mean can preserve the kernel property for a limited class of kernel functions. In the next paper, Intrinsic Plasticity via Natural Gradient Descent with Application to Drift Compensation, Neumann et al. investigate intrinsic plasticity (IP) for the optimization of the activation function of artificial neurons. The parameter space is analyzed by means of information geometry, exploiting the concept of the natural gradient to improve IP learning. The effects on drift compensation in the learning dynamics are analyzed and experimentally evaluated. The effective mapping of structured input data, such as a multi-resolution representation of an image, to structured output data, such as a structured semantic interpretation, is the topic of the paper of Bacciu et al., An Input–Output Hidden Markov Model for Tree Transductions.
An input-driven model for tree-structured data is proposed which extends the bottom-up hidden tree Markov model to non-homogeneous state transition and emission probabilities. Accordingly, the state transition and emission distributions may explicitly depend on (i.e., be parametrized by) some observed information. This permits higher flexibility in structured-data processing, as shown in experiments on document classification tasks. In the paper of Emmerich et al., Multi-directional Continuous Association with Input-driven Neural Dynamics, an interesting computational alternative for continuous association is presented. The approach relies on a core dynamical system whose dimensions can be freely assigned the roles of inputs, outputs or internal states. While very promising, the approach has two potential drawbacks: possible instabilities due to feedback loops, and the different scales of the input and output modalities. The paper offers practical guidelines for overcoming those drawbacks. The first block of papers is closed by the work of Tiňo et al. on Short Term Memory in Input-Driven Linear Dynamical Systems. The paper provides a theoretical analysis of two quantitative measures characterizing short-term memory in input-driven dynamical systems: the short-term memory capacity and the Fisher memory curve. A close connection between the two is identified for linear input-driven dynamical systems, and its implications for symmetric and cyclic dynamic couplings are explored. The following four papers are also largely driven by theoretical findings but focus on data encoding and representation. Frénay et al. reanalyze the concept of mutual information for feature selection in their paper Theoretical and Empirical Study on the Potential Inadequacy of Mutual Information for Feature Selection in Classification.
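The connection between the two memory measures analyzed by Tiňo et al. is easiest to appreciate with the standard definition of the first one. For a system driven by a scalar input signal s(t), Jaeger's short-term memory capacity sums, over all delays k, the best squared correlation that a trained linear readout y_k(t) can achieve with the delayed input (this is the standard textbook definition, not a formula reproduced from the paper itself):

$$
\mathrm{MC} \;=\; \sum_{k=1}^{\infty} \mathrm{MC}_k,
\qquad
\mathrm{MC}_k \;=\; \max_{W^{\mathrm{out}}_k}\;
\frac{\operatorname{Cov}^2\!\big(s(t-k),\, y_k(t)\big)}
     {\operatorname{Var}\!\big(s(t)\big)\,\operatorname{Var}\!\big(y_k(t)\big)}.
$$

Each term $\mathrm{MC}_k$ lies in $[0,1]$ and measures how well the input from $k$ steps ago can be linearly reconstructed from the current state, while the Fisher memory curve instead quantifies, delay by delay, how much information about a past input perturbation is retained in the state distribution.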
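The kernel-averaging result of Belanche et al. summarized above can be illustrated numerically. The short sketch below (a minimal illustration, not code from the paper) builds two Gram matrices, a linear kernel and a Gaussian kernel, on random data and checks that their arithmetic average is still positive semi-definite, i.e., still a valid kernel matrix. It also shows one way the elementwise geometric mean can break down: the linear kernel takes negative values, so the geometric mean is not even defined for this pair.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 5))  # 20 random points in 5 dimensions

# Two Gram matrices: linear kernel and Gaussian (RBF) kernel.
K_lin = X @ X.T
sq = np.sum(X**2, axis=1)
d2 = sq[:, None] + sq[None, :] - 2 * K_lin  # squared pairwise distances
K_rbf = np.exp(-d2 / 2)

# The arithmetic average of two kernel matrices is again a kernel:
# a sum of positive semi-definite matrices stays positive semi-definite.
K_avg = 0.5 * (K_lin + K_rbf)
assert np.linalg.eigvalsh(K_avg).min() > -1e-8  # PSD up to numerical tolerance

# The elementwise geometric mean sqrt(K_lin * K_rbf) is not even
# well-defined here, because the linear kernel has negative entries:
assert (K_lin < 0).any()
```

The same PSD check fails for many elementwise transformations of Gram matrices, which is in line with the authors' conclusion that only the arithmetic average works for arbitrary kernels.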
Neurocomputing 112 (2013) 1–3. http://dx.doi.org/10.1016/j.neucom.2013.01.005