Dynamic Partial Reconfiguration of the Ubichip for Implementing Adaptive Size Incremental Topologies

Héctor F. Satizábal, Andres Upegui

Abstract— The Ubichip is a reconfigurable digital circuit with special bio-inspired mechanisms that supports dynamic partial reconfigurability in a flexible and efficient way. This paper presents an adaptive size neural network model with incremental learning that exploits these capabilities by creating new neurons and connections whenever they are needed, and by destroying them when they remain unused for some time. This neural network, composed of a perception layer and an action layer, is validated on a robot simulator, where neurons are created in the presence of new perceptions. Furthermore, links between perceptions and actions are created, reinforced, and destroyed following a Hebbian approach. In this way, the neural controller builds a model of its specific environment and learns how to behave in it. The neural controller is also able to adapt to a new environment by forgetting previously unused knowledge, thus freeing hardware resources. We present some results on the neural controller and on how it manages to characterize specific environments by exploiting the dynamic hardware topology support offered by the Ubichip.

I. INTRODUCTION

Nowadays, technological trends allow higher system integration at lower cost. Current semiconductor integration technologies make it possible to include billions of transistors in a single chip, dramatically reducing the cost per transistor. This has enabled the appearance of pervasive systems: myriads of small distributed embedded systems equipped with sensing, actuating, and/or communication capabilities, which constantly interact with the environment and with human users. These systems are typically confronted with changing environments or with users who have different preferences.
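The link lifecycle described in the abstract, in which connections between perception and action neurons are created on co-activation, reinforced by a Hebbian rule, and destroyed after a period of disuse, can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the dictionary representation of links, and all constants (initial weight, reinforcement step, decay rate, pruning threshold) are hypothetical choices for the example.

```python
# Illustrative sketch (not the paper's implementation) of the link
# lifecycle described in the abstract: a perception-action connection
# is created on first co-activation, reinforced by a Hebbian rule,
# decays when unused, and is destroyed once it falls below a
# threshold. All names and constants here are hypothetical.

CREATE_WEIGHT = 0.1   # initial strength of a newly created link
REINFORCE = 0.2       # Hebbian increment on co-activation
DECAY = 0.01          # per-step decay of links that were not co-active
PRUNE_BELOW = 0.05    # links weaker than this are destroyed

def update_links(links, active_perceptions, active_actions):
    """links: dict mapping (perception, action) pairs to weights."""
    # Hebbian reinforcement, and creation of missing links,
    # for every co-active perception-action pair
    for p in active_perceptions:
        for a in active_actions:
            if (p, a) in links:
                links[(p, a)] = min(1.0, links[(p, a)] + REINFORCE)
            else:
                links[(p, a)] = CREATE_WEIGHT  # create a new link
    # every link that was not co-activated this step decays,
    # and is pruned once it becomes too weak (forgetting)
    coactive = {(p, a) for p in active_perceptions for a in active_actions}
    for pa in list(links):
        if pa not in coactive:
            links[pa] -= DECAY
            if links[pa] < PRUNE_BELOW:
                del links[pa]  # forget unused knowledge, free resources
    return links
```

Under this rule, knowledge that stops being used fades away on its own, which is what lets the controller re-adapt when the environment changes.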
Moreover, given the high number of devices, it becomes very difficult to manually upgrade, tune, or customize them to fit the requirements of a specific user or of specific environmental conditions. Adaptation thus becomes a very desirable feature for this type of system, allowing it to meet the specific requirements of each possible scenario. Different levels of adaptation can be identified, ranging from simple parameter tuning to complex topological modifications that may alter the internal system structure. Moreover, when we consider the case of constantly changing environments, the system should continue adapting in an incremental manner. This dynamic behaviour is performed naturally by living beings at different levels. For instance, biological nervous systems are in continuous adaptation during the developmental and learning processes. During these periods, neurons are constantly replicating, differentiating, migrating, and creating, strengthening, or destroying synapses according to the genetic plan of the individual and its interaction with its environment. Biological nervous systems, and especially biological neurons and their interconnections, have been used as inspiration to endow artificial systems with properties of living beings such as learning and adaptation.

Héctor F. Satizábal is a PhD student at the Université de Lausanne, Switzerland, and works with the Institute of Reconfigurable and Embedded Digital Systems (REDS) at the University of Applied Sciences of Western Switzerland (HEIG-VD), Rte de Cheseaux 1, 1400 Yverdon-les-Bains (email: Hector.SatizabalMejia@unil.ch). Andres Upegui is a Senior Researcher at the same institute (email: andres.upegui@heig-vd.ch).
Artificial neural networks (ANN) constitute a clear example of this approach. Since the introduction of ANN as an alternative data processing tool, a large number of topologies and implementations of learning algorithms have been created [1]. There are networks that learn in a supervised or unsupervised way; networks with one, two, or more layers of neurons; networks performing classification or regression tasks; fixed size and adaptive size networks; etc.

Nowadays, ANN are powerful tools for the implementation of mobile robot controllers. Nonetheless, the size of a network is always hard to determine, and it is even harder when the environment changes over time. Several neural models featuring incremental adaptive topologies have been proposed in order to automate network construction and to enable the model to solve incremental tasks [2]. Some examples of such networks are ART [3], GAR [4], FAST [5], and GNG [6]. In a more general framework, these techniques offer the possibility of adapting the structure of the problem solver to the complexity of the problem at hand.

This same principle can be exploited by the field of dynamically reconfigurable hardware systems in order to provide high performance, adaptability, and fault tolerance simultaneously. However, its application on currently available reconfigurable devices is not straightforward. Several approaches have been proposed for exploiting the flexibility of reconfigurable circuits in order to implement adaptable topologies, each of them exhibiting a set of limitations [7]. Bearing these limitations in mind, the Ubichip architecture was proposed in [8].

The Ubichip is a reconfigurable digital circuit with bio-inspired mechanisms that allows a flexible and efficient implementation of hardware systems featuring dynamic topologies.
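The shared principle behind these incremental-topology models can be shown with a toy sketch in the spirit of ART's vigilance test or GNG's growth criterion (it is a faithful implementation of neither): a new unit is created whenever no existing unit matches the input closely enough, so the network's size tracks the complexity of the data it observes. The function name and the radius value are arbitrary choices for this example.

```python
import math

# Toy illustration of the incremental-topology principle: grow the
# network only when the current input is novel. Not a faithful
# implementation of ART, GAR, FAST, or GNG; names and the vigilance
# radius are arbitrary choices for this sketch.

RADIUS = 1.0  # vigilance: maximum distance for an input to be "known"

def present(units, x):
    """units: list of prototype vectors; x: input vector.
    Returns the index of the matching unit, creating one if needed."""
    best, best_d = None, float("inf")
    for i, u in enumerate(units):
        d = math.dist(u, x)  # Euclidean distance to each prototype
        if d < best_d:
            best, best_d = i, d
    if best is not None and best_d <= RADIUS:
        return best            # input is covered by an existing unit
    units.append(list(x))      # novelty: grow the network by one unit
    return len(units) - 1
```

A fixed-size network would have to allocate its worst-case capacity up front; a rule like this one allocates structure, and hence hardware resources on a reconfigurable device, only when the environment demands it.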
It supports the implementation of self-replicating systems, the dynamic creation and destruction of interconnections, and a very flexible dynamic partial reconfigurability. It has been used for modelling synaptogenic neural networks [9],