TO APPEAR IN IEEE TRANSACTIONS ON NEURAL NETWORKS

Decentralized Asynchronous Learning in Cellular Neural Networks

Bipul Luitel, Member, IEEE, and Ganesh K. Venayagamoorthy, Senior Member, IEEE

Abstract—Cellular neural networks (CNNs) described in the literature so far consist of identical units, called cells, connected to their adjacent neighbors. These cells interact with each other in order to fulfill a common goal. Current methods for learning in CNNs are usually centralized (cells are trained in one location) and synchronous (all cells are trained simultaneously, either sequentially or in parallel, depending on the available hardware/software platform). In this paper, a generic architecture of a CNN is presented, and a special case of supervised learning is demonstrated that explains the internal components of a cell. A decentralized asynchronous learning (DAL) framework for CNNs is developed in which each cell of the CNN learns in a spatially and temporally distributed environment. An application of the DAL framework is demonstrated by developing a CNN-based wide area monitoring system for power systems. The results obtained are compared against equivalent traditional methods and shown to be better in terms of accuracy and speed.

Index Terms—CNN, decentralized asynchronous learning, high performance computer, multilayer perceptron, power systems, PSO, SRN, wide area monitoring.

I. INTRODUCTION

Two major variations of cellular neural networks (CNNs) have been studied in the neural networks community. The CNN introduced by Chua and Yang in 1988 [1] consists of individual units (cells), each connected to its neighbors on a cellular structure. Each cell of such a CNN is a computational unit, and these CNNs have been applied to pattern recognition [2], [3] and image processing [4], [5]. A CNN is a highly nonlinear system, and its stability is important for real applications. Multistability of such CNNs is discussed in [6].
In [7], Werbos introduced a cellular implementation of simultaneous recurrent neural networks (SRNs), where each 'cell' is an SRN with the same set of weights but a different set of inputs. A CNN consisting of SRNs as cells is called a cellular SRN (CSRN), and one containing multilayer perceptrons (MLPs) as cells is called a cellular MLP (CMLP). CSRNs have been used in the maze navigation problem [8], [9], facial recognition [10] and image processing [11]. Stability of recurrent neural networks in the presence of noise and time delays is discussed in [12]. Thus, [6] and [12] together provide a basis for the stability of such CNNs containing neural networks in each of their cells.

In the original CNN, each cell is connected only to its adjacent cells [1]. However, in a CMLP or CSRN, the connection of different cells to each other is application dependent, as shown in an application to bus voltage prediction in a power system [13]. Even with these variations, most of the CNNs studied so far consist of identical units in each cell of the CNN, and learning is centralized and synchronous: the neural networks (NNs) in all cells of the CSRN or CMLP are trained simultaneously in one location.

Distributed learning of artificial systems has long been of interest to the research community. Many approaches have focused on either data decomposition or task decomposition methods to achieve parallelism by distributing the workload among multiple processors [14]–[18].

Footnote: Bipul Luitel and Ganesh K. Venayagamoorthy are with the Real-Time Power and Intelligent Systems Laboratory, Department of Electrical and Computer Engineering, Clemson University, Clemson, SC 29634 USA. Contact: iambipul@ieee.org, gkumar@ieee.org. The funding provided by the National Science Foundation, USA, under CAREER grant ECCS #1231820 and EFRI grant #1238097 is gratefully acknowledged.
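To make the CSRN idea above concrete, the following is a minimal sketch (not the paper's code) of a cellular network in which every cell applies the same shared weight set to a different input vector formed from its own state and its neighbors' states. The grid size, hidden-layer width, and tanh update rule are illustrative assumptions.

```python
# Sketch of a CSRN-style cellular network: one shared weight set,
# per-cell inputs drawn from the cell's own state and its neighbors.
# Grid size, hidden width, and the update rule are assumptions.
import numpy as np

rng = np.random.default_rng(0)
ROWS, COLS, HIDDEN = 4, 4, 3

# The defining CSRN property: ALL cells share these weights.
W_in = rng.standard_normal((HIDDEN, 5))   # own state + 4 neighbor states
W_out = rng.standard_normal(HIDDEN)

state = rng.standard_normal((ROWS, COLS))

def neighbors(r, c):
    """Von Neumann neighborhood, zero-padded at the grid boundary."""
    def at(i, j):
        return state[i, j] if 0 <= i < ROWS and 0 <= j < COLS else 0.0
    return [at(r - 1, c), at(r + 1, c), at(r, c - 1), at(r, c + 1)]

def step():
    """One synchronous relaxation step: every cell, same weights."""
    new = np.empty_like(state)
    for r in range(ROWS):
        for c in range(COLS):
            x = np.array([state[r, c]] + neighbors(r, c))
            new[r, c] = np.tanh(W_out @ np.tanh(W_in @ x))
    return new

for _ in range(5):          # a few recurrent iterations toward a fixed point
    state = step()
print(state.shape)          # one scalar output per cell
```

Only the inputs differ from cell to cell; swapping the shared pair (W_in, W_out) for an MLP per cell would give the CMLP variant.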
The use of distributed learning in computational intelligence (CI) and machine learning (ML) paradigms has also been reported in the literature [14], [19], [20]. Although distributed learning methods capture the essence of decentralized computing by performing local computations at different 'nodes' and thus reducing the volume of information shared, these approaches either rely on a 'master' that makes decisions based on information from the rest of the nodes in the network [21], or require the nodes to be centrally located in one place and synchronized by a global clock. Learning may be carried out sequentially or in parallel by exploiting its inherent parallelism on a parallel computing platform. However, all of the nodes or cells are updated simultaneously for any change in the system, and hence learning is not independent among the cells. As such, most current approaches, even though distributed, carry out centralized synchronous learning regardless of the hardware/software platform used to implement them.

The major contributions of this paper are as follows:
1) A generic framework of CNN is presented, and a special case of supervised learning is demonstrated.
2) A decentralized asynchronous learning (DAL) framework for CNNs is developed and implemented on a heterogeneous CNN.
3) A CNN with the DAL framework is implemented as a wide area monitoring system (WAMS) for power systems.
4) It is shown that the neural networks in different cells of a CNN can each, concurrently, learn information embedded in data obtained from a complex system.

The remaining sections of the paper are arranged as follows: the architecture of the CNN is presented in Section II. Learning of learning systems is explained in Section III. The development of the proposed DAL for a heterogeneous CNN is explained in Section IV. The development of the WAMS based on a CNN with DAL is presented in Section V. Case studies with results and
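The centralized-synchronous versus decentralized-asynchronous distinction drawn above can be sketched as follows. Each 'cell' here is a toy scalar learner; the targets, learning rule, and thread-per-cell mapping are illustrative assumptions, not the paper's DAL implementation.

```python
# Sketch contrasting centralized synchronous learning (one loop updates
# every cell in lockstep) with decentralized asynchronous learning (each
# cell trains in its own thread, with no master and no global clock).
# Targets and the update rule are made-up illustrative values.
import threading

TARGETS = [0.2, 0.5, 0.8, 0.3]           # one per-cell target (assumed)

def train_cell(idx, weights, steps=200, lr=0.1):
    """A cell independently nudges its weight toward its local target."""
    for _ in range(steps):
        weights[idx] += lr * (TARGETS[idx] - weights[idx])  # local update

# Centralized synchronous: a single loop advances all cells together,
# so every cell must wait for every other cell at each step.
sync_w = [0.0] * len(TARGETS)
for _ in range(200):
    for i in range(len(sync_w)):
        sync_w[i] += 0.1 * (TARGETS[i] - sync_w[i])

# Decentralized asynchronous: each cell runs on its own schedule; no
# barrier synchronizes the updates, yet each still converges locally.
async_w = [0.0] * len(TARGETS)
threads = [threading.Thread(target=train_cell, args=(i, async_w))
           for i in range(len(TARGETS))]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(all(abs(w - t) < 1e-3 for w, t in zip(async_w, TARGETS)))  # True
```

Because each cell touches only its own entry, the threads need no locks; in the paper's setting the analogous property is that each cell learns from locally available data without a global synchronization point.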