A neural network with a single recurrent unit for associative memories based on linear optimization

Qingshan Liu a, Tingwen Huang b,*

a School of Automation, Southeast University, Nanjing 210096, China
b Department of Mathematics and Science, Texas A&M University at Qatar, Doha 5825, Qatar

Article history: Received 3 September 2012; Received in revised form 22 November 2012; Accepted 23 February 2013; Communicated by H. Jiang; Available online 28 April 2013

Keywords: Neural networks; Associative memories; Linear optimization; Global convergence

Abstract

Recently, some continuous-time recurrent neural networks have been proposed for associative memories based on optimizing linear or quadratic programming problems. In this paper, a simple and efficient neural network with a single recurrent unit is proposed for realizing associative memories. Compared with the existing neural networks for associative memories, the main advantage of the proposed model is that it has only one recurrent unit, which lowers the model complexity to the greatest extent. In the proposed neural network, each prototype pattern is stored in the connection weights between the input and hidden layers. In addition, the advanced performance of the proposed network is demonstrated by means of simulations of three numerical examples.

© 2013 Elsevier B.V. All rights reserved.

1. Introduction

Realizing associative memories with neural networks has received much attention in the literature [4,7,19,29]. In general, it is desired to store a set of prototype patterns as asymptotically stable equilibrium points of the neural networks. The basin of attraction of each desired memory pattern is distributed reasonably; i.e., the system trajectory converges to the prototype pattern that is closest, in the Hamming distance sense, to the given noisy pattern.
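The retrieval behavior described above can be sketched directly: given a noisy probe, an ideal associative memory returns the stored prototype at minimum Hamming distance. The following is a minimal illustration of that target behavior, not the paper's network; the function and variable names are ours.

```python
import numpy as np

def recall(probe, prototypes):
    """Return the stored prototype closest to the probe in Hamming distance.

    probe:      1-D array of +/-1 entries, a possibly noisy pattern.
    prototypes: 2-D array, one stored prototype per row.
    (Illustrative names; not from the paper.)
    """
    distances = np.sum(prototypes != probe, axis=1)  # Hamming distance to each row
    return prototypes[np.argmin(distances)]

# A noisy copy of the second pattern should recall that pattern.
stored = np.array([[1, 1, -1, -1],
                   [1, -1, 1, -1]])
noisy = np.array([1, -1, 1, 1])   # last bit flipped relative to stored[1]
print(recall(noisy, stored))       # recalls stored[1]
```

The neural-network approaches surveyed below realize this nearest-prototype mapping dynamically, with the prototypes encoded in connection weights rather than searched explicitly.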
In the literature, many continuous-time neural networks have been proposed for associative memories based on the energy function method and on optimizing linear or quadratic functions [11,22,23,31]. In particular, Tao et al. [22] investigated continuous-time and discrete-time neural networks for associative memories, in which a quadratic energy function was employed for realizing associative memories. A linear optimization neural network for associative memories was proposed in [23]. According to the number of interconnections among neurons, the existing neural networks for associative memories are mostly designed with O(n) model complexity, where n is the number of stored prototypes. In [22,23], the designed networks are composed of (n+1) recurrent units and are thus considered (n+1)-dimensional recurrent neural networks. As the number of prototype patterns increases and/or the recognition process must operate in real time, parallel algorithms and hardware implementations become desirable for associative memories [12,21,25]. However, the structure of the existing neural networks for associative memories with O(n) model complexity becomes complex as the number of stored prototype patterns increases, which restricts their real-time applications. For these reasons, in this paper, we are concerned with constructing a neural network with only a single recurrent unit for associative memories, which lowers the model complexity to the greatest extent.

Recurrent neural networks based on circuit implementation for optimization, and their engineering applications, have been widely investigated in the literature (see, e.g., [3,15,28], and the references therein). In this paper, the realization of associative memories is first converted to a 0-1 linear programming problem, and then the linear programming problem is solved using a recurrent neural network.
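The flavor of such a 0-1 linear program can be sketched as follows; this is an illustrative formulation under our own assumptions, not the paper's exact model. Selecting the best-matching prototype amounts to maximizing a linear similarity score sum_i c_i x_i subject to sum_i x_i = 1 and x_i in {0, 1}, whose optimum is the winner-take-all choice: the unit vector picking the prototype with the largest score.

```python
import numpy as np

def select_prototype(probe, prototypes):
    """0-1 LP view of prototype selection (illustrative sketch).

    Maximize  sum_i c_i * x_i   s.t.  sum_i x_i = 1,  x_i in {0, 1},
    where c_i is the inner product of the probe with prototype i.
    The vertices of the feasible set are the unit vectors e_i, so the
    optimum is attained at x = e_k with k = argmax_i c_i (winner-take-all).
    """
    c = prototypes @ probe            # similarity scores c_i
    x = np.zeros(len(c), dtype=int)
    x[np.argmax(c)] = 1               # optimal vertex: winner takes all
    return x

stored = np.array([[1, 1, -1, -1],
                   [1, -1, 1, -1]])
probe = np.array([1, -1, 1, 1])
print(select_prototype(probe, stored))  # -> [0 1]
```

Here the argmax is computed directly; in the paper's setting this selection is instead produced by the convergence of a recurrent neural network solving the linear program.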
Moreover, the linear programming problem can be considered as a model for the winner-take-all (WTA) operation, which selects the maximum from a collection of input signals. Neural networks for the WTA operation have been widely investigated in the literature [16,26,30]. In particular, several neural networks with only one recurrent unit have been presented for the WTA operation in [9,14,27].

Neurocomputing 118 (2013) 263-267. http://dx.doi.org/10.1016/j.neucom.2013.02.035

This work was partially supported by the National Priority Research Project NPRP 4-1162-1-181 funded by Qatar National Research Fund, the National Natural Science Foundation of China under Grant 61105060, the Program for New Century Excellent Talents in University (NCET-12-0114), the Natural Science Foundation of Jiangsu Province of China under Grant BK2011594, and the Fundamental Research Funds for the Central Universities.

* Corresponding author.
E-mail addresses: qsliu@seu.edu.cn (Q. Liu), tingwen.huang@qatar.tamu.edu (T. Huang).