1549-7747 (c) 2017 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information. This article has been accepted for publication in a future issue of this journal, but has not been fully edited. Content may change prior to final publication. Citation information: DOI 10.1109/TCSII.2017.2750065, IEEE Transactions on Circuits and Systems II: Express Briefs

An efficient neural network model for solving the absolute value equations

Amin Mansoori, Mohammad Eshaghnezhad, and Sohrab Effati

Abstract—In this paper, we obtain the exact solution of the absolute value equation (AVE). To the best of our knowledge, no previous attempt has been made to obtain the exact solution of this problem, although many numerical methods exist for computing an approximate solution of the AVE. Here, we present a neural network model in order to find the solution of the AVE analytically. Furthermore, the Lyapunov stability and the global convergence of the model are proved. Finally, the simulation results show the performance, the effectiveness, and the accuracy of the method.

Index Terms—Absolute value equations, Linear complementarity problem, Neural network, Globally stable in the sense of Lyapunov.

I. INTRODUCTION

IN this paper, we discuss the absolute value equation (AVE) [1] of the following form:

Ax − |x| = b,   (1)

where A ∈ R^{n×n}, b ∈ R^n, x ∈ R^n, and |x| denotes the componentwise absolute value of x. Notice that AVE (1) is a non-smooth non-linear equation due to the non-differentiability of the absolute value function. The significance of the absolute value equation (1) arises from the fact that the general NP-hard linear complementarity problem (LCP) [2], which subsumes many mathematical programming problems, can be formulated as an AVE (1). This implies that AVE (1) is NP-hard in its general form [1].
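As a concrete illustration of the form (1), the following short sketch (our own toy instance, not an example from the paper) builds a small AVE with a known solution and evaluates the residual Ax − |x| − b numerically; the matrix A, the point x_star, and the function name are illustrative choices only:

```python
import numpy as np

# Toy AVE instance (illustrative): pick A and a target solution x_star,
# then define b so that x_star solves Ax - |x| = b exactly.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)   # here b = [1, -7]

def ave_residual(x):
    """Residual of the absolute value equation: Ax - |x| - b."""
    return A @ x - np.abs(x) - b
```

By construction, `ave_residual(x_star)` vanishes, while the componentwise `np.abs` term is what makes the equation non-smooth.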
By utilizing this connection with LCPs, we are able to give some simple existence results for (1); for example, if all singular values of A exceed 1, then (1) has a unique solution for any right-hand side b [1]. Recently, some numerical methods have been developed to solve the AVE efficiently. In [3], Mangasarian presented the generalized Newton method. Feng and Liu [4] suggested and analyzed an improved generalized Newton method for solving the NP-hard absolute value equations. Edalatpour et al. [5], based on the Gauss-Seidel splitting, presented a new matrix splitting iteration method, called the generalized Gauss-Seidel iteration method, for solving large sparse absolute value equations. Haghani [6] introduced an extension of the well-known two-step Traub's method for solving absolute value equations.

Many neural network models have been proposed to solve mathematical optimization problems. The first one was given in [7], where Tank and Hopfield solved the linear programming problem via a proposed neural network. After 1986, many attempts were made to solve optimization problems with neural network models, and these efforts produced many articles [8]–[17]. Recently, fuzzy programming problems have also been solved via neural networks [18], [19]. The motivation of this paper is to give a new method to obtain the analytical, and hence exact, solution of the AVE. As stated before, and as far as we know, there is no study that obtains the analytical solution of the AVE.

A. Mansoori, M. Eshaghnezhad, and S. Effati are with the Department of Applied Mathematics, Ferdowsi University of Mashhad, Mashhad, Iran and also with the Center of Excellence of Soft Computing and Intelligent Information Processing, Ferdowsi University of Mashhad, Mashhad, Iran (e-mail: am.ma7676@yahoo.com, a-mansoori@um.ac.ir; m shaghnezhad@yahoo.com; s-effati@um.ac.ir).
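The generalized Newton method mentioned above can be sketched as follows. Writing |x| = D(x)x with D(x) = diag(sign(x)), one Newton step on the residual (A − D(x))x − b reduces to solving a linear system, giving the iteration x_{k+1} = (A − D(x_k))^{-1} b. The code below is a minimal sketch under this formulation; the instance data and tolerances are our own illustrative choices:

```python
import numpy as np

def generalized_newton(A, b, x0=None, tol=1e-10, max_iter=50):
    """Sketch of the generalized Newton iteration for Ax - |x| = b.

    With D(x) = diag(sign(x)), each step solves the linear system
        (A - D(x_k)) x_{k+1} = b.
    """
    x = b.copy() if x0 is None else x0.copy()
    for _ in range(max_iter):
        D = np.diag(np.sign(x))
        x_new = np.linalg.solve(A - D, b)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Toy instance: all singular values of A exceed 1, so the AVE has a
# unique solution for any right-hand side b [1].
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, -7.0])
x = generalized_newton(A, b)
```

On this instance the iteration settles quickly because the linearization (A − D(x)) is already exact once the sign pattern of the solution is identified.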
Here, by applying an equivalent form of the AVE, we give a neural network model that obtains the analytical solution of the problem. In fact, we consider the LCP form of the AVE in order to give an efficient method for solving the AVE. Furthermore, we prove that the model is stable in the sense of Lyapunov and globally convergent. The reported results demonstrate the efficiency of the methodology.

The rest of the paper is organized as follows. In the next section, the problem formulation and some equivalent forms are stated. In Section 3, the neural network model is introduced; we present the model based on the projection function in that section. The stability and convergence analysis are given in Section 4. Illustrative examples are given in Section 5. Finally, Section 6 gives a brief discussion of some issues concerning the proposed method and the findings of the paper.

II. PROBLEM FORMULATION

In this section, we consider the AVE (1) and its LCP reformulation, and prove some results about the connection between them.

Consider the AVE (1). It can be reformulated as the following LCP, which consists of finding a vector z ∈ R^n such that:

z ≥ 0,  Mz + q ≥ 0,  z^T(Mz + q) = 0,   (2)

where M ∈ R^{n×n} and q ∈ R^n are given by

M = (A + I)(A − I)^{−1},  q = ((A + I)(A − I)^{−1} − I)b,  z = (A − I)x − b.

Moreover, Mangasarian and Meyer in [1] show that if 1 is not an eigenvalue of M, then the LCP (2) can be reduced to the following AVE:

(M − I)^{−1}(M + I)x − |x| = (M − I)^{−1}q,

where x = (1/2)((M − I)z + q). In addition, we have the following results.

Lemma 2.1: Consider the following optimization problem:

0 = min_{x ∈ R^n} {((A + I)x − b)^T ((A − I)x − b) | (A + I)x − b ≥ 0,
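The LCP reformulation (2) can be sanity-checked numerically. The sketch below (our own toy instance; the projection-type dynamics used here are a generic textbook form, dz/dt = P(z − (Mz + q)) − z with P(u) = max(u, 0), and may differ from the exact model introduced in Section 3) builds M, q, and z from a small AVE, integrates the dynamics with forward Euler, and recovers x = (1/2)((M − I)z + q):

```python
import numpy as np

# Toy AVE instance (illustrative): A with all singular values > 1,
# known solution x_star, and b chosen so that x_star solves (1).
A = np.array([[4.0, 1.0], [1.0, 3.0]])
x_star = np.array([1.0, -2.0])
b = A @ x_star - np.abs(x_star)          # b = [1, -7]

I = np.eye(2)
M = (A + I) @ np.linalg.inv(A - I)       # M = (A + I)(A - I)^{-1}
q = (M - I) @ b                          # q = ((A + I)(A - I)^{-1} - I) b
z_star = (A - I) @ x_star - b            # z = (A - I)x - b

# Generic projection-type dynamics for LCP (2), forward Euler integration:
#   dz/dt = P(z - (Mz + q)) - z,  P(u) = max(u, 0) componentwise.
z = np.zeros(2)
h = 0.05
for _ in range(5000):
    z = z + h * (np.maximum(z - (M @ z + q), 0.0) - z)

# Recover the AVE solution from the LCP solution.
x_rec = 0.5 * ((M - I) @ z + q)          # x = (1/2)((M - I)z + q)
```

Here z_star satisfies all three conditions of (2), and the integrated state z converges to it, so the recovered x_rec matches the original AVE solution.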