International Journal of Electronics and Computer Science Engineering 1162
Available Online at www.ijecse.org ISSN 2277-1956/V2N4-1162-1170

CMOS VLSI Hyperbolic Tangent Function and its Derivative Circuits for Neuron Implementation

Hussein CHIBLE, Ahmad Ghandour
Doctoral School of Sciences and Technology (EDST), Lebanese University, Beirut, Lebanon

Abstract- The hyperbolic tangent function and its derivative are essential elements in analog signal processing, and especially in analog VLSI implementations of the neurons of artificial neural networks. The main requirements for such circuits are small silicon area and low power consumption. The objective of this paper is to study and design a CMOS VLSI hyperbolic tangent function circuit, together with its derivative circuit, for neural network implementation. A circuit is designed and the results are presented.

Keywords – Nonlinear function, derivative, analog signal processing, neurons, CMOS VLSI implementation

I. INTRODUCTION

Neural Networks (NN) are particularly attractive for VLSI implementation because each parallel element (neuron or synapse) is relatively simple, allowing the complete integration of large networks on a single chip. Moreover, as noted by several authors, neural networks are most efficiently implemented with asynchronous analog circuits [1-5]. Analog implementations are generally faster (due to their asynchronous operation) and require less hardware (a lower transistor count) than digital VLSI implementations. Analog VLSI neural networks are highly parallel analog systems that have been used and demonstrated in solving a wide range of real-world problems [6].

Reference [7] presents a number of different implementations of the first derivative of the sigmoid function. The implementation of the sigmoid function itself employs a powers-of-two piecewise linear approximation. The best implementation scheme for the derivative is selected on the basis of overall speed performance (circuit speed and training time) and hardware requirements.
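For reference, the hyperbolic tangent and its derivative are linked by the identity tanh'(x) = 1 - tanh²(x), which is the property that makes it attractive to derive both signals from a single circuit. The following minimal Python sketch (purely numerical and illustrative; it does not model the circuit proposed in this paper) verifies the identity against a finite-difference estimate:

```python
import math

def tanh_activation(x):
    """Hyperbolic tangent used as the neuron activation function."""
    return math.tanh(x)

def tanh_derivative(x):
    """Derivative via the identity tanh'(x) = 1 - tanh(x)^2."""
    t = math.tanh(x)
    return 1.0 - t * t

# Check the identity against a central finite difference.
h = 1e-6
for x in (-2.0, -0.5, 0.0, 0.5, 2.0):
    fd = (math.tanh(x + h) - math.tanh(x - h)) / (2.0 * h)
    assert abs(fd - tanh_derivative(x)) < 1e-6
```

Because the derivative is obtained from the activation value itself, a hardware block that outputs tanh can feed a simple squaring/subtraction stage to produce the derivative needed during training.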
The CMOS circuit implementation of the feed-forward neural primitives of a generic Multi-Layer Perceptron network is presented in [8]. The approach is based on current-mode computation and is aimed at a low-power/low-voltage circuit implementation; moreover, it is easily scalable to implement networks of any size. Experimental results are reported.

A new CMOS VLSI implementation of an asymmetric programmable sigmoid neural activation function, as well as of its derivative, is presented in [9]. It consists of two coupled PMOS and NMOS differential pairs with different programmable bias currents that set the upper and lower limits of the sigmoid. The circuit works in the weak inversion region, for low power consumption and an exponential envelope, or in strong inversion to achieve higher speeds. The results obtained from the theoretical transfer function and from simulations of the circuit implemented in AMI's 0.35 µm technology show a very good match.

In [10], a piecewise linear recursive approximation scheme is applied to the computation of the sigmoid function and its derivative in artificial neurons with learning capability. The scheme provides high approximation accuracy with very low memory requirements. The recursive nature of the method allows the accuracy/computation-delay trade-off to be controlled by modifying a single parameter, with no impact on the occupied area. The error analysis shows an accuracy comparable to or better than other reported piecewise linear approximation schemes. No multiplier is needed for a digital implementation of the sigmoid generator, and only one memory word is required to store the parameter that optimizes the approximation.

In this paper, a CMOS VLSI implementation of the sigmoid function, i.e., the hyperbolic tangent function, and of its derivative is proposed and presented. In particular, this circuit can be used for the neuron module of the
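The differential-pair behaviour exploited by circuits such as [9] can be sketched with the standard textbook model: in weak inversion, the differential output current of a MOS pair follows approximately I_out = I_b · tanh(κ·ΔV / (2·U_T)), where U_T ≈ 26 mV is the thermal voltage and κ the subthreshold slope factor. The parameter values below are generic assumptions for illustration, not values taken from [9] or from the circuit designed in this paper:

```python
import math

UT = 0.026    # thermal voltage at room temperature [V] (assumed)
KAPPA = 0.7   # subthreshold slope factor (typical assumed value)
IB = 1e-9     # tail bias current [A] (illustrative)

def diffpair_iout(dv):
    """Differential output current of a weak-inversion MOS
    differential pair (textbook model): IB * tanh(k*dv / (2*UT))."""
    return IB * math.tanh(KAPPA * dv / (2.0 * UT))

def diffpair_gm(dv):
    """Transconductance dI/dV of the same pair: a bell-shaped curve
    proportional to 1 - tanh^2, i.e. the derivative of the sigmoid."""
    t = math.tanh(KAPPA * dv / (2.0 * UT))
    return IB * KAPPA / (2.0 * UT) * (1.0 - t * t)
```

The bell-shaped transconductance is the reason a single differential pair can supply both the tanh activation (as output current) and, with suitable biasing, a signal proportional to its derivative.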