DATA MINING AND NEURAL NETWORKS: EXTRACTING KNOWLEDGE FROM HIGH BLOOD PRESSURE PATIENT DATA

Mbuyi Mukendi Eugène 1, Kafunda Katalayi Pierre 2, Mbuyi Badibanga Steve 3, Mbuyi Mukendi Didier 4
1 Professor, University of Kinshasa, Department of Computer Sciences, DR Congo
2 University of Kinshasa, Department of Computer Sciences, DR Congo
3 University of Kinshasa, Department of Computer Sciences, DR Congo
4 University of Kinshasa, Department of Computer Sciences, DR Congo
Kinshasa Computer Science Laboratory

Summary

Data mining is a set of methods and techniques for exploring and analyzing databases automatically or semi-automatically in order to detect rules, associations, unknown or hidden trends, and specific structures that capture most of the useful information while reducing the amount of data. It is a process of extracting valid and tractable knowledge from large amounts of data. In this paper, we present a contribution on the extraction of useful knowledge from databases of patients with high blood pressure from one of the hospitals in Kinshasa (DR Congo), using multilayer neural networks.

Key words: data mining, extraction of knowledge, database, neural network, algorithm, high blood pressure.

I. INTRODUCTION [1], [2], [3], [4], [5], [7], [8], [9], [10]

Learning can be seen as the problem of updating the connection weights within a network in order to achieve the requested task. The learning rule allows the network to evolve over time by taking prior experience into account: the connection weights are modified according to previous results so as to find the model that best fits the given examples. Neural networks are divided into two main classes, namely supervised learning networks and unsupervised learning networks; a third class is called hybrid learning networks. In this work, however, we focus on supervised learning. We present some theoretical concepts on learning algorithms, in particular the back-propagation algorithm, which is used for extracting knowledge from high blood pressure data.
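The idea of learning as an iterative correction of connection weights can be illustrated with the simplest supervised rule, the perceptron update. This is a minimal sketch, not the algorithm used in the paper (which is back-propagation); the function names (`predict`, `train_step`) and the AND-function example are our own illustrative choices.

```python
def predict(weights, x):
    """Output of a single unit: weighted sum followed by a hard threshold."""
    s = sum(w * xi for w, xi in zip(weights, x))
    return 1 if s >= 0 else 0

def train_step(weights, x, target, learning_rate=1):
    """Perceptron rule: shift each weight in proportion to the error."""
    error = target - predict(weights, x)
    return [w + learning_rate * error * xi for w, xi in zip(weights, x)]

# Learn the logical AND function; x[0] = 1 acts as a constant bias input.
data = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
weights = [0, 0, 0]
for _ in range(20):                  # repeated passes over the examples
    for x, target in data:
        weights = train_step(weights, x, target)

print([predict(weights, x) for x, _ in data])  # -> [0, 0, 0, 1]
```

After a few passes the weights stop changing: the network has found a model consistent with all the given examples, which is exactly the behaviour the learning rule is meant to produce.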
This algorithm is applied in the neural network implementation provided by Microsoft, i.e., the Microsoft Neural Network algorithm used in Chapter II of this work.

I.1. Back-propagation algorithm

A. Definition

1) A function \sigma_k(x) is called a sigmoid of parameter k > 0 if it is defined as follows:

\sigma_k(x) = \frac{e^{kx}}{1 + e^{kx}} = \frac{1}{1 + e^{-kx}}   (1)

This is an infinitely differentiable approximation of the Heaviside threshold function; the approximation improves as k becomes large. In this work we take k = 1, so that

\sigma(x) = \frac{e^{x}}{1 + e^{x}} = \frac{1}{1 + e^{-x}}   (2)

The derivative of this function is used in the weight-update rule of the back-propagation algorithm:

\sigma'(x) = \frac{e^{-x}}{(1 + e^{-x})^{2}} = \sigma(x)\,(1 - \sigma(x))   (3)

2) An n-input real unit cell takes a real input vector x = (x_1, x_2, \ldots, x_n), is defined by the synaptic weights w = (w_1, w_2, \ldots, w_n), and computes its output y with the following formula:

y = \sigma\left(\sum_{i=1}^{n} w_i x_i\right), \quad \text{with } \sigma(u) = \frac{1}{1 + e^{-u}}

A multilayer perceptron (MLP) is a neural network with hidden layers built from such well-defined unit cells.

B. Algorithm principle

As with the linear perceptron, the principle is to minimize an error function. The next step is to compute the contribution of each synaptic weight to that error. Let an MLP be defined by an architecture with n inputs and p outputs, and let w be the vector of synaptic weights associated with the cells c_s and their outputs x_s.

IJCSI International Journal of Computer Science Issues, Vol. 9, Issue 3, No 1, May 2012
ISSN (Online): 1694-0814, www.IJCSI.org, p. 449
Copyright (c) 2012 International Journal of Computer Science Issues. All Rights Reserved.
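The pieces above fit together as follows: each unit cell applies the sigmoid of equation (2) to its weighted input sum, and the back-propagation rule uses the identity of equation (3), \sigma'(x) = \sigma(x)(1 - \sigma(x)), to push error gradients from the output layer back through the hidden layer. The sketch below is a minimal illustration of this principle on a two-layer MLP, not the Microsoft Neural Network implementation discussed in the text; the network shape, learning rate, and example data are our own assumptions.

```python
import math

def sigmoid(x):
    """Equation (2): sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_prime_from_output(s):
    """Equation (3) in output form: sigma'(x) = sigma(x) * (1 - sigma(x))."""
    return s * (1.0 - s)

def forward(W1, W2, x):
    """Each unit cell computes sigma(sum_i w_i * x_i); W1 hidden, W2 output."""
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sigmoid(sum(w * hi for w, hi in zip(row, h))) for row in W2]
    return h, y

def backprop_step(W1, W2, x, target, lr=0.5):
    """One gradient-descent update of all weights for a single example."""
    h, y = forward(W1, W2, x)
    # Output deltas: (y_k - t_k) * sigma'(net_k), via the s(1-s) identity.
    d_out = [(yk - tk) * sigmoid_prime_from_output(yk)
             for yk, tk in zip(y, target)]
    # Hidden deltas: the output deltas propagated back through W2.
    d_hid = [sigmoid_prime_from_output(hj) *
             sum(W2[k][j] * d_out[k] for k in range(len(W2)))
             for j, hj in enumerate(h)]
    for k in range(len(W2)):            # update output-layer weights
        for j in range(len(W2[k])):
            W2[k][j] -= lr * d_out[k] * h[j]
    for j in range(len(W1)):            # update hidden-layer weights
        for i in range(len(W1[j])):
            W1[j][i] -= lr * d_hid[j] * x[i]

# Tiny demonstration: drive the output toward 1 for one input pattern.
W1 = [[0.2, -0.1], [0.4, 0.3]]   # hidden layer: 2 units, 2 inputs
W2 = [[0.5, -0.2]]               # output layer: 1 unit
x, target = [1.0, 0.5], [1.0]
err_before = (forward(W1, W2, x)[1][0] - target[0]) ** 2
for _ in range(50):
    backprop_step(W1, W2, x, target)
err_after = (forward(W1, W2, x)[1][0] - target[0]) ** 2
print(err_before, err_after)
```

Each call to `backprop_step` minimizes the squared error exactly as described in section B: the contribution of every synaptic weight to the error is computed layer by layer, then each weight is moved against its gradient, so the error after training is strictly smaller than before.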