Concurrent Neural Networks for Speaker Recognition

Angel Cataron
Dept. of Electronics and Computers, TRANSILVANIA University of Brasov, Romania
email: Cataron@vega.unitbv.ro

Victor-Emil Neagoe
University POLITEHNICA of Bucharest, Faculty of Electronics and Telecommunications, Bucharest, Romania
Tel: +40 92 302998, email: Vic@gitprai.pub.ro

ABSTRACT

We propose a new recognition model called Concurrent Neural Networks (CNN), representing a winner-takes-all collection of neural networks. Each network of the system is trained individually to give the best results for one class only. We have applied this model to the task of speaker recognition. We performed distinct speaker recognition experiments using three variants of the basic component of the CNN system: the Multi-Layer Perceptron (MLP), the Time-Delay Neural Network (TDNN), and the Kohonen Self-Organizing Map (SOM). We used two databases: a clean-speech database called SPEECHDATA and a telephone-speech database called TELEPHDATA. The experiments showed a significant increase in recognition score with the proposed CNN model compared to the use of a single neural network for the whole speaker recognition task. The SOM performed best in our experiments, yielding an increase of about 38% for SPEECHDATA and about 30% for TELEPHDATA.

I. INTRODUCTION

Speech and speaker recognition experiments have shown that classical neural network models do not perform satisfactorily on these tasks. MLP, TDNN and SOM provide acceptable recognition accuracies for isolated word recognition, but performance decreases dramatically for speaker recognition tasks [2]. On the other hand, reducing the number of speakers leads to better results, and we tried to exploit this observation. We proposed and tested a composite network model, the concurrent neural network, which consists of a collection of small, specialized neural networks.

II. CONCURRENT NEURAL NETWORKS

Concurrent neural networks (CNN) are a collection of neural networks that use a global winner-takes-all strategy. Each network is trained to correctly classify the patterns of one class, and the number of networks equals the number of classes. CNN training is supervised as a whole, but each individual network is trained with its own particular algorithm. The CNN model is depicted in Figure 1.
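The decision scheme above (one model per class, trained only on that class's patterns, combined by a global winner-takes-all rule) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the per-class "networks" are reduced to single centroid scorers standing in for the individually trained MLP/TDNN/SOM components, and the class and method names (`ConcurrentNet`, `fit`, `classify`) are our own.

```python
import math

def euclidean(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

class ConcurrentNet:
    """Winner-takes-all collection with one model per class (sketch).

    Each per-class model is reduced here to a single centroid, a
    stand-in for an individually trained network; only the global
    winner-takes-all decision rule follows the CNN scheme.
    """

    def __init__(self):
        self.centroids = {}  # class label -> centroid feature vector

    def fit(self, samples):
        # samples: list of (label, feature_vector) pairs. Each class's
        # model is trained only on that class's patterns.
        by_class = {}
        for label, vec in samples:
            by_class.setdefault(label, []).append(vec)
        for label, vecs in by_class.items():
            dim = len(vecs[0])
            self.centroids[label] = [
                sum(v[i] for v in vecs) / len(vecs) for i in range(dim)
            ]

    def classify(self, vec):
        # Winner-takes-all: the class whose model matches the input
        # best wins (here, smallest distance to the class centroid).
        return min(self.centroids,
                   key=lambda c: euclidean(vec, self.centroids[c]))
```

In use, each class's model sees only its own training patterns, and the final label comes from the single best-scoring model:

```python
net = ConcurrentNet()
net.fit([("A", [0.0, 0.0]), ("A", [0.2, 0.1]), ("B", [5.0, 5.0])])
net.classify([0.1, 0.0])  # -> "A"
```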