Unsupervised Hybrid Learning Model (UHLM) as a Combination of Supervised and Unsupervised Models

Andrey Gavrilov, Sungyoung Lee
Computer Engineering Department, Kyung Hee University, 1 Seocheon-dong, Giheung-gu, Yongin-si, Gyeonggi-do, 446-701, Republic of Korea

Abstract – In this paper a novel paradigm, the Unsupervised Hybrid Learning Model, is proposed, based on using an unsupervised learning model as a teacher for a supervised learning model. This approach is a generalization of the hybrid neural model MLP-ART2 proposed by the authors in [7, 8, 9]. We also propose a novel reinforcement learning architecture based on our paradigm and the multilayer perceptron (MLP). In this architecture the MLP works in two modes: attraction of the output vector to the target, and repulsion from the target, according to the reward. We also propose the model MLP-ART-RL, based on a combination of the model MLP-ART2 and reinforcement learning.

I. INTRODUCTION

To create general intelligence, in particular for intelligent robots, learning models must satisfy the following requirements:
- fast learning,
- fast recall,
- incremental learning,
- unsupervised or reinforcement learning.

Practically all neural network models cover only part of these features. Hence in recent years a new approach has become popular, based on hybrid neural networks that typically combine two different neural paradigms to obtain a cumulative result [1, 2, 3, 4]. Such hybrid neural networks may be viewed as part of the wider concept of hybrid intelligent systems, which combine different paradigms of knowledge representation and processing [5, 6]. In practice, however, these combinations of neural models address particular technical problems of concrete applications. In this paper we propose a novel approach that combines two neural models, one of them a supervised learning model and the other an unsupervised learning model; both models may be varied over a wide range.
This model, inspired by investigations of the brain and mind, is a result of generalization of the hybrid neural network MLP-ART2 proposed by us earlier [7, 8, 9].

II. MAIN CONCEPTS OF THE UNSUPERVISED HYBRID LEARNING MODEL

The suggested hybrid model consists of two models, as shown in Fig. 1. Model 1 is based on a multilayer neural network trained by the error back-propagation (EBP) algorithm [10]. It maps the input feature space onto an output feature space more suitable for classification, clustering, or invariant recognition. It is well known that an MLP can provide an arbitrary transformation of the primary features [11], so it can be trained to produce invariant features as outputs. Model 2 provides clustering and classification, and in addition maps the recognized class (or cluster) onto the output feature space of model 1; that is, it produces the desired output pattern of model 1 for the EBP algorithm. Model 2 may thus be viewed as a teacher for model 1, adapting it to relatively small transformations of the input patterns. Note that, unlike in a classical MLP, the EBP algorithm in this model aims only to transform the feature space in a small number of iterations, not to reduce the error to a very small value.

Fig. 1. Structure of the proposed Unsupervised Hybrid Learning Model: the supervised learning model (model 1) feeds the unsupervised learning model (model 2), and the state of model 2 is mapped onto the target output vector of model 1 (teaching).

III. SOME IMPLEMENTATIONS OF THE UNSUPERVISED HYBRID LEARNING MODEL

A. Model MLP-ART2

One implementation of this paradigm was proposed and investigated in [7, 8, 9]. This model consists of a multilayer perceptron with error back-propagation (EBP) as model 1 and ART-2 [12] as model 2. The MLP converts the primary feature space into a secondary feature space of lower dimension and with more
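The teacher loop described above can be illustrated with a minimal Python sketch. All class names, network sizes, and parameters here are illustrative assumptions, and the clusterer is a much-simplified prototype-based stand-in for ART-2 (cosine similarity with a vigilance threshold), not the full ART-2 algorithm; the point is only the interaction: model 2 clusters the MLP output and hands its cluster prototype back as the EBP target for model 1, which runs just a few iterations per pattern.

```python
import numpy as np

class SimpleMLP:
    """Model 1: one-hidden-layer perceptron trained by error back-propagation.
    Only a few EBP iterations run per pattern: the goal is to gently
    transform the feature space, not to drive the error to zero."""
    def __init__(self, n_in, n_hid, n_out, lr=0.5, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hid))
        self.W2 = rng.normal(0.0, 0.5, (n_hid, n_out))
        self.lr = lr

    @staticmethod
    def _sig(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        self.h = self._sig(x @ self.W1)   # hidden activations
        self.y = self._sig(self.h @ self.W2)  # secondary features
        return self.y

    def backprop(self, x, target, iters=3):
        for _ in range(iters):
            y = self.forward(x)
            d_out = (y - target) * y * (1.0 - y)
            d_hid = (d_out @ self.W2.T) * self.h * (1.0 - self.h)
            self.W2 -= self.lr * np.outer(self.h, d_out)
            self.W1 -= self.lr * np.outer(x, d_hid)

class SimpleClusterer:
    """Model 2 (simplified ART-like stand-in): one prototype per cluster.
    An input joins the best-matching cluster if cosine similarity exceeds
    the vigilance; otherwise a new cluster is created."""
    def __init__(self, vigilance=0.9, rate=0.3):
        self.protos = []
        self.vigilance = vigilance
        self.rate = rate

    def cluster(self, v):
        if self.protos:
            sims = [float(v @ p) / (np.linalg.norm(v) * np.linalg.norm(p) + 1e-12)
                    for p in self.protos]
            k = int(np.argmax(sims))
            if sims[k] >= self.vigilance:
                self.protos[k] += self.rate * (v - self.protos[k])
                return k
        self.protos.append(v.copy())
        return len(self.protos) - 1

class UHLM:
    """Model 2 acts as teacher: the prototype of the recognized cluster
    becomes the desired output pattern for model 1's EBP step."""
    def __init__(self, n_in, n_hid, n_out, vigilance=0.9):
        self.mlp = SimpleMLP(n_in, n_hid, n_out)
        self.teacher = SimpleClusterer(vigilance)

    def present(self, x):
        y = self.mlp.forward(x)          # map input to secondary features
        k = self.teacher.cluster(y)      # model 2 recognizes a cluster
        target = self.teacher.protos[k]  # cluster prototype = EBP target
        self.mlp.backprop(x, target, iters=3)
        return k
```

Presenting the same pattern repeatedly returns the same cluster index, since the MLP output is pulled toward that cluster's prototype while the prototype drifts toward the output; a sufficiently novel output falls below the vigilance and opens a new cluster.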