Efficient Knowledge Representation for Flexible Intelligent Systems

Ahmed Khorsi
Department of Computer Science, Djillali Liabes University, Bel Abbes, 22000, Algeria
ahmed-khorsi@univ-sba.dz

Abstract

In this paper we propose a new memory model for knowledge-based systems that is simultaneously a machine learning structure and a knowledge representation model. As a machine learning structure, it allows unlimited flexibility: no restrictive architecture is imposed at the outset (compare decision trees [13, 6, 5, 2]). Classification can be performed with incomplete vectors, where the most likely class is assigned to a vector with missing attributes. Viewed as a knowledge model, basic knowledge is easily specified graphically. Inference is defined by rules expressed in the same manner, where existing sub-instances are used to generate new connections and entities. Inference over existing knowledge is described in two algorithms. Our approach is mainly based on the representation of the concept of context. Our model brings together the advantages of symbolic knowledge representation, namely human-to-computer knowledge coding, and those of machine learning structures, namely ease of efficient coding of inference to perform so-called intelligent tasks such as pattern recognition, prediction, and other well-known tasks. Its graphical representation allows visualization of both the dynamic and static sides of the model (i.e., inference and knowledge).

Keywords: Knowledge representation, machine learning, memory modeling.

1 Introduction

Psycho-connectionist models, such as frames and semantic nets, belong to the class of knowledge representations called structural models. Indeed, the model proposed in this paper may seem to be a simple variant of semantic nets [17, 9], but recall that the latter represents knowledge as follows: objects are represented by nodes with symbolic labels.
A node is bound to other nodes via labeled edges, where each label textually and explicitly designates the nature (semantics) of the unique relation between the two objects. On the other hand, when knowledge is acquired by an explanation-like technique, simplifying uncertainty into just two types of links, by-default and absolute, as semantic nets do, may suffice to avoid contradictions in reasoning; in machine learning applications, however, such simplification may yield unsatisfactory results. While researchers in KR attempt to simplify knowledge expression and give less attention to computational performance [4], those in machine learning [16] attempt to design structures that make computation more efficient. Learning automatically means, in practice, learning from examples, which implies that the acquired knowledge is uncertain. Hence, each structure in ML has its own way of representing this uncertainty [14, 12]. The model proposed in [8], which will be described in Section 2, is an attempt to endow a knowledge representation with a simple uncertainty model. Section 3 shows how our model represents a simple boolean function and how its architecture allows easy addition of new knowledge. Section 4 shows how inference rules are defined. Section 5 details the representation of the weights; this representation enables unsupervised learning, as many machine learning structures do.
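To make the semantic-net representation described above concrete, here is a minimal sketch. The class name, method names, and the example facts (canary, bird, fly) are illustrative assumptions, not taken from the paper; the `default` flag is only a crude stand-in for the by-default/absolute link distinction mentioned above.

```python
# Minimal semantic net: nodes are symbolic labels; each labeled edge
# explicitly names the relation between exactly two objects.
class SemanticNet:
    def __init__(self):
        # (source, relation label) -> (target, is_default)
        self.edges = {}

    def relate(self, source, label, target, default=False):
        """Connect two objects with a labeled edge; mark it as a
        by-default belief or an absolute one."""
        self.edges[(source, label)] = (target, default)

    def query(self, source, label):
        """Return the target of the relation, or None if absent."""
        entry = self.edges.get((source, label))
        return entry[0] if entry else None

net = SemanticNet()
net.relate("canary", "is-a", "bird")            # absolute link
net.relate("bird", "can", "fly", default=True)  # by-default link
print(net.query("canary", "is-a"))              # -> bird
```

The binary default/absolute marking is exactly the simplification criticized above: it leaves no room for graded uncertainty of the kind a machine learning structure needs.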