Proceedings of the First International Workshop on Intelligent Adaptive Systems (IAS-95),
Ibrahim F. Imam and Janusz Wnek (Eds.), pp. 38-51, Melbourne Beach, Florida, 1995.

Constructive Induction-based Learning Agents: An Architecture and Preliminary Experiments

Eric Bloedorn and Janusz Wnek
Center for Machine Learning and Inference
George Mason University
4400 University Dr., Fairfax VA 22030, USA
{bloedorn, wnek}@aic.gmu.edu

Abstract

This paper introduces a new type of intelligent agent called a constructive induction-based learning agent (CILA). This agent differs from other adaptive agents in that it can not only learn how to assist a user in some task, but also incrementally adapt its knowledge representation space to better fit the given learning task. The agent's ability to autonomously make problem-oriented modifications to the originally given representation space is due to its constructive induction (CI) learning method. Selective induction (SI) learning methods, and agents based on them, rely on a good representation space, i.e., one free of misclassification noise, inter-correlated attributes, and irrelevant attributes. Our proposed CILA has methods for overcoming all of these problems. In agent domains with poor representations, a CI-based learning agent will learn more accurate rules and be more useful than an SI-based learning agent. This paper presents an architecture for a CI-based learning agent and an empirical comparison of CI and SI methods on a set of six abstract domains involving DNF-type (disjunctive normal form) descriptions.

Key words: intelligent agents, constructive induction, multistrategy learning.

1. Introduction

The goal of research in intelligent agents is to construct software that can provide individualized assistance to users.
Two approaches that have been used in the past are 1) to require the end-user to provide the necessary skills by programming the agent, or 2) to provide the agent with a priori domain-specific knowledge about the application and user. The first approach is too difficult for most users, and the second is too hard for application developers, who must accurately predict the current and future needs of users (Maes, 1994). Another proposed approach is to build into the agent an ability to learn the required skills from experience (Dent, 1992; Maes, 1994). In this method, the agent gains competence by interacting