IMPROVING CONTROLLER PERFORMANCE USING GENETICALLY EVOLVED STRUCTURES WITH CO-ADAPTATION

DOMINIC SEARSON, MARK WILLIS AND GARY MONTAGUE
Department of Chemical and Process Engineering
University of Newcastle
Newcastle Upon Tyne, NE1 7RU, England.
d.p.searson@ncl.ac.uk

keywords: genetic programming, co-adaptation, controller design

ABSTRACT

This article describes the use of a genetic programming algorithm that utilises co-adaptation to concurrently evolve two controller sub-structures. These sub-structures are used to improve the performance of an existing PID controller on a simulated process. The process is characterised by a long deadtime and is subject to measurable disturbances. The standard genetic programming algorithm and its extension to multiple-program individuals are detailed, followed by the process simulation, control configuration and genetic programming run parameters. Finally, results showing that a performance-enhancing augmented controller can be satisfactorily evolved using this method are presented, together with some comments and recommendations for further work.

INTRODUCTION

Genetic programming (GP) [1] is a development of genetic algorithms [2] that allows far greater representational flexibility by symbolically encoding the structures of the evolved solutions as tree-structured programs in a domain-specific language. Randomly generated populations of these symbolic expressions can be manipulated in a manner analogous to evolution in biological systems. The versatility of GP lies in the ability of human designers to reformulate problems in the form of questions that require a computer program as an answer. GP has achieved near-optimal solutions for a number of problems in engineering and physics. There have been a number of articles describing the use of GP to design control algorithms in several fields of engineering. For example,
in [3] Howley used GP to evolve control algorithms for a simulated mechanical two-link payload manipulator, and in [4] Dracopoulos used GP to derive an attitude control law that successfully de-tumbled a simulated spinning satellite, using a Lyapunov function to show that the derived law was stable. Recently, the authors [5] demonstrated the use of GP to design controllers that offered similar, or marginally better, performance to that of PID controllers for two simple SISO chemical process systems. However, the test systems presented could be controlled well by the PID controllers, so there was in fact very little margin for the evolved controllers to improve performance. Another limitation of the scheme presented in [5] is that the evolved expressions could generate only a single control signal, precluding their use in many applications of interest. In this article we investigate the use of concurrently evolved structures to augment an existing controller (in order to improve its performance) rather than to replace it. In the example we present, the existing PID loop is augmented by two controller structures. One structure is designated as the feedforward element of the system and is intended to cancel out load disturbances. The other is employed as a setpoint modifier and is intended to improve the servo characteristics of the system. The machine learning technique of co-adaptation [6] is employed to derive a controller in which each evolved sub-structure operates independently of the other in its interactions with its environment. Our implementation of the co-adaptation method evolves heterogeneous controller sub-structures as a ‘team’. Only teams that perform well are propagated through to a final solution, and both structures must be utilised to solve the problem satisfactorily.
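The augmented configuration described above can be sketched in code. This is a minimal illustrative sketch only: the discrete PID form, its gains, and the two placeholder functions standing in for the evolved sub-structures are assumptions for illustration, not the structures actually evolved in this article (which are found by the GP algorithm).

```python
# Sketch of the augmented control configuration: an existing PID loop
# plus an evolved setpoint modifier and an evolved feedforward element.
# setpoint_modifier() and feedforward() are hypothetical placeholders
# for the GP-evolved sub-structures.

def pid(error, state, kc=1.0, ti=10.0, td=0.0, dt=1.0):
    """Discrete positional PID; `state` carries the integral and previous error."""
    integral, prev_error = state
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = kc * (error + integral / ti + td * derivative)
    return u, (integral, error)

def setpoint_modifier(setpoint, y):
    # Placeholder for the evolved structure that improves servo response.
    return setpoint

def feedforward(disturbance):
    # Placeholder for the evolved structure that cancels load disturbances.
    return -disturbance

def augmented_control(setpoint, y, disturbance, state):
    r_mod = setpoint_modifier(setpoint, y)   # evolved setpoint modifier
    u_fb, state = pid(r_mod - y, state)      # existing PID feedback loop
    u_ff = feedforward(disturbance)          # evolved feedforward element
    return u_fb + u_ff, state                # combined control signal

# One control interval: unit setpoint, measured disturbance of 0.2.
u, state = augmented_control(setpoint=1.0, y=0.0, disturbance=0.2,
                             state=(0.0, 0.0))
```

The key point of the arrangement is that the two evolved sub-structures act through separate channels (setpoint path and additive feedforward), so each interacts with the process independently of the other, as the co-adaptation scheme requires.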
The ideas behind standard genetic programming are first briefly outlined, followed by its extension to co-adaptation using multiple-program individuals. Then the configuration of a simulation test system is presented, followed by some results and a discussion.

DESCRIPTION OF GP

GP works by maintaining a population of individuals, each of which is a computer program that can be ‘executed’ to give a potential solution to the designated problem. These candidate programs are encoded as parse trees (rather than lines of computer code). Figure 1 depicts a very simple example of a tree-structured program. The result of the program is usually returned at the top of the tree. In this example the program result is (3-b)+2. Each node of the tree is either a terminal node or a function node. Terminals are usually specified to be