Autonomous Recovery from Hostile Code Insertion using Distributed Reflection

Catriona M. Kennedy and Aaron Sloman
School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, Great Britain

In Journal of Cognitive Systems Research, Vol. 4, No. 2, 2003, pp. 89–117.

Abstract

In a hostile environment, an autonomous cognitive system requires a reflective capability to detect problems in its own operation and recover from them without external intervention. We present an architecture in which reflection is distributed so that components mutually observe and protect each other, and where the system has a distributed model of all its components, including those concerned with the reflection itself. Some reflective (or “meta-level”) components enable the system to monitor its execution traces and detect anomalies by comparing them with a model of normal activity. Other components monitor the “quality” of performance in the application domain. Implementation in a simple virtual world shows that the system can recover from certain kinds of hostile code attacks that cause it to make wrong decisions in its application domain, even if some of its self-monitoring components are also disabled.

Key words: anomaly, immune systems, meta-level, quality-monitoring, reflection, self-repair.

1 Introduction

There are many situations where an autonomous system should continue operating in the presence of damage or intrusions without human intervention. Such a system requires a self-monitoring (reflective) capability in order to detect and diagnose problems in its own components and to take recovery action to restore normal operation. The simplest way to make an autonomous system reflective is to include a layer in its architecture to monitor behaviour patterns of its components and detect deviations from expectancy (anomalies). There are situations, however, where the monitoring layer will not detect anomalies in itself (e.g.
it cannot detect that it has just been deleted or replaced with hostile code). In previous papers [Kennedy, 1999, Kennedy, 2000] we called this problem “reflective blindness”.

Traditional ways of improving resistance to attacks on a monitoring layer involve adding features to an existing software architecture and do not consider the software as a “whole” intelligent system. Examples include replication and voting [Cristian, 1991], design diversity [Hilford et al., 1997] and program self-checking methods (e.g. [Harman and Danicic, 1995]). In contrast, our research investigates novel ways of integrating existing algorithms and techniques to form an enhanced coherent architecture. The aim is to build a complete autonomous system with some cognitive capability that can protect itself in a hostile environment. We use the term “reflection” in the sense of “meta-management” [Beaudoin, 1994], which involves (among other things) the ability of a cognitive system to detect problems in its internal processing and correct them.

Autonomous response and reconfiguration in the presence of unforeseen problems is already a fairly established area in remote vehicle control systems, which have to be self-sufficient (see for example [Pell et al., 1997]). However, such systems do not specify how a system should recover from problems in its self-monitoring and control software, or from the insertion of hostile code intended to take over control of the system.

1.1 Distributed Reflection

We address the problem of reflective blindness by distributing the reflection over multiple components so that all components are subject to monitoring from within the system. This does not entirely eliminate the “blind-