Emotion as the basis for computational autonomy in cognitive agents

Darryl N. Davis

Neural, Emergent and Agent Technology Research Group,
Department of Computer Science, University of Hull,
Kingston-upon-Hull, HU6 7RX, U.K.
D.N.Davis@dcs.hull.ac.uk

Abstract. Many agent architectures are competency-based designs related to tasks in specific domains; more general frameworks map across tasks and domains. These types of agent architecture fit well with the weak notion of agency, i.e. they define autonomous systems that perform specific roles within a real or abstract environment. However, many of these approaches run into a problem when applied to the design of a mind analogous in type to the human mind: the foundational concepts underlying agency are no longer adequate for stronger notions of agency. Four foundations of the weak notion of agency are autonomy, social ability, reactivity and pro-activeness. These tend to be defined in terms of interactions between an agent's environment and the motivational qualities of the agent. From the perspective of developing intelligent computational systems this is more than acceptable. However, these definitions are shallow and insufficient for agent designs (and architectures) defined with regard to some aspect of cognitive functioning. There is no core to these agents other than an information processing architecture. From the perspective of developing or simulating functioning (human-like) minds this is problematic: such models are in effect autistic. This paper presents an emotion-based core that underpins an agent's autonomy, social behaviour, reactivity and pro-activeness. As an agent functions it is sometimes called upon to monitor its internal interactions and relate the nature of these wholly internal functions to tasks in its external environment. The impetus for change within itself (i.e. to adapt or learn) is manifested as an unwanted combination (disequilibrium) of emotions. The modification of an agent's internal environment is described in terms of an emotion-motivated mapping between its internal and external environments. To rephrase a previous revolution in artificial intelligence: human-like intelligence requires embodiment of the supporting computational infrastructure not only in terms of an external environment but also in terms of an internal (emotional) environment.
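The disequilibrium mechanism described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the emotion names, the scalar valence representation, and the threshold are all assumptions made here for illustration.

```python
# Illustrative sketch (assumed, not from the paper): an agent whose impetus
# to adapt arises from an unwanted combination (disequilibrium) of emotions.

# Emotional state: assumed valence levels in [0, 1] for a few sample emotions.
emotions = {"contentment": 0.2, "frustration": 0.8, "anxiety": 0.7}

# Hypothetical threshold above which the emotional mix counts as "unwanted".
DISEQUILIBRIUM_THRESHOLD = 0.5

def disequilibrium(state):
    """Degree to which negative emotions outweigh positive ones (assumed measure)."""
    negative = state["frustration"] + state["anxiety"]
    positive = state["contentment"]
    return (negative - positive) / 2.0

def should_adapt(state):
    """The impetus for internal change: emotional disequilibrium exceeds the threshold."""
    return disequilibrium(state) > DISEQUILIBRIUM_THRESHOLD

if should_adapt(emotions):
    print("disequilibrium detected -> modify internal environment")
```

On this reading, adaptation is not triggered by an external task failure directly, but by the internal emotional imbalance that such failure produces; the agent's monitoring of its own state supplies the motivation to change.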