An Intelligent Theory of Cost for Partial Metric Spaces

Steve Matthews 1 and Michael Bukatin 2

1 University of Warwick, Coventry, UK, Steve.Matthews@warwick.ac.uk
2 Nokia Corporation, Boston, Massachusetts, USA, bukatin@cs.brandeis.edu

Abstract. Partial metric spaces generalise metric spaces, allowing non-zero self-distance. This is needed to model computable partial information, but falls short in an important respect. The present cost of computing information, such as processor time or memory used, is rarely expressible in domain theory, but contemporary theories of algorithms incorporate precise control over the cost of computing resources. Complexity theory in Computer Science has dramatically advanced through an intelligent understanding of algorithms over discrete, totally defined data structures such as directed graphs, without using partially defined information. So we have an unfortunate longstanding separation of partial metric spaces for modelling partially defined computable information from the complexity theory of algorithms for costing totally defined computable information. To bridge that separation we seek an intelligent theory of cost for partial metric spaces. As examples we consider the cost of computing a double negation ¬¬p in two-valued propositional logic, the cost of computing negation as failure in logic programming, and a cost model for the hiaton time delay.

Keywords: AGI, partial metric spaces, discrete mathematics

1 Introduction

Today it may be taken for granted that a computing system should be adaptive and intelligent. Certainly the behaviour of a handheld device running a computer game or interactive internet site is adaptive and, as it exists to serve us humans, is designed to be as intelligent as possible.
Some forty years ago programming language design was categorised into what now appear narrow forms: axiomatic (a system of logic), operational (defined by a machine model), or denotational (each program denoted by a point in some mathematical domain). Through groundbreaking models such as Robin Milner's Calculus of Communicating Systems (CCS) and Dana Scott's denotational theory of domains we have made great progress in specifying some behaviours, but sadly not enough to handle the adaptive and intelligent features required by today's systems. So, what went wrong? What seems to have emerged is a dominant operational view