A Logical Treatment of Scientific Anomalies, or Artificial Intelligence Meets Philosophy of Science*

Ricardo Sousa Silvestre
Department of Philosophy, Université de Montréal, Montréal, QC, Canada

Tarcisio H. C. Pequeno
Department of Computing, Universidade Federal do Ceará, Fortaleza, CE, Brazil

Abstract

A logical formalization of the process of theory revision when a theory is confronted with a scientific anomaly is presented. By "scientific anomaly" we mean an observed fact that falls within the explanatory scope of a theory but cannot be explained by the theory together with the accepted auxiliary hypotheses. As a first approach to restoring the theory's explanatory power, some tentative auxiliary hypotheses are proposed to replace the old ones. In order to capture this refutable character of auxiliary hypotheses, we bring into play a nonmonotonic inferential mechanism. Moreover, since the several tentative auxiliary hypotheses are mutually exclusive and may produce conflicts, we take a paraconsistent inferential relation as the monotonic basis of our system. By representing laws and auxiliary hypotheses in this nonmonotonic and paraconsistent logic, we are able to provide an inferential machinery in which the effects of both the occurrence and the solution of anomalies upon a theory can be represented.

Key words: nonmonotonic reasoning, scientific anomalies, theory change, default logic, paraconsistent logic, philosophy of science.

1 Introduction

The affirmation that it is possible to establish fruitful parallels between Artificial Intelligence and Philosophy is not hard to find in a certain kind of literature, which could be called, in the absence of a better name, "Philosophical AI". This affirmation is supported by the observation that many problems faced by AI are, in a sense, rephrasings in modern terms of venerable questions and problems of traditional philosophy.

* This research was supported by CNPq.
Some authors go as far as to say things like: "Were they reborn into a modern university, we claim, Plato and Aristotle and Leibniz would most suitably take up appointments in the department of computer science" [1]. If some of the more interesting, and harder, AI problems are of real interest to philosophy, as many philosophically inclined AI researchers believe (the authors of this paper included), then it is about time to seriously try to apply methods and techniques developed in the field of AI to the consideration of philosophical problems. In particular, we think this connection is not very hard to establish between certain problems concerning the treatment of knowledge and reasoning in AI and problems in the philosophy of science, using techniques developed for the solution of the former to the benefit of the latter. To offer an illustration of this point is precisely the aim of the present paper. The problem in question is related to the dynamics of scientific theories, more specifically the process of theory change when a certain theory is challenged by what is technically called a "scientific anomaly", that is, the existence of an observed fact that does not agree with the predictions provided by the theory. The techniques we believe can help in treating this problem are the logical systems developed in the field of nonmonotonic logics. In particular, the use of some logical systems we have been working on for about a decade now, which combine nonmonotonic with paraconsistent1 features, seems very promising in this connection, as we intend to demonstrate.

1 Briefly stated, a paraconsistent logic is one in which a theory does not collapse if plagued by contradictions.
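Before entering the formal development, the two ingredients just mentioned can be illustrated with a toy computation. The sketch below is NOT the authors' system; it is a crude fixed-point evaluation of normal-default-style rules, in which (1) a predicted observation is retracted when the contrary observation arrives (nonmonotonicity), and (2) two rival tentative hypotheses may be concluded jointly without every sentence becoming derivable (the paraconsistent flavor). All names ("T", "aux", "o") are illustrative.

```python
# Literals are strings; "-p" stands for the negation of "p".

def negate(lit):
    """Syntactic negation of a literal."""
    return lit[1:] if lit.startswith("-") else "-" + lit

def consequences(facts, defaults):
    """facts: observed literals (the monotonic base).
    defaults: (prerequisite, conclusion) pairs read as: 'if the
    prerequisite is derived and the conclusion is not refuted by an
    observation, tentatively conclude it'.  Consistency is checked only
    against observations, so rival tentative hypotheses can coexist
    instead of trivializing the theory."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for pre, concl in defaults:
            if pre in derived and negate(concl) not in facts and concl not in derived:
                derived.add(concl)
                changed = True
    return derived

# A toy anomaly: theory T plus the auxiliary hypothesis "aux" predicts
# observation "o"; a rival hypothesis "-aux" is also tentatively available.
defaults = [
    ("T", "aux"),    # default auxiliary hypothesis
    ("aux", "o"),    # theory + hypothesis predict o
    ("T", "-aux"),   # rival tentative hypothesis
]

before = consequences({"T"}, defaults)         # "o" is predicted
after = consequences({"T", "-o"}, defaults)    # the anomaly "-o" is observed
```

Here `"o" in before` holds but `"o" in after` fails: observing the anomaly `-o` blocks the default that predicted `o`, a retraction no monotonic consequence relation can perform. Meanwhile `after` contains both `aux` and `-aux`, yet unrelated literals remain underivable, which is the behavior a paraconsistent base is meant to secure.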