An objective definition of subjective probability

Nico Roos
Maastricht University, Department of Computer Science, P.O. Box 616, 6200 MD Maastricht, The Netherlands, e-mail: roos@cs.unimaas.nl

Abstract. Several attempts have been made to give an objective definition of subjective probability. These attempts can be divided into two approaches. The first approach uses an a priori probability distribution over the set of interpretations of the language that we are using to describe information. The idea is to define such an a priori probability distribution using some general principles, such as the insufficient reason principle of Bernoulli and Laplace. The second approach does not start from a set of interpretations among which we try to find the one describing the world, but instead tries to build a partial model of the world. Uncertainty in the available information results in several possible partial models, each presenting a different view of the world. Using the insufficient reason principle, a probability is assigned to each view. This paper will present arguments for using the second approach instead of the first. Furthermore, a new formalization of the second approach, solving the problems of earlier attempts, will be given.

1 Introduction

Several attempts have been made to give an objective definition of subjective probability. These attempts can be divided into two approaches. The first approach uses an a priori probability distribution on the set of interpretations of the language that we are using to describe information [1, 4, 2, 6]. The idea is to define such an a priori probability distribution using some general principles. The second approach does not start from a set of interpretations among which we try to find the one describing the world, but instead tries to build a partial model of the world [7, 8, 9]. Uncertainty in the available information results in several possible partial models, each presenting a different view of the world. Using the insufficient reason principle of Bernoulli and Laplace, a probability is assigned to each view.

The two approaches can be characterized by two children's games. The approach in which we start with a probability distribution over the set of interpretations can be characterized by the game ‘Who is it?’. The goal of this game is to identify a person from a set of candidates by asking questions. Based on the answers, we eliminate some of the candidates. If we were to define an a priori probability distribution over the set of candidates, we could answer questions such as: ‘What is the chance that the person we are trying to identify has blue eyes?’.

The other approach can be characterized as solving a jigsaw puzzle. When we receive some information, this information may represent two or more pieces, of which one belongs to the puzzle. Each of these pieces results in a different view on how to complete the puzzle. We may also receive information representing a piece of the puzzle of which we do not know where to put it into the puzzle. There might be more than one position where it could fit. This again can result in different views on how to complete the puzzle. If we have no information to prefer one of these views, then, using the insufficient reason principle of Bernoulli and Laplace, we might consider all these views on how to complete the puzzle as being equally likely. So the uncertainty expresses our lack of knowledge.

For the first approach, one also uses the insufficient reason principle. It is used in the definition of the a priori probability distribution. Can we argue, however, that the candidates should be equally likely? Suppose, for example, that we have the following information about the person to be identified.

The person has blue or green eyes. If this person has green eyes, his/her hair is blond.
If there are as many candidates with blue as with green eyes, what should be the probability that the person to be identified has green eyes? If all candidates are equally likely, and if there are candidates for each combination of eye color and hair color, then the probability that the person to be identified has green eyes will be less than 1/2, since the green-eyed candidates without blond hair are eliminated while all blue-eyed candidates remain. This outcome is perfectly valid for this game, but is it for an agent receiving information about the world?

The fact that we do not know the color of the person's hair if he/she has blue eyes can hardly be a reason to consider having blue eyes to be more likely. It would imply that the likelihood of a possible situation is proportional to the lack of information about this situation.

The heart of the problem is that in the first approach, we assume that all worlds are really possible. The only thing that we do not know is which world has been selected for today. The name of the game is, however, that there is one ‘fixed’ world of which we have to determine what it looks like. Therefore, we should view information as pieces of a puzzle. Some information, such as the color of the person's eyes, represents two pieces, one of which belongs to the puzzle. So, we can have different views on how to complete the puzzle. As a result, uncertainty arises. Furthermore, information stating that the person has blond hair if s/he has green eyes is a piece that we can put into the puzzle in one view, but which is not a piece of the puzzle in the other view. So, we have more information about one possible view than we have about the other. This should not influence the likelihood of the two views.

2 The probability distribution

Since there is only one fixed world, we cannot define a probability distribution over a set of possible worlds. As was pointed out in the introduction, we can only define a probability distribution over the set of views we have about the world.
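The contrast between the two distributions can be sketched with a small enumeration of the eye/hair example. The encoding of worlds as attribute pairs and of views as partial assignments is an illustrative assumption for this sketch, not the paper's formalism:

```python
from fractions import Fraction
from itertools import product

eyes, hair = ["blue", "green"], ["blond", "dark"]

# Worlds consistent with the information: the person has blue or
# green eyes, and green eyes imply blond hair.
worlds = [(e, h) for e, h in product(eyes, hair)
          if e != "green" or h == "blond"]

# First approach: a uniform a priori distribution over interpretations.
# Three worlds remain, only one of them green-eyed.
p_green_worlds = Fraction(sum(e == "green" for e, _ in worlds), len(worlds))

# Second approach: a uniform distribution over views (partial models).
# Two views, one per eye color; hair is left unspecified where unknown.
views = [{"eyes": "blue"}, {"eyes": "green", "hair": "blond"}]
p_green_views = Fraction(sum(v["eyes"] == "green" for v in views), len(views))

print(p_green_worlds)  # 1/3 -- less than 1/2
print(p_green_views)   # 1/2
```

The first approach penalizes the green-eyed alternative merely because more is known about it, while the second assigns both views equal probability, which is the behavior argued for above.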
Therefore, we must now address the question of what requirements such a probability distribution must meet. First of all, views must be mutually exclusive. Since views are in fact epistemic states, it might be possible to combine the information of two different views in one more informative view. In other words,

© 1998 N. Roos
ECAI 98. 13th European Conference on Artificial Intelligence
Edited by Henri Prade
Published in 1998 by John Wiley & Sons, Ltd.