Comparison of Bayesian Objective Procedures for Variable Selection in Linear Regression

Elías Moreno and F. Javier Girón
Universidad de Granada and Universidad de Málaga

Abstract

In the objective Bayesian approach to variable selection in regression, a crucial point is the encompassing of the underlying nonnested linear models. Once the models have been encompassed, one can define objective priors for the multiple testing problem involved in the variable selection problem. There are two natural ways of encompassing: one is to encompass all models into the model containing all possible regressors; the other is to encompass the model containing the intercept only into any other model. In this paper we compare the variable selection procedures that result from each of these two ways of encompassing by analysing their theoretical properties and their behavior on simulated and real data. Relations with frequentist criteria for model selection, such as those based on the adjusted R², denoted R²_adj, and Mallows' C_p, are provided incidentally.

Keywords: encompassing, intrinsic priors, linear regression, model selection, reference priors.

1 Introduction

Suppose that Y represents an observable random variable and X_1, X_2, ..., X_k a set of k potential explanatory covariates through the normal linear model

    Y = α_1 X_1 + α_2 X_2 + ... + α_k X_k + ε,    ε ∼ N(·|0, σ²).    (1)

An important problem, known as the variable selection problem, consists of reducing the complexity of model (1) by identifying a subset of the α_i coefficients that have a zero value, based on an available dataset (y, X), where y is a vector of size n and X an n × k design matrix of full rank. This is essentially a model selection problem where we have to choose a model among the 2^k possible submodels
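As an illustration of the search space just described, the following sketch (not part of the paper; the data, coefficients, and noise level are invented for the example) enumerates the nonempty submodels of model (1) for a small k and scores each by R²_adj, one of the frequentist criteria the paper relates to:

```python
# Hypothetical illustration: exhaustive search over submodels of a normal
# linear model, scored by adjusted R^2 via ordinary least squares.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, k = 50, 4
X = rng.normal(size=(n, k))
# Simulated data in which only the first two covariates are active.
y = 1.5 * X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.5, size=n)

def adjusted_r2(y, Xs):
    """Adjusted R^2 of the least-squares fit of y on the columns of Xs."""
    n, p = Xs.shape
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - (ss_res / (n - p)) / (ss_tot / (n - 1))

# Visit the nonempty subsets of {X_1, ..., X_k}; with the intercept-only
# model included this gives the 2^k submodels mentioned in the text.
best = None
for r in range(1, k + 1):
    for subset in itertools.combinations(range(k), r):
        score = adjusted_r2(y, X[:, subset])
        if best is None or score > best[0]:
            best = (score, subset)

print(best)  # highest-scoring subset of covariate indices
```

Even this toy search visits 2^k - 1 candidate models, which motivates the Bayesian multiple-testing formulation the paper develops instead of exhaustive frequentist scoring.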