The development of prior distributions for Bayesian regression has traditionally been driven by the goal of achieving sensible model selection and parameter estimation. The formalization of properties that characterize good performance has led to the development and popularization of thick-tailed mixtures of g priors such as the Zellner-Siow and hyper-g priors. In this paper we introduce a conditional information asymptotic regime that is motivated by the common data analysis setting where at least one regression coefficient is much larger than the others. We analyse existing mixtures of g priors under this limit and reveal two new phenomena, essentially least-squares estimation and the conditional Lindley paradox, and argue that these are, in general, undesirable. The driver behind both is the use of a single, latent scale parameter common to all coefficients.
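The role of the single latent scale can be made concrete with a small numerical sketch (not from the paper; the data and the value of g are illustrative assumptions). Under Zellner's g prior, beta | g, sigma^2 ~ N(0, g sigma^2 (X'X)^{-1}), the posterior mean of the coefficient vector is the least-squares estimate multiplied by the common factor g/(1+g), so every coefficient, large or small, is shrunk by the same amount:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data in the motivating setting: one regression
# coefficient much larger than the others (values are hypothetical).
n, p = 100, 3
X = rng.standard_normal((n, p))
beta_true = np.array([10.0, 0.5, 0.5])
y = X @ beta_true + rng.standard_normal(n)

# Least-squares estimate.
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Posterior mean under Zellner's g prior: a single shrinkage
# factor g/(1+g) is applied to all coefficients at once.
g = 50.0
beta_post = (g / (1 + g)) * beta_ols

print(beta_ols)
print(beta_post)
```

Because the shrinkage factor is shared, the large coefficient forces any data-driven choice of g toward values that leave the small coefficients nearly unshrunk, which is the mechanism behind the two phenomena the paper identifies.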