
Learning Approximately Objective Priors

Abstract · Apr 4, 2017 20:07

stat-ml stat-co

arXiv Abstract

  • Eric Nalisnick
  • Padhraic Smyth

In modern probabilistic learning we often wish to perform automatic inference for Bayesian models. However, informative prior distributions can be costly and difficult to elicit, and, as a consequence, flat priors are often chosen with the hope that they are reasonably uninformative. Objective priors such as the Jeffreys and reference priors are generally preferable over flat priors but are not tractable to derive for many models of interest. We address this issue by proposing techniques for learning reference prior approximations: we select a parametric family and optimize a lower bound on the reference prior objective to find the member of the family that serves as a good approximation. Moreover, optimization can be made derivation-free via differentiable Monte Carlo expectations. We experimentally demonstrate the method’s effectiveness by recovering Jeffreys priors and learning the Variational Autoencoder’s reference prior.
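The core idea is that the reference prior is the prior maximizing the mutual information between the parameter and the data, so one can parameterize a family of priors and push that objective uphill with differentiable Monte Carlo estimates. Below is a minimal sketch of that idea, not the authors' code or their exact lower bound: it assumes a toy Bernoulli model with N coin flips, constrains the prior to the Beta family, uses a nested Monte Carlo estimate of the mutual information, and takes only pathwise (reparameterized) gradients through the prior samples.

```python
import math
import torch

# Hypothetical sketch: learn an approximate reference prior for a Bernoulli
# model x_1..x_N ~ Bernoulli(theta), with the prior restricted to the Beta
# family so the result can be compared to the Jeffreys prior Beta(0.5, 0.5).

torch.manual_seed(0)

raw = torch.zeros(2, requires_grad=True)        # unconstrained Beta parameters
opt = torch.optim.Adam([raw], lr=0.05)
S, N = 256, 25                                  # Monte Carlo draws, flips per draw

for step in range(2000):
    a, b = torch.nn.functional.softplus(raw) + 1e-3
    prior = torch.distributions.Beta(a, b)

    # Reparameterized samples keep the expectation differentiable in (a, b).
    theta = prior.rsample((S,)).clamp(1e-5, 1 - 1e-5)                  # (S,)
    x = torch.bernoulli(theta.unsqueeze(1).expand(S, N)).detach()      # (S, N)
    heads = x.sum(dim=1)                                               # (S,)

    # log p(x_s | theta_s): Bernoulli log-likelihood of each simulated dataset.
    log_lik = heads * theta.log() + (N - heads) * (1 - theta).log()

    # log p(x_s) estimated by averaging the likelihood over fresh prior draws.
    theta_m = prior.rsample((S,)).clamp(1e-5, 1 - 1e-5)                # (S,)
    cross = (heads.unsqueeze(1) * theta_m.log().unsqueeze(0)
             + (N - heads).unsqueeze(1) * (1 - theta_m).log().unsqueeze(0))
    log_marg = torch.logsumexp(cross, dim=1) - math.log(S)

    # Monte Carlo estimate of I(theta; D). The gradient ignores how the
    # simulated data depend on theta -- a simplification of the paper's
    # derivation-free treatment, kept here for brevity.
    mi = (log_lik - log_marg).mean()
    opt.zero_grad()
    (-mi).backward()
    opt.step()

a, b = (torch.nn.functional.softplus(raw) + 1e-3).tolist()
print(f"learned Beta({a:.2f}, {b:.2f})")
```

Under these assumptions the learned Beta parameters should drift toward a U-shaped prior in the neighborhood of the Jeffreys Beta(0.5, 0.5), which is the kind of sanity check the paper reports when recovering Jeffreys priors.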

Read the paper (pdf) »