Informed MCMC sampling for high-dimensional model selection problems

Quan Zhou (Texas A&M)



Informed Markov chain Monte Carlo (MCMC) methods have been proposed as scalable solutions to Bayesian posterior computation on high-dimensional discrete state spaces, but theoretical results about their convergence behavior in general settings are lacking. In this talk, we introduce a generally applicable mixing time bound for Markov chains on discrete spaces, which can be used to prove the rapid mixing of both random walk and informed Metropolis-Hastings algorithms for high-dimensional model selection problems. On the algorithmic side, we propose two novel informed MCMC algorithms, one based on Metropolis-Hastings sampling and the other based on importance weighting. We will also discuss theoretical results specific to these two algorithms, which justify the practical advantages of informed MCMC methods and provide guidance on choosing an informed proposal weighting scheme.
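To make the general idea concrete, the sketch below illustrates one informed Metropolis-Hastings move on the model space {0,1}^p for variable selection: each single-flip neighbor of the current model is weighted by a bounded function of its posterior ratio, the proposal is drawn in proportion to these weights, and the usual Metropolis-Hastings correction accounts for the reverse move. This is only an illustrative sketch under assumed choices, not the specific algorithms of the talk; the names log_post and h, and the weighting h(r) = min(1, r), are placeholders chosen for the example.

```python
import numpy as np

def informed_mh_step(gamma, log_post, rng, h=lambda r: np.minimum(1.0, r)):
    """One illustrative informed Metropolis-Hastings step on {0,1}^p.

    gamma    : current binary inclusion vector (integer array of 0/1)
    log_post : user-supplied function returning the unnormalized log
               posterior of a binary inclusion vector (placeholder)
    h        : bounded weighting function applied to posterior ratios;
               min(1, r) is one common choice, assumed here
    """
    p = len(gamma)
    lp_cur = log_post(gamma)

    # Evaluate all single-flip neighbours and weight each by h(pi(y)/pi(x)).
    neighbours = np.tile(gamma, (p, 1))
    neighbours[np.arange(p), np.arange(p)] ^= 1
    lp_nb = np.array([log_post(nb) for nb in neighbours])
    w_fwd = h(np.exp(lp_nb - lp_cur))
    probs_fwd = w_fwd / w_fwd.sum()

    # Propose flipping one coordinate with probability proportional to its weight.
    j = rng.choice(p, p=probs_fwd)
    prop = neighbours[j]

    # Probability of proposing the reverse flip from the proposed model,
    # needed for the Metropolis-Hastings correction.
    neighbours_rev = np.tile(prop, (p, 1))
    neighbours_rev[np.arange(p), np.arange(p)] ^= 1
    lp_nb_rev = np.array([log_post(nb) for nb in neighbours_rev])
    w_rev = h(np.exp(lp_nb_rev - lp_nb[j]))
    prob_rev = w_rev[j] / w_rev.sum()

    # Accept or reject with the Metropolis-Hastings ratio.
    log_accept = (lp_nb[j] - lp_cur) + np.log(prob_rev) - np.log(probs_fwd[j])
    if np.log(rng.uniform()) < log_accept:
        return prop
    return gamma
```

Note that each informed step evaluates all p neighboring models, so it costs more per iteration than a random walk proposal; the premise of informed MCMC is that this extra work is repaid by moving toward high-posterior models far more quickly.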
