A new version (0.5-0) of my R package bayesImageS is now available on CRAN. Accompanying it is a revised version of my paper with Kerrie and Tony, “Scalable Bayesian inference for the inverse temperature of a hidden Potts model” (Moores, Pettitt & Mengersen, arXiv:1503.08066v2). This paper introduces the parametric functional approximate Bayesian (PFAB) algorithm (the ‘p’ is silent…), which is a form of Bayesian indirect likelihood (BIL).

PFAB splits the computation into three stages (see the sketch after this list):

  1. Simulation for fixed \(\beta\) using Swendsen-Wang
  2. Fitting a parametric surrogate model using Stan
  3. Approximate posterior inference using Metropolis-within-Gibbs
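
To fix ideas, Stages 2 and 3 fit together like this: the intractable likelihood of \(\beta\) is replaced by the density of the sufficient statistic \(S(z)\) under the fitted surrogate model. A generic Gaussian version of that indirect likelihood might look like the sketch below. Note that mu_hat and sd_hat are hypothetical placeholders standing in for the surrogate fitted in Stage 2, not functions from bayesImageS, and the parametric form used in the paper differs.

# Generic BIL-style approximation (a sketch only, not the exact
# functional form from the paper). mu_hat() and sd_hat() are
# hypothetical stand-ins for the surrogate model fitted in Stage 2.
approx_loglik <- function(Sz, beta, mu_hat, sd_hat) {
  dnorm(Sz, mean = mu_hat(beta), sd = sd_hat(beta), log = TRUE)
}

Stage 3 then runs Metropolis-within-Gibbs with this approximation standing in for the true likelihood.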

For Stage 1, I used 2000 iterations of SW for each of 72 values of \(\beta\), but this is really overkill for most applications. I chose 72 values because I happened to have a 36-core, hyperthreaded CPU available. Here I’ll just be running everything on my laptop (an i7 CPU with 4 hyperthreaded cores), so 28 values should be plenty. The idea is to place the values more densely near the critical temperature, where the variance of the sufficient statistic (and hence the gradient of the score function) is greatest.

For our precomputation step, we need to know the number of pixels \(n\) and the number of labels \(k\) that we will use for pixel classification. We’ll be using the Lake of Menteith dataset from Bayesian Essentials with R (Marin & Robert, 2014):

library(bayess)
data("Menteith")
n <- prod(dim(Menteith))  # total number of pixels
k <- 6                    # number of unique labels
image(as.matrix(Menteith),
      asp=1, xaxt='n', yaxt='n', col=gray(0:255/255))
Lake of Menteith, Scotland

bcrit <- log(1 + sqrt(k))  # critical inverse temperature: ~1.238 for k=6
beta <- sort(c(seq(0, 1, by=0.1),
               seq(1.05, 1.23, by=0.05),
               seq(1.245, 1.4, by=0.05),
               seq(1.5, 2, by=0.1),
               2.5, 3, 4))  # 28 values of beta, densest near bcrit
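
Here’s a minimal sketch of how Stage 1 might look on the laptop, running SW in parallel across the 28 values of \(\beta\). It uses swNoData() from bayesImageS, with the getNeighbors() and getBlocks() helpers from the PottsUtils package; the neighbourhood structure, the block decomposition, and the $sum component (the trace of the sufficient statistic) follow the package examples, so treat the exact arguments as assumptions and check the current documentation.

library(bayesImageS)
library(PottsUtils)   # assumed source of getNeighbors() and getBlocks()
library(doParallel)

# First-order (4-neighbour) lattice and chequerboard blocks for the image
mask <- matrix(1, nrow=nrow(Menteith), ncol=ncol(Menteith))
neigh <- getNeighbors(mask, c(2,2,0,0))
block <- getBlocks(mask, 2)

cl <- makeCluster(4)  # 4 physical cores on this laptop
registerDoParallel(cl)
iter <- 2000          # overkill, as noted above
sw_stats <- foreach(b=beta, .packages="bayesImageS") %dopar% {
  res <- swNoData(b, k, neigh, block, iter)
  res$sum  # trace of the sufficient statistic S(z) at each iteration
}
stopCluster(cl)

These simulated traces are what the parametric surrogate model in Stage 2 is fitted to.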