BOLFI 🔗

“Bayesian Optimization for Likelihood-Free Inference of Simulator-Based Statistical Models” (2016) by Michael U. Gutmann and Jukka Corander.

Available as 🔗 presentation slides or as a 🔗 blog post.

Problem 🔗

Other assumptions 🔗

Existing methods 🔗

For likelihood-free inference with simulator-based models, the basic idea is to identify model parameters by finding values which yield simulated data that resemble the observed data.
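To make the idea concrete, here is a minimal sketch in Python. The Gaussian simulator and the mean-difference distance are illustrative stand-ins, not examples from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulator(theta, n=100):
    """Toy simulator: draws n observations given the parameter theta."""
    return rng.normal(loc=theta, scale=1.0, size=n)

def distance(y_sim, y_obs):
    """Discrepancy between simulated and observed data (difference of means)."""
    return abs(y_sim.mean() - y_obs.mean())

y_obs = simulator(theta=2.0)  # stand-in for the observed data

# Likelihood-free idea: parameter values that fit well yield small distances.
for theta in [0.0, 1.0, 2.0, 3.0]:
    print(theta, distance(simulator(theta), y_obs))
```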

Conventional ABC 🔗

Implicit likelihood approximation 🔗

There is an implicit likelihood approximation going on here: accepting \(\theta\) whenever the simulated data lands within \(\epsilon\) of the observed data amounts to approximating the likelihood by the acceptance probability.

Illustration of rejection sampling (Lintusaari et al., 2016)

Downsides 🔗

Inefficiencies of rejection sampling (Florent Leclercq, 2018)

\(L(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \mathbb{1}\left(d\left(y_{\theta}^{(i)}, y_{o}\right) \leq \epsilon\right)\)
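Here \(y_{\theta}^{(i)}\) are data sets simulated at parameter \(\theta\), \(y_o\) is the observed data, \(d\) is the distance, and \(\epsilon\) the acceptance threshold. The estimator is a direct Monte Carlo transcription, sketched below using the toy `simulator` and `distance` from the earlier snippet:

```python
def abc_likelihood(theta, y_obs, eps=0.1, n_sims=1000):
    """Monte Carlo approximation of L(theta): the fraction of simulations
    whose distance to the observed data falls within eps."""
    hits = sum(distance(simulator(theta), y_obs) <= eps for _ in range(n_sims))
    return hits / n_sims
```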

  1. Rejects most samples when \(\epsilon\) is small
  2. Does not make assumptions about the shape of \(L(\theta)\)

    1. The dependence of \(L\) on \(\theta\) enters only through the data simulated at \(\theta\): if we move \(\theta\) by just a little, nothing constrains the likelihood to be similar; there is no smoothness constraint at all. The algorithm is given a lot of freedom, but this makes it expensive, since each \(\theta\) must be assessed from scratch instead of borrowing strength from, say, a smoothness assumption.

  3. Does not use all information available.

    1. You could, say, stop early after many consecutive rejections and conclude with high certainty that the likelihood is low (see the sketch after this list).

  4. Aims at equal accuracy for all parameters

    1. Computational effort \(N\) doesn’t depend on \(\theta\)
    2. E.g. effort could instead be prioritised for the modal region of the posterior
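As an aside on point 3, a sketch of the early-stopping idea; the `patience` knob is a made-up illustration, not something from the paper:

```python
def abc_likelihood_early_stop(theta, y_obs, eps=0.1, n_max=1000, patience=200):
    """Like abc_likelihood, but stops once `patience` simulations have all
    been rejected, concluding that the likelihood is negligibly small."""
    hits = 0
    for i in range(1, n_max + 1):
        if distance(simulator(theta), y_obs) <= eps:
            hits += 1
        elif hits == 0 and i >= patience:
            return 0.0  # many consecutive rejections: likelihood is very low
    return hits / n_max
```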

BOLFI 🔗

Advantages of BOLFI (Florent Leclercq, 2018)

  1. Does not reject samples; learns from them (i.e. builds a statistical model of the distances w.r.t. the parameters).
  2. Models the distances, assuming the average distance is smooth in the parameters (see the sketch after this list).
  3. Uses Bayes’ theorem to update the proposal of new points.
  4. Prioritises parameter regions with small distances.
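A minimal sketch of the distance-modeling step, using scikit-learn's GP regressor as a stand-in for the paper's Gaussian-process surrogate; the kernel choice and training points are illustrative, and the toy `simulator`, `distance`, and `y_obs` come from the earlier snippet:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Evidence gathered so far: parameter values and the distances they produced.
thetas = np.array([[0.0], [0.5], [1.5], [2.5], [3.0]])
dists = np.array([distance(simulator(t[0]), y_obs) for t in thetas])

# GP surrogate: encodes the assumption that the average distance
# varies smoothly with the parameter.
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1),
    normalize_y=True,
)
gp.fit(thetas, dists)

# The model now predicts the distance (mean and uncertainty) anywhere,
# without running a single new simulation.
mu, sigma = gp.predict(np.array([[2.0]]), return_std=True)
```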

Regressing effective likelihood 🔗
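Once the GP is fitted, the distance at any \(\theta\) has a Gaussian posterior, so the probability that it falls below a threshold \(\epsilon\) has a closed form via the normal CDF. A sketch of this model-based (effective) likelihood, continuing the snippet above; the paper's exact expression may differ in detail:

```python
import numpy as np
from scipy.stats import norm

def effective_likelihood(theta, gp, eps=0.1):
    """Model-based likelihood: the GP's posterior probability that the
    distance at theta falls below the threshold eps."""
    mu, sigma = gp.predict(np.array([[theta]]), return_std=True)
    return norm.cdf((eps - mu[0]) / sigma[0])
```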


Data acquisition 🔗
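New simulation points are chosen by an acquisition function. A sketch of a lower-confidence-bound rule in the spirit of the paper, continuing the snippets above; the weight `eta` and the grid search are illustrative simplifications:

```python
def lcb(theta, gp, eta=2.0):
    """Lower confidence bound of the modeled distance: small values flag
    regions that are promising (low mean) or unexplored (high sigma)."""
    mu, sigma = gp.predict(np.array([[theta]]), return_std=True)
    return mu[0] - eta * sigma[0]

# Propose the next simulation where the bound is lowest.
grid = np.linspace(-1.0, 4.0, 200)
theta_next = grid[np.argmin([lcb(t, gp) for t in grid])]
```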

Bayesian optimization in action 🔗

An animation of Bayesian optimization (Florent Leclercq, 2018)

Close the loop 🔗

High-level mechanism of Bayesian optimization (Florent Leclercq, 2018)
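Putting the pieces from the previous snippets together, the loop alternates between acquiring a point, running the simulator there, and refitting the surrogate; a sketch under the same toy assumptions:

```python
# Simulate -> refit the distance model -> acquire the next point.
for _ in range(30):  # tens of simulations instead of millions
    theta_next = grid[np.argmin([lcb(t, gp) for t in grid])]
    d_next = distance(simulator(theta_next), y_obs)
    thetas = np.vstack([thetas, [[theta_next]]])
    dists = np.append(dists, d_next)
    gp.fit(thetas, dists)  # update the surrogate with the new evidence
```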

Results 🔗

Figure 12 🔗

  • Computational cost is shown on a log₁₀ scale: 6 means a million, 3 means a thousand
  • The blue line is the reference: the posterior mean inferred at the very end of the standard PMC-ABC run
  • The red line is the best model-based (BOLFI) posterior
  • Its confidence intervals are wider than those of the conventional approach

Advantages 🔗

Further research 🔗

Conclusion 🔗

BOLFI combines …

  1. statistical modeling (GP regression) of the distance between observed and simulated data, and
  2. decision making under uncertainty (Bayesian optimization)

…to increase the efficiency of inference by several orders of magnitude.

Bibliography

Lintusaari, J., Gutmann, M. U., Dutta, R., Kaski, S., & Corander, J. (2016). Fundamentals and Recent Developments in Approximate Bayesian Computation. Systematic Biology.

Leclercq, F. (2018). Bayesian optimisation for likelihood-free cosmological inference. Presentation slides retrieved from http://www.florent-leclercq.eu/talks/2018-10-22_IHP.pdf (accessed 2019-11-03); recording at https://www.youtube.com/watch?v=orDbPZFd7Gk&t=41s.