# An introductory example for Markov chain Monte Carlo (MCMC)

**Authors:** Katy Klauenberg, Clemens Elster (PTB)

When the *Guide to the Expression of Uncertainty in Measurement* (GUM) and the methods of its supplements are not applicable, the Bayesian approach can be a valid and welcome alternative. Evaluating the posterior distribution, and the estimates or uncertainties involved in Bayesian inference, often requires numerical methods to avoid high-dimensional integration. Markov chain Monte Carlo (MCMC) sampling is such a method: powerful, flexible and widely applied. [Klauenberg and Elster 2016] give a concise introduction, illustrated by a simple, typical example from metrology.

*Figure: Markov chain Monte Carlo samples (adapted from Klauenberg and Elster 2016)*

In this example, the information gained about an unknown voltage from one measurement with a high-precision device (black) is to be fused with five measurements from a second device of larger uncertainty (gray); this is a slight modification of the example posed in GUM supplement 2 [Klauenberg and Elster 2016, table 8]. Although the toy example is particularly simple, and a weighted mean could in principle be applied, the general problem of accounting for additional information about the measurand is not addressed by the GUM or its supplements. For such problems, where information from different sources must be fused, Bayesian statistics and MCMC sampling methods are advantageous.
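Because both devices here can be modeled with Gaussian uncertainties, the fusion has a closed form: the posterior is again Gaussian, with a precision-weighted mean. The following is a minimal sketch of that conjugate calculation; the function name and the numerical values are illustrative assumptions, not the values used in the paper.

```python
import math

def fuse_gaussian(x0, u0, ys, u1):
    """Conjugate Gaussian fusion: a prior N(x0, u0^2) from the precise
    device combined with i.i.d. measurements ys of standard uncertainty u1.
    Returns the posterior mean and standard deviation."""
    w0 = 1.0 / u0 ** 2             # precision of the prior
    w1 = len(ys) / u1 ** 2         # total precision of the repeated measurements
    ybar = sum(ys) / len(ys)
    mean = (w0 * x0 + w1 * ybar) / (w0 + w1)
    std = math.sqrt(1.0 / (w0 + w1))
    return mean, std

# hypothetical numbers: one precise reading at 10.0 V (u = 0.01 V)
# fused with five noisier readings (u = 0.05 V each)
m, s = fuse_gaussian(10.0, 0.01, [10.02, 9.98, 10.05, 9.97, 10.03], 0.05)
```

The posterior standard deviation is smaller than that of either device alone, which is exactly the benefit of fusing the two sources of information.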

The Metropolis–Hastings algorithm is the most basic, yet flexible, MCMC method. [Klauenberg and Elster 2016] explain its underlying concepts and state the algorithm step by step. The accompanying program code consists of just a few lines, is easy to reproduce and simple to adapt, and the resulting samples are as amenable to subsequent inference and summary computation as ordinary Monte Carlo samples. The figure below shows realizations of the samples and the resulting density approximation for the toy example.
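A random-walk Metropolis sampler for a problem of this kind can indeed be written in a few lines. The sketch below is not the paper's code: the target is an assumed unnormalized Gaussian posterior with hypothetical numbers (a precise reading at 10.0 V, five noisier readings), and the proposal width is an arbitrary choice. With a symmetric Gaussian proposal the Hastings correction ratio equals one, so only the target ratio enters the acceptance step.

```python
import math
import random

def log_post(v, x0=10.0, u0=0.01,
             ys=(10.02, 9.98, 10.05, 9.97, 10.03), u1=0.05):
    # unnormalized log posterior: Gaussian prior from the precise device
    # times a Gaussian likelihood for the five noisier measurements
    lp = -0.5 * ((v - x0) / u0) ** 2
    lp += sum(-0.5 * ((y - v) / u1) ** 2 for y in ys)
    return lp

def metropolis(log_target, start, width, n, seed=1):
    """Random-walk Metropolis: propose v' ~ N(v, width^2) and accept with
    probability min(1, target(v') / target(v)); on rejection the current
    value is repeated in the chain."""
    rng = random.Random(seed)
    chain, v, lp = [], start, log_target(start)
    for _ in range(n):
        prop = v + rng.gauss(0.0, width)
        lp_prop = log_target(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept
            v, lp = prop, lp_prop
        chain.append(v)
    return chain

chain = metropolis(log_post, start=10.0, width=0.02, n=20000)
```

Discarding an initial burn-in portion of `chain` and taking the mean and standard deviation of the remainder approximates the posterior estimate and its uncertainty, just as with ordinary Monte Carlo samples.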

*Figure: density approximation (adapted from Klauenberg and Elster 2016)*

In addition, [Klauenberg and Elster 2016] give guidance on how to judge and how to improve the efficiency of Markov chains, as well as on the general difficulty of assessing their convergence. Notes on alternative MCMC methods and on powerful software complete the introduction.
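Two common, easily computed efficiency indicators are the acceptance rate (how often the chain actually moves) and the sample autocorrelation (how quickly successive samples decorrelate). The sketch below shows one simple way to compute both from a stored chain; the function names are illustrative, and counting distinct consecutive values is only a rough proxy for the acceptance rate since, for a continuous target, repeats arise almost surely only from rejections.

```python
def acceptance_rate(chain):
    """Fraction of steps where the chain moved to a new value
    (rough acceptance-rate proxy for a continuous target)."""
    moves = sum(1 for a, b in zip(chain, chain[1:]) if a != b)
    return moves / (len(chain) - 1)

def autocorr(chain, lag):
    """Sample autocorrelation at a given lag: values near 0 indicate
    good mixing, values near 1 indicate slow exploration."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    cov = sum((chain[i] - mean) * (chain[i + lag] - mean)
              for i in range(n - lag)) / n
    return cov / var
```

Very high or very low acceptance rates, or autocorrelations that decay slowly over many lags, are typical signs that the proposal width should be retuned; such diagnostics indicate inefficiency but, as the paper notes, cannot prove convergence.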