In all scientific disciplines, performing experiments and thereby collecting data is an essential part of improving our understanding of the world around us. It is, however, usually not trivial to decide where and how to collect the data; experimental design is therefore concerned with the allocation of resources when conducting an experiment. The general aim is to find design features, or experimental configurations, that improve parameter estimation or help discriminate between competing models.
Traditional experimental design uses frequentist approaches that are usually based on the Fisher information matrix; this is a well-established field. The frequentist framework, however, does not work well for optimising non-linear problems, as only locally optimal designs can be obtained. Bayesian statistics has mature theory addressing this issue, but due to the computational costs involved, the field of Bayesian experimental design has only recently become popular.
There exists extensive work on Bayesian experimental design for explicit models, where the likelihood is analytically known or can be easily computed. There has, however, been little work on designing experiments for implicit models, where the likelihood is intractable and the model is specified in terms of a stochastic data generating process, or simulator. These models are common in the natural sciences and appear in many disciplines, such as epidemiology, neuroscience and cosmology. It is thus crucial to develop efficient methods for experimental design that are applicable to these models.
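To make the distinction concrete, the following is a minimal sketch of a hypothetical implicit model: a stochastic simulator we can sample from for any parameter and design, even though marginalising over its latent noise analytically is intractable, so no closed-form likelihood is available. The specific data-generating process (`simulate`, the `tanh` transformation) is purely illustrative and not taken from the text.

```python
import numpy as np

def simulate(theta, design, n_samples=1000, rng=None):
    """Toy implicit model: samples data given parameters and a design,
    but the induced likelihood p(y | theta, design) has no closed form."""
    rng = np.random.default_rng(rng)
    # Latent noise enters through a non-linear transformation;
    # integrating it out analytically is intractable, which is what
    # makes the likelihood intractable as well.
    latents = rng.normal(size=n_samples)
    y = np.tanh(theta * design + latents) + 0.1 * rng.normal(size=n_samples)
    return y

# We can freely generate data for any candidate design...
samples = simulate(theta=2.0, design=0.5)
# ...but evaluating the likelihood of those samples is not possible,
# which is exactly the setting implicit-model methods must handle.
print(samples.shape)
```

For such models, design criteria that require likelihood evaluations cannot be applied directly, which is what motivates likelihood-free approaches to experimental design.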