Bayesian optimisation is a versatile set of methods for black-box optimisation in machine learning. In practical applications of Bayesian optimisation it is important to find a suitable kernel for the underlying Gaussian process, in order to improve the quality of fit and thereby reduce the number of iterations needed for optimisation. A particular challenge is posed by non-stationary target functions, which exhibit different levels of "roughness" at different locations in input space. Non-stationary kernels tend to be more complicated to fit, yet the number of samples available in Bayesian optimisation is typically small, since samples are accumulated sequentially.
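To make the setting concrete, the following is a minimal sketch of a Bayesian optimisation loop: a Gaussian process with a stationary RBF kernel is fitted to a toy non-stationary target (smooth on the left of the domain, oscillatory on the right), and a lower-confidence-bound acquisition rule picks the next sample point. The target function, kernel hyperparameters, and acquisition rule are all illustrative choices, not prescriptions from the text.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2, variance=1.0):
    """Stationary RBF kernel: covariance depends only on |a - b|."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(x_train, y_train, x_test, jitter=1e-4):
    """GP posterior mean and variance at x_test, given noisy observations."""
    K = rbf_kernel(x_train, x_train) + jitter * np.eye(len(x_train))
    Ks = rbf_kernel(x_train, x_test)
    Kss_diag = np.full(len(x_test), 1.0)  # RBF variance on the diagonal
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = Kss_diag - np.sum(v ** 2, axis=0)
    return mu, np.maximum(var, 1e-12)

def target(x):
    # Toy non-stationary function: smooth on [0, 0.5], oscillatory beyond.
    return np.sin(3 * x) + np.sin(25 * x) * (x > 0.5)

rng = np.random.default_rng(0)
x_obs = rng.uniform(0, 1, 4)          # small initial design
y_obs = target(x_obs)
grid = np.linspace(0, 1, 200)          # candidate points

for _ in range(10):                    # sequential sample accumulation
    mu, var = gp_posterior(x_obs, y_obs, grid)
    lcb = mu - 2.0 * np.sqrt(var)      # lower confidence bound (minimisation)
    x_next = grid[np.argmin(lcb)]
    x_obs = np.append(x_obs, x_next)
    y_obs = np.append(y_obs, target(x_next))

print(f"best value found: {y_obs.min():.3f} after {len(x_obs)} evaluations")
```

A single global lengthscale is a poor description of this target: a value matched to the smooth region under-fits the oscillatory one and vice versa, which is exactly the mismatch that motivates the non-stationary kernels studied in this project.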
A variety of methods for non-stationary GP regression exist in the literature, but it is not clear how well these perform for Bayesian optimisation, with the exception of a few approaches designed specifically with this application in mind; even there, however, a systematic comparison is not at present available.
I therefore propose to investigate this topic by evaluating and comparing different classes of non-stationary kernels in the context of Bayesian optimisation, with the ultimate goal of characterising those methods which would be useful for general-purpose Bayesian optimisation. This project could lead to new ways of fitting non-stationary functions with limited samples, or to alternative methods for improving the performance of Bayesian optimisation on such functions. This would likely involve drawing on current literature on active learning and spatial modelling; in addition, it will be interesting to investigate the sequential nature of Bayesian optimisation and how it can be exploited in this context.
Supervisors: Guido Sanguinetti & Ramon Grima