Approximate Bayesian computation (ABC) is a method for Bayesian inference when the likelihood is unavailable but simulating from the model is possible. However, many ABC algorithms require a large number of simulations, which can be costly. To reduce this computational cost, surrogate models and Bayesian optimisation (BO) have been proposed. Bayesian optimisation allows the next model evaluation to be chosen intelligently, but standard BO strategies are designed for optimisation rather than for ABC inference specifically. Our paper addresses this gap in the literature. We propose a new acquisition rule that selects the next evaluation point where the uncertainty in the posterior distribution is largest. Experiments show that, compared to common alternatives, the proposed method often produces the most accurate approximations, especially in high-dimensional cases or in the presence of strong prior information.
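To make the idea concrete, the following is a minimal sketch of surrogate-based ABC with an uncertainty-driven acquisition rule, not the paper's exact method. It fits a Gaussian process to simulated discrepancies, estimates the ABC posterior as prior times the GP-based probability that the discrepancy falls below a tolerance, and simulates next where a simple proxy for posterior uncertainty is largest. The toy simulator, prior, kernel lengthscale, and the one-standard-deviation uncertainty proxy are all illustrative assumptions.

```python
import math
import numpy as np

def rbf(a, b, ls=0.7, var=1.0):
    # Squared-exponential kernel between two 1-D point sets
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / ls) ** 2)

def gp_predict(X, y, Xs, noise=1e-3):
    # Standard GP regression: predictive mean and standard deviation at Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    Ks = rbf(X, Xs)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = np.diag(rbf(Xs, Xs)) - (v ** 2).sum(axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def Phi(x):
    # Standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

rng = np.random.default_rng(1)
theta_true, eps = 0.3, 0.2  # hypothetical ground truth and ABC tolerance

def discrepancy(theta):
    # Toy simulator: distance between simulated and "observed" sample mean
    return abs(rng.normal(theta, 1.0, size=30).mean() - theta_true)

grid = np.linspace(-2.0, 2.0, 201)   # candidate parameter values
prior = np.exp(-0.5 * grid ** 2)     # standard normal prior (unnormalised)

X = rng.uniform(-2.0, 2.0, size=5)   # initial space-filling simulations
y = np.array([discrepancy(t) for t in X])

for _ in range(15):
    mu, sd = gp_predict(X, y, grid)
    # Uncertainty proxy: spread in the acceptance probability when the latent
    # discrepancy is shifted by +/- one predictive standard deviation
    hi = np.array([Phi((eps - (m - s)) / s) for m, s in zip(mu, sd)])
    lo = np.array([Phi((eps - (m + s)) / s) for m, s in zip(mu, sd)])
    acq = prior * (hi - lo)
    t_next = grid[int(np.argmax(acq))]  # simulate where the posterior is most uncertain
    X = np.append(X, t_next)
    y = np.append(y, discrepancy(t_next))

# Final ABC posterior estimate: prior * P(discrepancy < eps) under the GP
mu, sd = gp_predict(X, y, grid)
p = prior * np.array([Phi((eps - m) / s) for m, s in zip(mu, sd)])
post = p / p.sum()
print("posterior mean estimate:", float((grid * post).sum()))
```

In contrast to a standard BO acquisition (which would concentrate simulations near the minimiser of the discrepancy), this rule spends the simulation budget wherever the posterior estimate itself is still poorly determined, which is the distinction the abstract draws.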