EPI: Expected Parallel Improvement

Scott Clark, Cornell University


This derivative-free global optimization method (EPI) allows for the optimal sampling of many concurrent points from an expensive-to-evaluate, unknown, and possibly non-convex function. Instead of sampling sequentially, which can be inefficient when available resources allow simultaneous evaluation, EPI provides the best set of points to sample next, allowing multiple samplings to be performed in unison. In this work we develop a model for expected parallel improvement by numerically estimating the expected improvement over multiple simultaneous samples, and we use multi-start gradient descent to find the optimal set of points to sample next, fully accounting for points that are currently being sampled but whose results are not yet known.
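For concreteness, a minimal sketch of a Monte Carlo estimator of expected parallel improvement under a Gaussian-process posterior might look like the following. This is an illustrative assumption of how the numerical estimation could be carried out, not the authors' implementation; the function name, signature, and use of NumPy are hypothetical.

```python
import numpy as np

def monte_carlo_parallel_ei(mean, cov, best_so_far, num_samples=10000, rng=None):
    """Monte Carlo estimate of expected parallel improvement (sketch, assumptions noted).

    mean : posterior mean at the q candidate points; pending points whose
           results are not yet known can simply be included here as well.
    cov  : posterior covariance matrix over the same points.
    best_so_far : best objective value observed so far (minimization).
    """
    rng = np.random.default_rng() if rng is None else rng
    # Draw joint samples of the unknown function values at all points.
    samples = rng.multivariate_normal(mean, cov, size=num_samples)
    # Improvement of the best sampled value over the incumbent, per draw.
    improvement = np.maximum(best_so_far - samples.min(axis=1), 0.0)
    # Average over draws to approximate the expectation.
    return improvement.mean()
```

In a full method, an estimator of this kind would be embedded in an outer search (such as the multi-start gradient descent mentioned above) over the coordinates of the candidate points.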

Abstract Author(s): Scott Clark, Peter Frazier