In many practical optimization problems, the number of function evaluations is severely limited by time or cost. This constraint has driven the development of efficient global optimization methods that require only a small number of function evaluations. This talk considers, in particular, the method of Jones et al., in which the objective is modelled by a linear predictor that interpolates the function at a set of sample points. A corresponding standard-error function models the uncertainty in the predictor at points not yet sampled, and the best new sample point is determined by optimizing a merit function of the predictor and the standard error. The talk focuses on efficient methods for optimizing this merit function and on extending such global optimization methods to the case where the gradient of the objective at the sample points is available; it also discusses the assumptions about the nature of the objective function that underpin the method.
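The loop the abstract describes — fit an interpolating predictor, compute its standard error, then sample where a merit function of the two is maximal — can be sketched as follows. This is a minimal illustration, not the method of the talk: it assumes a one-dimensional problem on [0, 1], a Gaussian-process-style predictor with a squared-exponential correlation (the `length` and `nugget` parameters are hypothetical choices), and expected improvement as the merit function.

```python
import math
import numpy as np

def corr(a, b, length=0.3):
    """Squared-exponential correlation matrix between 1-D point sets a and b."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * length ** 2))

def predict(x_new, x_s, y_s, length=0.3, nugget=1e-8):
    """Interpolating predictor and its standard error at the points x_new."""
    K = corr(x_s, x_s, length) + nugget * np.eye(len(x_s))
    k = corr(x_new, x_s, length)
    mean = k @ np.linalg.solve(K, y_s)
    # Diagonal of k K^{-1} k^T gives the reduction in prior (unit) variance.
    var = 1.0 - np.sum(k * np.linalg.solve(K, k.T).T, axis=1)
    return mean, np.sqrt(np.clip(var, 0.0, None))

def expected_improvement(mean, se, f_best):
    """Merit function: expected improvement over the best sample (minimization)."""
    ei = np.zeros_like(mean)
    ok = se > 1e-12          # EI is zero where the predictor is certain
    z = (f_best - mean[ok]) / se[ok]
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    ei[ok] = (f_best - mean[ok]) * cdf + se[ok] * pdf
    return ei

def optimize(f, x0, n_iter=5):
    """Repeatedly sample the grid point that maximizes the merit function."""
    grid = np.linspace(0.0, 1.0, 201)
    x_s = np.array(x0, dtype=float)
    y_s = f(x_s)
    for _ in range(n_iter):
        mean, se = predict(grid, x_s, y_s)
        x_next = grid[np.argmax(expected_improvement(mean, se, y_s.min()))]
        x_s = np.append(x_s, x_next)
        y_s = np.append(y_s, f(x_next))
    return x_s, y_s

# Toy objective (hypothetical): quadratic with minimum at x = 0.37.
f = lambda x: (np.asarray(x) - 0.37) ** 2
x_s, y_s = optimize(f, [0.0, 0.5, 1.0])
```

Here the merit function is maximized by exhaustive search over a grid; the talk is concerned with doing this step efficiently, since the merit surface is typically highly multimodal.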