We consider a general class of second-order iterations for unconstrained optimization that includes regularization and trust-region variants of Newton's method. For each method in this class, we exhibit a smooth, bounded-below objective function whose gradient is globally Lipschitz continuous within an open convex set containing all iterates encountered and whose Hessian is α-Hölder continuous (for given α ∈ [0, 1]) on the path of the iterates, for which the method in question takes at least ⌊ε^(−(2+α)/(1+α))⌋ function evaluations to generate a first iterate whose gradient is smaller than ε in norm. This provides a lower bound on the evaluation complexity of second-order methods in our class when applied to smooth problems satisfying our assumptions. Furthermore, for α = 1, this lower bound is of the same order in ε as the upper bound on the evaluation complexity of cubic regularization, thus implying that cubic regularization has optimal worst-case evaluation complexity within our class of second-order methods.
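The lower bound stated in the abstract can be written as displayed math; this is only a transcription of the bound above, with N(ε) a hypothetical symbol (not used in the abstract) for the number of function evaluations needed to reach a gradient of norm at most ε:

```latex
% Lower bound on the evaluation complexity, transcribed from the abstract.
% N(eps) denotes (our notation, assumed) the number of function evaluations
% required before the first iterate with gradient norm at most eps.
\[
  N(\varepsilon) \;\geq\; \left\lfloor \varepsilon^{-\frac{2+\alpha}{1+\alpha}} \right\rfloor,
  \qquad \alpha \in [0, 1].
\]
% For alpha = 1 (Lipschitz-continuous Hessian) the exponent is -3/2, which is
% the same order in eps as the known upper bound for cubic regularization.
```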
Written: 19 June 2011
Submitted for publication