Interior point methods for linear and (convex) quadratic programming display several features which make them particularly attractive for very large scale optimization. They enjoy an impressive low-degree polynomial worst-case complexity, and in practice they display an unrivalled ability to deliver optimal solutions in an almost constant number of iterations which depends very little, if at all, on the problem dimension. Since many problems in machine learning can be recast as linear or quadratic optimization problems, and such problems are often large or even huge, interior point methods are natural candidates for application in this context. In this chapter we discuss several major issues related to interior point methods, including the worst-case complexity result, the features responsible for their ability to solve very large problems, and their existing and potential applications in machine learning.
Key words: Interior Point Methods, Machine Learning, Quadratic Programming, Matrix-Free Methods.