Big-data applications have driven a resurgence of optimization methods with inexpensive iterations, namely first-order methods. The efficiency of first-order methods has been demonstrated on several well-conditioned problems in big-data optimization. However, their practical convergence can be slow on ill-conditioned or pathological instances.
In this talk we will discuss Newton-type methods, which aim to exploit the trade-off between inexpensive iterations and robustness to ill-conditioning. Two methods will be presented: a robust block coordinate descent method and a primal-dual Newton conjugate gradients method. We will discuss the theoretical properties of these methods and present numerical experiments on big-data applications such as regression, machine learning, and image processing.
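To illustrate the trade-off the abstract refers to, the following is a minimal sketch of an inexact (truncated) Newton method in the Newton-CG family: each Newton system is solved only approximately by conjugate gradients, so the method needs nothing beyond gradients and Hessian-vector products, keeping iterations relatively inexpensive while retaining Newton-like robustness. This is a generic textbook sketch applied to a hypothetical L2-regularized logistic regression problem, not the specific algorithms presented in the talk; all names and data here are illustrative assumptions.

```python
import numpy as np

def newton_cg(f, grad, hvp, x0, tol=1e-8, max_iter=50, cg_iters=200):
    """Inexact Newton: approximately solve H d = -g by CG at each step.

    Only Hessian-vector products hvp(x, v) are required, never the full
    Hessian -- this is what keeps per-iteration cost low on large problems.
    (A sketch; the talk's actual methods are more sophisticated.)
    """
    x = x0.copy()
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Conjugate gradients on H d = -g, starting from d = 0.
        d = np.zeros_like(x)
        r = -g.copy()
        p = r.copy()
        rs = r @ r
        for _ in range(cg_iters):
            Hp = hvp(x, p)
            alpha = rs / (p @ Hp)
            d += alpha * p
            r -= alpha * Hp
            rs_new = r @ r
            if np.sqrt(rs_new) < 1e-12:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        # Backtracking line search on the Newton direction (Armijo condition).
        t, fx, slope = 1.0, f(x), g @ d
        while f(x + t * d) > fx + 1e-4 * t * slope:
            t *= 0.5
        x = x + t * d
    return x

# Hypothetical toy problem: L2-regularized logistic regression.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 5))
y = np.sign(A @ rng.standard_normal(5) + 0.1 * rng.standard_normal(40))
lam = 1e-2

def f(x):
    return np.mean(np.log1p(np.exp(-y * (A @ x)))) + 0.5 * lam * (x @ x)

def grad(x):
    s = 1.0 / (1.0 + np.exp(y * (A @ x)))          # sigmoid(-y * a^T x)
    return -(A.T @ (y * s)) / len(y) + lam * x

def hvp(x, v):
    s = 1.0 / (1.0 + np.exp(-y * (A @ x)))
    w = s * (1.0 - s)                              # logistic curvature weights
    return A.T @ (w * (A @ v)) / len(y) + lam * v

x_star = newton_cg(f, grad, hvp, np.zeros(5))
print(np.linalg.norm(grad(x_star)))                # small residual at the solution
```

Truncating the inner CG loop is the key design choice: far from the solution a crude Newton direction suffices, while near the solution CG can be run longer to recover fast local convergence.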