### Nathan Srebro (Toyota Technological Institute at Chicago)

#### Matrix learning: a tale of two norms

*Joint work with Rina Foygel, Jason Lee, Ben Recht, Russ Salakhutdinov, Ohad Shamir, Adi Shraibman, Joel Tropp and others.*

*Wednesday 23 May 2012 at 15.30, JCMB 6206*

##### Abstract

There has been much interest in recent years in various ways of constraining
the complexity of matrices based on factorizations into a product of two
simpler matrices. Such measures of matrix complexity can then be used as
regularizers for such tasks as matrix completion, collaborative filtering,
multi-task learning and multi-class learning. In this talk I will discuss two
forms of matrix regularization which constrain the norm of the factorization,
namely the trace-norm (aka nuclear-norm) and the so-called max-norm (aka
γ_{2}:l_{1}→l_{∞} norm). I will
both argue that they are independently motivated and often model data better
than rank constraints, and explore their relationships to the rank. In
particular, I will discuss how simple low-rank matrix completion guarantees
can be obtained using these measures, and without various "incoherence"
assumptions. I will present both theoretical and empirical arguments for why
the max-norm might actually be a better regularizer, as well as a better
convex surrogate for the rank.
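As a concrete illustration of the two norms discussed in the abstract, the following sketch (a minimal NumPy example; the random matrix and factor names are illustrative, not from the talk) computes the trace-norm of a low-rank matrix and shows its factorization characterization: the trace-norm is the minimum of (‖A‖_F² + ‖B‖_F²)/2 over all factorizations M = AB^T, attained by the balanced SVD factorization, while the max-norm minimizes the product of the largest row norms of the two factors instead.

```python
import numpy as np

rng = np.random.default_rng(0)
# A small low-rank matrix, built as a product of two thin factors.
U = rng.standard_normal((6, 2))
V = rng.standard_normal((5, 2))
M = U @ V.T

# Trace-norm (nuclear norm): the sum of the singular values of M.
trace_norm = np.linalg.svd(M, compute_uv=False).sum()

# Factorization characterization:
#   trace_norm(M) = min over M = A @ B.T of (||A||_F^2 + ||B||_F^2) / 2,
# attained by the "balanced" factors from the SVD below.
Us, s, Vt = np.linalg.svd(M, full_matrices=False)
A = Us * np.sqrt(s)      # balanced left factor
B = Vt.T * np.sqrt(s)    # balanced right factor
balanced = (np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2) / 2

# An arbitrary factorization (here the original U, V) only upper-bounds it.
arbitrary = (np.linalg.norm(U) ** 2 + np.linalg.norm(V) ** 2) / 2

# Max-norm upper bound from the same factors: product of the largest
# row norms of A and B (minimizing over factorizations gives the max-norm).
max_norm_bound = (np.linalg.norm(A, axis=1).max()
                  * np.linalg.norm(B, axis=1).max())
```

The key contrast the talk draws is visible here: the trace-norm averages the row norms of the factors (Frobenius norms), while the max-norm takes the worst case over rows, which is why it behaves differently as a convex surrogate for rank.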
