Natural Language Processing (NLP) studies the problems of automated generation and understanding of natural human languages. Tasks within NLP range from labelling words with their word class and parsing sentences into syntax trees to machine translation.
For some years now, the most prominent approach to NLP has been to use labelled problem-solution pairs to train a probabilistic model of the task in question. Given such a trained model, one then has to find (infer) the solution with the highest probability for a given problem instance. While this is straightforward for simple classification tasks, it becomes increasingly difficult for so-called structured prediction tasks, such as parsing or machine translation, where more complex outputs have to be generated. The methods of choice so far have been either to incorporate strong (and possibly poorly justified) independence assumptions into the probabilistic model and use dynamic programming to find the best solution, or to resort to heuristics.
In this talk we show how Integer Linear Programming (ILP) and the Cutting Plane method can be used to solve the inference problem in statistical NLP in a principled manner while still allowing generic dependencies to be incorporated into models. We will also discuss the role of ILP and Cutting Planes in learning model parameters. We present first results for the task of dependency parsing with additional hard constraints and outline how we plan to extend the approach to other tasks and to the case of soft constraints.
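To give a flavour of the cutting-plane idea in this setting, the following is a minimal, self-contained sketch (not the speaker's actual formulation): arc-factored dependency parsing where an unconstrained solver is called repeatedly, and whenever its solution contains a cycle, a constraint forbidding that cycle is added before re-solving. All scores and helper names are hypothetical, and exhaustive search stands in for a real ILP solver.

```python
# Toy cutting-plane decoder for arc-scored dependency parsing.
# Exhaustive search stands in for an ILP solver; scores are made up.
from itertools import product

def find_cycle(heads):
    """heads: dict dep -> head (0 = root). Return the arc set of a cycle, or None."""
    for start in heads:
        seen, node = [], start
        while node != 0 and node in heads:
            if node in seen:
                cyc = seen[seen.index(node):]
                return {(heads[d], d) for d in cyc}
            seen.append(node)
            node = heads[node]
    return None

def solve(scores, tokens, cuts):
    """Exact argmax over head assignments, subject to the accumulated cuts:
    for each previously found cycle C, use at most |C| - 1 of its arcs."""
    best, best_heads = float("-inf"), None
    cands = {d: [h for h in [0] + tokens if h != d] for d in tokens}
    for choice in product(*(cands[d] for d in tokens)):
        heads = dict(zip(tokens, choice))
        arcs = {(h, d) for d, h in heads.items()}
        if any(len(arcs & c) >= len(c) for c in cuts):
            continue  # violates a cutting-plane constraint
        s = sum(scores.get(a, -5.0) for a in arcs)
        if s > best:
            best, best_heads = s, heads
    return best_heads

def cutting_plane_parse(scores, tokens):
    cuts = []
    while True:
        heads = solve(scores, tokens, cuts)
        cycle = find_cycle(heads)
        if cycle is None:
            return heads       # acyclic: a valid dependency structure
        cuts.append(cycle)     # add the violated constraint and re-solve

# Hypothetical arc scores for a 3-token sentence; (h, d) means head h, dependent d.
scores = {(2, 1): 10.0, (1, 2): 10.0, (0, 1): 5.0, (0, 2): 1.0, (1, 3): 3.0}
print(cutting_plane_parse(scores, [1, 2, 3]))
```

Here the unconstrained optimum contains the 2-cycle between tokens 1 and 2; one cut excludes it, and the second solve already yields a tree. The appeal of the approach is that only constraints that are actually violated ever enter the ILP, keeping each subproblem small.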