Over the last ten years, positive definite kernel matrices have become a core ingredient of machine learning methods. The learning process is a convex optimization in a high-dimensional feature space, yet because everything is expressed in terms of kernel evaluations, this high-dimensional space never has to be computed explicitly. We will follow the review paper of Hofmann, Schölkopf and Smola (sections 1 to 3, pages 1-29), covering the properties of kernel matrices, how they are constructed, and where they fit into machine learning optimization problems.
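As a concrete illustration of the ideas above (a sketch of my own, not taken from the paper), the snippet below builds the Gaussian (RBF) kernel matrix for a small sample of points using only pairwise kernel evaluations, and numerically checks the positive semidefiniteness property via its eigenvalues:

```python
import numpy as np

def gaussian_kernel_matrix(X, sigma=1.0):
    """K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2)) for rows x_i of X."""
    sq_norms = np.sum(X**2, axis=1)
    # Pairwise squared distances via the expansion ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    sq_dists = sq_norms[:, None] + sq_norms[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(sq_dists, 0.0) / (2.0 * sigma**2))

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))          # 5 sample points in R^3 (arbitrary example data)
K = gaussian_kernel_matrix(X)

# A valid kernel matrix is symmetric positive semidefinite:
assert np.allclose(K, K.T)
assert np.all(np.linalg.eigvalsh(K) >= -1e-10)
```

Note that the feature space implicitly defined by the Gaussian kernel is infinite-dimensional, yet the matrix `K` only ever requires evaluating the kernel on pairs of input points.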
It is recommended that attendees prepare by reading the paper.