Foundations and Trends® in Machine Learning > Vol 4 > Issue 3

Kernels for Vector-Valued Functions: A Review

Mauricio A. Álvarez, Department of Electrical Engineering, Universidad Tecnológica de Pereira, Colombia, malvarez@utp.edu.co
Lorenzo Rosasco, Istituto Italiano di Tecnologia, Italy and Massachusetts Institute of Technology, USA, lrosasco@mit.edu
Neil D. Lawrence, Department of Computer Science, University of Sheffield and The Sheffield Institute for Translational Neuroscience, UK, N.Lawrence@dcs.sheffield.ac.uk
 
Suggested Citation
Mauricio A. Álvarez, Lorenzo Rosasco and Neil D. Lawrence (2012), "Kernels for Vector-Valued Functions: A Review", Foundations and Trends® in Machine Learning: Vol. 4: No. 3, pp 195-266. http://dx.doi.org/10.1561/2200000036

Published: 19 Jun 2012
© 2012 M. A. Álvarez, L. Rosasco and N. D. Lawrence
 
Subjects
Kernel methods
 

In this article:
1 Introduction
2 Learning Scalar Outputs with Kernel Methods
3 Learning Multiple Outputs with Kernel Methods
4 Separable Kernels and Sum of Separable Kernels
5 Beyond Separable Kernels
6 Inference and Computational Considerations
7 Applications of Multivariate Kernels
8 Discussion
Acknowledgments
Notations and Acronyms
References

Abstract

Kernel methods are among the most popular techniques in machine learning. From a regularization perspective, they play a central role in regularization theory, providing a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a probabilistic perspective, they are central to Gaussian processes, where the kernel function is known as the covariance function. Traditionally, kernel methods have been used in supervised learning problems with scalar outputs, and a considerable amount of work has been devoted to designing and learning kernels in that setting. More recently, there has been increasing interest in methods that deal with multiple outputs, motivated in part by frameworks such as multitask learning. In this monograph, we review different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and functional methods.
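The separable constructions the monograph reviews build a multi-output kernel from a scalar kernel on the inputs and a positive semi-definite matrix coupling the outputs. As an illustrative sketch (the function names and parameter choices below are our own, not taken from the monograph), a separable kernel of the form K((x, d), (x', d')) = B[d, d'] · k(x, x') can be assembled with a Kronecker product:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Scalar RBF (squared exponential) kernel matrix k(X1, X2)."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / lengthscale ** 2)

def separable_kernel(X1, X2, B, lengthscale=1.0):
    """Full (N1*D) x (N2*D) multi-output covariance: B Kronecker k(X1, X2)."""
    K = rbf_kernel(X1, X2, lengthscale)
    return np.kron(B, K)

# Example: D = 2 outputs coupled through a PSD coregionalization matrix B
# (illustrative values only).
X = np.random.default_rng(0).normal(size=(5, 1))
B = np.array([[1.0, 0.5],
              [0.5, 2.0]])
K_full = separable_kernel(X, X, B)
print(K_full.shape)  # (10, 10)
```

Because B and the scalar kernel matrix are both positive semi-definite, their Kronecker product is as well, so this construction always yields a valid multi-output covariance.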

DOI:10.1561/2200000036
ISBN: 978-1-60198-558-3
80 pp. $65.00
 
ISBN: 978-1-60198-559-0
80 pp. $110.00

Kernels for Vector-Valued Functions

Kernel methods are among the most popular techniques in machine learning. From a regularization theory perspective, they provide a natural choice for the hypothesis space and the regularization functional through the notion of reproducing kernel Hilbert spaces. From a probabilistic theory perspective, they are central to Gaussian processes, where the kernel function is known as the covariance function. The theory of kernel methods for scalar-valued functions is by now well established, and a considerable amount of work has been devoted to designing and learning kernels. More recently, there has been increasing interest in methods that deal with multiple outputs, motivated in part by frameworks such as multitask learning. Applications of kernels for vector-valued functions include sensor networks, geostatistics, computer graphics and several others. Kernels for Vector-Valued Functions: A Review examines different methods to design or learn valid kernel functions for multiple outputs, paying particular attention to the connection between probabilistic and regularization methods. It is aimed at researchers with an interest in the theory and application of kernels for vector-valued functions in areas such as statistics, computer science and engineering. One of its goals is to provide a unified framework and a common terminology for researchers working in machine learning and statistics.
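To make the Gaussian process reading concrete, the following is a hypothetical sketch (our own illustration, not code from the review) of using a multi-output covariance of the separable form B Kronecker k as a GP prior. When the coupling matrix B has non-zero off-diagonal entries, observations of one output inform the posterior mean of the other:

```python
import numpy as np

def rbf(X1, X2, ell=1.0):
    """Scalar RBF kernel matrix (illustrative choice of base kernel)."""
    sq = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * sq / ell ** 2)

rng = np.random.default_rng(1)
X = np.linspace(0.0, 1.0, 6)[:, None]       # shared training inputs
B = np.array([[1.0, 0.9],
              [0.9, 1.0]])                   # strongly coupled outputs
K = np.kron(B, rbf(X, X))                    # (12, 12) joint covariance

y = rng.normal(size=12)                      # stacked observations [y1; y2]
noise = 1e-4                                 # small jitter / noise variance
alpha = np.linalg.solve(K + noise * np.eye(12), y)

Xs = np.array([[0.5]])                       # a single test input
Ks = np.kron(B, rbf(Xs, X))                  # (2, 12) cross-covariance
mean = Ks @ alpha                            # posterior mean for both outputs
print(mean.shape)  # (2,)
```

The same stacked-covariance view underlies the inference and computational issues the review discusses: the joint covariance grows as (N·D) x (N·D), which motivates the approximation schemes covered in its later sections.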

 
MAL-036