By Sushant Sachdeva, Yale University, USA, sushant.sachdeva@yale.edu | Nisheeth K. Vishnoi, Microsoft Research, India, nisheeth.vishnoi@gmail.com
This monograph presents techniques to approximate real functions such as x^s, x^{-1}, and e^{-x} by simpler functions, and shows how these results can be used for the design of fast algorithms. The key lies in the fact that such results imply faster ways to approximate primitives such as A^s v, A^{-1}v, and exp(-A)v, and to compute matrix eigenvalues and eigenvectors. Indeed, many fast algorithms reduce to the computation of such primitives, which have proved useful for speeding up several fundamental computations such as random walk simulation, graph partitioning, and solving linear systems of equations.
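As a toy illustration of this reduction (a minimal sketch, not taken from the monograph), approximating e^{-x} by its degree-d truncated Taylor polynomial turns the computation of exp(-A)v into d matrix-vector products, with no matrix exponentiation. The function name and the example matrix (a small graph Laplacian) are illustrative choices; the monograph develops sharper polynomial approximations that need much lower degree than plain Taylor truncation.

```python
import numpy as np

def expm_neg_times_vec(A, v, degree=20):
    """Approximate exp(-A) v by the truncated Taylor series
    sum_{k=0}^{degree} (-A)^k v / k!, using only mat-vec products."""
    term = v.astype(float).copy()   # current Taylor term, (-A)^k v / k!
    result = term.copy()            # running partial sum
    for k in range(1, degree + 1):
        term = -(A @ term) / k      # one matrix-vector product per term
        result += term
    return result

# Illustrative input: the Laplacian of a triangle graph.
A = np.array([[ 2., -1., -1.],
              [-1.,  2., -1.],
              [-1., -1.,  2.]])
v = np.array([1., 0., 0.])
approx = expm_neg_times_vec(A, v)
```

For this small symmetric matrix the approximation can be checked against the exact value computed from an eigendecomposition; for a large sparse A, the point is that each of the d steps costs only one sparse mat-vec.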
Faster Algorithms via Approximation Theory illustrates how classical and modern techniques from approximation theory play a crucial role in obtaining results that are relevant to the emerging theory of fast algorithms. The key lies in the fact that such results imply faster ways to approximate primitives such as products of matrix functions with vectors, and to compute matrix eigenvalues and eigenvectors, which are fundamental to many spectral algorithms.
The first half of the book is devoted to the ideas and results from approximation theory that are central, elegant, and may have wider applicability in theoretical computer science. These include not only techniques relating to polynomial approximations but also those relating to approximations by rational functions and beyond. The second half illustrates a variety of ways in which these results can be used to design fast algorithms.
Faster Algorithms via Approximation Theory is self-contained and should be of interest to researchers and students in theoretical computer science, numerical linear algebra, and related areas.