
Proximal Algorithms

By Neal Parikh (Stanford University, USA, npparikh@cs.stanford.edu) and Stephen Boyd (Stanford University, USA, boyd@stanford.edu)

 
Suggested Citation
Neal Parikh and Stephen Boyd (2014), "Proximal Algorithms", Foundations and Trends® in Optimization: Vol. 1: No. 3, pp 127-239. http://dx.doi.org/10.1561/2400000003

Publication Date: 13 Jan 2014
© 2014 N. Parikh and S. Boyd
 
Subjects
Optimization
 


Abstract

This monograph is about a class of optimization algorithms called proximal algorithms. Just as Newton’s method is a standard tool for solving unconstrained smooth optimization problems of modest size, proximal algorithms can be viewed as an analogous tool for nonsmooth, constrained, large-scale, or distributed versions of these problems. They are very generally applicable, but are especially well-suited to problems of substantial recent interest involving large or high-dimensional datasets. Proximal methods sit at a higher level of abstraction than classical algorithms like Newton’s method: the base operation is evaluating the proximal operator of a function, which itself involves solving a small convex optimization problem. These subproblems, which generalize the problem of projecting a point onto a convex set, often admit closed-form solutions or can be solved very quickly with standard or simple specialized methods. Here, we discuss the many different interpretations of proximal operators and algorithms, describe their connections to many other topics in optimization and applied mathematics, survey some popular algorithms, and provide a large number of examples of proximal operators that commonly arise in practice.
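To make the base operation concrete: the proximal operator of a function f with parameter λ > 0 maps a point v to the minimizer over x of f(x) + (1/(2λ))‖x − v‖². The short Python/NumPy sketch below is an editorial illustration rather than an excerpt from the monograph; the function names and the test vector are invented for this example. It evaluates two proximal operators that admit closed forms: the ℓ1 norm, whose proximal operator is elementwise soft thresholding, and the indicator function of a box, whose proximal operator is Euclidean projection onto the box.

    import numpy as np

    def prox_l1(v, lam):
        # prox with parameter lam of the l1 norm: elementwise soft thresholding at lam
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def prox_box(v, lo, hi):
        # prox of the indicator function of the box [lo, hi]^n: Euclidean projection onto the box
        return np.clip(v, lo, hi)

    v = np.array([1.5, -0.2, 0.7])
    print(prox_l1(v, 0.5))        # approximately [1.0, -0.0, 0.2]
    print(prox_box(v, 0.0, 1.0))  # approximately [1.0,  0.0, 0.7]

Both evaluations cost O(n), which is why proximal operators with closed forms serve as cheap building blocks in the algorithms surveyed in the monograph.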

DOI: 10.1561/2400000003
ISBN (paperback): 978-1-60198-716-7, 128 pp., $85.00
ISBN (e-book, PDF): 978-1-60198-717-4, 128 pp., $120.00

Table of contents:
1. Introduction
2. Properties
3. Interpretations
4. Proximal Algorithms
5. Parallel and Distributed Algorithms
6. Evaluating Proximal Operators
7. Examples and Applications
8. Conclusions
References

Proximal Algorithms

Proximal Algorithms discusses proximal operators and proximal algorithms, and illustrates their applicability to standard and distributed convex optimization in general and to many applications of recent interest in particular. It examines different interpretations of proximal operators and algorithms, traces their connections to many other topics in optimization and applied mathematics, surveys some popular algorithms, and provides a large number of examples of proximal operators that commonly arise in practice.
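As a sketch of how these subproblems plug into one popular algorithm of this kind, the example below applies the proximal gradient method to a small lasso problem, minimizing (1/2)‖Ax − b‖² + λ‖x‖₁ by alternating a gradient step on the smooth least-squares term with the ℓ1 proximal operator (soft thresholding). This is again an editorial illustration in Python/NumPy, not code from the monograph; the synthetic data, regularization weight, fixed step size, and iteration count are chosen only for demonstration.

    import numpy as np

    def prox_l1(v, lam):
        # Elementwise soft thresholding: proximal operator of lam * ||x||_1.
        return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

    def proximal_gradient_lasso(A, b, lam, iters=500):
        # Minimize (1/2)||Ax - b||_2^2 + lam * ||x||_1 with a fixed step size
        # t = 1 / L, where L = ||A||_2^2 is the Lipschitz constant of the
        # gradient of the smooth term.
        t = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - b)            # gradient of the smooth term
            x = prox_l1(x - t * grad, t * lam)  # gradient step, then proximal step
        return x

    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 10))
    x_true = np.zeros(10)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true
    # approximate lasso solution; close to the sparse x_true in this noiseless example
    print(proximal_gradient_lasso(A, b, lam=0.1))

Each iteration touches A only through matrix-vector products plus an elementwise proximal operator, which is the pattern that makes such methods attractive for the large-scale and distributed settings the monograph emphasizes.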

 
OPT-003