Foundations and Trends® in Optimization, Vol. 9, Issue 3

Riemannian Online Learning

By Xi Wang, The University of Sydney, Australia, xi.wang@sydney.edu.au | Guodong Shi, The University of Sydney, Australia, guodong.shi@sydney.edu.au

 
Suggested Citation
Xi Wang and Guodong Shi (2025), "Riemannian Online Learning", Foundations and Trends® in Optimization: Vol. 9: No. 3, pp 248-406. http://dx.doi.org/10.1561/2400000054

Publication Date: 01 Sep 2025
© 2025 X. Wang and G. Shi
 
Subjects
Online learning,  Computational learning,  Optimization,  Geometry: Differential and Riemannian geometry
 

In this article:
1. Introduction
2. Preliminaries
3. Riemannian Online Gradient Descent
4. Riemannian Bandit Online Gradient Descent
5. Riemannian Online Extra Gradient Descent
6. Riemannian Online Optimistic Gradient Descent
References

Abstract

In emerging fields such as machine learning, quantum computing, biomedical imaging, and robotics, data and decisions often exist in curved, non-Euclidean spaces due to physical constraints or underlying symmetries. Riemannian online optimization provides a new framework for handling learning tasks where data arrives sequentially in geometric spaces. This monograph offers a comprehensive overview of online learning over Riemannian manifolds.

DOI: 10.1561/2400000054
Paperback ISBN: 978-1-63828-610-3, 170 pp., $99.00
E-book (PDF) ISBN: 978-1-63828-611-0, 170 pp., $160.00

Riemannian Online Learning

Riemannian optimization is a powerful tool for decision-making when data and decisions live in curved, non-Euclidean spaces due to physical constraints or underlying symmetries, as in emerging fields such as machine learning, quantum computing, biomedical imaging, and robotics. Riemannian online optimization provides a new framework for handling learning tasks where data arrives sequentially in such geometric spaces.

This monograph offers a comprehensive, unified overview of the state-of-the-art algorithms for online optimization over Riemannian manifolds, together with a detailed and systematic analysis of the regret achievable by those algorithms. The study emphasizes how the curvature of the manifold shapes the exploration-exploitation trade-off and the performance of the algorithms.

After an introduction, Section 2 reviews Riemannian manifolds, along with the necessary background on Riemannian optimization and Euclidean online optimization. Section 3 presents the fundamental Riemannian online gradient descent algorithm under full-information feedback and analyzes the achievable regret on both Hadamard manifolds and general manifolds. Section 4 extends Riemannian online gradient descent to the bandit feedback setting. Sections 5 and 6 turn to two advanced algorithms designed for dynamic regret minimization: Riemannian online extra gradient descent and Riemannian online optimistic gradient descent.
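To give a flavor of the update rule studied in Section 3, the following is a minimal, illustrative sketch (not the monograph's own code) of Riemannian online gradient descent on the unit sphere: at each round the Euclidean gradient of the revealed loss is projected onto the tangent space at the current iterate, and the step is taken along the sphere's exponential map. The linear losses, step size, and toy data stream below are assumptions chosen for illustration.

```python
import numpy as np

def sphere_exp(x, v):
    # Exponential map on the unit sphere: Exp_x(v) = cos(|v|) x + sin(|v|) v/|v|,
    # for v in the tangent space at x (i.e., v orthogonal to x).
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def riemannian_ogd(euclidean_grads, x0, eta):
    # Riemannian online gradient descent: for each revealed loss gradient g_t,
    # project onto the tangent space at x_t, then step via the exponential map.
    x = x0 / np.linalg.norm(x0)
    iterates = [x]
    for g in euclidean_grads:
        rgrad = g - np.dot(g, x) * x          # Riemannian gradient on the sphere
        x = sphere_exp(x, -eta * rgrad)       # geodesic step; stays on the sphere
        iterates.append(x)
    return iterates

# Toy stream (an assumption for illustration): linear losses f_t(x) = -<a_t, x>
# with a_t concentrated around a fixed direction, so grad f_t = -a_t.
rng = np.random.default_rng(0)
target = np.array([0.0, 0.0, 1.0])
grads = [-(target + 0.1 * rng.standard_normal(3)) for _ in range(200)]
xs = riemannian_ogd(grads, np.array([1.0, 0.0, 0.0]), eta=0.1)
print(np.linalg.norm(xs[-1]), np.dot(xs[-1], target))
```

Because the update uses the exponential map rather than a Euclidean step followed by re-normalization, every iterate remains exactly on the manifold; the final iterate aligns with the common loss-minimizing direction.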

 
OPT-054