By Xi Wang, The University of Sydney, Australia, xi.wang@sydney.edu.au | Guodong Shi, The University of Sydney, Australia, guodong.shi@sydney.edu.au
Riemannian optimization is a powerful tool for decision-making when the data and decision spaces are non-flat owing to physical constraints and/or underlying symmetries. In emerging fields such as machine learning, quantum computing, biomedical imaging, and robotics, data and decisions often live in exactly such curved, non-Euclidean spaces. Riemannian online optimization provides a new framework for handling learning tasks where data arrive sequentially in these geometric spaces.
This monograph offers a comprehensive overview of online learning over Riemannian manifolds and a unified treatment of the state-of-the-art algorithms for online optimization in this setting, together with a detailed and systematic analysis of the regret these algorithms can achieve. The study emphasizes how the curvature of the manifold shapes the trade-off between exploration and exploitation, and with it the performance of the algorithms.
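For orientation, the benchmark behind these regret analyses is the standard one from online convex optimization, transplanted to the manifold setting; the notation below (decision set $\mathcal{N}$, losses $f_t$) is our own shorthand rather than the monograph's:

$$\mathrm{Regret}_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{N}} \sum_{t=1}^{T} f_t(x),$$

where $x_t \in \mathcal{N}$ is the decision at round $t$ and each loss $f_t$ is revealed only after $x_t$ is committed. Dynamic regret, the focus of the later sections, replaces the single fixed comparator with a time-varying comparator sequence.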
After an introduction, Section 2 briefly reviews Riemannian manifolds, together with preliminaries on Riemannian optimization and Euclidean online optimization. Section 3 presents the fundamental Riemannian online gradient descent algorithm under full-information feedback and analyzes the achievable regret on both Hadamard manifolds and general manifolds. Section 4 extends the Riemannian online gradient descent algorithm to the bandit feedback setting. In Sections 5 and 6, the authors turn to two advanced Riemannian online optimization algorithms designed for dynamic regret minimization: Riemannian online extra-gradient descent and Riemannian online optimistic gradient descent.
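To make the basic update concrete, the sketch below runs Riemannian online gradient descent on the unit sphere, where the tangent-space projection and exponential map have simple closed forms. This is an illustrative sketch only, not the monograph's implementation: the quadratic tracking losses, the step size eta, and the choice of the sphere as the manifold are all assumptions made for the example.

```python
import numpy as np

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic
    from x in the tangent direction v for unit time."""
    nv = np.linalg.norm(v)
    if nv < 1e-12:
        return x
    return np.cos(nv) * x + np.sin(nv) * (v / nv)

def sphere_rgrad(x, egrad):
    """Riemannian gradient on the sphere: project the Euclidean
    gradient onto the tangent space at x."""
    return egrad - np.dot(x, egrad) * x

def riemannian_ogd(losses, x0, eta):
    """Riemannian online gradient descent (full-information feedback).
    `losses` is a sequence of (loss_fn, euclidean_grad_fn) pairs,
    one per round; the loss is revealed after the decision is made."""
    x = x0 / np.linalg.norm(x0)
    per_round_losses = []
    for f, egrad_f in losses:
        per_round_losses.append(f(x))     # suffer the loss at the current decision
        g = sphere_rgrad(x, egrad_f(x))   # Riemannian gradient at x
        x = sphere_exp(x, -eta * g)       # geodesic step via the exponential map
    return x, per_round_losses

# Toy usage (an assumption for the demo): track drifting targets on S^2
# with quadratic losses f_t(x) = 0.5 * ||x - a_t||^2.
rng = np.random.default_rng(0)
targets = [t / np.linalg.norm(t) for t in rng.normal(size=(50, 3))]
losses = [(lambda x, a=a: 0.5 * np.linalg.norm(x - a) ** 2,
           lambda x, a=a: x - a) for a in targets]
x_final, history = riemannian_ogd(losses, np.array([1.0, 0.0, 0.0]), eta=0.3)
```

The structural point, which carries over to the general manifolds treated in the monograph, is that each round replaces the Euclidean update x - eta * g with a geodesic step exp_x(-eta * grad f_t(x)).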