This title is printed to order and may have been self-published; if so, we cannot guarantee the quality of the content. Most such books will have gone through an editing process, but some may not, so please be aware of this before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
Riemannian optimization is a powerful tool for decision-making when the data and the decision space form a non-flat space. In emerging fields such as machine learning, quantum computing, biomedical imaging, and robotics, data and decisions often live in curved, non-Euclidean spaces due to physical constraints or underlying symmetries. Riemannian online optimization provides a framework for learning tasks where data arrives sequentially in such geometric spaces.
This monograph offers a comprehensive, unified overview of the state-of-the-art algorithms for online optimization over Riemannian manifolds, together with a detailed and systematic analysis of the regret achievable by those algorithms. The study emphasizes how the curvature of the manifold influences the trade-off between exploration and exploitation, and hence the performance of the algorithms.
After an introduction, Section 2 briefly reviews Riemannian manifolds, along with preliminaries on Riemannian optimization and Euclidean online optimization. Section 3 presents the fundamental Riemannian online gradient descent algorithm under full-information feedback and analyzes the achievable regret on both Hadamard manifolds and general manifolds. Section 4 extends Riemannian online gradient descent to the bandit feedback setting. Sections 5 and 6 turn to two advanced Riemannian online optimization algorithms designed for dynamic regret minimization: Riemannian online extra-gradient descent and Riemannian online optimistic gradient descent.
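To give a feel for the basic scheme discussed in the monograph, here is a minimal sketch of Riemannian online gradient descent on the unit sphere — a toy setting chosen for illustration, not code from the book. Each round, the Euclidean gradient of the revealed loss is projected onto the tangent space at the current iterate (giving the Riemannian gradient for the round metric), and the update follows a geodesic via the sphere's exponential map. The loss family and step size below are assumptions for the example.

```python
import numpy as np

def tangent_project(x, g):
    """Project a Euclidean gradient g onto the tangent space of the
    unit sphere at x (the Riemannian gradient for the round metric)."""
    return g - np.dot(x, g) * x

def sphere_exp(x, v):
    """Exponential map on the unit sphere: follow the geodesic from x
    in tangent direction v for arc length ||v||."""
    norm = np.linalg.norm(v)
    if norm < 1e-12:
        return x
    return np.cos(norm) * x + np.sin(norm) * (v / norm)

def riemannian_ogd(rounds, x0, step=0.1):
    """Riemannian online gradient descent on the unit sphere.

    rounds: sequence of (f_t, grad_f_t) pairs revealed one per round
            (full-information feedback).
    Returns the iterates and the cumulative loss suffered.
    """
    x = x0 / np.linalg.norm(x0)
    total_loss, iterates = 0.0, [x]
    for f, grad in rounds:
        total_loss += f(x)                   # suffer loss at current point
        rgrad = tangent_project(x, grad(x))  # Riemannian gradient
        x = sphere_exp(x, -step * rgrad)     # geodesic descent step
        iterates.append(x)
    return iterates, total_loss

# Toy losses f_t(x) = -<a_t, x>: each round pulls the iterate toward a_t.
rng = np.random.default_rng(0)
targets = [rng.normal(size=3) for _ in range(50)]
rounds = [(lambda x, a=a: -a @ x, lambda x, a=a: -a) for a in targets]
xs, loss = riemannian_ogd(rounds, x0=np.array([1.0, 0.0, 0.0]), step=0.1)
```

Note that the projection-then-exponential-map step is exactly the manifold analogue of the Euclidean online gradient update; on more general manifolds the exponential map is often replaced by a cheaper retraction.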