This tutorial guide introduces online nonstochastic control, an emerging paradigm in control of dynamical systems and differentiable reinforcement learning that applies techniques from online convex optimization and convex relaxations to obtain new methods with provable guarantees for classical settings in optimal and robust control. In optimal control, robust control, and other control methodologies that assume stochastic noise, the goal is to perform comparably to an offline optimal strategy. In online control, both cost functions and perturbations from the assumed dynamical model are chosen by an adversary. Thus, the optimal policy is not defined a priori and the goal is to attain low regret against the best policy in hindsight from a benchmark class of policies. The resulting methods are based on iterative mathematical optimization algorithms and are accompanied by finite-time regret and computational complexity guarantees. This book is ideal for graduate students and researchers interested in bridging classical control theory and modern machine learning.
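To make the regret objective in the description concrete, here is a minimal sketch of the benchmark in standard notation; this is a reader's gloss, not from the book itself. The linear dynamics x_{t+1} = A x_t + B u_t + w_t, the convex costs c_t, and the benchmark policy class \Pi are the usual assumptions in this literature.

% Regret of an online control algorithm over horizon T,
% measured against the best fixed policy in a benchmark class \Pi.
\[
\mathrm{Regret}_T
  = \sum_{t=1}^{T} c_t(x_t, u_t)
  \;-\; \min_{\pi \in \Pi} \sum_{t=1}^{T} c_t\!\left(x_t^{\pi}, u_t^{\pi}\right),
\]

where both the costs c_t and the perturbations w_t may be chosen adversarially, and (x_t^\pi, u_t^\pi) denotes the state-action trajectory the benchmark policy \pi would have produced on the same perturbation sequence. Sublinear Regret_T means the algorithm's average cost converges to that of the best policy in hindsight, which is exactly the guarantee the description refers to.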