This title is printed to order and may have been self-published; if so, we cannot guarantee the quality of the content. Most such books will have gone through an editing process, but some may not, so please bear this in mind before ordering. If in doubt, check the author's or publisher's details, as we can only accept returns for faulty copies. Please contact us if you have any questions.
This monograph deals with methods for stochastic, or data-driven, optimization. The goal of these methods is to minimize a parameter-dependent objective function that, for any parameter value, is the expectation of a noisy sample performance measure, where measurements may come either from a real system or from a simulation, depending on the setting. A class of model-free approaches based on stochastic approximation is presented; these involve random search procedures that make efficient use of the noisy observations. The idea is to estimate the minima of the expected objective directly via an incremental, recursive procedure, rather than to estimate the whole objective function itself. Both asymptotic and finite-sample analyses of the procedures are presented, for convex as well as non-convex objectives.
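As a rough illustration of the kind of recursion described above (the quadratic objective, noise model, and step-size choice below are assumptions made for illustration, not algorithms taken from the monograph), a Robbins-Monro style update tracks a minimiser of an expected objective using only noisy sample observations, without ever building a model of the objective:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_sample_gradient(theta):
    # Noisy observation driving the recursion; here the expected objective is
    # the illustrative quadratic E[f(theta)] = (theta - 3)^2, observed with noise.
    return 2.0 * (theta - 3.0) + rng.normal(scale=0.5)

theta = 0.0
for n in range(1, 5001):
    a_n = 1.0 / n   # diminishing step sizes: sum a_n = inf, sum a_n^2 < inf
    theta -= a_n * noisy_sample_gradient(theta)

print(theta)        # drifts towards the minimiser theta* = 3
```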
The monograph also covers algorithms that estimate the gradient in gradient-based schemes, or both the gradient and the Hessian in Newton-type procedures, using random-direction approaches built from noisy function measurements; the class of approaches studied therefore falls under the broad category of zeroth-order optimization methods. Asymptotic convergence guarantees in the general setup and asymptotic normality results for the various algorithms are presented, along with an introduction to stochastic recursive inclusions and their asymptotic convergence analysis, which is needed because many of these settings involve set-valued maps for any given parameter. Finally, several applications of these methods in reinforcement learning are included, and the appendices summarize the background material for the text.
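For a flavour of what a random-direction, zeroth-order gradient estimate can look like, here is a minimal simultaneous-perturbation style sketch; the test objective, Rademacher perturbations, and step sizes are illustrative assumptions and the code is not reproduced from the monograph:

```python
import numpy as np

rng = np.random.default_rng(1)

def noisy_f(theta):
    # Zeroth-order measurement: only noisy values of the sample performance
    # objective are available (an illustrative quadratic plus observation noise).
    return float(np.sum((theta - 1.0) ** 2) + rng.normal(scale=0.1))

theta = np.zeros(5)
for n in range(1, 2001):
    a_n = 1.0 / n                 # step size
    c_n = 1.0 / n ** 0.25         # perturbation size
    delta = rng.choice([-1.0, 1.0], size=theta.shape)   # random direction
    # Two noisy measurements along +delta and -delta give a gradient estimate
    # without any explicit gradient information from the system.
    g_hat = (noisy_f(theta + c_n * delta) - noisy_f(theta - c_n * delta)) / (2.0 * c_n * delta)
    theta = theta - a_n * g_hat

print(theta)   # approaches the minimiser (1, 1, 1, 1, 1)
```

Only two function measurements are needed per iteration regardless of the dimension of theta, which is what makes random-direction estimates attractive when gradients of the real system or simulator are unavailable.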
$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout
Stock availability is subject to change without notice. We recommend calling the shop or contacting our online team to check availability of low-stock items. Please see our Shopping Online page for more details.