This title is printed to order and may have been self-published, in which case we cannot guarantee the quality of the content. Most books will have gone through an editing process, but some may not, so please be aware of this before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless the book is faulty. Please contact us if you have any questions.
Today we live in a world that is very much man-made, or artificial. In such a world there are many systems and environments, both real and virtual, that can be described well by formal models. This creates an opportunity for developing synthetic intelligence: artificial systems that cohabit these environments with human beings and carry out some useful function. In this book we address some aspects of this development within the framework of reinforcement learning: learning how to map sensations to actions by trial and error, guided by feedback. In challenging cases, an action may affect not only the immediate reward but also the next sensation and, through it, all subsequent rewards. Stated in the traditional way, the general reinforcement-learning task is unreasonably ambitious because of these two characteristics: search by trial and error and delayed reward. We investigate general ways of breaking the task of designing a controller into more feasible sub-tasks that are solved independently. We propose both taking advantage of past experience by reusing parts of other systems, and facilitating the learning phase by employing a bias in the initial configuration.
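The two difficulties the blurb names, trial-and-error search and delayed reward, can be seen in a minimal example. The sketch below is illustrative only (it is not the book's method): tabular Q-learning on a small chain of states, where reward arrives only at the far end, so the agent must propagate credit for that delayed reward back to its earlier actions. All names and parameter values here are our own assumptions.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.5,
                     gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP (illustrative sketch).

    States are 0 .. n_states-1; actions are 0 = left, 1 = right.
    Reward is 1 only on reaching the rightmost state, so earlier
    actions must be credited for a delayed reward.
    """
    rng = random.Random(seed)
    # One (left, right) value pair per state, initialised to zero.
    q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            # Epsilon-greedy choice: the trial-and-error component.
            if rng.random() < epsilon:
                a = rng.randrange(2)
            else:
                a = 0 if q[s][0] > q[s][1] else 1
            s2 = max(0, s - 1) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Bootstrapped update: propagates the delayed reward
            # backwards through the chain, one step per visit.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning_chain()
# Greedy policy over the learned values for the non-terminal states.
policy = [0 if qa[0] > qa[1] else 1 for qa in q[:-1]]
print(policy)
```

After training, "right" (action 1) should dominate in every non-terminal state, even though only the final step is ever directly rewarded; each update moves a fraction of the discounted downstream value one state closer to the start.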