The problem with many machine learning classification algorithms is that their high level of accuracy is achieved at the cost of model comprehensibility, with a consequent loss of justifiability: their mechanism cannot be shown to be reasonable because it cannot be explained. This has hindered their acceptance in sensitive domains, leading to growing demand for ‘explainable AI’. In addition, the EU’s recent GDPR legislation has elevated the issue to a legal requirement. If domain knowledge regarding nondecreasing (monotone) relationships could be incorporated into high-performance classification algorithms without compromising their performance, the resulting increase in comprehensibility may allow them to surpass ‘black box’ barriers to acceptance and unlock their high accuracy for wider use.
This book aims for ‘complete’ monotone classification algorithms that: (a) are partially monotone (allow nonmonotone features); (b) guarantee monotonicity globally; (c) retain high accuracy; and (d) are scalable to large data sets. To achieve these aims, the book contains:
- Explanation of the principles of ordinal classification, monotonicity and partial orders.
- Extensive review of the literature and available monotone algorithms.
- Several techniques for monotone tree-based ensembles (and Random Forest in particular).
- Novel constraint generation for monotone Support Vector Machines.
- Extension of cone-based dominance relations to partial monotonicity, for classification and pairwise and partial-order based problems.

A panel of seventeen partially monotone datasets is used throughout the book to allow comparative empirical accuracy and performance of the many approaches discussed. It is hoped this book encourages and enables practitioners to include knowledge of monotonicity in their models when appropriate, for the sake of accuracy, simplicity, and interpretability.
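The core idea behind partially monotone classification can be illustrated briefly. A hedged sketch (not taken from the book; the function names `dominates` and `monotonicity_violations` are illustrative): a prediction rule is globally monotone if, whenever an instance `x` dominates `x'` (i.e. is at least as large on every monotone feature and equal on the nonmonotone ones), the predicted class for `x` is at least that for `x'`.

```python
def dominates(x, x_prime, monotone_idx):
    """x dominates x_prime if x >= x_prime on every monotone feature
    and the two agree on all remaining (nonmonotone) features."""
    other_idx = [i for i in range(len(x)) if i not in monotone_idx]
    return (all(x[i] >= x_prime[i] for i in monotone_idx)
            and all(x[i] == x_prime[i] for i in other_idx))

def monotonicity_violations(X, predictions, monotone_idx):
    """Count ordered pairs where dominance holds but the predicted
    class decreases -- zero means the predictions are monotone on X."""
    violations = 0
    for i, xi in enumerate(X):
        for j, xj in enumerate(X):
            if i != j and dominates(xi, xj, monotone_idx) \
                    and predictions[i] < predictions[j]:
                violations += 1
    return violations

# Toy example: feature 0 is monotone, feature 1 is nonmonotone.
X = [(1.0, 0.5), (2.0, 0.5), (3.0, 0.5)]
preds = [0, 1, 0]  # class drops from 1 to 0 as feature 0 increases
print(monotonicity_violations(X, preds, monotone_idx=[0]))  # → 1
```

The `other_idx` equality test is what makes the relation *partial*: nonmonotone features only contribute to dominance when they match, which is the kind of cone-based dominance extension the book develops.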