
Misleading Evidence and Evidence-Led Policy: Making Social Science More Experimental
Hardback

$280.99

This title is printed to order. It may have been self-published; if so, we cannot guarantee the quality of its content. Most books will have gone through an editing process, but some may not have, so please be aware of this before ordering. If in doubt, check the author's or publisher's details, as we are unable to accept returns unless a book is faulty. Please contact us if you have any questions.

Research evidence can and should play an important role in shaping public policy. Just as much of the medical community has embraced the concept of evidence-based medicine, increasing numbers of social scientists and government agencies are calling for an evidence-based approach to determine which social programs work and which don't. It is an irony not lost on the social scientists writing for the September volume of The Annals of the American Academy of Political and Social Science that the first use of experimental methods in medicine (to test the effects of streptomycin on tuberculosis in the late 1940s) was actually conducted by an economist. But while more than one million clinical trials in medicine have been conducted since that time, only about 10,000 have been conducted to evaluate whether social programs achieve their intended effects.

The authors of the September volume argue that this level of investment in the gold standard of research designs is insufficient, for a wide range of reasons. Randomized controlled trials, for example, are far better at controlling for selection bias and chance effects than observational methods, while the econometric and statistical techniques that seek to correct for bias fall short of their promise. The volume dramatically demonstrates that alternative methods generate different (and often substantially wrong) estimates of program effects. Some research based on nonexperimental designs actually misleads policymakers and practitioners into supporting programs that don't work, while ignoring others that do.
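To see why self-selection corrupts nonexperimental estimates, consider a minimal simulation (ours, not the volume's; all variable names and numbers are illustrative): motivated people are both likelier to enrol in a program and likelier to do well without it, so a naive comparison of participants with non-participants inflates the apparent effect, while random assignment recovers it.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0  # the program genuinely raises the outcome by 2 units

# Unobserved confounder: "motivation" raises the outcome and also makes
# people more likely to enrol in the program of their own accord.
motivation = rng.normal(size=n)

# Observational data: enrolment is self-selected, driven by motivation.
enrolled = rng.random(n) < 1.0 / (1.0 + np.exp(-2.0 * motivation))
outcome = 5.0 * motivation + true_effect * enrolled + rng.normal(size=n)
naive = outcome[enrolled].mean() - outcome[~enrolled].mean()

# Experimental data: enrolment is assigned by coin flip, so motivation is
# balanced across groups and the simple difference in means is unbiased.
assigned = rng.random(n) < 0.5
outcome_rct = 5.0 * motivation + true_effect * assigned + rng.normal(size=n)
rct = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

print(f"true effect:          {true_effect:.2f}")
print(f"naive observational:  {naive:.2f}")  # far above 2.0: inflated by self-selection
print(f"randomized estimate:  {rct:.2f}")    # close to 2.0

Under these assumptions the naive estimate comes out several times the true effect, while the randomized comparison lands near 2.0.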

The authors of this volume also directly address critiques of experimental designs, which range from questions about their practicality to their ethics. Some of these arguments are well taken, but addressable. The authors, however, reject other arguments against controlled tests as unfounded and damaging to social science.

Policymakers will find these articles invaluable for understanding how alternative research methods can mislead as much as enlighten. Students and researchers will be confronted with powerful arguments that question the use of nonexperimental techniques to estimate program effects.

This volume throws the gauntlet down. We challenge you to pick it up.

In Shop
Out of stock
Shipping & Delivery

$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout

MORE INFO
Format
Hardback
Publisher
Sage Publications (CA)
Country
United States
Date
15 September 2003
Pages
236
ISBN
9780761928577
