Bibliometrics and Research Evaluation: Uses and Abuses
Yves Gingras (Université du Québec à Montréal)

Hardback

$96.99

Why bibliometrics is useful for understanding the global dynamics of science but generates perverse effects when applied inappropriately in research evaluation and university rankings.

The research evaluation market is booming. Rankings, metrics, the h-index, and impact factors are the reigning buzzwords. Governments and research administrators want to evaluate everything, from teachers, professors, and training programs to entire universities, using quantitative indicators. Among the tools used to measure research excellence, bibliometrics, the analysis of aggregate data on publications and citations, has become dominant. Bibliometrics is hailed as an objective measure of research quality, a quantitative measure more useful than subjective and intuitive evaluation methods such as peer review, which has been used since scientific papers were first published in the seventeenth century. In this book, Yves Gingras offers a spirited argument against an unquestioning reliance on bibliometrics as an indicator of research quality. Gingras shows that bibliometric rankings have no real scientific validity, rarely measuring what they purport to measure. Although the study of publication and citation patterns, at the proper scales, can yield insights into the global dynamics of science over time, ill-defined quantitative indicators often generate perverse and unintended effects on the direction of research. Moreover, the abuse of bibliometrics occurs when data are manipulated to boost rankings. Gingras looks at the politics of evaluation and argues that using numbers can be a way to control scientists and diminish their autonomy in the evaluation process. Proposing precise criteria for establishing the validity of indicators at a given scale of analysis, Gingras questions why universities are so eager to let invalid indicators influence their research strategy.
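To make concrete what such indicators measure, here is a minimal illustrative sketch, not taken from the book, of how one of them, the h-index, is computed from a researcher's per-paper citation counts (an author has index h if h of their papers have each been cited at least h times):

    def h_index(citations):
        """Return the largest h such that at least h papers
        have at least h citations each (Hirsch's definition)."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank
            else:
                break
        return h

    # Example: five papers cited 10, 8, 5, 2, and 1 times give an h-index of 3.
    print(h_index([10, 8, 5, 2, 1]))  # prints 3

The ease of such a calculation is part of its appeal; the book's argument concerns whether a number of this kind is a valid indicator of research quality at a given scale of analysis.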

In Shop
Out of stock

Format: Hardback
Publisher: MIT Press Ltd
Country: United States
Date: 7 October 2016
Pages: 136
ISBN: 9780262035125
