In this groundbreaking exploration of artificial intelligence's impact on scientific research, Michael Lissack and Brenden Meagher examine the profound challenges that Large Language Models (LLMs) pose to academic integrity and knowledge creation.
"Misused Tools" investigates how these powerful AI systems can fundamentally transform, for better or worse, how scientific knowledge is produced, validated, and transmitted. The authors make a crucial distinction between using LLMs to augment human intelligence versus substituting for human judgment, arguing that uncritical adoption risks producing what they term "sloppy science": research that appears sophisticated on the surface but lacks genuine intellectual depth.
Drawing on frameworks from cognitive science, complexity theory, and philosophy of mind, Lissack and Meagher offer a nuanced perspective that neither demonizes nor uncritically celebrates these technologies. Instead, they present practical strategies for researchers to maintain intellectual rigor while leveraging AI's capabilities, including:
- How to approach LLMs as research partners rather than authorities
- Techniques for critical evaluation of AI-generated content
- Frameworks for responsible integration based on the Oxford tutorial model
- Methods to prevent recursive feedback loops of misinformation
This timely volume addresses concerns from individual researchers to institutional leaders, providing both philosophical foundations and practical guidance for navigating the AI revolution in scientific research. It's an essential resource for anyone concerned with preserving the integrity of science while embracing technological progress.