If Anyone Builds It, Everyone Dies
Hardback

$68.99

The scramble to create superhuman AI has put us on the path to extinction--but it's not too late to change course, as two of the field's earliest researchers explain in this clarion call for humanity. "May prove to be the most important book of our time."--Tim Urban, Wait But Why

In 2023, hundreds of AI luminaries signed an open letter warning that artificial intelligence poses a serious risk of human extinction. Since then, the AI race has only intensified. Companies and countries are rushing to build machines that will be smarter than any person. And the world is devastatingly unprepared for what would come next.

For decades, two signatories of that letter--Eliezer Yudkowsky and Nate Soares--have studied how smarter-than-human intelligences will think, behave, and pursue their objectives. Their research says that sufficiently smart AIs will develop goals of their own that put them in conflict with us--and that if it comes to conflict, an artificial superintelligence would crush us. The contest wouldn't even be close.

How could a machine superintelligence wipe out our entire species? Why would it want to? Would it want anything at all? In this urgent book, Yudkowsky and Soares walk through the theory and the evidence, present one possible extinction scenario, and explain what it would take for humanity to survive.

The world is racing to build something truly new under the sun. And if anyone builds it, everyone dies.

"The best no-nonsense, simple explanation of the AI risk problem I've ever read."--Yishan Wong, Former CEO of Reddit

In Shop: Out of stock
Shipping & Delivery

$9.00 standard shipping within Australia
FREE standard shipping within Australia for orders over $100.00
Express & International shipping calculated at checkout

MORE INFO
Format: Hardback
Publisher: Little, Brown and Company
Date: 16 September 2025
Pages: 272
ISBN: 9780316595643
