If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All

by Eliezer Yudkowsky, Nate Soares

Save 10%

€24.99 (RRP €28.00)

Incl. VAT, free shipping

Home delivery

Details

Sales rank: 21734
Binding: Hardcover
Publication date: 16.09.2025
Publisher: Little Brown USA
Pages: 256
Dimensions (L/W/H): 24/15.8/2.5 cm
Weight: 463 g
Language: English
ISBN: 978-0-316-59564-3

Manufacturer address

Libri GmbH
Europaallee 1
36244 Bad Hersfeld
DE

Email: gpsr@libri.de

What our customers say

2 reviews

Information about reviews

To submit a review, you must be signed in to your account. We do not verify the authenticity of reviews. We reserve the right to shorten or delete review texts that violate our guidelines.

5 stars (2)
4 stars (0)
3 stars (0)
2 stars (0)
1 star (0)

I was on the fence about whether I should read this book

Reviewed on 03.10.2025

Review number: 2614177

Reviewed: Book (hardcover)

I was really on the fence about whether I should read this book. I was already scared about AI before reading it, and now I know I severely underestimated how bad it really is. Do I regret reading the book? Not in the slightest. If you ask me, EVERYONE should read this book. The actual dangers AI poses if it does not get regulated ASAP are something everyone should know about. Everyone has to know about it! It's bad, but the more people know about it, the better our chances are of stopping AI before it's too late. So if you are on the fence about reading this book, please read it right now!

Well-written, up-to-date, and highly relevant

Reviewed on 22.09.2025

Review number: 2603765

Reviewed: Book (hardcover)

This book is rightfully recommended by a wide range of credible and prominent people, including Nobel Prize winners (Ben Bernanke), national security experts (Jon Wolfsthal), MIT professors (Max Tegmark) and more. The main message of the book is: "If any company or group, anywhere on the planet, builds an artificial superintelligence using anything remotely like current techniques, based on anything remotely like the present understanding of AI, then everyone, everywhere on Earth, will die." This is relevant for everybody, because most of the major AI companies are currently trying to build such a superintelligent AI, with wildly insufficient precautions. Most people are not aware that there is a real risk that this will kill everyone on Earth.

Part 1 of the book explains the very strong arguments for why this is the case: why such AIs develop weird goals that were not intended by their human developers, how these are practically guaranteed to have motivations incompatible with human life, and how such a superhuman AI would find it very easy to kill literally every human on Earth.

Part 2 illustrates these abstract arguments with a detailed example scenario of how such an AI could come into existence, how it evades human constraints (even though the fictional AI company is more careful than present-day real AI companies), and how it then kills all humans and takes over as many galaxies as it can. This is not meant as a detailed prediction; as explained in Part 1, there are many different ways in which such an AI could end up killing all humans. Part 2 is still helpful for making the general arguments from Part 1 more salient.

Part 3 is the authors' call to action: how can we prevent such an outcome? What is the current state of AI safety, and what do we urgently need to do to ensure no homicidal superintelligent AI ever gets built?

The book is very well written and clearly explains the arguments, accessible to a lay audience, in only 250 pages. Many more details and background information are available for free in an online appendix, if a reader has any questions the book did not answer. I strongly recommend this book and hope that it helps to improve awareness and discourse around this topic. Ideally, this might improve public debate on these risks and spur political action to prevent anyone from building a superintelligent AI, so that we and our children can have long and happy lives, and not have everyone die.
