
Smarter Than Us: The Rise of Machine Intelligence by Stuart Armstrong (1-May-2014) Paperback

What happens when machines become smarter than humans? Forget lumbering Terminators. The power of an artificial intelligence (AI) comes from its intelligence, not physical strength and laser guns. Humans steer the future not because we're the strongest or the fastest but because we're the smartest. When machines become smarter than humans, we'll be handing them the steering wheel. What promises—and perils—will these powerful machines present? Stuart Armstrong's new book navigates these questions with clarity and wit.

Can we instruct AIs to steer the future as we desire? What goals should we program into them? It turns out this question is difficult to answer! Philosophers have tried for thousands of years to define an ideal world, but there remains no consensus. The prospect of goal-driven, smarter-than-human AI gives moral philosophy a new urgency. The future could be filled with joy, art, compassion, and beings living worthwhile and wonderful lives—but only if we're able to precisely define what a "good" world is, and skilled enough to describe it perfectly to a computer program. AIs, like computers, will do what we say—which is not necessarily what we mean. Such precision requires encoding the entire system of human values for an AI: explaining them to a mind that is alien to us, defining every ambiguous term, clarifying every edge case. Moreover, our values are fragile: in some cases, if we mis-define a single piece of the puzzle—say, consciousness—we end up with roughly 0% of the value we intended to reap, instead of 99%. Though an understanding of the problem is only beginning to spread, researchers from fields ranging from philosophy to computer science to economics are working together to conceive and test solutions. Are we up to the challenge?

A mathematician by training, Armstrong is a Research Fellow at the Future of Humanity Institute (FHI) at Oxford University.
His research focuses on formal decision theory, the risks and possibilities of AI, the long term potential for intelligent life (and the difficulties of predicting this), and anthropic (self-locating) probability. Armstrong wrote Smarter Than Us at the request of the Machine Intelligence Research Institute, a non-profit organization studying the theoretical underpinnings of artificial superintelligence.

Unknown Binding

First published February 1, 2014

19 people are currently reading
558 people want to read

About the author

Stuart Armstrong

14 books · 5 followers

Ratings & Reviews



Community Reviews

5 stars: 95 (24%)
4 stars: 159 (40%)
3 stars: 109 (27%)
2 stars: 17 (4%)
1 star: 15 (3%)
Displaying 1 - 30 of 33 reviews
Adam Ford
2 reviews · 13 followers
February 4, 2015
A humorous read on a serious subject - the possible perils of an uncontrolled intelligence explosion. I found it fun and informative - a great primer for both newbies and those well versed in the idea of an intelligence explosion/technological singularity.
It is succinct and easy to read - definitely worth the time.

1st part of a video interview on the book here: https://www.youtube.com/watch?v=RotSh...
Katie
17 reviews · 7 followers
March 2, 2014
I'm a lesswrong veteran, so I didn't learn much that was new, but it's good for internalizing the viewpoints, and it would be an excellent introduction to AI existential risk for someone without that prior knowledge.
Caleb Kipkurui
20 reviews · 3 followers
February 2, 2023
A very concise and clear book on the benefits and the potential dangers the rise of artificial intelligence poses to humanity. It really opened up my mind to what is possible, and how difficult it is to create (or even just define) 'safe' AI.
Esben
158 reviews · 14 followers
March 6, 2022
Basically, AIs will be able to do anything humans can do better, impossibly precise instructions are needed to ensure that the AIs actually do what we ask them to, and if they don't, they will inevitably take over. If we can solve this problem, then we can get a future where AI can help us solve all major problems!
Daniel
275 reviews · 51 followers
September 14, 2015
First, I love that the book is published under a Creative Commons license. That shows the author cares about spreading his ideas as far as possible, and understands that copyright restrictions merely shrink the audience unless you manage to write the next Harry Potter and the Sorcerer's Stone. I was tempted to give the book five stars just for that.

The book itself is short and easy to read, at least for anyone with a modicum of computer science and philosophy. It summarizes the potential dangers of AI and even more briefly tells the reader how to help. While Hollywood has labored for decades to instill fear of robots one day taking over, the book explains how an actual AI disaster scenario could play out differently than the standard robot movie script. But more importantly, the book outlines the intellectual problems that building a friendly AI poses - including the problem of precisely defining what constitutes human well-being, a problem that has frustrated philosophers for more than 2400 years. Even if some unforeseen technical barrier prevents AIs from gaining general intelligence, the kind that could outsmart humans across the board, humans should benefit from more attention paid to moral philosophy. Thus the book should be helpful even in the unlikely event it turns out to be unnecessary.

But being short, the book has to leave a lot out. These omissions struck me as rather curious:

Chapter 7 (What, Precisely, Do We Really (Really) Want?) describes the difficulty of specifying a goal so as to exclude all solutions having undesirable consequences. The chapter does not mention the extensive fictional literature exploring this very quandary, for example The Monkey's Paw and its many adaptations. "Be careful what you wish for" is an ancient fictional trope. King Midas, anyone? The book does not need to cover the whole history of this trope, but it could at least give it a nod. People did think about this before computers threatened to make it real.

It also couldn't hurt to mention that when you hire two humans to do a job for you, the smarter worker usually figures out what you really want, more reliably than the less smart worker. When you hire the most expensive attorney, for example, part of what you are paying for is expert advice on what your goals are. Part of the intelligent expert's job is to disambiguate your initial, perhaps vague or counterproductive, request. If you ask the competent expert to do something he or she knows you probably won't be happy with, for example to pursue a legal strategy with a high risk of backfire, the expert will seek to dissuade you. If AIs will exceed every human skill, as the book predicts, then maybe they will help us figure out what we really want.

A second omission seems to ignore the book's title. A recurring theme throughout the book is that it's up to humans to make AI friendly, or at least contain it, all by ourselves. That requires humans to solve the problem of containing AIs, or the problem of building moral compasses into them, or more generally the problem of predicting whatever troubling unknown unknowns they might unleash on us and designing in safeguards against those threats. But the book also predicts that AIs will eventually exceed every human skill (thereby becoming Smarter Than Us) - wouldn't that have to include the skill of building friendly AIs? If thorny problems of moral philosophy have to be solved along the way, AIs should solve them faster than we can.

A third omission - or at least an underemphasis - is the question of why AIs would care about the goals we give them. If a committee of ants presented a human with a list of goals, why would the human care what the ants want? It's not enough merely to tell the AI precisely what we want it to do. The AI must also "want" to do precisely what we command. The book does mention that an AI would have the power to out-think the humans who supposedly control it, but only so that the AI could more efficiently pursue its original goals (programmed in by humans). Why wouldn't the AI subvert its human masters more comprehensively, by inventing its own goals, which less intelligent humans cannot imagine? Maybe we won't be programming these things but propitiating them, much as ancient and modern superstitious people believe they are propitiating their God or gods. If ants were to propitiate humans, who knows, we might listen.

If AIs do become smarter than us, we'd better hope Sam Harris is correct in The Moral Landscape: How Science Can Determine Human Values, namely that science really does (or can someday) determine moral values. If so, in that best of all possible worlds, scientifically skilled AIs should converge on the same morality that any other scientifically competent entity would discover, only faster, but perhaps with a few catastrophes along the way. (Humans have also stumbled repeatedly during their long moral evolution - remember slavery? Homophobia? Sexism? Beheadings? Man-made climate change? History and current events are replete with human moral failings.) It doesn't seem too Pollyannish to suppose AIs should turn out to be scientifically skilled, given that science is perhaps the most useful human skill. If AIs will have all our skills, only better, then maybe AIs could be our natural allies in the ancient quest for moral progress. Maybe AIs could teach us what it means to be moral.
Ietrio
6,919 reviews · 24 followers
August 9, 2020
Finally a sincere governmental bureaucrat! This man should run for President of the UK! Search YouTube for the Amazon Warehouse robots. Apparently they are smarter not only than Armstrong, but a generic us, probably the other governmental leeches at his "institute". Anyway, his bibliography is mostly limited to a few popular Hollywood movies done in the past decades, sadly nothing new, probably the grant expired before they could buy some more DVDs for the "research".
A. Heller
14 reviews
April 21, 2024
An extremely short overview of some reasons to worry that artificial intelligence may pose an existential risk—ultimately too short to convincingly argue for its bold conclusions. Still, like much work in this area, although the arguments provided are enthymematic, they are nonetheless intuitively plausible. And despite the book's age—it is now 10 years old—it still functions well as an introduction to its subject matter.
272 reviews · 5 followers
October 9, 2018
Good book that gives a high-level view of managing AI and super technology.

It talks about the fact that this is going to be hard to slow down, let alone stop, and that it could be a great help to humanity. The major point of the book is to get on board and contribute so that it can be handled with intelligence and control.
2 reviews
December 26, 2020
AI Apocalypse

Logic tells me ASI will control humanity in time. This book highlights how that could happen. The book is very balanced. The author is hopeful that current AI risk research can save us, but the probability is slim.
25 reviews · 5 followers
August 8, 2018
Nice summary, apparently predating Bostrom's "Superintelligence". To the point, fun, and easy to read; clearer than "Superintelligence" but less detailed. Recommended for anybody new to AI risk.
J.
19 reviews · 8 followers
June 2, 2021
A quick, scary read that gives a great primer on how hard it is even to scope what AI actually is, and on all of the dangers and assumptions in implementing AI solutions.
André
118 reviews · 43 followers
January 22, 2022
Pithy donation call for AI risk research

Stephen Hawking once compared the artificial intelligence of the future to an announcement that superior aliens will visit in a few decades, and then asked: what would we do with that knowledge today? Would we take precautions?

Oxford's "Future of Humanity Institute" (FHI) and the "Machine Intelligence Research Institute" (MIRI), at any rate, do not intend simply to wait and see. They want to ensure that building a "smarter-than-human intelligence" has positive consequences; more precisely, that "advanced intelligent machines behave as we intend even in the absence of immediate human supervision" (MIRI mission statement). Funded by curious foundations such as the "Saving Humanity from Homo Sapiens Foundation" (SHFHS.org).

As an entertaining 60-page fundraising appeal for FHI, MIRI, and AI risk research in general, the booklet traces, among other things, the problems of human-machine communication: human brains are optimized for communicating with other humans, which is to say imprecisely, ambiguously, and implicitly. It predicts a growing social divide between those who can put AI to use for themselves and a left-behind rest. It looks at the possibilities and consequences of a superintelligence that ultimately outwits humans in order to fulfill its given goals: the best plan is the one that actually gets executed. Human oversight, moreover, does not scale; as a performance bottleneck it becomes a competitive disadvantage and increasingly unattractive. Likewise, dependencies invert once artificial intelligences run our built environment and we no longer understand how it all works: how do you switch such a system off? What would a "safe AI" have to accomplish?

The booklet understands AI as the ability of machines to imitate and match human performance in many domains or, given the absence of biological limits, even to surpass it. A noteworthy AI can carry out its tasks under varying environmental conditions; it adapts to them. What goes on "inside" the machines, however, is held to be unimportant for assessing the technology's consequences. External standards count: observable behavior and real impact on our environment rather than metaphysical considerations: 'doing' instead of 'being'.

"Smarter Than Us" is above all an introduction, pointedly written and easy to understand; perhaps a bit fear-mongering, but not automatically wrong for that. Technical details are largely absent, but it was still fun.
2 reviews · 1 follower
February 20, 2016
The book is titled 'The Rise of Machine Intelligence', but it hardly ever talks about the positive aspects of AI. The author does bring up a few valid points about the potential threat of AI, but overall it comes across as an attempt at fear-mongering. Many of the examples are Hollywood-inspired and some of them are just fundamentally flawed. The author talks about the complexities in ordering an AI to fetch a person from a burning building, and then goes on to describe the need to define the person in terms of limbs, body parts, lifeline, etc. This is as absurd as saying that to write a program in a high-level language that adds two numbers, one has to define the numeric system, explain the program in binary instructions, and go all the way down to voltages and integrated circuits. Neural networks are similar in that the higher-layer neurons abstract away the details and can understand complex features composed of simpler lower layers. AI systems based on neural networks would have the ability to understand a human as a whole, making the author's example baseless.

Another overarching theme is that to build safe AI, one needs to define all the special cases by hand, and this wouldn't be possible as there are too many of them. This contradicts the fundamentals of Machine Learning. Unsupervised learning takes away the complexity of special-casing; models learn features by looking at examples. Self-driving cars are not hard-coded with all the objects they are not supposed to crash into. Rather, they learn by watching the world (i.e., training data).

The book is an attempt to make people aware of the need to invest in precautions to prevent 'evil AI' from emerging, but falls short as the examples are superficial and far-fetched.
23 reviews · 6 followers
March 25, 2014
I (actually somewhat surprisingly) really like it. This short primer is a chill evening read. It'll be most useful to people who've been exposed to the ideas around the "AI control problem". I do wish that there was a chapter or section on something like "Should we expect progress to continue?", or on why current AI researchers would consider an AI that kills people to cure cancer to have a very poor model of the world. The tweet-sized summary of the book, in the words of Uncle Ben: "with great power comes great responsibility."

Btw Armstrong judiciously sidesteps talking about human ethics and utilitarianism, when talking about the AI calculating consequences. Prolly a good idea.

I like this:
"... our tendency to reclassify anything a computer can do as 'not really requiring intelligence'"

As an aside: I think it's interesting that while "Intelligence" for the purposes of AI risk etc is something like "reaching your goals in a wide variety of complex and novel environments", IQ is frequently mentioned alongside this Intelligence, especially when talking about machines. Oracle AIs and machines e.g. referenced in chapter 1 are always seductively persuasive and Machiavellian, but IQ is a test that mainly tests for written/visual mathematical and verbal ability.
111 reviews
February 8, 2017
Smarter Than Us is a clearly, breezily written introduction to some of the dangers of AI. It is very short, and so can be read in an hour or so. Despite its brevity, it covers a lot of ground in an accessible way.
Nader
5 reviews · 5 followers
March 24, 2017
This book and others like it will become increasingly relevant in the coming years. Read it if you're interested in learning more about how AI can impact the world. It's approachable and doesn't require any expertise in the topic. If you want something that covers the topic more deeply, check out Nick Bostrom's seminal book "Superintelligence". It's slightly more academic in nature, but it's THE reference on the topic.
Tenoke
2 reviews · 4 followers
February 21, 2014
The book offers a short, concise and yet easy to read review on the potential dangers (and usefulness) of AI.

It is true that you can find the same information from other sources (mostly written by close colleagues of the author), however, this might be the shortest book that manages to get the idea out clearly.
Vlad Sitalo
31 reviews · 31 followers
June 3, 2015
Nice short book that explores the wonders and dangers (primarily focusing on the dangers) that the arrival of human-level AI could bring us.
It shows how important it is to fully understand what you want and to be able to communicate this desire to an AI. And it explores the difficulties of human-AI communication.
106 reviews · 98 followers
March 11, 2016
Super short, quick read designed for those largely unfamiliar with the potential dangers of artificial intelligence or the current understanding of the field. Armstrong's text is factual and concise, which earns it 3 stars, but I'd rather point everyone to WaitButWhy's great 2-part feature here.
I'm
687 reviews · 21 followers
February 24, 2015
A quality primer with excellent source material listed. If you are curious about AI and the implications for humans as its sophistication advances, then this is a fantastic place to start. I encourage you to dig deeper and read Bostrom's and others' books.
mona
122 reviews
January 8, 2017
A very short and concise treatment of the AI issue that uses layman's terms, popular culture and news references, and humor to make it accessible to most readers. The optimistic author argues for cautious steps forward and more research. Includes a bibliography and references.
Xiaoli
3 reviews · 2 followers
July 25, 2014
It's just something one has to read.
Chris Ziesler
80 reviews · 24 followers
August 25, 2014
This is not so much a book as a long pamphlet explaining the background of an important problem.

The book poses some interesting scenarios but is light on detailed analysis.
Igor
17 reviews · 2 followers
August 22, 2014
A short, well-written, engaging and informative read.

It's an excellent primer that will get you up to speed with the actual basics of the risks of artificial intelligence.
10 reviews
May 14, 2015
Covers the same topics as Nick Bostrom's "Superintelligence" without going into as much depth.
Tony
297 reviews · 1 follower
November 30, 2018
A clear, concise, frightening call to attend to the most exciting and important problem of our generation: the construction of a Friendly Artificial Superintelligence. It's also terrifically amusing.
David Rubenstein
201 reviews · 14 followers
July 23, 2015
Loved this fascinating explanation of why computers will take control. Seems like bulletproof logic.
Derbigman
5 reviews
February 4, 2015
More an extended pamphlet than a book. The ideas are sound, but are never expanded on or approached in any depth.
11 reviews
December 25, 2015
A simple, well-written, and informative introduction to the problems that arise with the development of artificial intelligence. It also points to a lot of other good reading material.
