Gerd Leonhard > Gerd's Quotes


  • #1
    “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”
    Edsger W. Dijkstra

  • #2
    “I am thrilled to be alive at a time when humanity is pushing against the limits of understanding. Even better, we may eventually discover that there are no limits.”
    Richard Dawkins, The God Delusion

  • #3
    “Maybe the only significant difference between a really smart simulation and a human being was the noise they made when you punched them.”
    Terry Pratchett, The Long Earth

  • #4
    “To be human is to be 'a' human, a specific person with a life history and idiosyncrasy and point of view; artificial intelligence suggests that the line between intelligent machines and people blurs most when a puree is made of that identity.”
    Brian Christian, The Most Human Human: What Talking with Computers Teaches Us About What It Means to Be Alive

  • #5
    “By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it.”
    Eliezer Yudkowsky

  • #6
    “A powerful AI system tasked with ensuring your safety might imprison you at home. If you asked for happiness, it might hook you up to life support and ceaselessly stimulate your brain's pleasure centers. If you don't provide the AI with a very big library of preferred behaviors or an ironclad means for it to deduce what behavior you prefer, you'll be stuck with whatever it comes up with. And since it's a highly complex system, you may never understand it well enough to make sure you've got it right.”
    James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

  • #7
    “The attribution of intelligence to machines, crowds of fragments, or other nerd deities obscures more than it illuminates. When people are told that a computer is intelligent, they become prone to changing themselves in order to make the computer appear to work better, instead of demanding that the computer be changed to become more useful.”
    Jaron Lanier, You Are Not a Gadget

  • #8
    “But on the question of whether the robots will eventually take over, he [Rodney A. Brooks] says that this will probably not happen, for a variety of reasons. First, no one is going to accidentally build a robot that wants to rule the world. He says that creating a robot that can suddenly take over is like someone accidentally building a 747 jetliner. Plus, there will be plenty of time to stop this from happening. Before someone builds a "super-bad robot," someone has to build a "mildly bad robot," and before that a "not-so-bad robot."”
    Michio Kaku, The Future of the Mind: The Scientific Quest to Understand, Enhance, and Empower the Mind

  • #9
    “If an AI possessed any one of these skills—social abilities, technological development, economic ability—at a superhuman level, it is quite likely that it would quickly come to dominate our world in one way or another. And as we’ve seen, if it ever developed these abilities to the human level, then it would likely soon develop them to a superhuman level. So we can assume that if even one of these skills gets programmed into a computer, then our world will come to be dominated by AIs or AI-empowered humans.”
    Stuart Armstrong, Smarter Than Us: The Rise of Machine Intelligence

  • #10
    “As I’ll argue, AI is a dual-use technology like nuclear fission. Nuclear fission can illuminate cities or incinerate them. Its terrible power was unimaginable to most people before 1945. With advanced AI, we’re in the 1930s right now. We’re unlikely to survive an introduction as abrupt as nuclear fission’s.”
    James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

  • #11
    “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended. Is such progress avoidable? If not to be avoided, can events be guided so that we may survive? —Vernor Vinge, author, professor, computer scientist”
    James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

  • #12
    “The strongest argument for why advanced AI needs a body may come from its learning and development phase—scientists may discover it’s not possible to “grow” AGI without some kind of body.”
    James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

  • #13
    “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else. —Eliezer Yudkowsky, research fellow, Machine Intelligence Research Institute”
    James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

  • #14
    “we don’t want an AI that meets our short-term goals—please save us from hunger—with solutions detrimental in the long term—by roasting every chicken on earth—or with solutions to which we’d object—by killing us after our next meal.”
    James Barrat, Our Final Invention: Artificial Intelligence and the End of the Human Era

  • #15
    “A physical shortcoming could produce a kind of mental excess. The process, it seemed, was reversible. Mental excess could produce, for its own purposes, the voluntary blindness and deafness of deliberate solitude, the artificial impotence of asceticism.”
    Aldous Huxley, Brave New World

  • #16
    “Whether we are based on carbon or on silicon makes no fundamental difference; we should each be treated with appropriate respect.”
    Arthur C. Clarke, 2010: Odyssey Two

  • #17
    “Any AI smart enough to pass a Turing test is smart enough to know to fail it.”
    Ian McDonald, River of Gods

  • #18
    “The nice thing about artificial intelligence is that at least it's better than artificial stupidity.”
    Terry Pratchett, The Long War

  • #19
    “Computers can never completely replace humans. They may become capable of artificial intelligence, but they will never master real stupidity.”
    Garrison Keillor, A Prairie Home Companion Pretty Good Joke Book

  • #20
    “Emotions - Happiness, anger, jealousy... is the mind experiencing "presence" in our holographic existence.”
    Clyde DeSouza, Memories With Maya

  • #21
    “You realize, there is no free-will in anything we create with Artificial Intelligence...”
    Clyde DeSouza

  • #22
    “Writing is a socially acceptable form of schizophrenia.”
    E.L. Doctorow

  • #23
    “Deep Blue didn't win by being smarter than a human; it won by being millions of times faster than a human. Deep Blue had no intuition. An expert human player looks at a board position and immediately sees what areas of play are most likely to be fruitful or dangerous, whereas a computer has no innate sense of what is important and must explore many more options. Deep Blue also had no sense of the history of the game, and didn't know anything about its opponent. It played chess yet didn't understand chess, in the same way a calculator performs arithmetic but doesn't understand mathematics.”
    Jeff Hawkins, On Intelligence

  • #24
    “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”
    Alan Turing, Computing Machinery and Intelligence

  • #25
    “Death gives meaning to our lives. It gives importance and value to time. Time would become meaningless if there were too much of it.”
    Ray Kurzweil

  • #26
    “Death is a great tragedy…a profound loss…I don’t accept it…I think people are kidding themselves when they say they are comfortable with death.”
    Ray Kurzweil

  • #28
    “Everyone takes the limits of his own vision for the limits of the world. —ARTHUR SCHOPENHAUER”
    Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology

  • #29
    “Finally, our new brain needs a purpose. A purpose is expressed as a series of goals. In the case of our biological brains, our goals are established by the pleasure and fear centers that we have inherited from the old brain. These primitive drives were initially set by biological evolution to foster the survival of species, but the neocortex has enabled us to sublimate them. Watson’s goal was to respond to Jeopardy! queries. Another simply stated goal could be to pass the Turing test. To do so, a digital brain would need a human narrative of its own fictional story so that it can pretend to be a biological human. It would also have to dumb itself down considerably, for any system that displayed the knowledge of, say, Watson would be quickly unmasked as nonbiological.”
    Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed

  • #30
    “In mathematics you don’t understand things. You just get used to them. —John von Neumann”
    Ray Kurzweil, How to Create a Mind: The Secret of Human Thought Revealed

  • #31
    “One cubic inch of nanotube circuitry, once fully developed, would be up to one hundred million times more powerful than the human brain.”
    Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology


