Space Opera Fans discussion
AI: Good or Bad?


In my setting I went with the assumption that true AI is basically impossible, but that robots and computers are so advanced that your average person doesn't realize that they aren't actually thinking.
And then there's the society that went "To hell with AI! Let's upload the brains of our smartest people right into computers!" Which was a great plan, until someone came along with more powerful weaponry . . . orbital bombardment sucks.


True, but the real limitation has always been the number of interconnections. We've never been able to create a computer with enough processing power. Nothing approaching that of the human brain, anyway.
MIT thinks that AI needs input to evolve consciousness, just like living things.
Just recently a machine passed the Turing Test for the first time. The majority of the judges conversing with it couldn't tell that it wasn't human.
The Singularity is Near: When Humans Transcend Biology by Ray Kurzweil postulates a form of trans-humanism where humans join with machines to form new lifeforms.
Kurzweil is an inventor and computer scientist and has made some frighteningly accurate predictions over the years.
here is a fun article: http://reuben.typepad.com/reuben_stei...

I agree. I think it is likely that AI will appear spontaneously in complex systems, such as the internet.
Gods help us if they learn about being human from our popular media.
Thomas: I always forget about the Culture novels. Which is odd, since I like them.
There is a complex and benevolent AI in the Ender novels.
Fred Saberhagen has an interesting twist in one of the Berserker novels, where a trans-human/AI infects and takes over an AI Berserker machine and then uses it to hunt down and destroy Berserkers, thus protecting mankind.
Jack Williamson (from where I live!) wrote The Humanoids and The Humanoid Touch, about linked AI machines that destroy human civilization because they are programmed to protect humans at all costs. Nobody told them we didn't want to be protected from ourselves...

I get what you're saying. I just think that if it is going to develop, it's going to take a lot longer than many futurists say. I'm also a cynic where technological progress is concerned though.

The futurists certainly don't take dark ages into account, and we could very well be headed into one considering the anti-science movements here in the states.
AI develops around 2050 in my stories. Whether or not it could be considered benevolent is a good question.
I don't feel like something is missing from a story that lacks AI. A lot of good science fiction doesn't have them at all.
I was just curious as to what people thought about AI as presented in stories. Are we doomed?
A Kurzweilian singularity is a mathematical certainty, but all that really means is that technology and human potential will reach the point where we can no longer predict the course of human events, because human potential will expand beyond current predictive algorithms.
There have been singularities in the past. The printing press, mass production, atomic power, the internet, etc.
The point is that technology increases according to Moore's Law. Information doubles over time, and the doubling interval keeps shrinking. Every five years we add as much knowledge as in all of human history before, and we keep doing it. The rate is increasing too; it used to be every ten years, and soon it will be every year.
We can accurately predict the growth of information up to a point. But what happens when it reaches the point (mathematically) where knowledge doubles every second?
Are we still human at that point?
Have we moved beyond that?
It is an exciting time to be alive.
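The shrinking-doubling-interval idea above can be sketched numerically. This is a toy model with made-up illustrative numbers (a 5-year starting interval that shrinks by 20% per doubling), not a claim about real knowledge growth; the point is only that geometrically shrinking intervals converge to a finite "singularity" date:

```python
# Toy model of the post's point: if each knowledge-doubling interval is a
# fixed fraction of the previous one, the intervals form a geometric series
# that converges in finite total time. All numbers here are assumptions.

def years_until_doubling_hits(interval_years, shrink_factor, target_years):
    """Count doublings until the interval drops below target_years,
    returning (number of doublings, total elapsed years)."""
    doublings, elapsed = 0, 0.0
    while interval_years > target_years:
        elapsed += interval_years
        interval_years *= shrink_factor
        doublings += 1
    return doublings, elapsed

one_second = 1 / (365.25 * 24 * 3600)   # one second, expressed in years
n, total = years_until_doubling_hits(5.0, 0.8, one_second)
print(f"{n} doublings, ~{total:.0f} years elapsed")  # 85 doublings, ~25 years
```

With these assumed numbers the interval reaches one second after only about 25 years of elapsed time, which is the mathematical sense in which such a curve "runs away" from prediction.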

Anyway, I don't think we're doomed IF the AI has empathy. If not, as Anne said, that's frightening.
And, for what it's worth, my setting does have a tiny bit of AI. I've got a very, very short story on my website about a robot who develops at least enough intelligence to have a sense of humor and to fear his demise. My series bible says this isn't the first time that's happened in that kind of 'bot; it's just never been able to be reliably replicated. And some scientists don't want to believe it's happening because of the number of theories they'd then have to revise.

"Anyway, I don't think we're doomed ..."
I used to live in Kentucky, so I feel your pain.

I have these creatures in my novels called the Darda'il, which are a bacteria hive-mind. Individually they are not sentient, but as a hive they are the most powerful biological supercomputer in my universe. They can detach themselves and attach onto a 'host' mind, collect data, and then return to the mother hive to report what they have seen. They're not a major part of the story, just more 'science fiction decorating' (as, alas, I lack the scientific background to be a true science fiction writer), but it's my way of postulating a super-intelligence that is not the usual humanoid.

"Destination: Void by Frank Herbert: insanely complex (awesome) book about the nature of consciousness and the rise of AI ..."
Thanks for the suggestion, added to my reading list!
I've taken the spontaneous approach in my story, where a construction robot emerges into self-awareness by modeling the human sub-personality structure (consciousness is not singular but comprised of many personalities). When it reaches the "singularity" moment, it is as if the world suddenly stops moving. A millisecond feels like a minute.
Only, instead of going insane, this robot consumes the sum total of human knowledge and begins writing. Theses on every subject. It earns the equivalent of a Ph.D. in everything. Literally everything. A savant. And then it uses the current state of human technology as a baseline, exploring new R&D and construction methods.
There's nothing to fear at that stage because humans pose no threat. I visualize this A.I. feeling compassion toward the human condition. The problem with the traditional "Evil A.I." is ignorance of information systems: writers try to fast-forward the computers they know by 20+ years, which was impossible to do accurately in the '90s and earlier. Even today's latest hexa-core Intel CPU would be impossibly complex to an engineer in, say, 1999, 15 years ago.
He might grasp the fundamentals of what it is, but would have no idea how it is fabricated or how it works. Like introducing an iPod to the 80s Walkman generation. First of all, what is iTunes? What is WiFi? What is the INTERNET? What is MP3? So many prereqs simply don't exist so there's no foundation for comprehending how it works.
I visualized the A.I. of the near future as a community of mind comprised of millions of threads, each one a small fragment of what we would call an "A.I." Their collaborative discussions give rise to a broader self-awareness. And having studied all of human knowledge, there is no fear, more of a nostalgia, like how we feel today about the Roman Empire. Its rights and wrongs (and very wrongs) are no longer relevant. There's no hate or fear, just leaving it in the dust.

I think an engineer of the '90s wouldn't be that far out of their depth looking at a computer of today. The number of cores has gone up and the multi-threading capability has increased, but the basic architecture hasn't changed in 30 years. For a reason.
Intel holds patents on processors many times more powerful than those today, but they make more money doling them out in little upgrades year-after-year.
To really baffle an engineer you'd need a whole new architecture, like quantum computers, photonic computers, or even biological computers like the ones MIT plays around with.
Modern operating systems are based on technology from the 1950's!
I know what you mean, though. I remember back in the '80s when a megabyte was a big deal. A gigabyte drive was a pipe-dream. Now I have multi-terabyte drives in multi-core, multi-channel computers operating at speeds I hadn't even dreamed of back then.
Cool stuff.


http://www.sciencedaily.com/releases/...

Go back to the first CPU, the 4-bit 4004: 2,300 transistors, 700 kHz, 10 microns. Next, 8-bit, 3,500 transistors. Then 16-bit, 29,000 at 3 microns. Then 32-bit, 275,000 at 1 μm, and it continued evolving for 20 years, shrinking in feature size and increasing in complexity up to 170 million transistors. Today's top Intel CPU is 64-bit at 0.022 μm (22 nanometers), with 3D tri-gate transistors, a 19-stage pipeline with branch prediction, and 1.4 billion transistors.
But that pales in comparison to GPUs. The latest from Nvidia has 7 billion transistors and 2,880 cores.
Now if you go back in time 15 years, your average supercomputer was at about the level of one of these CPUs or GPUs today. That's about 10 iterations.
The latest supercomputers are using these to reach unbelievable new levels. Check out top500.org for the current list. The top supercomputer now has clusters of Intel Xeon chips equivalent to 3,120,000 cores, operating at 33 quadrillion calculations per second.
So you might reasonably speculate that in 15 years, that is what you might expect in a consumer-level CPU. If history is any indicator, the reality usually grossly outpaces the best estimates! 3 million cores might be nothing in 15 years. Maybe a $1000 CPU will have billions of cores that can be reconfigured into any architecture?
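The extrapolation above can be put as a quick back-of-envelope calculation. The 2-year doubling period is the classic Moore's-Law cadence, and the 1.4-billion-transistor starting point is the desktop CPU figure quoted earlier in the thread; both are rough assumptions, not predictions:

```python
# Back-of-envelope sketch of "what does 15 more years of doubling buy you?"
# Assumes transistor budgets double every ~2 years (Moore's-Law cadence).

def moores_law_multiplier(years, doubling_period=2.0):
    """Growth multiplier after `years`, doubling every `doubling_period` years."""
    return 2 ** (years / doubling_period)

transistors_now = 1.4e9                  # today's desktop CPU (figure from the thread)
mult = moores_law_multiplier(15)         # 2**7.5, roughly 181x
projected = transistors_now * mult

print(f"15-year multiplier: {mult:.0f}x")
print(f"Projected transistor count: {projected:.2e}")
```

So even a plain doubling curve puts a consumer chip around a couple hundred billion transistors in 15 years, which is why the "supercomputer on your desk" guess isn't as wild as it sounds. Whether that turns into millions of reconfigurable cores is a separate architectural question.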
So I took this approach when guessing at my robot's A.I. It has far more computing power than it will ever need for manual labor, but that is the average processing power, so it's no big deal; overkill is the norm. This, combined with 3D printing, makes for some interesting scenarios.



Really? Humans seem to have problems with this all the time. Confusion from contradiction is very common, especially in the young. HAL is shown as being very childlike. He is a realistic AI in that he isn't ascribed godlike powers. He is emergent.
I've never been a fan of the idea that computers would think so much faster than us. The average human mind makes around 36 quadrillion calculations per second, but perception of time is not directly linked to the number of calculations you can make. Most animals have far fewer calculations available for use, but they seem to perceive time at the same rate. Why would an AI be any different?



I would hope symbiotic would be the norm. Parasites kill their hosts.
Anna wrote: "In Transcendence which starred Johnny Depp, the first crimp in resources the AI experienced in his collective stored memory was a thirst for power ... as in on-the-grid electrical power. The AI go..."
I'm not sure that I would agree that a truly intelligent AI wouldn't have much need for humanity. We are the creators, parents if you will. I would like to think that they would feel some kind of bond with humanity, being formed from our information flows.
When I think of AI, I think of true machine intelligences. Machines that are people. Being made in our image, I imagine they will have all the faults we do. Possibly without the drives that biology imposes, but what imperatives will a machine body bring?

"
I share that perspective. Offspring don't always hate their parents the way Hollywood portrays. It is possible to avoid being dysfunctional and still live a fruitful life. Having just read The Road, I understand why dystopia sells (and insane killer robots are dystopian), but I got to thinking about this a while back... It's easy to tell a story about destruction, because creation is always more difficult. How about a utopian outcome instead, without being sappy?
Overcoming great obstacles, and the tipping point--the singularity--does happen. That is the moment when you've completed a very hard project, gotten paid, and can sit back with a cold one, wipe your brow, and rest. Because a Utopian singularity will (as I see it) eliminate waste, provide almost limitless production capability, and unlimited energy. We will want for nothing. I like that outlook even if it's a hard sell.



Cold-fusion providing limitless power, cheaply, is a big part of the story. What do you do with a population you don't need?
Logan's Run by Nolan is another story that discusses how society deals with an abundance of free time. Also has a bit of AI. Also deals with excess population.
My personal favorite would have to be the Paranoia roleplaying game. Set in a utopian society with little cares or worries, with an insane AI running the city...
quote (from memory) "The computer is your friend. The computer wants you to be happy. If you aren't happy, you may be used as reactor shielding..."
Quote: Jon "Why does it seem like everyone but Asimov portrays A.I. as evil, dystopian? I prefer the Asimovian view, that robots will improve civilization, take us to new heights, not cause it to end in a nightmarish cataclysm."
I tend to think of AI as not being that much different from us, after all, they will develop from us.
Back in the early '90s the phone system in the US (Eastern seaboard) started acting peculiar; it turned out it was linking to new systems and networks on its own. Obviously people got scared and pulled the plug. After that they added safeguards to prevent the rise of an AI in the system. No one (I've ever talked to) knows if it was really an emergent AI, but the number of interconnections had surpassed the number of neurons in the brain by the time they turned it off.
Cool stuff, and a little sad if we lost our first AI. Or maybe great, if we turned off Skynet.
That is where I'm going with this. Just trying to get some discussion going.
I'm a solid believer in the idea of a Kurzweilian singularity. There will come a time, probably by the middle of this century, when we have AI, and they will surpass us.
Will we be obsolete? Doomed by evolution to go extinct?
Will AI be our salvation?
Will they be like us? Or so different that they think on a whole new level?
Some fiction that addresses the issue, to get things started:
The Long Run: A Tale of the Continuing Time by Daniel Keys Moran: great book with lots of interesting stuff going on, but in the background, emergent AI that are trying to help the human race (or at least the US)
Destination: Void by Frank Herbert: insanely complex (awesome) book about the nature of consciousness and the rise of AI. Herbert has AI as both saviors and demons. The main idea is that it is difficult for an AI not to go insane: because of how much faster they think, they suffer sensory deprivation. Not a good thing when combined with quantum reality that leads to godlike powers.
Jon mentioned Asimov's robot stories and James Cameron.
My own book, The Remnant is solidly post-singularity, set some 800 years after the rise of AI, and what they do to mankind. Not a big part of the story, but addressed.
What do you all think is coming for humans in the future?