The Future of AI and LLMs
It’s been a crazy start to the year in AI-land, with the release of DeepSeek’s R1 LLM. The big news here is that a model trained for roughly $5 million is competitive with (and in some ways better than) models that required hundreds of millions to train. Ars has a cursory but illuminating comparison between DeepSeek and OpenAI here.
There are thousands of hot takes and breakdowns elsewhere, and the DeepSeek team released a paper on their methodology and open-sourced everything, so plenty to dive into there as other labs rush to replicate the findings (something now within the computational and financial means of many more). The markets are, of course, having a hissy fit. And there are political observations to make here, as DeepSeek apparently has very little to say about Tiananmen Square.
I recently had someone ask me in the Silo subreddit what my views on AI are now, compared to when I wrote the SILO series or published my short story collection MACHINE LEARNING. I have some pretty bizarre views on AI and LLMs. Most folks drive into one of two ditches: “the Singularity is near!” or “AI is just a dumb stochastic parrot.” I think both groups miss a few points. So here goes.
Intelligence is Asymptotic
Curves of AI’s future abilities remind me of growth curves for developing nations like India and China. I remember reading THE WORLD IS FLAT by Thomas Friedman back in the day, and about halfway through the book it hit me how completely wrong the central premise was. Growth curves bent off every page as Friedman extrapolated countries catching up to countries flying ahead. Little to no thought was given to the fact that limiting factors in the US would be paralleled in these other economies and cultures. It was assumed that growth would rise hyperbolically forever with none of the problems that come from growth, just the benefits.
Predictions of future intelligences make the same mistake. It’s just as likely that there is a limit to how much can be known and how many creative inferences one can make as it is that intelligence and knowledge are infinite. Neither claim has been tested, but one of them has certainly been widely assumed. Even if the space of what-there-is-to-know is infinite, the computational toll of some inferences might be greater than the power of every star in the universe, or require more time than the universe has left (for instance, there is likely some prime number so large that the entire universe, acting as a computer, could never calculate it).
Once we know these limits exist, the next question is where we are in relation to the asymptote of intelligence. Collectively, the 8+ billion brains wired together on Earth right now might be 80% of the way there. Or 95%. Or 5%. We don’t know, which makes the ever-steepening forecast curves, and all the commentary and promise built on them, idle speculation.

LLMs Grabbed Low-Hanging Fruit
When OpenAI released GPT-3 in 2020, it created a seismic shift in the AI landscape. I was at a tech conference in the week following the release, and some of the smartest people I knew were walking around in a daze wondering what it meant and where things would go from here. I’ve never seen so many bright futurists and prognosticators seem so lost in all my life (as a much dimmer futurist, I was in the same stupor myself).
Even back then, there was some speculation that the next phase shift in AI would require a different method or a LOT more compute. This was my early assumption: that we had just gotten tall enough to grab a bunch of low-hanging fruit. This point is related to my previous point about the limits to intelligence and knowledge. It’s easy to mistake this sudden leap upward for a new velocity that we will maintain. It’s more likely that we just scampered onto a ledge and are now faced with a much higher cliff.
We Are LLMs
I think most of what we do as humans with language is what LLMs do. We confuse our own word prediction for creative and original thought, when the latter is much rarer than the former. And it’s very easy for creative and original thought to descend into absurdism for the sake of being avant-garde (our way of hallucinating). We humans spend a lot of our time delivering rote responses to the same verbal triggers. We deliver the same story in response to the same inciting incidents. Once you see this happening in others and yourself, you can’t stop seeing it. We are parrots in so many ways.
We are also wildly, spectacularly, unoriginal. This is why writers get sued for stealing the ideas of other writers: thousands of people have the same thoughts to create the same stories and characters independent of one another. It’s almost as if the thoughts, characters, situations, and plot lines have an existence outside of us (or collectively within us) and we just take turns expressing them in slightly different ways.
The dream that we would invent AI and it would teach us new things about the world will one day be supplanted with the realization that we created AI to learn ancient things about ourselves.
LLMs are US
What LLMs “know” is basically all the things we know, linked together by vectors and distances in a massive web of correlations that can be turned into raw numbers and then turned back again into language. This gets more powerful as it becomes recursive (allowing the LLM to take its output as input over and over to refine its “thinking”) and as you give it access to the web and other tools. But at its core, every LLM is trained on what humans already know, have written, etc.
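To make that picture concrete, here is a toy sketch, purely illustrative and with made-up numbers, not how any real model is implemented: words become vectors whose distances stand in for relatedness, and the “recursive” part just means feeding a draft answer back in to be refined. The tiny embeddings dictionary and the refine() stand-in below are invented for this example.

```python
# Toy illustration of two ideas from above (not any real model's code):
# 1) language becomes numbers: words map to vectors, and "related" words
#    end up pointing in similar directions;
# 2) recursion: a draft answer is fed back in as input and refined.

import math

# Made-up 3-dimensional "embeddings". Real models learn thousands of
# dimensions from text; these numbers are invented for illustration.
embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.7, 0.2],
    "apple": [0.1, 0.2, 0.9],
}

def cosine_similarity(a, b):
    """Distance-as-meaning: vectors pointing the same way score near 1."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # ~0.99, closely related
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # ~0.30, not so much

# The "take its output as input over and over" part, in caricature.
# A real system would call an LLM here; refine() is just a stand-in.
def refine(draft: str) -> str:
    return draft + " (revised)"

answer = "first draft"
for _ in range(3):
    answer = refine(answer)
print(answer)  # "first draft (revised) (revised) (revised)"
```

The point isn’t the arithmetic; it’s that “turned into raw numbers and then turned back again into language” is not a metaphor.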
Many folks present this as a limitation (the stochastic parrot people), but holy fuck, think about this for a moment. As the error rates of LLMs get pushed closer and closer to zero (this is happening at an impressive clip already), we will have a tool that we can communicate with in many different ways (voice, text, images, video), that has access to all of human knowledge, and that can synthesize results faster than we can in most cases.
This is the original dream of the universal thinking machines of Turing and Babbage. This is extreme science fiction. This changes everything. How we teach. How we work. How we nurture the next generation. How we grow old. The idea that AI has to surpass us in knowledge to change the world is clearly wrong. It’s enough to know what we know but be able to access and deliver that knowledge with ever less bias and error, something that will continue to improve.
Want to create a new application for your phone or computer? You won’t need to learn a programming language to do this. People with no programming skills are already making complex applications using the AIs currently available. Worried about mental health? There will be access to a 24/7 therapist who remembers everything you’ve told it, never confuses you with another client, is never tired, is always available, and improves over time. Wish you could afford a nanny for your child who would immerse them in five languages and teach them about logic and ethics instead of regurgitating facts or serving up mindless entertainment? This is now possible. The future is limited only by our imaginations and values. Which leads me to…
We Will Probably Screw This Up
AI could do much to alleviate drudgery and suffering without causing economic upheaval and exacerbating income inequality. It could … but it won’t. Because we will not choose this route. Instead, we will choose a route that causes more heartache than is necessary and provides fewer mental health benefits than it could, all while being as uncreative and immoral as humanly possible.
The reason for this is in those last two words: humanly possible. My wife once said the smartest thing I’ve heard anyone say about AI: “People are really bad at being human.” Most of us know the right thing to do in most situations, but that doesn’t make it easy to pull it off. New Year’s resolutions are an annual reminder. We have incredible brains, but they are hampered by hormones and glands and millions of years of evolutionary adaptation.
AI today can already tell us how we should manage our affairs as well as or better than we can — even ethically and spiritually. For instance, this AI-generated religion is more reasonable to me than any of the human-created ones I’ve sampled. AI can and will surpass us in ethics, but that doesn’t mean we will listen to it or act on it, any more than we act on our own conscience. We already have a small voice whispering the right thing to do, and the human thing is to largely ignore it.
And so… we will employ AI in a way that harms others, maximizes profits over well-being, destroys the planet, weakens our social fabric, confuses our wits and addles our senses, provides cheap entertainment rather than deep introspection, fosters tribalism and in-fighting, and exacerbates the widening gulf between the haves and the have-nots. We will do this even though if you ask AI what we should be doing, it’ll give you a much different and wiser response.
I asked Chat how we might use AI to make the world a better place, and it wrote this, which I think you could use to found a just society. It also gave this conclusion:
Using AI for good involves more than just developing clever algorithms; it requires a holistic approach that weaves together technological innovation, ethical practices, community engagement, and a clear vision of positive social and environmental impact. By combining human ingenuity with AI’s computational power, we can tackle pressing global challenges more effectively and equitably. The key is to place human wellbeing at the center of innovation—ensuring AI remains a tool that empowers, rather than exploits, and that drives us toward a fairer, healthier, and more sustainable future.
It’s a very human response and a very humane attitude. In fact, the problem of AI alignment you hear much wailing about is likely to be overshadowed by my wife’s observation, which is that we are often misaligned with our own self-interests. People are terrible at being human.
The next two points are less about AI, but worth mentioning:
The Pace of Change is Slowing
The pace of change in the world around us is slower today than it was 100 years ago. It’s now 2025, and the world looks very similar to the world of 2000. Other 25-year jumps in the past century were far crazier, politically, socially, technologically, and culturally: 2000 and 1975; 1975 and 1950; 1950 and 1925; 1925 and 1900. An example: it was 66 years between the Wright brothers making the first powered flight and Apollo 11 landing on the Moon. It’s been 56 years since then.
The early 20th century was one of plucking low-hanging mechanical fruit. The early 21st century has been one of plucking low-hanging digital fruit. It’s not clear to me that the latter is more significant than the former. We engineered our way out of physical poverty in the 20th century. We are now engineering our way into emotional poverty in the 21st. Our social connections are being fractured by the internet; truth is being weakened by access to more disinformation; the extremes are moving apart due to digital silos.
I think AI is going to hasten this trend of less progress happening overall (our brightest minds now try to figure out how to get us to click on more ads), while more progress happens in the mundane and uninteresting (us staring at more ads). Mental health will continue to decline as a result of both. Meanwhile, the next big breakthrough in human development is more likely to be the contagion of an idea than the invention of a thing.
Attention is More Powerful than Intention
An obvious prediction, but one rarely stated: the future will be determined by our attention, not our intention. We stopped going to the Moon because people stopped watching or caring. We will stop going to Mars for the same reason. There won’t be any profit or enjoyment there, once the initial conquest is over.
Because of this, the future of AI will go one of two ways. We will be captivated by our interactions with AIs, which will replace more and more human interactions, and we will spiral into an unthinking mass of inhumanity akin to the chair blobs of WALL-E. OR, we will get bored of chatbots, deepfakes, AI-generated content, and time spent on anti-social media, and there will be a movement to spend more of our time having authentic in-person experiences.
My guess is that both of these will occur simultaneously, it’s just a question of percentages. Lots of people are now retiring early or working remotely to live on boats, in vans, or out of suitcases as they see the world while still contributing in some way to it. Board games and sports fads surge as folks find ways to engage with other folks. Meanwhile, some people will spend most of their lives staring at their phones or computer screens, arguing with bots, clicking on things that aren’t really there, while a handful of companies profit from their mental decline.
AI will play a huge role for both groups: one will have their AI answering most of their email while they continue their in-person adventures, while the other will realize on their first coffee date that they’ve been in love with a fibbing AI this whole time. My fear is that the ratio will be 15/85. My hope is that it might be the inverse. The deciding factor is whether we set clear intentions and exert the willpower needed to follow them, or whether we succumb to distraction and mindless consumption. AI can be a tool for either. If only we had a choice.
