Cal Newport's Blog
August 31, 2025
Does Work-Life Balance Make You Mediocre?
Last month, a 22-year-old entrepreneur named Emil Barr published a Wall Street Journal op-ed boasting a provocative title: “‘Work-Life Balance’ Will Keep You Mediocre.”
He opens with a spicy take:
“I’m 22 and I’ve built two companies that together are valued at more than $20 million…When people ask how I did it, the answer isn’t what they expect—or want—to hear. I eliminated work-life balance entirely and just worked. When you front-load success early, you buy the luxury of choice for the rest of your life.”
As Barr elaborates, when starting his first company, he slept only three and a half hours per night. “The physical and mental toll was brutal: I gained 80 pounds, lived on Red Bull and struggled with anxiety,” he writes. “But this level of intensity was the only way to build a multimillion-dollar company.”
He ends the piece with a wonderfully cringe-inducing flourish. “I plan to become a billionaire by age 30,” he writes. “Then I will have the time and resources to tackle problems close to my heart like climate change, species extinction and economic inequality.”
(Hold for applause.)
It’s easy to mock Barr’s twenty-something bravado, even if I do have to be careful not to be the pot calling the kettle black (ahem).
Yet, some of this knee-jerk mockery might stem from the uncomfortable realization that beneath this performative busyness, there may lie a kernel of truth. Are we forfeiting our opportunity to make a meaningful impact with our work if we prioritize balance too much? As NYU professor Suzy Welch noted, “I do give [Barr] points for saying something I only mutter to my M.B.A. students …You cannot well-being yourself to wealth.”
To help address these fears, let’s turn to the advice of another twenty-something: me. In an essay I published when I was all of 27—around the time I was finishing my doctoral dissertation at MIT—I wrote the following:
“I found writing my thesis to be similar to writing my books. It’s an exercise in grit: You have to apply hard focus, almost every day, over a long period of time.
To me, this is the definition of what I call hard work. The important point, however, is that the regular blocks of hard focus that comprise hard work do not have to be excessively long. That is, there’s nothing painful or unsustainable about hard work. With only a few exceptions, for example, I was easily able to maintain my fixed 9 to 5:30 schedule while writing my thesis.
By contrast, the work schedule [followed by many graduate students] meets the definition of what I call hard to do work. Working 14 hours a day, with no break, for months on end, is very hard to do! It exhausts you. It’s painful. It’s impossible to sustain.
I’m increasingly convinced that a lot of student stress is caused by a failure to recognize the difference between these two work types. Students feel that big projects should be hard, so hard to do habits seem a natural fit.
I am hoping that by explicitly describing the alternative of doing plain hard work, I can help convince you that the hard to do strategy is a terrible way to tackle large…challenges.”
I gave that article a simple, declarative title: Focus Hard. In Reasonable Bursts. One Day at a Time.
This strategy has continued to serve me well. I’m now 43 years old and, I suppose, still managing to avoid mediocrity—all while continuing to rarely work past 5:30 p.m. I’m not willing to sacrifice all the other things I care about in order to grind.
Barr is still young, and his body is resilient enough to get away with his hustle for a while longer. I hope, however, that those who found his message appealing might also hear mine. Deep results require disciplined, relentless action over a long period of time, and this is a very different commitment than the type of unfocused freneticism lionized by Barr. I work hard almost every day. But those days are rarely hard to get through. This distinction matters.
August 17, 2025
What if AI Doesn’t Get Much Better Than This?
In the years since ChatGPT’s launch in late 2022, it’s been hard not to get swept up in feelings of euphoria or dread about the looming impacts of generative AI. This reaction has been fueled, in part, by the confident declarations of tech CEOs, who have veered toward increasingly bombastic rhetoric.
“AI is starting to get better than humans at almost all intellectual tasks,” Anthropic CEO Dario Amodei recently told Anderson Cooper. He added that half of entry-level white collar jobs might be “wiped out” in the next one to five years, creating unemployment levels as high as 20%—a peak last seen during the Great Depression.
Meanwhile, OpenAI’s Sam Altman said that AI can now rival the abilities of a job seeker with a PhD, leading one publication to plaintively ask, “So what’s left for grads?”
Not to be outdone, Mark Zuckerberg claimed that superintelligence is “now in sight.” (His shareholders hope he’s right, as he’s reportedly offering compensation packages worth up to $300 million to lure top AI talent to Meta.)
But then, two weeks ago, OpenAI finally released its long-awaited GPT-5, a large language model that many had hoped would offer leaps in capabilities, comparable to the head-turning advancements introduced by previous major releases, such as GPT-3 and GPT-4. But the resulting product seemed to be just fine.
GPT-5 was marginally better than previous models in certain use cases, but worse in others. It had some nice new usability updates, but others that some found annoying. (Within days, more than 4,000 ChatGPT users signed a change.org petition asking OpenAI to make their previous model, GPT-4o, available again, as they preferred it to the new release.) An early YouTube reviewer concluded that GPT-5 was a product that “was hard to complain about,” which is the type of thing you’d say about the iPhone 16, not a generation-defining technology. AI commentator Gary Marcus, who had been predicting this outcome for years, summed up his early impressions succinctly when he called GPT-5 “overdue, overhyped, and underwhelming.”
This all points to a critical question that, until recently, few would have considered: Is it possible that the AI we are currently using is basically as good as it’s going to be for a while?
In my most recent article for The New Yorker, which came out last week, I sought to answer this question. In doing so, I ended up reporting on a technical narrative that’s not widely understood outside of the AI community. The breakthrough performance of the GPT-3 and GPT-4 language models was due to improvements in a process called pretraining, in which a model digests an astonishingly large amount of text, effectively teaching itself to become smarter. Both of these models’ acclaimed improvements were caused by increasing their size as well as the amount of text on which they were pretrained.
At some point after GPT-4’s release, however, the AI companies began to realize that this approach was no longer as effective as it once was. They continued to scale up model size and training intensity, but saw diminishing returns in capability gains.
In response, starting around last fall, these companies turned their attention to post-training techniques, a form of training that takes a model that has already been pretrained and then refines it to do better on specific types of tasks. This allowed AI companies to continue to report progress on their products’ capabilities, but these new improvements were now much more focused than before.
Here’s how I explained this shift in my article:
“A useful metaphor here is a car. Pre-training can be said to produce the vehicle; post-training soups it up. [AI researchers had] predicted that as you expand the pre-training process you increase the power of the cars you produce; if GPT-3 was a sedan, GPT-4 was a sports car. Once this progression faltered, however, the industry turned its attention to helping the cars that they’d already built to perform better.”
The result was a confusing series of inscrutably named models—o1, o3-mini, o3-mini-high, o4-mini-high—each with bespoke post-training upgrades. These models boasted widely publicized gains on specific benchmarks, but no longer the large leaps in practical capabilities we once expected. “I don’t hear a lot of companies using AI saying that 2025 models are a lot more useful to them than 2024 models, even though the 2025 models perform better on benchmarks,” Gary Marcus told me.
The post-training approach, it seems, can lead to incrementally better products, but not the continued large leaps in ability that would be necessary to fulfill the tech CEOs’ more outlandish predictions.
None of this, of course, implies that generative AI tools are worthless. They can be very cool, especially when used to help with computer programming (though maybe not as much as some thought), or to conduct smart searches, or to power custom tools for making sense of large quantities of text. But this paints a very different picture from one in which AI is “better than humans at almost all intellectual tasks.”
For more details on this narrative, including a concrete prediction for what to actually expect from this technology in the near future, read the full article. But in the meantime, I think it’s safe, at least for now, to turn your attention away from the tech titans’ increasingly hyperbolic claims and focus instead on things that matter more in your life.
August 3, 2025
On Additive and Extractive Technologies
A reader recently sent me a Substack post they thought I might like. “I bought my kids an old-school phone to keep smartphones out of their hands while still letting them chat with friends,” the post’s author, Priscilla Harvey, writes. “But it’s turned into the sweetest, most unexpected surprise: my son’s new daily conversations with his grandmothers.”
As Harvey continues, her son has adopted the habit of stretching out on the couch, talking to his grandmother on a retro rotary-style phone, the long cable stretching across the room. “There’s no scrolling, no distractions, no comparisons, no dopamine hits to chase,” she notes. “Instead he is just listening to stories, asking questions, and having the comfort of knowing someone who loves him is listening on the other end of the line.”
The post’s surface message is one about kids and technology. Harvey defiantly pushed back against the culture of weary resignation surrounding our youth and phone use, and discovered something sacred.
But I think there’s a more general idea lurking here as well.
The telephone, in its original hard-plastic, curly-wired form, is an example of what we might call an additive technology. Its goal is to take something you value—like talking to people you know—and make this activity easier and more accessible. You want to talk to your grandmother? Dial her number, and her voice fills your ear, clear and immediate. The phone seeks strictly to add value to your life.
Now compare this to Instagram. The value proposition is suddenly muddled. You might enjoy aspects of this app: the occasional diversion, the rare update from a cherished friend. But with these joys come endless sorrows as well. The scrolling can become worryingly addictive, while the content tends to devolve into a digital slurry—equal parts mind-numbing and anxiety-inducing.
It soon becomes clear that, unlike a landline with its straightforward benefits, this tool doesn’t have your best interests as its primary goal. It’s using you, making itself just compelling enough that you’ll pick it up, at which point it can monetize every last ounce of your time and data. It’s what we might call an extractive technology, as it seeks to extract value from you instead of providing it.
My philosophy of techno-selectionism builds on a simple belief: we must become significantly more critical and choosy about the tools we allow into our lives. This goal becomes complicated when we filter our choices based solely on whether something can plausibly offer us any benefit. Nearly everything passes that low bar.
But if we distinguish between additive and extractive technologies, clarity emerges. The key is not whether that app, device, or site is flashy or potentially cool. What matters is whose interest it ultimately serves. If it’s not our own, why bother? Life’s too short to miss out on time on the phone with grandma.
July 27, 2025
On Engineered Wonder
In the wake of my recent (and inaugural) visit to Disneyland, I read Richard Snow’s history of the park, Disney’s Land. Early in the book, Snow tells a story that I hadn’t heard before. It fascinated me—not just for its details, but also, as I’ll soon elaborate, for its potential relevance to our current moment.
The tale begins in 1948. According to Snow, Disney’s personal nurse and informal confidant, Hazel George, had become worried. “[She] began to sense that her boss was sinking into what seemed to her to be a dangerous depression,” Snow writes. “Perhaps even heading toward what was then called a nervous breakdown.”
The sources of this distress were obvious. Disney’s studio hadn’t had a hit since Bambi’s release in 1942, and the loss of the European markets during the war, as well as the economic uncertainty that followed in peacetime, had strained the company’s finances. Meanwhile, during this same period, Disney faced an animator strike that he took as a personal betrayal. “It seemed again to just be pound, pound, pound,” writes Snow. “Disney was often aggressive, abrupt, and when not angry, remote.”
Hazel George, however, had a solution. She knew about Disney’s childhood fascination with steam trains, so it caught her attention when she saw an advertisement in the paper for the Chicago Railroad Fair, which would feature exhibits from thirty different railway lines built out over fifty acres on the shore of Lake Michigan. She suggested Disney take a vacation to see the fair. He loved the idea.
In Chicago, entranced by what he encountered, Disney felt a spark of the creative enthusiasm that had been missing throughout the war years. He just needed to find a way to harness it. Serendipitously, upon returning to Los Angeles, one of his animators, Ward Kimball, introduced him to a group of West Coast train enthusiasts who were building scale models of functioning steam trains large enough for an adult to ride on (think: cars roughly the length of a child’s wagon).
This, Disney decided, is what he needed to do.
In 1949, Disney and his wife, Lillian, bought a five-acre plot of land on Carolwood Drive in the Holmby Hills neighborhood of LA, to build a new house. They chose the location in large part because Disney thought its layout would be perfect for his own scale railroad project.
Over the next year, he worked with the machine shops at his studio to help construct his scale trains and with a team of landscapers to build out the track and its surroundings. When complete, Disney’s Carolwood Pacific Railroad, as he called it, included a half-mile of right-of-way that circled the house and yard, including a 46-foot-long trestle bridge and a 90-foot-long tunnel dug under his wife’s flower bed—complete with an S-turn shape so that you couldn’t see the other end upon entering. His rolling stock included his 1:8 scale steam locomotive, called the Lilly Belle, six cast-metal gondolas, two boxcars, two stock cars, a flatcar, and a wooden caboose decorated inside with miniature details like a twig-sized broom and tiny potbelly stove that could actually be lit.

As Snow tells it, this project re-energized Disney. The more he worked on the line, the more ideas began to flow for his company. Soon, one such idea began to dominate all the others. In 1953, Disney abruptly shut down the Carolwood Pacific. It had accomplished its goal of helping him rediscover his creative inspiration, but now he had a bigger project to pursue, one that would dominate the final chapter of his career and provide him endless fascination and enthusiasm: he would build a theme park.
As Snow concludes: “Of all the influences that helped shape Disneyland, the railroad is the seminal one. Or, rather, a railroad. One Disney owned.”
~~~
My term for what Disney achieved in building the Carolwood Pacific Railroad is engineered wonder. More generally, engineered wonder is when you take something that sparks a genuine flare of interest, and you pursue it to a degree that’s remarkable (or, depending on who you ask, perhaps even absurd). Such projects are not done for money, or advancement, or respect, but instead just because they fascinate you, and you want to amplify that feeling as expansively as possible.
This brings me back to my promised connection to our current moment. In the early 1950s, Disney deployed engineered wonder to escape the creativity-sapping economic doldrums created by wartime uncertainty. Seventy-five years later, I see a more widely relevant use for this strategy: escaping the digital doldrums created by mediating too many of our experiences through screens.
I increasingly worry that as we live more and more of both our personal and professional lives in the undifferentiated abstraction of the digital, we lose touch with what it’s like to grapple with the joys and difficulties of the real world: to feel real awe, or curiosity, or fascination, and not just an algorithmically-optimized burst of emotion; to see our intentions manifest concretely in the world, and not just mechanically measured by view counts and likes.
Engineered wonder offers an escape from this state. It reawakens our nervous systems to what it’s like to engage with the non-digital. It teaches our brains to crave the real sensations and reactions that our screens can only simulate. It’s a way to jumpstart a more exciting chapter in our lives.
During Disney’s era, the Carolwood Pacific project likely seemed extreme to most people he encountered. Today, this extremeness might be exactly what we need.
July 20, 2025
No One Knows Anything About AI
I want to present you with two narratives about AI. Both of them are about using this technology to automate computer programming, but they point toward two very different conclusions.
The first narrative notes that Large Language Models (LLMs) are exceptionally well-suited for coding because source code, at its core, is just very well-structured text, which is exactly what these models excel at generating. Because of this tight match between need and capability, the programming industry is serving as an economic sacrificial lamb: the first major sector to suffer an AI-driven upheaval.
There has been no shortage of evidence to support these claims. Here are some examples, all from the last two months:
- Aravind Srinivas, the CEO of the AI company Perplexity, claims AI tools like Cursor and GitHub Copilot cut task completion time for his engineers from “three or four days to one hour.” He now mandates that every employee in his company use them: “The speed at which you can fix bugs and ship to production is scary.”
- An article in Inc. confidently declared: “In the world of software engineering, AI has indeed changed everything.”
- Not surprisingly, these immense new capabilities are being blamed for dire disruptions. One article from an investment site featured an alarming headline: “Tech Sector Sees 64,000 Job Cuts This Year Due to AI Advancement.” No one is safe from such cuts. “Major companies like Microsoft have been at the forefront of these layoffs,” the article explains, “citing AI advancements as a primary factor.”
- My world of academic computer science hasn’t been spared either. A splashy Atlantic piece opens with a distressing claim: “The Computer-Science Bubble Is Bursting,” which it largely blames on AI, a technology it describes as “ideally suited to replace the very type of person who built it.”

Given the confidence of these claims, you’d assume that computer programmers are rapidly going the way of the telegraph operator. But if you read a different set of articles and quotes from this same period, a very different narrative emerges:
- The AI evaluation company METR recently released the results of a randomized controlled trial in which a group of experienced open-source software developers were sorted into two groups, one of which would use AI coding tools to complete a collection of tasks, and one of which would not. As the report summarizes: “Surprisingly, we find that when developers use AI tools, they take 19% longer than without—AI makes them slower.”
- Meanwhile, other experienced engineers are beginning to push back on extreme claims about how AI will impact their industry. “Quitting programming as a career right now because of LLMs would be like quitting carpentry as a career thanks to the invention of the table saw,” quipped the developer Simon Willison.
- Tech CEO Nick Khami reacted to the claim that AI tools will drastically reduce the number of employees required to build a software product as follows: “I feel like I’m being gaslit every time I read this, and I worry it makes folks early in their software development journey feel like it’s a bad time investment.”
- But what about Microsoft replacing all those employees with AI tools? A closer look reveals that this is not what happened. The company’s actual announcement clarified that cuts were spread across divisions (like gaming) to free up more funds to invest in AI initiatives—not because AI was replacing workers.
- What about the poor CS majors? Later in that same Atlantic article, an alternative explanation is floated. The tech sector has been contracting recently to correct for exuberant spending during the pandemic years. This soft market makes a difference: “enrollment in the computer-science major has historically fluctuated with the job market…[and] prior declines have always rebounded to enrollment levels higher than where they started.” (Personal history note: when I was studying computer science as an undergraduate in the early 2000s, I remember that there was consternation about the plummeting numbers of majors in the wake of the original dot-com bust.)

Here we can find two completely different takes on the same AI issue, depending on which articles you read and which experts you listen to. What should we take away from this confusion? When it comes to AI’s impacts, we don’t yet know anything for sure. But this isn’t stopping everyone from pretending like we do.
My advice, for the moment:
- Tune out both the most heated and the most dismissive rhetoric.
- Focus on tangible changes in areas that you care about that really do seem connected to AI—read widely and ask people you trust about what they’re seeing.
- Beyond that, however, follow AI news with a large grain of salt. All of this is too new for anyone to really understand what they’re saying.

AI is important. But we don’t yet fully know why.
July 13, 2025
Dispatch From Vermont
Most summers, my family and I retreat to New England for much of July. From a professional perspective, I see this as an exercise in seasonality (to use a term from my book Slow Productivity), a way to recharge and recenter the creative efforts that sustain my work. This year, I needed all the help I could get. I had recently finished part one of my new book on the deep life and was struggling to find the right way to introduce the second.
During my first couple of days up north, I made rapid progress on the new chapter. But I soon began to notice some grit in the gears of my conceptual narrative. As I pushed forward in my writing, the gnashing and grinding became louder and more worrisome. Eventually, I had to admit that my approach wasn’t working. I threw out a couple thousand words, and went searching for a better idea.
It was at this point that we fortuitously decided to take a hike. We headed to Franconia Notch in the White Mountains, which we’ve always enjoyed for its unruly, romantic grandeur. We had decided to tackle the trek up to Lonesome Lake, a serene body of water nestled at 2,700 feet amid the peaks and ridges of Cannon Mountain.
The Lonesome Lake hike begins with a mile of steady elevation gain. At first, you’re accompanied by the sounds of traffic from I-93 below; your legs burning, mind still mulling the mundane. But eventually the trail turns, and the road noise dissipates. After a while, your attention has no choice but to narrow. Time expands. You almost don’t notice when the trail begins to flatten. Then, picking your way through spindly birches, you emerge onto the lake’s quiet, wind-rippled serenity.

It was at Lonesome Lake that my difficulties with my new chapter began to dissipate. With an unhurried clarity, I saw a better way to make my argument. I scribbled some notes down in the pocket-sized notebook I always carry. As we finally, reluctantly, made our way back down the mountain, I continued to refine my thinking.
Walking and thinking have been deeply intertwined since the dawn of serious thought. Aristotle so embraced mobile cognition—he wore out the covered walkways of his outdoor academy, the Lyceum—that his followers became known as the Peripatetic School, from the Greek peripatein, meaning ‘to walk around’.
My recent experience in the White Mountains was a minor reminder of this major truth. In an age where AI threatens to automate ever-wider swaths of human thought, it seems particularly important to remember both the hard-won dignity of producing new ideas de novo within the human brain, and the simple actions, like putting the body in motion, that help this miraculous process unfold.
July 6, 2025
Don’t Ignore Your Moral Intuition About Phones
In a recent New Yorker review of Matt Richtel’s new book, How We Grow Up, Molly Fischer effectively summarizes the current debate about the impact phones and social media are having on teens. Fischer focuses, in particular, on Jon Haidt’s book, The Anxious Generation, which has, to date, spent 66 weeks on the Times bestseller list.
“Haidt points to a selection of statistics across Anglophone and Nordic countries to suggest that rising rates of teen unhappiness are an international trend requiring an international explanation,” Fischer writes. “But it’s possible to choose other data points that complicate Haidt’s picture—among South Korean teens, for example, rates of depression fell between 2006 and 2018.”
Fischer also notes that American suicide rates are up among many demographics, not just teens, and that some critics attribute depression increases in adolescent girls to better screening (though Haidt has addressed this latter point by noting that hospitalizations for self-harm among this group rose alongside rates of mental health diagnoses).
The style of critique that Fischer summarizes is familiar to me as someone who frequently writes and speaks about these issues. Some of this pushback, of course, is the result of posturing and status-seeking, but most of it seems well-intentioned: the gears of science, powered by somewhat ambiguous data, grinding through claims and counterclaims, wearing down rough edges and ultimately producing something closer and closer to a polished truth.
And yet, something about this whole conversation has increasingly rubbed me the wrong way. I couldn’t quite put my finger on it until I came across Ezra Klein’s interview with Haidt, released last April (hat tip: Kate McKay).
It wasn’t the interview so much that caught my attention as it was something that Klein said in his introduction:
“I always found the conversation over [The Anxious Generation] to be a little annoying because it got at one of the difficulties we’re having in parenting and in society: a tendency to instrumentalize everything into social science. Unless I can show you on a chart the way something is bad, we have almost no language for saying it’s bad.
“This phenomenon is, to me, a collapse in our sense of what a good life is and what it means to flourish as a human being.”
I think Klein does a good job of articulating the frustration I’d been feeling. In highly educated elite circles, like those in which I operate, we have become so conditioned by technical discourse that we’ve begun outsourcing our moral intuition to statistical analyses.
We hesitate to take a strong stance because we fear the data might reveal we were wrong, rendering us guilty of a humiliating sin in technocratic totalitarianism, letting the messiness of individual human emotion derail us from the optimal operating procedure. We’re desperate to do the right – read: most acceptable to our social/tribal community – thing, and need a chattering class of experts to assure us that we are. (See Neil Postman’s underrated book Technopoly for a much smarter gloss on this cultural trend.)
When it comes to children, however, we cannot and should not abdicate our moral intuition.
If you’re uncomfortable with the potential impact these devices may have on your kids, you don’t have to wait for the scientific community to reach a conclusion about depression rates in South Korea before you take action.
Data can be informative, but a lot of parenting comes from the gut. I don’t feel right, for example, offering my pre-adolescent son unrestricted access to pornography, hateful tirades, mind-numbing video games, and optimally addictive content on a device he can carry everywhere in his pocket. I know this is a bad idea for him, even if there’s lingering debate among social psychologists about statistical effect sizes when phone harms are studied under different regression models.
Our job is to help our kids “flourish” as human beings (to use Klein’s terminology), and this is as much about our lived experience as it is about studies. When it comes to phones and kids, our moral intuition matters. We should trust it.
June 29, 2025
Is AI Making Us Lazy?
Last fall, I published a New Yorker essay titled, “What Kind of Writer is ChatGPT?”. My goal for the piece was to better understand how undergraduate and graduate college students were using AI to help with their writing assignments.
At the time, there was concern that these tools would become plagiarism machines. (“AI seems almost built for cheating,” wrote Ethan Mollick in his bestselling book, Co-Intelligence.) What I observed was somewhat more complex.
The students weren’t using AI to write for them, but instead to hold conversations about their writing. If anything, the approach seemed less efficient and more drawn out than simply buckling down and filling the page. Based on my interviews, it became clear that the students’ goal was less about reducing overall effort than it was about reducing the maximum cognitive strain required to produce prose.
“‘Talking’ to the chatbot about the article was more fun than toiling in quiet isolation,” I wrote. Normal writing requires sharp spikes of focus, while working with ChatGPT “mellowed the experience, rounding those spikes into the smooth curves of a sine wave.”
I was thinking about this essay recently, because a new research paper from the MIT Media Lab, titled “Your Brain on ChatGPT,” provides some support for my hypothesis. The researchers asked one group of participants to write an essay with no external help, and another group to rely on ChatGPT 4o. They hooked both groups to EEG machines to measure their brain activity.
“The most pronounced difference emerged in alpha band connectivity, with the Brain-only group showing significantly stronger semantic processing networks,” the researchers explain, before then adding, “the Brain-only group also demonstrated stronger occipital-to-frontal information flow.”
What does this mean? The researchers propose the following interpretation:
“The higher alpha connectivity in the Brain-only group suggests that writing without assistance most likely induced greater internally driven processing…their brains likely engaged in more internal brainstorming and semantic retrieval. The LLM group…may have relied less on purely internal semantic generation, leading to lower alpha connectivity, because some creative burden was offloaded to the tool.” [emphasis mine]
Put simply, writing with AI, as I observed last fall, reduces the maximum strain required from your brain. For many commentators responding to this article, this reality is self-evidently good. “Cognitive offloading happens when great tools let us work a bit more efficiently and with a bit less mental effort for the same result,” explained a tech CEO on X. “The spreadsheet didn’t kill math; it built billion-dollar industries. Why should we want to keep our brains using the same resources for the same task?”
My response to this reality is split. On the one hand, I think there are contexts in which reducing the strain of writing is a clear benefit. Professional communication in email and reports comes to mind. The writing here is subservient to the larger goal of communicating useful information, so if there’s an easier way to accomplish this goal, then why not use it?
But in the context of academia, cognitive offloading no longer seems so benign. Here is a collection of relevant concerns raised about AI writing and learning in the MIT paper [emphases mine]:
“When students rely on AI to produce lengthy or complex essays, they may bypass the process of synthesizing information from memory, which can hinder their understanding and retention of the material.”
“This suggests that while AI tools can enhance productivity, they may also promote a form of ‘metacognitive laziness,’ where students offload cognitive and metacognitive responsibilities to the AI, potentially hindering their ability to self-regulate and engage deeply with the learning material.”
“AI tools…can make it easier for students to avoid the intellectual effort required to internalize key concepts, which is crucial for long-term learning and knowledge transfer.”
In a learning environment, the feeling of strain is often a by-product of getting smarter. To minimize this strain is like using an electric scooter to make the marches easier in military boot camp; it will accomplish this goal in the short term, but it defeats the long-term conditioning purposes of the marches.
In this narrow debate, we see hints of the larger tension partially defining the emerging Age of AI: to grapple fully with this new technology, we need to better grapple with both the utility and dignity of human thought.
####
To hear a more detailed discussion of this new paper, listen to today’s episode of my podcast, where I’m joined by Brad Stulberg to help dissect its findings and implications [ listen | watch ].
June 22, 2025
An Important New Study on Phones and Kids
One of the topics I’ve returned to repeatedly in my work is the intersection of smartphones and children (see, for example, my two New Yorker essays on the topic, or my 2023 presentation that surveys the history of the relevant research literature).
Given this interest, I was, of course, pleased to see an important new study on the topic making the rounds recently: “A Consensus Statement on Potential Negative Impacts of Smartphone and Social Media Use on Adolescent Mental Health.”
To better understand how experts truly think about these issues, the study’s lead authors, Jay Van Bavel and Valerio Capraro, convened a group of 120 researchers from 11 disciplines and had them evaluate a total of 26 claims about children and phones. As Van Bavel explained in a recent appearance on Derek Thompson’s podcast, their goal was to move past the non-representative shouting about these topics that happens online and instead try to arrive at some consensus views.
The panel of experts was able to identify a series of statements that essentially all of them (more than 90%) agreed were more or less true. These included:
- Adolescent mental health has declined in several Western countries over the past 20 years. (Note: contrarians had been claiming that this trend was illusory and based on reporting effects.)
- Smartphone and social media use correlate with attention problems and behavioral addiction.
- Among girls, social media use may be associated with body dissatisfaction, perfectionism, exposure to mental disorders, and risk of sexual harassment.

These consensus statements are damaging for those who still maintain the belief, popular at the end of the last decade, that data on these issues is mixed at best, and that it’s just as likely that phones cause no serious issues for kids. The current consensus is clear: these devices are addictive and distracting, and for young girls, in particular, can increase the likelihood of several mental health harms. And all of this is happening against a backdrop of declining adolescent mental health.
The panel was less confident about policy solutions to these issues. They failed to reach a consensus, for example, on the claim that age limits on social media would improve mental health. But a closer look reveals that a majority of experts believe this is “probably true,” and that only a tiny fraction believe there is “contradictory evidence” against this claim. The hesitancy here is simply a reflection of the reality that such interventions haven’t yet been tried, so we don’t have data confirming they’ll work.
Here are my main takeaways from this paper…
First, rigorous social psychology studies are tricky. In addition to the numerous confounding factors associated with them, the experiments are particularly difficult to design. As a result, we don’t have the same sort of lock-step consensus on our concerns about this technology that we might be able to generate for, say, the claim that human activity is warming the globe.
But, it’s also now clear that this field is no longer actually divided on the question of whether, generally speaking, smartphones and social media are bad for kids. In this new study, almost every major claim about this idea generated at least majority support, with many being accepted by over 90% of the experts surveyed. There were close to no major claims for which more than a very small percentage of experts felt that there was contradictory evidence.
In social psychology, this might be as clear a conclusion as we’re likely to achieve. Combine these results with the strong self-reports from children and parents decrying these technologies and their negative impacts, and I think there’s no longer an excuse not to act.
There’s been a sort of pseudo-intellectual thrill in saying things like, “Well, it’s complicated…” when you encounter strong claims about smartphones like those made in Jon Haidt’s immensely popular book, The Anxious Generation. But such a statement is tautological. Of course, it’s complicated; we’re talking about technology-induced social trends; we’re never going to get to 100% certainty, and there will always be some contradictory reports.
What matters now is the action that we think makes the most sense given what we know. This new paper is the final push we need to accept that the precautionary principle should clearly apply. Little is lost by preventing a 14-year-old from accessing TikTok or Snapchat, or telling a 10-year-old they cannot have unrestricted access to the internet through their own smartphone, but so much will almost certainly be gained.
#####
If you want to hear a longer discussion about this study, listen to the most recent episode of my podcast, or for the video version, watch here.
On an unrelated note: I want to highlight an interesting new service: DoneDaily. It offers online coaching for professional productivity, based loosely on my philosophy of multi-scale planning. I’ve known these guys for a long time (the company’s founder used to offer health advice on my blog), and I think they’ve done a great job. Worth checking out…