
AI is everywhere, the pressure to adopt it is relentless, and the evidence that it is making us smarter is scarcer every quarter.
On New Year’s Day 2026, a programmer named Steve Yegge launched an open source platform called Gas Town. It allows users to orchestrate swarms of AI coding agents simultaneously, assembling software at speeds no human could match.
One of the first people to try it described the experience in terms that had nothing to do with productivity. “There’s really too much going on for you to reasonably understand,” he wrote. “I had a palpable sense of stress watching it.”
That phrase should be pinned to the wall of every executive suite, every venture capital boardroom, and every CES main stage where the word “intelligence” is thrown around like confetti. Because something strange is happening in the relationship between humans and the technology we continue to call intelligent.
Machines are getting faster. The humans who interact with them are more exhausted, more anxious, and in several ways less capable of the one thing that intelligence was supposed to improve: thinking clearly.
The pressure to adopt AI is now so widespread that it has developed its own vocabulary of coercion.
You need to have AI.
You need to use AI.
You need to buy AI.
Your competitors are already using it.
Your children will be left behind without it.
That language doesn’t come from engineers quietly solving problems. It comes from earnings calls, product launches, and LinkedIn posts written with the manic energy of people who have confused selling a product with describing reality.
In January 2026, at the World Economic Forum in Davos, Microsoft CEO Satya Nadella offered a quote so revealing that it deserves to be studied as a cultural artifact. He warned that AI was at risk of losing its “social permission” to consume large amounts of energy unless it begins to bring tangible benefits to people’s lives.
The framing was striking: It’s not about whether the technology works, but whether you can keep the public engaged while the industry figures out if it works. Nadella called AI a “cognitive amplifier” offering “access to infinite minds.”
A month later, a Circana survey of American consumers found that 35 percent of them didn’t want AI on their devices at all. The main reason was not confusion or technophobia. It was simpler than that. They said they didn’t need it.
The gap between rhetoric and evidence has become difficult to ignore. In March 2026, Goldman Sachs released an analysis of fourth-quarter earnings data and found, in the words of senior economist Ronnie Walker, “There is no significant relationship between productivity and economy-wide adoption of AI.”
The bank noted that a record 70 percent of S&P 500 management teams had discussed AI in their earnings calls. Only 10 percent had quantified its impact on specific use cases, and only one percent had quantified its impact on profits. Meanwhile, the five largest US tech companies were collectively expected to spend $667 billion on AI infrastructure in 2026, a 62 percent increase from the previous year.
The National Bureau of Economic Research described the situation as a “productivity paradox”: perceived gains outpacing measured ones.
There are real productivity improvements, but they are surprisingly limited. Goldman found an average gain of about 30 percent in two specific areas: customer service and software development. Outside of those areas, evidence of broader improvement was, in the bank’s assessment, essentially absent. The promised revolution, for now, is taking place in two rooms of a very large house.
It’s worth taking a closer look at what’s happening in those rooms, though, because even when the AI delivers, something else seems to be breaking.
In February 2026, researchers at UC Berkeley’s Haas School of Business published the results of an eight-month study conducted at a 200-person American technology company. They found that AI did not reduce workloads. It intensified them. Tasks got faster, so expectations rose. Expectations rose, so scope expanded. Scope expanded, so workers took on responsibilities that previously belonged to other functions. Product managers started writing code. Researchers took on engineering work. Role boundaries dissolved because the tools made it feel possible, and then burnout hit.
I got tired just writing it.
The researchers identified what they described as a cycle of workload escalation: a gradual accumulation of tasks that goes unnoticed until cognitive fatigue degrades the quality of each decision.
Harvard Business Review gave the phenomenon a more forceful name: AI brain fry. A Boston Consulting Group study of nearly 1,500 American workers found that 14 percent of those who use AI tools requiring significant oversight reported experiencing it: a distinct form of brain fog characterized by difficulty concentrating, slower decision-making, and headaches after prolonged interaction with AI.
The workers most affected were not the skeptics or the laggards. They were the enthusiastic adopters, the ones who had done exactly what every speech told them to do.
The distribution of this exhaustion is not random. Sixty-two percent of associates and 61 percent of entry-level workers reported AI-related burnout, according to the Harvard Business Review study.
Among senior management executives, the figure fell to 38 percent. The pattern is consistent with what anyone who has spent time in an organization could have predicted: the people who make strategic decisions about AI adoption are not the people who manage its results, clean up its bugs, and switch between its tools eight hours a day.
All of this raises a question the industry would rather skip: what exactly do we mean when we use the word “intelligence”?
The term “artificial intelligence” was coined in 1956 at a workshop at Dartmouth College and has been doing a particular kind of ideological work ever since. By naming the field after a human quality, its founders took a step that was as much marketing as science. The name invited us to see computation as cognition, pattern matching as understanding, speed as wisdom.
Whenever a product is described as “smart,” it borrows the emotional weight of a word that, for most of human history, meant something like judgment, reflection, and the ability to sit with uncertainty long enough to think clearly about it.
That’s not what these systems do. What they do, often brilliantly, is statistical prediction on an extraordinary scale. They recognize patterns in the data, generate plausible continuations of sequences and optimize the objectives defined by their designers.
This is genuinely useful. It is not intelligence in the sense that any philosopher, psychologist, or indeed any thoughtful person on the street would recognize. The slippage between the two meanings is not accidental. It is the driving force of the entire commercial project.
Here is the deepest irony: in the rush to surround ourselves with artificial intelligence, we seem to be eroding the conditions under which real human intelligence operates. Intelligence, real intelligence, requires things that the AI economy is systematically destroying: uninterrupted attention, tolerance for ambiguity, the willingness to sit with a problem before seeking a solution, and the cognitive space to doubt, reconsider, and change your mind.
Researchers at the London School of Economics argued in a February 2026 paper that the manufactured urgency around AI reduces the space for democratic deliberation itself, collapsing the future into a single inevitability and leaving no room for the slow, uncertain and distinctly human process of deciding together what we really want.
There is something almost comical about the situation.
We have built machines that can process language, generate images, and write code at superhuman speed, and people who use them report brain fog, difficulty concentrating, and an increasing inability to think.
A senior engineering manager cited in the BCG study described juggling multiple AI tools to weigh technical decisions, generate drafts, and summarize information. The constant switching and double-checking created what he described as a kind of mental clutter. His effort had shifted from solving the core problem to managing the tools.
Not everyone is going along. A third of consumers have watched AI arrive on their phones and laptops and plainly said no. Workers whose organizations value work-life balance report 28 percent less AI fatigue, according to BCG research, which suggests the problem has less to do with the technology itself and more to do with the culture of compulsive adoption that surrounds it.
The question is not whether AI is useful. In certain applications, it clearly is. The question is whether the frenzy around it, the relentless pressure to adopt, integrate and accelerate, is making us smarter or simply more docile.
Six hundred and sixty-seven billion dollars in projected annual investment. Record mentions on earnings calls. Entire conferences dedicated to the word “intelligence.”
And in that consumer survey, the most common reason a human being gave for not wanting any of it was four words: I don’t need it. That sentence, calm and unimpressed, may be the smartest thing anyone has said about AI in years. The question now is whether we still have the attention span to hear it.





