Five architects of the AI economy explain where the wheels are coming off


Earlier this week, five people who touch every layer of the AI supply chain sat down at the Milken Global Conference in Beverly Hills, where they spoke with this editor about everything from chip shortages to orbital data centers to the possibility that the entire architecture underpinning the technology is wrong.

On stage with TechCrunch: Christophe Fouquet, CEO of ASML, the Dutch company that holds the monopoly on extreme ultraviolet lithography machines, without which modern chips would not exist; Francis deSouza, chief operating officer of Google Cloud, who oversees one of the biggest infrastructure bets in corporate history; Qasar Younis, co-founder and CEO of Applied Intuition, a $15 billion physical AI company that started in simulation and has since moved into defense; Dimitry Shevelenko, chief commercial officer at Perplexity, the AI-native agentic search company; and Eve Bodnia, a quantum physicist who left academia to challenge, at her startup Logical Intelligence, the fundamental architecture that most of the AI industry takes for granted. (Meta’s former chief AI scientist Yann LeCun signed on earlier this year as founding chairman of Logical Intelligence’s technical research board.)

This is what the five said:

Bottlenecks are real

The rise of AI is running into hard physical limits, and the limitations begin lower in the stack than many imagine. Fouquet was the first to say it, describing a “huge acceleration of chip manufacturing,” while expressing his “firm belief” that, despite all that effort, “over the next two, three, maybe five years, the market will be in limited supply,” meaning the hyperscalers (Google, Microsoft, Amazon, Meta) won’t get all the chips they’re paying for, period.

DeSouza highlighted how big (and how fast it’s growing) this problem is, reminding the audience that Google Cloud’s revenue surpassed $20 billion last quarter, growing 63%, while its backlog (revenue committed but not yet delivered) nearly doubled in a single quarter, from $250 billion to $460 billion. “The demand is real,” he stated with impressive calm.

For Younis, the limitation comes mainly from elsewhere. Applied Intuition builds autonomy systems for cars, trucks, drones, mining equipment, and defense vehicles, and its bottleneck isn’t silicon: it’s data that can only be collected by sending machines out into the real world and watching what happens. “You have to find it in the real world,” he said, and no synthetic simulation completely closes that gap. “It will be a long time before models that run in the physical world can be fully trained synthetically.”


The energy problem is also real

If chips are the first bottleneck, energy is what looms behind. DeSouza confirmed that Google is exploring data centers in space as a serious response to energy constraints. “You get access to more abundant energy,” he said. Of course, even in orbit, it’s not simple. DeSouza noted that space is a vacuum, so it eliminates convection, leaving radiation as the only way to release heat to the surrounding environment (a process much slower and more difficult to design than the air and liquid cooling systems that data centers rely on today). But the company still considers it a legitimate path.
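The physics behind that caveat can be sketched with the Stefan–Boltzmann law (a textbook result, not a figure from Google; the panel values below are illustrative assumptions, not anything deSouza cited):

```latex
P = \varepsilon \sigma A \left(T^{4} - T_{\mathrm{env}}^{4}\right),
\qquad \sigma \approx 5.67 \times 10^{-8}\ \mathrm{W\,m^{-2}\,K^{-4}}
```

For an illustrative 1 m² radiator with emissivity 0.9 held at 300 K and facing deep space (roughly 3 K), that works out to only about 410 W of heat rejection per square meter, which is why orbital data centers would need very large radiator surfaces compared with the air and liquid cooling available on the ground.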

The deeper argument deSouza made, unsurprisingly, was about efficiency through integration. Google’s strategy of co-designing its entire AI stack (from custom TPU chips to models and agents) pays dividends in FLOPS per watt (more computation per unit of energy) that a company buying off-the-shelf components simply cannot replicate, he suggested. “Running Gemini on TPU is much more power efficient than any other configuration,” because chip designers know what’s coming in the model before it ships, he said.

Fouquet raised a similar point later in the discussion. “Nothing comes without a price,” he said. The industry is in a strange place right now, investing extraordinary amounts of capital, driven by strategic necessity. But more computing means more energy, and more energy comes at a price.

A different type of intelligence

While the rest of the industry debates the scale, architecture, and efficiency of inference within the large language model paradigm, Bodnia is building something very different.

Her company, Logical Intelligence, is based on so-called energy-based models (EBMs), a class of AI that does not predict the next token in a sequence, but instead tries to understand the rules underlying the data, in a way she says is closer to how the human brain actually works. “Language is a user interface between my brain and yours,” she said. “Reasoning itself is not tied to any language.”
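To make the distinction concrete, here is a toy sketch of the EBM idea, with illustrative names and a made-up linear "rule" matrix; it is not Logical Intelligence's architecture, just the general pattern of scoring whole configurations by energy instead of predicting the next token:

```python
import numpy as np

def energy(A, x):
    # An EBM assigns a scalar "energy" to a whole configuration x.
    # Here, energy measures how badly x violates the linear rules
    # encoded in A (configurations satisfying A @ x = 0 have energy 0).
    return float(np.sum((A @ x) ** 2))

# Toy "rules": consecutive components of x must be equal.
A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

# Inference = finding a low-energy configuration. In this minimal
# sketch we just compare two hand-picked candidates.
candidates = {
    "consistent": np.array([1.0, 1.0, 1.0]),  # satisfies both rules
    "violating":  np.array([1.0, 0.0, 2.0]),  # breaks them
}
scores = {name: energy(A, x) for name, x in candidates.items()}
best = min(scores, key=scores.get)
print(best, scores[best])
```

The key contrast with an autoregressive LLM is that nothing here is sequential: the model judges entire candidate states against learned constraints, which is why EBM proponents argue the approach maps more naturally onto physical-world reasoning.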

Her company’s largest model runs on 200 million parameters, compared to hundreds of billions for leading LLMs, and she claims it runs thousands of times faster. More importantly, it is designed to update its knowledge as data changes, rather than requiring retraining from scratch.

For chip design, robotics, and other domains where a system needs to understand physical rules rather than linguistic patterns, she argues that EBMs are the most natural choice. “When you drive a car, you don’t look for patterns in any language. You look around, understand the rules of the world around you, and make a decision.” It’s an interesting argument, and one that’s likely to attract more attention in the coming months as the AI field starts to question whether scale alone is enough.

Agents, security barriers and trust

Shevelenko spent much of the conversation explaining how Perplexity has evolved from a search product into something he now calls a “digital worker.” Perplexity Computer, its newest offering, is designed not as a tool that a knowledge worker uses, but as staff that a knowledge worker manages. “Every day you wake up and you have a hundred employees on your team,” he said of the opportunity. “What are you going to do to make the most of it?”

It’s a convincing pitch; it also raises obvious questions about control, so I asked about them. His answer: granularity. Enterprise administrators can specify not only which connectors and tools an agent can access, but also whether those permissions are read-only or read-write, a distinction that matters greatly when agents operate inside corporate systems. When Comet, Perplexity’s computer-use agent, takes action on behalf of a user, it presents a plan and requests approval first. Some users find the friction annoying, Shevelenko said, but he considers it essential, particularly after joining Lazard’s board of directors, where he said he has found himself unexpectedly sympathetic to the conservative instincts of a CISO protecting a 180-year-old brand built entirely on customer trust. “Granularity is the foundation of good security hygiene,” he said.

Sovereignty, not just security

Younis offered what may have been the most geopolitically charged observation of the panel, which is that physical AI and national sovereignty are intertwined in ways that purely digital AI never was.

The internet initially spread as an American technology and only faced setbacks at the application layer (the Ubers and DoorDashes) when offline consequences became visible. Physical AI is different. Autonomous vehicles, defense drones, mining equipment, agricultural machines: these manifest in the real world in ways that governments cannot ignore, raising questions about security, data collection, and who ultimately controls the systems operating within a nation’s borders. “Almost constantly, every country says: we don’t want this intelligence in physical form within our borders, controlled by another country.” Fewer nations can currently deploy a robotaxi, he told the crowd, than possess nuclear weapons.

Fouquet put it a little differently. AI progress in China is real (the launch of DeepSeek earlier this year sparked something close to panic in parts of the industry), but that progress is limited below the model layer. Without access to EUV lithography, Chinese chipmakers cannot make the most advanced semiconductors, and models built with older hardware operate at a compounded disadvantage no matter how good the software is. “Today, in the United States, you have the data, you have the access to computing, you have the chips, you have the talent. China does a very good job at the top, but it is missing some elements at the bottom,” Fouquet said.

The generational question

Near the end of our panel, someone in the audience asked the obvious and uncomfortable question: Will all of this affect the critical thinking ability of the next generation?

The responses were optimistic, as you would expect from people who have staked their careers on this technology. DeSouza immediately pointed to the magnitude of the problems that more powerful tools could finally allow humanity to address: neurological diseases whose biological mechanisms we still don’t understand, greenhouse gas removal, and grid infrastructure upgrades that have been delayed for decades. “This should take us to the next level of creativity,” he said.

Shevelenko made a more pragmatic point: entry-level work may be disappearing, but the ability to launch something independently has never been more accessible. “(For) anyone who has Perplexity Computer… the limitation is their own curiosity and agency.”

Younis drew the clearest distinction between knowledge work and physical work. He pointed out that the average American farmer is 58 years old and that labor shortages in mining, long-haul trucking, and agriculture are chronic and growing, not because wages are too low, but because people don’t want those jobs. In those areas, physical AI is not displacing willing workers. It is filling a void that already exists and only seems to deepen from here.




