
A viral post on X from veteran programmer and former Google engineer Steve Yegge set off a rhetorical storm this week, prompting harsh public rebuttals from some of Google's most prominent AI leaders and reopening a sensitive question for the company: to what extent are its own engineers actually using the latest generation of AI coding tools?
The debate began after Yegge summarized what he said was the opinion of a friend, a current Google employee (or "Googler"), who claimed that internal AI adoption at the Gemini maker is far more ordinary and less cutting-edge than outsiders might expect.
Yegge said his Google friend claimed that Google engineering reflects an "average" industry pattern, a 20%-60%-20% split: a small group of outright AI rejecters (20%), a much larger middle group that still relies primarily on simpler chat workflows and coding assistants (60%), and a final small group of AI-first, cutting-edge engineers who have mastered agentic tools (20%).
A VentureBeat search of X using Grok, the AI assistant built by the platform's parent company, found that Yegge's April 13 post spread quickly, surpassing 4,500 likes, 205 quote posts, 458 replies and 1.9 million views as of April 14.
We've reached out to Google for comment on the claims and will update this story if we receive a response.
A veteran and outspoken ex-Googler's voice
Why did the opinion of Yegge's anonymous Googler friend land so hard? Partly because Yegge is not just another commentator sniping from the sidelines.
He spent about 13 years at Google after earlier stints at Amazon and GeoWorks, later joined Grab, and became head of engineering at Sourcegraph in 2022. He has long been known in software circles for his widely read essays on programming and engineering culture, and for an internal Google memo that accidentally became public in 2011 and attracted significant media attention.
That history helps explain why engineers and executives still take his criticism seriously, even when they reject it.
Yegge has built a reputation over many years as an outspoken voice, internally and externally, in software culture, someone with enough cachet in the industry that his judgments can travel fast, especially when they hit nerves within big tech companies.
A Wikipedia summary of his career highlights his long tenure at Google and the enormous attention his blog posts and previous criticisms of the company have received.
Unraveling Yegge's friend's argument
In this case, Yegge’s argument was not simply that Google uses too little AI. It was that the company’s adoption may be uneven, culturally limited, and less transformed than its brand implies.
His friend allegedly argued that some Google employees avoided Anthropic's Claude Code because it was framed as "the enemy," and that Gemini was not yet sufficient for more complete agentic coding workflows. The friend contrasted Google with what he described as a smaller set of companies that move much faster.
Rebuttals from Hassabis and current Google employees
The first big pushback came from Demis Hassabis, co-founder and CEO of Google DeepMind, who responded directly and forcefully. “Maybe tell your friend to do some real work and stop spreading absolute nonsense. This post is completely false and just click bait,” Hassabis wrote.
Other Google leaders followed with longer defenses.
Addy Osmani, a Google Cloud AI director, wrote that Yegge's account "does not match the state of agentic coding at our company." He added: "Over 40,000 SWEs use agentic coding weekly here."
Osmani said that Google employees have access to internal tools and systems, including “custom models, skills, CLI, and MCP,” and rejected the idea that Google employees are isolated from external models, writing that “people can even use @AnthropicAI models in Vertex” and concluding that “Google is anything but average.”
Other current Google employees reinforced that message. Jaana Dogan, a Google software engineer, wrote in a quote post: "Everyone I work with uses @antigravity like every second of the day," then followed up with another X post saying: "Unpopular opinion: if you think token burn is a productivity metric, no one should take you seriously. Imagine you are one of the top 0.0001% writers and only the tokens you produce count."
Paige Bailey, DevX engineering lead at Google DeepMind, said teams had agents "running 24/7."
Several other Google and DeepMind figures also disputed Yegge’s characterization, some questioning the factual basis for his claims and others suggesting he lacked visibility into current internal usage.
Yegge's rebuttal
Yegge, for his part, did not back down. In a reply to Hassabis, he wrote, "I'm not trying to misrepresent anyone," but argued that by his own standard for advanced AI adoption, Google still doesn't seem to be doing especially well.
He pointed to token consumption and the replacement of old development habits with truly agentic workflows as the most meaningful benchmarks, and said he would be willing to walk back his criticism if Google could demonstrate that its engineers were operating at that level.
AI Adoption vs. AI Transformation
That leaves the central dispute unresolved, but clearer. This is less a fight over whether Google engineers use AI than a fight over what should be considered meaningful adoption.
Googlers highlight the scale, weekly usage, and availability of internal and external tools. Yegge argues that such measures may capture broad exposure without demonstrating a deeper change, an AI transformation, in the way engineering work is done. The clash reflects a broader industry divide between visible usage metrics and more transformative power user behavior.
For Google, the issue is especially sensitive. Yegge has criticized the company before, including in a 2018 essay explaining why he left, in which he argued that Google had become too risk-averse and had lost much of its ability to innovate.
If his latest critique had come from a lesser-known poster, it might have faded away. Coming from a former Google engineer with a history of memorable public criticism, it instead drew direct answers from some of the company's top AI figures and turned a single post into a broader public argument about whether Google's leadership in AI runs as deep internally as it appears from the outside.





