Google and AWS split the AI agent stack between control and execution



The era of companies cobbling together quick chains and shadow agents is coming to an end as more options emerge for orchestrating complex multi-agent systems. As organizations move AI agents into production, the question remains: how will we manage them?

Google and Amazon Web Services offer fundamentally different answers, illustrating a split in the AI stack: Google manages agents at the system layer, while AWS focuses on the execution layer.

The debate over how to manage and control agents gained new energy last month when competing companies launched or updated their agent creation platforms — Anthropic with new Claude managed agents and OpenAI with improvements to its Agents SDK — giving developer teams more options for managing agents.

AWS, with new capabilities added to Bedrock AgentCore, is optimizing for speed — relying on harnesses to get agents into production faster — while also offering identity management and tooling.

Meanwhile, Google’s Gemini Enterprise takes a governance-centric approach using a Kubernetes-style control plane. Each method offers insight into how agents are transitioning from short-lived task helpers to longer-lived entities within a workflow.

Updates and umbrellas

To understand where each company stands, here is what is actually new.

Google launched a new version of Gemini Enterprise, bringing its enterprise AI agent offerings (Gemini Enterprise Platform and Gemini Enterprise Application) under one umbrella.

The company has renamed Vertex AI as the Gemini Enterprise Platform, although it insists that, apart from the name change and new features, it is still fundamentally the same interface.

“We want to provide a platform and a gateway for businesses to have access to all the AI systems and tools that Google provides,” Maryam Gholami, senior director of product management at Gemini Enterprise, told VentureBeat in an interview. “The way you can think of it is that the Gemini Enterprise App is built on the Gemini Enterprise Agent platform, and the security and governance tools are provided free of charge as part of the Gemini Enterprise App subscription.”

On the other hand, AWS added a new managed agent harness to Bedrock AgentCore. The company said in a press release shared with VentureBeat that the harness “replaces the initial build with a configuration-based starting point powered by Strands Agents, AWS’s open source agent framework.”

Users define what the agent does, the model it uses, and the tools it calls, and AgentCore does the work of putting all that together to run the agent.
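This configuration-first model can be sketched in a few lines. The following is a hypothetical illustration only — the field names and `build_agent` helper are invented for this example and are not AgentCore's or Strands Agents' actual schema:

```python
# Hypothetical sketch of a configuration-based agent definition, in the
# spirit of a managed harness. Field names are illustrative, not AWS's API.
agent_config = {
    "name": "invoice-triage",
    "instructions": "Classify incoming invoices and flag anomalies.",
    "model": "some-foundation-model",            # the model the agent uses
    "tools": ["lookup_vendor", "flag_for_review"],  # tools it may call
}

def build_agent(config: dict) -> dict:
    """Stand-in for the harness: validate the config, return a runnable spec."""
    required = {"name", "instructions", "model", "tools"}
    missing = required - config.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return {**config, "status": "ready"}

agent = build_agent(agent_config)
print(agent["status"])  # → ready
```

The point of the pattern is that the developer supplies declarative intent (goal, model, tools) and the platform owns the assembly and runtime wiring.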

Agents are becoming systems

The shift toward long-lived, stateful autonomous agents has forced a rethinking of how AI systems behave. As agents move from short-lived tasks to long-lived workflows, a new class of failure is emerging: state drift.

As agents continue to operate, they accumulate state: evolving memory, responses, and context. Over time, that state goes stale. Data sources change, and tools may return conflicting answers, leaving the agent more vulnerable to inconsistencies and less reliable.
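One simple way to picture state drift is to treat each remembered fact as having a freshness window. This is an illustrative sketch, not any vendor's mechanism; the TTL value and entry shape are assumptions for the example:

```python
import time

# Illustrative sketch of "state drift": an agent's accumulated memory goes
# stale as the world changes. Each entry carries a timestamp, and entries
# older than a TTL are treated as unreliable and due for refresh.
TTL_SECONDS = 3600  # assumed freshness window, purely for illustration

def is_stale(entry: dict, now: float, ttl: float = TTL_SECONDS) -> bool:
    return (now - entry["recorded_at"]) > ttl

memory = [
    {"fact": "vendor address on file", "recorded_at": time.time() - 7200},
    {"fact": "open ticket count",      "recorded_at": time.time() - 60},
]

now = time.time()
stale = [m["fact"] for m in memory if is_stale(m, now)]
print(stale)  # → ['vendor address on file']
```

Real systems face the harder version of this problem — deciding *which* facts can drift and detecting contradictions between tools — which is where platform-level visibility comes in.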

Agent reliability becomes a systems issue, and managing that drift may require more than just faster execution; it may require visibility and control.

It is this failure mode that platforms like Gemini Enterprise and AgentCore try to address.

Although this change is already happening, Gholami admitted that customers will dictate how they want to run and control any long-running agent.

“We’re going to learn a lot from customers who would use long-running agents, where they simply assign a task to these autonomous agents to go ahead and do it,” Gholami said. “Of course, there are checks and balances to doing it right, and the agent can come back and ask for more information.”

The new AI stack

What is increasingly clear is that the AI stack is separating into distinct layers that solve different problems.

AWS, and to some extent Anthropic and OpenAI, are optimizing for faster deployment. Claude managed agents abstract away much of the backend work needed to activate an agent, while the Agents SDK now includes support for sandboxes and a ready-to-use harness. These approaches aim to lower the barrier to getting agents up and running.

Google offers a centralized dashboard to manage identity, enforce policies, and monitor long-term behavior.

Companies likely need both.

As some professionals see it, their companies need to have a serious conversation about how much risk they are willing to take on.

“The main takeaway for enterprise technology leaders considering these technologies right now can be formulated this way: While the question of leveraging agent versus runtime is often perceived as build versus buy, this is primarily a risk management issue. If you can afford to run your agents through a third-party runtime because they don’t impact your revenue streams, that’s fine. Conversely, in the context of more critical processes, the latter option will be the only one to consider from a business perspective,” Rafael Sarim Oezdemir, head of growth at EZContacts, told VentureBeat in an email.

Rapid iteration allows teams to experiment and discover what agents can do, while centralized control adds a layer of trust. What companies need is to make sure they don’t get locked into systems designed exclusively for a single way of running agents.


