Bringing AI to production is forcing a rethink of business infrastructure



Brought to you by Nutanix


Across industries, organizations are focused on how to move from AI pilots, proofs of concept, and cloud-based experimentation to deployment at scale, in real workloads, for real users, and in real business environments. VentureBeat spoke with Tarkan Maner, president and chief commercial officer of Nutanix, and Thomas Cornely, executive vice president of product management, about what that transition requires and what it will take to get it right.

“AI in general is changing everything we do, not just in technology, but across all industry verticals, from regulated industries like banking, healthcare, government and education to unregulated industries like manufacturing and retail,” Maner said. “As a full-platform company, we welcome this change. It is creating more opportunities for us as a company to better serve our customers as we move forward.”

But there is still a practical gap between experimentation and production, Cornely said.

“It’s one thing to do an experiment, to make a prototype. And another thing is to take that prototype and deploy it to 10,000 employees,” he explained. “We went from people focusing on training models to chatbots and now agents, where the demand and pressures on AI infrastructure are growing exponentially.”

Agentic AI introduces a new layer of business complexity

The rise of agentic AI is what makes this transition especially consequential. These systems introduce multi-step workflows between applications and data sources, along with a degree of autonomy that creates new operational demands.

Enterprises now have to deal with multiple agents running simultaneously, unpredictable and real-time workloads, and the need to coordinate access to infrastructure between teams.

“OpenClaw is now making it very easy for anyone to create agents and run them,” Cornely said. “What you want is for those agents to run on-premises with your data. You need to have the right structures around them to protect the company from what an agent might do.”

As these systems become more autonomous, the challenge extends beyond how they operate to how they interact with enterprise data, systems, and equipment.

AI is augmenting human work, not replacing it

Agentic AI is fundamentally an amplifier of human capability rather than a substitute for it, Maner said. The goal for companies is not to eliminate human work, but to find the right balance between human decision making, AI-powered automation, and agent-based workflows.

“We believe there will be love, peace and harmony between AI, agent tools, robotic systems and human capital,” Maner said. “That harmony can be optimized for better outcomes for businesses, governments and public sector organizations, if the right vendors provide the right tools and services.”

How companies are starting to use AI at scale

In practice, the move from experimentation to real-world implementation is where the challenges become most visible. Despite the push, many organizations are still working out how to scale AI beyond their initial use cases.

In doing so, organizations quickly run into practical limitations. Many start in the cloud due to easy access to resources and services, but practical considerations such as data, governance and control, and costs quickly come to the fore.

The cloud can be used for experimentation, with applications moving back on-premises as they approach production, on platforms that address security and cost.

Use cases gaining the most momentum include document search and knowledge retrieval, security and predictive threat detection, software development and coding workflows, and customer service and support operations. In the security space, banking and other clients in Europe and the US are deploying AI-powered tools, including facial recognition and predictive threat detection. Meanwhile, in customer service there is a growing focus on end-to-end, 360-degree customer engagement, from pre-sales through post-sales support.

AI transformation is already underway in specific sectors

Across industries, the move from experimentation to actual implementation is already taking shape in different ways. In retail, AI is transforming store operations: cameras and robotics enable targeted aisle marketing at the point of purchase decision, cashierless checkouts are replacing traditional POS systems, and freed-up human capital is being redeployed to merchandising and back-office functions.

In healthcare, Nutanix works with customers on applications spanning diagnosis, treatment, remote health and hospital operations, with cloud partners including AWS and Azure. In manufacturing and logistics, the transformation is equally significant.

The operational challenges of scaling enterprise AI

As AI use cases increase, businesses face a new class of operational challenges. Managing multiple AI workloads and agents, coordinating infrastructure access across teams, ensuring security and governance, and integrating AI systems with existing business processes are now top concerns for both business and IT leaders.

The gap between AI developers seeking speed and access, and infrastructure teams responsible for security, uptime, and governance, is one of the defining challenges of this moment.

“Now I have agents and they are all going to fight to get access to resources to solve my problems,” Cornely said. “What we want now is an infrastructure that allows us to set limitations and govern resources.”
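As an illustration of the kind of resource governance Cornely describes (this is a generic Kubernetes pattern, not part of Nutanix's announcement), a namespace-level quota can cap what one team's agents may consume on a shared cluster:

```yaml
# Illustrative sketch: a Kubernetes ResourceQuota bounding one team's
# agent workloads on shared infrastructure. Names are hypothetical.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: agent-team-quota
  namespace: agent-team-a          # hypothetical per-team namespace
spec:
  hard:
    requests.cpu: "32"             # total CPU the team's agents may request
    requests.memory: 128Gi         # total memory across the namespace
    requests.nvidia.com/gpu: "4"   # cap GPU access per team
    pods: "50"                     # bound how many agents can run at once
```

Quotas like this are one common way platform teams grant developers self-service access while preventing any one group's agents from starving the rest of the cluster.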

The AI factory: a shared platform for AI production

These challenges are driving demand for what Maner and Cornely describe as the AI factory: a shared infrastructure environment that supports multiple users and workloads simultaneously, enabling both experimentation and production while balancing developer agility with enterprise governance.

At GTC 2026, Nutanix announced the Nutanix Agent AI Solution, a complete platform encompassing core infrastructure, Kubernetes-based container services running on a topology-aware hypervisor, and advanced services for creating and governing agents.

“We are launching a complete platform, from core infrastructure to PaaS and advanced PaaS services to the entire management framework for your AI factories,” Cornely said. “It will really enable self-service for the teams that will be building these applications in the enterprise.”

Hybrid environments are essential for enterprise AI strategy

Operating this type of environment requires flexibility throughout the infrastructure. Hybrid infrastructure is not a preference but a requirement: some workloads will always run in the public cloud, while others must remain on-premises due to security requirements, regulatory compliance, data sovereignty, or competitive and intellectual property considerations.

“Especially in regulated industries, as sovereignty becomes a bigger issue, data gravity becomes a bigger issue, security, and also a lot of competitive differentiation in the industry, will depend on what the company wants for its own intellectual property,” Maner said.

This is the basis of Nutanix’s platform position, he added.

“We are the perfect harmony, bringing those applications, that data and all the optimization for these use cases end-to-end, from on-premises to off-premises and in a hybrid mode,” he said. “Do it not just in one cloud, but in multiple clouds.”

That flexibility also extends to the broader ecosystem. Nutanix works with hyperscalers including AWS, Azure, and Google Cloud, as well as regional service providers and emerging neoclouds. Nutanix offers neoclouds a complete software stack to run their own clouds and deliver advanced AI services, giving enterprise customers already running Nutanix a simple extension of compute, networking and AI capabilities.

Maner described the arrangement as a win for both sides. For businesses, it means simplified access to hybrid AI services. For neoclouds, it means a proven platform to build on. Everything is automated and secure by default, Cornely added.

“All those governance problems that arise now with agent AI are the same problems that we have been solving for the last 16 years for all the other applications that run in the cloud,” he said.

From pilot to production: operationalizing AI across the enterprise

Ultimately, the goal is not to run a successful AI pilot, but to operationalize AI in real-world use cases, manage infrastructure as a shared resource, support collaboration between infrastructure teams and AI developers, and scale from initial projects to enterprise-wide deployment.

“There’s a huge gap right now between people building AI applications, AI engineers, agent AI developers, and classic infrastructure teams,” Cornely said. “They need tools to enable infrastructure teams so they can support their AI engineers. That’s what we offer with our agent AI solution.”


Sponsored articles are content produced by a company that pays to publish or has a business relationship with VentureBeat, and are always clearly marked. For more information, contact sales@venturebeat.com.


