Recent reports on the failure rates of AI projects have raised uncomfortable questions for organizations that invest heavily in AI. Much of the debate has focused on technical factors like model accuracy and data quality, but after watching dozens of AI initiatives launch, I’ve noticed that the biggest opportunities for improvement are often cultural, not technical.
Internal projects that are struggling tend to share common problems. Engineering teams create models that product managers don’t know how to use. Data scientists build prototypes that operations teams struggle to maintain. And AI applications sit unused because the people they were built for were never involved in deciding what “useful” really meant.
By contrast, organizations that achieve significant value with AI have figured out how to create the right kind of collaboration across departments and have established shared accountability for results. Technology matters, but so does organizational readiness.
Here are three practices I’ve observed that address cultural and organizational barriers that can impede AI success.
When only engineers understand how an AI system works and what it is capable of, collaboration breaks down. Product managers can’t evaluate tradeoffs they don’t understand. Designers can’t create interfaces for capabilities they can’t articulate. Analysts can’t validate results they can’t interpret.
The solution is not to turn everyone into data scientists. It’s to help each role understand how AI applies to their specific job. Product managers need to understand what types of content, predictions, or recommendations are realistic given the available data. Designers need to understand what AI can actually do so they can design features that users will find useful. Analysts need to know which AI results require human validation and which can be trusted.
When teams share this working vocabulary, AI stops being something that happens in the engineering department and becomes a tool that the entire organization can use effectively.
The second challenge involves knowing where AI can act on its own and where human approval is required. Many organizations resort to extremes, either gating every AI decision behind human review or allowing AI systems to operate without guardrails.
What is needed is a clear framework that defines where and how AI can act autonomously. This means setting rules up front: Can the AI approve routine configuration changes? Can it recommend schema updates but not implement them? Can it deploy code to test environments but not to production?
These rules must include three elements: auditability (can you trace how the AI arrived at its decision?), reproducibility (can you recreate the decision path?), and observability (can teams monitor AI behavior as it happens?). Without this framework, you either slow down to the point where AI provides no advantage, or you create systems that make decisions that no one can explain or control.
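To make this concrete, rules like the ones above can be sketched as a small policy table with an append-only audit trail. Everything here — the `POLICY` table, the `decide` function, and the action names — is a hypothetical illustration, not a reference to any particular product or standard:

```python
import time

# Hypothetical policy table: which actions an AI system may take on its
# own, which it may only recommend, and which always require a human.
POLICY = {
    "approve_routine_config_change": "autonomous",
    "update_schema": "recommend_only",
    "deploy_to_test": "autonomous",
    "deploy_to_production": "human_required",
}

AUDIT_LOG = []  # append-only trail for auditability and reproducibility

def decide(action: str, rationale: str) -> str:
    """Check an AI-proposed action against the policy and log the decision path."""
    mode = POLICY.get(action, "human_required")  # unknown actions default to the safe side
    AUDIT_LOG.append({
        "timestamp": time.time(),
        "action": action,
        "mode": mode,
        "rationale": rationale,  # trace how the AI arrived at its proposal
    })  # observability: teams can monitor this log stream as it happens
    return mode

# A schema change is surfaced as a recommendation, never auto-applied:
print(decide("update_schema", "detected a new field in upstream events"))
# → recommend_only
```

Defaulting unknown actions to `human_required` is the design choice that keeps such a framework fail-safe as new AI capabilities appear.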
The third step is to codify how different teams actually work with AI systems. When each department develops its own approach, the result is inconsistency and duplicated effort.
Cross-functional playbooks work best when teams develop them together rather than having them imposed from above. These playbooks answer specific questions such as: How do we test AI recommendations before putting them into production? What is our fallback procedure when an automated deployment fails? Do we hand off to human operators or try a different approach first? Who should be involved when we override an AI decision? How do we incorporate feedback to improve the system?
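Answers like these can even be encoded so the fallback path is executable rather than buried in a wiki page. The function below is a minimal sketch under assumed names and an arbitrary one-retry policy, chosen purely for illustration:

```python
def handle_failed_deployment(attempt: int, max_retries: int = 1) -> str:
    """Illustrative playbook rule: retry an automated deployment a limited
    number of times, then hand off to human operators (assumed policy)."""
    if attempt <= max_retries:
        return "retry_with_different_approach"
    return "hand_off_to_human_operators"

print(handle_failed_deployment(attempt=1))  # → retry_with_different_approach
print(handle_failed_deployment(attempt=2))  # → hand_off_to_human_operators
```

Writing the rule down in code forces the cross-functional conversation the playbook exists to capture: operations, product, and engineering all have to agree on what counts as a retry and when a human takes over.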
The goal is not to add bureaucracy. It’s about ensuring everyone understands how AI fits into their current job and what to do when results don’t match expectations.
Technical excellence in AI remains important, but companies that overindex on model performance while ignoring organizational factors are setting themselves up for avoidable challenges. The successful AI implementations I’ve seen treat cultural transformation and workflows with the same seriousness as the technical implementation.
The question is not whether your AI technology is sophisticated enough. It’s about whether your organization is ready to work with it.
Adi Polak is director of developer experience advocacy and engineering at Confluent.