
Many people tried AI tools and were not impressed. I get it: many demos promise magic, but in practice, the results can seem disappointing.
That’s why I want to write this not as a futuristic prediction, but from lived experience. For the past six months, I’ve prioritized AI in my engineering organization. I’ve talked before about the system behind that transformation: how we build workflows, metrics, and guardrails. Today I want to move away from the mechanics and talk about what I have learned from that experience, about where our profession is headed when software development is turned upside down.
Before we do, a couple of numbers to illustrate the magnitude of the change. Subjectively, it feels like we are moving twice as fast. Objectively, this is how performance evolved. Our engineering team’s total headcount went from 36 at the beginning of the year to 30. So you get ~170% of the output with ~83% of the headcount, which matches the subjective ~2x.
Zooming in, I picked a couple of our senior engineers who started the year in a more traditional software engineering process and ended it AI-first. (Dips correspond to vacations and off-sites):
Note that our PRs are tied to JIRA tickets, and the average reach of those tickets didn’t change much over the year, so this is as good a proxy as the data can give us.
Qualitatively, if we look at value to the business, I see an even bigger increase. One reason is that when we started the year, our QA team couldn’t keep up with the speed of our engineers. As a company leader, I was unhappy with the quality of some of our early releases. As we moved through the year and built unit tests and end-to-end testing into our AI workflows, our coverage improved, the number of bugs decreased, users became fans, and the business value of engineering work multiplied.
From great design to rapid experimentation
Before AI, we spent weeks perfecting user flows before writing code. It made sense when change was expensive. Agile helped, but even then, testing multiple product ideas was too expensive.
Once we went AI-first, that trade-off disappeared: the cost of experimentation collapsed. An idea could go from whiteboard to working prototype in a day: from idea to AI-generated product requirements document (PRD), to AI-generated technical specifications, to AI-assisted implementation.
This showed up in some remarkable transformations. Our website, critical to our acquisition and inbound demand, is now a product-scale system with hundreds of custom components, all designed, developed, and maintained directly in code by our creative director.
Now, instead of validating with slides or static prototypes, we validate with functional products. We test ideas live, learn faster, and release major updates every two months—a pace I couldn’t imagine three years ago.
For example, Zen CLI was first written in Kotlin, but then we changed our minds and rewrote it in TypeScript without losing release speed.
Instead of mocking up features, our UX designers and product managers code them. And when the release-time crunch hit everyone, they sprang into action and fixed dozens of small details with production-ready PRs to help us launch a great product. This included an overnight UI design change.
From coding to validation
The next change came where I least expected it: validation.
In a traditional organization, most people write code and a smaller group tests it. But when AI drives much of the implementation, the leverage point moves. The real value lies in defining what “good” looks like, in making correctness explicit.
We support over 70 programming languages and countless integrations. Our QA engineers have become system architects. They build AI agents that generate and maintain acceptance tests directly from requirements. And those agents are embedded in coded AI workflows that allow us to achieve predictable engineering results as a system.
This is what “shift left” really means. Validation is not a standalone function; it is an integral part of the production process. If the agent cannot validate its work, it cannot be trusted to generate production code. For QA professionals, this is a time of reinvention: with the right upskilling, their work becomes a critical enabler and accelerator of AI adoption.
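As a rough illustration of that gating idea, here is a minimal TypeScript sketch (all names, such as runAcceptanceTests and generateValidated, are hypothetical, not our actual workflow code): generated output only escapes the loop once it passes acceptance checks derived from requirements, with failures fed back to the agent on retry.

```typescript
// Hypothetical sketch: a validation gate in an AI coding workflow.
// Generated code is accepted only after it passes acceptance checks;
// failures are fed back to the agent for the next attempt.

type TestResult = { passed: boolean; failures: string[] };

interface CodeAgent {
  generate(task: string, feedback: string[]): string;
}

// Stand-in for a real test harness: here "acceptance" just means the
// output mentions every required behavior keyword.
function runAcceptanceTests(code: string, required: string[]): TestResult {
  const failures = required.filter((r) => !code.includes(r));
  return { passed: failures.length === 0, failures };
}

// Generate, validate, and retry until the checks pass or the attempt
// budget runs out; unvalidated output never leaves this function.
function generateValidated(
  agent: CodeAgent,
  task: string,
  required: string[],
  maxAttempts = 3
): string | null {
  let feedback: string[] = [];
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const code = agent.generate(task, feedback);
    const result = runAcceptanceTests(code, required);
    if (result.passed) return code;
    feedback = result.failures;
  }
  return null;
}
```

The design point is the null return: in this model there is simply no code path by which ungated AI output reaches production.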
Product managers, technology leaders, and data engineers now also share this responsibility, because defining correctness has become a cross-functional skill, not a role limited to QA.
From diamond to double funnel
For decades, software development followed a “diamond” shape: a small product team handed off to a large engineering team, which then narrowed again through QA.
Today, that geometry is changing. Humans engage most deeply at the beginning (defining intent, exploring options) and again at the end, validating results. The middle, where the AI runs, is faster and tighter.
It’s not just a new workflow; it is a structural inversion.
The model looks less like an assembly line and more like a control tower. Humans set direction and constraints, AI handles execution at high speed, and people step in to validate results before decisions reach production.
Engineering at a higher level of abstraction
Every major leap in software raised our level of abstraction: from punch cards to high-level programming languages, from hardware to the cloud. AI is the next step. Our engineers now work at a meta layer: orchestrating AI workflows, fine-tuning agent instructions and skills, and defining guardrails. Machines build; humans decide the what and the why.
Teams now routinely decide when it’s safe to merge AI-generated code without review, how much agent autonomy to allow in production systems, and what signals actually indicate correctness at scale: decisions that simply didn’t exist before.
And that’s the paradox of AI-first engineering: it feels less like coding and more like thinking. Welcome to the new era of human intelligence, powered by AI.
Andrew Filev is founder and CEO of Zencoder
