Altman Solon is the largest global telecommunications, media, and technology consulting firm. In this insight, we examine the rise of AI coding and opportunities for DevOps to be embedded into new ways of programming.
In early 2023, Andrej Karpathy, one of OpenAI’s founding engineers, tweeted, “The hottest new programming language is English.” While humorous, the tweet spoke to a more significant trend: agentic AI coding tools—systems that autonomously generate and modify code based on natural language prompts (also known as "vibecoding")—are accelerating software development like never before.
While AI coding has many advantages, vibecoders should proceed with caution. When you ask a generative AI tool to create code from plain English, it produces real, executable code, and the bugs are in English, too. Natural language is inherently ambiguous. The instructions that the AI translates into code carry uncertainties and nuances from everyday language, which can lead to unintended behavior. In other words, the code works as the AI interpreted it from your prompt, but if the prompt is ambiguous, those ambiguities (and the resulting bugs) are part of the code, too.
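To make the ambiguity concrete, consider a hypothetical prompt like "remove duplicates from the list of user IDs." Both functions below are faithful translations of that English, yet they behave differently; the function names and data are illustrative, not taken from any real tool:

```python
# Hypothetical prompt: "remove duplicates from the list of user IDs".
# Two faithful readings of the same English, with different behavior.

def dedupe_unordered(ids):
    # Reading A: only membership matters; set() may reorder the IDs.
    return list(set(ids))

def dedupe_ordered(ids):
    # Reading B: keep the first occurrence and preserve original order.
    seen = set()
    out = []
    for i in ids:
        if i not in seen:
            seen.add(i)
            out.append(i)
    return out

ids = [3, 1, 3, 2, 1]
print(dedupe_ordered(ids))  # [3, 1, 2]
# dedupe_unordered(ids) has the same members but no guaranteed order,
# a difference the original prompt never resolved.
```

If downstream code assumes the list is still in insertion order, reading A is a bug, yet nothing in the prompt rules it out.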
Rapidly generated code may bypass essential production considerations such as reliability, security, scalability, and compliance. In this rush, one pillar of modern engineering is being quietly sidelined.
DevOps.
And that is a big problem.
Agentic AI tools are reshaping how developers build software. Cursor, an AI code editor, is the fastest-growing SaaS product ever, outpacing ChatGPT's growth. Platforms like Lovable allow engineers to generate full features, reason across files, and scaffold applications from scratch with natural language prompts. GitHub Copilot, integrated into Visual Studio Code, helps autocomplete functions and boilerplate, while Cursor actively suggests improvements and refactors entire flows. Tools like Cursor and Lovable offer developers full end-to-end integrated development environments (IDEs). They are all fast, smart, and transformative, but their value is nuanced.
A recent Stanford study showed that, on average, Google engineers using AI tools completed tasks 21% faster. That is a real boost but also a far cry from GitHub Copilot’s often-quoted 55.8% productivity gain.
Why the gap? The GitHub study looked at freelancers solving basic tasks. Stanford evaluated experienced engineers doing enterprise-grade work. Notably, developers who coded 5+ hours a day saw a 32% boost in speed. Senior engineers benefited the most.
The takeaway? These tools do not replace experience. They amplify it.
While AI coding tools speed up development, they often skip crucial parts and rarely know where they are going.
For all their speed, AI coding tools often miss what matters most in code production: context, architecture, and operational realities. We have identified recurring challenges, backed by real-world examples from developers, that show how rushing code generation without full operational context can lead to severe reliability and security issues. To produce production-quality code from the outset, AI must be tightly integrated with DevOps practices and connected to system operations.
In the early days of software development, teams built and shipped large, monolithic applications using a waterfall model. Releases were infrequent, deployments were painful, and rollbacks were risky. The infamous “throw it over the wall” dynamic defined the culture: developers were rewarded for shipping features, while operations bore the burden of maintaining stability. Despite its flaws, the waterfall model worked—when systems were smaller, slower-moving, and easier to reason about.
The rise of cloud infrastructure and APIs ushered in a new era of distributed systems and microservices deployed across ephemeral environments. What used to be a quarterly release became hundreds of deployments per day.
Teams adopted agile methodologies to iterate faster, but the traditional dev-to-ops handoff could not keep up with the scale and complexity of these systems. DevOps emerged as the fix.
DevOps was not merely about new tools but a new contract between developers and operations. It introduced a framework built on practices such as continuous integration and delivery (CI/CD), infrastructure as code, automated testing, and continuous monitoring.
These practices allowed teams to move quickly without sacrificing system reliability.
Today’s AI tools can generate more code than many teams can effectively review, at a speed that traditional DevOps processes were never designed to match. Agentic AI is not merely autocompleting code; it is scaffolding entire backends, generating infrastructure code, and editing files across the repository.
While the results are impressive, there is a catch: AI lacks systemic awareness of the environments it targets.
Ask it to build a feature, and it might give you something that technically functions. But it will not know how that code behaves in your environment.
It will not consider scaling patterns, concurrency limits, graceful degradation, fallback policies, or multi-tenant isolation. Unless you explicitly guide it, AI will default to the fastest, flattest, and often most brittle path to “done.”
In short, AI is good at writing features, not systems.
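The gap between a feature and a system often comes down to a few missing lines. The sketch below is illustrative only: `make_flaky` stands in for any unreliable dependency, and the retry counts, delays, and fallback payload are assumptions, not prescriptions:

```python
import time

def make_flaky(fail_times):
    # Stand-in for an unreliable dependency (network, database, API).
    calls = {"n": 0}
    def fetch():
        calls["n"] += 1
        if calls["n"] <= fail_times:
            raise TimeoutError("upstream timed out")
        return {"status": "ok"}
    return fetch

def brittle(fetch):
    # The fastest path to "done": one attempt, no fallback.
    return fetch()

def resilient(fetch, retries=3, base_delay=0.01):
    # Bounded retries with exponential backoff, then graceful degradation.
    for attempt in range(retries):
        try:
            return fetch()
        except TimeoutError:
            time.sleep(base_delay * (2 ** attempt))
    return {"status": "degraded"}  # degrade instead of crashing

print(resilient(make_flaky(2)))   # {'status': 'ok'} after two transient failures
print(resilient(make_flaky(10)))  # {'status': 'degraded'}
```

`brittle` works perfectly in a demo and falls over on the first transient error in production; nothing in a plain-English feature request forces the resilient version.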
It does not reason about availability, cost containment, scalability under load, or compliance with policy. It does not understand shared infrastructure, sensitive data boundaries, or latency SLAs. It just writes code that looks right. Which is exactly what DevOps has spent a decade trying to prevent.
Today's AI tools are context blind. Without systems thinking embedded in the loop, every new feature they generate is a potential fragile point, a future outage, or a silent compliance risk.
And right now? DevOps is still sitting at the end of the line, trying to catch all of it—after the damage is already done.
DevOps long ago “shifted left,” integrating testing and security earlier in the development timeline and making the practice an integral part of software delivery. With the rise of AI coders, that focus is drifting away from operational oversight. It is time for DevOps to be embedded within the AI coding process itself, so that every line of generated code has operational intelligence built in from the start.
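Embedding that intelligence can start with mechanical checks in the pipeline. The sketch below is one assumed approach rather than a standard tool: it scans Python source for `requests`-style HTTP calls that omit an explicit timeout, the kind of gap AI-generated code routinely leaves:

```python
import ast

def calls_without_timeout(source: str) -> list[int]:
    """Return line numbers of requests.get/post/... calls lacking a timeout."""
    flagged = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            func = node.func
            if (isinstance(func.value, ast.Name)
                    and func.value.id == "requests"
                    and func.attr in {"get", "post", "put", "delete"}):
                if not any(kw.arg == "timeout" for kw in node.keywords):
                    flagged.append(node.lineno)
    return flagged

generated = (
    "import requests\n"
    "resp = requests.get('https://example.com/api')\n"           # no timeout
    "ok = requests.get('https://example.com/api', timeout=5)\n"  # explicit timeout
)
print(calls_without_timeout(generated))  # [2]
```

Run as a pre-merge gate, a check like this turns an operational convention into a hard requirement, whether the offending line was written by a human or generated by an AI.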
DevOps should not act as a safety net at the end of the pipeline. Instead, DevOps best practices should be integrated directly into the prompts, guardrails, and training of AI coding systems.
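One concrete approach is to make operational requirements part of every prompt rather than an afterthought. The guardrail list and `build_prompt` helper below are hypothetical illustrations of the idea, not any vendor's API:

```python
# Hypothetical guardrails prepended to every feature request sent to an
# AI coding assistant, so generated code starts from operational
# requirements instead of bolting them on after review.
OPERATIONAL_GUARDRAILS = [
    "Every external call sets an explicit timeout and bounded retries.",
    "Configuration and secrets come from the environment, never hardcoded.",
    "Every failure path emits structured logs and metrics.",
    "Degrade gracefully: return a fallback instead of crashing.",
]

def build_prompt(feature_request: str) -> str:
    # Append the guardrails as a non-negotiable requirements section.
    rules = "\n".join(f"- {r}" for r in OPERATIONAL_GUARDRAILS)
    return (f"{feature_request}\n\n"
            f"Non-negotiable operational requirements:\n{rules}")

print(build_prompt("Add an endpoint that returns a user's invoices."))
```

The same list can double as review criteria, so the standard the AI is prompted with is the standard its output is checked against.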
AI-powered coders have, in many cases, distanced development from operations, leaving a gap that must be bridged. DevOps must reassert itself by ensuring that the AI coder is not just delivering functional code, but code that is intrinsically aligned with operational practices. Every prompt, every generated function, and every deployment should be designed with robust, scalable, and secure operations in mind.
It is not about slowing down progress—it is about building fast and inherently resilient systems by default. Bringing DevOps back means integrating operational expertise where the code is born, setting the foundation for a future where development and operations work in unison.