Build Fast, Break Faster: AI coding booms while DevOps plays catch-up
Altman Solon is the largest global telecommunications, media, and technology consulting firm. In this insight, we examine the rise of AI coding and the opportunity to embed DevOps into these new ways of programming.
In early 2023, Andrej Karpathy, one of OpenAI’s founding engineers, tweeted, “The hottest new programming language is English.” While humorous, the tweet spoke to a more significant trend: agentic AI coding tools—systems that autonomously generate and modify code based on natural language prompts (also known as "vibecoding")—are accelerating software development like never before.
While AI coding has many advantages, vibecoders should proceed with caution. When you ask a generative AI tool to create code from plain English, it produces real, executable code, but the bugs are written in English, too. Natural language is inherently ambiguous, and the AI carries those ambiguities and nuances into the code it generates, which can lead to unintended behavior. In other words, the code works as the AI interpreted your prompt; if the prompt is ambiguous, those ambiguities, and the resulting bugs, become part of the code.
Rapidly generated code may bypass essential production considerations such as reliability, security, scalability, and compliance. In this rush, one pillar of modern engineering is being quietly sidelined.
DevOps.
And that is a big problem.
The rise of AI in coding
Agentic AI tools are reshaping how developers build software. Cursor, an AI code editor, is the fastest-growing SaaS product ever, outpacing ChatGPT's growth. Platforms like Lovable allow engineers to generate full features, reason across files, and scaffold applications from scratch with natural language prompts. GitHub Copilot, integrated into Visual Studio Code, helps autocomplete functions and boilerplate, while Cursor actively suggests improvements and refactors entire flows. Tools like Cursor and Lovable offer developers full end-to-end integrated development environments (IDEs). They are all fast, smart, and transformative, but their value is nuanced.
A recent Stanford study showed that, on average, Google engineers using AI tools completed tasks 21% faster. That is a real boost but also a far cry from GitHub Copilot’s often-quoted 55.8% productivity gain.
Why the gap? The GitHub study looked at freelancers solving basic tasks. Stanford evaluated experienced engineers doing enterprise-grade work. Notably, developers who coded 5+ hours a day saw a 32% boost in speed. Senior engineers benefited the most.
The takeaway? These tools do not replace experience. They amplify it.
While AI coding tools speed up development, they often skip crucial steps and rarely know where they are going.
For all their speed, AI coding tools often miss what matters most in code production: context, architecture, and operational realities. We have identified four recurring challenges backed by real-world examples from developers.
- Spaghetti code 2.0: AI can rapidly accelerate coding but may do so inconsistently and inefficiently. GitClear reported that in 2024 duplicated code increased eightfold while refactoring fell 40%—the first time cloning outpaced cleanup. Though clones may work, they bloat the codebase and sow defects when updates are not applied uniformly. This trend stems from AI’s narrow context window, which favors generating new snippets over reusing existing, proven functions.
- Break-fix loops: Developers report that AI-generated fixes, which appear correct on the surface, often lead to regression issues or remove critical functionality. For example, an AI-modified method might inadvertently strip essential logging and, in turn, trigger cascading data-handling bugs. Harness’ survey found that over two-thirds of developers now report spending more time debugging AI output, and a similar share report spending more time fixing security vulnerabilities in AI-generated code.
- Context-free confidence: Often, AI-generated code passes initial tests but fails in practice because it ignores compliance requirements and complex business rules or bypasses key security flows, such as role-based access control (RBAC). 65% of developers say that AI tools lack context about their codebase and internal architecture.
- Operational oversight: AI coding tools may generate code that appears complete, only for developers to discover post-deployment issues such as missing telemetry, hardcoded secrets, absent retry policies, overlooked configuration updates, or incomplete exception handling. As a result, adherence to DevOps best practices is weakening: 59% of developers reported encountering deployment errors related to these oversights at least half of the time.
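To make that last point concrete, here is a minimal sketch, assuming a hypothetical billing endpoint and a hypothetical PAYMENTS_API_TOKEN environment variable, of the operational details that AI-generated code frequently omits and teams end up retrofitting after deployment: externalized secrets, logging, timeouts, and retries.

```python
# Minimal sketch only: the service URL, environment variable, and function names
# are hypothetical stand-ins, not a real API.
import json
import logging
import os
import time
import urllib.request

logger = logging.getLogger("billing")

# Read the secret from the environment and fail fast if it is missing,
# instead of hardcoding it in the source.
API_TOKEN = os.environ["PAYMENTS_API_TOKEN"]


def fetch_invoice(invoice_id: str, retries: int = 3, backoff_s: float = 1.0) -> dict:
    """Fetch an invoice with a timeout, retries with backoff, and logging baked in."""
    url = f"https://payments.example.com/invoices/{invoice_id}"
    request = urllib.request.Request(url, headers={"Authorization": f"Bearer {API_TOKEN}"})
    for attempt in range(1, retries + 1):
        try:
            with urllib.request.urlopen(request, timeout=5) as response:
                logger.info("fetched invoice %s on attempt %d", invoice_id, attempt)
                return json.load(response)
        except Exception:
            # Log every failure instead of failing silently, then back off and retry.
            logger.exception("invoice fetch failed (attempt %d/%d)", attempt, retries)
            if attempt == retries:
                raise
            time.sleep(backoff_s * attempt)
```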
These challenges show that rushing code generation without full operational context can lead to severe reliability and security issues. To produce production-quality code from the outset, smarter AI must be tightly integrated with DevOps practices and connected to system operations.
DevOps cannot be an afterthought
In the early days of software development, teams built and shipped large, monolithic applications using a waterfall model. Releases were infrequent, deployments were painful, and rollbacks were risky. The infamous “throw it over the wall” dynamic defined the culture: developers were rewarded for shipping features, while operations bore the burden of maintaining stability. Despite its flaws, the waterfall model worked—when systems were smaller, slower-moving, and easier to reason about.
The rise of cloud infrastructure and APIs ushered in a new era of distributed systems and microservices deployed across ephemeral environments. What used to be a quarterly release became hundreds of deployments per day.
Teams adopted agile methodologies to iterate faster, but the traditional dev-to-ops handoff could not keep up with the scale and complexity of these systems. DevOps emerged as the fix.
The DevOps solution
DevOps was not merely about new tools but a new contract between developers and operations. It introduced a framework that includes:
- Continuous integration/continuous delivery (CI/CD): Code moves safely from development to production with automated tests, checks, and rollbacks.
- Observability as a baseline: Logs, metrics, traces, and alerting are baked into services from the start, so failures surface quickly (sketched below).
- Resilience-first culture: Blameless postmortems, SLOs, and incident response tied to operational health.
These practices allowed teams to move quickly without sacrificing system reliability.
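As an illustration of the observability baseline, the sketch below shows one simplified pattern: a decorator that attaches a log line and a latency metric to every call. The function names and the metrics sink are assumptions for illustration, not a particular vendor's API.

```python
# Hedged sketch of "observability as a baseline": telemetry comes with every call.
# emit_metric is a stand-in for whatever metrics pipeline the team actually uses.
import functools
import logging
import time

logger = logging.getLogger("orders")


def emit_metric(name: str, value: float) -> None:
    # Placeholder: forward to the real metrics backend in production.
    logger.info("metric %s=%.3f", name, value)


def observed(func):
    """Wrap a function so every call produces a log line and a latency metric."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            emit_metric(f"{func.__name__}.latency_ms", (time.perf_counter() - start) * 1000)
            return result
        except Exception:
            logger.exception("%s failed", func.__name__)
            emit_metric(f"{func.__name__}.errors", 1)
            raise
    return wrapper


@observed
def place_order(order_id: str) -> str:
    # Business logic would live here; the operational telemetry comes for free.
    return f"order {order_id} accepted"
```

When every service adopts a pattern like this, a deployment that degrades latency or raises error rates shows up in dashboards within minutes instead of surfacing as customer complaints.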
Enter AI
Today’s AI tools can generate more code than many teams can effectively review, at a speed that traditional DevOps processes were never designed to match. Agentic AI is not merely autocompleting code; it is scaffolding entire backends, generating infrastructure code, and editing files across the repository.
While the results are impressive, there is a catch: AI lacks systemic awareness of the environments it targets.
Ask it to build a feature, and it might give you something that technically functions. But it will not know:
- If that function needs to run across multiple zones for high availability.
- If it violates your compliance boundaries.
- If it makes synchronous calls that wreck your app’s performance under load.
- Or if it provisions cloud resources in a way that quietly explodes your bill.
It will not consider scaling patterns, concurrency limits, graceful degradation, fallback policies, or multi-tenant isolation. Unless you explicitly guide it, AI will default to the fastest, flattest, and often most brittle path to “done.”
That might mean:
- Building stateful services with no redundancy.
- Hardcoding secrets and skipping identity checks.
- Using third-party packages without license awareness.
- Calling expensive cloud services synchronously on every request.
- Failing silently in edge cases without logging or alerts.
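The last two items above lend themselves to a short sketch. Assuming a hypothetical pricing service, the snippet below caches an expensive upstream call and makes the degraded path loud and logged rather than silent:

```python
# Minimal sketch: fetch_price_from_upstream and the cache policy are illustrative
# assumptions, not a real service.
import logging
import time

logger = logging.getLogger("pricing")

_CACHE: dict[str, tuple[float, float]] = {}  # sku -> (price, fetched_at)
_TTL_SECONDS = 300.0
_FALLBACK_PRICE = 0.0  # sentinel meaning "price unavailable"


def fetch_price_from_upstream(sku: str) -> float:
    # Stand-in for the expensive cloud call a naive implementation would make
    # synchronously on every request.
    raise TimeoutError("upstream pricing service timed out")


def get_quote(sku: str) -> float:
    now = time.monotonic()
    cached = _CACHE.get(sku)
    if cached and now - cached[1] < _TTL_SECONDS:
        return cached[0]  # serve from cache instead of hitting upstream every time
    try:
        price = fetch_price_from_upstream(sku)
        _CACHE[sku] = (price, now)
        return price
    except Exception:
        # Degrade gracefully, but never silently: log (and alert) on the fallback path.
        logger.exception("price lookup failed for %s; serving fallback", sku)
        return cached[0] if cached else _FALLBACK_PRICE
```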
In short, AI is good at writing features, not systems.
It does not reason about availability, cost containment, scalability under load, or compliance with policy. It does not understand shared infrastructure, sensitive data boundaries, or latency SLAs. It just writes code that looks right. Which is exactly what DevOps has spent a decade trying to prevent.
Today's AI tools are context blind. Without systems thinking embedded in the loop, every new feature they generate is a potential fragile point, a future outage, or a silent compliance risk.
And right now? DevOps is still sitting at the end of the line, trying to catch all of it—after the damage is already done.
Bringing DevOps back
DevOps has long “shifted left,” integrating testing and operational checks earlier in the development timeline and making the practice an integral part of software development. With the rise of AI coders, that focus is drifting away from operational oversight. It is time for DevOps to be embedded within the AI coding process so that every line of generated code has operational intelligence built in from the start.
DevOps should not be relegated to a downstream safety net. Instead, DevOps best practices should be integrated into the prompts and training of AI systems. Consider these approaches:
- Infra-aware prompts: Include explicit instructions for the AI regarding operational details—such as logging, error reporting, security configurations, and deployment scaffolding—so that the generated code already embodies these operational prerequisites. Tools like Amazon Q Developer exemplify this approach by being fine-tuned with AWS best practices, ensuring that when you ask for code, it factors in real-world infrastructure needs right from the start.
- Ops integration: Operational controls must be part of the AI coding process itself rather than left to post-generation scanners. The system should generate code with built-in safety measures because operational requirements are part of its training data.
- Embedded policy-as-code: Establish strict, machine-readable policies that become a part of the prompt engineering process. This ensures the AI coder understands what “production readiness” entails—from cost and scalability to security and compliance—without having to rely on a post-generation review. Platforms like Spacelift demonstrate how policy-as-code can be integrated effectively by embedding continuous governance into the CI/CD pipeline. The next step is to incorporate that capability directly into the AI coder so that any deviation from the established policies is minimized from the moment the code is generated.
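As a minimal sketch of the policy-as-code idea, the gate below screens generated code against a few machine-readable rules before it ever reaches review. The regex rules are deliberately simplistic stand-ins for a real policy engine such as Open Policy Agent, and the snippet being checked is hypothetical:

```python
# Hedged sketch: naive regex policies standing in for a real policy engine.
import re

POLICIES = {
    "no_hardcoded_secrets": re.compile(r"(api[_-]?key|secret|password)\s*=\s*['\"]", re.IGNORECASE),
    "no_wildcard_iam_action": re.compile(r"\"Action\"\s*:\s*\"\*\""),
    "no_public_bucket_acl": re.compile(r"acl\s*=\s*['\"]public-read['\"]", re.IGNORECASE),
}


def check_generated_code(source: str) -> list[str]:
    """Return the names of the policies the generated code violates."""
    return [name for name, pattern in POLICIES.items() if pattern.search(source)]


# Example: a violating snippet is rejected before it ever reaches a pull request.
snippet = 'api_key = "sk-live-1234"\nbucket_acl = "public-read"'
violations = check_generated_code(snippet)
if violations:
    raise SystemExit(f"generated code rejected; violated policies: {violations}")
```

The same machine-readable rules can be injected into the prompt so the AI coder avoids violations in the first place, then enforced again in the pipeline as a backstop.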
AI-powered coders have, in many cases, distanced development from operations, leaving a gap that must be bridged. DevOps must reassert itself by ensuring that the AI coder is not just delivering functional code, but code that is intrinsically aligned with operational practices. Every prompt, every generated function, and every deployment should be designed with robust, scalable, and secure operations in mind.
It is not about slowing down progress—it is about building fast and inherently resilient systems by default. Bringing DevOps back means integrating operational expertise where the code is born, setting the foundation for a future where development and operations work in unison.
Leave your contact information below to connect with our technology experts about how AI-powered coding tools are reshaping DevOps and accelerating software development.