The future is AC/DC: the Agent Centric Development Cycle


Tariq Shaukat

CEO


The era of Continuous Integration, with its familiar processes and workflows, is rapidly coming to an end. Traditional CI relies on developers making small, frequent, iterative commits. Today, the “continuous” part is changing. Agents do not work like that. They operate in asynchronous batches, often working for hours before dropping massive, complex payloads of code. We are seeing the emergence of a new paradigm that will fundamentally reshape how we create software: Agent Centric Development.

For good reason, there is a lot of discussion and adoption of code generation tools and agents. They have undeniable strengths that are transforming how developers do their job. There is a growing consensus that developers will be focusing more on design, architecture, and planning, and then on monitoring, verification, and review.

Less discussed are the changes required to ensure that software development agents are operating in a trustworthy, consistent, transparent, and responsible manner. Even in the best hands, AI slop is pervasive. Our research has demonstrated that, left unchecked, coding models generate verbose, complex, buggy, and insecure code.

Agentic development requires a strong, deliberate, and intentional set of practices and a well-constructed set of tools. These provide the guardrails, transparency, assurances, and verification necessary to build world-class software. We call this the Agent Centric Development Cycle (AC/DC).

Yes, it’s electrifying!

This new model operates on a different set of steps than the legacy CI model. Because the continuous human cadence is gone, agents work for longer periods before they are ready to commit code. Pull requests are vastly larger and more complex. Small errors an agent makes early in its process compound, making the process inherently unstable.

Everything should start with a thoughtful, detailed, specific plan. What are the specifications? What are the desired outcomes? How do you expect the solution to be used? How scalable does it need to be? Well-crafted plans have always been important in software development, but now, with agents, they are the essential prerequisite that powers the entire cycle.

Building on that plan, we define the Agent Centric Development Cycle as having 4 discrete stages:

  • Guide: Agents need to understand the canvas on which they are being asked to create, so that the output fits with what the developer and organization require.
  • Generate: LLM-based code generation tools generate the code they believe will achieve the desired outcome, in the right context.
  • Verify: The agent has to be specifically and deliberately required to check that the code meets the necessary standards, including that it really achieves the desired outcomes and is reliable, maintainable, and secure.
  • Solve: Any issues that are identified are provided to a code repair agent to fix.

This process then continues again, with the lessons from the Solve and Verify stages feeding into the Guide so that the next agentic steps learn from the previous loop.
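To make the shape of the cycle concrete, here is a minimal sketch in Python. Every function below is a hypothetical stand-in, not a real agent or Sonar API; the point is the control flow, with lessons from Verify and Solve feeding the next Guide round.

```python
# A minimal, hypothetical sketch of the Guide-Generate-Verify-Solve loop.
# The stage functions are stand-ins, not a real agent API; they only
# illustrate how lessons from Verify and Solve feed the next Guide round.

def guide(plan, lessons):
    # Guide: assemble context -- the plan plus lessons from earlier rounds.
    return {"plan": plan, "constraints": list(lessons)}

def generate(context):
    # Generate: stand-in for an LLM producing a candidate change.
    return {"code": f"solution for {context['plan']}",
            "fixed": set(context["constraints"])}

def verify(change, known_issues):
    # Verify: report any known issue the change has not yet addressed.
    return [i for i in known_issues if i not in change["fixed"]]

def solve(change, issues):
    # Solve: a repair agent marks the reported issues as fixed.
    change["fixed"].update(issues)
    return change

def acdc_cycle(plan, known_issues, max_rounds=3):
    lessons = []
    for _ in range(max_rounds):
        context = guide(plan, lessons)
        change = generate(context)
        issues = verify(change, known_issues)
        if not issues:
            return change  # verified: ready to leave the sandbox
        change = solve(change, issues)
        lessons.extend(issues)  # lessons feed into the next Guide round
    raise RuntimeError("did not converge; escalate to a developer")
```

In this toy version the loop converges in two rounds: the first Verify surfaces an issue, Solve repairs it, and the lesson carried into the second Guide round produces a change that verifies cleanly.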

The development canvas is evolving

The 4-stage AC/DC cycle does not fit neatly into traditional tooling. IDEs are less relevant, and the pull request, as noted earlier, happens much less frequently. At a high level, three major environmental changes become prevalent in the AC/DC model.

First, the AC/DC steps, Guide-Generate-Verify-Solve, happen in a sandbox environment. Agentic reasoning loops run for extended periods and tackle larger problems, and they do this before committing code to your main codebase. In fact, for smaller codebases, you might simply make a copy of the codebase and iterate on that copy in its entirety. (While complex enterprise microservices and data states make fully isolated sandboxing more difficult, the principle remains: intense validation happens in isolation.) Developers manage and monitor that sandbox. Only when there is a verified, high-quality product does the main codebase get modified.

This is an enormous change with a lot of implications. It is much harder to understand the changes being made to the codebase, presenting long-term risks and challenges. Security issues, for example, could creep in unnoticed when 40,000 lines of code are being written vs. 300. Also, in this model, developers are responsible for shipping something that works, not just code. Activities that used to happen after the Build stage of CI/CD, such as dynamic testing, will happen in the sandbox and be the developer’s responsibility. This is not the normal “shift left.” It is more akin to being in the Matrix: “there is no right” inside the traditional pipeline. Because the continuous micro-commit is dead, production-grade validation must happen in an agentic sandbox environment, before the massive code payload is submitted.

The second major change is that these steps, Guide-Generate-Verify-Solve, happen at two different levels in this process: the inner loop and the outer loop.

  • The inner loop: Guide-Generate-Verify-Solve happens in each agentic reasoning loop, ensuring that the agent stays on track as it methodically works to deliver on the plan. These are essentially “micro” adjustments made continuously, using guardrails, prompt traces, and rapid verification analyses.
  • The outer loop: Guide-Generate-Verify-Solve happens once the agent has ‘finished’ its work. Here, more comprehensive verification occurs and, often, the agent will have to fix larger-scale issues that are identified.
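The two levels can be sketched roughly as a nested structure: cheap, high-signal checks on every reasoning step, then comprehensive verification of the finished work. The check functions below are illustrative stand-ins under assumed names, not a real toolchain.

```python
# Hypothetical sketch of the two verification levels; the check functions
# are illustrative stand-ins, not a real toolchain API.

def quick_checks(step):
    # Inner loop: cheap checks run on every incremental agent step
    # (lint-style analysis, guardrail checks, prompt-trace sanity checks).
    return [f"guardrail violation in {step}"] if "unsafe" in step else []

def full_verification(steps):
    # Outer loop: comprehensive verification once the agent believes it is
    # done (deterministic analysis, AI code review, sandbox tests).
    return [f"issue: {s}" for s in steps if "todo" in s]

def run_agent(planned_steps):
    accepted = []
    for step in planned_steps:                     # inner loop: per reasoning step
        if quick_checks(step):
            step = step.replace("unsafe", "safe")  # micro-correction in place
        accepted.append(step)
    return full_verification(accepted)             # outer loop: judge the finished work
```

The design point the sketch tries to capture: inner-loop checks must be fast enough to run on every step, while the outer loop can afford slower, more comprehensive analyses because it runs once per finished change.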

Lastly, the Agentic Development toolchain will typically include many code generation tools, depending on what developers believe are the best platforms for their specific use cases: Cursor for some, Claude Code, Devin, Codex, or GitHub Copilot for others. However, the Guide-Verify-Solve stages are more effective when the company standardizes each of them: a consistent approach to verification across all tools, and a common engine that supplies context to Guide all of the generation tools.

Guide-Verify-Solve: the heart of the matter

A lot of people are talking about code generation. Guide-Verify-Solve is equally, if not more, critical to master.

  • Guide: Guiding is not just about pointing to a codebase; it's about defining the playing field and setting the rules of engagement. Agents need to be told the context and constraints that shape their work. This is critical in both greenfield and brownfield environments. Agents need to know, of course, what the specifications are. But that’s just the start. They need to understand the standards, regulations, guidelines, and guardrails you have established for your codebase, along with the current and desired architecture.
  • Verify: AI makes mistakes. Lots of them. Unlike developers, these models rarely make basic mistakes. Instead, they make very complex, hard-to-find mistakes. And the models themselves are both unpredictable (due to their probabilistic nature) and very sensitive to changes in their training data and environments. A prompt that worked well yesterday has no guarantee of working today. Given these stark realities, verification must be thorough, transparent, and consistent. As noted above, we have to provide feedback to the agent inside the reasoning processes themselves, and then to the developer accountable for the end result.
    • In the inner loop, the primary purpose is to allow the agent to self-verify, giving it a continuous evaluation of how it is doing and the ability to course-correct quickly. Typically these checks consist of frequent analysis of the generated code to find issues; evaluation of the prompt traces to catch problems early; and on-the-fly verification of business logic using AI. The goal is to give high-signal, low-noise feedback to the agent so that it can self-correct.
    • In the outer loop, once the agent believes it has constructed a good solution, we must then verify that the agent's work achieves the intended functional and non-functional outcomes, which could include internal standards and compliance requirements. This is where processes like code verification and code review come into play, but in an agent-driven world, we believe this will also see the “right” side of the traditional SDLC disappear and reappear inside the sandbox. The developer is responsible and accountable for shipping something that works.
  • Solve: In both the inner and outer loops, problems are inevitable. The "Solve" phase is the automatic debugging and remediation phase, driven by verification feedback. Armed with a deep understanding of the application's structure and the results from the verification phase, corrections can be made. And, unlike in most traditional processes, a failure is not just a bug to be patched; it's a lesson that refines the next iteration, making the entire system more resilient. The issues and their solutions feed back into the Guide process for the next round.

Agent Centric Development Cycle (AC/DC): the toolchain

Many of the traditional SDLC solutions will need to evolve, quickly, or be increasingly irrelevant as agents take over the development process. Critical components of the new AC/DC cycle include:

  • Agentic Development Sandbox: An environment in which the Guide-Generate-Verify-Solve loops can work for all your agents, regardless of what agent and code generation partners you use.
  • Dynamic Context Engine: There are two critical parts of the Dynamic Context Engine. First, you have to have tools that can provide useful context—for example, a thorough evaluation of your codebase architecture or crisp, transparent, and specific articulation of standards and guardrails. Second, you need to determine which pieces of context should be provided in each circumstance. Too much context, too little context, or incorrect context can all degrade performance instead of enhancing it.
  • Trust and Verification Platform: Software development worked because, generally speaking, companies trusted their developers to write good code and to review that code. Verification was important, but many treated it as optional because trust was so high.

    Agent-centric development breaks this compact. AI-assisted and agentic workflows create code at such volume and speed that pull requests are 10x larger than in the past, or more. Truly understanding the new code is almost impossible, the models themselves are black boxes, and the output is very sensitive to the input. Verification is mandatory in AC/DC, not optional.
    Like context, Verification is an area that can become problematic quickly. Many of the ‘easy’ approaches to verification, such as using LLMs to check their own work, can generate a high level of false positives and are neither explainable nor consistent. While they can be helpful, these inherently imprecise approaches have to be grounded in deterministic, comprehensive, transparent analyses to maximize signal and meet enterprise standards. They have to make it clear to the developer, who is accountable for the work, precisely what was checked, what worked, and what did not.

    There are many valuable sources of verification data. Deterministic code analysis covering reliability, maintainability, complexity, and security (such as that provided by SonarQube) is a vital component. LLM-based AI Code Review is another. Inside of the agentic sandbox, the code can be tested and observability traces generated to provide additional information. A comprehensive Verification Platform aggregates and intermediates these signals, and ultimately will pass judgment on the end result.
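As a hedged sketch of how such a platform might aggregate and intermediate these signals (the `Finding` type, source names, and threshold rules are assumptions for illustration, not a real product API):

```python
# Hypothetical sketch of a verification platform aggregating signals from
# deterministic analysis, AI code review, and sandbox tests. The Finding
# type and the filtering rules are invented for illustration.
from dataclasses import dataclass

@dataclass
class Finding:
    source: str          # e.g. "deterministic-analysis", "ai-review", "sandbox-tests"
    severity: str        # "blocker", "major", "minor"
    message: str
    deterministic: bool  # reproducible, explainable result vs. probabilistic signal

def aggregate(findings):
    # Deterministic findings are trusted as-is; probabilistic ones (e.g. an
    # LLM reviewer) are kept only when severe enough to warrant a human look.
    return [f for f in findings
            if f.deterministic or f.severity == "blocker"]

def verdict(findings):
    # The platform ultimately passes judgment on the agent's output.
    kept = aggregate(findings)
    return "fail" if any(f.severity == "blocker" for f in kept) else "pass"
```

The sketch encodes the grounding principle from above: imprecise, probabilistic signals are filtered against deterministic results rather than trusted on their own, and every finding carries its source so the accountable developer can see precisely what was checked.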

Beyond this, there are some emerging best practices that demonstrably improve overall agentic performance:

  • Embedded Context: Models are a reflection of their training data and techniques, and while the foundation model companies are continuously updating their training data, the vast majority of it is based on open source code. The quality, style, and standards of these open source datasets vary widely, and, perhaps most importantly, they differ from what you and your company want or have used in the past. Fine-tuning models, where the model provider allows it, improves absolute quality and security while also making the models better reflect the context embedded in your codebase. As Agent-Centric development progresses, we believe there will be growing recognition of the need for these fine-tuned enterprise models. This is complementary to, not competitive with, the more transient, task-specific context from the dynamic context engine.
  • Special Purpose Agents: Today, the baseline foundation models generate a lot of excitement. However, addressing specific problems in software development likely requires smaller models and agents that are custom-built for purpose. A code repair agent, with custom workflows and understanding of verification context, can better address the Solve part of the AC/DC. Code review agents, trained on pull request information, are likely to provide developers with better information than a generic review agent. This space is emerging, and is worth watching and experimenting with.

How to get started with AC/DC

Most companies cannot move from the current CI process to AC/DC overnight. There are tangible steps they can take, however, to get started. 

  1. [Verify] Strengthen your verification practices. Verification in AC/DC is mandatory, not optional, and it requires deliberate design and planning. It starts with defining “what good looks like.” These quality profiles might differ by application. One leading financial institution making the transition to AC/DC has a low/medium/high quality profile definition, and every project is categorized against it. They have mandated that every line of code written by AI agents be verified against the quality profile using deterministic code analysis. Similarly, a global telecommunications company tried to use AI coding agents in their traditional CI process and was forced to stop due to a lack of sound governance. Rolling out mandatory deterministic code analysis unlocked the process and enabled them to roll out AI coding tools everywhere.
  2. [Solve] Invest in remediation agents. With your verification in place, you can drive real impact by using remediation agents to work through your existing backlog of issues. In the Agent Centric Development Cycle, technical debt is no longer just a drag on velocity; it’s a hallucination trigger. Complexity kills and errors compound, leading agents down logic rabbit holes. Establishing and maintaining a clean codebase will speed development in an agentic world, and lower token consumption. Faster and cheaper! While the work from the remediation agents needs to be verified as they, too, are not perfect, current capabilities are strong and improving all the time. 
  3. [Guide and Verify] Manage your architecture. Most companies have a very poor understanding of the architecture of their codebase. Architectural knowledge is often tribal, sitting in the heads of a few key architects, and maintained by hand. AC/DC requires a deep, structured understanding of the software architecture. Beyond this, it requires that you take active steps to guide the agent to maintain or, better yet, improve the architecture as it works. By treating architecture as active, structured context rather than static documentation, you ensure agents build within your guardrails, not around them. 
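The quality-profile gating described in step 1 might look, in spirit, like the following sketch. The profile names, metrics, and thresholds are invented for illustration; a real deployment would draw them from its analysis tooling.

```python
# Illustrative sketch of gating AI-generated code against a per-project
# quality profile (low/medium/high), in the spirit of the financial-
# institution example above. Metric names and thresholds are made up.

PROFILES = {
    "low":    {"max_blockers": 0, "max_majors": 10, "min_coverage": 0.50},
    "medium": {"max_blockers": 0, "max_majors": 3,  "min_coverage": 0.70},
    "high":   {"max_blockers": 0, "max_majors": 0,  "min_coverage": 0.90},
}

def passes_profile(profile_name, analysis):
    """Return True if deterministic analysis results meet the project's profile."""
    p = PROFILES[profile_name]
    return (analysis["blockers"] <= p["max_blockers"]
            and analysis["majors"] <= p["max_majors"]
            and analysis["coverage"] >= p["min_coverage"])
```

The key property is that the gate is deterministic: the same analysis results always produce the same pass/fail decision, which is what makes the check explainable to the developer who is accountable for the merge.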

These three steps will get you started on the path to success regardless of whether you’re using Claude Code, Codex, GitHub Copilot, Cursor, or any other coding assistant. There are, of course, more advanced steps you can take, such as establishing your agentic sandboxes and employing hunting agents to amplify your security research program.

The transition to AC/DC isn't just a shift left—it's a fundamental rebuilding of the factory floor. Old practices will not set you up for success. Embracing AC/DC as your development framework, with Guide-Verify-Solve complementing your coding agent implementation, will help boost productivity while reducing risk and costs.


© 2025 SonarSource Sàrl. All rights reserved.