Blog post

The future of software development is AC/DC — and Sonar is here to power it

Manish Kapur

Senior Director, Products and Solutions

12 min read

TL;DR overview

  • Today we are announcing open beta of three new products: Sonar Context Augmentation, SonarQube Agentic Analysis, and SonarQube Remediation Agent.
  • All three capabilities are available in open beta and are free during the beta period. They can be enabled using the administration interface.
  • These three betas deliver critical capabilities aligned to our Guide-Verify-Solve framework.
  • Read on for the complete story on how these betas work together to power agent-driven development. 

AI agents are writing more code than ever before. As developers lean in with coding tools like Cursor, Claude Code, Codex, Gemini, and GitHub Copilot, they can now enlist AI agents to do in minutes what used to take hours or weeks. 

We know that agents inherently generate a lot of code…and with it, a lot of issues. Pull requests that used to be 300 lines are now 3,000, and tomorrow might be 300,000.

The verbose, complex code that agents most often write can be harder to verify and maintain. Agents sometimes behave unpredictably and create unnecessary code. They are also often flying architecturally blind, silently violating structural boundaries and accumulating technical debt.

Independent, peer-reviewed academic research from Carnegie Mellon University studied 807 open source projects that had adopted Cursor, and measured the impact on code quality using SonarQube. The study found that agent usage caused a temporary coding velocity spike, but it disappeared by the third month of usage. More disturbingly, agent usage caused a significant and persistent increase in code analysis warnings (+30%) and code complexity (+41%), which resulted in a longer-term slowdown in development velocity.

AI agent coding tools are extraordinary and powerful innovations that are reinventing the way software is built. But for organizations to take full advantage of their potential to actually improve coding velocity, they’ll need a new approach to AI code trust and verification.

The Agent Centric Development Cycle

We recently introduced a new framework for software development in the age of AI: the Agent Centric Development Cycle, or AC/DC. At its core, AC/DC defines four continuous stages that every AI-generated contribution should move through: 

Guide → Generate → Verify → Solve

  • Guide. Before an agent writes a single line of code, it needs to understand the playing field—your standards, your architecture, your constraints, your compliance requirements. Without this, you're asking an agent to play a game without knowing the rules. Everything that follows reflects the quality of this guidance.
  • Generate. This is the AI's job. The AI agent creates code in a sandbox environment, iterating to solve larger problems before any code touches your main codebase.
  • Verify. This is where deterministic-first, transparent, and multi-layered verification ensures the generated code meets your functional, non-functional, and compliance standards before it goes anywhere. This is the stage that breaks down most often—and the consequences can be dire.
  • Solve. Issues identified in Verify are fed back to specialized repair agents to fix. And those lessons feed back into Guide, making the next iteration better. The cycle is self-improving.
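The four stages above can be sketched as a single loop. This is purely an illustrative sketch of the framework, not a real Sonar API; every function and data structure here is a hypothetical stand-in.

```python
# A minimal, illustrative sketch of the Guide -> Generate -> Verify -> Solve
# loop. All names here are hypothetical; none of this is a real Sonar API.

def generate(context):
    """Toy 'agent': emits code, dropping any flaw already fed back to it."""
    known = set(context.get("issues", []))
    all_flaws = {"unused-variable", "hardcoded-secret"}
    return {"body": "...", "flaws": all_flaws - known}

def verify(code):
    """Toy deterministic check: report whatever flaws remain."""
    return sorted(code["flaws"])

def acdc_cycle(task, guidelines, max_iterations=5):
    # Guide: assemble project-specific context before any code is written.
    context = {"task": task, "guidelines": guidelines}
    for _ in range(max_iterations):
        code = generate(context)    # Generate: the agent writes in a sandbox
        issues = verify(code)       # Verify: deterministic analysis
        if not issues:
            return code             # only verified code leaves the loop
        context["issues"] = issues  # Solve: issues feed the next iteration
    raise RuntimeError("verification budget exhausted")
```

The point of the sketch is the feedback edge: issues found in Verify re-enter the context that guides the next Generate pass, which is what makes the cycle self-improving.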

If you want the full picture on AC/DC, our CEO Tariq Shaukat laid it out in detail here.

Today’s announcement is about what Sonar has built to bring the Agent Centric Development Cycle to life.

Guide: Sonar Context Augmentation

The problem: AI coding agents work in a vacuum. They don’t automatically know your team’s coding standards, your codebase’s architecture, or where the constraints and boundaries are. The result is code that works in isolation but breaks things when it’s integrated—leading to rework, frustration, and higher costs.

What it does: Sonar Context Augmentation bridges that gap by injecting real-time, project-specific dynamic context from SonarQube directly into your AI agent’s workflow. Before the agent writes a line of code, it understands the playing field: the most relevant guidelines for the files it’s working with, the structure of your codebase, and the patterns it should follow.

This isn’t about dumping every rule into the agent’s context. Context Augmentation is smart about it—surfacing only the guidelines that are relevant to the task at hand, so agents get cleaner signals and less noise.
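To make "surfacing only the relevant guidelines" concrete, here is a hypothetical sketch of that filtering idea: narrowing a rule set to the languages of the files an agent is actually touching. The rule format and field names are assumptions for illustration, not Sonar's actual schema.

```python
# Hypothetical sketch: filter a rule catalog down to what matters for the
# files in play, so the agent's context stays small and relevant.
# The rule records below are illustrative, not Sonar's real schema.

RULES = [
    {"id": "py-S1481", "language": "python",
     "text": "Remove unused local variables."},
    {"id": "java-S2095", "language": "java",
     "text": "Close resources with try-with-resources."},
    {"id": "py-S5445", "language": "python",
     "text": "Avoid insecure temporary-file creation."},
]

def relevant_guidelines(changed_files, rules=RULES):
    """Return only the rules matching the languages of the changed files."""
    ext_to_lang = {".py": "python", ".java": "java"}
    langs = {ext_to_lang.get(f[f.rfind("."):]) for f in changed_files}
    return [r for r in rules if r["language"] in langs]
```

A Java rule never reaches an agent editing only Python files: cleaner signal, fewer tokens.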

The results from our early benchmarks are striking: higher build pass rates, higher test pass rates, a significant reduction in code duplication, and a drop in cognitive complexity. Agents also use fewer tool calls and consume fewer tokens, which means lower operating costs.

Learn more

Verify: SonarQube Agentic Analysis

The problem: Typically, a developer only finds out that an AI-generated PR is broken when the quality gate fails—hours after the code was written. By then, fixing it is slow and costly. Standard code checkers don’t catch the kinds of deep, cross-file issues that SonarQube is built to find.

What it does: SonarQube Agentic Analysis brings Sonar’s trusted code analysis engine directly into the AI agent’s generation loop. Rather than waiting until a developer reviews the pull request, the agent can ask SonarQube to check its work in real time, as the code is being written.

If the agent’s suggestion contains a security risk, a logic error, or a maintainability problem, Agentic Analysis catches it immediately. The agent sees the issue, corrects it, and moves on—before a human ever has to review it.

This is a meaningful shift: errors are caught at the source, not hours downstream. Software developers stay focused on code design and architecture, rather than acting as manual gatekeepers cleaning up AI mistakes.

Learn more

Solve: SonarQube Remediation Agent

The problem: Finding a code issue is only half the job. Once Verify surfaces a problem, someone—or something—has to fix it. Today, that falls on software developers. It’s manual, repetitive, and pulls focus away from building new features.

What it does: The SonarQube Remediation Agent closes the loop in two ways—and the second one is where it really changes the game.

For new code, it steps in the moment SonarQube flags an issue in a pull request, generating a fix before a developer has to chase it down.

For your backlog, it operates at a different scale entirely. Every codebase carries accumulated weight—security vulnerabilities, reliability gaps, maintainability problems that teams acknowledge but never quite clear. The Remediation Agent works through that backlog systematically, opening one pull request per issue so developers can review and merge each fix on their own terms. Years of technical debt, tackled without a dedicated cleanup sprint.

In both cases, the agent doesn't trust its own output. Every fix is re-scanned using Sonar's analysis engine to confirm it resolves the original issue without introducing new ones. Only verified fixes reach the developer—as ready-to-review pull requests, never forced changes.
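The fix-then-rescan flow described above can be sketched as follows. Everything here is a hypothetical stand-in for illustration; the real Remediation Agent's interfaces are not part of this sketch.

```python
# Illustrative sketch of the one-PR-per-issue remediation flow: attempt a
# fix, re-scan it, and only open a pull request when the re-scan confirms
# the issue is resolved. All functions are hypothetical stand-ins.

def rescan(code, issue):
    """Toy re-scan: the issue is resolved if its marker no longer appears."""
    return issue["marker"] not in code

def attempt_fix(code, issue):
    """Toy repair agent: strip the flagged snippet from the code."""
    return code.replace(issue["marker"], "")

def remediate_backlog(code, backlog):
    pull_requests = []
    for issue in backlog:
        candidate = attempt_fix(code, issue)
        if rescan(candidate, issue):  # only verified fixes become PRs
            pull_requests.append({"issue": issue["id"], "patch": candidate})
    return pull_requests  # one ready-to-review PR per resolved issue
```

The key design choice mirrored here is that unverified fixes never leave the loop, and each fix arrives as its own reviewable unit rather than a forced change.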

Learn more

Three products, one continuous loop

Guide, Verify, Solve. These aren’t three separate tools bolted together — they’re three parts of a connected system designed to work in concert.

Sonar Context Augmentation sets agents up for success before they start. SonarQube Agentic Analysis keeps them honest as they work. And the SonarQube Remediation Agent fixes what they get wrong. Together, they make the Agent Centric Development Cycle a practical reality—not just a framework on a slide. 

And they are just the beginning.

All three products are now available in open beta for SonarQube Cloud Teams and Enterprise annual plan customers, free to use during the beta period.

Ready to get started? Explore the individual product posts below, or visit docs.sonarsource.com to dive in today.


Sonar Context Augmentation

SonarQube Agentic Analysis

SonarQube Remediation Agent
