A few days ago, Anthropic announced Claude Code Security, an agentic approach to vulnerability identification and remediation. Similar to the announcement of Aardvark (aka Codex Security) from OpenAI a few months ago, these initiatives have sparked significant discussion about the future of cybersecurity.
This blog post aims to explain what Claude Code Security is (recognizing few details are currently available), and how enterprises and developers should think about its role in their cybersecurity toolchain.
What is Claude Code Security?
Claude Code Security is a research preview from Anthropic that uses AI models to scan codebases, identify specific high-severity vulnerabilities (such as memory corruption, injection flaws, and authentication bypasses), and patch the issues they find.
In our view, what Anthropic announced is akin to an agentic security researcher. It has long been considered best practice to employ a range of techniques, from hiring a security research team or ethical hackers to running bug bounty programs that search for vulnerabilities in applications. These approaches complement other cyber defenses, including SAST and DAST, by looking for issues that are typically missed. Claude Code Security focuses on high-severity vulnerabilities of this kind, including complex logic errors that pattern-matching tools typically miss.
Once it finds an issue, it uses a technique called adversarial verification to try to confirm that the issue is real—and then it generates a patch to attempt to address the identified issue.
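To make the vulnerability classes mentioned above concrete, here is a minimal, hypothetical example of an injection flaw and the kind of parameterized-query patch a remediation tool would typically generate. The function names and schema are illustrative assumptions, not drawn from Anthropic's materials:

```python
import sqlite3

def find_user_vulnerable(conn, username):
    # Vulnerable: user input is interpolated directly into the SQL string,
    # so a payload like "x' OR '1'='1" rewrites the query's logic.
    query = f"SELECT id FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def find_user_patched(conn, username):
    # Patched: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

# Demo with an in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")

payload = "x' OR '1'='1"
print(len(find_user_vulnerable(conn, payload)))  # returns every row: 2
print(len(find_user_patched(conn, payload)))     # returns no rows: 0
```

The vulnerable version returns the whole table for the malicious input; the patched version correctly returns nothing. This is the sort of find-verify-patch loop the announcement describes, reduced to its simplest form.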
Agentic security research shows a lot of promise in improving overall codebase and application security. By amplifying the work of security researchers and addressing the last mile of remediation (similar to our SonarQube Remediation Agent, now available in Beta), it creates a force multiplier. We expect this will result in healthier, more secure codebases when used in combination with existing techniques. As Anthropic says in their product description, “Claude Code Security complements your existing tools by catching what they might miss and closing the loop on remediation.”
How does Claude Code Security fit with SonarQube?
While valuable, Claude Code Security solves a different use case than SonarQube.
- SonarQube systematically evaluates all of your code, while Claude Code Security engages in a more sampling-based, spot-checking approach.
- SonarQube consistently and repeatedly evaluates a defined set of issues, providing assurance they have been reviewed, while Claude Code Security is more opportunistic and looks for a different class of issues.
- SonarQube employs sophisticated mathematical reasoning techniques that move beyond simplistic pattern matching to evaluate complex issues such as data flows, all while maintaining the industry’s lowest false-positive rate. Claude Code Security, by contrast, relies on probabilistic reasoning that is subject to hallucinations, and on token-consuming, biased, and less reliable LLM-based verification techniques.
In other words, the two tools serve very different but complementary jobs:
- SonarQube: Rigorous, consistent, fast, and low-cost code review and verification
- Claude Code Security: Opportunistic hunting for rare but high-value vulnerabilities
SonarQube’s approach ensures that every line of code meets defined standards for reliability and maintainability while also monitoring open-source dependencies for known vulnerabilities and license risks.
This methodology is deterministic and consistent: given the same code, you get the same result every time. It is comprehensive: the entire codebase is checked, not just selected parts. And it is explainable: when an issue is flagged, you can see exactly which rule was triggered and why.
This matters for a few practical reasons:
- Auditors and compliance frameworks require consistent, repeatable evidence that code has been checked.
- Development teams need results they can act on in their normal workflow—inside their IDE, as part of a CI/CD pipeline, before code is merged.
- Security coverage needs to extend beyond your own code to include open-source dependencies, infrastructure configuration, and secrets that may have been accidentally committed.
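As a concrete illustration of the CI/CD integration point above, a SonarQube analysis is typically driven by a small configuration file at the repository root. This is a minimal sketch; the project key, server URL, and source path are placeholders you would replace with your own values:

```properties
# sonar-project.properties (values below are placeholders)
sonar.projectKey=my-org_my-service
sonar.sources=.
sonar.host.url=https://sonarqube.example.com
# The analysis token is usually supplied via the SONAR_TOKEN
# environment variable in CI rather than committed here.
```

With this in place, the CI pipeline runs the scanner (e.g., `sonar-scanner`) on every push or pull request, so findings surface before code is merged.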
| Dimension | SonarQube | Claude Code Security |
| --- | --- | --- |
| Primary goal | Systematic code verification and review | Spot-checking and discovery |
| Coverage | Entire codebase, every line of code, every scan | Opportunistic; neither comprehensive nor guaranteed to be exhaustive |
| Consistency | Deterministic: same code → same result, every time | Probabilistic: results may vary between runs |
| False positive (FP) rate | ~3% | Unknown; LLMs inherently produce FPs |
| Explainability | Clear rule reference for every finding | AI reasoning; may be harder to audit |
| Compliance use | Accepted by auditors and regulators | Not currently suitable for compliance evidence |
| Speed/cost | Fast, with predictable cost | Slower, with high token consumption |
| Adoption | 7M+ users, embedded in CI/CD workflows and integrated with major AI coding tools | Currently in research preview; available only in Claude Code |
The value of SonarQube’s systematic codebase analysis is not just in finding individual vulnerabilities. It is in being able to demonstrate, continuously and verifiably, that your entire codebase has been checked against a well-defined standard.
The bigger picture: how security toolchains actually work
The most security-conscious organizations rely on a portfolio of tools. A typical mature security practice already combines several layers of defense, as no single method catches everything:
- Automated systematic codebase analysis integrated into the development workflow (SAST, SCA, secrets, IaC)
- Dedicated security testing tools for specific vulnerability classes
- Internal security teams who review architecture and design
- External security researchers, often through bug bounty programs, who look for what everyone else missed
Claude Code Security fits naturally into the fourth category. It is an AI-powered security researcher—one that can be pointed at a codebase to preemptively identify issues before they can be weaponized.
The right question is not "which tool do we use?" It is "what does each layer of our security practice cover, and where are the gaps?" Systematic codebase analysis and AI-assisted research address fundamentally different challenges.
What is the next evolution of application security?
The emergence of AI-powered security research tools is a positive development for the industry. Finding vulnerabilities that require contextual reasoning—understanding what a piece of code is supposed to do, and identifying where that intent breaks down—has historically required skilled human researchers. Making that capability more accessible and scalable is valuable.
At the same time, the properties that make AI research tools interesting are also the properties that make them unsuitable as a replacement for systematic codebase analysis. They are not exhaustive. They are not consistent run-to-run. They do not produce the kind of structured, auditable evidence that compliance frameworks require.
The future of application security is likely one where both layers are stronger. Deterministic, comprehensive scanning handles the verification layer—ensuring that every known class of vulnerability has been checked, across all code, continuously. AI-assisted research handles the exploratory layer—finding the things that rules cannot anticipate. Together, they cover more ground than either could alone.
Claude Code Security is a spot-checking tool.
SonarQube is a comprehensive audit and verification platform.
Each has a role.
In summary:
- Systematic codebase analysis (SAST, SCA, secrets, IaC) by SonarQube employs mathematical reasoning to provide comprehensive, consistent, auditable coverage of your entire codebase. It is the foundation of any serious security practice.
- AI-assisted security research finds context-specific vulnerabilities that rules cannot anticipate—the same job that human security researchers and bug bounty programs have always done.
- These are complementary capabilities, not competing ones. The strongest security posture uses both.
- For teams with compliance requirements, regulatory obligations, or a need to demonstrate consistent security coverage, systematic code analysis remains essential—and cannot be replaced by a research preview tool.
Anthropic has built something genuinely useful, and we think the teams that will benefit most from it are the ones who already have a solid systematic code analysis foundation in place. That is what gives AI-assisted research the context it needs to be most effective.

