Secure agents from leaking secrets with the new SonarQube CLI

Satinder Khasriya

Product Marketing Manager, Code Security

12 min read

  • Code quality
  • Code security

In the modern development landscape, a single leaked credential can dismantle years of trust. According to the Verizon Data Breach Investigations Report, it takes a median of 94 days for organizations to remediate leaked secrets. In an era where a breach can happen in milliseconds, nearly three months of exposure is an unacceptable systemic risk. Catching secrets at the source—before they ever reach your version control system—is the only way to prevent a localized mistake from becoming a persistent security liability. Once a secret is committed to a repository, it is functionally compromised. Even if you delete the file or overwrite the line, the secret remains in the Git history, accessible to anyone with repository access.

For enterprises, the "cost of a leak" scales exponentially the longer it remains undetected. It isn't just about the immediate risk of unauthorized access; it's about the massive operational toil required to rotate keys, invalidate tokens, audit logs for misuse, and potentially notify regulatory bodies. 

That is why we are excited to announce the open beta of the SonarQube CLI. It transforms this workflow by moving security from the end of the pipeline directly into the developer's agentic workflow. The headline feature of this release is Sonar’s AI-native secrets protection: an ultra-fast, high-precision secrets detection hook built into the SonarQube CLI.

The rise of the "automated leak"

In a traditional workflow, a secret leak usually resulted from human error, such as a developer accidentally committing a .env file to GitHub. In an agent-centric development cycle, however, coding tools such as Claude Code and Cursor can introduce a dangerous new backdoor for sensitive data. Because these agents build context by scanning your local environment, they can inadvertently ingest active session tokens, API keys, or database credentials and send them directly to an LLM provider’s servers as part of the prompt history.

This creates a "silent leak" scenario. You might copy-paste a block of code into a prompt to debug it, forgetting that a hardcoded token is buried in the logic. The speed of generation can easily outpace the security of the workflow.

LLM gateways and persistent risk

This risk is further compounded by the rapid adoption of LLM gateways (such as Portkey, Helicone, or LiteLLM). Enterprises use these platforms to manage costs and provide a unified API layer. However, if an agent sends an unscrubbed secret in a prompt, that secret is now persisted in the gateway’s request logs—often in plain text. Once a token hits these logs, it is no longer just a local mistake; it is an enterprise liability. To build software you can trust, organizations must implement independent, automated verification that catches these secrets before they escape the local environment.

SonarQube CLI: Built for agent-centric development

Today, the workflow is often fragmented and reactive. To secure code, developers typically rely on CI/CD pipelines to catch issues. However, by the time the code reaches the pipeline, the silent leak to an LLM provider has already happened. The shift toward the agent-centric development cycle only amplifies these challenges. When agents are autonomously writing or refactoring code at scale, the volume of "silent leaks" grows exponentially. Agents don't just write code; they ingest environment context, read log files, and transmit data to external LLMs at a pace no human can manually audit.

Standard tools often fail in an agentic environment because they are too slow or too noisy; if a scanner takes five seconds to analyze a file, it breaks the "flow" of the agent. Without an ultra-fast verification layer, organizations face an accountability crisis: the speed of agentic innovation begins to outpace the ability to verify its safety.

With Sonar’s AI-native secrets protection, we have optimized our engine for agentic workflows rather than just rigid compliance checks. To integrate this directly into your agentic workflow and stop "automated leaks" at the source, you can configure coding agents, such as Claude Code, to use SonarQube as a mandatory verification step. By adding a pre-capture hook, the SonarQube CLI scans every code snippet the agent produces in real time—achieving sub-100ms latency—to ensure that no session tokens or API keys are ever sent to the LLM provider. Key benefits of this approach:

  • High precision: Our secret detection features a false positive rate of less than 5%, ensuring work is only interrupted when there is a genuine risk.
  • Extreme speed: Based on our internal testing, we observed an average processing speed of 100ms per file in environments like Claude Code. This ensures your agent remains unhindered while your secrets stay local.

The launch of the SonarQube CLI creates a versatile, extensible foundation for the future of the Sonar ecosystem. By establishing a presence directly in the automation layer, we have opened a pipeline to deliver high-frequency, specialized "hooks" that address the evolving needs of the AI-native SDLC. Beyond secrets detection, this architecture allows us to release future capabilities as portable, ultra-fast modules. This evolution ensures that as your development workflows become more complex and agent-driven, Sonar is the high-precision verification layer that moves at the speed of your innovation.

You can secure the workflow of coding agents such as Claude Code today by installing Sonar’s AI-native secrets detection CLI and integrating it directly with your environment. Start using the SonarQube CLI to make verification the default—whether code is written by developers, copilots, or agents.

How to set up secrets detection with the SonarQube CLI

The following walkthrough takes about five minutes. You'll install the SonarQube CLI, scan files for hardcoded secrets, and then wire up automated protection for Claude Code.

Prerequisites:

  • A SonarQube Cloud account (free tier works) or a SonarQube Server instance
  • Node.js 18.20.0 or later
  • macOS (ARM64), Linux, or Windows
  • The SonarQube CLI is currently in open beta

Install the SonarQube CLI

Download and install the CLI with a single command.

On macOS or Linux:

curl -o- https://raw.githubusercontent.com/SonarSource/sonarqube-cli/refs/heads/master/user-scripts/install.sh | bash

On Windows (PowerShell):

irm https://raw.githubusercontent.com/SonarSource/sonarqube-cli/refs/heads/master/user-scripts/install.ps1 | iex

The installer adds sonar to your PATH via your shell profile, but the change won't take effect in your current terminal session. Either open a new terminal or run:

export PATH="$HOME/.local/share/sonarqube-cli/bin:$PATH"

Verify the install with sonar --help. You should see the available commands: auth, install, integrate, analyze, and list.

Authenticate with SonarQube Cloud

Connect the CLI to your SonarQube Cloud account. The fastest option is non-interactive, passing your org key and token directly:

sonar auth login -o <YOUR_ORG_KEY> -t <YOUR_TOKEN>

If you prefer, sonar auth login without flags opens a browser-based flow where SonarQube Cloud generates a token for you. For SonarQube Server, use sonar auth login -s https://your-server-url --with-token <TOKEN> instead. The CLI will then prompt for your organization key.

Confirm the connection:

Install the secrets binary

The secrets scanner is a separate binary that the CLI downloads and manages for you. Install it:

sonar install secrets

The sonar-secrets binary handles the actual pattern matching. It covers 450+ distinct secret patterns across 248 cloud services, using both format-specific rules and entropy-based detection for unknown secret types.

Scan for secrets

Point the scanner at a file with hardcoded credentials. This database.yml has plaintext passwords:

# Database configuration
production:
  adapter: postgresql
  host: prod-db.internal.example.com
  port: 5432
  database: inventory
  username: app_user
  password: "Sup3r$ecretPa$$w0rd!2026"
  pool: 25
  timeout: 5000
  # Replica for read-heavy queries
  replica:
    host: prod-db-replica.internal.example.com
    port: 5432
    username: readonly_user
    password: "R3adOnly$ecret!2026"

staging:
  adapter: postgresql
  host: staging-db.internal.example.com
  port: 5432
  database: inventory_staging
  username: staging_user
  password: <%= ENV['STAGING_DB_PASSWORD'] %>
  pool: 10
  timeout: 5000

Run the scan:

sonar analyze secrets database.yml

The "Scan failed" message looks alarming, but it's working exactly as intended. In fact, it found 2 secrets. Exit code 51 means "secrets detected." A non-zero exit code is what makes this scanner useful as a gate. CI pipelines, pre-commit hooks, and agent integrations all rely on exit codes to decide whether to proceed or block. Exit code 0 means clean; exit code 51 means stop.
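Because the scanner communicates through exit codes, it slots into any gate you already run. Below is a minimal sketch of a pre-commit-style gate built on that contract; it assumes sonar is on your PATH and that sonar analyze secrets accepts a file path (the function and message names are illustrative, not part of the CLI):

```shell
#!/bin/sh
# Illustrative gate (sketch, not an official Sonar hook).
# It relies only on the documented exit codes: 0 = clean, 51 = secrets found.
scan_or_block() {
  sonar analyze secrets "$1"
  status=$?
  if [ "$status" -eq 51 ]; then
    echo "Secrets detected in $1 - commit blocked." >&2
    return 1
  fi
  return "$status"   # propagate 0 (clean) or any unexpected error code
}

# In a real .git/hooks/pre-commit you would call it on each staged file:
#   for f in $(git diff --cached --name-only --diff-filter=ACM); do
#     scan_or_block "$f" || exit 1
#   done
```

Git aborts a commit whenever the pre-commit hook exits non-zero, so the block above turns "exit code 51" into "this commit never happens."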

The scanner reports each finding with the file path, the exact line and column range, and a partially masked preview of the secret value. The scanner caught both passwords in database.yml, reporting each as a Generic Password.

To see the opposite result, scan a file that uses environment variables instead of hardcoded values:
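For example, a version of the production block above with credentials pulled from the environment (the variable names here are illustrative) comes back clean:

```yaml
# Same structure, no hardcoded credentials: values come from the environment.
# PROD_DB_USER / PROD_DB_PASSWORD are illustrative variable names.
production:
  adapter: postgresql
  host: prod-db.internal.example.com
  port: 5432
  database: inventory
  username: <%= ENV['PROD_DB_USER'] %>
  password: <%= ENV['PROD_DB_PASSWORD'] %>
```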

No findings, exit code 0. The pattern is clear: pull secrets out of your source files and into environment variables, and the scanner stays quiet.

Protect your AI coding agent

Manual scanning catches secrets in files you already know about. The real risk with AI coding agents is the files you don't think about, the ones the agent reads autonomously to build context: config files, log output, and environment definitions. If any of these contain a live secret, the agent sends it straight to an LLM provider as part of the prompt.

sonar integrate claude solves this by installing two hooks that scan automatically, without any action from you or the agent:

sonar integrate claude -o <YOUR_ORG_KEY> -p <YOUR_PROJECT_KEY>

Note: SonarQube Server users should use the -s flag instead.

To install hooks globally (active across all your projects), add -g:

sonar integrate claude -g

The integration installs two Claude Code hooks into your .claude/settings.json:

  • PreToolUse hook — triggers every time Claude Code reads a file. The hook runs sonar analyze secrets on the file before Claude sees it. If the scanner finds secrets (exit code 51), the hook blocks the read entirely. The secret never enters the LLM context window.
  • UserPromptSubmit hook — triggers every time you submit a prompt. The hook scans your prompt text for secrets. If you paste an API key or token into the chat, the hook blocks the prompt from being sent.

After this single command, every file read and every prompt submission in your Claude Code session runs through secrets detection automatically. No workflow changes, no extra steps, no way to forget.
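Under the hood, these hooks follow Claude Code's standard settings schema. The entries written into .claude/settings.json look roughly like the sketch below; the exact matchers and command arguments the integration writes may differ, and the "..." stands in for arguments not shown here:

```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Read",
        "hooks": [
          { "type": "command", "command": "sonar analyze secrets ..." }
        ]
      }
    ],
    "UserPromptSubmit": [
      {
        "hooks": [
          { "type": "command", "command": "sonar analyze secrets ..." }
        ]
      }
    ]
  }
}
```

When either command exits with code 51, Claude Code blocks the corresponding action, so the secret never reaches the model.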

Code you can trust in the era of agents

The SonarQube CLI’s "analyze secrets" capability provides an ultra-fast verification layer that moves at the speed of AI-driven development. By launching high-precision hooks for the SonarQube CLI, we are delivering the first installment of a roadmap built to hardcode integrity into every stage of your innovation.

Secure your workflow today

Don't let your secrets become enterprise liabilities. Stop automated leaks and start verifying your AI-generated code with the SonarQube CLI.

Get started with SonarQube CLI
