Step-by-step guide

The .NET developer’s guide to SonarQube - Part 4: Interpreting results and mastering quality gates

Table of contents

  • The strategic approach: Focus on new code
  • Mastering quality gates
  • Deciphering the software qualities
  • Managing false positives


In Part 3, we transitioned from manual execution to automated Continuous Inspection. By integrating analysis into Azure DevOps or GitHub Actions, your organization now receives consistent, automated feedback on every pull request, uploaded directly to SonarQube Cloud.

The result of this automation is a comprehensive dashboard populated with metrics, ratings, and issue counts. For a development team, the challenge now shifts from generating this data to interpreting it effectively.

In this installment, we will examine how to navigate the SonarQube Cloud dashboard, apply the "Focus on New Code" strategy to manage technical debt sustainably, and configure quality gates to enforce your organization’s standards.

The strategic approach: Focus on new code

Upon analyzing an existing codebase for the first time, it is common to encounter a significant number of reported issues in the "Overall Code" view. A legacy application may report hundreds of reliability issues and months of estimated remediation effort.

Attempting to remediate historical debt immediately is often counterproductive. It diverts resources from feature delivery and risks introducing regressions in stable code. Instead, the modern SonarQube methodology advocates a strict focus on new code.

The branch homepage is divided into two primary contexts:

  1. New code: This view isolates changes introduced in the pull request or code modified within a specific reference period (typically defined by a Reference Branch or the last 30 days).
  2. Overall code: This view represents the absolute state of the entire repository.

The sustainability of your codebase relies on the "New code" metrics. By ensuring that all new contributions meet high quality standards, the overall health of the application will naturally improve over time as legacy code is touched and refactored during routine maintenance.

Mastering quality gates

The quality gate is the policy engine of SonarQube. It defines the pass/fail criteria for a project and serves as the definitive indicator of whether a pull request is suitable for merging.

The default standard: "Sonar way"

By default, SonarQube Cloud applies the "Sonar way" quality gate. This read-only standard strictly enforces the Focus on New Code strategy. It does not gate on the overall state of the project, but sets a zero-tolerance policy for the "diff":

  • Reliability: Rating must be A (0 New Issues).
  • Security: Rating must be A (0 New Vulnerabilities).
  • Security review: 100% of New Security Hotspots must be reviewed.
  • Maintainability: Rating must be A.
  • Coverage: New Code must have at least 80% test coverage.

Defining custom quality gates

While the default standard is recommended, organizational requirements may necessitate customization. For example, a team may require specific thresholds for AI Code Assurance or different coverage targets for legacy maintenance branches.

NOTE: Custom quality gates are only available in the Team and Enterprise plans.

To create a custom quality gate in SonarQube Cloud:

  1. Navigate to the Organization settings level.
  2. Select the Quality Gates tab.
  3. Copy the existing "Sonar way" gate to use as a baseline, giving the copy a descriptive name (e.g., "Corporate .NET Standard").
  4. Modify the conditions as required.
  5. Navigate to the specific Project Settings > Quality Gate and assign the new gate to your project.

Recommendation: Exercise caution when adding conditions on "Overall Code" to your quality gates. Enforcing strict, retroactive standards on legacy code (e.g., "Overall Coverage > 80%") can cause pipelines to fail permanently, blocking development until significant refactoring is completed. The most effective way to improve code health is to ensure that every new contribution meets high standards of quality and security: stop the leak of new issues rather than pausing development to fix the past.

Deciphering the software qualities

Modern SonarQube analysis moves beyond simple bug counting. It categorizes code health into three pillars known as software qualities. Understanding these domains is essential for prioritization.

1. Reliability

Reliability measures the ability of your software to perform its required functions under stated conditions. Issues in this category are non-negotiable defects that will likely result in runtime errors.

  • Metric: Reliability Rating (A–E).
  • Example (.NET Rule S2259): A potential NullReferenceException.
public void ProcessUser(User user)
{
    // Reliability Issue: 'user' is dereferenced without a null check.
    var name = user.Name;
}
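The remediated version simply guards the parameter before use. A minimal sketch, assuming a hypothetical `User` type with a `Name` property:

```csharp
using System;

public record User(string? Name);

public class UserProcessor
{
    public void ProcessUser(User user)
    {
        // Compliant: guard the parameter before dereferencing it.
        ArgumentNullException.ThrowIfNull(user);

        var name = user.Name;
        Console.WriteLine(name);
    }
}
```

`ArgumentNullException.ThrowIfNull` (available since .NET 6) makes the contract explicit, and the analyzer's null-flow tracking recognizes the guard, so S2259 should no longer be raised.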

2. Security

Security is assessed through two distinct mechanisms:

  • Vulnerabilities: Confirmed weaknesses that are open to exploitation (e.g., SQL Injection, Hardcoded Credentials). These affect your Security Rating and require immediate remediation.
  • Security hotspots: Security-sensitive code segments that require human review. SonarQube Cloud cannot deterministically flag these as "safe" or "unsafe" without context.
    • Example: The use of System.Random. If used for cryptography, it is a vulnerability. If used to randomize a UI element, it is safe. The developer must review the hotspot and mark it as "Safe" or "Fixed."
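To illustrate why context matters, the sketch below contrasts the two uses. The secure alternative shown (`RandomNumberGenerator.GetBytes`, .NET 6+) is the standard replacement whenever the value must be unpredictable:

```csharp
using System;
using System.Security.Cryptography;

// Hotspot, but safe here: System.Random for a cosmetic UI offset.
var displayOffset = new Random().Next(0, 10);
Console.WriteLine($"Offset: {displayOffset}");

// Security-sensitive: use a cryptographically secure generator for tokens.
var tokenBytes = RandomNumberGenerator.GetBytes(32);
Console.WriteLine(Convert.ToBase64String(tokenBytes));
```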

3. Maintainability

Maintainability ensures that your code is consistent, intentional, and adaptable. Issues here (sometimes referred to as "code smells") increase the cost of future changes. SonarQube quantifies this as "technical debt," expressed as the estimated time required for remediation.

  • Metric: Maintainability Rating (A–E).
  • Example (.NET Rule S1125): Boolean redundancy that reduces readability.
// Low Maintainability: redundant comparison
if (isValid == true) { ... }

// High Maintainability
if (isValid) { ... }

Managing false positives

Static analysis engines effectively identify patterns but lack the semantic understanding of a human developer. Occasionally, valid code may be flagged as having an issue. There are two primary methods for handling these exceptions.

Method 1: The dashboard (Management approach)

Issues can be managed directly within the SonarQube Cloud interface. A user with appropriate permissions can change an issue's status to False Positive or Accept. This removes the issue from the metrics and allows the quality gate to pass without altering the source code.

Method 2: In-code suppression (Developer approach)

For .NET developers, it is often preferable to document the suppression directly in the code. SonarQube respects standard C# preprocessor directives. This ensures the suppression is version-controlled and visible to other developers.

To suppress a specific rule (e.g., Rule S1234) for a block of code, use the #pragma directive:

#pragma warning disable S1234 // Justification: Legacy interoperability requires this specific pattern
var legacyResult = ExecuteUnsafeMethod();
#pragma warning restore S1234
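Because the Sonar rules run as Roslyn analyzers, the standard `[SuppressMessage]` attribute is another option; it scopes the suppression to a single member rather than a span of lines. A hedged sketch, reusing the hypothetical rule ID and method name from above:

```csharp
using System.Diagnostics.CodeAnalysis;

public class LegacyAdapter
{
    // Scoped suppression: applies only to this method, with a recorded justification.
    [SuppressMessage("Major Code Smell", "S1234", Justification = "Legacy interoperability requires this specific pattern")]
    public object Invoke()
    {
        return ExecuteUnsafeMethod();
    }

    // Hypothetical stand-in for the legacy call from the pragma example.
    private object ExecuteUnsafeMethod() => new object();
}
```

Roslyn matches suppressions by the rule ID (`S1234` here), so the category string is informational.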

Alternatively, if a developer marks the issue as a false positive in SonarQube for IDE (or in the SonarQube web interface), the quality gate will be re-calculated immediately and no further action is required.

Conclusion

By correctly interpreting the dashboard and configuring quality gates, you transform raw analysis data into actionable intelligence. The focus on new code approach ensures that technical debt is managed sustainably, allowing teams to improve code quality without halting feature delivery.

However, you may notice that the "Coverage" metric on your dashboard is currently empty. In the final installment of this series, Part 5: Advanced topics, we will close this loop. We will demonstrate how to integrate Coverlet for code coverage generation and explore how to extend the analysis with custom Roslyn rules.
