Press release

Sonar Data Reveals Critical "Verification Gap" in AI Coding: 96% Don’t Fully Trust Output, Yet Only 48% Verify It

AUSTIN – January 8, 2026 – Sonar, the leader in code review and verification, today announced the release of its 2026 State of Code Developer Survey report. The study, which surveyed over 1,100 developers globally, confirms that AI adoption in coding has reached critical mass: 72% of developers who have tried AI use it every day, and AI accounts for 42% of all committed code, a volume developers expect will rise to 65% by 2027. Yet this explosion in code volume has not necessarily delivered the expected improvements in efficiency. Instead, the study reveals that the surge in output has created a new bottleneck at the verification stage of software development: more work is now required to review code, raising urgent new challenges around the reliability and security of deployed software.

The survey paints the picture of an industry in transition, with an AI coding landscape that is complex and nuanced. The average team now juggles four different AI coding tools, and 64% of developers have started to use autonomous AI agents. Yet this surge in automation has not eliminated developer toil, which holds steady at nearly a quarter (24%) of the work week regardless of how frequently developers use AI. The data suggests that the time saved in drafting code is being reinvested into the necessary work of reviewing and debugging AI output to ensure it meets production standards.

To manage this new workflow effectively, successful engineering teams are moving toward a “vibe, then verify” approach, balancing the speed of AI generation with the rigorous oversight required to maintain code health. However, a critical disconnect remains: while 96% of developers report they do not fully trust that AI-generated code is functionally correct, only 48% say they always check their AI-assisted code before committing it. This creates what Amazon Web Services (AWS) CTO Werner Vogels has termed “verification debt.” The verification burden required to avoid this debt is significant: 38% of developers note that reviewing AI-generated code takes more effort than reviewing code written by their human colleagues.

Additional highlights from the Sonar 2026 State of Code Developer Survey report include:

  • The governance challenge: As developers seek to maximize efficiency, 35% report accessing AI coding tools via personal accounts rather than work-sanctioned ones, highlighting a potential blind spot for security and compliance teams who need to protect company data.
  • The experience dynamic: A divide has emerged in how AI impacts different career stages. Junior developers report the highest productivity gains from AI (40%), but they are also more likely than their senior colleagues to say that reviewing AI code requires extra effort.
  • Complex impact on technical debt: AI’s impact on technical debt is a double-edged sword requiring close oversight; while 93% of developers report positive effects—such as improved documentation (57%) and test coverage (53%)—88% also cite negative impacts, specifically regarding the generation of code that looks correct but isn't reliable (53%) or is unnecessary and duplicative (40%).

“We are witnessing a fundamental shift in software engineering where value is no longer defined by the speed of writing code, but by the confidence in deploying it. While AI has made code generation nearly effortless, it has created a critical trust gap between output and deployment,” said Tariq Shaukat, CEO of Sonar. “To realize the full potential of AI, we must close this gap. The winners in this new era will be those who empower their developers to use AI as a true force multiplier, pairing rapid generation with the automated, comprehensive review and verification needed to ensure high-quality, maintainable, and secure code.”

Sonar is uniquely positioned to analyze these trends, as SonarQube is used by more than 7 million developers to analyze over 750 billion lines of code each day. Download the full report here.

About Sonar

Sonar is the trust and verification layer for AI code and has been the industry standard for automated code review for 17+ years. Integrating code quality and code security into a single platform, Sonar delivers deterministic, repeatable, and actionable code verification at scale, analyzing over 750 billion lines of code daily to ensure software is secure, reliable, and maintainable. Rooted in the open source community, Sonar is trusted by 7M+ developers globally, including teams at Snowflake, Booking.com, Deutsche Bank, AstraZeneca, and Ford Motor Company.

To learn more about Sonar, please visit: www.sonar.com