Sonar's Take: Software Development Under America's AI Action Plan

Nathan Jones

VP Public Sector

  • Code Quality
  • Code Security
  • Code Compliance
  • AI

The White House has officially launched “America's AI Action Plan,” designed to accelerate AI innovation across the country. The plan's intent has been described as “empowering the private sector, removing regulatory hurdles, and solidifying the U.S. as a global leader in artificial intelligence.”

The ambition to foster a "try-first culture," highlighted in Pillar I under the Enable AI Adoption section, signals an opportunity for developers to innovate with AI, build faster, and solve more complex problems than ever before.

For software development, however, moving fast cannot mean breaking things or removing all oversight, especially when it comes to the code that powers our world. We believe the key to successful AI adoption lies in a "trust and verify" approach, ensuring that the code we build with AI is secure, robust, and high-quality from the start.

Accelerating innovation with open access and a 'try-first' culture

To accelerate innovation, the plan champions two intertwined strategies: supporting open source/open-weight AI and enabling broad AI adoption through a "try-first" culture. These strategies work in tandem to broaden access to AI and expand where it can deliver value.

We at Sonar particularly applaud the administration for promoting open source software (OSS) and lowering the barrier for the adoption of AI in the government. Our own journey, rooted in open source, has led to our community and commercial SonarQube platform being widely adopted across the Federal Government. Today, our tools are used by hundreds of federal agencies, powering critical projects across civilian, defense, and intelligence communities. The plan’s emphasis on simplified regulations and procurement will be crucial in accelerating access to innovative technology.

The plan puts action behind its words by calling for the creation of "regulatory sandboxes" and "AI Centers of Excellence." These initiatives are designed to give developers and businesses a safe, secure space to rapidly deploy and test new AI tools, significantly lowering the barrier to entry for innovation.

This accelerated, open environment is exactly what the community needs to push boundaries. However, as more developers use AI tools (both open source and proprietary) to generate code, the responsibility to verify the output grows exponentially. A "try-first" culture must be built on a foundation of verification, and the plan acknowledges this through several of its outlined initiatives, such as “AI Interpretability, Control, and Robustness Breakthroughs” and “Build an AI Evaluations Ecosystem.” Verifying AI is essential to accelerating its safe adoption. This principle is the foundation of a "trust and verify" model, a practical framework for harnessing AI's power responsibly.

"Secure-by-Design": extending security from the model to the code

It’s clear that safety and security are central themes of the AI Action Plan. In addition to the above-named initiatives in Pillar I, Pillar II of the plan rightly calls for promoting "secure-by-design" AI technologies and establishing an AI Information Sharing and Analysis Center (AI-ISAC) to centralize and share threat intelligence.

Securing AI models themselves from malicious input, prompt injection, and other threats is a critical and necessary step. But a perfectly secure, well-behaved model can still generate insecure code. It can introduce subtle bugs, rely on deprecated libraries, or inadvertently "hallucinate" flawed logic that creates new attack surfaces. When a model's output doesn't receive the same scrutiny as its input, an "Output Assurance" gap emerges. It's not enough to assure that the AI model is secure; we need assurance that the code it produces is also secure and high-quality.
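
To make that gap concrete, here is a minimal, hypothetical Python sketch (the function names and the `users` table are illustrative, not drawn from the plan or from any specific assistant). Both functions compile and "work," which is exactly why only verification of the output catches the problem:

```python
import sqlite3

# A suggestion an AI assistant could plausibly produce: it runs, reads cleanly,
# and is still vulnerable, because user input is formatted into the SQL text.
def find_user_unsafe(conn: sqlite3.Connection, username: str) -> list:
    query = f"SELECT id, email FROM users WHERE name = '{username}'"
    # Input such as  ' OR '1'='1  turns this query into "return every row".
    return conn.execute(query).fetchall()

# The verified alternative: a parameterized query keeps the input as data,
# so the same malicious string is matched literally instead of executed.
def find_user_safe(conn: sqlite3.Connection, username: str) -> list:
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Static analysis flags patterns like the first function before they reach production, regardless of whether a human or a model wrote them.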

A core philosophy at Sonar is that true "secure-by-design" extends beyond the AI tool and into the final artifact: the generated code itself. Sonar provides an essential safety net, ensuring that any vulnerabilities, bugs, or code smells in AI-generated code are caught and fixed before they ever reach production. For example, our AI Code Assurance capability gives developers confidence in the quality and security of every line of AI-generated code by enforcing high standards through a thorough validation process.

The data dilemma: high-quality output depends on high-quality input

Another key goal highlighted in the plan is the effort to create "the world's largest and highest quality AI-ready scientific datasets." This points to a universal principle fundamental not just to science, but to all of software development: the quality of an AI model's training data directly dictates the quality of its output.

For AI coding assistants, this presents a significant "garbage in, garbage out" risk. Today’s large language models (LLMs) are trained on vast, uncurated code repositories from the open internet. Inevitably, they learn from buggy, insecure, and outdated code, absorbing millions of examples of what not to do.

This results in AI assistants that can unknowingly perpetuate bad practices, recommend flawed security patterns, and suggest inefficient code, costing developers time in rework and introducing organizational risk. Until AI models are trained exclusively on high-quality, secure code, developers remain accountable for the quality of the AI-generated code that reaches production. This manual verification tax threatens to erode the very productivity gains that AI promises. A recent METR study found that “AI tooling slowed developers down,” with AI coding assistants decreasing experienced software developers' productivity by 19%, largely because the time saved writing code was lost to debugging and verifying flawed output.
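
As a concrete illustration of an inherited anti-pattern, consider this minimal Python sketch (the function names are hypothetical). Unsalted MD5 password hashing appears in countless older repositories a model may have learned from, while the salted, deliberately slow key derivation below is the pattern a quality and security gate should steer code toward:

```python
import hashlib
import os

# The pattern an assistant can absorb from decades of old example code:
# fast, unsalted MD5, long considered unsafe for storing passwords.
def hash_password_legacy(password: str) -> str:
    return hashlib.md5(password.encode()).hexdigest()

# The pattern verification should enforce instead: a unique random salt plus
# a deliberately slow key-derivation function (PBKDF2, standard library).
def hash_password_modern(password: str) -> bytes:
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt + key  # store the salt alongside the derived key
```

Both versions look equally plausible in a code suggestion; only a tool (or a reviewer) that knows the first pattern is flawed will stop it from being replicated.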

It is essential that development teams have tools that can systematically identify and remediate the flawed patterns that AI inherits from its training data. Sonar acts as a quality gatekeeper, helping development teams uphold consistent standards for code quality and code security, ensuring that the mistakes of the past aren't replicated in the software of the future.

Building the future of AI, securely and reliably

“America's AI Action Plan” has the potential to positively reshape the software development landscape, empowering teams to build more and faster than ever before.

However, to truly "win the race," speed must be matched with quality and security. The “trust and verify” mindset is a solid approach for minimizing risk while maximizing the incredible productivity and technological advancements AI promises for software development. This is where static analysis tools become critical. Solutions like our SonarQube platform enable development teams to harness the power of AI for code generation with confidence. By ensuring every line of AI-generated code is secure and high-quality from the start, we empower developers to innovate faster.

Let's embrace the AI revolution. Let's use AI to experiment, build, and create. But let's do it with the confidence of knowing that the code we build is robust, secure, and of the highest quality.
