
AI-Generated Code Demands ‘Trust, But Verify’ Approach to Software Development


Tariq Shaukat

Co-CEO


  • Code Quality

AI is pervading every aspect of business today. In fact, IBM reports that nearly half of enterprise-scale companies have actively deployed AI in their business. Many of the applications used in today’s business environments already leverage AI behind the scenes, meaning many end users are likely reaping the benefits of AI without even knowing it. Yet the majority of leaders are still trying to navigate how to get started with AI in a way that is safe for their organization. Where there’s promise, there’s also skepticism, plus a healthy dose of concern, about the new risks AI introduces. It’s critical, though, that fear and skepticism don’t stop forward momentum. Instead, leaders must focus on putting the right guardrails in place so they don’t risk falling behind.


It's exciting to imagine – and impossible to predict – what AI will be capable of in 5 to 10 years, or even just a year from now. No matter what unfolds, however, it’s guaranteed that we’ll make mistakes as we learn to implement and work alongside AI technologies. To minimize disruption and risk, while maximizing productivity and innovation, it’s imperative that companies approach their AI adoption open-mindedly and with an eye toward quality control.


We advocate a “trust but verify” approach: employ the AI, then verify its output with human review, so you take advantage of the technology without taking on excessive risk. By pairing this approach with Sonar’s Clean Code solutions (SonarQube, SonarCloud, and SonarLint), organizations can be confident that their AI-generated code is high-quality, maintainable, reliable, and secure.


Strengthen Productivity with AI 

Companies that invest in AI tools are actively investing in the growth, productivity, and general satisfaction of their employees. I think this is true in any walk of life: like it or not, mundane tasks that are necessary but in themselves add little value consume a lot of precious time. If AI does nothing else, it will remove the burden of these mundane, repetitive tasks. This frees up time to collaborate, to be creative, and to think outside the box.


As a result, it’s inevitable that the nature of work will change. People will increasingly act as quality controllers, editors, and creatives. For example, in software development, AI (with the right prompts) will increasingly write the main elements of code. As of June 2023, GitHub found that its AI coding tool Copilot had already generated over three billion accepted lines of code. The human role in software development will be to ensure that the code has no security issues, is reliable, is maintainable, and doesn’t contain problematic hallucinations or anything of the sort. Increasingly, we’ll see priorities like sustainability and Clean Code become focus areas.


AI offers a way to free up time so attention can shift to the architecture, the customer experience, and ‘the new, hard, innovative problem’ that nobody previously had time to solve.


Understand and Brace for AI-induced Risks 

There is also a risk that AI creates a gap between individuals who leverage the technology to be more productive and those who use it only because it is part of the landscape. I can see a path where it fractures teams in two: if a team is split in how the technology is used, and therefore in what it produces, there will be significant misalignment.


The risks extend beyond individuals to teams and organizations at large as well. In the world of software development, companies today are already using AI to write code. But here’s the catch: businesses are innovating and competing in their markets on a foundation of software, which already tends to be riddled with bad code that causes tech debt to mount. Bad code is a trillion-dollar problem, and AI has the potential to greatly exacerbate the issue by increasing the velocity of software development without regard for quality.


Developers Must Prioritize Quality

It can’t be overstated: companies need to approach the adoption of AI coding assistants and tools with an eye toward quality control. Just like humans, AI produces code with security, reliability, and maintainability issues. In fact, a recent study from Microsoft Research analyzed how 22 coding assistants performed beyond functional correctness and found that they “generally falter when tested on our benchmark, hinting at fundamental blindspots in their training setups.”
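To make that concrete, here is a minimal, hypothetical sketch (the function names, table, and schema are invented for illustration) of the kind of plausible-looking code an assistant can produce: it works for the happy path, yet contains a textbook SQL injection flaw, exactly the class of issue that human review and static analysis exist to catch.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str) -> list:
    # Plausible assistant output: works on normal input, but it builds the
    # query with string interpolation, so input like "x' OR '1'='1" changes
    # the query's meaning and dumps every row (SQL injection).
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_verified(conn: sqlite3.Connection, username: str) -> list:
    # The rewrite a human reviewer or static analyzer should insist on:
    # a parameterized query keeps user data separate from the SQL itself.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```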


One fact will remain true for the foreseeable future: all code, human- or AI-generated, must be properly analyzed and tested before it’s put into production. Developers should turn to AI for volume and for automating mundane tasks, but they must have the right checks in place, like the one sketched below, to ensure their code remains a foundational business asset.
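As one hypothetical example of such a check, reusing the parameterized function from the earlier sketch, a test can probe beyond the happy path and assert that hostile input matches nothing:

```python
import sqlite3

def find_user_verified(conn, username):
    # Parameterized query from the sketch above.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

def test_hostile_input_matches_no_rows():
    # Throwaway in-memory database with one known user.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    # The string-interpolated version would return alice's row here;
    # the parameterized version correctly matches nothing.
    assert find_user_verified(conn, "x' OR '1'='1") == []
```

A static analyzer would flag the injection pattern without any test at all; the point is that neither review, analysis, nor testing becomes optional just because the code came from an AI.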


AI coding tools are expected to free up 20-30% of developers’ time, allowing them to offload some work and focus on more interesting and challenging projects. With 83% of developers experiencing burnout due to increased workloads, this tech can offer much-needed relief, improve productivity, and raise job satisfaction. It can also help technology and business leaders strike a balance between speed and quality.


Establishing Safeguards to Harness AI for Good 

Whether organizations know it or not, their people are using AI, so it’s best to understand where and how it is being used. Companies must think through their investments as well as the governance they need to put in place. While federal regulators and consortiums like AISIC strategize on how to deploy safe and trustworthy AI, organizations should put in place governance that is easy to adapt as things continue to change rapidly.


Here are a few things to keep in mind:

  • Trusted frameworks, such as NIST’s Secure Software Development Framework, are a great place to start and map to.
  • Outline a list of approved AI tools, and decide in particular whether AI code generators are allowed, since the majority of software developers are already using them.
  • Stipulate what reviews should look like for different AI use cases, so that anything released or put into production is correct and responsible.


GitHub itself calls this out in its Copilot documentation: “You are responsible for ensuring the security and quality of your code. We recommend you take the same precautions when using code generated by GitHub Copilot that you would when using any code you didn't write yourself. These precautions include rigorous testing, IP scanning, and tracking for security vulnerabilities.”


The use of AI also needs to be considered holistically; it is a mistake to segregate AI to a specific department, and CTOs and CISOs should not be the only people weighing in. It’s critical to establish clear principles that set the tone from the top. Rather than overreacting or acting impulsively, organizations can treat the assurance of having the right guardrails in place as a guiding light.


Be Confident in Your Code – AI- or Human-Generated – With Sonar

The popular tech mindset of “move fast and break things” simply doesn’t work when you consider the cost of fixing flawed AI-generated output. However, you can’t slow down the pace of innovation either, and AI can help businesses gain a competitive advantage.


As such, organizations must remain proactive in evaluating holistic risk, how AI can augment efficiency and effectiveness, and what governance policies are appropriate. They must also invest in the right tools to support their development teams in taking advantage of genAI in a way that doesn’t increase risk and technical debt.


Sonar’s powerful code analysis tools (SonarQube, SonarCloud, and SonarLint) integrate easily with popular coding environments and CI/CD pipelines, giving developers in-depth insight into the quality, maintainability, reliability, and security of their code, whether human- or AI-generated. With this visibility, organizations can feel confident that their code is clean.


Taking a “trust but verify” approach is important across the spectrum of AI use. Whether in code or in marketing, teams need to ensure they aren’t blindly accepting what AI generates. Everything needs to be considered in its corporate and societal context, and that shouldn’t be forgotten amid the hype around AI.
