Blog post

Cyber Resilience Act: Navigating speed and security with AI coding


Anirban Chatterjee

Sr. Director, Product and Solutions Marketing

5 min read

Modern software development is caught between two powerful forces. On one hand, generative artificial intelligence (AI) coding tools are supercharging development velocity, often at the expense of rigorous security review. On the other, the European Union's Cyber Resilience Act, or CRA (Regulation (EU) 2024/2847), along with related legislation such as the Product Liability Directive (PLD), is ushering in an era of strict regulatory accountability, placing the liability for preventing cybersecurity failures squarely on manufacturers. This creates a critical paradox: the very tools used to build software faster are introducing security risks at a scale that manual oversight cannot manage, and the CRA makes manufacturers legally responsible for those risks.

For all companies that do business in the EU – notably, not just companies based in the EU – this new reality signals significant new complications for software lifecycle and supply chain management, especially when using AI coding tools. The CRA introduces mandatory cybersecurity requirements that apply throughout a product's entire lifecycle, covering “products with digital elements” (PDEs) from design to end-of-life. With severe penalties for non-compliance (up to €15 million or 2.5% of global annual turnover, whichever is higher), the CRA legally mandates a new model: one that demands organizations move fast but prove their products are built right from the start.

New obligations driven by the CRA

The CRA's scope is intentionally broad, applying to all PDEs made available on the EU market, regardless of where the manufacturer is located. This includes a wide array of products, such as baby monitors, networked household gadgets, B2B software, connected consumer electronics, and more. Its core technical requirements, detailed in Annex I, are extensive. The cornerstone is the mandate to ship products "without known exploitable vulnerabilities" and to deliver them with a "secure by default configuration." Other essential obligations include protecting against unauthorized access, ensuring the confidentiality and integrity of data, limiting attack surfaces, and minimizing data processing.

The Act also establishes ongoing responsibilities. Manufacturers must implement robust vulnerability handling processes, which include creating a Software Bill of Materials (SBOM) for their products. They are required to provide security updates for a support period of at least five years. Perhaps the most urgent requirement is the 24-hour deadline to notify the EU's cybersecurity agency, ENISA, of any "actively exploited vulnerability," a rule that demands mature and well-practiced incident response plans. Proving compliance requires meticulous documentation, including a cybersecurity risk assessment.

The only exceptions are for products where sector-specific legislation with equivalent cybersecurity requirements already exists, such as for medical devices, aviation, and cars. Certain open-source software developed or supplied outside the course of a commercial activity is also excluded from the direct obligations placed on manufacturers, though the commercial products that incorporate this software remain fully within scope. This wide-ranging applicability ensures that the CRA establishes a horizontal cybersecurity baseline for the digital economy.

The AI-coding paradox: speed, with risk

The rapid adoption of AI coding assistants introduces a new variable into the CRA compliance equation for developers and manufacturers. These tools accelerate development, but they also pose significant security risks. Trained on massive public code repositories, AI models learn from and replicate the countless vulnerabilities and insecure coding patterns contained within that data. Studies have shown that a substantial portion of AI-generated code (approximately 40% in some cases) contains security flaws like those on the CWE Top 25 list. Some examples of increased security exposure include:

  • Replicating insecure patterns, such as those leading to log injection or cross-site scripting attacks
  • Using outdated open-source libraries with known vulnerabilities, or even "hallucinating" packages that do not exist (this creates a potential attack vector where malicious actors can register those package names to distribute malware)
  • Poor, insecure prompts for AI-generated code that are widely reused and, combined with AI hallucinations, spread insecure patterns across organizations
  • Potential malicious poisoning of training data, where an attacker intentionally introduces vulnerable or backdoored code into public repositories that are likely to be scraped for model training
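To make the first risk concrete, here is a minimal, hypothetical sketch of the kind of insecure pattern an assistant can reproduce from its training data: log injection (CWE-117), where unsanitized user input lets an attacker forge log entries. The function names are illustrative, not taken from any real codebase.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("auth")

def sanitize_for_log(value: str) -> str:
    """Neutralize CR/LF so attacker input cannot forge extra log lines."""
    return value.replace("\r", "\\r").replace("\n", "\\n")

# Insecure pattern frequently reproduced from public code: raw user
# input concatenated into a log line. Input such as
# "bob\nINFO Login attempt for user: admin" forges a fake entry.
def record_login_insecure(username: str) -> None:
    log.info("Login attempt for user: " + username)

# Safer variant: sanitize untrusted input before logging it.
def record_login(username: str) -> None:
    log.info("Login attempt for user: %s", sanitize_for_log(username))
```

Both versions compile and run, which is exactly why automated analysis, rather than code review alone, is needed to tell them apart at scale.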

The CRA provides an unambiguous answer to the question of who is responsible for these AI-induced flaws: the manufacturer of the final product. The regulation makes no distinction between code written by a human and code suggested by AI. To meet the CRA's standard of due diligence, organizations must treat AI-generated code as an untrusted input that requires the same level of automated security analysis as any third-party library, if not a more stringent one.

A strategic framework for compliance

Addressing the dual challenges of the CRA and AI-generated code requires a framework that embeds automated security verification throughout the software development lifecycle.

  • Embedding security from the start: The CRA's "secure-by-design" principle requires shifting security left. This is enabled by Static Code Analysis and Static Application Security Testing (SAST) tools that integrate directly into the developer's IDE and the CI/CD pipeline. For example, a tool like SonarQube prevents issues from entering the main branch by giving your developers immediate feedback on vulnerabilities and coding errors as code is being written.
  • Maintaining control over AI-generated code: Organizations must verify, not just trust, AI-generated code. Doing this at scale requires automated guardrails. A quality gate, available in SonarQube, can stop vulnerable or low-quality code from entering production, acting as a non-negotiable checkpoint in the CI/CD pipeline, regardless of whether the code was written by a developer or an AI.
  • Mastering the software supply chain: The CRA's mandate for an SBOM makes robust Software Composition Analysis (SCA) essential. An effective SCA process, such as the one offered in SonarQube's Advanced Security offering, automatically flags risks in your third-party open source software based on dependency identification and continuous vulnerability analysis. It can also ensure a traceable vulnerability management process with SBOM generation capabilities.
  • Protecting data integrity: The CRA mandates that systems be resilient against manipulation and that the impact of security incidents be minimized. Taint Analysis, a SonarQube feature that traces untrusted user data flow across the entire application and third-party libraries to identify deeply embedded injection flaws, directly addresses these requirements. 
  • Safeguarding system access: SonarQube frequently finds instances where developers inadvertently commit hard-coded credentials to source control. The speed of AI-assisted development, while beneficial for productivity, heightens the risk of this occurring. To mitigate it, automated secrets detection is crucial. For example, SonarQube scans the entire codebase for patterns matching API keys, passwords, and other sensitive tokens, so that developers can remove this sensitive access data before it is exposed.
  • Demonstrating compliance: Proving compliance requires an auditable record of security activities, but it can be difficult to track all of the security activities across your codebases in a cohesive, efficient manner. Solutions like SonarQube include reports that ensure quick, consistent access to the documentation you need to show compliance with security standards like the OWASP Top 10 and CWE Top 25.
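To illustrate the class of flaw that taint analysis is built to catch, here is a minimal sketch (using Python's standard sqlite3 module; it is not SonarQube's implementation) of untrusted input flowing from a request boundary into a SQL statement, alongside the parameterized fix that breaks the taint path:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# Tainted flow (CWE-89): untrusted input is interpolated directly
# into the SQL string, so data becomes executable query syntax.
def find_user_insecure(conn, name: str):
    return conn.execute(
        "SELECT role FROM users WHERE name = '%s'" % name  # injectable
    ).fetchall()

# Parameterized query: the driver binds the value separately from the
# SQL text, so attacker input can never change the query structure.
def find_user(conn, name: str):
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()
```

With the classic payload `' OR '1'='1`, the insecure version returns every row in the table, while the parameterized version correctly returns nothing.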
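As a rough illustration of how secrets detection works in principle, the sketch below matches source lines against regular expressions for known token shapes. The two patterns shown are illustrative only; production scanners ship curated rule sets covering hundreds of credential formats.

```python
import re

# Illustrative-only patterns: an AWS-style access key ID and a
# quoted hard-coded password assignment.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),
    re.compile(r"(?i)\b(password|passwd|pwd)\s*=\s*['\"][^'\"]{4,}['\"]"),
]

def scan_for_secrets(source: str) -> list[str]:
    """Return every line of `source` that matches a secret pattern."""
    hits = []
    for line in source.splitlines():
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append(line.strip())
    return hits
```

Run in CI before merge, a check like this flags committed credentials while they are still cheap to rotate, instead of after they reach a public repository.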

A narrow window of opportunity

The Cyber Resilience Act's deadlines are firm and approaching. The obligation to report actively exploited vulnerabilities applies from September 11, 2026, with the full application of most other provisions following on December 11, 2027.

Organizations should begin preparing immediately by assessing which products fall under the CRA's scope, conducting a gap analysis of their current processes, evaluating their security tooling, and formalizing their incident response plans to meet the tight 24-hour reporting window.

The SonarQube platform integrates all the aforementioned capabilities into a single solution, ensuring a user-friendly and streamlined experience for developers and quality gatekeepers alike. This comprehensive approach allows teams to implement Sonar’s “trust and verify” approach to maintaining high standards of code quality and security, even as they adopt AI coding solutions, ultimately leading to more robust and reliable applications.

Software teams that treat the CRA as a mere compliance checklist to be managed with fragmented tools or manual processes will struggle to keep pace, exposing themselves to significant legal and financial risk. In contrast, organizations that embrace the spirit of the Act—a deep-seated commitment to producing high-quality, secure, and reliable software from the start—can transform this regulatory obligation into a powerful competitive advantage. By implementing an integrated framework to govern the quality and security of all code, regardless of its origin, companies can not only meet their legal duties but also build more robust products, foster greater customer trust, and ultimately innovate faster and more safely.

Transform your code with SonarQube

Contact us today