Codacy's New AI Risk Hub and AI Reviewer Bring Order to the Wild West of AI Code Compliance


The widespread adoption of AI coding tools is starting to feel like a Wild West for developers, engineering leaders, and compliance officers alike. With 41% of new code today generated by AI models trained on outdated, often vulnerable codebases, the volume of code exposed to conventional security risks like hardcoded secrets and insecure dependencies keeps accelerating. Coupled with a new wave of AI-specific exploits, such as the recent resurgence of invisible Unicode injections hitting NPM package users en masse, businesses are now scrambling to establish reliable, organization-wide controls for safe AI use policies and scalable, AI-aware code review solutions.

The AI Paradox: Security, Compliance, and the Speed Trap

The 2025 Stack Overflow Developer Survey found that 77.9% of developers use AI coding tools, helping teams move from ideas to code to working solutions far faster than traditional workflows allow. At the same time, 81.4% of developers have concerns about the security and privacy of data when using AI agents, and for good reason: “The biggest single frustration (66%) is dealing with ‘AI solutions that are almost right, but not quite,’ which often leads to the second-biggest frustration: ‘Debugging AI-generated code is more time-consuming’ (45%).”

Relying on old code review workflows is no longer sustainable: new levels of alert fatigue and automation bias, paired with a lack of accountability, push developers to bypass automated checks in an effort to meet delivery deadlines.

Introducing AI Risk Hub: Your New Governance Suite for AI Code Compliance and Risk

Today, we are launching a new way for security and engineering leaders to govern AI coding policies and establish automated AI safeguards for developers at scale.

Our new AI Risk Hub allows engineering teams to centrally define their AI policies and enforce them instantly across teams and projects, while tracking their organization-wide AI risk score based on a checklist of protection layers available in Codacy.

The first iteration of the AI Risk Hub delivers two new governance capabilities:

Unified, enforceable code-level AI policies

We have introduced the concept of “AI Policies” – a pre-defined, curated ruleset designed to prevent risks and vulnerabilities that are inherent to AI code from entering the codebase – which can be enforced immediately across all repositories and Pull Request checks.

AI Risk Hub Demo

The AI Policy covers four groups of AI-related risks:

Unapproved model calls

Ensure no unapproved models are used in production and get visibility into any compliance violations.
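As a rough illustration of what an allowlist check like this might look like (the `ALLOWED_MODELS` set, the regex, and the function below are hypothetical sketches, not Codacy's actual rule engine), a policy scanner could flag model identifiers in source code that are not on an organization-approved list:

```python
import re

# Hypothetical example, not Codacy's actual implementation: flag model
# identifiers in source code that are not on the organization's allowlist.
ALLOWED_MODELS = {"gpt-4o", "claude-sonnet-4"}  # assumed policy-defined set

# Naive pattern for model-name-looking string literals (illustrative only).
MODEL_LITERAL = re.compile(r"['\"]((?:gpt|claude|llama|gemini)[\w.-]*)['\"]")

def unapproved_model_calls(source: str):
    """Return (line_number, model_name) for every non-allowlisted model."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for model in MODEL_LITERAL.findall(line):
            if model not in ALLOWED_MODELS:
                hits.append((lineno, model))
    return hits

code = 'client.chat(model="gpt-4o")\nclient.chat(model="llama-3-8b")'
print(unapproved_model_calls(code))  # → [(2, 'llama-3-8b')]
```

A production rule would of course work on parsed ASTs and configuration files rather than raw string matching, but the policy shape is the same: a centrally defined allowlist enforced on every pull request.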

AI Safety

Coding with AI, and for AI, introduces a range of new concerns for engineering teams. We have created a new set of patterns, such as invisible Unicode character detection, that ensure safety practices are enforced and applied across the codebase.
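To make the invisible-Unicode risk concrete, here is a minimal sketch (an assumption-laden illustration, not Codacy's detector) of how such characters can be located: zero-width and other format-category characters render as nothing in most editors, which is exactly what makes them useful for smuggling hidden payloads.

```python
import unicodedata

# Hypothetical illustration: flag invisible or format-control characters
# that could hide malicious content in otherwise clean-looking source code.
SUSPICIOUS = {
    "\u200b",  # zero-width space
    "\u200c",  # zero-width non-joiner
    "\u200d",  # zero-width joiner
    "\u2060",  # word joiner
    "\ufeff",  # zero-width no-break space (BOM)
}

def find_invisible(source: str):
    """Return (line, column, codepoint) for each suspicious character."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for col, ch in enumerate(line, start=1):
            # Category "Cf" (format) covers most invisible control characters.
            if ch in SUSPICIOUS or unicodedata.category(ch) == "Cf":
                hits.append((lineno, col, f"U+{ord(ch):04X}"))
    return hits

payload = "const api = 'safe';\u200b // looks clean"
print(find_invisible(payload))  # → [(1, 20, 'U+200B')]
```

The line above looks identical to a harmless comment in an editor, yet carries a zero-width space that a reviewer would never see without tooling.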

Hardcoded Secrets

AI coding tools tend to take the quickest path to a working solution, and that includes how they handle secrets and credentials. We want to ensure anything created or used by AI is protected from misuse. And with the Guardrails IDE plugin, developers can catch secrets locally at the moment a coding agent introduces them, long before they reach Git.
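Secret detection of this kind typically starts from pattern matching on known credential shapes. The sketch below is a simplified, hypothetical illustration (the pattern names and the example key are not real credentials, and this is not Codacy's scanning engine):

```python
import re

# Hypothetical sketch: match common credential shapes line by line.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "generic_api_key": re.compile(
        r"(?i)api[_-]?key\s*[:=]\s*['\"][A-Za-z0-9]{16,}['\"]"
    ),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str):
    """Return (line_number, pattern_name) for each suspected secret."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((lineno, name))
    return findings

snippet = 'AWS_KEY = "AKIAIOSFODNN7EXAMPLE"\napi_key = "abcd1234efgh5678"'
print(scan_for_secrets(snippet))
# → [(1, 'aws_access_key'), (2, 'generic_api_key')]
```

Running a check like this as a pre-commit or IDE-level hook is what lets a leaked credential be caught before it ever enters version-control history, where rotation becomes the only remedy.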

Vulnerabilities (Insecure dependencies / SCA)

Dependencies that were considered harmless months ago, and were used to train your AI models, may pose serious security risks for your applications today. Ensure protection on all fronts by integrating vulnerability detection throughout your development lifecycle.
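At its core, a software composition analysis (SCA) check compares the exact versions you depend on against a vulnerability advisory feed. A minimal sketch of that idea, with an entirely made-up advisory set (neither the entries nor the function reflect Codacy's implementation):

```python
# Hypothetical illustration of an SCA-style check: compare pinned
# dependencies against a fictional advisory list of vulnerable versions.
KNOWN_VULNERABLE = {
    ("requests", "2.19.0"),   # fictional advisory entries for illustration
    ("lodash", "4.17.15"),
}

def flag_vulnerable(pins):
    """pins: iterable of (package, version) tuples from a lockfile."""
    return sorted(pin for pin in pins if pin in KNOWN_VULNERABLE)

lockfile = [("requests", "2.19.0"), ("flask", "3.0.0")]
print(flag_vulnerable(lockfile))  # → [('requests', '2.19.0')]
```

The important property, as the paragraph above notes, is that the advisory set changes over time: a daily re-scan of the same lockfile can surface new findings with no code change at all.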


Org-wide risk score and checklist for AI code governance

Every team using Codacy can now track their organizational AI Risk Level based on the progress of implementing a range of essential AI safeguards that can be enabled in Codacy.

With most repositories today receiving GenAI code contributions, the checklist covers seven essential source-code controls recommended for all projects:

AI Policy applied

Ensures every repository follows the same AI usage and security rules. Without a universal policy, teams may expose data or introduce insecure AI-generated code.

Coverage enabled

Checks that most repositories have coverage enabled. Repos without coverage operate without visibility, increasing the risk of undetected vulnerabilities.

Protected Pull Requests

Measures how many merged PRs passed required checks and gates. This prevents insecure or low-quality code from being merged.

Enforced gates

Confirms that repositories have active enforcement gates like quality or security checks. Gates block risky code before it reaches the main branch.

Daily Vulnerability Scans

Regularly scans all of your dependencies to identify vulnerabilities across your entire codebase.

Applications scanned (DAST)

Checks whether application-level (DAST) scanning is enabled and has at least one target configured. This catches runtime security issues that code analysis cannot detect.

AI BOM (Coming soon)

Provides a bill of materials of all AI models, libraries, and components used across the codebase. Knowing what AI systems are in place is essential for managing risk and ensuring compliance.

How to access the AI Risk Hub

The AI Risk Hub is now available to all organizations subscribed to the Business plan, with a limited-time preview available for Team plan subscribers. Check our pricing page and documentation to learn more.