
AI Compliance in Motion: Adapting to Agentic Systems

AI compliance is evolving. Are your controls keeping up?

Interpretation of the AI Act is often grounded in traditional GenAI use cases. But what happens when AI becomes agentic: learning, acting, and adapting autonomously within its ecosystem? That’s when compliance needs to evolve too. In this post: how to govern AI that thinks, and moves, on its own.

Dynamic AI Agents, Static Compliance: Rethinking the AI Act for an Autonomous Era

The EU’s AI Act sends a clear signal: risk governance, transparency, and accountability are no longer optional; they are central to compliance. But just as companies begin aligning with these new rules, a new technological frontier is reshaping compliance strategies: agentic AI.

This type of AI does not just assist; it acts and evolves. It independently pursues goals within a workflow, learns from its environment, and makes autonomous decisions across ecosystems involving multiple data sources and third-party applications. By orchestrating a range of models, it can steer and improve processes effectively, combining the strengths of various specialties and domains of expertise.

Agentic AI unlocks powerful capabilities such as multistep problem solving, but it also introduces a new level of risk and complexity, leading to additional compliance challenges. When these agents fall under the high-risk category, they are subject to strict regulatory oversight under the AI Act. But even when organisations develop or use AI systems that are not formally classified as high-risk, a responsible AI approach, aligned with the spirit of the regulation, calls for similar safeguards. This encourages organisations to proactively adopt comparable controls, strengthening governance and trust.

This blog post explores how organisations can prepare in practice, and what it takes to operationalise AI Act requirements in the age of agentic AI.

What is Agentic AI, and Why Does It Matter?

Unlike traditional monolithic generative models or retrieval-augmented systems, agentic AI operates with a high degree of autonomy. These systems:

  • Pursue goals rather than simply producing outputs
  • Learn and adapt dynamically, updating their strategies or behaviours over time
  • Take action across both digital and physical systems

What sets agentic AI apart is its integration of core problem-solving capabilities, including memory, planning, orchestration, and the ability to interact with external applications. Together, these features make agentic systems highly effective in optimizing processes and executing decisions autonomously.
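To make these building blocks concrete, the sketch below shows a bare-bones agentic loop: plan a step towards a goal, act through a tool, and update memory with the observation. Every name in it is invented for illustration; it is not a reference to any particular agent framework.

```python
# Minimal, illustrative agentic loop: plan -> act -> remember.
# All class and method names here are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Agent:
    goal: str
    memory: list = field(default_factory=list)  # episodic memory of past steps

    def plan(self) -> str:
        # A real system would use a model to decompose the goal; stubbed here.
        return f"next step towards: {self.goal}"

    def act(self, step: str) -> str:
        # In practice this would call an external tool, API, or application.
        return f"result of '{step}'"

    def run(self, max_steps: int = 3) -> None:
        for _ in range(max_steps):
            step = self.plan()               # planning
            observation = self.act(step)     # orchestration / tool use
            self.memory.append(observation)  # memory update


agent = Agent(goal="summarise open invoices")
agent.run()
print(agent.memory)
```

Even in this toy form, the loop shows why governance gets harder: each pass through it is a decision the system takes on its own.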

A New Risk Landscape

Agentic AI fundamentally shifts the risk profile. As these systems increase in “agentness” (broader goals, greater adaptability, and more independence), the risks scale accordingly:

  • Emergent behavior: Agents learn through interaction, causing their behavior to shift in ways that are often unanticipated. As a result, static, upfront risk assessments are no longer sufficient. Risk management needs to be ongoing and responsive to how the system evolves in real-world conditions. This broadens and shifts the scope of risk evaluation across the AI value chain: risk mitigation is not confined to the development phase but becomes equally, if not more, critical during deployment.
  • External integration risk: Agentic systems often autonomously interface with third-party tools, APIs, and environments, meaning that their operational boundary is constantly shifting. A vulnerability in any integrated service can cascade into the agent itself, significantly expanding the attack surface and creating a hard-to-control security environment.
  • The accountability gap: These systems operate via countless micro-decisions, which makes it difficult to trace why something happened and complicates compliance with the AI Act’s transparency and auditability standards.


The AI Act Through an Agentic Lens

While the AI Act provides a strong foundation, applying its requirements to agentic AI calls for reinterpretation in four key areas:

1. Risk management must account for real-time evolution and be ecosystem-aware

(Articles 9, 15, 26) Agentic systems evolve in production. Although the AI Act mandates risk evaluation before and after deployment, most of its risk mitigation requirements remain concentrated in the development phase, which places the primary responsibility on the provider. Users are required to notify providers of emerging risks but are only obligated to implement risk mitigations in limited circumstances, unless the evolution of the agentic AI system is deemed a substantial modification by the user. What constitutes such a modification, however, remains unclear at this stage.

In practice, risk management must be continuous, with real-time monitoring and mitigation embedded into system operations. Fixed performance thresholds are insufficient: these systems adapt, which means compliance must ensure consistent reliability in dynamic environments, not just initial accuracy. Similarly, robustness must account for failure modes that arise over time, requiring systems to degrade safely and recover under unexpected conditions.

Security must evolve as well. As agents increasingly integrate with external tools and APIs, their operational boundaries become fluid. Effective security requires active assurance across the full ecosystem, not just the core model.
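As a thought experiment, the sketch below shows what continuous risk monitoring could look like in practice: live behaviour is compared against a deployment-time baseline, and a mitigation path is triggered when drift exceeds tolerance. The metric, baseline, and threshold are illustrative assumptions, not AI Act prescriptions.

```python
# Illustrative sketch of continuous, in-production risk monitoring.
# The baseline, tolerance, and "mitigate" response are assumptions.
from statistics import mean

BASELINE_ACCURACY = 0.92  # accuracy measured at deployment (assumed)
DRIFT_TOLERANCE = 0.05    # maximum acceptable drop before mitigation (assumed)


def check_risk(recent_outcomes: list[bool]) -> str:
    """Compare live accuracy over a window of outcomes to the baseline."""
    live_accuracy = mean(recent_outcomes)
    if BASELINE_ACCURACY - live_accuracy > DRIFT_TOLERANCE:
        return "mitigate"  # e.g. restrict autonomy, alert the provider
    return "ok"


# A window of recent task outcomes (True = correct)
print(check_risk([True, True, False, False, True, False]))  # -> mitigate
```

Run on every monitoring interval rather than once at release, a check like this turns a fixed performance threshold into an ongoing control.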

2. Human oversight must guide behaviour, not just approve outputs

(Article 14) Manual approvals are too slow. Oversight must be embedded into the system via dynamic guardrails, real-time intervention points, and escalation protocols. Providers must update risk controls based on post-market performance, while deployers need to contribute operational insights. Oversight becomes a shared, continuous responsibility throughout the system’s lifecycle.
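A minimal sketch of what such embedded oversight could look like: routine actions pass, forbidden actions are blocked outright, and high-impact actions are escalated to a human in real time. The action names and the escalation threshold are hypothetical.

```python
# Illustrative guardrail: allow, block, or escalate agent actions.
# BLOCKED_ACTIONS and ESCALATION_THRESHOLD are assumptions for this sketch.
BLOCKED_ACTIONS = {"delete_records"}
ESCALATION_THRESHOLD = 10_000  # e.g. payment amount requiring human sign-off


def guardrail(action: str, impact: float) -> str:
    if action in BLOCKED_ACTIONS:
        return "block"              # hard limit: never executed
    if impact >= ESCALATION_THRESHOLD:
        return "escalate_to_human"  # real-time intervention point
    return "allow"                  # within the agent's autonomy envelope


print(guardrail("issue_refund", impact=250))     # allow
print(guardrail("issue_refund", impact=25_000))  # escalate_to_human
print(guardrail("delete_records", impact=0))     # block
```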

3. Transparency must reflect system evolution and complexity

(Article 13) One-time disclosures fall short. Effective transparency requires ongoing, real-world insight into what the system is doing and why. Simple, user-friendly explanations are harder to deliver when decisions emerge from complex, multivariate reasoning. But meaningful interpretability is still possible by surfacing the key factors that influence a decision, even when the full logic cannot be reconstructed.
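One way to surface those key factors, sketched below with invented factor names and weights: record, next to each decision, the handful of inputs that influenced it most, even when the full reasoning chain cannot be reproduced.

```python
# Hypothetical "key factor" transparency record. Factor names and weights
# are invented for illustration; in practice they would come from the
# system's own attribution or logging mechanism.
def explain(decision: str, factors: dict[str, float], top_n: int = 3) -> dict:
    """Return the decision plus its most influential factors, by magnitude."""
    ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return {"decision": decision, "key_factors": ranked[:top_n]}


record = explain(
    "reject_claim",
    {"missing_documents": 0.61, "claim_amount": 0.22, "customer_tenure": -0.09},
)
print(record)
```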

4. Documentation must be dynamic and auditable over time

(Articles 11, 12, 18 & 19) Agentic AI demands living documentation: regularly updated to reflect changes in logic, behaviour, and system architecture. Logging individual outputs is not enough; organisations need structured records of how decisions were made, with versioned archives that reflect the system’s evolution. Instead of archiving everything, the emphasis should be on retaining interpretable and relevant data that supports audits and investigations.
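A minimal sketch of such a living record, with assumed field names (this is not an AI Act template): each entry ties a decision to the system version and rationale in force at the time, so the archive mirrors the system’s evolution.

```python
# Illustrative versioned decision record. Field names are assumptions.
import json
from datetime import datetime, timezone


def decision_record(system_version: str, decision: str, rationale: str) -> str:
    """Build one structured, auditable log entry."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_version": system_version,  # ties the log to the agent's state
        "decision": decision,
        "rationale": rationale,            # interpretable, audit-relevant data
    }
    return json.dumps(entry)


# Appended to a write-once log so audits can replay the system's history.
print(decision_record("agent-2.3.1", "approve_invoice", "matched PO and receipt"))
```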

From Principle to Practice: Governing Agentic AI

Identifying risks is only the beginning. The real challenge lies in translating the AI Act’s high-level requirements into operational governance. That requires changes across both technical systems and organisational processes.

Here are three practical priorities:

  • Shared, ongoing risk assessment: Providers must build tools for detecting emergent risks. Deployers must monitor real-world system behaviour and its effects on end users and fundamental rights. Feedback loops are essential.
  • Dynamic transparency and real-time monitoring: Agentic AI systems require traceability infrastructure: unique system IDs, behavioral dashboards, and activity logs that show how and why decisions were made, not just what the outcome was.
  • Adaptive oversight, both technical and human: Controls must scale with speed. That means automated safeguards (like action filters and emergency shutdowns), layered permissions, and AI-literate human operators empowered to intervene when it counts.
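To illustrate the third priority, here is a minimal sketch of layered, adaptive controls: per-role permissions plus an emergency stop that a human operator can flip to override everything. Roles, actions, and the kill-switch mechanism are assumptions, not a reference implementation.

```python
# Illustrative layered permissions with an emergency stop.
# Role names and actions are invented for this sketch.
PERMISSIONS = {
    "read_only_agent": {"read"},
    "operations_agent": {"read", "update"},
}
emergency_stop = False  # set to True by a human operator to halt all actions


def authorise(role: str, action: str) -> bool:
    """Allow an action only if the role permits it and no shutdown is active."""
    if emergency_stop:
        return False  # shutdown overrides every permission
    return action in PERMISSIONS.get(role, set())


print(authorise("operations_agent", "update"))  # True
print(authorise("read_only_agent", "update"))   # False
emergency_stop = True
print(authorise("operations_agent", "read"))    # False
```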


Final Thought: Same Principles, New Execution

The core pillars of the AI Act (risk management, transparency, oversight) remain relevant. But how we apply them must evolve.

Agentic AI requires governance that is:

  • Continuous, not one-off
  • Interpretative, not black-and-white
  • Collaborative, not siloed

Governing agentic AI isn’t just a technical task. It’s a shared responsibility, and an opportunity to lead. By aligning legal compliance with technical agility, organisations can build AI systems that are not only intelligent, but also safe, accountable, and worthy of trust.

Mila Verhaag

Business Analyst

+31 (0)85 3034271

LinkedIn