AI compliance is evolving. Are your controls keeping up? Interpretation of the AI Act is often grounded in traditional GenAI use cases. But what happens when AI becomes agentic: learning, acting, and adapting autonomously within its ecosystem? That's when compliance needs to evolve too. In this post: how to govern AI that thinks, and moves, on its own.
The EU's AI Act sends a clear signal: risk governance, transparency, and accountability are no longer optional; they are central to compliance. But just as companies begin aligning with these new rules, a new technological frontier is reshaping compliance strategies: agentic AI.
This type of AI does not just assist; it acts and evolves. It independently pursues goals within a workflow, learns from its environment, and makes autonomous decisions across ecosystems involving multiple data sources and third-party applications. By orchestrating a range of models, it can effectively steer and improve processes, combining the strengths of various specialties and domains of expertise.
Agentic AI unlocks powerful capabilities such as multistep problem solving, but it also introduces a new level of risk and complexity, and with it additional compliance challenges. When these agents fall under the high-risk category, they are subject to strict regulatory oversight under the AI Act. However, even when the AI systems an organisation develops or uses are not formally classified as high-risk, a responsible AI approach, aligned with the spirit of the regulation, calls for similar safeguards. Proactively adopting comparable controls strengthens both governance and trust.
This blog post explores how organisations can prepare in practice, and what it takes to operationalise AI Act requirements in the age of agentic AI.
Unlike traditional monolithic generative models or retrieval-augmented systems, agentic AI operates with a high degree of autonomy. These systems pursue goals independently, learn from their environment, and act across ecosystems of data sources and third-party applications.
What sets agentic AI apart is its integration of core problem-solving capabilities: memory, planning, orchestration, and the ability to interact with external applications. Together, these features make agentic systems highly effective at optimising processes and executing decisions autonomously.
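To make this concrete, here is a minimal sketch of such a control loop in Python. Everything in it (the Agent class, the tool registry, the single-step planner) is illustrative and not tied to any specific framework; a production agent would delegate planning to a model and run many steps.

```python
# Minimal sketch of an agentic control loop: plan, act via a tool,
# observe, and update memory. All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    goal: str
    tools: dict[str, Callable[[str], str]]      # external applications the agent may call
    memory: list[str] = field(default_factory=list)

    def plan(self) -> tuple[str, str]:
        # A real system would use an LLM planner; here we just pick the first tool.
        tool_name = next(iter(self.tools))
        return tool_name, self.goal

    def step(self) -> str:
        tool_name, query = self.plan()            # planning
        observation = self.tools[tool_name](query)  # orchestration / tool use
        self.memory.append(observation)           # memory carried into later steps
        return observation

agent = Agent(goal="summarise open invoices",
              tools={"search": lambda q: f"results for '{q}'"})
print(agent.step())
```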
Agentic AI fundamentally shifts the risk profile. As these systems increase in "agentness" (broader goals, greater adaptability, more independence), the risks scale accordingly.
While the AI Act provides a strong foundation, applying its requirements to agentic AI calls for reinterpretation in four key areas:
Risk management, robustness, and security (Articles 9, 15, 26). Agentic systems evolve in production. Although the AI Act mandates risk evaluation before and after deployment, most of its risk mitigation requirements are concentrated in the development phase, placing primary responsibility on the provider. Deployers must notify providers of emerging risks but are only obligated to implement mitigations in limited circumstances, unless the evolution of the agentic AI system amounts to a substantial modification by the user; what constitutes such a modification remains unclear at this stage.

In practice, risk management must be continuous, with real-time monitoring and mitigation embedded into system operations. Fixed performance thresholds are insufficient: these systems adapt, so compliance must ensure consistent reliability in dynamic environments, not just initial accuracy. Similarly, robustness must account for failure modes that arise over time, requiring systems to degrade safely and recover under unexpected conditions. Security must evolve as well: as agents increasingly integrate with external tools and APIs, their operational boundaries become fluid, and effective security requires active assurance across the full ecosystem, not just the core model.
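As an illustration of what embedded, continuous risk management could look like, here is a hedged Python sketch of a runtime monitor that tracks a rolling quality signal and degrades safely when it drops. The window size, quality floor, and the evaluation_stream, switch_to_fallback, and notify_provider stubs are assumptions made for the example, not AI Act requirements.

```python
# Sketch of continuous risk monitoring: a rolling quality check that
# triggers safe degradation instead of relying on a one-time evaluation.
from collections import deque

def evaluation_stream():
    # Hypothetical feed of per-action quality scores in [0.0, 1.0].
    yield from [0.95, 0.97, 0.40, 0.30] * 50

def switch_to_fallback():
    print("degrading safely: switching agent to a constrained fallback mode")

def notify_provider(msg: str):
    print(f"notifying provider: {msg}")  # deployer duty to report emerging risks

class RuntimeRiskMonitor:
    def __init__(self, window: int = 100, min_quality: float = 0.9):
        self.scores = deque(maxlen=window)  # most recent outcome scores
        self.min_quality = min_quality      # assumed acceptable floor

    def record(self, score: float) -> None:
        self.scores.append(score)

    def healthy(self) -> bool:
        # Too little data counts as healthy here; tune to your risk appetite.
        if len(self.scores) < self.scores.maxlen:
            return True
        return sum(self.scores) / len(self.scores) >= self.min_quality

monitor = RuntimeRiskMonitor()
for score in evaluation_stream():
    monitor.record(score)
    if not monitor.healthy():
        switch_to_fallback()
        notify_provider("rolling quality below assumed floor")
        break
```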
Human oversight (Article 14). Manual approvals are too slow for systems that act continuously. Oversight must be embedded into the system via dynamic guardrails, real-time intervention points, and escalation protocols. Providers must update risk controls based on post-market performance, while deployers contribute operational insights. Oversight becomes a shared, continuous responsibility throughout the system's lifecycle.
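Below is a minimal sketch of such an embedded intervention point, assuming a hypothetical per-action impact score and a hard-coded escalation threshold; a real system would derive impact from policy rather than a fixed number.

```python
# Sketch of embedded oversight: routine actions proceed within guardrails,
# high-impact actions are escalated to a human review queue.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    impact: float            # 0.0 (trivial) to 1.0 (irreversible / high stakes)

ESCALATION_THRESHOLD = 0.7   # assumed value; calibrate per use case
review_queue: list[Action] = []

def execute(action: Action) -> str:
    if action.impact >= ESCALATION_THRESHOLD:
        review_queue.append(action)  # real-time intervention point for a human
        return f"escalated for human review: {action.description}"
    return f"auto-approved within guardrails: {action.description}"

print(execute(Action("refresh cached report", impact=0.1)))
print(execute(Action("issue refund of 9,000 EUR", impact=0.9)))
```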
Transparency (Article 13). One-time disclosures fall short. Effective transparency requires ongoing, real-world insight into what the system is doing and why. Simple, user-friendly explanations are harder to deliver when decisions emerge from complex, multivariate reasoning. But meaningful interpretability is still possible: surface the key factors influencing each decision, even if the full logic is irreducible.
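One way to operationalise this, sketched below under the assumption that per-decision factor weights are available (for example from a feature-attribution method): rank the factors and surface only the strongest few alongside each decision. The factor names and weights are illustrative.

```python
# Sketch of decision-level transparency: every output carries the top
# factors that drove it, even when the full reasoning chain is too
# complex to expose in full.
def explain_decision(factors: dict[str, float], top_k: int = 3) -> list[str]:
    # Rank factors by absolute contribution and keep the strongest few.
    ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{name} ({weight:+.2f})" for name, weight in ranked[:top_k]]

decision_factors = {"payment_history": 0.42, "account_age": 0.10,
                    "recent_disputes": -0.31, "region": 0.05}
print("Decision: flag for review")
print("Key factors:", ", ".join(explain_decision(decision_factors)))
```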
(Articles 11, 12, 18 & 19) Agentic AI demands living documentation: regularly updated to reflect changes in logic, behaviour, and system architecture. Logging individual outputs is not enough, organisations need structured records of how decisions were made, with versioned archives that reflect the system’s evolution. Instead of archiving everything, the emphasis should be on retaining interpretable and relevant data that supports audits and investigations.
Identifying risks is only the beginning. The real challenge lies in translating the AI Act’s high-level requirements into operational governance. That requires changes across both technical systems and organisational processes.
The core pillars of the AI Act (risk management, transparency, oversight) remain relevant, but how we apply them must evolve.
Governing agentic AI isn’t just a technical task. It’s a shared responsibility, and an opportunity to lead. By aligning legal compliance with technical agility, organisations can build AI systems that are not only intelligent, but also safe, accountable, and worthy of trust.
mila.verhaag@aceconsulting.nl
+31 (0)85 3034271
iris.wuisman@aceconsulting.nl