
Navigating the AI Act

Compliance and strategic opportunities for financial institutions

The EU AI Act is the first comprehensive risk-based regulatory framework governing AI across sectors, setting standards for High-Risk AI Systems (HRAIS) and General-Purpose AI Models (GPAIMs), both critical for Financial Institutions (FIs).


AI Act Compliance for Financial Institutions: Key Obligations and Roles

HRAIS include critical applications like credit scoring, fraud detection, and risk management. These systems impact financial access and consumer trust. FIs operating these systems face stringent obligations, such as risk assessments, transparency, and governance.

An FI’s specific obligations under the AI Act depend on its role, such as provider, deployer, distributor, or importer. In cases where FIs assume multiple roles, they must meet the full set of requirements for each role.

For GPAIMs, while initial transparency and governance standards apply, additional HRAIS obligations are triggered when models are customized for high-risk use cases. This ensures that both general-purpose and high-risk applications remain compliant under the AI Act’s framework.

 

What key compliance challenges does the AI Act bring to Financial Institutions?

High-risk AI systems under the AI Act face stringent requirements, including mandatory documentation, risk management, transparency, and post-deployment monitoring. Meeting these requirements demands significant resources and can be challenging for institutions with legacy systems.

Mitigating bias to ensure fairness in decision-critical AI models requires not only advanced technical solutions but also multi-functional collaboration across the AI lifecycle, with comprehensive oversight.

 

Transparency refers to making the overall functioning and decision-making process of an AI system visible and understandable, while explainability focuses on clarifying individual decisions and outcomes. For FIs, this poses a compliance challenge because many AI systems are complex and difficult to interpret, making it hard to demonstrate how decisions are made, meet regulatory standards, and maintain trust with clients and regulators.

FIs must also implement effective governance structures to monitor AI systems and take action when necessary.

Compliance further demands significant resource allocation and expertise: implementing the AI Act’s requirements calls for specialised skills in AI ethics, compliance, data science, and risk management. It also involves integration with existing governance structures and the complex classification and risk assessment of AI systems, especially in credit scoring, fraud detection, and financial advising.

Institutions should begin preparations now to ensure timely compliance and strategic alignment of their AI initiatives.

  • February 2025: Prohibition on AI systems posing unacceptable risk comes into effect.

  • August 2025: Compliance for general-purpose AI models (GPAIMs).

This entails transparency, governance, and documentation requirements. The appointment of Member State competent authorities and annual Commission reviews of prohibited AI will also commence.

  • Late 2025 to early 2026: Guidelines for Article 6 implementation.

The Commission is expected to provide guidelines specifying the practical implementation of Article 6, including post-market monitoring plans.

  • August 2026: Full compliance for high-risk AI systems.

By this point, organisations must demonstrate conformity with the Act, including robust risk management, documentation, and reporting for high-risk AI applications.

  • Late 2026: Regulatory sandboxes in practice.

Member States are expected to have established at least one AI regulatory sandbox at the national level by this date.

  • 2027 onward: Regular evaluations and reviews.

The Commission will conduct regular evaluations and reviews of the AI Act to ensure it adapts to new AI innovations, emerging risks, and global standards.

Our Responsible AI and AI Act experts

Alice Al Tayar

Senior Consultant

Mart Spekreijse

Business Analyst

Iris Wuisman

Partner

Olga Popescu

Senior Consultant

Ready to take your Responsible AI journey forward?

Contact us today to discuss how we can help you navigate AI regulation in Europe and position your institution for sustainable success.