Banks today are navigating an increasingly complex web of digital regulations, spanning data protection, operational resilience, and now, AI-specific risk management. While regulatory overlap is nothing new, the arrival of the AI Act brings a long-standing challenge into sharper focus: What new governance elements must be added on top of existing frameworks? And just as crucially, which obligations require entirely new controls or adjustments to current ones? For European financial institutions, answering these questions is now central to aligning compliance with evolving technological realities.
Multiple frameworks, one problem
The problem isn’t that regulations are unclear, but that each comes with its own workstreams, owners, and audits, even where requirements overlap. Few areas make this clearer than AI governance, where the AI Act brings new obligations that intersect directly with existing GDPR and DORA controls.
AI governance faces unclear standards and looming deadlines
The AI Act categorises AI systems by risk, with high-risk financial applications facing strict obligations around risk assessment, data quality, and traceability. Which obligations apply depends on the institution’s role in relation to the AI system, such as provider or deployer. These requirements go beyond traditional software governance, demanding transparency, data lineage, and continuous monitoring.
The challenge becomes more acute when considering that the AI Act’s detailed implementation guidance remains in development. While the legislation entered into force in August 2024, harmonised standards providing concrete compliance pathways are still being finalised by European Standardisation Organisations, now expected to be delayed until 2026.
This timing gap creates a strategic dilemma: move too quickly and risk misaligned controls; wait too long and risk missing the August 2026 enforcement deadline for high-risk AI systems.
AI Act, GDPR, and DORA demand the same foundations
Many compliance teams find that the AI Act does not exist in isolation: it intersects significantly with existing regulatory frameworks that already consume substantial institutional resources. Banks have spent considerable effort mapping GDPR requirements for data protection and are simultaneously implementing DORA’s ICT resilience controls, layering obligation on top of obligation. Many of the same or similar procedural and strategic requirements now appear across all three regimes. For example:
Logging, monitoring, and incident reporting:
Across the AI Act (arts. 12 and 26(5–6) for deployers), GDPR (arts. 30, 33–34), and DORA (arts. 10, 17–19), banks are required to monitor system behaviour, retain traceable logs, and notify stakeholders when serious issues occur. While the definitions vary – technical incidents under DORA, personal data breaches under GDPR, and AI malfunctions under the AI Act – the underlying building blocks often overlap. Yet many firms still maintain separate owners, systems, and audit trails for each regime.
Data governance and quality:
The AI Act requires high-quality datasets to minimise risks of discriminatory outcomes (arts. 10 and 26(4)), GDPR enforces accurate and up-to-date personal data (arts. 5(1)(d) and (f)), and DORA mandates data integrity to ensure operational resilience (art. 5(2)(b)). All three regimes demand a robust data-governance framework. Addressing these requirements in silos not only duplicates effort but also risks inconsistent standards across the organisation.
Control silos stall innovation
This fragmented approach can lead to ‘compliance silos’, where isolated ownership structures and parallel mitigation actions result in duplicated controls, overstretched resources, and, most critically, delayed or stalled AI pilots that could deliver real value.
The uncertainty surrounding the final details of the AI Act adds to the challenge. Only 11% of European banks feel prepared for the AI Act, while 70% admit they are only partially ready (EY, 2024). In June 2025, banks were among the signatories of a cross-industry call for a delay to the AI Act, citing fragmented standards and unclear requirements (EU AI Champions Initiative, 2025). And it is not just AI: overlapping EU regulations impose an estimated €150 billion of annual compliance costs across industries (European Commission, 2025).
With harmonised standards still in development, locking in controls too early may lead to misalignment with future requirements. On the other hand, waiting for complete clarity can push delivery timelines dangerously close to regulatory deadlines.
Compliance teams find themselves squeezed between premature implementation and regulatory inertia, all while juggling requirements that could be far more efficient if tackled in a unified way. So, how can banks move quickly and confidently in this overlapping, uncertain regulatory environment? More specifically:
How do you design AI governance today, while the rules are still evolving, without creating yet another silo next to GDPR and DORA? How do you avoid duplicating control efforts and instead build one scalable framework that serves all regimes?

Integrated governance fabric for AI and beyond
The answer lies in adopting an approach that treats regulatory convergence as an opportunity rather than a burden, with technology as an enabler. Treating the AI Act as a test case for broader governance improvement lets banks solve two problems at once: meeting imminent AI obligations, and reducing the duplication already introduced by the likes of GDPR and DORA. Here’s a four-step approach to building that governance fabric, starting with the AI Act but designed to scale across frameworks.
Step 1: Position yourself against the AI Act
Start by mapping your current and planned AI applications against the AI Act’s risk categories (limited, high-risk, prohibited), and clarify your organisation’s role, as each carries different obligations. Link the applicable articles to affected business functions and processes. Design lightweight classification processes that enable rapid assessment of new AI initiatives. Identify which use cases require immediate attention vs. those that can evolve with emerging standards.
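To make this concrete, here is a minimal sketch of what such a lightweight classification process could look like as an AI inventory. All use-case names, risk tiers, and business functions are illustrative assumptions, not prescribed by the AI Act text itself:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

@dataclass
class AIUseCase:
    name: str
    purpose: str
    role: Role                  # institution's role determines obligations
    risk_tier: RiskTier
    business_functions: list    # functions/processes the articles attach to

# Illustrative inventory entries (hypothetical use cases)
inventory = [
    AIUseCase("credit-scoring-v2", "consumer creditworthiness assessment",
              Role.DEPLOYER, RiskTier.HIGH_RISK, ["retail lending"]),
    AIUseCase("chatbot-faq", "customer FAQ assistant",
              Role.DEPLOYER, RiskTier.LIMITED, ["customer service"]),
]

def needs_immediate_attention(uc: AIUseCase) -> bool:
    """High-risk systems face the August 2026 deadline; prioritise them."""
    return uc.risk_tier in (RiskTier.HIGH_RISK, RiskTier.PROHIBITED)

urgent = [uc.name for uc in inventory if needs_immediate_attention(uc)]
```

Even a simple structured inventory like this makes the "immediate attention versus evolve with standards" split a query rather than a judgement call repeated per initiative.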
Tip: ACE’s expert-led workshops guide you through AI-Act scoping, running applicability scans and risk-classification sessions to build and expand your AI inventory. Meanwhile, ACE RegAI’s impact-assessment module automatically maps every relevant article to your business functions, so you can focus on innovation rather than manual cross-referencing.
This same scoping approach applies to other frameworks beyond the AI Act: understanding where regulatory obligations attach to business processes and systems is the first step in reducing duplicated work.
Step 2: Assess gaps and overlaps with existing controls
Evaluate which AI Act requirements are already covered by existing controls and where clear gaps remain. Pay special attention to implementation overlaps, ownership conflicts, and areas of regulatory convergence like data classification, incident response, and third-party risk. This step helps determine where new controls are genuinely needed, and where existing ones can be re-used or extended.
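A gap-and-overlap assessment can be sketched as a simple mapping exercise. The requirement names, article references, and control IDs below are illustrative placeholders, assuming an institution that already runs DORA and GDPR control programmes:

```python
# Each requirement lists the regimes imposing it; existing_controls records
# which requirements are already covered, and by which control (illustrative).
requirements = {
    "event logging": {"AI Act art. 12", "GDPR art. 30", "DORA art. 10"},
    "incident reporting": {"AI Act art. 26(5)", "GDPR art. 33", "DORA art. 17"},
    "dataset bias testing": {"AI Act art. 10"},
}
existing_controls = {
    "event logging": "ICT-LOG-01 (DORA programme)",
    "incident reporting": "PRIV-INC-03 (GDPR programme)",
}

# Gaps: requirements with no existing control -> genuinely new controls needed
gaps = [r for r in requirements if r not in existing_controls]

# Reuse candidates: existing controls touched by more than one regime ->
# extend the control rather than duplicating it per framework
reuse = {r: c for r, c in existing_controls.items()
         if len(requirements[r]) > 1}
```

The output separates "build new" from "extend existing", which is exactly the distinction this step needs to surface before any drafting begins.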
Tip: Leverage ACE’s in-house Responsible AI framework to map AI Act requirements against your existing control environment. Our alignment workshops help identify reuse candidates, surface control gaps, and assign clear responsibilities.
Step 3: Draft a provisional AI control set
Develop targeted controls only where real gaps exist. These AI-specific measures (such as dataset bias testing, retraining frequency, or explainability thresholds) should be treated as “version 0.9” controls, designed for flexibility as harmonised standards evolve. When CEN-CENELEC guidance is finalised, these baseline controls should then require only minor updates.
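A provisional control can be captured as a versioned record that carries its regulatory lineage from day one, so later rationalisation has something to trace. The control ID, description, and evidence location below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Control:
    control_id: str
    description: str
    version: str            # "0.9" until harmonised standards are finalised
    obligations: list       # regulatory lineage: articles this control serves
    evidence_location: str  # where auditors find proof of operation

# Illustrative "version 0.9" control for a genuine AI Act gap
bias_testing = Control(
    control_id="AI-DQ-01",
    description="Test training and validation datasets for discriminatory "
                "bias before each model release",
    version="0.9",
    obligations=["AI Act art. 10"],
    evidence_location="model-risk/bias-reports/",
)
```

Keeping the version and lineage on the record itself means a standards update becomes a version bump plus a diff, not a rediscovery exercise.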
Tip: Use ACE’s RegAI Generative Accelerator to support the drafting of AI controls and map them back to applicable obligations. This focused module helps deliver a targeted first draft, ready for refinement and later rationalisation in the broader control library.
Step 4: Rationalise controls and monitor
Merge overlapping controls into a single, traceable framework that maintains regulatory lineage and connects requirements to specific evidence locations.
For instance, consolidate the logging requirements from the earlier “Logging, monitoring, and incident reporting” example into a single “Event logging” control:
- Implement immutable, timestamped event logging for high-risk AI systems, ICT systems supporting critical or important functions, and systems processing personal data under GDPR, capturing user/role IDs, actions/events, and affected data elements. Store logs in a tamper-evident repository, retain them for the defined period, and integrate with automated alerting or monitoring mechanisms where required.
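To illustrate one way the tamper-evident part of such a control can work technically, here is a minimal hash-chained log sketch: each record stores the hash of its predecessor, so any later modification breaks the chain. This is a common tamper-evidence technique offered as an assumption-laden sketch, not a full compliance implementation (no retention, alerting, or access control shown):

```python
import hashlib
import json
import time

class EventLog:
    """Append-only event log where each record links to the hash of the
    previous record, making after-the-fact tampering detectable."""

    GENESIS = "0" * 64

    def __init__(self):
        self.records = []
        self._last_hash = self.GENESIS

    def append(self, user_id: str, action: str, data_elements: list):
        record = {
            "ts": time.time(),        # timestamp
            "user": user_id,          # user/role ID
            "action": action,         # action/event
            "data": data_elements,    # affected data elements
            "prev": self._last_hash,  # link to predecessor
        }
        # Deterministic serialisation so hashes are reproducible on verify
        self._last_hash = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.records.append(record)

    def verify(self) -> bool:
        """Recompute the chain; any edited record breaks a 'prev' link."""
        prev = self.GENESIS
        for r in self.records:
            if r["prev"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(r, sort_keys=True).encode()).hexdigest()
        return True
```

In practice a bank would back this with a write-once store and SIEM integration; the point here is only that "tamper-evident" is a verifiable property, not a policy statement.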
Or to cover the overlapping regimes in data governance, a potential “Data governance and integrity” control could be formulated in a similar way. These are starting points. The same ontology of controls and evidence can, and should, be expanded to cover frameworks like NIS2 or ISO 27001, creating a foundation for managing evolving digital risk and compliance without re-inventing your controls every time a new regulation arrives.
Tip: RegAI’s Control Rationalisation engine accelerates this process by automatically mapping regulatory lineage across frameworks. As part of a broader compliance workbench, RegAI provides a collaborative, GenAI-enabled platform where risk, data, and compliance teams can automate regulatory mapping, streamline control management, and build towards continuous compliance over time.
Turn regulatory complexity into competitive advantage
This integrated approach transforms regulatory compliance from a cost burden into a competitive advantage, enabling faster AI deployment, more efficient resource allocation, and stronger risk management capabilities. By rationalising controls first, and extending governance to connect roles, processes, and evidence, banks address the AI Act without creating yet another compliance silo.
The AI Act is just the latest trigger. Banks that build a unified governance fabric today will be equipped to adapt faster, smarter, and with less duplication when the next wave of regulation arrives. Ready to move beyond fragmented compliance? Contact our experts to discuss your specific AI governance challenges or arrange a RegAI demonstration to see harmonised control rationalisation in action.