The EU AI Act Is Here: What Financial Services Firms Need to Know

A practical guide to EU AI Act compliance for banks, insurers, and investment firms. What's required, what's high-risk, and how to prepare before enforcement begins.


TL;DR: The EU AI Act is now law. Financial services firms using AI for credit scoring, fraud detection, insurance pricing, or investment decisions may well be operating “high-risk” systems under the regulation. You have until August 2026 to comply with the full requirements - but the prohibited practices rules kick in February 2025. Start with an inventory of your AI systems, classify them by risk level, and prioritize documentation and human oversight for anything customer-facing.


The regulation is real, and the clock is ticking

The EU AI Act entered into force in August 2024. If you’ve been waiting to see if this would actually happen - it has. The world’s first comprehensive AI regulation is now law, and financial services firms are squarely in its crosshairs.

This isn’t like GDPR where you could argue about what counts as “personal data.” The AI Act is specific about what’s regulated and what’s required. And if you’re a bank, insurer, or investment firm using AI in customer-facing decisions, you’re almost certainly in scope.

What counts as “high-risk” in financial services

The Act lists certain AI applications as high-risk by default (Annex III). For financial services, the uses to look at are:

  • Credit scoring and creditworthiness assessment - Any AI system that influences whether someone gets a loan, what interest rate they pay, or what credit limit they receive.

  • Insurance risk assessment and pricing - The Act specifically lists risk assessment and pricing for life and health insurance of natural persons. Systems that set premiums or decide coverage eligibility in those lines are high-risk; claims handling and other lines aren’t listed explicitly, but review how they feed into coverage decisions.

  • Fraud detection - This one is nuanced. Systems used purely to detect financial fraud are explicitly carved out of the creditworthiness category. But if your fraud system does more than detection - blocking customers, or feeding credit and eligibility decisions - don’t assume the carve-out covers it.

  • Investment advice and portfolio management - Robo-advisors, algorithmic systems that affect client portfolios, and automated investment recommendations aren’t on the current high-risk list. They still carry transparency obligations where they interact directly with clients, and the Commission can expand the high-risk list over time, so govern them to the same standard.

The key question isn’t “is this AI?” but “does this system make or influence decisions about individual people in one of the listed areas?” If yes, assume it’s high-risk until you can show otherwise.

What high-risk actually requires

Being classified as high-risk triggers a set of mandatory requirements. Here’s what you’ll need:

Risk management system - Not a one-time assessment, but an ongoing process. You need to identify risks, estimate their likelihood and severity, and implement mitigation measures. This needs to be documented and updated throughout the system’s lifecycle.

Data governance - Training data must be relevant, representative, and as free from errors as possible. You need to document where your data comes from, how it was processed, and what assumptions went into it. For financial services firms already dealing with model risk management, this overlaps with existing requirements - but the AI Act wants more specificity.
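
The Act doesn’t prescribe a format for this documentation. One lightweight approach is to keep a machine-readable provenance record next to each training dataset - a minimal sketch, with illustrative field names rather than terms from the regulation:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Tuple

@dataclass
class DatasetRecord:
    """Provenance record kept alongside a training dataset (fields are illustrative)."""
    name: str                           # e.g. "retail_credit_applications_2019_2024"
    source: str                         # where the data came from (system, vendor, public source)
    collection_period: Tuple[date, date]
    processing_steps: List[str]         # cleaning, filtering, feature engineering applied
    known_gaps: List[str]               # populations or time periods that are under-represented
    assumptions: List[str]              # e.g. "income is self-reported and not verified"
    last_reviewed: date = field(default_factory=date.today)
```

Serialising records like this into your documentation repository keeps data-governance evidence reviewable instead of living as tribal knowledge.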

Technical documentation - Detailed docs covering the system’s purpose, how it works, what data it uses, and how it was tested. This isn’t marketing material. It’s technical documentation that a regulator could use to understand and evaluate your system.

Record-keeping - Automatic logging of the system’s operations. You need to be able to reconstruct what the system did and why. For how long? The Act requires a retention period “appropriate to the intended purpose”, with a floor of at least six months - which for financial services likely means years, not months.
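
The Act doesn’t dictate a log format either, but “reconstruct what the system did and why” in practice means structured, per-decision records rather than free-text application logs. A minimal sketch (the field names are assumptions, not regulatory terms):

```python
import json
import logging
import time
import uuid

audit_logger = logging.getLogger("ai_decision_audit")  # route this to append-only storage

def log_decision(*, model_id, model_version, inputs, output, explanation, reviewer=None):
    """Write one structured, reconstructable record per automated decision."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_id": model_id,
        "model_version": model_version,   # which build of the model produced this output
        "inputs": inputs,                 # or a reference/hash if the raw inputs are sensitive
        "output": output,
        "explanation": explanation,       # e.g. the top features behind a credit decision
        "human_reviewer": reviewer,       # filled in when a person reviewed or overrode it
    }
    audit_logger.info(json.dumps(record, default=str))
    return record["event_id"]
```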

Transparency - Users need to know they’re interacting with an AI system. And deployers (that’s you, the financial institution) need to be able to explain the system’s outputs to affected individuals.

Human oversight - This is the big one. High-risk AI systems must be designed to allow effective human oversight. Someone needs to be able to understand the system’s outputs, decide whether to act on them, and override or stop the system when needed.
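
What “effective oversight” looks like varies by system, but a common pattern is a gate between the model and the action: borderline or high-impact cases go to a human queue, and the human decision always wins. A simplified sketch, with placeholder thresholds and a hypothetical review_queue interface:

```python
def decide_with_oversight(score: float, exposure: float, review_queue) -> str:
    """Route a model score through a human gate instead of acting on it directly."""
    AUTO_APPROVE = 0.90    # placeholder thresholds - set these from your own risk appetite
    AUTO_DECLINE = 0.20
    HIGH_IMPACT = 50_000   # decisions above this exposure always get a human look

    if exposure >= HIGH_IMPACT or AUTO_DECLINE < score < AUTO_APPROVE:
        # A reviewer sees the case with the model's output and explanation;
        # whatever they decide overrides the model.
        return review_queue.submit(score=score, exposure=exposure)
    return "approve" if score >= AUTO_APPROVE else "decline"
```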

Accuracy, robustness, and cybersecurity - The system needs to perform consistently and be resilient to errors, faults, and attacks. You need to test for this and document the results.

The timeline you’re working with

Here’s what’s coming:

  • February 2025 - Prohibited AI practices become enforceable. This includes things like social scoring and manipulative AI. Most financial services firms won’t have prohibited systems, but check anyway.

  • August 2025 - Rules for general-purpose AI models take effect. This matters if you’re building on foundation models.

  • August 2026 - Full enforcement of high-risk AI requirements. This is your real deadline.

August 2026 sounds far away. It’s not. If you’re running complex AI systems in production, 18 months to achieve compliance is tight. Especially when you factor in procurement cycles, documentation efforts, and the inevitable surprises.

Where to start

If you haven’t started yet, here’s a practical sequence:

1. Inventory your AI systems - You can’t comply with requirements for systems you don’t know about. This is harder than it sounds. AI is everywhere now - in vendor products, in internal tools, even in spreadsheets where someone has bolted on a model. Get a complete picture (the sketch after this list shows one way to structure it).

2. Classify by risk level - Go through each system and determine if it’s high-risk under the Act. Be conservative. If you’re not sure, assume it’s high-risk until you can prove otherwise.

3. Gap assessment - For each high-risk system, compare current state against the requirements. Where are you already compliant? Where are the gaps? Documentation is usually the biggest gap.

4. Prioritize - You probably can’t fix everything at once. Prioritize based on risk exposure and remediation effort. Customer-facing systems with significant decision impact should come first.

5. Build the infrastructure - Some requirements need infrastructure you might not have. Logging systems, documentation repositories, human oversight workflows. Start building these now.
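
For steps 1 and 2, even a simple structured register beats a pile of spreadsheets. A minimal sketch of what one inventory entry might capture - the fields and the “default to high-risk” rule mirror the advice above, not anything the Act prescribes:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class RiskLevel(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"
    UNCLASSIFIED = "unclassified"

@dataclass
class AISystemEntry:
    """One row in the AI system inventory (fields are illustrative)."""
    name: str
    owner: str                  # accountable team or person
    vendor: Optional[str]       # None if built in-house
    purpose: str                # what decisions it makes or influences
    affects_individuals: bool   # does it make or influence decisions about people?
    risk_level: RiskLevel = RiskLevel.UNCLASSIFIED

    def effective_risk(self) -> RiskLevel:
        # Conservative default: anything unclassified that affects people is treated
        # as high-risk until someone shows otherwise.
        if self.risk_level is RiskLevel.UNCLASSIFIED and self.affects_individuals:
            return RiskLevel.HIGH
        return self.risk_level
```

An unclassified entry that affects individuals comes back as high-risk by default - exactly the conservative posture you want going into the gap assessment.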

What this means for vendor relationships

If you’re using third-party AI systems - and most financial institutions are - you need to think about your vendors.

The AI Act places obligations on both providers (who build the systems) and deployers (who use them). As a deployer, you’re responsible for using the system correctly and maintaining human oversight. But you’re also dependent on providers giving you the documentation and transparency you need.

Start asking your vendors hard questions. Can they provide the technical documentation required? Do they have conformity assessments? What’s their compliance roadmap? If they can’t answer these questions, that’s a problem you need to solve now, not in 2026.

This isn’t just about compliance

Here’s the thing - most of what the AI Act requires is just good practice. Risk management, documentation, human oversight, data governance. These aren’t bureaucratic hurdles. They’re the foundations of trustworthy AI systems.

Financial services firms that treat this as a checkbox exercise will struggle. The ones that use it as an opportunity to build better AI governance will come out ahead. They’ll have systems that are more reliable, more explainable, and more trusted by customers and regulators alike.

The EU AI Act is the first major AI regulation, but it won’t be the last. The frameworks you build now will serve you well as other jurisdictions follow.


Rotascale helps financial services firms build trustworthy AI systems. Our platform provides the monitoring, evaluation, and governance capabilities you need for AI Act compliance. Learn more about our financial services solutions or request a demo.
