EU Regulation

The EU AI Act: It's time to get prepared

The world's first legal framework for AI is here. Ensure your organisation is compliant, competitive, and ready for the future.

What is the EU AI Act?

The EU AI Act is a landmark regulatory framework designed to ensure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.

It adopts a risk-based approach: the higher the risk an AI system poses, the stricter the rules that apply to it. Non-compliance can lead to significant financial penalties.


The AI Act introduces a risk-based approach, distinguishing between four kinds of AI systems:

1. Unacceptable

AI applications that are incompatible with EU values and fundamental rights. These are prohibited outright.

2. High Risk

Heavily regulated AI systems that could cause significant harm if they fail or are misused, or that serve as safety components of products.

3. Limited Risk

Applications that pose a risk of manipulation or deception. They are less heavily regulated but carry transparency obligations.

4. Minimal Risk

All remaining AI systems. While they have no mandatory requirements, transparency and ethical use are encouraged.

The Cost of Non-Compliance

€35M
or 7% of global annual turnover, whichever is higher

For prohibited AI practices

€15M
or 3% of global annual turnover, whichever is higher

For other non-compliance

€7.5M
or 1% of global annual turnover, whichever is higher

For supplying incorrect, incomplete, or misleading information to authorities

Implementation Timeline

1

August 2024

The AI Act entered into force.

2

February 2025 (6 Months)

Prohibitions on unacceptable risk AI systems apply.

3

August 2025 (12 Months)

Obligations for General Purpose AI (GPAI) models become applicable.

4

August 2026 (24 Months)

Full application of the AI Act, including rules for high-risk systems.

Ready to ensure compliance?

BoardX helps you navigate these regulations with automated compliance tracking and risk management.