Robot and EU flag in front of European Parliament, symbolising the new EU AI Act and legal compliance challenges for businesses

EU AI Act Compliance Explained: What It Is, Key Deadlines, and What Businesses Must Do

1. Introduction

The EU AI Act entered into force on 1 August 2024. Most businesses and in-house legal teams are aware of it by now – and know that EU AI Act compliance will be required in some form or another. But which actions, by when, and what exactly do the rules require?

This article offers a clear and practical starting point. It outlines what the AI Act is, the key deadlines, and the concrete steps that businesses and legal teams should take to ensure compliance.

2. What is the EU AI Act?

The Artificial Intelligence Act (Regulation (EU) 2024/1689) (“AI Act”) sets out harmonized rules for the development, marketing, and use of AI systems across the EU. It entered into force on 1 August 2024, twenty days after its publication in the Official Journal on 12 July 2024. The AI Act applies to a wide range of businesses, including those located outside the EU whose AI systems are used within the EU. If your company develops or uses AI systems, or integrates general-purpose AI tools, this regulation likely applies to you. As a result, EU AI Act compliance is likely to be a priority for your legal and technical teams.

Under Article 2(1), the AI Act applies, amongst others, to:

  • providers of AI systems (even if established outside the EU);
  • deployers (users) of AI systems in the EU;
  • importers and distributors of AI systems; and
  • providers of general-purpose AI models, such as foundation models used in multiple contexts.

The AI Act adopts a risk-based approach, classifying AI systems into four categories:

  • Unacceptable risk / prohibited AI systems (Art 5): These include systems that deploy subliminal techniques, exploit vulnerabilities, engage in social scoring, or involve real-time remote biometric identification in public spaces, with limited exceptions.
  • High-risk AI systems (Chapter III): These include AI used in sensitive areas such as employment, education, law enforcement, border control, and critical infrastructure. Such systems must comply with strict requirements on risk management, data quality, technical documentation, human oversight, and conformity assessments (Articles 6–27, Annex III).
  • Limited risk systems: These are subject to basic transparency obligations, such as informing users they are interacting with an AI system (Article 50).
  • Minimal risk systems: No binding requirements are imposed, but voluntary codes of conduct are encouraged.

General-purpose AI models (“GPAI”), including large language models, are subject to specific transparency and documentation obligations (Articles 53–55).

3. What Are the Key Deadlines?

The AI Act is being phased in over several years. These are the main milestones to track:

  • 1 August 2024: The AI Act entered into force (Art 113).
  • 2 February 2025: The first obligations became enforceable. The ban on unacceptable-risk AI practices now applies (Article 5, as read with Article 113(a)). See also my earlier article on this point for additional information.
  • 2 August 2025: The obligations for GPAI models have now entered into force. These include requirements to provide summaries of the training data used; implement adequate cybersecurity measures; provide detailed technical documentation; and adhere to the GPAI Code of Practice, which remains voluntary (Articles 53–55, as read with Article 113(b)).
  • 2 August 2026: The full compliance obligations for high-risk AI systems apply from this date. These include risk management systems, technical documentation, conformity assessments, and post-market monitoring (Chapter III and Article 72, as read with Article 113).
  • 2 August 2027: GPAI models placed on the market before 2 August 2025 must be brought into full compliance by this date (Article 111(3)). The rules for high-risk AI systems under Article 6(1) (those embedded in products covered by Annex I) also apply from this date (Article 113(c)).

The European Commission has confirmed there will be no delay to this timeline. Calls to postpone the Act’s application were explicitly rejected in July 2025.

4. What Does EU AI Act Compliance Mean for In-House Legal and Businesses?

Businesses and in-house legal teams should treat AI Act compliance with the same seriousness as GDPR or product safety regulation. It’s not just an IT or technical issue – legal input is essential to get this right. The following steps provide a starting point:

  1. Identify your exposure
    Determine whether your company is a provider, deployer, importer, or distributor of AI systems – or a combination of these. This classification determines which obligations apply.
  2. Classify the systems
    Use Annex III of the AI Act to identify high-risk systems. Check whether you are using or developing GPAI models (including models licensed from third parties), and whether any systems involve sensitive or biometric data.
  3. Review and update contracts
    Procurement agreements, development contracts, and internal policies should be reviewed. Obligations around traceability, transparency, human oversight, accuracy, and documentation may need to be added – especially for high-risk systems.
  4. Assign compliance responsibility
    Designate clear internal responsibility, whether within legal, data protection, or IT, or through a dedicated compliance role. Avoid fragmented accountability.
  5. Prepare documentation and governance processes
    Ensure that your systems are supported by appropriate technical documentation (Article 11) and a post-market monitoring system (Article 72). You should also have a process in place to manage incidents and engage with national supervisory authorities.
  6. Consider voluntary measures for GPAI
    Early participation in the Code of Practice (not yet mandatory) may reduce legal and reputational risk (Article 56).

Proactively addressing EU AI Act compliance may significantly reduce the risk of regulatory intervention and demonstrate a responsible approach to AI governance.

5. What Happens If You Don’t Act?

The AI Act includes significant penalties for non-compliance. Fines are tiered based on the nature of the breach, as laid out in Article 99:

  • Use of prohibited AI practices: Up to €35 million or 7% of global annual turnover, whichever is higher.
  • Breach of obligations for high-risk systems or GPAI: Up to €15 million or 3% of turnover, whichever is higher.
  • Provision of incomplete, incorrect, or misleading information to regulators: Up to €7.5 million or 1% of turnover, whichever is higher.

For SMEs and start-ups, each of the above fines is capped at the same percentages or amounts, but whichever of the two is lower (Art. 99(6)).

Importantly, enforcement is expected to be serious and proactive, particularly where companies fail to take reasonable preparatory steps. Regulators are likely to focus not only on actual harms, but also on the absence of internal risk governance.

6. Conclusion

The EU AI Act is already in force, and obligations are being phased in on a clear timeline. Key provisions — including the ban on prohibited practices and the GPAI rules — already apply.

For businesses and legal teams, this is the time to assess exposure, classify systems, and put the necessary internal governance in place. Compliance may be complex, but early preparation will help your organization avoid unnecessary risk.

If you would like support in applying the AI Act within your business, or if you should have any thoughts, comments or questions, please feel free to get in touch!

Gundo Haacke, Interim Legal Counsel & Owner of Haacke Commercial Legal Services.
Blog article published on 26 August 2025.
Image credit: World wide web.


Disclaimer
The information provided in this blog article is for general informational purposes only. Nothing contained in this blog article constitutes legal advice, nor is it intended to be a substitute for legal counsel on any subject matter. The author disclaims any liability in connection with the use of this information.
