The EU AI Act in Practice: What Builders Actually Need to Do in 2026

The EU AI Act is no longer theoretical. Here's a practical guide to what it means for teams shipping AI products in Europe — without the legal jargon.

[Figure: Simplified flowchart of EU AI Act risk classification for AI products]

The Act is here — now what?

The EU AI Act entered into force in stages, and by 2026, most of its provisions are either active or imminent. If you're building AI products that serve European users — whether you're based in the EU or not — this affects you.

This post is not legal advice. It's an engineering-oriented translation of what the Act requires and how teams are practically meeting those requirements. Consult a lawyer for your specific situation, but read this first to know what questions to ask.

Risk classification: where does your system fall?

The Act classifies AI systems into four risk levels. Most consumer-facing AI products fall into the lower two:

Minimal risk — Most AI applications (spam filters, recommendation engines, content tools) fall here. Requirements are minimal: essentially, be transparent that AI is being used.

Limited risk — AI systems that interact with people (chatbots, content generators) must disclose that the user is interacting with AI and that content is AI-generated. If your product generates synthetic media (images, video, audio), it must be labeled as such.

High risk — AI systems used in employment decisions, credit scoring, education, healthcare diagnosis, law enforcement, and critical infrastructure. These face extensive requirements: risk management systems, data governance, technical documentation, human oversight, accuracy and robustness standards, and registration in an EU database.

Unacceptable risk — Social scoring, real-time biometric surveillance (with narrow exceptions), manipulation of vulnerable groups. These are banned outright.

The GPAI provisions

If you're building on top of general-purpose AI models (GPT, Claude, Gemini, Llama), the model providers have their own obligations under the GPAI provisions. But as a deployer, you also have obligations — particularly around how you use the model and what decisions you allow it to influence.

What most teams actually need to do

1. Classify your system honestly

Don't assume you're minimal risk without checking. The classification depends on the use case, not the technology. An LLM is minimal risk when writing marketing copy. The same LLM becomes high risk when screening job applicants. Review the Annex III list of high-risk use cases carefully.
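One way to make this concrete in an engineering workflow is to keep an explicit use-case-to-tier mapping that your team reviews, rather than classifying ad hoc. The sketch below is purely illustrative — the tier assignments and use-case names are assumptions for the example, and the real classification comes from Annex III and legal review, not a dictionary lookup:

```python
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"
    UNACCEPTABLE = "unacceptable"

# Hypothetical use-case -> tier mapping, maintained and reviewed by the team.
# Tier assignments here are examples, not legal determinations.
USE_CASE_TIERS = {
    "spam_filtering": RiskTier.MINIMAL,
    "marketing_copy": RiskTier.MINIMAL,
    "customer_chatbot": RiskTier.LIMITED,
    "image_generation": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,      # employment decisions (Annex III)
    "credit_scoring": RiskTier.HIGH,
    "social_scoring": RiskTier.UNACCEPTABLE,
}

def classify(use_case: str) -> RiskTier:
    """Default unknown use cases to HIGH so they are forced into review."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
```

The useful design choice is the default: an unrecognized use case falls through to high risk, which forces a human review instead of silently assuming minimal risk.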

2. Implement transparency requirements

For limited-risk systems (which covers most chatbots and content generation tools): clearly disclose to users that they are interacting with AI, label AI-generated content as such, and provide a way for users to identify AI-generated output.

This is usually straightforward to implement: a visible label in the UI, metadata tags on generated content, and clear documentation.
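As a rough sketch of the metadata-tagging part, generated content can be wrapped in a provenance record at the point of generation. The field names below are assumptions for illustration — the Act mandates disclosure and labeling, not any particular schema:

```python
from datetime import datetime, timezone

def tag_ai_output(text: str, model_name: str) -> dict:
    """Wrap generated content with provenance metadata.

    Field names are illustrative, not mandated by the Act.
    """
    return {
        "content": text,
        "ai_generated": True,
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated by an AI system.",
    }

record = tag_ai_output("Draft product description...", "example-llm-v1")
```

The `disclosure` string can back the visible UI label, while the structured fields travel with the content for downstream identification.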

3. Maintain technical documentation

All AI systems need proportionate documentation. For minimal and limited risk, this means maintaining records of the models you use, the data processing you perform, and the testing you've conducted. For high risk, the documentation requirements are extensive and specific.
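For minimal- and limited-risk systems, "proportionate documentation" can be as simple as a versioned record per system. The structure below is a hypothetical sketch — the fields and names are assumptions, chosen to cover the three things mentioned above (models used, data processing, testing):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class SystemRecord:
    """Minimal documentation entry; fields are illustrative, not prescribed."""
    system_name: str
    models_used: list
    data_processing: str
    evaluations: list = field(default_factory=list)

record = SystemRecord(
    system_name="support-assistant",
    models_used=["example-llm-v1"],
    data_processing="User messages retained 30 days, processed in EU region",
    evaluations=["accuracy eval 2026-01", "red-team review 2026-02"],
)

# Serialize for storage in version control alongside the code it documents.
print(json.dumps(asdict(record), indent=2))
```

Keeping these records in version control means the documentation evolves with the system and the history is auditable for free.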

The good news: if you followed the governance practices from our earlier guide, you already have most of this.

4. Set up incident reporting

High-risk systems must report serious incidents to national authorities. But even for lower-risk systems, having an incident response process is good practice and demonstrates due diligence.
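A minimal incident log can make the "is this reportable?" decision explicit at intake. This is an assumed internal format, not an official reporting schema; the severity levels and field names are illustrative:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum

class Severity(Enum):
    LOW = 1
    SERIOUS = 2  # serious incidents on high-risk systems are reportable

@dataclass
class Incident:
    """Illustrative internal incident entry; not an official reporting format."""
    summary: str
    severity: Severity
    detected_at: str
    reportable: bool = False

def log_incident(summary: str, severity: Severity, high_risk_system: bool) -> Incident:
    # Under the Act, serious incidents involving high-risk systems must be
    # reported to national authorities; this flag surfaces that decision
    # at intake instead of leaving it to memory.
    return Incident(
        summary=summary,
        severity=severity,
        detected_at=datetime.now(timezone.utc).isoformat(),
        reportable=high_risk_system and severity is Severity.SERIOUS,
    )
```

Even for lower-risk systems where `reportable` stays false, the same log doubles as the due-diligence trail mentioned above.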

5. Conduct a fundamental rights impact assessment (high-risk only)

If your system is high-risk, you need to assess its potential impact on fundamental rights before deployment. This is similar in spirit to a data protection impact assessment (DPIA) under GDPR.

What this means for non-EU companies

The EU AI Act has extraterritorial reach: it applies to any company that places AI systems on the EU market or whose AI systems affect people in the EU. If your product has EU users, the Act applies to you regardless of where your company is incorporated.

Practically, this means: assign someone to understand your obligations, implement the transparency and documentation requirements, and monitor the enforcement landscape as national authorities begin issuing guidance.

The compliance vs. innovation false dichotomy

The most productive framing isn't compliance vs. innovation — it's compliance as quality engineering. The Act's requirements for documentation, testing, monitoring, and transparency are practices that well-run AI teams should be doing anyway. The regulation mandates a floor, not a ceiling.

Teams that have already invested in evaluation, governance, and observability will find compliance relatively straightforward. Teams that haven't will find the Act a useful forcing function to build those capabilities.

Staying current

The regulatory landscape is evolving. Codes of practice for GPAI models are being developed, national authorities are issuing guidance, and enforcement precedents are being set. Stay informed through official EU sources and industry associations, and budget for periodic legal review — not because the regulations are punitive, but because clarity is still emerging.
