The EU AI Act August 2 deadline: what it actually requires
On August 2, 2026, the high-risk obligations of the EU AI Act become enforceable. Fines reach €15 million or 3% of global annual turnover, whichever is higher. The Act applies extraterritorially — US companies are in scope if their AI affects EU users. Ninety days out, this is what you actually need to do.
Where we are on the timeline
- Already in force: Prohibited practices and AI literacy obligations (since February 2, 2025); governance rules and obligations for general-purpose AI models (since August 2, 2025).
- August 2, 2026: High-risk AI obligations under Annex III become fully enforceable. Regulators can issue fines and corrective orders.
- August 2, 2027: Extended deadline for high-risk AI embedded in regulated products (medical devices, machinery, etc.).
What counts as high-risk
Annex III is the list to look at. The categories that catch most enterprises:
- AI in employment — recruiting, screening, evaluation, promotion, termination decisions.
- AI used in credit and insurance decisions affecting individuals.
- AI used in education — admissions, grading, exam supervision.
- AI used in essential public and private services like benefits eligibility.
- AI used in law enforcement, migration, and the administration of justice.
- AI used in critical infrastructure safety contexts.
If you sell a recruiting tool, a credit-scoring model, an exam-proctoring product, or anything that touches benefits eligibility, you are almost certainly in scope.
What the obligations actually look like
For each high-risk system you place on the EU market or whose output affects EU users, you need:
- Technical documentation covering decision logic, training data summary, performance characteristics, and known limitations.
- Risk management across the lifecycle — not a one-time checklist, an ongoing process.
- Data governance — provenance, quality controls, bias testing on representative data.
- Logging of system outputs sufficient to allow post-hoc audit and incident investigation.
- Human oversight with clear intervention points and the ability to stop, override, or correct the system.
- Accuracy, robustness, and cybersecurity testing appropriate to the use case.
- Conformity assessment before placing the system on the market, plus EU declaration of conformity.
- Post-market monitoring and a duty to report serious incidents.
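The logging obligation above is easiest to reason about with a concrete record in mind. Here is a minimal sketch of one audit-ready log entry per system output; the field names and schema are illustrative assumptions, not anything the Act mandates.

```python
import json
import datetime

def log_decision(system_id, input_summary, output, model_version, operator=None):
    """Append one audit-ready record per system output (illustrative schema).

    The Act requires logging sufficient for post-hoc audit and incident
    investigation; the exact fields below are an assumption, not regulatory text.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,          # which high-risk system produced this output
        "model_version": model_version,  # needed to reproduce behaviour later
        "input_summary": input_summary,  # enough context to reconstruct the case
        "output": output,                # the decision or score as emitted
        "human_operator": operator,      # who was overseeing, if anyone
    }
    return json.dumps(record)

# Example: recording a single credit-scoring decision
line = log_decision(
    "credit-scorer-v2",
    {"applicant_id": "A-1042"},
    {"score": 0.71, "decision": "approve"},
    "2.3.1",
    operator="analyst-7",
)
```

The point of the structure is that any single decision can later be tied to a specific model version and a specific overseeing human — exactly what an incident investigation needs.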
The autonomous-agent wrinkle
The Act was largely written before 2026-style agents existed, and regulators are now adapting their interpretation to cover them. The four control areas they are emphasizing for any autonomous agent:
- Documented decision logic — you must be able to explain why the agent did what it did.
- Open-loop architecture — agents cannot operate in pure closed-loop mode without external observability.
- Structured human oversight with defined intervention points, not just an emergency stop.
- Stop-and-correct mechanisms that work in practice, not in theory.
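The oversight and stop-and-correct points above can be sketched as a gate in front of every agent action. Everything here is an illustrative assumption — the threshold, the callback shape, the return values — not a pattern the Act prescribes; the idea it demonstrates is a defined intervention point rather than a bare kill switch.

```python
def execute_with_oversight(action, risk_score, approve_fn, high_risk_threshold=0.5):
    """Run an agent action only after passing a defined intervention point.

    - Below the threshold: execute automatically (but still observably).
    - At or above it: require an explicit human decision. The human can
      approve, override with a corrected action, or stop entirely.
    Threshold and callback semantics are illustrative assumptions.
    """
    if risk_score < high_risk_threshold:
        return {"status": "executed", "action": action}
    decision = approve_fn(action)  # structured intervention point, not just an e-stop
    if decision == "approve":
        return {"status": "executed", "action": action}
    if isinstance(decision, dict) and "override" in decision:
        # stop-and-correct: the human substitutes a corrected action
        return {"status": "executed", "action": decision["override"]}
    return {"status": "stopped", "action": None}
```

Note the third branch: "stop-and-correct" means the overseer can replace the proposed action with a corrected one, not merely halt the agent.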
Who is on the hook outside the EU
The Act’s extraterritorial reach is broad. A US company with no EU office is still in scope if its AI system’s output is used in the EU. In practical terms: if EU residents can use your product, you are in scope. The question is whether your product falls into a high-risk category.
What to do in the next 90 days
- Inventory. List every AI system you ship or use internally that touches EU residents. Map each to Annex III categories.
- Triage. Separate clearly high-risk systems, clearly out-of-scope systems, and the gray middle. Get legal eyes on the gray middle.
- Document. For each high-risk system, start the technical file now. Conformity assessment timelines are not short.
- Build the governance. Designate an AI compliance owner. Stand up an internal review process for new AI features; the Act's obligations assume such a process exists.
- Wire in oversight. Agents and high-risk systems need logged human-in-the-loop checkpoints with defined intervention authority. Retrofit if needed.
- Plan for incidents. Serious incidents must be reported. You need a process before the first one happens.
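The inventory and triage steps above amount to a simple bucketing exercise. A hypothetical helper, with a deliberately simplified category map — the real Annex III text, not this dictionary, is the authority on what counts as high-risk:

```python
# Simplified stand-in for the Annex III categories; illustrative only.
ANNEX_III_CATEGORIES = {
    "employment",           # recruiting, screening, promotion, termination
    "credit_insurance",     # credit and insurance decisions on individuals
    "education",            # admissions, grading, exam supervision
    "essential_services",   # e.g. benefits eligibility
}

def triage(inventory):
    """Split an AI-system inventory into high-risk, gray-area, and out-of-scope."""
    buckets = {"high_risk": [], "gray": [], "out_of_scope": []}
    for system in inventory:
        if not system["touches_eu_users"]:
            buckets["out_of_scope"].append(system["name"])
        elif system["category"] in ANNEX_III_CATEGORIES:
            buckets["high_risk"].append(system["name"])
        else:
            buckets["gray"].append(system["name"])  # this bucket goes to legal
    return buckets
```

The gray bucket is the one that needs lawyers, which is why the triage step singles it out.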
The penalty math
Non-compliance with high-risk obligations: up to €15 million or 3% of global annual turnover, whichever is higher. Prohibited practices: up to €35 million or 7%. For a company doing €1B in revenue, the high-risk ceiling is €30M per breach. Boards are paying attention.
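The "whichever is higher" arithmetic is worth making explicit, since the €15M floor and the 3% ceiling cross over at €500M in turnover:

```python
def high_risk_fine_ceiling(global_turnover_eur):
    """Maximum fine for high-risk non-compliance under the Act:
    EUR 15 million or 3% of global annual turnover, whichever is higher."""
    return max(15_000_000, 0.03 * global_turnover_eur)

high_risk_fine_ceiling(1_000_000_000)  # 3% of EUR 1B is EUR 30M, above the floor
high_risk_fine_ceiling(100_000_000)    # 3% is only EUR 3M, so the EUR 15M floor applies
```

So below €500M in turnover the €15M figure binds; above it, the 3% figure does — which is why the ceiling scales with company size.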
This article is general information, not legal advice. Talk to qualified counsel about your specific products and exposure.