EU AI Act 101: What US Companies Need to Know

The EU AI Act is not a European problem. If your AI systems touch EU markets, you are in scope — regardless of where your headquarters are.

If you watched the GDPR rollout from the US and assumed it would not affect your business, only to scramble for compliance when it clearly did, consider this your early warning for the EU AI Act. The regulation entered into force in August 2024, with a phased enforcement timeline that is already underway. And just like GDPR, it has extraterritorial reach.

This is not a future problem. Companies that deploy AI systems whose outputs are used in the EU, or whose AI-driven decisions otherwise affect people in the EU, need to understand their obligations now, not when the first enforcement actions land.

What the EU AI Act Actually Is

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It takes a risk-based approach, categorizing AI systems into four tiers based on the potential harm they can cause.

Unacceptable risk. These AI practices are banned outright. This includes social scoring systems, real-time biometric identification in public spaces (with narrow exceptions), manipulation techniques that exploit vulnerabilities, and systems that infer emotions in workplaces or educational institutions. If your AI system falls here, you cannot deploy it in the EU. Period.

High risk. AI systems that affect health, safety, or fundamental rights. This covers a broad range: credit scoring, hiring and recruitment tools, insurance risk assessment, medical devices with AI components, critical infrastructure management, and law enforcement applications. High-risk systems face the heaviest obligations — conformity assessments, technical documentation, human oversight requirements, and ongoing monitoring.

Limited risk. Systems like chatbots and deepfake generators, which are subject primarily to transparency obligations. Users must be informed they are interacting with AI, and AI-generated content must be labeled as such.

Minimal risk. The vast majority of AI systems — spam filters, AI-powered video games, inventory management — fall here and face no additional requirements beyond existing law.
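If it helps to think about the tiers in engineering terms, here is a deliberately simplified sketch in Python. The example systems and the mapping are illustrative only; real classification depends on the Act's annexes and the specific context of use, not a lookup table.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "conformity assessment, documentation, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional requirements beyond existing law"

# Illustrative mapping only -- not a legal classification tool.
EXAMPLE_SYSTEMS = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "hiring tool": RiskTier.HIGH,
    "credit scoring": RiskTier.HIGH,
    "customer chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(system: str) -> str:
    """Return a one-line summary of the tier and its headline obligations."""
    tier = EXAMPLE_SYSTEMS[system]
    return f"{system}: {tier.name} risk -> {tier.value}"
```

The point of the sketch is the asymmetry it makes visible: two of the four tiers carry almost no burden, and nearly all of the compliance work concentrates in the high-risk bucket.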

Why US Companies Are in Scope

The EU AI Act applies to any organization that places AI systems on the EU market or puts them into service in the EU, regardless of where that organization is established. It also applies when the output of an AI system is used in the EU, even if the system itself runs on US infrastructure.

Concretely, you are likely in scope if any of the following apply: you have EU customers who interact with your AI-powered features; you have EU-based employees whose data is processed by AI systems (think HR analytics, performance tools, or automated scheduling); you provide AI-powered services to EU-based businesses; or your product makes decisions that affect EU residents, even indirectly.

The reach is broad by design. The EU learned from GDPR that geographic boundaries mean nothing in the digital economy, and they drafted this regulation accordingly.

Key Obligations by Risk Category

For high-risk systems, the obligations are substantial. You will need a quality management system; a risk management system that operates throughout the AI system's lifecycle; technical documentation describing the system's design, development, and intended purpose; data governance practices for training and validation data; logging and traceability capabilities; transparency and information for deployers; human oversight mechanisms; and demonstrated accuracy, robustness, and cybersecurity.

For general-purpose AI models (think foundation models like GPT or Claude), providers must maintain technical documentation, comply with EU copyright law, and publish a sufficiently detailed summary of training data. Models that pose systemic risk face additional obligations including adversarial testing and incident reporting.

For limited-risk systems, the primary obligation is transparency. Users must know they are interacting with AI. AI-generated content, including deepfakes, must be labeled.

Timeline and Enforcement

The EU AI Act entered into force on August 1, 2024. The compliance timeline is staggered. Prohibitions on unacceptable-risk practices became enforceable in February 2025. Obligations for general-purpose AI models apply from August 2025. The full requirements for high-risk systems take effect in August 2026. And obligations for AI systems that are components of regulated products (like medical devices) apply from August 2027.

Penalties are significant: up to 35 million euros or 7% of global annual turnover, whichever is higher, for prohibited practices, and up to 15 million euros or 3% of turnover for other violations. These are not theoretical: the EU has demonstrated with GDPR that it will enforce extraterritorially.
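Concretely, "up to" means the higher of the fixed cap and the turnover percentage. A back-of-the-envelope sketch of that arithmetic (the caps and percentages come from the Act; the function itself is just illustration):

```python
def max_fine(turnover_eur: float, prohibited_practice: bool) -> float:
    """Upper bound of the administrative fine: the higher of a fixed cap
    and a percentage of global annual turnover."""
    if prohibited_practice:
        return max(35_000_000, 0.07 * turnover_eur)
    return max(15_000_000, 0.03 * turnover_eur)

# A company with EUR 1 billion in turnover faces exposure of up to
# EUR 70 million for a prohibited practice, since 7% of turnover
# exceeds the EUR 35 million fixed cap.
```

Note that for any company with more than 500 million euros in turnover, the percentage dominates the fixed cap for prohibited practices, which is exactly how the EU keeps the penalty meaningful for large firms.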

Common Misconceptions

“We are US-based, so it does not apply to us.” Wrong. If your AI system's output is used in the EU, or if EU residents are affected by your AI-powered decisions, you are in scope. Corporate domicile is irrelevant.

“We just use third-party AI, we do not build our own.” The Act distinguishes between providers (who build or place AI on the market) and deployers (who use AI systems). Both have obligations. If you deploy a third-party high-risk AI system, you still need to ensure human oversight, monitor for risks, keep logs, and inform affected individuals. You cannot outsource compliance by outsourcing the technology.

“It only applies to AI companies.” The Act applies to any organization that develops, provides, or deploys AI systems. A bank using AI for credit scoring is just as much in scope as the company that built the scoring model. A retailer using AI-powered recruitment tools has obligations as a deployer of a high-risk system.

What to Do Now

Inventory your AI systems. You cannot classify what you have not cataloged. Build a comprehensive register of every AI system your organization develops, deploys, or procures. Include internal tools, embedded AI features in SaaS products you use, and AI components in your own products.
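A register does not need to start as anything fancier than one structured record per system. A minimal sketch, with field names that are my assumption rather than anything the Act prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One row in an AI system register. Field names are illustrative;
    adapt them to your own governance tooling."""
    name: str
    vendor: str        # "internal" for systems you build yourself
    role: str          # your role under the Act: "provider" or "deployer"
    purpose: str
    affects_eu: bool   # outputs used in the EU, or EU residents affected
    risk_tier: str = "unclassified"
    notes: list = field(default_factory=list)

register = [
    AISystemRecord("resume-screener", "AcmeHR", "deployer",
                   "shortlist job applicants", affects_eu=True),
    AISystemRecord("spam-filter", "internal", "provider",
                   "filter inbound email", affects_eu=True),
]
```

Two design choices matter here: capturing your role (provider versus deployer) per system, since obligations differ by role, and defaulting every system to "unclassified" so that unreviewed entries are visible rather than silently assumed minimal risk.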

Classify each system by risk tier. Map each AI system against the Act's risk categories. Pay particular attention to systems that make or influence decisions about people — those are the most likely to be classified as high risk.

Assess your gaps. For each high-risk system, evaluate your current state against the Act's requirements. Where are you missing documentation? Where do you lack human oversight mechanisms? Where are your data governance practices insufficient?
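The gap assessment can be run as a checklist against the high-risk obligations listed earlier. A toy sketch, where the requirement labels are shorthand for the Act's actual requirements, not official terms:

```python
# Shorthand labels for the high-risk obligations summarized above.
HIGH_RISK_REQUIREMENTS = [
    "quality management system",
    "risk management system",
    "technical documentation",
    "data governance",
    "logging and traceability",
    "human oversight",
    "accuracy, robustness, and cybersecurity evidence",
]

def gap_report(evidenced: set) -> list:
    """Return the high-risk requirements not yet evidenced for a system."""
    return [r for r in HIGH_RISK_REQUIREMENTS if r not in evidenced]
```

Running this per high-risk system turns a vague sense of "we are probably not compliant" into a concrete, prioritized worklist.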

Document everything. The EU AI Act is documentation-heavy by design. Start building the technical documentation, risk assessments, and conformity evidence now. This is not work you want to rush under deadline pressure.

Engage your supply chain. If you rely on third-party AI providers, understand their compliance posture. The Act creates shared obligations between providers and deployers, and you need to ensure your contracts reflect this.

Related service

Our EU AI Act Readiness Review classifies your AI systems, identifies compliance gaps, and delivers a remediation roadmap before enforcement deadlines. Fixed fee, 3–4 weeks.

Get ahead of EU AI Act compliance

We help US companies classify their AI systems, identify compliance gaps, and build the documentation the EU AI Act requires. Start with a consultation or take our readiness assessment.