EU AI Act: Provider vs Deployer Obligations
TL;DR
The EU AI Act distinguishes between providers (who develop AI systems and place them on the market or put them into service under their own name, or who substantially modify a high-risk system) and deployers (who use AI systems under their authority in a professional context). Providers bear primary responsibility for compliance, including risk management, technical documentation, conformity assessment, and CE marking. Deployers must ensure appropriate use and human oversight, and conduct fundamental rights impact assessments for high-risk systems where required. Understanding your role determines your obligations under the regulation.
Key Facts
Providers develop an AI system (or have one developed) and place it on the EU market or put it into service under their own name or trademark; importers, distributors, or deployers that rebrand or substantially modify a high-risk system take on provider obligations.
Deployers use AI systems under their authority for their intended purpose in a professional context.
Providers bear primary responsibility for compliance, including risk management and documentation.
Deployers must ensure human oversight and appropriate use, and conduct fundamental rights impact assessments where required.
High-risk AI systems trigger additional obligations for both providers and deployers.
Implementation Steps
Assess whether you are a provider, deployer, or both for each AI system.
Classify each AI system according to the EU AI Act risk categories (prohibited, high-risk, limited-risk, minimal-risk); a planning sketch follows this list.
If provider: implement risk management, a quality management system, technical documentation, conformity assessment, and CE marking.
If deployer: ensure use in line with the provider's instructions, assign human oversight, and conduct fundamental rights impact assessments where required.
Implement post-market monitoring and serious-incident reporting as required by your role.
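To make steps 1 and 2 concrete, here is a minimal Python sketch that records the role and risk-category assessment for each AI system and maps it to the headline obligations listed above. All names (`AISystemAssessment`, `headline_obligations`, the enum values) are illustrative assumptions rather than terms defined by the regulation, and the obligation mapping is a simplified planning aid, not legal advice.

```python
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    BOTH = "both"


class RiskCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED_RISK = "limited-risk"
    MINIMAL_RISK = "minimal-risk"


@dataclass
class AISystemAssessment:
    """One record per AI system in scope, capturing steps 1 and 2 above."""
    system_name: str
    role: Role
    risk_category: RiskCategory
    annex_iii_use_case: str | None = None  # which Annex III area applies, if any
    substantially_modified: bool = False   # substantial modification can shift a deployer into the provider role

    def headline_obligations(self) -> list[str]:
        """Map role and risk category to the headline duties named in this guide.

        Simplified planning aid only; the binding obligations are those set out
        in Regulation 2024/1689 itself.
        """
        if self.risk_category is RiskCategory.PROHIBITED:
            return ["do not place on the market or use"]
        if self.risk_category is not RiskCategory.HIGH_RISK:
            return ["transparency obligations where applicable"]

        duties: list[str] = []
        if self.role in (Role.PROVIDER, Role.BOTH) or self.substantially_modified:
            duties += ["risk management", "quality management system",
                       "technical documentation", "conformity assessment", "CE marking"]
        if self.role in (Role.DEPLOYER, Role.BOTH):
            duties += ["human oversight", "use per provider instructions",
                       "fundamental rights impact assessment where required"]
        return duties


# Example: a company deploying a third-party CV-screening tool (illustrative only).
hr_tool = AISystemAssessment("cv-screening", Role.DEPLOYER, RiskCategory.HIGH_RISK,
                             annex_iii_use_case="employment")
print(hr_tool.headline_obligations())
```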
Glossary
- Provider: Entity that develops an AI system and places it on the EU market or puts it into service under its own name or trademark.
- Deployer: Entity that uses an AI system under its authority for its intended purpose.
- High-risk AI system: AI system used in a use case listed in Annex III, or as a safety component of a product covered by the EU harmonisation legislation listed in Annex I.
- CE marking: Conformity marking indicating compliance with EU health, safety, and environmental requirements.
- Substantial modification: Change made to an AI system after it is placed on the market or put into service that affects its compliance or modifies its intended purpose.
- Impact assessment: Assessment of the fundamental rights impact that certain deployers must conduct before putting a high-risk AI system into use.
References
- [1] EU AI Act (Regulation 2024/1689) https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689
- [2] European Commission AI Act Implementation Guidance https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai
Machine-readable Facts
[
  {
    "id": "f-provider-role",
    "claim": "Providers bear primary compliance responsibility including risk management and CE marking.",
    "source": "https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689"
  },
  {
    "id": "f-deployer-role",
    "claim": "Deployers must ensure appropriate use and human oversight of AI systems.",
    "source": "https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689"
  },
  {
    "id": "f-high-risk-obligations",
    "claim": "High-risk AI systems trigger enhanced obligations for both providers and deployers.",
    "source": "https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689"
  }
]
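If the machine-readable facts above are consumed by tooling, a minimal loader might validate that each entry carries the keys used in the block. This is a sketch under the assumption that the block is stored as a standalone `facts.json` file; the file name and the `load_facts` helper are hypothetical.

```python
import json

REQUIRED_KEYS = {"id", "claim", "source"}  # keys used in the block above


def load_facts(path: str = "facts.json") -> list[dict]:
    """Load the machine-readable facts and check each entry has the expected keys."""
    with open(path, encoding="utf-8") as fh:
        facts = json.load(fh)
    for entry in facts:
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            raise ValueError(f"fact {entry.get('id', '<no id>')} missing keys: {sorted(missing)}")
    return facts
```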