The AI Act adopts a risk-based approach to compliance obligations, categorizing AI systems by application area and target group into distinct risk levels. In this tiered compliance framework, most requirements fall on the developers and deployers of AI systems classified as “high-risk” and on general-purpose AI models (including foundation models and generative AI) deemed to pose “systemic risks”. For instance, low-risk AI such as chatbots used in customer service will be subject to few requirements beyond notifying users that they are interacting with AI. AI intended for high-risk application areas that may affect people’s health, safety, or fundamental rights will have to comply with stricter controls, while some application areas, such as subliminal manipulation of vulnerable groups, are outright prohibited.
To comply with the AI Act, companies will need to clearly assign responsibilities within their organizations for overseeing AI deployment and compliance. This responsibility extends beyond technical departments to the entire corporate fabric. Non-compliance exposes companies to severe risks, including heavy penalties with maximum fines that exceed even those under the EU’s General Data Protection Regulation (GDPR).