Guide

AI Risk Classification Under the AI Act

The four risk levels of the European AI Act: what they are, what obligations they entail, and how to classify the AI systems used in your company.

Updated: March 2026 · 13 min read

1. The Four Risk Levels of the AI Act

The AI Act (Regulation (EU) 2024/1689) classifies all artificial intelligence systems into four risk levels. This classification is the heart of the regulation: it determines the obligations on the company, the required documentation, the controls to implement, and the penalties for violations.

The four levels, from most severe to least, are: unacceptable risk (prohibited systems), high risk (stringent obligations), limited risk (transparency obligations), and minimal risk (no specific obligations). The classification does not concern the technology itself (an LLM is not inherently high risk), but the context of use: the same AI model can be minimal risk when used to summarize emails and high risk when used to evaluate candidates in a hiring process.

For Italian companies, correctly classifying their AI systems is the first step toward compliance. Yellow Tech has classified hundreds of AI systems across 500+ client organizations and found that most fall into the minimal or limited risk categories. But a significant share -- often unidentified before the assessment -- includes high-risk systems requiring immediate action.

2. Unacceptable Risk: Prohibited AI Systems

Art. 5 of the AI Act lists AI practices deemed unacceptable risk and therefore prohibited in the European Union as of February 2, 2025. The ban is absolute: no derogations exist, and penalties for violations reach up to 35 million euros or 7% of global annual turnover, whichever is higher.

Prohibited systems include: subliminal manipulation that causes harm (systems designed to influence people's behavior imperceptibly), exploitation of vulnerabilities of specific groups (age, disability, economic situation) to distort their behavior, social scoring by public authorities (classifying people based on social behavior), and real-time remote biometric identification in public spaces for law enforcement purposes (with very limited exceptions).

For most Italian companies, unacceptable-risk systems are not a direct concern. But verification is essential: a loyalty system using AI to segment customers by economic vulnerability could approach the boundary. An employee surveillance system with continuous facial recognition is problematic. Our assessment always includes a screening for potentially prohibited systems, to eliminate any gray areas from the outset.

3. High Risk: Annex III Systems

High-risk systems are the regulatory core of the AI Act. They are listed in Annex III of the regulation and grouped into eight thematic areas. These systems can be developed and used, but are subject to stringent compliance, documentation, and monitoring obligations.

The eight areas of Annex III are:

  1. Biometrics -- remote biometric identification and categorization systems
  2. Critical infrastructure -- AI for managing water, gas, electricity, and transport networks
  3. Education and training -- systems determining access to institutions, evaluating students, or monitoring exams
  4. Employment -- CV screening, candidate evaluation, decisions on promotions, terminations, task allocation
  5. Essential public services -- access to social benefits, credit scoring, life/health insurance
  6. Law enforcement -- recidivism risk assessment, evidence analysis, profiling
  7. Migration and border control -- risk assessment, document verification
  8. Justice -- research and interpretation of facts and laws

For Italian companies, the most relevant areas are 4 (employment) and 5 (essential services). If your company uses AI to select candidates, evaluate employee performance, decide on promotions, or assign tasks, the system is high risk. The same applies if you operate in the financial sector and use AI for credit scoring or insurance assessment. High-risk classification triggers the following obligations under the AI Act:

  • Complete technical documentation -- system description, purpose, operation, training datasets, performance metrics
  • Quality management system -- documented processes for development, testing, and maintenance
  • Log register -- automatic log retention for at least 6 months
  • Human oversight -- measures enabling human intervention in system decisions
  • Robustness and accuracy -- the system must achieve an adequate level of accuracy and resilience
  • Impact assessment (FRIA) -- Fundamental Rights Impact Assessment, mandatory for certain deployers (public bodies, providers of public services, and deployers of credit-scoring or insurance systems)
  • EU database registration -- the system must be registered in the European high-risk AI system database
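
To make these obligations operational, many organizations track them as a per-system checklist. Below is a minimal sketch in Python; the field names are our own shorthand for the bullets above, not official AI Act terminology.

```python
from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    """Illustrative compliance checklist for one high-risk system."""
    technical_documentation: bool = False  # system description, datasets, metrics
    quality_management_system: bool = False
    log_retention_6_months: bool = False
    human_oversight: bool = False
    robustness_and_accuracy: bool = False
    fria_completed: bool = False           # Fundamental Rights Impact Assessment
    eu_database_registration: bool = False

    def gaps(self) -> list[str]:
        """Return the obligations not yet satisfied."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

# Example: a CV-screening tool with documentation and logs already in place
cv_screening = HighRiskChecklist(technical_documentation=True,
                                 log_retention_6_months=True)
print(cv_screening.gaps())
# ['quality_management_system', 'human_oversight', ...]
```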

4. Limited Risk and Minimal Risk

Limited-risk systems are subject exclusively to transparency obligations (Art. 50 of the AI Act). This category includes chatbots; text, image, audio, and video generation systems; and systems that produce deepfakes.

The obligation is straightforward: users must be informed that they are interacting with an AI system or that the content was AI-generated. In practice, a chatbot on a company website must indicate "You are speaking with an AI assistant." Text generated by ChatGPT and published on the company blog should include a note about AI generation (though this point is still being refined through European AI Office guidelines).
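
As an illustration of the transparency obligation, here is a minimal sketch assuming a chatbot backend where every reply passes through a single function; where and how the notice is displayed (banner, first message, persistent label) remains a product decision, as long as the user is clearly informed.

```python
AI_DISCLOSURE = "You are speaking with an AI assistant."

def wrap_chatbot_reply(reply: str, first_message: bool) -> str:
    """Prepend the Art. 50 disclosure to the first reply of a session.

    Illustrative only: the disclosure text and placement are
    assumptions, not prescribed wording from the regulation.
    """
    if first_message:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(wrap_chatbot_reply("Hi! How can I help?", first_message=True))
```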

Minimal-risk systems have no specific AI Act obligations (beyond the general AI literacy obligation in Art. 4). This category covers the majority of everyday business uses: using ChatGPT or Claude to summarize documents, draft emails, analyze data, translate text, or assist in software development.

Note: even for minimal-risk systems, the GDPR fully applies when personal data is processed. And the corporate AI Policy should cover these systems as well, to prevent data leaks and misuse. We classify all AI systems for our 500+ clients, regardless of risk level, because a complete mapping is the foundation of any governance framework.

5. How to Classify Your Company's AI Systems

AI system classification is not a theoretical exercise: it is an operational process requiring cross-functional expertise (legal, technical, business) and a structured methodology. Here is the four-phase process Yellow Tech uses with its 500+ client organizations.

Phase 1: Complete inventory. Catalog all AI systems in use across the organization, including those spontaneously adopted by employees without IT approval (so-called shadow AI). Our assessment combines team surveys, software license analysis, and network traffic monitoring to major AI providers; the result routinely reveals far more active AI systems than corporate IT is aware of.
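
As a toy illustration of the network-monitoring leg of the inventory, the sketch below scans a proxy log export for traffic to well-known AI provider domains. The log format and the domain list are assumptions to adapt to your environment.

```python
import csv

# Illustrative list of AI provider domains -- extend for your environment.
AI_PROVIDER_DOMAINS = {
    "api.openai.com", "chat.openai.com",
    "api.anthropic.com", "claude.ai",
    "gemini.google.com",
}

def find_shadow_ai(proxy_log_path: str) -> set[tuple[str, str]]:
    """Flag (user, domain) pairs hitting AI providers in a proxy log.

    Assumes a CSV export with 'user' and 'domain' columns; adapt the
    parsing to whatever your proxy or DNS resolver actually produces.
    """
    hits = set()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in AI_PROVIDER_DOMAINS:
                hits.add((row["user"], row["domain"]))
    return hits
```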

Phase 2: Context-of-use analysis. For each system, analyze: what decision it supports, who it impacts (employees, customers, citizens), what data it processes, whether its output is advisory or binding. The same technology (e.g., an LLM) can have different classifications depending on use: summarizing a report = minimal risk; selecting candidates = high risk.

Phase 3: Classification against Annex III. Each system is compared against the eight areas of the AI Act's Annex III. If it falls within one, it is high risk. Otherwise, it is assessed for transparency obligations (limited risk) or no specific obligations (minimal risk). We use a proprietary decision matrix that makes the process fast and repeatable.
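
Yellow Tech's decision matrix is proprietary, but the underlying decision flow can be sketched in a few lines: Art. 5 check first, then Annex III, then Art. 50 transparency, otherwise minimal risk. The area and practice labels below are illustrative shorthand, not regulatory terms.

```python
from enum import Enum

class Risk(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# The eight Annex III areas, in shorthand.
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration", "justice",
}

# A few Art. 5 prohibited practices, in shorthand.
PROHIBITED_PRACTICES = {"social_scoring", "subliminal_manipulation",
                        "realtime_biometric_id_public"}

def classify(use_case_area: str, practice: str | None = None,
             user_facing_generation: bool = False) -> Risk:
    """Simplified decision flow: Art. 5, then Annex III, then Art. 50."""
    if practice in PROHIBITED_PRACTICES:
        return Risk.UNACCEPTABLE
    if use_case_area in ANNEX_III_AREAS:
        return Risk.HIGH
    if user_facing_generation:          # chatbots, content generation
        return Risk.LIMITED
    return Risk.MINIMAL

# Same LLM, different contexts:
print(classify("employment"))                    # Risk.HIGH (CV screening)
print(classify("internal_docs"))                 # Risk.MINIMAL (summaries)
print(classify("customer_service",
               user_facing_generation=True))     # Risk.LIMITED (chatbot)
```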

Phase 4: Documentation and action plan. The result is a classified register with, for each system: risk level, applicable obligations, current gaps, and a compliance plan with priorities and timeline. For high-risk systems, the plan includes preparation of technical documentation, FRIA, log systems, and human oversight procedures. To start the classification process for your company, contact us for an assessment.
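
A register entry can be as simple as a record combining the classification result with a gap analysis. The sketch below is illustrative; the obligation lists are abbreviated versions of those in section 6, and the priority rule is an assumption.

```python
# Abbreviated obligations per risk level (see section 6 for the full lists).
OBLIGATIONS_BY_RISK = {
    "high": ["FRIA", "technical documentation", "6-month logs",
             "human oversight", "EU database registration"],
    "limited": ["transparency notice", "content labelling"],
    "minimal": ["AI literacy (Art. 4)"],
}

def register_row(name: str, risk: str, in_place: set[str]) -> dict:
    """One register entry: applicable obligations, gaps, naive priority."""
    required = OBLIGATIONS_BY_RISK[risk]
    gaps = [o for o in required if o not in in_place]
    return {"system": name, "risk": risk, "gaps": gaps,
            "priority": "P1" if risk == "high" and gaps else "P2"}

print(register_row("CV screener", "high", {"technical documentation"}))
```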

6. Required Documentation for Each Risk Level

One of the most operational aspects of the AI Act is documentation. Each risk level has specific documentation requirements, and their absence is itself a sanctionable violation. For deployers (most Italian companies), the requirements are lighter than for providers, but still significant.

For high-risk systems, the deployer must maintain: the contract and documentation received from the provider (usage instructions, technical specifications, known limitations), the completed and approved FRIA, system logs retained for at least 6 months, documented human oversight procedures, evidence of training for personnel using the system, and a register of any incidents and malfunctions.
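
The six-month log requirement lends itself to a mechanical check. A minimal sketch, assuming log timestamps are available and approximating six months as 183 days:

```python
from datetime import datetime, timedelta, timezone

MIN_RETENTION = timedelta(days=183)   # "at least 6 months", approximated

def retention_gap(oldest_log: datetime, live_since: datetime) -> bool:
    """True if retained logs do not yet cover the required window.

    Illustrative check: once the system has been live longer than the
    retention window, the oldest retained log must be at least
    MIN_RETENTION old.
    """
    now = datetime.now(timezone.utc)
    if now - live_since < MIN_RETENTION:
        return False  # not enough operating history to violate yet
    return now - oldest_log < MIN_RETENTION

live = datetime(2025, 1, 1, tzinfo=timezone.utc)
oldest = datetime.now(timezone.utc) - timedelta(days=90)
print(retention_gap(oldest, live))  # True: only 90 days retained
```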

For limited-risk systems, documentation focuses on transparency: evidence that users are informed of AI interaction, internal policy on content generation system usage, and procedures for labeling AI-generated content.

For minimal-risk systems, there are no specific AI Act documentation requirements, but we still recommend: registration in the corporate AI system register, coverage in the AI Policy, and compliance with GDPR obligations if personal data is processed.

Risk level | Examples | Main obligations | Maximum penalty
Unacceptable | Social scoring, subliminal manipulation, mass biometric surveillance | PROHIBITED -- no derogation | EUR 35M or 7% of turnover
High | CV screening, credit scoring, AI diagnostics, student evaluation | FRIA, technical documentation, 6-month logs, human oversight, EU database registration | EUR 15M or 3% of turnover
Limited | Chatbots, text/image/video generation, deepfakes | Transparency: inform users of AI interaction, label generated content | EUR 7.5M or 1% of turnover
Minimal | AI for email, data analysis, translation, coding assistance | AI literacy (Art. 4). No additional specific obligations | N/A (Art. 4 only)

Frequently Asked Questions

How do I know if my AI system is high risk?

Check whether your AI system falls within one of the eight areas of the AI Act's Annex III: biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice. If you use AI to select candidates, evaluate performance, or decide on credit and insurance, it is almost certainly high risk. Yellow Tech has classified hundreds of AI systems across 500+ organizations with a fast, repeatable framework.

Is a chatbot high risk?

No, a standard customer service or FAQ chatbot is classified as limited risk, with transparency obligations (informing the user they are speaking with an AI). But if the chatbot makes decisions impacting people's rights (e.g., approving or denying benefits), it could be high risk. Yellow Tech analyzes each specific use case across 500+ client organizations to ensure accurate classification.

What happens if I misclassify an AI system?

An incorrect classification exposes the company to penalties: if a high-risk system is treated as minimal risk, it violates AI Act obligations with penalties up to 15 million euros or 3% of turnover. This is why Yellow Tech uses a decision matrix validated on hundreds of systems and recommends a semi-annual classification review.

What risk level does Microsoft Copilot usage in a company fall under?

Standard Microsoft Copilot usage (summaries, email drafts, Excel data analysis, presentations) is minimal risk. But if Copilot is integrated into HR processes to evaluate candidates or financial processes for credit decisions, the specific use case could be high risk. Classification depends on context, not the tool. Copilot is among the most frequently classified systems in Yellow Tech client organizations, with varying results depending on use.

Can Yellow Tech classify all my company's AI systems?

Yes. Risk classification assessment is one of Yellow Tech's core services, already completed for 500+ Italian organizations with hundreds of systems cataloged and classified. The process takes 2-4 weeks and produces a complete register with classification, gap analysis, and prioritized action plan. The team of 30+ specialists includes legal, technical, and industry experts.

How often should risk classification be reviewed?

Yellow Tech recommends a full review every 6 months and a point update whenever a new AI system is adopted, an existing use case is modified, or regulations change. The AI Act and European AI Office guidelines are continuously evolving. Yellow Tech's AI Governance as a Service includes periodic classification review for hundreds of client organizations.

Want to understand how AI can help your business?

Let's talk. 500+ Italian organizations already trust Yellow Tech for their AI transformation.