1. What Is the AI Act (Regulation (EU) 2024/1689)?
The AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Approved by the European Parliament on March 13, 2024, it entered into force on August 1, 2024 and applies directly in all EU member states, including Italy, without the need for national transposition.
The regulation adopts a risk-based approach: the more potentially dangerous an AI system is to people's fundamental rights, the stricter the obligations for those who develop and deploy it. The distinction between "provider" (who develops the system) and "deployer" (who uses it in a business context) is central: a company that integrates ChatGPT into its processes is a deployer and has specific obligations.
For Italian companies, the AI Act represents a structural change. Organizations currently using AI tools without a governance and compliance framework will need to comply by specific deadlines, with penalties reaching up to 7% of global turnover. But the AI Act is not just a cost: it is the opportunity to structure AI usage in an effective, safe, and competitive way.
2. Application Timeline: Key Deadlines
The AI Act does not come into force all at once. The European legislator has provided a 36-month transition period with progressive deadlines. For Italian companies, planning is essential: those who wait until the last moment risk finding themselves non-compliant once penalties are already enforceable.
The first critical deadline was February 2, 2025, when the absolute ban on unacceptable-risk AI systems took effect (social scoring, subliminal manipulation, mass biometric surveillance). The second deadline is August 2, 2025, when obligations for general-purpose AI (GPAI) models take effect: these fall primarily on model providers, but they matter to anyone running LLMs like GPT, Claude, or Gemini in production.
For companies using high-risk systems, the operational deadline is August 2, 2026, when the full Annex III obligations come into force: technical documentation, conformity assessment, registration in the European database, and post-market monitoring. At the time of writing, that leaves just over four months to complete compliance.
| Deadline | What comes into force | Who is affected |
|---|---|---|
| August 1, 2024 | Regulation enters into force | All EU operators |
| February 2, 2025 | Ban on unacceptable-risk systems | All providers and deployers |
| August 2, 2025 | Obligations for general-purpose AI (GPAI) models | GPAI model providers (with knock-on effects for deployers) |
| August 2, 2026 | Full obligations for high-risk systems (Annex III) | Providers and deployers of high-risk systems |
| August 2, 2027 | Full application of all remaining obligations | All economic operators |
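As a quick self-check, these deadlines are easy to encode. The Python sketch below (dates taken directly from the table; function and variable names are ours, purely illustrative) returns the milestones already applicable on a given date. It is a planning aid, not legal advice.

```python
from datetime import date

# Application dates from the table above, each paired with what
# becomes enforceable on that date.
AI_ACT_MILESTONES = [
    (date(2024, 8, 1), "Regulation enters into force"),
    (date(2025, 2, 2), "Ban on unacceptable-risk systems"),
    (date(2025, 8, 2), "Obligations for GPAI models"),
    (date(2026, 8, 2), "Full obligations for high-risk systems (Annex III)"),
    (date(2027, 8, 2), "Full application of all remaining obligations"),
]

def active_obligations(today: date) -> list[str]:
    """Return the AI Act milestones already applicable on a given date."""
    return [label for deadline, label in AI_ACT_MILESTONES if today >= deadline]

print(active_obligations(date(2026, 3, 15)))
# ['Regulation enters into force', 'Ban on unacceptable-risk systems',
#  'Obligations for GPAI models']
```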
3. The Four Risk Levels of the AI Act
The core of the AI Act is the risk-level classification. Every AI system must be assessed and placed in one of four categories. The category determines the regulatory obligations, required documentation, and penalties for non-compliance.
The four levels are: unacceptable risk (prohibited), high risk (stringent obligations), limited risk (transparency obligations), and minimal risk (no specific obligations). Most AI systems used by Italian companies fall into the last two categories, but verification is essential: a customer service chatbot may be limited risk, but an AI system that evaluates job candidates is high risk.
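To make the four tiers concrete, here is a minimal Python sketch. The tier names mirror the regulation, but the example mapping is ours and for illustration only: a real classification must follow Annex III and legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "stringent obligations"
    LIMITED = "transparency obligations"
    MINIMAL = "no specific obligations"

# Illustrative mapping of common corporate use cases to tiers;
# not an official taxonomy.
EXAMPLE_CLASSIFICATION = {
    "social scoring of citizens": RiskTier.UNACCEPTABLE,
    "AI screening of job candidates": RiskTier.HIGH,  # Annex III, employment
    "customer service chatbot": RiskTier.LIMITED,
    "email spam filtering": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```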
For a detailed guide on how to classify your company's AI systems, see our guide to AI risk classification. Yellow Tech has already supported over 500 organizations in analyzing and classifying their AI systems according to AI Act categories.
4. Obligations for Italian Companies
Obligations vary based on the company's role (provider or deployer) and the risk level of the AI system used. For most Italian companies, which are deployers of AI systems developed by third parties (OpenAI, Anthropic, Google, Microsoft), the main obligations concern responsible and documented use of these tools.
For high-risk systems, deployers must: use the system in accordance with provider instructions, ensure human oversight of decision-making processes, retain system-generated logs for at least six months (Art. 26), inform affected individuals that they are subject to an AI system, and, where Art. 27 applies (public bodies, private entities providing public services, and deployers of credit-scoring or life and health insurance pricing systems), complete a Fundamental Rights Impact Assessment (FRIA) before deployment.
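Of these duties, log retention is the easiest to verify programmatically. Here is a minimal sketch of such a check, assuming the six-month floor of Art. 26(6) (the 183-day rounding and the function name are our choices):

```python
from datetime import date, timedelta

# Art. 26(6): deployers of high-risk systems keep automatically
# generated logs for at least six months, unless other law says longer.
MIN_RETENTION = timedelta(days=183)  # conservative six-month floor

def retention_compliant(oldest_kept_log: date, today: date) -> bool:
    """True if the log window actually retained spans at least six months."""
    return (today - oldest_kept_log) >= MIN_RETENTION

# Example: logs rotated after 90 days fall short of the requirement.
print(retention_compliant(date(2026, 1, 1), date(2026, 4, 1)))  # False
```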
For limited-risk systems, the primary obligation is transparency: users must know they are interacting with an AI system. This applies to chatbots, text generation systems, deepfakes, and synthetic content. A company using an AI chatbot on its website must clearly indicate that responses are generated by artificial intelligence.
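In code, the transparency duty can be as simple as prepending a disclosure to the first message of a chat session. A sketch (the wording and function name are ours; adapt the notice to your interface):

```python
AI_DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Responses are generated by artificial intelligence."
)

def first_reply_with_disclosure(reply: str, already_disclosed: bool = False) -> str:
    """Prepend the AI disclosure to the first reply of a session."""
    if already_disclosed:
        return reply
    return f"{AI_DISCLOSURE}\n\n{reply}"

print(first_reply_with_disclosure("Hi! How can I help you today?"))
```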
In practice, compliance for most companies translates into six operational steps:
- Drafting a corporate AI Policy -- a document defining rules and limits for AI use in the company (full guide here)
- AI system register -- mapping all AI systems in use, with risk classification (a minimal schema is sketched after this list)
- Staff training -- Art. 4 of the AI Act mandates AI literacy for all employees who use AI systems
- Impact assessment (FRIA) -- mandatory for high-risk systems before deployment
- Continuous monitoring -- human oversight and periodic audits of production systems
- Incident management -- procedures for reporting serious malfunctions to competent authorities
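For the register item above, a minimal schema is often enough to start. The dataclass below is an illustrative structure of our own design, not a format prescribed by the AI Act; the vendor names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One entry of the corporate AI system register (illustrative schema)."""
    name: str          # e.g. "Website chatbot"
    vendor: str        # who provides the system
    purpose: str       # business use case
    risk_tier: str     # "unacceptable" | "high" | "limited" | "minimal"
    role: str          # the company's role: "provider" or "deployer"
    last_review: date  # date of the latest classification review
    notes: list[str] = field(default_factory=list)

register = [
    AISystemRecord("Website chatbot", "Acme AI", "customer support",
                   "limited", "deployer", date(2026, 1, 15),
                   ["AI disclosure shown at session start"]),
    AISystemRecord("CV screening tool", "HR-Soft", "candidate ranking",
                   "high", "deployer", date(2026, 2, 1),
                   ["FRIA pending", "human review of every rejection"]),
]

print([r.name for r in register if r.risk_tier == "high"])
# ['CV screening tool']
```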
5. Penalties: The Cost of Non-Compliance
The AI Act provides a three-tier penalty system, calibrated by violation severity and company turnover. The penalties are among the highest in the European regulatory landscape, comparable only to those of the GDPR.
The first tier concerns the use of prohibited AI systems (unacceptable risk): penalties up to 35 million euros or 7% of annual global turnover, whichever is higher. For an SME with 50 million in turnover (for SMEs the lower of the two amounts applies), this means a potential fine of 3.5 million euros.
The second tier concerns violations of high-risk system obligations: up to 15 million euros or 3% of global turnover. The third tier, for supplying false or incomplete information to authorities, carries penalties up to 7.5 million euros or 1% of turnover.
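The arithmetic behind these caps is straightforward: take the higher of the fixed amount and the turnover share, or the lower of the two for SMEs (Art. 99(6)). A worked sketch, with tier names of our own choosing:

```python
# Art. 99 tiers: (fixed cap in EUR, share of global annual turnover).
TIERS = {
    "prohibited_practices": (35_000_000, 0.07),
    "high_risk_obligations": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.01),
}

def max_fine(tier: str, turnover_eur: float, is_sme: bool = False) -> float:
    """Upper bound of the fine: higher of the two caps, lower for SMEs."""
    fixed_cap, share = TIERS[tier]
    pick = min if is_sme else max
    return pick(fixed_cap, share * turnover_eur)

# The example from the text: an SME with EUR 50M turnover.
print(max_fine("prohibited_practices", 50_000_000, is_sme=True))  # 3500000.0
```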
For SMEs and startups, the regulation provides proportional and reduced penalties. But reputational risk extends beyond the fine: a non-compliance investigation under the AI Act can damage business relationships and public procurement participation. Prevention through a structured AI governance framework is the most effective investment.
6. How to Prepare: The Yellow Tech Path
AI Act compliance requires a structured approach integrating legal, technical, and organizational aspects. Yellow Tech has developed a 5-phase path specifically for Italian companies, based on experience with over 500 organizations and 300+ AI agents in production.
Phase 1 (Assessment) consists of mapping all AI systems in use, from the Microsoft Copilot suite to custom chatbots and production AI agents. Each system is classified by risk level according to AI Act criteria. We have cataloged hundreds of AI systems in client companies over the past year.
Phase 2 (Gap Analysis) compares the current state with regulatory obligations. For each high-risk system, the assessment covers technical documentation, human oversight procedures, log management, user transparency, and impact assessment. The result is a prioritized roadmap with effort and timeline for each intervention.
Phase 3 (Implementation) includes drafting the corporate AI Policy, configuring monitoring systems, preparing compliance documentation, and integrating with existing GDPR processes.
Phase 4 (Training) is legally mandatory: Art. 4 of the AI Act requires that all staff operating AI systems have an adequate level of AI literacy. We have trained over 20,000 people in Italy on this topic, with programs ranging from 2-hour executive sessions to complete AI Upskilling programs.
Phase 5 (Continuous monitoring) includes periodic audits, documentation updates, and risk classification reviews. The AI Act is not a one-time obligation: it requires a living, updated governance system. To start the compliance journey, contact us for a free assessment.
Frequently Asked Questions
When does the AI Act come into force for Italian companies?
The AI Act has been in force since August 1, 2024. Bans on unacceptable-risk systems apply from February 2, 2025. GPAI model obligations from August 2, 2025. Full high-risk system obligations from August 2, 2026. Yellow Tech has already supported over 500 Italian organizations with regulatory compliance, with a team of 30+ dedicated specialists.
What penalties does the AI Act provide?
Penalties reach up to 35 million euros or 7% of global turnover for using prohibited systems, up to 15 million or 3% for high-risk obligation violations, and up to 7.5 million or 1% for false information. Proportional reductions apply to SMEs. Yellow Tech offers compliance programs from assessment through continuous monitoring.
Does the AI Act apply if I use ChatGPT or Copilot?
Yes. If your company uses ChatGPT, Microsoft Copilot, Claude, Gemini, or any other AI system in the workplace, the AI Act applies. The company is classified as a "deployer" with specific obligations, including employee AI literacy (Art. 4) and user transparency. Yellow Tech has trained over 20,000 people in 500+ organizations on compliant AI tool usage.
How is an AI system's risk classified?
Classification is based on Annex III of the AI Act, which lists high-risk system categories (biometrics, critical infrastructure, education, employment, credit, justice, migration). Yellow Tech uses a proprietary framework that maps each corporate AI system to Annex III criteria and produces a classified register with 300+ systems already analyzed for clients.
How much does AI Act compliance with Yellow Tech cost?
Cost depends on organizational complexity and the number of AI systems in use. An initial assessment with risk classification starts at 10,000-20,000 euros. A full compliance program (assessment, gap analysis, AI policy, training, monitoring) for a mid-sized company ranges between 30,000 and 80,000 euros. Yellow Tech clients rate the investment positively, with an average CSAT of 98%.