Guide

GDPR and Artificial Intelligence: Compliance Guide

How to use artificial intelligence while respecting the GDPR: legal bases, impact assessment, data minimization, and the intersection with the AI Act.

Updated: March 2026 · 13 min read

1. The GDPR Applied to Artificial Intelligence Systems

The GDPR (Regulation EU 2016/679) applies to all personal data processing, regardless of the technology used. When an artificial intelligence system processes data that identifies or makes a natural person identifiable -- name, email, IP address, biometric data, behavioral patterns -- the GDPR is fully applicable.

The critical point for Italian companies is that AI introduces new processing methods often not anticipated by existing privacy policies. An LLM (Large Language Model) like GPT or Claude can extract personal information from documents, infer sensitive characteristics from seemingly anonymous data, and generate behavioral profiles without the company being aware. The Italian Privacy Authority (Garante Privacy) has already issued significant rulings on this matter, starting with the ChatGPT case in 2023.

Yellow Tech has supported over 500 organizations in aligning AI usage with GDPR compliance. Experience shows that most companies underestimate AI's privacy impact: they use tools like chatbots, document analysis systems, and AI agents without having updated their processing register, privacy notices, and legal bases. The good news is that with a structured approach, compliance is achievable without slowing innovation.

2. Legal Bases for Processing Data with AI

Every personal data processing via AI must rest on a valid legal basis from among those provided by Art. 6 of the GDPR. The choice of legal basis is not a formality: it determines the data subject's rights, the controller's obligations, and the conditions for processing.

Consent (Art. 6.1.a) is the most common basis for customer-facing chatbots and direct user interactions. It must be freely given, specific, informed, and unambiguous. For AI, this means explaining that data will be processed by an AI system, indicating which data is collected, and specifying whether data is used for model training (in most enterprise contracts, it is not).

Legitimate interest (Art. 6.1.f) is often invoked for internal process analysis and operational optimization via AI. But it requires a documented balancing test (LIA -- Legitimate Interest Assessment) between the company's interest and the data subject's rights. Automating customer service with an AI agent is a plausible legitimate interest; profiling employees with AI to predict resignations is far more contentious.

Performance of a contract (Art. 6.1.b) applies when AI processing is necessary to provide the service requested by the customer. For example, an insurance company using AI to calculate premiums is performing the contract. But note Art. 22 of the GDPR: decisions based solely on automated processing that produce legal effects on the data subject require specific safeguards, including the right to obtain human intervention.

3. Data Minimization When Using Large Language Models

The data minimization principle (Art. 5.1.c GDPR) requires processing only personal data that is "adequate, relevant and limited to what is necessary" for the purposes. Applied to LLMs, this principle has immediate operational implications for any company using AI tools.

The main risk is involuntary data leakage: an employee who pastes an email containing a customer's name, surname, and tax ID into ChatGPT for summarization is transferring personal data to a third party (OpenAI) without an adequate legal basis. We have found this scenario in the majority of companies audited before implementing an AI Policy.

Three practical minimization strategies exist. The first is preventive pseudonymization: removing or replacing identifying data before entering it into the AI system. The second is using enterprise environments (ChatGPT Enterprise, Azure OpenAI, Anthropic API with DPA) that contractually guarantee data is not used for model training. The third is configuring automatic filters that intercept and mask personal data before it reaches the AI model.
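The third strategy, an automatic filter, can be sketched in a few lines. This is a minimal illustration only: the pattern names and regexes below are our own simplified assumptions, and a production filter should rely on a vetted PII-detection library rather than hand-rolled expressions.

```python
import re

# Hypothetical, simplified patterns for illustration -- a real
# deployment would use a dedicated PII-detection library.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    # Italian codice fiscale: 6 letters, 2 digits, 1 letter,
    # 2 digits, 1 letter, 3 digits, 1 letter.
    "ITALIAN_TAX_ID": re.compile(r"\b[A-Z]{6}\d{2}[A-Z]\d{2}[A-Z]\d{3}[A-Z]\b"),
}

def mask_pii(text: str) -> str:
    """Replace matched personal data with labeled placeholders
    before the text is sent to an external LLM."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

The masked text (for example, "Contact [EMAIL] about invoice 42") can then be sent to the model, while the mapping between placeholders and real values stays inside the company's perimeter.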

We implement these strategies as part of the AI Adoption journey: each company receives a customized AI tool configuration that respects the minimization principle. With over 300 AI agents in production, our team has developed consolidated patterns for secure data processing.

4. DPIA: The Impact Assessment for AI Systems

The DPIA (Data Protection Impact Assessment) is mandatory when processing "is likely to result in a high risk to the rights and freedoms of natural persons" (Art. 35 GDPR). AI usage is almost always a DPIA trigger, because it falls within the criteria indicated by the Privacy Authority: new technologies, large-scale processing, systematic monitoring, profiling.

The DPIA for an AI system must include: a description of the processing and its purposes, an assessment of necessity and proportionality, a risk analysis for data subjects, and planned mitigation measures. For AI systems, specific elements are added: model transparency ("explainability"), the possibility of algorithmic bias, decision accuracy, and the human intervention mechanism.
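The elements above can be tracked as a simple checklist during development. The structure below is an illustrative sketch with our own field labels, not legal terms of art or a substitute for the assessment itself.

```python
from dataclasses import dataclass

@dataclass
class DpiaChecklist:
    # Art. 35 GDPR core elements
    processing_description: bool = False   # processing and purposes
    necessity_proportionality: bool = False
    risk_analysis: bool = False            # risks to data subjects
    mitigation_measures: bool = False
    # AI-specific additions
    model_explainability: bool = False
    bias_assessment: bool = False
    accuracy_evaluation: bool = False
    human_oversight: bool = False

    def missing_items(self) -> list[str]:
        """Names of elements not yet completed."""
        return [name for name, done in vars(self).items() if not done]
```

A gate in the release pipeline can then block go-live while `missing_items()` is non-empty, which is one way to make "privacy by design" operational.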

The Italian Privacy Authority has published a list of processing operations subject to mandatory DPIA that explicitly includes "processing carried out using innovative technologies, including artificial intelligence." In practice, any AI agent processing personal data in production requires a DPIA.

We integrate the DPIA into the development process for every AI agent. Across 300+ agents in production, 100% have a completed DPIA approved by the client's DPO before go-live. This "privacy by design" approach prevents post-launch blocks and ensures compliance from day one of operation.

5. Data Subject Rights and AI Systems

The GDPR guarantees data subjects a series of rights that also apply when data is processed by AI systems. Art. 22 is particularly relevant: the data subject has the "right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her."

In business practice, this means an AI system can support a decision but not replace the human decision-maker when the decision has significant effects on the person. Examples: an AI that pre-screens CVs can be used as a filter, but the final decision to invite or exclude a candidate must be made by a human. An AI that calculates a credit score can produce a recommendation, but loan approval or denial requires human intervention.

Other relevant rights are the right of access (Art. 15) -- the data subject can request what data the AI system holds and how it is used; the right to rectification (Art. 16) -- if the AI system produces output based on incorrect data, the data subject can request correction; the right to erasure (Art. 17) -- the "right to be forgotten" also applies to data processed by AI; and the right to explanation -- the combination of Articles 13, 14, and 22 requires providing meaningful information about the logic of automated processing.

For companies developing AI agents with us, respect for these rights is integrated into the system architecture from the design phase, with dedicated endpoints for data subject requests and complete audit logs.
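A data-subject request endpoint with an audit trail can be sketched as follows. This is a minimal, assumed design: the `store` dict stands in for the production database, and a real system would add authentication, identity verification, and persistent audit storage.

```python
from datetime import datetime, timezone

# Illustrative in-memory audit log; a real system persists this.
AUDIT_LOG: list[dict] = []

def handle_request(store: dict, subject_id: str, action: str):
    """Handle a data-subject request, logging it for auditability."""
    AUDIT_LOG.append({
        "subject": subject_id,
        "action": action,
        "at": datetime.now(timezone.utc).isoformat(),
    })
    if action == "access":       # Art. 15: return the data held
        return store.get(subject_id, {})
    if action == "erasure":      # Art. 17: delete and confirm
        store.pop(subject_id, None)
        return {"erased": True}
    raise ValueError(f"unsupported action: {action}")
```

The key design point is that every request, whatever its outcome, leaves an audit record, so the controller can demonstrate compliance under the accountability principle (Art. 5.2).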

6. GDPR and AI Act: Where They Intersect

The AI Act does not replace the GDPR: the two regulations apply in parallel and are complementary. A company using AI systems must comply with both, and the overlap areas are significant.

The first intersection concerns impact assessment. The GDPR requires a DPIA for high-risk processing; the AI Act requires a FRIA (Fundamental Rights Impact Assessment) for high-risk AI systems. The two assessments can be conducted in an integrated manner, saving time and ensuring consistency. We have developed a unified DPIA+FRIA template used successfully across 500+ organizations.

The second intersection concerns transparency. The GDPR requires informing data subjects about the logic of automated processing (Art. 13-14). The AI Act imposes transparency obligations for all limited-risk AI systems (Art. 50). For a customer-facing chatbot, both regulations require the user to know they are interacting with an AI system and that their data is being processed.

The third intersection concerns governance. The DPO required by the GDPR and the emerging AI Officer from the AI Act must work together. The most advanced organizations among our clients have created a single integrated governance framework covering privacy, AI compliance, and cybersecurity under unified coordination. This approach significantly reduces compliance costs compared to managing them separately.

Frequently Asked Questions

Is a DPIA required for using ChatGPT in a company?

In most cases, yes, especially if employees enter personal data (even unintentionally) into the system. The Italian Privacy Authority has indicated that using innovative technologies with personal data requires a DPIA. Yellow Tech recommends a DPIA for every AI deployment involving personal data -- an approach applied successfully across 500+ client organizations.

Can I use personal data to train an AI model?

Only with a valid legal basis (typically explicit consent or legitimate interest with documented LIA), in compliance with the minimization principle, and after a DPIA. Enterprise contracts from OpenAI, Anthropic, and Google include clauses excluding data use for training. Yellow Tech configures all 300+ production AI agents with contractual clauses guaranteeing non-use of data for model training.

How do you reconcile GDPR and AI Act in practice?

The two regulations are complementary and should be managed in an integrated way. The main overlap areas are: impact assessment (DPIA + FRIA), user transparency, and organizational governance. Yellow Tech has developed a unified framework covering both regulations, tested across 500+ Italian organizations with a 98% CSAT.

What happens if an employee enters personal data into ChatGPT?

If the company has not implemented preventive measures (policy, training, technical filters), it is responsible for the unauthorized processing under the GDPR. Penalties can reach 4% of global turnover. Yellow Tech has found this scenario in the majority of companies audited before implementing an AI Policy -- training and technical controls reduce it to near zero.

Can Yellow Tech help with GDPR compliance for AI?

Yes. Yellow Tech offers an integrated program covering GDPR and AI Act: audit of existing AI usage, DPIA for every production system, AI Policy drafting with privacy section, enterprise-mode AI tool configuration, and staff training. The team of 30+ specialists has completed this program for over 500 Italian organizations, with 300+ GDPR-compliant AI agents in production.

Want to understand how AI can help your business?

Let's talk. 500+ Italian organizations already trust Yellow Tech for their AI transformation.