The EU’s GDPR and upcoming AI Act aim to safeguard privacy, but critics argue they hinder innovation. Conflicts arise over transparency, data minimization, and accountability. Balancing AI growth with strong privacy protections requires privacy-first design, compliance monitoring, and ethical practices—ensuring innovation advances without sacrificing fundamental rights.
The world’s largest digital platforms, such as Meta and Google, have claimed that EU regulations stand in the way of launching their AI tools within the European market – and by extension, stand in the way of innovation for businesses across sectors. Requirements laid out in the Digital Services Act (DSA), AI Act, and General Data Protection Regulation (GDPR) create barriers, according to these tech giants, while civil society groups focused on consumer privacy argue that companies need to be held to a higher standard for accountability in terms of how data is used before being relieved of any regulatory burden.
While the TikToks and Metas of the world claim that consumers are missing out as long as their AI solutions are withheld from the EU market, civil society groups highlight the importance of consumer rights and safety. Because the right to personal data protection is fundamentally enshrined in and protected by GDPR, those protections need to be central to how AI tools are regulated, designed, and deployed. Today, that oversight appears to be missing.
Seeing AI through a data-privacy lens is more than just a layer of red tape; at the same time, voices in the European Union caution that taking no action on AI at all would equally stymie innovation and discourage investment. A balance must be struck.
What is the EU AI Act? A comprehensive approach to governing and regulating AI in Europe, the AI Act covers risk categories, obligations for providers and users, and general-purpose AI. Its ostensible purpose is to support AI innovation by allowing companies to develop and test general-purpose AI models. But it’s not quite as straightforward as that.
Critics believe the regulation slows AI development and broader innovation through its regulatory limitations. The AI Act is seen as part of a pattern of European regulatory overreach, alongside the oft-cited GDPR, that prioritizes digital protection and data privacy at the cost of AI-related economic development and leadership.
Many point to the proposed ten-year US moratorium on state-level AI regulation as a sign of tangible support for the free flow of innovation. Supporters of the proposal claim it would avoid a convoluted patchwork of state-based AI laws (which already plague the data privacy space) in favor of a single federal-level approach, while opponents claim the moratorium is merely a concession to big tech couched in competition-friendly language.
How do the AI Act and GDPR conflict, and why does it matter?
Since 2018, GDPR has protected the personal data privacy of European Union residents, ensuring that personal data is processed transparently and lawfully. The AI Act (AIA), with most of its provisions applying from 2026, aims to establish clear requirements for AI systems to respect fundamental rights, including privacy. However, friction arises where the AI Act and GDPR meet, because their scopes and objectives differ significantly from the outset.
How can these coexist, enabling both the growth and development of AI systems and individual rights to data privacy?
AI systems pose several clear challenges to privacy-by-default designs, including this non-exhaustive set of examples:
Under GDPR, data processing must have a clearly defined and legitimate purpose established early in development, which conflicts with the open-ended nature of general-purpose AI solutions; an appropriate legal basis for processing training data does not, by default, align with the principles of GDPR.
Transparency is a key tenet of GDPR and of preserving and protecting data privacy; by default, AI systems behave like black boxes, offering little visibility into their workings.
Because AI systems are trained on as much data as possible, the GDPR principles of data minimization and purpose/scope limitation are effectively rendered moot.
GDPR mandates data accuracy and strict rules for data retention, which AI systems are not designed to respect.
GDPR guarantees individuals the right to opt out of automated decision-making, which runs counter to much of what AI promises and will demand human oversight in parallel with AI system operation.
AI can make inferences from existing data, creating new privacy concerns: systems can generate identifying insights from the data on hand, a capability that falls outside of GDPR’s current scope.
GDPR and AIA assign roles and responsibilities differently, which can result in governance disparities in risk assessment and accountability. Who is accountable for what?
How can these challenges be balanced out and overcome in order to ensure that both privacy and AI innovation are respected? Organizations trying to do the right thing for both need to:
Become granular about what you ask AI to do: Define the problem the AI aims to solve, identify the data used for training and testing, determine the expected outputs, and assess how those outputs will be used. Apply GDPR data minimization principles from the outset (see the sketch after this list).
Design with privacy in mind: After evaluating both GDPR and AIA requirements, build privacy into the system’s design from the start in order to avoid the biggest pitfalls of AI’s black-box tendencies.
Monitor for compliance: Perform regular audits and stay abreast of evolving regulations to ensure that the AI system adheres to GDPR and AIA mandates.
Always return to a privacy-first way of thinking: Let GDPR be the guiding light when in doubt because individual data privacy trumps all, and GDPR provides the most protections. This means always keeping user privacy and consent top of mind.
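To make the data-minimization and consent points above concrete, here is a minimal sketch in Python of filtering a record set before it ever reaches a training pipeline: it keeps only consented records, drops direct identifiers, and retains only the fields the stated purpose needs. The field names (consent_ml_training, purchase_total, and so on) and the pseudonymize helper are illustrative assumptions, not part of GDPR, the AIA, or any particular library.

```python
# Minimal sketch: GDPR-style data minimization ahead of model training.
# Field names and the consent flag are illustrative assumptions only.
import hashlib

# Example raw records as they might arrive from a product database.
raw_records = [
    {"user_id": "u-1001", "email": "ana@example.com", "age": 34,
     "purchase_total": 120.5, "consent_ml_training": True},
    {"user_id": "u-1002", "email": "bo@example.com", "age": 27,
     "purchase_total": 80.0, "consent_ml_training": False},
]

# Only the fields the defined purpose actually needs (purpose limitation).
FEATURES_NEEDED = ("age", "purchase_total")

def pseudonymize(value: str, salt: str = "rotate-this-salt") -> str:
    """Replace a direct identifier with a salted hash (pseudonymization)."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def minimize(records):
    """Keep only consented records and only the fields needed for training."""
    for rec in records:
        if not rec.get("consent_ml_training"):
            continue  # no recorded consent -> exclude from training data
        yield {
            "pseudo_id": pseudonymize(rec["user_id"]),  # no raw IDs or emails
            **{k: rec[k] for k in FEATURES_NEEDED},
        }

if __name__ == "__main__":
    for row in minimize(raw_records):
        print(row)
```

In practice, a step like this would sit in the data pipeline before any training job runs, and the salt used for pseudonymization would be stored and rotated separately from the training data.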
Note that recent research argues that large language models (LLMs) qualify as personal data under GDPR, which would make data protection requirements applicable throughout the full development lifecycle. This interpretation is not universally accepted, but it is worth erring on the side of data privacy caution. The perceived black box of AI, when opened, can be more like a Pandora’s box, revealing that open source AI training sets may be littered with personal data. Recent research found millions of images of passports, credit cards, birth certificates, and other documents containing personally identifiable information in DataComp CommonPool, one of the largest AI training sets for image generation.
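As a rough illustration of why training-set audits matter, the sketch below scans a handful of text records for obvious PII patterns. The regular expressions are deliberately simplistic assumptions; real audits of corpora like DataComp CommonPool rely on far more robust detection tooling than this.

```python
# Illustrative sketch only: a naive scan for obvious PII patterns in text
# records. The patterns below are crude assumptions, not audit-grade checks.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "card_like_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_corpus(texts):
    """Count how many records match each crude PII pattern."""
    hits = {name: 0 for name in PII_PATTERNS}
    for text in texts:
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(text):
                hits[name] += 1
    return hits

if __name__ == "__main__":
    sample = [
        "Contact me at jane.doe@example.org for the invoice.",
        "Card ending 4111 1111 1111 1111 was charged twice.",
        "The quick brown fox jumps over the lazy dog.",
    ]
    print(scan_corpus(sample))
```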
Insights from consultancy firm EY make the case clear: “The risk of AI misusing data and infringing on privacy cannot be understated, especially given the current lack of comprehensive understanding of its long-term effects. Just as brakes make it possible to drive safely at high speeds, robust regulatory frameworks enable organizations to operate securely and confidently in the rapidly evolving landscape of AI. These frameworks enable the development of AI technologies while protecting personal privacy and human rights.”