Europe’s AI Act and what it means for data privacy, consent and the developers that build tech

Europe’s AI Act introduces the first global legal AI framework, balancing innovation with privacy and consent. Its phased rollout, Code of Practice, and GDPR intersections challenge developers and tech companies. Compliance requires risk categorization, transparency, AI literacy, and valid cookie consent, shaping the future of trustworthy, rights-respecting AI in Europe.

In 2024, the EU Artificial Intelligence Act (AI Act) entered into force, marking the world’s first comprehensive legal framework for AI. Its stated primary mission is to foster trustworthy, safe, and rights-respecting AI in Europe through a risk-based regulatory approach, according to IAPP. As with all developments in the AI space, there is considerable concern about data privacy, individual rights and how consent fits into the AI equation. Where is the balance between data privacy and the tenets of the AI Act?

Phased regulatory rollout against lightning-fast AI innovation 

One of the challenges – and fears – at the heart of this tension between regulation and innovation is that, as usual, regulation is slow to catch up with innovation. And the AI Act has not taken effect all at once. Bans on unacceptable AI practices, such as social scoring, subliminal manipulation, and emotion recognition, along with a requirement for AI literacy, came into effect in early 2025, while obligations for high-risk AI systems and general-purpose AI models do not take effect before August 2026.

For all of the regulation Europe aims to implement, pushback exists in other jurisdictions. The current US administration attempted to prohibit US states from enforcing any law governing artificial intelligence models, AI systems, or automated decision-making systems for up to ten years, although the provision was ultimately shot down by the US Senate. The point being – not everyone wants to regulate and oversee the encroaching influence of AI.

Among other murky challenges, this may have confusing, and potentially disastrous, effects on data privacy as AI use clashes with existing privacy and consent laws. But before we dive into these concerns, let’s take a look at the current landscape. 

The AI Code of Practice: Bridging the gap or a bridge too far? 

Standardization lags prompted the EU to introduce a voluntary General-purpose AI Code of Practice in summer 2025. It offers a practical framework for GP-AI model providers to demonstrate compliance with obligations in the AI Act, such as transparency, copyright, safety and systemic risk.  

Tech companies, particularly giants like Google and xAI, have mostly signed on, committing to some or all of the Code, since adopting it offers greater legal certainty and a reduced administrative burden compared with other compliance paths. Despite making these commitments, all of the corporate tech behemoths have voiced objections – from Google warning that EU regulations will stifle innovation to Meta rejecting the Code completely, citing ambiguity and regulatory overreach. This, even after the European Commission granted privileged access to the Code to a number of large US companies that wanted it watered down. It would seem that for all the good intentions of the AI Act, the final product may not work in the best interests of European citizens and their privacy.

Ultimately, this kind of corporate interference – treating the Code as a political stance rather than a technical tool – risks defeating its ostensible purpose: easing compliance.

The ambiguous AI future for developers and tech companies 

Developers are navigating considerable ambiguity during the phased rollout. Everything seems to be in a state of flux, making compliance a headache (the opposite of what the AI Act aimed to achieve). For developers and tech companies, the AI Act will mean introducing a variety of new steps.

These steps include categorizing systems by risk and documenting system design, providing transparency and copyright safeguards, delivering AI literacy training for employees and affected users, and adopting AI regulatory sandboxes.
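
To make the first of these concrete, here is a minimal sketch of what internal risk categorization might look like in code. Everything in it – the tier names, types, and functions – is invented for illustration and only loosely mirrors the Act’s structure; it is not an official schema.

```typescript
// Hypothetical risk tiers loosely mirroring the AI Act's structure:
// unacceptable (banned), high, limited (transparency duties), minimal.
type RiskTier = "unacceptable" | "high" | "limited" | "minimal";

interface AiSystemRecord {
  name: string;
  tier: RiskTier; // assigned after internal legal review
  purposes: string[]; // documented processing purposes
}

// Illustrative obligation checklist per tier (not an official mapping).
function obligationsFor(tier: RiskTier): string[] {
  switch (tier) {
    case "unacceptable":
      return ["Do not deploy: the practice is prohibited under the Act."];
    case "high":
      return [
        "Risk management system and technical documentation",
        "Human oversight and logging",
        "Conformity assessment before placing on the market",
      ];
    case "limited":
      return ["Transparency notice: users must know they interact with AI"];
    case "minimal":
      return ["No specific AI Act obligations; GDPR still applies"];
  }
}

const chatbot: AiSystemRecord = {
  name: "support-chatbot",
  tier: "limited",
  purposes: ["customer support"],
};
console.log(obligationsFor(chatbot.tier));
```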

Intersections with data privacy, cookies and consent 

While the AI Act is primarily a product safety regulation, not a privacy law, it intersects extensively with GDPR. Data processed by AI must still comply with the rights and protections under GDPR. Cookie-based processing that involves personal data requires a valid legal basis – that is, explicit consent and/or legitimate interest. And this is where some AI functions are running into trouble.
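
In practice, this means gating any AI feature that touches cookie-derived personal data on the visitor’s recorded consent. The sketch below shows the idea with a deliberately simplified consent model and invented function names, not any particular consent platform’s API:

```typescript
// Simplified consent model; real consent platforms expose richer state.
interface ConsentState {
  analytics: boolean;
  aiPersonalization: boolean; // hypothetical purpose-specific flag
}

// Hypothetical stub: in practice this would read the stored consent
// for the current visitor from your consent management platform.
function getConsentState(visitorId: string): ConsentState {
  return { analytics: false, aiPersonalization: false }; // safe default
}

function maybeEnableAiPersonalization(visitorId: string): string {
  const consent = getConsentState(visitorId);
  // Only process cookie-derived personal data for AI personalization
  // when the visitor has consented to that specific purpose.
  return consent.aiPersonalization
    ? "ai-personalized-experience"
    : "default-experience";
}

console.log(maybeEnableAiPersonalization("visitor-123"));
```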

Specifics on AI and cookies and consent 

Consent is at the heart of most online transactions and interactions – most consumers are well familiar with cookie banners and having to navigate and make decisions about what cookies to accept or reject. With AI, there is a whole new wave of considerations for developers and businesses, many of whom may not realize that AI systems require new, specific consent if data processing purposes change. That is, if user data or cookie data is repurposed to train or validate AI systems, the existing consent is no longer valid.
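
Here is a minimal sketch of that purpose-limitation rule, assuming a simplified data model where each record carries the purposes the user consented to (the names are illustrative, not a prescribed schema):

```typescript
// Each stored record carries the purposes the user actually consented to.
interface UserRecord {
  userId: string;
  data: string;
  consentedPurposes: string[]; // e.g. ["analytics", "ai_training"]
}

// Only records with explicit "ai_training" consent may be repurposed;
// consent given for analytics alone does not cover model training.
function selectTrainingData(records: UserRecord[]): UserRecord[] {
  return records.filter((r) => r.consentedPurposes.includes("ai_training"));
}

const records: UserRecord[] = [
  { userId: "a", data: "…", consentedPurposes: ["analytics"] },
  { userId: "b", data: "…", consentedPurposes: ["analytics", "ai_training"] },
];

// Only user "b" is eligible for the training set.
console.log(selectTrainingData(records).map((r) => r.userId));
```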

This comes at a time when EU regulators are tightening enforcement of cookie consent requirements, with more significant fines imposed for non-compliance. This impacts AI projects relying on cookie-based tracking or personalization – making valid consent and clear disclosures more important than ever. 

Navigating through change: AI and consent 

Europe’s AI governance landscape is advancing but still unfolding and changing. For developers and tech companies, this means: 

Understanding your role and risk category—are you a provider, deployer, importer? 

Preparing documentation and transparency strategies—even if relying on the voluntary Code of Practice. 

Prioritizing AI literacy internally, along with cross-functional compliance alliances across privacy, legal, and R&D teams. 

Ensuring data and cookie practices meet GDPR standards, particularly regarding consent and repurposed data for AI. 

Monitoring evolving enforcement, guidance, and simplification efforts by EU regulators intended to support smooth adoption (a lightweight record pulling several of these points together is sketched below).
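
As a purely illustrative example – with invented field names rather than any mandated template – a team might keep a lightweight compliance record per AI system that ties these points together:

```typescript
// Illustrative per-system compliance record; field names are invented,
// not taken from the AI Act or any official template.
interface ComplianceRecord {
  system: string;
  role: "provider" | "deployer" | "importer"; // your role under the Act
  riskTier: "unacceptable" | "high" | "limited" | "minimal";
  dataSources: string[]; // where training/input data comes from
  consentBasis: string; // legal basis for any personal data used
  transparencyNotice: boolean; // do affected users know AI is involved?
  lastReviewed: string; // ISO date of last cross-functional review
}

const record: ComplianceRecord = {
  system: "recommendation-engine",
  role: "deployer",
  riskTier: "limited",
  dataSources: ["first-party cookies (with consent)", "purchase history"],
  consentBasis: "explicit consent for ai_personalization purpose",
  transparencyNotice: true,
  lastReviewed: "2025-08-01",
};

console.log(JSON.stringify(record, null, 2));
```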

Just as with GDPR, initial compliance challenges may seem overwhelming, but adapting early, building strong governance, and prioritizing transparency will help companies not just comply—but lead in building ethical and trustworthy AI that complies with consent regulations.  

Are you prepared for AI data privacy and consent management challenges? CookieHub can help.

Sign up today and create a custom cookie banner for your website

