The European Union AI Act, consent and compliance

The EU Artificial Intelligence Act (AI Act) is the European Union’s first comprehensive legal framework regulating how artificial intelligence (AI) is used. Consent and transparency are central to the AI Act: AI systems that use biometric identification, emotion recognition, or profiling must inform users and obtain explicit consent. Are you ready for the AI Act?

What your business needs to know about the EU AI Act

Adopted in 2024, the AI Act classifies AI systems into four categories based on risk: unacceptable, high, limited, and minimal. The Act aims to ensure AI systems are safe, transparent, and respect fundamental rights, particularly those related to data privacy. 

High-risk systems face stricter requirements, including mandatory risk assessments, technical documentation, and human oversight. Some practices, such as social scoring and real-time biometric identification in public spaces, are prohibited outright.

What does EU AI Act compliance require?

Businesses that develop, distribute, or use AI in the EU need to take note of several key points:

Oversight:

High-risk systems (e.g. in hiring, education, law enforcement) require strict oversight

Documentation:

Detailed technical documentation and conformity assessments are required

Transparency:

AI systems that interact with humans or use biometric data must include transparency notices

Global application:

The Act applies extraterritorially: if your system impacts people in the EU, compliance is mandatory even for businesses based elsewhere

Who needs to comply with the EU AI Act?

The following entities must comply with the AI Act: 

AI system providers operating in or targeting the EU 

Deployers using AI systems within the EU 

Distributors and importers placing AI systems on the EU market 

Non-EU businesses whose AI systems affect EU users 

This includes startups, SMEs, and global corporations across industries.

Consumer rights under the EU AI Act

The EU AI Act intersects with data privacy laws like the GDPR and thus reinforces a number of data privacy rights for consumers, such as transparency about automated decision-making, access to personal data, and the right not to be subject to solely automated decisions with significant effects.

Why cookies matter for EU AI Act compliance

While the AI Act doesn’t directly regulate cookies, cookie data used for profiling or AI-driven decision-making can fall within its scope: 

You must disclose AI-driven personalization or profiling based on cookie data 

Explicit consent is required for biometric or sensitive data collected via cookies 

Make sure AI models using cookie data are auditable and transparent
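
The rule above can be sketched as a simple consent gate: cookie data feeds an AI profiling model only if the visitor has explicitly opted in to that purpose. The data shape and category names below are hypothetical illustrations, not CookieHub's actual API.

```python
# Illustrative sketch: gate AI-driven profiling on recorded cookie consent.
# The ConsentRecord shape and category names are hypothetical, not a real CMP API.
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Consent state captured by a consent management platform."""
    user_id: str
    granted_categories: set = field(default_factory=set)  # e.g. {"necessary", "profiling"}

def may_profile_with_cookies(consent: ConsentRecord) -> bool:
    """Cookie data may feed an AI profiling or personalization model
    only after an explicit opt-in to the 'profiling' category."""
    return "profiling" in consent.granted_categories

consent = ConsentRecord(user_id="u123", granted_categories={"necessary"})
print(may_profile_with_cookies(consent))  # False: no profiling consent recorded
consent.granted_categories.add("profiling")
print(may_profile_with_cookies(consent))  # True: explicit opt-in captured
```

Logging each consent decision alongside this check is what produces the auditable trail the Act expects.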

Penalties for EU AI Act non-compliance

Penalties under the AI Act can be substantial: 

Up to 35 million EUR or 7% of global annual revenue, whichever is higher, for engaging in prohibited practices 

Up to 15 million EUR or 3%, whichever is higher, for non-compliance with high-risk obligations 

Lesser fines can be imposed for issues such as documentation errors or lack of cooperation 

Regulators can also suspend system deployment or require product recalls for non-compliant systems.
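
Because the caps apply on a "whichever is higher" basis, the percentage figure dominates for large companies. A quick sketch of the arithmetic:

```python
# The maximum fine is the higher of the fixed cap and the revenue percentage
# (figures from the AI Act's penalty tiers; actual fines are set case by case).
def max_fine(annual_revenue_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, annual_revenue_eur * pct)

# Prohibited-practice tier: 35 million EUR or 7% of global annual revenue
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140000000.0 -> 7% dominates
# High-risk-obligation tier: 15 million EUR or 3%
print(max_fine(100_000_000, 15_000_000, 0.03))    # 15000000.0 -> fixed cap dominates
```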

How to comply with the EU AI Act 

To bring your business into compliance with the AI Act:

Review data practices:

Determine if your AI systems are used within the EU or affect EU residents

Assess risk:

Classify each system according to its risk level

Implement governance:

Implement proper documentation, data governance, and human oversight

Assess readiness:

Use third-party tools and official EU checklists to assess your readiness

Add human oversight:

Designate internal compliance officers where necessary
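
The "assess risk" step can be made concrete with the Act's four-tier classification. The lookup below is an illustrative sketch only, using the use cases named in this article; real classification requires legal analysis of the Act's prohibited-practice list and high-risk annex.

```python
# Illustrative risk-tier lookup based on use cases mentioned in this article.
# Real classification requires legal review; these mappings are examples, not advice.
RISK_TIERS = {
    "social scoring": "unacceptable",              # prohibited outright
    "real-time public biometric id": "unacceptable",
    "hiring": "high",                              # strict oversight and documentation
    "education": "high",
    "law enforcement": "high",
    "chatbot": "limited",                          # transparency notices required
    "spam filter": "minimal",
}

def classify(use_case: str) -> str:
    """Return the assumed risk tier for a known use case, defaulting to 'minimal'."""
    return RISK_TIERS.get(use_case.lower(), "minimal")

print(classify("Hiring"))          # high
print(classify("social scoring"))  # unacceptable
```

Each tier then drives the obligations listed earlier: unacceptable means the system cannot be deployed at all, high triggers documentation and human oversight, and limited triggers transparency notices.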

How CookieHub can help with EU AI Act compliance

While cookies are not central to the concerns of the AI Act, visibility, auditability, and transparency are. Cookies may well be used in AI systems, and a trusted consent management platform like CookieHub can help ensure you capture consent properly and have an accurate paper trail to show your compliance.

Frequently Asked Questions

Who does the EU AI Act apply to?

The EU AI Act applies to providers, users, importers, and distributors of artificial intelligence systems that are placed on the EU market, used within the EU, or impact people in the EU—even if the developer is located outside the EU. It classifies AI systems by risk level (unacceptable, high, limited, and minimal) and sets corresponding regulatory obligations.

What counts as personal data?

Personal data refers to any information that relates to an identified or identifiable individual, as defined under the EU General Data Protection Regulation (GDPR). This includes names, email addresses, biometric data, and other information that could directly or indirectly identify a person.

What counts as sensitive data?

Sensitive data includes special categories of personal data under GDPR, such as racial or ethnic origin, political opinions, religious beliefs, trade union membership, genetic data, biometric data for identification, health data, and data concerning a person’s sex life or sexual orientation. AI systems processing this type of data face stricter requirements.

Who enforces the EU AI Act?

The EU AI Act establishes a European Artificial Intelligence Board to coordinate enforcement across the EU. Each member state will designate one or more national supervisory authorities responsible for enforcing the AI Act within their territory.

Are any AI systems exempt from the EU AI Act?

Certain AI systems are exempt, including those used exclusively for military, defense, or national security purposes. Research and development activities not placed on the market or used in real-world applications may also be excluded, provided they meet specific conditions.

Where can I learn more about the EU AI Act?

You can learn more by visiting the official European Commission website or reviewing the full text of the AI Act. Legal advisories, compliance organizations, and national data protection authorities also provide guidance and updates on implementation.