The EU AI Act is the EU’s landmark law that defines rules for safe and trustworthy AI. Importantly, it prohibits specific AI practices considered unacceptable risks to fundamental rights and public safety. Understanding what is banned, from manipulative AI to untargeted biometric scraping, is crucial for developers, businesses, and policymakers preparing to comply. In this article, we explain the main prohibitions, their reasons, practical compliance tips, and what businesses should do next.
Quick summary – what the EU AI Act prohibits
Article 5 of the law lists prohibited AI practices. The main categories are:
- AI that manipulates or deceives people in ways that undermine autonomy, using techniques such as subliminal or manipulative methods.
- AI that targets vulnerabilities, such as children or individuals with disabilities.
- Social-scoring systems that evaluate or rank individuals based on social behavior or personal characteristics, where this leads to detrimental or disproportionate treatment.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement, which is banned except under narrowly defined conditions.
- Untargeted scraping to create facial recognition databases, and certain biometric categorization or emotion recognition in sensitive environments like workplaces and schools.
These prohibitions apply from 2 February 2025, the first phase of the Act's staged rollout.
Why these practices are banned (short explainer)
The EU created these prohibitions to protect fundamental rights, including privacy, dignity, non-discrimination, and freedom of thought. The recitals of the law point out risks such as subliminal manipulation, undermining autonomy, and mass surveillance from untargeted biometric databases. These harms are deemed unacceptable in democratic societies.
Prohibited practices – deeper look
Manipulative or deceptive AI
- What it covers: AI that intentionally influences decisions by exploiting cognitive biases, such as subliminal nudging in political or health contexts.
- Why banned: It can undermine free will and democratic processes.
Exploitative AI (targeting vulnerabilities)
- What it covers: Systems that lead people to harmful choices by targeting minors, disabled individuals, or other vulnerable groups for profit or other goals.
- Why banned: It exploits power imbalances and the limited ability of vulnerable individuals to understand or resist the system's influence.
Social scoring
- What it covers: AI that evaluates or ranks individuals’ trustworthiness or social behavior across various areas to determine access to services.
- Why banned: Social scoring can foster discrimination and prevent people from accessing opportunities.
Real-time remote biometric identification (RBI) in public spaces
- What it covers: Automated face recognition used in real time in publicly accessible spaces for identification by law enforcement; banned except under tightly defined exceptions.
- Why banned: It poses risks such as chilling effects, wrongful identifications, and mass surveillance.
Untargeted scraping and biometric categorization
- What it covers: Bulk scraping of images or CCTV to create facial databases and AI that infers sensitive attributes like race, religion, or sexual orientation from biometric or behavioral signals.
- Why banned: These practices allow for abusive profiling and violate privacy.
What the prohibitions mean in practice (for businesses)
- Immediate effect: The Article 5 prohibitions have applied since 2 February 2025. Deploying banned systems risks hefty fines and reputational damage.
- Compliance actions: Identify your AI use cases, conduct AI impact assessments, stop or redesign any prohibited practices, and document changes.
- Liability & penalties: Breaching Article 5 triggers the Act's highest tier of fines. Involve legal and compliance teams early.
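The "document changes" step above is often backed by a machine-readable record per system. The sketch below is purely illustrative: the field names and schema are assumptions, not anything the Act prescribes.

```python
import json
from datetime import date

# Hypothetical per-system compliance record; the AI Act does not
# mandate this schema — treat it as an internal-tooling sketch.
record = {
    "system": "recommendation-engine",
    "article_5_review": {
        "reviewed_on": date(2025, 2, 2).isoformat(),
        "prohibited_practice_found": False,
        "notes": "No subliminal techniques; no social scoring.",
    },
    "risk_class": "limited",  # internal assessment, pending legal sign-off
    "changes": [
        {"date": "2025-01-15", "change": "Removed engagement-based dark patterns"},
    ],
}
print(json.dumps(record, indent=2))
```

Keeping such records in version control gives regulators and auditors a dated trail of what was reviewed and what was redesigned.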
Exceptions and grey areas – what to watch
The Act allows for narrow exceptions and leaves room for guidance. For instance, specific law enforcement uses of biometric ID may be permitted under strict conditions, and legitimate risk-scoring in insurance or finance might remain lawful if conducted under sectoral law. However, the Commission's guidance continues to shape how "social scoring" and "exploitative" behavior are interpreted, making early legal review essential.
Practical checklist to avoid prohibited AI practices
- Map and classify all AI systems against Article 5 categories.
- Stop or redesign any system that fits a prohibited description.
- Conduct impact assessments (including fundamental rights impact assessments where required) for high-risk systems.
- Maintain detailed logs and documentation, including model cards and data lineage.
- Train staff on AI literacy and rights in the workplace.
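The first two checklist steps — mapping systems against Article 5 and flagging anything that fits a prohibited description — can be organized as a simple triage over an AI inventory. The snippet below is a minimal sketch, assuming an internal inventory format; the category keys and the `triage` helper are illustrative, and real classification must come from legal review, not keyword labels.

```python
from dataclasses import dataclass, field

# Illustrative Article 5 category keys for an internal inventory;
# actual legal categorization belongs to counsel, not to code.
ARTICLE_5_CATEGORIES = {
    "manipulative": "Subliminal or manipulative techniques (Art. 5(1)(a))",
    "exploitative": "Exploiting vulnerabilities (Art. 5(1)(b))",
    "social_scoring": "Social scoring (Art. 5(1)(c))",
    "untargeted_scraping": "Untargeted facial-image scraping (Art. 5(1)(e))",
    "emotion_recognition": "Emotion recognition at work/school (Art. 5(1)(f))",
}

@dataclass
class AISystem:
    name: str
    purpose: str
    flags: list = field(default_factory=list)  # reviewer-assigned category keys

def triage(systems):
    """Split an inventory into systems needing urgent legal review vs. the rest."""
    urgent, ok = [], []
    for s in systems:
        (urgent if s.flags else ok).append(s)
    return urgent, ok

inventory = [
    AISystem("chatbot", "customer support", flags=[]),
    AISystem("hr-screening", "video emotion analysis in interviews",
             flags=["emotion_recognition"]),
]
urgent, _ = triage(inventory)
for s in urgent:
    for f in s.flags:
        print(f"{s.name}: {ARTICLE_5_CATEGORIES[f]}")
```

Even a lightweight inventory like this makes the "stop or redesign" decision auditable: every flagged system has a named reviewer concern tied to a specific prohibition.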
FAQs
Q1: What exactly does the EU AI Act ban?
It bans AI practices considered an “unacceptable risk,” including manipulative AI, exploitative targeting, social scoring, untargeted biometric scraping, and some real-time biometric identification in public spaces.
Q2: Is all biometric identification banned by the EU AI Act?
No, the Act targets real-time remote biometric identification in public spaces and untargeted scraping; other biometric uses may be regulated as high-risk rather than outright banned.
Q3: When did the prohibitions take effect?
The main prohibitions became applicable on 2 February 2025 as part of the phased rollout.
Q4: Can companies rely on sector laws (e.g., insurance) instead?
Sectoral laws are important, but the Act’s prohibitions are broad. Legal review is necessary to avoid conflicts, especially concerning social scoring and profiling.
Q5: What penalties apply for violating a prohibition?
Violating an Article 5 prohibition carries the Act's highest penalties: fines of up to €35 million or 7% of worldwide annual turnover, whichever is higher, plus enforcement actions. Consult legal counsel for specifics.