Navigating Prohibited Practices under the AI Act before the deadline

Authored by Osman Gazi Güçlütürk, Legal & Regulatory Lead in Public Policy at Holistic AI, and Bahadir Vural, Legal Researcher at Holistic AI
Published on Aug 6, 2024

Key Takeaways:

  • The EU AI Act identifies specific AI practices as prohibited. These prohibitions are not absolute, as many have exceptions, so each system should be examined on a case-by-case basis.
  • From 2 February 2025, placing these systems on the market, putting them into service, or using them in the EU will no longer be legal.
  • Unlike other requirements or obligations under the Act, rules on prohibited practices are operator-agnostic and do not change depending on the operator’s role or identity.
  • Non-compliance with these prohibitions can result in significant administrative fines of up to €35,000,000 or up to 7% of total worldwide annual turnover, whichever is higher.

The EU AI Act entered into force on 1 August 2024, with the first requirements – those on prohibited practices and AI literacy – taking effect on 2 February 2025 as part of the Act’s gradual application timeline, which spans 36 months.

The prohibited systems are those associated with an unacceptable level of risk to fundamental rights, health, and safety. From 2 February 2025, it will no longer be legal to use these systems or make them available in the EU, with heavy penalties for non-compliance. In this blog post, we outline the prohibited practices under the Act and their implications.

Which systems are prohibited under the EU AI Act?

Eight key AI practices are prohibited in the EU under the EU AI Act, as can be seen in the figure below.

Figure: Prohibited AI Practices

Subliminal, manipulative and deceptive AI techniques with the risk of significant harm

The AI Act strictly prohibits AI systems that deploy subliminal, manipulative, or deceptive techniques to materially distort people’s behavior, causing them to take decisions they would not otherwise have taken, especially where such distortion can lead to significant harm.

These AI systems can undermine personal autonomy and freedom of choice, often without individuals being consciously aware of or able to counteract these influences. Such manipulations are considered highly risky, potentially harming an individual’s physical or psychological health or financial well-being.

AI technologies might employ subtle cues through audio, imagery, or video that, while undetectable to the human senses, are potent enough to sway behavior. Examples include streaming services embedding unnoticed messages in videos or films, or social media platforms that algorithmically promote emotionally charged content to manipulate user feelings, aiming to extend their platform engagement.

The Act, however, does not ban the use of AI in advertising; rather, it draws a fine line between permissible AI-enhanced advertising and forbidden manipulative or deceptive techniques. This distinction is not always straightforward and requires careful examination of the specific context on a case-by-case basis, ensuring that the use of AI in advertising respects consumer autonomy and decision-making.

AI systems that exploit the vulnerabilities of persons

The AI Act also prohibits AI systems that exploit human vulnerabilities to significantly distort behavior.  The Act emphasizes the protection of individuals particularly susceptible due to factors like age, disabilities (as defined by EU accessibility legislation, which includes long-term physical, mental, intellectual, or sensory impairments), or specific social or economic situations, including severe financial hardship or belonging to ethnic or religious minorities.

Advertising activities may again be relevant to this type of prohibited practice. For instance, AI systems might deploy advanced data analytics to generate highly personalized online ads. By leveraging sensitive information, such as a person’s age, mental health status, or employment situation, these systems can exploit vulnerabilities, influencing individuals’ choices or the frequency of their purchases.

AI systems used for social scoring

The AI Act bans social scoring AI systems that assess or categorize individuals or groups over time based on their social behavior or known, inferred, or predicted personal traits, where the resulting score leads to either of the following (illustrated in the sketch below):

  1. Adverse treatment of individuals in social contexts unrelated to the contexts in which the data was originally generated or collected; or
  2. Adverse treatment that is unjustified or disproportionate to the social behavior or its gravity.
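
To make the two-limb structure of the prohibition concrete, here is a minimal, purely illustrative Python sketch. The function and argument names are our own shorthand rather than terms from the Act, and classifying a real system requires contextual legal analysis.

```python
def social_scoring_prohibited(
    scores_social_behavior_or_traits: bool,
    adverse_treatment_in_unrelated_context: bool,
    adverse_treatment_unjustified_or_disproportionate: bool,
) -> bool:
    """Illustrative only: a social-scoring system falls under the
    prohibition when it scores people on social behavior or personal
    traits AND the score leads to at least one of the two kinds of
    adverse treatment described in the list above."""
    return scores_social_behavior_or_traits and (
        adverse_treatment_in_unrelated_context
        or adverse_treatment_unjustified_or_disproportionate
    )
```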

Specifically, the EU AI Act recognizes that these systems, when used by both public and private entities, could result in discriminatory consequences and the marginalization of specific demographics. Such systems may infringe on the right to dignity and non-discrimination, along with fundamental values like equality and justice.  

For example, an employer using an AI system to analyze job applicants’ social media activity and make hiring decisions based on factors unrelated to job performance, such as political views, religious beliefs, or membership in specific groups, would be engaging in a prohibited practice.

However, it is important to note that this prohibition does not impede lawful assessment practices carried out for specific purposes in adherence to national and Union law. For instance, financial institutions may lawfully deploy AI algorithms to assess individuals’ creditworthiness based on their financial behavior, such as payment history, debt levels, and credit utilization, to determine whether to approve loans or credit cards; such use does not pose an unacceptable risk in the context of the prohibitions.

AI profiling or personality assessment for predictive policing

AI systems that evaluate individuals’ potential for criminal behavior based solely on profiling or personality traits are also banned under the EU AI Act. The Act underlines that, within the EU, such assessments must rely on concrete actions rather than on behavior predicted from profiling alone, absent reasonable suspicion supported by objective evidence and human review.

However, the Act carves out exceptions for AI tools that support human decision-making in assessing an individual's engagement in criminal activities, provided these assessments are grounded in factual and verifiable evidence directly related to criminal conduct. Additionally, AI systems focusing on risk assessments unrelated to individual profiling or personality traits—such as analyzing anomalous transactions to prevent financial fraud or using trafficking patterns to locate illegal narcotics or contraband for customs purposes—remain permissible under the Act.

Untargeted scraping of facial images for AI facial recognition databases

The AI Act prohibits AI systems designed to create or expand facial recognition databases through untargeted scraping of facial images from online sources or from closed-circuit television (CCTV) footage. CCTV systems, characterized by their network of video cameras transmitting signals to specific, non-publicly accessible monitors, are often used for surveillance and security.

This prohibition is a critical measure within the AI Act aimed at preventing the spread of a culture of mass surveillance and practices that infringe upon fundamental rights, with a particular focus on the right to privacy. It responds to concerns arising from concrete examples of untargeted scraping: Clearview AI, for instance, has faced multiple penalties under the EU’s General Data Protection Regulation (GDPR) for non-consensually scraping images from the internet to build its facial recognition database. In this respect, the prohibition complements the GDPR in protecting privacy wherever personal data is processed by or for AI.

AI systems for inferring emotions in workplaces and education

AI technologies aimed at inferring or interpreting individuals' emotional states in workplaces and educational settings will be banned under the EU AI Act. This measure stems from concerns over the scientific validity of these AI applications, which attempt to analyze human emotions. Indeed, given the diversity of emotional expressions across different cultures and situations, there is a significant risk that such AI systems could lead to inaccurate assessments and biases. These technologies often suffer from issues of reliability, accuracy, and applicability, leading to potential discriminatory practices and violations of personal rights.

However, the AI Act specifies exceptions for AI applications designed for health or safety reasons, such as in medical or therapeutic settings, underscoring the Act’s nuanced approach to balancing technological advancement with ethical considerations and human rights protections.

Biometric categorization AI systems to infer sensitive personal traits

Another AI practice prohibited by the EU AI Act is categorizing individuals by analyzing biometric data, such as facial characteristics or fingerprints, to deduce their race, political leanings, trade union membership, religious or philosophical beliefs, sexual orientation, or details about their sex life. The use of AI in this manner risks enabling discriminatory practices across various sectors, including employment and housing, thus reinforcing societal disparities and infringing on fundamental rights like privacy and equality.

For example, when landlords or housing managers employ these AI tools for screening prospective tenants, there's a tangible risk of biased decisions against people from specific racial or ethnic backgrounds.

Nevertheless, the AI Act acknowledges exceptions for activities that are legally permissible, including the organization of biometric data for specific, regulatory-approved purposes. Lawful uses might involve organizing images by attributes such as hair or eye color for objectives provided by law, including certain law enforcement activities, provided these actions comply with EU or national legislation.

AI systems for real-time remote biometric identification in publicly accessible spaces

Finally, the AI Act forbids AI systems for real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. This is not to be confused with other types of biometric systems that are considered high-risk under the EU AI Act, a distinction we explore in a separate post. RBI refers to a process in which the capturing, comparison, and identification of biometric data all occur almost instantaneously, without notable delay. Publicly accessible spaces are defined as areas, whether publicly or privately owned, that can be entered by an undetermined number of people, regardless of any conditions of access or capacity restrictions.

The potential for these technologies to intrude into private lives, foster a ubiquitous surveillance environment, and deter the exercise of essential freedoms, such as the right to peaceful assembly, is particularly troubling. Moreover, the propensity for technical shortcomings, including biases and inaccuracies within these systems, could lead to wrongful detentions or the disproportionate targeting of certain groups, undermining public confidence in law enforcement and intensifying societal divides.

Even this prohibition is not absolute, however: the Act permits real-time RBI in narrowly defined situations, such as the targeted search for victims of abduction or missing persons, the prevention of an imminent threat to life or of a terrorist attack, and the localization of suspects of certain serious crimes, subject to prior authorization and additional safeguards.

Who do EU AI Act prohibitions apply to?

Rules on prohibited AI practices are operator-agnostic. The AI Act distinguishes between various actors involved with AI systems, assigning distinct responsibilities based on their specific roles in relation to the AI system or model. This differentiation is particularly evident in the context of AI systems and general-purpose AI models, where the most significant responsibilities are allocated to providers, ensuring that those with the most control over the development and deployment of AI technologies are held accountable. In contrast to these tailored obligations, the rules on prohibited AI practices apply regardless of an actor’s specific role, whether that actor provides, develops, deploys, distributes, or otherwise uses an AI system engaging in a prohibited practice.

Do EU AI Act prohibitions apply to AI models?

Rules on prohibited AI practices do not directly apply to AI models. The Act draws a subtle distinction between AI systems and AI models, introducing specific rules for the latter only when they are general-purpose models, the key building blocks of generative AI. The rules on prohibitions primarily target AI systems engaging in the prohibited practices.

What are the penalties for using a prohibited AI system under the EU AI Act?

Non-compliance with the prohibitions on these AI practices is subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

On the other hand, Union institutions, bodies, offices, and agencies will be subject to administrative fines of up to EUR 1,500,000 for their non-compliance with the prohibited practices.
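
As a worked illustration of how these ceilings combine, the following minimal Python sketch computes the applicable fine ceiling. The threshold figures come from the Act itself; the function and variable names are our own.

```python
def prohibited_practice_fine_ceiling(
    worldwide_annual_turnover_eur: float,
    is_union_institution: bool = False,
) -> float:
    """Upper bound of the administrative fine for engaging in a
    prohibited practice: EUR 1.5 million for Union institutions,
    bodies, offices, and agencies; otherwise the higher of a fixed
    EUR 35 million cap or 7% of total worldwide annual turnover
    for the preceding financial year."""
    if is_union_institution:
        return 1_500_000
    return max(35_000_000, 0.07 * worldwide_annual_turnover_eur)

# For an undertaking with EUR 1 billion in turnover, 7% (EUR 70 million)
# exceeds the EUR 35 million fixed cap, so the higher figure applies.
print(prohibited_practice_fine_ceiling(1_000_000_000))  # 70000000.0
```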

How to prepare for the EU AI Act’s prohibitions

  1. Create an inventory of any AI systems you are developing or deploying
  2. Classify your systems to determine if any are prohibited (a minimal screening sketch follows this list)
  3. If any systems are prohibited, immediately stop using them in the EU
  4. Determine whether you are a provider or deployer of AI in the EU
  5. Ensure that you have a plan for ongoing AI literacy training and mechanisms to assess that literacy is sufficient, particularly if you are a developer or deployer
  6. Start to prepare for provisions on any general purpose AI models or high-risk AI systems that will come into effect in the near future
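
For steps 1 to 3, a simple inventory-and-screening routine can help surface systems that need legal review. The sketch below is a hypothetical starting point: the category labels loosely mirror the practices described above, but the data model and flagging logic are our own illustration and no substitute for a case-by-case legal assessment.

```python
from dataclasses import dataclass, field

# Shorthand labels loosely mirroring the prohibited practices described
# above; assigning one to a system is a legal judgment, not a keyword match.
PROHIBITED_CATEGORIES = {
    "subliminal_or_manipulative",
    "exploiting_vulnerabilities",
    "social_scoring",
    "predictive_policing_profiling",
    "untargeted_facial_scraping",
    "emotion_inference_work_or_education",
    "biometric_categorisation_sensitive_traits",
    "realtime_remote_biometric_id_law_enforcement",
}

@dataclass
class AISystem:
    name: str
    purpose: str
    deployed_in_eu: bool
    categories: set = field(default_factory=set)  # assigned during classification

def flag_for_review(inventory):
    """Return systems classified into a prohibited category that touch
    the EU, so they can be escalated for legal review (steps 2 and 3)."""
    return [s for s in inventory
            if s.deployed_in_eu and s.categories & PROHIBITED_CATEGORIES]

inventory = [
    AISystem("cv-ranker", "rank job applicants by stated skills", True),
    AISystem("mood-cam", "infer employee emotions from video", True,
             {"emotion_inference_work_or_education"}),
]
for system in flag_for_review(inventory):
    print(f"Escalate for review: {system.name} ({system.purpose})")
```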

Stay within the safe zone with Holistic AI!

Preparing for the implementation of a detailed regulatory framework like the AI Act is a considerable undertaking that requires time and careful planning. While the Act has a 24-month grace period before most of its provisions come into effect, the rules concerning prohibited practices are set to be enforced first. Affected entities must now prioritize the establishment of processes and practices to comply with the regulation.

Find out how the EU AI Act impacts your business by using our EU AI Act Risk Calculator, and schedule a call to learn more about how Holistic AI can help you get ahead with your AI Act preparedness.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
