As the European Union’s landmark Artificial Intelligence Act (AI Act) approaches the end of its legislative journey following the European Parliament’s approval last month, navigating its restrictions and red lines becomes ever more important. The AI Act introduces a risk-based framework for classifying AI systems, exclusively categorizing them as prohibited, high-risk, or low (or minimal) risk.
Notably, the prohibited systems are those associated with an unacceptable level of risk: rather than imposing specific development and deployment standards to mitigate the associated risks, as it does for high-risk systems, the AI Act opts for an outright ban. While the Act refrains from defining 'unacceptable risk,' it identifies certain AI practices as inherently unacceptable due to their potential to significantly conflict with core Union values and fundamental rights, including human dignity, freedom, equality, and privacy.
Article 5 of the AI Act, which lists prohibited AI practices, more commonly known as AI systems with unacceptable risk, is one of the most significant parts of the Act, not only because it will mark the end of the road for some AI systems within the EU but also because it will be the first provision to apply in the Act’s gradual application timeline, which spans up to 36 months from entry into force.
To assist providers, deployers, users, and other stakeholders in understanding the red lines of the AI Act, this blog outlines the prohibited practices under the Act and their implications.
Key Takeaways:
Eight key AI practices are prohibited in the EU under the EU AI Act, each of which is outlined below.
The AI Act establishes strict prohibitions against AI systems that utilize subliminal techniques, manipulations, or deceptions to alter human behavior, coercing individuals into making decisions they wouldn't otherwise consider, especially when such actions could lead to significant harm. These AI systems, by influencing decisions and actions, potentially undermine personal autonomy and freedom of choice, often without individuals being consciously aware or able to counteract these influences. Such manipulations are considered highly risky, potentially leading to detrimental outcomes on an individual's physical or psychological health, or financial well-being.
AI technologies might employ subtle cues through audio, imagery, or video that, while undetectable to the human senses, are potent enough to sway behavior. Examples include streaming services embedding unnoticed messages in videos or films, or social media platforms that algorithmically promote emotionally charged content to manipulate user feelings, aiming to extend their platform engagement. These practices can subtly influence users’ subconscious, altering thoughts or actions without their realization, or exploiting emotions for undesirable ends.
The Act, however, does not ban AI's application in advertising but draws a fine line between permissible AI-enhanced advertising and forbidden manipulative or deceptive techniques. This distinction is not always straightforward and requires careful examination of the specific context on a case-by-case basis, ensuring the use of AI in advertising respects consumer autonomy and decision-making.
The AI Act also prohibits AI systems that exploit human vulnerabilities to significantly distort behavior, deeming such practices to carry unacceptable risks. The Act emphasizes the protection of individuals particularly susceptible due to factors like age, disabilities (as defined by EU accessibility legislation, which includes long-term physical, mental, intellectual, or sensory impairments), or specific social or economic situations, including severe financial hardship or belonging to ethnic or religious minorities.
Again, advertising activities may be relevant to this type of prohibited practice. For instance, these AI systems might deploy advanced data analytics to generate highly personalized online ads. By leveraging sensitive information—such as a person's age, mental health status, or employment situation—these systems aim to exploit vulnerabilities, thereby influencing individuals' choices or the frequency of their purchases. This relentless targeting not only invades privacy but gradually erodes individuals' sense of autonomy, leaving them feeling powerless in managing their online shopping behaviors and choices.
The AI Act bans social scoring AI systems that assess or categorize individuals or groups over time based on their social behavior or known, inferred, or predicted personal traits. Additionally, if either of the below is true, then the AI system will be prohibited:

- the social score leads to detrimental or unfavorable treatment of individuals or groups in social contexts that are unrelated to the contexts in which the data was originally generated or collected; or
- the social score leads to detrimental or unfavorable treatment of individuals or groups that is unjustified or disproportionate to their social behavior or its gravity.
Specifically, the EU AI Act recognizes that these systems, when used by both public and private entities, could result in discriminatory consequences and the marginalization of specific demographics. Such systems may infringe on the right to dignity and non-discrimination, along with fundamental values like equality and justice. For example, employers using AI systems to analyze job applicants’ social media activity to make hiring decisions based on factors unrelated to job performance, such as political views, religious beliefs, or membership in specific groups, would be prohibited.
However, it is important to note that this prohibition does not impede lawful assessment practices carried out for specific purposes in accordance with national and Union law. For example, the lawful deployment of AI algorithms by financial institutions to assess individuals' creditworthiness based on their financial behavior, such as payment history, debt levels, and credit utilization, helps them determine whether to approve loans or credit cards without posing an unacceptable risk in the context of the prohibitions.
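To make the two disqualifying conditions above concrete, here is a minimal, hypothetical screening sketch in Python. The function name and boolean flags are illustrative assumptions, not terminology from the Act, and any real assessment would require case-by-case legal analysis rather than an automated check.

```python
# Hypothetical illustration of the social-scoring test under Article 5.
# The boolean flags are simplified stand-ins for a legal assessment.

def is_prohibited_social_scoring(
    scores_social_behavior_or_traits: bool,          # system assigns a social score over time
    treatment_in_unrelated_context: bool,            # score used outside the context the data came from
    treatment_unjustified_or_disproportionate: bool,  # treatment out of proportion to the behavior
) -> bool:
    """Return True if the system matches the Act's social-scoring ban."""
    if not scores_social_behavior_or_traits:
        return False
    # Either disqualifying condition alone is enough to trigger the prohibition.
    return treatment_in_unrelated_context or treatment_unjustified_or_disproportionate


# A credit model using only financial behavior, applied in its original
# (lending) context and proportionate to that behavior, is not caught:
print(is_prohibited_social_scoring(False, False, False))  # False

# Hiring decisions based on a social-media "lifestyle" score, applied in an
# unrelated context, would be:
print(is_prohibited_social_scoring(True, True, False))    # True
```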
AI systems that evaluate individuals' potential for criminal behavior based solely on profiling or personality traits are also banned under the EU AI Act. This provision upholds the principle of the presumption of innocence, affirming that all individuals should be considered innocent until proven guilty. It highlights the necessity for evaluations within the EU to rely on concrete actions rather than predictions of behavior derived from profiling, personality characteristics, nationality, or economic standing, absent any reasonable suspicion supported by objective evidence and human review.
However, the Act carves out exceptions for AI tools that support human decision-making in assessing an individual's engagement in criminal activities, provided these assessments are grounded in factual and verifiable evidence directly related to criminal conduct. Additionally, AI systems focusing on risk assessments unrelated to individual profiling or personality traits—such as analyzing anomalous transactions to prevent financial fraud or using trafficking patterns to locate illegal narcotics or contraband for customs purposes—remain permissible under the Act. This distinction ensures that while protecting individual rights and the presumption of innocence, the legislation does not impede the use of AI in legitimate and evidence-based law enforcement activities.
The AI Act prohibits AI systems designed to create or expand facial recognition databases through the untargeted scraping of facial images from online sources or footage from closed-circuit television (CCTV) systems. CCTV systems, characterized by their network of video cameras that transmit signals to specific, non-publicly accessible monitors, are often used for surveillance and security. This prohibition is a critical measure within the AI Act aimed at preventing the spread of a culture of mass surveillance and practices that infringe upon fundamental rights, with a particular focus on the right to privacy. By banning such practices, the Act intends to protect individual autonomy and guard against the risks associated with uncontrolled data collection, emphasizing the importance of privacy and personal freedom in the digital age. The prohibition responds to concerns arising from concrete examples of untargeted scraping and complements the EU’s General Data Protection Regulation (GDPR) in protecting privacy where personal data is processed by or for AI; Clearview AI, for instance, has faced multiple penalties under the GDPR for non-consensually scraping images from the internet to build its facial recognition database.
AI technologies aimed at inferring or interpreting individuals' emotional states in workplaces and educational settings will be banned under the EU AI Act. This measure stems from concerns over the scientific validity of these AI applications, which attempt to analyze human emotions. Indeed, given the diversity of emotional expressions across different cultures and situations, there is a significant risk that such AI systems could lead to inaccurate assessments and biases. These technologies often suffer from issues of reliability, accuracy, and applicability, leading to potential discriminatory practices and violations of personal rights. In environments like offices or schools, where there's a notable power differential, the use of emotion-detecting AI could result in unfair treatment—such as employees being sidelined based on assumed negative emotions or students being unfairly judged as underperforming due to perceived disengagement.
However, the AI Act specifies exceptions for AI applications designed for health or safety reasons, such as in medical or therapeutic settings, underscoring the Act’s nuanced approach to balancing technological advancement with ethical considerations and human rights protections.
Another AI practice prohibited by the EU AI Act is categorizing individuals by analyzing biometric data, such as facial characteristics or fingerprints, to deduce their race, political leanings, trade union membership, religious or philosophical beliefs, sexual orientation, or details about their sex life. The use of AI in this manner risks enabling discriminatory practices across various sectors, including employment and housing, thus reinforcing societal disparities and infringing on fundamental rights like privacy and equality.
For example, when landlords or housing managers employ these AI tools for screening prospective tenants, there's a tangible risk of biased decisions against people from specific racial or ethnic backgrounds, or discrimination based on sexual orientation or gender identity. Such practices not only undermine fairness but also contravene principles of nondiscrimination and personal dignity.
Nevertheless, the AI Act acknowledges exceptions for activities that are legally permissible, including the organization of biometric data for specific, regulatory-approved purposes. Lawful uses might involve organizing images by attributes such as hair or eye color for objectives provided by law, including certain law enforcement activities, provided these actions comply with EU or national legislation. This nuanced approach aims to balance the benefits of AI technologies with the imperative to protect individual rights and prevent discrimination.
Finally, the AI Act forbids AI systems for real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes. RBI refers to the process where capturing, comparing, and identifying biometric data occur almost instantaneously, without notable delay. Publicly accessible locations are described as areas, whether publicly or privately owned, that can be entered by an undetermined number of people, regardless of any conditions of access or capacity restrictions.
The Act recognizes such AI applications as profoundly infringing upon individuals' rights and freedoms, highlighting substantial privacy and civil liberties concerns. The potential for these technologies to intrude into private lives, foster a ubiquitous surveillance environment, and deter the exercise of essential freedoms, such as the right to peaceful assembly, is particularly troubling. Moreover, the propensity for technical shortcomings, including biases and inaccuracies within these systems, could lead to incorrect detentions or the disproportionate targeting of certain groups, undermining public confidence in law enforcement and intensifying societal divides. The immediate effects of deploying such systems, combined with the limited scope for subsequent oversight, amplify the risk of adverse outcomes.
The AI Act specifies certain exceptions under precisely delineated and narrowly interpreted conditions where the use of such AI systems is deemed critical to protect a significant public interest that outweighs the potential risks involved. The exceptional use cases in which real-time RBI systems may be used in publicly accessible spaces for law enforcement purposes are as follows:

- the targeted search for specific victims of abduction, human trafficking, or sexual exploitation, as well as the search for missing persons;
- the prevention of a specific, substantial, and imminent threat to the life or physical safety of natural persons, or of a genuine and present or genuine and foreseeable threat of a terrorist attack; and
- the localization or identification of a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offenses referred to in Annex II of the Act and punishable by a custodial sentence or detention order for a maximum period of at least four years.
In introducing these exceptional use cases, the AI Act imposes various requirements and obligations on law enforcement authorities and Member States to mitigate the risks posed by real-time RBI systems:

- Each use must be deployed only to confirm the identity of the specifically targeted individual and must take into account the nature of the situation and the consequences of the use for the rights and freedoms of all persons concerned, subject to necessary and proportionate safeguards and conditions, including temporal, geographic, and personal limitations.
- Each use requires prior authorization from a judicial authority or an independent administrative authority whose decision is binding; in duly justified situations of urgency, use may begin without authorization, provided that authorization is requested without undue delay and at the latest within 24 hours.
- Deployers must complete a fundamental rights impact assessment and register the system in the EU database, subject to limited urgency exceptions.
- Each use must be notified to the relevant market surveillance authority and the national data protection authority, and Member States must lay down detailed national rules governing the authorization regime, with annual reporting to the Commission.
Rules on prohibited AI practices do not directly apply to AI models. The Act draws a subtle distinction between AI systems and AI models, introducing specific rules for the latter only when they are general-purpose models, the key building blocks of generative AI. The rules on prohibitions primarily target AI systems: what is prohibited under the Act is the placing on the market or putting into service of AI systems that engage in the prohibited practices.
Hence, the prohibitions do not directly apply to AI models. However, when an AI model, whether general-purpose or specific, is used to build an AI system, the prohibitions under the Act will be triggered.
Rules on prohibited AI practices are operator-agnostic. The AI Act distinguishes between various actors involved with AI systems, assigning distinct responsibilities based on their specific roles in relation to the AI system or model. This differentiation is particularly evident in the context of AI systems and general-purpose AI models, where the most significant responsibilities are allocated to the providers. This approach ensures that those who have the most control over the development and deployment of AI technologies are held accountable to the highest standards. In contrast to these tailored obligations for different actors, the rules regarding prohibited AI practices are designed to be operator-agnostic.
This means that the prohibitions apply universally, regardless of the actor's specific role. Whether it involves providing, developing, deploying, distributing, or utilizing AI systems that engage in prohibited practices, such actions are uniformly forbidden within the EU. This broad application underscores the Act's commitment to preventing practices that could undermine fundamental rights or pose unacceptable risks, emphasizing a comprehensive approach to regulation that encompasses all forms of interaction with AI technologies deemed harmful.
The Act has a gradual application timeline that spreads across 36 months, starting from its entry into force, which will happen on the 20th day following the Act’s publication in the EU Official Journal. However, rules on prohibitions will be the first ones to apply, with a 6-month grace period after the Act’s entry into force. Given that the Act is expected to be officially adopted at the end of May 2024, the rules on prohibited practices are likely to start applying before the end of the year.
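As a worked illustration of the timeline mechanics, the sketch below computes the relevant dates from a hypothetical publication date (the actual dates depend on when the Act is published in the Official Journal). It assumes the python-dateutil package for calendar-month arithmetic.

```python
# Illustrative timeline arithmetic for the AI Act's staged application.
# The publication date below is a placeholder assumption, not the real one.
from datetime import date
from dateutil.relativedelta import relativedelta

publication = date(2024, 6, 1)  # hypothetical Official Journal publication date

# Entry into force: the 20th day following publication.
entry_into_force = publication + relativedelta(days=20)

# Prohibitions on unacceptable-risk practices: 6 months after entry into force.
prohibitions_apply = entry_into_force + relativedelta(months=6)

# Most other provisions: 24 months after entry into force.
general_application = entry_into_force + relativedelta(months=24)

# Last provisions in the staged timeline: 36 months after entry into force.
full_application = entry_into_force + relativedelta(months=36)

print(f"Entry into force:    {entry_into_force}")
print(f"Prohibitions apply:  {prohibitions_apply}")
print(f"General application: {general_application}")
print(f"Full application:    {full_application}")
```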
The Act provides hefty penalties for non-compliance with its provisions, and the heftiest fines are reserved for non-compliance with the rules on prohibited practices. Accordingly, non-compliance with the prohibitions shall be subject to administrative fines of up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher. Union institutions, bodies, offices, and agencies, on the other hand, will be subject to administrative fines of up to EUR 1,500,000 for non-compliance with the prohibitions.
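To illustrate the "whichever is higher" mechanics of these fines, here is a minimal sketch; the turnover figures are made-up examples, and the actual fine within this ceiling would be set by the relevant authority on a case-by-case basis.

```python
# Illustrative calculation of the maximum fine ceiling for breaching the
# prohibited-practices rules: the higher of EUR 35 million or 7% of total
# worldwide annual turnover for the preceding financial year.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07

def max_fine_ceiling(worldwide_annual_turnover_eur: float) -> float:
    """Return the upper bound of the administrative fine under the Act."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * worldwide_annual_turnover_eur)

# An undertaking with EUR 2 billion turnover faces a ceiling of EUR 140
# million, since 7% of turnover exceeds the EUR 35 million floor.
print(f"EUR {max_fine_ceiling(2_000_000_000):,.0f}")  # EUR 140,000,000

# A smaller undertaking with EUR 100 million turnover: 7% would be only
# EUR 7 million, so the EUR 35 million figure applies instead.
print(f"EUR {max_fine_ceiling(100_000_000):,.0f}")    # EUR 35,000,000
```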
Preparing for the implementation of a detailed regulatory framework like the AI Act is a considerable undertaking that requires time and careful planning. While most of the Act's provisions benefit from a 24-month grace period before they come into effect, the rules concerning prohibited practices are set to be enforced first. Affected entities must now prioritize the establishment of processes and practices to comply with the regulation.
Find out how the EU AI Act impacts your business by using our EU AI Act Risk Calculator, and schedule a call to learn more about how Holistic AI can help you get ahead with your AI Act preparedness.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.