Brazil has emerged as a leader in AI regulation in South America, introducing a series of bills to govern the development and use of artificial intelligence. This legislative activity builds on the Brazilian AI Strategy, which aims to promote trustworthy and ethical AI.
Brazil's proposals come amid growing global efforts to codify responsible AI practices into law, especially in Western jurisdictions such as Europe, Canada, and the United States. Other countries are also beginning to enact policies; China, for example, has focused on regulating generative AI.
In South America specifically, multiple countries have published guidance to promote safe and fair AI, including Colombia's framework for ethical AI and Argentina’s guidelines for trustworthy AI.
On the legislative front, however, it is Brazil that is leading the region. In this blog post, we outline the nation’s approach to AI regulation.
Key takeaways:
Representing an early effort towards responsible AI in Brazil, Bill No. 5051 was introduced in 2019 to prioritise human wellbeing and rights in the use of AI through a series of principles. In particular, 5051/2019 promotes:
5051/2019 also calls for the use of AI to promote and harmonise the value of human work and economic development in Brazil, with AI-driven decision-making systems supporting, but not replacing, human decision making.
Further, human oversight and supervision mechanisms should be informed by the type, severity, and implications of the decisions made by the AI system, and supervisors will be held liable for any harm resulting from these systems.
As such, the bill provides guidelines for the Union, the States, the Federal District, and the Municipalities on the development of artificial intelligence in Brazil:
In 2020, Brazil introduced a second draft bill, 21/2020, to establish foundations and principles for AI, defined as a computational process-based system that uses human-defined objectives to process information and to perceive, interpret, and interact with the external environment in order to make predictions, recommendations, classifications, and decisions. This includes technologies such as machine learning, as well as knowledge- and logic-based systems, statistical approaches, Bayesian inference, and search and optimisation methods.
Through the use of AI, the aim is to advance technology and science in Brazil, as well as:
While 21/2020 shares commonalities with 5051/2019, it outlines considerably more foundations for AI development in Brazil:
The bill further requires that AI developed and applied in Brazil should serve beneficial purposes, put humans at the centre, respect dignity, privacy, and fundamental rights, and prevent discrimination and bias.
Where AI is used, there should be adequate transparency about how it is used: in particular, users should be informed that they are interacting with an AI system, which entity is responsible for operating it, and what risks it poses.
Furthermore, technical and governance mechanisms should be in place to manage and mitigate such risks throughout the lifecycle of the system. This is supported by responsible innovation practices that encourage documentation and accountability.
To maximise compliance, the bill requires sectoral action, meaning that the regulatory frameworks of different sectors should be considered when implementing the principles. Risk management practices should also be proportionate to the specific risks of each system, an approach shared by the EU AI Act. Moreover, the bill requires public consultation when adopting norms for AI, as well as regulatory impact assessments.
Introduced in the first half of 2021 by Senator Veneziano Vital do Rêgo, Bill No. 872 outlines the necessary foundations for AI in order to balance innovation with safety. Similar to 5051/2019, the bill asserts that AI should be founded on:
In particular, the Bill requires AI to respect people’s autonomy, help maintain social and cultural diversity by not restricting personal lifestyle choices, preserve solidarity between people across different generations, allow for democratic scrutiny and public debate, and have built-in security tools that allow for human intervention. Decisions made by AI must also be traceable and without discrimination or bias, and systems should follow governance standards to ensure ongoing risk management.
By embedding these values into AI, the aim is to promote an ecosystem that is conducive to inclusive growth and sustainable development, supports opportunities for research, innovation, and entrepreneurship, and facilitates improvements to the quality and efficiency of public services.
To support this, the Union, the States, the Federal District, and the Municipalities are required, in the development of artificial intelligence, to:
Although none of the three AI bills introduced in Brazil since 2019 has made it through Congress, a new bill, 2338/2023, was introduced in May 2023 to replace the previous iterations.
This bill mandates entities to ensure transparency and mitigate biases, particularly in high-risk AI systems. It also requires detailed public impact assessments that outline the system's purpose, risk mitigations, and stakeholder involvement, with entities held strictly liable for any damages caused.
The definition of AI has also been revised from previous bills, with an AI system now defined as a computational system with different degrees of autonomy that is designed to infer how to achieve a given set of objectives using machine learning and/or logical and knowledge representation. This process relies on input data from humans or machines and produces predictions, recommendations, or decisions to influence the real or virtual environment.
The 2023 bill also focuses on rights given to individuals, including:
Mirroring the EU AI Act, the Brazilian bill takes a risk-based approach, with obligations dependent on the level of risk posed by a system.
In order to establish the risk of an AI system, the bill requires that systems undergo a preliminary assessment by the supplier before they are placed on the market or deployed, with the assessment documented and registered to ensure accountability and liability. Where the assessment identifies that a system is associated with a high level of risk, an algorithmic impact assessment will be required, in addition to governance measures.
High-risk systems are those that are used for:
Such systems must implement transparency and data management measures, comply with data protection legislation, establish procedures for training, testing, and validating system results, and adopt information security measures. They must also comply with governance mechanisms specific to high-risk systems, such as documentation, automatic recording of events, tests for reliability and robustness, and data management to prevent bias.
Similar to the prohibitions of the EU AI Act, systems posing excessive risk are prohibited under the bill. This includes systems that employ subliminal techniques to induce harmful behaviour, those that exploit the vulnerabilities of specific groups to induce harmful behaviour, and those used by public authorities for social scoring. Furthermore, continuous biometric identification in publicly accessible spaces is only permitted with judicial authorisation in connection with security activities and individualised criminal prosecution.
Penalties for violating 2338/2023 begin with a simple fine of up to R$50,000,000.00 per infraction, or up to 2% of annual revenue, but other sanctions, such as public registration of the violation and partial or total suspension of the development or supply of the AI system, are also possible.
Brazil’s latest AI law proposal has clear parallels with the EU AI Act, demonstrating the global influence of the EU legislation, which is expected to be finalised by the end of 2023.
Companies developing and deploying AI will soon have a wave of legal requirements to navigate regardless of where they are located. Compliance is vital to promote safe and ethical AI and avoid large financial penalties and reputational damage.
However, compliance cannot happen overnight. Getting started early is the best way to maximise alignment with emerging and existing laws.
Schedule a call with our experts to find out how Holistic AI can help you with our AI Governance, Risk Management and Compliance Platform, as well as our suite of AI audit solutions.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.