
The White House Publishes its Blueprint for an AI Bill of Rights

Authored by Airlie Hilliard, Senior Researcher at Holistic AI
Published on Oct 11, 2022

AI Risk Management is becoming a top global priority. There have been many high-profile cases of harm caused by the use of AI and algorithms, from discrimination in credit scoring and insurance, to unreliable trading algorithms. Governments and regulators are now cracking down.

Several countries have proposed legislation and frameworks to ensure that AI is developed, used and governed in a responsible and ethical way, to prevent further harms.

AI regulation is gathering steam in the U.S.

The U.S. has been leading the charge. At the federal level, the Algorithmic Accountability Act has been proposed, and the National Institute of Standards and Technology (NIST) has produced an AI Risk Management Framework. States and other jurisdictions have passed or proposed laws that regulate AI use in specific areas. Illinois, for example, has enacted the Artificial Intelligence Video Interview Act, which requires employers to notify job applicants when artificial intelligence is used to analyse their video interviews.

The New York City Council passed legislation mandating bias audits of automated tools used to make decisions about hiring candidates and promoting employees. Legislation enacted in Colorado prevents insurance providers from using biased algorithms or data to make decisions.

Finally, Washington, D.C. has proposed the Stop Discrimination by Algorithms Act, which aims to prevent discrimination in automated decisions about employment, housing, and public accommodation and to require audits for discriminatory patterns.

The AI Bill of Rights

Demonstrating its commitment to managing the risks of AI, the White House Office of Science and Technology Policy (OSTP) recently published the Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People.

The Blueprint is a non-binding White House white paper designed to protect the American public from the harms that AI can cause. It signals the Biden administration's vision of how AI should be governed and is intended to inform future U.S. policy and legislation.

The Blueprint sets out five principles that should guide the design, use and deployment of AI and automated systems:


1. Safe and effective systems: you should be protected from unsafe or ineffective systems

  • Systems should be designed in a way that prevents foreseeable harms
  • Risks should be tested for, identified and mitigated before a system is deployed
  • There should be ongoing monitoring to ensure that systems are safe and effective (a minimal sketch of one such check follows this list)
  • Independent evaluations and audits of systems should be undertaken and made public wherever possible
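
As a loose illustration of how the ongoing-monitoring bullet above might translate into engineering practice, the sketch below re-scores a deployed model on freshly labelled data and flags it for review when accuracy drops below a chosen threshold. It is only a minimal sketch under assumed names and numbers (the predict interface, the data source and the 0.85 threshold are hypothetical), not a procedure taken from the Blueprint.

```python
# Minimal sketch of an ongoing effectiveness check for a deployed model.
# The predict() interface, the data source, and the 0.85 accuracy threshold
# are illustrative assumptions, not requirements set by the Blueprint.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class MonitoringReport:
    accuracy: float
    threshold: float
    needs_review: bool  # True when performance has degraded past the threshold


def monitor_effectiveness(
    predict: Callable[[Sequence], Sequence],  # the deployed model's prediction function
    features: Sequence,                       # recently collected, labelled inputs
    labels: Sequence,                         # ground-truth outcomes for those inputs
    threshold: float = 0.85,                  # assumed minimum acceptable accuracy
) -> MonitoringReport:
    """Re-score the model on fresh data and flag it if accuracy drops below the threshold."""
    predictions = predict(features)
    correct = sum(1 for p, y in zip(predictions, labels) if p == y)
    accuracy = correct / len(labels) if labels else 0.0
    return MonitoringReport(accuracy, threshold, needs_review=accuracy < threshold)
```

A report with needs_review set to True would then feed the escalation, mitigation and documentation steps the principle calls for.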

2. Algorithmic discrimination protections: you should not face discrimination from algorithms, and systems should be used and designed in an equitable way

  • Designers, developers and deployers should proactively and continually consider how automated systems might result in discriminatory outcomes or treatment
  • Disparate impact, or discriminatory outcomes, should be continually tested for and mitigated against (see the sketch after this list)
  • There should be appropriate oversight to prevent proxies for protected classifications from being used to make decisions
  • Independent evaluations and audits should be performed and made public where possible
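
To make the disparate-impact bullet concrete: one common convention for spotting disparate impact in selection tools (used in US employment contexts, though it is only a rule of thumb) is the "four-fifths rule", which compares each group's selection rate with that of the most-favoured group and flags ratios below 0.8. The sketch below applies that convention to hypothetical hiring figures; the group labels, counts and 0.8 cut-off are illustrative assumptions, and this is not a method mandated by the Blueprint.

```python
# Illustrative disparate-impact check based on the four-fifths rule convention.
# Group names, counts, and the 0.8 cut-off are assumptions for the example only.
def impact_ratios(selected: dict[str, int], totals: dict[str, int], cutoff: float = 0.8) -> dict[str, dict]:
    """Compare each group's selection rate with the highest-rate group and flag low ratios."""
    rates = {group: selected[group] / totals[group] for group in totals if totals[group] > 0}
    best_rate = max(rates.values())
    return {
        group: {
            "selection_rate": round(rate, 3),
            "impact_ratio": round(rate / best_rate, 3),
            "flagged": rate / best_rate < cutoff,  # potential adverse impact worth investigating
        }
        for group, rate in rates.items()
    }


# Hypothetical figures: candidates screened in by an automated hiring tool, per group.
report = impact_ratios(selected={"group_a": 50, "group_b": 30}, totals={"group_a": 100, "group_b": 100})
print(report)  # group_b's impact ratio of 0.6 falls below the 0.8 cut-off and is flagged
```

A flagged ratio is a prompt for closer investigation and mitigation rather than proof of discrimination, and a fuller audit would look at metrics beyond selection rates.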

3. Data privacy: you should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used

  • Algorithmic systems should have built-in protections against the malicious use of data
  • Users should consent to the collection, transfer, use, access and storage of data
  • Requests for consent should be clear and transparent
  • Continuous surveillance technology should be proportionate and limited to what is strictly necessary to achieve a legitimate purpose
  • Continuous surveillance technology should not be used in sensitive contexts like work, education or housing

4. Notice and Explanation: you should know that an automated system is being used and understand how and why it contributes to outcomes that impact you

  • Designers, developers and deployers should provide accessible and clear documentation about the system’s function, how automation is used, who is responsible for the system, and the outcomes of the system
  • Notices should be up-to-date and additional notices should be given following any major changes to the system
  • Explanations should be technically valid, meaningful, useful, and tailored to the audience and context
  • Where possible, notices should be made publicly accessible

5. Human Alternatives, Consideration and Fallback: you should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter

  • Where appropriate, there should be an option to opt out of the automated decision in favour of a human alternative
  • Fallback and escalation processes should be designed to allow humans to make effective and equitable decisions if an automated system fails, results in an error, or the output is appealed
  • Automated decisions used in sensitive contexts like employment and education should have additional oversight and those interacting with the system should receive appropriate training
  • Publicly accessible documentation detailing the human governance and oversight processes, their outcomes and their effectiveness should be made available wherever possible

Based on insights from researchers, technologists, advocates, and policymakers, the White House also published ‘From Principles to Practice’, a technical companion to the Blueprint, to support organisations in implementing the framework.

What should companies do?

AI Risk Management is climbing up the global agenda and is a priority issue for the U.S. Government. It is vital for protecting against the harms posed by automated systems, while maximising their value.

Being proactive and taking steps early to establish AI Risk Management processes is the only way you can achieve command and control over your automated systems. Request a demo to find out more about how Holistic AI can support you on this journey!

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
