
Facial Recognition is a Controversial and High-Risk Technology. Algorithmic Risk Management Can Help

Authored by
Airlie Hilliard
Senior Researcher at Holistic AI
Published on
Aug 26, 2022

Facial recognition has a number of applications, from controlling access to buildings and unlocking devices to replacing boarding passes and identifying suspects in law enforcement investigations.

The technology works by identifying faces in images and analysing the spatial geometry of different facial features to create a template of the face. This template can then be compared to a database of faces that have already been mapped in order to verify the identity of the person.
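As a toy illustration of this matching step (not any vendor's actual pipeline), the sketch below represents each face template as a numeric vector and compares a probe template against a database using cosine similarity; the names, dimensions, and threshold are all hypothetical, and real systems use high-dimensional embeddings learned by neural networks:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length template vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def identify(template, database, threshold=0.9):
    """Return the best-matching identity, or None if no match clears the threshold."""
    best_name, best_score = None, threshold
    for name, stored in database.items():
        score = cosine_similarity(template, stored)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Hypothetical low-dimensional templates for illustration only
database = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
probe = [0.88, 0.12, 0.31]
print(identify(probe, database))  # matches "alice"
```

The threshold is the key operational choice here: setting it too low produces false matches (the error mode behind wrongful identifications), while setting it too high causes the system to miss genuine matches.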

Facial recognition is controversial and high-risk

While widely used, facial recognition is not without controversy. Indeed, under the EU AI Act, facial recognition systems are considered high risk and subject to additional restrictions. The Act also requires that identifications made by biometric systems be verified by at least two people before action is taken based on the identification, and prohibits the use of real-time facial recognition systems by law enforcement except in a limited number of cases.

In line with the classification of facial recognition systems as high risk, some harms of facial recognition technology have already been realised. For example, the well-known Gender Shades project found that commercial gender classification tools are most accurate for lighter-skinned males and least accurate for darker-skinned females. An investigation into 189 facial recognition algorithms by the National Institute of Standards and Technology (NIST) also identified racial bias.

Facial recognition can also lack support from the very people who are applying it; the Detroit Chief of Police claimed that the technology fails 96% of the time. In support of this claim, a study of the facial recognition technology used by London's Metropolitan Police found that it fails 80% of the time.

Some policymakers have banned facial recognition

In response to the risks associated with the use of facial recognition technology, Baltimore City Council has enacted legislation banning the use of facial recognition in the city, with violations punishable by a $1,000 fine or imprisonment. Similar legislation exists in Portland, while New York City legislation restricts the sale of facial recognition and requires businesses to inform customers of their use of the technology.

Facial recognition systems must also comply with relevant data protection laws such as the GDPR. The Co-operative has recently been accused of violating these laws through its use of facial recognition to blacklist customers with a history of criminal activity, since it did not inform customers that their details were being retained for up to two years.

Algorithmic risk management can help

Where it is still permitted, the use of facial recognition technology has the potential to cause harm across all four risk verticals. However, with appropriate risk management strategies, the residual risk of facial recognition can be reduced.

  • Bias - facial recognition technology can be biased against non-white and female individuals, particularly if they are trained on data consisting of mostly lighter-skinned males. This risk of bias can be managed by examining the training data for representativeness and auditing for bias in the accuracy of the algorithms for different subgroups.
  • Privacy - biometric information is highly sensitive and can present identity risks if the data or technology falls into the wrong hands. Appropriate data management strategies and fail-safes should be implemented to restrict access to data and prevent breaches as far as possible. These strategies, and data use more broadly, should also comply with relevant data protection and privacy laws.
  • Safety - accuracy of facial recognition systems is a major concern, particularly if this is not robust across groups. There is also the risk of malicious use if the technology falls into the wrong hands, and can compromise the safety of users. Risk management strategies can help to establish safeguards to prevent the technology and data from falling into the wrong hands, and can identify mitigation strategies if the system does not perform similarly across groups.
  • Transparency - while facial recognition is arguably more explainable than other AI technologies, there is still a risk of transparency being compromised if operators do not disclose the use of technology. Risk management strategies can help to ensure there is appropriate disclosure of the use of the technology and options to decline or withdraw consent where appropriate.
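The bias audit described above amounts to disaggregating accuracy by subgroup and measuring the gap. A minimal sketch, using hypothetical audit records loosely echoing the Gender Shades finding (the group labels and numbers are illustrative, not real audit data):

```python
def subgroup_accuracy(records):
    """Compute accuracy per demographic subgroup.

    records: list of (group, correct) pairs, where `correct` is True
    when the system's output matched the ground truth for that person.
    """
    totals, hits = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        hits[group] = hits.get(group, 0) + int(correct)
    return {g: hits[g] / totals[g] for g in totals}

# Hypothetical audit records: 99/100 correct for one group, 65/100 for another
records = (
    [("lighter-skinned male", True)] * 99 + [("lighter-skinned male", False)] * 1 +
    [("darker-skinned female", True)] * 65 + [("darker-skinned female", False)] * 35
)
accuracy = subgroup_accuracy(records)
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy)                       # per-group accuracy: 0.99 vs 0.65
print(f"accuracy gap: {gap:.2f}")     # accuracy gap: 0.34
```

A large gap between the best- and worst-served subgroups is the signal that the training data or model needs remediation before deployment.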

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
