
AI Regulation Around the World: The Netherlands

Authored by Marcus Grazette, Policy Lead at Holistic AI
Published on Feb 22, 2023

The European Union’s (EU’s) proposed AI Act aims to harmonise requirements for AI systems across the EU through its risk-based approach. In the meantime, some countries, such as the Netherlands, are pressing ahead with specific national requirements.

The Dutch approach is shaped by the scandal around a biased algorithm that the country’s tax office used to assess benefit claims. The tax office implemented the system in 2013 and, after civil society raised concerns, two formal investigations in 2020 and 2021 uncovered systematic bias affecting 1.4 million people.

Amnesty International’s report on the scandal documents the harms people suffered as a result: some lost their homes and life savings, and others suffered ill health due to stress. In 2021, then-Prime Minister Mark Rutte issued a formal apology and his entire Cabinet resigned over the scandal.

The Netherlands introduces a new statutory regime

In response, the Dutch government is ramping up algorithm oversight. It has committed to a new statutory regime to ensure that systems are checked for transparency and discrimination. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, the AP) will receive funding to create a new supervisory function. Alexandra van Huffelen, the State Secretary for Digitisation, updated parliament on the new approach in December 2022.

The data protection regulator (the AP) will receive an extra €1 million in 2023 for algorithm supervision, rising to €3.6 million by 2026. This funding is in addition to the resources needed for the national AI supervisory authority envisaged by the EU AI Act. The AP will initially focus on:

  • Identifying, analysing and sharing information on cross-sector risks.
  • Mapping supervisory activities through collaboration with the existing regulatory ecosystem.
  • Developing a common approach to standards and an overview of legal frameworks.

The specific approach is still in development, but the direction of travel is clear. The Dutch government wants more transparency about the AI systems deployed in the public sector. Proposals include:

  • A legal requirement to use the Court of Audit’s algorithm assessment framework. The framework is a structured questionnaire based on the Ethics Guidelines for Trustworthy AI produced by the EU’s High-Level Expert Group on AI.
  • A public, nationwide register of high-risk AI systems with information about purpose and impact, accountability, datasets used and data processing, risks and mitigations – including information about human oversight and non-discrimination measures – and explainability.

The algorithm assessment framework

The Dutch tax authority scandal and public debate in mid-2020 about automated decisions in the Dutch COVID-19 notification app prompted the Netherlands Court of Audit to intervene. The court audited a selection of automated systems then in use and published a report in early 2021.

In its report, the court focused on algorithms with both a predictive or prescriptive function and a “substantial impact on government behaviour, or on decisions made about specific cases, citizens or businesses”. It developed and tested an audit framework with five components: (1) accountability, (2) model and data, (3) privacy, (4) IT controls and (5) ethics. The framework aligns closely with the recommendations of the EU’s High-Level Expert Group on AI and mirrors our approach to AI auditing.

The court found that:

  • Government departments and the organisations working with them lacked a comprehensive overview of the algorithms they used.
  • In around a quarter of cases, the algorithms examined were used to assist decision-making, for example by flagging cases for further investigation.
  • In three cases, the development process did not properly consider legal and ethical standards. The court noted that development tended to involve data specialists, programmers and privacy experts, while legal and policy advisers tended to be left out.

The court concluded that “in many cases no action is taken to limit ethical risks such as biases in the selected data”. Auditing exposes gaps and builds trust by delivering recommendations for improvement.

What does this mean for businesses?

The proposals currently apply to the public sector. However, we think they will affect businesses in two important ways:

  • Companies supplying AI systems to the public sector will likely face demands from their clients to show that their systems meet the requirements.
  • Greater public awareness of AI systems and how they are used could create demand for private companies to match the level of transparency expected in the public sector.

Businesses should consider how they can demonstrate that their systems are fair, robust and explainable. We believe that AI assurance can provide that proof. Auditing can help businesses comply with their GDPR obligation to show that processing is fair, and get ahead of the risk assessment requirements in the upcoming EU AI Act.

Schedule a demo to find out more about how Holistic AI can help you navigate upcoming AI rules including the EU’s proposed AI Act and national requirements, including in the Netherlands.

DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.
