
Recommendation Systems: Ethical Challenges and the Regulatory Landscape

Authored by
Siddhant Chatterjee
Public Policy Strategist at Holistic AI
Published on
Jul 7, 2023

Recommendation systems have become ubiquitous in our digital lives, influencing the content we consume, the products we purchase, and the information we encounter online. Fuelled by vast amounts of user data, these algorithms have the power to personalise experiences, making suggestions tailored to our preferences and interests. Indeed, the benefits of personalisation are immense – from connecting users to the products most suited to them, to creating exponential efficiencies for the platforms that deploy them. However, if left unchecked, these systems can compromise user privacy, autonomy and agency, and therefore warrant careful consideration of their ethical risks. In this blog, we delve into the basics of recommendation systems, the possible risks associated with their deployment, and what governments worldwide are doing to address them.

Key takeaways:

  1. Recommendation systems filter and rank information to suggest relevant content based on user preferences, utilising signals like past user behaviours and demographic information.
  2. Techniques used in recommendation engines include collaborative filtering (based on similar user preferences), content-based filtering (using item characteristics), and hybrid filtering (combining both techniques).
  3. Ethical challenges arise from privacy risks and algorithmic biases, which can compromise user agency and expose users to harmful content. These risks can be particularly pronounced for vulnerable users like minors and young adults.
  4. Regulatory efforts in the European Union – primarily through the Digital Services Act and EU AI Act – aim to ensure transparency, risk assessment, and user control in recommendation systems through audits, data access provisions, and opt-out options.
  5. The United States is also seeing concerted bipartisan efforts to regulate recommendation systems and algorithms, emphasising transparency and accountability through proposed legislation.

What are recommendation systems?

Recommendation systems, or recommenders, are tools that filter, cluster and rank information, suggesting relevant content or products to users based on their preferences. These tools do so based on pre-defined criteria or signals which may include past user behaviour, platform interactions, purchase trends and demographic information, among others. Recommendations and rankings are a complex process, and there are generally a series of algorithms that carry out these functions.

Principally, there are three techniques that recommendation systems rely on (a toy code sketch of all three follows the list):

  1. Collaborative filtering: Suggests content or products to users based on the preferences of other users with similar tastes. For example, if two users have similar movie preferences, the recommender will suggest movies to one user based on the knowledge that the other user already likes them.
  2. Content-based filtering: Relies on the characteristics and features of an item to recommend items in line with a user’s preferences. For example, if a user prefers action films over period dramas, the system recommends and up-ranks content in the action genre.
  3. Hybrid filtering: Deploys both collaborative and content-based filtering techniques to recommend items to a user.
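To make the distinction concrete, the sketch below scores items for one user using all three techniques on a toy ratings matrix. It is a minimal illustration only: the data, the cosine-similarity scoring and the blending weight are hypothetical simplifications, not how any particular platform's recommender works.

```python
import numpy as np

# Toy user-item ratings matrix (rows: users, columns: items; 0 = unrated).
# All data here is hypothetical and purely illustrative.
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
], dtype=float)

# Item feature matrix (e.g. genre flags: [action, drama]). Hypothetical.
item_features = np.array([
    [1, 0],  # item 0: action
    [1, 0],  # item 1: action
    [0, 1],  # item 2: drama
    [0, 1],  # item 3: drama
], dtype=float)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

def collaborative_scores(user):
    """Collaborative filtering: score items from the ratings of similar users."""
    sims = np.array([cosine(ratings[user], ratings[u]) for u in range(len(ratings))])
    sims[user] = 0.0                      # exclude the user themselves
    return sims @ ratings / (sims.sum() + 1e-9)

def content_scores(user):
    """Content-based filtering: score items by similarity to a feature
    profile built from what the user has already rated highly."""
    profile = ratings[user] @ item_features
    return np.array([cosine(profile, f) for f in item_features])

def hybrid_scores(user, alpha=0.5):
    """Hybrid filtering: blend both signals; alpha weights the two."""
    return alpha * collaborative_scores(user) + (1 - alpha) * content_scores(user)

user = 0
unseen = ratings[user] == 0               # only recommend items not yet rated
scores = hybrid_scores(user)
best = np.flatnonzero(unseen)[np.argmax(scores[unseen])]
print(f"Recommended item for user {user}: {best}")
```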

Risks of recommendation systems

Although they can benefit both platform providers and consumers, recommendation systems pose several risks, particularly in relation to privacy and bias. These risks can expose providers not only to financial and reputational damage, but also to legal action.

Privacy risks

As recommendation systems leverage large quantities of user data to carry out their functions, they are prone to privacy risks. Such systems may collect data containing personal identifiers without obtaining explicit consent, eroding user agency. If these datasets are not adequately protected through data protection and cybersecurity mechanisms, they run the risk of being de-anonymised and misused by bad actors to build granular profiles of users – as the Cambridge Analytica scandal of the mid-2010s demonstrated.
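De-anonymisation is often achieved through a linkage attack: joining a "pseudonymised" dataset with public auxiliary data on quasi-identifiers such as postcode, age and gender. The sketch below, using entirely hypothetical data, shows how little is needed for such a join to succeed.

```python
import pandas as pd

# "Anonymised" viewing logs: direct identifiers removed, but
# quasi-identifiers retained. All data here is hypothetical.
logs = pd.DataFrame({
    "postcode": ["10115", "10115", "80331"],
    "age":      [34, 29, 41],
    "gender":   ["F", "M", "F"],
    "watched":  ["extreme_diet_videos", "cooking_shows", "true_crime"],
})

# Public auxiliary data, e.g. scraped social profiles. Hypothetical.
public = pd.DataFrame({
    "name":     ["Alice", "Carol"],
    "postcode": ["10115", "80331"],
    "age":      [34, 41],
    "gender":   ["F", "F"],
})

# A simple join on quasi-identifiers re-identifies users despite the
# removal of names and account IDs.
reidentified = public.merge(logs, on=["postcode", "age", "gender"])
print(reidentified[["name", "watched"]])
```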

Because many recommendation systems rely heavily on collaborative-filtering techniques, safeguarding users from the potentially harmful inferences these systems can draw about them becomes a complex undertaking, which in turn can impede their digital autonomy.

Bias risks

If not trained properly, recommendation and ranking systems may exhibit algorithmic biases that impede their effectiveness. These biases vary: a recommendation algorithm may prioritise popular, highly ranked or clickbait content over a user’s actual preferences (popularity bias), or fail to capture multiple user interests at the same time, recommending only a certain kind of result (single-interest bias). Depending on user behaviours, these biases can generate potentially harmful outcomes, such as inadvertently exposing users to content glorifying self-harm, eating disorders, suicide, and violent extremism.
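Popularity bias is straightforward to detect in aggregate: if a small share of the catalogue receives most of the recommendation exposure, the distribution is skewed. One common way to quantify this is the Gini coefficient of item exposure – a minimal sketch with hypothetical exposure counts:

```python
import numpy as np

def gini(exposures):
    """Gini coefficient of item exposure: 0 = perfectly even exposure,
    values near 1 = recommendations concentrated on a few items."""
    x = np.sort(np.asarray(exposures, dtype=float))  # ascending
    n = len(x)
    cum = np.cumsum(x)
    return (n + 1 - 2 * (cum / cum[-1]).sum()) / n

# Hypothetical counts of how often each catalogue item was recommended.
balanced = [100, 95, 105, 98, 102]
skewed   = [480, 10, 5, 3, 2]

print(f"balanced catalogue: gini = {gini(balanced):.2f}")  # ~0.02
print(f"popularity-biased:  gini = {gini(skewed):.2f}")    # ~0.77
```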

Despite platforms’ best efforts to curb its prevalence, automated feedback loops continue to recommend a fraction of such problematic material to users, fostering algorithmic overdependence – where individuals rely too heavily on algorithms to make decisions, without fully considering their potential risks. Algorithmic overdependence, in turn, may funnel users into filter bubbles, or echo chambers of one-dimensional and, at times, harmful and inaccurate narratives. Such risks can be particularly pronounced for vulnerable users like minors and young adults, exposing them to potentially dangerous products, inappropriate content and bad actors.
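The feedback loop itself is easy to reproduce in miniature: if the system recommends in proportion to learned interest weights and the user mostly engages with whatever is shown, exposure collapses onto a narrow slice of content. A toy simulation, with hypothetical categories and engagement rate:

```python
import random

random.seed(0)
categories = ["news", "sport", "music", "diet"]   # hypothetical content categories
weights = {c: 1.0 for c in categories}            # start with uniform interests

for _ in range(50):
    # The system recommends in proportion to the weights it has learned...
    shown = random.choices(categories, weights=[weights[c] for c in categories])[0]
    # ...and the user engages with what is shown 90% of the time, which
    # reinforces that category's weight: the feedback loop closes.
    if random.random() < 0.9:
        weights[shown] += 1.0

total = sum(weights.values())
print({c: round(w / total, 2) for c, w in weights.items()})
# Exposure typically collapses onto one or two categories.
```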

Legal action against recommendation systems

Indeed, a slew of lawsuits have been filed over the use of recommendation algorithms, highlighting their potential to cause harm both online and offline. For example, in Gonzalez v. Google, the petitioners argued that YouTube’s recommendation engine helped radicalise individuals with ISIS propaganda, and sued Google under 18 U.S.C. § 2333 of the Anti-Terrorism Act (ATA).

Further, a Seattle school district blamed recommendation algorithms for playing a central role in exacerbating mental health issues among teenagers. Citing a 2021 investigation in which teenage girls reportedly developed eating disorders after TikTok promoted extreme diet videos to them, the district sued leading social media platforms for allegedly addicting its students to problematic content.

Regulatory efforts gaining momentum

Governments worldwide have accelerated their efforts to govern recommendation systems and prevent future instances of harm. Leading the pack is the European Union, which in recent years has launched a multi-pronged regulatory endeavour to govern such algorithms, starting with its Guidelines on Ranking Transparency in 2020, which mandate that recommendation and ranking decisions be explainable and clearly communicated to users.

This is complemented by the Digital Services Act (DSA) – the EU’s mainstay legislation on online safety – which prescribes a series of measures to ensure recommender transparency, risk assessment and risk management (Articles 27, 34 and 35, respectively). The DSA requires Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs) to commission independent external audits of recommendation algorithms (Article 37) and to grant data access to Digital Services Coordinators and vetted researchers (Article 40), so that systemic risks to online safety are proactively prevented. Furthermore, under Article 38, the DSA directs VLOPs and VLOSEs to implement design and technical modifications to their systems to give users the choice to opt out of personalised recommendations.
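In engineering terms, an Article 38-style opt-out amounts to offering at least one ranking option that is not based on profiling. A minimal sketch of what such a control could look like – the names and structure here are hypothetical, not taken from the DSA or any platform – falls back to recency when the user opts out:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    published_at: float          # unix timestamp
    personal_score: float = 0.0  # output of a profiling-based ranker

def rank_feed(items: list[Item], profiling_opted_out: bool) -> list[Item]:
    # When the user has opted out of profiling, fall back to a
    # non-personalised signal (recency here); otherwise rank by the
    # personalised score. Names and structure are hypothetical.
    if profiling_opted_out:
        return sorted(items, key=lambda i: i.published_at, reverse=True)
    return sorted(items, key=lambda i: i.personal_score, reverse=True)

feed = [Item("a", 1_700_000_000, 0.9), Item("b", 1_700_100_000, 0.1)]
print([i.item_id for i in rank_feed(feed, profiling_opted_out=True)])   # ['b', 'a']
print([i.item_id for i in rank_feed(feed, profiling_opted_out=False)])  # ['a', 'b']
```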

Governing recommendation systems also comes under the purview of the EU AI Act, which seeks to establish a horizontal and risk-based regulatory regime for Artificial Intelligence. In the Act's latest compromise text – which was unanimously passed by the European Parliament on 14 June 2023 and has since proceeded to the final Trilogue stage of negotiations between the EU Parliament, Council and Commission – recommender systems deployed by VLOPs and VLOSEs have been designated as High-Risk AI systems. This brings with it a set of stringent obligations, requiring providers to undergo ex-ante conformity assessments, obtain CE certification, conduct Fundamental Rights Impact Assessments, and establish post-market monitoring plans.

Across the Atlantic, the United States is seeing concerted bipartisan efforts to regulate recommendation systems and platform algorithms. Bills like the Algorithmic Justice and Online Platform Transparency Act (2021), the Platform Accountability and Transparency Act (2021) and the Filter Bubble Transparency Act (2019) are leading examples in this regard – and, like the playbook followed by broader US AI regulation (notably the Algorithmic Accountability Act (AAA) and the Stop Discrimination by Algorithms Act (SDAA)), may subject providers of such systems to mandatory transparency measures and algorithmic audits.

It remains to be seen whether these endeavours will effectively reduce the incidence of harm or inadvertently stifle innovation. In the short term, however, increasing public scrutiny and government clarion calls for regulation are certain.

We’re part of the solution

On 6 May 2023, the European Commission published draft rules on conducting annual independent audits of large platforms under the Digital Services Act. Targeting platform algorithms (including recommendation systems), these rules are expected to be adopted by the Commission by the third quarter of 2023, leaving platforms just a few months to comply. With such regulatory measures afoot, it is crucial to prioritise the development of AI systems that embed ethical principles such as fairness, explainability and harm mitigation from the outset.

At Holistic AI, we have pioneered the field of AI ethics and have carried out over 1000 risk mitigations covering a vast range of systems. Using our interdisciplinary approach that combines expertise from computer science, law, policy, ethics, and social science, we take a comprehensive approach to AI governance, risk, and compliance, ensuring that we understand both the technology and the context it is used in.

To find out more about how Holistic AI can help you, schedule a demo with us.

DISCLAIMER: This blog article is for informational purposes only. It is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation, and is not a substitute for experienced legal counsel regarding any specific situation.
