Policy Hour

Understanding the EU Digital Services Act: Impact, Obligations, and Audits for Online Platforms

Tuesday, 27 June 2023, at 09:00 am ET

On 27 June 2023, we organised the third edition of our Policy Hour webinar series on the European Union’s Digital Services Act (DSA). Our host, Holistic AI Co-Founder Dr. Adriano Koshiyama, was joined by Alejandro Guerrero, Partner in the EU Competition Law practice at Simmons & Simmons.

Effective from 16 November 2022, the DSA requires tech companies to undertake critical responsibilities such as conducting comprehensive risk assessments and implementing robust mitigation strategies, while subjecting them to rigorous independent audits. Our panellists delved into various aspects of the legislation, how it interacts with other EU digital policy regimes such as the Digital Markets Act (DMA) and the AI Act, and the due diligence obligations for covered entities, especially Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs).

The webinar also covered key differences between the EU DSA and the UK’s Online Safety Bill (OSB), and how businesses operating in the EU can best prepare to comply with the legislation.

Below we have included the slides for the webinar and a list of audience Q&As.

Q&A


The DSA and the AI Act will overlap on a few provisions, especially on enabling transparency of recommender systems. While the procedural modalities are yet to be clearly outlined, the two laws are envisaged to complement each other in achieving user safety, platform accountability and AI trust objectives. This is visible in the following ways:

  • The recitals of the AI Act highlight that adherence to the legislation should facilitate VLOPs in fulfilling their broader obligations concerning risk assessment and risk mitigation as stated in Articles 34 and 35 of the DSA.
  • The AI Act’s recitals also imply that the authorities assigned under the DSA should serve as enforcement entities responsible for implementing the provisions related to AI recommender systems outlined in the AI Act.

While both laws are geared towards safeguarding users and increasing platform accountability, the DSA addresses these objectives through a societal lens, slightly different from the OSB’s approach of focusing on individual harms. The OSB divides services into different categories depending on their size and deemed risk, as opposed to the DSA’s focus on ‘online platforms’, which include social media platforms, search engines, online marketplaces and hosting services, among others. Further, while the DSA treats all kinds of illegal content equally, the OSB contains different obligations for different types of illegal content. Finally, the DSA will be enforced by the European Commission and Digital Services Coordinators – government entities – while the OSB will be enforced by Ofcom, the UK’s independent communications regulator.

It is difficult to anticipate whether the list of VLOPs and VLOSEs will be expanded (or shortened) soon, with companies like Zalando already suing the European Commission over their inclusion in the list.

Independent auditing requirements under Article 37 of the DSA apply only to VLOPs and VLOSEs, whereas the audit provisions in the AI Act apply to providers and deployers of high-risk AI systems (HRAIS) spanning the sectors listed in Annex III of the legislation (biometrics, critical infrastructure, HR tech, insurance, etc.). This key difference aside, DSA audits will focus on providing third-party assurance on a range of online safety requirements for platforms (set out in Chapter III of the DSA), while AI Act audits will focus on providing third-party assurance on the quality management systems established by HRAIS providers and deployers.

Under the EU AI Act, generative AI tools will be subject to strict transparency obligations, with providers of such applications required to inform users when a piece of content is machine-generated, deploy adequate training and design safeguards, and publicly disclose a summary of copyrighted materials used to develop their models. This is part of an entirely new article (28b) introduced in the European Parliament’s May 2023 iteration of the AI Act.
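As a purely illustrative sketch of what such a disclosure obligation could look like in practice, a platform might attach a machine-readable label to generated content before serving it. The field names below (`ai_generated`, `disclosure`) are our own assumptions for illustration; the AI Act does not prescribe any particular schema:

```python
# Illustrative only: hypothetical metadata labelling for AI-generated content.
# The field names are assumptions, not prescribed by the AI Act.
from dataclasses import dataclass, field


@dataclass
class ContentItem:
    body: str
    metadata: dict = field(default_factory=dict)


def label_generated(item: ContentItem, model_name: str) -> ContentItem:
    """Attach a user-facing disclosure that the content is machine-generated."""
    item.metadata["ai_generated"] = True
    item.metadata["generator"] = model_name
    item.metadata["disclosure"] = "This content was generated by an AI system."
    return item


item = label_generated(ContentItem(body="..."), model_name="example-model")
print(item.metadata["ai_generated"])  # True
```

How such labels should surface to end users (watermarks, UI banners, provenance metadata) remains to be settled in implementing guidance.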

However, the current iteration of the DSA does not yet cover generative AI, creating what academics have called a ‘dangerous regulatory gap’. The European Commission appears to have taken note of this discrepancy, including generative models in its recent draft rules on auditing algorithms under the DSA. Ultimately, it will be interesting to see whether these regulations complement each other to create a comprehensive governance regime that can keep pace with the rapid evolution of generative AI.

How we can help

On 6 May 2023, the European Commission published draft rules on conducting annual independent audits of large platforms in line with the terms of the DSA. Targeting platform algorithms, the rules are expected to be adopted by the Commission in the third quarter of 2023, leaving platforms just a few months to comply. With such regulatory measures afoot, it is crucial to prioritise the development of AI systems that embed ethical principles such as fairness, explainability and harm mitigation from the outset.

At Holistic AI, we have pioneered the field of AI ethics and have carried out over 1000 risk mitigations covering a vast range of systems. Using our interdisciplinary approach that combines expertise from computer science, law, policy, ethics, and social science, we take a comprehensive approach to AI governance, risk, and compliance, ensuring that we understand both the technology and the context it is used in.

To find out more about how Holistic AI can help you, schedule a call with us.



Our Speakers


Adriano Koshiyama, Co-Founder & Co-CEO - Holistic AI

Adriano Koshiyama is the co-founder of Holistic AI, an AI Governance, Risk and Compliance (GRC) software solution. Holistic AI serves many large and medium-sized organisations on their journey to adopting AI, ensuring sound risk management and compliance with the evolving regulatory and standards environment. Previously, he was a Research Fellow in Computer Science at University College London, where he published more than 50 papers in international conferences and journals. He is responsible for ground-breaking results at the intersection of machine learning and finance, including early work on GANs and transfer learning in this area.


Alejandro Guerrero, Partner - Simmons & Simmons

Alejandro is a partner in Simmons & Simmons’ EU Competition Law practice, based in Brussels and Madrid. He specialises in antitrust and commercial regulation, compliance and investigations, as well as related disputes and litigation, across a variety of sectors including TMT, energy, consumer products, e-commerce and tech.
