Webinar

Bias Detection in Large Language Models: Techniques and Best Practices

Wednesday, 30 October 2024 | 10am PDT / 1pm EDT / 5pm GMT
Register for this event

Large Language Models (LLMs) are powerful AI systems trained on extensive text data to generate and predict human-like language. Their applications span numerous fields, including software development, scientific research, media, and education. However, the widespread use of these models has raised concerns about inherent biases that can lead to skewed language generation, unfair decision-making, and perpetuation of systemic inequalities. Early detection of these biases is crucial to ensure that LLMs contribute positively to society. 

This webinar will explore bias assessment in traditional machine learning and the specific challenges posed by LLMs. We will discuss policy requirements for bias assessment, such as those set out in New York City Local Law 144. The session will also cover various types of bias in LLMs and how these biases manifest in different downstream tasks, both deterministic and generative, and we will introduce several research papers published by Holistic AI. Register now to secure your spot and join the conversation on shaping the future of ethical AI.

Our Speakers


Zekun Wu

AI Researcher at Holistic AI, leading Responsible Gen AI research and development projects and conducting comprehensive AI audits for clients such as Unilever and Michelin. Currently a PhD candidate at University College London, focusing on sustainable and responsible machine learning. Collaborations include work with organizations such as OECD.AI, UNESCO, and See Talent on AI tool development and metrics for trustworthy AI. Has published research at top conferences including EMNLP and NeurIPS, covering bias detection and stereotype analysis in large language models, and has delivered lectures and talks at UCL, UNESCO, Oxford, Ofcom, and the Alan Turing Institute.

Xin Guan

AI Researcher at Holistic AI with undergraduate and master's degrees in mathematics and philosophy from the University of Oxford. Has published research at top conferences such as EMNLP. Core member of the Chinese Key National Project AGILE Index during research stays at the Chinese Academy of Sciences, and Remote Research Associate at the Centre for Long-Term AI. His research focuses on AI for good, including large language model fairness and alignment, long-term AI ethics and safety, and foundational theories of intelligence.

Nate Demchak

AI Research Assistant at Holistic AI and third-year undergraduate student at Stanford University, majoring in computer science and computational biology. Passionate about advancements in large language models and leveraging AI for social good, with a focus on open-generation bias assessment in LLMs. Currently developing a customizable bias benchmark and researching biases within existing bias benchmarks, aiming to foster more equitable and accurate AI assessments.

Ze Wang

AI Research Affiliate at Holistic AI, specializing in social bias in AI and the intersection of AI with economics. Currently pursuing a PhD at University College London, where his research focuses on applying AI techniques to empirical and theoretical economics, including game theory, labour economics, inequality, and macroeconomic dynamic modelling. He has published work at leading conferences such as EMNLP on topics including bias benchmarks, model collapse, and bias amplification in large language models, and has taught statistics tutorials at UCL.
