The phrase "AI ethics" has become a buzzword in recent years, with researchers, policy makers, engineers, and users of artificial intelligence (AI) increasingly concerned with how they can abide by and promote the principles encompassed by the AI ethics movement. But what does this phrase actually mean? In this blog post, we provide an overview of AI ethics, first defining the term before discussing the major themes in the field and the approaches that bring about more ethical AI.
What is AI ethics?
AI ethics is a nascent field that has emerged from growing concerns about the psychological, social, and political impact of AI. As the name suggests, the field of AI ethics is concerned with ensuring that AI is used in an ethical way, and, since it draws on philosophical principles, computer science practices, and law, the field is inherently interdisciplinary. It is a subset of the wider field of digital ethics, which addresses concerns raised by the development and deployment of new digital technologies, such as AI, big data analytics, and blockchain technologies.
What are the main considerations of AI ethics?
There are six major themes that characterise the field of AI ethics: human wellbeing, safety, privacy, transparency, fairness, and accountability. These themes also fall under the umbrella of trustworthy AI.
- Human wellbeing – concerns the impact that AI can have on human agency and/or the societal impact of the use of these technologies
- Safety – prevention of harm in terms of ensuring robustness, prevention of malicious use, reliability and reproducibility, fallback plans, and safeguards in case of unknown risks
- Privacy – concerns data management or stewardship, and minimisation of the data collected by AI systems
- Transparency – relates to the explainability of the system and communication of the capabilities and purposes of the system to those both directly and indirectly impacted by it
- Fairness – an umbrella term that encapsulates testing for and mitigating against bias, accessibility of the technology and the consideration of accommodations, and engagement with wider society to solicit stakeholder views
- Accountability – inclusion of robust human oversight mechanisms and algorithmic impact assessments to determine potential harm resulting from an AI system
How can AI be made more ethical?
Having identified the main aims of AI ethics, we turn to strategies for putting these principles into practice. We identify three major approaches to AI ethics:
- Principles – the use of guidelines that inform and direct the use and development of AI
  - Abstract first principles – articulated statements of values that serve as guidance and standards for developing and deploying AI systems
  - Legislative standards and norms – development and deployment of AI systems in accordance with legal requirements
  - Inspiration from bio/medical ethics – bio/medical ethics provides a well-established model that can influence other ethical frameworks due to its robustness and accountability measures
- Processes – used to address risk and harm that can result from design issues and a lack of appropriate governance
  - Ethical by design – a commitment to building systems ethically, in the hope that harm can be prevented through consultation with experts and the adoption of clear principles, laws, standards, and guidelines with which to structure and judge design
  - Governance – of both technical aspects (accountability and transparency of the technology and design choices) and non-technical aspects (decision-making, training, education, level of human-in-the-loop)
- Ethical consciousness – actions motivated by a moral awareness or desire to do the right thing
  - Encompasses the integration of codes of conduct and compliance, consideration of reputational issues, (corporate) social responsibility, and concerns for institutional philosophy and culture
To find out more about AI ethics, read our full-length paper here.