Ethics, Fairness and Explanation in AI

Module aims

As AI becomes more capable and more widely adopted in industry and across society, the ethical and social issues it raises become more pressing.  Practitioners of AI should be aware of these issues, and this module is intended to foster that awareness.  The module divides into three parts.  The first, on the ethics of AI, addresses ethical and philosophical problems raised by AI, such as the alignment problem and the attribution of responsibility to autonomous agents.  The second, on fairness and bias in machine learning (ML), concerns conceptions of algorithmic fairness and bias, the accuracy/bias trade-off, and practical approaches in these areas.  The third, on explainable AI (XAI), concerns approaches for understanding why a given AI system has reached specific decisions; explanations for decisions are essential for assessing whether those decisions are justified, and thus ethical.

Learning outcomes

Upon completion of the module, students will be able to:
1.  Evaluate the ethical implications of developments in AI with respect to underlying philosophical ideas.
2.  Engage critically, in an informed fashion, with debates on AI safety and AI alignment.
3.  Detect algorithmic bias in machine learning decisions and measure it using several common metrics.
4.  Select appropriate algorithmic fairness measures for a given task, choose among pre-, in-, and post-processing methods to address bias, and perform empirical analysis using appropriate libraries.
5.  Assess the strengths and weaknesses of different approaches to explanation, and their robustness, in specific instances of AI tasks.
6.  Implement explanation tasks using widely-used Python libraries.
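To make outcomes 3 and 4 concrete, the following is a minimal sketch of one common fairness metric, demographic parity difference, computed on toy data.  The function name, data, and use of NumPy are illustrative assumptions, not taken from the module materials.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in positive-prediction rates between two groups.

    A value of 0 means the classifier predicts the positive class at
    equal rates for both groups (demographic parity); the further from
    0, the larger the disparity.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)
    return y_pred[group == 0].mean() - y_pred[group == 1].mean()

# Toy predictions: group 0 receives positive predictions 75% of the
# time, group 1 only 25% of the time, so the difference is 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = [0, 0, 0, 0, 1, 1, 1, 1]
print(demographic_parity_difference(preds, groups))  # 0.5
```

Dedicated fairness libraries provide this and related metrics (equalised odds, predictive parity, and so on) out of the box; the module's lab sessions cover such libraries in practice.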

Module syllabus

The module consists of three parts.
1. Ethics in philosophy: utilitarianism and other approaches.  Agency, moral agency, artificial agency.  AGI.  The singularity.  The simulation hypothesis.  The alignment problem.  Existential risk. Autonomous agents and responsibility.  Ethical issues in AI and regulation.
2. Definitions of fairness in ML.  The evaluation of fairness metrics.  Ways to enforce fairness in ML models.  Representation learning: traditional and adversarial approaches.  The analysis of bias in ML datasets, including the use of fairness metrics in that analysis.
3. Forms of explanation in XAI: feature attribution, counterfactuals, etc.  Evaluation metrics for explanations: machine-centric and human-centric.  Robustness of explanations and its relation to fairness.  Application areas.
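As a flavour of the feature-attribution methods listed in part 3, here is a minimal sketch of occlusion-based attribution: each feature is replaced by a baseline value and the change in the model's output is recorded as that feature's contribution.  The toy linear model and function names are illustrative assumptions, not taken from the module materials.

```python
import numpy as np

def occlusion_attribution(model, x, baseline=0.0):
    """Attribute a model's output to each input feature by replacing
    the feature with a baseline value and recording how much the
    output changes."""
    x = np.asarray(x, dtype=float)
    base_score = model(x)
    attributions = np.zeros_like(x)
    for i in range(len(x)):
        occluded = x.copy()
        occluded[i] = baseline          # hide feature i
        attributions[i] = base_score - model(occluded)
    return attributions

# Toy linear model: score = 2*x0 - x2 (x1 is ignored, so it should
# receive zero attribution).
model = lambda x: 2.0 * x[0] - 1.0 * x[2]
print(occlusion_attribution(model, [1.0, 5.0, 3.0]))  # [ 2.  0. -3.]
```

Perturbation-based attribution of this kind underlies several widely used Python explanation libraries, which the lab sessions explore on real models.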

Teaching methods

The module combines lectures, tutorials, and lab sessions.  All three parts will have lectures.  The part on the ethics of AI will mix lectures with tutorials in which students respond to readings and lecture content.  The part on fairness in ML will have lectures and tutorials, together with unassessed lab sessions of Python exercises.  The part on XAI will have lectures, tutorials, and lab sessions.  Teaching will be supported by Q&A on EdStem, and TAs will assist with tutorials and marking.

Assessments

The module is assessed solely by coursework (CW); there is no exam.  There are three pieces of CW, one for each part of the module (ethics; fairness in ML; XAI).  The ethics CW consists of written questions and answers on themes or readings discussed in the lectures and tutorials.  The fairness in ML CW is a programming exercise on algorithmic fairness and bias, with an accompanying report.  The XAI CW is a Python exercise using explanation libraries.  The weighting of the three pieces of CW matches the weighting of the parts within the module.

All three pieces of CW will receive individual feedback.  Marks and feedback will be returned within two weeks.

Module leaders

Dr Antonio Rago
Dr Robert Craven
Dr Matthew Wicker
Dr Francesco Leofante