Trustworthy Machine Learning: From Algorithmic Transparency to Decision Support


Type

Thesis

Change log

Authors

Abstract

Developing machine learning models worthy of decision-maker trust is crucial to using models in practice. Algorithmic transparency tools, such as explainability and uncertainty estimates, demonstrate the trustworthiness of a model to a decision-maker. In this thesis, we first explore how practitioners use explainability in industry. Through an interview study, we find that, while engineers increasingly use explainability methods to test model behavior during development, there is limited adoption of these methods for the benefit of external stakeholders.

To address this gap, we develop novel algorithmic transparency methods for specific decision-making contexts and test these methods with real decision-makers via human-subject experiments. We first propose DIVINE, an example-based explanation method, which finds training points that are not only influential to the model's parameters but also diversely located in input space. We show how our explanations can improve a decision-maker's ability to simulate a model's decision boundary. We next discuss Counterfactual Latent Uncertainty Explanations (CLUE), a feature importance explanation method that identifies which input features, if perturbed, would reduce the model's uncertainty on a given input. We demonstrate how decision-makers can use our explanations to identify a model's uncertainty on unseen inputs.

While each method is successful in its own right, we are interested in understanding, more generally, the settings under which outcomes improve after a decision-maker leverages a form of decision support, be it algorithmic transparency or model predictions. We propose the problem of learning a decision support policy that, for a given input, chooses which form of support to provide to decision-makers for whom we initially have no prior information. Using techniques from stochastic contextual bandits, we introduce THREAD, an online algorithm to personalize a decision support policy for each decision-maker. We deploy THREAD with real users to show how personalized policies can be learned online, and illustrate nuances of learning decision support policies in practice. We conclude this thesis with the promise of personalizing access to decision support, which could include forms of algorithmic transparency, based on decision-maker needs.
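
The abstract describes THREAD only at a high level. As a rough illustration of what learning a decision support policy with a stochastic contextual bandit can look like, the following is a minimal, hypothetical sketch (epsilon-greedy exploration with per-arm linear reward models). The arm names, context features, and reward signal here are assumptions made for illustration; this is not the thesis's THREAD algorithm.

```python
import numpy as np

# Hypothetical forms of decision support a policy could choose between.
ARMS = ["no_support", "model_prediction", "explanation"]

class DecisionSupportBandit:
    """Epsilon-greedy contextual bandit over forms of decision support (illustrative only)."""

    def __init__(self, n_features, epsilon=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.epsilon = epsilon
        # Ridge-regression sufficient statistics per arm: A = X^T X + I, b = X^T r.
        self.A = {a: np.eye(n_features) for a in ARMS}
        self.b = {a: np.zeros(n_features) for a in ARMS}

    def choose(self, context):
        """Pick a form of support for this input (and decision-maker) context vector."""
        if self.rng.random() < self.epsilon:
            return str(self.rng.choice(ARMS))  # explore
        # Exploit: score each arm with its current estimated reward weights.
        scores = {a: context @ np.linalg.solve(self.A[a], self.b[a]) for a in ARMS}
        return max(scores, key=scores.get)

    def update(self, arm, context, reward):
        """Update the chosen arm with the observed outcome (e.g. 1 if the decision was correct)."""
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context

# Example usage with made-up contexts and rewards.
bandit = DecisionSupportBandit(n_features=5)
x = np.array([0.2, 1.0, -0.3, 0.7, 0.0])
arm = bandit.choose(x)
bandit.update(arm, x, reward=1.0)
```

In an online deployment, the context could encode the input instance and what is known so far about the decision-maker, and the reward would come from the quality of their final decision; personalization then amounts to maintaining such statistics per decision-maker.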

Description

Date

2023-09-01

Advisors

Weller, Adrian

Keywords

Artificial Intelligence, Decision Support, Explainable AI, Machine Learning, Responsible AI, Transparency

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

Alan Turing Institute (TUR-000346)
Leverhulme Trust (RC-2015-067)

My PhD research was funded by the Leverhulme Centre for the Future of Intelligence (Trust and Transparency Initiative) with generous donations from DeepMind and the Leverhulme Trust. I was also supported by a Mozilla Foundation Fellowship and a Partnership on AI Research Fellowship.