
Accurate Uncertainty Quantification and Explainable Artificial Intelligence in Machine Learning Models for Toxicological Risk Assessment


Authors

Gong, Chen 

Abstract

Consumer and environmental safety decisions can be supported by Quantitative Structure-Activity Relationship (QSAR) models – a key part of the Next Generation Risk Assessment strategy for animal-free safety. Machine learning methods are often employed to build QSAR models, but these “black box” functions must still be validated robustly before they can be included in risk assessment strategies. Two key issues remain: the uncertainty of the predictions and the transparency of the model.

The second chapter discusses mechanistically driven structural alerts for mitochondrial toxicity. Structural alerts are constructed using a maximum common substructure algorithm developed by Wedlake et al. (2020), and their mechanisms are verified by literature review. The alerts performed well in external validation when combined with existing structural alerts, and can be built upon further as more data become available.
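As an illustration of the underlying idea only (not the Wedlake et al. implementation, which operates on real chemical structures), a maximum common substructure between two toy molecular graphs can be found by brute force. Atoms are element labels, bonds are unordered index pairs, and all names below are hypothetical:

```python
from itertools import combinations, permutations

def is_connected(nodes, bonds):
    """Check that the chosen atom indices form one connected fragment."""
    nodes = list(nodes)
    seen, frontier = {nodes[0]}, [nodes[0]]
    while frontier:
        u = frontier.pop()
        for v in nodes:
            if v not in seen and frozenset((u, v)) in bonds:
                seen.add(v)
                frontier.append(v)
    return len(seen) == len(nodes)

def mcs_size(atoms_a, bonds_a, atoms_b, bonds_b):
    """Size of the largest connected substructure common to both graphs.

    Brute force: try atom subsets of molecule A from largest to smallest
    and look for an element- and bond-preserving embedding into molecule B.
    """
    for k in range(len(atoms_a), 0, -1):
        for sub in combinations(range(len(atoms_a)), k):
            if not is_connected(sub, bonds_a):
                continue
            for target in permutations(range(len(atoms_b)), k):
                if any(atoms_a[a] != atoms_b[b] for a, b in zip(sub, target)):
                    continue
                m = dict(zip(sub, target))
                if all((frozenset((m[u], m[v])) in bonds_b)
                       == (frozenset((u, v)) in bonds_a)
                       for u, v in combinations(sub, 2)):
                    return k
    return 0

# Ethanol (C-C-O) vs. 1-propanol (C-C-C-O): the shared fragment is C-C-O.
ethanol = (["C", "C", "O"], {frozenset((0, 1)), frozenset((1, 2))})
propanol = (["C", "C", "C", "O"],
            {frozenset((0, 1)), frozenset((1, 2)), frozenset((2, 3))})
print(mcs_size(*ethanol, *propanol))  # → 3
```

The exhaustive search is exponential in molecule size; practical MCS tools prune the search heavily, but the output is the same in spirit: a shared fragment that can be turned into a structural alert.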

In chapter three, uncertainty quantification is studied by considering three different modelling methodologies (Bayesian bootstrapping, conformal prediction, and Bayesian neural networks) on a diverse dataset of 21 toxicologically relevant targets identified by Allen et al. (2022). Metrics to evaluate uncertainty quantification are defined and four interpretations of uncertainty are investigated. I show that being uncertain about a prediction does not necessarily imply a higher error on average, and that the epistemic uncertainty within a Bayesian neural network is correlated with the applicability domain of the model.
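Of the three methodologies, inductive conformal prediction has the simplest mechanics and can be sketched in a few lines. The data below are synthetic and the function name is illustrative, not from the thesis:

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.25):
    """Inductive conformal prediction for a binary classifier.

    Nonconformity score: 1 - predicted probability of the candidate class.
    A label enters the prediction set when its p-value (the fraction of
    calibration scores at least as nonconforming) exceeds alpha.
    """
    n = len(cal_labels)
    # One nonconformity score per calibration example, at its true label.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    sets = []
    for probs in test_probs:
        labels = []
        for y in (0, 1):
            score = 1.0 - probs[y]
            p_value = (np.sum(cal_scores >= score) + 1) / (n + 1)
            if p_value > alpha:
                labels.append(y)
        sets.append(labels)
    return sets

# Synthetic calibration data: predicted probabilities plus observed labels.
cal_probs = np.array([[0.9, 0.1], [0.8, 0.2], [0.2, 0.8],
                      [0.1, 0.9], [0.4, 0.6]])
cal_labels = np.array([0, 0, 1, 1, 0])
test_probs = np.array([[0.95, 0.05], [0.5, 0.5]])
print(conformal_sets(cal_probs, cal_labels, test_probs))
# → [[0], [0, 1]]: confident point gets one label, ambiguous point both.
```

Under exchangeability, prediction sets built this way contain the true label with probability at least 1 − alpha, which is the distribution-free guarantee that makes conformal methods attractive for uncertainty quantification.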

Finally, in chapter four, a Bayesian neural network is constructed using a large Ames mutagenicity dataset and evaluated on four different data splits based on source data, showing state-of-the-art performance. Explanations for predictions are generated by two methods – RDKit SimilarityMaps, based on molecular graph perturbation, and SHAP applied to the Bayesian neural network. These explanations can reproduce existing literature structural alerts for covalent DNA binding developed by Enoch et al. (2010); where applicable, the model often identifies the correct structural alert even when it assigns a negative label.
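SHAP approximates Shapley values efficiently; for intuition, exact Shapley values can be computed by enumerating coalitions when the feature vector is tiny. The two-bit "fingerprint" and linear scorer below are hypothetical stand-ins, not the thesis model:

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, x, baseline):
    """Exact Shapley values by coalition enumeration (feasible only for
    a handful of features).

    Features outside a coalition are held at their baseline value, a
    common way to represent 'absence' of a fingerprint bit.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for coalition in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = list(baseline)
                for j in coalition:
                    with_i[j] = x[j]
                without_i = list(with_i)
                with_i[i] = x[i]
                # Marginal contribution of feature i to this coalition.
                phi[i] += weight * (predict(with_i) - predict(without_i))
    return phi

# Hypothetical linear scorer over two fingerprint bits: bit 0 might encode
# a structural alert (large weight), bit 1 a benign feature (small weight).
score = lambda bits: 2.0 * bits[0] + 0.5 * bits[1]
print(shapley_values(score, [1, 1], [0, 0]))  # → [2.0, 0.5]
```

For a linear model the Shapley value of each feature is its coefficient times its deviation from baseline, so the alert bit receives the larger attribution – the same behaviour one looks for when checking that an explanation method highlights a known structural alert.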

Date

2023-07-14

Advisors

Goodman, Jonathan

Keywords

adverse outcome pathway, artificial intelligence, bayesian neural network, computational toxicology, explainable models, machine learning, molecular initiating event, next generation risk assessment, structural alerts, uncertainty quantification

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge

Sponsorship

Unilever, Clare College