Augmented Radiology in High Grade Serous Ovarian Carcinoma
Repository URI
Repository DOI
Change log
Authors
Abstract
Radiological imaging is at the centre of how modern medical care is delivered and provides a unique perspective on disease: it is acquired non-invasively, gives three-dimensional, spatially resolved information about disease, and, when images are captured sequentially, allows for temporal comparison. Unlocking the information contained within these images, however, relies on access to a radiologist. Their interpretation is time-consuming, largely qualitative, dependent on their level of training and experience, and prone to subjectivity.
Demand for radiological imaging outstrips the supply of radiologists needed to interpret it. As healthcare providers struggle to bridge this shortfall, artificial intelligence (AI) tools may provide a solution. Many organisations, including the US Food and Drug Administration (FDA), the UK Medicines and Healthcare products Regulatory Agency (MHRA), the Royal College of Radiologists, and the European Society of Radiology, are convinced that AI-based tools will transform how imaging departments function [1]. Such tools are already showing great promise in the research setting; however, their transition to clinical practice remains largely limited to a small number of pared-down decision-aid tools [2].
For AI-based image analysis tools to become truly integrated into clinical care and deliver on their undoubted promise, they require the trust and acceptance of two crucial stakeholder groups: clinicians and patients.
Currently, clinicians remain largely removed from the development and validation of AI-based medical image analysis tools, save for a passive role in the generation of training and testing datasets. Many groups developing them do so without clinician input, which can result in systemic errors or false assumptions obvious to a clinician being overlooked [3]. The process of evaluating the performance of these tools also happens without clinician input, relying on quantitative metrics which do not necessarily align with utility in a clinical setting. The results of these quantitative evaluations can also be opaque to clinician insight. These factors contribute to a lack of clinician trust in their performance. Consensus is building that greater active clinician involvement in the development and performance validation of AI-based medical image analysis tools is required for clinicians to accept oversight responsibility for their use in the clinic, and to expedite their translation into clinical practice [1]. Such involvement would also align with the principles of "Good Machine Learning Practice for Medical Device Development: Guiding Principles" [4]. How a clinician should formally be involved in these processes, however, is currently unclear.
For any successfully validated tool to achieve full clinical adoption, it must be accepted and trusted by patients [5]. There are numerous examples of promising new technologies which have struggled or been rejected due to a lack of public acceptance [6-10]. How patients will respond to the idea of an AI-based tool being involved in their care, particularly when its judgement informs decisions about treatment, remains largely unknown.
This PhD examines 1) the issues of clinician involvement in the development and validation of AI-based medical image analysis tools, 2) our patients' attitudes towards them, and 3) the unique insight they may give by deepening our understanding of disease. These topics are considered in the context of ovarian cancer, a complex, multi-site disease which is frequently diagnosed at a late stage, and for which limited improvement in survival outcomes has been made in recent decades.
Chapter 3 considers the clinical utility of a newly developed AI-based segmentation tool, which has modest performance by standard quantitative metrics. In the process of establishing its clinical utility, a novel physician-led framework for evaluating AI-based segmentations is proposed, which assesses the utility of an AI segmentation tool in a clinical context in both an independent and an assisting role.
Chapter 4 examines the perceptions of patients investigated for ovarian cancer towards having an AI-based tool involved in their care. Themes such as trust, privacy, reliability, and accountability are considered in the context of each patient's own diagnostic journey. The study highlights issues that patients feel should be considered when developing tools which contribute to their care, and makes recommendations on how to pursue patient-focused AI development.
With its performance successfully validated in Chapter 3, Chapter 5 leverages the segmentation tool to unravel the different spatial distributions of ovarian cancer across a large multi-centre dataset acquired during this PhD, with the aim of better understanding the significance of different volume-of-disease distribution patterns in ovarian cancer.
