
Theses - Public Health and Primary Care


Recent Submissions

Now showing 1 - 20 of 72
  • Item (Embargo)
    Enabling Universal Health Coverage in Sub-Saharan Africa: Implications and Learnings from Uganda
    Ifeagwu, Susan [0000-0002-7915-2765]
    Introduction
    Universal health coverage (UHC), defined by the World Health Organization as all individuals having access to the health services they need, of sufficient quality, without suffering financial hardship, is important for improving population health, particularly among vulnerable populations. Across Sub-Saharan Africa (SSA), advances towards UHC remain limited. This thesis aimed to strengthen the evidence base on UHC in SSA by focusing on Uganda and examining its readiness for a sustainable UHC model. Objectives included synthesising the existing knowledge base on health financing mechanisms for UHC in SSA, determining how UHC is framed within Ugandan national health policy documents, exploring the relationship between health insurance and sociodemographic factors, and investigating the perceptions and values of key stakeholders in UHC, particularly health care workers and policymakers in Uganda.
    Methods
    A mixed-methods approach combined quantitative and qualitative components: a systematic review of UHC health financing mechanisms in SSA, a content analysis of national health policy documents, and analysis of the 2016 Uganda Demographic and Health Survey. This was complemented by a stakeholder analysis of primary data at international, regional, and national levels that included an email survey, 30 semi-structured interviews, and a focus group discussion about stakeholder perceptions and values of UHC. Both the qualitative and quantitative analyses were guided by the Walt & Gilson (1994) Policy Triangle Framework, which frames policy as an interaction between content, context, processes, and actors.
    Results
    The systematic review revealed a high dependency on donor funding to support health care expenditure among SSA countries, and a need for an increased proportion of domestic financing and enhanced accountability measures to achieve UHC. Policy documents lacked the detailed budget and monitoring and evaluation plans needed to implement UHC. Demographic survey analysis showed that, among the small proportion (1.4%) of individuals with health insurance, coverage widened with increasing wealth and education levels. Findings of the Poisson regression analysis suggested a significant positive association between health service utilisation and age, marital status, wealth, and knowledge of health insurance. Stakeholders attributed a high level of importance to UHC in Uganda, with key recommendations centred on communication, organisation, power, and trust.
    Conclusion
    Summarising these diverse findings provides new insights into the complexities of delivering UHC in low-income settings. Despite the high importance attributed to UHC by all stakeholders, this research highlights the critical need for better communication of UHC targets to improve understanding across relevant communities and institutions. Furthermore, increased domestic financing, transparent budget plans and strong accountability measures are essential to ensure health coverage reaches lower socio-economic groups. Higher education levels were positively associated with health insurance coverage, indicating the importance of education in health promotion. Therefore, the next steps for implementation must include a communication strategy based on these findings to widely disseminate the existing UHC Roadmap to all stakeholder groups, from policy to grassroots level. Beyond an inclusive approach to reaching vulnerable population groups, strategies for the future national health insurance scheme need to incorporate the improvement of education services for empowerment. Analysis of stakeholder perceptions highlights that safeguarding trust among the population through transparency and honesty about metrics and progress, awareness of local cultural sensitivities, provision of quality and timely health services, and investment in health care workers through fair remuneration will be essential to the success of a multisectoral systems approach to UHC in Uganda.
  • Item (Embargo)
    Development of prediction models for cardiovascular disease risk in China
    Zhang, Dudan
    Background: Cardiovascular diseases (CVD) are the leading causes of death in China. Since population CVD incidence and risk factor levels vary considerably across regions in China, prevention of CVD that takes into account heterogeneity in risk factors and disease rates across China could be advantageous. Risk prediction models are an integral part of CVD prevention guidelines and can be used to help guide intervention. However, there is no model generalizable to the varying incidence, risk-factor levels, and composition of CVD (proportions of CHD and stroke) in different regions of China. Recalibration, an approach to adapt risk scores to local contemporary circumstances, would be of potential benefit in reflecting the diversity of CVD epidemiology across China. This thesis aimed to construct a CVD risk estimation system which is calibrated to CVD risk in different regions of China and can be regularly updated in the future in response to changing trends in CVD rates. Methods: The project involved several interlinked steps. First, I conducted a thorough review of the current epidemiological features of CVD across regions and their implications for the prevention of CVD in China, and reviewed and summarised the qualities of CVD risk scores recommended by primary prevention guidelines from global and Chinese perspectives, to provide a benchmark for new score development and reveal where improvements may be warranted specifically for the Chinese population. Secondly, I explored different methods for using aggregate and individual data for recalibration of risk prediction scores, to select an optimal approach for use in China. Different methods were illustrated and assessed using 185,222 males aged 40-69 years from UK Biobank (UKB) without previous CVD at baseline and with no missing data on measurements of risk predictors. Thirdly, I compared the performance of three risk scores recommended by national and international primary prevention guidelines (i.e., the World Health Organization [WHO] CVD risk charts, the Pooled Cohort Equations (PCE), and the Prediction for Atherosclerotic Cardiovascular Disease Risk in China (China-PAR) models) in 11,169 participants without CVD at the baseline survey (2008-2010) of the Fangshan cohort (FSC) in rural Beijing. The original and recalibrated models were assessed for calibration, discrimination, and reclassification. Finally, the WHO CVD score was recalibrated to China as a whole using aggregate-level country-, sex- and age-group-specific risk factor averages and mortality estimates. The China-specific score was then further recalibrated to each province using province-specific estimates. Risk factor values were obtained from the China Chronic Disease and Risk Factors Surveillance (CCDRFS), a nationally and provincially representative cross-sectional survey of 145,268 participants aged 40-80 years. The Chinese Center for Disease Control and Prevention (China CDC) provided mortality rates, estimated using the Global Burden of Diseases 2017 study methodology and data from published scientific reports, disease registries, and health system administrative records. The benefits of province-specific versus whole-country recalibration were explored by comparing calibration and potential public health impact. 
Furthermore, preliminary recalibration to urban- and rural-specific areas within each province was explored using mortality statistics from the national death surveillance points (DSP) system, which included 605 surveillance points covering more than 80% of deaths nationally. To avoid the limitation of predicting only fatal CVD risk and to develop a risk prediction system calibrated to estimate province-specific total (fatal and non-fatal) CVD risk, I further explored the possibility of developing a total (fatal plus non-fatal) CVD risk prediction system recalibrated to each province, respecting urban and rural differences in CVD rates. To explore the translation of mortality rates to total event rates, multipliers (the ratio of total incidence to mortality) were estimated using data on 512,726 participants, aged 30-79 years, enrolled from 5 urban and 5 rural areas in 2004-2008 in the China Kadoorie Biobank (CKB). Main results: There are significant regional variations in the mortality rate, composition (proportion of IHD and stroke), and epidemiological patterns of CVD in China. More importantly, the disparity in the epidemiology of CVD among Chinese provinces has widened over time. Meanwhile, CVD prevention recommendations depend increasingly on models of CVD risk. There is a significant unmet need for a better-calibrated risk score that takes into account the variations in risk across provinces and urban/rural environments. A general practical approach was compared with a model-specific replacement method for recalibration of risk models of various types, using various alternative sources of data. The example in UKB showed that the general regression method could effectively calibrate the risk model with aggregate data and offered greater simplicity when recalibration needs to be done in an age-specific way. For countries/regions with limited resources, the general regression method with aggregate data can facilitate regular updating of CVD risk scores to the CVD rates and risk factor levels of the contemporary population. Thus, the general regression approach with aggregate-level population data was chosen for recalibration of the CVD risk score in China. The validation in FSC showed that the WHO CVD risk score, PCE and China-PAR score all discriminated risk well, with C-indices of 0.735 (95% CI: 0.715, 0.755), 0.731 (95% CI: 0.712, 0.750) and 0.732 (95% CI: 0.712, 0.751) respectively, but varied in calibration. However, a simple recalibration significantly improved and equalised their calibration. This led to selection of the WHO risk score for further province-specific recalibration, owing to its simplicity and its adaptability to different stroke and CHD rates. After recalibration to China as a whole, with risk factor values from CCDRFS and mortality rates from China CDC, the China-specific WHO score performed well in China on average but still overestimated or underestimated mortality risk when used in individual provinces. Province-specific models provided more accurate prediction of CVD risk in each province. Accordingly, using the province-specific scores, the 10-year risk of CVD mortality for an individual with the same combination of risk factors differed substantially across provinces. Consequently, the proportions of individuals classified as high risk (5-year fatal CVD risk >5%) were strikingly more varied across provinces when using the province-specific rather than the China-specific models. 
For example, the proportion of men classified as high risk ranged from 9% to 39% across provinces. Based on the national DSP system, crude CVD mortality was 1.03- to 1.96-fold higher in rural than in urban areas within 28 of 31 provinces. Thus, a preliminary urban- and rural-specific risk prediction system within each province was developed, using the same recalibration methods, to further avoid over- or underestimation. Results in CKB showed that while mortality rates were higher in rural areas, estimated multipliers were lower compared with urban areas. Multipliers were more extreme for stroke than for CHD events, higher for women, and decreased with age. Further research is needed to assess the appropriate application of multipliers from CKB to all 31 provinces in the development of a province-specific total CVD risk estimation system. Conclusion: A province-specific CVD mortality risk estimation system that can be regularly recalibrated in the future using routinely available information was constructed. The application of the recalibrated CVD risk score should help accurately estimate CVD risk in individuals from China and assist policymakers in making more appropriate decisions about the allocation of preventative resources. Further studies are warranted to determine the value of urbanization in CVD risk prediction and the appropriate method to develop a total CVD risk estimation system with more reliable data.
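As a rough illustration of the two quantitative steps described in this abstract, the sketch below recalibrates a fatal-CVD risk score to province-level aggregate data and then scales the result with an incidence-to-mortality multiplier. This is not the thesis code: the coefficients, risk-factor means, event rate and multiplier are invented, and the "expected-versus-mean" anchoring and hazard-scale multiplier shown here are one common way such calculations are done, which may differ in detail from the methods actually used.

```python
# Illustrative sketch only: recalibrating a 10-year fatal-CVD risk score to a
# target province using aggregate data, then converting fatal risk to total
# (fatal + non-fatal) risk with a multiplier. All numbers are invented and are
# not taken from the thesis, CCDRFS, China CDC or CKB.
import numpy as np

# Assumed log-hazard-ratio coefficients of an existing score
# (here: age, systolic BP, total cholesterol, current smoking).
beta = np.array([0.07, 0.018, 0.25, 0.65])

def recalibrated_fatal_risk(x, x_bar_local, ten_year_fatal_rate_local):
    """Anchor the baseline survival so that a person at the local risk-factor
    means gets the local 10-year fatal event rate, then apply the individual's
    relative risk (a common aggregate-data recalibration approach)."""
    lp = beta @ x                       # individual's linear predictor
    lp_bar = beta @ x_bar_local         # linear predictor at local means
    s0 = (1.0 - ten_year_fatal_rate_local) ** np.exp(-lp_bar)
    return 1.0 - s0 ** np.exp(lp)

x = np.array([55, 150, 5.8, 1])        # hypothetical individual
x_bar = np.array([52, 138, 4.9, 0.3])  # hypothetical province means
fatal_risk = recalibrated_fatal_risk(x, x_bar, ten_year_fatal_rate_local=0.04)

# Multiplier = total CVD incidence / CVD mortality (e.g. estimated in CKB);
# applied here on the cumulative-hazard scale. 2.5 is purely illustrative.
multiplier = 2.5
total_risk = 1.0 - (1.0 - fatal_risk) ** multiplier

print(f"10-year fatal CVD risk: {fatal_risk:.1%}")
print(f"10-year total CVD risk: {total_risk:.1%}")
```

By construction, an individual at the local risk-factor means receives exactly the local event rate, which is what makes aggregate-level recalibration possible without individual-level data.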
  • Item (Open Access)
    Factors associated with variation in secondary prevention medication use after stroke
    Edwards, Duncan [0000-0003-1500-2108]
    Stroke is an important cause of death and disability. Once a person has had a stroke or transient ischaemic attack (TIA), they have a high chance of having another, and this risk can be reduced by taking stroke prevention medications. Medications to prevent recurrent stroke include statins to lower cholesterol, medications to lower blood pressure, and anti-platelets or anticoagulants to reduce the risk of clotting, yet many patients do not take them. In this thesis, I investigated which stroke and TIA survivors are missing out on stroke prevention medications. First, a systematic review and meta-ethnography combining 15 studies and including 350 patients demonstrated that stroke patients are diverse and can face many challenges which make medication-taking difficult. Patients have different beliefs about their medications, about stroke and about health, which influence whether they take their medications. Social and professional support can make medication-taking easier, and should be tailored to the individual's challenges and beliefs. Second, a systematic review of 41 observational studies (125,746 patients) identified 28 factors associated with altered levels of medication-taking. The strongest evidence linked medication-taking with three cardiovascular co-morbidities (dyslipidaemia, hypertension and diabetes), male sex, higher income, pre-stroke use of medications, stroke unit care and living with others. Anxiety and disability were linked with lower levels of medication-taking. The studies were highly varied, both in their populations and in how they defined taking medications. Additionally, some bias, especially reporting bias, was present. Third, a population-representative cross-section of 45,521 UK stroke or TIA patients aged 25 years and over was extracted from routine primary care data. This sample confirms that, when indicated, only 72% of stroke and TIA survivors use statins, 83% use antithrombotics and 80% use anticoagulants. Likewise, only 75% use antihypertensives, and 36% have documented blood pressure control. Cardiovascular co-morbidities were strongly associated with higher levels of medication use, whereas some mental health co-morbidities were associated with lower levels (specifically: alcohol problems, dementia, learning disability, and a history of psychosis). Women, patients at the extremes of age (younger than 55 and over 85), and patients with relatively infrequent GP practice consultations were all less likely to use preventative medications. This thesis provides a model to explain why certain groups of stroke and TIA survivors are less likely to take stroke prevention medications. It describes many of these groups and suggests policies and interventions to support them. It also suggests how and why research and clinical databases should include more variables, such as patient belief and social support factors, which are currently systematically disregarded.
  • Item (Open Access)
    Personalising Predictive Prevention of Cardiovascular Disease using Electronic Health Records and Genomics
    Chung, Ryan
    Cardiovascular diseases (CVD) remain one of the leading causes of morbidity and mortality in the world. The development of CVD risk prediction models has been pivotal in helping identify high-risk individuals who may benefit the most from appropriate treatment. In England, clinical guidelines provide recommendations on the appropriate care and treatments needed to manage CVD. In particular, the guidelines recommend using a CVD risk prediction model during a full formal risk assessment for all individuals between 40 and 75 years. To manage health resources, the guidelines also recommend systematically prioritising individuals for risk assessments using historical information already recorded in primary care records. However, there are limitations of existing guidelines. First, no dedicated risk model designed for prioritisation to risk assessments exists, nor is one currently recommended for use in primary care systems. In addition, it is unknown how implementing a fixed risk threshold for prioritisation would affect the effectiveness of formal CVD risk assessments. Therefore, the first aim of this thesis is to develop a novel prioritisation model and evaluate its public health impact, by comparing a fixed risk threshold against age- and sex-specific risk thresholds to determine whether individuals would be deemed at high risk. Second, the majority of research using genetic data (genomics) has focussed on improving risk model performance using genetic risk factors called polygenic risk scores (PRS). However, little is known as to whether PRS will benefit CVD prioritisation. Therefore, the second aim of this thesis is to investigate the potential benefits of PRS when used for both prioritisation and formal assessments. Third, a future healthcare system that incorporates widespread genetic profiling has the ability to personalise preventative medicine. Therefore, the third aim is to investigate and estimate the lifetime impact of novel PRS-based personalised invitation strategies. Key finding 1: By utilising all available primary care records in a large, national database, a novel prioritisation model, eHEART, was developed. We showed that prioritisation, combined with optimised age- and sex-specific risk thresholds, can be used to make formal CVD risk assessments more efficient. For example, a formal CVD risk assessment on all adults would identify 76% and 49% of future CVD events amongst men and women respectively. However, prioritisation with eHEART could identify 73% and 47% of future events amongst men and women respectively, with 19% and 42% reductions respectively in the number needed to screen to prevent one CVD event. The results suggest that optimising the risk thresholds used can lead to a more efficient CVD risk assessment programme, with the biggest improvements amongst younger individuals. Key finding 2: To understand how PRS could improve CVD risk prioritisation, we compared how prioritisation differed when using only primary care records, only age and PRS, or primary care records enhanced with PRS. The results showed that prioritising using primary care records can reduce the number needed to screen to prevent one CVD event (NNS), and enhancing it with PRS can further improve this whilst saving the same number of events. Prioritisation with only age and PRS should not be used in isolation due to poor performance. Key finding 3: The impact of using PRS to decide statin initiations across a lifetime is unknown. 
We devised four strategies, each with increasing levels of PRS implementation, for determining the age of first invitation to a formal risk assessment at which treatment would be allocated if the individual is at high risk. Compared to a population-wide invitation strategy followed by assessment using conventional CVD risk factors and PRS, a strategy using PRS to personalise the age of first invitation prior to an assessment led to a 43% and 39% reduction in the NNS in men and women respectively, whilst saving a similar number of events over a lifetime. Overall, this thesis has identified the potential benefits of prioritisation for CVD risk assessments, using existing primary care records within the framework of current guideline recommendations, as well as considering the potential that PRS may have in future healthcare systems.
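The number-needed-to-screen comparison summarised in this abstract can be made concrete with a small worked example. The figures and the simple definition of NNS used below are invented for illustration; they are not the UK Biobank/eHEART results, and the thesis may define and estimate NNS differently.

```python
# Hedged illustration of comparing the number needed to screen (NNS) to
# prevent one CVD event under blanket versus prioritised invitation.
# All numbers are invented, including the assumed treatment effect.
def nns(n_invited, events_in_screened, treatment_rrr=0.25):
    """Screen everyone invited; assume treatment prevents a fixed relative
    fraction (treatment_rrr) of the events occurring in those screened."""
    events_prevented = events_in_screened * treatment_rrr
    return n_invited / events_prevented

# Blanket invitation of all adults vs. a prioritised invitation that reaches
# fewer people but still captures most of the same future events.
nns_all         = nns(n_invited=100_000, events_in_screened=760)  # 76% of 1,000 events
nns_prioritised = nns(n_invited=60_000,  events_in_screened=730)  # 73% of 1,000 events

print(f"NNS, screen all:         {nns_all:.0f}")
print(f"NNS, prioritised:        {nns_prioritised:.0f}")
print(f"Relative NNS reduction:  {1 - nns_prioritised / nns_all:.0%}")
```

The point of the comparison is that a well-targeted invitation list lowers the denominator of effort (people screened) much faster than it lowers the numerator of benefit (events prevented).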
  • Item (Controlled Access)
    Immune Infiltrates in Breast Cancer: Clinical Significance from Histopathology to Prognosis
    Bernstein, Aaron
    Though breast cancer has been traditionally regarded as non-immunogenic, in recent years, evidence has increasingly shown that patient immune responses play a central role in prognosis. Overall, increased tumour infiltrating lymphocyte (TIL) counts are associated with better outcomes. However, the prognostic significance of TILs varies by TIL type, with CD8+ (cytotoxic T-cells) generally being associated with better outcomes, and both FOXP3+ (T-regulatory cells) and CD163+ (M2 macrophages) TILs being associated with worse outcomes. However, there is considerable heterogeneity in the prognostic literature of FOXP3+ and CD163+ TILs, with studies conflicting on both the direction and statistical significance of the association. Moreover, there is a paucity of information on the significance of CD20+ (B-cells) TILs. To better elucidate the individual prognostic associations of the above TILs, and study the prognostic interplay of each, this thesis aims to evaluate the association between breast cancer-specific survival (BCSS) and CD8+, FOXP3+, CD20+, and CD163+ TILs, both individually and in combination. To accomplish this, the first aim was to develop an algorithm capable of quickly and accurately scoring the TILs in the Breast Cancer Association Consortium’s (BCAC) tissue microarray (TMA) dataset. TMAs are cassettes designed to efficiently organise cylindrical tumour samples (cores) from a large number of patients, and stain their horizontal sections for a variety of targets. BCAC’s TMA dataset is composed of 137,181 core images from 18,088 patients, each stained for either CD8, FOXP3, CD20, or CD163. To score the BCAC dataset, I developed two separate machine learning algorithms, one based on the Random Forest and the other based in Halo, a proprietary digital pathology platform widely used in the clinic. Both models compare favourably with pathologist-generated overall CD8+ TIL counts, with Cohen’s weighted Kappa scores of 0.8 and 0.81 for the custom and Halo algorithms, respectively. However, due to substantial time restrictions in the PhD, development of the tissue segmentation (tumour, stroma, artefact, glass) component of the custom algorithm was cut short. As a result, the custom algorithm’s performance dropped markedly relative to the Halo algorithm, with tumour-specific kappas of 0.49 and 0.7, and stroma-specific kappas of 0.6 and 0.74, for the custom and Halo algorithms, respectively. Therefore, my focus shifted to the Halo algorithm, which alone, underwent pathologist-led training and validation across all markers and studies. During expert validation of Halo, two pathologists evaluated the algorithm’s TIL and tumour/stroma segmentation across 100 randomly selected images (25/marker). For each image, the pathologists reached consensus, passing or failing the TIL and tissue segmentation separately, based on the accuracy of the predictions (extent of under- and over-prediction, fit of segmentation masks to objects of interest, etc.). Ultimately, the TIL and tissue segmentation components of Halo passed in 98% and 85% of the images, respectively. Halo then underwent quantitative validation on tissue annotations for the 100-image set, receiving an average F1 score of 0.90 across all markers and tissue segmentation categories. With model development complete, I proceeded to my second aim and primary objective: the prognostication of each TIL marker. I began by scoring the entirety of the BCAC dataset with Halo. 
TIL scores were calculated in the form of compartment-specific percentages (i.e., the compartment area covered by TILs divided by the total compartment area). These compartments consisted of tumour- and stroma-specific TILs, as well as the overall TILs. Once scored, I merged the results with the available clinical data. Samples missing survival data, ER status, age, tumour grade, tumour diameter (mm), or number of metastasised nodes were then excluded. I conducted ER-stratified sensitivity analyses, establishing cutoffs for artefact percent, my primary QC metric, which measured the proportion of the total tissue area covered by damage, detritus, or non-specific staining. Ultimately, it was determined that cutoffs of 25% for ER+ samples and 95% for ER- samples produced the optimal balance of quality and sample size for each stratum. These cutoffs were utilised only in uni-metric survival analyses (Cox regression using only one TIL type in one compartment), as the multi-metric analyses suffered considerable missingness given that not every patient had IHC-stained TMAs for each marker. Beginning with the uni-metric analyses, CD8 had 7,845 ER+ samples and 2,885 ER- samples; FOXP3 had 7,830 ER+ samples and 2,785 ER- samples; CD20 had 8,070 ER+ samples and 2,834 ER- samples; and CD163 had 7,901 ER+ samples and 2,815 ER- samples. For the multi-metric analyses, there were 7,774 ER+ samples and 2,591 ER- samples. My ER-stratified uni-metric and multi-metric Cox regressions were then performed for each marker, TIL metric, and compartment combination (e.g., CD8 tumoural percent). For brevity, here I will only outline my major findings, which derived from the fully adjusted analyses (age, grade, TVC(grade) [TVC: time-varying coefficient, included to account for proportional hazards violations], tumour diameter, and number of metastasised nodes). I found statistically significant hazard ratios (HR) with protective effects across ER and compartment strata in my CD8 uni-metric analyses (overall percent: ER+ HR (95% CI) = 0.92 (0.87, 0.97), ER- HR = 0.89 (0.84, 0.95); tumoural percent: ER+ HR = 0.82 (0.74, 0.91), ER- HR = 0.85 (0.78, 0.93); stromal percent: ER+ HR = 0.94 (0.91, 0.98), ER- HR = 0.93 (0.89, 0.97)). For my multi-metric analyses, statistical significance was maintained across all ER- compartments, as well as ER+ tumoural percent (overall percent: ER- HR = 0.92 (0.86, 0.99); tumoural percent: ER+ HR = 0.89 (0.82, 0.98), ER- HR = 0.91 (0.82, 0.99); stromal percent: ER- HR = 0.95 (0.91, 0.99)). For FOXP3, in my uni-metric analyses, statistically significant protective effects were observed for both ER+ and ER- overall percent (ER+ HR = 0.63 (0.41, 0.98), ER- HR = 0.56 (0.38, 0.82)), as well as for ER- tumoural percent (HR = 0.56 (0.34, 0.93)) and ER- stromal percent (HR = 0.70 (0.53, 0.92)). However, all statistical significance was lost in the multi-metric analyses. For CD20, I found protective effects for all ER+ uni-metric analyses (overall percent: HR = 0.93 (0.90, 0.97); tumoural percent: HR = 0.84 (0.74, 0.95); stromal percent: HR = 0.94 (0.91, 0.97)). For multi-metric analyses, I found statistically significant protective effects for ER+ overall percent (HR = 0.96 (0.93, 0.99)) and ER+ stromal percent (HR = 0.96 (0.94, 0.99)). No statistically significant associations were found for ER- breast cancers. 
Finally, for CD163 statistically significant protective effects were found for all ER- unimetric (overall percent: HR = 0.96 (0.94, 0.98); tumoural percent: HR = 0.92 (0.88, 0.96); stromal percent: HR = 0.97 (0.95, 0.99)) and multimetric analyses (overall percent: HR = 0.97 (0.95, 0.99); tumoural percent: HR = 0.94 (0.90, 0.99); stromal percent: HR = 0.98 (0.96, 0.99)). No ER+ associations were statistically significant. Together, these results suggest that each marker is individually prognostically significant, with CD8 and CD20 being protective in ER+ breast cancers, and CD8, FOXP3, and CD163 being protective in ER- breast cancers. Furthermore, all markers maintain both the direction of association and the majority of their statistical significance when analysed in concert, with the exception of FOXP3.
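A minimal sketch of the two scoring quantities described in this abstract follows: compartment-specific TIL percentages and algorithm-pathologist agreement measured with Cohen's weighted kappa. The areas, the ordinal score bins and the choice of quadratic weights are assumptions for illustration, not the BCAC/Halo pipeline itself.

```python
# Illustrative sketch, not the thesis pipeline: compute TIL percentages per
# compartment and a weighted kappa between pathologist and algorithm scores.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def til_percent(til_area, compartment_area):
    """TIL score = area covered by TILs / total area of the compartment
    (tumour, stroma, or overall), expressed as a percentage."""
    return 100.0 * til_area / compartment_area

# Example core: hypothetical areas in mm^2 from a segmentation output.
tumour_til, tumour_total = 0.12, 1.50
stroma_til, stroma_total = 0.30, 1.10
overall = til_percent(tumour_til + stroma_til, tumour_total + stroma_total)
print(f"tumoural {til_percent(tumour_til, tumour_total):.1f}%, "
      f"stromal {til_percent(stroma_til, stroma_total):.1f}%, overall {overall:.1f}%")

# Agreement between algorithm and pathologist on ordinal TIL categories
# (e.g. 0: <1%, 1: 1-10%, 2: 10-40%, 3: >40%); quadratic weights penalise
# larger disagreements more heavily.
pathologist = np.array([0, 1, 1, 2, 3, 2, 0, 1, 3, 2])
algorithm   = np.array([0, 1, 2, 2, 3, 2, 0, 1, 3, 1])
print("weighted kappa:", round(cohen_kappa_score(pathologist, algorithm,
                                                 weights="quadratic"), 2))
```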
  • Item (Embargo)
    The impact of patient-clinician interactions on patients with systemic autoimmune rheumatic diseases
    Sloan, Melanie
    The patient-clinician relationship can have wide-ranging effects on patient mental health, healthcare behaviours and satisfaction with care, in addition to clinical outcomes. This relationship is especially important in chronic autoimmune rheumatic diseases, which remain incurable, necessitating a life-long need for regular medical support. The overall objective of this PhD was therefore to investigate the impacts of positive and negative medical interactions on patients with systemic lupus erythematosus and related diseases, in order to identify potential opportunities for improvement. A series of inter-linked studies employed complementary methodology to meet this objective, including: 1) an ethnographic analysis of the LUPUS UK forum to more deeply understand the patient group and their needs and priorities; 2) three mixed methods studies, the first to explore patient symptoms and medical experiences, the second to examine the impacts of the Covid-19 pandemic on healthcare and medical relationships, and the third to ascertain patient self-reported prevalence of neuropsychiatric symptoms, identification, and the impact of medical relationships and previous misdiagnoses; and 3) a randomised controlled trial that was adapted to become a longitudinal cohort to measure the effects of changes to care on medical relationships due to the Covid-19 pandemic. Mental health and wellbeing was measured by the 14-item validated Warwick Edinburgh mental wellbeing scale (WEMWBS). T-tests and correlations were used to explore differences and associations between various participant groupings, and between patient measures of mental health, care, and behaviours. Findings showed an overwhelming perception of invalidation in multiple areas of patient lives, including medically. There were significant associations between mental health and multiple measures of satisfaction with care. There was a positive correlation between satisfaction with life and satisfaction with medical care (r=0.41). However, patient perception of self-efficacy and control over disease had a higher correlation with life satisfaction (r=0.52), suggesting empowering patients and encouraging self-empowerment and peer support in addition to medical support are key elements to also be considered in clinician-patient communication. An unexpected finding was that healthcare-behaviours were only weakly correlated with medical relationship satisfaction measures. Reporting of mental health symptoms to clinicians was low (>50% rarely or never reported these symptoms) and treatment adherence was high (81% of the large-scale Covid-care study participants reported always adhering to treatment), although this was found to have no significant association with the perceived quality of the medical relationship. The Covid-19 pandemic predictably had a very negative impact on medical relationships and patient medical security. Medical security versus medical abandonment was a key theme identified from in-depth interviews. The studies identified the building blocks of medical security as: clinicians being quickly available, believing and validating patient-reported symptoms, and providing continuity of compassionate care. There was a high level of reported persisting psychological damage and medical distrust expressed by many study participants from their previous ‘Adverse Medical Experiences’. 
These were often accrued on lengthy and traumatic diagnostic journeys, during which clinicians showed a propensity to initially misdiagnose these patients with psychosomatic or mental health conditions. Psychosomatic or mental health misdiagnoses, and the lengthiest diagnostic journeys, were associated with reductions in long-term satisfaction with care and with life, medical security and mental health, although correlations with adverse long-term impacts on patient behaviours were weak or absent. Systemic autoimmune rheumatic disease patients had a high burden of neuropsychiatric symptoms, and concerningly high levels of under-reporting of these symptoms to clinicians. Clinician participants demonstrated compassion and a desire to improve care, but were hindered not just by time constraints but also by their under-estimation of the disease burden, the degree of under-reporting, and the long-term impact of their patients' frequent previous adverse medical experiences. The greater value attributed to objective over subjective data was both an over-arching theme throughout the thesis studies and a prevailing attitude to challenge more widely in medicine and medical academia. This encompassed patients' views that invisible, non-externally verifiable symptoms were treated less seriously than those with objective measurements, and the explicitly discussed physician desire for objective evidence. In addition, it extended to many clinicians and medical journals prioritising 'objective' quantitative data over qualitative research, and to my own challenges in overcoming perceptions, including my own, that my subjectivity as a patient needed to be constrained to ensure credibility and impact. This analysis of what is perceived to constitute a positive or negative medical relationship, and the resultant suggestions on how to build supportive medical relationships, will raise awareness of patient and clinician views and thus improve patient-clinician relationships. Particular focus is required to reduce – and improve medical support during – the often long and damaging diagnostic journeys. Valuing subjectivity, and listening to patients' experiences and self-evaluation, is key.
  • Item (Embargo)
    Identifying Observational and Causal Factors for Cardiovascular Diseases through Large-scale Cohorts
    Gaziano, Liam
    Background Starting in the 1940s, prospective studies like the Framingham Heart Study helped scientists generate new knowledge on risk factors for cardiovascular diseases. While these data sources helped set in motion the decline in age-adjusted cardiovascular disease rates seen over the last five decades, their size limited the kinds of research questions they could answer. Since then, larger data sources have emerged, some with genotyping on the order of hundreds of thousands of people, that have afforded immense statistical power. Here, I use multiple data sources containing more than 500,000 individuals, some with and some without genetic data, to identify novel insights into observational and causal risk factors for cardiovascular diseases. Objectives To use observational (non-genetic) epidemiological approaches to examine and compare risk factors for subtypes of stroke. To elucidate the observational and causal associations of kidney function with stroke and coronary heart disease (CHD). To perform Mendelian randomization (MR) using transcriptomic and proteomic data on CHD. To develop race- and sex-specific risk prediction models for subtypes of heart failure. Results Observational risk factor profiles for subarachnoid hemorrhage and intracerebral hemorrhage, two subtypes of hemorrhagic stroke, differed quantitatively, suggesting distinct pathogenesis. I found U-shaped observational associations between estimated glomerular filtration rate (eGFR) and CHD. Associations between genetically predicted eGFR and CHD displayed a threshold effect starting roughly around 75 mL/min/1.73 m2, highlighting the potential for reno-protective therapeutics, like sodium-glucose cotransporter 2 inhibitors, to prevent primary CHD events. I prioritized 582 genes/proteins within 275 genomic regions with possible causal relevance to CHD. I identified cilostazol, an inhibitor of phosphodiesterase 3A (PDE3A), as a repurposing opportunity for the primary prevention of CHD and a safer alternative to existing antiplatelets. Lastly, I developed race- and sex-specific risk prediction models for two subtypes of heart failure, defined by preserved (HFpEF) or reduced (HFrEF) ejection fraction. Conclusions The emergence of large-scale data sources, with and without genetics, has allowed for analyses that require immense power, leading to novel insights into cardiovascular diseases.
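The Mendelian randomization analyses summarised in this abstract combine variant-level associations into a causal estimate; a minimal sketch of the standard Wald-ratio / inverse-variance-weighted (IVW) estimator is shown below. The summary statistics are invented, and the thesis's transcriptome- and proteome-wide analyses may use additional or different MR estimators.

```python
# Illustrative two-sample MR sketch (not thesis code): per-variant Wald ratios
# combined by fixed-effect inverse-variance weighting. Summary statistics are
# invented for a hypothetical protein-CHD analysis.
import numpy as np

beta_exposure = np.array([0.12, 0.09, 0.15])     # SNP effects on protein level
beta_outcome  = np.array([0.030, 0.020, 0.041])  # SNP effects on CHD (log OR)
se_outcome    = np.array([0.010, 0.012, 0.011])  # SEs of the outcome effects

# Wald ratio per variant, then IVW combination across variants.
wald = beta_outcome / beta_exposure
weights = (beta_exposure / se_outcome) ** 2
ivw_estimate = np.sum(wald * weights) / np.sum(weights)
ivw_se = 1.0 / np.sqrt(np.sum(weights))

print(f"IVW causal estimate (log OR per unit of exposure): {ivw_estimate:.3f} "
      f"(SE {ivw_se:.3f})")
```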
  • Item (Open Access)
    Delivering a Screening Programme for Atrial Fibrillation: a mixed methods investigation
    Modi, Rakesh Narendra [0000-0001-9651-6690]
    Introduction Atrial fibrillation (AF) is an irregular rhythm of the heart that is associated with 30% of strokes. Substantial undiagnosed AF might be detectable by hand-held ECG devices, and treatment with anticoagulation might reduce the risk of stroke. For an AF screening programme to be endorsed by a national policy-making body, it needs to be proven that the programme causes more benefit (namely in stroke reduction) than harm. The SAFER trial is a trial of an AF screening programme designed to provide this evidence. Whether an AF screening programme provides more benefit than risk will depend on how it is delivered, and therefore many policy-making bodies require a plan for delivery. There is currently no such understanding of, or plan for, the delivery of an AF screening programme. The research question of this PhD is: how is the screening programme for atrial fibrillation within the SAFER trial delivered, and what recommendations can be made from this for the delivery of a screening programme for atrial fibrillation at a national scale? Methods I undertook a scoping review of the literature to synthesise detailed schematics for generic screening programmes and to identify activities and components that an AF screening programme could require. I also conducted a scoping review to locate the most relevant implementation theory(ies) to aid in the methods and analysis of a process evaluation of SAFER. I then undertook a process evaluation of SAFER. This included studies of which group of staff (general practice staff or central administrators) should undertake the majority of tasks, how participants should be supported, how staff can best manage and be managed in their roles, and how staff training can be optimised. I undertook consultations with 26 stakeholders to create a logic model, observed 43 hours of training and 16 patient-practitioner consultations, conducted 49 semi-structured interviews with practice and trial staff, collected 24 documents and emails for analysis, assessed 230 training evaluation forms, collected 270 practice staff survey responses, and collected online data on characteristics of 36 practices. I created a theory of intervention for a national-scale AF screening programme from the literature reviews and the process evaluation, and from this theory I provided a list of recommendations for the delivery of such a programme. Results I created detailed schematics for generic screening programmes and selected the Consolidated Framework for Implementation Research (CFIR) as the key implementation theory to aid the conduct and analysis of the process evaluation. I found that both general practice and the trial team were successful in eliciting high-quality ECG traces from participants in a remotely run AF screening programme, but that general practice staff might have been better at reaching underserved participants while the trial team were more consistent in performance. I found that training for staff was successful and well received. From practice surveys, over 30% of practices were already providing opportunistic screening for AF but nearly 20% of practices were not monitoring patients with AF annually. 
Combining primary and secondary data, I created theories of change for the programme that highlighted how practices needed to improve guideline compliant care of AF but were able to interpret and act on screening results, that trial staff required achievable targets and a nurturing environment to become competent and confident, that public information needed to be balanced by detailing both benefits and harms of screening, that results needed to be reported with as few indeterminate results as possible, that specific strategies for online and flexible training should be provided for staff, and that a positive public and staff experience were key to a functioning programme. I made some key recommendations for policymakers based on these findings. Conclusion I have provided theory and recommendations for the delivery of a national scale AF screening programme. I have shown that an evidence-based plan for delivery is of value and should be considered as a criterion before endorsing any screening programme. In the process, I have reported novel methods and approaches for future researchers.
  • Item (Embargo)
    Environmental Arsenic Exposure and Risk of Cardiovascular Disease
    Van Daalen, Kim Robin
    Background: Environmental exposure to inorganic arsenic (iAs), one of the most abundant elements in the earth's crust, is widely recognised as a major global health risk – millions of people are estimated to be chronically exposed to arsenic worldwide. Yet, compared to arsenic-related cancer and skin lesions, research on arsenic-related cardiometabolic disease, particularly at lower exposure levels, has been limited. This PhD thesis aims to investigate the epidemiological relationship between chronic environmental arsenic exposure and several sub-types of cardiometabolic disease, with a primary focus on cardiovascular disease (CVD), specifically myocardial infarction (MI), as well as diabetes mellitus (DM) and hypertension. Data sources: Firstly, to explore the relationship of toenail arsenic (including arsenic metabolites and metabolism markers) with MI, the Bangladesh Risk of Acute Vascular Events (BRAVE) study, a case-control study of 1,532 MI cases and 1,334 controls, was used. Secondly, to explore the dose-response relationship between arsenic and several CVD endpoints, a systematic review and two-stage dose-response meta-analyses were performed on 35 studies. Thirdly, to assess the association of environmental arsenic exposure with two major CVD risk factors, diabetes mellitus and hypertension, a systematic review and two-stage dose-response meta-analyses were performed on 71 studies. Lastly, to explore whether genetic variation may be at least partially responsible for inter-individual variation in arsenic-related cardiometabolic disease susceptibility, a narrative systematic review of 14 unique candidate gene-environment (cGxE) studies reporting on 650 single nucleotide polymorphisms (SNPs) in 145 genes was conducted. Main results: In BRAVE, MI was positively associated with higher toenail monomethylated arsenic (MMA) in μg/g, higher MMA% and a higher primary methylation index (PMI) in both categorical and continuous logistic regression models, and negatively associated with higher iAs% and a higher secondary methylation index (SMI). No statistically significant associations were found for total arsenic (tAs), iAs and dimethylated arsenic (DMA) in μg/g, nor for DMA%. Dose-response meta-analyses found consistent positive associations between water arsenic concentrations and coronary heart disease (CHD) (fatal, non-fatal, overall), non-fatal stroke, CVD (fatal, non-fatal, overall), hypertension, DM type 2, and gestational diabetes mellitus (GDM). Analyses of low water arsenic concentrations showed positive associations below 10 μg/l for several of the cardiometabolic outcomes studied. Of the 35 studies on CVD endpoints and 71 studies on hypertension or DM, only 4 (low-quality) studies reported statistically significant negative associations. In the narrative systematic review, 29 SNPs in 21 genes as well as 3 haplotypes (involved in e.g. arsenic metabolism, DNA damage repair, endothelial function, inflammation) were indicative of SNP-arsenic interactions associated with cardiometabolic outcomes. Conclusion: This PhD thesis presents further evidence that people exposed to chronic environmental arsenic pollution may be at greater risk of cardiovascular disease, hypertension, diabetes mellitus type 2 and gestational diabetes. Evidence is most conclusive at moderate and high levels of arsenic exposure (i.e., water arsenic > 100 μg/l). 
However, results at lower arsenic levels suggest that a potential downward revision of the World Health Organization (WHO) guidelines (10 μg/l) may be beneficial for health protection. Whilst studies on individual arsenic species and arsenic metabolism markers are less consistent, results in this thesis suggest a potential role for arsenic metabolism in the development of CVD – potentially particularly at lower arsenic levels. Various SNPs and genes have been suggested to be involved in the susceptibility to cardiometabolic disease, however, these results should be approached with appropriate caution due to the lack of reproducible findings, and the cGxE nature of the included studies. Overall, further long-term prospective studies of individual person data that assess arsenic through different media and include speciation analyses, combined with experimental, mechanistic and genetic studies, are needed to improve understanding of the observed relationship between arsenic and cardiometabolic disease, and the ways in which arsenic may disrupt cell physiology and promote pathophysiology.
  • Item (Open Access)
    Multimorbidity in Bangladesh: Burden and Correlates in the Bangladesh Longitudinal Investigation of Emerging Vascular and nonvascular Events (BELIEVE) Study
    Khan, Nusrat
    Background: Multimorbidity, now a major global health concern, is increasing at an alarming rate worldwide, including in South Asian countries. Non-communicable diseases (NCDs) account for the majority of current global mortality and, with an increasingly ageing population, NCD multimorbidity could overwhelm health systems worldwide. Although there is some evidence on determinants and risks of multimorbidity from western populations, this cannot be generalised to South Asian populations, where sociodemographics and lifestyles are quite different. Objectives: The main aims of this thesis are to 1) summarise the existing epidemiological evidence of multimorbidity in South Asia; 2) investigate the burden of multimorbidity and the cross-sectional association between multimorbidity and socioeconomic status; 3) identify the association between multimorbidity and adiposity measures; and 4) describe the association between multimorbidity and mental health. Methods: BELIEVE is a large-scale multicentre population-based prospective study from Bangladesh with a sample size of ~73,500, across three distinct sites covering urban, rural, and urban slum environments. BELIEVE has collected extensive baseline information on all participants' sociodemographic, economic, lifestyle and physical measures along with various health conditions. Therefore, it serves as a unique scientific resource to provide an indication of the burden and correlates of multimorbidity across a varied range of individuals in Bangladesh. Results: A systematic review demonstrated that there was a scarcity of published data on multimorbidity from South Asian countries. The retrieved literature indicated a wide range of prevalence estimates, reflecting different study samples, methods, and definitions of multimorbidity. The most widely used was the World Health Organization’s definition, namely “two or more diseases in an individual”; this definition was adopted for the BELIEVE study, with the focus on any two diseases from a total list of 10. Using this definition, the overall prevalence of multimorbidity in BELIEVE was 16.8%, with males having a higher prevalence (18%) than females (15%). In terms of site of residence, urban participants had the highest prevalence (18%), followed by 10% in the slum and 6% in the rural site. Cross-sectional analyses of the BELIEVE data revealed that multimorbidity was strongly associated with higher socioeconomic status based on the highest individual income (OR 2.05, 95% CI 1.71-2.47) and the highest wealth quintile (OR 2.38, 95% CI 2.19-2.60). The prevalence of overweight/obesity was very high (around 70%) in the multimorbid BELIEVE participants and there was a strong association between greater general and central measures of adiposity and increased likelihood of multimorbidity. Total body fat % (general adiposity) (OR 1.74, 95% CI 1.66-1.82) and waist circumference (central adiposity) (OR 1.70, 95% CI 1.65-1.75) showed the strongest associations with multimorbidity. Possible depression and possible anxiety affected an estimated 16.8% and 11.7% of the BELIEVE population respectively, with higher prevalence among those with multimorbidity. When considered separately, both possible depression and anxiety showed a moderate association with the risk of multimorbidity, but the odds ratio for multimorbidity (OR 1.89, 95% CI 1.51-2.36) was greater for participants with co-occurring probable depression and anxiety. 
Conclusion: The present analyses represent the largest cross-sectional study on multimorbidity in a South Asian population and focus on a diverse range of individuals. Results confirmed some previous associations with the risk of multimorbidity found in western populations, but also revealed some novel insights into the association of socioeconomic characteristics, adiposity measures and mental health status with multimorbidity that may be specific to Bangladesh. Owing to the cross-sectional nature of the study, causality cannot be assumed. Nevertheless, the findings may indicate certain subgroups that would benefit from targeted public health intervention to reduce NCD impact. It is hoped that the findings of this study will stimulate further longitudinal research allowing causal links to be determined and to help reduce the rising burden of multimorbidity in Bangladesh and other South Asian countries.
  • Item (Open Access)
    Understanding missed diagnostic opportunities in bladder and kidney cancer
    Zhou, Yin [0000-0002-8815-1457]
    Background Bladder and kidney cancer are among the ten most commonly diagnosed cancers in the UK every year. Delayed diagnosis is associated with poorer survival and patient-reported outcomes. Bladder and kidney cancers pose diagnostic challenges as presenting symptoms such as haematuria and lower urinary tract symptoms are common, and can be due to benign causes. Women with bladder cancer, in particular, are more likely to experience diagnostic delay and worse survival than men. Kidney cancer is expected to be among the cancers with the fastest increasing incidence over the next 20 years. It is therefore imperative to understand how and why delays and suboptimal care occur during the diagnostic process, in order to improve outcomes. Prior work has indicated that missed diagnostic opportunities (MDOs) exist for a range of diseases, including cancer, and may contribute to diagnostic delay. How often and why MDOs might occur in bladder and kidney cancer is unknown. My PhD seeks to address this evidence gap. Aim In this thesis, I combined frameworks from the early cancer diagnosis (the Pathways to Treatment model) and diagnostic research (the SaferDx model) fields to underpin the design, analysis and interpretation of five core studies (a systematic review, three quantitative studies and one mixed-methods study), with the aim of better understanding diagnostic delay and MDOs in bladder and kidney cancer. Methods First, I performed a systematic review to examine the factors associated with the quality of the diagnostic process in patients with bladder and kidney cancer. This review highlighted evidence gaps in the definition of timeliness, the quality of assessment of lower-risk urological symptoms (other than haematuria), and the clinical and system factors that might contribute to suboptimal quality of the diagnostic process. Next, I performed three quantitative studies (Studies 1 to 3) using linked primary care, secondary care imaging and cancer registry data for 5,322 patients diagnosed with bladder or kidney cancer between January 2012 and December 2015. In Studies 1 and 2, I examined the potential ‘diagnostic window’ of bladder and kidney cancer following relevant blood or imaging tests, using a novel form of Poisson regression modelling extending Joinpoint regression analysis, developed with advice from a medical statistician. Next, I examined signals of MDOs in bladder and kidney cancer patients who met the 2005 National Institute for Health and Care Excellence (NICE) guidelines for fast-track referral. I operationalised four NICE-qualifying presentation scenarios (visible haematuria, non-visible haematuria, recurrent UTIs and abdominal mass), and examined the predictors of a prolonged interval from qualifying for NICE referral to cancer diagnosis (NICE-DI) using multivariable logistic regression (Study 3). Lastly, I performed a prospective, mixed-methods study (Study 4) in 940 symptomatic patients from 9 general practices in the East of England. Study 4 consisted of a case note review (n=940) and qualitative patient interviews (n=15). I produced descriptive statistics and performed logistic regression to examine predictors of having a referral. I then analysed the qualitative interviews using thematic analysis and used mixed-methods synthesis to provide an overarching understanding of the factors contributing to MDOs. Results The diagnostic window of bladder and kidney cancer was about 6-8 months following an abnormal blood test or a relevant imaging test. 
This window represents the time frame during which a cancer diagnosis could potentially be achieved, and indicates that more timely diagnosis is possible in at least some patients. My subsequent findings demonstrated that the following aspects of the diagnostic process are likely to harbour MDOs: i. Test delays, highlighting the need to improve access and reduce the time-to-testing interval: clinical and disease factors were the most likely to contribute to pre-analytical test delays, including having kidney cancer (owing to presentation with less specific symptoms) rather than bladder cancer, being diagnosed at a stage other than stage 4, presenting with UTI symptoms and not having haematuria pre-diagnosis. Post-analytical test delays might be due to the lack of, or poor, communication of test results, resulting in the failure to close a diagnostic loop. ii. Variations in time to referral and diagnosis, leading to inequality in diagnostic timeliness between patient subgroups: women and patients with recurrent UTIs were at particular risk of non-referral and a prolonged primary care interval, compared with men and patients with other NICE-qualifying clinical presentations (any type of haematuria or abdominal mass). Patients with lower-risk urological symptoms (non-visible haematuria and recurrent UTIs) had 1.5 and 3 times higher odds respectively of experiencing diagnostic delay, compared to patients with visible haematuria. The mechanisms for MDOs involve process breakdowns during the initial patient-doctor consultation and during patient follow-up, which might contribute to diagnostic delays. These include inadequate history taking and examination, clinicians assuming urological symptoms to be UTIs, a tendency for GPs to treat recurrent and persistent UTIs without review, lack of communication of test results, and suboptimal follow-up due to results being given by receptionists or not at all. Conclusions My thesis demonstrates that MDOs exist in the primary care evaluation and diagnostic process for bladder and kidney cancer. In particular, women and patients with recurrent UTIs are most at risk of experiencing suboptimal care in the primary care diagnostic process. MDOs secondary to process breakdowns during initial diagnostic assessment, and during the follow-up of tests, most likely contribute to the observed diagnostic delays. System factors such as rigid consultation norms may influence these process breakdowns. Targeting patient groups with non-alarm urological symptoms, and improving system- and clinician-related factors such as access to and availability of tests and clinician willingness to test, may reduce MDOs in patients with bladder and kidney cancer.
  • Item (Open Access)
    Inference frameworks in computational biology: from protein-protein interaction networks using machine learning to carbon footprint estimation.
    Lannelongue, Loic
    Protein-protein interactions (PPIs) are essential to understanding biological pathways and their roles in development and disease. Computational tools have been successful at predicting PPIs in silico, but the lack of consistent and reliable frameworks for this task has led to network models that are difficult to compare and, overall, a low level of trust in the predicted PPIs. To better understand the underlying mechanisms underpinning these models, I designed B4PPI, an open-source framework for benchmarking that accounts for a range of biological and statistical pitfalls while facilitating reproducibility. I use B4PPI to shed light on the impact of network topology and understand how different algorithms deal with highly connected proteins. By studying functional genomics-based and sequence- based models (two of the most popular approaches) on human PPIs, I show their complementarity as the former performs best on lone proteins while the latter specialises in interactions involving hubs. I also show that algorithm design has little impact on performance with functional genomic data. I replicate these results between human and yeast data and demonstrate that models using functional genomics are better suited to PPI prediction across species. These analyses also highlight disparities in computing resources needed to train the prediction tools; some models run within seconds while others need hours. Longer runtimes require more energy and are responsible for more greenhouse gas emissions. Being able to quantify this impact is crucial as climate change profoundly affects nearly all aspects of life on earth, including human societies, economies and health. Various human activities are responsible for significant greenhouse gas emissions, including data centres and other sources of large-scale computation. Although many important scientific milestones have been achieved thanks to the development of high-performance computing, the resultant environmental impact has been underappreciated. I present a methodological framework to estimate the carbon footprint of any computational task in a standardised and reliable way, and metrics to contextualise greenhouse gas emissions are defined. I develop a freely available online tool, Green Algorithms, which enables a user to estimate and report the carbon footprint of their computation (available at www.green-algorithms.org). The tool easily integrates with computational processes as it requires minimal information and does not interfere with existing code while also accounting for a broad range of hardware configurations. Finally, I quantify the greenhouse gas emissions of algorithms used for particle physics simulations, weather forecasts, natural language processing and a wide range of bioinformatic tools. With rapidly increasing amounts of sequence and functional genomics data, this work on protein interactions provides a systematic foundation for future construction, comparison and application of PPI networks. It also integrates essential metrics of environmental efficiency developed by the Green Algorithms project, a simple generalisable framework and a freely available tool to quantify the carbon footprint of nearly any computation. This work also elucidates the carbon footprint of common analyses in bioinformatics and provides recommendations to empower scientists to move toward greener research.
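The carbon footprint estimation described in this abstract follows the general logic of multiplying the energy drawn by a computation by the carbon intensity of the electricity consumed; a minimal sketch of that calculation is given below. The power-draw, PUE and carbon-intensity values are illustrative placeholders rather than the parameters of the published Green Algorithms tool, which should be used for real estimates (www.green-algorithms.org).

```python
# Illustrative sketch of a compute carbon-footprint estimate: core and memory
# power, scaled by data-centre overhead (PUE) and the carbon intensity of the
# local electricity mix. Constants here are placeholders, not the tool's values.
def carbon_footprint_gco2e(runtime_h, n_cores, power_per_core_w, core_usage,
                           memory_gb, power_per_gb_w=0.37,
                           pue=1.67, carbon_intensity_gco2e_per_kwh=300.0):
    power_w = n_cores * power_per_core_w * core_usage + memory_gb * power_per_gb_w
    energy_kwh = runtime_h * power_w * pue / 1000.0
    return energy_kwh * carbon_intensity_gco2e_per_kwh

# E.g. a 12-hour training run on 8 CPU cores with 64 GB of memory.
footprint = carbon_footprint_gco2e(runtime_h=12, n_cores=8,
                                   power_per_core_w=12.0, core_usage=1.0,
                                   memory_gb=64)
print(f"~{footprint:.0f} gCO2e (about {footprint / 1000:.2f} kgCO2e)")
```

The same structure explains why identical code can have very different footprints depending on where and on what hardware it runs: runtime, hardware power draw, PUE and grid carbon intensity all enter multiplicatively.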
  • ItemOpen Access
    Towards an Understanding of Sedentary Time and Physical Activity in Older Adults: Informing the Development of Future Behavioural Interventions
    Yerrakalva, Dharani
    The World Health Organization reports that up to five million deaths a year could be averted if the global population were more active and less sedentary, and describes these two behaviours as key to healthy ageing. Increases in physical activity and reductions in sedentary time are associated with reduced incidence of a number of age-related conditions such as cardiovascular disease, type 2 diabetes, and cancers. Despite the benefits of activity and low sedentary time, older adults are not meeting current recommendations. Although interventions have been developed to reduce sedentary time and increase physical activity, sustained changes have not been achieved in older adults. Older adults have demonstrated the capacity to change in trial settings over the short term. Therefore, they have the ability to choose to modify their activity levels (autonomy), can be motivated and are capable of succeeding in change (agency). However, it is also clear that there are barriers to longer-term change that we need to overcome. There are three key gaps in our understanding of the role that physical activity and sedentary time play in the health of older adults. Firstly, few studies have examined the prospective associations of physical activity and sedentary time with muscle mass indices, physical function and quality of life, three key components of healthy ageing. Importantly, the epidemiological literature thus far has largely relied on cross-sectional data and self-report measures of activity, has neglected to examine the whole spectrum of activity intensities (e.g. sedentary time and light physical activity (LPA)), and has overlooked older adults. Secondly, there are few data on the correlates of changes in sedentary time, which could contribute to greater specificity in intervention development and delivery, including characterising context-specific behaviours which could be targeted by future interventions. Thirdly, given the nature of the problem, scalable digital interventions are likely to be required, but the limited evidence on the effectiveness of mobile app interventions in facilitating physical behaviour change has not been synthesised. Given that smartphone use is already relatively common and increasing among older adults, and that apps have a larger potential reach than traditional health professional-delivered interventions, a population-based approach that includes apps to initiate small changes at the individual level may be a cost-effective way to achieve population impact. Addressing these gaps in knowledge will help inform the development of more effective intervention strategies. I conducted five complementary studies to address these key gaps and to inform the development and evaluation of future interventions among older adults. In these five studies, I (i) quantified the bidirectional associations of change in sedentary time and physical activity with lean muscle mass indices (Study 1), physical function (Study 2) and health-related quality of life (Study 3) in older adults; (ii) described correlates of changes in sedentary time among older adults (Study 4); and (iii) conducted a systematic review and meta-analysis of trials of the effects of mobile app interventions on sedentary time, physical activity, and fitness among older adults (Study 5). In the first four studies, I examined these associations in participants from the population-based EPIC-Norfolk cohort study. Physical activity and sedentary time were assessed using accelerometers.
Physical function measures included hand grip strength, usual walking speed and chair stand speed. Health-related quality of life (Hr-QoL) was measured using EuroQol-5D (EQ-5D) questionnaires, and muscle mass indices using DEXA (dual-energy X-ray absorptiometry). Demographic factors assessed included sex, age, employment status, educational level, smoking status, BMI, occupational classification, and urban-rural status; behavioural factors included housework, gardening, cycling, walking, dog walking, TV/video viewing, computer use, newspaper reading, book reading, radio listening, and transport mode. In Study 5, I systematically searched five electronic databases for trials investigating the effects of mobile health app interventions on sedentary time, physical activity and fitness among community-dwelling older adults. I calculated pooled standardised mean differences in these outcomes between intervention and control groups after the intervention period. Broadly, I found positive bidirectional associations of physical activity with physical function, Hr-QoL and muscle mass indices, and negative bidirectional associations of sedentary time variables with physical function, Hr-QoL and muscle mass indices. I also found that individuals in specific sub-groups (older, male, higher BMI) and those who differentially participate in certain behaviours (less gardening, less walking and more television viewing) increased their sedentary time at a higher rate than others. Finally, I report that mobile health app interventions may be associated with reductions in sedentary time, increases in physical activity, and increases in fitness in trials ≤3 months, and with increases in physical activity in trials ≥6 months. I conclude that (i) light physical activity, total sedentary time and prolonged sedentary bout time ought to be considered as target behaviours to change, (ii) certain context-specific activities might be more effective in eliciting change, (iii) app-based interventions should be explored in older adults, as our study highlights their potential efficacy but also a lack of large RCTs, (iv) particular behavioural change techniques (BCTs) appear to be more effective in app-based interventions and therefore should be considered for inclusion in future interventions, (v) motivational messaging about muscle mass/physical function/Hr-QoL benefits should be considered for inclusion, and (vi) muscle mass, physical function and Hr-QoL indices should be included as secondary outcomes in future RCTs and cost-effectiveness analyses so that we do not underestimate intervention value. Altogether, this thesis presents a more complete picture of physical behaviours in terms of correlates, important outcomes and emerging technologies. Taken together, these studies strengthen the case for physical behaviour interventions and highlight important information which needs to be considered early in future intervention design.
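As a worked illustration of the pooling step in Study 5, the sketch below computes a fixed-effect, inverse-variance pooled standardised mean difference from three hypothetical trial summaries; the trial numbers are invented, and the thesis's actual meta-analytic model may have differed (for example, by using a random-effects approach).

```python
import numpy as np

# Hypothetical trial summaries: (mean_int, sd_int, n_int, mean_ctrl, sd_ctrl, n_ctrl),
# e.g. change in daily moderate-to-vigorous physical activity (minutes/day).
trials = [
    (12.0, 20.0, 45, 3.0, 18.0, 47),
    ( 8.0, 25.0, 60, 1.0, 24.0, 58),
    (15.0, 22.0, 30, 5.0, 21.0, 33),
]

def smd(m1, s1, n1, m2, s2, n2):
    """Standardised mean difference (Cohen's d) and its approximate variance."""
    sd_pooled = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sd_pooled
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return d, var_d

effects = np.array([smd(*t) for t in trials])
weights = 1.0 / effects[:, 1]                      # inverse-variance weights
pooled = np.sum(weights * effects[:, 0]) / np.sum(weights)
se = np.sqrt(1.0 / np.sum(weights))
print(f"Pooled SMD = {pooled:.2f} (95% CI {pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```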
  • ItemEmbargo
    Care Needs, Social Wellbeing, and Health and Social Services Use in Older Age
    Assaad, Sarah; Assaad, Sarah [0000-0002-8104-1546]
    Background. Over the last hundred years, the increase in longevity, first regarded as one of the greatest successes of humankind, has come to be perceived as a global challenge with substantial and changing implications for societies, economies, and health care systems. To respond to this challenge, a first step is to understand the change in health and social care needs in older age to inform policy and practice. The Cambridge City over-75s Cohort (CC75C) study (1985-2015), one of the largest and longest-running population-based studies of the oldest old (75+ years at recruitment), provides a rare opportunity to investigate the patterns of change in physical, mental, and social health profiles and health and social services use in older age. Objectives. The project aimed to (1) summarise the evidence on the link between social wellbeing and outcomes of interest (mortality and health/social services use), (2) derive a measure of social wellbeing and describe its longitudinal change, (3) describe the change in health profiles and health/social services use over the 10 waves of follow-up, (4) determine the longitudinal associations between social wellbeing and outcomes, and finally (5) provide an overview and evaluation of health service use models with respect to the results. Methods. A literature review and a systematic review of reviews were carried out to summarise the evidence on social wellbeing in older age with respect to mortality and health/social services use. The derivation of a social wellbeing index (SWI) was based on a standardisation method using data pertaining to four social dimensions: relationships, network, support, and participation. Longitudinal descriptive and inferential analyses were conducted. The latter were based on marginal models (generalised estimating equations) and time-to-event models (Cox regression and competing risks analyses), depending on study outcomes. Missing data were investigated, described, and adjusted for using different methods including multiple imputation and inverse probability weighting. Results. The review of the literature showed that lower social support, having no spouse, or living alone were associated with a higher risk of re-hospitalisation and emergency department visits, with some variation in patterns. Health service use models ranged from broad (for example, the Andersen behavioural model) to specific (for example, a post-hip fracture rehabilitative care model) and focused on healthcare services rather than social care services or both. The analyses revealed that while care needs and health/social services use increased, social wellbeing decreased as the cohort aged. Furthermore, a higher SWI was significantly associated with a decreased risk of mortality, use of social care services, and residential care home stay, and with a more recent last GP visit. Conclusion. 
This body of work provides (1) a higher-level synthesis of the evidence on social wellbeing in older age, (2) a first longitudinal social wellbeing index based on common social dimensions with details on its conceptualisation and derivation for replication and benchmarking purposes, (3) empirical evidence on the change in patterns of care needs and health/social services use in older age using a rare longitudinal study, (4) robust quantitative longitudinal evidence on the protective effect of social wellbeing against mortality risk and health/social services use, and finally (5) an overview and critique of health service use models suggesting the incorporation of social health predictors and social care service use outcomes. Implications for research, practice, and policy are discussed.
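The standardise-and-combine logic behind an index of this kind can be sketched as follows; the dimension scores, scaling and equal weighting are illustrative assumptions and do not reproduce the CC75C items or the exact derivation used in the thesis.

```python
import pandas as pd

# Hypothetical scores on the four social dimensions for four participants.
df = pd.DataFrame({
    "relationships": [3, 5, 2, 4],
    "network":       [10, 6, 4, 8],
    "support":       [2, 3, 1, 3],
    "participation": [5, 7, 2, 6],
})

# Standardise each dimension (z-scores), then average into a single index
# so that every dimension contributes on a comparable scale.
z = (df - df.mean()) / df.std(ddof=0)
df["SWI"] = z.mean(axis=1)
print(df.round(2))
```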
  • ItemOpen Access
    The global cardiovascular burden of excess salt intake
    Fahimi, Saman
    Despite the adverse impact of excess salt intake on population health, reliable global salt intake data were not available, with limitations of coverage, time era, representativeness, comparability and potential heterogeneity by age and sex. The overall purpose of the work presented in this thesis was to provide, for the first time, the best estimates of the national (187 countries), regional (21 regions) and global burden of cardiovascular disease attributable to higher than optimal salt intakes in 1990 and 2010 by systematically retrieving the best available evidence. First, I reviewed and critically appraised the scientific evidence on the link between excess salt intake and health outcomes, including its relationship with high blood pressure, gastric cancer, osteoporosis, and renal disease. After that, existing controversies about the impact of excess salt intake on health were discussed; to do so, a recent publication that questioned the health benefits of population salt reduction was critically evaluated. In the next part, various terminologies for defining the exposure to salt and their strengths and limitations were discussed. In the following chapter, using a multi-disciplinary approach, an optimal level for salt intake was suggested. Systematic review and analysis of 24-hour urinary sodium excretion and dietary surveys worldwide were the bases of the next chapter. Data from 247 surveys conducted from 1980 onwards, including 143 using 24-h urinary sodium and 104 diet-based surveys, representing 66 countries and 74.1% of the world population, were used. Using a Bayesian hierarchical method, national, regional, and global average sodium intakes and their associated uncertainties in 1990 and 2010 were estimated. Mean global salt intake in 2010 was 10 g/d, twice the WHO recommendation of 5 g/d. Substantial heterogeneity was seen by country and sex. Mean intakes of adult men and women in 97% and 95% of countries were >5 g/d. The highest intakes were seen in Kazakhstan (15.2 g/d), Mauritius (14.2) and Uzbekistan (14.0); and the lowest in Kenya (3.8), Malawi (3.8) and Rwanda (4.0). Little variation was seen by age. From 1990 to 2010, global salt intake increased by 315 mg/d; it increased by >250 mg/d in 83 countries, and decreased by >250 mg/d in only 15 countries. The next chapter was dedicated to the estimation of national, regional and global mortality attributable to excess salt intake. First, the relationship of salt intake with blood pressure was quantified by reviewing the literature and performing new meta-analyses of sodium reduction randomised trials. After that, changes in national, regional, and global levels of blood pressure due to reductions in salt intake (including a hypothetical situation in which salt intake was reduced to the optimal level) were estimated. Finally, by taking into account the mortality associated with different blood pressure levels, the reductions in national, regional, and global cardiovascular mortality that would be gained by reducing population salt intake levels were estimated. An estimated 1.38 million (95% CI 0.9, 1.5 million) cardiovascular deaths were attributable to excess salt in 2010: 45% due to coronary heart disease, 46% to stroke, and 9% to other cardiovascular disease. 55% of these deaths were in men, and more than 80% were in low- and middle-income countries. 
Among the top 30 most populous nations, the highest mortality due to excess salt intake was seen in Ukraine (1,163 deaths per million adult population), Russia (1,006), and China (505); the highest proportional mortality was seen in Thailand (18.5% of all CVD deaths attributable to excess salt), China (17.6%), and Korea (17.4%). The final two chapters focused on salt reduction in Iran, where a national salt reduction plan was proposed and discussed.
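The estimation chain described in this abstract (salt intake to blood pressure to cardiovascular mortality) can be illustrated with a stylised calculation; every input value below is an assumption chosen for illustration, not an estimate taken from the thesis.

```python
# Illustrative-only inputs; the thesis derived such quantities from new
# meta-analyses and country-specific data.
current_intake_g = 10.0   # mean salt intake (g/day)
optimal_intake_g = 5.0    # assumed optimal intake (g/day)
sbp_per_g = 1.0           # assumed fall in systolic BP (mmHg) per 1 g/day salt reduction
rr_per_10mmhg = 0.75      # assumed relative risk of CVD death per 10 mmHg lower SBP
cvd_deaths = 100_000      # annual CVD deaths in a hypothetical population

# Step 1: blood-pressure change if salt intake fell to the optimal level.
delta_sbp = (current_intake_g - optimal_intake_g) * sbp_per_g

# Step 2: proportional reduction in CVD mortality implied by that change,
# assuming a log-linear association between SBP and CVD death.
rr = rr_per_10mmhg ** (delta_sbp / 10.0)
attributable_fraction = 1.0 - rr

print(f"SBP change: {delta_sbp:.1f} mmHg")
print(f"Attributable fraction: {attributable_fraction:.1%}")
print(f"Attributable deaths: {attributable_fraction * cvd_deaths:,.0f}")
```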
  • ItemOpen Access
    The Value of a Person-Centred Approach to Intervention Format and Delivery in Facilitating Patient-Led Identification, Expression and the Addressing of Unmet Support Needs in Patients with Long-Term Conditions
    Gardener, Anita; Gardener, Anita [0000-0002-8064-3780]
    Background. Strategy documents recommend a patient-led approach to identifying and addressing the unmet support needs of patients with long-term conditions. Further recommendations advocate an intervention model to systematically assess need, based around the use of a needs-assessment questionnaire or prompt completed with, or by, the patient, and a patient-HCP conversation. However, despite enthusiasm for these interventions within the qualitative literature, their usefulness in supporting a patient-led approach remains unclear, and there has been limited attention to exploring the case for an alternative person-centred approach to intervention format and delivery. Aim. The aim of this thesis is to critically analyse the nature and usefulness of the Systematic Needs Assessment intervention model as a means to enable a patient-led approach to identifying, expressing and addressing unmet support need, and to make the case for an alternative person-centred approach to intervention format and delivery. In addition, the thesis aims to investigate whether, and how, this alternative approach can support patient-led identification and expression of their unmet support needs in practice through the exploration of an ‘exemplar’: the Support Needs Approach for Patients (SNAP), a five-stage approach to delivering person-centred care underpinned by the SNAP Tool. Methods. A mixed methods approach was adopted involving three stages: 1) a thematic synthesis of the relevant qualitative literature; 2) a mixed methods study to assess the face, content and criterion validity of the SNAP Tool; 3) a qualitative study to explore the use of SNAP in clinical practice. Results. The thematic synthesis identified that interventions based around the systematic assessment of patient need tend to support enhanced patient involvement in an HCP-led approach to identifying and addressing support need, rather than a more person-centred, patient-led process. Findings also suggested this was a function of intervention characteristics that emphasised the HCP role (e.g. use of instruments designed to measure symptoms). In contrast, elements of a patient-led approach were evident where interventions incorporated features orientated towards person-centred care and support needs (rather than symptoms). Together, these limitations exposed a weakness in the evidence base of existing interventions and lent support to the exploration of an alternative person-centred approach to intervention format and delivery. Consideration of this alternative person-centred approach, via an exemplar intervention (SNAP), found that SNAP had value in clinical practice and was able to support a person-centred, patient-led process. Validity testing of the SNAP Tool found that it has good face, content and criterion validity. The qualitative investigation of SNAP further identified that, when delivered as intended, SNAP operationalised person-centred care, thereby enabling patient-led identification, expression, and addressing of their unmet support needs. Conclusion. This thesis found that interventions based on a Systematic Needs Assessment intervention model are orientated to supporting enhanced patient involvement in an HCP-led approach to identifying and addressing patient support need, rather than a patient-led approach. In contrast, SNAP provides an alternative person-centred approach to intervention format and delivery that can directly enable patient-led identification, expression, and addressing of their unmet support needs.
  • ItemEmbargo
    Optimising Cardiovascular Disease Risk Assessment: Application of Dynamic Prediction Tools and Risk Stratification Strategies Using Electronic Health Records
    Xu, Zhe; Xu, Zhe [0000-0003-1519-6707]
    Cardiovascular diseases (CVDs) remain the leading cause of morbidity and mortality worldwide. Identifying individuals who are at higher risk of CVD is fundamental for effectively implementing prevention strategies with limited health care resources and subsequently reducing the burden of CVD. For this purpose, numerous prognostic cardiovascular risk prediction models have been developed in populations from different regions over the past two decades. However, existing risk prediction models have limitations. First, they are mostly based on single measurements of risk factors, and there is limited evidence quantifying the value of longitudinal risk predictor measures. Therefore, the first aim of this thesis is to evaluate the role of repeated risk factor measures in CVD risk prediction, with a focus on people with type 2 diabetes, who are regularly monitored and have more measurements. Second, few models have considered the effect of post-baseline statin initiation, which may lead to an underestimation of an individual’s future risk of disease. Thus, the second aim is to explore novel approaches to account for post-baseline statin initiation in CVD risk prediction models. Third, a single fixed risk threshold for treatment initiation is typically recommended in most guidelines; however, such a strategy does not account for the large impact of age and sex on CVD risk. Consequently, the third aim is to investigate age- and sex-specific thresholds for CVD risk stratification. These questions are addressed using electronic health records (EHRs) from approximately two million individuals in the UK Clinical Practice Research Datalink (CPRD), together with linked data from Hospital Episode Statistics (HES) and the Office for National Statistics (ONS). Key findings 1: By applying landmark modelling to EHRs for people with type 2 diabetes, models incorporating trajectories and variability of risk predictors demonstrated significant improvement in risk discrimination (C-index=0.659, 95% Confidence Interval: 0.654-0.663) compared with using last observed values (0.651, 0.646-0.656) or means (0.650, 0.645-0.655). Inclusion of standard deviations (SDs) of systolic blood pressure yielded the greatest improvement in discrimination (C-index increase=0.005, 95% Confidence Interval: 0.004-0.007) in comparison to incorporating SDs of total cholesterol (0.002, 0.000-0.003), HbA1c (0.002, 0.000-0.003), or high-density lipoprotein cholesterol (0.003, 0.002-0.005). Given that repeat measures are readily available in EHRs, especially for regularly monitored patients with diabetes, this improvement could easily be achieved. Key findings 2: To account for statin initiation in CVD risk prediction, I incorporated a time-dependent effect of statin initiation, constrained to a 25% relative risk reduction (from trial results), into the risk prediction models. In models accounting for (versus ignoring) statin initiation, 10-year CVD risk predictions were slightly higher and predictive performance was moderately improved. However, few individuals were reclassified to a high-risk threshold, resulting in negligible improvements in the number needed to screen to prevent one CVD event. In conclusion, incorporating statin effects from trial results into risk prediction models enabled statin-naïve CVD risk estimation and provided moderate gains in predictive ability but had a limited impact on treatment decision-making under current guidelines in this population. 
Key findings 3: Age- and sex-specific risk thresholds were specified as the minimum of 10% or the 90th percentile of the estimated risk distributions from the respective populations. Compared with the single threshold of 10%, using age- and sex-specific thresholds significantly improved the discriminatory ability to identify high-risk men and women at younger ages. The number needed to screen to prevent one CVD event was reduced by 58% and 89% for men and women aged 40 to 49, respectively. The gain in CVD-free life expectancy by age and sex was slightly higher when the strategy identified more people as high-risk in younger age groups, with a maximum increase of 0.16 years. In conclusion, the results suggest that using age- and sex-specific thresholds can modestly enhance CVD risk stratification for the allocation of statin therapy among younger people. Overall, these findings have identified achievable and pragmatic approaches to improve CVD risk prediction and risk stratification for guiding statin initiation by harnessing information from electronic health records.
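The threshold rule in Key findings 3 (the lower of 10% and the stratum-specific 90th risk percentile) can be sketched as follows, using simulated risks in place of the CPRD-derived predictions; the age-group baselines and risk distribution are assumptions for illustration only.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000

# Simulated 10-year CVD risks, skewed and rising with age group.
age_groups = ["40-49", "50-59", "60-69", "70-79"]
baseline = {"40-49": 0.03, "50-59": 0.06, "60-69": 0.12, "70-79": 0.20}
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], size=n),
    "age_group": rng.choice(age_groups, size=n),
})
df["risk"] = np.clip(rng.gamma(shape=2.0, scale=df["age_group"].map(baseline) / 2.0), 0, 1)

# Stratum-specific threshold: the lower of 10% and the stratum's 90th percentile.
thresholds = (
    df.groupby(["sex", "age_group"])["risk"]
      .quantile(0.9)
      .clip(upper=0.10)
      .rename("threshold")
      .reset_index()
)

df = df.merge(thresholds, on=["sex", "age_group"])
df["high_risk"] = df["risk"] >= df["threshold"]
print(thresholds)
print(f"Flagged as high risk: {df['high_risk'].mean():.1%}")
```

Younger strata, whose 90th percentiles fall below 10%, end up with lower thresholds, which is how the rule identifies more high-risk people at younger ages.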
  • ItemOpen Access
    Understanding community end of life anticipatory medication care
    Bowers, Ben; Bowers, Ben [0000-0001-6772-2620]
    The prescribing of injectable end-of-life anticipatory medications ahead of possible need has become established good practice in controlling distressing symptoms for patients dying in the community. The intervention aims to optimise timely symptom control, provide reassurance for patients and families and prevent crisis hospital admissions. However, in signifying the imminence of death, anticipatory medications carry great symbolic and emotional weight. Stakeholder perspectives, particularly those of patients and informal caregivers, have not been investigated in detail. This thesis examines the practice of prescribing and using anticipatory medications in the community in England and patient, informal caregiver and clinician perspectives of this care. My systematic literature review identified that anticipatory medication policy and practice is founded on an inadequate knowledge base. Current practice is based primarily on clinicians’ beliefs that anticipatory medications offer reassurance to all, facilitate effective symptom control and prevent undesired hospital admissions. The views and experiences of patients, informal caregivers and general practitioners (GPs) have not been adequately investigated; neither have the clinical effectiveness, cost-effectiveness and safety of prescribing anticipatory medications. In the light of my review, I then investigated community anticipatory prescribing and administration practices and perspectives of care through three sequential studies in two English counties. I undertook interviews with GPs, examined documented care in deceased patients’ clinical records, and conducted longitudinal interviews with patients, their informal caregivers and clinicians. Triangulation of these data revealed that prescribing practices reflected the structures and culture of community healthcare rather than being particularly person-centred. Discussion with patients and informal caregivers about the process of dying and the role of anticipatory medications in controlling symptoms was often vague, inadequate or even absent. Some patients and informal caregivers expressed ambivalence about the medicines and perceived that they might hasten death. The prescribing of standardised drugs and doses and of anticipatory syringe pumps, often weeks to months ahead of the patient’s death, had unintended consequences for care experiences and patient safety. Although administered anticipatory medications were reported to have generally helped symptom control, some informal caregivers reported difficulties in persuading nurses to administer them to patients. Clinician and institutional preferences to put prescriptions in place at the earliest opportunity can be counterproductive and are unjustified. Anticipatory medications are a nuanced and complex intervention, which needs careful tailoring to the preferences and experience of patients and families and regular review as situations change. Nurses’ decisions to administer medication should take into consideration informal caregiver insights into patient comfort, especially when patients can no longer communicate.
  • ItemOpen Access
    Understandings of attachment theory for clinical practice
    Beckwith, Helen; Beckwith, Helen [0000-0002-4720-9552]
    Attachment theory and research are considered to have a great deal of relevance for clinical and social welfare practice. Practitioners are encouraged, through literature, training and policy, to learn, understand, refer to and use their knowledge of attachment theory and research when working to meet the needs of the children and families they encounter. However, there has been very little empirical study of how practitioners have understood attachment concepts and methods in order to do this. The research reported here examines how clinicians and researchers understand attachment theory and research in the context of clinical practice for child mental health. Chapter 1 spotlights the gap in empirical work pertaining to practice-based understandings and behaviour, with respect to attachment theory. It draws on theories and models of professional knowledge to contextualise the forthcoming study, which is framed as an evaluation of attachment theory’s intelligibility. Chapter 2 reviews a range of source materials surrounding the development and distribution of attachment knowledge. It presents a narrative synthesis of the diverging ways attachment concepts are used within academic, policy and practice literature. Attention is given to issues arising from each discourse when considering implications for clinical practice. This work generated the initial themes that informed the study design and development. Chapter 3 explores what beliefs about attachment applications may exist by assembling a pool of relevant claims drawn from these literatures. Q-methodology was used to examine the views of international attachment researchers and clinicians working with children, adolescents and their families in the UK. Additional background and demographic information was collected to explore potential mediating influences that may shape the perspectives of these participants. Chapter 4 reports in detail on the by-person factor analysis employed to make sense of the data. A substantial degree of commonality was observed, alongside profiles of three perspectives that diverged on a number of key issues. Participants clustered around these viewpoints based on shared professional characteristics. Chapter 5 discusses the findings with particular emphasis on identified areas of consensus and divergence between researchers and clinicians. In addition, it reflects on the contextual influences which shaped the views and concerns expressed by participants. Chapter 6 concludes with a consideration of how knowledge is shared between the domains of research and practice, and offers a coherent and incisive position on the current state of play. It ends with a reflexive narrative about approaching and conducting this work as a practising clinician and researcher within the field. The work reported in this dissertation will be of particular value to i) researchers interested in how best to communicate with and learn from practitioners and wider publics; and ii) practitioners interested in thinking further about the implications of attachment theory and research for their own work.
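For readers unfamiliar with Q-methodology, the sketch below illustrates the by-person step on synthetic data: participants' Q-sorts are correlated with one another (rather than correlating statements) and unrotated factors are extracted from that correlation matrix. The data, the number of factors retained and the absence of rotation are all simplifications; the study's own extraction and rotation choices are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic Q-sort data: rows are statements, columns are participants, values
# are the ranks each participant assigned (e.g. -4 to +4 on a sorting grid).
n_statements, n_participants = 40, 12
q_sorts = rng.integers(-4, 5, size=(n_statements, n_participants)).astype(float)

# By-person factor analysis: correlate participants with each other, then
# extract principal components of that participant-by-participant matrix.
corr = np.corrcoef(q_sorts, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings of each participant on the first three unrotated factors; clusters
# of participants loading on the same factor share a viewpoint.
loadings = eigvecs[:, :3] * np.sqrt(eigvals[:3])
print("Variance explained:", np.round(eigvals[:3] / n_participants, 2))
print("Loadings on factor 1:", np.round(loadings[:, 0], 2))
```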
  • ItemOpen Access
    Attachment and Trauma: A Historical and Empirical Study of the Meaning of Unresolved Loss and Abuse in the Adult Attachment Interview
    (2021-11-01) Bakkum, Lianne; Bakkum, Lianne [0000-0002-2880-6853]
    This thesis comprises three studies of the meaning of adults’ unresolved states of mind with respect to attachment (U/d) in the Adult Attachment Interview. The first study is a historical analysis of the conceptualisation of “trauma” in the unresolved state of mind classification, drawing on published and unpublished texts by Mary Main and colleagues. The paper traces the emergence of the construct of an unresolved state of mind, and places this in the context of wider contemporary discourses of trauma, in particular posttraumatic stress disorder and discourses about child abuse. In the second study, individual participant data were used from 1,009 parent-child dyads across 13 studies. Interviewees with or without unresolved loss/abuse were differentiated by subsets of commonly occurring indicators of unresolved loss/abuse. Predictive models suggested a psychometric model of unresolved states of mind consisting of a combination of these common indicators, which was weakly predictive of infant disorganised attachment. There was no significant association between unresolved “other trauma” and infant disorganised attachment. The findings provide directions for further articulation and optimisation of the unresolved state of mind construct. In the third study, first-time pregnant women (N = 235) participated in the Adult Attachment Interview while indicators of autonomic nervous system reactivity were recorded. Unresolved speech about loss was associated with increased heart rate. Participants classified as unresolved showed a decrease in pre-ejection period and blunted skin conductance level throughout the interview. Unresolved states of mind may be associated with physiological dysregulation, but questions remain about the psychological mechanisms involved. This thesis contributes towards further clarification of the unresolved state of mind construct by examining its historical context, psychometric characteristics, and psychophysiological mechanisms. Further exploratory and theoretical work should focus on improving the definition and validity of the unresolved state of mind construct, to gain a better understanding of how attachment-related experiences of loss and trauma are processed and how this might affect parenting behaviour.