Theses - Engineering
- Item (Open Access). The Role of Common Risk-Assessment Tools in Assessing Patient Safety Risks. O'Kelly, Eugenia [0000-0002-4748-3957]. Introduction and Aims: Whether ill or well, young or old, we trust the healthcare system to protect our health and prolong our lives rather than harm or end them. Yet healthcare, in comparison to other safety-critical industries, has a high degree of preventable harm. Many scholars believe a different or new risk assessment tool may improve patient safety. This dissertation examines the role of risk assessment tools in healthcare, exploring why these tools are not benefiting healthcare to the same extent as they benefit other safety-critical industries. Four main questions guide this investigation. Methods and Research Questions: Using Design Research Methodology, a mixed-method approach is used to answer the four research sub-questions. Question 1, what is the nature of patient-safety risk in healthcare, is answered through a literature review and an observational study of risk in three clinical procedures. Question 2, which tools are commonly used for safety assessment, is explored through a literature review. Question 3, how is patient safety currently assessed in healthcare, is answered by a survey and interview study of US- and UK-based risk managers. Finally, Question 4, what are the requirements for quality risk assessment in healthcare, is determined by synthesising the findings from the previous three research questions. Results and Conclusion: This dissertation finds that risk in clinical care is primarily driven by human factors, with skill-based errors and routine violations generating a particularly high degree of preventable risk. The tools currently used to assess risk in healthcare may not be ideal for capturing the nature of that risk. Regardless, there appear to be insufficient resources at most healthcare institutions to support the proper use of any but the most basic risk assessment tools.
A poor organisational safety culture appears to be at the root of these and other issues with risk assessment tools. This dissertation concludes that while many risk assessment tools, including those currently used, may benefit healthcare, severe problems in safety culture prevent such benefit from being realised.
- Item (Open Access). Critical infrastructure organisation management: An analysis of the transition to the Industry 4.0 era. O'Brien, Sally. Critical infrastructure systems (CISs) provide the services that are vital for the economic prosperity, security, and well-being of society. These include power transmission and distribution, telecommunications, transport, and water distribution networks, to name a few. CISs are composed of both physical and digital assets that provide infrastructure services but require interaction with people to operate. Through sociotechnical systems theory, we can model CISs through the interactions between the social and technical factors that influence organisational performance, which is directly linked to the quality of the infrastructure services provided. The task of managing CISs involves not only the effective monitoring of physical assets, system interdependencies, and emerging risks and trends, but also the management of organisational aspects including people, processes, and working cultures. The interaction of these aspects during CIS operations is formalised within the field of organisational theory. Developed society is currently experiencing a technological revolution, often termed Industry 4.0, whereby the operational and management capabilities of CISs and organisations are being transformed through the use of bespoke technologies and system analytics. These technological advancements are allowing organisations to gain new insights into how they can operate, maintain, and protect their systems and assets more effectively. The idealised Industry 4.0 model envisions that all data capture, storage, and analysis processes will be fully automated to enable automated decision-making with minimal human intervention. However, advancements of this type will change how critical infrastructure (CI) organisations are structured and managed.
At present, organisational theory lacks the ability to capture and provide governance on the evolving relationships between technologies, people, and processes in CIS management. This research addresses this shortcoming by critically analysing data management structures and practices in CI organisations and developing an analytical framework that captures the interdependencies between technical and social elements in complex systems. It introduces the Data and Information Flow (DIF) model, which provides a framework that surpasses the capability of existing methods by capturing the movement of, and interactions between, data, information, management systems, people, and processes in organisations. The research approach and subsequent development of the DIF model design involved a combination of reviewing existing organisational theory methods and engaging with five case study organisations from the energy sector to investigate and comparatively assess their organisational resource and data management strategies. The research also sought to understand the extent to which the case study organisations have adopted Industry 4.0 advancements in practice. Qualitative data collection methods including interviews, group workshops and webinars, and document analysis were applied. The research findings provide unique insight into how organisations structure both technical (i.e., data, information, and management systems) and social (i.e., personnel and processes) entities. The DIF model visually captures and characterises the organisational practices, resources, and inherent relationships that influence and determine management decisions. The case study assessments show clear evidence of the existence of, and potential for further, automation in data capture and analysis processes to support system management and decision-making in CI organisations.
However, they highlight inefficiencies in system operations due to the vast amount of human input and influence required in management processes, data quality challenges, and insufficient technical capability in existing management systems and software. These inefficiencies, in addition to the social and cultural disruptions caused by technological change, are limiting the extent of the digital transition and the realisation of the Industry 4.0 ideal in CI organisations. That ideal can only be achieved if an organisation's structure and the interactions between all available resources can be assessed and optimised in a combined manner. The DIF model offers the ability to map these wider interactions.
- Item (Open Access). Low temperature glide cycles for energy storage applications: Thermodynamic and capital cost analysis. Koen, Antoine. Among the technologies being researched for grid-scale electricity storage, pumped thermal energy storage (PTES) consists of storing electricity as thermal energy, with the conversions performed by a reversible heat pump/heat engine system. Advantages include the use of abundant and cheap materials and conventional components from the power industry, as well as full flexibility in siting. Various thermodynamic cycles can be used for the energy conversion, including the Joule-Brayton, transcritical, and glide (also known as Kalina) cycles. The glide cycle has not yet been extensively studied for PTES and is the focus of the present work. It is akin to a Rankine cycle but employs a zeotropic mixture as the working fluid. As a result, evaporation and condensation occur non-isothermally over a temperature “glide”, making this cycle a good match for sensible heat storage. The mixture used for the working fluid must be chosen carefully. Its effective heat capacity during phase change (a well-defined property due to the temperature glide) can exhibit extremely high variability, which would incur substantial pinch-point problems and thus inefficiency in the heat transfer processes integral to PTES. In the present work, an alkane mixture is therefore optimised to achieve near-constant heat capacity. This alkane mixture is then used as the working fluid in a model of a low temperature glide cycle that uses water for both the hot and cold stores. Despite the low temperatures involved, the computed cycle round-trip efficiency reaches roughly 55% under realistic assumptions for isentropic efficiency and heat exchanger effectiveness. This is largely due to the efficient heat transfer enabled by optimising the working fluid.
Energy density is relatively low at 2.2 kWh·m⁻³ compared to other PTES cycles, though still several times higher than for pumped hydro storage. Capital cost is estimated at 15-45 $/kWhe for the specific energy cost and 1300-2900 $/kW for the specific power cost, making this cycle competitive with batteries for long-duration storage even under conservative assumptions. Capital cost is then investigated further in order to analyse the physical drivers of cost in PTES systems in general, both in terms of power and energy. The nature of the working fluid influences the specific power cost via the turbomachinery as well as the heat exchangers. Operating conditions like pressure and temperature also have a major effect; in particular, increasing temperature lowers the heat-to-work ratio of the cycle and therefore lowers heat exchanger cost, although beyond a certain point material limits force a switch to more expensive materials and cause a discrete jump in cost. In order to improve efficiency and energy density, and potentially reduce cost, a higher temperature glide cycle is then modelled, after the optimisation of a new working fluid mixture. To that end, many fluids are examined according to several criteria before deciding on a final mixture that is also based on alkanes. Pressurised water is used for the hot store at up to 180 °C, while unpressurised water is again used for the cold store. Performance is characterised by parametric studies which suggest round-trip efficiency can reach 60%, with an energy density of around 5 kWh·m⁻³. The specific power cost is (conservatively) estimated at 1600 $/kW, an improvement on the low-temperature cycle thanks to the lower heat-to-work ratio. However, the specific energy cost is higher at around 120 $/kWhe, due to the expense of the pressure vessel for the hot store. Consequently, the PTES plant's storage duration determines which of the low- and high-temperature glide cycles is more cost-effective.
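Taking the midpoints of the cost ranges quoted in the abstract (an assumption for illustration only; the thesis reports ranges, not point values), the duration-dependent trade-off between the two cycles can be sketched as a simple capital cost comparison:

```python
# Illustrative capital cost comparison between the low- and high-temperature
# glide cycles, using midpoint figures from the abstract.

def total_cost_per_kw(power_cost, energy_cost, hours):
    """Total specific capital cost ($ per kW of rated power) for a plant
    sized to store `hours` hours of output."""
    return power_cost + energy_cost * hours

LOW_T = {"power": 2100.0, "energy": 30.0}    # midpoints of 1300-2900 $/kW and 15-45 $/kWhe
HIGH_T = {"power": 1600.0, "energy": 120.0}  # ~1600 $/kW and ~120 $/kWhe

# Crossover duration where both cycles cost the same:
#   p_lo + e_lo * h = p_hi + e_hi * h
crossover = (LOW_T["power"] - HIGH_T["power"]) / (HIGH_T["energy"] - LOW_T["energy"])
print(f"crossover at about {crossover:.1f} h")  # ~5.6 h with these midpoints

for h in (2, 10):
    lo = total_cost_per_kw(LOW_T["power"], LOW_T["energy"], h)
    hi = total_cost_per_kw(HIGH_T["power"], HIGH_T["energy"], h)
    print(h, "h:", "low-T cheaper" if lo < hi else "high-T cheaper")
```

With these midpoint figures, the high-temperature cycle wins at short durations (its lower specific power cost dominates) and the low-temperature cycle wins beyond roughly six hours (its cheaper storage dominates), consistent with the abstract's conclusion that storage duration decides between the two.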
- Item (Open Access). Modelling and Control of Tubular Linear Generators for Wave-Power Applications. Ridge, Alexander Nicholas. Linear generators are an attractive choice for use in wave-power applications. They can be coupled directly to the motion of a point-absorber or float, eliminating the need for the potentially complex mechanical/hydraulic intermediate stages often required for designs based on rotary generators. This decreases system complexity and maintenance requirements and increases system reliability, key factors for the economic operation of wave energy converters out at sea. Linear generators are a relatively new technology, as is their application to the field of wave-power energy extraction. The literature on the modelling and control of linear generators therefore naturally lags behind that of rotary machines. This dissertation makes contributions to four key areas in applying a linear generator to the challenge of wave-power energy extraction: thrust ripple reduction, thermal modelling, dynamic testing and maximum power extraction, and sensorless control. These areas were selected because of their importance to the optimal use of a linear generator in a wave-power scenario and because little published knowledge existed for them. Experimental validation was used throughout this work to ensure its results are directly applicable. The work in this thesis generally starts from an existing linear machine design and looks towards the optimal application of that machine to energy extraction in a wave-power scenario. As the design and application of machines are inherently linked, however, some of this work will naturally feed back into the design process. The work on thrust ripple reduction presents a successful methodology allowing precise, open-loop control of the force from the machine, regardless of the machine's position.
The thermal modelling work mainly focuses on the construction of thermal models for a stationary linear machine. This was considered an important first step in the thermal modelling of a linear machine, giving a ‘worst case’ thermal rating. Detailed thermal models that can predict the temperatures of individual machine components with minimal experimental calibration are presented and validated. The realistic dynamic testing of linear generators for a wave-power scenario is an important area to develop because of the risky, harsh, and expensive-to-access nature of testing in the ocean. A dynamic test rig capable of providing a linear generator with a mechanical environment equivalent to being attached to a float at sea was successfully designed, built, and tested. This test rig has no fixed motion profile; instead, it solves a hydrodynamic model in real time to determine the forces that should be applied to the linear generator. Initial investigations into maximum power extraction algorithms using this test rig confirm its usefulness and versatility as a piece of equipment for future research. Finally, the potential for sensorless control is investigated. If achieved, this can increase device reliability by eliminating externally mounted sensors, which is attractive for harsh marine environments. A sensorless control technique is developed which can output a number of different estimates of machine position across the full range of machine speeds, including stationary. This technique shows great promise and requires no electrical components beyond those which would already exist for general machine control.
- Item (Open Access). Learning from Structured Data with Weak Supervision. Wang, Hanchen. In all science, inquiry proceeds by observation and experimentation, exercising informed judgement and developing hypotheses to guide the design of experiments and disambiguate between theories. Artificial intelligence (AI) has dramatically improved state-of-the-art scientific research by helping scientists to formulate hypotheses, design experiments to test them, and collect and interpret data. Fundamental advances over the past decade include self-supervised learning methods that train models on broad data at scale without pre-defined labels; geometric deep learning that leverages structure and geometry informed by scientific knowledge; and generative AI methods that create action plans for experiments and produce new designs, such as small-molecule drugs and proteins, from a diversity of experimental data, including images and sequences. Among such advances, one of the most commonly shared characteristics is learning the AI/ML model with weak forms of supervision. To this end, we develop a variety of learning methods for a range of structured data representations. We start with point clouds: we develop a universal self-supervised pre-training method for neural feature encoders called “OcCo” and devise a quantum computing-based method named “qKC” for registration. Both methods require no labels for training and improve the robustness of the model in the presence of data noise. We next focus on medical CT and CXR images, where data are usually isolated across multiple centres; we therefore develop a federated learning framework to jointly exploit this isolated data and improve clinical models' performance. Finally, we develop “GraphMVP” and “MolGraphEval” to advance the state of the art in self-supervised graph learning on molecules and to provide an understanding of what structural information these methods capture.
- Item (Open Access). Lean Premixed Prevaporised Combustion for Microturbines. Wang, Jiayi. There has been growing interest in the microturbine because of its potential application as a range extender in hybrid electric vehicles. Unlike existing microturbines used for other purposes, these microturbines will need to operate on liquid fuels with highly preheated air while maintaining low pollutant emissions. A promising strategy to meet these requirements is the lean premixed prevaporised (LPP) concept. This thesis therefore explores fuel flexibility, flame stability, flame structures, gas emissions, and droplet behaviour in LPP combustion at conditions relevant to microturbines. The first part of the thesis examines the effect of fuel choice using zero- and one-dimensional combustion simulations. The fundamental flame properties of gaseous mixtures of air with diesel, gasoline, and kerosene are calculated at typical microturbine operating conditions. The calculations find that the three fuels differ significantly in their ignition delay times and extinction strain rates but are relatively similar in their laminar flame speeds and adiabatic flame temperatures. These results highlight the importance of fuel choice in microturbine combustion systems that rely on autoignition or non-premixed flames. In the second part of the thesis, simulations of single droplet combustion are conducted. The spontaneous ignition of isolated n-heptane droplets with initial diameters of 20–100 μm in air at 4 atm and 700–1200 K is modelled, a range that includes the typical operating conditions of microturbines. Because some fuel droplets in a combustor may be sprayed or carried near the recirculation zone, the simulations use a mixture of pure air and hot combustion products as the oxidiser. The flame structures, evaporation times, and autoignition times in both physical and mixture fraction spaces are presented and compared for the different conditions.
The variables examined include the air preheat temperature, the amount of dilution with hot products, the initial fuel droplet diameter, the oxidiser temperature, and the oxygen concentration. The results show that droplets in pure air at microturbine conditions fully evaporate before ignition, suggesting that a prevaporised concept is suitable for microturbines. Dilution with hot combustion products decreases the ignition delay time mainly by raising the oxidiser temperature. Low-temperature chemistry does not have a significant effect on droplet ignition because adding even a small amount of hot combustion products raises the oxidiser temperature above the range favourable for low-temperature kinetics. A cool flame is observed only for 100 μm droplets at low temperatures, and two-stage ignition is not observed. The last part experimentally explores the stability and structure of a turbulent swirling n-heptane spray flame under various degrees of prevaporisation at ambient pressure. The results show that preheating the air to 343 and 393 K has little effect on the lean blow-off (LBO) velocity, but recessing the fuel injection significantly decreases the lean stability limit. To correlate these limits, various attempts were made to define a Damköhler number, but unlike in previous studies with no prevaporisation, the difficulty of defining a laminar flame speed in the present case prevents a single correlation from working for all degrees of prevaporisation. Four stable cases that differ in equivalence ratio, air preheat temperature, and fuel injection recess are investigated using a gas analyser, 1D PDA, OH* chemiluminescence, and CH₂O-PLIF. Decreasing the global equivalence ratio, preheating the air, and recessing the fuel injection all reduced the NOₓ emissions. Cases without fuel injection recess or air preheat exhibit a conical heat release zone near the shear layers.
Preheating the air to 393 K reduced the Sauter mean diameter (SMD), increased prevaporisation, and enabled a second heat release zone downstream of the fuel injection. Recessing the fuel injection by 25 mm reduced droplet velocities and led to a semi-spherical rather than conical heat release zone. The CH₂O-PLIF signal without injection recess was high along the central axis, and its distribution resembled that observed for spray jet flames. In contrast, with recessed spray injection, CH₂O was found mainly outside the central recirculation zone and only appeared inside it during LBO, similar to previous work with premixed flames. Single droplet evaporation was also modelled at the experimental conditions using the single droplet combustion code. The results agree with the experimental data that preheating the air was the most effective means of obtaining small droplet diameters. These findings show that different methods of prevaporisation, which differ only in subtle changes to droplet characteristics, strongly impact flame stability. These data can be used for turbulent flame modelling focusing on sprays and finite-rate kinetics.
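The abstract studies droplets of 20–100 μm initial diameter. Under the classical d²-law (a textbook simplification, not the thesis's detailed evaporation model), evaporation time scales with the square of the initial diameter, which is why droplet size matters so much:

```python
# Relative evaporation times under the classical d²-law: t_evap = d0**2 / K.
# K (the evaporation rate constant) depends on fuel and ambient conditions;
# the value below is a placeholder, since only ratios matter here.

K = 1.0e-7  # m^2/s, illustrative only

def evaporation_time(d0_um, k=K):
    """d²-law evaporation time for an initial diameter d0 given in microns."""
    d0 = d0_um * 1e-6  # convert to metres
    return d0 ** 2 / k

# Across the 20-100 μm range studied in the thesis, the largest droplet
# takes (100/20)**2 = 25 times longer to evaporate than the smallest.
ratio = evaporation_time(100) / evaporation_time(20)
print(ratio)  # 25.0
```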
- Item (Open Access). Development of a Bulk Superconducting Magnet for Benchtop Nuclear Magnetic Resonance. Beck, Michael [0000-0003-4476-3803]. Stacks of bulk high temperature superconducting (HTS) rings, when utilised as trapped field magnets (TFMs), are promising candidates for generating the strong polarising fields required for nuclear magnetic resonance (NMR). To date, these stacks have been magnetised by the quasi-static field cooled magnetisation (FCM) technique within NMR-grade fields, requiring access to large and expensive magnetising fixtures and thereby limiting the accessibility and reach of such systems. Portable, low-cost magnetisation techniques such as pulsed field magnetisation (PFM) have been shown to readily trap fields of up to 5 T within disc-shaped samples. This work investigates the viability of magnetising stacks of ring-shaped bulk HTS by PFM to generate magnetic fields suitable for NMR. First, a two-dimensional (2D) axisymmetric model of a single ring, based on the well-established H-formulation, was developed and iteratively refined to remove numerical errors from the solution. This model was validated against analytical solutions for the quasi-static problem before being expanded to account for thermal effects during rapid field application. The penetration of magnetic flux into ring-shaped bulk HTS was found to occur from both the inner and outer faces of the ring, degrading the stability of the magnetisation process during PFM. These results were verified experimentally. The influences of ring geometry and critical current density characteristic (Jc) on the trapped field were investigated, along with the use of inserts to improve the trapped field strength and mitigate the instabilities. The use of inserts gave mixed results but opens several avenues for further investigation. The model, without inserts, was then expanded to predict the behaviour of stacks of identical rings when axially spaced with variable separation.
Increasing the number of rings within the stack is an effective way of improving the magnitude of the trapped field but does not necessarily improve its homogeneity sufficiently for NMR. Constructing the stack with variable separation between the rings can result in a highly uniform field with only a small effect on the peak strength. Finally, as practical samples do not exhibit perfect axisymmetry, a three-dimensional (3D) model of a ring stack is implemented with spatially varying Jc. The spatial Jc distribution was validated by comparison with a real sample. The influence of this non-uniformity on the trapped field under both FCM and PFM was investigated, and it was found that axial non-uniformity may be exploited to improve the homogeneity of the trapped field. Any circumferential variation significantly degrades the trapped field properties, but this can be mitigated by using multiple samples with their weaker regions deliberately misaligned. These results provide novel insight into the rapid magnetisation of ring-shaped bulk HTS and the methods through which its inherent instabilities may be mitigated, opening new pathways to high-field, low-cost, portable NMR systems.
- Item (Open Access). Paths Towards Shape Change: Elastic Instabilities and Spontaneous Deformations. Giudici, Andrea. A system can change shape. In biology, shape changes lead to the wonderful variety of forms that surround us; in engineering, they are a tool to design soft robots and deployable structures. In this thesis, we identify and explore three fundamental paths that lead to shape change: elastic instabilities, spontaneous deformations, and the combination of the two. Classic elastic instabilities occur when a progressive load is applied to a system that initially responds gradually but, at a critical threshold, suddenly changes its response and, importantly, its shape. Here, we study a classic example of large-strain instability, ballooning, and the process that leads to longitudinal phase separation in solids. We show that near a critical point, phase separation is described by a universal and simple energy which allows us to understand shape evolution analytically. Typical examples of the second path towards shape change, via spontaneous deformations, are biological growth and muscle contraction. They sculpt the shape of flowers and power the beating of a heart. Outside the realm of biology, the swelling of gels and the anisotropic deformations of liquid crystal elastomers (LCEs) are two excellent examples of spontaneous deformations that can be patterned, allowing scientists to program shape changes, mimic biology, and design soft robots. A typical example of patterned deformation is the morphing of flat sheets into curved surfaces. Here, we show that an LCE sheet can be morphed into a variety of shapes in a reprogrammable way by patterning the activation strength of the LCE via light modulation. Moreover, we show that patterned shape changes are not limited to 2D, and we discuss the origin of the 3D deformation that causes twist and contraction in LCE fibres.
Finally, we examine the last and mechanically richest path towards shape change, in which elastic instabilities and spontaneous deformations combine. This mode of deformation underpins, for instance, patterns in biology such as the corrugations of the mammalian brain and the folding of pollen grains. In the case of soft active materials, we show that the ballooning instability and phase separation can be triggered and amplified by the spontaneous deformation of LCEs, and that a coiling instability can be induced in spontaneously twisting fibres; we also discuss how curvature loads govern the instabilities in thin shells.
- Item (Open Access). Augmented Workforce Canvas: Towards a Tool for Integrating Operator Assistance Systems in Industry. Möncks, Mirco [0000-0003-1108-6455]. To remain competitive in an increasingly complex manufacturing landscape, organisations are moving beyond a full-automation narrative and considering the empowering role of augmentation. Although technology is an important pillar of industry, people remain essential on shop floors and will continue to be so in the future. Where total automation is not the preferred option, augmentation technologies and operator assistance systems (OAS) have the potential to realise an optimal combination of people and technology, resulting in human-technology integration (HTI). In this study, OAS are conceptualised as socio-technical systems that modify or complement an operator's capabilities. However, despite their promising potential for empowering the workforce, OAS are inadequately understood in industry. For example, there is a distinct lack of knowledge about the applicability of OAS, their value-added, and the most effective way of integrating OAS into production environments. The set of relevant technology management factors that need to be considered to understand and guide technology implementation projects concerning OAS is unknown. Understanding these factors is crucial, as the successful adoption of OAS depends on how an application was developed and deployed. Focusing on OAS for execution support, this study therefore (a) explores the relevant technology management factors for integrating OAS into production systems, and (b) strives to understand how to systematically consider these factors during the integration of OAS into human-centric production systems. In technology management research, the use of multiple methods has been advocated to overcome individual methodological weaknesses and to allow for a richer approach to data collection, analysis, and interpretation.
Following pragmatism and engaged scholarship, this study applies procedural action research. Due to the contextual richness of OAS research, a mixed-method research approach was selected, involving: (a) a systematic review of 2,928 papers; (b) 67 semi-structured expert interviews from 45 different manufacturing organisations; (c) 32 survey-guided industry case studies; (d) 108 structured industry workshops and working sessions; (e) ethnography and observations in ten different shop-floor environments; (f) three industrial case studies; and (g) two in-depth evaluative industrial case studies over the course of three months each. As a result, this study identifies (a) 11 goal-based application areas for OAS, (b) 11 organisation-based application areas for OAS, (c) the value-added of OAS on shop floors, and (d) a set of 15 technology management factors that need to be considered when integrating OAS. An essential contribution emerging from these findings is the Augmented Workforce Canvas (Canvas) (Figure 1). The Canvas is a framework enabling practitioners to systematically understand and guide activities related to the integration of OAS. Evaluating the Canvas in two end-to-end industry case studies, this research provides evidence that the Canvas can be applied to guide OAS integration activities in industry. Overall, this study contributes to the understanding of industrial socio-technical systems and their integration into production systems by placing both people and the value-added of OAS at the heart of technology management decisions.
- Item (Embargo). Hierarchical Carbon Structures for Advanced High Areal Capacity Lithium-Ion Battery Electrodes. Zhao, Zijian [0000-0001-5937-9279]. The ever-growing energy demand from modern portable electronics and electric vehicles calls for a matching development in lithium-ion batteries (LIBs) that offer high power density, high energy density, and long cycle life. However, LIBs based on traditional layered metal oxide cathodes and graphite anodes are quickly approaching their specific energy limits. The exploration of advanced electrode architectures provides a novel, and often overlooked, route to improving the energy density of LIBs. This PhD thesis investigates the design, fabrication, and testing of structurally engineered, high areal capacity LIB electrodes based on hierarchical carbon structures such as carbon nanotubes (CNTs). Firstly, (surface-modified) direct-spun CNT mats are proposed as conductive scaffolds for lithium metal anodes (LMAs), which are desirable for next-generation batteries because of their ultra-high energy density. The practical application of LMAs is, however, hindered by inhomogeneous Li dendrite formation during cycling, which leads to poor efficiency and safety issues. The proposed electrode provides ample Li nucleation sites, reduces local current densities, and promotes uniform Li plating/stripping at selective locations, achieving coulombic efficiencies (CEs) of 98-99% for over 130 cycles in CNT|Li half-cells (lithiation capacity: 2.0 mA h cm⁻², current density: 1.0 mA cm⁻²). Next, a freestanding carbon fibre paper with carbon fillers (CFP) electrode is proposed with a novel dual charge storage mechanism: it acts both as a host for Li intercalation and as a conductive, porous, and lithiophilic 3D scaffold for Li plating/stripping.
The CFP electrode exhibits excellent long-term cycling stability, as evidenced by CEs of over 99.5% on the 250th cycle in CFP|Li half-cells (lithiation capacity: 1.5 mA h cm-2, current density: 0.5 mA cm-2). Notably, both the CNT mat and CFP electrodes can be manufactured at an industrial scale, showing real promise for practical applications. Lastly, thick electrodes can help improve the energy density of LIBs by increasing the ratio of electrochemically active materials in a battery cell, but they often suffer from poor rate performance, caused by increases in the various internal resistance terms associated with the respective charge storage mechanisms, as well as from manufacturing challenges. Here, ultra-thick lithium titanate (LTO) electrodes based on nanostructured CNT honeycomb conductive backbones (LCHB) are proposed – the hierarchical CNT honeycombs help improve Li-ion diffusion, provide more electrically conductive pathways, increase the electrochemically active surface area, and offer mechanical support. High aspect ratio (over 1000) CNT honeycomb backbones over 1 mm tall are successfully manufactured, and LCHB electrodes with areal loadings of over 25 mg cm-2 are subsequently fabricated. The LCHB electrodes demonstrate superior rate performance (over 50% capacity retention at 2 C, compared to less than 10% for a planar electrode with similar areal loading) and longer cycle life (over 230 cycles until 10% capacity is lost, compared to less than 15 cycles for planar electrodes). Furthermore, EIS results reveal that the low-frequency resistance term of the LCHB electrodes (comprising the charge transfer and Li-ion transfer resistances) is an order of magnitude lower than that of planar electrodes.
- ItemOpen AccessAdvanced Algorithmic Approaches for Improving Image Quality in 2D and 3D Holographic DisplaysYang, Fan; Yang, Fan [0000-0003-4351-1030]Holography is an advanced three-dimensional (3D) imaging and visualisation technology capable of reconstructing realistic 3D scenes. Despite decades of concentrated effort, holographic 3D displays are still struggling to meet the demands of a consumer-ready solution. This thesis addresses technical challenges in their practical implementation, focusing specifically on image quality improvements based on algorithmic development. The thesis builds a holographic display system prototype and reconstructs established 3D scenes and real-world scenes captured using commercially available RGB-D cameras. By closely examining the reconstructed images, experimental reconstruction issues are evaluated. Given that image quality degradation is one of the most significant issues in holographic displays, the gradient descent method is introduced to phase-only computer-generated hologram (CGH) optimisation. Contemporary image quality metrics (IQMs) that account for the human visual system are employed as loss functions to improve the reconstructed image quality. Extensive objective and subjective assessment of experimentally reconstructed images reveals that perceived quality improves considerably should the appropriate IQM loss be selected. Finally, the gradient descent method is extended to 3D hologram generation. While previous work optimises 3D CGH generation primarily for the in-focus region, this research further combines the method with an incoherent imaging module and a reformulated loss function to improve the defocus effect without sacrificing in-focus image quality. The experimentally acquired results demonstrate its effectiveness in reconstructing realistic 3D images beyond the capabilities of existing 3D hologram generation algorithms.
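The phase-only CGH optimisation loop described in this abstract can be sketched in miniature. The example below is hypothetical: it treats the far-field reconstruction as a single FFT, uses plain MSE where the thesis uses perceptual IQM losses, and uses finite-difference gradients for clarity rather than the thesis's actual implementation.

```python
import numpy as np

def reconstruct(phase):
    """Far-field intensity of a phase-only hologram (single-FFT model)."""
    field = np.exp(1j * phase)
    far_field = np.fft.fft2(field) / np.sqrt(phase.size)
    return np.abs(far_field) ** 2

def loss(phase, target):
    # MSE stands in for a perceptual IQM loss here
    return np.mean((reconstruct(phase) - target) ** 2)

rng = np.random.default_rng(0)
n = 8                                   # tiny hologram for illustration
target = rng.random((n, n))
target /= target.mean()                 # match the reconstruction's mean energy
phase = rng.uniform(0.0, 2.0 * np.pi, size=(n, n))

lr, eps = 0.5, 1e-6
initial = loss(phase, target)
for _ in range(30):                     # gradient descent on the phase pattern
    base = loss(phase, target)
    grad = np.zeros_like(phase)
    for i in range(n):
        for j in range(n):
            trial = phase.copy()
            trial[i, j] += eps          # finite-difference gradient estimate
            grad[i, j] = (loss(trial, target) - base) / eps
    phase -= lr * grad
final = loss(phase, target)             # lower than `initial` after descent
```

In practice the gradient is obtained analytically or by automatic differentiation, which is what makes swapping in differentiable IQM losses straightforward.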
- ItemOpen AccessRandom Features for Efficient Attention ApproximationLikhosherstov, ValeriiTransformers are, perhaps, the most widespread architectures in today's landscape of deep learning. They, however, do not scale well with long sequences, resulting in O(L^2) computational complexity for the sequence length L. In this thesis, we propose a holistic approach for reducing this complexity to linear O(L) via an unbiased approximation of the softmax kernel appearing in self-attention, the main component in the Transformer backbone. The obtained efficient Transformer architecture is referred to as Performer. Compared to other developments in the area of efficient Transformers, Performer is theory-grounded and the problem of long-sequence processing can be reduced to a theoretical derivation of random features with certain properties minimizing the variance of the approximation. This thesis describes an evolution of mechanisms for random-feature approximation: from the so-called FAVOR, to FAVOR+ and, finally, to FAVOR++. The FAVOR++ mechanism has the tightest concentration properties and the best performance in practice. On the way to FAVOR++, we also discuss several extensions of the proposed efficient self-attention mechanism, among which are masked self-attention, sub-linear memory Performers, generalized attention, and more. For each proposed method, this thesis contains empirical evaluations in real-life large-scale learning setups and thorough theoretical analyses with proofs.
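The softmax-kernel approximation underlying Performer can be illustrated with positive random features in the style of FAVOR+: phi(x) = exp(Wx - |x|^2/2)/sqrt(m) satisfies E[phi(q).phi(k)] = exp(q.k). The numpy sketch below is illustrative only (dimensions, scalings, and the plain Gaussian projection matrix are chosen for clarity, not taken from the thesis):

```python
import numpy as np

def positive_features(x, W):
    """Positive random features: phi(x)_i = exp(w_i.x - |x|^2 / 2) / sqrt(m),
    so that E[phi(q).phi(k)] = exp(q.k), the softmax kernel."""
    m = W.shape[0]
    projections = x @ W.T
    return np.exp(projections - 0.5 * np.sum(x**2, axis=-1, keepdims=True)) / np.sqrt(m)

rng = np.random.default_rng(0)
L_seq, d, m = 12, 16, 4096          # sequence length, head dim, feature count
Q = rng.normal(size=(L_seq, d)) / np.sqrt(d)
K = rng.normal(size=(L_seq, d)) / np.sqrt(d)
V = rng.normal(size=(L_seq, d))
W = rng.normal(size=(m, d))         # Gaussian projection matrix

exact = np.exp(Q @ K.T)                              # softmax kernel, O(L^2)
phi_q, phi_k = positive_features(Q, W), positive_features(K, W)
approx = phi_q @ phi_k.T                             # unbiased estimate

# linear-complexity attention: never materialise the L x L matrix
numerator = phi_q @ (phi_k.T @ V)                    # O(L m d)
denominator = phi_q @ phi_k.sum(axis=0)
attention_out = numerator / denominator[:, None]
```

The positivity of the features is what keeps the estimator well behaved for small kernel values; the FAVOR+ and FAVOR++ mechanisms described in the thesis refine this basic construction to tighten its concentration.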
- ItemEmbargoFundamental Configuration Optimisation of Aircraft with Electric PropulsionFlanagan, FergusElectric propulsion has the potential to induce a paradigm shift in how aircraft perform, provided the same shift occurs in their design. Improvements to conventional aircraft have reduced CO2 emissions per passenger kilometre by, on average, 3.8% per year since 1960, but these incremental improvements to conventional designs are not sufficient to meet the ambitious goals for reductions in the environmental impact of air transport. To meet these goals, radically different architectures and propulsion systems are required. These radical architectures are by definition difficult to design due to the lack of historical data to inform configuration choices, which are normally enabled by prior knowledge of performance trends for various configurations. Without an established configuration, optimisers cannot be used, owing to the small integer variable choices, such as the number and position of propulsors, embedded in configuration design. This challenge is solved by developing an optimisation scheme for hybrid-electric aircraft that can handle the small-value integer variables that derail conventional schemes. This scheme is applied to conventional aircraft, for conventional missions, and replicates the expected performance and design of these aircraft. The scheme is then applied to missions currently infeasible with conventional aircraft. The first of these missions was a STOL aircraft with a 500 km range carrying 10 people. Application of the optimisation scheme produced an aircraft capable of a 241 m take-off run and a 500 km cruise at 200 kt while burning only 287 kg of fuel.
The second mission was a STOL survey aircraft designed to operate in the High Himalayas: an aircraft that bridges the capability gap between the remote access of helicopters and the endurance of fixed-wing aircraft, coupling a 250 m balanced field length with a 1200 km range whilst burning only 217 kg of Jet-A. This work shows how unlocking this integer variable problem is critical to evaluating the true potential of electric propulsion architectures, how these architectures shift and morph conventional design space maps, and how they unlock performance previously unavailable to both conventional designs and conventionally configured aircraft with electric propulsion.
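The difficulty with small integer variables can be made concrete with a deliberately naive stand-in: enumerate the handful of feasible integer choices and solve a continuous sub-problem for each. The cost model, variable names, and ranges below are entirely invented for illustration; the thesis's actual scheme is not reproduced here.

```python
def continuous_opt(n_prop):
    """Toy inner problem: sweep a continuous design variable w for a fixed
    propulsor count n_prop and return the best (w, cost) pair.
    The cost function is made up: a quadratic in w, a benefit that grows
    with propulsor count, and a per-propulsor weight penalty."""
    best_w, best_cost = None, float("inf")
    for step in range(1, 100):
        w = step * 0.1
        cost = (w - 3.0) ** 2 + 5.0 / n_prop + 0.8 * n_prop
        if cost < best_cost:
            best_w, best_cost = w, cost
    return best_w, best_cost

# outer loop: exhaustive search over the small integer variable, which a
# purely gradient-based optimiser cannot traverse
results = [(n, *continuous_opt(n)) for n in range(1, 7)]
n_best, w_best, cost_best = min(results, key=lambda r: r[2])
```

Exhaustive enumeration only works because the integer choices are few and small-valued; the point of a dedicated scheme is to retain that robustness without the combinatorial blow-up as more integer choices are added.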
- ItemOpen AccessManagement of the UK plutonium stockpile using thorium fuelled Light Water ReactorsMorrison, SophieThe UK government is responsible for the world’s largest stockpile of civil plutonium (Pu). The intention is to manage the stockpile through the implementation of an appropriate recycling strategy, expected to centre around the use of Mixed OXide (MOX) fuelled Light Water Reactors (LWRs). Typically, MOX fuel involves the use of uranium (U) as a fertile carrier matrix for fissile Pu. However, the effect of aging and isotopic decay within the UK Pu stockpile impacts the feasibility of this approach. The build-up of americium-241 from the decay of Pu-241 leads to increased fissile feed requirements which, in the case of U-Pu MOX fuels, cause the Void Coefficient (VC) to become positive under transient conditions for a stockpile-averaged Pu vector from the year 2055 onwards. This is unacceptable from a regulatory perspective. Replacing uranium with thorium (Th) significantly improves reactivity feedback coefficients such that, if UK Pu is to be recycled with Am-241 in-situ, Th-Pu MOX fuels provide a favourable alternative to U-Pu MOX. Analysing the effect of isotopic composition on reactivity feedback coefficients showed that the fissile isotopes provide the greatest contributions, regardless of the Am-241 content in the fuel. The main issue to note is that the use of Th-based MOX fuels results in Moderator Temperature Coefficient (MTC) trends which do not become less negative with burnup, meaning that batch averaging effects cannot be relied upon as a passive safety measure. Heterogeneous loading of Am and Pu, where Am-241 is concentrated in approximately half of the peripheral fuel assembly pins, has minimal effect on the overall Pu and Am destruction rates in the PWR and does not lead to significant improvements in the MTC trends.
However, radial and axial heterogeneous loading of Am and Pu in the ABWR offers fuel performance benefits in terms of increased Am-241 destruction and reduced curium (Cm) accumulation. From a fuel cycle perspective, the security burden associated with the UK Pu stockpile is better managed using Th-Pu MOX than U-Pu MOX. Th-Pu MOX fuelled PWRs require a greater fissile feed than U-Pu MOX fuelled PWRs but can achieve significantly higher levels of Pu and minor actinide (MA) destruction, leading to more rapid and complete stockpile depletion. The higher fissile loadings and greater Pu and MA destruction potential in the Th-Pu MOX case result in a lower mass of spent nuclear fuel (SNF) produced and marginally lower decay heat, radioactivity, and radiotoxicity, though the differences between Th-Pu and U-Pu MOX SNF are small enough that this will offer only limited benefits from a handling and disposal perspective. The potential profits associated with recycling the stockpile are comparable regardless of whether the recycling vehicle used is thorium or uranium. These profits may be marginally increased by removing Am-241 from the stockpile and recycling the purified plutonium. However, the difference in profits associated with removing the Am-241 from the stockpile versus leaving it in-situ is minor. In addition, removing the Am (or “cleaning” the stockpile of Am-241) will complicate the overall UK Pu management strategy because an additional strategy would be needed to deal with the separated Am-241. Implementation timescales are important, as delays in selecting a recycling strategy lead to greater fissile feed requirements to overcome the reactivity penalty associated with increased levels of Am-241. This further complicates the fuel manufacturing process, limits the income potential, and prolongs the security burden.
A major difference now compared to fifteen years ago is that the need to design and build a MOX fuel fabrication facility (MFFF) means that Th-Pu MOX fuels have the opportunity to be ready for commercial use within the same timescale as U-Pu MOX fuels if research and development (R&D) into Th-UK-Pu MOX is conducted in parallel with MFFF construction and whilst R&D into UK Pu in general is ongoing. Three papers have been published as a result of the research presented in this thesis, with an additional paper currently in preparation:
1. Morrison, S. L., Lindley, B. A., Parks, G. T., Isotopic and spectral effects of Pu quality in Th-Pu fuelled PWRs, Annals of Nuclear Energy, Volume 117, 318–332 (2018).
2. Morrison, S. L., Parks, G. T., The effect of Am-241 on UK plutonium recycle options in thorium-plutonium fuelled LWRs – Part I: PWRs, Annals of Nuclear Energy, Volume 135, 106952 (2020).
3. Morrison, S. L., Parks, G. T., The effect of Am-241 on UK plutonium recycle options in thorium-plutonium fuelled LWRs – Part II: BWRs, Annals of Nuclear Energy, Volume 135, 106974 (2020).
4. Morrison, S. L., Worrall, A., Gregg, R., Parks, G. T., Recycle options for UK plutonium using MOX fuelled LWRs (in preparation).
- ItemEmbargoIndustry Emergence and the Underlying EcosystemsLuo, YiningIn the digital era, the emerging industries enabled by digital technologies such as cloud computing, big data, and artificial intelligence have shown different patterns from the product-centric industries of the industrial era. Rather than producing a standard product at lower cost through mass production, the key to the emergence and development of contemporary emerging industries is delivering customised and complex solutions and services for different customers. The increasingly complex value proposition requires external resources and knowledge from different industries, making ecosystems more important as a new way of organising innovation and business. The service-centric, customised, and boundary-crossing trends of the digital age have challenged the classic product innovation and industry evolution theories, which are largely based on evidence from the industrial age. Although the stages of industry emergence have been examined from a process perspective, we still need to uncover, from an organisational perspective, how the transitions between stages are achieved and how a new industry is created by a set of actors. Moreover, the cumulative research on industry emergence from diverse disciplines requires integration. The aim of this research is to explore how a new industry emerges based on the underlying ecosystems. Specifically, the two sub-questions are: (1) how does industry emergence unfold in the digital era? and (2) how do the underlying ecosystems enable the transitions and emergence? To answer these research questions, I conducted an in-depth longitudinal study of the cloud computing industry in China from 2008 to 2021. Using multiple methods to theorise from process data, I developed a holistic process model of industry emergence and industry ecosystem models from actor and activity perspectives.
Concerning the industry emergence process, four stages and four transitions characterised by dominant design dynamics are identified. The findings show the decoupling between the dominance of technology design and the dominance of business model design, and how the dominant design dynamics influence industry emergence, which fills a gap in industry emergence research and contributes to dominant design theories. Concerning the underlying ecosystems, this research first identifies the roles of ecosystem shapers, complementors and facilitators at different stages of industry emergence. Second, it explains the generic transition mechanism by identifying the multi-level industry ecosystem as the linkage between the supply side and demand side and by introducing cross-industry innovation and application scenarios as new elements in the industry ecosystem. This fills a gap in ecosystem research at the industry level and offers strategic and policy implications.
- ItemOpen AccessStructured Mining and Analysis of Public Energy Data to Evaluate, Improve and Assist the Execution of the UK’s Built Environment Energy Efficiency PoliciesYuan, MingdaThe UK has committed to reducing its carbon emissions by 80% compared to 1990 levels by 2050. The government has introduced various built environment energy efficiency policies in the hope of meeting the 2050 target. Alongside the policies, the government has also collected and published high-coverage, high-quality and high-resolution public data to monitor, target and aid carbon reduction and energy efficiency policies. Despite significant improvements in the availability of data and information, we believe much of the public data is still under-utilised. Structured mining and analysis that take advantage of different data mining and statistical algorithms to gain a deeper understanding of the built environment are therefore both desirable and necessary. In this thesis, we explore new structured mining and analysis frameworks and procedures to tackle problems in built environment energy efficiency based on the available public datasets. In this way, we evaluate, improve and assist the application of the UK’s built environment energy efficiency policies. We demonstrate our exploration in three studies that have successfully found unforeseen patterns in public data and provided insights into the energy efficiency of the built environment. The first study proposed a two-step framework, demonstrated by analysing domestic gas consumption, to bridge the resolution differences between the data and policy executive bodies. Different clustering algorithms (i.e. Gaussian Mixture Models and Hierarchical Clustering) were selected to suit the objective of each layer. We showed that the same gas consumption-related variable could have different relationships, both qualitatively and quantitatively, in different clusters of small areas.
We also grouped the executive bodies to help them better collaborate and fine-tune the execution of the built environment energy efficiency policies. We combined statistical analysis and data mining in the second study to analyse the reliability of non-domestic EPC ratings. Locally weighted regression (LOWESS) analysis found inconsistency and human factors in the non-domestic EPC ratings. Buildings whose conditions remain unchanged can be rated better by approximately 10 points on average when their initial rating is above the minimum requirement in the regulation. Clustering analysis of EPC recommendation changes, together with association rule mining, further justified these findings and provided practical energy efficiency improvement strategies that people have already naturally adopted. The third study opened the discussion of the relationship between local economic development and built environment energy efficiency. The geographical units created by complex network community detection, called high-growth business community areas in our study, are shown to have their non-domestic buildings improved to a better average rating from a lower starting position compared to other areas. Further clustering of community groups shows that different development paths also tend to be linked with different built environment energy efficiency improvement patterns.
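A grouping step of the kind used in the first study can be illustrated with a small hierarchical-clustering sketch. The data below are synthetic stand-ins for per-area consumption features; the thesis's actual variables, datasets, and cluster counts are not reproduced here.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(0)
# synthetic per-area features, e.g. (mean gas use, an efficiency score);
# two well-separated latent groups stand in for real small areas
low_use = rng.normal(loc=[10.0, 0.2], scale=0.5, size=(20, 2))
high_use = rng.normal(loc=[25.0, 0.8], scale=0.5, size=(20, 2))
areas = np.vstack([low_use, high_use])

# agglomerative (Ward) clustering, then cut the dendrogram at 2 clusters
tree = linkage(areas, method="ward")
labels = fcluster(tree, t=2, criterion="maxclust")
```

Once areas are grouped this way, the relationship between a consumption-related variable and its drivers can be modelled separately within each cluster, which is what allows the same variable to show different relationships in different clusters.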
- ItemEmbargoThe Aerodynamics of Cricket Ball SwingBriggs, AaronThe technique of swing bowling is used in the sport of cricket to gain an advantage over the batter and improve a team’s chance of winning a match. The current best practice for swing bowling is based on anecdotal evidence, with little consensus on the optimal strategies. Bowling technique, the condition of the ball and atmospheric conditions are all considered important for swing, yet there is little quantification of these factors even in elite cricket. Previous experimental studies have not provided a description of swing that aligns with measurements of on-field swing made using ball-tracking technology. This research seeks to provide clarity on the physical mechanisms behind swing bowling for new and used cricket balls. Appropriate boundary conditions are measured in field tests and applied to experiments so that the results represent on-field swing. Wind tunnel tests are performed to measure the changes in the aerodynamic force coefficient, CF. The relevant findings are related back to cricket through tools for use in the professional game. For new cricket balls, it is shown that conventional swing has a magnitude of 0.3
- ItemEmbargoBiophysical characterization of biomolecular condensates through microfluidic technologiesWelsh, Timothy; Welsh, Timothy [0000-0001-7817-5722]Biomolecular condensates can form through phase separation when mixtures of proteins and nucleic acids coalesce to form membrane-less subcompartments within the cell. These condensates have been implicated in a range of key physiological and pathological processes in living organisms, ranging from energy storage and RNA processing to driving cancer and neurodegenerative diseases. However, key biophysical questions remain as to how condensates are able to remain stable in cells, how the transition from individual molecules to micron-scale condensates occurs, and the exact nature of the interactions of multiple heterogeneous components within condensates. Conventional biophysical methods are often limited in their ability to describe the complexity and heterogeneity of the underlying condensate structures and their associated biophysical parameters. In my thesis, I establish and apply approaches to perform biophysical characterization of biomolecular condensates using microfluidic technology. These methods allow us to measure the stability of condensates through single-condensate zeta potential measurements, to probe the interactions between phase-separating proteins and specific RNA sequences, other proteins and copolymers, and to determine the sizes of nanoscale condensate assemblies. Beyond these characterizations, I have also helped to develop high-throughput methods for assessing the phase diagram of proteins to understand how environmental factors shift the phase separation boundary. Throughout these studies, by utilizing microfluidic methods I am able to study the properties of condensates under flow while preventing surface interactions.
Overall, my findings and developments have implications for progressing our understanding of biomolecular condensates and have direct links to the design of therapeutic interventions, which will rely on detailed knowledge of the biophysical parameters defining condensate behavior.
- ItemOpen AccessModelling the floating-catalyst method for carbon nanotube productionGokstorp, FilipCarbon nanotube fiber, as a material, promises properties superior to those of ordinary carbon fiber: it has higher tensile strength, as well as high electrical and thermal conductivity. The production process is implemented in several university rigs, as well as in some industrial settings, but the process is not well understood or physically modelled. This thesis investigates several components of this process, presents computational models of the key physical processes, and assimilates experimental data into those models in order to improve their accuracy. The production process consists of a heated quartz reactor tube that is continuously fed with hydrogen, methane, ferrocene, and thiophene. First, a hydrodynamic computational fluid dynamics solver is developed to simulate the flow in the reactor and to model the temperature gradient in the flow. This solver is also extended to model the hydrodynamic stability of the flow through a gas exchange valve attached to the outlet of the production reactor. A simplified model of the finished carbon nanotube aerogel is presented to evaluate the influence of a convecting solid structure on the flow through the gas valve. Further detail of the production process is then investigated by calculating the decomposition rate of thiophene from experimental observations. The problem of finding the decomposition rate is set up using a Bayesian inference framework, and the resulting objective function is minimised using a gradient-based method. An adjoint method is used to calculate the gradient of the objective function with respect to the model parameters. The decomposition rate of thiophene in a hydrogen atmosphere is then used to compare the decomposition of ferrocene and thiophene in the reactor using two different reactor inlet conditions. The implications for the production of carbon nanotubes are presented.
Finally, the nucleation, growth and evaporation of the catalyst nanoparticles in the reactor are investigated. A simple particle model that can quantitatively describe the mass fraction of the particles in the flow is developed. This model is first applied to predict how the radial particle mass fraction distribution varies with flow rate and input ferrocene concentration. The particle model is then fitted using the Bayesian inference framework, again using gradient-based minimisation with an adjoint method to calculate the gradient of the objective function with respect to the model parameters. The implications of the best-fit parameters are discussed, and a parameter set closer to the decomposition rate of thiophene than to that of ferrocene is found to describe the experimental results best. The models presented in this study can be used to guide further experimental studies of the carbon nanotube production process and to improve the design of the reactor used. The adjoint method presented can be applied to other fields in which analytical and quantitative models can be paired with experimental data to improve the models’ parameters. Further experimental data can easily be assimilated into the models presented here, because the Bayesian inference framework provides a rigorous process for assimilating both new and existing data. Finally, this thesis also explains why the final carbon nanotube product forms a sock-like structure, and highlights that the availability of sulphur is critical to the formation of the catalyst nanoparticles from which carbon nanotubes grow.
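The fitting procedure described here, a Bayesian objective minimised with gradient information, can be sketched in miniature. Everything below is hypothetical: a first-order decay model with an invented rate and synthetic data, and a closed-form gradient where the thesis differentiates a reactor model via an adjoint solver.

```python
import numpy as np

# synthetic observations of a first-order decomposition, y = y0 * exp(-k t),
# with an invented "true" rate; stands in for real reactor measurements
rng = np.random.default_rng(1)
k_true, y0 = 2.0, 1.0
t = np.linspace(0.0, 2.0, 20)
y_obs = y0 * np.exp(-k_true * t) + rng.normal(0.0, 0.01, t.size)

K_PRIOR, S_PRIOR = 1.0, 10.0             # weak Gaussian prior on the rate

def neg_log_posterior(k):
    """Bayesian objective: misfit term plus prior term."""
    residual = y0 * np.exp(-k * t) - y_obs
    return 0.5 * np.sum(residual**2) + 0.5 * ((k - K_PRIOR) / S_PRIOR) ** 2

def gradient(k):
    # analytic gradient of the objective wrt k; an adjoint method computes
    # the same quantity when the forward model is a solver, not a formula
    residual = y0 * np.exp(-k * t) - y_obs
    return np.sum(residual * (-t) * y0 * np.exp(-k * t)) + (k - K_PRIOR) / S_PRIOR**2

k = 0.5                                  # deliberately poor initial guess
for _ in range(500):                     # plain gradient descent
    k -= 0.05 * gradient(k)
```

The recovered rate settles near the value that generated the data, slightly pulled toward the prior mean; the advantage of the adjoint formulation is that the cost of the gradient does not grow with the number of model parameters.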
- ItemOpen AccessImproving Cascaded Systems in Spoken Language ProcessingLu, YitingSpoken language processing encompasses a broad range of speech production and perception tasks. One of the central challenges in building spoken language systems is the lack of end-to-end training corpora. For example, in spoken language translation, there is little annotated data that directly transcribes speech into a foreign language. Therefore, a cascaded structure is widely adopted in spoken language processing. This breaks down the complex task into simpler modules, so that individual modules can be trained with a sufficient amount of data from the associated domains. However, this simplified cascaded structure suffers from several issues. The upstream and downstream modules are usually connected via an intermediate variable, which does not always encapsulate all the information needed for downstream processing. For example, speech transcriptions cannot convey prosodic information, so any downstream task operating on transcripts has no access to prosody. The cascaded structure also forces early decisions to be made at the upstream modules, and early-stage errors can propagate through and degrade the downstream modules. Furthermore, individual modules in the cascaded system are often trained in their corresponding domains, which can differ from the target domain of the spoken language task. The mismatch between training and evaluation domains causes performance degradation at the inference stage. The focus of this thesis is therefore to investigate multimodular integration approaches addressing the issues facing the simple cascaded structure, and to improve spoken language processing tasks under limited end-to-end data. The contributions of this thesis are three-fold. The first contribution is to describe the general concept of multimodular combination.
The scoring criteria are modified to enable assessment of individual modules and complete systems, and approaches are explored to improve the vanilla cascaded structure. Three categories of spoken language systems are considered, with an increasing level of module integration: cascaded, integrated and end-to-end systems. Cascaded systems train individual modules in their corresponding domains and do not require any end-to-end corpora. Integrated systems propagate richer information across modular connections and require a small amount of end-to-end data to adapt to the target domain. End-to-end systems drop the notion of modules and require a large amount of end-to-end data to reach convergence. More tightly integrated systems generally require larger amounts of end-to-end training data. Given this trade-off between modelling power and data efficiency, different approaches are discussed that aim to strike a balance between the two. The second contribution of this thesis is to propose a general framework of reranking for multimodular systems, addressing both the error propagation and information loss issues. Rerankers are commonly used for single-module sequence generation tasks, such as speech recognition and machine translation. In this work, rerankers are applied to multimodular systems, where they directly access the hypothesis space of the intermediate variables at the modular connection. Taking into account multiple hypotheses at the modular connection leads to a richer information flow across modules and consequently helps reduce error propagation. The third contribution of this thesis is to propose the embedding passing approach. The idea is to extract continuous feature representations of the upstream context and use them as the modular connection. The embedding connection allows richer information propagation as well as gradient backpropagation across modules, thus enabling joint optimisation of the multimodular system.
Among the wide range of possible spoken language tasks, this thesis considers three example tasks with an increasing level of complexity: spoken disfluency detection (SDD), spoken language translation (SLT) and spoken grammatical error correction (SGEC). Spontaneous speech often comes with disfluencies, such as filled pauses, repetitions and false starts. As an important pre-processing step for many spoken language systems, SDD removes speech disfluencies and recovers a fluent transcription for downstream text processing tasks. SLT converts speech inputs into foreign-language text outputs, and is commonly adopted for automatic video subtitling as well as simultaneous interpreting. It is a challenging application that brings together automatic speech recognition (ASR) and neural machine translation (NMT), both of which are complex sequence-to-sequence tasks. With growing global demand for learning a second language, SGEC has become increasingly important for giving feedback on the grammatical structure of spoken language. SGEC converts non-native disfluent speech into grammatically correct fluent text, and the main challenge is to operate under extremely limited end-to-end data. The SDD and SLT systems are evaluated on the publicly available Switchboard and MuST-C datasets respectively, and the SGEC system is evaluated on a proprietary LIN corpus. The experiments demonstrate that the simple cascaded structure gives reasonable baselines for spoken language tasks, and that the proposed reranking and embedding passing approaches are both effective in propagating richer information and mitigating error propagation under limited end-to-end training corpora.
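The reranking idea, scoring several upstream hypotheses with the downstream model before committing to one, can be sketched with a toy example. The hypotheses, scores, and weights below are invented for illustration and are not from the thesis.

```python
# hypothetical 3-best ASR output: (hypothesis, ASR log-score)
asr_nbest = [
    ("i saw the cat", -1.2),       # the ASR's greedy 1-best
    ("i saw a cat", -1.5),
    ("eye saw the cat", -2.9),
]

# downstream log-scores (e.g. from a translation model) for each hypothesis
downstream = {
    "i saw the cat": -4.0,
    "i saw a cat": -2.0,
    "eye saw the cat": -6.5,
}

def rerank(nbest, down_scores, alpha=1.0, beta=1.0):
    """Pick the hypothesis maximising a weighted sum of upstream and
    downstream scores, rather than committing to the ASR 1-best."""
    return max(nbest, key=lambda h: alpha * h[1] + beta * down_scores[h[0]])[0]

greedy_choice = asr_nbest[0][0]                    # plain cascade commits here
reranked_choice = rerank(asr_nbest, downstream)    # -> "i saw a cat"
```

Here the plain cascade commits to the ASR 1-best, while the reranker prefers a hypothesis the downstream model finds far more plausible, which is the sense in which reranking propagates richer information across the modular connection.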