Theses - Engineering

Recent Submissions

  • Item (Open Access)
    Automating Assembly on Construction Sites
    Butterfield, Timothy
    This thesis seeks to address practical challenges within mainstream construction operations, proposing the adaptation of existing processes and equipment to facilitate a new automated assembly capability. Automation has enabled significant increases in productivity in manufacturing and logistics, and a wide range of automated machines have been developed for construction industry applications. However, most have failed to gain widespread adoption in industry, primarily due to poor compatibility with the commercial environment and practical requirements of mainstream construction work. Through examination of these characteristics, the development of automated assembly capabilities by incrementally adapting existing equipment and components is identified as highly conducive to achieving significant industry adoption. The scope of this approach is developed around preliminary criteria for construction vehicles adapted to perform automated robotic handling, and for existing component systems modified to enable rigid handling and automated assembly. The characteristics of existing handling vehicles are established through experimentation and literature review, and requirements for handling existing component systems are established through analysis of commonly used component types. Current processes for creating and exchanging positioning information are investigated through analysis of case studies. To facilitate automated assembly, construction vehicles require additional technologies to enable robotic control and component positioning, and existing flexible lifting equipment must be replaced with rigid handling systems. Existing connection designs must be modified or replaced, and existing digital frameworks adapted to provide assembly positioning and to store as-built information. The feasibility of implementing these changes has been considered through investigation of off-the-shelf products and practical solutions for each aspect. On the basis of the analysis in this thesis, the proposed approach to automated assembly is confirmed to be feasible overall, notwithstanding varying levels of viability for different technologies, varying levels of modification required for different component types, and the limitations of the scope of this investigation.
  • Item (Open Access)
    Embedded liquid-laminated connections for structural glass applications
    Volakos, Efstratios
    The structural use of glass in buildings and structures has dramatically increased in recent years due to the ever-growing demand for architectural transparency. Indeed, the use of glass has evolved from simple infill panels for framed windows to large frameless facades and primary structural members (columns, beams). Given the brittle nature of glass, one of the most critical issues in glass engineering is how to effectively connect structural glass components in a visually unobtrusive manner. To date, bolted assemblies represent one of the most common methods for connecting glass. However, bolted glass connections are structurally inefficient because they generate high tensile stress concentrations which cannot be plastically redistributed due to the brittleness of glass. Consequently, research has focused on adhesive connections, which distribute the loads more evenly, thereby reducing the stress concentrations and simultaneously eliminating drilling of the glass. About a decade ago, a new type of adhesive connection emerged, known as embedded laminated glass connections, which has significantly improved the load-bearing capacity and appearance of glass connections. Typically, these connections consist of a metallic insert which is partially embedded within a glass laminate via solid foil interlayers, and they are assembled in an autoclave. However, unfavourable residual stresses are set up in the embedded zone due to the differential thermal expansion between the glass and the metallic insert during the autoclaving fabrication process. To address this, this research aims to develop a novel variant of embedded laminated connections in which lamination is achieved through a liquid resin interlayer. Unlike autoclave lamination, resin lamination is performed at much lower temperatures, thereby drastically reducing the undesirable residual stresses and the energy consumption required for lamination. To this end, research has been undertaken to assess the structural performance of embedded resin-laminated connections under various loading and environmental conditions and to develop analytical/numerical tools capable of adequately predicting the connection's mechanical response for engineering design purposes. Specifically, in this thesis, the axial tensile load-carrying behaviour of embedded resin-laminated connections with thin steel inserts is investigated by means of experimental pull-out tests performed on physical connection prototypes. These tests are executed at varying displacement rates and temperatures in order to assess the effect of the time- and temperature-dependent behaviour of the polymeric resin interlayer on the strength, stiffness and failure mode of the connection. The resistance of the connection to humidity is also examined: as-new connection specimens are subjected to an artificial accelerated weathering schedule and then tested under pull-out loading in order to compare their mechanical response with that of their non-weathered counterparts. To further interpret the experimental findings and better understand the connection's mechanical behaviour, numerical finite element (FE) simulations of these tests are performed. The principal load-transfer/failure mechanisms of the connection are identified, and the associated complex stress state within the connection is studied in order to quantify the resulting stress peaks. Alternative connection configurations are also numerically examined with the purpose of improving the connection's structural performance.
Based on the experimental and numerical data, an analytical model is developed that captures the pull-out load-displacement response of the connection with less effort, time and cost than numerical modelling or experimental testing. This analytical approach therefore provides insight into the connection response and is a useful aid for preliminary sizing of such connections during the initial stages of design. Finally, the benefits and viability of the embedded liquid-laminated connection examined in this research are demonstrated by assembling a novel glass component (demonstrator). The successful completion of this component confirms that embedded resin-laminated connections are suitable for the development of robust and aesthetically appealing real-world glass applications.
  • Item (Open Access)
    High-Speed Organic Photodetectors for Visible Light Communication
    Singh, Gurashish
    Organic semiconductors possess a variety of characteristics that surpass conventional rigid electronics. In recent years, visible light communication (VLC) has attracted considerable interest for use in in-home and sensor networks. Organic optoelectronic devices such as organic light-emitting diodes (OLEDs) and organic photodetectors (OPDs) have great potential for VLC links because of low-cost fabrication and integration on flexible substrates. However, these are developed primarily for display applications and efficient photovoltaics, respectively, and there are few reports on their high-speed operation. Here, the research aims to achieve high-bandwidth OPDs for VLC links. The focus is on simulating optimised bandwidth performance and fabricating prototype devices based on organic semiconductors. In end-user contexts like LiFi and device-to-device communications, and in sensor networks like the Internet of Things (IoT), VLC provides an energy-efficient and cost-effective connection. First, it uses existing lighting infrastructure, making data transmission cost-effective and energy-efficient. Second, it improves security, because light signals do not penetrate walls, lowering the chance of eavesdropping. Third, VLC allows for faster communication in areas where radio frequency-based technologies may encounter interference or saturation. Due to these advantages, VLC is a promising technique for various applications, including indoor positioning, smart lighting systems, and data communication in sensitive environments. Organic optoelectronic devices, more specifically OLEDs and OPDs, show significant promise for these kinds of systems because they can be manufactured at large scales using low-cost, solution-processable materials and can be readily integrated on flexible substrates. However, OLEDs are largely tailored for display applications, emphasising high brightness, power efficiency, stability, and longevity, while OPDs, which are needed for energy harvesting (photovoltaics), are developed to attain high photon conversion efficiency. This is an important distinction to make, but it is often overlooked. The bandwidth performance of these devices, and the degree to which they have been optimised for high-speed operation, have received surprisingly little research attention. Consequently, this research focuses on computational and experimental efforts to attain OPDs for VLC systems with large bandwidths (~MHz range), which are suitable for many sensor applications. The OPD design presented here consists of an ultra-thin multi-layered structure that incorporates PTCBI and CuPc organic semiconductors. This structure enables quick exciton dissociation and efficient charge collection at the appropriate electrodes. Original modelling, simulation and experimental studies are conducted. The work extensively explores the fabrication equipment and structure of the OPD, summarising the fabrication process and current research trends on OPD structures. It then presents modelling and simulation findings for bilayer and multilayer devices.
Fundamental modelling concepts for an organic photodiode, the impact of biasing regimes, and the introduction of the novel organic simulation software, OghmaNano, lead to the development of an ultra-thin bi-layer heterojunction organic photodiode. Results from modelling the bi-layer device show remarkable consistency between experimental measurements and predicted outcomes. The device exhibits exceptional dark current performance, sensitivity to nanowatt-level incident visible light, and a noteworthy Linear Dynamic Range (LDR) of 168 dB. The research extends to the discussion of fabricated multiple-layered ultra-thin organic photodetectors. A simulation study assesses their performance, reporting excellent external quantum efficiency. Current-voltage trends for dark and light curves are evaluated through simulation and validated against fabrication results, demonstrating good linearity across devices with varying numbers of layers. The temperature dependence of the dark current is explored, with activation energies extracted from Arrhenius plots. Photocurrent dynamics are investigated, achieving a rapid 3-dB response around 6 MHz for a 4.5 mm² device area and 8 MHz for a 2 mm² area, highlighting the significant potential of these multi-layered photodetectors for visible light communication and diverse sensing applications.
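The Arrhenius analysis mentioned above follows the standard relation I_dark(T) = I0·exp(-Ea/(kB·T)), with the activation energy obtained from the slope of ln(I) against 1/T. The sketch below illustrates that fit; all numbers are synthetic placeholders, not measurements from the thesis.

```python
import numpy as np

# Illustrative Arrhenius fit of dark current vs temperature (synthetic
# numbers, not data from the thesis): I_dark(T) = I0 * exp(-Ea / (kB*T)).
KB_EV = 8.617e-5  # Boltzmann constant in eV/K

T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])                 # K
I_dark = np.array([2.1e-11, 4.9e-11, 1.1e-10, 2.3e-10, 4.6e-10])  # A (made up)

# ln(I) = ln(I0) - (Ea/kB) * (1/T), so the slope of ln(I) vs 1/T is -Ea/kB.
slope, intercept = np.polyfit(1.0 / T, np.log(I_dark), 1)
Ea = -slope * KB_EV
print(f"Estimated activation energy: {Ea:.2f} eV")
```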
  • Item (Open Access)
    Human Perception of Transient Longitudinal Vehicle Motion
    Abdelmoeti, Samer
    Human perception of discomfort during transient longitudinal motion of a vehicle is little understood. Optimizing vehicle motion to maximize occupant comfort is of great interest to vehicle manufacturers. The imminent wide-scale deployment of autonomous control has further increased attention on the ride comfort problem, since user acceptance significantly impacts adoption. Correlations of passenger discomfort and vehicle motion have been studied for decades using regression analysis, but there is increasing doubt about the relevance of these results given rapid developments in vehicle performance and autonomy. It is therefore desirable to develop a mechanistic model of the vehicle occupant for the prediction of objective and subjective responses to transient longitudinal motion. A measurement system capable of providing the data required for identification of a mechanistic model of the passenger is needed. Accordingly, the first contribution of this work is the development of passenger instrumentation based on probabilistic state estimation of the occupant head and torso motion in an accelerating vehicle. Probabilistic visual-inertial sensor fusion is used to overcome the limitations of existing approaches, and body segment dimensions are estimated by appending biomechanical kinematic constraints. A mechanistic model of the passenger is developed, including details of biomechanics, sensory perception, cognition, and muscular action. In particular, this work contributes a novel model of the passenger's ability to temporally anticipate the vehicle motion using sensory cues. The model is used to predict objective responses, including internal latent signals which are hypothesized to be associated with subjective emotions. Measurements and published datasets are used to identify subject-specific model parameter sets. The model predictions are found to fit the measurements reasonably well across a wide range of motion parameter combinations. Subjective-objective correlations are studied to predict passenger discomfort from observed and latent features, providing unique insights which could not be inferred from existing data-driven studies. Results show that discomfort is likely driven by the magnitude of the vehicle acceleration and by the magnitude and predictability of the passenger's head motion. The developed insights are used to devise guidelines for vehicle motion design.
  • Item (Embargo)
    Self-healing strain-hardening cementitious composites (SH2CC) for cyclic loading environments
    Tang, Zixuan [0000-0002-4334-1898]
    Cyclic loading is ubiquitous and frequently encountered in various infrastructure. However, concrete, the most widely used construction material, is inherently brittle and can be poor at sustaining cyclic stress (especially stress states involving tension). Although reinforcement can help avoid structural failure, large cracks are unavoidable and reduce the durability of the material. Timely retrofit or repair of infrastructure damaged or cracked by cyclic loading is therefore crucial, but can be difficult and costly. Aiming to reduce structural damage and repair costs due to cyclic loading, self-healing techniques could be applied in combination with fibre-reinforced strain-hardening cementitious composites (SHCC); the former enables cementitious materials to heal cracks without human intervention, and the latter features much higher ductility than traditional concrete due to effective fibre bridging based on micromechanical design. Accordingly, this research develops novel materials based on SHCC which achieve 1) high deflection capacity (around 5 mm, equivalent to an estimated tensile strain capacity of around 1.92%) under cyclic loading at different strain rates, and 2) effective crack width (CW) control (below 30 μm regardless of sample size) and enhanced self-healing ability, fully recovering mechanical properties within 3 weeks after damage caused by low-rate cyclic loading. Three main challenges are dealt with in this research. Firstly, uniform fibre distribution in SHCC is the prerequisite for developing and producing SHCC with robust mechanical performance, and an appropriate rheological state of the fresh mortar is the key to achieving it, which is difficult to guarantee with existing trial-and-error methods. Therefore, a quantitative method for effective rheology control of fresh SHCC using combined chemical admixtures was first developed based on rheometer testing and regression modelling, and further validated by four-point bending tests. The developed models were able to predict rheological parameters with high confidence and provide guidance for rheology optimisation to achieve robust deflection-hardening behaviour of SHCC. Secondly, both cyclic loading and high loading rates have been reported to reduce the ductility of SHCC in most cases, while the performance of SHCC under cyclic loading at different loading rates has not been sufficiently studied. It is also necessary to develop SHCC with enhanced cyclic endurance, especially at varying or elevated strain rates, for targeted applications. Results show that both cyclic loading and high loading rates can increase the tendency of fibre rupture, thus reducing the ductility of SHCC; combined elevated-rate and cyclic loading posed an even larger negative effect on the SHCC ductility due to accelerated degradation of fibres and fibre-matrix interfaces. Nevertheless, the impact could be minimised through matrix tailoring. Two SHCC mixes with improved rate-cyclic endurance were developed through increasing fly ash (FA) content or incorporating triethanolamine (TEA) based on micromechanical models; these showed a reduced tendency of fibre rupture and thus improved ductility under high-rate reversed cyclic loading. Thirdly, traditional SHCC has exhibited satisfactory CW control below 100 μm, which can promote autogenous healing after cracking, but this process can be slow and insufficient, especially under aggressive loading conditions.
Considering this, the healing efficiency of SHCC can potentially be enhanced and accelerated by incorporating engineered healing agents. Non-encapsulated mineral and/or polymer admixtures were found to be more compatible, not harming the deflection-hardening property of SHCC. Based on this, five SHCC mixes with enhanced self-healing efficiency (named SH2CC) were developed by applying expansive mineral substitutes (MS, including reactive magnesia pellets, or reactive magnesia powder combined with quicklime) and/or a TEA additive in high-volume-FA SHCC, and their self-healing performance after low-rate cyclic damage (up to 50% of ductility) was investigated. All the SH2CC mixes showed almost 100% recovery of various mechanical properties after the 3-week healing conditioning, and SH2CC with MS presented 70-90% CW reduction. MS could promote mechanical recovery through hydration and pozzolanic reaction given a water supply, and more prominently accelerate crack sealing through enhanced carbonation at crack tips. TEA addition was able to pause the initial hydration/carbonation process of SH2CC under air curing, allowing the process to resume upon water contact and boosting both mechanical recovery and crack sealing. Accordingly, SH2CC with combined MS and TEA additives exhibited the highest healing efficiency, which was attributed to 1) improved CW control (≤ 20 μm), and 2) the production of larger amounts of mixed types of healing products (various hydrates and carbonates) both in cracks and at fibre-matrix interfaces, filling cracks and enhancing fibre bonding strength.
  • Item (Open Access)
    Mechanics of Liquid Transport and Swelling in Porous Media
    Das, Ratul
    The mechanics of liquid transport in porous cellulose foam and in a dense random packing of glass spheres are studied in this thesis. Specifically, the capillary and diffusion mechanisms are investigated by critical experiments, and appropriate theoretical models are developed for both porous media. Finally, a coupled diffusion-actuation framework is developed in 1D to predict the water-induced swelling of dry pre-compressed cellulose foam. First, a critical water discharge test via a cellulose foam siphon reveals that liquid transport in the foam is primarily capillary transport through a set of independent tubes. We hypothesize that the cellulose foam comprises independent cylindrical tubes of radius a following a probability distribution function p(a). The distribution function p(a) is measured experimentally, and it suggests that water flows primarily through tubes of radii 0.1 µm to 100 µm. An XCT scan of the wet foam supports this argument. An additional diffusion model is then proposed to explain the spread of water from a damp section of cellulose foam to an adjacent dry section. The diffusion model accounts for the deep moisture traps present in the foam. Our study suggests that water transport in cellulose foam combines capillary transport and diffusion. The mechanics of capillary liquid rise in a random dense packing of glass spheres is then investigated. Experiments reveal the onset of a critical meniscus pinning phenomenon at a certain height during water or glycerol rise. The pinning is interpreted in terms of capillary rise in a wavy-walled tube, where the meniscus can become pinned at several heights at which the local capillary pull balances the weight of the liquid column (see the note following this abstract). We show that imposed pressure fluctuations at the meniscus can result in a slower capillary rise of the liquid above the first pinning height. The 1D actuation of a dry pre-compressed cellulose foam by water absorption is then studied. A Fickian diffusion model is developed to predict the concentration of water in the actuating foam. The model is suitably informed by the dynamic actuation timescale of dry pre-compressed foam and by the sensitivity of the foam's actuation to relative moisture content, measured experimentally. The model adequately predicts the early stage of the actuation response of pre-compressed foam by water absorption. The thesis concludes with recommendations for future work to further understand the nature of liquid transport mechanisms and swelling in porous media.
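The pinning condition described above ("local capillary pull balances the weight of liquid") can be made concrete with textbook capillary statics; the relations below are the standard background, not the thesis's full model.

```latex
% For a wetting liquid in a uniform tube of radius a, the equilibrium
% rise height is given by Jurin's law:
\[
  h_{\mathrm{eq}} = \frac{2\gamma\cos\theta}{\rho g a},
\]
% with surface tension \gamma, contact angle \theta and density \rho.
% In a wavy-walled tube of local radius a(z), the meniscus can pin at
% any height z^{*} where the local capillary pull balances the weight
% of the liquid column below it:
\[
  2\pi a(z^{*})\,\gamma\cos\theta \;=\; \rho g \int_0^{z^{*}} \pi a(z)^2 \, dz .
\]
```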
  • Item (Embargo)
    Optical See-Through Augmented Reality for Task Support in Dynamic Environments
    Kang, Bo
    Recent breakthroughs in processing capabilities, miniaturization of electronic components, advancements in wireless technology, and the evolution of artificial intelligence have heralded an era of unprecedented product functionalities, especially within the consumer electronics domain. As these products become intrinsic parts of our work and daily environments, they offer more than just connectivity and intelligence. They actively shape interactive ecosystems, propelling us into an age defined by pervasive interaction. Yet, this ascent comes with inherent challenges. Crafting Human-Computer Interaction (HCI) within these complex systems can introduce unforeseen complexities. Navigating these intricacies demands a nuanced comprehension of changing task dynamics and an adept strategy for ensuring fluid user engagements. The increasing cognitive load brought on by a surge in interactive modalities and information emphasizes the need for more intuitive interfaces and experiences. Herein lies the potential of an innovative synthesis of Augmented Reality (AR) and systems engineering, aimed at crafting unified human-computer interaction models. Against this backdrop, the research proposes the potential of optical see-through augmented reality (OST-AR) to elevate human-machine interactions in task-laden settings. This is achieved through accurate, context-sensitive visual representations, more efficient workflows, and heightened user-friendliness. This proposition forms the foundation for a research journey that transitions from academic constructs to real-world industrial solutions, eventually finding relevance in household settings. The cornerstone of this research lies in the meticulous enhancement and fine-tuning of OST-AR for handling complex, attention-demanding tasks in real-world scenarios. This effort ensures that both system architects and end-users engage in unified, intuitive interactions, whether they are designing or interfacing with sophisticated equipment, systems, or processes in dynamic situations. This progression not only accentuates OST-AR's accessibility but also paves the way for its expanded relevance across multiple sectors.
  • Item (Open Access)
    A brain-machine interface for investigating neural representations of navigation in a virtual environment
    Sorrell, Ethan
    Simultaneously recording behaviour and neural activity can only take researchers so far in understanding the function of different regions of the brain, and the neurons therein. Novel methods are required for probing beyond standard correlational and knockout analyses. The activity of one brain region might correlate with task-relevant variables, and silencing this region can show its necessity for task success. However, can the activity of this region independently drive successful completion of the task? We propose that Brain-Machine Interfaces (BMIs) are a uniquely poised technology capable of answering such questions. In this thesis, we investigate the neural representations of navigation in the Posterior Parietal Cortex (PPC) of mice during a virtual navigation task. We recorded neural activity during completion of a T-maze task using 2-photon calcium imaging, and trained decoders to extract task-relevant variables from the recorded neural activity (a sketch of this decoding step follows this abstract). We then closed the loop, allowing mice to navigate through the virtual maze directly using these brain signals via the output of our decoders. These experiments showed that mice can successfully navigate through the virtual maze using this BMI, demonstrating that these neural representations in the PPC are sufficient for driving behaviour. Through further investigation of the behaviour and neural activity during BMI use, we also showed that the representations being used for closed-loop control were related to high-level navigational signals, as opposed to low-level motor commands. These results show that the PPC, a region of the brain at the interface between sensory and motor brain regions, is capable of driving navigational behaviour even when bypassing standard downstream neural pathways. We propose that our methods could be applied to other brain regions and experiments, enabling researchers to investigate other challenging questions about encoding and function in the brain.
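As a concrete (and deliberately simplified) picture of the decoding step, the sketch below fits a linear ridge-regression decoder from neural activity to a task variable. The data, shapes and choice of ridge regression are illustrative assumptions, not the decoders used in the thesis.

```python
import numpy as np

# Hypothetical decoding sketch: map frames of 2-photon calcium activity
# to a task variable such as heading in the virtual maze.
rng = np.random.default_rng(0)
n_frames, n_neurons = 1000, 200
activity = rng.normal(size=(n_frames, n_neurons))      # dF/F traces (synthetic)
true_w = rng.normal(size=n_neurons)
heading = activity @ true_w + 0.5 * rng.normal(size=n_frames)  # synthetic label

# Closed-form ridge solution: w = (X^T X + lam*I)^-1 X^T y
lam = 1.0
w = np.linalg.solve(activity.T @ activity + lam * np.eye(n_neurons),
                    activity.T @ heading)

# In closed-loop BMI use, each new frame would be decoded online and fed
# back to steer the virtual environment.
print("Training correlation:", np.corrcoef(activity @ w, heading)[0, 1])
```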
  • Item (Embargo)
    Aerosol Synthesis of Layered Oxide Cathode Materials for Lithium-ion Batteries
    Almazrouei, Manar
    With the global population increasing and energy demand on the rise, the reliance on fossil fuels for energy generation has exacerbated climate change through the release of greenhouse gases. Rechargeable batteries, particularly lithium-ion batteries (LIBs), play a crucial role in efficiently storing renewable energy, making them pivotal in the global transition towards sustainable energy. The LIB market is experiencing significant growth, primarily driven by the shift to electric vehicles. The importance of cathodes in shaping battery performance becomes evident when considering that anode materials often exhibit higher specific capacity, leaving the cathode as the limiting electrode. Enhancing cathode materials and improving their electrochemical performance can be achieved through the refinement of synthesis methods and conditions, including the selection of metal and lithium sources, synthesis atmosphere, and annealing parameters such as temperature, duration, and atmosphere. Various synthesis techniques have been employed for LIB cathode materials, including precipitation, solid-state, sol-gel, hydrothermal, spray pyrolysis (SP), and combustion methods. Among these, coprecipitation involves multiple steps and increases processing costs, while SP offers continuous processing, scalability, and rapid attainment of homogeneous compositions. However, SP requires additional heat treatment to achieve high purity and crystallinity, necessitating extensive experimentation to determine optimal annealing conditions. In this study, two distinct SP reactor configurations were developed: an initial design and an optimized design. These designs incorporated enhancements over conventional SP synthesis methods, with improvements made to the droplet preheating zone and the collection system, aiming to increase production rates at the laboratory scale. These reactors were employed for the synthesis and optimization of different cathode materials, including LiCoO2 (LCO) and LiNi0.8Mn0.1Co0.1O2, known as NMC811, which is challenging to synthesize. To gain a deeper understanding of LCO synthesis through SP, the study investigated the influence of reactor wall temperature on the structural and morphological properties of the synthesized particles using nitrate and acetate precursors. In-situ high-temperature X-ray diffraction (HT-XRD) was utilized to monitor the structural evolution of the particles at various temperatures and identify the optimal annealing conditions. The phase evolution was observed, with spinel Li2Co2O4 forming at lower annealing temperatures and layered oxide LCO emerging at intermediate temperatures. However, higher temperatures led to structural defects due to lithium loss. The annealed particles with the desired structure were subsequently tested in batteries, contributing to a better understanding of LCO synthesis via SP and its potential applications in electrochemical systems. Furthermore, the study investigated the impact of synthesis temperature and precursor type in the SP synthesis of NMC811 particles. HT-XRD was employed to study the structural evolution of the synthesized particles and determine the optimum annealing conditions for two different precursors and heating atmospheres. The particles exhibited a transformation from a rock-salt structure to a spinel phase, followed by a transition into a well-ordered layered structure upon lithiation and ordering. However, at higher synthesis temperatures, structural degradation occurred due to oxygen vacancies and lithium loss, as well as mixing of Li and Ni ions.
This process involved the migration of Li+ ions to the transition metal (TM) layer, while Ni2+ ions from the TM layer moved to the Li layer. The electrochemical performance of the particles annealed under optimal conditions was thoroughly examined. Additionally, the study explored the enhancement of NMC811 cathode material synthesis using the optimized SP reactor and an electrochemical testing protocol. Factors such as reactor temperature, precursor concentration, reacting gas orientation in the preheating zone, organic additives (urea), annealing temperature and time, and high oxygen flow rate were investigated. The optimized reactor design, featuring improved reacting gas flow, resulted in enhanced particle sphericity and excellent electrochemical performance. Despite the morphological enhancements, annealing led to the formation of porous agglomerates, which often demonstrated improved electrochemical performance due to enhanced lithium diffusion. These findings offer valuable insights into the aerosol synthesis of Li-ion battery cathode materials, paving the way for further optimization and advancement in energy storage technologies.
  • Item (Open Access)
    Concentrated Gauss Curvature in Shape-Programmed Shells
    Duffy, Daniel [0000-0002-0383-5527]
    In this thesis we will design new patterns of deformation in shape-programmed sheets, pushing forward the paradigm of ‘metric mechanics’, in which active shells respond to stimuli by undergoing large spontaneous deformations. Our focus will be on novel ways to program Gauss curvature, motivated by its deep mechanical consequences - principally that it imparts strength, since Gauss-curved shells cannot be flattened without expensive stretch. Concentrated Gauss curvature will be of particular interest, offering qualitatively new mechanics and interesting theoretical challenges. Specifically, we will explore the Gauss curvature encoded in deformation patterns containing topological defects, holes, and seams - all features where Gauss curvature is generically concentrated. Finally, we will address the load-bearing and lifting capacity of shape-programmed cones, which have become a classic example, and whose strength ultimately stems from their Gauss-curved tips. This will reveal many surprises, including new thin-limit results for the buckling of conical shells, exhibiting unexpected scalings and holding broad implications within and beyond shape programming.
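As background for readers unfamiliar with concentrated Gauss curvature, the standard cone result conveys what "concentration" means here; this is a general geometric fact, not a result of the thesis.

```latex
% A cone whose unrolled sector subtends a total angle \phi carries all
% of its Gauss curvature as a point concentration at the tip, with
% integrated curvature equal to the angle deficit:
\[
  \int_{\mathrm{tip}} K \, dA \;=\; 2\pi - \phi .
\]
% By Gauss's Theorema Egregium this deficit is invariant under any
% deformation that avoids stretch, which is the geometric origin of the
% load-bearing strength discussed above.
```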
  • Item (Embargo)
    Unveiling the Impact of Supply Chain Complexity on Product Safety Risk
    Schumacher, Roman [0000-0003-0764-5867]
    Purpose:
    Product safety risks have increased over the past decades across multiple industries, often with severe consequences for consumers and companies, including fatalities, injuries and bankruptcies of the firms involved. The purpose of this research is to examine the root causes of product safety risk by investigating the implications of supply chain complexity for product safety in multiple industries. Furthermore, it examines how these risks can be managed effectively.
    Methodology:
    The research objectives are addressed in two distinct, yet content-related studies: Study 1 focuses on identifying the relationships between supply chain complexity and product safety risk in different industries and derives its findings from a holistic, inductive, cross-industry multiple-case-study approach involving seven case companies operating in the medical device, pharmaceutical, automotive and electronics industries. Study 2 empirically validates the findings of Study 1 by investigating the impact of supply chain complexity on product safety risk in the medical device industry using large-scale, industry-level data. Building on Complexity Theory, Information Processing Theory and the results of Study 1, Study 2 was conducted using regression techniques and secondary firm financial, supply chain and recall data from multiple sources to operationalise supply chain complexity and test its relationship with product safety risk.
    Findings:
    Study 1 provides a conceptual framework suggesting relationships between multiple supply chain complexity dimensions and product safety risk. It further identifies several reactive and preventive measures that can be applied to manage product safety risk in complex supply chains. The results of Study 2 support the hypothesised relationships, indicating that various dimensions of supply chain complexity are positively associated with product safety risk. Furthermore, they show that operational efficiency has a positive impact on product safety and that operational efficiency serves as a moderator, mitigating the negative effects of supply chain complexity on product safety.
    Implications for Theory:
    This research contributes to the Supply Chain and Operations Management literature in several meaningful ways. It uses Complexity Theory and Information Processing Theory to elaborate on the relationships between supply chain complexity and product safety risk. Thereby, it contributes to understanding the implications of information processing requirements in complex supply chains and their effect on product safety risk. Furthermore, it conceptualises several complexity dimensions building upon prior literature, and extends this conceptualisation by including technological aspects of supply chain complexity.
    Implications for Practice:
    This research provides several insights for the reduction of product safety risk caused by supply chain complexity. It identifies several actionable measures to reduce the likelihood of product failures and to contain the potential impact of product recalls. Furthermore, the study identifies several supply chain complexity dimensions and their relative strength of impact on product safety risk. Thereby, this research provides practitioners with a useful set of insights to prioritise measures for reducing product safety risk sustainably.
  • Item (Open Access)
    Innovation strategies of New Technology-Based Firms for managing crises: reconciling short-term survival and long-term success. The case of gene therapy pioneers
    Wolff, Johannes
    New Technology-Based Firms (NTBFs) are pioneers in innovation and play a key role in creating and disrupting high-tech industries. However, these firms usually lack product revenues and are initially very reliant on external investment to innovate and to survive. This renders them particularly vulnerable to technology-specific or industry-wide external crises which suddenly and drastically constrain their access to these external resources. When facing a crisis, NTBFs have to ensure survival in the immediate crisis environment without compromising their long-term success and potential after the crisis. This creates a tension between the risk of investing in managing the crisis, which may drain already constrained resources, and the risk of not investing in long-term innovation during the crisis, which may endanger the firm's long-term potential to recover and capture new opportunities once the crisis has resolved. This thesis identifies ‘innovation strategies’ that NTBFs may use in a crisis to achieve and reconcile both short-term and long-term benefits. The strategies are identified by studying the behaviours adopted by pioneering gene therapy NTBFs which survived over decades, including a severe and lasting period of crisis and setback. The study merges two extant concepts which together define ‘innovation strategies’: firms may pursue explorative and exploitative innovation (the exploration/exploitation or ambidexterity concept) and may do so through intra- or inter-organisational modes (the internal and Open Innovation concept). Both concepts are well known for their critical role during crises and for their differing short-term and long-term benefits and risks. However, only a few studies have combined both dimensions to capture the full range of innovation strategy configurations that a firm can choose from. Further, the evolution and dynamics of these configurations over time in a crisis context, their short-term and long-term outcomes, and the contingencies for (successfully) adopting one strategy over another remain elusive to date. This study addresses these research gaps using a longitudinal case study methodology of pioneering gene therapy NTBFs which successfully survived through periods of setback and progress, including a sudden, drastic and lasting technology-specific external crisis. The thesis presents an original strategy typology which makes it possible to objectively and quantitatively capture different configurations of intra-/inter-organisational exploration/exploitation and to compare these configurations across firms and over time. Supplemented with interviews with (ex-)executives of the case firms, the longitudinal data show: a) which configurations (Strategy Types) NTBFs use before, during and after a crisis, b) how these configurations change over time across crisis and non-crisis periods (Strategy Pathways), and c) the short-term and long-term outcomes of different Strategy Types. From these, the Strategy Types and Pathways suitable for reconciling short-term and long-term survival and success in a crisis are drawn. Additionally, the study provides propositions on d) the contingencies for adopting and e) successfully implementing different crisis strategies. The study thus contributes to theory and practice. For theory, it merges and expands the literature streams of ambidexterity and Open Innovation, and substantially contributes to knowledge on strategic adaptation to crisis.
For practice, it provides actionable recommendations to NTBF managers on which strategies to choose under which contingencies to maximise the chances of short-term survival and long-term success when facing a crisis.
  • Item (Open Access)
    Machine Learning Interatomic Potentials to Predict Bond Dissociation Energies
    Gelzinyte, Elena [0000-0002-8625-1497]
    Empirical force fields are valuable tools in computational chemistry; however, they suffer from limitations in accuracy and transferability, and lack applicability to open-shell structures. Recently, Machine Learning Interatomic Potentials (MLIPs) have emerged as versatile surrogate models capable of accurately reproducing ab initio potential energy surfaces. However, most of their applications have targeted near-equilibrium closed-shell structures. This project aims to address this limitation by developing highly accurate and transferable MLIPs that can be applied to both closed- and open-shell molecules. An accurate description of radical species extends the scope of possible applications to Bond Dissociation Energy (BDE) prediction, for example, with relevance to cytochrome P450 metabolism modelling. In this work, three methods are compared – Gaussian Approximation Potentials (GAP), Atomic Cluster Expansion (ACE), and MACE – in their ability to accurately fit closed- and open-shell hydrocarbon data, extrapolate to novel compounds, and predict BDEs with the required accuracy. The analysis reveals shortcomings in GAP and ACE when simultaneously fitting closed- and open-shell structures and demonstrates significantly better performance for MACE fitted to the same data. We further develop a transferable MACE model applicable to compounds containing the chemical elements carbon, hydrogen and oxygen. To verify its transferability, we evaluate this model on several independent datasets and compare its performance to the general-purpose ANI-2x interatomic potential, which is only applicable to closed-shell structures. Furthermore, MACE shows better correlation of predicted BDEs with the reference method than the currently used semi-empirical AM1 method. The MACE model extrapolates well over bond dissociation potential energy surface scans, which shows promise for extension to predict not only reaction energies but also reaction activation energies. Finally, the wfl and ExPyRe Python packages are described, which were developed to aid in building high-throughput MLIP fitting and atomistic simulation workflows.
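Once a potential can evaluate both closed- and open-shell species, BDE prediction reduces to simple energy bookkeeping. A minimal sketch follows, with a hypothetical energy() stand-in rather than any real GAP/ACE/MACE API; the numbers are invented.

```python
# Generic bond-dissociation-energy bookkeeping (illustrative only).
# `energy(species)` stands in for a single-point evaluation with any
# interatomic potential or reference method; it is NOT a real API.
def bond_dissociation_energy(energy, molecule, radical_a, radical_b):
    """BDE for A-B -> A. + B. is E(A.) + E(B.) - E(A-B)."""
    return energy(radical_a) + energy(radical_b) - energy(molecule)

# Example with made-up total energies in eV for CH4 -> CH3. + H.
mock_energies = {"CH4": -219.3, "CH3": -214.2, "H": -0.5}
bde = bond_dissociation_energy(mock_energies.get, "CH4", "CH3", "H")
print(f"Mock C-H BDE: {bde:.2f} eV")  # -> 4.60 eV, a plausible C-H value
```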
  • Item (Open Access)
    Data assimilation for verification of dry storage cask content
    Tan, Zi Liang
    Dry cask storage is a method for the interim storage of spent fuel assemblies. These assemblies contain fissile isotopes of uranium and plutonium which can present a proliferation concern, and consequently there is a need for methods which can verify dry cask content. The loading (or partial loading, in diversion scenarios) of a cask determines various physical quantities. On one hand, theoretical calculations of these quantities can be computed assuming a particular loading. On the other hand, experimental measurements of these quantities can be made directly. The discrepancy between theoretical calculations and experimental measurements is therefore a possible detection mechanism for highlighting diversion scenarios. The Best Estimate Results with Reduced Uncertainties (BERRU) framework by Cacuci quantifies such discrepancies between calculated and measured results through a data consistency indicator χ². Conventional statistical tests using the χ² value can eliminate proposed cask loadings which are too inconsistent with experimental measurements, but cannot accept any particular cask loading, and therefore lack the predictive capability to identify any particular loading as the most consistent. An extension of the BERRU framework and its χ² tests is then proposed, in which the χ² indicator is calculated across various proposed cask loadings and experimental measurements. The cask loading which gives the lowest χ² value among all the proposed loadings is accepted by the test as the most consistent with the experimental measurements. This proposed test is applied to two physical models of dry cask storage to develop: (1) a neutronics test based on a neutron diffusion model of Bonner sphere measurements at the cask exterior, and (2) a thermal test based on an analytical natural circulation model of the temperature gain of cooling air exiting the cask, compared against experimental thermocouple measurements. The particular physics of each model results in varying performance of each test in terms of which types of diversion scenarios it can predict and its success rate in predicting them. Since both tests are partial glimpses into the common true state of the cask, considering them together ought to yield better results than either test in isolation. Two complementary applications of the neutronics and thermal tests are investigated, and it is found that a multiphysics BERRU test yields improved prediction results and is recommended as the preferred complementary application.
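The proposed selection rule can be sketched in a few lines: evaluate a consistency indicator for each candidate loading and accept the minimum. The quadratic form below is a generic weighted-residual χ², far simpler than the full BERRU formulation, and every number is invented.

```python
import numpy as np

# Generic minimum-chi-square selection over proposed cask loadings.
def chi_square(calculated, measured, covariance):
    r = calculated - measured
    return float(r @ np.linalg.solve(covariance, r))

measured = np.array([1.02, 0.97, 1.10])        # e.g. detector responses (made up)
covariance = np.diag([0.02, 0.02, 0.03]) ** 2  # measurement covariance (made up)

proposed = {
    "fully loaded":          np.array([1.00, 1.00, 1.05]),
    "one assembly diverted": np.array([0.90, 0.95, 1.00]),
}
best = min(proposed, key=lambda k: chi_square(proposed[k], measured, covariance))
print("Most consistent loading:", best)
```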
  • Item (Open Access)
    Advanced Numerical Modelling of Bulk Superconductor Magnetisation
    Cientanni, Vito
    Superconductivity is the vanishing of electrical resistance in a material, together with the expulsion of magnetic fields from within its volume, below a certain (critical) temperature. Superconductors can be subdivided into two classes, type-I and type-II, with the latter exhibiting a ‘mixed state’ of superconductivity across their volume. Through appropriate magnetisation, magnetic flux may become ‘pinned’ within type-II superconductors fabricated into bulk (i.e. large-sized) forms, thereby creating quasi-permanent trapped field magnets of extraordinary strength and engineering importance [1–3]. Type-II superconductors acting as trapped field magnets have to date been shown to trap in excess of 17 T [4], and have the potential to trap significantly greater magnetic fields [5]. Meanwhile, the best conventional permanent magnets have a saturation magnetisation of approximately 1.4 T [6]. One such group of type-II superconductors are the (RE)-Ba-Cu-O (barium-copper-oxide) materials, where RE = a rare-earth element or Y/Gd, which exhibit superconductivity at approximately 90 K or higher. Magnesium diboride (MgB2) is another promising type-II superconductor, which exhibits superconductivity below 39 K. Despite the extensive research into fabricating and using bulk superconductors as trapped field magnets, magnetising them to high fields with a compact, efficient, and practical method remains challenging. The focus of this PhD is therefore to use a modelling-first approach to investigate practical methods of trapping magnetic fields within bulk superconductors, with the goal of providing deeper insight into the techniques which may enhance their magnetisation. One particular focus of this study is the phenomenon of flux jumps, which have traditionally been seen as a blight on the stable operation of superconductors. Flux jumps are the spontaneous, ‘avalanche’-like motion of magnetic flux through a superconducting medium [7–11]. Such instabilities, for example, historically posed a challenge in early (c. 1969) multi-filamentary superconducting wires. Carefully spacing filaments, or ‘twisting’ them around the wire core, proved effective methods for eliminating flux jumps in wires and coils [12]. Flux jumps generally result in the rapid, and often mechanically destructive, demagnetisation of a bulk superconductor during field-cooled (FC) or zero-field-cooled (ZFC) magnetisation (which are quasi-static, energy-intensive magnetisation techniques). In Chapter 3, flux jumps in bulk superconductors under FC & ZFC magnetisation are modelled numerically, and validated against analytical solutions as well as experimental data. Using these models, the controlling variables and properties of thermomagnetic instabilities are explored (such as the influence of bulk geometry and size, the critical current density distribution (see section 1.1.1), and the thermal properties). Methods for avoiding, controlling, or better monitoring them are suggested and discussed throughout section 3.3.2.4. Another method of magnetising bulk superconductors, which is more efficient and practical, is pulsed-field magnetisation (PFM). Using this technique, significant thermal stresses are often generated within the bulk, which can reduce the trapped field potential or damage the sample. However, it has been practically demonstrated that thermomagnetic instabilities during PFM can permit the ‘jumping-in’ (and successful trapping) of magnetic flux within bulks.
This type of ‘assistive’ flux jump could be a viable method of efficiently and practically magnetising HTS bulk superconductors, and could lead to higher trapped fields. Later in Chapter 3, numerical techniques are used to model experimentally observed flux jumps during PFM, and the controlling variables that generate the circumstances for instability to flux jumps are then explored (such as the critical current distribution, the bulk geometry, the effectiveness of cooling, and the n-value distribution; see section 3.4). Based on these results, techniques for researchers to trap even greater fields using PFM are suggested and discussed. Continuing with the goal of enhancing magnetisation, Chapter 4 presents a series of models which numerically describe the experimentally obtained magnetisation of a new record-breaking composite MgB2 bulk under PFM, with excellent agreement with the experiments. Using these models, a number of powerful extension studies are presented, which explore the controlling variables of the experiment and the composite bulk (such as the influence of multi-pulsing, the use of an iron yoke, the number of included copper layers, and the effect of cooling the composite bulk; see section 4.2.2). A number of suggestions for researchers to further increase the trapped field within these bulks are finally given. An analysis comparing the intrinsic instability of (RE)-Ba-Cu-O and MgB2 materials to flux jumps is also given, demonstrating why it is generally best to avoid flux jumps in MgB2 bulks. Chapters 1 and 2 are supplementary to this study, introducing the main aims and background of the PhD study as well as the numerical methods used. Chapter 5 concludes the study and outlines the next steps and suggested continued research.
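As context for the trapped-field figures quoted above, a textbook Bean critical-state estimate (not specific to this thesis) links the trapped field to the critical current density and sample size:

```latex
% For a fully magnetised, infinitely long superconducting cylinder of
% radius a carrying a uniform critical current density J_c, the Bean
% model gives a trapped field at the centre of order
\[
  B_{\mathrm{trap}} \;\approx\; \mu_0 J_c a ,
\]
% which is why larger bulks with higher J_c trap larger fields, and why
% a flux jump (a sudden collapse of the shielding currents) can erase
% much of the trapped field at once.
```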
  • Item (Open Access)
    Advances in Probabilistic Deep Learning and Their Applications
    Daxberger, Erik Alexander
    Deep learning and probabilistic modeling are two machine learning paradigms with complementary benefits. Probabilistic deep learning aims to unify the two, with the potential to offer compelling theoretical properties and practical functional benefits across a variety of problems. This thesis provides contributions to the methodology and application of probabilistic deep learning. In particular, we develop new methods to address four different application domains. The first application is out-of-distribution detection. Neural networks tend to make unreliable predictions when the data distribution changes after training. To address this, we propose a new probabilistic deep learning method based on a Bayesian variational autoencoder, where a full distribution is inferred over the model parameters, rather than just a point estimate. We then use information-theoretic measures to detect out-of-distribution inputs with this model. The second application is data-efficient optimization. Many science and engineering problems require optimizing a costly black-box function over a high-dimensional, structured space. To tackle this, we develop a new probabilistic deep learning method that efficiently optimizes the function in the low-dimensional, continuous latent space of a variational autoencoder. We propose to periodically retrain the model to keep the latent manifold useful for optimization. The third application is neural network calibration. Neural networks tend to be poorly calibrated on inputs not seen during training. To avoid overconfidence, models must be able to quantify their uncertainty. To this end, we develop a new probabilistic deep learning method that performs Bayesian inference over just a subset of a neural network’s parameters. We propose a way to choose such subnetworks to faithfully preserve the model’s predictive uncertainty. The fourth application is continual deep learning. Neural networks often catastrophically forget previously learned tasks when trained on new tasks. To enable models to learn across task sequences, we introduce a new probabilistic deep learning method that unifies two popular continual learning approaches: Bayesian weight regularization and experience replay. Our method explicitly aims to approximate the model obtained from batch-training on all tasks jointly. Overall, the goals of this thesis are twofold. Firstly, we aim to develop new methods at the intersection of probabilistic modeling and deep learning that combine their respective advantages. Secondly, we aim to demonstrate the practical potential of these probabilistic deep learning methods by applying them to advance the diverse application areas mentioned above.
  • Item (Open Access)
    Neurosymbolic Reasoning for Link Prediction in Supply Chain Knowledge Graphs
    Kosasih, Edward [0000-0001-5293-2641]
    This thesis is motivated by recent developments in Supply Chain Management (SCM) and Artificial Intelligence (AI). On one side, as modern supply chains become complex and interconnected, with invisible dependencies, we increasingly see disruptions emerging and propagating across the network. This phenomenon, also known as the ripple effect, is often difficult to manage given the lack of visibility of the supply chain structure. While data-driven approaches have recently emerged as solutions to proactively reconstruct and monitor these hidden dependencies, the extant literature remains limited. This is an open problem in SCM. Meanwhile, the world is faced with a renewed disruption from the development of AI. The combination of big data availability and accessible computing power has increased the adoption of AI-based data-driven approaches in many aspects of society. However, as many modern AI techniques are based on black-box approaches such as neural networks, there have been calls for more governance to make AI more trustworthy in performing learning and reasoning. This is an open problem in AI. This thesis investigates the development of trustworthy AI as a data-driven approach to predict hidden dependencies in supply chains. We demonstrate how a novel methodology, neurosymbolic AI, can be used to predict hidden dependencies not only between companies, as in the extant literature, but also with other entities in the supply chain such as products, locations, certifications and many others. We also illustrate how this methodology enables practitioners to inspect the AI model's reasoning process, thus improving trustworthiness. While our work has shown promising results on two real supply chain datasets from the automotive and energy industries, there remain open questions on developing AI approaches that leverage uncertainty and privacy in order to make models more trustworthy and adoptable by supply chain practitioners. This thesis systematically discusses the remaining research gaps by comparing the results of our systematic literature review with the thesis contributions.
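To make "link prediction" concrete, the sketch below scores candidate supply chain triples with a classic translational-embedding baseline (TransE, Bordes et al., 2013). The thesis itself uses a neurosymbolic model, not TransE, and the entities and embeddings here are random placeholders.

```python
import numpy as np

# TransE-style scoring: plausibility of (head, relation, tail) is
# -||h + r - t||, so well-explained triples score close to zero.
rng = np.random.default_rng(1)
dim = 16
entities = {e: rng.normal(size=dim) for e in
            ["CompanyA", "CompanyB", "CompanyC", "Germany"]}
relations = {r: rng.normal(size=dim) for r in ["buys_from", "located_in"]}

def score(head, relation, tail):
    return -np.linalg.norm(entities[head] + relations[relation] - entities[tail])

# Rank candidate (hidden) suppliers of CompanyA:
candidates = ["CompanyB", "CompanyC", "Germany"]
ranked = sorted(candidates, key=lambda t: score("CompanyA", "buys_from", t),
                reverse=True)
print("Predicted supplier ranking:", ranked)
```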
  • Item (Open Access)
    The Neural Processes Family: Translation Equivariance and Output Dependencies
    Requeima, James
    Most contemporary machine learning approaches use a model trained from scratch on a particular task and a learning algorithm designed by hand. This approach has worked very well with the advent of deep learning and in the presence of very large datasets (Goodfellow et al., 2016). Recently, meta-learning has emerged as a machine learning approach to learn both a model and a learning algorithm (Hospedales et al., 2021; Schmidhuber, 1987) directly from data. Neural processes (Garnelo et al., 2018a,b) are a family of meta-learning models which combine the flexibility of deep learning with the uncertainty awareness of probabilistic models. Training using meta-learning allows neural processes to apply deep neural networks to applications with smaller training sets, where they would typically overfit. Neural processes produce well-calibrated predictions, enable fast inference at test time, and have flexible data-handling properties that make them a good candidate for messy real-world datasets and applications. This thesis addresses two shortcomings in applying neural processes to real-world applications by i) incorporating translation equivariance into the architecture of neural processes rather than requiring the model to learn this inductive bias directly from data, and ii) developing methods for neural processes to parametrize rich predictive distributions that can model dependencies between output-space variables and produce coherent samples. This thesis makes four main contributions to the family of neural process models. First, we introduce the convolutional conditional neural process (ConvCNP). The ConvCNP incorporates translation equivariance into its modelling assumptions by using convolutional neural networks, improving training data efficiency and performance when data are approximately stationary. Second, we propose the latent variable version of the ConvCNP, the convolutional latent neural process (ConvLNP), which is able to model epistemic uncertainty and output-space dependencies and to produce coherent function samples. We also propose an approximate maximum likelihood training procedure for the ConvLNP, improving upon the standard VI approximate inference technique used by latent neural processes at the time. Third, we propose the Gaussian neural process (GNP), which models the predictive distribution with a full-covariance Gaussian. The GNP can model joint output-space dependencies like the ConvLNP but avoids the issues associated with using latent variables. Training GNPs is also much simpler than training the ConvLNP, since it uses the same maximum likelihood technique as standard conditional neural processes. Fourth, we introduce the autoregressive neural process (AR NP). Rather than proposing a new neural process architecture, this method produces predictions at test time by evaluating existing neural process models autoregressively via the product rule of probability. This allows existing, potentially already trained, neural processes to model non-Gaussian predictive distributions and produce coherent samples without any modifications to the architecture or training procedure. The efficacy of each of these methods is demonstrated through a series of synthetic and real-world experiments in climate science, population modelling, and medical science applications.
It can be seen in these applications that incorporating translation equivariance as a modelling assumption and generating predictive distributions that model output-space dependencies improves predictive performance.
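    To make the autoregressive procedure concrete, the sketch below chains single-point Gaussian predictions via the product rule, feeding each sampled target back into the context before predicting the next point. The `predict` function is a toy kernel-smoothing stand-in for a trained conditional neural process, included only so the loop runs; the AR NP applies the same loop to actual neural process models.

```python
# A minimal sketch of autoregressive prediction via the product rule:
# p(y_1..y_n | x, C) = prod_i p(y_i | x_i, C u {earlier samples}).
# `predict` is a hypothetical stand-in for a trained neural process.
import numpy as np

rng = np.random.default_rng(0)

def predict(xc, yc, x):
    """Toy Gaussian predictive at a single input x given context (xc, yc)."""
    if len(xc) == 0:
        return 0.0, 1.0  # prior when no context is available
    w = np.exp(-0.5 * ((np.asarray(xc) - x) / 0.5) ** 2)
    mean = float(np.dot(w, yc) / (w.sum() + 1e-8))
    std = float(1.0 / (1.0 + w.sum()))  # uncertainty shrinks near context
    return mean, std

def ar_sample(xc, yc, x_targets):
    """Draw one coherent joint sample by chaining the conditionals."""
    xc, yc = list(xc), list(yc)
    ys = []
    for x in x_targets:
        mean, std = predict(xc, yc, x)
        y = rng.normal(mean, std)    # sample from p(y | x, context so far)
        xc.append(x); yc.append(y)   # condition later points on this draw
        ys.append(y)
    return ys

print(ar_sample([0.0, 1.0], [0.0, 1.0], [0.25, 0.5, 0.75]))
```

    Because each draw conditions on the previous ones, repeated calls yield coherent joint samples, and the resulting marginals need not be Gaussian even when each one-step conditional is.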
  • ItemEmbargo
    Installation Effects on Variable Pitch Fans
    Ma, Kwun Yeung
    Future low pressure ratio fan systems require an extended operating range, and variable pitch fans (VPFs) can achieve this by re-pitching the rotor blades. Moreover, a VPF has the potential to reduce fuel burn if sufficient reverse thrust can be generated, such that the engine weight and drag associated with heavy cascade-type thrust reversers can be eliminated. Previous studies have shown that the installation can have a significant impact on VPF performance in reverse thrust. This thesis aims to study the effects of inlet distortion on VPF performance in reverse thrust operation, to find the most influential distortion components, and to determine how to operate a VPF with inlet distortion. Steady RANS simulations and rig measurements are used to achieve these research aims. Both the simulations and the experiments show that typical distortion from the engine installation leads to a very different flow field compared to the case with uniform inflow at uninstalled conditions. The distortion causes significant changes in flow structure, mass flow, thrust and power distribution in the engine. In particular, the flow at the fan rotor breaks down and forms a strong radial flow within the rotor passage. This leads to a recirculating jet penetrating into the bypass duct, forming large recirculation regions between the rotor and the bypass inlet. The recirculation regions significantly restrict the net mass flow through the engine, and the fan is regarded as stalled in this state. It is found that almost all of the power from the fan is used to drive the recirculating flow, which has high loss and high total temperature. A VPF operating with recirculating flow is a significantly less efficient thrust reverser than a VPF at uninstalled conditions: the expected net reverse thrust for the former is only around 20% of the take-off thrust, notably lower than the 35% achieved by the latter. Radial pitch angle distortion is found to be the most influential component among the radial and circumferential distortion components in yaw angle, pitch angle, total pressure and total temperature. Because of its blockage effect at the bypass inlet, the radial pitch angle distortion alone leads to the recirculating flow in the bypass duct. This is unmatched by the other distortion components, as each of them alone is insufficient to cause recirculating flow at the rotor. The radial total pressure distortion is also important, as it governs the pressure force acting on the flow through the engine. The lower average inlet total pressure in the distortion increases the size of the recirculation and reduces the jet momentum. With all the radial distortion components from the engine installation, the reverse thrust can be recovered by closing the rotor blades and increasing the fan speed, provided the total pressure deficit at the bypass inlet can be reduced from 3.5% of the freestream total pressure to an acceptable level of 2% for the engine in this study. Significant hysteresis is found during reverse thrust recovery, where the VPF can be stalled and unstalled by opening and closing the blades respectively; this aligns with NASA's experimental campaigns and is captured computationally for the first time. The computations also show that stalling of the fan is driven by the extent of shock-induced flow separation on the suction surface, while unstalling is driven by the extent of flow reattachment along the span. Based on the computational results, to operate a VPF with inlet distortion in reverse thrust, the rotor blades must first be closed to clear stall and then opened to increase reverse thrust.
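    As a first-order illustration of why the recirculation regions are so damaging to reverse thrust (a standard momentum-flux argument, not a calculation from the thesis), the net reverse thrust scales roughly as
\[
T_{\text{rev}} \approx \dot{m}\,\Delta v ,
\]
so the recirculation-induced restriction of the net mass flow \(\dot{m}\), compounded by the reduced jet momentum (\(\Delta v\)) from the lower inlet total pressure, directly cuts the attainable reverse thrust.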
  • ItemEmbargo
    Graphene and Related Materials Inks and Composites for Space Applications
    Marcellino, Jeremiah
    Recent advancements in reusable rockets have led to a precipitous drop in payload-to-orbit costs (from $55,000/kg to $2,700/kg), enabling the establishment and meteoric rise of a commercial space industry. This opening up of space has spurred the generation of novel ideas and technologies, creating an ecosystem of new companies across the globe poised to play a part in the new space economy by enabling space research, tourism, manufacturing, and interplanetary travel. Underpinning this new era is the targeted development of advanced materials and processes that enhance in-space manufacturing capability, enable *in-situ* resource utilization, improve spacecraft capability, and protect biological life. The need for these advanced materials is driven by the susceptibility of current materials and technologies to the harsh space and planetary environments. Effects like extreme temperatures (−180 to +180 °C), altered gravity (hyper- and micro-gravity), thermal cycling (16 day/night cycles every 24 hours in low Earth orbit), ultra-high vacuum, solar radiation (1380 W/m² at Earth's orbital radius), atomic oxygen, and galactic cosmic radiation, in addition to planet-specific environmental factors like regolith, present complex challenges for material performance in space. Graphene and related materials (GRMs) exhibit a range of unique properties that are size and thickness dependent, making them a versatile material platform. Their expansive range of properties and atomic composition make them attractive for advanced material solutions in space, including applications in thermal control systems, radiation shielding, abrasion resistance for planetary surface exploration, electrostatic charge dissipation, and lightweight, multifunctional composites, amongst others. Despite this, GRMs have yet to be used in space, and ground-based investigation of their usefulness in space applications remains limited. In this dissertation, I showcase how GRMs can be fine-tuned via exfoliation and tailored processing conditions to enable the development of advanced materials and manufacturing capabilities for space applications. Utilizing high-pressure homogenization (HPH), I establish a framework for several techniques that enhance the effective use of GRMs, expanding the utility of HPH beyond mere exfoliation to encompass multimaterial and multiphase processing of GRMs and other nanomaterials. The ensuing chapters delve into the practical applications of the resulting GRM inks and composites across a spectrum of space technologies, spanning thermal control devices, additive manufacturing materials, and lunar surface exploration. Moreover, this work highlights the potential of in-space manufacturing (ISM) and *in-situ* resource utilization (ISRU) as promising avenues for advancing the use and capability of GRMs in space. Through a thorough investigation into the production, characterization, and application of GRMs, this dissertation lays a robust foundation for their future in space.
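    For reference, the quoted thermal cycling rate follows directly from orbital mechanics: assuming a typical low-Earth-orbit period of roughly 90 minutes,
\[
\frac{24\ \text{h}}{\approx 1.5\ \text{h per orbit}} \approx 16\ \text{day/night cycles per 24 hours}.
\]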