Theses - Engineering
Recent Submissions
Item (Open Access): Material efficiency in architectural glass and façade design: Steps toward reuse
Hartwell, Rebecca [0000-0002-2261-0745]

Reuse and high-value recycling have a pivotal role to play in reducing waste and minimising the ecological impact of activities in the built environment. The optimisation of building energy performance in use has received significant attention to date. This, together with the aim to advance other performance criteria such as occupant safety and user comfort, has stimulated the evolution of façade systems that serve numerous functions. Increased functionality has been achieved through the use of a broader range of material resources, advanced processing methods, and more complex construction techniques, which paradoxically may reduce the ability to recover material resources at the end of their design life. To date, very little attention has been paid to the consequences of incorporating multiple layers of components and greater proportions of irreversible connections on the ease of disassembly and reuse. Future targets for reducing whole-life carbon emissions will not be met without an improved understanding of how to recover components from new and existing façade systems effectively. This research thus examines the role of material efficiency in façade design, with a focus on reuse and high-value recycling, across the façade life-cycle. The behavioural, environmental, and technical factors affecting design strategies for reuse and recycling are investigated. Through engagements with 69 stakeholders across the value-chain, this research first investigated the challenges and opportunities associated with façade reuse as perceived by those in the façade sector. A mixed-method approach to data collection was employed, in which the responses from a preliminary set of surveys were examined in more detail through semi-structured interviews. Through this, it was found that the adoption of design strategies that facilitate reuse and recycling is dependent on: increased awareness and quantification of the environmental value of material resources beyond their first use; new business models; cross-supply-chain support for accompanying take-back infrastructure; and advancements in efficient practical separation methods specific to façade components. Enhanced communication between stakeholders addressing acceptability criteria and product availability; new reconditioning methods; and more holistic legislation based on whole life-cycle performance also emerged as vital requisites for advancing material efficiency. Subsequently, the convergence and divergence in stakeholder priorities related to these factors were examined. Glass is a key construction material in the materials palette for contemporary façade systems. A material flow analysis of the UK flat glass sector was constructed in this research. This revealed that the practical reuse of architectural glass rarely takes place and closed-loop recycling rates are low, despite the potential for significant energy and carbon dioxide (CO2) savings. Opportunities to improve recycling rates were identified in: new producer responsibility schemes; capital investment in collection infrastructure; cullet quality monitoring; demonstration of the financial costs and benefits associated with the CO2 emission reductions from increased cullet use; and the establishment of supporting policies. Each of these options would require further research and support to be fully realised.
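As a minimal illustration of the mass-balance bookkeeping that underlies a material flow analysis of this kind (a generic sketch with hypothetical, illustrative flow names and figures, not data from the study):

```python
def net_stock_change(inflows, outflows):
    """MFA bookkeeping for one stock: inputs = outputs + net addition to stock."""
    return sum(inflows.values()) - sum(outflows.values())

# Hypothetical, illustrative flows in kt/yr for an 'in-use glazing' stock (not UK data)
inflows = {"new_flat_glass_installed": 100.0}
outflows = {"reused": 2.0, "closed_loop_recycled": 10.0,
            "open_loop_recycled": 30.0, "landfilled": 40.0}
print(net_stock_change(inflows, outflows))  # glazing accumulating in the building stock
```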
Many contemporary façade systems consist of multi-material composite components which are difficult to disassemble and reuse or recycle at their end-of-life. A literature review of existing environmental assessments revealed the need for a bespoke methodology that accounts for these design complexities. The new assessment method developed in this research enables a quantitative evaluation of the environmental reclamation potential over time. The reclamation potential measures the influence of material selection and construction methods on the ability to disassemble and reuse recovered façade products at their end-of-life. The method accounts for the technical service lifetimes of components, including performance degradation over time, and can thus inform suggestions for the most suitable recovery route (system reuse, component reuse or recycling). The newly developed method was applied to four glazed façade typologies. Results highlight that the use of permanent adhesive connections can significantly limit the environmental value of connected components at their respective end-of-life, due to service life dependencies on neighbouring components. In addition, it was shown that the additional life-cycle environmental impact associated with recurring resource inputs for component replacements requires greater attention in order to avoid promoting designs that shift the environmental burden to other life-cycle stages. The existing technological barriers affecting the disassembly of contemporary façade systems for component reuse were reviewed. The parameters affecting the end-of-life separation of laminated glass, a composite of glass and poly-vinyl butyral (PVB) frequently used in façade glazing, were examined through a series of experimental investigations. The interfacial strength between glass and poly-vinyl butyral was found to be significantly affected by moisture, temperature and glass surface properties. At a small scale (12,500 mm²), under specific conditions, delamination between glass and PVB was observed. This is a key area for future research into developing efficient separation processes for façade components, with the long-term goal of preserving material value in the built environment. The realisation of resource-efficient façade designs requires a systematic review of behavioural, environmental, technological, legislative, economic and social factors. This research investigates the first three of these and thus (i) presents an informed overview of the key challenges and opportunities in façade reuse as perceived by the façade value-chain; (ii) develops a new framework for assessing the environmental reclamation potential of complex designs; and (iii) investigates new methods for alleviating technical barriers in reuse, to open up opportunities for reducing the environmental impact of future construction. Future research directions are identified which could build upon the outputs of this research and thus promote the practical realisation of material efficiency in façade design.

Item (Embargo): Molecular characterisation of methylated circulating tumour DNA biomarkers and their detection with low-cost biosensors
Lleshi, Ermira

Diagnosing cancer at an early stage can effectively identify tumours at a time when the patient could benefit from treatments with superior clinical outcomes. Improving early cancer detection relies on discovering highly specific, sensitive, and robust biomarkers that can be obtained non-invasively, and on developing a rapid and low-cost detection platform.
In this context, epigenetic markers have been shown to be a promising tool in the early detection and classification of cancers. Recently, the cell-free methylated DNA immunoprecipitation (cfMeDIP) method was used to distinguish localized from advanced prostate cancers with high sensitivity. The challenge in prostate cancers is that the methylation levels are notoriously heterogeneous, which may lead to misclassification in a subset of patients. In this thesis, machine learning was coupled with a similar target-enrichment method, cell-free Methylation Binding Domain 2 sequencing (cfMBD2_seq), to differentiate benign, localized, and metastatic prostate cancers (mPCa). By training the machine learning model on patients with a higher cell-free tumour DNA fraction, the method was able to identify 900 differentially methylated regions (DMRs) that can discriminate metastatic patients from controls with a sensitivity of 95% at a specificity of 100%. Furthermore, the same DMRs detected localized disease at a sensitivity of 37.5% and a specificity of 100%. Upon functional annotation of these DMRs, it was revealed that they are enriched for binding sites of transcription factors related to cell proliferation (i.e., MAZ, Sp2, TIEG1 and E2F-3). This demonstrates the potential of using cfDNA methylation profiling generated using cfMBD2_seq in combination with a machine learning approach for the early diagnosis of prostate cancer. Despite the advancements in these diagnostic tools, the existing sequencing-based cancer detection methods are expensive, labor-intensive, and usually also require extensive sample processing. These drawbacks hinder them from being used in routine clinical applications. To overcome these challenges, the DMRs identified above were used to design a set of oligonucleotide probes for label-free detection using a low-cost, mass-sensitive biosensing device known as a Thin Film Bulk Acoustic Resonator (TFBAR). In-liquid biosensing of these targets with the TFBAR sensor was demonstrated based on the ability of these devices to detect mass changes at the surface by tracking the resonance frequency. With optimal surface functionalization, the performance of the TFBAR devices was tested to differentiate metastatic prostate cancer cases from controls based on the mass of methylated cfDNA targets. While this is a proof of concept, the data generated here offer fundamental knowledge for future studies to develop portable, low-cost biosensing devices for point-of-care testing settings.

Item (Embargo): Domain-Specific Analog Physical Computing Accelerators
Meech, James [0000-0003-4052-7248]

This dissertation provides applications in the form of Monte Carlo simulations and Bayesian inference as motivation for the fast and efficient generation of non-uniform random variates in hardware. Using simulations and real-world empirical measurements, this dissertation shows that software non-uniform random number generation is slow and inefficient, and discusses why this is the case. This dissertation presents the idea that we can offload the task of non-uniform random number generation from the digital electronic processor, leaving it free to perform other computations. This dissertation shows that it is possible to produce samples from a Gaussian distribution with any arbitrary mean and standard deviation using a simple transform and a source of samples from a Gaussian distribution.
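As a minimal sketch of the transform referred to above (nothing beyond the standard location-scale property of the Gaussian distribution is assumed; the NumPy generator simply stands in for a hardware noise source):

```python
import numpy as np

def gaussian_from_standard_normal(z, mean, std):
    """Location-scale transform: if z ~ N(0, 1), then mean + std * z ~ N(mean, std**2)."""
    return mean + std * z

rng = np.random.default_rng(0)
z = rng.standard_normal(100_000)                        # stand-in for a physical noise source
x = gaussian_from_standard_normal(z, mean=3.0, std=0.5)
print(round(x.mean(), 2), round(x.std(), 2))            # approximately 3.0 and 0.5
```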
Our new hardware architecture can then produce samples from any arbitrary one-dimensional probability distribution by decomposing it into a mixture of Gaussians using a kernel density. To illustrate the novelty of our approach compared to the existing literature this dissertation presents terminology to describe non-uniform random number generators and a taxonomy that categorizes the state of the art of hardware non-uniform random number generators based on the physical process and measurement hardware they use. We focus on the hardware implementation of the measurement mechanism because it places a fundamental limit on the speed of generation. Based on this analysis we show that field-programmable gate array-based non-uniform random number generators produce the highest sample rates on average. The fastest generator in our study uses a photodiode but it is an outlier for that architecture. This indicates that a field-programmable gate array or optical-based design will produce the fastest generator. The majority of hardware uniform random number generators obtain their randomness from a non-uniform random physical process and then transform or truncate this process to produce a uniform distribution. Therefore, the hardware uniform random number generators are subject to the same fundamental speed constraints as hardware non-uniform random number generators. This suggests that our non-uniform random number generation architecture also has the potential to produce a uniform random number generator that is faster than the state-of-the-art. After describing the reasons that we would want a hardware non-uniform random number generator, this dissertation presents a new non-uniform random number generator architecture with the potential for greater speed and efficiency than the state-of-the-art non-uniform random number generator methods. We present results from two implementations of the architecture which both involve sampling a random physical process that varies in time. The physical processes which the system samples are: 1) Microelectromechanical system (MEMS) sensor noise. 2) Electron tunneling noise. This dissertation shows that the mean and standard deviation of both the MEMS- and electron tunneling noise-based programmable random variate accelerators depend upon their temperature and supply voltage and proposes an architecture to compensate for this effect. This dependence is important when designing a non-uniform random number generator based on electronic noise. The random number generator will either need to measure and compensate for the temperature or be kept in a constant temperature environment. Chapter 5 concludes our discussion of programmable non-uniform random number generators. Chapters 6 to 8 of this dissertation focus on accelerating Fourier transform and convolution operations using an optical computing accelerator. This hardware has the sole purpose of accelerating Fourier transform and convolution operations. Chapter 6 describes the theory of Fourier transforms and the empirical results of experiments that use the theory. Chapter 7 presents a theoretical benchmark analysis of 27 end-to-end applications that would benefit from running over an optical Fourier transform and convolution accelerator. We find that the optical accelerator can only produce a speedup of > 10× for two applications (pure Fourier transforms and convolutions). 
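For context on the operations such an accelerator targets, the sketch below (a generic NumPy illustration, not code from the dissertation) shows the convolution theorem that makes a Fourier-domain accelerator attractive: circular convolution in the signal domain is element-wise multiplication in the frequency domain.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 256
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Direct circular convolution, O(N^2)
direct = np.array([sum(x[m] * h[(n - m) % N] for m in range(N)) for n in range(N)])

# Fourier-domain route, O(N log N): transform, multiply element-wise, invert
fourier = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

print(np.allclose(direct, fourier))  # True: the two routes agree
```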
We built a prototype optical Fourier transform accelerator using off-the-shelf hardware to illustrate the data movement bottleneck that occurs in any computing accelerator that moves data between analog and digital computing devices. Chapter 8 proposes a new computer architecture, which we call memory in aperture (Mia). Mia mitigates the data movement bottleneck using hybrid analog-digital memories in the physical address space of the inevitable digital electronic processor in the computing system. Mia will reduce the data movement bottleneck in any computer architecture that frequently moves data between analog and digital computing devices. Mia holds particular future promise for interfacing analog neuromorphic computing architectures and quantum computers with digital electronic processors.

Item (Embargo): Experimentally validated DEM modelling of granular materials under simple shear testing
Guo, Jikai

Simple shear testing's ability to simulate realistic in situ conditions makes it an effective element-level testing method for studying the response of granular materials under uni-directional and multi-directional loading. DEM modelling provides complementary information, such as contact networks and particle movements, that underlies the macroscopic response but is not observed directly in experiments. In this study, a number of uni-directional and bi-directional experimental and DEM simple shear tests were conducted to provide a database for studying the response of granular materials, as well as for developing DEM simulation techniques. The physical specimens consisted of 60,000 poly-dispersed steel spheres. The material properties and sample setup in DEM were closely matched to those in the experiments, to ensure the models perform in good agreement with their physical counterparts in terms of the macroscopic response. Different boundary types were evaluated and compared regarding their shear transmission ability and computational efficiency. Flat boundaries showed not only a lower shear transmission ability, but also insufficient engagement of particle movements and rotations. The application of ribbed and pyramid-shaped projections on boundaries resulted in a pronounced improvement. Boundaries with pyramid-shaped projections were eventually used for the DEM models because of the close match to the experimental results as well as their applicability in bi-directional tests. A detailed analysis of the sample preparation process was conducted. An artificially assigned low interparticle friction coefficient throughout the entire consolidation stage resulted in an unrealistic macroscopic and microscopic initial response, the effects of which were gradually erased with shearing. In contrast, switching the interparticle friction coefficient back to the actual value before the application of any vertical stress generated dense specimens with a more realistic initial response. The DEM bi-directional simple shear models successfully captured the trend of the macroscopic response in the physical tests. A larger angle of change of shearing direction resulted in greater reductions in the shear stress and a more contractive response. Inclined force chains could be observed in the vertical plane parallel to the direction of shearing. Contact fabrics tended to orient at around 45° from the horizontal direction at large strain. The change of the direction of shearing caused the rearrangement of the force chains and contact fabrics.
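As an illustration of how a dominant contact-fabric orientation of this kind is commonly quantified (a generic sketch based on the standard second-order fabric tensor, with hypothetical contact normals; it is not the post-processing code used in this study):

```python
import numpy as np

def fabric_orientation_deg(contact_normals):
    """Major principal direction of the fabric tensor Phi = <n outer n> (2-D), in degrees, folded to [0, 180)."""
    n = np.asarray(contact_normals, dtype=float)
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    phi = np.einsum('ki,kj->ij', n, n) / len(n)       # average outer product over contacts
    eigvals, eigvecs = np.linalg.eigh(phi)
    major = eigvecs[:, np.argmax(eigvals)]            # eigenvector of the largest eigenvalue
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# Hypothetical 2-D contact normals clustered around 45 degrees from the horizontal
rng = np.random.default_rng(2)
angles = np.radians(45 + 10 * rng.standard_normal(1000))
normals = np.column_stack([np.cos(angles), np.sin(angles)])
print(round(fabric_orientation_deg(normals), 1))      # approximately 45
```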
For specimens with the same density, a larger angle between the two shearing directions resulted in a larger number of downward-moving particles. For the same loading conditions, loose specimens exhibited greater volume reduction at the change point of shearing direction.

Item (Open Access): Applications of Geometric Algebra in Mathematical Engineering
Hadfield, Hugo [0000-0003-4318-050X]

Geometric Algebra (GA) has found success in various areas of the physical sciences and engineering over the last decade, but remains relatively underutilised in industry, and several key topics in the field remain unexplored. This thesis focuses on the practical applications of Geometric Algebra in various interconnected areas of mathematical engineering. In Part I we explore the properties of the objects resulting from the addition of blades in Conformal Geometric Algebra (CGA) and how we might use these objects in computer graphics and robotics algorithms. In Part II we explore how Screw Theory embeds into CGA, how to use this embedding for simulation of the dynamics of rigid bodies, and how practitioners can leverage the geometric primitives built into CGA to represent and solve constraints in multi-body robotic systems.

Item (Open Access): Development of a Reusable Thin-Shell Concrete Flooring System
Nuh, Mishael [0000-0003-0672-2832]

Global demand for floor space is projected to continue to rapidly increase due to urbanisation and population growth. Coupled with the high carbon intensity of the construction industry, immediate action must be taken in order to reduce and limit the impact of our buildings and structures on global carbon emissions and the environment. Circularity and component reuse offer an alternative design paradigm which runs counter to traditional manufacturing processes. While the sustainability benefits are clear, modern building construction practices are not compatible with component reuse and circular economy principles. This dissertation demonstrates the feasibility of a lightweight and carbon-efficient concrete flooring system which enables disassembly, reuse, and reconfiguration, achieved by leveraging modern digital fabrication techniques and a funicular form. Inspired by the classical fan vault geometry, the segmented fan concrete shell consists of thin-shell glass fibre reinforced concrete elements which thrust against each other to maintain its structural form without any mortar. Form-finding, analysis, and digital fabrication processes were developed for the system. Through a combination of finite element modelling and an experimental testing program, the structural behaviour and performance of the segmented fan concrete shell were investigated. Sustainability comparisons to alternative flooring systems demonstrate that the proposed design is a materially and carbon-efficient flooring system. It was found that the proposed flooring system is structurally stable and can resist considerable loads. However, there is a sensitivity to fabrication tolerances and support conditions which can drastically reduce its stability and structural capacity. While the potential of the structural system as well as the novel digital fabrication method has been well demonstrated, further work is required to improve the proposed design to increase robustness against these factors.
The work presented provides a foundation for future work on sustainable and reusable concrete shell structures.

Item (Open Access): Longer Range Backscatter Communications for IoT
Yu, Sicheng

Wireless communication technologies have gained widespread adoption, forming the foundation of the Internet of Things (IoT). The IoT offers versatile, accurate, cost-effective, and energy-efficient solutions for a diverse range of applications. Within this network, backscatter communication plays a significant role. Expanding the operational range of backscatter systems holds the potential to unlock new application possibilities. One of the limitations faced by backscatter systems is the presence of phase noise. A powerful method for phase noise cancellation is range correlation. To achieve maximum suppression of phase noise, it is crucial to match the time delays between correlated signals. However, in backscatter communication, the transmission delays through free space may not remain fixed if the reading distance changes. To address this challenge, an interpolation method with several fixed taps and controlled weights is developed for generating the desired delays. The contribution of this doctoral thesis mainly consists of three parts.
1. Phase noise suppression is achieved by using the range correlation effect at the receiver side. The possibility of extending the operational range with a spatially separated structure using a common local oscillator is also demonstrated. This approach allows for a potential range increase of up to 50 times.
2. A delay matching block using interpolation methods is developed for optimum range correlation that maximises phase noise suppression. This three-tap block can generate arbitrary delays within its specified range and can be conveniently integrated into existing backscatter systems. When applied prior to range correlation, this block successfully suppresses phase noise by over 50 dB at 100 kHz offset.
3. Special transmission lines are designed to shrink the size of the delay matching block for easier deployment. The delay-to-length ratio is increased by a factor of 17.

Item (Open Access): Aib(2)oxyntomodulin Peptide Molecular Self-Assembly: Smart Nanostructures Toward Long Acting Formulation for Obesity and Diabetes Type 2
Mohammad Karim, Alireza

Type 2 diabetes mellitus (T2DM) and obesity are the most prevalent metabolic complications in the world. Previous studies have revealed that the dual-agonist gut hormone oxyntomodulin (Oxm) has a promising beneficial role in treating obesity and T2DM by suppressing food intake, raising energy expenditure, and increasing insulin secretion to maintain a normal blood glucose level in the body. Nevertheless, owing to the fast renal clearance of Oxm via proteolytic degradation, Oxm fibrils were explored and found to extend Oxm bioactivity in vivo by up to a few days. Despite the prolonged bioactivity of Oxm fibrils in the subcutaneous (s.c.) space, the plasma half-life of the Oxm peptide, upon release from the fibril, was short in serum due to proteolytic degradation by DPP4, which limits the biological role of Oxm. Furthermore, a previous study reported that an analogue of Oxm, aminoisobutyric acid (2) oxyntomodulin, known as Aib2-Oxm, is resistant to DPP4 proteolytic degradation and shows an enhanced plasma half-life in serum.
Despite the promising potential of the long-lasting bioactivity of Aib2-Oxm fibrils in the body, the kinetic, thermodynamic and mechanical parameters affecting the self-assembly of free Aib2-Oxm peptides, as well as the fundamental physical interactions underpinning self-assembly and controlling the stability of their fibrillar structures, remained to be resolved. A key element in completing our understanding of the physical interactions is the elucidation of the near-atomic level structure of the fibrils and the determination of their mechanical properties. In this PhD study, I carried out a detailed mechanical and structural characterisation of individual fibrils using atomic force microscopy (AFM) and cryogenic electron microscopy (cryo-EM). The reversibly self-assembled multi-filament fibrils formed by free Oxm and Aib2-Oxm were shown to have a mechanical stiffness, in terms of Young's modulus of elasticity, of 0.6-1.7 GPa, comparable to that of natural assemblies found within cells, such as actin and tubulin (0.3-1.2 GPa), which are known to self-assemble reversibly in vivo. This finding supports the possibility of a controlled dissociation mechanism of Oxm and Aib2-Oxm fibrils in vivo. Moreover, the comparison of the elastic moduli of Oxm and Aib2-Oxm fibrils with those of previously studied materials indicates that both the dense hydrogen-bonding network and amphiphilic (i.e., hydrophobic and hydrophilic) interactions are comparably responsible for the stability of Oxm and Aib2-Oxm fibrils. By combining data from both AFM and cryo-EM images, alongside modelling based on semiflexible polymer theory, we show that Oxm and Aib2-Oxm fibrils exhibit ribbon-like multifilament structures, as opposed to the closely packed multifilament structures previously speculated. In order to understand the fibrillation and dissociation processes of Oxm and Aib2-Oxm, a thorough quantitative study of the kinetics and thermodynamics of fibril formation and disassembly was conducted. Here, I report for the first time an extensive thermodynamic and kinetic study of fibril formation and dissociation for Oxm and Aib2-Oxm using the quartz crystal microbalance with dissipation (QCM-D) and bulk experiments. I investigated the rates of peptide binding and detachment as well as the related thermodynamics via the Gibbs free energy change for fibril formation and dissociation. My study showed that fibril formation of both Oxm and Aib2-Oxm is exergonic (i.e., a spontaneous process), as the Gibbs free energy change associated with peptide binding is negative, whilst Oxm fibrils are more thermodynamically stable than Aib2-Oxm fibrils owing to their larger Gibbs free energy loss, which is attributed to the larger α-helix content in the secondary structure of Aib2-Oxm fibrils compared with Oxm fibrils. Moreover, the rate at which Oxm seeds elongate is higher than the Aib2-Oxm seed elongation rate, which is related to the larger Gibbs free energy of activation required for Aib2-Oxm peptide unfolding prior to nucleation compared with the Oxm peptide. The findings from the kinetics and thermodynamics of fibril formation indicated a higher propensity of the Oxm peptide for fibril elongation than Aib2-Oxm.
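For reference, the activation and stability arguments above follow the standard transition-state and free-energy relations (quoted here in their textbook form, not as equations taken from the thesis):

\[
k \;=\; \frac{k_{\mathrm B} T}{h}\,\exp\!\left(-\frac{\Delta G^{\ddagger}}{RT}\right),
\qquad
\Delta G \;=\; \Delta H - T\,\Delta S \;<\; 0 \quad \text{for a spontaneous (exergonic) process,}
\]

so a larger activation free energy \(\Delta G^{\ddagger}\) implies slower seed elongation, and a more negative \(\Delta G\) of binding implies a more stable fibril.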
Moreover, I found that Aib2-Oxm fibrils are thermodynamically less stable than Oxm fibrils, which indicates that Aib2-Oxm fibrils dissociate more readily and more quickly in vivo than Oxm fibrils, and suggests that Aib2-Oxm exhibits much higher bioactivity in serum relative to Oxm, since Aib2-Oxm fibrils may release more free peptide into the serum in a given time than Oxm fibrils. Furthermore, in order to find a systematic approach to Aib2-Oxm fibril formation in vitro as well as Aib2-Oxm fibril dissociation in vivo, I examined Aib2-Oxm peptide-seed binding and peptide release from fibrils by considering three parameters: temperature, concentration and the size of pre-formed seeds. In this PhD study, I report a comprehensive account of the kinetics and thermodynamics of Aib2-Oxm fibril elongation and dissociation, along with the mechanics and structures of mature Aib2-Oxm fibrils, under the influence of the three parameters mentioned earlier. Regardless of the physical conditions (varying temperature, concentration and size of pre-formed fibril seeds) under which Aib2-Oxm fibrils were prepared, they all exhibit helical and twisted structures with similar material strength governed by the same forces (hydrogen bonds and hydrophobic-hydrophilic interactions), and they offer promising potential for in vivo reversibility in the human body, based on the comparison of their material properties with naturally self-assembled fibrils in human cells. Moreover, Aib2-Oxm fibrillation follows Arrhenius behaviour, unlike its analogue Oxm. Detailed analysis of thermally driven Aib2-Oxm fibril elongation by QCM-D revealed that a temperature rise from 23 to 37 °C made Aib2-Oxm peptide-seed binding twice as probable. Similarly, Aib2-Oxm peptide release from the fibril increases with rising temperature, with a lower release rate at subcutaneous-tissue temperature (32 °C) than at other physiological temperatures (37 and 42 °C), which indicates strong potential for long-lasting therapeutic effects of Aib2-Oxm peptides delivered via subcutaneous injection in patients.

Item (Open Access): Reducing Off-State Current in P-Type Metal Oxide Thin Film Transistors
Meeth, David Jake

The Internet of Things (IoT) is a system in which common objects in our lives are able to broadcast and communicate information as nodes in a large information network. This diverse application system is of growing interest to many industries. Large Area Electronics (LAE) is required to effectively produce the mass quantity of small, low-profile devices that will be embedded in these objects. Due to the scale and irregularity of the form factor, the need for flexible thin-film electronics arises. Metal-oxide materials are a prime candidate. They can be fabricated cheaply, in large areas, using the current electronics infrastructure. Some devices are already being produced in industry; however, they typically rely on an NMOS device architecture, which is inferior to CMOS. While NMOS is a type of digital circuit design which utilises only n-type semiconductors, CMOS utilises complementary n-type and p-type transistors to enable low power consumption and stability. P-type metal-oxide semiconductors have performed poorly in transistor devices and have prevented metal-oxide electronics from using CMOS.
A suitable p-type transistor, to complement current n-type technology, is needed in order to implement CMOS and improve flexible metal-oxide electronics enough to be viable for the IoT or other flexible electronics applications. In this work, metal oxide Cu2O Thin Film Transistors (TFTs) are fabricated using High Target Utilisation Sputtering (HiTUS). In previous work, it was shown that the high off-current associated with Cu2O TFTs may be attributed to an atypical minority-carrier electron current leakage in the off-state. This gives a focus for device improvement. Most research in this area has focused on material improvements and inventions to attempt to create high-performance devices. After all, this was the method through which n-type metal oxides first thrived, with IGZO being the prime example. However, this work aims instead to alter the device design to work with the present material limitations. An Electron Blocking Layer (EBL) is proposed and implemented for the first time on metal oxide TFTs. Further, good-quality c-Si/Cu2O PN junctions are also fabricated and used to efficiently trial EBL materials. NiO and MoO3 EBL materials are shown to moderately improve the on/off ratio by 141% (from 5.1 to 12.3), while CuI and WO3 show no improvement. It was concluded that minority carrier accumulation was not the dominant source of off-current leakage for the devices in this work. In a further effort to identify the more dominant source of the off-current, defect passivation was investigated. A minimum of 5 nm of Al2O3, when applied as a passivation layer to the back channel of annealed Cu2O TFTs, was shown to greatly improve the on/off ratio by 150,000%, or three orders of magnitude, from 11 to 1.7×10⁴. Finally, a new TFT design is proposed, in light of the results from this work, that incorporates SnO and a graded tantalum EBL.

Item (Open Access): The role of microarchitecture in the tensile properties of polymer scaffolds
Mulard, Aude

Soft tissues are one of the human body's main components. Their constant communication and movements make their mechanical properties very important to human health. However, soft tissues have complex mechanical characteristics: they undergo large deformations and have non-linear stress-strain relationships. Numerical models in the literature give insight into their behaviour, but most describe the tissues as continuous solids or viscoelastic liquids. In reality, soft tissues have a multiscale organisation, and their microarchitecture is at the origin of their mechanical properties. In particular, the protein collagen, providing the scaffold of the extracellular matrix, is responsible for soft tissue tensile strength, and its organisation directs the tensile properties of mesenchymal-derived soft tissues (connective tissues, bone and muscle). To better describe the link between microarchitecture and macroscopic behaviour, we create a discrete network model using the Growth and Voronoi architectures. Our model replicates some well-known results from the Mikado network literature and expands them to our geometries. These results notably include transitions from a floppy regime to rigidity, and from nonaffine to affine regimes, with increasing strain, coordination, density and bending rigidity. Additionally, we uncover the role of heterogeneity and micro-organisation in the evolution of the network when the values of these latter parameters change beyond the ranges explored previously.
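For context, the floppy-to-rigid transition mentioned above is conventionally framed by Maxwell constraint counting for central-force networks (a textbook criterion, not a result of this thesis): rigidity sets in when the mean coordination number \(z\) reaches

\[
z_c \;=\; 2d, \qquad \text{i.e. } z_c = 4 \text{ in two dimensions and } z_c = 6 \text{ in three dimensions,}
\]

while bending rigidity can stabilise networks below this isostatic threshold.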
We also extend our model to three dimensions, showing that many of these transitions happen at higher strain and, generally, that 3D behaviour is softer than 2D behaviour. Further, we also highlight differences, such as the appearance of a double strain-stiffening regime with increasing coordination, which is not visible in two dimensions. Together, these results constitute an argument for the mechanical importance of microarchitecture for various collagenous tissues. Moreover, thanks to its computational efficiency and flexibility, our model provides a strong basis for future simulations of collagen gels and soft tissues in two and three dimensions.

Item (Embargo): Corrections to inter-blade-row flow measurements in axial compressors
Sedunin, Viacheslav

For industrial multistage axial compressors, flow measurements in the axial gaps between the blade rows are limited by mechanical constraints and the high uncertainty of the gathered data. The main factors affecting measurement uncertainties are the confined space between the blade rows, the large relative dimensions of the probe and the holding stem, high flow velocities and complex flow structures. In this thesis, it has been shown that the uncertainties become large when a cylindrical probe is placed in proximity to the blades or to large vortical structures, such as blade wakes or separation zones, or when the flow Mach number exceeds the probe's critical value of 0.7. In these cases, the uncertainties may exceed 10° in flow angle and over 30% of dynamic head in measured static pressure. A set of boundaries is defined for geometrical and flow conditions within which the deviations from the true value of the undisturbed flow are quasi-linear and can be estimated. These boundaries are based on the physical principles of internal turbomachinery flows and can be universally applied to other multistage compressors. The corrections were applied first to a CFD-simulated probe placed at representative locations along the span and circumference of the passage, both upstream and downstream of stator blade rows, at representative flow conditions. When the corrections were applied, the measurement uncertainties caused by the probe's flow field distortion were reduced to ±1° in flow angle and to ±5% of dynamic head. Finally, the corrections were applied to experimental spanwise measurements made on an axial compressor, as part of an industrial gas turbine, during its operation on site. A procedure was developed to control the consistency of the measurement corrections based on fundamental physical principles such as mass and energy conservation and radial equilibrium. The corrected data have shown improved consistency: the variation of the integrated mass flow values along the compressor axial locations was reduced from ±15% to ±4% of the inlet value, and the static pressure distribution along the span came within 5% of dynamic head of the radial equilibrium condition, compared to more than 30% initially. The corrections presented in this thesis have fixed the first-order errors relating to the use of finite-size probes in multistage axial compressors, namely, consistently identifying and correcting for the blockage from the probe itself and the potential field of the upstream and downstream blade rows. Before the corrections, these errors meant spanwise measurements of flow angle and pressure had large uncertainties; therefore, they were rarely performed in multistage industrial compressor environments.
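For reference, the simple radial equilibrium condition used as a consistency check of this kind balances the spanwise static pressure gradient against the swirl velocity (the textbook form is quoted here, not an equation reproduced from the thesis):

\[
\frac{\partial p}{\partial r} \;=\; \rho\,\frac{V_\theta^{2}}{r},
\]

so a corrected spanwise static pressure profile can be compared against the pressure gradient implied by the measured circumferential velocity.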
After the corrections, the results of spanwise traversing have shown improved consistency and can now be used as feedback in the design and development of multistage axial compressors.

Item (Embargo): Stochastic Modelling and Approximate Bayesian Inference: Applications in Object Tracking and Intent Analysis
Gan, Runze

As two fundamental pillars of Bayesian inference for time series, stochastic modelling and approximate Bayesian inference play crucial roles in providing accurate priors for underlying random processes and in addressing the challenges of evaluating posterior distributions when exact computation is infeasible. Balancing novel contributions in both areas, this thesis highlights innovative stochastic models in Chapters 2 and 3, and puts emphasis on novel approximate Bayesian inference schemes in Chapters 4, 5, and 6. Driven by applications in object tracking and intent inference, the developed methodologies aim to accurately capture desired motion characteristics while enhancing the effectiveness, efficiency, and robustness of estimation. The applications of intent inference and single object tracking are considered in Chapters 2 and 3. Chapter 2 presents a generic Bayesian intent inference framework capable of predicting the destination of a tracked object, along with an exploration of several mean-reverting stochastic processes that serve as dynamic models within the framework. Chapter 3 develops novel α-stable Lévy state-space models for manoeuvring object tracking and intent prediction, expressed in continuous time as Lévy processes. These models effectively capture sharp changes in state induced by erratic manoeuvres with heavy-tailed α-stable driven noise, while maintaining an advantageous conditionally Gaussian transition. Additionally, this chapter introduces an efficient intent inference procedure that accommodates dynamically varying intent across the surveyed area, offering versatile solutions for diverse tracking scenarios. Chapter 4 introduces a novel conditionally factorised variational family that retains dependence between desired variables at user-defined levels of detail. A new variational Bayes algorithm is then proposed and implemented with importance sampling. It guarantees a better variational lower bound by choosing a finer conditional structure, offering a flexible trade-off between computational cost and inference accuracy. Multi-object tracking tasks are addressed in Chapters 5 and 6 with Poisson measurement processes. Chapter 5 introduces a variational Bayes multi-object tracker that effectively performs tracking, data association, and learning of target and clutter rates, while offering substantial efficiency gains and a parallelisable implementation. Chapter 6 extends this tracker to tackle highly challenging tracking scenarios involving a large number of closely-spaced objects and heavy clutter. By introducing a novel variational localisation strategy that quickly rediscovers missed targets under extremely heavy clutter, the enhanced tracker can automatically detect and recover from track loss, delivering outstanding performance in tracking accuracy and efficiency under difficult tracking conditions.

Item (Open Access): Time series modelling and inference with Bayesian Context Trees
Papageorgiou, Ioannis

Time series arise all over the sciences and engineering, with numerous important applications in many different fields. In the ‘Big Data Era’, the tasks of time series modelling, inference and prediction have become more critical than ever.
In this thesis, we introduce a collection of statistical ideas and algorithmic tools to build a general Bayesian framework for modelling and inference with time series data, which is found to be effective in a number of practical settings, including both discrete and real-valued observations. For discrete-valued time series, we describe a novel Bayesian framework based on variable-memory Markov chains, called Bayesian Context Trees (BCT). This is a rich class of high-order Markov chains that admit parsimonious representations by allowing the memory of the process to depend on the values of the most recent observations. A general prior structure is introduced and a collection of methodological and algorithmic tools are developed, allowing for efficient, exact Bayesian inference in this setting. It is shown that the evidence (averaging over all models and parameters) can be computed exactly, and that the a posteriori most likely models can be precisely identified. The relevant algorithms have only linear complexity in the length of the data and can be updated sequentially, facilitating online prediction. We provide extensive experimental results illustrating the efficacy of our methods in a number of statistical tasks, as well as theoretical results that further justify their use. The proposed approach is then extended to real-valued time series, where it is employed to develop a general hierarchical Bayesian framework for building mixture models. At the top level, a set of discrete contexts (or ‘states’) is extracted from quantised versions of the observations. The set of all relevant contexts is represented as a context tree. At the bottom level, a different real-valued time series model is associated with each state. This defines a very general framework that can be used in conjunction with any existing model class to build flexible and interpretable mixture models. We show that, again, effective computational tools can be developed, allowing for efficient, exact Bayesian inference. The utility of the general framework is illustrated when autoregressive models are used as the base model, resulting in a nonlinear AR mixture, and when conditional heteroscedastic models are used, resulting in a flexible mixture model that gives a systematic way of modelling the well-known volatility asymmetries in financial data. The proposed methods are found to outperform the state-of-the-art techniques in various settings, both with simulated and real-world data.

Item (Open Access): End-to-end Contextual Speech Recognition and Understanding
Sun, Guangzhi

Contextual knowledge is of vital importance to end-to-end automatic speech recognition (ASR) and spoken language understanding (SLU) systems, especially for the long-tailed word problem, where systems suffer from degraded performance on rare or unseen words that are both relevant to the context and carry important information. Integrating contextual knowledge into such end-to-end systems is both necessary and challenging, as contextual knowledge is always dynamically changing while neural systems adopt a static set of trained parameters. In ASR, dynamic contextual knowledge is often incorporated via contextual biasing, where a list of rare words or phrases that are likely to appear in a given context is provided, referred to as a biasing list of biasing words. A word is more likely to be correctly recognised if it is incorporated into the biasing list.
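To illustrate the biasing-list idea in concrete terms (a minimal, generic word-piece prefix tree sketch with hypothetical word pieces; it is not the TCPGen implementation described below):

```python
def build_prefix_tree(biasing_words):
    """Nested-dict prefix tree over word-piece sequences; '$' marks the end of a word."""
    root = {}
    for pieces in biasing_words:
        node = root
        for piece in pieces:
            node = node.setdefault(piece, {})
        node['$'] = True
    return root

def valid_next_pieces(tree, prefix):
    """Word pieces that can extend `prefix` while staying on the biasing list."""
    node = tree
    for piece in prefix:
        node = node.get(piece, {})
    return [piece for piece in node if piece != '$']

# Hypothetical biasing words, pre-split into word pieces
tree = build_prefix_tree([["Liv", "erpool"], ["Lich", "field"], ["Lich", "tenstein"]])
print(valid_next_pieces(tree, ["Lich"]))  # ['field', 'tenstein']
```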
The tree-constrained pointer generator (TCPGen) component, initially proposed for contextual biasing in ASR and later extended to end-to-end SLU systems, is introduced as the core of all the contributions in this thesis. TCPGen leverages the pointer generator mechanism to directly modify the model output distribution, which is both effective and able to generalise well. This combines the advantages of two widely used biasing methods: the effectiveness of shallow fusion and the flexibility of neural deep biasing. Meanwhile, a word piece prefix tree is used to structure the biasing list, which allows TCPGen to handle large biasing lists containing thousands of words or entities. TCPGen, as a generic biasing component, can be integrated into both the attention-based encoder-decoder (AED) model and the neural transducer (NT) model. The minimum biasing word error (MBWE) training and biasing-word-driven language model discounting (BLMD) algorithms for TCPGen are also introduced, for training and inference respectively. Experiments performed across three different datasets, spanning the domains of LibriSpeech audiobook data, meeting conversations and spoken dialogue, have shown consistent and significant word error rate (WER) improvements using TCPGen, especially on rare words. TCPGen also has the potential to be applied to universal large speech models via distribution-level adaptation, and the Whisper ASR model is used as an example to evaluate the effectiveness of TCPGen for this purpose. TCPGen can be further equipped with graph neural networks (GNN) by exploiting the tree structure of the biasing list. GNN encodings provide more powerful node representations in the prefix tree of TCPGen, allowing for "lookahead" functionality where each node contains not only its own word piece information but also information about its child branches. This improved node representation in TCPGen leads to more accurate generation probability predictions for biasing words, enabling better determination of contextual biasing by incorporating information about future word pieces during each ASR decoding step. Three types of GNN encodings are explored in this thesis, including tree recursive neural networks (tree-RNN), graph convolutional networks (GCN) and GraphSAGE. Combinations of GCN and GraphSAGE that take advantage of their complementarity are also explored. To further illustrate the potential of applying contextual biasing and TCPGen in real-world scenarios, with a particular focus on academic and educational applications, an audio-visual contextual ASR pipeline is also introduced using the augmented multi-party interaction (AMI) data. Experiments show further significant improvements using GNNs compared to the standard TCPGen on rare words, on both the LibriSpeech data and the audio-visual contextual ASR pipeline. Notably, the best-performing combination achieved a relative 60% WER reduction compared to the standard end-to-end ASR systems. As in ASR, TCPGen can also be integrated into end-to-end SLU systems to boost the performance on rare words, especially for the slot-filling task. In SLU systems, a focused biasing list can be extracted from a structured knowledge base (KB) by predicting a slot shortlist (SS) using a class language model (CLM). The distribution over the valid set of word pieces can be converted into a distribution over all slot types, achieving a shortcut to the model slot output, termed slot probability biasing (SPB).
Experiments on the combination of the two methods achieve significant performance improvements in slot filling, with a 1.7% absolute SLU-F1 increase overall and a 7% increase on unseen entities compared to a pipeline system. SPB further enabled zero-shot learning of unseen slot types by providing a list of possible named entities for that slot. Furthermore, a knowledge-aware audio-grounded (KA2G) generative SLU framework is introduced, which performs slot filling by prompt and response in natural language. A stacked TCPGen component is incorporated with a shared tree encoding, such that knowledge explored during ASR beam search decoding can be transferred to the generation of slot values. The KA2G framework achieves further performance improvements compared to the SPB mechanism, especially on few-shot entities, by exposing only a handful of examples to the model in the training set. Specifically, KA2G, compared to the end-to-end SLU with TCPGen, achieved a further 1.1% absolute increase in SLU-F1 overall, with an 8% increase on unseen entities.

Item (Restricted)

Item (Open Access): Towards Improved Variational Inference for Deep Bayesian Models
Ober, Sebastian William

Deep learning has revolutionized the last decade, being at the forefront of extraordinary advances in a wide range of tasks including computer vision, natural language processing, and reinforcement learning, to name but a few. However, it is well-known that deep models trained via maximum likelihood estimation tend to be overconfident and give poorly-calibrated predictions. Bayesian deep learning attempts to address this by placing priors on the model parameters, which are then combined with a likelihood to perform posterior inference. Unfortunately, for deep models, the true posterior is intractable, forcing the user to resort to approximations. In this thesis, we explore the use of variational inference as an approximation, as it is unique in simultaneously approximating the posterior and providing a lower bound to the marginal likelihood. If tight enough, this lower bound can be used to optimize hyperparameters and to facilitate model selection. However, this capacity has rarely been used to its full extent for Bayesian neural networks, likely because the approximate posteriors typically used in practice can lack the flexibility to effectively bound the marginal likelihood. We therefore explore three aspects of Bayesian learning for deep models. First, we begin our investigation by asking whether it is necessary to perform inference over as many parameters as possible, or whether it is reasonable to treat many of them as hyperparameters that we optimize with respect to the marginal likelihood. This would introduce significant computational savings; however, we observe that this can lead to pathological behavior and severe overfitting, suggesting that it is better to be as “fully Bayesian” as possible. We continue our thesis by proposing a variational posterior that provides a unified view of inference in Bayesian neural networks and deep Gaussian processes, which we show is flexible enough to take advantage of added prior hyperparameters. Finally, we demonstrate how variational inference can be improved in certain deep Gaussian process models by analytically removing symmetries from the posterior, and performing inference on Gram matrices instead of features.
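For reference, the lower bound on the marginal likelihood referred to above is the standard evidence lower bound of variational inference (its generic form, not an expression specific to this thesis):

\[
\log p(\mathbf{y}) \;\ge\; \mathbb{E}_{q(\boldsymbol{\theta})}\big[\log p(\mathbf{y}\mid\boldsymbol{\theta})\big] \;-\; \mathrm{KL}\big(q(\boldsymbol{\theta})\,\|\,p(\boldsymbol{\theta})\big),
\]

with equality when \(q\) matches the true posterior, which is why a tighter bound can support hyperparameter optimisation and model selection.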
While we do not directly investigate the use of our improvements for model selection, we hope that our contributions will provide a stepping stone to fully realize the promises of variational inference in the future.

Item (Open Access): Strong and weak principles of Bayesian machine learning for systems neuroscience
Jensen, Kristopher

Neuroscientists are recording neural activity and behaviour at a rapidly increasing scale. This provides an unprecedented window into the neural underpinnings of behaviour, while also pushing the need for new techniques to analyse and model these large-scale datasets. Inspiration for such tools can be found in the Bayesian machine learning literature, which provides a set of principled techniques that allow us to perform inference in complex problem settings with large parameter spaces. We propose that, when applied to neural population recordings, these approaches can be divided into ‘weak’ and ‘strong’ models of neural data. The weak models consist of tools for analysing experimental data, which build our own prior knowledge of neural circuits directly into the analysis pipeline. In contrast, strong Bayesian models of neural dynamics posit that the brain itself performs something akin to Bayesian inference. In this view, we can interpret our Bayesian machine learning models as algorithmic or mechanistic models of the learning processes and computations taking place in the biological brain. In this work, we first provide an overview of Bayesian machine learning and its applications to neuroscience, highlighting how both the strong and weak approaches have improved our understanding of neural computations in recent years. We then develop several new models in this field, which provide insights into neural computations ranging from motor control to navigation and decision making. These models can be grouped into three broad categories. First, we construct a series of new ‘weak’ latent variable models that allow us to infer the dimensionality and topology of neural data in an unsupervised manner. We highlight the utility of such approaches on synthetic data and across several biological circuits involved in motor control and navigation. Second, we propose a new method for Bayesian continual learning and relate it to longitudinal recordings of neural activity as a ‘strong’ model of biological learning and memory. Finally, we develop a new ‘strong’ model of planning and decision making through the lens of reinforcement learning formulated as Bayesian inference. In contrast to previous network models, we explicitly build in the capacity for planning-by-simulation and show that this explains many features of both human behaviour and rodent hippocampal replays. This results in a new theory of the role of the hippocampus in flexible planning. The new methods developed in this work both expand the Bayesian toolbox available to systems neuroscientists and provide new insights into the neural computations driving natural behaviours.

Item (Open Access): Business Model Innovation and Productivity
Wannakrairoj, Wit

The rapid, exponential increase in computer power and performance, coupled with significant reductions in the cost of computer hardware and software, has generally been expected to help enhance productivity in many industries. However, recent global statistics show that productivity growth has slowed down in major industrialised nations, especially so in the United Kingdom since the 2007-2008 financial crisis.
This phenomenon is now known as the “Productivity Paradox” and has attracted the interest of many researchers. Several hypotheses have been proposed to explain this productivity paradox, but none has quite unlocked the mystery or completely revealed the real causes of the problem. Here, through both quantitative and qualitative studies, we show that business model innovation can be an important source of productivity growth. The first study investigates the relationship between business model innovation and productivity growth. In this study, business model innovation is considered as an important organisational factor which may have an impact on productivity growth. In order to study the relationship between business model innovation and productivity empirically, this first study introduces a novel approach to measuring business model innovation by analysing changes in the net asset turnover ratio. Using this newly introduced business model innovation variable, the study shows through empirical analysis that business model innovation contributed significantly to productivity growth across UK firms between 2003 and 2017. After showing in the first study that there is a relationship between productivity and business model innovation and that business model innovation can be a source of productivity growth, the second study looks further into how changes in risk resulting from business model innovation affect productivity. The study also looks at the servitization of manufacturing, as part of business model innovation, and its impact on productivity. As the servitization of business models changes the fundamental economics related to risk, strategic risk management is found to be one of the reasons why firms have servitized. To account for risk within different business models, this study introduces a novel approach to measuring productivity by adjusting for risk, based on volatility and the source of output variation. This study looks at the differences in risk and productivity between the service and manufacturing sectors in UK manufacturing industry as a proxy for pre- and post-servitization of manufacturing. This empirical study compares risk-adjusted productivity and unadjusted productivity between the service and manufacturing sectors in the UK between 2003 and 2018. The results show that the productivity gap between the manufacturing and service sectors reduces significantly after adjusting for risk. Although prior studies have found that the service sector contributes to a slowdown in productivity growth, this study provides empirical support that, once adjusted for risk, the results show otherwise. To understand the mechanics and consequences of business model innovation for productivity, the third study takes a closer look at a firm-level case study of Rolls-Royce, to examine the relationship between the servitization of business models and productivity growth. To understand which business model levers influenced by servitization impact productivity growth, both quantitative and qualitative methods were used. Through qualitative representation, the firm’s business model choices based on business model levers were linked with profitability and productivity consequences. With quantitative analysis, the consequences of business model innovation for productivity were quantified through a method of profit decomposition. The results of this study show that firms can increase or decrease their productivity through the use of different levers, depending on business model choice.
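For reference, the ratio used in the first study is the standard accounting measure (its usual definition, not a formula reproduced from the dissertation):

\[
\text{Net asset turnover} \;=\; \frac{\text{Revenue}}{\text{Total assets} \;-\; \text{Current liabilities}},
\]

so a sustained change in this ratio indicates a change in how a firm converts its asset base into sales, which is the sense in which it is used as a proxy for business model innovation.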
Together, the results from these three studies demonstrate that business model innovation can be part of the solution to the productivity paradox. Through the methodologies implemented herein and their results, this dissertation contributes to the literature on business model innovation, servitization, and productivity. The findings also have significant implications for policymakers and managers regarding the importance of business model innovation, and of servitization in particular, for achieving productivity growth.

Item Open Access An optimization approach to relate neural circuit architecture, loss landscapes and learning performance in static and dynamic tasks
Perez Rotondo, Adriana
Learning is challenging for large and complex neural circuits. There is a fundamental difficulty in determining how individual neurons or synapses affect the overall behavior of a circuit, known as the credit assignment problem. Rather than focusing on single neurons or synapses, one must therefore step back, consider the overall circuit architecture, and match the circuit’s structural patterns to its behavior. One notably important pattern observed throughout the brain is dimensionality expansion, whereby a small population of neurons diverges onto a much larger group of neurons via synapses that convey the same information. As large neural circuits are energetically expensive, what is the purpose of these seemingly redundant synapses? Here, we show that this type of expansion architecture affects a circuit’s ability to learn both static and dynamic tasks. For static tasks, our findings show that adding redundant synapses to neural circuits can increase learning accuracy. We evaluate the impact of synaptic changes on learning performance, quantify the inherent challenges of learning with biologically plausible learning rules in large neural circuits, and establish the relationship between learning difficulty and neural circuit architecture. We link the geometry of the loss landscape to the difficulty of a task and demonstrate how network expansions modify the loss landscape. For dynamic tasks, we consider the cerebellum, which is involved in motor control and has a unique architecture: cerebellar mossy fibre inputs undergo a massive expansion into granule cells. Classical codon theory and its more recent extensions argue that this architecture facilitates learning via pattern separation; however, this theory predicts that granule cell layer activity should be sparse. Instead, recent physiological data indicate that the activity is denser than previously thought, underscoring a gap between cerebellar theory and data. Moreover, there is a conceptual gap between static pattern separation and the critical role of the cerebellum in dynamic tasks such as motor learning. We aim to fill both of these gaps through mathematical analysis and simulations of cerebellar learning. We identify specific difficulties inherent to the online learning of dynamic tasks, find that input expansions directly mitigate these difficulties, and show that this benefit is maximized when granule cell activity is dense. Overall, this study illuminates how neural circuit architecture determines the ability to learn a task. Our analysis uncovers fundamental relationships between network architecture, learning performance, and the geometry of loss landscapes, independent of specific learning rules.
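As a toy numerical illustration of the expansion argument, and not the thesis's actual model, learning rule, or analysis, the sketch below trains a linear readout with a simple delta rule on a nonlinear target, first from the raw inputs and then from a fixed random nonlinear expansion of those inputs (loosely analogous to a granule-cell-like expansion layer). The architecture, target function, and parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy static task: a smooth nonlinear function of a random projection of the inputs.
n_in, n_expand, n_samples = 10, 200, 500
X = rng.standard_normal((n_samples, n_in))
proj = rng.standard_normal(n_in) / np.sqrt(n_in)     # random input direction
y = np.sin(2.0 * X @ proj)                           # nonlinear target

def delta_rule_fit(features, targets, lr=1e-3, epochs=200):
    """Online delta-rule training of a linear readout; returns training MSE."""
    w = np.zeros(features.shape[1])
    for _ in range(epochs):
        for f, t in zip(features, targets):
            w += lr * (t - f @ w) * f                # local delta-rule update
    pred = features @ w
    return np.mean((targets - pred) ** 2)

# (1) Linear readout directly from the raw inputs.
mse_raw = delta_rule_fit(X, y)

# (2) Linear readout from a fixed random expansion (extra, "redundant" units).
W_expand = rng.standard_normal((n_in, n_expand)) / np.sqrt(n_in)
X_exp = np.maximum(X @ W_expand, 0.0)                # ReLU expansion layer
mse_expanded = delta_rule_fit(X_exp, y)

print(f"training MSE without expansion: {mse_raw:.4f}")
print(f"training MSE with expansion:    {mse_expanded:.4f}")
```

With these settings the expanded readout typically reaches a substantially lower training error, illustrating in miniature how adding seemingly redundant units can make a task easier for a simple, local learning rule.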
The findings suggest that seemingly redundant synapses in neural circuits may have a critical function in facilitating precise and fast motor learning.

Item Open Access Modern Bayesian Object Tracking: Challenges and Solutions
Li, Qing; Li, Qing [0000-0003-0297-4346]
Target tracking is a challenging problem with a wide range of applications such as surveillance, robotics, and autonomous vehicles. Recent years have seen significant progress in the field of object tracking; nonetheless, numerous challenges remain for modern tracking systems. These include the development of scalable and robust tracking algorithms that perform reliably under extreme conditions such as heavy clutter, closely spaced targets, occlusion, and high target density. Modern tracking algorithms also face new demands to learn information about tracking scenes, interpret target behaviour, and detect anomalies in real time. The objective of this thesis is to analyse state-of-the-art object tracking problems and present statistical tools for the design of efficient tracking systems with the help of Bayesian computational methods. The target tracking problem is typically divided into two sub-tasks: data association and state estimation. Data association involves identifying which measurements correspond to which object, while state estimation involves estimating the position, velocity, and other properties of each object. The major challenge of data association lies in its combinatorial complexity, which grows rapidly with the number of measurements and targets. Accurate state estimation, on the other hand, relies on a good model of the targets' motion and interaction, including the ability to capture the dynamic interaction structure as well as manoeuvring behaviour. This thesis presents solutions for several tracking applications based on the efficient design of Monte Carlo sampling methods, setting it apart from existing techniques based on the Kalman filter or other recursive closed-form Gaussian mixture filter implementations that rely on approximations and heuristic design. Monte Carlo sampling methods, despite being theoretically optimal and offering superior tracking performance, are often considered of limited practical interest because of their computational burden. Our goal is therefore to investigate real-time sampling-based solutions for modern tracking problems by exploring modelling and inference strategies that speed up and scale the sampling structures. A highlight of this thesis is the application of Rao-Blackwellisation strategies to different inference and tracking tasks, which provides a useful case study of the performance of Rao-Blackwellisation in sequential Bayesian estimation problems.
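As a concrete, heavily simplified illustration of the Rao-Blackwellisation idea described above, and not the algorithms or scenarios developed in the thesis, the sketch below implements a bootstrap-style Rao-Blackwellised particle filter for a toy jump Markov linear system: the discrete manoeuvre mode is sampled per particle, while the conditionally linear-Gaussian kinematic state is handled analytically by a per-particle Kalman filter. The model, parameters, and function names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# --- Toy jump Markov linear system (illustrative, not the thesis's model) ---
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])                # constant-velocity dynamics
H = np.array([[1.0, 0.0]])                           # position-only measurement
R = np.array([[1.0]])                                # measurement noise variance
Qs = [np.diag([0.01, 0.01]), np.diag([1.0, 1.0])]    # mode 0: quiescent, mode 1: manoeuvring
P_trans = np.array([[0.95, 0.05], [0.10, 0.90]])     # mode transition matrix

def simulate(T=60):
    x, mode = np.array([0.0, 1.0]), 0
    zs = []
    for _ in range(T):
        mode = rng.choice(2, p=P_trans[mode])
        x = F @ x + rng.multivariate_normal(np.zeros(2), Qs[mode])
        zs.append(H @ x + rng.normal(0.0, np.sqrt(R[0, 0]), 1))
    return np.array(zs)

def rbpf(zs, n_particles=200):
    """Rao-Blackwellised particle filter: sample the discrete mode per particle and
    handle the conditionally linear-Gaussian state with a per-particle Kalman filter."""
    modes = np.zeros(n_particles, dtype=int)
    means = np.tile(np.array([0.0, 1.0]), (n_particles, 1))
    covs = np.tile(np.eye(2), (n_particles, 1, 1))
    estimates = []
    for z in zs:
        logw = np.zeros(n_particles)
        for i in range(n_particles):
            modes[i] = rng.choice(2, p=P_trans[modes[i]])     # sample manoeuvre mode
            m_pred = F @ means[i]                             # Kalman predict
            P_pred = F @ covs[i] @ F.T + Qs[modes[i]]
            S = H @ P_pred @ H.T + R                          # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)               # Kalman gain
            innov = z - H @ m_pred
            means[i] = m_pred + (K @ innov).ravel()           # Kalman update
            covs[i] = (np.eye(2) - K @ H) @ P_pred
            # weight by the marginal measurement likelihood N(z; H m_pred, S)
            logw[i] = -0.5 * (innov @ np.linalg.inv(S) @ innov
                              + np.log(2 * np.pi * np.linalg.det(S)))
        w = np.exp(logw - logw.max())
        w /= w.sum()
        estimates.append(w @ means)                           # posterior mean of the state
        idx = rng.choice(n_particles, size=n_particles, p=w)  # multinomial resampling
        modes, means, covs = modes[idx], means[idx].copy(), covs[idx].copy()
    return np.array(estimates)

zs = simulate()
print(rbpf(zs)[:5])
```

Marginalising the linear-Gaussian part of the state in this way typically reduces the variance of the particle weights compared with sampling the full state, which is the practical appeal of Rao-Blackwellisation in sequential Bayesian estimation.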