Show simple item record

dc.contributor.author: Maji, Partha [en]
dc.date.accessioned: 2020-07-02T07:58:47Z
dc.date.available: 2020-07-02T07:58:47Z
dc.date.submitted: 2020-06-24 [en]
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/307488
dc.description.abstract: In deep learning, a convolutional neural network (ConvNet or CNN) is a powerful tool for building embedded applications that use data to make predictions. An application running on an embedded system typically has limited memory, processing power, and storage. Implementing deep convolutional neural network-based inference on such resource-constrained devices can be very challenging, as these environments cannot usually draw on the massive computing power and storage available in cloud server environments. Furthermore, the constantly evolving nature of modern deep network architectures aggravates the problem by forcing a balance between flexibility and specialisation to avoid the inability to adapt. Much of the baseline architecture of a deep convolutional neural network has, however, stayed the same. With careful optimisation of the most common and widely occurring layer architectures, it is typically possible to accelerate these emerging workloads on resource-constrained embedded systems. This thesis makes four contributions. I first developed a lossy three-stage low-rank approximation scheme that can reduce the computational complexity of a pre-trained model by 3-5x overall, and by up to 8-9x for individual convolutional layers. This scheme requires restructuring the convolutional layers and generally suits the scenario where both the training data and the trained model are available. In many scenarios, however, the training data is not available to fine-tune away any loss in prediction accuracy when structural changes are made to a model as a post-processing step. Beyond the lack of training data, there are other situations where the architecture of a model cannot be changed after training. My second contribution handles this scenario with a low-level optimisation scheme that, unlike the low-rank approximation scheme, requires no changes to the model architecture. This novel scheme uses a modified version of the Cook-Toom algorithm to reduce the computational intensity of commonly occurring dense and spatial convolutional layers and to speed up inference by 2-4x. My third contribution is an efficient implementation of the Cook-Toom class of algorithms on Arm's ubiquitous low-power Cortex processors. Unlike direct convolution, computing convolutions with the modified Cook-Toom algorithm requires a different data-processing pipeline, as it involves pre- and post-transformations of the intermediate activations. I introduced a multi-channel multi-region (MCMR) scheme to enable an efficient implementation of the fast Cook-Toom algorithm. I demonstrate that, by effectively using SIMD instructions together with the MCMR scheme, an average 2-3x and a peak 4x per-layer speedup is readily achievable. My final contribution is the Cook-Toom accelerator, a custom hardware architecture for modern convolutional neural networks. This accelerator architecture is designed from the ground up to address some of the limitations of a resource-constrained SIMD processor. I also illustrate how new, emerging layer types can be mapped efficiently onto the same flexible architecture without any modification. [en]
dc.rights: All rights reserved [en]
dc.subject: Neural Network [en]
dc.subject: Convolutional Neural Network [en]
dc.subject: Optimisation [en]
dc.subject: SIMD [en]
dc.subject: Low-rank [en]
dc.subject: Compression [en]
dc.subject: Accelerators [en]
dc.title: Model-Architecture Co-design of Deep Neural Networks for Embedded Systems [en]
dc.type: Thesis
dc.type.qualificationlevel: Doctoral [en]
dc.type.qualificationname: Doctor of Philosophy (PhD) [en]
dc.publisher.institution: University of Cambridge [en]
dc.identifier.doi: 10.17863/CAM.54581
rioxxterms.licenseref.uri: http://www.rioxx.net/licenses/all-rights-reserved [en]
dc.contributor.orcid: Maji, Partha [0000-0002-1919-1228]
rioxxterms.type: Thesis [en]
dc.publisher.college: Clare Hall
dc.type.qualificationtitle: Doctor of Philosophy in Computer Science [en]
pubs.funder-project-id: EPSRC (1700975)
cam.supervisor: Mullins, Robert
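The Cook-Toom fast-convolution scheme mentioned in the abstract can be illustrated with a minimal 1-D sketch. The code below uses the standard F(2,3) transform matrices (two outputs, three-tap filter) as commonly given for Winograd/Cook-Toom convolution; these matrices, and the pure-Python routines, are illustrative assumptions and not the modified algorithm or the Arm implementation developed in the thesis.

```python
# Hedged sketch: 1-D Cook-Toom/Winograd F(2,3) fast convolution.
# It computes 2 outputs of a 3-tap correlation from a 4-sample input
# tile using 4 element-wise multiplies, versus 6 for direct convolution.
# BT, G, AT are the standard F(2,3) transforms (an assumption, not the
# thesis's modified variant).

BT = [[1,  0, -1,  0],   # input (data) pre-transform
      [0,  1,  1,  0],
      [0, -1,  1,  0],
      [0,  1,  0, -1]]
G  = [[1.0,  0.0, 0.0],  # filter pre-transform (can be applied once, offline)
      [0.5,  0.5, 0.5],
      [0.5, -0.5, 0.5],
      [0.0,  0.0, 1.0]]
AT = [[1, 1,  1,  0],    # output post-transform
      [0, 1, -1, -1]]

def matvec(m, v):
    """Plain matrix-vector product over Python lists."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

def winograd_f23(d, g):
    """d: 4-sample input tile, g: 3-tap filter -> 2 correlation outputs."""
    U = matvec(G, g)                   # transformed filter
    V = matvec(BT, d)                  # transformed input tile
    M = [u * v for u, v in zip(U, V)]  # only 4 multiplies here (the saving)
    return matvec(AT, M)               # map back to the output domain

def direct(d, g):
    """Reference: valid 3-tap correlation, 6 multiplies for 2 outputs."""
    return [sum(d[i + j] * g[j] for j in range(3)) for i in range(2)]
```

For example, `winograd_f23([1, 2, 3, 4], [1, -1, 2])` and `direct([1, 2, 3, 4], [1, -1, 2])` agree (both give 5 and 7). The pre- and post-transforms are exactly why the data-processing pipeline differs from direct convolution, as the abstract notes, and why the filter transform can be hoisted out and reused across tiles and regions.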


Files in this item


There are no files associated with this item.
