On the Reduction of Computational Complexity of Deep Convolutional Neural Networks

Published version
Peer-reviewed

Abstract

Deep convolutional neural networks (ConvNets), which lie at the heart of many emerging applications, achieve remarkable performance in audio and visual recognition tasks. Unfortunately, this accuracy often comes at significant computational cost, limiting deployability. In modern ConvNets, the convolutional layers typically consume the vast majority of computational resources during inference, which has made the acceleration of these layers an important research area in both academia and industry. In this paper, we examine the effects of co-optimizing the internal structure of the convolutional layers and the underlying implementation of the fundamental convolution operation. We demonstrate that a combination of these methods can have a substantial impact on the overall speedup of a ConvNet, achieving a ten-fold increase over the baseline. We also introduce a new class of fast one-dimensional (1D) convolutions for ConvNets based on the Toom-Cook algorithm. We show that the proposed scheme is mathematically well grounded and robust, requires no time-consuming retraining, and achieves its speedups solely from the convolutional layers with no loss in baseline accuracy.
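
To give a flavour of Toom-Cook-style fast convolution (the paper's own construction is not reproduced here), the minimal Python sketch below implements the classic F(2,3) short-convolution identity: two outputs of a 3-tap filter are computed from four inputs with four multiplications instead of the six a direct computation needs. All names and test values are illustrative assumptions, not code from the paper.

    # Illustrative sketch of a Toom-Cook/Winograd-style F(2,3) 1D convolution:
    # two outputs of a 3-tap filter from four inputs using 4 multiplications
    # rather than the 6 a direct method needs. Names (d, g, m1..m4) follow the
    # standard textbook derivation, not the paper's implementation.

    def f23_toom_cook(d, g):
        # Filter-side transforms; with weights fixed at inference time,
        # these can be precomputed once per layer.
        gp = (g[0] + g[1] + g[2]) / 2.0
        gm = (g[0] - g[1] + g[2]) / 2.0
        # The four multiplications.
        m1 = (d[0] - d[2]) * g[0]
        m2 = (d[1] + d[2]) * gp
        m3 = (d[2] - d[1]) * gm
        m4 = (d[1] - d[3]) * g[2]
        # Output transform: additions only.
        return [m1 + m2 + m3, m2 - m3 - m4]

    def f23_direct(d, g):
        # Reference direct computation of the same two outputs.
        return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]

    d = [1.0, 2.0, -3.0, 4.0]   # input tile of four samples
    g = [0.5, -1.0, 2.0]        # 3-tap filter
    assert f23_toom_cook(d, g) == f23_direct(d, g)   # both give [-7.5, 12.0]

Longer convolutions are handled by tiling the input and reusing the transformed filter across tiles, which is where the arithmetic savings compound.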

Description

Keywords

computational optimization, convolutional neural network, deep learning, hardware implementation

Journal Title

Entropy (Basel)

Conference Name

Journal ISSN

1099-4300

Volume Title

20

Publisher

MDPI AG

Sponsorship

EPSRC (1700975)