
Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs

Accepted version
Peer-reviewed

Type

Conference Object

Authors

Mundy, A 
Dasika, G 
Beu, J 

Abstract

The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs). Although much research has been done on model and algorithmic optimization of CNNs, little attention has been paid to the efficient implementation of these algorithms on embedded CPUs, which usually have frugal memory and low power budgets. This work aims to fill that gap and focuses on the efficient implementation of Winograd or Cook-Toom based convolution on modern Arm Cortex-A CPUs, widely used in mobile devices today. Specifically, we demonstrate a reduction in inference latency by using a set of optimization strategies that improve the utilization of computational resources, and by effectively leveraging the ARMv8-A NEON SIMD instruction set. We evaluated our proposed region-wise multi-channel implementations on an Arm Cortex-A73 platform using several representative CNNs. The results show significant full-network performance improvements, up to 60%, over existing im2row/im2col based optimization techniques.
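As a minimal sketch of the idea behind the abstract's claim, the classic Winograd F(2,3) minimal-filtering algorithm produces two outputs of a 3-tap 1-D convolution with 4 multiplications instead of the 6 a direct computation needs; this is the source of the compute-complexity reduction. The function names below are illustrative, not from the paper, and the paper's actual kernels operate on 2-D tiles across channels using NEON intrinsics.

```python
def winograd_f23(d, g):
    """F(2,3): compute [y0, y1] with y_i = sum_k d[i+k]*g[k],
    using 4 multiplies instead of 6 (illustrative sketch)."""
    d0, d1, d2, d3 = d
    g0, g1, g2 = g
    # Filter transform (can be precomputed once per filter).
    u0 = g0
    u1 = (g0 + g1 + g2) / 2
    u2 = (g0 - g1 + g2) / 2
    u3 = g2
    # Input transform followed by the 4 element-wise multiplies.
    m0 = (d0 - d2) * u0
    m1 = (d1 + d2) * u1
    m2 = (d2 - d1) * u2
    m3 = (d1 - d3) * u3
    # Output transform.
    return [m0 + m1 + m2, m1 - m2 - m3]

def direct_conv(d, g):
    """Direct 1-D convolution for comparison (6 multiplies)."""
    return [sum(d[i + k] * g[k] for k in range(3)) for i in range(2)]
```

Both routines produce the same two outputs for any 4-element input and 3-tap filter; the per-multiply savings compound when the transforms are amortized over many channels and tiles, which is what the region-wise multi-channel implementation exploits.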

Conference Name

EMC2: Workshop On Energy Efficient Machine Learning And Cognitive Computing For Embedded Applications

Rights

All rights reserved

Sponsorship

PhD student funded by an EPSRC doctoral training account