Software prefetching for indirect memory accesses: A microarchitectural perspective

Accepted version
Peer-reviewed

Abstract

Many modern data processing and HPC workloads are heavily memory-latency bound. A tempting proposition to solve this is software prefetching, where special non-blocking loads are used to bring data into the cache hierarchy just before being required. However, these are difficult to insert to effectively improve performance, and techniques for automatic insertion are currently limited. This article develops a novel compiler pass to automatically generate software prefetches for indirect memory accesses, a special class of irregular memory accesses often seen in high-performance workloads. We evaluate this across a wide set of systems, all of which gain benefit from the technique. We then evaluate the extent to which good prefetch instructions are architecture dependent and the class of programs that are particularly amenable. Across a set of memory-bound benchmarks, our automated pass achieves average speedups of 1.3× for an Intel Haswell processor, 1.1× for both an ARM Cortex-A57 and a Qualcomm Kryo, 1.2× for an ARM Cortex-A72 and an Intel Kaby Lake, and 1.35× for an Intel Xeon Phi Knights Landing, each of which is an out-of-order core, and performance improvements of 2.1× and 2.7× for the in-order ARM Cortex-A53 and first-generation Intel Xeon Phi.
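As context for the abstract, the following is a minimal hand-written C sketch of the kind of staged prefetches such a compiler pass might insert for an indirect access pattern a[b[i]]. The function name sum_indirect and the look-ahead distance DIST are illustrative assumptions, not the paper's generated code; the pass described in the article derives and tunes these automatically.

#include <stddef.h>

/* Hypothetical look-ahead distance; effective values depend on the
 * target microarchitecture. */
#define DIST 32

double sum_indirect(const double *a, const int *b, size_t n)
{
    double sum = 0.0;
    for (size_t i = 0; i < n; ++i) {
        /* Stage 1: prefetch the index array further ahead, so the
         * address computation in stage 2 does not itself stall on a
         * cache miss. __builtin_prefetch is the GCC/Clang hint for a
         * non-blocking load. */
        if (i + 2 * DIST < n)
            __builtin_prefetch(&b[i + 2 * DIST], 0, 3);

        /* Stage 2: prefetch the indirectly addressed data element the
         * loop will need DIST iterations from now. */
        if (i + DIST < n)
            __builtin_prefetch(&a[b[i + DIST]], 0, 3);

        sum += a[b[i]];  /* the original indirect memory access */
    }
    return sum;
}

The two-stage layout reflects the general idea of prefetching indirect accesses: the data prefetch is only useful if the index it depends on is already cached, so the index array is fetched at a larger offset.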

Description

Journal Title

ACM Transactions on Computer Systems

Conference Name

Journal ISSN

0734-2071
1557-7333

Volume Title

36

Publisher

Association for Computing Machinery (ACM)

Rights and licensing

Except where otherwise noted, this item's license is described as All rights reserved
Sponsorship
EPSRC (1510365)
Engineering and Physical Sciences Research Council (EP/K026399/1, EP/M506485/1)
ARM Ltd.