The case for limited-preemptive scheduling in GPUs for real-time systems
Journal Title
14th annual workshop on Operating Systems Platforms for Embedded Real-Time applications
Conference Name
Operating Systems Platforms for Embedded Real-Time applications
Type
Conference Object
Citation
Spliet, R., & Mullins, R. The case for limited-preemptive scheduling in GPUs for real-time systems. 14th annual workshop on Operating Systems Platforms for Embedded Real-Time applications. https://doi.org/10.17863/CAM.25225
Abstract
Many emerging cyber-physical systems, such as autonomous vehicles, have both extreme computation demands and hard latency requirements. GPUs are being touted as the ideal platform for such applications due to their highly parallel organisation. Unfortunately, while delivering the necessary performance, GPUs are currently designed to maximise throughput and fail to provide the hard real-time (HRT) guarantees such systems require.
In this work we discuss three additions to GPUs that enable them to better meet real-time constraints. Firstly, we provide a quantitative argument for exposing the non-preemptive GPU scheduler to software. We show that current GPUs perform hardware context switches for non-preemptive scheduling in 20–26.5 μs on average, while swapping out 60–270 KiB of state. Although high, these overheads do not preclude non-preemptive HRT scheduling of real-time task sets. Secondly, we argue that limited-preemption support can deliver large schedulability benefits with only a minor impact on context-switching overhead. Finally, we demonstrate the need for a more predictable DRAM request arbiter to reduce the interference caused by processes running in parallel on the GPU.
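To illustrate why overheads of this size need not rule out non-preemptive HRT scheduling, the Python sketch below runs a simplified first-job response-time check for non-preemptive fixed-priority scheduling: each job's WCET is inflated by a per-job context-switch charge, and a blocking term accounts for the one lower-priority job that cannot be preempted. The task set, the 26.5 μs charge and the simplified test itself are illustrative assumptions, not the analysis or the workloads used in the paper.

from math import floor

def np_rt_check(tasks, ctx_switch_us):
    """Simplified first-job response-time check for non-preemptive
    fixed-priority scheduling (illustrative only; a full analysis must
    also examine later jobs in the level-i busy period).
    tasks: list of (wcet_us, period_us), deadline == period, sorted from
    highest to lowest priority."""
    C = [c + ctx_switch_us for c, _ in tasks]  # charge one switch per job
    T = [t for _, t in tasks]
    n = len(tasks)
    for i in range(n):
        # Blocking: at most one lower-priority job already holds the GPU.
        B = max((C[j] for j in range(i + 1, n)), default=0.0)
        w = B + sum(C[:i])          # initial guess for the job's start time
        while True:
            w_next = B + sum((floor(w / T[j]) + 1) * C[j] for j in range(i))
            if w_next + C[i] > T[i]:
                return False        # deadline missed (or iteration diverges)
            if w_next == w:
                break               # start time converged; deadline met
            w = w_next
    return True

# Hypothetical kernel set (WCET, period) in microseconds, charging the 26.5 us
# average context-switch time reported above to every job.
print(np_rt_check([(500, 5_000), (1_200, 10_000), (2_000, 20_000)], 26.5))
# -> True: this task set remains schedulable despite the switching overhead.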
Embargo Lift Date
2100-01-01
Identifiers
This record's DOI: https://doi.org/10.17863/CAM.25225
This record's URL: https://www.repository.cam.ac.uk/handle/1810/277888
Rights
Licence: http://www.rioxx.net/licenses/all-rights-reserved