Intelligent Interactive Displays in Vehicles with Intent Prediction: A Bayesian Framework
Accepted version
Peer-reviewed
Repository URI
Repository DOI
Change log
Authors
Abstract
Using an in-vehicle interactive display, such as a touchscreen, typically entails undertaking a free-hand pointing gesture and dedicating a considerable amount of attention that could otherwise be devoted to driving, with potential safety implications. Owing to road and driving conditions, the user's input can also be subject to high levels of perturbation, resulting in erroneous selections. In this article, we give an overview of the novel concept of an intelligent predictive display in vehicles. It can infer, notably early in the pointing task and with high confidence, the item the user intends to select on the display, from the tracked free-hand pointing gesture and, possibly, other available sensory data. Accordingly, it simplifies and expedites target acquisition (pointing and selection), thereby substantially reducing the time and effort required to interact with an in-vehicle display. As well as briefly addressing the various signal processing and human-factors challenges posed by predictive displays in the automotive environment, we discuss the fundamental problem of intent inference and introduce a Bayesian formulation. Empirical evidence from data collected in instrumented cars demonstrates the usefulness and effectiveness of this solution.
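To illustrate the flavour of the Bayesian intent inference described above, the sketch below shows a deliberately simplified recursive Bayes update: a posterior over candidate on-screen items is refined as each new tracked fingertip position arrives. The Gaussian proximity likelihood, the `sigma` parameter, and the function name are illustrative assumptions, not the observation model used in the article.

```python
import math

def bayes_intent_update(priors, targets, observation, sigma=1.0):
    """One recursive Bayes step over candidate selectable items.

    priors      -- current probability for each candidate item
    targets     -- 3-D position of each selectable item on the display
    observation -- latest tracked fingertip position (x, y, z)
    sigma       -- assumed observation-noise scale (illustrative choice)
    """
    unnormalised = []
    for prior, target in zip(priors, targets):
        # Illustrative likelihood: pointing observations are assumed
        # Gaussian-distributed around the intended item's position.
        sq_dist = sum((o - t) ** 2 for o, t in zip(observation, target))
        likelihood = math.exp(-sq_dist / (2.0 * sigma ** 2))
        unnormalised.append(prior * likelihood)
    total = sum(unnormalised)
    # Normalise so the posterior sums to one.
    return [p / total for p in unnormalised]
```

Applied over a pointing trajectory, the posterior concentrates on the intended item well before the finger reaches the display, which is the basis for early, high-confidence selection:

```python
targets = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
posterior = [1 / 3, 1 / 3, 1 / 3]  # uniform prior over three items
for obs in [(0.6, 0.3, 0.2), (0.8, 0.1, 0.1), (0.95, 0.02, 0.01)]:
    posterior = bayes_intent_update(posterior, targets, obs, sigma=0.5)
# posterior now places most probability mass on the middle item
```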
Description
Keywords
Journal Title
Conference Name
Journal ISSN
1558-0792