Fast and Precise Touch-Based Text Entry for Head-Mounted Augmented Reality with Variable Occlusion

Accepted version
Peer-reviewed


Abstract

We present the VISAR keyboard: an augmented reality (AR) head-mounted display (HMD) system that supports text entry via a virtualised input surface. Users select keys on the virtual keyboard by imitating the process of single-hand typing on a physical touchscreen display. Our system uses a statistical decoder to infer users’ intended text and to provide error-tolerant predictions. There is also a high-precision fall-back mechanism to support users in indicating which keys should be left unmodified by the auto-correction process. A unique advantage of leveraging the well-established touch input paradigm is that our system enables text entry with minimal visual clutter on the see-through display, thus preserving the user’s field-of-view. We iteratively designed and evaluated our system and show that the final iteration supports a mean entry rate of 17.75 wpm with a mean character error rate of less than 1%. This performance represents a 19.6% improvement relative to the state-of-the-art baseline investigated: a gaze-then-gesture text entry technique derived from the system keyboard on the Microsoft HoloLens. Finally, we validate that the system is effective in supporting text entry in a fully mobile usage scenario likely to be encountered in industrial applications of AR HMDs.

Description

Journal Title

ACM Transactions on Computer-Human Interaction

Conference Name

Journal ISSN

1073-0516
1557-7325

Volume Title

25

Publisher

Association for Computing Machinery (ACM)

Rights and licensing

Except where otherwise noted, this item's license is described as All rights reserved.
Sponsorship
EPSRC (1198)
Engineering and Physical Sciences Research Council (EP/R004471/1)
Engineering and Physical Sciences Research Council (EP/N014278/1)
Engineering and Physical Sciences Research Council (EP/N010558/1)
Per Ola Kristensson was supported in part by a Google Faculty research award and EPSRC grants EP/N010558/1 and EP/N014278/1. Keith Vertanen was supported in part by a Google Faculty research award. John Dudley was supported by the Trimble Fund.