The rising levels of vehicle autonomy allow drivers to shift their attention to non-driving tasks while driving (e.g., texting, reading, or watching movies). However, these systems are prone to failure, so human intervention remains crucial in critical situations. In this work, we propose using human actuation as a new means of communicating take-over requests (TORs) through proprioception. We conducted a user study in a driving simulation in the presence of a complex working memory span task, communicating TORs through four different modalities: vibrotactile, audio, visual, and proprioception. Our results show that the vibrotactile condition yielded the fastest reaction time, followed by proprioception. Additionally, proprioceptive cues resulted in the second-best performance on the non-driving task, after auditory cues.
For highly automated vehicles (AVs), new interaction concepts need to be developed. Even in AVs, the driver might want to intervene and override the automation from time to time. To create this possibility of control, we explore vehicle control through maneuver-based interventions (MBI). We focus on explicit, contact-free interaction, which could be beneficial in future AV designs where the driver is not necessarily bound to classical controls. We propose a set of freehand gestures and keywords for voice control, derived in a user-centered design process, and further discuss the properties, applicability, and user impressions of both interaction modalities. Voice control appears to be an efficient way to select a maneuver, while freehand gestures could be used when the voice channel is blocked, e.g., by conversation with passengers.