Starting with automatic gear shifting, the operation of a vehicle has become more and more abstract. In the future, we could control vehicles with single, simple commands. For such a maneuver-based vehicle control system, we investigate head-up display designs in a workshop. The aim is to identify common and distinct features of various display designs through mock-ups. First results show that preferences regarding the size of GUI elements vary. The preferred position of GUI elements in the head-up display (HUD) is the central bottom area. We found two major interface design styles: static interfaces (all elements visible) with a fixed layout, and dynamic interfaces (only relevant elements visible) with a fixed or adaptive layout.
For highly automated vehicles (AVs), new interaction concepts need to be developed. Even in AVs, the driver might want to intervene and override the automation from time to time. To preserve this possibility of control, we explore vehicle control through maneuver-based interventions (MBI). We focus on explicit, contact-less interaction, which could be beneficial in future AV designs where the driver is no longer bound to classical controls. We propose a set of freehand gestures and keywords for voice control, derived in a user-centered design process, and discuss the properties, applicability, and user impressions of both interaction modalities. Voice control appears to be an efficient way to select a maneuver, while freehand gestures could serve as a fallback when the voice channel is blocked, e.g., by a conversation with passengers.
The way we communicate with autonomous cars will fundamentally change as soon as manual input is no longer required as a back-up for the autonomous system. Maneuver-based driving is a potential way to still allow the user to intervene with the autonomous car and communicate requests such as stopping at the next parking lot. In this work, we highlight different research questions that still need to be explored to gain insights into how such control can be realized in the future.
The rising levels of vehicle automation allow drivers to shift their attention to non-driving tasks while driving (e.g., texting, reading, or watching movies). However, these systems are prone to failure, so human intervention remains crucial in critical situations. In this work, we propose using human actuation as a new means of communicating take-over requests (TORs) through proprioception. We conducted a user study in a driving simulator in the presence of a complex working memory span task, communicating TORs through four different modalities: vibrotactile, audio, visual, and proprioception. Our results show that the vibrotactile condition yielded the fastest reaction times, followed by proprioception. Additionally, proprioceptive cues resulted in the second-best performance on the non-driving task, after auditory cues.
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and its levels. This might be due to the fact that automated driving functions are misleadingly named (cf. Autopilot) and that vehicles integrate functions at different automation levels (e.g., an L1 lane-keeping assistant, an L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground for testing new interaction concepts in an automated driving setting.