Starting with the automatic gear change, the operation of a vehicle is becoming more and more abstract. In the future, we could control vehicles with single, simple commands. For such a maneuver-based vehicle control system, we investigate head-up display designs in a workshop. The aim is to identify common and distinct features of various display designs through mock-ups. First results show that different sizes of GUI elements are preferred for different driving states. The preferred position of GUI elements in the head-up display (HUD) is the central bottom area. We found two major interface design styles: static interfaces (all elements visible) with a fixed layout and dynamic interfaces (only relevant elements visible) with a fixed or adaptive layout.
Understanding user needs and behavior in automated vehicles (AVs) while traveling is essential for future in-vehicle interface and service design. Since AVs are not yet market-ready, current knowledge about AV use and perception is based on observations in other transportation modes, interviews, or surveys about the hypothetical situation. In this paper, we close this gap by presenting real-world insights into attitudes towards highly automated driving and non-driving-related activities (NDRAs). Using a Wizard of Oz AV, we conducted a real-world driving study (N = 12) with six rides per participant over multiple days. We provide insights into the users' perceptions and behavior. We found that (1) users trust a human driver more than a system, (2) safety is the main acceptance factor, and (3) the most popular NDRAs were being idle and using the smartphone.
In this demo paper, we present a new visualization technique for dynamic networks. It displays the time slices of the dynamic network using two-dimensional graph layout algorithms and stacks these in the third dimension to show the development over time. The visualization ensures that the same node always has the same position in each time slice, so that it is easy to follow its development. It also allows filtering data and influencing node appearance based on properties. Additionally, we offer a two-dimensional comparison view for two time slices which highlights changes in graph structure and (if available) in measures of nodes. The presented visualization technique is implemented using Web technology and is available in a Web-based analytics workbench. We demonstrate the benefits of these techniques by an analysis of a data set from a learning community.
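The core of the stacking idea can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: a plain circular layout stands in for the 2D graph layout algorithm, and all names are assumptions. The key property it demonstrates is that each node receives one fixed (x, y) position shared by every time slice, with the slice index as the z coordinate.

```python
from math import cos, sin, pi

def stacked_layout(time_slices):
    """Place every node at one fixed (x, y) shared across all time
    slices and stack the slices along z, so a node's development
    over time can be followed along a straight vertical line."""
    # The union of all nodes determines the fixed 2D positions.
    nodes = sorted({n for edges in time_slices for edge in edges for n in edge})
    # A plain circular layout stands in for any 2D layout algorithm.
    pos2d = {n: (cos(2 * pi * i / len(nodes)), sin(2 * pi * i / len(nodes)))
             for i, n in enumerate(nodes)}
    layers = []
    for z, edges in enumerate(time_slices):
        present = {n for edge in edges for n in edge}  # filter to visible nodes
        layers.append({n: (*pos2d[n], float(z)) for n in present})
    return layers

# Two time slices of a growing network: node "c" appears only in the second.
slices = [[("a", "b")], [("a", "b"), ("b", "c")]]
layers = stacked_layout(slices)
```

Computing positions once over the union of all slices, rather than per slice, is what keeps a node's (x, y) stable over time.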
Human emotion detection in automated vehicles helps to improve comfort and safety. Research in the automotive domain focuses heavily on sensing drivers' drowsiness and aggression. We present a new form of implicit driver-vehicle cooperation, where emotion detection is integrated into an automated vehicle's decision-making process. Constant evaluation of the driver's reaction to vehicle behavior allows us to revise decisions and helps to increase the safety of future automated vehicles.
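The decision-revision loop described above might be sketched as follows. The threshold, the maneuver names, and the function itself are illustrative assumptions, not the paper's implementation; they only show the shape of implicit cooperation, where a measured emotional reaction feeds back into the vehicle's decision.

```python
STRESS_THRESHOLD = 0.7  # illustrative cut-off, not taken from the paper

def revise_decision(proposed, driver_stress, conservative_fallback):
    """Revise an automated vehicle's planned maneuver when the driver's
    measured emotional reaction (stress in [0, 1]) exceeds a threshold."""
    if driver_stress > STRESS_THRESHOLD:
        return conservative_fallback  # implicit cooperation: back off
    return proposed

# A relaxed driver leaves the planned maneuver unchanged...
revise_decision("overtake", 0.2, "keep_lane")
# ...while a strong stress reaction triggers the conservative fallback.
revise_decision("overtake", 0.9, "keep_lane")
```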
Anonymity-preserving Methods for Client-side Filtering in Position-based Collaboration Approaches (2017)
In recent large-scale catastrophic events, rescue worker resources have not always been sufficient to meet regular response times. However, many volunteers have supported official forces in different disaster situations, often self-organizing through social media. In this paper, a system is introduced that allows the coordination of trained volunteers by a professional control center, with the objective of a more efficient distribution of human resources and technical equipment. Volunteers are contacted via an app on their private smartphones. The design of this app is based on user requirements gathered in focus group discussions. The feedback of the potential users covers privacy aspects, low energy consumption, and mechanisms for long-term motivation and training. The authors present the results of the focus group analyses as well as their transfer into the app design concept.
Automotive user interfaces and automated vehicle technology pose numerous challenges to supporting all diverse facets of user needs. These range from inexperienced, thrill-seeking, young novice drivers to elderly drivers with a mostly opposite set of preferences together with their natural limitations. To allow assessing the (hedonic) quality of automotive user interfaces and automated driving technology (i.e., UX) already during development, the proposed workshop is dedicated to the quest of finding objective, quantifiable criteria to describe future driving experiences. The workshop is intended for HCI, AutomotiveUI, and Human Factors researchers and practitioners, as well as for designers and developers. In adherence to the conference's main topic "Interaktion – Verbindet – Alle", this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) with a focus on hedonic quality and the design of user experience to enhance the feeling of safety in automated driving systems (ADS).
In catastrophic events, the potential for help has grown through new technologies. Voluntary help takes many forms. In this paper, different categories of voluntary help are suggested. These categories are based on properties such as organizational structure, the helping process, the kind of prosocial behavior, and more. The focus is clearly on the organizational structure and motivational aspects of helper groups. Examples are given for each category. The categorization's aim is to give a brief overview of the possible properties a group of system users could have.
Self-driving cars will relieve humans of the driving task. Nevertheless, the human might want to intervene in the driving process and thus needs the possibility to control the car. Switching back to fully manual control is uncomfortable once passengers have become passive and are engaged in non-driving-related activities. A more comfortable way is controlling the car with elemental maneuvers (e.g., "turn left" or "stop"). Whereas touch interaction concepts exist, contactless interaction through voice and mid-air gestures has not yet been explored for maneuver-based car control. In this paper, we therefore compare the general eligibility of voice and mid-air gestures with touch interaction as the primary maneuver selection mechanism in a driving simulator study. Our results show high usability for all modalities. Contactless interaction leads to a more positive emotional perception of the interaction, yet mid-air gestures lead to a higher task load. Overall, voice and touch control are preferred over mid-air gestures by most users.
For highly automated vehicles (AVs), new interaction concepts need to be developed. Even in AVs, the driver might want to intervene and override the automation from time to time. To create the possibility of control, we explore vehicle control through maneuver-based interventions (MBI). Thereby, we focus on explicit, contactless interaction, which could be beneficial in future AV designs, where the driver is not necessarily bound to classical controls. We propose a set of freehand gestures and keywords for voice control derived in a user-centered design process. Further, we discuss the properties, applicability, and user impressions of both interaction modalities. Voice control seems to be an efficient way to select a maneuver, and free-hand gestures could be used if the voice channel is blocked, e.g., through conversation with passengers.
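A multimodal maneuver selection mechanism of the kind studied in the two abstracts above can be sketched as a simple lookup. The keyword and gesture vocabularies below are invented for illustration and are not taken from the studies; the sketch only shows voice as the primary channel with gestures as a fallback.

```python
# Illustrative vocabularies mapping recognized inputs to elemental maneuvers.
VOICE_KEYWORDS = {"left": "turn left", "right": "turn right", "stop": "stop"}
GESTURES = {"swipe_left": "turn left", "swipe_right": "turn right", "flat_palm": "stop"}

def select_maneuver(modality, token):
    """Map a recognized voice keyword or mid-air gesture to a maneuver.
    Gestures serve as a fallback when the voice channel is blocked,
    e.g., by a conversation with passengers."""
    table = {"voice": VOICE_KEYWORDS, "gesture": GESTURES}.get(modality, {})
    return table.get(token)  # None means: no maneuver is issued

select_maneuver("voice", "left")         # voice as the primary modality
select_maneuver("gesture", "flat_palm")  # gesture fallback
```

Returning `None` on unrecognized input keeps the mechanism conservative: an uncertain recognition result should never trigger a maneuver.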