How to Increase Automated Vehicles’ Acceptance through In-Vehicle Interaction Design: A Review
(2020)
Automated vehicles (AVs) are on the verge of becoming available on the mass market. Research often focuses on technical aspects of automation, such as computer vision, sensing, or artificial intelligence. Nevertheless, researchers have also identified several challenges from a human perspective that need to be addressed for a successful introduction of these technologies. In this paper, we first analyze human needs and system acceptance in the context of AVs. Then, based on a literature review, we provide a summary of current research on in-car driver-vehicle interaction and related human factors issues. This work helps researchers, designers, and practitioners get an overview of the current state of the art.
Human emotion detection in automated vehicles helps to improve comfort and safety. Research in the automotive domain focuses heavily on sensing drivers' drowsiness and aggression. We present a new form of implicit driver-vehicle cooperation, in which emotion detection is integrated into an automated vehicle's decision-making process. Constant evaluation of the driver's reaction to vehicle behavior allows us to revise decisions and helps to increase the safety of future automated vehicles.
The way we communicate with autonomous cars will fundamentally change as soon as manual input is no longer required as a back-up for the autonomous system. Maneuver-based driving is a potential way to still allow the user to interact with the autonomous car and communicate requests such as stopping at the next parking lot. In this work, we highlight different research questions that still need to be explored to gain insights into how such control can be realized in the future.
The presented work formulates a framework in which early prediction of drivers' lane change behavior is realized. We aim to build a representation of drivers' lane change behavior in order to recognize and predict drivers' intentions as a first step towards a realistic driver model. In the test bed of the Institute of Neuroinformatik, based on the traffic simulator NISYS TRS 1, 10 individuals drove in the experiments and performed more than 150 lane change maneuvers. Lane offset, distance to the front car, and time to contact were recorded. The acquired data was used to train, in parallel, a recurrent neural network, a feed-forward neural network, and a set of support vector machines. In the subsequent test drives, the system was able to predict a lane change 1.5 s in advance. The proposed approach describes a framework for lane-change detection and prediction, which will serve as a prerequisite for a successful driver model.
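To illustrate the kind of classifier described above, the following is a minimal sketch, not the paper's actual models or data: a simple perceptron trained on hypothetical, pre-normalized feature vectors (lane offset, distance to the front car, time to contact) to classify "lane change imminent" versus "keep lane". Feature scales and training samples here are invented for illustration.

```python
def train_perceptron(samples, labels, epochs=50, lr=0.1):
    """Classic perceptron rule on 3-d feature vectors; labels in {-1, +1}."""
    w = [0.0, 0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * activation <= 0:  # misclassified -> update weights
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def predict(w, b, x):
    """+1 = lane change imminent, -1 = keep lane."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Hypothetical normalized features: [lane_offset, gap_to_front, time_to_contact]
samples = [
    [0.9, 0.12, 0.12],  # drifting toward the marking, small gap -> change
    [0.8, 0.15, 0.15],
    [0.1, 0.60, 0.90],  # centered, large gap -> keep lane
    [0.0, 0.80, 1.20],
]
labels = [1, 1, -1, -1]

w, b = train_perceptron(samples, labels)
print(predict(w, b, [0.85, 0.13, 0.13]))  # -> 1 (lane change predicted)
```

In practice, the paper's recurrent network additionally exploits the temporal evolution of these signals, which a stateless classifier like this cannot capture; the sketch only shows the feature-to-intention mapping step.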
Women are still underrepresented at the highest management levels. The think-manager-think-male phenomenon suggests that leadership is associated with male rather than female attributes. Although styling has been shown to influence the evaluation of women's leadership abilities, the relevant specific features have remained remarkably unaddressed. In a 2 × 2 × 2 × 2 (skirt/pants, with/without jewelry, loose hair/braid, with/without makeup) between-subjects design, 354 participants evaluated a woman in a photograph. Women with makeup, pants, or jewelry were rated as more competent than women without makeup, with skirts, or without jewelry. A combination of loose hair and no makeup was perceived as warmest, and women with loose hair were more likely to be hired than those with braids. In sum, even subtle changes in styling have a strong impact on how women's leadership abilities are evaluated.
In this paper we describe an architecture for behavioral organization based on dynamical systems. This architecture enables the generation of complex behavioral sequences, which is demonstrated using the example of approaching and passing a door. The behavioral sequence is generated by activating and deactivating the elementary behaviors depending on sensory information and internal logical conditions. The architecture is demonstrated on a mobile KOALA robot as well as in simulation.