Developing an intelligent chatbot that can imitate human-to-human interaction has become important in recent years. For this reason, many studies have been conducted to evaluate the quality of chatbots. Furthermore, various approaches and tools, such as sentiment analysis, have been created to improve the performance of chatbots.
This study examines previous research to identify the quality dimensions used to measure chatbots' performance, in order to develop a general chatbot assessment model that evaluates and compares chatbot quality. The developed evaluation model measures ten chatbot quality dimensions and is based on user experience: human testers interact with the chatbot to test its functioning, and a quantitative approach is then used to collect data from this user testing through a survey. In the survey, the testers are instructed to evaluate the quality of the chatbot using a questionnaire that contains the items needed to evaluate each dimension.
This study also investigates whether sentiment analysis can improve the quality of chatbots and, if so, identifies the dimensions improved by sentiment analysis. To this end, two chatbot versions are implemented using the Rasa framework: one that does not detect sentiment, and one that analyzes sentiment and responds accordingly.
Following that, we used our evaluation model to evaluate and compare the two chatbot versions with two groups of participants by conducting a survey. Each group tested the functioning of one version and was then instructed to use the items of the evaluation model to evaluate it. The goal of this survey was to assess the validity and reliability of the items used in the evaluation model, and to determine whether sentiment analysis improved chatbot quality by comparing survey results between the two groups. The results show that the items used in the assessment model are valid and reliable. The findings also indicate that sentiment analysis improves the chatbot's quality; however, it improves only some of the dimensions, not the majority of them.
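The abstract does not include implementation details. As a hypothetical sketch (not the authors' Rasa implementation), the behavior of the second chatbot version, detecting sentiment and adapting the response, could look like the following, with a naive word lexicon standing in for a real sentiment model:

```python
# Hypothetical sketch: NOT the authors' Rasa implementation. It only
# illustrates the idea of adapting a chatbot response to detected sentiment.

NEGATIVE = {"bad", "terrible", "angry", "hate", "frustrated"}
POSITIVE = {"good", "great", "love", "thanks", "happy"}

def detect_sentiment(message: str) -> str:
    """Naive lexicon-based sentiment detection (placeholder for a real model)."""
    words = set(message.lower().split())
    if words & NEGATIVE:
        return "negative"
    if words & POSITIVE:
        return "positive"
    return "neutral"

def respond(message: str) -> str:
    """Select a response template based on the detected sentiment."""
    sentiment = detect_sentiment(message)
    base = "Here is the information you asked for."
    if sentiment == "negative":
        return "I'm sorry to hear that. " + base
    if sentiment == "positive":
        return "Glad to help! " + base
    return base
```

In a real Rasa pipeline, the sentiment label would typically be produced by a trained classifier component rather than a word list, but the response-selection logic follows the same pattern.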
In this document, a reliable data streaming mechanism for a TDMA LPWAN application is developed by adapting a link layer solution for power line communication, published at the International Symposium on Power Line Communications and its Applications (ISPLC) 2015. A C++ implementation of the link layer services is provided and demonstrated to work at a packet error rate of 50%.
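The adapted link layer protocol itself is not described in this abstract. As a minimal sketch of the underlying idea, reliable delivery over a channel with a 50% packet error rate can be achieved through retransmissions. The following stop-and-wait ARQ simulation (in Python for brevity, not the C++ implementation mentioned above, and ignoring the possibility of ACK loss) illustrates this:

```python
import random

def lossy_send(packet, per=0.5, rng=random):
    """Simulate a channel that drops each packet with probability `per`."""
    return packet if rng.random() >= per else None

def reliable_stream(packets, per=0.5, max_retries=100, rng=random):
    """Stop-and-wait ARQ sketch: retransmit each packet until it gets through."""
    delivered = []
    retransmissions = 0
    for seq, payload in enumerate(packets):
        for _ in range(max_retries):
            received = lossy_send((seq, payload), per, rng)
            if received is not None:
                delivered.append(received[1])
                break
            retransmissions += 1
        else:
            raise RuntimeError(f"packet {seq} not delivered after {max_retries} tries")
    return delivered, retransmissions

# At a 50% packet error rate, each packet needs 2 transmissions on average,
# yet the stream is still delivered completely and in order.
data = [f"chunk-{i}" for i in range(10)]
out, retries = reliable_stream(data, per=0.5, rng=random.Random(42))
assert out == data
```

A practical TDMA link layer would additionally use sliding windows and sequence numbers sized to the slot schedule, but the retransmit-until-acknowledged principle is the same.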
Starting with the automatic gear change, the operation of a vehicle is becoming more and more abstract. In the future, we could control vehicles with single, simple commands. For such a maneuver-based vehicle control system, we investigate head-up display designs in a workshop. The aim is to identify common and distinct features of various display designs through mock-ups. First results show that different sizes of GUI elements are preferred in different states. The preferred position of GUI elements in the head-up display (HUD) is the central bottom area. We found two major interface design styles: static interfaces (all elements visible) with a fixed layout, and dynamic interfaces (only relevant elements visible) with a fixed or adaptive layout.
For highly automated vehicles (AVs), new interaction concepts need to be developed. Even in AVs, the driver might want to intervene and override the automation from time to time. To create this possibility of control, we explore vehicle control through maneuver-based interventions (MBI). We focus on explicit, contactless interaction, which could be beneficial in future AV designs where the driver is not necessarily bound to classical controls. We propose a set of freehand gestures and keywords for voice control, derived in a user-centered design process. Further, we discuss the properties, applicability, and user impressions of both interaction modalities. Voice control seems to be an efficient way to select a maneuver, and freehand gestures could be used if the voice channel is blocked, e.g., by conversation with passengers.
How to Increase Automated Vehicles’ Acceptance through In-Vehicle Interaction Design: A Review
(2020)
Automated vehicles (AVs) are on the verge of becoming available on the mass market. Research often focuses on technical aspects of automation, such as computer vision, sensing, or artificial intelligence. Nevertheless, researchers have also identified several challenges from a human perspective that need to be considered for a successful introduction of these technologies. In this paper, we first analyze human needs and system acceptance in the context of AVs. Then, based on a literature review, we provide a summary of current research on in-car driver-vehicle interaction and related human factors issues. This work helps researchers, designers, and practitioners get an overview of the current state of the art.
Human emotion detection in automated vehicles helps to improve comfort and safety. Research in the automotive domain largely focuses on sensing drivers' drowsiness and aggression. We present a new form of implicit driver-vehicle cooperation, where emotion detection is integrated into an automated vehicle's decision-making process. Constant evaluation of the driver's reaction to vehicle behavior allows us to revise decisions and helps to increase the safety of future automated vehicles.
Self-driving cars will relieve the human of the driving task. Nevertheless, the human might want to intervene in the driving process and thus needs a way to control the car. Switching back to fully manual controls is uncomfortable once the driver has become passive and engaged in non-driving-related activities. A more comfortable way is controlling the car with elemental maneuvers (e.g., "turn left" or "stop"). Whereas touch interaction concepts exist, contactless interaction through voice and mid-air gestures has not yet been explored for maneuver-based car control. In this paper, we therefore compare the general eligibility of voice and mid-air gestures with touch interaction as the primary maneuver selection mechanism in a driving simulator study. Our results show high usability for all modalities. Contactless interaction leads to a more positive emotional perception of the interaction, yet mid-air gestures lead to a higher task load. Overall, voice and touch control are preferred over mid-air gestures by most users.
Currently, car assistant systems mainly try to prevent accidents. Increasing built-in car technology also extends the potential applications in vehicles. Future cars might have virtual windshields that augment the traffic, or individual virtual assistants interacting with the user. In this paper, we explore the potential of an assistant system that helps the car's occupants to calm down and reduce stress when they witness an accident in front of them. We present requirements from a discussion (N = 11) and derive a system design from them. Further, we test the system design in a video-based simulator study (N = 43). Our results indicate that an accident support system increases perceived control and trust and helps the user calm down.
The way we communicate with autonomous cars will fundamentally change as soon as manual input is no longer required as back-up for the autonomous system. Maneuver-based driving is a potential way to allow still the user to intervene with the autonomous car to communicate requests such as stopping at the next parking lot. In this work, we highlight different research questions that still need to be explored to gain insights into how such control can be realized in the future.