Relax yourself - Using Virtual Reality to enhance employees' mental health and work performance
(2019)
This paper presents work in progress aiming to develop an actively adapting virtual reality (VR) relaxation application. Due to the immersive nature of VR technologies, people can escape from their real environment and get into a relaxed state. The goal of the application is to adapt to the users' physiological signals in order to foster this positive effect. So far, a first version of the VR application has been constructed and is currently being evaluated in an experiment. Preliminary results of this study demonstrate that people appreciate the immersion into the virtual environment and the escape from reality. Moreover, participants highlighted the option to adapt the application to users' needs and preferences. Based on the final study data, the constructed application will be enhanced with regard to adaptation and surrounding factors.
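To illustrate what "adapting to the users' physiological signals" could mean in practice, the following minimal sketch shows a threshold-based feedback loop. It is purely illustrative: the sensor values, the relaxation score, and the adapted scene parameters are assumptions, not taken from the actual application.

```python
# Illustrative sketch of a threshold-based relaxation feedback loop.
# Signal names, weights, and scene parameters are hypothetical.
from dataclasses import dataclass

@dataclass
class PhysiologicalSample:
    heart_rate: float        # beats per minute
    skin_conductance: float  # microsiemens

def relaxation_score(sample: PhysiologicalSample,
                     baseline: PhysiologicalSample) -> float:
    """Map deviations from the user's baseline to a 0..1 relaxation score."""
    hr_delta = max(0.0, sample.heart_rate - baseline.heart_rate)
    sc_delta = max(0.0, sample.skin_conductance - baseline.skin_conductance)
    # Larger deviations from baseline -> lower relaxation.
    return max(0.0, 1.0 - 0.02 * hr_delta - 0.1 * sc_delta)

def adapt_environment(score: float) -> dict:
    """Choose scene parameters the VR environment could adapt to the score."""
    if score < 0.4:    # user appears tense: calm the scene down
        return {"ambient_volume": 0.3, "scene_brightness": 0.5, "wave_speed": 0.2}
    if score < 0.7:    # moderately relaxed: keep a calm default
        return {"ambient_volume": 0.5, "scene_brightness": 0.7, "wave_speed": 0.4}
    return {"ambient_volume": 0.6, "scene_brightness": 0.8, "wave_speed": 0.5}
```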
While extant research on human-robot interaction (HRI) has dealt with the examination of different user characteristics, quantifying and describing the various characteristics of human diversity remains a challenge for HRI research. This in turn often leads to a disregard of human diversity in the design of HRI, to homogeneous study samples, and to differences in technology access. Addressing these challenges, we conducted a systematic synthesis of existing models of human diversity, culminating in the development of a model we coined the Human Diversity Wheel for Robotic Interactions (HDWRI). The goal of this model is to provide designers and researchers in HRI with an analytical lens to ensure that their work considers different human characteristics. To achieve this, we have begun conducting expert interviews with HRI researchers to put our model into a practical context. This paper presents the development of our model, preliminary findings from our first interviews, and an outline of future steps.
The deployment of social robots in public spaces has received increased interest over the past years. These robots need to process a wide array of personal data to offer services that are tailored to users’ requirements. While much research has been carried out regarding the creation of explainable content, little research has dealt with how data transparency - as a way to address uncertainty and concerns regarding the handling of personal data - is conveyed to users. To examine the impact of different transparency declarations on trust, performance, and robot perception, we conducted a virtual reality (VR) supported laboratory experiment with N = 53 participants who interacted with a robot in a public setting (a library). The interaction between users and robots was accompanied by information on the handling of users’ personal data using three different modalities (via posters, the robot’s tablet, or verbally). The results imply that, while all modalities are understandable and perceived as useful, there is no preference for any modality. Our findings contribute to HRI research by examining different modalities for transparency declarations, in an effort to foster understandable and transparent processing of data.
Technological systems, especially social robots in public spaces, need to interact with a diverse audience, as citizens differ, e.g., in gender, education, beliefs, and experiences with different technologies. However, a wide array of technological systems do not yet have the ability to cater for this human diversity, as evidenced by cases of algorithmic bias (e.g., a decreased usability of various systems for people of colour; Hankerson et al., 2016). At the same time, an inclusion of all audiences is imperative when deploying new technologies to avoid creating a digital divide between citizens who are affected by algorithmic bias and those who are not, as such a digital divide might have dire consequences for public life (Singh & Singh, 2021). The inclusion of diverse user audiences is an interdisciplinary challenge that has, so far, been neglected. Through the lens of media psychology, it is especially relevant to examine the influence that individuals' diversity characteristics might have on the experience of algorithmic bias, and how these biased experiences could force individuals into self-selection and consequently cause a digital divide. Additionally, when facing situations of algorithmic bias, do humans detect discrimination, and how do they attribute a failed interaction to the system or to themselves?
Addressing these questions, focus groups with Ruhr area citizens are conducted. Participants' diversity features and attitudes towards technological systems are captured and put into the context of their individual experiences with algorithmic bias. Additionally, scenarios of algorithmic bias are discussed to learn how citizens interpret these scenarios, to capture their attribution of blame, and to learn of possible coping strategies. Finally, participants are asked to picture a utopian and a dystopian scenario in which a social robot is deployed in a public library, to examine possible biases and fears of discrimination.
When deploying interactive agents like (social) robots in public spaces, they need to be able to interact with a diverse audience whose members each have individual diversity characteristics and prior experiences with interactive systems. To cater for these various predispositions, it is important to examine what experiences citizens have had with interactive systems and how these experiences might create a bias towards such systems. To analyze these bias-inducing experiences, focus group interviews have been conducted to learn of citizens' individual discrimination experiences, their attitudes towards the deployment of social robots in public spaces, and their arguments for and against it. This extended abstract focuses especially on the method and measurement of diversity.
This paper presents an extension for Amazon’s Alexa that provides a gratitude journal and investigates its effectiveness compared to a regular paper-based version. Decades of research demonstrate that expressing gratitude has various psychological and physical benefits. At the same time, gratitude routines run the risk of being perceived as a hassle, which diminishes the positive outcome. Speech assistants might help to integrate gratitude routines more easily and intuitively using voice input. The results of our 8-day field study with two experimental groups (Alexa group vs. paper group, N = 8) show that users see the benefits, that Alexa was effective in reducing participants’ stress, and that both groups express their gratitude differently. The positive effect of Alexa was restricted by a security setting imposed by Amazon (limiting user input to eight seconds), which has since been lifted. The findings provide practical and theoretical implications for how verbal gratitude expression affects participants’ well-being.
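As a rough illustration of how such a voice-based gratitude journal could be wired up, the sketch below shows a single intent handler using the Alexa Skills Kit SDK for Python. The intent name, slot name, and storage function are assumptions made for illustration and are not taken from the study's implementation.

```python
# Hypothetical sketch of a gratitude-journal intent handler using the
# Alexa Skills Kit SDK for Python (ask-sdk-core). Intent name, slot name,
# and persistence are assumptions, not the study's actual code.
from ask_sdk_core.skill_builder import SkillBuilder
from ask_sdk_core.dispatch_components import AbstractRequestHandler
from ask_sdk_core.utils import is_intent_name

def store_entry(user_id: str, text: str) -> None:
    # Placeholder for persisting the spoken gratitude entry.
    print(f"{user_id}: {text}")

class LogGratitudeHandler(AbstractRequestHandler):
    def can_handle(self, handler_input):
        return is_intent_name("LogGratitudeIntent")(handler_input)

    def handle(self, handler_input):
        slots = handler_input.request_envelope.request.intent.slots
        entry = slots["entry"].value if slots and "entry" in slots else ""
        user_id = handler_input.request_envelope.session.user.user_id
        store_entry(user_id, entry or "")
        speech = "Thank you. I have added that to your gratitude journal."
        return handler_input.response_builder.speak(speech).response

sb = SkillBuilder()
sb.add_request_handler(LogGratitudeHandler())
handler = sb.lambda_handler()
```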
Studying in social isolation is a reality for many students, one that was further reinforced after the start of the COVID-19 pandemic. Research shows that isolation can lead to decreased learning efficiency and that this effect is intensified by the increased use of asynchronous online teaching during the pandemic. This change is challenging not only for students but also for teachers, as students do not have a direct communication and feedback channel when learning content is presented in the form of pre-recorded videos in a learning management system. In this paper, we present VGather2Learn Analytics, an extension of the existing collaborative learning system VGather2Learn that enables teachers to analyse the learning behavior of students in asynchronous video teaching. The information presented in a dashboard will allow teachers to better understand how students interact while watching learning videos collaboratively and can thus improve online teaching.
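To make concrete what kind of interaction data such a dashboard might aggregate, the following minimal sketch groups hypothetical video-interaction events (play, pause, chat messages) into per-video metrics. The event schema and metric names are assumptions for illustration, not the actual VGather2Learn Analytics implementation.

```python
# Hypothetical sketch of aggregating collaborative video-watching events
# into per-video metrics for a teacher-facing dashboard.
# The event schema and metrics are assumptions, not the real system.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    student_id: str
    video_id: str
    kind: str          # e.g. "play", "pause", "seek", "chat_message"
    position_s: float  # playback position in seconds

def summarize(events: list[InteractionEvent]) -> dict[str, dict]:
    """Aggregate raw events into simple per-video engagement metrics."""
    summary: dict[str, dict] = defaultdict(
        lambda: {"viewers": set(), "pauses": 0, "chat_messages": 0})
    for e in events:
        entry = summary[e.video_id]
        entry["viewers"].add(e.student_id)
        if e.kind == "pause":
            entry["pauses"] += 1
        elif e.kind == "chat_message":
            entry["chat_messages"] += 1
    # Replace viewer sets with counts for display in the dashboard.
    return {vid: {**m, "viewers": len(m["viewers"])} for vid, m in summary.items()}
```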
This paper investigates the potential of Virtual Reality (VR) as a research tool for studying diversity and inclusion characteristics in the context of human-robot interaction (HRI). Several advantages unique to using VR in HRI are discussed, such as a controllable environment, the possibility to manipulate variables related to the robot and the human-robot interaction, flexibility in the design of the robot and the environment, and advanced measurement methods, e.g., eye tracking and physiological data. At the same time, the challenges of researching diversity and inclusion in HRI are described, especially regarding accessibility, cybersickness, and bias when developing VR environments. Furthermore, solutions to these challenges are discussed to fully harness the benefits of VR for studying diversity and inclusion.
Wizard-of-Oz (WoZ) systems represent a widespread method in HRI research. While they are cost-effective, flexible, and often preferred over developing autonomous dialogs in experimental settings, they are typically tailored to specific use cases. In addition, WoZ systems are mainly used in lab studies that deviate from real-world scenarios. Here, virtual reality (VR) can be used to immerse the user in a realistic interaction scenario with robots. This article highlights the necessity for a modularized and customizable WoZ system that leverages the benefits of VR. The proposed system integrates well-established features like speech and gesture control while expanding functionality to encompass a data dashboard and dynamic robot navigation using VR technology. The discussion emphasizes the importance of developing technical systems, like the WoZ system, in a modularized and customizable way, particularly for non-technical researchers. Overcoming usability hurdles is crucial to establishing this tool's role in the HRI research field.
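One common way to realize the modularity argued for above is to define a shared interface that every wizard module (speech, gestures, navigation, dashboard) implements, so that modules can be registered and swapped without touching the rest of the system. The sketch below is a hypothetical illustration of that idea; the module names and command format are assumptions, not the proposed system's actual code.

```python
# Hypothetical sketch of a plug-in interface for Wizard-of-Oz modules.
# Module names and the command format are assumptions for illustration.
from abc import ABC, abstractmethod

class WizardModule(ABC):
    """Common interface every WoZ module (speech, gesture, navigation, ...) implements."""

    @abstractmethod
    def name(self) -> str: ...

    @abstractmethod
    def handle(self, command: dict) -> None:
        """Execute a command issued by the wizard through the control UI."""

class SpeechModule(WizardModule):
    def name(self) -> str:
        return "speech"

    def handle(self, command: dict) -> None:
        print(f"Robot says: {command.get('text', '')}")

class NavigationModule(WizardModule):
    def name(self) -> str:
        return "navigation"

    def handle(self, command: dict) -> None:
        print(f"Robot drives to waypoint {command.get('waypoint')}")

class WizardController:
    """Routes wizard commands to whichever modules are registered."""

    def __init__(self) -> None:
        self._modules: dict[str, WizardModule] = {}

    def register(self, module: WizardModule) -> None:
        self._modules[module.name()] = module

    def dispatch(self, target: str, command: dict) -> None:
        self._modules[target].handle(command)

controller = WizardController()
controller.register(SpeechModule())
controller.register(NavigationModule())
controller.dispatch("speech", {"text": "Hello, how can I help you?"})
```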
Conducting Virtual Reality (VR) studies in Human-Robot Interaction (HRI) offers substantial benefits. Researchers use VR as a versatile research instrument, providing a controlled and reproducible study environment while enabling less invasive, continuous, and more valid data generation through measurement methods like motion capture and eye tracking. Despite its potential, technical complexities and resource-intensive VR application development pose barriers for researchers, which may prevent them from harnessing the advantages of VR in their own research. Our vision is to address this challenge by creating an intuitive VR authoring tool that facilitates the creation and execution of HRI study designs, considering that different disciplines may have varying requirements for studies. To enable broad usability of such a tool, we conducted expert interviews with seven HRI researchers, gathering insights into perceptions, opportunities, and risks associated with VR, and subsequently derived a catalog of requirements for the authoring tool. We evaluated the mockups resulting from these interviews with 22 experts at an international robotics conference, aiming to collaboratively develop a suitable authoring tool within the HRI community.