Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine interaction (HMI) in general and human-robot interaction (HRI) in particular. It provides a means to incorporate non-verbal feedback into the course of interaction. Humans express their emotional and affective state rather unconsciously, exploiting their different natural communication modalities such as body language, facial expression, and prosodic intonation. In order to achieve applicability in realistic HRI settings, we develop person-independent affective models. In this paper, we present a study on the multimodal recognition of emotions from such auditory and visual cues for interaction interfaces. We recognize six basic emotion classes plus a neutral class for talking persons. The focus lies on the simultaneous online visual and acoustic analysis of speaking faces. A probabilistic decision-level fusion scheme based on Bayesian networks is applied to draw benefit from the complementary information of both the acoustic and the visual cues. We compare the performance of our state-of-the-art recognition systems for the separate modalities to the improved results after applying our fusion scheme, on both the DaFEx database and real-life data captured directly from a robot. We furthermore discuss the results with regard to the theoretical background and future applications.
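The core idea of decision-level fusion can be sketched as follows. This is a minimal illustration, not the paper's actual Bayesian-network model: it assumes the acoustic and visual cues are conditionally independent given the emotion class (a naive-Bayes simplification), so the fused posterior is proportional to the product of the per-modality posteriors divided by the class prior. The emotion labels and example numbers are hypothetical.

```python
import numpy as np

# six basic emotion classes plus neutral, as in the paper
EMOTIONS = ["anger", "disgust", "fear", "happiness",
            "sadness", "surprise", "neutral"]

def fuse_posteriors(p_audio, p_visual, prior=None):
    """Fuse per-modality class posteriors at the decision level.

    Under the conditional-independence assumption,
    P(c | a, v) is proportional to P(c | a) * P(c | v) / P(c).
    """
    p_audio = np.asarray(p_audio, dtype=float)
    p_visual = np.asarray(p_visual, dtype=float)
    if prior is None:  # assume a uniform class prior
        prior = np.full_like(p_audio, 1.0 / len(p_audio))
    fused = p_audio * p_visual / prior
    return fused / fused.sum()  # renormalize to a distribution

# toy example: both classifiers lean toward "anger"
p_a = [0.5, 0.1, 0.1, 0.1, 0.1, 0.05, 0.05]
p_v = [0.3, 0.1, 0.1, 0.2, 0.1, 0.1, 0.1]
fused = fuse_posteriors(p_a, p_v)
print(EMOTIONS[int(np.argmax(fused))])
```

The benefit claimed in the paper comes from exactly this complementarity: when one modality is ambiguous, the product sharpens around the class both modalities agree on.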
To reduce the number of traffic accidents and to increase driver comfort, the idea of designing driver assistance systems has emerged in recent years. Fully or partly autonomously guided vehicles, particularly in road traffic, place high demands on the development of reliable algorithms. The principal problems are caused by having a moving observer in predominantly natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are being developed in cooperation with several partners from the automobile industry. We present a solution for a driver assistance system, concentrating on the aspects of video-based scene analysis and the organization of behavior.
Relevance & Research Question: Smartphones have become an integral part of everyday life, facilitating communication, information access, entertainment, and organization anytime and anywhere. However, the omnipresence of such devices can evoke psychological dependencies and the need to be always connected, resulting in discomfort when the smartphone is not accessible. While a few studies have found heightened anxiety during smartphone absence (e.g., Cheever, Rosen, Carrier, & Chavez, 2014), such research is scarce. We therefore aimed to expand existing research by asking whether the mere imagination of smartphone absence suffices to trigger anxiety and to affect users' context evaluations.
In this paper we discuss how group processes can be influenced by designing specific tools for computer-supported collaborative learning. We present the design of a shared workspace application for co-constructive tasks that is enriched with functions able to track, analyze, and feed back parameters of collaboration to group members. Our interdisciplinary approach is mainly based on an integrative methodology that analyzes collaboration behavior and patterns implicitly, combines this with explicitly surveyed data on group members' attitudes, and feeds the results back to the groups immediately. In an exploratory study we examined the influence of this feedback function. Although we could only analyze ad-hoc groups in this study, we detected some benefits of our methodology that might enrich the collaboration processes of real-life learning communities. The data analysis in our study showed advantages of this feedback for a group's well-being as well as for parameters of participation. These results provide a basis for further empirical work on problem-solving groups supported by means of parallel interaction analysis and its re-use as an information resource.
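One implicit parameter of collaboration mentioned above is participation balance. A minimal sketch of how such a parameter could be tracked and fed back is shown below; the event format, function name, and threshold are assumptions for illustration, not the paper's implementation.

```python
from collections import Counter

def participation_feedback(events, threshold=0.5):
    """Summarize each member's share of logged workspace events.

    `events` is an assumed log format: (user, action) tuples. A member
    is flagged as low-participating when their share falls below
    `threshold` times the equal share, so the group can see its
    participation balance as immediate feedback.
    """
    counts = Counter(user for user, _action in events)
    total = sum(counts.values())
    equal_share = 1.0 / len(counts)
    report = {}
    for user, n in counts.items():
        share = n / total
        report[user] = {"share": round(share, 2),
                        "low": share < threshold * equal_share}
    return report

log = [("ana", "edit"), ("ana", "chat"), ("ben", "edit"),
       ("ana", "edit"), ("cem", "chat"), ("ana", "chat")]
print(participation_feedback(log))
```

Feeding back such a summary, rather than raw logs, is the design idea: the group sees an interpretable indicator of its own behavior.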
This paper describes an educational application that combines handhelds (PDAs) and programmable Lego bricks in a classroom scenario dealing with the problem of letting a robot escape from a maze. It is specific to our setting that the problem can be solved both in the physical world, by steering a Lego robot, and in a simulated software environment on a PDA or a PC. This approach enables the students to generate successful sets of rules in the simulation and to test these rule sets later in physical mazes, or to create new types of mazes as challenges for known rule sets. We describe the technical setting for this scenario and different pedagogical scenarios, and we report an evaluation with a group of students in a school environment.
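To make the "rule set tested in simulation" idea concrete, here is a sketch of one classic rule set, a right-hand wall follower, run on a grid-encoded maze. The maze encoding and function name are assumptions for illustration; the paper's actual simulation environment is not described at this level of detail.

```python
def escape(maze, max_steps=200):
    """Run a right-hand wall-follower rule set on a grid maze.

    Assumed encoding: 'S' start, 'E' exit, '#' wall, '.' free cell.
    Returns the visited path if the exit is reached, else None.
    """
    grid = [list(row) for row in maze]
    r, c = next((i, j) for i, row in enumerate(grid)
                for j, ch in enumerate(row) if ch == "S")
    dirs = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left
    d = 1  # start facing right
    path = [(r, c)]
    for _ in range(max_steps):
        if grid[r][c] == "E":
            return path
        # rule set: prefer turning right, then straight, left, back
        for turn in (1, 0, -1, 2):
            nd = (d + turn) % 4
            nr, nc = r + dirs[nd][0], c + dirs[nd][1]
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[nr])
                    and grid[nr][nc] != "#"):
                r, c, d = nr, nc, nd
                path.append((r, c))
                break
    return None

maze = ["#####",
        "#S..#",
        "###.#",
        "#E..#",
        "#####"]
print(escape(maze))
```

A rule set validated this way in software can then be transferred to the physical Lego robot, or challenged with a new maze layout, which is exactly the two-way workflow the scenario supports.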
This paper provides a contextualization process to adapt Open Knowledge Resources to the needs of public administrations. With the help of a matching strategy, the culture and context profiles of learners and learning resources are compared. The comparison allows inferences to be drawn about how to contextualize an open knowledge resource for one's own learning needs. An example is illustrated and future research fields are proposed.
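A matching strategy of this kind can be sketched as a comparison of attribute profiles. The profile attributes, example values, and scoring rule below are all hypothetical; the paper's actual profile model is richer than a flat dictionary.

```python
def match_score(learner, resource):
    """Compare a learner's context profile with a resource's profile.

    Both profiles are assumed to be dicts of contextual attributes.
    Returns the fraction of attributes on which they agree, plus the
    mismatched attributes, which name the contextualization work needed.
    """
    keys = set(learner) | set(resource)
    agree = [k for k in keys if learner.get(k) == resource.get(k)]
    gaps = sorted(k for k in keys if learner.get(k) != resource.get(k))
    return len(agree) / len(keys), gaps

learner = {"language": "de", "sector": "public admin", "law": "DE"}
resource = {"language": "en", "sector": "public admin", "law": "EU"}
score, gaps = match_score(learner, resource)
print(score, gaps)
```

The returned gaps drive the contextualization process: each mismatched attribute (here, language and legal framework) marks an adaptation the resource needs before re-use.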
We describe the general concept, system architecture, hardware, and behavioral abilities of Cora (Cooperative Robot Assistant, see Fig. 1), an autonomous non-mobile robot assistant. Starting from our basic assumption that the behavior to be performed determines the internal and external structure of the behaving system, we have designed Cora anthropomorphically to allow for human-like behavioral strategies in solving complex tasks. Although Cora was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we show that Cora's behavioral abilities are also transferable to a household environment. After describing the hardware platform and the basic concepts of our approach, we present some experimental results by means of an assembly task.