We present a lightweight, real-time 3D gesture recognition system for mobile devices aimed at improved human-machine interaction. We utilize time-of-flight data from a single sensor and implement the whole gesture recognition pipeline on two different devices, outlining the potential of integrating these sensors into mobile devices. The main components crop the data to the essentials, calculate meaningful features, train and classify via neural networks, and realize a GUI on the device. With our system we achieve recognition rates of up to 98% on a 10-gesture set at frame rates reaching 20 Hz, more than sufficient for real-time applications.
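The pipeline stages named in the abstract (cropping the data, computing features, classifying) can be sketched as follows. The depth thresholds, the feature choice, and the function names are illustrative assumptions, not details from the paper:

```python
import numpy as np

def crop_to_hand(points, z_near=0.2, z_far=0.6):
    """Keep only points in the depth band where the hand is expected.
    The thresholds (in metres) are illustrative, not from the paper."""
    mask = (points[:, 2] >= z_near) & (points[:, 2] <= z_far)
    return points[mask]

def extract_features(points):
    """A simple, cheap feature vector: centroid plus bounding-box extent.
    The paper does not specify its features; this is a stand-in that
    a small neural network classifier could consume."""
    centroid = points.mean(axis=0)
    extent = points.max(axis=0) - points.min(axis=0)
    return np.concatenate([centroid, extent])

# Toy TOF frame: 500 random 3D points, some inside the hand depth band.
rng = np.random.default_rng(0)
frame = rng.uniform([-0.5, -0.5, 0.0], [0.5, 0.5, 1.0], size=(500, 3))
hand = crop_to_hand(frame)
features = extract_features(hand)
print(features.shape)  # compact 6-dimensional feature vector per frame
```

Keeping the per-frame feature vector this small is what makes frame rates around 20 Hz plausible on mobile hardware, since the classifier then operates on a handful of numbers rather than the raw point cloud.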
Touch versus mid-air gesture interfaces in road scenarios - measuring driver performance degradation
(2016)
We present a study comparing the degradation of driver performance during touch gesture vs. mid-air gesture use for infotainment system control. To this end, 17 participants were asked to perform the Lane Change Test, which requires each participant to steer a vehicle in a simulated driving environment while interacting with an infotainment system via touch and mid-air gestures. The decrease in performance is measured as the deviation from an optimal baseline. The study finds comparable deviations from the baseline for the secondary task of infotainment interaction with both interaction variants. This is significant because all participants were experienced in touch interaction but had no experience at all with mid-air gesture interaction, favoring mid-air gestures for the long-term scenario.
Building upon prior results, we present an alternative approach to efficiently classifying a complex set of 3D hand poses obtained from modern time-of-flight (TOF) sensors. We demonstrate that it is possible to achieve satisfactory results in spite of the low resolution and high noise inherent to these sensors and a demanding outdoor environment. We set up a large database of point clouds in order to train multilayer perceptrons as well as support vector machines to classify the various hand poses. Our goal is to fuse data from multiple TOF sensors that observe the poses from multiple angles. The presented contribution illustrates that real-time capability can be maintained with such a setup, as the 3D descriptors, the fusion strategy, and the online confidence measures used are computationally efficient.
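One computationally cheap way to combine per-sensor classifier outputs with online confidence measures, as the abstract describes, is confidence-weighted late fusion. The weighted average below is an illustrative stand-in; the paper's actual fusion strategy and confidence measure are not specified here:

```python
import numpy as np

def fuse_predictions(probs_per_sensor, confidences):
    """Confidence-weighted late fusion of per-sensor class probabilities.
    Each sensor contributes its probability vector, scaled by how much
    we currently trust that sensor's view of the hand pose."""
    probs = np.asarray(probs_per_sensor, dtype=float)
    w = np.asarray(confidences, dtype=float)
    w = w / w.sum()                        # normalise sensor weights
    fused = (w[:, None] * probs).sum(axis=0)
    return fused / fused.sum()             # renormalise to a distribution

# Two sensors observing the same hand pose from different angles.
sensor_a = [0.7, 0.2, 0.1]   # fairly sure it is pose 0
sensor_b = [0.4, 0.5, 0.1]   # slightly favours pose 1
fused = fuse_predictions([sensor_a, sensor_b], confidences=[0.9, 0.3])
print(fused.argmax())  # -> 0: the more confident sensor dominates
```

Because the fusion is a single weighted sum per frame, its cost is negligible next to the classifiers themselves, which is consistent with the real-time claim.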
The first robots are currently appearing on the consumer market. Initially they are targeted at rather simple applications such as entertainment and home convenience. For more complex areas, these robots will need to collaborate and interactively communicate with their human users, which requires appropriate man-machine interaction technologies and considerable cognitive abilities on the robot's side. Consumer acceptance will strongly depend on the integrated system. Thus, system integration and evaluation of the integrated system are becoming increasingly important. This paper describes our approach to constructing a robotic assistance system. We present experience with an integrated technology demonstration and exposure of the integrated system to the public.
RELEVANCE & RESEARCH QUESTION: Currently, the effectiveness of Virtual Reality (VR) and Augmented Reality (AR) systems as practical teaching methods is virtually uncharted. Proof that these systems can provide the same or better learning outcomes than a text-instructed practical task would represent a significant benefit for educational activities. METHODS & DATA: To gauge this effectiveness, an experimental study with three conditions (VR, AR, and a real setup) was used to teach participants how to assemble a standard computer. Each condition was divided into two parts: part one, in which participants were confronted with their specific scenario, and part two, in which participants had to go through a real practice run one week later. The learning outcome was determined by the naming of hardware parts, a quiz that queried their function, and the correct assembly of the components, in addition to the time needed. Apart from mere performance, the acceptance of such applications in an academic context and differences in evaluation between men and women were of interest. RESULTS: Results concerning the learning outcome showed that participants from the VR condition outperformed those who learned from the real setup ((M=10.0, SD=0.0) [virtual reality] vs. (M=8.95, SD=1.27) [control]). Furthermore, results from the assembly-duration assessment demonstrated that VR-group participants completed their tasks 6.62% faster than the control group. Regarding the identification of hardware parts, both groups showed a significant improvement in the post condition compared to the first test run, indicating learning progress. However, because the VR group achieved a better average outcome and a larger difference between the trials, the results indicate better performance by participants assigned to the VR condition. ADDED VALUE: The results revealed that VR and AR systems can exceed text-based approaches in terms of learning outcome.
The effectiveness of these systems implies a major benefit for the educational landscape, as learning content that is not realizable in terms of cost, distance, or logistics could be designed as an immersive and engaging experience.
Scene interpretation and behavior planning for a vehicle in real-world traffic is a difficult problem to solve. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. Ultimately, however, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper, scene interpretation and behavior planning for a driver assistance system aimed at cruise control are proposed. In this system, the controlled variables are determined by evaluating the dynamics of a two-dimensional neural field for scene interpretation and two one-dimensional neural fields controlling steering angle and velocity. The stimuli of the fields are determined according to the sensor information.
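A one-dimensional neural field of the kind used here for steering-angle selection is typically governed by Amari-type dynamics: the field relaxes toward its resting level plus the external stimulus, shaped by lateral excitation/inhibition, so that a single activation peak forms at the stimulated location. The sketch below is a minimal illustration with made-up parameter values, not the paper's actual model:

```python
import numpy as np

def step_field(u, stimulus, dt=0.05, tau=1.0, h=-1.0):
    """One Euler step of a 1D Amari-type neural field:
    tau * du/dt = -u + h + stimulus + (w * f(u)),
    with a difference-of-Gaussians interaction kernel w (local excitation,
    broad inhibition) and sigmoid output f. Parameters are illustrative."""
    n = u.size
    x = np.arange(n) - n // 2
    kernel = 2.0 * np.exp(-x**2 / 18.0) - 1.0 * np.exp(-x**2 / 200.0)
    f = 1.0 / (1.0 + np.exp(-u))
    interaction = np.convolve(f, kernel, mode="same") / n
    du = (-u + h + stimulus + interaction) / tau
    return u + dt * du

# Field over discretised candidate steering angles; the sensor-derived
# stimulus peaks at the desired angle (index 70 here).
n = 101
stimulus = 3.0 * np.exp(-(np.arange(n) - 70)**2 / 50.0)
u = np.full(n, -1.0)
for _ in range(200):
    u = step_field(u, stimulus)
print(int(u.argmax()))  # activation peak settles near the stimulated angle
```

The position of the activation peak is then read out as the commanded steering angle; a second field of the same form would select the velocity.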
To reduce the number of traffic accidents and to increase driver comfort, the idea of designing driver assistance systems has emerged in recent years. Fully or partly autonomously guided vehicles, particularly in road traffic, place high demands on the development of reliable algorithms. The principal problems are caused by having a moving observer in predominantly natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are being developed in cooperation with several partners from the automobile industry. We present a solution for a driver assistance system, concentrating on the aspects of video-based scene analysis and organization of behavior.
Relax yourself - Using Virtual Reality to enhance employees' mental health and work performance
(2019)
This paper presents work in progress aiming to develop an actively adapting virtual reality (VR) relaxation application. Due to the immersive nature of VR technologies, people can escape from their real environment and enter a relaxed state. The goal of the application is to adapt to the users' physiological signals to foster this positive effect. So far, a first version of the VR application has been constructed and is currently being evaluated in an experiment. Preliminary results of this study demonstrate that people appreciate the immersion into the virtual environment and the escape from reality. Moreover, participants highlighted the option to adapt to users' needs and preferences. Based on the final study data, the constructed application will be enhanced with regard to adaptation and surrounding factors.