Analysis of Dynamic Scenes
(1999)
This article presents the analysis of dynamic scenes within a flexible architecture for solving driver assistance tasks in motor vehicles. Solving different tasks with related approaches requires a high degree of modularity and flexibility; only then can the given tasks be solved optimally with the available algorithms. In the proposed architecture, an object-related analysis of sensor data, a behavior-based scene interpretation and a behavior planning are carried out. A global knowledge base, on which each individual module operates, contains the description of physical relationships, behavior rules for road traffic, and object and scene knowledge.
External knowledge (e.g., GPS, the Global Positioning System) can also be integrated into the knowledge base. As an application example of the behavior planning, an intelligent cruise control has been realized.
We propose a new approach to object detection based on data fusion of texture and edge information. A self-organizing Kohonen map is used as the coupling element of the different representations. Therefore, an extension of the proposed architecture incorporating other features, even features not derived from vision modules, is straightforward: it reduces to a redefinition of the local feature vectors and a retraining of the network structure. The resulting hypotheses of object locations generated by the detection process are finally inspected by a neural network classifier based on co-occurrence matrices.
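The abstract gives no implementation details; purely as an illustrative sketch, the following minimal NumPy self-organizing map shows how a Kohonen map can be trained on local feature vectors (e.g., concatenated texture and edge descriptors). All names, grid sizes and decay schedules here are assumptions, not taken from the paper.

```python
import numpy as np

def train_som(features, grid=(10, 10), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal Kohonen map on row-wise feature vectors.

    `features` could be concatenated texture and edge descriptors per
    image patch, i.e. the coupling idea sketched in the abstract.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.normal(size=(h, w, features.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]            # node coordinates on the map
    n_steps = epochs * len(features)
    for step in range(n_steps):
        x = features[rng.integers(len(features))]
        # Best-matching unit: the map node whose weight vector is closest.
        d = np.linalg.norm(weights - x, axis=2)
        by, bx = np.unravel_index(np.argmin(d), d.shape)
        # Exponentially decaying learning rate and neighborhood radius.
        frac = step / n_steps
        lr, sigma = lr0 * np.exp(-3 * frac), sigma0 * np.exp(-3 * frac)
        g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights
```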
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach lies in the integrative coupling of different algorithms providing partly redundant information.
Analysis of dynamic scenes
(2000)
In this paper the proposed architecture for a dynamic scene analysis is illustrated by a driver assistance system. To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in the past years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a solution for a flexible architecture for a driver assistance system. The architecture can be subdivided into four different parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map) searching for object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (traffic rules, physical laws) and additional information (GPS, lane information), and it is implicitly used by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects the information for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (i.e., bird's eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task.
Hand gestures in the automobile have the potential to combine highly visible displays near the windshield with a gesture control perceived as intuitive, as known in its touch-based form from smartphones and in its touchless form from some television sets. With suitable positioning of the sensors, the eyes can thus remain on the road and the hands on the steering wheel, or at least very close to it. The early demonstrator described here shows the feasibility of this technology with a novel recognition method.
We present a novel approach of distributing matrix multiplications among GPU-equipped nodes in a cluster system. In this context we discuss the induced challenges and possible solutions. Additionally we state an algorithm which outperforms optimized GPU BLAS libraries for small matrices. Furthermore we provide a novel theoretical model for distributing algorithms within homogeneous computation systems with multiple hierarchies. In the context of this model we develop an algorithm which can find the optimal distribution parameters for each involved subalgorithm. We provide a detailed analysis of the algorithm's space and time complexities and justify its use with a structured evaluation within a small GPU-equipped Beowulf cluster.
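The distribution model itself is developed in the paper; as a rough, hedged illustration of the underlying idea, the NumPy sketch below splits a product into independent result tiles, each of which could be assigned to a different node or GPU. The function and parameter names are hypothetical.

```python
import numpy as np

def block_matmul(A, B, grid=(2, 2)):
    """Compute A @ B by splitting the result into a grid of tiles.

    Each tile C[i][j] = A_rows[i] @ B_cols[j] is independent, so the tiles
    can be dispatched to different workers; here they are computed
    sequentially for illustration only.
    """
    gr, gc = grid
    row_blocks = np.array_split(A, gr, axis=0)
    col_blocks = np.array_split(B, gc, axis=1)
    # In a cluster, each (i, j) task would be sent to a separate worker.
    tiles = [[Ai @ Bj for Bj in col_blocks] for Ai in row_blocks]
    return np.block(tiles)

A = np.random.rand(500, 300)
B = np.random.rand(300, 400)
assert np.allclose(block_matmul(A, B), A @ B)
```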
We present a novel method to perform multi-class pattern classification with neural networks and test it on a challenging 3D hand gesture recognition problem. Our method consists of a standard one-against-all (OAA) classification, followed by another network layer classifying the resulting class scores, possibly augmented by the original raw input vector. This allows the network to disambiguate hard-to-separate classes as the distribution of class scores carries considerable information as well, and is in fact often used for assessing the confidence of a decision. We show that by this approach we are able to significantly boost our results, overall as well as for particular difficult cases, on the hard 10-class gesture classification task.
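As a hedged sketch of this two-stage idea (not the authors' exact networks), the following scikit-learn example trains a one-against-all stage and then a second classifier on the resulting class scores augmented by the raw input; the dataset, base classifiers and layer sizes are placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import LinearSVC
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: plain one-against-all classification.
oaa = OneVsRestClassifier(LinearSVC(max_iter=5000)).fit(X_tr, y_tr)

# Stage 2: classify the vector of class scores, augmented by the raw input.
Z_tr = np.hstack([oaa.decision_function(X_tr), X_tr])
Z_te = np.hstack([oaa.decision_function(X_te), X_te])
second = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500,
                       random_state=0).fit(Z_tr, y_tr)

print("OAA alone:", oaa.score(X_te, y_te))
print("two-stage:", second.score(Z_te, y_te))
```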
A light-weight real-time applicable hand gesture recognition system for automotive applications
(2015)
We present a novel approach for improved hand-gesture recognition by a single time-of-flight (ToF) sensor in an automotive environment. As the sensor's lateral resolution is comparatively low, we employ a learning approach comprising multiple processing steps, including PCA-based cropping, the computation of robust point cloud descriptors and the training of a multilayer perceptron (MLP) on a large database of samples. A sophisticated temporal fusion technique boosts the overall robustness of recognition by taking into account data coming from previous classification steps. Overall results are very satisfactory when evaluated on a large benchmark set of ten different hand poses, especially when it comes to generalization to previously unknown persons.
We present a system for efficient dynamic hand gesture recognition based on a single time-of-flight sensor. As opposed to other approaches, we simply rely on depth data to interpret user movement with the hand in mid-air. We set up a large database to train multilayer perceptrons (MLPs) which are subsequently used for classification of static hand poses that define the targeted dynamic gestures. In order to remain robust against noise and to balance the low sensor resolution, PCA is used for data cropping and highly descriptive features, obtainable in real-time, are presented. Our simple yet efficient definition of a dynamic hand gesture shows how strong results are achievable in an automotive environment allowing for interesting and sophisticated applications to be realized.
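The processing chain can be approximated in scikit-learn as below. Note that the paper uses PCA for spatial cropping of point clouds, whereas this sketch uses it as a generic dimensionality-reducing front end for an MLP; the synthetic data and all sizes are assumptions.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# Synthetic stand-ins for point-cloud descriptors and static pose labels.
rng = np.random.default_rng(0)
descriptors = rng.normal(size=(1000, 200))
poses = rng.integers(0, 10, size=1000)

clf = make_pipeline(
    StandardScaler(),
    PCA(n_components=50),                      # compact, denoised features
    MLPClassifier(hidden_layer_sizes=(100,), max_iter=300),
)
clf.fit(descriptors, poses)
print(clf.predict(descriptors[:5]))            # static poses per frame
```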
We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons), and makes no explicit assumptions about the probabilistic structure of the problem as it is usually done in multi-class classification. By adding a cascade of additional classifiers, each of which receives the previous classifier's output in addition to regular input data, the approach harnesses unused information that manifests itself in the form of, e.g., correlations between predicted classes. Using multilayer perceptrons as a classification model, we demonstrate the validity of this approach by testing it on a complex ten-class 3D gesture recognition task.
Utilizing biometric traits for privacy and security applications is receiving increasing attention. Applications such as personal identification, access control, forensics, e-banking, e-government, e-health and, recently, personalized human-smart-home and human-robot interaction are some examples. In order to offer person-specific services, an identification step must be performed beforehand. Using biometrics in such applications faces diverse challenges. First, using one trait and excluding the others depends on the intended application: some applications demand direct touch of biometric sensors, while others do not. The second challenge is the reliability of the biometric arrangement used; civil applications demand lower reliability compared to forensic ones. Third, a biometric system may use only one trait (uni-modal systems) or multiple traits (bi- or multi-modal systems); the latter is applied when systems with relatively high reliability are expected. The main aim of this paper is to provide a comprehensive view of biometrics and its applications. The above-mentioned challenges are analyzed in depth, and the suitability of each biometric sensor for the intended application is discussed in detail. A detailed comparison between uni-modal and multi-modal biometric systems shows which system is to be utilized where. Privacy and security issues of biometric systems are discussed as well. Three scenarios of biometric applications in home environments, human-robot interaction and e-health are presented.
As smart homes become more and more popular, the need for assisting systems which interface between users and home environments is growing. Furthermore, for people living in such homes, elderly and disabled people in particular, it is vitally important to develop devices which can support and aid them in their ordinary daily life. In this work we focus on sustaining the privacy of the user during real interaction with the surrounding home environment. A smart person-specific assistant system for services in the home environment is proposed. The role of this system is to assist persons by controlling home activities and guiding the adaptation of the smart-home-human interface towards the needs of the considered person, while at the same time sustaining the privacy of its interaction partner. As a special case of medical assistance, the system is implemented such that it provides person-specific medical assistance for elderly or disabled people. The system has the ability to identify its interaction partner using biometric features. According to the recognized ID the system, first, adapts to the needs of the recognized person; second, it presents a person-specific list of medicines either visually or audibly; and third, it gives an alarm in case a medicament is taken later or earlier than the normal time.
Research at Universities
(2015)
In this essay, research at universities of applied sciences is examined by way of example from the perspective of the Institut Informatik of the Hochschule Ruhr West, founded in 2009. At the Institut Informatik, the goal is to suitably link teaching and research in order to offer students, research assistants and lecturers an attractive portfolio in research and teaching in the field of computer science. Besides conducting engaging courses, which are enriched by current research questions, the basis of the institute's work is formed by the cooperative treatment of socially relevant and forward-looking research tasks, participation in research networks, bilateral research activities with partners from industry, and the acquisition of external funding.
This contribution presents a novel approach of utilizing Time-of-Flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors are capable of providing depth data at high frame rates independent of illumination, making any kind of application possible for in- and outdoor situations. This comes at the cost of precision regarding depth measurements and comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds which realizes fixed-sized input, making Deep Learning approaches applicable using Convolutional Neural Networks. In order to increase precision we introduce several methods to reduce noise and normalize the input to overcome difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real-time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach with classification errors as low as 1.5% achieved for persons unknown to the model.
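The exact rasterization and normalization steps are not given in the abstract; a minimal occupancy-grid version of the general idea might look as follows, where the grid size and the min-max normalization are assumptions.

```python
import numpy as np

def rasterize(points, grid=32):
    """Convert an unordered 3D point cloud into a fixed-size occupancy
    grid (one channel, grid**3 voxels) so that a CNN can consume it."""
    lo = points.min(axis=0)
    span = np.maximum(points.max(axis=0) - lo, 1e-6)
    # Normalize per cloud to counter scaling/translation differences.
    idx = ((points - lo) / span * (grid - 1)).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=np.float32)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

cloud = np.random.rand(5000, 3)        # stand-in for a hand point cloud
print(rasterize(cloud).shape)          # (32, 32, 32), fixed-size CNN input
```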
We present a light-weight real-time applicable 3D-gesture recognition system on mobile devices for improved Human-Machine Interaction. We utilize time-of-flight data coming from a single sensor and implement the whole gesture recognition pipeline on two different devices, outlining the potential of integrating these sensors into mobile devices. The main components are responsible for cropping the data to the essentials, calculating meaningful features, training and classifying via neural networks, and realizing a GUI on the device. With our system we achieve recognition rates of up to 98% on a 10-gesture set with frame rates reaching 20 Hz, more than sufficient for any real-time application.
We present a publicly available benchmark database for the problem of hand posture recognition from noisy depth data and fused RGB-D data obtained from low-cost time-of-flight (ToF) sensors. The database is the most extensive database of this kind containing over a million data samples (point clouds) recorded from 35 different individuals for ten different static hand postures. This captures a great amount of variance, due to person-related factors, but also scaling, translation and rotation are explicitly represented. Benchmark results achieved with a standard classification algorithm are computed by cross-validation both over samples and persons, the latter implying training on all persons but one and testing on the remaining one. An important result using this database is that cross-validation performance over samples (which is the standard procedure in machine learning) is systematically higher than cross-validation performance over persons, which is to our mind the true application-relevant measure of generalization performance.
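Cross-validation over persons, as opposed to over samples, can be reproduced with scikit-learn's LeaveOneGroupOut; the sketch below uses synthetic stand-in data, since the classifier and feature set are not specified here.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 30))         # stand-in posture descriptors
y = rng.integers(0, 10, size=500)      # ten posture classes
person = rng.integers(0, 5, size=500)  # 5 stand-in persons (database: 35)

# Train on all persons but one, test on the held-out one, for each person.
scores = cross_val_score(MLPClassifier(max_iter=200), X, y,
                         groups=person, cv=LeaveOneGroupOut())
print(scores.mean())                   # person-independent generalization
```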
Touch versus mid-air gesture interfaces in road scenarios - measuring driver performance degradation
(2016)
We present a study aimed at comparing the degradation of the driver's performance during touch gesture vs. mid-air gesture use for infotainment system control. To this end, 17 participants were asked to perform the Lane Change Test. This requires each participant to steer a vehicle in a simulated driving environment while interacting with an infotainment system via touch and mid-air gestures. The decrease in performance is measured as the deviation from an optimal baseline. This study finds comparable deviations from the baseline for the secondary task of infotainment interaction for both interaction variants. This is significant as all participants were experienced in touch interaction but had no experience at all with mid-air gesture interaction, favoring mid-air gestures for the long-term scenario.
Given the success of convolutional neural networks (CNNs) during recent years in numerous object recognition tasks, it seems logical to further extend their applicability to the treatment of three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability of automated feature generation and combine it with a novel 3D feature computation technique, preserving local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via ToF (time-of-flight) sensors from 20 different persons, after an extensive parameter search in order to optimize network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
Applying step heating thermography to wind turbine rotor blades as a non-destructive testing method
(2017)
Building upon prior results, we present an alternative approach to efficiently classifying a complex set of 3D hand poses obtained from modern time-of-flight (ToF) sensors. We demonstrate that it is possible to achieve satisfactory results in spite of the low resolution and high noise inflicted by the sensors and a demanding outdoor environment. We set up a large database of point clouds in order to train multilayer perceptrons as well as support vector machines to classify the various hand poses. Our goal is to fuse data from multiple ToF sensors, which observe the poses from multiple angles. The presented contribution illustrates that real-time capability can be maintained with such a setup, as the used 3D descriptors, the fusion strategy as well as the online confidence measures are computationally efficient.
In this article we present a system for coupling different base algorithms and sensors for segmentation. Three different solutions for image segmentation by fusion are described and compared, and results are shown. The fusion of base algorithms with color information and a sensor fusion process of an optical and a radar sensor including a feedback over time is realized. A feature-in, decision-out fusion process is solved. For the fusion process a multilayer perceptron (MLP) with one hidden layer is used as a coupling net. The activity of the output neuron represents the membership of each pixel to an initial segment.
We present a novel approach of distributing small- to mid-scale neural networks onto modern parallel architectures. In this context we discuss the induced challenges and possible solutions. We provide a detailed theoretical analysis with respect to space and time complexities and reinforce our computation model with evaluations which show a performance gain over state-of-the-art approaches.
Object detection systems which operate on large data streams require an efficient scaling with available computation power. We analyze how the use of tile-images can increase the efficiency (i.e., execution speed) of distributed HOG-based object detectors. Furthermore we discuss the challenges of using our developed algorithms in practical large-scale scenarios. We show with a structured evaluation that our approach can provide a speed-up of 30-180% for existing architectures. Due to its generic formulation it can be applied to a wide range of HOG-based (or similar) algorithms. In this context we also study the effects of applying our method to an existing detector and discuss a scalable strategy for distributing the computation among nodes in a cluster system.
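As a hedged illustration of the tile-image idea (not the authors' exact algorithm), the helper below packs several frames into one tile so that a single detector pass amortizes per-call overhead; the padding and the returned offsets are assumptions needed to keep detection windows from straddling two source images and to map detections back.

```python
import numpy as np

def make_tile(images, cols=4, pad=16):
    """Pack equally sized frames into one tile image.

    Run the (e.g., HOG-based) detector once on the tile, then subtract
    each image's offset from the resulting detections to map them back.
    """
    h, w = images[0].shape[:2]
    rows = int(np.ceil(len(images) / cols))
    tile = np.zeros((rows * (h + pad), cols * (w + pad)) + images[0].shape[2:],
                    dtype=images[0].dtype)
    offsets = []
    for k, img in enumerate(images):
        r, c = divmod(k, cols)
        y, x = r * (h + pad), c * (w + pad)
        tile[y:y + h, x:x + w] = img
        offsets.append((x, y))
    return tile, offsets

frames = [np.zeros((128, 64, 3), dtype=np.uint8) for _ in range(6)]
tile, offsets = make_tile(frames)
print(tile.shape, offsets[:2])
```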
The behavior planning of a vehicle in real-world traffic is a difficult problem to solve. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. But ultimately, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper a behavior planning for a driver assistance system aiming at cruise control is proposed. In this system the controlled variables are determined by an evaluation of the dynamics of two one-dimensional neural fields. The stimuli of the fields are determined according to sensor information produced by a simulation environment.
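For reference, the dynamics commonly used for such fields follow the standard Amari formulation (a textbook form, not quoted from the paper):

\[
\tau\,\frac{\partial u(x,t)}{\partial t} = -u(x,t) + \int w(x - x')\, f\bigl(u(x',t)\bigr)\, dx' + S(x,t) + h,
\]

where \(u(x,t)\) is the field activation over the behavioral dimension (e.g., steering angle or velocity), \(w\) a lateral interaction kernel, \(f\) a sigmoidal output function, \(S(x,t)\) the sensor-driven stimulus and \(h\) a resting level; stable activation peaks then encode the selected value of the controlled variable.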
In this paper, we describe a method to model human clothes for later recognition using RGB and SWIR cameras. A basic model is estimated during people detection and tracking. This model is refined once recognition is triggered. For the refining, several saliency maps are used to extract individual features. These individual features are located separately for each human body part. The body parts are estimated by the use of a silhouette extraction combined with a skeleton estimation. In this way, the model describes the human clothes in a compact manner which allows the use of a simple and fast comparison method for people recognition. Such models can be used in security and service applications.
A self-driving car that operates on SAE automation level 3 or 4 can navigate through different traffic conditions without human input. If such a system reaches its operating limits, it will emit a takeover request before shutting down. This request will likely generate a physical response from the driver. Our goal is to shed light on the stress perception of drivers in various scenarios. To this end, we have carried out a feasibility study in preparation. Two subjects drove an autonomous vehicle; during the ride ECG signals were recorded and afterwards evaluated. Unfortunately, the stress reaction to takeover requests could not be investigated due to the poor functioning of the vehicle's autonomous driving mode; however, the reaction to autopilot misconduct without warning to the driver could be investigated instead.
Checking wind turbines for damage is a common problem for operators of wind parks, as regular inspections are legally required in many countries and prevention is economically viable. While some of the common forms of damage are easily visible on the surface, structural problems can remain invisible for years before they eventually result in catastrophic failure of a rotor blade. Common forms of testing fibre composite parts like ultrasonic testing or X-ray tests are impractical due to the large dimensions of wind turbine components and their limited accessibility for any short-range methods. Active thermographic inspection of wind turbines is a promising approach to testing for structural flaws beneath the surface of rotor blades. As part of an ongoing research project, a setup for testing the general viability of this method was built and used to compare different thermographic cameras. A sample cut from a discarded rotor blade was modified to emulate structural damage. The results are promising for the development of a cost effective on-site testing system.
Increasing economic viability and safety through structural health monitoring of wind turbines
(2017)
Serious accidents with property damage or even human casualties result from structural flaws in wind turbine rotor blades. Common maintenance practices result in long downtimes and do not lead to the required results. Therefore, the Ruhr West University of Applied Sciences and iQbis Consulting GmbH are currently researching a new structural health monitoring method for wind turbine rotor blades. The goal of this project is to build a sensor system that can detect structural weaknesses inside rotor blades without the need for downtime or industrial climbers. This technology has the potential to prevent accidents, save lives, extend the useful life of wind turbines and optimize the production of green energy.
We present a pipeline for recognizing dynamic freehand gestures on mobile devices based on extracting depth information coming from a single Time-of-Flight sensor. Hand gestures are recorded with a mobile 3D sensor, transformed frame by frame into an appropriate 3D descriptor and fed into a deep LSTM network for recognition purposes. LSTM being a recurrent neural model, it is uniquely suited for classifying explicitly time-dependent data such as hand gestures. For training and testing purposes, we create a small database of four hand gesture classes, each comprising 40 × 150 3D frames. We conduct experiments concerning execution speed on a mobile device, generalization capability as a function of network topology, and classification ability ‘ahead of time’, i.e., when the gesture is not yet completed. Recognition rates are high (>95%) and maintainable in real-time as a single classification step requires less than 1 ms computation time, introducing freehand gestures for mobile systems.
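A minimal Keras stand-in for such a recurrent classifier could look as follows; the descriptor length, hidden size and training call are placeholders, not the paper's configuration.

```python
import tensorflow as tf

# Hypothetical shapes: 150-frame sequences of 128-dim 3D descriptors,
# four dynamic gesture classes (sizes follow the abstract only loosely).
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(64, input_shape=(150, 128)),   # temporal model
    tf.keras.layers.Dense(4, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_seqs, train_labels, epochs=10)        # placeholders
# 'Ahead of time': feed a zero-padded, truncated sequence to get an
# early prediction before the gesture is completed.
```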
RELEVANCE & RESEARCH QUESTION: Currently the effectiveness of Virtual Reality (VR) and Augmented Reality (AR) systems as practical teaching methods is virtually uncharted. The proof that these systems can provide the same or better learning outcomes than a text-instructed practical task could represent a significant benefit for educational activities. METHODS & DATA: To fathom the effectiveness, an experimental study with three conditions (VR, AR and a real setup) was used to teach participants how to assemble a standard computer. Each condition was divided into two parts: part one, in which participants were confronted with their specific scenario, and part two, in which participants had to go through a real practice run after one week. The learning outcome was determined by the designation of hardware parts, a quiz that queried their function and the correct assembling of the components, in addition to the time needed. Apart from the mere performance, the acceptance of such applications in an academic context and differences in evaluation by men and women were of interest. RESULTS: Results concerning the learning outcome showed that participants from the VR condition outperformed those who learned from the real setup ((M=10.0, SD=0.0) [virtual reality] vs. (M=8.95, SD=1.27) [control]). Furthermore, results from the assembling duration assessment demonstrated that VR group participants completed their tasks 6.62% faster than the control group. Regarding the identification of hardware parts, both groups scored a significant improvement during the post condition compared to the first test run, indicating a learning progress. However, due to the VR group achieving a better outcome in average answers and a more significant difference between the trials, the results indicate a better performance by participants assigned to the VR condition. ADDED VALUE: The results revealed that VR and AR systems could exceed a text-based approach in terms of learning outcome performance. The effectiveness of the systems implies a major benefit for the educational landscape, as learning content that is not realizable in terms of cost, distance or logistics could be designed as an immersive and engaging experience.
Relax yourself - Using Virtual Reality to enhance employees' mental health and work performance
(2019)
This paper presents work in progress aiming to develop an actively adapting virtual reality (VR) relaxation application. Due to the immersive nature of VR technologies, people can escape from their real environment and get into a relaxing state. The goal of the application is to adapt to the users' physiological signals to foster the positive effect. So far, a first version of the VR application has been constructed and is currently being evaluated in an experiment. Preliminary results of this study demonstrate that people appreciate the immersion into the virtual environment and the escape from reality. Moreover, participants highlighted the option to adapt to users' needs and preferences. Based on the final study data, the constructed application will be enhanced with regard to adaptation and surrounding factors.
We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data coming from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we are able to reliably classify a set of ten hand gestures in real-time and describe the setup of the system, the utilised methods as well as possible application scenarios.
As smart homes become more and more popular, the need for assisting systems which interface between users and home environments is growing. Furthermore, for elderly and disabled people living in such homes it is vitally important to develop devices which can support and aid them in their ordinary daily life. This demands means and tools that extend independent living and promote improved health. In this work we review the state of the art of assistant systems in home environments. A case study of a medical assisting system for the elderly and people with disabilities is discussed in depth. A smart NFC-based person-specific assistant system for services in the home environment is proposed. The role of this system is to assist by controlling home activities and adapting the home-human interface towards the needs of the considered person. For the special case of medical assistance, the system is able to provide person-specific medical assistance for elderly or disabled people. The system identifies its interaction partner using biometric features. According to the recognized ID the system, first, adapts to the needs of the recognized person; second, it presents a person-specific list of medicaments either visually, on a screen, or acoustically, via a speaker; and third, it gives an alarm in case a medicament is taken later or earlier than the normal time.
In the context of existing approaches to cluster computing we present a newly developed modular framework `SimpleHydra' for rapid deployment and management of Beowulf clusters. Instead of focusing only on pure computation tasks on homogeneous clusters (i.e., clusters with identically set up nodes), this framework aims to ease the configuration of heterogeneous clusters and to provide a low-level/high-level object-oriented API for low-latency distributed computing. Our framework does not make any restrictions regarding the hardware and minimizes the use of external libraries to the case of special modules. In addition, our framework enables the user to develop highly dynamic cluster topologies. We describe the framework's general structure as well as time-critical elements, give application examples in the `Big Data' context from a research project and briefly discuss additional features. Furthermore we give a thorough theoretical time/space complexity analysis of our implemented methods and general approaches.
In this paper, we describe an efficient method for fast people re-identification based on models of human clothes. An initial model is estimated during people detection and tracking, which is refined during the re-identification. This stepwise extraction, combination and comparison of features speeds up the whole re-identification. For the refining, several saliency maps are used to extract individual features. These individual features are located separately for each human body part. The body parts are located with an optimized GPU-based HOG detector. Furthermore, we introduce a mean-shift-based fusion concept which utilizes multiple detectors in order to increase the detection reliability.
Currently, robot assisting systems with emotion understanding abilities in home environments are generally realized in one of two ways. The first is implementing such systems so that they offer general services for all considered persons, without considering privacy or the special needs of their interaction partners. The second is targeting such systems at merely one person. In this work we present a robot assisting system which has both the ability to assist several persons at the same time and to sustain their privacy and security. The robot can interact with its interaction partner emotionally by analyzing the partner's emotions, expressed either visually (facial expression) or auditorily (speech prosody). The role of this system is to provide person-specific support in the home environment. In order to identify its interaction partner the system uses diverse biometric traits. According to the recognized ID the system, first, adapts to the needs of the recognized person; second, it loads the corresponding emotional profile of the detected interaction partner in order to practice person-specific emotional human-robot interaction, which has an advantage over person-independent interaction.
Pedestrian movement analysis at airports - video-based analysis across multiple camera systems
(2013)
The Desire project aimed at the development and implementation of a mobile service robotic research platform (technology platform) able to handle real world scenarios regarding service robotic tasks. Different modules for different tasks plus an interaction infrastructure were integrated on this platform. An example of a real world scenario task is the support of a handicapped person to clean up a kitchen in home environments.
One of the main challenges to be solved in this field is the interaction with people. To start an interaction process between a robot and a person, the most important information is the knowledge about the interaction partner's identity and whether the interaction partner is present or not. This means the robot must be able to detect and finally identify persons. Accurate identification of specific individuals has to be done by analyzing the individual features of each person. A typical feature set that allows for a distinct identification of a specific person is often extracted from the facial image acquired by a camera. This feature set is stored in a database to allow the identification of different persons independent of place and time by comparing given feature sets. Thus, a face recognition module was integrated into the technology platform which includes face detection and identification algorithms.
To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in the past years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a solution for a flexible architecture for a driver assistance system. The architecture can be subdivided into four different parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map) searching for object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (e.g., traffic rules, physical laws) and additional information (e.g., GPS, lane information), and it is implicitly used by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects the information for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (i.e., bird's eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task. The influence of behavior planning on the behavior of the guided vehicle is limited to advice, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off for using this architecture.
The scene interpretation and the behavior planning of a vehicle in real-world traffic is a difficult problem to solve. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. But ultimately, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper a scene interpretation and a behavior planning for a driver assistance system aiming at cruise control is proposed. In this system the controlled variables are determined by an evaluation of the dynamics of a two-dimensional neural field for scene interpretation and two one-dimensional neural fields controlling steering angle and velocity. The stimuli of the fields are determined according to the sensor information.
Practical application of object detection systems, in research or industry, favors highly optimized black-box solutions. We show how such a highly optimized system can be further augmented in terms of its reliability with only a minimal increase of computation times, i.e., preserving real-time boundaries. Our solution leaves the initial (HOG-based) detector unchanged and introduces novel concepts of non-linear metrics and fusion of ROIs. In this context we also introduce a novel way of combining feature vectors for mean-shift grouping. We evaluate our approach on a standardized image database with a HOG detector, which is representative for practical applications. Our results show that the amount of false-positive detections can be reduced by a factor of 4 with a negligible complexity increase. Although introduced and applied to a HOG-based system, our approach can easily be adapted for different detectors.
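The paper's non-linear metrics and combined feature vectors are not reproduced here; as a simplified stand-in, detection boxes can be fused by mean-shift clustering of their box parameters with scikit-learn, returning one representative box per cluster.

```python
import numpy as np
from sklearn.cluster import MeanShift

def group_rois(boxes, bandwidth=30.0):
    """Fuse overlapping detections (x, y, w, h) via mean-shift clustering.

    The feature vector could be augmented with scores or non-linear
    transforms; plain box coordinates are used here for simplicity.
    """
    feats = np.asarray(boxes, dtype=float)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(feats)
    return np.array([feats[labels == k].mean(axis=0)
                     for k in np.unique(labels)])

boxes = [[100, 80, 64, 128], [104, 84, 64, 128], [400, 90, 64, 128]]
print(group_rois(boxes))   # two fused boxes expected
```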
Industry 4.0 is known as the fourth industrial revolution which refers to the integration of technologies that make the factories interoperable by seamlessly connecting machines, employees and sensors for communication. In Industry 4.0, one of the key features is the use of new technologies to recognize the current context. Thus, the employees are supported with contextual information for speeding up decision-making during various processes related to planning, production, maintenance, etc. As a contribution to this area, the work described here aims to introduce a cyber-physical system (CPS) approach to provide context-based and intelligent support to employees in heavy industries using new technologies, especially in the field of mobile devices. In this work, mobile device sensors and image processing techniques are used to recognize the context which requires specific support. In addition, new scenarios and associated processes are developed to support the employees on the basis of new, flexible, adaptive and mobile technologies.
This contribution demonstrates the efficient embedding of a single depth camera into the automotive environment, making mid-air gesture interaction for mobile applications viable in such a scenario. In this setting a new human-machine interface is implemented to give an idea of future improvements in automation processes in industrial applications. Our system is based on a data-driven approach, learning hand poses as well as gestures from a large database in order to apply them on mobile devices. We register any movement in a nearby driver area and crop the data efficiently by means of PCA, transforming it into so-called feature vectors which form the input for our multilayer perceptrons (MLPs). After MLP classification, the interpretation of user input is sent via WiFi to a tablet PC mounted in the car interior visualizing an infotainment system which the user is able to interact with. We demonstrate that with this setup hand gestures as well as hand poses are easily and efficiently interpretable, insofar as they become an intuitive and supplementary means of interaction for automotive HMI in mobile scenarios, realizable in real-time.
The first robots are currently appearing on the consumer market. Initially they are targeted at rather simple applications such as entertainment and home convenience. For more complex areas, these robots will need to collaborate and interactively communicate with their human users, which requires appropriate man-machine interaction technologies and considerable cognitive abilities on the robot's side. Consumer acceptance will strongly depend on the integrated system. Thus, system integration and evaluation of the integrated system is becoming increasingly important. This paper describes our approach to construct a robotic assistance system. We present experience with an integrated technology demonstration and exposure of the integrated system to the public.
In this contribution we present a novel approach to transform data from time-of-flight (ToF) sensors to be interpretable by Convolutional Neural Networks (CNNs). As ToF data tends to be overly noisy depending on various factors such as illumination, reflection coefficient and distance, the need for a robust algorithmic approach becomes evident. By spanning a three-dimensional grid of fixed size around each point cloud we are able to transform three-dimensional input to become processable by CNNs. This simple and effective neighborhood-preserving methodology demonstrates that CNNs are indeed able to extract the relevant information and learn a set of filters, enabling them to differentiate a complex set of ten different gestures obtained from 20 different individuals and containing 600,000 samples overall. Our 20-fold cross-validation shows the generalization performance of the network, achieving an accuracy of up to 98.5% on validation sets comprising 20,000 data samples. The real-time applicability of our system is demonstrated via an interactive validation on an infotainment system running at up to 40 fps on an iPad in the vehicle interior.
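A small 3D CNN consuming such fixed-size grids might be sketched in Keras as below; the filter counts, grid size and ten output classes are illustrative assumptions rather than the evaluated architecture.

```python
import tensorflow as tf

# Consumes fixed-size occupancy grids built around each point cloud
# (compare the rasterization sketch earlier in this listing).
model = tf.keras.Sequential([
    tf.keras.layers.Conv3D(16, 3, activation="relu",
                           input_shape=(32, 32, 32, 1)),
    tf.keras.layers.MaxPooling3D(2),
    tf.keras.layers.Conv3D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling3D(2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),   # ten gestures
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```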
Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine interaction (HMI) in general and human-robot interaction (HRI) in particular. It provides a means to incorporate non-verbal feedback in the course of interaction. Humans express their emotional and affective state rather unconsciously, exploiting their different natural communication modalities such as body language, facial expression and prosodic intonation. In order to achieve applicability in realistic HRI settings, we develop person-independent affective models. In this paper, we present a study on multimodal recognition of emotions from such auditive and visual cues for interaction interfaces. We recognize six classes of basic emotions plus the neutral one of talking persons. The focus hereby lies on the simultaneous online visual and acoustic analysis of speaking faces. A probabilistic decision-level fusion scheme based on Bayesian networks is applied to draw benefit from the complementary information of both the acoustic and the visual cues. We compare the performance of our state-of-the-art recognition systems for separate modalities to the improved results after applying our fusion scheme, on both the DaFEx database and real-life data captured directly from a robot. We furthermore discuss the results with regard to the theoretical background and future applications.
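As a simplified stand-in for the Bayesian-network fusion (assuming conditional independence of the cues given the emotion class), per-class posteriors from the two modalities can be combined as follows; the class count and example numbers are made up.

```python
import numpy as np

def fuse_posteriors(p_audio, p_video, prior):
    """Naive-Bayes decision-level fusion of two per-class posteriors.

    Assuming conditional independence of the cues given the class:
    p(c | a, v) is proportional to p(c | a) * p(c | v) / p(c).
    """
    fused = p_audio * p_video / np.maximum(prior, 1e-12)
    return fused / fused.sum()

# Seven classes: six basic emotions plus neutral (uniform prior assumed).
prior = np.full(7, 1 / 7)
p_a = np.array([.05, .40, .10, .10, .10, .15, .10])   # acoustic cue
p_v = np.array([.10, .30, .20, .10, .10, .10, .10])   # visual cue
print(fuse_posteriors(p_a, p_v, prior))
```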
To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in the past years. Fully or partly autonomously guided vehicles, particularly for road traffic, pose high demands on the development of reliable algorithms. Principal problems are caused by having a moving observer in predominantly natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We present a solution for a driver assistance system. We concentrate on the aspects of video-based scene analysis and the organization of behavior.
Driver assistance systems are used to take over action sequences from the driver of a motor vehicle. These action sequences are defined by a task that is either handed over to the driver assistance system by the driver or solved inherently by the system. For complex driver assistance systems, autonomous navigation in road traffic is envisaged. A new method is presented which can perform the motion control of an autonomous vehicle; the steering angle and the velocity are influenced. For this task, a dynamic approach from the field of neural fields is chosen. Relevant attributes for the course of the drive at different levels of abstraction can thereby be processed in a simple (additive) manner.