Applying step heating thermography to wind turbine rotor blades as a non-destructive testing method
(2017)
Systems for automated image analysis are useful for a variety of tasks, and their importance is still growing due to technological advances and increasing social acceptance. In the field of driver assistance systems in particular, research has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror of a car. The approach consists of a sequential and a parallel branch of sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.
Building upon prior results, we present an alternative approach to efficiently classifying a complex set of 3D hand poses obtained from modern Time-of-Flight (ToF) sensors. We demonstrate that satisfactory results can be achieved in spite of the sensors' low resolution and high noise and a demanding outdoor environment. We set up a large database of point clouds in order to train multilayer perceptrons as well as support vector machines to classify the various hand poses. Our goal is to fuse data from multiple ToF sensors, which observe the poses from multiple angles. The presented contribution illustrates that real-time capability can be maintained with such a setup, as the 3D descriptors used, the fusion strategy and the online confidence measures are computationally efficient.
In this article we present a system for coupling different base algorithms and sensors for segmentation. Three different solutions for image segmentation by fusion are described and compared, and results are shown. The fusion of base algorithms with color information and a sensor fusion process of an optical and a radar sensor, including a feedback over time, is realized. A feature-in, decision-out fusion process is solved. For the fusion process, a multilayer perceptron (MLP) with one hidden layer is used as a coupling net. The activity of the output neuron represents the membership of each pixel to an initial segment.
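As an illustration of the coupling-net idea, here is a minimal Python sketch assuming a plausible per-pixel feature layout (the feature names, toy labels and network size are assumptions for illustration, not the paper's implementation):

    # Feature-in, decision-out coupling: each pixel carries one score per base
    # segmenter/sensor; a one-hidden-layer MLP outputs its segment membership.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_pixels = 10_000
    # Hypothetical per-pixel features: optical score, radar score, color cue,
    # and the fused decision fed back from the previous frame.
    X = rng.random((n_pixels, 4))
    y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] > 0.5).astype(int)  # toy labels

    coupling_net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    coupling_net.fit(X, y)

    # The activity of the output neuron (class-1 probability) is read as the
    # membership of each pixel in the initial segment.
    membership = coupling_net.predict_proba(X)[:, 1]
    segment_mask = membership > 0.5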
We present a novel approach to distributing small- to mid-scale neural networks onto modern parallel architectures. In this context we discuss the induced challenges and possible solutions. We provide a detailed theoretical analysis with respect to space and time complexities and reinforce our computation model with evaluations which show a performance gain over state-of-the-art approaches.
Object detection systems which operate on large data streams require efficient scaling with the available computation power. We analyze how the use of tile-images can increase the efficiency (i.e., execution speed) of distributed HOG-based object detectors. Furthermore, we discuss the challenges of using our developed algorithms in practical large-scale scenarios. We show with a structured evaluation that our approach can provide a speed-up of 30-180% for existing architectures. Due to its generic formulation, it can be applied to a wide range of HOG-based (or similar) algorithms. In this context we also study the effects of applying our method to an existing detector and discuss a scalable strategy for distributing the computation among nodes in a cluster system.
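To illustrate the tile-image idea, a toy Python reconstruction follows (grid size and window dimensions are assumed; the paper's packing strategy and detector interface are not reproduced). Equally sized candidate regions are packed into one large image so the detector runs once per tile instead of once per region, amortising per-call overhead:

    import numpy as np

    def pack_tiles(regions, grid=(4, 4)):
        """Pack up to grid[0]*grid[1] same-sized regions into one tile-image."""
        h, w = regions[0].shape[:2]
        rows, cols = grid
        tile = np.zeros((rows * h, cols * w) + regions[0].shape[2:], regions[0].dtype)
        for i, region in enumerate(regions[: rows * cols]):
            r, c = divmod(i, cols)
            tile[r * h:(r + 1) * h, c * w:(c + 1) * w] = region
        return tile

    # Usage: 16 hypothetical 128x64 candidate windows -> one 512x256 detector call.
    windows = [np.zeros((128, 64), np.uint8) for _ in range(16)]
    tile_image = pack_tiles(windows)  # feed this to the HOG detector once
    # Detections are mapped back to their source regions via grid position.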
The behavior planning of a vehicle in real world traffic is a difficult problem to be solved. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. But finally behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper a behavior planning for a driver assistance system aiming on cruise control is proposed. In this system the controlled variables are determined by an evaluation of the dynamics of two one-dimensional neural fields. The stimuli of the field are determined according to sensor information produced by a simulation environment.
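For readers unfamiliar with neural fields, the following Python sketch shows generic Amari-type dynamics, tau * du/dt = -u + w * f(u) + stimulus + h (kernel and parameters are textbook assumptions; the paper's exact formulation is not given here). One such field is simulated per controlled variable, and the position of the activity peak is read out as the control value:

    import numpy as np

    x = np.linspace(-1.0, 1.0, 201)        # e.g. normalised steering-angle axis
    u = np.zeros_like(x)                   # field activation
    h = -0.5                               # resting level
    tau, dt = 0.1, 0.01

    dx = x[:, None] - x[None, :]
    w = 1.5 * np.exp(-dx**2 / 0.02) - 0.5  # local excitation, global inhibition

    def step(u, stimulus):
        f = 1.0 / (1.0 + np.exp(-10.0 * u))              # sigmoidal output
        du = -u + (w @ f) * (x[1] - x[0]) + stimulus + h
        return u + dt * du / tau

    stimulus = np.exp(-(x - 0.3)**2 / 0.01)  # toy sensor-derived input
    for _ in range(300):
        u = step(u, stimulus)
    steering_angle = x[np.argmax(u)]         # peak position = control value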
In this paper, we describe a method to model human clothes for later recognition using RGB and SWIR cameras. A basic model is estimated during people detection and tracking. This model is refined when the recognition is triggered. For the refinement, several saliency maps are used to extract individual features. These individual features are located separately for each human body part. The body parts are estimated using a silhouette extraction combined with a skeleton estimation. In this way, the model describes the human clothes in a compact manner which allows the use of a simple and fast comparison method for people recognition. Such models can be used in security and service applications.
A self-driving car that operates on SAE automation level 3 or 4 can navigate through different traffic conditions without human input. If such a system reaches its operating limits, it emits a takeover request before shutting down. This request will likely generate a physical response in the driver. Our goal is to shed light on the stress perception of drivers in various scenarios. To this end, we carried out a feasibility study in preparation. Two subjects drove an autonomous vehicle; during the ride, ECG signals were recorded and afterwards evaluated. Unfortunately, the stress reaction to takeover requests could not be investigated due to the poor function of the vehicle's autonomous driving mode; however, the reaction to autopilot misconduct without warning to the driver could be investigated instead.
Checking wind turbines for damage is a common problem for operators of wind parks, as regular inspections are legally required in many countries and prevention is economically viable. While some common forms of damage are easily visible on the surface, structural problems can remain invisible for years before they eventually result in catastrophic failure of a rotor blade. Common methods of testing fibre composite parts, like ultrasonic testing or X-ray tests, are impractical due to the large dimensions of wind turbine components and their limited accessibility for any short-range methods. Active thermographic inspection of wind turbines is a promising approach to testing for structural flaws beneath the surface of rotor blades. As part of an ongoing research project, a setup for testing the general viability of this method was built and used to compare different thermographic cameras. A sample cut from a discarded rotor blade was modified to emulate structural damage. The results are promising for the development of a cost-effective on-site testing system.
Increasing economic viability and safety through structural health monitoring of wind turbines
(2017)
Serious accidents with property damage or even human casualties result from structural flaws in wind turbine rotor blades. Common maintenance practices result in long downtimes and do not lead to the required results. Therefore, the Ruhr West University of Applied Sciences and iQbis Consulting GmbH are currently researching a new structural health monitoring method for wind turbine rotor blades. The goal of this project is to build a sensor system that can detect structural weaknesses inside rotor blades without requiring downtime for industrial climbers. This technology has the potential to prevent accidents, save lives, extend the useful life of wind turbines and optimize the production of green energy.
We present a pipeline for recognizing dynamic freehand gestures on mobile devices based on depth information from a single Time-of-Flight sensor. Hand gestures are recorded with a mobile 3D sensor, transformed frame by frame into an appropriate 3D descriptor and fed into a deep LSTM network for recognition purposes. As a recurrent neural model, LSTM is uniquely suited to classifying explicitly time-dependent data such as hand gestures. For training and testing purposes, we create a small database of four hand gesture classes, each comprising 40 × 150 3D frames. We conduct experiments concerning execution speed on a mobile device, generalization capability as a function of network topology, and classification ability ‘ahead of time’, i.e., when the gesture is not yet completed. Recognition rates are high (>95%) and maintainable in real time, as a single classification step requires less than 1 ms of computation time, introducing freehand gestures to mobile systems.
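To make the pipeline concrete, a minimal PyTorch sketch follows (descriptor size, hidden size and topology are our assumptions, not the published network):

    import torch
    import torch.nn as nn

    class GestureLSTM(nn.Module):
        def __init__(self, descriptor_dim=64, hidden_dim=128, n_classes=4):
            super().__init__()
            self.lstm = nn.LSTM(descriptor_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, n_classes)

        def forward(self, frames):            # frames: (batch, time, descriptor_dim)
            _, (h_n, _) = self.lstm(frames)
            return self.head(h_n[-1])         # logits over gesture classes

    model = GestureLSTM()
    clip = torch.randn(1, 150, 64)            # one 150-frame gesture, toy data
    logits = model(clip)
    # 'Ahead of time' classification: feed a truncated prefix of the clip,
    # e.g. model(clip[:, :75]) once half of the gesture has been observed.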
RELEVANCE & RESEARCH QUESTION: Currently, the effectiveness of Virtual Reality (VR) and Augmented Reality (AR) systems as practical teaching methods is virtually uncharted. Proof that these systems can provide the same or better learning outcomes than a text-instructed practical task could represent a significant benefit for educational activities. METHODS & DATA: To fathom this effectiveness, an experimental study with three conditions (VR, AR and a real setup) was used to teach participants how to assemble a standard computer. Each condition was divided into two parts: part one, in which participants were confronted with their specific scenario, and part two, in which participants had to go through a real practice run after one week. The learning outcome was determined by the designation of hardware parts, a quiz that queried their function and the correct assembly of the components, in addition to the time needed. Apart from mere performance, the acceptance of such applications in an academic context and differences in evaluation by men and women were of interest. RESULTS: Results concerning the learning outcome showed that participants from the VR condition outperformed those who learned from the real setup ((M=10.0, SD=0.0) [virtual reality] vs. (M=8.95, SD=1.27) [control]). Furthermore, results from the assembly duration assessment demonstrated that VR group participants completed their tasks 6.62% faster than the control group. Regarding the identification of hardware parts, both groups scored a significant improvement in the post condition compared to the first test run, indicating learning progress. However, as the VR group achieved a better outcome in average answers and a larger difference between the trials, the results indicate a better performance by participants assigned to the VR condition. ADDED VALUE: The results revealed that VR and AR systems can exceed a text-based approach in terms of learning outcome performance. The effectiveness of these systems implies a major benefit for the educational landscape, as learning content that is not realizable in terms of cost, distance or logistics could be designed as an immersive and engaging experience.
Artificial Intelligence Driven Human-Machine Collaboration Scenarios in Virtual Reality (Poster)
(2018)
In this review, we describe current Machine Learning approaches to hand gesture recognition with depth data from time-of-flight sensors. In particular, we summarise the achievements of a line of research at the Computational Neuroscience laboratory at the Ruhr West University of Applied Sciences. Relating our results to the work of others in this field, we confirm that Convolutional Neural Networks and Long Short-Term Memory yield the most reliable results. We investigated several sensor data fusion techniques in a deep learning framework and performed user studies to evaluate our system in practice. In the course of our research, we gathered and published our data in a novel benchmark dataset (REHAP), containing over a million unique three-dimensional hand posture samples.
Relax yourself - Using Virtual Reality to enhance employees' mental health and work performance
(2019)
This paper presents work in progress aiming to develop an actively adapting virtual reality (VR) relaxation application. Due to the immersive nature of VR technologies, people can escape from their real environment and get into a relaxed state. The goal of the application is to adapt to the users' physiological signals to foster this positive effect. So far, a first version of the VR application has been constructed and is currently being evaluated in an experiment. Preliminary results of this study demonstrate that people appreciate the immersion into the virtual environment and the escape from reality. Moreover, participants highlighted the option to adapt to users' needs and preferences. Based on the final study data, the constructed application will be enhanced with regard to adaptation and surrounding factors.
We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data coming from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we are able to reliably classify a set of ten hand gestures in real time and describe the setup of the system, the utilised methods as well as possible application scenarios.
With the introduction of Apple's iPhone, gesture control became popular and was perceived as an intuitive means of interaction. Contactless gestures received broad attention with the X-Box Kinect. Current technology is limited to a small number of uses, mainly in entertainment systems. The target of this project is to increase the range of possible applications, e.g. to the field of automotive, industrial applications (manufacturing plants), assisted living in contexts ranging from private households to hospitals (interaction for people with disabilities) and many more.
With a rapidly ageing population, it is increasingly important to develop devices for elderly and disabled people that can support and aid them in their daily lives, helping them to live at home as long as possible. The goal of this project is to implement a human-machine interaction and assistance system that can offer personalised health support for elderly people, or for those who have special needs in the home environment.
As smart homes become more and more popular, the need for assisting systems which interface between users and home environments is growing. Furthermore, for elderly and disabled people living in such homes, it is especially important to develop devices which can support and aid them in their daily lives. This demands means and tools that extend independent living and promote improved health. In this work we review the state of the art in assistance systems for home environments. A case study of a medical assistance system for elderly people and people with disabilities is discussed in depth. A smart NFC-based person-specific assistance system for services in the home environment is proposed. The role of this system is to assist by controlling home activities and adapting the home-human interface to the needs of the considered person. For the special case of medical assistance, the system is able to provide elderly or disabled people with person-specific medical assistance. The system is able to identify its interaction partner using biometric features. According to the recognized ID, the system first adapts to the needs of the recognized person. Second, the system presents a person-specific list of medicaments, either visually (on screen) or acoustically (via speaker). Third, the system raises an alarm if a medicament is taken later or earlier than its normal taking time.
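The described interaction logic can be summarized in a short, purely hypothetical Python sketch (all identifiers, data and the NFC trigger are invented for illustration; the paper specifies no implementation at this level):

    from datetime import datetime, timedelta

    PROFILES = {
        "person_42": {
            "output": "speech",    # interface adapted per recognized person
            "medications": {"aspirin": datetime(2024, 1, 1, 8, 0)},
            "tolerance": timedelta(minutes=30),
        }
    }

    def present(profile, text):
        # visual (screen) or acoustic (speaker) output, chosen per profile
        print(f"[{profile['output']}] {text}")

    def alert(profile, text):
        print(f"[ALARM via {profile['output']}] {text}")

    def on_nfc_tag(person_id, medication, taken_at):
        profile = PROFILES[person_id]          # ID from biometric/NFC lookup
        due = profile["medications"][medication]
        if abs(taken_at - due) > profile["tolerance"]:
            alert(profile, f"{medication} taken outside its scheduled window")
        else:
            present(profile, f"{medication} confirmed on time")

    on_nfc_tag("person_42", "aspirin", datetime(2024, 1, 1, 9, 0))  # -> alarm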
In the context of existing approaches to cluster computing, we present a newly developed modular framework, SimpleHydra, for rapid deployment and management of Beowulf clusters. Instead of focusing only on pure computation tasks on homogeneous clusters (i.e., clusters with identically set up nodes), this framework aims to ease the configuration of heterogeneous clusters and to provide a low-level / high-level object-oriented API for low-latency distributed computing. Our framework does not make any restrictions regarding the hardware and minimizes the use of external libraries to the case of special modules. In addition, our framework enables the user to develop highly dynamic cluster topologies. We describe the framework's general structure as well as time-critical elements, give application examples in the 'Big Data' context from a research project and briefly discuss additional features. Furthermore, we give a thorough theoretical time/space complexity analysis of our implemented methods and general approaches.
In this paper, we describe an efficient method for fast people re-identification based on models of human clothes. An initial model is estimated during people detection and tracking, and refined during the re-identification. This stepwise extraction, combination and comparison of features speeds up the whole re-identification. For the refinement, several saliency maps are used to extract individual features. These individual features are located separately for each human body part. The body parts are located with an optimized GPU-based HOG detector. Furthermore, we introduce a mean-shift-based fusion concept which utilizes multiple detectors in order to increase the detection reliability.
Currently, robot assistance systems with emotion understanding abilities in home environments are generally realized in one of two manners. The first is to implement such systems so that they offer general services for all considered persons, without taking into account the privacy and special needs of their interaction partners. The second is to target such systems at merely one person. In this work we present a robot assistance system which has both the ability to assist several persons at the same time and to sustain their privacy and security. The robot can interact with its interaction partner emotionally by analyzing the partner's emotions, expressed either visually (facial expression) or auditively (speech prosody). The role of this system is to provide person-specific support in the home environment. In order to identify its interaction partner, the system uses diverse biometric traits. According to the recognized ID, the system first adapts to the needs of the recognized person. Second, the system loads the corresponding emotional profile of the detected interaction partner in order to practice person-specific emotional human-robot interaction, which has an advantage over person-independent interaction.
"Quarter agile" aims to promote older people's social participation and community
via physical and cognitive training which the participants also help create. The project relies heavily on the use of smartphones as training support. Loneliness
and loss of physical and cognitive skills are to be prevented by means of training
and participation in groups. We want to investigate the effects of technology-
assisted training on physical and cognitive performance and social participation of
older people. "Quarter agile" is geared towards healthy people ages 65 and up who are residents of the specified neighborhood.
Pedestrian movement analysis at airports - video-based analysis across multiple camera systems
(2013)
The Desire project aimed at the development and implementation of a mobile service robotic research platform (technology platform) able to handle real-world scenarios regarding service robotic tasks. Different modules for different tasks plus an interaction infrastructure were integrated on this platform. An example of a real-world scenario task is supporting a handicapped person in cleaning up a kitchen in a home environment.
One of the main challenges to be solved in this field is the interaction with people. To start an interaction process between a robot and a person, the most important information is knowledge about the interacting partner's identity and whether the interacting partner is present or not. This means the robot must be able to detect and, ultimately, identify persons. Accurate identification of specific individuals has to be done by analyzing the individual features of each person. A typical feature set that allows for the distinct identification of a specific person is often extracted from a facial image acquired by a camera. This feature set is stored in a database to allow the identification of different persons independent of place and time by comparing given feature sets. Thus, a face recognition module, which includes face detection and identification algorithms, was integrated into the technology platform.
Coming out of the labs, the first robots are currently appearing on the consumer market. Initially they target rather simple application scenarios ranging from entertainment to home convenience. However, one can expect that they will soon capture more complex areas. These robots will have an ever higher level and a broader range of functional competence, and will collaborate and interactively communicate with their human users. All this requires considerable cognitive abilities on the robot's side and appropriate man-machine interaction technologies. Apart from further development of individual functions and technologies, it is crucial to build and evaluate fully integrated systems. This paper describes our approach to constructing a robotic assistance system. We present experience with an integrated technology demonstration and the exposure of the integrated system to the public.
In this article, a flexible architecture is presented with which a modular solution of driver assistance tasks in motor vehicles can be demonstrated. An object-related analysis of sensor data, a behavior-based scene interpretation and a behavior planning unit are presented. A global knowledge base, on which each individual module operates, contains descriptions of physical relationships, behavioral rules for road traffic, as well as object and scene knowledge. External knowledge (e.g., GPS - Global Positioning System) can also be integrated into the knowledge base. As an application example of the behavior planning, an intelligent cruise control is presented.
To reduce the number of traffic accidents and to increase driver comfort, the idea of designing driver assistance systems has gained traction in the past years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a solution for a flexible architecture for a driver assistance system. The architecture can be subdivided into four different parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map), searching for object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (e.g., traffic rules, physical laws) and additional information (i.e., GPS, lane information), and it is implicitly used by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects the information for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (i.e., bird's eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task. The influence of behavior planning on the behavior of the guided vehicle is limited to advice, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off of using this architecture.
The scene interpretation and behavior planning of a vehicle in real-world traffic is a difficult problem. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. Ultimately, however, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper, a scene interpretation and a behavior planning for a driver assistance system aimed at cruise control are proposed. In this system, the controlled variables are determined by evaluating the dynamics of a two-dimensional neural field for scene interpretation and two one-dimensional neural fields controlling steering angle and velocity. The stimuli of the fields are determined according to the sensor information.
Systems for automated image analysis are useful for a variety of tasks. Their importance is still growing due to technological advances and increased social acceptance. Especially driver assistance systems have reached a high level of sophistication. Fully or partly autonomously guided vehicles, particularly for road traffic, require highly reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We present a system extracting important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach is divided into a sequential and a parallel phase of sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential phase and by fusion in the parallel phase. The main advantage of this approach is integrative coupling of different algorithms providing partly redundant information.
Practical application of object detection systems, in research or industry, favors highly optimized black-box solutions. We show how such a highly optimized system can be further augmented in terms of its reliability with only a minimal increase in computation time, i.e., preserving real-time boundaries. Our solution leaves the initial (HOG-based) detector unchanged and introduces novel concepts of non-linear metrics and fusion of ROIs. In this context we also introduce a novel way of combining feature vectors for mean-shift grouping. We evaluate our approach on a standardized image database with a HOG detector, which is representative of practical applications. Our results show that the amount of false-positive detections can be reduced by a factor of 4 with a negligible complexity increase. Although introduced and applied to a HOG-based system, our approach can easily be adapted to different detectors.
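As background, mean-shift grouping of detection ROIs can be sketched as follows (a generic Gaussian-kernel variant in Python; the paper's non-linear metric and feature-vector combination are deliberately not reproduced here):

    import numpy as np

    def mean_shift_group(points, bandwidth=0.5, iters=20):
        """Shift each point to a local density mode; coinciding modes fuse."""
        modes = points.copy()
        for _ in range(iters):
            for i in range(len(modes)):
                d2 = np.sum((points - modes[i]) ** 2, axis=1)
                weights = np.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel
                modes[i] = weights @ points / weights.sum()
        return modes

    # Hypothetical detections as (center_x, center_y, log_scale) per ROI.
    rois = np.array([[10.0, 10.0, 0.0], [10.5, 9.8, 0.1], [40.0, 42.0, 0.0]])
    modes = mean_shift_group(rois)
    # The two nearby ROIs converge to one mode and are fused into a single
    # detection; the third remains separate.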
Industry 4.0 is known as the fourth industrial revolution and refers to the integration of technologies that make factories interoperable by seamlessly connecting machines, employees and sensors for communication. In Industry 4.0, one of the key features is the use of new technologies to recognize the current context. Thus, employees are supported with contextual information to speed up decision-making during various processes related to planning, production, maintenance, etc. As a contribution to this area, the work described here aims to introduce a cyber-physical system (CPS) approach to provide context-based and intelligent support to employees in heavy industries using new technologies, especially in the field of mobile devices. In this work, mobile device sensors and image processing techniques are used to recognize the context which requires specific support. In addition, new scenarios and associated processes are developed to support the employees on the basis of new, flexible, adaptive and mobile technologies.
This contribution demonstrates the efficient embedding of a single depth camera into the automotive environment, making mid-air gesture interaction for mobile applications viable in such a scenario. In this setting, a new human-machine interface is implemented to give an idea of future improvements in automation processes in industrial applications. Our system is based on a data-driven approach, learning hand poses as well as gestures from a large database in order to apply them on mobile devices. We register any movement in a nearby driver area and crop the data efficiently by means of PCA, transforming it into so-called feature vectors which form the input to our multilayer perceptrons (MLPs). After MLP classification, the interpretation of the user input is sent via WiFi to a tablet PC mounted in the car interior, visualizing an infotainment system with which the user is able to interact. We demonstrate that with this setup, hand gestures as well as hand poses are easily and efficiently interpretable, insofar as they become an intuitive and supplementary means of interaction for automotive HMI in mobile scenarios, realizable in real time.
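A condensed Python sketch of this chain (crop size, component count, network shape and the WiFi message format are assumptions for illustration; toy data stands in for the gesture database):

    import json, socket
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    train_crops = rng.random((200, 3000))     # toy flattened depth crops
    train_labels = rng.integers(0, 10, 200)   # ten hand poses/gestures

    pca = PCA(n_components=50).fit(train_crops)           # feature vectors
    mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=300, random_state=0)
    mlp.fit(pca.transform(train_crops), train_labels)

    def classify_and_send(crop, host="192.168.0.2", port=5005):
        """Classify one depth crop and push the result to the in-car tablet."""
        pose = int(mlp.predict(pca.transform(crop.reshape(1, -1)))[0])
        msg = json.dumps({"pose": pose}).encode()
        socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, (host, port))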