The kEFIR project investigates the practical application of thermographic methods for analyzing the structural integrity of wind turbine rotor blades. The project arose from a collaboration between Hochschule Ruhr West (HRW) and IQbis Consulting GmbH within a ZIM funding project of the German Federal Ministry for Economic Affairs and Energy (BMWi). The background is the growing number of wind turbines (WKA) and the resulting increase in maintenance effort. To ensure the smooth operation of these plants, and thereby meet the particular availability requirements of energy-generating installations, high-quality fault analysis systems for wind turbines in operation are essential. Experience shows that such inspections are very time-consuming with current means and are usually budgeted at several working days; moreover, the reproducibility of the acquired data is generally not guaranteed with today's methods. To detect instabilities or damage in the rotor blades of a wind turbine at an early stage, the development of a fast, high-quality fault analysis system is therefore of central importance. One research focus in this context is the development of suitable imaging, non-contact methods that can be used during inspections. The use of thermographic sensors, for example, permits an analysis not only of the rotor blade surface but also of its inner structure. Furthermore, the rapidly growing market for unmanned aerial vehicles, such as position-stabilized quadrocopter systems, offers an additional opportunity to support the inspection of wind turbines with mobile, compact, airborne analysis systems.
Industry 4.0, known as the fourth industrial revolution, refers to the integration of technologies that make factories interoperable by seamlessly connecting machines, employees and sensors for communication. One of its key features is the use of new technologies to recognize the current context, so that employees are supported with contextual information that speeds up decision-making in processes related to planning, production, maintenance, etc. As a contribution to this area, the work described here introduces a cyber-physical system (CPS) approach to provide context-based, intelligent support to employees in heavy industries using new technologies, especially mobile devices. In this work, mobile device sensors and image processing techniques are used to recognize contexts that require specific support. In addition, new scenarios and associated processes are developed to support employees on the basis of new, flexible, adaptive and mobile technologies.
Artificial Intelligence Driven Human-Machine Collaboration Scenarios in Virtual Reality (Poster)
(2018)
A self-driving car operating at SAE automation level 3 or 4 can navigate through different traffic conditions without human input. When such a system reaches its operating limits, it emits a takeover request before shutting down, and this request is likely to generate a physical response in the driver. Our goal is to shed light on the stress perception of drivers in various scenarios. To this end, we carried out a preparatory feasibility study: two subjects drove an autonomous vehicle, and ECG signals were recorded during the ride and evaluated afterwards. Unfortunately, the stress reaction to takeover requests could not be investigated because the vehicle's autonomous driving mode functioned poorly; instead, we were able to investigate the reaction to autopilot misconduct that occurred without warning to the driver.
We present a pipeline for recognizing dynamic freehand gestures on mobile devices based on depth information from a single Time-of-Flight sensor. Hand gestures are recorded with a mobile 3D sensor, transformed frame by frame into an appropriate 3D descriptor and fed into a deep LSTM network for recognition. Being a recurrent neural model, the LSTM is uniquely suited to classifying explicitly time-dependent data such as hand gestures. For training and testing purposes, we create a small database of four hand gesture classes, each comprising 40 × 150 3D frames. We conduct experiments on execution speed on a mobile device, on generalization capability as a function of network topology, and on classification ability 'ahead of time', i.e., before the gesture is completed. Recognition rates are high (>95%) and maintainable in real time, as a single classification step requires less than 1 ms of computation time, making freehand gestures practical for mobile systems.
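The abstract does not specify the exact 3D descriptor or network topology, but the per-frame transformation step can be sketched as follows. As a placeholder descriptor (an assumption, not the paper's method), each point-cloud frame is reduced to its centroid plus its flattened 3×3 covariance, and the per-frame descriptors are stacked into a fixed-width sequence suitable for a recurrent classifier:

```python
import numpy as np

def frame_descriptor(points):
    """Map one 3D frame (N x 3 point cloud) to a fixed-length descriptor.

    Placeholder descriptor: centroid (3 values) plus the flattened 3x3
    covariance of the cloud (9 values), i.e. 12 values per frame.
    """
    centroid = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)  # 3x3 covariance across the cloud
    return np.concatenate([centroid, cov.ravel()])

def gesture_to_sequence(frames):
    """Stack per-frame descriptors into a (T, 12) sequence for an LSTM."""
    return np.stack([frame_descriptor(f) for f in frames])

# Example: a gesture of 150 frames, each a cloud of 500 ToF points
rng = np.random.default_rng(0)
gesture = [rng.normal(size=(500, 3)) for _ in range(150)]
seq = gesture_to_sequence(gesture)
print(seq.shape)  # (150, 12)
```

A sequence of this shape could then be fed to a deep recurrent model (e.g. `torch.nn.LSTM`), with the final hidden state classified into the four gesture classes; a compact descriptor like this keeps a single classification step cheap.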
In this contribution we present a novel approach to transform data from time-of-flight (ToF) sensors to be interpretable by Convolutional Neural Networks (CNNs). As ToF data tends to be overly noisy depending on various factors such as illumination, reflection coefficient and distance, the need for a robust algorithmic approach becomes evident. By spanning a three-dimensional grid of fixed size around each point cloud we are able to transform three-dimensional input to become processable by CNNs. This simple and effective neighborhood-preserving methodology demonstrates that CNNs are indeed able to extract the relevant information and learn a set of filters, enabling them to differentiate a complex set of ten different gestures obtained from 20 different individuals and containing 600,000 samples overall. Our 20-fold cross-validation shows the generalization performance of the network, achieving an accuracy of up to 98.5% on validation sets comprising 20,000 data samples. The real-time applicability of our system is demonstrated via an interactive validation on an infotainment system running with up to 40 fps on an iPad in the vehicle interior.
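The grid transformation described above can be sketched minimally: span a fixed-size 3D grid around the point cloud and count the points falling into each cell, yielding a volume a CNN can consume. The grid resolution (16³ here) is an assumption for illustration; the abstract only states that a fixed-size grid is spanned around each cloud.

```python
import numpy as np

def voxelize(points, grid=16):
    """Span a fixed-size 3D grid around an (N x 3) point cloud and count
    points per cell, yielding a (grid, grid, grid) occupancy volume.

    This preserves spatial neighborhoods, which is what lets standard
    convolutional filters operate on the otherwise unordered cloud.
    """
    lo = points.min(axis=0)
    hi = points.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)  # guard against a degenerate axis
    # Scale each coordinate into [0, grid-1] and truncate to cell indices
    idx = ((points - lo) / span * (grid - 1)).astype(int)
    volume = np.zeros((grid, grid, grid), dtype=np.float32)
    np.add.at(volume, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return volume

rng = np.random.default_rng(1)
cloud = rng.normal(size=(1000, 3))  # a synthetic ToF point cloud
vol = voxelize(cloud)
print(vol.shape, int(vol.sum()))  # (16, 16, 16) 1000
```

Volumes of this shape can be batched and passed to an ordinary 3D CNN (stacked `Conv3d` layers), since every cloud now maps to the same fixed input size regardless of its point count.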