Checking wind turbines for damage is a recurring task for operators of wind farms, as regular inspections are legally required in many countries and prevention is economically worthwhile. While some common forms of damage are easily visible on the surface, structural problems can remain invisible for years before they eventually result in catastrophic failure of a rotor blade. Established methods for testing fibre-composite parts, such as ultrasonic or X-ray testing, are impractical here due to the large dimensions of wind turbine components and their limited accessibility for short-range methods. Active thermographic inspection of wind turbines is a promising approach to testing for structural flaws beneath the surface of rotor blades. As part of an ongoing research project, a setup for testing the general viability of this method was built and used to compare different thermographic cameras. A sample cut from a discarded rotor blade was modified to emulate structural damage. The results are promising for the development of a cost-effective on-site testing system.
Applying step heating thermography to wind turbine rotor blades as a non-destructive testing method
(2017)
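The abstract does not detail how the recorded thermograms are evaluated. A common way to analyse step-heating data is to compare each pixel's cooling behaviour against a sound reference region, since subsurface flaws alter the local heat flow; the following is a minimal sketch under that assumption, with all names, parameters, and thresholds illustrative rather than taken from the paper.

#include <cmath>
#include <cstddef>
#include <vector>

// One thermogram sequence: frames[t][p] holds the surface temperature of
// pixel p at frame t, in kelvin.
using Sequence = std::vector<std::vector<float>>;

// Flags pixels whose cooling deviates from a reference (sound) pixel by more
// than `threshold` kelvin, accumulated over all frames after the heating
// pulse ends. The evaluation criterion is an assumption for illustration.
std::vector<bool> flag_anomalies(const Sequence& frames, std::size_t ref_pixel,
                                 std::size_t pulse_end_frame, float threshold) {
    const std::size_t n_pixels = frames.front().size();
    std::vector<float> contrast(n_pixels, 0.0f);
    for (std::size_t t = pulse_end_frame; t < frames.size(); ++t)
        for (std::size_t p = 0; p < n_pixels; ++p)
            contrast[p] += std::fabs(frames[t][p] - frames[t][ref_pixel]);

    std::vector<bool> flagged(n_pixels);
    for (std::size_t p = 0; p < n_pixels; ++p)
        flagged[p] = contrast[p] > threshold;
    return flagged;
}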
Practical application of object detection systems, in research or industry, favors highly optimized black-box solutions. We show how such a highly optimized system can be further augmented in terms of its reliability with only a minimal increase in computation time, i.e. preserving real-time boundaries. Our solution leaves the initial (HOG-based) detector unchanged and introduces novel concepts of non-linear metrics and fusion of ROIs. In this context we also introduce a novel way of combining feature vectors for mean-shift grouping. We evaluate our approach on a standardized image database with a HOG detector, which is representative for practical applications. Our results show that the number of false-positive detections can be reduced by a factor of 4 with a negligible complexity increase. Although introduced and applied to a HOG-based system, our approach can easily be adapted for different detectors.
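The abstract does not specify the paper's non-linear metric or its fusion rule. As an illustration of the general idea of fusing overlapping detector ROIs, the sketch below uses plain intersection-over-union with greedy non-maximum suppression, which is a standard grouping criterion and not the authors' method.

#include <algorithm>
#include <vector>

struct Roi { float x, y, w, h, score; };

// Intersection-over-union of two axis-aligned boxes.
float iou(const Roi& a, const Roi& b) {
    float x1 = std::max(a.x, b.x), y1 = std::max(a.y, b.y);
    float x2 = std::min(a.x + a.w, b.x + b.w);
    float y2 = std::min(a.y + a.h, b.y + b.h);
    float inter = std::max(0.0f, x2 - x1) * std::max(0.0f, y2 - y1);
    float uni = a.w * a.h + b.w * b.h - inter;
    return uni > 0.0f ? inter / uni : 0.0f;
}

// Greedy fusion: keep the highest-scoring ROI of each overlapping group and
// drop the rest (standard non-maximum suppression).
std::vector<Roi> fuse(std::vector<Roi> rois, float iou_threshold = 0.5f) {
    std::sort(rois.begin(), rois.end(),
              [](const Roi& a, const Roi& b) { return a.score > b.score; });
    std::vector<Roi> kept;
    for (const Roi& r : rois) {
        bool overlaps = false;
        for (const Roi& k : kept)
            if (iou(r, k) > iou_threshold) { overlaps = true; break; }
        if (!overlaps) kept.push_back(r);
    }
    return kept;
}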
The CameraFramework was developed to act, via socket communication [1], as middleware between camera instances with their own camera drivers and clients. Over this communication path, clients can be supplied with camera data not only locally but also across the network. To use new cameras with the framework, their implementation must follow certain rules, which is almost completely enforced by a prescribed base interface (an abstract base class in C++ [2]). New cameras are loaded at runtime via dynamic libraries. Camera parameters are configured through an XML file [3]. Functions for handing over new camera data are implemented and must be called by the developer of each camera interface.
The framework handles the assignment of camera data to the matching user. Each client receives its own configurable ring buffer [4] so that it is independent of other users and cameras. The tasks of the framework are split into several modules, as shown in Figure 1.
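A minimal sketch of what the prescribed abstract base class in C++ [2] could look like. All method names, the frame callback, and the exported factory symbol for dynamic loading are assumptions for illustration; the text describes the interface only in outline.

#include <cstdint>
#include <functional>
#include <string>
#include <utility>
#include <vector>

struct Frame {
    std::vector<uint8_t> data;  // raw image bytes
    int width = 0, height = 0;
};

class CameraBase {
public:
    virtual ~CameraBase() = default;

    // Camera parameters are read from an XML file [3]; the concrete
    // driver interprets them.
    virtual bool configure(const std::string& xml_path) = 0;
    virtual bool start() = 0;
    virtual void stop() = 0;

    // The framework registers this callback to receive new camera data.
    void setFrameCallback(std::function<void(const Frame&)> cb) {
        on_frame_ = std::move(cb);
    }

protected:
    // The developer of a camera interface must call this whenever new
    // data arrive; the framework then routes the frame into each
    // client's ring buffer [4].
    void publish(const Frame& f) { if (on_frame_) on_frame_(f); }

private:
    std::function<void(const Frame&)> on_frame_;
};

// Exported factory symbol so the framework can instantiate a driver loaded
// from a dynamic library at runtime (symbol name is an assumption).
extern "C" CameraBase* createCamera();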
We present a pipeline for recognizing dynamic freehand gestures on mobile devices based on depth information from a single Time-of-Flight sensor. Hand gestures are recorded with a mobile 3D sensor, transformed frame by frame into an appropriate 3D descriptor, and fed into a deep LSTM network for recognition. As a recurrent neural model, the LSTM is well suited to classifying explicitly time-dependent data such as hand gestures. For training and testing purposes, we create a small database of four hand gesture classes, each comprising 40 × 150 3D frames. We conduct experiments concerning execution speed on a mobile device, generalization capability as a function of network topology, and classification ability 'ahead of time', i.e., when the gesture is not yet completed. Recognition rates are high (>95%) and maintainable in real time, as a single classification step requires less than 1 ms of computation time, making freehand gestures viable for mobile systems.
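The abstract does not give the network's topology. As an illustration of the recurrence that makes LSTMs suited to explicitly time-dependent data, here is a minimal single-cell forward step; the weight layout and dimensions are illustrative assumptions, not the paper's model.

#include <cmath>
#include <vector>

struct LstmCell {
    int in_dim, hid_dim;
    // Concatenated weights for the four gates (input, forget, cell, output),
    // row-major: 4 * hid_dim rows of (in_dim + hid_dim) columns.
    std::vector<float> W;
    std::vector<float> b;  // 4 * hid_dim biases

    static float sigmoid(float x) { return 1.0f / (1.0f + std::exp(-x)); }

    // One time step: consumes the per-frame 3D descriptor x and updates
    // the hidden state h and cell state c in place.
    void step(const std::vector<float>& x,
              std::vector<float>& h, std::vector<float>& c) const {
        const int z = in_dim + hid_dim;
        std::vector<float> gates(4 * hid_dim);
        for (int g = 0; g < 4 * hid_dim; ++g) {
            float acc = b[g];
            for (int j = 0; j < in_dim; ++j)  acc += W[g * z + j] * x[j];
            for (int j = 0; j < hid_dim; ++j) acc += W[g * z + in_dim + j] * h[j];
            gates[g] = acc;
        }
        for (int k = 0; k < hid_dim; ++k) {
            float i  = sigmoid(gates[k]);                    // input gate
            float f  = sigmoid(gates[hid_dim + k]);          // forget gate
            float gc = std::tanh(gates[2 * hid_dim + k]);    // cell candidate
            float o  = sigmoid(gates[3 * hid_dim + k]);      // output gate
            c[k] = f * c[k] + i * gc;
            h[k] = o * std::tanh(c[k]);
        }
    }
};

Classifying a gesture then amounts to feeding its frame-wise descriptors through step() in sequence and applying a classifier (e.g. a softmax layer, not shown) to the final hidden state.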
In this contribution we present a novel approach to transform data from time-of-flight (ToF) sensors to be interpretable by Convolutional Neural Networks (CNNs). As ToF data tends to be highly noisy depending on factors such as illumination, reflection coefficient, and distance, the need for a robust algorithmic approach becomes evident. By spanning a three-dimensional grid of fixed size around each point cloud, we are able to transform three-dimensional input to become processable by CNNs. This simple and effective neighborhood-preserving methodology demonstrates that CNNs are indeed able to extract the relevant information and learn a set of filters, enabling them to differentiate a complex set of ten different gestures obtained from 20 different individuals and comprising 600,000 samples overall. Our 20-fold cross-validation shows the generalization performance of the network, achieving an accuracy of up to 98.5% on validation sets comprising 20,000 data samples. The real-time applicability of our system is demonstrated via an interactive validation on an infotainment system running at up to 40 fps on an iPad in the vehicle interior.
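A minimal sketch of the neighborhood-preserving transform described above: a fixed-size occupancy grid spanned around a ToF point cloud so that a CNN can consume it. The grid resolution, metric extent, and centering on the centroid are illustrative assumptions.

#include <array>
#include <cstddef>
#include <vector>

struct Point { float x, y, z; };

constexpr int N = 32;             // voxels per axis (assumed)
using Grid = std::array<float, N * N * N>;

// Builds an N x N x N occupancy grid of fixed metric size centered on the
// point cloud's centroid; points falling outside the grid are discarded.
Grid voxelize(const std::vector<Point>& cloud, float extent /* metres per side */) {
    Grid grid{};                  // zero-initialized occupancy volume
    if (cloud.empty()) return grid;

    // Centre the grid on the cloud's centroid.
    float cx = 0, cy = 0, cz = 0;
    for (const Point& p : cloud) { cx += p.x; cy += p.y; cz += p.z; }
    cx /= cloud.size(); cy /= cloud.size(); cz /= cloud.size();

    const float half = extent / 2.0f;
    const float scale = N / extent;  // voxels per metre
    for (const Point& p : cloud) {
        int ix = static_cast<int>((p.x - cx + half) * scale);
        int iy = static_cast<int>((p.y - cy + half) * scale);
        int iz = static_cast<int>((p.z - cz + half) * scale);
        if (ix < 0 || ix >= N || iy < 0 || iy >= N || iz < 0 || iz >= N) continue;
        grid[(ix * N + iy) * N + iz] = 1.0f;  // mark voxel as occupied
    }
    return grid;
}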
Autonomous driving is one of the future visions that many vehicle manufacturers are pursuing intensively. Today it is already partially supported by high-end vehicles, but a completely autonomous journey, while clearly the goal, is still not available in cars for public road traffic. Automatic lane-keeping assistants, speed regulators, as well as traffic-sign and obstacle detection are components of, or precursors to, fully autonomous driving.
The American vehicle manufacturer Tesla is known not only for its electric drive but also for its intensive work on autonomous driving. Tesla is, moreover, the only vehicle manufacturer that uses its customers as so-called beta testers for its assistance systems. This paper documents and describes the progress and functionality of the currently available Model S in the field of assistance systems and autonomous driving. It shows how well the test vehicle manages scenarios in normal road traffic with its assistance systems, e.g. the lane-keeping assistant, speed control, and the lane-change and distance assistants, and which scenarios the vehicle cannot manage on its own.