MeHRWert Ausgabe 5 Juni 2014
(2014)
Electromagnetic acoustic transducers (EMATs) are intended as non-contact, non-destructive ultrasound transducers for metallic materials. The intensities transmitted by EMATs are modest, particularly at notable lift-off distances. Some time ago a concept for a "coil-only EMAT", without a static magnetic field, was presented. In this contribution, such compact "coil-only EMATs" with effective areas of 1–5 cm² were driven to very high power levels at MHz frequencies using pulsed-power technologies. RF induction currents of 10 kA and tens of megawatts are applied. With increasing power the electroacoustic conversion efficiency also increases; the total effect is of second order, i.e. quadratic, therefore non-linear and progressive, and yields strong ultrasound signals of up to kW/cm² at MHz frequencies in the metal. Even at considerable lift-off distances (centimetres) the ultrasound can be readily detected. Test materials are aluminum, ferromagnetic steel and (non-ferromagnetic) stainless steel, so that most metal types are represented. The technique is compared experimentally with other non-contact methods: laser-pulse-induced ultrasound and spark-induced ultrasound, both of which damage the test object's surface. At small lift-off distances, the intensity from this EMAT concept clearly outperforms the laser pulses or heavy spark impacts.
Efficient photoluminescence (PL) spectra from GaN and InGaN layers at temperatures up to 1100 K are observed with a low noise floor and high dynamic resolution. A number of detailed spectral features in the PL can be directly linked to physical properties of the epitaxially grown layer. The method is suggested as an in situ monitoring tool during epitaxy of nitride LED and laser structures. Monitoring layer properties such as thickness, band gap or film temperature distribution is feasible.
We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we are able to reliably classify a set of ten hand gestures in real time, and we describe the setup of the system, the utilised methods as well as possible application scenarios.
PROPRE is a generic and modular neural learning paradigm that autonomously extracts meaningful concepts from multimodal data flows, driven by predictability across modalities, in an unsupervised, incremental and online way. For this purpose, PROPRE combines projection and prediction. First, each data flow is topologically projected with a self-organizing map, largely inspired by the Kohonen model. Second, each projection is predicted from the activities of every other map by means of linear regressions. The main originality of PROPRE is the use of a simple and generic predictability measure that compares predicted and real activities for each modal stream. This measure drives the learning of the corresponding projection so as to favor, at the system level, the mapping of stimuli that are predictable across modalities (i.e. whose predictability measure exceeds some threshold). The predictability measure acts as a self-evaluation module that biases the representations extracted by the system so as to improve their correlations across modalities. We previously showed that this modulation mechanism can bootstrap representation extraction from previously learned representations with artificial multimodal data related to basic robotic behaviors [1], and that it improves the system's performance for the classification of visual data in a supervised learning context [2]. In this article, we improve the self-evaluation module of PROPRE by introducing a sliding threshold, and we apply it to the unsupervised classification of gestures captured by two time-of-flight (ToF) cameras. In this context, we show that the modulation mechanism remains useful, although less efficient than purely supervised learning.
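The projection-prediction loop described in this abstract can be illustrated with a minimal sketch for two modalities. All names are hypothetical and the update rules are deliberately simplified (one-hot map activities, a flat learning rate, a fixed rather than sliding threshold); the actual PROPRE model differs in detail.

```python
import numpy as np

def som_bmu(som, x):
    """Index of the best-matching unit of a self-organizing map for input x."""
    return int(np.argmin(np.linalg.norm(som - x, axis=1)))

def propre_step(som_a, som_b, W_ab, x_a, x_b, threshold, lr=0.1):
    """One simplified PROPRE-style update for two modal streams.

    som_a, som_b: (units, dim) weight matrices of the two maps (hypothetical).
    W_ab: linear regression predicting map B's activity from map A's activity.
    Returns the predictability measure for this stimulus pair.
    """
    # Projection: each data flow activates its map (one-hot activity here).
    act_a = np.zeros(len(som_a)); act_a[som_bmu(som_a, x_a)] = 1.0
    act_b = np.zeros(len(som_b)); act_b[som_bmu(som_b, x_b)] = 1.0

    # Prediction: map A's activity linearly predicts map B's activity.
    pred_b = W_ab @ act_a

    # Predictability measure: agreement between predicted and real activity.
    predictability = float(pred_b @ act_b)

    # Modulation: the projection learns only for predictable stimuli,
    # i.e. when the measure exceeds the threshold.
    if predictability > threshold:
        bmu = som_bmu(som_a, x_a)
        som_a[bmu] += lr * (x_a - som_a[bmu])

    # The linear predictor is always updated toward the real activity.
    W_ab += lr * np.outer(act_b - pred_b, act_a)
    return predictability
```

In the full model the threshold itself slides with the running statistics of the measure, which is the improvement this article introduces.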
To analyze the electric field around bipolar resectoscopes used in urology, with regard to possible causes of late complications after surgical treatment, a flexible multielectrode system was developed to measure the 3-D potential distribution. A high spatial resolution is achieved with the fewest possible individual measurements under quasi-static electric field conditions. The flexible arrangement and positioning of the measuring points in the vertical direction of the experimental environment enable an adjustable spatial resolution and the selection of the region of interest. The influence of the multielectrode system itself on the measurement results is described, and a correction method is presented to obtain significant results. The multielectrode system is thus usable for a comparative study of bipolar resectoscopes that differ in the arrangement of the resection and return electrodes.
In home environments, robot assistance systems with emotion-understanding abilities are currently realized in two different ways. The first is to implement such systems so that they offer general services to all considered persons, without taking into account the privacy and special needs of their interaction partners. The second is to target such systems at merely one person. In this work we present a robot assistance system that can assist several persons at the same time while sustaining their privacy and security. The robot can interact with its interaction partner emotionally by analyzing the partner's emotions as expressed either visually (facial expression) or auditorily (speech prosody). The role of this system is to provide person-specific support in the home environment. To identify its interaction partner, the system uses diverse biometric traits. According to the recognized ID, the system first adapts to the needs of the recognized person; second, it loads the corresponding emotional profile of the detected interaction partner in order to carry out a person-specific emotional human-robot interaction, which has an advantage over person-independent interaction.
We present a study on 3D-based hand pose recognition using a new generation of low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. We investigate the performance of different 3D descriptors, as well as the fusion of two ToF sensor streams. By basing a data fusion strategy on the fact that multilayer perceptrons can produce normalized confidences individually for each class, and by designing information-theoretic online measures for assessing the confidence of decisions, we show that appropriately chosen fusion strategies can improve overall performance to a very satisfactory level. Real-time capability is retained, as the 3D descriptors, the fusion strategy and the online confidence measures are all computationally efficient.
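The fusion idea in this abstract, combining normalized per-class confidences from two sensor streams, weighted by an information-theoretic confidence measure, can be sketched as follows. The entropy-based measure and the sum-rule weighting are illustrative assumptions; the paper's exact measures and fusion rule may differ.

```python
import numpy as np

def entropy_confidence(probs):
    """Information-theoretic confidence: 1 minus normalized entropy.

    A peaked class distribution yields a value near 1, a flat (uncertain)
    one a value near 0. (Illustrative definition.)
    """
    p = np.clip(probs, 1e-12, 1.0)
    h = -np.sum(p * np.log(p))
    return 1.0 - h / np.log(len(p))

def fuse(probs_1, probs_2):
    """Fuse two normalized per-class confidence vectors (e.g. MLP outputs
    for the two ToF streams), weighting each stream by its entropy-based
    confidence; a simple weighted sum rule, renormalized."""
    w1, w2 = entropy_confidence(probs_1), entropy_confidence(probs_2)
    fused = w1 * probs_1 + w2 * probs_2
    return fused / fused.sum()
```

With this weighting, a stream that is unsure about the current frame contributes less to the fused decision, which is one plausible way such a fusion strategy improves overall performance.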
This paper presents a web-based framework that allows the creation and deployment of mobile learning activities. We present an authoring tool that allows non-technically skilled persons to design mobile learning tasks and deploy them as a web-based mobile application. Since the presented approach is based exclusively on web technologies, the deployed mobile application can be executed in a mobile browser and is therefore platform independent. Despite previous research efforts carried out in this domain, few projects have addressed this process from a purely web-based perspective. Thanks to the latest developments in web technologies, mobile applications have access to internal sensors such as camera, microphone and GPS, and therefore allow data collection within web applications. In order to validate whether the proposed framework can be applied in educational settings, we conducted a pilot study with experienced teachers and present the results of these efforts in this paper.
In this paper, we describe an efficient method for fast people re-identification based on models of human clothes. An initial model is estimated during people detection and tracking and is then refined during re-identification. This stepwise extraction, combination and comparison of features speeds up the whole re-identification. For the refinement, several saliency maps are used to extract individual features. These individual features are located separately for each human body part. The body parts are located with an optimized GPU-based HOG detector. Furthermore, we introduce a mean-shift-based fusion concept which utilizes multiple detectors in order to increase detection reliability.
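The mean-shift-based fusion of multiple detectors mentioned above can be sketched on pooled detection centers: detections from all detectors that converge to the same mode are merged into one fused detection. The flat kernel, the 2-D center representation and the merge radius are assumptions for illustration; the paper's features and kernel may differ.

```python
import numpy as np

def mean_shift_fuse(detections, bandwidth=30.0, iters=20):
    """Fuse overlapping detections from multiple detectors via mean shift.

    detections: (n, 2) array of detection centers (pixels) pooled from all
    detectors. Each point is shifted to the mean of its neighbors within
    the bandwidth (flat kernel); points converging to the same mode are
    merged into a single fused detection.
    """
    pts = np.asarray(detections, dtype=float)
    modes = pts.copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            near = pts[np.linalg.norm(pts - m, axis=1) < bandwidth]
            modes[i] = near.mean(axis=0)
    # Merge modes that ended up closer than half the bandwidth.
    fused = []
    for m in modes:
        if not any(np.linalg.norm(m - f) < bandwidth / 2 for f in fused):
            fused.append(m)
    return np.array(fused)
```

Because detections from independent detectors reinforce each other at the same mode, spurious single-detector responses are less likely to survive the fusion, which is the reliability gain the abstract points to.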
In recent years, teachers have started to conduct pedagogical activities to promote different kinds of learning interactions supported by rich media. The deployment of such activities is rapidly increasing, as teachers and students own technological means that support them along such interactions. These activities can be carried out in traditional classroom settings using regular computers; additionally, they can be conducted from anywhere at any time using smartphones and tablets. In this paper, we describe a pedagogical activity requiring students to author and later peer-assess learning interactions incorporated into YouTube videos. We describe EDU.Tube, an environment that enables them to create, share and consume such rich-media learning activities across a variety of devices. We then detail a plan for the implementation of an activity that took place in 3 different classes dealing with diverse materials addressing computer-science-related topics. Finally, we provide an evaluation presenting students' insights and feedback resulting from the experienced activity. We discuss and analyze these outcomes in order to derive concerns that could inform the further deployment of the EDU.Tube environment.
The goal of the joint project APFel (project duration: 01.01.2010 – 31.03.2014) was to enable the temporally forward- and backward-directed localization of persons within a network of non-overlapping cameras in hyper-real-time (faster than real time). Application areas for this scenario are critical infrastructures such as airports and airfields. Initially, the APFel project focused on the localization of a single target person. Subsequently, the developed methods were extended to the analysis of groups, in order to localize persons as part of a group.
Mobile devices are nowadays used almost ubiquitously by a large number of users. 2013 was the first year in which the number of mobile devices sold (tablet computers and mobile phones) exceeded the number of PCs sold, and this trend seems set to continue in the coming years. Additionally, the scenarios in which these kinds of devices are used grow almost day by day. Another trend in modern IT landscapes is Cloud Computing, which basically allows for a very flexible provision of computational services to customers. Yet these two trends are not well connected. Of course, there already exists quite a large number of mobile applications (apps) that utilize Cloud-Computing-based services. The other way round, however, namely that mobile devices provide one of the building blocks for Cloud-Computing-based services, is not well established yet. Therefore, this paper concentrates on extending a technology that allows standardized Web Services, one of the building blocks of Cloud Computing, to be provided on mobile devices. The extension consists of a new approach that now also allows asynchronous Web Services, in contrast to synchronous ones, to be provided on mobile devices. Additionally, this paper illustrates how the described technology has already been used in an app provided by a business partner.
This paper describes the design and development stages of a web-based framework aiming to support the creation of mobile applications within the context of mobile learning. The suggested approach offers the opportunity to deploy and execute these applications on mobile devices. This web-based solution additionally offers the possibility to visualize the data collected by the mobile applications in a web browser. Despite previous research efforts carried out in this domain, few projects have addressed these processes from a purely web-based perspective. Currently, a prototype of an authoring tool for creating mobile data collection applications is already implemented. In order to integrate and validate this solution in everyday educational settings, we are collaborating with a network of high schools. On the basis of workshops that we will carry out with teachers, refinements and requirements for further enhancements will be collected and used to guide our coming efforts.
With the introduction of Apple's iPhone, gesture control became popular and was perceived as an intuitive means of interaction. Contactless gestures received broad attention with the Xbox Kinect. Current technology is limited to a small number of uses, mainly in entertainment systems. The target of this project is to increase the range of possible applications, e.g. to the automotive field, industrial applications (manufacturing plants), assisted living in contexts ranging from private households to hospitals (interaction for people with disabilities), and many more.
Recommender systems have become an important application domain in the development of personalized mobile services, and various recommender mechanisms have been developed for filtering and delivering relevant information to mobile users. This paper presents a rich context model for matching news content to the current context of mobile users. The proposed rich context model not only provides news relevant to the user's current context but at the same time determines a representation format of the news suitable for mobile devices.
With a rapidly ageing population, it is increasingly important to develop devices for elderly and disabled people that can support and aid them in their daily lives, helping them to live at home as long as possible. The goal of this project is to implement a human-machine interaction and assistance system that can offer personalised health support for elderly people, or for those who have special needs in the home environment.