000 General, Science
In recent years, hardware for producing and consuming virtual reality content has reached price levels that make it affordable for almost everyone. Accordingly, schools and universities are showing increased interest in applying virtual reality technologies to support their innovative educational activities. This paper therefore presents a flexible architecture that supports the development of virtual reality learning scenarios which can be conveniently deployed for educational purposes. We also suggest an example of such an educational scenario for medical purposes that can be deployed with the suggested architecture. In addition, we developed and used a questionnaire, answered by 17 medical students, in order to derive additional requirements for refining such scenarios. We then present these efforts while aiming at deployments that are also usable in additional domains. Finally, we summarize and mention aspects we will address in our upcoming work on deploying such activities.
Nowadays, teachers and students utilize different ICT devices for conducting innovative educational activities from anywhere at any time. The enactment of these activities relies on robust communication and computational infrastructures that support technological devices, enabling better accessibility to educational resources and pedagogical scaffolds wherever and whenever necessary. In this paper, we present EDU.Tube: an interactive environment that relies on web and mobile solutions offered to teachers and students for authoring and incorporating educational interactions at specific moments along the timeline of arbitrary YouTube video clips. Teachers and students can later experience these authored artefacts while interacting from their stationary or mobile devices. We describe our efforts related to the design, deployment and evaluation of an educational activity supported by the EDU.Tube environment. Furthermore, we illustrate the specific teachers' and students' efforts practiced along the different phases of this educational activity. The evaluation of this activity and its results are presented, followed by a discussion of these findings, as well as some recommendations for future research efforts further elaborating on EDU.Tube's aspects in relation to learning analytics.
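The core idea of anchoring interactions at specific moments on a video's timeline can be sketched as a small data structure. This is a hypothetical illustration; the field names, the lookup function, and the video identifier are assumptions, not EDU.Tube's actual schema.

```python
# Hypothetical sketch of a time-anchored interaction record, in the spirit of
# EDU.Tube; field names and structure are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TimedInteraction:
    video_id: str        # YouTube video identifier (placeholder value below)
    at_seconds: float    # position on the video timeline
    kind: str            # e.g. "question", "note"
    payload: str         # the authored content

def interactions_due(interactions, position, window=1.0):
    """Return interactions whose anchor lies within `window` seconds of playback position."""
    return [i for i in interactions if abs(i.at_seconds - position) <= window]

items = [
    TimedInteraction("abc123", 12.0, "question", "What happens next?"),
    TimedInteraction("abc123", 95.5, "note", "Key concept introduced here."),
]
print([i.kind for i in interactions_due(items, 12.4)])  # → ['question']
```

A player would poll `interactions_due` as playback advances and surface the matching artefacts to the viewer.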
We present a publicly available benchmark database for the problem of hand posture recognition from noisy depth data and fused RGB-D data obtained from low-cost time-of-flight (ToF) sensors. The database is the most extensive of its kind, containing over a million data samples (point clouds) recorded from 35 different individuals performing ten different static hand postures. This captures a great amount of person-related variance, while scaling, translation and rotation are also explicitly represented. Benchmark results achieved with a standard classification algorithm are computed by cross-validation both over samples and over persons, the latter implying training on all persons but one and testing on the remaining one. An important result obtained with this database is that cross-validation performance over samples (the standard procedure in machine learning) is systematically higher than cross-validation performance over persons, which is, to our mind, the true application-relevant measure of generalization performance.
We present a lightweight, real-time-capable 3D-gesture recognition system on mobile devices for improved human-machine interaction. We utilize time-of-flight data from a single sensor and implement the whole gesture recognition pipeline on two different devices, outlining the potential of integrating these sensors into mobile devices. The main components are responsible for cropping the data to the essentials, calculating meaningful features, training and classifying via neural networks, and realizing a GUI on the device. With our system we achieve recognition rates of up to 98% on a 10-gesture set at frame rates reaching 20 Hz, more than sufficient for real-time applications.
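The first two pipeline stages (cropping the data to the essentials, then computing features) can be sketched in a few lines. The depth thresholds and the particular statistics below are assumptions for illustration, not the system's actual parameters or features.

```python
# Illustrative sketch of the crop and feature stages of a depth-gesture
# pipeline; thresholds and features are assumptions, not the paper's.
import numpy as np

def crop_to_hand(depth_frame, near=0.2, far=0.6):
    """Keep only depth pixels in a plausible hand range (metres); zero the rest."""
    mask = (depth_frame > near) & (depth_frame < far)
    return depth_frame * mask

def extract_features(cropped):
    """A few cheap global statistics as a stand-in for real gesture features."""
    vals = cropped[cropped > 0]
    if vals.size == 0:
        return np.zeros(3)
    return np.array([vals.mean(), vals.std(), vals.size / cropped.size])

frame = np.random.default_rng(1).uniform(0.0, 1.0, size=(60, 80))
features = extract_features(crop_to_hand(frame))
print(features)
```

In the described system, a feature vector like this would then be fed to a neural network classifier, with the result shown in the on-device GUI.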
Touch versus mid-air gesture interfaces in road scenarios: measuring driver performance degradation
(2016)
We present a study aimed at comparing the degradation of the driver's performance during touch gesture versus mid-air gesture use for infotainment system control. To this end, 17 participants were asked to perform the Lane Change Test, which requires each participant to steer a vehicle in a simulated driving environment while interacting with an infotainment system via touch and mid-air gestures. The decrease in performance is measured as the deviation from an optimal baseline. The study finds comparable deviations from the baseline for the secondary task of infotainment interaction for both interaction variants. This is significant because all participants were experienced in touch interaction but had no experience at all with mid-air gesture interaction, which favors mid-air gestures for the long-term scenario.
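A deviation-from-baseline measure of the kind used here can be sketched as the mean absolute lateral offset between the driven path and a reference path. This is a hedged illustration of the general idea; the Lane Change Test's actual scoring procedure is more elaborate.

```python
# Sketch of a deviation-from-baseline measure in the spirit of the Lane
# Change Test: mean absolute lateral deviation from a reference path.
# The sample values are invented for illustration.
import numpy as np

def mean_lane_deviation(driven_y, baseline_y):
    """Mean absolute lateral deviation (same units as input, e.g. metres)."""
    driven_y = np.asarray(driven_y, dtype=float)
    baseline_y = np.asarray(baseline_y, dtype=float)
    return float(np.mean(np.abs(driven_y - baseline_y)))

baseline = [0.0, 0.0, 1.0, 1.0]   # ideal lateral positions along the track
driven   = [0.1, -0.1, 0.8, 1.2]  # driver's actual lateral positions
print(mean_lane_deviation(driven, baseline))  # → 0.15
```

Comparing this value between the touch and mid-air conditions, each against the same baseline, yields the per-variant degradation the study reports.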
Given the success of convolutional neural networks (CNNs) in numerous object recognition tasks during recent years, it seems logical to further extend their applicability to the treatment of three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability of automated feature generation and combine it with a novel 3D feature computation technique that preserves local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via ToF (time-of-flight) sensors from 20 different persons, after an extensive parameter search to optimize the network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
A simple copper coil without a voluminous stationary magnet can be utilized as a non-contacting transmitter and as a detector for ultrasonic vibrations in metals. Advantages of such compact EMATs without an (electro-)magnet might include applicability in critical environments (hot, narrow, presence of iron filings, …) and potentially superior fields, and thereby improved ultrasound transmission and more sensitive ultrasound detection.
The induction field of an EMAT strongly influences ultrasound transduction in the nearby metal. Herein, a simplified analytical method for describing the field at high liftoff is presented. Within certain limitations, this method reasonably describes the magnetic fields (and the resulting eddy currents, inductances, Lorentz forces and acoustic pressures) of even complex coil arrangements. The method can be adapted to conventional EMATs with a separate stationary magnet.
Increased distances (liftoff) are challenging and technically relevant, so this practical question is addressed: with limited electrical power and a given free space between transducer and target metal, what is the most efficient geometry for a circular coil? Furthermore, more complex coil geometries ("butterfly coil") with a concentrated field and relatively greater reach are briefly investigated.
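A simplified version of the coil-geometry question can be worked through with the textbook on-axis field of a single circular current loop: for fixed current, the field at liftoff z is maximized by a loop radius of R = √2·z. This sketch only illustrates that idealized trade-off; the paper's actual question, with limited electrical power and real windings, is more involved.

```python
# Sketch: for a single circular loop with fixed current, which radius R
# maximizes the on-axis field at liftoff z? (Textbook idealization only;
# the power-limited, multi-turn case treated in the paper is more complex.)
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (T*m/A)

def on_axis_field(R, z, I=1.0):
    """On-axis magnetic field of a circular current loop of radius R at distance z."""
    return MU0 * I * R**2 / (2.0 * (R**2 + z**2) ** 1.5)

z = 0.01  # 10 mm liftoff
radii = np.linspace(1e-4, 0.05, 5000)
best_R = radii[np.argmax(on_axis_field(radii, z))]
print(best_R, np.sqrt(2) * z)  # numerical optimum ≈ analytical sqrt(2)*z
```

Setting dB/dR = 0 for B ∝ R²/(R² + z²)^(3/2) gives 2(R² + z²) = 3R², i.e. R = √2·z, which the numerical search above confirms.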
In this paper we present an approach for contextual big data analytics in social networks, particularly Twitter. A Rich Context Model (RCM) is combined with machine learning in order to improve the quality of the data mining techniques. We propose the algorithm and architecture of our approach for real-time contextual analysis of tweets. The proposed approach can be used to enrich and empower predictive analytics or to provide relevant context-aware recommendations.
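The general idea of enriching a tweet with contextual features before mining can be sketched as follows. The field names and features below are illustrative assumptions, not the paper's Rich Context Model schema.

```python
# Minimal sketch of enriching a tweet with context features prior to mining;
# fields and features are illustrative, not the actual RCM schema.
from dataclasses import dataclass

@dataclass
class Tweet:
    text: str
    hour: int   # posting hour, 0-23
    geo: str    # coarse location tag

def enrich(tweet: Tweet) -> dict:
    """Attach simple contextual features alongside the raw text."""
    return {
        "text": tweet.text,
        "is_night": tweet.hour < 6 or tweet.hour >= 22,
        "geo": tweet.geo,
        "length": len(tweet.text),
    }

print(enrich(Tweet("Traffic jam on A40 again", hour=23, geo="Ruhr")))
```

Downstream classifiers or recommenders would then consume these enriched records instead of the bare tweet text, which is the quality gain the combination with the RCM aims at.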