The astronomy domain provides rich opportunities for learning about natural phenomena. It can engage and motivate a variety of mathematical and physical knowledge and skills. However, it is difficult to connect astronomical observations to modelling and calculation tools and to embed them in educational scenarios. It is precisely this challenge that this paper addresses. Concretely, we build on an existing collaborative modelling framework (Cool Modes) and extend it with specific representations to support learning activities in astronomy. A first field test has been conducted with these extensions.
Until now, virtual servers at LDS NRW have been deployed with a view to consolidating simple and very simple systems that did not require dedicated server hardware. Meanwhile, VMware offers functionality that, beyond the consolidation idea, opens up highly interesting possibilities for a wide range of individual customer requirements. These range from flexible, inexpensive, and simple systems up to server platforms with high demands on performance and availability.
This contribution demonstrates the efficient embedding of a single depth camera into the automotive environment, making mid-air gesture interaction for mobile applications viable in such a scenario. In this setting, a new human-machine interface is implemented to give an idea of future improvements in automation processes in industrial applications. Our system follows a data-driven approach, learning hand poses as well as gestures from a large database in order to apply them on mobile devices. We register any movement in a nearby driver area and crop the data efficiently; by means of PCA, we transform it into so-called feature vectors, which form the input to our multi-layer perceptrons (MLPs). After MLP classification, the interpreted user input is sent via WiFi to a tablet PC mounted in the car interior, which visualizes an infotainment system the user can interact with. We demonstrate that with this setup, hand gestures as well as hand poses are interpreted easily, efficiently, and in real time, making them an intuitive and supplementary means of interaction for automotive HMI in mobile scenarios.
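The recognition pipeline described above (PCA feature extraction feeding an MLP classifier) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation; the class labels, dimensions, and hyperparameters are assumptions.

```python
# Hypothetical sketch of the described pipeline: cropped depth data is
# compressed by PCA into short feature vectors, which a multi-layer
# perceptron (MLP) then classifies into gesture classes.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Synthetic stand-in for cropped depth frames: two gesture classes,
# 200 samples each, 64-dimensional raw vectors.
X0 = rng.normal(0.0, 1.0, size=(200, 64))
X1 = rng.normal(3.0, 1.0, size=(200, 64))
X = np.vstack([X0, X1])
y = np.array([0] * 200 + [1] * 200)  # 0 = "swipe", 1 = "point" (illustrative)

# PCA reduces each frame to an 8-dimensional feature vector; the MLP
# maps feature vectors to gesture classes.
clf = make_pipeline(
    PCA(n_components=8),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
)
clf.fit(X, y)
print(clf.score(X, y))  # training accuracy on the synthetic data
```

In a real deployment, the classifier's output would be streamed to the in-car tablet, as described in the abstract.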
In recent years, the number of reasonably powerful mobile devices has increased; in 2011, for example, smartphone shipments exceeded 300 million units. A lot of research has already been conducted on mobile devices acting as Cloud Service consumers, but still little effort has been devoted to mobile devices in the role of Cloud Service providers. Therefore, this paper presents an approach that allows mobile devices such as smartphones or tablets to be utilized as Cloud Service providers. To make this a reasonable approach, some of the problems that arise are discussed, and it is shown how the presented architecture is able to overcome them. Last but not least, this paper describes some performance tests of the chosen implementation for mobile Web Services.
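The core idea of a device acting as a service provider can be illustrated with a tiny self-hosted Web Service. This is only a conceptual sketch of a device-hosted HTTP endpoint; the endpoint name and payload are hypothetical and not taken from the paper.

```python
# Minimal sketch of a device-hosted Web Service: the "provider" exposes
# a small JSON endpoint over HTTP that a consumer can query.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET with a small JSON status document.
        body = json.dumps({"service": "mobile-provider", "ok": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Port 0 lets the OS pick a free port, as a mobile device might.
server = HTTPServer(("127.0.0.1", 0), ServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A "consumer" queries the device-hosted service.
url = f"http://127.0.0.1:{server.server_port}/status"
with urllib.request.urlopen(url) as resp:
    reply = json.loads(resp.read())
print(reply["ok"])
server.shutdown()
```

The challenges the paper discusses (intermittent connectivity, NAT traversal, battery constraints) are exactly what such a naive sketch omits.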
CORA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator's speech, gestures, and gaze, and to recognize objects. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
We extend the attractor dynamics approach to generate goal-directed movement of a redundant, anthropomorphic arm while avoiding dynamic obstacles and respecting joint limits. To make the robot's movements human-like, we generate approximately straight-line trajectories by using two heading direction angles of the tool-point, quite analogously to how movement is represented in the primate central nervous system. Two additional angles control the tool's spatial orientation so that it follows the tool-point's collision-free path. A fifth equation governs the redundancy angle, which controls the elevation of the elbow so as to avoid obstacles and respect joint limits. These variables make it possible to generate movement while sitting in an attractor (or, in the language of the potential field approach, in a minimum). We demonstrate the approach on an assistant robot, which interacts with human users in a shared workspace.
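The attractor-dynamics idea behind these equations can be sketched numerically for a single heading direction angle: the target direction acts as an attractor, while an obstacle direction contributes a range-limited repulsive term. The parameter values and the Gaussian form of the repulsion below are illustrative assumptions, not the paper's equations.

```python
# Toy one-dimensional attractor dynamics for a heading direction angle phi:
# an attractive term pulls phi toward the target direction, a repulsive
# term pushes it away from the obstacle direction. Euler integration.
import math

def step(phi, phi_target, phi_obstacle, dt=0.01,
         lam_tar=5.0, lam_obs=2.0, sigma=0.4):
    # Attractive contribution: stable fixed point at phi = phi_target.
    f_tar = -lam_tar * math.sin(phi - phi_target)
    # Repulsive contribution: strongest near the obstacle direction,
    # decaying with angular distance (range parameter sigma).
    d = phi - phi_obstacle
    f_obs = lam_obs * d * math.exp(-d * d / (2 * sigma * sigma))
    return phi + dt * (f_tar + f_obs)

phi = 0.0
for _ in range(2000):
    phi = step(phi, phi_target=1.0, phi_obstacle=0.5)
print(round(phi, 3))  # settles close to the target direction
```

Because the system relaxes into the attractor rather than replanning, the movement stays smooth even when the obstacle shifts the fixed point slightly away from the nominal target.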
The presented work formulates a framework in which early prediction of drivers' lane change behavior is realized. We aim to build a representation of drivers' lane change behavior in order to recognize and predict driver intentions as a first step towards a realistic driver model. In the test bed of the Institute of Neuroinformatik, based on the traffic simulator NISYS TRS 1, 10 individuals drove in the experiments and performed more than 150 lane change maneuvers. Lane offset, distance to the front car, and time to contact were recorded. The acquired data was used to train, in parallel, a recurrent neural network, a feed-forward neural network, and a set of support vector machines. In the subsequent test drives, the system was able to predict lane changes 1.5 s before they occurred. The proposed approach describes a framework for lane-change detection and prediction, which will serve as a prerequisite for a successful driver model.
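One of the three classifiers trained in parallel, the support vector machine, can be sketched on the three recorded signals. The synthetic data below (typical "keep lane" versus "change lane" value ranges) is an assumption for illustration; it is not the experimental data from the simulator drives.

```python
# Illustrative sketch (not the authors' code): a support vector machine
# classifying lane-change intention from the three recorded signals --
# lane offset, distance to the front car, and time to contact.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n = 300

# "Keep lane": small lane offset, comfortable distance and time to contact.
keep = np.column_stack([rng.normal(0.0, 0.1, n),    # lane offset [m]
                        rng.normal(40.0, 10.0, n),  # distance [m]
                        rng.normal(6.0, 1.0, n)])   # time to contact [s]
# "Change lane": drifting toward the lane marking, short time to contact.
change = np.column_stack([rng.normal(0.6, 0.15, n),
                          rng.normal(15.0, 5.0, n),
                          rng.normal(2.0, 0.8, n)])

X = np.vstack([keep, change])
y = np.array([0] * n + [1] * n)  # 0 = keep lane, 1 = change lane

clf = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(clf.score(X, y))  # separability of the synthetic classes
```

For early prediction, the same feature vector would be evaluated on a sliding window so that a maneuver is flagged before the lane marking is actually crossed.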