Automotive user interfaces and automated vehicle technology pose numerous challenges in supporting the diverse facets of user needs. These range from inexperienced, thrill-seeking, young novice drivers to elderly drivers with a mostly opposite set of preferences and their natural limitations. To allow the (hedonic) quality of automotive user interfaces and automated driving technology (i.e., UX) to be assessed already during development, the proposed workshop is dedicated to the quest of finding objective, quantifiable criteria to describe future driving experiences. The workshop is intended for HCI, AutomotiveUI, and human factors researchers and practitioners as well as for designers and developers. In keeping with the conference's main theme “Interaktion – Verbindet – Alle” (roughly, “Interaction Connects Everyone”), this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) with a focus on the hedonic quality and design of user experience to enhance the feeling of safety in automated driving systems (ADS).
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and the levels of automation. This might be due to the fact that automated driving functions are misnamed (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground for testing new interaction concepts in an automated driving setting.
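As a plain illustration of the point about mixed automation levels, the following snippet simply maps the levels named in the text to the example functions mentioned; the mapping is a simplified summary for illustration, not a formal definition (see SAE J3016 for those).

```python
# Simplified, illustrative mapping of the automation levels named in the text
# to the example functions mentioned there (not a formal SAE J3016 definition).
AUTOMATION_EXAMPLES = {
    1: "lane keeping assistant",
    2: "traffic jam assist (driver supervises)",
    3: "traffic jam assist (conditional automation)",
    4: "valet parking",
}

for level, example in AUTOMATION_EXAMPLES.items():
    print(f"SAE Level {level}: {example}")
```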
E-Learning and openness in education are receiving ever-increasing attention in business as well as in academia. However, these practices have been introduced only to a small extent in public administrations. The study addresses this gap by presenting a literature review on Open Educational Resources (OER) and E-Learning in the public sector. The main goal of the article is to identify challenges to open E-Learning in public administrations. Experiences will be conceptualized as barriers which need to be considered when introducing open E-Learning systems and programs in administrations. The main outcome is a systematic review of lessons learned, presented as a contextualized Barrier Framework suitable for analyzing requirements when introducing E-Learning and OER in public administrations.
CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator's speech, gestures, and gaze, and to recognize objects. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may interact mechanically with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
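The sketch below illustrates, under assumptions of our own, how multimodal percepts of this kind could be mapped to simple hand-over behaviors; all class names, fields, and thresholds are invented for illustration and are not taken from the CoRA system.

```python
# Illustrative sketch only: multimodal percepts (speech, pointing gesture, gripper force)
# drive a simple behavior selection. Names and the force threshold are assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Percepts:
    spoken_command: Optional[str] = None   # e.g. recognized phrase from the audio channel
    pointed_object: Optional[str] = None   # object referenced by gesture/gaze
    gripper_force: float = 0.0             # force sensed at the gripper, in newtons


def select_behavior(p: Percepts) -> str:
    if p.gripper_force > 5.0:
        # operator is pulling the object out of the gripper -> release it
        return "release_object"
    if p.spoken_command == "give me" and p.pointed_object:
        return f"pick_and_hand_over:{p.pointed_object}"
    return "idle"


print(select_behavior(Percepts(spoken_command="give me", pointed_object="cup")))
```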
Given the success of convolutional neural networks (CNNs) in numerous object recognition tasks during recent years, it seems logical to further extend their applicability to three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability for automated feature generation and combine it with a novel 3D feature computation technique that preserves the local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via ToF (time-of-flight) sensors from 20 different persons, after an extensive parameter search to optimize the network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
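As an illustration of the leave-one-person-out evaluation scheme mentioned in the abstract, the following minimal sketch uses scikit-learn's LeaveOneGroupOut with random placeholder data and a simple stand-in classifier; the paper's CNN and 3D feature computation are not reproduced here.

```python
# Minimal sketch of a leave-one-person-out evaluation protocol.
# Data, labels, and the classifier are placeholders, not the paper's pipeline.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_samples, n_features, n_persons = 2000, 64, 20      # toy stand-ins for the real data set
X = rng.normal(size=(n_samples, n_features))         # placeholder features
y = rng.integers(0, 10, size=n_samples)              # placeholder hand-posture labels
persons = rng.integers(0, n_persons, size=n_samples) # which person each sample came from

logo = LeaveOneGroupOut()
errors = []
for train_idx, test_idx in logo.split(X, y, groups=persons):
    # train on all persons except one, test on the held-out person
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    errors.append(1.0 - clf.score(X[test_idx], y[test_idx]))

print(f"mean leave-one-person-out error: {np.mean(errors):.3f}")
```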
This contribution presents a novel approach to utilizing Time-of-Flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors are capable of providing depth data at high frame rates independently of illumination, making applications possible for both indoor and outdoor situations. This comes at the cost of precision in the depth measurements and a comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds which yields fixed-size input, making Deep Learning approaches with Convolutional Neural Networks applicable. To increase precision, we introduce several methods to reduce noise and normalize the input in order to overcome difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
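One plausible reading of the rasterization idea described above is a fixed-size occupancy grid, sketched below; the grid resolution, min-max normalization, and counting scheme are assumptions made for illustration and do not reproduce the paper's actual feature pipeline.

```python
# Sketch: rasterize a 3D point cloud into a fixed-size grid so it can be fed to a CNN.
# Grid size and the min-max normalization are illustrative assumptions.
import numpy as np


def rasterize(points: np.ndarray, grid: int = 16) -> np.ndarray:
    """Map an (N, 3) point cloud to a (grid, grid, grid) occupancy-count volume."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0                       # avoid division by zero for flat clouds
    norm = (points - mins) / spans                # scale-normalize into the unit cube
    idx = np.minimum((norm * grid).astype(int), grid - 1)
    volume = np.zeros((grid, grid, grid), dtype=np.float32)
    np.add.at(volume, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return volume


cloud = np.random.default_rng(1).normal(size=(500, 3))  # stand-in for a ToF hand point cloud
print(rasterize(cloud).shape)  # (16, 16, 16): fixed-size input regardless of point count
```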
To reduce the number of traffic accidents and to increase driver comfort, the idea of designing driver assistance systems has gained momentum in recent years. The principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map) in search of object-related information (positions, types of objects, etc.). The knowledge base comprises static and dynamic knowledge: a set of rules (e.g., traffic rules, physical laws) and additional information (e.g., GPS, lane information); it is used implicitly by the algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects it for contradictions. It is strongly connected to the behavior planning, using only the information needed for the current task. Within the scene interpretation, consistent representations (e.g., a bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the current task. The influence of behavior planning on the behavior of the guided vehicle is limited to advice, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off of using this architecture.
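To make the four-part architecture easier to follow, here is a structural sketch that only mirrors the data flow described above (object-related analysis, knowledge base, scene interpretation, behavior planning); all class names, fields, and thresholds are invented for illustration and are not the authors' implementation.

```python
# Structural sketch of the four-stage architecture described above.
# Data types, field names, and the distance threshold are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class DetectedObject:
    kind: str        # e.g. "vehicle", "lane_marking"
    position: tuple  # (x, y) in ego coordinates, metres


@dataclass
class KnowledgeBase:
    traffic_rules: list = field(default_factory=lambda: ["keep_right", "obey_speed_limit"])
    lane_info: dict = field(default_factory=dict)  # e.g. GPS / lane information


def object_related_analysis(sensor_frames: list) -> list:
    """Fuse preprocessed sensor data (vision, radar, ...) into object hypotheses."""
    return [DetectedObject(o["kind"], o["pos"]) for frame in sensor_frames for o in frame]


def scene_interpretation(objects: list, kb: KnowledgeBase) -> dict:
    """Build a consistent bird's-eye-view style representation for the current task."""
    return {"lead_vehicle_distance": min((o.position[0] for o in objects
                                          if o.kind == "vehicle"), default=None)}


def behavior_planning(scene: dict, task: str) -> str:
    """Derive an advice (no mechanical control) from the interpreted scene, e.g. for an ICC."""
    d = scene["lead_vehicle_distance"]
    return "reduce_speed" if task == "icc" and d is not None and d < 30.0 else "keep_speed"


frames = [[{"kind": "vehicle", "pos": (22.0, 0.5)}]]
scene = scene_interpretation(object_related_analysis(frames), KnowledgeBase())
print(behavior_planning(scene, "icc"))  # -> "reduce_speed"
```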
We present a novel approach to distributing matrix multiplications among GPU-equipped nodes in a cluster system. In this context we discuss the induced challenges and possible solutions. Additionally, we describe an algorithm that outperforms optimized GPU BLAS libraries for small matrices. Furthermore, we provide a novel theoretical model for distributing algorithms within homogeneous computation systems with multiple hierarchies. In the context of this model we develop an algorithm that can find the optimal distribution parameters for each involved subalgorithm. We provide a detailed analysis of the algorithm's space and time complexity and justify its use with a structured evaluation on a small GPU-equipped Beowulf cluster.
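As a minimal illustration of the block-wise distribution idea, the following single-process sketch splits the left matrix into row blocks, computes each partial product as a separate "node" would, and gathers the results; the paper's GPU kernels, communication scheme, and cost model are not reproduced.

```python
# Single-process sketch of a block-row distribution of C = A @ B across homogeneous
# "nodes". Illustrative only; not the paper's GPU/cluster implementation.
import numpy as np


def distributed_matmul(A: np.ndarray, B: np.ndarray, n_nodes: int = 4) -> np.ndarray:
    """Split A into row blocks, let each (simulated) node compute its block of C, then gather."""
    row_blocks = np.array_split(A, n_nodes, axis=0)        # scatter: one block per node
    partial_results = [block @ B for block in row_blocks]  # each would run on its own GPU/node
    return np.vstack(partial_results)                      # gather step


rng = np.random.default_rng(2)
A, B = rng.normal(size=(256, 128)), rng.normal(size=(128, 64))
assert np.allclose(distributed_matmul(A, B), A @ B)
print("block-row distributed result matches the reference product")
```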
A Large and Quick Induction Field Scanner for Examining the Interior of Extended Objects or Humans
(2017)
This study describes the techniques and signal properties of a large, powerful, linear-scanning 1.5 MHz induction field scanner. The mechanical system is capable of quickly reading the volume of relatively large objects, e.g., a test person. The general approach mirrors Magnetic Induction Tomography (MIT), but the details differ considerably from currently described MIT systems: the setup is asymmetrical, and it operates in gradiometric modalities, either with coaxial excitation with destructive interference or with a single excitation loop and tilted receivers. Following this approach, the primary signals were almost completely nulled, and the test objects' real or imaginary imprint was obtained directly. The coaxial gradiometer appeared advantageous: exposure to strong fields was reduced due to destructive interference. Meanwhile, the signals included enhanced components at higher spatial frequencies, thereby gradually improving the capability for localization. For robust signals, the excitation field can be powered towards the rated limits of human exposure to time-varying magnetic fields. Repeated measurements were used to assess signal integrity, which is affected by the scanner's imperfections and, in particular, by any motion or respiratory changes in living subjects during or between repeated scans. The overall figure of merit for artifacts currently achieved was 58 dB for inanimate test objects and 44 dB for a test person. Both numbers should be understood as worst-case levels: a repeated scan with intermediate breathing and drift/dislocations requires 50 seconds, whereas a single measurement (with respiratory arrest) takes only about 5 seconds.
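As an illustration of how a dB figure of merit for repeat-scan artifacts could be expressed, the sketch below takes the ratio of the signal's RMS to the RMS of the residual between two repeated scans; this particular definition is an assumption made for illustration, not the formula used in the study.

```python
# Sketch: one plausible way to express repeat-scan artifacts as a dB figure of merit,
# namely the ratio of the signal's RMS to the RMS of the residual between two repeated
# scans. The exact definition used in the study is not reproduced here (assumption).
import numpy as np


def artifact_figure_of_merit_db(scan_a: np.ndarray, scan_b: np.ndarray) -> float:
    signal_rms = np.sqrt(np.mean(scan_a ** 2))
    residual_rms = np.sqrt(np.mean((scan_a - scan_b) ** 2))
    return 20.0 * np.log10(signal_rms / residual_rms)


rng = np.random.default_rng(3)
scan1 = np.sin(np.linspace(0, 10, 1000))               # stand-in for a measured signal trace
scan2 = scan1 + 1e-3 * rng.normal(size=scan1.shape)    # repeated scan with a small artifact
print(f"{artifact_figure_of_merit_db(scan1, scan2):.1f} dB")
```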