Automotive user interfaces and automated vehicle technology pose numerous challenges in supporting the diverse facets of user needs. These range from inexperienced, thrill-seeking young novice drivers to elderly drivers with a largely opposite set of preferences together with their natural limitations. To allow the (hedonic) quality of automotive user interfaces and automated driving technology (i.e., UX) to be assessed already during development, the proposed workshop is dedicated to the quest of finding objective, quantifiable criteria to describe future driving experiences. The workshop is intended for HCI, AutomotiveUI, and human factors researchers and practitioners as well as for designers and developers. In adherence to the conference's main topic, “Interaktion – Verbindet – Alle”, this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) with a focus on the hedonic quality and design of user experience to enhance the feeling of safety in automated driving systems (ADS).
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and the levels of automation. This might be due to the fact that automated driving functions are often misleadingly named (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground for testing new interaction concepts in an automated driving setting.
E-Learning and openness in education are receiving ever increasing attention in businesses as well as in academia. However, these practices have so far been introduced in public administrations only to a small extent. This study addresses the gap by presenting a literature review on Open Educational Resources (OER) and E-Learning in the public sector. The main goal of the article is to identify challenges to open E-Learning in public administrations. Reported experiences are conceptualized as barriers which need to be considered when introducing open E-Learning systems and programs in administrations. The main outcome is a systematic review of lessons learned, presented as a contextualized Barrier Framework suitable for analyzing requirements when introducing E-Learning and OER in public administrations.
CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator's speech, gestures, and gaze, and to recognize objects. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
Given the success of convolutional neural networks (CNNs) during recent years in numerous object recognition tasks, it seems logical to further extend their applicability to the treatment of three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability of automated feature generation and combine it with a novel 3D feature computation technique, preserving local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via time-of-flight (ToF) sensors from 20 different persons, after an extensive parameter search to optimize the network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
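The leave-one-person-out scheme mentioned above can be sketched as follows. This is a minimal illustration, not the authors' code; the function name and the use of NumPy index arrays are my own choices.

```python
import numpy as np

def leave_one_person_out(person_ids):
    """Yield (train_idx, test_idx) pairs, holding out all samples of one
    person at a time, so the test person is never seen during training."""
    person_ids = np.asarray(person_ids)
    for person in np.unique(person_ids):
        test_mask = person_ids == person
        yield np.flatnonzero(~test_mask), np.flatnonzero(test_mask)
```

Averaging the per-person test errors then gives the person-independent generalization estimate reported in the abstract.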
This contribution presents a novel approach to mid-air hand gesture recognition on mobile devices using Time-of-Flight (ToF) technology. ToF sensors are capable of providing depth data at high frame rates independently of illumination, making applications possible in both indoor and outdoor situations. This comes at the cost of precision in the depth measurements and a comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds which yields fixed-size input, making Deep Learning approaches with Convolutional Neural Networks applicable. To increase precision, we introduce several methods to reduce noise and normalize the input, overcoming difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real-time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
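The core idea of rasterizing a variable-sized point cloud into a fixed-size input can be sketched as below. The grid size, the unit-cube normalization, and the occupancy-histogram representation are illustrative assumptions, not the exact feature computation of the paper.

```python
import numpy as np

def rasterize(points, grid=(8, 8, 8)):
    """Turn a 3D point cloud of arbitrary size into a fixed-size occupancy
    histogram, suitable as input for a CNN with a fixed input shape."""
    points = np.asarray(points, dtype=float)
    # Normalize into the unit cube to reduce translation/scaling effects.
    mins = points.min(axis=0)
    span = np.maximum(points.max(axis=0) - mins, 1e-9)
    unit = (points - mins) / span
    # Map each point to a grid cell and count points per cell.
    idx = np.minimum((unit * grid).astype(int), np.array(grid) - 1)
    hist = np.zeros(grid, dtype=float)
    np.add.at(hist, tuple(idx.T), 1.0)
    return hist / len(points)  # relative occupancy per cell
```

Because the output shape is fixed regardless of how many points the sensor delivers per frame, it can be fed directly to a convolutional network.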
To reduce the number of traffic accidents and to increase driver comfort, the idea of designing driver assistance systems has arisen in recent years. The principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map) in search of object-related information (positions, object types, etc.). The knowledge base comprises static and dynamic knowledge. It consists of a set of rules (e.g., traffic rules, physical laws) and additional information (e.g., GPS, lane information), and it is used implicitly by algorithms throughout the system. The scene interpretation combines the information extracted by the object-related analysis and inspects it for contradictions. It is strongly connected to behavior planning, using only the information needed for the actual task. In the scene interpretation, consistent representations (e.g., a bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task. The influence of behavior planning on the behavior of the guided vehicle is limited to advice, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off of using this architecture.
We present a novel approach to distributing matrix multiplications among GPU-equipped nodes in a cluster system. In this context we discuss the induced challenges and possible solutions. Additionally, we present an algorithm which outperforms optimized GPU BLAS libraries for small matrices. Furthermore, we provide a novel theoretical model for distributing algorithms within homogeneous computation systems with multiple hierarchies. In the context of this model we develop an algorithm which can find the optimal distribution parameters for each involved subalgorithm. We provide a detailed analysis of the algorithm's space and time complexities and justify its use with a structured evaluation within a small GPU-equipped Beowulf cluster.
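The basic decomposition that makes matrix multiplication distributable can be sketched as a blocked product: each block product is independent and could be dispatched to a different GPU node. This is a generic illustration under my own naming, not the paper's algorithm or its optimal parameter search.

```python
import numpy as np

def blocked_matmul(A, B, block=64):
    """Compute C = A @ B by accumulating products of sub-blocks.
    Each (i, j, k) block product touches only small tiles of A and B,
    which is what allows distribution across nodes or GPUs."""
    n, m = A.shape
    m2, p = B.shape
    assert m == m2, "inner dimensions must match"
    C = np.zeros((n, p))
    for i in range(0, n, block):
        for j in range(0, p, block):
            for k in range(0, m, block):
                C[i:i+block, j:j+block] += (
                    A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
                )
    return C
```

In a real cluster setting, the loop body would become a work item, and choosing `block` per hierarchy level is exactly the kind of distribution parameter the paper's model optimizes.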
A Large and Quick Induction Field Scanner for Examining the Interior of Extended Objects or Humans
(2017)
This study describes the techniques and signal properties of a large, powerful, linear-scanning 1.5 MHz induction field scanner. The mechanical system is capable of quickly reading the volume of relatively large objects, e.g., a test person. The general approach mirrors Magnetic Induction Tomography (MIT), but the details differ considerably from currently described MIT systems: the setup is asymmetrical, and it operates in gradiometric modalities, either with coaxial excitation with destructive interference or with a single excitation loop and tilted receivers. Following this approach, the primary signals were almost completely nulled, and the test objects' real or imaginary imprint was obtained directly. The coaxial gradiometer appeared advantageous: exposure to strong fields was reduced due to destructive interference. Meanwhile, the signals included enhanced components at higher spatial frequencies, thereby gradually improving the capability for localization. For robust signals, the excitation field can be powered towards the rated limits of human exposure to time-varying magnetic fields. Repeated measurements assessed signal integrity, which is affected by the scanner's imperfections, particularly any motions or respiratory changes in living beings during or between repeated scans. The currently achieved overall figure of merit for artifacts was 58 dB for inanimate test objects and 44 dB for a test person. Both numbers should be understood as worst-case levels: a repeated scan with intermediate breathing and drift/dislocations requires 50 seconds, whereas a single measurement (with respiratory arrest) takes only about 5 seconds.
A light-weight real-time applicable hand gesture recognition system for automotive applications
(2015)
We present a novel approach for improved hand gesture recognition by a single time-of-flight (ToF) sensor in an automotive environment. As the sensor's lateral resolution is comparatively low, we employ a learning approach comprising multiple processing steps, including PCA-based cropping, the computation of robust point cloud descriptors, and the training of a multilayer perceptron (MLP) on a large database of samples. A sophisticated temporal fusion technique boosts the overall robustness of recognition by taking into account data coming from previous classification steps. Overall results are very satisfactory when evaluated on a large benchmark set of ten different hand poses, especially when it comes to generalization to previously unknown persons.
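The idea of temporal fusion over previous classification steps can be sketched as an exponentially weighted vote over recent per-frame class scores. The decay constant and function name are illustrative assumptions; the paper's actual fusion technique may differ.

```python
import numpy as np

def temporal_fusion(score_history, decay=0.5):
    """Fuse per-frame class scores (oldest first) with exponentially
    decaying weights, newest frame weighted highest, so a single noisy
    frame cannot flip the recognized gesture on its own."""
    fused = np.zeros_like(np.asarray(score_history[0], dtype=float))
    weight = 1.0
    for scores in reversed(score_history):  # newest frame first
        fused += weight * np.asarray(scores, dtype=float)
        weight *= decay
    return int(np.argmax(fused))
```

With three frames of scores where only the newest frame disagrees, the fused decision still follows the stable majority, which is the robustness effect the abstract describes.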
In this paper we describe a session management system for setting up various collaborative classroom scenarios. The approach addresses the additional workload of administrating classroom networks placed on the teacher, which is an important aspect of teachers' willingness to implement technology-enhanced learning in schools. The system facilitates the preparation of classroom scenarios and the ad-hoc installation of networked collaborative sessions. We provide a graphical interface which is usable for administration, monitoring, and the specification of a wide variety of classroom situations with group work. The resulting graphical specifications are well suited for re-use in the more formal learning design format IMS/LD; this is achieved by an automatable transformation of the scenarios to LD documents. Keywords: collaborative classroom scenarios, lightweight classroom orchestration, learning design, shared workspaces.
We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons), and makes no explicit assumptions about the probabilistic structure of the problem as it is usually done in multi-class classification. By adding a cascade of additional classifiers, each of which receives the previous classifier's output in addition to regular input data, the approach harnesses unused information that manifests itself in the form of, e.g., correlations between predicted classes. Using multilayer perceptrons as a classification model, we demonstrate the validity of this approach by testing it on a complex ten-class 3D gesture recognition task.
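The data flow of the cascade described above can be sketched as follows: the second-stage classifier receives the first stage's class scores concatenated with the raw input. The function name is my own; the classifiers on either side could be MLPs, SVMs, or perceptrons as the abstract notes.

```python
import numpy as np

def stage_two_input(X, class_scores):
    """Build the input for the next cascade stage: raw features augmented
    with the previous stage's per-class scores, so correlations between
    predicted classes become learnable features."""
    X = np.atleast_2d(X)
    class_scores = np.atleast_2d(class_scores)
    return np.hstack([X, class_scores])
```

For a ten-class problem, a sample with d raw features thus enters the second stage as a (d + 10)-dimensional vector, letting that stage disambiguate classes the first stage confuses.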
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give teenage pupils insights into this area in a project-based learning environment with professional tools. Within the last 18 months, this project was successfully conducted several times with participants of different ages.
We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data coming from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we are able to reliably classify a set of ten hand gestures in real-time, and we describe the setup of the system, the utilised methods, as well as possible application scenarios.
We present a system for efficient dynamic hand gesture recognition based on a single time-of-flight sensor. As opposed to other approaches, we rely solely on depth data to interpret the user's mid-air hand movements. We set up a large database to train multilayer perceptrons (MLPs), which are subsequently used for classification of the static hand poses that define the targeted dynamic gestures. In order to remain robust against noise and to compensate for the low sensor resolution, PCA is used for data cropping, and highly descriptive features, obtainable in real-time, are presented. Our simple yet efficient definition of a dynamic hand gesture shows how strong results are achievable in an automotive environment, allowing interesting and sophisticated applications to be realized.
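The definition of a dynamic gesture as a transition between classified static poses can be sketched as follows. The pose labels, the transition rule, and the frame window are illustrative assumptions standing in for the paper's exact definition.

```python
def detect_dynamic_gesture(pose_sequence, start_pose, end_pose, max_gap=10):
    """A dynamic gesture fires when the per-frame static pose classifier
    outputs `start_pose` and then `end_pose` within `max_gap` frames."""
    for i, pose in enumerate(pose_sequence):
        if pose == start_pose:
            # Look ahead a bounded number of frames for the end pose.
            for later in pose_sequence[i + 1:i + 1 + max_gap]:
                if later == end_pose:
                    return True
    return False
```

This keeps the dynamic recognizer trivially cheap at runtime: all the learned machinery sits in the per-frame static pose classifier, and the temporal logic is a simple state check.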
We present a novel method to perform multi-class pattern classification with neural networks and test it on a challenging 3D hand gesture recognition problem. Our method consists of a standard one-against-all (OAA) classification, followed by another network layer classifying the resulting class scores, possibly augmented by the original raw input vector. This allows the network to disambiguate hard-to-separate classes as the distribution of class scores carries considerable information as well, and is in fact often used for assessing the confidence of a decision. We show that by this approach we are able to significantly boost our results, overall as well as for particular difficult cases, on the hard 10-class gesture classification task.
The influence of national culture on knowledge sharing has important implications for all organizations. However, the existing frameworks only cover a subset of relevant factors or limit their research to either the organizational or the national level. Hence, a more encompassing framework is needed. The question this article answers is how national culture influences knowledge sharing. Based on an extensive literature review and interviews carried out in Finland and Japan, this article sets forth the foundation for a new framework. The framework details how national culture influences individual-level and organizational-level factors and technical tools. Additionally, the framework includes a new dimension, the time dimension, which is usually disregarded in knowledge sharing research. For researchers and practitioners, the derived framework provides key insights into the factors relevant to knowledge sharing and national culture. Finally, future research directions are discussed.
We present a publicly available benchmark database for the problem of hand posture recognition from noisy depth data and fused RGB-D data obtained from low-cost time-of-flight (ToF) sensors. The database is the most extensive of its kind, containing over a million data samples (point clouds) recorded from 35 different individuals for ten different static hand postures. This captures a great amount of variance due to person-related factors, and scaling, translation, and rotation are also explicitly represented. Benchmark results achieved with a standard classification algorithm are computed by cross-validation both over samples and over persons, the latter implying training on all persons but one and testing on the remaining one. An important result using this database is that cross-validation performance over samples (which is the standard procedure in machine learning) is systematically higher than cross-validation performance over persons, which, in our view, is the true application-relevant measure of generalization performance.