Currently, car assistant systems mainly try to prevent accidents. Increasing built-in car technology also extends the range of potential in-vehicle applications. Future cars might have virtual windshields that augment the traffic view, or individual virtual assistants that interact with the user. In this paper, we explore the potential of an assistant system that helps the car's occupants calm down and reduce stress when they witness an accident in front of them. We present requirements from a discussion (N = 11) and derive a system design from them. Further, we test the system design in a video-based simulator study (N = 43). Our results indicate that an accident support system increases perceived control and trust and helps the user calm down.
This year's Workshop Automotive HMI again features a wide range of talks on automotive human-machine interfaces. As in the previous two years, an interactive innovation workshop is also part of the program. The motto of Mensch und Computer 2014, "Interaktiv Unterwegs", fits the workshop's theme perfectly.
In this demo paper we present a new visualization technique for dynamic networks. It renders the time slices of the dynamic network using two-dimensional graph layout algorithms and stacks them in the third dimension to show development over time. The visualization ensures that the same node always occupies the same position in each time slice, making its development easy to follow. It also allows filtering data and influencing node appearance based on properties. Additionally, we offer a two-dimensional comparison view for two time slices, which highlights changes in graph structure and (if available) in node measures. The presented visualization technique is implemented using Web technology and is available in a Web-based analytics workbench. We demonstrate the benefits of these techniques through an analysis of a data set from a learning community.
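The core idea of fixing each node's 2D position and stacking the time slices along a third axis can be sketched as follows. The circular layout and the data structures here are illustrative assumptions; the workbench's actual layout algorithm is not specified in the abstract.

```python
import math

def stacked_layout(slices, spacing=1.0):
    """Assign each node one fixed (x, y) shared across all time slices,
    and a z coordinate equal to the slice index times `spacing`."""
    # Collect all nodes appearing in any slice so positions stay stable.
    nodes = sorted({n for g in slices for n in g})
    # Simple circular layout; any 2D layout algorithm could be used instead.
    xy = {n: (math.cos(2 * math.pi * i / len(nodes)),
              math.sin(2 * math.pi * i / len(nodes)))
          for i, n in enumerate(nodes)}
    layers = []
    for t, g in enumerate(slices):
        layers.append({n: (*xy[n], t * spacing) for n in g})
    return layers

# Example: three time slices of an evolving network (node sets only).
slices = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}]
layers = stacked_layout(slices)
```

Because the (x, y) of a node never changes between layers, following its development over time reduces to scanning along the z axis.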
4. Workshop Automotive HMI
(2015)
In-vehicle user interfaces pose particular challenges in design and development, since safe operation of both driver assistance systems and comfort and entertainment functions must be ensured in all driving situations. At the same time, increasing connectivity brings the long development cycles of motor vehicles into contact with the highly dynamic world of mobile phones and the Internet. Input and output technologies are also among the manufacturers' central means of highlighting the quality of the systems built into the vehicle and of standing out from the competition. To this end, this workshop presents and discusses concepts and technical solutions from designers, developers, and human factors experts from universities, research institutes, and the automotive industry.
5th Workshop Automotive HMI
(2016)
In-vehicle user interfaces pose particular challenges in design and development, since safe operation of driver assistance systems as well as comfort and entertainment functions must be ensured in all driving situations. At the same time, increasing connectivity brings the long development cycles of motor vehicles into contact with the highly dynamic world of mobile phones and the Internet. Input and output technologies are also among the manufacturers' central means of highlighting the quality of the systems built into the vehicle. In keeping with the conference motto "Sozial Digital – Gemeinsam Auf Neuen Wegen", this workshop featured in particular work and visions that understand the automobile, and in-vehicle HMIs, as part of a connected digital world: a new kind of social human-machine ecosystem. The central question discussed in the workshop was what future systems must look like in order to optimally support both the human and the machine (following the MABA-MABA paradigm of Fitts, 1954). The workshop was again set up in an interdisciplinary fashion and discussed concepts and technical solutions with and from designers, developers, and human factors experts from universities, research institutes, and the automotive industry from a holistic perspective.
This workshop focuses on findings on human-computer interaction in safety-critical application domains. Since HCI is increasingly taking place in such fields, for example disaster management, transportation, production, and medicine, many scientific disciplines, including computer science, are increasingly in demand. The challenge is to discuss and adapt existing approaches and methods and to develop innovative solutions.
Automotive user interfaces and, in particular, automated vehicle technology pose numerous challenges to researchers, vehicle manufacturers, and third-party suppliers in supporting all the diverse facets of user needs. For example, challenges emerge from the variety of user groups, ranging from inexperienced, thrill-seeking young novice drivers to elderly drivers with all their natural limitations. To allow assessing the quality of automotive user interfaces and automated driving technology already during development and within virtual test processes, the proposed workshop is dedicated to the quest of finding objective, quantifiable quality criteria for describing future driving experiences. The workshop is intended for HCI, AutomotiveUI, and human factors researchers and practitioners as well as for designers and developers. In adherence to the conference main topic "Spielend einfach interagieren", this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) and artificial intelligence (predictive HMIs, adaptive systems, intuitive interaction).
Automotive user interfaces and automated vehicle technology pose numerous challenges in supporting all the diverse facets of user needs. These range from inexperienced, thrill-seeking, young novice drivers to elderly drivers with a mostly opposite set of preferences together with their natural limitations. To allow assessing the (hedonic) quality of automotive user interfaces and automated driving technology (i.e., UX) already during development, the proposed workshop is dedicated to the quest of finding objective, quantifiable criteria to describe future driving experiences. The workshop is intended for HCI, AutomotiveUI, and human factors researchers and practitioners as well as for designers and developers. In adherence to the conference main topic "Interaktion – Verbindet – Alle", this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) with a focus on hedonic quality and design of user experience to enhance the feeling of safety in automated driving systems (ADS).
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and the levels of automation. This might be due to the fact that automated driving functions are misnamed (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground to test new interaction concepts in an automated driving setting.
Given the success of convolutional neural networks (CNNs) during recent years in numerous object recognition tasks, it seems logical to further extend their applicability to three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability of automated feature generation and combine it with a novel 3D feature computation technique that preserves local information contained in the data. Experiments are conducted, after an extensive parameter search to optimize the network structure, on a large data set of 600,000 samples of hand postures obtained via time-of-flight (ToF) sensors from 20 different persons. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
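The leave-one-person-out scheme used to measure generalization can be sketched generically: train on every person but one, test on the held-out person, repeat. The toy majority-class "model" below stands in for the CNN and is purely illustrative.

```python
from collections import Counter

def leave_one_person_out(samples, labels, persons, train_fn, predict_fn):
    """Per-person error rate: train on all other persons' data,
    then evaluate on the held-out person's samples."""
    errors = {}
    for p in sorted(set(persons)):
        train = [(s, l) for s, l, q in zip(samples, labels, persons) if q != p]
        test = [(s, l) for s, l, q in zip(samples, labels, persons) if q == p]
        model = train_fn(train)
        wrong = sum(1 for s, l in test if predict_fn(model, s) != l)
        errors[p] = wrong / len(test)
    return errors

# Toy stand-in model: always predict the most frequent training label.
train_majority = lambda train: Counter(l for _, l in train).most_common(1)[0][0]
predict_majority = lambda model, s: model

errors = leave_one_person_out(
    samples=[0, 1, 2, 3, 4, 5],
    labels=["fist", "fist", "open", "fist", "fist", "fist"],
    persons=[1, 1, 1, 2, 2, 2],
    train_fn=train_majority,
    predict_fn=predict_majority,
)
```

Because the held-out person contributes nothing to training, the per-person error estimates how the model generalizes to unseen users, which is exactly the quantity the abstract reports.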
This contribution presents a novel approach to utilizing time-of-flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors provide depth data at high frame rates independent of illumination, making applications possible both indoors and outdoors. This comes at the cost of precision in depth measurements and comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds, which produces fixed-size input and thus makes deep learning approaches with convolutional neural networks applicable. To increase precision, we introduce several methods to reduce noise and normalize the input to overcome difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows that hand gesture recognition is realizable on commodity tablets in real time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
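One plausible reading of rasterizing a variable-size point cloud into fixed-size CNN input is a 2D occupancy grid. The grid size, bounds, and plain point-count projection below are assumptions for illustration; the abstract does not give those details.

```python
def rasterize(points, grid=8, bounds=((0.0, 1.0), (0.0, 1.0))):
    """Project 3D points onto the x-y plane and count points per cell,
    yielding a grid x grid image regardless of the cloud's size."""
    img = [[0] * grid for _ in range(grid)]
    (xmin, xmax), (ymin, ymax) = bounds
    for x, y, z in points:
        # Clamp to the last cell so points on the upper bound stay inside.
        i = min(int((x - xmin) / (xmax - xmin) * grid), grid - 1)
        j = min(int((y - ymin) / (ymax - ymin) * grid), grid - 1)
        img[j][i] += 1
    return img

# Three points: two near one corner, one near the opposite corner.
img = rasterize([(0.10, 0.10, 0.5), (0.90, 0.90, 0.2), (0.12, 0.11, 0.4)])
```

The fixed output shape is what makes a convolutional network applicable: every cloud, however many points it contains, maps to the same input dimensions.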
To reduce the number of traffic accidents and to increase driver comfort, the idea of designing driver assistance systems has gained traction in the past years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map) in search of object-related information (positions, types of objects, etc.). The knowledge base comprises static and dynamic knowledge. It consists of a set of rules (e.g., traffic rules, physical laws) and additional information (e.g., GPS, lane information), and it is used implicitly by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects it for contradictions. It is strongly connected to behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (e.g., bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task. The influence of behavior planning on the behavior of the guided vehicle is limited to advice, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off of using this architecture.
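The four-part pipeline described above can be sketched as a minimal class skeleton. All module contents here, including the field names and the braking rule, are placeholder assumptions, not the paper's implementation.

```python
class DriverAssistance:
    """Skeleton of the four-part architecture: object-related analysis,
    knowledge base, scene interpretation, and behavior planning."""

    def __init__(self, rules):
        # Static knowledge (e.g., traffic rules) as a plausibility lookup.
        self.knowledge_base = rules

    def object_analysis(self, sensor_frames):
        # Flexible sensor fusion: merge per-sensor detections into one list.
        return [obj for frame in sensor_frames for obj in frame]

    def scene_interpretation(self, objects):
        # Inspect fused objects for contradictions with the knowledge base;
        # unknown object types are kept by default.
        return [o for o in objects if self.knowledge_base.get(o["type"], True)]

    def behavior_planning(self, scene):
        # Advice only: no mechanical control of the vehicle is exerted.
        return "brake" if any(o["dist"] < 10 for o in scene) else "keep speed"

    def step(self, sensor_frames):
        objs = self.object_analysis(sensor_frames)
        scene = self.scene_interpretation(objs)
        return self.behavior_planning(scene)
```

A call such as `DriverAssistance({"car": True}).step([[{"type": "car", "dist": 5}]])` runs one full pass through the pipeline and returns an advisory string.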
We present a novel approach to distributing matrix multiplications among GPU-equipped nodes in a cluster system. In this context we discuss the induced challenges and possible solutions. Additionally, we present an algorithm which outperforms optimized GPU BLAS libraries for small matrices. Furthermore, we provide a novel theoretical model for distributing algorithms within homogeneous computation systems with multiple hierarchies. In the context of this model we develop an algorithm which can find the optimal distribution parameters for each involved subalgorithm. We provide a detailed analysis of the algorithm's space and time complexities and justify its use with a structured evaluation within a small GPU-equipped Beowulf cluster.
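The simplest distribution scheme such a model covers is row-block partitioning, sketched below. In-process loops stand in for GPU nodes, and the blocking itself is an assumption for illustration rather than the paper's actual scheme.

```python
def matmul(A, B):
    """Plain triple-loop matrix product on lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def distributed_matmul(A, B, workers=2):
    """Row-block distribution: each 'worker' multiplies its block of A's
    rows with the full B; partial results are then concatenated. In a real
    cluster, each block would be shipped to a GPU node instead."""
    n = len(A)
    step = (n + workers - 1) // workers       # ceil(n / workers) rows each
    blocks = [A[i:i + step] for i in range(0, n, step)]
    partials = [matmul(blk, B) for blk in blocks]  # one job per worker
    return [row for part in partials for row in part]
```

Because row blocks are independent, no communication is needed between workers during the multiply; only the final concatenation (a gather, in cluster terms) touches all partial results.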
A light-weight real-time applicable hand gesture recognition system for automotive applications
(2015)
We present a novel approach for improved hand-gesture recognition by a single time-of-flight (ToF) sensor in an automotive environment. As the sensor's lateral resolution is comparatively low, we employ a learning approach comprising multiple processing steps, including PCA-based cropping, the computation of robust point cloud descriptors, and the training of a multilayer perceptron (MLP) on a large database of samples. A sophisticated temporal fusion technique boosts the overall robustness of recognition by taking into account data coming from previous classification steps. Overall results are very satisfactory when evaluated on a large benchmark set of ten different hand poses, especially when it comes to generalization to previously unknown persons.
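The temporal fusion step can be approximated by a sliding majority vote over recent per-frame predictions. The window size and the vote itself are assumptions: the abstract calls the actual technique "sophisticated" without detailing it.

```python
from collections import Counter, deque

class TemporalFusion:
    """Smooth noisy per-frame classifications with a sliding majority
    vote over the last `window` predictions."""

    def __init__(self, window=5):
        self.history = deque(maxlen=window)  # oldest frames drop out

    def update(self, label):
        """Record the current frame's label and return the fused label."""
        self.history.append(label)
        # Counter.most_common breaks ties by first-encountered order.
        return Counter(self.history).most_common(1)[0][0]
```

A single misclassified frame inside the window is outvoted by its neighbors, which is why this kind of fusion improves robustness against per-frame sensor noise.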
In this paper we describe a session management system for setting up various collaborative classroom scenarios. The approach addresses the additional workload that administrating classroom networks places on the teacher, which is an important factor in teachers' willingness to implement technology-enhanced learning in schools. The system facilitates the preparation of classroom scenarios and the ad hoc installation of networked collaborative sessions. We provide a graphical interface that is usable for administration, monitoring, and the specification of a wide variety of classroom situations with group work. The resulting graphical specifications are well suited to reuse in the more formal learning design format IMS/LD; this is achieved by an automatable transformation of the scenarios into LD documents. Keywords: collaborative classroom scenarios, lightweight classroom orchestration, learning design, shared workspaces.
We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons), and makes no explicit assumptions about the probabilistic structure of the problem, as is usually done in multi-class classification. By adding a cascade of additional classifiers, each of which receives the previous classifier's output in addition to the regular input data, the approach harnesses unused information that manifests itself in the form of, e.g., correlations between predicted classes. Using multilayer perceptrons as the classification model, we demonstrate the validity of this approach by testing it on a complex ten-class 3D gesture recognition task.
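The cascade idea, each stage receiving the previous stage's class scores alongside the raw input, can be sketched with toy linear stages. The stage functions below are illustrative stand-ins, not trained MLPs.

```python
def cascade_predict(stages, x):
    """Run the cascade: every stage sees the raw features plus the scores
    produced by the previous stage; the final scores are argmax'd."""
    scores = []
    for stage in stages:
        scores = stage(x + scores)  # augment input with previous outputs
    return max(range(len(scores)), key=scores.__getitem__)

# Toy two-class cascade on a single raw feature:
# stage 1 scores both classes from the raw feature alone,
# stage 2 re-weights using the feature and stage 1's scores.
stage1 = lambda v: [v[0], 1 - v[0]]
stage2 = lambda v: [0.1 * v[0] + 0.5 * v[1], 0.5 * v[2]]
```

Stage 2's weights on `v[1]` and `v[2]` (the previous stage's scores) are where inter-class correlations can be exploited; with raw input alone that information would be unavailable.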
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give teenage pupils insights into this area in a project-based learning environment with professional tools. Within the last 18 months this project was successfully conducted several times with participants of different ages.