Currently, car assistant systems mainly try to prevent accidents. Increasing built-in car technology also extends the potential applications in vehicles. Future cars might have virtual windshields that augment the traffic or individual virtual assistants interacting with the user. In this paper, we explore the potential of an assistant system that helps the car's occupants to calm down and reduce stress when they experience an accident in front of them. We present requirements from a discussion (N = 11) and derive a system design from them. Further, we test the system design in a video-based simulator study (N = 43). Our results indicate that an accident support system increases perceived control and trust and helps the user to calm down.
In this demo paper we present a new visualization technique for dynamic networks. It displays the time slices of the dynamic network using two-dimensional graph layout algorithms and stacks these in the third dimension to show the development over time. The visualization ensures that the same node always has the same position in each time slice, so that it is easy to follow its development. It also allows filtering data and influencing node appearance based on properties. Additionally, we offer a two-dimensional comparison view for two time slices which highlights changes in graph structure and (if available) in measures of nodes. The presented visualization technique is implemented using Web technology and is available in a Web-based analytics workbench. We demonstrate the benefits of these techniques by an analysis of a data set from a learning community.
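The core of the stacking idea can be sketched in a few lines: compute one shared 2D layout over the union of all time slices, then assign each slice a z-coordinate. The snippet below is a minimal illustration, not the paper's implementation; the circular layout, the toy edge sets, and all names are assumptions.

```python
import math

# Two hypothetical time slices of a dynamic network,
# each given as a set of edges between node labels.
slices = [
    {("a", "b"), ("b", "c")},
    {("a", "b"), ("c", "d")},
]

# Collect every node that ever appears, in a stable order.
nodes = sorted({n for edges in slices for e in edges for n in e})

# One shared 2D layout (here: points on a circle) so each node
# keeps the same (x, y) position in every time slice.
pos2d = {
    n: (math.cos(2 * math.pi * i / len(nodes)),
        math.sin(2 * math.pi * i / len(nodes)))
    for i, n in enumerate(nodes)
}

# Stack the slices along the z-axis: slice index t becomes z,
# so (node, t) maps to a 3D position (x, y, t).
pos3d = {
    (n, t): (*pos2d[n], float(t))
    for t, edges in enumerate(slices)
    for n in {v for e in edges for v in e}
}
```

Because the (x, y) part comes from a single shared layout, a node's trajectory through time is a straight vertical line, which is what makes its development easy to follow.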
Automotive user interfaces and, in particular, automated vehicle technology pose plenty of challenges to researchers, vehicle manufacturers, and third-party suppliers to support all diverse facets of user needs. To give an example, they emerge from the variation of different user groups, ranging from inexperienced, thrill-seeking young novice drivers to elderly drivers with all their natural limitations. To allow assessing the quality of automotive user interfaces and automated driving technology already during development and within virtual test processes, the proposed workshop is dedicated to the quest of finding objective, quantifiable quality criteria for describing future driving experiences. The workshop is intended for HCI, AutomotiveUI, and "Human Factors" researchers and practitioners as well as for designers and developers. In adherence to the conference main topic "Spielend einfach interagieren", this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) and artificial intelligence (predictive HMIs, adaptive systems, intuitive interaction).
Automotive user interfaces and automated vehicle technology pose numerous challenges to support all diverse facets of user needs. These range from inexperienced, thrill-seeking, young novice drivers to elderly drivers with a mostly opposite set of preferences together with their natural limitations. To allow assessing the (hedonic) quality of automotive user interfaces and automated driving technology (i.e., UX) already during development, the proposed workshop is dedicated to the quest of finding objective, quantifiable criteria to describe future driving experiences. The workshop is intended for HCI, AutomotiveUI, and "Human Factors" researchers and practitioners as well as for designers and developers. In adherence to the conference main topic "Interaktion – Verbindet – Alle", this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) with a focus on hedonic quality and design of user experience to enhance the feeling of safety in ADS.
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and the levels of automation. This might be due to the fact that automated driving functions are misnamed (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground to test new interaction concepts in an automated driving setting.
E-Learning and openness in education are receiving ever increasing attention in businesses as well as in academia. However, these practices have been introduced in public administrations only to a small extent. The study addresses this gap by presenting a literature review on Open Educational Resources (OER) and E-Learning in the public sector. The main goal of the article is to identify challenges to open E-Learning in public administrations. Experiences will be conceptualized as barriers which need to be considered when introducing open E-Learning systems and programs in administrations. The main outcome is a systematic review of lessons learned, presented as a contextualized Barrier Framework which is suitable for analyzing requirements when introducing E-Learning and OER in public administrations.
CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator's speech, gestures, and gaze, and to recognize objects. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
Given the success of convolutional neural networks (CNNs) during recent years in numerous object recognition tasks, it seems logical to further extend their applicability to the treatment of three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability of automated feature generation and combine it with a novel 3D feature computation technique, preserving local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via ToF (time-of-flight) sensors from 20 different persons, after an extensive parameter search in order to optimize network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
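The leave-one-person-out scheme mentioned here can be sketched as a plain evaluation loop: for each person, train on everyone else's samples and test on that person's samples alone. The snippet below is a minimal, hedged illustration; the toy data, the threshold "model", and all names are assumptions and stand in for the actual CNN and hand-posture data.

```python
# Each sample: (feature, person_id, label) -- toy placeholder data.
samples = [
    (0.1, "p1", 0), (0.9, "p1", 1),
    (0.2, "p2", 0), (0.8, "p2", 1),
    (0.3, "p3", 0), (0.7, "p3", 1),
]

def train(train_set):
    # Stand-in "model": threshold at the mean feature value.
    mean = sum(x for x, _, _ in train_set) / len(train_set)
    return lambda x: int(x > mean)

# Leave-one-person-out: hold out all samples of one person at a time.
errors = {}
for person in {p for _, p, _ in samples}:
    train_set = [s for s in samples if s[1] != person]
    test_set = [s for s in samples if s[1] == person]
    model = train(train_set)
    wrong = sum(model(x) != y for x, _, y in test_set)
    errors[person] = wrong / len(test_set)  # per-person error rate
```

Grouping the split by person rather than by individual sample is what makes the resulting error an estimate of generalization to unseen users.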
This contribution presents a novel approach to utilizing Time-of-Flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors are capable of providing depth data at high frame rates independent of illumination, making applications possible in both indoor and outdoor situations. This comes at the cost of precision in depth measurements and a comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds which realizes fixed-size input, making Deep Learning approaches applicable using Convolutional Neural Networks. In order to increase precision, we introduce several methods to reduce noise and normalize the input to overcome difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
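The key trick of the rasterization step is turning a variable-length point cloud into a fixed-size grid that a CNN can consume. The snippet below is a minimal sketch of that idea only, not the paper's method; the grid size, the use of plain occupancy counts, and the assumption of coordinates normalized to [0, 1) are all illustrative choices.

```python
GRID = 4  # 4x4 cells -> always 16 input values, regardless of cloud size

def rasterize(points, grid=GRID):
    # points: iterable of (x, y, z) with x, y normalized to [0, 1).
    cells = [[0] * grid for _ in range(grid)]
    for x, y, _z in points:
        # Clamp to the last cell so a coordinate of exactly 1.0
        # does not fall outside the grid.
        i = min(int(x * grid), grid - 1)
        j = min(int(y * grid), grid - 1)
        cells[i][j] += 1  # count how many points land in each cell
    return cells

# Three toy points: two close together, one in the opposite corner.
cloud = [(0.05, 0.05, 0.3), (0.06, 0.07, 0.4), (0.9, 0.9, 0.1)]
grid = rasterize(cloud)
```

Because the output shape depends only on the grid resolution, clouds with ten points and clouds with ten thousand points map to the same input dimensionality.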