In catastrophic events, new technologies have expanded the potential for help. Voluntary help takes many forms. This paper proposes categories of voluntary help based on properties such as organizational structure, the helping process, the kind of prosocial behavior involved, and more. The focus lies on the organizational structure and motivational aspects of helper groups. Examples are given for each category. The categorization aims to give a brief overview of the properties a group of system users may have.
Women are still underrepresented at the highest management levels. The think-manager-think-male phenomenon suggests that leadership is associated with male rather than female attributes. Although styling has been shown to influence the evaluation of women's leadership abilities, the relevant specific features have remained remarkably unaddressed. In a 2 × 2 × 2 × 2 (skirt/pants, with/without jewelry, loose hair/braid, with/without makeup) between-subjects design, 354 participants evaluated a woman in a photograph. Women with makeup, pants, or jewelry were rated as more competent than women without makeup, without jewelry, or with skirts. A combination of loose hair and no makeup was perceived as warmest, and women with loose hair were more likely to be hired than those with braids. In sum, even subtle changes in styling have a strong impact on how women's leadership abilities are evaluated.
Massive open online courses (MOOCs) have become increasingly popular. These course formats are typically highly flexible and attract large groups of learners from heterogeneous backgrounds. So far, research on the success factors behind low dropout rates and high learner satisfaction in MOOCs is scarce. In this chapter, we describe experiences from a large online course offered to students of two large German universities. Based on theory drawn from a social psychological perspective on the relevance of social interaction for learning, we describe the background, structure, and specific elements of the MOOC-like course. We outline evaluation results of both small-group collaboration (in workshops) and mass interaction (via forum and wiki usage), as well as results of the general evaluation of the overall course concept. We argue that the specific mixture of small- and large-group interaction, as well as teacher- and learner-generated content, is especially promising with regard to satisfaction, learning outcomes, and course completion rates.
"Quarter agile" aims to promote older people's social participation and community involvement via physical and cognitive training which the participants also help create. The project relies heavily on the use of smartphones as training support. Loneliness and the loss of physical and cognitive skills are to be prevented by means of training and participation in groups. We want to investigate the effects of technology-assisted training on the physical and cognitive performance and social participation of older people. "Quarter agile" is geared towards healthy people aged 65 and over who are residents of the specified neighborhood.
In recent years, hardware for producing and consuming virtual reality content has reached price levels that make it affordable to everyone. Accordingly, schools and universities are showing increased interest in implementing virtual reality technologies to support their innovative educational activities. This paper therefore presents a flexible architecture for supporting the development of virtual reality learning scenarios that can be conveniently deployed for educational purposes. We also suggest an example of such an educational scenario for medical purposes, deployable with the suggested architecture. In addition, we developed and administered a questionnaire, answered by 17 medical students, in order to derive additional requirements for refining such scenarios. We then present these efforts with a view to deployments usable in additional domains as well. Finally, we summarize and mention aspects we will address in future efforts to deploy such activities.
Nowadays, teachers and students use various ICT devices to conduct innovative educational activities from anywhere at any time. The enactment of these activities relies on robust communication and computational infrastructures supporting technological devices that enable better access to educational resources and pedagogical scaffolds, wherever and whenever necessary. In this paper, we present EDU.Tube: an interactive environment that relies on web and mobile solutions offered to teachers and students for authoring and incorporating educational interactions at specific moments along the timeline of arbitrary YouTube video clips. Teachers and students can later experience these authored artefacts while interacting from their stationary or mobile devices. We describe our efforts related to the design, deployment, and evaluation of an educational activity supported by the EDU.Tube environment. Furthermore, we illustrate the specific teachers' and students' efforts during the different phases of this educational activity. The evaluation of this activity and its results are presented, followed by a discussion of these findings, as well as some recommendations for future research further elaborating on EDU.Tube's aspects in relation to learning analytics.
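The core idea of anchoring authored interactions to positions on a video timeline can be sketched as a small data model. The abstract does not describe EDU.Tube's actual schema or API, so all names and fields below are illustrative assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Interaction:
    """A hypothetical authored interaction anchored to a video timestamp.
    Field names are illustrative; EDU.Tube's real data model is not
    described in the abstract."""
    time_sec: float
    kind: str      # e.g. "quiz", "note", "discussion-prompt"
    payload: dict

@dataclass
class AnnotatedClip:
    video_id: str  # YouTube video ID
    interactions: list = field(default_factory=list)

    def add(self, inter: Interaction):
        # Keep interactions ordered by their timeline position
        self.interactions.append(inter)
        self.interactions.sort(key=lambda i: i.time_sec)

    def due(self, playhead_sec: float, window: float = 0.5):
        """Interactions to trigger near the current playback position."""
        return [i for i in self.interactions
                if abs(i.time_sec - playhead_sec) <= window]

clip = AnnotatedClip("exampleVid1")
clip.add(Interaction(90.0, "quiz", {"q": "What was the key claim?"}))
clip.add(Interaction(30.0, "note", {"text": "Watch the graph here."}))
```

A player front end would poll `due()` with the current playhead and render whatever authored artefact falls within the window.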
We present a publicly available benchmark database for the problem of hand posture recognition from noisy depth data and fused RGB-D data obtained from low-cost time-of-flight (ToF) sensors. The database is the most extensive of its kind, containing over a million data samples (point clouds) recorded from 35 different individuals performing ten different static hand postures. It captures a great amount of person-related variance, and scaling, translation, and rotation are explicitly represented as well. Benchmark results with a standard classification algorithm are computed by cross-validation both over samples and over persons, the latter meaning training on all persons but one and testing on the remaining one. An important result obtained with this database is that cross-validation performance over samples (the standard procedure in machine learning) is systematically higher than cross-validation performance over persons, which is in our view the true application-relevant measure of generalization performance.
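The two evaluation protocols compared in the abstract can be sketched with scikit-learn, where cross-validation over persons corresponds to grouping samples by person ID. The data, classifier, and dimensions below are synthetic stand-ins, not the paper's actual setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, LeaveOneGroupOut, cross_val_score

# Synthetic stand-in: feature vectors, posture labels, person IDs
rng = np.random.default_rng(0)
X = rng.normal(size=(350, 8))
y = rng.integers(0, 10, size=350)       # ten static postures
persons = np.repeat(np.arange(35), 10)  # 35 individuals, 10 samples each

clf = LogisticRegression(max_iter=1000)

# Cross-validation over samples (the standard ML procedure):
# samples from the same person appear in both train and test folds
sample_cv = cross_val_score(
    clf, X, y, cv=KFold(n_splits=5, shuffle=True, random_state=0))

# Cross-validation over persons: train on 34 people, test on the held-out one
person_cv = cross_val_score(
    clf, X, y, groups=persons, cv=LeaveOneGroupOut())
```

On real data the first score tends to be optimistic, since the model has already seen the test person's hand; the grouped variant measures generalization to unseen users.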
We present a lightweight, real-time-capable 3D gesture recognition system on mobile devices for improved human-machine interaction. We use time-of-flight data from a single sensor and implement the whole gesture recognition pipeline on two different devices, demonstrating the potential of integrating these sensors into mobile devices. The main components crop the data to the essentials, compute meaningful features, train and classify via neural networks, and realize a GUI on the device. With our system we achieve recognition rates of up to 98% on a 10-gesture set at frame rates reaching 20 Hz, more than sufficient for real-time applications.
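The crop-features-classify pipeline described above can be outlined in a few lines. The thresholds, the block-average feature, and the small MLP are all illustrative assumptions; the paper's actual feature computation and network are not specified in the abstract:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def crop(frame, near=0.1, far=0.6):
    """Zero out depth values outside an assumed hand distance range."""
    out = frame.copy()
    out[(out < near) | (out > far)] = 0.0
    return out

def features(frame, grid=8):
    """Fixed-size descriptor: mean depth over a coarse grid of blocks
    (a stand-in for the paper's unspecified features)."""
    h, w = frame.shape
    return frame.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3)).ravel()

# Synthetic depth frames standing in for single-sensor ToF data
rng = np.random.default_rng(1)
frames = rng.uniform(0.0, 1.0, size=(200, 64, 64))
labels = rng.integers(0, 10, size=200)  # 10-gesture set

X = np.stack([features(crop(f)) for f in frames])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(X, labels)
pred = clf.predict(X[:5])
```

Keeping the feature vector small (64 values here) is what makes per-frame classification cheap enough for mobile frame rates.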
Given the success of convolutional neural networks (CNNs) during recent years in numerous object recognition tasks, it seems logical to further extend their applicability to three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach that exploits the CNN's ability to generate features automatically and combines it with a novel 3D feature computation technique preserving the local information contained in the data. Experiments are conducted, after an extensive parameter search to optimize the network structure, on a large data set of 600,000 hand posture samples obtained via ToF (time-of-flight) sensors from 20 different persons. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
This contribution presents a novel approach to using Time-of-Flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors provide depth data at high frame rates independent of illumination, making applications possible both indoors and outdoors. This comes at the cost of precision in depth measurements and a comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds, which yields fixed-size input and thus makes deep learning approaches with convolutional neural networks applicable. To increase precision, we introduce several methods to reduce noise and normalize the input, overcoming difficulties with scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
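The rasterization idea, turning a variable-size point cloud into a fixed-size tensor after normalizing away translation and scale, can be sketched as follows. The grid resolution and normalization scheme are assumptions for illustration, not the paper's exact parameters:

```python
import numpy as np

def normalize(points):
    """Center the cloud and scale it into the unit cube to counter
    translation and scaling differences (assumed preprocessing)."""
    p = points - points.mean(axis=0)
    extent = np.abs(p).max()
    return p / extent if extent > 0 else p

def rasterize(points, res=16):
    """Count points per cell of a res x res x res grid, yielding a
    fixed-size tensor usable as CNN input."""
    p = normalize(points)                       # now within [-1, 1]^3
    idx = ((p + 1.0) / 2.0 * (res - 1)).astype(int)
    idx = np.clip(idx, 0, res - 1)
    grid = np.zeros((res, res, res), dtype=np.float32)
    np.add.at(grid, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return grid / max(len(points), 1)           # occupancy fractions

# Synthetic cloud standing in for a ToF hand capture
cloud = np.random.default_rng(2).normal(size=(5000, 3))
voxels = rasterize(cloud)
```

Because every cloud maps to the same tensor shape regardless of point count, a standard CNN can consume the result directly, which is the key to the fixed-size-input property mentioned above.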