RELEVANCE & RESEARCH QUESTION: The effectiveness of Virtual Reality (VR) and Augmented Reality (AR) systems as practical teaching methods is currently virtually uncharted. Proof that these systems can provide the same or better learning outcomes than a text-instructed practical task would represent a significant benefit for educational activities. METHODS & DATA: To gauge this effectiveness, an experimental study with three conditions (VR, AR and a real setup) was conducted to teach participants how to assemble a standard computer. Each condition was divided into two parts: in part one, participants were confronted with their specific scenario; in part two, participants had to complete a real practical task one week later. The learning outcome was determined by the naming of hardware parts, a quiz querying their function, and the correct assembly of the components, in addition to the time needed. Apart from mere performance, the acceptance of such applications in an academic context and differences in evaluation between men and women were of interest. RESULTS: Regarding the learning outcome, participants from the VR condition outperformed those who learned from the real setup (M=10.0, SD=0.0 [virtual reality] vs. M=8.95, SD=1.27 [control]). Furthermore, the assembly duration assessment demonstrated that participants in the VR group completed their tasks 6.62% faster than the control group. Regarding the identification of hardware parts, both groups showed a significant improvement in the post-condition compared to the first test run, indicating learning progress. However, since the VR group achieved a better average score and a larger difference between the trials, the results indicate better performance by participants assigned to the VR condition. ADDED VALUE: The results revealed that VR and AR systems can exceed a text-based approach in terms of learning outcome. The effectiveness of these systems implies a major benefit for the educational landscape, as learning content that is not realizable in terms of cost, distance or logistics could be designed as an immersive and engaging experience.
Today, nearly every student owns a reasonably powerful mobile device that can be integrated into learning scenarios. One drawback of the fast evolution of such devices is the heterogeneity they usually bring with them. This paper provides an overview of how rich mobile learning scenarios can be implemented in a platform-independent way on the basis of HTML5 and JavaScript. The paper presents a mobile learning application based on the principles of Situated Learning, developed entirely in HTML5. It also presents the results of tests performed with the application, which were aimed at finding out the difference in performance users perceived compared with the native desktop version of the application, and the added value that mobility introduces into learning activities.
Blended learning offers learning solutions for higher educational institutions facing the industrial revolution 4.0. In this study, we investigated the factors influencing student perceptions of blended learning based on gender-specific differences in Indonesia. We applied a research model to systematically assess the effect of design features on the effectiveness of blended learning indicators (intrinsic motivation and student satisfaction). Moreover, we evaluated the research model for both genders separately. Based on a quantitative survey of 223 Indonesian students, our study confirms that the design features significantly influence the effectiveness of blended learning for both male and female students.
Applications and research efforts in Mobile Learning constitute a growing field in the area of Technology Enhanced Learning. However, despite a permanent increase in mobile internet accessibility and the availability of mobile devices over the past years, a mobile learning environment that is easy to use, widely accepted by teachers and learners, uses widespread off-the-shelf software, and covers various application scenarios and mobile devices is not yet available. In this paper, we address this issue by presenting an approach and technical framework called "Mobile Contributions" ("MoCo"). MoCo enables learners to create and send contributions through various channels (including third-party solutions like Twitter, SMS and Facebook), which are collected and stored in a central repository for processing, filtering and visualization on a shared display. A set of different learning and teaching scenarios that can be realized with MoCo is described, along with first experiences and insights gained from qualitative and quantitative evaluation.
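The channel-to-repository pipeline described above can be sketched as follows. This is a minimal, hypothetical illustration of the idea in JavaScript, not the actual MoCo implementation; the names (`Repository`, `add`, `filter`) and the record fields are assumptions made for the example.

```javascript
// Hypothetical sketch of a MoCo-style contribution pipeline: messages from
// several input channels are normalized into a common record and collected
// in a central repository, where they can be filtered before visualization
// on a shared display. All identifiers here are illustrative, not MoCo's API.

class Repository {
  constructor() {
    this.contributions = [];
  }

  // Normalize an incoming message from any channel into a common record.
  add(channel, author, text) {
    const contribution = {
      channel, // e.g. "sms", "twitter", "facebook"
      author,
      text,
      receivedAt: new Date().toISOString(),
    };
    this.contributions.push(contribution);
    return contribution;
  }

  // Simple filtering step before display on the shared screen.
  filter(predicate) {
    return this.contributions.filter(predicate);
  }
}

// Usage: collect contributions from two channels, then filter by channel.
const repo = new Repository();
repo.add("sms", "learner1", "Answer: 42");
repo.add("twitter", "learner2", "#question What is TEL?");
const fromSms = repo.filter((c) => c.channel === "sms");
console.log(fromSms.length); // prints 1
```

Storing a uniform record per contribution is what makes the later processing, filtering and visualization steps channel-agnostic.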
This paper presents an approach towards a mobile learning environment that is flexible in terms of supported scenarios, supported devices and input channels. The approach makes use of existing and commonly used channels like SMS, Twitter or Facebook to increase the acceptance and ease of use of mobile devices in learning scenarios. Envisaged application scenarios are described along with technical details for their realization.
For highly automated vehicles (AVs), new interaction concepts need to be developed. Even in AVs, the driver might want to intervene and override the automation from time to time. To create this possibility of control, we explore vehicle control through maneuver-based interventions (MBI). We focus on explicit, contactless interaction, which could be beneficial in future AV designs where the driver is not necessarily bound to classical controls. We propose a set of freehand gestures and keywords for voice control derived in a user-centered design process. Further, we discuss the properties, applicability and user impressions of both interaction modalities. Voice control seems to be an efficient way to select a maneuver, and freehand gestures could be used if the voice channel is blocked, e.g., by a conversation with passengers.
In major disaster events, the sheer number of alerts can mean that the available rescue forces no longer suffice to handle the incoming tasks or to meet response-time limits. This paper describes an approach that draws on additional help from the general population, coordinated by a dispatcher in the existing control center. The focus is not on spontaneously organized helpers, but on persons who have registered in the system in advance with a clear skill profile and, where applicable, their own equipment. Particular requirements arise for the dispatchers in the control center, whose additional workload due to the new system must be kept low, as well as for the volunteer helpers, who are alerted via a mobile phone app and are expected to communicate through it. These requirements affect both the system infrastructure and the user interface.
Due to technical progress in speech recognition and processing, speech is becoming increasingly popular as a form of interaction in vehicles, e.g., for operating the infotainment system. Controlling semi-automated vehicles by voice has so far received little research attention. Under the basic assumption that voice control is suitable for semi-autonomous vehicles, the goal of this work is to identify user expectations and specific requirements for voice control of the fundamental driving maneuvers. Requirements and voice commands are derived from the results of an expert workshop and an exploratory video study.
Self-driving cars will relieve the human of the driving task. Nevertheless, the human might want to intervene in the driving process and thus needs the possibility to control the car. Switching back to fully manual control is uncomfortable once one has become passive and engaged in non-driving-related activities. A more comfortable way is controlling the car with elemental maneuvers (e.g., "turn left" or "stop"). Whereas touch interaction concepts exist, contactless interaction through voice and mid-air gestures has not yet been explored for maneuver-based car control. In this paper, we therefore compare the general eligibility of voice and mid-air gestures with touch interaction as the primary maneuver selection mechanism in a driving simulator study. Our results show high usability for all modalities. Contactless interaction leads to a more positive emotional perception of the interaction, yet mid-air gestures lead to a higher task load. Overall, voice and touch control are preferred over mid-air gestures by most users.