004 Informatik
Document Type: Part of a Book (19)
Is part of the Bibliography: no (19)
Keywords
- Design Challenges (1)
- Design Principles (1)
- Digitization (1)
- Indonesian Higher Education (1)
- Information technology (1)
- Communication (1)
- Media studies (1)
- Mobility (1)
- Online Learning (1)
- Vocational Education (1)
Massive open online courses (MOOCs) are becoming increasingly popular. These course formats are typically highly flexible and attract large groups of learners from heterogeneous backgrounds. So far, research on success factors for low dropout rates and high learner satisfaction in MOOCs is scarce. In this chapter, we describe experiences with a large online course offered to students of two large German universities. Based on theory drawn from a social-psychological perspective on the relevance of social interaction for learning, we describe the background, structure, and specific elements of the MOOC-like course. We outline evaluation results of both small-group collaboration (in workshops) and mass interaction (via forum and wiki usage) as well as results of the general evaluation of the overall course concept. We argue that the specific mixture of small- and large-group interaction as well as teacher- and learner-generated content is especially promising with regard to satisfaction, learning outcomes, and course completion rates.
This work describes methods for the individual adaptation of a dialog system. First, an automatic, real-time-capable visual estimation of user attention for face-to-face human-machine interaction is described. Furthermore, an emotion estimation approach is presented that combines a visual and an acoustic method. Both the attention estimation and the visual emotion estimation are based on Active Appearance Models (AAMs). For the attention estimation, Multilayer Perceptrons (MLPs) are used to map the Active Appearance Parameters (AAM parameters) onto the current head pose; the chronology of head poses is then classified as attention or inattention. For the visual emotion estimation, the AAM parameters are classified by a Support Vector Machine (SVM). The acoustic emotion estimation also uses an SVM to classify emotion-related audio signal features into five basic emotions (neutral, happy, sad, anger, surprise). A Bayes network then combines the results of the visual and the acoustic estimation at the decision level. Both the visual attention estimation and the emotion estimation are intended for use in service robotics to allow a more natural and human-like dialog. Furthermore, head pose is efficiently interpreted as head nodding or shaking by means of adaptive statistical moments. Since the head movement of many people with dementia is restricted, they often only use their eyes to look around; for that reason, this work also examines a simple gaze estimation using an ordinary webcam. Moreover, a full-body user re-identification method is described that allows an individual state estimation of several people in highly dynamic situations. The appearance-based method enables fast re-identification of people over a short time span, allowing the use of individual parameters.
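The decision-level combination described above can be illustrated with a minimal sketch. This is not the authors' Bayes-network implementation; it assumes the visual and acoustic classifiers each emit a per-class probability vector over the five basic emotions and fuses them under a naive conditional-independence assumption (all names and values below are illustrative):

```python
# Hypothetical sketch of decision-level fusion of a visual and an
# acoustic emotion classifier under a naive-Bayes independence
# assumption (illustrative, not the authors' implementation).

EMOTIONS = ["neutral", "happy", "sad", "anger", "surprise"]

def fuse(visual_probs, acoustic_probs, prior=None):
    """Combine two per-class probability vectors at the decision level.

    Assuming the visual and acoustic observations are conditionally
    independent given the emotion, the fused posterior is proportional
    to prior * p_visual * p_acoustic, renormalized to sum to 1.
    """
    if prior is None:
        prior = [1.0 / len(EMOTIONS)] * len(EMOTIONS)
    joint = [p * v * a for p, v, a in zip(prior, visual_probs, acoustic_probs)]
    total = sum(joint)
    return [j / total for j in joint]

def classify(visual_probs, acoustic_probs):
    """Return the emotion label with the highest fused posterior."""
    fused = fuse(visual_probs, acoustic_probs)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
```

A full Bayes network can additionally model dependencies between the two modalities; the sketch above shows only the simplest independent-fusion case.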
This chapter describes our current research efforts related to the contextualization of learners in mobile learning activities. Substantial research in the field of mobile learning has explored aspects related to contextualized learning scenarios. However, new ways of interpreting and taking into account the contextual information of mobile learners are needed. This chapter provides an overview of the state of the art of innovative approaches for supporting contextualization in mobile learning. Additionally, we describe the design and implementation of a flexible multi-dimensional vector space model to organize and process contextual data, together with visualization tools for further analysis and interpretation. We also present a study with outcomes and insights on the usage of the contextualization support for mobile learners. To conclude, we discuss the benefits of using contextualization models for learners in different use cases. Moreover, we illustrate how the proposed contextual model can easily be adapted and reused for different use cases in mobile learning scenarios and potentially other mobile fields.
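The idea of a multi-dimensional vector space for contextual data can be sketched as follows. The dimension names, the normalization to the 0..1 range, and the use of cosine similarity for matching are illustrative assumptions, not the specific model described in the chapter:

```python
# Hypothetical sketch of a multi-dimensional context vector for a
# mobile learner. Dimension names and the similarity measure are
# illustrative assumptions, not the chapter's concrete model.
import math

DIMENSIONS = ["location", "time_of_day", "noise_level", "device_screen"]

def context_vector(features):
    """Map a dict of normalized contextual features (0..1) onto a
    fixed-order vector over the model's dimensions; missing
    dimensions default to 0."""
    return [float(features.get(d, 0.0)) for d in DIMENSIONS]

def cosine_similarity(a, b):
    """Similarity between two context vectors (1.0 = identical
    direction, 0.0 = unrelated)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0
```

Such a similarity measure could, for instance, be used to select the stored learner context most similar to the current one before adapting content delivery.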
Safety-critical human-computer interaction is a highly relevant topic, not only today but also in the future. A textbook such as this one can only ever capture a snapshot of the state of the art. Nevertheless, an attempt can be made to identify current trends and venture an outlook into the future. That is exactly what this chapter aims to do: it predicts future developments and attempts to put them into proper context. This was done not only by the editor but also by surveying the numerous authors contributing to the textbook. Accordingly, in addition to an outlook on foundations and methods, safety-critical interactive systems and safety-critical cooperative systems are also covered.
This contribution presents a high-level-language-programmable system for real-time full-frame interpretation of naturally illuminated scene sequences at video rate. The following modules and subsystems are described in detail:
- a high-dynamic-range, pixel-locally auto-adaptive CMOS camera with approx. 120 dB brightness dynamics (20 bits/pixel)
- a high-level-language-programmable systolic array processor (for the pixel-level processing modules) in PCI card format, including an optimizing compiler, simulator, and emulator
- system process frameworks under Linux on the host computers used for the real-time applications (e.g. DEC/Alpha or Intel/Pentium)
- a prototypical application for vision-based ego-motion estimation (translation direction, rotation rates)
- a prototypical automotive application for real-time detection and mapping of the road and lane course with partial monocular 3D reconstruction
- prototypical applications for the classification of traffic-relevant obstacles (road users)
Nowadays, teachers and students utilize different ICT devices for conducting innovative educational activities from anywhere at any time. The enactment of these activities relies on robust communication and computational infrastructures that support technological devices, enabling better accessibility to educational resources and pedagogical scaffolds wherever and whenever necessary. In this paper, we present EDU.Tube: an interactive environment that relies on web and mobile solutions offered to teachers and students for authoring and incorporating educational interactions at specific moments along the timeline of selected YouTube video clips. Teachers and students can later experience these authored artefacts while interacting from their stationary or mobile devices. We describe our efforts related to the design, deployment, and evaluation of an educational activity supported by the EDU.Tube environment. Furthermore, we illustrate the specific efforts of teachers and students during the different phases of this educational activity. The evaluation of this activity and its results are presented, followed by a discussion of these findings as well as some recommendations for future research further elaborating on EDU.Tube's aspects in relation to learning analytics.
This chapter describes our research efforts related to the design of mobile learning (m-learning) applications in cloud-computing (CC) environments. Many cloud-based services can be used or integrated in m-learning scenarios; hence, there is a rich source of applications that can easily be designed and deployed within the context of cloud-based services. Here, we present two cloud-based approaches: a flexible framework for the easy generation and deployment of mobile learning applications for teachers, and a flexible contextualization service to support personalized learning environments for mobile learners. The framework supports teachers in designing mobile applications and automatically deploys them, allowing teachers to create their own m-learning activities supported by mobile devices. The contextualization service is proposed to improve the content delivery of learning objects (LOs). This service adapts the learning content and the mobile user interface (UI) to the current context of the user. Together, this leads to a powerful and flexible framework for the provisioning of potentially ad hoc mobile learning scenarios. We describe the design and implementation of the two proposed cloud-based approaches together with scenario examples. Furthermore, we discuss the benefits of using flexible and contextualized cloud applications in mobile learning scenarios. Hereby, we contribute to this growing field of research by exploring new ways of designing and using flexible and contextualized cloud-based applications that support m-learning.
Current projects at the Institut für Neuroinformatik in Bochum deal with the analysis of road traffic scenes by means of computer vision [12]. Due to the constraints imposed by the natural environment, this places high demands on the algorithms to be developed. Specifically, the goal is to extract road users from video images and to further attribute the resulting object hypotheses (e.g. object class, distance, speed, hazard potential with respect to the intended ego-trajectory, etc.) in order to obtain as accurate a description of the environment as possible for use in in-vehicle driver assistance systems. Not only the great variety of different environmental scenarios but also the high degree of safety required by the task call for a broadband and flexible overall system [6]. A proposed solution is discussed in the following.
Systems for automated image analysis are useful for a variety of tasks, and their importance is still growing due to technological advances and increasing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a high level of performance. Fully or partly autonomously guided vehicles, particularly in road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system that extracts the important information from an image taken by a CCD camera installed at the rear-view mirror of a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main benefit of this approach lies in the integrative coupling of different algorithms that provide partly redundant information.
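The stabilizing effect of coupling partly redundant modules can be illustrated with a minimal sketch. This is not the system's actual architecture; it assumes a per-frame detector emitting object ids and shows one simple coupling rule, where a hypothesis is only confirmed once the detector has re-found it in several consecutive frames:

```python
# Hypothetical sketch of coupling a per-frame object detector with a
# simple temporal tracker: a hypothesis is confirmed only after being
# re-detected in several consecutive frames. Illustrative only, not
# the system's actual integration/fusion architecture.

class HypothesisTracker:
    def __init__(self, confirm_after=3):
        self.confirm_after = confirm_after
        self.hits = {}  # object id -> number of consecutive detections

    def update(self, detections):
        """Feed the set of ids detected in the current frame; return
        the ids whose hypothesis is considered confirmed."""
        for obj in detections:
            self.hits[obj] = self.hits.get(obj, 0) + 1
        for obj in list(self.hits):
            if obj not in detections:
                del self.hits[obj]  # hypothesis lost -> reset its count
        return {o for o, n in self.hits.items() if n >= self.confirm_after}
```

In the real system, tracking and classification additionally contribute their own evidence, so a hypothesis can be confirmed or rejected by fusing several partly redundant cues rather than by detection counts alone.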