The continuous evolution of learning technologies, combined with changes in the ubiquitous learning environments in which they operate, results in dynamic and complex requirements that are challenging to meet. Because these systems evolve over time, it is difficult to adapt to constantly changing requirements. Existing approaches in the field of Technology Enhanced Learning (TEL) generally do not address these issues and fail to adapt to such dynamic situations. In this chapter, we investigate the notion of an adaptive and adaptable architecture as a possible solution to these challenges. We conduct a literature survey of the state of the art and state of practice in this area. The outcome of these efforts is an initial model of a domain-specific architecture to tackle the issues of adaptability and adaptiveness. To illustrate these ideas, we provide a number of scenarios where this architecture can be or already is applied. Our proposed approach serves as a foundation for addressing future ubiquitous learning applications in which new technologies constantly emerge and new requirements evolve.
Coming out of the labs, the first robots are currently appearing on the consumer market. Initially they target rather simple application scenarios, ranging from entertainment to home convenience, but one can expect that they will soon capture more complex areas. These robots will have an ever higher level and broader range of functional competence, and will collaborate and communicate interactively with their human users. All of this requires considerable cognitive abilities on the robot's side and appropriate man-machine interaction technologies. Apart from the further development of individual functions and technologies, it is crucial to build and evaluate fully integrated systems. This paper describes our approach to constructing a robotic assistance system. We present experience with an integrated technology demonstration and the exposure of the integrated system to the public.
This chapter describes our research efforts related to the design of mobile learning (m-learning) applications in cloud-computing (CC) environments. Many cloud-based services can be used and integrated in m-learning scenarios; hence, there is a rich pool of applications that can easily be designed and deployed within the context of cloud-based services. Here, we present two cloud-based approaches: a flexible framework for the easy generation and deployment of mobile learning applications by teachers, and a flexible contextualization service to support personalized learning environments for mobile learners. The framework supports teachers in designing mobile applications and automatically deploys them, allowing teachers to create their own m-learning activities supported by mobile devices. The contextualization service is proposed to improve the content delivery of learning objects (LOs); it adapts the learning content and the mobile user interface (UI) to the current context of the user. Together, this leads to a powerful and flexible framework for the provisioning of potentially ad hoc mobile learning scenarios. We describe the design and implementation of the two proposed cloud-based approaches together with scenario examples. Furthermore, we discuss the benefits of using flexible and contextualized cloud applications in mobile learning scenarios. Hereby, we contribute to this growing field of research by exploring new ways of designing and using flexible and contextualized cloud-based applications that support m-learning.
This chapter describes our current research efforts related to the contextualization of learners in mobile learning activities. Substantial research in the field of mobile learning has explored aspects of contextualized learning scenarios. However, new ways of interpreting and considering the contextual information of mobile learners are necessary. This chapter provides an overview of the state of the art of innovative approaches for supporting contextualization in mobile learning. Additionally, we describe the design and implementation of a flexible multi-dimensional vector space model for organizing and processing contextual data, together with visualization tools for further analysis and interpretation. We also present a study with outcomes and insights on the usage of the contextualization support for mobile learners. To conclude, we discuss the benefits of using contextualization models for learners in different use cases. Moreover, we illustrate how the proposed contextual model can easily be adapted and reused for different use cases in mobile learning scenarios and potentially other mobile fields.
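The multi-dimensional vector space model mentioned above can be illustrated with a minimal sketch: each learner's context is encoded as a numeric vector, and vectors are compared by cosine similarity. The dimension names and values here are purely illustrative assumptions, not taken from the chapter's actual model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two context vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical context dimensions: [location, time_of_day, noise_level, motion]
learner_a = [0.2, 0.8, 0.1, 0.0]
learner_b = [0.25, 0.75, 0.2, 0.1]
similarity = cosine_similarity(learner_a, learner_b)  # close to 1.0: similar contexts
```

Organizing contextual data as vectors makes it straightforward to cluster learners with similar contexts or to select content whose context profile is nearest to the learner's current one.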
We present a study on 3D-based hand pose recognition using a new generation of low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. We investigate the performance of different 3D descriptors, as well as the fusion of two ToF sensor streams. By basing a data fusion strategy on the fact that multilayer perceptrons can produce normalized confidences individually for each class, and by designing information-theoretic online measures for assessing the confidence of decisions, we show that appropriately chosen fusion strategies can improve overall performance to a very satisfactory level. Real-time capability is retained, as the 3D descriptors, the fusion strategy, and the online confidence measures are all computationally efficient.
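The fusion idea described above can be sketched in a few lines: when each sensor stream's classifier (e.g. an MLP with normalized outputs) yields per-class confidences, a weighted sum rule combines them before the decision. This is a generic late-fusion sketch under that assumption; the weighting and the numbers are illustrative, not the paper's actual fusion strategy.

```python
def fuse_confidences(conf_a, conf_b, weight_a=0.5):
    """Weighted sum-rule fusion of two normalized per-class confidence vectors."""
    return [weight_a * a + (1.0 - weight_a) * b for a, b in zip(conf_a, conf_b)]

stream_a = [0.70, 0.20, 0.10]  # sensor 1: confident in class 0
stream_b = [0.40, 0.50, 0.10]  # sensor 2: weakly prefers class 1
fused = fuse_confidences(stream_a, stream_b)
decision = max(range(len(fused)), key=fused.__getitem__)  # argmax over fused confidences
```

Because the fusion is a per-class weighted average, it adds only a handful of arithmetic operations per frame, which is consistent with the real-time requirement stated in the abstract.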
Nowadays, teachers and students utilize different ICT devices for conducting innovative educational activities from anywhere at any time. The enactment of these activities relies on robust communication and computational infrastructures that support technological devices, enabling better access to educational resources and pedagogical scaffolds wherever and whenever necessary. In this paper, we present EDU.Tube: an interactive environment based on web and mobile solutions that enables teachers and students to author and incorporate educational interactions at specific moments along the timeline of YouTube video clips. Teachers and students can later experience these authored artefacts while interacting from their stationary or mobile devices. We describe our efforts related to the design, deployment, and evaluation of an educational activity supported by the EDU.Tube environment. Furthermore, we illustrate the specific teachers' and students' efforts practiced along the different phases of this educational activity. The evaluation of this activity and its results are presented, followed by a discussion of these findings as well as some recommendations for future research further elaborating on EDU.Tube's aspects in relation to learning analytics.
As more and more nuclear installations face the end of their lifetime, decommissioning financing issues gain importance in political discussions. The financing needs along the uranium value chain are huge. Following the polluter-pays principle, the operator of a nuclear installation is expected to accumulate all the necessary decommissioning funds during the operating life of its facility. However, since decommissioning experience is still limited, since the decommissioning process can take several decades, and since the time period between the shutdown of a nuclear installation and the final disposal of radioactive waste can be very long, there are substantial risks that costs will be underestimated and that the liable party and the accumulated funds might no longer be available when decommissioning activities have to be paid for. Nevertheless, these financing risks can be reduced by implementing transparent, restricted, well-governed decommissioning financing schemes with a system of checks and balances aimed at avoiding negative effects stemming from conflicts of interest.
This article describes the current state of our research on anthropomorphic robots. Our aim is to familiarize the reader with the two basic principles our work is based on: anthropomorphism and dynamics. The principle of anthropomorphism means a restriction to human-like robots that use vision, audition, and touch as their only senses, so that natural man-machine interaction is possible. The principle of dynamics stands for the mathematical framework on the basis of which our robots generate their behavior. Both principles are rooted in the idea that concepts of biological behavior and information processing can be exploited to control technical systems.
As service robotics research advances rapidly, the availability of objective, reproducible test specifications and evaluation criteria, as well as of benchmarking, is increasingly felt to be desirable in the community. As a first step towards benchmarking, in this paper we propose a formalization of tests, exemplified for domestic grasp&place tasks. The underlying philosophy of our approach is to confront the robot system, in a black-box manner, with the requirements of a “rational customer”, and to characterize the performance of the system objectively by the outcomes of a test suite tailored to this scenario. A formalized single test description consists of a clear and reproducible specification of the robot's task and its full context on the one hand, and a number of figures which objectively characterize the test result on the other. We illustrate this methodology for the domestic assistance scenario.
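The structure of such a formalized test description, a reproducible task/context specification plus objective outcome figures, could be encoded as a simple record. The field names and example values below are hypothetical illustrations, not the paper's actual formalization.

```python
from dataclasses import dataclass, field

@dataclass
class TestDescription:
    task: str                                      # what the robot must do
    context: dict                                  # full, reproducible scene specification
    outcomes: dict = field(default_factory=dict)   # objective figures filled in after the run

# Hypothetical grasp&place test instance:
test = TestDescription(
    task="grasp cup from table and place it on tray",
    context={"object": "cup", "surface": "table", "lighting": "indoor"},
)
# Recorded after executing the test in a black-box manner:
test.outcomes = {"success": True, "time_s": 42.0, "grasp_attempts": 2}
```

Keeping the specification and the result figures in one machine-readable record is what makes a test suite comparable across robot systems.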
CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator's speech, gestures, and gaze, and for object recognition. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.