Today almost every student owns a reasonably powerful mobile device that can be integrated into learning scenarios. One drawback of the fast evolution of such devices is the heterogeneity they usually bring with them. This paper provides an overview of how rich mobile learning scenarios can be implemented platform-independently on the basis of HTML5 and JavaScript. The paper presents a mobile learning application based on the principles of Situated Learning and developed entirely in HTML5. It also presents the results of tests performed with the application, which were aimed at finding out the difference in performance users perceived compared with the native desktop version of the application, and the added value that mobility introduces into learning activities.
Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine interaction (HMI) in general and human-robot interaction (HRI) in particular. It provides a means to incorporate non-verbal feedback in the course of interaction. Humans express their emotional and affective state rather unconsciously, exploiting their different natural communication modalities such as body language, facial expression and prosodic intonation. In order to achieve applicability in realistic HRI settings, we develop person-independent affective models. In this paper, we present a study on multimodal recognition of emotions from such auditive and visual cues for interaction interfaces. We recognize six basic emotion classes plus a neutral class for talking persons. The focus hereby lies on the simultaneous online visual and acoustic analysis of speaking faces. A probabilistic decision-level fusion scheme based on Bayesian networks is applied to draw benefit from the complementary information in the acoustic and visual cues. We compare the performance of our state-of-the-art recognition systems for the separate modalities to the improved results after applying our fusion scheme on both the DaFEx database and real-life data captured directly from a robot. We furthermore discuss the results with regard to the theoretical background and future applications.
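The core idea of decision-level fusion can be sketched compactly. The snippet below combines per-class posteriors from two independent modality classifiers under a conditional-independence assumption (a naive-Bayes style combination); the paper's actual Bayesian network, class labels and probability values here are not taken from the source and serve only as an illustration.

```python
# Decision-level fusion of two modality classifiers, assuming the audio
# and visual cues are conditionally independent given the emotion class.
# With a uniform prior, p(c | a, v) is proportional to
# p(c | a) * p(c | v) / p(c), renormalized over all classes.

EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "neutral"]

def fuse(audio_post, visual_post, prior=None):
    """Combine per-class posteriors from the acoustic and visual classifiers."""
    n = len(audio_post)
    prior = prior or [1.0 / n] * n
    joint = [a * v / p for a, v, p in zip(audio_post, visual_post, prior)]
    z = sum(joint)
    return [j / z for j in joint]

# Illustrative values: audio weakly favours "anger", vision strongly so.
audio  = [0.30, 0.10, 0.10, 0.10, 0.10, 0.10, 0.20]
vision = [0.60, 0.05, 0.05, 0.05, 0.05, 0.05, 0.15]
fused = fuse(audio, vision)
print(EMOTIONS[max(range(len(fused)), key=fused.__getitem__)])  # anger
```

Because the two modalities agree here, the fused posterior is more confident than either cue alone; when they disagree, the product dampens both, which is one way such a fusion scheme exploits complementary information.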
In the presented work we compare machine learning techniques in the context of lane change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches using differing feature combinations in order to identify appropriate features, the best feature combination, and the most appropriate machine learning technique for the described task. Based on the data acquired from human drivers in the traffic simulator NISYS TRS 1, we trained a recurrent neural network, a feed-forward neural network and a set of support vector machines. In the subsequent test drives the system was able to predict lane changes up to 1.5 seconds in advance.
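The prediction setup can be illustrated with a minimal classifier over driving features. The sketch below is not the paper's recurrent network, feed-forward network or SVM; it is a hand-rolled perceptron trained on synthetic feature vectors (lateral offset and gap to the lead vehicle are assumed stand-ins for the simulator features) purely to show the feature-to-label structure of the task.

```python
# Toy lane-change classifier: a perceptron over two hypothetical
# features plus a bias term. Drivers about to change lanes are modelled
# as drifting laterally while closing in on the lead vehicle.
import random

random.seed(0)

def make_sample(lane_change):
    if lane_change:  # large lateral offset, small gap
        return [random.uniform(0.5, 1.0), random.uniform(0.0, 0.3), 1.0]
    return [random.uniform(0.0, 0.3), random.uniform(0.5, 1.0), 1.0]

data = [(make_sample(y), y) for y in [0, 1] * 200]

w = [0.0, 0.0, 0.0]
for _ in range(20):                      # perceptron training epochs
    for x, y in data:
        pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
        for i in range(3):               # weights change on mistakes only
            w[i] += (y - pred) * x[i]

accuracy = sum(
    (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) == y
    for x, y in data
) / len(data)
print(accuracy)
```

In the study itself, the temporal aspect (predicting a change up to 1.5 seconds ahead) is what motivates recurrent architectures, which can integrate feature histories rather than single snapshots as done here.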
A simulated reality environment that incorporates humans and physically plausibly behaving robots, provides natural interaction channels, and offers the option to link the simulator to real perception and motion is gaining importance for the development of cognitive robotic systems that interact and collaborate intuitively. In the present work we introduce a head tracking system which is utilized to incorporate human ego motion into the simulated environment, improving immersion in the context of human-robot collaborative tasks.
This paper describes a system which allows platform-independent access to quizzes of the popular learning platform Moodle. The main focus is on the software architecture, which is implemented on the basis of platform-independent technologies such as Web Services, HTML5 and JavaScript. Another aspect is the user interface, which was developed with the goal of running on a broad range of mobile devices, from small mobile phones up to large tablets.
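The client side of such an architecture can be sketched as follows: the HTML5/JavaScript front end calls a web service over HTTP and renders the returned quiz data. The endpoint path, token parameter and JSON payload shape below are hypothetical illustrations, not Moodle's actual web-service API.

```python
# Sketch of a quiz web-service client: build the request URL the front
# end would fetch, then parse a (hypothetical) JSON quiz response.
import json
from urllib.parse import urlencode

def build_quiz_request(base_url, token, quiz_id):
    """Compose the web-service URL for one quiz (hypothetical endpoint)."""
    query = urlencode({"wstoken": token, "quizid": quiz_id, "format": "json"})
    return f"{base_url}/webservice/quiz?{query}"

def parse_quiz(payload):
    """Extract question texts from the assumed JSON response shape."""
    quiz = json.loads(payload)
    return [q["text"] for q in quiz["questions"]]

url = build_quiz_request("https://moodle.example.org", "abc123", 7)
sample = '{"questions": [{"text": "What is HTML5?"}, {"text": "What is a Web Service?"}]}'
questions = parse_quiz(sample)
print(url)
print(questions)
```

Keeping the transport format to plain HTTP and JSON is what makes the front end portable across devices: any browser with HTML5 and JavaScript support can issue the request and render the result.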
The WWW is the killer app of the internet. In recent years an enormously increasing number of Web Applications has emerged as a means of human-to-computer interaction, allowing visitors of a website to interact with it. Additionally, the approach of Web Services was introduced in order to allow computer-to-computer interaction on the basis of standardized protocols. This paper shows how the gap between Web Applications and Web Services can be closed by making Web Applications available for computer-to-computer interaction through a systematic approach.
Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures, characterize the critical bifurcations in DNFs, as well as the possible coupling structures among DNFs. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation.
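The self-stabilizing dynamics underlying such an architecture can be demonstrated with a one-dimensional dynamic neural field (the Amari equation), the lower-dimensional building block from which the three-dimensional fields are composed. All parameters below are illustrative, not those of the robotic system.

```python
# Minimal 1-D dynamic neural field: du/dt = (-u + h + S + conv) / tau,
# with a local-excitation / global-inhibition interaction kernel.
# A localized stimulus drives the field through the detection
# bifurcation into a self-stabilized activation peak.
import math

N, DX, DT = 61, 1.0, 0.1
H, TAU = -2.0, 1.0            # resting level and time constant

def kernel(d, a_exc=3.0, s_exc=3.0, g_inh=0.5):
    # Gaussian local excitation minus constant global inhibition.
    return a_exc * math.exp(-d * d / (2 * s_exc ** 2)) - g_inh

def sigmoid(u, beta=4.0):
    return 1.0 / (1.0 + math.exp(-beta * u))

u = [H] * N
stim = [3.0 if 25 <= i <= 35 else 0.0 for i in range(N)]  # localized input

for _ in range(200):          # Euler integration of the field equation
    act = [sigmoid(ui) for ui in u]
    for i in range(N):
        conv = sum(kernel((i - j) * DX) * act[j] for j in range(N)) * DX
        u[i] += DT / TAU * (-u[i] + H + stim[i] + conv)

peak = max(range(N), key=u.__getitem__)
print(peak, round(u[peak], 2))
```

The resulting peak sits over the stimulated region and remains above threshold while sites far from it are suppressed below resting level by the global inhibition; this bistability is what lets such fields act as working memory and as the stable states between which the architecture's bifurcations switch.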