We propose a new approach to object detection based on the fusion of texture and edge information. A self-organizing Kohonen map is used as the coupling element of the different representations. Extending the proposed architecture with further features, even features not derived from vision modules, is therefore straightforward: it reduces to a redefinition of the local feature vectors and a retraining of the network structure. The resulting hypotheses of object locations generated by the detection process are finally inspected by a neural network classifier based on co-occurrence matrices.
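As a hedged illustration of the coupling idea, the following sketch trains a tiny one-dimensional Kohonen map on synthetic fused feature vectors; the map size, dimensionality and sample values are invented for the example and are not taken from the paper.

```python
import math
import random

# Minimal self-organizing (Kohonen) map sketch: a 1-D map of units, each
# holding a prototype for a combined texture+edge feature vector.
# All names and parameters are illustrative, not the paper's values.

random.seed(0)
MAP_SIZE, DIM = 10, 4          # 10 map units, 4-D fused feature vectors
weights = [[random.random() for _ in range(DIM)] for _ in range(MAP_SIZE)]
SAMPLES = [[0.9, 0.8, 0.9, 0.7],   # object-like texture/edge response
           [0.1, 0.2, 0.1, 0.2]]   # background-like response

def dist2(w, x):
    return sum((wi - xi) ** 2 for wi, xi in zip(w, x))

def best_matching_unit(x):
    """Index of the unit whose prototype is closest to feature vector x."""
    return min(range(MAP_SIZE), key=lambda i: dist2(weights[i], x))

def quantization_error():
    """Mean squared distance of each sample to its winning prototype."""
    return sum(dist2(weights[best_matching_unit(x)], x)
               for x in SAMPLES) / len(SAMPLES)

def train_step(x, lr=0.1, sigma=2.0):
    """Move the winner and its map neighbours towards sample x."""
    bmu = best_matching_unit(x)
    for i in range(MAP_SIZE):
        h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))  # neighbourhood
        weights[i] = [w + lr * h * (xi - w) for w, xi in zip(weights[i], x)]

err_before = quantization_error()
for _ in range(200):
    for x in SAMPLES:
        train_step(x)
err_after = quantization_error()
print(round(err_before, 3), "->", round(err_after, 3))  # error shrinks
```

Retraining with redefined feature vectors, as the abstract notes, only changes `DIM` and the training samples; the map mechanics stay untouched.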
In this article we present a system for coupling different base algorithms and sensors for segmentation. Three different solutions for image segmentation by fusion are described and compared, and results are shown. The fusion of base algorithms with color information and a sensor-fusion process of an optical and a radar sensor, including feedback over time, is realized. A feature-in, decision-out fusion process is solved. For the fusion process, a multilayer perceptron (MLP) with one hidden layer is used as the coupling net. The activity of the output neuron represents the membership of each pixel in an initial segment.
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror of a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.
Analysis of dynamic scenes
(1999)
In this article, the analysis of dynamic scenes is presented within a flexible architecture for solving driver assistance tasks in motor vehicles. Solving different tasks with related approaches requires a high degree of modularity and flexibility; only then can the given tasks be solved optimally with the available algorithms. The presented architecture performs an object-related analysis of sensor data, a behavior-based scene interpretation and a behavior planning. A global knowledge base, on which each individual module operates, contains the description of physical relationships, behavior rules for road traffic, as well as object and scene knowledge. External knowledge (e.g., GPS, the Global Positioning System) can also be integrated into the knowledge base. As an application example of the behavior planning, an intelligent cruise control has been realized.
To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in recent years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map), searching for object-related information (positions, types of objects, etc.). The knowledge base comprises static and dynamic knowledge. It consists of a set of rules (e.g., traffic rules, physical laws) and additional information (e.g., GPS, lane information), and it is implicitly used by the algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects it for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (e.g., a bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task. The influence of behavior planning on the behavior of the guided vehicle is limited to advice, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off of this architecture.
Driver assistance systems are used to take over action sequences from the driver of a motor vehicle. These action sequences are defined by a task that is handed over to the driver assistance system by the driver or solved by the system itself. For complex driver assistance systems, autonomous navigation in road traffic is envisaged. A new method is presented that can perform the motion control of an autonomous vehicle; it influences the steering angle and the velocity. For this task, a dynamical approach from the field of neural fields is chosen. Relevant attributes for the course of the drive at different levels of abstraction can thereby be processed in a simple (additive) manner.
The behavior planning of a vehicle in real-world traffic is a difficult problem. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. Ultimately, however, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper, a behavior planning for a driver assistance system aimed at cruise control is proposed. In this system, the controlled variables are determined by evaluating the dynamics of two one-dimensional neural fields. The stimuli of the fields are determined according to sensor information produced by a simulation environment.
To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in recent years. Fully or partly autonomously guided vehicles, particularly for road traffic, pose high demands on the development of reliable algorithms. Principal problems are caused by having a moving observer in predominantly natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We present a solution for a driver assistance system, concentrating on the aspects of video-based scene analysis and the organization of behavior.
The scene interpretation and the behavior planning of a vehicle in real-world traffic is a difficult problem. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. Ultimately, however, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper, a scene interpretation and a behavior planning for a driver assistance system aimed at cruise control are proposed. In this system, the controlled variables are determined by evaluating the dynamics of a two-dimensional neural field for scene interpretation and two one-dimensional neural fields controlling steering angle and velocity. The stimuli of the fields are determined according to the sensor information.
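The field dynamics underlying such behavior planning follow the classic Amari form: local excitation plus global inhibition let a stimulated site form a self-stabilized activation peak whose position encodes the decision (e.g. a steering angle). A minimal one-dimensional sketch with invented parameters:

```python
import math

# Minimal 1-D dynamic neural field (Amari-type) sketch, as used for
# behavior planning over, e.g., steering angle. All parameters are
# illustrative, not taken from the papers above.

N = 61                    # field sampled at 61 positions
h = -2.0                  # resting level (subthreshold: field is quiet)
tau, dt = 10.0, 1.0

def kernel(d, a_exc=2.0, s_exc=3.0, g_inh=0.5):
    """Interaction kernel: local excitation minus global inhibition."""
    return a_exc * math.exp(-d * d / (2 * s_exc ** 2)) - g_inh

def f(u):
    """Sigmoid output nonlinearity."""
    return 1.0 / (1.0 + math.exp(-u))

u = [h] * N
# Localized sensory stimulus centered at position 20.
stimulus = [3.0 * math.exp(-((i - 20) ** 2) / 8.0) for i in range(N)]

for _ in range(200):
    out = [f(ui) for ui in u]
    u = [ui + dt / tau * (-ui + h + stimulus[i]
                          + sum(kernel(i - j) * out[j] for j in range(N)))
         for i, ui in enumerate(u)]

peak = max(range(N), key=lambda i: u[i])
print(peak)   # the activation peak sits at the stimulus position
```

Reading out the peak position of such a field, once per control cycle, is what turns the field dynamics into a value for a controlled variable.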
Analysis of dynamic scenes
(2000)
In this paper, the proposed architecture for dynamic scene analysis is illustrated by a driver assistance system. To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in recent years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map), searching for object-related information (positions, types of objects, etc.). The knowledge base comprises static and dynamic knowledge. It consists of a set of rules (traffic rules, physical laws) and additional information (GPS, lane information), and it is implicitly used by the algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects it for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (e.g., a bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task.
In this paper we discuss how group processes can be influenced by designing specific tools in computer-supported collaborative learning. We present the design of a shared-workspace application for co-constructive tasks that is enriched with functions able to track, analyze and feed back parameters of collaboration to group members. Our interdisciplinary approach is mainly based on an integrative methodology for analyzing collaboration behavior and patterns in an implicit manner, combined with explicitly surveyed data on group members' attitudes and its immediate feedback to the groups. In an exploratory study we examined the influence of this feedback function. Although we could only analyze ad-hoc groups in this study, we detected some benefits of our methodology which might enrich the collaboration processes of real-life learning communities. The data analysis in our study showed advantages of this feedback for a group's well-being as well as for parameters of participation. These results provide a basis for further empirical work on problem-solving groups that are supported by means of parallel interaction analysis and its re-use as an information resource.
We describe the general concept, system architecture, hardware, and behavioral abilities of Cora (Cooperative Robot Assistant, see Fig. 1), an autonomous non-mobile robot assistant. Starting from our basic assumption that the behavior to be performed determines the internal and external structure of the behaving system, we have designed Cora anthropomorphically to allow for human-like behavioral strategies in solving complex tasks. Although Cora was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we show that Cora's behavioral abilities also transfer to a household environment. After the description of the hardware platform and the basic concepts of our approach, we present some experimental results by means of an assembly task.
The first robots are currently appearing on the consumer market. Initially they are targeted at rather simple applications such as entertainment and home convenience. For more complex areas, these robots will need to collaborate and interactively communicate with their human users, which requires appropriate man-machine interaction technologies and considerable cognitive abilities on the robot's side. Consumer acceptance will strongly depend on the integrated system. Thus, system integration and evaluation of the integrated system is becoming increasingly important. This paper describes our approach to construct a robotic assistance system. We present experience with an integrated technology demonstration and exposure of the integrated system to the public.
Mobile roll measurement technology
(2003)
This paper deals with the question of how to integrate smart devices into Java applications. It outlines how different smart devices can be used to enrich learning environments, points to some of the problems one faces when dealing with smart devices, differentiates between types of smart devices, and gives an overview of existing Java Virtual Machines available for different smart devices. Furthermore, we tackle the question of communication between different smart devices, and also between different kinds of smart devices. An outlook on future work is given at the end.
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. Separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan make it possible to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that emerge as long as the arm is far from obstacles make the movement goals of the robotic assistant predictable for the human operator, improving man-machine interaction.
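The core of the attractor dynamics approach can be sketched in a few lines: the heading direction obeys a differential equation in which the target erects an attractor at its bearing and each obstacle a repellor. Gains, ranges and the scenario below are illustrative, not the authors' values:

```python
import math

# Attractor-dynamics sketch for a planar heading direction phi:
# an attractor at the target bearing, a range-limited repellor at each
# obstacle bearing. Parameters are invented for illustration.

def heading_rate(phi, psi_target, obstacle_bearings,
                 lam_tar=1.0, lam_obs=4.0, sigma=0.4):
    rate = -lam_tar * math.sin(phi - psi_target)          # attractor term
    for psi_obs in obstacle_bearings:
        d = phi - psi_obs
        # Repellor: pushes phi away, active only near the obstacle bearing.
        rate += lam_obs * d * math.exp(-d * d / (2 * sigma ** 2))
    return rate

# Integrate: target straight ahead (bearing 0), obstacle slightly left
# (-0.1 rad); the heading starts exactly at the obstacle bearing.
phi, dt = -0.1, 0.01
for _ in range(5000):
    phi += dt * heading_rate(phi, 0.0, [-0.1])

print(round(phi, 2))  # heading settles to the right, clear of the obstacle
```

Because the fixed point shifts continuously as target and obstacle bearings change, the system simply keeps tracking it, which is what makes the scheme robust to noisy, time-varying sensor data.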
This paper describes an educational application that combines handhelds (PDAs) and programmable Lego bricks in a classroom scenario that deals with the problem of letting a robot escape from a maze. It is specific to our setting that the problem can be solved both in the physical world by steering a Lego robot and in a simulated software environment on a PDA or on a PC. This approach enables the students to generate successful sets of rules in the simulation and to test these sets of rules later in physical mazes, or to create new types of mazes as challenges for known rule sets. In this paper we describe the technical setting for this scenario, different pedagogical scenarios and we will report an evaluation with a group of students in a school environment.
In asynchronous collaboration scenarios, document metadata play an important role for indexing and retrieving documents in jointly used archives. However, the manual input of metadata is usually an unpleasant and error prone task. This paper describes an approach that allows the partially automatic generation of metadata in a collaborative modeling environment. It illustrates some usage scenarios for the metadata within the modelling framework – including concepts for document based social navigation and ideas for tool embedded archive queries based on the current state of the user's work.
The astronomy domain provides rich opportunities for learning about natural phenomena. It can involve and motivate a variety of mathematical and physical knowledge and skills. However, it is difficult to connect astronomical observations to modelling and calculation tools and to embed them in educational scenarios. It is this challenge in particular that this paper addresses. Concretely, we build on an existing collaborative modelling framework (Cool Modes) and extend it with specific representations to support learning activities in astronomy. A first field test has been conducted with these extensions.
This paper presents some ideas on how to use Web Services for the implementation of innovative collaborative technologies. A major goal is to build re-usable collaborative software components to foster knowledge exchange and learning. This paper describes two examples of how we used Web Services to achieve this goal. The first example implements a digital notice board with large public displays; here, we used Web Services to provide flexible data access, making our infrastructure usable from different programming languages and devices. The second example is an application that enables students to construct and model experiment descriptions for a controlled plant-growth system, the biotube, remotely via Web Services.
In this paper we describe our efforts to foster educational interoperability in scenarios using mobile and wireless technologies to support hands-on scientific experimentation and learning. A special focus is given to the idea that innovative uses of mobile and wireless technologies enhance the learners' scientific experience. Specific contributions include the creation of new applications that support interoperability between different mobile devices, thus providing "glue" between different learning situations. We describe a number of educational scenarios as well as the technologies and the architectural principles behind them.
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. Separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan enable the approach to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that result far from obstacles make the movement goals of the robotic assistant predictable, improving man-machine interaction.
In this paper we describe a session management system for setting up various collaborative classroom scenarios. The approach addresses the additional workload of administrating classroom networks that falls on the teacher, which is an important aspect for teachers' willingness to implement technology-enhanced learning in schools. The system facilitates the preparation of classroom scenarios and the ad-hoc installation of networked collaborative sessions. We provide a graphical interface usable for administration, monitoring, and the specification of a wide variety of different classroom situations with group work. The resulting graphical specifications are well suited for re-use in the more formal learning design format IMS/LD; this is achieved by an automatable transformation of the scenarios to LD documents. Keywords: collaborative classroom scenarios, lightweight classroom orchestration, learning design, shared workspaces.
Methods of red-hot rod shape testing require a robust non-contact measurement principle, as a touch point could damage both the rod and the detection unit. Therefore, a new approach based on high-frequency eddy currents (HFEC) has been investigated. Due to its robustness and its ability to determine the rod shape even above the Curie temperature, this principle is especially well suited and can be implemented directly in the production process. A first automatic measurement setup was successfully developed, with promising results: an ovality defect was detected with a parallel RLC oscillator. The capacitance of this RLC oscillator is constant, whereas the inductance is the measuring element, varying due to eddy-current interactions with the rod.
For the rod shape measurement of hot-rolled round steel bars (rods), the high-frequency eddy current method is especially well suited, as it requires no contact point and is not limited to temperatures below the Curie point. Defects of the rod's shape can be detected by measuring the impedance spectrum of the RLC oscillator. In a first laboratory setup, an Agilent impedance analyser was used for initial tests. However, this setup cannot be applied in a steel plant due to the harsh environmental conditions. Hence, a vector network analyser for passive impedance measurement that is applicable in these surroundings was developed.
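The measurement principle rests on the textbook resonance formula of a parallel RLC circuit, f0 = 1/(2π√(LC)): with C fixed, a shape-induced change of the coil inductance L shifts the resonant frequency. The component values below are purely illustrative:

```python
import math

# Resonance of a parallel RLC oscillator: f0 = 1 / (2*pi*sqrt(L*C)).
# In the eddy-current setup the capacitance C is fixed while the coil
# inductance L varies with the rod's shape, so a shape defect shows up
# as a shift of the resonant frequency. Values below are made up.

def resonant_frequency(L, C):
    """Resonant frequency in Hz for inductance L (H) and capacitance C (F)."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

C = 100e-12                      # 100 pF, fixed
L_round = 10e-6                  # 10 uH: coil over a round rod section
L_oval = 9.8e-6                  # slightly lower L over an oval section

f_round = resonant_frequency(L_round, C)
f_oval = resonant_frequency(L_oval, C)

# A 2 % drop in L raises f0 by about 1 %.
print(round(f_round / 1e6, 3), "MHz ->", round(f_oval / 1e6, 3), "MHz")
```

Tracking this frequency shift (or, equivalently, the full impedance spectrum around it) is what lets the setup flag ovality without any contact with the hot rod.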
Temporal stabilization of discrete movement in variable environments: An attractor dynamics approach
(2009)
The ability to generate discrete movement with distinct and stable time courses is important for interaction scenarios both between different robots and with human partners, for catching and interception tasks, and for timed action sequences. In dynamic environments, where trajectories are evolving online, this is not a trivial task. The dynamical systems approach to robotics provides a framework for robust incorporation of fluctuating sensor information, but control of movement time is usually restricted to rhythmic motion and realized through stable limit cycles. The present work uses a Hopf oscillator to produce discrete motion and formulates an online adaptation rule to stabilize total movement time against a wide range of disturbances. This is integrated into a dynamical systems framework for the sequencing of movement phases and for directional navigation, using 2D-planar motion as an example. The approach is demonstrated on a Khepera mobile unit in order to show its reliability even when depending on low-level sensor information.
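A Hopf oscillator in normal form has a stable limit cycle of radius √μ and period 2π/ω, so scaling ω sets the movement time; the paper's contribution is the online adaptation of that timing, which is omitted in the minimal sketch below (parameters invented):

```python
import math

# Hopf oscillator sketch: a stable limit cycle whose angular frequency
# omega sets the movement time (period T = 2*pi / omega). The online
# adaptation rule from the paper is not reproduced here; this only
# shows the limit cycle itself. All parameters are illustrative.

def hopf_step(x, y, omega, mu=1.0, gamma=5.0, dt=0.001):
    """One Euler step of the Hopf normal form."""
    r2 = x * x + y * y
    dx = gamma * (mu - r2) * x - omega * y
    dy = gamma * (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

omega = 2 * math.pi            # target period: 1.0 s
x, y = 0.1, 0.0                # start well inside the limit cycle
for _ in range(5000):          # integrate for 5 s
    x, y = hopf_step(x, y, omega)

r = math.sqrt(x * x + y * y)
print(round(r, 2))             # radius has relaxed to sqrt(mu) = 1.0
```

The attraction to the limit cycle is what makes the time course stable: a perturbation of the state decays back to the cycle, and adapting omega online (as the paper does) then keeps the total movement time constant despite disturbances.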
The fat content of the liver is an essential parameter in deciding whether a liver is suitable for transplantation. Determining the fat content is often challenging, and usually there is not enough time to bring a specimen to a pathology laboratory. This is why transplantation clinics need a technique to measure the fat content of a graft. In this paper, the theoretical basics and an existing laboratory setup are presented.
We present an architecture based on Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit, and in multi-item tracking when several items move simultaneously.
Integrating Orientation Constraints into the Attractor Dynamics Approach for Autonomous Manipulation
(2010)
Generating collision free reaching movements for redundant manipulators using dynamical systems
(2010)
For autonomous robots to manipulate objects in unknown environments, they must be able to move their arms without colliding with nearby objects, other agents or humans. The simultaneous avoidance of multiple obstacles in real time by all link segments of a manipulator is still a hard task both in practice and in theory. We present a systematic scheme for the generation of collision free movements for redundant manipulators in scenes with arbitrarily many obstacles. Based on the dynamical systems approach to robotics, constraints are formulated as contributions to a dynamical system that erect attractors for targets and repellors for obstacles. These contributions are formulated in terms of variables relevant to each constraint and then transformed into vector fields over the manipulator joint velocity vector as an embedding space in which all constraints are simultaneously observed. We demonstrate the feasibility of the approach by implementing it on a real anthropomorphic 8-degrees-of-freedom redundant manipulator. In addition, performance is characterized by detecting failures in a systematic simulation experiment in randomized scenes with varying numbers of obstacles.
Generating flexible collision-free reaching movements is a standard task for autonomous articulated robots that is critical especially when such systems interact with humans in a service robotics setting. Current solutions are still challenging to put into practice. Here we generalize an approach first used to plan end-effector movement that is based on attractor dynamical systems. We show how different contributions to the motion planning dynamics can be formulated in constraint-specific reference frames and then transformed into the frame of the joint velocity vector. We implement this system on an 8-DoF redundant manipulator and show its feasibility in a simulation. A systematic experiment with randomly generated obstacle scenes characterizes the performance of the system. Especially challenging configurations of obstacles are discussed to illustrate how the method solves these cases.
For any kind of assistance system, the ability to interact with the human operator, taking into account his or her assumptions and expectations, is the basis for reasonable behavior. As a consequence, human behavior has to be studied in order to generate driver models that are learned from human driving data. In this work we focus on improving immersion in a driving simulation environment by developing and implementing a cheap and efficient method for head tracking. We also explain why head tracking feedback is crucial for the quality of collected behavioural data, especially for simulators with close screen distances.
Today, virtually every student owns a reasonably powerful mobile device that can be integrated into learning scenarios. One of the drawbacks of the fast evolution of such devices is the heterogeneity that they usually bring with them. This paper provides an overview of how rich mobile learning scenarios can be implemented platform-independently on the basis of HTML5 and JavaScript. The paper presents a mobile learning application based on the principles of Situated Learning, developed entirely in HTML5. It also presents the results of tests performed with the application, which were aimed at finding out the difference in performance users perceived compared with the native desktop version of the application, and the added value that mobility introduces in learning activities.
Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine interaction (HMI) in general and human-robot interaction (HRI) in particular. It provides a means to incorporate non-verbal feedback in the course of interaction. Humans express their emotional and affective state rather unconsciously, exploiting their different natural communication modalities such as body language, facial expression and prosodic intonation. In order to achieve applicability in realistic HRI settings, we develop person-independent affective models. In this paper, we present a study on multimodal recognition of emotions from such auditory and visual cues for interaction interfaces. We recognize six classes of basic emotions, plus the neutral one, of talking persons. The focus hereby lies on the simultaneous online visual and acoustic analysis of speaking faces. A probabilistic decision-level fusion scheme based on Bayesian networks is applied to draw benefit from the complementary information of both the acoustic and the visual cues. We compare the performance of our state-of-the-art recognition systems for separate modalities to the improved results after applying our fusion scheme, on both the DaFEx database and real-life data captured directly from the robot. We furthermore discuss the results with regard to the theoretical background and future applications.
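Decision-level fusion under a conditional-independence assumption can be sketched very compactly: per-class posteriors from the two recognizers are multiplied and renormalized. The emotion classes and probability values below are made up for illustration and do not reproduce the paper's Bayesian network:

```python
# Minimal decision-level fusion sketch under a naive (conditional
# independence) assumption with a uniform class prior: per-class
# posteriors from the acoustic and visual recognizers are multiplied
# and renormalized. Classes and numbers are invented for illustration.

CLASSES = ["anger", "joy", "sadness", "neutral"]

def fuse(p_audio, p_video):
    """Combine two posterior dicts over CLASSES into one."""
    joint = {c: p_audio[c] * p_video[c] for c in CLASSES}
    z = sum(joint.values())              # renormalize
    return {c: p / z for c, p in joint.items()}

# Acoustic cue is ambiguous between anger and joy; visual cue favors joy.
p_audio = {"anger": 0.40, "joy": 0.40, "sadness": 0.10, "neutral": 0.10}
p_video = {"anger": 0.10, "joy": 0.60, "sadness": 0.10, "neutral": 0.20}

fused = fuse(p_audio, p_video)
best = max(fused, key=fused.get)
print(best, round(fused[best], 2))   # prints: joy 0.77
```

The benefit the abstract describes comes from exactly this effect: a class that is only weakly supported by one modality but strongly by the other still wins after fusion, while classes inconsistent across modalities are suppressed.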
In the presented work we compare machine learning techniques in the context of lane-change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches using differing feature combinations in order to identify appropriate features, the best feature combination, and the most appropriate machine learning technique for the described task. Based on the data acquired from human drivers in the traffic simulator NISYS TRS, we trained a recurrent neural network, a feed-forward neural network and a set of support vector machines. In subsequent test drives, the system was able to predict lane changes up to 1.5 s in advance.
A simulated-reality environment that incorporates humans and physically plausibly behaving robots, provides natural interaction channels, and offers the option to link the simulator to real perception and motion is gaining importance for the development of cognitive, intuitively interacting and collaborating robotic systems. In the present work we introduce a head tracking system which is utilized to incorporate human ego motion into the simulated environment, improving immersion in the context of human-robot collaborative tasks.
This paper describes a system which allows platform-independent access to quizzes of the popular learning platform Moodle. The main focus is on the software architecture, which is implemented on the basis of platform-independent technologies like Web Services, HTML5 and JavaScript. Another aspect is the user interface, which was developed with the goal of running on a broad range of mobile devices, from small mobile phones up to large tablets.
The WWW is the killer app of the Internet. In recent years, an enormously increasing number of Web Applications, as a means of human-to-computer interaction, have appeared, allowing the visitor of a website to interact with it. Additionally, the approach of Web Services was introduced in order to allow computer-to-computer interaction on the basis of standardized protocols. This paper shows how the gap between Web Applications and Web Services can be closed by systematically making Web Applications available for computer-to-computer interaction.
The investigation of neuronal accounts of cognition is closely linked to collaboration between behavioral experiments, theory and application, and it supports the move from purely behaviorist correlation analysis to a real understanding of the underlying mechanisms. Cognition builds upon the individual behavioral history, and the understanding of cognition is based on neuronal principles.
The study of human behavior incorporates in particular interactive, dynamically changing scenarios with multiple human individuals. The acquisition of behavioral data of human subjects, the modeling of behavior, and the evaluation in interactive scenarios all make it necessary to generate simulated images of reality. Simulations allow the investigator to precisely control the structure of the environment the subject interacts with. Furthermore, situations that would be too dangerous in the real world (e.g. near-crash driving situations) can be investigated using virtual reality.
By nature, simulated reality frameworks are designed to simulate naturalistic environments. Within these environments, ecologically relevant stimuli embedded in a meaningful and controlled context can be presented. The quality of experimental data acquired within the simulated environment depends not least on the degree of immersion of the human subject.
Driving experiments usually attempt to relate observable driver behavior to cognitive inputs. The precise visual (retinal) input of a driver in a driving simulator also depends on the exact position of his head with respect to the screen (Noth et al., 2010). Ego-motion feedback can thus be regarded as a continuous calibration.
In a virtual cooperation scenario, consistency matters - if an operator perceives an object at 1 m distance, moving 20 cm towards it should decrease the perceived distance to 80 cm, moving to the side of an object which occludes another one should reveal the latter (Pretto et al., 2009).
Ego-motion feedback reduces the cues that remind operators that they are in a virtual rather than the real world. The way the appearance of a virtual object changes due to a lateral head movement is identical to that of its real counterpart, which means that even relations between real and virtual objects remain consistent (Creem-Regehr et al., 2005; Cutting, 1997).
In this contribution we introduce a head tracking system which is utilized to incorporate human ego motion in simulated environments, improving immersion in the context of a human-robot collaborative task and in an interactive driving simulator.
For both cases, we explain how the ego motion feedback leads to a more precise comprehension of the virtual scene and how the aspect of immersion influences the feeling of being “really” inside of the virtual scene and the weakening of the awareness of the border between the real and the virtual world.
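The geometric consistency that head-coupled rendering has to maintain can be sketched as a minimal projection: a virtual point behind the screen is drawn where the line of sight from the tracked head position crosses the screen plane. Coordinates and the pinhole model are simplifying assumptions; real systems use a full off-axis projection matrix:

```python
# Minimal sketch of head-coupled rendering (motion parallax): project a
# virtual point onto the screen plane z = 0 along the line of sight from
# the tracked head position. Units are meters; model is a simplification.

def project_to_screen(head, point):
    """head = (hx, hy, hz) with hz > 0 in front of the screen plane z = 0;
    point = (px, py, pz) with pz <= 0 behind the screen.
    Returns the (x, y) intersection of the ray head -> point with z = 0."""
    hx, hy, hz = head
    px, py, pz = point
    t = hz / (hz - pz)          # ray parameter where the ray crosses z = 0
    return (hx + t * (px - hx), hy + t * (py - hy))
```

A lateral head movement then shifts the on-screen image of deep points more than that of near-screen points, reproducing the parallax cues an operator expects from real objects.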
The neuronal basis of movement preparation, during which movement parameters such as movement direction are assigned values, is fairly well understood (Georgopoulos, 2000). Motor and premotor cortex as well as portions of the parietal cortex represent movement parameters through the activity of neuronal populations (Bastian et al., 2003; Cisek & Kalaska, 2005).
The parameter representation is of a dynamic nature and is updated in the course of the movement. It adapts to boundary conditions of the motion plan or to environmental changes. Schwartz (2004) was able to decode movement parameters from motor cortical activity and utilized this knowledge to drive a virtual or robotic end-effector, demonstrating that the motor cortex is involved in movement planning. At this level of abstraction we assume that the movement of an end-effector, as well as a human walking movement, is appropriately represented by its direction, subject to further constraints such as obstacle avoidance or movement coordination.
A neuronal dynamics of movement generates goal-directed movements while satisfying further constraints, such as obstacle avoidance. Movement is generated by choosing low-dimensional, behaviorally relevant state variables. Behavioral goals are represented as attractors of dynamical systems over such behavioral variables (Schöner et al., 1995). The robot's trajectory emerges as a solution of these dynamical systems, in which the behavioral variables are stabilized at attractors corresponding to behavioral goals. Constraints are included in a similar manner as repellers. Recently we applied this approach to generate reaching movements for manipulators under obstacle avoidance and orientation constraints (Iossifidis & Schöner, 2009; Reimann et al., 2010a,b).
We aim to develop an approach to robotic action based on dynamical systems that is quantitatively modeled on human behavior. By varying the intrinsic parameters obtained for different individuals we will be able to implement different personal styles of movement. In this contribution we implement the neuronal dynamics of movement on a humanoid robotic system which generates goal-directed walking movements while avoiding obstacles.
Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without using a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher-level functions such as forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate goal-directed, obstacle-avoiding walking trajectories for a small humanoid robot, the Aldebaran Nao. The only sensor used is a single camera in the head of the robot; its limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variables make a computationally expensive scene representation or local map building unnecessary.
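The core of the attractor dynamics approach can be sketched in a few lines: the heading direction follows a dynamical system with an attractor at the target direction and repellers at obstacle directions. The gains and the obstacle range below are illustrative values, not the calibrated parameters used on the robot:

```python
# Sketch of attractor dynamics for heading control: one attractor at the
# target direction, Gaussian-windowed repellers at obstacle directions.
# Gains and ranges are illustrative, not the paper's calibrated values.
import math

def heading_rate(phi, target_dir, obstacle_dirs,
                 attract=2.0, repel=3.0, obstacle_range=0.6):
    """d(phi)/dt for heading phi (radians)."""
    rate = -attract * math.sin(phi - target_dir)       # attractor term
    for psi in obstacle_dirs:
        diff = phi - psi
        # repeller: pushes phi away from psi, active only near psi
        rate += repel * diff * math.exp(-diff**2 / (2 * obstacle_range**2))
    return rate

def simulate(phi0, target_dir, obstacle_dirs, dt=0.01, steps=2000):
    """Forward-Euler integration of the heading dynamics."""
    phi = phi0
    for _ in range(steps):
        phi += dt * heading_rate(phi, target_dir, obstacle_dirs)
    return phi
```

With a free path the heading relaxes onto the target direction; with an obstacle in the target direction it settles at a stable fixed point beside it, so the trajectory emerges from the dynamics without an explicit map or plan.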
In recent years a new approach to the dynamic usage of computational power, memory and other resources has come into play: the Cloud Computing paradigm. This new approach needs to be examined with respect to IT Service Management, since cloud-based infrastructures have to be managed differently from a usual infrastructure. Based on the IT Infrastructure Library (ITIL) as the de-facto standard for IT Service Management, this paper discusses which processes need particular attention if a certain service is to be deployed in the cloud.
In recent years, the number of mobile devices that are available for learning scenarios has increased considerably. Mobile devices support different learning settings: on the one hand we find them in informal learning settings, and on the other hand in formal learning settings such as a usual lecture. This paper motivates the question whether the usage of mobile devices in a usual lecture is something that is wanted by the students. A first case study with a platform-independent prototype provides an initial indication of preferred usage.
The term “Cloud Computing” does not primarily specify new types of core technologies but rather addresses features concerning integration, interoperability and accessibility. Although not new, virtualization and automation are core features that characterize Cloud Computing. In this paper, we explore the possibility of integrating cloud services with educational scenarios without redefining either the technology or the usage scenarios from scratch. Our suggestion is based on certain solutions that have already been implemented and tested for specific cases.
The Desire project aimed at the development and implementation of a mobile service robotics research platform (technology platform) able to handle real-world service robotics tasks. Different modules for different tasks plus an interaction infrastructure were integrated on this platform. An example of a real-world scenario is supporting a handicapped person in cleaning up a kitchen in a home environment.
One of the main challenges to be solved in this field is the interaction with people. To start an interaction process between a robot and a person, the most important information is the knowledge of the interacting partner's identity and whether the interacting partner is present or not. This means the robot must be able to detect and, ultimately, to identify persons. Accurate identification of specific individuals requires analyzing the individual features of each person. A typical feature set that allows for a distinct identification of a specific person is often extracted from a facial image acquired by a camera. This feature set is stored in a database to allow the identification of different persons independent of place and time by comparing given feature sets. Thus, a face recognition module comprising face detection and identification algorithms was integrated into the technology platform.
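The identification-by-comparison step described above can be sketched as matching a query feature vector against stored feature sets by cosine similarity. The vector contents, the database layout and the acceptance threshold are illustrative assumptions; the platform's actual facial descriptors are not specified here:

```python
# Sketch of the identification step: compare a query feature vector against
# stored feature sets; accept the best match above a similarity threshold.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def identify(query, database, threshold=0.9):
    """database: dict name -> stored feature vector.
    Returns the best-matching name, or None if nobody is similar enough."""
    best_name, best_sim = None, threshold
    for name, features in database.items():
        sim = cosine_similarity(query, features)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name
```

Returning None for unknown faces matters here: the robot must also recognize that the interacting partner is not present in its database.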
Integrating Social Networking Sites in Day-to-Day Learning Scenarios - A Facebook Based Approach
(2012)
In recent years, the number of users of social networking sites has increased steadily. Especially younger people spend a tremendous amount of time on social networking sites like Facebook, YouTube, Flickr, Google+ and many more. Since this is obviously the place on the World Wide Web where our students spend their spare time, we integrated social networking sites into our day-to-day learning scenarios. This serves, on the one hand, to start working with our students where they feel comfortable, and on the other hand, to foster communication among our students about the topics of the lectures.
In recent years the diversity and the ownership of mobile devices steadily increased while the prices for this kind of devices decreased to a level that allows many students to own reasonably powerful devices. As mobile devices are also being used in learning scenarios, the challenge of today is the integration of multiple heterogeneous devices into existing and upcoming learning scenarios. This paper describes an architecture that allows easy integration of various kinds of mobile and non-mobile devices. The presented architecture will be exemplified by a group discussion scenario in a heterogeneous learning environment. The paper concludes with the description of a pilot study using the described system.
In-vehicle user interfaces pose a particular challenge in design and development, since easy operation of driver assistance systems as well as of comfort and entertainment functions in all driving situations is at the center of the control and display concepts. At the same time, the increasing connectivity of the vehicle brings the long development cycles of motor vehicles together with the highly dynamic world of mobile phones and internet applications. Further challenges arise from foreseeable changes in mobility behavior and from the introduction of electric vehicles.
Innovations in the vehicle, including the user interface, often first appear in luxury-class vehicles and are developed according to the expectations of the corresponding target group, mostly people aged 45 and older. In the mobile device sector, on the other hand, innovations originate from technically interested people, mostly adolescents. In this work we attempted to let young people themselves design a car cockpit for young people, in four stages spanning the next 20 years, based on their own assessment of the technical possibilities.
Applications and research efforts in Mobile Learning constitute a growing field in the area of Technology Enhanced Learning. However, despite a permanent increase of mobile internet accessibility and availability of mobile devices over the past years, a mobile learning environment that is easy to use, widely accepted by teachers and learners, uses widespread off-the-shelf software, and that covers various application scenarios and mobile devices, is not yet available. In this paper, we address this issue by presenting an approach and technical framework called "Mobile Contributions" ("MoCo"). MoCo supports learners in creating and sending contributions through various channels (including third-party solutions like Twitter, SMS and Facebook), which are collected and stored in a central repository for processing, filtering and visualization on a shared display. A set of different learning and teaching scenarios that can be realized with MoCo are described along with first experiences and insights gained from qualitative and quantitative evaluation.
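The collection step of such a framework can be sketched as normalizing channel-specific payloads into one record format in a central repository. The field names, channels and filter below are assumptions for illustration, not MoCo's actual data model:

```python
# Sketch: channel-agnostic collection of learner contributions into a
# central repository, with a simple keyword filter before visualization.
# Record fields and channel payload shapes are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Contribution:
    channel: str      # e.g. "twitter", "sms", "web"
    author: str
    text: str

@dataclass
class Repository:
    items: list = field(default_factory=list)

    def collect(self, channel, raw):
        """Normalize a channel-specific payload into a Contribution."""
        if channel == "twitter":
            item = Contribution("twitter", raw["user"], raw["text"])
        elif channel == "sms":
            item = Contribution("sms", raw["sender"], raw["body"])
        else:
            item = Contribution(channel, raw.get("author", "anonymous"), raw["text"])
        self.items.append(item)
        return item

    def filtered(self, keyword):
        """Keyword filter applied before display on the shared screen."""
        return [c for c in self.items if keyword.lower() in c.text.lower()]
```

Keeping the normalization per channel in one place is what lets third-party input channels be added without touching the processing and visualization stages.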
This paper presents an approach towards a mobile learning environment which is flexible in terms of supported scenarios, supported devices and input channels. The approach makes use of existing and commonly used channels like SMS, Twitter or Facebook to increase acceptance and ease-of-use of mobile devices in learning scenarios. Envisaged application scenarios are described along with technical details for their realization.
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, the Hochschule Ruhr West, University of Applied Sciences, developed a program to give teenage pupils insights into this area in a project-based learning environment with professional tools. Within the last 18 months this project was successfully conducted several times with participants of different ages.
In the winter semester 2011/12, the Hochschule Ruhr West offered a junior study program for pupils (Schülerstudium) in the degree course Angewandte Informatik (Applied Computer Science) for the first time. It emerged from various activities concerning the transition from school to university. This article describes the experiences made when introducing such a program at a university of applied sciences that is still in the process of being established, from the perspective of both the university staff and the participating pupils.
Investigation of the influence of longitudinal cracks in wires on the impedance of an eddy current sensor (Untersuchung des Einflusses von Längsrissen in Drähten auf die Impedanz eines Wirbelstromsensors)
(2012)
The transurethral resection (TUR) is a standard technique in urological treatment procedures. Both monopolar and bipolar electrosurgical systems are used for TUR. Whereas the electrical and physical processes in the surgical surroundings are well understood for monopolar systems, there is no sufficient data basis for assessing these processes when bipolar systems are used. In this context a multi-electrode measuring system was developed to visualize the spatial potential distribution around bipolar electrosurgical devices as a first step towards a risk analysis. To simulate the anatomic surroundings of a transurethral surgery, a cylinder filled with isotonic saline solution was used as a complexity-reduced experimental environment.
Pedestrian movement analysis at airports - video-based analysis across multiple camera systems
(2013)
One of the most pressing challenges in our culture is demographic change. On the one hand, people become older and older; at the same time, fewer young people are available to support the elderly. This fact already has a number of social impacts that need to be addressed in the near future. This paper concentrates on the integration of mobile devices in scenarios that allow elderly people to age successfully. Here, the term "aging successfully" refers to a broad range of aspects, from the health to the social life of elderly people. A special focus of this paper lies on the question whether services deployed to a mobile device provide advantages in the area of aging successfully. In order to answer this question, technical challenges are explained and solved by example architectures, and scenarios that benefit from services deployed to mobile devices are described.
Mobile devices, in the form of smartphones, are endowed with rich capabilities in terms of multimedia, sensors and connectivity. The wide adoption of these devices allows using them across different settings and situations. One area in which mobile devices become more and more prominent is the field of mobile learning. Here, mobile devices provide rich possibilities for the contextualization of the learner by using the set of sensors available in the device. On the one hand, the usage of mobile devices enables participation in learning activities independent of time and space. On the other hand, developing mobile learning applications for the heterogeneity of mobile devices available on the market is a challenge. This is not only a problem of form factors; the large number of different operating systems, platforms and app infrastructures (app stores) also has to be considered. In this paper we present our initial efforts with regard to the development of cross-platform mobile applications to support the contextualization of learning content.
In this paper, we describe a method to model human clothes for later recognition using RGB and SWIR cameras. A basic model is estimated during people detection and tracking. This model is refined when recognition is triggered. For the refinement, several saliency maps are used to extract individual features. These individual features are located separately for the individual human body parts. The body parts are estimated by means of a silhouette extraction combined with a skeleton estimation. In this way, the model describes the human clothes in a compact manner which allows the use of a simple and fast comparison method for people recognition. Such models can be used in security and service applications.
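The "simple and fast comparison" of compact per-body-part models can be sketched as summed histogram intersection. The part names, bin counts and the intersection measure are illustrative assumptions, not the paper's actual descriptors:

```python
# Sketch: a clothing model as one small, normalized feature histogram per
# body part, compared by averaged histogram intersection. Part names and
# the choice of measure are illustrative assumptions.

PARTS = ("torso", "left_arm", "right_arm", "legs")

def histogram_intersection(h1, h2):
    """Overlap of two normalized histograms: 1.0 identical, 0.0 disjoint."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def model_similarity(model_a, model_b):
    """models: dict part -> normalized feature histogram (sums to 1)."""
    return sum(histogram_intersection(model_a[p], model_b[p])
               for p in PARTS) / len(PARTS)
```

Because each comparison is a handful of per-bin minimum operations, matching a tracked person against many stored models stays cheap enough for online use.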
The mathematical competence of first-year students is an important success factor, at least for technical studies. As a significant percentage of students do not have sufficient mathematical skills, universities often utilise blended learning courses to increase these skills prior to the start of studies. Due to the diversity of students and their educational backgrounds, individual strategies are needed to achieve the competence necessary for successfully managing their studies. This paper describes our approach at the University of Applied Sciences Ruhr West, where we are using personalized blended learning concepts based on the measurement of individual mathematical competences at the beginning of a coaching process. This is used to achieve a better match between the individual learner level and the adapted learning concepts. We combine individual presence learning groups and a personalized e-learning environment. This environment is adapted based on the mathematical skills of each student. It uses individual learning advice, short-term visual feedback and up-to-date e-learning material in a Moodle-based LMS (learning management system). The coaching concept is confirmed by the results of summative and formative evaluations.
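The matching step between measured competence and adapted content can be sketched as a simple mapping from per-topic diagnostic scores to a prioritized module list. The topics, threshold and ordering rule are illustrative assumptions, not the actual course structure:

```python
# Sketch: map a learner's per-topic diagnostic results to a personalized
# list of topics to work on, weakest first. Topics and the pass threshold
# are illustrative assumptions.

def personalize(scores, pass_threshold=0.7):
    """scores: dict topic -> fraction of diagnostic items solved (0..1).
    Returns the topics below threshold, ordered from weakest to strongest."""
    todo = [(score, topic) for topic, score in scores.items()
            if score < pass_threshold]
    return [topic for score, topic in sorted(todo)]
```

In a coaching process, such a list would drive both the assignment to presence learning groups and the selection of e-learning material in the LMS.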
By adapting the mathematics qualification measures in the introductory phase of studies to the individual competences of first-year students, the individual fit of the measures is increased and a high learning progress is achieved. This leads to a substantial improvement of the entry qualification in mathematics and to a homogenization of the performance of students.