Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. The main focus of "Technical Image Processing of Dynamic Scenes" lies in the development of methods for the interpretation of images derived from various sensors. Apart from conventional visual images, this mainly involves X-ray and radar images. Taking into account the requirements of the various applications, suitable methods are derived. Current projects deal with the analysis of traffic scenes, the detection of detonators in X-rayed luggage, and the determination of the type and extent of oil pollution in maritime surveillance.
We propose a new approach to object detection based on data fusion of texture and edge information. A self-organizing Kohonen map is used as the coupling element of the different representations. An extension of the proposed architecture to incorporate other features, even features not derived from vision modules, is therefore straightforward: it reduces to a redefinition of the local feature vectors and a retraining of the network structure. The hypotheses of object locations generated by the detection process are finally inspected by a neural network classifier based on co-occurrence matrices.
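Such a coupling via a self-organizing map can be sketched as follows. This is a minimal illustration, not the paper's implementation: the map size, feature dimensionality, and decay schedules are assumptions, and in the paper the feature vectors would hold local texture and edge responses.

```python
import numpy as np

def train_som(features, grid=(8, 8), epochs=20, lr0=0.5, sigma0=3.0, seed=0):
    """Train a minimal self-organizing (Kohonen) map on local feature vectors.

    features: (n_samples, n_dims) array, e.g. concatenated texture and edge
    features per image location. Returns a (grid_h, grid_w, n_dims) weight array.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, features.shape[1]))
    ys, xs = np.mgrid[0:h, 0:w]          # grid coordinates for the neighborhood
    n_steps = epochs * len(features)
    step = 0
    for _ in range(epochs):
        for f in features:
            # Best-matching unit: node whose weight vector is closest to f.
            d = np.linalg.norm(weights - f, axis=2)
            by, bx = np.unravel_index(np.argmin(d), d.shape)
            # Exponentially decaying learning rate and neighborhood radius.
            frac = step / n_steps
            lr = lr0 * np.exp(-3.0 * frac)
            sigma = sigma0 * np.exp(-3.0 * frac)
            # Gaussian neighborhood: nodes near the winner are pulled toward f.
            g = np.exp(-((ys - by) ** 2 + (xs - bx) ** 2) / (2 * sigma ** 2))
            weights += lr * g[:, :, None] * (f - weights)
            step += 1
    return weights

def best_matching_unit(weights, f):
    """Map a feature vector to its winning node (its position on the map)."""
    d = np.linalg.norm(weights - f, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)
```

Extending the architecture with a new feature then indeed amounts to enlarging the feature vectors (extra columns in `features`) and retraining the map.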
In this article we present a system for coupling different base algorithms and sensors for segmentation. Three different solutions for image segmentation by fusion are described and compared, and results are shown. The fusion of base algorithms with color information and a sensor fusion process of an optical and a radar sensor, including feedback over time, is realized. A feature-in, decision-out fusion process is solved. For the fusion process, a multilayer perceptron (MLP) with one hidden layer is used as a coupling net. The activity of the output neuron represents the membership of each pixel to an initial segment.
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of a sequential and a parallel branch of sensor and information processing. Three main tasks, namely the initial segmentation (object detection), the object tracking and the object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.
To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in the past years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four different parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (e.g., vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map), searching for object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (e.g., traffic rules, physical laws) and additional information (i.e., GPS, lane information), and it is implicitly used by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects the information for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (i.e., bird's eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task. The influence of behavior planning on the behavior of the guided vehicle is limited to advice, as no mechanical control (e.g., control of the steering angle) was implemented. An Intelligent Cruise Control (ICC) is shown as a spin-off of using this architecture.
Systems for automated image analysis are useful for a variety of tasks. Their importance is still growing due to technological advances and increased social acceptance. Especially driver assistance systems have reached a high level of sophistication. Fully or partly autonomously guided vehicles, particularly for road traffic, require highly reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We present a system extracting important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach is divided into a sequential and a parallel phase of sensor and information processing. Three main tasks, namely initial segmentation (object detection), object tracking and object classification, are realized by integration in the sequential phase and by fusion in the parallel phase. The main advantage of this approach is the integrative coupling of different algorithms providing partly redundant information.
The behavior planning of a vehicle in real-world traffic is a difficult problem. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. But ultimately, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper a behavior planning scheme for a driver assistance system aiming at cruise control is proposed. In this system the controlled variables are determined by an evaluation of the dynamics of two one-dimensional neural fields. The stimuli of the fields are determined according to sensor information produced by a simulation environment.
To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in the past years. Fully or partly autonomously guided vehicles, particularly for road traffic, pose high demands on the development of reliable algorithms. Principal problems are caused by having a moving observer in predominantly natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We present a solution for a driver assistance system, concentrating on the aspects of video-based scene analysis and the organization of behavior.
The scene interpretation and the behavior planning of a vehicle in real-world traffic is a difficult problem. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. But ultimately, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper a scene interpretation and a behavior planning scheme for a driver assistance system aiming at cruise control is proposed. In this system the controlled variables are determined by an evaluation of the dynamics of a two-dimensional neural field for scene interpretation and two one-dimensional neural fields controlling steering angle and velocity. The stimuli of the fields are determined according to the sensor information.
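The role of such a one-dimensional field can be illustrated with a minimal Amari-type field simulation; the kernel shape and all parameter values below are illustrative assumptions, not those of the system described here. A localized stimulus (e.g., encoding a desired steering angle over field positions) drives the field into a self-stabilized activation peak whose position is read out as the controlled variable.

```python
import numpy as np

def simulate_field(stimulus, steps=500, dt=0.02, tau=1.0, h=-2.0,
                   c_exc=6.0, sigma_exc=3.0, c_inh=3.0, sigma_inh=8.0):
    """Relax a one-dimensional Amari-style neural field to a stimulus.

    stimulus: (n,) input pattern over field positions.
    Returns the final activation u; np.argmax(u) is the field's decision.
    """
    n = len(stimulus)
    x = np.arange(n)
    d = np.abs(x[:, None] - x[None, :])
    # Mexican-hat interaction: local excitation, broader inhibition.
    kernel = (c_exc * np.exp(-d**2 / (2 * sigma_exc**2))
              - c_inh * np.exp(-d**2 / (2 * sigma_inh**2)))
    u = np.full(n, h, dtype=float)            # start at the resting level h
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))          # sigmoidal output nonlinearity
        u = u + dt * (-u + h + stimulus + kernel @ f) / tau
    return u
```

A peak forms at the stimulated location and is stabilized by the field's own interaction, which is what makes the read-out of the controlled variable robust against fluctuating sensor input.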
Analysis of dynamic scenes
(2000)
In this paper the proposed architecture for a dynamic scene analysis is illustrated by a driver assistance system. To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems arose in the past years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four different parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map), searching for object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (traffic rules, physical laws) and additional information (GPS, lane information), and it is implicitly used by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects the information for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (i.e., bird's eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task.
In this paper we discuss how group processes can be influenced by designing specific tools in computer-supported collaborative learning. We present the design of a shared workspace application for co-constructive tasks that is enriched by functions that track, analyze and feed back parameters of collaboration to group members. Our interdisciplinary approach is mainly based on an integrative methodology for analyzing collaboration behavior and patterns in an implicit manner, combined with explicitly surveyed data on group members' attitudes and its immediate feedback to the groups. In an exploratory study we examined the influence of this feedback function. Although we could only analyze ad-hoc groups in this study, we detected some benefits of our methodology which might enrich the collaboration processes of real-life Learning Communities. The data analysis in our study showed advantages of this feedback on a group's well-being as well as on parameters of participation. These results provide a basis for further empirical work on problem-solving groups that are supported by means of parallel interaction analysis and its re-use as an information resource.
We describe the general concept, system architecture, hardware, and the behavioral abilities of Cora (Cooperative Robot Assistant, see Fig. 1), an autonomous non-mobile robot assistant. Starting from our basic assumption that the behavior to be performed determines the internal and external structure of the behaving system, we have designed Cora anthropomorphically to allow for human-like behavioral strategies in solving complex tasks. Although Cora was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we will show that Cora's behavioral abilities are also transferable to a household environment. After the description of the hardware platform and the basic concepts of our approach, we present some experimental results by means of an assembly task.
The first robots are currently appearing on the consumer market. Initially they are targeted at rather simple applications such as entertainment and home convenience. For more complex areas, these robots will need to collaborate and interactively communicate with their human users, which requires appropriate man-machine interaction technologies and considerable cognitive abilities on the robot's side. Consumer acceptance will strongly depend on the integrated system. Thus, system integration and evaluation of the integrated system is becoming increasingly important. This paper describes our approach to construct a robotic assistance system. We present experience with an integrated technology demonstration and exposure of the integrated system to the public.
Mobile Walzenmesstechnik
(2003)
CORA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels comprising vision, audition, haptics, and force sensing are used to extract perceptual information about speech, gestures and gaze of the operator, and object recognition. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
This paper deals with the question of how to integrate smart devices in Java applications. It outlines how different smart devices can be used to enrich learning environments, points to some of the problems one has to face while dealing with smart devices, differentiates between types of smart devices, and gives an overview of existing Java Virtual Machines available for different smart devices. Furthermore, we tackle the question of communication between different smart devices and also between different kinds of smart devices. An outlook on future work is given at the end of this paper.
Coming out of the labs, the first robots are currently appearing on the consumer market. Initially they target rather simple application scenarios ranging from entertainment to home convenience. However, one can expect that they will capture more complex areas soon. These robots will have an ever higher level and a broader range of functional competence, and will collaborate and interactively communicate with their human users. All this requires considerable cognitive abilities on the robot's side and appropriate man-machine interaction technologies. Apart from the further development of individual functions and technologies, it is crucial to build and evaluate fully integrated systems. This paper describes our approach to constructing a robotic assistance system. We present experience with an integrated technology demonstration and the exposure of the integrated system to the public.
CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about speech, gestures and gaze of the operator, and for object recognition. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
This article describes the current state of our research on anthropomorphic robots. Our aim is to make the reader familiar with the two basic principles our work is based on: anthropomorphism and dynamics. The principle of anthropomorphism means a restriction to human-like robots which use vision, audition and touch as their only sensors, so that natural man-machine interaction is possible. The principle of dynamics stands for the mathematical framework with which our robots generate their behavior. Both principles are rooted in the idea that concepts of biological behavior and information processing can be exploited to control technical systems.
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. Separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan make it possible to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that emerge as long as the arm is far from obstacles make the movement goals of the robotic assistant predictable for the human operator, improving man-machine interaction.
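The heading-direction dynamics at the core of the approach can be sketched for the planar case; the functional forms and all parameters below are illustrative assumptions following the generic attractor dynamics scheme, not the arm implementation described here. The target direction enters as an attractor, each obstacle direction as a range-limited repeller, and the heading relaxes to a collision-free attractor instead of descending a potential.

```python
import math

def heading_rate(phi, psi_tar, psi_obs_list,
                 lam_tar=2.0, lam_obs=4.0, sigma=0.5):
    """Attractor dynamics for a planar heading direction phi (radians).

    The target direction psi_tar contributes an attractor; each obstacle
    direction contributes a repeller with limited angular range sigma.
    """
    def wrap(a):  # shortest signed angular difference
        return math.atan2(math.sin(a), math.cos(a))
    dphi = -lam_tar * math.sin(wrap(phi - psi_tar))
    for psi_obs in psi_obs_list:
        d = wrap(phi - psi_obs)
        dphi += lam_obs * d * math.exp(-d * d / (2 * sigma * sigma))
    return dphi

def steer(phi0, psi_tar, psi_obs_list, steps=500, dt=0.02):
    """Integrate the heading dynamics; phi relaxes to an attractor."""
    phi = phi0
    for _ in range(steps):
        phi += dt * heading_rate(phi, psi_tar, psi_obs_list)
    return math.atan2(math.sin(phi), math.cos(phi))
```

Because the system sits in (or near) an attractor at all times, moving targets and obstacles simply shift the attractor, and the heading tracks it; this is the mechanism that tolerates fluctuating sensory input.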
This paper describes an educational application that combines handhelds (PDAs) and programmable Lego bricks in a classroom scenario that deals with the problem of letting a robot escape from a maze. It is specific to our setting that the problem can be solved both in the physical world by steering a Lego robot and in a simulated software environment on a PDA or on a PC. This approach enables the students to generate successful sets of rules in the simulation and to test these sets of rules later in physical mazes, or to create new types of mazes as challenges for known rule sets. In this paper we describe the technical setting for this scenario, different pedagogical scenarios and we will report an evaluation with a group of students in a school environment.
In asynchronous collaboration scenarios, document metadata play an important role for indexing and retrieving documents in jointly used archives. However, the manual input of metadata is usually an unpleasant and error-prone task. This paper describes an approach that allows the partially automatic generation of metadata in a collaborative modelling environment. It illustrates some usage scenarios for the metadata within the modelling framework, including concepts for document-based social navigation and ideas for tool-embedded archive queries based on the current state of the user's work.
The astronomy domain provides rich opportunities for learning about natural phenomena. It can involve and motivate a variety of mathematical and physical knowledge and skills. However, it is difficult to connect astronomic observations to modelling and calculation tools and to embed them into educational scenarios. It is precisely this challenge that this paper addresses. Concretely, we build on an existing collaborative modelling framework (Cool Modes) and extend it with specific representations to support learning activities in astronomy. A first field test has been conducted with these extensions.
This paper presents some ideas on how to use Web Services for the implementation of innovative collaborative technologies. A major goal is to build re-usable collaborative software components to foster knowledge exchange and learning. This paper describes two examples of how we used Web Services to achieve this goal. The first example implements a digital notice board with large, public displays. Here, we used Web Services to provide flexible data access; they make it possible to use our infrastructure with different programming languages and devices. The second example is an application that enables students to construct and model experiment descriptions using a controlled plant-growth system, the biotube, remotely via Web Services.
In this paper we describe our efforts to foster educational interoperability in scenarios using mobile and wireless technologies to support hands-on scientific experimentation and learning. A special focus is given to the idea that innovative uses of mobile and wireless technologies enhance the learners' scientific experience. Specific contributions include the creation of new applications to support interoperability between different mobile devices, thus to provide "glue" between different learning situations. We describe a number of educational scenarios as well as the technologies and the architectural principles behind them.
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. Separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan enable the approach to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that result far from obstacles make the movement goals of the robotic assistant predictable, improving man-machine interaction.
As service robotics research advances rapidly, availability of objective, reproducible test specifications and evaluation criteria and also of benchmarking is more and more felt to be desirable in the community. As a first step towards benchmarking, in this paper we propose a formalization of tests - exemplified for domestic grasp&place tasks. The underlying philosophy of our approach is to confront the robot system in a black-box manner with requirements of a “rational customer”, and characterize the performance of the system in an objective way by the outcomes of a test-suite tailored to this scenario. A formalized single test description consists of a clear and reproducible specification of the robot’s task and the full context on the one hand, and a number of figures which objectively characterize the test result on the other hand. We illustrate this methodology for the domestic assistance scenario.
For face recognition from video streams speed and accuracy are vital aspects. The first decision whether a preprocessed image region represents a human face or not is often made by a feed-forward neural network (NN), e.g. in the Viisage-FaceFINDER® video surveillance system. We describe the optimisation of such a NN by a hybrid algorithm combining evolutionary multi-objective optimisation (EMO) and gradient-based learning. The evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy. We compare an EMO and a single objective approach, both with online search strategy adaptation. It turns out that EMO is preferable to the single objective approach in several respects.
In this paper we describe a session management system for setting up various collaborative classroom scenarios. The approach addresses the additional workload of administrating classroom networks placed on the teacher, which is an important aspect for teachers' willingness to implement technology-enhanced learning in schools. The system facilitates the preparation of classroom scenarios and the ad-hoc installation of networked collaborative sessions. We provide a graphical interface which is usable for administration, monitoring, and for the specification of a wide variety of different classroom situations with group work. The resulting graphical specifications are well suited to be re-used in the more formal learning design format IMS/LD; this is achieved by an automatable transformation of the scenarios to LD documents. Keywords: collaborative classroom scenarios, lightweight classroom orchestration, learning design, shared workspaces.
NewsGrid
(2005)
Film archives, particularly those storing video material on all kinds of news items, are important information sources for TV stations. Each TV station creates and maintains its own archive by storing video material received via satellite and/or internet on tapes in analogue and/or digital form. It cannot be predicted in advance which of this archived material will actually be used, so all material received must be catalogued and stored, although on average only a small percentage of it is actually used. Due to the increase in data volumes, the cost of maintaining such repositories and retrieving particular stored items has become prohibitive. Today, digital videos are increasingly replacing analogue material. They offer the advantage that they can be stored in distributed databases and then transferred without loss of quality to the transmitting station. Such digital archives can be made accessible to many TV stations, thus spreading the maintenance cost: individual stations retrieve only the material they actually need for particular newscasts. In this paper a grid architecture for distributed video archives for news broadcasts is proposed. A crucial aspect of such a grid approach is that advanced methods for retrieving data must be available.
Sensing and processing of multimedia information is one of the basic traits of human beings. The development of digital technologies and applications allows the production of huge amounts of multimedia data. The rapidly decreasing prices for hardware such as digital cameras/camcorders, sound cards and the corresponding displays led to wide distribution of multimedia-capable input and output devices in all fields of the everyday life, from home entertainment to companies and educational organisations. Thus, multimedia information in terms of digital pictures, videos, and music can be created intuitively and is affordable for a broad spectrum of users.
The harmonic and interharmonic analysis recommendations are contained in the latest International Electrotechnical Commission (IEC) standards on power quality. Measurement and analysis experiences have shown that great difficulties arise in the interharmonics detection and measurement with acceptable levels of accuracy. In this paper, the spectral leakage problems of the discrete Fourier transform due to synchronization errors of interharmonics are analyzed. The time-domain averaging is investigated for the processing of harmonics in the framework of the IEC standards. A difference filter is proposed to detect interharmonics and can be compatible with the IEC standards. Simulations and the field results show the usefulness of the proposed methods.
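The synchronization problem can be made concrete with a small FFT experiment; the sampling rate, window length and component frequencies below are chosen for illustration only. A harmonic whose frequency is an integer multiple of the analysis window's frequency resolution lands exactly on a DFT bin, while an interharmonic that is not smears its energy across neighboring bins.

```python
import numpy as np

fs = 5000.0             # sampling rate in Hz (illustrative)
T = 0.2                 # 200 ms analysis window -> 5 Hz frequency resolution
n = int(fs * T)         # 1000 samples
t = np.arange(n) / fs

# 50 Hz fundamental (synchronized: an integer number of cycles fits the window)
# plus a 182 Hz interharmonic (not a multiple of 5 Hz -> spectral leakage).
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * np.sin(2 * np.pi * 182 * t)

# Single-sided amplitude spectrum.
spectrum = np.abs(np.fft.rfft(signal)) / (n / 2)

def amp_at(f_hz):
    """Amplitude read from the DFT bin nearest to f_hz."""
    return spectrum[int(round(f_hz / (fs / n)))]
```

The 50 Hz amplitude is recovered almost exactly, while neither the 180 Hz nor the 185 Hz bin holds the interharmonic's full 0.1 amplitude; its energy leaks into the surrounding bins, which is why time-domain averaging or a dedicated detection filter is needed for interharmonics.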
We extend the attractor dynamics approach to generate goal-directed movement of a redundant, anthropomorphic arm while avoiding dynamic obstacles and respecting joint limits. To make the robot's movements human-like, we generate approximately straight-line trajectories by using two heading direction angles of the tool-point, quite analogously to how movement is represented in the primate central nervous system. Two additional angles control the tool's spatial orientation so that it follows the tool-point's collision-free path. A fifth equation governs the redundancy angle, which controls the elevation of the elbow so as to avoid obstacles and respect joint limits. These variables make it possible to generate movement while sitting in an attractor (or, in the language of the potential field approach, in a minimum). We demonstrate the approach on an assistant robot which interacts with human users in a shared workspace.
Methods of red-hot rod shape testing require a robust non-contact measurement principle, as a touch point could lead to damage to the rod and the detection unit. Therefore, a new basic approach based on high-frequency eddy current (HFEC) has been investigated. Due to its robustness and its ability to determine the rod shape even above the Curie temperature, this principle is especially well suited and can be implemented directly in the production process. A first automatic measurement setup was successfully developed, with promising results: an ovality defect was detected with a parallel RLC-oscillator. The capacitance of this RLC-oscillator is constant, whereas the inductance is the measured quantity that varies due to eddy current interactions with the rod.
The presented work formulates a framework in which early prediction of a driver's lane change behavior is realized. We aim to build a representation of drivers' lane change behavior in order to recognize and predict a driver's intentions, as a first step towards a realistic driver model. In the test bed of the Institut für Neuroinformatik, based on the traffic simulator NISYS TRS 1, 10 individuals drove in the experiments and performed more than 150 lane change maneuvers. Lane offset, distance to the front car and time to contact were recorded. The acquired data were used to train, in parallel, a recurrent neural network, a feed-forward neural network and a set of support vector machines. In the subsequent test drives the system was able to predict a lane change 1.5 s in advance. The proposed approach describes a framework for lane change detection and prediction, which will serve as a prerequisite for a successful driver model.
For the rod shape measurement of hot-rolled round steel bars (rods), the high-frequency eddy current method is especially well suited, as it requires no contact point and is not limited to temperatures below the Curie point. Defects of the rod's shape can be detected by measuring the impedance spectrum of the RLC oscillator. In a first laboratory setup, an Agilent impedance analyser was used for initial tests. However, this setup cannot be applied in a steel plant due to the difficult environmental conditions. Hence, a vector network analyser for passive impedance measurement that is applicable in these surroundings was developed.
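The measurement principle behind both setups rests on the resonance of a parallel RLC oscillator, f0 = 1/(2π√(LC)): with the capacitance fixed, eddy-current coupling to the rod changes the effective inductance and thereby shifts the resonance. A minimal sketch with assumed component values (not taken from the paper):

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency f0 = 1 / (2*pi*sqrt(L*C)) of an ideal LC/RLC oscillator."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Illustrative values: fixed capacitance, and an effective inductance
# that drops slightly when eddy currents in the rod oppose the coil field.
C = 100e-12            # 100 pF, constant
L_free = 10e-6         # 10 uH, coil without rod (assumed)
L_coupled = 9.5e-6     # effective inductance near the rod (assumed)

f_free = resonant_frequency(L_free, C)
f_coupled = resonant_frequency(L_coupled, C)
print(f"resonance shift: {f_coupled - f_free:.0f} Hz")  # resonance rises as L falls
```

Tracking this resonance shift over the impedance spectrum is what makes shape defects such as ovality observable without any contact to the rod.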
The fat content of the liver is an essential parameter in deciding whether a liver is suitable for transplantation. Determining the fat content is often challenging, and usually there is not enough time to bring a specimen to a pathology laboratory. This is why transplantation clinics need a technique to measure the fat content of a graft. In this paper, the theoretical basics and an existing laboratory setup are presented.
We present an architecture based on the Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit, and in multi-item tracking when several items move simultaneously.
In this paper we describe an architecture for behavioral organization based on dynamical systems. This architecture enables the generation of complex behavioral sequences, which is demonstrated using the example of approaching and passing a door. The behavioral sequence is generated by activating and deactivating the elementary behaviors depending on sensory information and internal logical conditions. The architecture is demonstrated both on a mobile KOALA robot and in simulation.
Integrating Orientation Constraints into the Attractor Dynamics Approach for Autonomous Manipulation
(2010)
Generating collision free reaching movements for redundant manipulators using dynamical systems
(2010)
For autonomous robots to manipulate objects in unknown environments, they must be able to move their arms without colliding with nearby objects, other agents or humans. The simultaneous avoidance of multiple obstacles in real time by all link segments of a manipulator is still a hard task, both in practice and in theory. We present a systematic scheme for the generation of collision-free movements for redundant manipulators in scenes with arbitrarily many obstacles. Based on the dynamical systems approach to robotics, constraints are formulated as contributions to a dynamical system that erect attractors for targets and repellors for obstacles. These contributions are formulated in terms of variables relevant to each constraint and then transformed into vector fields over the manipulator joint velocity vector as an embedding space in which all constraints are simultaneously observed. We demonstrate the feasibility of the approach by implementing it on a real anthropomorphic 8-degree-of-freedom redundant manipulator. In addition, performance is characterized by detecting failures in a systematic simulation experiment in randomized scenes with varying numbers of obstacles.
Generating flexible collision-free reaching movements is a standard task for autonomous articulated robots that is critical especially when such systems interact with humans in a service robotics setting. Current solutions are still challenging to put into practice. Here we generalize an approach first used to plan end-effector movement that is based on attractor dynamical systems. We show how different contributions to the motion planning dynamics can be formulated in constraint-specific reference frames and then transformed into the frame of the joint velocity vector. We implement this system on an 8 DoF redundant manipulator and show its feasibility in a simulation. A systematic experiment with randomly generated obstacle scenes characterizes the performance of the system. Especially challenging configurations of obstacles are discussed to illustrate how the method solves these cases.
For any kind of assistance system, the ability to interact with the human operator and to take his or her assumptions and expectations into account is the basis for reasonable behavior. As a consequence, human behavior has to be studied in order to generate driver models that are learned from human driving data. In this work we focus on improving immersion in a driving simulation environment by developing and implementing a cheap and efficient method for head tracking. We also explain why head-tracking feedback is crucial for the quality of collected behavioural data, especially for simulators with close screen distances.
Today virtually every student owns a reasonably powerful mobile device that allows it to be integrated into learning scenarios. One of the drawbacks of the fast evolution of reasonably powerful devices is the heterogeneity that these kinds of devices usually bring with them. This paper provides an overview of how rich mobile learning scenarios can be implemented platform-independently on the basis of HTML5 and JavaScript. The paper presents a mobile learning application based on the principles of Situated Learning, developed entirely in HTML5. It also presents the results of tests performed with the application, which were aimed at finding out the difference in performance users perceived compared with the native desktop version of the application, and the added value that mobility introduces in learning activities.
Recognition of emotions from multimodal cues is of basic interest for the design of many adaptive interfaces in human-machine interaction (HMI) in general and human-robot interaction (HRI) in particular. It provides a means to incorporate non-verbal feedback in the course of interaction. Humans express their emotional and affective state rather unconsciously, exploiting their different natural communication modalities such as body language, facial expression and prosodic intonation. In order to achieve applicability in realistic HRI settings, we develop person-independent affective models. In this paper, we present a study on multimodal recognition of emotions from such auditive and visual cues for interaction interfaces. We recognize six classes of basic emotions plus the neutral state of talking persons. The focus hereby lies on the simultaneous online visual and acoustic analysis of speaking faces. A probabilistic decision-level fusion scheme based on Bayesian networks is applied to draw benefit from the complementary information of both the acoustic and the visual cues. We compare the performance of our state-of-the-art recognition systems for the separate modalities to the improved results after applying our fusion scheme on both the DaFEx database and real-life data captured directly from the robot. We furthermore discuss the results with regard to the theoretical background and future applications.
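Decision-level fusion of the two modalities can be sketched in its simplest form by assuming, as a naive-Bayes simplification of the Bayesian-network scheme above, that the acoustic and visual cues are conditionally independent given the emotion class. The class names and likelihood values below are hypothetical:

```python
def fuse(prior, p_audio, p_video):
    """Fuse per-modality likelihoods P(a|e) and P(v|e) with a class prior P(e).

    Posterior P(e|a,v) is proportional to P(e) * P(a|e) * P(v|e) under the
    conditional-independence assumption; the result is normalized to sum to 1.
    """
    unnorm = {e: prior[e] * p_audio[e] * p_video[e] for e in prior}
    z = sum(unnorm.values())
    return {e: v / z for e, v in unnorm.items()}

# Hypothetical three-class example (a subset of the six basic emotions):
prior   = {"happy": 1/3, "angry": 1/3, "neutral": 1/3}
p_audio = {"happy": 0.5, "angry": 0.3, "neutral": 0.2}   # acoustic cue
p_video = {"happy": 0.6, "angry": 0.1, "neutral": 0.3}   # visual cue
post = fuse(prior, p_audio, p_video)
print(max(post, key=post.get))  # prints "happy": both modalities agree
```

When the modalities disagree, the fused posterior weights each cue by how confidently it discriminates the classes, which is the benefit the decision-level scheme draws from complementary information.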
In the presented work we compare machine learning techniques in the context of lane-change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches using differing feature combinations in order to identify appropriate features, the best feature combination, and the most appropriate machine learning technique for the described task. Based on the data acquired from human drivers in the traffic simulator NISYS TRS 1, we trained a recurrent neural network, a feed-forward neural network and a set of support vector machines. In subsequent test drives, the system was able to predict lane changes up to 1.5 s in advance.
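As an illustration of such a learning setup, a minimal stand-in for one of the compared learners (a single logistic unit trained by gradient descent) can be run on synthetic lane-offset and time-to-contact features. The features, labels and hyperparameters below are invented for the sketch, not taken from the recorded driving data:

```python
import math

def train_logistic(samples, labels, lr=0.1, epochs=500):
    """Train a single logistic unit (minimal feed-forward classifier) by
    per-sample gradient descent on the cross-entropy loss."""
    dim = len(samples[0])
    w, b = [0.0] * dim, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(lane change)
            g = p - y                         # gradient of the loss w.r.t. z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0

# Hypothetical features: (lane offset, 1 / time-to-contact); label 1 = lane change.
X = [(0.1, 0.1), (0.2, 0.2), (0.8, 0.9), (0.9, 0.7), (0.15, 0.3), (0.85, 0.8)]
y = [0, 0, 1, 1, 0, 1]
w, b = train_logistic(X, y)
print([predict(w, b, x) for x in X])
```

The actual study compares far more capable learners (RNN, MLP, SVMs); the sketch only shows how recorded features map to a binary lane-change decision.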
Simulated reality environments that incorporate humans and physically plausibly behaving robots, provide natural interaction channels, and offer the option to link the simulator to real perception and motion are gaining importance for the development of cognitive, intuitively interacting and collaborating robotic systems. In the present work we introduce a head tracking system which is utilized to incorporate human ego motion into the simulated environment, improving immersion in the context of human-robot collaborative tasks.
This paper describes a system which allows platform-independent access to quizzes of the popular learning platform Moodle. The main focus is on the software architecture, which is implemented on the basis of platform-independent technologies like Web Services, HTML5 and JavaScript. Another aspect is the user interface, which was developed with the goal of running on a broad range of mobile devices, from small mobile phones up to large tablets.
The WWW is the killer app of the internet. In recent years, an enormously increasing number of Web Applications have appeared as a means of human-to-computer interaction, allowing a visitor of a certain website to interact with it. Additionally, the approach of Web Services was introduced in order to allow computer-to-computer interaction on the basis of standardized protocols. This paper shows how the gap between Web Applications and Web Services can be closed by making Web Applications available for computer-to-computer interaction through a systematic approach.
Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures, characterize the critical bifurcations in DNFs, as well as the possible coupling structures among DNFs. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation.
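The dynamic neural fields at the core of such architectures can be sketched in one dimension: an Amari-type field with local excitation and broader inhibition forms a self-stabilized activation peak over a localized input, which is the basic mechanism behind detection, selection and working memory. All parameters below are illustrative, not those of the scene-representation architecture:

```python
import math

def gauss(d, sigma):
    return math.exp(-d * d / (2.0 * sigma * sigma))

def simulate_dnf(n=61, steps=300, dt=0.1):
    """One-dimensional Amari-type dynamic neural field, Euler-integrated:
    tau * du/dt = -u + h + s(x) + (1/n) * sum_x' w(x - x') * f(u(x')).
    """
    tau, h = 1.0, -2.0                                   # time constant, resting level
    u = [h] * n                                          # field activation
    s = [3.0 * gauss(i - 20, 3.0) for i in range(n)]     # localized input at site 20
    w = lambda d: 2.0 * gauss(d, 3.0) - 1.0 * gauss(d, 9.0)  # excitation/inhibition
    f = lambda v: 1.0 / (1.0 + math.exp(-4.0 * v))           # sigmoid output
    for _ in range(steps):
        out = [f(v) for v in u]
        u = [ui + (dt / tau) * (-ui + h + s[i]
             + sum(w(i - j) * out[j] for j in range(n)) / n)
             for i, ui in enumerate(u)]
    return u

u = simulate_dnf()
peak = max(range(len(u)), key=lambda i: u[i])
print(peak)  # a self-stabilized peak forms at the input location
```

The bifurcation from the subthreshold resting state to such a peak is the kind of instability the article characterizes as a building block for cognitive architectures.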
The investigation of neuronal accounts of cognition is closely linked to collaboration between behavioral experiments, theory and application and supports the process of moving from pure behaviorist correlation analysis to gaining a real understanding of the underlying mechanisms. Cognition builds upon the individual behavioral history, and the understanding of cognition is based on neuronal principles.
The study of human behavior incorporates in particular interactive, dynamically changing scenarios with multiple human individuals. The acquisition of behavioral data from human subjects, the modeling of behavior, and the evaluation in interactive scenarios all make it necessary to generate simulated images of reality. Simulations allow the investigator to precisely control the structure of the environment the subject interacts with. Furthermore, situations that would be too dangerous in the real world (e.g. near-crash driving situations) can be investigated using virtual reality.
By nature, simulated reality frameworks are designed to simulate naturalistic environments. Within these environments, ecologically relevant stimuli embedded in a meaningful and controlled context can be presented. The quality of experimental data acquired within the simulated environment depends not least on the degree of immersion of the human subject.
Driving experiments usually attempt to relate observable driver behavior to cognitive inputs. The precise visual (retinal) input of a driver in a driving simulator also depends on the exact position of his head with respect to the screen (Noth et al., 2010). Ego-motion feedback can here be regarded primarily as a continuous calibration.
In a virtual cooperation scenario, consistency matters: if an operator perceives an object at 1 m distance, moving 20 cm towards it should decrease the perceived distance to 80 cm, and moving to the side of an object which occludes another one should reveal the latter (Pretto et al., 2009).
Ego-motion feedback mitigates the cues that remind operators of the fact that they are in a virtual rather than the real world. The way the appearance of a virtual object changes due to a lateral head movement is identical to that of its real counterpart, which means that even relations between real and virtual objects remain consistent (Creem-Regehr et al., 2005; Cutting, 1997).
In this contribution we introduce a head tracking system which is utilized to incorporate human ego motion in simulated environments, improving immersion in the context of a human-robot collaborative task and in an interactive driving simulator.
For both cases, we explain how the ego-motion feedback leads to a more precise comprehension of the virtual scene, and how immersion influences the feeling of being “really” inside the virtual scene, weakening the awareness of the border between the real and the virtual world.
The neuronal basis of movement preparation, during which movement parameters such as movement direction are assigned values, is fairly well understood (Georgopoulos, 2000). Motor and premotor cortex as well as portions of the parietal cortex represent movement parameters through the activity of neuronal populations (Bastian et al., 2003; Cisek & Kalaska, 2005).
This parameter representation is of a dynamic nature, updated in the course of movement; it adapts to boundary conditions of the motion plan or to environmental changes. Schwartz (2004) was able to decode movement information from motor cortical activity and utilized this knowledge to drive a virtual or robotic end-effector, demonstrating that the motor cortex is involved in the generation of movement plans. At this level of abstraction we assume that the movement of an end-effector, as well as human walking movement, is appropriately represented by its direction, which must also satisfy other constraints, such as obstacle avoidance or movement coordination.
A neuronal dynamics of movement generates goal-directed movements and satisfies other constraints, such as obstacle avoidance. Movement is generated by choosing low-dimensional, behaviorally relevant state variables. Behavioral goals are represented as attractors of dynamical systems over such behavioral variables (Schöner et al., 1995). The robot's trajectory emerges as a solution of these dynamical systems, in which the behavioral variables are stabilized at attractors corresponding to behavioral goals. Constraints are included in a similar manner as repellers. Recently we applied this approach to generate reaching movements for manipulators under obstacle avoidance and orientation constraints (Iossifidis & Schöner, 2009; Reimann et al., 2010a,b).
We aim to develop an approach to robotic action based on dynamical systems that is quantitatively modeled on human behavior. By varying the intrinsic parameters obtained for different individuals, we will be able to implement different personal styles of movement. In this contribution we implement the neuronal dynamics of movement on a humanoid robotic system which generates goal-directed walking movements while avoiding obstacles.
Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without using a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher-level functions like forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate walking trajectories for a small humanoid robot, the Aldebaran Nao, that are goal-directed and avoid obstacles. The sensor information comes from a single camera in the head of the robot; the limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variables make a computationally expensive scene representation or local map building unnecessary.
In recent years, a new approach for the dynamic usage of computational power, memory and other resources has come into play: the Cloud Computing paradigm. This new approach needs to be considered from the perspective of IT Service Management, since cloud-based infrastructures have to be managed differently from a usual infrastructure. Based on the IT Infrastructure Library (ITIL) as the de-facto standard for IT Service Management, this paper discusses which processes require particular attention if a certain service is to be deployed in the cloud.
In recent years, the number of mobile devices that are available for learning scenarios has increased considerably. Different learning settings are usually supported by mobile devices: on the one hand we find mobile devices in informal learning settings, and on the other hand in formal learning settings like a usual lecture. This paper motivates the question whether the usage of mobile devices in a usual lecture is something the students actually want. A first case study with a platform-independent prototype is provided, giving an initial indication of preferred usage.
Detection of air trapping in chronic obstructive pulmonary disease by low frequency ultrasound
(2012)
Background: Spirometry is regarded as the gold standard for the diagnosis of COPD, yet the condition is widely underdiagnosed. Therefore, additional screening methods that are easy to perform and to interpret are needed. Recently, we demonstrated that low frequency ultrasound (LFU) may be helpful for monitoring lung diseases. The objective of this study was to evaluate whether LFU can be used to detect air trapping in COPD. In addition, we evaluated the ability of LFU to detect the effects of short-acting bronchodilator medication. Methods: Seventeen patients with COPD and 9 healthy subjects were examined by body plethysmography and LFU. Ultrasound frequencies ranging from 1 to 40 kHz were transmitted to the sternum and received at the back during inspiration and expiration. The high-pass frequency was determined from the inspiratory and the expiratory signals and their difference termed F. Measurements were repeated after inhalation of salbutamol. Results: We found significant differences in F between COPD subjects and healthy subjects. These differences were already significant at GOLD stage 1 and increased with the severity of COPD. Sensitivity for detection of GOLD stage 1 was 83% and for GOLD stages worse than 1 it was 91%. Bronchodilator effects could not be detected reliably. Conclusions: We conclude that low frequency ultrasound is cost-effective, easy to perform and suitable for detecting air trapping. It might be useful in screening for COPD.
The term “Cloud Computing” does not primarily denote new types of core technologies but rather addresses features to do with integration, interoperability and accessibility. Although not new, virtualization and automation are core features that characterize Cloud Computing. In this paper, we explore the possibility of integrating cloud services with educational scenarios without redefining either the technology or the usage scenarios from scratch. Our suggestion is based on solutions that have already been implemented and tested for specific cases.
Collaboration and Technology
(2012)
This book constitutes the proceedings of the 18th Collaboration Researchers' International Working Group Conference on Collaboration and Technology, held in Raesfeld, Germany, in September 2012. The 9 revised papers presented together with 12 short papers were carefully reviewed and selected from numerous submissions. They are grouped into five themes that represent collaborative learning, social media analytics, conceptual and design models, formal modeling and technical approaches and collaboration support in emergency scenarios.
The Desire project aimed at the development and implementation of a mobile service robotics research platform (technology platform) able to handle real-world service robotics tasks. Different modules for different tasks, plus an interaction infrastructure, were integrated on this platform. An example of a real-world scenario is supporting a handicapped person in cleaning up a kitchen in a home environment.
One of the main challenges to be solved in this field is the interaction with people. To start an interaction process between a robot and a person, the most important information is the knowledge of the interacting partner's identity and whether the interacting partner is present or not. This means the robot must be able to detect and ultimately identify persons. Accurate identification of specific individuals has to be done by analyzing the individual features of each person. A typical feature set that allows for a distinct identification of a specific person is often extracted from the facial image acquired by a camera. This feature set is stored in a database to allow the identification of different persons independent of place and time by comparing given feature sets. Thus, a face recognition module which includes face detection and identification algorithms was integrated into the technology platform.
In recent years, the number of reasonably powerful mobile devices has increased; in 2011, for example, the number of smartphones grew to more than 300 million units. A lot of research has already been conducted on mobile devices acting as Cloud Service consumers, but still not much effort is put into mobile devices in the role of Cloud Service providers. Therefore, this paper presents an approach that allows mobile devices like smartphones or tablets to be utilized as Cloud Service providers. In order to make this a reasonable approach, some of the occurring problems are discussed and it is shown how the presented architecture is able to overcome them. Last but not least, this paper describes some performance tests of the chosen implementation for mobile Web Services.
Integrating Social Networking Sites in Day-to-Day Learning Scenarios - A Facebook Based Approach
(2012)
In recent years, the number of users of social networking sites has increased steadily. Especially younger people spend a tremendous amount of time on social networking sites like Facebook, YouTube, Flickr, Google+ and many more. Since this is obviously the place on the World Wide Web where our students spend their spare time, we integrated social networking sites into our day-to-day learning scenarios: on the one hand to start working with our students where they feel comfortable, and on the other hand to foster communication among our students about the topics of the lectures.
In recent years the diversity and the ownership of mobile devices have steadily increased, while the prices for these kinds of devices have decreased to a level that allows many students to own reasonably powerful devices. As mobile devices are also being used in learning scenarios, the challenge today is the integration of multiple heterogeneous devices into existing and upcoming learning scenarios. This paper describes an architecture that allows easy integration of various kinds of mobile and non-mobile devices. The presented architecture is exemplified by a group discussion scenario in a heterogeneous learning environment. The paper concludes with the description of a pilot study using the described system.
One of the latest hypes in IT is the well-known Cloud Computing paradigm, which has emerged in recent years as a paradigm for the dynamic usage of computational power, memory and other computational resources. With respect to hypes, the author strongly believes that the Cloud Computing paradigm has the potential to survive the hype and to become a usual technology used for the provision of IT-based services. Therefore, it will be necessary to deploy Cloud Computing based infrastructures in a professional, stable and reliable way. This leads to the idea that the Cloud Computing paradigm needs to be considered from the perspective of IT Service Management, since cloud-based infrastructures have to be managed differently from a usual infrastructure. Based on the IT Infrastructure Library (ITIL) as the de-facto standard for IT Service Management, this paper discusses whether this de-facto standard is also able to manage Cloud Computing based infrastructures, how the corresponding processes might change, and whether ITIL supports a division of labor between the customer and the service provider of a Cloud Computing based infrastructure.
Applications and research efforts in Mobile Learning constitute a growing field in the area of Technology Enhanced Learning. However, despite a permanent increase of mobile internet accessibility and availability of mobile devices over the past years, a mobile learning environment that is easy to use, widely accepted by teachers and learners, uses widespread off-the-shelf software, and covers various application scenarios and mobile devices is not yet available. In this paper, we address this issue by presenting an approach and technical framework called "Mobile Contributions" ("MoCo"). MoCo supports learners in creating and sending contributions through various channels (including third-party solutions like Twitter, SMS and Facebook), which are collected and stored in a central repository for processing, filtering and visualization on a shared display. A set of different learning and teaching scenarios that can be realized with MoCo is described, along with first experiences and insights gained from qualitative and quantitative evaluation.
This paper presents an approach towards a mobile learning environment which is flexible in terms of supported scenarios, supported devices and input channels. The approach makes use of existing and commonly used channels like SMS, Twitter or Facebook to increase the acceptance and ease of use of mobile devices in learning scenarios. Envisaged application scenarios are described along with technical details for their realization.
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give teenage pupils insights into this area in a project-based learning environment with professional tools. Within the last 18 months, this project was successfully conducted several times with participants of different ages.
The role of mobile devices as Web Service consumers is widely accepted, and a large number of mobile applications already consume Web Services in order to fulfill their tasks. Nevertheless, the growing number of powerful mobile devices, e.g. mobile phones and tablets, raises the question whether these devices can not only be used as Web Service consumers but at the same time also as Web Service providers. Therefore, this paper presents an approach that allows Web Services to be deployed on mobile devices through the usage of well-known protocols and standards, e.g. SOAP/REST and WSDL.
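How a mobile device might expose such services can be sketched as a minimal REST-style dispatcher. The routes, payloads and values below are hypothetical; a real deployment would bind the dispatcher to an HTTP server or a SOAP stack and describe it via WSDL, as the paper discusses:

```python
import json

ROUTES = {}

def route(path):
    """Register a handler function under a request path (hypothetical routes)."""
    def register(fn):
        ROUTES[path] = fn
        return fn
    return register

@route("/battery")
def battery(params):
    # A device-local resource exposed as a service; the value is illustrative.
    return {"level": 87, "charging": False}

@route("/echo")
def echo(params):
    return {"echo": params}

def handle_request(path, params=None):
    """Dispatch a request path to its handler and serialize the reply as JSON."""
    handler = ROUTES.get(path)
    if handler is None:
        return json.dumps({"error": "not found", "status": 404})
    return json.dumps(handler(params or {}))

print(handle_request("/battery"))
```

Keeping the dispatch table and serialization this small matters on a phone or tablet, where battery and CPU budgets are the main constraints on acting as a service provider.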