CORA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator's speech, gestures, and gaze, and to recognize objects. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
In this paper we describe an architecture for behavioral organization based on dynamical systems. This architecture
enables the generation of complex behavioral sequences, which is demonstrated using the example of approaching and
passing a door. The behavioral sequence is generated by activating and deactivating the elementary behaviors, depending
on sensory information and internal logical conditions. The architecture is demonstrated both on a mobile KOALA robot and
in simulation.
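The activation and deactivation of elementary behaviors can be sketched with a normal-form activation dynamics of the kind commonly used in this line of work: each behavior carries an activation variable whose fixed points are switched by sensory and logical conditions. The equation, parameter values, and the "door reached" condition below are illustrative stand-ins, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def step_activation(u, alpha, dt=0.01, noise=5e-3):
    """One Euler step of the activation dynamics du/dt = alpha*u - u**3.

    For alpha > 0 the stable fixed points are at |u| = sqrt(alpha)
    (behavior active); for alpha < 0 the only attractor is u = 0
    (behavior inactive). Small noise lets a newly activated behavior
    escape the now-unstable state u = 0.
    """
    return u + dt * (alpha * u - u**3) + noise * np.sqrt(dt) * rng.standard_normal()

# Illustrative sequence: "approach door" is active until a sensory
# condition (here a simple time threshold standing in for "door
# reached") flips the competitive parameters, deactivating it and
# activating "pass door".
u_approach, u_pass = 0.1, 0.1
for t in range(3000):
    door_reached = t > 1500
    a_approach = -1.0 if door_reached else 1.0
    a_pass = 1.0 if door_reached else -1.0
    u_approach = step_activation(u_approach, a_approach)
    u_pass = step_activation(u_pass, a_pass)

print(round(abs(u_approach), 1), round(abs(u_pass), 1))
```

Because each behavior sits in an attractor of its own dynamics, the sequence is robust: a transient sensor glitch perturbs the activation variables but does not by itself flip the behavioral state.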
We extend the attractor dynamics approach to generate goal-directed movement of a redundant, anthropomorphic arm while avoiding dynamic obstacles and respecting joint limits. To make the robot's movements human-like, we generate approximately straight-line trajectories by using two heading-direction angles of the tool point, quite analogously to how movement is represented in the primate central nervous system. Two additional angles control the tool's spatial orientation so that it follows the tool point's collision-free path. A fifth equation governs the redundancy angle, which controls the elevation of the elbow so as to avoid obstacles and respect joint limits. These variables make it possible to generate movement while sitting in an attractor (or, in the language of the potential field approach, in a minimum). We demonstrate the approach on an assistant robot, which interacts with human users in a shared workspace.
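The core idea of the attractor dynamics approach can be illustrated for a single heading-direction angle: the target contributes an attractor, an obstacle contributes a range-limited repellor, and the angle simply relaxes within the resulting vector field. The functional forms and parameter values below are a generic sketch of this class of models, not the paper's five coupled equations.

```python
import numpy as np

def heading_rate(phi, psi_tar, psi_obs, lam=2.0, beta=4.0, sigma=0.3):
    """Rate of change of a heading-direction angle phi.

    f_tar places an attractor at the target direction psi_tar;
    f_obs places a repellor at the obstacle direction psi_obs whose
    influence falls off over the angular range sigma. All names and
    parameter values are illustrative.
    """
    f_tar = -lam * np.sin(phi - psi_tar)
    f_obs = beta * (phi - psi_obs) * np.exp(-(phi - psi_obs)**2 / (2 * sigma**2))
    return f_tar + f_obs

# Euler integration: the heading relaxes toward the target direction
# while being deflected away from the obstacle direction en route.
phi, dt = 0.0, 0.01
psi_tar, psi_obs = 1.2, 0.5
for _ in range(2000):
    phi += dt * heading_rate(phi, psi_tar, psi_obs)
print(round(phi, 2))
```

The key property, emphasized in the abstract, is that the system generates movement while *sitting in* an attractor: the planning variable tracks a continuously shifting fixed point rather than jumping between precomputed plans.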
The presented work formulates a framework for the early prediction of drivers' lane-change behavior. We aim to build a representation of lane-change behavior in order to recognize and predict driver intentions, as a first step towards a realistic driver model. In the test bed of the Institut für Neuroinformatik, based on the traffic simulator NISYS TRS 1, ten individuals took part in the experiments and performed more than 150 lane-change maneuvers. Lane offset, distance to the front car, and time to contact were recorded. The acquired data was used to train, in parallel, a recurrent neural network, a feed-forward neural network, and a set of support vector machines. In subsequent test drives, the system was able to predict lane changes 1.5 s before they occurred. The proposed approach describes a framework for lane-change detection and prediction, which will serve as a prerequisite for a successful driver model.
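The prediction task reduces to a supervised classification problem over the recorded features. As a sketch, the snippet below trains a simple logistic-regression classifier, standing in for the paper's RNN / feed-forward net / SVM ensemble, on synthetic stand-ins for two of the recorded features; the data distributions and all parameter values are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for the recorded features (lane offset and
# time to contact); the real study used recordings from the
# NISYS TRS traffic simulator.
n = 400
y = rng.integers(0, 2, n).astype(float)              # label: 1 = lane change ahead
lane_offset = 0.1 + 0.5 * y + 0.1 * rng.standard_normal(n)
ttc = 8.0 - 4.0 * y + 1.0 * rng.standard_normal(n)   # lane changes at low TTC
X = np.column_stack([lane_offset, ttc, np.ones(n)])  # bias column

# Logistic regression trained by batch gradient descent, a minimal
# stand-in for the classifiers compared in the paper.
w = np.zeros(3)
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))
    w -= 0.01 * X.T @ (p - y) / n

acc = np.mean(((1.0 / (1.0 + np.exp(-X @ w))) > 0.5) == (y == 1))
print(f"training accuracy: {acc:.2f}")
```

In the actual system, running such a classifier on a sliding window of the feature streams is what yields a prediction horizon (here, 1.5 s) ahead of the maneuver itself.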
Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures, characterize the critical bifurcations in DNFs, as well as the possible coupling structures among DNFs. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation.
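The building block of such architectures is the dynamic neural field itself. The snippet below simulates a minimal one-dimensional Amari-type field and illustrates the detection instability, one of the critical bifurcations discussed above: a localized input drives the field through a bifurcation into a self-stabilized activation peak. The kernel form, resting level, and all parameter values are generic textbook choices, not those of the presented architecture.

```python
import numpy as np

# Minimal 1-D dynamic neural field (Amari-type):
#   tau * du/dt = -u + h + s(x) + integral w(x - x') f(u(x')) dx'
N = 101
x = np.linspace(-10, 10, N)
dx = x[1] - x[0]
h = -5.0                                 # resting level (field off without input)
u = np.full(N, h)

def kernel(d, a_exc=2.0, s_exc=1.5, a_inh=1.0, s_inh=4.0):
    """Local-excitation / broader-inhibition interaction kernel."""
    return (a_exc * np.exp(-d**2 / (2 * s_exc**2))
            - a_inh * np.exp(-d**2 / (2 * s_inh**2)))

W = kernel(x[:, None] - x[None, :])
s = 6.0 * np.exp(-x**2 / (2 * 1.0**2))   # localized input at x = 0

tau, dt = 10.0, 1.0
for _ in range(500):
    f = 1.0 / (1.0 + np.exp(-u))         # sigmoidal output nonlinearity
    u += (dt / tau) * (-u + h + s + W @ f * dx)

print(round(u[N // 2], 1), round(u[0], 1))
```

The resulting peak is self-stabilized by the local excitation and is localized by the broader inhibition; higher-dimensional fields of the same kind, coupled along shared dimensions, provide the feature-space links described in the abstract.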