For any kind of assistant system, the ability to interact with the human operator, taking his or her assumptions and expectations into account, is the basis for reasonable behavior. Consequently, human behavior has to be studied in order to generate driver models that are learned from human driving data. In this work we focus on improving immersion in a driving-simulation environment by developing and implementing a cheap and efficient method for head tracking. We also explain why head-tracking feedback is crucial for the quality of collected behavioural data, especially for simulators with close screen distances.
In this paper we describe an architecture for behavioral organization based on dynamical systems. This architecture enables the generation of complex behavioral sequences, which is demonstrated using the example of approaching and passing a door. The behavioral sequence is generated by activating and deactivating the elementary behaviors dependent on sensory information and internal logical conditions. The architecture is demonstrated on a mobile KOALA robot and in simulation as well.
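The activation/deactivation idea described in the abstract can be illustrated with a minimal sketch: each elementary behavior carries an activation variable that relaxes toward a resting level and is pushed above threshold only while its sensory or logical condition holds. The behavior names, parameter values, and the `near_door` condition below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of behavior switching via first-order activation
# dynamics. A behavior counts as "active" when its activation u > 0.

def step_activation(u, condition_met, dt=0.01, tau=0.1, h=-1.0, s=2.0):
    """One Euler step of tau * du/dt = -u + h + input, where the sensory
    or logical condition contributes an excitatory input s when met."""
    inp = s if condition_met else 0.0
    return u + dt / tau * (-u + h + inp)

u_approach, u_pass = -1.0, -1.0            # both behaviors start at rest
for t in range(200):
    near_door = t >= 100                    # stand-in for a sensory condition
    u_approach = step_activation(u_approach, not near_door)
    u_pass = step_activation(u_pass, near_door)

active = [name for name, u in (("approach", u_approach), ("pass", u_pass)) if u > 0.0]
```

Once the condition flips, the previously active behavior relaxes back below threshold while the next one switches on, producing the sequence without a central sequencer.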
Generating collision free reaching movements for redundant manipulators using dynamical systems
(2010)
For autonomous robots to manipulate objects in unknown environments, they must be able to move their arms without colliding with nearby objects, other agents, or humans. The simultaneous avoidance of multiple obstacles in real time by all link segments of a manipulator is still a hard task, both in practice and in theory. We present a systematic scheme for the generation of collision-free movements for redundant manipulators in scenes with arbitrarily many obstacles. Based on the dynamical systems approach to robotics, constraints are formulated as contributions to a dynamical system that erect attractors for targets and repellors for obstacles. These contributions are formulated in terms of variables relevant to each constraint and then transformed into vector fields over the manipulator joint velocity vector as an embedding space in which all constraints are simultaneously observed. We demonstrate the feasibility of the approach by implementing it on a real anthropomorphic 8-degree-of-freedom redundant manipulator. In addition, performance is characterized by detecting failures in a systematic simulation experiment in randomized scenes with varying numbers of obstacles.
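The core attractor/repellor idea can be sketched in one dimension: a heading variable relaxes toward a target direction (attractor) while each obstacle adds a repulsive contribution that falls off with angular offset and distance. The gain values, fall-off width, and force-let form below are assumptions chosen for illustration; the paper formulates these contributions per constraint and embeds them in the joint velocity space.

```python
import numpy as np

def heading_rate(phi, phi_target, obstacles, k_att=1.0, k_rep=2.0, sigma=0.4):
    """d(phi)/dt: one attractive contribution plus one repulsive
    force-let per obstacle, given as (direction, distance) pairs."""
    rate = -k_att * np.sin(phi - phi_target)      # attractor at phi_target
    for phi_obs, dist in obstacles:
        # repellor: pushes phi away from the obstacle direction, with
        # strength decaying in angular offset and with obstacle distance
        rate += k_rep * (phi - phi_obs) \
                * np.exp(-((phi - phi_obs) ** 2) / (2.0 * sigma ** 2)) / dist

    return rate

phi = 0.0
for _ in range(2000):                              # forward-Euler integration
    phi += 0.01 * heading_rate(phi, phi_target=1.0, obstacles=[(0.5, 1.0)])
```

With an obstacle between the start and the target, the stable fixed point of the summed vector field is shifted past the target direction, i.e. the heading is deflected around the obstacle rather than driven straight through it.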
Generating flexible collision-free reaching movements is a standard task for autonomous articulated robots that is critical especially when such systems interact with humans in a service-robotics setting. Current solutions are still challenging to put into practice. Here we generalize an approach first used to plan end-effector movement that is based on attractor dynamical systems. We show how different contributions to the motion-planning dynamics can be formulated in constraint-specific reference frames and then transformed into the frame of the joint velocity vector. We implement this system on an 8-DoF redundant manipulator and show its feasibility in simulation. A systematic experiment with randomly generated obstacle scenes characterizes the performance of the system. Especially challenging configurations of obstacles are discussed to illustrate how the method solves these cases.
Integrating Orientation Constraints into the Attractor Dynamics Approach for Autonomous Manipulation
(2010)
We present an architecture based on the Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit and in multi-item tracking, when several items move simultaneously.
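The neural fields at the core of such an architecture follow Amari-type dynamics: activation relaxes toward a resting level, driven by external input and by a lateral-interaction kernel with local excitation and broader inhibition, so that a localized stimulus forms a self-stabilized peak. The one-dimensional field, kernel shape, and parameter values below are illustrative assumptions (the paper's fields are three-dimensional), chosen so the peak persists after the stimulus is removed.

```python
import numpy as np

def field_step(u, inp, kernel, dt=0.05, tau=1.0, h=-2.0):
    """One Euler step of tau * du/dt = -u + h + input + kernel @ f(u)."""
    f = 1.0 / (1.0 + np.exp(-4.0 * u))            # sigmoidal output
    return u + dt / tau * (-u + h + inp + kernel @ f)

n = 100
x = np.arange(n)
d = np.abs(x[:, None] - x[None, :])
# local excitation (Gaussian, width 3) with weak global inhibition
kernel = 1.0 * np.exp(-d ** 2 / (2.0 * 3.0 ** 2)) - 0.05

u = np.full(n, -2.0)                              # field at resting level
stimulus = 5.0 * np.exp(-(x - 50.0) ** 2 / (2.0 * 3.0 ** 2))
for _ in range(200):
    u = field_step(u, stimulus, kernel)           # peak forms at the input
for _ in range(200):
    u = field_step(u, np.zeros(n), kernel)        # peak self-sustains
```

The self-sustained peak after stimulus removal is what lets such fields serve as working memory for the scene, which is the basis of the updating mechanism the abstract describes.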