Auf gute Zusammenarbeit
(2008)
Generating collision free reaching movements for redundant manipulators using dynamical systems
(2010)
For autonomous robots to manipulate objects in unknown environments, they must be able to move their arms without colliding with nearby objects, other agents, or humans. The simultaneous avoidance of multiple obstacles in real time by all link segments of a manipulator remains a hard task both in practice and in theory. We present a systematic scheme for generating collision-free movements for redundant manipulators in scenes with arbitrarily many obstacles. Based on the dynamical systems approach to robotics, constraints are formulated as contributions to a dynamical system that erect attractors for targets and repellors for obstacles. These contributions are formulated in terms of variables relevant to each constraint and then transformed into vector fields over the manipulator joint velocity vector, an embedding space in which all constraints are simultaneously observed. We demonstrate the feasibility of the approach by implementing it on a real anthropomorphic 8-degree-of-freedom redundant manipulator. In addition, performance is characterized by detecting failures in a systematic simulation experiment in randomized scenes with varying numbers of obstacles.
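The attractor/repellor construction described in this abstract can be illustrated, in a much-reduced form, for a single planar heading direction rather than the full joint-velocity embedding of the paper. The following sketch is an illustration of the general idea only; the function name and all parameter values are assumptions, not taken from the paper:

```python
import numpy as np

def heading_dynamics(phi, psi_target, obstacles, a=2.0, b=4.0, sigma=0.4):
    """Rate of change of the heading phi: an attractor at the target
    direction plus a repellor at each obstacle direction (toy planar
    version of the attractor/repellor contributions)."""
    # Attractor: pulls the heading toward the target direction.
    dphi = -a * np.sin(phi - psi_target)
    # Repellors: push the heading away from each obstacle direction,
    # with strength falling off over an angular range sigma.
    for psi_obs in obstacles:
        diff = phi - psi_obs
        dphi += b * diff * np.exp(-diff**2 / (2 * sigma**2))
    return dphi

# Forward-Euler integration: the heading starts at 0, is attracted
# toward the target at pi/2, and steers around an obstacle at pi/4.
phi, dt = 0.0, 0.01
for _ in range(2000):
    phi += dt * heading_dynamics(phi, np.pi / 2, [np.pi / 4])
```

Because all contributions are additive vector-field terms in the same variable, adding another obstacle is just another repellor term, which is the property the paper exploits when lifting this construction to the joint velocity vector.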
Integrating Orientation Constraints into the Attractor Dynamics Approach for Autonomous Manipulation
(2010)
Generating flexible collision-free reaching movements is a standard task for autonomous articulated robots that is critical especially when such systems interact with humans in a service robotics setting. Current solutions are still challenging to put into practice. Here we generalize an approach first used to plan end-effector movement that is based on attractor dynamical systems. We show how different contributions to the motion planning dynamics can be formulated in constraint-specific reference frames and then transformed into the frame of the joint velocity vector. We implement this system on an 8 DoF redundant manipulator and show its feasibility in a simulation. A systematic experiment with randomly generated obstacle scenes characterizes the performance of the system. Especially challenging configurations of obstacles are discussed to illustrate how the method solves these cases.
We present an architecture based on Dynamic Field Theory for the problem of scene representation. At the core of this architecture are three-dimensional neural fields linking feature to spatial information. These three-dimensional fields are coupled to lower-dimensional fields that provide both a close link to the sensory surface and a close link to motor behavior. We highlight the updating mechanism of this architecture, both when a single object is selected and followed by the robot's head in smooth pursuit and in multi-item tracking when several items move simultaneously.
Temporal stabilization of discrete movement in variable environments: An attractor dynamics approach
(2009)
The ability to generate discrete movement with distinct and stable time courses is important for interaction scenarios both between different robots and with human partners, for catching and interception tasks, and for timed action sequences. In dynamic environments, where trajectories are evolving online, this is not a trivial task. The dynamical systems approach to robotics provides a framework for robust incorporation of fluctuating sensor information, but control of movement time is usually restricted to rhythmic motion and realized through stable limit cycles. The present work uses a Hopf oscillator to produce discrete motion and formulates an online adaptation rule to stabilize total movement time against a wide range of disturbances. This is integrated into a dynamical systems framework for the sequencing of movement phases and for directional navigation, using 2D-planar motion as an example. The approach is demonstrated on a Khepera mobile unit in order to show its reliability even when depending on low-level sensor information.
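The mechanism sketched in this abstract rests on the standard Hopf normal form, whose stable limit cycle of radius sqrt(mu) is traversed with angular velocity omega, so one half-cycle yields a discrete movement of nominal duration T = pi/omega. A minimal sketch of that half-cycle mechanism follows; the paper's online adaptation rule for stabilizing movement time against disturbances is not reproduced here, and the parameter values are illustrative assumptions:

```python
import numpy as np

def hopf_step(x, y, mu, omega, dt):
    """One Euler step of the Hopf normal form: a stable limit cycle
    of radius sqrt(mu), traversed with angular velocity omega."""
    r2 = x * x + y * y
    dx = (mu - r2) * x - omega * y
    dy = (mu - r2) * y + omega * x
    return x + dt * dx, y + dt * dy

# A discrete movement: half a cycle carries the x component from -1 to +1.
# Choosing omega = pi / T fixes the nominal movement time T.
T = 1.0
omega = np.pi / T
x, y = -1.0, 0.0          # start on the limit cycle (mu = 1)
dt = 0.001
for _ in range(int(T / dt)):
    x, y = hopf_step(x, y, mu=1.0, omega=omega, dt=dt)
# x is now close to +1: the discrete movement has reached its end state.
```

Because the limit cycle is an attractor, perturbations of the state decay back onto it; stabilizing the *total movement time*, as the paper does, additionally requires adapting omega online.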
Based on the concepts of dynamic field theory (DFT), we present an architecture that autonomously generates scene representations by controlling gaze and attention, creating visual objects in the foreground, tracking objects, reading them into working memory, and taking into account their visibility. At the core of this architecture are three-dimensional dynamic neural fields (DNFs) that link feature to spatial information. These three-dimensional fields couple into lower dimensional fields, which provide the links to the sensory surface and to the motor systems. We discuss how DNFs can be used as building blocks for cognitive architectures, characterize the critical bifurcations in DNFs, as well as the possible coupling structures among DNFs. In a series of robotic experiments, we demonstrate how the DNF architecture provides the core functionalities of a scene representation.
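The building block named in this abstract, the dynamic neural field, is commonly written in the Amari form: tau * du/dt = -u + h + input + lateral interaction, with a sigmoidal output nonlinearity and a kernel of local excitation and surround inhibition. A one-dimensional sketch follows; the parameter values are illustrative assumptions and not those of the three-dimensional architecture described:

```python
import numpy as np

def gauss(x, mu, sigma):
    return np.exp(-(x - mu) ** 2 / (2 * sigma ** 2))

n = 101
x = np.linspace(0.0, 100.0, n)
dx = x[1] - x[0]

h = -5.0                   # negative resting level: field is silent without input
u = np.full(n, h)          # field activation u(x)
# Interaction kernel: local excitation with broader surround inhibition.
w = 4.0 * gauss(x, 50, 4) - 2.0 * gauss(x, 50, 12)
s = 7.0 * gauss(x, 30, 3)  # localized external input at x = 30

tau, dt = 10.0, 1.0
for _ in range(500):
    f = 1.0 / (1.0 + np.exp(-u))                   # sigmoidal output
    lateral = np.convolve(f, w, mode="same") * dx  # lateral interaction
    u += (dt / tau) * (-u + h + s + lateral)

# A self-stabilized activation peak has formed at the input location,
# the elementary detection decision from which the architecture is built.
```

The bifurcation from the silent state to such a self-stabilized peak, and the coupling of several fields of this kind, are exactly the ingredients the abstract refers to as building blocks for cognitive architectures.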