Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without consuming a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. Its low computational demands leave enough system resources for higher-level functions such as forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate goal-directed, obstacle-avoiding walking trajectories for a small humanoid robot, the Aldebaran Nao. The only sensor is a single camera in the head of the robot; its limited field of view is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variables make a computationally expensive scene representation or local map building unnecessary.
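The heading-direction dynamics this abstract alludes to can be sketched in a few lines. In the sketch below the rate constants, the Gaussian range term, and all function names are illustrative assumptions, not the paper's implementation: the target direction erects an attractor and each obstacle direction a range-limited repeller.

```python
import math

def heading_rate(phi, psi_tar, psi_obs_list, lam=2.0, beta=4.0, sigma=0.5):
    """Rate of change of the heading direction phi: an attractor at the
    target direction psi_tar, repellers at obstacle directions (radians)."""
    # Target contribution: attractor of strength lam at psi_tar.
    f = -lam * math.sin(phi - psi_tar)
    # Obstacle contributions: repellers with limited angular range sigma.
    for psi_obs in psi_obs_list:
        d = phi - psi_obs
        f += beta * d * math.exp(-d * d / (2 * sigma ** 2))
    return f

def simulate(phi0, psi_tar, psi_obs_list, dt=0.01, steps=2000):
    """Forward-Euler integration of the heading dynamics."""
    phi = phi0
    for _ in range(steps):
        phi += dt * heading_rate(phi, psi_tar, psi_obs_list)
    return phi
```

With no obstacles, the heading relaxes to the target direction; an obstacle near the target shifts the resulting fixed point away from it, which is what makes the trajectory emerge from the dynamics rather than from explicit planning.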
Integrating Orientation Constraints into the Attractor Dynamics Approach for Autonomous Manipulation
(2010)
The neuronal basis of movement preparation, during which movement parameters such as movement direction are assigned values, is fairly well understood (Georgopoulos, 2000). Motor and premotor cortex as well as portions of the parietal cortex represent movement parameters through the activity of neuronal populations (Bastian et al., 2003; Cisek & Kalaska, 2005).
The parameter representation is dynamic in nature and is updated in the course of movement, adapting to boundary conditions of the motion plan or to environmental changes. Schwartz (2004) was able to decode movement direction from motor cortical activity and used this knowledge to drive a virtual or robotic end-effector, demonstrating that the motor cortex is involved in movement planning. At this level of abstraction we assume that the movement of an end-effector, like human walking movement, is appropriately represented by its direction, which must satisfy constraints such as obstacle avoidance or movement coordination.
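The decoding referred to here builds on the population-vector readout associated with Georgopoulos and colleagues: each neuron "votes" with its preferred direction, weighted by its firing rate. A minimal sketch, in which the cosine tuning curve and the neuron count are illustrative assumptions:

```python
import math

def population_vector(preferred_dirs, rates):
    """Population-vector readout: sum the preferred-direction unit vectors
    weighted by firing rates, return the angle of the resulting vector."""
    x = sum(r * math.cos(d) for d, r in zip(preferred_dirs, rates))
    y = sum(r * math.sin(d) for d, r in zip(preferred_dirs, rates))
    return math.atan2(y, x)

# Eight neurons with evenly spaced preferred directions and
# half-rectified cosine tuning around a true direction of 45 degrees.
true_dir = math.pi / 4
dirs = [2 * math.pi * i / 8 for i in range(8)]
rates = [max(0.0, math.cos(d - true_dir)) for d in dirs]
decoded = population_vector(dirs, rates)  # recovers roughly pi/4
```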
A neuronal dynamics of movement generation produces goal-directed movements while satisfying other constraints, such as obstacle avoidance. Movement is generated by choosing low-dimensional, behaviorally relevant state variables. Behavioral goals are represented as attractors of dynamical systems over such behavioral variables (Schöner et al., 1995). The robot's trajectory emerges as a solution of these dynamical systems, in which the behavioral variables are stabilized at attractors corresponding to behavioral goals. Constraints are included in a similar manner as repellers. Recently we applied this approach to generate reaching movements for manipulators under obstacle avoidance and orientation constraints (Iossifidis & Schöner, 2009; Reimann et al., 2010a,b).
We aim to develop an approach to robotic action based on dynamical systems that is quantitatively modeled on human behavior. By varying the intrinsic parameters obtained for different individuals, we will be able to implement different personal styles of movement. In this contribution we implement the neuronal dynamics of movement on a humanoid robotic system, which generates goal-directed walking movements while avoiding obstacles.
Generating collision free reaching movements for redundant manipulators using dynamical systems
(2010)
For autonomous robots to manipulate objects in unknown environments, they must be able to move their arms without colliding with nearby objects, other agents, or humans. The simultaneous avoidance of multiple obstacles in real time by all link segments of a manipulator is still a hard task both in practice and in theory. We present a systematic scheme for the generation of collision-free movements for redundant manipulators in scenes with arbitrarily many obstacles. Based on the dynamical systems approach to robotics, constraints are formulated as contributions to a dynamical system that erect attractors for targets and repellers for obstacles. These contributions are formulated in terms of variables relevant to each constraint and then transformed into vector fields over the manipulator joint velocity vector as an embedding space in which all constraints are simultaneously observed. We demonstrate the feasibility of the approach by implementing it on a real anthropomorphic 8-degree-of-freedom redundant manipulator. In addition, performance is characterized by detecting failures in a systematic simulation experiment in randomized scenes with varying numbers of obstacles.
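The core idea of formulating contributions in constraint-specific variables and transforming them into joint-velocity space can be sketched for a planar two-link arm. This is not the paper's transformation: the Jacobian-transpose mapping, the gains, and all names below are illustrative assumptions that stand in for the general scheme.

```python
import math

def forward_kinematics(q, l=(1.0, 1.0)):
    """End-effector position of a planar two-link arm with link lengths l."""
    x = l[0] * math.cos(q[0]) + l[1] * math.cos(q[0] + q[1])
    y = l[0] * math.sin(q[0]) + l[1] * math.sin(q[0] + q[1])
    return x, y

def jacobian(q, l=(1.0, 1.0)):
    """End-effector Jacobian of the two-link arm."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-l[0] * s1 - l[1] * s12, -l[1] * s12],
            [ l[0] * c1 + l[1] * c12,  l[1] * c12]]

def joint_velocity(q, target, obstacle, k_att=1.0, k_rep=0.5, sigma=0.3):
    """Combine a task-space attractor (toward target) and a range-limited
    repeller (away from obstacle), then map the resulting task-space
    velocity into joint space via the Jacobian transpose."""
    x, y = forward_kinematics(q)
    # Attractor contribution in task space.
    vx = k_att * (target[0] - x)
    vy = k_att * (target[1] - y)
    # Repeller contribution, range-limited by a Gaussian of width sigma.
    dx, dy = x - obstacle[0], y - obstacle[1]
    w = k_rep * math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
    vx += w * dx
    vy += w * dy
    # Jacobian-transpose mapping into joint velocities.
    J = jacobian(q)
    return [J[0][0] * vx + J[1][0] * vy,
            J[0][1] * vx + J[1][1] * vy]
```

For a redundant manipulator, each constraint would contribute such a term in its own reference frame, and the joint velocity vector serves as the common embedding space in which all of them are summed.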
Generating flexible collision-free reaching movements is a standard task for autonomous articulated robots that is critical especially when such systems interact with humans in a service robotics setting. Current solutions are still challenging to put into practice. Here we generalize an approach first used to plan end-effector movement that is based on attractor dynamical systems. We show how different contributions to the motion planning dynamics can be formulated in constraint-specific reference frames and then transformed into the frame of the joint velocity vector. We implement this system on an 8 DoF redundant manipulator and show its feasibility in a simulation. A systematic experiment with randomly generated obstacle scenes characterizes the performance of the system. Especially challenging configurations of obstacles are discussed to illustrate how the method solves these cases.
The overarching research field in which the present work is embedded is concerned with the study of information-processing processes in the brain and the application of the resulting insights to technical systems. In analogy to biological systems, whose makeup results from the demands the environment places on their behavior, anthropomorphism is derived as a design principle for the structure of robotic assistance systems that interact with humans. In the present work, the author addresses the problem of generating motor behavior in three-dimensional space, using the example of an anthropomorphic robot arm in an anthropomorphic robotic assistance system. A general approach was developed that contains and unifies the concepts of generating motor behavior in 3D space, of forward simulation of dynamical systems for system diagnosis and for finding desired system states, and a concept for the organization of behavior. Nonlinear dynamical systems form the mathematical foundation, the uniform formal language of the approach, through which both the robot's motor behavior and its continuous-time subsystems are coupled via feedback.