In the presented work we compare machine learning techniques in the context of lane-change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches using differing feature combinations in order to identify appropriate features, the best feature combination, and the most suitable machine learning technique for the described task. Based on the data acquired from human drivers in the traffic simulator NISYS TRS 1, we trained a recurrent neural network, a feed-forward neural network, and a set of support vector machines. In the subsequent test drives the system was able to predict lane changes up to 1.5 s in advance.
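As a minimal illustration of the prediction setup (our own sketch, not the paper's pipeline: we use a plain logistic-regression classifier in place of the reported SVMs and neural networks, and the feature names and toy labels are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for windowed driving features:
# [lateral offset, steering angle, turn-indicator state].
X = rng.normal(size=(200, 3))
# Toy ground truth: a lane change is imminent when a linear
# combination of the first two features is positive.
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Plain batch gradient descent on the logistic loss.
w = np.zeros(3)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.5 * X.T @ (p - y) / len(X)

accuracy = float(np.mean((sigmoid(X @ w) > 0.5) == (y == 1)))
```

In the actual study, such a classifier would be trained on feature windows preceding annotated lane changes and evaluated on how early it fires before the maneuver.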
We describe the general concept, system architecture, hardware, and behavioral abilities of Cora (Cooperative Robot Assistant, see Fig. 1), an autonomous non-mobile robot assistant. Starting from our basic assumption that the behavior to be performed determines the internal and external structure of the behaving system, we have designed Cora anthropomorphically to allow for human-like behavioral strategies in solving complex tasks. Although Cora was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we will show that Cora's behavioral abilities are also transferable to a household environment. After describing the hardware platform and the basic concepts of our approach, we present some experimental results by means of an assembly task.
Autonomous robots with limited computational capacity call for control approaches that generate meaningful, goal-directed behavior without consuming a large amount of resources. The attractor dynamics approach to movement generation is a framework that links sensor data to motor commands via coupled dynamical systems that have attractors at behaviorally desired states. The low computational demands leave enough system resources for higher-level functions such as forming a sequence of local goals to reach a distant one. The comparatively high performance of local behavior generation allows the global planning to be relatively simple. In the present paper, we apply this approach to generate walking trajectories for a small humanoid robot, the Aldebaran Nao, that are goal-directed and avoid obstacles. Sensor information comes from a single camera in the head of the robot; the limited field of vision is compensated by head movements. The design of the dynamical system for motion generation and the choice of state variables make a computationally expensive scene representation or local map building unnecessary.
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. By separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan make it possible to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that emerge as long as the arm is far from obstacles make the movement goals of the robotic assistant predictable for the human operator, improving man-machine interaction.
The neuronal basis of movement preparation, during which movement parameters such as movement direction are assigned values, is fairly well understood (Georgopoulos, 2000). Motor and premotor cortex as well as portions of the parietal cortex represent movement parameters through the activity of neuronal populations (Bastian et al., 2003; Cisek & Kalaska, 2005).
The parameter representation is of a dynamic nature, updated in the course of the movement, and adapts to boundary conditions of the motion plan or to environmental changes. Schwartz (2004) was able to decode movement information from motor cortical activity and utilized this knowledge to drive a virtual or robotic end-effector, demonstrating that the motor cortex is involved in movement planning. At this level of abstraction we assume that the movement of an end-effector, as well as human walking movement, is appropriately represented by its direction while satisfying other constraints, such as obstacle avoidance or movement coordination.
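The population-coding idea behind such decoding can be made concrete with a small sketch (our own illustration, not code from the cited studies): each model neuron is cosine-tuned to a preferred direction, and the movement direction is decoded as the rate-weighted vector sum of preferred directions, the classic population vector.

```python
import math

def population_vector(preferred_dirs, rates, baseline=0.0):
    """Decode a movement direction (radians) as the angle of the
    rate-weighted sum of preferred-direction unit vectors."""
    x = sum((r - baseline) * math.cos(d) for d, r in zip(preferred_dirs, rates))
    y = sum((r - baseline) * math.sin(d) for d, r in zip(preferred_dirs, rates))
    return math.atan2(y, x)

# 16 model neurons with evenly spaced preferred directions,
# cosine-tuned to a true movement direction of 1.0 rad.
n = 16
dirs = [2.0 * math.pi * i / n for i in range(n)]
true_dir = 1.0
rates = [5.0 + 3.0 * math.cos(true_dir - d) for d in dirs]
decoded = population_vector(dirs, rates, baseline=5.0)
```

For evenly spaced preferred directions, the vector sum recovers the encoded direction exactly; with noisy rates the estimate degrades gracefully.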
A neuronal dynamics of movement generates goal-directed movements while satisfying other constraints, such as obstacle avoidance. Movement is generated by choosing low-dimensional, behaviorally relevant state variables. Behavioral goals are represented as attractors of dynamical systems over such behavioral variables (Schöner et al., 1995). The robot's trajectory emerges as a solution of these dynamical systems, in which the behavioral variables are stabilized at attractors corresponding to behavioral goals. Constraints are included in a similar manner as repellers. Recently we applied this approach to generate reaching movements for manipulators under obstacle avoidance and orientation constraints (Iossifidis & Schöner, 2009; Reimann et al., 2010a,b).
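A minimal sketch of this attractor/repeller scheme (our own illustration; the parameter values and the Gaussian-range repeller form are assumptions, not the cited implementations): the heading direction phi follows a dynamical system with an attractor at the goal direction and limited-range repellers at obstacle directions, and the trajectory emerges by integrating that system.

```python
import math

def heading_rate(phi, psi_goal, psi_obstacles,
                 lam_goal=1.0, lam_obs=2.0, sigma=0.5):
    """dphi/dt: attractor at the goal direction, limited-range
    repellers at each obstacle direction (illustrative form)."""
    rate = -lam_goal * math.sin(phi - psi_goal)  # attractor term
    for psi_o in psi_obstacles:
        d = phi - psi_o
        # Repeller whose influence decays with angular distance.
        rate += lam_obs * d * math.exp(-d * d / (2.0 * sigma ** 2))
    return rate

def relax_heading(phi0, psi_goal, psi_obstacles, dt=0.01, steps=2000):
    """Euler-integrate the heading dynamics; phi tracks the
    attractor, deflected by any nearby repellers."""
    phi = phi0
    for _ in range(steps):
        phi += dt * heading_rate(phi, psi_goal, psi_obstacles)
    return phi
```

With no obstacles the heading relaxes to the goal direction; an obstacle direction between start and goal deflects the equilibrium away from the obstacle.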
We aim to develop an approach to robotic action based on dynamical systems that is quantitatively modeled on human behavior. By varying the intrinsic parameters obtained for different individuals, we will be able to implement different personal styles of movement. In this contribution we implement the neuronal dynamics of movement on a humanoid robotic system that generates goal-directed walking movements while avoiding obstacles.
Simulated reality environments that incorporate humans and physically plausibly behaving robots, provide natural interaction channels, and offer the option of linking the simulator to real perception and motion are gaining importance for the development of cognitive, intuitively interacting, and collaborating robotic systems. In the present work we introduce a head-tracking system that is used to incorporate human ego-motion into the simulated environment, improving immersion in the context of human-robot collaborative tasks.