CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about speech, gestures, and gaze of the operator, and for object recognition. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot’s gripper (force sensing). The design objective has been to exploit the human operator’s intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. Separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan enable the approach to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that emerge far from obstacles make the movement goals of the robotic assistant predictable for the human operator, improving man-machine interaction.
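The heading-direction dynamics described in this abstract can be sketched as follows. This is an illustrative reconstruction of the general attractor-dynamics scheme, not the authors' implementation; all parameter values (the strengths `lam_tar`, `lam_obs` and the angular range `sigma`) are assumptions.

```python
import math

def heading_rate(phi, psi_tar, psi_obs_list,
                 lam_tar=2.0, lam_obs=4.0, sigma=0.4):
    """Rate of change dphi/dt of the end-effector heading direction phi."""
    # attractive force-let: a stable fixed point at the target direction
    f_tar = -lam_tar * math.sin(phi - psi_tar)
    # repulsive force-lets: unstable fixed points at obstacle directions,
    # with a limited angular range of influence sigma
    f_obs = sum(
        lam_obs * (phi - psi)
        * math.exp(-(phi - psi) ** 2 / (2 * sigma ** 2))
        for psi in psi_obs_list
    )
    return f_tar + f_obs

# Euler integration: the heading relaxes toward the target direction
# (1.0 rad) while being deflected away from an obstacle direction (0.5 rad),
# so the attractor it tracks is shifted slightly past the target.
phi, dt = 0.0, 0.01
for _ in range(2000):
    phi += dt * heading_rate(phi, psi_tar=1.0, psi_obs_list=[0.5])
```

Because the system always sits in (or near) an attractor that shifts continuously, it avoids the spurious local minima of potential field methods, where the gradient can vanish far from the goal.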
In the presented work we compare machine learning techniques in the context of lane change behavior performed by humans in a semi-naturalistic simulated environment. We evaluate different learning approaches using differing feature combinations in order to identify appropriate features, the best feature combination, and the most appropriate machine learning technique for the described task. Based on the data acquired from human drivers in the traffic simulator NISYS TRS 1, we trained a recurrent neural network, a feed-forward neural network, and a set of support vector machines. In the subsequent test drives the system was able to predict lane changes up to 1.5 s in advance.
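The kind of feature-based prediction task described above can be illustrated with a minimal sketch. As a deliberately simpler stand-in for the networks and SVMs mentioned, it trains a logistic classifier by stochastic gradient descent on invented features (lateral velocity, turn signal); the data, features, and model are hypothetical and do not reproduce the authors' setup.

```python
import math
import random

random.seed(0)

def make_sample():
    """Synthetic driving sample: lane-change maneuvers tend to show a
    larger lateral velocity and (often) an active turn signal."""
    lane_change = random.random() < 0.5
    lateral_velocity = random.gauss(0.4 if lane_change else 0.0, 0.1)
    blinker = 1.0 if (lane_change and random.random() < 0.8) else 0.0
    return [lateral_velocity, blinker], 1.0 if lane_change else 0.0

data = [make_sample() for _ in range(400)]

def predict(w, b, x):
    """Logistic model: probability of an upcoming lane change."""
    return 1 / (1 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

# stochastic gradient descent on the cross-entropy loss
w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(50):
    for x, y in data:
        g = predict(w, b, x) - y
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

# training accuracy on the synthetic data
acc = sum((predict(w, b, x) > 0.5) == (y > 0.5) for x, y in data) / len(data)
```

Comparing feature combinations then amounts to retraining with subsets of the feature vector and comparing the resulting accuracies.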
This article describes the current state of our research on anthropomorphic robots. Our aim is to make the reader familiar with the two basic principles our work is based on: anthropomorphism and dynamics. The principle of anthropomorphism means a restriction to human-like robots which use vision, audition and touch as their only sensors so that natural man-machine interaction is possible. The principle of dynamics stands for the mathematical framework based on which our robots generate their behavior. Both principles have their root in the idea that concepts of biological behavior and information processing can be exploited to control technical systems.
In this paper we describe an architecture for behavioral organization based on dynamical systems. This architecture enables the generation of complex behavioral sequences, which is demonstrated using the example of approaching and passing a door. The behavioral sequence is generated by activating and deactivating the elementary behaviors depending on sensory information and internal logical conditions. The architecture is demonstrated both on a mobile KOALA robot and in simulation.
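The activation and deactivation of elementary behaviors in such dynamical-systems architectures is commonly realized by competitive activation dynamics. The following sketch is our own illustration of that general scheme, not the paper's code; the gain values and the two-behavior setup are assumptions.

```python
def step(u, alpha, gamma, dt=0.01):
    """One Euler step of competitive activation dynamics: each elementary
    behavior i has an activation u[i]; alpha[i] is its competitive
    advantage (set by sensory and logical conditions), gamma[i][j] the
    strength with which behavior j suppresses behavior i."""
    new_u = []
    for i, ui in enumerate(u):
        comp = sum(gamma[i][j] * u[j] ** 2 * ui
                   for j in range(len(u)) if j != i)
        dui = alpha[i] * ui - abs(alpha[i]) * ui ** 3 - comp
        new_u.append(ui + dt * dui)
    return new_u

# Two behaviors, e.g. "approach door" and "pass door". Here the sensory
# and logical conditions favor behavior 0 (alpha > 0) and disfavor
# behavior 1 (alpha < 0), so behavior 0 wins the competition.
u = [0.1, 0.1]                   # small initial activations
alpha = [1.0, -0.5]              # competitive advantages from conditions
gamma = [[0.0, 2.0], [2.0, 0.0]] # mutual competition
for _ in range(3000):
    u = step(u, alpha, gamma)
# behavior 0 saturates near 1 (active); behavior 1 decays to 0 (inactive)
```

A behavioral sequence then emerges by letting sensor values and internal logical conditions flip the signs of the `alpha` values over time, handing activation from one elementary behavior to the next.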
The investigation of neuronal accounts of cognition is closely linked to the collaboration between behavioral experiments, theory, and application, and it supports the move from purely behaviorist correlation analysis to a real understanding of the underlying mechanisms. Cognition builds upon the individual behavioral history, and the understanding of cognition is based on neuronal principles.
The study of human behavior incorporates in particular interactive, dynamically changing scenarios with multiple human individuals. The acquisition of behavioral data from human subjects, the modeling of behavior, and the evaluation in interactive scenarios all make it necessary to generate simulated images of reality. Simulations allow the investigator to precisely control the structure of the environment the subject interacts with. Furthermore, situations that would be too dangerous in the real world (e.g. near-crash driving situations) can be investigated using virtual reality.
By nature, simulated reality frameworks are designed to simulate naturalistic environments. Within these environments, ecologically relevant stimuli embedded in a meaningful and controlled context can be presented. The quality of experimental data acquired within the simulated environment depends not least on the degree of immersion of the human subject.
Driving experiments usually attempt to relate observable driver behavior to cognitive inputs. The precise visual (retinal) input of a driver in a driving simulator also depends on the exact position of his head with respect to the screen (Noth et al., 2010). The main role of ego-motion feedback can here be regarded as a continuous calibration.
In a virtual cooperation scenario, consistency matters: if an operator perceives an object at 1 m distance, moving 20 cm towards it should decrease the perceived distance to 80 cm, and moving to the side of an object which occludes another one should reveal the latter (Pretto et al., 2009).
The ego-motion feedback attenuates the cues that remind operators that they are in a virtual rather than in the real world. The way the appearance of a virtual object changes due to a lateral head movement is identical to that of its real counterpart, which means that even relations between real and virtual objects remain consistent (Creem-Regehr et al., 2005; Cutting, 1997).
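This consistency requirement can be made concrete with a minimal sketch of how the tracked head position determines where a virtual point must appear on a fixed screen. The coordinate setup (screen plane at z = 0, head in front at z > 0, virtual points behind at z < 0) and all numbers are illustrative assumptions, not the system described here.

```python
def screen_projection(head, point):
    """Intersect the line of sight from the head position to a virtual
    point with the screen plane z = 0; returns the on-screen (x, y)
    position at which the point must be drawn."""
    hx, hy, hz = head
    px, py, pz = point
    t = hz / (hz - pz)            # line parameter at which z = 0
    return (hx + t * (px - hx), hy + t * (py - hy))

# A lateral head movement of 10 cm: the on-screen image of a distant
# point follows the head more closely than that of a nearby point.
# This differential shift is exactly the motion parallax that keeps
# the virtual scene consistent under ego motion.
near_before = screen_projection((0.0, 0.0, 0.6), (0.0, 0.0, -0.5))
near_after  = screen_projection((0.1, 0.0, 0.6), (0.0, 0.0, -0.5))
far_after   = screen_projection((0.1, 0.0, 0.6), (0.0, 0.0, -5.0))
```

Re-rendering the scene from the tracked head position in every frame in this way also handles the occlusion case: moving sideways past a nearer object shifts its image more than that of the object behind it, revealing the latter.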
In this contribution we introduce a head tracking system which is used to incorporate human ego motion into simulated environments, improving immersion in the context of a human-robot collaborative task and in an interactive driving simulator.
For both cases, we explain how the ego-motion feedback leads to a more precise comprehension of the virtual scene, and how immersion strengthens the feeling of being “really” inside the virtual scene and weakens the awareness of the border between the real and the virtual world.
We describe the general concept, system architecture, hardware, and the behavioral abilities of Cora (Cooperative Robot Assistant, see Fig. 1), an autonomous non-mobile robot assistant. Starting from our basic assumption that the behavior to be performed determines the internal and external structure of the behaving system, we have designed Cora anthropomorphically to allow for humanlike behavioral strategies in solving complex tasks. Although Cora was built as a prototype of a service robot system to assist a human partner in industrial assembly tasks, we will show that Cora’s behavioral abilities also transfer to a household environment. After the description of the hardware platform and the basic concepts of our approach, we present some experimental results by means of an assembly task.