CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator's speech, gestures, and gaze, and to recognize objects. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot's gripper (force sensing). The design objective has been to exploit the human operator's intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
"Advances in Human-Robot Interaction" provides a unique collection of recent research in human-robot interaction. It covers the basic important research areas ranging from multi-modal interfaces, interpretation, interaction, learning, and motion coordination to topics such as physical interaction, systems, and architectures. The book addresses key issues of human-robot interaction concerned with perception, modelling, control, planning and cognition, covering a wide spectrum of applications. This includes interaction and communication with robots in manufacturing environments and the collaboration and co-existence with assistive robots in domestic environments. Among the presented examples are a robotic bartender, a new programming paradigm for a cleaning robot, and an approach to interactive teaching of a robot assistant in a manufacturing environment. This carefully edited book reports on contributions from leading German academic institutions and industrial companies brought together within MORPHA, a four-year project on interaction and communication between humans and anthropomorphic robot assistants.
This paper describes an educational application that combines handhelds (PDAs) and programmable Lego bricks in a classroom scenario that deals with the problem of letting a robot escape from a maze. It is specific to our setting that the problem can be solved both in the physical world, by steering a Lego robot, and in a simulated software environment on a PDA or a PC. This approach enables the students to generate successful sets of rules in the simulation and later to test these rule sets in physical mazes, or to create new types of mazes as challenges for known rule sets. In this paper we describe the technical setting for this scenario and several pedagogical scenarios, and we report on an evaluation with a group of students in a school environment.
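For illustration, one such "rule set" for escaping a maze can be sketched as the classic right-hand wall-follower on a grid. The rule format actually used on the PDA/Lego platform is not specified in the abstract, so the representation below (a grid of walls and a turn-priority rule) is purely an assumption.

```python
# Hypothetical sketch of a maze-escape rule set: the right-hand wall-follower.
# maze: 1 = wall, 0 = free; the robot escapes when it steps off the grid.
# Headings are (row, col) deltas; RIGHT_OF maps each heading to the heading
# obtained by turning right (rows grow downward).
RIGHT_OF = {(0, 1): (1, 0), (1, 0): (0, -1), (0, -1): (-1, 0), (-1, 0): (0, 1)}
LEFT_OF = {v: k for k, v in RIGHT_OF.items()}

def escape(maze, pos, heading, max_steps=100):
    """Apply the rule set until the robot leaves the grid or gives up.

    Rule priority each step: turn right, go straight, turn left, turn back.
    Returns the number of steps taken to escape, or None if not escaped.
    """
    rows, cols = len(maze), len(maze[0])
    for step in range(max_steps):
        for turn in (RIGHT_OF[heading], heading, LEFT_OF[heading],
                     (-heading[0], -heading[1])):
            r, c = pos[0] + turn[0], pos[1] + turn[1]
            if not (0 <= r < rows and 0 <= c < cols):
                return step + 1          # stepped off the grid: escaped
            if maze[r][c] == 0:
                pos, heading = (r, c), turn
                break
    return None                          # no escape within max_steps
```

The same `escape` function can run against a student-designed maze in simulation before the rule set is tried on the physical robot, which mirrors the workflow described in the abstract.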
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. By separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan enable the approach to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that emerge when the arm is far from obstacles make the movement goals of the robotic assistant predictable for the human operator, improving man-machine interaction.
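The core idea of the attractor dynamics approach can be sketched in one dimension: the heading direction relaxes toward a stable fixed point at the target direction while obstacle directions act as localized repellors. The specific functional forms and all parameter values below are illustrative assumptions, not the paper's actual equations.

```python
# Minimal sketch of attractor dynamics for the heading direction phi,
# assuming one target attractor and one obstacle repellor.
# lam, beta, sigma are hypothetical gain and range parameters.
import math

def heading_rate(phi, psi_target, psi_obs, lam=2.0, beta=4.0, sigma=0.6):
    """Rate of change d(phi)/dt of the end-effector heading.

    attract: stable fixed point (attractor) at the target direction.
    repel:   repellor at the obstacle direction, with limited angular range
             so it only acts when the heading points near the obstacle.
    """
    attract = -lam * math.sin(phi - psi_target)
    repel = beta * (phi - psi_obs) * math.exp(
        -(phi - psi_obs) ** 2 / (2 * sigma ** 2))
    return attract + repel

def integrate(phi0, psi_target, psi_obs, dt=0.01, steps=2000):
    """Euler-integrate the dynamics; phi tracks the (possibly shifting)
    attractor, which is what gives the plan its stability properties."""
    phi = phi0
    for _ in range(steps):
        phi += dt * heading_rate(phi, psi_target, psi_obs)
    return phi
```

Because the heading sits in an attractor at every moment, moving targets or obstacles merely shift the fixed point, and the tracked solution shifts with it; this is the property that lets the plan absorb fluctuating sensory input.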
This article describes the current state of our research on anthropomorphic robots. Our aim is to make the reader familiar with the two basic principles our work is based on: anthropomorphism and dynamics. The principle of anthropomorphism means a restriction to human-like robots which use vision, audition and touch as their only sensors, so that natural man-machine interaction is possible. The principle of dynamics stands for the mathematical framework with which our robots generate their behavior. Both principles have their roots in the idea that concepts of biological behavior and information processing can be exploited to control technical systems.
For face recognition from video streams, speed and accuracy are vital aspects. The first decision whether a preprocessed image region represents a human face or not is often made by a feed-forward neural network (NN), e.g. in the Viisage-FaceFINDER® video surveillance system. We describe the optimisation of such a NN by a hybrid algorithm combining evolutionary multi-objective optimisation (EMO) and gradient-based learning. The evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy. We compare an EMO and a single-objective approach, both with online search strategy adaptation. It turns out that EMO is preferable to the single-objective approach in several respects.
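The selection step at the heart of any EMO algorithm can be illustrated with a minimal Pareto-dominance check. Here each candidate network is summarised by two objectives to minimise, classification error and evaluation time; the objective values and helper names are hypothetical and not taken from the system described above.

```python
# Toy sketch of multi-objective (Pareto) selection over candidate networks,
# each represented as an (error, eval_time) tuple to be minimised.

def dominates(a, b):
    """True if candidate a is at least as good as b in every objective
    and strictly better in at least one (minimisation convention)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(candidates):
    """Return the non-dominated candidates: the current trade-off front
    between accuracy and speed, from which the next generation is bred."""
    return [c for c in candidates
            if not any(dominates(other, c)
                       for other in candidates if other != c)]

# Four hypothetical evolved networks: (error, eval_time)
pop = [(0.05, 120), (0.04, 200), (0.07, 80), (0.05, 150)]
front = pareto_front(pop)
```

In the hybrid scheme, candidates on this front would additionally be fine-tuned by gradient-based learning before the next evolutionary round, so the error objective reflects trained rather than raw performance.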
In this paper we describe our efforts to foster educational interoperability in scenarios that use mobile and wireless technologies to support hands-on scientific experimentation and learning. A special focus is given to the idea that innovative uses of mobile and wireless technologies enhance the learners' scientific experience. Specific contributions include the creation of new applications that support interoperability between different mobile devices and thus provide "glue" between different learning situations. We describe a number of educational scenarios as well as the technologies and the architectural principles behind them.