11 publications (2004), Fachbereich 1 - Institut Informatik, all in English: 7 conference proceedings, 3 parts of a book, 1 article. No fulltext available.
For face recognition from video streams, speed and accuracy are vital. The first decision as to whether a preprocessed image region represents a human face is often made by a feed-forward neural network (NN), e.g. in the Viisage-FaceFINDER® video surveillance system. We describe the optimisation of such a NN by a hybrid algorithm combining evolutionary multi-objective optimisation (EMO) and gradient-based learning. The evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy. We compare an EMO approach and a single-objective approach, both with online search-strategy adaptation. It turns out that EMO is preferable to the single-objective approach in several respects.
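The core of the evolutionary multi-objective selection described above is Pareto dominance between candidate networks, here with two objectives to minimise: classification error and network size (a proxy for speed). A minimal sketch of the non-dominated filtering step, with made-up objective values; the actual NN training and search-strategy adaptation of the paper are not reproduced:

```python
def dominates(a, b):
    """a Pareto-dominates b if a is no worse in every objective
    and strictly better in at least one (minimisation)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(pop):
    """Return the non-dominated candidates of a population."""
    return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]

# hypothetical candidates: (classification error, number of weights)
candidates = [(0.05, 120), (0.04, 300), (0.07, 80), (0.05, 200), (0.03, 500)]
front = pareto_front(candidates)
# (0.05, 200) is dominated by (0.05, 120): same error, smaller network
```

In the hybrid algorithm, each candidate's error objective would be measured after a phase of gradient-based learning, so the evolution searches over architectures while gradients tune the weights.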
As service robotics research advances rapidly, the community increasingly calls for objective, reproducible test specifications and evaluation criteria, and for benchmarking. As a first step towards benchmarking, in this paper we propose a formalization of tests, exemplified for domestic grasp&place tasks. The underlying philosophy of our approach is to confront the robot system in a black-box manner with the requirements of a “rational customer”, and to characterize the performance of the system objectively by the outcomes of a test suite tailored to this scenario. A formalized single test description consists of a clear and reproducible specification of the robot’s task and its full context on the one hand, and a number of figures which objectively characterize the test result on the other. We illustrate this methodology for the domestic assistance scenario.
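The two-part structure of a formalized test description (reproducible task/context specification plus objective result figures) can be encoded directly as data. A hypothetical sketch; all field names and values are illustrative, not taken from the paper:

```python
from dataclasses import dataclass, field

@dataclass
class TestSpec:
    """Reproducible specification of the task and its full context."""
    task: str      # e.g. a grasp&place instruction
    context: dict  # environment description needed to repeat the test

@dataclass
class TestResult:
    """Objective characterization of one test run."""
    spec: TestSpec
    success: bool
    figures: dict = field(default_factory=dict)  # objective metrics

spec = TestSpec(
    task="grasp&place: move cup from table to shelf",
    context={"lighting": "daylight", "object": "cup", "clutter": "none"},
)
result = TestResult(
    spec=spec, success=True,
    figures={"time_s": 42.0, "placement_error_mm": 8.5},
)
```

Because the specification is data rather than prose, a whole test suite can be stored, re-run, and compared across robot systems in the black-box fashion the abstract describes.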
CoRA is a robotic assistant whose task is to collaborate with a human operator on simple manipulation or handling tasks. Its sensory channels, comprising vision, audition, haptics, and force sensing, are used to extract perceptual information about the operator’s speech, gestures, and gaze, and to recognize objects. The anthropomorphic robot arm makes goal-directed movements to pick up and hand over objects. The human operator may mechanically interact with the arm by pushing it away (haptics) or by taking an object out of the robot’s gripper (force sensing). The design objective has been to exploit the human operator’s intuition by modeling the mechanical structure, the senses, and the behaviors of the assistant on human anatomy, human perception, and human motor behavior.
In asynchronous collaboration scenarios, document metadata play an important role in indexing and retrieving documents in jointly used archives. However, manual input of metadata is usually an unpleasant and error-prone task. This paper describes an approach that allows the partially automatic generation of metadata in a collaborative modelling environment. It illustrates some usage scenarios for the metadata within the modelling framework, including concepts for document-based social navigation and ideas for tool-embedded archive queries based on the current state of the user's work.
To enable a robotic assistant to autonomously reach for and transport objects while avoiding obstacles, we have generalized the attractor dynamics approach established for vehicles to trajectory formation in robot arms. This approach is able to deal with the time-varying environments that occur when a human operator moves in a shared workspace. Stable fixed points (attractors) for the heading direction of the end-effector shift during movement and are tracked by the system. This enables the attractor dynamics approach to avoid the spurious states that hamper potential field methods. Separating planning and control computationally, the approach is also simpler to implement. The stability properties of the movement plan enable the approach to deal with fluctuating and imprecise sensory information. We implement this approach on a seven-degree-of-freedom anthropomorphic arm reaching for objects on a working surface. We use an exact solution of the inverse kinematics, which enables us to steer the spatial position of the elbow clear of obstacles. The straight-line trajectories of the end-effector that emerge far from obstacles make the movement goals of the robotic assistant predictable for the human operator, improving man-machine interaction.
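The attractor dynamics idea can be sketched for a single heading-direction variable: the target direction contributes an attractor, each obstacle a localized repeller, and the heading simply relaxes along the resulting vector field. A minimal one-dimensional illustration; the functional forms and all constants are assumptions for this sketch, not the paper's actual formulation:

```python
import math

def heading_rate(phi, psi_target, obstacles, lam=2.0, beta=4.0, sigma=0.4):
    """d(phi)/dt for the end-effector heading phi (radians):
    an attractor at the target direction plus one repeller per obstacle
    direction, with limited angular range of influence."""
    f = -lam * math.sin(phi - psi_target)  # attractor at psi_target
    for psi_obs in obstacles:
        d = phi - psi_obs
        # repulsive contribution, strongest near the obstacle direction
        f += beta * d * math.exp(-d * d / (2 * sigma ** 2))
    return f

# Euler integration: with no obstacles the heading relaxes onto the target
phi, psi_target = 1.2, 0.0
for _ in range(300):
    phi += 0.01 * heading_rate(phi, psi_target, obstacles=[])
# phi is now close to 0.0, i.e. the heading tracks the (stable) attractor
```

Because the system always sits in or near an attractor of this field, shifting the target or obstacle directions shifts the fixed point smoothly, which is what lets the plan tolerate the fluctuating sensory information mentioned in the abstract.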
Coming out of the labs, the first robots are currently appearing on the consumer market. Initially they target rather simple application scenarios, ranging from entertainment to home convenience. However, one can expect that they will capture more complex areas soon. These robots will offer an increasingly high level and a broad range of functional competence, and will collaborate and interactively communicate with their human users. All this requires considerable cognitive abilities on the robot’s side and appropriate man-machine interaction technologies. Apart from the further development of individual functions and technologies, it is crucial to build and evaluate fully integrated systems. This paper describes our approach to constructing a robotic assistance system. We present experience with an integrated technology demonstration and the exposure of the integrated system to the public.
This paper presents some ideas on how to use Web Services for the implementation of innovative collaborative technologies. A major goal is to build re-usable collaborative software components to foster knowledge exchange and learning. This paper describes two examples of how we used Web Services to achieve this goal. The first example implements a digital notice board with large, public displays. Here, we used Web Services to provide flexible data access; they make it possible to use our infrastructure with different programming languages and devices. The second example is an application that enables students to construct and model experiment descriptions for a plant-growth control system, the biotube, remotely via Web Services.
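The language- and device-independence argument above rests on exposing component functionality through a standard wire protocol. A toy sketch of a notice-board component as an XML-RPC service (a common Web Services protocol of that era), using only the Python standard library; the method names and the in-process client are illustrative, not the paper's actual interface:

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

# Toy "digital notice board" component: any XML-RPC-capable client,
# in any language, can post and read notices over HTTP.
notices = []

def post_notice(text):
    """Append a notice; return the new notice count."""
    notices.append(text)
    return len(notices)

def list_notices():
    """Return all notices posted so far."""
    return notices

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(post_notice)
server.register_function(list_notices)
port = server.server_address[1]  # OS-assigned port for this demo
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client talks to the component purely via the wire protocol.
client = ServerProxy(f"http://127.0.0.1:{port}")
client.post_notice("Lab meeting moved to 14:00")
```

Since clients depend only on the protocol, the same component can back a large public display, a desktop tool, or a mobile device without code sharing, which is the re-usability point the abstract makes.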
The astronomy domain provides rich opportunities for learning about natural phenomena. It can involve and motivate a variety of mathematical and physical knowledge and skills. However, it is difficult to connect astronomical observations to modelling and calculation tools and to embed them in educational scenarios. It is precisely this challenge that this paper addresses. Concretely, we build on an existing collaborative modelling framework (Cool Modes) and extend it with specific representations to support learning activities in astronomy. A first field test has been conducted with these extensions.
In this paper we describe our efforts to foster educational interoperability in scenarios that use mobile and wireless technologies to support hands-on scientific experimentation and learning. A special focus is placed on the idea that innovative uses of mobile and wireless technologies enhance the learners' scientific experience. Specific contributions include the creation of new applications to support interoperability between different mobile devices, thus providing "glue" between different learning situations. We describe a number of educational scenarios as well as the technologies and the architectural principles behind them.