We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors, intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we can reliably classify a set of ten hand gestures in real time, and we describe the setup of the system, the utilised methods, and possible application scenarios.
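The abstract's core pipeline (descriptor vectors from fused ToF data, fed to an MLP over ten gesture classes) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the descriptor length, network size, and synthetic training data are all assumptions standing in for the real ToF features.

```python
# Hypothetical sketch: ten-class gesture classification with an MLP.
# Real descriptors from the two fused ToF sensors are replaced by
# synthetic Gaussian blobs; sizes and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
N_CLASSES, DIM = 10, 64          # ten gestures, assumed descriptor length

# Synthetic stand-in for per-frame descriptors: one prototype per class.
prototypes = rng.normal(size=(N_CLASSES, DIM))
X = np.vstack([p + 0.3 * rng.normal(size=(200, DIM)) for p in prototypes])
y = np.repeat(np.arange(N_CLASSES), 200)

clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=300, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

At test time, `clf.predict_proba` yields normalized per-class confidences, which is what makes MLP outputs convenient for the sensor-fusion strategies discussed in the related abstracts below.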
Currently, robot assistance systems with emotion-understanding abilities in home environments are generally realised in one of two ways. The first implements such systems so that they offer general services to all considered persons, without taking into account the privacy or special needs of their interaction partners. The second targets such systems at merely one person. In this work we present a robot assistance system that can both assist several persons at the same time and preserve their privacy and security. The robot can interact with its interaction partner emotionally by analysing that person's emotions, expressed either visually (facial expression) or auditorily (speech prosody). The role of this system is to provide person-specific support in the home environment. In order to identify its interaction partner, the system uses diverse biometric traits. Based on the recognised identity, the system first adapts to the needs of the recognised person. Second, it loads the corresponding emotional profile of the detected interaction partner in order to perform person-specific emotional human-robot interaction, which has an advantage over person-independent interaction.
We present a study on 3D-based hand pose recognition using a new generation of low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. We investigate the performance of different 3D descriptors, as well as the fusion of two ToF sensor streams. By basing a data fusion strategy on the fact that multilayer perceptrons can produce normalised confidences individually for each class, and by designing information-theoretic online measures for assessing the confidence of decisions, we show that appropriately chosen fusion strategies can improve overall performance to a very satisfactory level. Real-time capability is retained, as the 3D descriptors, the fusion strategy, and the online confidence measures are all computationally efficient.
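One way such a confidence-based fusion could look in code: each stream contributes normalized per-class confidences (e.g. from an MLP), and an entropy-based certainty weight favours the less ambiguous stream. The specific weighting rule below is an illustrative assumption, not necessarily the measure used in the paper.

```python
# Hypothetical sketch of entropy-weighted fusion of two confidence vectors.
# A near-uniform (high-entropy) stream gets little weight; a peaked
# (low-entropy) stream dominates the fused decision.
import numpy as np

def entropy(p):
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def fuse(conf_a, conf_b):
    """Weight each stream by its normalized certainty, 1 - H(p)/H_max."""
    h_max = np.log(len(conf_a))
    w_a = 1.0 - entropy(conf_a) / h_max
    w_b = 1.0 - entropy(conf_b) / h_max
    fused = w_a * conf_a + w_b * conf_b
    return fused / fused.sum()

# Stream A is confident about class 2; stream B is nearly uniform.
a = np.array([0.05, 0.05, 0.80, 0.05, 0.05])
b = np.array([0.22, 0.18, 0.20, 0.20, 0.20])
fused = fuse(a, b)
print(fused.argmax())   # the fused decision follows the confident stream
```

The same entropy term can double as an online rejection criterion: if even the fused vector stays high-entropy, the frame can be discarded rather than misclassified.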
In this paper, we describe an efficient method for fast people re-identification based on models of human clothing. An initial model is estimated during people detection and tracking and is refined during re-identification. This stepwise extraction, combination, and comparison of features speeds up the whole re-identification process. For the refinement, several saliency maps are used to extract individual features, which are located separately for each human body part. The body parts are located with an optimised GPU-based HOG detector. Furthermore, we introduce a mean-shift-based fusion concept that utilises multiple detectors in order to increase detection reliability.
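The mean-shift fusion idea can be illustrated with a small sketch: several detectors report slightly different positions for the same person, and mean-shift iterations pull each detection towards the local density mode, merging redundant hits. The bandwidth, merging tolerance, and data below are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of mean-shift fusion of multi-detector outputs:
# detection centres that belong to the same person converge to one mode.
import numpy as np

def mean_shift(points, bandwidth=20.0, iters=30):
    """Iteratively move each point to the Gaussian-weighted mean of all points."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            d2 = np.sum((points - p) ** 2, axis=1)
            w = np.exp(-d2 / (2 * bandwidth ** 2))
            shifted[i] = (w[:, None] * points).sum(0) / w.sum()
    return shifted

# Centres reported by three detectors for two people in the scene.
dets = np.array([[100, 50], [104, 53], [98, 48],       # person 1
                 [300, 120], [305, 118], [298, 122]])  # person 2
modes = mean_shift(dets)
# Detections whose modes coincide (up to a coarse grid) are merged.
fused = np.unique(np.round(modes / 5).astype(int) * 5, axis=0)
print(len(fused), "fused detections")
```

In practice the fused mode can also carry a combined score (e.g. the sum of the contributing detectors' confidences), which is what raises reliability over any single detector.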
The goal of the joint project APFel (project duration: 01.01.2010 ‐ 31.03.2014) was to enable localisation of persons, both forwards and backwards in time, within a network of non-overlapping cameras in faster than real time. Application areas for this scenario are critical infrastructures such as airports and airfields. Initially, the project APFel focused on localising a single target person. Subsequently, the developed methods were extended to the analysis of groups, in order to localise persons as part of a group.
With the introduction of Apple's iPhone, gesture control became popular and was perceived as an intuitive means of interaction. Contactless gestures received broad attention with the Xbox Kinect. Current technology is limited to a small number of uses, mainly in entertainment systems. The target of this project is to increase the range of possible applications, e.g. to the automotive field, industrial applications (manufacturing plants), assisted living in contexts ranging from private households to hospitals (interaction for people with disabilities), and many more.
With a rapidly ageing population, it is increasingly important to develop devices for elderly and disabled people that can support and aid them in their daily lives, helping them to live at home as long as possible. The goal of this project is to implement a human-machine interaction and assistance system that can offer personalised health support for elderly people, or for those who have special needs in the home environment.
As smart homes become more and more popular, the need for assistance systems that interface between users and home environments is growing. Furthermore, for elderly and disabled people living in such homes, it is very important to develop devices that can support and aid them in their ordinary daily life. This demands means and tools that extend independent living and promote improved health. In this work we review the state of the art in assistance systems for home environments. A case study of a medical assistance system for elderly people and people with disabilities is discussed in depth. A smart NFC-based person-specific assistance system for services in the home environment is proposed. The role of this system is to assist by controlling home activities and adapting the home-human interface to the needs of the considered person. For the special case of medical assistance, the system can provide elderly or disabled people with person-specific medical support. The system can identify its interaction partner using biometric features. Based on the recognised identity, the system first adapts to the needs of the recognised person. Second, it presents a person-specific list of medications either visually (on screen) or acoustically (via speaker). Third, it raises an alarm if a medication is taken either later or earlier than its normal intake time.
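The medication-alarm logic described above can be sketched as a simple check of a recorded intake against a scheduled time. Everything here is a hypothetical stand-in: the profile fields, the user ID, and the 30-minute tolerance window are assumptions, not details from the paper.

```python
# Hypothetical sketch of the person-specific medication check: look up the
# identified user's plan and flag intakes outside an assumed tolerance.
from datetime import datetime, timedelta

# Illustrative profile store; in the described system this would be loaded
# after NFC/biometric identification of the interaction partner.
PROFILES = {
    "user-042": {"medication": "Med A", "scheduled": "08:00",
                 "tolerance_min": 30},
}

def check_intake(user_id, taken_at):
    """Return 'ok', 'too early' or 'too late' for a recorded intake."""
    prof = PROFILES[user_id]
    sched = datetime.strptime(prof["scheduled"], "%H:%M").time()
    sched_dt = taken_at.replace(hour=sched.hour, minute=sched.minute,
                                second=0, microsecond=0)
    delta = taken_at - sched_dt
    tol = timedelta(minutes=prof["tolerance_min"])
    if delta < -tol:
        return "too early"
    if delta > tol:
        return "too late"
    return "ok"

print(check_intake("user-042", datetime(2014, 5, 1, 8, 10)))   # ok
print(check_intake("user-042", datetime(2014, 5, 1, 9, 15)))   # too late
```

The "too early"/"too late" results would then drive the visual or acoustic alarm channel chosen for the recognised person.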
Building upon prior results, we present an alternative approach to efficiently classifying a complex set of 3D hand poses obtained from modern time-of-flight (ToF) sensors. We demonstrate that it is possible to achieve satisfactory results in spite of the low resolution and high noise inflicted by the sensors, and a demanding outdoor environment. We set up a large database of point clouds in order to train multilayer perceptrons as well as support vector machines to classify the various hand poses. Our goal is to fuse data from multiple ToF sensors, which observe the poses from multiple angles. The presented contribution illustrates that real-time capability can be maintained with such a setup, as the 3D descriptors, the fusion strategy, and the online confidence measures are all computationally efficient.