As smart homes become more and more popular, the need for assistive systems that interface between users and the home environment is growing. For people living in such homes, elderly and disabled people in particular, it is essential to develop devices that can support and aid them in their daily lives. In this work we focus on sustaining the privacy of the user during real interaction with the surrounding home environment. A smart, person-specific assistance system for services in the home environment is proposed. Its role is to assist persons by controlling home activities and by adapting the interface between the smart home and the human to the needs of the person in question, while at the same time sustaining the privacy of its interaction partner. As a special case of medical assistance, the system is implemented so that it provides person-specific medical assistance for elderly or disabled people. The system identifies its interaction partner using biometric features. Based on the recognized identity, the system first adapts itself to the needs of the recognized person; second, it presents a person-specific list of medicines, either visually or acoustically; and third, it raises an alarm if a medicament is taken earlier or later than its scheduled time.
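The early/late alarm behaviour described above can be sketched as a simple schedule check. This is a minimal illustration only: the person ID, medicine names, times, and the 30-minute tolerance are all assumptions for the example, not details from the paper.

```python
from datetime import datetime, timedelta

# Hypothetical per-person medication schedule; entries and the
# tolerance window are illustrative assumptions.
SCHEDULES = {
    "person_42": [
        {"medicine": "Medicine A", "due": "08:00"},
        {"medicine": "Medicine B", "due": "20:00"},
    ],
}
TOLERANCE = timedelta(minutes=30)

def check_intake(person_id, medicine, taken_at):
    """Return 'ok', 'early', or 'late' for a recorded intake time."""
    for entry in SCHEDULES[person_id]:
        if entry["medicine"] == medicine:
            due_time = datetime.strptime(entry["due"], "%H:%M").time()
            due = datetime.combine(taken_at.date(), due_time)
            if taken_at < due - TOLERANCE:
                return "early"   # triggers the 'too early' alarm
            if taken_at > due + TOLERANCE:
                return "late"    # triggers the 'too late' alarm
            return "ok"
    raise KeyError(medicine)
```

In a deployed system the schedule would come from the person-specific profile loaded after biometric identification, and the returned status would drive the visual or acoustic alarm.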
The use of biometric traits for privacy and security applications is receiving increasing attention. Examples include personal identification, access control, forensic applications, e-banking, e-government, e-health, and, more recently, personalized human-smart-home and human-robot interaction. In order to offer person-specific services for a specific person, an identification step must be performed beforehand. Using biometrics in such applications faces several challenges. First, the choice of one trait over the others depends on the target application: some applications demand direct contact with the biometric sensor, while others do not. The second challenge is the reliability of the biometric arrangement: civilian applications demand lower reliability than forensic ones. Third, a biometric system may use only one trait (uni-modal systems) or multiple traits (bi- or multi-modal systems); the latter are applied when systems with relatively high reliability are required. The main aim of this paper is to provide a comprehensive view of biometrics and its applications. The challenges mentioned above are analyzed in depth, and the suitability of each biometric sensor for a given application is discussed in detail. A detailed comparison between uni-modal and multi-modal biometric systems shows which kind of system should be used where. Privacy and security issues of biometric systems are discussed as well. Three scenarios of biometric applications, in the home environment, in human-robot interaction, and in e-health, are presented.
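One common way a multi-modal system combines traits is score-level fusion of the uni-modal matchers. The following is a minimal sketch of a weighted-sum rule; the two modalities, the weights, and the acceptance threshold are illustrative assumptions, not the paper's design.

```python
def fuse_scores(face_score, voice_score, w_face=0.6, w_voice=0.4):
    """Weighted-sum score-level fusion of two normalised match
    scores in [0, 1]. Weights are illustrative assumptions."""
    return w_face * face_score + w_voice * voice_score

def decide(face_score, voice_score, threshold=0.5):
    """Accept the claimed identity if the fused score reaches the
    (assumed) acceptance threshold."""
    return fuse_scores(face_score, voice_score) >= threshold
```

The practical appeal of the multi-modal design is visible here: a mediocre score in one modality can be compensated by a strong score in the other, which is why multi-modal systems are preferred when higher reliability is expected.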
Currently, robot assistance systems with emotion-understanding abilities in home environments are generally realized in one of two ways. The first is to implement such systems so that they offer general services to all persons, without considering the privacy or the special needs of their interaction partners. The second is to target such systems at a single person only. In this work we present a robot assistance system that can both assist several persons at the same time and sustain their privacy and security. The robot can interact with its partner emotionally by analyzing the partner's emotions, expressed either visually (facial expression) or acoustically (speech prosody). The role of this system is to provide person-specific support in the home environment. To identify its interaction partner, the system uses diverse biometric traits. Based on the recognized identity, the system first adapts itself to the needs of the recognized person; second, it loads the corresponding emotional profile of the detected interaction partner in order to conduct person-specific emotional human-robot interaction, which has an advantage over person-independent interaction.
With a rapidly ageing population, it is increasingly important to develop devices for elderly and disabled people that can support and aid them in their daily lives, helping them to live at home as long as possible. The goal of this project is to implement a human-machine interaction and assistance system that can offer personalised health support for elderly people, or for those who have special needs in the home environment.
As smart homes become more and more popular, the need for assistive systems that interface between users and the home environment is growing. For elderly and disabled people living in such homes, it is essential to develop devices that can support and aid them in their daily lives. This demands means and tools that extend independent living and promote improved health. In this work we review the state of the art in assistance systems for home environments. A case study of a medical assistance system for elderly people and people with disabilities is discussed in depth, and a smart NFC-based, person-specific assistance system for services in the home environment is proposed. The role of this system is to assist by controlling home activities and by adapting the interface between the home and the human to the needs of the person in question. For the special case of medical assistance, the system can provide person-specific medical assistance for elderly or disabled people. The system identifies its interaction partner using biometric features. Based on the recognized identity, the system first adapts itself to the needs of the recognized person; second, it presents a person-specific list of medicaments, either visually (on screen) or acoustically (via speaker); and third, it raises an alarm if a medicament is taken earlier or later than its scheduled time.
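The NFC-based identification step amounts to mapping a scanned tag ID to a stored per-person profile that then drives interface adaptation. A minimal sketch, assuming hypothetical tag UIDs, profile fields, and a visual fallback for unknown tags (none of which are taken from the paper):

```python
# Hypothetical mapping from NFC tag UIDs to resident profiles; the
# UIDs, names, and preference fields are illustrative assumptions.
PROFILES = {
    "04:A2:19:B1": {"name": "Resident 1", "output": "visual", "font_scale": 1.5},
    "04:7F:03:C8": {"name": "Resident 2", "output": "acoustic", "volume": 0.8},
}

def adapt_interface(tag_uid):
    """Return the interface settings for the person whose tag was scanned,
    or a default visual interface for an unknown tag."""
    profile = PROFILES.get(tag_uid)
    if profile is None:
        return {"output": "visual", "note": "unknown user, default interface"}
    return profile
```

Reading the tag UID itself would be done by the NFC reader hardware; this sketch only covers the profile lookup and adaptation step that follows.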
This work describes methods for the individual adaptation of a dialog system. An automatic, real-time-capable visual estimation of user attention for face-to-face human-machine interaction is described. Furthermore, an emotion estimation is presented that combines a visual and an acoustic method. Both the attention estimation and the visual emotion estimation are based on Active Appearance Models (AAMs). For the attention estimation, Multilayer Perceptrons (MLPs) are used to map the Active Appearance Model parameters (AAM parameters) onto the current head pose; the chronology of head poses is then classified as attention or inattention. In the visual emotion estimation, the AAM parameters are classified by a Support Vector Machine (SVM). The acoustic emotion estimation also uses an SVM, which classifies emotion-related audio-signal features into the five basic emotions (neutral, happy, sad, anger, surprise). A Bayesian network then combines the results of the visual and the acoustic estimation at the decision level. Both the visual attention estimation and the emotion estimation will be used in service robotics to allow a more natural and human-like dialog. Furthermore, head movement is efficiently interpreted as head nodding or shaking by means of adaptive statistical moments. The head movement of many people with dementia is restricted, so they often use only their eyes to look around; for that reason, this work also examines a simple gaze estimation using an ordinary webcam. Moreover, a full-body user re-identification method is described that allows individual state estimation of several people in highly dynamic situations: an appearance-based method enables fast re-identification of people over a short time span, so that individual parameters can be used.
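The nod/shake interpretation from the head-pose chronology can be illustrated with second-order statistics of the pitch and yaw angles. This is a simplified stand-in for the adaptive statistical moments mentioned in the abstract: the fixed variance threshold and the plain population variance are assumptions made for the example.

```python
from statistics import pvariance

def classify_head_gesture(pitch_series, yaw_series, var_threshold=4.0):
    """Label a short head-pose sequence (angles in degrees) as 'nod',
    'shake', or 'still' by comparing pitch and yaw variance. A nod
    oscillates mainly in pitch, a shake mainly in yaw; the threshold
    value is an illustrative assumption."""
    var_pitch = pvariance(pitch_series)
    var_yaw = pvariance(yaw_series)
    if max(var_pitch, var_yaw) < var_threshold:
        return "still"
    return "nod" if var_pitch > var_yaw else "shake"
```

Adaptive moments as used in the paper would update these statistics incrementally over a sliding window instead of recomputing them per sequence, which keeps the method real-time capable.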
Recognition of emotions from multimodal cues is of fundamental interest for the design of many adaptive interfaces in human-machine interaction (HMI) in general and human-robot interaction (HRI) in particular, as it provides a means to incorporate non-verbal feedback in the course of interaction. Humans express their emotional and affective state rather unconsciously through their natural communication modalities, such as body language, facial expression, and prosodic intonation. In order to achieve applicability in realistic HRI settings, we develop person-independent affective models. In this paper, we present a study on the multimodal recognition of emotions from such auditive and visual cues for interaction interfaces. We recognize six classes of basic emotions, plus the neutral one, of talking persons. The focus lies on the simultaneous online visual and acoustic analysis of speaking faces. A probabilistic decision-level fusion scheme based on Bayesian networks is applied to draw benefit from the complementary information of both the acoustic and the visual cues. We compare the performance of our state-of-the-art recognition systems for the separate modalities with the improved results obtained after applying our fusion scheme, both on the DaFEx database and on real-life data captured directly from a robot. We furthermore discuss the results with regard to the theoretical background and future applications.
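The simplest instance of such a probabilistic decision-level fusion is the product rule, which assumes the two modalities are conditionally independent given the emotion. The sketch below uses that assumption; a full Bayesian network as in the paper can encode richer dependencies, and the exact label set (here Ekman's six basic emotions plus neutral) is an assumption for the example.

```python
# Assumed label set: six basic emotions plus neutral (7 classes).
EMOTIONS = ["neutral", "happy", "sad", "anger", "surprise", "fear", "disgust"]

def fuse_posteriors(p_visual, p_acoustic):
    """Product-rule fusion of two per-class posterior distributions,
    followed by renormalisation so the result sums to 1."""
    joint = [pv * pa for pv, pa in zip(p_visual, p_acoustic)]
    z = sum(joint)
    return [j / z for j in joint]

def fused_label(p_visual, p_acoustic):
    """Return the emotion label with the highest fused posterior."""
    fused = fuse_posteriors(p_visual, p_acoustic)
    return EMOTIONS[max(range(len(fused)), key=fused.__getitem__)]
```

The complementarity the paper exploits shows up directly here: when one modality is ambiguous between two classes, a moderate preference in the other modality is enough to resolve the decision.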