During recent large-scale catastrophic events, professional rescue resources have not always been sufficient to meet regular response times. In many disaster situations, however, volunteers have supported the official forces, often self-organized through social media. This paper introduces a system that allows a professional control center to coordinate trained volunteers, with the objective of distributing human resources and technical equipment more efficiently. Volunteers are contacted via an app on their private smartphones. The design of this app is based on user requirements gathered in focus group discussions. The potential users' feedback covers privacy aspects, low energy consumption, and mechanisms for long-term motivation and training. The authors present the results of the focus group analyses as well as their transfer into the app design concept.
This contribution demonstrates the efficient embedding of a single depth camera into the automotive environment, making mid-air gesture interaction viable for mobile applications in such a scenario. In this setting, a new human-machine interface is implemented to give an idea of future improvements in automation processes in industrial applications. Our system follows a data-driven approach, learning hand poses as well as gestures from a large database so that they can be applied on mobile devices. We register any movement in a nearby driver area, crop the data efficiently, and transform it by means of PCA into feature vectors, which serve as the input to our multi-layer perceptrons (MLPs). After MLP classification, the interpreted user input is sent via WiFi to a tablet PC mounted in the car interior, which visualizes an infotainment system the user can interact with. We demonstrate that with this setup, hand gestures as well as hand poses are interpreted easily and efficiently in real time, so that they become an intuitive, supplementary means of interaction for automotive HMI in mobile scenarios.
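The recognition pipeline described above (cropped depth data → PCA feature vector → MLP classification) can be sketched roughly as follows. This is a minimal illustration of the general technique, not the authors' trained system: the patch size, number of principal components, gesture classes, PCA basis, and network weights are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_features(depth_patch, mean, components):
    """Project a flattened depth patch onto a precomputed PCA basis."""
    return (depth_patch.ravel() - mean) @ components.T

def mlp_classify(features, w1, b1, w2, b2):
    """Single-hidden-layer MLP: tanh hidden layer, softmax over gesture classes."""
    hidden = np.tanh(features @ w1 + b1)
    logits = hidden @ w2 + b2
    exp = np.exp(logits - logits.max())          # numerically stable softmax
    return exp / exp.sum()

# Illustrative dimensions: a 32x32 depth crop, 20 PCA components, 5 gestures.
patch = rng.random((32, 32))                     # stand-in for a cropped depth frame
mean = np.zeros(32 * 32)                         # placeholder PCA mean
components = rng.standard_normal((20, 32 * 32))  # placeholder PCA basis
w1, b1 = rng.standard_normal((20, 16)), np.zeros(16)
w2, b2 = rng.standard_normal((16, 5)), np.zeros(5)

feats = pca_features(patch, mean, components)    # 20-dim feature vector
probs = mlp_classify(feats, w1, b1, w2, b2)      # class probabilities over 5 gestures
gesture = int(np.argmax(probs))                  # index of the recognized gesture
```

In the described system, the resulting class label would then be sent via WiFi to the in-car tablet rather than used locally.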