In this research, an innovative sensor system is developed to prevent child heatstroke in vehicles. The system incorporates a 24 GHz Continuous-Wave (CW) radar that detects the vital signs of an infant through a 4-by-1 patch antenna array embedded in a specifically designed circuit board. Intelligent signal processing algorithms analyze the data generated by the radar chip and execute the processing tasks on a robust microcontroller. The child's respiration rate can be extracted qualitatively from the data in near real-time, enabling the system to differentiate between a child and a mere shopping bag on the seat. When a critical condition is identified, the system transmits this information via a data bus to a central ECU within the vehicle. This ECU is equipped with GSM and GPS connections, allowing communication with the driver or emergency services. The development of the sensor system adheres to existing automotive industry standards, featuring a cost-effective design intended as a prototype for large-scale production. The sensor system is refined through rigorous evaluation across various scenarios, including real-world situations with children. Reliable operation of the developed radar-based sensor system holds the potential to save children's lives, making a major contribution to automotive safety.
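The respiration-rate extraction described above can be sketched as follows. This is a minimal illustration of the general principle (phase demodulation of the CW radar's I/Q signal followed by a spectral peak search in the plausible breathing band), not the actual firmware of the system; all names and parameters are illustrative.

```python
import numpy as np

def estimate_respiration_rate(i_sig, q_sig, fs):
    """Estimate respiration rate (breaths/min) from CW-radar I/Q samples.

    Chest-wall motion phase-modulates the reflected carrier, so the
    dominant low-frequency component of the unwrapped phase angle
    corresponds to the breathing rate.
    """
    phase = np.unwrap(np.arctan2(q_sig, i_sig))
    phase -= phase.mean()                          # remove static offset
    spectrum = np.abs(np.fft.rfft(phase))
    freqs = np.fft.rfftfreq(len(phase), d=1.0 / fs)
    # Plausible breathing band: 0.1-0.7 Hz (6-42 breaths per minute)
    band = (freqs >= 0.1) & (freqs <= 0.7)
    return 60.0 * freqs[band][np.argmax(spectrum[band])]

# Synthetic check: 0.3 Hz breathing motion (18 breaths/min), 20 Hz sampling
rng = np.random.default_rng(0)
fs = 20.0
t = np.arange(0, 30, 1 / fs)
phase_mod = 0.8 * np.sin(2 * np.pi * 0.3 * t)
i_sig = np.cos(phase_mod) + 0.05 * rng.standard_normal(t.size)
q_sig = np.sin(phase_mod) + 0.05 * rng.standard_normal(t.size)
print(estimate_respiration_rate(i_sig, q_sig, fs))  # ~18 breaths/min
```

A spectral peak outside (or absent from) the breathing band would correspond to the "shopping bag" case: no periodic chest motion, hence no occupant.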
Efficient and reliable onsite inspection methods are gaining importance as the construction of PV power plants expands. For large PV installations, time- and cost-efficient failure detection is essential for optimized operation and maintenance. For this purpose, various optical methods such as infrared thermography (IR), electroluminescence (EL), photoluminescence (PL) and ultraviolet fluorescence (UVF) are employed and under constant development. For each method, the camera, and possibly the light source, can be handheld or mounted on a drone, also called an unmanned aerial vehicle (UAV), to achieve higher throughput.
IR is the most widely used optical onsite PV inspection method, as many defects can be detected through the thermal radiation (heating) of the defective component. EL and PL reveal further information on the electrical behaviour of the Si wafer. They are also widely used and complement IR by showing electrically active/inactive areas of the semiconductor. UVF, on the other hand, focuses on the degradation of the polymeric encapsulant of the Si cell, most commonly consisting of EVA (ethylene-vinyl acetate). Degradation of the encapsulant can lead to its discoloration, also called yellowing/browning, which decreases the transmittance of visible light. UVF patterns can reveal this yellowing as well as humidity and oxygen ingress, which can lead to corrosion effects. Both mechanisms (discoloration and corrosion) decrease the performance of the PV cell. The discoloration cannot be directly observed in IR or EL images, as the encapsulant is neither a heat source nor electroconductive. Using IR imagery, severe discoloration might be observed indirectly, as the reduced optical transmittance changes the heat transfer between the cell and the encapsulant.
Similarly, as long as corrosion does not lead to inactive cell areas or heating, it will most likely not be spotted using EL, PL or IR. UVF can therefore fill the niche of inspecting the state of the encapsulant and detecting climate-induced defects at an early stage.
While a large number of studies on IR, EL and PL, and some on UVF, have been performed in Europe and the USA, there are not yet many studies on the application of these techniques in South America (i.e., in Brazil). UVF mainly depends on climate factors (irradiation, temperature, humidity) and the operating time, or "age", of the module. The UVF imagery method has not yet been tested under the climate and system conditions of Brazil. Furthermore, systems in Brazil have been installed more recently. All this can lead to differences between the results of UVF imagery in Europe, the USA and Brazil.
The present work focuses on the application of UVF imaging to PV power plants in Brazil, the creation of an experimental setup, and a proposed procedure for analyzing the acquired images. The aim is to propose a method suitable for large-scale inspection.
Developing an intelligent chatbot that can imitate human-to-human interaction has become important in recent years. For this reason, many studies have been conducted to evaluate the quality of chatbots. Furthermore, various approaches and tools, such as sentiment analysis, have been created to improve the performance of chatbots.
This study examines previous research to identify the quality dimensions used to measure chatbot performance, in order to develop a general chatbot assessment model that evaluates and compares chatbot quality. The developed evaluation model measures ten chatbot quality dimensions and is based on user experience: human testers interact with the chatbot to test its functioning, and a quantitative approach is then used to collect data from this user testing through a survey of the testers. In this survey, they are instructed to evaluate the quality of the chatbot using a questionnaire containing the items needed to evaluate each dimension.
This study also investigates whether sentiment analysis can improve the quality of chatbots and, if so, identifies the dimensions improved by sentiment analysis. For this purpose, two chatbot versions are implemented using the Rasa framework (one that cannot detect sentiment and another that analyzes sentiment and responds accordingly).
Following that, we used our evaluation model to evaluate and compare the two chatbot versions with two groups of participants by conducting a survey. Each group tested the functioning of one version and was then instructed to use the items of the evaluation model to evaluate it. The goal of this survey was to assess the validity and reliability of the items used in the evaluation model and to determine whether sentiment analysis improved chatbot quality by comparing survey results between the two groups. The results show that the items used in the assessment model are valid and reliable. The findings also indicate that sentiment analysis improves the chatbot's quality, although it improves only some dimensions rather than the majority of them.
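The difference between the two chatbot versions can be illustrated with a toy sketch: a trivial lexicon-based sentiment detector gating the response. The word lists and function names below are invented for illustration; the study itself uses the Rasa framework, whose actual API is not shown here.

```python
# Hypothetical sketch of a sentiment-aware response policy, analogous
# to the second chatbot version in the study (not the Rasa API).
NEG = {"bad", "terrible", "angry", "hate", "awful", "useless"}
POS = {"good", "great", "love", "thanks", "awesome", "helpful"}

def sentiment(text):
    """Crude lexicon-based polarity: positive, negative, or neutral."""
    words = set(text.lower().split())
    score = len(words & POS) - len(words & NEG)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

def respond(text, base_answer):
    """Prepend an empathetic phrase depending on detected sentiment."""
    s = sentiment(text)
    if s == "negative":
        return "I'm sorry to hear that. " + base_answer
    if s == "positive":
        return "Glad to help! " + base_answer
    return base_answer

print(respond("this is terrible", "Let me check your order."))
```

The sentiment-blind version of the chatbot would simply return `base_answer` unchanged in all three cases.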
In the course of this thesis, an overview will be given of how developers can guide users into acting environmentally friendly without the users realizing they are being nudged. In the last couple of years, our private and work lives have shifted more and more away from physical reality into a digital context. Since the start of the COVID-19 pandemic in 2019, even more aspects of everyday life have moved online, one of them being grocery shopping. Even though online grocery shopping is not yet common in Germany, a trend toward the online purchase of groceries is visible. This can be seen as an opportunity to tackle another challenge the world is facing: the climate crisis. One of its drivers is mindless consumption and the purchase of too much food. This paper aims to combine the need for more conscious consumption with the newly rising trend of online supermarkets. Furthermore, a prototype supermarket will be implemented to show whether environmentally friendly nudges are technically feasible. To eventually prove the effectiveness of a nudge, it needs to be tested.
Keywords: Nudging, Environment, Online supermarkets
Artificial intelligence (AI) is one of the most auspicious yet controversial technologies, with virtually unlimited potential to solve almost all of the existential problems humanity is facing today.[1] Huge resources are poured into the development, testing and application of AI, which is supposed to be utilized in almost all areas of everyday life.[2] It may be used to combat genetically inherited diseases, to revolutionize the economy, to bring prosperity and equality to everyone and to counter the effects of climate change.[3] With AI as the enabling technology, humanity may experience a better future. Today, AI capabilities can already drastically improve analytic processing tasks and algorithmic systems and have beaten humans in games such as chess.[4] Yet AI and all of its applications bring about a myriad of ethical challenges. Revolutionary weapon systems that achieve autonomy via AI and genome editing powered by AI are just some specific examples.[5] An omnipotent AI will be either the greatest or the vilest thing that has happened to humanity in its brief existence.[6] However, even today more and more computational devices are connected to each other, spurring a huge increase in global data streams that can be used to further train and enhance AI systems.
The prowess of AI for executing analytic tasks paves the way for its use in more and more applications. One application that shows great promise is the use of AI in surveillance.[7] AI surveillance applications are proliferating at a fast rate, with a number of applications already in use today.[8] These applications are aimed at accomplishing a number of policy objectives; some are in accordance with basic human rights, some are definitely not, and some
[1] Cf. Hawking (2018), p. 183 ff.
[2] Cf. Hawking (2018), p. 183 ff.
[3] Cf. Hawking (2018), p. 183 ff.
[4] Cf. Burton (2015), p. 1 ff.
[5] Cf. Hawking (2018), p. 183 ff.
[6] Cf. Hawking (2018), p. 183 ff.
[7] Cf. Feldstein (2019), p. 1.
[8] Cf. Feldstein (2019), p. 1.
belong in the nebulous area in between lawful and unlawful.[9] But what are lawful and unlawful uses of AI surveillance systems, and what are their ethical implications?
This thesis will examine the ethical implications of AI-based mass surveillance systems and try to answer the first central question: whether it is possible to use AI-based mass surveillance applications in an ethical way. Furthermore, the thesis will attempt to answer the second central question: how the ethical use of AI-based mass surveillance systems, if such use is possible, would materialize. Governmental agencies will be the focus of this discussion, as their use of the technology may pose bigger ethical challenges; yet private companies will play a part as well. To accomplish these two aims, the thesis will inspect the basics of ethics and possible ethical theories that can be utilized to answer the questions. Normative ethics will be studied first, with a focus on consequentialism and utilitarianism. To gain a deeper understanding of utilitarianism, act and rule utilitarianism will be compared. Afterwards, deontological theories will be the focus of the discussion, with a concentration on deontological pluralism. Next, the mentioned theories will be evaluated, discussing their advantages and weak spots, to assess which theory may serve as the ethical framework of this thesis and the subsequent answer to the two main questions.
The next step will be the establishment of the AI framework. This contains the definition of AI and a distinction from terms that are commonly used in its environment, such as automation and autonomy. The importance of data for AI will be discussed. Afterwards, the technological basis of AI will be outlined, covering key concepts such as machine learning and deep learning. Additionally, it will be examined how an AI learns. The possible uses of AI in general will be outlined briefly, blazing the trail to discussing the moral challenges of AI. Afterwards, the current pace of AI development will be studied.
In the chapter that follows, the use of AI in surveillance technology is highlighted. The possible ways in which AI can be used for surveillance purposes are reviewed, discussing facial
[9] Cf. Feldstein (2019), p. 1.
and behavioral recognition systems, smart cities, smart policing, communications/data-driven surveillance and their enabling technologies. Then, the global proliferation of AI surveillance systems is outlined.
Subsequently, the accordance of AI surveillance with basic human laws and rights, such as the right to privacy, will be checked to find out whether the law and the international framework of human rights allow AI surveillance, or at least impose restrictions that would greenlight the use of AI surveillance technology. All aspects of the thesis, especially the selected ethical framework, will be combined in this last section in order to develop a framework that makes it possible to determine whether AI surveillance systems can be ethically permissible, while also creating insights into how such an ethical AI surveillance system must be engineered. The thesis closes with a conclusion.
Digital technology is increasingly becoming a part of life and culture in society, and it must be consciously designed for the long-term benefit of humanity. Today, information systems are designed to do more than fulfill human duties or complete tasks. A widely adopted approach is a system design that focuses on the positive aspects of human-technology interaction. Positive computing is a design paradigm gaining traction because it emphasizes the importance of well-being as a bold goal to be implemented in system design. In this dissertation, technology design is part of an intergenerational environment aiming to facilitate information sharing regarding global startup innovation. Nevertheless, much of the research focuses on how technology can be used to facilitate intergenerational collaboration. On the other hand, very little is known about how technology can be "positively" designed to promote intergenerational innovation. Therefore, this dissertation applied Design Science Research (DSR) to inform and guide the creation of design principles through the lens of positive computing. The study results provide a holistic picture of the numerous barriers, well-being factors, competing concerns, and competencies that have been encountered in the context of intergenerational innovation and their implications. This dissertation is presented as a cumulative dissertation, answering three research questions divided into seven studies, consisting of nine articles.
In this study, we looked at the competencies and changes in the competency spectrum required for global start-ups in the digital age. Specifically, we explored intergenerational collaboration as an intervention in which experienced business-people from senior adult groups support young entrepreneurs. We conducted a Delphi study with 20 experts from different disciplines, considering the study context. The results of this study shed light on understanding the necessary competencies of entrepreneurs for intergenerationally supported start-up innovation by providing 27 competencies categorized as follows: intergenerational safety facilitation, cultural awareness, virtues for growth, effectual creativity, technical expertise, responsive teamwork, values-based organization, and sustainable network development. In addition, the study results also reveal the competency priorities and the minimum requirements for each competency group based on the global innovation process and can be used to develop a readiness assessment for start-up entrepreneurs.
In this document, a reliable data streaming mechanism for a TDMA LPWAN application is developed by adapting a link layer solution for power line communication published at the International Symposium on Power Line Communications and its Applications (ISPLC) 2015. A C++ implementation of the link layer services is provided and demonstrated to work at a packet error rate of 50%.
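Why a 50% packet error rate can still yield reliable delivery is easiest to see with an ARQ-style retransmission scheme. The sketch below simulates a simple stop-and-wait variant in Python; it illustrates the general mechanism only, not the ISPLC 2015 link layer or the C++ implementation itself, and all names are invented.

```python
import random

def transmit_stream(packets, loss_prob=0.5, max_tries=50, rng=None):
    """Stop-and-wait ARQ sketch: retransmit each frame until acknowledged.

    Shows why a link with a 50% packet error rate can still carry a
    reliable, in-order stream, at the cost of extra transmissions.
    """
    rng = rng or random.Random(42)
    delivered, total_tx = [], 0
    for seq, payload in enumerate(packets):
        for _ in range(max_tries):
            total_tx += 1
            if rng.random() >= loss_prob:   # frame survived the channel
                delivered.append((seq, payload))
                break                       # ACK received, next frame
        else:
            raise RuntimeError(f"frame {seq} lost {max_tries} times")
    return delivered, total_tx

data = [f"chunk-{i}" for i in range(10)]
out, tx = transmit_stream(data)
print(len(out), tx)   # all 10 chunks delivered; tx roughly 2x at 50% PER
```

In expectation, each frame needs 1/(1 - PER) = 2 transmissions at a 50% packet error rate, which is the throughput penalty a reliable stream pays on such a link.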
This work aims to generate synthetic electromyographic (EMG) signals using a Generative Adversarial Network (GAN). GANs are considered one of the most exciting and promising approaches in deep learning [6], offering the possibility to generate artificial data based on real data. A GAN consists of two main parts: a discriminator that attempts to differentiate between the generated data and the original data, and a generator that tries to fool the discriminator by generating data that looks like real data. The GAN works by staging a two-player minimax game between the generator and discriminator networks. To achieve the objective of generating realistic artificial electromyographic signals, two different architectures are considered for the generator and discriminator networks of the GAN model: long short-term memory (LSTM), which can avoid the long-term dependency problem and remembers information over a long period of time, and the convolutional neural network (CNN), which is a powerful tool for automatic feature extraction. Different combinations of CNN and LSTM, including a hybrid model, are evaluated within the GAN using the same training data set. The results and performances of each combination are compared and reviewed. The generated artificial EMG signals can be used to simulate real muscle activity, for example to improve muscle-signal-controlled prostheses using artificial data that may include conditions that do not exist in real data. This method of artificial data generation is not limited to EMG signals; the network can also be used to generate other synthetic biomedical signals, such as electroencephalogram (EEG) or electrocardiogram (ECG) signals, that can be practically used for testing algorithms and classifiers.
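The two-player minimax game mentioned above optimizes the value function min_G max_D E[log D(x)] + E[log(1 - D(G(z)))]. The toy sketch below computes the two resulting losses directly from discriminator scores; the actual study trains LSTM/CNN networks, so the function names and numbers here are purely illustrative.

```python
import numpy as np

# Sketch of the GAN minimax objective:
#   min_G max_D  E[log D(x_real)] + E[log(1 - D(G(z)))]
# D outputs a probability that its input is real.

def discriminator_loss(d_real, d_fake):
    # D wants d_real -> 1 and d_fake -> 0
    return -(np.log(d_real) + np.log(1.0 - d_fake)).mean()

def generator_loss(d_fake):
    # G wants the discriminator to score fakes as real
    # (non-saturating form commonly used in practice)
    return -np.log(d_fake).mean()

# A confident, correct discriminator drives its own loss toward 0 ...
good_d = discriminator_loss(np.array([0.99]), np.array([0.01]))
# ... while a fooled discriminator makes the generator's loss small.
fooled_g = generator_loss(np.array([0.9]))
print(good_d, fooled_g)
```

Training alternates gradient steps on these two losses; at the theoretical equilibrium the discriminator outputs 0.5 everywhere, meaning the generated EMG signals are statistically indistinguishable from the real recordings.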
This study aims to determine the competing concerns of people interested in startup development and entrepreneurship by using topic modeling and sentiment analysis on a social question-and-answer (SQA) website. Understanding the underlying concerns of startup entrepreneurs is critical to society and economic growth. Therefore, greater scientific support for entrepreneurship remains necessary, including data mining from virtual social communities. In this study, an SQA platform was used to identify the sentiment of thirty concerns of people interested in startup entrepreneurship. Based on topic modeling and sentiment analysis of 18,819 inquiries in various forums on the SQA platform, we identified additional questions about founder figures, keys to success, and the location of a startup. In addition, we found that general questions were rated more positively, especially when it came to pitching, finding good sources, disruptive innovation, idea generation, and marketing advice. On average, the identified concerns were rated 48.9 percent positive, 41 percent neutral, and 10.1 percent negative. This research establishes a critical foundation for future research and development of digital startups by outlining a variety of concerns associated with startup development in the digital age.
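The aggregation step behind the reported 48.9/41/10.1 percent split can be sketched as follows: given a sentiment label per inquiry, grouped by concern (topic), compute the share of each sentiment class. The sample data and names below are invented for illustration; the study applies this kind of aggregation to 18,819 real inquiries.

```python
from collections import Counter

def sentiment_shares(labeled_inquiries):
    """Percentage of positive/neutral/negative labels, rounded to 0.1."""
    counts = Counter(label for _, label in labeled_inquiries)
    total = sum(counts.values())
    return {s: round(100.0 * counts[s] / total, 1)
            for s in ("positive", "neutral", "negative")}

# Invented sample: (concern topic, sentiment label) per inquiry
sample = [
    ("pitching", "positive"), ("funding", "neutral"),
    ("idea generation", "positive"), ("location", "neutral"),
    ("founder figures", "negative"), ("marketing", "positive"),
    ("keys to success", "neutral"), ("pitching", "positive"),
]
print(sentiment_shares(sample))
```

Running the same tally per topic instead of globally yields the per-concern sentiment profile used to compare the thirty identified concerns.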
This study proposes a framework for the collaborative development of global start-up innovators in a multigenerational digital environment. Intergenerational collaboration has been identified as a strategy to support entrepreneurs during their formative years. However, integrating and fostering intergenerational collaboration remains elusive. Therefore, this study aims to identify competencies for successful global start-ups through intergenerational knowledge transfer. We used a systematic literature review to identify a competency set consisting of growth virtues, effectual creativity, technical domain expertise, responsive teamwork, values-based organization, sustainable networking, cultural awareness, and the facilitation of intergenerational safety. The competency framework serves as a foundation for knowledge management research on the global innovation readiness of people to collaborate across generations in the digital age.
So far, researchers have used a wellbeing-centered approach to catalyze successful intergenerational collaboration (IGC) in innovative activities. However, due to the subject’s multidisciplinary nature, there is still a dearth of comprehensive research devoted to constructing the IGC system. Thus, the purpose of this study is to fill a research void by providing a conceptual framework for information technology (IT) system designers to use as a jumping-off point for designing an IGC system with a wellbeing-oriented design. A systematic literature study was conducted to identify relevant terms and develop a conceptual framework based on a review of 75 selected scientific papers. The result consists of prominent thematic linkages and a conceptual framework related to design technology for IGC systems. The conceptual framework provides a comprehensive overview of IGC systems in the innovation process by identifying five barrier dimensions and using six wellbeing determinants as IGC catalysts. Moreover, this study discusses future directions for research on IGC systems. This study offers a novel contribution by shifting the technology design process from an age-based design approach to wellbeing-driven IGC systems. Additional avenues for investigation were revealed through the analysis of the study’s findings.
Public transportation will become highly automated in the future, and at some point human drivers will no longer be necessary. Today, many people are skeptical about such scenarios of autonomous public transport (APT). In this paper, we assess users' subjective priority of different factors that lead to personal acceptance or rejection of APT, using an adapted online version of the Q-methodology with 44 participants. We found four prototypical attitudes to which subgroups of participants relate: (1) technical enthusiasts, (2) social skeptics, (3) service-oriented non-enthusiasts, and (4) technology-oriented non-enthusiasts. We provide an unconventional perspective on APT acceptance that helps practitioners prioritize design requirements and communicate in a way that targets users' specific attitudes.
Rapid digital transformation is taking place due to the COVID-19 pandemic, forcing organisations and higher education institutions to change their working and learning culture. This study explores the challenges of rapid digital transformation arising during the pandemic in the higher education context. This research used the Q-methodology to understand the nine challenges that higher education encountered, which are perceived differently across four main patterns: (1) digital-nomad enterprise; (2) corporate collectivism; (3) well-being-oriented; and (4) pluralistic. This study broadens the current understanding of digital transformation, especially in higher education. The nine challenges and four patterns of transformation actors serve as a starting point for organisations in supporting technological choice and strategic interventions at the individual, group, and organisational behavioural levels. Moreover, five propositions, based on the competing concerns of these challenges, establish a framework for comprehending the ecosystem that enables rapid digital transformation. Strategies, prerequisites, and key factors during the (digital) technology development process benefit the cyber-society ecosystem. As a practical contribution, Q-methodology was used to investigate perspectives on digitalisation challenges during the pandemic.
Blended learning offers learning solutions for higher education institutions facing the fourth industrial revolution. In this study, we investigated the factors influencing student perceptions of blended learning based on gender-specific differences in Indonesia. We applied a research model to systematically assess the effect of design features on indicators of blended learning effectiveness (intrinsic motivation and student satisfaction). Moreover, we evaluated the research model for both genders separately. Based on a quantitative survey of 223 Indonesian students, our study confirms that the design features significantly influence the effectiveness of blended learning for both male and female students.
In this demo paper, we present a new visualization technique for dynamic networks. It displays the time slices of the dynamic network using two-dimensional graph layout algorithms and stacks them in the third dimension to show development over time. The visualization ensures that the same node always has the same position in each time slice, so that its development is easy to follow. It also allows filtering data and influencing node appearance based on properties. Additionally, we offer a two-dimensional comparison view for two time slices which highlights changes in graph structure and (if available) in node measures. The presented visualization technique is implemented using Web technology and is available in a Web-based analytics workbench. We demonstrate the benefits of these techniques through an analysis of a data set from a learning community.
Starting with the automatic gear change, the operation of a vehicle is becoming more and more abstract. In the future, we could control vehicles with single, simple commands. For such a maneuver-based vehicle control system, we investigate a head-up display design in a workshop. The aim is to identify common and distinct features of various display designs through mock-ups. First results show that different sizes of GUI elements are preferred for different driving states. The preferred position of GUI elements in the head-up display (HUD) is the central bottom area. We found two major interface design styles: static interfaces (all elements visible) with a fixed layout, and dynamic interfaces (only relevant elements visible) with a fixed or adaptive layout.
The rising levels of vehicle automation allow drivers to shift their attention to non-driving tasks while driving (i.e., texting, reading, or watching movies). However, these systems are prone to failure, and thus human intervention remains crucial in critical situations. In this work, we propose using human actuation as a new means of communicating take-over requests (TORs) through proprioception. We conducted a user study in a driving simulation in the presence of a complex working memory span task. We communicated TORs through four different modalities: vibrotactile, audio, visual, and proprioception. Our results show that the vibrotactile condition yielded the fastest reaction time, followed by proprioception. Additionally, proprioceptive cues resulted in the second-best performance on the non-driving task, following auditory cues.
Understanding user needs and behavior in automated vehicles (AVs) while traveling is essential for future in-vehicle interface and service design. Since AVs are not yet market-ready, current knowledge about AV use and perception is based on observations in other transportation modes, interviews, or surveys about the hypothetical situation. In this paper, we close this gap by presenting real-world insights into attitudes towards highly automated driving and non-driving-related activities (NDRAs). Using a Wizard of Oz AV, we conducted a real-world driving study (N = 12) with six rides per participant over multiple days. We provide insights into the users' perceptions and behavior. We found that (1) users trust a human driver more than a system, (2) safety is the main acceptance factor, and (3) the most popular NDRAs were being idle and using the smartphone.
This paper describes a system which allows platform-independent access to quizzes of the popular learning platform Moodle. The main focus is on the software architecture, which is implemented on the basis of platform-independent technology such as Web Services, HTML5 and JavaScript. Another aspect is the user interface, which was developed with the goal of running on a broad range of mobile devices, from small mobile phones up to large tablets.
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give teenage pupils insights into this area in a project-based learning environment with professional tools. Within the last 18 months, this project was successfully conducted several times with participants of different ages.
Human-computer interaction in security- and time-critical systems is an interdisciplinary challenge at the intersection of human factors, engineering, information systems and computer science. Application fields include control systems, critical infrastructures, vehicle and traffic management, production technology, business continuity management, medical technology, crisis management and civil protection. Nowadays, in many areas, mobile and ubiquitous computing as well as social media and collaborative technologies also play an important role. The specific challenges require the discussion and development of new methods and approaches for designing information systems. These are addressed in this special issue with a particular focus on technologies for citizens and volunteers in emergencies.
In recent large-scale catastrophic events, rescue worker resources have not always been sufficient to meet the regular response time. However, many volunteers have supported official forces in different disaster situations, often self-organized through social media. In this paper, a system is introduced which allows the coordination of trained volunteers by a professional control center, with the objective of a more efficient distribution of human resources and technical equipment. Volunteers are contacted via an app on their private smartphones. The design of this app is based on user requirements gathered in focus group discussions. The feedback of the potential users covers privacy aspects, low energy consumption, and mechanisms for long-term motivation and training. The authors present the results of the focus group analyses as well as their transfer into the app design concept.
In catastrophic events, the potential of help has grown through new technologies. Voluntary help takes many forms. In this paper, different categories of voluntary help are suggested. These categories are based on properties such as organizational structure, the helping process, and the kind of prosocial behavior, among others. The focus is clearly on the organizational structure and motivational aspects of helper groups. Examples are given for each category. The categorization aims to give a brief overview of the possible properties a group of system users could have.
Gestures are part of interaction between humans and are currently becoming more and more popular in the field of Human-Machine Interaction (HMI). First systems with mid-air gesture control are available in the automotive field. But it is still an open question which gestures are intuitive for users; standards do not yet exist. In this paper, we present a two-step user study on expectations of touchless gestures in vehicles as part of a participatory design process.
Mission- and safety-critical domains are increasingly characterized by interactive and multimedia systems, varying from large-scale technologies (e.g., airplanes) to wearable devices (e.g., smartglasses) operated by professional staff or volunteering laypeople. While technical availability, reliability and security of computer-based systems are of utmost importance, outcomes and performance increasingly depend to a large extent on sufficient human-machine interaction or even cooperation. While this i-com Special Issue on "Human-Machine Interaction and Cooperation in Safety-Critical Systems" presents recent research results from specific application domains like aviation, automotive, crisis management and healthcare, this introductory paper outlines the diversity of users, technologies and interaction or cooperation models involved.
Automotive user interfaces and, in particular, automated vehicle technology pose plenty of challenges to researchers, vehicle manufacturers, and third-party suppliers to support all diverse facets of user needs. To give an example, these emerge from the variation of different user groups ranging from inexperienced, thrill-seeking young novice drivers to elderly drivers with all their natural limitations. To allow assessing the quality of automotive user interfaces and automated driving technology already during development and within virtual test processes, the proposed workshop is dedicated to the quest of finding objective, quantifiable quality criteria for describing future driving experiences. The workshop is intended for HCI, AutomotiveUI, and "Human Factors" researchers and practitioners as well as for designers and developers. In adherence to the conference main topic "Spielend einfach interagieren", this workshop calls in particular for contributions in the area of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) and artificial intelligence (predictive HMIs, adaptive systems, intuitive interaction).
Automotive user interfaces and automated vehicle technology pose numerous challenges to support all diverse facets of user needs. These range from inexperienced, thrill-seeking, young novice drivers to elderly drivers with a mostly opposite set of preferences together with their natural limitations. To allow assessing the (hedonic) quality of automotive user interfaces and automated driving technology (i.e., UX) already during development, the proposed workshop is dedicated to the quest of finding objective, quantifiable criteria to describe future driving experiences. The workshop is intended for HCI, AutomotiveUI, and “Human Factors” researchers and practitioners as well as for designers and developers. In adherence to the conference main topic “Interaktion – Verbindet – Alle”, this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) with a focus on hedonic quality and design of user experience to enhance the feeling of safety in ADS.
System design for well-being needs an appropriate tool to help designers determine relevant requirements that can help human well-being flourish. Personas are a simple yet powerful tool in the early development stage of user interface design. Considering well-being determinants in the early design process provides benefits for both the user and the development team. Therefore, in this short paper, we performed a literature study to provide a conceptual model of well-being in personas and propose positive design interventions in the personas' creation process.
Automotive user interfaces and, in particular, automated vehicle technology pose plenty of challenges to researchers, vehicle manufacturers, and third-party suppliers to support all diverse facets of user needs. To give an example, these emerge from the variation of different user groups ranging from inexperienced, thrill-seeking young novice drivers to elderly drivers with all their natural limitations. To allow assessing the quality of automotive user interfaces and automated driving technology already during development and within virtual test processes, the proposed workshop is dedicated to the quest of finding objective, quantifiable quality criteria for describing future driving experiences. The workshop is intended for HCI, AutomotiveUI, and “Human Factors” researchers and practitioners as well as for designers and developers. In adherence to the conference main topic “Spielend einfach interagieren”, this workshop calls in particular for contributions in the area of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) and artificial intelligence (predictive HMIs, adaptive systems, intuitive interaction).
The way we communicate with autonomous cars will fundamentally change as soon as manual input is no longer required as a back-up for the autonomous system. Maneuver-based driving is a potential way to still allow the user to intervene with the autonomous car to communicate requests such as stopping at the next parking lot. In this work, we highlight different research questions that still need to be explored to gain insights into how such control can be realized in the future.
For highly automated vehicles (AVs), new interaction concepts need to be developed. Even in AVs, the driver might want to intervene and override the automation from time to time. To create the possibility of control, we explore vehicle control through maneuver-based interventions (MBI). Thereby, we focus on explicit, contact-less interaction, which could be beneficial in future AV designs, where the driver is not necessarily bound to classical controls. We propose a set of free-hand gestures and keywords for voice control derived in a user-centered design process. Further, we discuss properties, applicability, and user impressions of both interaction modalities. Voice control seems to be an efficient way to select a maneuver, and free-hand gestures could be used if the voice channel is blocked, e.g., through conversation with passengers.
Human emotion detection in automated vehicles helps to improve comfort and safety. Research in the automotive domain focuses largely on sensing drivers' drowsiness and aggression. We present a new form of implicit driver-vehicle cooperation, where emotion detection is integrated into an automated vehicle's decision-making process. Constant evaluation of the driver's reaction to vehicle behavior allows us to revise decisions and helps to increase the safety of future automated vehicles.
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and the levels of automation. This might be due to the fact that automated driving functions are misnamed (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground to test new interaction concepts in an automated driving setting.
Currently, car assistant systems mainly try to prevent accidents. Increasing built-in car technology also extends the potential applications in vehicles. Future cars might have virtual windshields that augment the traffic or individual virtual assistants interacting with the user. In this paper, we explore the potential of an assistant system that helps the car's occupants to calm down and reduce stress when they experience an accident in front of them. We present requirements from a discussion (N = 11) and derive a system design from them. Further, we test the system design in a video-based simulator study (N = 43). Our results indicate that an accident support system increases perceived control and trust and helps to calm the user down.
Self-driving cars will relieve the human from the driving task. Nevertheless, the human might want to intervene in the driving process and thus needs the possibility to control the car. Switching back to fully manual controls is uncomfortable once the user is passive and engaged in non-driving-related activities. A more comfortable way is controlling the car with elemental maneuvers (e.g., "turn left" or "stop"). Whereas touch interaction concepts exist, contactless interaction through voice and mid-air gestures has not yet been explored for maneuver-based car control. In this paper, we therefore compare the general eligibility of voice and mid-air gesture interaction with touch interaction as the primary maneuver selection mechanism in a driving simulator study. Our results show high usability for all modalities. Contactless interaction leads to a more positive emotional perception of the interaction, yet mid-air gestures lead to higher task load. Overall, voice and touch control are preferred over mid-air gestures by most users.
How to Increase Automated Vehicles’ Acceptance through In-Vehicle Interaction Design: A Review
(2020)
Automated vehicles (AVs) are on the verge of being available on the mass market. Research often focuses on technical aspects of automation, such as computer vision, sensing, or artificial intelligence. Nevertheless, researchers have also identified several challenges from a human perspective that need to be considered for a successful introduction of these technologies. In this paper, we first analyze human needs and system acceptance in the context of AVs. Then, based on a literature review, we provide a summary of current research on in-car driver-vehicle interaction and related human factor issues. This work helps researchers, designers, and practitioners to get an overview of the current state of the art.
This paper presents a new service-learning setting based on the collaboration of engineering students and people with disabilities. The implementation at a German university is described, and first results from two years of experience are shown. The objective of this case study is to show a transferable best-practice concept with impact.
Learning the German language is one of the most critical challenges for refugee children in Germany. It is a prerequisite for communication and integration into the educational system. To address the underlying problem, we conceptualized a set of principles for the design of language learning systems to support collaboration between teachers and refugee children, using a Design Science Research approach. The proposed design principles cover functional and non-functional requirements of such systems, including the integration of open educational resources and different media types to develop visual and audio narratives that can be linked to the children's cultural and social background. This study also illustrates the use of the proposed design principles by providing a working prototype of a learning system in which refugee children can learn the language collaboratively and with freely accessible learning resources. Furthermore, we discuss the proposed design principles with respect to various socio-technical aspects of the well-being determinants to promote a positive system design for different cultural and generational settings. Overall, despite some limitations, the implemented design principles can exploit the potential of open educational resources for the research context and yield recommendations for future research.
The highly successful lecture series on measurement and sensor technologies as part of the IEEE Workshop at the University of Applied Sciences Ruhr West (HRW) is being continued in collaboration with the University of Siegen, TU Chemnitz, and the ITMO National Research University of Information Technologies, Mechanics and Optics in St. Petersburg. This time the event features an even more international orientation by linking it with the Russian SENSORICA. The topics cover industrial and medical measurement technology as well as sensor technology in vehicles. Our event offers a platform for knowledge transfer between industry and public and commercial research institutions in the area of measurement technology.
This Abstract Book offers the opportunity of contacting speakers even after the event.
In addition we are very pleased to have selected contributions published in a special edition of the journal „tm Technisches Messen“ (De Gruyter Oldenbourg Verlag) again this year.
The detection of soil erosion processes in dams, hydraulic heave failure, or corrosion processes of reinforcing steel in concrete are a small selection of measuring applications in civil engineering where impedance analysis can be used to determine the measurand. These measuring applications place high demands on the measuring hardware, for example a common interface for fast data exchange, high resolution, independent functionality, and easy customizability to suit the measuring application. For that reason, a well-known application for steel-mill process monitoring can be used as a development platform. This hardware platform is based on a vector network analyzer and largely meets the requirements. However, a couple of modifications have to be made, such as replacing the ADC for a higher sample rate, adding Ethernet for easy and fast data exchange, and upgrading the microcontroller for more calculation power.
Process Monitoring in Steel-Mills using Impedance Analysis: VNA Improvement for Data Acquisition
(2017)
Process automation extends over every manufacturing step of a product in the steel mill to increase quality, quantity, and energy efficiency. The product dimensions are an important part of quality control; these must maintain the specified tolerances. In addition to the cross-sectional area, the measured data contain much more information about the manufacturing process, e.g. eccentricity, condition of the rolls, and defects of the rod. For analyzing the measured data and gathering more information about the manufacturing process, it is necessary to increase the speed of data acquisition by performing some modifications of the VNA, e.g. a faster analog-to-digital converter and microcontroller, improved firmware, and optimized values of the passive electrical components for faster time constants and transient responses.
Rolling mills are continually improved and optimized by implementing innovative technology to decrease costs and scrap. Despite the progressive automation and experience, some important process parameters still cannot be determined with sufficient accuracy. As part of the research project PIREF, the velocity of the hot rolled rod shall be measured using impedance analysis to estimate the volumetric flow rate of the material. For a high-accuracy measurement of the impedance, a powerful VNA is used. To minimize errors in the measurement caused by, e.g., temperature drift, a correction of the measurement frequency is needed. This must be achieved without recalibration of the VNA to avoid faulty behavior of the online control. To solve this problem, an approach based on polynomial regression is presented in this work.
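The drift-correction idea described above can be sketched with a simple polynomial regression. Note that the data values, the polynomial order, and the function names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Hypothetical calibration data: sensor temperature (deg C) against the
# observed drift of the measurement frequency (kHz). The values are
# illustrative only; the paper does not publish its data set.
temperature = np.array([20.0, 40.0, 60.0, 80.0, 100.0, 120.0])
freq_drift = np.array([0.0, -1.2, -2.9, -5.1, -7.8, -11.0])

# Fit a low-order polynomial to the drift characteristic.
coeffs = np.polyfit(temperature, freq_drift, deg=2)
drift_model = np.poly1d(coeffs)

def corrected_frequency(f_measured_khz, temp_c):
    """Compensate the measured frequency for temperature drift
    without recalibrating the VNA."""
    return f_measured_khz - drift_model(temp_c)
```

Once the polynomial is fitted offline, applying the correction at run time is a single evaluation and therefore cheap enough for online control.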
Quality and dimensional accuracy of hot rolled steel rods depend on several process parameters. In fact, many of these crucial parameters have not been sufficiently determined yet. By improving automation and process control, costs and scrap of production can be decreased. As part of the research project PIREF, one of these parameters – the roll gap – is under investigation besides other topics. Before rolling starts, the roll gap is typically set to a fixed value according to the planned dimensions of the product, but the forces during the rolling of the rod cause an enlargement of the roll gap. How the rolls change their position and form shall be examined in our research project. Therefore, a first experimental setup has been built to determine the change in position of the rolls under applied force. This is realized by a pot core coil as a sensor using impedance analysis. The first results are presented in this work as a proof of principle.
Process diagnosis is an important method for improving product quality in rolling mills. In addition, the measurement of process variables such as roll gap, cross-sectional area, velocity, and volume flow of the material during production enables the implementation of model-based control concepts to improve product quality. The non-contact speed measurement of hot wire and bar is still a big challenge due to the rough environmental conditions and is mainly solved with optical measuring methods in production. The alternative measurement principle with eddy current sensors presented in this paper enables velocity measurement at locations in a rolling mill where optical measurement methods are not suitable.
In the field of producing hot-rolled steel bars and wires, hot rolling mills are incompletely or barely equipped with measuring technology for recording relevant process parameters. Therefore, there is great potential to increase product quality and to decrease costs and scrap by improving process control through new sensor systems. One of these crucial parameters is the roll gap, which is investigated as part of the research project PIREF. In this paper, an experimental setup for examining the roll gap during a rolling process is presented, and based on these results, different sensor arrangements are discussed.
Velocity Approximation of Hot Steel Rods Using Frequency Spectroscopy of the Cross-Section Area
(2019)
In this work, an approach for velocity approximation of hot steel rods based on frequency spectroscopy is presented. For this purpose, a sensor already implemented in a rolling mill for measuring the cross-sectional area of the rolling stock is used to obtain information about the velocity of the hot rods. Moreover, the effect of forward slip is briefly discussed.
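The underlying relation can be sketched as follows. The sample rate, the assumed spatial period of the cross-section variation, and the synthetic signal are illustrative assumptions only, not values from the paper:

```python
import numpy as np

# Assumptions (illustrative): the rolling process imprints a periodic
# variation on the rod's cross-sectional area with a known spatial
# period; the dominant temporal frequency of the measured area signal
# then yields the rod velocity via v = spatial_period * f_dominant.
fs = 1000.0            # sample rate of the area sensor in Hz (assumed)
spatial_period = 0.05  # spatial period of the variation in m (assumed)
true_velocity = 2.5    # m/s, used only to synthesize a test signal

t = np.arange(0, 2.0, 1.0 / fs)
area = 100.0 + 0.5 * np.sin(2 * np.pi * (true_velocity / spatial_period) * t)

# Locate the dominant frequency via FFT (mean removed to suppress DC).
spectrum = np.abs(np.fft.rfft(area - area.mean()))
freqs = np.fft.rfftfreq(len(area), 1.0 / fs)
f_dominant = freqs[np.argmax(spectrum)]

velocity = spatial_period * f_dominant  # approximated rod velocity in m/s
```

In practice the forward slip mentioned in the abstract would shift this estimate, so a slip-dependent correction factor would be needed on top of the raw spectral estimate.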
The development of innovative measuring technology for process optimization in hot rolling mills becomes more and more relevant because of increasing demands on product quality. Measurement technology for high-resolution non-contact cross-sectional area measurement has shown that the variation in cross-sectional area contains information about the rolling process. This information can be used for the development of new measurement devices and analytical methods for process optimization. The harsh environmental conditions and strict safety regulations result in great effort when implementing a new sensor prototype in hot rolling mills. For this reason, this work presents a mechatronic test stand that can simulate the cross-sectional area variation under laboratory conditions realistically.
Researchers have previously utilized the advantages of a design driven by well-being and intergenerational collaboration (IGC) for successful innovation. Unfortunately, scant information exists regarding barrier dimensions and correlated design solutions in the information systems (IS) domain, which can serve as a starting point for a design oriented toward well-being in an IGC system. Therefore, in this study, we applied the positive computing approach to guide our analysis in a systematic literature review and developed a framework oriented toward well-being for a system with a multi-generational team. Our study contributes to the IS community by providing five dimensions of barriers to IGC and the corresponding well-being determinants for positive system design. In addition, we propose further research directions to close the research gap based on the review outcomes.
Globalization and information technology enable people to join the movement of global citizenship and work without borders. However, different types of barriers exist that can affect collaboration in today's work environment, in which different generations are involved. Although researchers have identified several technical barriers to intergenerational collaboration (IGC), the influence of cultural diversity on IGC has rarely been studied. Therefore, using a quantitative study approach, this paper investigates the impact of differences in cultural background on perceived technical and operational barriers to IGC. Our study reveals six barriers to IGC that are perceived differently by culturally diverse people (CDP) and non-CDP. Furthermore, CDP can foster IGC because they consider the barriers to be less of a reason to avoid working with different generations than do non-CDP.
Enabling decentral collaborative innovation processes – a web-based real-time collaboration platform
(2018)
The main goal of this paper is to define a collaborative innovation process as well as a supporting tool. It is motivated by the increasing competition in global markets and the resulting propagation of decentralized projects with a high demand for innovative collaboration in global contexts. It is based on a project accomplished by the author group. A detailed literature review and the action design research methodology of the project led to an enhanced process model for decentral collaborative innovation processes and a basic realization of a browser-based real-time tool to enable these processes. The initial evaluation in a practical distributed setting has shown that the created tool is a useful way to support collaborative innovation processes.
Open Educational Resources (OER) intend to support access to education for everyone. However, this potential is not fully exploited due to various barriers in the production, distribution, and use of OER. In this paper, we present requirements and recommendations for systems for global OER authoring. These requirements, as well as the system itself, aim at helping creators of OER to overcome typical obstacles such as a lack of technical skills, different types of devices and systems, as well as cultural differences in cross-border collaboration. The system can be used collaboratively to create OER and supports multiple languages for localization. Our paper contributes to facilitating global, collaborative e-Learning and the design of authoring platforms by identifying key requirements for OER authoring in a global context.
Digital transformation is a process of digitizing the working and living environment in which people are at the center of digitization. In this paper, we present a personas-based guideline for system developers on how the humanization of digital transformation integrates into the design process. The proposed guideline uses positive personas from the beginning as a basis for the transformation of the working environment into digital form. We used literature research as a preliminary study for the process of wellbeing-driven digital transformation design, consisting of questions for structuring the required information in the positive personas as well as a potential method that could be integrated into the wellbeing-based design process.
Why Should the Q-method Be Integrated Into the Design Science Research? A Systematic Mapping Study
(2019)
The Q-method has been utilized over time in various areas, including information systems. In this study, we used a systematic mapping to illustrate how the Q-method has been applied within the Information Systems (IS) community, and we propose the integration of the Q-method into the Design Science Research (DSR) process as a tool for future DSR-based IS studies. In this mapping study, we collected peer-reviewed journals from the Basket-of-Eight journals and the digital library of the Association for Information Systems (AIS). We then grouped the publications according to the process of DSR and different variables for preparing the Q-method from IS publications. We found that the Q-methodology can be used to support each main research stage of DSR processes and can serve as a useful tool to evaluate a system in the IS topic of system analysis and design.
The virtual classroom continues to grow and is increasingly becoming the norm, but its implementation is fundamentally different for vocational students at Indonesian universities. With the promised benefits of the virtual classroom come many challenges and difficulties in implementation. Although successful design principles for virtual classrooms that support organizations in overcoming these challenges already exist, an approach to implementing the design principles of the virtual classroom in vocational higher education in Indonesia is still lacking. In this study, we aim to close this research gap and used design science research, interviewing lecturers to design the solutions. The proposed design approaches were implemented in a course and evaluated with students from two different groups. Overall, the evaluation of the proposed approaches shows significant results as an indicator of the benefits of implementing a virtual classroom for vocational students in Indonesia.
The adoption of Open Educational Resources (OER) can support collaboration and knowledge sharing. One of the main areas of OER usage is internationalization, i.e., the use in a global context. However, the globally distributed co-creation of digital materials is still low. Therefore, we identify essential barriers, in particular for the co-authoring of OER in global environments. We use a design science research method to introduce a barrier framework for co-authoring OER in global settings and propose a wellbeing-based system design, constructed from the barrier framework, for an OER co-authoring tool. We describe how positive computing concepts can be used to overcome barriers, emphasizing design that promotes the author's sense of competence, relatedness, and autonomy.
Autonomous driving is one of the future visions on which many vehicle manufacturers are working intensively. Nowadays, it is already partially supported by high-class vehicles. A completely autonomous journey is indeed the goal but is not yet available in cars for public road traffic. Automatic lane keeping assistants, speed regulators, as well as sign and obstacle detection are parts of, or precursors on, the way to completely autonomous driving.
The American vehicle manufacturer Tesla is not only known for its electric drive, but also for the fact that it works intensively on autonomous driving. Tesla is thus the only vehicle manufacturer to use its users as so-called beta testers for its assistance systems. The progress and the function of the currently available Model S in the field of assistance systems and autonomous driving are documented and described in this paper. It is shown how well the test vehicle manages scenarios in normal road traffic situations with the assistance systems, e.g. lane keeping assistant, speed control, lane change and distance assistant, and which scenarios cannot be managed by the vehicle itself.
We are “not” too (young/old) to collaborate: Prominent Key Barriers to Intergenerational Innovation
(2019)
In this study, we analyzed the barriers to technology-supported intergenerational innovation to better understand how young and old can collaborate towards global innovations. Researchers in different disciplines have already identified various barriers to intergenerational collaboration. However, barriers change depending on the context of collaboration, and difficulties still exist in supporting intergenerational innovation in global settings. Therefore, we investigated the barriers that emerge when people work with someone decades older or younger. The results of our study show which barriers are influenced by age and which barriers exist only for senior or younger adults. The study theoretically contributes to deepening the Information Systems (IS) community's understanding of the barriers to intergenerational innovation that need to be considered when developing systems for global innovation.
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. The main focus of "Technical Image Processing of Dynamic Scenes" lies in the development of methods for the interpretation of images derived from various sensors. Apart from conventional visual images, this mainly involves X-ray and radar images. Taking into account the requirements of the various applications, suitable methods are derived. Current projects deal with the analysis of traffic scenes, the detection of detonators when X-raying luggage, and the determination of the type and expansion of oil pollution in maritime surveillance.
Technical Report
(2016)
This internal report discusses the theoretical and practical aspects of the cluster management framework SimpleHydra, which was developed to allow researchers the quick setup of classical small- to mid-scale computation clusters while being as lightweight and platform-independent as possible. We motivate crucial design choices with a theoretical analysis with respect to time and space complexity; furthermore, we give a comprehensive introduction to the framework's usage (which includes examples and a detailed description of fundamental concepts as well as data structures). In addition, we illustrate application scenarios with complete source code examples. We hope that this document proves valuable not only as a development report but also as a practical manual for SimpleHydra.
We present a study on 3D-based hand pose recognition using a new generation of low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. We investigate the performance of different 3D descriptors, as well as the fusion of two ToF sensor streams. By basing a data fusion strategy on the fact that multilayer perceptrons can produce normalized confidences individually for each class, and similarly by designing information-theoretic online measures for assessing the confidence of decisions, we show that appropriately chosen fusion strategies can improve overall performance to a very satisfactory level. Real-time capability is retained, as the used 3D descriptors, the fusion strategy, and the online confidence measures are computationally efficient.
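One plausible entropy-weighted fusion of two per-class confidence vectors could look like the sketch below. The weighting scheme, function names, and all values are illustrative assumptions and not necessarily the paper's exact strategy:

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a normalized confidence vector."""
    p = np.clip(p, 1e-12, 1.0)
    return float(-np.sum(p * np.log(p)))

def fuse(conf_a, conf_b):
    """Weight each sensor's confidences by one minus its normalized
    entropy and renormalize. A low-entropy (i.e., certain) classifier
    dominates the fused decision; a hypothetical strategy, not the
    paper's exact one."""
    max_h = np.log(len(conf_a))
    w_a = 1.0 - entropy(conf_a) / max_h
    w_b = 1.0 - entropy(conf_b) / max_h
    fused = w_a * conf_a + w_b * conf_b
    return fused / fused.sum()

# Sensor A's classifier is confident about class 2, sensor B's is
# nearly uniform, so the fused decision follows sensor A.
conf_a = np.array([0.05, 0.05, 0.85, 0.05])
conf_b = np.array([0.30, 0.25, 0.20, 0.25])
fused = fuse(conf_a, conf_b)
```

Such a scheme stays cheap (one entropy evaluation per stream per frame), which matches the real-time requirement stated in the abstract.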
For face recognition from video streams, speed and accuracy are vital aspects. The first decision whether a preprocessed image region represents a human face or not is often made by a feed-forward neural network (NN), e.g. in the Viisage-FaceFINDER® video surveillance system. We describe the optimisation of such a NN by a hybrid algorithm combining evolutionary multi-objective optimisation (EMO) and gradient-based learning. The evolved solutions perform considerably faster than an expert-designed architecture without loss of accuracy. We compare an EMO and a single-objective approach, both with online search strategy adaptation. It turns out that EMO is preferable to the single-objective approach in several respects.
We propose a new approach to object detection based on the data fusion of texture and edge information. A self-organizing Kohonen map is used as the coupling element of the different representations. Therefore, an extension of the proposed architecture incorporating other features, even features not derived from vision modules, is straightforward: it reduces to a redefinition of the local feature vectors and a retraining of the network structure. The resulting hypotheses of object locations generated by the detection process are finally inspected by a neural network classifier based on co-occurrence matrices.
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, scientific progress has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely the initial segmentation (object detection), object tracking, and object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is the integrative coupling of different algorithms providing partly redundant information.
Analysis of dynamic scenes
(2000)
In this paper, the proposed architecture for dynamic scene analysis is illustrated by a driver assistance system. To reduce the number of traffic accidents and to increase the driver's comfort, the idea of designing driver assistance systems has arisen in recent years. The principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four different parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map) in search of object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (traffic rules, physical laws) and additional information (GPS, lane information), and it is used implicitly by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects the information for contradictions. It is strongly connected to the behavior planning, using only information needed for the actual task. In the scene interpretation, consistent representations (i.e., bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task.
We present a novel approach to distributing matrix multiplications among GPU-equipped nodes in a cluster system. In this context we discuss the induced challenges and possible solutions. Additionally, we state an algorithm which outperforms optimized GPU BLAS libraries for small matrices. Furthermore, we provide a novel theoretical model for distributing algorithms within homogeneous computation systems with multiple hierarchies. In the context of this model we develop an algorithm which can find the optimal distribution parameters for each involved subalgorithm. We provide a detailed analysis of the algorithm's space and time complexities and justify its use with a structured evaluation within a small GPU-equipped Beowulf cluster.
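The core idea of distributing a matrix product over homogeneous nodes can be sketched as a row-block decomposition. The following is a minimal illustrative sketch (not the paper's actual algorithm or distribution model): matrix A is split into row blocks, each of which could be multiplied with B on a separate node, and the partial results are stacked.

```python
import numpy as np

def distributed_matmul(A, B, num_nodes=4):
    """Sketch: split A into row blocks, one per node; each node computes
    its block times B independently; the results are concatenated.
    Here the 'nodes' are simulated by a sequential loop."""
    row_blocks = np.array_split(A, num_nodes, axis=0)
    # In a real cluster each block product would run on a separate node/GPU.
    partial_results = [block @ B for block in row_blocks]
    return np.vstack(partial_results)

A = np.random.rand(6, 5)
B = np.random.rand(5, 3)
C = distributed_matmul(A, B)
assert np.allclose(C, A @ B)  # identical to the undistributed product
```

The real difficulty, which the paper addresses, lies in choosing the block sizes and hierarchy-aware distribution parameters so that communication cost does not dominate; this sketch only shows the decomposition itself.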
We present a novel method to perform multi-class pattern classification with neural networks and test it on a challenging 3D hand gesture recognition problem. Our method consists of a standard one-against-all (OAA) classification, followed by another network layer classifying the resulting class scores, possibly augmented by the original raw input vector. This allows the network to disambiguate hard-to-separate classes as the distribution of class scores carries considerable information as well, and is in fact often used for assessing the confidence of a decision. We show that by this approach we are able to significantly boost our results, overall as well as for particular difficult cases, on the hard 10-class gesture classification task.
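The two-stage idea described above — a one-against-all stage whose class scores, together with the raw input, feed a second classifier — can be sketched with simple linear least-squares stages on toy data (the actual paper uses neural networks; this is an illustrative stand-in, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 3-class data: well-separated Gaussian blobs.
X = np.vstack([rng.normal(m, 0.3, size=(30, 2))
               for m in ([0, 0], [2, 0], [0, 2])])
y = np.repeat([0, 1, 2], 30)
Y = np.eye(3)[y]                                  # one-hot targets

# Stage 1: one-against-all linear scorers (least-squares fit).
Xb = np.hstack([X, np.ones((len(X), 1))])
W1 = np.linalg.lstsq(Xb, Y, rcond=None)[0]
scores = Xb @ W1                                  # per-class score vector

# Stage 2: a second layer classifies the class scores,
# augmented by the original raw input vector.
Z = np.hstack([scores, X, np.ones((len(X), 1))])
W2 = np.linalg.lstsq(Z, Y, rcond=None)[0]
pred = np.argmax(Z @ W2, axis=1)
acc = (pred == y).mean()
assert acc > 0.9
```

The point of the second stage is exactly the one made in the abstract: the full distribution of stage-1 scores carries information (e.g. systematic confusions between classes) that a plain argmax over the scores discards.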
A light-weight real-time applicable hand gesture recognition system for automotive applications
(2015)
We present a novel approach for improved hand-gesture recognition by a single time-of-flight (ToF) sensor in an automotive environment. As the sensor's lateral resolution is comparatively low, we employ a learning approach comprising multiple processing steps, including PCA-based cropping, the computation of robust point cloud descriptors and the training of a multilayer perceptron (MLP) on a large database of samples. A sophisticated temporal fusion technique boosts the overall robustness of recognition by taking into account data coming from previous classification steps. Overall results are very satisfactory when evaluated on a large benchmark set of ten different hand poses, especially when it comes to generalization to previously unknown persons.
We present a system for efficient dynamic hand gesture recognition based on a single time-of-flight sensor. As opposed to other approaches, we simply rely on depth data to interpret user movement with the hand in mid-air. We set up a large database to train multilayer perceptrons (MLPs) which are subsequently used for classification of static hand poses that define the targeted dynamic gestures. In order to remain robust against noise and to balance the low sensor resolution, PCA is used for data cropping and highly descriptive features, obtainable in real-time, are presented. Our simple yet efficient definition of a dynamic hand gesture shows how strong results are achievable in an automotive environment allowing for interesting and sophisticated applications to be realized.
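The PCA-based data cropping mentioned in both abstracts above can be illustrated by a standard normalization step: centering the point cloud and rotating it into its principal axes, so that subsequent features become invariant to hand orientation. This is a generic sketch of the technique, not the authors' exact procedure:

```python
import numpy as np

def pca_align(points):
    """Center a 3D point cloud and rotate it into its principal axes --
    a common normalization step before computing descriptors."""
    centered = points - points.mean(axis=0)
    # Eigenvectors of the 3x3 covariance matrix give the principal directions.
    _, vecs = np.linalg.eigh(np.cov(centered.T))
    # eigh returns ascending eigenvalues; reverse so axis 0 has most variance.
    return centered @ vecs[:, ::-1]

cloud = np.random.rand(500, 3) * [4.0, 2.0, 1.0]  # elongated toy "hand" cloud
aligned = pca_align(cloud)
var = aligned.var(axis=0)
# After alignment, variance decreases along successive axes.
assert var[0] >= var[1] >= var[2]
```

Cropping then amounts to discarding points outside a fixed bounding box in this aligned frame, which removes forearm and background points regardless of how the hand was oriented toward the sensor.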
We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons), and makes no explicit assumptions about the probabilistic structure of the problem as it is usually done in multi-class classification. By adding a cascade of additional classifiers, each of which receives the previous classifier's output in addition to regular input data, the approach harnesses unused information that manifests itself in the form of, e.g., correlations between predicted classes. Using multilayer perceptrons as a classification model, we demonstrate the validity of this approach by testing it on a complex ten-class 3D gesture recognition task.
Utilizing biometric traits for privacy and security applications is receiving increasing attention. Applications such as personal identification, access control, forensic applications, e-banking, e-government, e-health and, more recently, personalized human-smart-home and human-robot interaction are some examples. In order to offer person-specific services, a pre-identification step must be performed beforehand. Using biometrics in such applications faces diverse challenges. First, the choice of one trait over the others depends on the targeted application: some applications demand direct contact with biometric sensors, while others do not. The second challenge is the reliability of the biometric arrangement used; civil applications demand lower reliability compared to forensic ones. Third, a biometric system may use only one trait (uni-modal systems) or multiple traits (bi- or multi-modal systems); the latter is applied when systems with relatively high reliability are expected. The main aim of this paper is to provide a comprehensive view of biometrics and their applications. The above-mentioned challenges are analyzed in depth, and the suitability of each biometric sensor for the targeted application is discussed in detail. A detailed comparison between uni-modal and multi-modal biometric systems shows which system should be utilized where. Privacy and security issues of biometric systems are discussed as well. Three scenarios of biometric applications in home environments, human-robot interaction and e-health are presented.
As smart homes become more and more popular, the need for assisting systems that interface between users and home environments is growing. Furthermore, for people living in such homes, elderly and disabled people in particular, it is essential to develop devices that can support and aid them in their ordinary daily life. In this work we focus on sustaining the privacy of the user during real interaction with the surrounding home environment. A smart person-specific assistant system for services in the home environment is proposed. The role of this system is to assist persons by controlling home activities and guiding the adaptation of the smart-home-human interface towards the needs of the considered person, while at the same time sustaining the privacy of its interaction partner. As a special case of medical assistance, the system is implemented so that it provides person-specific medical assistance for elderly or disabled people. The system is able to identify its interaction partner using biometric features. According to the recognized ID, the system first adapts to the needs of the recognized person; second, it presents a person-specific list of medicines either visually or audibly; and third, it gives an alarm if a medicament is taken later or earlier than the normal time.
This contribution presents a novel approach of utilizing Time-of-Flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors are capable of providing depth data at high frame rates independent of illumination, making applications possible for indoor and outdoor situations. This comes at the cost of precision regarding depth measurements and a comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds which realizes fixed-sized input, making Deep Learning approaches applicable using Convolutional Neural Networks. In order to increase precision we introduce several methods to reduce noise and normalize the input to overcome difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real-time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
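The rasterization step described above — turning a variable-sized point cloud into a fixed-sized input a CNN can consume — can be sketched as an occupancy-grid binning. Grid size and normalization here are illustrative choices, not the paper's parameters:

```python
import numpy as np

def rasterize(points, grid=8):
    """Sketch: bin a variable-sized 3D point cloud into a fixed
    grid x grid x grid occupancy volume, yielding a constant-shape
    input suitable for a convolutional network."""
    p = points - points.min(axis=0)
    p = p / (p.max(axis=0) + 1e-9)            # normalize coordinates to [0, 1]
    idx = np.minimum((p * grid).astype(int), grid - 1)
    volume = np.zeros((grid, grid, grid))
    np.add.at(volume, tuple(idx.T), 1.0)      # count points per cell
    return volume / max(len(points), 1)       # normalize counts to fractions

cloud = np.random.rand(1000, 3)               # stand-in hand point cloud
v = rasterize(cloud)
assert v.shape == (8, 8, 8)
assert abs(v.sum() - 1.0) < 1e-6              # fractions sum to one
```

Because the output shape no longer depends on how many depth pixels fell on the hand, every sample presents the network with the same input dimensions, which is the precondition for applying convolutional layers.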
We present a light-weight real-time applicable 3D-gesture recognition system on mobile devices for improved Human-Machine Interaction. We utilize time-of-flight data coming from a single sensor and implement the whole gesture recognition pipeline on two different devices outlining the potential of integrating these sensors onto mobile devices. The main components are responsible for cropping the data to the essentials, calculation of meaningful features, training and classifying via neural networks and realizing a GUI on the device. With our system we achieve recognition rates of up to 98% on a 10-gesture set with frame rates reaching 20Hz, more than sufficient for any real-time applications.
We present a publicly available benchmark database for the problem of hand posture recognition from noisy depth data and fused RGB-D data obtained from low-cost time-of-flight (ToF) sensors. The database is the most extensive database of this kind containing over a million data samples (point clouds) recorded from 35 different individuals for ten different static hand postures. This captures a great amount of variance, due to person-related factors, but also scaling, translation and rotation are explicitly represented. Benchmark results achieved with a standard classification algorithm are computed by cross-validation both over samples and persons, the latter implying training on all persons but one and testing on the remaining one. An important result using this database is that cross-validation performance over samples (which is the standard procedure in machine learning) is systematically higher than cross-validation performance over persons, which is to our mind the true application-relevant measure of generalization performance.
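The distinction drawn above between cross-validation over samples and over persons comes down to how the folds are built. A minimal sketch of the leave-one-person-out split (generic logic, not the benchmark's actual tooling):

```python
import numpy as np

def leave_one_person_out(person_ids):
    """Yield (train_idx, test_idx) pairs where each test fold contains
    all samples of exactly one person -- the application-relevant
    generalization measure: train on all persons but one, test on the rest."""
    for p in np.unique(person_ids):
        test = np.where(person_ids == p)[0]
        train = np.where(person_ids != p)[0]
        yield train, test

ids = np.array([0, 0, 1, 1, 1, 2])            # toy person labels per sample
folds = list(leave_one_person_out(ids))
assert len(folds) == 3                        # one fold per person
for train, test in folds:
    # No person's samples may leak between train and test of a fold.
    assert set(ids[train]).isdisjoint(set(ids[test]))
```

Sample-wise cross-validation, by contrast, shuffles all samples before splitting, so samples of the same person appear on both sides of the split — which is exactly why it yields the systematically higher (and optimistic) scores reported above.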
Given the success of convolutional neural networks (CNNs) during recent years in numerous object recognition tasks, it seems logical to further extend their applicability to the treatment of three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability of automated feature generation and combine it with a novel 3D feature computation technique, preserving local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via ToF (time-of-flight) sensors from 20 different persons, after an extensive parameter search in order to optimize network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
Applying step heating thermography to wind turbine rotor blades as a non-destructive testing method
(2017)
Building upon prior results, we present an alternative approach to efficiently classifying a complex set of 3D hand poses obtained from modern Time-of-Flight (ToF) sensors. We demonstrate that it is possible to achieve satisfactory results in spite of low resolution and high noise (inflicted by the sensors) and a demanding outdoor environment. We set up a large database of point clouds in order to train multilayer perceptrons as well as support vector machines to classify the various hand poses. Our goal is to fuse data from multiple ToF sensors, which observe the poses from multiple angles. The presented contribution illustrates that real-time capability can be maintained with such a setup, as the used 3D descriptors, the fusion strategy and the online confidence measures are computationally efficient.
In this article we present a system for coupling different base algorithms and sensors for segmentation. Three different solutions for image segmentation by fusion are described and compared, and results are shown. The fusion of base algorithms with color information and a sensor fusion process of an optical and a radar sensor, including feedback over time, is realized. A feature-in decision-out fusion process is solved. For the fusion process a multilayer perceptron (MLP) with one hidden layer is used as a coupling net. The activity of the output neuron represents the membership of each pixel to an initial segment.
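The feature-in decision-out coupling net described above can be sketched as a one-hidden-layer MLP forward pass: each pixel contributes a feature vector drawn from the different sources, and the sigmoid output neuron gives that pixel's segment membership. Weights below are random placeholders; a real system would train them on labeled segmentations:

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp_fuse(features, W1, b1, W2, b2):
    """Forward pass of a one-hidden-layer MLP used as a coupling net:
    per-pixel feature vectors in, segment-membership activations out."""
    h = np.tanh(features @ W1 + b1)                   # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))       # sigmoid output in (0, 1)

# Hypothetical per-pixel features, e.g. texture, edge, color and radar responses.
n_pixels, n_features, n_hidden = 100, 5, 8
features = rng.normal(size=(n_pixels, n_features))
W1 = rng.normal(size=(n_features, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, 1));          b2 = np.zeros(1)

membership = mlp_fuse(features, W1, b1, W2, b2)
assert membership.shape == (n_pixels, 1)
assert ((membership > 0) & (membership < 1)).all()
```

The appeal of this design is that adding a further cue (another base algorithm or sensor) only widens the input feature vector and requires retraining, leaving the fusion architecture itself unchanged.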
We present a novel approach of distributing small-to mid-scale neural networks onto modern parallel architectures. In this context we discuss the induced challenges and possible solutions. We provide a detailed theoretical analysis with respect to space and time complexities and reinforce our computation model with evaluations which show a performance gain over state of the art approaches.
Object detection systems which operate on large data streams require efficient scaling with the available computation power. We analyze how the use of tile-images can increase the efficiency (i.e., execution speed) of distributed HOG-based object detectors. Furthermore, we discuss the challenges of using our developed algorithms in practical large-scale scenarios. We show with a structured evaluation that our approach can provide a speed-up of 30-180% for existing architectures. Due to its generic formulation, it can be applied to a wide range of HOG-based (or similar) algorithms. In this context we also study the effects of applying our method to an existing detector and discuss a scalable strategy for distributing the computation among nodes in a cluster system.
The behavior planning of a vehicle in real-world traffic is a difficult problem to solve. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. But ultimately, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper, behavior planning for a driver assistance system aimed at cruise control is proposed. In this system the controlled variables are determined by an evaluation of the dynamics of two one-dimensional neural fields. The stimuli of the fields are determined according to sensor information produced by a simulation environment.
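The dynamics of such a one-dimensional neural field can be sketched by an Euler-discretized Amari-type equation: activation relaxes toward a resting level, local excitation and global inhibition shape a peak, and the peak position encodes the selected value of the controlled variable. Kernel shape and all parameters below are illustrative, not those of the paper:

```python
import numpy as np

def neural_field_step(u, stimulus, dt=0.1, tau=1.0, h=-0.5):
    """One Euler step of a 1-D Amari-type neural field: local excitation,
    global inhibition, driven by an external stimulus (illustrative
    kernel and parameters, not the paper's)."""
    f = 1.0 / (1.0 + np.exp(-10 * u))              # firing-rate nonlinearity
    x = np.arange(len(u))
    # Interaction kernel: excitatory for nearby sites, inhibitory globally.
    w = 2.0 * np.exp(-(x[:, None] - x[None, :])**2 / 8.0) - 0.5
    return u + dt / tau * (-u + h + w @ f / len(u) + stimulus)

u = np.zeros(50)
stim = np.exp(-(np.arange(50) - 25.0)**2 / 4.0)    # sensor evidence at site 25
for _ in range(200):
    u = neural_field_step(u, stim)
# A single activation peak forms at (or immediately next to) the stimulus.
assert abs(int(np.argmax(u)) - 25) <= 1
```

Reading out the peak position of one field as a steering angle and of the other as a velocity turns the field dynamics into a decision mechanism that is stable against noisy, fluctuating sensor stimuli.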
In this paper, we describe a method to model human clothes for later recognition by the use of RGB and SWIR cameras. A basic model is estimated during people detection and tracking. This model is refined when recognition is triggered. For the refinement, several saliency maps are used to extract individual features. These individual features are located separately for each human body part. The body parts are estimated by the use of silhouette extraction combined with skeleton estimation. In this way, the model describes the human clothes in a compact manner which allows the use of a simple and fast comparison method for people recognition. Such models can be used in security and service applications.
A self-driving car that operates at SAE automation level 3 or 4 can navigate through different traffic conditions without human input. If such a system reaches its operating limits, it will emit a takeover request before shutting down. This request will likely generate a physical response in the driver. Our goal is to shed light on the stress perception of drivers in various scenarios. To this end, we carried out a feasibility study in preparation. Two subjects drove an autonomous vehicle; ECG signals were recorded during the ride and evaluated afterwards. Unfortunately, the stress reaction to takeover requests could not be investigated due to the poor performance of the vehicle's autonomous driving mode; however, the reaction to autopilot misconduct without warning to the driver could be investigated instead.
Checking wind turbines for damage is a common problem for operators of wind parks, as regular inspections are legally required in many countries and prevention is economically viable. While some of the common forms of damage are easily visible on the surface, structural problems can remain invisible for years before they eventually result in catastrophic failure of a rotor blade. Common methods of testing fibre composite parts, like ultrasonic testing or X-ray tests, are impractical due to the large dimensions of wind turbine components and their limited accessibility for any short-range methods. Active thermographic inspection of wind turbines is a promising approach to testing for structural flaws beneath the surface of rotor blades. As part of an ongoing research project, a setup for testing the general viability of this method was built and used to compare different thermographic cameras. A sample cut from a discarded rotor blade was modified to emulate structural damage. The results are promising for the development of a cost-effective on-site testing system.
Increasing economic viability and safety through structural health monitoring of wind turbines
(2017)
Serious accidents, with property damage or even human casualties, result from structural flaws in wind turbine rotor blades. Common maintenance practices lead to long downtimes and do not achieve the required results. Therefore, the Ruhr West University of Applied Sciences and iQbis Consulting GmbH are currently researching a new structural health monitoring method for wind turbine rotor blades. The goal of this project is to build a sensor system that can detect structural weaknesses inside rotor blades without requiring downtime for inspections by industrial climbers. This technology has the potential to prevent accidents, save lives, extend the useful life of wind turbines and optimize the production of green energy.
We present a pipeline for recognizing dynamic freehand gestures on mobile devices based on extracting depth information coming from a single Time-of-Flight sensor. Hand gestures are recorded with a mobile 3D sensor, transformed frame by frame into an appropriate 3D descriptor and fed into a deep LSTM network for recognition purposes. LSTM being a recurrent neural model, it is uniquely suited for classifying explicitly time-dependent data such as hand gestures. For training and testing purposes, we create a small database of four hand gesture classes, each comprising 40 × 150 3D frames. We conduct experiments concerning execution speed on a mobile device, generalization capability as a function of network topology, and classification ability ‘ahead of time’, i.e., when the gesture is not yet completed. Recognition rates are high (>95%) and maintainable in real-time as a single classification step requires less than 1 ms computation time, introducing freehand gestures for mobile systems.
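The recurrent classification step described above — per-frame 3D descriptors fed through an LSTM, with the final hidden state mapped to a gesture class — can be sketched with a single minimal LSTM cell in plain NumPy. Weights here are random placeholders; the actual system trains a deep LSTM network on the recorded gesture database:

```python
import numpy as np

rng = np.random.default_rng(2)

def lstm_classify(frames, Wx, Wh, b, Wout):
    """Run a minimal LSTM cell over a sequence of per-frame descriptors
    and classify from the final hidden state (illustrative untrained
    weights; a real system would learn them from recorded gestures)."""
    H = Wh.shape[1]
    h = np.zeros(H); c = np.zeros(H)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in frames:
        gates = Wx @ x + Wh @ h + b
        i, f, o, g = np.split(gates, 4)            # input/forget/output/cell gates
        c = sig(f) * c + sig(i) * np.tanh(g)       # update cell state
        h = sig(o) * np.tanh(c)                    # update hidden state
    return int(np.argmax(Wout @ h))                # class decision

n_frames, d, H, n_classes = 150, 16, 32, 4         # e.g. 150 frames per gesture
frames = rng.normal(size=(n_frames, d))            # stand-in 3D descriptors
Wx = rng.normal(size=(4 * H, d)) * 0.1
Wh = rng.normal(size=(4 * H, H)) * 0.1
b = np.zeros(4 * H)
Wout = rng.normal(size=(n_classes, H))
label = lstm_classify(frames, Wx, Wh, b, Wout)
assert 0 <= label < n_classes
```

Because the hidden state is available after every frame, the same forward pass also supports the 'ahead of time' classification mentioned above: the readout can be applied to the current hidden state before the gesture is completed.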
RELEVANCE & RESEARCH QUESTION: Currently, the effectiveness of Virtual Reality (VR) and Augmented Reality (AR) systems as practical teaching methods is virtually uncharted. Proof that these systems can provide the same or better learning outcomes than a text-instructed practical task would represent a significant benefit for educational activities. METHODS & DATA: To fathom this effectiveness, an experimental study with three conditions (VR, AR and a real setup) was used to teach participants how to assemble a standard computer. Each condition was divided into two parts: part one, in which participants were confronted with their specific scenario, and part two, in which participants had to go through a real practice run one week later. The learning outcome was determined by the designation of hardware parts, a quiz that queried their function, and the correct assembly of the components, in addition to the time needed. Apart from mere performance, the acceptance of such applications in an academic context and differences in evaluation by men and women were of interest. RESULTS: Results concerning the learning outcome showed that participants from the VR condition outperformed those who learned from the real setup (M=10.0, SD=0.0 [virtual reality] vs. M=8.95, SD=1.27 [control]). Furthermore, results from the assembly duration assessment demonstrated that VR group participants completed their tasks 6.62% faster than the control group. Regarding the identification of hardware parts, both groups scored a significant improvement in the post condition compared to the first test run, indicating learning progress. However, as the VR group achieved a better average outcome and a more significant difference between the trials, the results indicate a better performance by participants assigned to the VR condition. ADDED VALUE: The results revealed that VR and AR systems can exceed a text-based approach in terms of learning outcome. The effectiveness of these systems implies a major benefit for the educational landscape, as learning content that is not realizable in terms of cost, distance or logistics could be designed as an immersive and engaging experience.