Blended learning offers learning solutions for higher educational institutions facing the industrial revolution 4.0. In this study, we investigated the factors influencing student perceptions of blended learning based on gender-specific differences in Indonesia. We applied a research model to systematically assess the effect of design features on indicators of blended learning effectiveness (intrinsic motivation and student satisfaction). Moreover, we evaluated the research model for both genders separately. Based on a quantitative survey of 223 Indonesian students, our study confirms that the design features significantly influence the effectiveness of blended learning for both male and female students.
In this demo paper we present a new visualization technique for dynamic networks. It displays the time slices of the dynamic network using two-dimensional graph layout algorithms and stacks these in the third dimension to show the development over time. The visualization ensures that the same node always has the same position in each time slice, so that it is easy to follow its development. It also allows filtering data and influencing node appearance based on properties. Additionally, we offer a two-dimensional comparison view for two time slices which highlights changes in graph structure and (if available) in node measures. The presented visualization technique is implemented using Web technology and is available in a Web-based analytics workbench. We demonstrate the benefits of these techniques by analyzing a data set from a learning community.
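The core idea of keeping node positions stable across time slices can be sketched as follows (a minimal illustration with hypothetical names, not the workbench's actual implementation): assign one fixed 2D position per node over the union of all slices, then reuse it in every slice, with the slice index as the z coordinate.

```python
import math

def stable_stacked_layout(slices):
    """Assign every node that appears in any time slice a fixed 2D
    position (here simply points on a circle, standing in for any
    graph layout algorithm), then stack the slices along z.
    A node keeps the same (x, y) in every slice, which makes its
    development over time easy to follow."""
    all_nodes = sorted({n for g in slices for n in g})
    step = 2 * math.pi / len(all_nodes)
    pos2d = {n: (math.cos(i * step), math.sin(i * step))
             for i, n in enumerate(all_nodes)}
    # One (x, y, z) triple per node per slice; z is the slice index.
    return [{n: (*pos2d[n], z) for n in g} for z, g in enumerate(slices)]
```

Any deterministic 2D layout (e.g., a force-directed layout with a fixed seed) could replace the circle placement without changing the stacking scheme.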
The rising levels of vehicle automation allow drivers to shift their attention to non-driving tasks while driving (e.g., texting, reading, or watching movies). However, these systems are prone to failure, and thus human intervention remains crucial in critical situations. In this work, we propose using human actuation as a new means of communicating take-over requests (TORs) through proprioception. We conducted a user study in a driving simulation in the presence of a complex working memory span task. We communicated TORs through four different modalities, namely vibrotactile, audio, visual, and proprioception. Our results show that the vibrotactile condition yielded the fastest reaction time, followed by proprioception. Additionally, proprioceptive cues resulted in the second-best performance on the non-driving task, after auditory cues.
Understanding user needs and behavior in automated vehicles (AVs) while traveling is essential for future in-vehicle interface and service design. Since AVs are not yet market-ready, current knowledge about AV use and perception is based on observations in other transportation modes, interviews, or surveys about the hypothetical situation. In this paper, we close this gap by presenting real-world insights into attitudes towards highly automated driving and non-driving-related activities (NDRAs). Using a Wizard of Oz AV, we conducted a real-world driving study (N = 12) with six rides per participant over multiple days. We provide insights into the users' perceptions and behavior. We found that (1) users trust a human driver more than a system, (2) safety is the main acceptance factor, and (3) the most popular NDRAs were being idle and using the smartphone.
This paper describes a system which allows platform-independent access to quizzes of the popular learning platform Moodle. The main focus is on the software architecture, which is implemented on the basis of platform-independent technology such as Web Services, HTML5, and JavaScript. Another aspect is the user interface, which was developed with the goal of running on a broad range of mobile devices, from small mobile phones up to large tablets.
In-vehicle user interfaces pose a particular challenge in conception and development, since easy operation of driver assistance systems as well as comfort and entertainment functions in all driving situations is the primary goal of the operating and display concepts. At the same time, the increasing connectivity of the vehicle brings the long development cycles of motor vehicles into contact with the highly dynamic world of mobile phones and Internet applications. Further challenges arise from foreseeable changes in mobility behavior and the introduction of electric vehicles.
In the winter semester 2011/12, the Hochschule Ruhr West offered a study program for school pupils in the degree course Angewandte Informatik (Applied Computer Science) for the first time. It emerged from various activities concerning the transition from school to university. This article describes the experiences of introducing such a program at a university of applied sciences still in its start-up phase, from the perspective of both the university staff and the participating pupils.
Innovations in the vehicle, including the user interface, often first find their way into luxury-class vehicles and are developed according to the expectations of the corresponding target group, mostly 45 years and older. In the mobile device sector, on the other hand, innovations are driven by technically interested people, mostly adolescents. In this work, young people were asked to design a car cockpit for their own age group, in four stages spanning the next 20 years, based on their own assessment of the technical possibilities.
Knowledge of the fundamentals of human-computer interaction and usability engineering is becoming more and more important in technical domains. However, this interdisciplinary field of work and the corresponding degree programs are not broadly known. Therefore, at the Hochschule Ruhr West, University of Applied Sciences, a program was developed to give teenage pupils insights into this area in a project-based learning environment with professional tools. Within the last 18 months, this project was successfully conducted several times with participants of different ages.
This year's Workshop Automotive HMI again features a variety of talks on automotive human-machine interfaces. As in the past two years, an interactive innovation workshop is also part of the program. The motto of Mensch und Computer 2014 is „Interaktiv Unterwegs", which fits the topic of the workshop perfectly.
In recent years, the use of mobile devices in the automotive domain has become increasingly important. On the one hand, more and more people bring their mobile devices into their cars and want to access various functions of the device there. On the other hand, mobile devices and their operating systems have also proven to be ideal candidates for IT support in the automotive domain. The aim of this contribution is to present first experiences from the development of an infotainment system based on Android-based hardware.
Human-machine interaction in safety-critical systems is a topic of growing importance for computer science and the respective application domains. This workshop of the GI special interest group „Mensch-Maschine-Interaktion in sicherheitskritischen Systemen" within the section Mensch-Computer-Interaktion aims to reveal current developments and research questions and to provide new impulses for the field.
4th Workshop Automotive HMI
(2015)
In-vehicle user interfaces pose a particular challenge in conception and development, since safe operation of both driver assistance systems and comfort and entertainment functions in all driving situations is paramount. At the same time, increasing connectivity brings the long development cycles of motor vehicles into contact with the highly dynamic world of mobile phones and the Internet. Input and output technologies are, moreover, among the manufacturers' central means of emphasizing the value of the systems built into the vehicle and of standing out from the competition. To this end, this workshop presents and discusses concepts and technical solutions from designers, developers, and human factors experts from universities, research institutes, and the automotive industry.
In mass-casualty incidents, the large number of alarms can mean that the available rescue forces are no longer sufficient to cope with the tasks at hand or to meet response time limits. This work describes an approach that draws on additional help from the population, coordinated by a dispatcher in the existing control center. The focus is not on spontaneously organized helpers but on people who have registered in the system in advance with a clear skill profile and, where applicable, equipment. Particular requirements arise for the dispatchers in the control center, whose additional workload from the new system must be kept low, as well as for the volunteer helpers, who are alerted via a smartphone app and are also expected to communicate through it. These requirements influence both the system infrastructure and the user interface.
In catastrophic events, the potential of help has grown through new technologies. Voluntary help takes many forms. Within this paper, different categories of voluntary help are suggested. Those categories are based on properties such as organizational structures, the helping process, the kind of prosocial behavior, and many more. The focus is clearly on the organizational structure and motivational aspects of helper groups. Examples are given for each category. The categorization's aim is to give a brief overview of possible properties a group of system users could have.
5th Workshop Automotive HMI
(2016)
In-vehicle user interfaces pose a particular challenge in conception and development, since safe operation of driver assistance systems as well as comfort and entertainment functions in all driving situations is paramount. At the same time, increasing connectivity brings the long development cycles of motor vehicles into contact with the highly dynamic world of mobile phones and the Internet. Input and output technologies are, moreover, among the manufacturers' central means of emphasizing the value of the systems built into the vehicle. In keeping with the conference motto „Sozial Digital – Gemeinsam Auf Neuen Wegen", this workshop in particular presented work and visions that understand the automobile and in-vehicle HMIs as part of a connected digital world: a new kind of social human-machine ecosystem. The central question discussed in the workshop was what future systems must look like in order to optimally support both the human and the machine (following the MABA-MABA paradigm of Fitts, 1954). The workshop was again set up in an interdisciplinary manner and discussed, from a holistic perspective, concepts and technical solutions by and with designers, developers, and human factors experts from universities, research institutes, and the automotive industry.
Gestures are part of the interaction between humans and are currently becoming more and more popular in the field of Human-Machine Interaction (HMI). First systems with mid-air gesture control are available in the automotive field of application. However, it is still an open question which gestures are intuitive for users; standards do not exist. In this paper we present a two-step user study on expectations of touchless gestures in vehicles as part of a participatory design process.
However, the specific challenges of the field continue to require discussion and the development of new methods and approaches for designing information systems; these are addressed this year. In general, we focus on the effects of technologies on real-world practices rather than on the technology in isolation. The workshop based on these contributions likewise reveals current developments and research questions and provides new impulses for the field. The workshop is organized in two parts: in the first part, presenters have the opportunity to present their own research in compact form, partly at an early stage, including design-oriented, practice-based analyses and studies as well as developed and evaluated prototypes of new technologies, and then to discuss it with a view to its further development.
Automotive user interfaces and, in particular, automated vehicle technology pose plenty of challenges to researchers, vehicle manufacturers, and third-party suppliers in supporting all the diverse facets of user needs. For example, these emerge from the variation of user groups, ranging from inexperienced, thrill-seeking young novice drivers to elderly drivers with all their natural limitations. To allow assessing the quality of automotive user interfaces and automated driving technology already during development and within virtual test processes, the proposed workshop is dedicated to the quest of finding objective, quantifiable quality criteria for describing future driving experiences. The workshop is intended for HCI, AutomotiveUI, and human factors researchers and practitioners as well as for designers and developers. In adherence to the conference main topic "Spielend einfach interagieren", this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) and artificial intelligence (predictive HMIs, adaptive systems, intuitive interaction).
Thanks to technical progress in speech recognition and processing, speech is becoming increasingly popular as a form of interaction in vehicles, e.g., for operating infotainment systems. Controlling partially automated vehicles by voice, however, has so far received little research attention. Assuming the general suitability of voice control for partially automated vehicles, the aim of this work is to identify user expectations and specific requirements for voice control of the basic driving maneuvers. Requirements and voice commands are derived from the results of an expert workshop and an exploratory video study.
Public discussions of autonomous driving reveal a high expectation that, in critical cases, the algorithms make decisions according to ethical criteria. Capturing these criteria for the multitude of conceivable traffic situations in a way that matches the views of the largest part of the population poses a major methodological challenge. This work examines to what extent a deliberate decision matches actual behavior in a driving simulator. For a large proportion of the participants, a contradiction between stated intended action and actual action becomes apparent.
Automotive user interfaces and automated vehicle technology pose numerous challenges to supporting all the diverse facets of user needs. These range from inexperienced, thrill-seeking, young novice drivers to elderly drivers with a mostly opposite set of preferences together with their natural limitations. To allow assessing the (hedonic) quality of automotive user interfaces and automated driving technology (i.e., UX) already during development, the proposed workshop is dedicated to the quest of finding objective, quantifiable criteria to describe future driving experiences. The workshop is intended for HCI, AutomotiveUI, and human factors researchers and practitioners as well as for designers and developers. In adherence to the conference main topic “Interaktion – Verbindet – Alle”, this workshop calls in particular for contributions in the areas of human factors and ergonomics (user acceptance, trust, user experience, driving fun, natural user interfaces, etc.) with a focus on hedonic quality and design of user experience to enhance the feeling of safety in ADS.
System design for well-being needs an appropriate tool to help designers determine relevant requirements that can help human well-being flourish. Personas are a simple yet powerful tool in the early development stage of user interface design. Considering well-being determinants in the early design process provides benefits for both the user and the development team. Therefore, in this short paper, we perform a literature study to provide a conceptual model of well-being in personas and propose positive design interventions in the personas' creation process.
This workshop centers on findings about human-computer interaction in safety-critical application domains. Since HCI is taking place ever more frequently in such fields, for example disaster management, transportation, production, or medicine, many scientific disciplines, including computer science, are increasingly in demand. The challenge is to discuss and adapt existing approaches and methods and to develop innovative solutions.
The broad introduction of autonomous vehicles, whether for individual transport or public transit, is only a matter of time. This inevitably means that, in the foreseeable future, all road users will come into contact with this type of vehicle. This article discusses how approaches from positive computing can help to design automated vehicles in a way that contributes to people's well-being in traffic situations.
For highly automated vehicles (AVs), new interaction concepts need to be developed. Even in AVs, the driver might want to intervene and override the automation from time to time. To create the possibility of control, we explore vehicle control through maneuver-based interventions (MBI). We focus on explicit, contact-less interaction, which could be beneficial in future AV designs where the driver is not necessarily bound to classical controls. We propose a set of freehand gestures and keywords for voice control derived in a user-centered design process. Further, we discuss properties, applicability, and user impressions of both interaction modalities. Voice control seems to be an efficient way to select a maneuver, and free-hand gestures could be used if the voice channel is blocked, e.g., by conversation with passengers.
Even though many aspects of automated driving have not yet become reality, many human factors issues have already been investigated. However, recent discussions revealed common misconceptions in both research and society about vehicle automation and its levels. This might be due to the fact that automated driving functions are misleadingly named (cf. Autopilot) and that vehicles integrate functions at different automation levels (L1 lane keeping assistant, L2/L3 traffic jam assist, L4 valet parking). The user interface is one of the most critical issues in the interaction between humans and vehicles, and diverging mental models might be a major challenge here. Today's (manual) vehicles are ill-suited for appropriate HMI testing for automated vehicles. Instead, virtual or mixed reality might be a much better playground to test new interaction concepts in an automated driving setting.
Currently, car assistant systems mainly try to prevent accidents. Increasing built-in car technology also extends the potential applications in vehicles. Future cars might have virtual windshields that augment the traffic or individual virtual assistants interacting with the user. In this paper, we explore the potential of an assistant system that helps the car's occupants to calm down and reduce stress when they witness an accident in front of them. We present requirements from a discussion (N = 11) and derive a system design from them. Further, we test the system design in a video-based simulator study (N = 43). Our results indicate that an accident support system increases perceived control and trust and helps to calm down the user.
Self-driving cars will relieve the human of the driving task. Nevertheless, the human might want to intervene in the driving process and thus needs the possibility to control the car. Switching back to fully manual controls is uncomfortable once one is passive and engaged in non-driving-related activities. A more comfortable way is controlling the car with elemental maneuvers (e.g., "turn left" or "stop"). Whereas touch interaction concepts exist, contactless interaction through voice and mid-air gestures has not yet been explored for maneuver-based car control. In this paper, we therefore compare the general eligibility of voice and mid-air gestures with touch interaction as the primary maneuver selection mechanism in a driving simulator study. Our results show high usability for all modalities. Contactless interaction leads to a more positive emotional perception of the interaction, yet mid-air gestures lead to a higher task load. Overall, voice and touch control are preferred over mid-air gestures by most users.
The detection of soil erosion processes in dams, hydraulic heave failure, or corrosion processes of reinforcing steel in concrete are a small selection of measuring applications in civil engineering where impedance analysis can be used to determine the measurand. These measuring applications place high requirements on the measuring hardware, for example a common interface for fast data exchange, high resolution, independent functionality, and easy customizability to suit the measuring application. For that reason, a well-known application for steel-mill process monitoring can be used as a development platform. This hardware platform is based on a vector network analyzer and largely meets the requirements. However, a couple of modifications have to be made, such as replacing the ADC for a higher sample rate, adding Ethernet for easy and fast data exchange, and replacing the microcontroller for more computing power.
Process Monitoring in Steel-Mills using Impedance Analysis: VNA Improvement for Data Acquisition
(2017)
Process automation extends over every manufacturing step of a product in the steel mill to increase quality, quantity, and energy efficiency. The product dimensions are an important part of quality control; they must stay within the specified tolerances. In addition to the cross-sectional area, the measured data contain much more information about the manufacturing process, e.g., eccentricity, the condition of the rolls, and defects of the rod. To analyze the measured data and gather more information about the manufacturing process, it is necessary to increase the speed of data acquisition by performing some modifications of the VNA, e.g., a faster analog-to-digital converter and microcontroller, improved firmware, and optimized values of the passive electrical components for faster time constants and transient responses.
Rolling mills are continually improved and optimized by implementing innovative technology to decrease costs and scrap. Despite the progressive automation and experience, some important process parameters still cannot be determined with sufficient accuracy. As part of the research project PIREF, the velocity of the hot rolled rod shall be measured using impedance analysis to estimate the volumetric flow rate of the material. For a high-accuracy measurement of the impedance, a powerful VNA is used. To minimize errors in the measurement caused by, e.g., temperature drift, a correction of the measurement frequency is needed. This must be achieved without recalibration of the VNA to avoid faulty behavior of the online control. To solve this problem, an approach based on polynomial regression is presented in this work.
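The regression-based correction idea can be illustrated with a small numerical sketch (all values and the quadratic drift model are illustrative assumptions, not the project's actual calibration data): fit a polynomial to the observed frequency drift over temperature once, then subtract the predicted drift from new readings so that no VNA recalibration is needed.

```python
import numpy as np

# Hypothetical calibration data (illustrative values only): the
# measurement frequency drifts with temperature; the drift relative
# to a reference frequency is recorded at several temperatures.
temperature = np.array([20.0, 40.0, 60.0, 80.0, 100.0])              # degC
observed_f = np.array([1.000e6, 1.002e6, 1.005e6, 1.009e6, 1.014e6])  # Hz
reference_f = 1.000e6                                                 # Hz

# Fit a polynomial to the drift once; afterwards new readings can be
# corrected without touching the VNA calibration itself.
coeffs = np.polyfit(temperature, observed_f - reference_f, deg=2)

def corrected_frequency(f_measured, temp):
    """Subtract the regression-predicted drift at this temperature."""
    return f_measured - np.polyval(coeffs, temp)
```

In practice the polynomial degree and the calibration grid would be chosen from the actual drift characteristics of the instrument.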
Quality and dimensional accuracy of hot rolled steel rods depend on several process parameters. In fact, many of these crucial parameters cannot yet be sufficiently determined. By improving automation and process control, costs and scrap of production can be decreased. As part of the research project PIREF, one of these parameters, the roll gap, is under investigation alongside other topics. Before rolling starts, the roll gap is typically set to a fixed value according to the planned dimensions of the product, but the forces during the rolling of the rod cause an enlargement of the roll gap. How the rolls change their position and form shall be examined in our research project. Therefore, a first experimental setup has been built to determine the change in position of the rolls under applied force. This is realized with a pot core coil as a sensor using impedance analysis. The first results are presented in this work as a proof of principle.
Process diagnosis is an important method for improving product quality in rolling mills. In addition, the measurement of process variables such as roll gap, cross-sectional area, velocity, and volume flow of the material during production enables the implementation of model-based control concepts to improve product quality. The non-contact speed measurement of hot wire and bar is still a big challenge due to the rough environmental conditions and is mainly solved with optical measuring methods in production. The alternative measurement principle with eddy current sensors presented in this paper enables velocity measurement at locations in a rolling mill where optical measurement methods are not suitable.
In the field of producing hot-rolled steel bars and wires, hot rolling mills are incompletely or barely equipped with measuring technology for recording relevant process parameters. Therefore, there is great potential to increase product quality and to decrease costs and scrap by improving process control through new sensor systems. One of these crucial parameters is the roll gap, which is investigated as part of the research project PIREF. In this paper, an experimental setup for examining the roll gap during a rolling process is presented, and based on these results, different sensor arrangements are discussed.
Velocity Approximation of Hot Steel Rods Using Frequency Spectroscopy of the Cross-Section Area
(2019)
In this work, an approach for velocity approximation of hot steel rods based on frequency spectroscopy is presented. For this purpose, a sensor already implemented in a rolling mill for measuring the cross-sectional area of the rolling stock is used to obtain information about the velocity of the hot rods. Moreover, the effect of forward slip is briefly discussed.
Analyse dynamischer Szenen
(1999)
This article presents the analysis of dynamic scenes within a flexible architecture for solving driver assistance tasks in motor vehicles. Solving different tasks with related approaches requires a high degree of modularity and flexibility; only then can the given tasks be solved optimally with the available algorithms. In the presented architecture, an object-related analysis of sensor data, a behavior-based scene interpretation, and behavior planning are carried out. A global knowledge base, on which each individual module operates, contains the description of physical relationships, behavioral rules for road traffic, as well as object and scene knowledge. External knowledge (e.g., GPS, Global Positioning System) can also be integrated into the knowledge base. As an application example of the behavior planning, an intelligent cruise control has been realized.
Systems for automated image analysis are useful for a variety of tasks, and their importance is still increasing due to technological advances and growing social acceptance. Especially in the field of driver assistance systems, the progress in science has reached a level of high performance. Fully or partly autonomously guided vehicles, particularly for road-based traffic, pose high demands on the development of reliable algorithms due to the conditions imposed by natural environments. At the Institut für Neuroinformatik, methods for analyzing driving-relevant scenes by computer vision are developed in cooperation with several partners from the automobile industry. We introduce a system which extracts the important information from an image taken by a CCD camera installed at the rear-view mirror in a car. The approach consists of sequential and parallel sensor and information processing. Three main tasks, namely the initial segmentation (object detection), the object tracking, and the object classification, are realized by integration in the sequential branch and by fusion in the parallel branch. The main gain of this approach is given by the integrative coupling of different algorithms providing partly redundant information.
Analysis of dynamic scenes
(2000)
In this paper the proposed architecture for dynamic scene analysis is illustrated by a driver assistance system. To reduce the number of traffic accidents and to increase driver comfort, the idea of designing driver assistance systems has arisen in the past years. Principal problems are caused by having a moving observer (ego motion) in predominantly natural surroundings. In this paper we present a flexible architecture for a driver assistance system. The architecture can be subdivided into four parts: the object-related analysis, the knowledge base, the behavior-based scene interpretation, and the behavior planning unit. The object-related analysis is fed with data by the sensors (vision, radar). The sensor data are preprocessed (flexible sensor fusion) and evaluated (saliency map), searching for object-related information (positions, types of objects, etc.). The knowledge base is represented by static and dynamic knowledge. It consists of a set of rules (traffic rules, physical laws) and additional information (GPS, lane information), and it is implicitly used by algorithms in the system. The scene interpretation combines the information extracted by the object-related analysis and inspects the information for contradictions. It is strongly connected to the behavior planning, using only the information needed for the actual task. In the scene interpretation, consistent representations (e.g., a bird's-eye view) are organized and interpreted, and a scene analysis is performed. The results of the scene interpretation are used for decision making in behavior planning, which is controlled by the actual task.
Hand gestures in the automobile offer the potential to combine highly visible displays near the windshield with gesture control that is perceived as intuitive, familiar from touch-operated smartphones as well as from the touchless control of some television sets. With suitable positioning of the sensors, the eyes can remain on the road and the hands on the steering wheel, or at least very close to it. The early demonstrator described here shows the feasibility of this technology with a novel recognition method.
We present a novel approach of distributing matrix multiplications among GPU-equipped nodes in a cluster system. In this context we discuss the induced challenges and possible solutions. Additionally, we state an algorithm which outperforms optimized GPU BLAS libraries for small matrices. Furthermore, we provide a novel theoretical model for distributing algorithms within homogeneous computation systems with multiple hierarchies. In the context of this model we develop an algorithm which can find the optimal distribution parameters for each involved subalgorithm. We provide a detailed analysis of the algorithm's space and time complexities and justify its use with a structured evaluation within a small GPU-equipped Beowulf cluster.
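The abstract does not spell out the distribution scheme; the following sketch only illustrates the basic idea of a row-block decomposition of C = A·B across cluster nodes. The function name and the simulation of nodes by a plain loop are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def distributed_matmul(A, B, num_nodes):
    """Row-block distribution of C = A @ B across `num_nodes` workers.

    Each simulated node receives a contiguous block of rows of A plus the
    full matrix B and computes its partial result locally (on a GPU node
    this would be a BLAS call); the partial results are then gathered.
    """
    row_blocks = np.array_split(A, num_nodes, axis=0)   # scatter phase
    partial_results = [block @ B for block in row_blocks]  # local compute
    return np.vstack(partial_results)                   # gather phase
```

In a real cluster the scatter/gather phases would be message passing (e.g., MPI), and the per-node multiply would run on the GPU.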
We present a novel method to perform multi-class pattern classification with neural networks and test it on a challenging 3D hand gesture recognition problem. Our method consists of a standard one-against-all (OAA) classification, followed by another network layer classifying the resulting class scores, possibly augmented by the original raw input vector. This allows the network to disambiguate hard-to-separate classes as the distribution of class scores carries considerable information as well, and is in fact often used for assessing the confidence of a decision. We show that by this approach we are able to significantly boost our results, overall as well as for particular difficult cases, on the hard 10-class gesture classification task.
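A minimal sketch of the two-stage idea, assuming ridge-regularized linear classifiers in place of the paper's neural networks: a one-against-all stage produces class scores, and a second stage classifies those scores concatenated with the raw input. All function names and the toy regularization value are our assumptions:

```python
import numpy as np

def add_bias(X):
    return np.hstack([X, np.ones((X.shape[0], 1))])

def fit_linear(X, T, reg=1e-3):
    # Ridge-regularized least-squares fit of T ~ X @ W (one column per class).
    return np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ T)

def train_two_stage(X, y, num_classes):
    T = np.eye(num_classes)[y]          # one-hot targets
    W1 = fit_linear(add_bias(X), T)     # stage 1: one-against-all scores
    scores = add_bias(X) @ W1
    # Stage 2 sees the class scores *and* the raw input, so it can
    # disambiguate classes the first stage tends to confuse.
    W2 = fit_linear(add_bias(np.hstack([scores, X])), T)
    return W1, W2

def predict_two_stage(X, W1, W2):
    scores = add_bias(X) @ W1
    return np.argmax(add_bias(np.hstack([scores, X])) @ W2, axis=1)
```

The same stacking works with any classifier producing per-class scores; the paper uses an additional network layer rather than a linear fit.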
A light-weight real-time applicable hand gesture recognition system for automotive applications
(2015)
We present a novel approach for improved hand-gesture recognition by a single time-of-flight (ToF) sensor in an automotive environment. As the sensor's lateral resolution is comparatively low, we employ a learning approach comprising multiple processing steps, including PCA-based cropping, the computation of robust point cloud descriptors, and the training of a multilayer perceptron (MLP) on a large database of samples. A sophisticated temporal fusion technique boosts the overall robustness of recognition by taking into account data from previous classification steps. Overall results are very satisfactory when evaluated on a large benchmark set of ten different hand poses, especially when it comes to generalization to previously unknown persons.
We present a system for efficient dynamic hand gesture recognition based on a single time-of-flight sensor. As opposed to other approaches, we simply rely on depth data to interpret user movement with the hand in mid-air. We set up a large database to train multilayer perceptrons (MLPs) which are subsequently used for classification of static hand poses that define the targeted dynamic gestures. In order to remain robust against noise and to balance the low sensor resolution, PCA is used for data cropping and highly descriptive features, obtainable in real-time, are presented. Our simple yet efficient definition of a dynamic hand gesture shows how strong results are achievable in an automotive environment allowing for interesting and sophisticated applications to be realized.
We present a novel hierarchical approach to multi-class classification which is generic in that it can be applied to different classification models (e.g., support vector machines, perceptrons), and makes no explicit assumptions about the probabilistic structure of the problem as it is usually done in multi-class classification. By adding a cascade of additional classifiers, each of which receives the previous classifier's output in addition to regular input data, the approach harnesses unused information that manifests itself in the form of, e.g., correlations between predicted classes. Using multilayer perceptrons as a classification model, we demonstrate the validity of this approach by testing it on a complex ten-class 3D gesture recognition task.
Utilizing biometric traits for privacy and security applications is receiving increasing attention. Examples include personal identification, access control, forensic applications, e-banking, e-government, e-health and, recently, personalized human-smart-home and human-robot interaction. In order to offer person-specific services, an identification step must be carried out in advance. Using biometrics in such applications faces diverse challenges. First, which trait to use and which to exclude depends on the intended application: some applications demand direct contact with biometric sensors, while others do not. The second challenge is the reliability of the chosen biometric arrangement: civil applications demand lower reliability than forensic ones. Third, a biometric system may use only one trait (uni-modal systems) or multiple traits (bi- or multi-modal systems); the latter are applied when systems with relatively high reliability are expected. The main aim of this paper is to provide a comprehensive view of biometrics and its applications. The above-mentioned challenges are analyzed in depth. The suitability of each biometric sensor for the intended application is discussed in detail. A detailed comparison between uni-modal and multi-modal biometric systems shows which system should be used where. Privacy and security issues of biometric systems are discussed as well. Three scenarios of biometric applications in the home environment, human-robot interaction and e-health are presented.
As smart homes become more and more popular, the need for assisting systems which interface between users and home environments is growing. Furthermore, for people living in such homes, elderly and disabled people in particular, it is essential to develop devices which can support and aid them in their ordinary daily life. In this work we focus on sustaining the privacy of the user during real interaction with the surrounding home environment. A smart person-specific assistant system for services in the home environment is proposed. The role of this system is to assist persons by controlling home activities and guiding the adaptation of the smart-home-human interface towards the needs of the considered person, while at the same time sustaining the privacy of its interaction partner. As a special case of medical assistance, the system is implemented so that it provides person-specific medical assistance for elderly or disabled people. The system is able to identify its interaction partner using biometric features. According to the recognized ID the system, first, adapts towards the needs of the recognized person; second, it presents a person-specific list of medicines either visually or audibly; and third, it gives an alarm if a medicament is taken later or earlier than its scheduled time.
Forschung an Hochschulen
(2015)
In this article, research at universities of applied sciences is examined by the example of the Institut Informatik of Hochschule Ruhr West, founded in 2009. The institute's goal is to combine teaching and research in a suitable way in order to offer students, research associates and lecturers an attractive portfolio of research and teaching in the field of computer science. Besides conducting engaging courses enriched with current research questions, the institute's work is based on the cooperative treatment of socially relevant and forward-looking research tasks, participation in research networks, bilateral research activities with industry partners, and the acquisition of external funding.
This contribution presents a novel approach of utilizing time-of-flight (ToF) technology for mid-air hand gesture recognition on mobile devices. ToF sensors are capable of providing depth data at high frame rates independent of illumination, making applications possible for indoor and outdoor situations. This comes at the cost of precision regarding depth measurements and a comparatively low lateral resolution. We present a novel feature generation technique based on a rasterization of the point clouds which realizes fixed-size input, making deep learning approaches applicable using convolutional neural networks. In order to increase precision, we introduce several methods to reduce noise and normalize the input to overcome difficulties in scaling. Backed by a large-scale database of about half a million data samples taken from different individuals, our contribution shows how hand gesture recognition is realizable on commodity tablets in real-time at frame rates of up to 17 Hz. A leave-one-out cross-validation experiment demonstrates the feasibility of our approach, with classification errors as low as 1.5% achieved for persons unknown to the model.
We present a light-weight real-time applicable 3D-gesture recognition system on mobile devices for improved Human-Machine Interaction. We utilize time-of-flight data coming from a single sensor and implement the whole gesture recognition pipeline on two different devices, outlining the potential of integrating these sensors onto mobile devices. The main components are responsible for cropping the data to the essentials, calculation of meaningful features, training and classifying via neural networks, and realizing a GUI on the device. With our system we achieve recognition rates of up to 98% on a 10-gesture set with frame rates reaching 20 Hz, more than sufficient for any real-time application.
We present a publicly available benchmark database for the problem of hand posture recognition from noisy depth data and fused RGB-D data obtained from low-cost time-of-flight (ToF) sensors. The database is the most extensive database of this kind containing over a million data samples (point clouds) recorded from 35 different individuals for ten different static hand postures. This captures a great amount of variance, due to person-related factors, but also scaling, translation and rotation are explicitly represented. Benchmark results achieved with a standard classification algorithm are computed by cross-validation both over samples and persons, the latter implying training on all persons but one and testing on the remaining one. An important result using this database is that cross-validation performance over samples (which is the standard procedure in machine learning) is systematically higher than cross-validation performance over persons, which is to our mind the true application-relevant measure of generalization performance.
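The person-wise cross-validation protocol described above can be sketched as follows; the `fit`/`predict` callables are placeholders for any classifier, and the helper names are ours, not the benchmark's:

```python
import numpy as np

def leave_one_person_out(X, y, persons, fit, predict):
    """Cross-validation over persons: train on all persons but one, test on
    the held-out person; returns per-person accuracy. This is typically a
    harder, more application-relevant measure than cross-validation over
    randomly drawn samples."""
    accuracies = {}
    for p in np.unique(persons):
        train, test = persons != p, persons == p
        model = fit(X[train], y[train])
        accuracies[p] = float(np.mean(predict(model, X[test]) == y[test]))
    return accuracies
```

Sample-wise cross-validation mixes each person's data into both splits, which is why it systematically overestimates generalization to unseen persons.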
Touch versus mid-air gesture interfaces in road scenarios: measuring driver performance degradation
(2016)
We present a study comparing the degradation of the driver's performance during touch gesture vs. mid-air gesture use for infotainment system control. To this end, 17 participants were asked to perform the Lane Change Test, which requires each participant to steer a vehicle in a simulated driving environment while interacting with an infotainment system via touch and mid-air gestures. The decrease in performance is measured as the deviation from an optimal baseline. The study finds comparable deviations from the baseline for the secondary task of infotainment interaction for both interaction variants. This is significant because all participants were experienced in touch interaction but had no experience at all with mid-air gesture interaction, favoring mid-air gestures for the long-term scenario.
Given the success of convolutional neural networks (CNNs) in numerous object recognition tasks during recent years, it seems logical to further extend their applicability to the treatment of three-dimensional data such as point clouds provided by depth sensors. To this end, we present an approach exploiting the CNN's ability of automated feature generation and combine it with a novel 3D feature computation technique preserving the local information contained in the data. Experiments are conducted on a large data set of 600,000 samples of hand postures obtained via ToF (time-of-flight) sensors from 20 different persons, after an extensive parameter search to optimize network structure. Generalization performance, measured by a leave-one-person-out scheme, exceeds that of any other method presented for this specific task, bringing the error for some persons down to 1.5%.
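The abstract does not detail the 3D feature computation; one common way to turn variable-size point clouds into fixed-shape CNN input while preserving local structure is an occupancy/density grid. The following sketch is an illustrative stand-in, not the paper's technique, and the grid resolution is an arbitrary choice:

```python
import numpy as np

def voxelize(points, grid=(16, 16, 16)):
    """Rasterize a 3D point cloud into a fixed-size density grid so that
    clouds of varying size become constant-shape CNN input while local
    structure is preserved."""
    points = np.asarray(points, dtype=float)
    lo, hi = points.min(axis=0), points.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)   # avoid division by zero
    idx = ((points - lo) / span * (np.array(grid) - 1)).astype(int)
    vox = np.zeros(grid, dtype=np.float32)
    # np.add.at accumulates correctly even when points share a voxel
    np.add.at(vox, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    return vox / max(len(points), 1)         # normalize to a density
```

The resulting tensor can be fed directly to a 3D convolutional network.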
Applying step heating thermography to wind turbine rotor blades as a non-destructive testing method
(2017)
Building upon prior results, we present an alternative approach to efficiently classifying a complex set of 3D hand poses obtained from modern time-of-flight (ToF) sensors. We demonstrate that it is possible to achieve satisfactory results in spite of low resolution and high noise (inflicted by the sensors) and a demanding outdoor environment. We set up a large database of point clouds in order to train multilayer perceptrons as well as support vector machines to classify the various hand poses. Our goal is to fuse data from multiple ToF sensors which observe the poses from multiple angles. The presented contribution illustrates that real-time capability can be maintained with such a setup, as the 3D descriptors used, the fusion strategy and the online confidence measures are computationally efficient.
We present a novel approach of distributing small-to mid-scale neural networks onto modern parallel architectures. In this context we discuss the induced challenges and possible solutions. We provide a detailed theoretical analysis with respect to space and time complexities and reinforce our computation model with evaluations which show a performance gain over state of the art approaches.
Object detection systems which operate on large data streams require efficient scaling with available computation power. We analyze how the use of tile-images can increase the efficiency (i.e., execution speed) of distributed HOG-based object detectors. Furthermore, we discuss the challenges of using our algorithms in practical large-scale scenarios. We show with a structured evaluation that our approach can provide a speed-up of 30-180% for existing architectures. Due to its generic formulation, it can be applied to a wide range of HOG-based (or similar) algorithms. In this context we also study the effects of applying our method to an existing detector and discuss a scalable strategy for distributing the computation among nodes in a cluster system.
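The core idea of tiling a frame so that independent nodes can run the detector in parallel can be sketched as follows. This is our illustration, not the paper's algorithm; in practice the overlap should be at least the detector's window size so objects on tile borders are not missed:

```python
import numpy as np

def make_tiles(image, tile_size, overlap):
    """Split an image into overlapping tiles, each tagged with its origin,
    so that detections found inside a tile can be mapped back to frame
    coordinates after distributed processing."""
    h, w = image.shape[:2]
    step = tile_size - overlap
    tiles = []
    for y in range(0, max(h - overlap, 1), step):
        for x in range(0, max(w - overlap, 1), step):
            tiles.append(((y, x), image[y:y + tile_size, x:x + tile_size]))
    return tiles
```

Each `(origin, tile)` pair can then be dispatched to a cluster node; per-tile detections are translated by `origin` and merged (e.g., via non-maximum suppression) in a gather step.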
The behavior planning of a vehicle in real-world traffic is a difficult problem. If different hierarchies of tasks and purposes are built to structure the behavior of a driver, complex systems can be designed. Ultimately, however, behavior planning in vehicles can only influence the controlled variables: steering angle and velocity. In this paper, behavior planning for a driver assistance system aimed at cruise control is proposed. In this system the controlled variables are determined by evaluating the dynamics of two one-dimensional neural fields. The stimuli of the fields are determined from sensor information produced by a simulation environment.
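One-dimensional neural fields of the kind referred to here are commonly modeled by Amari-style dynamics. The following Euler-integration sketch is only illustrative; the kernel shape and all parameter values are our assumptions, not the paper's. The field settles on an activity peak whose position can be read out as, e.g., a target steering angle:

```python
import numpy as np

def simulate_neural_field(stimulus, steps=200, dt=0.1, tau=1.0, h=-2.0):
    """Euler integration of a 1D Amari-style neural field:
        tau * du/dt = -u + h + stimulus + w * f(u)
    with a local-excitation / global-inhibition kernel w and a sigmoid
    output nonlinearity f. The peak of the final activity u selects one
    value of the controlled variable (e.g., a steering angle)."""
    n = len(stimulus)
    xs = np.arange(n)
    d = np.abs(xs[:, None] - xs[None, :])
    w = 4.0 * np.exp(-d**2 / (2 * 3.0**2)) - 1.0   # interaction kernel
    f = lambda u: 1.0 / (1.0 + np.exp(-u))         # output nonlinearity
    u = np.zeros(n)
    for _ in range(steps):
        u += dt / tau * (-u + h + stimulus + w @ f(u) / n)
    return u
```

The global inhibition term makes the field a selection mechanism: when several stimuli compete, only the strongest peak survives, which is what makes such fields attractive for decision making in behavior planning.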
In this paper, we describe a method to model human clothes for later recognition using RGB and SWIR cameras. A basic model is estimated during people detection and tracking, and refined when recognition is triggered. For the refinement, several saliency maps are used to extract individual features, which are located separately for each human body part. The body parts are estimated using silhouette extraction combined with skeleton estimation. In this way, the model describes the human clothes in a compact manner which allows a simple and fast comparison method for people recognition. Such models can be used in security and service applications.
A self-driving car that operates on SAE automation level 3 or 4 can navigate through different traffic conditions without human input. If such a system reaches its operating limits, it emits a takeover request before shutting down. This request will likely generate a physical response in the driver. Our goal is to shed light on the stress perception of drivers in various scenarios. To this end, we carried out a feasibility study in preparation. Two subjects drove an autonomous vehicle; ECG signals were recorded during the ride and evaluated afterwards. The stress reaction to takeover requests could not be investigated due to the poor function of the vehicle's autonomous driving mode; however, the reaction to autopilot misconduct without warning to the driver could be investigated instead.
Checking wind turbines for damage is a common problem for operators of wind parks, as regular inspections are legally required in many countries and prevention is economically viable. While some of the common forms of damage are easily visible on the surface, structural problems can remain invisible for years before they eventually result in catastrophic failure of a rotor blade. Common forms of testing fibre composite parts like ultrasonic testing or X-ray tests are impractical due to the large dimensions of wind turbine components and their limited accessibility for any short-range methods. Active thermographic inspection of wind turbines is a promising approach to testing for structural flaws beneath the surface of rotor blades. As part of an ongoing research project, a setup for testing the general viability of this method was built and used to compare different thermographic cameras. A sample cut from a discarded rotor blade was modified to emulate structural damage. The results are promising for the development of a cost effective on-site testing system.
Increasing economic viability and safety through structural health monitoring of wind turbines
(2017)
Serious accidents with property damage or even human casualties result from structural flaws in wind turbine rotor blades. Common maintenance practices result in long downtimes and do not deliver the required results. Therefore, Ruhr West University of Applied Sciences and iQbis Consulting GmbH are currently researching a new structural health monitoring method for wind turbine rotor blades. The goal of this project is to build a sensor system that can detect structural weaknesses inside rotor blades without the downtime needed for industrial climbers. This technology has the potential to prevent accidents, save lives, extend the useful life of wind turbines and optimize the production of green energy.
We present a pipeline for recognizing dynamic freehand gestures on mobile devices based on extracting depth information coming from a single Time-of-Flight sensor. Hand gestures are recorded with a mobile 3D sensor, transformed frame by frame into an appropriate 3D descriptor and fed into a deep LSTM network for recognition purposes. LSTM being a recurrent neural model, it is uniquely suited for classifying explicitly time-dependent data such as hand gestures. For training and testing purposes, we create a small database of four hand gesture classes, each comprising 40 × 150 3D frames. We conduct experiments concerning execution speed on a mobile device, generalization capability as a function of network topology, and classification ability ‘ahead of time’, i.e., when the gesture is not yet completed. Recognition rates are high (>95%) and maintainable in real-time as a single classification step requires less than 1 ms computation time, introducing freehand gestures for mobile systems.
RELEVANCE & RESEARCH QUESTION: The effectiveness of Virtual Reality (VR) and Augmented Reality (AR) systems as practical teaching methods is currently virtually uncharted. Proof that these systems can provide the same or better learning outcomes than a text-instructed practical task would represent a significant benefit for educational activities. METHODS & DATA: To gauge this effectiveness, an experimental study with three conditions (VR, AR and a real setup) was used to teach participants how to assemble a standard computer. Each condition was divided into two parts: part one, in which participants were confronted with their specific scenario, and part two, in which participants had to go through a real practice run one week later. The learning outcome was determined by the naming of hardware parts, a quiz that queried their function, and the correct assembly of the components, in addition to the time needed. Apart from mere performance, the acceptance of such applications in an academic context and differences in evaluation between men and women were of interest. RESULTS: Results concerning the learning outcome showed that participants from the VR condition outperformed those who learned from the real setup (M=10.0, SD=0.0 [virtual reality] vs. M=8.95, SD=1.27 [control]). Furthermore, results from the assembly duration assessment demonstrated that VR group participants completed their tasks 6.62% faster than the control group. Regarding the identification of hardware parts, both groups scored a significant improvement in the post condition compared to the first test run, indicating learning progress. However, since the VR group achieved a better average outcome and a more significant difference between the trials, the results indicate better performance by participants assigned to the VR condition. ADDED VALUE: The results revealed that VR and AR systems can exceed a text-based approach in terms of learning outcome performance. The effectiveness of these systems implies a major benefit for the educational landscape, as learning content that is not realizable in terms of cost, distance or logistics could be designed as an immersive and engaging experience.
Relax yourself - Using Virtual Reality to enhance employees' mental health and work performance
(2019)
This paper presents work-in-progress aiming to develop an actively adapting virtual reality (VR) relaxation application. Due to the immersive nature of VR technologies, people can escape from their real environment and enter a relaxing state. The goal of the application is to adapt to the users' physiological signals to foster this positive effect. So far, a first version of the VR application has been constructed and is currently being evaluated in an experiment. Preliminary results of this study demonstrate that people appreciate the immersion into the virtual environment and the escape from reality. Moreover, participants highlighted the option to adapt to users' needs and preferences. Based on the final study data, the constructed application will be enhanced with regard to adaptation and surrounding factors.
The development of fully automated vehicles is becoming ever more present in public debate. However, crucial for the adoption and widespread use of these technical innovations is above all the acceptance of the population, in this case not only that of potential buyers but also that of other road users. We present an exploratory online study on the acceptance of autonomous driving based on quantitative and qualitative data from a sample of N = 89. Among other things, the results show low familiarity with the topic and comparatively pronounced trust, but a low intention to use.
Mobile Walzenmesstechnik
(2003)
Why do barriers to the exchange of open knowledge resources change in public administrations? Experts in the public sector were interviewed and outlined antecedents of change for certain barriers. The results are an initial step towards theorizing on barrier change and stepping beyond the current trend of merely categorizing difficulties in e-Learning and the use of open knowledge resources. Categorizing only shows the range of potential challenges; whether and how the barriers change is seldom addressed in the previous literature. The results presented in this study thus provide a new perspective on the phenomenon. They are part of a longitudinal study on open e-Learning in the public sector across four European countries and will provide fresh empirical input for discussions at the World Conference on E-Learning on how to advance future research and practice in the domain.
We present a system for 3D hand gesture recognition based on low-cost time-of-flight (ToF) sensors intended for outdoor use in automotive human-machine interaction. As signal quality is impaired compared to Kinect-type sensors, we study several ways to improve performance when a large number of gesture classes is involved. Our system fuses data from two ToF sensors, which is used to build up a large database and subsequently train a multilayer perceptron (MLP). We demonstrate that we are able to reliably classify a set of ten hand gestures in real-time, and we describe the setup of the system, the methods used and possible application scenarios.
PROPRE is a generic and modular neural learning paradigm that autonomously extracts meaningful concepts from multimodal data flows, driven by predictability across modalities in an unsupervised, incremental and online way. For that purpose, PROPRE combines projection and prediction. First, each data flow is topologically projected with a self-organizing map, largely inspired by the Kohonen model. Second, each projection is predicted from the other maps' activities by means of linear regressions. The main originality of PROPRE is the use of a simple and generic predictability measure that compares predicted and real activities for each modal stream. This measure drives the corresponding projection learning to favor the mapping of stimuli that are predictable across modalities at the system level (i.e., whose predictability measure exceeds some threshold). The predictability measure acts as a self-evaluation module that biases the representations extracted by the system so as to improve their correlations across modalities. We have already shown that this modulation mechanism is able to bootstrap representation extraction from previously learned representations with artificial multimodal data related to basic robotic behaviors [1] and improves the performance of the system for classification of visual data within a supervised learning context [2]. In this article, we improve the self-evaluation module of PROPRE by introducing a sliding threshold and apply it to the unsupervised classification of gestures captured by two time-of-flight (ToF) cameras. In this context, we illustrate that the modulation mechanism is still useful, although less efficient than purely supervised learning.
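The abstract does not give the exact form of the predictability measure. As a hedged illustration only, one plausible instantiation compares predicted and real map activities by cosine similarity and gates learning with a sliding threshold over recent values; both choices are our assumptions, not necessarily PROPRE's:

```python
import numpy as np

def predictability(real_activity, predicted_activity):
    """Predictability measure in the spirit of PROPRE: similarity between
    the real SOM activity of one modality and the activity predicted from
    the other modality via linear regression. Cosine similarity is used
    here as a plausible stand-in for the paper's measure."""
    num = float(real_activity @ predicted_activity)
    den = np.linalg.norm(real_activity) * np.linalg.norm(predicted_activity)
    return num / den if den > 0 else 0.0

def sliding_threshold(history, factor=1.0):
    # Sliding threshold: mean of recent predictability values, so the
    # gating criterion adapts as the system's predictions improve.
    return factor * float(np.mean(history)) if len(history) else 0.0
```

Projection learning for a stimulus would then be favored whenever `predictability(...)` exceeds `sliding_threshold(recent_values)`.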
Mobile devices are nowadays used almost ubiquitously by a large number of users. 2013 was the first year in which the number of mobile devices sold (tablet computers and mobile phones) outperformed the number of PCs sold, and this trend seems set to continue in the coming years. Additionally, the scenarios in which these kinds of devices are used grow almost daily. Another current trend is the idea of Cloud Computing, which basically allows a very flexible provision of computational services to customers. Yet these two trends are not well connected. Of course, there already exists quite a large number of mobile applications (apps) that utilize Cloud Computing based services. The other way round, however, that mobile devices provide one of the building blocks for the provision of Cloud Computing based services, is not well established yet. Therefore, this paper concentrates on an extension of a technology that allows standardized Web Services, one of the building blocks for Cloud Computing, to be provided on mobile devices. The extension consists of a new approach that now also allows asynchronous Web Services to be provided on mobile devices, in contrast to synchronous ones. Additionally, this paper illustrates how the described technology was already used in an app provided by a business partner.
This paper describes the design and development stages of a web-based framework aiming to support the creation of mobile applications within the context of mobile learning. The suggested approach offers the opportunity to deploy and execute these applications on mobile devices. The web-based solution additionally offers the possibility to visualize the data collected by the mobile applications in a web browser. Despite previous research efforts carried out in this domain, few projects have addressed these processes from a purely web-based perspective. Currently, a prototype of an authoring tool for creating mobile data collection applications is already implemented. In order to integrate and validate this solution in everyday educational settings, we are collaborating with a network of high schools. On the basis of workshops that we will carry out with teachers, refinements and requirements for further enhancements will be collected and used to guide our coming efforts.
In recent years, teachers have started to conduct pedagogical activities that promote different kinds of learning interactions supported by rich media. The deployment of such activities is increasing rapidly, as teachers and students own the technological means to support them. These activities can be carried out in traditional classroom settings using regular computers, and they can also be conducted from anywhere at any time using smartphones and tablets. In this paper, we describe a pedagogical activity requiring students to author and later peer-assess learning interactions incorporated into videos on YouTube. We describe EDU.Tube, an environment that enables them to create, share and consume such rich media learning activities across a variety of devices. We then detail a plan for the implementation of an activity that took place in three different classes dealing with diverse materials addressing computer science related topics. Finally, we provide an evaluation presenting students' insights and feedback resulting from the activity. We discuss and analyze these outcomes in order to derive concerns that could be applied to the further deployment of the EDU.Tube environment.
This paper presents a web-based framework that allows the creation and deployment of mobile learning activities. We present an authoring tool that allows non-technically skilled persons to design mobile learning tasks and deploy them as a web-based mobile application. Since the presented approach is based exclusively on web technologies, the deployed mobile application can be executed via a mobile browser and is therefore platform independent. Despite previous research efforts carried out in this domain, few projects have addressed this course of action from a purely web-based perspective. Through the latest developments in web technologies, mobile applications have access to internal sensors like the camera, microphone and GPS, and therefore allow data collection within web applications. In order to validate whether the proposed framework can be applied in educational settings, we conducted a pilot study with experienced teachers and present the results of these efforts in this paper.
As smart homes become more and more popular, the need for assisting systems that interface between users and home environments is growing. For elderly and disabled people living in such homes, it is especially important to develop devices that can support and aid them in their ordinary daily life. This demands means and tools that extend independent living and promote improved health. In this work, we review the state of the art in assistant systems for home environments. A case study of a medical assisting system for elderly people and people with disabilities is discussed in depth, and a smart NFC-based, person-specific assistant system for services in the home environment is proposed. The role of this system is to assist by controlling home activities and by adapting the home-human interface to the needs of the considered person. For the special case of medical assistance, the system is able to provide elderly or disabled people with person-specific medical support. The system identifies its interaction partner using biometric features. Based on the recognized ID, the system first adapts to the needs of the recognized person; second, it presents a person-specific list of medicaments, either visually (on screen) or acoustically (via speaker); and third, it raises an alarm if a medicament is taken later or earlier than its scheduled time.
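The medication-alarm logic described above can be sketched in a few lines. This is a minimal, hypothetical illustration: the profile structure, the person ID, the medicament name, and the 30-minute tolerance are all assumptions, not details from the described system.

```python
from datetime import datetime, timedelta

# Hypothetical person-specific profiles keyed by a biometric/NFC ID.
# Structure and values are illustrative only.
PROFILES = {
    "person-42": {
        "output": "speech",  # person-specific interface: "screen" or "speech"
        "medications": {"Aspirin": datetime(2024, 1, 1, 8, 0)},  # scheduled time
    }
}

TOLERANCE = timedelta(minutes=30)  # assumed acceptable deviation


def check_intake(person_id, medicament, taken_at):
    """Return an alarm message if the medicament was taken too early or too late."""
    scheduled = PROFILES[person_id]["medications"][medicament]
    delta = taken_at - scheduled
    if delta > TOLERANCE:
        return "alarm: taken too late"
    if delta < -TOLERANCE:
        return "alarm: taken too early"
    return "ok"
```

The same lookup-by-ID step would also drive the interface adaptation (screen vs. speaker) stored in the profile.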
In the context of existing approaches to cluster computing, we present 'SimpleHydra', a newly developed modular framework for rapid deployment and management of Beowulf clusters. Instead of focusing only on pure computation tasks on homogeneous clusters (i.e. clusters with identically set up nodes), this framework aims to ease the configuration of heterogeneous clusters and to provide a low-level/high-level object-oriented API for low-latency distributed computing. Our framework does not impose any restrictions regarding the hardware and minimizes the use of external libraries to the case of special modules. In addition, it enables the user to develop highly dynamic cluster topologies. We describe the framework's general structure as well as its time-critical elements, give application examples in the 'Big Data' context from a research project, and briefly discuss additional features. Furthermore, we give a thorough theoretical time/space complexity analysis of our implemented methods and general approaches.
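The scatter/reduce idea behind such a cluster framework can be sketched as follows. A thread pool stands in for cluster nodes here; the function names are illustrative and do not reflect the actual SimpleHydra API.

```python
from concurrent.futures import ThreadPoolExecutor


def node_task(chunk):
    """Work unit executed on one 'node': sum a chunk of the data."""
    return sum(chunk)


def distribute(data, n_nodes=4):
    """Split the data into chunks, scatter them across nodes, reduce the results."""
    chunks = [data[i::n_nodes] for i in range(n_nodes)]
    with ThreadPoolExecutor(max_workers=n_nodes) as pool:
        return sum(pool.map(node_task, chunks))
```

In a real heterogeneous cluster, the chunking step would additionally weight chunk sizes by the capabilities of each node.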
In this paper, we describe an efficient method for fast people re-identification based on models of human clothes. An initial model is estimated during people detection and tracking and refined during re-identification. This stepwise extraction, combination and comparison of features speeds up the whole re-identification. For the refinement, several saliency maps are used to extract individual features, which are located separately for each human body part. The body parts are located with an optimized GPU-based HOG detector. Furthermore, we introduce a mean-shift-based fusion concept which utilizes multiple detectors in order to increase detection reliability.
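The per-body-part comparison step can be illustrated with a simple sketch. Real systems use GPU-based HOG detection and saliency maps; here plain per-part colour histograms stand in for the extracted clothing features, and all names and bin counts are assumptions.

```python
def histogram(values, bins=4, lo=0, hi=256):
    """Normalized histogram of pixel values for one body part."""
    h = [0] * bins
    width = (hi - lo) / bins
    for v in values:
        h[min(int((v - lo) / width), bins - 1)] += 1
    total = sum(h) or 1
    return [c / total for c in h]


def similarity(model_a, model_b):
    """Compare per-body-part histograms; 1.0 means identical distributions."""
    score, parts = 0.0, 0
    for part in model_a:  # e.g. "torso", "legs"
        ha, hb = model_a[part], model_b[part]
        score += sum(min(a, b) for a, b in zip(ha, hb))  # histogram intersection
        parts += 1
    return score / parts
```

Because each part is compared separately, a cheap early rejection is possible as soon as one part's score falls below a threshold, which is what makes the stepwise scheme fast.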
Currently, robot assisting systems with emotion understanding abilities for home environments are generally realized in one of two ways. The first implements such systems so that they offer general services for all considered persons, without considering the privacy or special needs of their interaction partners. The second targets such systems at merely one person. In this work, we present a robot assisting system that can assist several persons at the same time while sustaining their privacy and security. The robot interacts with its partner emotionally by analyzing the partner's emotions, expressed either visually (facial expressions) or auditorily (speech prosody). The role of this system is to provide person-specific support in the home environment. To identify its interaction partner, the system uses diverse biometric traits. Based on the recognized ID, the system first adapts to the needs of the recognized person; second, it loads the corresponding emotional profile of the detected interaction partner in order to practice person-specific emotional human-robot interaction, which has an advantage over person-independent interaction.
The development of web-based applications has gained enormous interest in recent years. Most formerly desktop-based applications nowadays provide at least a web-based version or have been completely re-implemented as web-based applications. Nevertheless, from the development point of view, many strategies for developing web-based applications are still borrowed from the development of desktop applications. This paper therefore describes an approach that allows re-using a design pattern well known from desktop application development, with a distinct enhancement for web-based applications.
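The abstract does not name the pattern; as one plausible example of a desktop pattern reused on the web, here is a minimal Model-View-Controller sketch. All class and method names are illustrative, not the paper's actual design.

```python
class Model:
    """Holds application state, independent of any presentation."""
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


class View:
    """Renders the model; a web variant would emit HTML instead of plain text."""
    def render(self, items):
        return "Items: " + ", ".join(items)


class Controller:
    """Maps an incoming (web) request to a model update and a rendered view."""
    def __init__(self, model, view):
        self.model, self.view = model, view

    def handle_add(self, item):
        self.model.add(item)
        return self.view.render(self.model.items)
```

The enhancement for the web typically lies in the view/controller boundary, since requests are stateless and the view is rendered on a remote client.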
Mobile devices, in the form of smartphones, are endowed with rich capabilities in terms of multimedia, sensors and connectivity. The wide adoption of these devices allows using them across different settings and situations. One area in which mobile devices become more and more prominent is the field of mobile learning. Here, mobile devices provide rich possibilities for contextualizing the learner by using the set of sensors available in the device. On the one hand, the usage of mobile devices enables participation in learning activities independent of time and space. On the other hand, developing mobile learning applications for the heterogeneity of mobile devices available on the market is a challenge: this is not only a problem of form factors, but also of the large number of different operating systems, platforms and app infrastructures (app stores) that have to be considered. In this paper, we present our initial efforts towards the development of cross-platform mobile applications to support the contextualization of learning content.
The mathematical competence of first-year students is an important success factor, at least for technical studies. As a significant percentage of students do not have sufficient mathematical skills, universities often use blended learning courses to build up these skills prior to the start of studies. Due to the diversity of students and their educational backgrounds, individual strategies are needed to achieve the competence necessary for successfully managing their studies. This paper describes our approach at the University of Applied Sciences Ruhr West, where we use personalized blended learning concepts based on the measurement of individual mathematical competences at the beginning of a coaching process. This measurement is used to achieve a better match between the individual learner level and the adapted learning concepts. We combine individual presence learning groups with a personalized e-learning environment that is adapted to the mathematical skills of each student. It uses individual learning advice, short-term visual feedback and up-to-date e-learning material in a Moodle-based LMS (learning management system). The coaching concept is confirmed by the results of summative and formative evaluations.
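The matching step, from a diagnostic competence score to an adapted learning path, can be sketched as a simple threshold lookup. The thresholds and material names below are invented for illustration and are not the actual levels used in the described coaching process.

```python
# Assumed mapping from normalized diagnostic score (0.0-1.0) to learning path.
LEVELS = [
    (0.0, "basic algebra refresher"),
    (0.5, "standard pre-calculus course"),
    (0.8, "advanced bridging material"),
]


def recommend(score):
    """Pick the learning path with the highest threshold the score reaches."""
    chosen = LEVELS[0][1]
    for threshold, material in LEVELS:
        if score >= threshold:
            chosen = material
    return chosen
```

In the Moodle-based environment this selection would drive which course modules and feedback elements are shown to the individual student.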
By adapting the mathematics qualification measures in the introductory study phase to the individual competences of first-year students, the individual fit of the measures is increased and high learning progress is achieved. This leads to a substantial improvement of the entry qualification in mathematics and to a homogenization of the students' proficiency.
The use of Web Services in modern software development is widely accepted and provides (integrated into a suitable architecture) a fast, flexible and scalable way to implement modern software products. On the other hand, the development of mobile applications, so-called apps, is becoming more and more important. While consuming Web Services from mobile devices is an already accepted scheme in the development of mobile apps, little work has been done on providing Web Services on mobile devices. This paper therefore presents a new perspective on Web Services that can run on mobile devices and thereby become mobile Web Services.
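A service hosted on the device itself can be sketched as a minimal WSGI application; the endpoint, the JSON payload and the battery example are assumptions for illustration, since the abstract does not describe a concrete service.

```python
import json


def battery_service(environ, start_response):
    """Answer a status request; on a real device this would read local sensors."""
    if environ.get("PATH_INFO") == "/status":
        body = json.dumps({"service": "battery", "level": 0.8}).encode()
        start_response("200 OK", [("Content-Type", "application/json")])
        return [body]
    start_response("404 Not Found", [("Content-Type", "text/plain")])
    return [b"not found"]
```

The same callable could be served on the device with the standard library's `wsgiref.simple_server`, making the phone the provider rather than only the consumer of the service.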
One of the most pressing challenges in our society is demographic change. People are living longer, while at the same time fewer young people are available to support the elderly. This fact already has a number of social impacts that need to be addressed in the near future. This paper concentrates on the integration of mobile devices into scenarios that allow elderly people to age successfully. Here, the term "aging successfully" refers to a broad range of aspects, from the health to the social life of elderly people. A special focus of this paper lies on the question whether services deployed to a mobile device provide advantages in the area of aging successfully. To answer this question, technical challenges are explained and solved by example architectures, and scenarios that benefit from services deployed to mobile devices are described.
Pedestrian movement analysis at airports: video-based analysis across multiple camera systems (2013)
The development of automotive HMI proceeds in ever shorter cycles. Nevertheless, it is hard to foresee what the future of automotive HMIs will look like. In an expert workshop, various future scenarios for 5, 10 and 20 years from now were elaborated on the basis of cockpit sketches. Three different personas, based on prototypical customers, served as guidance.
Applications and research efforts in Mobile Learning constitute a growing field in the area of Technology Enhanced Learning. However, despite a permanent increase in mobile internet accessibility and availability of mobile devices over the past years, a mobile learning environment that is easy to use, widely accepted by teachers and learners, uses widespread off-the-shelf software, and covers various application scenarios and mobile devices is not yet available. In this paper, we address this issue by presenting an approach and technical framework called "Mobile Contributions" ("MoCo"). MoCo supports learners in creating and sending contributions through various channels (including third-party solutions like Twitter, SMS and Facebook), which are collected and stored in a central repository for processing, filtering and visualization on a shared display. A set of different learning and teaching scenarios that can be realized with MoCo is described, along with first experiences and insights gained from qualitative and quantitative evaluation.
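The collect-filter-visualize pipeline can be sketched as a channel-agnostic repository. The class, field names and filter predicate below are illustrative assumptions, not the actual MoCo implementation.

```python
class ContributionRepository:
    """Central store for contributions arriving from heterogeneous channels."""

    def __init__(self):
        self.entries = []

    def submit(self, channel, author, text):
        # Every channel (Twitter, SMS, Facebook, ...) is normalized to one record.
        self.entries.append({"channel": channel, "author": author, "text": text})

    def filtered(self, predicate):
        """Filtering step applied before visualization on the shared display."""
        return [e for e in self.entries if predicate(e)]
```

Normalizing all channels to one record format is what lets the display layer stay independent of how a contribution arrived.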
The Desire project aimed at the development and implementation of a mobile service robotics research platform (technology platform) able to handle real-world service robotics scenarios. Different modules for different tasks, plus an interaction infrastructure, were integrated on this platform. An example of a real-world scenario is supporting a handicapped person in cleaning up a kitchen in a home environment.
One of the main challenges to be solved in this field is the interaction with people. To start an interaction process between a robot and a person, the most important information is the identity of the interaction partner and whether that partner is present. This means the robot must be able to detect and finally identify persons. Accurate identification of specific individuals is done by analyzing the individual features of each person. A typical feature set that allows a distinct identification of a specific person is often extracted from a facial image acquired by a camera. This feature set is stored in a database to allow the identification of different persons independent of place and time by comparing given feature sets. Thus, a face recognition module comprising face detection and identification algorithms was integrated into the technology platform.
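The identification step, comparing an acquired feature set against stored ones, can be sketched as a nearest-neighbour lookup with a rejection threshold. The feature vectors, person names and threshold value are invented for illustration.

```python
import math

# Assumed database of stored facial feature sets, keyed by person ID.
DATABASE = {
    "alice": [0.1, 0.9, 0.3],
    "bob":   [0.8, 0.2, 0.7],
}


def identify(features, threshold=0.5):
    """Return the ID of the closest stored person, or None if nobody is close."""
    best_id, best_dist = None, float("inf")
    for person_id, stored in DATABASE.items():
        dist = math.dist(features, stored)  # Euclidean distance between feature sets
        if dist < best_dist:
            best_id, best_dist = person_id, dist
    return best_id if best_dist <= threshold else None
```

Returning `None` for distant vectors is the "partner not present / unknown" case the robot must handle before starting an interaction.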
In recent years, a new approach for the dynamic usage of computational power, memory and other resources has come into play: the Cloud Computing paradigm. This new approach needs to be considered from an IT Service Management perspective, since cloud-based infrastructures have to be managed differently from a usual infrastructure. Based on the IT Infrastructure Library (ITIL), the de-facto standard for IT Service Management, this paper discusses which processes are particularly affected if a certain service is to be deployed in the cloud.