Refine
Year of publication
- 2014 (23)
Document Type
- Conference Proceeding (16)
- Article (4)
- Part of a Book (1)
- Doctoral Thesis (1)
- Preprint (1)
Language
- English (23)
Has Fulltext
- no (23)
Is part of the Bibliography
- no (23)
Keywords
- Halberzeugnis (semi-finished product) (1)
- Hochtemperatur (high temperature) (1)
- Inprozesskontrolle (in-process control) (1)
- Rundstahl (round steel) (1)
- Warmwalzen (hot rolling) (1)
Currently, robot assistance systems with emotion-understanding abilities for home environments are generally realized in one of two ways. The first is to implement such systems so that they offer general services to all considered persons, without accounting for the privacy or the special needs of their interaction partners. The second is to target such systems at a single person only. In this work we present a robot assistance system that can assist several persons at the same time while preserving their privacy and security. The robot interacts with its interaction partner emotionally by analyzing the partner's emotions, expressed either visually (facial expression) or auditively (speech prosody). The role of this system is to provide person-specific support in the home environment. To identify its interaction partner, the system uses several biometric traits. Based on the recognized identity, the system first adapts to the needs of the recognized person. Second, it loads the corresponding emotional profile of the detected interaction partner in order to realize person-specific emotional human-robot interaction, which has an advantage over person-independent interaction.
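The person-specific step described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; all names (`EmotionProfile`, `interpret_emotion`, the profile fields) are assumptions made for the example. The idea is that the same raw emotion score means different things for different people, so the system normalises it against the profile loaded for the recognized identity.

```python
# Minimal sketch (hypothetical names) of loading a person-specific emotional
# profile by recognized identity and using it to interpret a raw emotion cue.

from dataclasses import dataclass, field


@dataclass
class EmotionProfile:
    """Per-person calibration of the emotion-recognition channels."""
    name: str
    # Baseline intensities observed for this person, used to normalise the
    # visual (facial expression) and auditive (speech prosody) channels.
    baselines: dict = field(default_factory=lambda: {"visual": 0.5, "audio": 0.5})


# Profiles keyed by the ID produced by the biometric identification step.
PROFILES = {
    "user-001": EmotionProfile("Alice", {"visual": 0.7, "audio": 0.4}),
    "user-002": EmotionProfile("Bob",   {"visual": 0.3, "audio": 0.6}),
}


def interpret_emotion(person_id: str, channel: str, raw_score: float) -> float:
    """Normalise a raw emotion score against the person's own baseline."""
    profile = PROFILES[person_id]
    return raw_score - profile.baselines[channel]


# The same raw visual score of 0.7 is neutral for Alice but clearly above
# baseline for Bob -- this is the advantage over person-independent handling.
print(interpret_emotion("user-001", "visual", 0.7))  # 0.0
print(interpret_emotion("user-002", "visual", 0.7))
```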
This paper presents a web-based framework for creating and deploying mobile learning activities. We present an authoring tool that allows non-technically skilled persons to design mobile learning tasks and deploy them as web-based mobile applications. Since the presented approach is based exclusively on web technologies, the deployed mobile application can be executed in a mobile browser and is therefore platform independent. Despite previous research efforts in this domain, few projects have addressed this course of action from a purely web-based perspective. Through the latest developments in web technologies, mobile applications have access to internal sensors such as the camera, microphone, and GPS, which allows data collection within web applications. To validate whether the proposed framework can be applied in educational settings, we conducted a pilot study with experienced teachers and present its results in this paper.
Mobile devices are nowadays used almost ubiquitously by a large number of users. 2013 was the first year in which the number of mobile devices sold (tablet computers and mobile phones) exceeded the number of PCs sold, and this trend seems set to continue in the coming years. Additionally, the scenarios in which these devices are used grow almost day by day. Another trend in modern IT landscapes is Cloud Computing, which basically allows for a very flexible provision of computational services to customers. Yet these two trends are not well connected. Of course, there already exists quite a large number of mobile applications (apps) that utilize Cloud Computing based services. The other way round, however, using mobile devices as one of the building blocks for the provision of Cloud Computing based services is not well established yet. This paper therefore concentrates on an extension of a technology that provides standardized Web Services, one of the building blocks of Cloud Computing, on mobile devices. The extension consists of a new approach that now also allows asynchronous Web Services, in contrast to synchronous ones, to be provided on mobile devices. Additionally, the paper illustrates how the described technology has already been used in an app provided by a business partner.
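The synchronous/asynchronous distinction the abstract draws can be illustrated with a generic submit-and-poll pattern: instead of blocking until the result is ready, the service immediately returns a job handle that the client polls later. This is a hedged sketch of the general pattern only; the function names (`submit`, `poll`, `slow_square`) are illustrative and not taken from the paper's actual API.

```python
# Generic asynchronous-service sketch: submit returns a job id immediately,
# the computation runs in the background, and the client polls for the result.

import threading
import time
import uuid

_results = {}
_lock = threading.Lock()


def submit(task, *args):
    """Start the task in the background and return a job id at once."""
    job_id = str(uuid.uuid4())

    def run():
        value = task(*args)
        with _lock:
            _results[job_id] = value

    threading.Thread(target=run).start()
    return job_id


def poll(job_id):
    """Return the result if finished, else None (client retries later)."""
    with _lock:
        return _results.get(job_id)


def slow_square(x):
    time.sleep(0.1)  # stands in for an expensive computation
    return x * x


job = submit(slow_square, 7)           # returns immediately, no blocking
while (result := poll(job)) is None:   # a real client would back off here
    time.sleep(0.02)
print(result)  # 49
```

On a resource-constrained mobile device this decoupling matters: the request handler returns at once, so neither the network connection nor the caller is held open while the work runs.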
In this paper, we describe an efficient method for fast people re-identification based on models of human clothing. An initial model is estimated during people detection and tracking and is refined during re-identification. This stepwise extraction, combination, and comparison of features speeds up the whole re-identification process. For the refinement, several saliency maps are used to extract individual features. These individual features are extracted separately for each human body part. The body parts are located with an optimized GPU-based HOG detector. Furthermore, we introduce a mean-shift-based fusion concept that utilizes multiple detectors in order to increase detection reliability.
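The mean-shift-based fusion of multiple detectors can be sketched as follows. This is an illustrative toy version, not the paper's implementation: detections are reduced to 2-D centre points, and overlapping hypotheses from different detectors that stem from the same person converge to a common mode, while isolated hypotheses remain separate. The bandwidth value and the merging rule are assumptions made for the example.

```python
# Toy mean-shift fusion of detection centres from multiple detectors:
# each point is iteratively shifted towards the Gaussian-weighted mean of
# its neighbours, and points that converge to the same mode are merged.

import math


def mean_shift_fuse(points, bandwidth=30.0, iters=20):
    """Fuse nearby detection centres (x, y) into modes via mean shift."""
    shifted = [list(p) for p in points]
    for _ in range(iters):
        for p in shifted:
            wsum = wx = wy = 0.0
            for q in points:
                d2 = (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
                w = math.exp(-d2 / (2 * bandwidth ** 2))  # Gaussian kernel
                wsum += w
                wx += w * q[0]
                wy += w * q[1]
            p[0], p[1] = wx / wsum, wy / wsum
    # Merge converged points closer than the bandwidth into one detection.
    modes = []
    for p in shifted:
        if all((p[0] - m[0]) ** 2 + (p[1] - m[1]) ** 2 > bandwidth ** 2
               for m in modes):
            modes.append(p)
    return modes


# Three detectors fire around one person (near 100,100); one isolated
# hypothesis lies far away and survives as its own mode.
hits = [(98, 101), (103, 99), (100, 102), (400, 400)]
print(len(mean_shift_fuse(hits)))  # 2
```

Agreement between detectors thus reinforces a single fused detection instead of producing duplicate, slightly offset boxes.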
In recent years, teachers have started to conduct pedagogical activities that promote different kinds of learning interactions supported by rich media. The deployment of such activities is increasing rapidly, as teachers and students own the technological means to support them. These activities can be carried out in traditional classroom settings using regular computers; they can also be conducted anywhere, at any time, using smartphones and tablets. In this paper, we describe a pedagogical activity that requires students to author and later peer-assess learning interactions incorporated into YouTube videos. We describe EDU.Tube, an environment that enables them to create, share, and consume such rich-media learning activities across a variety of devices. We then detail a plan for the implementation of an activity that took place in three different classes dealing with diverse materials on computer-science-related topics. Finally, we provide an evaluation presenting the students' insights and feedback resulting from the activity. We discuss and analyze these outcomes in order to derive considerations for the further deployment of the EDU.Tube environment.
With the introduction of Apple’s iPhone, gesture control became popular and was perceived as an intuitive means of interaction. Contactless gestures received broad attention with the Xbox Kinect. Current technology is limited to a small number of uses, mainly in entertainment systems. The target of this project is to broaden the range of possible applications, e.g. to automotive and industrial applications (manufacturing plants), assisted living in contexts ranging from private households to hospitals (interaction for people with disabilities), and many more.
In the context of existing approaches to cluster computing, we present `SimpleHydra', a newly developed modular framework for the rapid deployment and management of Beowulf clusters. Instead of focusing only on pure computation tasks on homogeneous clusters (i.e. clusters with identically set up nodes), this framework aims to ease the configuration of heterogeneous clusters and to provide a low-level/high-level object-oriented API for low-latency distributed computing. Our framework imposes no restrictions on the hardware and confines the use of external libraries to special modules. In addition, it enables the user to build highly dynamic cluster topologies. We describe the framework's general structure as well as its time-critical elements, give application examples from a `Big Data' research project, and briefly discuss additional features. Furthermore, we give a thorough theoretical time/space complexity analysis of our implemented methods and general approaches.
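What "heterogeneous cluster" support implies can be sketched conceptually: the framework must track per-node capabilities and dispatch work only to nodes that satisfy a task's requirements. This is not SimpleHydra's real API (which the abstract does not show); the `Node`/`Cluster` names, the tag mechanism, and the selection rule are all assumptions made for the illustration.

```python
# Conceptual sketch of capability-aware node selection in a heterogeneous
# cluster: nodes advertise tags (e.g. "gpu"), tasks state requirements, and
# the scheduler picks an eligible node. Topologies can change at runtime.

from dataclasses import dataclass


@dataclass
class Node:
    host: str
    cores: int
    tags: frozenset  # capabilities; nodes need not be identically set up


class Cluster:
    def __init__(self):
        self.nodes = []

    def add(self, node):
        """Register a node; may be called at any time (dynamic topology)."""
        self.nodes.append(node)

    def pick(self, needs=frozenset()):
        """Choose the biggest node offering all required capabilities."""
        eligible = [n for n in self.nodes if needs <= n.tags]
        return max(eligible, key=lambda n: n.cores, default=None)


cluster = Cluster()
cluster.add(Node("node-a", 8, frozenset()))
cluster.add(Node("node-b", 16, frozenset({"gpu"})))

print(cluster.pick(frozenset({"gpu"})).host)  # node-b
print(cluster.pick().host)                    # node-b (most cores)
```

A homogeneous-cluster framework can hard-code the assumption that every node is interchangeable; the tag check above is precisely what that assumption removes.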