One of the most hyped topics in IT today is the Cloud Computing paradigm, which has emerged in recent years as a model for the dynamic use of computational power, memory and other computing resources. The author strongly believes that Cloud Computing has the potential to outlast the hype and become an established technology for the provision of IT-based services. It will therefore be necessary to deploy Cloud Computing based infrastructures in a professional, stable and reliable way, which in turn means that the paradigm must be considered from the perspective of IT Service Management, since cloud-based infrastructures have to be managed differently from conventional ones. Taking the IT Infrastructure Library (ITIL), the de-facto standard for IT Service Management, as its basis, this paper discusses whether this standard can also be applied to the management of Cloud Computing based infrastructures, how the corresponding processes might change, and whether ITIL supports a division of labor between the customer and the service provider of a Cloud Computing based infrastructure.
Until now, virtual servers at LDS NRW have been used primarily with a view to consolidating simple and very simple systems that did not require dedicated server hardware.
VMware now offers functionality that, beyond the consolidation idea, opens up highly interesting options for a wide range of individual customer requirements. This extends from flexible, inexpensive and simple
systems up to server platforms with high demands on performance and availability.
The term “Cloud Computing” does not primarily denote new core technologies but rather addresses features related to integration, interoperability and accessibility. Although not new, virtualization and automation are core features that characterize Cloud Computing. In this paper, we explore the possibility of integrating cloud services into educational scenarios without redefining either the technology or the usage scenarios from scratch. Our suggestion is based on solutions that have already been implemented and tested for specific cases.
Background:
Detecting influential actors in social media such as Twitter or Facebook can play a major role in gathering opinions on particular topics, improving marketing efficiency, predicting trends, and so on.
Proposed methods:
This work aims to extend our formally defined T measure into a new measure that recognizes an actor's influence by the strength with which he or she attracts new important actors into a networked community. To this end, we propose a model of an actor's influence based on the attractiveness of the actor in relation to the number of other attractors with whom he/she has established connections over time.
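As a hedged illustration only (this is not the paper's formally defined T measure or its extension), the following Python sketch scores each actor by the number and recency of other actors it has drawn into the community; the input format, decay factor, and all names are assumptions made for this example.

```python
from collections import defaultdict

def attractiveness_scores(edges, base_weight=1.0, decay=0.9):
    """Illustrative sketch: an actor's score grows with the number and
    recency of newcomers it has attracted, and attracting actors who are
    themselves attractive contributes extra weight.

    `edges` is an iterable of (attractor, newcomer, step) triples meaning
    `newcomer` joined the community via `attractor` at time `step`
    (an assumed input format, not the paper's data model)."""
    scores = defaultdict(float)
    if not edges:
        return dict(scores)
    last_step = max(step for _, _, step in edges)
    for attractor, newcomer, step in sorted(edges, key=lambda e: e[2]):
        # Recent attractions count more via exponential decay.
        recency = decay ** (last_step - step)
        scores[attractor] += recency * (base_weight + scores[newcomer])
    return dict(scores)

# Toy usage: actor "a" attracts "b" and "c"; "b" later attracts "d".
edges = [("a", "b", 0), ("a", "c", 1), ("b", "d", 2)]
print(attractiveness_scores(edges))
```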
Results and conclusions:
Using an empirically collected social network as the underlying graph, we apply the above-mentioned measure of influence to determine optimal seeds in an influence-maximization simulation. We study our extended measure in the context of information diffusion because it is based on a model of actors who attract others to become active members of a community. This corresponds to the idea of the independent cascade (IC) simulation model, which is used to identify the most important spreaders in a set of actors.
Keywords: Actor influence, Social media networks, Twitter, IC model, Information
diffusion, Independent cascade model, T measure
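Since the abstract refers to the independent cascade (IC) model for identifying spreaders, a minimal Monte Carlo sketch of that model is given below; the graph format, activation probability, and function names are assumptions for illustration and do not reproduce the paper's experimental setup.

```python
import random

def independent_cascade(graph, seeds, p=0.1, rng=random):
    """One IC run: each newly activated node gets a single chance to
    activate each inactive neighbor with probability p.
    `graph` maps a node to a list of its neighbors (assumed format)."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        next_frontier = []
        for node in frontier:
            for neighbor in graph.get(node, []):
                if neighbor not in active and rng.random() < p:
                    active.add(neighbor)
                    next_frontier.append(neighbor)
        frontier = next_frontier
    return len(active)

def estimate_spread(graph, seeds, p=0.1, runs=1000):
    """Average size of the activated set over repeated IC runs."""
    return sum(independent_cascade(graph, seeds, p) for _ in range(runs)) / runs

# Toy usage: rank candidate seeds by estimated spread on a small graph.
graph = {"a": ["b", "c"], "b": ["d"], "c": ["d", "e"], "d": [], "e": []}
print(sorted(graph, key=lambda s: estimate_spread(graph, [s], p=0.3), reverse=True))
```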
In this paper we present an approach for contextual big data analytics in social networks, particularly Twitter. A Rich Context Model (RCM) is combined with machine learning in order to improve the quality of the data mining techniques. We propose the algorithm and architecture of our approach for real-time contextual analysis of tweets. The proposed approach can be used to enrich and empower predictive analytics or to provide relevant context-aware recommendations.
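As a hedged sketch of the general pattern described (context features merged with tweet text features before they reach a model), the snippet below shows one possible enrichment step; the `RichContext` fields, feature names, and overall shape are illustrative assumptions, not the paper's RCM or its architecture.

```python
from dataclasses import dataclass

@dataclass
class RichContext:
    # Illustrative context dimensions only; an actual RCM would be richer.
    location: str
    time_of_day: str
    user_followers: int

def featurize(tweet_text, context):
    """Merge simple text features with context features so that a
    downstream classifier or recommender sees both (sketch only)."""
    tokens = tweet_text.lower().split()
    return {
        "num_tokens": len(tokens),
        "has_url": any(t.startswith("http") for t in tokens),
        "ctx_location": context.location,
        "ctx_time_of_day": context.time_of_day,
        "ctx_followers_digits": len(str(max(context.user_followers, 1))),
    }

# Toy usage: the enriched feature dict would feed the real-time pipeline.
ctx = RichContext(location="Berlin", time_of_day="evening", user_followers=3400)
print(featurize("Traffic jam on A100 http://example.com", ctx))
```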
Technology that inspires
(2016)
The Bitcoin whitepaper states that the security of the system is guaranteed as long as honest miners control more than half of the current total computational power. The whitepaper assumes a static difficulty, so that solving a cryptographic proof-of-work puzzle is equally hard at any moment in the system's history. The real Bitcoin network, however, uses an adaptive difficulty adjustment mechanism. In this paper we introduce and analyze a new kind of attack, called the coin-hopping attack, on the mining difficulty retargeting function used in Bitcoin. A malicious miner increases his mining profits through this attack and, as a side effect, the average delay between blocks increases. We propose an alternative difficulty adjustment algorithm that reduces the incentive to perform coin-hopping and also improves the stability of inter-block delays. Finally, we evaluate the presented approach and show that the novel algorithm performs better than the original Bitcoin algorithm.
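For readers unfamiliar with the retargeting function under attack, the sketch below mirrors Bitcoin's standard rule: difficulty is rescaled by the ratio of the expected epoch duration to the measured duration of the last 2016 blocks, clamped to a factor of four in either direction. This is a simplified sketch of the reference rule the paper analyzes; the alternative algorithm proposed in the paper is not reproduced here.

```python
TARGET_INTERVAL = 600            # seconds per block (10 minutes)
RETARGET_WINDOW = 2016           # blocks per difficulty epoch
EXPECTED_TIMESPAN = TARGET_INTERVAL * RETARGET_WINDOW

def retarget_difficulty(old_difficulty, actual_timespan):
    """Bitcoin-style difficulty retargeting (simplified): scale difficulty
    by expected/actual epoch duration, limited to a factor of 4."""
    # Clamp the measured timespan to [expected/4, expected*4].
    actual_timespan = max(EXPECTED_TIMESPAN // 4,
                          min(actual_timespan, EXPECTED_TIMESPAN * 4))
    return old_difficulty * EXPECTED_TIMESPAN / actual_timespan

# Toy usage: if the last 2016 blocks took only half the expected time
# (e.g. because a coin-hopping miner added hashrate), difficulty doubles.
print(retarget_difficulty(1_000_000, EXPECTED_TIMESPAN // 2))
```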