
CloudSource: Cloud computing & open source

Issue 2 - October 2013





Editorial Board

Prof. Alex Delis, SUCRE Coordinator, National & Kapodistrian University of Athens, Greece
Dr. Norbert Meyer, Head of the Supercomputing Department at the Poznan Supercomputing Center, Poland
Prof. Dr. Keith Jeffery, President of ERCIM, U.K.
Dr. Yuri Glikman, OCEAN project Coordinator, Fraunhofer Institute, Germany
Dr. Toshiyasu Ichioka, EU-Japan Centre for Industrial Cooperation, manager of the FP7 project JBILAT, Japan
Mrs. Cristy Burne, Scientific Editor and Journalist, Australia

Coordination by Giovanna Calabrò, Zephyr s.r.l., Italy, and Mrs. Eleni Toli, National & Kapodistrian University of Athens, Greece. This publication is supported by EC funding under the 7th Framework Programme for Research and Technological Development (FP7). This magazine has been prepared within the framework of the FP7 SUCRE (SUpporting Cloud Research Exploitation) project, funded by the European Commission (contract number 318204). The views expressed are those of the authors and the SUCRE consortium and are, under no circumstances, those of the European Commission and its affiliated organizations and bodies. The project consortium wishes to thank the Editorial Board for its support in the selection of the articles, the DG CONNECT Unit E.2 – Software & Services, Cloud of the European Commission, and all the authors and projects for their valuable articles and inputs.



Table of Contents

Editorial Board
Goodbye to legacy software - ARTIST changing frumpy to glamorous
Open cloud software applications for the public sector
Education in the cloud
Cloud-sourcing: Positioning the cloud for disaster relief scenarios
Azure: designing modern applications using a hybrid cloud approach
MODAClouds: Model-driven engineering for the clouds
Synnefo: A Complete Open Source Cloud Stack
CELAR: Automatic, multi-grained elasticity provisioning for the cloud
The PaaSage project: the cloud was the limit
News & Events
Related International Events

Goodbye to legacy software - ARTIST changing frumpy to glamorous

Clara Pezuela, Research and Innovation Group, Atos Spain SA, & the ARTIST project consortium

Being stuck with ages-old software is a constant headache. It's clunky, expensive to run and never quite works like it should. Worse, most legacy applications are unsuited to running on the cloud, and replacing them isn't easy: migration can disrupt business performance, continuity and service offerings. Yet traditional software and service providers must adapt to the new reality of the cloud, without disrupting business continuity for their customers.




Reverse-engineer, forward-engineer

Until now, legacy applications could only watch as their modern counterparts whizzed by, scaling up and down on demand, roaming the internet, hosted from sleek and swanky data centers. Now, the EC-supported ARTIST project proposes a set of methods and tools that, like the flick of a wand, can reverse-engineer legacy apps to a meta-model version and then forward-engineer them to the desired platform. Remodelled applications can then take advantage of the latest technologies, including cloud computing, smartphones and security features. But the decision to remodel isn't always that simple. Companies must decide whether to migrate existing solutions, which represent significant prior investment, or whether to start from scratch. Moreover, they must do so in an environment where time-to-market is critical.

Working with ARTIST

"The ARTIST project applies the latest scientific knowledge to a critical business issue that many European companies are currently facing," said Clara Pezuela, the ARTIST project coordinator. "We aim to help companies evaluate whether their applications can be migrated to a cloud environment at a reasonable cost. If migration is possible, we'll provide the tools to achieve it.

"We're committed to helping businesses revitalise the thousands of legacy applications that aren't being used optimally."

Reduce software costs by 50%

Approximately 90% of software costs can be attributed to post-installation support, yet legacy applications rarely achieve the performance levels of more modern solutions. According to its partners, ARTIST will reduce costs by 50% when compared to traditional manual migration methods, which will make it possible to implement more frequent migration programmes to better and more cost-efficient platforms. ARTIST's software modernisation approach is based on model-driven engineering techniques, aiming to help with re-engineering legacy applications to platform-independent models suited to cloud computing. This will significantly reduce the risk, time and cost of software migration, which today represent major barriers for organisations wanting to take advantage of cloud-based technologies.
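The reverse-then-forward flow can be pictured with a toy sketch in Python. Everything here, including the model classes, the stateful/stateless rule and the "cloud-x" target name, is invented for illustration and is not the ARTIST metamodel or toolchain:

```python
from dataclasses import dataclass, field

# Hypothetical platform-independent model (PIM), not the ARTIST metamodel.
@dataclass
class Component:
    name: str
    stateful: bool

@dataclass
class AppModel:
    components: list = field(default_factory=list)

def reverse_engineer(legacy_descriptor: dict) -> AppModel:
    """Lift a legacy deployment descriptor into a platform-independent model."""
    model = AppModel()
    for name, props in legacy_descriptor.items():
        model.components.append(Component(name, props.get("writes_local_disk", False)))
    return model

def forward_engineer(model: AppModel, target: str) -> dict:
    """Project the model onto a target platform: stateful parts need attached storage."""
    plan = {}
    for comp in model.components:
        plan[comp.name] = {
            "platform": target,
            "service": "vm_with_volume" if comp.stateful else "stateless_worker",
        }
    return plan

legacy = {"billing": {"writes_local_disk": True}, "web_ui": {}}
plan = forward_engineer(reverse_engineer(legacy), "cloud-x")
```

The point of the intermediate model is that the same `AppModel` could be forward-engineered to any number of target platforms without re-analysing the legacy code.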

www.artist-project.eu



Large-scale topographical GIS database for the Hellenic Prefectural urban planning authorities

Dimitrios Charalambakis, SingularLogic, Greece

One of the greatest challenges that governments in the 21st century face is that of efficiently and cost-effectively keeping up with public demand. Increasingly, councils are under pressure to efficiently plan and manage resources, while keeping the public informed and opening the way for their participation. In response to these challenges, the Hellenic Ministry of Interior Affairs initiated a large-scale project, entitled "Electronic Urban Planning: A Geographic Information System (GIS) for the Prefectural Urban Planning Authorities." The project aimed to introduce a private cloud-based GIS database for planning, monitoring and managing urban planning and land development for the 185 Prefectural urban planning authorities across Greece. SingularLogic was appointed to develop the system, tasked with providing the engineers and planners of the Prefectural urban planning offices – and the general public – with accurate, up-to-date information on urban planning legislation, and the tools to organise and manipulate this information accordingly.




Why geographic information systems?

Until recently, cartography and GIS had little to do with urban development: why invest a good deal of money in cartography, if the results are useful only to a handful of specialists, and of no direct benefit to internal management or public relations? Now, with evolving technology and improved information management, the answer is clear. Regardless of the size of an urban development authority, land development planners must deal with a great volume of information: land-use data, addresses, transportation networks, housing, land acquisitions, accounts, and so on. A planner must study and track multiple urban and regional indicators, forecast future community needs, and plan strategically to guarantee quality of life for the community. By harnessing the power of a tailored GIS, urban planning authorities can more efficiently plan and develop, more rapidly identify and respond to problems, and more effectively share outcomes with citizens. Further, basing a GIS in the cloud enables urban planners and the general public to participate in the process.

Doing more and spending less

Previously, the Hellenic urban planning lifecycle was predominantly manual. Information was retrieved using different registers and record books. The process was tedious and time-consuming, leading to less objective decision-making. By implementing a private cloud-based GIS solution, the Ministry of Interior Affairs improved efficiency by:

- automating tasks,
- enabling prompt decision-making,
- optimising information retrieval, and
- providing on-time, accurate and complete data for decision-making.

As part of the project, the project team:

- re-engineered the processes and procedures of the Urban Planning Offices,
- developed a private cloud-based GIS,
- designed a GIS database containing current legislation, and
- implemented a portal to provide value-added services and information to the citizens.

Value-added services for citizens

The GIS system has been implemented over the governmental backbone (2500+ internal users) and the internet (open to external users) via a private cloud. It is now used by town planners and central administration to automate the day-to-day functioning of all departments and offices. Users can view maps on-screen, and can combine, manage and analyse geographically referenced data to support their decision-making, all via the intranet or internet. The success of this project has enabled Hellenic urban planning authorities to improve resource management and urban planning, enhance stakeholder communication, and update their geographic database.
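As a flavour of the kind of geographically referenced query such a system answers, here is the standard ray-casting point-in-polygon test in Python. The "planning zone" coordinates are made up for illustration; a production GIS would delegate this to a spatial database:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: cast a horizontal ray from (x, y) and count how many
    polygon edges it crosses; an odd count means the point is inside."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

zone = [(0, 0), (4, 0), (4, 4), (0, 4)]  # a hypothetical square planning zone
```

A query like "which land parcels fall inside this zoning boundary?" is, at bottom, many such tests run against indexed geometry.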

The end result? The government has improved its services and dramatically cut costs.




Figure 1: Virtualised system architecture of the cloud-based GIS solution (citizens, Citizen Service Offices and the Ministry of Interior/Prefectures reach the system over the internet and government backbone; web, GIS application, application and database layers run on load-balanced/failover clusters and nPar partitions, backed by a directory/backup layer with LDAP servers, SAN switches, SAN-1/SAN-2 storage arrays, a tape library and a disaster site).



Open cloud software applications for the public sector

Androklis Mavridis, Athanasios Soulakis, and Spyros Skolarikis, B.open Open Business Software Systems Ltd, Greece

In an era of economic crisis, the public sector is increasingly faced with budget cuts, and must offer more with ever less. Strategic thinking, and a deep knowledge of an organisation's operations, constraints, capacities, ethics and political agenda, are required if IT investments are to flourish and catalyse cost reductions and operational improvements.




Why cloud?

If the overriding goal is to maintain operational standards while squeezing resources, then cloud technology is the best-fit solution for public sector organisations (PSOs). Exploiting clouds will help PSOs acquire new capacities and improve collaboration and awareness, all in a secure manner. To achieve this, PSOs need adaptable, extendable cloud solutions that will enable service integration and management, and accelerate development and deployment. It is exactly this need that b.Open Ltd covers with its jPlaton software application server and Comidor cloud application suite.

jPlaton Integrated Design, Development and Runtime Environment platform

Since its inception in 2004, jPlaton has evolved into an integrated design, development and runtime environment platform for distributed enterprise applications, tailored to cloud software application development. jPlaton is independent of operating systems, databases, system architectures and underlying technologies. Any application built on jPlaton contains only plain XML files (no binaries at all), can be installed on Windows, Linux and Mac, and can operate on relational database management systems, like MySQL, Oracle, SQL Server and so on. The innovation lies in jPlaton's open, multi-layered, distributed architecture, which encourages collaborative software development: any number of developers can work on the same software project, upgrading, modifying, extending and integrating as required. jPlaton takes modular architecture a step further: a typical application consists of multiple parts, called program units. All the functionality of a program unit is contained in XML files that describe its objects and procedures, resulting in a multi-layered, homocentric environment. Any layer can be used to add new functionality, or to update or delete existing functionality in its inner layers. The number and nature of the layers is tailored to a specific application. This completely open and transparent architecture permits the flow of information between layers, facilitating integration, and its distributed multi-layered nature allows evolution and customization, all while preserving the inner (core) layers.

Figure: jPlaton's homocentric layer model (jPlaton, platform and system layers, with packages 1-4 assigned to user groups).

At execution time, information on a specific program unit is automatically collected and assembled, as per its specific installation and user settings.
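That layered assembly can be pictured as a simple merge, with outer layers overriding or extending inner ones. The layer names and the dictionary-of-strings format below are illustrative stand-ins, not jPlaton's actual XML schema:

```python
# Illustrative only: each "layer" maps feature names to implementations,
# mirroring the homocentric layering described above.
CORE = {"invoice.print": "core_printer", "invoice.save": "core_save"}
PLATFORM = {"invoice.print": "pdf_printer"}   # platform layer overrides printing
CUSTOMER = {"invoice.email": "smtp_sender"}   # customer layer adds a feature

def assemble(*layers):
    """Merge layers inner-to-outer; later (outer) layers win, so customisations
    shadow core behaviour without modifying the preserved inner layers."""
    unit = {}
    for layer in layers:
        unit.update(layer)
    return unit

unit = assemble(CORE, PLATFORM, CUSTOMER)
```

The key property is that `CORE` is never edited: removing the customer layer instantly restores default behaviour, which is what makes per-installation tailoring safe.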




Comidor cloud application suite

It is neither productive nor cost-effective for PSOs to purchase, operate and maintain numerous applications, just to cover their daily operational needs. This is where the Comidor application suite comes in. Developed on jPlaton, Comidor offers customer relationship management, and project and financial management capabilities, all on a state-of-the-art collaboration platform. More specifically, Comidor enables:

Enhanced collaboration
- E-mail integration
- Contacts management
- Accounts management
- Social networking
- Interactive calendar
- File creation and sharing of any file format
- Wikis for collaboration and knowledge consolidation
- Real-time text chatting, video, message threading and polls
- Ability to "follow" the real-time feeds of co-workers
- Version control (including who changed it, date/time of changes, and the ability to make prior versions available for use)
- Organisation management (drag'n'drop groups and users on an organisational chart that can be restructured on demand)
- Stats and graphs (on-the-fly visual presentation of opportunities, projects, contacts, accounts, and much more)
- Report creation and customisation
- Web services for third-party systems with full interaction and data import/export tools
- Comidor mobile

Personalised customer care
- Case-specific collaboration and quality-focused services provision
- Unified knowledge base
- Leads and opportunities management
- Campaign creation and monitoring with advanced analytics and reports
- Performance indicators, statistics and reporting
- Forecasting based on financial information streams

Project management
- Resource planning (human, tangibles and intangibles, including personnel, equipment, effort, knowledge, etc.)
- Schedule and task management (allowing managers to monitor budget and costs at any given time)
- Deliverables and milestones management, at the project, task and resource level
- Knowledge management

The G-cloud way

B.open's cloud platform-as-a-service and software-as-a-service solutions provide an integrated cloud operational environment, supporting the adoption of cloud computing facilities and delivering fundamental changes to the way the public sector procures and operates ICT.

www.b-open.gr/


Education in the cloud

Przemyslaw Fuks, EFICOM S.A. European and Financial Consulting, EuroCloud, Poland

In an environment where online learning (e-learning) is increasingly popular, the application of cloud computing to education creates some remarkable opportunities. Our younger generation especially is increasingly using the internet in their search for knowledge, and pupils of any age can benefit from this "here and now" model of knowledge, using net browsers, exchanging ideas on social networks, and pursuing their education in a dynamic, immediate and personalised manner.




Low-cost, tailored e-learning

By raising awareness of cloud technologies as a new way of providing services, we can give schools and universities the chance to create their own low-cost e-learning systems. Thanks to cloud technologies, the cost of expensive hardware necessary to create IT infrastructure, and the cost of software licenses for school laboratories, are no longer barriers. The required computational power is now served by IT providers and the owners of integrated learning environments, allowing the users of Learning Management Systems to create new educational content with only the aid of special applications for online content editing.

Dynamic lessons

Such solutions allow teachers and lecturers to make lessons more attractive, using the huge repositories of multimedia resources available in the cloud, without requiring the installation of additional software on local workstations. All teachers need is a web browser. A policy of resource management allows teachers to share their educational content, leading to more attractive lessons as content is systematically enhanced with new, unique educational resources.

Personalised approach

Pupils can access educational resources – including lessons, courses, revision aids, and so on – without the pressures of time or barriers of geography. They can use simple devices, such as laptops, tablets and smartphones, in their own preferred way, learning at their own pace and in their own style. Some may prefer e-books or multimedia presentations, while others prefer videos, and so on.

Flexibility

Foremost among the advantages for schools and universities are the much lower costs of infrastructure maintenance when using cloud resources. Also, money spent is proportional to platform utilisation, and costs can thus be significantly reduced during holidays as compared to during the school year or end-of-term examinations.

Case study: Lodz, Poland

An excellent example of the use of cloud solutions in education is the Educational Platform of Lodz, which has more than 150,000 users. By leveraging cloud technologies on municipal servers, the city of Lodz has deployed this broad-scale educational platform, on which users dynamically manage their educational content without the need to buy expensive software or costly hardware. A+ for Lodz.



Cloud-sourcing: Positioning the cloud for disaster relief scenarios

Mark Roddy and Edel Jennings, TSSG, Ireland; Patrick Robertson, DLR, Germany; and Dingqi Yang, ITSud, France

"We saw the ability of digital natives and the networked world, using lightweight and easily iterated tools, to do something rapidly that a big organization or government would find difficult, if not impossible, to do."[1]

Richard Boly from the US State Department, advocating 'crowd-sourcing' as an effective disaster relief tool




Disasters, such as earthquakes or terrorist attacks, are by nature random and unpredictable. Disaster management teams must respond to unknown and dangerous situations using the best available and most reliable information. In recent years, data coming from disaster situations via social media and ubiquitous technologies has increased, and open-source cloud-based technologies, such as Ushahidi[2] and Sahana Eden[3], have evolved to offer real-time insights that help relief agencies focus attention where it is most required.

Cloud-sourced wisdom tool

The EU FP7 open-source project SOCIETIES[4] is investigating the use of novel ubiquitous and social cloud-based communications services to discover and organise people able to assist with specific disaster management information deficits. These cloud-based services propose harnessing the efforts of offsite crowd-sourced volunteers, selected for their relevant skills and trustworthiness. These intelligently orchestrated communities of interest will help research the answers to general or specialist requests from disaster response teams, thus generating a trustworthy cloud-based and crowd-sourced wisdom tool.

Service requirements and design

CPM's disaster experts were presented with sample scenarios that attempted to describe how this cloud-based community might assist with disaster relief. For example, an offsite volunteer community could be asked to compare satellite images of a disaster zone, taken before and after the catastrophe, so that destroyed infrastructure, such as roads or bridges, could be identified and highlighted. A fundamental question[5] derived from this research centered on trust: how could onsite experts trust results from the offsite community? Further analysis of this trust issue enabled researchers to generate a fundamental design that included the following features (Figure 1):

- The service should be able to identify those offsite users most relevant to specific onsite requests,
- The service should allow for physical and virtual collaboration of offsite users, and
- Onsite personnel should have sufficient confidence in the veracity of the cloud-sourced answers.

The SOCIETIES team developed a user interface (Figure 2) that allows requests for help from disaster zones to be generated and uploaded, along with the skills profile of the expertise needed by the offsite volunteer. Disaster relief experts can then view the responses provided by 'cloud-sourced' volunteers, and filter and relay responses back to onsite relief teams.
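The first requirement, matching onsite requests to the most relevant offsite users, amounts to scoring volunteers by skill overlap and trustworthiness. The sketch below is a toy illustration only; the field names, scoring weights and trust threshold are invented, not the SOCIETIES design:

```python
def match_volunteers(request_skills, volunteers, min_trust=0.5):
    """Rank volunteers by (skill overlap x trust), dropping anyone below a
    trust threshold. All fields and weights are hypothetical."""
    ranked = []
    for v in volunteers:
        overlap = len(set(v["skills"]) & set(request_skills))
        if overlap and v["trust"] >= min_trust:
            ranked.append((overlap * v["trust"], v["name"]))
    return [name for _, name in sorted(ranked, reverse=True)]

volunteers = [
    {"name": "ana",  "skills": ["satellite imagery", "gis"], "trust": 0.9},
    {"name": "ben",  "skills": ["logistics"],                "trust": 0.8},
    {"name": "chen", "skills": ["gis"],                      "trust": 0.3},
]
```

In a real deployment the trust score would itself be derived from historical usage records and vetted profiles, as the architecture in Figure 1 suggests.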

[1] http://www.fastcompany.com/1751308/state-department-trying-make-thousand-ushahidis-bloom "Fast Company"
[2] http://ushahidi.com/about-us "The Ushahidi Project"
[3] http://wiki.sahanafoundation.org/doku.php "The Sahana Foundation"
[4] http://www.ict-societies.eu/ "EU FP7 SOCIETIES Project"
[5] http://www.ict-societies.eu/files/2011/11/D8.1_public.pdf "SOCIETIES Paper Trial Evaluation Report"



Figure 1: High-level architecture of the 'cloud-sourcing' platform (on-site users create requests via web user interfaces; a request manager and request processor, backed by a request database and query interface, select relevant off-site users from a user profile database built from manual input, social networks and historical usage records; off-site users are notified, grouped, and vote on answers, with resolutions relayed back via an instant message service).

Figure 2: Screenshot of the user interface for the offsite volunteer service




The cognitive walk-through experiments

In recent months, SOCIETIES researchers conducted a series of experimental cognitive walkthroughs, aimed at evaluating the worth of offsite volunteer services, and including real volunteer users (Figure 3). The evaluations involved two user types: offsite volunteers selected by the system for their relevant expertise, and onsite relief teams and disaster victims. Researchers looked at several questions: What was the user experience of the offsite volunteers that were selected to collaborate? Did relief workers trust the results returned by offsite volunteers?[6]

Figure 3: First cognitive walkthrough with researchers from SOCIETIES

The results clearly showed that although the system is still a work-in-progress, it has the potential to be further developed to achieve good usability and trustworthiness.

Future work

Issues for future consideration include integration of the service and system features with existing platforms, such as Ushahidi. Project partners have also used feedback from these first trials to make service improvements, and plan to perform an end-to-end experiment with sufficient offsite volunteers to enable full validation of the system.

[6] http://www.ict-societies.eu/project-deliverables/ "SOCIETIES First Prototype Evaluation Report" (released subject to CEC approval)



Azure: designing modern applications using a hybrid cloud approach

Tomasz Kopacz, Microsoft, Poland

Managers want fast, flexible IT on a limited budget, so they're increasingly turning to clouds. Economies of scale mean public clouds are more cost-effective, yet security concerns mean many companies require full control of their IT assets. Windows Azure provides businesses with a means to create their own tailored solutions, using a hybrid cloud. Cloud services are usually divided into three categories: IaaS (infrastructure as a service), PaaS (platform as a service) and SaaS (software as a service).




Infrastructure as a service

IaaS offers virtual computers, storage space, and some form of secure connection between the data center and the customer, such as a virtual private network (VPN); Windows Azure is one example. IaaS offerings usually include standardized services in predefined sets: hardware configurations, operating system (OS) images, and maybe some ready-made applications, like Microsoft's SQL Server, BizTalk Server, SharePoint or even specialized Linux images with Django. Custom-made, specialized images can also be added, and architects will tailor these offerings to best suit a particular situation. Microsoft's Hyper-V can be used to run the same OS image in Azure and on on-site servers. Thus, technically speaking, moving between public and private clouds is very easy in IaaS. Additionally, management tools such as Microsoft System Center allow users to control this mixed environment, including machines in both public and private clouds.

Platform as a service

PaaS is a very generic form of cloud in which providers offer developers sets of components, or higher-level building blocks. In the case of Azure, offerings include specially designed websites (from simple sites in PHP or Node.js to complex portals in ASP.NET or web services); HDInsight (based on Hadoop), for processing gigantic datasets and building next-generation analytical systems; or Generic Worker, which allows developers to deploy background processing tasks.

Software as a service

SaaS includes all end-user solutions, from the simple – like Microsoft SkyDrive and Dropbox, or calendar services – to the more complicated – like SalesForce.com, Office 365 or Microsoft Dynamics CRM. Some SaaS offerings are ready-to-use products; others are platforms that allow developers or business analysts to create specialized solutions using very high-level languages, removing the burden of dealing with technology layers as in PaaS- or IaaS-based applications.

In summary, there are no clear boundaries between clouds. So the real challenge lies in choosing the best components for you. Have a look at the following example:

Building a sales automation system

Let's assume we want to build a sales automation system that supports e-commerce and mobile devices.

Data warehouse

We should opt to invest in our own data warehouse, for two reasons. One, data will be kept in a privately managed IT facility. And two, investing in our own hardware and backup system will be more cost-effective in the long term. A VPN will be used to periodically download data from the cloud to our warehouse.

Website hosting

We won't host our public website on our own servers, because of the cost of high-speed internet, denial-of-service protection, and so on. Our e-commerce site will be kept in Azure in a public cloud. We will use Azure Web Sites (for pure web presence) and Azure Worker Roles (for backend processing of orders). Communication between those components will be handled by Azure Service Bus. This architecture will allow us to scale up services as needed.
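The split between the web front end and the worker roles boils down to decoupling via a message queue: the site accepts orders fast, and workers process them out of the request path. The sketch below uses an in-process Python queue as a stand-in for a service bus; it illustrates the pattern only and does not use the real Azure SDK:

```python
import json
import queue

# In-process queue standing in for a service bus (illustration only).
orders = queue.Queue()

def web_front_end(order):
    """Accept an order quickly and enqueue it for background processing."""
    orders.put(json.dumps(order))
    return {"status": "accepted"}

def worker_role():
    """Drain the queue and process orders outside the web request path."""
    processed = []
    while not orders.empty():
        processed.append(json.loads(orders.get()))
    return processed

web_front_end({"sku": "A1", "qty": 2})
web_front_end({"sku": "B7", "qty": 1})
```

Because the two sides share only messages, the worker tier can be scaled up independently when the queue grows, which is exactly the scaling property the architecture above relies on.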




Security

When it comes to web services, security is critical. Clouds can deliver better security simply because they operate on a larger scale. This means that, technically speaking, by opting for cloud, we're not only buying a service, but also a guaranteed service level agreement and excellent security. For our case study example, this means we'll be using the cloud as a simple form of DMZ: we won't be exposing anything directly from our datacenter to the internet.

Data storage

We'll store our data in two forms: in a relational database, like SQL Database, with which it is simple to build an application using ORM libraries; and in Azure Storage, where we will keep all media (images, videos) relevant to our products. Such storage is inexpensive, and thanks to standard protocols (HTTP(S)), integration with a portal is trivial. Additionally, our marketing agency can work directly with stored material and easily update it as needed.

Mobile devices

Thanks to Azure Mobile Services, it’s easy to build a backend for every major mobile platform: Android, Windows Phone, Windows 8, iPad, and of course generic HTML5. Mobile Services provides secure access to data, authentication (including integration with Facebook, etc.), and push notification services. This one solution can service all types of devices!

Tailoring your hybrid cloud

In this way, we can build quite a complex system. This is, of course, only the beginning. We may also choose to use Azure Active Directory, to solve any authentication and authorization problems. In the future, we can expect many more of these standardized services. For example, email is already a highly standardized SaaS offering. Maybe, in the future, other cloud-based services will reach similar standards. Right now, each and every cloud provider uses similar REST-based application programming interfaces (APIs) to manage their services. From the client's API perspective, there are also similarities between storage services. Of course, there are huge differences in terms of internal implementation. The Windows Azure Pack provides the same core services on-site as those available in the Azure public cloud, which means customers can build complex systems using these same, known services, and then choose which parts will run on a public cloud, and which on a private cloud.



MODAClouds: Model-driven engineering for the clouds

Elisabetta Di Nitto (Politecnico di Milano, Italy), Dana Petcu (West University of Timisoara, Romania), Septimiu Nechifor (Siemens SRL, Romania)

Vendor lock-in and cloud outages still discourage migration to cloud-based solutions, especially in the public sector. A solution to both problems could be to support application developers and operators in the adoption of a multi-cloud approach, where applications are built to run and replicate on different clouds and can rapidly switch between them.




This multi-cloud approach requires proper cost-risk analysis and advanced software engineering, guiding developers at design time and supporting providers at runtime. Thus a healthy marriage between new open-source cloud technologies and well-established model-driven engineering could increase trust in clouds and encourage cloud adoption. The MODAClouds project aims to deliver a decision-support system, methods, and an open-source integrated development environment and run-time environment for the high-level design, early prototyping, semi-automatic code generation, and automatic deployment of applications on multi-clouds with guaranteed quality of service.

The MODAClouds approach

The MODAClouds solution targets system developers and operators by providing tools to support the application or service life-cycle phases:

- Feasibility study and analysis: allows developers to analyse and compare cloud solutions.
- Design, implementation and deployment: supports the cloud-agnostic design of software systems, semi-automatic coding, and deployment to target clouds.
- Run-time monitoring and adaptation: allows system operators to oversee execution on multiple clouds, automatically triggers some adaptation actions (e.g., migrating system components to services offering better performance at that time), and provides runtime information to inform software system evolution.

Figure 1 sketches the general architecture of the MODAClouds solution. At design time, MODAClouds aims to analyse business opportunities, determine the value of some cloud solutions over others (cost-risk analysis), map the functional and non-functional design of multi-cloud applications, and analyse different design alternatives. At runtime, MODAClouds focuses on three main issues:

1. execution management, intended as the set of operations to instantiate, run and stop services on the cloud;
2. monitoring the running application; and
3. self-adaptation to ensure quality of service.
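As a rough illustration of the runtime side, the loop below sketches how a monitoring rule might trigger a migration action. The metric names, thresholds and data shapes are hypothetical, invented for this example, and are not part of the actual MODAClouds platform.

```python
# Hypothetical sketch of a self-adaptation step: monitor a QoS metric
# across clouds and propose a migration when a monitoring rule is violated.
# All names and thresholds here are illustrative.

MONITORING_RULES = {
    # metric name -> (threshold, kind)
    "avg_response_time_ms": (250.0, "max"),
}

def violates(metric, value):
    threshold, kind = MONITORING_RULES[metric]
    return value > threshold if kind == "max" else value < threshold

def adaptation_step(deployments, metrics):
    """Return a list of (component, from_cloud, to_cloud) migration actions."""
    actions = []
    for component, cloud in deployments.items():
        value = metrics[cloud][component]["avg_response_time_ms"]
        if violates("avg_response_time_ms", value):
            # pick the cloud currently serving this component fastest
            best = min(metrics,
                       key=lambda c: metrics[c][component]["avg_response_time_ms"])
            if best != cloud:
                actions.append((component, cloud, best))
    return actions

deployments = {"frontend": "cloud-a"}
metrics = {
    "cloud-a": {"frontend": {"avg_response_time_ms": 400.0}},
    "cloud-b": {"frontend": {"avg_response_time_ms": 120.0}},
}
print(adaptation_step(deployments, metrics))
# -> [('frontend', 'cloud-a', 'cloud-b')]
```

A real self-adaptation platform would of course weigh migration costs and service-level agreements before acting, but the shape of the loop (observe, check rules, decide) is the same.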



[Figure 1. General architecture of the MODAClouds solution. At design time, the MODAClouds IDE combines the MODACloudML functional modelling environment, the QoS modelling and analysis tool, the Decision Making Toolkit, the Data Mapping Component (DMC) and the MODACloudML Deployment and Provisioning Component, which share models and produce monitoring rules, policies for self-adaptation, an initial deployment model, a QoS model and deployable artefacts. At run time, a monitoring platform, a self-adaptation platform and reasoner, and an execution platform manage the cloud applications, maintaining current and target models at runtime behind a runtime GUI.]



Smart city safety planner

One MODAClouds case study is the Smart City Urban Safety Planner, for managing fire incidents in a high-density area served by a gas network. Gas detectors, traffic sensors, cameras, and electricity circuit-breakers are in place and managed by an Internet of Things platform. The safety planner aims to

- predict the failure of gas detector sensors using sensor data,
- predict the impact of a fire by analysing video from nearby cameras,
- evaluate the best path for fire squads, and the best exit for traffic, by correlating run-time data with historical data from traffic sensors,
- determine the optimal gas pipes to isolate to limit impact, and
- send information to relevant authorities.

The infrastructure must be highly scalable, to manage the varying density of data flows and events. Data replication and migration mechanisms are needed to avoid data loss in case of failure of one of the application instances. MODAClouds supports this case study by offering design-time and run-time mechanisms to enable the use of multiple clouds, aiming to guarantee 24/7 availability. It also offers dedicated mechanisms for managing data replication and synchronization, thus allowing hot-switching between different replicas of the safety planner system. www.modaclouds.eu



Synnefo: A Complete Open Source Cloud Stack

Vangelis Koukis, GRNET; Constantinos Venetsanopoulos, GRNET; Nectarios Koziris, CSLab, NTUA

In a nutshell

Synnefo is a complete open source cloud stack that provides Compute, Network, Image, Volume and Storage services, similar to the ones offered by AWS. Synnefo manages multiple Google Ganeti (https://code.google.com/p/ganeti/) clusters at the backend to handle low-level VM operations. Essentially, it provides the necessary layer around Ganeti to implement the functionality of a complete cloud stack. To boost third-party compatibility, Synnefo exposes the OpenStack APIs to users. We have developed two standalone clients for its APIs: a rich Web UI and a command-line client. Synnefo has been running in large-scale production environments since 2011, powering private and public cloud services.


Architecture

An overview of the Synnefo stack is shown in Fig. 1. Synnefo has three main components:

- Astakos is the common Identity/Account management service
- Pithos is the File/Object Storage service
- Cyclades is the Compute/Network/Image and Volume service

These components are implemented in Python using the Django framework. Each service exposes the associated OpenStack APIs to end users (OpenStack Compute, Glance, Cinder, Keystone, Object Storage), scales out on a number of workers, uses its own private DB to hold cloud-level data, and issues requests to the cluster layer as necessary. When the need arises to provision and manage resources automatically and in bulk, the ./kamaki command-line tool can be used to perform low-level administrative tasks. ./kamaki is just another client accessing Synnefo over its RESTful APIs, targeted at advanced end users and developers.
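Because Synnefo exposes the standard OpenStack APIs, any client that can issue authenticated HTTP requests can talk to it. The sketch below builds a request for the Compute API's server listing and parses a typical JSON response; the endpoint URL and token are placeholders, and no network call is actually made here.

```python
# Sketch of client access to a Synnefo deployment via its
# OpenStack-compatible Compute API. The request shape (X-Auth-Token
# header, GET /servers) follows the OpenStack Compute API; the URL and
# token below are placeholders.
import json
from urllib.request import Request

def list_servers_request(compute_url, token):
    """Build the HTTP request for listing a user's VMs."""
    return Request(compute_url.rstrip("/") + "/servers",
                   headers={"X-Auth-Token": token,
                            "Accept": "application/json"})

def server_names(response_body):
    """Extract VM names from an OpenStack-style JSON response."""
    return [s["name"] for s in json.loads(response_body)["servers"]]

req = list_servers_request("https://example.synnefo.org/compute/v2.0", "my-token")
print(req.full_url)  # https://example.synnefo.org/compute/v2.0/servers
print(server_names('{"servers": [{"id": 1, "name": "web-vm"}]}'))  # ['web-vm']
```

In practice, ./kamaki wraps exactly this kind of request behind a friendlier command-line interface.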

Key features

Anyone using a cloud service powered by Synnefo, either public or private, has access to the following functionality:

Compute:
- Support for Windows Server 2008R2 and 2012, all major Linux distributions (Ubuntu, Debian, RHEL/CentOS/Scientific Linux, Fedora, Gentoo, ArchLinux, openSUSE), and FreeBSD Virtual Machines
- Spawning VMs from custom Images, uploaded by users
- Dynamic file injection upon VM creation for contextualization
- Per-VM CPU and Network statistics
- Easy and secure console access through the Web UI

Network:
- Public networking with full IPv4/IPv6 support
- Different firewall options for the public network
- Isolated Private networks (virtual L2/L3 networks) with automatic or manual IP allocation
- Ability to create arbitrary virtual network topologies, with multiple NICs per VM

Storage:
- File uploads/downloads over Web UI and command-line clients
- Syncing between local files and the cloud with native Windows and Mac OS X clients
- File sharing among individual users and user groups, with per-file Access Control Lists

Image:
- Images are just files in the Storage Service
- Images can be shared with other users
- Support for custom, user-provided Images
- Automatic bundling tool transfers existing physical or virtual machines to a Synnefo cloud


Identity:
- Support for multiple login methods per user: classic username/password, LDAP/Active Directory, Google/Twitter/LinkedIn 3rd-party accounts, SAML 2.0 (Shibboleth) federated logins
- Fully customizable user sign-up process, with discrete verification/moderation steps
- Quota system for fine-grained per-user, per-resource limits, with associated UI
- Support for collaborative projects for sharing virtual resources among user groups

Backend functionality

Synnefo’s architecture decouples the cloud from the cluster layer, easing administration. It provides the following functionality at the backend:

Compute:
- Management of multiple Ganeti clusters
- Support for VM live migrations with or without shared storage
- Support for multiple storage backends: LVM, DRBD, Files on a local/shared directory (e.g., over NFS), RBD (Ceph/RADOS)
- Simple interface to plug into existing SAN/NAS deployments
- Easy integration into existing infrastructure using admin hooks
- Linear scaling with dynamic addition of Ganeti clusters

Network:
- Full IPv4/IPv6 support for Public and Private networks
- Scales to thousands of isolated private L2 segments over a single VLAN
- Support for pluggable networking configurations in the backend
- Currently supports VLAN-based isolation, MAC-based filtering over a single VLAN, and VXLAN-based virtual L2 segments

Storage:
- Files are collections of blocks
- Content-based addressing for blocks
- Partial file transfers, deduplication, efficient syncing
- Optionally used as a single store for Files, Images and VM disks

Image:
- Secure deployment of custom Images, inside an isolated VM
- All contextualization done by Synnefo with no need for special tools inside the Image
- Efficient syncing and sharing of Images as files
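The storage model above (files as lists of content-addressed blocks, with deduplication) can be illustrated with a toy in-memory store. This is only an illustration of the idea, not Synnefo's actual implementation; block size and class names are invented.

```python
# Illustrative sketch of content-addressed block storage with
# deduplication: a file is a list of block hashes, blocks are addressed
# by content, and identical blocks are stored only once.
import hashlib

BLOCK_SIZE = 4  # tiny for demonstration; real systems use megabyte blocks

class BlockStore:
    def __init__(self):
        self.blocks = {}   # hash -> bytes (deduplicated store)
        self.files = {}    # filename -> list of block hashes

    def put(self, name, data):
        hashes = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            h = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(h, block)  # dedup: store once per content
            hashes.append(h)
        self.files[name] = hashes

    def get(self, name):
        return b"".join(self.blocks[h] for h in self.files[name])

store = BlockStore()
store.put("a.txt", b"ABCDABCD")  # two identical blocks
store.put("b.txt", b"ABCD")      # shares its block with a.txt
assert store.get("a.txt") == b"ABCDABCD"
print(len(store.blocks))         # 1: only one unique block stored
```

The same structure also explains efficient syncing: a client only needs to upload the blocks whose hashes the server does not already have.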

Use case: The ~okeanos public cloud

Synnefo has been running in production powering GRNET’s ~okeanos public cloud service (http://okeanos.grnet.gr). As of this writing, ~okeanos runs more than 5k active VMs, for more than 3.5k users. Users have launched more than 130k VMs and more than 35k virtual networks.


Using Synnefo in production has enabled:

- Rolling software and hardware upgrades across all nodes. We have done numerous hardware and software upgrades (kernel, Ganeti, Synnefo), many requiring physical node reboots, without user-visible VM interruption.
- Moving the whole service to a different datacenter, with cross-datacenter live VM migrations, from Intel to AMD machines, without the users noticing.
- Scaling from a few physical hosts to multiple racks with dynamic addition of Ganeti backends.
- Overcoming limitations of the networking hardware regarding the number of VLANs, with multiple L2 segments on a single VLAN, with MAC-based filtering or VXLAN encapsulation.

Synnefo is open source. Source code, distribution packages, documentation, many screenshots and videos, as well as a test deployment open to all can be found at http://www.synnefo.org.



CELAR: Automatic, multi-grained elasticity provisioning for the cloud

Dimitrios Tsoumakos, National Technical University of Athens, Greece

Cloud computing possesses the inherent ability to support elasticity, namely the scaling of infrastructure or platform resources to meet the exact demand, performance or cost characteristics at runtime.


Optimal resource allocation is hugely important: users can experience wide variations in application workload across a year, month, day or even a few minutes. Static under-provisioning runs the risk of costly service unavailability at peak-hours, while static provisioning for peak-load incurs increased costs and underutilised resources (Figure 1, left graph).

Automatic throttling

Elasticity can be applied so that application performance and cost are throttled in a controlled manner, bringing profits for both parties: service consumers can reduce task execution times without blowing their budget, and cloud providers maximise their financial gain by increasing their clientele and keeping their customers satisfied. While many systems claim to offer elasticity, the throttling is usually performed manually, and users are required to define the conditions under which resources should be scaled up or down – a difficult task. Clients’ needs change dynamically, and different optimisations will be required at different times. Such coarse-grained elastic provisioning – and/or the scaling of a single resource (e.g., CPUs, storage or networking elements) – leads to suboptimal use and performance degradation (Figure 1, middle graph). To harvest the benefits of elastic provisioning, it must be automated and fine-grained (Figure 1, right graph).
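For concreteness, here is the kind of manually defined, coarse-grained scaling rule the paragraph above describes, with purely illustrative metric names and thresholds; CELAR's goal is precisely to replace such static rules with automated, fine-grained decisions.

```python
# Illustrative coarse-grained scaling rule of the kind users are
# typically asked to write by hand. Thresholds are made up.
def coarse_scaling_action(cpu_utilisation, n_vms, min_vms=1, max_vms=10):
    """Add a VM above 80% average CPU, remove one below 20%."""
    if cpu_utilisation > 0.80 and n_vms < max_vms:
        return n_vms + 1
    if cpu_utilisation < 0.20 and n_vms > min_vms:
        return n_vms - 1
    return n_vms

print(coarse_scaling_action(0.90, 3))  # 4 (scale out)
print(coarse_scaling_action(0.10, 3))  # 2 (scale in)
print(coarse_scaling_action(0.50, 3))  # 3 (no change)
```

Such a rule reacts to a single resource with fixed thresholds, which is exactly why it leads to the suboptimal behaviour of Figure 1's middle graph.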

[Figure 1: Resource provisioning strategies. Three graphs of resources versus demand over time contrast static (over-)provisioning, coarse-grained elastic provisioning and fine-grained elastic provisioning.]

CELAR architecture

The CELAR project is EU-funded and aims to enhance current cloud functionality to allow elastic resource provisioning. The project will develop open-source tools for applying and controlling elasticity in cloud-based applications, then apply this technology to two exemplary applications: one in online gaming, and the other in scientific computing, with an application requiring compute- and storage-intensive genome computations. The first version of the proposed CELAR system architecture is depicted in Figure 2.


[Figure 2: CELAR system architecture v1. An application management platform (application description, submission and orchestration) hosts custom applications and services (e.g., HBase, Hive, Cassandra and other Hadoop components) on top of an elasticity platform; the CELAR manager combines a decision module, application profiler, resource provisioner, interceptor, information system, multi-level metrics evaluation, monitoring system and CELAR database, driving the cloud provider’s IaaS layer of VMs, virtual storage and physical resources.]

There are three main modules:

Application management platform: Modules and methods that enable developers to describe and deploy their applications. This layer will give users the ability to define their desired scaling policies and provide input to the inner CELAR modules. It will provide easy application deployment and real-time performance metrics. Modules will be implemented on top of the reliable Eclipse platform and exposed via meaningful, user-friendly UIs to the end-users.

Cloud information and performance monitor: A scalable, distributed subsystem that allows the collection and storage of statistics that relate to the running application and the resources it consumes over time. These metrics will be used to evaluate the current status of the application execution. Users will be able to evaluate the application by using existing metrics or by defining their own.

Elasticity platform: All the algorithms and modules required to automatically allocate (or free up) resources based on their characteristics, user preferences and the application load. The Decision Module is central to the platform: It views elasticity as a multi-dimensional property with three main dimensions: quality, cost and resources. This module then maps high-level elasticity requirements to low-level metric restrictions. For example, application cost can be broken down into the costs of running each virtual machine and the costs of I/O calls. The elasticity platform also maintains all necessary information for past and current application deployments, orchestrates addition or removal of resources (of different types and granularities), and ensures the robustness and availability of elastic operations. Finally, the platform features automated characterisation of an application’s behavior over a number of representative resource provisioning and load scenarios (profiling).
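As a toy example of the cost dimension mentioned above, the sketch below decomposes application cost into per-VM running costs and I/O call costs; the prices and figures are invented for illustration and are not CELAR's actual cost model.

```python
# Illustrative cost breakdown: total application cost decomposed into
# per-VM running costs plus the cost of I/O calls. Prices are made up.
def application_cost(vm_hours, io_calls,
                     vm_hour_price=0.05, io_call_price=0.00001):
    """Total cost = sum of VM-hour costs plus the cost of I/O calls."""
    vm_cost = sum(hours * vm_hour_price for hours in vm_hours)
    io_cost = io_calls * io_call_price
    return vm_cost + io_cost

# 3 VMs running 10 hours each, plus one million I/O calls:
print(application_cost([10, 10, 10], 1_000_000))  # 1.5 + 10.0 = 11.5
```

Mapping a high-level requirement such as "monthly cost below X" onto low-level restrictions on VM-hours and I/O volume is exactly the kind of translation the Decision Module performs.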


Lifecycle of an application

CELAR imposes a general structure on the lifecycle of an application (Figure 3). Users provide input to describe, submit and deploy their application. The application is then profiled and elastically managed by CELAR until its termination.

Figure 3: Describe → Submit → Deploy → Profile → Monitor & Manage → Terminate

CELAR is committed to using standard APIs, open-source tools and platform-independent programming languages to ensure wide coverage of underlying platforms and encourage widespread adoption and use of CELAR. We aim to provide the first integrated version of the CELAR system by the end of 2013.

www.celarcloud.eu/



The PaaSage project: the cloud was the limit

Frode Finnes Larsen, EVRY, Norway, industrial partner in the PaaSage project

Want the power of multiple cloud platforms? But only when you need it? Want to avoid cloud vendor lock-in? Want to develop once, but deploy to multiple clouds? Or just want to cloudify your stuff? PaaSage technology is for you!


The project

PaaSage is an EC-funded project aimed at creating a development and deployment platform to help software engineers create new applications and migrate old applications to multiple cloud platforms. There are 19 partners involved; case studies for proof-of-concept come from industry and the public sector.

Case study: the public sector

The public sector is facing a huge demographic challenge in Europe: our population is getting older, living longer, and requiring care for more extended periods. On top of this, there are the challenges of rising costs and economic crises. Part of the solution lies in increased efficiency: the public sector must deliver more services using fewer resources. Technology, automation and self-service will play a crucial role in this. In Norway we have 428 municipalities. These government bodies are autonomous: a huge number have their own legacy systems and have different ways of delivering the same service. Their systems are not harmonised and public interactions often require case-by-case management. The different legacy systems are heavily integrated into other systems and registers. Some services experience high-volume demand only once or twice a year. In this post-PRISM environment, privacy is becoming an even more important issue for the public sector: data must be stored nationally or within the EU. The public sector needs a platform that supports the use of different clouds while reducing technology dependencies and orchestrating multiple data sources, governance and control.

Core issue: handling application constraints

There are several challenges when automatically deploying applications to multiple clouds, but the biggest issue is managing application constraints: not all cloud providers will be compliant with your application. Constraints are also necessary to know when to take action: What should trigger automatic scale-out or deployment? What should trigger the application to scale down? What price model do the different cloud providers employ? How many users must be supported? How many transactions? How much memory is required? How much CPU? How much storage? The nature of your application can affect the cost of its deployment. Thus any platform for managing multiple clouds must make it possible to specify constraints on application availability, performance, cost, security and privacy. For reasons of data security, it must also support specifying constraints on the location of cloud providers.
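One way to picture such constraints is as a simple machine-checkable specification that provider offers are matched against. The encoding below is a hypothetical plain-Python sketch of the idea; PaaSage's actual constraint handling is CloudML-based, and all names and figures here are invented.

```python
# Hypothetical constraint specification for a multi-cloud application,
# covering performance, cost and data location (privacy). Values are
# illustrative only.
APP_CONSTRAINTS = {
    "min_users": 10_000,
    "max_response_time_ms": 500,
    "max_monthly_cost_eur": 2_000,
    "data_location": {"EU", "NO"},  # privacy: data must stay in the EU/EEA
}

def provider_is_compliant(offer, constraints):
    """Check a provider's offer against the application's constraints."""
    return (offer["supported_users"] >= constraints["min_users"]
            and offer["response_time_ms"] <= constraints["max_response_time_ms"]
            and offer["monthly_cost_eur"] <= constraints["max_monthly_cost_eur"]
            and offer["region"] in constraints["data_location"])

offer = {"supported_users": 50_000, "response_time_ms": 300,
         "monthly_cost_eur": 1_500, "region": "EU"}
print(provider_is_compliant(offer, APP_CONSTRAINTS))  # True
```

Given such a specification, a deployment platform can filter out non-compliant providers automatically and re-evaluate the remaining ones as their price models and the application's load change.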

PaaSage architecture

The PaaSage platform aims to support deployment to multiple clouds, including private and community clouds as well as commercial cloud offerings. It employs three main elements: the integrated development environment (IDE), Executionware and Upperware.

IDE

The IDE is PaaSage’s front-end. It extends the popular open-source development platform Eclipse and supports the Cloud Modelling Language (CloudML). The IDE ensures model-based integration of functional components in a variety of application scenarios.


Executionware

Executionware provides platform-specific mapping and technical integration of applications to the cloud provider’s architecture and Application Programming Interfaces (APIs). It can be used to monitor, reconfigure and optimise running applications on a variety of platforms.

Upperware

Upperware is linked to the IDE and presents a collection of tools and components to assist developers when porting models or designing applications. At runtime, it integrates these models and applications with the Executionware to optimise performance.

[Figure 1: PaaSage architecture. The PaaSage integrated development environment, built around the Cloud Modelling Language, a speculative profiler, extra-functional adaptation, intelligent stochastic reasoning and metadata exchanged with other PaaSage users, sits on a model-based open platform that maps to the APIs and architectures of private cloud solutions, existing cloud solutions, external service providers and commercial offerings, together with data and storage.]

PaaSage in action

PaaSage will help to modernise the public sector by:

- reducing platform and vendor dependencies, making strategic decisions easier,
- supporting adoption of the cloud, thereby reducing costs and providing extra power only when needed,
- managing data privacy, a hot topic in these post-PRISM days,
- managing multiple data sources and simplifying the handling of external data sources,
- managing governance and control at design-time, deploy-time and run-time,
- improving services by automatically monitoring application behavior, and
- migrating legacy applications to a cloud environment.



[Figure 2: PaaSage workflow. New and legacy applications are captured as CloudML application models (architectural, dependency, data-flow and extra-functional utility models). A design-time optimisation loop links the speculative profiler, intelligent reasoner and extra-functional adaptation with metadata shared with community expertise; an execution optimisation loop combines execution control, execution monitoring, metadata collection and platform-specific mapping onto the execution environments.]



NEWS & EVENTS

1st SUCRE Video about Cloud Computing and the Public Sector available on YouTube! This first video highlights the benefits for the public sector of migrating its services to cloud computing. With a fresh, innovative and user-friendly style, this short cartoon will give you insights into how cloud computing could really benefit public administrations and, ultimately, EU citizens. Enjoy it at http://www.youtube.com/watch?v=wNrM-617q70&feature=youtu.be

JOIN the SUCRE & OCEAN NETWORKING SESSION @ ICT 2013 IN VILNIUS The joint SUCRE-OCEAN session, entitled Open Clouds in Europe and Japan: Tackling Interoperability through Collaboration, will engage stakeholders from Europe and Japan to present and discuss major interoperability challenges in Open Clouds, based also on the experiences drawn by the two projects during their first year of operation. The proposed session will tackle some of the major challenges of Horizon 2020 and the Digital Agenda. It is also in line with and supports the aims of the International Cooperation strategy and policy of the EC. This session will take place on Tuesday 7 November, at 16:00, Booth 8.

SUCRE INVITES YOU TO JOIN ALSO THE “ENGINEERING APPLICATIONS FOR THE CLOUD” SESSION The aim of this session is to cover the full range of technical challenges through themes like: decision mechanisms for migration towards clouds; model-driven engineering of applications for the clouds; modelling of cloud target platforms; and resource management in multiple clouds. The session will also cover business aspects like: business implications and impact of migrating to the cloud, in particular in the public sector; validation and certification of applications migrated to the cloud; and open-source middleware for the cloud. The session also offers short snapshot presentations of state-of-the-art research results, with ample time for discussion. This session will take place on Tuesday 7 November, at 18:00, Room A. More information at http://ec.europa.eu/digital-agenda/events/cf/ict2013/item-display.cfm?id=11538

The SUCRE Young Researcher Forum outcomes On September 23rd and 24th 2013, the SUCRE Consortium organised its Young Researcher Forum (YRF) as part of the 2nd International Summer School on Services at KIT in Karlsruhe, Germany. The main goals of the event were to expose participants to current technical issues in working with open cloud computing platforms, to familiarize junior researchers with open and contemporary problems in the area, and to offer a forum to interact, learn and explore. Find the main event outcomes on the SUCRE portal: www.sucreproject.eu

Innovative cloud research in action @ ICT 2013! The FP7 SOCIETIES project is pleased to announce that it is about to deploy a conference service at ICT 2013, called the Relevance App, that exploits the project’s innovations. Exhibitors are welcome to register their exhibit information and delegates are encouraged to download and use the application. For further information and to register, please visit https://societies-trial.eventbrite.com/ and/or contact Mark Roddy at mroddy@tssg.org

PaaSage consortium gets enlarged! The FP7 PaaSage project (see www.PaaSage.eu) welcomes three new partners from Poland (the Academic Computer Centre CYFRONET of the AGH University of Science and Technology in Krakow) and Cyprus (the Department of Computer Science of the University of Cyprus, and Intelligent Business Solutions Ltd, an SME). They bring additional expertise to broaden the potential use of PaaSage in e-Science and financial services, to provide tools ensuring continuous quality improvement, to improve interoperability by gathering and integrating metadata, and to extend both the PaaSage upperware and executionware. The project therefore benefits from additional funding of €788k, corresponding to an additional investment of €1.08M. Total PaaSage investment reaches €7.4M over 4 years! More information about PaaSage at www.PaaSage.eu



RELATED International EVENTS Cloud for Europe conference The event, organised by the Cloud for Europe project and Fraunhofer FOKUS with the support of the European Commission and in association with the European Cloud Partnership Steering Board, will take place in Berlin next November. The opening keynote address will be delivered by Neelie Kroes, Vice-President of the European Commission. For further information and to register please visit http://www.cloudforeurope.eu/

Supercomputing Conference 2013 This conference will take place in Denver, CO, U.S., and its continuing goal is to provide an informative, high-quality technical program that meets the highest academic standards. The Technical Program is highly competitive and one of the broadest of any HPC conference, with venues ranging from invited talks, panels, and research papers to tutorials, workshops, posters, and Birds-of-a-Feather (BOF) sessions. For further information and to register please visit http://sc13.supercomputing.org/content/overview

5th IEEE International Conference on Cloud Computing Technology and Science (CloudCom 2013) The IEEE International Conference and Workshops on Cloud Computing Technology and Science, steered by the Cloud Computing Association, aims to bring together researchers who work on cloud computing and related technologies. This key event will take place in Bristol, U.K. on 2-5 December 2013. For further information and to register please visit http://2013.cloudcom.org/

Cloud World Forum Africa 2014 After the outstanding success of Cloud World Forum Africa 2013 – which gathered 450+ operator and enterprise professionals from 50+ countries and 350+ organizations, with 45+ speakers from the industry’s hottest players, including MTN, Airtel, Etisalat, Vodafone, Safaricom, First National & Standard Bank, Mastercard and more – the 2014 event will be held at the Sandton Sun Hotel, Johannesburg, South Africa. Further information is available at http://africa.cloudworldseries.com/

CLOSER 2014: 4th International Conference on Cloud Computing and Services Science CLOSER 2014 focuses on the emerging area of cloud computing, inspired by recent advances in the infrastructure, operations, and services available through the global network. The conference also considers the link to Services Science essential, acknowledging the service orientation of most current IT-driven collaborations. The conference is nevertheless not about the union of these two (already broad) fields; it is about cloud computing, and about how Services Science can provide theory, methods and techniques to design, analyse, manage, market and study its various aspects. For further information and to register please visit http://closer.scitevents.org/Home.aspx?y=2014



Sucre issue 2 cover A graphical concept representing the public sector as a system of interconnected nodes. Some nodes also have sub-elements or partitions which co-exist with their parent. The graphic visualises the similarities between the public sector and the cloud concept. Graphic concept & design: Paul Davies

Design & layout : Paul Davies www.de-clunk.com paul@de-clunk.com

Sucre issue 2 2013  

SUCRE project magazine on Cloud Computing. Issue 2
