
CloudSource: Cloud computing & open source

Issue 1 - April 2013

Design & layout : Paul Davies www.de-clunk.com paul@de-clunk.com


Editorial Board
- Prof. Alex Delis, SUCRE Coordinator, National & Kapodistrian University of Athens, Greece
- Dr. Norbert Meyer, Head of the Supercomputing Department at the Poznan Supercomputing Center, Poland
- Prof. Dr. Keith Jeffery, President of ERCIM, U.K.
- Dr. Yuri Glikman, OCEAN project Coordinator, Fraunhofer Institute, Germany
- Dr. Toshiyasu Ichioka, EU-Japan Centre for Industrial Cooperation, manager of the FP7 project JBILAT, Japan
- Mrs. Cristy Burne, Scientific Editor and Journalist, Australia

Coordination by Mrs. Eleni Toli, National & Kapodistrian University of Athens, Greece, and Giovanna Calabrò, Zephyr s.r.l., Italy. This publication is supported by EC funding under the 7th Framework Programme for Research and Technological Development (FP7). This magazine has been prepared within the framework of the FP7 SUCRE (SUpporting Cloud Research Exploitation) project, funded by the European Commission (contract number 318204). The views expressed are those of the authors and the SUCRE consortium and are, under no circumstances, those of the European Commission and its affiliated organizations and bodies. The project consortium wishes to thank the Editorial Board for its support in the selection of the articles, the DG CONNECT Unit E.2 - Software & Services, Cloud of the European Commission, and all the authors and projects for their valuable articles and inputs.

OTHER RELATED EVENTS

Euro-Par 2013 The Euro-Par 2013 conference will take place in Aachen, Germany, from August 26th until August 30th, 2013. The conference is jointly organized by the German Research School for Simulation Sciences, Forschungszentrum Jülich, and RWTH Aachen University in the framework of the Jülich Aachen Research Alliance. Further information at http://www.europar2013.org/conference/conference.html

CLOUDZONE 2013 This is the biggest cloud fair in the German-speaking countries. CLOUDZONE developed from the Trendcongress, which has taken place successfully in Karlsruhe since 2008. It will take place for the fifth time at the Karlsruhe Fair Center on 15th-16th May 2013. For further information and to find out who should attend this event, please visit http://www.cloudzone-karlsruhe.de/

The Fourth International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2013) This key event will take place in Valencia, Spain on May 27th - June 1st 2013. For further information and to register please visit www.iaria.org/

TNC2013 - TERENA Networking Conference The event will be hosted by SURFnet, the Dutch National Research and Education Network and held in the picturesque city of Maastricht on 3rd – 6th June 2013. For further information and to register, please visit https://tnc2013.terena.org/

IEEE 6th International Conference on Cloud Computing To discuss this emerging enabling technology of the modern services industry, CLOUD 2013 invites you to join the largest academic conference exploring modern services and software sciences in the field of Services Computing, formally promoted by the IEEE Computer Society since 2003. This event will take place in Santa Clara, CA, United States on 27th June - 2nd July 2013. For further information and to register please visit http://www.thecloudcomputing.org/2013/

ISC Cloud Conference 2013 This key conference will be held at the Marriott Hotel in Heidelberg, Germany on 23rd- 24th September 2013. Further information at http://www.isc-events.com/cloud13/

International Conference on Cloud and Green Computing (CGC2013) This conference will take place in Karlsruhe, Germany from September 30th to October 2nd 2013. Further information at http://socialcloud.aifb.uni-karlsruhe.de/confs/CGC2013/Calls.php



Table of contents

- Editorial Board
- Editorial - Prof. Keith G. Jeffery
- Migrating to Cloud as a means to cut budget
- Forests and the cloud: an international model of forest defoliation
- The UberCloud Experiment: paving the way to HPC as a service
- mist.io - touch the clouds - Mobile-friendly multi-cloud management, monitoring and automation
- CC1 system - the solution for private cloud computing
- TERENA Trusted Cloud Drive for Academic Research
- Scaling Software Challenges
- PROSE survey on requirements for hosting open source software projects
- News & Events


Editorial
Prof. Keith G. Jeffery, President of ERCIM
CloudSource Magazine, Issue 1, April 2013

The second Expert Group Report, ‘Advances in CLOUDs’, was published in December 2012 [1]. It followed the earlier report ‘The Future of CLOUD Computing’, published in January 2011 [2]. Both provide useful insights into the future of CLOUDs, and the second is clearly an evolutionary development of the first. The reports cut through the hype and make a realistic assessment of the current state and future prospects. In particular, they identify barriers to the development and take-up of CLOUD computing in Europe (and indeed wider) and propose research topics to address those barriers. The major conclusion is that there are opportunities for European businesses: first in the ICT industry itself, in providing infrastructure, platform and service-level offerings; and second in using CLOUDs to improve business ICT management and use, thus reducing costs and increasing opportunities. The Gartner Hype Cycle for CLOUDs reached the peak of expectation in 2010; some years in the trough of disillusionment now follow, then the slope of enlightenment, before the plateau of productivity is reached. This timescale means there is a current tendency to invest less effort and interest in CLOUDs, which gives Europe a unique chance to leap ahead: to define the future of CLOUDs and the related market players, for European ICT businesses to develop offerings, and for European business in general to prepare to adopt CLOUD computing.

However, the barriers mentioned above are restraining this possible growth. Although many commercial organisations are considering CLOUD computing or experimenting with it, there is as yet no concerted move to adopt CLOUD technology. Many European ICT SMEs are working on CLOUDs and producing good products and services, but they have not yet overcome the threshold barrier to massive take-up. One problem is that commercial businesses wish to use CLOUD in-house and expand elastically and interoperably onto one or more public CLOUDs (federated as necessary) for peak demand. The major barriers concern:
(a) fear and uncertainty about legal aspects, especially the EU directive on processing of personal data, which precludes outsourcing outside the EU unless the protection is equivalent to that of the EU [3];
(b) lack of confidence in entrusting commercial data to an outsourced service, even within the EU;
(c) associated fears over security and privacy;
(d) lock-in to particular proprietary solutions by public CLOUD vendors;
(e) lack of interoperability across CLOUD environments, which would permit easy elastic expansion from private (in-house) to public CLOUDs on demand when processing peaks are encountered;
(f) technology barriers such as reliable multi-tenancy, easy elasticity, reliability and autonomicity;
(g) lack of appropriate systems development environments and programming languages;
(h) lack of standards, for example for the description of CLOUD services.
Various EC-funded research projects are addressing some of these issues. The emerging results from those projects should advance the cause of widespread use of CLOUD computing.

[1] http://cordis.europa.eu/fp7/ict/ssai/home_en.html
[2] http://cordis.europa.eu/fp7/ict/ssai/docs/cloud-report-final.pdf
[3] Directive 95/46/EC



Case study: Migrating to cloud as a means to cut budget
Dr. Devendra D. Meshram, RTM Nagpur University, India
Dr. Omprakash M. Ashtankar, Kavikulguru Institute of Technology and Science, India
Urmila D. Meshram, Teaching Assistant, MTech, Japan




Different organizations approach IT needs in different ways, but some things are the same: we all want cost-effective solutions tailored to our needs. Cloud computing can provide this, reducing the cost of procuring, handling and maintaining services and resources. The option to migrate one’s own servers to the cloud also reduces costs, and results in changes to mindset as well as policy. This real-time study of an actual company and its IT budget revealed how cost-effective migrating to cloud can be, covering issues of data migration, security, disaster recovery, and basic service provision.

Scenario of change
The yearly cost of owning and handling a software application can be as much as four times the cost of its initial purchase, and companies can spend up to 75 percent of their total IT budget just maintaining and running existing systems and infrastructure [1]. Previously, this software-related budget was the main target of cost-reduction strategies. Now, however, the scenario has changed: hardware and infrastructure have become fair budgetary targets. As a result, concepts such as virtual desktop infrastructure, remote desktop services, cloud computing, smartphone-compatible services and web meetings are booming. Many companies offering enterprise resource planning (ERP) software are developing infrastructure-compatible solutions to suit the latest technological challenges, offering ease of access from anywhere and business accessibility using smartphones. The cost-driven elements of migrating to cloud and procuring cloud services include choosing and managing data types, migration practices, database management systems, targeted landscapes, operating systems, secured hosting, high availability, and disaster recovery.

Case study results: reshuffling the IT budget
After analyzing their IT budget, case study company ABC-IT [2] decided to migrate to the cloud. Their budget prior to migration (Figure 1, LHS) was fairly equally distributed between hardware, software, and operational costs. The majority of their software budget was allocated to software licensing. Their hardware budget was allocated to infrastructure networking, server acquisition and maintenance, data and application hosting, hardware allocation and storage. The operations budget covered support, transport management, user administration, consultants and development resources to help design and build custom systems.




Figure 1: Relative IT budget distribution before (LHS) and after (RHS) migration to cloud. (Legend: Hardware, Software, Operations.)

After migration, their software licensing costs remained the same, but there was a comparative drop in operation and hardware budgets (Figure 1, RHS), substantially reducing their overall cost base. Staff costs were reduced, since the cloud vendor was responsible for operating and maintaining hardware and software. Server costs were reduced, since data was managed in a single data center. The cost of infrastructure developed for deploying applications in the cloud - including hardware, software, and operational infrastructure - was lower than the cost of deploying these same applications on-premises.
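The shift shown in Figure 1 can be illustrated with a back-of-the-envelope comparison. All figures below are hypothetical and are not taken from the ABC-IT study; they only mimic the pattern described (software flat, hardware and operations falling):

```python
# Back-of-the-envelope IT budget comparison (hypothetical figures,
# not taken from the ABC-IT case study).

def total_budget(hardware, software, operations):
    """Sum the three budget categories used in Figure 1."""
    return hardware + software + operations

# Before migration: roughly equal split across the three categories.
before = total_budget(hardware=100, software=100, operations=100)

# After migration: software licensing stays the same, while hardware
# (servers, hosting, storage) and operations (support, administration)
# drop because the cloud vendor now runs the infrastructure.
after = total_budget(hardware=40, software=100, operations=60)

saving = 1 - after / before
print(f"Overall cost base reduced by {saving:.0%}")  # prints "33%" with these figures
```

With these illustrative numbers the overall cost base shrinks by a third, even though the software line item is unchanged, which is exactly the reshuffling the case study describes.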

Transition from a traditional approach
Further savings are also available. The cost of updating software is drastically reduced. Unusual operational costs can be covered by optional tariffs. Tailoring contracts to offer threshold-level basic services also works to lower costs. Managing data in-house requires users to manage hardware, operating systems, security and so on; when operating from the cloud, the only user burden is the procurement of services. IT companies are rightly wary about handing over their data management to third-party operators: mistakes could impact their entire business. To deal with this, the introduction of penalties for non-availability of contracted services may increase user confidence in service availability. Ultimately, in this case study, the transition to cloud computing significantly reduced the IT budget and changed business management thinking.

[1] Timothy Chou, “The End of Software,” SAMS Publishing, 2005, p. 6.
[2] Company names have been changed.



Forests and the cloud: an international model of forest defoliation
Satellites and aerial drones can give us one global perspective. ICT can offer another.
Bohdan J. Naumienko, Eurotech Ltd, Poland

Forests are the lungs of the Earth. Their ongoing health is a global challenge, not only in its geographic scale, but also in terms of the integrated, multidisciplinary research required to meet this challenge. Our research depends on international data sources and environmental sensors. We need a durable, globally accessible computerized model of forest health.

[1] By ‘integral’ management we mean more than ‘integrated’: inter alia, real support for decision-makers’ awareness, based on complex, rapid and automated mapping in a GIS environment, from sensors to user (see the following text and, for more information, [2] Naumienko, B.J. (2009)).




What can cloud computing offer?
Let us suppose our cloud computing concept integrates innovative methods for forest monitoring, protection, cultivation and management. We could include data from Unmanned Aerial Vehicles (UAVs), Europe’s Global Monitoring for Environment and Security program (GMES), the Group on Earth Observation (GEO), the G20 Global Agricultural Geo-Monitoring Initiative (GEOGLAM), and FOREST INTEGRAL [1] OBSERVATION (FIO), which uses satellite and UAV data to monitor global forestry. We could also lead research in the polar (Norway), north-temperate (Poland), south-temperate (Italy, Turkey) and subtropical (China) zones. What are the key elements of such a computerized, distributed model of the forest as an ecosystem?

Calculating an average tree
Forest modelling can be based on the average tree, statistically estimated from many forest representatives. Any tree can have three modelled areas:

1. the atmosphere,
2. the tree’s foliage, and
3. the tree’s roots.

Since branches and trunk are considered transport pipelines only, leaf photosynthesis represents a tree’s healthiness. Leaf health can be impacted by three main factors: access to sunlight, the water transport environment, and root pathogens. We model the sun’s energy using the variable S (intensity of light), and indirectly using the variables O and C (concentration of O2 and CO2, respectively), as well as T (temperature). Pathogens are distributed to the tree’s whole foliage through the roots, trunk and branches, the other parts of the model. For example, modelling an ash (Fraxinus) infected by either virulent or avirulent strains of Phytophthora relates to tree-invasive agent interactions, both compatible (leading to disease development) and incompatible (host-specific resistance), as well as to the effectiveness of specific cultivation methods. Generally, host-pathogen interactions should allow us to target specific pathways to enhance disease resistance in forestry. Any tree is identified by its geographical coordinates g = (i,j) and by a number k of biochemical compounds needed for vegetation, transported from the atmosphere and roots to the leaves. Thus we will be able to link ground and laboratory research to a specific tree, and not only examine the results of using phosphites in forest cultivation, but also identify candidate genes involved in virulence. What is more, we can search for relations between phosphite treatment on the ground and the quantity D: a generalized defoliation index estimated from the air. This index is one of the fundamental measures of forest health and vitality (see Figure 1).

Figure 1: Defoliation index (D): left, D = 0%; middle, D = 20%; and right, D = 70%.
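The average-tree model described above can be written down as a small data structure. The sketch below only mirrors the variables named in the text (g = (i,j), S, O, C, T, the k compounds, and the defoliation index D); the class name, field values and aggregation function are illustrative, not part of the published model:

```python
from dataclasses import dataclass

@dataclass
class AverageTree:
    """Statistically averaged tree at grid coordinates g = (i, j)."""
    i: int            # geographic grid row
    j: int            # geographic grid column
    S: float          # intensity of light
    O: float          # O2 concentration
    C: float          # CO2 concentration
    T: float          # temperature
    compounds: dict   # the k biochemical compounds transported to the leaves
    D: float          # defoliation index, percent (0 = fully healthy)

def mean_defoliation(trees):
    """Aggregate D over a forest sample, as it would be estimated from the air."""
    return sum(t.D for t in trees) / len(trees)

# Three illustrative trees matching the D values shown in Figure 1.
sample = [
    AverageTree(0, 0, S=0.8, O=0.21, C=0.04, T=18.0, compounds={"P": 1.2}, D=0.0),
    AverageTree(0, 1, S=0.7, O=0.21, C=0.04, T=18.5, compounds={"P": 0.9}, D=20.0),
    AverageTree(1, 0, S=0.6, O=0.20, C=0.05, T=19.0, compounds={"P": 0.4}, D=70.0),
]
print(mean_defoliation(sample))  # -> 30.0
```

Storing records of this shape in a shared database is what makes the cross-study comparisons discussed in the next section possible.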




Linking results
When data like the above are presented in static documents, their effect is limited. Storing results in a cloud-based database, however, would make it easier to analyse and combine data, as well as to update and accumulate knowledge. For example, if the infection of trees by Phytophthora had been modelled in the cloud, the results of one model could be used in further studies, comparing tree-invasive pathogens, disease development, host-specific resistance, or cultivation methods across different research efforts, especially those drawing concurrently on ground, aerial and satellite monitoring implemented in different regions of the Earth.

Figure 2: Architecture of a potential cloud solution.

[Figure 2 shows sources (satellite, aerial and UAV imagery archives) feeding a data-management layer (file transfer; extraction, transform, loading; image and GEO databases; GIS server), exposed through web services and an SDK (Software Development Kit, e.g. for Microsoft .NET) to web applications (GIS portal, maps, workflow, web clients) and tools (imagery tools, video exploitation, GIS desktop and GIS mobile clients).]

In addition to offering cloud solutions such as software and infrastructure ‘as a service’, we must also offer processes: capturing as a service, pre-processing as a service, data sharing as a service, and data delivery and user-processing, all as a service.

Figure 3: Cloud solutions viewed as processes-as-a-service.

[Figure 3 legend: CA: Capturing; PP: Pre-Processing; SH: Sharing; DE: Delivering; UP: User’s Processing.]


Modelling any ecosystem, including that of a forest, requires a global approach to internet, information and communication technologies. From that global approach, we can also access scaled-down information services, useful for research, education and business. Using cloud computing in this fashion achieves three important aims: it connects international innovators, it offers improved ecological security, and it leverages the local strength of small and medium enterprises, offering a truly global reach.

References:
[1] Oszako, T. (2012) Methods of Forest Healthiness Estimation (in Polish, unpublished). Forestry Research Institute, Warsaw.
[2] Naumienko, B.J. (2009) Games, geometries, languages, processes: the foundations of integral education. Far East Journal of Mathematical Education, Vol. 3, 1, 41-74.



The UberCloud Experiment: paving the way to HPC as a service
Wolfgang Gentzsch, Independent HPC Cloud Consultant, Germany
Burak Yenier, Fiserv, US

There are several million small- and medium-size manufacturers around the world, most of them using ordinary workstations for their daily design and development work. What could they achieve with greater computing power?



Since buying an expensive compute cluster is usually not an option, renting computing power is the next best thing. But business in the cloud still comes with challenges: application complexity; privacy of sensitive data and intellectual property; expensive data transfers; conservative software licensing; performance bottlenecks from virtualization; user-specific system requirements; missing standards; and lack of interoperability between different clouds. On the other hand, renting remote computing resources comes with extremely attractive benefits: no lengthy procurement and acquisition cycles, the ability to rapidly upscale or downsize, the opportunity to shift focus from capital expenditure to the more flexible operational expenditure, and business flexibility thanks to on-demand, at-your-fingertips resources. How can we reduce these barriers and allow businesses to optimize the benefits? We tried an uber-experiment to find out.

On board the UberCloud Since August 2012, more than 400 organizations and individuals from around the world have joined the open, free, humanitarian UberCloud Experiment (http://www.hpcexperiment.com/). Designed to explore the end-to-end process of accessing and using remote computing resources, the UberCloud is now host to 60 international teams aiming to run end-user applications on remote computing resources. At the same time, we’re analyzing different HPC clouds and their interoperability, and finding ways to overcome the many roadblocks.

What’s UberCloud’s secret?
We offer users a long list of real benefits:
- a vendor-neutral service;
- no need to hunt for resources in a crowded cloud market;
- professional match-making of end-users with suitable service providers;
- free, on-demand access to hardware, software, and expertise during the experiment;
- a carefully tuned, end-to-end and step-by-step process for accessing remote resources;
- the opportunity to learn from the best practice of other participants;
- a no-obligation, risk-free proof-of-concept: no money involved, no sensitive data transferred, no software license concerns, and the option to stay anonymous.
With these benefits, the experiment is leading the way to increasing business agility, competitiveness, and innovation. Participants are also encouraged to make use of UberCloud Exhibit (http://www.exhibit.hpcexperiment.com), a directory of professional cloud services for the wider CAE, life sciences, and big data communities.




Roadblocks so far - and their resolutions
Several major roadblocks and their resolutions have been reported by our teams during the course of their projects. More details about the lessons learned and recommendations can be found in a recent article in Bio-IT World. Some of the major roadblocks were:
- Information security and privacy, even in this experiment setting: guarding raw data, processing models and resulting information.
- Lack of easy, self-service registration and administration: still not available with most providers.
- Incompatible software licensing models: the software licensing landscape is still difficult to navigate.
- High expectations: can lead to disappointing results or even to project failure.
- Reliability of resource providers: some teams had to wait for weeks before capacity could be allocated.
- Need for interoperable clouds: migration of work to another cloud resource is still a real challenge.

Join us
There are many reasons to join this community experiment. For a start, HPC as a service is the next big thing. But beyond that, HPC is complex, and it’s easier to tackle within a community. Barriers to entry are low, you can learn by doing, without risk, and you see how all this fits into your future research or business direction. You can ask questions or register for the experiment at http://www.hpcexperiment.com/

Screenshot of the result of one of the experiment teams: development of stents for a narrowed artery after balloon angioplasty, to widen the artery and improve blood flow. This experiment was performed in the Cyclone HPC Cloud.




mist.io - touch the clouds
Mobile-friendly multi-cloud management, monitoring and automation
Markos Gogoulos, Dimitris Moraitis, Mike Muzurakis and Christodoulos Psaltis, Unweb.me Ltd, Greece

As cloud vendors compete by introducing a range of features and pricing models, consumers are increasingly combining a number of cloud offerings to construct the service combination they need. While this allows users to benefit from a wider range of service options, it can also have a negative effect on infrastructure management, introducing different sets of tools and APIs for each cloud. Add to this the need for VM maintenance, monitoring and provisioning, and using multiple clouds can end up a cumbersome and time-consuming process.



What’s required, then, is a next-generation multi-cloud interface, preferably with mobile-friendly virtual machine (VM) management, monitoring and automation across clouds. Welcome to mist.io.

mist.io
mist.io is open source software and a freemium service that helps you manage and monitor VMs across multiple public, private and hybrid clouds using your mobile phone, tablet or laptop. You can use mist.io to create, reboot, destroy and tag VMs on any supported infrastructure-as-a-service (IaaS) cloud. More importantly, you can send secure shell (SSH) commands using a web interface optimized for touchscreens, allowing you to solve infrastructure issues while on the road. Figure 1 presents mist.io’s architecture. Figure 2 shows how its interface has been optimized for touchscreens. Using the premium mist.io service, you can also configure events that trigger notifications or automated responses. In the future, mist.io will also facilitate the migration of VMs across different clouds.

Figure 1: mist.io architecture. The browser talks to the mist.io server through a REST API, and the server drives Linode, Amazon EC2, OpenStack and Rackspace Cloud through their native APIs.

Figure 2: mist.io’s touchscreen-optimized interface.

Inside mist.io
Mist.io’s user interface is an HTML5 application based on jQuery Mobile [1] and Ember.js [2]. The backend is built in Python using the Pyramid web framework [3], which implements a simple REST API using JSON to handle network calls. Communication with the cloud backends is realized using Apache Libcloud [4], also implemented in Python. Mist.io’s architecture is client-driven: most tasks are handled by JavaScript, in the browser. The server side requires the API keys to communicate successfully with the cloud providers.
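As a rough sketch of this client-driven design, the server can be reduced to a thin dispatcher that maps REST-style requests onto per-cloud driver objects. Everything below is invented for illustration: the route shapes, class names and methods are not mist.io's actual API, and `DummyDriver` merely stands in for Apache Libcloud's real EC2/Rackspace/OpenStack drivers:

```python
import json

class DummyDriver:
    """Stand-in for a per-cloud Libcloud driver (illustrative only)."""
    def __init__(self, name):
        self.name = name
        self.nodes = [{"id": "vm-1", "state": "running"}]

    def list_nodes(self):
        return self.nodes

    def reboot_node(self, node_id):
        return {"id": node_id, "action": "reboot", "backend": self.name}

# The server keeps one driver per configured cloud backend; the API keys
# needed to build real drivers live server-side, as described above.
BACKENDS = {"ec2": DummyDriver("ec2"), "rackspace": DummyDriver("rackspace")}

def handle(method, path):
    """Dispatch a REST-style call, e.g. GET /backends/ec2/machines, to JSON."""
    _, _, backend, resource, *rest = path.split("/")
    driver = BACKENDS[backend]
    if method == "GET" and resource == "machines":
        return json.dumps(driver.list_nodes())
    if method == "POST" and resource == "machines" and rest[1:] == ["reboot"]:
        return json.dumps(driver.reboot_node(rest[0]))
    raise ValueError("unsupported route")

print(handle("GET", "/backends/ec2/machines"))
print(handle("POST", "/backends/ec2/machines/vm-1/reboot"))
```

The point of the sketch is the division of labour: the browser-side JavaScript decides what to do, while the server only translates authenticated REST calls into provider-specific driver calls.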




Monitoring service
Along with the open-source tool, a premium monitoring service has been developed to provide detailed VM usage statistics and customizable alerts that trigger email and SMS notifications or automated actions (e.g. deploying another VM on high load).

Figure 3: mist.io monitoring a VM

On each VM’s dashboard (Figure 3), mist.io plots metrics for CPU utilization, memory consumption, system load average, I/O operations per second (IOPS) and network traffic. Setting up events - including conditional events - is simple and touchscreen-optimized. Users can customize alert actions to include notification, creation or destruction of machines, and command execution, for example.
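A conditional alert of the kind described boils down to a rule that pairs a metric threshold with an action. The rule format below is invented for illustration and is not mist.io's actual configuration syntax:

```python
# Illustrative alert rules: (metric, comparator, threshold, action).
# This format is invented for the sketch, not mist.io's real syntax.
RULES = [
    ("cpu",  ">", 90.0, "notify:email"),
    ("load", ">", 8.0,  "deploy:extra-vm"),
]

def triggered(metrics, rules):
    """Return the actions whose condition holds for the latest metric sample."""
    ops = {">": lambda a, b: a > b, "<": lambda a, b: a < b}
    return [action for metric, op, limit, action in rules
            if ops[op](metrics.get(metric, 0.0), limit)]

# Latest sample from a VM's dashboard metrics.
latest = {"cpu": 95.2, "load": 3.1, "iops": 120}
print(triggered(latest, RULES))  # -> ['notify:email']
```

The monitoring service would evaluate such rules against each incoming sample and hand the resulting actions (notification, machine creation/destruction, command execution) to the automation layer.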

Conclusion Mist.io is a mobile-friendly, multi-cloud web-app for managing and monitoring VMs. Mist.io provides a simple interface to handle common VM tasks, execute remote commands via a touchscreen-optimized shell interface, and monitor the VMs’ health status, while triggering actions in response to user-defined events. In the future we will enhance mist.io to facilitate migration of VMs across different clouds, to help users mitigate vendor lock-in and exploit the best features each cloud has to offer. We also plan to introduce a smarter alerting service using crowdsourced data on how users react under certain circumstances to help mist.io identify common problems and recommend relevant solutions. Mist.io is open-source and available at https://github.com/mistio/mist.io. The freemium service is hosted in private beta at https://mist.io. It’s scheduled to go public in May.

References
[1] jQuery Mobile, http://jquerymobile.com
[2] Ember.js, http://emberjs.com
[3] Pyramid, http://www.pylonsproject.org/
[4] Apache Libcloud, http://libcloud.apache.org



CC1 system - the solution for private cloud computing
Mariusz Witek and team, Institute of Nuclear Physics PAN, Poland

We began the “Cloud Computing for Science and Economy” project - known as CC1 - at the end of 2009 at the Institute of Nuclear Physics PAN (IFJ PAN), Poland. The project is financed by the European Commission and the Polish Ministry of Science and Education (Innovative Economy, National Cohesion Strategy). In the project’s first phase, we developed a fully functional computing system in the form of a private cloud, and made it available to all IFJ PAN users. In the second phase, we implemented a distributed, centrally managed cloud architecture, enabling the resources of many distributed clusters to be shared.




Introducing the CC1 system
The result of this work - the CC1 system - provides resources within the infrastructure-as-a-service (IaaS) model. A schematic view of the system is shown in Figure 1. The central element of the system is the cloud manager (CLM), which receives calls from user interfaces (web browser-based or EC2 interfaces) and passes commands to cluster managers (CMs). A cluster manager runs on each individual cluster, handling all the low-level operations required to control virtual machines (VMs). We used the Python programming language for the top layer. Virtual resources are managed using libvirt, a lower-level virtualization toolkit that can support a number of virtual machine managers; currently, CC1 uses the kernel-based virtual machine (KVM).

Figure 1: The structure of the CC1 system.
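The layered control flow just described (interface to CLM, CLM to CM, CM to node) can be sketched in Python, the language of CC1's top layer. The class and method names below are invented for illustration and are not taken from the CC1 code base; a real cluster manager would drive libvirt/KVM where the comment indicates:

```python
# Illustrative sketch of CC1's layered control flow
# (class and method names are invented, not CC1's actual API).

class ClusterManager:
    """Runs on one cluster; performs the low-level VM operations."""
    def __init__(self, name):
        self.name = name
        self.vms = {}

    def start_vm(self, vm_id):
        self.vms[vm_id] = "running"   # a real CM would call libvirt/KVM here
        return f"{self.name}:{vm_id} running"

class CloudManager:
    """Central CLM: receives interface calls and forwards them to a CM."""
    def __init__(self, clusters):
        self.clusters = {c.name: c for c in clusters}

    def handle(self, cluster, command, vm_id):
        cm = self.clusters[cluster]   # pick the target cluster in the federation
        if command == "start":
            return cm.start_vm(vm_id)
        raise ValueError(f"unknown command: {command}")

clm = CloudManager([ClusterManager("cluster-a"), ClusterManager("cluster-b")])
print(clm.handle("cluster-a", "start", "vm-42"))  # -> cluster-a:vm-42 running
```

Because the CLM only routes commands, adding another cluster to the federation is just a matter of registering one more cluster manager, which is how the distributed, centrally managed architecture of the second project phase scales out.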

[Figure 1 shows the interface layer (WWW and EC2 interfaces) calling the CLM (Cloud Manager, with its own database and storage), which passes commands to the CM (Cluster Manager, with its own database) controlling the VMs running on each node.]


The main features of the system include:
- a custom web-based user interface,
- automatic creation of virtual clusters with a preconfigured batch system,
- groups of users with the ability to share resources,
- permanent virtual storage volumes that can be mounted to a VM,
- a distributed structure: a federation of clusters running as a uniform cloud,
- a quota for user resources, and
- a monitoring and accounting system.

Keeping it simple In developing the CC1 system, we emphasized simplicity: user access, administration and installation are all relatively easy. Self-service access to the system is provided via an intuitive web interface. The administration module contains a rich set of tools for user management and system configuration. We also developed an automatic installation procedure based on a standard package management system of Linux distributions. This way, the system can be set up quickly and operated without the need for a deep understanding of the underlying technology. One of CC1’s crucial features is the easy creation of VM computing clusters equipped with a preconfigured batch system. This allows users to perform intensive calculations on-demand, without the need for time-consuming manual configuration. When calculations are completed, the VM cluster can be destroyed and the resources can be made accessible to other users.

Putting it into operation A private cloud based on the CC1 system was installed in IFJ PAN at the beginning of 2012. A number of VM images with various Linux flavours have been made available to users. Currently, about 1000 CPU cores are shared by various research teams at the Institute and their collaborators. We have achieved stable operation and high CPU utilization of above 80%, making us confident the system will continue to be useful in the future.

A private cloud: benefits in action
Multidisciplinary institutes such as IFJ PAN traditionally dedicate a given computer cluster to a single research group, typically resulting in low utilization and inefficient use of resources. A private cloud model, such as the CC1 system, enables various research groups to share computing resources. It can significantly boost the efficiency of infrastructure usage and at the same time reduce maintenance costs. The private cloud computing model can improve the delivery of services and reduce the cost of IT operations. In addition, it ensures confidential data can be processed on well-protected local infrastructure. The CC1 software is distributed under the Apache License 2.0; see http://cc1.ifj.edu.pl for more details.



TERENA Trusted Cloud Drive for Academic Research
by Peter Szegedi, TERENA, The Netherlands

The Trusted Cloud Drive (TCD) project [1] aims to pilot an experimental, high-performance, trusted cloud storage solution for the Research and Education (R&E) community gathered under the Trans-European Research and Education Networking Association (TERENA) [2]. It builds on an open source cloud storage brokering platform [3] that provides federated user access and strong data encryption, supports various storage back-ends, and, most importantly, ensures the separation of the storage data from the metadata (such as file attributes, encryption keys, etc.), which is kept in a trusted location. TCD can also be considered storage middleware that maintains trust and privacy within the user domain and acts as a secure relay towards the connected private and/or public providers’ domains.



National Research and Education Networks (NRENs) around the globe - such as Internet2 in the US or SURFnet in the Netherlands - connect universities, university colleges and campuses with high-capacity links, peer with commercial networks at major exchanges, and provide advanced value-added services to the R&E community. They are membership organizations, governed by universities, subsidized by national governments, and operated on a non-profit basis.

Why the Trusted Cloud Drive? Undoubtedly, massive data storage is vital for academic research. Individual researchers and students on campus increasingly use the commercial cloud storage offerings available on the market (e.g., Google Drive, iCloud, Dropbox). However, these public services are not primarily designed for the needs of sensitive research data sets. Universities and research institutes are therefore seeking partnerships with private storage solution integrators and application developers (e.g., PowerFolder, SpiderOak, OwnCloud) to build and operate their own storage infrastructure on campus, which requires not only capital investment but also operational knowledge and experience. These private storage clusters can provide the desired performance and data privacy but, due to the lack of standards and sometimes proprietary vendor solutions, cannot always interface with each other or with the public services. NRENs are in a good position to deliver high-performance data storage infrastructure as a service tailored to the R&E community over their advanced networks at national scale. Moreover, thanks to the European and global NREN collaboration, they can also aggregate demand and facilitate the sharing of community-provided storage across TERENA members.

What does the TCD offer? Trust is the main asset of NRENs, as they are governed by the universities that are also their major clients. The Trusted Cloud Drive service pilot - an initiative of TERENA - builds on this trust relationship and puts the necessary software tools and know-how in NRENs' hands. The open source cloud storage brokering platform incorporated by the TCD pilot can be installed at university locations or hosted by NRENs to aggregate demand and broker storage resources. It can also act as a storage middleware layer that separates the underlying trust domain from the storage back-end providers' domain, thereby maintaining the data privacy of the users. Users authenticate to TCD with the federated account provided by their home institution, so a rich set of identity attributes is available to determine the actual service offering via the platform. On the front-end, a native web application or standard WebDAV access can be used for typical disk operations. Acting as a middleware, the TCD platform can also be integrated with other, feature-rich storage applications provided by commercial vendors or the community. Beyond the scope of the pilot, TERENA has been discussing potential integration scenarios with, for example, PowerFolder and the OwnCloud community. Trusted Cloud Drive performs the encryption and separates the metadata from the storage data. The encryption keys and the sensitive metadata are kept in the local metadata store. The encrypted storage data blob can then be exported to the public cloud using various storage back-end APIs, including Amazon S3, OpenStack Swift, Pithos+ (the Greek NREN's cloud) and, soon, other APIs supported by Jclouds.
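The core idea - keep keys and metadata in the trusted domain and export only ciphertext - can be sketched in a few lines. This is a toy illustration, not TCD's actual code: the store names are invented, and the XOR-keystream cipher is for demonstration only (a real deployment would use a vetted cipher such as AES):

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from `key`.
    Toy construction for illustration -- NOT cryptographically secure."""
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice decrypts.
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

# Trusted domain: metadata (name, key) never leaves this store.
metadata_store = {}
# Untrusted domain: stand-in for a public back-end (e.g. Amazon S3).
public_blobs = {}

def upload(name: str, data: bytes) -> None:
    key = os.urandom(32)
    metadata_store[name] = {"key": key, "size": len(data)}
    public_blobs[name] = encrypt(key, data)   # only ciphertext leaves

def download(name: str) -> bytes:
    key = metadata_store[name]["key"]
    return encrypt(key, public_blobs[name])   # XOR stream is symmetric

upload("results.csv", b"sensitive research data")
assert public_blobs["results.csv"] != b"sensitive research data"
assert download("results.csv") == b"sensitive research data"
```

Because only `public_blobs` would ever reach a storage back-end, a compromise of the public provider exposes nothing readable without the keys held in the trusted metadata store.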




Progress to date The TERENA TCD pilot started in May 2012, and the final results, with a list of potential use cases, service delivery scenarios and legal advice, are expected to be published in April 2013. During the open pilot period (the last nine months) the software platform has been installed and tested at NRENs in Greece, the Czech Republic, Croatia, Poland, Belgium, Portugal, Spain and Brazil. Altogether, 19 NRENs, 8 universities and 3 research labs have expressed their interest in experimenting with TCD in one way or another. TERENA is eager to maintain this community around the open source code in order to ensure the long-term sustainability of the Trusted Cloud Drive. Commercial companies are also interested in exploring potential service integration scenarios that go beyond the scope of the pilot.

TERENA TF-Storage Task Force participants discuss the Trusted Cloud Drive pilot in March 2013 in Berlin, Germany

[1] http://terena.org/clouddrive [2] http://www.terena.org/ [3] https://github.com/VirtualCloudDrive/CloudDrive



Scaling Software Challenges Alvaro Simón, CESGA, Spain Carlos Fernández, CESGA Victor Mendez, PIC Jordi Guijarro, CESCA Jesús Bermejo, Telvent As the digital universe expands, so too does software production, driving innovation in key European non-software industrial sectors such as automotive, aerospace, medical equipment, telecom equipment and consumer electronics [1]. In fact, it is difficult to identify a domain in which innovation does not rely on software. Managing this explosion of software requires new approaches that address more than technical scalability issues.




Software and the economy The Organisation for Economic Co-operation and Development (OECD) held two conferences focusing on the economic relevance of software: the first in Cáceres, Spain (November 2007), and the second in Tokyo, Japan (October 2008). The resulting study addressed themes such as security, privacy, mobility, interoperability, accessibility and reliability from a user perspective [2]. Identifying the boundaries of the software industry was noted as a continuous challenge.

OSMOSE, OSIRIS and OSAmI-Commons Several projects have recently tackled the technical challenges of software scaling, including OSMOSE (Open Source Middleware for Open Systems in Europe, 2003–2005) [3], OSIRIS (Open Source Infrastructure for Run-time Integration of Services, 2005–2008) [4] and OSAmI-Commons (Open Source Ambient Intelligence Commons, 2008–2011) [5].

Figure 1. MEGHA concept validation test bed

These projects worked on open source modular and dynamic middleware foundations, service bus implementations, federated identity and reusability frameworks. Links between composite and virtualization cloud approaches were also identified, leading to the set-up of the MEGHA Federated Cloud (2010), an Intercloud initiative [6].

The MEGHA Federated Cloud The MEGHA Working Group promotes and coordinates contributions to cloud computing R&D, education and management made by institutions affiliated with RedIRIS [7] in Spain. MEGHA established direct links with initiatives such as e-Science [8] and CRUE-TIC [9] in Spain, and internationally with TERENA, the OpenNebula Interoperability Working Group, GÉANT, EGI and OGF.




Concept validation In the first phase (2010–2011), MEGHA validated federated cloud platforms using OCCI [10] to streamline the use of cloud technologies among R&E service centers. Representative infrastructure providers (CESCA, CESGA, PIC), middleware providers (OpenNebula, RedIRIS, OSAmI-Commons) and users (UAB, UOC, UM), together with intermediary identity and brokering resources (RedIRIS), joined efforts to demonstrate the viability of this approach. The results stimulated the development of use cases including e-learning platforms on demand (the Learning Apps project, co-financed with FEDER funds), a distributed HPC platform (e-Science), and Virtual Labs (VDI) in a hybrid scenario (academic services).
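OCCI is an HTTP-based interface: a client describes the resource it wants through category and attribute headers, so the same request can be sent to any compliant provider. The sketch below builds such headers in Python; the function name is our own invention, and the endpoint a request would actually be sent to is provider-specific:

```python
def occi_compute_headers(cores: int, memory_gb: float, hostname: str) -> dict:
    """Build HTTP headers for an OCCI 'create compute' request
    (text/occi rendering; the attributes follow the OCCI
    infrastructure schema)."""
    return {
        "Content-Type": "text/occi",
        # The Category header names the resource kind being created.
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        # Attribute values travel in X-OCCI-Attribute headers.
        "X-OCCI-Attribute": (f"occi.compute.cores={cores}, "
                             f"occi.compute.memory={memory_gb}, "
                             f'occi.compute.hostname="{hostname}"'),
    }

headers = occi_compute_headers(2, 4.0, "vm-demo")
```

Because the request is plain HTTP plus a standard attribute vocabulary, the same header set works against an OpenNebula site fronted by rOCCI as well as any other OCCI-compliant manager, which is precisely what makes federation across heterogeneous providers feasible.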

Ongoing developments Federating research and academic community clouds must address new technical challenges:

- Federated user authentication and authorization mechanisms, and user management between different cloud managers
- Secure VM image distribution and validation among heterogeneous cloud managers
- A federated cloud accounting system integrating the accounting records of multiple cloud managers and supporting federated cloud governance
- Monitoring and notification of unpredictable changes in availability and reachability status

MEGHA is working on these challenges. For example, the new rOCCI [11] server and OCCI [12] clients tested by the CESGA and PIC teams with OpenNebula 3.8.x are able to use x509 user certificates for authentication. MEGHA authentication is based on x509 user and robot certificates issued by the Spanish pkIRISGrid CA. This new feature was used by PIC developers to enhance the DIRAC software framework [13], originally developed by LHCb [14]. The new cloud plug-in developed by the PIC and USC teams integrates a cloud broker and user authentication, and supports different cloud managers such as OpenNebula or CloudStack [15]. Currently MEGHA members are working to enable virtual organizations (VOs), or dynamic sets of users, to share federated cloud resources.
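The broker pattern described above - one entry point dispatching to heterogeneous cloud managers - can be sketched as a driver registry behind a common interface. This is an illustrative design, not the actual DIRAC plug-in: all class names are invented, and the drivers only return placeholder strings where real API calls (OpenNebula XML-RPC, CloudStack REST) would go:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Common interface a federated broker expects from each
    site-specific cloud manager (names are illustrative)."""
    @abstractmethod
    def start_vm(self, image: str) -> str: ...

class OpenNebulaDriver(CloudDriver):
    def start_vm(self, image: str) -> str:
        return f"one:{image}"      # placeholder for an ONE XML-RPC call

class CloudStackDriver(CloudDriver):
    def start_vm(self, image: str) -> str:
        return f"cs:{image}"       # placeholder for a CloudStack REST call

class Broker:
    """Dispatches VM requests to whichever driver serves a site."""
    def __init__(self):
        self.sites: dict[str, CloudDriver] = {}

    def register(self, site: str, driver: CloudDriver) -> None:
        self.sites[site] = driver

    def start_vm(self, site: str, image: str) -> str:
        return self.sites[site].start_vm(image)

broker = Broker()
broker.register("cesga", OpenNebulaDriver())
broker.register("pic", CloudStackDriver())
print(broker.start_vm("cesga", "sl6-worker"))
```

The point of the indirection is that user-facing components (authentication, accounting, VO policies) talk only to the broker, so adding a new cloud manager means writing one new driver rather than touching every client.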

Beyond the technical challenges Software scaling challenges derive not only from new technologies, but also from business models, methods, processes and tools. To engage with this topic, JCIS held a Conference on Service Science and Engineering, integrating the Scientific and Technical Conference on Web Services and SOA (JSWEB) and the Workshop on Business Processes and Service Engineering (PNIS) [16]. To continue this work, the SCALARE project (SCALing softwARE) [17] will soon start, uniting several European partners to address scaling software challenges across several dimensions.

Conclusion Digital technologies are transforming our society faster than ever, and the relevance of software in non-software markets is increasing. The approaches presented in this article are only a few of the many tackling scaling software challenges.




Cloud federation has been validated as a suitable starting point, while the availability of technical staff could be a future bottleneck. Increased efficiency in software development is needed, and more research is required to fully understand the implications of merging the physical and digital worlds.

Quick facts on the software and data explosion The Black Duck Knowledge Base [18] includes information from 800,000 projects across more than 5,500 sites and 2,200 software licenses. The amount of information in the world is doubling every two years [19]. 2.5 quintillion bytes of data are created every day, and over 90% of the data in the world today has been created in the last three years [20].

References
[1] Digital Economy Definition [Online]. Available: http://en.wikipedia.org/wiki/Digital_economy
[2] Organisation for Economic Co-operation and Development (OECD) [Online]. Available: http://www.oecd.org/sti/ind/44131881.pdf
[3] Open Source Middleware for Open Systems in Europe (OSMOSE) Project [Online]. Available: http://www.itea2.org/project/index/view/?project=46
[4] Open Source Infrastructure for Run-time Integration of Services (OSIRIS) Project [Online]. Available: http://www.itea2.org/project/index/view/?project=135
[5] Open Source Ambient Intelligence Commons (OSAmI-Commons) Project [Online]. Available: http://www.itea2.org/project/index/view/?project=230
[6] Intercloud Definition [Online]. Available: http://en.wikipedia.org/wiki/Intercloud
[7] MEGHA Working Group [Online]. Available: http://wiki.rediris.es/megha/MainPage
[8] Spanish e-Science Network [Online]. Available: http://www.e-ciencia.es/
[9] ICT Commission of the Spanish University Chancellors Conference (CRUE-TIC) [Online]. Available: http://www.crue.org/TIC/
[10] Open Cloud Computing Interface (OCCI) [Online]. Available: http://occi-wg.org/
[11] rOCCI server [Online]. Available: https://github.com/gwdg/rOCCI-server
[12] Thijs Metsch, Andy Edmonds: Open Cloud Computing Interface. OGF.org (2010) [Online]. Available: http://goo.gl/MxX19
[13] Distributed Infrastructure with Remote Agent Control (DIRAC) [Online]. Available: http://diracgrid.org/
[14] Large Hadron Collider beauty (LHCb) [Online]. Available: http://lhcb-comp.web.cern.ch/lhcb-comp/DIRAC/
[15] Méndez, V., Fernández, V., Graciani, R., Casajus, A., Fernández, T., Merino, G., Saborido, J.J.: The integration of CloudStack and OCCI/OpenNebula with DIRAC. Journal of Physics: Conference Series (2013)
[16] IX Conference on Service Science and Engineering (JCIS) [Online]. Available: http://www.kybele.etsii.urjc.es/jcis2013/
[17] Scaling Software (SCALARE) Project [Online]. Available: http://scalare.org/about-scalare/
[18] Black Duck KnowledgeBase [Online]. Available: http://www.blackducksoftware.com/products/knowledg-base
[19] EMC Corporation [Online]. Available: http://www.emc.com/leadership/programs/digital-universe.htm
[20] IBM [Online]. Available: http://www-01.ibm.com/software/data/bigdata/



PROSE survey on requirements for hosting open source software projects Alfredo Matos (CMS) Miguel Ponce de Leon (TSSG) Rui Ferreira (IT Aveiro) João Paulo Barraca (IT Aveiro) Widespread use of free/libre/open source software (FLOSS) in European funded projects is vital to innovation transfer, but such software often does not enter general use. Legal issues, lack of business drivers, incomplete documentation and lack of knowledge about FLOSS are some of the most common reasons for this.




PROSE PROSE, an EU-funded project tasked with promoting FLOSS, aims to provide a common cloud platform on which open source projects can be hosted and information shared. But how would such a platform work? And what would be its main requirements and features? PROSE turned to ICT FP7 participants for answers.

Consultation process Survey questions were divided into four themes (see Figure 1). Most questions were optional, allowing respondents to focus on the areas they considered most relevant. The survey was disseminated through EU ICT projects, resulting in 42 anonymous responses, 25 of which came from respondents identifying themselves as members of universities, research laboratories or companies.

Figure 1. Goals for each survey question group:

Source Code Hosting (G1): Identify the primary tools for source code management and identify the main capabilities and limitations for individual projects hosted in the forge.

Collaboration Tool (G2): Establish the key collaboration mechanisms and recognize the associated collaboration models that drive software development within EC ICT projects.

Security and Support (G3): Determine what technical capabilities the platform must provide in order to adequately support its projects and ensure secure access to its users.

Project Metrics and Statistics (G4): Identify what software quality metrics can be built from the multitude of available data that accurately convey a measure of the quality of the project's results.

Choosing a platform software Privacy (e.g. private repositories or restricted content) stood out as a key driver when choosing a platform: ICT projects operate a mixed set of closed and open components, where only some results are public. This makes it difficult for such projects to reside entirely inside an open source forge. Another identified requirement dealt with version control systems (VCS): while Git was clearly preferred, Subversion still gathered significant support (see Figure 2). The need for software quality metrics was also identified. While SourceForge1, GitHub2, or Ohloh3 partially address this need, their offerings are still far from the kind of metrics we envision. Ideally, metrics should reflect project success. In the context of EU ICT, success can be defined by progression, dissemination and cooperation. Most participants identified the number of downloads and community ratings as good dissemination metrics. Other metrics can be obtained by analyzing activity and component (re)use. Interestingly, few participants found online social networking to be important. From the point of view of collaboration, mailing lists, forums and wikis remain the most popular solutions.
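One way to turn such survey signals into an actionable number is a weighted score over dissemination, community rating and recent activity. The function below is purely illustrative: the weights, the logarithmic download scaling and the saturation points are our own assumptions, not something defined by the PROSE survey:

```python
import math

def success_score(downloads: int, rating: float,
                  commits_last_month: int) -> float:
    """Combine dissemination and activity signals into a single 0-1
    score. Weights and scaling are illustrative choices only."""
    # Downloads vary over orders of magnitude, so use a log scale
    # that saturates at one million downloads.
    dissemination = min(math.log10(downloads + 1) / 6, 1.0)
    quality = rating / 5.0                     # 0-5 star community rating
    activity = min(commits_last_month / 100, 1.0)
    return 0.4 * dissemination + 0.3 * quality + 0.3 * activity

score = success_score(downloads=50_000, rating=4.2, commits_last_month=35)
```

Capping each component keeps one runaway signal (say, a download spike) from dominating the score, which matters if the metric is meant to compare projects of very different sizes.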




Figure 2. Importance of different version control software solutions (values < 0.5 express indifference or rejection by respondents): Git 31.4% (0.18); Subversion (SVN) 30.1% (0.17); Mercurial 10.3%; Visual SourceSafe 7.0%; CVS 6.4% (0.04); Darcs 6.2% (0.03); TFS 5.5%; Bazaar 3.2%.




Allura-based PROSE platform In light of these survey results, PROSE selected the Allura platform4 for its cloud offering. Allura addresses most of our requirements, offering self-hosting, platform customization, support for multiple tools (especially for VCS, supporting both Git and Subversion), and the inclusion of external cutting-edge functionality, such as advanced metrics. Such metrics may be sourced from other ICT projects, thereby increasing collaborative impact. The choice of Allura will also allow us to develop a close relationship with the (Allura-based) SourceForge ecosystem, with its high Alexa page rank and millions of users: establishing an ICT neighborhood on SourceForge bridges the interests of both communities and further strengthens the prospects of the PROSE platform. Following from this success, the PROSE survey5 is now in its second phase and will continue to shape our future decisions. Thank you to everyone who participated; we look forward to an ongoing collaboration.

1 SourceForge: http://sourceforge.net 2 Github: http://github.com 3 Ohloh: http://ohloh.net 4 Allura Platform: http://incubator.apache.org/allura/ 5 Available at the ICT PROSE Website: http://www.ict-prose.eu



NEWS & EVENTS Announcing the SUCRE EU-Japan Experts Group International collaboration on topics of common interest is of utmost importance for the future of cloud computing and open source in Europe. Responding to this concern, the European Commission is already investing in collaborative research with major cloud stakeholders worldwide, and particularly with researchers and industries in Japan and South-East Asia. In this context, the EU-funded SUCRE project set out to support the exploitation of Open Clouds in Europe and to foster an international dialogue on Open Cloud interoperability between Europe and Japan. In order to meet this twofold, ambitious goal, the project has successfully set up and now operates the SUCRE EU-Japan experts group. The group has engaged, and benefits from, the participation of a number of high-profile stakeholders from academia and industry in both regions of interest. Since mid-January 2013, the group members, along with SUCRE, have been conducting a dialogue on Open Cloud interoperability and collaboration opportunities between Europe and Japan. The discussion is carried out both online and offline, facilitated by a variety of means such as virtual meetings, collaborative editing tools, mailing lists, and a dedicated group on Facebook. Eventually, the results of these activities will be captured in a final report, to be delivered by SUCRE in March 2014, summarizing all discussions and findings of the SUCRE EU-Japan experts group. Most importantly, the report will include a set of recommendations from the experts related to the interoperability of European and Japanese Open Clouds, as well as to future collaboration prospects.

The SUCRE Young Researchers Forum The SUCRE consortium is pleased to announce the organization of a Young Researchers Forum that aims to bring the achievements and potential of open-source cloud developments to the researchers of tomorrow. The event, focusing on Cloud Computing and Open Clouds, will take place at the Karlsruhe Research Institute, Germany, on 23-24 September 2013 and, among other aims, will offer participants the opportunity to network with other researchers, international experts, and practitioners across disciplinary and national boundaries. As such, the main target audience of the event is junior researchers at a pre-PhD level who are embarking on research programmes in which Cloud Computing and Open Source are significant components. Internationally established lecturers will offer talks, and there will be an opportunity for hands-on tasks. This is in line with the goal of establishing and sustaining the missing connection between young people, the researchers and innovators of tomorrow, and experienced practitioners, researchers and policy makers in clouds. For further information and to register, please visit the project website at http://www.sucreproject.eu/summer-school and/or contact the organisers Prof. Rizos Sakellariou (rizos@cs.man.ac.uk) and/or Mrs. Eleni Toli (elto@di.uoa.gr)



OTHER RELATED EVENTS

The Fourth International Conference on Cloud Computing, GRIDs, and Virtualization (CLOUD COMPUTING 2013) This key event will take place in Valencia, Spain on May 27th - June 1st 2013. For further information and to register please visit www.iaria.org/

TNC2013 - TERENA Networking Conference The event will be hosted by SURFnet, the Dutch National Research and Education Network and held in the picturesque city of Maastricht on 3rd – 6th June 2013. For further information and to register, please visit https://tnc2013.terena.org/

IEEE 6th International Conference on Cloud Computing To discuss this emerging enabling technology of the modern services industry, CLOUD 2013 invites you to join the largest academic conference exploring modern services and software science in the field of Services Computing, formally promoted by the IEEE Computer Society since 2003. This event will take place in Santa Clara, CA, United States on 27th June – 2nd July 2013. For further information and to register please visit http://www.thecloudcomputing.org/2013/

ISC Cloud Conference 2013 This key conference will be held at the Marriott Hotel in Heidelberg, Germany on 23rd- 24th September 2013. Further information at http://www.isc-events.com/cloud13/

International Conference on Cloud and Green Computing (CGC2013) This conference will take place in Karlsruhe, Germany from September 30th to October 2nd 2013. Further information at http://socialcloud.aifb.uni-karlsruhe.de/confs/CGC2013/Calls.php
