CLOUDCOMPUTING WORLD Issue 5
Data Gathering in the Cloud: Making the Most of Network Analysis
Road to the Cloud
The Transformative Power of Google
Choosing the Right Companion
Advanced Analytics Hybrid Cloud Environments Budget-Based IT Launch Partners
Struggling with managing your IT infrastructure? Automate datacenters and IT labs with CloudShell. CloudShell's self-service platform, automated provisioning and reservation/scheduling system delivers cloud-like access to any combination of infrastructure, including legacy systems, physical networking, virtualization, SDN/NFV and public cloud resources, so you can enable: automated provisioning, faster test cycles, higher utilization, continuous integration and DevOps.
Learn more: info.qualisystems.com/cloudcomp
The Power of Google
New Social Insights from IBM and Twitter
Virtual Desktop Infrastructure
All the key news from the world of cloud
David McLeman discusses the benefits of Google Apps for Work
Applying social data to business decisions
Fundraising website chooses Melbourne Server Hosting
Moving desktops to the cloud
Choosing a data management partner
e-space north business centre 181 Wisbech Rd, Littleport, Ely, Cambridgeshire CB6 1RA Tel: +44 (0)1353 865403 firstname.lastname@example.org www.cloudcomputingworld.co.uk
GoodData is helping win Bolder Thinking more business
Budget Based IT
LGN Media LTD
Meeting business requirements
Publisher & Managing Director: Ian Titchener Editor: Nick Wells Production Manager: Rachel Titchener Design: Andy Beavis Financial Controller: Samantha White
Customer Centric Clouds Is your cloud primed for success?
Hybrid Cloud Environments How best to monitor a hybrid environment
Consumer Cloud Services Time to drop Dropbox
Data Gathering in the Cloud Making the most of network analysis
Understanding Unified Access
How unified access supports true BYOD
Cloud Solutions Choosing the right companion
The views expressed in the articles and technical papers are those of the authors and are not endorsed by the publishers. The author and publisher, and its officers and employees, do not accept any liability for any errors that may have occurred, or for any reliance on their contents. All trademarks and brand names are respected within our publication. However, the publishers accept no responsibility for any inadvertent misuse that may occur. This publication is protected by copyright © 2015 and accordingly must not be reproduced in any medium. All rights reserved. Cloud Computing World stories, news, know-how? Please submit to email@example.com
DATA CENTRE SUMMIT 2015 NORTH
Manchester's Old Trafford Conference Centre
Data Centre Summit North is the first in a series of new one-day, conference-focussed events, set to take place at Manchester's Old Trafford Conference Centre on the 30th of September 2015. DCS will bring the industry's thought leaders together in one place, alongside the industry's leading vendors and end users. The focus of the event will be on education, networking and debate, and it will provide an open forum for delegates to learn from the best in the business.
www.datacentreworld.com
30th of September 2015
DATA CENTRE SUMMIT 2015 NORTH Platinum Headline Sponsor
The event will also feature an exhibit hall where the industry's leading companies will show their newest products and services, together with a networking lounge so that you can make connections with like-minded business professionals.
To enquire about exhibiting call Peter Herbert on 07899 981123 or Ian Titchener on 01353 865403
FOREWORD Let’s throw away the rule book... Hello everyone, Cloud computing may no longer be considered a new trend, but it is yet to become universal. While the benefits are many, there’s a need for basic infrastructure to be highly flexible in adapting to the rapidly changing demands of digital business. The reliability of new hardware and its smooth integration with existing infrastructure is vital if organisations are to successfully navigate these changes. There’s also a constant need for security assurance with businesses relentlessly targeted by cyber criminals, coupled with all manner of requirements for meeting compliance regulations that are likely to have both providers and customers scratching their heads. Still, it’s important to focus on the new opportunities that are available as a cloud-enabled business. You can be completely mobile, more efficient and have more capacity. So, there’s really no reason why it should be shrouded in doubt or uncertainty. Yet, even some of the more technically savvy among us sometimes have difficulty explaining what makes cloud computing so special. Cloud technology revolutionises computing by reducing IT costs and increasing business agility, but with the Internet becoming a way of life right around the world, what’s really important is the benefit that digital technology can bring and the extraordinary opportunities that the cloud creates. In this respect, it’s not really about the technology, it’s about the new places it can take you and your business. Let’s make more of the potential the cloud has to offer. The new capabilities you have – plus the benefits they can bring for your customers – are limited only by your strategic imagination. Best Regards, Nick Wells, Editor, Cloud Computing World
Cloud Expo Europe and Data Centre World 2015, held across 11-12 March at ExCel London, kicked off to a roaring start on its first day with record attendance numbers and loud praises sung by delegates and exhibitors alike. Highlights included an opening keynote address from Dawn Leaf, Chief Information Officer at the U.S. Department of Labor, speaking about the challenges faced in its cloud migration and consolidation strategy supporting the agency's 19,000 staff in over 500 locations. The 'Data Centres of the Future' conference stream also featured an insightful session delivered by Scott Neal, Marketing Director at Schneider Electric, which opened the debate on how best to predict the future of the industry, mitigate risk and plan for the 'what if' factor. The Data, Analytics & the Internet of Things Theatre, a new addition to the 2015 programme, was also crowned a success with a full day of popular talks and lively panel discussions. Kalman Tiboldi of TVH spoke to a packed theatre, outlining the tremendous value to be generated by the hyped IoT market. Speaking after the end of the first day, Group Event Director Thomas Standley was overwhelmed with the feedback from attendees. "This has been yet another storming start to Cloud Expo Europe and Data Centre World. Year on year we have seen incredible growth and positive response to the shows from all industry sectors. This year has also welcomed the successful addition of the Hackathon run by Incubus, as well as the Open Cloud Developer Park, which have responded to the diversity and changing landscape of the cloud ecosystem." Ambitions remain high for both events, with CloserStill aiming to expand the shows again for 2016. Phil Nelson, Commercial Director and co-founder of CloserStill Media, organiser of cloud and data centre events in London, Singapore and Frankfurt, commented: "The floorplan is already half-full for 2016.
Exhibitor feedback suggests that the delegate audience is fantastic – top IT professionals with real projects and serious buying power.” Data Centre World visitor Tim Aldershot, Technical Director at JBrand, shared the enthusiasm: “Shows don’t often have new things to see – this event does! It is definitely worth attending to pick out new methods and products. I have seen many
All the key news in the world of cloud. Please don’t forget to check out our Web site at www.cloudcomputingworld.co.uk for a regular weekly feed of relevant news for cloud professionals.
potential purchases and will definitely be returning in 2016." Frederik Clement, IT Infrastructure Manager at TVH Group, who flew over from Belgium for the events, said: "We have already got a lot out of our visit. All the sessions have been great, and the exhibiting vendors are all relevant and highly useful. We would fly back for this show next year without a doubt!" Sue Goltyakova, Senior Marketing Manager at Netskope, sponsor of this year's Security & Compliance Theatre, added: "We've had an incredibly busy first day – we've seen the right people and the right companies here. What a good day!" After delivering her keynote session, CIO Dawn Leaf commented: "It's my first time here, and I am very impressed. There's a real energy and the exhibit stands are great. It's a big deal in the industry – it's not only a very well-attended event, but it has a broad set of exhibitors with a wide range of different capabilities and experiences, products and services. It is clearly a very enthusiastic event. Just looking around you can see how many people are engaged with both the conference and exhibition." www.cloudexpoeurope.com
Radware, a leading provider of cyber security and application delivery solutions, has launched its latest attack mitigation platform, which offers up to 300Gbps of mitigation capacity while allowing customers to enjoy the widest range of simultaneous cyber-attack protection in the industry. The powerful new design can address today's most tenacious volumetric DDoS attacks, such as UDP reflection attacks and fragmented and out-of-state floods, while picking out and mitigating sophisticated non-volume threats that often lurk below the surface in multi-vectored attacks. As the industry's first dedicated attack mitigation platform to offer 100Gb interfaces with the ability to handle 230 million packets per second of attack traffic, Radware's DefensePro x4420 platform was designed for multi-tenant environments, with the ability to support up to 1,000 active policies, separate processing capabilities and customised management & reporting per tenant. "Cyber-attacks have evolved and reached a tipping point
in terms of quantity, length, complexity and targets," says Carl Herberger, Vice President of Security Solutions for Radware. "In 2014, one in seven cyber-attacks was larger than 10Gbps and we've seen attacks 100+Gbps in size. The attack landscape is changing and cyber-attackers are getting more and more aggressive with their tactics. It's not uncommon for mobile carriers and cloud providers to experience extra-large attacks." Commercial accounts such as cloud providers and the mobile carrier market have been a target for these large-scale attacks. A recent Heavy Reading survey of mobile operators has shown that approximately 45 per cent of respondents reported that their Internet-facing web and DNS servers are attacked on a daily or weekly basis. As some carriers slowly roll out new mobile features such as voice over LTE (VoLTE), these massive attacks can potentially target VoLTE as well, creating even more disruptions for mobile customers. "With close to 7 billion mobile subscribers worldwide, the need for carriers to provide a secure LTE infrastructure is paramount," says Michael O'Malley, Vice President of Carrier Strategy and Business Development. "It is expected that worldwide VoLTE subscribers will reach 138 million by 2017, so the need for a high-end DDoS platform that can mitigate high volumetric and application attacks across all mobile domains – network infrastructure, mission-critical applications, IMS and EPC – is clearly evident." Radware's attack mitigation platform can also provide service and cloud hosting providers better value for their investment, as it eliminates the need to deploy multiple devices and provides a high-performance platform that can protect networks from sophisticated and volumetric attacks. "Soon enough, DDoS attacks will reach the 1Tbps level, placing manufacturers in a frenzy to keep up with future volumetric cyber attacks," says Dan Thormodsgaard, Vice President of Solutions Architecture for FishNet Security.
“Radware is a leading vendor in the anti-DDoS space and we are happy to see them addressing these larger volumetric DDoS attacks with the introduction of this new platform.” www.radware.com
Lenovo has announced the opening of the company's first global High Performance Computing (HPC) innovation centre, offering a permanent R&D and application benchmarking site as well as a new ecosystem of partners that will collaborate on projects to bring the commercial benefits of HPC to a broader spectrum of clients and workloads. Located in Stuttgart, Germany, the centre marks Lenovo's commitment to enterprise computing and its ambition to become the number one Open Systems vendor in the market. Coinciding with the opening, Lenovo has also been accepted as a full member of the European Technology Platform for High Performance Computing – an industry-led forum that leverages the transformative power of HPC to boost European competitiveness in science, business and industry. Working closely with Intel, the centre offers the latest Intel Xeon E5-2600 v3 processor with Mellanox EDR 100Gb/s InfiniBand interconnect fabric, leveraging Lenovo's dense NeXtScale System as the base compute platform. It will leverage the deep knowledge and specific skills of several specialised client partners from across Europe that will collaborate to further expand capabilities and advance localised research. Commenting on the announcement, Lenovo EMEA President and Senior Vice President of Lenovo Group, Aymar de Lencquesaing, said: "Today marks a milestone in our ambition as a company. Not only are we opening the company's first global HPC centre but we are reaffirming our commitment, investment and ambitions in the enterprise.
The EMEA market has huge potential for HPC and provides a fertile ground for us to lead major advancements in projects and research that have an incredible impact on both industry and society." Gilad Shainer, Vice President of Marketing at Mellanox Technologies, added: "Lenovo's HPC Innovation Center at Stuttgart will provide developers and users with access to Mellanox EDR 100G InfiniBand interconnect solutions for optimising application performance to take advantage of the most efficient high-performance interconnect. Interconnect performance capabilities are critical to data- and compute-intensive applications, which require ultra-low latency and a high rate of
message communication in order to deliver faster results." One such project utilising the Innovation Centre backbone is under way in the UK, where Lenovo technologists are working with STFC DiRAC Facility scientists to explore new methods of dealing with very large-scale, memory-intensive applications. Dr Jeremy Yates, Director of the DiRAC Facility, told us: "One of the key success factors of the Innovation Centre will be measured by how it helps to promote regional intellectual property and we are encouraged by the focus on satellite collaborations." www.lenovo.com
To help telecoms operators address the growing connected home market, intelligent sound sensing leader Audio Analytic has launched its Global Telecom Partner Programme (GTPP). The programme is designed to make it simple for telecoms operators of all sizes and their hardware suppliers to integrate Audio Analytic's patented sound sensing and classification technology into home automation, monitoring and security products, such as cameras, microphones, thermostats, home hubs and baby monitors. Consultancy Berg Insight predicts that 100m homes, split equally between Europe and the US, will install connected home systems by 2017, with the market expected to be worth $17 billion. Due to their close existing relationships with subscribers and their networking strengths, telecoms companies, including cable operators and broadband providers, are well positioned to target this market by adding new services that increase revenues and enable differentiation from rivals. Audio Analytic's award-winning sensing software is built on a unique understanding, and database, of the home acoustic environment. This makes it simple for products incorporating its technology to automatically detect common sounds, such as smoke alarms, breaking glass, baby cries, car alarms, gun shots or aggression, across connected home, business and high-security scenarios. "The connected home presents a huge opportunity for telecoms operators to build
on their existing customer relationships and introduce new, innovative products that enable home automation, security and monitoring," said Chris Mitchell, founder and CEO, Audio Analytic. "It promises to unlock a combination of new revenues and provides the chance for operators to differentiate themselves in increasingly competitive markets. Our Global Telecoms Partner Programme provides full support, across the product lifecycle, to enable telecoms operators to target this fast-growing market opportunity." The Global Telecoms Partner Programme combines access to Audio Analytic's technology with full support, enabling sound sensing to be incorporated within products quickly, seamlessly and cost-effectively. Integration takes as little as two weeks, while the sensor software has no special hardware requirements and can work with any low-quality microphone. Tens of thousands of products containing Audio Analytic's technology have been provided by partners in the consumer electronics and professional security markets. www.audioanalytic.com
International domain name registrar and web hosting provider EuroDNS has introduced free, fully featured Alpha SSL certificates for customers. By doing so, the company takes a major step in boosting security on the Internet. EuroDNS SSL certificates protect online communication and transactions using the strongest encryption available. "Many companies talk about securing the Internet; EuroDNS is taking constructive steps and offering every customer a free Alpha SSL certificate with every domain name registered or transferred to us," said EuroDNS Chairman, Xavier Buck. "We believe in a secure Internet and we're striving to ensure that the threats that exist are recognised and controlled. It's vital to us that our customers are protected and fully informed, so they can secure their own data and that of their customers. Giving a free SSL certificate is the latest initiative in our mission to provide the most reliable and secure domain registrar service available, following on from the recent launch of two-step verification and domain privacy." www.eurodns.com
10 Reasons to Leap Into a New World of Collaboration
POWER OF GOOGLE
David McLeman discusses the benefits of Google Apps for Work By David McLeman, CEO of Ancoris
Infoburst A true cloud environment allows your organisation to become more effective and more innovative.
OPINION A New Era Years ago, the move from paper-based systems to computers revolutionised the way we work. Fast forward to 2015 and the current move to cloud services gives us a similar opportunity to re-think the way we co-create and share information on any device – laptop, Android, iPhone, Chromebook – wherever we are in the world. Google Apps for Work is an all-in-one suite of tools that enables users to communicate, store and create in the cloud. The suite includes file storage and sharing with Google Drive, real-time collaboration with Google Docs, video meetings with Google Hangouts and professional email with Google mail, all bundled together with a single price tag. Let’s take a closer look at how different organisations are using Google for Work and its cloud functionality.
Get your teams working together – faster, smarter, but not harder The personal productivity tools of the last two decades, such as word processors and spreadsheets, have changed the way individuals work. Now Google Apps for Work is changing the way teams work together. Start with Google Drive, Docs, Sheets and Slides. This is one of the most impactful and fundamental changes your organisation should adopt. Google Drive is a cloud-based office productivity suite that comes bundled with your apps purchase and offers the advantage of real-time collaboration and a single source of truth in the cloud. Any time you find yourself attaching files and versioning them (v1, v2, v3, etc.), there is an opportunity to transform.
Improve employee communications beyond email Email is still mission-critical for most businesses, so providing quick, easy and reliable access to email at the lowest possible cost is vital. Yet users are moving past email and embracing social media tools that let them communicate and collaborate faster and more effectively. Google Apps for Work also gives you social tools. You can start an instant message conversation right from your Gmail inbox, quickly switch to a voice, video or group chat all from your browser and with no need to fire up separate applications. With Hangouts, your employees have a powerful communications platform that includes text chat, voice and video conferencing including multi-party video conferences, which also allow you to share documents and your screen in real time. Google+ offers a powerful social platform to help your organisation engage its workforce and connect with customers, partners and suppliers. With Google+ profiles, your employees can follow leadership or influential VIPs. With private communities you can build centres of excellence on special topics or encourage knowledge sharing with remote personnel. Google+ is fully mobile and integrated with Gmail and Hangouts.
NEARLY 1 IN 2 EMPLOYEES work away from the office at least once a week. Source: Cisco.com
View, store and share any kind of files With Google Drive, you can open a file even if you don’t have the right software installed. You can view over 40 popular file formats including videos, images, Microsoft Office documents, spreadsheets and PDFs. You can even edit Office files, without having Office installed. You can do this on your mobile, online and offline. The maximum file size is 5 terabytes and each user has unlimited storage, so your organisation never has to manage storage quotas again.
Create intranets to collaborate on team projects Google Sites, also bundled with the Google Apps suite, allows anyone to quickly and easily publish content to a web page. You can create intranets as simply as writing a document through an intuitive editor. That means you no longer need to go through a complex IT request to get a website created for your team or organisation. Now, the people who know the content best can own it and keep it updated.
The all-in-one suite means ease of use and rapid user adoption Google Apps for Work is designed as an all-in-one solution with integrated tools that work seamlessly together. When you make a comment in Docs, Sheets or Slides, collaborators automatically receive
email alerts. With a single click, you can launch a Hangout video meeting from your inbox or calendar. On top of the intrinsic consistency and comfort of the application, Google Apps for Work users enjoy the same experience across different devices, operating systems and browsers. Another advantage comes from the fact that true cloud applications, like Google Apps for Work, are not limited by versions and are therefore constantly evolving. It is simpler for employees to benefit from new features because these come in bite-size chunks over many months rather than large, indigestible batches every few years.
Free your staff and empower them to work anywhere, anytime Because Google Apps for Work runs in the browser, you can access the data and apps you need whether you’re using Windows, Mac OS or Linux. It doesn’t require you to install any software locally, so there’s no application support headache for the IT team.
of workers are using unauthorized mobile devices on company networks. Source: HBR.org
Get everyone collaborating Many organisations are struggling to provide their front of house staff with access to corporate systems. This is still very much the case today in retail, hospitality and call centre environments. The people who are the most able to give timely customer feedback and come up with innovative ideas have difficulties communicating with their head office. Google for Work enables you to bring every single employee into the digital age, to get them participating and for you to be seen as a more inclusive company.
Peace of mind with a secure suite built for the cloud With collaboration being a key driver for innovation, users want to be able to share information easily, but the IT team needs to make sure data doesn't fall into the wrong hands. Google Apps for Work was designed from the ground up to let you collaborate effectively and securely in a shared, distributed environment. Security is not an add-on or afterthought – it's part of Google's DNA. For example, all data is encrypted in transit and at rest and is deliberately fragmented across multiple data centres, so that someone compromising the security in one location would still be unable to make sense of users' data.
Businesses require an environment that fosters growth and innovation Historically, IT departments have spent 70 per cent of their budgets 'keeping the lights switched on' simply maintaining their existing infrastructure and status quo. Analysts at Forrester Research warn that future success will require organisations to cut 'business as usual' costs to just half their budget and devote the savings to business expansion and innovation to deliver competitive advantage. A key weapon in this shift of priorities will be moving to the cloud. You benefit from the scale of Google's operations and you don't need to spend time managing software licenses, installing patches and upgrades and periodically replacing applications. Your own IT staff can concentrate on the needs of the business, making sure users have the right tools in place and the know-how to best use them.
Watch your costs go down, as productivity goes up By using tools that come at no extra cost in Google Apps for Work, such as Google Hangouts, Google+ and Google Sites, employees are able to communicate more effectively, both within the company and with external partners. Google Apps for Work also brings simplicity to your budget, with a clear, simple pricing plan that has everything you need included as standard. You pay a fixed fee of £79 per user per year, and you can easily scale your costs up or down. As storage is limitless in Google Drive, you no longer need to worry about storing your data.
Conclusion Google Apps for Work is about more than just providing low-cost email and outsourcing operations. As we've shown in this article, by being designed from the ground up to run securely in the cloud, a true cloud environment addresses ten of the key challenges faced by most organisations today. Designed for teams and available on any device with a browser, it allows your organisation to become more effective and more innovative.
Applying Social Data to Business Decisions
Infoburst IBM and Twitter offer businesses an unprecedented edge to empower decisions and create actionable insights for business decision-makers.
TURNING TWEETS INTO
New Social Insights from IBM and Twitter Drive Better Business Decision-Making IBM is moving Twitter data beyond social listening
Introduction IBM and Twitter have announced the availability of industry-first cloud data services that allow business professionals and developers to extract actionable business insights from Twitter data. With more than 100 early client engagements underway, the IBM and Twitter partnership is already helping enterprise clients apply social data to business decisions. Twitter is like no other data source in the world. It is a real-time, public, conversational and global information platform where voices from around the world are speaking about every topic imaginable. But for business professionals to do more than social listening – to be able to use Twitter data to inform their organization’s most essential decisions – they must first isolate the signal from the noise. IBM does this by enriching and analyzing Twitter data in combination with millions of data points from other streams of public and business data – such as weather forecasts, sales information and product inventory stats – to uncover powerful correlations that drive more actionable insights. “So much of business decision making relies on internal data such as sales, promotion and inventory. Now with Twitter data, customer feedback can easily be incorporated into decision making,” said Chris Moody, Vice President of Data Strategy at Twitter. “IBM’s unique capabilities can help businesses leverage this valuable data, and we expect to see rapid demand in retail, telecommunications, finance and more.” The new IBM analytics services on the cloud will help businesses and developers:
• Create Social Data-Enabled Apps: Developers and entrepreneurs can search, quickly explore and then mine enriched Twitter content and aggregated insights through IBM's Insights for Twitter service on Bluemix.
• Merge Sophisticated, Predictive Analytics with Twitter Data: By automating the steps of data curation, predictive analysis and visual storytelling, Watson Analytics can give business professionals the ability to immediately pull Twitter data into any project in order to identify and explain hidden patterns and relationships to accelerate the understanding of why things happen and what's likely to happen.
• More Easily Analyze Twitter Data: With select cluster configurations of BigInsights on cloud pre-configured with access to Twitter content, clients can combine Twitter data with IBM's full-featured Enterprise Hadoop-as-a-Service offering, also available through IBM Bluemix.
Big Data & Analytics IBM and Twitter offer businesses an unprecedented edge to empower decisions through the combination of Twitter's unique overview of what the world is saying together with IBM's analytic power to create actionable insights for business decision-makers. More than 4,000 IBM professionals now have access to Twitter data and are trained to enrich the data with analytics
capabilities from IBM industry solutions and cloud-based services. "The partnership between IBM and Twitter helps businesses tap into billions of real-time conversations to make smarter decisions," said Glenn Finch, Global Leader of Big Data & Analytics for IBM Global Business Services. "Through unique expertise, curation and insights, Twitter data is now able to inform decision-making far inside organizations." Here are the top three social insights drawn from over 100 early engagements:
1. Geography is Not Destiny: It's a global economy, but we're all still very local. Geographic areas can show significant variance in churn even across subscribers in the same marketing segment with the same data history. Most subscription-based telecommunications and media companies that are subject to high churn rates have developed sophisticated analytic models to understand and predict customer turnover. What's not well understood is the influence of factors like weather or other point-in-time events within defined geographic areas. By combining Twitter data with other information like rain, wind or snow that triggers service interruptions, IBM identified the correlation between weather events, angry Tweets and customer defections. By helping analyze localized Twitter data combined with weather data, IBM can significantly improve churn models – in some cases by 5 per cent – and help a client take actions to minimize turnover.
2. The Inside is the Outside: Employee turnover within retail businesses directly affects your most loyal customers. What happens privately inside your four walls often goes public via social conversations. There are no more closed doors. IBM
Infoburst IBM found that Twitter is a valuable indicator of demand.
analytic models have shown that consumers value, and Tweet about, the relationship they build with sales associates, particularly in food service where individual tastes and preferences are important. Once a relationship is removed, consumers also Tweet, but this time expressing a sense of loss for the relationship and their dissatisfaction with having to ‘start over.’ IBM looked at Twitter data along with loyalty information and the financial performance of different stores and restaurants. Not only did dissatisfaction with employee turnover impact sales negatively, but the dissatisfaction was most keenly felt by the most loyal (and valuable) customers. In one study the impact was highest with a consumer cluster that represented just 3 per cent of the total customer population (over six million in the loyalty program) – yet these customers have some of the highest gross margins for the retailer and shop virtually every day.
3. Fashion Forward with Social Insight: Twitter is an effective demand signal for the apparel industry because, as focused as it may be on individual commentary, in aggregate it creates a compelling picture of worldwide trends. Manufacturers want to know what products to make and when, but constantly changing retail trends make it harder to understand and respond to demand. IBM found that Twitter is a valuable indicator of demand. By using psycholinguistic analytics from IBM Research to extract a full spectrum of psychological, cognitive and social traits from the Twitter data that influential fashion bloggers generate – combined with operational data such as sales and market share information – manufacturers can better understand why some products sell well while others don’t. They can also improve merchandising strategies and provide input to future product development.
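The weather-and-churn correlation in the first insight comes down to segmenting the same per-region, per-day series by local conditions. A minimal sketch of that bucketing, using entirely made-up regions, counts and storm days (not IBM’s data or models):

```python
from collections import defaultdict

# Made-up sample data per (region, day): angry-Tweet counts, churned customers,
# and which of those days had storm-related service interruptions.
angry_tweets = {("north", "2015-03-01"): 120, ("north", "2015-03-02"): 15,
                ("south", "2015-03-01"): 10}
churn_events = {("north", "2015-03-01"): 9, ("north", "2015-03-02"): 1,
                ("south", "2015-03-01"): 2}
storm_days = {("north", "2015-03-01"), ("south", "2015-03-02")}

def bucket_by_weather(series, storms):
    """Sum a per-(region, day) series into storm-day vs. clear-day totals."""
    totals = defaultdict(int)
    for key, value in series.items():
        totals["storm" if key in storms else "clear"] += value
    return dict(totals)

print(bucket_by_weather(angry_tweets, storm_days))  # angry Tweets cluster on storm days
print(bucket_by_weather(churn_events, storm_days))  # ...and so do the defections
```

A real churn model would feed these weather features into the existing predictive model rather than eyeball the totals, but the join on (region, day) is the essential step.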
DATACENTRES are MATURING Mature Data Centres know that protecting their customers’ data isn’t just about being popular, living in the upmarket streets of London, wearing Tier III trainers or comparing the size of their PUE.
A mature data centre understands that high quality, exceptional service, low cost and ultimate flexibility, combined with levels of security unsurpassed elsewhere, are more important than boasting about the size of your PUE or your Tier III label.
Don’t let childish boasts cloud your decision - choose a data centre that offers maturity and puts your business needs first.
Contact MigSolv Today
0845 251 2255
Fundraising website chooses Melbourne Server Hosting to deliver a scalable solution By David Finch, CEO of Make a Donation
Infoburst Make a Donation also help charities by providing free registration pages for their events, data management and marketing support.
Introduction Make a Donation is a unique fundraising website which aims to change the way charitable giving is paid for in the UK. It is the only online giving site in the UK where 100 per cent of the donations go direct to charity. “We set it up after discovering that many sites like Just Giving and Virgin Money Giving charge up to 6.3 per cent in fees for every donation made,” says CEO David Finch. “As well as every penny of every donation going to the charity it is intended for via make-a-donation.org, we also pay the bank and processing charges for the charities we work with, saving a further 1.7 per cent on each donation on average. We don’t charge our charities any fees whatsoever and we never will.” Adding Value In addition to the free fundraising Make a Donation also help charities by providing free registration pages for their events, data management and marketing support. “We give them free access to digital tools so they can spread the word about their campaigns and events,” says David. “We add value to those we work with and are not just another fundraising site. We are supported by scores of businesses that pay a small fee to work with us to support what we do and give back something to their local community in the process. The businesses offer vouchers to the fundraisers, in the form of what we call MAD points, to use their products or services.”
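The fee arithmetic quoted above is easy to check. A small sketch, using the percentages David mentions (6.3 per cent platform fee, roughly 1.7 per cent processing charges) as assumed inputs, compares what a charity actually receives per donation:

```python
def charity_receives(donation, platform_fee_pct, processing_fee_pct, fees_absorbed=False):
    """Net amount reaching the charity after platform and processing fees.
    With fees_absorbed=True (Make a Donation's model) the site pays all fees,
    so the charity receives the full donation."""
    if fees_absorbed:
        return donation
    net = donation * (1 - (platform_fee_pct + processing_fee_pct) / 100)
    return round(net, 2)

gift = 100.00
print(charity_receives(gift, 6.3, 1.7))                       # fee-charging site: 92.0
print(charity_receives(gift, 0.0, 0.0, fees_absorbed=True))   # make-a-donation.org: 100.0
```

On a £100 donation, the difference between the two models is £8 reaching the charity or not.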
Infoburst Melbourne looked at how to make both the database and front-end web servers fault tolerant and highly available.
Why Melbourne?
“I first came across Melbourne at a Business Network event in London,” continues David. “I got chatting to Steven Allan and told him how unreliable the hosting company we were using at the time was. He told me about Melbourne and I asked him to look at some different options for us.” Further conversations with other members of the Business Network, as well as David’s own research, indicated that Melbourne were highly regarded and would deliver on the promises that their SLAs described.
IT Platform
For the website to work for everyone involved – the charities, the fundraisers and the supporting businesses – the site needs 100 per cent uptime. So Melbourne looked at how to make both the database and front-end web servers fault tolerant and highly available. The solution consists of three Galera Database Servers, two Web (Frontend) Servers, two HAproxy Load Balance/Cache Servers, plus a custom OS management solution. A Galera database cluster was chosen for the database. This spreads the database load over three servers, so if any one of them fails, the remaining servers are still able to cope with the workload. The web servers are a replicated pair fronted by a further pair of load balancers. All of the servers utilise Melbourne’s SAN-backed UltraVM public cloud, ensuring that the solution is not hardware dependent. “By bringing all of these elements
together Melbourne has given us a very powerful, fault-tolerant and scalable solution that we know we can depend on,” says David. “In addition to all of the above, it’s a managed service which is just what we needed. We get the technical support and back-up from Melbourne so we can concentrate our resources on developing and marketing our website to enable more people to benefit from fee-free giving. Melbourne have designed the solution so that it supports our projected growth over the next 3 to 5 years.” The Website The site receives and processes donations 24 hours a day, 365 days a year, so downtime is not an option. “Our fundraisers and charities want to receive as much support and as many donations as possible so our service needs to be available at any time of day. As a result we need a flexible solution that delivers a fast user experience on an ‘always available’ website and is robust enough to handle large volumes of donors at any one time. Also, we can’t predict traffic levels so we need to be prepared for any level of visits to the site. It’s not always obvious if a fundraising campaign is going to take off so we need a solution that can expand to meet demand whenever it happens, without the site hanging or crashing.”
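The resilience pattern described above – replicated servers behind paired load balancers – rests on one idea: traffic is only ever routed to healthy backends. A toy round-robin balancer illustrates it; this is an illustrative sketch, not Melbourne’s actual HAproxy configuration:

```python
import itertools

class LoadBalancer:
    """Toy round-robin balancer that skips unhealthy backends, mimicking how
    a load-balancer pair keeps a site up when a web server (or a Galera
    database node) fails."""
    def __init__(self, backends):
        self.backends = backends
        self.healthy = set(backends)
        self._ring = itertools.cycle(backends)

    def mark_down(self, backend):
        """Health checks would call this when a backend stops responding."""
        self.healthy.discard(backend)

    def route(self):
        """Return the next healthy backend, skipping failed ones."""
        for _ in range(len(self.backends)):
            candidate = next(self._ring)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends")

lb = LoadBalancer(["web1", "web2"])
lb.mark_down("web1")                     # simulate a web server failure
print([lb.route() for _ in range(3)])    # all traffic now flows to web2
```

In production this logic lives in HAproxy itself (and, for the database tier, in Galera’s multi-master replication), but the failover behaviour is the same: a single node failure is invisible to visitors.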
Infoburst The site receives and processes donations 24 hours a day, 365 days a year, so downtime is not an option.
The Future Make a Donation continue to have a really positive relationship with Melbourne. “They listened to what we needed and produced a solution that worked not only for the present, but for the future,” confirms David. “They have become a sponsor too. On a day-to-day basis they look after all the technical stuff and we crack on with everything else. If there are any issues they’re usually fixed before we’ve noticed them! At the moment we have over 2,600 charities on our website. Among the bigger charities that have embraced our new way of fundraising are Cyclists Fighting Cancer, Hearing Dogs for Deaf People, St James’s Place Foundation, Dreamflight and Age UK.” More than a hundred of the charities also use the crowd funding option, which allows them to promote more specific campaigns, such as funding a particular piece of hospital equipment. “We have over 10,500 individual fundraisers using the website. Many of them also do challenges for charities such as for Cancer Research UK, Macmillan Cancer Support and Prostate Cancer UK. Make a Donation’s vision is for a world where fees for charities are non-existent. Our mission is simple - to revolutionise the future of charitable giving in the UK.”
DATA CENTRE SUMMIT 2015 NORTH
Manchester’s Old Trafford Conference Centre
30th of September 2015
Registration is now open. Data Centre Summit North is the first in a series of new one-day conference-focused events, set to take place at Manchester’s Old Trafford Conference Centre on the 30th of September 2015. DCS will bring the industry’s thought leaders together in one place with its leading vendors and end users. The focus of the event will be on education, networking and debate. It will provide an open forum for delegates to learn from the best in the business. The event will also feature an exhibit hall where the industry’s leading companies will show their newest products and services, together with a networking lounge where you can make connections with like-minded business professionals.
Platinum Headline Sponsor
TO REGISTER CLICK HERE
Virtual Desktop Infrastructure
Infoburst SIRE proposed that the client made the switch to a Virtual Desktop Infrastructure (VDI) Environment.
TO THE CLOUD
SIRE Technology help travel experts plan for the future.
Introduction The client has been a leading brand name in independent tailor-made travel since 1970. If you’re planning a round-the-world gap-year trip, then they’re the experts to help design your itinerary. They operate out of multiple travel centres across the UK and Ireland. The company remains privately owned with a staff of over 1000 and has made travel arrangements for over 12.5 million clients.
Pilot Scheme SIRE initially set up a pilot scheme to run in one of the travel centres. They replaced all of the PCs with VDI units, which linked up to a central server that had been updated to Windows 7. The pilot ran for one month with eight users. A pilot phase is an ideal way to showcase the benefits of a solution and at the same time it allowed the client to experience the solution in a live environment.
The Challenge
The company were running Windows XP and, with the operating system reaching the end of its life, were looking to migrate to Windows 7. As well as the Windows migration, they wanted to look at a centralised management system for their desktop environment.
Full Switch Over
After full installation of the software, SIRE began to switch each travel centre over to the new VDI units. This was carried out during the evening, and in some cases overnight, to make sure there was no disruption. Staff left their offices with all PCs still running Windows XP and when they returned the next morning they simply turned on their new VDI unit and began working from Windows 7. Windows 7 gold images are housed on a tailor-made server farm at the premises and at a local data centre. It’s easier to upgrade one centralised system to Windows 7 than to upgrade operating systems at each travel centre.
The Solution
After assessing the company’s current situation and infrastructure, and recognising that it operates from 27 travel centres, SIRE proposed that the client switch to a Virtual Desktop Infrastructure (VDI) environment.
Benefits of a VDI Environment
• Users can work from any travel centre without having to take their PC or laptop.
• Greater security – theft of hardware is not theft of data, so a thief would just end up with a dumb terminal.
• Energy efficient – VDIs require less energy than traditional PCs or laptops.
• People can work from anywhere, which promotes productivity.
• The upgrade to Windows 7 would be seamless, save time and allow for future upgrades.
• Centralisation of the desktop environment.
The Outcome
• Onsite and management costs were reduced.
• Improved energy efficiency, as VDIs require less power to run than a typical PC.
• Centralised patch management means that lower bandwidth is required.
• Migration to Windows 7 was a huge success, and seamless, due to the implementation of the VDI system.
• Staff benefit from flexible and mobile working, able to work seamlessly from various office locations.
Choosing a Data Management Partner
7IM Uses Pulsant to Supply a Range of IT Services Graham Stott, IT Director at 7IM
Introduction
Seven Investment Management Ltd (7IM) is an award-winning financial services firm that offers discretionary asset management, funds-of-funds portfolios, and a wrap platform for private investors and independent financial advisors (IFAs). Founded ten years ago and with offices in London and Edinburgh, it delivers trusted advice, a common sense approach, proven institutional techniques and process as well as transparent charges for all investments.
Business Challenge
Crucial to the delivery of 7IM’s intermediary financial services is a powerful IT platform that supports client assets of over £4 billion. Graham Stott, IT Director at 7IM, explains, “Technology is a cornerstone of our service and represents a major part of our business investment. Although we have an extensive IT team, we have to consider carefully how we best utilise our resources in order to remain competitive while complying with the legislative requirements that govern financial service businesses.” 7IM needs a data management partner that can provide secure locations to house its data servers, a responsive and proactive team to help optimise its systems and a broad range of flexible IT services that it can tap into as its business evolves and grows. It also needs one it can trust with valuable client data. Graham confirms, “Our investors need to know that their data is secure at all times. As well as operating to stringent standards, our systems have to undergo regular third-party audits. Uptime is also essential, particularly because we service IFAs. They need to be able to access systems 24/7. If we are down, the effect cascades out affecting hundreds of other companies and their customers too.”
The Solution
7IM uses Pulsant to supply a range of IT services including networking and high speed Internet connections to optimise its systems, ensure resilience and create operational efficiencies at its offices in Edinburgh and London.
As part of this solution, Pulsant provides Tier 3 resilient colocation facilities to support 7IM’s private cloud. This is run from 7IM’s own Microsoft Hyper-V servers housed at Pulsant’s Edinburgh data centre, which operates to strict ISO 27001 standards for maximum data integrity. 7IM also uses Pulsant’s WAN+ solution – a fast, secure, prioritised and reliable network for voice over IP as well as other latency-sensitive applications. Graham states, “By colocating our high density server racks with Pulsant, we have access to secure enterprise-class data centre facilities without unnecessary capital expense. Pulsant has also helped us to reduce our operational costs through the installation of a WAN, which eliminates the heavy call costs between our Edinburgh and London sites, and the use of high speed internet connections to support our data-intensive virtualised platforms. This is vital to help us manage costs and comply with the ‘Capital Adequacy’ standards required by the FSA without overburdening our business.”
Business Results
On the benefits Graham comments, “By providing the glue that holds our own technical services together, Pulsant has helped us remain cost competitive without compromising service. It has connected our virtual operations and replicated our systems between sites 450 miles apart and has helped support our virtual environment with 0 per cent downtime”. And Graham confirms there is still more to come, “As our business has expanded, this has allowed us to make Edinburgh the disaster recovery site for our London operations. We are also looking at mobile device support and other innovative service delivery models. Using Pulsant, these sorts of projects are now achievable without massive investment on our part. Pulsant can supply all the WAN, back up lines and replication facilities we need. 
Pulsant is also very responsive – backing their technical knowledge and capabilities with proactive support - whatever the issue or requirement they will sort it out.”
Out of the box vs. outside of the box? Your organisation is different from your competitors, so why assume you have the same cloud needs?
Secure. Private. Personalised. Get a cloud solution that puts you in control of your business’ future. Find out how
“We want to give contact centres answers, not reports. They need to create their own metrics. GoodData’s power and flexibility is speeding time to insight and winning us business.”
BOLDERTHINKING
Infoburst GoodData is helping Bolder Thinking win more business, with customers purchasing because of the Advanced Analytics they offer.
GoodData is helping Bolder Thinking win more business By Nicole Hushka, VP of Product Development, Bolder Thinking
Introduction
In the world of telephony, change often happens slowly, and sometimes painfully. Ask any call centre or enterprise IT manager tasked with selecting, maintaining and managing legacy, premise-based phone systems and software. Bolder Thinking addresses telephony pain points with an agile, cloud-based platform that eliminates the cost and complexity of maintaining expensive, traditional solutions — so all departmental managers can focus on running their business, not their technology. Founded by veterans of the call centre industry, the company’s approach to telephony is indeed bold. To make their customers’ lives easier, they enable dynamic scalability and deliver better value. Recognizing that their cloud-based solution’s benefits could extend way beyond operational and cost efficiencies, they looked to give customers real-time access to analytics on call centre performance, something no other provider was offering.
Self-Service Analytics
With telephony platforms being so complex, and data coming from a mind-boggling variety of systems and sources, delivering cohesive analysis requires a series of difficult extractions and manipulations. It can get so complicated that most providers simply avoid it. Without viewable analytics for abandon rates, call times and call volumes, business leaders rely on IT to help them answer fundamental questions like “How is our customer service contact centre performing?” At Bolder Thinking, they saw an opportunity to address this gap by offering advanced, self-service analytics. It would relieve clients from their dependency on IT to generate reports. It would enable them to be more agile, to act more decisively, and to provide better service. As VP of Product Development and co-founder Nicole Hushka put it, “We want to deliver data that our customers want to see, how they want to see it – and when they want to see it. 
We want to give our users control.” Delivering Answers, Not Just Reports The Bolder Thinking team looked into building a solution with products like Jaspersoft — quickly realizing that would only deliver reports vs. true insights. “We wanted to create a paradigm shift from a world where IT runs call centres to a world where call centre operators run call centres and IT can focus on strategic tech initiatives,” explained Hushka. The Bolder Thinking team needed a tool that allowed them to empower the operators, supervisors, and call centre managers to get quick access to data, to do ad hoc reporting on the fly themselves, and to significantly reduce their time to insight. They wanted users to be able to open a dashboard and answer their own
questions. If they weren’t meeting SLAs, Bolder Thinking wanted them to be able to find out why by quickly taking themselves through a decision-making tree, a process to unearth insight needed for them to find core issues and make adjustments to optimize performance. In the words of Hushka, “We give our customers awareness through their dashboards, and that gives them the ability to react as well as be proactive. That’s powerful. Advanced Analytics provide quick access to insight and answers, through a single interface that is easy to adapt on-demand to meet new requirements as they arise.” Speed to Insight Wins Business Bolder Thinking knew that GoodData’s end-to-end platform supports the entire data lifecycle, in addition to more data sources than any other solution, making it quick to implement and easy to manage. Within six months, they released Advanced Analytics fully embedded into their existing portal, with more than 200 KPIs that customers now can customize and monitor on a daily basis. Today, users can quickly and easily get visibility into their contact centre’s talk time or speed of answer averages. While this may seem to be a straightforward definition, each client actually uses a unique definition. “We couldn’t develop standard metrics and expect all call centres to accept our definitions,” said Hushka. “We needed to give our customers flexibility and control to create their own terminology.” Bolder Thinking has also recently launched an SOS widget, which gives customers a way to provide online access to call centre personnel through a web browser. If you’re shopping online and wish to talk to a person, you can initiate a call from the e-commerce site and speak to a rep through the speaker and mic on your computer. The GoodData dashboard will track KPIs such as how many attempts callers make, and which types of shoppers are escalating to support calls. In the future, Bolder Thinking plans to include video calling, in addition to voice calls. 
Feedback has been overwhelmingly positive, with call centre managers able to answer their own questions, resolve issues more quickly, and discover who’s using web access to call centres and why. They now also know in advance what staffing options will provide the most flexibility and efficiencies; and can graphically compare an agent’s performance to their peers, and see at-a-glance daily agent activities. GoodData is helping Bolder Thinking win more business. “After we demo what’s possible, customers have turned the corner from researching our solution to purchasing,” said Hushka. “Multiple clients have offered to forgo certain platform features in exchange for the Advanced Analytics. It’s the insight that GoodData provides that’s helping sell our solution.”
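The per-client metric flexibility Hushka describes – the same KPI name, a different definition for each call centre – can be pictured as a registry of client-defined metric functions evaluated over call records. Field names and definitions here are hypothetical illustrations, not GoodData’s schema:

```python
# Hypothetical call records; the fields are illustrative only.
calls = [
    {"wait_s": 12, "talk_s": 180, "abandoned": False},
    {"wait_s": 45, "talk_s": 0,   "abandoned": True},
    {"wait_s": 8,  "talk_s": 240, "abandoned": False},
]

# Each client registers its own definition of a KPI under the same name.
kpi_definitions = {
    # One client's average speed of answer: answered calls only.
    "asa": lambda cs: (sum(c["wait_s"] for c in cs if not c["abandoned"])
                       / sum(1 for c in cs if not c["abandoned"])),
    # Another metric a client might define for itself.
    "abandon_rate": lambda cs: sum(c["abandoned"] for c in cs) / len(cs),
}

def evaluate(kpi, records):
    """Look up the client's own definition and apply it to the records."""
    return kpi_definitions[kpi](records)

print(evaluate("asa", calls))                        # → 10.0
print(round(evaluate("abandon_rate", calls), 2))     # → 0.33
```

Swapping in a different lambda under the same key is all it takes for another client to redefine "speed of answer" – which is exactly the terminology-control problem the article describes.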
Meeting Business Requirements
BUDGET-BASED Bridging the gap between budget-driven organizations and their IT departments
By Gordon Howes of VMHOSTS
Infoburst Can budget-driven companies demand superior services from the cloud?
Introduction
Most people argue that businesses don’t understand the costs associated with new services like cloud computing. These businesses demand superior services and thus fail to realise the various costs associated with developing the service and its impact on the IT department. So, can these budget-driven companies demand superior services from the cloud? Can they coexist?
A Budget-Based IT Organisation
Before a thorough assessment can be made between the cloud and a budget-driven organisation, it is important to understand how a budget-based organisation works. These firms create a budget towards the end of the fiscal year and do so within preferred guidelines. The IT department is not necessarily given the top-line budget. A series of compromises need to be made in order to negotiate a better solution to the problem. Based on the internal findings of the organisation, the heads of the company can decide to develop the functionality of the IT department or source it from external platforms like Software as a Service (SaaS). The latter tends to be the most appealing as it is affordable and requires no upfront investment. Keeping this in mind, you will come to realise that IT departments need to justify the investments they make. These financial restrictions make it difficult for them to benefit from the various services that the cloud has to offer.
Bridging the Chasm
Most businesses and IT departments will agree to a set of business indicators. These indicators are easy to monitor and measure, thus binding the two together. It can prove to be a valuable step towards developing common ground from which to work towards better prospects. These business indicators can create a feeling of joint ownership. The real problem at hand is to bridge the chasm between these budget-driven organisations and their IT departments. Here are some ways of making this proposal work:
Chargeback
Rather than sticking to a predefined budget,
the IT department will have to be considered as a zero-sum organisation. In layman’s terms, it is a good idea to transform the IT department into an organisation of its own. It will transfer its costs to business units according to their usage. This step will prove beneficial, with a direct correlation between the use of IT services by business units and the costs the IT department has to bear. However, most businesses will find such a move unwise due to unknown cost factors that will not be in their control.
Showback
The concept is all about IT departments reporting regularly. For example, they could report the IT consumption for each department on a monthly basis. These reports serve as an allocator for the assignment of the IT budget to business units. This cuts down unnecessary consumption and the provisioning of resources when they are no longer needed.
Billing
The more drastic approach would be to set up the IT department as an internal business unit. This means that it will charge for its services based on per-unit cost. When a business makes use of a particular service, it is made aware of the actual cost to its department budget. The purpose of doing so is for the IT department to run its applications in a way where it is neither making a profit nor a loss. The model can be risky if the pricing is not set correctly.
Conclusion
The traditional financing of IT departments has not been designed to facilitate adequate cloud services. These budgets do not take consumption variability into account, thereby creating a conflict of interest. On the other hand, businesses have little incentive to ask only for what they need. This makes the IT department responsible for running operations and meeting the requirements of the business but, considering the factors mentioned above, there would be an incentive for the departments to work together. 
They will be able to deliver what’s best for the business by making proper use of their budgets for adequate cloud computing services.
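The showback and chargeback models described above both reduce to simple proportional arithmetic over measured consumption. A sketch, with invented usage figures and an assumed internal per-unit cost:

```python
# Hypothetical monthly consumption per business unit, in arbitrary metered units.
usage = {"sales": 420, "marketing": 180, "finance": 100}
unit_cost = 0.85   # assumed internal cost per unit for the billing model

def showback(usage):
    """Showback: report each unit's share of consumption; no money changes hands,
    but the report informs how next year's IT budget is allocated."""
    total = sum(usage.values())
    return {bu: round(100 * u / total, 1) for bu, u in usage.items()}

def chargeback(usage, unit_cost):
    """Chargeback/billing: transfer cost to each unit in proportion to its usage,
    priced so the IT department makes neither a profit nor a loss."""
    return {bu: round(u * unit_cost, 2) for bu, u in usage.items()}

print(showback(usage))               # percentage share per business unit
print(chargeback(usage, unit_cost))  # actual cost transferred per business unit
```

The difference between the models is not the arithmetic but who bears the number: showback only reports it, while chargeback moves it onto the business unit’s budget.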
THREE’S A CROWD
Is Your Cloud Primed for Success?
Tom Homer takes a closer look at customer centric clouds By Tom Homer, Head of EMEA and the Americas, Telstra Global Enterprises and Service
Introduction
In an effort to capitalise on the benefits of cloud computing, early adopters turned to multiple cloud vendors to satisfy their various infrastructure needs. For instance, you may have been working with a customer relationship management (CRM) specialist for hosting your customer insights, and a private cloud expert for financial data. This approach delivered initial gains as IT leaders found their way in the cloud, but it is not conducive to long-term success. Working with a variety of vendors can create complex environments that are hard to control, manage and integrate, while it can also lead to organisational silos, preventing collaboration and the easy transfer of data, limiting performance and the services delivered. Encouragingly, recent Telstra research, which surveyed 675 IT decision makers from across the globe around the cloud services being used by their organisation, suggests businesses are increasingly realising this, with three-quarters of global businesses wanting to procure services from a single provider, compared to using three concurrently. With cloud fast becoming a critical component of IT environments the world over, what steps can today’s businesses take to help ensure they build a platform that is designed for success today and in the years to come, without compromising on individual needs?
Outsource Cloud Management
The first step could be to adopt an infrastructure-as-a-service (IaaS) cloud model. By leaving your provider with the more routine tasks, such as hardware, data and server management, businesses can become empowered to focus on innovating and adding value to the organisation. What’s more, the benefits of IaaS – including improved security and efficiency, reduced costs, and optimised insights – closely align with IT departments’ modern IT objectives. Although most businesses have a clear understanding of IaaS’ advantages, research revealed that over half are yet to implement it due to concerns around relinquishing control of IT environments. As such, vendors in this space should work to alleviate and overcome such concerns across the business, while guiding you through any difficulties and reducing the impact of the initial implementation.
Think Global Across all industries, competition is fierce and increasingly not restricted by international waters. As businesses look to expand their offerings and grow international footprints, they should also accelerate innovation, provide the latest features and functions across geographical boundaries and time zones, as well as host data offshore to support business growth. If this aligns with your future business plans, then there is much value to be had from working with a single global cloud provider that understands and is familiar with a number of markets. Working with vendors with either a global reach or a global strategy for addressing market demand helps to ensure you can deliver a consistent and compliant experience, regardless of how many markets you are serving.
Go Hybrid As the cloud market settles, cloud vendors increasingly look to offer a portfolio of hybrid-services covering most, if not all, businesses’ cloud requirements. Remove the complexities of dealing with multiple vendors by working with a single provider capable of combining internal and external IT infrastructures, across a combination of private and public clouds, to help support your business outcomes. This hybrid IT approach is one we expect to see gather momentum in the months and years ahead. Make Your Cloud Customer Centric We are living in a buyers’ market – consumers expect to do what they want, when they want, how they want. And they are likely to take their business elsewhere if this isn’t on offer. IT is at the centre of this enablement, but with new services – from mobility through to social media – being created every year you need a cloud platform allowing you to quickly and easily take advantage of these innovations. Adopting an approach to help ensure you can rapidly launch the services and tools demanded by your customers and employees is critical to remaining competitive.
Schneider Electric White paper
Essential Tips for Monitoring Hybrid Cloud Environments
Michael Thompson discusses how best to monitor a hybrid environment.
CAUGHT OUT By Michael Thompson, Director, Systems Management Business at SolarWinds
Introduction
The cloud is arguably one of the most pervasive topics in IT today, and the adoption of cloud-based technologies is increasing dramatically. While for most companies the bulk of IT activity still involves monitoring and managing on-premise systems, we are seeing more and more companies looking at cloud as a primary option for new investment. Recently, industry analyst firm IDC predicted that the global cloud market, including private, public and hybrid clouds, will hit $118 billion in 2015 and crest at $200 billion by 2018. IDC also noted that 2014 showed a 25.9 per cent increase in cloud uptake over 2013, when the market was worth a comparatively small $76.1 billion.

As this shift occurs, businesses are often wrapped up in the initial application deployment and the logistics of the process, and, critically, the monitoring and management of these hybrid environments is often left as an afterthought. This is a mistake that can have long-lasting, damaging repercussions. To avoid getting caught out, here are five essential tips to take into account when thinking about how best to monitor a hybrid environment.

Your End Goal Will Change
With a hybrid cloud environment, the ultimate objective of the overall monitoring and management system often changes. Traditionally, on-premise IT monitoring and management has been implemented primarily to ensure that the individual components and systems – the servers, applications, networks and storage facilities – are available and performing at an optimum level. If they aren't, the issue will be flagged and the IT team will be able to quickly and easily pinpoint the problem and fix it. When cloud is introduced (Amazon EC2 or Azure, for example) this changes. The person managing the cloud portion of a hybrid system isn't responsible for pinpointing where and what the problem is. Instead, the primary objective is to be able to quickly and definitively determine who owns the problem: the application owner or the cloud infrastructure provider? To ensure that problems can quickly be attributed to the right place, the IT team will need to adopt a different way of thinking about monitoring.

Cloud Management Systems
In order to ensure that every business service is properly managed, and that problems can
Infoburst More and more companies are looking at cloud as a primary option for new investment.
be resolved quickly, it is essential that a cloud management strategy is introduced that outlines clearly defined points of demarcation. This will prove essential on a day-to-day basis, as such a system will ensure that issues are identified, flagged to the owner and resolved before client-facing services are disrupted.

To establish this management system, the business needs to work backwards to understand the components that are required to deliver each business service. If a range of services and infrastructure from both on-premise IT and off-premise cloud providers is required, then a system needs to be in place to ensure that any issue is identified quickly and resolved by the right person. Teams must be prepared before any system failure happens, so that if the scenario does arise, adequate data is available and a well thought out troubleshooting plan is already in place.

Know Your Cloud Provider
There are different types of cloud services. Some are SaaS-based (software-as-a-service), while other cloud services provide the infrastructure that is used to run all manner of applications. Regardless of what type of cloud service you are using, it is essential that you have clear SLAs in place before committing to the service. If you are dealing with a SaaS offering, where you have no management of the technology, you simply need to be able to identify and explain what a failure would look like and ensure that a metrics-based SLA is put in place, so that this can be shared with your provider if the service ever needs fixing.

However, in situations where the cloud provider is delivering infrastructure and you are installing and running software on top of it, it's a little trickier. Firstly, if an error occurs, you will need to determine whether it's your software or configuration that is causing the problem or whether the cloud provider is delivering inadequate performance.
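A minimal sketch of what such a metrics-based SLA check might look like in practice; the metric names, thresholds and sample values below are illustrative, not drawn from any particular provider's agreement:

```python
from dataclasses import dataclass

@dataclass
class SlaTarget:
    """One illustrative metrics-based SLA term agreed with a provider."""
    metric: str
    threshold: float
    higher_is_worse: bool = True

def sla_breaches(samples, targets):
    """Return the metrics whose averages violate the agreed targets."""
    breaches = []
    for t in targets:
        values = samples.get(t.metric)
        if not values:
            continue  # nothing gathered for this metric yet
        avg = sum(values) / len(values)
        exceeded = avg > t.threshold if t.higher_is_worse else avg < t.threshold
        if exceeded:
            breaches.append(t.metric)
    return breaches

targets = [
    SlaTarget("response_ms", 250.0),                       # latency must stay under 250 ms
    SlaTarget("uptime_pct", 99.9, higher_is_worse=False),  # uptime must stay above 99.9%
]
samples = {"response_ms": [180.0, 320.0, 410.0], "uptime_pct": [99.95, 99.97]}
print(sla_breaches(samples, targets))  # ['response_ms'] - hard evidence to share with the provider
```

Keeping the output as a list of breached terms, rather than a yes/no flag, gives the "hard facts" that the next paragraph argues are needed when a provider disputes fault.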
To accurately track this, a monitoring system must be in place to definitively determine where the problem lies. Hard facts will be needed to ensure that your provider resolves the issue quickly and without an argument. In this instance, an "I don't see any problems in my environment, it must be a fault on your end" will likely not be enough to force a cloud provider to take action.

Hybrid vs. On-Premise Solutions
When it comes to on-premise infrastructure, responsibility for repairing it when it breaks or is faulty lies solely with IT. One of the core advantages of the cloud is that if a problem arises with a cloud component, it doesn't need to be fixed. Instead, the faulty component can be killed and reprovisioned in a matter of minutes. While there are clear business benefits to this set-up, the IT team needs to be aware that an entire component can be wiped and replenished at any second, a concept that has never existed with on-premise IT. As such, applications must be architected appropriately, so the business service
provided by the application isn't impacted when it is replaced.

The Development Team
When looking to create a hybrid environment, it is essential that core monitoring decisions are settled up front. Traditionally, the on-premise monitoring format has meant that IT and operations have had to collaborate in order to manage the ownership of the hardware that their applications will eventually run on. With cloud resources, though, developers with access to a credit card or purchase order are able to invest in the infrastructure they want themselves. Without proper coordination, operations can end up taking responsibility for applications that are not designed for use in the cloud, which can ultimately cause serious problems.

This loops us right back to the need to ensure that we are monitoring for who the problem belongs to, not just what the problem is. It sounds simple, but the solution is often overlooked. Once a hybrid environment is established, the easiest way to prevent shadow IT is to proactively reach out and engage with development teams, to ensure that the monitoring systems being implemented make sense and have been agreed across the board.
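One way to make "who owns the problem" concrete is to agree the demarcation points up front and encode them where the alerting can use them. The component names and owners below are hypothetical:

```python
# Hypothetical demarcation map for one business service: each component in
# the delivery chain is assigned an owner before anything fails, so an alert
# can be routed immediately instead of triaged during an outage.
DEMARCATION = {
    "web-frontend":   "in-house development",
    "order-database": "in-house operations",
    "vm-hosting":     "cloud infrastructure provider",
    "object-storage": "cloud infrastructure provider",
}

def route_alert(component):
    """Return who owns a failing component, or escalate if it was never mapped."""
    owner = DEMARCATION.get(component)
    if owner is None:
        # A component nobody claimed is exactly the gap this exercise exposes.
        return "unmapped - escalate and extend the demarcation map"
    return owner

print(route_alert("vm-hosting"))     # cloud infrastructure provider
print(route_alert("message-queue"))  # unmapped - escalate and extend the demarcation map
```

The unmapped branch matters as much as the mappings: a resource a developer provisioned on a credit card will show up here first, which is one way to surface shadow IT before it causes an outage.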
Conclusion
Taking the time to plan ahead and pre-empt some of the worst-case scenarios in a hybrid environment is the only way to prepare for problems before they happen. By ensuring that the appropriate monitoring systems are in place, that points of demarcation have been made clear, that SLAs are pre-agreed with your cloud partners and that the operations team is in sync with the wider business, the IT team will be able to successfully implement a hybrid system without jeopardising the smooth running of the overall business.
Why Businesses Need to Abandon Consumer Cloud Services
By Chris Sigley, General Manager of Redstor
in November 2014, which led to users being unable to access their cloud-based files. While this outage only lasted a couple of days, many users were clueless as to what had caused it and whether their data would be intact when normal service resumed. These incidents bring to light the risk of relying on predominantly consumer-focused services, from both a security and a reliability perspective. There are steps that businesses can take to ensure that their employees remain compliant while still benefiting from remote and collaborative software.

Consumer Technology
As a business owner, the first step to facilitating secure remote and collaborative working is to recognise why employees use these consumer cloud technologies. Many staff use these services to drive productivity, and asking them what particular functions they find most useful will allow companies to determine what their current infrastructure might be lacking.

Educate Staff
Once you have started to understand why certain technologies are being used, businesses can work towards educating their staff on the risks associated with them. When technologies such as iCloud are used in the workplace, employees tend to be unaware of the security risks. While their intentions are often good, educating staff is an essential step in ensuring the potential threats are understood.

Employ the Right Technologies
After businesses understand why certain technologies are used within the workplace and have communicated the security risks to their employees, the next step is to look at their current infrastructure and available technologies to uncover how they might deliver the functionality employees require while remaining safe and compliant.

Conclusion
For many businesses, enterprise-grade remote working and collaboration is becoming essential.
By properly integrating these technologies into the workplace, organisations are able to ensure that all employees are using the same technology, which allows administrators to properly and effectively manage confidential data. These professional services also offer considerably more security and privacy than their consumer counterparts, such as two-factor authentication, integration with Active Directory and assurances regarding encryption and data sovereignty. Following the introduction of this type of technology into the workplace, companies can feel more confident that their staff are working productively while adhering to the acceptable usage policy. For the end user, the benefits over a consumer service are perhaps less obvious. While the functionality they receive will be almost identical, they will have a much safer and more secure collaboration tool.
Making the Most of Network Analysis
Infoburst Properly applied, analytics can identify potential bottlenecks and help make the network more efficient, secure and reliable.
IN THE CLOUD
Anton Basil addresses the challenges of data gathering in a dynamic cloud environment
By Anton Basil, Chair of the CEF Analytics Group
Introduction
"Why is the phone connection bad?" "Why is my website down?" "Why is this download taking ages?" The customer does not really want an answer to these questions. On one hand the answer is already known – "these things happen" – and on the other the customer is not so much asking a question as hinting that they will take their business to another provider if things don't get better. And that is why the provider really does need to know the answers, and will invest a lot in network analytics to find them.

Analytics have become a critical part of the network infrastructure, and not just for troubleshooting purposes. Properly applied, analytics can identify potential bottlenecks, help prevent performance degradation, and make the network more efficient, secure and reliable. In a dynamic cloud environment delivering services on demand, it is even more important to manage performance in real time to ensure a good user experience, but at the same time it is becoming much harder to do so. So, what are the challenges to be addressed? As a process, network analytics can be usefully subdivided into three functions: data gathering, data mining and reporting.

Data Gathering in the Cloud
Data gathering is the foundation of analytics: the more data about actual network behaviour that can be garnered across the entire infrastructure, the greater the chance of finding all the answers. But it is subtler than that, because experience teaches that some sorts of data are more useful than others, and it can be better to target essential
parameters rather than have them lost under a mountain of useless data. In addition, you do not want the data collection itself to impact the network's operation – avoiding, for example, measuring latency in a way that adds latency.

In a single static network, the difficulty of gathering data is proportional to the network's complexity: a problem that grew steadily as the network expanded, and may have kept pace with the IT department's increasing expertise. Such steady, organic growth is not possible in a dynamic, virtualized network, and there is a corresponding need to virtualize the monitoring and test process to keep abreast of changes in the network and to reduce the cost and labour of physical testing. But there is also a risk that the virtual test software will consume some of the processing resources and itself impact the measured performance. Under controlled laboratory conditions, steps can be taken to compensate for such errors, but it is not so easy to do this in a working environment.

Not only is the data being gathered on a moving target, the problem is compounded when the network spans several provider domains. Even when there is clear agreement about the definition of the various network parameters being measured, different providers might not measure or collate their data in the same way. So, from a data-gathering point of view, cloud services mean a quantum leap in complexity. We no longer have a process that evolves steadily as a single network grows; rather, we are forced to correlate inconsistent data across a shifting ecosystem. Trying to gather useful and reliable data in real time will only be possible when there
are global standards defining the key parameters and the way they are measured, allowing providers and different networks to mix and match data in a simple, reliable manner.

The Data Mining Challenge
Data mining is literally the analytics bit. The more data that has been gathered, the greater the chance of finding every answer, but it can take much longer to find those answers if too much irrelevant data is included. The key to data mining, however, is to know the right questions to ask – and that still depends on human experience and judgement. Even if we do know what questions to ask, the answers can only be as good as the data that has been gathered. So, data mining cannot be usefully applied until the data-gathering problem has been sorted. Once sufficient reliable and accurate data is available, and we know what questions to ask, what is needed is sufficient processing power to perform the analysis fast enough for the resulting output to still be relevant. In today's dynamic network environment, that means getting answers in near real time.

Relevant Reporting of Results
Reporting is no longer about churning out reams of statistics. Presenting information about a highly complex system to a human operator requires a more visual format. A topological map of the infrastructure, with key data or trouble spots highlighted at the right point on the structure so the operator can immediately see what is happening, can provide this network visibility. But reporting can also serve other audiences: an alarm system would require precise co-ordinates rather than a visual map, and an application would be less concerned with locating the source of degradation than with knowing its impact on performance parameters.
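The mining and reporting steps can be sketched minimally, assuming per-link samples have already been gathered; the jitter measure here (mean variation between consecutive samples) and the thresholds are deliberate simplifications:

```python
def link_stats(latencies_ms, sent, received):
    """Reduce one link's raw samples to average latency, jitter and loss."""
    avg = sum(latencies_ms) / len(latencies_ms)
    # Simplified jitter: mean absolute change between consecutive samples.
    jitter = (sum(abs(b - a) for a, b in zip(latencies_ms, latencies_ms[1:]))
              / (len(latencies_ms) - 1)) if len(latencies_ms) > 1 else 0.0
    return {"avg_ms": round(avg, 1),
            "jitter_ms": round(jitter, 1),
            "loss_pct": round(100.0 * (sent - received) / sent, 1)}

def trouble_spots(links, max_ms=100.0, max_loss=1.0):
    """Report only the links an operator actually needs to look at."""
    flagged = {}
    for name, (latencies, sent, received) in links.items():
        stats = link_stats(latencies, sent, received)
        if stats["avg_ms"] > max_ms or stats["loss_pct"] > max_loss:
            flagged[name] = stats
    return flagged

links = {
    "core-to-dc1": ([20.0, 22.0, 21.0], 100, 100),
    "dc1-to-dc2":  ([95.0, 180.0, 160.0], 100, 96),
}
print(trouble_spots(links))  # only dc1-to-dc2 is flagged (avg 145.0 ms, 4% loss)
```

Note how the reporting step discards the healthy link entirely: surfacing only trouble spots, rather than reams of statistics, is the point the paragraph above makes about presentation.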
The CloudEthernet Forum
A dynamic cloud environment becomes a sensitive organism where small local problems can cause ripple effects that lead to disastrous consequences on many levels: poor service performance, loss of critical business for the customer, reputation damage for the provider, and ultimately customer churn. At the same time, it is becoming harder to garner and analyse sufficient data about the inner workings of this complex environment. That is why network analytics is one of the five VASPA fundamentals being addressed by the CloudEthernet Forum (CEF): Virtualization, Automation, Security, Programmability and Analytics.

If a cloud services consumer experiences degradation in service performance, what can be done? The cloud services provider might offer to boost processing by spinning up further VM resources. The cloud carrier might provide additional bandwidth. Is this a cloud carrier or a cloud provider issue, or has the consumer got unrealistic expectations? In a competitive business environment, this is the sort of situation that can degenerate into
finger pointing and loss of business. What the CEF is doing is recruiting members from all these cloud stakeholder groups to work together on strategies to anticipate and resolve the challenges of a new and disruptive technology.

The cloud stakeholders being recruited by the CEF include major cloud consumers, who can help clarify what is wanted in terms of performance, in order to determine how best to measure it. For example: it is obvious that a cloud consumer subscribing to a streaming video service wants brilliant high-quality video, but the industry has found that this subjective experience depends on essential parameters like bandwidth, latency, jitter and packet loss that can be readily measured and analysed. Hence the CEF needs both cloud carriers and cloud providers on board to understand consumer needs, and to find ways to satisfy them.

This is as much about business practice and working relationships as technology: if the CEF can define common standards that make different provider and carrier systems compatible, would the cloud carrier allow the cloud provider some control of its network? When the consumer complains about service performance, can the provider have access to analyse both network and data centre performance and come up with the optimal balance between processing and bandwidth resources? Or would the carrier be allowed access to the provider's system to do the analysis and resolve the issue if asked?

Compare this with the mobile phone industry, which has already learned to iron out its differences and make service as seamless as possible. Mobile users expect reasonably consistent performance on their travels, so if they complain that a phone connection has dropped, they do not expect an argument between the various cell services about which network was responsible for the failure.
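The compatibility problem the CEF is tackling can be illustrated in miniature: two hypothetical providers report the same measurement in different shapes and units, and a shared definition of each parameter lets their data be normalised into one comparable record. The field names below are invented for illustration:

```python
# Provider A reports round-trip time in microseconds and loss as a percentage;
# provider B reports latency in milliseconds and loss as a frame count. A
# common definition lets both be reduced to the same record - the kind of
# alignment a standards body would pin down precisely.
def normalise_provider_a(report):
    return {"latency_ms": report["rtt_us"] / 1000.0,
            "loss_pct": report["loss"]}

def normalise_provider_b(report):
    return {"latency_ms": report["latency_ms"],
            "loss_pct": report["lost_frames"] * 100.0 / report["frames"]}

a = normalise_provider_a({"rtt_us": 24500, "loss": 0.5})
b = normalise_provider_b({"latency_ms": 31.0, "lost_frames": 2, "frames": 400})
print(a, b)  # both now use the same fields and units and can be compared directly
```

Without an agreed definition, each adapter function above encodes a guess about what the other party meant; with one, the adapters become trivial and the mined data becomes trustworthy across domains.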
Conclusion
This is how it should be with cloud services – and so the CEF is working on defining a standard interface and standard APIs between cloud providers and carriers to create an open cloud environment. This is key to enabling consistent data gathering and consistent definitions that will provide sufficient reliable input for data mining across multiple networks. How the data will be mined at sufficient scale is another issue, and the way the results will be reported or presented is another key concern for the CEF – all the more so since analytics output can provide essential data for automation, security and the other VASPA fundamentals.

To achieve these aims, and to create an open cloud environment fast enough to maintain the momentum of cloud migration, the CEF is inviting all types of cloud stakeholders – including systems integrators, NEMs and software developers, as well as the enterprise customers and service providers already referred to – to participate in the standardisation process.
Put the date in your diary
11 & 12 May 2015
DataCentres North is a Conference & Exhibition designed to meet the needs of the large and growing number of businesses and decision makers responsible for commissioning, designing, maintaining and operating datacentres, critical environments, and server and comms rooms across the region.
Why Here? Why Now?
Based in Manchester, DataCentres North is designed to meet the specific needs of the North of England's datacentres market. The show gives you access to the growing number of companies basing themselves in the North, many of whom do not attend the London-based events.
Emirates Old Trafford Manchester
The Conference
Featuring case studies, panel discussions and individual papers, the DataCentres North conference will address issues affecting both small and large datacentres, server and comms rooms, including:
• DCIM
• Legislation
• Connectivity
• Design
• Energy & Sustainability
• Cooling
• Cloud & Big Data
• Virtualisation & Storage
And much, much more
Be part of the programme - Share your experience by submitting an abstract via the website
For the latest information visit www.datacentresnorth.com Contact the DataCentres Team: +44 (0) 1892 518877 or email: firstname.lastname@example.org Supported By :
How unified access supports true BYOD
Johan Ragmo explains how to protect your network investments
By Johan Ragmo, Business Development Director, Alcatel-Lucent Enterprise
Introduction
Capacity and security have always been a challenge for the enterprise network, but the convergence of personal and mobile devices and the growth of M2M communication is forcing IT departments to re-examine their network infrastructure. Networks that were designed and built a few years ago are not prepared to support the requirements of today's technology and applications. They were created to handle predictable, static traffic flows that originated mostly from wired devices, not built for mobile smart devices with their array of apps. As these become the norm in both personal and work lives, they are rapidly usurping desk-bound PCs as the preferred devices. IDC has forecast that by 2015, global shipments of tablets will surpass those of PCs, and that by 2017, 87 per cent of connected device sales will be tablets and smartphones.

Today's networks
In practice, existing corporate infrastructure is having serious delivery issues. The inability of the network to meet the requirements of today's applications results in impairments such as increased jitter, which can decrease the quality of experience (QoE) for real-time applications such as voice and video. What's more, the user's experience is inconsistent as they move around the enterprise, particularly from a wired to a wireless network. Often, the wired and wireless networks still behave as separate environments, each with its own authentication process and unique
set of policies. This means users may still have different experiences when using a wired or a wireless device. It is also complicated for enterprise IT departments: there are two separate management systems, two sets of policies and two authentication processes, making maintenance and troubleshooting more difficult. A fully functioning mobile enterprise needs network unity.

In order to improve and homogenise the employee experience across both wired and wireless networks, enterprise networks do not just require extra capacity but a comprehensive network management system to support employee mobility. Currently, however, many enterprise networks lack the ability to manage the levels of connectivity required to support the growing number and diversity of devices, real-time communication and media-rich applications.

The right access
The problem lies in the administration of users and the heterogeneous devices they use under a BYOD policy. A unified access approach simplifies network management to provide a consistent experience across the entire network – both wired and wireless – and enables organisations to differentiate between corporate and personal devices and applications.

Setting policies
By setting policies in this way, the IT department simplifies the management of devices so that, for example, a new employee can add their own devices to their profile on the network. New employees can add their own devices under a
unified access with BYOD approach, and the portal automatically gives them the right access depending on which device is in use at any given time, with on-boarding handled automatically. All policies and configuration for the different devices are made by the IT department initially in the portal, and once the employee makes the initial engagement, their profile can be built up accordingly. When the same employee gets a new personal iPad which needs to be authenticated on the network, they don't have to go back to the IT department to undergo a complex process – they can just go to the online portal and indicate that they are using a new device, which already has its network access settings preconfigured.

It's worth noting that the demand for bandwidth doesn't just come from humans with smart devices. We must not neglect machine-to-machine communication here either – the so-called Internet of Things. Machine-to-machine (M2M) connections are set to rise dramatically over the next decade. It has been predicted that by 2022 there will be 18 billion M2M connections globally (Machina Research), up from three billion connections in 2012, driven in part by greater mobile access and falling component costs, along with developments in device intelligence and the growing applications market. Wired devices such as security cameras with sensors, smart meters or GPS units all require that data be sent across the network. The competitive advantages that M2M brings to the enterprise – cost reduction through automation, the collection of business or operations data, and security or regulatory compliance monitoring – can lead to significant efficiencies and greater operational agility. The prioritisation of these devices must also be factored into any unified access approach.
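As a sketch of how such differentiation might be encoded, the rule below maps user role, device type and management status to an access profile; the VLAN names, QoS classes and app lists are purely illustrative, not any vendor's policy model:

```python
# A hypothetical on-boarding rule: the profile a device receives depends on
# who the user is and whether the device is corporate-managed - the
# differentiation a unified access approach makes possible across both
# wired and wireless networks.
def access_profile(role, device_type, corporate_managed):
    if device_type == "m2m-sensor":
        # Machine traffic is isolated and prioritised for its telemetry role.
        return {"vlan": "m2m", "qos": "guaranteed", "apps": ["telemetry"]}
    if not corporate_managed:
        # Personal (BYOD) devices get internet plus a restricted app set.
        return {"vlan": "byod", "qos": "best-effort", "apps": ["mail", "intranet"]}
    if role == "engineer":
        return {"vlan": "corp", "qos": "priority", "apps": ["all"]}
    return {"vlan": "corp", "qos": "standard", "apps": ["mail", "intranet", "crm"]}

# The same employee's new personal iPad lands on the BYOD profile with no
# trip to the IT department - the policy was set once, in advance.
print(access_profile("engineer", "ipad", corporate_managed=False)["vlan"])  # byod
```

Because the rule runs the same way regardless of whether the device connects over wire or wireless, it also illustrates why a single policy set beats maintaining two.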
Unified access provides real business benefits: the scalability of being able to start with just a handful of employees and increase usage as demand grows, as well as the ability to use a single solution to support a virtual desktop approach with advanced applications across voice, data, LAN and WLAN.

Obviously, not all organisations have the same needs, or require the same level of infrastructure renewal. A unified access management system can be implemented to update existing technology and protect past investments in the form of a simple software upgrade, or it can form part of a completely new solution to ensure functionality in the future. This flexibility offers customers the choice of how to implement a unified access strategy when upgrading their network systems, keeping that strategy in line with business requirements and budget. Whichever option is chosen, ensuring continuity across both wired and wireless networks will significantly help to cope with demand as smart devices and their demanding applications continue to increase in the workplace.
Evolving mobility
BYOD, the rise of mobile computing and the growth in M2M applications are not just things that are going to happen – they are happening. People's behaviour and expectations have changed as a result of mobile consumer technology. The way we use and interact with technology is no longer a static, desk-bound affair. Without a properly designed and implemented unified access strategy, the burden placed on the existing network architecture by emerging technology and applications will only continue to grow. The result will be frustration for employees, customers and clients when mobility and connectivity are not of the standard that they are accustomed to in other areas of their lives.

Networks must be ready to cope with this future demand, and as WLANs become increasingly mission-critical as mobility evolves, a unified access strategy can help to ensure optimal performance and prioritisation across the enterprise network, based on business needs and on employee role, requirements and location at any given time. www.alcatel-lucent.com
Infoburst BYOD helps you make the most of cloud computing
Rack Mounted 3.75kw Single Phase
Rack Mounted 2kw Single Phase
Mafi Mushkila Ltd
Datacentre Testing, No Problem
14MW Heat Load Available for rent Floor Standing – 3-Phase & Single Phase
Visit us at Datacentre World, Stand G104 for a chance to win a 42” TV www.mafi-Mushkila.co.uk +44 1243 575106
Choosing the Right Companion
David Leyland goes in search of shortcuts to the cloud
TO THE CLOUD
By David Leyland, Head of Business, Next Generation Data Centre UK&I at Dimension Data
Introduction
Cloud solutions make many promises. The technology is reliable, it's easy to use, and there are the economic benefits of using less hardware and paying only for what you use. It makes sense, then, to get to the cloud as quickly as possible. But rather than migrate everything at once, businesses will likely transition applications slowly over a period of time. Indeed, a firm may not want to move certain workloads to the cloud for reasons relating to functionality, risk or cost. As such, a critical first step is being able to migrate applications or workloads while continuing to meet the organisation's service-level and governance requirements. It's therefore imperative that organisations get this right, because if you get off to a bad start in the cloud, you can find yourself on the wrong road.

To overcome the many obstacles at this initial stage of the journey, some companies are implementing a hybrid model that allows them to use different solutions for different workloads. So, as they make the transition, they'll need a mix of public and private cloud, and physical servers, to accommodate different workloads and applications.
The Key Questions
While it may sound reasonably straightforward to put such a model in place – find a cloud software provider, begin migration – not all providers will have a setup that will work in complete synergy with all applications and workloads. Many businesses don't carry out sufficient due diligence and end up signing contracts with providers that just aren't right for them, negating the benefits that the cloud undoubtedly offers. To prevent this from happening, organisations must ask a potential cloud provider some key questions. Ensure compatibility with existing applications by finding out whether a provider uses a similar
Infoburst Organisations should ensure that their cloud provider offers a consistent service in all regions in which they operate before embarking on a mass migration.
hardware infrastructure. Indeed, businesses should go as far as to ask how the cloud itself is built. For example, what hardware does the provider use for load balancing, or what type of firewall is in place? Will the number of categories of tiered storage impact performance? There are many factors that could influence the efficiency of workflow, and it's important that companies don't have to begin building from scratch with their providers.

A truly integrated hybrid solution requires automation between on-premise workloads and those in the public and private cloud. This ensures that applications perform as they should and that businesses have the control they need. Again, organisations must know if this is possible from the start, and it could hinge on the hardware in use, or on whether the software allows the flexibility to switch and work between the two environments.

Cloud migration is often a huge undertaking and is not a 'do-it-yourself' exercise. It's therefore vital for businesses to choose a provider that not only offers the right infrastructure, but also has the skills to help plan and execute the migration. As mentioned above, consistent performance is achieved when on-premise and cloud workloads are integrated, and at some point companies may want the flexibility of being able to migrate back to an on-premise data centre. Therefore, firms need to be with a provider that has the technical know-how to oversee and action such options.

Many see the cloud as a tool that can facilitate global expansion. Accessible from anywhere in the world, it has further lifted geographical constraints. However, international organisations should ensure that their cloud provider offers a consistent service in all regions in which they operate before embarking on a mass migration.
From the database through to the web access layer, and everywhere in between, organisations must know that they can pick up a phone, talk to someone in the same country and solve problems – ensuring that they aren't left stranded. The provider's global coverage is also important for businesses considering entering new markets. Before investing in large and expensive data centres, companies are first able to test the waters through the cloud, enabling them to see, with relatively little expense, whether expansion into that market would be successful or not – mitigating some of the risk associated with new markets.

Realising the Benefits
The cloud can sometimes be seen as a Venus flytrap: easy to get into, hard to get out of. Therefore, once a company has made the step, it must ensure that it is realising the promised benefits, which means looking at where
and how it can increase the utilisation of the cloud environment. To achieve and maintain a right-sized environment, businesses must work alongside providers to make sure the required tools are in place. For example, organisations must be able to use the application programming interface and the user interface in the same way, both programmatically and to obtain information about the cloud environment.

Furthermore, the quality of the reporting is vital in enabling organisations to identify and interpret trends. Producing a spreadsheet with disk space, the amount of CPU and the amount of memory is all well and good, but it's not telling firms what they are actually using and what factors are going to impact their activity. A high-quality report can be used to set up automation, scaling operations up and down as needed – for example, by automatically turning off a machine that nobody's logged into for a few weeks, or that's only used during the day when you're running tests. Through automation, companies can eliminate waste, enabling an accurate real-time view of the running costs of their cloud applications.

There may not be any shortcuts to the cloud and its promises of business value, but the benefits are worth the journey. It can be a long road, and migration will undoubtedly be a smoother experience when carried out with a cloud provider that has the right enablers in place – empowering businesses to reach their goals.
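The idle-machine rule mentioned above can be sketched as a simple selection function. In a real deployment the last-login data would come from the provider's monitoring API; the machine records and 14-day window here are invented for illustration:

```python
from datetime import datetime, timedelta

def machines_to_stop(machines, now, idle_days=14):
    """Pick cloud machines nobody has logged into for a while - a sketch of
    the waste-elimination rule described above. Machines marked always_on
    (licence servers, say) are never selected."""
    cutoff = now - timedelta(days=idle_days)
    return [m["name"] for m in machines
            if m["last_login"] < cutoff and not m.get("always_on", False)]

now = datetime(2015, 3, 1)
fleet = [
    {"name": "test-runner-3", "last_login": datetime(2015, 1, 20)},
    {"name": "build-server",  "last_login": datetime(2015, 2, 27)},
    {"name": "licence-host",  "last_login": datetime(2015, 1, 1), "always_on": True},
]
print(machines_to_stop(fleet, now))  # ['test-runner-3']
```

Fed by a high-quality report rather than a raw resource spreadsheet, a rule like this is what turns reporting into the automation, and the cost savings, the article describes.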
Infoburst Through automation, companies can eliminate waste, enabling an accurate real time view of the running costs of their cloud applications.
Environ SR Racks
Think big, think Environ SR. Designed to safely and easily accommodate the most demanding server and equipment technology, choose from two colours and sizes up to 47U high and 1200mm deep, then pack them with up to 1300kg of kit.
Want to save space, time and money?
Contact us +44 (0) 121 326 7557 email@example.com www.excel-networking.com