
Whitepaper

Cloud, Colo or In-house?

Today's data center faces many challenges, but when it comes to solutions, one size doesn't always fit all.

Introduction

Rapid technology growth and adoption means the modern data center has to serve many masters: it has to be robust, scalable and secure while consuming as little power and staff resources as possible. On top of that, it must have the right combination of physical and virtual infrastructure to protect critical applications from downtime. Oh, and did we mention there's a budget? The good news is that organizations aren't stuck with a one-size-fits-all approach to implementing a modern data center strategy. In fact, it's possible to leverage the benefits of each appropriate solution while maintaining the right level of control and expenditure. Modern businesses have three options for data centers: in-house, colocation or cloud services, and those options have nearly endless permutations of strategies. Figuring out which one is right for you requires some strategic planning.

Catalysts of Data Center Evolution

Data growth

As of 2012, 2.5 quintillion bytes of data are created every day. This is the "data explosion" referenced by IT professionals, who are constantly evolving their architecture to keep up with the avalanche of data being generated. New data is coming from a multitude of sources: digitized medical records and images, social media, internet photo storage and sharing, around-the-clock scientific monitoring equipment, and all branches of the scientific, education, medical, business and financial sectors that are adopting new technology.

New data is also being created in mass quantities by regulated or legislated industries that have no choice but to increase storage and adapt security protocols to ever-changing compliance guidelines. For example, the American medical industry is racing to comply with the HITECH Act (Health Information Technology for Economic and Clinical Health Act), part of the American Recovery and Reinvestment Act of 2009, which mandates that all practices implement an electronic medical and health records system by 2014. In this industry alone, the ability to digitize photographs, data and images from medical devices like x-ray machines has caused a complete shift in the data center industry. In fact, the International Healthcare Data Management Survey reported that digital storage of medical images such as X-rays, MRIs and ultrasounds was responsible for a 25% to 50% increase in data from 2011. As technology improves, data will continue expanding at this rate or higher for the foreseeable future.

© Copyright 2012 GlassHouse Technologies, Inc. All rights reserved.

Power and space requirements

Whatever the catalyst, IT infrastructure will keep growing and demanding more power and space, which, in turn, will lead to increased demand for greater efficiency. Certainly, server and storage virtualization can improve how existing resources are used and can sometimes circumvent the need to buy additional resources; however, virtualization also means more complexity in the infrastructure. IT departments are also facing green (eco-friendly) directives stemming from a growing worldwide concern about finite natural resources, as well as the fundamental increased cost of powering and cooling IT infrastructure. Organizations can leverage virtualization, alternative energy sources, deduplication and other technology that increases efficiency and maximizes resources, but they have to do so with an eye on security, high availability, disaster recovery and their staff's capacity to manage the combination of technology and infrastructure that is affordable and best for the business.

Managing legacy systems and legacy storage

In recent years, economic challenges around the world forced many businesses to use legacy systems and storage creatively while stalling IT infrastructure growth and tabling discussions about migrating to new technology. With some markets now picking up, demand for migration, consolidation and upgrades is growing. IT managers have to decide whether a lot or a little change is required, what is stable enough to be recycled or repurposed, and whether to migrate to service-oriented architecture or more advanced solutions.

Security concerns

The Second Annual Benchmark Study on Data Security conducted by the Ponemon Institute concluded that data breaches are rising by more than 30% year over year, with most organizations reporting they've been breached in the past year. Even one data breach incident can cost an organization thousands of dollars in fines and lost consumer confidence. In this digital age, international cybercrime is a full-time underground industry, with cyber-predators constantly on the hunt for valuable digitized data. Once advanced malware infiltrates a business it can gain a foothold inside the enterprise network and penetrate the master key database for authentication tokens, which in turn provides the keys to every other system those tokens secure. Today's cybercrime is sophisticated and highly planned; data center strategy must include provisions for human and software vulnerabilities.

Another security concern is the Bring Your Own Device (BYOD) trend. Nearly all employees have smartphones or tablets, and they're bringing them to work. IT managers have to figure out guidelines for personal phones, tablets and flash memory drives within the business, and ensure that security monitoring procedures or services allow employees to consolidate to one device when appropriate while still maintaining control over sensitive data.

© Copyright 2013 GlassHouse Technologies, Inc. All rights reserved.


Remote access is a must-have for at least a percentage of an organization's workforce, and that percentage continues to grow as the workforce continues to decentralize. Safe remote access requires a Secure Sockets Layer Virtual Private Network (SSL VPN) connection, a firewall on the network, an audit trail and role-based access controls. Additional precautions are time-out parameters, anti-virus software, data encryption and download prevention. All of these must be considered in any comprehensive data center strategy.

Data Center Challenges

[Figure: today's data center challenges include Compliance, Disaster Recovery, Legacy Systems, Power & Space and Data Growth.]

Dealing with Downtime

Recent events all over the globe have reminded us that devastating downtime can be caused by sudden natural disasters or acts of aggression. However, most downtime is caused by equipment failure or simple human error. One of the first things to consider in a modern data center strategy is how much data and time the organization can afford to lose. The two primary methods of determining this are the Recovery Point Objective and the Recovery Time Objective.

Recovery Point Objective

The Recovery Point Objective (RPO) is the threshold of how much data you can afford to lose since the last backup. Defining your RPO should take into account your organization's tolerance for risk. If you don't already know your current RPO, you can figure it out using a bottom-up method: start by examining how frequently backup takes place. Since backup can be intrusive to systems, it is not typically performed more frequently than several hours apart. This means that your backup RPO is probably measured in hours or days of data loss.
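As a rough sketch of this bottom-up method (the backup schedules and numbers below are illustrative, not from the whitepaper), the worst-case data loss window for evenly spaced backups is simply the interval between them:

```python
# Sketch: worst-case RPO implied by a fixed backup schedule.
# Data written since the most recent backup is what would be lost.

def worst_case_rpo_hours(backups_per_day: int) -> float:
    """Worst-case data loss window, in hours, for evenly spaced backups."""
    return 24.0 / backups_per_day

# Nightly backups: up to a full day of data at risk.
assert worst_case_rpo_hours(1) == 24.0
# Backups every six hours: up to six hours of data at risk.
assert worst_case_rpo_hours(4) == 6.0
```

Comparing this implied RPO with the organization's stated risk tolerance shows quickly whether the current backup schedule is adequate.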

Recovery Time Objective

The second measure, the Recovery Time Objective (RTO), is the threshold for how quickly you need to have your application service restored. Using these two primary measures will help you understand your cost of downtime and the risk that the organization can tolerate. Here's a simple way to estimate the average cost of a downtime occurrence:

(To + Td) × (Hr + Lr) = Cost per occurrence

• To = Length of Outage
• Td = Time Delta to Data Backup (How long since the last backup?)
• Hr = Hourly Rate of Personnel (Calculate by monthly expense per department divided by the number of work hours.)
• Lr = Lost Revenue per Hour (Applies if the department generates profit. A good rule is to look at profitability over three months and divide by the number of work hours.)

Finding the right balance of features and price to meet RPO and RTO requirements is one of the most critical things a business can do.
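The estimate above can be sketched in a few lines of Python; the dollar figures in the example are purely illustrative:

```python
def downtime_cost_per_occurrence(to_hours: float, td_hours: float,
                                 hourly_rate: float,
                                 lost_revenue_per_hour: float) -> float:
    """Estimate (To + Td) x (Hr + Lr), the average cost of one outage.

    to_hours: length of the outage (To)
    td_hours: time since the last backup (Td)
    hourly_rate: hourly rate of personnel (Hr)
    lost_revenue_per_hour: lost revenue per hour, if any (Lr)
    """
    return (to_hours + td_hours) * (hourly_rate + lost_revenue_per_hour)

# Example: a 4-hour outage, 2 hours since the last backup,
# $500/hr in personnel costs and $2,000/hr in lost revenue.
cost = downtime_cost_per_occurrence(4, 2, 500, 2000)
assert cost == 15000
```

Running the calculation for a few plausible outage scenarios makes it much easier to compare RPO/RTO targets against the price of the infrastructure needed to meet them.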

Reducing the footprint

As energy consumption rises, organizations are increasingly under pressure to reduce their environmental impact. In some regions, governments and businesses must report their environmental impact, and it's anticipated that this type of regulation will spread worldwide. In anticipation of increasing environmental legislation, the Carbon Disclosure Project encourages businesses to voluntarily issue Corporate Responsibility Reports on their environmental impact; no doubt those at the forefront will have a marketing edge with environmentally conscious consumers. At the data center level, IT managers must incorporate an understanding of how technology platforms consume electrical resources into their overall strategy.

Reducing IT overhead

Reducing IT overhead helps managers focus resources on IT strategy and their core business. Evaluating data center standards and policies for consistency or overlap, outsourcing parts of the data center and leveraging flexible resources are all viable options that can be evaluated as part of the overall strategy.

Data center options

Modern data centers must balance the need for storage scalability, environmentally (and budget) conscious power options and appropriate levels of security with downtime tolerance and overhead and budget constraints. When it comes to creating a strategy, there are three types of data centers: in-house, cloud and colocation facilities. Each has benefits and drawbacks.



In-house
• Pros: Total control; Fast access; Potentially more secure
• Cons: Expensive; Labor intensive; Geographical risks

Cloud
• Pros: Flexible pricing; Scalable; Potentially reduced costs
• Cons: Risk of downtime; Possible rigidity; Security risks

Colocation
• Pros: Infrastructure control; Increased physical security; Reduced costs
• Cons: Potentially more expensive; Access restrictions; CapEx & OpEx

In-house data center: Pros

An in-house data center is built, owned and managed entirely by the organization. The infrastructure and equipment are not shared with or maintained by anyone outside the business. Organizations that adopt this approach enjoy total control of infrastructure cost, management and security. They can also ensure everything is set up for maximum recoverability, via redundant power supplies from multiple providers and generator-powered backup facilities, in order to ensure uninterrupted system availability. Fast access to the premises is another perk of the in-house data center: if something goes wrong it's quickly remedied, with no travel time for staff or permissions required for entry and modifications.

In-house data center: Cons

However, the benefits of an in-house data center come with a hefty price tag. The physical facilities are time-consuming and expensive to build or set up; the facility and infrastructure must be temperature controlled, maintained and upgraded over the years; and it must be staffed. Additionally, servers and arrays come with big capital expenditure costs that are never really optimized until they start to reach capacity; by then, these large-ticket items are often at end-of-life. While in-house data centers offer more infrastructure security, managing physical security can be difficult. An in-house program also requires in-house staff trained on the hardware and software purchased through capital expenditures, which means recruiting, onboarding, training and retention costs. Besides the price tag, the greatest detractor of an in-house data center is that all of your data and applications are in the same geography; in other words, all of your eggs are in one basket. In case of a geographical disaster, the likelihood of losing everything is greatly increased.

The pros and cons of setting up an in-house data center must be weighed carefully. While an in-house data center gives the organization total control over storage scalability, power consumption, security and footprint, it drastically increases IT overhead, has unique security concerns, carries high disaster recovery and business continuity costs and may pose significant risk of long-term downtime and permanent data loss.



Cloud Services

Cloud services come in a variety of flavors, and choosing the right one requires balancing budget against the ability to meet service levels for data and applications. Speed to market is a key driver for cloud-based strategies, as this approach offers greatly reduced provisioning times. Virtual server sprawl is also behind some cloud strategy decisions: organizations paying for virtual machines on a monthly basis can simply turn off an unused machine or set an expiration date rather than letting it run idle.

Navigating the complexities of cloud can be confusing. Do it wrong and you'll spend more than necessary or put the business at risk of downtime. Do it right and you'll be able to take advantage of the scalability, decreased footprint and reduced resource consumption while achieving a security model that matches your business and compliance needs and meets the organization's RPO and RTO goals.

Private clouds, hybrid IT and other related changes are creating integrated ecosystems that will all have a major impact; they will enable a wide range of new applications and services while raising many new challenges. In this emerging world, no one platform, form factor, technology or vendor will dominate, and managing this diversity will be an imperative.

Public clouds: Pros and Cons

Public cloud services are appealing because you only pay for what you use and the service is scalable. Because they're an operational expense (OpEx) instead of a capital expense (CapEx), they're also more favorable for an organization's cash flow. However, because public clouds share infrastructure and hardware among many organizations, they offer little or no control over the underlying technology infrastructure. While the major perk of public cloud services is cost savings, the major drawback is mapping your security model to your vendor's cloud implementation; in addition, effective cloud strategies still require process, policy, chargeback and organizational changes designed for a cloud environment.

Private clouds: Pros and Cons

Conversely, private cloud services do not share hardware or infrastructure, and are the response to the demand for the more secure, highly available infrastructure required by legacy applications. Benefits of private clouds include the ability to provide services over a private intranet or via a private data center; increased fault tolerance, business continuity and disaster recovery capabilities; and increased security (though public cloud advocates may argue that security in the public cloud can actually be better than in a private data center). Furthermore, private cloud options can allow for more customized security policies and infrastructure, as workloads can be carefully allocated to public or private cloud resources per defined security models. However, private clouds tend to integrate incumbent technologies and are expensive to build and maintain.

Hybrid clouds: Pros and Cons

A hybrid cloud is a potential answer for organizations that want to take advantage of cloud benefits for non-critical workloads while being assured of a certain resiliency for critical workloads. Hybrid clouds also offer public cloud scaling beyond private capacity: they enable customers to build their data centers to accommodate normal usage levels rather than peak usage levels, then 'burst' into the cloud for peak usage. However, as with a private cloud, a hybrid cloud requires more maintenance and staff attention than a strategy of 100% public cloud.



Multi-tier clouds: Pros and Cons

A multi-tier cloud is built to provide the full range of cloud service levels with corresponding tiers of cost. Because all workloads are not equally important, a multi-tier cloud allows organizations to prioritize their application components and take advantage of the cost savings of lower-priority service levels. While benefits include cost savings and increased security where necessary, a multi-tier cloud requires more planning, strategy review and maintenance than the other approaches to cloud.

Co-location data center: Pros

The last option, a colocation data center (colo), allows organizations to place their hardware in a data center owned and maintained by someone else while using the provider's bandwidth as their own.


In a colo setup, the client maintains total control over the infrastructure, which provides additional security. Maintenance and staffing costs are less than for an in-house data center, yet clients are assured of an as-good or better level of reliable power, cooling and communication infrastructure than they'd have in an in-house data center. Colo clients can also match data and applications to the appropriate level of security, backup and recovery, and access. Colo clients generally enjoy cost savings on bandwidth and high availability services while maintaining ownership of hardware and software. Owning hardware and software means the client is in control of upgrades and expansion instead of lobbying the provider for improvements. And, if an organization moves, there is no downtime associated with server relocation. A colo strategy provides the appropriate and required levels of flexibility and security while offering cost savings around the mechanical and engineering components of a data center.

Co-location data center: Cons

Colocations require more maintenance than cloud and can be more expensive than basic Web hosting. For example, when a server needs to be upgraded, the client has to buy and install the hardware. Physical access to the facility can be inconvenient; IT staff have to travel to the location during the provider's service hours. Colos also require CapEx and ongoing maintenance OpEx, which increase overhead for the IT department. Colo service pricing can also fluctuate on a monthly basis depending on server traffic: increased traffic from a marketing promotion, sales effort or a migration can cause a dramatic increase in the monthly bill. In the end, a well-planned colocation strategy can save an organization time, money and stress while decreasing the footprint and risk of downtime.



A tiered approach to data center challenges

As we've seen so far, there are several solutions to today's data center challenges. For example, an efficient and effective tiered approach to data center management may look like this:

Tier 1 (In-house or Colo): Critical applications; Critical data
Tier 2 (Private cloud): Bespoke in-house applications; Specialized applications
Tier 3 (Public cloud): E-mail applications; Back-office applications
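The tier assignments above can be captured in a simple lookup structure. A minimal Python sketch follows; the tier labels and workload names are illustrative, not prescribed by the whitepaper:

```python
# Map each tier (and its hosting model) to the workloads assigned to it.
TIERS = {
    "tier1_inhouse_or_colo": ["critical applications", "critical data"],
    "tier2_private_cloud": ["bespoke in-house applications",
                            "specialized applications"],
    "tier3_public_cloud": ["e-mail applications",
                           "back-office applications"],
}

def tier_of(workload: str) -> str:
    """Return the tier a workload is assigned to, or 'unassigned'."""
    for tier, workloads in TIERS.items():
        if workload in workloads:
            return tier
    return "unassigned"

assert tier_of("critical data") == "tier1_inhouse_or_colo"
assert tier_of("e-mail applications") == "tier3_public_cloud"
```

Keeping the mapping explicit like this makes it easy to review which workloads are paying for tier 1 protection and which can safely ride on cheaper service levels.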

Tier 1 / critical applications

Critical applications that run the business may be especially important in regulated or legislated industries, or in organizations that deal in or house sensitive or confidential data. For regulated industries, non-compliance fines alone can ruin an otherwise profitable year, and non-compliance incidents that make headlines (such as security breaches) can cripple the reputation and profitability of a business for years. Therefore, tier 1 applications require more of the IT budget for security, preservation and control; they could live in the in-house facility or in a colocation.

Tier 2 / specialized or in-house applications

Applications that are specialized for the business, or that are bespoke in-house applications, can reside on public or private clouds that provide "just enough" control and security. With tier 2 applications, there is an opportunity for middle-ground cost savings by not over-protecting what is not critical but is still important.

Tier 3 / back office applications

Email services and office applications can utilize Software-as-a-Service (SaaS) in the public cloud to reduce the data center footprint and overhead costs when security and downtime are not of paramount importance.



Conclusion

At some point, every organization will face a data center strategy dilemma. The good news is that technology has provided a host of options for just about every budget and unique set of requirements. While each of the three types of data center has its own strengths and weaknesses, when planning a data center strategy there's no need to put all of your eggs in one basket. There isn't any one right answer that will fit every IT department, and the decision won't be a static one. Cloud environments will be an ever more complex mix of services, data and applications in the public cloud, hybrid cloud and private cloud, and these are all likely to be extensions of the in-house data center and/or colos. Some data and applications will require pure colos and some will require managed cloud service providers. Regardless of the combination, all will have to integrate and interact with the others. The technology will be constantly changing as needs change, as data becomes less sensitive or as new technologies and applications are introduced. Time spent up front evaluating the organization's needs for storage, power consumption, security, downtime tolerance, data center footprint and overhead capacity can help you create a data center strategy that gives you everything you need without paying for anything that you don't.

About GlassHouse

GlassHouse guides customers through the complexities of cloud, virtualization, storage, backup and security through vendor-independent data center infrastructure consulting and managed services. GlassHouse does not sell any product, a principle that enables us to provide objective recommendations and integration strategies. We consider the people, processes, policies and technology already in place while creating a customized plan that mitigates security and non-compliance risks, improves cost and service efficiency, and enables IT departments to become true service providers for their organization. The depth and breadth of our expertise has been developed through more than 17,500 engagements with more than 12,000 clients. For more information visit www. or visit the GlassHouse blog for expert commentary on key data center issues. Twitter users can follow us at @GlassHouse_Tech.



