
Keeping the Peace in the Modern Data Center

Xangati Blog
Atchison Frazer, Vice President, Marketing

August 19, 2015


In the early days of enterprise computing, IT infrastructure performance management was relatively straightforward. Mainframe computers existed in a glass house, literally, and systems administrators were the only ones with a key to the door. No other personnel were allowed in or out, and no other hardware or software was installed unless it was done by the person with the key to the glass house. There was also relatively little finger pointing, because end users had dumb terminals on their desktops with apps delivered via thin client. As Moore's Law took hold, the same power a mainframe delivered centrally shrank to a microcomputer form factor with greater ease of use and the ability to be moved around, networked and run applications locally, with operating systems evolving to accommodate personal computers and client-server models. Suddenly, even though costs per CPU were going down, the resources required to manage both data center and distributed computers were going up. As networking and networked storage expanded to accommodate the growth in always-on computing, the need for Internet connectivity and even external data center compute resources started to gain currency.


As compute resources became increasingly virtualized on and off premises, with networking and storage layers sprawling and external compute infrastructure connected in, the system administrator's job of maintaining on/off and hot/cold status for all of the IT infrastructure suddenly grew quite complicated. At about the time the virtualization layer was introduced, abstracting physical infrastructure resources in addition to adding applications and end-user computing to the hypervisor, the whole visibility ballgame changed dramatically. Performance monitoring became a calculated guess, root causes were difficult to pinpoint, and the finger pointing with crossed arms usually went in the direction of networking and storage. Converged infrastructures designed both to save costs and to drive greater throughput and performance complicated the blame game even further. Now the business side is driving a top-down mandate to save IT costs wherever possible by commoditizing hardware, leveraging open source software and, in many cases, bursting traffic and users out to a hybrid-cloud infrastructure, especially as line-of-business managers increasingly demand liquid SaaS-based applications.


Following the evolution curve of infrastructure performance management, whereby most legacy data centers sprang up with multiple vendors and disparate, dispersed assets, the challenge of gaining a 'single source of truth' for performance management analytics has only become harder, especially as service assurance models demand end-to-end visibility into end-user quality of experience. How, then, is a virtualization systems administrator expected to "keep the peace," if you will, across multiple layers of data center, networking, storage, compute, application and cloud infrastructure? Here are the eight great questions to ask as you evaluate whether virtualization management tools provide true cross-silo visibility and service assurance analytics to avoid the IT blame game and eliminate finger pointing altogether:

1. How much do outages and latency really cost you in time?
2. What are the operating costs for remote site tech support?
3. What are the intangible costs from customer dissatisfaction?
4. Do you really have the most scalable infrastructure tools?
5. How do you prove where and when the problems actually occur?
6. How can you consolidate multiple silo-based tools?
7. What will allow you to successfully provision VDI services?
8. Wouldn't it be great to troubleshoot problems before your users notice them?

Here are a few more questions that you may already be thinking about: Do you need proof that clients are getting the resources they need and expect? Are you suffering from weak granularity in how storage IOPS are distributed across VMs? Do you lack "live" and predictive service assurance analytics that help triage and solve root causes? Does your management suspect that the virtualization layer is the culprit?


Here are the eight great ways deploying Xangati can help you keep the peace across all of the silos and infrastructure layers in the contemporary data center:

1. Deploy the Xangati virtual appliance in your hypervisor of choice and eliminate blind spots left by legacy point tools.
2. Benefit from 360-degree, DVR-like visibility into your entire infrastructure.
3. Visualize live, continuous, to-the-second data without using agents or probes.
4. Peer into all interactions and interdependencies, including physical, virtual, cloud and every component, object and application.
5. Record, replay and export video files of sporadic contention timeframes; ease of installation means faster time-to-resolution and Day 2 operational status.
6. Reduce unnecessary bandwidth, CPU and memory consumption.
7. Better optimize your existing VM resources with live, real-time analysis from VM host to virtual desktop, end to end.
8. Reduce unnecessary hardware purchases by heeding granular visibility into resource usage and efficiency to improve right-sized capacity planning.



Visit our Blog to learn more.

