
DCIM

The next evolution

Marc Garner, VP, Secure Power Division, Schneider Electric UK and Ireland, explains why DCIM must evolve to meet the era of IT infrastructure everywhere.

Today, our dependency on digital infrastructure shows no sign of abating. Driven by factors such as the proliferation of smart devices, the emerging availability of 5G networks, and the growth of the Internet of Things (IoT), the volume of digital information surging across the digital economy continues to increase at a rapid rate.

Little of this data is permanently stored on phones, PCs or IoT devices. On the contrary, it is stored in data centres and, in many cases, accessed remotely. Given the always-on nature of the digital world, it is essential that such data centres are secure, sustainable, and resilient, providing 24/7 accessibility to data.

Gartner predicts that by 2025, 75% of enterprise-generated data will be created and processed outside a centralised data centre or cloud. The demands of hybrid IT have required data centres to undergo significant evolution in terms of design, deployment and operations.

For instance, hyperscale data centres endure today, but requirements for low-latency connectivity and data availability across TV, streaming, social media and gaming platforms have driven more data centres to the edge of the network.

Additionally, the concerns of data sovereignty, security, location and privacy — added to the need for businesses to react quickly to emerging market opportunities — have produced a plethora of new data centre architectures, many of which are smaller, more distributed and with the attendant problems of uptime, remote management and maintenance.

The evolution of management software

From the earliest days of digitisation, software has been used to monitor and manage digital infrastructure. Today, we describe such software as Data Centre Infrastructure Management (DCIM), and in reality, we have reached the third generation of this technology.

In the 1980s, and at the dawn of the server era, the infrastructure needed to provide resilience and continuity to hosted applications consisted of little more than racks and uninterruptible power supplies (UPS) with rudimentary tools to monitor such systems and alert users in the event of a power outage. Such tools were not called DCIM at the time, but were effectively the first examples of the category. With hindsight, we can refer to them as DCIM 1.0.

In the 1990s, the heyday of the dotcom era spurred the growth of larger data centres and cloud-distributed software. The industry chose to consolidate core IT infrastructure in purpose-built data centres, which brought a new set of management challenges. These included more reliable cooling of high-density racks, managing space effectively and keeping energy costs to a minimum. The latter issue in particular forced operators to pay greater attention to efficiency and drove the development of metrics such as power usage effectiveness (PUE) to benchmark these efforts.
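As a reminder of the arithmetic behind the metric (the figures here are illustrative, not drawn from the article): PUE is the ratio of total facility energy to the energy delivered to IT equipment, so a site drawing 1,500 kW in total to support a 1,000 kW IT load is operating at a PUE of 1.5, with 1.0 the theoretical ideal.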

In light of this, management software evolved into a phase we can call DCIM 2.0. Here, performance data monitored from numerous infrastructure components, including racks, power distribution units (PDUs), cooling equipment and UPS, was used to provide insights to decision-makers so that data centres could be designed, built or even modernised for greater efficiency and reliability. Space utilisation was another key challenge addressed, as was the management of vulnerabilities through diligent planning, modelling and reporting to ensure resilience.

Such tools were mainly focused on large data centres containing highly integrated and consolidated equipment, typically from a handful of vendors. These data centres were likely to have on-site personnel and IT management professionals with highly formalised security procedures. Importantly, the software was typically hosted on-premises and, frequently, on proprietary hardware.

The era of DCIM 3.0

With the emergence of hybrid IT and edge computing, data centre software has had to evolve again to meet the new challenges posed to owners, operators and CIOs. HPE states that while in 2000, enterprise software was entirely hosted at the core of the network, by 2025, 20% of IT will be hosted in the core, 30% in the public cloud, and 50% at the edge.

For those in the era of infrastructure everywhere, it's clear that the data centre environment has become increasingly complex and difficult to manage. One might even consider that, in part, everything has become a data centre.

New research from IDC found that the chief concerns around edge deployments were managing the infrastructure at scale, securing remote edge facilities and finding suitable space, with the attendant facilities to ensure resilience and security at the edge. Moreover, between 2014 and 2021 there was a 40% increase in the number of companies compromised by a cyberattack.

The pandemic, for example, forced people to work remotely and brought this into sharp focus. Now the data centre itself is not the only critical point in the ecosystem: one's home router, PC or an enterprise network closet is as mission-critical a link in the chain as a cloud data centre, with its strict security regime and redundancy.

For many senior decision-makers, managing energy at distributed sites is also going to be a bigger challenge than in traditional data centres. Schneider Electric estimates that by 2040 total IT energy consumption will reach 2,700 TWh, with 60% coming from distributed sites and 40% from traditional data centres.

Resilience and sustainability

Today, distributed mission-critical environments need the same levels of security, resilience and efficiency across all points of the network. To realise this, a new iteration of management software is required, which we can call DCIM 3.0.

Recognising that the role of Chief Information Officer (CIO) in many companies has become increasingly business-focused and strategic, DCIM 3.0 will equip these decision-makers with insights into strategic issues — including where technology can best be deployed, how efficiently and sustainably it can be operated, and how it can be managed remotely, without loss of resilience.

In some respects, this requires a greater use of artificial intelligence and machine learning to glean actionable information from the data amassed by IoT sensors. It will also require greater standardisation of both software tools and hardware assets to offer ease of management and faster speed of deployment. Further, increased customisation and integration will be key to making the hybrid IT environment resilient, secure, and sustainable.

Customers also seek to deploy management tools in several ways: some demand on-premises deployments, others insist on private cloud implementations, and still others are happy to trust the public cloud. All methods must be supported to make DCIM 3.0 a reality.

Ultimately, the issue of environmental sustainability will become increasingly important due to customer demand and government regulation. As well as operational data, DCIM 3.0 tools will have to support decisions such as how to source power from renewable sources, how to dispose of end-of-life products and how to manage the overall carbon footprint of not just the IT infrastructure, but the enterprise as a whole.

Right now, DCIM 3.0 is still in its infancy, although many of the above capabilities are already available. To deliver on the promise of DCIM 3.0, however, we must learn the lessons of the past and evolve DCIM to support a new generation of resilient, secure, and sustainable data centres.

Alive and kicking?

Nick Ewing, Managing Director at EfficiencyIT, asks – are reports of DCIM software’s demise exaggerated?

Whatever happened to data centre infrastructure management (DCIM)? Rewind the clock 10 years and DCIM was touted as the next big thing – a universal panacea for many of the data centre industry's most pressing challenges. In fact, Gartner predicted in 2013 that within five years, DCIM would be the latest major technology and vendor opportunity disrupting our industry.

Analysts and commentators claimed it would streamline operational efficiency, help end-users monitor and reduce energy consumption and maximise reliability – all while providing a tangible return on investment (ROI) and the ability to manage large, disaggregated IT portfolios with ease.

Sadly, however, DCIM failed to live up to its earliest expectations; the hype curve flatlined and the predicted breakthrough never materialised. While DCIM has proved a raging success for some data centre managers, it has unfortunately fallen short of the expectations of others: where some found major benefits, others felt it was a wasted investment.

There’s also been some major consolidation and change within the vendor space. For example, last autumn Vertiv, whose Aperture asset management software was once one of the most widely adopted DCIM products, announced it was discontinuing its Trellis platform, while its rival, Nlyte, was acquired by Carrier, a specialist in cooling equipment.

The changing market dynamics haven’t helped to build confidence in the capabilities of DCIM, and the perception is that leading platforms are disappearing from the market. Add to this the news that support for existing Trellis contracts will end in 2023, and many data centre stakeholders have been left feeling bewildered. Could it be, in fact, that DCIM has become an overblown luxury that most organisations can’t afford or don’t need? As is always the case, the truth may be a little more nuanced.

Breathing new life into DCIM

Luckily for DCIM fans, recent investments in user experience, data science, machine learning and remote monitoring have begun to breathe new life into DCIM. And while many data centre operators see its key strengths in monitoring and management, DCIM has proved itself invaluable during the pandemic and it will continue to add significant value as hybrid working models persist, especially where accessibility, visibility and continuity remain challenges for our industry’s key workers.

The demand for DCIM’s capabilities certainly hasn’t disappeared. The 2021 Uptime Institute annual data centre survey revealed that 76% of operators felt their latest downtime incident could have been prevented with better management, processes or configuration. Among our customers, we have also seen increased demand for DCIM platforms offering simple installation, intuitive ease of use and real-time, data-driven insight.

End-users’ ESG requirements – especially in the colocation space around environmental impact, sustainability and energy efficiency – have also increased in importance since DCIM first appeared on the scene. A Schneider Electric report with 451 Research revealed 97% of customers globally were demanding contractual commitments to sustainability.

Monitoring, measurement, and management software is of course critical to an organisation’s sustainability efforts. However, the grand expectations that DCIM alone would spearhead major efforts throughout the industry to improve energy efficiency and sustainability have yet to be realised.

As with many technologies, implementation remains critical, but sadly this has often been DCIM’s Achilles’ heel. For a DCIM implementation to be successful, it is necessary for vendors and end-users to:
• Take the time to thoroughly understand the business case
• Help the customer deploy the software
• Ensure all assets are monitored correctly
• Benchmark the DCIM solution’s progress.

Regardless of how important successful implementation may be, it is often beyond the reach of many legacy operators, who continue to struggle with finding the necessary talent due to the widening industry skills gap.

There’s also the procurement cycle to address, which involves multiple stakeholders. The responsibility for managing data centre infrastructure, even the elements typically addressed via DCIM tools, sits between IT, facilities and M&E departments – often with different objectives and chains of command.

Finding the right person to sign off on a new DCIM project, or even identifying the right group of people to first agree to its use, was once a challenge. Luckily the business case is changing, and while the first versions of DCIM required considerable time and effort in terms of customisation, the newer, next-generation versions can simplify the process significantly, bringing siloed teams together.

The dawn of a new DCIM era?

DCIM may have failed to live up to the initial industry hype, but any reports of its demise are exaggerated, and with the advent of DCIM 3.0, things are quickly changing.

The need, however, remains for software tools to efficiently manage the various functions of a data centre, no matter the type. And the capabilities of those versions deployed over the cloud allow businesses of all sizes to identify what their assets are, where they’re located, and how well they are performing. They can also proactively identify any status or security issues that need to be addressed.

Further, any company that subscribes to ISO 27001, the global standard for IT security, must be able to track its assets and the people who have access to and control of those assets. As such, cloud-based DCIM deployments can offer major benefits, allowing distributed assets to be monitored and managed at relatively low cost.

Another critical concern is minimising downtime. Here, a vendor-agnostic DCIM platform can provide insights into all key power paths, especially if they comprise equipment from multiple manufacturers. By tracking dependencies, potential risks to a mission-critical environment from a single piece of equipment, such as a power distribution unit (PDU), uninterruptible power supply (UPS) or cooling system, can be identified and potential outages mitigated.

It also remains essential for DCIM software to interact with legacy systems, facilities management suites, and IT and network management software. This is best achieved through the use of application programming interfaces (APIs) that allow high-level information exchanges between disparate tools.
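As a purely illustrative sketch of the kind of high-level exchange such an API enables (the endpoints, field names and URLs below are invented for the example, not taken from any particular DCIM product), a short script might poll the DCIM tool for asset status and raise tickets in a separate service-management system:

import requests  # plain HTTP client; both endpoints below are hypothetical

DCIM_API = "https://dcim.example.com/api/v1"          # hypothetical DCIM endpoint
TICKETS_API = "https://itsm.example.com/api/tickets"  # hypothetical ITSM endpoint

def sync_alarms(api_token: str) -> None:
    # Pull the current status of every monitored asset from the DCIM platform.
    headers = {"Authorization": f"Bearer {api_token}"}
    assets = requests.get(f"{DCIM_API}/assets", headers=headers, timeout=10).json()

    # Forward anything in an alarm state to the service-management tool.
    for asset in assets:
        if asset.get("status") == "alarm":
            requests.post(
                TICKETS_API,
                json={
                    "summary": f"DCIM alarm on {asset['name']} at {asset['site']}",
                    "severity": asset.get("severity", "major"),
                },
                headers=headers,
                timeout=10,
            )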

Some analysts have opined that a particular weakness of Vertiv’s soon-to-be-discontinued Trellis platform was its dependence on Oracle Fusion application development tools, which tended to limit its attractiveness to customers outside of Oracle’s environment. The fact remains, however, that in a world full of distributed data centres, interoperability is essential for all management tools.

Expensive luxury or must-have management solution?

Measuring return on investment (ROI) is the key to establishing whether DCIM is a good fit for your organisation. Some may say it’s still an expensive overhead and it’s difficult to quantify the benefits when you utilise hardware assets from multiple manufacturers, but vendor-agnostic monitoring capabilities can quickly address that barrier.

Calculating ROI could involve quantifying the reduction in downtime since a software platform was adopted, along with the reduction in reputational damage and associated costs that may have impacted your business. Another approach could be to calculate the reduction in power consumption and improved cooling efficiency, and thereby the reduction in PUE.
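As a purely illustrative calculation (the figures are assumptions, not from the article): if unplanned downtime costs an operator £10,000 per hour and better monitoring prevents five hours of outages in a year, that is £50,000 of avoided cost to set against licence and implementation fees; likewise, trimming PUE from 1.8 to 1.6 at a site with a steady 1 MW IT load avoids roughly 0.2 MW × 8,760 hours, or about 1,750 MWh of facility energy per year.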

At EfficiencyIT we’ve always championed DCIM, regardless of the industry hype. For us, it has never been a miracle cure for the industry’s management challenges, and we see it as a valuable tool, which requires careful customer consultation and implementation if end-users are to gain the best results.

With new investments being made all the time into data science and machine learning capabilities, we’re confident that finding an ROI is far simpler than end-users realise.

However, the most immediate and obvious benefit is DCIM’s ability to provide real-time visibility, which is pivotal as we transition towards a greener, more sustainable, and more digitally dependent future.

How can technology support data centres through a global skills shortage?

In this Q&A, Danel Turk, Data Centre Portfolio Manager at ABB, discusses the impact of the global skills shortage on data centre development and operation and highlights key technologies which are helping operators to navigate the challenges posed.

A lack of skilled workers and contractors seems to be problematic for a number of industries. How big is the issue in the data centre sector?

Our customers are telling us that the skills gap is one of the biggest issues they are facing right now, and it’s impacting everything from the construction of new data centres to daily operations and maintenance, and their supply chains.

At the end of last year, we carried out some research in the Asia Pacific region, and almost three out of four operators (74.2%) said that access to specialist sub-contractors and trades was among their greatest areas of concern, behind only supply chain resilience (82.2%) and health and safety precautions (77.3%).

The concerns raised in our Asia Pacific survey mirror trends in Europe, where research reveals that 42% of data centre operators believe there’s not enough skilled labour to deliver increased capacity requirements across the continent. Over 80% of European companies say they have been affected by labour gaps, and more than seven out of 10 believe the pandemic has made the industry’s skills shortages worse.

While the shortage of suitably skilled people has been an industry issue for many years, the problems are really starting to bite now, with operators experiencing problems such as extra costs and delays in project delivery times. Operators need solutions – and fast.

With growing global demand for data and the need to increase capacity quickly, the industry will be keen to avoid extended project delivery times. How can technology help data centre operators get new capacity built faster and work around the shortage of construction labourers and skilled subcontractors?

The shortage of construction labourers and trades is being felt around the world, and this is driving a shift in the way data centre capacity is built, as traditional ‘stick built’ designs give way to modular and prefabricated building technology.

Prefabricated products – such as eHouses and skids – are built off-site and factory-tested before being delivered to site as an integrated solution which can then be installed and commissioned quickly and efficiently. Modular electrification solutions are flexible and scalable, and incorporate standard blocks of power which can be repeated to allow for future expansion.

Our research suggests that modular, scalable equipment can reduce build completion time by as much as 50% compared to traditionally built data centres, and it can help negate labour shortages in three ways. Firstly, a prefabricated solution is resource efficient from an operational point of view, as it requires one project manager dealing with one vendor. Secondly, the products are pre-engineered to spec by the manufacturer and pre-tested before leaving the factory, so there’s less need for specialist consultants to design the system or engineers to troubleshoot issues on-site. Thirdly, some manufacturers offer installation and commissioning services for their prefab products, so there’s no need for the operator to find their own skilled subcontractor to do the job.

It’s worth noting that digitalised solutions are also quicker to deploy as they require less wiring and less time to assemble on-site than traditional switchgear.

Ultimately, using prefabricated, pre-designed modular solutions means additional capacity can be built more quickly and with fewer people.

Some reports suggest that the industry needs to recruit another 300,000 people over the next three years to run the world’s data centres. While that recruitment drive goes on, is there a way of reducing the impact of staff shortages on operations and maintenance activities?

Yes – and again, digitalisation is a key enabler. By proactively monitoring data centre equipment and performance using smart maintenance technology, digitalisation moves operators away from the traditional calendar-based maintenance schedules to a predictive maintenance approach, focusing on the most critical maintenance activities. Preventative maintenance is a more efficient use of time for data centre engineers who are based on-site.
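To picture the difference in the simplest terms, here is an illustrative sketch only – the thresholds and field names are invented for the example and this is not ABB’s implementation. A calendar-based regime services every asset at a fixed interval, whereas a predictive check acts on measured condition:

def due_by_calendar(months_since_service: int, interval_months: int = 12) -> bool:
    # Traditional approach: service at a fixed interval, regardless of condition.
    return months_since_service >= interval_months

def needs_maintenance(asset: dict) -> bool:
    # Predictive approach: act only when monitored readings show degradation.
    return (
        asset["battery_health_pct"] < 80      # e.g. UPS battery wear
        or asset["hotspot_temp_c"] > 40       # e.g. switchgear thermal monitoring
        or asset["fan_vibration_mm_s"] > 7.1  # e.g. cooling-fan bearing wear
    )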

The intelligence and analysis that digitalisation provides also streamline the running of the data centre for operational workers and help keep it performing efficiently. This maximises employees’ time and is particularly helpful for under-resourced teams.

Digitalisation is also being used by companies like ABB to support data centres with remote maintenance solutions. Last year, we launched two augmented reality (AR) tools to help empower site engineers. CLOSER (Collaborative Operations for electrical systems) is the first port of call. It’s an app which provides fast and easily accessible guidance through an AR-based troubleshooting guide.

If further assistance is needed, or should critical components need to be replaced, site engineers can connect directly with an ABB technical expert through RAISE (Remote Assistance for electrical systems). RAISE allows the field operator and an ABB expert to share a live video connection and use extended reality features, such as digital overlays (like arrows or symbols), in the field of view to give instructions or guidance. RAISE allows users to take and share pictures, audio and video, and guidance can also be given via live text chat.

Can advances in technology support data centres facing supply chain issues too?

To some extent, I think it can, as the manufacturing process is faster. Prefabricated, modular build solutions use a standardised design, and this speeds up the purchasing and manufacturing processes and makes deliveries for standard solutions much faster. Digitalisation, which uses fewer wires and connections, and configurators also expedite the ordering, manufacturing and delivery process.

With modular designs, there is the option of scalability too. This can help operators negotiate supply chain issues, as they don’t need to have everything for their complete build on-site on day one – capacity can be brought online a section at a time, and added to, which helps smooth out supply chain snags.
