DCD>Magazine Issue 30 - The creation of the electronic brain


October/November 2018 datacenterdynamics.com

Tall stories

Schneider’s man Dave Johnson on the reality of Edge

Making data centers multi-level

Skills gap

Who wants to be a data center engineer?

Power: New approaches to electricity

The creation of the electronic brain

You say challenge. We say opportunity.

Corning® RocketRibbon™ Extreme-Density Cables

Can your network support emerging 5G technology, where high-fiber availability is critical? We created RocketRibbon™ extreme-density cables, designed for data center and telecom applications, to help meet your network-performance challenges.

Experience industry-leading density from a cable that is also easy to manage, identify, and trace. And, suddenly, your network challenges may feel more like opportunities.

Are you 5G ready? Visit www.corning.com/rocketribbon/dcd or come and see us at DCD London, 5 and 6 November, booth #21 to learn more about our cabling solution for emerging network requirements.

Are You Corning Connected?

© 2018 Corning Optical Communications. LAN-2317-A4-AEN / September 2018


ISSN 2058-4946

October/November 2018

7 News Trump trade tariffs hit data center equipment


16 DCD Round-up Keep up-to-date with our events and product announcements


18 Multi-story data centers Towering above us, the data centers on high

23 The Mission Critical supplement A look at what powers the data center

26 Batteries as a service We can have servers and storage on demand, why not power?

30 Going off grid Sharing power, without losing reliability

33 No danger, high voltage HVDC could lead to significant savings. Hikari hopes to light the way


34 Why revive the DCIM camel? DCIM may be lost in the desert, but there’s a pathway to the watering hole

Industry interview



36 Dave Johnson, Schneider Electric “The cloud is consolidating to a limited number of players, who are very demanding. The Edge is growing: there is a large number of players - and the margins are higher.“

38 The creation of the electronic brain Neuromorphic computing hopes to unlock the brain’s secrets


44 Fully automated luxury networking Software-defined networks are here: next, they’re going cloud native

49 Who wants to be a data center engineer? Ways to bridge the skills gap

52 Japan’s awakening The rise and fall of the one stop shop

56 Hand over your data! Crime stories from DCD’s archive



59 A cloud for Russia Yandex: not just the Russian Google

62 Max: More battery research, please!

Issue 30 • October/November 2018 3

From the Editor

Meet the team

Global Editor Peter Judge @Judgecorp

Brain power on a high level


This is the 30th issue of DCD magazine in its latest iteration, launched nearly four years ago in February 2015. Our roots are older than that: our predecessor, Zero Downtime Magazine, launched in 2001, and DCD's series of events began the following year - you may be reading this at our London conference, the 17th in the series - and one of the best ever! Next year, we are planning four giant quarterly issues, with space to examine specific areas in-depth, and to open up completely new fields. If you have suggestions for interesting subjects, get in touch with us.

News Editor Max Smolaks @MaxSmolax

Multi-story facilities are springing up everywhere, even in places where land used to be cheap. We've noticed a trend (p18) for sites to build upwards, as they gravitate to hubs where connectivity is good. There's also a trend to go where people are, driven by "edge" applications. We've visited high-rise data centers in New York, London and Northern Virginia, to see how things change when you have up to 30 stories of white space.

1 quadrillion: the number of potential connections between neurons in the brain - equated by some to a 1 trillion bit per second processor

Neuromorphic computing is having a renaissance

"Electronic brains" is a hoary old misnomer for computers - coined in the 1940s, the phrase is as old as computers themselves - but it provides this issue's cover feature. Brain scientists are quick to point out that brains don't use the sequential processing of von Neumann machines. But what if artificial intelligence systems were built to imitate the processing of the brain? After a brief flurry of activity in the last century, things went quiet - but neuromorphic computing is back in the ascendant. Why is this? Sebastian Moss spoke to the pioneers of the movement (p38).

Mission-critical infrastructure is always on our mind. This issue's supplement (p23) comes as scientists have told us that the world must move to renewable energy by 2050. Data centers may be able to help. We look at software-defined power, demand response, high-voltage distribution and (the return of another hoary buzzword) DCIM.


DCD Magazine • datacenterdynamics.com


SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Head of Design Chris Perrins
Designer Dot McHugh
Designer Ellie James
Head of Sales Martin Docherty
Conference Director, NAM Kisandka Moses

PEFC Certified This product is from sustainably managed forests and controlled sources PEFC/16-33-254

Peter Judge DCD Global Editor



Assistant Editor LATAM Celia Villarrubia @DCDNoticias

Head Office

Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.


Editor LATAM Virginia Toledo @DCDNoticias

DatacenterDynamics 102–108 Clifton Street London EC2A 4HW +44 (0) 207 377 1907

Dive deeper


Reporter Tanwen Dawn-Hiscox @Tanwendh

Conference Director Giovanni Zappulo

The data center skills gap remains the greatest problem facing the sector, according to experts. But efforts to fill the void are increasingly led by the companies in need of this talent, Tanwen Dawn-Hiscox discovered (p49). Also in this issue, we look abroad. Tanwen found out why Japan is such a unique data center market (p52) and Max Smolaks spoke to Yandex. It's more than the "Russian Google" (p59). Our executive interview is with Dave Johnson of Schneider Electric (p36), and elsewhere, we look at the progress of open networking in simplifying data center communications (p44).

Senior Reporter Sebastian Moss @SebMoss




© 2018 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.


Cat ® Electric Power understands that loss of power means loss of reputation and customer confidence. Cat power solutions provide flexible, reliable, quality power in the event of a power outage, responding instantly to provide power to servers and facility services, maintaining your operations and the integrity of your equipment.

After installation, trust Cat to provide commissioning services that seamlessly integrate the power solution into the wider data centre system. Our dealers also provide training, rapid parts and service support alongside a range of preventative maintenance offerings.

Your Cat dealer and our design engineers work with you to design the best power solution for your data centre, helping you to consider:

To find out more about Caterpillar Electric Power and our Data Centre experience, come visit us at DCD, or visit:

• Generator sizing for current and anticipated future operations growth, fuel efficiency and whole life costs
• Redundancy for critical backup and flexible maintenance
• Remote monitoring for constant communication and performance analysis
• Power dense designs optimising space dedicated to data centre equipment
• Interior or exterior installation requirements from enclosure design for noise and emissions to exhaust and wiring designs

© 2018 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, their respective logos, ADEM, “Caterpillar Yellow” and the “Power Edge” trade dress, as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.



High Rate & NEW Extended Life High Rate Batteries for Critical Power Data Center Applications

New smaller footprint: 12HRL550 & 400

RELIABLE Narada HRL and NEW HRXL Series batteries deliver. Engineered and manufactured in state-of-the-art facilities, Narada provides solutions to meet your Critical Power needs. NEW HRXL Series batteries have a full 6-year warranty, and a full 5-year warranty at elevated temperatures

ISO9001/14001/TL9000 Certified Quality Backed by Industry Leading Warranties

Narada...Reliable Battery Solutions www.mpinarada.com - email : ups@mpinarada.com - MPI Narada - Newton, MA Tel: 800-982-4339

Whitespace

News in brief

Microsoft’s suppliers will have to provide paid parental leave in the US
“Over the next 12 months we will work with our US suppliers to implement this new paid parental leave policy,” Dev Stahlkopf, corporate VP and general counsel, said - as long as the suppliers meet certain requirements.

White space

A world connected: The biggest data center news stories of the last two months

Equinix promotes Charles Meyers to the top job
“We are going to build on our strategy, it is going to continue to focus on those core elements of interconnection, global reach and digital ecosystems,” Meyers told DCD.

Fiber optic pioneer Charles Kao passes away
Nobel laureate who helped build the foundations of the Internet dies at 84.

EdgeMicro launches edge computing test lab in Denver
Invites potential customers to run their pilot projects at no cost.

Stulz launches regional lab as efficiency starts to bite in China
“China is such a gigantic market,” Stulz VP Kurt Plötner told DCD. “The amount of orders we are getting for mega data centers is unbelievable.”

France to cut electricity tax for data centers

French data center operators are set to see their operational costs slashed, as the government introduces a tax cut on electricity consumption. In Vélizy-Villacoublay, Prime Minister Édouard Philippe told Dassault Systèmes employees that the taxe intérieure sur la consommation finale d’électricité - or TICFE - would be reduced from €22.50/MWh to €12/MWh, in a bid to attract more data center investment.

Getty Images

Trump’s trade tariffs to hit cooling fans, semiconductors, telecoms equipment

The Trump Administration has imposed import tariffs on $200 billion worth of Chinese goods, including products in the data center and telecommunications space. The tariffs, part of a deepening rift between the world’s two largest economies, start at 10 percent and will rise to 25 percent next year. Around half of all goods China sells to the US are thought to be included.

Steve Koenig, senior director of market research at the US Consumer Technology Association, told China Daily: “Companies large and small are determining how best to cope with the situation, which could include deferring the... building of a data center.”

Ahead of the tariffs coming into force, the Information Technology and Innovation Foundation warned: “Every American would feel the impacts of tariffs on goods that constitute key inputs to cloud computing and data services, through increased prices, jobs lost as companies make cuts or go out of business, and fewer data centers that would otherwise have brought jobs to communities across the nation.”

It added: “ITIF modeled the impact of tariffs broadly applied to US imports of ICT products from China. ITIF found that a 10 percent tariff levied on Chinese ICT imports would slow the growth of US output by $163bn over the next 10 years, while a 25 percent tariff would slow output by $332bn.”

Linda Moore, CEO of TechNet, a trade group which includes companies like Amazon, Bloom and Oracle, said: “Data centers are complicated supply chain projects that rely on the absolute certainty of receiving key components in a timely and cost-effective way. If critical pieces were to be held up for delivery at a later date with higher, more unpredictable costs, companies would be incentivized to make investments and create jobs in other countries where such needless uncertainty surrounding data centers does not exist.”

For a list of the affected products, follow the link below. bit.ly/DontCallitATradeWar



Digital Realty to acquire Ascenty for $1.8 billion

Real estate investment trust and carrier-neutral data center operator Digital Realty will acquire Brazilian data center firm Ascenty.

The company’s Brazilian subsidiary, Stellar Participações, has entered into a definitive agreement to acquire Ascenty from private equity firm Great Hill Partners for approximately $1.8 billion, in addition to approximately $425 million required to fund the completion of data center projects currently under construction and meet near-term customer demand.

At the same time, Digital Realty has entered into an independent bilateral equity commitment with Brookfield Infrastructure, to commit half of the required investment ($613 million) for 49 percent of a joint venture that is expected to ultimately own Ascenty. Both deals are expected to close in the fourth quarter of 2018, subject to customary closing conditions and regulatory approval.

Ascenty currently operates eight data centers across São Paulo, Campinas, Rio de Janeiro and Fortaleza, with 39.2MW of capacity in service, 34MW under construction and 33MW ready to be built. The company also runs its own 4,500 kilometer (2,800 mile) fiber optic network connecting its facilities. The firm’s management team is expected to continue working at the new business. bit.ly/KeepingItRealty

Vantage V5 in Santa Clara

Vantage expands Santa Clara campus to reach 75MW

Colocation provider Vantage Data Centers has completed the final expansion of its first campus in Santa Clara, California. The four-story building, codenamed V5, adds 15MW of power capacity, bringing the campus total to 75MW and securing the company’s position as the owner of the largest data center cluster in Silicon Valley.

“With the final facility on our first Santa Clara campus complete, and construction of our 69MW Matthew Street campus also in Santa Clara well underway, Vantage can support the growth of enterprises, cloud and hyperscale customers well into the future,” said Sureel Choksi, president and CEO at Vantage.

Vantage was established in 2010 to run the largest wholesale data center campus in Silicon Valley, located on an industrial estate previously owned by Intel. Its second campus is located in Quincy, Washington. The company is currently building another campus in Santa Clara, and its first campus in the data center hub of Ashburn, Virginia. bit.ly/TheSantaClaraDiet

Aligned Energy is coming to Northern Virginia with a hyperscale campus

American wholesale colocation provider Aligned Energy is entering the Northern Virginia market, with a 26-acre data center campus that could grow to offer up to 180MW of power capacity. The site will be located in Ashburn, the world’s largest data center hub.

Being a wholesale operator, when Aligned builds data centers, it does so at scale: its plans for Ashburn feature a campus with 880,000 square feet of space and 180MW of capacity, supported by two on-site power substations and offering access to more than 50 network carriers. The campus will be built in two phases - the first will deliver 370,000 square feet and 80MW of capacity; however, the date for the opening hasn’t been set just yet. bit.ly/AlignedWithVirginia



Aligned Energy Phoenix


Still relying solely on IR scanning? Switch to automated real-time temperature data.

Introducing the Starline Temperature Monitor

Automated temperature monitoring is the way of the future. The Starline Critical Power Monitor (CPM) now incorporates new temperature sensor functionality. This means that you’re able to monitor the temperature of your end feed lugs in real time - increasing safety and avoiding the expense and hassle of ongoing IR scanning. To learn more about the latest Starline CPM capability, visit StarlinePower.com/DCD_Oct.



The Linux Foundation

For more on Cloud-native Networking, see p44

Open source communities unite around Cloud-native Network Functions

LinkedIn rolls out Open19 and shares rack, server designs

The Open19 rack and server design, announced two years ago by LinkedIn, is now being deployed in Microsoft-owned data centers, and the blueprints will be published and shared, the Open19 Summit heard.

The Open19 design packs standard servers into bricks within a standard 19-inch rack, claiming to cut the cost of these systems by more than 50 percent by reducing waste from the networking and power distribution. The design is being offered under an open source license. First announced at DCD>Webscale, the specification is ready for production use, project leader Yuval Bachar said at this week’s Summit in San Jose.

“Last summer, LinkedIn helped announce the launch of the Open19 Foundation,” said Bachar, LinkedIn’s chief data center architect, and president of the Open19 Foundation. “In the weeks and months to come, we plan to open source every aspect of the Open19 platform - from the mechanical design to the electrical design - to enable anyone to build and create an innovative and competitive ecosystem.”

Open19 followed on from the Open Compute Project (OCP), which Facebook created in 2011. However, OCP’s Open Rack standard relies on non-standard 21-inch server racks; Open19 uses traditional racks. bit.ly/LinkedInLinksOut

The Cloud Native Computing Foundation (CNCF), chiefly responsible for Kubernetes, and the recently established Linux Foundation Networking (LF Networking) group are collaborating on a new class of software tools called Cloud-native Network Functions (CNFs).

CNFs are the next generation of Virtual Network Functions (VNFs), designed specifically for private, public and hybrid cloud environments, and packaged inside application containers orchestrated by Kubernetes. VNFs are primarily used by telecoms providers; CNFs are aimed at telecoms providers that have shifted to cloud architectures. bit.ly/CNCFandLFNsCNFs

#DCDAwards The Most Extreme Data Center on the Planet

Vote online today Closes 23 November


Inspur brings AI tech into OCP servers

Inspur, the largest server vendor in China, is launching a set of Open Compute Project-compliant hyperscale rack servers, bringing established Chinese AI products to the global market. The new range includes an AI server deployed in Baidu’s autonomous car project, developed to the Chinese Scorpio/ODCC guidelines and now made available in an OCP format. This represents cross-fertilization between the world’s two leading open source hardware initiatives, Inspur VP of data center and cloud, Dolly Wu, told DCD. bit.ly/Inspurational

Huawei joins Open Compute Project

Electronics giant Huawei has become the fifth Chinese company to join the Open Compute Project (OCP), an open source hardware initiative originally started by Facebook. Huawei is one of the world’s largest server manufacturers, and is currently the second largest network equipment vendor in the world, according to Gartner.

The company joins as a Platinum member, which means it will have to pay $40,000 and donate 3,120 hours of engineer time annually, and contribute at least one item of intellectual property to the project. In return, Huawei will get the right to vote on governance issues, nominate its representatives to lead individual work groups, and exhibit at OCP events. OCP has attracted more than 200 corporate-level members, including heavyweights like Microsoft, Apple, Google, HPE and Cisco. bit.ly/ChinaOpensUp


Uptime is everything—

So don’t fall for the imitators. Trust 30 years of innovation and reliability.

Originally released nearly 30 years ago, Starline Track Busway was the first busway of its kind, and Starline has been refining and expanding the offering ever since. The system was designed to be maintenance-free, avoiding bolted connections that require routine torquing. In addition, Track Busway’s patented u-shaped copper busbar design creates constant tension and ensures the most reliable connection to power in the industry—meaning continuous uptime for your operation. For more information visit StarlinePower.com/DCD_Oct.



Amazon plans to invest $951m in Indonesia over 10 years

Google announces $140m Chilean data center expansion

AWS data center

Amazon is gearing up to spend as much as 14 trillion rupiah (US$951m) in Indonesia, Singapore broadsheet The Straits Times reports. The plan was conveyed to Indonesian President Joko Widodo by Werner Vogels, VP and CTO of Amazon, during a meeting held in Jakarta this September.

Spread over a 10-year period, part of the investment will be spent on introducing the company’s cloud computing service to the Indonesian market. It is not clear how much will be allocated to Amazon Web Services (AWS), and how much to the Amazon e-commerce platform, as it competes against entrenched e-commerce rivals such as Tokopedia and Lazada.

AWS chief Andy Jassy has always maintained that the company’s long-term vision is to operate from almost every country in the world. With a population of more than a quarter of a billion people, coupled with rapidly increasing Internet and smartphone penetration, there is no question that Indonesia represents an important market for growth in both e-commerce and cloud computing.

However, it will face competition from Alibaba Cloud, which launched its cloud service in the country earlier this year. In the Asia Pacific, AWS currently has facilities in Singapore, Korea, Japan, Australia, China and India. But competitor Alibaba Cloud had a head start in Southeast Asian countries such as Malaysia and Singapore. Aside from attracting global players seeking to enter the region, Alibaba has the incumbent’s advantage of being a familiar platform for Chinese companies as they expand overseas.

Google plans to expand its only Latin American data center with a $140m investment that will triple the size of the campus in Chile to 11.2 hectares (27.7 acres). A launch event for the expansion was attended by Chilean President Sebastian Pinera, underscoring the importance of the project for a nation trying to diversify the economy and move away from its dependence on copper, Reuters reports.

“What we have to decide is which side are we going to be on: where the new works of the future are created, or where the old works of the past are destroyed,” Pinera said. Edgardo Frias, Google’s general manager in Chile, added: “This new stage reinforces the promise Google made to the region to ensure that large and small companies, non-profit organizations, students, educators and all users can access key tools in a reliable and rapid way.”

The construction work is expected to create 1,000 temporary jobs, while the data center will offer 120 permanent positions. Next year will also herald the completion of Curie, Google’s private submarine cable linking its data center in Quilicura, near Santiago, to its data centers in Los Angeles.

Meanwhile, Amazon Web Services is considering a data center in Chile to serve as a repository for telescopic data. “Chile is a very important country for AWS. We kept being amazed about the incredible work on astronomy and the telescopes, as real proof points on innovation and technology working together,” Jeffrey Kratz, AWS’s GM for public sector for Latin America, Caribbean and Canada, said. Chile lays claim to some 70 percent of global astronomy investment. bit.ly/ChilesWarmReception


Peter’s Facebook factoid

On September 16, Facebook detected some unusual activity on its network. Details remain unclear, but the activity represented the biggest hack in Facebook’s history, with more than 50 million users’ details exposed.

Facebook to spend $1.5bn on US data center expansion

Facebook plans to expand two of its data centers for $750m apiece. The social network and advertising giant will add two more halls to its campus in Crook County, Oregon, bringing the company’s total footprint at the site to more than 3.2 million square feet (300,000 sq m), making it Facebook’s largest data center location. For the same price, it will add three more data centers to its still-under-construction Henrico County campus. In total, that campus will span 2.4 million square feet (223,000 sq m) once completed.

1,500 construction workers will be employed to carry out the work, and 200 permanent jobs - including staff for the original campus - will be created to run the site in the long term. Virginia governor Ralph Northam said that the county was “honored to have Facebook on our corporate roster, and we look forward to building on our partnership.” bit.ly/AFacebookPortal



Angola Cables lights up world’s first submarine cable linking Africa to the Americas

Japanese telecommunications provider NEC has completed the installation of the first fiber optic submarine cable to cross the South Atlantic and connect Africa to the Americas, two years after starting the project. The system, commissioned by African wholesale telecoms operator Angola Cables, links Luanda in Angola to Fortaleza in Brazil, promising to lower latency - from 350ms to 63ms - and traffic costs between the two continents, while delivering a five-fold increase in data transfer speeds.

It is hoped that the new cable system will boost economies in both Africa and South America, vastly speeding up data exchanges between the two continents that previously had to route their communications through Europe. Beyond the obvious connectivity improvements, the cable will lighten the load on fiber optic routes between the US and Asia too, halving latency between Miami and Cape Town.

Angola Cables CEO António Nunes said that the company’s ambition was to “transport South American and Asian data packets via our African hub using SACS, together with Monet and the WACS, providing a more efficient direct connectivity option between North, Central and South America onto Africa, Europe and Asia.” He added: “As these developments progress, they will have considerable impact for the future growth and configuration of the global Internet.” bit.ly/SouthwestBySouth

SACS map – Angola Cables

Iridium partners with AWS for satellite-based CloudConnect IoT network

Satellite operator Iridium Communications has partnered with Amazon Web Services for the development of Iridium CloudConnect, which aims to be the “first and only” satellite cloud-based solution offering global coverage for Internet of Things applications when it launches in 2019. The news comes after job listings outed Amazon’s off-planet ambitions, with the company teasing “a big, audacious space project” that it is working on.

“This is a major disruption for satellite IoT. Costs will drop, time to market will speed up, risk will be reduced, and AWS IoT customers that choose Iridium CloudConnect can now enjoy true global connectivity for their solutions.”

CloudConnect will make use of Iridium’s 66 crosslinked satellites, a constellation of systems which are currently being replaced by Iridium NEXT satellites in a $3bn upgrade of the decades-old system. Across seven launches, launch provider SpaceX has already delivered 65 new satellites to Low Earth Orbit (LEO), with 10 more satellites planned for later in the year (nine satellites serve as orbiting spares).

Iridium says that it already has 630,000 active IoT devices using its space network, with subscribers growing at a compounded annual growth rate of approximately 19 percent over the last three years.

“Easily this could expand to tens of millions of devices,” Iridium CEO Matt Desch told CNBC. “We’re really covering the whole planet... with terrestrial networks today it’s still only 10 percent or 20 percent [of the Earth]. Everybody today can connect pretty easily with very little effort. Now that Amazon has put our language into the cloud platform, they can extend their applications to the satellite realm.”

Desch said that he expects CloudConnect to first cater to large products like agricultural equipment or cargo ships, but said “it will move downwards into smaller and smaller vehicles, such as drones.” Iridium operates as a major communications backbone for businesses operating in remote regions, as well as the US Department of Defense.


CyrusOne installs 350ft tower for fast wireless access to Illinois campus

Data center real estate investment trust CyrusOne has added a new telecommunications tower to its Aurora campus in Chicago, Illinois. The 350ft (107m) tower will support microwave and millimeter wave wireless antenna colocation when it goes live in the fourth quarter of 2018, with high-frequency traders able to place their own antennas on the mast, reducing latency by crucial milliseconds.

One of the buildings on campus, codenamed Aurora I, houses the servers of the Chicago Mercantile Exchange Group (CME Group), operator of several large derivatives and futures exchanges. CME used to own the data center until 2016, when it was sold to CyrusOne for $130 million. The campus is also home to the 50MW, 428,000 square foot (39,800 sq m) Aurora II facility. bit.ly/MassiveMicroTower


Issue 30 • October/November 2018 13


Vox Box

Roger Tipley President The Green Grid What’s it like working with governments as a consortium? Often we are dealing with problems that they pose to us, and other times it’s opportunities that they just need advice on. There’s two ways you can be looked at by a government agency - you can be a stakeholder, or you can be a special interest. We try very hard not to be a special interest, or like we’re lobbyists; we’re brought in as experts to bring in more pragmatism and reality into situations where they might have the ear of other folks telling them something different. bit.ly/TipleyTop

Rob Nash-Boulden Director, Data Centers Black & Veatch What’s your view on the Edge? This is the most exciting time for data centers ever. Edge is a very important piece for our clients; we’re still trying to find out what the killer app or driving use case will be. We’re very carefully following the money to see where the investments are going to come. Our clients in utilities, government and other industries - those maybe more inclined to be followers rather than leaders - have embraced it less. Those in cloud, colo and networks are all in. bit.ly/Embouldened

US quantum initiatives move forward with DOE, NSF funding and House bill
The United States government has unveiled a slew of Quantum Information Science (QIS) projects, with several initiatives set to help the nation prepare for a future of quantum computers and quantum communications systems. The Department of Energy has announced $218 million in funding for 85 QIS projects, while the National Science Foundation has awarded $31 million for multidisciplinary quantum research. Meanwhile, the White House has released the National Strategic Overview for Quantum Information Science, and in September its Office of Science and Technology Policy held a QIS summit with industry, academia and government agencies - including Google, IBM and Intel - to discuss next steps. All this comes just weeks after the House of Representatives passed the $1.275bn National Quantum Initiative, a bill which is expected to be approved by the Senate later this year.
“QIS represents the next frontier in the Information Age,” US Secretary of Energy Rick Perry said in a statement. “At a time of fierce international competition, these investments will ensure sustained American leadership in a field likely to shape the long-term future of information processing and yield multiple new technologies that benefit our economy and society.”
“Long-running US Government investments in QIS and more recent industry involvement have transformed this scientific field into a nascent pillar of the American research and development enterprise,” the Strategic Overview states. “The Trump administration is committed to maintaining and expanding American leadership in QIS to enable future long-term benefits from, and protection of, the science and technology created through this research.” bit.ly/QubitsAndQubobs

Alibaba outlines plans for quantum processors, AI chips

Peter’s quantum factoid
Researchers from the University of the Basque Country claim to have mimicked Darwinian evolution on IBM’s five-qubit quantum computer. One qubit was used to model the genotype and another the phenotype, with the system simulating reproduction, mutations, and death.

14 DCD Magazine • datacenterdynamics.com

At its annual Computing Conference in Hangzhou, China, Alibaba laid out a roadmap for the development of cutting-edge technologies over the next five years, including its own quantum computing processors and customized AI chips. The Alibaba DAMO Academy will start developing its own quantum processors with high-precision, multi-qubit hardware. This comes on top of existing initiatives that hope to apply quantum processing to fundamental problems in machine learning and physics simulations. Alibaba expects that, in time, commercial products resulting from its research will be introduced to bring value to its customers across a diverse range of industries. The Chinese company also announced the establishment of an as-yet-unnamed subsidiary that will focus on customized AI chips and embedded processors to further support Alibaba’s growing cloud and IoT businesses. “Our research entity is not just about developing state-of-the-art technology, but also about ensuring that the technology can be applied and used to solve real challenges,” Dr Rong Jin, head of machine intelligence technologies at Alibaba, told DCD. bit.ly/AlibabasNextStep

DCD Round-up
Stay up to date with the latest from DCD. As the global hub for all things data center related, we have everything from the latest news to events, awards, training and research.




> Verticals | eBook series

February/March 2018 datacenterdynamics.com

April/May 2018 datacenterdynamics.com

Money makes the world go round, but first it requires infrastructure




Download the latest digital versions of the magazine and eBook series via Issuu.

A Guide to

DCD>London | United Kingdom Nov 5-6

Mission Critical IT on The Factory Floor

Read for free today!

Exploring the Industrial Edge


Old Billingsgate

Innovation on ICE

Who owns DCIM data?

After Red Storm

Sweden’s bid to make us all more efficient

Cloud services create new problems

The supercomputer that shot down a satellite

June/July 2018 datacenterdynamics.com

DCD>London kicks off the conference program at London’s prestigious Old Billingsgate riverside venue, with an epic debate on the great capacity challenge of our time. We are convening some of the foremost experts representing each part of the hyperscale ecosystem to explore how extreme capacity requirements are changing deployment models, ownership models and expectations - and to ask where this leaves the rest of the industry. bit.ly/DCDLondon2018

Behind the Cell

AI and Automation

We learn about the future of energy in Stockholm

The PlayStation chip that ran a supercomputer

A 10-page deep dive into AI in the data center

August/September 2018 datacenterdynamics.com


Our 16 page supplement uncovers the truth behind the hype


Resilience matters

But do you go with the public cloud or rely on 2N hardware?

Keeping cool

Powered by

A special 14-page supplement on the fascinating world of cooling


Nigeria’s awakening MainOne’s CEO explains how she’s bringing a nation online


The inexorable rise of hyperscale cloud is creating capacity demands that push every part of the supply chain to the limit. With rumored take-up projections of 500MW+ for 2019 in Europe alone, what could break and when?

> Edge | Supplement

Europe gets smarter

What is it?

The shape of it

The telco factor

The one percent

> While some say it is a location, others say applications will define it - but all agree it will be everywhere

> How putting processors close to end users will create new data hierarchies

> Are cell towers the prime data center locations of tomorrow, or a vendor’s pipe dream?

> Dean Bubley says the mobile network edge is important, but overhyped

Going cloud native: rewriting the virtualization rulebook

The inventor of Lithium-ion batteries is still charged with ideas


DCD>Canada | Toronto Beanfield Centre, Toronto

DCD>Chile | Santiago Espacio Riesco, Santiago

DCD>Middle East | Dubai Hyatt Regency Dubai

Nov 13 2018

Nov 21 2018

Nov 27 2018

Keep up-to-date Andrew Jay CBRE

Andy Lawrence Uptime Institute

Dave Johnson Schneider Electric

Don’t miss a play in the data center game. Subscribe to DCD’s magazine in print and online, and you can have us with you wherever you go! DCD’s features will explore in-depth all the top issues in the digital infrastructure universe. Subscriptions datacenterdynamics.com/magazine To email a team member firstname.surname@datacenterdynamics.com

Jim Smith Equinix

Joe Kava Google

Phil Coutts Mace


Find us online datacenterdynamics.com | dcd.events | dcdawards.global



DCD>Debates - What does ‘AI Ready’ mean for data center operations?


From industry-certified courses to customized technology training, including in-house development, DCPRO offers a complete solution for the data center industry, with an integrated support infrastructure that extends across the globe, led by highly qualified, vendor-certified instructors - in the classroom as well as online.


2018 Course Calendar

Watch On Demand Content Partner:


The application of AI to infrastructure management and operations could prove to be a silver bullet, driving optimization of resource availability and the overall efficiency of your data center. But AI’s potential cannot be unlocked at the flick of a switch: prospective end-users need to commit to a new mindset and prepare their infrastructure accordingly if they are to reap the benefits of this game-changing technology. Join us for this webinar to learn what you need to be doing to make sure your data center is ‘AI ready.’ Register here:

Bruce Taylor DCD

Power PRO – Madrid | 5 November
Cooling PRO – Mexico City | 12 November
Power PRO – London | 19 November
Data Center Design Awareness – Singapore | 19 November
Cooling PRO – Sydney | 21 November
Energy PRO – London | 26 November

For more course dates visit: www.dcpro.training

New course

Data Center M&E Cyber Security
Two-hour online course


Enzo Greco Nlyte

Greg Knowles IBM

DCD>Debates How is the data center industry re-thinking mission critical power? For more information visit


Watch On Demand


DCD>Debate How is ‘Edge’ transforming the Data Center Services Sector: Colo, Telco & Cloud? For more information visit

Watch On Demand


DCD>Debate What conversation should Colos be having with their tenants about technology adoption? For more information visit

Susanna Kass Baselayer

DCPRO has developed a new two-hour online module covering M&E cyber security fundamentals. The ‘Introduction to M&E Cyber Security’ course will teach students the basic principles and best practices that help prevent cyber-attacks. Learn about Industrial Control Systems (ICS) and the influence that IoT has on them, as well as why OT (Operational Technology) is a major target for breaches. The course also covers policies, regulations and organizations, as well as prevention, defense, operation and monitoring.

Watch On Demand

Buy Now


bit.ly/ServicesSectorSchneiderNAM



Build it high
All around the world, data center builders are finding they have to build upwards. Peter Judge and Tanwen Dawn-Hiscox look at the issues they face


Multi-story buildings are nothing new. People have lived in tower blocks since the days of the Roman Empire, but we have been reluctant to house our servers there - until now.
More than 2,000 years ago, apartment buildings or “insulae” in the Roman Empire reached seven stories high, and may have gone as high as ten - although they were unstable, and rich residents chose the lower floors rather than the penthouse suites. Steel construction allowed buildings to break the ten-story limit in the 1880s, and cities round the world have not looked back: here was a way to get more people and businesses into the productive hubs of humanity. Despite this, most data center facilities have resolutely remained at street level. The

majority are in single-story constructions. There are several obvious reasons for this. These facilities rely on heavy-duty electrical and air-conditioning equipment; lifting that up to a higher floor is an effort to be avoided, if possible. Similarly, it is easier to lay on supplies of water, diesel fuel and electricity at ground level.
People live higher up because they need or want to be closer together, and because land is expensive in the city. Many data centers have the option to live out of town, thanks to the fast communications networks they use. Like a teleworking executive, they can choose to live in comparative luxury.
But that is changing. Some customers require low latency for applications used by city workers, so their servers have to be in town. There’s a substantial history of multi-story data centers in locations like London, New York and Paris. And on a small urban island like Singapore, there will soon literally be no option but to build upwards.


Peter Judge Global Editor

Tanwen Dawn-Hiscox Reporter

And even out-of-town network hubs are starting to get overcrowded. Loudoun County in Northern Virginia became a huge nexus for data centers, because there was massive network bandwidth located in a region known for sprawling, sparsely populated farms.
New York
Skyscrapers have been a part of New York’s skyline for more than 100 years, and if you are putting a facility in Manhattan, you won’t have the luxury of a ground-floor location. You are also unlikely to have a new build. Intergate Manhattan, at 375 Pearl Street, has been repurposed multiple times. The 32-story building, which casts its shadow on Brooklyn Bridge, was originally owned by New York Telephone, followed by Bell Atlantic and then Verizon. Sabey Data Centers bought it in 2011, but it’s still frequently referred to as the “Verizon building,” because Verizon still has three floors in the facility and pays naming rights to keep its signage at the top of the tower. Each floor is 14 to 23 feet high, which is unusually tall compared to most high-rises.

Design + Build
Had it been a residential block, Intergate would have been around fifty stories tall. When Sabey took over the facility, it planned to develop 1.1 million square feet of data center space, but according to Sabey vice president Dan Melzer, “the market just wasn’t ready for that.” Instead, the company fitted out 450,000 square feet of technical space, and the remainder was repurposed as office space. This was only possible in the upper half of the building, however, as half of the space has no natural light, having been built with telephone equipment in mind.
While some tenants get the full service, including power, cooling and managed racks, one customer in particular - which, incidentally, operates the New York subway wireless system - only leases power and water from the company and manages its own facilities.
Underneath the data center floors are the chillers and generators needed to run them - including an 18MW substation to support one of the company’s so-called turnkey customers. The building has a massive freight lift to carry heavy equipment up and down the tower, and its data center features extend underground, beneath the reception and loading dock floor, to a tank holding enough fuel to run the servers for 72 hours in an emergency: “We have the capacity for 180,000 gallons of fuel, though we don’t need that much right now,” Melzer said, as well as 270,000 gallons of water. By law, “since 9/11, you know,” all fuel across all buildings in the city must be stored on the lowest floor. The company shares its fuel resources with its trusted neighbor - the New York Police Department (NYPD), whose headquarters are adjacent to the tower.
On the very top of the building are the company’s cooling towers and cooling plant, and an antenna for wireless carriers.

Less than a mile away, 60 Hudson Street is one of downtown Manhattan’s architectural marvels, repurposed multiple times since its days as the headquarters of telegraph company Western Union. The company relied on it being near the AT&T building - a similar-looking monolith of a tower - for its communication lines. Built in what was a residential neighborhood in the 1920s, the building’s wide, square plinth of a structure was engineered to support heavy equipment, and its floors are linked by pneumatic communication tubes. “The building sort of recycled itself for today’s age,” explained Dan Cohen, senior sales engineer for current resident Digital Realty. The floors hold heavy racks with ease, and “a lot of those pneumatic tubes are now being used for fiber.”
Colocation within 60 Hudson dates back to Telx, a company which started out offering connections for telephone “calling cards” in the 1990s. Telx began leasing space in the building in 1997, to offer a neutral interconnection facility, and expanded into other suites, taking over Datagryd in 2013.
Inside 60 Hudson, Telx “basically invented the concept of a meet-me-area,” Cohen said. “Instead of having various types of type 2 circuits, we thought: ‘Why don’t we create a meeting room where carriers can meet customers?’” And the rest is history.
Digital Realty, one of the world’s most successful colocation providers, bought Telx in 2015, giving it control of the fifth, ninth, eleventh and twenty-third floors; the walls are owned by a real estate firm. The colocation provider’s scope for expansion isn’t clear. “I would hope,” he said, but “they don’t really tell us much.”
60 Hudson houses 13,000 cross-connects - out of an estimated 25,000 to 30,000 in New York. Most nearby submarine cable traffic passes through the building and, across multiple North American carrier


Moving up: what changes?
It may not be immediately obvious what changes when a data center moves up. Most facilities are still built with a raised floor, capable of holding the weight of heavily loaded IT racks. These have to be installed on higher floors. The doors, corridors and lifts have to be specified to handle heavy equipment, and if there are multiple tenants on the site, there must be a protocol to allow them access whenever they need it. Some sites may even have back-up lifts, so there is no single point of failure in getting kit to the cabinet.
Power and air-conditioning must be available on all floors housing data halls. In some cases, much of this plant is placed on the roof; in other cases, to one side of the floor. If the generators and cooling systems are above ground, the site managers have to be sure they can get supplies of diesel and water (if required) to them.
It’s also important to make sure where any output heat or exhaust goes. It’s not good if the hot air rises and comes back in on the floor above, warming the cold aisle.
These are all problems which can be solved - but as long as data centers could avoid them, it’s no surprise that they did.


hotels, Digital Realty processes 70 percent of all Internet traffic. DCD visited the building and, on the ninth floor alone, Digital has four meet-me-rooms; it is visibly jam-packed full. Though for the most part everything has moved onto fiber, this is a place of relics: “DS1 circuits, DS3, T3, T1 circuits, all still active.”
Digital Realty repurposed the fifth floor in 2014 - after passing on it several years prior, out of concern that there wouldn’t be sufficient demand to fill it - “then it became more expensive later on and we kicked ourselves… But it was still worth it.” Though it appears empty, the floor is largely leased already, as customers tend to pay for power capacity rather than space, and they are commonly overzealous in planning for future needs. “We can’t sell power that we don’t have, right?” Diverse connectivity and fiber serve the floor, as well as DR’s in-house fiber to connect latency-conscious carriers with their customers.

London
Imagine a data center as a gigantic computer. Most are big flat boxes, gobbling up land like old-fashioned PCs swallow desk space. Telehouse North 2 is different. With six data floors, it’s like a space-saving “tower system” that could tuck neatly on the floor. The building sits by a Docklands roundabout, at the gateway to the City of London, so there is an incentive to make best use of that land - which led Telehouse to build its tower.
The building has adiabatic cooling units from Excool installed on each of the six data floors. The tricky part was to make sure that the cooling systems removed heat from each floor, but did not interfere with each other. “You have to segregate the cool air coming in and ensure you get the hot air away,” building manager Paul Sharp told DCD on a tour. Telehouse enlisted the help of Cundalls and used computational fluid dynamics (CFD) to design the building, which draws in cool air on one side and expels warm air on the other, using the prevailing wind direction to remove the heat.
The air is fed in through a single six-story space which runs up the whole side of the building. There are grilled floors at each level, but stepping onto them is a vertiginous experience. In common with many other contemporary data centers, North 2 uses a hard floor rather than a raised one, with contained hot aisles at around 38ºC, while a slow flow of air at 25ºC is drawn through the room. The efficiency of the design won Telehouse a DCD award in 2017.

Singapore
Facebook is planning an 11-story data center in Singapore, at a cost of $1bn. That’s no surprise - it’s a major economic hub, on a tiny island with eye-watering real estate prices. The only surprise is that it’s taken this long to reach this stage.
Singapore has a tropical climate to contend with, and Facebook is using some pretty impressive new technologies to achieve its towering ambition. The first deployment of the new StatePoint Liquid Cooling system, developed with Nortek, is expected to allow an annual Power Usage Effectiveness (PUE) of 1.19. The new facility will have 170,000 sq m (1.8m sq ft) of space and use 150MW of power, making it Singapore’s biggest single data center, as well as its tallest. The building’s façade will be made of a perforated lightweight material which allows air flow to the mechanical equipment inside.
And Singapore won’t stop there: in 2017, the government called for 20-story facilities. Huawei and Keppel Data Centers have joined the Info-communications Media Development Authority of Singapore (IMDA) to carry out a feasibility study on a ‘green’ high-rise data center building. Details are very scarce, but Ed Ansett, co-founder and chairman of i3 Solutions Group, says a high-rise data center can result in a better PUE: “It certainly can do. The high-rise facility can achieve an exceptionally low PUE provided it adopts a different approach to cooling.”

Hong Kong
It’s no surprise to find data centers building upwards in Hong Kong, a city where a shortage of real estate famously caused Google to halt operations on a data center in 2012. Both AWS and Google are building there now, despite the chronic shortage of land in the city.
Local players understood the nature of the real estate market well before then. Back in 2001, SUNeVision opened the MEGA-i building, a 30-story purpose-built data center, which is designed to house over 4,000 racks with total gross floor space of more than 350,000 square feet. It is described as “one of the largest Tier 3+ Internet service centre buildings in the world” - although it doesn’t feature on the Uptime list of certified Tier III facilities. The facility sells carrier-neutral retail colocation, marketed under the iAdvantage brand, with customers able to choose from a variety of power and deployment options, from open farm and private suites to customized areas. Tenants include global telecoms carriers, cloud providers, multinationals and local enterprises. It is currently being optimized to keep the technology up to date, while meeting the ever-increasing regional and global demand.

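PUE, the metric cited in the Facebook and Ed Ansett examples above, is simply total facility energy divided by the energy delivered to IT equipment. A minimal sketch, with energy figures invented purely for illustration:

```python
def pue(total_facility_kwh, it_kwh):
    """Power Usage Effectiveness = total facility energy / IT energy."""
    if it_kwh <= 0:
        raise ValueError("IT energy must be positive")
    return total_facility_kwh / it_kwh

# Hypothetical month: 119MWh drawn by the site, 100MWh reaching the IT gear,
# i.e. 0.19 units of cooling and power-chain overhead per unit of IT load
print(pue(119_000, 100_000))  # 1.19
```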
Northern Virginia
There’s plenty of space in Northern Virginia, and when the data center operators first came here, they staked their claims and built out, somewhat like the farmers that first settled the area in the 18th century. But even these wide open spaces have limits, and as prices go up - reputedly, to more than $1 million per acre - it’s time to make better use of the land.
QTS has opened the first three-story data center in Northern Virginia. The shell has been built, and will ultimately have four 2.5MW halls on each floor, but it is being filled on a segment-by-segment basis. QTS has sold one ground-level hall to a wholesale customer, and plans to fill the halls above it, before moving on to the other quadrants. The halls are being filled in this order because cooling relies on convection over the three stories, so the stacked halls in each quadrant have to be built at the same time. “It’s not capital-efficient to build it all at once,” Tag Greason, chief hyperscale officer at QTS, told DCD.
Amsterdam
A $190 million eight-story Equinix data center towers over Amsterdam’s Science Park. AM4 opened in 2017 and is being billed as an “edge” facility, thanks to its urban location. It is 70m (230 ft) tall, and will have room for 1,550 cabinets in its first phase and 4,200 when fully built out, with a usable floor space of 124,000 square feet (11,500 sq m). Building such a tall facility in Amsterdam’s humid soil required very deep pilings to be planted, right next to university land. The space constraints also required Equinix to dig a large moat around the facility and the adjacent Equinix data centers.


In 2017-2018, Chain of Hope provided 372 life-saving cardiac procedures to children suffering from heart disease in developing and war-torn countries. Find out more at www.chainofhope.org or call our team on 020 7351 1978. Chain of Hope is a registered charity in the UK, no. 1081384

Keep Little Hearts Beating Chain of Hope, Official Charity Partners of DCD


> Critical Infrastructure | In-depth

Defined by software

Going off-grid

High voltage

Find your DCIM

> You’ve heard about software-defined storage and networking. But what about power?

> UPS systems and batteries can benefit the community, not just the facility

> DC power promises increased efficiency but lacks support from hardware vendors

> Massively overhyped, DCIM still offers the best chance for a ‘smart’ data center




Critical Infrastructure | In-depth

Critical infrastructure to save the world
Saving energy is more important than ever. In the next few pages, we'll introduce you to some of the most promising ways to do it, says Peter Judge

Peter Judge Global Editor


In October 2018, the Intergovernmental Panel on Climate Change (IPCC) issued its starkest warning yet, in a report cautioning that the world must shift almost completely to renewable energy by 2030, or else suffer a temperature rise higher than 1.5°C, which would create catastrophic environmental changes.
The only trouble is, moving completely to renewable energy is all but impossible. Sources like solar and wind power are intermittent, and can’t match the peaks - or even the continuous levels - of demand for electricity. Already, grids round the world are straining to deliver peak power, during which period they have to switch on vast amounts of polluting capacity powered by fossil fuels.

Surprisingly, data centers could help with part of the answer. Their continuous demand is a poor match for the renewable supply, but the facilities are blessed with uninterruptible power supplies (UPSs), which can replace the grid supply for a short time. For years now, a small group of pioneers have been fighting to get data centers to accept that it is possible to share these resources, to help the grid. Data center operators jealously guard their UPSs, feeling they are the only guarantee they have of continuous service. But demand-response providers have found ways to offer a package that actually increases the reliability of a UPS, while selling some of its power to the grid. We explain how providers are doing this, on p30.
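To make the demand-response idea concrete, here is a toy calculation of how much UPS battery capacity could in principle be offered to the grid while still guaranteeing the IT load its full ride-through window. The function, its 20 percent safety reserve and all the numbers are invented for illustration; real demand-response contracts and UPS sizing are far more involved.

```python
def sellable_kw(battery_kwh, it_load_kw, ride_through_min, reserve_frac=0.2):
    """Energy (kWh) a UPS battery could offer to the grid over a one-hour
    response window (so kWh/1h == kW), after setting aside the full
    ride-through requirement for the IT load plus a safety reserve.
    Purely illustrative - not a real demand-response calculation."""
    needed_kwh = it_load_kw * ride_through_min / 60   # energy the IT load must keep
    spare_kwh = battery_kwh * (1 - reserve_frac) - needed_kwh
    return max(spare_kwh, 0.0)

# 2,000kWh of batteries, a 1MW IT load, 10-minute ride-through requirement
print(sellable_kw(2000, 1000, 10))  # about 1,433kW could be offered
```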

Inside the data center, this kind of thinking is leading to hardware providers that are approaching a concept that seemed a dream till recently: software-defined power. The cloud is based on the idea that IT services are all provided by bits, which can be delivered most efficiently from an aggregated, virtualized pool. There's at least one vendor that is using lithium-ion batteries in the racks, to smooth the demands of IT equipment. Andy Lawrence of Uptime Research examines the prospects on p26.
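The rack-battery "smoothing" idea can be illustrated with a toy peak-shaving loop: demand above a capped grid feed is served from the battery, and spare feed below the cap recharges it. This is a sketch of the general technique with invented numbers, not any vendor's actual control algorithm.

```python
def peak_shave(demand_kw, cap_kw, battery_kwh, step_h=1/60):
    """Return the grid draw per timestep when a rack battery shaves
    demand peaks above cap_kw. Toy model: ignores charge-rate limits
    and conversion losses."""
    grid = []
    soc = battery_kwh  # state of charge, start full
    for d in demand_kw:
        if d > cap_kw and soc > 0:
            # Serve the excess from the battery, limited by remaining charge
            discharge = min(d - cap_kw, soc / step_h)
            soc -= discharge * step_h
            grid.append(d - discharge)
        else:
            # Below the cap: use spare feed to recharge, never exceeding the cap
            recharge = min(cap_kw - d, (battery_kwh - soc) / step_h) if d < cap_kw else 0
            soc += recharge * step_h
            grid.append(d + recharge)
    return grid

# One-minute samples: 12kW spikes above a 10kW cap are served from battery,
# and the grid draw never exceeds the cap
print(peak_shave([8, 9, 12, 12, 9, 8], cap_kw=10, battery_kwh=0.5))
# → [8, 9, 10, 10, 10, 10]
```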

High voltage is another long-promised route to data center efficiency that might finally be emerging into reality. It's intuitively more efficient to eliminate stages in which power is converted from AC to DC and back. But after pilot projects, and proposals from groups including the Open Compute Project (OCP), the idea is still a fringe pursuit. Now DC power has a champion in NTT. Sebastian Moss found out how the Japanese company is pushing high voltage DC power into supercomputing (p33). Finally, data center infrastructure management (DCIM) has a curious role in critical infrastructure. After a couple of years as a hyped technology, it suffered from industry disillusionment. Now, it may be time to revive the so-called 'dead camel in motion' (p34), if only because it could be the only way to manage the kind of on-demand services we describe in the rest of this supplement.

It will take more than efficiency
Measures like demand reduction, software-defined power and high-voltage distribution will reduce the energy used by any future data center, but it is worth remembering that the overall result of these efforts could be an increase in the total amount of energy used.
Energy is a major (often the major) cost in a data center, so reducing energy use makes IT resources cheaper. Consumer services such as Facebook and YouTube are paid for by advertising embedded in the service, and ultimately by access to users' eyeballs. Some business services are going the same way. The financial barriers to using these services are zero or negligibly small, so demand is rocketing and total energy usage is very likely to increase, following the famous Jevons Paradox.
So far, energy reduction has offset demand growth to some extent, but that is all. To meet the demands of the IPCC, the data center industry needs to actually reduce emissions. Greening energy sources is one approach, but this uses up renewables that others could use. Curbing usage is difficult, if not impossible, in a global free market.


Can power be software-defined?
Critical infrastructure is normally inflexible. Andy Lawrence finds out if data center power really can be delivered as a flexible service


Sometime around 2011, a new phrase entered the IT lexicon: 'software-defined.' The term is sometimes credited to an engineer at VMware, and usually appears in the context of the 'software-defined data center' or 'software-defined network.' The strange phrasing describes how, using a distributed management architecture, it is possible to introduce a new level of 'plasticity' to a network or a pool of compute resources, re-making and un-making connections, or re-allocating compute resources, on the fly.
Wikipedia describes SDDC as a 'marketing term,' which perhaps reflects the reality that companies and products in the software-defined category trade on the fact that they might be considered innovative, cloud-oriented, disruptive and therefore of great interest. Many of the early 'software-defined' innovators were acquired at sky-high valuations.
The value and importance put on 'software-defined' in this context was not misplaced. The architecture allows for much of the hard-wired complexity at the component level in IT to be abstracted out and moved to a management 'plane,' thereby allowing simpler, homogeneous devices to be routinely deployed and managed. This use of aggregated resources is critical to the economics of cloud computing. But equally important, software-defined architectures make it possible to manage, much more easily and cheaply, the highly complex and dynamic 'east-west' traffic flows in large data centers and beyond. 5G networks and IoT edge


Andy Lawrence Uptime Research

networks would likely be near impossible to build, and uneconomic, without software-defined networking technologies.

For some time, some innovators in the data center have been working on how the model of "software-defined" can be applied to power. Does this disruptive and revolutionary change in IT have an equivalent in the way that power is distributed and managed? Power is, after all, not so dissimilar to the flow of bits: it is a flow of electrons, which can be stored, generated and consumed, is distributed over a network, and is switchable and routable, and therefore manageable, from a distance.

A follow-on question to this is: Even if power can be managed in this way, when is it safe, economic and useful to do so, given the many extra complexities and expenses involved in managing power?

The answer to both questions is not quite as binary or as definitive as at the IT level, where the impact of software-defined has been immediate, significant and clearly disruptive. While the application of more intelligence and automation is clearly useful in almost all markets, the opportunities for innovation in power distribution are much less clear.

Data centers are a stand-out example: Large facilities present a great opportunity for suppliers, because of the size and growth of the market, vast over-provisioning, high costs, and inflexibility in power distribution and use. But at the same time, the operator's aspirations for efficiency and agility may be strongly constrained by customer inertia, existing investments, rigid

Critical Infrastructure | In-depth

designs and business models, and the need for high availability solutions that have been proven over time (low risk).

In the wider energy market, however, a number of parallel and successive waves of innovation have been sweeping through the industry, as the sector moves to more dynamic and distributed generation and storage, greater use of intelligence and automation, and flatter, transactive models. Suppliers working in the field of power management and storage - ABB, GE, Eaton, Siemens, Schneider Electric and Vertiv, to name a few - have been developing "smart energy" technologies, in some cases, for decades. New entrants, most notably electric car and battery innovator Tesla, have also introduced radical new storage and distribution technologies. The impact is being seen at the utility grid level, while microgrids, now a mature technology but a young market, provide a model for dynamic and intelligent management of power sources, storage and consumption.

In the data center, similar innovations are underway, although adoption is patchy. These range from the introduction of advanced energy management systems, which may be used to monitor energy use and inform decisions about purchasing, storage and consumption, to microgrid technologies, where power sources and IT consumption are monitored and managed under central control (at present, this technology in a data center context is rare and is most likely to be seen on a campus with a high performance computing center).

But the most obvious comparison with the software-defined data center is the software-defined power (SDP) architecture, as promoted by the small Silicon Valley company Virtual Power System (VPS); and, less obviously, the use of advanced, shared reserve architectures, as promoted by some UPS suppliers, most notably Vertiv.
These architectures are very different, but in both cases, one of the key goals is to reduce the need for spare capacity, and to divert or reserve power for applications, racks or rows that most need it, and away from those that need it least.

VPS' architecture is quite specific: it makes use of distributed lithium-ion batteries in the rack, to provide a "virtual" distributed pool of power that can be used and managed when needed, or to provide backup. In this sense, it is analogous to the homogeneous pool of compute resources in a software-defined data center. VPS deploys a number of small, centrally managed, rack-mounted control devices, known as ICE switches, which can be used to turn off the main UPS power at the rack and thereby draw on the local Li-ion battery.

The management software plays a key role. Effectively, it can divert power from one rack to another – not by using the Li-ion batteries from one rack to power another rack elsewhere (although this

40% of facilities currently use N+1 configurations instead of 2N

is possible, it is more complex because of the power conversion and harmonization requirements), but by switching from the central UPS to battery in certain racks, making more power available elsewhere. In order to make such a decision, the management software uses ever-changing data about the nature of the loads on the servers and the levels of protection required. Ultimately, although it is early days, loads may be moved around to match the power availability, or moved in order to release power for use elsewhere. As in a software-defined network, the central software is using data and policies to intelligently control relatively simple equipment.

In an SDP environment, the software might be considered as a step on the road to "dynamically resourced" data centers, with capacity reserved for the most critical applications, while other, less important applications may have less capacity allocated ("dynamic tiering" is not the appropriate term, as this is about power availability, not fault-tolerance or maintainability).

SDP software can also use the batteries to offer an extra, supplemental power source during times of peak demand, effectively enabling a data hall to use more power than has been provisioned by the UPS; or it could be used to enable operators to use local power at times when the grid power is most expensive, or when colocation customers wish to avoid going beyond agreed power use limits, triggering extra charges.

Uptime Institute sees the VPS architecture as only one of a number of approaches in the data center that might be classed as "Smart Power" or "Smart Energy," although not all the use cases are the same. For example, centralized UPS systems that pool capacity in a "three makes two" configuration, or even up to an "eight makes seven" configuration, can use intelligent, policy-driven software switching to maintain capacity, spread loads and reduce risk.
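The rack-level switching described above can be sketched in a few lines. This is a hypothetical illustration of the general idea, not VPS's actual software: the rack names, figures and the `free_ups_capacity` policy are all invented for the example.

```python
# Hypothetical sketch of policy-driven software-defined power (SDP):
# racks with local Li-ion batteries can be switched off central UPS
# power to free capacity for higher-priority racks. All names and
# numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Rack:
    name: str
    draw_kw: float        # current power draw
    priority: int         # 1 = most critical
    battery_kwh: float    # local Li-ion capacity remaining
    on_battery: bool = False

def free_ups_capacity(racks, ups_limit_kw, min_battery_kwh=2.0):
    """Switch the lowest-priority racks to local battery until the
    total draw on the central UPS falls under ups_limit_kw."""
    ups_load = sum(r.draw_kw for r in racks if not r.on_battery)
    # Consider least-critical racks first (highest priority number).
    for rack in sorted(racks, key=lambda r: -r.priority):
        if ups_load <= ups_limit_kw:
            break
        if not rack.on_battery and rack.battery_kwh >= min_battery_kwh:
            rack.on_battery = True      # ICE-switch-style transfer
            ups_load -= rack.draw_kw
    return ups_load

racks = [
    Rack("db-01", draw_kw=8.0, priority=1, battery_kwh=6.0),
    Rack("batch-07", draw_kw=6.0, priority=3, battery_kwh=5.0),
    Rack("dev-02", draw_kw=4.0, priority=4, battery_kwh=4.0),
]
load = free_ups_capacity(racks, ups_limit_kw=10.0)
print(load)                                     # 8.0 kW left on the UPS
print([r.name for r in racks if r.on_battery])  # the two low-priority racks
```

A real controller would, as the article notes, also weigh the ever-changing load data and required protection levels before making the switch; the point here is only that the central logic can stay simple while the switches themselves are dumb.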
The industry trend is towards reducing provisioning costs by using N+1 or N+x configurations (about 40 percent of all data centers have an N+1 solution), rather than full 2N configurations. But this carries a risk: Uptime Institute Research's 2018 industry survey found that data centers with 2N architectures experience fewer outages than those with N+1 architectures. Among the operators with a 2N architecture, 35 percent experienced an outage in the past three years, while 51 percent of those with an N+1 architecture had an outage in the same period. The likelihood is that this is not entirely due to the investment in the extra capacity. It may be argued
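The provisioning trade-off behind that trend is easy to see with a back-of-envelope calculation. The module size and load below are arbitrary assumptions chosen for illustration:

```python
# Rough comparison of UPS module counts under the redundancy schemes
# discussed above. A 2MW load with 500kW modules is an assumed example.
import math

def modules_needed(load_kw, module_kw, scheme):
    n = math.ceil(load_kw / module_kw)   # modules to carry the load
    if scheme == "N+1":
        return n + 1                     # one spare module
    if scheme == "2N":
        return 2 * n                     # a full duplicate system
    raise ValueError(scheme)

load, module = 2000, 500                 # 2MW load, 500kW UPS modules
print(modules_needed(load, module, "N+1"))  # 5
print(modules_needed(load, module, "2N"))   # 8
```

Three fewer modules for the same load explains the appeal of N+1; the survey numbers above are a reminder of what may be traded away for that saving.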

The industry trend is towards reducing the provisioning costs using N+1 or N+x configurations, but this carries a risk


51% of operators with an N+1 architecture experienced an outage in the last three years


that as extra equipment is added to make up more granular solutions, and extra connections and switches are used to link up the extra UPS systems, good management, capacity planning and maintenance become more important. This is where hardware/software combinations such as shared reserve systems or software-defined systems may come into their own.

Shared reserve systems can be used to pool the capacity of multiple smaller UPS systems, and then dynamically allocate power from this pool to PDUs using software-managed transfer switches. This involves some complex switching, and naturally worries managers who want to ensure ease of operation and management. But the key is the software – if it is policy-driven, easily operated, and designed with resilience, reliability should be very high – and it becomes higher over time as expertise and AI techniques are inevitably embedded.

If power management and control software (and associated equipment) can be deployed without adding significantly to either risks or costs, and can largely match the proven levels of availability that can be achieved by directly using hard-wired physical equipment, then the argument for software-defined power approaches is clear: greater flexibility in how power is directed, managed, and deployed. In turn, power capacity, relative to the workload, may be significantly increased.

Ultimately, this could mean that data center managers, using a suite of tools (applications), have some centralized control over how power is used, distributed, capped, stored, or even sold to meet changing demands, service levels, or policies. It should enable the building of leaner data centers of all sizes. Suppliers such as Schneider and Vertiv are among those involved in the development effort, working with smaller vendors such as VPS; meanwhile major operators, including Equinix, are investigating the value of the technology.
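The pooling idea can be illustrated with a toy allocation routine. The function, PDU names and capacities below are invented for illustration; real shared-reserve controllers are policy-driven and far more sophisticated.

```python
# Toy sketch of a shared-reserve allocation policy: several smaller
# UPS units are pooled, a reserve is always held back, and the
# remainder is granted to PDUs. Names and figures are assumptions.

def allocate(pool_kw, reserve_kw, requests):
    """Grant PDU power requests from pooled UPS capacity while
    always holding back reserve_kw as the shared reserve."""
    grants = {}
    available = pool_kw - reserve_kw
    # Grant smaller requests first so more PDUs are fully served.
    for pdu, req_kw in sorted(requests.items(), key=lambda kv: kv[1]):
        granted = min(req_kw, available)
        grants[pdu] = granted
        available -= granted
    return grants

# Three 400kW UPS units pooled, holding one unit's worth in reserve
# ("three makes two"): 1,200kW pooled, 800kW usable.
print(allocate(1200, 400, {"pdu-a": 300, "pdu-b": 350, "pdu-c": 250}))
```

In this toy run the 350kW request is trimmed to fit the 800kW usable pool; a production system would instead consult policies about which loads may be curtailed, which is exactly where the software earns its keep.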
At the same time, smarter, centrally managed constellations of UPS systems and other power equipment are being used to create more granular and manageable reserves of power. Over time, more widespread adoption of these technologies, whoever the suppliers are and whatever the technology is called, seems very likely. But there are barriers to adoption. In fact, in the “Disruptive Data Center” research project by Uptime

A panel of experts convened by Uptime scored SDP as unlikely to succeed

Ultimately, SDP should enable the building of leaner data centers of all sizes


Institute and 451 Research, a panel of experts gave SDP one of the lowest rankings (3.4 out of 5) for the likely rate and extent of take-up, and the likely impact on the sector. A larger pool of data center operators that we polled was even less enthusiastic. This skepticism most likely reflects a combination of unfamiliarity, and the possible difficulty of justifying such investments, especially for existing environments that are already stable and where many of the costs have been depreciated.

But this early-stage skepticism does not rule out later, widespread adoption: direct liquid cooling and microgrids both scored even lower than SDP, yet there is a case for arguing both have a strong, long-term future. For evangelists of these technologies, part of the challenge is in convincing operators to invest in a business environment so heavily grounded in existing architectures.

SDP is probably best viewed as part of a package of separate but interlinked technologies that each need to make progress before paving the way for others. A key one of these is Li-ion (or similar) battery chemistry adoption, both centrally and in the racks; these batteries can be discharged and recharged hundreds to thousands of times without much effect on battery capacity or life compared to lead-acid batteries, opening the way for much more dynamic, smarter use of energy.

Equally, technologies such as DCIM, software automation and AI are regarded cautiously by many operators. As smarter, software-driven tools and systems become more mature, more intelligent, and more integrated, and the use of remote control and automation increases, the adoption of more agile, software-driven systems will increase. Such systems promise greater efficiency and better use of capacity, reduced maintenance, and increased fault tolerance.

Andy Lawrence is executive director at Uptime Institute Research

DCD>London | United Kingdom

Nov 5 - 6 2018

On day 2 of our London event, Andy Lawrence will present the Uptime Seminar: Software-Defined Power: What Was The Promise Versus Today's Reality? DCD>London is the gathering not to be missed, with four content themes across 100+ hours of presentations: Digital Transformation and the New Data Center Edge, Being Data Center Energy Smart, Consolidating Infrastructure & Extending its Life, and Living Between On-Prem, Colo & Cloud. bit.ly/DCDLondon2018

Going off-grid The electricity grid is groaning, and data centers have technology that could help it. Peter Judge finds out about the demand for demand-response services

Peter Judge Global Editor

At a recent DCD Energy Smart event, Priscilla Johnson, data center water and energy strategist at utility PG&E, said there is "a dire need for energy efficiency to be married with demand response." David Murray, president of distribution at Canadian utility Hydro-Quebec, proposed that facilities should be able to "erase themselves from the grid" at critical moments, or contribute power from their backup sources to the grid.

The reality is not yet that dramatic, but demand response is on its way, Dr Graham Oakes, founder and chief scientist at Upside Energy in the UK, told DCD. And, if done correctly, sharing your backup power can actually make your facility more reliable, not less.

Upside began in 2013 as a bright idea: "There are UPS systems out there with spare capacity," Oakes explained. "We could build a cloud service that would connect to the UPS and tell it when it could run from the battery for short periods when the grid is under stress." The idea chimed with UK government policies of the time, and Upside created a pilot project. The company now has a network of about 25MW of backup power systems, part-funded by £2.8 million (US$3.7m) in government investment. In addition, diesel generators at participating sites can be used to generate power. "It's counterintuitive, but this provides large environmental benefits," Oakes


explained. The grid's normal backup is to have gas turbines constantly running at part-load, providing little or no power, so they have spare capacity when needed. "They are less efficient at part-load, which is very significant, because [those turbines] have to be running the whole time." The service also allows the UK to reduce its reliance on an arrangement to draw power from France during shortfalls. All this saves money, which can be passed on to the UPS users.

Upside's arsenal includes a 1MWh research battery run by the University of Sheffield which can deliver around 2MW of power, making it similar in scale to a small data center. It has also integrated Arqiva's TV broadcast towers. At around 200kW each, these form a small, but significant part of the UK infrastructure, with diesel-backed UPS systems.

Vertiv, the provider of the Arqiva UPSs, is now bundling the Upside service with systems delivered to other customers. This is a big credibility win for Upside: "If you are a data center manager, and a small startup knocks on your door, asking to do clever stuff with your UPS, I guarantee they will say no. If Vertiv comes and says they are working with that startup, it gives them more confidence."

The technology is easier to deploy than you might think. High demand causes generators to slow down, and the mains frequency goes down, Oakes said: "We can detect the frequency change, and respond."

Many ways to share

There are several options to use data center power systems to reduce the load on the grid. The most basic is to switch over to running off a local micro-grid, and relegate the utility grid to backup. This requires a reliable source of energy: the most practical versions use fuel cells powered by natural gas, as renewable power sources are usually too intermittent or small to power a whole facility 24x7.

It's possible to make the stored energy in your backup batteries available, to smooth over small peaks in grid demand. However, to make a real difference, the diesel gensets have to fire up. They then either power the data center, taking it off the grid, or feed power out into the grid, saving money either way.

In Stockholm, Digiplex found it could go one step further. As well as offering the power of its diesel backup generators to the city, it also pumps its waste heat into the city's district heating system, run by Stockholm Exergi.


In return for making the UPS available to the grid, revenue from the utility is shared with the customer

25MW of power in Upside's network

In fact, it's more dynamic than that: "If we follow the frequency dynamically, we can increase the power incrementally." This helps the grid stabilize (at 50Hz in the UK), reducing potential damage to generation equipment, as well as to electronics connected to the grid.

The scheme also makes it possible to respond to dynamically changing prices. The price of electricity goes down when demand is low and renewables are on-stream, and back up when people are active: "We can charge up at 4am, and access that cheap energy at 5pm when the grid is expensive."

"We give people a dashboard where they can see what is happening with the UPS," Oakes said. "They can withdraw from service if they are doing some maintenance, for instance. The data center manager always has to be in control."
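The frequency-following logic Oakes describes can be sketched as a simple droop-style response: the further the mains frequency sags below 50Hz, the more load is shifted to battery. The thresholds, deadband and `battery_response` function are illustrative assumptions, not Upside's actual algorithm.

```python
# Minimal sketch of incremental frequency response: shed grid load to
# local battery in proportion to the frequency sag below 50Hz.
# Deadband and full-response thresholds are assumed values.

NOMINAL_HZ = 50.0
DEADBAND_HZ = 0.05      # ignore normal jitter around 50Hz
FULL_RESPONSE_HZ = 0.5  # sag at which we respond with everything

def battery_response(freq_hz, max_kw):
    """Return kW to shift from grid to battery at a given frequency."""
    sag = NOMINAL_HZ - freq_hz
    if sag <= DEADBAND_HZ:
        return 0.0                      # grid healthy: stay on mains
    fraction = min(sag / FULL_RESPONSE_HZ, 1.0)
    return round(max_kw * fraction, 1)  # respond incrementally

for f in (50.01, 49.90, 49.75, 49.40):
    print(f, battery_response(f, max_kw=200.0))
```

Following the frequency continuously like this, rather than tripping at a single threshold, is what lets an aggregated fleet of UPSs "increase the power incrementally" as the grid needs it.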

In return for making the UPS available to the grid, revenue from the utility is shared between Vertiv, Upside and the customer. The customer could also get an upfront discount on the UPS, or a more integrated service package, with high-end maintenance. Regular use for short periods should increase UPS reliability: "If we constantly monitor the UPS, and use it once a month for about 30 minutes, that is a really good way to get paid to have health monitoring on your UPS," Oakes said.

Upside also monitors and cares for batteries: "We only use 20 percent of the battery capacity. Lead acid batteries will tolerate shallow cycling, and lithium-ion batteries are cycled differently."

Upside is just one of several aggregators: others in the UK include Flexitricity, which operates with industrial plants, and Ecotricity, a wind power provider which offers batteries to households. Meanwhile Vertiv's competitor Eaton has launched UPS-as-a-Reserve Service, co-developed with Swedish energy company Fortum, which lets organizations earn money from their UPS investment by providing energy to the grid. Again, Eaton assured DCD that its service leaves data center operators in control.

The idea could lead to data centers buying power on demand from a provider which manages their UPS: "Rolls Royce no longer sells jet engines," Oakes pointed out. "It's now selling hours of power."

DCD>London | United Kingdom

Nov 5-6 2018

Want to hear more? Devrim Celal, CEO of Upside Energy, and Emiliano Cevenini of Vertiv will use real-life case studies to show how generators, as well as UPS systems, can be used as energy storage and supply devices, creating a revenue stream without jeopardizing the IT load. bit.ly/DCDLondon2018


Advertorial: Server Technology

PDU's new normal - denser, smaller & future-proof

Cx outlets combine the functions of C13 and C19 outlets

“Why do I have to change my PDU when I swap devices in-and-out of my equipment rack?”


Sound familiar? It's a common question often heard echoing through the data center halls by frustrated IT and data center managers. This is because rack-mounted servers, routers, storage devices and switches typically have a shelf life of 3 to 5 years. However, dynamic data center environments could face device changes every few months. By contrast, most rack-mounted PDUs have double that shelf life. The conundrum resides within the fact that the changing IT devices are forcing the premature retirement of working but inflexible power distribution units that can't meet impending requirements.

You wouldn't stand for reconfiguring the living room wall outlet every 5 years when you buy a new smart TV just to get the latest technology. Why has this become acceptable behavior in the data center when it comes to the PDU support of rack power distribution? Where is the PDU innovation and flexibility to conform to an ever-changing swap out of IT devices?

Precedent has been set in the IT industry where companies look at a normal process and refuse to accept it. Microsoft looked at the DOS operating system and knew there had to be a simpler way to operate a PC. Jeff Bezos at Amazon looked at the IT industry and knew there was a more flexible way to distribute workloads. And the list goes on!

In the same spirit as Microsoft and Amazon, Server Technology has looked at the rack-mounted PDU and refused to accept its rigid configuration as "normal." They cracked the stagnation conundrum and redefined the PDU market with limitless power options - in one Power Distribution Unit.

Travis Irons, Director of Engineering at Server Technology, explains the inspiration for redefining the PDU experience. "When it comes to powering an equipment rack, data center managers are looking for density, flexibility and longevity. But, as devices are constantly swapped out of racks, the combination of C13 and C19 outlets requires constant changes. Listening to their frustration was our inspiration to develop a 2 in 1 outlet PDU."

Redesigning the PDU was no easy task, "if it was simple, other organizations would have done it a long time ago," Irons said. Supporting multiple rack configurations, load balancing/alternating phases, as well as a lack of real estate in the rack, were also concerns that needed to be addressed. In addition, Server Technology needed to consider, when designing a hybrid outlet that could function as either a C13 or C19 outlet, that the C13 and C19 orientation is different - it's 90 degrees apart. And achieving the UL Rating is no easy process.

However, the company prevailed and has evolved its most popular and award-winning HDOT PDU into the new HDOT Cx. Shattering the notions of what a conventional PDU provides, the HDOT Cx brings limitless possibilities to data centers with just one standard PDU. Without the bulky plastics surrounding the outlets, the HDOT Cx unit gains 20 percent more space to pack in additional outlets.

"The HDOT Cx specifications are truly unique in the industry," Irons added. Never has a single PDU been able to offer:
• 2 different types of outlets in 1 unit
• The most densely-packed outlets per form factor
• Easy load balancing with alternating phase technology
• Available in Smart, Switched and POPS
• Future ready for equipment updates

Server Technology's innovations didn't stop with the creation of the actual unit, they also broke the mold for online configuration and shipping. In just 4 easy steps, data center managers can go online and configure the HDOT Cx to their specific needs, and most units will be assembled and shipped in just 10 days.

Thanks to Server Technology's vision and refusal to accept the current data center PDU configuration as "normal," the industry now has one standard PDU with virtually limitless possibilities with HDOT Cx! This is an organization that keeps its customers powered, supported and gives them a future-proofed path to get ahead.

For more information on HDOT Cx visit: www.servertech.com

Come visit us at DCD>London, exhibition stand 64

Server Technology Phone: (1) 775 284 2000 Email: salesint@servertech.com


No danger, high voltage With energy efficiency increasingly leading data center debates, Sebastian Moss reports on efforts to turn to HVDC electricity


Data center operators are always searching for ways to reduce the amount of wasted electricity - one solution may lie in the power distribution itself. Most data centers rely on four major conversion stages, from incoming AC power to the final 1.35 – 12V DC power for the server equipment, with efficiency losses at every step of the way.

For more than a decade, some have called for an end to these conversion losses, suggesting a switch to High Voltage Direct Current (HVDC) that could help achieve immediate efficiency gains of seven percent, as well as savings in electrical facility capital cost and space, and improvements in reliability. But such a change is not easy, requiring new types of equipment, different training, and large capital investments.

One of the companies leading the charge is NTT Group, the Japanese conglomerate and fourth largest telecommunications company in the world in terms of revenue. NTT is introducing HVDC systems across 1,000 of its telecoms buildings by 2020, and has so far deployed more than 500 systems in over 370 buildings. Its work is supported by the Japanese government agency NEDO (New Energy and Industrial Technology Development Organization), which has helped fund several projects, including a

"I think people are a little bit afraid of handling these new technologies" Toshihiro Hayashi, NTT Facilities

microgrid in Sendai and a 380V modular data center in Tokyo deployed in 2011. But to really get the ball rolling, NTT and NEDO knew they needed a bigger demonstration.

"We focused on the North American market because it's the largest ICT market in the world, and there are a lot of ICT manufacturers there," NTT Facilities' Toshihiro Hayashi told DCD. "It was a challenge, looking for a partner for a demonstration project, but the University of Texas at Austin was very interested in this new technology, so we worked together with them on a project, and that was called Hikari."

The company hoped that Hikari, Japanese for 'light,' would illuminate the possibilities of a more efficient way of powering data centers. Housed in the Texas Advanced Computing Center, Hikari is an HPE Apollo 8000-based supercomputer with DC-powered servers, DC battery systems, DC air-conditioning and DC lighting.

"This project started in 2014 and then we had a nine-month feasibility period, and after that we started our design and construction from August 2015," Hayashi said. It took a year for the system to be built and brought online. "After that it moved to the operation phase, and the system is in operation still. We are monitoring the data from our Japanese office."

Hayashi hopes that the system will help data center operators move past their concerns, proving the technology is not dangerous. "I think people are a little bit afraid of handling these new technologies and have some kind of belief that High Voltage DC is dangerous or maybe the electrical shock is big; they have those kinds of images. This project proves that this is a very good system, it's operated safely, and it's operated continuously. So that was a good point to show."

He believes that more promotion of the concept is still required, and that one major

Sebastian Moss Senior Reporter

roadblock remains: the amount of equipment that supports the technology. The lineup of HVDC-compatible ICT equipment is growing, with servers from Fujitsu, Dell and HPE, and routers from Cisco and Nokia, among those available. But there are still far more servers and switches supporting standard data center topologies. "Data center operators, from my impression, would not like to have the choices of ICT equipment limited. So yes, that's a challenge that needs to be resolved," Hayashi said.

In an effort to increase choice, the company has released technical specifications, so that equipment vendors "can understand what kind of requirement is necessary to manufacture the 380V DC applicable products."

Meanwhile, Hikari has another trick up its sleeve that Hayashi hopes will send ripple effects through the industry: the supercomputer is directly wired to DC-generating solar panels. Depending on the weather, the sun can power between 30 and 100 percent of the system.

"Solar power is one [type of] DC-producing equipment, but also fuel cells and batteries - not just stationary batteries but in electrical vehicles - generate DC power. That kind of DC-related equipment is now spreading across the world," Hayashi said.

"The Hikari demonstration is not only applicable to the data center, it also serves as a small case study for DC microgrids. It shows that HVDC is very good with renewable energies and batteries, and that it's a highly reliable system because the converting stages are fewer, meaning fewer parts and a lower possibility of failure.

"So in order to show those kinds of advantages, this Hikari project is a model case. From this demonstration project we'd like to show that the DC world is not just an idea, but a natural next step."
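The conversion-loss argument can be made concrete with a rough worked example. The per-stage efficiencies below are illustrative assumptions, not measured figures from Hikari; chaining them shows how removing conversion stages can yield an overall gain of around the seven percent the article cites.

```python
# Worked example: end-to-end efficiency is the product of the
# per-stage efficiencies. Stage values are assumed, for illustration.

def chain_efficiency(stages):
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Typical AC distribution: UPS rectifier, inverter, PDU transformer,
# and the server PSU's AC-DC stage (assumed efficiencies).
ac_path = [0.96, 0.95, 0.98, 0.96]
# 380V HVDC: one rectification stage, then the server's DC-DC stage.
dc_path = [0.96, 0.97]

ac = chain_efficiency(ac_path)
dc = chain_efficiency(dc_path)
print(round(ac, 3), round(dc, 3), f"{(dc - ac) * 100:.1f}% gain")
```

Because losses multiply, even individually efficient stages compound into a noticeable overall loss, which is why cutting the number of stages, rather than polishing each one, is the core of the HVDC pitch.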


Why revive the DCIM camel? DCIM crashed in the middle of its hype cycle, but now smart data centers demand it be brought to life again, say Venessa Moffat and Ken Peters

Venessa Moffat Marketing, strategy and growth hacking specialist

Ken Peters Project manager

DCIM is in the eye of the beholder Fred Koh


In March 2017, IT consultant Andrew Waterston claimed, in an article in DCN, that data center infrastructure management (DCIM) was a 'Dead Camel In Motion,' not delivering on its promise and due for replacement. At the time, it was clear that DCIM was not delivering a return on investment (ROI) for end users or even vendors. Around that time, when the acronym DCIM was mentioned in conversations, people shivered, but some of us didn't give up hope so easily.

We believe DCIM has been on extended leave mid-hype-cycle, but now it's time to resuscitate the camel. It's not dead – it just needs a little support in reaching its potential. The industry is working hard to create true software-defined data centers (SDDC) or Smart DCs, but this requires an intelligent link, middleware connecting the facilities to the IT. How can we do this without DCIM?

We believe DCIM is necessary for an SDDC or a Smart DC, but accept that it's not easy. We want to give DCIM a place and an identity that vendors can agree on, in order to drag it out of the trenches and revive its potential. Many DC operators now have a functioning DCIM system and can show you how they implemented one of the

larger product offerings, but it is normally a work in progress. DCIM isn't dismissed out of hand any more: There are lots of live implementation programs out there actually starting to deliver real value.

In July 2013, Gartner positioned DCIM on its "hype cycle" curve, as slipping well into the Trough of Disillusionment, with a two to five year timeline to reach the Plateau of Productivity. But, by 2015, it was still in the same section, and left a sour taste in the wallets of many DC owners, as it still wasn't reaching its potential. Now, in late 2018, DCIM has disappeared from Gartner's watchful eye, with other technologies traversing the hype cycle path in its stead.

So where has it gone? Does DCIM stand for "dead camel in motion" as Waterston suggested, or has it become something useful? Is it possible that DCIM has quietly become middleware for the smart data center?

Broadly speaking, middleware enables a heterogeneous environment by bridging disparate applications and infrastructure. Whatever name we use, surely this is what our data centers need to become smart and to embrace the internet of things (IoT) and the industrial IoT. The challenge is how to create a heterogeneous environment when these environments comprise a

DCD>Debates What does ‘AI Ready’ mean for data center operations?

Watch On Demand

The application of Artificial Intelligence to infrastructure management and operations could prove to be a silver bullet in terms of driving optimization of resource availability and overall efficiency of your data center. This DCD>Debate looks at AI in the data center. bit.ly/DCDDebatesAI


system of systems that includes people, processes, software, hardware, integrations, communication protocols, plant, energy and cooling systems.

Middleware solutions are generally employed to minimize the pain of integration between IT systems, but in the data center, it must do more, simplifying the complexity, and facilitating problem-solving across the data center in a transparent, holistic, and logical manner. Moreover, it needs to be able to see the entire stack of architectures in order to align the various aspects of the business, technology and facilities, which famously have different financial horizons. This means that finding a successful DCIM solution is not an easy puzzle to solve.

Fred Koh from Graphical Networks wrote recently that "the most accurate way to define DCIM is this: DCIM is in the eye of the beholder," which doesn't sound very promising, and he may well have a point. Every environment will always be slightly different from the next. Koh argues that no single DCIM system should be fit for every organization, which leaves us back at the drawing board in terms of a single market definition. Koh highlights a couple of DCIM companies who have withdrawn from the market, and we have also seen some mergers and acquisitions. We're die-hard optimists though, and we

see evidence that solutions are emerging. For example, this year, US industry analyst Bill Kleyman said in blog posts: “Data centers are getting smarter,” and “don’t think that AI, machine learning, and neural networks are only for cloud-minded DevOps people. You will see these solutions become deeply integrated with data center operations as well as management.” So have we figured out what DCIM is yet, and has it found its place? If vendors still define it differently, then it can’t yet be classed as mature, and this may be one reason why it’s stuck in the quicksand of the trough of disillusionment. The problem is that people’s DCIM requirements vary widely. Companies have a key client group who are their main income so they cater for those clients first. If they are all in one industry or business sector this particular DCIM implementation becomes tailored to that market.


In summary, there are three major shifts towards the smart data center.

Integration is more widespread
The smart DC is seen by some as a cost-saving exercise, by others as a means to improve efficiency, but it could be so much more: a fully automated, connected and autonomous facility that can manage the physical elements of the data center as well as the connected software, using AI, a smattering of robotics and a bit of innovation. It will only take one adventurous company to pioneer this approach, and we may see another sea change in the way data centers can be operated.

Vendors have persevered
This was never an easy problem to solve, but DCIM is the first step towards delivering change management, incident handling, asset registers and more. Involving more areas of the business than ever before, and encompassing more systems than ever before, the data center will soon be smart. A real SDDC is nearly here.

ROI is appearing
As the wider IT group gets involved in DCIM and sees the advances that have occurred over the last few years, they can now identify where they could use this information to assist in change management, capacity planning and proactive growth management. A proactive strategy will prevent reactive growth, and reduce risk, because people have more time to plan properly and look for pitfalls rather than discovering them.

Marketing, strategy and growth hacking specialist Venessa Moffat and project manager Ken Peters were part of the DCIM Deliberations working group of the Data Centre Alliance

[Chart: the Gartner hype cycle - Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment, Plateau of Productivity]


Issue 30 • October/November 2018 35

Opportunity lies at the Edge
Some say Edge networking is just hype. Schneider Electric's Dave Johnson explains to Peter Judge which applications are real, and which are still emerging

Peter Judge Global Editor




Schneider Electric sells infrastructure products across the data center industry, worldwide - but it seems that the biggest opportunity is in Edge capacity: smaller facilities designed to be closer to data sources, to users and to devices. Some people believe that the Edge opportunity has been over-hyped, but Dave Johnson, Schneider's executive vice president of IT, says his market is delivering: "Edge is already a significant nine-digit business for us - measured in dollars, euros or pounds." This is partly down to falling margins at the higher end: "The cloud is consolidating to a limited number of players, who are very demanding," he said. "The Edge is growing. There is a large number of players, and the margins are higher."

He is not dismissive of the cloud part of the market; he just sees faster growth at the Edge, a location for which he uses a fairly broad definition: "It is everything outside of the big central hyperscale data center. Anything from large regional facilities to a small rack, or a gateway talking to connected devices, where you are closer to the data or closer to the user."

"At the Edge there are more players and higher margins"

The Edge markets which are being introduced right now are led by retail: "This is our top focus for the Edge. As brick-and-mortar companies try to compete with Amazon, they need to give customers a better experience in-store, and this is driving Edge computing." These customers need local and resilient compute, storage and networking, he said, and this is the perfect use case for the micro data center, something pioneered by APC, where Johnson worked before the company was acquired by Schneider in 2006: "You need a more resilient and secure solution, you need a remotely managed solution. These are cabinets which contain all the built-in backup power and environmental monitoring. They may even be packaged with software and services."

The next market to develop will be the industrial sector, which Johnson said "is at the piloting stage," with a requirement for local control of factory equipment, and local processing of data for applications like oil exploration.

When the Edge takes off in the industrial sector, Johnson reckons Schneider will have a head start, since it already has a division selling heavy plant, and has an understanding of what is going on in sectors like mining, manufacturing and utilities: “Those are areas where we have looked at a hundred use cases.” Like retail, the industrial sector needs IT resources which can operate outside a traditional data center environment, but there are big differences. Shops are working to increase footfall, while industrial sites restrict access. This means that retail environments are less hostile, but have higher requirements to secure their micro data centers from passers-by. A lot of attention has focused on the telecom Edge, where micro facilities are installed in the cell towers and “central offices” of the mobile and fixed phone


networks. "This is needed for next generation applications like autonomous vehicles, VR, AR and drones," Johnson said. "You really need a good set up for autonomous vehicles," he added. "Vehicles need to avoid the latency issue, or it can be a matter of life and death. You need something like a micro data center at a cell tower."

Despite this eventual need for a telco Edge, it is still at an early stage, he said. With other vendors talking it up, there's a risk of disappointment: "We are getting close to the peak of the hype part of the cycle." The risk, as set out in the hype cycle described by Gartner analysts (see DCIM, p34), is the 'trough of disillusionment.' Johnson said: "We are going to try and soften that, by delivering on requirements."

As well as the hardware - those APC boxes which you can be sure Schneider will sell at every opportunity - there is also the software: "We have historically been a leader in DCIM. We have been in the process of switching from traditional software to cloud-based software, which brings all sorts of benefits. There is a big uptick in interest in using those tools for the Edge. Honestly, this was a bit by accident. We recognized the need for remote monitoring, but it turns out that the cloud-based software we intended for the data center actually resonated with the Edge."

The Edge will have no IT staff (what retail store has an on-site data center manager?), and it will require simplified lifecycle management: "The Edge won't have people to provide love and care for the equipment. It needs super-robust systems that won't break." There simply aren't enough traditional data center people to service the Edge market, and the classic pool of data center expertise is "aging out," he said. At the same time, millennial users want products like Android or iOS, that simply work: "You don't expect - or need - anyone to help you fix or configure it. The things have to be super easy to use, they have to be self-healing." If something breaks, the customer will want a replacement module "which can be swapped out by someone who isn't an expert." And beyond that, they will expect predictive maintenance - so they get a replacement before the existing device breaks: "That will eventually be table stakes for the Edge."

Edge customers will also want the proverbial "single neck to throttle," so Schneider will sell through partners, like HPE and Dell: "Customers at the Edge are going to want the whole package to be delivered, supported, serviced and monitored. They aren't going to want to deal with all the vendors of all the bits and pieces in a micro data center."

Implicitly, Johnson likes the Edge because it has higher margins than facilities run by giant vendors in the cloud space - but the downside of this is that Edge resources are more expensive. No one will adopt it unless they really have to - and the cloud providers will continue to argue that, with enough bandwidth, they can do better - and cheaper - with centralized data center resources. "The providers of cloud services will fight this, but there is a wave of connected devices - that is a fact. Every vendor is trying to make devices connectable and manageable, and the scale of that will require the Edge."

"Cloud providers will fight this, but IoT requires the Edge"

Another alternative to Edge micro-facilities, we suggest, is to push the resources right down into the end-user device. Johnson turns this idea over: "So with autonomous vehicles, that would require a mesh. I need a really, really fast response. And I haven't seen mesh networks reliable enough for that. I don't see any way of managing autonomous vehicles without serious Edge computing."

On another level, applications like autonomous vehicles suffer from their utter dependency on the Edge. "It's chicken and egg. You can't do driverless cars without the Edge, and you can't do the Edge without cell towers, and if you look at projected timings for 5G, well…"

None of these considerations cause him any worry. The big picture is that the Industrial Internet of Things (IIoT) is crucial to Schneider, and the company is lining up its software and hardware to be part of it. He believes it's a shift that is almost inevitable. "A few years back, Cisco bet the farm that people were going to do everything using voice over IP (VoIP). Are we betting the company on the Edge? We are not quite there yet." To hear him talk, it's clear he thinks we are not far from that point.


The creation of the electronic brain
Sebastian Moss reports on the epic decades-long quest to make computers more like the human brain. Early efforts brought us the deep learning revolution; neuromorphic computing could bring us so much more


Sebastian Moss Senior Reporter

Cover Feature


Carver Mead is in no rush. As a pioneer in microelectronics, helping develop and design semiconductors, digital chips and silicon compilers, Mead has spent his life trying to move computing forward. "We did a lot of work on how to design a VLSI chip that would make the best computational use of the silicon," Mead told DCD.

Mead was the first to predict the possibility of storing millions of transistors on a chip. Along with Lynn Conway, he enabled the Mead & Conway revolution in very-large-scale integration (VLSI) design, ushering in a profound shift in the development of integrated circuits in 1979. Even then, he realized that "the fundamental architecture is vastly underutilizing the potential of the silicon by maybe a factor of a thousand. It got me thinking that brains are founded on just completely different principles than we knew anything about."

Others had already begun to look to the brain for new ideas. In 1943, neurophysiologist Warren McCulloch and mathematician Walter Pitts wrote a paper on how neurons in the brain might work, modeling the first neural network using electrical circuits.

"The field as a whole has learned one thing from the brain," Mead said. "And that is that the average neuron has several thousand inputs." This insight, slowly developed with increasingly advanced neural networks, led to what we now know as deep learning. "That's big business today," Mead said. "And people talk about it now as if we've learned what the brain does. That's the furthest thing from the truth. We've learned only the first, the most obvious thing that the brain does."

Working with John Hopfield and Richard Feynman in the 1980s, Mead realized that there was more that we could do with our knowledge of the brain, using it to unlock the true potential of silicon.
"Carver Mead is a pivotal figure; he looked at the biophysical firing process of the neurons and synapses, and - being an incredibly brilliant scientist - saw that it was analogous to the physics within the transistors," Dr Dharmendra S. Modha, IBM Fellow and Chief Scientist for Brain-inspired Computing, told DCD. "He used the physics of the transistors, and hence analog computing, to model these processes." Mead called his creation neuromorphic computing, envisioning a completely new type of hardware, different from the von Neumann architecture that runs the vast majority of computing hardware to this day.

The field has grown since, bringing in enthusiasts from governments, corporations and universities, all in search of the next evolutionary leap in computational science. Their approaches to neuromorphic computing differ; some stay quite close to Mead's original vision, outlined in the groundbreaking 1989 book Analog VLSI and Neural Systems, while others have charted new territories. Each group believes that their work could lead to new forms of learning systems, radically more energy efficient processors, and unprecedented levels of fault-tolerance.

To understand one of these approaches, we must head to Manchester, England, where a small group of researchers led by Professor Steve Furber hopes to make machine learning algorithms "even more brain-like."

"I'd been designing processors for 20 years and they got a thousand times faster," the original architect of the Arm CPU told DCD. "But there were still things they couldn't do that even human babies manage quite easily, such as computer vision and interpreting speech."

Furber wanted to understand what was "fundamentally different about the way that conventional computers process information and the way brains process information. So that led me towards thinking about using computers to try and understand brains better in the hope that we'd be able to transfer some of that knowledge back into computers and build better machines."

The predominant mode of communication between the neurons in the brain is through spikes, "which are like pure

unit impulses where all the information is conveyed simply in the timing of the spikes and the relative timing of different spikes from different neurons," Furber explained. In an effort to mimic this, developers have created spiking neural networks, which use the frequency of spikes, or the timing between spikes, to encode information.

Furber wanted to explore this, and built the SpiNNaker (Spiking Neural Network Architecture) system. SpiNNaker is based on the observation that modeling large numbers of neurons is an "embarrassingly parallel" problem, so Furber and his team used a large number of cores for one machine - hitting one million cores this October. The system still relies on the von Neumann architecture: true to his roots, Furber uses 18 ARM968 processor cores on a System-on-Chip that was designed by his team.

To replicate the connectivity of the brain, the project maps each spike into a packet in a packet-switched fabric. "But it's a very small packet and the fabric is intrinsically multi-cast, so when a neuron that is modeled in the processor spikes, it becomes a packet and then gets delivered to up to 10,000 different destinations."
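This is not SpiNNaker code, but the timing-based encoding Furber describes can be sketched in a few lines of Python: a value is carried not as a number in memory but as the moment a neuron fires, with stronger inputs crossing the firing threshold sooner. The neuron model, constants and function name below are illustrative assumptions, not SpiNNaker parameters.

```python
# Toy leaky integrate-and-fire (LIF) neuron illustrating latency coding:
# the "message" is *when* the first spike occurs, not a stored value.
# Threshold, leak and time step are illustrative, not SpiNNaker parameters.

def first_spike_time(input_current, threshold=1.0, leak=0.1,
                     dt=0.001, t_max=1.0):
    """Return the time of the neuron's first spike, or None if it never fires."""
    v = 0.0  # membrane potential
    t = 0.0
    while t < t_max:
        v += (input_current - leak * v) * dt  # integrate the input, with leak
        if v >= threshold:
            return t  # spike: the timing itself encodes the input strength
        t += dt
    return None

# A stronger input fires earlier than a weaker one, so relative spike
# timing alone distinguishes the two inputs:
assert first_spike_time(5.0) < first_spike_time(2.0)
assert first_spike_time(0.0) is None  # no input, no spike
```

A spiking network chains many such neurons together, with each spike delivered downstream - in SpiNNaker's case, as a tiny multicast packet.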




The nodes communicate using simple messages that are inherently unreliable, a "break with determinism that offers new challenges, but also the potential to discover powerful new principles of massively parallel computation."

Furber added: "The connectivity is huge, so conventional computer communication mechanisms don't work well. If you take a standard data center arrangement with networking, it's pretty much impossible to do real-time brain modeling on that kind of system, as the packets are designed for carrying large amounts of data. The key idea in SpiNNaker is the way we carry very large numbers of tiny packets with the order of one or two bits of information."

The project's largest system, aptly named The Big Machine, runs in a disused metal workshop at the University of Manchester's Kilburn Building. "They cleared out the junk that had accumulated there and converted it into a specific room for us with 12 SpiNNaker rack cabinets that in total require up to 50 kilowatts of power," Furber said. But the system has already spread: "I have a little map of the world with where the SpiNNaker systems are and we cover most continents, I think apart from South America."

In the second quarter of 2020, Furber hopes to tape out SpiNNaker 2, which "will go from 18 to 160 cores on the chip - Arm Cortex-M4Fs - and just by using a more up-to-date process technology and what we've learnt

from the first machine, it's fairly easy to see how we get to 10x the performance and energy efficiency improvements."

The project has been in development for over a decade, originally with UK Research Council funding, but as the machine was being created, "this large European Union flagship project called the Human Brain Project came along and we were ideally positioned to get involved," Furber said.

One of the two largest scientific projects ever funded by the European Union, the €1bn ($1.15bn), decade-long initiative aims to develop the ICT-based scientific research infrastructure to advance neuroscience, computing and brain-related medicine. Started in 2013, the HBP has brought together scientists from over 100 universities, helping expand our understanding of the brain and spawning somewhat similar initiatives across the globe, including BRAIN in the US and the China Brain Project.

"I think what's pretty unique in the Human Brain Project is that we have that feedback loop - we work very, very closely with neuroscientists," Professor Karlheinz Meier, one of the original founders of the HBP, said. "It's neuroscientists working in the wetlabs, those doing theoretical neuroscience, developing principles for brain

"I'd been designing processors for 20 years and they got a thousand times faster. But there were still things they couldn't do that even human babies manage quite easily" - Professor Steve Furber

computation - we work with them every day, and that is really something which makes the HBP very special."

Meier is head of the HBP's Neuromorphic Computing Platform sub-project, which consists of two programs - SpiNNaker and BrainScaleS, Meier's own take on neuromorphic computing. "In our case we really have local analog computing, but it's not, as a whole, an analog computer. The communication between neurons is taking place with the stereotypic action potentials in continuous time, like in our brain. Computationally, it's a rather precise copy of the brain architecture." It differs from Mead's original idea in some regards, but is one of the closest examples of the concept currently available. "That lineage has found its way to Meier," IBM's Modha said.

As an analog device, the system is capable of unprecedented speed at simulating learning. Conventional supercomputers can run 1,000 times slower than biology, while a system like SpiNNaker works in real-time - "if you want to simulate a day, it takes a day, or a year - it takes a year, which is great for robotics but it's not good to study learning," Meier said. "We can compress a day to 10 seconds and then really learn how learning works. You can only do this with an accelerated computing system, which is what


the BrainScaleS system is."

The project predates the HBP, with Meier planning to use the knowledge gained from the HBP for BrainScaleS 2: "One innovation is that we have an embedded processor now where we can emulate all kinds of plasticity, as well as slow chemical processes which are, so far, not taken into account in neuromorphic systems."

For example, the effect of dopamine on learning is an area "we now know a lot more about - we can now emulate how reward-based learning works in incredible detail."

Another innovation concerns dendrites, the branched protoplasmic extensions of a nerve cell, as active elements that form part of the neuronal structure. "That, hopefully, will allow us to do learning without software. So far our learning mechanisms are mostly still implemented in software that controls neuromorphic systems, but now we are on the way to using the neuronal structure itself to do the learning."

Meier's work is now being undertaken at the European Institute for Neuromorphic Computing (EINC), a new facility at the University of Heidelberg, Germany. "It's paradise: We have all the space we need, we have clean rooms, we have electronics workshops, we have an experimental hall where we build our systems." A full-size prototype of BrainScaleS 2 is expected by the end of the year, and a full-sized system at EINC by the end of the HBP in 2023.

Cutting-edge research may yield additional advances. "I'm just coming from a meeting in York in the UK," Meier told DCD as he was waiting for his return flight. "It was about the role of astrocytes, which are not the neural cells that we usually talk about, they are not neurons or chemical synapses; these are cells that actually form a big section of the brain, the so-called glial cells.

"Some people say they are just there to provide the brain with the necessary energy to make the neurons operational, but there are also some ideas that these glial cells contribute to information processing - for example, to repair processes," he said. At the moment, the glial cells are largely ignored in mainstream neuromorphic computing approaches, but Meier believes there's a possibility that these overlooked cells "play an important role and, with the embedded processors that we have on BrainScaleS 2, it could be another new input to neuromorphic computing. That's really an ongoing research project."

Indeed, our knowledge of the brain remains very limited. "Understanding more about the brain would definitely help," Mike Davies, director of Intel's Neuromorphic Computing Lab, told DCD with a laugh. "To be honest we don't quite know enough." But the company does believe it knows enough to begin to "enable a broader space of computation compared to today's deep learning models."

Earlier this year, Intel unveiled Loihi, its first publicly announced neuromorphic computing architecture. "Loihi is the chip that we've recently completed and published - it's actually the fourth chip that's been done in this program, and it's five chips now, one of which is in 10nm."

Intel's take on the subject lies somewhere between SpiNNaker and BrainScaleS. "We've implemented a programmable feature set which is faithful to neuromorphic architectural principles. It is all distributed through the mesh in a very fine-grained parallel way, which means that we believe it is going to be very efficient for scaling and performance compared to a von Neumann type of an architecture approach."

Later this year, Loihi will see its first large deployment, with up to 768 chips integrated in a P2P mesh-based fabric system known as Pohoiki Springs. This system, run from the company's lab in Oregon, will be available to researchers: "We can launch and explore the space of spiking neural networks with evolutionary search techniques. We will be spawning populations of networks, evaluating their fitness concurrently and then breeding them and going through an evolutionary optimization process - and for that, of course, you need a lot of hardware resource."

The concept of relying on a more digital approach can perhaps be traced to Modha, who spent the early 2000s working on "progressively larger brain simulations at the scale of the mouse, the rat, the cat, the monkey and eventually at the scale of the human," he told DCD (for more on brain simulation, see box).

"This journey took us through three generations of IBM Blue Gene supercomputer - Blue Gene L, P, and Q - and the largest computer simulation we did had 10^14 synapses, the same scale as the human brain. It required 1.5 million processors, 1.5 petabytes of main memory and 6.3 million threads, and yet the simulation ran 1,500 times slower than real-time. To run the simulation in real-time would require


12 gigawatts of power, the entire power generation capability of the island nation of Singapore."

Faced with this realization, "the solution that we came up with was that it's not possible to have the technologies the brain has, at least today - it may be possible in a century - but what we can achieve today is to look at the reflection of the brain.

"We broke apart from not just the von Neumann architecture, but the analog-neuron vision of Carver Mead himself," Modha said. "While the brain - with its 20W of power, two-liter volume, and incredible capability - remains the ultimate goal, the path to achieve it must be grounded in the mathematics of what is architecturally possible. We don't pledge strict allegiance to the brain as it is implemented in its own organic technology, which we don't have access to."

Instead, his team turned to a hypothesis within neuroscience that posits that the mammalian cerebral cortex "consists of canonical cortical microcircuits, tiny little

A model brain

What does it take to simulate the human brain? For the Human Brain Project, which hopes to understand and help cure diseases, this is a crucial question.

"It really all depends on the resolution of the simulation," Professor Henry Markram, co-founder of the Human Brain Project and director of the Blue Brain Project, said. "You have a billion organic proteins in a single cell and the interactions are happening at a microsecond scale. If you wanted to model all of that in a human brain, you'd need a computer at about 10^30 flops. That's in the Yottascale."

This is not possible with existing supercomputers, "so they use approximations to look at things from a slightly higher level," Cray's EMEA research lab director Adrian Tate told DCD. "But even then this is a much more difficult problem than you can solve on today's systems.

"We are basically having to rethink every aspect of supercomputing, and the machines that we will be shipping in 10 years are going to have to be extremely different to the ones that we ship today."


clumps of 200-250 neurons, stylized templates that repeat throughout the brain, whether you are dealing with sight, smell, touch, hearing - it doesn't matter."

In 2011, the group at the IBM Almaden Research Center put together a tiny neurosynaptic core "that was meant to approximate or capture the essence of what a canonical cortical microcircuit looks like. This chip had 256 neurons; it was at the scale of a worm's brain, C. elegans.

"Even though it was a small chip, it was the world's first fully digital deterministic neuromorphic chip. It was the seed for what was to come later." After several iterations, they created TrueNorth, a chip with 4,096 neurosynaptic cores, 1 million neurons, and 256 million synapses, at the scale of a bee brain.

IBM's journey was not one it undertook alone - the company received $70m in funding from the US military, with the Defense Advanced Research Projects Agency (DARPA) launching the Systems of Neuromorphic Adaptive Plastic Scalable Electronics (SyNAPSE) program to fund this and similar endeavors.

"Dr Todd Hilton, the founding program manager for the SyNAPSE program, saw that neurons within the brain have high synaptic fan-outs, and neurons within the brain can connect up to 10,000 other neurons, whereas prevailing neuromorphic engineering focused on neurons with very low synaptic fan-out," Modha said. That idea helped with the development of TrueNorth, and advanced the field as a whole.

"DARPA can make a big difference, it's showed a lot of interest in the general area because I think it smells that the time is right for something," Mead told DCD, the day after giving a talk on neuromorphic computing at the agency's 60th anniversary event.

IBM is building larger systems, finding some success within the US government. Two years ago it tiled 16 TrueNorth chips in a four-by-four array, with 16 million neurons, and delivered the machine to the Department of Energy's Lawrence Livermore National Laboratory to simulate the decay of nuclear weapons systems. "This year we took four such systems together and integrated them to create a 64 million neuron system, fitting within a 4U standard rack mounted unit," Modha said.

Christened Blue Raven, the system is being used by the Air Force Research Laboratory "to enable new computing capabilities... to explore, prototype and demonstrate high-impact, game-changing technologies that enable the Air Force and the nation to maintain its superior technical advantage," Daniel S. Goddard, director of the information directorate at AFRL, said last year.

But with TrueNorth, perhaps the most commercially successful neuromorphic chip on the market, still mainly being used for prototyping and demonstration, it is fair to say the field still has some way to go. Those at Intel, IBM and BrainScaleS see their systems having an impact in as few as five years. "There is a case that can be made across all segments of computing," Davies said. "In fact it's not exactly clear what the most promising domain is for nearest term commercialization of that technology. It's easy to imagine use cases at the edge, and robotics is a very compelling space, but you can go all the way up to the data center, and the high-end supercomputer."

Others remain cautiously optimistic. "We don't have any plans to incorporate neuromorphic chips into Cray's product line," the supercomputing company's EMEA research lab director Adrian Tate told DCD. Arm Research's director of future silicon technology, Greg Yeric, added: "I can see that it's still in its infancy. But I think there's a lot to gain - the people on the skeptic side are going to be wrong, I think it's got some legs."

Carver Mead, meanwhile, is not too worried. He is reminded of those early days of neural networks, of the excitement, the conferences and the articles. "This was the


'80s, it became very popular, but it didn't solve any real problems, just some toy problems." The hype was followed by disillusionment, and "it went into a winter where there was no funding and no interest, nobody writing stories. It was a bubble that didn't really go anywhere."

That is, until it did. "It took 30 years - people working under the radar with no one paying the slightest attention, except this small group of people just plugging away - and now you've got the whole world racing to do the next deep learning thing. It's huge."

When Mead thinks of the field he helped create, he remembers that winter, and how it thawed. "There will certainly come a time when neuromorphic chips are going to be so much more efficient, so much faster, so much more cost effective, so much more real time than anything you can do with the great big computers that we have now."

Putting a server in your head

For those suffering from neurodegenerative disorders, hope may lie in cutting-edge research on surgically implantable brain-machine interfaces. "These devices are placed under the skull to do things like deep-brain stimulation for people with Parkinson's disease, epilepsy, Tourette's syndrome and such," Abhishek Bhattacharjee, director of the Rutgers Systems Architecture Lab, said. But when the batteries on the systems run out, "you can't just replace them, so they use wireless charging - but the radios on board can generate excessive heat, and brain tissue is extremely sensitive to heat."

Bhattacharjee hopes to develop extremely energy efficient implants that are able to do more with less energy, allowing for effective treatment across a wider range of diseases. To do that, his team took inspiration from hardware in the server world: "We can take the hardware perceptrons that are commonly used to build branch predictors in server hardware to actually predict the activity of neural behavior and use it to do power management."
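The hardware perceptrons Bhattacharjee mentions are the basis of the well-known perceptron branch predictor: a table of small integer weights is combined with the recent history of branch outcomes, and the sign of the result gives the prediction, with training applied only on mispredictions or low-confidence outputs. A minimal Python sketch follows - the table size, history length and training threshold are illustrative choices, not the implants' actual design.

```python
# Minimal perceptron branch predictor sketch. Global history is a list of
# +1 (taken) / -1 (not taken) outcomes. All sizes are illustrative.

HISTORY_LEN = 8
THETA = 16  # keep training while |output| <= THETA (low confidence)

class PerceptronPredictor:
    def __init__(self, table_size=64):
        # one small integer weight vector per table entry, plus a bias weight
        self.table = [[0] * (HISTORY_LEN + 1) for _ in range(table_size)]
        self.history = [1] * HISTORY_LEN

    def predict(self, pc):
        w = self.table[pc % len(self.table)]
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], self.history))
        return y >= 0, y  # (predicted taken?, raw confidence)

    def update(self, pc, taken):
        pred, y = self.predict(pc)
        t = 1 if taken else -1
        w = self.table[pc % len(self.table)]
        if pred != taken or abs(y) <= THETA:  # train only when needed
            w[0] += t
            for i, hi in enumerate(self.history):
                w[i + 1] += t * hi  # push weights towards the outcome
        self.history = self.history[1:] + [t]  # shift in the real outcome

# An always-taken branch is learned after a handful of updates:
p = PerceptronPredictor()
for _ in range(20):
    p.update(pc=0x40, taken=True)
assert p.predict(0x40)[0]
```

In hardware this is a dot product over a shift register of outcome bits; Bhattacharjee's observation is that the same cheap mechanism can forecast neural activity, letting an implant save power between predicted events.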


Fully automated luxury space networking

Over the next few years, telecommunications networks will have to scale up - and it could be the defining moment for open source software, says Max Smolaks

Max Smolaks, News Editor


Since the days of Samuel Morse, the pace of technological progress has been intrinsically linked to the amount of information that can be sent down a piece of wire. More information means better decisions, faster innovation, and increased convenience. Everybody loves a bit of extra bandwidth - from consumers to businesses to governments.

As telecommunications networks grew larger and the supply of bandwidth increased, network operators required ever more complex machines, created by businesses that were naturally protective of their inventions. Eventually, the world of telecommunications came to be dominated by expensive metal boxes full of proprietary technology.

But the birth of the Web in the 1990s blurred the line between telecommunications and IT equipment. Since then, the progress of general-purpose computing and advances in virtualization have gradually reduced the need to design advanced functions into hardware. Recent trends like Software-Defined Networking (SDN) and Network Function Virtualization (NFV) have ushered in a new world, in which the hardware is built from common parts in simple combinations; complex services are delivered in software, running on virtual machines.

But for SDN and NFV to work, all the elements of virtualized networks must speak a common language, and follow the same standards. This at least partially explains why the networking landscape has gravitated towards collaborative open source development models. Disaggregation of software from hardware has resulted in a generation of successful open networking projects including OpenDaylight, Open Platform for NFV (OPNFV) and Open Network Automation Platform (ONAP).

For SDN and NFV to work, all elements of the network must speak a common language


All of the above are hosted by the Linux Foundation – a non-profit originally established to promote the development of Linux, the world’s favorite server operating system. From these 'humble' origins, the Foundation has grown into a massive hub for open source software development, with more than 100 projects spanning AI, connected cars, smart grids and blockchain. It is also responsible for maintaining what is arguably the hottest software tool of the moment, Kubernetes.

Earlier this year, the foundation brought six of its networking projects under a single banner, establishing Linux Foundation Networking (LF Networking). “We had all these projects working on specific pieces of technology that together form the whole stack – or at least the large pieces of the stack for NFV and next-generation networking,” Heather Kirksey, VP for networking community and ecosystem development at The Linux Foundation, told DCD.

Before the merger, Kirksey served as director of OPNFV. “We were all working with each other anyway, and this would streamline our operations – a few companies were on the board of every single project. It made sense to get us even more closely aligned.”

We met Kirksey at the Open Networking Summit (ONS) in Amsterdam, the event that brings together the people who make open source networking software and the people who use it. The latest idea to emerge out of this community is Cloud-native Network Functions (CNFs) - the next generation of Virtual Network Functions (VNFs), designed specifically for cloud environments and packaged inside application containers managed by Kubernetes.

VNFs are the building blocks of NFV, able to deliver services that traditionally used to rely on specialized hardware – examples include virtual switches, virtual load balancers and virtual firewalls. CNFs take the idea further, sticking individual functions into containers


to be deployed in any private, hybrid or public cloud. “We’re bringing the best of telecoms and the best of cloud technologies together. Containers, microservices, portability and ease of use are important for cloud. In telecoms, it’s high availability, scalability, and resiliency. These need to come together – and that’s the premise of the CNFs,” Arpit Joshipura, head of networking for The Linux Foundation, told DCD.

Application containers were created for cloud computing – hence, cloud-native. Kubernetes itself falls under the purview of another part of The Linux Foundation, the Cloud-Native Computing Foundation (CNCF), a community that is increasingly interested in collaboration with LF Networking.

“We started on this virtualization journey several years ago, looking at making everything programmable and software-defined,” Kirksey explained. “We began virtualizing a lot of the capabilities in the network. We did a lot of great work, but we started seeing issues here and there – to be honest, a lot of our early VNFs were just existing hardware code put into a VM.

“Suddenly cloud-native comes on the scene, and there’s a lot of performance and efficiency gains that you can get from containerization, there’s a lot more density

– more services per core. Now we are rethinking applications based on cloud-native design patterns. We can leverage a wider pool of developers. Meanwhile, the cloud-native folks are looking at networking – but most application developers don’t find networking all that interesting. They just want a pipe to exist.

“With those trends of moving towards containerization and microservices, we started to think about what cloud-native NFV would look like.”

One of the defining features of containers is that they can be scaled easily: in periods of peak demand, just add more copies of the service. Another benefit is portability, since containers package all of the app’s dependencies in the same environment, which can then be moved between any cloud provider. Just like VNFs, multiple CNFs can be strung together to create advanced services, something called ‘service function chaining’ in the telecommunications world. But CNFs also offer improved resiliency: when individual containers fail, Kubernetes’ auto-scaling features mean they will be replaced immediately.

The term ‘CNF’ is just a few months old, but it is catching on quickly: there’s

a certain industry buzz here, a common understanding that this technology could simultaneously modernize and simplify the network. Thomas Nadeau, technical director of NFV at Red Hat, who literally wrote the book on the subject, told DCD: “When this all becomes containerized, it is very easy to build the applications that can run in these [cloud] environments. You can almost imagine an app store situation, like in OpenShift and Kubernetes today - there’s a catalogue of CNFs, and you just pick them and launch them. If you want an update, they update themselves.

“There’s lower cost for everybody to get involved, and lower barriers to entry. It will bring in challengers and disruptors. I think you will see CNFs created by smaller firms and not just the ‘big three’ mobile operators.”

It is worth noting that, at this stage, CNFs are still a theoretical concept. The first working examples of containerized functions will be seen in the upcoming release of ONAP, codenamed ‘Casablanca’ and expected in 2019.

Another interesting player in this space is the Open Networking Foundation (ONF), an operator-led consortium that creates open source solutions for some of

Issue 30 • October/November 2018 45

“I would say 5G mandates open source. The scale is astronomical”
Arpit Joshipura, Linux Foundation

the more practical challenges of running networks at scale. Its flagship project is OpenFlow, a communications protocol that enables various SDN devices to interact with each other, widely regarded as the first ever SDN standard.

A more recent, and perhaps more interesting, endeavor is CORD (Central Office Re-architected as a Datacenter) - a blueprint for transforming the telecommunications facilities that were required by legacy networks into fully featured Edge data centers based on cloud architectures, used to deliver modern services like content caching and analytics. During his keynote at ONS, Rajesh Gadiyar, VP for Data Center Group and CTO for Network Platforms Group at Intel, said there were 20,000 central offices in the US alone - that’s 20,000 potential data centers.

“Central offices, regional offices, distributed COs, base stations, stadiums – all of these locations are going to have compute and storage, becoming the virtual Edge. That’s where the servers will go, that footprint will go up significantly,” Joshipura said. “The real estate they [network operators] already have will start looking like data centers.”

Service providers like AT&T, SK Telecom, Verizon, China Unicom and NTT Communications are already supporting CORD. Ideologically aligned hardware designers of the Open Compute Project are also showing a lot of interest - OCP’s Telco Project, an effort to design a rack architecture that satisfies the additional environmental and physical requirements of the telecommunications industry, actually predates CORD. Despite their well-advertised benefits, OCP-compliant servers might never become truly popular among colocation customers - but they could offer a perfect fit for the scale and cost requirements of network operators.

Many of these technologies are waiting for the perfect use case that’s going to put them to the test – 5G, the fifth generation of wireless networks.
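The ‘service function chaining’ mentioned earlier is easiest to picture as function composition over packets. This toy Python sketch is not from any real NFV stack - the function names, addresses and packet format are all invented for illustration:

```python
# Toy illustration of service function chaining: traffic flows through an
# ordered chain of network functions, each modeled as a plain function.
# All names, addresses and the packet format are invented for this sketch.

def firewall(packet):
    # Drop traffic to a blocked port by returning None
    return None if packet["dst_port"] in {23} else packet

def nat(packet):
    # Rewrite the private source address to a (made-up) public one
    return dict(packet, src="203.0.113.7")

def load_balancer(packet):
    # Pick a backend by hashing the flow's source address
    backends = ["10.0.0.1", "10.0.0.2"]
    return dict(packet, dst=backends[hash(packet["src"]) % len(backends)])

def chain(functions, packet):
    """Apply each function in order; None means the packet was dropped."""
    for fn in functions:
        packet = fn(packet)
        if packet is None:
            return None
    return packet

service_chain = [firewall, nat, load_balancer]
result = chain(service_chain, {"src": "192.168.1.5", "dst": "198.51.100.1", "dst_port": 443})
dropped = chain(service_chain, {"src": "192.168.1.5", "dst": "198.51.100.1", "dst_port": 23})
```

In a real deployment each function would be a separate VNF or CNF in its own VM or container, and the chain would be stitched together across the network by an orchestrator rather than by in-process calls.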
When the mobile industry was switching from 3G to 4G, the open source telecommunications stack was still in its infancy, and Kubernetes simply didn’t exist. With 5G networks, we will complete the virtualization of mobile connectivity, and this time, the tools are ready.

According to Joshipura, 5G will be delivered using distributed networks of data centers, with massive facilities at the core and smaller sites at the edge. Resources will be pooled using cloud architectures – for example, while OpenStack has struggled in some markets, it has proven a massive hit with the telecoms crowd, and is expected to serve the industry well into the future.

“I would say 5G mandates open source automation, and here’s why: 5G has 100x more bandwidth, there will be 1,000x more devices - the scale is just astronomical. You just cannot provision services manually. That’s why ONAP is getting so much attention – because that’s your automation platform,” Joshipura told DCD.

Then there’s the question of cost: during her presentation at ONS, Angela Singhal Whiteford from Affirmed Networks estimated that open source tools can lower the OpEx of a 5G network by as much as 90 percent. She explained that all of this abundance of bandwidth will need to be ‘sliced’ – a single 5G network could have thousands of ‘slices,’ all configured differently and delivering different services, from enterprise connectivity to industrial IoT. The speed and ease of configuration are key to deploying

this many network segments: right now, a new service takes 3 to 12 months to deploy. By completely virtualizing the network, new services can be deployed in minutes. “By moving to open source technology, we have a standardized way of network monitoring and troubleshooting,” Whiteford said. “Think about the operational complexity of monitoring and troubleshooting hundreds and thousands of network slices – without a standardized way to do that, there’s no way you can profitably deliver those services.”

Nadeau, a network engineer, adds this perspective: “I’ve long thought that the whole mobile thing was way over-complicated. If you look at the architecture, they have 12-14 moving parts to basically set up a wireless network. The good news is that you used to have to deploy 12 boxes to do those functions, and today you can deploy maybe two servers, and one box that does the radio control. That’s really one of the important parts of 5G – not only will there be higher frequencies and more bandwidth, but also the cost for the operator will go down.”

The Linux Foundation says its corporate members represent 65-70 percent of the world’s mobile subscribers. With this many projects at all levels of the networking stack, the organization looks well placed for an important role in the next evolutionary leap of the world’s networks.

DCD>Debates
What does the next generation of hyperscale network architecture look like?

Watch On Demand

To understand the network capacity requirements of the world’s largest data centers, run by the likes of Google, Facebook and Amazon, we need to understand the types of applications they are working on and when they are likely to come on-stream. bit.ly/DCDDebatesNetworking


Advertorial: Schneider Electric

The Data Center Operations Staffing Problem

Staff are aging while demand for new hires is going through the roof. Anthony DeSpirito suggests some answers to the staffing crisis

The IT industry as a whole is facing a shortage of skilled workers, and the data center segment is no exception. No quick fix is available to correct the problem, either. It’ll take creative outreach to new potential workers and lots and lots of training. Sorry to be the bearer of bad news, but there’s really no sugar-coating the situation.

Joe Kava, VP of data centers for Google, summed it up nicely in a keynote at a recent conference, saying, “The greatest threat we’re facing is the race for talent.”

In a study by TEKsystems, 81 percent of the IT leaders surveyed said it was difficult to find quality candidates for IT jobs. Nearly 50 percent of leaders with open positions didn’t expect they would fill them in the desired timeframe.

A big part of the problem is that the field is currently dominated by men who are nearing retirement age. “The industry today is predominantly men,” Kava said in his keynote. “Even my own organization is majority men: white men, middle-aged white men.”

Meanwhile, IDC has predicted that worldwide public cloud services and infrastructure spending will reach $160 billion this year, up 23 percent from 2017, and Gartner has said that total cloud services revenue will increase by 87 percent between 2016 and 2020, reaching $411.4 billion.

So, we’ve got an aging pool of talent coupled with rapid growth in data center demand. Clearly, we need a new approach to address the issue.

Diversity is one avenue, starting with getting more women into the field. This will be a long-term endeavor, however, based on current statistics. The National Girls Collaborative Project, in a report on girls and women in science, technology, engineering and math (STEM), found:
• 22.7 percent of chemical engineers are women
• 17.5 percent of civil, architectural, and sanitary engineers are women
• 17.1 percent of industrial engineers are women
• 10.7 percent of electrical or computer hardware engineers are women
• 7.9 percent of mechanical engineers are women

Data center infrastructure staff can come from any one of those fields, but we’re effectively eliminating nearly half of the potential workforce from the start because not enough women enter the field.

Education is another area, and we’re starting to see some movement there. Here are four courses: Southern Methodist University (SMU), a post-graduate degree program in the data center discipline; Marist University, a Bachelor’s degree in data center science; Global Skills X-change Corporation (GSX), the Certified Mission Critical Operator (CMCO) program; and Northern Virginia Community College, a Data Center Operations (DCO) specialization in its Engineering Technology associate’s degree.

That’s all good news, and hopefully more schools will follow suit. But it’ll be at least two years before these programs really begin bearing fruit.

Schneider Electric has long offered its own training program for the personnel who operate data centers for our customers around the globe. We target folks who already have experience in a relevant area – electrical or mechanical engineering, fire suppression, generators or the like. They enter a training program that focuses on continuous improvement and associated certifications. Eventually they reach Critical Environment Technician (CET) Level 3 status, meaning they are subject matter experts (SME) in all relevant disciplines. We’ve trained at least 1,500 CETs by now.

The really tricky part is recruiting candidates to enter the program. For that, we look to various sources, including folks who are leaving the military, technical schools, job fairs and word of mouth. We’re constantly in recruitment mode, because we need to ensure that when one of our technicians gets promoted (or, rarely, leaves for another organization), we can quickly fill that position.

Like everyone in the data center business, we have to prepare for the rapid growth the industry is seeing and quickly execute when a customer decides to out-task data center operations to us.

To learn more about training your own team, download our free e-book, “Essential Elements of Data Center Facility Operations.” bit.ly/The12DataCenterEssentials

Come visit us at DCD>London, exhibition stand 51

Anthony DeSpirito is Schneider Electric’s Vice President/General Manager of Operation Services
Twitter: TonyDeSpirito1
LinkedIn: tony-despirito-4b59473




Power + Cooling

Who wants to be a data center engineer? Finding fresh talent to operate data centers is tough enough, and the industry is growing fast; Tanwen Dawn-Hiscox finds out what’s being done to keep up


Tanwen Dawn-Hiscox, Reporter

Compared to the extensive timeline of human existence, data centers are fresh-faced and youthful - the first facilities of this kind were built in the 1950s. The mighty mainframes inside those fledgling data centers required a mere handful of specialized staff; the more distributed technologies which followed needed larger technical teams to keep them operational.

Our economies now rely on the complex interlocking of infallible digital systems, but these systems are so new that the human infrastructure required to support them is still evolving to catch up. There just aren’t enough data center engineers to go round.

Even now, of the millions of engineers in the world pondering ways to improve human existence, many do not know the first thing about data centers. And why should they? They have plenty of other positions to fill, building electrical, industrial or civil infrastructure, creating more efficient, aesthetically pleasing and cheaper solutions to society’s problems.

Let’s face it, data centers are a niche within a niche. Technologically-minded students and professionals are much more likely to work as software engineers than to consider spending their career in the data center industry.

Data center engineers are involved in site selection, reference design - redesigning and upgrading facilities and systems - and migration. They must know how to operate and maintain electrical, cooling, and IT equipment, all of which evolves and changes over time. The modern data center engineer must also be knowledgeable in virtualization and software-defined systems, and understand the benefits of technologies like computational fluid dynamics, increasingly used to analyze and control airflow in data centers.

And so, as the industry as a whole grows and matures, its workforce is not only too small to meet its needs - it is also aging: the positions that need to be filled bear little resemblance to what was expected of a data center engineer who entered the workforce in the mid-2000s, let alone the nineties and eighties.

Some, like the trainers in DCD’s sister company DCPro, learnt on the job: qualified in electrical engineering, or cooling systems, they found work in data centers and eventually built up enough experience to pass on their legacies. But while this may have worked in the nineties, today’s systems are too complex to be managed by apprentices.

Solutions may come from elsewhere: in the April/May issue of DCD>Magazine, we spoke to Salute Inc., an organization founded by Uptime Institute president and former US Army reservist Lee Kirby, which is trying to place veterans in data center operations roles. Kirby argues that the transition from infantryman to data center technician is easy: working remotely, maintaining dangerous equipment, communicating with a team and acting fast in the face of unexpected situations are skills expected both in the army and in data centers. What’s more, he adds, veterans tend to suffer greater unemployment rates than the general population, making them perfect potential employees.

But while such initiatives are commendable, they are unlikely to produce enough candidates to fill the data center skills gap. The need for data center staff is dependent on a facility’s size, which is why hyperscalers are those with the most serious problem, and also the reason they are the ones coming up with the most innovative solutions. Headhunting other hyperscale and enterprise staff is an option, and one which must not be underestimated. Outsourcing jobs to third-party providers is another, as justified by a Park Place Technologies spokesperson: “Getting outside help to support them isn’t a sign of weakness, but rather a reasonable adaptation to a tight labor market, where there is more demand for IT skills than supply.”

But companies are also pouring money into education programs to create their own teams, designed to specification. For instance, RagingWire recently organized a tour of its facilities in the hope

of inspiring students to consider a career in data center engineering, but some have committed further, building fully fleshed-out curriculums: with help from HPE, the Southern Methodist University in Texas has created a graduate course in data center systems engineering, launched in 2014; and the Northern Virginia Community College, which sits at the heart of the US data center hub, is set to add a Data Center Operations specialization to its Engineering Technology associate’s degree.

But it was Irish engineering firm LotusWorks that first acknowledged the need for specialization at the university level, instigating the creation of the world’s first Bachelor's degree in data center engineering.

The company, which supplies engineers to local data center operators, and knew of the Institute of Technology in Sligo’s relevant courses in electronics and automation, put the idea to its customers, one of whom, according to IT Sligo’s head of engineering, Úna Parsons, “would’ve been Google.” “In Lotus' discussions with them, they identified this need for providing this added skill set and the training that was required, so Lotus brought Google to us.” This set the ball in motion, and a year and a half was spent designing the course modules, bringing in additional input from Microsoft and Facebook, “because they all know each other in the data center world and Ireland is the location of a lot of these

Sales is an issue too The problem doesn’t only apply to data center technicians: technical sales staff are just as hard to find, and training new employees in new geographies can not only be costly, but ineffective: it takes more than technical knowledge to be a good salesperson, and a person with no knowledge of the local market can be just as much of an issue.

companies, so it was very easy for us to work with them.”

The course sought to train engineers locally – in Europe – to meet the market’s needs. This is why, when it came to choosing a location to host practical sessions for the degree, Google set its sights on the Ecole d’Ingénieurs de la Haute Ecole Louvain en Hainaut (HELHa), in Le Mons, Belgium – which, following the success of the undergraduate course, this year launched the world’s first Master's in the field, calling it the Data Center Engineering Program.

Both curricula are mostly coursework-based - though Sligo’s tutors deliver lectures live and record them for students who can’t attend them at the time - because students tend to already be working in professional environments. This year’s cohort, Parsons said, included people from “the States, Australia, the UAE, the






UK, Norway and Holland,” - and participants must not only take time off work, but organize travel and accommodation in order to attend the practical sessions.

Students on both courses tackle maths, instrumentation, thermodynamics, water treatment, automation and control, as well as electricity generation. The practical lessons are taught by HELHa’s existing university staff, who sign up for the program on a voluntary basis “because the labs are in English, not in French,” HELHa director, Ir Valérie Seront, said with a laugh.

According to Parsons, while none of the teachers on either course were data center specialists, IT Sligo staff were “upskilled” to meet the needs of the initiative. HELHa tutors, meanwhile, receive regular briefings directly from Google. But as Seront pointed out, the tutors are already experts in relevant fields: “data center maintenance is [solving] electrical problems, thermal problems and control problems. These are the same in other companies, in electromechanical areas.”

Facilities for the practical sessions were another matter, however, and the university’s labs, used by HELHa students in other engineering domains, required an upgrade to accommodate the program: “We needed particular equipment, like a cooling tower.

“We also duplicated our labs because it’s important not to have too many people on the course,” she said. Funding for the upgrades was partly provided by Google, but the university found support among industry partners too, such as ABB and Schneider.

Overall, both courses have been hailed as a success. The structure is perceived as positive by the HELHa teaching body, as it does exactly what it sets out to do: it prepares students for a particular work environment.

“It's the first time we've worked together with a company on such a big project. The training is done for the enterprise and starts from the enterprise.

“For Belgium, this is unusual. In Belgium, we run a course and then send an engineer to the enterprise. This is not the same. The course was built with an enterprise. We were free to construct it, but Google’s help was very important [to us].”

If anything were to be improved upon, Seront would like to extend the practical aspect of the course, conceding that hands-on learning is an inescapable component, albeit one that is difficult to organize.

Meanwhile, IT Sligo is exploring different ways of applying the content of the course to other types of engineering, including “more generic, manufacturing industries,” Parsons said. “We can now provide similar content to other cohorts.” Of more interest to our readers, though, the university is considering peripheral courses, such as developing its own Master's to follow on from the Bachelor's degree, and perhaps even specialized technician training – another pressing need identified by the industry. The current course focused on the skillset that the industry needed immediately, Parsons said, “but they’re looking at up and down with regard to the skills that they need, what other levels we should be developing.”

It is encouraging for the industry that the need for specialized, university-level training for data center engineers has been recognized. Both courses are still in a trial phase; with 30-odd students starting on the undergraduate path this year – up from 16 last year – and fewer than a dozen on the Master's course, it is fair to say that the project will need to be replicated many times before it can realistically begin to address the skills gap.

Some companies take things one step further, and train engineers themselves: Schneider Electric says it has trained more than 1,500 critical environment technicians, who are then employed by the company to operate its customers’ data centers. Like HELHa, Schneider selects candidates with relevant experience: mechanical or electrical engineers, fire suppression or generator specialists. The problem it faces is where to find such candidates; it cites the military, technical schools and job fairs.

Another way to approach the skills gap may be to broaden the search. A larger, more diverse talent pool is bound to bring a rich new wave of enthusiasm to the industry. Both Schneider and Colt have set up programs to encourage women to work in technology, and Colt recently hosted an event to discuss how a diverse workforce benefits businesses - an argument companies like Google have validated in practice.

In my search for why data centers are struggling to employ people, no company seemed concerned with the question of remuneration. After all, this is an incredibly productive industry, concentrated in the hands of a few companies, and the stakes are high for data center engineers handling the infrastructure underpinning everything from our banks to our communications, transport and even medical systems. Maybe, just maybe, a bit of a financial incentive is in order.

Ultimately, it is a multi-faceted problem to which there is no single solution. But as unemployment looms over vast swathes of modern economies, and technology not only grows in prominence but is better understood by younger generations, those who seek high-value jobs - or any jobs, for that matter - could look to data centers as a safe haven of employment.






Japan's awakening Japan has long maintained its traditions, in culture and in business, keeping its data center service market sheltered from the rest of the world. But a changing cloud landscape has opened the floodgates for foreign companies to feast on the country’s highly developed digital economy, finds Tanwen Dawn-Hiscox


Home to more than 127 million people, Japan is a regional corporate and financial hub, and one of the fastest-evolving digital markets in the world: thirteen Japanese companies figured on this year’s Thomson Reuters list of the world’s top 100 technology companies. As our readers will know, such a society needs a robust digital infrastructure.

Japan has a notoriously insular approach to business and digital service provisioning, while its real estate prices and energy costs are among the highest on the planet; there is also a shortage of leasable dark fiber. The collection of islands is subject to frequent cataclysmic natural disasters that - as we have witnessed with horror in recent weeks - all too often ravage its cities.

How does one navigate the data center landscape in such a seemingly adverse environment? In an attempt to answer this question, we spoke to Jabez Tan, head of research for the data center and cloud market at Structure Research, a Canadian market analysis provider specializing in digital infrastructure. Structure recently published an analysis of the country’s main data center markets, where most of its activity is concentrated.

The Japanese data center market is unique in that it has traditionally been dominated by systems integrators like Fujitsu, Hitachi, Mitsubishi Electric and NEC. These giants exist because Japanese companies want a “one-stop shop for their IT services,” preferring a single vendor with a broad portfolio to having to provision each service individually.

Systems integrators have been safe from foreign competition because, until recently, language and cultural barriers stood in the way of exchanges between international and domestic companies. And until cloud providers began dominating the data center service industry globally, they had no incentive to change, explained Quy Nguyen, VP for global accounts and solutions at Colt DCS - a service provider which broke into the Japanese market in 2014, through the acquisition of local colocation and dark fiber provider KVH: “Many of them were making their money hosting private solutions; it’s a highly profitable business.”


Tanwen Dawn-Hiscox Reporter

But resting on their laurels is no longer an option; change is afoot. As the market matures, companies are becoming “more savvy and more educated in the way they deploy IT architectures and the way they procure IT,” Tan said.

They will perhaps learn from massive colocation player Equinix - another foreign company - which now brings in more than 15 percent of all of Tokyo’s colocation revenue, and is in the process of building a $70 million data center in the city, its eleventh in the Japanese capital. With a mix of colocation, public cloud, and interconnection services in its facilities, Equinix allows customers to define their needs on a seasonal basis, offering more flexibility than any systems integrator could. Tan believes this will force some of the traditional local players to adapt and start offering more managed cloud services to their customers.

“I think there are a lot of drivers converging to make the Japanese market a lot more of an international powerhouse than it previously was”

NTT, for example, recently merged its Dimension Data and NTT Communications subsidiaries, precisely to “form a more targeted entity to go after this kind of cloud opportunity,” he said.

For Nguyen, reluctance to change is part of Japan’s cultural fabric, and doesn’t necessarily mean that the local market won’t catch up with its international counterparts in due course. “Japan has always been this way. They’re sometimes slow to jump on a trend, but once they do, it’s like a herd mentality.” And, he ventured, “I think that’s what we’re seeing now: There’s no more perceived negativity on cloud services; security issues and such have been addressed and all the barriers have been removed.”

Others, like Digital Realty, have found their success not through acquisitions, but by teaming up with systems integrators: the colocation giant’s $1.8bn joint venture with Mitsubishi, MC Digital Realty, already accounts for more than three percent of Tokyo’s colocation market and seven percent of Osaka’s, in terms of IT load capacity. Promising to bring at least ten new data centers online, both of the venture’s participants stand to gain a lot from the country’s booming data center market.

Another thing that makes Japan’s data center landscape stand out is that facilities here must be built to endure typhoons, earthquakes, tsunamis and tropical storms. Somewhat counterintuitively, the high likelihood of natural disasters has not dissuaded companies from entering the market, and was identified by Structure as

having an encouraging effect: after the 2011 disaster, companies anxious to ensure their own business continuity caused a spike in demand for local data center services. And, because the country is prone to seismic activity, developers simply build Japanese data centers to withstand the worst conditions.

According to Tan, typically, in the basement of all data center buildings, “they’ll show you this giant seismic rubber contraption that acts like a spring so that when an earthquake hits, the building moves; instead of crumbling and breaking, at a certain level of vibration it rotates with the seismic activity.”

When it comes to the seismic isolation of a facility, Nguyen said, choosing the right systems isn’t complex, but more a matter of how much you are willing to invest. Site selection based on strong bedrock is key, he explained, because “fundamentally, you can’t change the foundation that it sits on.” This was the basis of Colt’s decision to build Inzai, which boasts a so-called probable maximum load percentage - the likelihood of a catastrophic failure in the event of an earthquake - of two percent. The company was vindicated in the 2011 earthquake, when the site experienced “no downtime whatsoever.” New players have all eyed neighboring sites, he said.

Real estate costs and the difficulty of accessing power due to tight government


[Tokyo market snapshot: 6.1m sq ft of whitespace; 676MW total power capacity; 176,867 racks. The infographic also showed Equinix’s share of the colocation market, 2018 colocation market size, colocation market growth YoY 2017-2018, and 2018 wholesale vs retail colocation capacity.]

regulation of the energy sector have done little to dampen colocation providers’ zeal, either. And, because the market is at a turning point, only just opening its doors to international companies, it seems to have a lot of potential.

While the market is complex, for Tan it also holds great promise: from its status as a financial capital to its highly developed Internet economy, there is scope for growth in the Japanese data center industry. “I think there are a lot of drivers converging to make the Japanese market a lot more of an international powerhouse than it previously was.”

It would seem that Colt agrees. With a third Tokyo campus on the way, on which it is planning to break ground in the second quarter of 2019, Nguyen said: “I can’t disclose the exact figure, but my expectation is that before we go live with that site, we will be largely sold out.” Of the six to ten interested parties, he said, it would take three to fill it to capacity.

The company is also looking at building in Osaka, Japan’s second biggest data center market, where it currently leases a small amount of space in a third-party data center. It is worth noting that AWS, Google Cloud and Azure all have footprints in the city, too. However, Nguyen added, “it is enormously difficult to find good land” there, and the company has passed on several opportunities which it considered substandard.
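The “giant seismic rubber contraption” Tan describes is a base isolator: it lowers the building’s natural frequency well below the dominant frequency of earthquake shaking, so only a small fraction of the ground motion is transmitted. A rough, undamped single-degree-of-freedom sketch in Python - illustrative numbers only, not figures from Structure’s report - shows the effect:

```python
def transmissibility(f_ground: float, f_natural: float) -> float:
    """Fraction of ground acceleration transmitted to an undamped
    single-degree-of-freedom structure (classic vibration-isolation formula)."""
    r = f_ground / f_natural  # frequency ratio
    return 1.0 / abs(1.0 - r * r)

# A stiff, fixed-base building (natural frequency ~1.8 Hz) resonates badly
# with 2 Hz ground shaking; base isolators soften the system to ~0.4 Hz.
rigid = transmissibility(f_ground=2.0, f_natural=1.8)     # near resonance: amplified
isolated = transmissibility(f_ground=2.0, f_natural=0.4)  # r = 5: ~96% reduction
print(f"fixed-base: x{rigid:.1f}, isolated: x{isolated:.3f}")
```

Isolation kicks in once the shaking frequency exceeds the structure’s natural frequency by a factor of about 1.4, which is why the soft rubber bearings let the building “move with” the quake rather than fight it.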

Analysts agree that international businesses will keep pouring into the country, especially in light of the upcoming 2020 Olympics, which Tan said will likely “drive a lot of analytics, big data, AI type workloads, supported by a lot of the media, digital media content and streaming platforms” that want to have local capacity ready for the games.

Nguyen noted that Colt has already received interest from new customers, a large portion of which come from China, representing viewing and payment platforms “and a lot of fintech solutions.” The market’s newfound success has had a cumulative effect, he added, increasing demand on top of the hyperscale companies “from other US providers that previously had not had a big position in Japan.”

As well as driving more demand, the data center market’s growth could drive institutional change: the country may wish to reconsider its data sovereignty laws. Compared to other regional markets, like China, Japan has not felt the need for data sovereignty, as its exchanges were most often domestically focused. Companies previously chose to host their core compute in Japan, but the decision was mostly latency-related.

For Colt, storing data locally is a nascent concern. Europe’s GDPR affects companies all around the world by governing where European data is processed, Nguyen said, but China’s flourishing fintech market rules have had a bigger effect locally: “Various Chinese entities each have their own payment scheme which they need to deploy locally for FSA purposes,” he said.

Whatever the future brings, one thing is certain: Japan, in all its uniqueness, is no longer an inward-looking, domestically-ruled market, but, much to the joy of its new entrants, a vast land of opportunity.


[Osaka market snapshot: 940k sq ft of whitespace; 107MW total power capacity; 29,640 racks. The infographic also showed NTT West’s share of the colocation market, 2018 colocation market size, colocation market growth YoY 2017-2018, and wholesale vs retail colocation capacity.]


Data centers look so squeaky clean, it’s hard to believe they can be involved in anything illegal, but you would be surprised what DCD has experienced in our years on the beat

Tales of terror from the dark side of the data center community

“Did they get your data? You’ve got to ask yourself one question: ‘Do I feel lucky?’ Well do ya, punk?” - CEO announcing a data breach

Doctored hardware

More subtle than cyber attacks, what if nation states could compromise hardware before it is delivered, so it will pass information or obey remote instructions? In the 1990s, the US government failed in a bid to mandate the use of the Clipper chip, an encryption device designed with a “backdoor” that would give US authorities access to all communications. In 2013, Edward Snowden revealed that much the same job was being done by intercepting and doctoring networking equipment. Suspicion has since shifted to China, an authoritarian regime where most of the world’s computing hardware is made. Huawei, ZTE and Lenovo have all been alleged to have included backdoors, while Bloomberg claimed in a contested report this year that some servers made in China for Supermicro carry a surveillance chip implanted by the People’s Liberation Army.

International attacks China has long engaged in state-sponsored cyber attacks, while Russia has gained attention for its use of disinformation and asymmetric warfare. In September, the Netherlands expelled four men, alleged to be members of a GRU Russian military intelligence unit called Sandworm, which attacked sites including the Organisation for the Prohibition of Chemical Weapons, the UK’s Foreign Office, and Porton Down chemical weapons facility. Russia denies the allegations.


Student uprising

Earlier in 2018, a group of thirty dissident students ransacked the server room at the Paul Valéry University in Montpellier, France, to prevent their fellow pupils from sitting their second term exams, which were to be hosted on the campus’ systems running the open-source learning platform Moodle. The vandals were protesting a tough change to the university admission process, known as “la loi orientation et réussite des étudiants (ORE)” (the student orientation and success law), which they argued would deter otherwise suited candidates from applying for their desired course, leading them to fall back on safer, but perhaps less aspirational, pathways. The university reported €300,000 ($384,000) of damage, a large share of it destroyed IT equipment; the bill also included broken chairs, tables, and the cost of cleaning up graffiti.

Security + Risk

Secret hacking

There is no legal requirement to report a cyber crime, but observers were shocked at the irresponsibility of Uber, which covered up an incident in 2016, paying off hackers who accessed the data of 57 million of its users, as well as 600,000 US drivers. In late 2017, newly appointed CEO Dara Khosrowshahi reported that he had “recently learned” that hackers were paid $100,000 to delete the data, and that Uber then failed to inform users or the relevant authorities. Ousted co-founder Travis Kalanick was allegedly aware of the incident at the time.

Chief suspect

We’re not aware of anyone succeeding at data center arson, but DCPro trainer Ian Bitterlin has a few pointers on how to do it: “Walk round the outer perimeter of the property noting the location of the fiber pits and return later that night with a few chums each in a white van and armed with a balaclava, a few gallons of unleaded and a box of matches. Grenades would be better but my local garage doesn’t sell them,” he said in an entirely theoretical opinion article. “Whip up the cast-iron pit lids, dump the petrol and, like it says on the firework boxes, ‘light and quickly retire.’”

Crypto jacking

Large numbers of corporate servers are hijacked to mine Bitcoin and other cryptocurrencies, with their owners left footing massive electricity bills. This crime category has been enabled because mining new cryptocurrency ‘coins’ is an energy-intensive, marginally-profitable process that involves complex mathematical operations. Running those operations on someone else’s server shifts the energy cost onto the victim. It’s no surprise that crypto jacking overtook ransomware as the leading malware category in early 2018, according to BitDefender.
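The “complex mathematical operations” behind proof-of-work mining amount to brute-force hashing: guessing nonces until a hash meets a difficulty target. This toy Python sketch - illustrative only, nothing like Bitcoin’s real block format or difficulty - shows why every failed guess burns CPU time, and therefore electricity, which is exactly the cost crypto jackers offload onto their victims:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> tuple[int, str]:
    """Brute-force a nonce until the SHA-256 hash of the block
    starts with `difficulty` zero hex digits (toy proof-of-work)."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest
        nonce += 1  # every failed guess is wasted CPU time (and electricity)

nonce, digest = mine("block #1", difficulty=4)
print(nonce, digest)  # each extra zero of difficulty multiplies the work ~16x
```

At difficulty 4 this takes tens of thousands of hashes on average; real networks tune the difficulty so the whole planet’s mining hardware needs minutes per block, which is why hijacked servers run hot around the clock.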

“I’m going to make you an offer you can’t understand” The Bitcoinfather

Bitcoin heist

Four men have been charged with a series of thefts at cryptocurrency-mining data centers in Iceland, after a dramatic series of events in which one suspect, Sindri Thor Stefansson, escaped jail and briefly fled to the Netherlands. Iceland’s cheap renewable power has made it a Mecca for Bitcoin mines. These shoestring facilities often lack features of business-grade data centers, including high security, making them a target for a gang which stole 600 servers worth almost $2 million in a late 2017 crime spree, whose targets included high profile victims such as Advania in Reykjanes and the Borealis data center in Borgarbyggð.

Copper theft

The high cost of copper has made data centers and telecom infrastructure prime targets for scrap metal thieves. In South Africa, R5 billion (US$340m) is lost each year to copper theft, including the audacious raid on the City of Johannesburg’s data center in November 2017. Four suspects were caught, after R2m (US$137k) worth of copper cables were taken.

Skunk works

Some years back, a giant cannabis farm was reportedly found at a data center, in a joint operation involving police and IT experts. False fronts on the racks hid hydroponic growing systems and LED lighting. “It seems the legitimate business of the company was being used as a smokescreen for the real operation,” said a police spokesperson. “Data centers are the ideal cover. They use lots of power, generate lots of heat, are highly secure, have powerful air filtration and plenty of room - and network operatives always act a bit stoned anyway.” In this instance, the guilty party is in fact news site ZDNet, which published this unlikely story on April 1.

Tax dodge

Government efforts to support data centers and boost depressed areas can go wrong. In the UK, 675 people invested £79 million in 2011 in Cobalt data centers in Tyneside, and received £131 million in tax relief: they included footballers and celebrities Wayne Rooney, Jimmy Carr, Rick Parfitt, Kenny Dalglish, Roy Hodgson, Terry Venables, Arsene Wenger, and more. But the data centers were just unused shells. Apparently the investors did not break any laws, but they are being chased for tax payments following a crackdown on “aggressive” tax avoidance schemes.


The Most Extreme Data Center on the Planet

Public voting closes Nov 23! See web link


From frontline duty on the battlefield to the far reaches of space, these facilities live life to the full, thriving in heat or cold, or delivering services from a location that no one would have thought possible - deep underground, or on the edge of space. www.dcdawards.global/extreme-data-center


> Awards | 2018

Colo + Cloud

Max Smolaks News Editor

A cloud for Russia

As Yandex finally enters the public cloud market, Max Smolaks finds out what to expect from the Russian Internet giant


How much do you know about Yandex? You might be vaguely aware it’s a technology company hailing from Russia, but to anyone living in one of the former Soviet Socialist republics, Yandex is a household name. It started in 1997 as an online search engine and gradually developed a wide range of services including image search, video hosting, cloud storage, navigation and machine translation, all backed by an extensive advertising network.

If that sounds familiar, you will not be surprised to learn that some commentators have started calling Yandex ‘the Russian Google.’ But according to Yan Leshinsky, chief 'cloud-keeper' at Yandex, this comparison does the company a disservice: “Yandex went through quite a transformation over the last three years. It’s no longer just the Russian Google – it’s the Russian Uber, the Russian Amazon and the Russian Spotify. Yandex has developed quite a unique ecosystem of products, primarily for the consumer market – although we also have business services, especially in advertising.”

Yandex offers its own web browser (based on the Chromium project), its own digital personal assistant (Alice), its own self-driving car fleet (with street trials this year), and now – its own public cloud service.

“Over the past 20 years, Yandex has accumulated a lot of different technologies, and gained the ability to design, build and maintain world-class data centers. Considering cloud opportunities in Russia, it comes as no surprise that it decided to build a cloud platform, to become the AWS of Russia,” Leshinsky told DCD. “There are no native cloud platforms in Russia – it’s possible to use any of the big three cloud providers, but all of their capacities are in Europe.”

Yandex was born just after the collapse of the Soviet Union, as the country was

"Yandex is the Russian Google, Uber, Amazon and Spotify" Yan Leshinsky

getting to grips with capitalism. It was a period of great chaos, as government assets found their way into private ownership, often involving a degree of violence along the way. But it was also a period of great opportunity – while some were busy dividing the country’s natural resources, others were more interested in harnessing the newly discovered power of the Web.

The initial search service was successful because it was uniquely adapted to Russian morphology (how words are formed) and could recognize inflection in search queries – providing more accurate results than alternatives designed to index text in English. The company was also lucky enough to employ cult Russian graphic designer Artemy Lebedev to create its recognizable corporate branding and visual identity – which remains largely unchanged. Today, Yandex has close to a 60 percent share of the Russian search market. It runs the country’s most visited website, seen by 80 million people every month.

Yandex Cloud uses the same infrastructure that powers the company’s search and analytics services, and is hosted in the same data centers. “I think it’s a good thing – it will let us grow slowly, and avoid making huge capital investments in this growing business. We have the ability to transfer capacity between the main business unit and Yandex Cloud, if we need to – it’s a common hardware platform designed by Yandex, and it’s flexible enough to be configured to have different storage devices and different amounts of memory.”

As for software, the virtual machines are built with the KVM hypervisor – the same technology that currently powers AWS instances. Leshinsky told us that most of the services available on the cloud platform are based on open source software. “Now, the advantage of a real cloud platform is not just Infrastructure-as-a-Service, it’s all the additional platform services that let companies reuse the same components, without having to invent them.

“On the infrastructure layer, we are using open source technologies: KVM and its ecosystem for the hypervisor, OpenContrail [recently rebranded as Tungsten Fabric] for software-defined networking, and our storage stack is an in-house development. The reason we are not using something like Ceph is because we feel we can build something more efficient, and as a result – more cost-effective.”

According to Leshinsky, almost everything Yandex puts into the public cloud is derived from the products it uses internally. The notable exception is containers – while selling virtual machines to its customers, Yandex itself has adopted Kubernetes on bare metal, for increased efficiency. But worry not, he said – a managed Kubernetes service will be available before the end of the year.

“We see three main pillars of the platform: the first is data processing, you can call it ‘big data.’ That set of services is aimed at storing, processing and visualizing data – some of it is in-house development, some of it is managed versions of open source components. For example, Yandex Cloud offers three databases: PostgreSQL, MongoDB and ClickHouse – the latter was developed at Yandex and open-sourced.

“The second pillar is artificial intelligence services – this is already part of the platform. We have released voice-based technologies like text-to-speech, voice recognition and translation. In the second phase of our roadmap, we will release vision-related services.

“The third pillar is the developer’s platform. This would offer services for developers and system administrators, to let them quickly create applications on Yandex Cloud, and monitor and maintain them. We will have our own application lifecycle management service as well.”

Leshinsky said the cloud business already has around 100 corporate customers. Most of these are Russian companies – but the service is aimed at any business with a presence in the country. “We will go after enterprise companies directly, and we will have a self-service portal for smaller companies that want to consume and manage cloud resources themselves. We also have a large partner initiative.”

Although Yandex owns data centers in Europe, its cloud facilities will, for the time being, remain within Russia – due in part to the country’s strict data residency regulations, passed in 2015. In accordance with Federal Law No. 242-FZ, foreign businesses that handle the personal data of Russian citizens have to keep this data on servers located in the country.

This doesn’t mean that international expansion is out of the question, but if Yandex Cloud does venture abroad, it will appear in locations we least expect. “We’re younger and smaller than dominant international players, but we do see that there are some markets that are a priority for dominant players, and other markets are underserved – and these would be the markets we target first. For example, we will not expand into the US or Western Europe, but Eastern Europe, the Middle East, the Far East or South America are the markets we are considering. It’s still early days.”

But what about politics? In the US, Russian hackers are accused of meddling in presidential elections; in the UK, Russian spies are accused of poisoning former Russian spies on British soil. International economic sanctions against the country, imposed over the conflict in Ukraine, are still in place. None of this bothers Yandex – a company that is widely considered to have maintained its independence from the frequently overbearing Russian state.

“With regards to the current situation, I don’t think this is hurting us or helping us in any way. Our biggest advantage is that we are local, that we are investing here, and Yandex is a recognized and respected brand. We understand how to provide good customer service. When we start international expansion, we will consider the political situation in whatever jurisdiction we choose to operate in.

“The legend of Russian hackers is based on the good quality of software engineering in Russia – and Yandex gets great software engineers, so we are definitely taking advantage of that.”

Advertorial: Moy Materials

Up on the roof Andy Nicoll, Lead Design Manager, explains the attractions of Moy Materials’ roofing solutions


What led you to work with Moy Materials on your data center projects?

I have worked with Moy Materials across three major projects spanning over 100,000 square meters from 2015 to 2018. The power capacity of those projects is in excess of 86MW, and Moy have provided the very best in protection - a mission-critical part of any data center project. For me, Moy ticks the boxes in terms of a full system process that spans the feasibility/pre-construction, construction and post-construction phases. They tend not just to follow the benchmark, but to set it. Moy provided an all-in-one service that gave us an assured product with full industry-leading credentials and backing.

What are the key design standards that you apply to building data center roofs?

Key for me is that the roof system I’m selecting is FM Global approved. Roof selection is mission-critical. It is particularly important that it meets Class 1 fire ratings: with global change we are seeing so many high-energy wind storms, and an FM-approved roof gives us the absolute assurance that the system is going to meet wind uplift resistance values.

What separates suppliers from each other when choosing who to partner with on the selection of your roof system?

Moy have a dedicated team in the construction of data center roofs – a team that’s available 24/7. This team worked hand-in-hand with my design team on the specification of the roof system and the preparation of the construction details, leading to a bespoke roof system design that not only met, but exceeded, the client’s needs.

The world is becoming a smaller place and projects traverse territories, which brings its own set of challenges - how does working in different territories impact how you work with key system suppliers?

This is a challenge, and local design standards can change depending on the territory. I’ve found that working with a team who have proven international experience and can see a project through from feasibility to post-construction is vital. This is not always achievable, as some suppliers can’t follow the project across the globe. Luckily, Moy can. It’s a team that appreciates we need solutions that respond in different ways depending on a number of key factors. That’s especially important when the build is global, and they have experience in local building codes and international design standards.

What are the trust factors in choosing a roof system supplier?

I base decisions on a number of factors. For example, Moy have a proven track record with four decades of experience in building envelope design, with over 25 years’ involvement in international construction projects across the globe and, as mentioned, full certification. But what’s most important is the client-supplier relationship. Once you engage with the team, the team stays with you through all the stages of design, build and post-construction. Moy gives us the security of knowing that every element of our roof system provides technical solutions to meet our interfaces and will exceed the required life cycle of the data center.

Tell me a little bit about their process…

Moy provide stage-by-stage supervision of the roof specification design and construction, in conjunction with detailed onsite technical support. I’ve already outlined the pre-construction support. In the construction stage they provide high-quality onsite technical support and continuous monitoring and updating of the assembly process. Moy deals with issues when they arise (as they invariably do) in a no-nonsense, proactive and time-sensitive way.

At the post-construction stage Moy have a sign-off procedure, through which their handover process empowers the client company to take over the operation and maintenance of the roofing system. This can include specific training for the client on the procedures involved in roof maintenance, and best practice to maintain and monitor their roofing to outlast the life cycle of the data center.

Andy Nicoll – Lead Design Manager with over 20 years in Design Management within Main Contracting, working predominantly on major projects both in the UK and internationally. A portfolio which includes working within the EMEA region delivering campus data center projects.

Contact Details London +44 (0) 1245 707 449 Glasgow +44 (0) 141 840 660 Dublin +353 (0) 1 463 3900 www.moymaterials.com dcenquiries@moymaterials.com DC Projects Lead cathal.quinn@moymaterials.com


Power + Cooling

Scotty, we need more power!

“I’m givin’ her all she’s got, captain!” Yet another phrase attributed to Star Trek that’s not actually in Star Trek

In the previous issue of DCD Magazine we spoke to John B. Goodenough, the father of the modern lithium-ion battery. Aged 96, he is still trying to improve on his invention - even the scientist who developed the best energy storage solution available, which helped kickstart the personal electronics revolution, thinks it’s no longer... good enough.

And that got me thinking: what would the world look like if we had better battery technology? Obviously, our smartphones would last longer than a day - and that alone could increase productivity, reduce stress and improve the quality of life for some 4.5 billion mobile device owners (Statista). Our luggage would be slimmer once we stop carrying all manner of chargers around, and Starbucks would lose its status as the preferred provider of power sockets. All headphones would be wireless. Power tools would last forever.

At the moment, electric vehicles still fail at some of the very basic tasks expected of a car: your gas-guzzler will drive up to 600 miles, make a short stop to refuel, then drive another 600 miles, ad infinitum. Your Tesla will drive around 300 miles before needing a lengthy rest at a very specific location - which means no cross-country road trips for the environmentally-conscious motorist. That is, until we get better batteries.

But just imagine UPS systems that last for weeks. Imagine how many useful things you could do with the space (and the budget) that is currently occupied by the compulsory diesel generator set and its fuel reserves. Better batteries would simplify the delivery of renewable energy, help match supply and demand at any given time, and keep grid frequencies in check. They would also make data centers less prone to outages.

Amazon has its Snowmobile - a truck full of hard drives that can be filled with data in order to transport it from Point A to Point B. Why not a truck filled with batteries, making weekly shipments of energy to off-grid data centers? Say goodbye to transmission losses! Obviously, the truck would be electric.

There are countless applications that could benefit from a better battery. I think we should drop everything we are doing and sink our communal dollars into battery research. We don’t really need another generation of CPUs - Moore’s Law is more what you’d call a “guideline” than an actual rule.

Most of this is wishful thinking, but like so many aspects of our life, energy production and transmission are undergoing an important transformation. There’s talk about smart grids and software-defined power (p26), the Linux Foundation (p44) has just launched LF Energy - a coalition that wants to bring the benefits of open source software to the energy sector - and our own DCD Energy Smart conference in Stockholm was a resounding success. I can’t wait to return to Sweden in April 2019.

Max Smolaks News Editor @maxsmolax

