
Issue 31 • February 2019 datacenterdynamics.com

Juniper Networks’ CEO Thinking outside the box

SCALE Superpowers & Supercomputers

The US, China, Europe & Japan are locked in a race to build the world's first exaflop machine

Award Winners’ Special Our pick of 2018’s best and brightest

Beating the Greek Depression How Lamda Hellix thrived


Upgrading the birthplace of the Internet

DATA CENTER SURGERY Slicing into a live facility without killing the patient


Turning a fab into a data center


High Rate & NEW Extreme Life High Rate Batteries for Critical Power Data Center Applications

RELIABLE Narada HRL and NEW HRXL Series batteries deliver! Engineered for exceptional long service life and high rate discharges, Narada batteries are the one to choose. Narada provides solutions to meet your Critical Power needs.

ISO9001/14001/TL9000 Certified Quality Backed by Industry Leading Warranties

Narada...Reliable Battery Solutions www.mpinarada.com - email : ups@mpinarada.com - MPI Narada - Newton, MA Tel: 800-982-4339

ISSN 2058-4946

Contents February 2019

6 News From T5 and Infomart comes Stack Infrastructure


14 2019 Calendar Keep up-to-date with our events and product announcements

16 The exascale supercomputer Behind the global race to build the world’s most powerful machines

Industry interview



25 Rami Rahim, Juniper Networks “I don’t think [hyperscalers] want to get into the business of developing networking equipment just because it’s fun to do so - they will only do it if they can’t get the technology that they need elsewhere.”

29 The modernization supplement Stories of data center rejuvenation

30 The hidden data center sector It’s time to think about an upgrade


32 From AOL to Stack Infrastructure The Internet was born here, it doesn’t want to die here

36 Data center surgery Keeping the patients alive while fixing their ailments

38 Inside Intel The chip company’s IT CTO gives us a tour of its biggest data center

43 One small step for Chinese AI No privacy means a lot of data


44 Thriving during a Greek tragedy Speaking to Lamda Hellix about economic depression

48 Did you win? Celebrating DCD’s award winners


52 Making it work Data center commissioning for fun and profit

53 Sunday lovely Sunday Keeping a data center running in Nigeria is no mean feat



54 Max’s take Sometimes I wonder what Jeff Bezos tastes like (Contains graphic imagery)

Issue 31 • February 2019 3

When the future meets the past


A new era of high performance computing is arriving, while yesterday's facilities are being brought up to speed. China could beat the US, Japan and the EU in a technology contest much like the space race of the 1960s, with superpowers spending super-bucks to build exascale supercomputers. Sebastian Moss gives us a commentary (p16) on the race that could end up in Guangzhou, China.

Upgrades are like open heart surgery: you mustn't kill the patient

Upgrading a data center is like open heart surgery. You need to fix the problems and restore the site to health, without killing it. That's what we found in an eye-opening exploration of the world of data center modernization and upgrades.

The infrastructure upgrade sector is a nearly invisible fraction of the data center market. It's hidden from view by the vast and exponential growth of new facilities. It's also somewhat hard to define precisely. Upgrades can range from fitting blanking plates to improve circulation, to adding an entire prefabricated data center hall in your parking lot. On the spectrum between these options, there are cost-benefit equations to calculate. And they don't always come out the way you expect.

"We try harder because we're number two" is a cliché, but for Juniper Networks it seems to be true. Incumbents like the status quo, but insurgent players are ready to rip up the rulebook, the story goes, and that's the tale Rami Rahim, Juniper's CEO, told Max Smolaks (p25). Rahim delighted our open source guru with a tale of how open software has enabled his company to remain a serious competitor to the industry's 800-pound gorilla, Cisco. We also like how Juniper put itself under the leadership of an actual engineer, who started at the company in ASIC design.


From the Editor

Target date for China's first exascale computer. The US and Japan are aiming for 2021, and Europe has plans to deliver one in 2022

What happens when politics impinges on technology? If the technologists are smart, they innovate their way out of any problems. That's what Lamda Hellix did when the Greek economy hit the rocks. According to CEO Apostolos Kakkos (p44), it turned out that the international nature of the colocation business could work to the benefit of Lamda Hellix and Greece as a whole. As a bonus for this issue's modernization feature, Lamda Hellix's flagship site was built as a cable landing station, serving as a prime example of infrastructure repurposing.


News Editor Max Smolaks @MaxSmolax
Senior Reporter Sebastian Moss @SebMoss
Reporter Tanwen Dawn-Hiscox @Tanwendh
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Head of Design Chris Perrins
Designer Dot McHugh
Head of Sales Martin Docherty
Conference Director, NAM Kisandka Moses
Conference Producer, EMEA Zach Lipman

Head Office

PEFC Certified

If you missed our Awards event in December, you overlooked a great night out. You can't catch up on the fun, but in this issue we present all the award winners in a four-page special (p48), and interview two of the best. In 2019, why don't you enter?

This product is from sustainably managed forests and controlled sources PEFC/16-33-254

Peter Judge DCD Global Editor


Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below.


Global Editor Peter Judge @Judgecorp

DatacenterDynamics 102–108 Clifton Street London EC2A 4HW +44 (0) 207 377 1907

Dive deeper


Meet the team


DCD Magazine • datacenterdynamics.com





© 2019 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.

Still relying solely on IR scanning? Switch to automated real-time temperature data.

Introducing the Starline Temperature Monitor

Automated temperature monitoring is the way of the future. The Starline Critical Power Monitor (CPM) now incorporates new temperature sensor functionality. This means that you’re able to monitor the temperature of your end feed lugs in real time, increasing safety and avoiding the expense and hassle of ongoing IR scanning. To learn more about the latest Starline CPM capability, visit StarlineDataCenter.com/DCD.

Whitespace

News in brief

Huawei CFO arrested in Canada
Meng Wanzhou was arrested over allegations that she helped the telecommunications giant evade US trade sanctions against Iran. Meng, 46, is also the daughter of company founder Ren Zhengfei. China has demanded her release.

White space A world connected: The biggest data center news stories of the last two months

Qatar’s cabinet approves Microsoft Azure data center The company also plans to open Azure cloud data centers in Dubai and Abu Dhabi this year.

Audits reveal 340 illegal construction workers at Google’s Belgium data center Company says it instructed contractor ISG to ensure compliance with local laws.

WindCores project puts small data centers in wind turbines Each wind turbine is 13m wide and 150m high. They will each hold four fire-resistant IT safety cabinets, housing 62U server racks. Fujitsu is providing the servers.

Kaiam’s European operations go into administration 310 employees at the optical transceiver maker’s Livingston factory were laid off with no notice, and the factory closed.


Hyperscale data centers hit 430 in 2018 - Synergy The number of hyperscale data centers, mostly used by major cloud companies and social networks, has increased by 11 percent in 2018. According to Synergy Research, there are now 430 facilities in the world that could be considered ‘hyperscale,’ with around 40 percent of them found in the US, 8 percent in China and 6 percent in Japan. There are 132 more in development.


T5’s Chicago data center

IPI launches Stack Infrastructure: A wholesale provider with eight data centers Data center investor IPI Partners has combined facilities acquired from T5 Data Centers with three sites previously owned by wholesale provider Infomart to create a new brand - Stack Infrastructure - with 100MW of capacity across six US markets. Stack Infrastructure will offer wholesale colocation, along with options for powered shell, and provide new built-to-suit data centers for hyperscale firms. Stack will be run by Brian Cox, previously Infomart CEO, and chief strategy officer will be Matt VanderZanden, previously head of site selection at Facebook. Stack Infrastructure starts with influential backers - IPI Partners is a joint venture between Iron Point and Iconiq Capital, a venture fund led by Divesh Makan, which in 2014 included Facebook’s Dustin Moskovitz and Sheryl Sandberg, Twitter’s Jack Dorsey


and LinkedIn’s Reid Hoffman among its financiers. In February 2018, Infomart Data Centers sold the Dallas Infomart building (which gave the service provider its name) to its tenant Equinix. In March 2018, IPI took on the remaining three Infomart facilities. It has been widely expected that Infomart Data Centers would change its name, to avoid confusion with the building where it started. IPI Partners had also backed T5 in 2016, helping to finance its facilities in Portland, Oregon and Dallas. In 2018, IPI was reported to be considering divestment of its stake in T5, but this decision was clearly reversed. Instead, complementary assets suitable for the wholesale market have been shifted across to Infomart - T5 will continue to exist, with five remaining data centers. This merger creates a provider with around 1.5 million sq ft (140,000 sq m) of space across six locations: Infomart’s facilities in Ashburn, Virginia; Portland, Oregon; and Silicon Valley, California - along with T5’s sites in Atlanta, Georgia; Chicago, Illinois; and two in Dallas/Fort Worth, Texas. Stack will continue a T5 plan for a campus in Alliance Texas, Fort Worth. bit.ly/DataCentersStackUp


Nigerian law enforcement agency to relocate its servers following data center fire After its former data center almost burned to the ground, the Economic and Financial Crimes Commission (EFCC) of Nigeria has decided to place its ICT resources where it can keep an eye on them - in its own headquarters. The blaze, the cause of which is unknown, started in the EFCC’s server room, located in the Wuse 2 neighborhood of Abuja, destroying its digital records. The organization’s new headquarters, built to house its interagency task force as well as its data center, are located just five miles away, in the capital’s Jabi district. An EFCC spokesperson told the Daily Trust that the damage caused in the incident is likely to total “a few million” - bearing in mind that a million Nigerian Naira is worth approximately $2,800 - but that the agency would carry out a detailed analysis in order to be sure. The EFCC was created in 2011 in response to allegations that Nigeria was failing to meet international standards in the prevention of corruption and money laundering. Since then, it has made a number of high-profile arrests. bit.ly/FlameOff

US Navy

DoD missile defense data have “systemic” security flaws The US Department of Defense Inspector General found numerous security failings at the data centers and networks supporting the Ballistic Missile Defense System (BMDS). In a report made public in December, the IG investigated a nonstatistical sample of 5 out of 104 DoD locations. The full details are available via the link below, but include failures to enable multi-factor authentication, myriad unpatched systems with decades-old known vulnerabilities, server racks routinely left unlocked, broken data center door sensors, issues with CCTV cameras, widespread use of unencrypted removable media, and a lack of systems to detect suspicious network activity. “The disclosure of technical details could allow US adversaries to circumvent the BMDS capabilities, leaving the United States vulnerable to deadly missile attacks,” the report warns. bit.ly/JoinMeInTheBunker

Paramount Pictures

BA to sue CBRE over £58m data center outage British Airways is suing property specialist CBRE over a 2017 data center failure that forced the airline to cancel 672 flights during a three-day Bank Holiday weekend, at an estimated cost of £58 million ($75m). On Saturday, 27 May 2017, a power outage apparently brought down a BA data center operated by CBRE. Backup systems evidently failed to cope, leading to a three-day outage, which stranded thousands of passengers. A BA inquiry has still not published any definitive root cause analysis, and now the company has appointed law firm Linklaters to bring a claim against CBRE Managed Services in London’s High Court. Over a year and a half after the outage, there is still no agreement as to the cause of the problem, and no clarification of early reports about a power supply issue. At the time, BA had two sites close to Heathrow: Boadicea House and Comet House, with contemporaneous news suggesting that a UPS issue shut down Boadicea House. bit.ly/BAhhhh

DCD>Energy Smart

April 1 - 2 2019

This year’s DCD>Energy Smart event in Stockholm focuses on the growing convergence of data centers and energy networks as capacity needs grow and demand on the grid increases. The conference looks at future collaboration, business models and how to deliver reliable data centers anchored to a robust power grid. The main stage will kick off with a keynote session delivered by Greenpeace’s Gary Cook - sharing fresh insights from 2019’s upcoming #ClickingClean report. bit.ly/DCDEnergySmart2019



Disused prison in Pennsylvania to become a data center

Virtus to spend £500m on five London data centers

Wiki Commons

Fiber networking and broadband solutions provider United Fiber & Data (UFD) has purchased a disused prison in York, Pennsylvania with a view to turning it into a data center. The facility will serve as a point-of-presence (PoP) for a 400 mile fiber-optic cable system connecting New York City and Ashburn in Virginia. The network, UFD CEO Bill Hynes explained in an interview with the York Dispatch, will offer a geographically diverse route joining the US’ financial hub and the processing heart of the Internet. York County Prison, a six-story architectural marvel designed by B.F. Willis, was built in 1906, but has since withered after it was abandoned in 1979. Hynes said that UFD would start work on the data center in 2019, aiming to bring it online next year. The city of York sold the prison as well as the road on which it is located to UFD, expecting that the data center will generate construction and operational jobs for local residents, as well as improving local connectivity and attracting more ISPs to the area. The provider promises free Internet connectivity for low-income households, saying web access is “a right, just like oxygen and water.” bit.ly/DCDBehindRacks

EdgeCore buys land for 80MW campus in Santa Clara Recently-established infrastructure consortium EdgeCore Internet Real Estate has acquired a parcel of land in Santa Clara, California, where it will build its fifth data center campus. The upcoming site will offer 80MW of power capacity once complete, with the first phase of the project expected in 2020. EdgeCore previously said it plans to invest in six different locations across the US; it has already announced hyperscale campuses in Dallas, Phoenix, Northern Virginia and Reno. EdgeCore was established at the beginning of 2018 with a focus on wholesale colocation, with $800m of capital from Mount Elbert of Colorado, Singapore’s foreign reserve investment firm GIC, and Canadian pension funds manager OPTrust. The consortium says that, so far, it has purchased enough land to support more than 4.4 million square feet of data center space. bit.ly/ALateSantaTreat

Peter’s hub factoid Loudoun, Northern Virginia, known as Data Center Alley, has around 1kW of data center space per head of population. After discounts, data centers provide $250 million in tax, one quarter of the county’s revenue.


British colocation provider Virtus is planning to spend £500 million ($645m) on a network of five data centers in London. All of the new facilities - including some that have been previously announced - will be constructed simultaneously over the next two years, marking this as one of the largest digital infrastructure initiatives to ever take place in Europe. The project is backed by Singapore-based telecommunications giant ST Telemedia Global Data Centres (STT GDC), which acquired Virtus in 2017. The announcement was welcomed by Graham Stuart, Minister for Investment at the Department for International Trade, who said the projects will generate highly-skilled jobs across the Greater London area. Virtus is the third largest colocation provider in the UK. The company is focused exclusively on the London market, with three existing locations in close proximity to the capital. The £500 million will be used to expand its properties and add another 76MW of power capacity. Three of the new data centers - including previously announced London3, as well as London9 and London10 - will be located in Slough, technically just outside London, and the unofficial data center capital of the UK. Another two facilities, London6 and London7, will be located on the Stockley Park campus, alongside the existing London5, making it the country’s single largest colocation site. At the same time, Virtus will continue expansion of its data center locations in the suburbs of Slough, Hayes and Enfield. Virtus says all of the upcoming facilities will feature ‘living exterior walls’ and power densities upwards of 40kW per rack. bit.ly/AVirtusCycle

Airtel to build ten data centers across India Looking to capitalize on the increased presence of cloud giants and content providers, Bharti Airtel’s Nxtra Data is planning ten data centers across India, from which it will offer colocation, managed hosting, public and private cloud services. According to the Economic Times, the data center arm of the country’s largest mobile operator has already selected four locations in which to build facilities: Pune, Chennai, Mumbai and Kolkata. The first data center, in the Maharashtra city of Pune, will go live in February. Airtel Business CEO Ajay Chitkara said Nxtra Data had established partnerships with “all the big cloud companies,” some of which already use Airtel’s infrastructure, and was doing business with major content and application providers. Airtel is also among the first in line to accommodate any government cloud infrastructure. In 2016, the company signed a deal with CenturyLink – now part of Cyxtera Technologies – which offers its suite of managed IT services from Nxtra Data facilities across India. bit.ly/UpInTheAirtel


IBM Services to help Juniper with cloud management in a $325m deal

More on Juniper p25

California-based Juniper is one of the world’s largest networking equipment and software vendors, with a market cap of approximately $10.6 billion. IBM Services will transition the company to a hybrid cloud architecture that uses Watson to simplify management - IBM’s cognitive computing platform relies on machine learning and features advanced natural language capabilities, which means you can theoretically talk to it the way you would to a real human being. IBM’s Watson tech will be applied to a wide variety of tasks at Juniper, from its help desks to its data and voice networks. Bob Worrall, CIO at Juniper, said: “In working with IBM Services, we will be able to collaborate with them on innovative solutions for our cloud-first business model.” It is not clear whether this means Juniper will also become a major customer of IBM Cloud - a quote from Martin Jetter, SVP of IBM GTS, suggests otherwise: “Our work with thousands of enterprises globally has led us to the firm belief that a ‘one-cloud-fits-all’ approach doesn’t work and companies are choosing multiple cloud environments to best meet their needs.” A Juniper spokesperson gave DCD an even more cryptic response: “Juniper Networks regularly reviews business operations and opportunities to improve services and reduce costs. IBM will continue to maintain the high level of IT service our customers have grown to expect over the years, while allowing us to focus on driving toward long-term growth.” Sebastian Moss


Google gets tax breaks for $600 million New Albany data center proposal Google is considering building a $600 million data center in New Albany, Ohio, after negotiating tax breaks under the pseudonym of Montauk Innovations LLC. The Ohio Tax Credit Authority has approved a 100 percent, 15-year data center sales tax exemption, with the option for it to be renewed for up to 40 years through October 30, 2058. The deal is thought to be worth $43.5 million over the life of the tax credit. “Providing support to this project will help ensure Ohio’s business environment is competitive for this project and add to Ohio’s data center industry,” the Ohio Tax Credit Authority said. It expects the data center to create $2.5 million in payroll. In addition to the Tax Credit Authority approval, New Albany City Council members voted in favor of the project, which is expected to span 275,000 square feet (25,500 sq m). New Albany Mayor Sloan Spalding added the city welcomes and supports the facility. Should the project go ahead, it would be built at the New Albany International Business Park, in the Oak Grove II Community Reinvestment

Area. It is expected to create 50 jobs by the end of 2023, according to a legislative report for New Albany City Council members. “Google is considering acquisition of a property in New Albany, OH, and while we do not have a confirmed timeline for development for the site, we want to ensure that we have the option to further grow should our business demand it,” Google spokesperson Charlotte Smith said in an emailed statement to Columbus Business First. The company has yet to buy the land under consideration. Ohio’s governor, John Kasich, has long been pushing for Google to build a facility in his state. Back in 2017, he said: “I told them, we’ve got Amazon, we’ve got Facebook, we’ve got IBM Data Analytics, all we’re missing is Google. I think we’re going to see movement on that front as well.” Facebook is building a $750 million data center in New Albany, while Amazon has spent $1.1 billion on three data centers in Central Ohio. bit.ly/GiveGoogleABreak

Wiki Commons

Amazon Web Services launches cloud region in Stockholm, Sweden Amazon Web Services has launched its fifth cloud region in Europe, with three ‘zones’ - geographically separate data centers - in Stockholm, Sweden. The company originally announced its intention to come to the Nordic nation back in 2017. During construction, 100,000 tons of excess rock was used to raise the Vilsta ski resort by some 10m. Soon after the launch, the cloud company acquired two plots of land in the area around Stockholm: one in Katrineholm and the other in Eskilstuna, for a total of SEK 83.2 million (US$9.2m). The purchase “sends a strong signal that the data center market is developing very strongly in the Stockholm region and that our region is a strong attraction point to international investors,” Anna Gissler, acting CEO of Stockholm Business Region, said. Around the same time, Microsoft acquired 130 hectares of land across two sites in the vicinity of Stockholm for SEK 269 million ($29.6m). Google owns land to the north. bit.ly/StockInStockholm





· Sequential Start · Automatic Transfer Switches · Combination Units · 19” Rack & Vertical (Zero U) Units · International Sockets · Custom & Bespoke Design Service

Designed and Manufactured in the UK

+44 (0)20 8905 7273

· Remote Monitoring · Remote Switching · Local Coloured Display · Programmable Sequential Start · External Temperature & Humidity Measurement · USB Port for Storing & Recalling Setup · Can Be Specified With Any of Olson’s Data Centre Products or as Part of a Bespoke Design or an In-Line Version for Retrofits



Whitespace Photography: Sebastian Moss

For more on HPC p16

Cray warns of substantial net loss in 2018/19, pins hopes on Shasta US supercomputing company Cray has warned of “a substantial net loss for both 2018 and 2019,” as the HPC market continues to struggle - with users extending the life of their existing machines. But the company tried to paint a positive picture of its future, with the upcoming “exascale-capable” Shasta architecture on the horizon. The losses follow a market contraction that has lasted several years. Back in 2015, Cray reported $724.7m in revenue - in 2018, preliminary results show total revenue to be about $450m.

Intel to make 3D stackable chiplets, unveils 10nm Sunny Cove CPU architecture After years of delays, Intel is finally ready to start shipping 10nm CPUs at scale, with the processors based on its newly-announced ‘Sunny Cove’ architecture set to release in 2019 - at least for consumer products. Dates were not given for its Xeon server chip line. At its Architecture Day event, the semiconductor company also announced a 3D packaging technology called ‘Foveros,’ which allows for complex logic dies to be stacked upon one another. It will start shipping in the second half of 2019. Sunny Cove will replace Skylake, the microarchitecture found in the vast majority of servers (as well as laptops and desktops) sold today. Intel has had a tough time trying to produce chips with a 10nm lithography process - back in 2013, it claimed 10nm would be ready by 2015. That got pushed to 2016, then 2017, and then delayed yet again. The company technically released a 10nm product in 2018, with the Core i3 processor codenamed Cannon Lake shipping in limited quantities - but it is thought that the yield on Cannon Lake was so poor, the company made a loss. bit.ly/WhatAbout7nm


AWS starts offering Graviton, a custom Arm CPU built by Amazon At AWS re:Invent in Las Vegas, Amazon Web Services announced that it is now offering Arm CPU-based instances for the first time. The ‘Graviton’ 64-bit processor was designed in-house by Annapurna Labs, a chip company Amazon acquired for $350m in 2015, which is also behind two generations of ‘Nitro’ ASICs that run networking and storage tasks in AWS data centers. bit.ly/AmazonArmsUp

Huawei announces Kunpeng 920, its Arm-based processor for the data center Electronics giant Huawei has revealed its first ever Arm-based server CPU, the Kunpeng 920, which it says will deliver higher performance per watt than mainstream chips. Based on the ARMv8 architecture, the 7nm Kunpeng 920 comes with 64 cores and operates at 2.6GHz, with eight channels of DDR4 memory. It supports RDMA over Converged Ethernet (RoCE) network protocol, PCIe Gen 4 standard and Cache Coherent Interconnect for Accelerators (CCIX) architecture. Designed for use in big data, distributed storage and for powering native Arm applications, Huawei says its chip is 25 percent faster on the SPECint benchmark, and 30 percent more power efficient than comparable alternatives. bit.ly/TheOnlyHuaweiStoryATM



UK justice system brought to a standstill by data center network issue

Nvidia CEO Jensen Huang – Sebastian Moss

Nvidia’s “extraordinary, unusually turbulent, and disappointing quarter” Nvidia has lowered the revenue guidance for the fourth quarter of its financial 2018, citing the decline in cryptocurrency demand, Chinese gaming stagnation, and problems with the global economy. The company reduced its guidance from $2.7 billion (already below Wall Street expectations) to $2.2bn, plus or minus two percent. bit.ly/NoVidia

An extended outage experienced by the UK’s Ministry of Justice has been blamed on a failure at a data center supported by Atos and Microsoft. The Crown Prosecution Service, the Criminal Justice Secure Email system (CJSM), and the court hearing information recording system, XHIBIT, were among the services impacted. CJSM also suffered a completely separate failure that affected roughly 12.5 percent of users. “We have been working closely with our suppliers, Atos and Microsoft, to get our systems working again, and yesterday we had restored services to 180 court sites, including the largest ones,” the Parliamentary Under-Secretary of State for Justice, Lucy Frazer, told the House of Commons. Opposition figures were not comforted by the comments, however. Labour Member of Parliament and shadow justice secretary Yasmin Qureshi cited a statement by the chair of the Criminal Bar Association which described the “courts system as being ‘on its knees’ following that failure, and blamed ‘savage cuts to the MoJ budget.’” Qureshi, a former barrister, added: “Of course, such failings do not happen in a vacuum. The Ministry of Justice has faced cuts of 40 percent in the decade to 2020. The Government are pursuing a £1.2 billion [US$1.58bn] courts reform program, which has seen hundreds of courts close, thousands of court staff cut and a rush to digitize many court processes. Are the plans to cut 5,000 further court staff by 2023 still being pursued?” Sebastian Moss bit.ly/CuttingRemarks

A deer broke into a data center A furry hacker collective posted videos of an unexpected data center guest - a deer. The scared ruminant appears to have broken through the wall of the data center, and was able to make it into the server room before animal control were called. “Breaking Mews: Just when you thought it was going to be routine Monday night 2 am server maintenance... The Venison Red Team gets past your fancy guards and biometrics,” DEFCON Furs tweeted. The group, which organizes events and parties for attendees of infosec event DEFCON who are also interested in the furry subculture, included several images and videos of a deer loose in a data center, along with its apparent entry point - see link below for more. DEFCON Furs provided DCD with a few further details about the incident, confirming the facility was in North America, but declined to reveal specifics. “They were working in the data center, slowly decommissioning it to relocate. That’s why the cables look like a mess,” DEFCON Furs explained. “The deer made the hole apparently through [a] glass window when it got spooked.” As for the startled furry intruder: “It was fine, ran out and rejoined its friends. The police and animal control made sure it was OK before letting it run down the hall and out the steps. I think she was good.” This is not the first time data center operations have been disrupted by animals. According to Chris Thomas, founder of the CyberSquirrel1 project, there have been at least 2,524 ‘attacks’ by small animals against critical infrastructure since 2013. bit.ly/MaxsFursonaSpotted

12 DCD Magazine • datacenterdynamics.com



IBM announces Q System One, a quantum computer in a 9ft cube

At the Consumer Electronics Show in Las Vegas, IBM unveiled a new quantum computer that is more reliable than its previous experimental prototypes, bringing the company a step closer to commercialization of this technology.

The machine, which IBM claims is the first integrated, general-purpose quantum computer, has been named the ‘Q System One.’ To mark what it calls “an iconic moment” for the business, the company turned to designers at Map Project Office and Universal Design Studio, tasking them with developing a unique glass enclosure.

Rather than operating with bits that are in the state of 0 or 1, like classical computers, IBM’s circuit model quantum computer has quantum bits that can appear in both states at once, theoretically allowing for significantly more computing power - for the right applications.

The System One is enclosed in a nine-foot sealed cube, made of half-inch thick borosilicate glass. The case opens using “roto-translation,” or motor-driven rotation around two displaced axes - something the company says simplifies the system’s maintenance and upgrade process, and minimizes downtime.

“The IBM Q System One is a major step forward in the commercialization of quantum computing,” Arvind Krishna, SVP of Hybrid Cloud and director of IBM Research, said.

bit.ly/CubeBits
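The bit-versus-qubit distinction in the story can be made concrete in a few lines. This is the standard textbook single-qubit model, not anything specific to IBM's hardware; the gate and measurement functions below are illustrative only:

```python
import random

# A classical bit is either 0 or 1. A single qubit's state is a pair of
# amplitudes (a, b) with |a|^2 + |b|^2 = 1; measuring it yields 0 with
# probability |a|^2 and 1 with probability |b|^2.

def hadamard(state):
    """Apply a Hadamard gate, which maps |0> to an equal superposition."""
    a, b = state
    s = 2 ** -0.5
    return (s * (a + b), s * (a - b))

def measure(state):
    """Collapse the state: return 0 or 1 with the Born-rule probabilities."""
    a, _ = state
    return 0 if random.random() < abs(a) ** 2 else 1

qubit = (1.0, 0.0)       # start in the definite state |0>
qubit = hadamard(qubit)  # now an equal superposition of |0> and |1>

samples = [measure(qubit) for _ in range(10_000)]
print(sum(samples) / len(samples))  # ~0.5: about half the measurements give 1
```

Until measured, the qubit carries both amplitudes at once - and with n qubits the state holds 2^n amplitudes, which is where the "significantly more computing power" for the right applications comes from.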

Jülich Supercomputing Centre gets €36m for HPC, quantum and neuromorphic computing

The Jülich Supercomputing Centre (JSC) has been awarded €36 million (US$41m) to research future computing technologies, including quantum computing and neuromorphic computing. The German high performance computing center will receive €32.4m ($37m) from the Federal Ministry of Education and Research (BMBF), and €3.6m ($4.1m) from the Ministry of Culture and Science of North Rhine-Westphalia.

“As a federal government, we want to further develop the technological basis of digitization and artificial intelligence, in particular, in order to safeguard the future of Germany as a federal state,” the parliamentary state secretary of the BMBF and Member of the Bundestag, Thomas Rachel, said (translated).

The Jülich Research Centre (which includes JSC) is currently home to the ‘Jülich Wizard for European Leadership Science’ (JUWELS) supercomputer, which has a theoretical peak performance of 12 petaflops. That figure is expected to grow this year thanks to a “booster” upgrade, designed for massively parallel algorithms that run more efficiently on a manycore platform.

JUWELS, and its predecessor JUQUEEN, have been used extensively in the Human Brain Project, an ambitious EU-funded effort to increase our understanding of the human brain by emulating its components (see DCD Magazine Issue 30). Jülich’s brain research is headed by Professor Katrin Amunts, director of the Institute of Neuroscience and Medicine (INM). Working “with an international technology company in the field of machine learning,” Amunts’ team is building a detailed digital map of the structure and function of the human brain, which helps with the development of neuro-inspired computing technologies.

As for quantum computing, JSC currently holds the record for the highest number of qubits simulated using a supercomputer - 48. With the additional funding, Jülich will set up new scientific institutes, with more than 100 additional scientists to be recruited.

bit.ly/TotallyWizard
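Jülich's 48-qubit record is easier to appreciate with the naive storage arithmetic: a brute-force state-vector simulation of n qubits holds 2^n complex amplitudes, doubling with every added qubit. (The record-setting runs themselves used more sophisticated techniques; this back-of-the-envelope sketch only shows why the wall arrives so quickly.)

```python
# Naive memory cost of simulating n qubits as a dense state vector:
# 2**n complex amplitudes, each a double-precision complex number.
BYTES_PER_AMPLITUDE = 16  # two 8-byte floats: real and imaginary parts

def state_vector_bytes(n_qubits: int) -> int:
    """Memory needed to hold the full state vector of n qubits."""
    return (2 ** n_qubits) * BYTES_PER_AMPLITUDE

for n in (16, 32, 45, 48):
    tib = state_vector_bytes(n) / 2 ** 40
    print(f"{n} qubits -> {tib:,.1f} TiB")
# 48 qubits -> 4,096.0 TiB (4 PiB): supercomputer territory, and each
# additional qubit doubles the requirement again.
```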

AI chip startup Graphcore raises $200m from BMW, Microsoft and others

AI silicon startup Graphcore has closed its Series D fundraising round, raising $200m from existing investors, as well as companies like Microsoft and BMW’s i Ventures.

The UK-based company, which was founded in 2016, has begun shipping its Intelligence Processing Unit (IPU), a chip it claims is significantly faster at running certain machine learning inference and training workloads than either CPUs or GPUs. Currently, only early access customers have access to IPUs, but Graphcore says it is now ramping up for high-volume production.

With the market growing at speed, several chip design startups have received attention and investment, including Cambricon, Horizon Robotics and Cerebras Systems, but with limited deployments and a rapidly changing field, it is hard to evaluate the usefulness of current products.

bit.ly/CoreBlimey



2019 Calendar January








>Energy Smart 2 April

>Madrid 29 May

>New York 9-10 April Extras

>AI-Ready Marketplace www.dcd.events





>Board Room

Edge 15 January

Retail 5 February

Strategy 5 March

Edge 30 April

Energy Smart 8 May

What verticals will define Edge? What design differences will mark them?

How does digital infrastructure transform retail distribution logistics?

Are data centers investment opportunities over-hyped?

Who’s in charge of the healthcare data center?

What role do data centers play in an urban energy smart future?

Modernization 19 February Modernizing your legacy data center - what’s new?

Listen to our expert panels discuss the hot topics of the day on our new live webinar format. You can register online and view our back-catalog on-demand today!

Energy Smart 26 February Can you run your data center off-grid?

AI/HPC 12 March

New Builds 14 May

What does it take to make your data center AI-ready?

How can you reduce the cost per megawatt for new data centers?

Edge 20 March

Automation 21 May

20/20 Vision: Physical IT infrastructure for financial services

So now you’ve implemented DCIM, what next?




Download our new digital supplements and read the latest magazine issues online on our new interactive media platform!

Print/Digital 31 January Awards winner guide

Digital supplement 01 March Smart Energy

Print/Digital 05 April New York show guide



Data Center Upgrade and Modernization

Building the Colo Way



Digital supplement 01 May Data Center AI

DCD Round-up



>Indonesia 19 June

>Bangalore 18 July

>Shanghai 25 June

>San Francisco 12 July


>Australia 15 August


>Santiago 10 September

>Singapore 17-18 September

>México 25 September


> Energy Smart Forum



>Dallas 22 October

>London 5-6 November


>São Paulo 5-6 November

>Board Room >5G/Telco Edge Forum

>Mumbai 14 November


>Beijing 5 December

>Awards 5 December, London

Hybrid IT 4 June

Construction 2 July

Telco 3 September

AI/Automation 8 October

Cooling 19 November

Hybrid IT 10 December

How do you compare cost and risk across onprem, colo and cloud?

What’s new when building at megascale?

How are telcos exploiting their data center opportunity?

How is AI evolving as a data center management tool?

High efficiency, where should you be investing your dollars?

How can I predict my data center capacity requirements accurately?

Hybrid IT 17 September

Energy Smart 15 October

Critical Power 26 November

Are multi-tenant data centers inherently energy inefficient?

Will smart energy tech unite the data center and energy industries?

How will software-defined and virtualized power systems impact availability?

Print/Digital 04 October London show guide

Digital supplement 01 November

Digital supplement 01 December



Edge 11 June Powering 5G networks, the Internet of Things and Edge Computing

Edge 9 July What’s the impact of the hyperscale edge on colo markets?

Network 24 September What do you need to know about Data Center Interconnects?

Digital supplement 01 June Living at the Edge

Print/Digital 05 July San Francisco show guide

Digital supplement 01 August

Digital supplement 01 September


Hybrid IT



Future Industries & Smart Infrastructure

Telecoms Infrastructure and the Data Center

*Subject to change

Cover Feature


Superpowers need supercomputers. But who will win the greatest tech contest since the space race? Sebastian Moss finds out


This is a story about scale. Billions of dollars are being spent; thousands of the world’s brightest minds are competing across vast bureaucratic systems; and huge corporations are fighting for contracts. All of this is necessary to build unfathomably powerful machines to handle problems beyond our current capabilities.

Most of all, this is a story about nations. For all the talk of Pax Americana, of the ‘end of history,’ of a time of peace, prosperity and cooperation, reality has not been so kind. Competition still reigns - over resources, over science, over the future. And to build that future, superpowers need supercomputers.

“There is a race going on,” Mike Vildibill, VP of HPE's advanced technologies group, told DCD. “I don't think it's as public as the space race was. Perhaps that’s in part because some of the implications really refer to national security. It's not simply who gets there first, in terms of national pride - although there is a lot of national pride - but there's something much more serious going on.”

This race, decades in the making, will bring into existence the first exascale

supercomputers, systems capable of at least one exaflops, or a quintillion (that’s a billion billion) calculations per second. In some senses, ‘exascale’ is an arbitrary figure, especially given limitations with the benchmarks used, but “this is not just about bragging rights,” Eric Van Hensbergen, distinguished engineer and acting director of HPC at Arm Research, told DCD.

“Exascale supercomputers are a critical piece of national infrastructure: It's about how we are going to discover the next material, the next type of reactor, the next type of battery. All of that's driven by modeling and simulation on supercomputers.

“And so if you don't have access to that resource, that puts you in a bad position economically and in regards to security, and that's why you see this nationalistic fervor around supercomputers.”

But to understand the future of computing, one must first understand what we have achieved so far. With a peak performance of 200 petaflops, today’s most powerful supercomputer can be found at the US Department of Energy’s Oak Ridge National Laboratory in Tennessee. Housed in a 9,250 square foot (859 sq m) room, ‘Summit’ features some 4,608 compute servers, each having two 22-core IBM Power9 CPUs and six Nvidia Tesla V100 GPUs. It consumes 13MW of power, and has 4,000 gallons of water rushing through its cooling systems every minute.

Summit, like the Sierra system and the upcoming Perlmutter supercomputer, was created as a stepping stone on the path to exascale, a way to estimate the magnitude of the task that lies ahead.

STEP ONE

“They wanted an intermediate step to support software development and allow people to begin exploring the technology and how to scale code up,” IBM’s head of exascale, Dave Turek, told DCD. “They wanted that step so that all the design, manufacturing, other processes could be shaken out.”

The three pre-exascale systems, collectively known as CORAL, enabled the DOE to align its exascale ambitions with the reality of the market. “A lot of things have to occur in lockstep, and when they don't, you have to be clever enough to figure out a way around that. With the costs of storage in an era of big data, what technology do you use? When is phase-change memory going to come? How will I utilize that? How much risk am I exposing to myself based on assuming that it will be here?” Turek said.

“We're not sitting around waiting for the exascale systems to show up,” Doug Kothe, director of the US government’s Exascale Computing Project (ECP), explained. “These pre-exascale systems are critical to our success. We're doing a lot of work on these systems today.”

All of these efforts are meant to prepare the nation for the arrival of exascale systems. Lori Diachin, deputy director of ECP, laid out the roadmap: “The first exascale systems are going to be deployed at Argonne and Oak Ridge, first the Aurora and then Frontier systems in the 2021-22 timeframe, and then the El Capitan system probably around 2023.”

Creating these machines will require a huge amount of additional work - one can’t, for example, simply build Summit, but at a larger scale. “So, we could do three [Summits together],” IBM’s Turek said. “We couldn't do five, and we have actually looked into that. The problems are elusive to most people because no one's operated at this level, and I've been doing this for 25 years. You discover things at scale that don't manifest themselves in the nonscale arena. And as much as you plan, as much as you think about it, you discover that you run into problems you didn't anticipate before.

“When we considered five, we looked at the current state of network architecture. We looked at the power consumption. We looked at the cost. And we said, ‘Not a great idea.’”

An initiative run under the ECP umbrella hopes to drive down those


costs and ameliorate the technical challenges. The PathForward program provides $258m in funding, shared between AMD, Cray, HPE, IBM, Intel and Nvidia, which must also provide additional capital amounting to at least 40 percent of the total project cost.

“PathForward projects are designed to accelerate technologies to, for example, reduce power envelopes and make sure those exascale systems will be cost-competitive and power-competitive,” Diachin said. For example, IBM’s PathForward projects span ten separate work packages, with a total of 78 milestones, across hardware and software.

But for Turek, it is the software that is the real concern. “I wouldn't worry about the hardware, the node, pretty much at all. I think it's really communications, it's file systems, it's operating systems that are the big issues going forward,” he said.

“A lot of people think of building these supercomputers as an exercise in hardware, but that’s absolutely trivial. Everything else is actually trying to make the software run. And there are subtly complex problems that people don't think about. For example, if you're a systems administrator, how do you manage 10,000 nodes? Do you have a screen that shows 10,000 nodes and what's going on? That's a little hard to deal with. Do you have layers of screens? How do you get alerts? How do you manage the operational behavior of the system when there's so much information out there about the component pieces?”

Each of the upcoming exascale systems in the US is budgeted under the CORAL-2 acquisition process, which is open to the same six US vendors involved with PathForward. “Those are not part of the ECP itself, but rather a separate procurement process,” Diachin explained. “The ECP is a large initiative that is really being designed to make sure that those exascale platforms are going to be ready to go on day one, with respect to running real science applications.

“Our primary focus is on that software stack. It's really about the application, it's about the infrastructure, and the middleware, the runtime systems, the libraries.”

For a future exascale system, IBM is likely to turn to the same partner it has employed for Summit and Sierra - GPU maker Nvidia. “You can already see us active in the pre-exascale era,” the company’s VP of accelerated computing and head of data centers, Ian Buck, told DCD.

“Traditional Moore's Law has ended, we cannot build our exascale systems, or our supercomputers of the future, with traditional commodity CPU-based solutions,” he said. “We've got to go towards a future of accelerated computing, and that's not just about having an accelerator that has a lot of flops, but looking at all the different tools that accelerated computing brings to the exascale.

“I think the ingredients are now clearly there on the table. We don't need to scale to millions of nodes to achieve exascale. That was an early concern - is there such a thing as an interconnect that can actually deliver or run a single application across millions of nodes?”

Accelerated computing - a broad term for tasks that use non-CPU processors, that Nvidia has successfully rebranded to primarily mean its own GPUs - has cut the number of nodes required to reach exascale. “The reality is: with accelerating computing, we can make strong nodes, nodes that have four or eight, or sometimes even 16 GPUs in a single OS image, and that allows for technologies like InfiniBand to connect a modest number of nodes together to achieve amazing exascale-class performance,” Buck said.

THE FIRST LIGHT

IBM and Nvidia are together, and separately, vying for several exascale projects. But if everything goes to plan, Intel and Cray will be delivering the first US exascale system - thanks to the past not going to plan.

Previous US plans called for Intel and Cray to build Aurora, a 180 petaflop pre-exascale system, at Argonne National Laboratory. It was supposed to be based on Intel's third-generation ‘Knights Hill’ Xeon Phi processors, and use Cray’s ‘Shasta’ supercomputing architecture. Instead, by the summer of 2018, Knights Hill was discontinued.

“As we were looking at our investments and we continued to invest into accelerators in the Xeon processor, we found that we were able to continue to win with Xeon in HPC,” Jennifer Huffstetler, VP and GM of data center product management at Intel Data Center Group, told DCD by way of explaining why Phi was canceled.

That decision on how to proceed without Phi came as China was announcing a plan to build an exascale system by 2020 - way ahead of the US’ then-goal of 2023. Hoping to remain competitive, the DOE brought Aurora forward to 2021, with the aim of being the country’s first exascale system.

We still don’t know which processor Intel will use, with the company’s Data Center Group GM Trish Damkroger promising “a new platform and new microarchitecture



specifically designed for exascale.” The details are still shrouded in mystery and, with the initial Aurora system canceled, there will not be a pre-exascale system to trial the technology. At the same time, the new Aurora will likely still rely on Cray’s Shasta platform. Coming first to the Perlmutter machine at the National Energy Research Scientific Computing Center (NERSC) in 2020, “the Shasta architecture is designed to be exascale capable,” Cray CTO Steve Scott told DCD. "Its design is motivated by increasingly heterogeneous data-centric workloads." With Shasta, designers can mix processor architectures, including traditional x86 options, Arm-based chips, GPUs and FPGAs. "And I would expect within a year or two we'll have support for at least one of the emerging crop of deep learning accelerators," Scott added. "We really do anticipate a Cambrian explosion of processor architectures because of the slowing silicon technology, meaning that people are turning to architectural specialization to get performance. They're diversifying due to the fact that the underlying CMOS silicon technology isn't advancing at the same rate it once did. And so we're seeing different processors that are optimized for different sorts of workloads." Katie Antypas, division deputy and data department head at NERSC, told DCD: “I think a lot of the exascale systems that are

coming online will look to Perlmutter for the kind of technologies that are in our system.”

$205m - the cost to build Summit
$1.8bn - the expected cost of the three CORAL-2 exascale systems

Perlmutter, which features AMD Epyc CPUs, Nvidia GPUs and Cray's new interconnect, Slingshot, “is also the first large-scale HPC that will have an all-flash file system, which is really important for our workload,” Antypas said. “For our workloads that are doing a lot of data reads, this flash file system will provide a lot of speed up.”

AN ENTERPRISING APPROACH

Another contender is HPE, whose PathForward efforts are co-led by Vildibill. “The work is really split between five different general areas, and across those five areas we have 20 teams that are actually doing the R&D. It's quite extensive, ranging from optics, to a Gen-Z based system design, to manufacturing of silicon, to software and storage.”

Gen-Z came out of HPE’s much-hyped research project called ‘The Machine,’ Vildibill explained. “It's a chip-to-chip protocol that is a fundamental building block for almost everything that we're working on here. We've since put the Gen-Z technology into an open industry collaboration framework,” he added, name-checking partners like Google, Cray and AMD.

HPE realized that “data movement consumes, and will consume, over an order of magnitude more energy to move the data than it does to compute the data,” Vildibill said. “The power consumption of just moving the data was exceeding the entire power envelope that we need to get to exascale computing.” "The toughest of the power problems is not on circuits for computation, but on communication," a mammoth study detailing the challenges on the path to exascale, led by DARPA, warned back in 2008. “There is a real need for the development of architectures and matching programming systems solutions that focus on reducing communications, and thus communication power.” Vildibill concurred: “A shift in paradigm is needed: we've got to figure out a system where we don't move the data around quite so much before we compute it.” Gen-Z hopes to bring about this shift, with the open standard enabling high bandwidth, low latency connections between CPU cores and system memory. “It’s a technology that is very vital to our exascale direction,” Vildibill said. ARMING UP Outside of PathForward, there is another approach HPE might take for future exascale systems - Arm chips. The company is behind the world’s largest Arm-based supercomputer, Astra, located at Sandia National Laboratories, with 2,592 dual-processor servers featuring 145,000 Cavium ThunderX2 cores. Vildibill, whose team was responsible for Astra and the Apollo 70 servers within it, was quick to point out the project is independent of his PathForward work. But he added: "We're interested in both because we have some fundamental challenges and barriers to getting to exascale. "What Arm offers us is a new approach to what was on our roadmaps a year or two ago. The interesting thing about Arm is that it has a development cycle that spins very quickly. Traditional CPU designs have to satisfy the commercial market, the enterprise market, the laptop market, the exascale market, etc. 
So it's very difficult, or very disruptive, to make changes to those CPUs that apply to all markets.”

With Arm, he said, “it is much easier for somebody to develop a very capable CPU that might be targeted at a narrow niche, and therefore can innovate more quickly and not be worried too much about disrupting all of their businesses.”

Arm may end up in one of the DOE’s future exascale systems, as “one of the requirements of our exascale program is that the systems must be diverse,” the ECP’s Diachin told DCD. She added that the DOE has also taken pains to ensure that software and exascale applications are designed to work across


platforms, “because we're seeing such a wide variety of architectures now.”

But while Arm’s future in exascale projects in the US is not clear, the architecture will definitely make a big splash in Japan. Fujitsu and Japanese research institute Riken aim to develop the nation's very first exascale system, currently known as Post-K, by 2021.

Powering Post-K is A64FX, the first CPU to adopt the Scalable Vector Extension (SVE), a development of the Armv8-A instruction set architecture made with HPC in mind. SVE enables vector lengths that scale from 128 to 2048 bits in 128-bit increments. Rather than specifying a specific vector length, CPU designers can choose the most appropriate vector length for their application and market - allowing for A64FX to be designed with a focus on exascale computing.

“That was key. If we do things correctly, the ISA and the architecture gives you an envelope in which you can do a lot of diversification, but without hurting the software ecosystem,” Arm’s Van Hensbergen explained.

The adoption of SVE, which has enabled Japan and Fujitsu to target such an ambitious timeframe, was a lucky occurrence. “We were fortunate with SVE in that the origins of that was something very different, and it kind of lined up very nicely; just as we were about to put it on the shelf, Fujitsu and Cray came along and said ‘it would be nice if you had this’ and we were like ‘ah, I was just about to put this away,’ and then we retailored it and



re-tuned it. Otherwise, it may have taken us longer to get it to market, honestly.”

Arm may also find its way into European exascale systems, with the EU still deciding on the specifics of its approach. “There is no high performance chip design in Europe, not for general purpose processors anyway,” Van Hensbergen said. “It eroded steadily over the years, with [European organizations] buying from other countries, and the local industry disappeared. But now they're spending hundreds of millions of euros to try to re-bootstrap that industry, because they don't want to be at a disadvantage if politics does start interfering with supply.”

THE EUROPEAN PROJECT

The details remain murky, but the level of ambition is clear. Juan Pelegrin, Head of Sector Exascale Computing at the European Commission, said: “HPC is critical, and Europe has to be sovereign, we need to be able to rely on ourselves and our own technology.

“We’re going to buy two pre-exascale, plus two or three petascale systems by 2020, two exascale by 2022/23, one of which will primarily rely on European technology. And then we are looking at post-exascale infrastructure by 2027 - given the advancements in the quantum computing field, we hope that the quantum components will fit into the HPC infrastructure.”

The project has secured around €500m (US$569m) from the EU to invest in 2019-2020, with participating governments set to match that level, along with private businesses providing in-kind contributions to the tune of €422m ($480m). EuroHPC, the joint undertaking that is meant to run all the European supercomputing efforts, has also made a bid to get a further €2.7bn ($3bn) from the EU Digital Europe program for HPC investment across exascale, quantum computing, and competing architectures like neuromorphic computing.

Pelegrin added: “There’s also the Horizon Europe program. We’ve made a bid for €100bn, but don’t get excited - it’s not all for HPC. That will be for new




research only: algorithms, etc.”

For the exascale system made with European technology, the EU has kicked off a huge project within the wider exascale program - the European Processor Initiative. “If we look back 60 years ago, what they did in the US with the Apollo program, it sounded crazy at the time, and then they made it,” Philippe Notton, the general manager of the EPI and head of processors at Atos, said. “Now I am glad and proud to introduce the next moonshot in Europe, which is EPI - different kind of budget, same kind of mission. Crazy, but we're going to do it.”

The plan? To build a European microprocessor using RISC-V architecture, embedded FPGA and, perhaps, Arm (“pending the final deal”) components, ready for a pre-exascale system in 2021, and an upgraded version for the exascale supercomputer in 2022/23. Featuring 23 partners across academia and business, the group also hopes that the low-power chip will be used in the automotive market, presumably for autonomous vehicles - with BMW, one of those partners, expecting to release a test car sporting a variant of the chip in a few years.

“We are also working on PCIe cards, server blades, HPC blades, to hit the target of 2021 - you will have lots of things from EPI,” Notton said, adding “it’s thanks to this,” as he pointed to the bags under his eyes.

Another exciting approach to exascale can be found in China. “They've invested a tremendous amount in building their

ecosystem,” Van Hensbergen said. “In some ways they are the poster child for how to catalyze an advantage by just pouring money into it. I think that everyone knows that they're being very aggressive about it, and everyone is trying to react to it, but it's difficult to keep up with the level of investment that they're putting in.”

A NATIONAL MOVEMENT

Professor Qian Depei, chief scientist in the national R&D project on high performance computing in China, explained: “It was quite unusual [for China] to continually support key projects in one area, but this one has been funded for 15 years. That reflects the importance of the high performance program.”

The result was the ‘National Grid,’ which features some 200 petaflops in shared computing power across the country, with roughly 19,000 users. Yet the country wants more. Ultimately, “the goal is to build exascale computers with self-controllable technology, that's a kind of lesson we learned in the past. We just cannot be completely bound to external technology when we build our own system.”

This lesson, hammered home in the ongoing US-China trade war, was first experienced years ago. In 2015, the US Department of Commerce banned US companies from selling Xeon and Xeon Phi CPUs to China's leading national laboratories, claiming the chips were being used to build systems that simulated “nuclear explosive activities,” presenting a national security threat.

The ban backfired - realizing its vulnerability in foreign markets, China pumped money into domestic computing. “I would say it helps the Chinese when the US imposes an embargo on technology, it just forced more funding to go into developing that technology. It accelerated the roadmap,” professor Jack Dongarra, who curates the bi-annual Top500 list of the world’s fastest computers, told DCD. “They have made considerable progress since the embargo was imposed. They have three exascale machines in planning, and I would expect them to deliver something in the 2020-21 timeframe, based on their own technology.”

Before it develops the exascale systems, China is following the same approach taken by the US and the EU in developing several prototypes. “The three prototypes are Sugon, Tianhe-3, and Sunway,” Qian said. “We hope that they show different approaches towards a future exascale system.”

Sugon adopts “a relatively traditional accelerated architecture, using x86 processors and accelerators, with the purpose of maintaining legacy software assets.” It uses processors from Chinese chip-maker Hygon, which has a complicated licensing agreement allowing it to make chips based on AMD’s Zen microarchitecture. “Sugon uses low-temperature evaporative cooling, and has no need for fans,” Qian said. “It has a PUE below 1.1, and as the system will be cooler, it increases reliability and performance.”

Next up is Tianhe-3, which is “based on a new manycore architecture. Each computing node includes three Matrix-2000+ processors, it uses a butterfly network interconnect, where the maximum number of hops in a communication path is four for the whole system,” Qian explained. “They are working on the next generation interconnect to achieve more than 400Gbps bandwidth.”

Then there’s Sunway, another manycore-based machine. “Currently the system is still implemented using the old processor, the ShenWei 26010. The number of nodes is 512, and the peak performance is three petaflops,” but it will soon be upgraded with a newer processor.

In developing the prototypes, China has identified several important bottlenecks, Qian said. “For example, the processor, including manycore processor and accelerator, the interconnect, the memory - 3D memory is a big issue, we don't have that capability in China - and software, that will probably take a longer time to improve, and is a big bottleneck.”

Considering its attempts to design homegrown processors, “the ecosystem has become a very crucial issue” in China, Qian said. “We need the language, the compilers, the OS, the runtime to support the new processors, and also we need some binary dynamic translation to execute commercial software on our new system. We need the
tools to improve the performance and energy efficiency, and we also need the application development support. This is a very long-term job. We need the cooperation of the industry, academia and also the end-users.” With a rough target of 2020, China may beat the US to the exascale goal, but Qian admitted the first machines will likely be “modest compared with US exascale computers.” Once each nation has built its exascale systems, the demand will come again: More. This is a race, but there is no end in sight. “For us, the big question is what is next?” Marcin Ostasz, of the European industry-led think tank ETP4HPC, said. “We have a lot of projects in place, we know there is a lot of funding through EuroHPC.” “We invited the gurus, the brains of the world, into a workshop in June, and we asked the question: 'Where is the world going?' And we got a lot of good answers that will help us find this vision of the post-exascale systems.” Like the systems before it, that vision will likely be driven by the needs of a society desperate to find solutions to complicated problems, Ostasz said. "We need to look at the gas emissions, at the human burden of dementia; there are challenges in health, in food, etc. These are the applications we will need to address."

What about me? Supercomputers are not a world unto themselves - the innovations gained in the race to exascale will directly impact the wider data center market. “If you look at the history of supercomputing, you see a direct trickle down of so many of the technologies,” Arm’s Eric Van Hensbergen told DCD. “Most of the technologies that the cloud is built on today came out of HPC - it's not the exact same thing, but all of that scaling research that was done for HPC ends up applying there. There are direct links.” Times have changed somewhat, however. Van Hensbergen noted: “Now it's circular, now that community has gone off on its own innovation stretch, and HPC is folding some of those technologies back into supercomputers.”

Issue 31 • February 2019 21


Data Center M&E Cyber Security Online Course
Security breaches happen more and more often, and at a larger scale - do not be the next one!

Mission Critical Training

Buy the 2 hour online course today!

Take the Mission Critical Awareness Certificate Online
1. Mission Critical Engineering
2. Reliability & Resiliency
3. Electrical Systems Maintenance
4. Fundamentals of Power Quality

Part of the 16-Module Foundations of Mission Critical Infrastructure, this is one of our most flexible online training solutions and is based on the highly acclaimed book "Maintaining Mission Critical Systems in a 24/7 Environment."

Gain Certified Data Center Specialist DCS Status
DCS is the highest level of status an individual can achieve on our career progression framework. It demonstrates a significant commitment of time and resource to professional development, and inspires confidence in customers and employers.

Gain Certified Data Center Technician DCT® Status

This new credential track has been launched to help technical staff involved in the day-to-day operational activities of the data center understand the major requirements for operational excellence.


Gain Certified Data Center Practitioner DCP® Status

Data Center Design Awareness (3-day classroom)
DCDA is the industry's most successful foundation course, already completed by over 8,000 students worldwide.

Get staff compliant with our online HSE learning module
Get a certificate and 1 PDH credit for FREE

A complete solution for the data center industry www.dcpro.training

Enrol now for the 2019 Curriculum

Our Corporate Solutions can help you develop your entire workforce

We work closely with you to develop a customized training solution that caters to the ambitions and career roadmap of your employees, in line with your corporate and business development strategy, with cost effective per-person training costs.

Our skills assessment tool helps define their training needs.

"Working with DCPro we've been able to roll out basic training for a large part of our workforce and we can track the progress of individuals and teams. It's providing us with plenty of actionable data."
- Paul Saville-King, CBRE Data Center Solutions

Energy PRO Course (Practitioner)
The course focuses on energy consumption and energy efficiency as well as operational and design strategies, and will help increase cost-effectiveness across all capital investments.

Cooling PRO Course (Specialist)
This specialist level course focuses on key concepts, practices and optimization of cooling within data centers and, through its practical approach, it ensures a forward-thinking mentality in terms of cooling technologies.

Power PRO Course (Specialist)
The Power PRO course delivers a comprehensive understanding of how power requirements impact the design and operation of data centers, as well as the key challenges related to power within them.

Contact us for a FREE skills gap analysis

Used by major operators

Book Now!

17th Annual

Limited free passes

>New York 9-10 April 2019 Times Square Marriott Marquis

North America's largest enterprise data center event
Connecting enterprises with cloud-infrastructure technology in the world's largest data center market

Join the discussion #DCDNYC

For more information visit: bit.ly/DCDNewYork

Headline Sponsor

Knowledge Partners

Lead Sponsors

Global Content Partner

CEO Interview | Rami Rahim, Juniper Networks

You can’t commoditize networking software

Max Smolaks News Editor

Rami Rahim, CEO of Juniper Networks, tells Max Smolaks he wants to sell licenses, not boxes


"The differentiation is not in the box anymore. Differentiation is around offering solutions that enable our customers to transform, to obtain cost efficiency and agility benefits of the cloud," said Rami Rahim, CEO of Juniper Networks, in an interview with DCD.

Having obtained a Master's degree in electrical engineering from Stanford, Rahim joined Juniper Networks in 1997 as a specialist in ASIC design, responsible for the chips that powered the first products from the company that has become Cisco's chief rival in infrastructure networking. Seventeen years later, he assumed the CEO's seat, at a time when the networking landscape was going through considerable changes.

In the nineties, the telecommunications market was dominated by a handful of equipment vendors, and expensive, proprietary technology was baked into silicon. It was in this environment that Juniper first challenged Cisco, and managed to carve out a respectable market share. More recently, breakthroughs in chip design and manufacturing have made it possible to deliver advanced networking functionality via generic switches and routers manufactured in Asia at a very low cost, which meant innovation slowly moved into software.

It's not a secret that Juniper's previous CEO, Shaygan Kheradpir, was brought in to cut costs and improve the balance sheet
in order to please activist investors. He succeeded in this mission, so Rahim could focus on one thing - making the most of software-defined networking (SDN). "There is a mindset shift in the industry, where the perception of value is now transitioning from being purely embedded in hardware, to the software domain. And that, in my view, is absolutely positive, because it's in our best interest to monetize our differentiation, to monetize our R&D, commensurate with the way we invest," he said. Under Rahim, Juniper virtualized its popular MX series routers and separated the JunOS network operating system – which saw initial release in 1998 – from the underlying hardware. "We were the first established networking vendor to offer a completely disaggregated operating system model on third-party white box switches, and we've learned a huge amount by going first and deploying with our customers,” he explained. "They want great levels of flexibility: they want to be able to choose the hardware and software capabilities that they need for their specific use cases. They want more network visibility, greater control, more programmability. And they want to have a more granular business model; in other words, they want to pay for what they actually use." In recent years, Juniper has also increased its participation in various open source
initiatives. Some of its switches were made compatible with software from the Open Compute Project (OCP), and the company released the code to its SDN 'meta-controller' Contrail as open source; initially known as OpenContrail, the project has been rebranded as Tungsten Fabric and now sits at the heart of The Linux Foundation's new networking organization. In 2018, Juniper also joined the Open Networking Foundation, which counts some of the world's largest telecommunications providers among its members.

To Rahim, this is a natural extension of the company's strategy: "As a challenger in this industry, we have always viewed openness as fundamental to our success. As a challenger, openness is your friend."

Juniper was, at one point, famous for paying software engineers some of the highest average salaries in Silicon Valley. Today, the company continues this tradition, spending more than 80 percent of its R&D budget on software. And the focus on software has pushed it further into the cloud market, where hardware has been commoditized, and software reigns supreme.

"Cloud has very specific meaning for each of our different customer segments - it is still the architectural and service delivery approach that is fueling the growth and the business momentum of the big cloud providers," he said.

"Cloud is also the underlying architecture for all future telco delivery models. 5G, in and of itself, is not going to justify the investment that's required to enable it. It's going to require new services, and so 5G is going to usher in new services that I think are going to make telcos more successful than they have been in the past. And those services are all going to be cloud-native.

"And last but not least, on the enterprise side, cloud is the thing that's top of mind for each and every one of the CIOs that I


talk to - it's all about moving workloads and applications to a multi-cloud environment, and reaping the cost and agility benefits of this approach. Cloud is a theme: it has been a theme for the last couple of years. It will continue to be a theme in 2019, and a few years to come."

Perhaps the most serious change to affect Juniper in the past few years is the change in its customer base: the company originally focused on the telecommunications market, then the enterprise sector. Today, it counts hyperscale data center operators among its most valuable clients.

These organizations are running some of the largest server farms in the world, but their very size presents an inherent risk - if the relationship goes well, the supplier could shift hundreds of thousands of products at the stroke of a pen. But if it then goes sour, the supplier will see their revenues decimated in an instant.

"We understand the hyperscale market very well - depending on the quarter, roughly around a quarter of our revenue comes from cloud providers; that includes hyperscalers, but also many smaller cloud and SaaS providers throughout the world," Rahim said. "We understand what hyperscale customers require through practice, through years of engagement with them on a very technical level. We understand what they are looking for in terms of performance, reliability, flexibility, visibility and telemetry. All of those lessons have fed into the technology that we have developed, and the roadmap that we will be introducing to the market that really caters to the hyperscale space."

The situation with hyperscalers is made worse by the fact that these companies have enough resources to develop their own networking kit - examples include Facebook's switches, like the Wedge and the Backpack, and LinkedIn's Pigeon. However, Rahim is not worried that his current customers will dump Juniper kit (or software) to adopt their own, in-house creations.
"The folks that work for hyperscale companies, the network operators, the engineers, are extremely talented. In some sense, you are competing with them - you need to demonstrate to them that the capabilities that you are introducing to the market are ahead not just of your peers, but - for specific use cases - ahead of what they themselves can develop and implement," he
explained. "I don't think they want to get into the business of developing networking infrastructure, networking equipment, just because it's fun to do so - they will only do it if they believe they can't get the technology that they need elsewhere. "As long as we apply the right sense of urgency and speed to the kinds of technologies that they care about, we will continue to have a very big role to play in building out the large hyperscale networks around the world." And finally, our conversation turned towards China. Incumbent networking vendors in the US and Europe are currently fighting off a two-pronged attack. On one side, they are squeezed by the white box, ‘no name’ manufacturers that are fully
invested in the meaning of commodity and compete not on features, but on price. On the other side, firms like Huawei and ZTE are constantly improving their game, and have started offering levels of aftermarket service that match those of their Western counterparts.

"The technology coming out of China has been increasing in capability, increasing in sophistication, for a number of years now - I don't think that there's anything new that happened recently that causes us to be more or less concerned than we have been in the past," Rahim told DCD.

"I abide by the notion that 'only the paranoid survive.' I tend to take all of my competitors worldwide very seriously - this is, at the end of the day, a very competitive industry. But ultimately, what I focus more on are my customer requirements, and how we get to solving for those requirements with truly differentiated technology.

"It comes back down to the fact that I think the differentiation is not in the box anymore."

>Modernization | Supplement

Modernized by



The hidden data center sector

The birthplace of the Internet

Data center surgery

Inside Intel: From fab to data center

> Why build new, when you can upgrade what you already have?

> First came AOL, then Infomart; now it's time for Stack Infrastructure

> Changing a live facility without going down isn't easy

> They used it to build chips. Now it's simulating them


EcoStruxure IT delivers

into your data center architecture.

Looking for a better way to manage your data centers in the cloud and at the edge? EcoStruxure™ IT — the world’s first cloud-based DCIM — delivers visibility and actionable insights, anywhere, any time. ecostruxureit.com

©2019 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies. 998_20464516_GMA-US

A Special Supplement to DCD February 2019 Modernized by


Giving your facilities a new lease of life


Features
30-31 The hidden data center sector

Some people want shiny new things. Others make a point of sweating their assets and keeping equipment in use until it has more than paid for itself. Neither group is right.

When a facility is no longer capable of maintaining its peak performance, a full replacement can be hard to justify, but there will still be plenty of things that can be done to improve the operation of the building and its contents, without breaking the bank. This supplement is about the times those choices are made.

32-33 From AOL to Stack Infrastructure
34-35 Advertorial: modernize or outsource?
36-37 Data center surgery
38-40 Inside Intel


Upgrades and modernization are a hidden data center sector, obscured by the exponential growth in new facilities (p30). And there can also be very good reasons why an older data center should be simply closed down and replaced, when its infrastructure ages. Today's data centers are so different from those built 20 years ago, that it may be prohibitively expensive to completely remake an older building. But the pay-back time for a new build is longer than an upgrade (especially if you include
low-investment fixes like airflow improvements). And some buildings are in such prime locations that there's no choice but to refit.

Stack Infrastructure is presiding over a rebuild of a facility once owned by AOL in the early days of the commercial Internet (p32), and a couple of New York skyscrapers house data centers that have been upgraded multiple times. Some industrial buildings have been brought into the data center fold, in projects which re-envisage them as efficient installations for digital infrastructure. For example, Intel took a chip fabrication plant in Santa Clara and turned it into a facility it calls "the world's best data center" (p38). Elsewhere in this magazine, you can read how Lamda Hellix turned a redundant cable landing station in Athens into a data center (p44).

But it's not just about improving the value and performance of a data center. There's another, higher priority, which raises the stakes even higher: don't kill the services you are providing (p36).

If you've got a modernization project, we wish you all the best. Tell us how it went!

36 DCD>Debates Modernizing your legacy data center - what's new?


Watch On Demand

As soon as you build or acquire a new data center, it starts to become outdated, and with each year that passes, operators are faced with several conundrums - when to upgrade, what to upgrade, and how radically one should upgrade. bit.ly/ModernizingDataCenter


Modernization: the hidden data center sector
New data center projects get all the attention, but there are also a lot of facilities being upgraded and modernized, reports Peter Judge

Peter Judge
Global Editor

Data centers are expanding at a mind-boggling rate. The global colocation market is expected to grow to $50.18 billion by the end of 2020, and regional data center hubs are adding capacity at record rates. In the US, around 400MW is being built per year, while Europe has been adding around 175MW of new data center capacity per year for some time, just in its four largest markets.

These figures come from property specialists like CBRE and Jones Lang LaSalle (JLL). Speaking at DCD's London event, Andrew Jay, chief executive of CBRE, said the number of new buildings is increasing, as cloud players buy bigger chunks of capacity: "The big hyperscale guys are talking 500MW in 24 months [in Europe]. That's a paradigm shift."

This is alarming: feeding this demand can seem like an impossible task. But it may also be blinding us to a giant sub-sector of the market: namely, modernization. While the industry is concentrating on building new facilities, what happens when the sites we already have need to be upgraded?

IT equipment in a data center will need replacing, perhaps every three to five years. The mechanical and electrical parts of the facility have a longer lifetime, perhaps ten to twenty years.

It’s far more effective to upgrade a facility when it only exists on paper


But during this time, they will also need maintenance, and eventually, replacement and upgrading. And these jobs will impact the whole data center.

No one seems to have clear figures on the size of the upgrade and modernization market, perhaps because it is hard to define. Some routine work carried out during the lifetime of the data center can fall into the category of upgrades: projects such as checking and improving the airflow by using blanking plates, brush strips and other components.

Some "upgrades" can actually take place before the data center is even built. "It's far more effective to upgrade a facility when it only exists on paper," one professional told me. This is much more likely to happen to enterprise facilities, where the project may have a long timescale, and the company's requirements may change during the planning process, for instance, because of a merger or a change in business model.

Some upgrades are more substantial, either altering the containment plan in the data center, or adding capacity by changing or upgrading the cooling and power infrastructure.

Modern IT is likely to have greater power density in the racks, and operators are beginning to run their kit at higher temperatures to reduce their cooling bills. This means that an upgrade may require different power distribution - and the addition of bigger power feeds - as well as a change in cooling systems.

Modern upgrades can go as far as adding a modular "pod" unit inside the building, or placing a new modular building or containerized data center alongside it, in the car park. As these modular approaches often bring their own containment and their own power and cooling systems, they may actually have less impact on the existing data center.

Other types of upgrades take place inside the existing shell, but have a profound effect on the resulting facility. These include virtually ripping out and replacing the data center, effectively creating a brand new space within the current building.

One common factor with most modernization projects is that the business service is live already, and must be kept online. Data centers are live 24 hours a day, often delivering services which are financially important. This means that it can become necessary to upgrade the site without interrupting its operation.

For a large upgrade, it is likely to be necessary to move the IT load out completely. At this stage, you really are effectively doing a new build. And if you find somewhere good to place your IT on a temporary basis, why not consider keeping it there?

Upgrading a live site is fraught with difficulties. There are risks to human life here, if active power systems are being worked on. Even if it is Tier III certified and has two concurrently maintainable power and cooling paths, a planned upgrade will involve powering these down and replacing them one at a time. If nothing else, an upgrade can be a good test of the reliability of the facility.

Malcolm Howe leads the critical systems team at engineering firm Cundall. He spoke from experience for a DCD feature on data center life cycles: "Any interruption to power and cooling systems creates risk. You can work on one path, while the facility hangs off the other. But you need to choreograph it so you don't drop the load."

The site may need temporary power and cooling during an upgrade, which increases the risk associated with the procedure.

Data centers in prime locations may be more likely to have upgrades, instead of being shut down at the end of the lifecycle of the original build. This is particularly true of data centers in large cities (see box).

One more thing is certain about data center modernization: by definition, it will change. The sites that are being upgraded today had their technology installed ten or more years ago, and will be brought up to speed with current best practices.

The facilities built today should be much more upgradeable in the future. Aisles that ensure good containment can be upgraded more easily, and the whole facility is likely to have been constructed in a modular way. Pod designs for rack hardware will be replaceable, and placing mechanical and electrical equipment on a skid is intended to simplify configuration and delivery, but it should also make it easier to upgrade or replace.

It's good to know that the process of upgrading is itself being modernized.


Upgrades in New York

Intergate Manhattan, at 375 Pearl Street, wears its history of upgrades on its sleeve. The 32-story building overshadowing Brooklyn Bridge was first owned by New York Telephone, then Bell Atlantic and Verizon. Sabey Data Centers bought it in 2011, but it's still frequently referred to as the "Verizon building," because Verizon has three floors in the facility and pays naming rights to keep its signage at the top of the tower.

Colocation space in New York is in short supply, so it's been worth Sabey's while to invest in modernizing the facility. The company originally planned to develop 1.1 million square feet of data center space, but Sabey vice president Dan Melzer told DCD: "the market just wasn't ready for that." Instead, the company fitted out 450,000 square feet of technical space. The upper floors of the building were repurposed as office space - but for half the tower, this was not possible, as it was built to house telephony equipment and has little natural light. The lower floors house the chillers and generators, including an 18MW substation to support the company's turnkey customers.

Re-purposing space has been made easier thanks to the way the building was designed. It has a massive freight lift to carry heavy equipment up and down the tower, and an underground tank holding enough fuel to run generators for 72 hours in an emergency.

Nearby, 60 Hudson Street was built to last in the early days of communications. It started out as the headquarters of Western Union, which dominated the telegraph industry at the time. It has communication lines from AT&T's building, and floors strengthened to support heavy equipment, along with vintage pneumatic communications tubes. "The building sort of recycled itself for today's age," said Dan Cohen, senior sales engineer for the current resident, Digital Realty. The floors hold heavy racks with ease, and "a lot of those pneumatic tubes are now being used for fiber."
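The 72-hour fuel figure in the sidebar is ultimately just arithmetic on load and burn rate. A minimal sketch of that sizing calculation, assuming a rule-of-thumb burn rate of roughly 7 gallons per hour per 100kW of generator load and a hypothetical 5MW load (only the 72-hour target comes from the text):

```python
# Sizing an emergency diesel tank for a runtime target.
# Assumption: a diesel generator burns roughly 7 gallons/hour per
# 100 kW of load served - a common rule of thumb, not a figure
# from the article.

GALLONS_PER_HOUR_PER_100KW = 7.0

def tank_gallons(load_kw: float, runtime_hours: float) -> float:
    """Fuel needed to carry a given load for a given runtime."""
    burn_gph = load_kw / 100 * GALLONS_PER_HOUR_PER_100KW
    return burn_gph * runtime_hours

# A hypothetical 5MW generator load held for the 72 hours the sidebar cites:
print(round(tank_gallons(5_000, 72)))  # 25200
```

In practice, burn rate varies with load fraction and engine model, so operators size tanks from the manufacturer's fuel-consumption curve rather than a flat rule of thumb.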


Upgrading the birthplace of the commercial Internet
Twenty years ago, the public Internet took off in AOL's Dulles Technology Center. Peter Judge sees the building coming back into play, ready for the 21st century

Peter Judge
Global Editor

Data Center Alley, a few square miles of Loudoun County, Northern Virginia, is the greatest concentration of data center space on the planet. And it's where a new provider is upgrading one of the world's oldest Internet data centers.

Locals claim that 70 percent of the world's Internet traffic passes along the fiber cables that run beneath Loudoun County Parkway and Waxpool Road. There are some 10 million sq ft of data center space here, consuming something like 1GW of electrical power. It's growing fast, and with a stream of trucks hauling steel and prefabricated concrete to giant construction sites for Digital Realty and CloudHQ, it's easy to think that everything in Data Center Alley is new. But it's been an infrastructure hub for more than 20 years, and alongside the new builds, some historic locations are being rebuilt too.

Today, the Equinix campus in Ashburn is the leading Internet exchange. Equinix DC1 was the company's first data center in the late 1990s, and DC2 is reputedly the most connected building on earth. But before Equinix arrived, Virginia had MAE-East, one of the earliest Internet exchanges, and the first data center in Ashburn is believed to be the former Dulles Technology Center, a bunker-like building on Waxpool and Pacific Boulevard, created by AOL in 1997 to serve its growing army of dial-up Internet users.

Now more or less forgotten, AOL was the leading Internet provider of its day. New fiber
arrived to serve it, and the Alley snowballed from there, as other providers and data centers arrived to feed on that fiber. AOL is no longer a power in the land (it’s a Verizon brand), but its data center is still there, under a new owner, on prime fiber-connected land close to Equinix DC2. Stack Infrastructure is selling wholesale data center capacity to hyperscale providers. It launched in early 2019 (see News pages), taking over the facilities previously owned by Infomart, a provider which picked up the 10MW AOL site in 2014. Under Infomart Data Centers, 6MW of capacity had been re-opened in the Dulles facility by 2017. Now the work is continuing under the brand Stack Infrastructure, created by Infomart’s new investors IPI. The result, according to Stack sales vice president Dan Ephraim, is a wholesale colo site offering up to 18MW, playing to the building’s strengths but still meeting current demands. “You are in an absolute unicorn,” Ephraim told DCD during our visit. “This is totally unique.” The building is designed to withstand a jet crashing into its roof, Ephraim told me: “Being the first data center here, these people were hypersensitive to the risk of being on a glide path [i.e. close to Dulles International Airport],” he said. “It’s borderline foolish. But the design was: if a Cessna or a Hawker private jet hits one of our data halls, that hall will collapse, but it won’t affect the other five data halls.”


The building's internal and external walls are built from reinforced "rebar" concrete, and the double-tee roof has six more inches of concrete above it. As well as crashing airplanes, it can also withstand winds of up to 200mph.

With a facility this old, the best approach was to almost completely replace the mechanical and electrical assets, Ephraim explained: "We ripped out everything AOL had from a mechanical perspective." Chillers, pumps and cooling towers were replaced, along with uninterruptible power supplies (UPS), power distribution systems, mechanical cooling, fire prevention, and security systems. This gives major benefits: the facility is a new, flexible data center, inside a super-robust shell.

Standing next to the new cooling towers in the yard outside the building, Ephraim said: "Everyone else in the market, for their utility yard, has a dog-pound fence. We've got a 30ft concrete wall. They just overbuilt."

Inside, there are five data halls, up to 15,000 sq ft each. It could be our imagination, but as we talked, the echo seemed stronger and louder than in other facilities. Could that
be the rebar talking?

The data halls support power densities of up to 600W per sq ft, and the building can deliver a PUE (power usage effectiveness) of 1.2 to 1.25. That's impressive for today's shared colo space, and even more so in a building that was created with no consideration for efficiency. When AOL built in 1997, data centers focused on delivery, not efficiency, and the term PUE wasn't coined until 2006.

There's a NOC control room built with a view into the power systems: "They designed it so you could not sit in this room without seeing at least two generators."

Each hall is set up for 2MW but, thanks to AOL's overbuilding, they can be upgraded quickly to 3MW: "As long as it takes us to order the equipment, that is how long it takes to give the client another MW of compute."

Rebuilding in an older space has created opportunities for creative thinking. AOL built a row of UPS rooms between the mechanical and electrical section and the data halls, but in twenty years, battery technology has moved on. Infomart replaced lead-acid batteries with more compact and intelligent lithium-ion cells, leaving it with whole
battery rooms to repurpose. "We recycled 1,800 tons of lead-acid batteries from this building, and replaced them with lithium batteries," he said. "I have regained 12,000 sq ft of rental space."

It's specialist space, though. The old battery rooms are smaller than today's typical wholesale colocation space, but they are super-resilient: "This is the most hardened and fortified space in the building. The outside wall is concrete, the inside walls are concrete. It's effectively a bunker, inside of a bunker, inside of a bunker."

Who would want such a space? Ephraim doesn't spell it out, but he says the phrase "hardened and fortified" is code in the US colo market for "federal-ready." Governments might use this space, as well as big cloud providers, for activities that require a heightened level of privacy and security.

Despite the emphasis on reliability, the site has not been given an Uptime Institute Tier certificate. In part, that's because the building predates the Tier rating system, and in part because this is a sophisticated market that will look beyond the certificate to the components of the building. The site has N+1 power and UPS infrastructure, and meets many Tier IV requirements, even though it doesn't have two utility feeds (just one from Dominion).

If the power goes, it does have 80,000 gallons of diesel fuel, half of it below ground. "That's enough for five to seven days," Ephraim boasted. "Everyone else in the market guarantees one day. If we lose power at Dominion, we have three generator rooms, and they can pick up a full load." It also has 650,000 gallons of water storage, fed from two wells. "It's a closed loop," Ephraim said. "There's little evaporation. We're effectively 3N on water."

One thing the building doesn't feature is a glossy exterior. Inside, there are all the things a customer would expect, such as office space, storage and lounge areas. But outside, this is the least imposing data center DCD has visited.
Compared to nearby facilities from RagingWire, Digital Realty and Equinix, it’s nearly invisible. “What I like about our building is we are selling on anonymity,” Ephraim said. “It’s industrial; it’s not sexy. 25,000 cars drive through this intersection a day, but not one percent of them know what this building is.”

It’s not just security by obscurity: AOL made sure the exterior can subtly deflect physical attacks as well as crashing airplanes. Bollards with 8ft roots block any vehicle from directly ramming the gates, and a high berm deflects any attacks to vehicle-arresting cables, as well as blocking casual sightseers.

If this building is a unicorn, as Ephraim says, it’s a unicorn that’s had an extreme makeover. It’s an armor-plated, energy-efficient unicorn, wearing a cloak of invisibility.

Issue 31 • February 2019 33

Advertorial: Schneider Electric

Modernize or outsource? There are several ways to modernize a data center, including low-investment fixes, major upgrades and replacement. But which approach should you adopt?


Data centers are changing fast, and it is sometimes said that a facility becomes obsolete on the day it is built and commissioned. In most cases, this is far from the truth. As data center technology has matured, it has become possible to upgrade an existing site - and, more importantly, it is now possible to understand the likely financial implications of any decision to modernize your facilities.

Modernization covers a range of situations. Some facilities may be aging, and need improvements such as more space, more efficient cooling, or more flexible power infrastructure. Others may have reached the end of their useful life. This can mean that maintenance costs are too high, or the systems are antiquated and becoming unreliable.

Modernization includes a range of options, from low-cost fixes that address simple problems, through a formal upgrade, to building an entirely new data center. Beyond

this, some or all of the IT functionality can be offloaded to cloud or colocation providers. And of course, the choice is not either/or, as the way forward will surely involve a combination of these options.

When choosing between approaches to modernization, it is wise to consider the total cost of ownership (TCO) associated with your choice, calculated over 10 years. A 10-year TCO analysis might favor a major data center refit or even a brand new facility. However, the business may be sensitive to cash-flow, and face other strategic considerations such as regulatory requirements, and the life expectancy of the data center and its services.

Easy fix modernization
In some cases, the CFO cannot authorize funds for major investment in a data center. The CIO will then have to take a minimum-investment approach, aiming to buy time, perhaps six or 18 months. The data center team will then have to evaluate the existing infrastructure and

come up with ways to improve it on a limited budget. This translates into an approach which could reduce waste, increase capacity, improve efficiency, and make the facility more reliable.

This set of modernization procedures will address the IT load, reducing or consolidating it through virtualization to cut down on the number of servers in the facility, and also by locating and decommissioning unused or underutilized servers. Many servers run at under 10 percent utilization, and are simply wasting resources. It is also possible to manage the energy usage, so that the maximum power drawn by any server is limited.

Another major upgrade avenue is in cooling systems, where it is usually possible to make great improvements to airflow. In an older facility, as much as half of the cold air recirculates back to the cooling systems without reaching the racks. These problems can be fixed with blanking plates, brush strips and aisle containment.

An often-overlooked quick-fix option is to add preventive maintenance to the operation of the data center. Many components become less efficient with age - for example, batteries - while factors such as low refrigerant levels, clogged filters and overheated capacitors can reduce efficiency. Batteries, fans, capacitors and HVAC equipment can be monitored to identify early signs of failure. Much infrastructure equipment is supplied with firmware that can perform remote monitoring. For a recent plant, this


functionality simply needs to be “turned on;” for an older plant, it may need to be installed. This can even be shared with the equipment provider to make the actions automatic.

Easy-fix modernization is a low-risk process, but there is a danger. It can morph into a “band-aid” approach, which ignores major long-term needs. Eventually, a strategy of patching equipment which is becoming outmoded will lead to lower reliability.

Upgrading an existing data center
If a data center is running out of cooling or power capacity, but still has room in the data hall, it may be time for a major upgrade, especially if easy-fix options have been exhausted, or deemed impractical. This approach typically adds one or more rows of racks to a low-density data center.

The simplest approach can be to add a “pod” - a number of racks, bundled with their own dedicated row-based cooling. This can be deployed as a single unit without impacting the rest of the data center. It allows high-density, high-efficiency racks to be deployed within an otherwise low-density facility. Because they are self-contained, pods don’t require changes or upgrades to raised-floor cooling, and don’t need computational fluid dynamics analysis before installation. The pods enable a modern, modular approach to data center architecture. Obviously, the choice of whether to invest in a pod for ten years will depend on how long the requirement for in-house capacity will continue.

If the basic infrastructure for UPS, power distribution and heat rejection is already at capacity, then another option is to add an external module, or a containerized data center. These are pre-integrated, pre-engineered and pre-tested, including their own power and cooling systems, and shipped to the site ready to use. Such units can be delivered on a skid, or else in their own modular building. Once delivered, it is simply a matter of connecting power and chilled water piping. They can be installed in a parking lot or on other level ground. Installing containerized units is quicker and easier than adding new capacity or a new

facility, and more reliable than upgrading existing plant. They can be significantly faster to deploy than other forms of new capacity, and up to 13 percent cheaper as well. If you intend to keep the data center going for three years, this is a better option than outsourcing it. Adding one or more high-density pods can extend the life of a facility.

Building a new data center
When an existing site has no more available power, cooling or space, and there are new IT requirements, it can make sense to build or buy a new data center. It will be more efficient and ensure a longer lifespan than upgrades or quick fixes. The drawback is the high capital investment required, and the risks associated with capacity planning. Of course, building a data center would incur many other costs, including the cost to lay fiber and the cost to migrate hardware and software from one facility to the other.
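The 10-year TCO comparison described above can be sketched in a few lines. All the cost figures here are invented for illustration, not Schneider Electric data; a real analysis would also discount cash flows and model migration costs:

```python
# Hypothetical 10-year total cost of ownership (TCO) comparison for three
# modernization options. Capex is paid up front; opex recurs annually.
# All dollar figures are illustrative assumptions.
YEARS = 10

options = {
    "easy fix":      {"capex": 0.5e6,  "annual_opex": 2.0e6},
    "major upgrade": {"capex": 4.0e6,  "annual_opex": 1.4e6},
    "new build":     {"capex": 12.0e6, "annual_opex": 1.0e6},
}

def tco(capex, annual_opex, years=YEARS):
    """Undiscounted TCO: up-front capital plus recurring operating cost."""
    return capex + annual_opex * years

for name, o in sorted(options.items(), key=lambda kv: tco(**kv[1])):
    print(f"{name:>13}: ${tco(**o) / 1e6:.1f}m over {YEARS} years")
```

Note how the ranking can invert the intuitive choice: with these assumed numbers the major upgrade has the lowest 10-year TCO, yet a cash-flow-sensitive business might still prefer the low-capex easy fix - exactly the tension the article describes.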

For more information, use these Schneider Electric resources:

Build vs Colocation TCO Calculator: bit.ly/SchneiderCalculator

Modernize or Outsource: Evaluating Your Data Center Options (eBook): bit.ly/SchneiderModernization


Data center surgery Looking forward to your improved and upgraded data center? Peter Judge finds you may have to suffer some inconvenience along the way


“Retrofitting a live data center is not all that different from open heart surgery,” says Frank McCann, principal engineer for project management and implementation at Verizon Wireless. Just as the heart surgeon has to fix a problem without killing the patient, he points out: “A successful data center is a living, breathing entity - and we have to keep the customer equipment up and running while improving the overall environment.”

McCann has plenty of facilities to keep healthy. Even though Verizon sold its colocation business to Equinix, and its managed hosting to IBM, it still has a number of data centers running its core applications and services.

Unlike the surgeon, the data center engineer actually does have the option of killing the patient - and resurrecting them in a new body. In some cases shifting all the services into a new facility is possible. But McCann says this is rarely simple: “You have to deal with the ancillary things such as locating energy and telecoms, and relocating your employees,” he explains. “In the South, a new build might make sense as the savings make up for ancillary costs.”

In his home territory of New Jersey, he sees plenty of upgrade work, as urban data centers generally don’t have the option to expand or move: “In the New York metro area, a retrofit makes sense because there isn’t space for a new build.”

The biggest problem with an upgrade is the fact that it may have to be done while the facility is live. “If it’s not concurrently maintainable, then you either switch it off, or work on live equipment,” McCann says. This is not impossible, but it requires care. Engineers can work on different sections of the facility, by moving IT loads. Power is switched away from the racks which are


no longer supporting live services, which can then be worked on. Then the loads are moved back, so the rest of the facility can be updated.

The most important thing is to test this procedure before using it on live facilities, but operators are rightly reluctant to test reliability in this way: “I don’t want to test the airbag in my car,” McCann observes.

An upgrade can expose dangerous differences between theory and practice in an old facility, he warns: “The biggest danger of outages when upgrading a live data center is the lack of documentation. Something you think is not connected may be connected, or is connected in a way which is non-obvious.” For example, a rack may appear to have an A and a B feed, but in reality they both go to the same source - or one is not connected at all: “They may be labelled wrong, or connected incorrectly.”

Like a heart surgeon, a data center modernizer has to be ready for a crisis. When adding a new chiller to a Verizon facility, McCann had difficulties cutting the chilled water pipe. It’s normal practice to freeze the pipe, then cut through it, but this pipe was too big to freeze all the way through. “We had to do a lot of research to find a tool that could cut a hole in a pipe and insert a valve without it leaking,” he says. Even after finding such a tool, the trouble wasn’t over: “There’s a danger of shavings getting into the water or the cut-out getting washed down the line and breaking the cooling system.” The situation arose because this was an old facility. The upgrade had never been planned, and the data center was built without a shut-off or a bypass for its chilled water pipe. “New builds are much easier,” he says.

“One thing that gets overlooked in retrofits to older buildings is the people,” he says. Building a data center, you have the site to yourself, but when working on a live facility, you need to work alongside facilities staff. “How can people park, and get in and out for lunch?” he asks. “How can they get day-to-day work done with all that noise?”

Changes in IT architectures can paradoxically make his job harder, he says (see box) - as virtualization allows workloads to be consolidated, hardware is driven harder and pushed to its limits. “It now requires more cooling, and failover processes need to work better. As things get more software-




defined, there is more reliance on hardware.”

Verizon has a long heritage, which can sometimes be a burden but is currently proving very useful. It has hundreds of telco rooms, with space and power, but it has taken a while to properly turn them into data centers, with features like cold aisle containment. This is an upgrade which fits into the current evolution of infrastructure: these will become edge facilities. Built to hold racks of old telecoms switching equipment, they now have plenty of space as the telco footprint consolidates. “We are not running into space issues,” McCann says. “Now it’s power and cooling.”

“Cooling below this is a waste of money” (ASHRAE)

IT equipment is now rated to run at higher temperatures, which ought to reduce the burden on cooling systems, but the increased density may well negate this benefit. And, in any case, operators are still reluctant to run their equipment hotter, if it is critical to their operations: “Equipment can go hotter and hotter, but if you let it go hotter, it might die sooner,” McCann notes. “It’s a balancing act.”

He agrees that, thanks to modular technologies, future data centers will be easier to upgrade than the first facilities which are now being retrofitted: “We thought this would be all, and we would never outgrow it. We realize that there will be changes in future.”

When modernization meets consolidation
Consolidation begins with a desire to make one’s data center estate more efficient. First, the IT load in individual facilities is made more efficient. Then data centers are combined and closed. But if the facilities themselves need upgrading, this can block consolidation or make it more complex.

When virtualization first began its march across the IT world, servers were utilized at a rate of about 10 percent, and each application had its own server. The first steps were simple: put applications onto virtual machines, and combine those machines on physical servers, increasing utilization. The drive to consolidate has since progressed, as centralized cloud services provide a greater aggregation and concentration of resources, with improved efficiencies.

Large organizations often set goals of not opening any new data centers, presuming that any new capacity will be bought from cloud service providers, rather than created in on-premises or in-house data centers. The US government is a prime example of this: the 2014 Federal Information Technology Acquisition Reform Act (FITARA) mandated closure and consolidation, and is believed to have saved the administration around $1 billion. It’s one of the policies that the Trump administration has left more-or-less unchanged.

The strategy is applied everywhere: according to the Data Centre Alliance, 62 percent of data centers are going through consolidation efforts at any one time. But let’s think about what that means. It involves integrating physical DC locations, as well as optimizing the hardware platforms hosting software. In other words, any consolidation effort clearly implies modernization of the facilities which will remain.

But that modernization effort will be as unique as the data centers that are being consolidated, says John Leppard of Future Facilities: “The process for consolidation is both complex and unique to each organization and its goals.”
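The consolidation arithmetic above is simple to sketch: if each legacy server runs at around 10 percent utilization, packing its workloads onto virtualized hosts at a higher target utilization shrinks the fleet dramatically. The fleet size and target figure below are illustrative assumptions, not from the article:

```python
import math

def hosts_needed(n_servers, current_util, target_util):
    """Physical hosts required after consolidating workloads.

    Assumes workloads pack perfectly (an idealization) and that all
    servers have equal capacity, so aggregate demand is measured in
    'server units'."""
    total_work = n_servers * current_util  # aggregate demand
    return math.ceil(total_work / target_util)

# 1,000 legacy servers at 10% utilization, consolidated onto
# virtualized hosts run at a 60% utilization target:
print(hosts_needed(1000, 0.10, 0.60))  # -> 167
```

Even this idealized model shows why consolidation is so attractive - and why, as the panel notes, the facilities that remain must be modernized to carry much denser, harder-working loads.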

DCD>New York Times Square Marriott Marquis

9-10 April 2019

Ready to retrofit? Untangling the complexity of upgrades whilst being ‘live’ Retrofitting a live data center is not all that different from open heart surgery. We have to keep the customer equipment up and running while improving the overall environment. So many things can go wrong, but utilizing a new location is not always an option. A successful data center is a living, breathing entity. It grows and changes. Be prepared for success and the growing pains you will face. bit.ly/DCDNewYork2019


Inside Intel: From silicon fabrication to energy-efficient data center We all know Intel for its chips, but perhaps we should be paying more attention to its data centers, Sebastian Moss reports

Sebastian Moss Senior Reporter



Shesha Krishnapura is beaming. Of the many data center tours I have been on, Krishnapura’s clearly has the most excited of guides. “Welcome to the world's best data center,” he says with a chuckle.

We’re in a huge, 690,000 square foot (64,100 sq m), five-story building right next to Intel’s global headquarters in Santa Clara, California. “Where you are standing is an old Intel fab called D2. The last product that was made here was the first generation Atom in 2008. It was decommissioned by the end of 2008, so by 2009 it was available - just an empty space.”

Krishnapura, the CTO and senior principal engineer for Intel IT, "made a proposal to use D2 to learn how to build megascale data centers with energy efficiency, cost efficiency, and so on, in mind.”

“At the time, [then-chief administrative officer] Andy Bryant said ‘OK, you guys can have this fab to build, but first go and experiment in the old S11 building.’” There, Krishnapura’s team built a chimney cooling system, along with a traditional CRAC unit, to achieve a power usage effectiveness (PUE)


of 1.18. “It was our first experiment, and when we were successful at getting 30kW per rack with 24-inch racks, then we said ‘ok now let's go and break the barriers.’” The result, built out over several years, is a data center with several unique elements, and an impressively low PUE.

"Today you're going to see around 150,000 servers," Krishnapura says as we head into D2P3, Intel’s largest data center, which spans 30,000 square feet (2,800 sq m), with a power capacity of 31MW. The facility uses close-coupled evaporative cooling that relies on recycled water, to help it to reach an annualized PUE of 1.06, significantly below the “worldwide average of about 1.7,” Krishnapura says.

He explains: “The city, when they process all the wastewater from homes, like sewer water and all the kitchen waste, they typically throw it into the Bay for natural evaporation. But they also sell that water for industrial use, or landscaping or other stuff, at 50 percent lower cost. So we buy that to cool this data center.”

Elsewhere in the old semiconductor fabrication plant are smaller data centers,

including D2P4, which has 5MW of power capacity across 5,000 square feet (465 sq m). Thanks to free air cooling, it, too, has a PUE of 1.06 - “they have exactly the same PUE, but totally different techniques.” The two facilities have the lowest PUE of any of Intel’s data centers.

“We've closed lots of small, inefficient data centers, and are trying to reduce our average PUE across our data centers to near 1.06,” Krishnapura says. Back in 2003, the company operated 152 data centers; by 2012 the number had shrunk to 91. “Now we’re at 56.”

The reduction in data center footprint has come even as the company has faced more and more compute demand - growing at roughly 39 percent a year. To meet this challenge with fewer sites, Intel has relied on its old friend, Moore’s Law (the observation that the number of transistors in a chip doubles every two years, proposed by Intel co-founder Gordon Moore). “In 2002, we had 14,191 servers with single-core, two-socket CPUs, totaling 28,000 cores,” Krishnapura says. “Now we have 240,000 servers, we have 1.7 million cores, more than 260 petabytes of storage and more than half a billion network ports within the data center.”

While Krishnapura talks, I become aware of something unusual about the facility: “You're already sweating, because it's very hot,” Krishnapura observes. When retrofitting D2, Krishnapura read a paper from Google that revealed the search giant operates its facilities at 78°F (25.5°C) in the cold aisle. “We said ‘why limit it at that? What's the maximum we can go to?’” All available IT equipment supported inlet temperatures of up to 95°F (35°C), so the company settled on a cold aisle target of 91°F (32.7°C). “It ranges between around 78-91°F in the cold aisle, depending on the outside temperature. The hot aisle is usually 20-30°F hotter.”

Looking up, Krishnapura says another difference is the height. “Industry-standard full IT racks are 42U, roughly 6ft. We are much taller, our racks are 60U, it's 9ft.” They are also slimmer: instead of the standard 24-inch format, they are trimmed to 20 inches, allowing a few more racks to be crammed in. “In 50 linear feet, where you can put 25 standard racks, we can put 30 of them. And as they're taller, we can put a lot more servers: each rack supports all the way up to 280 servers, and each rack can support up to 43kW peak power load.”

These are used internally for many of the things one would expect from a large enterprise, from running SAP workloads, to hosting webservers, to running Intel's video conferencing tools. They are also used to design the company’s chips. “We use it for all the pathfinding to 7nm, to 5nm, some of the quantum physics algorithms - how the electrons scatter - all of that,” Krishnapura says.
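PUE, the metric behind the 1.06 and 1.7 figures quoted in this article, is simply total facility power divided by the power delivered to IT equipment. A quick sketch (the 10MW IT load is an illustrative figure, not Intel's) shows how much non-IT overhead each ratio implies:

```python
def pue(total_facility_kw: float, it_kw: float) -> float:
    """Power usage effectiveness: total facility power / IT power."""
    return total_facility_kw / it_kw

def overhead_kw(it_kw: float, pue_value: float) -> float:
    """Non-IT power (cooling, distribution, lighting) implied by a PUE."""
    return it_kw * (pue_value - 1.0)

it_load = 10_000  # kW of IT load, an illustrative figure

for p in (1.06, 1.7):
    print(f"PUE {p}: {overhead_kw(it_load, p):,.0f} kW of overhead")
```

At PUE 1.06, a 10MW IT load carries only 600kW of overhead; at the 1.7 worldwide average the same load carries 7MW - which is why Krishnapura's electricity-bill savings scale so directly with the efficiency of the retrofit.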


By using Intel products in the company's facilities at scale, Krishnapura is additionally able to “give feedback back to the data center chip design business - we want to eat our own dogfood and learn from it.” This creates a “feedback loop into the data center business, about how do we innovate and what kind of chips we want to make,” Krishnapura says. “It could be FPGAs from our Altera acquisition, or it could be the discrete graphics which we are working on, or it could be the other accelerators like Nervana.”

But perhaps, for a business like Intel, one of the major benefits of using its own data centers is the obvious one - saving money: “The goal is to run the best data centers in the world and, from 2010 to 2017, we saved more than $2 billion, compared to having everything in the public cloud. So our cost is less than 50 percent of running in a public cloud.”

Further cost savings are being realized as the company pushes for fewer, but larger, data centers. “For every 5MW, we are paying

$1.91m less in utility bills, electricity bills, every year in Santa Clara,” Krishnapura says. “For this particular data center, for example, when it is filled out, we will be paying a nearly $12m lower electricity bill for the 31MW.”

That electricity is used to power servers from several different vendors - DCD spotted logos from HPE, Dell and others - but most of all, it’s used to power Supermicro servers. That’s because the company partnered with Intel on Krishnapura’s brainchild: disaggregated servers.

"Green computing is not just energy-efficient computing, or using less natural resources like freshwater, there's also e-waste," Krishnapura says, describing how data center operators often rip out entire servers and replace them every five or so years - a huge financial cost, and a significant waste, when some equipment can be reused, as can the sheet metal. "There is no reason to change all of this."

Disaggregated servers "separate the CPU complex, the I/O complex and the accelerator complex," he says. "Whatever you need, you can independently upgrade easily." Of course, companies can already upgrade aspects of their servers, but it is a difficult and laborious process. With disaggregated servers, "if people want to ‘rack it and stack it,’ they just need to remove four screws - it takes 23 percent of the time [spent previously]. You save 77 percent of the technician's time, that's a huge value."

However, with Krishnapura and Intel holding the patent, there is a downside: "It's our IP, obviously we don't want our competitor chip, graphics or other processor to get in." The servers are currently only available from Supermicro: "In fact they were the last company that I pitched the idea to, I had pitched it to every other company... but Supermicro saw the value very quickly." In June 2016, Krishnapura discussed the idea with Supermicro CEO Charles Liang, "and then six weeks later we had 10,000 servers deployed - it's never been done in the industry that fast. Now we have more than 80,000 disaggregated servers."

The idea, which aims to reduce the number of new servers companies need to buy, has not been immediately embraced by the industry: “It could be that their revenues are also lower,” Krishnapura says. “But what they don't understand is that as [company] IT budgets are flat, you are now giving a reason for them to upgrade every two years, instead of every four.”

This would mean that customers are likely to buy more CPUs - which will presumably come from Intel - but Krishnapura also insists this is a matter of ethics. “My whole purpose now is to use less material, less energy to deliver more compute and more aggressively bring value to the enterprises worldwide - not just to the cloud, but also for the private cloud, for HPC, and every segment.”



DCD>Energy Smart - 2nd Annual
1-2 April 2019, The Brewery Conference Center, Stockholm

Powering new business models at the smart energy nexus: where energy networks and data centers converge.

Limited free passes - book now! Join the discussion: #DCDEnergySmart

For more information visit: bit.ly/DCDEnergySmart2019

Opinion | China


Chinese technology is a hot topic: we are increasingly aware that networking hardware from the Middle Kingdom may not be secure, but do we have time to worry about that? Europe is divided and facing Brexit, and despite the US’ protectionist tone, North American manufacturers are closing down plants and relocating elsewhere.

Against this lack of focus, China has something the West does not: a coordinated, coherent, all hands on deck, national, regional and corporate digital strategy.

I saw this on display at the Urban Planning Exhibition Hall on Shanghai’s People’s Square recently. The main attraction, a gigantic scale model of the city, brings home what makes the Chinese market so unique: here you have an incredibly compliant people, eager to continue their rapid ascension out of poverty, and more than happy to hand over personal data to corporations and governments alike. In Shanghai alone, there are 25.4 million people. Talk about Big Data.

There is something unique about China’s fearless, unquestioning advance towards a common goal. Scrap the ethics, they say, they only serve to slow us down. But does this mean that China’s dominance in AI and Big Data will reach beyond borders and into your data centers?

In previous magazines, DCD has explored the question of what impact AI will have on the data center. Machine learning technologies can increase efficiency, by predicting a future based on past data. We collect raw sensor data from the data center’s many management systems, process it, label it and feed it into a predictive engine. Operators get insights, make predictions and automate decisions which would previously have needed human sign-off, to alter functions like thermal management. Rather than attempting to analyze the data

themselves, operators can just set priorities.

Given the benefits of AI, and a deployment model which starts with major companies deploying the technology internally, and then exporting it when it is tried and tested, my guess is that Chinese companies will increasingly pioneer the technologies we deploy across our data centers.

Kai-Fu Lee, PhD in AI, and author of AI Superpowers: China, Silicon Valley, and the New World Order, runs a Chinese VC fund, and believes that China has everything it takes to dominate the AI market: intellectual force, external investment, enough internal capital to sustain itself, easy regulations (or lack thereof) and data. Masses and masses of it.

He implies the developers of Silicon Valley are lazy, self-indulgent and too moral (a claim privacy campaigners might dispute). They are no match for China’s new breed of ruthless, relentless entrepreneurs, who honed their skills fighting to the death against their competitors by any means necessary: copycat apps, deception, slanderous smear campaigns, and, ultimately, pushing themselves harder than their competitors to develop the best products on the market.

The US has hitherto been a hotbed for research, but China’s government-sponsored research has grown exponentially. The BAT companies (Baidu, Alibaba, Tencent) have spent billions on R&D and academic partnerships, and the Chinese government has created a business-friendly environment, especially for digital service providers.

As we well know, Western companies hesitate to integrate Chinese technologies into their infrastructure. The trade war between China and the US is a hurdle to this, and meanwhile Europe is choosing whether or not to implement sanctions on one or the other. US, UK and EU agencies have warned against the use of Chinese companies' technologies (especially those of Huawei) in

One small step for Chinese AI

Despite the US-China trade war, and security fears, we may find ourselves using Chinese technologies in our data centers, says Tanwen Dawn-Hiscox

Tanwen Dawn-Hiscox | Reporter



national infrastructure, because of the fears of government-sponsored espionage. Given this reluctance to embrace the Chinese model, and Chinese technology, we are unlikely to witness as sudden a prominence of AI technologies in the West, but that doesn’t mean it won’t happen. When it boils down to a choice between security, economic gain, and the adoption of the best technologies, compromise usually comes first. Take China’s launch into space exploration: the country’s Chang'e 4 probe recently touched down in the Aitken basin on the far side of the moon. In an impressive tongue-in-cheek statement, China called this “a small step for the rover, but one giant leap for the Chinese nation.” Until now, international space exploration has belonged to the US, Russia and, to a lesser extent, Europe. Current research projects far outdo anything China has attempted so far, but Chang'e 4 is more than what it seeks to discover. It is a message: the age of Western domination - military, economic and scientific - is over, and we’re here to stay.


Thriving during a Greek tragedy What happens when international fiber fails to light up, and your country hits a depression? Lamda Hellix’s CEO tells Peter Judge about data center life in Greece

Peter Judge Global Editor


There is no one-size-fits-all data center business. What your company does will ultimately depend on what the business opportunities are in your location, and in your market sector - and this may create data center providers that are very different from each other. Greece’s Lamda Hellix is a case in point.

Created from the fall-out of the telecoms crash of 2001, it has had to think fast, and keep refocusing to handle changes in the tech world, the Greek business climate and the regional digital ecosystem. This year, the company expanded its Athens 2 data center with an extra 1MW, and announced it had passed the Uptime Tier III reliability certification, while the company celebrated fifteen years in the business.


Founder Apostolos Kakkos has worked to develop an international tech business in Greece, and also made time to co-found the European Data Centre Association. We met him in his Athens-2 flagship data center, joining the reception to celebrate Lamda Hellix’s anniversary. He unfolded a fascinating tale of flexibility and a drive to make Greece a significant tech center for South West Europe, with or without the help of major international fiber links. The company’s Athens 1 facility, close to Athens airport, was not originally built as a data center. It was originally planned in the late 1990s by Tyco Global Networks, as a cable landing station for a major international cable which, Kakkos says, would have turned Greece into a major communications hub.

But then the telecoms crash happened. Tyco Global Networks - along with other neutral international networks - was a good idea, Kakkos says, but it was ahead of its time: “At that point, all the existing cables were constructions between telco players who had monopolies in their countries. Tyco was producing fiber and selling it to telecoms operators. So why not run the cables and get the return that BT and the rest were getting?” The idea was disruptive, but there was a problem: “At the time, there were no over-the-top services. The Facebooks and the Netflixes did not exist.” These nascent international cable operators couldn’t make a profit, and in 2001 the network roll-out stalled, leaving redundant capacity on some routes, and other planned links unbuilt. Athens now relies on Telecom Italia’s MedNautilus network, a fiber ring with a total design transmission capacity of 3.84Tbps on six fiber pairs, and Tyco’s planned landing station was put on the market cheaply.

At this point, Kakkos stepped in, seeing the potential for a building with this location and its connections into Greece’s existing data and power networks. Lamda Hellix was founded in 2002, quickly bought the Tyco landing station and in 2003 opened a data center there - a pioneering move in Greece at the time. “We are the first and only 100 percent neutral provider in Greece,” says Kakkos. “When we started in September 2003, our aim was to address the local market.” A lot of its customers are international firms and service providers with a presence in Greece, he says.

It took 12 years before the company opened its second facility, Athens 2, built on the same site and sharing a lot of infrastructure with Athens 1. That sounds like a long time, but the intervening years were eventful - both for Greece and for Lamda Hellix.

Building the data center was a learning experience, as a data center is actually quite different from a cable landing station, so Lamda Hellix gained a lot of skills on which it could capitalize: “We had to invent the data center business in Greece, and we didn’t have a lot of potential partners, so we had to create our experience and know-how.” The building was not designed for raised floors, so these had to be installed - with a step up and a well by each of the doors. Power and cooling systems had to be sited carefully, and the whole set-up gained an enviable efficiency rating. “We wanted to have our own center of excellence, in order to construct and design facilities as good as Equinix or Interxion, without spending so much money,” he says. The company had some of the earliest Uptime-accredited engineers in Europe. Of necessity, their expertise is somewhat specialized: “Our expertise is 10 to 15MW data centers, not 50 to 100MW. We aren’t in Northern Virginia - we are in Greece.”

The facility began to offer colocation for large Greek enterprises, and multinationals with business located there. The site also offers disaster recovery, with desks equipped with PCs, ready for displaced office staff. But as customers moved in, they also started to create business elsewhere for Lamda Hellix. Disaster recovery customers began to ask about problems with their on-premises data centers. One customer had a data center which was overheating. Its cooling supplier suggested using more CRACs to keep the temperature down, but the customer did not trust this suggestion. Lamda Hellix’s engineers visited and put their building expertise to good use. They spotted that the facility’s raised floor was too low, restricting air circulation. “We went, not as a paid service, but as friends,” says Kakkos, “but they asked if we could fix it, and one thing drove the other.” Lamda Hellix moved the whole data center out of the building, raised the floors and put it all back.

From this project, the company launched Lamda Hellix Data Center Integration and Consulting Services, which has carried out projects in ten countries, including Ukraine, Malta, Cyprus, and Abu Dhabi in the United Arab Emirates. “The next step was a client for whom we designed and constructed a facility,” says Kakkos. “Now we are running 162 facilities for third parties, from 30-40 rack sites up to a couple of MW.” Meanwhile, the company added services in its own data centers: it now has a multi-homing IP service, hardware as a service and hybrid cloud: “We are quite diversified in different services. So far it’s gone well."

All this kept Lamda Hellix growing through the Greek Depression, which began in 2009, in the aftermath of the global financial crisis of 2007-2008. Indeed, some aspects of the crisis enabled Lamda Hellix to help Greek businesses who wanted to export services but found it difficult to use cloud providers hosted abroad, such as Amazon Web Services (AWS) and Microsoft Azure, because of government restrictions on currency movement.

Hosting at a Greek data center allowed Greek companies to access world markets, while paying a local provider in euros. Some Greek companies actually moved services back from abroad into Lamda Hellix. During this period, awareness of privacy also encouraged companies to keep Greek citizens’ data within Greece: “We see a steady and growing acceleration of people using our facilities, including global financial companies and organizations who have to keep data in Greece.” International business - both in


consulting and in hosting foreign organizations - has been important, says Kakkos: “The Greek economy is small compared to other countries. It’s bigger than Bulgaria and Romania, but smaller than the Turkish economy.” Greece never became a transit country after the collapse of Tyco’s ambitions, but Kakkos still has hopes: “We are trying to help that.”

Finally, in 2014, the 3MW Athens 1 was full, and it was time to start a new facility. The project for Athens 2 got under way. But once again, the economy threw in some surprises. By 2014, it seemed Greece had stabilized, and the company went ahead and built, arranging a launch party for June 2015. “It was a Tuesday,” Kakkos remembers. “I thought there would not be many people. We invited lots of people from politics, because this was international. It was infrastructure for Greece, for South West Europe.” Kakkos invited everyone - including people from the Chinese embassy - but he didn’t expect many to show up for a data center opening.

There had been a snap election in January 2015, and the EU bailout was extended. The Tsipras government was expected to agree new payment terms, but the EU rejected its proposals twice. Just before the Lamda Hellix launch, talks broke down, and the government announced a referendum on the EU proposal. On Monday, the day before the launch, the banks closed for a month. “It wasn’t the optimum time for the opening of a data center,” says Kakkos, with a grin.

But in those circumstances, the launch was well attended, the opening went ahead, and Athens 2 is now filling up and expanding according to plan. Given the market dynamics, Lamda Hellix is bringing capacity on stream in phases, starting with 500kW and scaling up to a present capacity of around 3MW, with a full capacity of 6MW planned in a couple of years’ time.

Both Athens 1 and 2 are two-story facilities, and Kakkos points to some specific enhancements. Lamda Hellix uses two fire suppression systems, with the Inergen inert gas system backed up by a water mist system designed for deployment in a real emergency. The water mist is designed to minimize problems with the technology, but its priority


is to keep the people in the facility safe.

Kakkos is proud of the DCD Award Lamda Hellix won in the Service Provider category in 2015. It also won an award in 2012 for “best workplace in Greece.” Its work for other clients also gets noticed: he mentions a few names, including Khazna of the United Arab Emirates, and a modular build for Greek research network GRnet that also won an award.

In future, Kakkos hopes that the relaxation of capital controls by the Greek government will enable greater growth in the country’s economy. “We have growth around two percent. The two largest parties are quarreling about whether it could go to four percent. I’m a creator and an entrepreneur - I say, how can we get to six percent?” If that happens, Kakkos already has plans for Athens 3.


Cat Electric Power understands that loss of power means loss of reputation and customer confidence. Your customers demand an always-on, robust data storage solution without compromise, 24 hours a day, 365 days a year. Cat® power solutions provide flexible, reliable, quality power in the event of a power outage, responding instantly to provide power to servers and facility services, maintaining your operations and the integrity of your equipment. Your Cat dealer and our design engineers work with you to design the best power solution for your data center, helping you to consider: • Generator sizing for current and anticipated future operations growth, fuel efficiency and whole life costs • Redundancy for critical backup and flexible maintenance

© 2018 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, their respective logos, ADEM, “Caterpillar Yellow” and the “Power Edge” trade dress, as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.

• Remote monitoring for constant communication and performance analysis • Power dense designs optimising space dedicated to data center equipment • Interior or exterior installation requirements from enclosure design for noise and emissions to exhaust and wiring designs After installation, trust Cat to provide commissioning services to seamlessly integrate the power solution into the wider data center system. Our dealers also provide training, rapid parts and services support alongside a range of preventative maintenance offerings. To find out more about Caterpillar Electric Power and our Data Center experience, visit: www.cat.com/dcd0809

Demand Cat Electric Power

>Awards | 2018

Category Winners Following months of deliberations with an independent panel of expert judges, DCD Awards is proud to celebrate the industry’s best data center projects and most talented people.


Smart Data Center

Winner: Equinix
Category sponsor: Red ICT Design

(Left-Right) Tom Walsh, Panduit | James Coughlan, Huawei


With a rich ecosystem of over 13,000 active connections, ECX Fabric is becoming the de facto global standard for how businesses connect to the cloud.

Billy McHallum, Equinix | Philip Beale, Red ICT Design | Jonathan Humphries & Daragh Campbell, Equinix

Living at the Edge

Winner: ChinData
Category sponsor: Panduit

ChinData's new hyperscale data center campus in Beijing lies at the heart of an edge network of 220 custom-built "Gypsophila" edge facilities across China.

Paul Johnson, ABB | Keith Cronshaw & Matthew Pullen, CyrusOne | Host Deborah Frances-White

Barry Hennessy, Winthrop | Steven Hammond, NREL

Data Center Eco-Sustainability

Winner: NREL
Category sponsor: Winthrop

With a world-leading annualized average PUE of 1.036 and pioneering work in component-level liquid cooling, NREL's entry really impressed the judges.

Infrastructure Scale-Out

Winner: CyrusOne
Category sponsor: ABB

CyrusOne's greenfield data center in Allen, Texas, uses a unique modular approach that enabled rapid deployment, while staying green.


Awards 2018 Winners

George Rockett, DCD | JP Burger, Starline

George Navarro, Eaton USA | Sean James, Microsoft | Katheron Garcia, Eaton USA | Martin Murphy, CBRE


Energy Efficiency Improvers

Winner: ST Telemedia
Category sponsor: StarLine

By developing its own pre-cooled coil, ST Telemedia GDC enabled partial free-cooling in its Singapore data center, proving the concept in a tropical climate.

Mission Critical Innovation

Winner: Microsoft
Category sponsor: CBRE

Microsoft worked with Eaton to turn tried-and-true UPS technology into a value-generating power grid asset. This is the second year running that Microsoft picks up this award!


Cloud Migration of the Year

Winner: American Express
Category sponsor: Interxion

The project involved hundreds of scrum teams, thousands of engineers across the globe and the re-engineering of hundreds of applications.

Lorenzo Bellucci, Stuart Barnett, & Madhu Reddy, American Express | Michael Rabinowitz, Interxion & Colin Allen, American Express

Marc Garner, Schneider Electric | Osvaldo Antonio Pazianotto, Sabesp

Ops Team of the Year - Enterprise

Winner: Sabesp
Category sponsor: Schneider Electric

The world’s fourth largest water utility, serving 28 million people, successfully deployed a massive IoT monitoring system across its network.

Jonathan O'Shea, Anixter | Brett Ridley, NEXTDC

Ops Team of the Year - Colo+Cloud

Winner: NEXTDC
Category sponsor: Anixter

Australia's NEXTDC demonstrated a holistic approach to facilities management, underpinned by adherence to Uptime Institute’s Tier IV Gold certification.


Giordano Albertazzi, Vertiv | Brett Ridley, NEXTDC


Data Center Manager of the Year

Winner: Sunday Opadijo, Rack Centre
Category sponsor: EkkoSense

Dean Boyle, EkkoSense | Sunday Opadijo, Rack Centre

Sunday and his team have maintained 100 percent uptime and dealt with situations that would send shivers down most managers’ spines.

Design Team of the Year

Winner: NEXTDC
Category sponsor: Vertiv

NEXTDC Engineering & partners Aurecon have been awarded the Design Team of the Year for designing and building Australia’s first data centers to receive Tier IV certification from Uptime Institute.

Ali Moinuddin, Uptime Institute | Dr Rabih Bashroush, UEL


Industry Initiative of the Year

Winner: University of East London
Category sponsor: Uptime Institute

The EURECA project played a major role in improving energy efficiency across Europe and saving data centers at least 52GWh per year.

Young Mission Critical Engineer of the Year

Winner: Laura Rogers, Morrison Hershfield
Category sponsor: Google

Joe Kava, Google | Laura Rogers, Morrison Hershfield


Laura Rogers is truly 'one to watch.' Her market knowledge, technical strengths and more made her our judges’ favorite.

Awards 2018 Winners

Funke Opeke, MainOne

Business Leader of the Year

Winner: Funke Opeke, Founder and CEO of MainOne

Steven Berkoff, Chain of Hope Ambassador

Category sponsor: White & Case

Funke Opeke is no stranger to hard tasks. She has led her company MainOne to success in West Africa, having to build the component parts of a data center business from the ground up, including the cable that attaches it to the rest of the world. She has managed to take on numerous challenges within her male-dominated field. Importantly, she encourages the women of her team to do the same.

Prof. Robert Tozer, Operational Intelligence | Gian Walker, Akamai


Corporate Social Responsibility

Winner: Apple, Akamai, Etsy and Swiss Re


Category sponsor: DCPRO

With over 290MW of new renewable generation capacity across four data center operators, this ambitious and historic collaboration showed our judges what can happen on a global scale when big tech companies come together to put actionable climate change strategies in place.


Eoin Vaughan, Mercury Systems | Pitt Turner, Uptime Institute

Public Vote: The Most Extreme Data Center on the Planet

Winner: HPE's Spaceborne Computer
Category sponsor: Quality Uptime Services

Clare Loxley, HPE | Frank Monticelli, Quality Uptime Services

Once the thrill of orbiting our planet at some 400km wears off, astronauts on the International Space Station have HPE’s Spaceborne Computer to run their experiments on.

Outstanding Contribution to the Data Center Industry

Winner: Pitt Turner, Uptime Institute
Category sponsor: Mercury Systems

Pitt Turner has helped clients justify several billion dollars of investments, and co-invented what has become the industry standard Tier-level site infrastructure rating system.

Meet the Winners

Max Smolaks News Editor

What motivates the best data center engineers? We tell the personal stories of the recipients of the DCD Awards 2018.

Winner Laura Rogers | Category sponsor Joe Kava, Google

Making it work: Laura Rogers, commissioning agent at Morrison Hershfield

Every year as part of its awards ceremony, DCD highlights some of the achievements of the latest generation of engineers - the bright-eyed individuals who have just joined the workforce. At the end of 2018, we presented the Young Mission Critical Engineer of the Year award to Laura Rogers, a mechanical commissioning agent working in Atlanta for Morrison Hershfield.

So, what makes a champion? She doesn't like to be stuck in the office, she is passionate about energy conservation, and her favorite pastimes include blackout testing. "I like being part of the solution. At the end of the day it has to work - and so, whatever's built, you make it work," Rogers tells DCD Magazine.

Morrison Hershfield is an employee-owned engineering consulting firm founded in 1946 and headquartered in Toronto. It sounds like a nice place: one of the founders, Carson Morrison, literally wrote the book on professional ethics and morality in engineering. MH is not limited to data centers - it also does highways, tunnels, bridges and oil platforms.

"It has got a family culture," Rogers says. "When I looked them up, before I came for an interview, I was a little bit overwhelmed - they have over 900 engineers, and I didn't want to go work for a big corporate firm. But they are laid back, they have a lot of brilliant people with years of experience, and we have some great clients."

Her previous employer, Leach Wallace Associates, had a wide portfolio of mission critical projects, including hospitals - buildings in many ways similar to data centers. "We were doing an outpatient care pavilion with surgical suites. It was in downtown Chicago: 25 stories, new build, 17 stories of critical space. We had to do an overnight blackout test - we also do these in data centers - where we got the utility involved and we killed the power to the building. It was exciting; we had people all over the place with walkie-talkies, watching the mechanical systems.

"When you do these tests, nothing is supposed to go wrong - you're supposed to have worked out all the issues: oh, this load didn't transfer, this generator didn't come up. The exhaust fan went out of control and now we're pulling in walls; I've heard of that happening, but not on our site."

During the test, load banks are used to simulate 100 percent load, burning through megawatts at a time. Once the power is out, the load transfers to the UPS, and then the generators fire up. The transfer switch then moves the load to the generators, which can run for as long as there's fuel in the tank.

"If we run into any issues during this test, then we did not do our jobs properly in the weeks leading up to it. It's the most coordination required between mechanical and electrical - and electrical is just going 'let us turn off the power!'"
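The transfer sequence described here - utility power cut, load carried by the UPS, generators started, then the transfer switch moving the load across - can be sketched as a toy simulation. This is purely illustrative; the function name, capacities and failure checks are invented for the sketch and are not from any real test plan:

```python
# Illustrative sketch of the power-transfer sequence in a blackout test:
# utility fails -> UPS carries the load -> generators start -> the
# transfer switch moves the load onto the generators.

def blackout_test(load_kw, ups_capacity_kw, generator_start_ok=True):
    """Return the ordered list of sources that carried the load."""
    path = ["utility"]          # normal operation before the test
    # Utility power is cut: the UPS must pick up the full load instantly.
    if load_kw > ups_capacity_kw:
        raise RuntimeError("UPS overloaded - load did not transfer")
    path.append("ups")
    # Generators fire up; the transfer switch moves the load once they run.
    if not generator_start_ok:
        raise RuntimeError("generator failed to start")
    path.append("generator")    # runs as long as there is fuel in the tank
    return path

print(blackout_test(load_kw=800, ups_capacity_kw=1000))
# -> ['utility', 'ups', 'generator']
```

The point of the commissioning weeks is to make sure neither of the two failure branches is ever taken on test night.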

"Of course, middle-aged men think they know everything and you know nothing..."


To kick-start her career, Rogers attended the University of Maryland to study mechanical engineering, and got an internship during her junior year - in commissioning. "Instead of sitting behind a desk and doing design on AutoCAD or Revit, I got to go out in the field, interact with owners, contractors, design engineers, architects, meet the entire design team, and see the equipment and systems being built - and then test them, hands-on.

"I'm still working on my professional engineering license. I haven't given up engineering in any way, I'm just doing it from a different perspective. HVAC is not the most exciting thing out there - it's not cars, or jets - but it’s in every building, and you can learn how to fix your air conditioning or heating if it breaks," she laughs.

Rogers says building a career as a woman in a traditionally male-dominated field hasn’t been as difficult as it's sometimes portrayed. She actually thinks that it's harder to look young and be taken seriously: "There are these middle-aged men who have been doing it for 20 years - of course they think they know everything and you know nothing.

"Whether you're a man or a woman, it's important to craft your emails, to be well-spoken - in commissioning especially, it's a lot of communication." At the same time, she admits: "There's a lot of inappropriate language on job sites. I don't engage in that - it becomes a bad habit."

She adds that many people underestimate the importance of commissioning: "It is very important from the energy perspective. I've commissioned buildings that had their heating and cooling running in the exact same room at the same time."

Meet the Winners

Preparing for disaster: Sunday Opadijo, chief engineer at Rack Centre

Every year as part of the DCD Awards, we celebrate the individual achievements of exceptional data center managers - people who go above and beyond the call of duty, who sacrifice their time, effort and emotional wellbeing on the altar of uptime. At the end of 2018, we presented the Data Center Manager of the Year award to Sunday Opadijo, the man in charge of critical infrastructure at Rack Centre in Lagos, Nigeria.

Despite the harsh tropical savanna climate and the notoriously unreliable power grid, Opadijo has presided over six years of uninterrupted data center operation. Sometimes, this meant shedding non-essential loads – like the power to the company’s own office.

Opadijo started working for Rack Centre in 2013: “When I joined the company, we were starting from scratch,” he told DCD Magazine. “It’s a global player now, when it comes to data centers. We have lots of foreign clients.

“There are a lot of smaller data centers in the region that really were not knowledgeable about Uptime Institute, or other standards like ISO 27001, or were not inclined to invest as required. Their customers just wanted to keep IT infrastructure somewhere, and did not demand 100 percent uptime.” Rack Centre customers do, however - the company opened its first facility in 2013, as the only data center offering Tier III redundancy in Western Africa at the time.

Before Rack Centre, Opadijo spent nearly a decade working for South African telecommunications provider MTN. “I started doing research on data centers, since it was something I was involved in at the time. I began reading a lot, and applied for a training course, but that didn’t come through, so I decided to train myself - the first data center training that I received was actually online, through APC’s Data Center University [today known as Schneider Electric’s Energy University].”

He continued to hone his skills by attending training sessions around the world, including courses in the US and the UK, with organizations like CNet, RF Code and Optimum Path. This opened new doors: “I learned about design, so I was able to do design work for MTN on new facilities.” Opadijo also noted that the domestic market had made great strides in recent years, and today it was perfectly possible to study data center engineering without leaving Nigeria.

His ideas about the importance of self-reliance seem to stem from the fact that the Rack Centre facility is based on modular data center designs made by British modular infrastructure specialist BladeRoom. “You can imagine - BladeRoom is several thousand miles away, they don’t have a presence in Nigeria. If you have any major issues, and you don’t have the knowledge to troubleshoot and fix those, you will experience downtime,” he explained.

Winner Sunday Opadijo Category sponsor Dean Boyle, EkkoSense

"The first generator went down, the second generator went down, then the third generator went down..."

“When I joined, we developed policy documents, processes and procedures for every activity that my team carries out. There’s a checklist for everything.

“One thing that we have learned is that we do not panic. If there’s an emergency, I’m in command, and I tell them what to do, and in which order. We have walkie-talkies and we treat it like a military operation.”

Some of the most challenging episodes in Opadijo’s career are linked to the generators, and the weather: “We had lightning strike the data center in 2014. We had another lightning strike in 2016, and the generator actually caught fire. Despite these two major incidents, we never had a second of downtime.

“In 2017, we had major - you could say catastrophic - issues: the diesel was contaminated with water. The reaction from my team was very swift, but the first generator went down, the second generator went down, then the third generator went down. We store 80,000 liters of diesel - we had to know how much fuel we had to let go. No support organization could have helped us; we couldn’t wait even for an hour.

“We kept running on a single generator – we had to do load shedding, we had to prioritize - we cut off the office completely, since at this point, the office is not critical.”

The data center remained online, and the team has learned from the experience - today, Rack Centre employs a sophisticated diesel delivery value chain management process and has infrastructure in place to avoid bad diesel issues. But in situations like these, the most important thing is preparation: “When you are running a data center, you have to prepare for a potential disaster - then you’re going to be OK.”
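The load-shedding call Opadijo describes - keeping the data hall alive on a single generator by cutting the least critical loads first - can be sketched as a simple priority cut-off. The load names, priorities and power figures below are invented for illustration, not Rack Centre's actual loads:

```python
# Hypothetical sketch of priority-based load shedding: with reduced
# generator capacity, drop the least critical loads until the rest fit.

def shed_loads(loads, capacity_kw):
    """loads: dict of name -> (priority, kw); lower priority = more critical.
    Returns (kept, shed) lists of load names."""
    # Most critical first, so the data hall is the last thing to go.
    ordered = sorted(loads.items(), key=lambda kv: kv[1][0])
    kept, shed, used = [], [], 0.0
    for name, (_prio, kw) in ordered:
        if used + kw <= capacity_kw:
            kept.append(name)
            used += kw
        else:
            shed.append(name)
    return kept, shed

loads = {
    "it_load": (0, 700),   # servers: kept as long as capacity allows
    "cooling": (1, 250),   # required to keep the IT load alive
    "office":  (9, 150),   # "the office is not critical"
}
kept, shed = shed_loads(loads, capacity_kw=1000)
print(kept, shed)   # -> ['it_load', 'cooling'] ['office']
```

With only 1,000kW of generator capacity, the office is the load that gets cut - which matches the decision described above.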


Max’s Opinion

Eat the rich


Nobody needs a billion dollars. Nobody deserves a billion dollars. And yet, there are approximately 2,754 billionaires around the world today. If you look at the latest Forbes billionaires list, you will notice that only a few industries make it possible to obtain such an extraordinary level of wealth – these include extraction of natural resources, retail, media, real estate and technology.

Half of the world’s ten richest men (and they are all men) made their fortunes through selling computers, software, or online services. While the identities of the oil sheikhs and Russian oligarchs are kept relatively obscure, technology billionaires have become household names - we have all heard about Jeff Bezos (net worth $134.9 billion), Mark Zuckerberg ($71bn), Larry Ellison ($58.5bn), Larry Page ($48.8bn) and Sergey Brin ($47.5bn), and of course, the man who started it all, Bill Gates ($95.7bn).

At the end of January, at the World Economic Forum in Davos, Michael Dell ($28.6bn) and his audience had a merry laugh at the proposal by some of the young voices within the Democratic Party to adopt progressive taxes of up to 70 percent on incomes over $10 million. As I’m writing this, the video is doing the rounds online, as taxes become, however briefly, a national conversation in the US. As you might have guessed, laughing about the matter didn’t go down too well.

We can’t fight inequality without taking some of the money away from the ultra-rich - in a world of finite resources, that’s just not possible. At the same time, you could argue that taxes benefit billionaires in a very direct manner.

It’s taxes that keep their employees healthy, productive, and safe. Taxes

“Every billionaire is a policy failure” Dan Riffle, policy advisor to Alexandria Ocasio-Cortez

pay for the roads used by delivery trucks, and for the United States Postal Service that gets those Amazon packages to their destinations. Taxes pay for the glorious DARPA, which remains an engine of American-made innovation that no start-up cluster can match, and the National Laboratories that may win the race to exascale (p16). Taxes also pay for the federal data centers that Michael sells to, and Jeff wants to replace.

Dell and friends act like they are royalty, but they are nouveau riche - a startling example of just how far you can get removed from the common people in a single generation. Yes, they give to charity, but some argue this is yet another example of attempting to wrest power away from the state. We need higher taxes on the ultra-rich.

An alternative method of resolving economic grievances was proposed in 18th-century France. Enlightenment philosopher Jean-Jacques Rousseau once said: “When the people shall have nothing more to eat, they will eat the rich.” Sometimes, when I get angry about the state of the world, I wonder what Jeff Bezos would taste like.

Max Smolaks News Editor


STULZ stands for precision air-conditioning of the highest level. Whether customized or standardized, data center or industrial application, chiller or software; rely on STULZ for your mission critical cooling. STULZ – your One Stop Shop. www.stulz.co.uk

Will your data centre handle your next big idea?

In a connected world, IT service availability is more important than ever. EcoStruxure™ for Data Centers ensures that your physical infrastructure can quickly adapt to the demands of the cloud and the edge — so you’ll be ready for that next big idea. Learn more about our solutions and services for data centres to ensure IT systems are highly available and efficient.


©2019 Schneider Electric. All Rights Reserved. Schneider Electric | Life Is On and EcoStruxure are trademarks and the property of Schneider Electric SE, its subsidiaries, and affiliated companies. • 998-20120074_GMA-GB
