Issue 37 • August 2020 datacenterdynamics.com
THE COVID ISSUE
How data centers are adapting & how we’re fighting back
RELIABLE BATTERY SOLUTIONS
Li-Ion (LFP) and VRLA High Rate Batteries for Critical Power Data Center Applications
UL1642 UL1973 UL9540 UL9540A Tested
Introducing the Narada High Rate Li-Ion (LFP) Series Energy Storage Systems for Critical Power applications. As a complement to Narada's HRL and HRXL Series VRLA batteries, we offer a choice in chemistry, warranty, and life.
www.mpinarada.com | ups@mpinarada.com | Newton, MA, USA 800-982-4339
ISSN 2058-4946
Contents August 2020
6  News: Racism, job losses, trade conflicts, and the virus. Data center news reflects the world around it
14  Dealing with Covid-19: How data centers are adapting to the crisis of our times
18  Beating the virus: Supercomputers join the fight for our lives in a grand public-private coalition
22  Industry interview: "It's a billion-dollar startup. Brookfield bought 30 data centers around the globe, with 1,000 customers and a great ops team. But the rest of the business had to be created around those assets," Evoque CEO Andy Stewart tells us about life after AT&T
24  Reversible computing: Our current approach to semiconductors is reaching its limits. What if there was another way that was also significantly more efficient?
28  The Earth, again: Creating an entire digital twin of the planet is no mean feat. We go behind the scenes of the European Union's Destination Earth project
31  The colocation supplement: Remote management comes into its own; networks face the pandemic surge; meet-me-rooms go virtual, and more in this special supplement on colocation in the modern age
47  How data centers can make way for renewables: Our industry can be the lynchpin of the world's transition to renewables - but only if we embrace new technologies
54  The democratization of 5G and mobile networks: Amid a pandemic and a trade war, some are pushing to open up 5G
56  Look after yourself: We're all just humans going through a weird time. From all of us, stay safe and reach out if you need to talk
From the Editor

In this crisis, data can save us
The Covid-19 pandemic is changing things fast. It's only four months since lockdowns changed everyone's life. At DCD we mothballed our office space and shifted physical events into the virtual world. Others faced tougher changes. But keep the context: 689,000 people have died, a horrendous tragedy that is still less than 0.01% of the world's population. This issue we look at how data - and data centers - might help keep this impact as low as humanly possible.
"When people are hunkering down, we're running to keep our data center up" (p14) Data centers are an essential service. Anyone who didn't know that before, knows it now. Digital infrastructure has enabled the communications which allow life to go on (p14). But supercomputers are opening a front in the fight-back against Covid-19. As never before, governments are sharing their supercomputers to model drugs that might fight the virus. It's a new level of co-operation, albeit with one major world power excluded (p18).
What a time to launch. Andy Stewart is leading Evoque, the new data center provider which took over AT&T's former colocation empire. He might be wishing he could have started some other year, but he's got plans to thrive.
For more on the pandemic, see our news pages of course (p6), but also our supplement. We set ourselves the job of seeing what's coming for colocation, and everywhere we looked we saw the effects of Covid-19 (p31). To take one example, remote management has always been an option. With travel restricted, it has become essential (p34).
0.01% - Global Covid-19 death rate as a fraction of population

A Digital Twin of Earth could save us from climate change. The European Union has sanctioned a simulation of the world, to model ways to fight global warming. But can you get enough detail and still see the big picture? (p28).

Recycling servers is one suggestion we have for colos (p42). A more radical suggestion on the tech horizon is to recycle bits themselves, and slash the environmental impact of computing. Make computing reversible, so every calculation can be "decomputed," and the energy demands of computers could virtually go to zero (p24). That's not a fantasy: it follows from the laws of thermodynamics. There is a downside, of course, and if you're a physicist, you might already have spotted it.
Take the long view. The human race is resilient enough - and co-operative enough - to get through this crisis. But we need to consider basics, from the practicalities of network architecture (p54), to the fundamental need to take care of each other (p56). Contact us if you have needs or insights that need a wider audience. These are important times and our industry has a role to play.

Peter Judge
DCD Global Editor

Meet the team
Global Editor Peter Judge @Judgecorp
Deputy Editor Sebastian Moss @SebMoss
Reporter Alex Alley
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designers Dot McHugh, Harriet Oakley
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Conference Producer, APAC Chris Davison
Chief Marketing Officer Dan Loosemore

Head Office
DatacenterDynamics
102–108 Clifton Street
London EC2A 4HW
+44 (0) 207 377 1907

PEFC Certified: This product is from sustainably managed forests and controlled sources. PEFC/16-33-254 | www.pefc.org
Dive deeper
Follow the story and find out more about DCD products that can further expand your knowledge. Each product is represented with a different icon and color, shown below: Intelligence, Events, Debates, Training, Awards, CEEDA.
© 2020 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
DATA DOESN’T DO DOWNTIME DEMAND CAT® ELECTRIC POWER
Cat® generating sets and power solutions provide flexible, reliable, quality power in the event of a power outage; maintaining your operations, the integrity of your equipment and your reputation. Learn more at http://www.cat.com/datacentre
© 2019 Caterpillar. All Rights Reserved. CAT, CATERPILLAR, LET’S DO THE WORK, their respective logos, “Caterpillar Yellow”, the “Power Edge” and Cat “Modern Hex” trade dress as well as corporate and product identity used herein, are trademarks of Caterpillar and may not be used without permission.
Whitespace: The biggest data center news stories of the last quarter

News in brief
Amsterdam officials scrap data center moratorium The city’s municipal authorities of Amsterdam and Haarlemmermeer have ended the year-long pause and opted for a limit on space and power capacities for any new facilities.
Google Grace Hopper cable to link US, UK, and Spain Google Cloud will deploy a submarine cable between the UK, Spain, and the US. The cable joins Google’s Curie, Dunant, and upcoming Equiano projects; Grace Hopper is expected to be online in 2022.
Novva to build $1bn hyperscale campus in Utah Novva is developing a $1bn hyperscale campus on its 10-acre site in West Jordan, Utah. The new company is currently building a 300,000 sq ft (28,000 sq m) data center on the site and an 80,000 sq ft (7,500 sq m) office.
$100,000 reward offered for info on Facebook construction site noose
A racist song was also played over the radio at a Microsoft site
After a 'lynching' noose was found at a Facebook construction site in Altoona, Iowa, two on-site construction trade groups are offering a $100,000 reward for any information that will lead to an arrest. Two days after the incident, a racist song was played at a nearby Microsoft data center construction site. Both were Turner Construction developments; the company called the incidents "despicable and unacceptable acts of hate." Both projects also involved other contractors. Microsoft told DCD it is working to ensure that action is taken. "We believe in America, and no one should experience fear, workplace intimidation, or be subjected to hateful symbols of racism," Earl Agan, Central Iowa Building & Construction Trades president, said. "We stand united against racism and discrimination and are committed to ensuring our members feel safe." The union is partnering with North America's Building Trades Unions to offer the reward, which will expire on September 1. Altoona Police said that interviews
were still being conducted with all information in the case set to be filtered through the FBI’s Omaha office. It is not clear if there is a direct link between the noose incident and what happened at the Microsoft site, despite the proximity and shared contractors. “We are working vigorously to ensure the perpetrators of these hateful acts face the consequences of their actions,” a Turner spokesperson told DCD. Contractors from other companies were also at the sites. Facebook and Microsoft spokespeople also denounced their respective incidents. “It is extremely disheartening to see that even in our own data center industry we are seeing acts of racism and hatred on our job sites,” Microsoft’s GM of global data center execution, Douglas Mouton, said at DCD’s virtual Building at Scale event. “This is impacting everyone and it’s a call to action for us to think how we as leaders take this head-on.” bit.ly/Noosebounty
Los Alamos, NNSA install cooling system to prep for exascale supercomputers The National Nuclear Security Administration installed cooling towers and equipment at the Los Alamos National Laboratory in New Mexico, for its upcoming pre-exascale supercomputing projects in 2021.
Nautilus's floating data center finishes construction in Vallejo, California The data center was refurbished by Lind Marine in Vallejo, California, for Nautilus Data Technologies. The facility, called Eli M, was originally built in 1969 and has been upgraded as a data center; it has since been towed to the Port of Stockton, California.
Intel delays 7nm, considers using rival foundries 7nm PC CPUs will arrive in late 2022 or early 2023, while server CPUs are now planned for 2023. The company’s 7nm data center GPUs are set for late 2021 or early 2022, likely delaying the Aurora exascale supercomputer. AMD, Nvidia, and Ampere all offer 7nm products manufactured by TSMC - which Intel may use. Shares dropped 18 percent.
IBM cuts jobs amid Covid-19 impact
Big Blue's Cloud and AI divisions have also been hit by the cutbacks
IBM laid off a significant number of employees around the world due to Covid-19. The first job cuts happened in May, after IBM's new CEO Arvind Krishna took office. Amid a global pandemic and deepening recession, IBM said the market had become highly competitive and "requires flexibility to constantly remix to high-value skills." IBM did not confirm the number of job losses, nor did it release a company-wide internal email explaining the strategy or rationale behind the move. The number is thought to be significant. The layoffs come after several such moves in the past, including in 2016, 2017, 1,700 jobs in 2019, and 268 earlier this year. Usually, the company has said that
it was 'realigning' less profitable divisions in order to bolster its Cloud and AI businesses. This time, however, anecdotal evidence from employees on Reddit and Facebook paints a different picture. "Got whacked this morning from Cognitive Applications," one user said. "Also hearing Watson Health and Research are being hit hard, along with the perennial classics, GBS and GTS." "This is the biggest round of [layoffs] in a decade," one user claimed. IBM, which is believed to have steadily reduced its severance package over the past few years, currently has no plans to cut its shareholder dividend.
bit.ly/Bigblueaxe
AWS VP quits due to "chickensh*t" firing of protesting workers
AWS VP Tim Bray resigned over AWS's treatment of protesting employees in May. On his blog, Bray said he was leaving due to the treatment of employees who called out Amazon's climate change targets, and attempted to improve the safety of workers amid Covid-19. He said the company's decision to sack staff was "chickenshit," and was "designed to create a climate of fear." Bray was the only VP-level employee to sign a letter to shareholders calling for AWS to stop oil contracts.
bit.ly/Birddroppings
HPE sacks staff and cuts pay amidst Covid-19 and bad server sales
HPE CFO: "So, by and large, we're going to leave no stone unturned."
Hewlett Packard Enterprise is reducing staff as server sales dipped due to supply chain disruptions and economic woes. The company said it would "realign the workforce" and cut costs over the next three years, but did not disclose how many jobs were at risk of getting the chop. The plan is for the company to save $1bn by the end of fiscal 2022, and $800m on an annualized run rate. All staff members will receive a pay cut through October 31, 2020, where it is legally permitted, with the executive team allegedly taking the largest percentage reduction. For staff in countries where pay cuts are restricted, the company is implementing unpaid leave instead. HPE CEO Antonio Neri said: "It definitely was a tough quarter by every measure and I'm disappointed in the performance, but I don't see this as an indication of our capabilities."
bit.ly/Serverslippage
Telia Estonia to build solar plant to cut costs at its Laagri data center
The Estonian subsidiary of the Nordic telecom giant Telia is building a solar plant to power its data center in Laagri, Estonia. Details are scant, but the company expects the farm to pay back its investment over the next six to seven years. Renewable energy firm Pro-Solar will build the plant next to Telia's data center in Harju County, northern Estonia. It is expected to be operational by the end of the year. The solar plant will not be able to meet the entire needs of the data center, but it will make running it cheaper. Toivo Praakel, of Telia Estonia's Network and Infrastructure Unit, said: "Data centers have a fairly even energy consumption, but there are still small round-the-clock and seasonal fluctuations." The government is also offering the company incentives to build the solar plant, but no details concerning these incentives have been released publicly.
bit.ly/Balticsun
Microsoft runs data center racks on hydrogen fuel cells for 48hrs
Hydrogen passed the test, making for a feasible and green backup system
Microsoft and Power Innovations powered a row of data center servers for 48 consecutive hours using hydrogen fuel cells. The hyperscale company previously committed to ending its dependency on diesel fuel in backup generators by 2030, along with a broader promise to be carbon negative. Mark Monroe, a principal infrastructure engineer at Microsoft, said: "We don't use the diesel generators very much. We start them up once a month to make sure they run and give them a load test once a year to make sure we can transfer load to them correctly, but on average they cover a power outage less than one time per year." An Azure data center with fuel cells, a hydrogen storage tank, and an electrolyzer
that converts water molecules into hydrogen and oxygen could hypothetically be integrated with the electric power grid to provide load balancing services, the company said. Microsoft used a 250-kilowatt proton exchange membrane fuel cell system to power 10 racks of servers. Most hydrogen is harvested using fossil fuels, emitting CO2 and carbon monoxide during production. To be truly green, the hydrogen must be produced with electrolysis, where electricity runs through water to separate hydrogen and oxygen atoms; the electricity must also come from renewable sources like solar power. bit.ly/Gaspower
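For a rough sense of scale for the test described above, the energy and hydrogen involved can be estimated from the 250-kilowatt load and 48-hour duration quoted in the story. These are not figures Microsoft has published; the fuel cell efficiency assumed below is illustrative.

```python
# Back-of-envelope sizing for a 48-hour run of a 250kW fuel cell system.
# Assumptions (not from Microsoft): ~50% electrical efficiency for the PEM
# stack, and hydrogen's lower heating value of roughly 33.3kWh per kg.

POWER_KW = 250           # rack row load quoted in the article
DURATION_H = 48          # length of the test run
LHV_KWH_PER_KG = 33.3    # lower heating value of hydrogen
EFFICIENCY = 0.5         # assumed stack efficiency (illustrative)

energy_delivered_kwh = POWER_KW * DURATION_H   # 12,000 kWh
hydrogen_kg = energy_delivered_kwh / (LHV_KWH_PER_KG * EFFICIENCY)

print(f"Electrical energy delivered: {energy_delivered_kwh:,.0f} kWh")
print(f"Approximate hydrogen consumed: {hydrogen_kg:,.0f} kg")   # roughly 720 kg
```

Under those assumptions, a two-day outage for one row consumes on the order of 700kg of hydrogen, which is why on-site storage and electrolyzer capacity matter as much as the fuel cell itself.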
Peter’s energy factoid Despite Microsoft’s greener push, the company signed a deal with fracking giant Halliburton to move its digital workloads onto Azure, improving its ability to extract fossil fuels
Switch to use Tesla batteries for Nevada solar project
Switch and management firm Capital Dynamics began building three Nevada solar energy projects in July. Known as 'Gigawatt 1,' the three developments use panels from First Solar and Tesla Megapack batteries. The solar farms will generate 555MW and have 800MWh of battery storage. Switch will use thousands of solar panels, as well as Tesla Megapacks - which will be manufactured at the Tesla Gigafactory, in the same Reno
business park as a Switch data center. Megapacks are the automotive company's largest lithium-ion battery storage system, with 3MWh of energy capacity per pack. The project is the result of SB547, a bill (lobbied for by both Tesla and Switch) that allowed certain plants or equipment used by a data center to be excluded from regulations, giving flexibility on how data centers buy energy.
bit.ly/Gigabatteries
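To sanity-check the 'Gigawatt 1' figures quoted above (555MW of generation, 800MWh of storage, roughly 3MWh per Megapack), the arithmetic below is illustrative only; neither Switch nor Tesla has disclosed a pack count or storage duration in these terms.

```python
# Illustrative arithmetic on the project figures quoted above.

STORAGE_MWH = 800       # total battery storage for the three projects
PACK_MWH = 3            # approximate energy capacity of one Tesla Megapack
GENERATION_MW = 555     # combined solar generating capacity

packs_implied = STORAGE_MWH / PACK_MWH                 # about 267 Megapacks
hours_at_full_output = STORAGE_MWH / GENERATION_MW     # about 1.4 hours

print(f"Megapacks implied by 800 MWh of storage: ~{packs_implied:.0f}")
print(f"Hours of storage at full solar output: ~{hours_at_full_output:.1f}")
```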
Advertorial | Starline
Starline powers Fujitsu's data center expansion project
Fujitsu, working with a tight project timeline, needed to transform a car park into an operational data center in a matter of months.
In order to meet customer requirements, Fujitsu needed to build an extension onto a 3.2-megawatt data center it manages just north of London. When running additional circuits it was common for Fujitsu to supply two 32 amp supplies to each cabinet - however, there were occasionally other requirements that called for the sizable job of running cables between cabinets and PDUs. When this situation arises, the data center has to handle the cost of new cable and associated
labor, the risk associated with connecting to a live PDU, and completing this in a timely fashion in order to efficiently deliver the project to the customer. For Fujitsu’s expansion project the data center team wanted to bypass these challenges by implementing a flexible power solution that incorporated enhanced metering functionality. The data center team also wanted to avoid putting anything in the floor because that was how they delivered their cooling to the IT. “Our existing site was cabled straight back to the PDUs,” says Head of Data Center Development, UK & Ireland, Fujitsu, Simon Levey. “For our new data hall, we needed something that was flexible and adaptable to anything we might do in the future.” For the data center expansion project, Fujitsu drew up a list of requirements and ultimately chose Starline’s 250 amp Track Busway product as its overhead power distribution system. Throughout the evaluation process, Fujitsu visited Starline’s new 5,200 sq m (56,000 sq ft) manufacturing facility based in the UK, and worked closely with Starline’s local partner, Daxten. Having built a trusting relationship with Daxten, being able to rely on the supplier even after the project was completed was a strong deciding factor. In addition, manufacturing being located in the UK was convenient to fulfill future needs.
Another main factor that drove Fujitsu’s decision was its need to incorporate a flexible metering offering. It was important to the team that options for both wired and wireless metering, which could be directly integrated into the tap offs, were available. “Having a flexible metering option, where we could install wired or wireless meters was very useful,” adds Simon. Initially, the busway was able to be installed quickly to accommodate Fujitsu’s tight deadline. The alternative cabling method would have taken weeks to install— which wasn’t a feasible solution for the project. Furthermore, having the flexibility to easily add additional supplies to cabinets will be increasingly valuable in Fujitsu’s new space. “If we have the required tap offs on-site, we can just plug them in and within minutes have additional circuits up and ready for our customers,” says Simon. When asked about advice for others installing the Track Busway product, Simon stresses the importance of proper labeling and ensuring the orientation of the bus bars is optimized. Overall, end-users should ideally know their tap box requirements as soon as possible and rely on the resources of their local representatives to ensure successful implementation. To learn more about Starline visit www.starlinepower.com
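For a sense of scale, the figures mentioned above (a 250 amp track busway and a pair of 32 amp feeds per cabinet) can be turned into a rough cabinet count per busway run. The voltage and phase assumptions below (a 400V three-phase busway and 230V single-phase cabinet feeds) are ours, not Starline's or Fujitsu's published design, and real installations keep headroom and apply diversity rather than loading to nameplate.

```python
# Rough capacity arithmetic for the busway described above. Assumptions
# (not from Starline or Fujitsu): 400V three-phase busway, 230V single-phase
# 32A cabinet feeds, and loading to full nameplate rating with no headroom.

import math

BUSWAY_A = 250            # track busway rating from the article
BUSWAY_V_LL = 400         # assumed line-to-line voltage (UK three-phase)
FEED_A = 32               # per-feed cabinet supply from the article
FEED_V = 230              # assumed single-phase feed voltage
FEEDS_PER_CABINET = 2     # two 32A supplies per cabinet, as described

busway_kva = math.sqrt(3) * BUSWAY_V_LL * BUSWAY_A / 1000   # ~173 kVA per run
cabinet_kva = FEEDS_PER_CABINET * FEED_V * FEED_A / 1000    # ~14.7 kVA per cabinet
cabinets_at_nameplate = busway_kva / cabinet_kva            # ~12 cabinets

print(f"Busway capacity: ~{busway_kva:.0f} kVA per run")
print(f"Cabinet feed capacity: ~{cabinet_kva:.1f} kVA")
print(f"Cabinets per run at full nameplate: ~{cabinets_at_nameplate:.0f}")
```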
Starline Contact Details emea@starlinepower.com +44 (0) 1183 043180
Keppel's Alpha DC invests $213m in Chinese data center
Alpha Investment Partners is looking to build a 487,000 sq ft (45,000 sq m) data center in Huizhou, China. It will be located at the Tonghu Smart City, at a site owned by developer Huizhou Bike. Alpha will take over this company from its previous owner, the Chinese property giant Country Garden Holdings, which owns the entire Tonghu site. The data center will be managed through the Alpha DC Fund (a subsidiary of Alpha, itself a subsidiary of Keppel Capital) and will become the fund's first mainland Chinese asset. Country Garden will still be involved in the process since it owns the park, while Alpha's sibling Keppel Data Centres will be lending a hand in building the facility. The acquisition and build costs will amount to around RMB 1.5bn ($213m), and data center company Shenzhen Huateng Smart Technology will operate the facility when its first phase is complete in 2021.
bit.ly/Keppelinchina
Tencent plans $70bn spree on giant data centers and infrastructure for cloud and AI Backed by a Chinese government pledge to recover from Covid-19 Chinese technology giant Tencent plans a huge investment in ‘new infrastructure,’ including cloud computing and artificial intelligence. The company will spend 500 billion yuan ($69.9 billion) over the next five years, on projects including giant data centers with more than a million servers. Tencent SVP Dowson Tong told state media that other sectors included blockchain, supercomputer centers, Internet of Things operating systems, 5G networks, and quantum computing. The announcement comes after the Chinese government in May said that it would issue more debt to support “new infrastructure” projects, its term for technology such as AI, 5G, autonomous systems, and electric cars. Tencent said it would issue up to $20 billion of new bonds to raise capital. The company is best-known for its WeChat messaging
app, which has far more functionality than messenger platforms like WhatsApp - including being able to pay bills, order goods and services, transfer money to other users, and pay in stores. The platform, which has more than a billion active users, censors posts critical of the government, and may be used as a state surveillance tool. Tencent is also the world’s largest video game company, China’s largest music services provider, and one of the world’s most active tech investors. In cloud services, the company trails Alibaba however, with an 18 percent market share, compared to the e-commerce giant’s 46.4 percent, according to research firm Canalys. Behind it is Kingsoft, at 5.4 percent. bit.ly/Morethantencents
Alibaba to spend $28bn on cloud over three years Alibaba plans to spend 200 billion yuan ($28.2 billion) on its cloud infrastructure over three years, across data centers, operating systems, servers, chips, and networks. “The Covid-19 pandemic has posed additional stress on the overall economy across sectors, but it also steers us to put more focus on the digital economy,” Jeff Zhang, president of Alibaba Cloud Intelligence, said in a statement. The company operates 63 availability zones in 21 regions around the world, but has limited market penetration outside Asia. The company has also invested heavily into its own AI inferencing chips and RISC-V hardware. “I think cloud will be... the main business of Alibaba in the future,” CEO Daniel Zhang told CNBC in 2018. bit.ly/Alipayalot
Huawei to be out of UK 5G by 2027 Tough for telcos, but not tough enough for some Conservatives Six months after stating that Huawei could supply equipment to UK telecoms companies for their 5G networks, Britain’s Conservative government has changed course by banning Huawei by 2027. Companies will be able to buy Huawei 5G equipment until December 31, at which point it will be banned. Telcos then have until 2027 to remove Huawei 5G equipment entirely. 2G, 3G, and 4G kit can remain until it is no longer needed. Lord Browne of Madingley resigned from Huawei, ahead of the ban. The Telecoms Infrastructure Bill still faces opposition by a significant Conservative backbench that wants tougher restrictions on Huawei, including on older equipment.
The move comes after pressure from the US, which accused Huawei of spying and blocked the company from most semiconductor supply chains back in May. Huawei denies spying and hacking claims, but has admitted that restrictions on its supply chain have posed difficulties. A leaked GCHQ report, covered by The Telegraph, claims US sanctions will make Huawei's equipment unsafe. Huawei spokesperson Ed Brewster said: "[This] threatens to move Britain into the digital slow lane, push up bills, and deepen the digital divide... Regrettably, our future in the UK has become politicized, this is about US trade policy and not security."
bit.ly/Huaweigetstheboot
Tech companies grapple with Hong Kong's new Security Law
A law passed in Hong Kong threatens tech companies and their sensitive user data. Facebook, Google, Zoom, LinkedIn, and Twitter have temporarily blocked authorities' access to user data in response to the sweeping national security law. South Korea's largest Internet portal, Naver, moved data out of Hong Kong - opting for a site in Singapore instead. Equinix, which operates four Hong Kong data centers, told DCD it had no plans to leave the region. AirTrunk, Amazon, and Google declined to comment.
bit.ly/Securitythreat
US DoD names Inspur in list of Chinese military-linked companies Along with the less-surprising Huawei The US Department of Defense has published a list of 20 companies it claims are closely tied to the Chinese military. In the technology sector, the companies on the list include stateowned enterprises China Electronics Technology Group, China Mobile, and China Telecommunications Corp, and nominally private businesses Hikvision, Huawei, Inspur, Panda Electronics Group, and Sugon. The world’s third-largest server manufacturer, Inspur has mostly managed to escape the ire of US officials, and has previously been left out of public discussions over Chinese companies with uncomfortable state ties. So far, it has avoided sanctions that would restrict its ability to use the US chips that power all of its server products. The company ignored multiple requests for comment. Following the listing, chipmaker Intel briefly paused sales to the company, but they have since resumed. bit.ly/TechDecoupling
Google buys stake in Reliance Jio and invests $10bn into Digital India Joining Facebook and Intel in taking stakes in the telco Google plans to acquire a 7.7 percent stake in India’s biggest telco company, Jio Platforms, for $4.5bn. The holding company of Reliance Jio Infocomm announced the investment during its live-streamed Reliance AGM 2020 and the country manager and VP of Google India, Sanjay Gupta, covered the investment on Google’s blog. The investment is now pending a regulatory review. Google will work with Jio to make a new and cheap Android smartphone. In April, Facebook acquired a 9.99 percent equity stake in Jio Platforms for $5.7bn. After expanding rapidly with low cost data rates that left it heavily in debt, Jio has raised around $20bn since April from Qualcomm,
Intel, KKR, Silver Lake, Vista, and Mubadala, Abu Dhabi's sovereign wealth fund. Microsoft in 2019 announced a major partnership with the company to build data centers across the country. Separately, Alphabet CEO Sundar Pichai announced a 'Google for India Digitization Fund,' worth $10bn. The funds will be invested over the next five to seven years. "We'll do this through a mix of equity investments, partnerships, and operational, infrastructure, and ecosystem investments. This is a reflection of our confidence in the future of India and its digital economy," Pichai said.
bit.ly/Relianceindia
India bans TikTok, WeChat as tensions with China spiral
The Indian government banned TikTok, WeChat, and numerous other apps it claims pose a threat to national security. The decision comes after border clashes between the two superpowers left at least 20 Indian soldiers dead, along with an unknown number of Chinese soldiers. India's Ministry of Information Technology said it was banning 59 Chinese apps after receiving "many complaints from various sources" about apps that were "stealing and surreptitiously transmitting users' data in an unauthorized manner." Among the apps banned are microblogging platform Weibo, strategy game Clash of Kings, Alibaba's UC Browser, and Baidu's map and translation apps, along with numerous camera filters. The Indian government claims the apps sent user data to China, where it is accessed by authorities.
bit.ly/Borderclash
India’s Airtel sells 25 percent of its data center arm to Carlyle Group The proceeds will go to Bharti Airtel’s Indian expansion plans The Carlyle Group will invest $235m into India’s Bharti Airtel’s data center division Nxtra Data for a 25 percent stake in the business. Airtel will keep its remaining shares in Nxtra. The data center company is currently expanding its platform with multiple data centers in Chennai, Mumbai, and Kolkata. Nxtra will use the proceeds from this transaction to continue scaling up its infrastructure. According to Airtel, India is witnessing a ‘surge’ in demand for data center infrastructure. Airtel’s CEO, Gopal Vittal said: “Rapid digitization has opened up a massive growth opportunity for data centers in India and we plan to accelerate our investments to become a major player in this segment.” Carlyle is an American investment firm that specializes in corporate private equity, real assets, and private credit. bit.ly/IndiaDCexpansion
MILLIONS OF STANDARD CONFIGURATIONS AND CUSTOM RACKS AVAILABLE
THE POSSIBILITIES ARE ENDLESS... MADE IN THE USA
www.amcoenclosures.com/data
847-391-8100
an IMS Engineered Products Brand
Cover feature
Peter Judge Global Editor
Alex Alley Reporter
Well, that changes everything! Data centers adapted and - mostly - thrived in the pandemic, report Peter Judge and Alex Alley
"Never in my career did I dream that the supply chains for cleaning supplies would be absolutely critical to our operations," Digital Realty's head of procurement Brent Shinall told us in a July DCD keynote speech about the changes the Covid-19 pandemic has brought about within his company and the data center industry.
How big are those changes? Well, Shinall was speaking to a three-day web conference. DCD has had five global events since April, and they've all been online-only. The Covid-19 pandemic has changed many of our perceptions of value: cleaners and delivery workers are more important than managers; meetings are dangerous; travel is a risk we try to avoid. And digital infrastructure has been part of this.
Data centers have enabled digital commerce and online meetings, making digital infrastructure more important than ever before. That’s given the data center world a sense of achievement, and kept digital businesses operating and growing. But lockdowns have tanked economies round the world. Data centers are part of an ecosystem, supporting customers which pay for their services. Some industries - entertainment and hospitality, for instance - have been hit harder than others, and some data centers will suffer an impact. Some companies have reported they are preparing for customers that may have trouble paying their bills. Even if your tenants are online services booming in the lockdown, there could be long term issues. Zoom is profitable, but others are building their user base and still losing money. Others tell us that enterprise activity like data center consolidation is on hold. Physically moving servers, closing buildings and opening new space, is massively more complicated in the Covid-19 world.
Mergers and acquisitions are going ahead, along with data center construction, but much of that activity was started before lockdown, and the obvious difficulties with physical activities may slow down due diligence somewhat. Given these mixed fortunes, it’s no surprise that data centers, and other firms in the sector, have been granted government support to keep their services up and their staff employed (see box: A helping hand). Vital staff Those staff have certainly been dedicated, and operators have stories to tell. In hardhit New York City, DataGryd CEO Tom
"Take a vehicle. Go in and do what you need to do and then leave. There’s no need to stick around”
Brown says morale is good: “You take a step back and you say, ‘Well, I’m grateful that my health is okay and we’re hopeful that all employees and contractors remain healthy.’” Stack Infrastructure runs several hyperscale data centers with around a hundred technical staff. Chief data center officer Mike Casey says the staff are essential workers, and “a lot more important than anybody else on the whole Stack team.” Luckily data centers have been able to adapt their work patterns. DataGryd instituted its own “mini lockdown” (on March 13), reducing its onsite staff, and only letting visitors or staff on site when a task can’t be done remotely: “Technicians are just doing these longer light-load shifts or they’re on more rotations.” Staff travel also had to alter: “The good news is that we have folks that live in New York City and we are giving strict instructions to not take mass transit,” says Brown. “Take a vehicle. We have parking
right next to the building and they have to go in and do what they need to do and then leave. There's no need to stick around." Remote management has helped, he adds: "We were prepared for something like this, we have management systems that allow us to automate a lot of processes. You can check [server] temperatures remotely; you can check if there are any big spikes. Should there be any type of alarm, you can work on your laptop or your phone." Working from home was surprisingly easy, says DataBank CISO Mark Houpt: "We were impressed by the effectiveness for us to move online. We were concerned about the ability of systems like VPNs to sustain the traffic. We did stress testing and so far we've seen no glitches."

A helping hand
No one is invulnerable. Data centers are delivering an essential service, and have been a success story of the pandemic. However, they are part of an ecosystem, and some of their partners have been suffering. For this reason, some companies in the sector have made use of government support to enable their business to carry on and keep staff in jobs. In the US, data center companies that rely on local businesses, including H5 Data Centers, T5, Giga and Lifeline, have taken out Paycheck Protection Program (PPP) loans. According to published figures, H5 borrowed between $1m and $2m to help retain 64 jobs, while its Quincy-based data center subsidiary separately took out another $150,000-350,000 PPP loan for 25 jobs. T5 Data Centers borrowed between $5m-10m, but did not disclose the number of jobs retained. To save four jobs, Giga Data Centers took out between $150,000 and $350,000, while Lifeline Data Centers borrowed a similar amount, saying it would keep 30 jobs. Other US data center firms that took government support include 5Nines, Hostdime, ScaleMatrix PTS, Green House Data, US Signal, McAllen, Alchemy, Quasar and Tonaquint. Other data center specialists in related fields including consultancy, cooling system suppliers, and engineering services firms have also benefited from the US PPP program, which has handed out more than $500 billion to companies large and small since the lockdown started.
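The remote management DataGryd and DataBank describe - watching rack temperatures and alarms from a laptop or phone, and only travelling to site when hands are needed - boils down to a simple monitoring loop. The sketch below is a minimal illustration of that pattern; the rack names, thresholds, and alerting stub are invented, and real sites would use their DCIM or BMS platform's own APIs rather than this code.

```python
# Minimal sketch of remote threshold monitoring: poll rack inlet temperatures
# and raise an alert on a spike. Rack names and thresholds are illustrative;
# this is not any operator's actual tooling.

import time

WARN_C = 27.0   # illustrative warning threshold for inlet temperature
CRIT_C = 32.0   # illustrative critical threshold

def fetch_temperatures():
    """Return {rack_id: inlet_temp_C}. A real deployment would query the
    site's DCIM/BMS API here; hard-coded sample data stands in for that."""
    return {"rack-a01": 24.5, "rack-a02": 28.1, "rack-b07": 33.0}

def notify(message):
    """Stand-in for a pager, SMS, or phone-app push notification."""
    print(message)

def check_once():
    for rack, temp in sorted(fetch_temperatures().items()):
        if temp >= CRIT_C:
            notify(f"CRITICAL: {rack} inlet at {temp:.1f}C")
        elif temp >= WARN_C:
            notify(f"Warning: {rack} inlet at {temp:.1f}C")

if __name__ == "__main__":
    for _ in range(3):       # a real monitor would loop indefinitely
        check_once()
        time.sleep(1)        # real polling intervals are closer to a minute
```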
Cameron Wynne, DataFoundry's COO, says: "We had to review all of our work down to the physical nature versus things that we could do remotely. For their health, we had to reduce footfall at our data centers. If people need to be in, they work in different offices and socially distance."

Site sterilization
But some jobs just can't be done remotely. For those on site, there's increased hygiene, with staff using sterilizing wipes on any surface that may have been touched. Some companies are investing in ultraviolet lighting to kill bugs. Biometric security is going hands free, with eye scanners replacing fingerprint locks. At the first whiff of Covid-19, the entire facility is sterilized. Data centers have always had business continuity plans to weather natural disasters, and Stack adapted their plans to the pandemic, says Casey: "We made changes to protect our critical operations staff and our clients. We are taking temperatures as folks come through the door and increasing our janitorial tasks. We also have different rules for shift changeover. We used to overlap shifts, but not anymore, we now don't have overlaps just to keep the separate shift socially distanced." If it all goes wrong and one site has to send all the staff home, Stack has a plan to bus or fly people in from other parts of the US. Phillip Koblence, COO at colocation provider NYI, reckons it's down to communications and practical support: "One of the things that I have found, whether it's Covid-19 or Hurricane Sandy,
is that communication is key. So, making sure that the staff is aware that somebody is looking out for them is key. You want to make sure people are aware that they're able to take things like Uber or drive. Because they've got to also look out for their own families as well, you know." Despite self-preservation, staff know that important services rely on them. Like police officers or firefighters, they respond accordingly, said Koblence: "During things like hurricanes, when people are hunkering down, in our industry, you're running towards the data center." DataFoundry's Wynne echoes this: "When a hurricane comes to Houston, we drive into it. While the city is evacuating, we're taking back roads to go into our facilities to ensure that we keep our customers and our business up."
Some staff can't take risks though. DataFoundry set up a special "furlough pot" for employees who felt unsafe, but Wynne says not one employee took a cent: "We gave every single person the ability to, if they couldn't work from home, to not come in and have no penalty. They could stay home if they felt unsafe or if they felt precautions weren't met.
"And honestly, all of our folks wanted to come to work. They want to be here. They want to make their living. We all have a neighbor who lost their job or was furloughed and can't put food on the table. And so, our employees have simply chosen to come to work."

Team spirit
Part of the reason is team spirit, says Noel O'Grady, director of sales, Ireland, at business risk company Sungard AS: "Anybody who is part of the essential group tends to not have too many problems. I think it's because they feel like they're part of the A-team. In the past, we've seen mission-critical workers feeling like they're at the beating heart of the issue. They feel like they're special.
After this is over maybe they will go back to the office and say ‘yay we did it!’ Whereas everybody else will be saying ‘Well, you didn’t really need me!’” DataBank’s Mark Houpt agrees: “A number of our staff must work in the data centers. "We have not seen any of those folks fearful to come to work. We’ve seen people willing to step up and take extra shifts when it was necessary.” They are still cautious of course: “They wake up in the morning and if they have the mildest of symptoms related to Covid or someone they live with does, then they have to call in and keep everyone notified.” As the pandemic drags on, it’s clear that some of these measures will be longlasting, or even permanent. DataBank, Data Foundry, and DataGryd all told us they are looking at moving workloads away from the site, so facilities keep operating optimally with fewer personnel in the data center.
NYI had a head start, says Koblence: "We started embracing [remote working] fairly early on for a number of reasons. New York is a fairly expensive environment for human capital - it's just expensive to have employees live in the area - but now we can have specialists in Ohio supervising those systems."
In Pittsburgh, Data Foundry had to make changes when local authorities enforced lockdown measures. Police would give fines to anyone flouting curfew, and anyone caught traveling had to make sure they had a good reason to be out. Seeing the problem coming, Wynne
said the company shifted very early to remote working and monitoring: "Before the Governor started locking things down, we went ahead as a company, and decided to do as much as possible to get our folks working from home.
"There are lots of things that can be done remotely, telephone support, ticket support and a lot of customer operations can be done remotely."

Keep building
While all this is happening, there's a fresh surge in already brisk demand for online services, and more capacity is needed. "We're very busy," says Stack's Mike Casey.
“We have quite a few enterprise customers and depending on the business they’re in they may have been impacted by the virus. But we’re working with those customers and overall, our business continues to grow. We’ve got construction ongoing during Covid-19.” Construction projects obviously have to be safe, and those teams are isolated from the operations teams, says Casey: “You don’t want to be taking any risks with the ops guys. If one of them gets ill, the entire team who he or she is working with also has to go in isolation.” Some are even moving planned projects forward to keep up with demand. DataGryd had to move projects planned for Q4 2020 forward to Q3. NYI has had some increased demand, but it varies by sector, Koblence says: “We have customers that need to ramp up bandwidth and maybe add a cabinet or two. We have some customers that are part of the entertainment industry whose business has essentially shut down right now and then we have other services that are part of the healthcare industry asking us for an increase in capacity. “We have empty server boxes piled up near dumpsters because we’re getting in new gear,” says Houpt. “This is an example of what’s going on at DataBank right now, customers are
"Some customers may have been impacted by the virus... [But] our business continues to grow. We’ve got construction ongoing during Covid-19" desperate to increase their capacity. But other than that, it’s just business as usual.” Technicians on the ground are more aware of the exceptional circumstances. An NYI technician who did not want to be named said: “My family is proud that I’ve been able to work during these uncertain times and provide for them. "We’ve taken steps as a family to make sure we reduce the possibility of contracting the virus as well as passing it on to others. "We also check in on relatives and friends to see how everyone is doing. It helps to know people still care enough to check on the wellbeing of others.” Another admitted they had “a small fear that we could get sick with the virus, but overall pride.”
Supercomputers against Covid
The fight for our lives
Nations and industry have put rivalries aside to marshal the greatest force of computing power ever formed. But is it enough to beat the pandemic?
I
t’s the largest public-private computing partnership ever created, a grand collaboration against the challenge of our times. It hopes to minimize the impact of this crisis, help understand how Covid-19 spreads, and how to stop it. We’ve never had anything quite like it, and yet we still have so far to go. The Covid-19 High-Performance Computing Consortium began with a phone call. In the early days of the virus, IBM Research director Dario Gil used his White House connections to call the Office of Science and Technology Policy and ask how his company could help. Soon, the Department of Energy, home to most of the world's most powerful supercomputers, was brought in, along with NASA, all looking to work together. “The principles were outlined pretty quickly,” Dave Turek, IBM’s exascale head at the time, told DCD. “We need to amalgamate our resources. The projects have to have scientific merit, there has to be urgency behind it, it all has to be free, and it has to be really, really fast.” The group created a website and sent out a press release. "Once we made known we're doing this, the phone lit up," Turek said. Soon, all the major cloud providers and supercomputing manufacturers were on board, along with chip designers, and more than a dozen research institutes. Immediately, researchers could gain access to 30 supercomputers with over 400 petaflops of performance, all for free. It has since expanded to 600 petaflops, and is expected to continue to grow - even adding exascale systems in 2021, if need be. The next challenge was speed. There's little point in bringing all this power together if people can't access it, and with the virus spreading further every day, any wasted time would be a tragedy. The group had to quickly form a bureaucratic apparatus that could organize and track all that was happening, without bogging it down with, well, too much bureaucracy. “There’s three portions to it,” Under Secretary of Energy for Science Paul Dabbar explained. “There’s the executive committee chaired by myself and Dario Gil, which deals with a lot of the governance structure.” Then comes the Science Committee, which “reviews the proposals that come in, in general in about three days,” Dabbar told DCD. “A typical grant follow-up process, with submittals, peer review, and selection takes about three months. We can’t take that long.” After that, accepted research proposals are sent to the Computing Allocation Committee, which works out which systems the proposals should be run on - be it the 200 petaflops Summit supercomputer or a cloud
service provider’s data center. “Since the beginning of February, when we had the first Covid project get on Summit, we’ve had hundreds of thousands of node hours on the world's most powerful machine dedicated to specifically this problem,” Oak Ridge Computing Facility’s director of science Bronson Messer said. While Japan’s 415 petaflops Fugaku system this summer overtook Summit in the Top500 rankings, it isn’t actually fully operational - ‘only’ 89 petaflops are currently available, which is also being dedicated to the consortium. “My role is to facilitate getting researchers up to speed on Summit and moving towards what they actually want to do as quickly as possible,” Messer told DCD. Using such a powerful system isn’t a simple task, Oak Ridge’s molecular biophysics director Dr. Jeremy Smith explained: “It's not easy to use the full capability of a supercomputer like Summit, use GPUs and CPUs and get them all talking to each other so they're not idle for very long. So we have a supercomputing group that concentrates on optimizing code to run
“A typical grant followup process takes about three months. We can’t take that long” on the supercomputer fast,” which works in collaboration with IBM and Nvidia on the code. Research project principal investigators (PIs) can suggest which systems they would like to work on, based on specific requirements or on past experience with individual machines or with cloud providers. Summit, with its incredible power, is generally reserved for certain projects, Messer said. “A good Summit project, apart from anything else, is one that can really effectively make use of hybrid node architectures, can really make use of GPUs to do the computation. And then the other part is scalability. It really needs to be a problem that has to be scaled over many, many nodes. “Those are the kinds of problems that I'm most excited about seeing us tackle because nobody else can do things that need a large number of compute nodes all computing in an orchestrated way.” Smith, meanwhile, has been using Summit against Covid-19 since before the consortium was formed. "By the end of January, it became apparent that we could use supercomputers to try and calculate
which chemicals would be useful as drugs against the disease," he told DCD. He teamed up with postdoctoral researcher Micholas Smith (no relation) to identify small-molecule drug compounds of value for experimental testing. "Micholas promptly fell ill with the flu," Smith said. "But you can still run your calculations when you've got the flu and are in bed with a fever, right?" The researchers performed calculations on over 8,000 compounds to find ones that were most likely to bind to the primary spike protein of Covid-19 and stop the viral lifecycle from working, posting the research in mid-February. The media, hungry for positive news amid the crisis, jumped all over the story - although many mischaracterized the work as for a vaccine, rather than a step towards therapeutic treatment. But the attention, and the clear progress the research showed, helped lay the groundwork for the consortium. We're not yet at the stage where a supercomputer could simply find a cure or treatment on its own, they are just part of a wider effort - with much of the current research focused on reducing the time and cost of lab work. Put simply, if researchers in 'wet labs' working with physical components are searching for needles in an enormous haystack, Smith's work reduces the amount of hay that they have to sift through "so you have a higher concentration of needles," he said. "You have a database of chemicals, and from that you make a new database that's enriched in hits, that's to say chemicals.” In Smith’s team, “we have a subgroup that makes computer models of drug targets from experimental results. Another one that does these simulations on the Summit supercomputer, and a docking group which docks these chemicals to the target - we can dock one billion chemicals in a day.” While computing power and simulation methods have improved remarkably over the past decade, “they still leave a lot to be desired,” Smith said. “On average, we are wrong nine times out of 10. If I say ‘this chemical will bind to the target,’ nine times out of 10 it won't. So experimentalists still have to test a good number of chemicals before they find one that actually does bind to the target.” Sure, there’s a large margin of error, but it’s about bringing the number down from an astronomically greater range. “It reduces the cost significantly,” Dr. Jerome Baudry said. “It costs $10-20 dollars to test a molecule, so if you've got a million molecules to screen, it's gonna cost you $20 million, and take a lot of time - just trying to find the potential needle in the haystack,” he said. “Nobody
does that at that scale, even big pharmas. It's very complicated to find this kind of pocket money to start a fishing expedition." Instead, the fishing can happen at a faster, albeit less accurate, rate on a supercomputer. In normal times, such an effort is cheaper, but right now "it's entirely free". For Baudry, this has proved astounding: "When, like me, you are given such a computer and the amazing expertise around it and no one is asking you to send a check, well, I don't want to sound too dramatic, but the value is in months saved." At the University of Alabama in Huntsville, Baudry is working on his own search - as well as collaborating with Smith's team. "We're trying to model the physical processes that actually happen in the lab, in the test tube, during drug discovery," Baudry said. "Discovery starts with trying to find small molecules that will dock on the surface of
a protein. A lot of diseases that we are able to treat pharmaceutically are [caused by] a protein that is either not working at all, or is working too well, and it puts a cell into hyperdrive, which could be the cause of cancer." Usually, treatment involves using "small molecules that will dock on the surface of the protein and restore its function if it's not working anymore, or block it from functioning too quickly if it's working too fast," Baudry said. Essentially, researchers have to find a key that will go into the lock of the protein that "not only opens the door, but opens the door at the right rate in the right way, instead of not opening the door at all, or opening the door all the time." The analogy, of course, has its limits: "It's more complicated than that because you have millions of locks, and millions of keys, and the lock itself changes its shape all the time," Baudry said. "The proteins move at the picosecond - so thousandths of billionths of a second later in the life of a protein, and the shape of the lock will be different already." All this needs to be simulated, with countless different molecules matching against countless iterations of the protein. "So it's a lot of calculations indeed - there's no way on Earth we could do it without a supercomputer at a good level." Using HPE's Sentinel supercomputer, available through Microsoft Azure, Baudry's team has published a list of 125 naturally occurring products that interact with coronavirus proteins and can be considered for drug development against Covid-19. Access to the
system, and help on optimizing its use, was provided for free by HPE and subsidiary Cray. “Natural products are molecules made by living things - plants, fungi, bacteria; sometimes we study animals, but it's rare.” Baudry is looking at this area, in collaboration with the National Center for Natural Products, as they are “molecules that already have survived the test of billions of years of evolution.” Coming up with an entirely new drug is always a bit of a gamble because “it may work wonderfully well in the test tube, but when you put it in the body, sometimes you realize that you mess with other parts of the cell,” he said. “That’s the reason why most drugs fail clinical trials actually.” Natural products have their own drawbacks, and can be poisonous in their own way of course, but the molecules “are working somewhere in some organism without killing the bug or the plant - so we’ve already crossed a lot of boxes for potential use.” Baudry’s team is the “only group using natural products for Covid-19 on this scale, as far as I know,” he said. “Because this machine is so powerful, we can afford to screen a lot of them.” All around the world, researchers are able to screen molecules, predict Covid’s spread, and simulate social distancing economic models with more resources than ever. “I remember in the first few weeks [of the virus], we did not really think about what was happening to price-performance and all of that - we kept getting requests, and we kept offering GPUs and CPUs,” Google Cloud’s networking product management head Lakshmi Sharma said. The consortium member has given huge amounts of compute to researchers, including more than 80,000 compute nodes to a Harvard Medical School research effort. “We did not think about where the request was coming from, as long as it helped with the discovery of the drug, as long as it helped contact tracing, or with any of that.” She added: “We just went into this mode of serving first and business later.” It’s an attitude shared by everyone in the consortium - and off the record conversations with researchers hammered home that such comments were not just exercises in PR, but that rivals were truly working together for the common good. “Government agencies working together that's pretty standard, right?,” Undersecretary Dabbar said. “Given how much we fund academia, us working with the MITs and the University of Illinois of the world is kind of everyday stuff, too.” But getting the private sector on board in a way that doesn’t involve contractual agreements and financial incentives “has
not been the normal interaction,” he said. “Google, AWS, HPE, IBM, Nvidia, Intel - they're all now working together, contributing for free to help solve this. It's really positive as a member of humanity that people who are normally competitors now contribute for free and work together. I think it's amazing.” Such solidarity goes beyond the corporate level. While the consortium was started as a US affair, it has since officially brought in South Korean and Japanese state research institutes, has a partnership with Europe’s PRACE, and is teaming up with academic institutions in India, Nepal, and elsewhere. “Some of this was proactive diplomacy,” Dabbar said. “We used the G7 just to process a lot of that.” But there remains a major supercomputing power not involved in the consortium: China. Here, it turns out that there are limits to the camaraderie of the collaboration. “I think we are very cautious about research with Communist Party-run China,” Dabbar said. “We certainly have security issues,” he added, referencing reports that the country was behind hacks on Covid-19 research labs. “But I think there's a broader point, much broader than the narrow topic of Covid and the consortium - the US has a culture of open science, and that kind of careful balance between collaboration and competition, that came from Europe in the mid to late 1800s,” he said. “It turns out that a certain government is running a certain way looking at science and technology very differently than the model we follow, including non-transparency and illicit taking of technology. It's hard [for us] to work with an entity like that.” Chinese officials have denied that their nation is behind such attacks. Earlier this year, the country introduced regulations on Covid-19 research, requiring government approval before researchers could publish results. Some claim the move was to improve the quality of reports, others say it's a bid to control information on the start of the pandemic. China’s national supercomputers are being used for the country's own Covid research efforts, while some Chinese universities are using the Russian 'Good Hope Net,' which provides access to 1.3 petaflops of compute. But as much as this story is about international cooperation and competition, about football-field-sized supercomputers, and billion-dollar corporations - it is also about people. The men and women working in the labs and on supercomputers face the same daily toll felt by us all. They see the news, they have relatives with the virus; some may succumb
themselves. Many of the researchers we spoke to on and off record were clearly tired, none were paid to take part in the consortium (where members meet three days a week), and many had to continue their other lab and national security work, trying to balance the sudden crisis with existing obligations. All those DCD spoke to were keen to note that, while professionally astounding, the collaboration and access to resources were a small plus when set against the tragic personal impact of Covid-19 being felt around the world. But they hope that their work could build the framework to make sure this never happens again. “There's been an ongoing crisis in drug development for years,” Jim Brase, the deputy associate director for computation at Lawrence Livermore National Laboratory, said. “We're not developing new medicines.” Brase, who heads his lab’s vaccine and antibody design for Covid, is also the lead for an older consortium, the Accelerating Therapeutics for Opportunities in Medicine group. ATOM was originally set up to
focus on cancer, and while that remains a significant aspect of its work, the partnership has expanded to “building a general platform for the computational design of molecules.” In pharmaceutical research, “there are a lot of neglected areas, whether it's rare cancers, or infectious disease in the third world,” he said. “And if we build this sort of platform for open data, open research, and [give academia] drug discovery tools to do a lot more than they can do today, tools that have been traditionally sort of buried inside big pharma companies… they could work on molecular design projects for medicines in the public good.”
Right now, much of the Covid drug research work is focused on short term gains, for an obvious reason - we’re dealing with a disaster, and are trying to find a way out as soon as possible. “ATOM will become the repository for some of the rapid gains we've made, because of the focus we've had on this for the last few months,” Brase said. “I think that will greatly boost ATOM, which is longer-term.” The aim is to turn ATOM into a “sustained effort where we're working on creating molecule sets for broad classes of coronaviruses,” Brase said. “Where we understand their efficacy, how they engage the various viral targets, what their safety properties are, what their pharmacokinetic properties are, and so on. Then we would be primed to be able to put those rapidly into trials and monitor those.”
To fulfill this vision requires computation-led design only possible on powerful supercomputers. “That's what we're trying to do with this platform,” he said. “We have demonstrated that we can do molecular design and validate results of this - on timescales of weeks, not years. We are confident that this will work at some level.” Big pharma is pushing into computationally-led design, too. “But they have a really hard time sustaining efforts in the infectious disease target classes - the business model doesn't work very well.” Instead, ATOM - or a project like it - needs to be run in the public interest, as a public-private partnership “that actually remains focused on this and is working on broad antiviral classes, antibacterial agents, new antibiotics, and so on,” Brase believes. “I think in two or three years, we can be in a much, much better shape at being able
to handle a rapid response to something like this,” he said. “It's not going to be for this crisis, unfortunately, although we hope this will point out the urgent necessity of continuing work on this.” IBM’s Turek also envisions “the beginning of this kind of focused digital investigation of theoretical biological threats: If I can operate on the virus digitally, and understand how it works and how to defeat it, then I don't have to worry about it escaping the laboratory - it makes perfect sense if you think about it.” But in a year where little has made sense, where governments and individuals have often failed to do what seems logical, it’s hard to know what will happen next. The question remains whether, when we finally beat Covid-19, there will be a sustained undertaking to prevent a similar event happening again - or if old rivalries, financial concerns, and other distractions will cause efforts to crumble. Turek remains hopeful: “If somebody came up with a vaccine for Covid-19 tomorrow, I don't think people would sit back and say, ‘well, that's done, let's go back to whatever we were doing before’... Right?”
Waking a sleeping giant

Evoque's CEO discusses the AT&T legacy, and what it's like to join a business in the middle of a pandemic

Peter Judge Global Editor

Back in 2018, telecoms giant AT&T had some 31 data centers in 11 countries. Like a lot of other telcos, it had built up a colocation business over a period of years. Now, like most of those other telcos, it decided to sell them off and focus back on its core business. Andy Stewart is head of Evoque, the data center provider which investor Brookfield Infrastructure constructed when it bought AT&T’s portfolio for $1.1 billion at the end of 2018. He’s replaced the launch CEO, Tim Caulfield, and he thinks it’s time we sat up and took notice of the new kid with the established pedigree.

“It’s a billion-dollar startup,” Stewart told DCD. “Brookfield bought 30 data centers around the globe, with 1,000 customers and a great ops team. But the rest of the business had to be created around those assets.” Stewart has been in data centers since 2008, and helped create a successful “roll-up” company, TierPoint, steering it through billions of dollars worth of acquisitions and financing. He was the initial CFO at Cequel, in 2010, which became TierPoint in 2012, and was part of the management team which led an investment-backed buyout in 2014. By the end of 2018, TierPoint was approaching 40 data centers, and he moved on to become a private equity advisor. Now he’s back running data centers, and Evoque is both a blank canvas and a going concern: “It’s still new in so many ways, with growing pains, and I have a chance to put my stamp on it.

"We're still getting our feet under us, but my goal is to double or triple the size of our business in the next five years, both through acquisition, as well as organic build." The backing of Brookfield gives the potential for organic growth and acquisition: “Knowing their scale and size, and where they want to go in infrastructure is exciting - knowing we have a partner with billions of dollars in capital.” It’s also long term: “Traditional private equity looks at the business quarter on quarter, but Brookfield’s horizon is decades. We’re not thinking about how to maximize returns this quarter, but how do we maximize returns over the next five to ten years. It’s very patient capital, which is exciting to be a part of.” Unlike other large data center startups, Evoque is not going after big wholesale customers. “We are not targeting hyperscale. That’s an overcrowded market: If we tried to compete with Digital [Realty], QTS, CoreSite, or Cyrus One, that wouldn’t be a wise use of the assets. By taking a little bit of a contrarian approach and going after multinational enterprises we’ll stand out, not by chasing deals on a cost of capital basis.”
It’s also not set up as a public REIT (real estate investment trust): “That gives us a bit more flexibility, if we go down the avenue of added services,” he said, explaining that REITs are required to have standard services across geographies. One big differentiation is Evoque’s global nature, compared with some providers: “We have got a great presence in Asia and Europe, and many of our customers are in multiple countries with us.” Evoque is planning to be opportunistic, however. Covid-19 has sparked a financial downturn, and this may make acquisitions and mergers possible. “As things fall out from the epidemic, that could be an opportune time for us to look at M&A.” Although the data center business is still growing in the pandemic, he sees an effect on relations with existing customers, alongside deals to bring in new ones: “With existing customers we’ve seen a speed-up of activity, but with net new customers, we’ve seen a bit of a pause. In March and April, everybody just stopped what they were doing in terms of new business, whereas existing customers had a lot going on and needed a lot more support.”
That increase was down to shifting work patterns: “We’ve been seeing a speedup in requirements to meet end-user demands with increasing Internet traffic and the need to support users at home. At the same time, we’ve seen a decrease in site visits. Customers aren’t coming to us, so they’re relying on us more for remote hands, an offering we have at all our facilities.” The AT&T legacy has given Evoque an experienced team ready to handle this change. “The vast majority of our ops team came across from AT&T, and some have been with these assets for 20 plus years. They know those data centers inside out. “Operationally, the only adjustment we’ve had to make is more remote hands work, and fewer site visits. But those two kind of offset each other, so we’ve been able to be as efficient as before.”
Rosters were changed to keep from spreading the virus, but that was no big deal: “Operationally, data centers don’t require a ton of people.” Having global sites was also a help: Asia was hit by the pandemic first, and Evoque's US and European staff could benefit from the experience of its Asian workers. Evoque hasn’t had any trouble with getting staff into work during the pandemic, said Stewart, but he’s keeping an eye open for political disruptions: “We are in Hong Kong, and we are preparing in case something happens there.” Now lockdowns are easing, and new business seems more possible: “We are starting to see the virtual data center tours picking up. Net new workloads and migration will pick up again. When the economy opens up and people start to move, there will be a large pent up demand. I’ve heard this from real estate brokers: there’s large enterprise demand waiting for Q4 2020 or the first quarter of 2021.” Most of Evoque’s existing customers, and those it’s targeting, will have between one and five MW of capacity. “We will go above and below that, but that’s our sweet spot.” In some ways, though Evoque is a new player, it's continuing AT&T’s strategy: “We really are targeting larger enterprise, multinational enterprise customers that have large requirements, maybe in the top US cities, but ideally with a global requirement.” And of course, one of these customers is AT&T itself, although the company has diversified: “I know they have PoPs around the world, and we’re not always there, but I believe we are their primary provider.” Evoque’s customers may want space and power in the US and UK and Germany and Singapore. Evoque can give it to them, with the flexibility to shift it between locations: “It’s very difficult for customers to predict
colo requirements. They have one umbrella agreement so, as they get more accurate data on usage, if they deploy in Ashburn, but need more capacity in Singapore, they can shift that spend from one to the other. “That ability to help our customers support their end users is something which differentiates us from a lot of other providers.”
Sometimes this may mean a customer has as little as 50kW capacity in one location and more elsewhere. “It’s only us and Equinix that can do that.” There’s another similarity with Equinix. Most of Evoque’s existing customers also go to AT&T for their networking, so it’s a connectivity play: “That’s why they chose these data centers, and we think that helps make customers stickier.” Equinix also majors on connectivity - but there’s a big difference: “We don’t have the same connectivity-rich carrier hotels as Equinix. So we are going to make an effort to add carrier diversity. We have a carrier team that’s actively getting additional carriers into our sites, and in general we are
adding four-plus carriers a year to each of our data centers. We’re not at the end, but we have a rich carrier diversity.” Stewart plans to grow Evoque, more by acquiring than building data centers. “We are not as big as we intend to be long term. We want to have facilities that will support large deployments over time.”
And those acquisitions are likely to be in competitive markets: “London is a place where we have a presence, but we’re actively looking for a way to acquire a larger, more scaled-out data center to help us grow in the future. And we can use our customer base to migrate over and make the economics more appealing.
“In Singapore, we have a nice presence, we’ve got a great customer base, but we don’t have our own data center in Singapore. We’ll actively look for something where we can be a focused presence in these markets. Brookfield wants us to be in a market and be all in.
“AT&T had its spots in Singapore and London and that was fine for them, but there wasn’t an active effort to expand.”
The existing facilities are in good shape: “They were all built to be Tier III with 2N configuration, and we’ve got great statistics around reliability. We have the capex to make sure they’re up to speed for the next decade. There are probably some cosmetic things we can do, but it’s not a wholesale change-out of the assets.”
But Stewart has some upgrading to do on Evoque’s stance on global issues. While the company supplies a lot of renewable power to many of its sites, it has not made a big public commitment to phase out fossil fuels. However, it’s on the agenda of its strategy meetings, and a move in this direction would fit well with its investor: “Brookfield has a large renewable practice, with tens of millions of dollars invested in renewables.”
He’s also raising diversity: “We are driving to be more diverse and inclusive as an organization. That’s something I talked about on my first day. The data center industry tends to be overindexed towards white males, and we are no exception. We have a diversity and inclusion task force, created the first week I joined.
“Everybody gets into the habit of recruiting from the same places, and when you do that, you recruit the same kinds of people. So we need to find ways to broaden our scope and change the places we are looking to. We think that a concerted effort will help us be more inclusive over time.”
This is more than just a statement, he says: “We are not just looking to hire more people of color. We are doing it because a McKinsey report says companies that are more diverse perform 40 percent better. There are real benefits to being more diverse, not just looking good in a press release.”
Looking at synergies with the rest of Brookfield, Stewart can imagine Evoque working with its Latin American subsidiary Ascenty, as well as its Australian property DCI, but has another big idea at the back of his mind, based on Brookfield’s real estate. “As Evoque changes over time, looking at services, we might get into Edge compute,” he said. “In Atlanta, where we have a great data center, Brookfield has properties downtown. Those could be Edge nodes, and ours could be the hub.”
Recycled bits save energy The secret to unleashing more computer power for very little energy cost could be in the laws of thermodynamics
Peter Judge Global Editor
Reversible computing
Data center operators have learnt to scrutinize their buildings for wasted energy. They understand that the way to efficiency is to follow the basic lessons of thermodynamics: minimize the effort spent removing heat - and if possible avoid creating it in the first place. They know it’s cool to be “adiabatic”: to avoid transferring heat to one’s surroundings. And they compete to get PUE figures less than 1.1 - which is more than 90 percent efficiency in getting power to the computers. But the servers in the racks are letting the side down. Awesome as their processors are, their CMOS circuitry is actually way less efficient than the CRAC units and power transformers they share a home with. And the technology is reaching its limits. The IT industry has triumphantly ridden on the back of Moore’s Law: the phenomenal fact that every two years, the number of transistors on a chip doubled… for more than forty years. This delivered
continuous improvements in computing power and energy efficiency, because of the related Koomey’s Law which says that energy requirements will fall at the same rate. But everyone knows this is coming to an end. Chip makers are struggling to make a 5nm process work, because the gates are so small there is quantum tunneling through them. Looking ahead, we might be able to use around 100 attojoules (100 × 10⁻¹⁸ J) per operation, according to Professor Jon Summers, who leads data center research at RISE, the Research Institutes of Sweden, but there’s an even lower theoretical limit to computing energy, which derives from work by Rolf Landauer of IBM in the 1960s. Landauer observed that, even if everything else in a computer is done completely efficiently, there’s an apparently inescapable energy cost. Whenever there’s an irreversible loss of information - such as erasing a bit - entropy increases, and energy is turned into heat. There’s a fundamental minimum energy required to erase a bit, and it is tiny: just under 3 zeptojoules (3 × 10⁻²¹ J), which is kB·T·ln 2, where kB is the Boltzmann constant and T is the temperature in kelvins. Physicists like to use electron volts (eV) to measure small amounts of energy, but even on that scale it’s tiny: 0.0175 eV. “On that basis, computing is only 0.03 percent efficient,” said Summers, a shocking comparison with efficiencies above 95 percent claimed for some of the mechanical and electrical equipment these computers share a building with. Can computers catch up? In their glory days, Moore’s and Koomey’s laws projected that we might reach the Landauer limit by 2050, but Summers thinks that’s never going to happen: “You can’t get down to three zeptojoules because of thermal fluctuations.”
But if you can’t reduce the energy required, your technology hits limitations: when you miniaturize it, the power density goes through the roof. Bipolar transistor-transistor logic (TTL) was used for early computers and continued in use till the IBM 3081 of 1980, but as it was miniaturized, it generated so much heat it needed water cooling. The newer CMOS technology rapidly replaced TTL in the 1980s because it used 100,000 times less energy, said Summers: “The heat flux went down, the need for liquid cooling disappeared, and they could stick with air cooling.” Now, as CMOS has shrunk, the heat density has increased: "We’ve gone up that curve again, three times higher than in TTL.” Water cooling is coming back into fashion. As before, other people are looking for alternative technologies. And, as it happens, there is an often overlooked line of research which could drastically reduce the heat emission - sidestepping the Landauer limit by questioning its assumption that computing involves overwriting data. Landauer assumed that at some point data had to be erased. But what if that were not true? That’s a question which Michael Frank of the US Sandia National Laboratories has been asking for more than 25 years: “A conventional computer loses information all the time. Every logic
gate, on every clock cycle, destructively overwrites old output with new output. Landauer’s principle tells you, no matter how you do those operations, any operation that overwrites memory has to dissipate some energy. That’s just because of the connection between information and entropy.” Yves Lecerf in 1963, and Charles Bennett in 1973, both pointed out that Landauer’s assumption was mistaken. In theory, a computer did not need to erase a bit, as the erasing part wasn’t mathematically required. Back in 1936, Alan Turing had proved that any computation could be done by a device writing and erasing marks on a paper tape, leading to the stored program model followed by all computers since. Turing’s universal machine was not reversible, as it both reads and erases bits (Turing was thinking about other things than entropy). Lecerf and Bennett proved any Universal Turing machine could be made reversible. In the 1970s, Richard Feynman followed this up, noting that there is no lower limit to the energy required for a reversible process, so in principle a reversible computer could give us all the computing we need while consuming almost no energy! However, Feynman pointed out a big drawback that followed from the physics of these systems. To be reversible or “adiabatic,” an operation should take place in thermal equilibrium. To stay in thermal equilibrium, those processes must operate really, really slowly. So the system could be infinitely efficient, at the cost of being infinitely slow. A small number of physicists and computer scientists have been brainstorming for years, looking for reversible technologies which might operate adiabatically or nearly adiabatically - but not take an infinite time over it. In 1982, Edward Fredkin and Tommaso Toffoli at MIT designed reversible logic gates… but this is theory, so they based them on billiard balls. Physicists like to use classical mechanics as a model, and the pair considered a switch where hard spheres entered a physical box and had elastic collisions. Where they bounced to comprised the output - and in theory the switch could work adiabatically in real time. The trouble is, you can’t get a system aligned infinitely precisely with zero friction, any more than you can eliminate the thermal noise and tunneling in an electronic system. Fredkin and Toffoli offered an electronic alternative, based on capacitors and inductors, but that needed
zero resistance to work. Researchers began to work towards reversible circuits in CMOS, and fully reversible circuits began with Saed Younis working in Tom Knight’s group at MIT. At the same time, other groups designed mechanical systems, based on rods and levers. Ralph Merkle at the Institute for Molecular Manufacturing, Palo Alto, designed completely reversible nano-scale machines, based around moving tiny physical bars. Others worked on quantum dots - systems that use single electrons to handle information, at very low temperatures. All these approaches involve tradeoffs. They can be completely reversible, but they would require utterly new manufacturing methods, they might
involve a lot more physical space. Low temperature systems have an associated energy cost, and - as Feynman pointed out - some reversible systems work very slowly. As things stand, the IT industry has chosen a path that prioritizes results over efficiency. That’s nothing new. Canals are fantastically efficient, allowing goods to be floated along with very little friction, pulled by a horse. Yet, in the 19th century, the industrial revolution in Britain chose railways, powered by burning coal, because railways enabled transport to any destination quickly. Classical reversible computing can potentially save energy while doing conventional, general-purpose computing: “A good analogy is the difference between throwing trash away and recycling,”
Frank told DCD. “It’s easier to put it in the landfill than to transform it, but in principle, you could get a saving.” When we overwrite bits, he said, we let the information turn into entropy, which generates heat that has to be moved out of the machine. “If you have some information that you have generated, and you don’t need it anymore, the reversible way to deal with it is to ‘decompute’ it or undo the physical operations that computed it. In general you can have an extra tape, where you temporarily record information that you would have erased, and then decompute it later.” Decomputing is a bit like regenerative braking in a vehicle - getting back the energy that has been put in.
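To make the idea of computing without erasure concrete, here is a toy sketch of the logical behavior of the Toffoli (controlled-controlled-NOT) gate, one of the reversible gates associated with the Fredkin-Toffoli work mentioned earlier. It illustrates only the principle, not an adiabatic hardware design: because the gate is its own inverse, running it a second time “decomputes” the result and recovers the original bits, so nothing is ever erased.

```python
# Toy illustration only: the logic of a reversible Toffoli (CCNOT) gate.
def toffoli(a: int, b: int, c: int) -> tuple[int, int, int]:
    """Pass control bits a and b through unchanged; XOR (a AND b) onto target c."""
    return a, b, c ^ (a & b)

for state in [(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 1, 1)]:
    once = toffoli(*state)    # compute
    twice = toffoli(*once)    # "decompute": the same gate undoes itself
    assert twice == state     # nothing was overwritten, so nothing is lost
    print(state, "->", once, "->", twice)
```

Contrast this with an ordinary AND gate, whose two inputs cannot be reconstructed from its single output - that lost information is exactly what Landauer’s principle charges for.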
Computing chose CMOS because it works, and since that choice was made, reversible computing has been ignored by mainstream research. As one of the MIT group in the 1990s, Frank helped create some of the prototypes and demonstrations of the concepts. He’s stayed true to its promise since then, patiently advocating its potential, while watching investment and media focus go to more exciting prospects like quantum computing. Since 2015, as a senior scientist at the Sandia National Laboratories, he’s established reversible computing as a possible future direction for IT. Through the IEEE, he’s contributed to the International Roadmap for Devices and Systems (IRDS), a series of documents which chart the likely developments in semiconductors - and reversible computing is now on that map. There’s lots of excitement about other technologies on the list such as quantum computing, which offers potentially vast increases in speed using superpositions of quantum states. Google and others are racing towards “quantum supremacy,” but quantum may have limits, said Frank. “Quantum computing offers algorithm speed ups. It boils down to specific kinds of problem. It has so much overhead, with very low temperatures and massive systems. The real-world cost per operation is huge - and it will never run a spreadsheet.” Frank is clear about the tradeoffs in reversible computing. Saving bits takes up space. “It’s a tradeoff between cost and energy efficiency,” he told us - but as chips approach their current limitations, that space could become available. Because today's chips are now generating so much heat - the problem Summers noted earlier - they are often riddled with "dark silicon." Today’s chips have more gates on them than they can use at any one time. They are built as systems on a chip (SoCs) that combine different modules. Said Summers: “A multicore chip might have a budget of 100W. In order to not exceed that you can only switch on so many transistors.” Silicon providers are giving us gates we can’t use all at once. “We’ve been duped by the manufacturers,” said Summers. Frank thinks this dark silicon could be used to introduce some aspects of reversible computing. “We could harness more of the transistors that can be fabricated on a given chip, if we have them turned on and doing things adiabatically.” Space on chips that would otherwise be under-utilized could be given over to adiabatic switches - because they won’t use significant power and cause heating. Adiabatic circuits made in CMOS are still in their infancy and at present have some limitations: “Adiabatic CMOS gets better at low temperatures, and is especially good at cryogenic temperatures,” he said. Perhaps ironically, a field which needs electronics that work at low temperatures is the well-funded area of quantum computing. The quantum bits or “qubits” need to be kept at cryogenic temperatures, and any heat generated in the surrounding control circuits must be removed. It could be that early adiabatic silicon will be developed for supporting circuitry in quantum computing projects, said Frank: “That may end up being one of the first commercial applications of adiabatic logic - you gain some benefits from operating at low temperature.” Getting it to work well at room temperature will require more work and new kinds of component, including resonators which absorb energy and recover it, providing the “regenerative brakes” that decomputing needs. “The energy required for data processing is never going to be zero,” said Frank. “No system is perfectly efficient. We don’t yet know the fundamental limits to
how small the losses can become. More generally, we don’t know the fundamental limits to reversible technologies.” Frank’s group is starting to get funding, and access to fabs for custom chips, but it’s slow: “We need more engineers working on these kinds of ideas. We have the funding to make test chips. It’s possible to make these through commercial foundries, too. That’s another avenue we can pursue.” However it happens, we know that something like this is needed, as classical CMOS approaches the end of its useful development. “We’re getting closer to the point where you can’t go further,” said Summers. “Will the semiconductor industry stagnate, and people lose interest in it? Or will it do other things?” Frank and Summers agree that in the next ten years or so there will be a gap between the end of CMOS and whatever the next technology is. “No one knows what will fill that gap at this point,” said Frank. “A wide variety of things are being investigated. There’s no clear winner. I would not be surprised if there’s a period of stagnation. It takes quite a while - about ten years - for a technology to go from working in the lab to mass production.”
Earth's twin An EU project hopes to rebuild the world in virtual space, but the concrete challenges may prove a little too real
Sebastian Moss Deputy Editor
The Earth's Twin
The European Union is embarking on an ambitious project to create a digital twin of the world. Destination Earth seeks to build this second version of the planet - starting with smaller simulacra of specific areas and planetary components, with an eye to eventually expanding. But the initiative, which has not been previously reported on, faces significant hurdles as it grapples with challenges at a truly global scale. “Digital twins are about developing a digital replica of a physical entity,” Andreas Veispak, Head of Unit in charge of eInfrastructure and Science Cloud in the Directorate-General for Communications Networks, Content and Technology, told DCD. The first digital twin was proposed in 2001 by Professor Michael Grieves, who fleshed out the idea and officially coined the term a year later. "The concept was to underline product lifecycle management," Grieves explained to DCD. "So the idea was that we basically could have a product-centric perspective of things that existed in connection with the physical stuff." The idea was immediately a huge hit in the automotive and aerospace sectors, and has since spread to factories and other industrial settings. “Destination Earth will aim to do this, not in the context of manufacturing or industry, but in the context of Earth systems,” Veispak explained. As we hurtle ever closer to a climate catastrophe, such concerns will be first and foremost in the project, “with the broad objective of allowing us to continuously monitor the planet's health,” Veispak said. Destination Earth will build on three European Commission policies: the Green Deal, the Data Strategy, and the Digital Strategy. Crucial to its development is the EU's Earth observation program Copernicus, managed in partnership with the European Space Agency. “With Copernicus, we have generated a large amount of data that is really helping to understand the situation of our planet; we can provide products on the management of the land, on the quality of the air, on the quality of the oceans, and so on,” said Mauro Facchini, Head of the Unit in charge of Earth Observation in the Directorate-General for Defence Industry and Space. This data will help form the core of Destination Earth, although the program plans to use other sources of information, and allow the EU’s members to add their own data.
Worlds apart
Research institutes, nations, and corporations are all trying to understand and replicate our planet in supercomputer form. The US government's Energy Exascale Earth System Model plans to use upcoming exascale systems to simulate the Earth system, improving how we can predict weather events in the future. This summer, the DOE awarded $7m in research contracts to nine universities as part of the E3SM project - with results expected in the next three years. Another project to watch is Microsoft's 'Planetary Computer,' which aims to aggregate environmental data and use machine learning to create a comprehensive model. We planned to include the project in this feature, but Microsoft declined our request to take part in an interview.

“Destination Earth will help national and regional or even local actors to nest their own models and activities into the data and information products we produce at the European level,” Hugo Zunker, policy officer in the Directorate-General for Defence Industry and Space, said. “Basically federating efforts for very complex, very expensive modeling which any Member State alone would not afford.” Others are also looking to space for information on the planet below. Cervest, a climate forecasting platform, was formed in 2016 to use cutting edge artificial intelligence models on satellite data to predict climate risk. "We have a daily feed coming in that goes into our machine learning algorithms," CEO Iggy Bassi explained. "We're looking at image data, soil data, precipitation data, etc," with the data then combined with several climate models to create predictions of how assets will be impacted by climate change. The company, which uses data from Planet, Digital Globe, and the Copernicus Program's Sentinel-2, has seen costs of satellite data plummet. "Satellite prices are going to collapse even further in the next three to four years," Bassi said. But relying primarily on external image data to build a model has its flaws. “You couldn't see the ocean currents or the tectonic plates or anything like that,” Grieves said. “You could only observe surface characteristics and then try to make your determination of that. I think that's fine, and in some ways that is a digital twin. But it wouldn't be my definition of a digital twin of the Earth.” Bassi, who doesn't see his company's product as a digital twin either, said that such a twin could help people visualize climate risk more viscerally. "Even when we're showing our platform to people, investors are scratching their heads like ‘what the hell does climate security mean?’ I think digital twins are a useful tool - if it helps with visualization, we'll certainly look at it." For the EU’s Destination Earth to even begin to touch on some of the challenges it hopes to simulate, it’ll need more than just satellite data. Working out what data to
collect, how to collect it, and making sure it will all be interoperable will be an immense task. With plans to use artificial intelligence on Destination Earth simulations, structured data will be all the more important, Veispak noted. “Solving this data problem is a real challenge, it is not just about brute computing power.” But they will also need brute computing power - much more than the EU currently has. Here, Destination Earth ties into EuroHPC, a €1bn ($1.1bn) project to build a series of pre-exascale and exascale supercomputers across Europe in the coming years. However, Destination Earth will have to share computing resources with the myriad other applications the systems are expected to run. Specific details on Destination Earth’s scope and budget were not disclosed, with much still to be determined before work begins in earnest in 2021. In a rare public comment on the project, the European Commission described Destination Earth in a communique to the European Parliament as a project to “develop a very high
precision digital model of the Earth. “This groundbreaking initiative will offer a digital modeling platform to visualize, monitor, and forecast natural and human activity on the planet in support of sustainable development [that] will be constructed progressively.” It is that final point that is the most important: “Developing a full digital twin of Planet Earth - encompassing all the ecosystems, all the different variables - at the current stage of not only technological development, but also data availability, and the ability to integrate and fuse the data - is extremely challenging,” Veispak admitted. “These systems are extremely complex.” Grieves, who is not involved with the project or privy to details beyond those shared by this reporter, was franker. “I'm not sure that we've got the capability to deal with all this, and then when you throw the complex systems aspect in, and the interaction between climate and sea temperatures and sunspots, and all the things that go into it? Boy, looking at that, who's ever thinking they're gonna do this? I'm hoping they're about 10 years old and they plan to live to 100.” For digital twins to be successful, they are developed with the least amount of data and compute power needed to answer a specific question. If you are crash testing a virtual car, say, you don’t need to model the paint color and license plate number. Everything should be modeled at the minimum fidelity for the specific use case.
“For Destination Earth to become an enabler of an ecosystem and a key element of the Green Deal data space, it should also allow different user communities to bring their own data into the system for the use cases and applications that are specific to them,” Veispak said. This could prove immensely challenging, as the model will have to be built without specific use cases in mind, forcing a higher fidelity across the board. The project officials DCD talked to all said that the plan is to produce more limited vertical slices covering types of issues or specific areas, but that the aim was to build it into a cohesive whole later on. Specific areas namechecked included hurricanes, extreme weather events, volcanic eruptions, and earthquakes.
Veispak added: “[Destination Earth] could be for climate change-induced effects, the state of the oceans, the cryosphere, biodiversity, land use, natural resources, and so on.” Grieves found the number of areas both daunting and unrealistic. “The ‘and so on’ is the killer. I'll tell you what, after they got past climate change, it raised my suspicions about what they actually could do,” he said. “To have this cohesive digital twin of the Earth, I don't see that happening. I think that is so far out, I just can't conceive. I'm not seeing that anytime this millennium.” It is most likely that the project will stick to vertical areas for a long period of time, expanding in scope and fidelity as time goes on - but never quite reaching that dream of a high-precision global model ready for any use case.
But perhaps it doesn’t have to. There can be a significant amount of value in digital twins of individual areas, particularly climate models, which still have huge error margins. And lessons from the project could have a knock-on effect on the wider world, becoming more useful as computing power grows. “Back in the '70s, I ran the largest computer in the world,” Grieves said. “And we could do weather forecasting, but it took us 48 hours to predict 24 hours in advance. We proved the concept, though.” Destination Earth may prove concepts too, provide useful information for policymakers and citizens, and create a platform that others can build on. “All the data which we generate with EU taxpayers' money is basically - with very, very few exceptions - free for everybody to use to take it up to build their own products on it,” Hugo Zunker said. This, one hopes, could unlock a new world of possibilities.
Sponsored by
> Colocation | Supplement
INSIDE
Scaling with confidence Lights-out facilities
Keeping networks up
Servers find a new life
> How data centers are learning to work even better with limited visitors
> Virtualizing the meet-me room could be the way to deal with a surge in demand
> Enterprises can take advantage of hyperscale efficiency, and help the planet
First lithium-ion battery cabinet designed by data center experts for data center users. Cut your total cost of ownership with the Vertiv™ HPL lithium-ion battery system, a high-power energy solution with best-in-class footprint, serviceability and user experience.
Smaller, lighter and lasting up to four times longer than VRLA counterparts, the newest generation of lithium-ion batteries pay for themselves in a few years. The Vertiv HPL battery cabinet features safe and reliable lithium-ion battery modules and a redundant battery management system architecture with internal power supply. These features and the cabinet’s seamless integration with Vertiv UPS units and monitoring systems make it ideal for new deployments or as replacement for lead-acid alternatives. Plus, its user-friendly display leads to a best-in-class user experience.
Why choose Vertiv HPL Energy Storage System?
• Safe, reliable backup power that offers a lower total cost of ownership
• Design that is optimized for use with Liebert® UPS units and for control integration
• Warrantied 10-year battery module runtime at temperatures up to 86°F/30°C for improved power usage effectiveness (PUE)
• Superior serviceability
Visit Vertiv.com/HPL
© 2019 Vertiv Group Corp. All rights reserved. Vertiv™ and the Vertiv logo are trademarks or registered trademarks of Vertiv Group Corp. While every precaution has been taken to ensure accuracy and completeness here, Vertiv Group Corp. assumes no responsibility, and disclaims all liability, for damages resulting from use of this information or for any errors or omissions. Specifications are subject to change at Vertiv’s sole discretion upon notice.
Contents
34 Remote management comes into its own Now's the time for lights-out operation
37 The new normal Here's what we can expect from networks
38 Advertorial: Scaling With Confidence From Core To Edge
40 Financial security Data centers will ride it out, unless the customers go under
41 Virtual meet-me rooms Dealing with shifts in demand starts in the data center itself
42 A part of the community There are plenty of ways facilities can help the neighbors
44 What goes around comes around The circular economy reduces waste and cuts energy

Speaking in confidence...
How confident are you that you can meet the challenges that will come your way? If 2020 so far hasn't shaken your confidence somewhat, you haven't been paying attention. In this supplement, we look at long-running issues in the industry that have perhaps come into sharper focus due to the twin threats the world is facing - the climate crisis and the Covid-19 pandemic.
Lights out Remote management has always been an ideal. Short of physical hardware replacement, most IT maintenance tasks can be done remotely - so why have colocation customers always wanted to visit and do work on-site? When nations began restricting travel, and workers had to keep their distance from each other, the realization dawned that modern data center equipment is already configured to be controlled remotely and facilities can be operated with minimal human presence. It took a health crisis to make us see it (p34).
Finance
As we write this, data center finances are still good. Mergers and acquisitions are steaming along, with Digital's purchase of Interxion the latest mega-deal to go through. Data centers are still upgrading and investing in capital equipment. But big clients in aviation and oil may collapse, and supply chains for replacement hardware may be strained (p40).
Networks In the weeks following the imposition of lockdown, network traffic changed radically, routing away from offices, towards homes, and shifting to conferencing and streaming. Everyone was surprised how well things ran - except the network engineers, who'd built with resilience in mind (p36). Meanwhile, inside the data center, virtual meet-me rooms have been proving their worth. Allowing flexible connections, they've joined up the links to allow traffic to switch from business to domestic networks, and alter to encompass more videoconferencing (p41).
Community service The issues facing the world have required people to work together, and maybe this will bring new attention to the ways in which data centers support their local communities (p42). These can range from not making the place look ugly, to sharing electrical energy to ease the load on the local utility.
Circular Economy Finally, the long term survival of mankind depends on using resources more intelligently, and data centers can be a big part of moves towards a circular economy (p44). Servers, racks, and even the concrete that encases the building all embody energy and valuable materials. Reducing this drain on the planet can only help. When we emerge from lockdown, we will be in a new normal. Or as people now say, "the next normal."
Remote management comes into its own
For years, remote management has been a good idea for data centers. Now it has become a necessity
Peter Judge Global Editor
Managing a data center remotely has always made sense. Facilities are often in out-of-the-way locations, and it is quicker and cheaper to fix problems remotely instead of getting an engineer on site. At the extreme, it is possible to run a data center with virtually no staff activity - the so-called “lights out” facility. But the reality has often not lived up to the promise. On the one hand, the tools to provide remote control have often been hard to integrate. On the other hand, colocation providers and their customers have been reluctant to trust the remote systems, preferring to touch servers and other equipment directly. In 2020 all that changed - of necessity. As we go to press, large parts of the world are going in and out of lockdown, with travel restrictions still in place. Getting into a data center is awkward, even though data center staff are generally categorized as “essential” and exempt from the restrictions, because digital infrastructure is essential to the economy. But data center reliability experts at the Uptime Institute have advised that visits to a facility should be minimized. In colocation facilities, customers must visit the site less, says Uptime SVP Fred Dickerman, and staff access should be restricted too, and handled very carefully: “When teams come on and off site, they should do handovers from a distance or by phone.” In March, colocation giant Equinix responded to the lockdowns that were being applied, and severely restricted customer access to its data centers. Visitors, customers, contractors, and non-critical Equinix staff were banned from Equinix IBX facilities in France, Germany, Italy, and Spain, with other countries moving to an appointment-only regime. This move placed a heavy requirement on remote functionality, which may have been used rarely in the past, or been incompletely implemented. Products for data center infrastructure management (DCIM) or service management (SM) present themselves as a complete solution, but most betray their origins in one sector or another, or need careful implementation to deliver fully. When the crisis hit, those who had fully functional systems, and a culture of using the tools available, had a head start in dealing with it. Brent Bensten, CTO at QTS Data Centers, counts himself lucky. The data center firm deals with a range of companies from small to large, but it has a service delivery platform (SDP) developed from that of Carpathia Hosting, a 2015 acquisition. The lockdown created a significant change in customer behavior, he said. The number of
logins to SDP went up by 30 percent in the first three weeks of restrictions, and the top users nearly doubled the amount of time they spent on the system - going up from 36 minutes to 62 minutes. Over the same period, customers were still welcome onsite, but visits went down by a similar proportion to the increased traffic on the SDP. “We want them to come if they need to,” Bensten told us in April. “But Covid-19 is a perfect case to use the tools, so they can do remotely what used to be done on site.” Statistics are granular, as different sites have a widely varying number of visitors, depending on the profile of the customers and their stage of deployment. QTS’s largest site in Atlanta could have anywhere from 400 to 700 visitors in a month, but normalizing the period with a previous one, he reckons this went down about 40 percent: “The curves mirrored each other.” If customers are realizing that unnecessary visits are a risk, new procedures may be contributing to this. “We haven’t had to put in place a hard rejection at any site. We require disclosure of where visitors have been, we use biometrics, and sanitizing wipes when they touch things.” The reduction in customer visits is even more striking against a background of data center hardware which is working harder to meet greater traffic demands: “By every statistic we have, power consumption is up, bandwidth is up significantly. With all those indicators going up, you would normally see visitor profiles go up.” QTS was fortunate in having a full-featured SDP, said Bensten: “It’s high-touch, high-need, for people to get what they need in the data center without going in there. It’s the single way to integrate with QTS, all the way from buying the service. It’s available for the iPhone, through a portal, or with an API so you can do everything programmatically.”
That range is important. Smaller firms like cloud startups just need a quick check on an app, while big hyperscalers have the resources to get the most out of programmatic access: “How they get used is wildly different. A one-to-two cabinet guy will use his iPhone app.
But a large hyperscaler customer with 1MW of capacity will move loads around to consume less energy and keep the service up reliably, based on the data we share through the API. In the old world, they would have needed to go to the site to do that.” You might expect the tech-savvy big players to adapt to remote use more easily, but that’s not what Bensten found: “The reduction in visits is across the board for every size of customer, including enterprise and government business.” A remote check with the SDP can actually be more effective than a site visit, as it has access to more data, he said: “We have a massive data lake built over the years, based on data we collect from the millions of sensors in our customer space.” It also includes wider world data such as weather patterns, and effectively looks at the “weather” inside the data center: “We have a team of data scientists using advanced analytics, so we can project our power consumption in seven day intervals to predict future patterns - and the data lake can be mined by our customers as well as by us.” If remote control is good for customers, it’s also good for staff, so QTS implemented home working where possible - using a different view of the same tools: “Our NOC support center is now working remotely, using a mapper with a 3D view of all our buildings down to customers’ cabinets.” Of course the tools can’t do everything, but when something physical has to happen, it’s best for operator staff to go in and do it for the customer, directed by the support center, said Bensten: “Our employees are considered essential workers. When we need physical things our ‘smart hands’ can do the physical work, so the customer doesn’t need to.” The work is directed by the SDP, but staff physically open the cabinets: “We don’t have robots yet.” The staff also operate a slightly different shift pattern, but there’s no dramatic change, said Bensten: “The number of our folks on site at a time hasn’t changed.” QTS also shares its building security, giving customers access to CCTV feeds for their enclosures, said Bensten: “It’s Nest for your cages - you can see who came in and who left.” The operator has the same ability extended to the shared areas, so it can track staff and customers from the entrance through the mantrap to the data halls. Remote management brings up issues of demarcation for colocation vendors and their customers. The customers want to know about the building facilities, such as cooling and power, but those are under the control of the operator. Meanwhile, the operator draws a line at looking inside the IT, at operating systems and workloads, leaving those for the customer to manage. “We capture the IT as assets, like servers and storage controllers, so the customers can
Colo Supplement 35
load in IP configurations and VLANs. Our technology doesn’t interrogate their guest OSs.” Both groups see a different view: “Our employees need to see a macro picture, while customers need to see a more drilled-in micro view.” Smaller facilities also got a head start on remote working, simply because of the overhead involved in covering multiple small locations. “Our whole business premise was based on lights out data centers,” said Lance Devin, CIO of EdgeConnex, a colocation provider specializing in built-to-order facilities for smaller cities round the world. “We have 2MW sites, not 100MW behemoths. I can’t afford to put three engineers and 17 security people and two maintenance people in a site like that.” With 600 of these facilities, the company had an incentive to enable remote control from the start. “The business justification was already there - it’s more cost effective and cheaper.” And moving further to the edge, with the possibility of 100kW or 200kW sites, made remote management more important. But the Covid-19 crisis provided a workout for EdgeOS, the company’s data center infrastructure management (DCIM) platform, Devin told DCD in April. “This is the way we run our business. This was not a change.” The systems manage EdgeConnex’s equipment and the customer equipment in the racks - but the data views have to be managed. Despite the size of its facilities, EdgeConnex is a wholesale vendor, dealing with cloud players: “Our customers don’t want us to know what is in their stuff or vice versa.” So EdgeConnex’s system remotely manages equipment like Liebert cooling
systems, which have computerized predictive maintenance, showing the equipment’s details, when it was certified and tested, and its history, said Devin. SCADA monitors everything every 100ms, spots when something is out of line, and then checks the root cause - for instance finding the faulty remote patch panel (RPP) upstream of the PDUs that suddenly show errors. The system then talks to the vendors of the hardware: “Our ops people don’t have to get in the middle, the system automatically sends a ticket directly to the vendors.”

The system also communicates with the customer. It knows the location and status of PDUs and other kit, what racks they serve, who will be impacted - and whether it will affect their service level agreement (SLA). “The ticketing system tells our customers the vendor is working on it, automatically.” EdgeConneX also lets customers monitor their equipment visually, by integrating their own CCTV cameras into the system. “When you think about everything you’ve seen from automation and remote working, you do have everything you need at your fingertips,” said Devin. Views and data are carefully controlled: “One tenant may only see Denver, and within that their real time load and their tickets. They see their cabinets.”

Actual visits are an issue in a lightly-staffed facility, pandemic or no pandemic. “We built a mantrap, and a callbox system that worked with the security system, so we can let people in remotely,” said Devin. “We take a picture of them in the mantrap, and then ask a challenge system for dual authentication or a remote biometric read.” Their pass has photo ID, but has to be
issued securely, and the biometric recognition has to be low-maintenance for a lights-out site: “We tried an iris scanner,” said Devin, but it was too complex, with visitors having to repeat the scan at different distances. “You’ve gotta be kidding, people aren’t that good at following instructions.” Fingerprints were rejected as the scanners get greasy. EdgeConneX uses a vascular image of the back of the visitor’s hand - “they don’t touch the lens.”

It’s a complex system which EdgeConneX put together from partial solutions. “I looked at four off-the-shelf DCIM products,” said Devin. “I would guarantee you, any single system did two things really well. But the reality is there isn’t one system that does it all, from ticketing to management to reporting.”

Back at QTS, Bensten agreed that customers need more than DCIM. “We are a big believer in DCIM - we need it to run our building. But it is a small piece of our platform. We love our DCIM, but without our data lake on top of it, using it in ways DCIM was never intended to be used, our service delivery platform would not be able to do what it does.” Bensten thinks the pandemic has changed behavior. “We think our toolset is better for the customer - and the pandemic has pushed people to adopt that.” But what happens after the lockdown? “I guess I hope things won’t go back to the way they were,” said Bensten. “I’ve worked a lot of my career in managed services, and one of my goals is the cloudification of the data center. I want to see the data center working the way the cloud works.

“A few months from now, when this is over, the last thing anyone is going to do is hop on a plane to visit a data center.”
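To picture what that hands-off automation looks like in practice, here is a minimal sketch of the fault-to-ticket flow Devin describes: readings stream in, a misbehaving upstream device is identified, and a ticket goes straight out. The class names, thresholds and print statements are our own illustrative assumptions - this is not EdgeConneX's EdgeOS code.

```python
# A minimal sketch of an automated fault-to-ticket pipeline: readings arrive
# constantly, an out-of-line value is traced to an upstream device, and a
# ticket is raised with the vendor without an engineer in the loop.
# All names and thresholds here are hypothetical.
from dataclasses import dataclass

@dataclass
class Reading:
    device_id: str       # e.g. a PDU
    upstream_id: str     # the device feeding it, e.g. a remote patch panel (RPP)
    amps: float
    rated_amps: float

def out_of_line(r: Reading, tolerance: float = 0.9) -> bool:
    """Flag a reading that is close to (or beyond) the device's rating."""
    return r.amps >= r.rated_amps * tolerance

def root_cause(readings: list[Reading]) -> str | None:
    """If several PDUs on the same upstream RPP misbehave, blame the RPP."""
    bad = [r for r in readings if out_of_line(r)]
    upstreams = {r.upstream_id for r in bad}
    return upstreams.pop() if len(bad) > 1 and len(upstreams) == 1 else None

def dispatch(readings: list[Reading]) -> None:
    culprit = root_cause(readings)
    if culprit:
        # A real system would call the hardware vendor's ticketing API and the
        # tenant notification service here; this sketch just prints.
        print(f"Ticket raised with vendor for {culprit}")
        print("Affected tenants notified automatically")

dispatch([
    Reading("pdu-07", "rpp-2", 28.5, 30.0),
    Reading("pdu-08", "rpp-2", 29.1, 30.0),
])
```

The value is less in the logic itself than in the plumbing around it: in a production platform the root-cause step feeds directly into vendor ticketing and customer SLA reporting, so nobody has to get "in the middle."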
THE NEW NORMAL
As we settle in for the long haul, here’s what networks can expect from the pandemic age
Sebastian Moss Deputy Editor
The Internet is going to be alright. The past few months have been difficult, and the next few likely more so, but the challenge of global lockdowns has been eased by our ability to communicate, work from home, and unwind with games and video streaming services. So it’s good news that the net was built to handle unprecedented demand.

It’s also quite remarkable. “Imagine another utility scaling like this,” Cloudflare CTO John Graham-Cumming told DCD. “Imagine if everyone was like, ‘I need 40 percent more water. All of this. Right now.’ It couldn’t happen.” Networks were built with redundancy in mind, designed to handle the peaks in traffic caused by events such as sports game streams (although, ironically, sports is one of the things now less likely to be streamed). As a whole, networks should be able to weather the current surge.

Traffic began to rise, as one would expect, as lockdowns began to be enacted around the world. “There was a huge increase of traffic, somewhere between something like 20 and 70 percent, depending on the geography,” Graham-Cumming said. “Now they've come up to kind of a new normal level.” And it’s becoming clear that the new normal is likely to persist, as lockdowns will ease only gradually and people are likely to continue to stay at home.
Usage has changed. Previously there was one big peak as people came home from work and streamed videos, or accessed things that were blocked at work. “Now you're seeing a double peak, you're seeing a peak around eight to nine o'clock in the morning. And then again in the evening. So a big, big change has definitely happened.”

Equally, for obvious reasons, where people access the Internet from has changed: Commercial districts have gone dark, while residential areas are permanently digitally connected. This shift to diffused last mile connectivity has caused some issues, albeit isolated ones, that expose areas of network underfunding or low fiber deployment. Like most crises, it is something that disproportionately impacts the poor. "Mostly, though, the network is very, very flexible," Graham-Cumming said.

In an effort to improve that flexibility, in the weeks after the lockdown network operators rushed to upgrade their systems and do essential maintenance. This had the unfortunate side effect of adding further planned and unplanned outages, network monitoring company ThousandEyes found. “What we've noticed is that overall we've seen a gradual increase in the number of outages globally, from February 17,” product marketing manager Archana Kesavan said. But from April 5 that figure began to drop, as the upgrades were finished and the networks were able to handle increased load.
With traffic plateauing and vital upgrades out of the way, it is clear that Covid-19 will not overwhelm network operators. However, there is still a risk of isolated outages. While data center and networking staff are counted as essential, they are likely to be stretched thin, kept in smaller, isolated teams, and could be understandably tired and distracted by the current situation. “There will be effects that are caused by people being sick, or people having to care for others or people having less ability to concentrate,” Graham-Cumming said. “The environment has changed, there are additional stresses. So you might see things you didn't expect. I think that's a real thing to worry about.”

As we head into yet another record hot summer, it’s also likely we will experience outages that could have happened even without the current pandemic. Outages happen. Except this time, as the entire global economy is propped up by the thin fiber cords connecting us all, large outages would be felt ever more keenly. "We just need it more than we ever did," Graham-Cumming said. "The Internet has become a vital part of our lives.” Data connectivity is now becoming as essential as electricity and (in the developed world at least) is expected to always work. What happens when it doesn’t? “A blackout is a big deal," he said.
Vertiv | Advertorial

Scaling With Confidence From Core To Edge

Amid ongoing uncertainty – of which the current crisis is an example – the ability to deploy agile infrastructure is increasingly important. This includes the ability to deploy capacity as needed and avoid overprovisioning for uncertain future needs. One solution, increasingly embraced at all levels of today’s networks, is scalability through prefabricated modular (PFM) designs. These are IT whitespace and/or power and cooling systems, factory-built and tested, that can enable fast and effective capacity increases to meet immediate needs.

PFM designs support a range of data center facilities, from consolidated enterprises to cloud and colo providers. Colos, in particular, are a potential match for self-contained, modular solutions. The multi-tenant data center (MTDC) business model is built on delivering the data center space, power and connectivity customers need in a timely and cost-effective manner, and PFM makes that easier. PFM designs provide value to a variety of colo uses, from whole site builds to containerized micro builds at the edge to augmentation of conventional facilities to add power or cooling capacity.

T-Systems, one of the largest European IT service companies with global delivery capabilities, needed to achieve rapid availability and high scalability during its data center expansion. For T-Systems, PFM designs met that need, easily allowing for future phases of expansion as well as a staged investment.

BUILD AS YOU GROW
Modular expansion theoretically aligns with the core colo business model, and as more and more colos introduce on-demand edge solutions, the opportunity for PFM in the colo space is growing. From core to edge, colocation providers want to be able to scale with confidence and build as demand grows. PFM solutions will be an increasingly important part of their growth strategies. This includes colocation providers with ambitions not only to deploy and scale specific sites but also to scale their whole business internationally. Standardized, repeatable units of PFM capacity can speed up global expansion and also ensure resiliency, cost control and build quality across multiple regions.

CAPACITY ON DEMAND
Colocation providers want to avoid stranded capacity and overprovisioning at all costs. PFM designs can be tightly integrated and add only the capacity needed, reducing space demands and real estate costs. This smaller footprint also makes them ideal for population-dense urban environments where computing demands are skyrocketing. And, modular solutions give colos the ability to free up white space by putting power outside the data center.

A healthcare system in the U.S. knew these space constraints first-hand, leading them to move their data center operations from on-premise to two colocation facilities. This allowed them to use space within the hospital for revenue-generating purposes, while reducing personnel costs. The colo provider used Vertiv SmartCabinet solutions to quickly and efficiently update and relocate the hospital’s IT infrastructure within a 12-month timeframe.

SPEED OF DEPLOYMENT
PFM designs leverage repeatable manufacturing practices to reduce time to build and deploy. Factory construction allows that work to be completed while the site is being prepared. Once built, prefabricated units can be deployed quickly, and many units are considered “plug and play.” This can cut start-up time to days or weeks instead of months with more traditional builds. That extra time allows organizations to be more agile and nimble, add capacity only as it’s needed, and to quickly react to changes as they arise.

According to a 2020 report from Omdia (previously IHS Markit Technology), PFM mechanical modules often arrive in two to four months, to provide critical power capacity. In South Africa, a telecommunications provider used prefabricated modular units for a data center system, and the units were packed, shipped, reassembled, and ready for testing in less than six weeks.

ENGINEERING QUALITY
PFM solutions can provide predictability for projects that face uncertainty. In some regions, the skilled workforce required for onsite construction may not be available. PFM solutions are built in a factory by trained specialists who can control the environment to ensure consistent quality. The finished system can be tested under load conditions in a factory setting to help ensure reliability.

Looking specifically at the edge, those sites can be located in harsh environments that require rugged enclosures to protect sensitive electronics and ensure availability. PFM units – whitespace or power/thermal – can be built specifically with such environments in mind.

The speed enabled by PFM doesn’t mean sacrificing resiliency or availability. For example, some PFM designs, such as Vertiv™ SmartMod™, are pre-tier design certified by the Uptime Institute. This pre-certification speeds up the required certification process for sites, opening up the potential for cost savings and compressing deployment times even further.

That does not mean these are cookie-cutter, one-size-fits-all solutions. The foundational elements may be consistent, but factory manufacturing actually enables more efficient, cost-effective customization. A colo provider can select a prefab solution to meet specifications, and those specs can be repeated as additional capacity is added. Similarly, a provider can select the base model, but repeat and tweak requirements based on individual site requirements.

Targeted capacity deployment and management is critical to the success of colocation providers, and PFM solutions deliver on-demand capacity better than traditional builds. These solutions can simplify deployment for colo providers, whether at the core or at the edge, with factory-built and tested performance and reliability. Regardless of the location of the deployment, PFM solutions can provide easy scalability, rapid deployment, and reliable predictability.

Several companies are pushing the boundaries of traditional colocation using prefabricated and modular solutions. Their innovations are helping to deliver a more customer-centric edge infrastructure.

EdgeConneX
Vertiv has collaborated with EdgeConneX on dozens of projects across three continents (North America, South America and Europe) since 2014. EdgeConneX is moving away from the traditional colocation model. They work with large cloud providers, and they’ll build a facility in a matter of weeks using PFM solutions. They are focused on one-way delivery of traffic, catered toward content distribution in growing markets.

EdgeInfra
EdgeInfra, based in the Netherlands, is another new type of PFM edge colocation provider. EdgeInfra is adopting a colo model within a container – they are deploying shipping containers as edge data centers in urban areas, acting as a colo provider. They’ll use PFM solutions to build out those small edge sites. EdgeInfra is focused on bidirectional, IoT-driven compute. When we look ahead to smart vehicles and other innovations that will require more capacity, these types of compute will be critical.

Click here to learn more about Prefab Modular Solutions from Vertiv.

About Vertiv
Vertiv designs, builds and services critical infrastructure that enables vital applications for data centers, communication networks, and commercial and industrial facilities.

Matt Weil is the Director of Offering Management for Vertiv Integrated Modular Solutions. His expertise includes prefabricated modular solutions at the edge.
Matt Weil, Director, Offering Management, Vertiv Integrated Modular Solutions
E: Matthew.Weil@Vertiv.com | Vertiv.com
Financial security
The Covid-19 pandemic may trigger a recession, but data centers don’t look likely to suffer, reports Peter Judge
Peter Judge Global Editor
Remember the start of 2020? Data center investors were looking forward to another year of uninterrupted growth. Six months in, Covid-19 has changed almost everything… except for that expectation of data center growth. Surging demand for online services during lockdown has boosted the growth projections of the industry. Subject to restrictions on movement, data center investments, openings and expansions have continued unabated.

The first four months of the year saw a total of $15 billion in merger deals, according to data from Synergy Research - although most of these were set up before the year began, and way before the pandemic introduced restrictions to travel. That bumper figure is largely due to one huge merger. Digital Realty bought Interxion for $8.4 billion, the largest deal since Digital’s $7.6 billion purchase of DuPont Fabros in 2017. Digital’s purchases provide peaks within the overall growth curve, but there have been plenty of other deals worth more than a billion. Macquarie Infrastructure Real Assets (MIRA) bought 88 percent of Australia-based AirTrunk in a deal which valued the hyperscale provider at around $1.8 billion. Other large deals have included the acquisition of Global Switch by Chinese investors, and operators including CyrusOne and Iron Mountain have also been buying up and consolidating their rivals. However, later in the year, things may
slow down, as these deals require due diligence - which means actually visiting a potential acquisition. “Due diligence requires travel - and travel has been restricted by the shelter-in-place rules,” Rob Plowden, head of the US Data Center practice at legal firm Eversheds Sutherland, told DCD in March. “I am still in a period where we're just getting used to the new normal, but I have definitely seen the brakes have been pumped on due diligence. Deals haven't been terminated, but they have been slowed."

For general investors, data center operators that are constituted as real estate investment trusts (REITs) still look good. At least compared with other REITs, in sectors like retail and hospitality, they have an obvious advantage: they remain open and continue to expand - so investors are likely to keep their stakes or increase them. However, lockdown restrictions may cause some practical issues for the data centers themselves, although their own staff are generally classed as essential to keep national infrastructure running.

One window into these concerns came in an earnings call by operator QTS, which took place in April. Although the company is in the digital sphere, many of its customers are not immune to the inevitable recession which will follow the lockdown, and some - like those in the oil and gas and hospitality sectors - are staring real hardship in the face. Some of QTS’ customers have warned they may have difficulty in paying: the company reported “a modest increase in customer requests for payment relief," and
has extended payment terms to some. CFO Jeff Berson said that exposure was comparatively small, as companies at such risk only represent “less than 10 percent of in-place recurring revenue," and any losses might be offset by increased demand from digital companies delivering online services to people stuck at home.

Physical infrastructure expenditure may be hit in future too. At the end of 2019, Synergy reported that capital expenditure in hyperscale data centers was running at $32 billion per quarter. In the rest of 2020, data centers may have some trouble keeping this up. For one thing, construction may be impeded by restrictions. Facebook had to temporarily pause building at two major sites, in Ireland and Alabama, due to Covid-19. For its part, QTS reported “modest delays in construction activity in a few markets - primarily as a result of availability of contractors and slower permitting.”

The other major expenditure in data centers - the equipment inside them - could also be a problem. Factories in Asia experienced breaks in production. This, and possible stockpiling, may cause small gaps in the supply chain. So far, DCD hasn’t heard of serious trouble. QTS, for instance, claims it has "already secured" the vast majority of equipment it needs for the year, and is moving orders forward. A global recession will ultimately hit every business somehow, but digital infrastructure looks to be insulated from the worst of the pain.
Virtual CAPACITY
As network demands increase, can data centers use virtual meet-me rooms to squeeze more capacity and flexibility from their networks?
Peter Judge Global Editor
Nation by nation, most of the world went into lockdown to reduce the spread of the Covid-19 virus in early 2020. This changed people’s work and private lives, accelerating a move to digital working and relaxing. It may also have accelerated a change within the data center. Different demands on network traffic made it more necessary than ever to have flexible connections between resources within the building and outside it - and will most likely push facilities to adopt a more flexible network topology: the virtual meet-me room.

The meet-me room is a physical space in a colocation data center, where telecoms providers and colo tenants connect their equipment together to exchange traffic, without having to go through an expensive local loop. Internet exchange points can also be located inside the meet-me room. Colocation providers will connect their clients together by physical cables - “cross-connects” - either directly or via the meet-me room. However, as data centers have evolved, this has led to large numbers of cables running between different parts of the building - so setting up connections virtually over the building network has become a preferred option, virtualizing the meet-me room.

“Vendors are building platforms which empower folks to not need the physical meet-me room as much as they did before,” said Sagi Brody of disaster recovery provider Webair in a session at DCD’s Virtual New York event in March 2020. “They're changing the landscape.”

As a new technique, this goes under many different names. It’s referred to as interconnection fabric, software-defined interconnect or data center interconnection (DCI). It also extends beyond a single data center, with network-as-a-service companies like Megaport and PacketFabric offering flexible connectivity between popular locations across wide geographies. Equinix is a colocation provider that makes significant revenue from cross-connects, and styles itself as an
interconnection provider as much as a colocation player, branding its sites Internet Business Exchanges. Unsurprisingly, Equinix has adopted virtual connections inside its facilities, under the name Equinix Cloud Exchange Fabric (ECX). “In some ways that’s a global large meet-me room,” Jon Lin, Equinix’s president for the Americas, told DCD. Equinix solutions architect Sanjeevan Srikrishnan describes it as “consuming infrastructure as a service with the capabilities of the cloud.”

Digital Realty has a similar offering called Service Exchange, which it put together in partnership with Megaport. These services extend outside the meet-me room of the home data center, said Okey Keke, solutions architect at Digital Realty: “We try to provide customers with end-to-end connectivity between the infrastructure they have within our facilities, and data sets in another Digital Realty facility or at a third party.”

There are interesting results of this. Data is being carried over connections that may bypass the Internet and use direct physical connections, and because those connections are virtualized, they can be made available more quickly. “We're virtualizing the physical connection, just like we virtualized physical servers,” said Brody. Just as virtual servers can be deployed at will in the cloud, so can virtual connections. This has been useful for services such as backup and disaster recovery, which only need to be turned on when needed, but it also came into its own in the pandemic, he said. When business traffic flowed away from traditional business districts to residential areas, it needed a flexible response: “I don't think there's a better use case of that capability than Covid.”

With this kind of service, “you have the ability to not only connect the cloud service providers to anyone else that's in the data center, but the ability to turn up services to the ISP - the eyeball network,” said Jezzibell Gilmore, SVP of business development at PacketFabric. According to Equinix, virtual connections
within a single data center can actually meet those shifts at the Edge which are demanded by the pandemic response, speeding up and rerouting traffic away from offices and towards homes: “It enables customers to exchange traffic with each other, and we’ve been able to use that with a lot of service providers if they are seeing congestion in areas,” said Lin. “We host a lot of the eyeball networks, we host a lot of the core backbone networks, we host a lot of the content providers and the communication providers, as well as the enterprise customers. So if you're talking about the Zooms and the WebExes of this world, we are the ones helping them scale their Edge presence to handle this load,” said Srikrishnan.

Virtual connections don’t replace physical cross-connects, said Keke: “In addition to customers increasing cross-connects they are looking at virtual cross-connects because they offer a lot more elasticity and access to a larger ecosystem than a cross-connect to a single business partner or carrier.” The use of direct connections means that organizations are sending less of their data across the public Internet, according to Christian Koch, head of product at PacketFabric. Some of this goes across physical fibers within a colocation site, some goes across services like Megaport, between sites.

One thing virtual meet-me rooms won’t do is change one of the oft-cited gripes of the data center world: the price of Equinix’s cross-connects. Rivals often complain that for the price of Equinix running a cable from one side of its building to the other, a telecoms provider could offer a link across a country, but Jon Lin says virtual links won’t change this. ECX links themselves may be cheaper, but they are a different use case, he says. “If you have a cross-connect, you can scale from 10G to 100G on a dedicated circuit that’s inherently under your control.

"There’s a lot of value in this, and ECX is about being agile and having a dynamic software-defined experience. We are pricing based off of the value.”
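Part of the appeal is that a connection becomes an API call rather than a work order. The sketch below is purely illustrative - the endpoint, field names and port identifiers are invented for this article, not Equinix's, Digital Realty's or Megaport's actual APIs - but it shows the kind of request a tenant might make to turn up a virtual cross-connect in minutes.

```python
# A hedged sketch of provisioning a virtual cross-connect through a
# hypothetical software-defined interconnection API. Everything here
# (URL, token, field names, port IDs) is an assumption for illustration.
import requests

API = "https://api.example-colo.net/v1"
HEADERS = {"Authorization": "Bearer <api-token>"}

def create_virtual_cross_connect(a_port: str, z_service: str, mbps: int) -> dict:
    """Request a virtual circuit from our colo port to a cloud on-ramp."""
    payload = {
        "a_end_port": a_port,        # our physical port in the data hall
        "z_end_service": z_service,  # e.g. a cloud provider's on-ramp ID
        "bandwidth_mbps": mbps,      # turned up (or down) on demand
        "billing": "hourly",
    }
    resp = requests.post(f"{API}/virtual-circuits", json=payload,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    circuit = create_virtual_cross_connect("LD5-rack42-port3",
                                           "cloud-onramp-eu-west", 1000)
    print(circuit["id"], circuit["status"])
```

A physical cross-connect, by contrast, still needs a cable pulled, patched and tested by on-site staff - which is exactly why the virtual option came into its own when the pandemic shifted traffic patterns overnight.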
Becoming a part of the COMMUNITY
To build in cities, data centers need to become a part of cities. That means looking nicer, and helping out the grid, Sebastian Moss reports
Sebastian Moss Deputy Editor
Let’s be honest: For local communities, data centers can often be a tough sell. Sure they bring jobs, but not many. There’s some revenue, but it’s usually offset by tax breaks. But beyond
that? It’s this perception that has led to some areas turning against data centers, most notably Amsterdam, which in the summer of 2019 placed a moratorium on new builds. “I think one of the biggest problems they have is that Amsterdam does have a lot of data centers and they do produce these dead areas in the city,” Chad McCarthy, Equinix’s global head of engineering development and master planning, told DCD. Some of the criticisms against data centers are based on unfair presumptions, McCarthy believes, but others are grounded in truths - ones that data centers need to learn from. "You’ve got these large, cubic, plain grey buildings, and big gates outside - no one's really walking around," McCarthy said. "They don't really see that as how they want Amsterdam to be. Amsterdam is a lively place and they don't really want it to look like that."
This is not just an issue with picky Dutch architects, but a wider sentiment shared by many. "I’ve seen a lot of these data centers in Santa Clara and they’re just big, blank boxes; they’re disgusting, they’re just so ugly and when I look at the picture of this one, it’s just one big white plane that’s not interesting," Planning Commissioner Suds Jain said of a RagingWire facility when discussing whether to approve the construction. "I don’t understand how we allow this to happen in our city." Even outside mass conurbations, there are those calling for more care in data center design, with Loudoun County officials last year begging for data centers to be better looking, lamenting the hundreds of identical rectangles dotting the landscape. "We're starting to provide green areas, cafes, and scenic walkways through the campus like universities do," Equinix’s McCarthy said. "If data centers are in the city center, they have to be integrated and have to be part of the city infrastructure." That does not mean blindly following planning officials’ every whim, however, with
McCarthy sharing his distaste for "the number one request" - vertical green walls. "I mean, that is one of the most pointless things you can do from an environmental perspective, it's not easy for plants to grow on a vertical surface, you need to use a lot of water to keep it alive, you have to pump the water up to a great height because these things are typically about 30 meters tall. And it's a complete waste of energy. It is an illusion, we need to move away from things which just don't count, and start looking at what really counts." An area that could have a far greater impact would be shared heating systems, where the waste heat from a power plant is used in adsorption chillers in a data center, and then the waste heat from the data center is given to the district heating system to warm homes and schools. "Once it gets to that point, then you can imagine you're sitting in your apartment at home and you've got your feet on the sofa and you're watching Netflix," McCarthy said. "Yes, you're causing heat in a data center when you watch Netflix, but you're using that to heat your house - and by the way, it's heat which is
a necessary byproduct from the power that's generated to run your television." But an integrated community energy scheme has yet to be rolled out en masse outside of some Nordic nations. "I tried to do adsorption cooling in Frankfurt using waste heat from a coal-fired power station," McCarthy said. "And it was just impossible to negotiate terms." The company would have had to pay for additional heat rejection, the region didn't have an appropriate district heating network to pass on the remaining heat, and the power station wanted to charge exorbitant fees for the heat because they had a sweetheart deal to use river water for free. "And so this is what we're up against we're after a complete modernization and a recalibration of the energy market." As we shift away from fossil fuel power plants that create waste heat for steam turbines, and move to wind farms and solar plants that don’t create excess heat, data center waste heat could become even more important to communities. Renewables could also give data centers another vital role in society - as grid
stabilizers. Using UPS systems for demand response is already being trialed, but could roll out further as data center operators and customers get used to the concept. "It's just one of those inertia factors that has to be overcome for that to work," McCarthy said. "But from a technology perspective, batteries in data centers can be dual purpose. They can cover grid outages for the data center, but they could also stabilize the grid as well."

That’s not to say further technological advances won’t make the transition easier, with UPS battery improvements allowing for fundamental data center changes, including allowing companies to drop diesel generators - another community bugbear. "Currently you’ve got a five-minute battery supply and diesel generator," McCarthy said. "It isn't easy to use a fuel cell as a backup source, it takes too long to start." So, in that scenario, you’d likely use the fuel cell as your main source of power, and fail over to the grid. "But the grid is not under your control, so failing over to something which is outside your control is not really acceptable at this point in time, and so that points to needing
very long battery periods, which only really makes sense if you're dual purposing for grid stability. "So if you were stabilizing the grid and you had something like a four-hour battery, then the fuel cell without the diesel generator, I think, is something which is very realistic." But, McCarthy cautioned, "you can see that we're moving this specification a long way from where it is now."

Much of this will rely on new technologies, government incentives, and regulations - and Equinix notes that it is in talks with the EU on the latter two points. But until then, data centers should focus on a simple task: Being better neighbors. "We need to completely change the way we think about how we live in the community," McCarthy said. He’s hopeful such moves will nix "a perception which has grown over time and it's been left unchecked" that data centers are bad for communities. "The data center is the platform of that digital economy. That interchange of data, the storage of data, and the availability of data is really largely responsible for our standards of living today, even more so right now."
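To put rough numbers on that four-hour battery, here is a back-of-the-envelope sketch. The 1MW IT load and the 30-minute ride-through reserve are assumptions made for illustration, not Equinix figures.

```python
# A rough sizing sketch of a dual-purpose battery room: a long-duration battery
# sized for a facility, with a protected ride-through reserve held back and the
# remainder in principle available for grid stabilization.
# All figures below are illustrative assumptions.
it_load_mw = 1.0          # assumed critical IT load
battery_hours = 4.0       # the "four-hour battery" in the quote
ride_through_hours = 0.5  # assumed reserve kept purely for the facility itself

total_mwh = it_load_mw * battery_hours
reserve_mwh = it_load_mw * ride_through_hours
grid_service_mwh = total_mwh - reserve_mwh

print(f"Total storage:         {total_mwh:.1f} MWh")
print(f"Protected reserve:     {reserve_mwh:.1f} MWh")
print(f"Available to the grid: {grid_service_mwh:.1f} MWh per event")
```

Even with a generous reserve held back, most of the stored energy could in principle be offered for grid stability - which is the economic case for specifying batteries far beyond the traditional five minutes.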
What goes AROUND comes around
Why build a whole new server when Google’s about to chuck one away?
Sebastian Moss Deputy Editor
As much as the data center industry prides itself on the vast efficiency improvements it has made over the past few decades, and the climate-positive effects of businesses moving to modern facilities, there remains an uncomfortable truth: Electronics are dirty.

Whether it’s the chips using rare-earth metal extracted from horrific mines, or trawled from deep sea beds; whether it’s the hard drives with neodymium magnets primarily found in China; or whether it’s the copious quantities of steel used to make racks, smelted in a highly carbon-intensive process, the sheer act of setting up a data center is a wildly environmentally damaging one. And then, when the hardware is upgraded, all those servers and all those racks are simply thrown away.

“We did a full lifecycle analysis of a standard Open Compute Project (OCP) rack to find out what portion of the total CO2 impact is attributable to pre-use phase, such as mining, manufacturing, system assembly, and so forth, which portion of the CO2 impact is tied to its use, and then which part is attributable to end of life processes,” Ali Fenn, president of ITRenew, told DCD. “And it turns out that in a pretty common use case - where components are manufactured in Asia, systems are assembled in Eastern Europe, and then data center deployment is in the north, and end of life stuff is done locally - then it turns out that 76 percent of the net CO2 impact is attributable to that pre-use phase.”

Her company is one of several springing up to try to minimize the impact of server construction, by making data centers a more effective part of a circular economy. The idea is simple: Hyperscale companies like Google, Facebook, and Microsoft buy staggering numbers of OCP servers with
life expectancies of around nine years, but because of their need to always have the latest hardware, usually decommission the servers after just three years. Previously, this would mean that the servers were destined for an early scrapyard, but instead ITRenew buys the servers, wipes them, and resells them to less demanding organizations that then run them for another five or six years.

Of course, it still doesn’t stop the servers being environmentally damaging, and eventually they too will likely end up in a landfill somewhere, Fenn admitted. “It’s a deferral of new manufacturing as opposed to avoidance. But it buys us time, right?

“In a truly circular world, the system would be regenerative by design and those end-of-lifetime things would magically become something else with zero sort of byproduct
"The ODMs are only set up to serve like a hundred thousand servers at a time. They're two percent margin businesses, taking orders from the hyperscalers at massive scale" waste. Unfortunately, electronic components are kind of the hardest things to do in that regard.” Another challenge is that ITRenew does most of its work with more-or-less anonymous “white label” servers from manufacturers like Wiwynn or Quanta. These
are based closely on open design standards from the Open Compute Project (OCP) and are often referred to as ODM (original design manufacturer) kit. It’s harder to work with branded servers, from the likes of HPE or Dell, which are often referred to as OEM (original equipment manufacturer) kit, because OEMs use proprietary firmware, and bundle support packages and agreements which can restrict what ITRenew can do. “We've sold a lot of OEM equipment. But you can't warranty it or support it. What's different now is that, because things are open, we can actually stand behind it and say, ‘we've tested it, we've certified it, with warranty’ - that's the key shift from proprietary OEM systems to open ODM systems.”

OCP and the ODM model has been a huge success with hyperscale companies, but it has struggled to make a dent elsewhere in the data center industry, amongst the everyday businesses where ITRenew finds its customers. “Frankly, ODM hardware has not been as widely adopted as it could be,” Fenn said. “A lot of it is that the ODMs are only set up to serve like a hundred thousand servers at a time. They're two percent margin businesses, taking orders from the hyperscalers at massive scale.” With low margins, and often not offering warranties, the ODMs have small sales footprints and channel support, with little desire to chase comparatively tiny enterprise deals. Servers built to OCP specifications therefore represent "a fraction of the total server market and even a fraction of the ODM market," Fenn said. "But that's actually one of the things we're trying to solve."

The circular economy is increasingly becoming a focus of governments looking to minimize the impacts of the coming climate catastrophe, and to reduce their reliance on foreign-owned natural resources, with the
EU in particular promising to push a radical circular agenda. But the circular economy, like the normal economy, is currently facing the challenge of the Covid-19 pandemic. For ITRenew, the impact has been a mixed bag. On the positive side, businesses with suddenly constrained cash flow may be more likely to turn to cheaper second-hand equipment, while the pandemic has threatened to hit the supply of some components needed for new equipment. While factories across Asia have mostly got back into business, the amount of components in the supply chain has been affected by fear. There’s been an uptick in purchases of usually slow-moving memory and very specific spares, ITRenew’s chief strategy officer, Andrew Perlmutter, told DCD. So, it would seem that general enterprises
might be more receptive to the idea of castoffs from the hyperscale world right now. But, unfortunately, the effects of the pandemic are also reducing the supply of second-hand equipment from the giants, who are changing their habits. Hyperscalers are facing unprecedented demand for their services, and can find themselves understaffed and cautious about unnecessary maintenance. One response is to extend the life of their servers, reducing the number available second hand. “It’s not exactly business as usual,” Perlmutter said. “We have seen a dip, but not so substantial that it's caused major challenges.” Once the crisis is over, however, Perlmutter expects the reverse: “I do think you'll see something of a spike in decommissioning and deployments."
Longer term, Perlmutter believes it’s unlikely that this experience will lead to hyperscalers operating servers for longer. At the moment it remains true that this year’s servers are vastly more powerful than those of three years ago, so there’s an incentive to swap them out. However, Moore's Law is starting to sputter, and the exponential performance curve is nearing its end, so the need to refresh systems so rapidly may go away.

Organizations like ITRenew hope that the current pandemic does not blind businesses to the larger crisis to come - climate change - which makes it imperative to find ways to reduce the waste of resources. “For the first time ever in history, we are all fighting a common battle and maybe people will pull together after this, and things will become less polarized globally,” Fenn said.
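To see why life extension matters so much, a rough, purely illustrative calculation helps. Only the 76 percent pre-use share comes from Fenn's analysis; the total lifecycle footprint and the lifetimes below are assumed numbers chosen simply to make the arithmetic concrete.

```python
# A rough illustration of how annualized embodied carbon falls when a
# three-year first life is followed by a second life. The absolute figures
# and lifetimes are assumptions, not ITRenew data; only the 76% pre-use
# share comes from the lifecycle analysis quoted in the article.
EMBODIED_SHARE = 0.76     # pre-use share of lifecycle CO2
LIFECYCLE_CO2_T = 10.0    # assumed total lifecycle CO2 per rack, in tonnes

def annualized_embodied(years_in_service: float) -> float:
    """Embodied CO2 attributed to each year of service, in tonnes."""
    return EMBODIED_SHARE * LIFECYCLE_CO2_T / years_in_service

first_life = annualized_embodied(3)        # hyperscaler refresh cycle
extended_life = annualized_embodied(3 + 5)  # plus an assumed second life

print(f"3-year life: {first_life:.2f} t CO2/year of embodied carbon")
print(f"8-year life: {extended_life:.2f} t CO2/year of embodied carbon")
print(f"Reduction:   {100 * (1 - extended_life / first_life):.0f}%")
```

On these assumptions, each year of useful service carries around 60 percent less embodied carbon once a server gets a second life - a deferral rather than an avoidance, as Fenn says, but a substantial one.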
When speed of deployment is critical, Vertiv has you covered. Hot-scalable, isolated, power-dense critical infrastructure from the experts in digital continuity — just in time to meet your unique needs. With the Vertiv™ Power Module 1000/1200, you can rapidly construct redundant blocks of 1000 or 1200 kVA/kW critical power infrastructure for your new or existing facility, giving you added capacity without overburdening IT resources. Plus, the hot-scalable infrastructure ensures business continuity by allowing you to deploy additional units without taking critical loads offline.

Why choose Vertiv Power Module?
• High power density built around market-leading Liebert® UPS technology
• Energy-efficient operation with airflow containment to ensure optimal equipment conditions
• Close to “plug and play” functionality minimizes site work and speeds deployment
• Hot-scalable units eliminate the need for downtime when boosting power capacity

Vertiv.com/PowerModule
© 2019 Vertiv Group Corp. All rights reserved.
Useful Citizens
How data centers can make way for renewables
Peter Judge Global Editor
If data centers share their onsite backup with the grid, we could all benefit
The world has to move to renewable energy sources, to reduce emissions and stave off global warming, but adding those sources to the grid, and removing carbon from the system, turns out to be a complex issue - one that has to involve the customer and the suppliers.

The biggest problem is that renewable energy sources are intermittent. Solar power is only generated during the local daytime, and wind power depends on the weather. Even hydroelectric power, which seems so steady, will vary over the year, because the water flow in the rivers fluctuates. This means that over time, the mix of energy on the grid fluctuates. In some times and places, it’s pretty green, because the renewables are flowing. At other times, the old fossil-burning power stations have to fire up and the mix is dirtier.
Electrical utilities can’t do much to change the timetable of renewable energy sources. So, to cut out the carbon, they have to move to the other side of the equation. They need to work with their customers to try and move demand to the times when the supply is at its greenest. “As we progress through this energy transition, it's very, very likely that wind and solar will penetrate further into our generation mix,” according to Mohan Gandhi. “The EU aims to get about 60 percent variable renewables into our generation mix by 2050 - a big increase from about 15 percent now,” he told a panel session at DCD’s virtual Energy Smart event this year. “For us to have a 100 percent available electricity grid, when we introduce variability, we need to introduce an equal and opposite amount of flexibility,” said Gandhi. For every MWh of variable renewable added to the grid, we need a MWh of the load
that can be flexible and move to when that renewable energy is available. “Data centers could play a very important role in creating that flexibility and integrating renewables,” said Gandhi. This may not be immediately obvious. Data centers are large energy consumers (maybe two percent of grid capacity) but they want power continuously to support their critical loads. Gandhi points out that data centers all have backup power and local energy stores: “They are players with the ability to instantaneously move loads - and they are located in exactly the sort of places that energy grids are typically strained, which is urban areas.” There’s a spectrum of ways in which data centers (or any other industrial facility with backup power) can help, ranging from switching voluntarily to backup power for a period to remove the load from the grid,
to actually generating excess capacity and feeding that into the grid for short periods (see box out). All this is possible, but Gandhi says: “There's a disconnect between what the industry could do and what the individual data center operator wants to do. The technology is there, but there's something stopping the business model.”

One problem is that there is no universal problem. At DCD’s virtual event, Energy Smart 2020, Stephan Stålered, senior project manager at Swedish utility Ellevio, pointed out that different countries have different requirements - and will come up with different solutions. “In the Nordic countries, we normally have a quite stable base production generation,” said Stålered. In particular, there is abundant hydroelectric power, which is more stable than solar or wind power, but still has variations: “We are looking at a couple of months every year when we have a lack of power.”

“Hydro has its own particular requirement for flexibility, and it's typically seasonal, so that's the flexibility that they would be looking for in Sweden and in some of these Nordic markets,” said Gandhi, in the same webinar. “In other markets, for example, Germany, Holland, and the UK, when they have a far larger proportion of wind, they need more short term flexibility: you're looking in the minutes versus in the seasons.”

In markets that need a “real time” response, data centers can provide it “because they are large and instantaneous,” said Gandhi. And Europe is probably the most advanced region in using flexible power to transition towards variable renewables, said Gandhi: “Europe is almost like a pilot for the rest of the world to follow.” Stålered wants to get data centers involved: “Data centers are consuming a lot, and they have redundancy that they might be able to sell back to the grid, so we can continue to have a fully stabilized grid for all. We are looking to establish a 'flex' market for this in Stockholm.”

Given the complexity and variety within the problem, there’s a really big difficulty in implementing a solution. There needs to be an economic model which compensates data centers for allowing their energy to be used for the good of all. And that model has to include regulations and taxes imposed by governments, and ways to manage the expectations of customers and partners in the ecosystem. To invest in any sort of system to go off grid voluntarily, or join a fast frequency response scheme, operators need to know what the return will be. “We want to be a part of the green strategy for all the country, so we try to do whatever we can do, but our owners or shareholders would like to have some certainty,” says Halvor Bjerke, COO of Scandinavian provider DigiPlex. “We need to plan our investments for a period of maybe 10 to 15 years at least. If the government might change the incentives down the road, it's hard for us to take the decision.”

Customer relationships can be a big problem, said Bjerke: “We have some commitments to the customers that we need to comply with.” These service level agreements (SLAs) can limit the amount of time a facility can run on batteries or generators, Bjerke says. “It’s very difficult to synchronize with users - and in addition many data centers' loads are critical to society. Customers always go first, but if you can do added value to the local community, that would be brilliant.”

Fundamentally, for operators and their customers, their business relies on the loads running in the data centers. So any stored energy used to help the grid must be entirely “spare.” DigiPlex is investigating whether to deliberately overspecify its battery rooms, in order to have capacity to support the grid, said Bjerke. “That's what they're trying to do every time we build something or every time we develop even our legacy data centers,” he said. “It's important to see if we actually can exceed the battery capacity. We're using lithium-ion batteries for all our new projects - and they have a very small footprint.” However these batteries have a lifecycle of around 15 years, and that is why future planning is crucial: “This is the kind of investment that you do one time - but you need to know that you have those incentives for a certain period of time.”

“The grid has a very clear need now and in the near future,” said Gandhi, “but it's very difficult for data centers to do a risk-reward calculation because it's very difficult for them to put a value on the flexibility that they could provide, because local flexibility markets are still quite immature.”

Fundamentally it’s about the price the grid pays for assistance, said Stålered. There are multiple markets, including for “flex” power or “intraday” power: “You can be in two or three markets, to sell your overcapacity. But if the price is too low, you will not be in the market.” That’s held back the UK flexibility markets, said Gandhi: “In the UK there's a liquidity issue, as people don't feel like they receive remuneration adequately for the flexibility that they have supplied.”

These issues will get ironed out over time, because of the underlying need, that just isn’t going away: “In Stockholm, especially, I would say that we will have a lack of power until at least 2030. It might be even longer,” said Stålered. The world also needs to phase out fossil fuels in transport and heating, so it will move to electric vehicles, adding still further to the demand on the grid, he went on: “The market is there, and the market will be there for at least 10 to 15 years.”
And work like district heating can add a different dimension - by giving waste heat, data centers can displace fossil fuels used in heating, either directly or by the grid. There’s no single silver bullet to solve this, and it may need data scientists to work out what incentives will work in each different setting, said Gandhi: “There are so many different forms of flexibility across different markets across different time horizons - even including future planning, which is a form of flexibility.”

One tricky issue is diesel generators. It can be that switching on a diesel genset at a data center can be the best thing to do, to iron out a local grid problem without the need for a large plant to switch on elsewhere. “You have to take a zoomed out approach to this and take it all into account,” said Gandhi. But local laws can make that a problem. “The laws in the UK forbid diesel generators to be operated on the grid. If there were a route to relaxing this, it would be good. It’s up to policy makers to address the whole CO2 footprint of temporarily using diesel gensets on the grid.“

What we need, he said, is a set of business models and incentives to develop a market which encourages movement in the right direction. They should be tailored to each location, which might add to the headache for a multinational operator dealing with multiple sets of regulations. “We have to always understand the data centers will operate in their own individual interest,” said Gandhi. “Markets are a very good way of engaging people - you get a price signal, and data centers can really understand what they're getting and what they're risking. And the other benefit is the market is typically technology-neutral. It could apply to a steel mill, or a data center, directly or via an aggregator pool.”

How important are data centers? Gandhi said: “Data centers have a significant role and that role will become more significant and more feasible in the next 10 years.”

Degrees of support

There are various ways data centers can adapt to the grid, shifting their load to times when electricity is greener, or else providing power to stabilize the grid without switching on large fossil plant. Research by the Lawrence Berkeley National Laboratory (LBNL) has found that energy consumption can be reduced for a short time, by measures like setting a temporarily higher air temperature, with normal cooling resumed when the load on the grid is less.

Data centers have an uninterruptible power supply (UPS), designed to support the data center when the grid fails, consisting of an alternative source of power (usually diesel gensets), and enough energy storage (typically batteries) to keep things running till the gensets fire up. Data centers could switch over to the UPS voluntarily, when energy is less green, leaving the grid and reducing demand - effectively becoming invisible.

Alternatively, operators can hand over some measure of control, in a scheme called “UPS-as-a-reserve” (UPSaaR) which treats UPS batteries as a “virtual power plant” - and pays operators for the power they allocate to grid support. In some countries, the utilities integrate the UPS into the grid’s normal control mechanism: when the grid is loaded, its frequency (in the UK, 50Hz) varies. Data centers that sign up to a system known as firm frequency response (FFR) will find their UPS delivers small amounts of power to the grid at times when it is stressed.

“Here’s a rundown of the sorts of ways data centers can get involved,” said Gandhi. “They can peak shave, they can shift load, and then they can use either their UPS systems with batteries or diesel generators for short term load or for frequency response.”
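To make the frequency response idea concrete, here is a minimal sketch of the control logic involved. The 50Hz nominal frequency matches the UK grid mentioned above; the trigger thresholds, power figures and reserve are illustrative assumptions, not any operator's or utility's real settings.

```python
# A minimal sketch of firm-frequency-response-style control logic: when grid
# frequency sags under load, the UPS briefly discharges its batteries to the
# grid; when frequency runs high, it can absorb power instead. All thresholds,
# power levels and the reserve figure are assumptions for illustration.
NOMINAL_HZ = 50.0
LOW_TRIGGER_HZ = 49.8    # assumed threshold to start exporting power
HIGH_TRIGGER_HZ = 50.2   # assumed threshold to absorb power
EXPORT_KW = 500          # assumed share of UPS capacity offered to the grid
MIN_RESERVE_KWH = 1000   # energy always held back for the facility's own ride-through

def ffr_setpoint(grid_hz: float, battery_kwh: float) -> float:
    """Return a battery power setpoint in kW: positive = discharge to grid."""
    if battery_kwh <= MIN_RESERVE_KWH:
        return 0.0                 # never dip into the protected reserve
    if grid_hz <= LOW_TRIGGER_HZ:
        return EXPORT_KW           # grid stressed: deliver a small boost
    if grid_hz >= HIGH_TRIGGER_HZ:
        return -EXPORT_KW          # surplus on the grid: soak it up
    return 0.0                     # within the dead band: do nothing

# Example: a dip to 49.75Hz with 4,000kWh in the battery room
print(ffr_setpoint(49.75, 4000))   # -> 500.0 (kW exported)
```

In a real deployment this logic would sit in the UPS controller or an aggregator's platform, with the facility's own ride-through reserve always taking priority over any grid service.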
DCD>Magazine | The Renewable Energy Supplement

Out now

This article featured in a free digital supplement on renewable energy. Read today to learn about how Google stays deep green, how to be transparent with customers, the birth of the renewable economy, microgrids, and much more. bit.ly/DCDSupplement
Introducing the DEKA SHIELD from East Penn

Our exclusive Deka Shield program gives you peace of mind and exclusive additional warranty protection for your Deka batteries. This benefit is just one offering of Deka Services, the North American service network operated by East Penn.

Exclusive DEKA SHIELD
Deka Shield is an innovative and exclusive program to provide optimum battery performance and life for your Deka batteries, no matter the application: Telecom, UPS, or Switchgear. By allowing Deka Services to install and maintain your batteries, your site will receive extended warranty benefits.

How do I sign up for the Deka Shield program?
• Installation must be completed by Deka Services or a Deka Services approved certified installer
• The application and installation area must be approved by Deka Services prior to installation
• Access for Deka Services to perform annual maintenance during the full warranty period

What coverage does the Deka Shield program provide?*
• Full coverage labor to replace any defective product
• Full labor to prepare any defective product for shipment
• Freight allowance for new product to installation site
• Full return freight for defective product
• Extended warranty
* Terms and conditions apply – please contact us for additional information.

Extensive DEKA SERVICES
Deka Services provides full service turnkey EF&I solutions across North America. Their scope of services includes, but is not limited to:
• Turnkey EF&I Solutions • Battery Monitoring • Battery Maintenance
• Battery Capacity Testing • Removal and Recycling • Engineering
• Project and Site Management • Logistical Support • Installation

All products and services are backed by East Penn, the largest single site lead battery manufacturer in the world. With over 70 years of manufacturing, battery, and service expertise, let Deka Services be your full scale power solution provider.
Open up 5G
The democratization of 5G and mobile networks
Vlad-Gabriel Anghel Contributor
In the new generation of mobile networks, the global pandemic is accelerating a trend to replace specialized hardware with software, Vlad-Gabriel Anghel reports
We live in a changed world as a result of the global pandemic. Deeply rooted concepts are rapidly changing, from creating working spaces at home to re-defining how networks operate. A lot of progress is being achieved in an extremely short time frame, a small positive outcome of a terrible time.

Tensions between the US and China are starting to affect digital infrastructure. Under pressure from the Trump administration, the UK and other nations are moving towards banning new Huawei equipment in their infrastructure. Mobile networks seem ripe for disruption through open source software technologies, with development advancing at an astonishingly rapid pace. However, the US has been playing catch-up when it comes to mobile network development and research, with estimates from Deloitte placing China’s investment $24 billion ahead of that of the US. The US is trying to jumpstart its own ecosystem with various efforts, including the recent DARPA four-year program on 4G and 5G networks - OPS-5G.

These initiatives are creating a new mindset in which mobile networks - and RAN (radio access networks) - are designed to be as software-defined and hardware-agnostic as possible. Operators want to move away from
Operators want to move away from proprietary hardware in the mobile networks which carry data between the endpoint (a consumer smartphone) and the network services operating in data centers. This will make it easier to change providers, either for political reasons or to save costs. The largest and most successful vendor in this space is - you guessed it - Huawei, with hardware that is considerably less expensive than rivals like Nokia and Ericsson. The approach is nothing new. Software-defined technologies have been adopted by large players like Facebook entering emerging markets, to enable them to adopt “no-name” hardware and drive costs down. Software-defined networking technologies are evolving rapidly as hyperscalers chase efficiency gains well past the point of diminishing returns - and this mindset has been heavily accelerated recently. Some experts hope that open source could solve multiple worries created by the much larger numbers of devices connected to 5G networks (including security, resource sharing, and bandwidth), as well as tackle undue government influence over vendors by reducing the reliance of these networks on proprietary hardware.
The Classic RAN
To understand how and why open source could be beneficial, we need to understand the inner workings of our current mobile networks’ traditional architecture.
What exactly happens when you access a particular service from your smartphone while using mobile data? A traditional mobile network is made up of three networked subsystems: the access network, the transport network (sometimes referred to as backhaul), and the core network. The access network is formed of all the equipment required for the mobile phone to connect to the carrier network - radio towers or antennas, depending on the generation of the mobile network. The transport network, or backhaul, comprises all the subsystems that carry the data to the carrier’s core network (read: data center), where services can be accessed. The access network and the core network are each made up of both hardware and software. The core network, being the carrier’s data center, has already benefited from a plethora of open source hardware and software technologies - and, overall, data center operators have leveraged vendor interoperability through established, standardized, and open interfaces. The same cannot be said of the access network, which has seen changes in design as each generation is adopted en masse. Proprietary hardware, software, and interfaces dominate solutions in this space - an operator can end up locked into a particular vendor’s ecosystem.
Figure 1 - Typical representation of mobile networks
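To make that three-part split concrete, here is a minimal, purely illustrative Python sketch of the path a request takes from a handset to the carrier’s core. Every class and function name below is invented for this example; none of them come from a real RAN or core network stack.

```python
# Illustrative only: a toy model of the access / transport (backhaul) / core
# split described above. Names are invented, not from any real network stack.
from dataclasses import dataclass


@dataclass
class Packet:
    subscriber: str
    service: str          # e.g. "video", "voice", "web"
    payload: bytes


class AccessNetwork:
    """Radio towers/antennas: the handset's entry point into the carrier network."""
    def receive(self, packet: Packet) -> Packet:
        print(f"[access]   radio uplink from {packet.subscriber}")
        return packet


class TransportNetwork:
    """Backhaul: carries traffic from the cell site to the carrier's core."""
    def carry(self, packet: Packet) -> Packet:
        print("[backhaul] forwarding towards the core data center")
        return packet


class CoreNetwork:
    """The carrier's data center, where the requested service actually lives."""
    def serve(self, packet: Packet) -> str:
        print(f"[core]     serving '{packet.service}' request")
        return f"response for {packet.subscriber}"


def handle(packet: Packet) -> str:
    # A request traverses all three subsystems in order.
    access, backhaul, core = AccessNetwork(), TransportNetwork(), CoreNetwork()
    return core.serve(backhaul.carry(access.receive(packet)))


if __name__ == "__main__":
    handle(Packet(subscriber="IMSI-001", service="video", payload=b"GET /stream"))
```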
Figure 2a - Representation of a typical 2G layout
Within legacy networks like 2G and 3G, the typical access network would consist of antennas at the top of a mast, connected via RF/coaxial cables to a remote radio unit (RRU) and a baseband unit (BBU) housed in a baseband cabinet at the bottom (Figure 2a). As mobile subscriber numbers increased, so did the need for more efficient and reliable designs; a main issue was the attenuation and dispersion of the signal through the RF cables. Late 3G and 4G access network designs use a new piece of hardware, the remote radio head (RRH), which replaces the RRU of previous designs. It translates radio signals into data packets while sitting at the top of the mast, and fiber optic cables carry the signals down to the baseband unit (Figure 2b).
Figure 2b - Evolution of design in late 3G and 4G networks
These design iterations were primarily driven by the vendors themselves, so proprietary solutions were engineered from top to bottom, with little attention given to standardization and transparency. This in turn has forced operators to choose one vendor and stick with them when deploying access networks - proprietary software running on proprietary hardware, communicating through proprietary interfaces.
Enter OpenRAN
Several solutions have been proposed to tackle this issue, such as cloud RAN (cRAN), virtual RAN (vRAN), and OpenRAN; however, widely available commercial implementations have only recently started to materialize. The aim is to disrupt deeply rooted assumptions about network architecture, with the radical suggestion of using off-the-shelf servers and emulating everything the dedicated hardware does through open source software technologies and standards. This goes as far as replacing the RRU mentioned above with software-defined networking solutions running on commercial, off-the-shelf (COTS) servers, and replacing the proprietary communication interfaces between the BBU and RRU with standardized, open source ones. This effectively shifts the access network design to more closely resemble that of a data center. Given the efficiency and sustainability advances that standardization has allowed in the data center, this is at the very least an intriguing idea to explore.
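As a thought experiment, the Python sketch below shows the shape of that idea: a single published interface between the radio unit and baseband software running on a COTS server, so either side can be swapped independently. The names (FrontHaul, SoftwareBBU, the vendor classes) are invented for illustration and are not the actual interface definitions standardized by the OpenRAN community.

```python
# A sketch of the idea only: one open interface between radio hardware and
# baseband software, so any compliant vendor's radio can be swapped in.
from abc import ABC, abstractmethod


class FrontHaul(ABC):
    """Stand-in for an open, standardized RU <-> BBU interface (hypothetical)."""

    @abstractmethod
    def pull_iq_samples(self) -> bytes:
        """Fetch digitized radio samples from the radio unit."""

    @abstractmethod
    def push_downlink(self, frame: bytes) -> None:
        """Send a baseband frame back towards the antenna."""


class VendorARadioUnit(FrontHaul):
    def pull_iq_samples(self) -> bytes:
        return b"\x01\x02"          # pretend samples

    def push_downlink(self, frame: bytes) -> None:
        print(f"vendor A RU transmitting {len(frame)} bytes")


class VendorBRadioUnit(FrontHaul):
    def pull_iq_samples(self) -> bytes:
        return b"\x03\x04\x05"

    def push_downlink(self, frame: bytes) -> None:
        print(f"vendor B RU transmitting {len(frame)} bytes")


class SoftwareBBU:
    """Baseband processing as plain software on a COTS server."""

    def __init__(self, radio: FrontHaul) -> None:
        self.radio = radio          # any radio that speaks the open interface

    def process_one_slot(self) -> None:
        samples = self.radio.pull_iq_samples()
        self.radio.push_downlink(samples[::-1])   # trivial stand-in for real DSP


if __name__ == "__main__":
    # The same baseband software drives either vendor's radio unchanged.
    for radio in (VendorARadioUnit(), VendorBRadioUnit()):
        SoftwareBBU(radio).process_one_slot()
```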
Any new concept wins acceptance over time through proven results and education, but software-defined RAN vendors are already touting incredible levels of disruption and cost reduction. For instance, Mavenir’s publicly available FAQ document claims: “What Mavenir can do through virtualized software with our 3,300 employees, takes Nokia and Ericsson over 100,000 people to accomplish.”
It is important to note that OpenRAN applies to all generations of mobile networks, including legacy networks like 2G and 3G. The GSMA Mobile Economy 2020 report predicts that 4G will account for the largest share of commercially available RAN rollouts by 2025, but legacy networks are expected to still serve around 22 percent of the total subscriber base. In theory, an operator running its mobile networks on OpenRAN technologies would be able to allocate resources more efficiently as its user base shifts from generation to generation, with each function served by software running on COTS servers that can be managed through DCIM.
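As a back-of-the-envelope illustration of that “in theory” claim, the sketch below divides a shared pool of COTS servers in proportion to each generation’s subscriber share. The server count and most of the percentages are made up for the example; only the roughly 22 percent legacy figure echoes the GSMA forecast above.

```python
# Illustrative only: re-dividing one COTS server pool as the subscriber mix
# shifts between generations. Figures are examples, not operator data.
from typing import Dict


def allocate_servers(total_servers: int,
                     subscriber_share: Dict[str, float]) -> Dict[str, int]:
    """Split a COTS server pool in proportion to each generation's subscribers."""
    allocation = {
        gen: round(total_servers * share)
        for gen, share in subscriber_share.items()
    }
    # Keep the total honest after rounding by adjusting the largest slice.
    drift = total_servers - sum(allocation.values())
    largest = max(allocation, key=allocation.get)
    allocation[largest] += drift
    return allocation


if __name__ == "__main__":
    shares_2020 = {"2G/3G": 0.30, "4G": 0.60, "5G": 0.10}   # illustrative
    shares_2025 = {"2G/3G": 0.22, "4G": 0.56, "5G": 0.22}   # legacy ~22% per GSMA
    print(allocate_servers(100, shares_2020))
    print(allocate_servers(100, shares_2025))   # same pool, re-divided in software
```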
The political hijacking
As with any technology seeking global impact, organizations supporting this movement have banded together in the OpenRAN Policy Coalition, whose members include Intel, Mavenir, Qualcomm, AT&T, Vodafone, NTT, and Cisco. The group is trying to push forward the adoption of “open and interoperable solutions in the Radio Access Network (RAN) as a means to create innovation, spur competition and expand the supply chain for advanced wireless technologies including 5G.” Hardware manufacturers in this space, such as Ericsson and Nokia, have recently joined, promising to support global collaboration on OpenRAN developments. However, those manufacturers might be backing two horses: in their recent NTIA filings, both have pushed for further grants and support for proprietary hardware solutions while at the same time asking for government funding of the OpenRAN movement.
All these movements point to a skewed, politicized view in the current social climate, which presents OpenRAN as an answer to all the security concerns that come with government-backed vendors, as well as a weapon in the trade war between China and the US. Despite these hopes, this is an emerging technology, and mass rollout is still a few years away, if not more.
OpenRAN IRL
In real-life scenarios, OpenRAN needs a few more years to mature enough to match current hardware-based 5G networks. Meanwhile, hardware vendors are releasing second and even third generations of that proprietary hardware. Beyond efficiency gains, these come with advanced features such as 5G network slicing, carrier aggregation, and more, keeping proprietary hardware attractive to operators. The reliability of OpenRAN has not yet been tested extensively: the only commercially available deployment has just seen the light of day in Japan, with Rakuten as the operator and Mavenir as the software provider, as part of an ongoing nationwide 4G rollout. It does seem, however, that OpenRAN has enabled Rakuten to compete with established players like NTT DoCoMo and KDDI, offering better services at approximately half the cost for the end consumer.
Looking back, hardware being replaced by software has become the norm as technology advances. Early AM/FM radios needed dedicated equipment to filter out interference, tune in to a particular frequency, and decode the signal, making them the size of a shoebox. As CPU designs became more efficient and grew in performance, along with the development of highly specialized instruction sets, you can now tune in to your favorite radio station straight from your smartphone. Why not apply the same mentality to mobile networks? It seems that, in future, mobile networks will be defined by software and, judging by current movements, they could be fully open.
Look after yourself and your colleagues
Your health is more important than your work
Take a break. You don’t need me to tell you this is a difficult time. There’s a pandemic going on that most governments are ill-prepared for; authoritarianism is growing in Hong Kong, India, and even the US; climate change has already begun to exact its toll on vulnerable populations. I could go on. But you’ve made it this far. You’re still here. Things will get better. It’s okay to pause, to unplug from all that is happening, and find peace where you can. Yes, there is still much fighting to do: from wearing masks and practicing social distancing, to campaigning for civil rights, to voting for effective climate policy. But don’t let it overwhelm you. Stress, depression, and fear are all on the rise. We’re lucky that this industry is more insulated from Covid-19 than most, but it’s still affected - and we are not just jobs, we’re humans, frail and fraught in our own way.
If you are an employer, reach out to your staff. Let them know you’re in this together, and that you understand if they’re not always going to be completely focused. They’re going through a lot, just like you. If you are an employee, don’t feel guilty if your productivity has waned. You’re not an automaton, no matter how much you work with machines. I have spoken to too many employees and their bosses who try to rationalize this as a ‘new normal’ - or the “next normal” - we must accept. I get it: many businesses are cratering, and they’re desperate. But this crisis is exacting a toll, and it’s unreasonable to expect employees to shield their company from it by absorbing it all in their personal lives. A shared crisis means shared suffering. You’re not a data center. Downtime is allowed. Take a break. Don’t think about Q4 results. Don’t think about the craziness of our time. Unwind, recharge, recuperate. And then come back to help the fight.
Providing Speed the Market Needs.
With Gray’s 60-year history in the industrial sector, we are more than equipped to meet the unique challenges data centers present. Gray has built mission critical facilities that house cloud-based services for domestic and international customers, including a Tier 4 facility. From concept to commissioning and beyond, you can count on Gray to make your vision a reality. Zach Lemley, Senior Manager, Business Development, zlemley@gray.com, gray.com
No matter the environment, Starline’s at the center.
Hyperscale, Colocation, Enterprise. Time-tested power distribution for every environment. Starline Track Busway has been the leading overhead power distribution provider—and a critical infrastructure component—for all types of data centers over the past 30 years. The system requires little to no maintenance, and has earned a reputation of reliability due to its innovative busbar design.
StarlineDataCenter.com/DCD