DCD Magazine: Why do we need a Quantum Internet?


Issue 43 • January 2022 datacenterdynamics.com

Why do we need a Quantum Internet? The unexpected payoff from linked qubits, and the quantum magic that makes it possible

Gas flaring CEO: Ingvil Smines of Earth Wind & Power

Cooling supplement: New ways to get heat out of your data center

Wiring Antarctica: A subsea cable links the last continent

Who’s a good bot? Robot dogs report for security duty




ISSN 2058-4946


December 2021 / January 2022


6 News - Mega mergers, a giant Dutch Meta site, Google staff issues and a shootout in a crypto mine

17 Why we need a Quantum Internet - A quantum version of TCP/IP is on its way - and it could drive a quantum computing breakthrough


24 The CEO interview - “150 billion cubic meters of gas is flared every year,” says Earth Wind & Power CEO Ingvil Smines. But why run data centers on that gas?

28 DCD>Awards 2021 - The people who won, in the year our gala ceremony returned to a physical space


31 Cooling supplement - Two-phase cooling finally comes of age, which means trouble for the PUE metric. And we meet a couple of radical new cooling techniques

45 Joining the cavern club - Digital Metalla wants to create a secure underground facility in a disused Sardinian mine


48 Who’s a good bot? - CEO Wes Swenson explains why Novva is using Boston Dynamics’ Spot dog robot in Utah

53 Antarctica comes in from the cold - Could a subsea cable finally link up the last unconnected continent?


61 The DPU dilemma - There’s a major shift in servers - and it’s coming from the humble NIC

66 Taking the nuclear option - Nuclear power has low emissions. Could data centers fund a new generation of small reactors?


70 The dangers of big cloud providers - Giant AWS outages should finally teach us not to trust in the technical abilities of Big Cloud

Issue 43 • December 2021 / January 2022 3






From the Editor

Meet the team


There is always something new on the way, and in this issue we welcome a host of unexpected arrivals. We look at the near and far future of networking, we examine a previously unthinkable way to power your data center, we track fiber through Antarctic ice, and we find beautiful new ways to cool your facility. We also quiz a Swedish CEO who wants to save the planet by burning more gas. I know, we have our doubts. And finally, we hear about a Utah facility patrolled by robot dogs.

Quantum comms
Let's start with the Quantum Internet. Connecting quantum computers is a whole lot more complex than linking AI servers. How can you get a signal out of a quantum system - which depends on complete isolation to function? Researchers have found a very clever answer (distributing entanglement through teleportation), and they've started writing code for the network. But the big payoff comes later: linking multiple quantum systems could overcome the basic problem of bringing enough qubits to bear on big problems, and unleash a quantum computing revolution.

Winners all
In 2021, DCD held its gala Awards event again - in person. Once again we celebrated the best projects, and the most inventive players in the industry. Meet the winners in our three-page report (p28).

Flare up
Ingvil Smines wants to burn more gas to save the planet - so we grilled the Earth Wind & Power CEO on what she means. It turns out there's a lot of waste gas burnt off at oil wells. Environmentalists say we should leave it in the ground. The EW&P boss has other ideas (p24).

1 nsec - the timing precision required in a quantum Internet



Partner Content Editor Claire Fletcher
Head of Partner Content Graeme Burton @graemeburton
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Dot McHugh
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison

Chief Marketing Officer Dan Loosemore

Head Office DatacenterDynamics 22 York Buildings, John Adam Street, London, WC2N 6JU


Peter Judge DCD Global Editor



News Editor Dan Swinhoe @DanSwinhoe

Conference Producer, APAC Chris Davison



Editor Sebastian Moss @SebMoss

Conference Director, NAM Kisandka Moses

Networks and cooling
Quantum nets are some years off, but there's change happening now: SmartNICs are evolving to take over your networks (p61). And we find out how US and New Zealand agencies plan to roll out a network to the chilly wastes of Earth's most isolated inhabited spot: South Polar research stations (p53).

Speaking of chilly, our supplement finds new ways to get heat out of your facility. The thermal vibration bell is a quivering metal tree with mechanically assisted radiating leaves (p42). And as two-phase cooling finally comes of age (p34), deployed in standardized form in HPC facilities, the industry needs to jump to it, as PUE is going to become obsolete (p38).

We find out how a Sardinian firm wants to host IT underground, in a mine that dates back to Roman times (p45). And then there's Spot. The well-trained Boston Dynamics robot dog is now on patrol in Novva's Utah data center. Good boy, Spot (p50)!

Global Editor Peter Judge @Judgecorp




© 2022 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.




The biggest data center news stories of the last three months

Partners Group buys Iceland’s atNorth
atNorth has two facilities in Iceland totaling 83MW, and is building an 11MW site in Stockholm, Sweden. Partners Group says atNorth is its fourth digital infrastructure investment in 2021.

TikTok’s €420m Irish data center delayed due to Covid-19
Construction shutdowns have pushed the Chinese social media company’s facility opening from early 2022 to late in the year.

Chip shortages cause server delays at Final Fantasy XIV data centers
Players are struggling to access the video game Final Fantasy XIV because publisher Square Enix has been unable to acquire servers fast enough amid semiconductor shortages.

KKR & GIP acquire CyrusOne for $15bn, American Tower acquires CoreSite for $10.1bn
CyrusOne and CoreSite are both to be taken over in multi-billion dollar acquisitions. CyrusOne will be taken private in an acquisition by KKR and Global Infrastructure Partners, while CoreSite has been taken over by American Tower, a publicly-listed Real Estate Investment Trust (REIT) traditionally focused on cell tower infrastructure. The two data center REITs were both rumored to be potentially exploring sales. The deals mean three such publicly-listed data center firms have been acquired this year.

CyrusOne, which operates around 50 data centers globally, was acquired for $15bn. “We have built one of the world’s leading data center companies with a presence across key US and international markets supporting our customers’ mission-critical digital infrastructure requirements while creating significant value for our stockholders,” said Dave Ferdman, co-founder and interim president and CEO of CyrusOne. The deal is expected to close in the second quarter of 2022. Upon completion of the transaction, CyrusOne will be a privately held company wholly owned by KKR and GIP. KKR’s investment is being made primarily from its global infrastructure and real estate equity strategies, and GIP’s investment is being made from its global infrastructure funds.

On the same day as the acquisition was announced, American Tower bought CoreSite for $10.1 billion. CoreSite operates around 25 data centers across the US. American Tower said the deal will be “transformative” for its mobile edge compute business, allowing it to establish a “converged communications and computing infrastructure offering with distributed points of presence across multiple Edge layers.”

Tom Bartlett, American Tower CEO, said: “We are in the early stages of a cloud-based, connected and globally distributed digital transformation that will evolve over the next decade and beyond. We expect the combination of our leading global distributed real estate portfolio and CoreSite’s high quality, interconnection-focused data center business to help position American Tower to lead in the 5G world.”

Another publicly-listed REIT, QTS, was acquired by Blackstone this year for $10bn, while Cyxtera recently became the latest publicly-listed data center firm after completing a SPAC merger with Starboard Value Acquisition Corp. bit.ly/MegaDataCenterDeals


China says local govs should stop “blind and disorderly development” of data centers to hit green targets
Local governments have been told “in principle” not to provide incentives to data center companies to build facilities in areas that aren’t classified as national hubs by the government.

US FTC sues to stop Nvidia’s Arm acquisition, says it would harm data center chip innovation
The US agency said that the deal would give Nvidia too much control over the technology and designs of rival firms, and give it the means and incentive to stifle innovation. The vote to issue the administrative complaint was 4-0.

Korea Gas and KT announce data center at LNG terminal
The two companies will develop a data center that would use the cold energy from regasification at the LNG import plant near Seoul, Korea. The combination would save energy at the data center by putting waste cold energy to use. The use of LNG cold energy could save around 12MW of power at a data center such as the one KT runs in Yongsan.

AtlasEdge acquires 12 Colt data centers across Europe
AtlasEdge, the recently formed joint venture between Liberty Global and DigitalBridge, has acquired twelve data centers from Colt Data Centre Services (DCS). Colt said it is focused on building and developing larger hyperscale data center sites, and the divested facilities were ‘better suited’ to an operator exclusively focused on developing the emerging colocation market within Europe.

The portfolio includes data centers in 11 tier one and tier two markets across Europe, including Amsterdam, Barcelona, Berlin, Brussels, Copenhagen, Hamburg, London, Madrid, Milan, Paris, and Zurich. AtlasEdge said it now operates more than 100 data centers across 11 countries in Europe. The financial terms of the deal were not disclosed.

Colt Technology Services will become an anchor tenant across multiple facilities. Atlas noted that as well as helping the company enter new geographies, the deal will help establish a collaboration between itself and Colt DCS.

“We are delighted to welcome these sites into our expanding portfolio,” said Josh Joshi, executive chairman, AtlasEdge. “We are tapping into an exciting and emerging market where real-time data traffic is growing and compute is gravitating to the edge of the network. Our approach is open, carrier-neutral and collaborative, and we look forward to working alongside Colt.”

In May, telecoms company Liberty Global and digital infrastructure fund Digital Colony (now DigitalBridge) announced plans to launch AtlasEdge to operate more than 100 Edge data centers across Europe. The deal brings together DigitalBridge’s Edge assets and Liberty Global’s real estate portfolio, with several Liberty Global operating companies acting as anchor tenants: Virgin Media in the UK, Sunrise-UPC in Switzerland, and UPC in Poland. Last month Digital Realty announced it was investing in Atlas.

Colt has six remaining facilities across Europe: two each in London, UK, and Frankfurt, Germany; and one each in Paris, France, and Rotterdam, the Netherlands. It retains its portfolio of facilities across APAC. “Having conducted a thorough review of its portfolio, Colt DCS identified twelve colocation sites that were better suited for an operator such as AtlasEdge, which is exclusively focused on developing the emerging colocation market across Europe,” the company said of the news. It said the hyperscale facilities currently owned and operated by Colt, as well as those in development, are unaffected by the deal.

In July, Japanese conglomerate Mitsui and investment firm Fidelity formed a joint venture to build hyperscale data centers in Japan that would be operated by Fidelity-owned Colt. Niclas Sanfridsson, CEO of Colt DCS, added: “By restructuring and focusing on our hyperscale facilities, we can meet our customers’ needs on-demand with true scalability and efficiency, while meeting their sustainability targets.” bit.ly/AtlasConsumed

Iron Mountain to acquire ITRenew for $725 million
Data center and storage company Iron Mountain has agreed to acquire IT asset disposal and recycling firm ITRenew. Iron Mountain will acquire 80 percent of ITRenew from private equity firm ZMC for approximately $725 million in cash, with the remaining 20 percent acquired within three years of close for a minimum enterprise value of $925 million. The transaction is expected to close in the first quarter of 2022.

Founded in 2000, ITRenew provides decommissioning and asset disposal services for data centers while also reselling recovered hardware. Following the close of the transaction, ITRenew will form the platform for Iron Mountain’s Global IT Asset Lifecycle Management business.

“This strategic transaction marks an important step in advancing Iron Mountain’s position in Asset Lifecycle Management and accelerating our enterprise growth trajectory,” said William Meaney, CEO of Iron Mountain. “ITRenew complements our fast-growing IT Asset Lifecycle Management and Data Center businesses, bringing capabilities to serve some of the largest and most innovative companies in the world.”

ZMC acquired a majority stake in ITRenew in 2017 for an undisclosed amount. Morgan Stanley & Co. LLC is serving as financial advisor, and Weil, Gotshal & Manges LLP is serving as legal counsel to Iron Mountain. bit.ly/RecycledMountain


Zeewolde approves Facebook to build Netherlands’ largest data center
The small town of Zeewolde has granted Meta a permit to build the largest data center in the Netherlands, for its Facebook, Instagram, and WhatsApp applications. The facility, which will potentially have five halls and use 200MW of electrical power, was approved in late December at a meeting of the council of the 22,000-person town, 50km east of Amsterdam in the province of Flevoland.

The data center will be built on 166 hectares (410 acres) of farmland, currently known as Trekkersveld IV (Tractor field IV). The role of data centers has been controversial in the Netherlands, with opponents claiming they soak up available renewable energy and land, and create very few jobs in return. The province of Flevoland currently has a ban on new data center construction, pending a study into their impact on the community. However, Meta evaded the June ban by applying in February. bit.ly/ZeewoldesLargest

Facebook plans huge $29-34 billion capex spending spree in 2022, will invest in AI, servers, and data centers
Up from around $19bn in 2021
Facebook expects to spend tens of billions of dollars on data centers, servers, and offices in 2022. The company, which is undergoing a rebrand to Meta, said that it expected capital expenditures of $29 billion to $34 billion next year - up from $19bn this year. In its latest earnings call, chief financial officer David Wehner said that the increase in expenditure was “driven by our investments in data centers, servers, network infrastructure, and office facilities.” This comes despite the fact that Facebook plans to offer remote work to staff, even after Covid-19 is under control.

Wehner added: “A large factor driving the increase in capex spend is an investment in our AI and machine learning capabilities, which we expect to benefit our efforts in ranking and recommendations for experiences across our products, including in feed and video, as well as improving ads performance and relevance.” As part of the AI spending, it is developing its own in-house AI chips for video transcoding and recommendations, which would only be used in its own data centers and wholesale footprint.

The company did not break down how much would be spent on each area. The planned investment will be Facebook’s largest annual capex yet. Facebook first crossed the $1bn expenditure line in 2012, steadily increasing to $7bn a year by 2017. The next year, costs jumped to $13.9bn, followed by $15bn in 2019. It had planned to spend around $19bn in 2020, but deferred $3bn in infrastructure spend into 2021 due to Covid-19. That helped make 2021 its biggest capex year to date, with $4.5bn spent last quarter alone. Now, however, ‘Meta’ hopes to increase that spend by as much as 79 percent.

News of this surge in spending helped raise the share price of Facebook suppliers, including networking company Arista. “This is a very positive read for Arista as Facebook is one of the two cloud titans that account for a large portion of Arista revenue,” Evercore ISI analyst Amit Daryanani wrote in a note to clients. “This is also a positive for Cisco, to a lesser extent, as we think they may gain some share at Facebook in the 2022/23 time frame,” Daryanani said. bit.ly/FacebooksSpendingSpree
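As a quick sanity check on that 79 percent figure, the jump from roughly $19bn to the top of the guided $29-34bn range works out as follows. This is a back-of-the-envelope sketch; all figures are the ones quoted above:

```python
# Sanity-check the capex growth figures quoted in the story.
capex_2021 = 19e9                              # roughly $19bn spent in 2021
capex_2022_low, capex_2022_high = 29e9, 34e9   # guided range for 2022

low_growth = (capex_2022_low - capex_2021) / capex_2021
high_growth = (capex_2022_high - capex_2021) / capex_2021

print(f"2022 capex growth: {low_growth:.0%} to {high_growth:.0%}")
# → 2022 capex growth: 53% to 79%
```

The top of the range is indeed a 79 percent increase on 2021's spend.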

Facebook turns to AWS as “long-term strategic cloud provider” for acquisitions, third-party collaborations, and AI
Meta has deepened its ties to Amazon Web Services. The company said that it already uses Amazon’s cloud to complement its existing on-premises infrastructure, but will expand its use of the world’s largest cloud provider. Meta said that it will run third-party collaborations in AWS and use the cloud to support acquisitions of companies that are already powered by AWS. It will also use AWS’s compute services for artificial intelligence research and development for its Meta AI group. The two firms work together on improving the performance for customers running PyTorch on AWS, the open source machine learning library primarily developed by Facebook’s AI Research lab. “Meta and AWS have been expanding our collaboration over the last five years,” Kathrin Renz, AWS VP, said. bit.ly/StatusItsComplicated





Attempted theft at Abkhazia cryptocurrency data center leads to gunfight, one dead
A man was killed during an attempted robbery at a cryptocurrency mining data center in Abkhazia, a partially recognized separatist state that is seen as part of Georgia by most nations. According to police, one of the men running the illegal facility accidentally shot one of his own friends during the incident. One of the data center’s operators, Renat Temurovich Pachalia, allegedly used an illegal Kalashnikov assault rifle, while his friends used Kalashnikovs and Makarov pistols, to defend the facility from at least five people, police state. At some point during the firefight, Pachalia accidentally shot his friend, Ardzinba A.B.

Cryptocurrency mining has boomed in Abkhazia due to its unique political situation. The Abkhaz–Georgian conflict has been long and bloody, with Abkhazia seeking full independence and Georgia offering significant autonomy. The conflict spiraled into all-out war in 1992-93, accompanied by the ethnic cleansing of Georgians in the region. Efforts to repair relations were dashed during the 2008 Russian invasion of another disputed Georgian territory, South Ossetia. The five-day war had a lasting impact, with Russia and its allies officially recognizing both Abkhazia and South Ossetia

German court convicts eight over illegal data center in former NATO bunker

as independent states, and stationing military bases in the regions.

That brings us to power. The Enguri hydroelectric power station, whose dam is the world’s second-highest concrete arch dam, spans both Georgia and Abkhazia, so the crucial power plant can only function if the two sides cooperate. This has led to an uneasy truce on the dam, with a 1997 agreement meaning that Georgia gets 60 percent of the energy generated by the 1,320MW power station and Abkhazia gets the remaining 40 percent. Critically, Abkhazia essentially gets that energy for free (consumers pay only for distribution, not generation), leading to consumer prices of just $0.005 per kilowatt-hour in Abkhazia, compared to Georgia’s $0.08 per kWh.

This has made it perfect for Bitcoin and other cryptocurrency mining operations which, after initial hardware costs, profit from the margin between electricity costs and the value of imaginary coins. But a Bitcoin boom risks upsetting the uneasy power balance between the two regions. Georgian politicians claim that since 2018 power consumption in Abkhazia has grown significantly due to crypto mining. bit.ly/CryptoPowerStruggle
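To see why those tariffs matter so much to miners, here is a back-of-the-envelope comparison using the figures quoted above. The rig power draw is an assumption for illustration (roughly typical of an ASIC miner), not a number from the article:

```python
# Illustrative mining-cost comparison using the tariffs quoted above.
# Assumption (not from the article): one ASIC rig drawing ~3.25 kW.
ENGURI_CAPACITY_MW = 1320
abkhazia_share_mw = ENGURI_CAPACITY_MW * 0.40   # 40% share under the 1997 deal

price_abkhazia = 0.005   # $/kWh, distribution charge only
price_georgia = 0.08     # $/kWh

rig_kw = 3.25                      # assumed rig draw
kwh_per_month = rig_kw * 24 * 30   # energy used by one rig in a month

cost_abkhazia = kwh_per_month * price_abkhazia
cost_georgia = kwh_per_month * price_georgia

print(f"Abkhazia's share of Enguri output: {abkhazia_share_mw:.0f} MW")
print(f"Monthly power cost per rig: ${cost_abkhazia:.2f} in Abkhazia "
      f"vs ${cost_georgia:.2f} in Georgia ({price_georgia / price_abkhazia:.0f}x)")
```

Under these assumptions a rig's monthly power bill is sixteen times cheaper on the Abkhaz side of the dam, which is the whole arbitrage.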

A German court has convicted eight people who were involved in operating a data center at a former NATO bunker. Among the illegal services allegedly hosted at the German data center were Cannabis Road, Fraudsters, Flugsvamp, orangechemicals, and the world’s second-largest narcotics marketplace, Wall Street Market. A large-scale attack on Deutsche Telekom routers in November 2016 is also thought to have been controlled via servers hosted there.

The ‘CyberBunker’ facility in Traben-Trarbach, western Germany, was raided by more than 600 police officers in September 2019. Built by the West German military in the 1970s, the site was used by the Bundeswehr’s meteorological division until 2012. A year later, it was sold to Herman-Johan Xennt, who told locals he would build a webhosting business there. Instead the company hosted illegal material, offering help to clients. bit.ly/BadDataCenters

Cryptocurrency miners fled China for Kazakhstan, now they’re causing power shortages
A surge in cryptocurrency mining in Kazakhstan is causing power shortages, forcing the government to temporarily cut off some miners. China’s efforts to ban mining and the high price of some cryptocurrencies have led to a huge growth in mining in the country, with the energy ministry estimating that electricity demand has jumped by eight percent so far in 2021, versus the usual one or two percent.

Since October, six regions in the country have faced blackouts. Following three major power plants in the north going into emergency shutdown last month, the state grid operator, Kegoc, said that it would start rationing power to the country’s 50 registered miners, which consume around 600MW. The Financial Times believes that around 87,849 mining rigs were brought to Kazakhstan from China following the ban. Many are not officially registered with the government, with the Ministry of Energy estimating as much as 1,200MW is siphoned off by illegal miners. bit.ly/CryptoExodus




Contractor Mortenson shuts down Meta/Facebook data center construction site in Utah due to racist graffiti
Contractor M.A. Mortenson shut down work on a Meta/Facebook data center after discovering racist graffiti at the Eagle Mountain, Utah, site. The incident was the second at the construction site in just over a week, with Mortenson imposing a sitewide stand down both times. The graffiti, found at the construction toilets, said “Kill a n***** day 11/29.”

Mortenson said that it will use the time to put additional training, safety, and security enhancements in place, and train workers on its anti-harassment policy. “These measures include but are not limited to respectful workforce training for everyone on the project, additional security cameras, investigative assistance from external resources, access monitoring throughout the site, a heightened security presence and relocation of portable toilets into controlled areas,” the company said in a statement.

Last year, contractor Turner shut down two Facebook data center projects in Ohio and Iowa when racist graffiti - and even a noose - were found on site. bit.ly/AGrowingProblem

Google backtracks on not paying data center contractors a promised pandemic bonus
After union workers threaten action
Google will pay its data center contractors a promised bonus that had been introduced due to the Covid-19 pandemic, and then withdrawn. The company had originally told employees of contractor Modis that they would receive $200 extra a week until the end of the year, if they worked a full week. Then it stopped sending the payments in October, including promised back pay.

Following the sudden stoppage - which Modis said was due to Google managers raising issues with the scheme - the Alphabet Workers Union-CWA threatened action. The New York Times reports that the temps and contractors sent more than 100 messages and emails to managers over the lost pay. They arranged a videoconference of 130 data center workers discussing possible action, with some even suggesting a work stoppage.

“Temps, vendors, and contractors (TVCs) are scared and feel replaceable all of the time,” former Google/Modis data center contractor Shannon Wait told DCD in our six-month investigation into worker abuses and labor rights violations at the company’s data centers. She added: “Together, they’re so strong.”

In the US, Google Modis contractors usually receive around $15 an hour, and are given few of the benefits afforded to full-time Google staffers. TVCs originally filled additional roles at the company’s data centers, but have increasingly become the backbone of the cloud provider’s operations. bit.ly/GoogleHowToFormAUnion

Peter’s Google factoid
Google attempted to hide documents from a trial into whether it illegally fired employees, the NLRB found. Among the docs were training materials on how to campaign against unionization.

Worker trapped inside shipping container at data center
A worker was trapped in a shipping container at a Facebook/Meta data center construction site in DeKalb County, Illinois. The individual did not appear to know where they were, and emergency services had to use air horns to locate them. The person was recovered unharmed.

“The person was found safe,” Fire Chief Jeff McMaster told DCD. “The fire department did find somebody and they were all well - I cannot ascertain how long they were actually trapped. But we did receive a 911 call, we searched the area, and we found where the person was located. And they were released without incident.” From the department being called to the person being freed took 13 minutes, McMaster said.

Local groups that transcribe emergency service scanners reveal that rescue personnel had a hard time locating the shipping container, so used an air horn while the trapped person made noises. Contractor Mortenson said in a statement: “A worker on the DeKalb project site was accidentally locked in a Conex box for a short period of time.” bit.ly/ContainerizationGetsOutOfHand


Northern Virginia tops data center location list dominated by the US
Despite its ban on new data centers, Singapore ties for second place
Northern Virginia has once again been named the world’s most desirable data center location, in the annual list from property specialist Cushman & Wakefield. Eight of the top spots go to US cities in the Global Data Center Market Comparison report, which ranks Internet hubs according to criteria including fiber connectivity, tax breaks, and the price of land and power.

The top 10 list is heavily US-dominated, and Cushman predicts that Northern Virginia will likely reach more than 2GW of capacity in the next two years, from a current 1.7GW. Singapore climbed from number five to tie for second, despite having had a long-standing moratorium on new data center projects, which is only now beginning to lift. The list also included four new entrants.

After Northern Virginia, Silicon Valley and Singapore tie for second place, followed by Atlanta and Chicago tied for fourth. Hong Kong, Phoenix, Sydney, and Dallas line up in sixth to ninth place, with a tie for tenth place going to Portland and Seattle. bit.ly/CantKillTheKing

JPMorgan spent $2bn on new data centers in 2021, and plans to spend more A total of $12 billion on tech - and the big bank is still moving to the cloud JPMorgan spent $2 billion on new data centers in 2021, despite a continued move to get its IT into the cloud. The US finance giant spent $12 billion on technology in 2021, and plans to increase that further by eight percent in 2022, it revealed in an earnings call last week which prompted criticism from analysts, and a small drop in its share price, according to the Financial Times. Executives on the call explained that the investment was needed to provide data centers and cloud services enabling it to expand into new markets like the UK. Pushed by analysts on the call, chief executive Jaimie Dimon explained that even though the company is moving to the cloud, it needs to keep opening new data centers - and still keeping its old data centers running. “We spent $2 billion on brand-new data centers, OK, which have all the cloud capability you can have in private data centers and stuff like that,” said Dimon, in response to a question from Mike Mayo of Wells Fargo Securities. “We’re still running the old data centers.” He said that the investment in new data centers was mostly on applications, which would be ready to move to the cloud: “All this stuff going to these new data centers, which is now completely up and running, are on apps. Most of the applications that go in have to be

cloud-eligible. Most of the data that goes in has to be cloud-eligible.” The bank has multiple cloud capabilities and is running a lot of major programs on AWS, but also has some on Google and Microsoft. Between 30 and 50 percent of the company’s apps, and all its data, would be moving to “cloud-related type of stuff,” he said, praising the power of the cloud and big data to deal with “risk, fraud, marketing, capabilities, offers, customer satisfaction, do with errors and complaints, prospecting, it’s extraordinary.” Asked for more detail on the technology expenditure, Dimon said the company’s credit card business runs applications on a mainframe in an old data center, which are going to be moved to the cloud: “Card runs a mainframe, which is quite good,” he said. The mainframe handles 60 million accounts efficiently and economically, and has been updated recently, he said: “But it’s a mainframe system in the old data center.” Moving to the cloud is not about savings, he said: “The cost savings by running that will be $30-$40m, but,” he said, the real benefit is security. bit.ly/JP-More-Gan

Tonga volcanic eruption damages subsea cables Tonga is likely to be without its subsea cables for weeks in the wake of a devastating underwater volcanic eruption and subsequent tsunami. On January 15, the country was devastated by a three foot tsunami, triggered by an underwater volcanic eruption. The scale of the devastation is not yet known due to communication issues - made worse by the fact that the nation’s only international submarine cable was damaged. The international Tonga Cable, laid in 2013, runs 827km to Fiji. The Tonga Domestic cable connects the islands of Vava’u (landing at Neiafu), Lifuka (Pangai), and Tongatapu (Nuku’alofa); it was laid in 2018. Digicel, which is a minority shareholder in the majority Tonga government-owned cable, said in a statement that “all communication to the outside world in Tonga is affected due to damage on the Tonga Cable Limited submarine cable.” The company said it was working urgently with local authorities to “resolve the damage.” bit.ly/CableCut

Issue 43 • December 2021 / January 2022 15



Why do we need a Quantum Internet?

Peter Judge, Global Editor

Quantum computers are still in development. But the quantum Internet may be closer than you think


Quantum computers were proposed forty years ago, but have yet to make any real impact in the world. Researchers are struggling to get significant amounts of processing power from systems that have to be kept close to absolute zero. But at the same time, the quantum informatics community is already hard at work on the next step beyond individual quantum computers: an Internet that can connect them to work together. What’s the big hurry? Well, the community that builds the Internet has always been keen to incorporate all the latest technology, so it would actually be surprising if there wasn’t a group looking at handling quantum information on the Internet. And work on a quantum Internet looks like it could change what we can achieve with

communications technologies, with spin-offs that will benefit existing networking. It could also improve the chances of getting real value from quantum computing.

Computing by quantum

There is a kind of quantum communication happening already, says Wojciech Kozlowski of QuTech, a quantum computing institute which is a collaboration between the Technical University of Delft and the Dutch research organization TNO. “There are telcos working directly on providing quantum key distribution (QKD) services,” he says. QKD distributes encryption keys, using a quantum state to detect whether anyone has eavesdropped on the link. “SK Telecom and Deutsche Telekom are interested in this.” But QKD does not relate to quantum computers, and their “quantum bits” or qubits.
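QKD’s eavesdropper detection can be shown with a toy simulation. The sketch below is a minimal BB84-style run (the function name and the simplified intercept-resend model are illustrative assumptions, not from the article): if an eavesdropper measures each photon in a randomly guessed basis, roughly a quarter of the surviving key bits come out wrong, which the endpoints can detect by comparing a sample over a classical channel.

```python
import random

def bb84_sift(n_rounds: int, eavesdrop: bool, seed: int = 7):
    """Toy BB84 run: returns (sifted key length, observed error rate)."""
    rng = random.Random(seed)
    sifted_alice, sifted_bob = [], []
    for _ in range(n_rounds):
        bit = rng.randint(0, 1)          # Alice's raw key bit
        alice_basis = rng.randint(0, 1)  # 0 = rectilinear, 1 = diagonal
        bob_basis = rng.randint(0, 1)
        photon, photon_basis = bit, alice_basis
        if eavesdrop:
            # Intercept-resend attack: Eve measures in a random basis.
            eve_basis = rng.randint(0, 1)
            if eve_basis != photon_basis:
                photon = rng.randint(0, 1)  # wrong basis randomizes the bit
            photon_basis = eve_basis        # Eve re-sends in her own basis
        if bob_basis != photon_basis:
            photon = rng.randint(0, 1)      # Bob's wrong basis randomizes too
        if bob_basis == alice_basis:        # bases compared over classical channel
            sifted_alice.append(bit)
            sifted_bob.append(photon)
    errors = sum(a != b for a, b in zip(sifted_alice, sifted_bob))
    return len(sifted_alice), errors / max(len(sifted_alice), 1)

print(bb84_sift(2000, eavesdrop=False)[1])  # 0.0 - a clean channel has no errors
```

With `eavesdrop=True` the observed error rate lands near 25 percent, which is how the endpoints learn the link was tapped.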

Qubits are the heart of quantum computing: they exist as both 1 and 0 at the same time (see box). Both those values are live, and can be operated on simultaneously, potentially speeding up computing massively. But is networking quantum computers such a big deal? Quantum computers output classical data, the same as any other device, and that data can easily go into the Internet’s standard TCP/IP packets. You can already communicate with plenty of experimental quantum computers on the Internet. For instance, Amazon’s AWS Braket lets you build and test quantum algorithms on different quantum processors (D-Wave, IonQ, or Rigetti) or use a simulator. Microsoft does something similar on Azure, and IBM also lets users play with a five-qubit quantum system. Job done? Well, no. Sharing the output of isolated quantum


computers is one thing; linking quantum computers up is something else. The power of quantum computing is in having a large number of qubits, whose value is not determined till the end of the calculation. If you can link those “live” qubits, you’ve linked the internals of your quantum computers, and you’ve effectively created a bigger quantum computer.

Why link up quantum computers?

“It is hard to build quantum computers large enough to solve really big problems,” explains Vint Cerf, one of the Internet’s founders, in an email exchange with DCD. The problem is harder because the quantum world creates errors: “You need a lot of physical qubits to make a much smaller number of logical qubits - because error correction requires a lot of physical qubits.” So, we actually do need to make our quantum computers bigger. But we want to link “live” qubits, rather than just sharing the quantum computer’s output. We want to distribute the internal states of that computer without collapsing the wave function - and that means distributing entanglement (see box). The quantum Internet means taking quantum effects and distributing them across a network. That turns out to be very complex but potentially very powerful. “Scaling of quantum computers is facilitated by the distribution of entanglement,” says Cerf, who works at Google. “In theory, you can make a larger quantum system if you can distribute entanglement to a larger number of distinct quantum machines.” There’s a problem though. There are several approaches to building qubits, including superconducting junctions, trapped ions, and others based on photons. One thing they all have in common is that they have to be isolated from the world in order to function. The qubits have to maintain a state of quantum coherence, in which all the quantum states can exist, like the cat in Schrödinger’s famous paradox that is both alive and dead - as long as the box is kept closed. So how do you communicate with a sealed box?

The quantum Internet takes quantum effects and distributes them across a network. That turns out to be very complex - but potentially very powerful
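Cerf’s point about error-correction overhead can be put in back-of-the-envelope numbers. The figures below are illustrative assumptions, not from the article (no specific code is quoted): in surface-code-style schemes, one logical qubit costs on the order of the square of the code distance in physical qubits.

```python
def physical_qubits_needed(logical_qubits: int, code_distance: int) -> int:
    """Rough surface-code-style estimate: one logical qubit costs about
    2 * d^2 physical qubits at code distance d (illustrative model only)."""
    return logical_qubits * 2 * code_distance ** 2

# A modest 100-logical-qubit machine at code distance 17:
print(physical_qubits_needed(100, 17))  # 57800
```

Tens of thousands of physical qubits for a hundred usable ones is exactly why making one machine bigger is so hard - and why linking several machines is attractive.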

National quantum Internet

That question is being asked at tech companies like Google, and at universities around the world. Europe has the Quantum Internet Alliance (QIA), a grouping of multiple universities and bodies including QuTech, and the US has Q-Next, the national quantum information science program headed by University of Chicago Professor David Awschalom. The US Department of Energy thinks the question is important enough to fund, and has supported projects at Q-Next lead partner, the Argonne National Laboratory. Among its successes, Q-Next has shared quantum states over a 52-mile fiber link, which has become the nucleus of a possible future national quantum Internet. Europe’s QIA has notched up some successes too, including the first network to connect three quantum processors, with quantum information passed through an intermediate node, created at the QuTech institute for quantum informatics. Another QIA member, the Max Planck Institute, used a single photon to share quantum information - something which is important, as we shall see. Over at Argonne, Martin Suchara, a scientist from the University of Chicago, has DoE funding for his work on quantum communications. But he echoes the difficulty of transmitting quantum information. “The no-cloning theorem says, if you have a quantum state, you cannot copy it,” says Suchara. “This is really a big engineering challenge.”

Send for the standards body

With that much work going on, we are beginning to see the start of a quantum Internet. But apart from the technical difficulty, there’s another danger. All these bodies could create several incompatible quantum Internets. And that would betray what the original Internet was all about. The Internet famously operates by “rough consensus and running code.” Engineers make sure things work, and can be duplicated in multiple systems, before setting them in stone. For 35 years, the body that has ensured that running code has been the Internet Engineering Task Force (IETF).
It’s the body that curates standards for anything added to our global nervous system. Since the dawn of the Internet, the IETF has published standards known as “RFCs” (Requests for Comments). These define the network protocols which ensure your emails

and your video chats can be received by other people. If we are going to have a quantum Internet, we’ll need an RFC which sets out how quantum computers communicate. Right now, that’s too blue-sky for the hands-on engineers and protocol designers of the IETF. So quantum Internet pioneers have taken their ideas to the IETF’s sister group, the forward-looking Internet Research Task Force (IRTF). The IRTF has a Quantum Internet Research Group (QIRG), with two chairs: Rodney Van Meter, a professor at Keio University, Japan; and Wojciech Kozlowski at QuTech in Delft. QIRG has been quietly looking at developments that will introduce completely new ways to do networks. “It is the transmission of qubits that draws the line between a genuine quantum network and a collection of quantum computers connected over a classical network,” says the QIRG’s document, Architectural Principles for a Quantum Internet. “A quantum network is defined as a collection of nodes that is able to exchange qubits and distribute entangled states amongst themselves.” The work is creating a buzz. Before Covid-19 made the IETF pause its in-person meetings, QIRG get-togethers drew quite an attendance, says Kozlowski, “but it was more on a kind of curiosity basis.”

Building up from fundamentals

The fundamental principle of quantum networking is distributing entanglement (see box), which can then be used to share the state of a qubit between different locations. Suchara explains: “The trick is, you don't directly transmit the quantum state that is so precious to you: you distribute entanglement. Two photons are entangled, in a well-defined state that basically ties them together. And you transmit one of the photons of the pair over the network.” He goes on: “Once you distribute the entanglement between the communicating endpoints, you can use what is called quantum teleportation to transmit the quantum state.
It essentially consumes the entanglement and transmits the quantum state from point A to point B.” “The quantum data itself never physically enters the network,” explains Kozlowski. “It is teleported directly to the remote end.” Kozlowski points out that teleporting qubits is exciting, but distributing entanglement is the fundamental thing. “For example, quantum key distribution can be run on the entanglement based network without any teleportation, and so can a bunch of other applications. Most quantum application protocols start from saying ‘I have a bunch of states,’ and teleportation is just one way of using these states.”
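Suchara’s description can be checked with a toy state-vector simulation. The sketch below is illustrative code of my own, not any group’s protocol stack: it teleports an unknown qubit, held as an amplitude pair (alpha, beta), using a pre-shared Bell pair plus two classical bits. The amplitudes themselves never travel, yet they reappear on the receiving qubit.

```python
import random

N = 3  # qubits: 0 = unknown state, 1 = Alice's half of the pair, 2 = Bob's half

def apply_h(state, q):
    """Hadamard on qubit q (qubit 0 is the most significant bit)."""
    s, new = 2 ** -0.5, state[:]
    for i in range(len(state)):
        j = i ^ (1 << (N - 1 - q))
        if ((i >> (N - 1 - q)) & 1) == 0:
            new[i] = s * (state[i] + state[j])
        else:
            new[i] = s * (state[j] - state[i])
    return new

def apply_cnot(state, c, t):
    """Controlled-NOT with control qubit c and target qubit t."""
    return [state[i ^ (1 << (N - 1 - t))] if (i >> (N - 1 - c)) & 1 else state[i]
            for i in range(len(state))]

def apply_x(state, q):
    return [state[i ^ (1 << (N - 1 - q))] for i in range(len(state))]

def apply_z(state, q):
    return [-a if (i >> (N - 1 - q)) & 1 else a for i, a in enumerate(state)]

def measure(state, q, rng):
    """Projective measurement of qubit q; collapses and renormalizes."""
    p1 = sum(abs(a) ** 2 for i, a in enumerate(state) if (i >> (N - 1 - q)) & 1)
    outcome = 1 if rng.random() < p1 else 0
    norm = (p1 if outcome else 1 - p1) ** 0.5
    new = [a / norm if ((i >> (N - 1 - q)) & 1) == outcome else 0.0
           for i, a in enumerate(state)]
    return outcome, new

def teleport(alpha, beta, rng):
    state = [0.0] * 8
    state[0b000], state[0b100] = alpha, beta     # qubit 0 holds (alpha, beta)
    state = apply_cnot(apply_h(state, 1), 1, 2)  # Bell pair on qubits 1 and 2
    state = apply_h(apply_cnot(state, 0, 1), 0)  # Alice's local operations
    m0, state = measure(state, 0, rng)
    m1, state = measure(state, 1, rng)
    if m1:                                       # two classical bits cross the
        state = apply_x(state, 2)                # network; Bob applies corrections
    if m0:
        state = apply_z(state, 2)
    base = 4 * m0 + 2 * m1                       # read off Bob's qubit
    return state[base], state[base + 1]

print(teleport(0.6, 0.8, random.Random(1)))  # approximately (0.6, 0.8)
```

Whichever way the two measurements fall, the corrections recover exactly (0.6, 0.8) on Bob’s qubit - and note that the original copy on qubit 0 is destroyed in the process, as the no-cloning theorem demands.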

Quantum network protocols are difficult because the qubit payload goes on the quantum channel, and the header goes on the classical channel

Towards a quantum Internet protocol

In late 2020, Kozlowski co-authored a paper with colleagues at Delft proposing a quantum Internet protocol, which sets an important precedent by placing distributed entanglement into a framework similar to the layered stacks - OSI or TCP/IP - which define communication over classical networks. “Our proposed stack was heavily inspired by TCP/IP, or OSI,” he tells us. “The definitions of each layer are slightly different, but there was this physical layer at the bottom, which was about attempting to generate entanglement.” That word “attempting” is important, he says: “It fails a lot of the time. Then we have a link layer responsible for operating on a single link, between two quantum repeaters or an end node and a quantum repeater. The physical layer will say ‘I failed,’ or ‘I succeeded.’ The link layer would be responsible for managing that to eventually say, ‘Hey, I actually created entanglement for you.’” The protocol has to keep track of which qubits are entangled on the different nodes, and this brings in a parallel network channel: “One important thing about distributing entanglement is that the two nodes that end up with an entangled pair must agree which of their qubits are entangled with which qubits. One cannot just randomly use any qubit that's entangled. One has to be able to identify in the protocol which qubit on one node is entangled with which qubit on the other node. If one does not keep track of that information, then these qubits are useless.” “Let’s say these two nodes generate hundreds of entangled pairs, but only two of them are destined for a particular application; then that application must get the right qubits from the protocol stack.
And those two qubits must be entangled with each other, not just any random qubit at the other node.” The classical Internet has to transmit coordinating signals like this, but it can put them in a header to each data packet. This can’t happen on the quantum Internet, so “header” information has to go on a parallel channel, over the classical Internet. “The thing that makes software and network protocols difficult for quantum is that one could imagine a packet which has a qubit as a payload, and has a header. But they never ever travel on the same channel. The qubit goes on the quantum channel, and the header would go on the classical channel.”
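The bookkeeping Kozlowski describes can be sketched as a simple per-node registry. This is purely illustrative - the class and field names are mine, not the Delft stack’s - but it captures the point: an application must ask for a specific entanglement identifier, agreed with the remote node over the classical channel, rather than grabbing any entangled qubit.

```python
from dataclasses import dataclass, field

@dataclass
class EntanglementRegistry:
    """Per-node table of which local qubit is entangled with which remote qubit."""
    node: str
    # entanglement id -> (local qubit, remote node, remote qubit)
    pairs: dict = field(default_factory=dict)

    def record(self, ent_id: int, local_qubit: int, remote_node: str, remote_qubit: int):
        # Both ends store the same entanglement id, agreed classically.
        self.pairs[ent_id] = (local_qubit, remote_node, remote_qubit)

    def claim(self, ent_id: int):
        # Hand a *specific* pair to an application; a random entangled
        # qubit would be useless without knowing its partner.
        return self.pairs.pop(ent_id)

alice = EntanglementRegistry("alice")
alice.record(ent_id=17, local_qubit=3, remote_node="bob", remote_qubit=1)
print(alice.claim(17))  # (3, 'bob', 1)
```

A matching registry on the remote node would map the same id 17 to its own qubit 1 and back to this node’s qubit 3.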

It needs a classical channel

“It is a hard requirement to have classical communication channels between all the nodes participating in the quantum network,” says Kozlowski. In its protocol design, the Delft team took the opportunity to have headers on the classical Internet that don’t correspond to quantum payloads on the quantum Internet: “We chose a slightly different approach where the signaling and control messages are not directly coupled. We do have packets that contain control information, just like headers do. What is different, though, is that they do not necessarily have a payload attached to them.” In practice, this means that the quantum Internet will always need the classical Internet to carry the header information. Every quantum node must also be on the classical Internet. So a quantum Internet will have quantum information on the quantum plane, and a control plane operating in parallel on the classical Internet, handling the classical data it needs to complete the distribution of qubit entanglement.

Applications on top

The Delft proposal keeps things general by putting teleportation up at the top, as an application running over the quantum Internet, not as a lower layer service. Kozlowski says earlier ideas suggested including teleportation in the transport layer, but “we have not proposed such a transport layer, because a lot of applications don't even need to teleport qubits.” The Delft paper proposes a transport layer which just operates directly on the entangled pair, to deliver services such as remote operations. Over at Argonne, Suchara’s project has an eye on standards work, and agrees with the principles: “How should the quantum Internet protocol stack look like?” he asks. “It is likely that it will resemble in some way the OSI model.
There will be layers and some portion of the protocol at the lowest layer will have to control the hardware.” Like Kozlowski, he sees the lower layer managing photon detectors and quantum memories in repeater nodes. Above that, he says, “the topmost layer is the application. There are certain actions you have to take for quantum teleportation to succeed. That's the topmost layer.


“And then there's all the stuff in the middle, to route the photons through the network. If you have a complicated topology, with multiple network nodes, you want to go beyond point-to-point communication; you want multiple users and multiple applications.” Sorting out the middle layers creates a lot of open questions, says Suchara: “How can this be done efficiently? This is what we would like to answer.” There is little danger of divergence at this stage, but over in Delft, Kozlowski’s colleagues have actually begun to write code towards an eventual quantum Internet. While this article was being written, QuTech researchers published a paper, Experimental Demonstration of Entanglement Delivery Using a Quantum Network Stack. The introduction promises: “Our results mark a clear transition from physics experiments to quantum communication systems, which will enable the development and testing of components of future quantum networks.”

Practical implementations

Putting this into practice brings up the next set of problems. Distributing entanglement relies on photons (particles of light), which quantum Internet researchers refer to as “flying qubits,” to contrast them with the stationary qubits in the end system, known as “matter qubits.” The use of photons sounds reassuring. This is light - the same stuff we send down fiber optic cables in the classical Internet. And one of the major qubit technologies in the lab is based on photons. But there’s a difference here. For one thing, we’re working on quantum scales. A bit sent over a classical network will be a burst of light containing many millions of photons. Flying qubits are made up of single (entangled) photons. “In classical communication, you just encode your bit in thousands of photons, or create extra copies of your bit,” says Suchara. “In the quantum world, you cannot do that. You have to use a single photon to encode a quantum state.
And if that single photon is lost, you lose your information.” Also, the network needs to know if the flying qubit has been successfully launched. This means knowing if entanglement has been achieved, and that requires what the quantum pioneers call a “heralded entanglement generation scheme.” Working with single photons over optical fiber has limits. Beyond a few km, it’s not possible. So the quantum Internet researchers have come up with “entanglement swapping.” A series of intermediate systems called “quantum repeaters” are set up, so that the remote end of a pair can be repeatedly teleported till it reaches its destination. This is still not perfect. The fidelity of the

copy degrades, so multiple qubits are used and “distilled” in a process called “quantum error correction.” On one level that simply means repeating the process till it works, says Suchara. “You have these entangled photons and you transmit them. If some portion - even a very large portion - of these entangled pairs is lost, that's okay. You just need to get a few of them through.” Kozlowski agrees: “The reason why distributed entanglement works, and qubit distribution doesn't work, is because when we distribute entanglement, it's in a known form. It's in what we call Bell states. So if one fails, if one is lost, one just creates it again.” Quantum repeaters will have to handle error correction and distillation, as well as routing and management. But end nodes will be more complicated, says Kozlowski. “Quantum repeaters have to generate entanglement and do a bit of entanglement swapping. End nodes must have good capacity for a quantum memory that can hold qubits and can actually perform local processing instructions and operations. In addition to generating entanglement, one can then execute an application on it.”

Dealing with time

Quantum networks will also have to deal with other practical issues like timing. They will need incredible time synchronization because of the way entanglement is generated today. Generating entanglement is more complicated than sending a single photon. Most proposed schemes send a photon from each end of the link to create an entangled pair at the midpoint.
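Before the timing story, it is worth putting the earlier “just create it again” point in rough numbers. The model below is deliberately crude and entirely an assumption of mine (a fixed per-attempt success probability, and a fidelity that simply multiplies per swapped segment); it is not a real repeater model, but it shows both why retries are acceptable and why distillation becomes necessary as chains get longer.

```python
import random

def attempts_for_chain(segments: int, p_success: float, rng) -> int:
    """Count entanglement-generation attempts until every elementary
    segment of a repeater chain has a live Bell pair (retry on failure)."""
    attempts = 0
    for _ in range(segments):
        while True:
            attempts += 1
            if rng.random() < p_success:
                break
    return attempts

def end_to_end_fidelity(segment_fidelity: float, segments: int) -> float:
    """Toy model: each entanglement swap compounds segment imperfection."""
    return segment_fidelity ** segments

print(round(end_to_end_fidelity(0.97, 5), 3))  # 0.859 - why distillation is needed
```

Losing most attempts only costs time, because a Bell pair is in a known state and can be remade; but the multiplicative fidelity loss cannot be retried away, which is where distillation comes in.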


“The two nodes will emit a photon towards a station that's in the middle,” says Kozlowski. “And these two photons have to meet at exactly the same time in the middle. There's very little tolerance, because the further apart they arrive, the lower quality the entanglement is.” He is not kidding. Entanglement needs synchronization to nanosecond precision, with sub-nanosecond jitter of the clocks. The best way to deliver timing is to include it in the physical layer, says Kozlowski. But there’s a question: “Should one synchronize just a link? Or should one synchronize the entire network to a nanosecond level? Obviously, synchronizing an entire network to a nanosecond level is difficult. It may be necessary, but I intuitively would say it's not necessary; I want to limit this nanosecond synchronization to each link.” On a bigger scale, the quantum network has a mundane practical need to operate quickly. Quantum memories will have a short lifetime, as a qubit only lasts as long as it can be kept isolated from the environment. “In general, one wants to do everything as fast as possible, as well, for the very simple reason that quantum memory is very short-lived,” says Kozlowski. Networked quantum systems have to produce pairs of Bell states fast enough to get work done before stored qubits decay. But the process of making entangled pairs is still slow. Till that technology improves, quantum networked systems will achieve less the further they are separated. “Time is an expensive resource,” says the QIRG document. “Ultimately, it is the lifetime of quantum memories that imposes some of the most difficult conditions for operating an extended network of quantum nodes.” As Vint Cerf says: “There is good work going on at Google on quantum computers. Others are working on quantum relays that preserve entanglement. The big issues are maintaining coherence long enough to get an answer.
Note that the quantum network does have speed of light limitations despite the apparent distance-independent character

of quantum entanglement. The photons take time to move on the quantum network. If entanglement dissipates with time, then speed of light delay contributes to that.”

Demands of standards

The Internet wouldn’t be the Internet if it didn’t insist on standards. So the QIRG has marked “homogeneity” as a challenge. The eventual quantum Internet, like today’s classical Internet, should operate just as well regardless of whose hardware you are using. Different quantum repeaters should work together, and different kinds of quantum computers should be able to use the network, just as the Internet doesn’t tell you what laptop to use. At the moment, linking different kinds of quantum systems is a goal for the future, says Kozlowski: “Currently, they have to be the same, because different hardware setups have different requirements for what optical interaction they can sustain. When they go on the fiber, they get converted to the telecom frequency, and then it gets converted back when it's on the other end. And the story gets more complicated when one has to integrate between two different setups. There is ongoing work to implement technology to allow crosstalk between different hardware platforms, but that's ongoing research work. It’s a goal,” he says. “Because this is such an early stage, we just have to live with the constraints we have. Very often, software is written to address near-term constraints, more than to aim for these higher lofty goals of full platform independence and physical layer independence.” The communication has also got to be secure. On the classical Internet, security was an afterthought, because the original pioneers like Cerf were working in a close-knit community where everyone knew each other. With 35 years of Internet experience to refer to, the QIRG is building in security from the outset. Fortunately, quantum cryptography is highly secure - and works on quantum networks, almost by definition.
As well as this, the QIRG wants a quantum Internet that is resilient and easy to monitor. “I think participation in the standardization meetings, by the scientific community and potential users, is really important,” says Suchara. “Because there are some important architectural decisions to be made - and it's not clear how to make them.”

Starting small

The quantum Internet will start local. While the QIRG is thinking about spanning kilometers, quantum computing startups can see the use in much smaller networks, says Steve Brierley, CEO of Riverlane, a company impatient to hook up quantum computers large enough to do real work.

The quantum Internet will start small. Early networked quantum computers will be in the same room - potentially even in the same fridge

“The concept is to build well-functioning modules, and then network them together,” says Brierley. “For now, this would be in the same room - potentially even in the same fridge.” That level of networking “could happen over the next five years,” says Brierley. “In fact, there are already demonstrations of that today.” Apart from anything else, long distance quantum networks will generate latencies. As we noted earlier, that will limit what can be done, because quantum data is very short-lived. For now, not many people can be involved, says Kozlowski: “The hardware is currently still in the lab, and not really outside.” But, for the researchers in those labs, Kozlowski says: “There's lots to do,” and the work is “really complicated. Everybody's pushing the limits, but when you want to compare it to the size and scope of the classical Internet, it's still basic.” Professor Andreas Wallraff at ETH Zurich used an actual pipe to send microwave signals between two fridges. The Max Planck Institute has shown quantum teleportation, transmitting quantum information from a qubit to a lab 50 meters away. At QuTech, Professor Stephanie Wehner has shown a “link layer” protocol, which provides reliable links over an underlying quantum network. And QuTech has demonstrated a quantum network in which two nodes are linked with an intermediate repeater. Over in the US at Argonne, Suchara is definitely starting small with his efforts at creating reliable communications. He is working on sharing quantum states, and doesn’t expect to personally link any quantum computers just yet. For him, the first thing is to get systems well enough synchronized to handle basic quantum communications: “With FPGA boards, we already have a clock synchronization protocol that's working in the laboratory.
We would like to achieve this clock synchronization in the network - and we think the first year of our project will focus on this.” Suchara thinks that long-distance quantum computing is coming: “What's going to happen soon is connecting quantum computers that are in different cities, or really very far away."
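The interplay of distance and memory lifetime is simple arithmetic. The numbers below are illustrative assumptions, not figures from Suchara or Kozlowski: roughly 5 microseconds of fiber delay per kilometre each way, and a 10 ms quantum memory.

```python
FIBER_DELAY_US_PER_KM = 5  # light in fiber covers roughly 200 km per millisecond

def round_trips_before_decay(distance_km: int, memory_lifetime_us: int) -> int:
    """How many classical signalling round trips fit inside a quantum
    memory lifetime - a hard budget for any entanglement protocol."""
    round_trip_us = 2 * distance_km * FIBER_DELAY_US_PER_KM
    return memory_lifetime_us // round_trip_us

# A 100 km link with a 10 ms (10,000 us) memory: 1,000 us per round trip.
print(round_trips_before_decay(100, 10_000))  # 10
```

Ten chances to coordinate before the stored qubit decays is a tight budget - and it shrinks linearly as the cities move further apart, which is why long-distance quantum networking is so hard.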

The five stages of quantum networks

QuTech, the quantum computing institute at the University of Delft, has a roadmap towards the quantum Internet.

Pre-quantum networks
Even before quantum information is exchanged, classical networks with trusted repeaters can provide quantum key distribution.

Proto-quantum networks
1. True quantum networks begin when entanglement is used to effectively share a qubit with another location.
2. Quantum repeaters get around the no-cloning theorem, distributing the entanglement further to cover more distance and share a qubit to any location.

Advanced quantum networks
3. Quantum memory arrives, storing quantum information for a period of time. This allows teleportation, blind quantum computation, and quantum clock synchronization.
4. Memory lifetimes and error rates improve to allow simple distributed quantum computing.
5. Each end node has a fully-fledged quantum computer, and completely distributed quantum computing is possible.

Ideally, he thinks long links will use the same protocol suite, but he accepts short links might need different protocols from long ones: “The middle layers that get triggered into communication may be different for short and long distance communication. But I would say it is important to build a protocol suite that's as universal as possible. And I can tell you, one thing that makes the classical Internet so successful is the ability to interconnect heterogeneous networks.”


Suchara is already looking at heterogeneity: “There are different types of encoding of quantum information. One type is continuous variable. The other type is discrete variable. You can have hybrid entanglement between continuous variable and discrete variable systems. We want to explore the theoretical protocols that would allow connecting these systems, and also do a demonstration that would allow transmission of both of these types of entanglement on the Argonne Labs quantum link network.” The Argonne group has a quantum network simulator to try out ideas: “We have a working prototype of the protocol, and a simulator that allows evaluation of alternative protocol designs. It’s an open source tool that's available for the community. And our plan is to keep extending the simulator with new functionality.”

How far can it go?

Making quantum networks extend over long distances will be hard, because of the imperfections of entanglement generation and of the repeaters, as well as the fact that single photons on a long fiber will eventually get lost. “This is a technological limitation,” says Kozlowski. “Give us enough years and the hardware will be good enough to distribute entanglement over longer and longer distances.” It’s not clear how quickly we’ll get there. Kozlowski estimates practical entanglement-based quantum networks might exist in 10 to 15 years. This is actually a fast turnaround, and the quantum Internet will be skipping past decades of trial and error in the classical world, by starting with layered protocol stacks and software-defined networking. The step beyond, to distributed quantum computing, will be a harder one to take, because at present quantum computers mostly use superconducting or trapped-ion qubits, and these don’t inherently interact with photons.

Why do it?

At this stage, it may all sound too complex. Why do it? Kozlowski spells out what the quantum Internet is not: “It is not aimed to replace the classical Internet.
Because, just as a quantum computer will not replace a classical computer, certain things are done quite well, classically. And there's no need to replace that with quantum.” One spin-off could be improvements to classical networks: “In my opinion, we should use this as an opportunity to improve certain aspects of the OSI model of classical control protocols,” says Suchara.

“For example, one of the issues with the current Internet is security of the control plane. And I think if you do a redesign, it's a great opportunity to build in more security mechanisms into the control plane to improve robustness. Obviously, the Internet has done great over the decades in almost every respect imaginable, but still one of the points that could be improved is the security of the control plane.” Kozlowski agrees, pointing out that quantum key distribution exists, along with other quantum cryptography primitives that can deliver things like better authentication and more secure links. The improvements in timing could also have benefits, including the creation of longer-baseline radio telescopes and other giant planetary instruments. The big payoff could be distributed quantum computing, but Kozlowski sounds a note of caution: “Currently it's not 100 percent clear how one would do the computations. We have to first figure out how do we do computations on 10,000 computers, as opposed to one big computer.” But Steve Brierley wants to see large, practical quantum computers which take high-performance computing (HPC) far beyond its current impressive achievements. Thanks to today’s HPC systems, Brierley says: “We no longer ‘discover’ aircraft, we design them using fluid dynamics. We have a model of the underlying physics, and we use HPC to solve those equations.” If quantum computers reach industrial scales, Brierley believes they could bring that same effect to other sectors like medicine, where we know the physics, but can’t yet solve the equations quickly enough. “We still ‘discover’ new medicines,” he says. “We've decoded the human DNA sequence, but that's just a parts list.” Creating a medicine means finding a chemical that locks into a specific site on a protein, because of its shape and electrical properties. But predicting those interactions means solving equations for atoms in motion.
“Proteins move over time, and this creates more places for the molecule to bind,” he says. “Quantum mechanics tells us how molecules and proteins, atoms and electrons will move over time. But we don't have the compute to solve the equations. And as Richard Feynman put it, we never will, unless we build a quantum computer.”

A quantum computer that could invent any new medicine would be well worth the effort, he says: “I'd be disappointed if the only thing a quantum computer does is optimize some logistics route, or solve a number theory problem that we already know the answer to.”

To get there, it sounds like what we actually need is a distributed quantum computer. And for that, we need the quantum Internet.

22 DCD Magazine • datacenterdynamics.com

What you need to build a quantum Internet

Quantum mechanics describes reality at a sub-microscopic level. Here’s a quick guide to some concepts behind the quantum Internet of the future:

Qubits
While classical data is expressed in bits, which can be 0 or 1, quantum data is expressed in qubits, which exist in a “superposition” of both 1 and 0. When you have multiple qubits, all possible combinations of states exist at the same time.

Entanglement
When two particles are entangled, their quantum states are not independent. Even when they are a long distance apart, a measurement of one particle will instantly affect the other. Einstein called this “spooky action at a distance.” Quantum Internet developers call it a good way to communicate quantum information.

Bell pairs
Two maximally entangled qubits form a “Bell pair” (named after the physicist John Stewart Bell), which can be in one of four states. If the two qubits are separated, the state of the pair can still be changed by operations at either of the end nodes.

Measurement
Superposition is powerful, but creates a problem. In a classical system, you can look at a bit at any time. In a quantum system, the act of measuring destroys the superposition - effectively ending any entanglement, and turning the qubit into an ordinary bit, either a 1 or a 0.

The no-cloning theorem
To add to the difficulty, you can’t copy a qubit either. The “no-cloning theorem” says you can’t create a copy of an unknown quantum state, so amplifying and retransmitting signals has to happen differently as well.

Teleportation
Though you can’t read or copy a qubit, you can transmit an unknown qubit state using a distributed entangled pair. The source entangles the qubit it wants to transmit with its half of the pair and measures both, sending the two classical results to the destination. The destination uses those results to transform its half of the entangled pair into the unknown qubit state. Because the original qubit state is destroyed by the measurement, the qubit has been “teleported,” not copied.
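The teleportation step described above can be sketched as a tiny state-vector simulation. This is an illustrative toy, not anything from the article: the three-qubit layout (qubit 0 holds the unknown state, qubits 1 and 2 form the shared Bell pair) and the Z/X corrections are the standard textbook protocol.

```python
import numpy as np

# Single-qubit gates
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def apply(gate, qubit, state, n=3):
    """Apply a single-qubit gate to one qubit of an n-qubit state vector."""
    full = np.array([[1.0]])
    for q in range(n):
        full = np.kron(full, gate if q == qubit else I2)
    return full @ state

def cnot(control, target, state, n=3):
    """Controlled-NOT, with qubit 0 as the most significant bit."""
    new = state.copy()
    for i in range(2 ** n):
        if (i >> (n - 1 - control)) & 1:
            new[i] = state[i ^ (1 << (n - 1 - target))]
    return new

def teleport(alpha, beta, rng=None):
    """Teleport the state alpha|0> + beta|1> from qubit 0 to qubit 2."""
    rng = rng or np.random.default_rng()
    # Qubit 0 holds the unknown state; qubits 1 and 2 start as |0>
    state = np.zeros(8, dtype=complex)
    state[0b000], state[0b100] = alpha, beta
    # Distribute a Bell pair between source (qubit 1) and destination (qubit 2)
    state = cnot(1, 2, apply(H, 1, state))
    # The source entangles the unknown qubit with its half of the pair...
    state = apply(H, 0, cnot(0, 1, state))
    # ...then measures qubits 0 and 1, destroying the original state
    probs = np.abs(state) ** 2
    marg = np.array([probs[4*m0 + 2*m1] + probs[4*m0 + 2*m1 + 1]
                     for m0 in (0, 1) for m1 in (0, 1)])
    outcome = rng.choice(4, p=marg / marg.sum())
    m0, m1 = outcome >> 1, outcome & 1
    # The destination's qubit collapses; two classical bits pick the correction
    dest = state[[4*m0 + 2*m1, 4*m0 + 2*m1 + 1]]
    if m1: dest = X @ dest
    if m0: dest = Z @ dest
    return dest / np.linalg.norm(dest)
```

Whatever the random measurement outcome, the destination qubit ends up in the state alpha|0> + beta|1>, while the source's copy has been destroyed by the measurement - so nothing has been cloned, only moved.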



Peter Judge Global Editor

Flaring for the world We talk to the former Norwegian politician who wants to make sure that energy doesn’t get wasted

Ingvil Smines Tybring-Gjedde is a puzzle. She’s proud of her environmental credentials - but she’s had a career in the petroleum industry, and even represented it in the Norwegian government. She’s entered the data center world as the CEO of Earth Wind & Power, a company that says it can help with the climate crisis. But EW&P offers fossil-powered data centers running on natural gas burnt at oil wells. All that takes some explaining, but she says it all comes down to efficiency, getting off your high horse, and “looking at the facts.”

A climber

“I started out hating the [petroleum] industry,” she tells us over Zoom. “I was a professional climber, and I loved the outdoor life. I didn't want people to have any footprint. I was really eager to stop the oil and gas industry. I was on the outside, and I only saw the negative part of it.”

Climbing was how she started, and it’s clearly still in her bones. Each step of her career she’s found the next handhold, and tested it before transferring her weight. As a climbing instructor, she taught soldiers “how to survive, how not to be taken out in avalanches, and how to rescue your colleagues.”

From there she got involved in rope access: “I was doing maintenance on bridges. Instead of using scaffolding, we said we can do it with ropes - and we can do it in like 10 minutes instead of building this scaffolding.

“I started a company, called in Norwegian Ut-veg. It's directly translated as ‘a way out’, but a way out is also a way into something new, out of the house and into nature.”

At this point, her attention turned to oil, and she began working within the Norwegian oil industry, not against it.

Climbing onto oil rigs

Around 1990, Smines Tybring-Gjedde noticed something. Across the North Sea, oil rigs in the British oil fields were using climbers. “There were actually British climbers on the British [continental] shelf,” she says, “and we thought, if they can do it on the British shelf, we should do it on the Norwegian shelf instead of scaffolding.”

She pioneered rope access on Norway’s oil rigs, starting with a job request to change some bolts on an onshore oil rig. “I said yes, I can do it,” she tells us, but before she could take the job, she had to get approval. Statoil could not give her a permit for rope access on their oil rigs, “so I traveled to Scotland and got two certificates there.”

Back in Norway, she became a rope access entrepreneur: “I got a job painting the legs of a platform offshore. It was a test. So I climbed down to do the sandblasting and painting, and they used a scaffolding company on the other legs.

“And you know, my team was finished with the whole work before the scaffoldings were built. It was quite a nice job, we got a lot of money, and I still liked being a climber.”

After that, she alternated climbing mountains around the world with earning money on oil rigs to fund the expeditions. Gradually her opinion of the industry changed: “I saw that the oil and gas industry was very eager to reduce its footprint in the environment. So, from not liking it at all, I had a huge transition. I saw that the people working there, and the industry as a whole, did have some very good morals or ethics - and the need to do a good job.”

Fueling global warming

In the big picture, there is plenty to dislike about Norwegian oil. The country has a large, nationalized petroleum industry, which is undeniably fuelling global warming. It supplies roughly two percent of the oil burnt worldwide. The Intergovernmental Panel on Climate Change’s recent report, and subsequent studies, have found that the only way to hit our

climate change target of 1.5C global warming is to stop using fossil fuel. We must leave 60 percent of the world’s oil reserves in the ground (and 90 percent of the coal).

But the Norwegian government won’t be doing that, funded as it is from oil fields run by state-owned Statoil. The sector was expected to make the government 277 billion kroner ($30 billion) in 2020. For comparison, that’s roughly one-tenth of Norway’s GDP, or $6,000 for each of the country’s five million population.

At home, Norway is very virtuous, with all its electricity coming from renewable sources. It’s making fast progress at electrifying its transport fleet too. But it’s the fourteenth-largest oil exporter in the world (and the third-largest gas exporter), with a climate change policy that was rated insufficient by Climate Action Tracker. And far from calling a halt or slowing down, the new center-left government, which replaced the previous Conservative/Progress alliance, is actually planning to grow Norway’s oil industry.

Smines Tybring-Gjedde sees a good side to the oil industry. It may be burning the planet, but Norway’s oil companies are concerned for marine life conservation, an area where the industry has actually been “at the forefront,” she claims.

“When oil was found on the Norwegian shelf,” she says, “it was immediately put into laws and regulations that this new industry shouldn't destroy what the other industries were living off - and that was fisheries and shipping.

“It was completely new, but they said that we need to take care of the fisheries, the industries and the environment of the ocean, at the same time as developing the oil and gas industry in such a way that it will not destroy the living of the sea, and at the same time develop the shipping industry to be a contributor to the oil and gas industry.”

Norway was lucky to strike oil, she says, but it handled it well: “We are lucky to have oil, but it was not luck that made us put the oil into our pension fund. That was just good management and clever people.

“They were really looking years ahead,” she says. “I am having goosebumps now just talking about it. The oil and gas belong to the Norwegian people.”

Climbing into politics

In 2005, she began a traverse into politics. “I didn’t have any wishes to become a politician,” she told us, but she got involved in a campaign for fathers’ access to their children: “I was writing articles in the newspapers, because I was really into how fathers were treated when they divorced. In an equal country like Norway, dads and mums should have equal rights to see their children after they divorced.”

"It's estimated by the IEA that 30 percent of total global energy production was lost or wasted in 2021. And the World Bank has estimated that up to 150 billion cubic meters of gas is flared every year"

This was around the time she got reacquainted with her childhood sweetheart, the far-right politician Christian Tybring-Gjedde. The two both have children from previous marriages, and married in 2009. She also got in touch with Norway’s right-wing, anti-immigration Progress Party. Her husband Christian Tybring-Gjedde is on the extreme end of the party, never a minister though he was its first (and, for a time, its only) MP. He denies the existence of man-made climate change, and twice nominated Donald Trump for the Nobel Peace Prize. We don’t mention her husband.

At the 2013 election, the Progress Party won more seats, becoming the third-largest party in the Norwegian Parliament, and formed a coalition government with the Conservative Party. It was then that she got a phone call: “The prime minister called me and asked if I would consider joining them as a deputy minister in the ministry of petroleum and energy. Having been working there for 25 years, I thought, well, a politician with a background for what you should do in the ministry would be nice. So of course I accepted.”

Although not an elected member of parliament, she was appointed state secretary in the Ministry of Petroleum and Energy in 2015. In 2019 she was appointed Minister of Public Security in the Ministry of Justice and Public Security, serving till the coalition disintegrated in 2020.

How to use flare gas

When her political career ended, she joined some former oil company executives to set up Earth Wind & Power, an energy company with the goal of using energy that would otherwise be wasted. The company’s first target is oil rigs, at sea and on land, which “flare” natural gas, burning it off because it is considered unprofitable to transport and use.
The practice has been banned in Norway from the start of its oil business in 1972, except in emergencies, so Norway pipes its gas to shore. “It's like spaghetti on the ground underneath the water,” she says. “The Norwegian continental shelf is quite small, and we have a lot of activity all over it. So the gas pipes are all over. So either they put their gas into the gas pipes and sell it, or they pump

it back down into the earth to increase oil recovery.”

Environmentalists would reject any idea of getting more oil out of the ground, but she thinks it’s good use of resources: “Just taking out a little bit of what the reservoir contains is poor resource management. You've done all the investments, so it's really important to get as much as you possibly can out of the reservoirs.”

As an oil minister, she found that other countries aren’t so scrupulous about using flare gas. Flying into oil-producing countries, she looked down and was shocked: “It looks like a birthday cake, you know, with all the flares, and I thought what a waste!”

Oil producers flare off their gas because they can’t find anything better to do with it, but the International Energy Agency wants this to change: “It's estimated by the IEA that 30 percent of total global energy production was lost or wasted in 2021. And the World Bank has estimated that up to 150 billion cubic meters of gas is flared every year.”

That much gas, she says, “could power the continent of Africa, or the entire fleet of cars in Europe, and it's about 150 percent of what Norway exports every year. It’s poor management, and it’s really, really devastating for the environment.”

As well as this, it’s a misuse of resources: “It's horrible, when we know that 60 percent of people across the African continent are without any access to electricity, and we are flaring all this gas. It's horrible.”

Earth Wind & Power wanted to fix this: “We tried to find solutions of how we can stop the flaring. How could you make it into a product that somebody needed?” One idea was to bottle the flare gas, and distribute it in energy-poor areas. “One of the founders, his family has been trying to help African women create jobs.
We thought that we could take the flare gas, put it into containers, give micro loans to the women and have them sell containers instead of burning coal and wood.” Firewood is often collected by children, who can't go to school, she said: “So we could reduce flaring, give the women a job, and improve the health of women who die of cancer from making food at wooden ovens and help their children go to school, and then come back to the villages and make industry and business for themselves.”
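Her headline numbers can be roughly sanity-checked. The energy density, generation efficiency, and demand figures below are approximate public estimates of our own, not figures from EW&P or the IEA:

```python
# Rough sanity check of the flaring figures quoted in this article.
# All constants below are approximate public estimates (assumptions).
FLARED_M3 = 150e9           # m3 of gas flared per year (World Bank figure quoted above)
KWH_PER_M3 = 10.6           # approx. energy content of natural gas, kWh per m3
GAS_TO_POWER_EFF = 0.5      # assumed combined-cycle generation efficiency

heat_twh = FLARED_M3 * KWH_PER_M3 / 1e9         # thermal energy, TWh/year
electric_twh = heat_twh * GAS_TO_POWER_EFF      # as electricity, TWh/year

AFRICA_DEMAND_TWH = 870     # rough annual electricity consumption of Africa
NORWAY_EXPORTS_M3 = 113e9   # rough annual Norwegian gas exports, m3

print(f"Heat content:      {heat_twh:,.0f} TWh/year")
print(f"As electricity:    {electric_twh:,.0f} TWh/year")
print(f"vs Africa demand:  {electric_twh / AFRICA_DEMAND_TWH:.1f}x")
print(f"vs Norway exports: {FLARED_M3 / NORWAY_EXPORTS_M3:.0%}")
```

On these assumptions the claims hold up in order of magnitude: the electricity would be comparable to the whole continent's annual consumption, and the flared volume is roughly a third more than Norway's yearly gas exports.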

Issue 43 • December 2021 / January 2022 25

That idea failed: “We couldn't find a way to do it economically. That’s why nobody does it.”

A foothold in data

Next, the fledgling business picked up on the “megatrend” of digitization: “We know that in 2016, about one percent of the total electricity in the world was used by the data center industry - and there's an assumption that by 2025 or 2030, 20 percent of the total electricity utilization will go to the data center industry” [according to estimates by Huawei researcher Anders Andrae].

“This is happening when the world needs more and more energy. How can we do it? We need more energy in general, we need to have reduced emissions from the energy, and we need more and more energy for the digitalization of the world. And we also know that 150 billion cubic meters of gas is flared each year.”

If transporting the gas was a problem, she said, “maybe instead we could move the mountain to Mohammed, and put the data centers where the energy is produced.” EW&P’s idea is to “take the flare gas, make electricity by it, have a cable and put it into the data centers we have made in forty-foot containers.”

The data centers themselves communicate over fiber if it’s available, but “very often, if there’s no infrastructure in the rural areas where they are flaring, we can communicate or sell it or transport it by satellites. It's like a Kinder egg, with a lot of layers.”

Make no mistake, these data centers are run on fossil electricity. But they are reducing emissions compared with flaring the gas, she says: “We’re not just transferring the emissions from the flare gas to another industry, because by burning it as we do, we reduce the methane by approximately 100 percent and the NOx and VOCs [volatile organic compounds] by 98 percent.

“We’re not reducing the CO2 - not yet,” she says. “But we are looking into that as well. We might have a solution.
We don't know yet, but we are really putting a lot of emphasis on finding a solution for that also.”

It’s a nice aside - in 2016, as junior energy minister, she launched one of the world’s first full-scale carbon capture and storage (CCS) projects.

Even without CCS, she believes a data center run on flare gas can satisfy the definitions of environmental, social and corporate governance (ESG), because of the reduction in other GHGs and pollutants:

“So we are also making an ESG data center service.”

Off-grid data centers also potentially reduce the need for generation capacity, she says. “When everything in society is to be electrified, you need more electricity at some times of the day, like eight o'clock in the morning and four o'clock in the evening. You have to make the grid bigger to take the peaks, and then you have to pay for all the grid capacity that you don't use during the rest of the day. Being outside the grid, using energy that is stranded, is a double positive effort on the environment.”

Why not leave it in the ground?

Making oil wells greener might sound like a good idea, but environmentalists would disagree. The science is clear: the world needs to stop oil producers in their tracks, and wean our infrastructure off fossil energy. Using flare gas may make some oil wells more profitable, actually prolonging the use of oil. So it’s not surprising that oil companies seem to be welcoming the idea.

There is, surprisingly, a Net-Zero Producers Forum of oil-producing nations, which includes Norway, Canada, the USA, Qatar, and Saudi Arabia. A truly net-zero oil industry would be an oxymoron, however. So the group is diverting attention to smaller issues like flaring.

“They have to stop the flaring. And Earth Wind & Power is the only solution. So they're

The IPCC have found that the only way to hit our climate change target of 1.5C global warming is to stop using fossil fuel

really eager to talk to us, both to stop flaring, and to use the excess energy as much as we can.”

She has a few arguments for continuing with fossil fuels, starting with global equity. The idea of a hard stop to fossil fuel exploitation is an easy illusion for the privileged world, she says: “I believe that it's really important for us in the Western world to look at the situation in, for example, Africa, where 60 percent don’t have energy at all.”

Sadly, that’s a line big oil companies often take, but GHGs emitted by poor people are just as deadly as those from the developed world. The real answer is to fund the development of sustainable energy in those countries - an issue the rich world failed to address at the recent COP26 climate conference.

She also deploys the “bridge” argument that natural gas is less polluting than coal. Africans burn a lot of coal, and export gas, she says: “If you change that coal and use gas instead, you reduce the emissions a lot, and it's much better energy efficiency to burn gas than coal. So that's a huge step.

“Who are we to say that these people should not reduce their energy emissions by using gas instead of coal? I think it's really difficult to sit on my high horse here in Norway, warm, and with 98 percent renewables in my electricity system. I think it's really important to look at the facts and help.

“To have the energy transition to more renewables, you need something that can be a bridge,” she says. “And gas is really, really good at doing that. Especially in the African continent, I think we should applaud them for using gas instead of coal.”

Climbing into renewables

Norway already uses resources efficiently, so EW&P is looking abroad to find stranded

resources to exploit: “Where they flare the most is in Africa, and Asia and the Middle East.”

And it’s already operating in renewables as well: “We saw after a little while that there's excess energy in all kinds of energy production. It's in solar, it's in wind, it's in geothermal. And what's really good is that because we don't need a grid, we can be the birth giver to new renewable energy production.”

Providing a ready customer for power can help get renewable projects started in countries with less reliable grids, she says: “Say you're going to build a geothermal station. As long as you have a hole and power coming out of it, we can put our container on top of that. We can buy your energy so you can invest in building infrastructure, which is really, really expensive. Developing a new renewable energy plant will be better economically with EW&P than without, because we can be offtakers of the energy immediately.”

“The cheapest way to develop more energy in Africa is extra solar,” she says. “They have a huge possibility because it's cheap. But you need to make it economically viable. We can help them do that. They can install the solar panels, and start the solar firm, and we can help with the offtake of that power, while they are building the grid or the mini-grid.

“Solar and wind are on track to become the cheapest form of energy in the coming years. And the IEA has estimated that 70 percent of the total investment in this space needs to occur in developing countries.”

The company will start with gas flares, and its first projects are coming in 2022, the CEO tells us: “We have started with some small projects in Africa, just to show that it's possible.
We will have our first container in operation in March.”

EW&P is using a container developed by German efficient-hardware company Cloud&Heat: “It’s highly energy efficient, with 30 percent better energy efficiency than the average - and we can also operate in temperatures from -30C up to 50C.”

That reliability is important. Among other things, a liquid cooled container is easier to seal, to keep out salt sea water or desert sand: “When you place a container offshore, you don't want any breakage. You really need to double check everything. Cloud&Heat looks a very good solution.

“We’ve done feasibility studies with offshore oil and gas sites - and they concluded very successfully. And we are working with a world leader in wind and solar to provide integrated solutions for offshore renewables.”

Being a swing arm

EW&P is entering a contract for more than ten flare sites in Oman, and has done a pilot project to be part of a solar mini-grid in Uganda, she told us: “It was a very, very small

unit, just to see that it's actually possible. In Africa, with 60 million people not having access to any form of energy, it is really, really important to have mini-grids that can contribute what the local society needs. But the possibility to pay for it is also very low, so you need to find solutions for that as well.”

"The cheapest way to develop more energy in Africa is extra solar. But you need to make it economically viable. We can help them do that"

Solar and wind farms sometimes make extra energy, and an EW&P micro data center can act as a “swing arm” to use that excess, or enable investment to put in more turbines or solar panels: “We can buy that energy, and in that way help the economy go around for renewable investment.”

But is a data center the best use of that electricity, we ask her. Given that the world needs more electricity to decarbonize society, wouldn’t it be better to store the energy for use elsewhere, instead of consuming it in data centers?

EW&P has looked into this, but most solutions are too complex, says Smines Tybring-Gjedde. For instance, electrolysis could generate hydrogen, but this just installs a lot more hardware and leaves the energy company with the same problem: storage and transport of a different gas.

“The reason they are burning 150 billion cubic meters of gas is because nobody has found a solution. We have found the solution that we think is the best as per now. Because this is easy. And it's movable. And it's stackable. We don't leave any footprint - because when there's no gas left, or when they need more of the energy that we have been helping them have an off-take for, we can put our containers on the truck and go to the next station.”

But is this processing worth doing? There’s another important question: the flare gas energy may be turned into processing, but what if the processing is itself wasted?
The Cloud&Heat container which EW&P is using was co-designed by BitFury, a leading light in Bitcoin and cryptocurrency mining hardware - a global industry which consumes more electricity than a sizable country, for no environmental benefit.

EW&P’s site includes a “Fact Center” with some bullish predictions, including a suggestion that blockchain may be running 20 percent of the global economic infrastructure by 2030 - even though most technology commentators acknowledge that the structure of blockchain makes it massively energy-hungry and unlikely to scale in this way.

But Smines Tybring-Gjedde says blockchain is just one of many options for using EW&P’s output. “It depends on the site. What kind of interconnections are there? Are they high bandwidth, with a consistent power supply? Is it low bandwidth with a consistent power supply? Or is it low bandwidth with an inconsistent power supply?” she says. “It’s the circumstances that determine what’s possible.”

It’s not possible to say how much EW&P capacity will go to crypto mining, as the company is at an early stage: “It's feasibility studies, and it [depends on] the bandwidth, the power supply, and the circumstances. Some countries may ban Bitcoin. And some don't. So it's really impossible to say. We are a data center service company. And we will look at each site and offer what can be done on that site.”

In Africa, locally-powered data centers could be used for local clouds to enable digital sovereignty: “Cybersecurity is really, really important. And having your own cloud is highly interesting for several of our potential customers. Instead of buying an American cloud, you can produce your own, with your own energy produced in your own country. That is one of the services that we can provide.”

Where there’s no consistent connection, then different work would be done: “It could be blockchain, it could be long term storage, which doesn't require continuity of service. It could be general HPC for commercial industries, or CFD modeling or AI training.”

She goes on: “I think it's really important to use excess, ‘stranded’ energy. When there's low bandwidth and an inconsistent power supply, then you do operations that don’t need anything more than that. In that way, you don't interfere in the grid or use fiber that you don't need.”

In most cases, even remote sites have enough connectivity to be useful: “You can transport the data center services by satellites. We have 5G all over the world now.”

It’s been a long climb for Ingvil Smines Tybring-Gjedde.
And given the contradiction between reducing emissions and enabling oil exploitation, her latest course looks like a tricky, overhanging rockface. The EW&P project may be more efficient, it may reduce greenhouse gas, and it could give compute power to developing countries. But it’s still burning fossil fuel in a world that desperately needs to stop burning. That is a difficult ascent to negotiate.


>Global Awards

Category Winners

DCD is very pleased to announce the winners of the 2021 edition of the annual DCD Awards, celebrating the best data center projects and most talented people in the industry.

Data Center Architecture Award

Winner: Ashton Old Baths (UK) by Tameside Council

In collaboration with Sudlows and MCAU Architects

With data center construction booming, architects are raising the stakes and setting a new standard for beautiful data centers. This category was decided by a public vote, in which more than 4,000 people expressed their preference from a highly competitive shortlist.

The winner turned an abandoned Victorian bath house in Greater Manchester into a data center which holds local council and NHS infrastructure, along with a cooperative offering commercial space. Tameside Council said this data center was about providing the digital infrastructure needed to enable and keep digital businesses in the local area.


Edge Data Center Project of the Year

Winner: Edge Centres' EC1 Grafton: The Sustainable Edge

The Edge is moving from the era of promise to the era of delivery. This award category shows how Edge is evolving and diversifying. The Grafton facility in New South Wales is Edge Centres' regional test-case data center. The site has never been connected to a utility grid and is run on 100 percent solar power.


Mission-Critical Tech Innovation Award

Winner: Microsoft Cloud Two-Phase Immersion Cooling Deployment

This award recognizes cutting-edge technology solutions. Microsoft has deployed the world's first public cloud production environment using two-phase immersion cooling, lowering energy consumption.

Enterprise Data Center Evolution Award

Winner: IntertekPSI ERP System Migration

In collaboration with Skytap & Meridian IT

This award recognizes the increasingly complex enterprise data center journey. Enterprise is far more than just a consumer in this landscape. The project migrated a business-critical and complex financial ERP system into the cloud within a six-month window.


Awards 2021 Winners


Carbon Champion Award

Winner: Digital Realty River Cooling Project in Marseille

This award focuses on the industry's path towards net-zero, recognizing the world’s most energy-aware and innovative approaches to building and fitting-out sustainable digital infrastructure.

Digital Realty's investment in three data centers in Marseille has made the city a leading Internet hub. Two of the data centers are cooled with water from the local river system via 3km of buried pipework in a 19th century tunnel. The IT is cooled by heat exchange, with a PUE of 1.2. Cooling 22MW in this way saves an estimated 18,400 MWh and 795 tons of CO2 at full load.
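As a rough consistency check, the quoted saving is about what you would expect from moving a 22MW IT load from a conventional baseline PUE of around 1.3 (our assumption, not a figure from the project) down to the stated 1.2:

```python
# Back-of-envelope check of the river-cooling savings quoted above.
# The 1.3 baseline PUE is an assumed conventional comparison point.
IT_LOAD_MW = 22.0
PUE_RIVER = 1.2            # stated PUE of the river-cooled sites
PUE_BASELINE = 1.3         # assumed chiller-cooled baseline
HOURS_PER_YEAR = 8760

# Energy saved = IT load x reduction in PUE x hours of full-load operation
saved_mwh = IT_LOAD_MW * (PUE_BASELINE - PUE_RIVER) * HOURS_PER_YEAR
print(f"Estimated annual saving: {saved_mwh:,.0f} MWh")
```

That lands at roughly 19,000 MWh, close to the 18,400 MWh figure above.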


Data Center Design Innovation Award

Winner: CSC LUMI Supercomputer, Kajaani, Finland

In collaboration with Granlund & Synopsis

This award recognizes facilities that have pushed boundaries to overcome a unique challenge. The design of CSC's 'state of the art' supercomputer had to factor in late modifications: the hardware specs only became available midway through, so design and construction proceeded in tandem.

Latin America Data Center Development Award

Winner: Scala Data Centres CarbonNeutral Colocation Data Center, Brazil

This award showcases data center development which met specific local requirements. Scala is the first colo in Latin America to move to 100 percent renewables, achieved largely via PPAs and carbon offsets.

Middle East & Africa Data Center Development Award

Winner: Rack Centre LGS 1, Nigeria

This award showcases data center development which met specific local requirements. With this project, Rack Centre doubled the largest carrier-neutral data center in West Africa, in 80 weeks from February 2020. It updated from N+2 to 3N2, cut CAPEX, and reduced PUE to 1.45 (in tropical climate conditions).


Emerging Asia Pacific Data Center Development Award

Winner: SUPERNAP Thailand GWS Cloud and SD-WAN implementations


This award showcases data center development which has pushed the boundaries of design and construction in the context of specific local requirements and challenges. Supernap Thailand’s data center operations team demonstrated their outstanding capabilities, keeping an important Cisco SD-WAN and cloud platform deployment on time to meet growing customer demands, while overcoming numerous supply chain challenges exacerbated by the Covid-19 pandemic.

Issue 43 • December 2021 / January 2022 29

Awards 2021 Winners

Data Center Construction Team of the Year


Winner: Bouygues E&S VIRTUS LONDON7 Data Center Delivery Team

The delivery of a data center is a race against the clock. This award recognizes teams who have shown commitment and initiative in their building projects, and who have delivered data centers which far exceeded expectations. This was a technically challenging project involving the delivery of both a 9MW liquid-cooled HPC center and a 28MW IT build. As the projects progressed through 2021, the team faced Covid restrictions while also coping with changing designs. Bouygues and Virtus brought to the project the experience of a long-standing partnership.


Data Center Operations Team of the Year

Winner: NVIDIA & Kao Data Cambridge-1 Supercomputer Deployment Team

Teamwork is a crucial skill alongside technical know-how. The team was formed for the installation of a 400-petaflops supercomputer - the UK's most powerful - at Kao's Cambridge facility.


Sustainability Pioneer

Winner: Aidin Aghamiri, CEO of ITRenew

Sustainability has moved us beyond efficiency inside the four walls of the data center. Aidin is an entrepreneur with a strong sense of social responsibility. He built ITRenew, a global business, out of circularity before it was cool. His motto: "refuse to settle for a world that pits economic success against social good."


Young Mission-Critical Engineer of the Year

Winner: Chris Hayward, PhillipsPage Associates

The data center community needs a new generation, ready to pick up the baton and take the sector on to new achievements. A lead mechanical engineer, Chris has worked on data center projects across Europe. He goes beyond what is expected of an engineer of his grade, and nurtures others in his team.


Outstanding Contribution to the Data Center Industry

Winner: Lex Coors, Chief Data Center Technology and Engineering Officer, Interxion | A Digital Realty Company


Every industry has its champions, thought leaders and pioneers, who will be revered for their achievements for years to come. The previous winners of this award are all distinguished by their extensive service to the data center industry. "Lex has been around since I can remember, and since the industry as we know it began," says DCD CEO George Rockett. "The business he represents has grown from an acorn into a multi-billion euro entity over that time. He is a discussion fire-starter, an innovator, an educator (now with 'associate professor' in front of his name), a communicator, and a major voice of the industry to the likes of the European Commission. Maybe more importantly, he's a great friend to many of us."

30 DCD Magazine • datacenterdynamics.com

Sponsored by

Cooling Supplement


New technologies and new benchmarks

Why PUE has to change

> The long-lived efficiency measure is no longer enough, thanks to new cooling methods

A new phase for cooling

> Two-phase cooling has been bubbling under for some time. Now it’s properly arriving

Cool vibrations

> A Glasgow firm converts thermal energy into motion, to speed heat away



Contents

34. A new phase for liquid cooling
Two-phase cooling has been bubbling under for some time. Now it's properly arriving

38. Why PUE has to change
The long-lived efficiency measure is no longer enough, thanks to the arrival of new cooling methods

41. Cooling with water
Harmful HFC gases are on the way out. What if we could replace them with pure water?

42. Cool vibrations
A Glasgow firm converts thermal energy into motion, to speed heat away

Cooling evolves


While air conditioning systems still dominate the data center world, new alternatives have been emerging.

Liquid cooling has been in the running for some time. Traditionalists have argued the time isn't right, even as baths of dielectric appear in HPC facilities. At the same time, a bunch of alternatives is coming along - and all this is producing some confusion about how to evaluate the contenders. This supplement takes a look across the landscape of data center cooling.

In search of a new metric

This kind of technology sea-change could drive new ways to measure efficiency, because it creates a world where the venerable PUE metric no longer works. Power usage effectiveness (PUE) divides data center energy into rack power (good) and facility power (bad). But direct liquid cooling changes those boundaries. Liquid cooling does away with fans in servers - that's part of its efficiency promise - but under traditional PUE it actually counts against it, because it reduces power use in the rack. For these and other reasons, PUE is going to have to change.
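The fan-counting quirk is easier to see with numbers. Here is a back-of-envelope sketch in Python; all figures are illustrative, not measurements from any real facility:

```python
def pue(facility_kw: float, it_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power."""
    return facility_kw / it_kw

# Air-cooled hall: 1,000 kW of IT load, ~80 kW of which is server fans
# (fans draw from the server PSUs, so they count as "IT power").
overhead = 300.0                 # cooling plant, UPS losses, lighting
air_it = 1000.0
print(f"{pue(air_it + overhead, air_it):.3f}")    # 1.300

# Remove the fans (liquid cooling) but change nothing else:
# total energy drops by 80 kW, yet the reported PUE gets *worse*.
dlc_it = air_it - 80.0
print(f"{pue(dlc_it + overhead, dlc_it):.3f}")    # 1.326
```

In practice liquid cooling also slashes the overhead term, so real deployments report excellent PUEs; the distortion is that the fan saving itself is invisible to, or even penalized by, the metric.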

Two-phase comes of age



We start with a surprise. While data center operators have grudgingly begun to accept that increasing power densities will demand liquid cooling, they've expected circulating or tank systems that absorb heat and remove it through conduction and convection. In two-phase systems, the coolant boils and condenses. This removes more heat but, until recently, only over-clocking gamers were willing to risk vibrational damage and the other consequences of pushing the boundaries. Now, though, the power density of HPC and AI racks really is rising rapidly, and this is coinciding with moves to make two-phase practical. We talk to the people working on standards which could allow multi-vendor systems - and maybe make two-phase the norm in certain sectors of the market.
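The extra heat-carrying capacity of boiling can be put in rough numbers. This sketch uses approximate, datasheet-style property values; treat everything here as illustrative rather than as vendor-certified figures:

```python
# Sensible vs latent heat: how much energy one kilogram of coolant
# absorbs by warming up, versus by boiling.

CP_DIELECTRIC = 1.3     # kJ/(kg*K), typical single-phase dielectric fluid
LATENT_HEAT = 142.0     # kJ/kg, approx. heat of vaporization of Novec 7000

delta_t = 15.0          # K, allowed temperature rise in a pumped loop

sensible_kj = CP_DIELECTRIC * delta_t   # heat picked up by warming alone
latent_kj = LATENT_HEAT                 # heat picked up by boiling

print(f"Single-phase pickup: {sensible_kj:.1f} kJ/kg")   # 19.5 kJ/kg
print(f"Two-phase pickup:    {latent_kj:.1f} kJ/kg")     # 142.0 kJ/kg
print(f"Ratio: {latent_kj / sensible_kj:.1f}x")          # 7.3x
```

Each kilogram of fluid that boils carries several times the heat of one that merely warms, which is why two-phase systems can run denser hardware with less fluid movement.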

Cooling with water

With F-gas regulations phasing out the HFCs in traditional chillers, we need to use something else. One alternative is water. Adiabatic chillers are a low-energy alternative to traditional units, but the industry is realizing it can't go on using ever-increasing amounts of water. So it's interesting to see a company with chillers that use water in a traditional compression cycle. Will it catch on?

Shaking the heat away

A truly bizarre-looking chiller debuted at the COP26 event in Glasgow. With metal leaves fluttering around an upright trunk, the thermal vibration bell (TVB) looks like a steampunk tree. But it's not an art installation. It's a serious contender, in which thermal energy converts to mechanical motion that drives its own removal.


Liquid cooling: A new phase

Two-phase liquid cooling has finally arrived. Vendors are making purpose-built liquid cooled servers


Data center operators have long avoided liquid cooling, keeping it as a potential option for the future but never a mainstream operational approach. Liquid cooling proponents warned that rack power densities could only get so high before liquid cooling became necessary, but their forecasts were always forestalled. The Green Grid suggested that air cooling can only work up to around 25kW per rack, but AI applications threaten to go above that. In the past, when rack power densities approached levels where air cooling could not perform, silicon makers would improve their chips' efficiency, or cooling systems would get better. Liquid cooling was considered a last resort, an exotic option for systems with very high energy use. It required tweaks to the hardware, and mainstream vendors did not make servers designed to be cooled by liquid. All the world's fastest supercomputers are cooled by liquid to support their high power density, and many Bitcoin mining rigs have direct-to-chip or immersion cooling so their chips can run at high clock rates - but most data center operators are too conservative for that sort of thing, so they've held back. This year, that could be changing. Major


Peter Judge Global Editor

announcements at the OCP Summit - the get-together for standardized data center equipment - centered on liquid cooling. And within those announcements, it's now clear that hardware makers are making servers that are specifically designed for liquid cooling. The reason is clear: hardware power densities are now reaching the tipping point. "Higher power chipsets are very commonplace now, we're seeing 500W or 600W GPUs, and CPUs reaching 800W to 1,000W," says Joe Capes, CEO of immersion cooling specialist LiquidStack. "Fundamentally, it just becomes extremely challenging to air cool above 270W per chip." When air-cooling hits the performance

wall, it's not a linear change either - it's geometric. Because fans are resistive loads, their power draw follows the I-squared-R relationship set down by Ohm's law, rising with the square of the current. "So as you go to higher power chipsets, the fans have to be larger, or they have to run at higher speeds. And the higher the speed, the more wasteful they become," says Capes. "If you have a traditional chilled water computer room air handler (CRAH), you want those fans running at 25 to 28 percent of total capacity to reduce the I-squared-R loss. If you have to ramp those fans to 70 or 80 percent of their rated speed, you're consuming a massive additional amount of energy."

Out of the crypto ghetto

LiquidStack itself charts the progress of liquid cooling. Founded in 2013 by Hong Kong entrepreneur Kar-Wing Lau, it used two-phase cooling (see box) to pack 500kW of high-performance computing into a shipping container, with a PUE of 1.02. The company was bought by cryptocurrency specialist BitFury in 2015, and put to work trying to eke more performance out of Bitcoin rigs. Then in 2021, eyeing the potential of liquid cooling

in the mainstream, BitFury appointed former Schneider executive Capes as CEO and spun LiquidStack out as a standalone company. Wiwynn, a Taiwanese server-maker with a big share of the white-label market for data center servers, contributed to a $10 million Series A funding round - and Capes gives this partnership much of the credit for the market-ready liquid cooled servers he showed at two back-to-back high-performance computing events: the OCP Summit in San Jose, and SC21 in St Louis.

"In the immersion cooling space, you have a three-legged stool: the tanks, the fluid, and the hardware," says Capes. All three elements together make for a more efficient system. Alongside Wiwynn's servers, for the third leg, LiquidStack has a partnership with 3M to use Novec 7000, a non-fluorocarbon dielectric fluid, which boils at 34°C (93°F)

and recondenses in the company's DataTank system, removing heat efficiently in the process. Purpose-built servers are a big step, because until now all motherboards and servers have been designed to be cooled by air, with wide open spaces and fans. Liquid cooling these servers is a process of "just removing the fans and heat sinks, and tricking the BIOS - saying 'You're not air cooled anymore.'" That brings benefits, but the servers are bigger than they need to be, says Capes: "You have a 4U air cooled server, which should really be a 1U or a half-U size." LiquidStack is showing a 4U DataTank, which holds four rack units of equipment and absorbs 3kW of heat per rack unit - equivalent to a density of 126kW per rack. The company also makes a 48U DataTank, holding the equivalent of a full rack.

Standards

The servers in the tank are made by Wiwynn, to the OCP's open accelerator interface (OAI) specification, using standardized definitions for liquid cooling. This has several benefits across all types of liquid cooling. For one thing, it means that other vendors can get on board, knowing their servers will fit into tanks from LiquidStack or other vendors, and users should be able to mix and match equipment in the long term. "The power delivery scheme is another important area of standardization," says Capes, "whether it be through AC bus bar, or DC bus bar, at 48V, 24V or 12V." For another, the simple existence of a standard should help convince conservative data center operators that it's safe to adopt - if only because the systems are checked with all the possible components that might be used, so customers know they should be able to get replacements and refills for a long time. Take coolants: "Right now the marketplace is adopting 3M Novec 649, a dielectric with a low GWP (global warming potential)," says Capes. "This is replacing refrigerants like R410A and R407C that have very high global warming potential and are also hazardous.
“It's very important when you start looking at standards, particularly in the design of hardware, that you're not using


materials that could be incompatible with these various dielectric fluids, whether they be Novec or fluorocarbon, or a mineral oil or synthetic oil. That's where OCP is really contributing a lot right now." An organization like OCP will kick all the tires, including things like the safety and compatibility of connectors, and the overall physical specifications.

"I've been talking recently with some colocation providers around floor load weighting," says Capes. "It's a different design approach to deploy data tanks instead of conventional racks - you know, 600mm by 1200mm racks." A specification tells those colo providers where it's safe to put tanks, he says: "By standardizing and disseminating this information, it helps more rapidly enable the market to use different liquid cooling approaches."

In the specific case of LiquidStack, the OCP standard did away with a lot of excess material, cutting the embodied footprint of the servers, says Capes: "There's no metal chassis around the kit. It's essentially just a motherboard. The sheer reduction in space and carbon footprint by eliminating all of this steel and aluminum and whatnot is a major benefit."

Pushing the technology

Single-phase liquid cooling vendors emphasize the simplicity of their solutions. The immersion tanks may need some propellers to move the fluid around, but largely use convection. There's no vibration caused by bubbling, so vendors like GRC and Asperitas say equipment will last longer. "People talk about immersion with a single stroke, and don't differentiate between single-phase and two-phase," GRC CEO Peter Poulin said in a DCD interview, arguing that single-phase is the immersion cooling technique that's ready now. But two-phase allows higher density, and could potentially go further than the existing units. Although hardware makers are starting to tailor their servers to liquid cooling, they've only taken the first steps, removing excess baggage and putting things slightly closer together. Beyond this, equipment could be made which simply would not work outside a liquid environment. "The hardware design has not caught up to two-phase immersion cooling," says Capes. "This OAI server is very exciting, at 3kW per RU. But we've already demonstrated the ability to cool up to 5.25 kilowatts in this tank."


Beyond measurement

The industry's efficiency measurements are not well prepared for the arrival of liquid cooling in quantity, according to Uptime research analyst Jacqueline Davis. Data center efficiency has been measured by power usage effectiveness (PUE), a ratio of total facility power to IT power. But liquid cooling undermines how that measurement is made, because of the way it simplifies the hardware. "Direct liquid cooling implementations achieve a partial PUE of 1.02 to 1.03, outperforming the most efficient air-cooling systems by low single-digit percentages," says Davis. "But PUE does not capture most of DLC's energy gains." Conventional servers include fans, which are powered from the rack, and so their power is included in the "IT power" part of PUE - they are considered part of the payload the data center is supporting. When liquid cooling does away with those fans, it reduces energy use and increases efficiency - but harms PUE. "Because server fans are powered by the server power supply, their consumption counts as IT power," points out Davis. "Suppliers have modeled fan power consumption extensively, and it is a non-trivial amount. Estimates

typically range between five percent and 10 percent of total IT power." There's another factor, though. Silicon chips heat up and waste energy through leakage currents, even when idling. This is one reason data center servers use almost as much power when doing nothing as when busy - a shocking level of waste, which is not being addressed because the PUE calculation ignores it. Liquid cooling can provide a more controlled environment, in which leakage currents are lower. Potentially, with really reliable cooling tanks, the electronics could be designed differently to take advantage of this, allowing chips to resume their increases in power-efficiency. That's a good thing - but it raises the question of how these improvements will be measured, says Davis: "If the promise of widespread adoption of DLC materializes, PUE, in its current form, may be heading toward the end of its usefulness."

Reducing water

"The big reason why people are going with two-phase immersion cooling is the low PUE. It has roughly double the heat rejection capacity of cold plates or single-phase," says Capes. But a stronger draw may turn out to be the fact that liquid cooling does not use water. Data centers with conventional cooling systems often turn on some evaporative cooling when conditions require it - for instance, if the outside air temperature is too high. This means running the data center's chilled water through a wet heat exchanger, which is cooled by evaporation. "Two-phase cooling can reject heat without using water," says Capes. And this may be a factor for LiquidStack's most high-profile customer: Microsoft. There's a LiquidStack cooling system installed at Microsoft's Quincy data center, alongside an earlier one made by its partner Wiwynn.
"We are the first cloud provider that is running two-phase immersion cooling in a production environment," Husam Alissa, a principal hardware engineer on Microsoft's team for data center advanced development, said of the installation. Microsoft has taken a broader approach to its environmental footprint than some, with a promise to reduce its water use by 95 percent before 2024, and to become "water-positive" by 2030, producing more clean water than it consumes. One way to do this is to run data centers hotter and use less water for evaporative cooling, but switching workloads to cooling by liquids with no water involved could also help. "The only way to get there is by using technologies that have high working fluid temperatures," says Capes.
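Uptime's fan estimate and the partial-PUE figures quoted earlier can be combined into a rough total-energy comparison. The specific numbers below (an 8 percent fan share, a 1.40 air-cooled PUE) are hypothetical, chosen only to illustrate the accounting:

```python
# Rough total-energy comparison: air-cooled hall vs DLC hall,
# both delivering the same power to the actual electronics.

useful_kw = 1000.0            # power reaching chips, memory, drives

air_it = useful_kw * 1.08     # fans add ~8% inside "IT power"
air_total = air_it * 1.40     # assumed air-cooled facility PUE

dlc_it = useful_kw            # no server fans with liquid cooling
dlc_total = dlc_it * 1.03     # partial PUE per Uptime's figures

print(f"Air-cooled total:  {air_total:.0f} kW")     # 1512 kW
print(f"Liquid-cooled:     {dlc_total:.0f} kW")     # 1030 kW
print(f"Saving: {1 - dlc_total / air_total:.0%}")   # 32%
```

Comparing the two halls on PUE alone would show 1.40 versus 1.03; the whole-facility saving is larger than that gap suggests, because the fan power removed from the "IT" side never appears in either ratio.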

Industry interest

The first sign of the need for high-performance liquid cooling has been the boom in hot chips: "The semiconductor activity really began about eight to nine months ago. And that's been quickly followed by a very dynamic level of interest and engagement with the primary hardware OEMs as well." Bitcoin mining continues to soak up a lot of it, and recent moves to damp down the Bitcoin frenzy in China have pushed some crypto facilities to places like Texas, which are simply too hot to allow air cooling of mining rigs. But there are definite signs that customers beyond the expected markets of HPC and crypto-mining are taking this seriously. "One thing that's surprising is the pickup in colocation," says Capes. "We thought colocation was going to be a laggard market for immersion cooling, as traditional colos are not really driving the hardware specifications. But we've actually now seen a number of projects where colos are aiming to use immersion cooling technology for HPC applications." He adds: "We've been surprised to learn that some are deploying two-phase immersion cooling in self-built data centers and colocation sites - which tells me that hyperscalers are looking to move to the market, maybe even faster than what we anticipated."

Edge cases

Another big potential boom is in the Edge, where micro-facilities are expected to serve data close to applications. Liquid cooling scores here, because it allows compact systems which don't need an air-conditioned space. "By 2025, a lot of the data will be created at the Edge. And with a proliferation of micro data centers and Edge data centers, compaction becomes important," says Capes. Single-phase cooling should play well here, but he naturally prefers two-phase.
“With single phase, you need to have a relatively bulky tank, because you're pumping the dielectric fluid around, whereas in a two-phase immersion system you can actually place the server boards to within two and a half millimeters of one another," he said.
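The compaction argument is simple arithmetic. This sketch sets the 126kW-per-rack-equivalent DataTank density quoted earlier against a hypothetical 12kW air-cooled rack; the total IT load is also hypothetical:

```python
# Rough footprint comparison for a given IT load.

IT_LOAD_KW = 9000.0     # total IT load to house (hypothetical)
AIR_RACK_KW = 12.0      # assumed conventional air-cooled rack density
TANK_KW = 126.0         # DataTank rack-equivalent density quoted above

air_positions = IT_LOAD_KW / AIR_RACK_KW   # rack positions needed
tank_positions = IT_LOAD_KW / TANK_KW      # tank positions needed

print(f"Air-cooled racks needed: {air_positions:.0f}")            # 750
print(f"Immersion tanks needed:  {tank_positions:.0f}")           # 71
print(f"Footprint ratio: {air_positions / tank_positions:.1f}x")  # 10.5x
```

An order-of-magnitude footprint reduction is what makes the "one ground-floor slab instead of three floors" scenario discussed below plausible - provided the slab can take the concentrated weight.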

How far will this go?

It's clear that we'll see more liquid cooling, but how far will it take over the world? "The short answer is the technology and the chipsets will determine how fast the market moves away from air cooling to liquid cooling," says Capes. Another factor is whether the technology is going into new buildings or being retrofitted to existing data centers, because whether it's single-phase or two-phase, a liquid cooled system will be heavier than its air cooled brethren. Older data centers simply may not be designed to support large numbers of immersion tanks. "If you have a three-floor data center, and you designed your second and third floors for 250 pounds per square foot of floor loading, it might be a challenge to deploy immersion cooling on all those floors," says Capes. "But the interesting dynamic is that because you can radically ramp up the amount of power per tank, you may not need those second and third floors. You may be able to accomplish on your ground floor slab what you would have been doing on three or four floors with air cooling." Some data centers may evolve to have liquid cooling on the ground floor's concrete slab, with any remaining air cooled systems on the upper floors. But new buildings may be constructed with liquid cooling in mind, says Capes: "I was talking to one prominent colocation company this week, and they said that they're going to design all of their buildings to 500 pounds per square foot to accommodate immersion cooling." Increased awareness of the water consumption of data centers may push adoption faster: "If other hyperscalers come out with aggressive targets for water reduction like Microsoft has, then that will accelerate the adoption of liquid cooling even faster." If liquid cooling hits a significant proportion of the market, say 20 percent, that will kick off "a transition, the likes of which we've never seen," says Capes.
"It's hard to say whether that horizon is on us in five years or 10 years, but certainly if water scarcity and higher chip power continue to evolve as trends, I think we'll see more than


Is PUE too long in the tooth?

What's next for energy efficiency metrics in the data center industry?


Since it was first proposed by Christian Belady and Chris Malone and promoted by the Green Grid in 2006, power usage effectiveness (PUE) has become the de facto metric with which the energy efficiency of data centers is measured. It is easy to calculate - a ratio between total facility power and power consumed by the IT load - and it provides a simple single metric that can be widely understood by non-technical people. But as leading facilities move to PUEs below 1.1 and more companies look to achieve carbon-neutral status, it's time to ask: should we move on from PUE? And if so, to what?

PUE: a victim of its own success

In a perfect world, PUE would be 1; every kilowatt of energy coming into the data center would be used only to power the IT hardware. And while a perfect 1 is

Dan Swinhoe News Editor

likely impossible, being able to quickly and easily measure energy use has driven improvements. The Uptime Institute says the industry average for PUE has dropped from around 2.5 in 2007, to around 1.5 today. But even Uptime has said the metric is looking ‘rusty’ after so long. “PUE has been one of the most popular, easily understood and, therefore, widely used metrics since the Green Grid standard was published in 2016 under ISO/IEC 30134-2:2016,” says Gérard Thibault, CTO, Kao Data. “I believe that its simplicity has been key, and in many respects, customers use it as a metric to

evaluate whether they are being charged cost-effectively for the energy their IT consumes and how sustainable they are.” “In an industry that is somewhat complex, PUE has become accepted by customers as a clear indicator of energy efficiency, becoming inextricably linked to sustainability credentials.” However, after more than a decade, PUE could be starting to become a victim of its own success, argues Malcolm Howe, Partner at engineering firm Cundall, whose data center clients include Facebook. “PUE has been an immensely beneficial tool for the industry. And in


the guise of the role that it was originally intended for, it has driven very significant improvements in energy efficiency in data centers," he says. "People can quite readily get their heads around it on a very superficial level, but it's got loads of things wrong with it when you get below the surface."

The limitations of PUE

As we near the limits of what conventional cooling can achieve, and data center owners and operators look towards carbon neutrality or even negativity, PUE starts to lose its value as a metric. Some of the most efficient data centers are starting to achieve PUEs of 1.1 or lower; the EU-funded Boden Type data center in Sweden has recorded a 1.018 PUE, while Huawei claims its modular data center product has an annual PUE of 1.111. Google says its large facilities average 1.1 globally, but can be as low as 1.07. As facilities become more efficient, measuring improvements with PUE becomes harder and gains become increasingly incremental. "We're now down to PUEs of 1.0-whatever; we need more precise methods of measurement," says Howe. "We're focused on trying to achieve these sustainability targets and net-zero, and we're doing it with a tool that is a blunt instrument; we're using a metric that doesn't really capture the impact of what's going on." An example of its bluntness is that PUE doesn't capture what is happening at rack level. IT power to the rack can power rack-level UPS or on-board fans - energy that could, and probably should, be added to the debit sheet, yet isn't. Howe notes that in a conventional air-cooled rack, as much as 10 percent of the power delivered to the IT equipment is consumed by the server and PSU cooling fans. Some companies can make their PUE look better this way, in what he describes as "creative accountancy." "Not all of that power is actually being used for IT. And I think a lot of people lose sight of that," he says. "A lot of operators are putting UPS at rack level; they can immediately make their PUE look better by shifting the UPS load out of the infrastructure power and putting it into the IT power. You're just moving it from one side of the line to the other; you haven't actually changed the performance."

A marketing metric

Howe also notes that PUE's simplicity and widespread use across the industry has led to it being used as a competitive

advantage weapon as much as an internal improvement metric. "Over time it has been seized upon as a marketing tool," he says. "Operators are using it to bring in customers, and customers are going along with that and giving minimum PUE standards in the RFPs." Many operators will happily tout the PUE of their latest facilities, and use it as a lure for sustainability-conscious customers. "PUE was never really intended for that; it was always an improvement metric," he says. Operators should "measure the PUE, implement changes, and then measure it again to assess the effectiveness." "It's taken on an importance and a profile within the industry that was never intended, for which it's not really suited."

Liquid cooling and PUE

As we reach the limit of air cooling, liquid cooling is becoming an increasingly popular and feasible alternative. But as adoption increases, the utility of PUE as a yardstick falls. "Even if you had a perfectly efficient fan motor, it is going to consume power," says Howe. "We've got to the limit of what is achievable within the physics of what we've been doing with air cooling." In an opinion piece for DCD, Uptime Institute research analyst Jacqueline Davis recently warned that techniques such as direct liquid cooling (DLC) profoundly change the profile of data center energy consumption, "seriously undermine" PUE as a benchmarking tool, and could "eventually spell its obsolescence" as an efficiency metric. "While DLC technology has been an established yet niche technology for decades, some in the data center sector think it's on the verge of being more widely used," she said.
“DLC reshapes the composition of energy consumption of the facility and IT infrastructure beyond simply lowering the calculated PUE to near the absolute limit.” Davis noted that most DLC implementations achieve a partial PUE of 1.02 to 1.03, by lowering the facility power. But they also reduce energy demands inside the rack by doing away with fans - a move that reduces energy waste, but also

actually makes PUE worse. "PUE, in its current form, may be heading toward the end of its usefulness," she added. "The potential absence of a useful PUE metric would represent a discontinuity of historical trending. Moreover, it would hollow out competitive benchmarking: all DLC data centers will be very efficient, with immaterial energy differences." "Tracking of IT utilization, and an overall more granular approach to monitoring the power consumption of workloads, could quantify efficiency gains much better than any future versions of PUE."

TUE: one metric among many, or the new PUE?

If PUE really is too long in the tooth, is there a ready-made replacement that has the simplicity of PUE but can provide a better picture? Howe says Total-Power Usage Effectiveness (TUE) can be a more effective metric for calculating a data center's overall energy performance. TUE is calculated as IT Power Usage Effectiveness (ITUE) multiplied by PUE. ITUE accounts for the impact of rack-level ancillary components such as server cooling fans, power supply units and voltage regulators; it is a server-specific value, while PUE is a data center infrastructure value. "ITUE is like a PUE at rack level, addressing what is going on in a way that PUE on its own does not. It's saying: this is how much energy is going to the rack, and this is how much of that energy is actually going to the electronic components," he says. "It's giving you a much more precise understanding of what's going on at that level - whether you've got some server fans spinning around, or some dielectric pumps, or something else which may be completely passive." TUE and ITUE aren't new; Dr. Michael Patterson (Intel Corp.), the Energy Efficiency HPC Working Group (EE HPC WG) and others proposed the two alternative metrics around a decade ago. Compared to PUE, TUE's adoption has been slow. The equations aren't much


harder to calculate, but TUE requires a greater understanding of the IT hardware in place – and that is something many colo operators won’t have much visibility into.

Howe says Cundall is starting to have conversations with its customers about TUE and moving on from PUE, partly as it looks to ensure all its projects are heading towards net-zero carbon, but also as customers look to achieve their own sustainability goals. As more companies deploy liquid cooling – which takes efficiency beyond what PUE can measure – more may opt for TUE as a way to better illustrate their sustainability credentials, amid a landscape where most facilities operate at PUEs in the low 1.1s.

“Operators are going to want to try and position themselves as being ahead of the game and doing more and achieving more. Previously, people have been comparing each other using PUE, but you might now get companies saying ‘my TUE is X’.”

PUE & TUE: just parts of a wider sustainability picture

While TUE can provide a more granular view of energy efficiency, it is still just one part of the total sustainability package. There is no one metric that captures the entire picture of a data center’s sustainability impact.

Many of the large operators now offset their energy use with the likes of energy credits and power purchase agreements, to ensure their operations are powered directly by renewable energy, or at least matched with equivalent energy contributing to local grids. Taking this further, Google and others such as Aligned and T5 are beginning to release tools that can show a more

granular breakdown of energy use by facility, showing a more accurate picture of renewable energy use hour by hour.

Companies are increasingly touting their Water Usage Effectiveness (WUE) – a ratio which divides the annual site water usage in liters by the IT equipment energy usage in kilowatt-hours (kWh) – to illustrate how little water their facilities use. Carbon Usage Effectiveness (CUE) aims to measure the CO2 emissions produced by the data center relative to the energy consumption of its IT equipment.

Schneider Electric recently published a framework document designed to help data center companies report on their environmental impact and assess their progress towards sustainability. The paper goes beyond the industry's normal focus on PUE and sets out five areas to work on across 23 metrics.

“Environmental sustainability reporting is a growing focus for many data center operators," Pankaj Sharma, EVP of Schneider's secure power division, told DCD at the time. "Yet, the industry lacks a standardized approach for implementing, measuring, and reporting on environmental impact."

Amongst those 23 metrics, Schneider includes greenhouse gas (GHG) emissions (across Scopes 1, 2, and 3); water use, both on-site and in the supply chain; waste material from data center sites generated, landfilled, and diverted; and even species abundance to measure biodiversity in the surrounding land.

In 2020 the Swiss Datacenter Efficiency Association launched the SDEA Label, which aims to rate a facility’s efficiency and climate impact in an ‘end-to-end’ way, taking into account not only PUE but also infrastructure utilization and the site’s overall energy recycling capabilities.

Despite the competition from young upstarts, PUE is still seemingly the top dog of sustainability metrics for now. “In Harlow, we operate at industry-leading levels of efficiency, and we use PUE to help us monitor, measure, and optimize our customers’ energy footprints,” says Kao’s Thibault. “We are aware of ITUE and TUE, but believe that PUE still offers our industry a widely adopted metric that can be used by new and legacy data centers to understand their energy requirements, to optimize their ITE environments and reduce waste.”

“One might arguably say that our industry has been too focussed on PUE, [but] alternative metrics will require greater data, parameters, and discussion between operators and customers in order to define new standards and drive adoption. Many legacy operators also may not have the means to achieve anything further than improving PUE levels, which is another consideration for our industry. Until that level of granularity is available, PUE remains the best option.”
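The WUE and CUE ratios described above are straightforward divisions; a sketch with made-up, illustrative figures (not data from any facility mentioned in this article):

```python
def wue(site_water_liters: float, it_energy_kwh: float) -> float:
    """Water Usage Effectiveness: annual site water (liters) per kWh of IT energy."""
    return site_water_liters / it_energy_kwh

def cue(co2_kg: float, it_energy_kwh: float) -> float:
    """Carbon Usage Effectiveness: CO2 emissions (kg) per kWh of IT energy."""
    return co2_kg / it_energy_kwh

# Assumed example: a 1MW IT load running all year draws 8,760,000 kWh.
it_kwh = 1_000 * 8_760
print(wue(2_000_000, it_kwh))   # 2 million liters/year -> ~0.23 L/kWh
print(cue(3_000_000, it_kwh))   # 3,000 tonnes CO2/year -> ~0.34 kgCO2/kWh
```

Like PUE, both ratios are only as meaningful as the measurement boundaries behind them - which is exactly the standardization gap Schneider's framework tries to close.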

Is the software green?

Metrics are numerical indicators, and don’t easily measure subjective influences. What the workloads are doing is beyond the remit of data center designers, engineers, and even operators, but that energy – the 1 of PUE’s 1.x – should not escape scrutiny when looking at the wider sustainability picture.

“PUE provides a good measure of how much power they consume, but not how they use it!” says Thibault. “As the requirement for sustainability and efficiency increases, software and machine code could potentially enable greater efficiencies. Legacy software, for example, consumes far more energy than newer applications, and this is an area that could change the energy efficiency landscape.”

Thibault has previously told DCD that colo operators can’t directly affect the 1 of the PUE, because it’s their customers’ business. But he notes there should be more focus on improving the efficiency of code, to really make that initial '1' of PUE the “most efficient that it can be.”

Organizations such as The Green Software Foundation are looking to promote greater sustainability in software through more efficient coding, but despite the backing of Accenture, ThoughtWorks, the Linux Foundation, Microsoft, and its subsidiary GitHub, the ‘green coding’ concept is still relatively niche amid the realities of business operations.

“The measure of the performance of a data center is not how much power the IT equipment is consuming, but what data processing work it is actually doing,” notes Howe. “[TUE] still doesn't talk about what the server itself is doing; all of these calculations assume that the energy that is delivered to the electronic components is doing useful work, rather than having a server that's just idling.”

Cooling Supplement

Peter Judge Global Editor

Cooling with water Is it possible to run a conventional cooling system - but use water instead of harmful refrigerant gases? We found a firm that thinks so


Conventional cooling systems have some complex environmental issues. Traditional air conditioning units consume large amounts of energy and use HFCs (hydrofluorocarbons), which are powerful greenhouse gases. These gases are being phased out by industry agreements such as the European F-gas regulations. In temperate climates, “adiabatic” cooling systems remove heat by evaporating water, but data centers’ excessive thirst for water is also coming in for criticism.

But what if it were possible to have a refrigeration system that uses water as the working fluid - and uses it in a closed circuit so it doesn’t get consumed?

Water refrigeration

That’s what German tech firm Efficient Energy claims to have with its eChiller: “The eChiller is the only refrigeration machine that works with pure water as a refrigerant and has an energetic performance that exceeds the state of the art by a factor of four to five,” claimed Juergen Suss in a video on the company’s site. Suss served as CEO and CTO in the company’s early days from 2013, before leaving to join Danfoss.

The current management is finding a boost from changes in the industry: “The refrigerant market is currently undergoing a huge transformation,” says Thomas Bartmann, sales director at Efficient Energy. “With the goal of cutting emissions, the EU enacted the F-Gas Regulation, which will drastically restrict the use of traditional, HFC-based refrigerants, forcing operators to fundamentally rethink their cooling strategies.”

“We’re feeling the rocketing demand for natural refrigerants and environmentally friendly refrigeration technology firsthand,” he adds.

Perhaps to increase familiarity for those used to conventional refrigerants, Efficient Energy likes to refer to water as R718, its name in the ASHRAE/ANSI standard listing of chemical refrigerants. And the device is somewhat familiar: the eChiller has the same components as a conventional chiller - an evaporator, a compressor, a condenser, and an expansion device. It uses the direct evaporation of water in a near-vacuum in a closed circuit, cooling the primary circuit through heat exchangers.

Bartmann says Efficient has had products in operation since the end of 2014. A new chiller, the eChiller 120, was introduced in 2020 with 120kW of cooling power. The system is in use in a few data centers already, including BT’s Hamburg facility, which installed three of Efficient’s earlier 40kW units in 2017. The BT data center uses cold aisle cooling, with racks arranged in 100kW

“cubes” connected to power and cooling systems. There’s a chilled water network, and the eChillers were installed to cool water in that loop. The data center has an energy management system, linked to a building management system, which determines how many of the eChillers are actually needed at a given time. The system keeps logs, and Efficient handles system maintenance. A trial showed the system could reliably deliver chilled water at 16°C.

At Sparkasse savings bank in Baden-Württemberg, Efficient provided cooling to a warm-aisle containment system. In this case, the bank was gradually adding equipment in a building with a maximum capacity of 70kW. Efficient started with 8-10kW and expanded that to 35kW.

With those and other projects under its belt, the company established itself first in the DACH region (Germany, Austria, and Switzerland), and in 2021 began setting up a network of distributors across Europe, with partners signed in the UK, France, Sweden, and Norway. Bartmann says Efficient is being approached by prospective partners: “As the only supplier of series-produced chillers that use water as a climate-neutral refrigerant, we see ourselves as a pioneer of sustainable refrigeration solutions.”

Issue 43 • December 2021 / January 2022

Cool vibrations A new cooling technology debuted at COP26 - one which harnesses thermal energy to create vibrations


Outside a data center in Glasgow stands a bizarre-looking device. A metal cylinder, some four meters high, is studded with metal paddles. The paddles are all waving up and down, like the leaves of some sort of mechanical tree.

It is, hands down, the most bizarre and alien-looking piece of data center kit we’ve ever seen, but the company behind it, Glasgow-based Katrick Technologies, believes it is set to take the data center world by storm. “Our genuinely disruptive passive cooling system is set to revolutionize the data center market,” promises Katrick’s video about the technology, “transforming data centers from energy-hungry centers into eco-friendly data providers.”

The thermal vibration bell (TVB) came

seemingly out of nowhere, popping up at a Glasgow data center during the COP26 climate change conference in November, accompanied by stirring claims from Katrick. Its launch event drew members of the British and Scottish parliaments, including Ivan Paul McKee, Minister for Business, Trade, Tourism, and Enterprise in the Scottish government.

The bell is “a new refrigeration cycle” according to Karthik Velayutham, founder


and co-CEO of Katrick, which has a patent application for the system, whose internals fully match the novelty of its exterior. The system uses two different cooling fluids, Velayutham explained to DCD, and it works because of their different properties. In a large drum at the base, hot water from the data center’s primary heat removal system passes through a heat exchanger, giving up its heat to a coolant which surrounds it. This coolant has a high density, so it remains in the lower part of the bell, but has a low boiling point, so it starts to bubble.

"Initial results have been very pleasing. We think we can save up to 70 percent of our cooling costs, and 25 percent of our overall energy usage"


These bubbles rise through a grille into the top half of the bell, where they pass through a second coolant, with a low density and a high boiling point. As well as taking the heat energy from the bubbles, the top part of the bell harnesses their mechanical energy. The bubbles agitate paddles in the coolant, each of which connects to one of the waving leaves outside the bell.

Each paddle has two ends: one blade inside the bell absorbs heat and is moved by the coolant bubbles. The second blade, outside the bell, gives up its heat - a process accelerated by the agitation of the paddle. The outside paddles radiate heat, and the motion created by the vibrations accelerates that heat loss, explained Velayutham.

As a passive system, the bell simply uses the heat input to drive the cooling mechanism. This is in contrast to a conventional chiller which, like any air conditioning system, needs electricity to drive the mechanical energy necessary to deliver cooling.

It’s part of a portfolio of thermomechanical systems which Katrick is developing, along with an energy harvesting system - effectively a novel wind turbine whose unconventional fan blades can gather energy even at low wind speeds. As well as placing its thermal bells at data centers, Katrick wants to build walls of these turbines alongside roads and airports, to passively gather green electricity.

The unidirectional string mirror

Underlying many of Katrick’s inventions is the unidirectional string mirror (UDSM), a multi-point conversion system which captures vibrations from a given surface area, converges them to a focal point, and converts them to energy. In its wind patents, the company uses the UDSM to harness small amounts of vibrational energy, which can create usable power.

As the company explains, “vibration to power is a known process which has been in use for over a century. 
When you talk over your phone, the sound (which is a form of vibration) is captured by the microphone and converted to electricity.” The company believes that UDSM is efficient enough to create usable energy from a variety of sources, including wind, waves, and heat. “Heat is a low-quality form of energy with atoms moving randomly,” says Katrick’s site. “By capturing and converting them into mechanical vibrations they are transformed into a highly organized form of energy. We can extract this energy efficiently through smaller pockets to capture and provide higher quality power.”

"We are doing testing to apply this technology to any type of data centers, whether it's air-cooled, or water-cooled or liquid-cooled" In the case of the thermal vibration bell, the energy may be organized, but it is not harnessed further. Although the paddles move, their motion is not converted into electrical energy or any other form of useful power. Instead, the energy is used directly, to help radiate heat, replacing energy that would otherwise be required to drive conventional chillers. Out of research Velayutham came up with the UDSM concept while studying waves and looking for a way to harness their energy. It was tested at the University of Strathclyde’s Naval Architecture, Oceans, and Marine Energy (NOAME) department, which established that a UDSM device can capture and converge vibrations. A further NOAME project, under the Energy Technology Partnership (ETP) program, developed the working concept of UDSM panels, which capture wind energy. A three-month project at NOAME confirmed mechanical vibration can be captured by a panel, converging the vibration to a focal point with an energy increase of over 250 times, a lens effect later increased to 400 times. The ETP program also brought in Glasgow Caledonian University, and the bi-fluid thermal vibration bell heat engine concept was developed. Internationally, also worked on the concept with thermal engineers at Indian consultancy Energia India to prove heat energy can be converted into fluid vibrations in the bi-fluid bell. That project found the heat engine can convert up to 30 percent of thermal energy to mechanical vibrations. Test drive partner No matter how good, ideas like this can languish on the shelf, unless there’s an industrial partner willing to take them on. Katrick signed with local data center provider Iomart, to test the TVB at its Glasgow data center during October 2021. 
Iomart, like many other providers, has been keen to adopt renewable energy, and also reduce the amount of energy it uses to cool its data centers. The bell is several steps more radical than simply moving to renewable power

or cutting waste, but Iomart CEO Reece Donovan has been impressed with what he has seen: "Initial results have been very pleasing. We think we can save up to 70 percent of our cooling costs, and 25 percent of our overall energy usage."

To create the test system, Katrick decided to build a module with a capacity of 120kW - a substantial but not excessive capacity, and one which would make a measurable difference to most data centers’ thermal performance. Katrick visited multiple facilities to come up with a useful design, says Velayutham: "We measured their layouts, and we believe 120kW is a very good number." The actual prototype was built by a boutique manufacturing company in Glasgow, but could easily be scaled up.

Retrofitting to existing sites

Katrick believes the system can replace conventional chillers, and can be retrofitted where those chillers are connected to a chilled water cooling circuit. "Our passive cooling solution is completely retrofittable, providing an end-to-end solution to passively cool refrigerants from existing systems," says a company video.

Velayutham says it could be applicable in many climates, but concedes it will be most applicable in temperate regions like the Northern hemisphere countries where most of the world’s data centers are located. "Technologies which are easily adaptable should be accessible to everyone around the world," says Velayutham. "We are doing testing to apply this technology to any type of data centers, whether it's air-cooled, or water-cooled or liquid-cooled, even systems that would come in five years' time."

Made in Scotland

But any business building the system is likely to remain in Scotland, says Velayutham. A graduate of Strathclyde University, he has worked in the Glasgow area in marine engineering, as well as energy systems, since then. He’s a firm believer in keeping the technology where it was born, he told DCD: "We have done extensive research on the costing. We want to make everything in Glasgow. 
I started this in Glasgow, so I want to stick to Glasgow."



Sardinian Mine

Joining the cavern club Digital Metalla hopes to ape Norway’s Lefdal Mine and create a secure underground facility in a disused Sardinian mine

Dan Swinhoe News Editor


What have the Romans ever done for us? Well, apart from the sanitation, the medicine, education, wine, public order, irrigation, roads, the fresh-water system, and public health, they also made southwest Sardinia a mining region that today looks set to become the home of a new underground data center in the Mediterranean.

Like Lefdal Mine in Norway, Dauvea hopes to establish a new Tier III/IV data center in a disused mine; one that runs on renewable energy and uses the mine’s naturally cool air and water in lieu of traditional cooling equipment. DCD speaks to Dauvea’s executives about the Digital Metalla project.

Digital Metalla: a data center in a Sardinian mine

Antonio Pittalis and Salvatore Pulvirenti, founders and managing directors at Italian ICT firm Dauvea, have careers going back almost 30 years through Tiscali and Telecom Italia, CRS4 (Center for Advanced Studies, Research and Development in Sardinia), and Engineering D-HUB. Founded in 2017, Cagliari-based Dauvea provides a number of IT, cybersecurity, and cloud services. With the new Digital Metalla project, the pair aim to convert a disused government-owned silver mine into a data center.

The mine, located near a town called Iglesias to the southwest of the main island in an area broadly known as Sulcis, was closed around 2000, but the area has been a mining hub since Roman times. Metalla is an archaeological site in the ancient Phoenician city of Sulci (or Sulcis), to the north of Sant'Antioco, the small island southwest of Sardinia's main island. The area has been mined for lead, silver, zinc, coal, and other minerals for thousands of years, and mines in the area only began to close in the 1990s. It has previously been submitted for consideration as a UNESCO World Heritage site.

“Metalla is a Latin word for mining,” explains Pulvirenti. “We tried to combine the old and the new.”

As well as having two independent fiber loops in the area, the company will use two naturally-cooled 200m-deep lakes within the mine for equipment cooling, via a heat exchanger. “The site contains several hundred million cubic meters of water, a huge amount of water,” says Pulvirenti. “The air and water inside the site is 15 degrees Celsius – which is very low compared with the average temperature in Sardinia, which is around 35 degrees Celsius in summer – and it is at the same temperature all year round.”

Across two data rooms the company

aims to install data hall modules in phases up to around 2MW, though there could be potential to reach 4MW. The company would power the facility via an on-site solar farm and is exploring the possibility of including a hydrogen generation plant.

Dauvea began exploring the possibility of a data center in the area around 2017/18, and in 2019 the local government began the process of offering the mine site out for tender for potential commercial uses. The tender process finished in 2020, with Dauvea granted rights to the facility for 25 years, and the pair spent another year gathering the necessary permits and authorizations to allow the project to go ahead.

Dauvea is what Pulvirenti describes as ‘infrastructure-light’ in terms of its own data center footprint – it offers services from colocation and cloud facilities – but the two have been involved in a number of data center projects in previous roles. Pulvirenti acknowledges that hyperscalers


are unlikely to be interested in such a small project. But it will host Dauvea’s infrastructure, offer colocation space to local government and businesses, and also provide disaster recovery services and potentially a digital cavern or vault for storing cryptocurrency assets.

“The data center, as you can understand, is not a hyperscale data center; it’s not big enough to be that,” says Pulvirenti. “But we believe more and more that Edge data centers are needed to provide new kinds of services to the population and businesses.

“We believe that there are multiple business use cases. It’s very well-protected, and an ideal place for storing important digital assets; apart from a traditional data center it could be really sold as a digital cavern.”

Like Lefdal Mine, Dauvea hopes to attract HPC customers focused on keeping their operations as sustainable as possible. Pulvirenti also thinks the location would be potentially useful for quantum computing. “Quantum computing infrastructure needs to be in a place where you do not have any kind of interference,” he says. “The only way to get zero interference is being inside a mountain or mine.”

While the University of Cagliari and the CRS4 research centers have some interest in quantum computing research, it doesn’t seem likely any Sardinia-based companies

would have the scale to require a quantum computer. It’s also yet to be seen whether any of the leading quantum computing companies would be willing to locate a system away from their main hubs, in a remote corner of a Mediterranean island.

Dauvea is self-funded by Pittalis and Pulvirenti; the two aim to set Digital Metalla up as a separate company that will own and operate the facility, in order to draw the necessary funding – the pair are targeting EU grants and funds as well as private investment – and open up more partnership opportunities. The company estimates phase one of the project, to deliver the initial 500kW along with the infrastructure and equipment that will enable further expansion, will cost around €10 million. To get to 1MW, the company is projecting a required investment of around €15-17 million.

“We are in touch with several funds, including innovation funds and especially funds related to energy efficiency,” says Pulvirenti. The company hopes to have the required funding to begin the project in 2022, and to deliver the first 500kW module around 12 months later.

Converting a mine into a data center

The entrance of the mine is due to be enlarged and have added security measures installed.

"We believe that there are multiple business use cases. It’s very well-protected, and an ideal place for storing important digital assets" 46 DCD Magazine • datacenterdynamics.com

While there is fiber and power in the area, Pittalis says there is currently ‘nothing’ in the mine itself, and the company will have to build all the necessary infrastructure from scratch. From the entrance, there is a 600m-long gallery. Two rooms – the planned data halls – branch from the main path and currently measure around 300 square meters each. The company aims to expand these rooms further. “We are working with the University of Cagliari’s engineering faculty to set up the right amount of dynamite to use inside the cave to enlarge the rooms,” says Pittalis. The data center infrastructure will be 500kW Schneider Electric data center modules.

At the end of the 600m tunnel are the two lakes. Once the first data hall is installed, water will be pumped from one lake through a heat exchanger, passed through the data halls to cool the equipment, and then released into the second lake. “We are taking water from the left-hand side, and after passing the heat exchanger we put the water in the other side,” says Pulvirenti. “The idea is to take the water at 15°C from one lake and discharge it at 20°C into the other one.” The two lakes are connected, but only join around 200m down, so the water drawn from the ‘cold water’ side will remain at the required temperature. Thanks to this cooling approach, Dauvea is targeting a PUE of less than 1.1; around 1.05 or lower.

As well as the usual data center infrastructure and equipment, the company is required to build a safety corridor within the mineshaft that staff can use in an emergency.

Sustainability and giving back

The project will be connected to the local power grid, but will aim to generate most of its power from an on-site solar farm. The company is also looking to install a hydrogen generation plant at the site, which would be powered by excess capacity from the solar farm and used to power on-site batteries/fuel cells. Excess power will be given to the local power grid. 
Dauvea is in discussions with potential partners for that part of the project. Pulvirenti is from the Sulcis area and both men have lived and worked on the island for more than 20 years. For them, the project is also about giving back to their home. “This challenge for us is to give back to the territory what we received the last 20 years. We started our business here, where we learned a lot of the ICT world, and we want to give back to the territory.” “Whenever we talk about this project to anyone, government or private, we see a lot of enthusiasm,” says Pittalis. “The local government in the town of the site are enthusiastic.”
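Dauvea's quoted figures - 500kW of IT load cooled by lake water entering at 15°C and leaving at 20°C - imply only a modest flow rate. A back-of-envelope sketch (our own arithmetic from the figures in this article, not Dauvea's engineering):

```python
CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

def required_flow_kg_s(heat_w: float, delta_t_k: float) -> float:
    """Mass flow of water needed to absorb heat_w watts with a delta_t_k temperature rise."""
    return heat_w / (CP_WATER * delta_t_k)

# First 500kW module, 15°C in and 20°C out (a 5K rise):
flow = required_flow_kg_s(500_000, 5.0)
print(round(flow, 1))  # ~23.9 kg/s - roughly 86 cubic meters of water per hour
```

Against lakes holding hundreds of millions of cubic meters, a flow of that order helps explain why the 'cold water' side is expected to hold its temperature.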






Who’s a good bot? As Novva deploys Boston Dynamics’ Spot dog robot at its Utah data center, CEO Wes Swenson says larger data centers need automated security

Dan Swinhoe News Editor


Imagine the scenario: you arrive at a data center in Utah around the holiday season. As you walk the aisles of the data hall, a four-legged robot dog in a Christmas jumper walks up to you and greets you by name before returning to its rounds.

Such a scene isn’t some poorly-written and far-fetched fan fiction, but a potential reality. Novva Data Centers, a new firm led by former C7 CEO Wes Swenson and backed by CIM Group, has partnered with Utah’s Brigham Young University to deploy customized Spot dog robots, developed by Boston Dynamics, in its data center. First announced in September, Novva said the Wire (Wes’ Industrious Robot Employee) machines will fulfill “multiple mission-critical roles” at its Utah campus, including temperature and equipment monitoring, greeting guests, confirming building occupants' security clearance via facial scanning and recognition, and other general ‘tasks and missions.’

As we’ve previously written, robots in data centers are often talked about, but rarely progress past the prototype or pilot project stage. But Swenson tells DCD that bigger and bigger data centers will need automation and robotics to ensure optimal operations. “We're trying everything we can to build a better data center, so why not give it a shot?” says Swenson. “So far we're actually really happy with what it does - but it is cutting edge. It is not for the light-hearted. You really have to be committed to it.”

The promise of robots in the data center

Data centers have a long and patchy history with deploying robots. The long-heralded lights-out data center maintained by robots remains out of reach, and deployments of robots cohabiting with humans have had mixed results.

Back in 2013, IBM hacked an iRobot (similar to the Roomba autonomous vacuum cleaner) to travel around a data center tracking temperature and other data. The pilot project was quietly shelved. The Korea Advanced Institute of Science and Technology tried a similar idea; its Scout

bot patrolled the KAIST iCubeCloud Data Center using vision-based monitoring to look for issues. A follow-up study promised to attach a robotic arm that could work on servers. It was never published. Naver’s Cloud Ring data center in Sejong City, South Korea, also promised to use a number of robots for maintenance, monitoring, and security. The facility was due to come into operation in 2022. It has made a number of announcements about deploying robots at its offices as well, but the machines don’t appear to have been rolled out yet. However, some companies have seemingly managed to successfully deploy robots at certain facilities. German Internet exchange company DE-CIX has rolled out a family of automated “patch robots," including Patchy McPatchbot, Sir Patchalot, and Margaret Patcher. These are based on X-Y gantries, and can locate a socket in an optical distribution

48 DCD Magazine • datacenterdynamics.com

frame, similar to a traditional patch panel, and then plug a fiber optic cable into it. “During the 2020 11.11 Global Shopping Festival, Alibaba Cloud data centers had its Tianxun inspection robot upgraded to the second generation,” Wendy Zhao, Senior Director & Principal Engineer, Alibaba Cloud Intelligence, previously told DCD. “The second-generation Tianxun robot is AI-powered and can work without human intervention to automatically replace any faulty hard disks. The whole replacement process, including automatic inspection, faulty disk locating, disk replacing, and charging, can be completed quickly and smoothly. The disk can be replaced in four minutes.” Switch is developing its own robot, the Switch Sentry, essentially a 360-degree camera and heat sensors on wheels that can act as a security guard. It travels autonomously, but humans take over remotely when an incident

occurs. The company said it hopes to turn the robot into its own business line, but no such timeline has been announced. Spot the Boston Dynamics dog has been used by a number of companies, including Tesla, for facilities surveillance. However, even though Boston Dynamics’ former parent Google has more than enough data centers, Novva is apparently the first and possibly only data center Spot has patrolled. Why bother with robots? Novva’s flagship data center is a $1bn, 100-acre campus in West Jordan, Utah. Construction is taking place over four phases and will total over 1.5 million sq ft (139,350 sq m) of data center space when finished. The first phase, involving a 300,000 sq ft (28,000 sq m) data center, was completed in late 2021 and includes a 120MW substation as well as an 80,000 sq ft (7,500 sq m) office building for Novva's headquarters. “With such large facilities, it really requires kind of a different approach. We had not seen this before in another data center, but it was really out of necessity that we did something because of the scale of our data centers,” says Swenson. He says Novva is taking a “holistic” approach to security and observation, with the Spot robots, drones outside – and eventually inside – and maybe the use of Boston Dynamics’ humanoid Atlas robots. “Our whole automation approach is to get to where the humans can monitor for anomalies, rather than sitting there tweaking things manually,” he says. Research firm Gartner recently predicted that half of cloud data centers will be leveraging advanced robots by 2025. “The gap between growing server and storage volumes at data centers, and the number of capable workers to manage them all is expanding,” said Sid Nag, research VP at Gartner. “The risk of doing nothing to address these shortcomings is significant for enterprises. “Data center operations will only increase in complexity as organizations move more diverse workloads to the cloud. 
Data centers are an ideal sector to pair robots and AI to deliver a more secure, accurate and efficient environment that requires much less human intervention,” he said. There have been a number of announcements around Spot and other robot dogs used for construction and facilities maintenance this year; IBM has partnered with Boston Dynamics to explore how to increase the utility of mobile robots used in industrial environments, while Tesla and Hyundai reportedly have the robots patrolling their manufacturing facilities. Construction firm Robins and Morton has deployed Spot to help with the development of a HostDime data center in Eatonville, Florida. Verizon has partnered with Ghost Robotics to explore how 5G-enabled robots can support

inspection, surveillance, mapping, and security. Spot the dog at a data center Swenson says the idea was to use Spot, or Wire, to do some of the mundane tasks humans may tire of in a large facility. The machine runs pre-determined missions throughout the data center to collect data, monitor equipment, and report anything unusual. “There are some things that humans get tired of, like the repetition,” he says. “We put QR codes on every piece of machinery in the data center. When human patrols or inspectors go around, they have to scan those QR codes as a routine facility check.” Currently, Spot takes temperature readings and performs security tasks ensuring there are no unexpected visitors in the data halls. “As it's doing its rounds, it's feeding back all of that information to us in the control center, where we do statistical quality control and watch for anomalies,” says Swenson. As well as taking a reading of the ambient temperature of the room or data aisle, it can also read the displays on equipment and feed that back to the control center. It also performs a security role: the dog is integrated with Novva’s security database, and will scan the faces of people in the data center against the company’s list of authorized or expected personnel. Recognized visitors will be greeted by name via a generated audio file, while strangers are reported to the control center. “When it recognizes a face, it will check the database and it will say good morning to your name,” says Swenson. “If it doesn't recognize you then it reports it back to the control center as an alert; it marks the GPS location, takes an automatic photograph, and then the control center will follow up on it. 
The robot will then either stay put or continue its rounds.” Developing Spot for data center use Novva partnered with BYU to build upon Spot’s Software Development Kit (SDK) and create a more user-friendly software package that made it easier to edit and improve going forward. “We employed BYU's engineering department to help us advance that programming, and get it to a stage in which the software is easier for my technicians to adjust missions,” says Swenson. “BYU did a great job with it. They had it for almost a year; that kind of gives you an idea that it takes quite a bit to do this.” Joseph LeCheminant, a Mechanical Engineering MS Student at Brigham Young

University, was involved in the project through its capstone program as part of his undergraduate degree. He was part of a team of seven students; a mix of mechanical, electrical, and computer engineers who worked on the project for two semesters (around eight months in total). “It’s definitely something really exciting to see in person. You just look at it and [think] how is this real?” he says. “It was fun to branch out a little bit from traditional mechanical engineering.” Part of LeCheminant’s role in the project was to help write the custom API to connect Spot’s facial recognition capabilities developed by BYU to Novva’s own facial recognition database and generate the audio files to say the person’s name. The robot comes with advanced capabilities for walking and object avoidance; the team added software to use Spot’s cameras for facial and meter readings, as well as develop its autonomous route navigation capabilities. On the hardware side, the team added the thermometer to take a reading of the atmospheric temperature in the computer rooms. LeCheminant says the team had something of a learning curve with Boston Dynamics’ SDK for Spot, especially as some of the students had no experience of coding, let alone coding in Python as required. “We the mechanical engineers had to learn a bunch of things that the electrical and computer engineers already knew, and then we needed to understand the SDK that came with Spot to develop it and customize it,” he explains. “None of the mechanical engineers had ever used Python ever, and by the end we could write a Python script from scratch and make Spot do things.” Novva has since upgraded to a later model of Spot, but the one the BYU team were developing on didn’t have battery charging capabilities, which meant the students had to regularly change batteries by hand. 
At the time, Spot was also limited in its connectivity and wouldn’t connect to WPA Enterprise Wi-Fi networks – though support for that has since been added – so the team had to buy its own routers and set up its own network to run the robot dog. During development, the students had regular meetings with Novva to discuss requirements and development progress. The students’ involvement in the project finished before the robot was actually deployed in the data center, but both BYU and Novva regard the project as a success in terms of the work the students did.
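The greet-or-alert flow described earlier (check a recognized face against the authorized list, greet the person by name, otherwise record GPS and a photo and alert the control center) can be sketched roughly as below. This is an illustrative assumption-laden sketch: the database structure, field names, and message formats are invented, not details of Novva's or BYU's actual software.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical stand-in for Novva's personnel database; the IDs and
# names here are illustrative, not taken from the real deployment.
AUTHORIZED = {"face-id-001": "Wes Swenson"}

@dataclass
class Sighting:
    face_id: Optional[str]  # None if the recognizer found no match
    gps: tuple              # (latitude, longitude) where the face was seen
    photo_path: str         # automatic photo taken at the sighting

def handle_sighting(s: Sighting) -> str:
    """Greet recognized people by name; otherwise alert the control center."""
    if s.face_id in AUTHORIZED:
        # In the real system a text-to-speech audio file is generated here.
        return f"Good morning, {AUTHORIZED[s.face_id]}"
    # Unknown face: the control center gets the GPS fix, the photo, and a ticket.
    return f"ALERT: unrecognized person at {s.gps}, photo saved to {s.photo_path}"
```

The key design point the article describes is that the robot itself never intervenes; both branches simply produce information for humans in the control center to act on.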

Issue 43 • December 2021 / January 2022 49

LeCheminant describes the project as a “huge privilege to us as students.” “I think it's a really cool statement that they've made,” he says. “It was great to work with them. They were easy to work with, extremely flexible, and just happy to provide what we needed to succeed and give us the learning experiences as students.” The software packages developed by BYU should be easily editable, allowing for further improvements in the future. Swenson says that while some of the project may have been developed using open source tools, Novva currently owns the custom software packages developed during the project, and would be unlikely to license or resell them to a competitor. It would, however, consider contributing part of what was developed to an open-source foundation or project dedicated to robotics. Deploying Spot in Utah Novva currently has two Spot robots, but aims to increase the number as it expands its campus and also use Spot in other facilities. “There will probably be three to four dogs per facility,” says Swenson. “So in Utah, you would see anywhere from 16 to 20 dogs. We like to be N+1, so we want an extra dog on hand.” Novva acquired a second data center in Colorado in September, and Swenson says that the campus will have around six dogs once all three buildings are fully developed. Other planned facilities the company has in the works will also see dog deployments. Spot comes with a control panel, and has to be manually walked around the facility to learn the building. As well as teaching Spot the current layout of the data center, the company also has mapped equipment that isn’t yet installed so that Spot’s patrol routes don’t need regular updating. 
“A lot of the cabinets aren’t in place yet but the robot is programmed to operate as if they are so it doesn't take us a lot to deploy it,” says Swenson. In its YouTube videos, Boston Dynamics often has QR-code-like signage around its facilities and objects its machines are interacting with. Novva has adopted a similar system. “We also have a QR type code system that sits about three feet above the floor throughout the building. Its eyes can recognize those to verify where it's at.” In the near future, Swenson hopes to deploy

the Spot/Wire dogs outside to respond to alerts and provide ground support to the drones. “We are waiting for the next major upgrade of the robot from Boston Dynamics where it will actually have an arm, and one robot can open the door for another robot.” “Eventually, I think even next year, we can get the robots to run payloads from the warehouse out to the data centers; they'll be able to pull small trailers and walk behind you.” The uncanny valley, but a good dog Anyone who has watched Boston Dynamics’ carefully choreographed videos is bound to have felt a sense of eeriness about the machines; while clearly not human or dog, the movements of Atlas and Spot can look strangely natural. Swenson and LeCheminant both acknowledge the “uncanny valley” feeling the machines can inspire, but note the robots have a certain mesmerizing allure. “The feedback from friends and family was this is a little bit uncanny and they definitely wouldn't want to come across this and have it come chasing them,” says LeCheminant. “But, as the developers, you understand that it only does what it's told. It does have some pre-programmed autonomy, but it's not gonna roam off on its own somewhere.” LeCheminant adds that one of the reasons the ability to greet visitors by name was added was to try and “break down” any barriers people might have about the machine. “People get a little bit surprised by it, but it is captivating,” adds Swenson. “It stops you in your tracks when you see it. “Some people do get a little nervous when it gets too close. These things are quite large; they weigh a few hundred pounds, and they can be slightly dystopian in appearance.” Occasionally throughout our call, Swenson genders the robot before correcting himself, but accepts he and the team have a certain fondness for the canine-like machine. “We kind of embrace it as a living thing, you almost can't avoid it. Even though it's just this robotic object, you do take some endearment to it. 
We're even having a dog sweater made for it for the holidays!” When the machines are deployed outside, Swenson is keen to note that the machines are still only for surveillance and won’t be performing any kind of K9 heroics. “It's not going to interfere. It's not going to attack, it's not going to defend, it's not going to do any of that stuff. And we don't really want it


to. If it's something we need to call the police for, we would do that.” Not the perfect monitoring machine “For the most part, it runs its missions and then it comes back to its dog house where it sits down on a platform and recharges itself so it doesn't really require any human intervention to do that.” “You have to clean the 360-degree camera once in a while but other than that, it's pretty hands-off.” While everybody in the control center is trained on how to use Spot, the company currently has one developer internally to manage the robot’s ongoing development. “If the operators are seeing something with the dog or the drone, then they just register a ticket with IT and we'll fix it. Or, if it's an improvement – like changing the angle of the gaze, things like that – then we put in a ticket and see if we can improve it.” “At this stage, it doesn't take a team of people, but that's really due to the fact that BYU did so much advanced engineering on it and made the software interface, the user interface more approachable for us.” While impressive, the machine isn’t yet totally enterprise-ready in terms of day-today operations. While data centers require constant uptime, Spot still requires regular downtime and isn’t quite ready for continuous development in terms of software updates. As part of Novva’s contract with Boston Dynamics, the machines are entitled to regular software and hardware updates, but rather than OTT software updates or on-site maintenance, Novva has to send the machines away. “Right now, we actually physically send the dog back and they may send us another dog or replacement dog with upgraded software or some kind of mechanical system. We may not get the exact same dog back.” Swenson says Novva has to unequip the external equipment – LiDAR, 360 camera etc – before the company can send a machine back. 
And once a replacement arrives, Novva has to go through a QA process to check that whatever software upgrades were implemented don’t clash with Novva’s software packages, and then make any amendments if required. “Things like that are a little bit complex and burdensome. It's not the most plug and play technology at this stage. It does take some dedication of resource to make it work.” While the company is hoping to make a number of software updates and enable new capabilities, it plans to only do large-scale changes every four to six months. “We're trying to keep it in static mode right now. We're trying to not spend more time programming it than we are actually using it, so we're trying to limit its revisions.” Another issue is connectivity. While the machine now has support for enterprise Wi-Fi

networks, it has to be connected at all times and currently has no option for the likes of LTE or any IoT wireless protocols. “The robot has to work off of Wi-Fi connectivity; it's not really built for 5G and it doesn't operate outside,” says Swenson. “If it loses some kind of connectivity or things like that, it will just stop in its tracks, lie down, and alert the control center.” Swenson says he hopes that Boston Dynamics will add some 4G/5G capabilities to the machine soon, but if not, Novva will look to customize the machine further to add that. And if that proves difficult, the company will expand its Wi-Fi to cover the wider campus beyond the offices and data halls. He also notes there’s just no competing with human understanding and intuition on some things. “It doesn't think on its own. It doesn't make decisions, when it sees something it doesn't react to it. It would take a lot of programming for a robot to hear friction in equipment and be able to notify the control center that it hears that; things like that are difficult and probably not worth it yet to program.” The Explorer model of Spot costs $74,500, while the Enterprise model is more expensive. Extra equipment – such as the 360-degree inspection camera ($29,750), LiDAR sensor ($18,450), or on-board GPU processor ($24,500) – costs extra. Swenson says that including maintenance and equipment, Novva is spending around $200,000 per dog; a price point the company finds effective in order to add an extra layer of monitoring and security it didn't have before. Novva is also seemingly interested in

Boston Dynamics’ humanoid Atlas robot and keen to test its capabilities in a data center environment once the robotics company makes the machine available. “We would very much like to try Atlas as well when they feel like it’s ready for primetime,” said Swenson. “Not that we would abandon the dogs altogether, but if we could get to the more humanoid [robot] where it's on two legs and it stands up and it can open its own doors and carry payloads, and lift the servers into the cabinets for the clients - that's where we're

looking to in the future.” But he reiterates that even deploying Atlas robots won’t lead to an en-masse replacement of human workers. “It's really a complement to the human side of the business,” says Swenson. “There is really no replacement of humans for this. It is really complementary. In fact, I would say we've had to add personnel to program these missions.” “It's not perfect, it's not plug and play and it still takes some programming adjustment. But we see this is a critical element of what we do in the coming years.”

Drones in a data center As well as Boston Dynamics’ Spot robot, Novva is currently rolling out semi-automated drones for security purposes. Equipped with 4K cameras, LiDAR and infrared, the quadcopter drones perform regular autonomous perimeter checks, as well as responding to ad hoc alerts. “When you run a 100-acre campus, you really should have an aerial view of the operation,” says Anderson. “For the most part, it just does its own thing and then just autonomously goes back to its landing site.” The company currently operates two drones – models known as Blackbirds and supplied by Nightingale Security – but plans to have four at its Utah campus. The machines live within a designated enclosure known as a base station with a retractable roof that acts as a landing pad and charging pad. Novva has around 10 pre-defined missions the drones can run. As well as

routine perimeter checks, the drones can react to certain alerts. “We have detectors on our fencing that detect any kind of vibration, and it has an algorithm that detects whether that vibration is due to the wind speed or if it's due to some kind of other interference,” says Anderson. “If we think it's somebody trying to climb over the fence or cut it, the drone will automatically launch towards that sensor and get eyes on it.” The system includes pre-defined limits around altitude, no-fly areas (such as over the company’s on-site substation), and won’t look at certain neighboring buildings. Anderson says everyone at Novva’s control center, including himself, has to be certified with the FAA to fly a drone, and then internally the company has different pilot statuses. Less certified members of the team can

pause and rewind pre-set missions to check for anything the drones may have missed, but those with top flight status can take full control of the drones. “The drone itself requires very little human intervention once it's preprogrammed. And even then it has all sorts of safeguards; emergency landing zones, no-fly zones, altitude zoning.” Today Novva only has medium-sized drones that operate outdoors, but in the future, the company is aiming to have microdrones operate within the data halls. “Right now we're experimenting with smaller drones that are three inches in diameter,” says Anderson. “They would operate within the data center and be able to monitor things [and watch for] anomalies within an aerial view.” Anderson says that once deployed, the company aims to have eight to 10 microdrones per building.
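The fence-sensor dispatch logic described above, deciding whether a vibration is wind-driven or a possible climb or cut before launching a drone toward the sensor, might look something like this sketch. The threshold model, field names, and numbers are invented for illustration; Novva's actual detection algorithm is not public.

```python
# Illustrative sketch: classify a fence vibration reading as wind or a
# possible intrusion, and dispatch a drone only for intrusions.

def is_intrusion(vibration_hz: float, wind_speed_mps: float) -> bool:
    """Treat vibration well above the wind-driven baseline as an intrusion."""
    # Assumed model: wind-driven vibration scales linearly with wind speed;
    # the 2.0 factor and 5.0 Hz margin are made-up tuning constants.
    expected_from_wind = 2.0 * wind_speed_mps
    return vibration_hz > expected_from_wind + 5.0

def respond(sensor_id: str, vibration_hz: float, wind_speed_mps: float) -> str:
    """Return the action the control system would take for one reading."""
    if is_intrusion(vibration_hz, wind_speed_mps):
        # In the article's description, the drone auto-launches toward the
        # triggering sensor to "get eyes on it".
        return f"LAUNCH drone to sensor {sensor_id}"
    return "log only"
```

The point of the two-stage check is the one Anderson makes: the drone should not scramble every time the wind shakes the fence, only when the reading cannot be explained by the weather.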


The Last Continent

Antarctica comes in from the cold Could a subsea cable finally be close to connecting the last unconnected continent?


Despite the recent boom in subsea Internet cables, Antarctica remains the last unconnected continent. In its long summer, it is home to dozens of research stations hosting thousands of researchers, who generate terabytes of data a day. But the whole continent relies on satellite connectivity which barely qualifies as broadband. Terrestrial subsea cables can now reach up to 300 terabits per second (Tbps), but Antarctica gets less than 30Mbps from its satellite links. Even the International Space Station, in Earth orbit, does better than our most southern continent; at 600 megabits per second - more than 20 times the bandwidth of the US’ McMurdo research station. But, nearly a decade after it was last considered, 2021 has seen a flurry of interest in finally laying a cable to Antarctica that could revolutionize the way scientists conduct their research. Megabit Internet in the Antarctic Both McMurdo and New Zealand’s Scott Base are on Ross Island, which is 20 miles off the Antarctic coast, and the southernmost island in the world, some 4,000 km (2,400 miles) south of Christchurch, New Zealand. Until last year, the Scott Base relied on a 2Mbps satellite connection, but recently upgraded to 10Mbps download / 6Mbps upload from Intelsat, with 300ms latency between Scott Base and New Zealand. The US McMurdo station, on the southern tip of Ross Island, has something closer to 25Mbps to share between up to

Dan Swinhoe News Editor

1,000 people in the austral summer. “I've seen the Antarctic program evolve from relying on point-to-point HF radio that was used to move data from McMurdo and South Pole stations in the 80s and early ‘90s to modern satellite communications,” Patrick Smith tells DCD. Smith is manager of technology development and polar research support at the National Science Foundation – the US government’s science agency: “We're trying to run a science town that supports all the logistical things it has to do; software updates, exchanging database files, moving cargo between us and logistics centers up north, supporting the scientists and our telephone systems, email, and medical video teleconferencing. “We're doing all this incredible amount of stuff with the amount of bandwidth that's typically available for one household in rural America, and we're starting to hit the practical limits of being able to keep growing and expanding.” He says that even a 100Gbps cable would be “essentially infinite bandwidth” for McMurdo and could relieve a lot of the constraints. Can the NSF make a subsea cable? This year the NSF put on a workshop to discuss how a cable would change the lives and research of scientists and staff at stations in Antarctica, and is in the process of a feasibility study ahead of potentially developing its own cable to the continent. After decades of relying on satellites, the workshop produced a comprehensive report detailing how a cable and a huge bandwidth boost could benefit research, logistics & safety, and the personal


lives of the Antarctic mission. “A combination of new US science experiments requires the US to think differently about their data transfer requirements, and redevelopment of NZ and US Antarctic bases has forced both programs to look to the future and address long-standing connectivity issues,” says Dr. John Cottle, chief scientific advisor to Antarctica New Zealand, the government agency responsible for carrying out New Zealand's activities in Antarctica. “As science, associated technology, and data storage methods progress it’s natural that much greater amounts of data will be collected by scientists and they will want to transfer this data to their home institutions for processing. Financially, a cable begins to make more sense as data volumes rise, and I [also] think there is generally a greater demand and expectation for connectivity in all aspects of life.” Smith says the NSF first looked at the feasibility of a subsea cable from Australasia to Ross Island around 10 years ago, driven partly by the then-proposed Pacific Fibre cable that would have gone from Australia to New Zealand and onto the US. However, Pacific Fibre folded in 2012 and Smith says the NSF let the topic ‘go dormant.’ But two recent developments re-ignited NSF’s attention: Chile’s Humboldt cable is due to connect South America with Asia. Meanwhile, Datagrid is building a 50MW hyperscale facility on New Zealand’s South Island near Invercargill. “The needs and the demand to help support the growth of our science program have shifted the direction towards [a cable]. As the need and desire for digital transformation have grown, we’ve bumped into some limits of what you can do with just regular conventional satellites.” A number of researchers DCD spoke to note the current US administration under President Biden is more likely to favor large science projects with a strong climate component than the previous Trump regime.


“The NSF, all of a sudden, has a bit of a fire under it to explore this idea,” says Professor Peter Neff of the University of Minnesota’s Department of Soil, Water, and Climate, who helped put on the workshop. The race to connect Antarctica heats up After years of little activity, suddenly there are a number of potential cable projects looking at connecting Antarctica to both South America as well as Australasia. In May 2021 Silica Networks, a Datco Group company, announced it was to spend $2 million on a feasibility study for linking Ushuaia in Argentina, Puerto Williams near Cape Horn (Chile’s southernmost point), and Antarctica’s King George Island, located 1,000 km to the south and home to nearly a dozen research stations. In November, the Chilean government also signed an agreement to explore the possibility of a subsea cable from Puerto Williams to King George Island. “We are taking the first step to create an Antarctic digital hub,” said the Chilean Undersecretary of Telecommunications, Francisco Moreno. “We aspire that the large volumes of scientific information... circulate from Antarctica to the rest of the world through Chilean telecommunications networks.” On the Australasian side of the continent, there are a number of potential cable projects in the works beyond the NSF’s potential cable. In a February 2021 submission to the Joint Standing Committee on the National Capital and External Territories, the Australian Bureau of Meteorology (BoM) discussed the possibility of a subsea cable connecting

Mawson, Davis and Casey stations on the Antarctic continent, and the Macquarie Island research station, to Tasmania. “Establishing an intercontinental submarine cable to Antarctica may be beneficial to Australian interests, and better ensure safe and secure operations in the territory by diversifying the communication infrastructure used to operate the Bureau’s Antarctic meteorological services and allow for the expansion of services and capabilities across the vast continent,” the report said. According to ZDNet, the Australian Antarctic Division currently uses C-band satellite connections from Speedcast at each of its four stations, which are capable of 9Mbps and have 300ms of latency. Each station also has a backup data link from the Inmarsat Broadband Global Area Network, which provides a 0.65Mbps link with a latency of 700 milliseconds. Remi Galasso is involved in two potential projects. He is founder of Hawaiki Cable, which links Australia, New Zealand, Hawaii, and the US West Coast. His other venture Datagrid has a hyperscale project on South Island with a fiber connection between New Zealand and Australia. That cable could include the potential for expansion to Ross Island. Galasso is also reportedly in talks with Chilean officials about the possibility of landing the Asia-South America Digital Gateway project connecting South America to Asia Pacific on New Zealand’s South Island rather than Auckland to the north, and this could feature a spur to Antarctica. The venture would be separate to Hawaiki. Australia’s Norrlink is a newly established company aiming to connect


Hobart in Tasmania, Macquarie Island, McMurdo & Scott research stations on Ross Island, and the Italian Zucchelli research station at Terra Nova Bay. Joe Harvey, managing director at Norrlink, tells DCD the company aims to develop a cable funded by a consortium made up of the science community with interests in Antarctica. He says the company is currently in discussions with potential customers and is “optimistic” that it will place the order by mid-2022. “Currently we are focused on forming the consortium to assemble funding,” he says. “This is a research-focused cable and the project differs greatly from a standard commercial sub-sea revenue-producing fiber project.” He said there is interest around building the first LEO satellite earth stations and sees “great potential” to work with the likes of OneWeb and Starlink on a landing point on the continent, and that a cable could enable more remote commercial experiments akin to those that take place on the ISS. People, research, and safety A cable to McMurdo and other research bases could be transformative for scientists’ research as well as their quality of life. On the research side, more data is generated by scientists than can be sent out, so information is often compressed and pared down to be sent out in subsamples via satellite. Raw data is then flown out via hard drives. A cable could allow scientists to record and send far more data back to their respective research institutions much faster.
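To put the gap in perspective, here is a back-of-envelope calculation using figures quoted in this article: McMurdo's roughly 25Mbps shared satellite link versus the 100Gbps cable Smith describes as "essentially infinite bandwidth," moving a single terabyte of research data. The assumption that the whole link is available for one transfer is optimistic, especially for a satellite link shared by up to 1,000 people.

```python
# Back-of-envelope transfer times for 1 TB of research data, using the
# bandwidth figures quoted in the article.

def transfer_hours(data_bytes: float, link_bits_per_sec: float) -> float:
    """Hours needed to move data_bytes over a link of the given speed."""
    return (data_bytes * 8) / link_bits_per_sec / 3600

ONE_TB = 1e12  # one terabyte, in bytes

satellite = transfer_hours(ONE_TB, 25e6)   # ~89 hours on a 25Mbps link
cable = transfer_hours(ONE_TB, 100e9)      # ~80 seconds on a 100Gbps cable

print(f"Satellite: {satellite:.1f} h, cable: {cable * 3600:.0f} s")
```

Nearly four days per terabyte over satellite, against about a minute and a half over fiber, which is why raw data currently leaves the continent on hard drives in aircraft.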

“Scarcity is the mode for everything down there, from luxuries like food and being warm all the time, all the way to having good Internet or not,” says UoM’s Neff. “We're definitely used to not doing much with data when we're in Antarctica; we design all of our experiments based on this assumption of the scarcity of resources. “If we were to have full-fiber Internet, we can start going wild with our imagination of all the ways we get improved science. Having better bandwidth would fundamentally change the way that humans work down there.” Likewise, Antarctica NZ’s Cottle says a cable that went to Scott Base could enable year-round monitoring of the Ross Sea and Southern Ocean, and remote control and tracking of air and undersea drones alongside weather balloons; more real-time observations and increased data density for geographic, atmospheric, oceanographic, and meteorological data; and even enable the base to begin to use cloud computing for large data sets. The atmospheric research community in particular is very interested in getting more real-time data and visuals out. The weather influences everything on the Antarctic continent; flights to McMurdo, the South Pole, and other field locations sometimes have to turn back because weather conditions make it dangerous to land. The Antarctic mesoscale prediction system uses data collected from Antarctica and processes it at the National Center for Atmospheric Research to provide a detailed view of the weather systems around McMurdo and the surrounding area. A cable would allow more data to be sent to the US more quickly, allowing for more accurate

"If we get full-fiber Internet, we can start going wild with ways to get improved science. It would fundamentally change the way humans work there" long-term predictions and the ability to send more information back to McMurdo in closer to real-time. “If the bandwidth was there the information could come out of the National Center for Atmospheric Research where the model is run, onto the ice to have a much more comprehensive description of what's happening,” explains Professor David Bromwich, a polar meteorologist at Ohio State University. “Things can change very rapidly down there. [Right now] you've got what's going to happen over the next few hours, but the NSF want to expand the field season away from just the warmer months so that they can do some activities throughout the year. And so then the question becomes what is it like not just over the next few hours, but over the next few days?” McMurdo is a major logistics hub for scientists traveling on to the South Pole, field stations, or other research bases, and having greater bandwidth offers researchers the chance to be more productive while they wait to move on. Adding a cable could improve the quality of life for people stationed at McMurdo immensely. Wi-Fi access is currently restricted to science personnel and only in certain buildings. “It's a small town; there could be anywhere from 500 to 1,000 people at its peak, and they're trying to maintain their

lives back home while they're still deployed in the Antarctic," says NSF's Smith. "Contact with family and friends, maybe even paying bills and the other things, removing these bandwidth constraints just makes it a lot easier."

And while it may sound trivial compared to science research or predicting potentially deadly weather, a cable will also enable greater access to social media. The NSF is always keen to promote science, and unfettered capabilities around video calls and social media can help spread the word of polar science. TikTok videos from two researchers at the South Pole have regularly gone viral over the last year and shown how powerful social media can be.

"Even things like our educational outreach requires modern digital communications," says NSF's Smith. "A lot of our grantees try to do educational outreach sessions using social media, which we have to limit. Taking that constraint off, they could do things like live sessions with classrooms."

Neff notes that the human aspect of life in Antarctica can be lost in the research endeavor, and more bandwidth would make it easier to share the whole experience. "The product of our research projects is usually a paper in the Journal of Glaciology, but that doesn't do justice to the massive human effort of those people supporting

Issue 43 • December 2021 / January 2022 55

you. None of that is captured in our outputs," he says. "It would add a lot of additional depth to communicate better what doing science means and the genuine teamwork it involves; you have to really care about everybody on your project, whether or not they have a PhD doesn't matter when you're in Antarctica."

Subsea cables in sub-zero temperatures

Smith says the NSF has commissioned a desktop study, with assistance from the Defense Information Systems Agency (DISA) – which he says is the governmental subject matter expert in this area – to explore the feasibility, costs, and timelines of a potential cable project and address what he describes as "Antarctic uniques."

As well as ensuring a cable would be routed and protected to avoid icebergs gouging the sea floor, icebreakers would almost certainly be needed, and there may be a very limited window in which to lay a cable. The Southern Ocean is also notably difficult to navigate.

"No one's laid a cable in the Antarctic before, I would not be surprised if there are some challenges that we'll have to come to terms with and figure out," Smith says. "What's the iceberg scour risk; we need to make sure we understand what the bathymetry is like; what does it mean to try to lay cable when you only have limited ice-free cover; what are the sea-states; and can you get cable ships that are rated to operate in polar regions?"

Nobody has laid a cable in the Antarctic, but there are some parallels with cables laid in the Arctic. Cables connecting Greenland to Canada and Iceland were laid in 2010, while cables linking the island of Svalbard to Norway and Finland were laid in 2002. Alaskan communications firm Quintillion partnered with Ciena for its 2017 Alaskan subsea cable – which it plans to extend to Europe – and Russia is planning to lay the Polar Express cable along its northern Arctic coast.

Brian Lavallée, senior director at Ciena, says laying cables in polar regions is challenging but possible. "Ice-related challenges greatly


influence cable installation, protection, and maintenance," he says. "Modified or purpose-built ships capable of navigating icy Arctic/Antarctic waterways are required, which may be accompanied by purpose-built ice-breaker ships, because laying a submarine cable follows a predetermined surveyed path, parts of which may be frozen.

"Changes to the cable itself will likely be minor to unnecessary in most cases, as there are existing cables already in cold waters. Installation and burial methods, as well as armoring, may be customized to where the submarine cable is physically installed."

Smith notes, however, that the ocean area the cable would pass through has been studied extensively through international science programs, not only of the US but also the likes of New Zealand and South Korea.

"There's a wealth of scientific information and a body of knowledge from a whole range of disciplines from people who study the ice sheets; icebergs; the ocean bottom and iceberg scour; the benthic [sea bottom] community of organic life. I believe that there is going to be a large body of scientific knowledge that the desktop study can tap into."

SMART cables become scientific instruments

One unexpected benefit of any cable to

Antarctica flagged at the workshop is that it could be used not only to carry scientific data, but also as a research instrument itself.

"The instrumentation of the cable opens up some exciting possibilities for oceanographic research in an area of the world that really doesn't have that much in the way of sustained monitoring," says Smith. "A lot of connections were drawn between the instrumentation on a cable and how that could benefit climate change research, and long-term monitoring that's essential for climate change monitoring."

Scientific Monitoring And Reliable

Telecommunications (SMART) cable systems insert sensors into subsea cable repeaters, providing continuous information on ocean temperature and water pressure at regular distance intervals. Potential projects are in discussion for deployment off the coasts of Sicily, Portugal, Vanuatu, and New Zealand.

"It [would] provide a new window into the deep ocean," says Bruce Howe, a research professor at the University of Hawaii's Department of Ocean and Resources Engineering, School of Ocean and Earth Science and Technology (SOEST). "There are so few measurements,

and whatever measurements we have are typically very intermittent."

Dedicated science cables are rare, numbering in the dozens compared to the 400 or so commercial subsea cables. Most oceanographic research data is collected via ships and buoys, which are expensive and often only provide isolated and intermittent data. A SMART cable could provide huge amounts of new information.

"By adding 2,000 repeaters with deep temperature measurements, that would be 2,000 measurements we don't have [currently]," says Howe. "We would be providing a time and

Connecting the South Pole

The Amundsen–Scott South Pole Station lies some 850 miles south of McMurdo Station. Despite being the largest data generator on the continent, it too suffers from limited connectivity, and has little prospect of being directly connected to a fiber cable to McMurdo or the rest of the world any time soon.

South Pole Station's biggest data generators are the IceCube Neutrino Observatory and the South Pole Telescope, which between them generate terabytes of data a day. The telescope looks at the early universe in the millimeter band; it was one of the facilities that helped capture the first image of a black hole in 2019. IceCube is dedicated to studying the origin of the highest-energy cosmic rays through neutrino reactions in the ice; its thousands of sensors are located more than 1km under the Antarctic ice and distributed over a cubic kilometer.

"If you're trying to understand the early universe you are looking in the millimeter bands, microwave frequencies, a few 100 gigahertz; they don't travel very far in the atmosphere because they're absorbed by oxygen and by water," says Professor Nathan Whitehorn of Michigan State University. "If you're trying to see what's coming from space, you need to be someplace that has very little oxygen and very little water, and one of the best sites is at the South Pole."

While McMurdo station has continuous but limited connectivity, South Pole Station is even more restricted, both in terms of coverage and capacity. Today the South Pole relies on communications from three satellites – NASA's TDRSS F6, Airbus' Skynet 4C, and the US DoD's DSCS III B7 – providing 4 hours, 6 hours, and 3.5 hours of coverage respectively (though these windows overlap and shift constantly throughout the year). Iridium provides more continuous coverage

but can't be used for anything beyond voice calls or small (<100kb) emails that cost dollars per message.

For more than 20 years until 2016, one of the main satellites the South Pole relied on was the GOES-3 weather satellite. Launched in 1978 and originally built for the National Oceanic and Atmospheric Administration as part of the Geostationary Operational Environmental Satellite system, GOES-3 became a communications satellite from 1989 after its imaging components failed, and was used by the NSF for links to the South Pole from 1995, when its drifting orbit brought it in range of the station there. Providing around 1.5Mbps up and down, it was decommissioned in 2016 after 38 years in operation.

The TDRS satellite supports a 5Mbps S-band link in both directions for telephone calls, web browsing, large emails, file transfers, video teleconferencing, etc., and a 300Mbps Ku-band link reserved exclusively for the transmission of science research data and a few other large station operation files; it is not available for the general station population. The Skynet and DSCS satellites both operate in the X-band, the former offering 1.544Mbps and the latter a 30Mbps southbound connection and 10Mbps northbound connection.

"For half the year, the only time that you have access to the rest of the world is in the middle of the night, which is challenging as a working environment," says Whitehorn. "If you're trying to connect from McMurdo to the South Pole, you go up to geosynchronous orbit, down from geosynchronous orbit, through some networks in the United States, back up and then back down, and then the round trip does that several more hops. So you can have ping times between McMurdo and the South Pole in excess of 10 seconds.

"If you're trying to just get the manual for

a piece of equipment that you're trying to use, often that involves waiting for 12 hours, unless you thought of it ahead of time and have it printed out. I've ended up using handheld satellite phones over Iridium to call people and ask them to Google things on a fairly regular basis."

As well as offering limited bandwidth, all three satellites are long in the tooth; DSCS-3 B7 is the youngest of the three machines but was launched in 1995; TDRSS F6 was launched in 1993, and Skynet 4C in 1990. Most satellites have an expected lifespan of up to 18 years, and many of the satellites' sister machines have already been retired. DSCS-3 B7 and TDRSS F6 were launched with a 10-year design life, and Skynet 4C just seven years.

The South Pole Telescope generates around 1.5TB of data a day and the IceCube telescope around 1TB. The South Pole Telescope maintains a computing cluster of around 100 cores, while the IceCube project has something closer to a 200-core cluster to do some limited pre-processing and systems management, but the team is only able to transmit around 100GB a day off-site via satellite. The rest is flown out once per year as raw data on hard drives.

"We have developed custom software to manage these various data transfer streams as well as to check that all data are received in the northern hemisphere, resending files if necessary," says Dr. John Kelley, IceCube manager of detector operations at the Wisconsin IceCube Particle Astrophysics Center (WIPAC) at the University of Wisconsin–Madison. "Overall this process works rather well, but there are many links in the chain where transfer hiccups are possible, and latencies of a day or two are typical."

"You really have to just wait a full year before you have access to the data," notes Whitehorn. "And you have to hope that it's good and there aren't problems that nobody knew about because they couldn't see it."
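Whitehorn's multi-hop path gives a feel for where those ping times come from. A rough sketch of the propagation delay alone (the hop counts are illustrative assumptions; in practice queuing and store-and-forward on saturated links dominate, not distance):

```python
C_KM_S = 299_792.458   # speed of light in vacuum, km/s
GEO_ALT_KM = 35_786    # geostationary orbit altitude above the equator, km

def one_way_delay_s(bounces: int) -> float:
    """One-way propagation delay for `bounces` ground-to-GEO-to-ground hops,
    ignoring slant range, terrestrial legs, and all queuing time."""
    return bounces * 2 * GEO_ALT_KM / C_KM_S

single = one_way_delay_s(1)          # ~0.24 s for one satellite bounce
round_trip = 2 * one_way_delay_s(3)  # three bounces each way: still only ~1.4 s
# Ten-second pings therefore imply most of the wait is buffering on
# congested links, not the physics of the path.
```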


"The whole ice sheet moves, and at different rates; there's a mountain range you have to get over; there's glacial ice draining through that pass..." space-dependent measurement of sealevel rise as waters melting from different locations and flowing around the globe try to equilibrate.” By measuring water pressure, for example, scientists can track sea-level rise. More water in the ocean increases the pressure on the bottom. A cable from New Zealand to Antarctica would serve more directly to measure the formation of Arctic bottom water. “The coldest water in the ocean is formed in Antarctica, and sinks down and spreads out through the global ocean,” says Howe. “One of the formation areas is the Ross Sea where that cable would go, so you would have direct measurements of temperature versus time going out over 5,000 kilometers. That would be a direct, great benefit.” At an industry average of around $30,000 per kilometer, a 3,500km or so cable from NZ to Ross Island could cost the NSF somewhere in the region of $100 million, before adding the cost of SMART repeaters, the likely extra cost of icebreaker ships, etc. Howe says he and the ITU task force calculate SMART cables increase the cost of a cable by around 10 percent. But the

scientific benefit could help the NSF justify the cost of the whole project. The NSF's annual budget is around $8 billion, with around $500 million of that dedicated to the polar mission. The Innovation and Competition Act of 2021 is yet to pass, but could potentially increase the NSF's budget by tens of billions of dollars over Biden's term. In the meantime, the NSF would likely either need to reshuffle funds or ask Congress for the money in addition to its usual allocation. It's also possible that countries with neighboring research stations may want to contribute to the cost.

"Antarctica is changing," says Neff. "It's a really big place and it's hard to observe all of it, especially from on the ground. Any way that we can get more observations in space and in time, by having things plugged into the fiber system or having more bandwidth at our main research station, McMurdo Station, is beneficial."

The 'last mile' to the South Pole

Connecting McMurdo seems possible in the near term, but connecting the numerous field bases and temporary operations further inland could remain a challenge, and satellites will likely remain a


key part of the connectivity landscape.

"There's different ranges in terms of 'last mile'; the zero to 100-150 kilometer radius around McMurdo is within reach of potential modern wireless solutions," says Smith, noting that the base could serve as a 'terabit hotspot' for last-mile connections out into the near field.

"When you get out into the deeper field, it is harder. We do have isolated field camps that are scattered around West Antarctica, depending on where the science drives them. At this point in time, I'm kind of looking at satellite solutions to help them," says Smith.

The South Pole presents an even harder challenge. Located some 1,360km (850 miles) inland and occupied by around 150 people in the summer, the Amundsen–Scott South Pole Station houses the IceCube Neutrino Observatory and the South Pole Telescope, two large data-generating projects.

"The South Pole Station has our biggest demand for data with the burgeoning astronomy and astrophysics programs there," says Smith. "Right now, we're supporting them by satellite, and there are discussions about various satellite solutions to meet future needs."

Smith says when the NSF explored a potential cable to McMurdo a decade ago, it also looked at the potential to lay a cable across the Antarctic continent between the South Pole and McMurdo - a swathe of land covered in deep ice. "That was a pretty tough challenge; the whole ice sheet moves, and at different rates; there's a mountain range you have to get over; there's glacial ice draining through that pass you would go through; and then even at sea level on the Ross Ice Shelf, there's a lot of motion and twisting action."

He notes that satellite is likely to remain key for the South Pole for the foreseeable future. It will also remain important to McMurdo for resilience and backup. Even if a cable does arrive, there will always be a possibility of failure in such an inhospitable place.
"I think what we do in Antarctica is incredible, and full connectivity down there would bring us into the next era of Antarctic science," concludes Neff. "We went from the era of exploration up to the 1920s and 1930s, hopped over into this era of science in Antarctica in the 1950s, and we've been operating basically in that same way since.

"It's time to try to shift things and have the full power of 21st-century digital technology down there. It's only going to become more important as Antarctica continues to change in a warming world."

Drive decarbonisation via the unique EnergyAware UPS solution

Contribute to the broader energy transition and support a new renewable grid system

Generate more value from battery storage assets: 1 to 3 year payback

EnergyAware technology: Leading the way to a greener energy future Energy is mostly taken for granted, a commodity that can be bought and utilised. Chief Financial Officers see energy as an unavoidable, undesirable cost. But the status quo is changing. The opportunity to transform from energy consumer to energy pro-sumer and a grid contributor has never been greater. This new future will see data centres make valuable environmental and financial gains with no loss of resilience, control or productivity. And it could help all of us move to a greener energy grid, adding a valuable new dimension to an organisation’s Corporate Social Responsibility activities. To learn more about Eaton’s EnergyAware UPS visit Eaton.com/upsaar


The DPU dilemma: life beyond SmartNICs There’s a major shift happening in server hardware, and it’s emerged from a surprising direction: the humble network card


"We are near the start of the next major architectural shift in IT infrastructure," says Paul Turner, vice president of product management for vSphere at VMware, in a blog post. He's talking about new servers which are built with maximum programmability in a single cost-effective form factor. And what made this revolution possible is a simple thing: making network cards smarter.

The process began with the smart network interface card - or SmartNIC - and has led to a specialized chip: the DPU, or data processing unit - an ambiguously named device with a wide range of applications.

"As these DPUs become more common we can expect to see functions like encryption/decryption, firewalling, packet inspection, routing, storage networking and more being handled by the DPU," predicts Turner.

The birth of SmartNICs

Specialized chips exist because x86-family processors are great at general-purpose tasks, but for specific jobs they can be much slower than a purpose-built system. That's why graphics processing units (GPUs) have boomed, first in games consoles, then in AI systems.

"The GPU was really designed to

be the best at doing the math to draw triangles," explains Eric Hayes, CEO of Fungible, one of the leading exponents of the new network chips. "Jensen Huang at Nvidia was brilliant enough to apply that technology to machine learning, and realize that architecture is very well suited to that type of workload."

Like GPUs, SmartNICs began with a small job: offloading some network functions from the CPU, so network traffic could flow faster. And, like GPUs, they've eventually found themselves with a wide portfolio of uses. But the SmartNIC is not a uniform, one-size-fits-all category.

They started to appear as networks got faster and had to carry more users' traffic, explains Baron Fung, an analyst at Dell'Oro Group. "10Gbps is now more of a legacy technology," Fung explained, in a DCD webinar. "Over the last few years we've seen general cloud providers shift towards 25 Gig; many of them are today undergoing the transition to 400 Gig."

At the same time, "cloud providers need

Peter Judge, Global Editor

to consolidate workloads from thousands and thousands of end users. SmartNICs became one of the solutions to manage all that data traffic."

Servers can get by with standard or "foundational" NICs up to around 200Gbps, says Fung. "Today, most of the servers in the market have standard NICs." Above that, network vendors have created "performance" NICs, using specialized ASICs to offload network functions. But SmartNICs are different.

"SmartNICs add another layer of performance over performance NICs," says Fung. "Essentially, these devices boil down to being fully programmable devices with their own processor, operating system, integrated memory and network fabric. It's like a server within a server, offering a different range of offload services from the host CPU."

It's a growth area: "SmartNICs are relatively small in numbers now, but in the next few years, we see increasing adoption of these devices." SmartNICs are moving from a specialist


market to more general use: "Today, most of the smart devices are exclusive to cloud hyperscalers like Amazon and Microsoft, who are building their own SmartNICs for their own data centers," says Fung. "But as vendors are releasing more innovative products, and better software development frameworks for end users to optimize their devices, we can see more adoption by the rest of the market, as well."

SmartNICs will see around three percent annual growth over the next few years, but will remain a small part of the overall market, because they are pricey: "Today, they are at a three to five times premium over a standard NIC. And that high cost needs to be justified."

In a general network application, SmartNICs can justify their cost by making networks more efficient. "They also prolong the life of infrastructure, because these smart devices can be optimized through software. It's really a balance, whether or not a higher price point for SmartNICs is justifiable."

But there is potential for confusion, as different vendors pitch them with different names and different functions. Alongside SmartNICs and DPUs, Intel has pitched in with the broadly similar infrastructure processing unit (IPU). "There's many different acronyms from different vendors, and we've seen vendors trying to differentiate with features that are unique to the target applications they're addressing," says Fung.

Enter Fungible

One of those vendors is Fungible. The company is the brainchild of Pradeep Sindhu, a formidable network re-builder and former Xerox PARC scientist, who founded Juniper Networks in 1996. Juniper was based on the idea of using special-purpose silicon for network routers, instead of software running on general-purpose network switches. It rapidly took market share from Cisco.

In 2015, Sindhu founded Fungible, to make special-purpose devices again - only this time making network accelerators that he dubbed "data processing units," or DPUs.
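Fung's point about the three-to-five-times price premium comes down to simple arithmetic: does the host-CPU capacity a SmartNIC frees up cost more than the card's markup? A toy model of that trade-off (every price and core count below is a made-up assumption, purely for illustration):

```python
def offload_margin(nic_price: float, smartnic_multiple: float,
                   cpu_cost: float, cores: int, cores_freed: int) -> float:
    """Value of the freed host-CPU cores minus the SmartNIC price premium.
    A positive result means the card pays for itself on hardware cost alone."""
    premium = nic_price * (smartnic_multiple - 1)
    freed_value = cpu_cost * cores_freed / cores
    return freed_value - premium

# A $400 NIC at a 4x premium, freeing 8 cores of a $10,000, 64-core server:
margin = offload_margin(400, 4, 10_000, 64, 8)  # 1,250 - 1,200 = $50 in favor
```

Freeing only half as many cores would leave the card deep in the red, which is why Fung frames the premium as something that "needs to be justified" workload by workload.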
He’s now CTO, and the CEO role has been picked up by long-time silicon executive Eric Hayes. Hayes says the Fungible vision is based on the need to move more data from place to place: “There's data everywhere, and everybody's collecting data and storing data. And the question really comes down to how do you process all that data?” Equinix architect Kaladhar Voruganti gives a concrete example: “An aeroplane generates about 4.5 terabytes of data per day per plane. And if you're trying to create models or digital twins, you can imagine

the amount of data one has to move," says Voruganti, who serves in the office of the CTO at Equinix.

CPUs and GPUs aren't designed to help with the tasks of moving and handling data, says Hayes: "When you start running those types of workloads on general-purpose CPUs or GPUs, you end up being very inefficient, getting the equivalent of an instruction per clock. You're burning a lot of cores, and you're not getting a lot of work done for the amount of power that you're burning."

Hayes reckons there's a clear distinction between SmartNICs and DPUs, which go beyond rigid network tasks: "DPUs were designed for data processing. They're designed to do the processing of data that x86 and GPUs can't do efficiently."


He says the total cost of ownership benefit is clear: "It really comes down to what is the incremental cost of adding the DPU to do those workloads, versus the amount of general-purpose processing you'd have to burn otherwise."

According to Hayes, the early generations of SmartNICs are "just different combinations of Arm or x86 CPUs, with FPGAs and hardwired, configurable pipelines. They have a limited amount of performance trade-off for flexibility."

By contrast, Fungible's DPU has "a custom-designed CPU that allows a custom instruction set with tightly coupled hardware acceleration. So the architecture enables flexibility and performance at the same time."


The Fungible chip has a 64-bit MIPS RISC processor with tightly coupled hardware accelerators: "Tightly coupled hardware accelerators in a data path CPU: this is the definition of a DPU." The DPU can hold "a very, very efficient implementation of a TCP stack, with the highest level of instructions per clock available, relative to a general purpose CPU."

What do DPUs do?

DPUs make networked processing go faster, but Fungible is looking at three specific applications which shake up other parts of the IT stack.

The first one is the most obvious: speeding up networks. Networks are increasingly implemented in software, thanks to the software-defined networking (SDN) movement. "This goes all the way back to the days of Nicira [an SDN pioneer bought by VMware]," says Hayes.

SDN networks make the system more flexible by handling their functions in software. But when that software runs on general-purpose processors, it is, says Hayes, "extremely inefficient." SmartNICs take some steps toward improving SDN functionality, Hayes says, but "not at the level of performance of a DPU."

Beyond simple SDN, SmartNICs will be essential in more intelligent network ecosystems, such as the OpenRAN (open radio access network) systems emerging to deliver 5G.

Rewriting storage

The next application is much more ambitious. DPUs can potentially rebuild storage for the data-centric age, says Hayes, by running memory access protocols over TCP/IP and offloading that work, thus creating "inline computational storage."

NVMe, or non-volatile memory express, is an interface designed to access flash memory, usually attached by the PCI Express bus. Running NVMe over TCP/IP, and putting that whole stack on a DPU, offloads the whole memory-access job from the CPU, and means that flash memory no longer has to be directly connected to the CPU.
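From the host's point of view, NVMe over TCP looks something like this - a minimal sketch using the standard Linux nvme-cli tool, where the address, port, and NVMe Qualified Name are placeholders rather than values from the article:

```shell
# Ask a remote storage target what NVMe subsystems it exports over TCP
nvme discover --transport=tcp --traddr=192.0.2.10 --trsvcid=4420

# Attach one subsystem; its namespaces then appear as local block devices
nvme connect --transport=tcp --traddr=192.0.2.10 --trsvcid=4420 \
    --nqn=nqn.2022-01.example:pooled-flash

# The remote flash now shows up alongside local drives (e.g. /dev/nvme1n1)
nvme list
```

On a DPU-equipped server, the idea is that this whole TCP and NVMe stack runs on the card itself, so the host CPU simply sees what looks like a local storage device.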

"The point of doing NVMe over TCP is to be able to take all of your flash storage out of your server," says Hayes. "You can define a very simple server with a general purpose x86 for general purpose processing, then drop in a DPU to do all the rest of the storage work for you."

As far as the CPU is concerned, "the DPU looks like a storage device, it acts like a storage device, and offloads all of the drivers that typically have to run on the general purpose processor. This is a tremendous amount of work that the x86 or Arm would have to do - and it gets offloaded to the DPU, freeing up all of those cycles to do the purpose of why you want the server in the first place."

Flash devices accessed over TCP/IP can go from being local disks to become a centralized pooled storage device, says Hayes. "That becomes very efficient. It's inline computational storage, and that means we can actually process the data coming


into storage or going back out. In addition to that, we can process it while it's at rest. You don't need to move that data around; you can process it locally with the DPU."

Speeding GPUs

In a further application, DPUs meet the other great offload workhorse, the GPU, and help to harness them better - because, after all, there's a lot of communication between CPUs and GPUs.

"In most cases today, you have a basic x86 processor that's there to babysit a lot of GPUs," says Hayes. "It becomes a bottleneck as the data has to get in and out from the GPUs, across the PCI interface, and into the general purpose CPU memory."

Handing that communication task over to a DPU lets you "disaggregate those GPUs," says Hayes, making them into separate modules which can be dealt with at arm's length. "It can reduce the reliance on the GPU-PCI interface, and gives you the ability to mix and match whatever number of GPUs you want, and even thin-slice them across multiple CPUs." This is much more efficient, and more affordable in a multi-user environment, than dedicating sets of GPUs to specific x86 processors, he says.

A final use case for DPUs is security. They can be given the ability to speed up encryption and decryption, says Hayes, and network providers welcome this. "We want to ensure that the fabric that we have is secure," says Voruganti.

Easier bare metal for service providers and enterprises

Equinix is keen to use DPUs, and it has a pretty solid application for them: Metal, the bare-metal compute-on-demand service it implemented using technology from its recent Packet acquisition. In Metal, Equinix offers its customers access to physical hardware in its facilities, but it wants to offer them flexibility. With DPUs, it could potentially allow the same hardware to perform radically different tasks, without physical rewiring.

"What I like about Fungible's solution is the ability to use the DPU in different form factors in different solutions," says Voruganti.
"I think in a software-defined composable model, there will be new software to configure hardware, for instance as AI servers, or storage controller heads, or other devices.

"Instead of configuring different servers with different cards and having many different SKUs of servers, I think it will make our life a lot easier if we can use software to basically compose the servers based on the user requirements."

“In most cases today, you have a basic x86 processor that’s there to babysit a lot of GPUs. It becomes a bottleneck as the data has to go from GPUs, across the PCI interface, and to the CPU"

That may sound like a fairly specialized application, but there are many enterprises with a similar need to that of bare-metal service providers like Equinix. There’s a big movement right now under the banner of “cloud repatriation," where disillusioned early cloud customers have found they have little control of their costs when they put everything into the cloud. So they are moving resources back into colos or their own data center spaces.

But they have a problem, says Hayes. “You’ve moved away from the uncontrolled costs of all-in cloud, but you still want it to look like what you’ve been used to in the cloud.” These new enterprise implementations are “hybrid," but they want flexibility. “A lot of these who started in the cloud haven't necessarily got the networking infrastructure and IT talent of a company that started with a private network,” says Hayes. DPU-based systems, he says, “make it easy for them to build, operate, and deploy these types of networks.”

Standards needed

But it’s still early days, and Voruganti would like the vendors to sort out one or two things: “We're still in the initial stages of this, so the public cloud vendors have different flavors of quote-unquote SmartNICs,” he says. “One of the things that operators find challenging is we would like some standardization of the industry, so that there is some ability for the operator to switch between vendors for supply chain reasons, and to have a multi-vendor strategy.”

Right now, however, with DPU and SmartNIC vendors offering different architectures, “it is an apples-to-oranges comparison among SmartNIC vendors.” With some definitions in place, the industry could have an ecosystem, and DPUs could even become a more-or-less standard item.

Power hungry DPUs?
He’s got another beef: “We’re also concerned about power consumption. While vendors like Fungible work to keep within a power envelope, we believe that the overall hardware design has to be

64 DCD Magazine • datacenterdynamics.com

much more seamlessly integrated with the data center design.” Racks loaded with SmartNICs are “increasing the power envelope,” he says. “We might historically have 7.5kW per rack, in some cases up to 15kW. But we're finding with the new compute and storage applications the demand for power is now going between 30 to 40kW per rack.”

It’s no good just adding another power-hungry chip type into a data center designed to keep a previous generation of hardware cool: “I think the cooling strategies that are being used by these hardware vendors have to be more seamlessly integrated to get a better cooling solution.” Equinix is aiming to bring special processing units under some control: “We’re looking at the Open19 standards, and we’re starting to engage with different vendors and industry to see whether we can standardize so that it's easy to come up with cooling solutions.”

Standards - or performance?

Hayes takes those points, but he isn’t that keen on commoditizing his special product, and says you’ll need specialist hardware to avoid that overheating: “It's all about software. In our view, long term, the winner in this market will be the one that can build up all those services in the most efficient infrastructure possible. The more efficient your infrastructure is, the lower the power, the more users you can get per CPU and per bit of flash memory, so the more margin dollars you're gonna make.”

Fung, the analyst, can see the difficulties of standardization: “It would be nice if there can be multi-vendor solutions. But I don't really see that happening, as each vendor has its own kind of solution that's different.” But he believes a more standardized ecosystem will have to emerge, if DPUs are to reach more customers: “I’m forecasting that about a third of the DPU market will be in smaller providers and private data centers.
There must be software development kits that enable these smaller companies to bring products to market, because they don’t have thousands of engineers like AWS or Microsoft.”
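Voruganti's rack-power worry is easy to quantify with back-of-envelope arithmetic. The wattages below are illustrative assumptions, not vendor figures, but they show how per-server accelerator cards push a rack designed around 15kW toward the 30-40kW range he describes:

```python
# Back-of-envelope rack power budget. All wattages are illustrative
# assumptions for a dense rack, not quoted vendor specifications.
SERVERS = 40
BASE_W = 350   # assumed draw of a server without accelerators
DPU_W = 75     # assumed draw of one DPU/SmartNIC (two per server below)
GPU_W = 300    # assumed draw of one GPU

legacy_kw = SERVERS * BASE_W / 1000
modern_kw = SERVERS * (BASE_W + 2 * DPU_W + GPU_W) / 1000
print(legacy_kw, modern_kw)  # 14.0 32.0 - from ~15kW racks into the 30-40kW range
```

The cooling plant, not the silicon, becomes the constraint: more than doubling rack density with the same airflow design is exactly the mismatch Voruganti wants the Open19 work to address.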




Taking the nuclear option

Data centers need a steady source of power, with no greenhouse emissions. Could nuclear power be the answer?


Peter Judge, Global Editor

Data centers need to have a steady supply of electricity that comes from a sustainable source, which doesn’t pump CO2 or other greenhouse gases into the

atmosphere. A small number of organizations are starting to think that nuclear power could fit the bill.

Nuclear power has an image problem. It’s tinged with its military origins, there’s a very vocal campaign against it, and nuclear projects all seem to be too costly, too late, or - in cases like Chernobyl - too dangerous. But countries like France rely on nuclear electricity, and are lobbying to have it classified as a clean technology, because it delivers steady baseload electricity, without making greenhouse gas emissions.

Environmentalist George Monbiot has become a supporter of nuclear power, arguing that its health risks are “tiny by comparison” with those of coal. The Fukushima accident in Japan was an unprecedented nuclear disaster, but it caused no noticeable increase in cancer deaths, even amongst workers clearing the site. Meanwhile, many are killed by pollution from coal-fired power stations - around 250,000 in China each year, for instance - and that’s before we consider the greenhouse effect.

“The nuclear industry takes accountability for its waste,” says Alastair Evans, director of corporate affairs at aircraft engine maker Rolls-Royce, a company which aims to take a lead in small nuclear reactors. “The fossil fuel industry doesn’t do that. If the fossil fuel industry had managed their waste in as responsible a way as the nuclear industry, then we wouldn't be having COP26 and climate conferences to try to solve the problem we're in now.”

"If the fossil fuel industry had managed their waste in as responsible a way as the nuclear industry, then we wouldn't be having COP26"

Nuclear but better

For all its green credentials, today’s nuclear industry is all too often bad business, with projects that take far too long, get stuck in planning and licensing, and go over budget. The UK has a new reactor being built by EDF at Hinkley Point, but it is chronically over budget and late. Its output will cost €115 per MWh, double that of renewables. To make matters worse, its delay blights the grid and causes more emissions, because utilities have to burn more gas to make up for its non-appearance.

Rolls-Royce is one of a number of companies worldwide that say we can avoid this with small modular reactors (SMRs), which can be built in factories and delivered where they are wanted. Standard units can be pre-approved, and other approvals are easier because they can be done in parallel, says Evans. “You don’t have to go back to government for a once in a generation decision like Hinkley Point,” he says.

Nuclear you can buy

They’re also easier to finance. At 470MW, Rolls-Royce’s SMRs will be a fraction of the size of Hinkley’s 2.3GW output, but cost less than a tenth the price, at around €2 billion. [Note to the reader: nuclear reactors normally quote their thermal output in MWt, and use MWe to refer to the amount that can be converted and delivered as electrical power. In this article, we will only refer to the electrical output, and quote it in

“A user, say a data center, books a slot for a unit, and it rolls off the production line the same way you'd order an aeroplane engine”

MW for simplicity.]

According to Rolls-Royce's site, the SMR "takes advantage of factory-built modularisation techniques to drastically reduce the amount of on-site construction and can deliver a low-cost nuclear solution that is competitive with renewable alternatives".

“Like wind farms, the cost of a nuclear plant is all up-front,” says Evans. “And they give steady power for 60 years.”

A decommissioned nuclear plant at Trawsfynydd in Wales is being considered for the first of Rolls-Royce’s SMRs, and the company has spoken publicly of its ambition to build 16 in the UK. It’s reckoned that the Trawsfynydd site could support two SMRs, and it already has all the cables and other infrastructure needed.

At the time of writing, there’s no official government policy on this. However, UK Prime Minister Boris Johnson told the Conservative Party conference in September that nuclear power was necessary to decarbonize the UK electricity grid by 2035, as there is a lot of gas-fueled generation to phase out.

Rolls-Royce’s SMR program had some £200 million from the UK government, and a similar amount of funding from industrial partners in a consortium which includes Cavendish Nuclear, a subsidiary of Babcock International, along with Assystem, Atkins, BAM Nuttall, Laing O’Rourke, National Nuclear Laboratory (NNL), Jacobs, The Welding Institute (TWI), and Nuclear AMRC.

Having achieved its matched funding, the Rolls-Royce SMR business will submit a design to the UK Generic Design Assessment (GDA) process, which approves new nuclear installations, and will also start identifying sites for the factories it will need to build the reactor components. Where and when the SMRs themselves will land is not yet clear.


Learning from submarines

Other countries have been making similar steps, with Nuscale in the US getting government support for a small-scale reactor program (see box). In France, President Macron has pledged to fund EDF developing SMRs for international use.

The reactors are simplified versions of the large projects - most are pressurized water reactors (PWRs) - and they draw on experience from sea-going reactors, such as those in nuclear submarines. SMR plants will be the size of a couple of football pitches. They’d be put together inside a temporary warehouse building, which would then be removed, leaving the power station in situ.

These reactors have of necessity been made to be small, portable, and reliable. Nuclear sub reactors range up to around 150MW. French submarines are powered by a 48MW unit which needs no refueling for 30 years. Russia’s SMR program is based heavily on nuclear submarine and icebreaker units - the state energy producer Rosatom has put together a 70MW floating unit, which

can be towed into position offshore where needed.

In the UK, the first few of Rolls-Royce’s units will be paid for by the government. Subsequent ones will be available commercially, as the project will by then have been “derisked,” and it will be easy to get debt and equity to build more.

Ready to invest in

At that stage, Rolls-Royce would float off the SMR business as a standalone company, financed by equity investors, and start taking orders for new nuclear plants. The company hopes to get factories established in the next few years to start making the SMRs. And that’s where data centers and other industrial customers will come in: “A user, say a data center, books a slot for a unit, and it rolls off the production line the same way you'd order an aeroplane engine.”

Why would people invest? This kind of nuclear could be much more viable than the giant projects. Rolls-Royce expects its SMRs to produce electricity at around €50 per MWh. That makes it as cheap

Beyond the SMR

Alongside the SMR, there’s another UK project, involving Atkins and Cavendish, to build an Advanced Modular Reactor (AMR) called U-Battery. This is also aiming to deliver reactors built in factories, with a capacity of around 10MW, later in the 2030s. U-Battery has demonstrated a mock-up of its reactor vessel and heat exchangers, as a milestone towards delivery of an actual system.

AMRs would be a next generation of nuclear plant based on different technology. Some of those under consideration include high temperature gas reactors (HTGR), sodium-cooled fast reactors (SFR), lead-cooled fast reactors (LFR), molten salt reactors (MSR), supercritical water-cooled reactors (SCWR), and gas-cooled fast reactors (GFR).


as offshore wind power - but with the very important benefit that the power is delivered continuously.

If the whole of society is decarbonizing, then our need for electricity will expand. Heating and transport must be switched to electricity, and that means more electricity must be generated. And beyond that, industry’s dependence on energy has been revealed graphically by the effects of the current hike in gas prices. By providing long-term, guaranteed low-carbon power which can support baseload without the variability of solar and wind power, SMRs could help support the decarbonization of industries.

A large amount of baseload electricity could also help foster other energy storage systems. For instance, hydrogen could be used as a portable fuel, but to be green it would have to be made by electrolysis using renewable electricity.

Benefits for data centers

“We are keen to present the off-grid

"We are keen to present the off-grid application of stable secure green power to any and all carbon-intensive industries"

application of stable secure green power to any and all carbon-intensive industries,” says Evans. By paying the costs of an SMR located nearby, a steelworks could switch to green electricity, and escape from increases in the cost of gas - while also reducing its emissions drastically.

Data centers should find this an easy market to participate in. Large operators like Google and Facebook are well used to making power purchase agreements (PPAs) for wind farms or solar plants: Rolls-Royce believes they could take a PPA for a portion of an SMR project’s output. For a data center operator, a PPA for a portion of an SMR might seem a distinct upgrade on a PPA for a wind farm.

While the operator has paid for green electricity to match its consumption, the wind farm delivers that electricity at particular times, not necessarily when the data center needs it. The renewable energy is therefore more of an offset: the power the data center actually uses comes from whatever mix is on the grid at that moment. Data centers that buy a PPA for nuclear energy, on the other hand, would be able to plug straight into the power station. At that point, going nuclear provides the best of both worlds: as well as emitting no greenhouse gases, the data center would also be independent - free of the fear of blackouts on the grid.
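The gap between Hinkley-style output at €115 per MWh and Rolls-Royce's projected €50 per MWh is material at data center scale. A rough calculation, for a hypothetical 30MW facility running around the clock (the facility size is an assumption for illustration; the per-MWh prices are the ones quoted in this article):

```python
# Annual energy cost for a hypothetical always-on 30MW data center.
# Prices per MWh are the figures quoted in the article; the load is assumed.
PRICE_HINKLEY = 115   # EUR per MWh
PRICE_SMR = 50        # EUR per MWh
FACILITY_MW = 30
HOURS_PER_YEAR = 8760

annual_mwh = FACILITY_MW * HOURS_PER_YEAR           # 262,800 MWh per year
cost_hinkley = annual_mwh * PRICE_HINKLEY / 1e6     # millions of EUR per year
cost_smr = annual_mwh * PRICE_SMR / 1e6
print(round(cost_hinkley, 1), round(cost_smr, 1))   # 30.2 13.1 (EUR millions)
```

At these prices, the SMR rate would save such a facility roughly €17 million a year - before counting the value of power that arrives when it is needed rather than when the wind blows.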

The US view

In the US, Nuscale is the front runner for small reactors, working on a water-cooled design. The company’s Nuscale Power Module (NPM) is a 77MW integral pressurized water reactor (PWR), with 12 of these modules combined in a flagship plant design that has a total gross output of 924MW. “This is just short of traditional gigawatt nuclear plants, which provide around 1,000MW,” according to Ryan Dean, senior public affairs specialist at Nuscale. “We also offer smaller power plant solutions in four-module (308MW) and six-module (462MW) sizes.”

The project started at Oregon State University with Department of Energy funding from 2000 to 2003, then went commercial when the funding was cut. After problems with its first major backer, the company is now funded by engineering firm Fluor, and expects to produce a working reactor soon.

Dean says the reactor will be “an ideal solution for decarbonizing energy intensive industries. The level of plant safety and resiliency is appealing to hospitals, government installations, and digital data storage facilities that serve as mission critical infrastructure and need a limited amount of reliable electricity," he says.

Nuscale claims to offer 154MW at 99.95 percent reliability, or 77MW at 99.98 percent reliability - both over the 60-year lifetime of the plant - and it’s designed to work in island mode as part of a microgrid. It’s also the first nuclear plant design capable of so-called “black start,” i.e. it can be switched on without any external grid power, according to Dean. The station is earthquake-proof and EMP-resistant, and will keep itself cool in the event of losing power.

Dean also points to other benefits, including using the plants for desalination, with each module producing around 77 million gallons (290 million liters) of drinking water per day. They are good for load following, helping grid capacity deal with the intermittency of wind, solar, and hydro generation. All nuclear reactors produce a lot of waste heat, and Nuscale is looking to find ways to use the waste process heat from its reactors for industrial applications such as the generation of hydrogen for fuel cells.

“We are progressing towards the commercialization of our first project,” says Dean. “By the end of this decade, a Nuscale small modular reactor (SMR) power plant will become part of the Carbon Free Power Project (CFPP), an


initiative spearheaded by the public power consortium Utah Associated Municipal Power Systems (UAMPS).

“In August 2020, we made history as the first and only small modular reactor to receive design approval from the US Nuclear Regulatory Commission (NRC),” he continues. “Nuscale is actively pursuing projects around the world. We have memorandums of understanding for the deployment of Nuscale SMRs in 11 countries.”

Meanwhile, the US has at least one nuclear-powered data center in the works. Talen Energy operates a 2.7GW nuclear power plant at Susquehanna in Pennsylvania, and in September 2021 it broke ground on a project to build a data center on the site that could grow to 300MW. Unlike the large-scale projects we’ve looked at here, this one won’t be contributing to decarbonizing the grid, however. It’s more of a scheme to soak up surplus energy for profit and detoxify the crypto market. Talen has said the Susquehanna Hyperscale Campus in Berwick, Luzerne County, will be home to cryptocurrency miners, simply using energy to generate speculative assets, rather than any effort to decarbonize existing industry.
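Nuscale's reliability claims, quoted above, translate directly into expected downtime. A quick conversion of the percentages into hours per year:

```python
# Convert an availability percentage into expected downtime hours per year.
HOURS_PER_YEAR = 8760

def downtime_hours(availability_pct):
    """Expected unavailable hours per year at a given availability percentage."""
    return (100 - availability_pct) / 100 * HOURS_PER_YEAR

print(round(downtime_hours(99.95), 2))  # ~4.38 hours/year at the 154MW figure
print(round(downtime_hours(99.98), 2))  # ~1.75 hours/year at the 77MW figure
```

For comparison, those numbers sit between the roughly 1.6 hours of annual downtime implied by a Tier III data center design and the 26 minutes implied by Tier IV - which is why this pitch is aimed at mission-critical facilities.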


Difficulties Staying Up

The dangers of big cloud providers


Dystopian fiction focuses on large technology companies and AI systems taking over the world. The reality is that they are taking it down. As more of the planet is digitized, and those digital services are put onto a small number of cloud providers, it’s a recipe for disaster. And it’s only getting worse.

This December, the world’s largest cloud provider, Amazon Web Services, suffered two major outages, across three different regions. Considering the vast number of companies that rely on AWS, it’s not clear just how many companies and crucial systems were impacted. Recreational services like Netflix, Call of Duty, and Tinder were all brought down, but far more critical networks could also have been affected.

Among those caught up by the outages was Amazon’s own logistics network. The $1.7 trillion company will be able to survive the holiday shopping gridlock, but its employees will ultimately be the ones who have to suffer. Not only were they offered unpaid time off during the outage, but they will be the ones forced to work overtime to catch up on the backlog. Amazon Flex drivers, its contract delivery workers, were also told their pay may come late due to the outage. At such a large scale, with so many caught up in the outage, it’s hard to know just how many people will have suffered because of this.

As more services digitally transform, it will only get worse. Last year, a patient died after a ransomware attack took down a hospital’s IT system, delaying operations. It won’t be long before an AWS outage could take out an entire city’s hospitals.


There is also little hope that IT administrators and CIOs will take the risk seriously. Despite running the cloud itself, even Amazon failed to prepare properly. Its status page was tied to a single region, rendering it useless during the first outage. Its logistics network and a multitude of internal programs did not follow best practices, and collapsed when only one region went down. If Amazon doesn’t use AWS properly, how can we expect others to?

Equally, don’t expect AWS and others to simply eliminate outages. Human error and device failures will always happen. As AWS gets bigger, the complexity of keeping it running is spiraling out of control. During December’s outage, staff joined a 600-person phone call where they speculated about external attacks and other nefarious activities. “The more I read AWS’s analysis of the us-east-1 outage, the less confident I find myself in my understanding of failure modes and blast radii,” Duckbill Group’s Corey Quinn said on Twitter. “It’s not at all clear that AWS is fully aware of them, either.”

Nor should you expect multi-cloud to necessarily help. “The idea of being in multiple clouds for resilience is a red herring,” Quinn said. “They end up going down three times as often because they now have exposure to everyone’s outages, not just AWS’s.”

So, we’re living in a world where more critical services are hosted on fewer cloud providers, which are themselves getting unmanageably complex. This means dangerous outages are inevitable. We can do our best to mitigate and prepare by following best practices, but we should be ready for them to happen nonetheless. And, if they do, we should at least make sure the workers at the bottom are paid.
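Quinn's multi-cloud point has a simple probabilistic core. If each of three providers independently suffers an outage with some probability over a given window, an architecture that breaks when any one provider fails is roughly three times as exposed as a single-cloud deployment, while one that stays up unless all providers fail simultaneously is vastly safer. Most real "multi-cloud" systems, with dependencies spread across providers, are closer to the first case. A sketch, with the outage probability chosen arbitrarily for illustration:

```python
# Illustrative failure arithmetic; p is an arbitrary assumed probability
# that one provider has an outage during some time window.
p = 0.01
n = 3  # number of cloud providers

# System breaks if ANY dependency is down (typical spread-out multi-cloud):
p_any_fails = 1 - (1 - p) ** n   # approx. 3*p for small p

# System breaks only if ALL providers are down (true active redundancy):
p_all_fail = p ** n              # approx. p cubed - dramatically smaller

print(round(p_any_fails, 6))  # ~0.029701, nearly triple the single-cloud risk
```

The difference between the two lines is the difference between scattering dependencies across clouds and genuinely replicating the whole stack on each one - and the second is far harder and more expensive, which is Quinn's red-herring argument in a nutshell.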


