Issue 41 • July 2021 datacenterdynamics.com
High altitude Internet after the fall of Loon
Charles Liang on Bloomberg’s ‘fake news’
Birth of computing
ENIAC and EDSAC still matter, 75 years on
Internet founder eyes connecting space
On the server farm Agriculture feeds off the IT revolution
Contents July 2021
6 News - Hyperscale keeps scaling, Ireland turns on data centers, and more
13 Turbulence in the stratosphere - Has the failure of Loon killed the prospects for high-altitude Internet?
21 The CEO interview - “The Bloomberg story is fake news, from our understanding. Nothing really happened.” Supermicro CEO Charles Liang on 28 years of industry-defining server products... and those spy chip allegations
25 Edge supplement - From CDNs to portable Edge boxes, we track the signs that Edge is here
41 Delay tolerant networking - How zebras helped the pioneers of the Internet extend into space
45 Talking to the stars - Charting the many challenges of communicating with Alpha Centauri
48 Server farms - High tech is changing agriculture
52 Crustacean cultivation - Waste heat has made the first land-based lobster farm possible
56 ENIAC at 75: A pioneer of computing - It’s 75 years since the first general purpose computer sparked change
61 Rebuilding the first real computer - EDSAC launched business computing, but all that was left was memories
70 The world’s most efficient facility - How Boden used holistic cooling to hit a PUE of 1.0148
73 Chia is eating storage - The cryptocurrency which lays waste to hard drives and SSDs
76 The right scale - Mega data centers are a fixture but their locations may change
80 Back page: dumb money - “When these things go wrong, they go seriously bloody wrong”
Historic times, heroic challenges
This issue brings you more than ever before, as we take you back to the dawn of computing, to the stratosphere, and out to the stars. Vint Cerf and Leonard Kleinrock, Internet pioneers, will take you on a voyage to space with the Delay Tolerant Network protocol (p41). We meet two of the first computer systems from 70 years ago - and head for the farm, where data centers are growing your food (p48). And we check on the latest reports on practical Edge developments (p25).
Zoologists tracking zebras may have enabled the Internet to reach into space (p41)

Hidden history
In the post-war era, the US and Britain raced to make the first practical computing system. America produced ENIAC, a brain which could be programmed, but only by completely rewiring it (p56). Just a couple of years later, in the UK, radio engineers in Cambridge built the first truly practical programmable computer - EDSAC (p61). These machines, programmed by women, quietly worked calculations for scientists, helped win Nobel prizes, and launched business computing. Then they were thrown aside. Today, we would have no idea how they worked were it not for dedicated enthusiasts, like those who have rebuilt EDSAC.
Internet from the sky
It's tempting to see Loon as one more example of Google's hubris, hyping an impractical idea of delivering connectivity by balloon. But while Loon deflated, HAPS (high altitude pseudo-satellites) are still colonizing the stratosphere with a motley fleet of drones and balloons, with blimps and satellites for company. These people are determined: one day the sky will be your downlink (p13).
From the Editor
Miles to Mars. The kind of distance the Internet Delay Tolerant Network protocol needs to handle
Servers on the farm
Who puts racks in a big shed with power and cooling systems, and wants to help the environment? As well as data center people, there's another answer: the new breed of indoor and vertical farmers. The similarities between the two are deep, but will they compete or cooperate? We spoke to experts on the new farms to find out (p48). And we found out how data centers enabled the first land-based lobster farm (p52).
Editor Sebastian Moss @SebMoss
News Editor Dan Swinhoe @DanSwinhoe
Head of Partner Content Graeme Burton @graemeburton
SEA Correspondent Paul Mah @PaulMah
Brazil Correspondent Tatiane Aquim @DCDFocuspt
Designer Dot McHugh
Head of Sales Erica Baeta
Conference Director, Global Rebecca Davison
Conference Director, NAM Kisandka Moses
Conference Producer, APAC Chris Davison
Head Office DatacenterDynamics 22 York Buildings, John Adam Street, London, WC2N 6JU
Chia versus the environment
Not all tech projects help the world. Cryptocurrencies turn electricity into a gambling chip. A Seagate expert tried to convince us that Chia is different, even as the world fries thousands of hard drives and SSDs for a profit (p73). We prefer Boden, the world's most efficient facility, where engineers pursued holistic systems that give every user exactly the cooling they need (p70). And finally, veteran CEO Charles Liang looks back on 28 years at Supermicro... and addresses the Bloomberg spying allegations (p21).
Peter Judge DCD Global Editor
Global Editor Peter Judge @Judgecorp
Chief Marketing Officer Dan Loosemore
Meet the team
© 2021 Data Centre Dynamics Limited All rights reserved. No part of this publication may be reproduced or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording or otherwise, or be stored in any retrieval system of any nature, without prior written permission of Data Centre Dynamics Limited. Applications for written permission should be directed to the editorial team at editorial@ datacenterdynamics.com. Any views or opinions expressed do not necessarily represent the views or opinions of Data Centre Dynamics Limited or its affiliates. Disclaimer of liability: Whilst every effort has been made to ensure the quality and accuracy of the information contained in this publication at the time of going to press, Data Centre Dynamics Limited and its affiliates assume no responsibility as to the accuracy or completeness of and, to the extent permitted by law, shall not be liable for any errors or omissions or any loss, damage or expense incurred by reliance on information or any statement contained in this publication. Advertisers are solely responsible for the content of the advertising material which they submit to us and for ensuring that the material complies with applicable laws. Data Centre Dynamics Limited and its affiliates are not responsible for any error, omission or material. Inclusion of any advertisement is not intended to endorse any views expressed, nor products or services offered, nor the organisations sponsoring the advertisement.
NEWS IN BRIEF
Whitespace: The biggest data center news stories of the last three months
Microsoft to acquire AT&T’s Network Cloud; Azure to host AT&T 5G network

The company will acquire AT&T’s Network Cloud division. In return, AT&T will move its 5G network to Azure, beating out Google, IBM, and AWS.
Quantum Loophole buys 2,100-acre property for gigawatt data center campus

Joint venture with TPG Real Estate Partners will build a huge campus in Frederick County, Maryland. Site owner Alcoa says the deal was valued at $100m.
AWS Frankfurt shuts down when safety systems evacuate staff, block access

Three-hour outage caused when an air handling issue accidentally triggered a fire suppression system. This then removed oxygen from the air, so humans could not enter the data hall.
Hyperscale operator building reaches $150 billion in a year

First quarter capex is 31 percent up on last year, taking the number of facilities to 625.

Hyperscale operators spent $38 billion in the first quarter on capital expenditure - up 31 percent on the same quarter last year. The world’s cloud giants spent more than $149 billion in the last four quarters on capex - half of which went to building new data centers. The pack is led by Amazon, Microsoft, Google, and Facebook, according to a report from Synergy Research Group. The researcher reckons the world now has 625 hyperscale facilities - up from 541 in July 2020.

“For hyperscale operators the pandemic proved to be more of a stimulus to growth rather than a barrier. Over the last four quarters we continued to see extremely strong growth in revenue, capex, and data center spending,” said John Dinsdale, a chief analyst at Synergy Research Group.

The $38 billion figure is an increase from $32 billion just over a year ago. The top four spent far more than their rivals, with Amazon and Microsoft growing particularly strongly, and Google dropping off a little. Apple, Alibaba, and Tencent were the next biggest players - with Apple bouncing back and Facebook, Alibaba, and Tencent all increasing spending. IBM, NTT, Oracle, JD.com, Twitter, and Baidu also made strong hyperscale plays.

In exchange for the money they have been spending, the top 20 hyperscale companies have generated massive revenues, which have been increasing faster than their capex outgoings: over $1.7 trillion in aggregate over the last four quarters, which is up 24 percent from the preceding four quarters.

“It is interesting to compare their fortunes with other types of major service provider around the world,” said Dinsdale. “As hyperscale capex levels keep on setting new records, it is in stark contrast with telcos whose capex has essentially been totally flat for five years now, mirroring their inability to grow overall revenues.

“Given the ongoing growth in service revenues for hyperscalers and the ever-increasing need for a larger global data center footprint, we are forecasting continued double-digit growth in hyperscale capex for several years to come.”

bit.ly/TheBigGetBigger
6 DCD Magazine • datacenterdynamics.com
Euronext CEO on post-Brexit UK data center exit: “Big decisions come with big consequences”

Europe’s largest stock exchange Euronext is moving its data center out of London to Italy because it needs a “predictable regulatory environment,” which the UK no longer has.
39 human rights groups call on Google to cancel Saudi Arabian cloud region

“Saudi Arabia has a dismal human rights record, including digital surveillance of dissidents, and is an unsafe country to host the Google Cloud Platform,” Rasha Abdul Rahim, director of Amnesty Tech, said. Google plans to partner with Saudi Aramco.
OVHcloud won’t reveal the cause of its disastrous fire till 2022

OVHcloud has apologized to customers for the disastrous fire which destroyed one of its data centers in March - but says it can’t reveal what caused the fire until 2022. Before then, it plans to go public: the company said it expects to launch its IPO before the end of 2021. The family of founder Octave Klaba will retain control.
Uncertain future for 30 Irish data centers as regulators seek grid stability

The future of more than 30 proposed data centers in Ireland is unclear, following moves by national regulators to reduce pressure on the grid. The Commission for Regulation of Utilities (CRU) has told national grid manager EirGrid and ESB Networks to prioritize applications for data centers in regions where power supplies are not struggling. The Irish Times reports that EirGrid is concerned this would mean it could not approve grid access for 30 data centers. The majority of the facilities are in the Dublin area, which is under significant power supply constraints. EirGrid has agreed to connect 1,800MW of data centers, but has applications for another 2,000MW.
EirGrid spokesperson David Martin told The Irish Times that the CRU had begun a consultation on the issue. “The outcome of this consultation will provide clarity to EirGrid and the data center industry on next steps in terms of the future facilitation of data centers on the electricity system,” he said. “We look forward to processing connection applications for data centers in line with the resulting policy once the outcome of this consultation is determined by CRU.” The impact of data centers on the Irish grid has been a point of contention for years, with the industry on track to use 27 percent of the nation’s power output by 2029, up from 11 percent last year.
In March, EirGrid said it might require new energy-demanding projects such as data centers to be built near to renewable energy sources. “There is growing alarm at the spread of data centers, what they are doing to our hopes of reaching the Paris Agreement targets, as well as our climate goals, and whether an economic policy based on the unlimited growth of data centers is compatible with any chance of tackling a climate catastrophe,” politician Bríd Smith said earlier that month. “This is getting out of control.” Her party, People Before Profit, has also introduced a new bill to amend the Planning and Development Act to prevent new data center developments in the country. The Planning and Development (Climate Emergency Measures Act) 2021 would place an absolute ban on data centers, Liquid Natural Gas plants, and new fossil fuel-related infrastructure. The party says a ban on fracked gas imports is not enough. Smith said that embracing more data centers meant entering “a magical roundabout where the land is festooned with data centers empowered by fields of windmills that litter our lands, and banks of windmills off our shores and around our coast.” She said that “data centers are not essential for the future of our economy or our society. They are not great investments and they are only essential as a component of an economy built and structured on the needs of the corporate sector.” bit.ly/FestoonedWithDataCenters
Dutch province of Flevoland enforces temporary data center moratorium

The Dutch province of Flevoland will not approve any new data center developments for an unknown period of time. The region plans to study the impact of large data centers on the local community and infrastructure, before it agrees to any more facilities being built within its borders. “In order to prevent good spatial planning in Flevoland from being jeopardized by the arrival of (clusters of) data centers at locations that could disrupt the optimal development of Flevoland, no cooperation will be provided [to data center operators] during the period in which no data center strategy has been established,” reads an amendment by GroenLinks (GreenLeft), Party for the Animals, 50Plus, and JA21 (translated). The strategy will be developed after a series of studies that focus on “infrastructure, sustainable energy, water, spatial integration and diversification of the economic ecosystem.” The Dutch Data Center Association disputed claims in other regions that data centers could lead to water shortages: “Drinking water is always ranked higher than water for the industry, in case a water shortage occurs.” bit.ly/DataCentersFallOutOfFlevo
Mining giant BHP signs cloud deals with AWS and Microsoft Azure

Mining giant BHP has signed cloud deals with Amazon Web Services and Microsoft Azure. AWS will provide data analytics and machine learning capabilities, while Azure will host the company’s core global applications portfolio, including 17,500TB of data. The company said that it will move out of regional data centers as it shifts to the cloud. “Digital technology is in everything we do at BHP, from how we connect to our customers and partners every day to how we extract and find resources more safely and sustainably,” BHP CTO Laura Tyler said. “We are leveraging next-generation technologies like cloud, machine learning,
and data analytics to solve complex business problems and unlock value even faster.” The world’s largest mining company by market capitalization, BHP operates in the aluminum, coal, copper, ferro-alloys, iron ore, titanium, nickel, diamond, and silver mining sectors, as well as the oil, gas, and liquefied natural gas markets. A study of carbon dioxide and methane emissions from 1751–2010 found that BHP was one of the world’s top emitters - among the 90 companies that produce two-thirds of global greenhouse gas emissions. BHP, which came in at number 19, has also been linked to local environmental disasters, including the Bento Rodrigues tailings dam collapse, the worst disaster in Brazil’s history.
In its country of residence, Australia, the company has been linked to aggressive lobbying to scrap carbon taxes. “Regulation, litigation, and shareholder actions targeted at the private entities responsible for tobacco-related diseases played a significant role in the history of tobacco control; one could imagine comparable actions aimed at the private entities involved in the production of fossil fuels, particularly insofar as some of the entities included in this analysis have played a role in efforts to impede legislation that might slow the production and sale of carbon fuels,” the emissions study said. Cloud companies like Amazon and Microsoft have all called for more action to fight climate change, and have invested heavily to reduce the emissions of their data centers. But at the same time, they have actively pursued fossil fuel cloud contracts, developing custom AI tools to help make extraction more profitable. Microsoft Azure works with Exxon, Shell, Halliburton, Total, and many others. Amazon also chases similar deals, through its AWS for Oil & Gas division, and serves both BP and Shell. Rival Google last year said that it would no longer develop custom AI/ML solutions to facilitate upstream extraction, but continues to work with Chevron, Total, Schlumberger, and BP. It also entered into a partnership with Saudi Aramco to open data centers in the Middle East. bit.ly/TellMeHowTheCloudIsGreen
Google, AWS win $1.2 billion Israel Nimbus tender for cloud services

US tech giants Google and Amazon Web Services have won a NIS 4 billion ($1.2 billion) tender to provide cloud services to Israeli government agencies. The services will be offered out of data centers built in Israel. Microsoft, Oracle, and IBM also competed for the Nimbus project. The Israel Finance Ministry’s Government Procurement Administration announced in 2019 that it was preparing a tender “for the provision of public-platform-based cloud services to the government ministries and additional governmental units.” Among the agencies set to be supported is the IDF, as well as the Israel Land Authority, which Human Rights Watch has accused of discriminatory policies designed to segregate Palestinians in the occupied West Bank. As violence between Israel and self-governed Gaza flared, more than 250 employees at Google called on the company to drop the contract and follow its human rights principles. But The Times of Israel reports that the terms of the contract include provisions that prevent AWS or Google from halting services to the Government. bit.ly/BoycottFreeContracts
Leidos wins $2.5 billion NASA IT & communications contract

Contract includes on-premises data center and cloud.

NASA has awarded Leidos Inc. a $2.5 billion IT and communications services contract. The agency-wide contract for Advanced Enterprise Global Information Technology Solutions (AEGIS) runs for 10 years and will see Leidos provide WAN, center LAN, telecommunications, cybersecurity support, both on-premises and managed cloud data center resources, online collaboration tools, cable plant, emergency and early warning and notification systems, telephony, and radio systems across NASA’s data center footprint. “Our work with NASA to evolve their IT infrastructure ultimately helps support their mission of returning to the moon, exploring the universe, and continuing to learn more about our own planet,” said Leidos Civil Group President Jim Moos. “We look forward to providing communication, data center, cloud, and cybersecurity services to NASA to further enable mission-critical operations,” Moos added. bit.ly/StickingToEarth
Department of Defense cancels $10bn JEDI cloud contract

Microsoft won the decade-long deal, but now must recompete.

JEDI is dead. The US Department of Defense has officially scrapped the controversial $10bn cloud contract it awarded to Microsoft in 2019. Instead, both Microsoft and Amazon will compete for a new ‘Joint Warfighter Cloud Capability’ contract that has been described as a multi-cloud multi-vendor program. The Joint Enterprise Defense Infrastructure contract was meant to provide the majority of the data center and Edge compute needs of the world’s largest military. But JEDI was dogged with criticism and complaints from the start of the procurement process. Essentially every cloud provider but frontrunner Amazon complained that the single-source contract made for a worse platform - although their criticism was motivated by the fact that most did not meet the stringent requirements set by the DoD to even compete. Those requirements, which appeared perfectly suited for AWS, were themselves the point of much contention. Oracle pointed to several DoD employees involved in JEDI who would go on to work with or for Amazon, or that had an existing business relationship. Multiple investigations failed to find any substantial impropriety, but Oracle is still fighting the case in court - most recently trying to pull in the Supreme Court. It also managed to get a document alleging a vast Amazon conspiracy onto the desk of then-President Trump. At the same time, Trump was publicly criticizing Amazon and founder Jeff Bezos, who also owns The Washington Post. When, after several delays, the DoD awarded Microsoft the contract, Amazon immediately cried foul. The company pointed to Trump’s public comments, as well as reports on private ones, to allege a pattern of interference by the President that may have led to it being dropped from the bidding process. The DoD and Microsoft have long denied the allegation. The DoD Inspector General cleared the military’s decision to award JEDI to Microsoft. But, crucially, the IG admitted that it was unable to fully investigate White House interference “because of the assertion of a ‘presidential communications privilege.’” Amazon took both the DoD and Microsoft to court, with the case still ongoing. As part of the suit, Amazon got the court to hold up JEDI until the case concludes. That case is expected to drag on, delaying the contract by years. That’s too long for the DoD, with the military now looking to start again. bit.ly/AmazonStrikesBack
Biden expands US investment bans on Chinese companies including Inspur

President Joe Biden has signed an executive order banning US entities from investing in 59 Chinese companies. Building upon an existing EO signed by President Trump, the order cites national security concerns, claiming the companies help surveil and suppress individuals around the world. It carries over the existing companies on the list, and adds several more. Among the Chinese businesses US entities are blocked from investing in are China Mobile, Hikvision, Huawei, SMIC, and Inspur. The DoD claimed that server maker Inspur was closely tied to the Chinese military back in June 2020, but did not publicly recommend any action. At the time, we noted that Inspur had so far managed to escape the ire of US officials. The third-largest server manufacturer in the world, and the largest in China, it works closely with US chip designers Nvidia, Intel, and AMD. bit.ly/NoRelationsReset
Apple opens Chinese facility, despite security compromise

Apple has officially opened a new data center in the southwestern province of Guizhou, China. The new facility, jointly built by Apple and GuizhouCloud Big Data Industry Co Ltd in Gui’an New Area, will offer iCloud services to customers on the Chinese mainland. State employees physically manage the servers at the data center, and control access to the site. According to The New York Times, Apple abandoned the encryption technology it used elsewhere after the Chinese government blocked it. The digital keys used to unlock the data are stored in the same facilities, as specifically requested by the Chinese government. Outside of China, Apple stores the keys on hardware security modules developed by Thales. Chinese officials did not allow the device to be used, so a new one was developed by Apple to be used at the data center. As Apple does not control the Chinese data centers, it does not have to turn data over to Chinese law enforcement, which would be illegal. Instead, Chinese authorities can ask GCBD for Apple user data. bit.ly/ThePrivacyCompany
Apple’s long-delayed Iowa data center still in the planning stage

Construction pauses again, with delivery pushed back seven years.

Construction work on Apple’s long-delayed Iowa data center has yet to begin, according to the company. The tech giant said that the facility has entered the design phase, more than four years after it was first announced and a year after it was originally due to come into operation. The company first announced its 400,000 sq ft (37,000 sq m) data center in Waukee, Iowa in 2017, saying at the time the facility was due to be brought online in 2020. It seems at some point the date was changed to August 2022, before the company put in a request with the Iowa Economic Development Authority in 2019 to delay the facility another five years, pushing the completion date to August 2027.
“The design process is underway for Apple’s new data center, which is expected to create over 500 construction and operations jobs in Waukee,” the company said. The news comes as the company announced plans to make new contributions of more than $430 billion and add 20,000 new jobs across the country over the next five years. “At this moment of recovery and rebuilding, Apple is doubling down on our commitment to US innovation and manufacturing with a generational investment reaching communities across all 50 states,” said Tim Cook, Apple’s CEO. “We’re creating jobs in cutting-edge fields — from 5G to silicon engineering to artificial intelligence — investing in the next generation of businesses.” bit.ly/IoWait
Peter’s Apple factoid Despite Steve Jobs’ vow to “destroy Android,” the company is heavily reliant on Google. In fact, Apple is believed to be Google Cloud’s largest customer, and is internally referred to as ‘Bigfoot.’
Apple seeks planning extension for Athenry data center

Apple appears to have restarted its efforts to build a data center in Athenry, Ireland. This June, Apple submitted an application to Galway County Council for an extension to its planning permission on the site, and says it aims to have built a data center there by the end of the extension period. The seemingly abandoned project’s revival is a big surprise. Apple bought the land for around $15 million in 2014. Though its plans to build a data center by 2017 were approved by the local council, appeals kept bringing the case back to the Irish planning board (An Bord Pleanála) and the Commercial Courts.
10 DCD Magazine • datacenterdynamics.com
The company’s plans led to some protests (as well as marches in favor of the project), and environmental campaigners appealed to stop the development. Other challenges, a shortage of judges, and court issues repeatedly delayed the project. Apple subsequently abandoned its plans in 2018 and put the land up for sale in 2019, despite Taoiseach Leo Varadkar promising to “do anything” to get it back on track. Following the lengthy saga, Ireland revamped its planning process. bit.ly/WhatIsDeadMayNeverDie
Blackstone to buy QTS Realty Trust for $10 billion

The deal is expected to close in the second half of 2021.

Blackstone Group will buy QTS Realty Trust for $10 billion and take the data center operator private. The investment company’s Blackstone Infrastructure Partners unit, together with its non-traded real-estate investment trust, known as BREIT, have agreed to pay $78 a share for QTS. The deal is expected to close in the second half of 2021, and includes the assumption of the data center operator’s existing debt. The agreement includes a 40-day “go-shop” period which permits QTS and its representatives to actively solicit alternative acquisition proposals. “We are pleased to enter into this transaction with Blackstone, as it will deliver compelling, immediate, and certain value to stockholders while positioning QTS to continue supporting customers’ expanding data center infrastructure needs,” said Philip Trahanas, lead director of the QTS Board of Directors. “The QTS Board regularly reviews the company’s strategy and market opportunities to maximize stockholder value, and we are confident this transaction achieves that objective.” “We see a significant market opportunity for growth as hyperscale customers and
enterprises continue to leverage our world-class infrastructure to support their digital transformation initiatives,” added Chad Williams, chairman and CEO of QTS. “We are confident this transaction is the right step to achieve our strategic objectives in our next phase of growth.” Founded in 2003 and based in Overland Park, Kansas, QTS is a real estate investment trust that owns more than seven million square feet of data center space in 28 locations across North America and Europe. The company went public in 2012 via an IPO on the NY Stock Exchange. Last year the company opened new facilities in Atlanta and Oregon, and recently announced plans for a further 1 million sq ft facility in Atlanta. “We are delighted to back QTS and its world-class management team as they continue to scale the company to meet the rising demand for data centers,” said Greg Blank, senior managing director, Blackstone Infrastructure Partners. “QTS aligns with one of Blackstone’s highest conviction themes – data proliferation – and the required investment makes it well suited as a long-term holding for our perpetual capital vehicles.” bit.ly/QTSGetsStoned
Switch plans 1.5 million square foot data center at Dell HQ in Texas, called The Rock

Switch plans to build its fifth major US data center campus in Texas at the global headquarters of Dell in Round Rock, Texas. ‘The Rock’ will span 1.5 million square feet (140,000 sq m), with campus site preparation and permitting expected to begin in the summer of 2021. The new facility comes as Switch is in the midst of acquiring Texan data center company Data Foundry for $420 million. Together with The Rock and DF’s ~60MW facilities, Switch will operate 2 million sq ft (185,000 sq m) of space and 185MW of power. “This is another transformative milestone in the growth of our company to further expand our geographic diversity to the central region of the US,” said Switch founder and CEO Rob Roy. The two are also working together to build Edge data centers at FedEx locations across the US. bit.ly/StarringTheRock
Yondr announces $2 billion expansion into Americas

Company plans organic growth across three years.

Hyperscale data center developer Yondr has announced a $2 billion data center expansion plan into North and South America. “The significant investment in data center projects will add to Yondr Group’s existing network of global data centers and underscores the growing momentum to meet cloud and Edge computing demands in the Americas,” the company said in a statement. Specific details were slim, but a spokesperson for Yondr told DCD that the investment will be used over the next three years to grow organically. “Our data center campus size averages 150MW to meet large scale capacities - we are not a colocation player and don’t intend to be one. The markets we are looking at will be a combination of existing data center metros, with a goal of entering new markets as well.” The funds for the expansion plan were sourced from ‘a variety of private investors,’ according to the company. Yondr has delivered more than 450MW of built capacity across Europe since 2011. bit.ly/BuildingOverYondr
Turbulence in the stratosphere
Dan Swinhoe, News Editor
Alphabet’s high altitude balloons were going to connect rural areas and usher in a new era of sub-satellite comms. Has its closure shut the door on a whole industry?
Nearly half the planet still isn’t fully connected to the Internet. Reaching the remainder means installing fiber and cell towers in remote and sparsely populated regions, often in difficult terrain - and that’s an expensive game with little reward.

Satellites are one answer, but it takes specialist equipment to receive their signals, so they aren’t a mass-market solution, especially for mobile operators looking to make money from technologies such as 4G and 5G.

High altitude pseudo-satellites (HAPS) - whether airships, balloons, or fixed-wing drones - offer a way to provide connectivity to rural and unconnected areas without the upfront costs of cell towers or satellites, or the need for specialist receivers.

Loon, an Alphabet subsidiary, began as a ‘moonshot’ project from the skunkworks X Lab, seeking to provide broadband connectivity from the sky using high-altitude balloons. Loon made the first commercial HAPS deployments in Kenya and Mozambique, but was closed early in 2021 after a decade in development. Loon made the technology viable from an engineering point of view, but closed because it couldn’t make the economics work for a self-sustaining business.

While Loon itself might be little more than another abandoned Google experiment, the project’s successes and failures illustrate the ongoing struggles of the HAPS market. For years, high altitude drones, balloons, and airships have been stuck in the R&D phase. Their fleets of autonomous platforms, offering cell tower-like connectivity without the costs, are always just around the corner. Now, while the technology to ensure these platforms can fly and relay signals is getting closer, questions remain about whether they will ever have a viable business model.

Is Loon’s closure another early misstep in a nascent sector, the final nail in the coffin of an industry doomed to fail, or the dark moment just before the dawn of a new day?

Why Loon failed

Project Loon began as a ‘moonshot’ in 2011 before being spun out as a separate business unit under parent company Alphabet in 2018. It proposed using high-altitude balloons in the stratosphere, at an altitude of 18-25 km (11-16 miles), to create an aerial wireless network and deliver the Internet to remote and rural communities through ‘floating cell towers.’

Development was slow but seemed successful; as well as running pilot projects in New Zealand, Sri Lanka, and Brazil, the balloons were able to provide connectivity following natural disasters in Puerto Rico in 2017 and Peru in 2019. Loon achieved a record flight duration of 312 days for a single balloon and raised funding: in 2019 it got $125 million from SoftBank - just one of many investments from the Japanese conglomerate across the HAPS sector.

In July 2020 Loon seemed to hit commercial success, announcing a partnership with Telkom Kenya, with the balloons providing services to customers across a 50,000 sq km (19,300 sq mile) area including the towns of Iten, Eldoret, Baringo, Nakuru, Kakamega, Kisumu, Kisii, Bomet, Kericho, and Narok. A similar partnership was announced with Vodacom for coverage in Mozambique around the same time.

However, while the technology apparently worked as intended, the business case was never properly figured out. In January 2021 Alphabet announced it would be closing
the project after nine years in development, saying that it hadn’t found a way to lower costs enough to build a long-term, sustainable business.

“Developing radical new technology is inherently risky, but that doesn’t make breaking this news any easier. Today, I’m sad to share that Loon will be winding down,” said Alastair Westgarth, chief executive of Loon, at the time. Westgarth has since become CEO of robot delivery startup Starship Technologies.

With the closure of Loon, the company’s operations in Kenya and Mozambique also ended. The telcos didn’t say what the effect would be on their coverage or service. DCD reached out to Alphabet and Telkom Kenya for this piece, but requests for interviews were declined.

“Google's [Loon] was a non-starter, in my opinion, from the off,” says Derek Long, head of telecoms and mobile at Cambridge Consultants. “They had very little control over their balloons; if you want to provide continual coverage you’d need a continual train of balloons going over the area.

“Also, their balloon was not physically dimensioned particularly appropriately for the application; the communications payload was relatively simple. I don't know how much capacity they provided, but if they managed to provide the capacity of a single terrestrial base station, that would have been an achievement with solar power.”

Drones didn’t fly either

Loon was the most high-profile HAPS project, but it was far from the only time large companies entered - and then left - the space.
Photography: Doug Coldwell

Facebook’s Aquila project was a fixed-wing solar-powered drone that promised free Internet coverage – and new customers for FB properties – in rural and under-connected areas. Facebook bought solar drone maker Ascenta in 2014, and went ahead with the huge-but-fragile Aquila flying wing. At 43m, its wingspan was as big as a Boeing 737’s, but it weighed only 880 pounds (400kg).

Aquila managed just two test flights, the first of which, in 2016, ended in a crash. Despite a partnership with Airbus, the project was canceled in 2018. Facebook NDAs mean no one involved with the project could talk to DCD.

Facebook has since focused on easier terrestrial projects, investing in fiber deployments in Pakistan with Nayatel, developing machines that can quickly deploy fiber onto overhead powerlines, and installing its own submarine cables. The social network has also partnered with satellite firm Eutelsat to provide broadband in Africa, although even that effort has faced setbacks - in particular, when the SpaceX rocket set to carry the satellite into orbit exploded on the launchpad.
In 2008, DARPA funded a more ambitious drone. The Boeing SolarEagle had a colossal 120m wingspan and an ambition to stay aloft for five years at a time. It was canceled in 2012.

In the UK, the Ordnance Survey mapping agency had its own Astigan fixed-wing HAPS project, focused on Earth observation and mapping rather than comms. At 38m and 149kg, it was smaller and lighter than Aquila, but was canceled earlier in 2021 after the initiative was unable to secure a suitable strategic partner.

Google also dabbled in solar drones, buying Titan Aerospace in 2014 and planning to use it to deliver connectivity to rural and under-connected regions. Project Titan was a solar-powered fixed-wing platform, larger than Aquila, with a 50m wingspan, and projected to have the connectivity of 100 cell towers. Much like Aquila, the project had one failed test flight in 2015 and was killed off in 2017.

The Loon legacy

For all its flaws, Loon may have been the HAPS industry’s most successful child; a platform that was able to stay aloft for months
at a time and deliver the desired connectivity to a targeted area. Most significantly, it was the first to attempt a commercial deployment.

Despite the project’s demise, certain technologies developed for Loon will continue, including the light-based high-speed broadband Project Taara. Loon also partnered with Telesat to develop a temporospatial software-defined network (SDN) operating system. That will be deployed on Telesat’s Low Earth Orbit (LEO) satellite constellation to deliver broadband connectivity.

“Loon spearheaded a lot of the developments and brought a lot of interest to the industry,” says Shivaprakash Muruganandham, analyst at Northern Sky Research. “But those closures have been setbacks across the industry; they had to close down because they just could not figure out the business model.”

And while Loon’s closure is another blot on the HAPS history book, its successes leave behind a growing community that believes it is close to turning the corner of commercial viability.

“Despite all those failures [of Loon etc.], the market is keen to see somebody succeed,”
adds Phil Varty, industrial strategy manager at BAE. “If the measure is the market interest, I think it's still there. Clearly, there's caution because it's really hard. This is not straightforward. There's a lot still to do to make that service a reality.”

Telcos pick up on UAVs

Loon pushed the envelope on innovations such as connecting communications payloads from the stratosphere to devices on the ground, software that manages constellations of vehicles, and techniques to navigate wind currents so the balloons could reach and stay in one area.

And a multitude of companies are still operating in the HAPS space, looking to build on Loon’s success and provide drones, airships, and balloons for commercial connectivity use cases.

“There are a variety of different projects around the world in each of these segments that are at various stages of completion,” says NSR’s Muruganandham. “HAPS have a rich history; there have been periods of interest waxing and waning over the last few decades.

“Right now, we're in one such wave, wherein we have so many different vehicles or platforms in development and there's a lot of government-type users or other investors also trying to drive this community forward.

“Loon kickstarted a more recent wave in the commercial world, but I think it’s going to take an entity or group of entities with deep pockets that are willing to wait over a long period of time for it to really find its niche.”

Despite the closures of various projects, a number of companies are making bets that HAPS will become a viable means of providing connectivity. HAPSMobile is a subsidiary of SoftBank’s telco arm, part-owned by a US military drone firm, AeroVironment. Set up in 2017, HAPSMobile aims to offer commercial services around 2023 through its fixed-wing SunGlider platform, most likely through wholesale offerings to operators rather than direct to customers or consumers.
The company has done five successful test flights of the 78m wingspan drone, including a stratospheric flight and a demonstration of LTE services via a HAPS platform. The SunGlider is designed to stay aloft for up to five months, and part of its communications payload was developed in partnership with Loon and
adapted from the search giant’s balloon project.

Matthew Nicholson, corporate communications manager at SoftBank and HAPSMobile, said that network resiliency in the face of natural disasters was one of the main reasons the telco has an interest in non-terrestrial communication technologies.

The company is also part of the HAPS Alliance, a group of companies looking to accelerate the development and promotion of HAPS globally that included Loon and still includes companies such as Intelsat, KDDI, Radisys, Nokia, and Ericsson, as well as several defense and aerospace firms.

“We've worked very closely [with Loon] in a number of areas,” says Nicholson. “Alphabet made their decision, but our plans haven't changed in any way; we still see tremendous potential for HAPS. There's a lot of work to be done but with SoftBank, part of the DNA of the company is it has a very long term view of things. We continue to press forward and build on Loon’s legacy.”

Another telco, Deutsche Telekom, has partnered with Stratospheric Platforms Limited (SPL) and Cambridge Consultants to conduct LTE/4G voice and data connectivity tests from a fixed-wing platform. Tests conducted by the companies last October above Germany’s Bavaria region enabled voice over LTE (VoLTE) calls, video calls, data downloads, and web browsing on a standard smartphone. Unlike most other fixed-wing platforms, SPL utilizes hydrogen fuel cells rather than solar technology.

Given the military potential of a high-altitude drone capable of flying for weeks at a time, it should come as little surprise that defense companies are active in the HAPS space. British defense contractor Qinetiq developed the Zephyr platform from 2003, and sold it to Airbus in 2013. In 2018, the Zephyr S flew from Arizona and remained aloft for 25 days, 23 hours, and 57 minutes. The company has made other successful test flights more recently, has three aircraft in service, and is close to a commercial deployment.
In 2017, Russian company Lavochkin was conducting tests of its high altitude LA-252 drone, which was due to operate nine miles up and could be used as a repeater, a Wi-Fi transmitter, and a general communication device.

Another fixed-wing design, BAE’s
Persistent High Altitude Solar Aircraft (Phasa-35) had its first test flight in February 2020. Further tests are planned in the US this summer, and the company tells DCD it is hoping to have a market-ready product in three to four years’ time.

Are airships the answer?

Alongside the fixed-wing designs, some are turning to airships. It’s a technology that dates back to Victorian times, but motorized airships are probably the least mature category of HAPS. Yet in theory their ability to carry heavy payloads and stay in a stationary holding pattern for long periods may make them the most attractive option in the long term.

Thales Alenia Space is working on the Stratobus airship platform, and last year began concept studies with the French defense procurement agency DGA. Lockheed Martin is also working on an airship design, as are US startup Sceye, Israeli firm Atlas LTA, French firm Flying Whales, and Czech company Stratosyst, while China has previously promoted tests of its Yuanmeng airship.

“Airships are probably going to be more cost-effective than balloons or pseudo-satellites in the long term,” says NSR’s Muruganandham, “but they are in a very early development stage right now and there are still a lot of R&D requirements that have to be established.”

“Airships have always been a bit of a struggle,” says BAE’s Varty. “There aren't any airships that are viable, and there's nothing on the horizon that says airships are actually practical. If you can make it work, that'd be really good, but nobody's managed to make them work.”

It’s worth mentioning that despite Loon’s rapid deflation, its balloons are still available. The Google partner that developed them, Raven Aerostar, provides a number of balloon-based platforms. Balloons are probably the most mature technology, but Loon showed the practical hurdles they face.
Their limited payload, power constraints, and relative lack of flight path control all increase the number of platforms required to provide uninterrupted coverage over a given area, driving up deployment costs and complexity.

Drones rule the atmosphere

Varty says there’s a clear winner in the stratosphere: “In terms of viability right now, [fixed-wing] pseudo-satellites will be a better option.”

Fixed-wing platform technology is on a steady maturation curve, and several people DCD spoke to believe a commercial deployment could be in operation within the next couple of years.

“It's a very big market,” says BAE’s Varty.
“There's a lot of opportunity for a range of different solutions. If you're talking 5G and Internet then I think it's a very big market; there's a lot of countries around the world which don't have good coverage today.”

During the early phases of the Phasa project, Varty said, BAE wasn’t sure if there would be much market interest in 4G/5G-type connectivity use cases, but now says it is ‘convinced’ it is possible.

“What we're seeing is some of the larger players in those markets wanting to explore how a HAPS [might] provide a benefit in those areas where terrestrial towers are not covering the areas that they want to cover; where there's still decent amounts of population but not a lot of ground infrastructure.

“There's definitely an interest in a business case for a HAPS-type network in a lot of areas around the planet where that satellite coverage isn't good enough or ever going to be because of latency issues and the geography just doesn't lend itself to putting more towers up.”

During the Deutsche Telekom pilot, Cambridge Consultants helped develop an antenna to accommodate the distance and potential signal interference. “In order to overcome this free space path loss and other atmospheric effects that could impact the signal strength, you do need to advance the communications payload,” says Long. “On a HAPS you can have a reasonably advanced antenna, and because you're at such a high altitude you can provide coverage over a reasonably large area.”

Long said that where a mobile operator might need around 13,000 cell tower sites per technology – 2G, 3G, 4G, etc. – to achieve around 90 percent coverage in the UK, it could get equivalent coverage via HAPS with around 100 platforms. More work would need to be done to ensure platforms could handle the traffic of an entire city or region, however.

“From a single platform you could probably provide coverage over East Anglia, for example, or the best part of London.
Obviously, however, if you're covering even half of London, you're going to be generating terabytes of data at any given moment,” he says.

Competition from above

In the last decade, the drone industry has matured quickly. And while the sky isn’t full
of UAVs delivering parcels, as some predicted, they have found comfort zones in areas such as film and infrastructure monitoring.

At the same time, SpaceX’s Starlink and other companies have created a boom in mega-constellations of LEO satellites offering high-speed, low-cost Internet to rural and under-connected areas - a very similar promise to HAPS. Does the arrival of LEO sats hurt the viability of HAPS before they’ve really gotten off the ground?

Some will argue there is a clear crossover between the two, but continuous coverage, speed of deployment, latency, and the fact that HAPS can deliver 4G/5G/broadband connectivity without needing a satellite receiver are commonly cited benefits of HAPS over satellite deployments. Unsurprisingly, most of the people DCD spoke to thought LEO satellites would not hurt the HAPS market, and that the two would instead complement one another.

“With satellites, you have to go through all the hassle of getting a new satellite up there,” says BAE’s Varty. “With HAPS, it's not a big deal to bring one of these down and change the sensor out. You can move them around much more easily than you can move satellites around. There's a whole lot of flexibility in a HAPS constellation that you don't have with a satellite one.

“Satellites do have their place, but there's definitely flexibility in what we do, which you're not going to get with satellites in terms of being able to move them around, to change them out, and update them as the technology changes over a course of months rather than years. All of those things give you that advantage. And I think we will see the price points as well being somewhat lower for a HAPS system than it will be for most satellite systems.”

NSR’s Muruganandham says that while mobile operators are waiting for the LEO satellite constellations to launch and come online at scale, there is an interim opportunity for HAPS to provide connectivity to those remote or underserved areas.
“Whether these platforms themselves are actually able to deploy by that time and figure out the right markets to target, that's a difficult question.”

Defense departments have taken a keen interest in both HAPS and LEO satellites as ways to provide coverage and intelligence to
ground forces. LEO satellite firm OneWeb has made a number of deals with both US and Canadian defense departments to provide coverage in remote areas.

It’s also possible that commercial operators will follow defense and use LEO satellites in conjunction with HAPS; using both to ensure complete coverage, or using one for backhaul transmitted by the other.

Alongside its investment in Loon and its HAPSMobile UAV play, SoftBank has invested across the board, in both LEO and geosynchronous earth orbit (GEO) satellites, and even in aerostats (tethered blimps - see p19). That’s starting to bear fruit, as the company is beginning to offer what it calls non-terrestrial network (NTN) solutions that provide connectivity from space and the stratosphere. SoftBank’s offerings will include GEO satellite NB-IoT services provided by Skylo, LEO satellite communications via OneWeb, and HAPS-based telecoms via HAPSMobile.

The government link

Whether for in-the-field communications or intelligence gathering, defense departments and governments are likely to play a large role in the success of HAPS. BAE’s Dave Corfield says he expects the military to pick up the technology slightly ahead of the commercial market, while Varty adds that the regulatory environment on the commercial side needs more changes than would be required in the military domain, which will likely mean military deployments happen sooner.

SoftBank’s Nicholson adds that governments may be interested in investing in HAPS to ensure they have backup options during disasters. And while the company is likely targeting overseas markets before commercial deployments in Japan, he says the Japanese government has shown interest in the technology.
Challenges in development

Analyst firm Northern Sky Research predicts the HAPS market will reach $4 billion by 2029, counting around 40 projects in various stages of development, with balloons the most mature segment and airships and pseudo-satellites still in preliminary stages.

“I look at it as a very nascent market,” says NSR’s Muruganandham. “There has been a lot of promise in the past, and there's still a lot of regulatory and technological challenges that these companies have to overcome.

“I don't know if I would state that this is a growth market opportunity; I think it's still going to remain, at least in communications, an adjacent market to the satcom and terrestrial communications market.”
The challenges with HAPS are myriad. On the aviation side, companies need to develop platforms that can not only get into the stratosphere and stay there for a long period of time, but stay within a relatively tight flight path (and ideally do so autonomously). On the communications side, creating payloads that can provide large-scale, high-bandwidth, fast coverage over a large area using minimal power is no small feat.

But the technology is moving quickly. Loon showed that the communications challenges could be overcome with enough engineering, and advances in solar efficiency and materials science continue to reduce airframe weight. Meanwhile, improvements in battery technology developed for the auto industry have pushed the HAPS sector on immensely in the last few years.

“We've been able to take advantage of an ultra-lightweight solar array, and incorporate that into the vehicle design,” explains Dave Corfield, program director for BAE’s Phasa-35 program, “and also a significant amount of R&D into taking commercial off-the-shelf batteries or cells and turning those into battery packs.

“In addition, we've got really innovative construction techniques in the airframe, with some further R&D running now to try and
strip even further weight out of the airframe, because for every bit of airframe mass we save we can increase payload.”

Adjacent industries such as satellites are contributing lightweight, miniaturized payloads suitable for a HAPS platform, but almost all of them need modification before they can be fixed onto the platforms and used.

“Currently we're taking payloads and having to adapt them onto the aircraft,” says Corfield. “I think as we demonstrate the technology is mature enough, people are going to start to actually design and develop payloads specifically for HAPS vehicles, which are going to be even better.”

Alongside payloads, solar panels and batteries are improving, but HAPS still need developments in management and control if we’re ever to see large-scale automated deployments. Corfield says constellation management and Beyond Line of Sight (BLOS) control are two areas the company is working on closely as it moves towards commercialization. Varty adds that BAE is developing a process for producing platforms at a scale of hundreds per deployment, as might be necessary for a commercial mobile operator.
Give us spectrum

Regulation is also an area that needs work. Requirements around satellites are well-established. And while small consumer and commercial-sized UAVs have been slowly integrated into national airspaces in a number of countries, there is little sign of regulation for large unmanned platforms flying in the stratosphere, and no certification to say they’re safe and airworthy.

“We're heavily engaged with the regulators to understand how we would certify a HAPS system,” says Corfield. “It's very much hand in hand, walking that journey together, because no one's done this before.”

Questions also remain around spectrum allocation. Loon used the same spectrum as its mobile operating partners, and there is a limited amount of bandwidth dedicated to HAPS. But SoftBank’s Nicholson says the company is hoping the next World Radiocommunication Conference (WRC) in 2023 will dedicate spectrum to HAPS.

“We've been lobbying to get spectrum allocated specifically for HAPS. Some already does exist, but we need to get more spectrum for HAPS to make it viable,” he says. “Without those key regulatory approvals you're not able to make a commercial business out of that.”

Nicholson didn’t say why more dedicated spectrum was so important to SoftBank’s commercialization plans, but it might be to help reduce the cost of deployment and reduce the number of platforms companies will need to deploy to cover a desired area.

“In telecoms in general there is usually a trade-off between spectrum bandwidth and deployment cost,” explains Toby Youell, research analyst at spectrum research firm Policy Tracker. “One HAPS with unlimited spectrum could be enough to meet all demand, whereas an allocation of 1Hz would require many HAPS to provide the same service. This trade-off is what drives operators to spend billions for spectrum at auctions, because it ultimately saves operators in buildout costs.
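Youell’s trade-off can be made concrete with the Shannon-Hartley limit, C = B log2(1 + SNR): a platform’s maximum throughput scales linearly with its spectrum allocation, so halving the bandwidth roughly doubles the number of platforms needed to carry the same demand. A minimal sketch of the arithmetic (the demand, SNR, and bandwidth figures below are illustrative assumptions, not HAPS-industry numbers):

```python
import math

def platforms_needed(total_demand_bps: float, bandwidth_hz: float, snr_linear: float) -> int:
    """Platforms required to serve a region, given per-platform Shannon capacity."""
    capacity_bps = bandwidth_hz * math.log2(1 + snr_linear)  # Shannon-Hartley limit
    return math.ceil(total_demand_bps / capacity_bps)

# Illustrative region: 10 Gbps aggregate demand, link SNR of 15 dB (~31.6x linear).
snr = 10 ** (15 / 10)
for bw_mhz in (100, 20, 5):
    n = platforms_needed(10e9, bw_mhz * 1e6, snr)
    print(f"{bw_mhz:>4} MHz per platform -> {n} platform(s)")
```

With these assumed numbers, shrinking the allocation from 100 MHz to 5 MHz multiplies the fleet size twentyfold, which is the buildout cost Youell says operators are paying to avoid at auction.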
“Several WRCs since 1997 have identified more of these precious airwaves for HAPS, including at WRC-19,” adds Youell. “But as Google’s Loon shows, deployments so far have not met their stratospheric promises.”

The shape of commercial HAPS

Cambridge Consultants’ Long believes we could see the first commercial HAPS deployment within a couple of years, and almost certainly within the next five, likely in Asia or Africa. In terms of the platforms, the technology is pretty much there, if an operator were willing to accept a first-generation product that had room for improvement. Whether it would be a viable business in the long run, however, is still an open question.
“We're really close to getting this stuff to work,” he says. “There's a decent amount of innovation still to be done in terms of improving the performance of the communications payload and the power supply on the HAPS, but it's a matter of putting all the different pieces together and making sure that you can have an economic case. The engineering needs to make the economics work, and that still needs to be proven.”

While fixed towers or satellites require high upfront capital expenditure, their ongoing operational costs are fairly low. With HAPS, that capex/opex equation is flipped on its head. The time and cost of getting them into the sky will be lower than firing a satellite into orbit or building a 200ft cell tower. But a fleet of HAPS regularly landing and taking off, requiring management and integration with civilian airspace, will need much more ongoing investment and management.

And until we see a sustained commercial - or at least pilot project - deployment by a mobile operator, it’s unclear what the equations for a successful business model using HAPS actually look like. It’s still unknown how many platforms would be needed to provide continuous coverage. Long suggests that if an operator had 100 HAPS in the air at any one time, they might need refueling every two to three weeks, including a day or so to fly each way between the ground and the stratosphere, meaning an operator might need another 10-15 platforms on standby to launch as needed.

Unlike the military, the commercial market won’t want to own or control HAPS platforms. Just as many telcos are now selling their towers and offering connectivity on leased infrastructure, they are more likely to want to use managed HAPS deployments delivering airborne connectivity.
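Long’s estimate can be sanity-checked with simple duty-cycle arithmetic: the standby pool must cover the fraction of each platform’s cycle spent off-station. A rough sketch, assuming his figures of a two-to-three-week endurance and roughly two days off-station per cycle (the helper function is purely illustrative, not an industry model):

```python
import math

def standby_needed(in_service: int, on_station_days: float, off_station_days: float) -> int:
    """Extra airframes required to keep `in_service` platforms on station
    while others descend, refuel, and climb back to the stratosphere."""
    return math.ceil(in_service * off_station_days / on_station_days)

# Long's scenario: 100 platforms aloft, refueled every two to three weeks,
# with roughly a day's transit each way (~2 days off station per cycle).
for weeks in (2, 3):
    spares = standby_needed(100, weeks * 7, 2)
    print(f"{weeks}-week cycle -> ~{spares} standby platforms")
```

Under those assumptions the answer falls between 10 and 15 spare airframes, consistent with the range Long quotes.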
Last year, Richard Deakin, CEO of SPL, said his Deutsche Telekom-backed company and Cambridge Consultants might consider “setting up as a small airline” to run the fleet of aircraft, each of which will have a design life of ten years, and provide wholesale services to operators. Under current rules, however, each uncrewed aircraft has to have a full-time controller, meaning two to three people would be required per platform per day for continuous coverage.

HAPS could find their comfort zones as temporary deployments before fixed terrestrial towers are installed, or during disasters or emergencies, or as point solutions for remote infrastructure such as mines or offshore oil rigs.

Fringe business models

As well as IoT connectivity, disasters, and remote area coverage, SoftBank’s Nicholson
says that another potential use case for HAPS is connecting the skies themselves.

“We envision that there will be applications in the sky, so HAPS definitely have a role to play there,” he says. “With drone deliveries and even drone taxis, we believe there'll be a need for connectivity in the air.

“With a base station on the ground, it's very difficult to project [signals] up. If you have an aircraft in the stratosphere covering a wide radius, it's easy to reach drones that may be high up.”

When asked if he could foresee a HAPS-only mobile operator, Long says it’s possible, but the risk-averse nature of the telecoms industry means it could take a large-scale technology player to shake things up.

“I could see one of the cloud vendors; AWS, Google, etc., launch a relatively small fleet of HAPS, set up the access to AWS, and bypass all of the terrestrial infrastructure. I could also see somebody like Disney coming in and saying, 'I'm fed up with having to deal with all these operators, we will just do our own [network].’

“Telcos themselves are a bit conservative. It might require Ofcom to come along and issue a new license for a non-terrestrial network in the UK to a large non-UK operator. It might require that order and magnitude of initiative to shake people up.”

While it’s possible that we could see
a HAPS-only mobile operator, most commercial HAPS deployments are likely to be supplementing coverage given the relative lack of completely greenfield territories. “I don't know if the business models work out favorably in the case of HAPS where companies are looking to solely make a profit out of using only high altitude platforms for their connectivity use cases,” warns NSR’s Muruganandham. He says they need “a lot of advancement” in technology and regulations. “It’s going to be very fringe,” he says, predicting that while terrestrial and sat comms play catch up, HAPS could serve as a midway platform until they are built out. "Those kinds of surge applications, like in disaster response where you have these telecom towers knocked down and applications of that nature, they might be a good fit,” he added. BAE’s Varty likens the HAPS market to the current explosion in satellites, and how the market has rapidly changed and accepted huge numbers of new machines entering orbit in a short space of time. “At the beginning, it was single satellites, and then a couple of them doing constellations and now we're talking about thousands of satellites in constellations in orbit,” he says. “Within 10 years, you should see large constellations of HAPS.”
Aerostats offer an alternative to HAPS and LEO satellites
At the same time that companies are investing in airships, fixed-wing HAPS platforms, and LEO satellite constellations, a number of companies, especially telcos, are looking at using tethered drones or airships – also known as aerostats or blimps – to provide temporary connectivity in events such as natural disasters. In 2018, KT Corp revealed its 5G Emergency Rescue Platform known as Skyship. A ground station would beam 5G to the skyship, which would then broadcast signals to nearby drones and robots. In the UK, BT’s EE has previously tested a tethered drone and balloon concept known as ‘air masts’ for use as temporary coverage masts. The company used balloon technology from a company known as Helikite in 2017 for a Red Bull cycling event in Wales. Last year, FirstNet One deployed an LTE blimp for the first time in the aftermath of Hurricane Laura to support AT&T’s response effort in Louisiana. Free-floating airships or balloons have to generate their own power and find a way to connect both to the providers’ network and the customers they are trying to connect, but aerostats offer a simpler middle ground, getting power and network feeds from a cable. “We don't typically go over 1,000 feet/300 meters or so. That is our level of high. We're like the low and slow version of high altitude platforms,” says Joe Ryan, VP of business development at Altaeros, an MIT spin-off backed by SoftBank. Altaeros’ helium-filled SuperTower platform can carry a payload of 660 lb (300 kg) and power 1 MW of equipment. It can operate in winds up to 63 mph (100 km/h), at a height of 1,000 ft (305 m).
The company’s tethered platforms stay in a fixed position, and remain fully connected to the ground with fiber and power lines. Like HAPS, aerostats can connect areas that are underserved or where terrestrial networks are affected by natural disasters. Providing coverage for IoT networks in remote areas is also a potential service. “You can put any equipment that one would normally see on a tower on an aerostat,” says Ryan. “It may not be 30,000 feet in the air but it's substantially higher than your average tower, giving it the ability to broadcast the wireless signal a lot further.”
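Ryan's point about altitude extending broadcast range comes down to simple geometry: the line-of-sight radio horizon grows with the square root of antenna height. A minimal sketch of the arithmetic, assuming a 50 m terrestrial tower for comparison (the 305 m figure is Altaeros' stated operating height; real coverage also depends on transmit power, frequency, and terrain):

```python
import math

# Effective Earth radius: the standard 4/3 correction accounts for
# atmospheric refraction bending radio waves slightly around the curve.
R_EFF = (4.0 / 3.0) * 6_371_000  # meters

def radio_horizon_km(height_m: float) -> float:
    """Distance to the radio horizon for an antenna height_m meters up."""
    return math.sqrt(2 * R_EFF * height_m) / 1000

aerostat = radio_horizon_km(305)  # Altaeros' stated operating height
tower = radio_horizon_km(50)      # assumed typical macro tower height

print(f"Aerostat horizon: {aerostat:.0f} km, tower horizon: {tower:.0f} km")
print(f"Line-of-sight area advantage: {(aerostat / tower) ** 2:.1f}x")
```

On these assumptions the aerostat's horizon is roughly 72 km against about 29 km for the tower, a six-fold line-of-sight area advantage; practical cell planning, not pure geometry, is what lies behind coverage claims like "equivalent to 15 towers."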
Difficult business case
Ryan says that a single aerostat can provide coverage equivalent to around 15 separate towers because of this height advantage: “Having to build 15 towers is a difficult business case to cover a rural area; 15 towers, 15 separate fiber or microwave connections, 15 separate power setups, I don't see anyone replacing existing systems, but putting this up in remote areas makes it affordable and makes it doable.” Ryan also says the company is signing an agreement with a “major tier-one wireless carrier within the next few months” to provide emergency services in the event of a disaster. “We’re also working with other markets like mining and government markets, anywhere where they need something that is able to provide communications over a very wide area.” While most blimps or aerostats require a team of five to 10 people on-site, Altaeros’ platform is automated, and will climb and descend as the weather allows without the need for human intervention.
"You can put any equipment that one would normally see on a tower on an aerostat. It may not be 30,000 feet up but it's higher than a tower"
Dan Swinhoe News Editor
“It doesn't fly, but it controls itself in the air,” says Ryan. “It's able to keep its station and stay in one place.” The platforms are deployable via truck or even helicopter and could theoretically operate for years at a time with minimal interruption, although every 30 days a person must attend the platform to replenish lost helium. Aerostats avoid zoning regulation and only require a Notice to Airmen (NOTAM) and some onboard lights to ensure they are visible to any aircraft in the area. Ryan says in the future Altaeros is looking to reduce the size of the platform where possible, and is also hoping that the platforms will be able to integrate with LEO satellite constellations to act as a relay and provide connectivity without the need for a ground-based receiver. Mobile operators don’t want fixed assets: just as they are selling off their towers to focus on providing connectivity services, Ryan expects most mobile operators will lease aerostats or buy them as a service, especially now their deployments can be largely automated. “Most wireless carriers, they don't even want to be in the business of owning towers anymore. The idea of a wireless carrier owning aerostats that have a crew of five to 10 is just silly, that was never going to happen.” In theory, a HAPS deployment could operate in any environment regardless of the infrastructure on the ground, but aerostats need at least some existing infrastructure. However, HAPS are still very much in the development phase, while aerostats are a tried and tested technology that is ready for deployment. “Rural coverage, emergency coverage, you need a lot of tools in the toolbox to be able to do it, from towers all the way up to HAPS,” says Ryan. “And most of these things don't overlap with each other. We just happen to do it a different way and provide a slightly different service.” “I suspect we'll see a lot more aerostats in the next five years, and I believe that we will be complementary with the LEO satellites.”
Issue 41 • July 2021 19
28 years later: Supermicro’s wild ride We talk to Supermicro’s CEO about the evolution of the server market, green computing, and Bloomberg’s hacking allegations
Charles Liang has been building servers for a long time. "Supermicro started 28 years ago," he told me repeatedly during our conversation. "28 years ago we were a startup company, three people in a garage." During this lengthy tenure as CEO, Liang has seen his company grow out of that garage to become a major server manufacturer at the forefront of the digital revolution. We talked about that journey, the company’s next step, and that Bloomberg article. Liang was born in Taiwan, but Supermicro is very much an American success story. After graduating from the University of Texas (Arlington), Liang spent several years as an engineer at Silicon Valley firms like Chips & Technologies and Suntek. Then, in 1993, he formed his own motherboard company out of San Jose - initially targeting PCs, but soon pivoting to servers. The company has always tried to leverage its global heritage, relying on domestic design teams and a mixture of US and overseas production to build a server empire. Its future now is reliant on that international approach, with foreign markets key to growth and lower costs. But its Asian roots have also seen it buffeted by global tensions far too large for it to control, threatening the company’s reputation and profitability. Supermicro first opened a campus in Taiwan back in 2010, but - seeing growing APAC demand, looking to lower costs, and needing to reduce its reliance on Chinese manufacturers - it began work on a much larger complex in Liang’s homeland in 2018. "For the past three years, we have been aggressively expanding our campus in Taiwan,” Liang said. “That has helped us a lot during Covid,” although the company took a $30m hit last year due to the pandemic. Now, as the virus spreads across the island nation, forcing shutdowns, he hopes that the Silicon Valley and Netherlands campuses will help pick up any of the slack.
Sebastian Moss Editor
"So far, the impact of Covid in Taiwan is relatively more controllable. We already keep separation, different buildings for different functions, as a core idea” “So far, the impact [of Covid in Taiwan] is relatively more controllable,” Liang said, adding that the campus was designed during Covid and therefore is built with the pandemic in mind. “We already keep separation, different buildings for different functions, as a core idea.” Taiwan is key to the company's next phase, where it can produce servers for far less than in the US, more easily competing for hyperscale customers. It also expects to be able to nearly double production capacity, manufacturing over two million servers per year by this summer. In its latest quarterly report, the company detailed a noticeable sales "weakness in the United States," but said it was offset by "double-digit" growth in many Asian and European countries. The investment comes at a critical time for a business competing against established OEMs and a growing number of ODMs.
To brand or not to brand?
Original equipment manufacturers such as Dell and HPE sell branded products with hardware solutions, vendor support, and crucially - trust. As the old adage goes, no one gets fired for buying IBM. Original design manufacturers, on the other hand, cut out a lot of that support, a lot of the branding, and get rid of large sales teams and marketing budgets. It's often a race to the bottom on pricing, with no-name 'white box' companies competing over large orders for hyperscalers. Those hyperscalers, with huge in-house engineering teams, don't need to pay extra
for help from the likes of Dell, the way smaller businesses or non-tech enterprises do. Given the rise of the cloud, the ODM model steadily building customer trust, and the slow growth of the ODM-friendly Open Compute Project, demand for ODM servers has rocketed. In 2020, a year of firsts, ODM server sales outpaced OEMs for the first time. Throughout this decade-long shift in data center compute consumption, Supermicro has tried to straddle both worlds. "We support an OEM model - Supermicro brand name. At the same time, we have ODM function as well," Liang explained. "A customer who buys a large volume - they care about cost. We give them a brand name for that, but at an ODM price. It's challenging, but it works." Supermicro also sells millions of subsystems and components, including motherboards, various plug-in cards, and system enclosures, to value-added resellers, systems integrators, and even other OEMs. The hybrid approach has allowed it to sell a huge number of servers to IBM Cloud (thought to be its biggest single customer), develop custom hardware for Nutanix, and high-performance computing (HPC) servers for small tech companies like Elemental Technologies, as well as bring in billions in component sales. The company has long shrugged off the growth of rival ODMs, contending that the primarily Chinese businesses have lower R&D expenditure and that their focus on rock-bottom prices shows in the end product. It has leaned heavily on its ‘designed in Silicon Valley’ credentials to bolster its brand.
That argument may have held true for several years, but a prolonged period of ODM growth and competition within the sector has meant that many of the leading companies like Wiwynn, Foxconn, and Quanta are now reliable and formidable competitors. At the same time, Supermicro’s shift to Taiwanese bulk production faces competition from local rivals. A new crop of domestic ODMs have entered the market hoping to carve out a space for themselves, including Gigabyte Technology, AIC, Asustek Computer and Mitac Computing Technology. All of those businesses, a 2019 Digitimes report claimed, were vying for Amazon Web Services (China) orders that had traditionally gone to Supermicro. "I'm very confident with our solution," Liang counters. “I feel I'm really lucky. Since I started the company 28 years ago, we have used the building block solution as our key architecture.”
Single subsystem
Liang explained that the company designed a single subsystem that is “compatible across the product line, even across different generations or products.” This composable infrastructure “helps us lower our cost and simplify our inventory, and improve our lead time, because we are using the building block solution for all our subsystems.” When a customer asks for a unique order, Supermicro “pulls out our standard subsystem for our building block solution, and we are able to integrate a customer application-optimized solution quickly because we share the same inventory,” Liang claimed. “Not just lower cost, but a shared inventory that helps us have room to support tier two, tier three customers quicker and with a better price.” The company has hundreds of server variants based on its core solution, and in 2020 went more aggressively after the hyperscale market with its new MegaDC and CloudDC lines. As new chip architectures come onto the scene, Liang said that the building block’s flexibility is proving useful, with Intel’s dominance of server compute being chipped away. “It doesn't matter which CPU or GPU,” he said. “One building block solution for all different kinds of CPU, GPU, and for different kinds of customer.” Liang says that the success of this building block solution is down to a mixture of luck,
foresight, and time. "We spent the 28 years just focused on one single topic: Bringing the best hardware to the customer." There’s been another focus for the company, one he is keen to share, with Liang breaking into lengthy explanations even when asked unrelated questions. That subject is green compute. More than a decade ago, Liang began focusing on server efficiency, with old videos at Computex showing him warning of rapidly rising data center energy use amid a warming climate. Efforts by his company and others have staved off some of the direst predictions. A 2020 study by Lawrence Berkeley National Laboratory found that global data center compute rose by 550 percent between 2010 and 2018. In that same time, energy use rose just six percent. But there's still a long way to go, Liang argues, especially with compute demands rising and CPU improvements slowing. "Green computing can save energy costs, and it's a good idea for our planet, reducing our carbon footprint," Liang said, launching into a detailed defense of a slightly higher capex for an ROI that he claims can be measured in months. Supermicro also uses a 3MW Bloom Energy fuel cell to reduce emissions and long-term costs at its San Francisco site, and has other eco-friendly initiatives like its e-waste-reducing disaggregated server architecture. "[We must] reduce carbon footprint, make our planet healthier," Liang said. "These fossil fuel power plants are still everywhere." Of course, despite the lofty intention of helping the planet, the company suffers from the same ethical lapses as many in the tech sector: It is a major supplier to the oil and gas industry, delivering HPC equipment to help them find and exploit new fossil reserves. Liang, head of the Green Earth Charitable Organization, dodged questions about any hypocrisy in selling to those still building the fossil fuel power plants he argues we need to eliminate. "They care a lot about energy efficiency too," he said.
"We have some oil and gas companies that go for a liquid submerged solution with us and some others that go for liquid cooling. The energy companies pay attention to energy consumption - although they generate energy, they care." Indeed, companies across all industries care now more than ever about energy
"Green computing can save energy costs, and it's a good idea for our planet. We must reduce carbon footprint, make our planet healthier. These fossil fuel power plants are still everywhere" 22 DCD Magazine • datacenterdynamics.com
consumption. Liang said that his focus on green compute was finally paying off, as the world grapples with climate change. This new business opportunity comes after a rocky few years for the storied company.
Hit by the SEC... and Bloomberg
In 2017, it was revealed that the company was improperly and prematurely recognizing revenue for fiscal 2015 through fiscal 2017. In an effort to boost its numbers, it recognized revenue on servers it had yet to deliver to customers, shipped servers to customers prior to customer authorization, and shipped misassembled goods to customers. A subsequent Securities and Exchange Commission investigation fined the company $17.5 million, and Supermicro replaced much of its executive team - but neither admitted nor denied the SEC’s findings. “Supermicro is committed to conducting our business ethically and transparently,” Liang said in 2020. “We fell short of our standards, and we have implemented numerous remedial actions and internal control enhancements to prevent such errors from recurring.” Liang was not himself accused of misconduct, but had to reimburse $2.1m in stock profits in a separate settlement. Delays in reporting financial results due to the lengthy audit meant that the company was delisted from the Nasdaq in August 2018, a deeply embarrassing event. It took until January 2020 to return. As it was still reeling from the delisting, it got hit by something much worse: A devastating claim that its products were spying on its biggest customers. In what read like something ripped from the pages of a Hollywood screenplay, Bloomberg Businessweek said in 2018 that Chinese spies had infiltrated Supermicro's supply chain. The People’s Liberation Army was adding tiny microchips, no bigger than a grain of rice, to Supermicro motherboards to get access to US state secrets, the publication claimed. Bloomberg said Amazon discovered the chip when doing due diligence on a potential acquisition, Elemental Technologies. It informed the FBI, as did Apple, which soon removed all Supermicro servers from its data centers.
Other impacted businesses include a major bank, government contractors, Department of Defense data centers, and the CIA, the publication reported in an article citing 17 people familiar with the matter, including six current and former senior national security officials. Bloomberg said that Supermicro’s international setup had left it open to attack. Not only was it relying on China for manufacturing, but its San Jose offices were
"We spent the 28 years just focused on one single topic: Bringing the best hardware to the customer"
primarily staffed by Taiwanese and Chinese employees, with Mandarin the preferred language. This made it easier for China to understand the company’s operations, Bloomberg alleges, although investigations are still ongoing as to whether any spies were planted within the company (a subsequent report claimed the FBI surveilled several suspicious employees). The story took a surreal segue when, not long after the article was published, I met with a number of Department of Energy employees for a drink. When asked what they thought of the rumor, most laughingly brushed it aside as highly unlikely. But one individual, who said he had been traveling and not following the news, jumped up abruptly when I explained the story. He said that he had to make a phone call, and ran out the door. When he returned some time later, he would not allow the subject to be broached, but was clearly disconcerted. The following few weeks proved just as strange. Cyber security experts picked at various parts of the story, and every company named in the report came forward to vociferously deny it. "We want to assure you that a recent report in Bloomberg Businessweek alleging the compromise of our servers is not true," Apple said, taking the unusual step of writing a letter to Congress denying the article. "You should know that Bloomberg provided us with no evidence to substantiate their claims and our internal investigations concluded their claims were simply wrong." AWS CEO Andy Jassy added: "[The] Bloomberg story is wrong about Amazon, too. They offered no proof, [the] story kept changing, and [they] showed no interest in our answers unless we could validate their theories. Reporters got played or took liberties. Bloomberg should retract." Major security agencies also denied the report, loudly and aggressively - the NSA went as far as to say it was "befuddled" by the article.
And yet, Bloomberg stood by its reporting, claiming it had conducted more than 100 interviews to build a picture of the hack. "Although details have been very tightly held, there is physical evidence out there in the world," one of the article’s authors, Michael Riley, said on Twitter at the time. "Now that details are out, it will be hard to keep more from emerging," said Riley. And yet, so far, nothing has emerged. Supermicro pushed back against the claims, although Bloomberg said that it is likely the company had no idea about the hack. Third-party investigations firm Nardello & Co, hired by Supermicro to probe its supply chain, was unable to find any evidence of such a hack. As Supermicro has not taken Bloomberg to court, the claims have not been legally tested.
"The Bloomberg story is fake news, from our understanding. Nothing really happened. However, we keep paranoid and try to make sure our solutions are safe to our customers" In the immediate aftershock of the story, Supermicro's share price halved. An unknown number of customers left or paused purchases. Supermicro was forced to shift manufacturing out of China - although it now claims it was doing that anyway, partially because of the trade war. Slowly, but surely, it put the allegation in its rearview mirror. No evidence has been produced. Most of its major customers stayed with it, though Apple admitted it has left Supermicro, apparently for other reasons. Then Bloomberg struck again. This February, after more than two years, the publication doubled down on its original allegations, providing more context to its claims, but still no concrete proof. It said that the DoD found that its Supermicro servers were sending network data to China via secret chips way back in 2010, and that a similar incident occurred at Intel in 2014. It should be noted that when I visited an Intel data center in 2019, the company was still very much a fan of Supermicro - nearly all of the facility's 150,000 servers came from them, and Intel proudly displayed its disaggregated server joint venture, dreamed up after a talk with Liang in 2016. "I met with Liang, and then six weeks later we had 10,000 servers deployed - it's never been done in the industry that fast," Intel IT CTO Shesha Krishnapura told me at the time, crediting the building block strategy. "Now we have more than 80,000 disaggregated servers." Bloomberg also alleges that the US government took steps to isolate Supermicro servers from classified networks, but in many cases left them unaltered to avoid alerting the Chinese government that they were onto them. Before showing their hand, US officials wanted to discover the hackers' aims. 
The cyber security community remains highly suspicious of the article, and Bloomberg has not responded to requests for comment from DCD and others. "The Bloomberg story is fake news, from our understanding," Liang told DCD. "We have never heard any customer have that experience, and we’ve never seen anything like that. For sure, we continue to improve the security of our production and supply chain, and make our testing system more robust to cover more angles.
"Nothing really happened. However, we keep paranoid and try to make sure our solutions are safe to our customers." This time, the report had less impact, only briefly denting the Supermicro share price. Now, the company is close to beating its all-time high of $40 a share, achieved back in 2015, before all the accounting issues and hacking drama. Admittedly, this has somewhat been boosted by a generous stock buyback initiative. While he would not be drawn on specific numbers, Liang said that the Bloomberg story did not lead to a mass customer exodus. "Nothing is more beautiful than the reality, the truth is the truth." And global political quarrels may yet prove a boon to Supermicro. Rival Inspur, which also tries to be both an OEM and an ODM (and has come up with the term Joint Design Manufacturing), is now in the crosshairs of the US government. China's first server manufacturer, Inspur quickly became its homeland's largest supplier by highlighting its lack of US ties amid the growing NSA surveillance scandal brought to light by Edward Snowden in 2014. But now the US claims its alleged links to the Chinese military-industrial complex are a threat to national security. In November 2020, President Donald Trump signed an executive order blocking US entities from investing in the world's third-largest server manufacturer. President Biden reaffirmed the ban in a new EO in June 2021. As of publication, the US has yet to put Inspur on its 'Entity List', alongside several other Chinese firms that Americans are blocked from investing in. Such an addition would bar US companies working with Inspur, including major chip firms, and could damage Inspur just like Huawei before it. The Commerce Department declined to comment about its decision-making process. Inspur previously denied it was linked to the Chinese government, but did not respond to requests for comment. 
For Liang, such global tensions are part of business, just the latest chapter in the story of a company that has overcome pandemics and shortages, claims of hacks, and accounting irregularities. He has seen it all before, and he will see it all again. He repeated: "I have been at this company for 28 years."
The Edge in Action Supplement
Real signs that the Edge is becoming a reality Data centers to go
Can CDNs deliver Edge?
The Edge business case
> What if you only need a data center for a few days or weeks?
> Content delivery networks already do a job much like the Edge
> Before investors spend money on the Edge, they need to see it work
Contents
28. Data centers to go What if you only need a data center for a few days or weeks?
30. Can CDNs deliver the Edge? Content delivery networks already do a job very much like Edge networks
32. Advertorial: A data-driven model to forecast energy consumption at the Edge
34. The Edge business case Before investors spend money on the Edge, they need to see it work
37. Edge radio needs government help Don't just ban Huawei, support the OpenRAN alternative
Is the Edge already here?
Every time we return to the Edge the picture becomes a little clearer. This time round, we have some of the clearest images so far of how the Edge will develop, along with a strong and surprising suggestion: While the industry prophesies the coming of the Edge, it may already be here.
What we mean is that all the elements of the Edge exist already. Miniaturized hardware is here, web applications are already able to serve users with a quick response, and the industry is drawing revenue from low-latency applications. What if we already have the Edge, and all that remains is to make it better?
Hardware to go
Some Edge use cases are time-limited. You may need to provide a wireless network at a festival, or a temporary data center to support relief workers when a hurricane strikes. For all these use cases, you want a portable Edge system that can be delivered to a site, connected to power and network, and switched on. Temporary data centers are a growing industry niche, but they are definitely an Edge which already exists. In the last 15 years, data centers too have shrunk, so what filled a shipping container in 2006 will now go in overhead luggage on a plane. Now we just need to make sure they can be kept secure, and reliable (p28).
Netflix already did it
When someone tells you that we need a whole new network to make the Edge work, ask how they watch their TV. All the streaming media services already deliver low-latency content to customers, cached at the Edge and increasingly enhanced with interactive features. Sure, networks need to be fast and responsive to meet the needs of the Edge, but Netflix and Prime are working fine already, and the likes of Akamai, Cloudflare, and Fastly are already supporting customer applications in their Edge hardware (p30).
Business Edge?
The current wave of Edge pitches is all about businesses delivering new applications. How will this translate into business models? Any new tech area feels like a gold rush: Prospectors rush into a virtual Klondike, but few strike lucky, and most lose their shirts and have to move on. We've got enough data now to start to see how this will play out in the Edge world - it may give us yet another bout of deja vu (p34).
Radio Edge
Meanwhile, mobile networks look like being one of the few genuinely innovative areas of the Edge. 5G provides genuinely new capacity and function which Edge applications can use. Perhaps more significantly, it's brought in a new approach to radio access which could shake up the mobile world. Open radio access networks (OpenRAN) are a radical threat to old-school telcos. But they may be the only way 5G can happen (p37).
Edge data centers to go What if you only need a data center for a few days or weeks? The big players have you covered. Andy Patrizio Contributor
When you think of an Edge data center, what comes to mind? An Equinix or DRT facility in a big city? A Vapor IO container sitting next to a cell tower? A couple of racks in a Tier 3 city building? Or perhaps a ruggedized cabinet in the corner of a factory? These are the common deployments of Edge data centers and they have one thing in common: they aren’t meant to move. That’s because the deployment is considered permanent. However, there is a growing number of scenarios that require temporary compute, and that means mobile, easily deployed and removed “data centers.” “If I were to build it once and leave it there and only use it once a year, probably not a good financial investment,” said Joseph Reele, vice president and solution architect for Schneider Electric. “This is more along the lines of unanticipated need, like a flood or hurricane, think of natural disasters that come in that kind of wipe out that infrastructure that may be in place there or not wipe it out, but severely impact its performance.” This is hardly new. Sun Microsystems introduced a 'data center to go' in 2006 called Project Blackbox. Its fortunes dried right up with Sun, though. These new portable data centers are much smaller than a shipping container. How small? “When I talk small I'm talking a four blade cluster that can go into the overhead bin on a United flight,” said Bharath Ramesh, head of global product management & strategy for converged Edge systems at HPE. “And if you have flown as much as I have, you know, I can't even put my backpack in there half the time. And I'm able to put a four-node Azure Stack HCI cluster with a commercial carry case, with the switching power and everything they need into that overhead. And that is small. I think we'd all agree,” he added.
"I can put a four blade cluster into the overhead bin on a United flight. I can't even put my backpack in there half the time, and I'm able to stow a four-node Azure Stack HCI cluster with switching and power" Thanks to technological advances, that four-node cluster - the HPE EL8000 - comes with 428 cores, 80 terabytes of storage, six terabytes of memory, 10 gigabit networking, dual net switches, redundant power supplies, security and manageability. “That's the same as our data center class in that package. So yes, you get a tremendous amount of performance in a package that small,” said Ramesh. Portable data centers are also much more customized and tightly integrated, meaning they are built to order by the customer, said Pierluca Chiodelli, VP
28 DCD Supplement • datacenterdynamics.com
of Edge engineering technology at Dell Technologies. Think hyperconverged infrastructure (HCI) to an even more extreme degree. “Internally they are just like a regular data center and the customer can choose the components to fit their needs,” he explained. “Power delivery and density, cooling options, resiliency, security, accessibility and operations can be optimized for a given customer or IT needs. Everything they select is part of a package that is shipped fully integrated out of the factory to be put in place for the customer.”
Still nascent
Dave McCarthy, vice president in IDC's worldwide infrastructure practice, said he has definitely been hearing about this trend towards temporary data centers for a variety of situations. "They're bringing together a bunch of data sources that they need to be able to, in real time, combine and understand what's happening, and then take action," he said.
The market is still fairly new, so McCarthy had no data points to share. He said the top five vendors in the space were Schneider Electric, Dell Technologies, HPE, Vertiv, and Microsoft. IDC sees these temporary Edge data centers in the following scenarios:
• Military/defense
• Disaster response
• Sporting events
• Media & entertainment production
• Oil & gas exploration
• Construction
Those use cases require two things: speed and value - or, in industry parlance, low latency and high bandwidth, said Reele. "4K video, as an example, takes a lot of bandwidth to carry that data. So as connected things start to become more and more around us, these use cases start to develop that demand," he said.
Extra security needed
But since they are mobile, it's easy for them to grow legs and walk away, so to speak. So these data centers need different management than your typical Edge center, with far less trust.
"We assume that the system is compromised at all times. It's a zero trust mentality. The system wakes up and assumes that firmware is compromised. And we validate that against keys that are burned into our silicon - we call it silicon Root of Trust," said Ramesh.
HPE uses a variety of security measures, such as ensuring that the system boot is metered and its data volumes are encrypted. All of its drives are secure encrypted drives, so if need be an admin can completely wipe the system if they think it's compromised.
"So what we're doing is creating layers
"Mobile data centers need management with far less trust. The system wakes up and assumes that firmware is compromised, and validates it against keys that are burned into silicon"
of an onion," said Ramesh. "And we're making sure that the culmination of all those layers makes it really hard, even if somebody gets physical access to the system.
"So our assumption is zero trust, because at the Edge there is no access-controlled data center protecting your system. It has to be secure by default."
But Jamie Bourassa, VP of Edge computing at Schneider, says he's seen these portable data centers in mobile rigs, which are really a central command post with desks, screens and communication links - and inside that rig is a miniature data center. Rather than a stand-alone unit, the portable data center is part of a larger mobile unit, and there is no stealing the compute without taking the whole rig - an unlikely event.
"As I think through the different use cases, say for concerts, you do see people bringing in very advanced video systems that are driving a tremendous amount of content and audio. They're also collecting information about the people that are there. And those are use cases where the data center is really consolidated around the actual rig for that particular vendor or that particular media company who's deploying it," he said.
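The boot-time validation Ramesh describes - firmware measured and checked against keys burned into silicon - can be sketched in a few lines. This is an illustrative sketch, not HPE's implementation: real roots of trust use asymmetric signatures verified by immutable ROM code, and the "fused" digest here is a stand-in.

```python
import hashlib
import hmac

# Stand-in for a digest burned into silicon fuses at manufacture.
# It is immutable, so malware that rewrites flash cannot also rewrite it.
SILICON_TRUSTED_DIGEST = hashlib.sha256(b"vendor-signed firmware v1.0").digest()

def measured_boot(firmware_image: bytes) -> bool:
    """Refuse to boot unless the firmware hashes to the fused value."""
    measured = hashlib.sha256(firmware_image).digest()
    # Constant-time comparison avoids leaking how many bytes matched
    return hmac.compare_digest(measured, SILICON_TRUSTED_DIGEST)

assert measured_boot(b"vendor-signed firmware v1.0")        # clean image boots
assert not measured_boot(b"firmware v1.0 + implant")        # tampered image refused
```

The point of the pattern is the asymmetry: an attacker with physical access can rewrite the firmware, but not the reference value it is checked against.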
Market is growing
IDC definitely sees growth in this area - it's just so early that there aren't a lot of numbers, McCarthy said. But the market is growing and expanding into new territories. "We're seeing some of this show up in education, and in oil and gas, especially on the exploration side. I think there are lots of situations where, whether it's just the temporary nature of it, or offering augmented capacity, we will continue to see it grow," he said.
Ramesh predicts that the next stage will be high performance computing and
"When data can be captured and used in real time, customers will want to improve the performance of their edge sites and the ability to house and operate compute, storage and networking will evolve"
supercomputers in a box, because the growth of data at the Edge is enormous, with potentially as many as 55 billion connected Edge devices by 2022. "That's a ginormous amount of data. And that data gravity is going to create a demand for Edge compute that's of the enterprise variety - not Edge compute that's in your phone or in your fitness tracker - because that enterprise compute is what's going to help businesses create new business models, or just transform the efficiency of what they're doing," he said.
Chiodelli thinks demand will increase as connectivity gets better and Edge devices allow for data to be captured faster and more effectively. "When data can be captured and used in real time, customers will want to continue to improve the performance of their Edge sites to get even more value from this, and the need for the localized ability to house and operate compute, storage and networking will evolve," he said.
Emergency response
On Tuesday, August 4, 2020, two explosions hit the port of Beirut, Lebanon, killing at least 200 people and injuring more than 5,000. By August 7, Télécoms Sans Frontières (TSF) had deployed mobile technology consisting of a satellite link, Internet service and a call center allowing disaster victims to report their medical, psychological and financial needs.
Founded in 1998, TSF has developed a continuously evolving portable tool kit that can be delivered to any disaster zone within days, using generator power if necessary, and satellite links courtesy of Inmarsat. At its most basic, TSF can deploy a cellular system in a rugged backpack that can help medical staff to coordinate operations and support cash transfers for beleaguered survivors. The organization is supported by donations.
Peter Judge Global Editor
Could CDNs deliver the Edge? Edge applications need new levels of network support. Can our existing content delivery networks provide that, or do we need a new approach?
Edge data centers will have to be more automated than their centralized "cloud" counterparts. That becomes obvious when you consider the logistics of sending trained support staff to work on the racks, when those racks are installed in 10,000 different cabinets across the country.
What might be less obvious is that the networks underlying the Edge will also have to implement a new level of automation, beyond the automation we already take for granted.
"The networks of yesterday were architected for very different kinds of workloads. Applications have evolved over the last four decades; networks have not evolved," says Cole Crawford, CEO of Vapor IO.
It's a bold statement - and in some ways it's almost an admission of defeat from a company which has been pitching micro data centers for the Edge for some years. Frankly, there are a lot of companies like this, including Vapor, EdgeMicro, EdgeInfra, and SmartEdge in the UK. Their idea was that IoT and other low-latency applications needed capacity in containers, at locations like cell towers.
Place a shipping container of IT at every cell tower, they said, and it would fill up with IT to support new apps ushered in by 5G. None of that rapid growth has happened - and it sounds very much like Crawford is admitting that the applications and the networks (and maybe the Edge business models) aren't up to it. Cell towers don't have the right power, and the networks don't handle applications the right way.
Despite that, and despite the level of hype around most Edge players, there's no doubt that in some cases the Edge is actually happening - and it is having to do unexpected things to the Internet.
The Internet was built for availability, not performance. That's why streaming media services like Netflix and Amazon Prime have small stacks of hardware in every location they can manage, so their customers can watch videos delivered from a nearby server, without lags and jitter.
It’s also why content delivery networks (CDNs), like Akamai, Cloudflare and Fastly, have grown up since the turn of the century. In recent years, they have rebranded themselves as Edge networks, promising low latency to customers’ eyeballs, and protection from DDoS attacks.
Running code at the Edge
Initially, Akamai and its rivals had a simple job: speeding up static content to users by caching it at the Edge. That changed for video services. Now, as we move towards the Edge of recent hype, the change is qualitative: CDN customers are running code at the Edge, so their applications can interact with users directly.
CDNs are well aware of their increasing importance. For many years Cloudflare has marketed itself as a buffer between the user and the service provider, preventing DDoS attacks as well as speeding interactions.
CDNs like Cloudflare market themselves as a buffer preventing DDoS attacks. That claim rang hollow on June 8 when a Fastly outage hurt us all
30 DCD Supplement • datacenterdynamics.com
"The networks of yesterday were architected for very different workloads. Applications have evolved over the last four decades; networks have not"
CDNs' shift towards the Edge is more than a rebranding exercise. They are pitching services which can add reliability and intelligence to the Edge.
That claim rang a little hollow on June 8, when an outage took down Fastly's CDN for several hours and the world looked on aghast, as a failure at this company, which few had heard of, took down a huge chunk of the visible Internet. Victims included major news, media and shopping outlets such as CNN, HBO, the New York Times and eBay. Even Amazon.com suffered, despite the fact that it runs its own CDN, called CloudFront.
Fastly had shipped out code containing a single error, which was triggered by a customer's action, instantly ending its service to everyone.
That was a sign. Before the pure-play Edge hype merchants could get their containers to the cell towers, the Edge had already arrived, and we already rely on it. This was the moment when the world understood that it trusts Edge networks it doesn't know about, and they don't always do what we need them to do. They market their reliability, but they are vulnerable.
One week after Fastly failed, Akamai upgraded its Edge network. This was not a kneejerk reaction, though. It was a response to the new Edge, driven by applications, not raw content. The company already provides "EdgeWorkers" - serverless compute containers that deploy customer code at the Edge. These have been upgraded with different performance options, and now have other support functions.
Akamai added a key value store, EdgeKV, to give applications at the Edge fast access to data they need. For instance, it could allow content companies to quickly look up their customers’ locations and deliver content which is optimized - and legal under local copyright and other laws. Akamai also updated its system to optimize API traffic: commands given to Edge applications, which are exploding on the Akamai service. In 2020, Akamai delivered more than 300 trillion API requests - 53 percent more than the previous year. To accelerate API communications, Akamai has added special-purpose hardware to its Edge nodes, with reserved capacity and prioritized routing to keep transactions happening. “With this latest platform release, we hope to accelerate this innovation by giving developers the ability to build truly transformative applications at the Edge,” said Lelah Manz, SVP of Akamai’s Edge Technology Group. She says that so far customers have been “scratching the surface,” and new ideas will emerge.
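The pattern behind a store like EdgeKV - replicated key-value data consulted by code running at the Edge, so each request is decided locally without a round trip to origin - can be illustrated with a generic sketch. This is not Akamai's EdgeWorkers API; the store contents, country codes and function names here are hypothetical.

```python
# Hypothetical replicated key-value store: country -> licensed catalog.
# At the Edge, this lookup is local, so the decision adds no origin latency.
GEO_RIGHTS_KV = {
    "DE": {"catalog": "eu", "subtitles": "de"},
    "US": {"catalog": "us", "subtitles": "en"},
}

def handle_request(client_country: str) -> dict:
    """Serve content that is optimized - and legal - for the viewer's region."""
    rights = GEO_RIGHTS_KV.get(client_country)
    if rights is None:
        # HTTP 451: unavailable for legal reasons - refused at the Edge itself
        return {"status": 451, "body": "content unavailable in your region"}
    return {"status": 200, "catalog": rights["catalog"],
            "subtitles": rights["subtitles"]}

assert handle_request("DE")["catalog"] == "eu"
assert handle_request("FR")["status"] == 451  # no license entry -> refuse locally
```

The design point is that the data, not just the content, is replicated outward: the policy decision moves to where the user is.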
Sticking plaster?
For some, the CDN approach, even upgraded in this way, is still a sticking plaster on the old Internet. Will it be able to handle the arrival of fully distributed applications, served to end users via 5G networks?
Cole Crawford believes that the Internet needs a major upgrade. He hasn't given up on containerized data centers - he still has plans to plant them in 36 US cities, with Zayo providing dark fiber between them. But his focus is on the network, and he says his new industry group, the Open Grid Alliance (OGA), will fix the Internet. "We have the technology and capability to fix the broken underlay network, to make it even more reliable and secure," he says. "The multi-cloud services grid is really about eradicating those deformities in the underlay."
If you think that sounds hugely ambitious, you're right. Crawford has a history of getting involved in industry groups which hope to change everything. He was there when OpenStack created an open source cloud platform, and when Open Compute proposed open source hardware for data centers. This time round, he's got VMware, Dell, MobiledgeX, and PacketFabric on board. And, whether OGA ever comes to anything,
or not, we may be seeing the first fruits of the new Edge-focused Internet in Vapor's latest announcements. Vapor's current schtick is the Kinetic Grid, an automated service delivered in the US, using VMware for the application-level stuff and Zayo's dark fiber for connectivity. According to Crawford and his partners at VMware, the Grid will manage Edge applications across hardware in (eventually) 36 US cities, prioritizing applications which need low latency, and adapting to local infrastructure failures - delivering the fast response times and resilience promised by CDNs.
The Kinetic Grid relies on an instrumented hardware layer, so real-time status can be passed through an API (Vapor's Synse) to the VMware platform, which can then orchestrate where and how applications are run.
"A high-performance, multi-location, highly-engineered layer 2-and-up network with telemetry is what you get from the Kinetic Grid," VMware VP of product marketing Stephen Spellicy told DCD. "This gets offered to enterprise customers who want to have highly reserved, high-performance Edge computing resources available for whatever applications and data they want."
In simulations, the Kinetic Grid responds automatically when a hypothetical Edge facility in Atlanta hits trouble. Applications that need low latency, like radio access networks (RANs), are prioritized locally. More forgiving ones, such as video conferencing, can be parcelled off to Pittsburgh.
These constraints are pre-programmed through so-called intent-based networking, and reveal an interesting thing about the new Edge that may be emerging. Applications which have a human user can be given a slower connection, because human reaction times are in milliseconds. Those dealing directly with machines get the fast lane of the Edge, because they have to get through. That will also hold true of the CDN approach, where Akamai's IoT Edge Connect will speed links between sensors and applications.
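The intent-based placement described above reduces to a simple scheduling rule: pick the nearest healthy site that still meets each application's latency budget. Here is a hypothetical sketch - the site names echo the Atlanta/Pittsburgh simulation, but the latency figures, budgets and function names are illustrative, not Vapor's or VMware's actual values or API.

```python
# Max tolerable round-trip latency per application class, in milliseconds.
# A RAN fronthaul budget is tiny; a human on a video call tolerates far more.
LATENCY_BUDGET_MS = {"ran": 2, "video_conf": 150}

# Hypothetical measured latency from the affected user region to each site
SITE_LATENCY_MS = {"atlanta": 1, "pittsburgh": 18}

def place(app_class: str, healthy_sites: set) -> str:
    """Pick the closest healthy site that satisfies the app's latency intent."""
    candidates = [(SITE_LATENCY_MS[s], s) for s in healthy_sites
                  if SITE_LATENCY_MS[s] <= LATENCY_BUDGET_MS[app_class]]
    if not candidates:
        raise RuntimeError(f"no site satisfies intent for {app_class}")
    return min(candidates)[1]  # lowest latency wins

# Normal operation: everything runs in Atlanta
assert place("video_conf", {"atlanta", "pittsburgh"}) == "atlanta"

# Atlanta fails: video conferencing is parcelled off to Pittsburgh...
assert place("video_conf", {"pittsburgh"}) == "pittsburgh"

# ...but a 2ms RAN intent cannot be met remotely - the grid must restore it locally
try:
    place("ran", {"pittsburgh"})
except RuntimeError:
    pass
```

The interesting property is the one the article notes: the machine-facing workloads, not the human-facing ones, claim the fast lane.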
Crawford reckons that CDNs will deploy their servers on his Kinetic Grid, to benefit from its underlying resilience. Like many other proposed technology upgrades, it’s only likely to succeed if the big players can’t make things work with their existing approaches.
A data-driven model to forecast energy consumption at the Edge The Edge makes up a big part of the sector’s growing energy demands. It’s time to get smart about that
Schneider Electric | Advertorial
Despite the image of data centers as large, power-hungry facilities, the reality is that much of the anticipated growth in energy consumption will occur at the Edge. While they are far less obtrusive than their cloud service counterparts, Edge data centers contain mission-critical applications and, as such, must be designed, built, and operated to similar, if not the same, standards of resilience, efficiency, and sustainability as hyperscale facilities.
According to Gartner, by 2025, 75 percent of enterprise data is expected to be created and processed at the Edge. IDC also predicts massive growth, with the worldwide Edge computing market expected to reach a value of $250.6 billion, at a compound annual growth rate (CAGR) of 12.5 percent.
There are several factors driving the proliferation of data and its consumption at the Edge. Among them is the demand for low-latency applications, including digital streaming from film, TV, and music platforms. The rise in IoT connected devices, artificial intelligence (AI), and machine learning is causing a surge in digital transformation across almost every industry. Many organizations are designing new experiences, reimagining business processes, and creating both new products and digital services that rely on innovative and resilient technologies to underpin them.
This is leading to more data being created and shared across the network, ultimately causing delays in transmission and download speeds, known as latency. To overcome such network congestion, data must, therefore, be stored and processed close to where it is generated and consumed, a trend known as Edge computing.
"There is a need to apply the same due diligence to reducing power consumption at the Edge as has long been the case with larger data centers"
Energy demands at the Edge
Today, various analyses suggest that data centers represent 1-2 percent of global electricity consumption, and that by 2030 as much as 3,000TWh of energy will be used by IT, doubling the potential global electrical consumption. At the Edge, deploying 100,000 data centers, each consuming 10kW of power, would create a power consumption of 1,000MW for the IT energy alone. Assuming a moderate power usage effectiveness (PUE) ratio of 1.5, these systems would also emit the equivalent of 800k tons of CO2. However, if each Edge facility were standardized and designed for a PUE of 1.1, we could reduce the total CO2 emissions to 580k tons annually.
Clearly, there is a need to apply the same due diligence to reducing power consumption at the Edge as has long been the case with larger data centers. Consequently, there is also a clear benefit in producing pre-integrated systems where standardization, modularity, performance, and sustainability form fundamental components.
With dependency on mission-critical infrastructure continuing to increase at a dramatic rate, it's crucial that energy efficiency and sustainability become critical factors in the roll out of Edge computing infrastructure. Greater accuracy, especially in terms of energy use, is essential, and operators cannot afford to hit and hope, or become more efficient as they go.
One of the challenges that emerges from the prolific growth at the Edge is the energy demands fueling the transformation. The cost of energy production and the need to shift to more sustainable operations has long required designers of large data centers to embrace sustainability strategies. Now the same attention must be paid to the design of smaller facilities at the Edge.
While energy management software remains critical, it is the design of these systems which offers end-users a truly practical means of ensuring sustainability at the Edge. It requires greater standardization, modularity, resilience, performance, and efficiency to form the building blocks of the Edge. These building blocks offer users the ability to design, build, and operate Edge data centers for greater sustainability, while energy efficient technologies such as Lithium-ion UPS and liquid cooling can help to reduce burdens on the system, overcome potential component failures, and allow for higher performance without negatively affecting PUE. Open and vendor-agnostic, next-generation data center infrastructure management (DCIM) platforms are also essential, not just from a remote monitoring perspective, but to drive energy efficiency, security, and uptime.
However, with Edge demands accelerating, how can industry professionals get an understanding of the impact Edge computing is having on the world's energy consumption, and of how focusing on efficiency and sustainability can help?
Forecasting Edge energy consumption
Schneider Electric has recently developed a new TradeOff Tool, the Data Center & Edge Global Energy Forecast, which helps users to model and create possible global energy consumption scenarios based on a set of pre-input assumptions. This includes the design of the physical infrastructure systems and its associated power usage effectiveness (PUE) rating, as well as anticipated growth of data center and Edge loads between now and 2040.
Based on these assumptions, the tool generates several forecast charts depicting total energy in TWh consumed by both Edge and centralized data centers, total IT energy (TWh), and the total energy mix comparing the percentages consumed in the Edge and central sectors, as well as the IT energy mix between the two.
In terms of data, the model utilizes a capacity analysis created by IDC in 2019. From this model, Schneider Electric was able to derive the likely ratio of centralized data centers versus Edge IT load in 2021, which was split between 65 percent at the center and 35 percent at the Edge. When predicting energy usage in 2040, the respective default ratios are 44 percent and 56 percent. Based on these assumptions, the growth rate for centralized and Edge data centers is calculated at 6 percent and 11 percent annually. The tool allows these values to be adjusted by the user to reflect differing growth rates as conditions and/or assumptions change.
To derive the non-IT energy consumed by activities such as cooling and lighting, PUE values are estimated based on the assumption that as technology continues to evolve, or becomes more efficient via future generations, anticipated PUE ratings will also improve. For example, the tool's default values assume that a centralized data center's PUE will improve from 1.35 in 2021 to 1.25 in 2040, and that the average PUE of Edge computing facilities will improve from 2.0 in 2021 to 1.5. PUE ratios are also adjustable, meaning the user can leverage the tool under different possible scenarios to see the impact that Edge computing has on energy consumption.
Further, by considering energy efficient deployment methodologies and embracing a culture of continuous innovation, operators can choose a more sustainable approach to Edge computing.
Wendy Torell, Schneider Electric Wendy Torell is senior research analyst at the Data Center Science Center - part of the IT Division at Schneider Electric
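The advertorial's headline numbers can be sanity-checked with a few lines of arithmetic. In this sketch, the CO2 factor is back-derived from the article's own 800k-ton figure rather than taken from any published grid intensity, and the 2021-2040 split projection simply compounds the stated 6 and 11 percent annual growth rates over 19 years.

```python
HOURS_PER_YEAR = 8760

def edge_fleet_energy_twh(n_sites: int, it_kw_per_site: float, pue: float) -> float:
    """Total annual energy (TWh) for a fleet of Edge sites at a given PUE."""
    it_mw = n_sites * it_kw_per_site / 1000          # 100,000 x 10kW = 1,000MW of IT
    return it_mw * pue * HOURS_PER_YEAR / 1e6        # MWh -> TWh

# Back-derive the implied emission factor from the article's 800k-ton baseline
baseline_twh = edge_fleet_energy_twh(100_000, 10, pue=1.5)   # ~13.1 TWh/year
tons_per_twh = 800_000 / baseline_twh

better_twh = edge_fleet_energy_twh(100_000, 10, pue=1.1)
print(round(tons_per_twh * better_twh / 1000))  # ~587 ktons, vs the quoted 580k

# The 65/35 -> 44/56 centralized/Edge split follows from 6% vs 11% CAGR, 2021-2040
central = 65 * 1.06 ** 19
edge = 35 * 1.11 ** 19
print(round(100 * central / (central + edge)),
      round(100 * edge / (central + edge)))       # -> 44 56, matching the defaults
```

The check shows the article's figures are internally consistent: the CO2 saving scales directly with the PUE ratio (1.1/1.5), and the 2040 energy-mix defaults are exactly what the two growth rates compound to.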
The Edge business case Investors want to spend money on the Edge - but first they need to see it work Sebastian Moss Deputy Editor
As money floods into the data center market, similar funding of Edge projects and companies has been conspicuously absent. After years of Edge expansion being perpetually on the horizon, investors are cautiously exploring the field, looking for opportunity in what could be the next big thing.
"Edge is still quite complex," Isaac Vaz, director of Aviva Investors, said at DCD>Building the Edge. "So to the extent that Edge messaging can be translated better to institutional capital and private equity, it will definitely widen the pool of capital that can come in to fund Edge computing.
"Otherwise, we have been building a pool of capital just from venture capitalists and private equity," he explained. "But if you really can start to communicate the benefits and maybe find a way to de-risk the investment in the Edge, then the pool of capital opens to more trillions of dollars from life insurance and pension capital. They are very keen to invest in these emerging technologies but, obviously, they are seeking some downside protection."
That downside protection comes in the form of stable business plans - not built on hope, but on specific customer sets and stable roadmaps built on predictable demand.
"We see demand coming in two distinct branches, one before 2025, and then one after," Anthony Milovantsev, managing director of investment advisor Altman Solon, said. "They're actually very discrete, different use cases and segments driving the two time periods. And in the first one, the next four years, we see the Edge being used by mobile network operators (MNOs), hyperscalers, and local managed service providers/SaaS players. And they all need Edge for different things."
MNOs are primarily focused on the 5G roll out over the next few years, Milovantsev said. "They quite literally need physical space in a dispersed fashion to
"When we normally go to the banks that we use for our traditional data centers, it's very easy to sit down with them... The problem you have at the moment with Edge is this is very unknown territory. This is very much like the Klondike of Europe"
make this basic consumer product work. And I can also tell you very frankly, from our discussions with them, that some of them are still debating how much they need."
While it's not clear how much of the Edge telcos will use, and how quickly 5G will expand, "they'll be the first users and will really make the revenues flow in the early years," Milovantsev believes.
Then come the hyperscalers, who are still piloting projects and working out business plans. "They're really testing out their user base, what it's like to have very localized availability zones, and pushing through image rendering and gaming use cases," he said. "It's going to be smaller in the early years. They're just testing the pricing models: how much they're paying for infrastructure, and how much their customers will pay them for the actual services."
Once they settle on what they think works - likely involving a partnership with telcos - hyperscalers could bring the same aggressive tactics and big wallets we have seen in the standard data center market.
"And then the third part to that is local managed service providers and SaaS players, where they're providing services to enterprise customers, and really seeing that latency really matters," Milovantsev said. "And we're pretty confident that this is going to be pretty robust in four years. There are already revenue flows in the ecosystem from these three."
Then comes the second wave, after 2025 or so, Milovantsev believes. "This is where things get very interesting, but
these use cases are a lot more into the future. There'll be a lot of buzzwords here: one day there will be 6G, IoT, autonomous car driving, real-time advertising, remote healthcare."
And then, hopefully, enterprises will come on board. "When companies start to really understand that this is going to benefit their customers, and is mission critical for them, then the ecosystem is gonna take off - we think it's going to be in that second tranche. If it takes off earlier than that, that's even better."
But even with this optimism, and an understanding that the Edge will have to grow steadily in waves, it has been hard for those building such networks to raise money.
"When we normally go to the banks that we use for our traditional data centers, it's very easy to sit down with them and show them where the revenue is going to come in, what our capex is, what our operating costs will be, and where our returns to make the whole project work are," Dataplex CEO and co-founder Eddie Kilbane said. "The problem you have at the moment with Edge is this is very unknown territory.
"This is very much like the Klondike of Europe," he said, referencing the frenzied gold rush of the 1890s. "We're going to build a network where we're asking people to put capital out before we start, so that we can get the hyperscalers and the SaaS-type companies to put their equipment in locations on the basis they can see how it works, and work out in their own minds what their requirements are in terms of scale and size."
Dataplex hopes to build up to 1,000
'ENode' Edge data centers across European city centers, each home to 10-18 racks. "We're looking for initial seed capital for the 10 sites across the UK, so we can build a platform," Kilbane said. "We've had a number of hyperscalers say that if we build it, they would provide racks in those locations so that they can trial this and see how this works.
"Once that happens, then it is easy to go to something like Aviva Investors to say 'look, here's the concept. Here's the project. These are the people who have worked on the trial. These are the results. And these are the long term commitments they're going to make. Are you happy to commit now to that first $100 million for the next rollout?'"
It's just about getting that initial ball rolling, Kilbane argues. "We're just trying to find that right partner that can take us all the way," he said. "It's been a long journey, a lot of slide decks, a lot of meetings, a lot of 'thank you, we'll get back to you.' It's been a very slow burn."
The one thing the company found proved successful was to mention 5G a lot in pitches. "Even though 5G is only a medium for communicating, if we don't bring a very strong 5G aspect into the whole equation, you lose part of the room in terms of the investment group," he said. It is also getting easier due to growing publicity about the Edge, and generally growing hype, he said. "It's tough, but when you're a first adopter and a first mover, you're always gonna have to do quite a few presentations before you hit the mother lode."
Kilbane admitted that things are slightly different in the US, where mobile operators or mobile tower companies are looking to use their existing infrastructure and are more actively investing in the Edge. That doesn't work in Europe, he noted: "The European tower business is completely different in so far as the masts tend to be on top of roofs.
This Edge network really needs to be at the ground, where it's closest to the fiber and closest to the public and the eyes, and then connected to 5G for connectivity."
While US telcos are looking to use some of their infrastructure for Edge, Altman Solon's Milovantsev noted that, all the world over, people are cautious about
“It's tough, but when you’re a first adopter and a first mover, you're always gonna have to do quite a few presentations before you hit the mother lode”
anything that might require them to spend money. "They're very nervous about anything too cutting edge, because this is an industry that largely got burned by the promise of 4G," he said. "They spent billions of dollars of capex on it, and largely just invited over-the-top applications to kill old revenue streams.
"They have physical assets that they can use - they don't have to be Tier IV data centers, they can be very simple telco PoPs," he explained. "And within that you can place some servers and call it an Edge solution, just to test out the market. They're doing this without spending the big check just yet."
Another way they are exploring the space is by selling off their infrastructure and land, putting another group in charge of the costs and risks. "These carveouts can be in the dozens, hundreds or thousands of facilities, as they basically have physical assets all up and down a country," Milovantsev said. "And in that scenario, they'd have somebody else invest and take over, and the telco would be the anchor tenant - they could get the benefit of Edge proliferation and be customers without doing all the dirty work."
In May, we got to see this theory in action - European telecoms giant Liberty Global partnered with digital infrastructure fund Digital Colony to launch a huge European Edge data center joint venture, AtlasEdge Data Centres. The company will take over Liberty Global's data centers and use Liberty companies like Virgin Media, Sunrise UPC, and O2 (if a pending $38bn merger is approved) as anchor tenants. With Liberty dealing with $28bn in debt, the spun-out business can operate more freely, without the tightening noose of repayments.
"Combining Liberty Global's technical real estate and track record in building successful, sustainable businesses with Digital Colony's expertise in digital infrastructure investment creates an exciting platform for growth that will deliver long-term value," Liberty Global CEO Mike Fries said at the time.
"The proposed joint venture presents significant growth opportunities as we look to build this business into a leading European Edge data center operator." A huge entrant to the nascent market does not scare Digiplex, Kilbane said. “[With something like this] we may be the second adopter, not the first. But there's a market for everybody. In fact, it makes life easier for me because someone else with more money than I can get access to has
already proven the very model I've been taking around and showing people for the last 18 months.” Now that the first players are building out their Edge, the clock is ticking on whether it will follow the careful roadmap envisioned by Milovantsev, or
if it will follow the Klondike Gold Rush analogy Kilbane referenced earlier: where prospectors flooded into a market in search of wealth, only for most to strike out. Within a few years, the majority had left in search of the next big thing.
Dan Swinhoe News Editor
Edge radio networks need government support Banning Huawei from mobile infrastructure might cripple the Edge rollout - unless governments can give real support to the open alternatives
The radio access network (RAN) is the technology that provides the connection between devices and the core network. That’s important for the Edge, because flagship Edge applications like the Internet of Things connect up using cellular radio networks. RAN has come to the fore now, because mobile networks are on the brink of a major upgrade. They are moving to 5G - and Edge players need them to make that move. But that upgrade will be costly - and RAN components are the most expensive part of a telco network. Operators want a way to upgrade their systems and minimize cost - and the way out looks like using a more open approach than the traditional legacy end-to-end integrated RAN systems they have been tied to till now. “Legacy RAN solutions are notoriously
restrictive, as major components of a RAN must come from the same vendor’s system,” says Tuan Phan, partner at Zero Friction, and member of the Emerging Trends Working Group at IT governance group ISACA. “This vendor lock-in constrains feature delivery, stifles innovation, limits efficiency and inflates costs.” The industry has an initiative aiming to help break operators free of RAN lock-in: OpenRAN. First, the RAN stack is virtualized, making a virtual RAN or V-RAN. Then open interfaces are created that allow for interoperability, so telcos can in theory adopt
a best-of-breed approach to designing their networks, selecting better components at better price points, rather than being compelled to buy the whole system from a very select number of end-to-end providers. “We cannot sustain technology innovation without ensuring interoperability between designs. No single company can be good at everything,” says Phan. “The OpenRAN approach allows the telcos to address 5G demand by leveraging an ecosystem of vendors that collaborate using the same standards, gaining the benefits of cloud-based architecture. The bottom line is that the telcos
can get to the 5G market quicker, achieve scaling at a significantly incrementally smaller cost basis, and reduce their supply and implementation risks.” Operators and governments are keen to open up the supply chain ecosystem in the telecoms space so a large number of vendors can offer better quality components at better prices. Operators want this for the cost benefits. Governments want to address national security concerns: closed mobile networks can become tied to a single company. In the case of Huawei, that raises suspicions. But while the technology is progressing, government intervention needs to be more proactive than simply banning unwanted vendors.
OpenRAN needs a change of mindset

In the OpenRAN space, Mavenir, Parallel Wireless and Altiostar Networks have found success, while mainstream players like HPE and Dell are moving to provide commodity hardware to run the virtualized RANs. In that respect, diversification efforts have had some impact, and the next twelve months will be interesting if other players like NEC, Samsung, and Fujitsu make a concerted effort to move into Europe, says Paul Graham, Partner for Technology, Media and Telecommunications at the law firm Fieldfisher.

“People are finding these niches, and now they can do a whole range of them because they’re not trying to balance R&D teams,” says Paul Rhodes, OpenRAN and 5G Principal Consultant at systems integrator World Wide Technology (WWT). He thinks OpenRAN gives new companies an opportunity, and improves product quality: “In an end-to-end approach there are some rubbish products, but poor value lower
grade products get weeded out in OpenRAN. They simply won't survive.” However, even if there is more choice on the market, operators might still have a limit in terms of how many vendors they are willing to deploy on any one network, partly for pragmatic reasons of managing complexity. “Bear in mind that most operators like to have a primary source and a secondary source,” BT’s CTO Howard Watson said earlier in the year. “It's unlikely that all of us will start deploying equipment from four or five different vendors because the operational challenge of the person in the van maintaining that tends to limit you to a choice of two.” Working with an installed base makes it harder, says Prakash Sangam, founder and principal at Tantra Analyst. Operators such as Verizon, AT&T and T-Mobile will find complex brownfield sites more of a challenge than greenfield deployments. “The biggest challenge for them is lack of
‘single throat to choke,’ in this architecture,” he says. “They have gotten accustomed to dealing with a single vendor for the whole system for integration, troubleshooting and others, for decades. It’s hard to change that very quickly.” At the same time, the switch to open standards could help large players to consolidate. Incumbent vendors could lose market share in some areas, because operators can pick and choose components, but if products are more interoperable, they will find it easier to buy rival products and integrate them into existing offerings - even if the ecosystem gets too large to own. “They may have the balance sheet to actually swallow some of these larger companies, and perhaps take out a part of the ecosystem that's doing particularly well,” says WWT’s Rhodes. “I don't think they're going to be taking out HPE or Dell, but because they are moving to a V-RAN architecture, almost anybody in the OpenRAN ecosystem would be a wise
buy for them.” “Are there going to be some good ripe targets for acquisition? Absolutely,” adds Kalyan Sundhar, VP & GM of 5G Edge To Core Products at Keysight Technologies, a test and measurement company. “They might gobble some up, but I think the ecosystem is quite vast right now and it's not going to be easy to just take a few and then you're done and it’s a closed thing again. I think it's just too wide of a market with too many players.”
Huawei bans could help or hinder OpenRAN adoption

Amid increasing global tension, the role that large Chinese vendors play in critical telecoms infrastructure in many countries has led to concern. And while these fears highlight the need for greater plurality of choice when it comes to hardware, direct intervention to remove certain players from the market might not naturally create a wider ecosystem to fill the void.

In the UK, several years of reviews into supply chain diversification and the reliance on ‘high-risk vendors’ such as Huawei and ZTE culminated in the UK Government banning UK telecoms operators from using such equipment. “About a year ago, we were saying no involvement in the core network and only 35 percent in the rest of the network,” says Fieldfisher’s Graham. “That’s changed and the operators have to strip Huawei equipment out of the core network by 2023 and the entire network by 2027.”

However, stripping out Huawei from a network might not help OpenRAN providers, because the bans may have come too soon for OpenRAN to be a ready-made replacement. “In terms of macro RAN equipment, there are essentially two viable infrastructure vendors in the UK - Nokia and Ericsson,” explains Graham. “A lot of the operators have got other things to think about. Instead of OpenRAN technology, they've got this massive redeployment exercise to do by 2027. That's imposed a quite significant cost on the operators that they've got to wear over the next five to six years.”

The need to switch out now-verboten hardware has highlighted the need for a broader supply chain and ecosystem, but the urgent timescale means operators can’t wait for OpenRAN technology to mature, and could risk deploying it in dense urban environments too soon.
“Rip-and-replace directives from several Western governments have certainly highlighted just how little competition there is in the mobile network infrastructure market and the need to diversify the supply chain,” says John Baker, SVP Business
Development, Mavenir. “The deadlines imposed in some of those directives may have the unintended consequence of forcing some operators to turn to the incumbent duopoly rather than taking advantage of the improved flexibility and versatility of OpenRAN.” However, it’s not all bad news. Keysight’s Sundhar says that Nokia and Ericsson’s proprietary systems may pick up some business Huawei loses through these rip-and-replace directives, but the move will create opportunities for other open providers. “It’s not all Huawei losing business and Ericsson, Nokia, and Samsung gaining everywhere,” he says. “It's also left the door open for the movement into OpenRAN.”
Government needs to understand OpenRAN

Given the critical nature of telecoms infrastructure, and the incredibly expensive nature of its roll-out, governments could have a massive role in the fate of OpenRAN. Amid those mandated bans of certain providers, governments should look to create an environment where operators can test and develop OpenRAN deployments with little cost or risk, or face more potential consolidation in the market. While the OpenRAN technology is still maturing, especially around areas such as performance and much-needed interoperability, governments should consider providing operators more in the way of sandboxes and trial locations so they can have a way to validate results.

“What the operators want is for the government to fund testbed trials,” says Fieldfisher’s Graham, “so that this new technology can be properly tested. You want the funding to be in place, and the right sort of commercial incentives, and then really you want the government to step away. You don't want them that heavily involved in any strategy decisions about betting on a particular type of technology.”

Keysight’s Sundhar says the fact that there are so many players entering the space creates many variables around performance and interoperability, and so test and validation centers are required so operators can understand how vendors’ components might work together in a real-world setting. “There is now a necessity for having the certified test centers, and they cannot be driven just by a network equipment vendor,”
he says. “Those give you an early certification that you are able to mix and match pieces and they are able to work with the rest of these particular components, that gives the operators a verdict that yes, collectively, these four or five things can work together.”

In February, Germany allocated over €300 million for the development of OpenRAN technology as part of its €130 billion post-Covid stimulus package, while the UK Government recently launched a SmartRAN Open Network Interoperability Centre (SONIC) to allow existing and emerging suppliers to ‘come together to test and demonstrate interoperable solutions’. “With the first vendors on board and more expected to follow, this is an exciting step in the journey of the SONIC project,” said Simon Saunders, director of emerging and online technology at Ofcom. “SONIC offers a unique opportunity to shape what the UK’s supply chains of the future could look like and support innovation. So we encourage companies across the telecoms industry to express their interest in participating, as we get set for the project to go live in May.”

Similar efforts are going on in the US. In November 2020, the US House approved a bill to provide $750 million of Federal funding grants to support OpenRAN 5G deployments in the United States. In February, the FCC said it was starting an inquiry on whether to develop a policy on OpenRAN and how it can support development and deployment of the technology in the US. “OpenRAN has emerged as one promising path to drive 5G security and innovation in the United States,” said FCC acting chairwoman Jessica Rosenworcel. “With this inquiry, we will start to compile a record about how we can secure our vulnerable supply chains once and for all, and revitalize the nation’s 5G leadership and innovation.”

If this goes international, then things look promising, says WWT’s Rhodes. “That really is going to be the icing on the cake that then encourages other countries to start going.
If the right incentives come and they align between the UK and Germany, for example, then we could see something with the first commercial OpenRAN deployments happening this year.”

For a movement based around the word “open,” the politics can look pretty opaque. But the Edge simply won’t reach its full potential without some coherent approach to opening up radio access networks.
Interplanetary Internet, digital zebras, and the disconnected Edge
Sebastian Moss Editor
Delay-tolerant networks ask you to imagine an Internet where connectivity isn’t guaranteed
Were you to have traveled through central Kenya in the early 2000s, you may have come across something highly unusual: a dazzle of zebras sporting strange collars. The animals were not part of a bizarre fashion show, but rather early pioneers of a technology that could one day span the Solar System, connecting other planets to one giant network.

The connected world we inhabit today is based on instant gratification. "The big issue is that the Internet protocols that are on the TCP/IP stack were designed with a paradigm that ‘I can guarantee that I send the information, and that information will be received, and I will get an acknowledgment during an amount of time that is quite small,’” Professor Vasco N. G. J. Soares explained, over a choppy video call that repeatedly reminded us what happens when that paradigm breaks down.

Fifty years on from its invention, the Transmission Control Protocol (TCP) still serves as the de facto backbone of how our connected age operates. But there are many places where such a setup is not economically or physically possible. The plains of Africa are one such locale, especially in 2001 when Kenya had virtually no rural cellular connectivity, and satellite connectivity required bulky, power-hungry, and expensive equipment. Zebras care not for connectivity; they don’t plan their movements around where to find the best WiFi signal. And that was a problem for an international group of zoologists and technologists who wanted to track them.
Faced with a landscape devoid of connection, the team had to come up with a way to study, track, and collect data on zebras - and get that data back from the field. To pull this off, the group turned to a technology first conceived in the 1990s: delay- or disruption-tolerant networking (DTN). At its core is the idea of ‘store and forward,’ where information is passed from node to node and then stored when connectivity falls apart, before being sent to the next link in the chain. Instead of an end-to-end network, it is a careful hop-by-hop approach enabling asynchronous delivery.

In the case of ZebraNet, each equine served as a node, equipped with a collar featuring solar panels, GPS, a small CPU, flash memory, and radio connectivity. Instead of communicating with satellites or telecoms infrastructure, the migratory habits of each zebra are stored on the collar. Then, when the animal is near another electronic equine, it shares the data. This continues until one of the zebras passes a mobile base station - perhaps attached to a Range Rover - and it uploads all that it has collected.

"It was one base station for about 10-12 collars," project member and Princeton
University Professor Margaret Martonosi told DCD. “The main limit on storage capacity had to do with the physical design of what would fit on the circuit board and inside the collar module. Our early papers did some simulation-based estimates regarding storage requirements and likely delivery rates.” It's an idea that sounds simple on the face of it, but one that requires a surprisingly complex and thought-out approach to DTN, especially with more ambitious deployments. “How much information you need to store depends on the application,” Soares explained. “So this means that you need to study the application that you're going to enable using this type of connection, and then the amount of storage, and also the technologies that are going to be used to exchange information between the devices.” You also need to decide how to get data from A, the initial collection point, to Z, the end-user or the wider network. How do you ensure that it travels an efficient route between moving and disconnecting nodes, without sending it down dead ends or causing a bottleneck somewhere in the middle?
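Stripped to its essentials, the store-and-forward pattern ZebraNet relied on can be sketched in a few lines of Python. This is an illustration of the concept only, not ZebraNet's real firmware - the class, names, and sample reading are all invented:

```python
# Illustrative sketch of DTN 'store and forward' - not ZebraNet's actual
# code. Each node buffers data locally and hands it over whenever it
# happens to meet a peer; nothing assumes an end-to-end connection.

class Node:
    def __init__(self, name):
        self.name = name
        self.storage = []              # bundles held until the next contact

    def record(self, reading):
        """Collect a reading locally; no connectivity is assumed."""
        self.storage.append(reading)

    def encounter(self, peer):
        """On contact, forward everything we hold (single-copy variant:
        we drop our own copy once the peer has accepted it)."""
        peer.storage.extend(self.storage)
        self.storage = []

# A reading hops zebra -> zebra -> base station, asynchronously.
z1, z2, base = Node("zebra-1"), Node("zebra-2"), Node("base-station")
z1.record("gps: -0.45, 36.90")
z1.encounter(z2)     # two zebras meet on the plain
z2.encounter(base)   # later, zebra-2 wanders past the mobile base station
print(base.storage)
```

The data reaches the base station without any node ever having a live path to it - which is exactly why how long each hop must hold data, and how much storage that demands, becomes the central design question.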
This remains an area of vigorous debate, with multiple approaches to operating a DTN currently being pitched. The most basic is single-copy routing, where each node carries the bundle forward to the next node it encounters, until it reaches its final destination. Adding geographic routing could mean that a node only sends the bundle forward when it meets a node that is physically closer to the destination, or is heading in the right direction.

Then there are multiple-copy routing protocols, in which each node sends the bundle to several others. Versions of this approach like the ‘epidemic protocol’ spread data across a network rapidly, but risk flooding all the nodes. "In a scenario that has infinite resources, this will be the best protocol," Soares said. "But in reality, it's not a good choice because it will exhaust the bandwidth and the storage on all the nodes."

‘Spray and Wait’ tries to build on this by adding limits to control the flooding. Another approach, ‘PRoPHET,’ applies probabilistic routing to nodes that move in non-random patterns. For example, after enough study, it would be possible to predict general movement patterns of zebras, and build a routing protocol upon them. Each time data travels through the network, it is used to update the probabilistic routing - although this can make the protocol more brittle to sudden, unexpected changes.

For his work at the Instituto Politécnico de Castelo Branco, Soares combined geographic routing with Spray and Wait to form the routing protocol ‘GeoSpray.’ "My scenario was assuming vehicles moving and data traveling between them, and so I would need the geographic information," he said.
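The trade-off Soares describes can be made concrete with a toy contact simulation. Everything here - the node count, the uniform random encounters, the copy budget - is invented for illustration; real protocols add time-to-live limits, acknowledgments, and geographic hints:

```python
import random

# Toy contact simulation contrasting epidemic routing with a binary
# 'Spray and Wait'. Illustrative only: random pairwise contacts, no
# geography, no TTLs, one bundle originating at node 0.

def simulate(protocol, n_nodes=20, contacts=500, spray_copies=4, seed=7):
    rng = random.Random(seed)
    has_copy = {0}                 # node 0 generated the bundle
    tokens = {0: spray_copies}     # copy budget (Spray and Wait only)
    dest = n_nodes - 1
    for _ in range(contacts):
        a, b = rng.sample(range(n_nodes), 2)   # two nodes meet
        for giver, taker in ((a, b), (b, a)):
            if giver not in has_copy or taker in has_copy:
                continue
            if protocol == "epidemic":
                has_copy.add(taker)            # flood: copy to everyone met
            elif taker == dest:
                has_copy.add(taker)            # 'wait' phase: direct delivery
            elif tokens.get(giver, 0) > 1:
                half = tokens[giver] // 2      # binary spray: split the budget
                tokens[giver] -= half
                tokens[taker] = half
                has_copy.add(taker)
        if dest in has_copy:
            break
    return dest in has_copy, len(has_copy)

for proto in ("epidemic", "spray"):
    delivered, copies = simulate(proto)
    print(f"{proto:9s} delivered={delivered} nodes holding a copy={copies}")
```

Epidemic routing replicates the bundle to every node encountered, while Spray and Wait caps the total number of copies - trading some delivery speed for bounded storage and bandwidth, which is Soares' point about exhausting resources.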
"A single copy is the best option if you can guarantee connection to the destination, but sometimes you have to use multiples to ensure that you will find someone that will get it there for you eventually.” Each approach, the amount of storage, and how long nodes store data before deleting it, has to be crafted for the application.

In South Africa, a DTN deployment was used to connect rural areas. Communities used e-kiosks to send emails, but the data was just stored on the system. When a school bus passed, the kiosk transferred the data to the bus, which brought it to the city and connected it to the wider net. When the bus returned, it brought any replies with it. But as we connect every inch of the
globe, such spaces for DTN are shrinking. “The spread of cell connectivity across so much of the world has certainly been helpful for overall connectivity and does supplant DTN to a degree,” Martonosi admitted. “On the other hand, the cost of cell connectivity is still high (often prohibitively so) for many people. From a cost perspective, collaborative dynamic DTNs and mesh networks seem like a very helpful technology direction.”

Following ZebraNet, Martonosi worked on C-Link, a DTN system to connect rural parts of Nicaragua, and SignalGuru, which shared vehicle data. Due to increasing connectivity, such efforts "have not caught on widely," she said. "But you can see aspects of these techniques still around - for example, the Bluetooth-based contact tracing apps for Covid-19 are not dissimilar from some aspects of ZebraNet and C-Link’s design."

Terrestrial DTN proponents now primarily focus on low-power IoT deployments, or situations where networks have been impacted - such as natural disasters, or battlefields. Indeed, the US Defense Advanced Research Projects Agency (DARPA) is one of the largest funders of DTN, fearing that the connectivity-reliant US military could be easily disrupted. "DTN represents a fundamental shift in networking protocols that will result in military networks that function reliably and securely, even in the changing conditions and challenging environments where our troops must succeed now and in the future," BBN Technologies CEO Tad Elmer said after his company received $8.9m from DARPA to explore battlefield DTN. The agency has published much of its work, but whether all of its research is out in the open is yet to be seen.

However, DARPA was also instrumental in funding the development of the TCP/IP-based Internet, which was carried out in public. "The irony is that when Bob [Kahn] and I started to work on the Internet, we published our documentation in 1974,"
TCP/IP co-creator Vint Cerf told DCD. "Right in the middle of the Cold War, we laid out how it all works.

“And then all of the subsequent work, of course, was done in the open as well. That was based on the belief that if the Defense Department actually wanted to use this technology, it would need to have its allies use it as well, otherwise you wouldn't have interoperability for this command and control infrastructure.”

Then, as the technology developed, “I also came to the conclusion that the general public should have access to this,” Cerf recalled. “And so we opened it up in 1989, and the first commercial services started. The same argument can be made for the Bundle Protocol.”

With the DTN Bundle Protocol (published as RFC 5050) Cerf is not content with ushering in the connected planet. He eyes other worlds entirely. “In order to effectively support manned and robotic space exploration, you need communications, both for command of the spacecraft and to get the data back,” he said. “And if you can't get the data back, why the hell are we going out there? So my view has always been ‘let's build up a richer capability for communication than point-to-point radio links, and/or bent-pipe relays.’ That's what's driven me since 1998.”

DTN is perfect for space, where delay is inevitable. Planets, satellites, and spacecraft are far apart, always in motion, and their relative distances are constantly in flux. “When two things are far enough apart, and they are in motion, you have to aim ahead of where it is - it’s like shooting a moving target,” Cerf said. “It has to arrive there when the spacecraft actually gets to where the signal is propagating.”

Across such vast distances, “the notion of 'now' is very broken in these kinds of large delay environments,” he noted, adding that the harsh conditions of space also meant that disruptions were possible. What we use now to connect our few solar assets relies primarily on line-of-sight
communication and a fragile network of overstretched ground stations. With the Bundle Protocol, Cerf and the InterPlanetary Internet Special Interest Group (IPNSIG) of the Internet Society hope to make a larger and more ambitious network possible in space. An earlier protocol, CFDP, has already been successfully trialed by the Martian rovers Spirit and Opportunity, while the International Space Station tested out the Bundle Protocol in 2016. “We had onboard experiments going on, and we were able to use the interplanetary protocol to move data back and forth - commands up to the experiments, and data back down again,” Cerf said.

With the Artemis Moon program, the Bundle Protocol may prove crucial to connecting the far side of the Moon, as well as nodes blocked from line-of-sight by craters. “Artemis may be the critical turning point for the interplanetary system, because I believe that will end up being a requirement in order to successfully prosecute that mission.” DTN could form the backbone of Artemis, LunaNet, and the European Space Agency’s Project Moonlight. As humanity heads into space once again, this time it will expect sufficient communication capabilities.

“We can smell the success of all this; we can see how we can make it work,” Cerf said. “And as we overcome various and sundry barriers, the biggest one right now, in my view, is just getting commercial implementations in place so that there are off-the-shelf implementations available to anyone who wants to design and build a spacecraft.”

There’s still a lot to work out when operating at astronomical distances, of course. “Because of the variable delay and the very large potential delay, the domain name system (DNS) doesn't work for this kind of operation,” Cerf said. “So we've ended up with kind of a two-step resolution for identifiers. First you have to
figure out which planet you are going to, and then after that you can do the mapping from the identifier to an address at that locale, where you can actually send the data.

“In the [terrestrial] Internet protocols, you do a one-step lookup - you take the domain name, you do a lookup in the DNS, you get an IP address back and then you open a TCP connection to that target. Here, we do two steps before we can figure out where the actual target is.”

Again, as with the zebras, cars, and other DTN deployments, understanding how much storage each space node should have will be crucial to its effective operation. But working that out is still an open question. “If I know where the nodes are, and I know the physics, and I know what the data rates could be, how do I know I have a network which is capable of supporting the demand?” Cerf asked. “So I went to the best possible source for this question, Leonard Kleinrock at UCLA."

Kleinrock is the father of queuing theory and packet switching, and one of the key people behind ARPANET. “He's still very, very active - he's 87,
but still blasting on,” said Cerf. “I sent him a note saying, ‘look, here's the problem, I've got this collection of nodes, and I've got a traffic matrix, and I have this DTN environment, how do I calculate the capacity of the system so that I know I'm not gonna overwhelm it?” Two days later, Kleinrock replied with “two pages of dense math saying, ‘okay, here's how you formulate this problem,’” Cerf laughed. Kleinrock shared with DCD the October 2020 email exchange in which the two Internet pioneers debate what Kleinrock described as an "interesting and reasonably unorthodox question." "Here's our situation," Cerf said in the email, outlining the immense difficulty of system design in a network where just the distance of Earth to Mars can vary from 34 million to 249 million miles. “The discrete nature of this problem vs continuous and statistical seems to make it much harder." Kleinrock provided some calculations (pictured), and referenced earlier work with Mario Gerla and Luigi Fratta on a Flow Deviation algorithm. He told DCD: “It suggests the algorithm could be used where the capacities are changing, which means that you constantly run this algorithm as the capacity is either predictably changing or dynamically changing." Cerf said that Kleinrock proved immensely helpful. “Now, I didn't get the whole answer. I still don't have the whole answer,” he said. “But, I know I have one of the best minds in the business looking at the problem.” As with many other aspects of the Interplanetary Internet, “this is not a solved problem,” Cerf said. “But we're on it.”
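The flavor of the question Cerf posed can be illustrated with arithmetic far simpler than Kleinrock's. This toy check - not the Flow Deviation algorithm, and every figure below is invented - asks whether a schedule of contact windows can carry a node's traffic, and how much it must buffer between contacts:

```python
# Toy feasibility check for a DTN contact plan. Illustrative only:
# one link, fixed contact windows, constant data generation rate.

contacts = [                 # (start_s, end_s, rate_bits_per_s) for one link
    (0, 600, 250_000),
    (5_400, 6_000, 250_000),
    (10_800, 11_400, 500_000),
]
demand_bps = 40_000          # average bits/s the node generates
horizon_s = 12_000           # planning horizon in seconds

capacity = sum((end - start) * rate for start, end, rate in contacts)
demand = demand_bps * horizon_s
print(f"capacity {capacity / 1e6:.0f} Mb vs demand {demand / 1e6:.0f} Mb:",
      "feasible" if capacity >= demand else "infeasible")

# Peak backlog between contacts sizes the node's storage requirement.
backlog = peak = 0
prev_end = 0
for start, end, rate in sorted(contacts):
    backlog += (start - prev_end) * demand_bps   # accumulated while idle
    peak = max(peak, backlog)
    generated = (end - start) * demand_bps       # generated during contact
    drained = min(backlog + generated, (end - start) * rate)
    backlog += generated - drained
    prev_end = end
print(f"peak backlog ~ {peak / 1e6:.0f} Mb of storage needed")
```

The hard part Cerf and Kleinrock are grappling with is that in the real interplanetary case the windows and rates are not fixed - orbital mechanics constantly reshuffles them, which is why the discrete, time-varying version of this problem is still open.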
Talking to Alpha Centauri - still a long way off
Once we digitize our solar system, our next impulse will be to spread further, interconnecting the stars. But then the challenges of distance really start to become apparent. "Now let's roll up our sleeves and say, where’s that star? It's Alpha Centauri - it's actually a group of stars - and the closest of the group is 4.2 light years, which is pretty darn far," said Dr. Don M. Boroson, Laboratory Fellow in the communication systems division of MIT Lincoln Laboratory.

In a thought experiment for the InterPlanetary Networking Special Interest Group (IPNSIG), Boroson did some calculations on what would be required to communicate with the stars. “Let's try to do one kilobit per second,” he said, mapping out the requirements, and noting that any lower data rates would produce so few photons it would be hard to spot from background radiation. “The most efficient way to send data turns out to be sending what we call orthogonal signals, and it can be a pulse,” he said, with the process allowing for as much as four bits to be sent per single photon.

Boroson trialed a similar system when he built the world’s first successful space-based, high-rate lasercom system for NASA's GeoLITE program, as well as the Mars Laser Communications Demonstration, which ultimately did not fly due to the cancellation of another satellite it was meant to share a ride with. Following that, he built the world’s longest lasercom system, between the Moon and Earth. “On lunar laser comm we did pulse position modulation,” he said in a briefing to IPNSIG attended by DCD. “We made engineering trade-offs, so we were able to get two bits for every received photon.”

To send this 73dB signal from Alpha Centauri would likely require a three and a half meter telescope. “The Hubble is 2.4 meters, so this is bigger.” The transmission would require at least a kilowatt. “Lasers today are not 100 percent wall plug efficient, so that means that we
need to make several kilowatts of power in order to make this laser work," he said.

The Earth-based ground receiver would need to be around 40 meters - something the European Space Agency already plans to develop for other research projects - or something could be built in space or on the Moon.

So, now that you have your system at Alpha Centauri and your ground receiver, how do you actually get them to talk? Current long-distance communication between two systems usually involves some level of feedback and error correction, with one system able to tell the other if its communications are off-target. The problem is that this is a significantly longer distance. "There's no feedback," Boroson said. "The Earth station can't tell the satellite it missed because it's an eight-year round trip. This is all open loop, I have to hit my spot with my narrow little beam."

That spot is astronomically minuscule. The Earth that the satellite will see will be 4.2 years in the past, and the point it needs to aim at will be another 4.2 years in the future. During this time, the Earth is in constant motion, traveling about 2.6 million kilometers a day (relative to the Sun). Using the same wavelength as Boroson used in his lunar mission, 1.5 microns, the data beam will be 17 million kilometers across by the time it reaches Earth.

"So this is kind of the size of our spot," he said. "The Earth would move through that spot in about seven days. So I better point my beam to be plus or minus three and a half days, or three and a half degrees, in something that took 4.2 years to get there. Otherwise, I'm going to miss."

There's also the motion of Alpha Centauri, and the 'proper motion' of the two systems in relation to each other to take into account. "So I need to know all these motions in all these positions."

That motion will also make getting the satellite to Alpha Centauri challenging. Alpha Centauri A and B orbit each other with a period of 79.91 years.
"So the angle between them changes to the tune of a few 10s of microradians, but it's changing all the
Sebastian Moss Editor
“The station can't tell the satellite it missed; it's an 8yr round trip" time," Boroson said. When we send probes to Mars, the rockets periodically recheck their trajectory along the route and adjust accordingly. Over the significantly longer distance to one of the Alpha Centauri stars, the chance of needing to course correct is much higher - but the ability to do it is much more constrained. Between our star and the end goal, power will be hard to come by. The system will be the furthest man has ever gone from an energy source, and solar panels will be useless until it nears Alpha Centauri. The launch will likely have to be perfectly accurate, with no ability to realign midway during its journey. Now how long would that journey take? Let's look at the fastest thing humans have ever made, the Parker Solar Probe. "It gets really close to the Sun and takes pictures of it," Boroson said. "But it's hot there, so it zips around it to cool down." At peak, it hits a whopping 692,000 kilometers per hour. Maintaining that speed, it would still take 10,000 years to get to Alpha Centauri. "I wouldn't want to wait that long," Boroson said. As part of the thought experiment, he said to presume we managed to build a system that could get there in 100 years, traveling at just under 50 million kilometers per hour. We don't know if we can build electronics to last that long, we don't even know if there will be advanced humans still on the planet to receive any messages, but let's assume we can make it. Even then, the speed becomes a problem if it doesn't slow down, it would blast through the star's solar system in less than a week, giving a preciously short window to get the data back. Should we pull all that off, we better make that count.
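Boroson's figures check out with back-of-the-envelope arithmetic. The sketch below uses only numbers from the talk: 1.5 micron wavelength, a 3.5 m transmit telescope, 4.2 light years, and the Earth's 2.6 million km/day motion. The 16-slot pulse position modulation is our inference from "four bits per photon" (log2(16) = 4), and the beam width uses the standard diffraction approximation of wavelength divided by aperture.

```python
import math

LY_KM = 9.461e12          # one light year in kilometers

# Four bits per detected photon corresponds to 16-ary pulse position
# modulation: each pulse lands in one of 16 time slots, log2(16) = 4.
bits_per_photon = math.log2(16)

# Diffraction-limited beam from a 3.5 m telescope at 1.5 microns.
wavelength_m = 1.5e-6
aperture_m = 3.5
divergence_rad = wavelength_m / aperture_m    # ~0.43 microradians

distance_km = 4.2 * LY_KM
spot_km = divergence_rad * distance_km        # beam diameter at Earth

# Earth moves about 2.6 million km per day relative to the Sun.
transit_days = spot_km / 2.6e6

# A probe covering 4.2 light years in 100 years:
hours_per_year = 365.25 * 24
speed_kmh = distance_km / (100 * hours_per_year)

print(f"{bits_per_photon:.0f} bits per photon")
print(f"spot diameter ~{spot_km / 1e6:.0f} million km")
print(f"Earth crosses the spot in ~{transit_days:.1f} days")
print(f"100-year cruise ~{speed_kmh / 1e6:.0f} million km/h")
```

The output reproduces the 17 million km spot, the roughly week-long transit of Earth through it, and the just-under-50-million-km/h cruise speed for a 100-year trip.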
Issue 41 • July 2021 45
PLIABLE RIBBON ALLOWS FOR LESS SPACE WITH MORE FIBER HOW PLIABLE RIBBON TECHNOLOGY IS CHANGING THE FIBER OPTIC CABLE LANDSCAPE
WHAT IS PLIABLE FIBER OPTIC RIBBON?

Pliable ribbon cable designs fill central tube cables with more fiber than ever before. The pliable structure has no preferential bend, allowing the fibers to collapse on top of one another while still attached in ribbon form. This feature allows the circular central tube of a cable to be completely occupied with fiber, rather than leaving the space unused by a rectangular or square stack of traditional flat ribbons. There are also pliable ribbon cable designs that utilize 200µm fiber to create even more densely packed, space-saving cable designs for today's ever-increasing bandwidth demands.
TERMINATING RIBBON-BASED CABLES

Ribbon-based cable constructions offer multiple advantages over loose tube and tight buffer cable constructions in the area of fiber termination. The process for splicing the fiber is the same - Strip, Clean, Cleave, Splice, and Protect. The only change is using a heated jacket remover to remove the ribbon matrix. Splicing 144 fibers individually, at around 120 seconds per splice, would take an experienced splicing technician around 288 minutes (4.8 hours); splicing the same 144 fibers in a 12-count ribbon construction yields a splice time of only 24 minutes. Splicing 12ct ribbon cuts splicing time by 92 percent! MPO Splice-On Connectors are field-installable connectors designed for customized, on-site terminations with ribbon cabling. Logistical delays associated with pre-engineered cables are eliminated because of the flexibility of determining exact cable lengths and easy terminations on the work site.
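The time saving is simple arithmetic. The sketch below assumes roughly 120 seconds per splice event, the figure implied by the 288-minute total for 144 individual splices; one event fuses either a single fiber or a whole 12-count ribbon.

```python
# Splice-time comparison: single-fiber splicing vs 12-count ribbon
# splicing, assuming ~120 seconds per splice event (one event fuses
# either one fiber or one 12-fiber ribbon).

SECONDS_PER_SPLICE = 120

def splice_minutes(fiber_count, fibers_per_splice=1):
    """Minutes to splice fiber_count fibers, in groups of fibers_per_splice."""
    events = fiber_count // fibers_per_splice
    return events * SECONDS_PER_SPLICE / 60

single = splice_minutes(144)                         # 144 splice events
ribbon = splice_minutes(144, fibers_per_splice=12)   # 12 splice events

saving = 1 - ribbon / single
print(f"single-fiber: {single:.0f} min, ribbon: {ribbon:.0f} min, "
      f"{saving:.0%} less splicing time")
```

This reproduces the 288 minutes, the 24 minutes, and the 92 percent reduction quoted above.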
FREEFORM RIBBON® TECHNOLOGY DESCRIPTION

Sumitomo Electric Lightwave's Freeform Ribbon® allows for dense fiber packing and a small cable diameter with a non-preferential bend axis, thereby increasing density in space-constrained applications. Sumitomo Electric's patented pliable Freeform Ribbon® construction is designed to pack densely in small form factor cables while still being capable of transforming quickly, without any tools, into a splice-ready form similar to standard/flat ribbon for fast and easy 12ct ribbon
864F STANDARD FLAT RIBBON
Standard Ribbon Design
splicing (for both in-line and fusion splice-on connector splicing applications). Whether installing high fiber count cables, such as 1728, 3456 and higher, to fit into existing 1.5" or 2" ducts, or working with smaller, easier-to-terminate interconnect cables, the Freeform Ribbon® is the central component to achieve both.
For more information visit SumitomoElectricLightwave.com
1728F FREEFORM RIBBON®
Freeform Ribbon® Design
Double the Fiber, Same Outside Diameter
Server farms: high tech agriculture
Dan Swinhoe News Editor
Whether strawberries, cabbage, or cannabis, indoor farms are increasing in number. Will they soon start competing for the same space and power as data centers?
As cities look to become healthier and more sustainable, indoor farming could grow more food in confined spaces and offer fresher produce to the hungry
masses. Rows of crops like strawberries, fresh greens, or even cannabis can be grown vertically for higher yields.

But indoor farms, like data centers, need large amounts of power and water. Both need optimal temperature and atmospheric conditions, and both generate heat: data centers through servers, farms through LED lighting. Like data centers, indoor farms can fit into anything from shipping containers to massive warehouses, but efficiencies of scale mean both industries are better suited to large shell buildings.

As indoor farms expand, will they compete with data centers for space and power, or will the two industries coexist and collaborate?
A growth industry As well as taking up less land and water, indoor farms grow better produce in otherwise impractical locations. A 2019 Cambridge Consultants report said vertical farming could increase crop yield and quality, while reducing food miles. “As a grower, you're constantly responding to those environmental variables around you,” says Douglas Elder, head of pre-sales at UK firm Intelligent Growth Solutions (IGS), which provides modular vertical farm units called growth towers. “It's really important that you are able to control everything from the lighting, the temperature, humidity, CO2, all of those things that affect how a plant grows. You take control of it, then it's completely predictable and repeatable. “[Indoor farms] offer more resilience to the food supply system, but also offer higher
quality produce where needed, and allow you to grow where you can't [grow outdoors].” Take broccoli, typically a very low-cost vegetable. But in Scotland, companies have to spend large amounts of time and money transporting the plants from where they are grown. Equally, indoor farming could help in the Middle East, where many crops are imported. The legal cannabis market is worth somewhere in the region of $20 billion a year in the US. There is some overlap, but indoor farming is worth around the same per year globally, while vertical farming is estimated to be a market worth somewhere around $10 billion. The indoor farming market is still in an early phase, with vendors hyping up their capabilities, while investors pump in large amounts of money. “It's still fairly early but it's getting to that stage where it's starting to cross over into the industrial scale; there is a lot of money flying around in this space,” explains Elder. “There's a lot of talk, a lot of smoke and mirrors still going on with this industry. Now it's time for them to stop talking and actually start doing.” Sepehr Mousavi, innovation research lead at Swedish indoor farming startup SweGreen, agrees it can be a ‘tricky market’ to navigate. Based in the DN Tower in Stockholm, SweGreen can trace its roots to early indoor farming firm Plantagon. “Many of these actors talk about automation and use of artificial intelligence, but rather as buzzwords to be able to create a place for themselves in the picture and also the possibility of obtaining venture capital,” he says. The indoor cannabis market is risky, says Michael Fluegeman, principal engineer at Touchstone Engineering Corp: “The clientele is a little different; if you've got people coming out of the black market, you've got speculators, you've got people with really no background in business and building or designing anything.
48 DCD Magazine • datacenterdynamics.com
"They may have experienced growing some plants in their yard or in their bedroom but [large-scale growing] is quite a different animal."

Fluegeman says the cannabis industry is applying technology and strategies from data centers, but he has experienced people who claim to have funding - only to find, with engineers at work, that the money isn't there. Once cannabis is legalized federally and the pharmaceutical industry is fully engaged, the US industry will rapidly mature, become more professional, and embrace ideas such as efficiencies, automation, and software-driven farming, he says.

In the future, there may also be indoor insect farming: Aspire Food Group is working on a project to create an indoor vertical cricket farming facility in a controlled and automated environment.
Plant factories From the outside, a large indoor farm might look like a data center. Both need large amounts of floor space, both have high power requirements (though data centers more so) and both need ample water. “You're trying to do the same function; you've got a lot of electrical systems generating heat, it's all about insulation and external control,” says Elder. “They are effectively big insulated boxes with their own HVAC systems that are trying to take advantage of free cooling where available, avoiding any sort of influences from additional heat loads from outside.” Both data centers and farms require backup power in the event of primary power failure. Data centers need good airflow management and temperature control to keep servers operating efficiently, farms need it to ensure plants are healthy. Both farms and data centers need security measures - though maybe only cannabis facilities need the same level of security as a data center. Early indoor farms reportedly used data center equipment, as specialist equipment
Food for thought
wasn’t available. Net Zero Agriculture’s systems are inspired by IT rack designs and used in shipping containers. Many companies in the UPS and HVAC space such as ABB, Air2O, and Schneider will serve both sectors. Standby generators and automatic power transfer switches are the same, says Fluegeman, and farms can learn from data centers’ skill at getting airflow where the servers need it: “Grow facilities have some similar challenges: If you starve certain areas of airflow, the plants are not going to do well.” Grow facilities are beginning to customize cooling equipment to deliver what they need more cheaply, says Fluegeman: “Like anything else, it becomes cost-driven; when there's enough scale and quantity it makes sense for vendors to address a specific need.” “A lot of facilities that I've worked on had very simple fans,” adds Jerry Poon, lead electrical engineer, Touchstone Engineering. “A lot of that comes from not having a knowledge of what [cooling equipment] is out there. “But the cooling equipment has been getting more sophisticated and more customized. The customer base is getting
savvier in terms of using better and more complicated equipment, and it's evolving separately from the data center world."

"Farms can learn from data centers' airflow skills. Grow facilities have some similar challenges: if you starve certain areas of airflow, the plants are not going to do well"

There are big differences between the sectors, of course. Where data centers need fiber and connectivity to the outside world, even highly-automated farms are more independent and need only the most basic broadband connections. DCD is not aware of major data center pest problems either, though we've heard of the odd rat or cat - and in one instance, a deer - in data centers. By contrast, indoor farms need constant pest control to keep insects off the crop.

Lighting is often incidental to a data center, but indoor farms rely on LEDs to nourish and maintain crops. Most of their heat generation comes from lighting, often totaling around 70 percent of total energy use. They also need to ensure CO2 levels are steady to keep plants healthy, and need tighter humidity control than a typical data center.

IGS has a 'virtual power plant' for farms to more easily control the LED lighting, as well as the total power draw. Elder says it can be useful if colocated with infrastructure that has more consistent and critical power needs.

Farms don't have the same uptime needs as data centers. Those energy-intensive lights are not always needed 24 hours a day, so their power requirements vary during the day and night. Such farms are less concerned about outages, as plants can cope better than servers if the power goes off unexpectedly.

"Plants have a natural resilience," says Elder. "They're natural organisms that are used to these sorts of interruptions. With a longer growth cycle crop, if you switch the lighting off for two hours every two weeks unexpectedly, you're not really going to see much of an impact."

Elder argues modern automated indoor farms are more like factories than data centers. "These are effectively plant factories at the end of the day," he says. They can use similar scheduling and handling machinery: Cambridge Consultants found most robots used in vertical farming have been repurposed from arm systems like those used in automotive production lines.
SweGreen's Mousavi says factories could also help with areas such as predictive analytics, AI-led automation, and vision systems for diagnostics: "There is a lot of space to do more effective things around the use of robots and automation."

Fighting for land and power?

The rise in e-commerce coupled with the Covid-19 pandemic drove up the need for warehouse and distribution space as people shopped online for delivery to their homes. As people spent more time online and companies established work from home infrastructure, the need for data center space also surged, causing increased competition for prime real estate in urban areas.

Urban indoor farms are not in that competition, as most are relatively small scale. They need transport to distribute their crops, but they have more choice in location as they don't need high-bandwidth fiber. But as urban farms - especially vertical farms - become more common, this could change.

Cannabis grow facilities can have a floorspace in the low tens of thousands of square feet and power requirements around 1MW, but some can total more than one million sq ft. A 2016 study from the Lawrence Berkeley National Laboratory suggested legalized indoor marijuana-growing
operations could use up to one percent of electricity in the US.

IGS' Elder says his company's vertical towers average a power draw of 100kW per pair. One project involves more than 100 towers, with power reaching the 5MW range.

Unlike data centers, however, there may be an upper limit to farms' energy needs. Different crops need more power or automation, but Cambridge Consultants says vertical farms need around 20-70W/sq ft (200-700W/sq m), only one tenth of the 200-300W/sq ft (2,000-3,000W/sq m) a server farm burns. While data centers will always push for greater density, there are limits to the density of plants an indoor farm can achieve, even when growing vertically. IGS' Elder says LED efficiency gains mean energy density footprints may even go down.

We have heard of one early example of competition between horticulture and data centers. Jeroen Burks, CEO of Blockheating, noted how a Microsoft data center in the Netherlands was built on land previously allocated for use as an industrial-sized greenhouse. Burks says greenhouses in the country can easily reach tens of hectares in size, and require multiple megawatts of energy to power. Microsoft's Hollands Kroon data center was challenged by a local agriculture and horticulture group, which said the industry "cannot spare a hectare" in the area.

The indoor market is less established than greenhouses, and still choosing location on price, says Touchstone's Poon. "The farther
"The data center world and cannabis are both power hogs, but the cannabis industry is a little bit behind, they're less worried about power quality, and more about just having the power capacity"
out you go, the cheaper the land, power, and water is.

"The data center world and cannabis are both power hogs, but the cannabis industry is a little bit behind, they're worried less about power quality, and more about just having the power capacity for their operations."

Buying land on price leaves practical problems. Every cannabis project Poon has worked on had to ask the utility for increased electrical infrastructure: "They look for location, they look for size, but they don't think about the mechanical capacity, they don't think about the electrical capacity," he says. On one project, the lack of power left the farm running generators 24x7 as a long-term 'temporary' solution while waiting for a utility upgrade.

Collaboration and colocation

Some data centers are reusing waste heat through district heating schemes connected to homes or offices. But some are now looking at heating agriculture and horticulture infrastructure instead.

QScale plans to place greenhouses on its campuses in Quebec, Canada, and use the excess heat for agriculture. "We want to contribute to the food autonomy of the province with a potential of 400 hectares of greenhouses for the first campus," said founder Martin Bouchard. "In Lévis, we have adjacent agricultural land which is equivalent to 80 football fields, enough to produce 2,880 tonnes of raspberries and 83,200 tonnes of tomatoes."

Green Mountain in Norway is providing heat to both a land-based lobster farm (see next feature) and a trout farm, allowing them to dispense with expensive water heating and recycling systems. German data center firm Windcloud set up a similar scheme sending its excess heat to an algae farm. Researchers at Lancaster University are looking at modular data centers which dry coffee beans in markets such as Costa Rica. In 2020, Finnish developer Yit and Blockheating of the Netherlands both offered heat to nearby greenhouses.
In Blockheating's case, the servers themselves were recycled OCP standard hardware from hyperscale facilities, delivering bare metal services, with excess heat piped to the greenhouses for crops such as tomatoes, cucumbers, and peppers.

US entrepreneur Daryl Gibson recently proposed the Eden Fusion Center: a tropical indoor forest that would be heated with data centers during the winter months.

Unlike greenhouses, indoor farms generally have a surplus of heat from their LEDs, so they are more likely to donate excess heat to district heating schemes
than benefit from them, but there are still opportunities for indoor farms and data centers to cohabit, especially if the farms can be flexible on power requirements.

"It's not about using the surplus heat for vertical farming, but rather creating a smart energy system and connection between these two different units to lend the energy to each other and borrow it from each other," says SweGreen's Mousavi.

Kansas Freedom Farms says solar panels installed on its large-scale vertical farms could produce a surplus of power for on-site data centers and cell towers.

"We are quite often approached by data center developers to look at these opportunities," says IGS' Elder. "Typically there's spare capacity available that they want to utilize, but they want something that can be flexible. It's a positive PR thing as well; doing something useful for a slightly different industrial application in the same space if there is spare capacity."

In the Netherlands, Microsoft collects rainwater to cool its data centers, then sends the excess to nearby greenhouses. But water-hungry data centers might not always be willing or able to share. "If you're producing greywater for a data center, maybe with some degree of purification, some of that could be repurposed for growing plants," says Touchstone's Fluegeman. "But for the most part, I think they want to locate them where you have plentiful fresh water."

Some small farms want to locate by offices, restaurants, or similar spaces. SweGreen's Mousavi thinks it could be a "win-win opportunity" for them to occupy redundant server rooms or old basement data centers that have been vacated as companies consolidate to colocation facilities or the cloud.

"Vertical farming companies are run by people who are greenhouse experts. They think data centers are irrelevant because they are focused on the production of greens"

IGS' Elder says the concept is good in theory, but there are practical challenges in multi-use buildings. Equipment can be expensive and under-utilized, while pest control needs management and the crop needs distribution. "If it's going straight into a kitchen in that building, great. But if you're having to transport crops from distributed locations to central points, it starts becoming problematic."

Edge agriculture

Regardless of colocation opportunities, agritech needs its own IT infrastructure, both at the Edge and via the cloud. Agritech startup Aerofarms has a 70,000 square foot facility in New Jersey with plants in vertical columns that climb 36 feet. The company uses Dell technology for its IoT gateway and on-site compute needs.

A Purdue University project is using robots to develop future agriculture automation systems, in a greenhouse heated by Digital Crossroads' Indiana data center. Blockheating also hosts a local university robotics project on the farm at its facility, says Burks. A robot travels through greenhouses making predictions on yield, while Blockheating provides the compute infrastructure and sends excess heat back into the greenhouses to keep the crops warm. He notes, however, that the farm's IT only uses one rack out of the 10 that are currently heating the farm, so it's unrealistic
for a farm’s agricultural IT to provide all the heating it needs. As farms use more automation and analytics, they may need more on-site compute, and produce more heat for their own use. But, as with other industries, much of this will be in the cloud. IGS’ Elder says his company is keeping on-site compute infrastructure relatively low, as programmable logic controllers respond to a database of functions that can be programmed through the cloud: “That allows us to simplify the requirements on-site and minimize computer-based functions on-site that are typically more unreliable than a PLC. “It's all cloud-based software that's actually driving those PLCs. We obviously have to have a stable Internet connection, but it is easier for us to manage sites remotely through cloud-based servers.” SweGreen’s Mousavi says his company’s farm also uses PLCs, but is looking into developing more real-time solutions using 5G and Edge computing, with some unnamed partners in Sweden: “I can't name them, but you could guess for yourself.” Mousavi adds the big opportunity could be expertise: “One problem from the vertical farming industry is many of these companies are run by people who are greenhouse experts. "They think [data centers] are irrelevant because they are focused on the production of greens.”
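Elder's architecture, with cloud software driving on-site PLCs that keep working from their last-known instructions if connectivity drops, can be sketched roughly as below. All names, setpoints, and the fallback behavior here are illustrative assumptions, not IGS's actual system.

```python
# Illustrative sketch of a cloud-driven grow controller: a cloud
# service publishes a "recipe" of setpoints; the on-site controller
# applies them, and falls back to the last recipe it cached when the
# cloud is unreachable. Names and values are hypothetical.

DEFAULT_RECIPE = {"light_pct": 80, "temp_c": 22.0, "humidity_pct": 65}

class GrowController:
    def __init__(self, fetch_recipe):
        self._fetch = fetch_recipe        # callable returning a recipe dict
        self._cached = dict(DEFAULT_RECIPE)

    def tick(self):
        """One control cycle: prefer the cloud recipe, else the cache."""
        try:
            recipe = self._fetch()
            self._cached = recipe          # remember the last good recipe
        except ConnectionError:
            recipe = self._cached          # cloud down: keep last setpoints
        return recipe                      # in reality: write to PLC registers

# Simulated cloud that answers once, then goes offline.
responses = iter([{"light_pct": 60, "temp_c": 21.0, "humidity_pct": 70}])
def flaky_cloud():
    try:
        return next(responses)
    except StopIteration:
        raise ConnectionError("no uplink")

ctl = GrowController(flaky_cloud)
print(ctl.tick())  # fresh cloud recipe
print(ctl.tick())  # offline: repeats the cached recipe
```

The point of the pattern is the one Elder makes: the on-site logic stays simple and keeps running, while the cloud remains the source of truth whenever a connection is available.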
Big tech gets into big farming A number of cloud and hardware companies are hungry to show off their own farming capabilities. Microsoft in particular is helping feed its staff, as a way to attract more agriculture customers. Microsoft donated hydroponic growing equipment to Seattle architect Melanie Corey-Ferrini for a project in warehouse space at Sabey’s Intergate East data center campus in Tukwila. The company has a number of on-site indoor and vertical farms at its Redmond campus and in 2017 claimed it grew more than 15,000 pounds of lettuce and a ton of other greens a year onsite for employees. Microsoft’s indoor crops are unsurprisingly powered by Azure
cloud services, and the company conducts research to improve its agriculture-focused products. The company has invested $1.5 million in the Grand Farm Project: a research initiative in North Dakota to create a fully autonomous farm, which will use the company's Azure FarmBeats solution.

As well as FarmBeats, Microsoft's Sonoma project is working on automated indoor and greenhouse farming. The Sonoma team beat Tencent and Intel in a cucumber-growing competition, the first autonomous greenhouse challenge at Wageningen University and Research in Holland, growing more than 50 kilograms of cucumbers per square meter, though figures showed the AI competitors used more electricity than human growers.
Crustacean cultivation

A land-based farm has finally produced plate-sized lobsters, thanks to a data center
The European lobster fishing industry is in trouble. The much-prized crustaceans are in huge demand as one of the most expensive items of seafood on restaurant menus. Yet they take years to mature, and fishing has decimated populations.

For any other food species, the answer would be to set up fish farms, but lobster cultivation faces several seemingly insurmountable problems. Lobsters need relatively warm seawater (20°C/68°F) to grow, and they are cannibalistic, so they have to be reared in individual pens. This has made conventional fish farms too labor-intensive and energy-intensive to be economic.

For 21 years, a group in Norway has struggled to overcome all these hurdles to set up the Norwegian Lobster Farm. After endless setbacks, the group is finally set to go into full production next year - and byproducts from a local data center are what's got the project across the finish line.
Cannibals

It is possible to hatch lobsters and grow them past the larval stage in dry land nurseries, such as the UK's National Lobster Hatchery in Padstow, Cornwall, but growing them to full size has been too demanding for any land-based farm, says the farm's CEO, Asbjørn Drengstig.

"Juveniles take six years to mature, because they don't grow in the wintertime, because the temperature is too low in the sea," says Drengstig. Kept in 20°C water all year round, lobsters can mature in two years. But during that time, it is very expensive to hand rear them with carefully controlled feeding. And keeping the water warm is expensive too.

The Norwegian Lobster Farm has been able to develop two technologies: a robotic system for automating the intensive process of feeding and nurturing individual lobsters, and a system that economizes on heat by cleaning and recirculating the warm water in the facility. This reduces heat requirements but adds complexity.
Peter Judge Global Editor
It has been an epic struggle, says Drengstig: "At first we needed to do basic R&D, to reveal the growth cycle and all the things we needed to know.

"Then we tried to scale up - and then the financial crisis came in 2008 which restricted further growth. And then in 2011, the whole farm burned down to the ground. We lost everything overnight. And then it took some time to revitalize the company and get it back on its feet."

That recovery had the backing of two EU funded projects, Automarus and Devaela, to fully realize the recirculating aquaculture system (RAS) and the robotic lobster nursery, which uses vision systems to make land-based lobster farming viable by tracking individual lobsters, feeding them precisely, and keeping them separate to reduce cannibalism. "We organize the whole hatching operation to separate different stages and reduce the cannibalism," Drengstig explains.

The farm is validating an advanced system which will allow live monitoring of all the
stock, and automated feeding: "We are able to track each single lobster every day, we have individual tracking throughout the whole cycle. No other aquaculture system can track on an individual basis."

He goes on: "The main challenge is to have accurate feeding. When they are small we actually feed one-two millimeter size portions every day. Very high accuracy, right?"

To conserve heat, the farm has been using a recirculating aquaculture system (RAS), recycling its water to minimize the waste of heat. By continually reusing the water, the farm can reduce its water wastage to five percent, but that comes at a very high cost in capital terms. RAS is expensive.

All this left the farm struggling to get to finally opening, even ten years after the fire. To Drengstig's surprise, the farm had a neighbor who could make the complexity of RAS irrelevant, and push the project across the finishing line.
"The main benefit is we reduce our environmental footprint by using their data center waste heat, because normally we need to use heaters to heat up the temperature of the water" heat up the temperature of the water. "This water will be water free of charge, and we don't have to invest in complicated recirculating infrastructure. And managing operations will be less complicated. Fewer things can go wrong. We don't have to have recirculating pumps, pipes, valves, and all those things that complicate a re-circulating farm. "This would just be flow-through water, and it simplifies the operation, it will be a different world."
creatures which it nurtures - growing when the temperature allows and waiting patiently for conditions to change. "It's been a very interesting journey," says Drengstig. "I usually tell people you need to be patient, you need to have passion. And you need to be a little bit stubborn." "Before the fire, we had tremendous reviews from culinary chefs and gourmet magazines. It has been a fantastic journey going downstream in the value chain. We saw that this product actually has its rights in the world, so we kept on going."
Data centers have the best heat

Enter Green Mountain. Colocation operator Green Mountain's DC1-Stavanger data center is located in a bunker on Rennesoy, the next island along from the Lobster Farm. DC1 is cooled by a salt-water fjord, taking water in at 8°C (46.4°F) and releasing it at 20°C (68°F). It produces warm water all year round, at the ideal temperature for Drengstig's crustaceans. The two found each other through mutual contacts. Fishing is central to Norway's culture, and Green Mountain is well-connected in the world of fish farms. It also offers waste heat to a trout farm on Rennesoy. The company's previous CEO, Knut Molaug, came to the company after a long career in the world's largest aquaculture company, Akva, and left in 2017 to run a firm making fish farm robots, Aqua Robotics. Aqua's bots are aquatic, so there aren't any at the Lobster Farm, but Molaug is on its board. The partnership between the two has no downside, says Drengstig. Norwegian Lobster Farm is building a new facility next to Green Mountain DC1, which should open in 2022 and produce lobsters for restaurants within a year. The arrangement will save 15 percent of the farm's operational cost, because it no longer has to heat any water. But there's a bigger saving on equipment, because Drengstig can do away with the complexity of RAS and save 25 percent of his capital investment. "The main benefit is we reduce our environmental footprint by using their waste heat," says Drengstig, "because normally we have our recycling system to preserve the heat inside the farm. We only exchange five percent, but still we need to use heaters to
Drengstig says he has looked at other sources of warm water, but they are not steady enough. "The problem for me is that suddenly they have maintenance, and then they shut down - and then you have a fluctuation in temperature." By contrast, data centers are steady: "They can never stop. They have redundancy, and will never stop the warm water flow. If the water stops, they have much bigger problems than we have!" The relationship is close, he says: "It's like a hand in a glove. I use the term circular economy for reducing the footprint. Increasing sustainability and utilizing energy, and organic material, I think the world is turning towards greener thinking to reuse what previously was a waste. Now it's actually a resource." For the last 21 years of struggle, the Lobster Farm has been a little like the
Asbjorn Drengstig of Norwegian Lobster Farm and Tor Kristian Gyland of Green Mountain
Issue 41 • July 2021 53
Protection against historic weather events
DATA CENTER, AUSTIN, TEXAS
UPS type: Dynamic UPS
Power module: 2200 kVA
No-break rating: 1760 kW
Phase 1 install: 2 modules
Total install: 3 modules in total
Operating voltage: 12.47kV/60 Hz
Configuration: Parallel system
Housing: Outdoor, containerized
An estimated 4.5 million homes and businesses in Texas were left without power in what may be the largest forced blackout in U.S. history. The blackouts came as a direct result of severe winter storms that swept across the United States in February 2021.
During days of ongoing utility interruptions and uncertainty, the HITEC Dynamic UPS system provided 100% uptime to the critical load and life safety systems, providing guaranteed power at the most critical time.
Client Case Details To mitigate a complete failure of the Texas grid, the Electric Reliability Council of Texas (ERCOT) implemented rolling blackouts, shedding as much as 26,000 megawatts of load. The extreme weather had affected power-generation capacity, particularly gas-fired plants. Pipelines had not been insulated and had frozen, which made it difficult for plants to get the fuel and other power-generation sources they needed. One customer, located within Austin, Texas, is supported by two HITEC PowerPRO2700 UPS
systems, and these provided complete protection during multiple sustained power outages, delivering 100% uptime even in the most severe conditions. In 2021 we look forward to commissioning a third 2200 kVA dynamic rotary UPS within its own purpose-built container to accommodate load expansion. The three DRUPS, operating in a medium voltage parallel configuration, will continue to provide the highest levels of reliable power.
"BEGINNING FEBRUARY 15TH, AUSTIN, TEXAS, EXPERIENCED A WEEKLONG, UNPRECEDENTED WINTER STORM WITH TEMPERATURES AS LOW AS -15 DEGREES CELSIUS (5 DEGREES FAHRENHEIT) AND 25 CM (10 INCHES) OF SNOW AND ICE. MUNICIPAL UTILITIES WERE BURDENED TO THE POINT THAT CUSTOMERS WERE IMPACTED BY ROLLING BLACKOUTS TO RELIEVE THE POWER GRID. I WANT TO COMMEND HITEC POWER PROTECTION AS OUR TWO DYNAMIC ROTARY UPS-TYPE POWERPRO2700 UNITS PREVAILED EFFORTLESSLY THROUGH THE STORM AND CARRIED OUR CRITICAL LOADS WITH GREAT SUCCESS. GREAT JOB – KEEP UP THE GREAT WORK!" – Facility Manager
Customer Experience By selecting HITEC PowerPRO2700, the client has the most efficient power protection solution for their data center, one that will keep the electricity bill low for many years to come, allowing them to compete effectively in the data center
market. In addition, the exceptional award-winning industrial design of the PowerPRO2700 provides a reliable and robust solution, protecting against even the harshest environmental conditions.
Hitec Power Protection BV P.O. Box 65 7600 AB Almelo The Netherlands
CONTINUOUS POWER IN YOUR CONTROL
Tel: +31 546 589 589 Web: hitec-ups.com E-mail: email@example.com
Dan Swinhoe News Editor
ENIAC at 75: A pioneer of computing The Electronic Numerical Integrator and Computer (ENIAC) was one of the world’s first general purpose computers. And 2021 marks 75 years since it was first unveiled to the public
2021 marks 75 years since the Electronic Numerical Integrator and Computer (ENIAC) was first revealed to the public. An early landmark of computing history, ENIAC's development produced a collection of important firsts. ENIAC may have been the first electronic general purpose machine which was Turing complete - i.e. theoretically able to handle any computational problem. Its development was key to the founding of the commercial computing industry, providing many of the early ideas and
principles that underpin computers of all shapes and sizes.

A 'revolution in the mathematics of engineering'

Built between 1943 and 1945 at the University of Pennsylvania by engineers John Presper Eckert and John William Mauchly, ENIAC was created to calculate artillery tables - the projectile trajectories of explosive shells - for the US Army Ballistics Research Laboratory. Mauchly proposed using a general-purpose electronic computer to calculate
56 DCD Magazine • datacenterdynamics.com
ballistic trajectories in 1942 in a five-page memo called The Use of Vacuum Tube Devices in Calculating. After getting wind of the idea, the US Army commissioned the university to build the machine, known at the time as Project PX. The system was completed and brought online towards the end of 1945, and moved to the Aberdeen Proving Ground in Maryland in 1947. Taking up a 1,500-square-foot room at UPenn's Moore School of Electrical Engineering, ENIAC comprised 40 nine-foot cabinets. Weighing 30 tons, the machine contained over 18,000 vacuum
tubes and 1,500 relays, as well as hundreds of thousands of resistors, capacitors, and inductors. "It was a bunch of guys bending metal in the basement of a building in UPenn," says Jim Thompson, CTO of the ClearPath Forward product at Unisys, a company that through acquisitions can trace its lineage back to ENIAC and the Eckert-Mauchly Corporation, founded in 1946. "There was nobody building computer parts; these guys made ENIAC literally out of radios and televisions and everything else they could find, taking vacuum tubes that were designed for a different purpose and then turning them into logic devices and repurposing them." After the end of WW2, ENIAC was donated to the University of Pennsylvania on February 15, 1946. According to the Smithsonian, where parts of the machine now reside, an Army press release at the time described ENIAC as "a new machine that is expected to revolutionize the mathematics of engineering and change many of our industrial design methods. "Begun in 1943 at the request of the Ordnance Department to break a mathematical bottleneck in ballistic research, its peacetime uses extend to all branches of scientific and engineering work." Prior to ENIAC, human 'computers,' mostly teams of women, performed calculations by hand with mechanical calculators. Predicting a shell's path used calculations that took into account air density, temperature, and wind. A single trajectory took a person around 20 to 40 hours to 'compute.' With ENIAC, the same calculations were possible in 30 seconds. Input was done through an IBM card reader, and an IBM card punch was used for output. While ENIAC had no internal memory store at first, the punch cards could be used as external memory. A 100-word magnetic-core memory built by the Burroughs Corporation was added to ENIAC in 1953.
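The hand computation ENIAC replaced was, at its core, step-by-step numerical integration of a shell's equations of motion. A minimal sketch of the idea in modern Python - an invented illustration, not ENIAC's actual method; the drag constant, launch speed, and step size here are arbitrary assumptions:

```python
import math

def trajectory(v0, angle_deg, drag_k=8e-5, dt=0.01, g=9.81):
    """Crude Euler integration of a shell's flight with a simple
    velocity-squared drag term. Real firing tables also corrected
    for air density, temperature, and wind; drag_k is made up."""
    angle = math.radians(angle_deg)
    x, y = 0.0, 0.0
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    t = 0.0
    while y >= 0.0:
        speed = math.hypot(vx, vy)
        # Drag decelerates the shell along its direction of travel
        vx -= drag_k * speed * vx * dt
        vy -= (g + drag_k * speed * vy) * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return x, t  # impact range (m) and time of flight (s)

rng, tof = trajectory(v0=450.0, angle_deg=45.0)
print(f"range ~{rng / 1000:.1f} km, flight time ~{tof:.0f} s")
```

Each loop iteration is one of the thousands of small arithmetic steps a human 'computer' worked through by hand - which is where the 20-to-40-hour figure came from.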
Capable of around 5,000 calculations a second, ENIAC was a thousand times faster than any other machine of the time, and had modules to multiply, divide, and take square roots. "This was a machine that straddled the point in history where we went from mechanical calculators and adding machines to electronic computers," says Thompson. While a massive step-change in capability compared to any other computer in the world at the time, it also had various challenges in operation. With minimal cooling technology – two 20-horsepower
blowers – ENIAC raised the room temperature to 50°C when in operation, and its 160kW energy consumption reportedly caused blackouts in the city of Philadelphia. Reliability was also a constant challenge. Before speciality tubes became available in 1948, the machine used standard radio tubes, which burnt out on a near-daily basis. At first it took hours to work out which tube had actually blown, but the team eventually developed a system to reduce this to around 15 minutes, thanks to some 'predictive maintenance' and careful monitoring of equipment. ENIAC was difficult and complicated to use. Initially, it used patch cables and switches for its programming, and reprogramming the machine was a physically taxing task requiring a lot of preplanning, often taking days. "The first electronic digital computers, including ENIAC, had to be programmed by wiring using patch cords," explains David Taylor, co-founder of coding tutorial site Prooffreader and a computing history enthusiast. "Once a program was written, the
program-specific logic had to be literally wired into the machine, meaning programmers had to physically move the cables on a plugboard and change the switches that controlled the response to inputs. "ENIAC was then able to solve only that particular problem. To change the program, the machine's data paths had to be hand-wired again. This was a laborious process, taking a few days to make the necessary physical changes and weeks to design and write new programs." Improvements in 1948 made it possible to execute stored programs set in function table memory, speeding up the 'programming' process. "The three different kinds of memory used in ENIAC were replaced with a single, erasable high-speed memory, allowing programs to be stored in the form of read-only memory," says Taylor. "This conversion immensely sped up the reprogramming process, taking only a few hours instead of days."

ENIAC's legacy: an important part of history

The late 1930s to 1940s were rife with computing pioneers developing historically significant machines, all to help the war effort. IBM's Harvard Mark I - another general purpose machine of the era - was capable of just three additions or subtractions per second, while multiplications each took six seconds, divisions 15 seconds, and a logarithm or a trigonometric function more than a minute. The UK's Colossus, built at Bletchley Park, was incredibly important to deciphering Nazi cryptography, but was built for one specific purpose, and government secrecy meant its learnings couldn't be shared at the time. The Atanasoff–Berry computer, built in 1942 by John V. Atanasoff, was neither programmable nor Turing-complete. The Konrad Zuse-built Z3 was completed in Berlin in 1941, but government funding was denied as it was not viewed as important; the machine was never put into everyday use, and was destroyed during Allied bombing of Berlin in 1943.
"It's easy to focus on how slow, massive, power-hungry and memory-poor these early computers were, rather than recognizing the exponential leaps in technology that they represented compared to the previous state of the art," says Charlie Ashton, senior director of business development at SmartNIC provider Napatech. "One of the most impressive statistics about ENIAC is the multiple-orders-of-magnitude improvement in performance compared to the electro-mechanical machines that it replaced." While historians may argue over the importance of certain machines and which ones have the honor of being 'first' in various categories, few can argue over whether ENIAC was a leap forward with a real-world impact. Its computing power was a massive leap in comparison to its peers at the time, and its general purpose nature led the way for computers being reprogrammable for any number of potential use cases. "There were other computers (mechanical and electro-mechanical) before it. And we can over-index on the fact that it was the first working general purpose digital computer, but the impact was far greater," says Charles Edge, CTO of startup investment firm Bootstrappers.mn and host of The History Of Computing Podcast. At a cost of around $400,000 at the time - equivalent to around $7 million today - ENIAC was a relative bargain, even if the original project budget was just $61,700. The machine was retired in October 1955 after a lightning strike, but it had already made a lasting mark on the nascent computer industry.
A failure and a success

By the time ENIAC was ready for service in 1945, the war was coming to an end. As a result, it was never actually used for its intended purpose of calculating ballistic trajectories. "The difference for ENIAC was it really was a general-purpose problem solver," says Thompson. "Even though it showed up late in the war and didn't really contribute to its original design purpose, it was immediately adapted to help with the US effort around nuclear weapons, to do agriculture work, and anything where we had to do a lot of computations quickly." During its lifetime, the machine performed calculations for the design of a hydrogen bomb, weather predictions, cosmic-ray studies, random-number studies, and even wind-tunnel design. ENIAC's work to investigate the distance that neutrons would likely travel through various materials helped popularize Monte Carlo methods of calculation. "ENIAC certainly goes down as a pivotal moment in computing," says Edge. "It was important in the mathematical modeling that led to the hydrogen bomb. In part out of that work, we got the Monte-Carlo simulation and von Neumann's legacy coming from that early work. We got the concept of stored programs." Mauchly's interest in computers reportedly stemmed from hopes of forecasting the weather through computers and using electronics to aid statistical analysis of weather phenomena. And ENIAC did in fact conduct the first 24-hour weather forecast. But, as with most computers of the time, John von Neumann and the specter of nuclear weapons research was an important part of ENIAC's output. While working on the hydrogen bomb at Los Alamos National Laboratory, von Neumann grew aware of ENIAC, and after becoming involved in its development, the machine's first program was not ballistics tables but a study of the feasibility of a thermonuclear weapon. Von Neumann also worked with IBM's Harvard Mark I, and essentially created the von Neumann architecture when documenting his thoughts on ENIAC's successor machine, the EDVAC. Many ENIAC researchers gave the first computer education talks in Philadelphia,
Pennsylvania in 1946 that are collectively known as The Theory and Techniques for Design of Digital Computers and often referred to as the Moore School Lectures. The talks were highly influential in the future development of computers. “The Moore School Lectures helped produce Claude Shannon of Bell Labs, Jay Forrester at MIT, mainframe developers from GE which would go on to be a substantial player in the mainframe industry, and engineers, researchers, and the future of the still-nascent computer industry,” says Edge. The Pentagon invited experts from Britain as well as the US to jumpstart research in the field. The Lectures, and von Neumann’s memo, First Draft of a Report on the EDVAC, sparked off a race to create truly general purpose systems which could run from stored programs. Mauchly and Eckert were quick to build on their ideas, delivering EDVAC (Electronic Discrete Variable Automatic Computer) to the US Army’s Ballistic Research Laboratory in 1949. Design work began before ENIAC was even fully operational, implementing
The women of ENIAC

While ENIAC was designed and built by men, six female programmers - Jean Jennings Bartik, Marlyn Wescoff Meltzer, Betty Snyder Holberton, Ruth Lichterman Teitelbaum, Kathleen Antonelli, and Frances Bilas Spence - were responsible for 'coding' the machine by moving cables by hand. Most of the six were maths graduates - Snyder had studied journalism - and had been hired along with around 80 other women to manually calculate trajectories for the Ballistic Research Laboratory. "They were looking for operators of a new machine they were building called the ENIAC," Bartik told the Computer History Museum. "Of course I had no idea what it was, but I knew it wasn't doing hand calculation." ENIAC's classified status meant the programmers were not at first allowed into the room to see the machine, and were instead told to work out programs from blueprints in an adjacent room. As Columbia University describes it, without anything as user-friendly as an instruction manual, the six women taught themselves ENIAC's operation from its logical and electrical block diagrams, and then figured out how to program it. They created their own flow charts and programming sheets, wrote the programs, and placed them on the ENIAC. Together they are credited with creating the first set of routines, the first software applications, and the first classes in programming. However, despite their essential role, the women were rarely named in any pictures or press around ENIAC. Their involvement in the project went largely undocumented for years. "It's very much a parallel to the Hidden Figures story around NASA," says Unisys' Thompson. "The impact of women early in this industry is an under-told story." Much like the African American female mathematicians at NASA whose input was essential during the space race, it's only recently that their roles in ENIAC have been widely recognized. All six women were inducted into the Women in Technology Hall of Fame in 1997.
architectural and logical improvements conceived during ENIAC's construction. However, they were beaten by the Manchester Baby system, the first stored-program computer, and narrowly by EDSAC in Cambridge, which is regarded as the first practical computer, and the origin of business computing via the Lyons LEO (see next page). "ENIAC also heralded a variety of build techniques that persist to this day," says Thompson. "It was a modular design, it was a scalable site so you can add more capability to it, you can change capability. "Over its lifetime, which was just shy of a decade, they added all kinds of technology to it that changed from how it was at the beginning to how it was at the end. That becomes sort of a blueprint for computing as you and I know it."

The start of commercial computing

Today, technology names are some of the most valuable brands in the world. But even in the 1940s, technology was able to captivate the media. With Bletchley Park's Colossus still a state secret, ENIAC was free to grab the headlines. A New York Times headline from February 1946 read 'Electronic Computer Flashes Answers, May Speed Engineering.' In other publications, ENIAC was described by the press of the time as a "mechanical brain" and "electronic Einstein." ENIAC had an early technology marketing campaign. Javier García, academic director of the engineering and sciences area at the U-tad University Center, Spain, explains that as funding dried up after the end of the war, the creators of ENIAC produced and exhibited a film about its operation to drive interest. In one marketing trick, the team fitted panels of lights covered with numbered ping-pong balls, which lit up while the machine carried out an operation, to impress viewers. "They were useless. Mere aesthetics. But in the popular imagination, it has remained as the image of the first computers. Just look at science fiction films. In fact, to reach more people, these bulbs are defined as an electronic brain.
An absolute marketing success,” Garcia told El Pais. While neither ENIAC nor EDVAC, nor the British EDSAC, were sold on the market, Mauchly and Eckert’s work was an important leap forward for the commercialization of computing. The two launched the first computer company, Electronic Control Co., in 1946 after leaving UPenn over patent disputes. Unable to find a bank or investor that would lend them money, the two borrowed $25,000 from Eckert's father to get the
Photography courtesy of the U.S. Army
business off the ground. Due to financial issues, the renamed Eckert-Mauchly Computer Co. was sold in 1950 to Remington-Rand. Eckert-Mauchly developed two successor machines: BINAC was the US's first stored-program computer in 1949, while UNIVAC didn't launch until 1951, after the acquisition by Remington-Rand. With varying degrees of success, these helped kickstart the sale and use of computers for dedicated business purposes in the US. "At that point in time IBM was in the business," says Thompson. "IBM did a better job of making that pivot and commercializing, but Eckert and Mauchly were there at the beginning and they started producing standardized computers that could be built in volume for the day – that is, more than one – from a standard set of designs, and sold to customers to do a general set of tasks. So that's a pretty big switch from a machine that's built for a purpose. "Within 10 years of its birth and five after having been shut down, we were building general-purpose commercial computers. It created a whole new industry. We would
not have programmers, we would not have system designers, we wouldn't have all kinds of things that we have [today]. Not 15 years out from ENIAC we're building computers to go to the Moon." Northrop accepted the first BINAC in September 1949, but it reportedly never worked properly after it was delivered and was never used as a production machine. Northrop blamed EMCC for packing the machine incorrectly, while Eckert and Mauchly said Northrop had assembled it poorly and wouldn't let their engineers onsite to fix it. However, after the Rand acquisition, the first UNIVAC was successfully delivered to the United States Census Bureau in March 1951. The fifth machine – built for the US Atomic Energy Commission – was famously used by CBS to correctly predict the result of the 1952 presidential election, in a marketing scheme cooked up by the pair. Though not in name, Eckert and Mauchly's company lives on today. Remington-Rand merged with Sperry Corp. in 1955, and then with Burroughs in 1986, to become Unisys. Eckert retired from Unisys in 1989.
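The Monte Carlo neutron work mentioned above is easy to show in miniature: fire simulated particles through a slab of material, sample random free paths and collision outcomes, and count how many emerge. The sketch below is an invented modern illustration, not ENIAC's program; the function name and every parameter value are arbitrary assumptions:

```python
import random

def transmission(thickness, mean_free_path=1.0, p_absorb=0.3,
                 n=100_000, seed=1):
    """Estimate the fraction of neutrons passing through a 1D slab.
    This shows the shape of the method, not a real materials
    calculation - all physics constants here are made up."""
    rng = random.Random(seed)
    passed = 0
    for _ in range(n):
        x, direction = 0.0, 1.0  # position and travel direction
        while True:
            # Sample a random free path from an exponential distribution
            x += direction * rng.expovariate(1.0 / mean_free_path)
            if x >= thickness:
                passed += 1   # escaped out the far side
                break
            if x < 0.0:
                break         # bounced back out the near side
            if rng.random() < p_absorb:
                break         # absorbed in the material
            # Otherwise scatter into a new random direction
            direction = rng.uniform(-1.0, 1.0)
    return passed / n

print(f"transmitted: {transmission(thickness=2.0):.1%}")
```

Averaging many random histories instead of solving the transport equations directly is exactly the trick the ENIAC-era work popularized; only the scale has changed.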
Rebuilding EDSAC: the first real computer EDSAC was a landmark in computer history. A team of volunteers are rebuilding it, using nothing more than photographs and memories
The computer industry is careless of history. It may have utterly changed our lives through digitization, but in the process it has neglected its own records. The first true computers were an achievement comparable to a Moon landing, but in some cases, nothing remains of them. Back in 1949, EDSAC was probably the first truly practical computer to go into everyday use. It spawned the first successful commercial computers and the first software libraries. But sixty years later, when engineers and historians wanted to understand it, they had a problem: there were no proper records of how it worked. "It was a giant jigsaw puzzle with half the pieces missing," says Andrew Herbert, the retired computer scientist who for a decade has led a project to rebuild EDSAC at the UK's National Museum of Computing in Bletchley. The original EDSAC was built from vacuum tubes and stored data in tubes of mercury, before semiconductors enabled silicon chips. Within a decade, it was replaced, and the original machine was scrapped. All Herbert's team had to work on was a set of photographs and a few notebooks.

Birth of an era

In the aftermath of World War II, the potential of electronics was obvious. Radar had won aerial battles, while Bletchley Park's Colossus automated codebreaking, rendering the German Lorenz code transparent and shortening the war. But Colossus was hardwired for its wartime work; scientists and the military wanted a new generation of systems to tackle different jobs. Ideally, they wanted systems which could be applied to any problem. The US wartime computing program produced ENIAC. It was a general purpose system, but it was delivered too late to help against the Nazis, and it had to be physically rewired to run different programs (see article, p56). The US government held a tech summit,
and invited experts from Britain. Visionary computing pioneer John von Neumann wrote notes on the next steps. He suggested systems which could load programs from a data store, and the race was on. The "Baby" system from Manchester was the first to run stored programs in 1948, but that was just a test. The first successful general purpose computer arrived in 1949 in Cambridge: EDSAC, or Electronic Delay Storage Automatic Calculator. Built by Professor Maurice Wilkes at Cambridge University's Mathematical Laboratory, EDSAC was supported by a small team of technicians, and performed calculations for scientists. In its short life it helped three of them win Nobel Prizes. It also laid the foundations for the theory and practice of computing. EDSAC saw the invention of the subroutine and the creation of the first software library. Its logic was built from thousands upon thousands of vacuum tubes, wired by hand into metal shelves in six-foot racks. It consumed 11kW of power, and performed 650 instructions per second. Its memory, held in vibrating tubes of mercury, amounted to about 1kbyte in today's terms, although it was arranged as 512 18-bit words, instead of today's bytes. Data was fed in on paper tape, and the results came out on a teleprinter, with a cathode-ray tube display to show the status of its memory. "Maurice Wilkes wanted something which could do calculations that would otherwise have required paper and pencil," says Herbert. It performed calculations far faster than a human could, and rapidly became a University workhorse. In 1949, it ran its first program - a square root generator created by Beatrice "Trixie" Worsley, who would become the world's first computing PhD. In 1951 it found what was then the largest known prime number (79 digits), and in 1952 Sandy Douglas created the first video game, allowing operators to play noughts-and-crosses (tic-tac-toe) on EDSAC's cathode ray tube display.

Peter Judge Global Editor

By July 1958, it had been superseded by EDSAC 2. All work shifted to the new machine, and EDSAC was unceremoniously decommissioned. It was stripped of its parts, and its metal framework was scrapped. Before its demise, EDSAC had revolutionized academic computing - and also kicked off commercial computing as we know it today. British restaurant chain J Lyons & Co sponsored EDSAC, and Lyons' LEO 1, the world's first business computer, was developed from the EDSAC design.

The story begins

It was 2011 when the National Museum of Computing at Bletchley Park announced a unique project. It would build a working replica of EDSAC, in a project funded and inspired by Cambridge entrepreneur Hermann Hauser. Hauser had helped found Acorn Computers (famous for the BBC Micro), as well as other tech companies including ARM. But as a PhD student he had worked with EDSAC veterans, and knew its significance. In 2010, he asked former Cambridge professor David Hartley if EDSAC could be rebuilt. Hartley thought it was impossible, but Hauser pushed, asking: "Why don't you find out and let me know?" Andrew Herbert, one of the Museum's trustees, was the obvious person to take on the possible project. A Cambridge computer scientist, he lectured under EDSAC designer Maurice Wilkes in 1978, before a career in which he led the Microsoft Research lab in Cambridge and then became EMEA head of Microsoft Research, finally retiring in 2011. It rapidly became clear this was not a simple job, but emphatically worth doing, says Herbert: "We're getting a better understanding of our computer heritage, and celebrating a triumph of British computer technology." It's been one of the greatest retirement projects imaginable for Herbert: "It's men in a shed. It's the pleasure of still using your brain,
and learning new skills - even for those who are very experienced.

Working from scratch

Given the task of making a computer, the original EDSAC builders worked from scratch. A group of radar and radio engineers accustomed to analog electronics, they were pioneers who invented digital thinking. With almost no records to guide them, the reconstruction team had to go through that same process, working out how antique analog circuits could produce digital results. The reconstruction process has taken ten years so far, while the original EDSAC was built in three years. Herbert explains that the pioneers had huge resources by the standards of the day, and a large team: "The original team were steeped in radio and radar technology, and worked six-day weeks with a full budget." The reconstructors are part-timers, and they have had to think their way into the pioneers' minds and then rebuild the infrastructure needed to make the systems again. "We are working with volunteer labor. You get a day or a day and a half from people, and you can only work on one bit at a time." Some of the volunteers are people in their 80s: "They had their first contact with electronics in National Service." In 2012, when Herbert's team started work, nothing of EDSAC survived. The whole machine had been taken apart and unceremoniously scrapped when EDSAC 2 replaced it. It wasn't seen as a valuable object, just a stepping stone, explains Herbert. The traces that remain are random: notebooks, diagrams, and photos - and some of them are self-contradictory. "There are some photos, but you can't tell which are from the final or the early days of EDSAC. There's very little about the physical construction or actual engineering. This has been worked out from scraps of evidence." One issue was that EDSAC was not a single object with one design. It changed as the pioneers built it, and changed further during its life. New ideas replaced older ones, and sections were improved and rebuilt.
It quickly became clear that the reconstruction team had to make decisions. It wasn’t possible to make an exact “replica.” Instead, the team agreed a goal. The reconstruction would be a working system, which could be maintained by staff at The National Museum of Computing and seen working by visitors. It would use the original designs and detective work in the same 1950s style, and use 1950s technology where possible.

These ground rules mean that the reconstructed system has to be more reliable than the original, Herbert says. The analog
systems of the original EDSAC could drift out of alignment, and it needed constant tweaking: “EDSAC was operated by a well trained cadre of [female] operators, who adjusted settings to keep it stable.” That won’t be possible in a museum: “The staff are enthusiastic, but they have other work to do, so it must operate without constant tending.” When it is in use, the EDSAC reconstruction will have modern sensors monitoring its vital signs, and warning if adjustments are needed.

Memory issues

The team had to change some aspects of the design for the modern world. “The original design had exposed mains wiring and poor earthing,” says Herbert. “Health and safety hadn’t been invented then.”

But there were other fundamental problems. The main memory used mercury delay lines, which store information as sound in a standing wave on a column of mercury. The technology had evolved from wartime radar systems, and was introduced to early computers by J Presper Eckert, who used mercury because of its ideal acoustic properties.

Mercury delay lines were unreliable, and mercury is poisonous - but that wasn’t the only problem. “In fact with mercury, the issue wasn’t health and safety,” Herbert explained. The tubes could have been filled carefully, and drained and reassembled every couple of years, but there was another issue: “That much mercury would cost £160,000 ($221,000) on the open market. We didn’t have that in the budget.”
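Whatever the medium, the principle of a delay-line store can be sketched in a few lines: bits exist only as pulses in flight, and must be caught, reshaped and re-injected at the end of the line. The toy model below is purely illustrative - the word length, timing and regeneration electronics bear no resemblance to the real machine.

```python
from collections import deque

class DelayLineMemory:
    """Toy model of a delay-line store: bits circulate through a
    fixed-length delay and are regenerated each time they emerge.
    Reading or writing happens only when the target bit reaches
    the end of the line, as in EDSAC's mercury tanks."""

    def __init__(self, length):
        # The line starts full of zeros; `pos` tracks which bit
        # of the stored word is currently emerging.
        self.line = deque([0] * length)
        self.pos = 0

    def tick(self, write_bit=None):
        """Advance one pulse time. The emerging bit is re-injected
        (regeneration) unless we overwrite it."""
        bit = self.line.popleft()
        out = bit if write_bit is None else write_bit
        self.line.append(out)          # reshaped pulse re-enters the line
        self.pos = (self.pos + 1) % len(self.line)
        return out

    def write_word(self, bits):
        """Wait for position 0, then feed the word in, one bit per tick."""
        while self.pos != 0:
            self.tick()
        for b in bits:
            self.tick(write_bit=b)

    def read_word(self, n):
        """Wait for position 0, then capture n emerging bits."""
        while self.pos != 0:
            self.tick()
        return [self.tick() for _ in range(n)]

mem = DelayLineMemory(length=8)
mem.write_word([1, 0, 1, 1, 0, 0, 1, 0])
assert mem.read_word(8) == [1, 0, 1, 1, 0, 0, 1, 0]
```

Because reading regenerates the pulses, a word survives indefinitely - but only if the regeneration keeps working, which is why drift in the analog electronics made these stores so temperamental.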
Keeping with the spirit of 1950s technology, Herbert’s colleagues had a suggestion: if they couldn’t use mercury, why not use a later evolution of the technology? Later in the 1950s, mercury memories were replaced by solid nickel wires, and these “torsion wire delay” memories stayed in use until the 1960s. “We used nickel delay lines - we wanted something authentic, but easy to maintain,” explains Herbert. They were still obsolete 50 years ago, however, and the team had to rediscover how to make memory systems. But their behavior was easier to predict and reproduce.

The hard work started with the basic structure of EDSAC. It was built from racks, each of which held 13 “chassis” pieces - shelves made of folded sheet metal. Each chassis held thermionic valves (vacuum tubes), plugged into sockets and interconnected with wiring and passive components (capacitors and resistors).

Reconstructing these basic modules was going to be a struggle, because they didn’t exist any more. “The original machine was sold off for scrap,” says Herbert. Luckily, one chassis had escaped destruction: “Someone bought a rack to make bookshelves, and this guy took a chassis to scavenge and never got round to it.” Sixty years on, this chassis emerged from a shed - one of several lucky breaks for Herbert: “You never know what is going to turn up.”

Detective work

The EDSAC team knew how to make new chassis modules for the system, but what
Cambridge News, June 1955
62 DCD Magazine • datacenterdynamics.com
should they put in those new shelves? The valves were visible, but the wiring was hidden under the folded metal, and the photos did not show it. Luckily the photos were high quality, and could be magnified. “Magnify the photos, and you can see a label, which might say ‘half adder,’” says Herbert. “With deduction and reverse engineering, we had to ask: what are the circuits which would do that?”

The team knew digital electronics, and how to build basic functions like AND and OR gates. They established some templates and had another stroke of luck - some hand drawn circuit diagrams turned up for two of the chassis modules, which gave the team an idea of the kind of circuits the pioneers had used. They also had handwritten notes which outlined the functional breakdown of the system.

Finally, the photographs helped, because the types of valve were visible, and the team could puzzle out what they might be for: “We could recognize patterns in the layout of valves,” says Herbert. “Things which are connected together would be put close together.”

Looking at the photos, the team started to see AND gates and amplifiers, and recognized higher functions like adder circuits: “It was mostly a bottom-up process,” says Herbert. “We knew from some written documents what functions there were.” The team knew what it was looking for but didn’t know the route there: “It was like a jigsaw puzzle. By the time you’ve got near the
end, it’s obvious where the last pieces go.”

The modular design helped too. Like modern systems, EDSAC used blocks of technology to do particular jobs, and replicated them: “There’s one chassis which is repeated 43 times, making up about a third of the machine.”

You can tell Herbert is pleased with the detective work: “Obviously, it’s quite likely there are bits of our EDSAC which weren’t part of the original. But we know each part has the same function and works the same way, and where we can see a chassis, ours is identical.”

Army surplus valves

There were more compromises to be made in the construction process. After a lot of discussion, the team set themselves what sounds like an ambitious goal: to build the logic entirely with traditional vacuum tube valves, because that seemed to be the essence of the original machine.

On the face of it, this sounds insane. These valves haven’t been used in computers for 60 years. They were famously unreliable, and were replaced by transistors in the 1960s. Surely there can’t be enough authentic 1950s valves available after 70 years?

Surprisingly, there are plenty: “In terms of parts, there are enough valves out there to build another three EDSACs - or to keep this one going for many years,” says Herbert, proudly. “These valves were in radar systems from the 1940s to the 1960s,” he explains. During this time, manufacturing improved and reliability went up.
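The ‘half adder’ label the team spotted in the photographs points to a standard building block: a sum output (an XOR of the inputs) and a carry output (an AND), two of which pair up into a full adder for serial, bit-at-a-time arithmetic. A gate-level sketch of the logic being reverse-engineered - illustrative only, since the hard part was the valve circuits implementing these functions:

```python
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return 1 - a

def half_adder(a, b):
    """Sum = XOR(a, b), carry = AND(a, b) - the building block
    a 'half adder' chassis label implies."""
    xor = AND(OR(a, b), NOT(AND(a, b)))   # XOR built from basic gates
    return xor, AND(a, b)                  # (sum, carry)

def full_adder(a, b, carry_in):
    """Two half adders plus an OR make a full adder, the repeating
    unit of a serial arithmetic chain."""
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, OR(c1, c2)

def serial_add(word_a, word_b):
    """Serial addition, one bit per pulse time, least-significant
    bit first - the natural order for a delay-line machine."""
    carry, out = 0, []
    for a, b in zip(word_a, word_b):
        s, carry = full_adder(a, b, carry)
        out.append(s)
    return out

# 3 + 5 = 8, as 4-bit LSB-first words
assert serial_add([1, 1, 0, 0], [1, 0, 1, 0]) == [0, 0, 0, 1]
```

Recognizing repeated valve groupings as gates, then gates as adders, is exactly the bottom-up pattern matching Herbert describes.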
Plenty of these valves were made, and they survive because of the vagaries of military procurement: “Governments always over-procure spares, and when the systems are withdrawn, the spares all go out to surplus dealers. They have them in a hangar, waiting for a call from me!”

Herbert clearly enjoyed his negotiations with military surplus dealers: “The dealer would say the valves should be expensive because they are rare,” he says. In return, Herbert would point out that there were literally no other buyers for them. The EDSAC storeroom soon filled with valves, still in their original cardboard packaging. “They’re in good condition considering they’ve been in a box for 60 years, though some have deteriorated.”

A wiring compromise

In the wiring underneath the shelves, it was a different story. 1950s resistors and capacitors can still be found, but the team made the decision to use modern passive components, which are smaller and more reliable.

“Some people were uncomfortable when we decided to buy modern capacitors and resistors,” he says. “We get given old ones, and they are the right shapes and size and color, but they drift out of spec. If we’d gone down that path, it would have given us huge reliability issues.” Using 1950s capacitors and resistors would have meant another storeroom for those components, and a continuous headache for the Museum when it eventually operates the system, adding massively to the maintenance required.

In Herbert’s view, the modern resistors still keep faith with the spirit of the project, as the same wiring circuits are used: “Fundamentally, providing it’s a 100k resistor, it doesn’t matter whether it is modern, and the size of an ant, or a 1940s unit the size of a finger.”

There was another small problem in putting the shelves together. While the project had components for the circuitry, and valves, it could not get valve holders. In the 1950s they did not wear out, and spares weren’t kept.
“The hardest thing to get was the valve holders,” says Herbert. “In the end we got them made in China. They cost a penny each and we got thousands.” The team sent a specification to China, but it wasn’t plain sailing. “They all came back a little bit tight,” Herbert explains - so he had to
make adjustments to around 12,000 sockets. “I spent many happy weeks with a hand drill and a little burr.” It’s the kind of engineering which is unknown in computers now, but vitally important for the EDSAC team. Badly fitting holders would be an Achilles’ heel, says Herbert. If the fit is too loose, it invites corrosion on the legs of the valves, or loose connections needing constant waggling. Finishing that stage of the project was something of a relief, and the EDSAC reconstruction now has enough spares for 20 years.

A message from the past

“During the build, we were going up the same learning curve as the pioneers,” says Herbert. Much like their forebears, the team built the replica EDSAC one subsystem at a time, but they had one major benefit: as each subsystem was constructed, it could be tested against a digital simulation of the whole of EDSAC.

The first fundamental part to make was the heart of the system: the clock, which provides a digital pulse to synchronize different parts of the machine. This was vital, given that the behavior of the analog parts of the system could vary wildly. At every stage, the reconstruction team had to deal with noisy signals, struggling to get the system to give a clean digital voltage.

One big issue was the use of monostables or “monoflops,” which produce a pulse of predetermined length. EDSAC was originally built with monostables, but the reconstruction team found they were unreliable, drifting off their settings. Reluctantly, they shifted to using bistables or “flip-flops,” which switch between two stable states: “We found in several situations, we couldn’t make monostables work as well as we wanted.”

Unexpectedly, the decision was vindicated, years into the project: “A chap turned up, who was actually one of the engineers at Cambridge who joined just after EDSAC was decommissioned. One of his first jobs was to clear out cupboards, including the EDSAC schematics.
He came to the computer museum, and said ‘I think I’ve got these drawings in my attic.’” Looking at these designs from late in the life of EDSAC, the team found the original pioneers had also switched to flip-flops: “We’ve been following the same path,” Herbert realized. “All these diagrams had bistables where we’d had monostables.”

“Some of the drawings exactly corresponded to our design, some had significant variations, but in all cases, the differences were in parts of the machine we weren’t happy with. Where the drawings mismatched, it was evidence of progress in design.”
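The distinction matters because a monostable has only one stable state - its output pulse length is set by analog component values, which drift - while a bistable holds either of two states indefinitely. A minimal model of a bistable, as a cross-coupled NOR latch (purely illustrative: EDSAC’s valve flip-flops were analog circuits, not code):

```python
def NOR(a, b):
    return 1 - (a | b)

def sr_latch(s, r, q, q_bar):
    """One settling pass of a cross-coupled NOR latch. With S=R=0 the
    outputs hold their previous state - the two stable states that make
    a bistable immune to the timing drift that plagued monostables."""
    for _ in range(4):                 # iterate until the feedback settles
        q_new = NOR(r, q_bar)
        q_bar_new = NOR(s, q_new)
        if (q_new, q_bar_new) == (q, q_bar):
            break
        q, q_bar = q_new, q_bar_new
    return q, q_bar

q, qb = 0, 1
q, qb = sr_latch(1, 0, q, qb)   # set
assert (q, qb) == (1, 0)
q, qb = sr_latch(0, 0, q, qb)   # hold: the state is retained indefinitely
assert (q, qb) == (1, 0)
q, qb = sr_latch(0, 1, q, qb)   # reset
assert (q, qb) == (0, 1)
```

The “hold” case is the point: a bistable’s state depends only on which stable point it last settled into, not on component values staying in tune.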
The team was very glad to have this message from the original EDSAC pioneers, though a bit frustrated, because they’d had to make the decision on their own. “Going to bistables accelerated our progress. If we hadn’t done that, we would still be fighting the same problems.”

A more reliable EDSAC

The team built an EDSAC that was as close as possible to the original, but it didn’t reject the use of modern tools in building it. The pioneers used oscilloscopes and voltmeters which were new technology, and still primitive. The reconstruction team had Arduino modules, laptops, simulators and modern signal analyzers. They’ve been able to make sure each module works by testing it against simulations based on what the logic designers in the team came up with. “The logical designers’ work remains pretty solid - considering it was done by someone reading historical records which were incomplete.”

“I think in some ways we may end up with a more reliable EDSAC than the original machine,” Herbert believes. “We’ve not been building it in the rush the pioneers were, and we have better diagnostics. It will be better tuned and we can diagnose problems more deeply.

“They didn’t have logic analyzers, or Python code to crawl all over the traces and compare them with the formal logic.” The modern team uses a simulator to generate logic traces, which can be compared with actual readings from inside the reconstructed machine.

The use of valves continues to push the
team back to the basics, away from the “Lego” world of transistors and microchips. “EDSAC is not Lego: you have to treat it as a big complicated machine,” he explains. “The signal can be soft and squishy. It climbs up a ramp, you get echoes and a ringing effect. It is tough to get good clean pulses.”

The pioneers used a signal level (a logical 1) that was any voltage above 20V. “In some parts of the machine it was 40V or 50V. Sometimes it would drop down to 15V. Sometimes it would work, and sometimes it wouldn’t.”

To build any part of the system, the team has to start with the signals: “The start of commissioning is getting respectable signal voltages, with sharp enough edges. Then it comes down to timing issues.” The team could build the processor and arithmetic units as autonomous systems, but they had to be synchronized: “There’s a lot of low level electrical stuff, but once we’ve got that sorted, it springs into life.”

Final stages

Work continued during the pandemic in 2020. The museum was shut, but work was distributed. “Teams took home a chassis to work on and, when they could, they reassembled them,” Herbert recalls. “Groups collaborated through Zoom calls into the museum. So instead of the whole team gathering inside the machine, one was inside while the others linked from home.”

During the easing of restrictions in March of 2020, volunteers put a large part of the machine together, and it worked for the first time: “It came back with the answer to the problem they were working on in December.”
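One standard way to turn those ‘soft and squishy’ voltages into clean logic - the discipline commissioning demands - is thresholding with hysteresis: switch high only above one level, and back low only below another, so ringing around the threshold cannot chatter. In the sketch below, the 20V figure comes from Herbert’s description; the lower threshold is an assumption for illustration:

```python
def digitize(samples, high=20.0, low=10.0):
    """Turn a noisy analog trace into clean logic levels using
    hysteresis: switch to 1 only above `high` volts, back to 0 only
    below `low`, so ringing between the thresholds can't chatter.
    (Threshold values here are illustrative, not EDSAC's.)"""
    state, out = 0, []
    for v in samples:
        if state == 0 and v >= high:
            state = 1
        elif state == 1 and v <= low:
            state = 0
        out.append(state)
    return out

# A pulse that overshoots, rings around 20V, then decays away
trace = [2, 5, 38, 24, 18, 22, 19, 21, 8, 3]
assert digitize(trace) == [0, 0, 1, 1, 1, 1, 1, 1, 0, 0]
```

A naive single threshold at 20V would flip this trace on and off through the ringing; the hysteresis band reads it as one clean pulse - which is also what makes digitized traces comparable against a simulator’s ideal output.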
Making history

The project is now in its final stages. All the modules have been built and tested, and the machine can have runs of up to a couple of hours: “That will meet our need for demonstrating it in the museum.”

But integration isn’t simple: “Several parts of the machine are independently clocked from each other, and have to be brought into synchronization,” he says. “You can be held back for a whole cycle - we wouldn’t tolerate that in a modern computer design!”

Herbert expects to spend a busy summer getting the machine working properly before opening to the public later in the year. “Our next milestone is when we are able to successfully execute all the arithmetical and general instructions in the instruction set,” he told DCD early in 2021. “We are about halfway through that. We’ve knocked all the big bugs out of the machine.”

The group is still learning. “As we commission the machine, we’ve discovered some things are redundant. Sometimes you don’t need amplifier stages, for instance. There are some valves that aren’t doing anything - and we suspect that was the case in the original!”

The next step is to get the entire instruction set working, and then to operate using the delay lines for memory. “The delay lines have been built and tested standalone, so we anticipate that being fairly straightforward.” Eventually, the machine will operate with its traditional input and output via paper tape and a teleprinter. The “initial orders” have been coded onto tape: “That’s like a boot ROM.”

“Then we will be done,” says Herbert.
“We will have achieved the brief of the replica project.”

Life support

But the story won’t stop there, because the Museum will then have the issue, as Herbert puts it, of “how to keep the damn thing going.”

The reconstruction team has had an incredible depth of experience to draw on, including experts like Peter Lawrence, and chief designer Chris Burton, who worked for Ferranti in Manchester from the 1950s, designing systems like the Pegasus. The group has had a huge spirit of camaraderie, and drawn in pioneers like the early programmers Margaret Marr, Joyce Wheeler and Liz Howe, all of whom shared memories, checking and approving aspects of the final design.

But that team won’t be available once the
machine is handed over to the museum. “At the moment you need experts to maintain it,” says Herbert, but that won’t be feasible in the museum, so the restorers will be handing their recreated system over with its own life support system. A small group is looking at creating monitors using modern gadgets like the Raspberry Pi, which will track signals from the system. Ideally, these will be able to help museum staff troubleshoot, issuing instructions like “Change valve five on chassis six.”

That’s an intrusion of modern technology, but it is entirely appropriate, he says. “We’ve chosen not to compromise on how EDSAC itself works. It’s as authentic as we can make it. But we’ve had every luxury we can afford on how we build and monitor it. If you were keeping an elderly genius alive, you would use every medical technique.”

The meaning of it all

In the end, what is the significance of this achievement? EDSAC will run at around 650 instructions per second, and consumes around 9kW of power. Today, the average phone goes more than a million times faster than that on a tiny power budget. But EDSAC was revolutionary for its time, Herbert points out: “That’s 1,500 times faster than a postdoc with a hand calculator.”

And it appeared at a time when computing was developing fast, and no one was sure where it would lead. “Cambridge was building a machine to work in the maths lab. They wanted to evaluate it to see what a proper computer could be like. They were surprised by how useful it was and how long it kept going.”

It could have been forgotten completely, he says: “Once EDSAC was working they were fed up with it. It was built to be a service machine.
"We’ve discovered some things are redundant There are some valves that aren’t doing anything and we suspect that was the case in the original!”
It was handed over as a working service, with an engineering team around it.”

The team who built it were excited to move on to EDSAC 2, which made a big step, using a bit-slice design instead of EDSAC’s monolithic architecture. “EDSAC 2 used several identical chassis to handle one bit on each one. EDSAC 1 was monolithic. If one chassis blew up, the system was broken. With EDSAC 2, you could pull in a spare one.” EDSAC 2 also took software deeper. “EDSAC 1 was hardwired, but EDSAC 2 was microprogrammed, so they could try out new instructions.”

But many of the founding ideas in practical computing flowed from EDSAC. It allowed libraries of subroutines: “That was well done; it was a long while before other machines had such a well designed interface. Others needed machine code, but Cambridge programmers didn’t have to know machine code.”

“It wasn’t the biggest, the fastest or the best engineered,” Herbert sums up, “but it changed the world more than the others.”

It kicked off commercial computing in a big way, through the involvement of the J Lyons and Co restaurant chain. Lyons senior management recognized that admin costs were eating away at their margins, and wanted to automate receipts and inventory. They heard that computing was happening in America, and visited the team building Univac, who suggested that EDSAC might be a better fit for them. Lyons provided an engineer, and funding which helped get EDSAC built, and then built its own machine, LEO - a clone developed from EDSAC, and the first computer in the world to work on commercial applications.

Herbert hopes the project will revive disappearing expertise and inspire students to learn about the technical challenges of the pioneers, creating a living educational resource. “As we have our struggle to work with EDSAC and to get it working, I think back to the pioneers led by Wilkes. They didn’t have modern laptops, signal analyzers and Zoom to help them do the work.
They had quite simple early oscilloscopes and voltmeters, and we wonder how they went about debugging the machine. They didn’t even have the knowledge that we had - that it would eventually work. For them at times it must have appeared to be very, very daunting. And, of course, they had no previous experience of building a computer - the team had come from various backgrounds of radar, instrument making, radio, and so forth.”

For Herbert, it’s been exciting: “We are privileged that we know how EDSAC works.”
The Smartest Battery Choice for Resilient, Profitable Data Centers Reduce Your Risk, Cost, Hassle — with Proven Technology. We get it. Data center leaders and CIOs face endless demands — greater efficiency, agility and operational sustainability — while mitigating risk and lowering costs. Deka Fahrenheit is your answer. It’s an advanced battery technology for conquering your biggest data center battery challenges.
The Deka Fahrenheit Difference Deka Fahrenheit is a long-life, high-tech battery system designed exclusively for fast-paced data centers like yours. Our system provides the most reliable and flexible power protection you need at the most competitive Total Cost of Ownership (TCO) available.
Your Biggest Benefits:

• Best TCO for Data Centers - Slash lifetime TCO with lower upfront cost, no battery management system required, longer life and less maintenance.
• Proven Longer Life - Field testing and customer experience show an extended battery life that reduces the number of battery replacements over the life of the system.
• Virtually 100% recyclable - End-of-life value and recycling helps lower the cost of new batteries and ensures a self-sustaining supply chain.
• A technology known for its long history as a safe, reliable, high-performance solution - for added peace of mind.
• Expand and adapt as needed, without making a long-term commitment to an unproven battery technology chosen by a cabinet supplier.
• Trusted Battery Experts - Located on over 520 acres in Lyon Station, PA, East Penn is one of the world’s largest and most trusted battery manufacturers. We’ll be there, for the long-term.
Let Facts Drive Your Decision. Balancing your data center needs isn’t easy. Deka Fahrenheit simplifies your battery decision by comparing the TCO of a Deka Fahrenheit battery system to lithium iron phosphate.
Overall TCO: Deka Fahrenheit Wins (1036.3kWb - 480VDC Battery System)
[Chart: total cost of ownership vs. years in service, Deka Fahrenheit against lithium iron phosphate]

Data Center TCO Analysis Factors, 1MW System (1036.3kWb - 480VDC Battery System): the comparison covers initial system cost, maintenance cost per battery, replacement cost per battery, replacement labor cost per battery, and battery end-of-life value or cost. At end of life, lithium iron phosphate carries a cost of $91 per kWh, against a $33 per kWh credit for Deka Fahrenheit.

Total Cost of Ownership (TCO)*: lithium iron phosphate $832,662; Deka Fahrenheit $568,111 - approximately $264,551 in savings.
* Space calculations assume floor space costs of $60 per ft2, and Net Present Value (NPV) of 6%. Space assumptions include 2018 NFPA855 requirements with 4’ aisle. Does not include additional costs for UL9540A design changes or facility insurance for lithium iron phosphate systems. Total decommissioning costs for a 1MW Li-Ion battery based grid energy storage system is estimated at $91,000. Source: EPRI, Recycling and Disposal of Battery-Based Grid Energy Storage Systems: A Preliminary Investigation, B. Westlake. https://www.epri.com/#/pages/summary/000000003002006911/ Terms and conditions: Nothing contained herein, including TCO costs and assumptions utilized, constitute an offer of sale. There is no warranty, express or implied, related to the accuracy of the assumptions or the costs. These assumptions include estimates related to capital and operating expenses, maintenance, product life, initial and replacement product price and labor over a 15-year period. All data subject to change without notice.
Specifications: The High Tech Behind Deka Fahrenheit
• Advanced AGM front access design - decreases maintenance, improves safety and longevity
• IPF® Technology - optimizes capacity and reliability
• Microcat® Catalyst - increases recombination and prevents dryout
• Sustainably designed for recyclability - end-of-life value enhances profitability
• Exclusive Thermal Management Technology System:
 ° THT™ Plastic - optimizes internal compression
 ° Helios™ Additive - lowers float current and corrosion
 ° TempX™ Alloy - inhibits corrosion
Deka Shield Protection Allow Deka Services to install and maintain your Deka Fahrenheit batteries and your site will receive extended warranty benefits. Deka Services provides full-service turnkey EF&I solutions across North America. Ask East Penn for details.
Do you have the best battery system for your data center? You can’t afford downtime or extra costs. Contact East Penn for a full TCO analysis.
610-682-3263 | www.dekafahrenheit.com | firstname.lastname@example.org Deka Road, Lyon Station, PA 19536-0147 USA
Holistic cooling at the world's most efficient data center An EU-funded project built a data center with a PUE of 1.0148. Can that be translated to the real world?
In early 2019, a small group of researchers launched an ambitious project that they hoped would change how data centers are built and regulated. The idea? Build the world’s most efficient data center. In just a few years, they would hit that milestone, developing a system with a power usage effectiveness (PUE) of just 1.0148.

It didn’t begin that way. “We always wanted to have a showroom to highlight how our power is very clean here, and free of disturbances,” the director of the Boden Business Agency, Nils Lindh, explained. “Our view was that you don’t need any backup power or UPS type function here.”

The Swedish municipality, already home to a number of data centers, envisioned a small deployment on municipal land, simply for the purpose of showing off its stable power, primarily provided by a hydroelectric dam. To develop the project, the BBA turned to UK firm EcoCooling and Hungarian developer H1 Systems, both of whom had previously worked in Boden.

“And then as we conceptualized this idea and started talking to finance people, one of them pointed out this Horizon 2020 program,” H1’s then-general director László Kozma explained. Horizon 2020 was a huge €60 billion
research program that ran from 2014 to 2020. Nestled amongst its many tenders, Kozma found the EU was looking to build a data center with a PUE of below 1.1.

“I knew that a Hungarian company has only a two to three percent probability of being selected,” Kozma recalled. “But this might be that two percent - and we already had a good start, with an international consortium: a British cooling manufacturer, a Swedish municipality agency, and a Hungarian small/medium enterprise.”

It was time to expand the plan from a simple showroom to “something a lot more serious,” he said. The group brought in the Research Institutes of Sweden (RISE), based in the nearby city of Luleå - which was already home to a huge Facebook data center.

“And after that, we went back to this project advisor who said that there was one thing still
Sebastian Moss, Editor
missing - the big European name,” Kozma said. “There’s this unofficial list of 25 research institutions whom you have to take into your consortium to raise the probability of your winning.”

It was well known that the larger economies got most of the Horizon 2020 money - science magazine Nature found that 40 percent of the program’s cash went to Germany, France, and the UK. The group turned to the Fraunhofer Institute as the final member of the team, and Lindh concedes that political machinations were at play: “Germany being the largest contributor to the European Union, we thought it would be good to have a German research institute involved,” he said.

It worked. In October 2017, the group was awarded a €6 million contract titled ‘Bringing to market more energy efficient and integrated
data centers.’ “That was the title they gave us, and it’s what it would have been if we wrote it,” Kozma said. “Our idea and the European Commission’s idea just fitted 100 percent.” Now, the group had just 36 months to pull it off.

As the BBA began work on permitting, H1 drafted data center designs, and EcoCooling conceptualized cooling methods, the Fraunhofer Institute had one year to develop a system for synthetic workloads. “Our responsibility in the project was to design a benchmark to emulate real world applications,” Fraunhofer’s head of Modeling and Networking Reinhard Herzog said. “And based on that benchmark, we tried to evaluate if the cooling policies work under the noisy behavior of real world applications, not just the stable artificial synthetic workloads that we used as tools.”

Based on their work building smart city tools for Hamburg, the Fraunhofer Institute created a set of workloads that “resembled a smart city application with a lot of sensor data flowing in and some stream processing, and then evaluation dashboard application workloads,” Herzog said. “And the other scenario we modeled was for predictive maintenance applications.” Both were scaled up to the data center level, and designed so that the researchers could run the same workloads again and again as they tested out different cooling configurations.

“So, after all this preparation phase, it was six or seven months of building,” H1’s Kozma recalls. “The building was inaugurated in the first months of 2019. I remember it was fucking cold.” DCD visited the facility at the time, with our very own Max Smolaks making similar observations on the unusually frigid temperatures.

“What we are doing with this project is we are creating a very efficient, and therefore low cost, operating system; we are creating a very low cost building system, which is going to enable the little guys,” Alan Beresford, EcoCooling MD, told us at the time. “By little,
"Did we know in advance we’d reach that low PUE? No, at the beginning of the project, most of the people in our team thought we could reach 1.07-1.08." I mean truly small operators, compared to the world of multi-gigawatt operators: less than 100kW.” Indeed, the Boden Type One Data Center was quite small - a 500kW deployment consisting of four modular pods. One was filled with Open Compute Project CPU servers gifted to RISE by Facebook, one filled with GPUs for rendering, and another bursting with crypto mining ASICs, with the fourth left as a control room. In each of these pods, the team tried out its own approach to fighting heat: holistic cooling. “We were able to take control of the fans in the servers and slow them down,” Professor Jon Summers, RISE scientific leader in data centers, said. “And we synchronize the IT fans with the cooler fans.” Controlling the whole data center as a single system, the cooling was architected around keeping chips at a constant temperature, no matter the workload level. “There's a controller on the server that would constantly change the fan speed so that the CPU temperature was 60 degrees,” Summers said. “And as the fan’s speed changed it would send that information to an algorithm which would then tell the cooler what speeds it needs to operate at to match the fan speeds of all these servers so that you get a neutral pressure.” “It becomes a very well-balanced system, but you need the communication between the various layers.” This proved remarkably effective at eking out efficiency gains, as the whole data center’s cooling system worked in unison, rather than different aisles and servers fighting each other. “We achieved a PUE of 1.0148,” Summers
said. “Yes, insane.”

The data center building was also designed for efficiency, dropping the plenum for a chicken coop design that allows for a natural chimney effect.

“Did we know in advance we’d reach that PUE?” Kozma said. “No, at the beginning of the project, most of the people in our team thought we could reach 1.07-1.08.”

Because the team turned to holistic cooling, dropped the UPS, used a different building design, and added several other features at once, it’s hard to directly say just how big a part each innovation played in achieving a PUE record. “To answer that, I should have built a kind of a normal building next to this and measured them against each other,” Kozma said - but they only had a budget for the one system.

The location also provided advantages. “The call text from the EU was to go for the lowest PUE possible,” Summers said. “Putting an air-cooled data center in the north of Sweden, you have tons of fresh cold air,” although he added there were some challenges of dealing with the air when it was well below freezing. “Obviously we took advantage of our geographical location, but we also took advantage of the fact that we had control. We went for the lowest inlet temperature we could possibly get away with, 15°C (59°F), which is easily achievable 7,500 hours of the year.”

H1 built a simulation to test out whether the BTDC would be feasible in other locations, using historical climatic data on six European cities. The data center could remain within ASHRAE conditions for five of the cities, but in Athens it would slightly step out of the boundaries “two or three percent of the hours in a year," Kozma said. "Of course, the climate is changing, and we used historical
Issue 41 ∞ July 2021 71
data," he cautioned. There's also the issue that removing the UPS - responsible for a couple of points of PUE efficiency - is just not feasible for many locales. Still, “the experiment worked,” Summers said. "Slowing everything down allowed us to achieve a much better PUE." One issue with the result is PUE itself. "I'm very critical of PUE," Summers said. "It's not a metric that you would use to describe the energy efficiency of a data center in its entirety." PUE is the ratio of the total amount of energy used by a data center, to the energy delivered to computing equipment. Therefore, it does not penalize you for using inefficient IT hardware - you could run a 200MW IT deployment capable of a single petaflops of compute that could have a lower PUE than a 2MW deployment capable of 10 petaflops. "The problem is that we didn't have another metric that we use to represent that," Summers said. "Although the Commission was interested in us exploring other metrics, or maybe coming up with a metric ourselves, there is no simpler metric than PUE, unfortunately." The issue of PUE continues to exercise the data center sector in Europe. The EU has pledged to reach continental carbon neutrality by 2050, and the data center sector has promised to help, by reaching the goal by 2030, in a Climate Neutral Data Centre Pact. However, to convince the EU of its bona fides, the Pact has promised to create a new metric which will improve on PUE. With all of PUE’s flaws, it’s still one of the few ways we have of measuring efficiency. At an annual PUE of 1.0148, BTDC outperformed every other facility in the world - including the previous frontrunner, the NREL's Energy Systems Integration Facility, which reached 1.032 in 2017. Most of the commercial world is well short of this mark, but hyperscalers like Google and Facebook boast PUEs of 1.10 or less (in cooler countries), thanks to huge investments in energy efficiency, and some economies of scale. 
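The control loop Summers described earlier - every server chasing a CPU temperature setpoint, and the cooler chasing the servers - can be sketched in a few lines. This is a minimal illustration only: the gains, the toy thermal model, and the fan limits below are all assumptions, not the BTDC control code.

```python
# A minimal sketch of the holistic cooling loop: each server adjusts its
# own fan to hold the CPU at a 60C setpoint, and the room cooler matches
# the combined server airflow so pressure across the IT space stays
# neutral. All constants and the thermal model are illustrative.

SETPOINT_C = 60.0    # target CPU temperature
INLET_C = 15.0       # lowest inlet temperature used at Boden
GAIN = 0.02          # proportional gain (assumed)

def server_fan_step(cpu_temp_c, fan_rpm):
    """Proportional control: spin up when hot, slow down when cool."""
    new_rpm = fan_rpm * (1.0 + GAIN * (cpu_temp_c - SETPOINT_C))
    return max(1000.0, min(12000.0, new_rpm))  # clamp to fan limits

def cooler_speed(server_rpms):
    """Match cooler airflow to total server airflow for neutral pressure
    (assumes one cooler fan moves ten server fans' worth of air)."""
    return sum(server_rpms) / 10.0

def cpu_temp(load_w, fan_rpm):
    """Toy thermal model: temperature rise shrinks as airflow grows."""
    return INLET_C + load_w * 1500.0 / fan_rpm

# Three servers at different loads converge on the same chip temperature.
loads = [120.0, 200.0, 90.0]     # CPU power draw in watts (illustrative)
fans = [4000.0] * 3              # starting fan speeds in rpm
for _ in range(100):
    temps = [cpu_temp(l, f) for l, f in zip(loads, fans)]
    fans = [server_fan_step(t, f) for t, f in zip(temps, fans)]

print([round(cpu_temp(l, f), 1) for l, f in zip(loads, fans)])  # all ~60.0
print(round(cooler_speed(fans), 1))  # cooler speed tracks total demand
```

The point of the design is visible in the sketch: each fan settles at the minimum speed that holds the setpoint, and the cooler follows the servers rather than a room thermostat - the "communication between the various layers" that stops fans fighting each other.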
It’s possible that hyperscalers already use some form of holistic cooling. "We found out that they know what they're doing with cooling at Facebook, but they
haven't told the world about it," Summers said. "I think when the European Commission discovered that they spent all this money to find out what Facebook are doing," he trailed off. "That's a cynical way of looking at it, but that wasn't the whole idea of the project, anyway."

Instead, BTDC hoped to prove just how efficient data centers could be if they put efficiency at the forefront of design, and to create a more open approach to holistic cooling, by putting the work in the public domain. The project may also pressure server manufacturers to open up fan controls, and could help European regulators, which - despite the work of the Pact - still look ready to crack down on this energy-hungry industry.

One hurdle to applying the work is that holistic cooling is not feasible for colocation data centers, where servers are owned by tenants, who will not hand over control to the colo owner. Colos simply cannot control every fan of every server in every rack. Still, the project is finding life in enterprise data centers used by single clients. EcoCooling "uses holistic cooling control in all its deployments now," Summers said. "I think
that their customers are seeing the value in that immediately." For H1, there has also been some demand. "There was a Bulgarian company who wanted what was built in Sweden," Kozma said. "For a small Hungarian company, it didn't make sense to go to Bulgaria to build, so we helped them to design it and after that a local company will build it. Another will be built in Norway with the same idea." Fraunhofer, too, plans to commercialize the work. "The tool itself is open source," Herzog said. "But we're using it to make studies on the scalability of our applications, and on behalf of cities when they are trying to design what kind of application they need to rent from data centers." As for Boden Type One, it's still there. Instead of knocking it down, asset owners H1 and EcoCooling sold the project. It's now set to be expanded and used as one of Europe's largest
"I'm very critical of PUE. It's not a metric that you would use to describe the energy efficiency of a data center in its entirety. The problem is that we didn't have another metric"
Understanding Chia, the cryptocurrency straining storage markets

Keeping up with the cryptocurrencies
As I write this, cryptocurrency markets are going through yet another brutal crash. By the time you read this, who knows? Perhaps the decline has proved permanent, or perhaps it has rebounded to new heights, gaining new astronomical valuations on the back of an Elon Musk tweet.

This volatility has long roiled cryptocurrency trading, spilling out into the wider world. During periods of peak demand, GPU supplies have dwindled, causing companies like Nvidia to apologize to their traditional partners. During periods of crypto collapse, GPUs have flooded the second-hand market, causing companies like Nvidia to apologize to their investors.
Throughout this, the immense energy requirements of crunching pointless math to mine the virtual token have burned through untold mountains of coal, oil, and natural gas, leading to emissions matched only by nation-states. It has proved a fundamentally broken and destructive model, devouring all in its path to produce something with no inherent value. Its assumed value, meanwhile, has fluctuated erratically, making and breaking millionaires in a matter of moments.

There’s another way, say proponents of a new form of cryptocurrency. One that would require far less power, would leave chip markets unscathed, and would harness an underutilized resource: Storage.

Cryptocurrency stalwarts like Bitcoin,
Sebastian Moss Editor
Ethereum, and Dogecoin all rely on proof of work - that is, running calculations on GPUs and ASICs to mine coin. New currencies like Chia and FileCoin rely on miners filling storage space with random numbers. The Chia blockchain comes up with its own random number, and the winning miner is the one who has the closest match.

Chia describes this as “proof of space and proof of time” but essentially the more hard drive space you have, the more random numbers you own, and the greater your chance of winning.

"The way that the execution happens in the consensus algorithm is now not based on a work calculation, but instead based on a space and time calculation," Jason Feist, Seagate's VP of engineering and leader of emerging
products and solutions, explained. "First and foremost, you take a device, and you have to plot the space. Plotting is very much like agricultural farming, you have to go out and define how big the plot is, what information should be there."

For Chia, each plot needs to be roughly 100GB. "So the first instance of that plot is very write-intensive, you have to take random data, and you have to put it down on the device in predefined plots," he said.

“SSDs are super fast at doing that activity, but they also have write life and write duty cycle requirements that we need to be mindful of,” he said, adding that Seagate had been in conversation with the team behind Chia for the past three years. “Because you can go really fast and you can burn through the terabytes written lifecycle of the flash cell itself.”

Indeed, following Chia’s release in May, reports swirled of it destroying consumer-grade SSDs, with miners recommended to use enterprise equipment.

After plotting, there’s farming. “That’s looking at all of the space that has been allocated by the plots, and you have to prove that it's been allocated,” Feist said. “So all of the plots ultimately end up on hard drives, or whatever is the lowest cost storage medium. It's not performance-intensive, it's not network-intensive, it's not compute-intensive, it's just a query that's sent out. Once a match has been met, then that's how the reward is handed out.”

At that point, it becomes a lottery. In May, Tom's Hardware calculated that a 10TB drive should give a miner odds of winning at 0.000257 percent, but this figure will change as the currency changes in popularity. Each day,
there are 4,608 chances to win. “The more plots you have, the more farming you can do, the higher probability of winning the proof occurs, and hence, then your ability to reap monetary rewards goes up,” Feist said.

Finally, the last piece to bear in mind is proof of time. “To ensure that you can't fake the system by just saying ‘I'm storing it’ when you’re not, there are functions that have to be allocated and calculated on the data that is stored in that plot.” Chia sets a predefined function, and asks for that answer. “If you don't have that data, the predictability of that time response is out of bounds, you'll have to recalculate and do a whole bunch of math that will never show up within a prescribed window.”

Altogether, this makes for a large network that can operate on HDDs and SSDs, doling out Chia coin to those taking part. Created by BitTorrent inventor Bram Cohen, Chia was pitched as a system that could take advantage of the exabytes of unused storage already in circulation.

But it hasn’t worked out that way. Miners - who of course did not have those drives - turned to the open market. In May and June, large capacity hard drives went out of stock across Asia as Chinese miners pounced. For the few places still stocking 4TB and higher drives, prices jumped more than 60 percent. It got so bad that the official newspaper of the Chinese Communist Party's Beijing Municipal Committee warned that it risked hampering storage-intensive state surveillance efforts.

In Germany, data center operator Hetzner Online banned crypto mining as demand spiked. "We have received many orders for our large hard drive servers," the company said (translated). "For this, however, large storage boxes are increasingly being rented. With storage boxes this leads to problems with the bandwidth on the host systems.

"With Chia mining, there is also the problem that the hard drives are extremely stressed by the many read and write processes
"The more plots you have, the more farming you can do, the higher probability of winning the proof occurs, and hence, then your ability to reap monetary rewards goes up”
and will therefore break."

The long term impact of mining on enterprise drives is yet to be seen. Creator Cohen says claims it damages enterprise-class drives are "just plain wrong... for the most part." But it’s impacted the market, squeezing supplies to breaking point across Asia.

"We have a robust supply-demand process," Feist countered, when asked if Chia's success could lead to the struggles seen by Nvidia. "We've been dealing with ebbs and flows in the market for many, many years. We've experienced things as far back as the Thailand flood that changed the supply-demand curves many years ago."

When we wrote about the currency in May, around 3.9 million terabytes of storage space was being used by miners. It's now over 31 million terabytes.

It's not the only thing that has grown. As Chia spread in popularity and awareness, shares in Seagate and Western Digital surged 19 percent and 24 percent respectively. Both have since fallen a bit as they underperformed the market. Chia, too, is currently on the decline. Since mid-May highs, its value - as measured by the dollar - has steadily fallen. It’s unclear if Chia will prove fruitful for investors over the long term, but at least in the short term it is reaping benefits for Seagate shareholders.

Feist is insistent that proof of storage will have a broader impact, ushering in new forms of digital ledger, with provenance-based blockchain mechanisms helping transform how we operate. Similar claims were made when Bitcoin entered the scene in 2009, but have yet to find any real traction, with blockchain primarily found in limited marketing-friendly deployments.

Chia hopes to try again, with a less power-hungry take on the problem. But there are still power demands across the storage, compute, and networking used to operate Chia. Then there’s the embedded carbon of building additional drives. "Manufacturing our drives uses energy and produces greenhouse gases," Seagate's environmental report states.
"Our two largest sources of GHG emissions are purchased electricity and 'fugitive emissions,' or the unintended release of gases," it added, detailing steps to try to reduce the emissions. In 2017, the company said that it released 15.04 million metric tons of CO2 a year. While it hopes to reduce that by 20 percent by 2025, the success of storage crypto could jeopardize or slow that effort. At least it can be said that Chia is undeniably more energy-efficient than Bitcoin, with mining data centers more likely to look like cold storage than the ramshackle hot-houses Bitcoin is known for. But it would burn far less energy if it didn’t exist at all.
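The lottery figures quoted earlier - roughly 100GB per plot, 0.000257 percent odds per chance for a 10TB drive, and 4,608 chances a day - combine into a quick back-of-envelope expectation. The sketch below simply re-runs that arithmetic; it is not the Chia protocol, and the real probability keeps shifting as the network's total plotted space grows.

```python
# Back-of-envelope check on the figures quoted in the article. All the
# constants come from the piece (plot size, Tom's Hardware odds, daily
# chances); the arithmetic is plain expectation math, nothing more.

PLOT_SIZE_GB = 100              # approximate size of one Chia plot
DRIVE_TB = 10
CHANCES_PER_DAY = 4_608
P_PER_CHANCE = 0.000257 / 100   # 0.000257 percent, as a probability

plots = DRIVE_TB * 1000 // PLOT_SIZE_GB       # plots one drive can hold
wins_per_day = P_PER_CHANCE * CHANCES_PER_DAY # expected wins per day
days_per_win = 1 / wins_per_day

print(plots)                # 100 plots on a 10TB drive
print(round(days_per_win))  # roughly one winning proof every ~84 days
```

In other words, at May's netspace a lone 10TB farmer could expect a win only every few months - which is why miners raced to buy every large drive they could find.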
The right scale

Mega data centers are here to stay, but how and where we build them is changing

Sebastian Moss Editor
Building at Scale Supplement
The data center industry is used to growth. The market is getting bigger, the power demands are skyrocketing, and the dollar valuations are through the roof.

"10 years ago, 20MW was huge," Yondr chief development officer Pete Jones told DCD. "If someone offered you 20MW, you would have bought a Ferrari before you'd have done anything else."

In just a few years, expectations have expanded massively - with 100MW+ data centers dotting the US countryside and growing in rural regions in the Nordics.

"There's a certain complexity when you start to scale that isn't just linearly proportional to the number of megawatts,” Jones warned, noting that the bigger you grow the harder it gets. “Your burden goes up, and if things go wrong at scale the consequences are so much larger - you need to have a much more robust, thick-skinned leadership team for these projects.”

Still, with hyperscalers now more than a decade into their cloud push, the process of building “these large scale things in the middle of nowhere is a pretty well-oiled machine,” Jones noted, admitting that uptake of the company’s hyperscale-focused HyperBloc (150-300MW) has been “a hell of a lot lower than MetroBlock” (40-150MW).

Google’s regional director of EMEA data center infrastructure delivery, Paul Henry, concurred. The company knows how to build huge campuses, he said, but is now focused on bringing costs "as close to the raw input cost as possible." Take cement - "at some level, you can't get it any cheaper, same for steel," he said. "The manufacturers that build UPSs, and generators, at some point they're getting down to really razor-thin margin. The biggest builders have done a good job of getting really efficient, but you have to deliver faster, cheaper, and so forth."

To pull this off, the company is in the midst of changing how it designs and builds its facilities, big and small. Historically, every data center it has built has been different, based on the cutting-edge tech and ideas of the time.
"It's very difficult to shorten our lead time, and be able to be best in class on schedule and cost delivery when we have that continuous change," Henry explained. "We are [now] standardizing not only our
design, but our overall execution strategy, as well as developing all of our systems into a series of products that are built into an execution strategy that is really a kit of parts," he explained. This standardized system "takes a lot of design work on the front end to build a modularization strategy, rather than stick build in the field," Henry said. "We've done that - in our new generation of data center design, we're actually looking to take about 50 percent of our job hours off of the construction site, and move it into manufacturing facilities." Before breaking ground, Google creates a work package defining the entire bill of materials for a scope of work, including job hours and crew size, as well as component cost. "So very much akin to the Ikea strategy," he said. "It's all been pre-defined." The changes have helped Google bring construction time down from 22 months to less than 18 months. It hopes to squeeze that further, down to just 12 months - reducing cost and making it easier to predict demand.
Hyperscalers are coming to town

But Google and other hyperscalers are not only changing how they build data centers. They're also changing where they build.

"The biggest companies used to build these 200-400 megawatt data centers and everything would be in there," CyrusOne's SVP of corporate development Brian Doricko said at DCD>Building at Scale this May. "But now those same firms are selling more and more of cloud services, [and customers] want to know their applications are going to live in multiple buildings and multiple places."

Add in data residency laws, latency demands, and cutthroat cloud competition, and you have a reality where hyperscalers can't just live out in the wilderness. Now, they're coming for the suburbs and city centers.

“We've been predominantly in five campuses within EMEA,” Google’s Henry said. “And those have been fairly large scale data centers, ranging anywhere from 32MW to 60MW per data center,” with multiple facilities on each campus. “But we're seeing a bit of a shift as to our strategy - scale for us now in the region is really looking at how do we get into all the metros that we need to expand into, and that's happening at a rapid pace.
With data residency laws, latency demands, and cutthroat competition, hyperscalers are coming from the wilderness to the city centers
“So we're moving away from just the five key campuses, into almost all the tier one metros,” he said. In many places, that starts with a “toehold,” Henry explained, of around 3MW. “But then we have the ability to scale up, and it may be to what our standard design now is - an 88MW facility. And then you grow that to a campus where you may have four or five buildings within a campus.”

Starting small but in countless metros, and then expanding rapidly, “is really what we're seeing as scale across the region,” Henry said. This is a whole different kind of scale - an astonishingly large footprint across countless metros and regions. "I can anticipate that we'll be in every country in EMEA at some point," he said.

That's when things really get complicated. "How do you get 3x 100MW for each player, in every metro? For just the three [biggest] players, that's 900MW you got to create to have three substantially sized availability zones in every Metro," Yondr's Jones said. "That's not an unsubstantial challenge to pull off."

Hyperscalers aren't comfortable spreading that out across tons of small sites, Jones said. "They are saying 'we need fewer, we cannot deal with even the contractual burden of managing 800 leases.’" Instead, they hope to build reasonably sized facilities in city locations - where you face all sorts of regulations, permitting, local protests, and space issues.

One of the ways to square that circle has been to relax site restrictions, Jones said. "10 years ago, end users would have a campus profile that said the site could not be near an airport, a train line, etc. Before you've even left your office, you've excluded two-thirds of the city.

"Fast forward 10 years, it's like ‘oh, as well as meeting all of those ludicrous kinds of constraints, it also has to be 100MW in size,’" he said. "Forget about it.
So we've seen a real acceptance of the trade-offs [required to locate in a city]." Such difficulties have also helped a cottage industry grow up to help hyperscalers navigate the complex and sometimes contradictory regulatory framework of different cities. Take Frankfurt: "There aren't any big campuses, part of
which is a land problem," Jones said. "So you solve the land problem, and immediately you're into a power problem. And then you run into a regulatory limit where once you build 18MW you have to start a new building. Then there's Seveso legislation” (on how many hazards can be on site). Such limitations restrict the scale a hyperscaler can operate at some locations, Jones said. "I think choosing the right scale has to be case-specific - what are the constraints that exist market by market that might stop you from achieving scale?”
Master planning

Fitting data centers into the fabric of an existing city is always a challenge that will likely leave at least one community unsatisfied. How about building an environment just for data centers?

"If we were to look at Northern Virginia, how would we do it differently if we could master plan it instead of just the natural, organic way that it grew on its own?" Scott Noteboom asked. As CTO of Quantum Loophole, Noteboom hopes to find out.

"We've acquired north of 2,000 acres, with a gigawatt to start from a substation off our primary transmission that can scale to 3GW," he said, with the company looking to serve as a master planner that runs the campus for hyperscalers and colos to then build on top of.

"I think that there's building blocks that are bigger, more efficient, and more economical than an individual data center can ever hold," Noteboom said. "On the energy side, the transition from in-building UPS to the community level allows for
critical power-as-a-service using utility energy storage solutions.” Or with cooling, he envisions cooling-as-a-service operating at the community level instead of the individual data center level.

“Lastly, the data center now serves as the network exchange,” he said. “If we were to do Northern Virginia over again - instead of having 50 data centers that had hundreds of individual construction projects, building fiber that took many, many months and years to each of those buildings, all crossing the chasm of easements and rights of ways - we would have all of that pre-planned and built out in service by network center.”

How about, he suggested, building network centers designed end to end for network, which then connect to the surrounding data centers.

“We're talking about scale - data centers are getting bigger and bigger throughout Northern Virginia,” Noteboom said. “They're next to schools, they're next to condominium complexes, the noise they're emitting is annoying to neighbors and others. Power plants have just sloppily thrown substations all over the place.”

This organic growth has worked - “it's such a miracle, and it's such a great thing,”
Noteboom said. But as we look back, “can we take a look at all of these attributes, and remove the conspicuous nature of the data center, remove all the complexity, remove all the knots of Northern Virginia. And then when we build a master plan community, we can ask what does that community look like?" This vision could lead to huge data centers built on huge campuses, which themselves are part of huge master-planned mega-campuses. But given the scale of the Internet, it still could not be enough - with those sites then connected to growing facilities within metros, and to smaller Edge sites spread across the region. That may force a reckoning among data center operators as they find the industry increasingly butting up against the reality of living alongside humans. "Data centers are going to consume as much as entire countries," Jones said. "If you're in local community, or you're going for a permit or planning, I don't think local job creation and renewables have ever been hotter issues to have evolved answers to. "And I think we are finding communities becoming more astute. They smell bullshit very quickly."
DCD>Building at Scale Supplement

Huge data centers mean huge challenges

This article featured in our free digital supplement on building data centers at scale. Read today to learn about how data centers can use digital twins in construction, the SAP growth strategy, and the value that can be unlocked by taking advantage of a complementary industry during operations. bit.ly/DCDatScale

DCD>Magazine: The Artificial Intelligence Supplement

This article featured in our free digital supplement on artificial intelligence. Read today to learn about rack density, deep learning technologies, the role of CPUs in inferencing, the quest for fusion power, and much more. bit.ly/AISupplement
Mo money mo problems
Dumb money comes for the data center
Data centers are booming at a time when many other sectors are in decline. The pandemic has upended other business models, while demand for digital services has shot through the roof. For those in the industry, this has proved lucrative. For those missing out, it has meant a mad scramble to buy a stake in the market, or invest in new opportunities.

I have covered this industry for more than half a decade, and in the past year I have seen exceptional investor growth. Every week brings a new partnership, a new blockbuster deal, and a new company promising to undercut the rest. It all seems to be happening a bit too fast, with too few experienced hands in charge of many of these decisions.

Curious to know what this could mean for the data center industry, I caught up with a respected data center builder and contractor to get a ground-level view of what’s happening.

“We’ve experienced this wave of new money pouring into the sector,” the person, who requested anonymity, said. “And a lot of that new money is being advised badly.”

He added: “You’re seeing on LinkedIn ‘So and so is now advising So&So fund,’ and you think ‘God help you So&So fund, I wouldn’t touch that person with a bargepole for their advice on entering a new sector.’”

Investors with little experience in this complex, dynamic, and difficult market are chasing easy returns.
“I feel bad for them,” the builder said. “There’s an underappreciation of the risk during building the thing, and then operating the thing.” This has impacted the wider market, he said, with the lower perceived risk leading to lower prices and skewed client expectations. “If the new prices are 15 percent less than what the market’s been floating out for the last five years, then it’s a real problem, because it’s a fake distortion of what’s going on,” he said. “It’s highly improbable that, just because there’s a whole load of new money, someone has magically found a way to do things for 15 percent cheaper. Some of these folks will realize they can’t make money, or some assets will have gone distressed or won’t have finished construction.” He pointed to one project now looking at two years of delays. “When these things go wrong, they go seriously bloody wrong.” The rest of the industry will have to weather this storm - competing against artificial prices and timelines, while at the same time dealing with the added cost of more competitors fighting over land, equipment, and customers. But in those potentially distressed properties, opportunity could lie. Just as the death of Enron led to the birth of Switch, with founder Rob Roy picking up his first data center for pennies on the dollar, the facilities funded by today’s dumb money could prove the acquisition targets of tomorrow.