DCD Magazine #56: How this feature got to the Moon



Partnering for AI-Ready Data Centers

Transform your data center into a powerhouse of efficiency with Schneider Electric’s AI-Ready solutions. Our end-to-end infrastructure services are designed to adapt to the growing demands of AI workloads, ensuring your operations remain resilient. Leverage advanced power and cooling systems tailored for high-density compute environments that maximize performance while minimizing environmental impact. Partner with us to redefine your data center’s energy strategy and take the lead in the AI era.

se.com

March 2025

6 News

Stargate, CoreWeave’s IPO, Microsoft’s pullback

15 This feature is on the Moon

Here’s how DCD became the first publication to have a lunar outpost. Oh, and there’s a data center there too

27 The Edge supplement

Cooling, AI, and education

43 Operating in the light, and in the dark (net)

Behind the efforts to fight CSA content and clean the Internet

48 Vertiv rising

The CEO of Vertiv on surfing the AI wave, while avoiding a crash

51 Welcome to gas town

AI data centers are turning to gas to power ever larger campuses

57 Becoming Nebius

The former Yandex spin-off hopes to become an AI cloud contender

62 Palistar’s big tower play with Symphony Towers Infrastructure

The US telco tower push

65 Cogent’s colo journey

Dave Schaeffer on the company’s new data center ambitions

72 Copper to colo

Ziply’s plan to repurpose its Central Offices

78 JPMorgan Chase & Co.’s IT strategy

On-prem and on cloud, we talk to CIO Darrin Alves

82 The x-factor

Behind Equinix’s xScale ambitions in the mega-cluster era

86 On the cusp of the Kuiper campaign

A status update of Starlink’s biggest rival

90 The Open RAN conundrum

Do incumbents have a stranglehold over O-RAN?

94 5G on the frontline

Protecting Latvia from Russian invasion

98 Op-ed: The party goes on

From magnetic-bearing chillers to purpose-built air handlers, the full line of proven data center solutions from YORK is performance-optimized to meet the uptime requirements of today and the sustainability goals of tomorrow. After all, we’re not waiting for the future: we’re engineering it.

Learn more about the full line of YORK® data center solutions: YORK.com/data-centers

CRAH Computer Room Air Handler
YVAM Air-Cooled Magnetic Bearing Centrifugal Chiller

We're on the Moon. Yes, really

From the Editor

Making history

One does not get into data center journalism chasing an adrenaline rush. And yet.

For the cover (p15), we profile a small startup's wild dreams of building data centers on the Moon, culminating with the perilous landing of the first lunar data center on March 6, 2025.

But what better way to understand this journey than to take part? So this cover feature traveled on the data center, as the first feature ever on the Moon.

DCD: The only lunar publication™

Vertiv's CEO

Giordano Albertazzi has presided over a terrific run at the power and cooling company, timed nicely with the AI boom. He talks to DCD about how Vertiv hopes to stay ahead of growing competition, where he thinks bottlenecks will slow expansion, and why paranoia is good for the soul (p48).

Life's a gas

The extreme and rapid power demands of AI have caused the once environmentally-conscious data center sector to embrace natural gas.

We look at the surge in gas demand, particularly in the US, and profile the countless projects springing up that aim to help get data centers online (p51).

Yandex's Nebius' data center push

Spun out of the one-time Russian tech giant Yandex, Nebius hopes to build a new identity in the West. We travel to its data center in Finland to learn of its plans to become an AI powerhouse (p57).

Telcos and data centers

Telco fans, rejoice. We have three features looking at different parts of the industry, and how they interlink with data centers.

First, we talk to the CEO of Symphony Towers about the towers industry and the ongoing promise of 5G rollouts (p62). Then we ask Cogent’s CEO about how it is building out a data center business (p65), and Ziply about its plan to repurpose Central Offices as data center sites (p72).

JPMorgan's hybrid infrastructure

As the world's largest bank by market cap, JPMorgan Chase & Co. doesn't like being frivolous with its money. So why is the business spending billions on its own data centers, along with cloud, and other compute? We chat to CIO Darrin Alves about the bank's IT roadmap (p78).

Equinix's xScale up

The colo giant plunged into hyperscaler build-outs with xScale ahead of the AI boom.

But now, its investments risk seeming quaint in a world of deca-billion dollar, gigawatt-scale projects. Program managing director Krupal Raval tells us how xScale is changing (p82).

Plus more

An Edge supplement, Kuiper plans, fighting the dark web, O-RAN & more!

384,400km

The average distance between you and the Moon

Publisher & Editor-in-Chief

Sebastian Moss

Senior Editor

Dan Swinhoe

Features Editor

Matthew Gooding

Telecoms Editor

Paul 'Telco Dave' Lipscombe

CSN Editor

Charlotte Trueman

C&H Senior Reporter

Georgia Butler

E&S Senior Reporter

Zachary Skidmore

Junior Reporter

Niva Yadav

Head of Partner Content

Claire Fletcher

Copywriter

Farah Johnson-May

Erika Chaffey

Designer Eleni Zevgaridou

Media Marketing

Stephen Scott

Group Commercial Director

Erica Baeta

Conference Director, Global

Rebecca Davison

Live Events

Gabriella Gillett-Perez

Matthew Welch

Audrey Pascual

Joshua Lloyd-Braiden

Channel Management

Team Lead

Alex Dickins

Channel Manager

Kat Sullivan

Emma Brooks

Zoe Turner

Tam Pledger

Director of Marketing

Services

Nina Bernard

CEO

Dan Loosemore

Head Office

DatacenterDynamics 32-38 Saffron Hill, London, EC1N 8FH

Sebastian Moss Editor-in-Chief

The biggest data center news stories of the last three months

NEWS IN BRIEF

AT&T signs $850m Central Office sale-leaseback deal

AT&T has sold 70 copper network Central Offices to a real estate firm in a sale-leaseback deal. It is the second such deal Reign Capital has signed with the carrier. The portfolio totals 13m sq ft.

OpenAI officially announces multigigawatt Stargate project

OpenAI has announced ‘The Stargate Project,’ a new company set to invest $500 billion into AI infrastructure over the next four years.

The data centers will be exclusively used by OpenAI as it expands its generative AI compute portfolio. Of the total investment, $100bn will be deployed ‘immediately.’

SoftBank, OpenAI, Oracle, and Abu Dhabi’s MGX are the equity investors in Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. SoftBank’s Masayoshi Son will serve as chairman.

In the works since at least last year, the buildout is currently underway, starting in Texas with Oracle’s project in Abilene, which is itself leased from Crusoe on land owned by Lancium.

Oracle founder Larry Ellison said that 10 data centers were currently under construction at the site in Texas.

OpenAI and Oracle are expected to deploy 64,000 Nvidia GB200s at the Stargate data center in Abilene, Texas, by the end of 2026.

Specifics of what OpenAI has planned have yet to be revealed, but the company last year shared a document with the White House presenting a plan for 5GW data centers - which would be the largest facilities in the world.

OpenAI is also reportedly exploring other Stargate data center options in 16 states: Arizona, California, Florida, Louisiana, Maryland, Nevada, New York, Ohio, Oregon, Pennsylvania, Utah, Texas, Virginia, Washington, Wisconsin, and West Virginia.

The company has said it aims to develop five to ten 1GW sites, totaling around 8GW of capacity by 2030.

One of those might be Cipher Mining’s Barber Lake site outside Colorado City in west Texas.

SoftBank recently invested $50m in crypto and AI firm Cipher, gaining a period of exclusivity to acquire the 250-acre site which is able to support up to 300MW. SoftBank made the investment through an affiliate known as Star Beacon LLC.

One company set to be impacted by Stargate is Microsoft, OpenAI’s largest shareholder. Currently, OpenAI gets most of its data center capacity from Microsoft.

With the Stargate announcement, however, Microsoft now only has a right of first refusal to serve the AI firm. Microsoft is not thought to be an investor in Stargate.

Stargate is reportedly set to provide three-quarters of OpenAI’s compute power needed to run and develop its AI models by 2030.

Full funding for the project isn’t secured.

AWS & Microsoft plan Irish subsea cables

Microsoft has filed to develop three subsea cables linking Ireland to the UK, with all three set to land in Wales.

AWS has also filed to develop a cable from Ireland to the US.

Macquarie to invest $5bn in Applied Digital

Macquarie Asset Management (MAM) is set to invest $5 billion into Applied Digital, a cryptomining firm pivoting to AI and HPC data centers. The money will be used to build out Applied’s Ellendale site in North Dakota.

Fiber firm Lightpath launches Edge DC unit

US fiber firm Lightpath is moving into deploying Edge data centers, adding compute to locations along its fiber network. The company will be deploying new modular data centers at four existing in-line amplifier (ILA) sites along its NYC-Ashburn fiber route. ILAs amplify signals along fiber routes.

Tract launches Fleet DC to build data centers

Data center park developer Tract has launched a new unit to build data centers.

The new unit, Fleet DC, will focus on developing 500MW+ campuses and has 3GW in development. It will be developing both on Tract-owned land and at third party sites.

Cogent launches 55 Edge data centers across US

Fiber firm Cogent has launched a network of 55 Edge data centers in the US, totaling 20MW across 108,800 sq ft. The sites average around 40 racks and 350kW, and were repurposed from the former Sprint fiber business acquired from T-Mobile in 2022. The company has also repurposed some 52 larger data centers from the acquired sites.

Microsoft targets $80bn data center capex in 2025

Microsoft’s capital expenditure on data centers is expected to reach new highs in 2025, but some reports suggest the company has been pulling back from some leases.

In January, Microsoft president Brad Smith said the company plans to spend $80 billion on building AI data centers in the 2025 financial year, a significant increase on last year.

He said: “In FY 2025, Microsoft is on track to invest approximately $80 billion to build-out AI-enabled data centers to train AI models and deploy AI and cloud-based applications around the world.

“More than half of this total investment will be in the United States, reflecting our commitment to this country and our confidence in the American economy.”

The $80 billion would reflect a significant increase on the $53 billion capex spend Microsoft made in 2023.

Microsoft’s data center build-out in the US and beyond has already been extensive. Documents leaked last April revealed it had more than 5GW of capacity at its disposal, with plans to add an additional 1.5GW in the first half of 2025. It is possible this has since been revised upwards.

However, despite this increased capex, the company may be backing off in some locations.

An analyst report from TD Securities published in February reported that Microsoft had backed out of several lease agreements.

The report suggested the company had “1) canceled leases in the US, totaling ‘a couple of hundred MWs’ with at least two private data center operators, 2) has pulled back on the conversion of SOQs [statements of qualifications] to leases, and 3) has re-allocated a considerable portion of its international spend to the US.”

One of the reasons for cancellation was said to be delays in securing power to planned data centers.

A spokesperson for Microsoft said the company was “on track” to spend that $80bn figure in 2025.

“Last year alone, we added more capacity than any prior year in history. While we may strategically pace or adjust our infrastructure in some areas, we will continue to grow strongly in all regions,” the person said.

During its most recent earnings call, CFO Amy Hood revealed that the company was working from a “pretty capacity-constrained place,” adding that they have been “short [of] power and space.”

CEO Satya Nadella recently said in an interview that “there will be an overbuild” of AI infrastructure, but his company will be “leasing a lot of capacity in ’27, ’28.”

Google has said it expects its capex to jump to $75bn in 2025, up from 2024’s $52.5bn. The majority will be going towards data centers, servers, and networking.

Amazon, meanwhile, expects its 2025 capex to reach $100bn. The company has said the “majority” of that spending will go towards AWS.

AI cloud startup CoreWeave files for IPO

AI cloud provider CoreWeave is going public via an IPO on the Nasdaq stock exchange.

A listing date has not been provided, but the company will be listed under the ticker “CRWV.” Reports that the company was looking to file an IPO imminently emerged in late February, but rumors had been circling since last year.

In the company’s SEC filing, CoreWeave noted that, as of December 31, 2024, it had 32 data centers operating more than 250,000 GPUs in total, and more than 360MW of active power.

“Our total contracted power extends to approximately 1.3GW as of December 31, 2024, which we expect to roll out over the coming years,” the company added.

In addition, CoreWeave noted that its revenue has grown dramatically over the years. “Our revenue was $16 million, $229m, and $1.9bn for the years ended December 31, 2022, 2023, and 2024, respectively, representing year-over-year growth of 1,346 percent and 737 percent, respectively.”

CoreWeave was originally founded in 2017 as a cryptomining firm. Around 77 percent of CoreWeave’s revenue in 2024 was from its two largest customers, Microsoft being the largest and accounting for 62 percent of CoreWeave’s revenue that year.

OpenAI plans to spend $11.9bn on CoreWeave services, and take an equity stake.

Natural gas booms amid data center desire for near-term capacity - sustainability goals suffer

Data center companies are increasingly turning to natural gas as a power source amid a boom for near-term capacity driven by demand for AI.

In recent months, data center firms including Hyperscale Data, Vantage, Sharon AI, Meta, CloudBurst, EdgeConneX, Crusoe, Gryphon Digital, and Duos have all announced or filed plans to power data centers with natural gas.

Tapping into natural gas pipelines and power plants offers data center firms a quick route to bringing new capacity online at a time when the wait for connections to electricity grids can reach several years (for more, see p51).

In Ohio, EdgeConneX aims to deploy a 120MW gas-fired power plant to be the primary energy source of a data center it is developing in New Albany. It could launch in Q1 2026.

Australian AI cloud firm Sharon AI aims to develop a 250MW natural gas-powered data center in Texas. The company is working with New Era Helium and recently acquired a 200-acre site outside Odessa. The initial phase is due online next year.

Hyperscale Data, a company with a data center in Michigan, has announced plans to deploy 40MW of behind-the-meter gas capacity at the site, in addition to the facility’s 300MW grid connection.

xAI, the AI startup founded by Elon Musk, uses VoltaGrid power solutions at its data center in Memphis.

February saw Vantage Data Centers partner with microgrid firm VoltaGrid to deploy 1GW of gas-powered off-grid capacity at its facilities.

VoltaGrid’s natural gas microgrid technology will be integrated into Vantage’s data centers as its primary power source.

VoltaGrid previously partnered with gas engine provider Jenbacher on a new modular natural gas power system for the data center market. The QPac platform generates 20MW per reciprocating node and can be combined to deliver up to 200MW of prime power.

VoltaGrid claims it will be able to deploy up to 50MW of QPac units per month, with deliveries commencing to US customers beginning in 2025.

At the same time, utilities, natural gas providers, and energy plant developers have been seeking to build large numbers of gas plants explicitly targeting data center demand.

The likes of Entergy, Dominion, Chevron, and ExxonMobil are aiming to build large-scale gas plants to power data center demand across the US.

Intel faces split by TSMC and Broadcom amid struggles

Intel’s future could be in question after reports emerged in February claiming that Broadcom and TSMC were weighing up plans to acquire some of the chipmaker’s assets.

The Wall Street Journal reported that Broadcom is looking into the possibility of acquiring Intel’s chip design business, while Bloomberg noted that TSMC was eyeing up some of the company’s factories.

While the Bloomberg report initially claimed that the acquisition was at the request of Trump administration officials, a separate Reuters report published several hours later said President Trump was unlikely to approve of a foreign company operating Intel’s US factories.

CEO Pat Gelsinger ‘retired’ from Intel in December 2024 and in January 2025, the company posted its third consecutive quarterly loss, with fourth-quarter revenue down seven percent year-on-year (YoY) to $14.3 billion, whilst full-year revenue declined by two percent YoY to $53.1 billion.

In late September 2024, Intel reportedly rejected an offer from Arm to acquire the company’s product division. That same month, asset manager Apollo reportedly offered to invest up to $5 billion in Intel as reports surfaced that rival chipmaker Qualcomm was also eyeing the struggling chipmaker.

Endeavour’s Edged signs 2GW SMR nuclear deal with Deep Fission

Deep Fission, a small modular nuclear reactor (SMR) developer, has partnered with Endeavour to develop and deploy its technology.

The companies have committed to co-developing 2GW of nuclear energy to supply Endeavour’s global portfolio of data centers which operate under the Edged brand. The first reactors are expected to be operational by 2029.

The Deep Fission Borehole Reactor 1 (DFBR-1) is a pressurized water reactor (PWR) that produces 15MWt (thermal) and 5MWe (electric) and has an estimated fuel cycle of between ten and 20 years.

It is designed to be placed in a 30-inch borehole, using deep geology to provide pressurization and containment, which, according to the company, will increase security and lower costs.

The reactor can be placed up to one mile deep, where its hydrostatic pressure is similar to the pressure found in standard PWRs. As a result, DFBR-1 will not have thick-walled pressurization vessels.

Edged has data centers in Spain, Portugal, and the US.

Supporting applications:

• Artificial intelligence (AI/ML)

• Cloud computing

• Augmented reality (AR)

• Industry 4.0

• 5G cellular networks

• Data Center Infrastructure Management (DCIM)

Accelerate Your Data Center Fiber Connectivity for AI

Cabling considerations can help save cost, power and installation time:

Speed of Deployment

Sustainable and future-proof

Global reach, capacity and scale

Scan below to learn more or visit commscope.com/insights/unlocking-the-future-of-ai-networks

DOGE impacts US government IT efforts

Elon Musk’s Department of Government Efficiency (DOGE) has been targeting the US government’s IT estate.

DOGE has shut down 18F, the technology consulting unit of the General Services Administration.

18F was responsible for managing the IRS’ tax filing service, as well as designing and updating government websites, and helping government agencies build, buy, and share technology products.

The National Oceanic and Atmospheric Administration (NOAA) has cut around 10 percent of its workforce and is canceling research center leases as part of President Donald Trump’s government efficiency efforts.

Musk has also commented on the government’s use of Iron Mountain’s Pennsylvania site, which hosts physical records.

However, Bill Meaney, the CEO of Iron Mountain, said that such contracts were only a small percentage of total revenue, and that opportunities lay in working with the new efficiency department.

DOGE has canceled close to 750 government leases since the start of the year, equating to around one in ten federal office spaces and some 9.6 million sq ft.

DOGE claims its efforts have ‘saved’ around $660m, but many of its savings have been disputed and retracted. It previously claimed billions in savings.

Exa Infrastructure acquires subsea cable firm Aqua Comms

Exa Infrastructure is to acquire subsea cable operator Aqua Comms.

The network firm, owned by investment firm I Squared Capital, announced in January that it has signed binding agreements to acquire Aqua Comms from D9.

D9 disclosed the net proceeds of the transaction to be $48 million, noting the final amount to be “extremely disappointing.”

Aqua Comms was previously backed by UK infrastructure fund Digital 9 Infrastructure, which is currently in the process of winding down and selling off its assets. D9 acquired Aqua in 2021.

Ireland-based Aqua Comms operates submarine cable systems and supplies fiber pairs, spectrum, and wholesale network capacity to the global content, cloud, carrier & enterprise markets.

The company owns and operates America Europe Connect-1 (AEC-1), America Europe Connect-2 (AEC-2), CeltixConnect-1 (CC-1), and CeltixConnect-2 (CC-2) and is part of a consortium that owns/operates the Amitié cable system (AEC-3).

D9 has only been in operation for four years. Investment firm Triple Point raised £300 million ($365m) floating D9 in March 2021, with plans to acquire a number of digital infrastructure firms.

After acquiring a number of companies, including Verne and Aqua Comms, D9 announced in February 2024 that it had decided to wind down its operations and sell off its assets following a strategic review.

In October, the company appointed InfraRed Capital Partners as its new investment manager to oversee the company’s wind-down, months after D9 sold its European data center firm Verne to Ardian.

At the end of December, D9 confirmed it agreed to sell its stake in EMIC-1 (Europe Middle-East India Connect) for $42m.

According to the company, the project “continues to be impacted by ongoing conflicts in the Red Sea area, which have led to an indefinite delay to its final construction completion.”

Exa operates more than 150,000km of digital infrastructure across 37 countries, including 20 cable landing stations that provide critical connectivity to subsea systems.

The company was formed out of European, subsea, and North American network infrastructure and data center assets previously owned by GTT and snapped up by I Squared Capital in September 2021.

Exa is led by former Aqua CEO Jim Fagan.

Dan’s Data Point

Australian startup Cortical Labs is offering what it calls the world’s first ‘biological computer’ for $35,000 a pop. The CL1 combines “lab-cultivated neurons from human stem cells” with silicon to create what it claims is a more efficient system for AI use cases.

Equinix has lost its appeal to get a gas-powered data center through planning permission in Dublin.

In February, Irish planning regulator An Bord Pleanála denied Equinix’s appeal to develop a data center along Nangor Road in the Profile Park Business Park, in the Clondalkin area of the Irish capital.

The site was originally planned to be powered by the electric grid but, after failing to secure a permanent supply, Equinix adjusted the proposal to power the facility with gas.

An Bord Pleanála said the project wasn’t suitable for the site’s zoning due to its lack of grid connection, lack of significant on-site renewable power, a lack of evidence around PPAs, and proposed reliance on a gas power plant.

Despite that setback, just before Christmas Equinix took over two data centers in Dublin from BT.

Equinix acquired the two BT facilities, in CityWest and Ballycoolin, for €59 million ($61.3m).

The CityWest site totals 10MW across 120,000 sq ft, while the Ballycoolin site offers 3.5MW across 40,000 sq ft.

January saw Equinix and designer Maximilian Raynor reveal a dress made from repurposed data center equipment.

Energy Storage for Resiliency

UtilityInnovation’s Battery Energy Storage Solution for resiliency is a fully integrated power system born from Volvo Group’s industrial electric drivetrains, providing ultimate modularity and performance to meet the needs of the toughest data center applications while maximizing capital and available space.

AWS targets every county from Loudoun to Richmond

Amazon Web Services (AWS) is aiming to develop a data center in each county between Northern Virginia and the city of Richmond.

In a recent permit application to the US Army Corps of Engineers regarding Amazon’s Caroline County data center project, Amazon revealed that its goal is “[to build] a data center in each County between Northern Virginia and Richmond,” noting that the Mattermeade project “fulfills requirements of a data center within the area of Spotsylvania and Caroline counties.”

Amazon has long held a significant presence in Virginia, and in 2023 committed to investing $35 billion in expanding its footprint in the state.

Northern Virginia is home to six counties and six independent cities: Fairfax, Arlington, Loudoun, Prince William, Spotsylvania, and Stafford counties, and the cities of Alexandria, Fairfax, Falls Church, Fredericksburg, Manassas, and Manassas Park.

Of those six counties in Northern Virginia, Amazon has data centers in Loudoun, Fairfax, and Prince William counties, and has filed to develop in Spotsylvania and Stafford, though it does not yet seem to have any projects in progress in Arlington.

The company has been seen making moves in Louisa, Fauquier, Culpeper, King George, Spotsylvania, Orange, and Caroline counties. Projects are in various stages of development, with some having faced stiff opposition from local residents and officials.

Amazon previously had plans for a data center in Frederick County, but pulled out in early 2022.

With Amazon’s plan for Virginia seemingly extending down to Richmond, this adds numerous other counties into the mix, including Warren County, Clarke County, Rappahannock County, Madison County, Richmond County, King and Queen County, Hanover County, Goochland County, Fluvanna County, Greene County, and Albemarle County, among others - depending on how wide the company chooses to cast its net.

Despite its parent company’s Seattle roots, Virginia, especially Northern Virginia (NoVA) has always been the home of Amazon’s cloud operations. The state, and specifically Loudoun County, hosted Amazon Web Services’ (AWS) first data centers when the company launched its first cloud facilities in 2006.

Its exact footprint across Virginia isn’t known, but it totals more than 50 data centers across the region, with dozens more in development. Greenpeace estimated the company had 1.7GW of capacity back in 2019, having more than doubled that figure since 2015. Amazon’s US-East Northern Virginia cloud region has been described as the largest single concentration of corporate data centers in the world.

Within Caroline County, the company has filed to develop an 11-building campus known as the Mattermeade data center campus.

CleanArc is also planning a 600MW campus in Caroline, set to go live in 2026.

Y Combinator looks to remove humans from data centers

Startup accelerator Y Combinator is looking to invest in data center software and robotics firms.

The venture capital business has put out a call for startups developing automated solutions across the entire data center design and build process.

Y Combinator is looking to find companies that can remove humans from the entire chain, starting at site selection.

“We need more data centers that are created faster and cheaper to build out the infrastructure needed for AI progress,” Diana Hu, YC group partner, said in a YouTube video.

“Hyperscale data center projects take many years to complete and - given all the interest and funding that has come up, and all the news, which is great - we need new companies and more clever solutions to speed up this build-out. Whether it be in power infrastructure, cooling, procurement of all materials, or project management.”

Hu previously worked in augmented reality and data science.

Dalton Caldwell, YC managing director, added: “I think we can paint a picture of what the future will look like. Software is going to handle all aspects of planning and building a new data center or warehouse. This can include site selection, construction, set up, and ongoing management.

“Now, picture these data centers or warehouses: They’re going to be what’s called lights out. There’s going to be robots, autonomously operating 24/7. We want to fund startups to help create this vision.”

The company has already invested in data center space business Starcloud and underwater data center company NetworkOcean.

With customizable solutions and collaborative engineering, see how Legrand’s approach to AI infrastructure can help your data center address:

• Rising power supply and thermal density

• Heavier, larger rack loads

• Challenges with cable management and connectivity

• Increasingly critical management and monitoring

How this feature got to the Moon

And what it means for the future of digital infrastructure, humanity, and what we will leave behind

“Lonestar will save Earth's data one byte at a time,”
>>Chris Stott

This article comes to you from the Moon. The story of how it got there is decades in the making, involving vast government research efforts in space exploration and lunar landing technology, a concerted private effort to reignite the Space Race, and the dreams of a small startup hoping to upend the data center industry.

But first, a caveat: this feature was written in a mad dash, during one week in October 2023 to meet a payload certification deadline. It represents a time capsule of what we understood in that moment. [Editor's note: Terrestrial updates from the year 2025 will be added in square brackets.]

Credit: Sebastian Moss

If everything goes to plan, this article will be ferried to the lunar surface in February or March, following an earlier successful landing in November or December. It will then be stored there, and also transmitted back to Earth, before appearing on these pages.

The launch may have been delayed, or it may have exploded at launch, in orbit, or on the surface of the Moon.

But, if you are reading this, it means that something has gone right.

[Things actually went slightly awry, with the launch delayed by a year to February 26, 2025.]

The precious things put forth by the Moon

The idea for Lonestar began in the early months of 2018. The NotPetya ransomware attack had caused more than $10 billion in damages, and a group of businesses were concerned about the future security of their data on an increasingly troubled planet. They approached Chris Stott, then CEO of satellite spectrum company ManSat, for advice.

"We looked at data centers underwater, in jungles, deserts, and under the mountains," he recalled. "And everywhere we looked on the Earth we found data sovereignty issues and network issues."

Maybe, just maybe, the answer lay off Earth, he thought, looking up at the Moon. But, before the newly-founded Lonestar could even begin to consider the technologically complex task of setting up off-planet Disaster Recovery as a Service, the company had to check whether it was legal.

"We took one of the most regulated of all human activities, and then we added more regulations, like data sovereignty, on top," Stott said. Fortunately for the company, the lengthy history of satellite case law played to its advantage. The Moon is not sovereign, so any hosted payload simply acts as a mini-embassy of its host nation.

This, the company soon realized, meant that it could send data centers to the Moon, but still meet the data sovereignty requirements of companies on Earth.

A Danish business could back up its data on a whole different celestial body, and yet legally the data would still be in Denmark. Things get a little more complex when serious data processing occurs, but this is how it works for disaster recovery.

Its legal questions answered, Lonestar raised $5 million for its first step - a proof of concept. The startup signed a contract with lunar lander developer Intuitive Machines to go up on its first two missions.

“The interesting issue here is to distinguish between local lunar surface communication and communication back to Earth,” >>Vint Cerf

By the time you read this, IM-1 and -2 should have both landed. If so, IM-1 would be the first non-governmental spacecraft to successfully land on the lunar surface, after an Israeli effort crashed in 2019 and a Japanese attempt smashed into the lunar ground in April 2023.

A dozen flight controllers at Intuitive Machines' control center will have helped shepherd IM-1 on its roughly 384,400km (238,855 mi) journey. Three teams, Red, White, and Blue, will have worked eight-hour shifts from its launch on a SpaceX Falcon 9 rocket and through the mission's surface operations, expected to last roughly two weeks. [The mission was a success, but the lander fell on its side, and could only operate for six days. The IM-2 ‘Athena’ lander design was tweaked following this incident. Rival Firefly managed to successfully land on the lunar surface on March 2, 2025.]

Credit: Sebastian Moss
Chris Stott

“The Moon’s first data center is capable of storing 8TB on [Phison] SSDs, and has a single Microchip PolarFire SoC FPGA, running Ubuntu and Yocto"

Credit: Sebastian Moss

“The idea of landing on the Moon is not something new to Intuitive Machines," president and CEO Steve Altemus told DCD. "The core group that started Intuitive Machines in 2013 was part of NASA's Human Spaceflight and Advanced Technology programs, including Project Morpheus."

Project Morpheus was an autonomous lunar lander prototype that NASA developed in 2010, using a liquid methane and liquid oxygen engine design. Intuitive took the research further.

"We applied the decades of experience with NASA human spaceflight, Project Morpheus, and the commercial and government lunar missions that have come short of a soft landing," Altemus said. "All of these things contribute to the probability of success in our first mission."

Lonestar's contribution to this initial mission will simply be in the software domain, building upon a December 2021 test on the International Space Station (ISS) with Canonical and Redwire.

"We put the world's first softwaredefined data center in the space station and it ran fine on a 10-year-old computer running Windows 10,” Stott said proudly.

The company will transmit the US Declaration of Independence to the lander three times, one for each stage of the journey - once while it is in transit, once while it is in orbit, and once after it has landed. It will then send a copy back from the Moon to Earth.

It will also travel pre-loaded with the Magna Carta and data for the State of Florida. [These tests were all carried out successfully before the lander died.]

But it is on the second IM lander that Lonestar plans to touch down with a dedicated data center of its own.

The first data center on the Moon

The data center on which this article resides is not a large one. On Earth, it would not be considered a real data center - but, up here, it represents a small step to a potentially much larger future.

The Moon’s first data center is capable of storing 8TB on [Phison] SSDs, and has a single Microchip PolarFire SoC FPGA, running Ubuntu and Yocto. Lonestar’s partner Skycorp built the hardware and space-tested similar equipment on the ISS, separately from the Lonestar software trial.

"What Skycorp has been able to do and prove out on the ISS is that our hardware degrades at about the same rate as it would on Earth," company CEO and retired Air Force general Steve Kwast said.

This small deployment on the lunar South Pole will survive on a proportionally small power envelope. The Nova-C lander generates around 200W across all payloads during transit and on the lunar surface, using a mixture of solar power and batteries.

The lunar data center will also have limited connectivity, with “100 kilobits for the entire mission uplink,” Stott said. “That’s for us to send commands to our data center payload.”

Sending data back from the Moon, “we get a megabit per second uplink at certain times of the day,” he said. “We have one gigabyte in total for the mission.”

A few kilobytes of that data ration will be used to send this article back down to Earth.
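For a rough sense of scale, here is a minimal back-of-the-envelope sketch of that link budget. The ~40KB article size is our assumption; the 1Mbps peak downlink and 1GB mission cap are the figures Stott gives above.

```python
# A minimal, back-of-the-envelope sketch of the link budget described above.
# Assumptions (ours, not Lonestar's): this feature is roughly 40 KB as plain
# text; the downlink peaks at 1 Mbps; the whole-mission data cap is 1 GB.

ARTICLE_BYTES = 40 * 1024        # assumed size of the feature as plain text
DOWNLINK_BPS = 1_000_000         # "a megabit per second ... at certain times of the day"
MISSION_CAP_BYTES = 10**9        # "one gigabyte in total for the mission"

transmit_seconds = ARTICLE_BYTES * 8 / DOWNLINK_BPS
share_of_budget = ARTICLE_BYTES / MISSION_CAP_BYTES

print(f"Transmit time at peak downlink rate: {transmit_seconds:.2f} s")   # ~0.33 s
print(f"Share of the 1GB mission budget: {share_of_budget:.4%}")          # ~0.0041%
```

Even at those rates, in other words, the words you are reading barely dent the mission's data allowance.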

Intuitive Machines plans to deploy five Data Relay Satellites to provide greater connectivity for future missions, but Lonestar hopes to have outgrown Intuitive’s landers by then.

For now, the data center is just this small deployment, encased in a 3D-printed shell created by design firm Big.

[On March 6, IM-2’s Athena lander successfully touched down on the lunar surface, but again fell on its side, causing a premature end to the mission. This article was beamed home from Earth orbit, during lunar transit, and from lunar orbit, with this version of the feature being the one sent from lunar orbit. The feature was not able to be transmitted from the lander once on the Moon itself. The article is, however, on the lunar surface.]

[The Lonestar mission managed to operate on the Moon, as the only payload that was able to run, with the team managing to upload data to the system and get back telemetry.]

A quick note to future lunar explorers: If you look closely, you’ll see the shell is wavy and casts a shadow.

When the light hits it in the lunar morning, you will see the face of a man - Brigadier General Charles Duke, retired NASA astronaut, Apollo 16 Moonwalker, and CapCom for Apollo 11.

When the rays of the afternoon sun grace it, you will see the face of a woman - Nicole Stott, retired NASA astronaut, spacewalker, and Chris Stott's wife.

"I could not think of a more perfect pair to represent past, present, and future," the Lonestar CEO said.

Credit: Lunanet

Goodbye Moon

This data center is dying. Soon, it will be permanently powered down, and this article will remain frozen in solid state until radiation and cold wipe it from existence in the years to come.

But that’s okay. This was expected. The lander was built to last just 21 Earth days - seven in transit and one lunar day on the surface (that’s 14 Earth days). In the coming long lunar night, there will be no more power for its solar panels, and the cold will kill its critical systems. [With the lander falling over, this death came faster.]

Future data centers, should they come, will be more permanent, Stott said.

That means developing facilities capable of withstanding the harsh realities of the Moon.

Surviving the lunar night

On the surface, conditions are unforgiving.

"You're going to be landing in one of the coldest parts, but it's not the coldest part," lunar scientist Dr. Charles Shearer said. "Those are the permanently shadowed regions, which are -418°F (-250°C)."

“The idea is that no single company or agency will have to provide all of the communications and navigation infrastructure for the Moon,”
>> David Israel, NASA

IM-2 will only be subjected to -208°F (-130°C), by which point the mission will have ended. During the mission, conditions will be milder - it’ll face temperatures of 140°F (60°C), much lower than the 250°F (120°C) that can be found further north.

With the lander originally planned to touch down closer to the equator, before NASA moved it, Stott said the data center was built to handle external temperatures of 482°F (250°C).

Depending on where you land, you face different challenges. “Equatorial landing locations are less susceptible to local lighting and shadowing effects,” Dr. Tim Crain, Intuitive’s CTO and cofounder, said.

“At the poles, a slight variation in local terrain can provide deep shadowing at the landing site that our models might not predict. Equatorial landing locations are subject to extreme peak heating with an ‘overhead’ solar illumination and reflected surface lighting.”

There's a small layer of gases around the Moon known as an exosphere, which cannot hold much heat. Unlike Earth's atmosphere, which helps maintain an even temperature across day and night, it offers no buffer: the swing in temperature is sudden and harsh - something longer-term projects will have to prepare for.

There’s also the radiation.

"Some of it is relatively low energy and fairly constant, and that's just the solar wind particles," Dr. Shearer said. "Any material on the surface of the Moon is going to be influenced by that."

Occasionally, there are solar flares "which can be really detrimental to humans and electronics - it's very deadly," he said.

"And then, with regard to your mission, there are micro-meteorites that impact manmade objects on the surface. The

odds of them hitting some structure or facility may be on the order of one or two per year."

His team is working on a potential solution: The Tall Lunar Tower. Essentially, the idea is for autonomously assembled towers to stretch over 50m into the dark lunar sky. From the top, solar panels will unfurl, catching rays that would otherwise be lost.

Mahlin analyzed the 'Connecting Ridge,' next to the Shackleton Crater, which is a potential Artemis III landing region. If solar panels were put at ground level, they would eke out just a few Earth days of light every lunar cycle. "At 30-50 meters, you see that go up significantly," he said.

The height also brings the solar panels above the Moon's layer of dust particles, at least according to current research. "Exactly how high they go is something that needs more study," Mahlin said, "but most research shows that the particle density is not very high after a few meters." The towers are expected to be able to operate for at least 10-15 years.

“We can provide heat and power that allows these payloads not only to survive but to operate during the lunar night, increasing the lifetime of these landers and payloads to five plus years,”
>> Tyler Bernstein, Zeno Power

These are tiny, and will likely only cause minor surface damage. Larger meteorites are possible (just look at the Moon’s craters), although less common.

The bigger risk is not from larger rocks crashing down, but instead from something much smaller: lunar dust. "The lunar surface is coated with what's called a regolith - I was actually looking at some yesterday - and it can be 45-100 microns, so that's a really fine grain," Dr. Shearer said. "It can be fairly glassy and very, very sharp, so it can abrade things pretty quickly."

Making matters worse, regolith also has "a low electrical conductivity and therefore accumulates charges. It could be detrimental if you have a circuit board that gets dust on it."

Towers on the Moon

In areas of shadow, which include potentially resource-rich craters (also known as cold traps), it will be too dark for landers. But there are those here still trying to catch the sun.

"In certain areas on the surface, you are power-starved," Matthew Mahlin, a researcher at NASA's Structural Mechanics and Concepts Branch, said.

Each tower is designed to provide 50-100kW, which is what’s needed by the expected lunar payload. "We're pushing for the fixed infrastructure that can create the power generation capability to support things like data centers or a lunar base camp," Mahlin said.

"Similarly, they could be used for any industrial processes on the Moon, such as refining lunar regolith, or capturing the volatiles in these cold traps in the permanently shadowed regions of the Moon."

At around 100kg, these towers are much lighter and cheaper to get to space than batteries. They also offer an opportunity for dual-use.

"I think putting a communications payload on them is going to be one of the first things we do," Mahlin said. "We've had a lot of interest - it's like putting a satellite on the top of a stick, it can be selfcontained up there.

Power is but one part of the challenge of the Moon. There’s also the matter of warmth, considering the plunging night temperatures.

Nuclear winter warmer

Startup Zeno Power has its own dual-use technology that it hopes can offer both power and heat: nuclear batteries.

"Historically, NASA has always used an

isotope called plutonium-238, which is a terrific isotope, but an isotope that has to be specifically developed for these uses. This means that there is not enough to meet the growing demand for power on the lunar surface," CEO Tyler Bernstein explained.

Zeno Power instead turned to the much more available strontium-90, essentially a nuclear waste product with a half-life of 28 years, for its Radioisotope Power Systems.

"With the Intuitive mission, they intend for the lander to survive for 14 days during the lunar day, and then freeze to death during the winter night,” Bernstein said. “So NASA is paying $77 million, alongside the commercial payloads [like Lonestar], for it to last 14 days.

"What we can do is provide heat and power that allows these payloads not only to survive but to operate during the lunar night, increasing the lifetime of these landers and payloads from 14 days to five plus years."

While the Radioisotope Power System is comparatively heavy, Bernstein believes that the extended life it gives a mission makes it worthwhile.

For now, the devices are relatively low-power. “We're looking at watts,” Bernstein said. “At the moment, we're not at the point of powering massive data centers. Our ambition is to get to the kilowatt scale of electricity, but not megawatts.”

Beyond that, large nuclear reactors make more sense. NASA's Fission Surface Power program has teamed up with the Department of Energy to test a 10kW-class system to operate on the Moon by the late 2020s, with a goal of eventually reaching megawatts.

Alongside the power, both reactors and decaying isotope systems like Zeno Power’s offer the benefit of heat. “Yes, electronics want electricity to operate,” Bernstein said. “But, more importantly, they want to stay warm, so that they don't freeze to death.

“India's lander only lasted for 14 days - what killed it was a lack of heat. You can use electricity to generate heat, or you can just use nuclear material that is naturally decaying and producing it.”

Going underground

In the long term, Lonestar hopes to avoid many of the challenges posed on the surface by nestling safely in lunar lava tubes, giant underground caverns formed during the eruption of basaltic lava flows.

"We want to get down into those lava tubes," Stott said. "It's a perfect place to put batteries too."

One of the tubes the company is evaluating is 93 kilometers long, 80 meters deep, and a kilometer wide. "You could put three Manhattans in there," he said.

Prof. Dr. Andreas Nüchter, of the University of Würzburg, also sees safety down below: "I always say that the first humans on Earth also lived in caves, before technology advanced so that we didn’t need to.

"The whole idea is to use these lava tubes to provide shelter from all the evil things that are out there - radiation is blocked to a certain degree, and temperatures are a microclimate.”

There is still much to learn about the structure of these tubes, and several research efforts plan to venture forth into the unknown darkness. "If you're going into a natural structure, there's still a fear that you don't have a clue what its stability is," Dr. Shearer said.

Nüchter, a professor of robotics, was part of the European Space Agency (ESA)’s DAEDALUS project - which evaluated the possibility of lowering a spherical robot from a crane into the tube’s depths.

"We came to the conclusion that it was technically feasible," Nüchter said. "We proposed it having cameras and LiDAR sensors to see as much as possible. Everybody would be pretty happy if it saw rocks covered in ice."

The LiDAR proposed by the team would use different frequencies, each of which reacts differently to materials based on their reflectivity properties. "If, let's say, the stone is covered in water, then you can detect this," he explained. "Because if you use an infrared light that is absorbed by water, you will get nothing back. But if you use a green light, it has little difficulty penetrating."

Combining the data from the different frequencies would allow researchers to build a map not just of the shape of the lava tube, but of its properties.
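As a simplified illustration of that idea - not the DAEDALUS team's actual processing - the sketch below labels a scanned point from its reflectance at two wavelengths. The thresholds, categories, and function name are hypothetical.

```python
# Simplified illustration of the multi-wavelength idea described above: if an
# infrared return is weak (absorbed, e.g. by water ice) while a green return is
# strong, the point is flagged as a candidate ice deposit. Wavelength choices
# and thresholds here are illustrative, not mission parameters.

def classify_point(ir_reflectance: float, green_reflectance: float) -> str:
    """Label a scanned point from its normalized (0-1) reflectance at two wavelengths."""
    if green_reflectance > 0.4 and ir_reflectance < 0.1:
        return "possible ice-covered rock"   # green returns, infrared absorbed
    if green_reflectance < 0.1 and ir_reflectance < 0.1:
        return "low-reflectivity regolith"   # little comes back at either wavelength
    return "bare rock"

# Example scan: (infrared, green) reflectance pairs for three points in the tube
for ir, green in [(0.05, 0.6), (0.5, 0.55), (0.03, 0.04)]:
    print(classify_point(ir, green))
```

Run over a full LiDAR sweep, the same per-point logic is what would turn raw returns into the property map the researchers describe.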

A follow-up project, co-funded by ESA, is currently underway to see whether it would be possible to add stick-like linear actuators to allow the robot to move itself.

The system would potentially leave behind signal repeaters and even charging stations, allowing it to travel deeper into the vast tunnels.

NASA is considering its own lunar lava tube expedition with robots, but both projects remain years away.

Credit: Intuitive Machines

More immediately, this article has an inquisitive fellow traveler on the IM-2 lander: Micro-Nova, a small, cuboid robot that plans to fly into the opening of a lava tube and take photographs of what it sees.

"Can you imagine flying into a cave or crater on the Moon with a camera, reporting that data back to Earth, and saying, ‘here's what the inside of the Moon looks like?’ Those insights are what we're building with Micro-Nova,” Intuitive’s Dr. Crain said.

"Micro-Nova is essentially a drone that has its own propulsion, its own navigation sensors that can detach from Nova-C and fly in a kind of hopping arch, which

why not stay in space? Why go through all the added trouble of landing on the surface? The answer, Stott said, is scale. Underground is where he, one day, hopes to operate large data centers with significant storage capacity.

“They will also be perpetual, because we'll be able to go up and change out old equipment if it breaks,” he said. “We can use robots to do that.”

The company is not the only one to view the cold expanse of space as fertile ground for a computing revolution.

"We do not plan to replace the terrestrial data center, but supplement

is fuel-optimal if it needs to achieve maximum distance from the lander. It can also fly very prescribed fixed altitude trajectories for the collection of scientific data with a common reference."

[Again, Micro-Nova was rendered interoperable by the faulty landing.]

The sky is not the limit Alongside its plans to go underground, Lonestar hopes to deploy some facilities to orbit the Moon by 2026.

Its lunar orbiting satellites will boast storage in the petabytes, as well as provide a connectivity option for lunar facilities. “We expect the orbiters to last at least five to seven years,” Stott said. “If we do it well, it should be seven to ten.”

He sees the satellites as a stepping stone to the surface, as they are much easier to deploy without requiring a lander or governmental help. “We can run to our own schedule, we're not waiting on NASA, and we can get past the day-night cycle.”

That, of course, begs the question:

it with Edge computing in space," said Koichiro Matsufuji, co-CEO of Space Compass, a joint effort by satellite operator SKY Perfect JSAT and digital infrastructure giant NTT.

The company plans to launch geostationary orbit (GEO) Optical Data Relay satellites from 2025 and Edge ones from 2026. "We are still studying what kind of Edge computing capability we can provide," Matsufuji said.

For the first phase, power will again be constrained. "There are probably tens of watts of power available for the compute, but in the future we plan to expand the capability as we increase the volume of satellites,” he said.

The company sees these satellites less as a way to process terrestrial applications, and more as a waypoint for other satellites. Earth observation satellites, for example, could send data to the space Edge for unnecessary images of clouds to be filtered out.

"It's not only the processing but also the low latency download capability," Matsufuji said, with the satellites using

optical and RF dual connection with proposed speeds of up to 20Gbps. "That combination can provide a value that is very significant, we think around $50m a year."

Separately, Space Compass is part of a project led by Mitsui & Co. to study what should go into the successor to the ISS's Japanese Experiment Module. Space Compass will look at the optical communication potential, as well as a data center deployment inside the new station.

It's still early days, but Matsufuji noted that the project is different from the GEO satellite computing service “because the ISS has more space to accommodate big computers."

The dream among the stars

The goal at Lonestar is to deploy data centers in orbit around the Moon, on its surface, and under the ground. Stott has a vision of how large this could grow that some would call ambitious, but others would see as a case of lunar lunacy.

“Out of the exabyte a day that humanity creates, 63 percent is regulated,” Stott claimed. “That's our market; we'll just keep expanding and expanding because we can never meet demand.”
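To put that claimed market against today's hardware, here is a quick, hedged calculation: the exabyte-per-day and 63 percent figures are Stott's claims, while the comparison with the current 8TB payload is ours.

```python
# Back-of-the-envelope scale check on the figures Stott cites above.
# 1 EB/day of new data, 63 percent of it regulated, compared against the
# 8 TB of storage on the current lunar payload (all decimal units).

EXABYTE = 10**18
TERABYTE = 10**12

regulated_bytes_per_day = 0.63 * EXABYTE    # Stott's claimed addressable market
payload_capacity = 8 * TERABYTE             # today's lunar data center

print(f"Regulated data per day: {regulated_bytes_per_day / 1e15:.0f} PB")
print(f"Equivalent 8 TB payloads per day: {regulated_bytes_per_day / payload_capacity:,.0f}")
# ~630 PB a day - roughly 78,750 of today's payloads, every single day
```

The gap between those two numbers is the distance between the proof of concept on this lander and the business Stott is describing.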

The idea of immutable data stacks able to meet terrestrial regulation is one that could appeal to numerous businesses willing to pay the premium. As the drives on the Moon get older, and business-critical data gets transferred to newer, denser SSDs, Stott hopes to offer consumers cheaper storage on the older gear, through a new business line called ‘Selene.’

“Why don't we just store it somewhere else, just in case?” he said. “Because of the space program, we can do it in a cost-effective way - I know it sounds outlandish, but why not? Data can be offplanet, but in-country.”

The company is in the midst of securing another round of investment. It’s not likely to ever raise enough money to run an independent space program, but that’s fine. It doesn’t have to do this on its own.

“We’re building on 60 years of investment in space exploration and Silicon Valley, and are building a team drawn from the very best of the data center and satellite industries,” Stott said. “All to launch a whole new industry, creating a revolutionary solution to a global data problem.”

The Lonestar payload, on the Moon

Key to any chance of success is the continued decline in the cost of getting matter into space. Intuitive CTO Dr. Crain explained: “The reduced cost of space launches makes a great deal of space commerce possible because the necessary capital investment of the launch is less of a barrier to innovation and new approaches to providing space services and development.”

It cost about $54,500 to launch a kilogram into Low Earth Orbit (LEO) on NASA’s space shuttle, before its cancellation in 2011. SpaceX’s Falcon 9, which began supplying the ISS in 2012, brought that down to around $1,500-2,500. If its Starship mega-rocket proves successful, that could drop as low as $100.

That’s a big if. Elon Musk’s rocket company has pulled off miracles in the past, but developing the world's most powerful rocket has proved unsurprisingly challenging. Tests have so far ended explosively, and it's hard to give any reliable timeline for when commercial launches will begin - and how quickly prices will come down. [Starship has since achieved several milestones, but is still exploding. Its latest test blew up on 6 March, 2025.]

“We have a letter of intent from SpaceX for the use of Starships to take 100,000 kilograms every single time, so then we can get to the exabyte and yottabyte level,” Stott said. This is a bold claim, with a yottabyte of storage representing more than what is currently on Earth.
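For illustration only, this is what those ballpark per-kilogram prices would imply for the 100,000kg Starship loads Stott mentions, ignoring each vehicle's real payload limits and everything beyond the launch itself (landers, integration, operations).

```python
# Rough cost comparison implied by the per-kilogram figures and the 100,000 kg
# Starship loads mentioned above. These are the article's ballpark prices, not
# launch-provider quotes, and they ignore each vehicle's actual lift capacity.

PAYLOAD_KG = 100_000  # one fully loaded Starship, per Stott's letter of intent

cost_per_kg_usd = {
    "Space Shuttle (retired 2011)": 54_500,
    "Falcon 9 (low estimate)": 1_500,
    "Falcon 9 (high estimate)": 2_500,
    "Starship (projected)": 100,
}

for vehicle, price in cost_per_kg_usd.items():
    print(f"{vehicle}: ${price * PAYLOAD_KG:,} for {PAYLOAD_KG:,} kg to LEO")
# Shuttle pricing: $5.45bn; Falcon 9: $150m-250m; Starship at $100/kg: $10m
```

The two-orders-of-magnitude swing in that last line is the whole premise: exabyte-scale lunar storage only pencils out if Starship-class pricing arrives.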

Alongside the rocket advances, Lonestar will require significant leaps in lunar robotics technology. “We have talked to Astrobotic about leasing their equipment,” he said. “You've got NASA, ESA, the Japanese Space Agency, and the Canadians all already paying people to build the robots. They’re coming.”

The coming wave

Lonestar may be joined by other data centers on the Moon over the coming decade.

A 2021 patent by the Shanghai Aerospace System Engineering Institute lays out an idea for a lunar data center "soaked in insulating heat-conducting oil." Shanghai Aerospace was responsible for developing the Chang'e 3 rover (from 2013-15), as well as other lunar technology.

China has been, by far, the most successful lunar explorer of the modern Space Race.

2019's Chang'e 4 was the first lander in the world to touch down softly on the far side of the Moon, with data beamed back via the communication relay satellite Queqiao. A year later, Chang'e 5 successfully collected and returned lunar samples. Chang'e 6 plans to do the same, except this time from the far side of the Moon, in 2024.

“We understand they're going out to their One Belt One Road clients saying ‘the world's about to get very horrible, we will store your data for you on the Moon,’” Stott claimed. “It's a weird market confirmation of our business.”

Italian space agency ASI, meanwhile, has contracted Thales Alenia Space to study the feasibility of a lunar data center - but this is intended as a resource to serve manned and unmanned lunar missions of the future, rather than remote storage for Earth-based organizations.

"Relying on Earth-based computational resources is simply not acceptable,” Eleonora Zeminiani, the head of the aerospace company's Human Exploration New Initiatives division, told DCD in 2021.

“Communications with Earth are subject to a [noticeable] latency, one order of magnitude bigger than what we consider acceptable for today’s VoIP standards and two orders of magnitude bigger than the desired standard for low latency applications such as virtual machines and network storage,” she said.

Another, previously unreported, patent hints at the possibility of a competitor of considerable scale: Amazon. Way back in 2018, the cloud giant quietly filed a patent for a satellite-based content delivery network (CDN) in an extraterrestrial environment.

Diagrams show "a space-based data center on the Moon" connecting to a wider network. Amazon has since begun launching Internet-providing satellites through Project Kuiper, which could eventually grow to a constellation of 3,236 satellites in LEO.

They do not currently have much on-board compute or storage, and are primarily focused on connectivity.

Connecting our largest satellite

Connectivity is also beginning to come to the lunar surface.

As well as carrying this data center and our story, the IM-2 mission will test the Moon’s first mobile communications network, provided by Nokia under NASA's Tipping Point program.

Around the same time as this feature is being beamed back to Earth, a small rover will have begun to leave the lander. It will demonstrate Nokia’s lunar 4G deployment - a network that will also serve the MicroNova hopper we met earlier. [The rover demonstration detailed here did not happen, although the mission did manage to send some data back from the lunar surface before it died.]

Credit: Intuitive Machines

“The mission will essentially have two driving paths,” Nokia Bell Labs VP and head of the project Thierry Klein told us. “First, the rover is going to circle around the lander at a roughly 300-meter radius, and that's just to get good coverage measurements.

“After that, we'll drive off in the direction of the Shackleton Ridge for about two kilometers.”

The signal could go further, with tests estimating a range of around 4.5km. But, with the Lunar Outpost rover having to stop and recharge occasionally, that’s as far as the robot is expected to travel before darkness comes. During their brief lives, the rover and hopper will send back images of the Moon over the 4G network to the lander, and from there back home.

The project was first trialed at a facility in Colorado, whose volcanic terrain is somewhat akin to the Moon's surface, and the short-range test showed speeds "north of 75-80 megabits per second." That drops as the rover gets further away, but is still significantly greater than the Moon-to-Earth link it shares with Lonestar on the lander.

Nokia also put the system through 25 tests across shock, vibration, acceleration, temperature, humidity, vacuum, radiation, and more. "We made some modifications, but they were relatively minor - we came out pretty good on that first cycle," Klein said.

The telco deployment was, however, only built for the mission at hand. It was not designed to survive the lunar night, and will suffer the same fate when darkness comes. "That's what we're studying now," Klein said. "There's a lot of work that still needs to go into how to survive the night, especially radiation hardening. The equipment in the Tipping Point program is not what you would deploy for a five-year mission."

A big question is where these 4G and 5G antennae would live on the Moon. "If this is mounted on the outside of a tower, then we’ll need to take care of the thermal factors, but if it's mounted inside or in a thermally-controlled cabinet, then we don't."

Klein is confident, however, that a solution is achievable: "From a comms perspective, we're not so worried, we're having conversations with the chip providers about radiation hardening now."

For the last year, alongside this immediate project, "Nokia has been focused quite a bit on the longer term capabilities of lunar cellular technologies and how it fits into architectures that they may be thinking about for the Artemis program."

This, and future projects, are expected to feed into LunaNet, NASA's plan to develop a network of cooperating networks akin to the terrestrial Internet.

Earlier efforts to send landers and rovers to the Moon have treated every mission as a standalone event, with each system requiring its own direct connection back to Earth. That requires heavy, expensive, and power-hungry equipment on every system - and a direct line of sight to the Earth.

David Israel, NASA exploration and space communications projects division architect, hopes to change this all with LunaNet. The idea is to create a unified framework that governments and industry can share, so that telco towers like Nokia’s can connect to satellites, ground stations, lasers, and everything else.

“When you travel to a town you've never been to before, your phone knows what time it is, where you are, and can access all the information in the world,” Israel said. “There's multiple applications running and all these different data connections to different places, all going on at the same time.”

That’s something we take for granted on Earth, but is currently lacking on the lunar surface, as there’s no agreed framework for everyone to talk to each other.

LunaNet relies on Delay-Tolerant Networking (DTN), a store-and-forward protocol for dealing with the fact that connecting nodes could be moving satellites and may pop in and out of existence.
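As a loose illustration of the store-and-forward idea behind DTN (this is a toy sketch, not the actual Bundle Protocol used for DTN), a node simply holds on to data until a usable contact appears:

```python
from collections import deque

class DTNNode:
    """Toy store-and-forward node for illustration only.

    The real DTN Bundle Protocol (RFC 9171) adds custody, lifetimes,
    fragmentation, and routing - none of which is modeled here.
    """

    def __init__(self, name: str):
        self.name = name
        self.buffer = deque()  # bundles waiting for the next contact

    def receive(self, bundle: dict):
        # No end-to-end path is assumed to exist right now: just store it.
        self.buffer.append(bundle)

    def contact(self, next_hop: "DTNNode"):
        # A link has become available (e.g. a relay satellite rising over
        # the horizon): forward everything currently held.
        while self.buffer:
            next_hop.receive(self.buffer.popleft())

# A far-side rover stores data until an orbiter passes overhead, which
# later forwards it to an Earth ground station.
rover, orbiter, ground = DTNNode("rover"), DTNNode("orbiter"), DTNNode("ground")
rover.receive({"payload": "far-side imagery"})
rover.contact(orbiter)   # orbiter in view of the rover
orbiter.contact(ground)  # orbiter later in view of Earth
print(len(ground.buffer), "bundle(s) delivered")
```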

“The interesting issue here is to distinguish between local lunar surface communication and communication back to Earth,” Vint Cerf, one of the developers of DTN and one of the creators of the Internet, told DCD in 2021.

“If the installations that need to communicate back to Earth are on the side of the Moon that is facing us all the time, you almost don't need a relay capability, except locally,” he said. “If it's on the other side, that's a whole other story. Now we need orbiting spacecraft to pick up a signal and hang onto it and then transmit when it gets back to the other side where it can see the Earth.”

Credit: Sebastian Moss

He added: “The two things that drive my interest in the LunaNet are the configurations where we end up with something on the side where we can't use direct communication to Earth. And the other one might be local communication where the radio signals are obscured.”

Israel concurred, noting that any network will require local compute and storage to be a success. “Even on the side facing the Earth, there can be bandwidth constraints alongside the latency ones,” he said.

“If there's a network on the Moon that has a higher speed and connectivity, then you wouldn't have those bandwidth limitations - and that's where having some sort of compute power on the Moon would start to have benefits.”

After years of negotiations and work, LunaNet has finally reached the procurement stage. By the time this article is live, industry partners will likely have been chosen, and contracts will be in place. [Procurement is still ongoing.]

Also contributing to LunaNet is ESA, through its Project Moonlight program. "This is significant," Israel said, "both Moonlight and LunaNet are using the same interoperability specifications in their procurement - the idea is that no single company or agency will have to provide all of the communications and navigation infrastructure for things on the Moon."

Project Moonlight is Europe’s own effort to begin the process of digitizing the Moon.

“Moonlight consists of two steps,” Bernhard Hufenbach, head of ESA's strategic planning office in the Directorate of Human Spaceflight and Operations, said. “The first one is Lunar Pathfinder, this is an ongoing project, where we try to put in place a data relay spacecraft.”

That craft, built by Surrey Satellite Technology, “has a low data rate as we’ll relay primarily science data back to us,” he said. Planned for 2026, the spacecraft won’t send data in real-time, relying instead on store-and-forward protocols.

“And then, based on this experience, we're planning to build a more significant service, which includes high data rate communication, including potentially real-time communication,” Hufenbach said. “It will also provide a navigation service, which helps the asset on the Moon to get the timing and positioning signal,” similar to GPS and Galileo on Earth.

“Nokia has been focused quite a bit on the longer term capabilities of lunar cellular technologies” >> Thierry Klein, Nokia

Ready by 2028 at the earliest, that second stage is expected to help support a variety of lunar applications. Not only will it connect to systems on the far side of the Moon, it will lower the cost of those on the closer side - their signals will only have to reach the satellites, rather than all the way to Earth.

"The pricing of the service should be such that it becomes economically more attractive to go through the relay," Hufenbach said. "Plus, you may need less powerful antennae or transmitters, which are cheaper and use less power."

So far, all lunar missions have operated without a GPS-like navigation system - possibly to their detriment. “Many of the recent lander failures could probably have been avoided if such a system was available. It’s a paradigm shift,” he said.

Even after it’s live, it will likely take a while for lunar explorers - robotic and human - to fully trust the positioning, navigation, and timing (PNT) system, so they will likely still travel with all the excess local guidance and connectivity for a few more years.

"The first generation of this new system will still see very traditional lunar exploration activities," Hufenbach said. After that, he hopes, demand will expand with more permanent lunar deployments.

That will, of course, require data centers. "If you have future scientific infrastructure creating huge amounts of data, like radio telescopes, for example, it would make sense for there to be data centers on the Moon," he said. "Same for human bases, you can also do some processing on the Moon."

Hufenbach added: "I believe data centers on the Moon could be one of the more industry-driven interests. It would be interesting to have a secure data service there."

A lunar colony

Beyond just connectivity, the Defense Advanced Research Projects Agency (DARPA) hopes to foster a wider gamut of infrastructure and commercial applications on the Moon.

Through its recently launched LunA-10 program, DARPA is working with industry partners (including Klein’s team at Nokia) to understand what it takes to build a community on the Moon.

“The hope is that, as we get this commercial economy started, we can start thinking about the whole idea of interoperability - I build power, you build comms,” DARPA program manager Dr. Michael ‘Orbit’ Nayak (Maj, USAF) said. “And we'll just make them work together.”

In a previous role, Orbit was deployed to the Earth's South Pole. "One day, I had to go dig a bunch of six-foot holes in the snow," he recalled. "And that sounds very silly, but we were looking for a power line that was key to my experiment."

He mentioned the challenge to others at the South Pole Station who represented different nations. "They all came together to help - the environment brought us together across different countries, there's very much this international environment of collaboration."

LunA-10 is centered around bringing that collaboration to the Moon. "We're aiming to define a framework whereby there are many services on the Moon, and I don't need to bring everything I need to survive with me, I can just plug into it."

The project, which will be nearing completion by the time you read this, will look at both the risks presented by trying to develop a lunar community, and how to solve them over the next ten years.

A number of different sectors will be evaluated, including power, communications, and positioning. That means looking at data centers on the Moon, and how much compute should be local or just moved back to the Earth. "I don't know the answer to that question yet," Orbit said. "I think nobody does. That's really what I'm hoping to get out of this study."

Another unknown that DARPA is hoping to answer is whether it's possible to design a single laser system that can provide optical power beaming, laser communications, and PNT over optical communications all at the same time.

"I think it's quite feasible from a technology perspective," Orbit said. "What I don't know is if it's feasible in a 10-year horizon."

Once it has worked out what is likely possible, DARPA will then help research and fund the key components and missions that will allow lunar communities to flourish.

Chris Stott's wife Nicole spent some 103 days in space across two missions. The second lasted just two weeks. "It just wasn't long enough, they had to pull my clawing hands off of the hatch to get me back into the shuttle to come home - it's such a special place," she said.

But, Nicole added, "while the adventure side of space is awesome - I highly recommend it, don't get me wrong - the most important fact is that everything we're doing there, whether it's the technology, the hardware, the people, the relationships, or all of this science, ultimately, is about improving life on Earth.

"I think that the things we could do in space that we haven't done yet could solve some of the greatest challenges we have."

All of the proposed infrastructure will contribute to NASA's wider Artemis program, which aims to land astronauts on the Moon no earlier than 2025 or 2026, and eventually build an enduring lunar base. [Artemis is behind schedule, and its future under President Trump is unclear. Right-hand man Elon Musk has called the Moon a “distraction,” and called for NASA to go straight for Mars. NASA also faces cuts.]

For Dr. Shearer, who earlier described many of the complex and brutal challenges of life on the lunar surface, the problems are not insurmountable. “I have worked with NASA for many, many years - and this is the most optimistic I've been in terms of getting humans on the Moon and developing a space economy.”

Only time will tell what shape exactly that economy will take. “Does that mean that there will be ice mines at Shoemaker Crater or Shackleton Crater, and that the South Pole of the Moon will become the hub of a flourishing multi-planetary civilization?” posed Oliver Morton, author of The Moon: A History for the Future. “No, I don't think that necessarily follows at all.”

He noted that the environment is somewhat similar to Antarctica, “and although humans have been living there for many decades, it is not even remotely self-reliant. And I don't think that a self-reliant Moon outpost is a particularly likely thing or a particularly necessary thing, but I think a scientifically useful Moon outpost is quite plausible.

“And there will be something different about looking at the Moon and knowing that there are people permanently on it.”

What we leave behind

Stott thinks about permanency often. We create reams of data, most of it at risk of being lost due to accident, malicious intent, or natural disaster.

Lonestar is primarily a commercial endeavor, and the reality is that the vast majority of the data it stores will only be of commercial value. “It’s not just about that,” Stott said. “We've gone off to a bunch of sustainable development goal charities and NGOs and said ‘we'll do this for you for free.’”

Similarly to the Arctic World Archive, a film-based data center deep in a Svalbard mine that DCD visited in 2021, Stott views lunar deployments as a necessary backup for the world’s most important data.

“Our goal is global backup, global refresh, global restore, all from the Moon. Lonestar will save Earth's data one byte at a time.”

This all may fail. For all the years of research, the might of nations and megacorporations, space remains frighteningly difficult. Much of the lunar surface and what lies beneath it still needs to be explored and studied.

Lonestar may fail after this landing. It may become a footnote in history, with this article just a footnote to that footnote. Statistically, the history of space exploration suggests that failure is a likely outcome.

But, with the Earth wracked by climate disaster and geopolitical strife, the idea that at least some of what we have created could survive us presents a tantalizing prospect.

When you go to space, there’s a phrase for the feeling you get when you see our pale blue dot below - the Overview Effect.

"I felt how fragile our planet is," former NASA astronaut and Lonestar advisor José Hernández told DCD. "You look at the thickness of the atmosphere from space and it's so thin - it’s scary.

“In one fell swoop, I could see the whole view of the world. We have to be good stewards of our planet, and have to be very careful with what we have."

- Sebastian Moss, Editor-in-Chief and Lunar Reporter, DCD

The rocket launch, credit: Sebastian Moss

The Edge Supplement

The Edge State of Play

Cooling at the Edge

> Liquid cooling took over data centers, now it’s coming to the Edge

The Edge criteria

> How AI’s success hinges on the success and flexibility of Edge infrastructure

Educating at the Edge

> Duos Edge wants to close the digital divide at rural schools in the US Sponsored

Contents

30. Liquid Cooling: The Edge of reason How liquid cooling has become the great enabler for high-density compute deployments at the Edge

34. Mastering the Edge How Viavi is revolutionizing network monitoring with AI-powered analytics, real-time insights, and automation to help businesses master their operations

36. Not a cookie cutter approach: The criteria for a robust Edge infrastructure How AI’s success hinges on the success and flexibility of Edge infrastructure

38. Educating at the Edge Duos Edge wants to close the digital divide experienced by schools in rural locations across the US

Sponsored by

An AI future at the Edge

As the AI market matures, the focus for many businesses is likely to shift from the training of models and tools to inference, the process of running these tools so that they can benefit businesses and consumers.

And as inference grows in importance, Edge computing could come into its own, with infrastructure located close to end users potentially offering a low latency solution to efficiently deliver AI-powered services.

Building for this AI future at the Edge will require a high degree of flexibility, according to Luca Beltramino, chief data center officer at Italian media infrastructure provider Rai Way. Required specifications are likely to vary wildly from deployment to deployment, so open minds will be needed to come up with the optimal solution, he says.

In this supplement, we feature more insights from Beltramino and Cambridge Management Consulting’s Duncan Clubb, who spoke to DCD about the requirements for building robust Edge infrastructure. As well as AI, the duo covered optimal locations for Edge data centers, as well as security, data sovereignty, and sustainability.

Cooling will also be a key consideration for anyone wishing to deploy AI at the Edge. With the market still in its infancy, a dominant cooling technology has yet to emerge, but proponents of liquid cooling say it is the obvious choice to keep Edge AI infrastructure chilled.

One of the reasons for this, the vendors say, is that it can potentially be installed more easily in small spaces, making it ideal for remote or repurposed spaces often used for Edge infrastructure. However, splashing out on a liquid cooling unit may not be the most cost-effective solution if you already have air cooling in place that can do the job. Charlotte Trueman takes a look at liquid cooling’s Edge potential.

Away from AI, Edge infrastructure is being used to help bolster the education system in rural Texas. Despite being one of the epicenters of the AI revolution, with myriad AI data center developments planned for the state, many Texans struggle for even basic Internet connectivity, and this impacts businesses and government services, including education.

A new venture, Duos Edge AI, says it has the solution with a network of containerized data centers that can be set up quickly and relatively cheaply by school districts to enable pupils to access online learning tools. It has already made its first deployment, in Amarillo, Texas, and has plans to expand its network to other school districts.

The company, launched as a subsidiary of rail technology firm Duos, is headed up by Doug Recker, an Edge veteran who has already built and sold two data center businesses. Matthew Gooding spoke to him about his mission with Duos Edge AI, and why he thinks the company can make a real difference and level up education opportunities for many young people.

Liquid cooling: The Edge of reason

How liquid cooling has become the great enabler for high-density compute deployments at the Edge

In 2020, then Iceotope CEO David Craig wrote an article for DCD in which he argued “the Edge has needs which only liquid cooling can satisfy.”

“As companies across almost every vertical sector begin the deployment of IT in the myriad of locations where data processing and access is required close to people and things,” Craig wrote, “these new Edge environments will not be uniform in nature, but common to all will be the need to keep the IT equipment cool.”

Five years later, Craig has retired from the liquid cooling company, but it would appear his predictions have come to fruition.

Neil Edmunds, Iceotope’s VP of product management, describes Edge as anything outside of that “super highly controlled white space that data centers typically have,” meaning that pretty much anywhere could be considered for Edge deployments, given access to the right technology.

While the AI revolution has been characterized by daily headlines about the hyperscalers increasing their data center footprint to house all their AI compute needs, high-powered infrastructure is not just the preserve of those with the deepest pockets or the largest data centers. For example, in February 2024, Supermicro partnered with Nvidia to launch application-optimized Edge AI servers that support a host of different Nvidia hardware deployments, including multiple H100s - although according to Supermicro’s website, all those servers are air-cooled.

Iceotope’s KUL AI liquid cooling offering
Credit: Iceotope

That being said, the Edge is continuing to grow in popularity, especially as companies continue to shift from AI training to inferencing – where the model applies its learned knowledge – as most of this work can take place in an Edge environment, reducing the time required to send data to a centralized server and receive a response.

As a result, not everyone who has managed to get their hands on some Nvidia H100s wants to keep them in a data center in Virginia, US, or Slough, UK.

Edmunds says Edge deployments can be found anywhere from up a pole or lamp post, where the IT equipment is exposed to wind, rain, or even snow, to an office block or repurposed building. A company might also be considering an Edge deployment in an environment, such as the Nordic nations or South America, where it could be subject to extreme temperatures.

All this means that while, in some cases, the Edge location could be clean and well-lit, it’s unlikely to have all the amenities you’d find in a traditional data center. With such a wide range of potential Edge locations, versatile cooling solutions are needed, and this is where liquid comes into play.

Edging out air cooling

Liquid cooling isn’t a new concept, having been used in some capacity to chill compute hardware since the 1960s. However, while most conversations around the technology relate to its role in helping to sustain and grow the increasingly dense racks required for AI workloads, for those looking to deploy compute infrastructure at the Edge, it’s also a practicality.

Proponents of liquid cooling say it is particularly suited to Edge deployments because it is easier to install than air-based systems. This could make converting an abandoned building, for example, much easier, Edmunds says.

“There may not be very much space to put an external heat rejection system of traditional size and scale for an air-cooled facility,” he explains. “So typically, certainly for Iceotope and I would presume for other liquid-cooled technologies, the outdoor infrastructure needed for heat rejection of liquid-cooled designs to air is much more compact than air-to-air infrastructure.

“On typical data centers, you have huge cooling towers that evaporate loads of water, big air-to-air heat rejection systems, or a fresh air system to blow through. All of these things are going to be very challenging to deploy at the Edge because they usually come with tons of filtration, lots of space requirements, and potentially lots of power requirements to operate them. The need for a compact, almost portable, deployable heat rejection system is going to be pretty important to enable people to utilize previously occupied spaces which are now abandoned.”

However, Sean Graham, research director for cloud to Edge data center trends at IDC, said that, while it certainly has its benefits for some deployments, the case for liquid cooling at the Edge is perhaps not quite as cut and dried as one might be led to believe.

Graham feels that, when it comes to the various types of liquid cooling technologies on offer, immersion is best suited for the Edge. But, ultimately, the answer to which cooling option is best always depends on the deployment.

“It really comes down to what compute you have,” Graham says. “The world talks about AI, but even at aggressive projections, it's only going to be 20 percent of the entire data center market.

“So, when you start talking about traditional workloads, it's not going to be all AI, it's not all going to be high density, and air is still an appropriate cooling method. You even have liquid-to-air, these hybrid solutions. So air still has a very valid place in the market.”

“Imagine how much it's going to cost to convert a potentially tenanted or rented space into something that can have a traditional air-cooled system deployed in it"

Furthermore, Graham notes that if your Edge location is already set up for air cooling and your densities are low, splashing out on liquid cooling might not necessarily make the most sense.

Australian geophysics company DUG, previously known as DownUnder GeoSolutions, has chosen to eschew air and bring liquid cooling to the Edge.

Founded in 2003 to process seismic data at mining locations, DUG provides immersion-cooled high-performance computing (HPC) solutions for scientific data analysis.

Ron Schop, EVP at DUG, says two factors fueled the company’s decision to dive into the world of immersion cooling.

“The first thing was the data was becoming denser and bigger, so our compute requirements were getting bigger,” he says. “The second was that we were working on new technologies to do seismic imaging, and that also required even more compute.

“So the bottom line was, for

DUG immersion cooling offering

everything we were looking to do in the future, we just needed more and more compute. Then we looked at our power bill and thought, this is not sustainable.”

While Schop says the company did look at other technologies (“liquid-to-chip is interesting, but probably on the scale we need, not applicable”) it was immersion cooling that could offer DUG the cooling and efficiency it so desperately needed.

“For us, it was just initially about the dollars, because the compute we had was too expensive [to run] and yet we needed more of it,” says Schop. “So we really seriously started looking at alternatives, and immersion cooling for us was the clear winner.”

Immersion cooling exists in two forms, single-phase and two-phase, and is the practice of immersing the servers directly into a tub of dielectric fluid and using the high heat-carrying potential of these fluids to move the heat away from the IT equipment, offering an extremely high cooling efficiency.
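That "high heat-carrying potential" can be put into rough numbers. Using typical textbook property values (assumed here for illustration, not vendor data), the volumetric flow needed to carry away 50kW of IT heat with a 10K temperature rise is orders of magnitude smaller for a dielectric liquid than for air:

```python
# Rough comparison of the volumetric flow needed to remove 50kW of heat
# with a 10K temperature rise. Property values are typical textbook
# figures, assumed for illustration - not vendor specifications.
HEAT_KW = 50.0
DELTA_T_K = 10.0

fluids = {
    # name: (density kg/m^3, specific heat kJ/(kg.K))
    "air": (1.2, 1.005),
    "dielectric oil (PAO-like)": (800.0, 2.2),
}

for name, (rho, cp) in fluids.items():
    # Q = rho * flow * cp * dT  ->  flow = Q / (rho * cp * dT)
    flow_m3_s = HEAT_KW / (rho * cp * DELTA_T_K)
    print(f"{name:>26}: {flow_m3_s * 1000:8.1f} litres per second")
```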

It is far from the only liquid cooling technology on the market. Iceotope, by comparison, combines both immersion and direct-to-chip cooling for its liquid cooling offering – precisely targeting the hottest compute components with a small amount of dielectric fluid, a model which the company says allows it to recapture nearly 100 percent of the heat generated.

But once DUG had decided to throw its weight behind immersion, it encountered another problem - the solutions on the market at the time didn’t quite live up to their promise, particularly when deployed at scale.

Schop recalls: “It became quite clear that we wanted a single-phase solution, where the fluid just stays in the tank, because as soon as fluid moves out of the tank, that's a whole other set of problems.

“We started out in a garage, we’re very much a ‘can-do’ sort of company, and we thought, well, we can't see the immersion solution that we need so let's build it ourselves. So we started building our own cooling tanks.”

“Edge deployments could range from up a pole, or on a lamp post or a street corner, where the deployment would be exposed to wind, rain, and even snow"

Consequently, DUG became known as an early adopter of immersion-cooled data center designs. Schop says he occasionally still has arguments with “die-hard air-cooled proponents” but his response is simply to show them the data DUG has collected to back its offering, telling the nay-sayers that the company spends half the amount on electricity, its compute doesn’t break down, and it sees hardly any outages.

He adds when you have so much equipment, the “logical choice” is to use immersion cooling, and says: “The Edge compute we offer would be very hard to put together in an air-cooled solution. And, on top of that, by having [the compute] sitting in a tank, we can put it in some of the most extreme places in the world.”

How low can you go?

In October 2024, DUG deployed a prototype immersion cooling data center container, dubbed Nomad, at the Adacen data center in Silver Spring, Maryland.

“We’ve got a Nomad 10, a 20, and a 40 - that's a 10-foot, a 20-foot, and a 40-foot,” Schop explains. The 10 model contains a single immersion tank, and though the larger Nomads are still at the design phase, the 20 is likely to have six tanks, while the 40 will have 12.

Schop says that while DUG’s solution might not look as attractive as other immersion cooling tanks on the market, “they’re built to work,” and as a result are “pretty robust," which could be music to the ears of those looking to deploy at the Edge.

Nomad has now moved from prototype to product, and Schop says the Nomad 10 has roughly 50-60kW of heat rejection capacity.

“We’re still working on the 40-foot container but it’ll probably have about 750kW of heat rejection, and we’re hoping to get that up to 1MW,” he says.

DUG has been using a PAO6 fluid, but is now moving its tanks to PAO8, a higher-viscosity fluid that offers better wear protection under load. (PAO is a dielectric fluid called Polyalphaolefin.)

DUG Nomad deployment

Schop says the fluid used by DUG in its tanks has been “a great investment” because not only can you use it for a really long time, but it also brings the energy costs down and the compute components that are immersed in the fluid don’t move, meaning they run even more efficiently.

“We get the most value out of our expensive GPUs because they sit immersed in oil therefore they don't degrade,” he says. “We only upgrade because we get better CPUs and GPUs available on the market.”

Schop says customers often ask DUG how much compute they can put in a Nomad, to which his answer is that it depends on what you want and what you can afford.

“The interesting thing is that often, in the past, your amount of compute was directed by the space that you had,” he explains. “In our tank, we’ve got 26 rack units and that would be the thing that determined what kind of compute you could put in there. Now, it's not about the rack units you have, it's about how much heat you can reject. Because if you can use 26 rack units and stuff them with GPUs, which we're close to doing, then you're getting a lot of compute for your buck.”
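A quick, hypothetical calculation shows why heat rejection, rather than rack units, becomes the binding constraint. The server power and height figures below are assumptions for illustration, not DUG specifications; only the ~50-60kW tank budget and the 26 rack units come from the article:

```python
# Illustrative only: space versus heat as the limit on tank compute.
# Server power/height are assumed figures, not DUG specifications.
TANK_HEAT_BUDGET_KW = 55   # mid-point of the ~50-60kW quoted for Nomad 10
RACK_UNITS = 26            # rack units per tank, as quoted

SERVER_POWER_KW = 5.0      # assumed draw of a dense GPU server
SERVER_HEIGHT_RU = 2       # assumed height of that server

by_space = RACK_UNITS // SERVER_HEIGHT_RU              # 13 servers fit
by_heat = int(TANK_HEAT_BUDGET_KW // SERVER_POWER_KW)  # 11 servers coolable

limit = "heat rejection" if by_heat < by_space else "space"
print(f"Space allows {by_space} servers, heat allows {by_heat}: {limit} is the limit")
```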

Bringing whitespace to the Edge

At present, Iceotope’s liquid cooling offerings fall into three main categories: KUL Data Center, KUL AI, and KUL Edge. The latter comes in three different designs that the company says have been optimized for “high-density, low latency Edge computing.”

Iceotope says its KUL Edge solution can provide a 40 percent energy reduction and requires no water, while the KUL RAN – designed to meet the needs of high-density, low-latency vRAN, Open RAN, and 5G services offered by telcos – can deliver 20 percent energy savings, or 40 percent when compared to traditional air-cooled systems.

Edmunds says that when it comes to Edge deployments, typically customers approach Iceotope when they don't want to over-invest in a white space build out in an Edge location – “imagine how much it's going to cost to convert a potentially tenanted or rented space into something that can have a traditional air-cooled system deployed in it,” he says.

"So the bottom line was, for everything we were looking to do in the future, we just needed more and more compute. Then we looked at our power bill and thought, this is not sustainable"

He says that Iceotope has been working with one customer looking to deploy “a significant amount of AI racks” in obsolete office spaces around the UK and has consequently been contending with these types of issues, particularly how to reject the heat from 200-300kW of compute.

“Instead, they would rather spend the money on the compute and the relatively low additional cost of Iceotope and the rack with its liquid-cooled infrastructure,” Edmunds says.

When it comes to cost, IDC’s Graham says that while liquid cooling at the Edge does have “compelling value,” it might not be the best financial choice for every company.

“With liquid at the Edge, there are definitely scenarios where it could have a lower total cost of ownership (TCO),” Graham says. “Deploying liquid is going to be more expensive just out of the box, but can present a lower TCO, not just from the heating and cooling expense or from the electricity needed to cool the compute, but it can also be beneficial support for IT infrastructure because there's a lot of reports out there that say once you have the device immersed in a fluid, it doesn't have as many problems.”

But, that doesn’t mean there aren’t potential barriers to deploying liquid at the Edge. Graham says that while liquid could have a lower TCO over its lifetime, it can also be expensive to deploy upfront, meaning that companies need to consider if they have the budget for it right now, or if the money could be better invested elsewhere in the business.

“You also have to consider the workloads and whether this is going into an existing environment, in addition to your budget,” he says.

What Graham does agree with is that when it comes to solutions like Iceotope’s, whose liquid-cooled racks are delivered to customers as a sealed chassis to protect the compute components against potential contamination issues, the “value proposition is real.”

For example, Iceotope says in the “rare event of a service-related incident, only one server is affected, preserving the integrity of the remaining infrastructure.”

“I've heard complaints about leaves, cobwebs, or other stuff getting sucked in through fans and things like that, blocking the air ducts on servers and causing them to overheat or go down,” Edmunds says. “Then you have to send the service technician out to fix it, and that costs a lot of money.”

“There are so many elements rolled up into this Iceotope sealed solution,” he adds, suggesting it essentially means Iceotope’s racks become the data center white space, but closer to the IT itself, helping to address another question he says customers often have about Edge deployments – how do I put that amount of compute in an unusual space?

The answer, Edmunds says, always comes back to performance and efficiency, with liquid cooling acting as the great enabler when it comes to allowing these huge AI-focused compute deployments to run without issue.

And he says the way the company’s liquid cooling technology has been developed for Edge environments can also be a boon for traditional data centers. “In the mainstream data center we’re also enclosing and cooling extremely high-powered equipment in a very tight package, in a very small form factor,” he says.

“It's like a Venn diagram of technological benefits. Some apply over here but it’s the same over there too. And we really are going to continue pushing the limits on cooling performance and efficiency.” 

Mastering the Edge

How Viavi is revolutionizing network monitoring with AI-powered analytics, real-time insights, and automation to help businesses master their operations

As enterprise cloud services and AI continue to evolve, the role and importance of Edge networks are growing rapidly. To meet performance and service level requirements, maintaining these networks is crucial.

DCD spoke with Ilya Samokhin, global product lead at Viavi Solutions, to explore the challenges involved and the solutions available, including their recently launched XEdge – an industry-first, all-in-one solution for continuous Edge network testing and monitoring at scale.

Elevating network monitoring with AI and automation

As AI adoption continues to accelerate, Samokhin has observed a growing demand for computing solutions to keep pace:

“As companies implement more AI-driven applications, they’re noticing shifts in traffic patterns, including an increase in uplink traffic and a greater need for real-time data processing.”

This shift is driving the demand for advanced network monitoring, enabling sophisticated analytics across multiple networks simultaneously. Samokhin notes that as monitoring needs evolve, service-level agreements (SLAs) are becoming more stringent, emphasizing the importance of ensuring networks meet or exceed agreed-upon performance standards.

With Viavi’s recent XEdge launch, Samokhin highlights how the solution addresses these demands, offering a range of advanced features that leverage Edge computing and AI technologies:

• AI-driven analytics to optimize network performance

• Active testing for Wi-Fi, 5G, and LTE

• Edge-based testing for real-time, location-specific insights

• Machine learning for predictive analytics, detecting network anomalies before they affect service quality

• Drive and walk test functionality for on-site testing in specific facilities, such as manufacturing plants.

Since monitoring and troubleshooting can be time-intensive (often taking hours, if not days), Samokhin explains that XEdge automates these processes, delivering real-time SLA monitoring that reduces testing time by up to 80 percent. This solution helps alleviate some of the complexities and pain points traditionally associated with radio network management:

“It eliminates the need for an in-house radio frequency expert by automating both active and passive network monitoring. It also reduces troubleshooting time with real-time data analytics. Because our data comes from real sensors, we can detect network events instantly. This data is accessible via REST API, allowing seamless integration with existing management and monitoring systems.”

This means a single control center can monitor and manage hundreds of remote locations, e.g., warehouses, with immediate visibility into network conditions.

“It also supports multi-operator environments,” Samokhin adds. “This means enterprise customers who rely on multiple cellular operators – either instead of or alongside private networks – can use XEdge. We support up to four SIM cards simultaneously, so customers can monitor mobile networks, private networks, and Wi-Fi at the same time.”

Cutting costs and carbon footprints

In private networks and data centers, deploying infrastructure often requires a plethora of manual site visits for tasks like installation, troubleshooting, upgrades, compliance checks, and staff training – each of which is time-consuming and costly. Applying the benefits of XEdge to real-life applications, Samokhin explains that instead of having 20 site visits per year, remote monitoring and automation tools help reduce the need for frequent on-site presence, cutting it down to about two visits per year.

Naturally, each customer and country defines the cost of these visits differently, but considering that each visit often involves a highly specialized professional with specific expertise, the savings can be significant. By reducing the number of site visits and performing much of the data processing locally, unnecessary data transfers and energy-intensive cloud processing are also reduced, indirectly supporting sustainability goals.

“What we aim to achieve is enabling professionals to remotely manage multiple sites. For example, rather than sending a specialist to visit 50 different locations, they can manage them all from a central location,” says Samokhin. He adds:

“We’ve observed that up to 70 or 80 percent of remote site visits can be eliminated using our solution, thanks to automation and remote monitoring.”

Error: Human

With all the focus on the automation capabilities of XEdge, Samokhin takes a moment to clarify that there is still an important human element in the data collection portion of monitoring. He explains that while network monitoring can be automated, data collection is often still a manual task, particularly in wireless networks. In complex networks where devices aren’t fully connected, manual intervention is a crucial and necessary step.

However, Samokhin explains that with a system that ensures all devices are connected and operating properly, automation becomes much easier:

“Our solution reduces the need for manual involvement in data collection and processing by up to 99 percent, minimizing the need for human action in most cases.”

This way, XEdge integrates companies’ legacy tools and systems, centralizing data collection through APIs. This reduces the need for costly managed services, eliminates many human errors, and ultimately lowers operational costs.

Metrics monitored include signal strength, interference, network throughput, and network KPIs like latency, jitter, and packet loss, which are closer to the application layer.

“Because our data comes from real sensors, we can detect network events instantly” >> Ilya Samokhin

“In addition, we track SLA compliance parameters, including availability and performance thresholds, and apply predictive analytics and anomaly detection.”
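For readers less familiar with those KPIs, the arithmetic behind them is simple. The sketch below derives latency, jitter, and packet loss from a handful of made-up probe timestamps; it is a generic illustration, not Viavi's implementation (production monitors typically use RFC 3550-style smoothed jitter):

```python
# Generic KPI arithmetic from probe send/receive times (seconds).
# Illustrative only - not Viavi's method; the timestamps are made up.
sent     = {1: 0.000, 2: 0.020, 3: 0.040, 4: 0.060, 5: 0.080}
received = {1: 0.012, 2: 0.035, 3: 0.051, 5: 0.098}  # packet 4 was lost

delays = [received[seq] - sent[seq] for seq in sent if seq in received]

latency_ms = 1000 * sum(delays) / len(delays)
jitter_ms = 1000 * sum(abs(b - a) for a, b in zip(delays, delays[1:])) / (len(delays) - 1)
loss_pct = 100 * (1 - len(received) / len(sent))

print(f"latency {latency_ms:.1f}ms, jitter {jitter_ms:.1f}ms, loss {loss_pct:.0f}%")
```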

Locking down security

As an ISO 9001-certified company, Viavi adheres to strict privacy and cybersecurity standards across the organization, ensuring everything it builds aligns with the latest regulations.

To address customer security concerns, Samokhin reassures that Viavi employs industry-standard encryption, secure API-based access, and multi-layered authentication to safeguard data privacy.

“Each customer also has a dedicated access controller, ensuring that their data remains isolated and protected from unauthorized access.”

While XEdge operates autonomously, it is supported by real-time alerts, predictive analytics, and its own telemetry, which monitors metrics such as temperature, GPS, and other vital data to track device health:

“You can control the device via Wi-Fi or one of its modems, and if an issue arises, the system triggers an automated diagnosis and notifies the administrator. In most cases, physical access isn’t required, but it may be needed in specific instances, such as when syncing issues occur.”

“The device also has a battery, so in case of power loss, it continues operating for several hours,” he adds.

Guiding customers every step of the way

Samokhin emphasizes that to support the customer journey in implementing a new, high-tech system like XEdge, Viavi provides comprehensive training, online user guides, and a global network of partners to assist customers.

Once the XEdge controller and platform provide root cause analysis to help users identify potential issues, Viavi offers additional products to extend analytics and guide the next steps.

In conclusion, for any company transitioning from Wi-Fi to mobile networks, Samokhin highlights that Viavi understands how non-telco businesses often find traditional telco solutions overly complex. XEdge is designed with simplicity in mind, enabling enterprise teams to monitor mobile networks as easily as they would Wi-Fi networks, making it an ideal solution for companies moving into the mobile space.

>> More information on Viavi Solutions and XEdge can be found at www.viavisolutions.com

Not a cookie cutter approach: The criteria for a robust Edge infrastructure

How AI’s success hinges on the success and flexibility of Edge infrastructure

Edge data centers are set to become increasingly important in the AI era, offering lower latency, real-time data processing, stronger data privacy, and sustainability benefits.

Some may even say AI’s potential depends on the success of Edge facilities, with much of the crucial inferencing work required by AI’s end users well suited to Edge processing.

Duncan Clubb, senior partner at Cambridge Management Consulting, believes staying flexible is the most important consideration for deploying AI at the Edge.

Speaking as part of a broadcast on DCD’s Edge channel, Clubb said: “Not everything will look like big, fat Nvidia Blackwell servers.”

Luca Beltramino is chief data center officer at Rai Way, a publicly listed company that operates the digital infrastructure used by Italy’s state-owned TV channel, Rai. He told the audience that operators cannot predict the future of AI so must keep the infrastructure as flexible as possible. Creating Edge infrastructure is not a simple formula, with specifications varying depending on whether the facility has been designed for colocation, cloud, or high-performance computing.

Deploying AI at the Edge is still in its infancy, said Clubb. He explained that, so far, the primary AI Edge applications have been video analytics and manufacturing - environments where there is a requirement to transmit high volumes of data at low latency.

In the sporting world, Edge computing has been deployed both in Formula 1 and during the 2024 Olympics to enable real-time analysis of various sports and fixtures.

Clubb added that latency is often the deciding factor for Edge deployments. Where AI is being used to generate interactions with humans in virtual environments, such as online gaming, latency must match the human brain's own lag between seeing and responding. In other words, it has to be very low.

In environments where AI can be used to improve efficiency, such as in manufacturing, ultra-low latency requirements will continue to drive infrastructure to the Edge, Clubb said.

In Edge deployments, cooling options must remain flexible, the pair believe. Clubb said cooling will largely be dictated by chip manufacturers, with liquid cooling likely to remain the most popular choice. Air cooling will not be sufficient to cool racks larger than 50kW, added Beltramino. Even liquid-cooled racks still shed some heat into the room, making room cooling an equally important consideration, Clubb said, and designing cooling systems around fixed densities seems a recipe for disaster.

Images credit: Rai Way

Operators should, instead, take a variety of measures to support a range of workloads.

Beltramino said that he has seen examples of operators adding liquid cooling onto existing data centers, as a means of retrofitting. Other companies have brought in “modular containerized type solutions” to cater specifically for high-density racks. In the case where data centers have created rigid and fixed deployments, but have access to power, the operators can bring in modules to provide customers with some high-density capacity. However, it will always be easier to do this from scratch.

“We need to produce infrastructure that can cope with changes in hardware,” Beltramino said. While it is not clear what the future will bring, data center operators are likely to allocate specific rooms in facilities for AI workloads and bring in external units for higher densities.

Clubb explained densities are expected to grow, but the industry has gotten carried away with giant deployments. He suggested the industry will see growth in smaller modules, with Edge deployments averaging around the 5-20MW range, depending on the industry and use case.

Away from AI, Edge is becoming especially important for other use cases too, which means that in turn the need for mixed-purpose data centers still exists, but the priority is to remain flexible.

Where redundancy has become increasingly important for most data center operators, Clubb suggested that the industry might have to ask itself if it can be satisfied with reduced resilience in the Edge environment.

He believes it is possible not everything will need to be built to Tier IV standards, with training models for AI at the Edge, for example, requiring lower levels of resilience. However, for companies like banks, uptime remains crucial. Beltramino adds that Edge opens the door for building redundancy and resiliency across multiple sites and coordinating this across several Tier II or Tier III facilities. Rai Way is currently planning to build 18 Edge data centers across 20 Italian regions, as part of its Edge network project.

Beltramino explained that regional data centers are going to become increasingly popular. Location is a big factor in Edge deployments, he said. In the UK, “80 percent of data centers are located around London, and the majority of the remainder are in Newport,” Clubb said. “Yet 90 percent of activity happens outside of those locations.” It is no longer useful to have data centers concentrated around FLAP-D markets - Frankfurt, London, Amsterdam, Paris, and Dublin, Clubb said.

“There is a new paradigm for compute that is requiring operators to be more distributed,” he added.

Beltramino told viewers the industry has seen people moving data closer and closer to the Edge in efforts to repatriate their data. Even in Europe, where repatriation may not seem an issue, the distance between Milan and Paris is such that people want to bring their data to the Edge for processes where latency becomes a requirement because quick decisions need to be made.

On the topic of repatriation, Clubb says data sovereignty will drive Edge infrastructure as enterprises begin to want control, compliance, and accreditation of their data processing. However, this brings with it some challenges.

Delivering inference for regulated bodies means that Edge security, both physical and cyber, must be robust, says Clubb. For investment banks, LLMs can be located anywhere; their primary concern is their proprietary data and that is where security and compliance become important.

Physical security is another challenge the Edge environment is likely to face, says Beltramino. As it is not possible to have a full workforce across every Edge site 24/7, the sites will need to be remotely controlled. It is at this point that software enters the equation. It will become crucial in running infrastructure in an Edge environment, ensuring security and uptime for customers.

“Not everything will look like big, fat Nvidia Blackwell servers,”
>> Duncan Clubb

Sustainability could be a big boon for Edge deployments. Clubb explained that it offers an opportunity for “using electrons twice,” with the possibility of using waste data center heat for district heating networks. Beltramino said Google is an example of an operator in the hyperscale market using waste heat to warm an industrial park in Finland. With Edge, deployments can be located closer to communities and this can make reusing waste heat even easier. Whether for drying wood pulp, district heating, greenhouses, fish farms, or agriculture, the uses for waste heat are vast.

AI at the Edge may still be in its infancy, but the priority will be to remain flexible. Operators should refrain from a cookiecutter approach and embrace flexibility and change.

View the full broadcast, entitled The criteria for a robust Edge infrastructure, on the DCD Edge computing channel 

Educating at the Edge

Duos Edge wants to close the digital divide experienced by schools in rural locations across the US

Texas has become one of the epicenters of the AI data center boom.

Already home to Tesla’s 50,000-GPU supercomputer, Cortex, it will soon host the first stage of OpenAI’s ambitious Stargate project, which will see an initial $100 billion poured into AI infrastructure. Myriad other data center developments are in the works as the hyperscalers look to take advantage of the plentiful space - and power - on offer in the Lone Star State.

Credit: HABesen/Getty Images
Palo Duro Canyon, just south of Amarillo in rural Texas (Credit: Hundley_Photography/Getty Images)

But as unprecedented amounts of data are set to whizz to and from Texas data centers at speeds of hundreds of gigabits per second, the contrast with the connectivity woes experienced by many residents of the state, particularly those in rural communities, could hardly be more stark. According to the most recent US census data, some seven million people in Texas lack broadband access, while poor connections hamper the ability of government agencies to deliver basic services.

In a letter to residents highlighting the issue, Texas Comptroller Glenn Hegar, the elected official who oversees the state’s finances, said: “23 percent of Texans are unable to attend online classes, see a healthcare provider from their living room, fill out a job application online, start a business or access online marketplaces from their kitchen table. These barriers negatively affect Texans’ quality of life and limit economic opportunities for people and the state overall.”

The struggle for connectivity extends to school classrooms in many parts of Texas, where slow and unreliable networks often mean pupils are being denied access to valuable resources that are available to their peers elsewhere in the US. Now a new company, Duos Edge AI, wants to close this digital divide with a network of Edge data centers which it believes can be transformational.

Getting Duos Edge AI on track

Duos Edge AI is a subsidiary of Duos Technologies, a business that has traditionally been as concerned with shunters as it has with servers.

Founded in 2001 and based in Jacksonville, Florida, Duos Technologies uses machine vision-based solutions to automate inspections of trains and trucks while in motion. What this means in practice is that a train passes through an examination portal which is equipped with multiple cameras that take pictures of each individual wagon. The company’s software then brings all these images together to conduct an automated remote inspection, flagging up mechanical problems that may need addressing.

As part of this work, Duos had already deployed Edge data centers adjacent to its scanners on railroads across the US, through a partnership with rail operator Amtrak. The pods apparently enable data to be captured from trains traveling at speeds of up to 125 miles per hour, and deliver safety information about these trains within 60 seconds.

The company spotted an opportunity to sell these data centers to third parties, but to do that they needed someone who knew the Edge computing business. So they teamed up with Doug Recker.

The avuncular Recker had previously built and sold two data center businesses. He launched Colo5 in 2008, setting up two large-scale data centers in Jacksonville and Lakeland, Florida. After flogging this business to Cologix in September 2014, Recker resurfaced in 2017 at the helm of EdgePresence, which rolled out 100kW Edge data centers, housed in 12ft by 30ft containers, across the US.

EdgePresence quickly made its mark, supporting telecoms infrastructure firm American Tower in its Edge computing program, and drawing a $30 million investment from DataBank. It wasn’t long before larger suitors came calling, and in April 2023 Ubiquity acquired the company for an undisclosed sum.

Recker stayed on at Ubiquity for a few months to aid the transition, but was soon looking to build a new Edge business. While his experience of the data center industry was undoubtedly vital to his decision to join Duos Edge AI, he says the time he spent abroad while in between jobs was the biggest influence on his current career trajectory.

“I’ve been to rural schools where I can’t even get any phone signal, or the network is down so they can’t access the normal curriculum,”
>> Doug Recker

With a non-compete clause preventing him from joining another data center company after he left Cologix in 2014, Recker headed to Africa, where he worked with an NGO, Matter, in Zimbabwe and other countries, deploying its Innovation Hub Pods. These are air-conditioned classrooms housed in converted shipping containers that use solar energy to provide power and connectivity, enabling children to access a computing curriculum and other forms of online learning.

“We’d drop these pods in and have 15 kids sitting around tables and learning,” says Recker, who is Duos Edge AI president. “These kids were like sponges, they would absorb everything, and even though they had never used a computer before they would be coding within two days. That experience gave me the drive to do what I’m doing now - children need opportunities to learn and we have similar problems here in our backyard.

Doug Recker, CEO
Credit: Duos Edge AI

“If I ever make it big I’ll be going back there to support those guys because they do a lot of great work and I learned a lot.”

Problems in US schools may be slightly different to those experienced by pupils in developing nations, but Recker says they can be just as disruptive. “I’ve been to rural schools here where I can’t even get any phone signal, or the network is down so they can’t access the normal curriculum and are sharing books,” he says. “It’s not fair to those children because they’re not getting the same education as kids in a major city.”

The poor connectivity in rural America was highlighted during the Covid-19 pandemic, Recker says. With classes being held online, even pupils who had reliable broadband or wireless Internet connections at home were unable to access platforms such as Google Classroom because they were hosted on school district servers in the nearest big city. And given the sheer size of states such as Texas, the nearest settlement is often pretty far away.

“During the pandemic the kids found they couldn’t get onto the servers because they were down or overloaded,” Recker says. “In District 16 in Amarillo, Texas, where we first deployed, you had more than 50 schools linking back to a data center in Dallas, 500 miles away. Connectivity was terrible for them.”

Show me the way to Amarillo

“It’s like we’ve built a major data center there, it’s just they’ve spent $1 million on it rather than $30 million,”
>> Doug Recker

This is where the Duos Edge AI solution comes in. Launched in June 2024, the company’s Edge data centers contain 15 cabinets housed in a 55ft by 13ft (roughly 17m by 4m) pod. These can be set up on municipal land and connect to the area’s schools, providing much faster access to online services.

Referring to the company’s first project, in Amarillo, Recker explains: “We took a small piece of the School District central office’s parking lot and dropped the pod in. We then partnered with a big company called FiberLight, which had been given a grant to build fiber connections to the school district, and brought in multiple carriers to that pod.

“Now you have all the schools connecting back to the hub. The longest distance is nine miles, rather than 500.”
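For context, a back-of-envelope, propagation-only estimate (ours, and one that ignores the routing hops, congestion, and overloaded servers that were the real pain points) shows how much headroom the shorter run gives. Light in optical fiber covers roughly 200km per millisecond, so:

\[
500\ \text{miles} \approx 805\,\text{km} \;\Rightarrow\; \tfrac{805}{200{,}000}\,\text{s} \approx 4\,\text{ms one way};\qquad 9\ \text{miles} \approx 14.5\,\text{km} \;\Rightarrow\; \approx 0.07\,\text{ms}
\]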

Credit: Drazen Zigic/Getty Images

The fiber connection into the pod allows it to service multiple carriers, Recker says, offering more choice for end users. “Before this, AT&T was the only player in town, so if its network went down, the whole city was down,” he says. “Now we have a minimum of four carriers in the box, which brings additional resilience and competitiveness to the market.”

Cabinets within the pod are currently low density, Recker explains, due to the nature of the applications they run. “These are learning platforms that don’t require a lot of power,” he says. “The average we’re seeing is 5-6kW per cabinet, but the pod has the potential to go up to 300kW at 15 cabinets, we would just need to switch the cooling system out because at the moment it’s DX with hot and cold aisle containment. If we switched that out for a chiller, we could go to a higher density, but we don’t see the need for that with this application.”
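For a sense of scale (our arithmetic, using only the numbers Recker gives), a fully populated pod today draws well under a third of its ceiling:

\[
15 \times (5\text{-}6)\,\text{kW} \approx 75\text{-}90\,\text{kW today};\qquad \frac{300\,\text{kW}}{15\ \text{cabinets}} = 20\,\text{kW per cabinet at the pod's ceiling}
\]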

This low density means that, unlike developers of larger data centers, Duos Edge is not engaged in a quest to find more power. “We’re putting our pods at hubs for the education districts, which are where everyone who is part of the education sector but isn’t connected to a school comes to work,” Recker says. “These are often housed in old grocery stores or warehouses, so what we’re finding is that they already have plenty of power to them, and I only want to take 300kW off there. It’s nothing.”

Duos Edge AI’s pods are manufactured by a company Recker has worked with in his previous data center companies. Made with a steel frame, he says they are capable of handling cabinets weighing up to 10,000lbs, and can withstand wind speeds of up to 160 mph. Getting the pods up and running takes three months, from signing the contract to installation, Recker adds.

The system appears popular in Amarillo. Speaking last year when the pod was launched, Michael Keough, Region 16’s CTO, said: "Introducing Edge data centers in our area is a critical step in ensuring that our students have the resources and connectivity they need to succeed. In such a rural area, this solution serves as a vital catalyst for digital equity, ensuring that our schools and communities remain competitive both within Texas and across the nation."

Recker says this popularity extends beyond the education sector. “What we thought was going to happen was that the 15 cabinets would just be focused on the schools,” he says. “But we’ve had clinics and the local hospital calling us for space, as well as the city government, the fire and police services, and the local university. It’s like we’ve built a major data center there, it’s just they’ve spent $1 million on it rather than $30 million.”

The future is farming

Duos intends to sell its Edge data centers to more school districts in Texas and, eventually, further afield. Recker is also looking to other underserved markets in rural areas.

“An area I think we’re going to see pick up in the next few months is farming,” he says. “Farmers are using drones and AI systems to monitor cattle and crops, looking out for signs of disease or predators. All that data that they gather has to go and be computed somewhere, so they can connect back into our box and enjoy a high level of connectivity to a hub that’s only a few miles away. I had no idea that Amarillo was one of the largest cattle-producing regions, we’re learning about these markets as we go, which has been really cool.”

All this means Recker has plenty to keep him occupied. He has staffed Duos Edge AI with some people he has worked with in his previous ventures, but says being part of the wider Duos operation has brought big benefits too.

“In my career so far I’ve gone out and raised some money, built a business then sold it four or five years later,” he says. “What I learned when I started looking at this opportunity was that I needed help to install these pods, particularly at scale. Duos was actually a colo tenant of mine at Colo5, so I’ve known about them for a while and they’ve got a ton of smart people who have experience of installing these huts, which are like mini data centers, along railway lines.

“They have all the knowledge we need when it comes to deploying the pods and getting things like permits in place. We’re a separate subsidiary of the ‘mothership,’ but we’re also able to call on some top talent when we need it, so it’s been a great fit.”

AI AT THE EDGE?

Beyond the Edge, Duos also has grander data center ambitions, and in December announced it would be deploying four 50MW data centers at the Pampa Energy Center, a 500-acre site northwest of Amarillo.

These data centers will be supported by on-site energy generation, with 500MW coming from a natural gas plant and 200MW from wind turbines and other alternative energy sources.

Recker’s unit will be tasked with building the data center infrastructure for the site, while another Duos subsidiary, Duos Energy, is responsible for getting the required power up and running, working in partnership with Fortress Investment Group.

While the company sees supplying school districts with the low-density infrastructure they require as its main purpose, Recker believes there is also an opportunity to turn its pods into Edge AI installations. He says Duos is working on a design for a 2MW pod that will house 10 cabinets designed for AI workloads.
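A quick back-of-envelope comparison (our arithmetic, based only on the figures quoted in this piece) shows how big a jump in per-cabinet density that design implies:

\[
\frac{2\,\text{MW}}{10\ \text{cabinets}} = 200\,\text{kW per cabinet} = 10 \times \frac{300\,\text{kW}}{15\ \text{cabinets}}
\]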

“The reason we’re doing 2MW is that it’s easier to find that much power at several of these buildings that are unoccupied than it is for a company like CoreWeave to find hundreds of megawatts in one place,” he says. “We can say to customers, ‘we’ll give you 40 sites at 2MW each deployed over the next six months.’”

He adds: “It’s a lot easier to get a permit for 2MW installations, and the good thing is that with remote working there are a lot of vacant buildings with 5MW-10MW of power to them. I can drop two pods in the parking lot and have them up and running really quickly.

“Our core focus is on education, but this could be an interesting sideline.” 

Operating in the light, and in the dark (net)

The infrastructure behind child sexual abuse content on the Internet, and the efforts to tackle it

Trying to conceptualize the Internet is like trying to draw the universe. At any given moment, it is expanding, complexifying, and mutating.

Unfortunately, within that expanse, there is the unavoidable seedy underbelly. The shadows in which the worst of humanity skulk and multiply. At the bottom of that deep, dark pit is the taboo subject of child sexual abuse content - or child porn - an issue that is imperative to tackle.

While the Internet is not limited to the World Wide Web, it is the tool that many of us use to interact with it, and the two have become somewhat synonymous.

The World Wide Web was invented by Tim Berners-Lee in 1989 while he was at the European Organization for Nuclear Research - better known as CERN - and opened to the public in 1993. It was designed as a “universal linked information system” and, while initially used by CERN and other academic and scientific institutions, once the protocol and code were made royalty-free, it became widespread.

Georgia Butler, Senior Reporter, Cloud & Hybrid

People were able to make websites for whatever they wanted, and boy, did they take that literally.

While many would have turned to the Internet for quaint blogs, or legitimate business offerings, once you have a new avenue for committing crime, those interested will most certainly take advantage of the situation.

The Internet Watch Foundation (IWF) has dedicated itself to a particular niche - that of child sexual abuse (CSA) content, and more specifically, the removal of it.

Keeping watch

“We were founded really because, once the Internet started to become available mainstream, there started to be reports of child sexual abuse material being found on there. At the time, the Internet service providers in the UK were the ones held responsible and told to do something about it, so the IWF was founded as a hotline that the public could report to,” Dan Sexton, CTO at IWF, tells DCD.

“Now, we’ve got more than 200 members including all the big tech companies, ISPs in the UK, and increasingly from around the world. Our mission is an Internet free from child sexual abuse.”

Initially, IWF was a sort of ‘hotline.’ People would report content that they came across on the Internet, and IWF would then get it taken down.

“Around 10 years ago, we were getting reports of content and we would action those reports and see links on the website for other content, or additional images and videos, but we couldn’t do anything. We couldn’t work outside the remit of the content that was reported to us,” says Sexton.

"The really sensitive stuff is all kept on-premise, and we have a dedicated, secure air-gap network where all the data and images are stored,"
>>Daniel Sexton, IWF

“So we requested permission from the UK government and Crown Prosecution Service to allow us to start doing proactive searching, so now we can use the initial intelligence for further investigation.”

Once the IWF has found content, and has established that it is indeed imagery or videos of CSA, the foundation then goes about getting it removed.

“It’s finding where it is on the Internet. So if we find imagery or videos on a website, it’s working out who owns that website, what country it is registered in, and how we facilitate the removal of that.

“With the open web, you can track down website addresses, and IP addresses to regions and registers to work out the owner and administrators, and then alert authorities along the way that the content is illegal.”

Notably, the majority of the content IWF deals with is not actually websites themselves dedicated to CSA (although this does play an important part in the issue). Instead, it is people sharing content on websites that allow user-generated content.

These cases are a bit easier - simply alerting the host and getting that image removed. When it is an entire website, says Sexton, “these are potentially monetized. Those ones we take much more seriously. Again, in those cases, it's tracking down where they are and getting them blocked and removed from the Internet. In this case, the entire site needs to get blocked, removed, and deregistered.”

Once the CSA content has been found, it has to be reviewed and tagged with detailed data about what is happening in the content, the age and race of the child, and any other descriptive data. Once this is done, the information can be turned into a “hash” - or an identification number - which is kept in IWF’s massive database.

“In very simple terms, our hash list takes an image, and we run our algorithm against it, and it turns it into a number. A database of those numbers is provided to industry members and then every time an image or file is shared on a member’s website they can compare it to our database,” he explains.

“That means that once we have found an image, you can block it everywhere.”
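To illustrate the general pattern Sexton describes - though not IWF's actual algorithm, which is not public - here is a minimal sketch. The names (hash_image, KNOWN_HASHES, should_block) are hypothetical, and a real deployment would use a perceptual hash so that resized or re-encoded copies still match, rather than the exact-match cryptographic hash shown here.

```python
import hashlib

# Hypothetical hash list distributed to industry members: a set of digests of
# images already assessed and confirmed by analysts. Real lists use perceptual
# hashes; SHA-256 is used here only to keep the sketch self-contained.
KNOWN_HASHES: set[str] = set()

def hash_image(image_bytes: bytes) -> str:
    """Turn an image into a fixed-length 'number' (here, a SHA-256 hex digest)."""
    return hashlib.sha256(image_bytes).hexdigest()

def should_block(image_bytes: bytes) -> bool:
    """Compare an uploaded file against the hash list; a match means block it."""
    return hash_image(image_bytes) in KNOWN_HASHES
```

Because members only receive the hashes, they can check uploads against the list without ever holding the underlying material.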

The value of this work goes without saying. But, as IWF also works with the police and the Child Abuse Image Database (CAID), the content must also be stored.

Storing unsavory content

Understandably, IWF is not comfortable sharing the exact infrastructure underpinning its storage systems. This data is, after all, very personal - or as Sexton says - “It’s about as secure and sensitive as any data could be.”

IWF’s infrastructure is seemingly separate from CAID, with Sexton noting - “[CAID] is a unique facility where all the enforcement agencies in the UK can find content. It all gets centrally uploaded to help law enforcement in their investigations… All of our findings are shared with CAID.”

He adds: “We have this really great relationship with CAID where we are contributing to CAID to help law enforcement, and then that law enforcement data is being shared with the IWF.”

Sexton explains that, while he can't go into detail about the “exact infrastructure that we use,” IWF does keep everything within a “very secure network.”

“We work on the Internet, but we have a very robust pipeline to ensure that any content we assess is passed through various stages and kept in a secure area. It’s a really big concern. This is highly illegal content, and it’s also highly sensitive content.”

Dan Sexton, CTO at IWF

Interestingly, while this might be expected to mean only on-premise, IWF’s attitude towards this seems to have shifted. During a 2022 interview with Tech Monitor, Sexton said that the foundation couldn’t really embrace cloud computing - “The really sensitive stuff is all kept on-premise, and we have a dedicated, secure air-gap network where all the data and images are stored.”

However, he seems more open to the concept of cloud computing three years later.

“As far as on-premise and the cloud, which is an ongoing conversation within the community, there have always been concerns about the security of the cloud but I think as an industry we have sort of moved past that,” he says.

“It’s more and more accepted that secure public cloud is a thing that is very much being used - I mean it's being used by governments and organizations around the world. Our stance is that it needs to be very secure, so at the moment that sees a lot of on-premise being used as a solution, but that could well change. It certainly seems that the public cloud is being used for more and more sensitive information, and there are definitely advantages to the cloud such as leveraging greater processing power.”

CAID itself is also nearing the end of its hosting agreement. In a tender published in May 2024, CAID noted that its current hosting contract would expire in March 2026, and was looking to procure these services.

Details for this are similarly limited, but the requirements are for “hosted and managed scalable infrastructure,” to be “hosted in UK-based data centers with Police Assured Secured Facility status, including the use of Tier 1 public cloud providers,” and that it should be “Separated from other tenants so that underlying infrastructure providers have no access to the CAID applications or data.”

“They are resistant to takedowns, and they don’t typically adhere to law enforcement requests. They turn a blind eye to what their users are doing,”
>>Richard Hummel, Netscout

Most importantly, the tender notes that the previous legal opinion, last updated in 2020, said that “CAID data should not be held on cloud-hosted infrastructure and should remain within Police-owned data centers.”

“Since this advice was given there has been considerable development in both cloud hosting technology and the needs and approaches of law enforcement efforts to combat child sexual abuse and exploitation. As such, this opinion has been revisited and it is now considered that removing these restrictions can be compliant with data protection requirements.”

While both the Police and IWF are working to remove CSA content from the Internet, some of that content remains untraceable.

Sexton explains to DCD: “We can do this on the open web, but we can’t do this on the dark web or Tor networks. They are designed to make it anonymous, and that's why you see all these horrendous reports of illegal content on the dark web. We can find that content, but we can’t see who runs the website or where it is located. It's much, much harder to get that removed.”

Those websites only accessible via the dark web, while keeping their location hidden, are still being physically stored somewhere, and by someone.

The web is dark and deep

The term Bulletproof Hosting, according to Netscout’s director of threat intelligence Richard Hummel, was previously used to describe networks that were “very clearly criminal.”

“The point of these networks was that anyone subscribing could randomize their IP addresses and domain names. So, as a security professional, if you were trying to track command and control or track a specific adversary, you would have to know the algorithm in order to figure out what the IP address would be next,” says Hummel, adding however that the “nomenclature has morphed over time.”

Bulletproof Hosting now is more typically used to refer to a provider that is very resilient. “They are resistant to takedowns, and they don’t typically adhere to law enforcement requests. They turn a blind eye to what their users are doing.”

The actual service of Bulletproof Hosting is not illegal, and many that fall into the category are something of a grey area. “These networks will sometimes have legitimate purposes, and then turn the other way when users do certain things. Then there are the Bulletproof networks that very clearly know that their user is doing bad stuff and they just don’t care.”

This is reiterated by Gerald Beuchelt, CISO of Acronis, who tells DCD that Bulletproof Hosters typically don’t subject their customers to the same level of scrutiny that other, more above-board, providers might.

“These providers are popping up all over the world, and depending on who the users are, might be in ‘unfriendly’ countries,” says Beuchelt, offering Russia and China as examples. “There have also been quite a few Bulletproof Hosters in Western countries. The Netherlands is particularly known for having a regulatory regime that enables Bulletproof Hosters to exist,” he says, recalling a discussion at a conference in the Netherlands in the last couple of years about how such providers could be de-anonymized.

Richard Hummel, Netscout
Gerald Beuchelt, Acronis

Hummel adds that one of the ways to tell if a provider is a Bulletproof Hosting Provider is if you go to their website and they state that they are not a BPH - “it's kind of a giveaway. If they specifically say we aren’t this, they probably are.”

Beuchelt affirms that “they are probably not advertising it [that they are BPH] on the front page of the Wall Street Journal, but if you go on the darknet you will find marketplaces where it is publicly advertised.”

Some Bulletproof networks will actually use legitimate cloud hosting accounts and “tumble their traffic through them” instead of operating their own infrastructure, though Hummel notes that CSA hosting is unlikely to work this way. “That’s probably going to be very much ‘underground.'”

There have been some cases of successfully taking down CSA content, and the websites dedicated to it, even on the dark web.

Freedom Hosting, which was operated by Eric Eoin Marques and, according to a Krebs on Security report, had a reputation as being a “safe haven” for hosting CSA content, is one of them. Marques, described in an FBI warrant as “the largest facilitator of child porn on the planet,” was arrested in Ireland in 2013, and in 2021 was sentenced to 27 years’ imprisonment.

In 2019, a “CyberBunker” facility in Traben-Trarbach, western Germany, was raided by more than 600 police officers, eventually leading to eight convictions. Among the illegal services allegedly hosted at the German data center were Cannabis Road, Fraudsters, Flugsvamp, Flight Vamp 2.0, orangechemicals, and the world's second-largest narcotics marketplace, Wall Street Market - and, according to reports, CSA sites.

Built by the West German military in the 1970s, the site was used by the Bundeswehr’s meteorological division until 2012. A year later, it was sold to Herman-Johan Xennt, who told locals he would build a web hosting business there. In total, around 200 servers were seized.

Perhaps the most famous is the “Welcome to Video” case. Welcome to Video was a South Korean website that was owned and operated by Son Jung-woo. Son hosted the site from servers operating in his home in Chungchongnam-do in South Korea and between 2015 and 2018 distributed around 220,000 pieces of CSA content, which were available for purchase with cryptocurrency.

The cryptocurrency transactions were found by the US Internal Revenue Service Criminal Investigations department, which asked Homeland Security to investigate. The servers hosting Welcome to Video had the IP address embedded in the source code, enabling the Korean National Police Agency (KNPA) to arrest Son.

In total, eight terabytes of CSA content were seized, 45 percent of which had never been seen by law enforcement before, and 337 site users were arrested.

Taking down the infrastructure and arresting the hosting providers is crucial to making an impact. Hummel explains that, otherwise, these services are very resilient and will simply move to another network and “continue as if nothing happened.”

“If we succeed at actually confiscating the infrastructure, which has happened before, and shut down servers - to actually get a Bulletproof Hosting Provider shut down, maybe you could make some arrests of the people that actually established that service and then I think you could start to feel some of the effects of that,” he says, adding “But it’s very problematic. It’s very difficult to do, and I imagine law enforcement has just as much frustration with this.”

A Herculean battle

In Greek mythology, the Hydra of Lerna is a many-headed serpentine lake monster, and in some iterations, chopping its head off would see the Hydra grow two more.

While the takedown of sites hosting CSA cannot be directly described in the same light, the issue is ramping up. The Internet continues to expand - like the universe - and attempting to monitor it is a never-ending challenge.

As IWF’s Sexton puts it: “Right now, the Internet is so big that it's sort of anonymity with obscurity.”

While some emerging (and already emerged) technologies such as AI can play a role in assisting those working on the side of the light - the IWF, for example, has tested using AI for triage when assessing websites with thousands of images, and AI can be trained for content moderation by industry and others - the proliferation of AI has also added to the problem.

AI-generated content has now also entered the scene. From a legality standpoint, it is treated the same as any other CSA content. Just because an AI created it does not mean that it’s permitted - at least in the UK, where IWF primarily operates.

“The legislation in the UK is robust enough to cover both real material, photo-realistic synthetic content, or sheerly synthetic content. The problem it does create is one of quantity. Previously, to create CSA, it would require someone to have access to a child and conduct abuse.

“Then with the rise of the Internet we also saw an increase in self-generated content. Now, AI has the ability to create it without any contact with a child at all. People now have effectively an infinite ability to generate this content.”

Sexton adds: “From the perspective of law enforcement, their job isn't just to find content and remove it, it's also safeguarding children and arresting those who are abusing them and distributing that content. This is much harder if you can't tell whether a child is real, and there is a real risk that time will be spent chasing synthetic children that don’t exist, or indeed not following up on real abuse because they think it looks like, or appears to be, AI-generated.”

Ultimately, while the Hydra isn’t being killed, it is still important to keep chopping its heads off no matter how many new ones grow back.

Sexton remains optimistic about the IWF’s work: “It's a continual battle of adding friction and making it harder. And if it's too hard, the hope is that they will just stop doing it.” 

The rise of Vertiv

Sitting down with the CEO of the company that timed the AI boom just right

In the years leading up to the launch of ChatGPT, shares in Vertiv were declining, inexorably returning toward the company's 2020 IPO price. Concerns about then-CEO Rob Johnson’s leadership and the growth potential of its bet on high-density and liquid cooling critical infrastructure were mounting.

Then everything changed. AI data centers exploded, and the company's valuation soared, now up some 755 percent (also helping matters were a stock buyback and a dividend increase).

For CEO Giordano Albertazzi, who was appointed on 1 January 2023, it has been an untarnished story of success.

"You can call it being a fair-weather CEO," Albertazzi tells us, when we put it to him that he joined at the perfect time. "But you could just grow with the market, while our value creation hinges around growing more than the market."

The company, he argues, played a good hand well, making it the key cooling and power equipment provider for the AI era.

Getting to that point required keeping a tight leash on spending, and reaching the necessary size to benefit from economies of scale, Albertazzi says. He may only be two years into the top job, but he's an old hand at Vertiv - dating back beyond its 2016 formation to its earlier life as Liebert and Emerson Network Power.

"I think of Vertiv as a company that has improved a lot, not just in growth,

but also in terms of profit, by growing our fixed cost as little as necessary and really leveraging volume and turning volume into bottom line,” he says. “That type of philosophy is profoundly rooted in the way I run the business. I've been with a company for 20+ years in very good and very bad weather, not always at the top helm, but certainly with quite an important helm in my hands."

Naturally, that raises the question of how a company avoids reaching its apex too soon, and staves off a decline. Vertiv is “relentless in investing in innovation and technology, because it is technology that will create a long-term competitive advantage," Albertazzi says.

"People tell us we are doing quite well, but I still say 'Okay, what can go wrong? How can we be better? What are the things that we need to strengthen?’ That culture is good for every weather you come across."

He adds: "It requires a healthy paranoia - almost on the verge of unhealthy. Just there, on the line.”

For now, however, the company's main challenge is the enviable issue of simply trying to keep up with demand. "Demand is stronger than supply, which is a good situation for an equipment provider," he says.

Ballooning hyperscaler capex costs, new players like OpenAI’s Stargate, and billions in new funding announced in the first few months of 2025 alone have made it clear that this demand still has some ways to go.

“The market is insane in terms of appetite,” Albertazzi says, but notes that the industry’s desire for growth is, again, constrained by the realities of supply. “When you factor in all the complexities that are permitting, access to power, etc, that insane appetite gets moderated by a number of factors,” he says.

The company in November 2024 told its investors that it expected its cloud and colocation business to grow 15-17 percent every year for the next five years. “Projections have not changed at this time,” Albertazzi says, despite Stargate and other huge announcements.

Data center operators are seeking to circumvent as many blocks as possible - looking to on-site power, and pushing for governments to speed up permitting - but there is still a simple limit to how much capacity can come online in a short space of time.

“We're talking about an industry that is de facto a construction industry, you cannot just go triple digit the moment you wake up,” he says. “You don't go viral building data centers, as you would with the use of an app.”

Were there no such constraints, “the market would probably be 25-30 percent,” Albertazzi says. “And then there’s the bumper of construction, having the right number, simply having qualified personnel on-site. Thousands of people, not a walk in the park.”

This, while a headache for those trying to deploy as fast as possible during the AI arms race, is actually a good thing, Albertazzi argues.

“The market doesn't have an ability to infinitely accelerate or slow down,” he says. “I have said several times to investors and customers that those moderating factors are not bad for the industry, it’s not super acceleration and slowdowns.”

The industry can ideally avoid some of the more aggressive booms and busts of faster-moving sectors, he believes - although market panics over DeepSeek may suggest investors aren’t so sure.

What is sure, however, is that there will not only be more data centers, but that a number of those sites will be bigger than anything we have seen before. “The trend is absolutely undeniable that the average size of the data center campus is going significantly up one order of magnitude, at least,” Albertazzi says.

“But whether that is 1GW, 2GW, 5GW, or how many gigawatts, is less important from our infrastructure standpoint. From the point of view of the logic in which you build it doesn't change dramatically.”

What will be different is the rack densities within those sites. “That is definitely going to influence the way data centers are built,” Albertazzi explains. “We have already seen a change in the thermal chain - everything is liquid cooling.”

Racks have jumped from single digits to as much as 150kW or more, and densities show no sign of slowing down. DCD understands that GPU-leader Nvidia reached out to Vertiv and other companies about the possibility of 1MW racks in the years to come.

“1MW is an easy thing to say, because it makes all your math very easy,” Albertazzi says, sidestepping the question about the research effort. “But it's undeniable that the density will continue to increase. Will we get to 1MW exactly? I don't know, but density will definitely go higher and higher because the compute will be so much more efficient when that happens.”

Such a rack would probably not look like the ones of today. “It will have a slightly bigger footprint, but it will not be huge. It won’t have the footprint of 10 racks. Certainly, it will be much more robust, simply because the weight will be totally different scales. But the concept would not be alien to what we think of today.”

Another leap in density will necessitate “a further revolution in the liquid cooling, and a paradigm change on the power side,” he envisions. “Higher voltages, different types of power infrastructure, all things that are still dynamic. We are very fortunate in having a strong relationship with Nvidia and defining what the power thermal needs will be there, two, three, five years out.”

All this means it is “not easy being a designer of data centers for today,” Albertazzi says. “You design something today, and you will have that asset built 18 months from now, probably 2027. That asset probably needs to be up and running and generating returns up until 2047. That's a lot. There will be a lot of IT cycles happening in those 20 years at speed and dramatic change, unheard of and never experienced before. So that's the most difficult part.”

Albertazzi says Vertiv’s relationship with Nvidia, hyperscalers, colos, and enterprises will be key to surviving this chaotic moment. “We feel we have a particularly important role to work with them and share our technology roadmaps, so they have future-proof infrastructure, or one that you can retrofit very efficiently and very effectively without having to overhaul the whole thing,” he says.

Whether Vertiv can maintain its lead for this revolution and the next is an open question. After enjoying an early start, it now faces competitors that are keen to eat away at its business.

KKR acquired liquid cooling firm CoolIT in 2023 to help it scale, while in late 2024 manufacturing giant Flex acquired JetCool, and long-time Vertiv rival Schneider Electric picked up Motivair for some $850 million.

“We're not afraid of competition," Albertazzi says. "The industry has always been competitive, even when it was growing in the low single digits. There is a consolidation happening in the liquid cooling side and, very often, it’s the same competitors that we had before, when liquid cooling was not there.

“Now, the same competitors are extending their influence into that part of the market which is very normal. I don't want to sound dismissive - again, we're paranoid - but it's a competitive market, and we believe in our competitive advantages.”

Gio Albertazzi - Vertiv

Welcome to Gas Land - how natural gas is powering the US AI boom

As AI drives increasing energy consumption in the US, natural gas looks set to fill the gap

The rapid expansion of artificial intelligence (AI) is fueling an unprecedented surge in energy requirements across the US data center market. According to Berkeley Labs, electricity demand from the sector could increase by between 74-132GW by 2029, potentially accounting for up to 12 percent of total US consumption.

While the industry has long been a leader in sustainable energy procurement, the absence of a commercially viable low-carbon baseload source in the face of such exponential growth is reshaping discussions on power procurement. Therefore, as AI data centers expand in size and number, the need for reliable, dispatchable power is increasing, placing natural gas at the forefront as the only scalable and immediately available option to meet demand.

AI's insatiable appetite

AI installations have an enormous appetite for power, largely due to their reliance on large numbers of GPUs that can consume up to 33 times more energy per task than traditional CPUs. They also generate significantly more heat, driving the need for powerful cooling systems, which can account for nearly 40 percent of a data center’s total energy use. Training advanced AI models is even more energy-intensive, often running continuously for days or weeks. Training GPT-4, for instance, reportedly consumed 50GWh - more than 50 times its predecessor.

Despite efficiency improvements recently demonstrated by DeepSeek - a Chinese AI model promising similar performance to US models with lower energy consumption and fewer high-end chips - experts have urged caution. Benjamin Lee, a University of Pennsylvania professor, notes that DeepSeek's efficiency applies only to a narrow class of AI computations. While it could impact the number of gigawatt-scale centers, he said, demand for inference would likely rise, necessitating continued data center build-out. In addition, there are investigations ongoing as to whether DeepSeek used OpenAI’s models to help train its model, casting further doubt on its energy-saving claims.

Indeed, the data center and power sectors have remained bullish on their projections. AI is still expected to dominate new data center capacity going forward, accounting for 70 percent of total demand by 2030, according to McKinsey. This puts increasing pressure on the energy supply side to meet this demand. A recent report from think tank Rand warned that if the US wishes to retain at least a 75 percent majority of worldwide AI computing, it would require almost 51GW to be available to data centers by 2027.

Therefore, natural gas has emerged as the only viable and rapidly deployable power source capable of meeting the soaring energy demands of AI-driven data centers.

The natural solution

Natural gas is uniquely suited to meet the immediate needs of data centers. It is the most dispatchable fuel source available, able to be switched off and on easily, can run at capacity factors exceeding 80 percent, and can be ramped up within minutes to meet demand. The US is also uniquely positioned to meet the demand as the largest net producer of dry natural gas, supported by a vast pipeline network spanning more than three million miles.

"Some developers are working on behind-themeter generation directly served by natural gas. Others want to remain connected to the grid,"
>>Caitlin Tessin, Enbridge

Therefore, many analysts believe that, to meet the staggering pipeline of new data centers, natural gas is the only real option. "In the short term, natural gas is the way to go. Many utilities consider it the most viable option for balancing the growing loads while ensuring reliability," said Karthik Subramanian, an analyst at Lux Research.

Consequently, utilities facing massive data center pipeline capacity increases are turning to the natural gas sector to meet forecasted demand. For example, utilities serving the Carolinas, Georgia, and Virginia markets have announced plans to add 20GW of new natural gas generation capacity by 2040, with two-thirds of forecasted load growth tied to new data center capacity.

This massive surge in data center pipeline capacity has fundamentally changed the conversation around power sourcing, according to Jamie Smith, COO at power company RPower. “It’s no longer just about prioritizing renewable energy; instead, it’s become a matter of simply obtaining reliable power," he explains. "Natural gas has emerged as a crucial solution, providing a stable and scalable source of electricity that meets the immediate needs of data centers."

The election of Donald Trump, who has already pledged to fast-track new natural gas generation, and utilities' moves to shore up and expand their natural gas generation capacity are giving natural gas companies an increasingly free hand to expand their production and supply infrastructure. This bodes well, particularly for midstream gas companies - responsible for transporting, storing, and processing natural gas - which are targeting massive growth across their pipeline network driven by data center demand.

Pipeline buildout

Midstream gas companies are anticipating a significant uptick in demand from the data center sector, with some predicting this to be as high as "10 to 12 billion cubic feet per day (bcf/d) by 2030," according to Michael Grande, managing director of midstream energy and refining at analyst firm S&P Global. Although actual demand may fall short of those upper estimates - S&P forecasts a more modest growth range of three to six bcf/d - midstream companies have already made large capital commitments to pipeline extensions across the country.

Cathy Kunkel, an energy consultant at the Institute for Energy Economics and Financial Analysis (IEEFA), believes that midstream companies are sensing a real "opportunity with data centers" and are actively looking to "supply pipelines to new natural gas plants" to directly serve the data center market.

Midstream firms such as Enbridge have openly recognized the opportunity presented by data centers. According to Caitlin Tessin, vice president of market innovation and Gulf Coast business development at Enbridge, the company has "seen interest from data center providers in multiple regions.” In the southeast alone, the company said it has had interest in serving 4.5GW to 5GW of demand.

In Virginia, North Carolina, South Carolina, and Georgia alone, pipeline operators have proposed or begun construction on more than 3.3 bcf/d of new pipeline capacity by 2040, according to the IEEFA.

Midstream flexibility

While the majority of the new pipelines are expected to supply utilities, midstream companies are approaching demand with an open mind and have stressed adaptability and flexibility in how they will supply the data center sector. According to Tessin, the lack of a one-size-fits-all approach to powering data centers means that operators are looking at off-grid and grid-based solutions.

"Some developers are working on behind-the-meter generation directly served by natural gas, so we are actively siting lateral projects off our mainline to support them. Others want to remain connected to the grid and are working with electric distribution companies to site new power plants that would then serve them," she explains.

This has led to an increase in midstream pipeline operators building laterals off their main pipeline to directly serve data centers, such as AI data center developer CloudBurst's ten-year natural gas supply deal with midstream gas firm Energy Transfer to power its 1.2GW Texas AI data center via a dedicated lateral.

According to S&P Global’s Grande, these laterals are not only "practical," but also "avoid permitting issues and public opposition while offering direct power access." As a result, data center companies, especially AI-focused firms, facing persistent permitting challenges and transmission constraints, are looking towards off-grid natural gas generation as their quickest solution.

"Natural gas [has become] the industry standard for large-scale data centers today because it's one of the few energy sources that can scale up to gigawatt levels quickly,"
>>Darrick Horton, TensorWave

Off-grid options for data centers

The rise of natural gas as an off-grid power solution is beginning to reshape power procurement, fostering collaboration between gas producers and data centers.

Increasingly, data center operators are colocating with upstream gas producers, particularly in states with strong infrastructure, lower costs, and favorable permitting, such as Texas, the Midwest, Colorado, Utah, New Mexico, and Louisiana. Texas, in particular, is emerging as a leading off-grid AI hub due to its size, tax advantages, and status as the top US gas producer.

Several data center firms have signed agreements with gas providers at the source to secure direct, off-grid power in Texas. A notable example is Texas Critical Data Centers, a joint venture between New Era Helium and AI cloud firm Sharon AI. The JV plans to build a 250MW AI data center adjacent to a gas production facility in the Permian Basin.

According to Will Gray, CEO of New Era Helium, areas such as the Permian Basin offer a unique opportunity for AI and cloud data centers, as it “offers cheap power and ample infrastructure for scalability, making it ideal for such projects."

Therefore, for data centers looking to scale rapidly, off-grid natural gas offers clear advantages, namely, lower costs, streamlined permitting, and a stable, long-term energy supply. This has led smaller-scale AI-focused developers to increasingly view off-grid natural gas as the industry standard to power their operations, as it is one of the few energy sources that can quickly scale to gigawatt levels, according to Darrick Horton, CEO of AMD-based cloud provider TensorWave.

While much of the new AI data center capacity will remain in hotspots such as Virginia, the attraction of off-grid natural gas supply is pushing “operators to opt for locations further out, where they have the flexibility to build out their own power solutions,” points out David Dorman, director of development at Duos Technologies, which develops power infrastructure through its Duos Energy subsidiary.

The shift to off-grid solutions has not only been limited to plucky AI start-ups. Large-scale players are embracing natural gas for its reliability and scalability. For example, the first Stargate data center in Abilene, Texas, is reportedly set to use an off-grid natural gas power plant for its power - as well as renewable assets. The surging demand has also attracted major oil and gas companies. ExxonMobil and Chevron have announced plans to develop dispatchable off-grid natural gas power plants for data centers.

As a result, while utilities remain integral to AI's growth, an increasing range of options is being made available to data centers to power their operations behind the meter, entrenching natural gas as a central power source for data centers wishing to meet their AI ambitions.

Risky business

The rise of AI is changing the way the market views power procurement. However, two inherent fears continue to permeate the market: the risk of overbuilding new gas-fired generation and the subsequent impact this new power could have on data centers' sustainability credentials.

Fears of overbuilding new gas generation have been primarily fueled by a growing discrepancy between utility capacity growth projections and those of independent researchers. Dominion Energy, for instance, projects data center growth seven percent higher than the Electric Power Research Institute’s (EPRI) upper forecast.

According to Jeremy Fisher, principal advisor for climate and energy at environmental pressure group Sierra Club, the impact of overbuilding new gas-fired generation capacity may result in "a huge amount of stranded asset risk being passed over to American ratepayers, as utilities build gas infrastructure for speculative data center demand that may not materialize."

Several utilities, including American Electric Power Ohio, have introduced tariffs that will require new data center customers to pay for the majority of the energy they say they need each month to cover the cost of infrastructure. However, IEEFA’s Kunkel has argued more needs to be done to ensure that the bill does not fall on the ratepayer. “If the customer is gone, other ratepayers pick up the tab unless regulators step in now and put more safeguards to push some of the risk back onto the data center developers," she says. In addition, the uncertainties around forecast discrepancies and growth of data centers actively colocating with natural gas facilities have led people like Fisher to argue that the increasing alignment seen “risks tying the sector’s growth directly to fossil fuel infrastructure."

While proponents claim that natural gas is the cleanest fossil fuel alternative, emitting 50 percent less CO2 than coal, the growth of capacity associated with the AI boom risks massive emission increases. Estimates from Goldman Sachs have suggested a potential increase of 200 million tons of carbon dioxide emissions per year by 2030. These fears are exacerbated by the fact that data center operators have already reported huge increases in carbon emissions over the past five years. Last year, Google reported that its 2023 emissions rose 13 percent compared with the previous year and 48 percent over five years.

Despite the overt concerns over the impact natural gas buildout could have on carbon emissions, Michael Grande believes it is becoming almost an open secret within the sector that “tech companies with zero-emission goals may still turn to gas for quick power, despite the contradiction with their public commitments.”

Therefore, to assuage concerns, data centers and utilities are increasingly promoting natural gas as a “bridging fuel” that can support the transition to low-carbon baseload alternatives. However, the nature of these alternatives remains up for debate.

What's the alternative?

Natural gas has been touted as a bridge fuel for years now, with many in the data center sector expressing optimism about its bridging credentials. "Natural gas can serve as a reliable and scalable fuel source that can bridge the gap between traditional power systems and future low-carbon technologies," says Raj Chudgar, chief power officer at data center operator EdgeConneX. Earlier this year, EdgeConneX announced plans to develop a 120MW natural gas plant in Ohio to power a data center campus. The plant will be the primary power source for the data center, serving its energy needs behind-the-meter.

Several low-carbon technologies offer a potential solution to the baseload question. Small Modular Reactors (SMRs) are one promising option, offering reliable baseload power at scales of 80-300MW. Data center operators have widely committed to the technology over the last year, with AWS, Google, and Oracle all signing long-term supply agreements.

One of the novel agreements was between gas generation firm RPower and SMR developer Oklo. The two companies announced a partnership that could serve as a powerful test case for using natural gas as a bridge fuel to SMRs, deploying a phased power model. Natural gas would be installed to meet data centers' immediate power needs, but it would be slowly phased out and replaced by SMRs as they become commercially available.

"By constructing natural gas-fired power plants now, data centers can access affordable, reliable energy," and "once nuclear power becomes available, these gas plants can still play a role by providing backup capacity, load following, supporting peak demand, and ensuring grid stability," says RPower’s Jamie Smith.

However, despite increasing interest in SMRs, they remain unproven. Although Oklo states that it aims to deploy its first SMR in 2027, the company has faced skepticism following the Nuclear Regulatory Commission's denial of its application to build and operate a reactor in 2022. As a result, the timeframe of the deal remains shaky, with Smith admitting that the transition could last "anywhere from three years to over a decade."

For others, hydrogen is seen as a potential answer. This is particularly popular amongst natural gas producers. "Hydrogen is another potential future solution. Most modern gas generation equipment can already consume a blend of hydrogen and natural gas, and in some cases, even operate on 100 percent hydrogen with modifications," says Dorman.

However, despite increasing interest, green hydrogen production remains nascent, making it an impractical short-term substitute.

While emerging low-carbon technologies, such as hydrogen and SMRs, might offer significant potential, they also face notable barriers and are unlikely to make a noticeable impact before the decade's end.

In turn, as AI data centers become more intertwined with natural gas infrastructure, the drive to transition to cleaner energy may weaken. In the near term, natural gas will be the primary power source fueling new data center growth across the US - emissions targets be damned.

Listen to the DCD podcast, new episodes available every two weeks.

Tune in for free: bit.ly/ZeroDowntime

Hear from Vint Cerf, Synopsys, Microsoft, Google, Digital Realty, and more!

ABB Data Centre Solutions.

ABB’s complete electrical portfolio provides the energy efficiency and energy insights to monitor and reduce power usage as well as technological advancements to lower carbon and GHG emissions. ABB is your premier sustainability partner, providing 100+ years of our domain expertise in electrical solutions to solve some of your most difficult sustainability issues. Let’s write the future. Together.

Becoming Nebius

Following Russia’s invasion of Ukraine, Nebius rose from the ashes of Yandex and is now positioning itself as a major player in the AI cloud space

In February 2024, Yandex – the company often referred to as Russia’s answer to Google – announced it was selling its Russian assets to a consortium of investors for $5.2 billion.

The news came almost two years to the day after the illegal Russian invasion of Ukraine, a conflict that has become the largest and deadliest in Europe since World War II.

Within months of war breaking out, Yandex had already sold off its media business to Russian state-controlled social media giant VK (the division's head was sanctioned in the months following the invasion) and was reportedly looking for a way to exit Russia.

Prior to the sale, Yandex's holding company had already been based in the Netherlands for more than a decade and it is in Amsterdam where the newly-rebranded Nebius Group is now headquartered. The Dutch capital is home to around 500 Nebius employees, although following the company’s exit from Russia, its employees - in particular Russian workers who had to leave the country - settled in all corners of the world.

The new company retained control of Yandex’s Finnish data center and its Nebius AI unit - the inspiration for its new moniker - as well as data firm Toloka AI, edtech provider TripleTen, and autonomous driving developer Avride.

Unsurprisingly, given its new name, Nebius also announced it was “building one of the largest commercially available artificial intelligence (AI) infrastructure businesses based in Europe,” and would be offering an AI-centric cloud platform built for intensive AI workloads. Its full-stack infrastructure is designed to service the growth of the global AI industry, including large-scale GPU clusters, cloud platforms, and tools and services for developers.

Charlotte Trueman, Compute, Storage, and Networking Editor

On a cold weekend at the beginning of October 2024, DCD was invited to Finland to visit the Nebius data center in Mäntsälä.

After a long time unable to say very much for obvious reasons, the company was finally ready to emerge in its new form and show Nebius off to the world, as was demonstrated by the flurry of announcements that bookended the trip.

Earlier in October, Nebius had announced plans to deploy an Nvidia H200 cluster in Paris as part of the company’s stated aim to invest more than $1 billion in AI infrastructure in Europe by mid-2025.

In the week following the visit, Nebius publicly stated it would be tripling the capacity of its Finnish data center, placing upwards of 60,000 GPUs at the site to expand its capacity to 75MW. The $1bn investment figure includes what the company will spend on its planned Finnish expansion.

“We are pretty much almost in the form we should be as a company,”
>>Andrey Korolenko, head of infrastructure, Nebius

Subsequently, Nebius announced it would deploy an Nvidia H200 GPU cluster at a data center owned by Patmos in Kansas City, Missouri, began trading again on the Nasdaq, and revealed details of a $700m equity fundraise. The company also plans to invest up to $1.5 billion in capex this year, with a significant portion going into GPUs and data center infrastructure.

Bringing efficiency to the south of Finland

The Mäntsälä data center is located in southern Finland, some 60km outside Helsinki, and the trip there begins and ends with an hour-long drive through beautiful Nordic countryside.

The trip marks the first time Nebius has opened its doors since the sale and rebranding. Following all the usual security checks that accompany any data center visit, DCD is ushered inside for the tour, hosted by Nebius’ head of infrastructure, Andrey Korolenko, a former long-time Yandex employee who held the same job title prior to the separation.

In the company’s introductory remarks, chief marketing officer Anastasia Zemskova says that Nebius is looking to bridge the gap between AI practitioners and the current offerings being provided by specialized AI cloud providers, which it believes are inadequate.

“Our approach to building the cloud has always been based on what we see on the market, in the real consumption, in the real world of data scientists and mathematicians that are currently being stretched all over the place trying to not only train their models but also build the infrastructure that could sustain those models well,” says Zemskova. “We are very user-centric.

“What we see on the market is that cloud providers are not capable of keeping up with what machine learning engineers are needing, and it's very common that users are offered GPUs with no cloud on top of it.”

She adds the cloud “has been evolving all the time,” so the company’s goal is to deliver a “hyperscaler-like experience combined with GPUs, so that users of our AI-centric cloud can utilize the best from the cloud, but have it tailor-made for AI.”

In addition to the tens of thousands of GPUs it currently houses, the 25MW facility is also home to the ISEG supercomputer, a 46.54 petaflops system equipped with Intel Xeon CPUs and Nvidia H100 GPUs. On the November 2024 edition of the Top500 list of the world’s most powerful supercomputers, ISEG ranked 29th.

At its current capacity, the Mäntsälä data center only uses air-cooling technology to chill its servers – one of the benefits of building your facility in a country where cold air is freely available all year round.

The new racks arriving at the data center in Mäntsälä, Finland

Andrey Korolenko

The facility also exports heat via a local district heating network, providing warmth for around 2,500 homes in the area. As a result, Nebius says the data center is among the most energy-efficient in the world, and presently, the facility has a PUE of 1.12.
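For readers who want the arithmetic: PUE is simply total facility power divided by IT power, so a figure of 1.12 implies only a modest slice of the site’s draw goes on cooling and power conversion. Below is a minimal sketch of that calculation, assuming (our assumption, not Nebius’ stated definition) that the quoted 25MW refers to IT load.

```python
# Back-of-the-envelope sketch of what a PUE of 1.12 implies.
# Assumption (not a Nebius figure): the quoted 25MW is treated as IT load.

def facility_power(it_load_mw: float, pue: float) -> float:
    """PUE = total facility power / IT power, so total = IT load * PUE."""
    return it_load_mw * pue

it_load = 25.0                        # MW, assumed IT load
pue = 1.12                            # reported PUE
total = facility_power(it_load, pue)

print(f"Total facility draw: {total:.1f} MW")                   # ~28.0 MW
print(f"Cooling and power overhead: {total - it_load:.1f} MW")  # ~3.0 MW
```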

The data center uses two levels of filtering for air quality, with the air intake system able to handle 7-8 million cubic meters of air per hour. The outside air is cleaned by the filters before being directed down onto the servers, through the cold aisle, and then back into Mäntsälä.

However, this will change as the company deploys more powerful GPUs. After all, even the frigid Finnish air can only do so much when confronted with 1kW chips. As of December 2024, Nebius said a “liquid cooling system designed by Nvidia” is being installed in Finland, and at the company’s new colocation facility in Kansas City, to accommodate Nvidia Blackwell GPUs.

Another change that the data center will see following its expansion is the move away from diesel generators. Originally installed to ensure uptime in the first data center building, Korolenko says that the second, third, and fourth buildings will not use diesel generators, due to both an improvement in design and the reliability of the Finnish power grid.

The data center also doesn’t have any UPS batteries – “batteries are a huge headache, to be honest,” says Korolenko – relying instead on a flywheel that stores kinetic energy and feeds the alternator if the facility experiences a drop in power.
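As a rough illustration of why a flywheel can stand in for batteries: the stored energy is ½Iω², and the ride-through time is that usable energy divided by the protected load. The figures in the sketch below are hypothetical, chosen only to show the shape of the calculation; they are not Nebius specifications.

```python
# Illustrative only: hypothetical flywheel figures, not Nebius specifications.
# A flywheel UPS stores kinetic energy E = 0.5 * I * w^2 and bridges a power dip
# until the alternator or grid picks the load back up.
import math

def ride_through_seconds(inertia_kg_m2: float, rpm_full: float,
                         rpm_min: float, load_kw: float,
                         efficiency: float = 0.9) -> float:
    """Seconds of ride-through between full speed and minimum usable speed."""
    w_full = rpm_full * 2 * math.pi / 60   # rad/s
    w_min = rpm_min * 2 * math.pi / 60
    usable_joules = 0.5 * inertia_kg_m2 * (w_full**2 - w_min**2) * efficiency
    return usable_joules / (load_kw * 1_000)

# Hypothetical module: 300 kg*m^2 rotor at 3,600 rpm, usable down to 1,800 rpm
print(f"{ride_through_seconds(300, 3600, 1800, 500):.0f} s at a 500 kW load")  # ~29 s
```

A few tens of seconds is generally enough to ride out a brief grid dip, or to start backup generation where it exists.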

Do you want to build a rack?

In addition to improving current AI cloud offerings, Nebius is also looking to address the GPU imbalance that exists between Europe and the US, noting that because the States is so “compute hungry,” a hardware scarcity exists on the continent.

“From day one, we've been concentrating on building huge clusters for these great distributed workloads,” Zemskova said. “But we also see the inadequacy in the fact that people who

“We see the inadequacy in the fact that people who are needing a lot less cannot find those GPUs on the market. This is why we cover infrastructure needs of any scale, from a single GPU to these large-scale clusters,”
>>Anastasia Zemskova, chief marketing officer, Nebius

need a lot less cannot find those GPUs on the market. This is why we cover infrastructure needs of any scale, from a single GPU to these large-scale clusters.”

Nebius designs its own racks to house the frankly astonishing number of GPUs the company has been able to get its hands on, including Nvidia H100s, H200s, and Blackwell GPUs, which, as of late January, should be in the company’s possession “in a matter of weeks.”

The racks in Mäntsälä – in addition to those soon to be deployed in Paris and Kansas City – have been designed by the company’s own R&D team and, according to Nebius, are optimized for free cooling, operating within an inlet temperature range of +15°C to +40°C (59°F to 104°F).

The company also claims that Nebius-designed servers are 28 percent more efficient than those in an “average data center,” which, in combination with the facility’s “outstanding power usage efficiency,” allows it to offer “competitive pricing for GPU resources.”

In a briefing document provided to DCD before the visit, Nebius said its hardware “consumes approximately 30-50 percent less energy for computations than servers with standard architecture while delivering twice the performance.”

While Nebius did not detail how it was defining the “average data center,” at a London event organized by the company in February 2025, Gleb Evstropov, head of compute and network services, explained that the figure was reached by calculating the difference between the power consumed by a standard 19” HGX server and a Nebius HGX server across different inlet temperatures.

However, he went on to note that, while the 28 percent figure related to servers deployed in Nebius data centers, it was closer to 22 percent for servers deployed in colocation facilities.
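A hedged sketch of that comparison method follows: average the percentage power saving across a range of inlet temperatures. The wattage figures below are invented for illustration; only the method mirrors what Evstropov described.

```python
# Sketch of the comparison method described above; the wattages are invented.

def avg_saving(standard_w: dict, custom_w: dict) -> float:
    """Mean percentage saving of the custom server vs the standard one,
    averaged across the inlet temperatures both were measured at."""
    temps = sorted(set(standard_w) & set(custom_w))
    savings = [(standard_w[t] - custom_w[t]) / standard_w[t] for t in temps]
    return 100 * sum(savings) / len(savings)

# Hypothetical server power draw (watts) at different inlet temperatures (deg C)
standard = {15: 10_000, 25: 10_600, 35: 11_500}
custom = {15: 7_300, 25: 7_600, 35: 8_200}

print(f"Average saving: {avg_saving(standard, custom):.0f}%")  # ~28%
```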

Mid-way through our visit to the Mäntsälä data center, a fresh batch of racks had just arrived from the manufacturer in Taiwan, and we stopped to watch as they were unpacked.

Turning to a rack that had already been unboxed, Korolenko asked the group if we’d like to see some H100s, before pulling out one of the shelves to reveal Nvidia hardware worth $600,000.

Air filters inside the data center
Inside the data center

“That’s a couple of Lamborghinis in this one rack,” he says, rather casually.

Nebius is a preferred cloud service provider in the Nvidia Partner Network, and as a result approximately 450 of the 1,000 engineers the company had employed as of October 2024 work exclusively with Nvidia.

Furthermore, Nebius’ recent $700m fundraise included participation from Nvidia.

Speaking to DCD several months after the trip, Korolenko says that Nebius is still primarily deploying H100s at the moment but will soon be taking shipments of more than 22,000 Blackwell GPUs, with the order consisting of B200s, GB200s, B300s, and GB300s, expected to arrive in that order. Nebius will start by deploying its Blackwell chips in the US, with European deployments to follow soon after.

“It will take longer [to deploy Blackwell] than it takes us to deploy the current generation, for the purely technical reason that first deployments are usually just longer as you need to test everything and fix everything,” Korolenko says.

When asked if he was concerned about the previously reported overheating issues that had plagued early Blackwell deployments, Korolenko says it's not a worry for the company, as Nebius has been working on its Blackwell racks since early November and has not encountered any problems under laboratory conditions.

However, he adds that, while it’s normal to re-architect racks for most new chip generations, Blackwell is noticeably different – “you have different power, different management, and a different layout.

“We'll see how the mass deployment goes, but I believe that we are leaving ourselves a margin for error, so I believe that it will be good. It will be a learning curve from the start, but so far, it looks okay.”

The future is Nebius

Although Nebius is currently spinning a lot of GPU-shaped plates across multiple regions, speaking in January, Korolenko is confident the company’s various plans are on schedule.

On its decision to launch in the US market, where Nebius is leasing space at a former printing press turned data center in Kansas City, Missouri, he says having a presence in America is important as “most of the customers and most of the money is coming from the US.”

Initially, the company plans to deploy a 5MW Nvidia H200 GPU cluster at the Patmos-owned data center, which can expand to a maximum of 40MW, or about 35,000 GPUs, at full capacity.

While Korolenko acknowledges that Kansas City isn’t the first place you might think of when it comes to data center locations, he says the site was chosen for its power price and availability. The company has since announced a 300MW site in New Jersey.

However, this new focus on the US doesn’t mean that Nebius’s ambitions to plug the European GPU deficit have been put on hold, although Korolenko does note that it's not just hardware Europe is short on, saying that equipment and construction workers are also harder to come by on the continent compared to other regions. This March, the company announced a small deployment at Verne's data center in Iceland.

“[The US] is also not so concerned about green power, for example, it’s not even close, to be honest,” he says. “They're also way more pragmatic and focused on the delivery dates, whereas Europe is just a bit more relaxed.”

Despite the challenges, given how much time the company spent grappling with the uncertainty of its future, Nebius seems happy to find itself in a position where it can speak about these things openly.

Korolenko says it has been an “extremely intense” period for the company, but argues the projects it has unveiled “prove that we can do [what Nebius set out to do].” He adds: “We are pretty much in the form we should be as a company.”

Nvidia H100s

Eaton 9395X – the next generation UPS

Ease of deployment

• Faster manufacturing due to ready-built sub-assemblies

• Simplified installation with inter-cabinet busbar design

• Plug-in self-configuring power modules

• One-click configuration of large systems

Compact footprint

• Occupies up to 30% less floorspace, leaving more room for revenue-generating IT equipment

• Installation can be against a wall or back-to-back

• Integration with switchgear saves space, as well as the cost of installation and cabling

The Eaton 9395X is a new addition to our large UPS range. It builds on the proven power protection legacy of Eaton’s 9395 family, providing a market-leading footprint with the best power density and leaving more space for your revenue-generating IT equipment.

This next generation Eaton 9395X UPS offers more power, with ratings from 1.0 to 1.7 MVA, in the same compact footprint and brings you even more reliable, cost-effective power, which is cleaner thanks to our grid-interactive technology. With world-class manufacturing processes and a design optimized for easy commissioning, the 9395X offers the shortest lead-time from order entry to activation of your critical load protection.

Cost efficient & flexible

• Save on your energy bill with improved efficiency of 97.5% and reduced need for cooling due to up to 30% less heat loss

• Choose the correct size capacity for your immediate needs, and easily scale up later in 340 kW steps

• Optimized battery sizing with wide battery DC voltage range

Easy maintenance

• More reliable self-monitoring system

• Less need for scheduled maintenance checks

• Safe maintenance while keeping loads protected

• System status information provided

More resilient

• Builds on the capabilities of the proven Power Xpert 9395P UPS

• Improved environmental tolerance for modern data centers

• Component condition monitoring

• HotSync patented load-sharing technology

• Native grid-interactive capabilities

• Reduce facility operating costs or earn revenue through energy market participation

Palistar’s big tower play with Symphony Towers Infrastructure

Nearly 30 years in the telecom towers industry and Bernard Borghei is still just as pumped up

"The overall telecom infrastructure segment in the US is extremely attractive and active right now,” says Bernard Borghei, CEO of Symphony Towers Infrastructure. And he should know.

It’s been 11 years since Borghei co-founded Vertical Bridge, one of the biggest network infrastructure companies in the US.

Vertical Bridge owns and master-leases more than 17,000 towers across the US today, and has more than 500,000 owned and master-leased sites across the country. Prior to that, Borghei was SVP and partner at Global Tower Partners.

After leaving the company in 2022, Borghei had a stint at Tower Engineering Professionals before joining Symphony Wireless, an infrastructure company backed by Palistar Capital.

When Borghei first spoke with DCD, in September 2024, he was CEO of Symphony Wireless, a company that acquires ground leases and easement rights to telecom towers. But a few months on, the picture has changed.

Two becomes one

In January 2025, Palistar chose to merge Symphony Wireless with CTI Towers, another firm backed by the alternative asset manager. The new single entity is known as Symphony Towers Infrastructure with Borghei as the CEO.

“We had two successful, mid-sized companies operating,” Borghei explains in a second interview conducted following the merger. “Each of us was growing the business well, but we were medium-sized companies.”

Founded in 2019, Symphony Wireless acquired ground leases and easement rights to telecom towers, which it then leased out.

CTI, a telecom tower firm, was founded in 2011 with an investment from

Comcast Ventures. Acquired by Palistar in 2020, it owned, managed, and marketed more than 1,800 infrastructure assets across the US prior to the acquisition.

Borghei explains that both companies served similar customers, such as carriers AT&T, Verizon, T-Mobile, and Dish, plus other regional networks operating in the US.

“We thought it'd be nice to combine and have one master approach,” he says. “We had to come back and do our homework, make sure we could do that from all various perspectives with the investors and everybody else involved. Fortunately, it worked.”

The merged company forms what Borghei describes as a “massive platform.” Indeed it is the fifth-largest privately held infrastructure company in the US.

Reaching more of the country

The acquisition gives the new entity greater scale, says Borghei. “It makes us more meaningful to our clients, because now we have 3,000 assets to offer them,” he says.

Borghei hopes the merger will also improve the efficiency of the company, something that he believes is important for the long-term success of the business.

The company is some way off the likes of Crown Castle and American Tower Corporation, two of the biggest US tower operators, each with more than 40,000 towers across the country.

In comparison, Symphony Towers Infrastructure operates 3,000 assets across all 50 states.

Bernard Borghei

The strategy for the company isn’t to build towers or acquire new assets in the same way that other towercos operate, says Borghei.

“It all comes down to finding the right creative, strategic opportunities to grow the business,” he says. “We're not going to go out there and buy things just for the sake of buying things. We're going to be very strategic.”

Instead, he wants to “organically” leverage the company’s existing assets.

“We want to take the existing towers that we have to attract new tenants to colocate on them and drive leases up on these towers,” says Borghei.

He explains that Symphony isn’t interested in building new infrastructure, noting this is the remit of sister company Harmony Towers. Borghei insists there are no plans to combine Symphony and Harmony into one melody, and says the latter company will continue to play to its own tune.

“As of today, that’s not on the radar at all,” he says when asked about the potential for a further merger. “Harmony is the tower business, but it's a different part of the tower business. Building towers requires different talents, processes, and skill sets.

“The duration of building a tower in the US, with all the zoning requirements, is different than us buying assets on the rooftops and things like that. So I have no indications from Palistar that combining us with Harmony is something that they're looking at.”

5G is far from finished

US carriers are currently in the process of expanding their 5G mobile networks, something that presents a great opportunity for the tower industry, according to Borghei.

“Only 50 percent of the LTE sites in the US have so far been upgraded for 5G, so there's a lot more activity that needs to take place to change the equipment,” he told DCD in September of last year.

He says that the rate of these deployments to the tower sites will rise as economic conditions change.

“There’s still plenty of [5G] build-out to embark on,” Borghei says. “And we continue to believe and see that as interest rates come down, the US operators will start reinvesting in their network capacity and also expansion of the coverage in the rural areas.”

A few months on, Borghei’s view has not changed, but he notes that the business will continue to evaluate industry trends closely.

“Overall, we see the trend for 5G networks continues to increase, but if something unforeseen happens, such as revenue per user starts really plateauing, or if the subscriber growth plateaus, I think you may see the carriers look to different avenues to generate revenue [and] profits,” he says.

Borghei points to T-Mobile’s recent acquisition of advertising technology media company Vistar Media as an example of a carrier pivoting to a new channel to diversify its revenue streams.

“Now maybe they have a brilliant plan to tie those locations and use them as part of small cells or whatever that they just haven't communicated to the industry, but I think it's pretty interesting that they are trying to find a different way to generate revenue,” he says.

“So we will watch those trends and continue to monitor the growth and stress on the 5G networks and how the carriers are prioritizing allocated capital to it.”

Spectrum challenges

While 5G presents an opportunity for firms like Symphony Towers, challenges around spectrum that have stifled growth in the last two years remain.

Since March 2023, the Federal Communications Commission (FCC) has been unable to issue carriers spectrum after the US Senate allowed its authority to hold spectrum auctions to lapse for the first time ever. This means carriers have been unable to deploy additional spectrum.

The change of government could move things along, says Borghei, but he warns that it will take time for this to happen.

President Donald Trump appointed Brendan Carr as chair of the FCC shortly after his election win last year. Carr, who’s been a commissioner since 2017, began the role in January.

At the time of writing, the FCC is still unable to carry out spectrum auctions, meaning carriers remain in limbo.

“It’s going to take some time to get new spectrum made available, because let's assume [Carr] gets the authority reinstated to auction spectrum, the teams have to go and really choose which part of the spectrum they want to auction first,” Borghei says.

“Normally in our industry, when a spectrum auction happens, it would take a year before the carriers would deploy it. They have to get it tested in their environments to see how it really propagates.”

He explains that this impacts carriers and original equipment manufacturers when it comes to rolling out their technology. Resolving the spectrum debacle would help carriers to “stabilize their planning,” he notes.

That said, Borghei is excited for when the spectrum situation is sorted out, in particular suggesting that private network deployments could flourish.

“There's some privately held spectrum that companies bought in the previous few rounds of auctions, and haven't really deployed them, and maybe are willing to transact,” he says. “So I think you're going to see some private spectrum deals take place regionally, within the US, where the carriers need certain bandwidth to help them optimize their network.”

Pumped up

For now, Symphony has no plans to venture into data center assets, says Borghei, though he doesn’t rule out an international expansion if the opportunity is right.

Borghei said last year that Palistar has allocated Symphony $1.2 billion to invest in the telecom infrastructure space, which he says is a “massive market.”

“When you look at the number of sites deployed in the US, you're talking about a few hundred thousand assets available,” says Borghei.

This hasn’t changed post-merger, he adds: “I still have plenty of opportunity to grow this business, and Palistar has been encouraging me to do so as well.”

If anything, Borghei says the merger has only made him - and Symphony Towers Infrastructure - more driven to succeed.

“I told our team here when we announced the merger that if this doesn't get your blood pumping, you might be in the wrong place,” he says. “These types of opportunities are rare.” 

Grundfos Data Center Solutions

Keep your cool

Efficient water solutions for effective data flow

Meet your efficiency and redundancy goals

Smart pumps offering up to IE5 efficiency, redundant solutions that meet up to 2N+1 redundancy – whether you’re planning a Tier 1 or Tier 4 data center, you can rely on Grundfos to keep your data center servers cool with a 75-year history of innovation and sustainability at the core of our corporate strategy.

Your benefits:

• High-efficiency cooling solutions saving water and energy

• Redundancy meeting up to Tier 4 requirements

• End-to-end partnership process

Cogent’s colo journey: Dave Schaeffer on the company’s new data center ambitions

Turning Sprint’s wireline real estate into a new data center portfolio

Before the mobile era, telecoms were the kings of terrestrial infrastructure. Copper and early fiber networks crisscrossed the US, with thousands of exchange buildings dotted across the country. While many operators long ago sold their data centers, these legacy buildings offer traditional carriers a way back into the data center game.

After acquiring the former Sprint fiber business from T-Mobile, Cogent has been on a tear converting dozens of old switching sites into colo and wholesale data center space. Ziply Fiber, meanwhile, has been hard at work upgrading its copper network to fiber, and aims to use all the free space and power in its Central Offices for colo services.

Both companies see opportunity. AI is causing increased demand for immediate capacity – especially at the Edge as the industry moves towards inferencing – but also pushing smaller local customers out of larger facilities. Can telecoms buildings dating back as far as the 1960s be made relevant again as data centers in the AI era of the 2020s?

Most know Cogent Communications as an Internet service provider, delivering fiber services via some 100,000 miles of intercity fiber and nearly 200,000 miles of metro fiber across the US and beyond. But the company has quietly built up a significant portfolio of data centers, and has been expanding it further after a recent acquisition.

Cogent’s history is founded on acquisitions. CEO Dave Schaeffer founded it in 1999 at the peak of the Dotcom bubble, and the company acquired more than a dozen businesses on the cheap as the bubble burst. The list of companies bought includes NetRail, Allied Riser, FiberCity Networks, PSINet, Global Access, and more. A 2007 report suggested Cogent acquired some $14 billion worth of distressed assets for just $60 million over that time.

The company’s latest acquisition – legacy Sprint wireline assets from T-Mobile – has seen the company take over an asset portfolio ready-made to capitalize on the need for space and power amid an AI boom. And it acquired it all for only $1.

I’d buy that for a dollar

In September 2022, Cogent announced it was buying the Sprint Corporation wireline business (formerly known as Sprint Global Markets Group, or Sprint GMG) from T-Mobile for just a single dollar.

Burbank

Even better: as part of the agreement, T-Mobile was set to pay Cogent $700 million over the next four-and-a-half years for Cogent's IP transit services.

The deal largely comprised the legacy Sprint US long-haul fiber network, acquired by T-Mobile as part of the two companies’ $26bn merger in 2020. The fiber assets were made up of around 19,000 long-haul route miles, 1,300 metro route miles, and thousands of miles of leased dark fiber. The business generated around $560 million in revenue in 2021 from some 1,400 clients.

More interestingly, for DCD at least, was the footprint of building assets that came with the fiber. The deal included some 482 technical spaces and switch sites owned fee simple across the US, with the largest 45 set to be converted into colocation data centers.

During the 1980s and 1990s, Sprint had built a fiber optic network that terminated in tandem switch sites and was designed to allow connectivity to black fibers. According to Schaeffer, the network carried exclusively voice traffic until the late 1990s, then carried some proprietary data and a small amount of Internet traffic. By the time of the acquisition, the network was “virtually empty,” with almost no traffic on it.

These were assets, according to the Cogent CEO, generating no revenue, connecting only to the Sprint backbone and to ILECs in their territory for TDM interfaces. According to Schaeffer, this footprint was built at a cost of some $20 billion.

“The facilities are typically on about five-to-six acres in an industrial neighborhood, typically 15 to 20 miles from the central business districts,” Schaeffer tells DCD. “Built in the mid-1980s, they are typically poured concrete block construction, about two stories tall.”

The facilities are a mix - some with raised floor and underfloor cabling; some are on tile floors with overhead cable ladders.

“We then went through that inventory of buildings, and identified 48 of those sites as suitable for data center conversion,” he adds. “Built to house telephone switches, they were only connected previously to the Sprint backbone. They had no metro connectivity to the markets in which they were located.”

“With 1.8 million square feet of data center space and 180MW of power, we're definitely in the top ten data center operators in the world”

The footprint spans the US, including sites in Atlanta, Georgia; Baltimore, Maryland; St. Paul, Minnesota; Akron, Ohio; Wyoming; Springfield, Massachusetts; Nashville, Tennessee; Tacoma, Washington; Phoenix, Arizona; Merchantville-Pennsauken, New Jersey; Buffalo; several locations in Texas; and several locations in Northern and Southern California, including Burbank.

“Once we identified those 48 facilities, we began efforts to remove old telephone equipment from those facilities, with a total of approximately 22,500 bays of equipment that needed to be de-installed and removed,” Schaeffer tells us. “We also connected those locations to major carrier aggregation points in the markets in which they were located. We then have been working on general maintenance projects; updating fire suppression, security, UPS and battery systems, generator testing, and general cosmetic maintenance.”

A new footprint of legacy real estate

Cogent has been relatively quiet on announcements about this endeavor. The data center assets acquired from Sprint were mentioned in investor presentations at the time the original deal was made, but largely left out of the press releases. In the years since, the only time the project is mentioned is during quarterly earnings calls.

At the time of the acquisition, Cogent operated some 53 data centers, totaling 600,000 sq ft and around 77MW of capacity; already a fairly sizeable footprint in the retail colo space, if one it didn’t talk about often.

“It's been a relatively small part of our total business. Only two of the (pre-Sprint) facilities were outright owned. So now we have a much bigger inventory of owned facilities,” Schaeffer says.

Schaeffer has previously told investors that Cogent has traditionally made around $20 million a year from its colocation business; the company originally thought the Sprint sites could add another $15-$20 million or more annually, a figure since revised up to $30-40m. The company has previously said it aimed to invest around $50m in the program.

Phoenix Data Center

Dave Schaeffer, Cogent

Once fully fitted out, the new portfolio will double the company’s footprint by the number of facilities, and triple that footprint by space – adding more than one million sq ft and some 160MW of capacity. With this project, Cogent has quietly become a sizable player in the data center space.

“Now, with 1.8 million square feet of data center space, and 180MW of power, we're definitely in the top ten data center operators in the world.”

The number has since been revised up to 159 sites, offering 197MW across some 1.9 million sq ft.

The conversion effort has been impressive. There were some 22,500 racks to clear out from the facilities,

“We didn't expect initially to have this wholesale component to the repurposing of these switch sites. But the acute shortage of available power changed our thinking”

most of them filled with legacy telecoms equipment that had been dead for a decade. In its place would be a single consolidated cage for Cogent’s network equipment.

“These were not built as data centers, they were built as telephone central offices,” Schaeffer has previously said.

“Many of them are quite large, but we had to remove telephone equipment and we had to condition those spaces to turn them into marketable data centers.”

“The equipment that we removed was primarily telephone switches; Lucent, Alcatel switches, boxes, telephone frames, and some older transmission technology,” he notes to DCD. “Typically, these were lined up in rows – 22,500 cabinets, each roughly three feet wide – about 12 miles of cabinets, with literally tens of thousands of miles of cabling between the cabinets and in the ceilings. We had to disconnect the power, pull all the cables out, and then unbolt and remove the equipment.”

The time needed to convert a site will vary, ranging from as little as six months up to 18 months, though Schaeffer says most take close to a year. The undertaking has largely been done by staff Cogent inherited as part of the acquisition.

“When we acquired Sprint, we acquired a workforce. There were not really dedicated people to these facilities. We have repurposed people; field service people who were doing a slightly different job before have gone through the field service organization and into a specialized group focused on this particular project.”

Another major undertaking has been converting the power plants from negative 48-volt DC power to 120-volt AC power. Originally the company planned to put inverters in, but decided to instead replace the UPS systems at each site as it provides better power efficiency.

“The initial thinking, when this was only a retail play, was to re-use the existing DC plant and just install converters,” he says. “For higher efficiency and greater load, it makes more sense just to put in new assets.”

Fort Worth, Texas - Cogent

Akron Data Center

Given the legacy of the sites, Schaeffer admits they are relatively constrained on the power densities they can offer – somewhere around 100 watts a square foot – and so power is likely to be more of a limiting factor than space availability.
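That density is consistent with the portfolio-wide figures quoted earlier. A quick check, using only the numbers given in this article:

```python
# Consistency check using figures quoted in this article:
# 1.8 million sq ft of data center space and 180MW of power.

def watts_per_sq_ft(power_mw: float, area_sq_ft: float) -> float:
    return power_mw * 1_000_000 / area_sq_ft

print(f"{watts_per_sq_ft(180, 1_800_000):.0f} W per sq ft")  # 100 W per sq ft
```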

“We're comfortable if they want to use liquid, immersion cooling,” he says. “We're comfortable if they want to spread it out and use more traditional air cooling. That would really be their decision.”

“I think how each customer is going to use it is somewhat up to their business model and, in some cases, the ability to upgrade the inbound power. In many of these facilities, we do have sufficient land with appropriate zoning where we could add to the facility if we were able to sell out everything that we had.”

Schaeffer says upgrades or expansions to the sites would have to be decided on a case-by-case basis, depending on what the local utility can offer. The CEO says Cogent has not had definitive conversations with many utilities about it – and notes that some sites are likely to be capped where they are, while others could offer greater capacity.

The deal to acquire the Sprint assets closed in May 2023. By Q2 2024, the company had completed the conversion of 31 sites, rising to 43 by Q3 2024 – and had upped the total number set to be converted to 48 sites, and later to 52 core sites. The company aims to have them all converted, fully operational, and marketable by the end of Q2 2025.

Cogent and Schaeffer haven’t shared how many people and man-hours have been involved in the transformation project, but admitted to DCD that it had been a “significant” investment.

The wholesale pivot

Originally the plan was to keep some space at each site for the Cogent network and offer a limited amount of retail colocation from each facility. The rest of the space was to be kept fallow and expanded as the retail side demanded. But the thinking has now changed.

“We are typically dividing them into three spaces,” Schaeffer tells DCD. “One small amount of space will be used to house our network equipment; think of that as a POP or telco room that will typically be about 1,000 square feet, about half a megawatt of power.”

“We are then generally taking out 10,000 square feet, about a megawatt of power, and we are going to be offering colocation services on a retail basis to customers on a one- and two-rack size average sale.”

The company expects the retail business to remain about three percent of what will be a bigger combined company. But the opportunity on the wholesale side

“We had to remove 22,500 cabinets, each roughly three feet wide – about 12 miles of cabinets, with literally tens of thousands of miles of cabling”

could be even larger.

Last year Cogent decided it had an opportunity to capitalize on the frantic demand for space and power demanded by AI hardware. As well as retail colo, the company is now also marketing the sites on a wholesale basis, either as a direct sale, or a long-term lease.

“We're taking all of the remaining space and power, offering that on a wholesale basis to either other data center operators who want to acquire this space and add it to their footprint or to end users,” he explains. “We are doing that under two different potential economic models. One would be to rent excess capacity at about a million dollars per megawatt per year on a triple net basis. And the other model would be for a potential company to acquire the facility; in which case, we may or may not retain that retail space and telco room.”
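As a hedged back-of-the-envelope on that lease model: at roughly $1 million per megawatt per year, the roughly 88MW of wholesale-suitable capacity mentioned later in this piece would top out at under $90m a year, and only if every megawatt were leased. The sketch below is a ceiling, not a forecast.

```python
# Ceiling estimate, not a forecast: assumes every wholesale megawatt is leased
# at the quoted rate of roughly $1m per MW per year, triple net.

rate_per_mw_per_year = 1_000_000   # USD, from the model Schaeffer describes
wholesale_capacity_mw = 88         # MW deemed suitable for wholesale monetization

potential = rate_per_mw_per_year * wholesale_capacity_mw
print(f"${potential / 1e6:.0f}m per year if fully leased")  # ~$88m
```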

Schaeffer said the decision to pivot to making the entirety of these sites available to wholesalers wasn’t made until the end of 2023 – some eight months after the deal closed.

Merchantville-Pennsauken

“We didn't expect initially to have this wholesale component to the repurposing of these switch sites. Our thinking on this was initially that we would only put the POP in the corner and the retail colo, with the remainder of the space [to lie] fallow for future development,” he says. “But as the demand for space and power increased materially amid the rollout of large language model training for AI, there has been clearly an acute shortage of available power, which changed our thinking [about] what we originally intended to be future growth opportunities.”

At the time of our conversation in late 2024, Cogent is in the process of marketing out the capacity to potential wholesale customers. Schaeffer says the company has existing relationships with many of its target customers that may be interested in the facilities thanks to its fiber business.

“It [the reception from potential customers] has been better than we expected. We have done tours with multiple parties at multiple locations,” Schaeffer says.

At the time of our conversation, Schaeffer said there had been interest from companies, but Cogent had yet to sign any definitive binding agreements. Originally, there was more interest from parties in acquiring the sites outright; however, conversations now seem to have evolved to focus more on leasing agreements.

“There's probably not one party that will take the whole footprint. It's possible, but it's probable that there'll be different transactions, a mix of purchases and leases,” he adds.

Based on comments made during Cogent earnings calls, around 23 of the 48 total sites that the firm is converting to data centers have been deemed suitable for wholesale monetization, totaling more than 88MW.

AC/DC: FOR THOSE ABOUT TO RACK

While today’s data centers are mostly powered by AC, Central Offices and other telecoms-focused facilities were generally built to run on DC power. DC fell out of favor around the turn of the millennium, with newer facilities built to distribute AC power. That battery-backed DC plant is one of the key reasons your landline phone continued to work even when there was a power outage at your home.

Though rare, you will still see DC power in some data centers today. Equinix has previously told us the company still uses -48V rectifier plants to distribute power to classic telecom applications inside some of its facilities – though this isn’t widespread.

Chips inside servers actually generally run on DC power, but data centers receive and distribute the power as AC. This leads to a merry-go-round of AC/DC swapovers. After receiving higher-voltage AC, the power is stepped down inside a data center to a lower voltage and safely routed to the server rooms. Much of that power is sent to the Uninterruptible Power Supply (UPS) system, where the AC power is converted to DC power to maintain the batteries, and then converted back to AC and routed to the racks and the IT equipment. Inside each individual switch and server, the PSU or rectifier converts it again, back to the DC that the electronics want.

“Power from the utility is AC, so it’s converted to -48V DC for the legacy central office equipment. Therefore, the DC conversion needs to be undone at the Central Office, and as close to the utility as possible,” explains Steven Carlini, VP and chief advocate, AI and data center at Schneider Electric.

“The major limitation with COs is that they’re usually not very large and the power distributed is DC, which requires a conversion from utility power or AC-mode. Additionally, the back-up power systems used for legacy telco were traditionally batteries, so depending on the backup or runtime required, installing a generator and dealing with local noise and fuel storage regulations may also be an issue. Another important limitation is the HVAC system, specifically the cooling, which usually needs to be completely redesigned.”

“At Schneider Electric, we’ve found it’s usually better to rethink or rearchitect the entire power train from the grid, and design in the appropriate power distribution and back-up power sources.”

Changing old telecom sites from DC to AC would usually include putting in new converters and/or rectifiers.

Each conversion results in a slight loss of energy, leading to inefficiencies. As a result, some hyperscalers are exploring DC distribution as a way to remove those inefficiencies and reduce energy loss. Solar and wind farms, and many fuel cells, also push out DC power, so colocated or behind-the-meter type deployments could benefit from fewer conversions.
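To see why each hop matters, multiply the stage efficiencies through the chain described above. The efficiencies below are hypothetical round numbers picked for illustration, not measured figures from any vendor.

```python
# Hypothetical stage efficiencies chosen for illustration only.
from functools import reduce

def end_to_end(*stage_efficiencies: float) -> float:
    """Overall efficiency of a chain of power-conversion stages."""
    return reduce(lambda acc, eff: acc * eff, stage_efficiencies, 1.0)

# Grid AC -> UPS rectifier (AC->DC) -> UPS inverter (DC->AC) -> server PSU (AC->DC)
ac_chain = end_to_end(0.96, 0.96, 0.94)
# One rectification stage feeding DC distribution, then a DC-DC stage at the rack
dc_chain = end_to_end(0.97, 0.96)

print(f"AC distribution chain: {ac_chain:.1%} of input power reaches the chips")
print(f"DC distribution chain: {dc_chain:.1%}")
```

On these made-up numbers, the DC route keeps several percentage points more power, which, at data center scale, is exactly the kind of margin the current capacity crunch makes attractive.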

Similar ideas have been attempted over the years with varying success. But today’s need to make every watt count amid ongoing capacity crunches – combined with the scale at which hyperscalers operate – mean DC could be making a real comeback. 

“We're still testing the market, but I do think the availability of these assets and a limitation on power availability in the general market, means we're going to be able to monetize most if not all of them,” he tells us.

Since DCD spoke to Cogent, the company has put at least six sites up for sale and/or wholesale lease publicly. The facilities span Orlando, Florida; Fort Worth, Texas; Elkridge, Maryland; Akron, Ohio; Kansas City, Missouri; and Atlanta, Georgia. The facilities range from 38,650 sq ft up to 110,740 sq ft, offering from 5MW to 14MW. To buy, the prices range from $44.1 million up to $140m, and total more than $495m for all six.

The company has since said it has received multiple offers for the entire portfolio - and has held off deploying retail space in some of the wholesale locations in case a wholesaler demands the entire space.

A steal, or perfectly rational?

Telecoms providers have long been known to be OK with selling off assets - especially their data centers.

But, even by telco standards, paying a company hundreds of millions of dollars to take a potentially lucrative data center portfolio seems like a missed opportunity.

“I think it was perfectly rational on their part. T-Mobile is a pure-play wireless carrier,” Schaeffer argues. “They owned the Sprint Wireline network as an afterthought that came along with buying the wireless business.”

“They acquired a small, declining enterprise service business, where 93 percent of the revenues did not utilize the Sprint network, and that business was burning a million dollars a day in cash. It was declining. Cash burn was getting worse, not better, and it was a cash drain to T-Mobile.

“They had acquired a huge capital-intensive network that was constructed at a cost of $20.5 billion between 1982 and 1991 that was essentially sitting fallow – whether it was the data centers or the fiber. And since it was not strategic to their business, they were willing to divest that.”

“We got the network assets for $1 but were paid $700m to take the operating business, which was a cash drain to T-Mobile. From our perspective, we are taking on a global enterprise business that we’re in the process of stabilizing but will probably not end up ever being a great business.”

Cogent is also decommissioning some legacy Cogent-leased data center facilities that are redundant with its fee simple-owned Sprint facilities.

Schaeffer tells DCD that up to a dozen existing Cogent sites may be exited and customers migrated into the repurposed Sprint sites in the same markets.

The CEO has previously told investors there was almost 300,000 square feet of leased technical space that Cogent will be exiting, which will save the company $180m in the US and $25 million internationally.

A former Cogent facility in Herndon, Virginia, was recently put up for sale, but it's unclear when the company exited the site.

Separately, a number of vacant or soon-to-be vacant former Sprint data centers occupied by T-Mobile have come onto the market over the last couple of years. Small facilities in Texas, Florida, Iowa, and Maryland have been listed for sale - many listings noted T-Mobile was exiting the sites due to already having data centers in the area.

As well as the core 48 or so sites that are being repurposed, there are still some 440-odd other technical spaces and switch sites that the company inherited from the acquisition.

“Many of them are too remote, too small, with too little power to justify repurposing as a data center,” Schaeffer says. Some are needed for the network, and others, he suggests, may suit being repurposed for some other non-network application.

In March 2025, Cogent announced it had added a further 55 Sprint sites to its retail portfolio as Edge sites. Each typically supports 40 racks and 350kW of power; together, they total 20MW across 108,800 sq ft.

When asked if Cogent might spin out a separate data center business, Schaeffer says he is open to “whatever maximizes the value.”

“It's hard to know what the future is going to bring,” he says. “Until we get tenants in these facilities, I don't think a spin-out makes sense.

“It may make sense to effectively sell it as a business if someone's going to pay us where we put it in our enterprise valuation.” 

Kansas City, Missouri - Cogent

Copper to colo: Ziply’s plan to repurpose its Central Offices

The data center opportunity in telco infrastructure from the 1960s

Though we live in the age of silicon chips and fiber optics, copper ran the digital world for decades. Thousands of miles of copper and thousands of exchanges dot the US, soon to be retired for more modern technologies.

The telecoms companies of the world may be retiring their copper networks in favor of fiber, but the real estate footprint of yesteryear’s networks still has something to offer. US telecoms firm Ziply is launching a colocation business based on a footprint of aged telecoms assets the company inherited.

“We're in hardened traditional telco fortresses. Our buildings, more than the typical data center, are structured around Cold War resilience,”

>>John van Oppen, Ziply

The company’s new data center network, totaling more than 200 facilities across Washington, Oregon, Idaho, and Montana, is based on the firm’s footprint of copper network Central Offices that, in some cases, date back to the swinging sixties.

A local Internet service provider (ISP), Ziply Fiber is dedicated to bringing fiber Internet to Washington, Oregon, Idaho, and Montana. A subsidiary of private investment company WaveDivision Capital, the company was formed by former Wave Broadband executive Steven Weed in May of 2020 when it acquired the Northwest operations of Frontier Communications.

Ziply’s history meant it was well-placed from an infrastructure point of view. As the successor to General Telephone Company, it inherited a lot of the Central Offices from the carrier’s days as the incumbent local exchange carrier (ILEC). ILECs were local telephone companies that held the regional monopoly on landline service before the market was opened to competitive local exchange carriers (CLECs) in 1996.

Central Offices, also known as telephone exchanges, were long the key switching points for the old telephone networks. Telecom companies across the US had thousands of these sites, which vary in size depending on the area they were built to serve and have become increasingly underutilized as carriers switch to fiber networks.

But these purpose-built facilities - constructed in phases from the 1950s through to the 1990s and the days of DSL Internet - represent a largely untapped resource for telecoms companies, offering carriers a chance to create a new fleet of data centers as the switch from copper networks to fiber continues.

Ziply’s facilities are largely centered around Seattle, Portland, Kennewick, Boise, and Spokane, but stretch to locations across Idaho, Oregon, and Washington.

“We have buildings that go back into the 1960s on a pretty regular basis, and we have ones that were built in newer periods up to the mid-1990s and the dial-up modem revolution,” says John van Oppen, VP network, Ziply Fiber. “In that period, a whole bunch more sites

“We have buildings that go back into the 60s up to the mid-90s and the dial-up modem revolution. DSL lost to cable, and the space was mostly sitting underutilized since then, leaving us with a big opportunity,”
>>John van Oppen

were built or expanded, because they had to handle more phone lines after everybody went from one phone line at home to two or three, and so that was a massive expansion in infrastructure. Then DSL lost to cable, and the space has mostly been sitting underutilized since then. That left us with a big opportunity once we started decommissioning equipment.”

From GTE to Ziply

Ziply’s network has its roots in the General Telephone Company of the Northwest, Inc., which was founded in 1964. Later known as GTE Northwest, the company was acquired by Bell Atlantic and became Verizon Northwest. After being acquired by Frontier in 2010, it became Frontier Northwest.

Following its acquisition and rebranding to Ziply, the company said it would invest $500 million in improving the network and upgrading from copper to fiber, before raising another $450m for further network expansion. The repositioning of these Central Office sites for data center use came as a natural follow-on from the network upgrades.

“We knew what was in there, as far as old telco gear, but our vision was to build out a modern fiber network, so we understood that so much of that equipment would be decommissioned,” says Chris Gellos, GM/VP commercial, Ziply Fiber. “It's really been a neat project for us to follow on top of the modernization of the network and our central offices to operate our network, to now jump into this more customer-facing project and providing colocation.”

The facilities were previously filled with large amounts of equipment for the company’s old copper network. A lot of that was ripped out and replaced as part of the network upgrade program, leaving plenty of newly created space for the company’s colo business.

One site, van Oppen notes, once peaked at 98,000 phone lines. Today that building has capacity for 75 racks and some 500kW of unused power now that the switch has been decommissioned.

The company hasn’t shared an official number for the total capacity of the sites, but Ziply tells DCD it is “many, many megawatts,” with the largest site totaling just under 100,000 sq ft (9,290 sqm).

“There's four or five sites that have more than 1MW available; probably 30 or 40 that have somewhere between 500kW and 1MW available; and there's a whole bunch – hundreds – that have around 50-300kW available,” says van Oppen.
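Taking van Oppen’s breakdown at face value, a rough total can be sketched. The per-tier averages and the “hundreds” count below are assumptions, so treat the output as an order-of-magnitude figure rather than a Ziply number.

```python
# Order-of-magnitude estimate from van Oppen's breakdown; averages are assumptions.

site_tiers = [
    # (assumed number of sites, assumed average available kW per site)
    (5, 1_500),    # "four or five" sites with more than 1MW
    (35, 750),     # "30 or 40" sites with 500kW-1MW
    (200, 175),    # "hundreds" of sites with roughly 50-300kW
]

total_kw = sum(count * avg_kw for count, avg_kw in site_tiers)
print(f"Roughly {total_kw / 1_000:.0f} MW of available capacity")  # ~69 MW
```

That at least squares with the company’s own “many, many megawatts” characterization.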

The company hasn’t officially priced the assets, but told DCD the replacement value would be “several billion dollars.”

“This is a uniquely ILEC footprint,” says van Oppen. “You wouldn't have built them for a newer world, unless you were building colos.”

John van Oppen, Ziply

The company spent months ripping out phone switches and the ladder racks, taking the spaces back to largely empty rooms with 18-20ft (5.4-6m) ceilings that can be configured to the customers’ needs.

“After pulling out the gear, we take a look at what space is available in the building, and then we figure out if we need to rip out ladder racks, ducts, or whatever was configured the way we decided wasn't acceptable for colo, and shuffle it around,” says van Oppen. “Then you install UPSs, you install new air handlers.

“Our planning is driven by what we think we're going to sell; how many watts per square foot, how many watts per rack. We have this great shell that has power, diesel storage, and network, and then you reconfigure the inside of it to fit a different type of layout than it was previously used for.”

Each site generally has an A-B electrical feed (or A-B-C in some cases), along with several days' worth of fuel storage. Like Cogent’s project, the sites largely run on -48V DC power. The company is undergoing some retrofitting work; some carrier customers want DC power, but most want AC power.

“Our big retrofit challenge has really been to make AC power available in these sites,” says van Oppen. “And we've been doing that in a combination of a preemptive basis and an ad hoc basis.”

That the sites are all already permitted and stripped back means that there is a lot of immediately available capacity.

Though some of these locations

“We acquired the old phone company assets, but we are not the old phone company,” >>Chris Gellos

date back to the days of Little Richard or the Beatles, there’s scope to add new buildings in some cases. While the company will initially focus on using the already authorized but underutilized load it has been allocated, the company suggests there is also opportunity in increasing capacity to existing sites, which avoids some of the delays of new grid connections.

“We own something like 400 buildings, and we own the land, and it’s fascinating to me to see what's possible,” says van Oppen. He says that one CO was in a position where there was enough land to double the size of the facilities, without eating into the number of parking spaces.

“We've looked at a couple of sites and realized that we could get to 5-10MW without too much trouble and without spending a ton of money with utilities,” he adds. “We also have a number of sites that have megawatts of unused electrical service today.”

Cold War thinking for 2020s resilience

Today the company has some 30 people on the facilities team. Outside contractors do a lot of the building out of space, but the maintenance - including of HVAC and generators - is done in-house.

“If there’s a refrigerant leak, it's our guys that go fix it,” says van Oppen. “If a generator doesn't start somewhere, our diesel mechanics take it personally, checking to see if it was their part that didn't work. And that kind of culture is super important to create for reliability.”

The way the COs are set up also helps ensure reliability. In November 2022, a winter storm hit Snohomish County, just north of Seattle, leaving nearly 200,000 without power. Power cuts in the area lasted five or six days in some cases.

“Our buildings, more than the typical data center, are actually structured around almost Cold War resilience,” says van Oppen. “We're in hardened traditional telco fortresses; all of our sites have at least five days of fuel. We joke that we will be the last one standing in these markets.

“We had nine or 10 sites on generators for that whole period,” he adds, referring to the 2022 storms. “By day five, we said we should probably order some diesel to this site.”

While Ziply’s sites were all up and running, the driver of the refueling truck could only report in from the COs, as their cell phone carrier was backhauling via a different provider that had gone offline.

“Our biggest failure in that scenario was our third-party cell carrier, who since switched a number of those towers to us,” van Oppen beams.

In a post to LinkedIn in early December 2024 following bad weather, van Oppen noted that more than 20 Ziply sites in the Seattle region were running on backup power for more than 36 hours without issue.

Ziply - Hillsboro, Oregon III

Chris Gellos, Ziply

Colo and carrier thinking

The company launched with around eight of the 200 facilities listed as “move-in ready” – with racks, HVAC, and UPS installed and ready – and had some customers in place at launch. Word that Ziply was planning to offer colo had gotten out, and some interested parties reached out before it officially launched to place hardware in the old COs.

“It had an organic start, even though we had already planned to go in this direction,” says Gellos. “They came to us and we met their needs.”

Early customers included local businesses – so-called server huggers who like to be close to their hardware – local governments and other carriers. Ziply believes the sites “fill a gap” for retail colo customers struggling to compete for space against AI customers.

“We've been watching AI users displace all the normal colo users, because we're in a unique position to help them solve for connectivity and space and power problems,” says van Oppen. “They have been pushed out of or priced out of the retail market as operators have gone to selling their power to a few big customers. I think there's a really big niche there that folks can use us for.”

ZIPLY GETS ACQUIRED

Around the same time as launching the colo service, Ziply was involved in two acquisitions. The company acquired the Pacific Northwest assets of Cox-owned fiber-optic provider Unite Private Networks (UPN), adding some 12,000 fiber route miles across 21 states to its network. The company confirmed to DCD the acquisition doesn’t include any data centers.

Around the same time, Ziply itself was acquired by Bell Canada in a deal expected to be worth as much as C$7 billion (US$5bn). The deal is expected to close during the second half of 2025, and Ziply will continue to operate as a separate business unit headquartered in Kirkland, Washington. It will be interesting to see how Bell Canada approaches this colocation portfolio. The telco sold the majority of its data center footprint – some 25 data centers across 13 sites – to Equinix in 2020 in a deal valued at $780 million (CA$1 billion). The deal comprised around 1.2 million sq ft (111,500 sqm) of data center space across sites in Toronto (x4), Calgary (x3), Kamloops, Montreal, Ottawa, Saint John, Vancouver, and Winnipeg.

At the time, Bell said it would retain five other data centers that are located in its network Central Offices in Calgary, Halifax, Saint John, St. John's, and Toronto. Today, the company has POPs at dozens of locations across Canada and into the US, and offers carrier colocation at ten sites.

Gellos tells us it’s “too early” to say how the acquisition will shake out on the data center side. When asked if it's possible that Ziply could follow telco tradition and carve out the repurposed CO assets into a standalone colo company, he says nothing is off the table in the long term, but such a move would be unlikely in the short term.

Van Oppen believes the “infrastructure is more valuable together.” He says: “More importantly, there's a symbiotic relationship between the conduits, the fiber, and the buildings. That's why the phone company owned them in the first place."

Whatever the long-term outcome of the mergers, Ziply’s zippy transformation from legacy copper network to fiber provider with a large regional colo footprint feels like a story that could only come from a newly established upstart, even if the roots come from legacy infrastructure.

“Fortune 500 institutions are run very differently than a growth-oriented company. The majority of the leadership that we have in this company has a growth company background,” says Gellos. “That's a very different way of thinking than just kind of steering the aircraft carrier. And so I think we have a distinct advantage in that.”

Van Oppen notes change has been difficult in an ILEC with close to 100 years of history, and was made possible by disrupting the management structure aggressively as part of the carve-out from Frontier.

He says: “I've never been impressed with most ILECs' ability to change direction. There's a lot of inertia in these big companies, and the ILECs are probably the worst in that footprint.”

“We did network upgrades at a pace and decommissioning of legacy infrastructure at a pace that would have been completely impossible in any other large telco that I'm aware of,” he adds. 

AT&T central office Kalamazoo, Michigan

One of the major early deals was a government customer that wanted a custom deployment with a lot of battery runtime, as part of requirements described by the company as “not normal.” Van Oppen notes, however, that it was “easy” to meet these demands.

“We had a big site that fit the requirements where they wanted it, and had a ton of empty space from all of our cleanup efforts,” he says. “We had so much empty space, my team was using it for storage and had bought a forklift for the site.”

That deal was something of a surprise for the company.

“One thing that we learned in this is that we've had some of the most interesting opportunities on sites we didn't expect,” van Oppen adds. “That site wasn't on the list originally, because we thought nobody was going to want colo in that town. And it turns out they do.”

“This is a uniquely ILEC footprint, you wouldn't have built them for a newer world,”
>>John van Oppen

Another named customer is Scatter Creek InfoNet, a family-owned broadband provider to Washington’s Tenino and Kalama areas. The firm is now leasing out of Ziply’s Somerset West Central Office in Hillsboro, Oregon.

To date, all of the deployments have been air-cooled, with the company generally supporting averages of 5-7kW per rack but seeing densities up to 18kW in some cases. Van Oppen says Ziply has received some interest in liquid cooling, however – some of its sites already run chilled water loops, which could lend themselves to liquid-cooled systems.

Van Oppen notes that Ziply itself is a customer of a lot of colocation providers on the network side, and while Ziply “works well” with them, he sees the pain points of being a customer and is trying to do things differently. At the same time, the company wants to “behave more like a colo operator” than the incumbent carrier.

“We ask ourselves, how can we be a friendlier vendor than our own customer experience with the large carrier facilities?” he says. “Most data centers won't do deals past five years. And if you're a network service provider, you really want long-term assurance. Because we own the buildings, we can do longer-term deals on those. And that's what's really important to making the sites into an anchor.”

The company's fiber network connects to data centers from major providers in its service region, including Sabey, Digital Realty, Flexential, H5, NTT, Vantage, Cologix, TierPoint, Evoque, and others. The connectivity options for its CO sites are a major selling point for Ziply – with the company hosting multiple carriers in many of the facilities.

“Some carriers have stripped Central Offices out of their assets. We've made a conscious decision not to because we think they're most valuable when combined with the conduit assets,” says van Oppen.

“When you acquire ILEC properties, you're acquiring a very unique footprint that is distributed widely throughout the areas that it serves,” says Gellos.

“We're in hardened traditional telco fortresses. Our buildings, more than the typical data center, are structured around Cold War resilience,”
>>John van Oppen

“We're not a normal colo provider in the traditional sense, because a colo provider wouldn't be able to sell you conduit down the street. We can pull you a dedicated cable the entire way through existing conduits between two of our buildings. If you wanted your own private ring, you could have that. I'm not aware of any colos that are able to do that, beyond their own campus. And that's the difference between being the carrier that also owns the conduits on the street and being a colo provider.

“And that kind of feature set is the kind of stuff enterprise customers are asking for these days; how do I get private, fully protected, redundant infrastructure?”

The company has also donated fiber and space to the local IX, NWAX, a nonprofit Internet exchange in Portland, Oregon, that has placed a switch in one of Ziply’s sites.

“We acquired the old phone company assets, but we are not the old phone company, by any means,” adds Gellos. 

A NEW OPPORTUNITY FOR CARRIERS

A 2015 report from FierceWireless suggests there could be as many as 30,000 COs across North America.

While many telcos long ago divested their data center assets, their portfolios of Central Offices offer a chance to re-enter the data center game without a major wave of new investment. These kinds of sites can be an interesting option for local businesses and governments, local carriers, Edge service providers, and retail customers who might be struggling to find capacity at larger colo facilities amid an AI rush. Telecoms consultancy firm STL Partners predicts there will be as many as 1,800 “network Edge” data centers globally by 2028 – up from just over 800 today.

“Telecommunications operators have been presented with a unique opportunity to address the growing demand for data center infrastructure by repurposing old CO facilities as Edge colocation centers,” says Steven Carlini, VP and chief advocate, AI and data center, at Schneider Electric. “With increasing demand for low-latency processing for AI inferencing, these COs are an ideal solution to house Edge computing infrastructure, given that they are closely located to densely populated environments.”

Carlini continues: “As the COs transformed from copper to fiber, and analog to digital, much of this work has already been done. At the same time, the footprint of the physical infrastructure equipment also shrank. As such, many CO operators saw this as an opportunity to lease to different colos and hyperscalers, and break away from their traditional business models.”

Ziply isn’t the only firm looking to reuse its old Central Offices. Frontier – which is being acquired by Verizon – previously announced a deal to let AT&T deploy equipment for its 5G network in its former copper central offices. Lumen has also reportedly pivoted some of its old Central Offices to data center colocation. Frontier also offers what it calls ‘Edge Colocation’ at more than 2,500 locations, including its Central Offices.

While Lumen might be converting some of its CO estate to colocation, the company sold its ILEC business (operating under the CenturyLink brand) in 20 Midwest and Southeast states to Apollo Global for $7.5 billion back in 2021. The acquired business was later renamed Brightspeed. Brightspeed has since put multiple data centers on the market in a sale-leaseback offering, and the company today offers colocation services from its facilities.

January 2025 saw Verizon announcing a new offering called AI Connect to serve AI workloads. While vague, the service combines its fiber, Edge, and data center assets. In its quarterly earnings call the same month, Kyle Malady, CEO of Verizon Business, noted that it has available land and power to potentially expand its data center footprint to better serve AI infrastructure.

“As we look across our assets, take inventory, and compare against other players in the market, we believe that we are in a leadership position when it comes to usable power and space,” Malady said. “We have facilities across the United States that either have spare power, space, and cooling or can be retrofitted. As we sit here today, we have 2-10MW+ of usable power across many of our sites.

“As we move through our network transformation work, we will continue to free up more resources that could be made available for AI Connect,” he continued. “In addition, we have between 100 and 200 acres of undeveloped land, some currently zoned for data center build and much of it in prime data center-friendly areas.”

Verizon previously sold off a footprint of around 24 data centers across 15 markets to Equinix back in 2016 for $3.6bn. Many of the facilities were those Verizon inherited when it bought Terremark in 2011. Verizon still retains a number of core data centers across the US, however, and is also in the process of acquiring Frontier.

AT&T, however, has taken a different approach at some of its sites. In 2017 the telco, which previously sold its data center business to Brookfield, announced plans to build a network of Edge data centers at central offices, cell towers, and telephone exchanges across the US.

However, in January 2025 the company announced an $850 million sale-leaseback with real estate firm Reign Capital for 74 underutilized Central Offices. The sites total 13 million sq ft, with the telco leasing back “only space needed” for its network. The company noted the deal comprised a “small portion” of its CO portfolio, and followed a similar deal with Reign in 2021 for 13 other properties. AT&T plans to retire its copper network by 2029 and is amid a major fiber build-out across the US.

In the UK, the country’s copper network telephone exchanges all belong to BT and its subsidiary Openreach. BT was the state-owned telecoms incumbent and part of the government-owned Post Office for decades; its copper and fiber assets are now managed by Openreach. The companies are also in the midst of a large fiber rollout and planned retirement of copper networks.

Openreach currently operates some 5,600 BT-owned exchanges; most of those are for copper and other legacy services, with the company operating its fiber service from around 1,000 newer exchanges, known as Openreach Handover Points (OHPs). In 2023, Openreach said it aims to close 103 legacy exchanges by December 2030, starting with a trial of five sites. The company is in discussions with communications providers including BT, Sky, Vodafone, and others over closing the other 4,600 exchanges the company has across the UK in the 2030s.

Not every telco will succeed in such efforts. STL said 2024 was the first time it had noticed companies winding down their network Edge propositions, having already built capacity. Cox Edge, the Edge compute unit of cable company Cox Communications, was folded last year, with the 30 sites it had opened quietly rolled into Cox’s private cellular network division.

While fiber networks don’t have the same footprint of Central Offices, they do often feature a portfolio of smaller in-line amplifier (ILA) shelters – small sheds every 60 miles or so that repeat fiber signals – that could present an opportunity. February 2025 saw fiber firm Lightpath announce a new Edge data center unit called LightCube. Details are sparse, but the company has said it will deploy modular facilities at ILA sites along its New York-Ashburn route.

ILA shelters have remained largely unchanged for decades, but Meta has launched a program to make it quicker and easier to build these sites and more efficient to run them.

The company aims to create liquid-cooled sites able to host up to 24 racks supporting densities higher than the traditional 800W and closer to 4kW. With design updates that the social media firm said it aims to share, a new footprint of Edge sites could soon be available to host compute. 

The IT wingspan of JPMorgan Chase & Co.

CIO of infrastructure platforms Darrin Alves tells DCD about JPMC’s IT infrastructure strategy

JPMorgan Chase & Co. (JPMC) is a figurehead of the banking industry. As the largest bank in the US and with a major international presence, its very name conjures up images of Wall Street.

With origins dating as far back as 1799, the current iteration of JPMC was founded in 2000 through the merger of New York City banks J.P. Morgan & Co. and Chase Manhattan Corporation. Needless to say, the centuries have led to a vast sprawl of JPMC, and the same can be said of its IT infrastructure.

Darrin Alves was appointed CIO of infrastructure platforms at JPMC in April 2023, after a lengthy career in similar infrastructure and technology operations roles at the likes of eBay, Skype, Walmart, and most recently Amazon.

He says JPMC has always approached technology head-on and with a voracious appetite, always seeking the next edge in its operations when it comes to digital infrastructure. This sees the company using a healthy mix of enterprise data centers and cloud computing technologies.

"If a cloud provider has an outage, the financial services industry cannot be impacted by that,”
>>Darrin Alves

Georgia Butler, Senior Reporter, Cloud & Hybrid

The company’s cloud stance has been well publicized. In April 2024, CEO Jamie Dimon wrote in a letter to investors that the company was aiming to get 75 percent of its data, and 70 percent of its applications, into the cloud that year.

“We’ve actually exceeded that goal, as far as getting data into the public cloud,” Alves tells DCD, with a hint of pride. “We are very nuanced, though. It wasn’t just an arbitrary goal, we had specific use cases, it’s not just an initiative where we are trying to get everything into the cloud.”

Alves added that the company “strongly believes” in a multi-cloud and hybrid strategy. While he declined to comment on the cloud companies it works with, Dimon has previously stated that JPMC is a customer of the notorious Big Three - Amazon’s AWS, Microsoft Azure, and Google Cloud.

The reason behind that multi-cloud strategy is relatively simple: resiliency.

“We take a hybrid, multi-cloud approach because we are very concerned about concentration risk,” Alves says. “If a cloud provider had an outage, the financial services industry cannot be impacted by that - it is critical infrastructure, and we don’t want to be beholden or at risk with any single vendor.”

JPMC turns to the cloud for a variety of reasons. Alves offers the example of generative AI - noting that cloud partnerships have enabled the bank to “experiment with capabilities that we do not yet have on-premises."

“We use the cloud where there are leading edge capabilities, but also we will use it for data storage where we don’t want to tie up our data centers with that.”

On the data center side, JPMC has a pretty hefty estate comprising 32 data centers across the world.

Those data centers are a combination of JPMC-owned facilities, and colocation, though Alves notes they own a “large majority,” and almost all of their data centers in the US and Europe. “We are reducing and concentrating that down to 17 over the next few years,” he tells DCD, adding that this is a “soft goal” and doesn't have a fixed deadline.

JPMC declined to comment on whether it would then sell those exited facilities.

“Among the 32 are state-of-the-art hyperscale facilities, and we keep a large portfolio mainly for regulatory reasons,” Alves explains. “We are in a large number of countries with active regulators that have sovereignty requirements and things of that nature, so we have to abide by the rules and regulations of those countries.”

Darrin Alves
“For things we don’t think we should host in the cloud, we want them to be in our most efficient data centers,”
>>Darrin Alves

Details about the data centers themselves are sparse, though Alves shares that the hyperscale facilities are modularly designed and built with high efficiency and low PUE ratings. “For the things that we don’t think we should host in the cloud, we want them to be in our most efficient data centers,” he says. JPMC doesn’t share an exact number for its PUEs, but Alves says that the data centers are only a few years old and built using “best practice.”
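For readers less familiar with the metric, power usage effectiveness (PUE) is simply the ratio of total facility energy to the energy delivered to IT equipment, so a value approaching 1.0 means very little overhead is lost to cooling, power conversion, and lighting. A quick worked example (the figures here are illustrative, not JPMC's):

```latex
\mathrm{PUE} \;=\; \frac{\text{total facility energy}}{\text{IT equipment energy}},
\qquad \text{e.g.}\quad \frac{1.3\,\mathrm{MW}}{1.0\,\mathrm{MW}} \;=\; 1.3
```

Recent industry surveys put the global average at roughly 1.5 to 1.6, which is the benchmark newer hyperscale-style builds generally aim to beat.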

The world of enterprise data centers is shrinking, but on-premise servers remain common in many industries, particularly those that are highly regulated, such as financial services. For JPMC, this is a choice that makes sense, Alves says.

“We’re at a size and scale where we can do things pretty effectively as far as purchasing, and then we can also provide levels of security that you can’t guarantee in the public cloud or sometimes even a colocation,” he explains. “It’s about finding the right hosting model.”

When it comes to making that choice, JPMC follows a set decision pipeline. Decisions are first based on legality and compliance, followed by security standards enabling JPMC to “dictate who touches it [the data] or where it could be - and that could apply to a cloud provider, or a municipality or country,” Alves says.

After that, the bank looks at customer availability and resiliency. “It’s only once we get past those three things that we look for the right architecture, and the last thing we do is optimize for cost,” Alves adds.
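A minimal sketch of how such a placement pipeline could be expressed in code. The ordering of the checks follows Alves's description, but the function, field names, and rules below are hypothetical illustrations, not JPMC's actual tooling:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    is_legal_in_cloud: bool      # legality and compliance check
    meets_security_bar: bool     # can we dictate who touches the data, and where?
    needs_high_resiliency: bool  # customer availability and resiliency
    latency_sensitive: bool      # feeds the architecture choice

def place(workload: Workload) -> str:
    """Hypothetical placement pipeline: compliance first, then security,
    then resiliency and architecture, with cost optimized last."""
    if not workload.is_legal_in_cloud:
        return "on-prem (sovereignty/regulatory)"
    if not workload.meets_security_bar:
        return "on-prem (security)"
    if workload.needs_high_resiliency:
        return "multi-cloud (avoid concentration risk)"
    if workload.latency_sensitive:
        return "owned data center (architecture fit)"
    return "cheapest compliant option (cost optimized last)"

print(place(Workload("card-processing", False, True, True, True)))
# -> on-prem (sovereignty/regulatory)
```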

In 2024, JPMC spent $17 billion on technology - although this was not limited to infrastructure and encompassed all technology spend across the company. The figure represents an increase of around $1.5bn on the year prior, and from $12bn in 2021.

Normally that number isn’t broken down to look specifically at data centers, but in 2021, CEO Dimon shared that $2bn had gone to new data centers. At the time this move was questioned by analysts, given that the company was also looking to move a lot of applications to the cloud.

Regardless of the reception, JPMC has remained committed to a hybrid approach, and its overall spend in 2025 is looking likely to be more than $17bn.

During the 2024 Q4 earnings call in January 2025, CFO Jeremy Barnum said that the increase in tech spending was mostly “business-driven as we continue to invest in new products, features, and customer platforms, as well as modernization.”

Barnum noted, however, that the company said at its latest investment day that it had reached “peak modernization spend.”

“As Jamie [Dimon] says, we're always modernizing,” Barnum told investors. “So, the fact that we've gotten to a peak and then it might come down a little bit from here still means we're going to be constantly modernizing. But, at the margin, that means that inside the tech teams, there's a little bit of capacity that gets freed up to focus on features and new product development.”

He added that the company is also finding efficiency in its hardware utilization, and software development.

JPMC’s data centers are home to a variety of hardware, including the classic mainframe.

Alves says he cannot share specifics about the work the mainframes are doing - though it is in the region of 10 trillion transactions daily - for security reasons. “What I will say is that a lot of the systems that were using the mainframe have been modernized and they aren’t using it in the same way they have been, but the most critical systems are still using it.” In early 2022, Dimon told analysts that the company’s credit card business was run on mainframes.

Beyond that, the company is investing in GPUs, and also uses quantum computing.

On the GPU front, JPMC’s upcoming data center modules are designed for liquid cooling which will enable them to increase density in their data halls and give the company direct access to these technologies. “We don’t want to have to fight with everybody for cloud space, so we have a portfolio that allows us to do bursts and offer our own service.”

The company does its own training, though this is “small-scale fine-tuning and training,” so JPMC doesn’t need to invest on the same massive scale that we are seeing the cloud companies doing. Interestingly, though, the cloud providers’ capex and investment in this area does have a knock-on effect.

Alves explains: “We focus more of our AI workloads on inference, and with that, we have to be very conscious of the supply chain, because the same things we use to build our facilities, they [the hyperscalers] are using. We keep a very close eye on lead times so we can continue to expand our facilities and adapt. If we need to make a decision two years earlier than we would have had to previously, we will.”

“What we’re seeing in lead times for some devices - chillers, generators, etc - is that it's in excess of two years,”
>> Darrin Alves

He continues: “There are different horizons. We look at the near future - so one to two years, then there's five years out, and a 10-year clip. What we’re seeing in lead times for some devices - chillers and generators, for example - is that lead times are in excess of two years, and then if you are building a facility, there is construction time too.

“We are constantly forecasting what we believe our demand will be. And more recently, that has included the use of AI and what that impact will be. And then, looking out in those five to 10-year clips to see if we need to start doing something now to match that supply chain.”
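As a back-of-the-envelope illustration of that planning arithmetic (the dates, durations, and function below are made up for the example, not JPMC figures), a two-year equipment lead time plus construction pushes the commitment point years ahead of when the capacity is actually needed:

```python
from datetime import date, timedelta

def order_by(needed_online: date, lead_time_days: int, construction_days: int,
             buffer_days: int = 90) -> date:
    """Latest date to commit to long-lead equipment (chillers, generators)
    so a facility can be online when forecast demand arrives."""
    return needed_online - timedelta(days=lead_time_days + construction_days + buffer_days)

# Hypothetical example: capacity needed mid-2028, ~2-year equipment lead time,
# ~18 months of construction, plus a small buffer.
print(order_by(date(2028, 7, 1), lead_time_days=730, construction_days=540))
# -> 2024-10-10: the decision lands years before the capacity is needed
```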

The company’s use of AI has long been public. JPMC launched its “LLM Suite,” a generative AI tool, in 2024.

“[LLM Suite] was a controlled environment where we could give them access to large language models to concern things for internal, non-public facing use cases,” he says. “We've evolved into using generative AI, and machine learning to do things without internal tooling.

“I use it in the infrastructure area. We have a large number of these cases across the company, but I alone have probably 20 or 30 where we're using it to augment anomaly detection and things like that, to help us run and optimize the infrastructure,” Alves says, noting that they are currently looking into ways they can use agentic workloads to integrate into more core parts of the business.

On the quantum computing side, JPMC is still at the R&D stage. The company is a “strategic partner” of Quantinuum, and took part in a $300m equity fundraise for the quantum company in early 2024. It has also been publishing research papers related to quantum computing since 2020.

Overall, Alves says his role as CIO at JPMC has been wildly rewarding, and he particularly cites the people he works with as part of that. “I think people underestimate the team. The leadership, across each business unit, is the number one business operator in that area. People don’t realize the level of success that even the leadership team one clip below Jamie [Dimon] has.

“They've been fun to work with, and on the technology side, as I mentioned, I'm working with technologies that I didn't get to work with before. The variety of technologies that we use, being part of financial services, the processes, and the regulatory requirements, are things that are new to me.

“That new muscle I've been developing has been quite interesting.”

Data Center Cooling Solutions

In mission-critical data centers, maintaining precise cooling is essential for uptime, performance and efficiency. Belimo’s Energy Valve ensures consistent differential pressure across cold plates, optimizing heat transfer and preventing thermal fluctuations that can impact server performance.

Belimo solutions provide:

• Reliable temperature regulation to prevent IT equipment failures

• Scalable solutions to support increasing server loads

• Efficient energy management to reduce waste and operational costs

Equinix's x-factor

Joint ventures are helping the colo giant expand its xScale program for the hyperscalers

The financial figures thrown around willy-nilly by companies developing AI data centers have become so enormous that they often feel meaningless.

OpenAI, SoftBank, and chums plan to drop $500 billion for Stargate, and Meta is rumored to be looking at a $200bn campus somewhere in the US. Projects with a price tag of $1bn are so commonplace that, for many, they barely merit talking about.

In comparison to Stargate, the $15 billion that colo giant Equinix and its partners have allocated for a joint venture to build hyperscale data centers in the US feels like a drop in the ocean, but in the real world it still represents a significant outlay. Krupal Raval, managing director of the company’s xScale program, which develops AI infrastructure for big clients, believes it can have a big impact.

“It’s almost embarrassing to talk about the $15bn because sometimes it feels like we’re pounding our chest and saying ‘look how much money we’re putting out in investment capital,’” he says.

“But really, it’s necessary because of how different AI is in terms of the scale and design. The product xScale is offering is quite different to anything we’ve done historically.”

Scaling up

Equinix has been building data centers under the xScale brand since 2019, when it partnered with GIC, Singapore’s sovereign wealth fund, to create a joint venture to build hyperscale data centers in key European markets.

The brand has since spread around the world, with Equinix pursuing the JV model with GIC as a key partner. Together, the pair have jointly invested more than $7.5bn in xScale-branded data centers, which have opened in countries including the UK, Japan, France, Brazil, and South Korea.

On the company’s most recent earnings call, in February 2025, Equinix CEO Adaire Fox-Martin said it had 16 xScale projects under construction around the world, which will offer customers 34,000 additional cabinets with 165MW total capacity. Total operational capacity for xScale data centers is 400MW globally, Fox-Martin added.

Conquering America has been a more recent ambition for xScale. In April 2024, the company formed a $600 million JV with PGIM Real Estate, the property investment and financing arm of Prudential Financial’s global asset management business, to fund a 28MW data center in Silicon Valley, the first phase of which came online last year. It has also purchased land for xScale data centers in Atlanta, Georgia, and Dallas, Texas.

Raval says the decision to delay its entry into the US market was a strategic one for xScale. “We’ve intentionally been everywhere but America,” he says. “When people asked Charles [Meyers, former Equinix CEO] if we were interested in the US market, his answer was always that we didn’t want to be provider 55 in the States trying to address that market. We had to see value for our shareholders and our customers, we don’t just do things to put pins on a map.”

AI has changed the picture significantly, Raval says, and with demand going through the roof he believes the time is right for Equinix to make its move. “We’re seeing our customers saying they’re willing to work with ‘Johnny startup’ if it means getting a 2025 or 2026 delivery, but that’s not sustainable or scalable,” he says. “I don’t mean to denigrate the competition, because some of it is really good, but some of it is not so great, so we want to work with our partners to provide a stable and credible option.”

Having seen many of its rivals scoop up billions of dollars in AI-related projects already, some may say Equinix is late to the party, but the company’s timing could prove to be spot on. Donald Trump’s re-election as US president has seen the White House ramp up protectionist policies and reiterate an “America first” approach, and left tech companies scrambling to invest in the US in a bid to curry favor with the new regime. This means data center space designed specifically for these firms is likely to be snapped up fast, despite fears that the overall AI infrastructure market may be slowing down.

“We had to see value for our shareholders and our customers, we don’t just do things to put pins on a map,”
>>Krupal Raval

The latest xScale JV involves Equinix, GIC and the Canada Pension Plan (CPP). It aims to raise $15 billion to purchase land for several 100MW+ data center campuses in the US, eventually adding more than 1.5GW of new capacity for Equinix’s hyperscale customers.

CPP Investments and GIC each control a 37.5 percent equity interest in the joint venture, with Equinix owning a 25 percent stake. Each party has made equity commitments, and it is expected the JV will take on debt to bring investable capital up to $15 billion.

So far, Equinix has stayed tight-lipped on the locations it is targeting for the new campuses, and despite DCD’s prompting, Raval doesn’t let the cat out of the bag. “None of our customers are saying ‘go here and we’ll give you our book of business,’” he says. “But equally we have a pretty good sense of where the needs are, so that’s where we’ll be targeting our investments.”

He continues: “Outside of the JV, we’ve acquired land for an xScale campus in Atlanta. It’s not yet in the JV but logically it would be - that will be up to the investors and partners.

"That investment is the playbook for xScale, finding sites that can support multiple hundreds of megawatts in good metros. It’s an in-fill strategy rather than, ‘let’s just build a gigawatt out in the sticks.’”

Building in flexibility

Raval says the xScale data centers will be designed with the future in mind, as AI matures and moves away from large-scale training of models into the nitty-gritty of inference and providing services for businesses.

“We’ve spent a lot of time doing design workshops with our customers so we’re on the journey together,” he says. “That gives us confidence. There’s no guarantee what we’re deciding now is the right answer, so we want to build flexibility into our products.”

This flexibility covers things like being ready for rapidly increasing rack densities and different cooling systems, Raval says. He adds: “This comes at a bit of a premium in terms of time and money, but we would rather invest to have that flexibility. Some of our peers are keen to just get stuff knocked out quickly, but we hope that by doing it this way we can offer ongoing value to our clients.”

For Raval, managing such an array of joint ventures around the world is a sizeable challenge, but one that he enjoys. “We take pride in being methodical and getting things right, and my goal is to move fast and get things right,” he says. “We have great support from our investment partners, and I have 14,000 colleagues at Equinix who are all really smart people and can offer differing opinions on how to do things. That doesn’t come for free and it can slow us down at times, but it’s a big net benefit and I’m proud of what we’re doing here.” 

Fragile data center networks put your business at risk

"WhenCrowdStrike’s security software upgrade glitch caused the largest outage in the history of IT last year, the world noticed. A hiccup of that size, crashing 8.5 million systems globally, points to a far bigger problem than indigestion.

The truth is today’s data center networks are just too fragile. Misconfigurations, software bugs, legacy infrastructure and tool sprawl – these are some of the glitches that lurk around every data center corner. While you focus on fixing issues, restoring services, and moving on – over and over again – you may be wondering why the outages keep happening and whether there’s a better strategy to cut your risk.

Yes, there is.

Let’s take a closer look at why data center networks are so fragile and the three steps you can take to future-proof your data center.

Data center management is broken

From the very first mainframes in the early 1950s, reliability has mattered. In those days, processing speeds were measured in megahertz at most, and daily output could be tracked with pen and paper. Today, data centers process multiple exabytes of data daily over 100 Gbps connections.

And expectations are only growing. Consider that Meta alone recently released an open-source Llama large language model with 405 billion parameters.

Yet, the way we manage networks is fundamentally broken, and that’s what’s leading to these endless rounds of frustrating and disruptive outages. What are the primary culprits?

• Human error: Whether the source is vendor product quality issues or the mistakes of network operators, human error is a major issue according to a recent study by IDC. Misconfigurations, incorrect policy changes, “fat fingering” and lack of pre-production testing by network operators contribute substantially to network outages.

• Poor visibility: Without insight into traffic patterns, device performance and network state, IT teams become far too reactive. Poor visibility means they operate in the dark, reacting to issues only after they have impacted customers. With engineers forced to manually correlate data across disconnected tools, troubleshooting becomes a slow, inefficient process that pushes up mean time to resolution (MTTR).

• Tool sprawl: In parallel, IT teams are overwhelmed by a malignancy of disjointed tools, leaving them with blind spots, redundant data collection and alert fatigue. Indeed, IDC data show that most organizations are working with 10 or more tools focused on observability alone. Engineers must manually correlate data, which can increase MTTR and delay decisions. Without unified observability, organizations struggle to detect issues early, optimize performance and ensure reliability. Addressing these issues requires consolidation, standardization and AI-driven analytics to enhance automation and resilience in data center operations.

Network complexity adds to the problem as well. Driven by factors including hybrid cloud/on-premises environments and legacy hardware integration, that complexity demands automation and observability to maintain consistent connectivity, enforce security policies, and ensure seamless data flow across cloud providers and on-premises systems.

Legacy hardware complicates things as well. Outdated protocols and proprietary configurations clash with modern API-driven and software-defined networking (SDN), creating operational inefficiencies and integration failures.

Further compounding the situation is poor product quality. The source might be buggy software or defective hardware, but poor quality in networking products can have a major impact on network reliability. Unpredictable failures and operational disruptions can leave organizations scrambling to fix problems they never saw coming or waiting for networking vendors to provide fixes.

How to future-proof data center networks

Addressing this list of challenges is essential; otherwise, data center networks risk significant operational disruptions. In parallel, data center operators risk losing competitive advantage and network engineers risk falling behind in their skillsets and career options.

IDC’s research makes it clear that data center networks must evolve. Here are three things the industry must do to make it happen:

1. Build self-healing networks

Traditional monitoring alerts teams after an issue occurs. The answer lies in a network that can detect problems and resolve them autonomously. Organizations should look for solutions that deliver event-driven automation, automatically respond to failures, and initiate corrective actions in real time.
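A minimal sketch of what event-driven remediation looks like in practice. This is a generic illustration in Python, not the API of any particular platform, and the event types, device names, and actions are invented for the example:

```python
from typing import Callable

# Map event types to remediation actions (all names here are illustrative).
HANDLERS: dict[str, Callable[[dict], None]] = {}

def on(event_type: str):
    """Register a remediation handler for a given telemetry event type."""
    def register(fn: Callable[[dict], None]):
        HANDLERS[event_type] = fn
        return fn
    return register

@on("link_down")
def reroute_traffic(event: dict) -> None:
    # In a real system this would push a config change via the fabric controller.
    print(f"Rerouting traffic away from {event['device']}:{event['port']}")

@on("config_drift")
def restore_intent(event: dict) -> None:
    # Re-apply the declared (intended) configuration to the drifting device.
    print(f"Re-applying intended config on {event['device']}")

def handle(event: dict) -> None:
    """Dispatch an incoming event to its handler; fall back to a human alert."""
    handler = HANDLERS.get(event["type"])
    if handler:
        handler(event)
    else:
        print(f"No automated action for {event['type']}; alerting operators")

# Simulated event stream; in production these would come from streaming telemetry.
for evt in [
    {"type": "link_down", "device": "leaf-12", "port": "eth1/7"},
    {"type": "config_drift", "device": "spine-03"},
]:
    handle(evt)
```

The point of the pattern is that corrective action is triggered by network state changes themselves, rather than waiting for an operator to read an alert.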

2. Use platforms to make networking simpler

Among the enterprises surveyed by IDC, 60% prefer a platform approach to a collection of best-of-breed components. Why? Because complexity is the enemy of reliability.

Instead of juggling disjointed tools, enterprises need:

• Centralized management platforms that unify monitoring, security and automation

• AI-powered analytics to detect anomalies across hybrid environments

• Intent-based networking that aligns network behavior with business goals.

Companies should invest in unified networking solutions. Platforms will deliver faster troubleshooting, fewer outages and improved efficiency across IT teams. The quality of a network management platform directly impacts reliability. Better tools mean fewer errors and more uptime.

3. Embrace real automation

Many network teams fear automation because they don’t trust it. And rightly so: Most automation today is glorified scripting.

Real automation is event-driven and adapts to changing conditions in real time. It can leverage state data to identify and autonomously fix problems before they become outages.

The shift from reactive to proactive automation is happening, and enterprises that embrace it will see massive improvements in uptime and efficiency.

Is your network ready for reliability?

To deliver the reliability your data center network needs to thrive in the digital age, an evolution is needed. This can address the fragility of today’s data centers and put an end to frustrating and disruptive outages.

When looking to modernize your data center network, keep the focus on fixing issues before outages occur with self-healing networks, eliminating human error with event-driven automation, and insisting on unrivaled quality in both your hardware and your software.

To learn how the Nokia Event-Driven Automation (EDA) platform can help your data center evolution, visit our Booth #205 at DCDConnect NYC. A modern infrastructure automation platform, Nokia EDA provides Day 0, Day 1, and Day 2+ lifecycle management for data center networking. Built on Kubernetes and taking advantage of its vast open-source ecosystem, EDA simplifies operations, abstracts the complexities of multi-vendor networks, and ensures reliable operations with capabilities that eliminate human error. Network operators can trust automation more when using EDA, which equips them to move faster and with more confidence. 

On the cusp of the Kuiper campaign

With the unfolding phenomenon of SpaceX’s Starlink rollout having caused a pronounced stir in the satellite market, the industry has been eager for clues as to how the emergence of another big tech Low Earth Orbit (LEO) giant in Amazon’s Kuiper Project could shake things up further as it rolls out this year.

Kuiper’s initial constellation is intended to contain 3,236 satellites weighing around 600kg each, with a solar array spanning eight meters and costing up to $2 million per satellite to manufacture - heftier than Starlink’s 260kg models which also sport a similar-sized array - delivering broadband through three consumer and commercial terminals capable of speeds of 100Mbps, 400Mbps, and 1Gbps at ruthless price points.

“The satellite broadband business increasingly looks like a battle of titans, with the traditional players caught in the middle,”
>>Caleb Henry, Quilty Space

In total, the Kuiper fleet is intended to deliver 117Tbps when fully deployed, beating Starlink’s current capacity of 102Tbps, though it is likely Kuiper will always be playing catchup to Elon Musk’s company given that it also has plans for further deployments. Amazon claims its terminals will be hundreds of dollars below Starlink’s, which has a history of subsidizing its terminal prices to expand its customer base. Currently, Starlink clients have access to a network of more than 7,000 satellites.

Morgan Stanley estimates capital expenditure for the company on this work will amount to $96.4 billion in 2025 alone. Barclays estimates Kuiper’s revenue would represent $61bn by 2030, made up of $26bn in consumer segments and $25bn in business and enterprise such as data centers, as well as aviation and maritime connectivity. Reaching that outcome will depend on hitting rollout deadlines without any logistical or manufacturing surprises, as well as obtaining vital regulatory approvals.

First announced in 2019 and headed by Rajeev Badyal, the former vice president of Starlink, Kuiper launched its demonstrator satellites in December of 2023. Prior to this, in April 2022, Amazon announced contracts with launch providers Arianespace, Blue Origin, and ULA for up to 83 launches, which were due to begin in 2024. This has now been pushed back into 2025, and Kuiper expects to be able to deliver service later this year too, although this goal may prove optimistic. For reference, it took Starlink more than a year to build up enough capacity to guarantee commercial viability.

Laurence Russell, Contributor

“Now that Kuiper is on the verge of its massive launch campaign, the satellite broadband business increasingly looks like a battle of titans, with the traditional players caught in the middle,” explains Caleb Henry, director of research at intelligence firm Quilty Space.

Amazon’s Kuiper team declined to comment to DCD on the ramifications of their entry into the satellite market.

The unfolding of the LEO era defined by these titanic entrants has influenced a string of mergers between the legacy satellite connectivity providers – many of whom are experts in more traditional geostationary satellites – such as Eutelsat merging with OneWeb, Viasat with Inmarsat, and most recently SES with Intelsat.

These providers have seen success as innovative and intuitive connectivity partners with long-time relationships with their customers, who have often highlighted their differentiators in the face of faceless big-box offerings. Even so, in the face of big tech reliability, ubiquitous brand identity, and prices that border on the predatory, customers will be convinced, as many satellite operators themselves have been by the unbeatable price of launch with SpaceX Falcon 9.

Low-Earth advantage

Closer to the surface of the planet, LEO satellites can deliver more throughput at the cost of each satellite’s effective reach. As demonstrated by a 1981 mission launched by NASA and headed by Sir Martin Sweeting, a service delivered by satellites in such a close orbit requires a network of satellites above all delivery areas on Earth simultaneously, which made such constellations unrealistic in the 1980s. Today, multiple companies sell LEO-based services, but few can claim to be a one-stop-shop for all connectivity needs.
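The latency argument is simple geometry: signals travel at roughly the speed of light, so one-way propagation delay scales with orbital altitude. A rough illustration, assuming a straight-line path from a typical LEO altitude of around 550km versus geostationary orbit at 35,786km, and ignoring ground-segment and routing overhead:

```latex
t = \frac{d}{c}, \qquad
t_{\mathrm{LEO}} \approx \frac{550\,\mathrm{km}}{3\times10^{5}\,\mathrm{km/s}} \approx 1.8\,\mathrm{ms},
\qquad
t_{\mathrm{GEO}} \approx \frac{35{,}786\,\mathrm{km}}{3\times10^{5}\,\mathrm{km/s}} \approx 119\,\mathrm{ms}
```

That is why LEO round trips can compete with terrestrial links while geostationary services cannot - at the cost of needing many satellites to keep one overhead at all times.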

While LEO broadband has, at times, outpaced terrestrial Internet services in certain regions, it is generally slower than fiber. But it remains a good option for rural and remote connectivity where cabled connections become impractical, as well as in developing nations. Sectors like military, aviation, and maritime, where connected assets are mobile in nature and often located far from urban hubs, are natural anchor customers. It doesn't hurt that military agencies also typically have considerable spending power.

LEO can also form part of a hybrid network of mixed connections, acting as a backup that assures more resilient networks - which is crucial for mission-critical applications and represents the most apparent advantage for most static data centers.

Tore Morten Olsen, president of maritime at Marlink, a network solutions provider for shipping, offshore, and remote markets, defines hybrid networks as the solution of choice for maritime and energy users. “LEO works well within a hybrid network because it enables high bandwidth and low latency applications that can be harder to achieve over VSAT without major bandwidth commitments,” he tells DCD. “Demand for LEO connectivity is growing very strongly in maritime because it is a market that until recently has been a somewhat underserved niche. With the exception of operators of large and sophisticated vessels, most shipping companies have traditionally made little outlay on digital solutions but that is changing, not least due to LEO.”

With some 100,000 vessels in the world, many of them still using very basic communications, easy-to-install LEO connectivity serving business needs and crew welfare has a lot of room to grow.

How big is Starlink’s lead?

With a total of 12,000 satellites planned to enter the Starlink constellation and paperwork filed to regulators for 30,000 more, Musk’s service has the numbers advantage, though quantity may not be the sole ingredient for success.

Experts have claimed Starlink’s fascination with vertical integration could prevent symbiotic technological relationships, distancing the company from enterprise and government needs as well as the associated regulatory frameworks.

“Starlink will certainly not be the gold standard,” argues Ronald van der Breggen, chief commercial officer at Rivada Space Networks, a competing satellite network provider planning a 600-satellite-strong LEO Outernet constellation also expected to begin launching in 2025.

“While it has first mover advantage as an ISP using LEO satellites, there will be others that ultimately will give Starlink a run for their money,” he adds. “The ISP business is a difficult one where margins are very slim, if positive at all, and the projections made by Starlink back in 2019 - to achieve $35 billion in five years - have been missed in a big way.”

At Marlink, LEO integration was key to rolling out a complete set of connectivity services including cloud, cybersecurity, and IoT.

“Marlink is already in conversation with Kuiper and Telesat about the services they will offer in future, because the more options we can provide the better for our users,” Olsen says. “When those services start to arrive in shipping and energy, the questions from buyers will be how they compete on price and service quality. Just like consumer markets, maritime and energy are highly competitive so the impact can only be judged for real once the services are available.”

Kuiper’s chances

With the benefit of hindsight, the Kuiper Project could achieve a second-mover advantage, learning from Starlink’s mistakes and taking advantage of regulatory frameworks painstakingly trailblazed by the pioneer, making for smoother sailing as a fast follower. Experts have agreed that the entry of a second billionaire-backed corporate empire could easily upend the emerging balance of market powers, if not represent a new monopolizing influence.

“Kuiper is mostly a me-too product and will probably not benefit from [studying Starlink’s fumbles],” Rivada’s Van der Breggen contends. “If differentiation is sought by Kuiper, chances are that it will be in the pricing.”

Amazon

Kuiper’s aggressive pricing could dovetail neatly with the cloud services offered by AWS, Amazon’s market-leading public cloud platform, as well as the company’s consumer offering, Amazon Prime.

There are already examples of companies wanting to take advantage of this. In September 2024, Johannesburg-based mining group Gold Fields said its decision to move to AWS was partly in expectation that the Kuiper system would integrate more efficiently with Amazon’s cloud offering.

“Amazon could offer their Internet access services [to the public] through the Kuiper constellation completely for free,” says Van der Breggen. “Anticipating that revenues through its website would increase by a couple of percent as a result, which would already make it worth it. Unthinkable? Well, don't forget that Starlink started out subsidizing their antennas, only asking for a nominal amount.”

There are other participants in this race. The protectionist policies pursued by the US and other governments around the world could accelerate an existing trend in satellite markets for data sovereignty. Billionaires aside, international buyers may come to prefer or even require domestic options in connectivity.

So far, Kuiper has already signed deals with telecoms groups like Vrio, Vodafone, and Verizon for connectivity in broadband and mobility. In mid-February, it announced a £670,000 ($865,000) consultancy contract with the UK Ministry of Defence, studying the potential for “translator” satellites to facilitate orbital communication between military, government, and private satellites.

"Kuiper is mostly a me-too product; if differentiation is sought by Kuiper, chances are that it will be in the pricing,”
>>Ronald van der Breggen, Rivada Space Networks

Another noteworthy interest has emerged from the Taiwanese government, which confirmed talks with the Kuiper Project in December 2024. The nation has long seen a decisive strategic advantage in access to LEO constellations following the use of Starlink in the war in Ukraine - and reports that such access could be limited - and has sought orbital communications support to augment its defenses in the case of a Chinese invasion.

Since early 2023, Taipei has stated a desire not to “tie ourselves to any particular satellite provider,” but rather work with as many as possible to achieve better resilience, all but guaranteeing its interest in Kuiper.

Small-tech competition

“While the business hasn’t traditionally boycotted providers based on their nation, securing connection and security from the right allies as part of data sovereignty most certainly has,” says Rivada’s Van der Breggen. “The ability of various satellite service providers to support and fully control the flow and location of data does sway enterprise and government users to make drastic decisions as to who to choose as their data communication service provider.”

Van der Breggen explains that Rivada stands to benefit from this trend. Being an operator founded in Germany, with offices in Berlin and Munich, could ensure uptake of German LEO demand. The appetite for an alternative to Starlink in Germany could be high following Musk’s attempts to intervene in the recent German general election, which saw him voice his support for the far-right AfD party.

Rivada’s multi-node network approach, allowing full mesh rooftop-to-rooftop connections for government and enterprise, means the company is “not so much guided by what Kuiper or Starlink are doing, as they cannot offer these types of services,” Van der Breggen suggests.

Many legacy providers also tout this sense of know-how in delivering bespoke connectivity, emphasizing their diversity of technologies and decades of experience serving industries.

Starlink appears to see the value in that experience, given its tactic of reselling services through many existing network providers, and selling capacity into competing networks with pre-existing connections to unfamiliar but lucrative markets such as those in Asia. Since the latter half of 2022, Starlink has been working with partners such as Speedcast, SES, KVH Industries, netTALK maritime, Singtel, Tototheo Maritime, and, of course, Marlink, in shipping alone.

For now, this tactic appears to have sold capacity while spreading the Starlink name without an overwhelming marketing burden, and without having to tailor its offering through deep customer relationships, leaving the hard work to the networks it partners with.

We may see Kuiper do something similar, though its exact path and how successful it turns out to be could hinge upon how it navigates 2025. 

Leading the Way in Data Center Sustainability

AMI Data Center Manager (DCM) is an on-premise, out-of-band solution for monitoring server and device inventory, utilization, health, power, and thermals.

AMI DCM addresses data center manageability, reliability, planning, and sustainability challenges with real-time data, predictive analytics, and advanced reporting. Designed to enhance operational efficiency, reduce costs, and ensure the availability and uptime of critical infrastructure, DCM manages a data center’s carbon footprint to achieve sustainability goals.

Carbon Emissions Measurement and Projections

• Centralize the management of diverse devices across data centers, even in multiple locations

• Reduce operating expenses by optimizing power consumption, identifying zombie servers, addressing thermal hotspots, and improving cooling

• Improve data center reliability and uptime proactively by monitoring device component health and responding to alerts

• Monitor, manage, and control data center carbon emissions to achieve sustainability goals and comply with regulations.

Do the incumbents have a stranglehold over Open RAN?

Initially hesitant to push for Open RAN, the bigger vendors seem to be making the most headway, but is this at the expense of the challengers that pushed hard to begin with?

Two years is a long time, especially in the telecoms industry. It’s been almost two years since DCD last dug deep into Open RAN, just as momentum around the technology was growing, with carriers announcing trials left, right, and center.

And while some of that excitement remains, it feels as if it’s assessment time from the industry, which is looking to grade the work done so far.

Open RAN, or O-RAN as it’s also known, allows providers to mix and match solutions from multiple vendors, which is impossible under most current network setups.

Paul Lipscombe Telecoms Editor

Traditionally, the Radio Access Network (RAN) hasn’t been like this, instead involving proprietary technology, with equipment from one vendor rarely being capable of interfacing with components from rival vendors.

Several operators, vendors, government bodies, and collaborative industry groups have pushed for Open RAN as a way to diversify the vendor supply chain, allowing for greater interoperability of mobile networks.

Causes for concern

Since 2023, there has been an increase in the number of commercial Open RAN deployments, but has it been enough?

Not according to analyst firm Dell’Oro, which reported a decline of 30 percent in revenue for the first three quarters of last year.

“The Open RAN movement has come a long way since the O-RAN Alliance was founded in 2018,” Stefan Pongratz, VP industry analyst at Dell’Oro, told DCD.

“Keeping in mind that the mission with this movement is to ‘reshape the RAN industry and ecosystem towards more intelligent, open, virtualized, and fully interoperable mobile networks,’ I think the results so far are mixed.”

The report blamed slower 5G adoption and delays in readiness of the technology for the slump.

“From a revenue perspective, progress has stalled a bit. In retrospect, Open RAN revenues accelerated rapidly between 2019 and 2022,” says Pongratz. Since then, investments have declined, he adds.

Multi-vendor RAN hasn’t taken off

Opinions are split on Open RAN and its credentials. Dimitris Mavrakis, senior research director at ABI Research, remarked on LinkedIn that “Open RAN is dead,” suggesting that tier-1 vendors have taken over.

“Safe to say that Open RAN will not meet its initial promise, but will morph to something else, likely dominated by tier-1 vendors. Perhaps interest will shift to the AI RAN alliance and AI will dominate all discussions,” Mavrakis posted in early February of this year.

Interestingly enough, ABI said in 2020 that it estimated total spending on Open RAN radio units for the public outdoor macrocell network would reach $69.5 billion in 2030.

Mavrakis’s view that vendors such as Ericsson and Nokia will dominate Open RAN may not be too far off the mark. For example, the $14 billion Open RAN deal that Ericsson won from US carrier AT&T back in 2023 is effectively a single-vendor deal and not particularly open, as many in the industry have pointed out.

But Pongratz says he expects Open RAN to take off in the long term, noting that there’s a collective push from many within the industry to make it a success.

“Vendors making up around half of the global macro RAN market are embracing most of the pillars in the broader Open RAN vision. O-RAN-ready hardware will soon no longer be the exception but the norm.”

“Safe to say that Open RAN will not meet its initial promise, but will morph to something else, likely dominated by Tier-1 vendors,”
>>Dimitris Mavrakis

Hard to usurp the incumbents

One of the main arguments for Open RAN was that it was meant to help diversify the supply chain of mobile networks.

It promised to open up the RAN to new players, driving competition for the incumbents such as Ericsson, Nokia, and other big names like Samsung. This is perhaps why, in 2021, Ericsson claimed that Open RAN would be more expensive than traditional RAN and offer worse performance.

“There’s been a motivation to try and break the stranglehold that a handful of companies have had on the RAN market,” says Philip Laidler, managing director for consulting at telecoms industry research firm STL Partners. “So given that, it's not been very successful.”

But Laidler does think Ericsson and Nokia have changed their ways slightly over the years in the face of new dynamics.

“To some extent, Ericsson and Nokia have made quite a few concessions,” he says. “I wouldn’t say they’ve always fully embraced Open RAN as a movement, but they have sort of shifted their positions to a more open one, although some people would argue that to get them on board has resulted in less openness.”

However, he does acknowledge that the short-term outlook for Open RAN is less clear, bemoaning the speed at which operators have activated Open RAN.

“The movement is not enough to change the vendor dynamics and the business case for multi-vendor RAN,” adds Pongratz. “While the split between multi-vendor and single-vendor RAN is not too asymmetric so far, the results are masked by the greenfield mix. As brownfield deployments comprise a greater share of the Open RAN market, the share of single-vendor O-RAN will rise.”

He adds that while no one expected O-RAN “would be a magic architecture that would somehow change the rules of the game, the multi-vendor progress is underwhelming, even with a very low bar.”

Cost is a key factor too, according to Earl Lum, president, EJL Wireless Research. He says companies such as Ericsson and Nokia have significantly deeper pockets for R&D to further the technology.

“It’s hard because competitors are constantly chasing this golden ring around the merry-go-round that's just inches from their grasp,” Lum tells DCD.

“The ring isn't static, it moves, and that ring is R&D. So when you look at the R&D of Nokia and Ericsson, they're pouring billions of dollars while the startups are pouring tens of millions of dollars. You’re not going to outspend the incumbents.”

He likens the tier-1 vendors to Formula One cars and their competitors to push karts. “As the incumbent continues to move, you have to keep pace. But keeping pace isn't good enough.

“This race always ends up with the incumbents winning.”

Are carriers embracing O-RAN?

EJL’s Lum suspects AT&T’s lucrative deal with Ericsson suggests a hesitancy from the industry to trust different partners.

“AT&T saying [to other vendors] that you're not ever going to be the main guy, sort of tells you what every other operator might be thinking but hasn't publicly said,” argues Lum.

He uses Vodafone and its Open RAN network as an example. The operator plans to have at least 30 percent of its European sites running on Open RAN technology by the end of the decade.

The company, which began its Open RAN deployment in the UK in 2023, opted for Samsung as its vendor partner.

“It’s likely Samsung will secure a fairly big chunk of that [Open RAN], but the other 70 percent of its network is split between Ericsson and Nokia, because that’s who Vodafone trusts,” says Lum.

“While no one expected Open RAN would magically change the rules of the game, multivendor progress is underwhelming, even with a very low bar,”
>>Stefan Pongratz

“It’s the same thing for Deutsche Telekom, Orange, and Telefónica,” Lum adds. “How much of this percentage are they actually going to allocate, and how much do they trust that that vendor is going to be around in 10 years for the next kit upgrade that's going to happen?”

There’s a temptation for mobile operators to stick to just one radio supplier, says Professor Emil Björnson, head of division at the KTH Royal Institute of Technology.

"As a telecom operator, it's very convenient if you can buy your equipment from someone who takes the responsibility for the entire network. Single-vendor networks may be one of the best options right now," says Björnson.

He notes operators can then swap out different components further down the line, as long as the equipment is Open RAN compliant.

“Ericsson and Nokia haven’t always fully embraced Open RAN as a movement, but they have sort of shifted their positions to a more open one,”
>>Philip Laidler

Mavenir’s Open RAN battles

One of the hopefuls looking for a piece of the Open RAN action is Mavenir Telecom.

Formed in 2017, the US software company has been one of the biggest advocates for Open RAN.

It has won contracts with operators including Telefónica, Vodafone, Deutsche Telekom, Dish, and Virgin Media O2.

The company also has deals with Norway's Ice to support the operator's 4G and 5G networks and Bermudan newcomer Paradise Mobile.

But despite those wins, the company has faced financial difficulties after missing interest payments on debt.

“We’d like to see more big wins,” Mavenir’s then SVP for business development, John Baker, said in October 2023.

Baker, who at the time was the face of the company’s Open RAN push, made the comments at Fyuz, an event hosted by the Telecom Infra Project (TIP), which was founded in 2016 to promote industry-led collaboration around open networks. The event also includes the Open RAN Alliance’s Open RAN Summit and the more recent Metaverse Connectivity Summit.

Even back then, Baker acknowledged that the bigger vendors proved difficult to compete with. Baker has since left the company.

“It’s hard because competitors are constantly chasing this golden ring around the merry-go-round that's just inches from their grasp,”
>>Earl Lum

But Mavenir’s Rick Mostaert, vice president of product management for RAN, told DCD more recently that the company has seen successes in the Open RAN field and is confident that both the technology and its own business can thrive.

“I think there have been some successes, but also some things that we wish would go faster, from a technology standpoint,” says Mostaert.

He feels the industry is more open than it has ever been, citing Open RAN deployments in the US, such as Boost Mobile, and in Japan and Europe.

“Other industries have been more open, while cellular was stuck where it was, a bit closed off, at least mentally in some cases,” he says. “But that's all changed in the last three years, where we now have Open RAN networks deployed around the world.”

Mostaert declined to comment about the reported financial hardships but did make clear his view on single vendor Open RAN.

“I don’t think single vendor RAN is in the spirit of Open RAN,” the incumbent challenger says, noting that this isn’t what Open RAN was intended to be.

“It’s the operator's choice but marketing single-vendor RAN as a thing, I don’t understand that or the advantages of that. You’re saying you are Open RAN, but you’re not really open if it all comes from one place.”

That said, he insists that the bigger-name vendors are welcome, noting there’s no reason from a technical standpoint that the incumbents’ radios can’t be mixed with those of different RAN vendors.

6G might be the big enabler, not 5G

Software firm Red Hat has also thrown its hat into the Open RAN ring. The IBM subsidiary has penned several deals with operators, including Deutsche Telekom.

Even so, Hanen Garcia, global telco solutions manager at Red Hat, accepts that the rollout of Open RAN networks has been slower than anticipated.

He says this has been because the industry has been slow to adopt the idea of Open RAN.

“It’s moving forward, but it’s not speeding up. If we had Open RAN specified just after 4G, we’d be in a different situation,” notes Garcia.

“But, unfortunately, it happened in the middle of when everybody was already deploying 5G.”

He’s positive that the push for Open RAN from several carriers, such as Vodafone and KDDI, is evidence that there’s an appetite for open networks, but warns that it will take time.

“Things probably aren’t going to accelerate at the speed everybody would expect, probably not until 5G Advanced starts rolling out and we get close to 6G.”

That’s a view shared by many of those DCD spoke to about Open RAN’s future.

Björnson suggests that there’s an opportunity over the next five years for the industry to become more comfortable with Open RAN networks ahead of the 6G era, which is widely expected to launch commercially around 2030.

STL’s Laidler also expects Open RAN to flourish when 6G comes.

“I think with 6G it will be a lot more like Open RAN than it is now,” he says. “I think you will be able to disaggregate more of the components, if you want to. If you look at the 5G Core you have a lot more disaggregation. There you can generally buy lots of components, hardware, and software for all completely separate entities, and you can knit them together. There will be a lot more of this with 6G.”

Build it and they will come

As Open RAN deals continue to be announced by carriers and network vendors across the world, it would seem the technology is here to stay.

And although the rollout hasn’t been as quick as many would have expected, there’s optimism that it will grow as hoped.

“It takes us 10 years to move from one generation to another, but the softwarization of the RAN is definitively on trend with other technologies in the past,” says Garcia, who believes single-vendor Open RAN deals will translate to multi-vendor as Open RAN matures.

Mostaert is equally bullish about Open RAN’s chances.

“If you look at the ecosystem of companies involved in Open RAN from radio vendors, software vendors, chipsets, servers, it's everybody, right?” Mostaert says. “The ecosystem is very vibrant.”

Whether the optimism can translate into success for the challengers to the incumbents when it comes to securing big-money deals remains to be seen. For some hopefuls, the wait for 6G might be too long. 

5G on the frontline

How LMT is using 5G technology to secure Latvia’s digital and physical border

Though it has only existed as an independent nation since 1918, Latvia is no stranger to invasion. Home to just 1.8 million residents, it was occupied by Germany during both World Wars and twice by the Soviet Union on either side of World War II. From 1944, Russification took hold of the nation. The neo-Baroque architecture, still hidden behind Soviet facades in the capital city of Riga today, is a stark reminder of the country that endured Soviet political repression and socioeconomic reform.

Following Latvia’s liberation from the Soviet Union in 1991, the nation’s economy, which had been heavily reliant on companies such as electronics manufacturer State Electrotechnical Factory (VEF) providing goods for the Soviet market, collapsed.

New titans of industry were required, and Juris Binde, a project conductor at VEF, was appointed president of the state-owned telco Latvian Mobile Telecom (LMT) in 1991 and was handed the challenge of building Latvia’s first mobile network operator.

Thirty years on, Binde might consider his efforts to have been a success, with LMT recording revenues of €175 million ($182m) in 2023. In his own words, he is “the only president the LMT has ever seen,” and the telco is a reflection of the man at the helm: a traditional mobile operator with innovation and engineering at its core.

Niva Yadav Junior Reporter

Binde explains: “Many telco companies are managed by finance people, or by lawyers, or marketing people.” LMT, on the other hand, is run by engineers, he says, and it becomes clear that Binde prides himself on running a company that does more than churn out mobile data plans.

Since joining the EU in 2004, Latvia has enjoyed economic growth and stability. However, the country, still haunted by its occupation, is facing its Russian enemy once again. In 2022, it designated Russia as a state sponsor of terrorism and closed entry for Russian citizens, ceasing visas for Russians entirely.

LMT has now launched its 5G military testbed, a site dedicated to the experimentation and demonstration of 5G military applications. Although born out of a willingness to expand its offerings, the experiment has become more significant than ever, coinciding with Latvia’s need to defend itself from the Russia that once occupied it.

Fighting fire with 5G

Since the war in Ukraine began in 2022, relations between Latvia and Russia have completely “frozen," says Sergejs Potapkins, a researcher at the Latvian Institute of International Affairs and former member and deputy chairman of the Foreign Affairs Committee of the Latvian Parliament. Bilateral trade has come to a complete halt, and the budget for defense has been on the rise, he adds.

Potapkins represented Latvia’s Harmony Party (also known as Concord), which is commonly accused by opponents of having a pro-Russia stance. He claims the party is categorically against the annexation of Crimea and the invasion of Ukraine, but it secured less than five percent of the vote at the 2022 Latvian general election, an indicator of the anti-Russian sentiment that prevails throughout the small nation.

But fear of Russia was not the reason behind LMT’s Ādaži testbed. Ingmārs Pūķis, vice president and chief marketing and business development officer at LMT, points out the testbed was launched before the Ukraine invasion in 2022, with the war merely being a “catalyst” for the support the telco has received.

The testbed is located at the Ādaži military base, 20km from the center of Riga. The base serves as a training camp for the Latvian army and, since 2017, has been home to a multinational NATO battalion. The testbed itself has been in operation since 2020 and is formed of two standalone 5G networks. It is open to all NATO allies to develop 5G defense applications.

The testbed was born when LMT realized that 5G was surplus to requirements for most of its domestic customers. The telco has deployed more than 160 5G base stations across the nation, but according to Pūķis, 4G is sufficient for human use.

“Almost all broadband necessities can be addressed by 4G,” Pūķis says. Indeed, many Latvians eschew a wired broadband connection in their own homes in favor of relying on 4G. According to the European Commission, mobile and fixed services have directly competed since as early as 2017, with 4G emerging as an alternative to fixed broadband, particularly in rural areas. Pūķis says: “WiFi is lousy, it’s clumsy, it’s slow, it’s unsafe.” Binde and LMT have sustained a “mobile is the future” approach in almost all their campaigns.

So, instead of punting 5G at its customers, the Ādaži testbed was born.

The use of 5G for military purposes has still had its fair share of skepticism. Pūķis says: “People asked, what is a telco doing getting involved in state defense? We are historically a civilian company selling iPhones and doing nice, little adverts about the Internet.” More saliently, civilian technology has historically been perceived as “vulnerable and insecure” compared to the hardened systems used by the military, he explains.

“We are historically a civilian company selling iPhones and doing nice, little adverts about the Internet,”
>>Ingmārs Pūķis

Neils Kalniņš is director of the Techritory Forum, an international 5G event organized by the state-owned Electronic Communications Office of Latvia. He says previous editions of the forum, held in Riga, saw anti-5G protesters travel from across Europe to voice their opposition. He argues the anti-5G sentiment was rooted in the Kremlin, with Russia harnessing its ability to dismantle European countries by causing panic around emerging technologies.

This has not stopped LMT from developing 5G military use cases.

Kaspars Pollaks, LMT’s head of business for defense and public safety, says the capabilities of 5G at the site, which he calls “the digital backbone of the allied forces,” are significant. He was formerly a navy officer, and is just one of several military personnel who joined LMT’s innovation and defense team to facilitate the telco’s pivot from civilian technology.

So far, 5G has enabled connectivity solutions that could be far superior to radio. Pūķis explains that one of 5G’s biggest advantages is that military traffic is easier to hide among all the other activity within the spectrum, making it difficult for the enemy to distinguish between civilian and military communications.

It can also be handy in training, using mobile Edge solutions to assist soldiers from different geographies to participate in the same virtual reality, as well as for remotely repairing vehicles. For example, a specialist was once flown out to repair a broken-down SATCOM vehicle in Afghanistan. Now, Pūķis says, 5G would allow any soldier to remotely repair the vehicle using an open-reality solution.

In light of Russian jamming of GPS satellite signals, 5G triumphs once again, allowing aircraft and vehicles to be located and tracked without interference. Speaking of land, air, and sea, Pollaks believes 5G will solve all connectivity problems across the three, interconnecting all operational military domains and changing the strategic game entirely.

The modern battlefield

But where does the game begin? Pollaks argues that modern warfare is no longer soldiers lined up on a battlefield, and that we have instead entered the era of cyber warfare. Kalniņš adds the government’s greatest challenge is to create a digital border to defend itself from Russia. He warns that “drone technologies” are just one of the “provocations” that can pose a threat to the nation.

Last year, Latvia proposed the Drone Coalition in conjunction with the UK. The coalition aims to supply drones to Ukraine. Potapkins says whilst Latvia may not be able to produce heavy military tanks, it can still play its part by innovating drone technologies. He says for a small nation with limited human and capital resources, it is “unique” to be able to produce such a competitive product.

On the physical border, Pūķis says 5G can be used to assist immigration teams. Latvia has recently accused Belarus of letting illegal migrants cross into Latvia. This is an instance where border security, police, and military need to collaborate. Not all of these units would have access to the traditional military “green radio.” However, 5G has the potential to simplify coordination and collaboration at the country’s borders.

"You can switch off the Internet, or do whatever you like in Latvia, but Latvians will be part of the European Union in our minds and souls,”
>>Neils Kalniņš

Deploying in uncharted waters

Ādaži lies on the coast of the Baltic Sea, which has had an uneasy few months, with accusations of Russian subsea sabotage rife. Despite some EU intelligence personnel stating that these incidents, which have hit multiple undersea cables, were accidental, Kalniņš remains fairly certain that Russia is responsible.

Protecting these cables is now a “super high priority” in the minds of the Latvian government, he says, and 5G could be deployed to underpin surveillance systems and underwater drones to detect potential enemies in the sea. For example, Kalniņš says, Russia has been accused of operating a “shadow fleet,” or “dark fleet”: ships with opaque ownership that frequently change their names and operate outside maritime regulations, but which have links to the Kremlin. A report from the Atlantic Council confirmed that, since 2022, Russia’s dark fleet has grown to some 600 vessels.

Even months before the Baltic Sea became a hotbed for geopolitical tension, Binde says he predicted that trialing and testing 5G deployment in the Baltic Sea would be critically important.

LMT has already successfully trialed its 5G network in Baltic waters, using shore-to-ship and ship-to-ship communication. On board Latvian port service provider LVR Flote’s Varma icebreaker, LMT has deployed 5G connectivity that can be delivered as far as 53km from the base station. Pilot boats and floating drones are all part of an intricate system to broadcast seabed measurements and remote video transmissions.

Kalniņš says that, whilst Latvia might not have Ericsson or Nokia like its NATO counterparts, the country has always been good at telecoms. From its radio factories to its engineering university in Riga, developing telecommunications is almost “like a genome in Latvian DNA,” he says. The 5G Techritory event in Riga was developed eight years ago to highlight this expertise and invite big names from the telecoms industry to use Latvia as a national testbed and a playground for technological innovation.

He adds the country has already shared its expertise and innovations with the likes of Ukraine and Moldova. The country’s relations with the rest of Europe and its Baltic siblings remain strong, and Kalniņš does not fear the spread of misinformation or Russian propaganda in the country. He says, “You can switch off the Internet, or do whatever you like in Latvia, but Latvians will be part of the European Union in our minds and souls.”

LMT’s future looks slightly less clear. Recent attempts to merge the company with Tet, another Latvian telco, have raised questions about whether it will continue to innovate and drive forward change. Sweden’s Telia has shares in both LMT and Tet, as does the Latvian government. But a merger would create a new and incredibly dominant company that could drive up market prices. Alternatively, the government could buy out Telia’s shares in both companies to bring in a new strategic investor.

Telia initiated a vote of no confidence against Binde at the end of last year, but Binde survived. Regardless, he is sure the government will make the right choice, and does not lack confidence in his own abilities.

He says: “You can take away my millions, but if you leave me with my system, I will be a millionaire once again.” 


Celebrate good times, compute on

The party goes on

It’s been a busy start to the year. After endless breathless announcements about the coming AI era, culminating in the $500 billion Stargate project, cracks started to show with the release of DeepSeek’s cheaper models.

Stock markets panicked, and data center, chip, and utility stocks tumbled. As the weeks went on, it became clear that DeepSeek’s models weren’t as cheap as initially thought, and hyperscaler pledges of record spending helped comfort spooked investors.

The party, it seems, is still going strong. Data center capex continues to grow, with analysts predicting that it’ll reach a trillion dollars by 2029. Cloud companies have also signaled that customer demand remains high - each of the three biggest hyperscalers said that they would have made more money if they weren’t capacity-constrained.

And yet, there are causes for concern. DeepSeek helped highlight how uncertain the market is, with investors fearful of reliving the dotcom bubble pop. When hyperscalers announced that they would be spending more than ever on data centers, their share prices dropped - last year, similar increases caused values to jump.

Then there’s Microsoft. While it is still set to spend a record $80bn on capex this year, the company has walked back from certain data center leases, and passed on the biggest data center contract in history with OpenAI.

Microsoft CEO Satya Nadella has suggested that AI could be overhyped, and said that the company would be cautious with its data center spend over fears of overbuilding. At the same time, the cloud provider hopes to keep up with the very real demand, and keep ahead of competitors, as companies try to make AI useful to society and businesses.

At the same time, wider market forces are conspiring to dampen appetites for risky bets. Destructive tariffs, trade wars, actual wars, a dismantling of the global order, and mass firings across the US government are causing widespread uncertainty.

Stock markets have taken a pounding, with tech potentially even more exposed due to its reliance on global supply chains, global markets, and global talent.

As markets wobble, it gets harder to find funding for multi-year and even decade-long infrastructure investments when the next few months seem perilous and chaotic.

The party may be continuing, but it’s clear that time is running out for AI to prove its worth. 

Power and cool AI with one complete solution.

AI has arrived and it has come with unprecedented demand for power and cooling. Untangle the complexities with Vertiv™ 360AI, including complete solutions to seamlessly power and cool AI workloads.
