
> Edge | Supplement


Powered by

What is it?

The shape of it

The telco factor

The one percent

> While some say it is a location, others say applications will define it - but all agree it will be everywhere

> How putting processors close to end users will create new data hierarchies

> Are cell towers the prime data center locations of tomorrow, or a vendor’s pipe dream?

> Dean Bubley says the mobile network edge is important, but overhyped

GET THIS CLOSE TO THE EDGE When latency becomes a real issue and milliseconds of downtime equals a wreck, you can rely on Vertiv for reliable mission critical edge infrastructure.

Explore more solutions at: VertivCo.com/OwnYourEdge VertivCo.com/OwnYourEdge-EMEA ©2018 Vertiv and the Vertiv logo are trademarks or registered trademarks of Vertiv Co.

A Special Supplement to DCD August/September 2018


Bringing the infrastructure to your door

Features

4-5 What is the Edge
6-7 The shape of Edge
8-10 The telco Edge
14-15 The one percent Edge

Powered by

Edge is coming to get you. Digital infrastructure is moving ever closer to users, sensors and intelligent devices. Tiny data centers will be popping up in cell towers, office buildings and the white goods in your kitchen. Or will they?

According to the theory, all of these trending technologies have something in common. They cannot work without fast connections to local data storage and processing. So the industry is scrambling to deliver just that - or so we are told.

This supplement aims to examine the reality behind the Edge hype. We explain what the Edge is (p04), what architectures it might use (p06), along with a look at the currently fashionable implementation - the telco Edge (p08).

In recent news, HPE has promised to invest another $4bn in what it describes as the "Intelligent Edge." The budget is earmarked for data collection, and tools to translate that data into intelligence and business actions. HPE promises to deliver security, machine learning and automation products to support Edge computing, a strategy it has been working on for the past two years. HPE clearly sees this as an emerging market, not a set of products in a catalog.

With the arrival of Edge, even old hardware has a new pitch, and there are plenty of containerized data centers aimed at the same niche (see Edge Contenders, p04-05).

The telco Edge is an interesting play. Cell towers and the "central offices" of the fixed telephone networks all have fast links to end-users' homes and devices. But there are hurdles. The industry has spent years aggregating resources into giant cloud data centers to save costs. Any Edge facility will lose those economies of scale, and each megabyte and core will cost much more there. The benefits of low latency must outweigh the extra overhead of a small unit of IT resource.

It's also not clear what the business model will be. If the Edge resource is at the telco tower, will it be owned and operated by the telco, or by a conventional cloud or colo player, running a virtual facility across a vast number of locations?

One class of company has already deployed in this manner: content delivery networks (CDNs). Their business model is based on bringing content to users, and turning that distributed content at the Edge into a protection against low speeds and network failures.

Other business models will emerge, but for now the Edge is an existing niche for specialists, combined with a large theoretical market. It's still possible that market may not take shape as predicted. Both Google and Microsoft have an alternative vision, which pushes the Edge processing further out, into AI chips on the mobile devices which will consume Edge services.

In the following pages, we look at how the big picture of Edge will emerge.

Issue 29 • August/September 2018 3

What is the Edge? Edge is more than a surge of hype. Tanwen Dawn-Hiscox talks to the pioneers to find out just how real it is


Tanwen Dawn-Hiscox Reporter

"Data is now coming from the customer and being distributed from point to point."

Everything’s got to change, we hear. Computing has to move from big data centers to the “Edge” of the network, closer to users. Is this just a periodic surge of hype for a new buzzword, or is real change happening?

As with all hype, we can predict two things. Firstly, Edge will change. The Edge that succeeds won’t be the Edge that is being debated now, just as the things you are doing with mobile data aren’t the things that were predicted ten or twenty years ago. And secondly, if and when Edge wins, you’ll never hear the term again, because it will be like air; all around without needing a second thought.

What is Edge right now? According to some, it describes micro facilities at cell towers (see p06-07). Others say it represents responsive applications, wherever they happen to be. Today, we centralize computing in remote locations, to benefit from economies of scale. But as processing and content distribution requirements grow, compute will need to be placed ten miles, five miles or less from the end-user.

“The smart thinker’s answer is that the Edge is at the telecoms base station,” Peter Hopton, the founder and CEO of liquid cooling systems provider Iceotope, told DCD. Usually “within a mile and a half to two miles from your location,” mobile network towers are also increasingly used for media content on phones, for smart vehicles, and for the sensors which make up the Internet of Things (IoT).

But the Edge will go further than this. The next generation of mobile networks, 5G, is still being defined; it promises faster links - but over a shorter range. Cell towers are “going to be getting a lot closer,” as close as “hundreds of meters” away.

Just as Green IT was a great marketing buzzword ten years ago, Edge is the marketing gold mine of the moment. But Hopton says it is a real trend, and those who grasp it will “come out on top.”

Edge also helps handle the change in the way data circulates. “It used to be that everything was made in Hollywood and distributed to customers. We had huge downloads but small uploads,” Hopton explained. “With the growth of social media, everyone wants to be a YouTuber and upload

Edge contenders

A selection of vendors who want a piece of the Edge






Snowball is a 20TB portable storage device designed to ship data to Amazon Web Services (AWS) data centers. It can now run EC2 cloud instances, as an appliance intended for temporary Edge capacity.

Data center builder Compass Datacenters’ EdgePoint subsidiary makes 10-rack, 80kW micro data centers. Two 40 sq m demonstration units can be seen at its Texas headquarters.

Since 2014, early entrant DartPoints has been selling Schneider micro data centers custom designed for the Edge. The first customers are businesses and real estate owners in the Dallas area; the next target is telcos.

Aimed at the telco Edge, EdgeMicro takes Schneider containers and adds its own Tower Traffic Xchange (TTX) network technology. A demonstration unit was seen with Akamai servers inside.

The EdgeStation is a sealed portable liquid cooled system that immerses electronics in a dielectric fluid. It can run free-standing without a raised floor or air conditioning.

4 DCD Magazine • datacenterdynamics.com

their own content.

“Data is now coming from the customer and being distributed from point to point. It’s no longer from the core outwards, it’s from the outside back to the core and then back out again.”

This new dynamic is likely to evolve further as new technologies emerge.

Developing and commercializing these technologies may be dependent on the Edge’s distributed infrastructure, but some distributed approaches already exist, and will be improved by it. Arguably the oldest Edge companies are content delivery networks (CDNs) such as Akamai. For them (see box), the Edge has evolved into a protective shield that keeps content safe against attacks and outages.

For the newcomers, Edge is up for reevaluation. A group of vendors including Vapor IO, bare metal cloud provider Packet and Arm recently published a report detailing their understanding of Edge and defining its terms. The State of the Edge describes

Edge use cases

[Chart: Edge applications ranked by power draw, from under 3kW to over 12kW]

> 12kW: Financial modeling
the creation of Edge native applications, but points out that some apps may merely be enhanced by the availability of local processing power.

Some companies already run applications that rely on distributed infrastructure: HPE, which recently announced a $4bn investment in Edge technologies, said this model is being used in large businesses and stadiums enabling WiFi connectivity; and in manufacturing environments, for predictive maintenance.

For example, Colin I’Anson, an HPE fellow and the company’s chief technologist for Edge, said its IoT sensors and servers are used by participants in the Formula 1 Grand Prix for airflow dynamics: “There are rules from the FIA and they allow you to only have a certain amount of energy use, a certain amount of compute use.

“We’ve purposed that capability for the IoT so we’ve got a low power ability to place a good server down at the Edge. We are capable then of running significant workloads.”

On this basis, it’s clear that Edge is not a single thing, but a dynamic term, for a dynamic set of functions, delivered on a highly varied set of hardware.
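A recurring pattern in deployments like these is reducing raw sensor streams locally and forwarding only compact summaries to the core. A minimal sketch of the idea, with a hypothetical occupancy sensor (all names and the day-long trace are invented for illustration):

```python
from statistics import mean

def summarize_occupancy(raw_readings):
    """Reduce per-second occupancy samples (1 = someone present, 0 = empty)
    to the handful of statistics worth forwarding to the core."""
    return {
        "samples": len(raw_readings),
        "busy_seconds": sum(raw_readings),
        "avg_occupancy": mean(raw_readings),
    }

# One day of per-second samples: the room is occupied from 09:00 to 17:00.
raw = [1 if 9 * 3600 <= s < 17 * 3600 else 0 for s in range(86_400)]
summary = summarize_occupancy(raw)
# The edge node forwards three numbers instead of 86,400 raw samples.
```

The same shape applies whether the input is a room sensor or seven HD video feeds: the reduction happens at the Edge, and only the "gems" cross the network.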

On-demand 3D rendering for customers

Facial recognition program for pharmacy security

Machine learning for robotic factory machines

AI for an unmanned shipping container

Cold storage for account documents

Collecting data from sensors

Expanding WiFi on a college campus

< 3kW

Location: Warehouse

Edge as a defense

Akamai, the world’s largest content distribution network, was arguably the first company to succeed with an Edge service as we understand it. Using global request routing, failover and load balancing, it caches and offloads content close to end-users.

James Kretchmar, the company’s vice president and CTO for EMEA and APJ, explained: “In the early days, we saw that the Internet was going to experience future problems as more and more users accessed heavier and heavier content, and that a centralized infrastructure wasn’t going to make that possible.”

Early on, he told DCD, this largely consisted of “delivering video at really high quality bit rate,” but now the distributed network also serves to “absorb the largest attacks on the Internet.” Distributed Denial of Service (DDoS) attacks are hard to defend against, because they come from all sides and can be colossal in terms of bandwidth: Kretchmar described a recent case in which the company defended a site against bogus traffic totaling a terabit per second.

Operating a distributed network not only means that attacks can be absorbed, but that they can be “blocked at the Edge, before they get to centralized choke points and overwhelming something or getting anywhere close to a customer’s centralized infrastructure.” As well as DDoS defense, the company provides web application firewalls and bot detection tools at the Edge.
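The caching-and-offload model Kretchmar describes can be sketched in miniature: only cache misses ever reach the origin, so popular content is served entirely from the Edge. This toy LRU cache is an illustration only (the capacity, keys and fetch function are invented), not a description of Akamai's actual software:

```python
from collections import OrderedDict

class EdgeCache:
    """Minimal LRU cache, sketching how an edge node offloads an origin."""
    def __init__(self, capacity, fetch_from_origin):
        self.capacity = capacity
        self.fetch = fetch_from_origin
        self.store = OrderedDict()
        self.hits = self.misses = 0

    def get(self, key):
        if key in self.store:
            self.hits += 1
            self.store.move_to_end(key)       # mark as recently used
            return self.store[key]
        self.misses += 1                      # only misses reach the origin
        value = self.fetch(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used
        return value

# Popular content stays at the edge; the origin only sees cache misses.
cache = EdgeCache(capacity=2, fetch_from_origin=lambda k: f"content:{k}")
for k in ["home", "video", "home", "home", "news", "home"]:
    cache.get(k)
```

In this trace, half of the six requests are absorbed at the Edge; the more skewed the popularity of content, the higher the offload ratio.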

Source: Maggie Shillington, IHS Markit 2018




Vapor IO


Rittal’s pre-configured Edge Data Center modules include climate control, power distribution, uninterruptible power supply, fire suppression, monitoring and secure access, and can hold OCP Open Rack and Open19 designs.

Schneider’s preassembled Micro Data Centers have shock resistant 42U racks, two UPS systems with a total capacity of 5kVA, switched rack PDUs offering a 230V AC supply, and a NetBotz Rack Monitor 250.

The 48U micro Edge unit from Stulz combines direct-to-chip liquid cooling from CoolIT, air-cooling, and optional heat reuse. It is available for HPC and standard applications.

The six-rack circular Vapor Chamber, with 36U racks, is shipped in a custom-built 150kW container with three mini racks in the upper chamber for switches or low power equipment.

Vertiv offers micro data centers designed for cell towers, but believes the Edge will not be a single opportunity, and offers compact UPS and cooling units for custom applications.


The shape of Edge

Tanwen Dawn-Hiscox Reporter

Not all of your Edge data has to hit the core. If you build it right it could even save your phone battery, Tanwen Dawn-Hiscox finds


Edge applications imply that data is collected and acted upon by local users or devices, and that not all of this data has to travel to the "core." In fact, it is preferable if most of it doesn’t.

To illustrate, Iceotope’s Peter Hopton gave the following example: “Imagine you walk into a room and there’s a sensor recording masses of data, such as ‘is there someone in the room this second.’

“Eventually, you turn that data into a bunch of statistics: ‘today on average so many people were in the room, these were the busy times of the room.’ That’s processed data, but the raw data is just bulk crap. You want to pull the gems out of that in order to make it usable.”

In the case of autonomous cars, for instance, you might have “seven videos in HD and a set of sensor radar,” but “all you want to know is where are the potholes.”

“So you take that data, you put it into a computer at the Edge of the network, finding potholes, diversions and traffic conditions, and reporting them back to the core. Next day other vehicles can be intelligent, learn, and change their rule set.”

Similarly, HPE installed AI systems for a customer in the food and beverage industry, and placed servers at the Edge for the collection and observation of data, sending it back to the core for the learning part of the process.

You can’t shift all the data to the core, whether centralized colocation or cloud data centers, Hopton explained, because of the limits of physics. “If you transmitted all that raw data from all seven cameras and that radar and sonar from every autonomous car, you would just clog the bandwidth.”

He added: “There are limits for the transmission of data that are creeping up on us everywhere. Every 18 months we get twice as much data from the chips using the same amount of energy because of Moore’s Law, but we’ve still got the ceiling on transmitting that.”

For Steven Carlini, the director of innovation of Schneider Electric’s IT division, putting compute at the Edge is “an effort from customers that want to reduce latency, the primary driver of Edge - in the public eye, at least.”

A lot of the cloud data centers were built outside urban or very populated areas, firstly because infrastructure operators didn’t want to have to deal with an excess of network traffic, and secondly because it cost them less to do so. As the shift to the cloud occurred, however, it became clear that latency was “more than they would like or was tolerable by their users.”

“The obvious examples are things like Microsoft’s Office 365, which was introduced and was only serviced out of three data centers globally. The latency when it first came out was really bad, so Microsoft started moving into a lot of city-based colocation facilities. And you saw the same thing with Google.”

As well as addressing issues of bandwidth and latency, the Edge helps lower the cost of transmitting and storing large amounts of data. This, Carlini said, is an effort on service providers’ part to reduce their own networking expenses.

Another, perhaps less obvious argument for Edge, explained Hopton, is that as 5G brings about a massive increase in mobile data bandwidth, an unintended consequence will be that mobile phone batteries will run out of charge sooner, because they will be transmitting a lot more data over the same distance. The obvious answer on how to improve on this, he said, is to make the data “transmit over less distance.”


“Otherwise, everyone’s going to be charging their phones every two hours.”

However distributed it is, the Edge won’t replace the cloud. All of the results and logs will be archived using public infrastructure. “It’s not a winner takes all,” Loren Long, co-founder and chief strategy officer at DartPoints, told DCD. “Neither is the IT world in general.”
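Hopton's battery argument above rests on simple radio physics. Under an idealized free-space path-loss model (an illustration only; real cellular links involve fading, interference and power control), the transmit power a handset needs grows roughly with the square of its distance from the antenna:

```python
def relative_tx_power(distance_m, reference_m=100, exponent=2.0):
    """Transmit power needed to reach an antenna at distance_m, relative to
    a 100 m reference, under an idealized free-space path-loss model
    (received power falls off as distance ** exponent)."""
    return (distance_m / reference_m) ** exponent

# A tower ~2 miles away (~3,200 m) versus one a few hundred meters away.
far = relative_tx_power(3_200)   # 1,024x the reference power
near = relative_tx_power(300)    # 9x the reference power
```

Under this toy model, shrinking cell radii from miles to hundreds of meters cuts the handset's radio power budget by orders of magnitude, which is the point Hopton is making.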

“The outlook that the Edge is going to compete with or take over the core is very binary”


“The outlook that the Edge is going to compete with or take over the core,” he said, “is a very binary outlook,” which he joked is probably only used “to sell tickets to conferences.”

Long compared the situation to city planning, and the human body. Building residential streets, he said, doesn’t “lessen the need for highways,” nor does having capillaries at our fingertips reduce the need for large arteries. “So as the Edge grows, the core is going to continue to grow.”

This doesn’t mean that applications at the core, or the cloud computing model, will remain the same, however. “But it’s not going to kill anything; everything is going to continue to grow.”

Long talks about a stratification of processing, “where a lot more data analytics and processing and storage happens at the core, but onsite processing happens at the Edge.” “There’s no competition,” he said. “It’s all complementary.”

While the cloud might continue to grow, HPE’s Colin I’Anson thinks the percentage of data processed at the Edge will increase.

Research firm Gartner agrees: according to a recent study, half of all data will be created or processed outside of a traditional or cloud data center by 2022. At the moment, Edge is tantalizingly poised between hype and implementation. Some say the telecom Edge is ready to go, while others point to problems (see article). We’ll leave the last word to the most optimistic provider, Schneider’s Carlini:

“What we’re seeing is a lot of companies mobilizing to do that, and we’re actually negotiating and collaborating with a lot of companies. We’re actually looking at and rolling out proof of concept 5G test site applications.” The opportunity is there, but it’s still taking shape. While that happens, everyone’s pitch is changing to meet the current best ideas about how Edge will eventually play out.

Edge application by product type

[Chart: rack and rPDU configurations ranked by power draw, from under 3kW to over 12kW, and by location, from office to warehouse]

> 12kW
Rack: enclosure with cooling (micro DC); rPDU: metered outlet
Rack: enclosure with cooling (micro DC); rPDU: monitored
Rack: enclosure with electrical locking doors; rPDU: metered outlet
Rack: enclosure with cooling (micro DC); rPDU: switched with outlet metering
Rack: NEMA 12 enclosure with cooling; rPDU: switched with outlet metering
Rack: four-post open frame rack; rPDU: basic
Rack: NEMA 12 enclosure; rPDU: switched
Rack: NEMA 12 enclosure with cooling; rPDU: switched with outlet metering
< 3kW


Source: Maggie Shillington, IHS Markit 2018


The telco Edge

Tanwen Dawn-Hiscox Reporter

Where will the Edge be implemented? For many people, it will be located at the telephone exchanges and cell towers which are close to end-users and devices, reports Tanwen Dawn-Hiscox


While the cloud made a virtue of abandoning specific geographic locations, the Edge is all about placing data and services where they are needed. But there’s an ongoing debate about where exactly that is.

DartPoints co-founder Loren Long said we shouldn’t fixate on where the Edge is: “Edge isn’t a geographical location, it’s not pointing at cell towers and saying ‘that’s the Edge.’

“Edge is an instance. It is the place where the compute and storage is required to be to get the optimal performance. That could be a cell tower, it could be further back.”

For Long, Edge computing isn’t new. He cites EdgeConneX, which deploys servers in conventional data centers across small cities for Comcast: “because that’s as far as Comcast needs to go.” Schneider calls this the regional Edge, and says it is mainly used for high bandwidth content delivery and gaming - which Steven Carlini, senior director of Schneider’s data center solutions division, describes as a very large market, and also one which is keeping the desktop computing industry alive.

The recent State of the Edge report proposes the four following principles to define the Edge: “The Edge is a location, not a thing; there are lots of edges, but the Edge we care about today is the Edge of the last mile network; the Edge has two sides, an infrastructure edge and a device edge; compute will exist on both sides, working in coordination with the centralized cloud.”

Others are less reticent about location. Several vendors are looking hard at cell towers, which are the confluence between the digital infrastructure and its users, both humans and connected devices. Cell tower Edge installations need the characteristics of the traditional data center - high availability, power and cooling - overlaid with network hardware that is the traditional domain of mobile operators.

DartPoints is creating small data centers for just this niche. The company’s CEO Michael Ortiz says content providers and application providers will benefit from the so-called “telco Edge,” but the carriers will gain most, through new services like 5G, IoT and autonomous cars that “can’t live anywhere except the cell tower.”

Schneider agrees that the next aim of Edge computing is to move even closer to the users, which can be on-premise, at the base of cell towers, but also bringing in the fixed-line operators’ facilities, so-called “central offices.”

Central offices were originally built decades ago by legacy telco providers to store analog telephony equipment. As the networks went digital, this was replaced by much more space-efficient digital equipment, leaving empty rooms with great power and networking, potentially ideal for local data center usage, or else using the CO as “a prime location to cache a lot of data.”

In the Central Office Rearchitected as a Datacenter (CORD) initiative, IT and telecoms vendors aim to create definitions and standard platforms to make it easy to roll out data center services in these telecoms spaces, using technologies including software defined networking (SDN) and network function virtualization (NFV). The closely related OpenCORD aims to offer this as open source.

cell towers owned by Crown Castle in the US

Central offices are set up in prime locations and have direct access to networks, “because they’re in the same building,” Carlini said. “So it’s kind of a win-win.”

“We see this happening either before or in parallel with the cell tower build-out,” which Schneider predicts will happen soon, even though Carlini admits we are not seeing a massive effort yet. Caching data at cell towers, he said, will allow “5G to operate and deliver on its promises.

“Information is going to have to be cached within a community - a small area that’s going to share that data where the 5G network is going to operate.

“Whereas 4G operates in a very one-to-one relationship with the devices, 5G operates in a shared model where we have a bunch of towers interfacing with the devices and antennas instead of one.”

Akamai is also considering equipment at the base of cell towers in the future - but this option is “more in an R&D phase right now,” explained James Kretchmar. Specifically, the company is exploring the use cases that would make it worthwhile, and weighing the pros and cons.

How real is all this?

Schneider’s Carlini says he’s seeing this model emerge, but as a “continuation of the build-out of 4G.” Until the successor, 5G, comes along, there will be no urgent need to cache data at cell towers or in central offices en masse. “When 5G kicks in, that’s when you’re going to see a huge wave,” Carlini said.

That’s a drawback, as the actual technologies 5G will be using are still being defined. Despite this, there’s real activity now, he assured us: “From a Schneider perspective, there are actual projects that we’re working on now.”

In the long term, the arguments for cell tower and central office deployments are strong. The hurdles include developing the business model and sorting out ownership.

In practice, building the mobile network Edge won’t be easy, but vendors persist in claiming it won’t present any problems. DartPoints’ Long recalls pitches along the lines of ‘you put it out there and it solves all your problems.’

“But that’s ridiculous,” he laughed. “Nowhere in technology has anything deployed itself that easily, and especially something as complicated as the Edge, because a lot of things have to change from the core out: all the networks, all the way to how routing cables are done today.”

Companies may not be willing to make these changes, because they perceive them as threatening their competitive advantage over one another. “If you think about wireless carriers, AT&T, Verizon, Sprint, T-Mobile, they all fight about who has the best network. They see that as their unique differentiator.”

And so, he said, “when you bring compute, storage, content and applications to the Edge, that are now on a completely different backhaul network, the unique networks each of the carriers operate disappear, and the carriers are almost relegated to antennas, so they may not necessarily be that excited.”

If carriers were keen to adopt such a model, they would still need to collaborate with companies up and down the stack to define it. First, Long said, businesses will need to stop “claiming” the Edge as their own. “In fact, it is very much going to be a micro ecosystem.”

Carlini concurs: “At the cell tower, there are multiple stakeholders. There’s the landowners, there’s the tower companies that actually own the enclosures that the equipment goes in, there’s the service providers, and there’s even governments that are involved, regulating what can and can’t go into these sites.”

Nor is there a standard process for deployment: “It’s not clear whether the equipment can go in the huts that they have there already or if there’s going to be prefab containers that we’re going to have to drop on site as an additional hut.”

“This market is going to be colossally huge, so we shouldn’t be fighting over it”



The latter option - placing modules on the property beside the cell tower - would complicate things more, he said, because “that’s when you run into a lot of these issues with government regulations and local jurisdiction on what can be there and what can’t. So that opens a whole ‘nother can of worms.”

In any case, such an upheaval will mean that many mistakes are likely to be made: “This applies to our level, the tower operators, mobile operators, all the way up to the content and application providers.”

Another potential issue is outlined by telecoms specialist Dean Bubley (p36-37): the power capacity at cell sites might not be sufficient to accommodate much compute. He argues that instead, device-based applications will offload workloads to the network, or cloud providers will distribute certain aspects of their applications. The network Edge, he stated, will be the control point for applications like security, and an evolution of content distribution networks, but nothing more. Most of the compute-heavy applications will either be processed on device or in the cloud.

Even content distribution specialist Akamai has its doubts: while cell towers and central offices may provide perfect locations for data offloading closer to users, and are being considered among “a number of different environments,” they have drawbacks, too: concessions on disk and storage space.

“In some of these locations you’re going to have a smaller amount of disk space available, or storage space available, than you would have in other spaces, so you’d be making a trade off for ‘which content does it make most sense to have there, that would get a benefit from offload that a lot of end-users would be requesting’ versus if you were to make a step or two out, more like the traditional Edge of today, then you have more disk space available, and you can take other trade-offs into consideration.”

What such a business model would look like is still unclear, and this will be the company’s final concern once it has identified which use cases are worthwhile “from any combination of making the user experience better, making it easier for the network operators by offloading the network, and making it better for our customers in being able to do that.” But physical expansion isn’t everything, and being in as many locations and geographies as possible is only one aspect of Akamai’s ambitions to improve its services; in parallel it is developing more efficient software to avoid having to multiply servers, despite increasing demand.


The share of enterprise-generated data created and processed outside a traditional centralized data center or cloud in 2017

Rest assured, there is light at the end of the tunnel for the cell tower Edge. As DartPoints’ Long puts it, “nobody is going to own [the Edge], there’s not a single definition, there’s not a single implementation and this market is going to be colossally huge, so we shouldn’t be fighting over it.”

Driving these decisions, he continued, will be the customers, all of whom are likely to have their own set of specifications to suit their needs. “Whether it’s Google, Microsoft, Amazon, Facebook, LinkedIn, Netflix,” content and application providers are most likely to “know where they need to go for themselves.”

Customers will require modular, tailored solutions to deploy capacity at the Edge. Depending on their purpose, the configuration, deployment, security layer and redundancy requirements will vary. “The intent here is that our components may have similarities, but our solutions are very different.”

Long said cell tower applications are “most likely to be a smaller containerized modular solution.” But it’s not a single solution: “Not all cell sites are cell towers; they’re called buildings with antennas on top,” and different products might meet the need for data centers in storage rooms or office blocks.

Living at the Edge Award DCD>Awards | 2018

Open for Submissions

From a rooftop in Manhattan to a car park in New Delhi, a factory floor in Frankfurt to a mobile mast in Manila, the Edge has many definitions. This updated award category seeks to celebrate the practice of building innovative data centers at the Edge, wherever that may be. bit.ly/DCDAwardsEdge


Lessons from history

To understand what the Edge will entail, and what forms it will take, businesses would do well to learn from a previous technological transition telcos had to adapt to, Jason Hoffman, CEO of Deutsche Telekom subsidiary MobiledgeX, said at DCD>Webscale this June.

"If we look at what’s happened in mobile networks over the last 25 years, it started with people talking to each other, moved to messaging, and now we also consume video on it - the vast majority of network traffic today is video consumption."

This pattern will be replicated in the new Edge world, he said, but with some crucial updates: "There's going to be the equivalent of messaging for this world but it's going to be between machines instead of human beings.

"And then there’s going to be a video analog, and that's going to be a tremendous change, as it's going to be video coming into the network instead of going out. We'll have devices and machines out there generating video - automobiles, security cameras, body cameras, things like that."

How we build the Edge, "from a network design to a base infrastructure to what type of fundamental capabilities will start to show up," will be defined by this, Hoffman believes. "And that’s going to be a major transformation for the industry. If you just think about how the telcos went from voice calls to video, that was a big deal."

4th Annual

> Colo+Cloud | Dallas October 30 2018 // Hyatt Regency Dallas

The future of digital infrastructure for Colo, Telco, Cloud & MSP EDGE FOCUS DAY

October 29 2018 Building the Edge

DCD>Debates How is the ‘Edge’ transforming the Data Center Services Sector?

Oct 2 11.00am CST

As a prequel to this event, we are hosting a live webinar discussion starring the conference keynote speakers. If you are unable to join us in Dallas, or want to get ahead of the game please join us. Watch the full debate:


Principal Sponsor Lead Sponsors

To sponsor or exhibit, contact: alastair.gillies@datacenterdynamics.com @DCDConverged #DCDColoCloud

Datacenter Dynamics

For more information visit www.DCD.events

Global Content Partner

DCD Global Discussions

Advertorial: Vertiv

Enabling a Future of Edge-to-Core Computing

The growth in edge computing represents one of the bigger challenges many organizations will face in the coming years.


The growth in edge computing represents one of the bigger challenges many organizations will face in the coming years. Cisco projects that there will be 23 billion connected devices by 2021, while Gartner and IDC predict 20.8 billion and 28.1 billion by 2020 respectively. These devices have the potential to generate huge volumes of data.

Of course, this represents only one side of the edge equation. While the amount of data being generated at the edge is growing, so too is the amount of data being consumed. According to the Cisco Visual Networking Index, global IP traffic is expected to grow from 1.2 zettabytes in 2016 to 3.3 zettabytes by 2021. Video, which accounted for 73 percent of IP data in 2016, is expected to grow to 82 percent by 2021.
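The growth rates implied by those Cisco figures can be made concrete with a quick calculation. This is a back-of-the-envelope sketch; the only inputs are the 2016 and 2021 totals and video shares quoted above:

```python
# Implied compound annual growth rate (CAGR) of global IP traffic,
# from the Cisco VNI figures quoted above: 1.2ZB (2016) to 3.3ZB (2021).
traffic_2016_zb = 1.2
traffic_2021_zb = 3.3
years = 2021 - 2016

cagr = (traffic_2021_zb / traffic_2016_zb) ** (1 / years) - 1
print(f"Implied traffic CAGR: {cagr:.1%}")  # roughly 22% per year

# Video's share grows from 73% to 82%, so video traffic itself
# grows faster than the total:
video_growth = (traffic_2021_zb * 0.82) / (traffic_2016_zb * 0.73)
print(f"Video traffic multiplies by roughly {video_growth:.1f}x over five years")
```

In other words, video traffic roughly triples over the period, outpacing overall traffic growth.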

The edge will play a role both in enabling the effective use of data from connected devices and in delivering data to remote users and devices. Part of the challenge will be one of scale: how quickly can we deploy the distributed computing infrastructure required to support these rapidly emerging use cases? But there is also another challenge to be considered. In many cases, the growth of the edge will require a fundamental shift from the current core-to-edge computing model, in which the majority of data flows from the core to the edge, to a model that reflects more interaction and more movement of data from edge-to-core.


To download the full report on edge archetypes, and access other edge resources, visit www.VertivCo.com/Edge

Taking a Data-Centric Approach to Edge Infrastructure

Despite the magnitude of its impact, there exists today a lack of clarity associated with the term edge computing and all that it encompasses. Consider the example of a similarly broad term: cloud computing. When IT managers make decisions about where their workloads will reside, they need to be more precise than "in the cloud." They need to decide whether they will use an on-premises private cloud, hosted private cloud, infrastructure-as-a-service, platform-as-a-service or software-as-a-service. That does more than facilitate communication; it facilitates decision making.

Vertiv has attempted to bring similar clarity to edge computing by conducting an extensive audit and analysis of existing and emerging edge use cases. What emerged was the recognition of a unifying factor that edge use cases could be organized around. Edge applications, by their nature, have a data-centric set of workload requirements. This data-centric approach, filtered through requirements for availability, security and the nature of the application, proved to be central to understanding and categorizing edge use cases.

The result of our analysis was the identification of four archetypes that can help guide decisions regarding the infrastructure required to support edge applications.

Defining Edge Archetypes

1. Data Intensive
The Data Intensive Archetype encompasses use cases where the amount of data is so large that layers of storage and computing are required between the endpoint and the cloud to reduce bandwidth costs or latency. Key use cases within this archetype include High-Definition Content Delivery and IoT applications, such as Smart Homes, Buildings, Factories and Cities. With bandwidth the limiting factor in Data Intensive use cases, these applications typically scale with the need for more data to improve the quality of service.

2. Human-Latency Sensitive
The Human-Latency Sensitive Archetype includes applications where latency negatively impacts the experience of humans using a technology or service, requiring compute and storage close to the user. Human-Latency Sensitive use cases fall into two categories: those which are already widely used but supported primarily by cloud or core computing, such as natural language processing, and those that are emerging, such as Smart Security and Smart Retail. In both cases, edge infrastructure will be required to enable these use cases to scale with the growth of the businesses or applications that depend on them.

3. Machine-to-Machine Latency Sensitive
The Machine-to-Machine Latency Sensitive Archetype, while similar to the Human-Latency Sensitive Archetype in that low latency is the defining factor in both, is even more dependent on edge infrastructure. Machines not only process data faster than humans, requiring lower latency; they are also less able to adapt to lags created by latency. As a result, where the cloud may be able to support Human-Latency Sensitive use cases to a certain point as they scale, Machine-to-Machine Latency Sensitive use cases are enabled by edge infrastructure.

4. Life Critical
The Life Critical Archetype includes use cases that impact human health or safety, and so have very low latency and very high availability requirements. Autonomous Vehicles are probably the best-known use case within the Life Critical Archetype. Based on the rapid developments that have occurred, and the amount of investment this use case is attracting, it is now easy to envision a future in which Autonomous Vehicles are commonplace. Yet we have also had recent reminders of both the criticality of this use case and the challenges that must be addressed before that future vision becomes a reality. Once the technology matures and adoption reaches a tipping point, this use case could scale extremely quickly as drivers convert to autonomous vehicles.

These four archetypes are described in more detail in the Vertiv report, Defining the Edge: Four Edge Archetypes and their Technology Requirements. They represent just the first step in defining the infrastructure needed to support the future of edge computing, but it is not one that should be understated. When we shared the archetypes with industry analyst Lucas Beran of IHS Markit, he commented: "The Vertiv archetype classification for the edge is critical. This will help the industry define edge applications by characteristics and challenges and move toward identifying common infrastructure solutions."

Edge computing has the potential to reshape the network architectures we've lived with for the last twenty years. Working together, we can ensure that process happens as efficiently and intelligently as possible.

About Vertiv
Vertiv designs, builds and services critical infrastructure that enables vital applications for data centers, communication networks, and commercial and industrial facilities. For a more detailed discussion of edge archetypes, read the report, Four Edge Archetypes and their Technology Requirements.

Martin Olsen, Vice President, Global Edge and Integrated Solutions
E: Martin.Olsen@VertivCo.com
VertivCo.com
Martin Olsen brings more than 15 years of experience in global mission-critical infrastructure design, innovation and operation to his role as vice president of global edge and integrated solutions at Vertiv.

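The archetype taxonomy above lends itself to a simple decision rule. The sketch below is a hypothetical illustration (the function name and thresholds are ours, not Vertiv's) of how the filtering criteria described in the advertorial - latency sensitivity, data volume and criticality - might map a workload to an archetype:

```python
# Hypothetical helper mapping workload requirements to one of the four
# Vertiv edge archetypes. The decision order is illustrative only.
def classify_edge_archetype(latency_sensitive: bool,
                            machine_to_machine: bool,
                            life_critical: bool,
                            data_intensive: bool) -> str:
    if life_critical:
        # Very low latency AND very high availability, e.g. autonomous vehicles
        return "Life Critical"
    if latency_sensitive:
        if machine_to_machine:
            # Machines cannot adapt to lag, so edge infrastructure is required
            return "Machine-to-Machine Latency Sensitive"
        # Latency degrades the human experience, e.g. natural language processing
        return "Human-Latency Sensitive"
    if data_intensive:
        # Bandwidth, not latency, is the limiting factor, e.g. HD content delivery
        return "Data Intensive"
    return "Core/cloud may suffice"

# Example: a smart-factory telemetry pipeline constrained by bandwidth
print(classify_edge_archetype(False, True, False, True))  # Data Intensive
```

Note that Life Critical takes precedence: a use case can be both latency-sensitive and data-heavy, but health and safety requirements dominate the infrastructure decision.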
The one percent Edge

Dean Bubley, Disruptive Analysis

Mobile edge devices, and nodes to support them, will represent less than one percent of the power of the cloud, says Dean Bubley


I keep hearing that Edge computing is the next big thing - and specifically, in-network edge computing models such as MEC (see box for a list of different types of "Edge"). I hear it from network vendors, telcos, some consultants, blockchain-based startups and others. But, oddly, very rarely from developers of applications or devices.

My view is that it's important, but it's also being overhyped. Network-edge computing will only ever be a small slice of the overall cloud and computing domain. And because it's small, it will likely be an addition to (and integrated with) web-scale cloud platforms. We are very unlikely to see Edge-first providers become "the next Amazon Web Services, only distributed."

Why do I think it will be small? Because I've been looking at it through a different lens to most: power. It's a metric used by those at the top and bottom ends of the computing industry, but only rarely by those in the middle, such as network owners. This means they're ignoring a couple of orders of magnitude.

Cloud computing involves huge numbers of servers, processors, equipment racks, and square meters of floorspace. But the figure that gets used most among data-center folk is probably power consumption in watts - or more usually kW, MW or GW. Power is useful, as it covers the needs not just of compute CPUs and GPUs, but also of the storage and networking elements in data centers. Organizing and analyzing information is ultimately about energy, so it's a valid, top-level metric.

The world's big data centers have a total power consumption of roughly 100GW. A typical facility might have a capacity of 30MW, but the world's largest data centers can use over 100MW each, and there are plans for locations with 600MW or even 1GW. They're not all running at full power all the time, of course.

This growth is driven by an increase in the number of servers and racks, but it also reflects rising power consumption for each server, as chips get more powerful. Most racks use 3-5kW of power, but some can go as high as 20kW if power - and cooling - is available. So "the cloud" needs 100GW, a figure that is continuing to grow rapidly. Meanwhile, smaller regional data centers in second- and third-tier cities are growing, and companies and governments often have private data centers as well, using about 1MW to 5MW each.
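Those figures imply a staggering physical scale, which a quick division makes plain (a rough sketch using only the ranges above):

```python
# Rough scale of 'the cloud' implied by the figures above.
cloud_power_w = 100e9          # ~100GW across the world's big data centers
rack_power_w = (3_000, 5_000)  # most racks draw 3-5kW

# Dividing total power by per-rack power bounds the implied rack count
racks = tuple(int(cloud_power_w / p) for p in reversed(rack_power_w))
print(f"Implied rack count: {racks[0]:,} to {racks[1]:,}")  # tens of millions

# A typical 30MW facility at a mid-range 4kW/rack holds roughly:
print(f"Racks per 30MW facility: {int(30e6 / 4_000):,}")
```

The point of the exercise is the order of magnitude: tens of millions of racks worldwide, thousands per facility.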

The "device edge" is the other end of the computing power spectrum. When devices use batteries, managing the power budget down to watts or milliwatts is critical. Sensors might use less than 10mW when idle, and 100mW when actively processing data. A Raspberry Pi might use 0.5W, a smartphone processor might use 1-3W, an IoT gateway (controlling various local devices) could consume 5-10W, a laptop might draw 50W, and a decent crypto mining rig might use 1kW. Beyond this, researchers are working on sub-milliwatt vision processors, and Arm has designs able to run machine-learning algorithms on very low-powered devices.

But perhaps the most interesting "device edge" is the future top-end Nvidia Pegasus board, aimed at self-driving vehicles. It is a 500W supercomputer. That might sound like a lot of electricity, but it's still less than one percent of the engine power on most cars. A top-end Tesla P100D puts over 500kW to the wheels in "ludicrous mode." Cars' aircon alone might use 2kW.

Although relatively small, these device-edge computing platforms are numerous. There are billions of phones, and hundreds of millions of vehicles and PCs. Potentially, we'll get tens of billions of sensors. So at one end we have milliwatts, multiplied by millions of devices, and at the other end we have gigawatts in a few centralized facilities.
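The milliwatts-times-millions arithmetic is easy to make concrete. Using the per-device figures above (fleet sizes are illustrative round numbers, not data from the article):

```python
# Aggregate power of device-edge fleets, using per-device figures from the text.
# Fleet sizes are illustrative round numbers: (count, watts per device).
fleets = {
    "sensors (100mW active)":  (10e9, 0.1),    # tens of billions of sensors
    "smartphones (~2W)":       (5e9,  2.0),    # billions of phones
    "PCs/laptops (~50W)":      (1e9,  50.0),
    "vehicles (500W Pegasus)": (10e6, 500.0),  # almost-autonomous vehicles
}

for name, (count, watts) in fleets.items():
    print(f"{name}: {count * watts / 1e9:.0f} GW aggregate")

# Compare: a single planned hyperscale campus may reach 1GW on its own,
# and 'the cloud' as a whole runs on roughly 100GW.
```

Even tiny per-device budgets aggregate to tens of gigawatts once multiplied across billions of end-points, which is exactly the comparison the power lens is designed to expose.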


So what about the middle, where the network lives? There are many companies talking about MEC (Multi-access Edge Computing), with servers designed to run at cellular base stations, network aggregation points, and also in fixed-network nodes. Some are "micro data centers" capable of holding a few racks of servers near the largest cell towers. The very largest might be 50kW shipping-container sized units, but those will be pretty rare and will obviously need a dedicated power supply.

Definitions of the Edge
• Data-center companies call smaller sites in second/third-tier cities the Edge.
• Fixed and cable operators think of central offices (exchanges) as mini data centers at the Edge, or perhaps white-box gateways/servers on business premises.
• Mobile operators think of servers at cell-sites or aggregation points as the Edge. Some vendors pitch indoor small cells as the Edge.
• IT companies refer to small servers at company sites, linked to cloud platforms, as the Edge.
• Mesh-network vendors/SPs think of gateways or access points as the Edge.
• IoT companies think of localized controllers, or gateways for clusters of devices, as the Edge.
• Device and silicon vendors think of a smart end-point (e.g. a car, a smartphone, or even a Raspberry Pi) as the Edge.
• Some cloud players also have a "software Edge," such as Amazon's Greengrass, which can be implemented in various physical locations.

[Chart: the power spectrum of computing, from the device edge through the network edge (the typical telco domain) to enterprise/regional data centers - Copyright Disruptive Analysis Ltd 2018]
• EQSCALE vision processing chip = 0.2mW
• Raspberry Pi Zero = 0.5W
• GE Mini Field Agent IoT gateway = 4W
• Virtuosys Edge Platform = 30W
• NVIDIA Pegasus = 500W
• Normal mobile cell-tower = 1kW
• Standard server rack = 5kW
• EdgeMicro = 50kW
• Typical DC = 1-5MW
• Switch Las Vegas = 300MW

The actual power supply available to a typical cell tower might be 1-2kW. The radio gets first call on it, but if perhaps 10 percent could be dedicated to a compute platform (a generous assumption), we get 100-200W. In other words, cell tower Edge-nodes mostly can't support a container data center, and most such nodes will be less than half as powerful as a single car's computer. Cellular small-cells, home gateways, cable street-side cabinets or enterprise "white boxes" will have even smaller modules: for these, 10W to 30W is more reasonable.
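The cell-tower arithmetic above is worth spelling out. This is a sketch using only the article's own figures:

```python
# Power budget for compute at a typical cell tower, per the figures above.
tower_supply_w = (1_000, 2_000)  # 1-2kW total supply at the mast
compute_share = 0.10             # generous: 10% left after the radio's needs

budget = tuple(int(s * compute_share) for s in tower_supply_w)
print(f"Compute budget per tower: {budget[0]}-{budget[1]}W")  # 100-200W

# Compare with a single car's computer and a containerized micro data center:
pegasus_w = 500        # Nvidia Pegasus, self-driving compute board
container_w = 50_000   # largest shipping-container MEC units
print(f"A tower node is at most {budget[1] / pegasus_w:.0%} of one Pegasus")
print(f"...and {budget[1] / container_w:.2%} of a 50kW container")
```

Even on generous assumptions, the best case is 40 percent of a single vehicle's compute board, which is the basis for the "less than half as powerful as a single car's computer" claim.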

Five years from now, there could be 150GW of large-scale data centers, plus a decent number of midsize regional data centers, plus private enterprise facilities. And we could have 10 billion phones, PCs, tablets and other small end-points contributing to a distributed edge. We might also have 10 million almost-autonomous vehicles, with a lot of compute.

Now, imagine 10 million Edge compute nodes, at cell sites large and small, built into Wi-Fi APs or controllers, and perhaps in cable/fixed streetside cabinets. They will likely have power ratings between 10W and 300W, although the largest will be few in number. Choose 100W on average, for a simpler calculation. And let's add in 20,000 container-sized 50kW units, or repurposed central-offices-as-data-centers, as well. On this optimistic assumption (see box: Energy for the Edge), we end up with a network edge which consumes less than one percent of total aggregate compute capability. With more pessimistic assumptions, it might easily be just 0.1 percent.

Admittedly this is a crude analysis. A lot of devices will be running idle most of the time, and laptops are often switched off entirely. But equally, network-edge computers won't be running at 100 percent, 24x7 either. At a rough, order-of-magnitude level, anything more than one percent of total power will simply not be possible, unless there are large-scale upgrades to the network infrastructure's power sources, perhaps installed at the same time as backhaul upgrades for 5G, or deployment of FTTH.

Could this 0.1-1 percent of computing be of such pivotal importance that it brings everything else into its orbit and control? Could the "Edge" really be the new frontier? I think not. In reality, the reverse is more likely. Either device-based applications will offload certain workloads to the network, or the hyperscale clouds will distribute certain functions. There will be some counter-examples, where the network-edge is the control point for certain verticals or applications - say some security functions, as well as an evolution of today's CDNs. But will IoT management, or AI, be concentrated in these Edge nodes? It seems improbable.

There will be almost no applications that run only in the network-edge - it'll be used just for specific workloads or microservices, as a subset of a broader multi-tier application. The main compute heavy-lifting will be done on-device, or on-cloud. Collaboration between Edge compute providers and industry/hyperscale cloud will be needed, as the network-edge will only be a component in a bigger solution, and will only very rarely be the most important component.

One thing is definite: mobile operators won't become distributed quasi-Amazons, running image-processing for all nearby cars or Industry 4.0 robots in their networks, linked via 5G. This landscape of compute resources may throw up some unintended consequences. Ironically, it seems more likely that a future car's hefty computer, and abundant local power, could be used to offload tasks from the network, rather than vice versa.

Dean Bubley is founder of Disruptive Analysis www.disruptive-analysis.com

Energy for the Edge
• 150GW large data centers
• 50GW regional and corporate data centers
• 20,000 x 50kW = 1GW big/aggregation-point "network-edge"
• 10m x 100W = 1GW "deep" network-edge nodes
• 1bn x 50W = 50GW of PCs
• 10bn x 1W = 10GW "small" device-edge compute nodes
• 10m x 500W = 5GW of in-vehicle compute nodes
• 10bn x 100mW = 1GW of sensors
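Tallying the estimates in the box gives the headline number directly. A sketch: the inputs are the box's own figures, and the "network edge" is taken as the two network-edge lines:

```python
# Aggregate the 'Energy for the Edge' estimates (in GW) and compute the
# network-edge share of total compute power.
estimates_gw = {
    "large data centers":             150,
    "regional and corporate DCs":      50,
    "aggregation-point network-edge":   1,  # 20,000 x 50kW
    "deep network-edge nodes":          1,  # 10m x 100W
    "PCs":                             50,  # 1bn x 50W
    "small device-edge nodes":         10,  # 10bn x 1W
    "in-vehicle compute":               5,  # 10m x 500W
    "sensors":                          1,  # 10bn x 100mW
}

network_edge = (estimates_gw["aggregation-point network-edge"]
                + estimates_gw["deep network-edge nodes"])
total = sum(estimates_gw.values())

print(f"Network edge: {network_edge}GW of {total}GW total")
print(f"Share: {network_edge / total:.2%}")  # under one percent
```

The network edge works out to 2GW of roughly 268GW, about 0.75 percent, which is where the "less than one percent" figure in the article's argument comes from.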

Issue 29 • August/September 2018 15

