5G HANDBOOK
MAY 2022
A SUPPLEMENT TO DESIGN WORLD

Private 5G: What is it? How does it work? Page 6
How RedCap fits into 5G and IoT Page 10
Open RAN rewrites network-testing rules Page 14




5G HANDBOOK

5G IN 2022: AN ENGINEER'S PERSPECTIVE
Welcome to EE World's 2022 5G Handbook. We gathered a new collection of 15 articles, all of which appear in the electronic edition. The articles in this year's handbook look at 5G across hardware and software. If you're an IoT device designer who needs to connect to a private 5G network, you've come to the right place. If you're a network engineer, we've got plenty for you, too.

MARTIN ROWE • EDITOR

Our call for articles included a list of some fifty suggested topics. To our surprise, several contributors chose to cover private networks. Those responses served as a wakeup call. Here's what you'll find in the ebook edition (four of the articles also appear in print). "Private 5G: What is it? How does it work?" covers the basics of private networks. "Customize private 5G networks for application requirements" shows you how to create a private network that fits specific needs. You may think of private networks as mostly software adaptations of public networks, yet there are plenty of physical aspects to consider. "Optimize RF signals in private networks" discusses how factory walls and objects interfere with 5G signals. Small IoT devices that connect to public and private networks don't need 5G's full capabilities. "How RedCap fits into 5G and IoT" explains how RedCap devices support data rates of 85 Mb/sec, a performance grade serving a wide range of industrial and IoT use cases. IoT devices need antennas. That's where "5G reshapes mobile RF design strategies" comes in to explain how mmWave signals call for new RF design skills. Those radios also depend on power amplifiers. "How DPD improves power amplifier efficiency" tells you what radios do to reduce their carbon footprint.

5G's mmWave requires test engineers to think differently about production test. Integration forces test engineers to develop over-the-air tests. That includes smartphones in "Mitigate mmWave test costs in 5G smartphones." We also explore how to test Open RAN components in "Open RAN rewrites network-testing rules." Between the two current 5G frequency ranges sits 12 GHz, which could add to 5G's long list of frequencies. "5G demands for 12 GHz spectrum call for consensus on testing" looks at what might be the future of 5G radios and tests. Networks need software, lots of it. "Network automation takes the work out of upgrades" and "Orchestration at the edge reduces network latency" get into how software operates those networks. We've heard much about 5G's reduced latency compared with LTE, but is it all hype? Where's the data? Now we have "Measurements show 5G improves latency in public networks" to show the improvement, which depends on traffic flow. For the second year, we have two articles on network timing. "How timing design and management synchronizes 5G networks" and "Synchronizing 5G networks brings timing challenges" cover different aspects of network timing. 5G continues to evolve, with enhancements recently specified and more in development. Those standards will produce myriad products for radios, networks, and IoT devices. Stay with EE World for the latest developments and design techniques.

DESIGN WORLD — EE NETWORK | 5 • 2022 | eeworldonline.com | designworldonline.com



DESIGN WORLD

EDITORIAL VP, Editorial Director Paul J. Heney pheney@wtwhmedia.com @wtwh_paulheney Senior Contributing Editor Leslie Langnau llangnau@wtwhmedia.com @dw_3Dprinting Executive Editor Leland Teschler lteschler@wtwhmedia.com @dw_LeeTeschler Senior Editor Aimee Kalnoskas akalnoskas@wtwhmedia.com @eeworld_aimee Editor Martin Rowe mrowe@wtwhmedia.com @measurementblue Executive Editor Lisa Eitel leitel@wtwhmedia.com @dw_LisaEitel Senior Editor Miles Budimir mbudimir@wtwhmedia.com @dw_Motion Senior Editor Mary Gannon mgannon@wtwhmedia.com @dw_MaryGannon


CREATIVE SERVICES & PRINT PRODUCTION VP, Creative Services Mark Rook mrook@wtwhmedia.com @wtwh_graphics Art Director Matthew Claney mclaney@wtwhmedia.com @wtwh_designer Senior Graphic Designer Allison Washko awashko@wtwhmedia.com @wtwh_allison

Graphic Designer Mariel Evans mevans@wtwhmedia.com @wtwh_mariel Director, Audience Development Bruce Sprague bsprague@wtwhmedia.com

IN-PERSON EVENTS


MARKETING VP, Digital Marketing Virginia Goulding vgoulding@wtwhmedia.com @wtwh_virginia Digital Marketing Specialist Francesca Barrett fbarrett@wtwhmedia.com @Francesca_WTWH Digital Design Manager Samantha King sking@wtwhmedia.com Marketing Graphic Designer Hannah Bragg hbragg@wtwhmedia.com Webinar Manager Matt Boblett mboblett@wtwhmedia.com Webinar Coordinator Halle Kirsh hkirsh@wtwhmedia.com Webinar Coordinator Kim Dorsey kdorsey@wtwhmedia.com

Events Manager Jen Osborne jkolasky@wtwhmedia.com @wtwh_Jen

FINANCE

Event Marketing Specialist Olivia Zemanek ozemanek@wtwhmedia.com

Controller Brian Korsberg bkorsberg@wtwhmedia.com

Event Coordinator Alexis Ferenczy aferenczy@wtwhmedia.com

Accounts Receivable Specialist Jamila Milton jmilton@wtwhmedia.com


ONLINE DEVELOPMENT & PRODUCTION Web Development Manager B. David Miyares dmiyares@wtwhmedia.com @wtwh_WebDave Senior Digital Media Manager Patrick Curran pcurran@wtwhmedia.com @wtwhseopatrick Front End Developer Melissa Annand mannand@wtwhmedia.com Software Engineer David Bozentka dbozentka@wtwhmedia.com Digital Production Manager Reggie Hall rhall@wtwhmedia.com Digital Production Specialist Nicole Lender nlender@wtwhmedia.com Digital Production Specialist Elise Ondak eondak@wtwhmedia.com

VIDEOGRAPHY SERVICES Video Manager Bradley Voyten bvoyten@wtwhmedia.com @bv10wtwh Videographer Garrett McCafferty gmccafferty@wtwhmedia.com Videographer Kara Singleton ksingleton@wtwhmedia.com

PRODUCTION SERVICES Customer Service Manager Stephanie Hulett shulett@wtwhmedia.com Customer Service Representative Tracy Powers tpowers@wtwhmedia.com Customer Service Representative JoAnn Martin jmartin@wtwhmedia.com Customer Service Representative Renee Massey-Linston renee@wtwhmedia.com

Digital Production Specialist Nicole Johnson njohnson@wtwhmedia.com VP, Strategic Initiatives Jay Hopper jhopper@wtwhmedia.com

Associate Editor Mike Santora msantora@wtwhmedia.com @dw_MikeSantora


WTWH Media, LLC 1111 Superior Ave., Suite 2600 Cleveland, OH 44114 Ph: 888.543.2447 FAX: 888.543.2447


DESIGN WORLD does not pass judgment on subjects of controversy nor enter into dispute with or between any individuals or organizations. DESIGN WORLD is also an independent forum for the expression of opinions relevant to industry issues. Letters to the editor and by-lined articles express the views of the author and not necessarily of the publisher or the publication. Every effort is made to provide accurate information; however, publisher assumes no responsibility for accuracy of submitted advertising and editorial information. Non-commissioned articles and news releases cannot be acknowledged. Unsolicited materials cannot be returned nor will this organization assume responsibility for their care. DESIGN WORLD does not endorse any products, programs or services of advertisers or editorial contributors. Copyright© 2022 by WTWH Media, LLC. No part of this publication may be reproduced in any form or by any means, electronic or mechanical, or by recording, or by any information storage or retrieval system, without written permission from the publisher. SUBSCRIPTION RATES: Free and controlled circulation to qualified subscribers. Non-qualified persons may subscribe at the following rates: U.S. and possessions: 1 year: $125; 2 years: $200; 3 years: $275; Canadian and foreign, 1 year: $195; only US funds are accepted. Single copies $15 each. Subscriptions are prepaid, and check or money orders only. SUBSCRIBER SERVICES: To order a subscription or change your address, please email: designworld@omeda.com, or visit our web site at www.designworldonline.com POSTMASTER: Send address changes to: Design World, 1111 Superior Ave., Suite 2600, Cleveland, OH 44114



CONTENTS • 5G HANDBOOK • MAY 2022

5G IN 2022: AN ENGINEER'S PERSPECTIVE

PRIVATE 5G: WHAT IS IT? HOW DOES IT WORK?
PRIVATE NETWORKS, WHETHER OPERATED BY USERS OR BY WIRELESS CARRIERS, REQUIRE RADIOS, ADDRESSING, TIMING, AND AUTOMATION TO MAKE THEM RUN.

HOW REDCAP FITS INTO 5G AND IoT
TARGETED AT IOT APPLICATIONS, THE REDUCED-CAPABILITY (REDCAP) SPECIFICATION IN 5G WILL SUPPORT A WIDE RANGE OF DEVICES AND APPLICATIONS THAT NEED LOW LATENCY AND HIGH RELIABILITY BUT NOT HIGH SPEED.

OPEN RAN REWRITES NETWORK-TESTING RULES
OPEN RADIO ACCESS NETWORKS (OPEN RAN) BRING NEW TESTING RESPONSIBILITIES THAT SERVICE PROVIDERS HAVEN'T HAD TO CONTEND WITH BEFORE.

HOW DPD IMPROVES POWER AMPLIFIER EFFICIENCY
DIGITAL PREDISTORTION COMPENSATES FOR AN AMPLIFIER'S NONLINEARITIES, LETTING IT OPERATE IN ITS NONLINEAR REGION OF MAXIMUM POWER EFFICIENCY.

ORCHESTRATION AT THE EDGE REDUCES NETWORK LATENCY
EDGE COMPUTING BRINGS BENEFITS TO USERS BY REDUCING NETWORK LATENCY, THUS IMPROVING SERVICES, BUT NEEDS MANAGEMENT ORCHESTRATION. HERE, WE DEMONSTRATE HOW ORCHESTRATION IMPROVES LATENCY IN AN AUTOMOTIVE EDGE-COMPUTING USE CASE BY REALLOCATING NETWORK RESOURCES.

NETWORK AUTOMATION TAKES THE WORK OUT OF UPGRADES
TELECOM NETWORKS CAN USE AUTOMATION TO PUSH SOFTWARE UPGRADES INTO THE FIELD WITHOUT THE NEED FOR HUMAN INTERVENTION.

MEASUREMENTS SHOW 5G IMPROVES LATENCY IN PUBLIC NETWORKS
MEASUREMENTS ON PUBLIC NETWORKS AT OUR FACILITIES DEMONSTRATE HOW 5G'S LOWER LATENCY COMPARED TO LTE CAN IMPROVE INDUSTRIAL APPLICATIONS.

CUSTOMIZE PRIVATE 5G NETWORKS FOR APPLICATION REQUIREMENTS
5G PRIVATE NETWORKS SHOW POTENTIAL FOR A WIDE RANGE OF DIVERSE APPLICATIONS. ENGINEERS CAN IMPLEMENT FEATURES TO ADDRESS THE REQUIREMENTS OF MARKETS SUCH AS MANUFACTURING AND NON-TERRESTRIAL NETWORKS.

5G RESHAPES MOBILE RF DESIGN STRATEGIES
STEEP LEARNING CURVES IN RF mmWAVE ANTENNA DESIGN DEMAND COLLABORATION AND TECHNOLOGY UNDERSTANDING BEYOND SUB-6 GHZ STRATEGIES.

HOW TIMING DESIGN AND MANAGEMENT SYNCHRONIZES 5G NETWORKS
THE INFRASTRUCTURE NEEDED TO DELIVER COST-EFFECTIVE, RELIABLE, AND SECURE TIMING THROUGHOUT CELLULAR NETWORKS NEEDS PROPER ARCHITECTURE, DESIGN, AND MANAGEMENT. TIGHTER TIME-ACCURACY DEMANDS FOR 5G NETWORKING EQUIPMENT REQUIRE RELIABLE AND ROBUST TIMING ARCHITECTURES THAT GUARANTEE NETWORK PERFORMANCE.

SYNCHRONIZING 5G NETWORKS BRINGS TIMING CHALLENGES
FREQUENCY SYNCHRONIZATION IN THE RADIO ACCESS NETWORK (RAN) MINIMIZES DISTURBANCES AT THE AIR INTERFACE, ENSURES MOBILE HANDOVER BETWEEN RADIO BASE STATIONS, AND FULFILLS LEGAL AND REGULATORY REQUIREMENTS ASSOCIATED WITH A FREQUENCY LICENSE.

5G DEMANDS FOR 12 GHZ SPECTRUM CALL FOR CONSENSUS ON TESTING
BEFORE THE TESTING BEGINS, SEVERAL POLICY AND BUSINESS ISSUES NEED RESOLUTION. STAKEHOLDERS MUST AGREE ON ONE TESTING METHODOLOGY, WHICH SHOULD COMBINE MODELING AND REAL-WORLD MEASUREMENTS.

OPTIMIZE RF SIGNALS IN PRIVATE NETWORKS
SIMULATIONS AND SIGNAL MEASUREMENTS CAN PROVIDE THE INSIGHT YOU NEED TO CREATE A FUNCTIONAL PRIVATE NETWORK FOR INDUSTRY AND FACTORY USE.

MITIGATE mmWAVE TEST COSTS IN 5G SMARTPHONES
THE ECONOMICS OF MANUFACTURING TEST DIRECTLY AFFECT HOW YOU PLAN FOR THE RIGHT NUMBER OF TEST STATIONS AND TEST SITES. ADDING mmWAVE TESTING MAKES THE DIFFERENCE.

BATTERY BACKUP CHEMISTRIES FOR 5G SMALL CELL SITES
AS THE NUMBER OF CELL SITES INCREASES AND THEIR SIZES DECREASE, ENGINEERS HAVE OPTIONS TO CONSIDER FOR BATTERY BACKUP. DIFFERING BATTERY CHEMISTRIES OFFER MORE CHOICES AND DIFFERENT PERFORMANCE LEVELS. SELECTING THE RIGHT BATTERY CHEMISTRY FOR EACH APPLICATION IS CRITICAL TO ENSURE RELIABLE, LONG-LASTING, AND COST-EFFECTIVE POWER DELIVERY.



PRIVATE 5G: WHAT IS IT? HOW DOES IT WORK?

PRIVATE NETWORKS, WHETHER OPERATED BY USERS OR BY WIRELESS CARRIERS, REQUIRE RADIOS, ADDRESSING, TIMING, AND AUTOMATION TO MAKE THEM RUN.

3GPP Release 16 introduced enhancements to 5G New Radio (NR) and the 5G core for private networks. With these additions, enterprises can now use private 5G networks. You may wonder what's special about private 5G when enterprises already use Wi-Fi-based LANs and LTE (4G) on specific bands such as b48 (the CBRS band). Here's how 3GPP-based private 5G works and how it provides alternative network connectivity.

A private wireless network is a wireless network operated for the sole purpose of connecting devices that belong to a private entity such as an enterprise. Either private entities or service providers can operate private networks; a private entity can also get connectivity as a service from a service provider. 3GPP TS 22.261 provides the following definitions related to private 5G networks:

Private communication: A communication between two or more UEs belonging to a restricted set of UEs.
Private network: An isolated network deployment that does not interact with a public network.
Private slice: A dedicated network slice deployment for the sole use by a specific third party.


SRIDHAR BHASKARAN, RAKUTEN SYMPHONY

Not all use cases require a 5G NR-based network. Use cases include:
• Connectivity of devices (laptops, PDAs, mobile phones) within an enterprise for communication and for use of general-purpose software applications.
• AR/VR applications in the enterprise.
• Connectivity of sensors, robots, actuators, and the applications controlling them in a factory.
• Connectivity of equipment and their controlling applications in a particular industry vertical (shipping, mining, logistics).
• Public safety networks.

3GPP wireless technologies and use cases
Table 1 shows how different 3GPP wireless technologies fit different use cases. 3GPP defines 5G NR, NB-IoT, LTE, and LTE-M for private wireless, covering a range of use cases from high-speed broadband to low latency and high device densities.

Table 1. Use cases and applicable wireless technologies.

Use case | Applicable 3GPP wireless technologies | Unique offerings
Enterprise communication and general-purpose applications (multimedia, email, document sharing, messaging, online meetings, etc.) | Private LTE (Band 48 / CBRS band) | Lightly licensed spectrum; SIM-based security; QoS-based scheduler.
AR/VR applications | 5G NR | Works on licensed, lightly licensed, and unlicensed spectrum; SIM-based security; tailored QoS characteristics (5QI) for AR/VR applications; low-latency scheduling.
Factory equipment, sensors, robots, actuators, precision equipment, and their control | 5G NR | Works on licensed, lightly licensed, and unlicensed spectrum; SIM-based security; low-latency scheduling (URLLC); deterministic QoS and latency.
Low-power, battery-operated IoT devices | NB-IoT or LTE-M with EPC core or 5G core | Licensed spectrum; low-power modes; extended coverage; SIM-based security.
Audio/video production, 4K video | NR (with 5G core) | Works on licensed, lightly licensed, and unlicensed spectrum; SIM-based security; flexible slot format; flexible UL/DL configuration; 3GPP end-to-end ecosystem for AV production.
Public safety | LTE or NR | Works on licensed, lightly licensed, and unlicensed spectrum; SIM-based security; end-to-end system for MCPTT, MCVideo, and public-safety/mission-critical applications.

Traditional cellular network technologies used a concept called the Public Land Mobile Network (PLMN) ID, which they broadcast over the air. Each device had a SIM card with its home network (home PLMN) credentials burnt in. The PLMN ID of a network consisted of a mobile country code (MCC) and a mobile network code (MNC). When a device latches on to a cell, it goes through a cell-selection and PLMN-selection process whereby it prefers to latch on to the same PLMN as its SIM card (if the cell broadcasts the same or an equivalent PLMN). Cellular operators had to register and get their PLMN ID (MNC) from a national assignment authority.

An identifier beyond the PLMN ID was needed because private networks deploy without depending on a number from a national assignment body. 3GPP therefore introduced the Standalone Non-Public Network (SNPN) ID, which private 5G networks can broadcast to users. Devices that belong to such a private network can latch on to the cells broadcasting this identifier. Thus, a private 5G network can operate standalone without relying on any service provider or cellular operator's network. Figure 1 shows the structure of the SNPN ID. Here, the NID Private Enterprise Number (PEN) is the same as the IANA-assigned enterprise ID typically used in IP communications.

Figure 1. An SNPN ID lets user devices latch on to the cells of a private network.

Public Network Integrated non-public networks (PNI-NPNs) are NPNs made available through cellular operator PLMNs by one of the following means:

• Providing a dedicated data network (DNN) behind the 3GPP user-plane gateways for the enterprise devices. In this method (Figure 2), the User Plane Function (UPF), the IP data networks behind the UPF, and the Session Management Function (SMF) controlling the UPF can be assigned separately for the private network DNN. The networking backbone (IP routes, shown as black lines) may, however, be shared with the public network provider and other private networks.

• Providing a dedicated network slice within the 3GPP network for the enterprise devices. In this method (Figure 3), the public network provider can assign dedicated instances of gNB-CU-UP, UPF, SMF, and IP data networks for the private network. These dedicated instances (shown as red and green lines) may use a dedicated and isolated IP route.

Figure 2. This Public Network Integrated Non-Public Network uses the separate DNN method to assign instances of the network.

Network slicing
Network slicing in 5G lets a user equipment (UE) device connect through a cellular network that provides traffic isolation up to the application servers, except for shared radio layers. This capability came in 3GPP Release 15. The following are some of the key network-slicing capabilities in 5G:
• Assigning a network-slice identifier and providing rules for the UE to select the right network-slice identifier.
• Assigning dedicated instances of user-plane functions and session-management functions for the slice, allowing for isolation of traffic within the network.
• Mapping of slices to transport-layer technologies (MPLS, SRv6, L2VPN, L3VPN, and so on).

In a PNI-NPN, slicing provides private access but does not prevent UEs from trying to access the network in areas where the UE lacks permission to use the network slice allocated for the NPN. 3GPP introduced closed access groups (CAGs) in Release 16, whereby public networks offer private network connectivity and broadcast a CAG ID. Only devices that have access credentials for that CAG ID can latch on to such cells, thus providing access restriction.
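To make the identifier structures above concrete, here's a minimal Python sketch. The field widths are assumptions drawn from 3GPP TS 23.003 (a PLMN ID is a 3-digit MCC plus a 2- or 3-digit MNC; an SNPN is identified by a PLMN ID together with a NID, rendered here as an 11-hex-digit string), and the example values are made up:

```python
# Illustrative sketch of the identifier structures described above.
# Field widths are assumptions based on 3GPP TS 23.003: a PLMN ID is a
# 3-digit MCC plus a 2- or 3-digit MNC; an SNPN is identified by a
# PLMN ID together with a NID (shown here as an 11-hex-digit string).

def plmn_id(mcc: str, mnc: str) -> str:
    assert len(mcc) == 3 and mcc.isdigit(), "MCC is 3 decimal digits"
    assert len(mnc) in (2, 3) and mnc.isdigit(), "MNC is 2 or 3 decimal digits"
    return mcc + mnc

def snpn_id(mcc: str, mnc: str, nid: str) -> str:
    # A standalone non-public network broadcasts PLMN ID + NID.
    assert len(nid) == 11 and all(c in "0123456789abcdef" for c in nid.lower())
    return plmn_id(mcc, mnc) + "-" + nid.lower()

# Hypothetical enterprise network; the PLMN and NID values are made up.
print(snpn_id("999", "99", "00000001AB2"))  # -> 99999-00000001ab2
```

A UE's cell-selection logic would compare such a broadcast identifier against the credentials provisioned on its SIM, as described above.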

Table 2. Transmit time intervals for each sub-carrier spacing.

Numerology / sub-carrier spacing | TTI | Applicable frequency range
μ=0, 15 kHz | 1 msec | FR1 (< 7.125 GHz)
μ=1, 30 kHz | 500 μsec | FR1 (< 7.125 GHz)
μ=2, 60 kHz | 250 μsec | FR1 (< 7.125 GHz) and FR2 (> 24.25 GHz)
μ=3, 120 kHz | 125 μsec | FR2 (> 24.25 GHz)
μ=4, 240 kHz | 62.5 μsec | FR2 (> 24.25 GHz); currently not used in practice


Private LAN and time-sensitive communications
Private LAN networks over a 5G wireless network became available with 3GPP Release 16. An Ethernet LAN can be created behind the UPF, and UEs may join those LAN networks through the 5G NR radio network. This is useful for low-latency, Ethernet-based LAN applications such as connecting factory machines and robots. Figure 4 shows how networks can use time-sensitive networking (TSN), where a grandmaster clock distributes timing and synchronization from the network to the UEs.

NR in unlicensed bands (NR-U)
3GPP has supported LTE in unlicensed spectrum since Release 13. LTE supported unlicensed access only as supplementary access, while the anchor carrier always remained a licensed carrier; hence it was called "license-assisted access." From Release 16, 5G NR supports the use of unlicensed spectrum in both assisted and standalone modes, which allows deployment of NR radios in standalone mode in unlicensed bands. NR-U is supported in the unlicensed bands up to 5 GHz (5150 to 5925 MHz) as well as in the 6 GHz band (5925 to 7125 MHz). At frequencies below 5 GHz, fair coexistence with existing unlicensed technologies such as Wi-Fi and LTE LAA is necessary. Bands above 6 GHz are a greenfield without coexistence issues, though regulatory aspects for this band are not yet fully established in many countries. These bands also support energy-detection-based channel access.
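The frequency ranges quoted above can be captured in a small lookup. This is a sketch only; the range boundaries and coexistence notes come straight from this section:

```python
# NR-U spectrum ranges (MHz) as quoted in the text above. Below ~6 GHz,
# NR-U must coexist with Wi-Fi and LTE LAA; the 6 GHz range is greenfield.

NR_U_RANGES = {
    "5 GHz unlicensed (coexists with Wi-Fi, LTE LAA)": (5150, 5925),
    "6 GHz unlicensed (greenfield)": (5925, 7125),
}

def classify_nr_u(freq_mhz: float) -> str:
    for name, (lo, hi) in NR_U_RANGES.items():
        if lo <= freq_mhz < hi:
            return name
    return "outside NR-U unlicensed ranges"

print(classify_nr_u(5200))   # falls in the 5 GHz unlicensed range
print(classify_nr_u(6500))   # falls in the 6 GHz unlicensed range
```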

Local-area data networking 5G lets UEs run over specific data networks called local area data networks (LADN) at specific location areas. The 5G core network can advertise the availability of a particular data network to the UE when it moves into a specific


Feature | Available since
PDCP duplication through SCells and SCG cells | Release 15; enhanced in Release 16 to support up to 4 logical channels
Conditional handovers | Release 16
Dual active protocol stack | Release 16
T312-based fast failure recovery for PCell | Release 16
Multi-TRP MIMO | Release 16
Fast recovery of the MCG link when the SCG link is still operational | Release 16
PDCCH enhancements: configurable DCI field sizes for improved reliability | Release 16

location area. This feature allows deploying private networks that are purpose-built for specific location-centric applications and advertised to UEs wherever they are available.

Figure 3. This Public Network Integrated Non-Public Network uses dedicated instances of gNB-CU-UP, UPF, SMF, and IP data network properties.

Low-latency scheduling
5G NR provides a flexible frame structure and numerologies (denoted by μ) at the physical layer to enable frame scheduling with variable latencies based on frequency range. This allows for different scheduling intervals (Transmit Time Interval, TTI) ranging from 125 μsec for the 120 kHz sub-carrier-spacing numerology to 1 msec for 15 kHz sub-carrier

Table 4. Spectrum allocations for several countries and regions.

Region | Band / Spectrum
USA | n48 (3550–3700 MHz): CBRS; n96 (5925–7125 MHz): unlicensed
Germany | n77/n78 (3700–3800 MHz): licensed for private use
France | n38 (2570–2620 MHz): licensed for private use
UK | n77 (3800–4200 MHz): licensed for private use
European Union | n96 (5900–6400 MHz): unlicensed
Japan and China | n79 (4400–5000 MHz): licensed for private use
South Korea | n96 (5925–7125 MHz): unlicensed

Table 3. Link reliability features found in 3GPP Releases 15 and 16.

spacing. Table 2 provides the TTIs available for the different numerologies. In addition, 5G NR supports mini-slot scheduling, whereby scheduling takes place at the sub-slot level. It also supports configured grants, flexible physical downlink control channel (PDCCH) to physical downlink shared channel (PDSCH) gaps, and flexible PDCCH to PUSCH gaps, providing complete control over uplink and downlink packet-transfer latencies. These features are useful for private 5G use cases such as industrial IoT and robotics that require precision and low latency.
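The TTI column of Table 2 follows directly from the numerology: each step up in μ doubles the sub-carrier spacing and halves the slot duration. A quick sketch reproduces the table:

```python
# Slot duration (TTI) per 5G NR numerology mu: 1 msec at mu=0,
# halved for each increment, while sub-carrier spacing doubles
# from its 15 kHz base (per Table 2).

def scs_khz(mu: int) -> int:
    return 15 * 2**mu

def tti_usec(mu: int) -> float:
    return 1000.0 / 2**mu

for mu in range(5):
    print(f"mu={mu}: {scs_khz(mu)} kHz SCS, TTI {tti_usec(mu)} usec")
# mu=0 gives 15 kHz / 1000.0 usec; mu=4 gives 240 kHz / 62.5 usec,
# matching Table 2.
```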

Spectrum
Different regions of the world are looking at using different bands for 5G NR in private networks. Table 4 provides a high-level view of the spectrum scenario.


Figure 4. A time-sensitive network within a 5G network distributes timing information for synchronizing devices. (Source: 3GPP TS 23.501)

In Table 4, some parts of the world are assigning dedicated licensed spectrum for the sole use of private 5G. In these bands, 3GPP NR-based private 5G network deployments are a natural choice. The n96 band co-exists with the 6 GHz band used by Wi-Fi 6E.

Private 5G end-to-end architecture
Figures 5 and 6 depict the end-to-end architecture options for deploying private 5G networks. Figure 5 shows a private network that broadcasts an SNPN NID, where the RAN baseband and 5G core network functions are hosted on premises. Figure 6 shows a deployment where a public network provides services for private networks (PNI-NPN) by broadcasting a CAG ID and provides isolated network functions for each private network by means of network slices and dedicated DNNs.

Figure 5. An end-to-end on-premises hosted private 5G network runs as a standalone network.

Figure 6. Private networks such as this rely on a slice of the public network.

Automated network deployment
For private enterprises, the deployment of wireless networks should be a single-touch operation. After radio planning, the only human intervention should be to physically deploy the radio units. Once a radio unit is powered, its configuration, its connectivity to the baseband, and its onboarding into the network should be fully automated. Open RAN-based open radio units (O-RUs) support automatic DHCP-based discovery of the network management system's address (the service and management orchestrator, SMO) and then use the NETCONF configuration protocol to call the SMO. The SMO can then push the gNB-DU's address to the O-RU. The O-RU then connects to the gNB-DU, which can push the carrier configuration to the O-RU. If the network functions shown in Figures 5 and 6 are deployed as cloud-native functions, deployment can be automated by a cloud infrastructure manager such as Kubernetes.

Conclusion
Private 5G using NR as the radio technology and the 3GPP-defined 5G core finds its niche in many use cases. The flexible design of physical-layer features such as multiple numerologies, flexible slot formats, mini-slot scheduling, and flexible uplink/downlink configuration provides a rich set of capabilities that you can use in many applications.




HOW REDCAP FITS INTO 5G AND IoT
TARGETED AT IOT APPLICATIONS, THE REDUCED-CAPABILITY (REDCAP) SPECIFICATION IN 5G WILL SUPPORT A WIDE RANGE OF DEVICES AND APPLICATIONS THAT NEED LOW LATENCY AND HIGH RELIABILITY BUT NOT HIGH SPEED.

NIR SHAPIRA, CEVA

Figure 1. 5G consists of four distinct but related vectors.

5G's real promise comes not from mobile phones but from industrial and business applications, where network operators hope to recover the huge investments they've made in 5G network infrastructure and spectrum. 5G for IoT extends the mobile network to billions of IoT nodes. That's a much more ambitious goal than consumer use because 5G must adapt to the diverse nature of endpoints. These endpoint "things" range from mobile phones and fixed wireless networks that need lots of bandwidth and "frequently on" operation to sensor modules that infrequently transmit or receive small packets of data. Many IoT product needs will, however, fall somewhere in between. They don't need high-speed 5G performance, but they have much tighter power/cost requirements than phones or fixed terminals and aggressively target latency for a wide range of use cases. That's where the new 5G RedCap standard comes into play.

The 3GPP broadband standard specifies four distinct but related vectors (Figure 1). The first and primary vector, enhanced mobile broadband (eMBB), specified in Release 15, targets consumer handsets and Fixed Wireless Access (FWA). A second vector, specified in Release 16, covers ultra-reliable, low-latency communication (URLLC). This vector targets time-critical edge computing and supports future AR/VR devices. A third vector is Sidelink. Primarily targeted at automotive V2X, Sidelink is expected to be supported in non-automotive use cases beginning with Release 18, including industrial and IoT (where it will converge with RedCap). The fourth vector is 5G for IoT. LTE, in particular LTE Cat-1, partially serves these applications but is now being


fully embraced by 5G with the introduction of 5G RedCap in Release 17. Further improvements and optimizations should come in Release 18.

What is RedCap?
Reduced Capability (RedCap) brings the benefits of 5G with performance on par with previous-generation LTE but at significantly lower latency. It targets the missing performance grade between baseline full-performance 5G devices supporting eMBB and URLLC (with peak rates in the range of Gb/sec) and narrow-bandwidth massive machine-type communication (mMTC, supporting 1 Mb/sec). RedCap devices will support around 85 Mb/sec, a performance grade that can serve a wide range of industrial and IoT use cases. Table 1 contrasts the major segments of 5G. Supporting IoT in 5G networks allows for cost-effective greenfield deployments; there's no need to support three networks (5G, LTE, and eMTC/NB-IoT). 5G's focus on reduced-capability use cases coincides with the rapid rise in private networks. RedCap and private networks open the field for new cost-effective deployments addressing enterprise and industrial use cases. The 5G network also provides more spectrum


options, including unlicensed spectrum. RedCap will also support operation in millimeter wave, where the limited coverage becomes advantageous for spectral re-use. 5G networks also offer increased coverage in sub-6 GHz through beamforming, where the added complexity lies on the infrastructure side, keeping the user-side equipment lean, compact, and power efficient. 5G also allows improved device positioning, with Release 17 promising centimeter-level positioning accuracy, which will further benefit industrial use cases.

Figure 2. Network slices let private networks and applications run in configurations defined for a particular use.



IoT connectivity           | High performance 5G | Reduced Capacity 5G     | mMTC LTE Cat-M
Bit rate                   | Over 1 Gb/sec       | ~85 Mb/sec              | 1 Mb/sec
Bandwidth                  | 100 MHz             | 20 MHz (Rel 18: 5 MHz)  | 2-5 MHz
Number of antennas         | 2 to 4              | 1 to 2                  | 1
Number of layers           | 2 to 4              | Typically 1             | 1
Modulation                 | 256 QAM             | 64 QAM                  | 16 QAM
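As a rough cross-check of the ~85 Mb/sec figure cited for RedCap, the simplified peak-rate arithmetic of 3GPP TS 38.306 can be sketched in a few lines. The specific parameter values below (30 kHz subcarrier spacing, 51 resource blocks in 20 MHz, a 948/1024 code rate, 14% overhead) are illustrative assumptions for this sketch, not RedCap requirements.

```python
# Back-of-envelope NR downlink peak rate, loosely following 3GPP TS 38.306.
# All parameter values are illustrative assumptions for a RedCap-class
# device: 20 MHz at 30 kHz SCS, 64 QAM, one layer.
layers = 1
mod_bits = 6                  # 64 QAM carries 6 bits per symbol
code_rate = 948 / 1024        # assumed maximum code rate
prbs = 51                     # resource blocks in 20 MHz at 30 kHz SCS
subcarriers = prbs * 12
symbols_per_sec = 14 * 2000   # 14 OFDM symbols per slot, 2000 slots/sec
overhead = 0.14               # assumed control/reference-signal overhead

rate = layers * mod_bits * code_rate * subcarriers * symbols_per_sec * (1 - overhead)
print(f"approximate peak rate: {rate / 1e6:.0f} Mb/sec")
```

With these numbers the estimate lands in the low 80s of Mb/sec, the same ballpark as the article's ~85 Mb/sec figure.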

5G networks provide superior network management over LTE, supporting guaranteed quality of service (QoS). A common network infrastructure can offer this enhanced management through software-based network slicing, security, and mobile edge computing (MEC) options. Slicing (Figure 2) can provide tiered levels of QoS without the need for more hardware, reusing the same network infrastructure to support highly fragmented industrial use cases. MEC moves traditional cloud-compute functions into the network to reduce congestion and improve cost and latency. For example, security and AI functions can be provided in local gateways. Private cellular networks are another possible application, where ultra-performance may be less important than the advantages of enhanced security, improved availability, and functions tuned to the needs of that network.

What kinds of IoT applications could benefit from these enhancements?
• Industrial sensors in factories and the grid (pressure, CO2, humidity, motion, temperature sensors, and many more);
• surveillance monitors;
• wearables (including emerging XR devices in the glasses form factor);
• automotive telematics;
• e-health and medical monitoring devices.

These applications have some level of criticality in the information they share or to which they must respond. Yet they are very sensitive to cost and power constraints. RedCap aims to help IoT product builders find the right balance. One concrete example is automotive telemetry. We usually think of cellular V2X applications in vehicles for communication with other vehicles or infrastructure for safety, navigation, and traffic management. These require dedicated support for sidelink communications and performance similar to 5G RedCap's. Telemetry is a quite different


Table 1. Comparison of high performance 5G, RedCap, and LTE Cat M.

application. The telematics box (T-box) is an established piece of electronics in modern cars, responsible for remotely connecting vehicle functions and more. These systems use cellular technology, and emerging systems are starting to use 5G. Here, telematics can handle over-the-air updates, collect and communicate vehicle statistics, and schedule service when detecting problems. Telematics communication-link volume could exceed that of V2X, and it is well suited to the RedCap performance envelope.

Design for RedCap?
RedCap isn't fully ratified yet; ratification is expected around the middle of 2022. We expect initial RedCap deployments in 2024 to 2025, with volume ramp-up not before 2026. Changing a communications method once RedCap comes online could be expensive in both schedule and development cost. I'd recommend that you build your product with a communication system that can operate over LTE but can easily switch to RedCap through a software update or a simple re-spin. Many product builders are eager to find such a solution to their communications issues. A communications link will help them put products on shelves right away, with a smooth migration to 5G RedCap when the standard matures. There's always risk in planning for a next generation without undermining near-term revenue, but that risk can and should be manageable.

Because IoT applications require low-cost devices, we expect RedCap implementations to be much more integrated, not only in the digital baseband and RF portions but, in some cases, through integration with complementary communication technologies as well as application-specific SoCs. This will allow optimization of SoCs for specific use cases. It should also give new vendors and OEMs who don't specialize in communications an option to differentiate their products by integrating RedCap IP into their SoCs, much as they do now for Wi-Fi.

DESIGN WORLD — EE NETWORK


5G HANDBOOK

SYNCHRONIZING 5G NETWORKS BRINGS TIMING CHALLENGES

FREQUENCY SYNCHRONIZATION IN THE RADIO ACCESS NETWORK (RAN) MINIMIZES DISTURBANCES AT THE AIR INTERFACE, ENSURES MOBILE HANDOVER BETWEEN RADIO BASE STATIONS, AND FULFILLS LEGAL AND REGULATORY REQUIREMENTS ASSOCIATED WITH A FREQUENCY LICENSE.

GARY GIUST, SITIME

Radio-access networks (RANs) must minimize disturbances at the air interface. Disturbances in timing can upend synchronization of the RAN and, ultimately, the downstream network. Strict timing requirements specified in industry standards ensure that base-station handovers occur smoothly and that a wireless network complies with the legal and regulatory requirements associated with a frequency license.

Most legacy 2G, 3G, and 4G networks use frequency-division duplex (FDD), where uplink and downlink radios transmit simultaneously but at different frequencies separated by some offset. This offset is an inefficient use of frequency spectrum, which is increasingly scarce and costly to acquire. Time-division duplex (TDD) offers a cost-effective alternative to expand access into unoccupied upper-spectrum ranges. In TDD mode, networks optimize spectral efficiency by sharing the same frequency for uplink and downlink radios, but at different times. Because of the time-sensitive nature in which networks send TDD data, phase (time) synchronization is required to meet tight frame-start specifications and avoid unwanted interference between neighboring base stations and user equipment.

Time synchronization also enables mainstream features and services to emerge in 5G networks. For example, carrier aggregation (CA) uses multiple radio frequencies to increase speed and reliability in challenging environments, enabling faster downloads and continuous video streaming. Multiple-input multiple-output (MIMO) technology combines multiple antennas at source and destination to provide a more stable connection with less congestion. Geolocation services let operators pinpoint the geographical location of devices connected to the Internet.

Timing requirements
Figure 1 shows an Open RAN (O-RAN) network implementation illustrating how multiple standards combine to build an end-to-end solution enabling time synchronization. Here, time-error (TE) requirements appear budgeted throughout the network, measured with respect to a common reference. Overall, the total end-to-end TE must be less than ±1.5 μs. The ITU-T provides Recommendations to build the transport network, in which Question Group 13 from Working Party 3 in Study Group 15 addresses network synchronization and time distribution. Recommendation G.8271.1 specifies a maximum absolute network limit for TE of 1100 ns at reference point C, leaving 400 ns to the base station, as illustrated in Figure 1.

The O-RAN Fronthaul Working Group specification O-RAN.WG4.CUS.0-v07.00 assumes the fronthaul network has a maximum TE of 95 ns or 140 ns, depending on whether a regular radio unit (O-RU, with device TE of 80 ns) or an enhanced O-RU (with device TE of 35 ns) is used. These TE limits are independent of the number of switches, two of which appear in Figure 1 for illustration only. The fronthaul network's relative TE similarly varies between 60 ns and 190 ns, as shown in Figure 1. Table 1 summarizes O-RAN time alignment error (TAE) requirements for various 5G features, where TAE measures the largest timing difference between any two signals.

Finally, 3GPP specifies RAN air-interface requirements in TS 36.104, in which version 17.4.0 defines TAE to be less than 65 ns, 130 ns, 260 ns, and 260 ns for MIMO at each carrier frequency, intra-band contiguous CA, intra-band non-contiguous CA, and inter-band carrier aggregation, respectively. 3GPP additionally specifies less than 50 ppb frequency error at the base-station air interface. Other relevant standards enabling time synchronization include Synchronous Ethernet (SyncE), specified in ITU-T G.8262 and G.8262.1, which frequency-synchronizes the Ethernet physical layer, and IEEE 1588 Precision Time Protocol (PTP), which enables clock synchronization in a computer network. Additionally, IEEE 802.1CM specifies time-sensitive networking for fronthaul streams in Ethernet, and the evolved common public radio interface (eCPRI) defines a protocol for remote radio units to communicate with base stations. Each of these standards plays a pivotal role in synchronizing 5G networks.

Figure 1. This example O-RAN C2 network shows an end-to-end time-error budget, enabling synchronization based on applicable standards.

Figure 2. A PTP servo loop locks a local clock to network time, filtering PDV in the process.

Figure 3. Synchronization applications depend on an oscillator's ability to maintain constant output frequency in the presence of changing temperature. Here, curve 1 is the better choice, even though its rated lifetime stability is poorer.
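The budget arithmetic above is simple enough to capture in a toy bookkeeping sketch. This only sums the two figures quoted from G.8271.1 and the 3GPP end-to-end limit; the real allocation between transport and base station is more nuanced than a straight sum.

```python
# Toy end-to-end time-error budget check using the figures cited above (ns).
LIMIT_NS = 1500  # 3GPP end-to-end limit: ±1.5 μs

budget_ns = {
    "transport network to reference point C (G.8271.1)": 1100,
    "base station (remainder of the budget)": 400,
}

total = sum(budget_ns.values())
for name, te in budget_ns.items():
    print(f"{name}: {te} ns")
print(f"total: {total} ns (limit {LIMIT_NS} ns) -> {'OK' if total <= LIMIT_NS else 'FAIL'}")
```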

Synchronization and 5G timing challenges
One key trend emerging from the transition from 4G to 5G networks is densification. To increase capacity in metro areas with a high concentration of users, wireless carriers are deploying cellular radios all over the urban landscape. For example, radios appear mounted on telephone poles, lamp posts, building corners, curbside municipal power-supply cabinets, and below manhole covers. Such densification subjects these radios to a wide range of environmental stressors and requires a higher level of performance from timing devices to ensure a reliable network. Let's review a few common scenarios to understand their impact on timing.

Figure 2 illustrates a PTP servo loop, a fundamental building block for synchronizing time across distributed systems. The servo loop disciplines a local clock to network time and, in the process, low-pass filters incoming network noise, known as packet-delay variation (PDV). If filtering input noise is good, why not lower the loop bandwidth as much as possible? It turns out that, because of where the local oscillator (LO) sits in the loop, as the loop bandwidth lowers, the LO noise that appears at the output increases. Eventually, this LO noise will dominate such that lowering the bandwidth further actually

Table 1. O-RAN time alignment error (TAE) requirements for various 5G features.

5G Feature                                                        | Maximum |TAE| (normative) | Related 802.1CM/eCPRI timing category (informative)
TDD                                                               | 3 μs                      | C
Dual Connectivity                                                 | 3 μs                      | C
MIMO, TX Diversity                                                | 65 ns                     | -
CA (intraband contiguous, per BS type)                            | 130 ns                    | A
CA (intraband contiguous, per BS type)                            | 260 ns                    | B
CA (interband or intraband non-contiguous)                        | 3 μs                      | C
Observed Time Difference of Arrival (OTDOA), not defined by 3GPP  | << 1.5 μs                 | -


raises the total output noise. The optimum loop bandwidth is thus selected to balance incoming and internal noise processes. The key point is that selecting a lower-noise LO lets you lower the servo-loop bandwidth to filter more PDV, resulting in more accurate local time (e.g., time stamps).

The servo-loop bandwidth is such that the loop updates about once every few seconds. Between updates, the unfiltered LO output frequency provides local time. Thus, the LO's noise transfers to the local time between loop updates. This means the most important LO performance metric for synchronization applications is short-term stability. This contrasts with traditional thinking, wherein precision oscillators are selected for their generic banner specification of "stability" (i.e., frequency-over-temperature stability), which is guaranteed over the lifetime of the product. Because the servo loop updates periodically, designers need not worry about longer time frames during locked operation (holdover being a separate consideration).

What is the dominant noise source in precision oscillators that contributes to short-term instability? When operated at constant ambient temperature and airflow, random noise processes dominate; Allan deviation is one measure to quantify this noise. Otherwise, thermal stability can dominate. Referred to as dF/dT (the derivative of frequency versus temperature), thermal stability captures how an oscillator's output frequency changes with temperature. Figure 3 highlights the critical nature of dF/dT by comparing the stability curves of two hypothetical devices. Device 1 has a poorer datasheet stability of ±100 ppb compared to ±50 ppb for Device 2. Because Device 1's curve is smoother, it provides better (lower) peak dF/dT performance.

In the locked state, the system relies on the stability of the LO between PTP servo-loop updates to maintain a stable frequency in the presence of thermal gradients and dynamic airflow. Thus, what matters is an oscillator's low sensitivity to temperature changes (dF/dT), not lifetime peak-to-peak stability. Additionally, smoother dF/dT performance enables easier thermal compensation, if desired.
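The Figure 3 argument can be made concrete with two made-up stability curves: one device spans ±100 ppb but varies slowly, while the other stays within ±50 ppb but ripples quickly. The curve shapes below are invented for illustration; only the ppb spans echo the text.

```python
import numpy as np

T = np.linspace(-40, 85, 2000)  # temperature, °C

# Device 1 (hypothetical): wide (~100 ppb) but smooth frequency-vs-temperature curve
f1 = 100 * np.sin(np.pi * (T + 40) / 125)
# Device 2 (hypothetical): tighter (within ±50 ppb) but with fast 10 °C-period ripple
f2 = 30 * np.sin(np.pi * (T + 40) / 125) + 20 * np.sin(2 * np.pi * T / 10)

# Peak thermal sensitivity dF/dT, the metric that matters between servo updates
peak_dfdt1 = np.max(np.abs(np.gradient(f1, T)))
peak_dfdt2 = np.max(np.abs(np.gradient(f2, T)))

print(f"Device 1: span {np.ptp(f1):.0f} ppb, peak |dF/dT| {peak_dfdt1:.1f} ppb/°C")
print(f"Device 2: span {np.ptp(f2):.0f} ppb, peak |dF/dT| {peak_dfdt2:.1f} ppb/°C")
```

Despite its larger frequency span, the smoother device shows a far lower peak dF/dT, which is what governs time error between servo-loop updates.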




OPEN RAN REWRITES NETWORK-TESTING RULES

OPEN RADIO ACCESS NETWORKS (OPEN RAN) BRING NEW TESTING RESPONSIBILITIES THAT SERVICE PROVIDERS HAVEN'T HAD TO CONTEND WITH BEFORE.

ANIL KOLLIPARA, SPIRENT COMMUNICATIONS

Big changes are coming to service providers' radio access networks (RANs). After years of closed, proprietary radio technologies, Open RAN has started shaking up the status quo. By disaggregating monolithic RAN technologies and bringing new vendors and architectures into the mix, Open RAN promises new innovations and efficiencies. Industry analysts project real multi-vendor Open RAN maturity in as little as one to three years. So, we should expect to see operators deploying all sorts of groundbreaking new RAN capabilities any minute now, right?

Well, not so fast.

Open RAN offers an alternative to proprietary, monolithic radio technologies from one of a handful of vendors. It also, however, shifts the burden of testing and integrating RAN components to you. Suddenly, the service provider, not the vendor, must make sure all the multi-vendor pieces fit together correctly and perform as expected. If you're imagining the testing capabilities needed to do that, compared with what many service providers have in place today, you can't help noticing a significant gap. Why does Open RAN make network testing and validation so much more complicated? What should the industry do to prepare for tomorrow's open networks? Let's take a closer look.

Freedom brings complexity and responsibility
Multi-vendor openness is perhaps the most significant driver for Open RAN. By allowing new startups and third-party software vendors to play in this space, Open RAN aims to bring new ideas and capabilities to operators, along with lower costs from increased competition. Open RAN also introduces more discrete RAN vendors and components, making these networks far more complex to architect and validate. Service providers have traditionally bought RAN baseband units as a pre-validated system, likely from a single longstanding vendor. With Open RAN, you source multiple components: the radio unit (RU), centralized unit (CU), distributed unit (DU), and RAN intelligent controller (RIC). You're responsible for integrating them, testing them, and validating them.

Even if a vendor labels an Open RAN component "plug-and-play," that doesn't necessarily mean "plug and perform." Yes, the vendor is asserting that their component complies with the standard, but that's just a starting point. You have no guarantee that different vendors have implemented the standards consistently, that their components will interact the way you expect, or that you won't discover previously unknown problems under certain conditions. Open RAN is still new, as are many of the components you'll be using. Even some of the vendors are new to Open RAN. You'll need to perform exhaustive testing on every component, in multiple configurations, simulating a wide range of load conditions, deployment scenarios, and environmental variables. If there's a problem, it's now on you to figure out what's happening and which vendor needs to help fix it.

Figure 1. Open RAN functional components need testing for standards compliance, functionality, and interoperability.



Build an Open RAN testing strategy
What does a viable Open RAN testing approach look like? At the highest level, it should include:
• isolation testing to validate standards compliance and performance of each Open RAN component;
• adjacency testing to validate interoperability between pairs of components;
• end-to-end testing to validate that multi-vendor RAN components function as a unified system;
• performance testing to verify that your Open RAN architecture is hitting the targets you've set.

If this sounds like a significant effort, it is. So, how should you approach it? Start with thorough, methodical interoperability and performance testing for each element of the architecture (Figure 1).

RU: Typically deployed near the antenna or integrated with it, the RU processes radio signals, amplifies them, and digitizes them. It's the most delay-sensitive component of the RAN, so you'll want to perform wraparound testing to ensure interoperability with other components. To understand the coverage range and capacity of a cell, you'll need to test for performance, mobility, and end-to-end functionality across diverse real-world scenarios. That includes testing across multiple bandwidths, frequencies, modulation schemes, and antenna types.

DU: The DU performs real-time baseband processing and handles the lower layers of the protocol stack. This component plays a key role in Open RAN's flexibility to support different distributed architectures and splits. The DU can be located near the cell site or at the network edge. You'll want to thoroughly test the DU for capacity, performance, throughput, and mobility. You'll also need to validate its interoperability with the RU and CU, and its support for diverse voice, video, data, and emergency services.

CU: The CU performs less time-sensitive processing for higher layers of the protocol stack. It connects to other CUs and, potentially, thousands of DUs and hundreds of thousands of user devices. Therefore, you should thoroughly test this component for performance, throughput, and capacity at massive scale, as well as validating interoperability with DUs and the core network.

RIC: The RIC acts as an innovation enabler in Open RAN architectures: a platform to run software from new vendors and startups in the RAN. That software can enable a variety of new capabilities, such as automating lifecycle-maintenance tasks, optimizing resources, integrating third-party intelligence into customer-facing services, and so on. RICs come in two flavors, near-real-time (near-RT) and non-real-time (non-RT), both of which you should thoroughly test for interoperability with CU and DU components.

This kind of testing doesn't represent entirely new ground for service providers, but it's a much bigger effort than was needed in the past. It's also not a one-time job. Open RAN doesn't just mean more network components; it means more updates to those components, from different vendors releasing software at their own cadences. So, you'll need to support this testing effort on an ongoing basis and repeat it every time something in the environment changes.
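The scale of this effort is easy to underestimate. A few lines of Python sketch the combinatorics of the isolation/adjacency/end-to-end plan described above; treating both RIC flavors as separate components is an assumption here, and in practice only pairs that actually share an interface need adjacency testing.

```python
from itertools import combinations

# Sketch of the combinatorics behind an Open RAN test plan.
components = ["RU", "DU", "CU", "near-RT RIC", "non-RT RIC"]

isolation = [(c,) for c in components]         # each component alone, wrapped by emulators
adjacency = list(combinations(components, 2))  # every pair (simplified: all pairs)
end_to_end = [tuple(components)]               # the full chain as one system

plan = isolation + adjacency + end_to_end
print(f"{len(plan)} base configurations, before multiplying by load, mobility,")
print("and deployment scenarios, and before re-running on every vendor update")
```

Even this toy plan yields 16 base configurations; multiplying by load conditions, scenarios, and per-vendor release cadences is what makes ongoing automation essential.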

Looking ahead
It's hard to overstate the value that Open RAN can bring to operators and their customers, but we shouldn't downplay how big a change it represents. You need to emulate every part of an Open RAN architecture, test with real-world applications, and automate many aspects of the testing process.




BATTERY BACKUP CHEMISTRIES FOR 5G SMALL CELL SITES

AS THE NUMBER OF CELL SITES INCREASES AND THEIR SIZES DECREASE, ENGINEERS HAVE OPTIONS TO CONSIDER FOR BATTERY BACKUP. DIFFERING BATTERY CHEMISTRIES OFFER MORE CHOICES AND DIFFERENT PERFORMANCE LEVELS. SELECTING THE RIGHT BATTERY CHEMISTRY FOR EACH APPLICATION IS CRITICAL TO ENSURE RELIABLE, LONG-LASTING, AND COST-EFFECTIVE POWER DELIVERY.

JD DIGIACOMANDREA, GREEN CUBES TECHNOLOGY

The deployment of mmWave technology with 5G forces wireless operators to install many small cells, each at a reduced distance between the customer and the base-station antenna. Small cell sites are now located in buildings, on lamp posts, in neighborhoods, and along travel corridors. Each site must tap into available power, which can introduce an increased likelihood of power-loss events.

Figure 1. This 12 V VRLA battery used in telecom contains flammable materials and requires vertical orientation.


What happens to the network each time we lose power? Of course, batteries engage and provide backup power for the location until grid power is restored. As the number of cell sites increases and the available footprint decreases, there are multiple options to consider for battery backup. Today's battery chemistries offer more choices and different levels of performance than were available a short time ago. Selecting the best battery chemistry for each application is critical to ensure reliable, long-lasting, and cost-effective power delivery. This article presents some of the considerations and trade-offs when selecting a battery for small cells.

Macro cell sites typically use lead-acid batteries for backup power, as well as fossil-fuel-powered generators to provide power during a power-loss event. Regulations typically stipulate that critical infrastructure must remain operable for a certain amount of time during a power outage, typically 8 to 24 hours. The power requirements for macro cell sites vary depending on the site size, the number of collocated service providers, and the type of equipment. Additionally, the power load will vary throughout the day based on the number of active users of the site. All these factors affect the typical power requirements of a macro cell site.

Power requirements for small-cell sites are less than those for a macro cell because small cells typically serve a single operator, have a smaller coverage area, and tend to provide a specific type of coverage. Some small cell sites provide critical coverage and need to provide service during a power outage. Others sit in locations prone to frequent power outages due to their position on the grid or in the community. In these cases, it's important to

consider the type of battery backup options. These sites are tucked into neighborhoods, commercial areas, and stadiums, and exist in higher numbers than their macro counterparts. Placing a battery at each small cell site, or at each cluster in stadiums, makes much more sense than installing a fossil-fuel generator.

The two leading battery chemistries for small cell site backup power are valve-regulated lead acid (VRLA) and lithium ion. Each chemistry has unique features that you should consider when selecting a backup power source. Factors include cost, weight, size, energy-storage capacity, lifetime, operating temperature, and maintenance.

Lead-acid batteries were invented in 1860 and continue to be a leading energy-storage product for many industries. There are multiple types of lead-acid batteries, but the most common for small-site backup is the VRLA type. Lead-acid batteries built for telecom applications are the least expensive option in terms of cost per kWh installed at the beginning of life, thanks to the large, mature manufacturing base developed over the last 50 to 75 years. Operators have multiple vendors to choose from, all producing the same chemistry. Lead-acid batteries are also highly recyclable, with up to 99% recyclability.

VRLA chemistry has some drawbacks. These batteries are built in 12 V increments and therefore require installing four units wired in series to power 48 V telecom equipment. These connections must be checked during regular maintenance intervals, as they can become loose and cause a failure during discharge. VRLA batteries generate flammable hydrogen and oxygen as a normal part of operation; these gases normally remain contained within the pressurized battery. In the case of a vent failure, battery failure, or overcharging, the gases can escape into the room, so battery rooms are regulated to provide adequate ventilation to prevent the risk of an explosion. Figure 1 shows a cutaway view of a VRLA battery.
Because there is fluid (acid) in the battery and gas vents in the top, VRLA mounting options are limited to a single orientation. Lead-acid batteries are the heaviest of the available chemistries and require battery-specific shelving and floors



Figure 2. A telecom Li-ion battery contains a battery-management system (BMS) and circuit protection, and communicates its health to the host.

to support the enormous weight of full battery banks. Additionally, if you discharge a VRLA battery below 50% state of charge, permanent damage occurs to the lead plates, significantly reducing the battery's lifetime. This forces operators either to replace their batteries earlier than anticipated or to suffer a system shutdown during a power-failure event due to failing batteries.

Lithium-ion batteries were introduced in the 1970s; recent improvements in chemistry, cost, and manufacturing have made them attractive alternatives to lead-acid batteries for the telecom industry. There are multiple lithium-ion battery chemistries, but two dominate in the telecom industry: lithium nickel manganese cobalt

Table 1. Comparison of lithium iron phosphate and nickel manganese cobalt battery chemistries.

(NMC) and lithium iron phosphate (LFP). These chemistries are quite similar but have a few differences, illustrated in Table 1. Both are manufactured in 48 V rack-mountable form factors and can be wired in parallel to provide the required backup capacity. There are special considerations when wiring Li-ion batteries in parallel, depending on the technology each manufacturer integrates directly into the battery. Li-ion batteries of different state of charge (SOC), capacity, and age should not be wired in parallel unless the manufacturer integrates special balance circuitry. Mixing and matching Li-ion batteries without proper balance circuitry will lead to accelerated wear at the least and unpredictable downtime at the worst. Lithium-ion batteries are more energy dense, weigh less,

LFP vs. NMC

Parameter                  | Lithium Iron Phosphate (LFP) | Nickel Manganese Cobalt (NMC)
Voltage                    | 3.2 V                        | 3.7 V
Weight energy density      | 90-120 Wh/kg                 | 150-250 Wh/kg
Volume energy density      | 300-350 Wh/l                 | 500-700 Wh/l
Max discharge rate         | 30C                          | 2C
Max charge rate            | 10C                          | 0.5C
Typical cycle life (@80%)  | 3000+ cycles                 | 500-1000 cycles
Calendar life (@80%)       | 8-10 years                   | 4-5 years**
Thermal runaway onset*     | 195 °C                       | 170 °C
Thermal runaway increase*  | 210 °C                       | 500 °C

and take up less space than lead acid. Additionally, lithium-ion batteries can be cycled to 100% depth of discharge for approximately 3,500 cycles before they degrade to the point where they should be considered for replacement. This feature alone allows lithium batteries to be designed into systems for 10 or more years without replacement, and it allows more backup time in a smaller space, an advantage for small cell installations where space is at a premium. Lithium-ion batteries are also completely sealed; you can mount them in any orientation. Installers can squeeze batteries into tight locations, mount them sideways, or even upside down. Small cell sites with equipment boxes located on poles, in closets, or tucked away in a utility right-of-way are usually limited in space and need to be as small as possible. One of the biggest benefits of lithium-ion batteries is the built-in battery management system

Comparison:
• NMC batteries are lighter and more compact.
• LFP batteries provide more power over a shorter period, and can be charged faster.
• LFP batteries will deliver more cycles over a longer calendar life.
• NMC batteries have lower thermal-runaway thresholds and will burn hotter.

*Royal Society of Chemistry, 2014. **With derated charge voltage.


(BMS). Every lithium-ion battery must have a BMS that monitors cell voltages, battery currents, and temperatures, and protects the battery from abuse. This provides a built-in safety measure, prolonging the battery's life by ensuring it operates in optimal conditions. A BMS offers many additional features, such as state of charge, state of health, cycle count, fault reporting, digital communications (e.g., RS-485, Modbus), and integration with site controllers. These features (Figure 2) enable remote battery management, eliminating routine maintenance trips to remote sites and significantly reducing maintenance costs. Operators can now send a technician to check on a set of batteries after the batteries proactively send a request for maintenance through the cloud.

As the wireless industry transitions to 5G and small cell sites, the power requirements for each site have increased. Each site is becoming more unique in its requirements as we put more radios in more places. Operators must be cautious with their power-supply designs and deployments. There are a few options when it comes to powering backup systems, including fossil-fuel generators, lead-acid batteries, and lithium-ion batteries. Each solution has its own benefits, and each small cell site will require evaluation and integration to ensure that critical communications are available when they are needed most.
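One way to see the usable depth-of-discharge difference between the chemistries is a quick sizing sketch. The 300 W load, 8-hour runtime target, and the gravimetric energy densities (an assumed 35 Wh/kg for VRLA; a mid-range 105 Wh/kg for LFP, consistent with Table 1) are illustrative numbers, not vendor data.

```python
# Hypothetical small-cell backup sizing: how much nameplate energy must be
# installed to deliver the required runtime, given each chemistry's usable
# depth of discharge (VRLA ~50% per the article, LFP ~100%).
load_w = 300          # assumed small-cell site load
backup_hours = 8      # assumed runtime requirement
required_wh = load_w * backup_hours

for chem, usable_dod, wh_per_kg in [("VRLA", 0.50, 35), ("LFP", 1.00, 105)]:
    nameplate_wh = required_wh / usable_dod   # installed capacity needed
    weight_kg = nameplate_wh / wh_per_kg      # rough battery-bank weight
    print(f"{chem}: {nameplate_wh:.0f} Wh nameplate, roughly {weight_kg:.0f} kg")
```

With these assumptions, VRLA needs twice the nameplate energy and several times the weight of LFP to deliver the same backup runtime, which is the crux of the space argument above.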


Further reading
Get nerdy, learn about lithium batteries: https://batteryuniversity.com/article/bu-204-how-do-lithium-batteries-work
Top 3 issues designers face with larger Li-ion installations: https://greencubestech.com/power-electronics-news-avoid-the-top-3-issues-ups-designers-encounter-when-upgrading-to-lithium/




HOW DPD IMPROVES POWER AMPLIFIER EFFICIENCY

DIGITAL PREDISTORTION COMPENSATES FOR AN AMPLIFIER'S NONLINEARITIES, LETTING IT OPERATE IN ITS NONLINEAR REGION OF MAXIMUM POWER EFFICIENCY.

POORIA VARAHRAM, BENETEL

Power amplifiers (PAs) are indispensable components in a wireless communication system. Unfortunately, they're inherently nonlinear. This nonlinearity generates spectral regrowth, leading to unwanted radiation and adjacent-channel interference. It also causes in-band distortion, resulting in degradation of error vector magnitude (EVM) performance. To improve EVM, a PA needs to be backed off far from its saturation point, leading to very low power efficiency, typically less than 10%. Thus, more than 90% of the DC power gets lost, turned into heat. Considering the large number of wireless base stations deployed worldwide, improved PA efficiency could substantially reduce the electricity and cooling costs incurred by cellular operators.

Digital predistortion (DPD) provides an effective method to linearize PAs. DPD lets cost-efficient nonlinear PAs run in their nonlinear regions with minimized distortion, resulting in higher output power and greater power efficiency. The concept is based on inserting a nonlinear function (the inverse function of the amplifier) between the input signal and the amplifier to produce a linear output. The DPD must adapt to variations in PA nonlinearity with time, temperature, and use of different operating channels. This article describes the DPD concept in more detail, shows its effectiveness at improving PA efficiency, and discusses some of the challenges DPD faces in improving power efficiency in 5G systems.

What is digital predistortion?
DPD operates in the digital baseband domain, offering lower implementation complexity than alternative techniques such as feedforward linearization, which operates in the RF domain. DPD lets cost-efficient nonlinear PAs operate at higher output powers and into their nonlinear regions with minimized distortion, resulting in greater power efficiency. The DPD is equivalent to a nonlinear circuit with a gain-expansion response that is the inverse of the power amplifier's gain compression (AM/AM) and a phase rotation that is the negative of the PA's phase rotation (AM/PM), as shown in Figure 1.

DPD functional blocks
DPD must adapt to variations in PA nonlinearity. Another challenge of DPD is the dependence of the PA's transfer characteristics on the frequency content of the signal, defined as changes of the amplitude and phase in distortion components based on past signal values. These are typically referred to as memory effects. In addition to correcting the PA nonlinearity, memory-effect compensation is an important requirement of the DPD algorithm, especially as the signal bandwidth increases. Most adaptive DPD blocks adapt to the slow variation of PA characteristics; the fast variation is typically not taken into consideration. Attempting to compensate for the fast variation of PA characteristics, or short-term memory effects, with conventional DPD schemes (based on a Taylor series) can degrade performance or even make the system unstable. The DPD techniques to use are based on subsets of the Volterra series, e.g., the memory polynomial and the generalized memory polynomial. The nonlinear correction may be applied to the complex baseband signals using any suitable form of digital signal processing, including both real- and complex-domain (I/Q or polar) processing. Preferably, the nonlinear correction is applied using a parallel processing architecture, whereby two or more samples are processed simultaneously to accommodate the high sample rate of the expanded bandwidth. Figure 2 shows the functional blocks of an adaptive DPD in the transmit chain of a typical communication system. The DPD subsystem

18

DESIGN WORLD — EE NETWORK

Figure 1. Block diagram of a DPD system shows how it linearizes the PA.

consists of different blocks, including a DPD model look-up table (LUT), a complex gain multiplier, and an adaptation block. Following digital up-conversion (DUC), the signal's peak-to-average power ratio (PAPR) is reduced using a crest factor reduction (CFR) technique. CFR also plays a key role in improving PA efficiency by reducing the back-off. Most systems apply DPD to the digital baseband signals immediately after the CFR block. Without CFR in the system, the PAPR after DPD would increase, which could cause instability in the DPD algorithm's convergence. DPD training at system startup occurs by collecting PA samples from the observation path with CFR disabled. This allows the system to obtain the PA's full characteristics for DPD training. The other block in Fig. 2 is the quadrature error correction (QEC). Due to imperfections in the local oscillator (LO) in the up-conversion (UC) block and the digital-to-analog converter (DAC), IQ imbalance occurs, which the QEC compensates.
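The memory-polynomial model mentioned above (a subset of the Volterra series) can be sketched in a few lines. This is a minimal illustration, not Benetel's implementation: the coefficient values are invented, and in a real DPD the adaptation block estimates them from observation-path samples.

```python
import numpy as np

def memory_polynomial(x, coeffs):
    """Apply a memory-polynomial model to complex baseband samples x.

    coeffs[k][m] multiplies x[n-m] * |x[n-m]|**k, i.e. nonlinearity
    order k+1 with memory tap m. The same structure serves as either a
    PA behavioral model or its predistorter, depending on the coefficients.
    """
    K = len(coeffs)      # number of nonlinearity orders
    M = len(coeffs[0])   # memory depth
    y = np.zeros(len(x), dtype=complex)
    for k in range(K):
        for m in range(M):
            xd = np.roll(x, m)   # delayed samples x[n-m]
            xd[:m] = 0           # zero-fill before the signal starts
            y += coeffs[k][m] * xd * np.abs(xd) ** k
    return y

# Sanity check: with only the linear, zero-delay term set, the model is
# an identity and passes the signal through unchanged.
x = np.exp(1j * np.linspace(0, 2 * np.pi, 64))
identity = [[1.0, 0.0], [0.0, 0.0]]      # K=2 orders, M=2 taps
assert np.allclose(memory_polynomial(x, identity), x)
```

In practice the adaptation block solves a least-squares problem between the predistorter output and the observation-path feedback to update `coeffs` as the PA drifts with time and temperature.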

How does DPD improve PA efficiency? PA efficiency in communication systems is one of the most important factors for prolonging battery life and minimizing system costs.

Figure 2. A transmitter’s DPD block resides between the crest-factor reduction block and the DAC/ADC functions.

5 • 2022

eeworldonline.com

|

designworldonline.com


PAs are typically the most power-consuming components in communication systems and can account for up to 70% of the total power budget. Improvements in PA efficiency can substantially reduce both power consumption and cooling requirements. PA efficiency depends on the input or output back-off chosen to operate the PA and on the power transistor's operating class. There is a tradeoff between linearity and efficiency for different PA classes. For example, a Class A amplifier has the highest linearity but the poorest efficiency, while a Class AB design sacrifices some linearity to achieve improved efficiency. Today's base stations, however, use Doherty amplifiers. A Doherty PA combines Class AB and Class C stages and provides both better linearity and higher efficiency than other PA classes. We define PA efficiency as a measure of how effectively a device converts the DC power of the supply into the RF signal power delivered to the load. To measure the operating efficiency of the PA, we often use the following parameters:
• Drain efficiency: the ratio of the RF output power to the DC consumed power.
• Power added efficiency (PAE): the ratio of the difference between the RF output power and the input power to the DC consumed power.

The drain-efficiency metric measures only the DC power converted into RF power, without considering the power already present in the RF signal at the PA input. The PAE, by contrast, does account for the input RF signal power.

In an ideal PA, PAE = 1: the power delivered to the load equals the power drawn from the DC supply, with no power dissipated in the PA. In practice this can't happen because no PA is fully linear. Figure 3 shows a typical PA input-power vs. output-power characteristic under ideal linear response conditions and with and without DPD. As input power increases, the PA response becomes more nonlinear, eventually reaching a saturation point. This nonlinearity induces in-band and out-of-band distortion on the transmitted signal, reducing link quality. To achieve the desired linearity, the PA must operate in the linear region, which means the PA needs a lower input power. The degree of back-off is normally described relative to the 3 dB saturation point (PSAT(3 dB)), the point at which the PA output power is 3 dB less than it would be if the PA characteristic were perfectly linear. The blue area in Fig. 3 shows a PA's operating region without DPD, with input power backed off to maintain linearity. For a typical power amplifier, drain efficiency increases with input power, so operating in a backed-off state translates to poor power efficiency. One approach to improving efficiency in this scenario is to use a much higher-spec PA, but this has significant cost implications. By adding DPD to the system, the PA can operate at higher output power levels while

Figure 3. DPD moves a PA's operating region closer to its saturation point.
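The two efficiency metrics defined above can be written compactly (restating the text's definitions; here $P_{\text{out}}$ and $P_{\text{in}}$ are the RF output and input powers and $P_{\text{DC}}$ is the DC supply power):

```latex
\eta_{\text{drain}} = \frac{P_{\text{out}}}{P_{\text{DC}}},
\qquad
\text{PAE} = \frac{P_{\text{out}} - P_{\text{in}}}{P_{\text{DC}}}
           = \eta_{\text{drain}}\left(1 - \frac{1}{G}\right),
```

where $G = P_{\text{out}}/P_{\text{in}}$ is the amplifier's linear power gain. The last form shows why PAE approaches drain efficiency for high-gain amplifiers.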


still meeting spectral emission mask requirements. This DPD-enabled operating region is shown in green in Fig. 3 and results in an improvement in the device's PAE. Despite the attractive benefits of DPD, some factors still limit the maximum power at which the PA can operate. For example, we must account for the PAPR of the input signal: a high-PAPR signal requires the PA to operate further from the saturation point, reducing power efficiency.

DPD challenges in 5G systems
Modern 5G systems present several challenges for DPD implementation:
• Wide bandwidth. To meet the promise of drastically increased upload and download speeds, 5G systems must support wider-bandwidth signals than previous generations. In a 5G system, the instantaneous bandwidth (IBW) can be as high as 400 MHz. For DPD, the challenge of increased signal bandwidth is twofold. First, PAs exhibit increased memory effects with wideband signals and require more complex modeling techniques to characterize and compensate. Second, the feedback path in the DPD model adaptation chain must improve to capture a signal of sufficient quality to extract a suitable DPD model. In general, DPD adaptation requires that the feedback-path sampling rate be high enough to capture five times the signal bandwidth, meaning increased cost and implementation complexity as bandwidth increases.


• Multi-band and multicarrier. Another challenge facing DPD is multi-carrier applications (inter-band or intra-band). Here, the main issues are the separate DPD processing needed for each band and the high-PAPR signal.
• Massive MIMO (mMIMO). With mMIMO, the main challenge is the large number of PAs whose behavioral changes the DPD algorithm must track and linearize. Depending on the system size, this can drastically increase the processing requirements and the power consumption.
• Time division duplex (TDD) operation. The main challenge associated with TDD operation stems from the PA switching on and off in a duty cycle. The DPD must model the thermal effects of the transistor and the PA characteristics during this switching.
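The five-times rule of thumb cited in the wide-bandwidth bullet can be turned into numbers with a tiny helper. The 5× expansion factor comes from the text; the assumption that a complex (I/Q) observation receiver needs a sample rate at least equal to the observation bandwidth is standard practice, not a vendor specification.

```python
def observation_requirements_mhz(signal_bw_mhz, expansion_factor=5):
    """Estimate DPD observation-path requirements from signal bandwidth.

    Returns (observation bandwidth, minimum complex sample rate), both in
    MHz, using the ~5x bandwidth-expansion rule of thumb.
    """
    observation_bw = expansion_factor * signal_bw_mhz
    # A complex (I/Q) receiver needs a sample rate of at least the
    # observation bandwidth.
    min_complex_sample_rate = observation_bw
    return observation_bw, min_complex_sample_rate

# A 400-MHz 5G instantaneous bandwidth implies roughly 2 GHz of
# observation bandwidth, before any further margin.
assert observation_requirements_mhz(400) == (2000, 2000)
```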

Conclusion
DPD is one of the main blocks in the digital front end of communication systems. It increases the power efficiency of the system by linearizing the PA. Without DPD, the PA must operate with a large back-off to meet the spectral emission mask, which reduces the output power and the overall power efficiency. Achieving the desired output power would then require a much higher-spec PA, significantly increasing system cost. DPD provides a cost-effective way to achieve the expected output power while also meeting spectral emission requirements.



5G HANDBOOK

ORCHESTRATION AT THE EDGE REDUCES NETWORK LATENCY

EDGE COMPUTING BRINGS BENEFITS TO USERS BY REDUCING NETWORK LATENCY, THUS IMPROVING SERVICES, BUT NEEDS MANAGEMENT ORCHESTRATION. HERE, WE DEMONSTRATE HOW ORCHESTRATION IMPROVES LATENCY IN AN AUTOMOTIVE EDGE-COMPUTING USE CASE BY REALLOCATING NETWORK RESOURCES.

JOHANN MARQUEZ-BARJA AND NINA SLAMNIK-KRIJESTORA, IMEC

Multi-access edge computing (MEC), commonly called "edge computing," moves data processing to the edge of communication networks. Bringing computing close to the source and/or destination where such data is produced and consumed means that data need not travel to the cloud for processing. MEC therefore reduces the latency perceived by end-users, in part by reallocating network resources. 5G systems aim to deliver enhanced network performance, focusing on enhanced Mobile Broadband (eMBB), Ultra-Reliable Low-Latency Communications (URLLC), and massive Machine-Type Communications (mMTC). Each 5G development phase, specified in a 3GPP release, focuses on one of the above enhancements. The latest, Release 16, sets the technical enhancements to deliver URLLC, where the 5G network not only


gets improved by new radio and core technologies, but also by relying on the MEC and its orchestrated services. Several services can be located and run at the edge. These services tackle the diverse needs of the different sectors, known as industrial verticals, which include health, finance, and transportation. Figure 1 illustrates the 5G ecosystem consisting of the 5G NR, 5G core, and MEC infrastructure. In this automotive use case, orchestrated services deliver Connected and Automated Mobility (CAM) services located at the network edge to minimize communication latency. In this application, MEC enhances driving and safety by making vehicles less dependent on the driver's actions, ensuring the higher safety that vehicles and end-users need. Instructions from the network infrastructure can arrive in less than 100 msec.

Figure 1. An orchestration layer handles CAM services for vehicles in 5G ecosystem.



Figure 2. High-performance test beds let us test orchestration of vehicular services at the edge.

Network orchestration
Network orchestration performs specific tasks in a network, operating as a subset of network-management software called management and orchestration (MANO) systems. A MANO system manages the network and its computing resources. Several MANO products are on the market, both proprietary and open-source. Indeed, MANO is standardized by the European Telecommunications Standards Institute (ETSI) as the so-called ETSI NFV MANO. As part of the future Release 17, 3GPP is now standardizing an architecture for enabling edge applications, which includes native/by-design interaction between the 5G communication system and the MEC orchestrated by MANO.

Experimentation setup
To understand the impact and benefits of edge computing in 5G mobility services, our research group tested several publicly available MANO systems. We performed tests relying on our high-performance computing and networking facilities for mobility (also known as test beds) located in Belgium. Figure 2 shows the proof-of-concept (PoC) we built to measure the performance of our MEC application orchestrator. This experimentation setup consists of components from two test-bed facilities: the Virtual Wall test bed (Ghent, Belgium) and the Smart Highway test bed (Antwerp, Belgium). The Virtual Wall is a test bed for large networking and cloud experiments, whereas the Smart Highway is a test site built on top of the E313 highway for vehicle-to-everything (V2X) research. In this PoC setup, the roadside units (RSUs) on the smart highway function as MEC hosts, creating a distributed edge-cloud environment where MEC application services are deployed, and the vehicle's on-board unit operates as a client. The setup is based entirely on the open-source Kubernetes container orchestration engine, where corresponding extensions enhance orchestration decisions to enable data analytics and therefore enable proactive application relocation. Our MEC application orchestrator runs on the master node, and it supports optimized MEC host selection for application-context relocation. The network function virtualization infrastructure (NFVI) in our PoC consists of three distributed MEC hosts located on the highway site (worker nodes). The client in our PoC is on-boarded to the vehicle's on-board unit (OBU), which connects to the distributed MEC application services over long-range 4G.

Response-time measurements
For experimentation purposes, we created a testing V2X application service that generates notifications for vehicles based on current road conditions. We deployed this service on the distributed MEC hosts in our PoC as a cloud-native application and a containerized piece of software, with RESTful APIs exposed to vehicles for retrieving information about driving conditions on the road. Figure 3a shows the measured round-trip time (RTT), which is the network latency: the time between the moment the request is initiated (the vehicle sending a request for the latest road conditions) and the moment information is received from the MEC service. We measured these RTT values at the vehicle/user side. Three curves represent RTT measurements for the three application servers deployed in distributed MEC environments. This setup tests the impact of the network on the overall service response time, shown in Figure 3b. The response time contains the transmission and propagation delay (network impact) and the computational delay on the application server (MEC impact). Figure 3b also shows the application server's overall response time, measured on the vehicle (user) side, for two different scenarios. This response time is important because it shows the delay in retrieving important contextual driving information from the



Figure 3. The MEC service response time measured at the vehicle/user side, where we observe the network and computing latencies with and without MEC orchestration.

server. Keeping this response time low (below 100 ms) is essential for the vehicle to make efficient maneuvering decisions.
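Checking measured response times against that 100-ms budget is mechanical. The helper and sample values below are invented for illustration; the budget figure comes from the text.

```python
import statistics

def response_time_ok(samples_ms, budget_ms=100.0):
    """Summarize measured service response times and check them against
    the latency budget a vehicle needs for timely maneuvering decisions."""
    mean = statistics.mean(samples_ms)
    stdev = statistics.stdev(samples_ms)
    # Require headroom of one standard deviation below the budget.
    return mean, stdev, mean + stdev <= budget_ms

# Hypothetical response-time samples in milliseconds.
mean, stdev, ok = response_time_ok([62.0, 71.5, 58.9, 90.2, 66.3])
assert ok
```

Comparing mean plus one standard deviation against the budget is a deliberately simple criterion; a production system would look at tail percentiles instead.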

Host selection decisions
As you can see, in both scenarios the MEC application orchestrator never selects MEC host 1 for application placement due to its high resource consumption. This happens because of the load test we ran on this MEC host, artificially increasing the CPU load to train our prediction model. MEC hosts 2 and 3, whose RTTs are of similar scale, are selected based on projected resource consumption. In the first scenario, the network does not perform application-context relocation, which means the vehicle remains connected to MEC host 2. Once the load increases on MEC host 2 after 200 sec (Figure 3b), the response time of the application service increases. Driving information about road conditions might be significantly delayed at the vehicle side, leading to inefficient decisions that affect the whole maneuver. In scenario 2, the CPU load increases on MEC host 2 (i.e., resource availability decreases), as predicted by our algorithm for the time after 200 sec. The proactive decision to relocate the application context from the application service on MEC host 2 to MEC host 3 results in a relatively stable response time. Such response time


does not increase when the vehicle starts retrieving service information from the application service on MEC host 2. Similarly, our algorithm can decide to proactively relocate the service and avoid service disruptions when a user-mobility event notification is received from the core network. Testing such a scenario is part of our future work. We obtained mean values of 331.117 msec and 252.924 msec, with standard deviations of 117.543 msec and 29.786 msec, for the scenario with no application-context relocation and for the one with relocation, respectively. In scenario 1, with no application-context relocation, the deviation from the mean is large for the observations after the 200th second: the increase in response time is statistically significant. In scenario 2, we show that optimized and proactive MEC host selection results in application-context relocation that improves the overall response time. MEC also prevents service unavailability, which would otherwise lead to outdated information about road conditions and, in turn, strongly affect the vehicle's maneuvering decisions.
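The orchestrator's decision rule described above can be sketched as: among hosts whose RTT is of similar scale, pick the one with the lowest predicted load. The function, host names, and numbers below are hypothetical; the real orchestrator uses a trained prediction model rather than fixed values.

```python
def select_mec_host(hosts, rtt_tolerance_ms=10.0):
    """Pick the MEC host with the lowest *predicted* CPU load among hosts
    whose RTT is within tolerance of the best observed RTT.

    `hosts` maps host name -> (measured_rtt_ms, predicted_cpu_load_pct).
    """
    best_rtt = min(rtt for rtt, _ in hosts.values())
    candidates = {name: load for name, (rtt, load) in hosts.items()
                  if rtt - best_rtt <= rtt_tolerance_ms}
    return min(candidates, key=candidates.get)

# Mirrors the PoC behavior: host 1's predicted load is high (it was
# artificially loaded), so it is never selected.
hosts = {"mec1": (22.0, 95.0), "mec2": (20.0, 80.0), "mec3": (24.0, 35.0)}
assert select_mec_host(hosts) == "mec3"
```

Filtering by RTT first and load second reflects the article's observation that hosts 2 and 3 had RTTs of similar scale, leaving predicted resource consumption as the deciding factor.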

Takeaways
Management and orchestration play a significant role in modern service deployments, often present in highly distributed and heterogeneous environments. Such environments become even more important in the case of mobility, where service continuity is essential. We showed how performing an orchestrated and optimized application-context relocation from one edge host to another lets the vehicle always connect to the most suitable application server to retrieve important information about driving conditions on the road. This information is especially important for autonomous vehicles that must derive maneuvering decisions without any assistance from the driver.



NETWORK AUTOMATION

NETWORK AUTOMATION TAKES THE WORK OUT OF UPGRADES

TELECOM NETWORKS CAN USE AUTOMATION TO PUSH SOFTWARE UPGRADES INTO THE FIELD WITHOUT THE NEED FOR HUMAN INTERVENTION.

Just as manufacturing has moved from an entirely manual process to one that's highly automated, so too is automation making inroads into telecom and datacom networks. Automated operations such as software updates no longer disrupt the network. Automation is becoming the key to scaling, testing, and allocating software and underlying hardware resources. While many network functions can benefit from automation, let's focus on automating updates to the radio access network (RAN). We will first discuss the automation enablers, then the steps where RAN automation can happen. Finally, we will look at each enabler. Automation enablers such as Zero Touch Provisioning (ZTP), Continuous Integration/Continuous Development (CI/CD), and Artificial Intelligence/Machine Learning (AI/ML) are important for software-based cloud-native networks across all stages of network deployment (Figure 1).

Network automation in steps
The stages of network deployment to automate are as follows:
1. Cloudification is the foundation for the initial stage of setting up the network environment. The RAN architecture needs to be cloud-native for automation to take place. Cloudification through cloud-native functions (containers and microservices) is the foundation of Open RAN and will help with effective automation. To optimize performance, enterprise software implementation went from monolithic, self-contained applications running on dedicated servers to a new model built on webscale principles, and it eventually evolved to microservices. A microservices architecture decomposes an application into separate parts, each running in a lightweight "container" environment such as Docker, rkt, or Linux LXD. Virtual machines (VMs), burdened with a whole OS, are simply too bulky to host microservices. By deconstructing a RAN service into microservices, you can address a performance issue by spinning up multiple instances of the affected RAN microservice. Figure 2 shows how virtualization has evolved. Different RAN function components can be implemented as separate microservices rather than as one monolithic VM. Thus, they can scale up to optimize the RAN function's performance. Each microservice can be deployed, upgraded, scaled, and restarted independently of other microservices in the RAN application using an automated system, enabling frequent updates to live applications without impacting service-level agreements (SLAs). A microservices architecture also lets mobile operators push

EUGINA JORDAN, PARALLEL WIRELESS

out RAN upgrades to as many sites as needed, as testing a microservice involves few test cases. In addition, a microservices architecture supports an agile DevOps model. DevOps combines software development (Dev) and information-technology operations (Ops). Its goal is to shorten the systems-development lifecycle and provide continuous delivery with high software quality. The DevOps movement, which inspired large enterprise organizations with agile practices, let developers make quick changes. It was, however, difficult to get the full benefit because legacy development processes could not support short software-delivery cycles and frequent production releases. The DevOps movement developed the CI/CD methodology to release software into production quickly, reliably, and repeatably. Figure 3 shows the software distribution process. DevOps processes let mobile operators push out RAN upgrades without taking down entire sites, as testing a microservice involves very few test cases, whereas testing an entire monolithic (though virtualized) application takes many days. A RAN function cannot be automated unless it's containerized and based on microservices. With that in place, network deployment automation can move to the next step.
2. Bringing up a radio site, which includes commissioning and provisioning services. ZTP is best utilized at this stage. With ZTP, a mobile operator need not perform any manual tasks to configure cell sites.
3. Once sites are configured, you can apply CI/CD to automate updates and reduce on-site or data-center manual labor. By reducing or eliminating the need to send engineers on-site, mobile operators can reduce costs. CI/CD is a key enabler for automating network testing and upgrades.
4. Fourth is the optimization stage, which involves intelligent automated optimization of the network while providing services to subscribers. AI/ML plays an important role here.
Open RAN networks natively include an AI/ML framework in the RAN architecture through the Near-RT and Non-RT RAN Intelligent Controller (RIC) functions. The RIC hosts

Stages of Network to Automate

Figure 1. Network automation uses zero-touch provisioning, Continuous Integration/Continuous Development, and AI/ML to bring upgrades to the radio access network. Source: Parallel Wireless.



Evolution of Virtualization

Figure 2. Network virtualization has moved from running on dedicated hardware to software containers on a COTS server.

microservices-based applications called xApps for the Near-RT RIC and rApps for the Non-RT RIC. With the help of rApps and xApps, Open RAN integrates AI/ML-based decision making into the network. See "What is a RAN Intelligent Controller?" AI models fall into two categories: supervised and unsupervised learning. Being real-time, cellular networks prefer unsupervised learning models to eliminate continuous model training. The Near-RT RIC should include AI as an xApp responsible for predicting, preventing, and mitigating situations (e.g., handovers) that affect customer experience. AI needs to be in the near-real-time RIC because it will drive time-sensitive decisions for network performance. All xApps should use unsupervised learning. AI software will use algorithms created by ML running as an rApp in the non-real-time RIC. Any algorithms and training can be built in non-real time. The reinforcement of those decisions needs to happen in real time via AI. An ML rApp in the non-real-time RIC will help the AI xApp in the near-real-time RIC recognize traffic patterns and abnormalities. The rApp can then adjust network health, provisioning the appropriate RAN resources for an optimal subscriber experience.

AI/ML will enable proactive action and the ability to accurately predict future network behavior. Based on those predictions, the network can take preventive action to avoid similar situations.
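As a concrete illustration of the anomaly-detection responsibility listed below, here is a minimal unsupervised check of the kind an xApp might run on a KPI stream. The z-score method, threshold, and KPI values are illustrative assumptions, not a description of any vendor's xApp.

```python
import statistics

def detect_anomalies(kpi_series, threshold=2.5):
    """Return indices of KPI samples more than `threshold` standard
    deviations from the mean -- a minimal unsupervised anomaly check
    that needs no labeled training data."""
    mean = statistics.mean(kpi_series)
    stdev = statistics.stdev(kpi_series)
    return [i for i, v in enumerate(kpi_series)
            if stdev > 0 and abs(v - mean) / stdev > threshold]

# Steady traffic with one spike at index 6.
kpis = [50, 52, 49, 51, 50, 48, 200, 51, 49, 50]
assert detect_anomalies(kpis) == [6]
```

A real rApp would train a richer model offline in the non-real-time RIC and hand its parameters to the xApp, but the unsupervised principle (flag deviations from learned normal behavior without labels) is the same.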

Automation enablers
ZTP ensures that sites are configured quickly and automatically. Such automation reduces or eliminates the need to send engineers on-site. ZTP will be critical for dense 5G deployments, when network operators need to configure hundreds of sites. Once sites are configured, CI/CD can automate updates and reduce manual labor on-site or in the data center through automated software-upgrade distribution. IT and enterprise industries have used CI/CD frameworks for years, and now the practice is coming to the telecom sector. There are two important factors to keep

in mind when adopting CI/CD for Open RAN. The first factor is the disaggregation itself, as hardware and software components come from different vendors. The second consideration is the physical components, such as servers and radios, in the RAN. When applying CI/CD models to RAN upgrades, they need to feed holistically into the overall CI/CD strategy across all network segments: RAN, transport, and core. So, in addition to creating a cohesive RAN CI/CD strategy, a mobile operator needs to create an overall network CI/CD strategy. DevOps and CI/CD enable fast software changes. The updates delivered to sites can be monitored to evaluate how they impact end users and whether they are achieving the predetermined business goals. The integration, software upgrades, and lifecycle management of these disaggregated software components running on COTS hardware enable a new testing model. Testing software from the different groups within an organization need not take place in silos, but rather under an overall CI/CD umbrella. Thus, CI/CD can reduce development time from hours to minutes, eliminating most of the manual tasks. This approach will help with creating CI/CD blueprints for future deployments, resulting in a more interconnected ecosystem of Open RAN vendors. By implementing CI/CD, mobile operators embrace greater collaboration among

Figure 3. A DevOps model uses a process of continuous development, testing, and deployment. Source: GitLab

AI/ML algorithms will be responsible for:
• forecasting parameters;
• detecting anomalies;
• predicting failures;
• projecting heat maps;
• classifying components into groups.



different ecosystem members. Such an ecosystem supports multi-vendor, cloud-native network-function onboarding and lifecycle management. This approach minimizes risk through frequent delivery of new features and optimizations while increasing efficiency via automation, leading to faster introduction of new services. These open automation tools enable access to vendor-neutral sets of applications. The combined power of containers and CI/CD: agile DevOps simplifies automation by providing validated stack templates for containers to host microservices. These upgrades will be automated with CI/CD. The combination of software being pushed via CI/CD to containers allows MNOs to easily define their own architecture and makes Open RAN easier and more cost-effective to deploy and maintain. The main benefit will be sites running as a service, with software updates pushed to hundreds of sites automatically instead of being scheduled for upgrade whenever a crew is available to go on-site and upgrade manually. CI/CD can also automate testing and upgrades. Implementing a CI/CD model in the telecom industry helps migrate the testing, integration, software release, and actual software deployment of the RAN from manual fieldwork to automated, remote deployment. Manual on-site upgrades are subject to mistakes, and the maintenance window is short. With automation, mistakes are eliminated and the time window is expanded. If there is an issue with infrastructure, automation enables moving the application to another data center or to the edge, depending on the application. Rollbacks for a failing application or container are automated, so the latest stable version is always available, minimizing downtime and any impact on the end user.
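The automated upgrade-with-rollback behavior described above can be sketched as a simple loop over sites. This is an illustrative toy, not an actual CI/CD pipeline: the site names, version strings, and health-check function are all hypothetical.

```python
def rolling_upgrade(sites, new_version, health_check):
    """Push `new_version` to each site; roll a site back to its last
    stable version if the post-upgrade health check fails.

    `sites` maps site name -> currently deployed stable version.
    `health_check(site, version)` returns True when the site is healthy.
    """
    results = {}
    for site, stable in sites.items():
        if health_check(site, new_version):
            results[site] = new_version
        else:
            results[site] = stable   # automated rollback to last stable
    return results

sites = {"site-a": "1.0", "site-b": "1.0", "site-c": "1.0"}
healthy = lambda site, version: site != "site-b"   # simulate one failure
assert rolling_upgrade(sites, "2.0", healthy) == {
    "site-a": "2.0", "site-b": "1.0", "site-c": "2.0"}
```

Real pipelines add canary ordering, delays between batches, and observability hooks, but the core guarantee (a failed site ends up on its last known-good version without human intervention) is the one shown here.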

A clear automation strategy, utilizing the RIC and defined processes across CI/CD, ZTP, AI/ML, and analytics, will help mobile operators move into a fully automated RAN world, which is key when RAN components come from different vendors, as with Open RAN. The scope of work is the same as with legacy RAN; what differs is the number of suppliers that will be part of the Open RAN ecosystem. Automation of configuration with ZTP and automation of ongoing maintenance with CI/CD and AI/ML will help mobile operators realize the promise of Open RAN, avoiding vendor lock-in while increasing efficiency, providing better resource utilization, and driving down overall TCO.

What is a RAN Intelligent Controller?
A RAN Intelligent Controller (RIC) helps operators optimize and launch new services by allowing them to make the most of network resources. It also helps operators ease network congestion. The RIC is cloud-native and a central component of an open and virtualized RAN network. See the summary of use cases in the table below. The RIC provides advanced control functionality, which delivers increased efficiency and better radio resource management. These control functionalities leverage analytics and data-driven approaches, including advanced machine learning and artificial intelligence (ML/AI) tools, to improve resource-management capabilities.

RIC Use Cases and Key Applications


A RIC manages traffic, beamforming, and energy use in a radio access network. Source: Parallel Wireless.




MEASUREMENTS SHOW 5G IMPROVES LATENCY IN PUBLIC NETWORKS MEASUREMENTS ON PUBLIC NETWORKS AT OUR FACILITIES DEMONSTRATE HOW 5G’S LOWER LATENCY COMPARED TO LTE CAN IMPROVE INDUSTRIAL APPLICATIONS.

MEIK KOTTKAMP, ROHDE & SCHWARZ

| Network | Duplexing | Band | DL-ARFCN (CF) | UL-ARFCN (CF) | BW (UL & DL) |
|---------|-----------|------|----------------|----------------|---------------|
| LTE | FDD | b1 | 475 (2157.5 MHz) | 18475 (1967.5 MHz) | 15 MHz |
| LTE | FDD | b3 | 1300 (1815 MHz) | 19300 (1720 MHz) | 20 MHz |
| LTE | FDD | b7 | 3050 (2650 MHz) | 21050 (2530 MHz) | 5 MHz |
| LTE | FDD | b8 | 3749 (954.9 MHz) | 21749 (909.9 MHz) | 5 MHz |
| 5G NR | FDD | n1 | 431554 (2157.8 MHz) | 393560 (1967.8 MHz) | Up to 20 MHz |
| 5G NR | TDD | n78 | 642554 (3653.3 MHz) | 642554 (3653.3 MHz) | Up to 100 MHz |

5G ultra-reliable low-latency communication (URLLC) use cases are generally associated with automotive and industrial applications, such as sending control commands to a mobile robot or automated guided vehicle (AGV). These applications depend on stable, reliable, and always-available communication links with low latency to achieve fast reactions. 3GPP, the standardization committee responsible for 5G New Radio (NR), provided performance results as part of its self-evaluation reports, including comprehensive performance data with latency results. The data shows round-trip delays for both the downlink and uplink directions within the cellular network. Those delays are based on simulations using a small, fixed packet size, generally not identical to the data rates applied in deployed networks. Do real-life networks deployed today perform as promised? To find out, we made some measurements. Most public 5G networks deployed so far use the 5G non-standalone (NSA) architecture, anchored to LTE networks. Here, we report our measurement results for both one-way
latency (OWL) and round-trip time (RTT) delays in a commercially deployed, public 5G NSA network. Additionally, we compare those with LTE latency performance at the same locations. We made all measurements around the Rohde & Schwarz site in Munich, Germany. Table 1 lists the frequency bands deployed in the area. (ARFCN stands for Absolute Radio Frequency Channel Number.) We used a Samsung S20 device with a commercial Deutsche Telekom SIM card, providing access to LTE and 5G services. The smartphone ran our QualiPoc Android measurement app. With respect to OWL, we used a prototype with connected GPS resources at the transmitting and receiving ends.

Table 1. Frequency bands and channels that were allocated during measurements.

Latency depends on the data service (packet size and transmission interval). Typical tests use ping measurements with 32-byte packets, which do not reflect a real application. Therefore, our measurements applied data rates of 100 kb/sec, 1 Mb/sec, and 15 Mb/sec. The results reveal a clear impact of data rate on latency performance. As expected, we noticed lower latency with 5G NR compared to LTE. Most of the improvements occur in the cellular uplink (UL) direction, for packets sent from the source in the user device through the radio access and core network to the receiving entity in the server. Latency increases as data rates increase,
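The channel numbers in Table 1 map to carrier frequencies through fixed formulas. A minimal sketch, assuming the standard 3GPP channel-raster formulas (TS 36.101 for LTE EARFCN, TS 38.104 for the NR global frequency raster below 3 GHz); the band-offset table here covers only the LTE bands listed above, and the function names are illustrative:

```python
# Sketch: recover Table 1 center frequencies from the channel numbers.
# LTE downlink: F_DL = F_DL_low + 0.1 MHz * (EARFCN - N_offs)       (3GPP TS 36.101)
# NR below 3 GHz: F = 0.005 MHz * NR-ARFCN (global frequency raster, 3GPP TS 38.104)
LTE_DL_OFFSETS = {  # band: (F_DL_low in MHz, N_offs)
    "b1": (2110.0, 0),
    "b3": (1805.0, 1200),
    "b7": (2620.0, 2750),
    "b8": (925.0, 3450),
}

def lte_dl_mhz(band: str, earfcn: int) -> float:
    f_low, n_offs = LTE_DL_OFFSETS[band]
    return f_low + 0.1 * (earfcn - n_offs)

def nr_mhz_below_3ghz(nr_arfcn: int) -> float:
    return 0.005 * nr_arfcn

print(lte_dl_mhz("b1", 475))       # 2157.5 MHz, matching the b1 row
print(nr_mhz_below_3ghz(431554))   # ~2157.8 MHz, matching the n1 downlink row
```

Running the same conversion over the other LTE rows reproduces the center frequencies in the table.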

Figure 1. 5G NR measurements (RTT) in the “high” traffic load pattern.



LATENCY ANALYSIS

| Traffic pattern | Packets per second | Packet size | Target bandwidth |
|-----------------|--------------------|-------------|------------------|
| Constant Low | 125 | 100 bytes | 0.1 Mb/sec |
| Constant Medium | 200 | 650 bytes | 1 Mb/sec |
| Constant High | 1300 | 1450 bytes | 15 Mb/sec |

Table 2. List of traffic load patterns used in the measurement campaign.

with generally significantly better performance in downlink (DL) than in uplink (UL). Best-case 5G NR measurements revealed less than 7 msec OWL in DL for a 100 kb/sec data rate service. Worst-case 5G NR measurements showed around 18 msec OWL in UL for a 15 Mb/sec data service.
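As a sanity check, the target bandwidths in Table 2 follow directly from the packet rate and packet size; a quick sketch:

```python
# Sketch: verify the Table 2 target bandwidths from packet rate and packet size.
patterns = {
    "Constant Low":    (125, 100),     # packets/sec, packet size in bytes
    "Constant Medium": (200, 650),
    "Constant High":   (1300, 1450),
}

for name, (pps, size_bytes) in patterns.items():
    mbps = pps * size_bytes * 8 / 1e6   # bits per second -> Mb/sec
    print(f"{name}: {mbps:.2f} Mb/sec")
```

The products land at 0.10, 1.04, and 15.08 Mb/sec, consistent with the nominal 0.1, 1, and 15 Mb/sec targets.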

Cellular network and measurement campaign
We analyzed latency at several locations. Measurements included interactivity tests, which measure round-trip time (RTT), as well as one-way latency measurements that let us measure delay in the UL and DL directions separately. Although we performed initial measurements in locations with 5G NR network coverage, we turned off the phone's 5G capabilities to compare 5G to LTE. Table 1 provides an overview of the LTE and 5G frequencies we identified in the coverage area. Figure 1 shows the measurement area with initial 5G NR RTT results. The colors indicate latencies at their locations. We ran a scouting phase and multiple measurement campaigns: initially getting a basic understanding of coverage and latency performance, then executing campaigns at the most reliable location that allowed both LTE and 5G NR measurements. The QualiPoc Android tool let us send

data streams with the traffic characteristics shown in Table 2. The packets sent from the device are received and immediately reflected by a server using the Two-Way Active Measurement Protocol (TWAMP). The TWAMP server is located in one of the Rohde & Schwarz offices and is accessible by a public IP address. Figure 2 illustrates the test setup. Note that although we focused our campaign on assessing LTE and 5G NR latency performance, the solution is agnostic to the underlying wireless technology. Figure 2 additionally describes the prototype test setup applied for OWL measurements. In this case, we used dedicated software on the server and the device for the tests. Furthermore, additional GPS sources at the transmitting and receiving ends provide a pulse used during pre-synchronization and during the actual measurement phase. Consequently, the OWL test duration is longer than that of the interactivity test. To ensure a fair comparison of RTT and OWL results, we concentrated on stationary tests.

Figure 2. The test setup uses a smartphone app for RTT and OWL measurements.

Results
To illustrate the distribution of our data, we chose a box plot representation (see Figures 4, 5, and 6). The box plots show not just the central tendency (median, mean, and mode) but also the data spread. Furthermore, box plots helped us identify the outliers inside our data sets. Based on statistical theory, outliers of a normal distribution correspond to 0.7% of the whole data set. Outlier values in many cases indicate a measurement error, which we chose to exclude from our results.
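The article screens outliers using the normal-distribution 0.7% rule; a common, closely related box-plot convention is Tukey's whisker rule, sketched below. The function name and the sample values are illustrative, not from the measurement campaign:

```python
import statistics

def tukey_filter(samples, k=1.5):
    """Flag values outside [Q1 - k*IQR, Q3 + k*IQR], the usual box plot whisker rule."""
    q1, _, q3 = statistics.quantiles(samples, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    kept = [s for s in samples if lo <= s <= hi]
    outliers = [s for s in samples if s < lo or s > hi]
    return kept, outliers

# Hypothetical RTT samples in msec, with one measurement glitch:
rtts = [21.0, 22.5, 20.8, 23.1, 21.9, 22.2, 95.0, 21.4]
kept, outliers = tukey_filter(rtts)
print(outliers)  # the 95.0 msec glitch is flagged for exclusion
```

Excluding such glitches before computing medians keeps a single reflection timeout from skewing the comparison.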

Figure 3. Locations of the second measurement phase (map shows mean RTT values for the "Medium" traffic load pattern at each location).
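Behind both the interactivity (RTT) and OWL tests sits simple timestamp arithmetic. A sketch with hypothetical timestamps; the function names are illustrative, and the server-time subtraction follows the general TWAMP reflection idea described above:

```python
# Sketch of TWAMP-style timestamp math behind the RTT measurement.
# t1: device send, t2: server receive, t3: server reflect, t4: device receive.
def rtt_ms(t1, t2, t3, t4):
    """Round-trip time with the server's processing (reflection) time removed."""
    return (t4 - t1) - (t3 - t2)

def owl_ms(t_send, t_recv):
    """One-way latency; requires GPS-synchronized clocks at both ends."""
    return t_recv - t_send

# Hypothetical timestamps in msec:
print(rtt_ms(0.0, 11.2, 11.5, 21.0))  # 20.7 msec over the air, 0.3 msec in the server
print(owl_ms(0.0, 11.2))              # 11.2 msec uplink OWL
```

The OWL variant is why GPS pulses at both ends matter: without synchronized clocks, only the round trip is measurable.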



We scouted the environment first, executed measurements at five locations over several weeks, and recorded comprehensive results. For this article, we concentrate on the LTE versus 5G NR comparison. For that reason, we chose two locations that provided stable LTE and 5G NR results, as shown in Figure 3. It's worth noting that we expected the sum of the OWL-UL and OWL-DL not to exceed the RTT delay, since the RTT also includes additional delays within the server (reflection, processing time). We merged all our NR data samples for these two locations (L1-NR, L2-NR) and all our LTE data samples (L1-LTE, L2-LTE), creating a large data set that gives an averaged overview of the latency performance of 5G NR versus LTE. We did not identify significant changes depending on the day of the week or the time of day the measurements were executed. Figures 4, 5, and 6 show the overall results for the different data rate services. Table 3 summarizes the latency performance improvement for all traffic patterns.

Figure 4. Latency box plots per technology used for comparing RTT with OWL results for “Low” traffic pattern.

Conclusions and outlook
Our measurement results led us to several conclusions.

Figure 5. Latency box plots per technology used for comparing RTT with OWL results for “Medium” traffic pattern.

• In general, latency in the UL direction is higher than in the DL direction.
• The improvement that we get from 5G in terms of OWL latency is predominantly in the UL direction.
• In the high traffic load pattern, we get less overall OWL improvement compared to the RTT improvement.
• We notice more improvement in the DL direction as the traffic load increases.
• The RTT measurements are consistently higher than the sum of OWL-UL and OWL-DL, reflecting the impact of the reflecting server.

The presented analysis illustrates the results for an example commercially deployed network; measurements were executed in Munich in summer and fall 2021. 5G NR network deployment is an ongoing process, which may result in improvements due to increased coverage and additional installed capacity. We believe that latency performance is of particular interest in private 5G NR deployments, specifically if a private 5G network is used in an industrial environment to support use cases such as process automation, remote control of automation equipment, or operation of mobile robots and AGVs. In these deployments we expect better flexibility to adapt network settings to the target applications of interest. Rohde & Schwarz has deployed its own private 5G network at one of its manufacturing sites, in Teisnach, Germany. We will continue our measurement campaign in our own private 5G network and intend to present the results in a similar report.

Figure 6. Latency box plots per technology used for comparing RTT with OWL results for “High” traffic pattern.

| Traffic | RTT improvement | Total OWL improvement | OWL-UL improvement | OWL-DL improvement |
|---------|-----------------|-----------------------|--------------------|--------------------|
| Low | 6.45 msec | 6.38 msec | 6.35 msec (99.53%) | 0.03 msec (0.47%) |
| Medium | 8.00 msec | 8.05 msec | 7.59 msec (94.29%) | 0.46 msec (5.71%) |
| High | 6.60 msec | 4.68 msec | 3.04 msec (64.96%) | 1.64 msec (35.04%) |


Table 3. Summary of the latency performance improvements (median values) that the 5G network provides compared to LTE.
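The percentage columns in Table 3 are simply each direction's share of the total OWL improvement; a quick check:

```python
# Sketch: check that the UL/DL percentage split in Table 3 follows from the OWL values.
rows = {
    "Low":    (6.38, 6.35, 0.03),   # (total OWL, OWL-UL, OWL-DL) improvement in msec
    "Medium": (8.05, 7.59, 0.46),
    "High":   (4.68, 3.04, 1.64),
}

for name, (total, ul, dl) in rows.items():
    print(f"{name}: UL share {ul / total:.2%}, DL share {dl / total:.2%}")
```

The computed shares reproduce the tabulated 99.53%/0.47%, 94.29%/5.71%, and 64.96%/35.04% splits, and make the trend visible: the DL share of the improvement grows with traffic load.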



PRIVATE NETWORKS

CUSTOMIZE PRIVATE 5G NETWORKS FOR APPLICATION REQUIREMENTS 5G PRIVATE NETWORKS SHOW POTENTIAL FOR A WIDE RANGE OF DIVERSE APPLICATIONS. ENGINEERS CAN IMPLEMENT FEATURES TO ADDRESS THE REQUIREMENTS OF MARKETS SUCH AS MANUFACTURING AND NON-TERRESTRIAL NETWORKS.

5G private networks embrace a wide range of diverse applications, each requiring a different blend of technical features and capabilities. A private network can be built from a customizable set of baseline technologies comprising a silicon platform, layer 1 PHY, and layer 2 and 3 protocol stack software. Such customization lets systems address the vertical use cases defined in the 3GPP 5G specifications, or extend capabilities beyond them where applicable. Why should engineers developing private networks consider 5G? And how should they approach their radio access network (RAN) design, depending on the application they're addressing?

Private 5G network requirements

PAUL A. MOAKES, COMMAGILITY

3GPP defines 5G based on three key use cases:
• Enhanced mobile broadband (eMBB);
• Massive machine-type communications (MMTC);
• Ultra-reliable and low-latency communications (URLLC).

For applications using 5G private networks, the main area of interest is often URLLC. For manufacturing and industrial IoT (IIoT) use cases, high reliability is essential to avoid expensive downtime. Low latency is just as important, for example, in industrial automation on a production line or for self-driving trucks or robots in a distribution warehouse. Reduced Capability (RedCap) systems, introduced in 3GPP Release 17, can reduce the cost and power of devices connected to a base station to support large numbers of sensor devices.

If we consider satcom as another transport network, then eMBB will enable satellites to handle video over 5G, potentially at the edge of existing networks to support traffic-offload applications or to fill gaps in coverage. For IoT applications, the MMTC capability will enable connectivity via satellites for remote locations and connected cars, ships, or trains [Ref 1]. The 3GPP standards provide capabilities aimed at these kinds of vertical applications. For example, 3GPP Release 17, planned for June 2022, adds specific new enhancements both for non-terrestrial networks (NTN) and for URLLC in industrial IoT applications. 3GPP Release 18, due in 2024, will add more capabilities, including more enablers for factories and connected IoT devices, and enhancements in radio resource control (RRC) for NTNs [Ref 2]. Not all commercial devices will, however, support these features. For many use cases, engineers will want to evaluate whether going beyond the 3GPP standard enables them to better meet their requirements or create a differentiator.

Manufacturing and IIoT
The manufacturing sector is a major market for private LTE/5G networks, both now and in the future. Analysys Mason forecasts that the sector will account for more than 40% of private LTE/5G network expenditure in 2026, helping manufacturers to increase automation and efficiency [Ref 3].

Figure 1. The O-RAN logical architecture shows how the RAN functions are split into separate components. Source: O-RAN Alliance.



3GPP is working to include capabilities aimed at industrial applications in the 5G standard. Indeed, Release 16 includes specific support for non-public, or private, networks, as well as integration of 5G with the IEEE 802.1 specifications for Time-Sensitive Networking (TSN) [Ref 4]. These are important for many industrial and manufacturing applications. Private network support includes self-assignment of network IDs within the private network, or coordinated ID assignment where globally unique IDs can be allocated. These IDs enable network selection and reselection, overload control, access control, and barring, giving the private network owner full control over security and network resource usage.

Adding TSN capabilities enables communication with devices that have high-availability and reliability features, along with deterministic latency. Support for strict synchronization is important for precision control, accuracy, and improvements in process speed. These features broaden how 5G private networks can be used in industry and manufacturing, particularly as the shift to Industry 4.0 adds further demands.

Of course, industrial applications aren't just about production lines and machine tools. A smart factory can include robots handling multiple tasks, self-driving vehicles, and augmented reality (AR) headsets for employees. All of these add new demands for the bandwidth, reliability, and low latency offered by 5G, with a private network the most dependable way to deliver these capabilities.

Figure 2. The O-RAN high-level architecture shows how external system data can be fed to the SMO framework. Source: O-RAN Alliance.

Non-Terrestrial Networks
For NTN, particularly satellite, we need a system that can handle issues such as higher latency, high levels of interference, and multiple parallel channels. Enhancements include increased timer values to handle larger latencies, enhanced uplink channel scheduling, and modified random-access channel procedures. One of the challenges is a large Doppler shift in frequency that varies over time due to satellite movement. To compensate for this Doppler shift, each device on the network must calculate its own frequency adjustment. This requires changes to timing relationships, the Hybrid Automatic Repeat Request (HARQ) process, and uplink synchronization. Satellites incur longer propagation delays than many other links because of the much longer distances involved. This can make it difficult to achieve the low latencies associated with 5G, which can be as low as 1 msec. To maintain a subjectively good user experience, engineers may need to take a network-level view, ensuring content is distributed in advance as close as possible to the consumer, thus keeping overall latency as low as possible. For example, they can evaluate the use of geostationary, mid-Earth, or low-Earth orbit satellite networks and whether a transparent (bent pipe) or regenerative payload option makes sense. NTNs have been specifically included in the 3GPP 5G RAN standards from Release 15 onwards. 3GPP Release 17 assumes that a GPS/GNSS location capability is present to enable the frequency adjustment calculation for Doppler shift, but the GNSS signals may be weak or even absent at times, making this difficult [Ref 5]. 3GPP is considering the regenerative payload option for standardization in Release 18 with three potential design


options, including Open RAN architectures. These are:
• Full gNodeB on board;
• gNodeB Centralized Unit (gNB-CU) on the ground, gNodeB Distributed Unit (gNB-DU) on board;
• gNB on the ground, low layer split (LLS), Radio Unit (RU) on satellite.
All of these options have advantages and disadvantages that network designers must consider, and they involve substantial modifications to the protocol stack compared with a terrestrial network.
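The magnitude of the Doppler problem described above can be estimated from orbital mechanics. A minimal sketch, with illustrative assumptions (a 600 km LEO orbit and a 2 GHz carrier; the worst case takes the full orbital velocity along the line of sight):

```python
import math

# Sketch: worst-case Doppler shift seen from the ground for a LEO satellite.
C = 299_792_458.0        # speed of light, m/s
MU = 3.986004418e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0    # mean Earth radius, m

def max_doppler_hz(altitude_m, carrier_hz):
    """Upper bound: full circular-orbit velocity along the line of sight."""
    v_orbit = math.sqrt(MU / (R_EARTH + altitude_m))  # ~7.6 km/s at 600 km
    return carrier_hz * v_orbit / C

shift = max_doppler_hz(600e3, 2e9)
print(f"up to ~{shift / 1e3:.0f} kHz")  # tens of kHz at S-band
```

Shifts on this scale, tens of kHz that sweep through zero as the satellite passes overhead, are why each device must compute its own frequency pre-compensation rather than rely on a fixed offset.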

Customizing 5G systems
Private networks in specific applications add new demands. By their very nature, they permit system designs that step outside the regular network standards: if you are building a closed, private infrastructure, you can make your own decisions about where to extend the envelope. In practice, network engineers are unlikely to have the capabilities, time, or inclination to re-invent the wheel, particularly as the complexity of 5G makes building a dedicated private network a non-trivial task. The best answer will often be to start with a system that includes hardware and software that gets close to the target requirements, then customize the features as needed. Engineers can undertake this themselves or work with third parties who can provide the design resources needed. This means that when starting a new design, engineers can aim high and work toward their ideal system, rather than having to settle for the closest match from commercial off-the-shelf (COTS) products.

Figure 3. Multiple software modules can be split in different ways across the available hardware processing resources to meet the demands of particular applications.

Open RAN
The shift to open specifications increases flexibility for design engineers, letting them partition their application across different hardware and software blocks in the way that suits best, and pick "best of breed" subsystems for each part. One of the biggest trends is toward an Open RAN architecture, spearheaded by the O-RAN Alliance [Ref 6]. This aims to make the RAN more open, scalable, and interoperable; standard specifications enable a flexible, multi-vendor network. Figure 1 shows the logical architecture of the O-RAN model, with the gNodeB split into central units (CU) and radio units (RU). Additional distributed units (DU) can be added as required. With the gNodeB segmented this way, rather than built as a single monolithic block, it can be easier to adapt to the specific requirements of private networks. For example, one part of a system may require ultra-low latency while another does not. This split model gives engineers the flexibility to allocate RAN resources accordingly. For industrial and manufacturing applications, the O-RAN architecture provides new technical capabilities. For example, external data from an industrial control panel can be sent as an input to the service management and orchestration (SMO) platform and used by an AI/ML model to prioritize important data over low-priority traffic (Figure 2), thus improving Quality of Service (QoS) [Ref 7]. For the satcom market, the O-RAN architecture provides flexibility in the size, weight, and power of the non-terrestrial equipment using the alternative architectures mentioned above.

In practice, the O-RAN architecture enables multiple different ways of splitting the components across the available hardware and software resources. Figure 3 shows three possible options for splitting functions in the RAN. Option 2 creates a DU that includes the radio interface. This option is used mainly to separate the control-plane and user-plane functionality between the CU and DU. Use cases focus on non-real-time applications with increased latency and reduced bandwidth. Option 6 is a MAC/PHY split where the MAC resides with the CU and the RU includes the full PHY stack. It is standardized by the Small Cell Forum 5G network Functional Application Platform Interface (5G nFAPI) specifications. This split is useful where only centralized scheduling is required; it also adds support for multi-vendor RUs with different radio capabilities. Standardized by the O-RAN Alliance, Option 7.2 remains the most popular split. This option moves most time-domain PHY functions that can be virtualized out of the radio unit and into the DU, leaving only the beamforming and FFT functions in the RU. This split works well for URLLC and carrier aggregation but requires a fronthaul that supports time-synchronous networking, such as eCPRI.

For many applications, 5G private networks hold the promise of reliable, high-performance connectivity that can provide the required low latency and high data rates. By customizing the 5G RAN, going beyond the 3GPP standards where necessary, engineers can build systems that meet their requirements accurately and efficiently.
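The three split options discussed above can be summarized as a small lookup table. A sketch for comparison purposes; the "PDCP/RLC" label for Option 2 comes from 3GPP's general split nomenclature and is an assumption, not stated in the article:

```python
# Sketch: the three functional-split options, as a comparison table.
SPLITS = {
    "Option 2": {
        "boundary": "PDCP/RLC",  # control/user-plane separation between CU and DU (assumed label)
        "specified_by": "3GPP",
        "use_case": "non-real-time applications; increased latency, reduced bandwidth",
    },
    "Option 6": {
        "boundary": "MAC/PHY",   # full PHY stack stays in the RU
        "specified_by": "Small Cell Forum (5G nFAPI)",
        "use_case": "centralized scheduling; multi-vendor RUs",
    },
    "Option 7.2": {
        "boundary": "low PHY",   # only beamforming and FFT remain in the RU
        "specified_by": "O-RAN Alliance",
        "use_case": "URLLC and carrier aggregation; needs time-synchronous eCPRI fronthaul",
    },
}

for name, info in SPLITS.items():
    print(f'{name}: {info["boundary"]} split, {info["specified_by"]}')
```

A table like this is also a reasonable starting point for a design checklist: pick the split, then derive the fronthaul and scheduling requirements from it.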

References
1. "Spend on private LTE/5G networks will be small but an important opportunity for future IoT growth," Analysys Mason, March 29, 2021. https://www.analysysmason.com/research/content/articles/private-networks-iot-rdme0-rma17/
2. "Satellite Communication Services: An integral part of the 5G Ecosystem," EMEA Satellite Operators Association, July 2020. https://gsoasatellite.com/wp-content/uploads/2020-11-5G-Ecosystems-UPDATE-NOV-2020.pdf
3. 3GPP Release 18 Features, https://www.3gpp.org/ftp/Inbox/Marcoms/Relase_18_features_tsg94_v09.pdf
4. "Private LTE/5G networks: worldwide trends and forecasts 2021–2026," Analysys Mason, March 17, 2021. https://www.analysysmason.com/research/content/regionalforecasts-/private-5g-networks-forecast-rdme0-rma18-rma17/
5. "5G for Industry 4.0," 3GPP, May 13, 2020. https://www.3gpp.org/news-events/2122tsn_v_lan
6. 3GPP Release 17, https://www.3gpp.org/release-17
7. O-RAN Alliance, https://www.o-ran.org



5G HANDBOOK

5G RESHAPES MOBILE RF DESIGN STRATEGIES

STEEP LEARNING CURVES IN RF MMWAVE ANTENNA DESIGN DEMAND COLLABORATION AND TECHNOLOGY UNDERSTANDING BEYOND SUB-6 GHZ STRATEGIES.

5G shifts us from the connected era to the data era. Where wireless strategies were once about connecting everything, 5G delivers information and empowers services that improve our lives. For engineers and designers, this advancement comes at the cost of increased complexity, unique engineering challenges, and critical design considerations. Mobile carrier build-outs focus on both the sub-6 GHz and millimeter-wave (mmWave) spectrum. The two spectrum ranges are, however, inherently different in terms of engineering, performance, deployment, and purpose. Antenna design, manufacturing processes, RF leakage, and a dearth of engineering expertise all contribute to 5G's deployment challenge. Sub-6 GHz frequencies easily coexist with LTE technologies. The sub-6 GHz spectrum is integral to mobile carrier networks. Some equate sub-6 GHz to high-performing LTE, with faster data rates and more geographical reach than mmWave. Unlike mmWave, sub-6 GHz transmits through buildings, walls, and terrain, real-world characteristics that challenge mmWave performance. On this

landscape, sub-6 GHz sacrifices speed for signal consistency. Compared to sub-6 GHz signals, mmWaves don't travel as far. Buildings, trees, people, and even weather can interfere with mmWave signal integrity. Designing for mmWave-based 5G requires attention not only to the antenna but also to the feeds, traces, and connections that go into that antenna, which must be designed to efficiently handle frequencies above 40 GHz. End-to-end signal integrity is necessary for every high-frequency 5G device and critical to defining how well devices perform and utilize ultra-high-speed 5G signals. The progression of 4G devices is a good example. New materials, introduced for use in the flexible printed circuit (FPC) signal transfer lines and the antennas themselves, shifted from polyimides to liquid crystal polymers (LCPs) and then again to a modified polyimide. These transitions reflect cost reduction and an attempt to reduce signal loss. LCP and modified polyimide components save space in the phone design. Their bendable, flexible nature enables better routing. Add to the mix mmWave speeds that are 10-100x faster, and it becomes apparent that antenna designers must design for the whole solution to ensure signal integrity.

TIM GAGNON, MOLEX

mmWave antenna considerations
Where sub-6 GHz relies on non-array antennas (more traditional omnidirectional monopole/dipole styles), mmWave requires array antennas, a type most often used in military or scientific applications. Few antenna engineers have expertise in, say, deep-space radar communications. Even with a solid understanding of antenna engineering, engineers need a deeper skillset, one that includes beamforming and beam steering, which further increases design complexity and differs greatly from more traditional antenna design. (See Figures 1 and 2.) mmWave's higher frequencies mean its components are more sensitive to frequency and temperature variations, which can cause antenna detuning. Materials can affect the performance of antennas and other components in proximity on a device. Further, meeting 5G performance targets will require more than one array, so multiple systems compete for space in small devices.
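The beam steering mentioned above comes down to applying a progressive phase shift across the array elements. A minimal sketch, assuming a uniform linear array with half-wavelength spacing (the array size and steering angle are illustrative):

```python
import math

# Sketch: per-element phase shifts that steer a uniform linear array.
def steering_phases_deg(n_elements, spacing_wavelengths, steer_deg):
    """Progressive phase: phi_n = -360 * n * (d/lambda) * sin(theta), wrapped to [0, 360)."""
    s = math.sin(math.radians(steer_deg))
    return [(-360.0 * n * spacing_wavelengths * s) % 360 for n in range(n_elements)]

# Hypothetical 1x8 array, d = lambda/2, steered 30 degrees off broadside:
phases = steering_phases_deg(8, 0.5, 30.0)
print([round(p, 1) for p in phases])  # 90-degree steps between adjacent elements
```

Sweeping the steering angle and re-applying the phases is how the array "finds" a cell site or user, the behavior shown in Figure 1.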

Make smarter material choices
At the same time, the number of interconnects and antennas inside these devices has vastly increased, creating pressure for new thinking in how they are engineered. This is especially true as device manufacturers offer no relief on the necessary specs: performance demands keep increasing even as manufacturers reduce costs. Engineers must move away from materials unfriendly to mmWave. Furthermore, engineers must evaluate manufacturing processes for RF interconnects and antennas. Designs are moving away from traditional PCB-integrated or polyimide flexible circuit

Figure 1. Beam steering changes the direction of the antenna array's beam maximum from broadside (a) to some angle away from broadside (b) with the purpose of "finding" a cell site or user.



transmission lines and PC/ABS antennas. Instead, engineers opt for plated plastics and for molded and laminated materials made from low-loss LCPs. Moves to stamping, forming, and molding let engineers replace more expensive components with lower-cost components that may be friendlier for processing; however, engineers must be sure to select the correct dielectric materials (Figure 3). High-quality designs incorporate low-loss, low dielectric constant (Dk) substrates, creating direct value in antenna efficiency by minimizing thermal and dielectric loss. For example, in a massive MIMO (multiple-input, multiple-output) deployment featuring 32 transmit and 32 receive signals, the actual power per channel or per signal is very low, so transmission loss becomes critical. Here, you must balance any desire to use molded materials against their inherently higher dielectric constants, a fine line to walk in achieving top performance with a lower-cost material. Creating a high-precision simulated structure is a crucial step and requires knowledge of all material properties for accurate results.

Figure 2. An overlay plot shows the far-field gain of an antenna-in-package (AiP) designed for high-yield manufacturing.


Simulate surface roughness and antenna-covering sensitivity
In addition to simulating the dielectric properties of a substrate, microwave simulation tools can model the surface roughness of conductors early in the design process. From the simulations, you can understand the level of accuracy required. Simulation of various mmWave applications demonstrates the advantage of suitable surface roughness; when shifted to a degraded level, antenna efficiencies drop dramatically. Even variations as small as tenths of a millimeter may create significant performance changes. While design models and plating materials must be carefully considered and simulated for performance value, the process also reduces product iterations and accelerates development timelines. For evolving 5G, this is another potential hurdle for engineers accustomed to the inherent forgiveness of sub-6 GHz frequencies.
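Why roughness matters so much at mmWave can be estimated with the classic Hammerstad correction, which compares the RMS roughness to the skin depth. A sketch with illustrative numbers (copper, 0.5 um roughness, 28 GHz); this is one common first-order model, not the one any particular simulator uses:

```python
import math

# Sketch: Hammerstad's correction for extra conductor loss due to surface roughness.
# K = 1 + (2/pi) * atan(1.4 * (Rq / skin_depth)^2)
RHO_CU = 1.68e-8       # copper resistivity, ohm*m
MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

def skin_depth_m(f_hz, rho=RHO_CU):
    return math.sqrt(rho / (math.pi * f_hz * MU0))

def roughness_factor(rq_m, f_hz):
    return 1.0 + (2.0 / math.pi) * math.atan(1.4 * (rq_m / skin_depth_m(f_hz)) ** 2)

# At 28 GHz the copper skin depth is ~0.39 um, comparable to the roughness itself:
print(f"{roughness_factor(0.5e-6, 28e9):.2f}x conductor loss")  # roughly 1.7x
```

Because skin depth shrinks as 1/sqrt(f), the same copper finish that is benign at sub-6 GHz can nearly double conductor loss at mmWave, which is why roughness belongs in the simulation model from the start.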


Antenna coverings used in macro and small-cell base stations also require special consideration because mmWave elements are very sensitive to these materials. Reflections and interference may result when the antenna radiates mmWave transmissions through the covering. Materials must be kept very thin and have a low Dk, so that reflections stay small in amplitude. Ideally, the covering material should be only a small fraction of the operating wavelength in thickness. As mmWave designs move higher in frequency, enclosure thickness poses more potential interference for the waves moving through the antenna. In contrast, sub-6 GHz has very little interaction with enclosures; an enclosure does affect where its waves resonate, but with much less reflection and loss compared to mmWave.
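The "fraction of a wavelength" point above is easy to quantify: the same physical cover occupies a far larger electrical fraction at mmWave. A quick sketch using free-space wavelengths and a hypothetical 1 mm cover (real covers see a shorter in-material wavelength, which makes the effect worse):

```python
# Sketch: free-space wavelength vs. a hypothetical 1 mm antenna-covering thickness.
C = 299_792_458.0  # speed of light, m/s

def wavelength_mm(f_ghz):
    return C / (f_ghz * 1e9) * 1e3

for f in (3.5, 28.0, 39.0):
    lam = wavelength_mm(f)
    print(f"{f} GHz: wavelength {lam:.1f} mm; a 1 mm cover is {1.0 / lam:.0%} of a wavelength")
```

At 3.5 GHz the cover is about 1% of a wavelength and nearly invisible; at 28 GHz and 39 GHz it is roughly 9% and 13%, thick enough to reflect and detune, which is the sensitivity the article describes.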

Multiple radios in a single device
5G engineers must consider the full architecture of a device, as sub-6 GHz and mmWave are implemented alongside Wi-Fi, Bluetooth, NFC, and UWB. The addition of the 5G radio to an already crowded device creates new requirements for RF leakage in that device. Performance is a concern, as is the placement of each individual component (Figure 4). As engineers address mmWave and its unique challenges, the system must not leak energy into the rest of the device, potentially affecting the performance of other frequency bands. This requires deeper and more careful consideration of the RF chain, from the modem to the mmWave antenna's front end. From an engineering perspective, this need has translated into a stronger and more fine-grained focus on improvements to RF shielding and digital interconnects as well as transmission-line systems. Increasingly demanding design specs speak to this need, driving further emphasis on collaboration among design teams. Engineering teams must work together, simultaneously evaluating all aspects of a design. Even as signal-integrity teams address connector, interconnect, and transmission-line performance for power and loss, electromagnetics staff study the design's broad impact on RF leakage. This can be a highly iterative process, heavily reliant on constant communication. One design edit may improve signal integrity but demand a trade-off to manage or reduce RF leakage. Precision manufacturing must then maintain tight tolerances. It's a design process that has the potential to involve every team, from mechanical and signal integrity to electromagnetics and molding specialists. To combat design pitfalls, 5G engineers are rethinking the design process and engineering team structure. Groups that can tap into a diverse industry presence are well positioned, blending design capabilities in

Figure 3. A simulation model demonstrates the electric field leakage from a long RF transmission line flexible circuit and a pair of PCB-mounted connectors.



high-speed connectors, antennas for base stations, and consumer device connectivity to form a more holistic view of 5G. Engineers from the antenna group, as well as the board-to-board or board-to-FPC designers, might once have worked in isolation. Today, these groups must be more collaborative, not only with one another but also with their customers. Historically, consumer device developers have waited until late in the process to determine antenna design and placement. This is no longer an option. Consider mmWave strategies early, which may reveal that an existing sub-6 GHz technology provider lacks the depth to seamlessly integrate mmWave advances. Engineers also need a different mindset from the customer. The evolution of their product design may or may not have been straightforward before involving the engineering groups. Sharing this kind of insight can help define the design's direction, even as the engineering groups in turn contribute their broader industry experience. Constant interaction and feedback require a willingness to share lessons learned and to create communication channels that remain open throughout the design process. This represents a higher bar for customers, who may find themselves challenged to meet it by more experienced, insightful, and more demanding engineering teams. Handing over a spec is not sufficient, and the design and engineering groups involved will make clear their expectation that customers see and participate in the full development process.

Ideally, collaboration across departments, business units, and teams begins early in the design process. Such collaboration can build long-term, trusted relationships through shared simulation files, routine meetings among front-end and back-end teams, and constant formal and informal feedback. 5G is different – for mobile carriers, designers and engineers, and end-user customers. Antenna designers, microwave circuit designers, even those designing devices and considering adding mmWave functionality, must accept significant costs and steep learning curves to understand and implement these technologies. But even as mmWave rollout timelines stretch, the industry is benefiting from time spent developing and releasing more cost-competitive hardware options, as well as refining solutions that solve deployment challenges such as signal integrity. This long-term advantage, coupled with greater clarity on how to execute smart, collaborative 5G designs, is poised to drive mmWave further and expand its capabilities.

It’s not a web page, it’s an industry information site So much happens between issues of R&D World that even another issue would not be enough to keep up. That’s why it makes sense to visit rdworldonline.com and stay on Twitter, Facebook and Linkedin. It’s updated regularly with relevant technical information and other significant news for the design engineering community.

rdworldonline.com


NETWORK TIMING

HOW TIMING DESIGN AND MANAGEMENT SYNCHRONIZES 5G NETWORKS

DARRIN GILE, MICROCHIP TECHNOLOGY

THE INFRASTRUCTURE NEEDED TO DELIVER COST-EFFECTIVE, RELIABLE, AND SECURE TIMING THROUGHOUT CELLULAR NETWORKS NEEDS PROPER ARCHITECTURE, DESIGN, AND MANAGEMENT. TIGHTER TIME-ACCURACY DEMANDS FOR 5G NETWORKING EQUIPMENT REQUIRE RELIABLE AND ROBUST TIMING ARCHITECTURES THAT GUARANTEE NETWORK PERFORMANCE.

As networks shift from communication links based on frequency division duplex (FDD) to time division duplex (TDD), the need arises for not only frequency but also accurate phase and time synchronization. Equipment that operators deploy in TDD networks relies on a combination of GNSS, Synchronous Ethernet (SyncE), and the IEEE 1588 Precision Time Protocol (PTP) to deliver accurate frequency, phase, and time across the network. The new 5G RAN architecture introduced in 3GPP Release 15 split the baseband unit (BBU) and remote radio head (RRH) into Centralized Units (CU), Distributed Units (DU), and Radio Units (RU). This architecture creates a disaggregated and virtualized network that lets carriers realize efficiencies and cost savings. Disaggregation gave rise to the enhanced common public radio interface (eCPRI), which connects the DU and RU. This interface provides distinct advantages over the CPRI links previously used to connect the BBU to the RRH. Because eCPRI is packet based, synchronization with the RU now occurs through PTP and SyncE. Additionally, the Open RAN movement has standardized hardware and interfaces based upon 3GPP recommendations. The O-RAN Alliance has defined four options for the distribution of timing through the fronthaul network. In all four configurations, the RU receives timing either from the DU or from a nearby Primary Reference Time Clock (PRTC). Despite the various timing flows, the key functions needed to support timing distribution through an O-RAN network are still based on SyncE, IEEE 1588, and GNSS.

Timing Standards

Various timing recommendations have been put in place so that each network element meets certain frequency, phase, and time requirements, ensuring proper end-to-end network operation. For TDD cellular networks, the basic synchronization service requirement defined by 3GPP for time synchronization was set to 3 µsec between base stations. The International Telecommunication Union Telecommunication Standardization Sector (ITU-T) maintains a set of recommendations that define the absolute maximum time error (max|TE|) between a common point and the end application based on the 3GPP requirement, which translates to ±1.5 µsec. GNSS became the dominant means for sourcing time in TDD networks through PRTCs. One approach was to co-locate the GNSS receivers at the radio site, but this requires good line of sight to the sky for reliable operation. Radios located indoors or in locations that prohibit clear line of sight cannot take advantage of a local GNSS source. GNSS is also subject to outages due to line-of-sight disruptions such as weather events, or to targeted attacks from spoofing or jamming. The sheer number of planned 5G NR sites makes the cost of installing and maintaining GNSS sources difficult for carriers to absorb. The need for more accurate PRTCs, in addition to the concerns
Figure 1. To meet latency specs, a network must maintain the ±1.5 µsec timing limit, with the total budget spread across the network elements.



Figure 2. T-BCs in 5G networks have a max|TE| based upon the class level; relative time error is measured from the last common T-BC of the base-station cluster.

about the reliability and cost of deploying GNSS, has led to the definition of an enhanced primary reference time clock (ePRTC). The ePRTC can initiate time through GNSS or another network standard time source traceable to UTC. After acquiring the time, an ePRTC uses a cesium or better atomic reference oscillator to maintain a reliable, highly accurate, and stable time reference for the network. The autonomous atomic time reference provides a level of immunity to disruptions and provides stable holdover for up to 14 days. The time accuracy of the ePRTC is ±30 nsec to UTC, a major improvement over the ±100 nsec accuracy specified for the previous PRTC. This improved accuracy meets the demanding network requirements of 5G NR.

Class | max|TE|           | cTE
A     | 100 nsec          | 50 nsec
B     | 70 nsec           | 20 nsec
C     | 30 nsec           | 10 nsec
D     | For further study | For further study

Table 1. G.8273.2 T-BC and T-TSC clock equipment time error limits.

The Telecom Boundary Clock (T-BC) and Telecom Time Slave Clock (T-TSC) are other crucial elements that ensure the network accurately propagates time. The T-BC typically resides in a switch or router and is responsible for recovering time from upstream links and passing it to downstream links. The Ethernet equipment clock (EEC), or SyncE, within the T-BC/T-TSC provides a stable and accurate frequency reference traceable to the primary reference clock (PRC/PRS) with a frequency accuracy of 0.01 ppb. Using SyncE alongside PTP offers


several advantages for accuracy and cost improvements. The SyncE reference, which is more accurate than the local oscillator, can drive the PTP engine. This lets the PTP engine filter larger amounts of packet delay variation (PDV), which improves the overall phase accuracy.
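The PDV filtering mentioned above is often implemented by favoring "lucky packets" that experienced minimal queuing delay. As a minimal sketch of one common approach (a sliding-window minimum filter; the window size and delay values are arbitrary illustrations, not from any standard):

```python
# Sketch of simple PDV filtering: within each window of one-way delay
# samples, keep the minimum (the "lucky packet" least affected by
# queuing). Window size and sample values are illustrative only.

def lucky_packet_filter(delays, window=4):
    return [min(delays[i:i + window]) for i in range(0, len(delays), window)]

# Hypothetical one-way delay samples (µsec) with queuing-induced PDV
samples = [10.2, 14.8, 10.1, 22.5, 10.3, 11.9, 30.0, 10.2]
print(lucky_packet_filter(samples))  # [10.1, 10.2]
```

The filtered minima track the uncongested path delay far more closely than the raw samples, which is why a SyncE-disciplined PTP engine can afford the longer filtering intervals this implies.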

Basic time accuracy requirement
For TDD network deployments, the end-to-end network limit for time accuracy is ±1.5 µsec, as detailed in G.8271. From this value, a timing budget is derived that defines the performance required for each network element so that the end-to-end limit is met. The clock equipment specification, defined in G.8273.2, breaks the time error down into constant and dynamic time error. Constant time error (cTE) represents error that arises from inherent delays in the network. These errors cannot be filtered; they accumulate as time propagates through the network. Dynamic time error (dTE) derives from high- or low-frequency noise. Proper filtering of the network reference clocks can reduce these errors.
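The cTE/dTE split can be illustrated with a short calculation. This is a toy decomposition (G.8273.2 formally estimates cTE as the average of the time-error sequence over a long observation window); here cTE is the mean of the samples and dTE the zero-mean residual:

```python
# Illustrative decomposition of time-error samples into constant (cTE)
# and dynamic (dTE) components. Sample values are hypothetical, in nsec.

def decompose_te(samples):
    """Return (cTE, dTE_samples): mean error and zero-mean residuals."""
    cte = sum(samples) / len(samples)
    dte = [s - cte for s in samples]
    return cte, dte

# Hypothetical TE measurements from a class A T-BC (nsec)
te_samples = [48.0, 52.0, 47.0, 53.0, 50.0]
cte, dte = decompose_te(te_samples)
print(cte)                       # 50.0 nsec -- at the 50 nsec class A cTE limit
print(max(abs(d) for d in dte))  # 3.0 nsec of dynamic error
```

The distinction matters because the mean (cTE) accumulates hop after hop, while the residual (dTE) can be reduced by clock filtering.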

The ±1.5 µsec basic network limit is divided among the network elements. Figure 1 shows the allowable time-error budget for each network element in a 4G network. The PRTC with T-GM is limited to ±100 nsec of error, and each T-BC is given a max|TE| based upon its class level. Table 1 details the max|TE| allocated for each clock class. Additionally, a cTE limit is assigned to each T-BC depending on the class level. Network link asymmetries and the end application each receive assigned max|TE| values as well. Networks with up to 10 hops of class A T-BCs or 20 hops of class B T-BCs meet the basic network limit.
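A back-of-the-envelope budget check shows why roughly 10 class A or 20 class B hops fit within the basic limit. The allowances below for dynamic error, link asymmetry, and the end application are illustrative assumptions for this sketch, not normative values from the recommendations:

```python
# Rough time-error budget check for a chain of T-BCs (values in nsec).
# The dTE, asymmetry, and end-application allowances are illustrative
# assumptions, and per-hop dTE accumulation is ignored for simplicity.

CTE_PER_HOP = {"A": 50, "B": 20, "C": 10}   # Table 1 cTE limits
PRTC_TGM = 100                               # PRTC with T-GM max|TE|

def budget_used(clock_class, hops, dte_allowance=200,
                asymmetry=380, end_application=150):
    cte_total = hops * CTE_PER_HOP[clock_class]
    return PRTC_TGM + cte_total + dte_allowance + asymmetry + end_application

print(budget_used("A", 10))  # 1330 nsec -- under the 1500 nsec limit
print(budget_used("B", 20))  # 1230 nsec -- also under the limit
```

Either chain stays under the ±1.5 µsec (1500 nsec) end-to-end limit; adding hops beyond these counts pushes the accumulated cTE over budget.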

Advanced time-accuracy requirements
The ±1.5 µsec basic end-to-end requirement is the same for 4G and 5G networks. However, certain radio technologies, such as coordinated multipoint, carrier aggregation, and massive MIMO, place more stringent limits on the time error. Figure 2 shows the concept of relative time error, which describes the time error of the end applications traceable to

Figure 3. 5G network timing relies on switches and synchronizers.



the last common point of a radio cluster. The advanced time-accuracy requirements for NR deployments reduce the allowable relative time alignment error (TAE) to 130 nsec, or ±65 nsec max|TE|, within a cluster. In addition to the new ePRTC discussed earlier, Table 1 also lists new classes of T-BC and T-TSC clocks that the ITU-T has defined to support these tighter limits. The G.8273.2 class C and the emerging class D requirements further constrain the allowable TE each element can introduce. Each class C and class D element must support the enhanced Ethernet Equipment Clock (eEEC) as defined in G.8262.1.

Design for timing
Figure 3 shows a typical block diagram of the key components used to maintain, manage, and distribute timing in equipment design. It can be used as a guide when designing for CU, DU, or RU applications. The main function in the timing design is a system synchronizer comprising one or more sophisticated phase-locked loops (PLLs) that provide precise frequency and time synchronization. These synchronizers are responsible for clock monitoring, reference switching, filtering, and syntonizing accurate clocks that keep equipment synchronized to network time. Multiple PLLs within the same synchronizer allow support for SyncE, PTP, and additional time requirements. Synchronizers supporting multiple inputs and outputs can monitor and syntonize clocks for various interfaces. For SyncE support, one or more recovered clocks connect to the system synchronizer, which qualifies and manages the various input references. The synchronizer selects one recovered clock as the primary clock, and the SyncE PLL filters the clock before redistributing it to the egress nodes. If support for the enhanced Ethernet Equipment Clock (eEEC), as defined in G.8262.1, is needed, it is critical to ensure that the SyncE recovered clock can be quickly squelched on a loss of signal (LOS) condition. This ensures that the short-term and long-term phase transient limits of G.8262.1 can be satisfied. Proper implementation of PTP requires accurate timestamping capabilities and dedicated software to maintain precise time synchronization. To minimize delay, timestamp units should reside as close to the edge of the box as possible. For class B equipment, a timestamp unit with 10 nsec of accuracy is sufficient. To meet class C clock requirements, timestamp units should have an accuracy of 4 nsec or better.
A PTP software stack and, most crucially, a robust timing algorithm are needed to handle PTP packet communication, process timestamps, and make frequency and phase adjustments to the time PLL inside the system synchronizer. The time PLL could also be locked to pulse-per-second (PPS) inputs from local PRTCs or other equipment that supplies PPS references. Finally, a precision oscillator provides the frequency base at startup and ensures stable operation in cases of network disruption. Not all use cases require robust holdover capabilities, but the further into the core of the network the equipment is placed, the more stable the oscillator needs to be.
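The servo's core input is the offset computed from the four timestamps of a two-way PTP exchange. A minimal sketch of the standard IEEE 1588 offset and mean-path-delay arithmetic (which assumes a symmetric link; the timestamps below are hypothetical):

```python
# Minimal IEEE 1588 offset/delay arithmetic from one two-way exchange.
# t1: Sync sent by master, t2: Sync received by slave,
# t3: Delay_Req sent by slave, t4: Delay_Req received by master.
# Assumes a symmetric path; any asymmetry appears directly as time error.

def ptp_offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) - (t4 - t3)) / 2          # slave clock minus master clock
    mean_path_delay = ((t2 - t1) + (t4 - t3)) / 2
    return offset, mean_path_delay

# Hypothetical timestamps in nsec: true offset 500, true one-way delay 1000
offset, delay = ptp_offset_and_delay(0, 1500, 2000, 2500)
print(offset, delay)  # 500.0 1000.0
```

This is also why the link-asymmetry term earns its own slice of the time-error budget: the arithmetic silently converts half of any delay asymmetry into offset error.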

Class C & class D design considerations
Take care when designing the timing architecture in systems required to meet the class C and D requirements. In addition to improved timestamp accuracy and the need to provide squelching capability for the ingress eEEC on LOS conditions, you can apply calibration techniques to properly manage the cTE and dTE within a given design. Provisioning for on-board or in-system calibration has become more popular and necessary as engineers drive to minimize the time error that equipment introduces. Identify potential sources of cTE due to process, temperature, and voltage at the time of component selection. You need to minimize delays introduced by buffers, FPGAs, timestamp units, or other devices in the timing path and, if possible, correct them with calibration techniques at the board and/or system level. Input-to-output propagation delays through buffers and other devices can be accounted for by providing a feedback path to the system synchronizer that allows for dynamic calibration of delay. Because of the relative time-error requirement introduced by the advanced time-accuracy limits, it is no longer sufficient to focus solely on the input-to-output delay of a box. Pay attention to the output-to-output alignment for each PPS output within a system. Additionally, for chassis equipment, the line-card PLL bandwidths for each output should be the same, or programmed as high as possible, to ensure the equipment handles any phase changes as identically as possible across all outputs. Synchronizers, timestamping PHYs, and switches can compensate for known delays inherent in board designs. At the board level, static calibration techniques can compensate for trace delay and propagation delays on a per-output or per-port basis. Synchronizers can provide per-input trace delay and buffer compensation, picosecond phase-adjustment resolution, per-output trace delay compensation, and per-input or per-output cable delay compensation for GNSS or G.703 1PPS interfaces. Additionally, advanced timestamping devices provide per-port timestamp calibration with picosecond resolution. These features offer the flexibility to measure and correct for phase error within a system, ensuring that TE is minimized.

Conclusion
Synchronization requirements and capabilities continually evolve to drive the ultra-low latency, high bandwidth, and advanced new radio applications for 5G and beyond. Satisfying the new enhanced standards for time accuracy in network equipment requires careful planning of the timing architecture. Component suppliers continue to address these demands by improving the timing performance of chipsets. The addition of measurement and calibration capabilities in synchronizer products helps minimize time errors in devices used in the timing path. With proper timing architecture planning and design, the tight time-accuracy requirements for 5G can be achieved.

References
The Enhanced Primary Reference Time Clock (ePRTC) as a Solution for GNSS Vulnerability, September 2020, https://ww1.microchip.com/downloads/en/DeviceDoc/00003630A.pdf



5G DEMANDS FOR 12 GHZ SPECTRUM CALL FOR CONSENSUS ON TESTING

BEFORE THE TESTING BEGINS, SEVERAL POLICY AND BUSINESS ISSUES NEED RESOLUTION. STAKEHOLDERS MUST AGREE ON ONE TESTING METHODOLOGY, WHICH SHOULD COMBINE MODELING AND REAL-WORLD MEASUREMENTS.

DAVID HALL, NI

In the same way that no amount of food ever seems to satisfy a hungry teenager, even C-band and mmWave won't forever meet the demand for wireless data. As researchers eye sub-THz and THz spectrum for future mobile networks, another, more reasonable band has become of interest. The debate over 12 GHz spectrum for 5G use is in full swing. Interestingly, the wireless industry can learn some important lessons about the 12 GHz spectrum from the high-profile C-band fallout between the Federal Aviation Administration (FAA) and the Federal Communications Commission (FCC). That controversy persists because possible interference with radar altimeters remains unresolved. Engineering teams working on behalf of the FAA and FCC used somewhat contrasting assumptions to determine whether radar altimeters could handle potential interference from C-band spectrum, with each reaching a different conclusion. The FCC claims that coexistence is possible and that the test methodology used by the FAA wasn't realistic. Instead of agreeing on a test methodology and collaborating on a test approach, the FAA has taken the issue to the court of public opinion, highlighting the potential dangers of 5G interference during flights. Without a common, agreed-upon approach to coexistence testing in 3.5 GHz (C-band) spectrum, 5G deployment in zones around airports has been delayed.

Figure 1. The U.S. Frequency allocation chart shows the current state of the 12.2 GHz to 12.7 GHz band.



12 GHz spectrum debate intensifies
As the C-band debate rages, the 12 GHz spectrum is inching into the spotlight. From 1996 through 2004, the FCC held various competitive-bidding auctions for the 12 GHz spectrum. As a result, satellite communication companies including Dish, DirecTV, and Elon Musk's Starlink now own licenses for the spectrum, used for satellite broadband internet and satellite TV under rules that prohibit ground-based transmissions. Now, 25 years later, at least some of these same licensees are petitioning the FCC to allow at least some portion of the 500 MHz available at 12 GHz to be used for 5G, a rule change that would make the spectrum significantly more valuable. Figure 1 shows where the 500 MHz portion of the 12 GHz spectrum fits in. Part of this interest stems from the relative simplicity of using 12 GHz for mobile communication. Unlike mmWave, 12 GHz is somewhat more forgiving of propagation path loss and benefits from the availability of higher-efficiency semiconductor process technologies such as gallium nitride (GaN). While mmWave delivers expanded network capacity in dense cities, the cost of deployment doesn't bode well for more suburban communities. With shorter range due to atmospheric absorption, mmWave requires a massive investment in wireless infrastructure to cover suburban environments. As the mobile data needs of these communities increase over time, 12 GHz might well be an option that offers 500 MHz of spectrum at a much lower deployment cost. Indeed, depending on propagation characteristics, the 12 GHz spectrum could deliver the broadband technology many rural communities need because it offers high data rates across a relatively wide coverage area. At the INCOMPAS Policy Summit panel discussion in February 2022, participants said opening up the spectrum would solidify U.S. global leadership in 5G, protect the nation's economic interests, and improve competition, which would give customers more options.
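The propagation advantage over mmWave can be seen from free-space path loss alone. A minimal sketch comparing 12 GHz with 28 GHz mmWave (free space only; it ignores the atmospheric absorption and blockage losses that further penalize mmWave, and the example distance is arbitrary):

```python
# Free-space path loss: FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45.
# Free-space only; real mmWave links suffer additional atmospheric and
# blockage losses not modeled here.
import math

def fspl_db(distance_km, freq_ghz):
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

loss_12 = fspl_db(1.0, 12.0)        # ~114.0 dB at 1 km
loss_28 = fspl_db(1.0, 28.0)        # ~121.4 dB at 1 km
print(round(loss_28 - loss_12, 1))  # 7.4 dB advantage for 12 GHz
```

The frequency ratio alone buys 12 GHz about 7 dB of link budget over 28 GHz at any distance, before accounting for the atmospheric effects that shorten mmWave range further.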
Right now, the 12 GHz spectrum is used only for downlink, or transmitting signals from satellites to Earth. Interference could be created, for example, by base stations on the ground blasting radio waves that desensitize satellite TV receivers. Though these technical problems loom, the current 12 GHz debate involves policy questions as well. Can the 12 GHz spectrum usage rules change without re-auctioning it? Many argue that at the time of the original auction, the prohibition against ground-based transmissions reduced the value of the spectrum. Thus, merely opening the spectrum to 5G use wouldn’t be fair for all parties. Others argue that the FCC should buy out the incumbents and auction the spectrum for mobile use. In either scenario, a likely outcome is that the incumbent 12 GHz satellite operators remain in some capacity, and with that comes the need for interference and co-existence testing. Although the cost of interference in the 12 GHz band is not quite as severe as interference in C-band around an airport, we still must characterize and understand it. We’ve learned from the FCC versus FAA disagreement that if all parties cannot agree on the test methodology, they clearly won’t accept the measurement or simulation results. Of course, it’s not as simple as one group “promising” not to produce emissions into the neighboring band. For example, if your neighbor were to install an underground fuel-oil tank, you would likely have concerns about oil contaminating your soil. You might even file a petition with your local government


authorities to stop the construction. After all, you purchased the property under the assumption that you would never have to worry about oil from an underground tank seeping into your yard. On a different scale, incumbents in the 12 GHz band have a similar cause for concern. Given that land-based transmission wasn't allowed when the 12 GHz spectrum was auctioned, it's only reasonable to expect some guarantee that others using this band for 5G won't impact existing satellite TV services.

Testing for interference and beyond
Although there are lingering public-policy questions regarding rightful owners of the 12 GHz band, we can be certain about the need for coexistence testing. The 12 GHz issue differs from the altimeter C-band issue in that terrestrial and satellite signals will use the same 500 MHz, meaning they will need some form of spectrum sharing, perhaps similar to the spectrum sharing between 4G/LTE or 5G/CBRS networks. That's not the case with altimeters and C-band, which occupy different but adjacent bands. As we learned from the FCC vs. FAA debacle, a good first step is for stakeholders to agree on the methodology for assessing the interference problem. In theory, the test methodology is straightforward. As with any coexistence testing, a signal source generates interference into a receiver while you measure digital-receiver characteristics such as bit-error rate (BER). Measurements can answer questions such as: how strong is the interferer, how close is the interfering signal, and which receivers do you use to conduct the test? Because the answers can dramatically affect a test's conclusions, using a third-party arbiter that both sides can trust might be a good approach. Of course, interference testing is just one aspect of the testing necessary to make 12 GHz 5G networks a reality. While mathematical models help predict a radio's performance in the real world, test beds and prototypes are critical to understanding practical performance in a 12 GHz environment. 5G was the first generation of mobile technology to receive extensive testing using software-defined radios (SDRs), both in the sub-6 GHz and mmWave bands. As wireless networks continue to increase in complexity, with cognitive capabilities and advanced beamforming becoming the norm, this will only continue for the 12 GHz band.
As commercialization of future bands accelerates, these prototyping systems are also rapidly advancing to help engineers validate key performance benchmarks and shorten time to market. The need for advanced test and prototyping capabilities becomes more critical as we look toward future generations of wireless, such as 6G. The technologies that 6G might use are in early phases of research. Leading researchers now study the use of sub-terahertz and terahertz spectrum, joint communication and sensing, and even cognitive networks as key innovations. Each technology could introduce new benefits to consumers but could carry its own unique test challenges, which the wireless industry will continue to collectively assess. The value of the 12 GHz spectrum is growing as more industries see its potential. If those industries can agree on one testing methodology to assess 5G interference in this spectrum and beyond, we'll all reap the benefits.




OPTIMIZE RF SIGNALS IN PRIVATE NETWORKS

SIMULATIONS AND SIGNAL MEASUREMENTS CAN PROVIDE THE INSIGHT YOU NEED TO CREATE A FUNCTIONAL PRIVATE NETWORK FOR INDUSTRY AND FACTORY USE.

JAGADEESH DANTULURI AND DYLAN MCGRATH, KEYSIGHT TECHNOLOGIES

3GPP Release 16 adds support for time-sensitive networking (TSN) integration and other enhancements that support ultra-reliable low-latency communications (URLLC). These enhancements pave the way for 5G private networks that could increase productivity, quality, efficiency, and safety across multiple industries. These 5G private networks, termed "non-public networks" because they operate independently of the available public networks, could enable use cases in energy, supply-chain management, retail, healthcare, education, and beyond. Manufacturing looks to become the most prominent use case for 5G private networks. Private networks serve as the backbone of smart factories, where they connect, monitor, and control the intricate choreography among robotic equipment, monitoring sensors, and production lines. Today, most of these smart-factory networks are wired networks that use Ethernet. Replacing these wired networks with wireless 5G private networks will yield obvious advantages, including reduced physical infrastructure and greater mobility. Smart factories and other advanced manufacturing facilities may benefit from 5G private networks. Unfortunately, these facilities challenge RF signals and the networks that rely on them. Dense metal, including infrastructure and heavy machinery in motion, can reflect, diffract, and scatter RF signals. 5G private networks operating in manufacturing facilities may also have to contend with interference from other radios such as Wi-Fi and MulteFire. These and other signal impediments can hinder the extreme reliability and latency requirements for TSN and other network technologies that advanced manufacturing facilities require. A small change in network latency can be devastating to manufacturing productivity.


Figure 1. Partially blocking a signal’s Fresnel ellipsoid can interfere with line-of-sight signal paths.

How signal blockage affects a 3.5 GHz signal
In the U.S., private networks operate from 3.55 GHz to 3.65 GHz, called mid-band 3.5 GHz. When a 5G signal encounters a metallic object, what happens to the signal depends on the object's size and shape. If the metallic object is large enough to block the Fresnel ellipsoid of the propagating signal (Figure 1), then it blocks the signal's direct path and the signal will reflect. The signal may still propagate toward the receiver through reflections or other interactions. Thus, even if the object is large enough to completely obstruct the signal's Fresnel ellipsoid, that signal may still find its way to the receiver. If the metallic object is smaller than the propagating signal's Fresnel ellipsoid, it may not block the signal's direct path. The wave will continue to propagate toward the receiver, but at a diminished signal power due to partial diffraction and scattering from the metallic object. Such a partially diffracted signal may or may not reach the receiver with sufficient power.

The metallic object's shape also affects the interaction. If the object is flat, there will be reflection and possibly diffraction from the edges. If the metallic object has curved surfaces, the signal wave will scatter in multiple directions. During the network design and planning stage, engineers can model some effects of signal blockage on a factory floor in a laboratory environment. Because many variables, including the size and shape of the blockage, impact a signal's strength when it reaches the receiver, you'll need to perform field testing to determine how a 5G signal interacts with the real-world factory environment. It is impossible to maintain a 5G private network running applications that depend on TSN if propagating signals do not reach the devices operating on that network with sufficient power (Figure 2).

Figure 2. Moving and stationary objects on factory floors can block and reflect RF signals.

Fading considerations in private network design
Before discussing the required measurements, let's investigate radio channels. A radio propagation channel is the environment through which the radio signal travels from the transmitter to the receiver. Understanding the signal propagation characteristics of a wireless channel is important for designing any wireless communication system, including private networks. This understanding helps you design the transmission signals and the processing algorithms at the receivers. It also helps you understand the fundamental limits of radio performance. Many factors, including the antenna design, thermal noise, and environmental effects (reflections, diffractions, scattering), affect radio signal quality.

There are two types of radio fading: large-scale and small-scale. Path loss and shadowing cause large-scale fading, while changes in the time, frequency, and space domains cause small-scale fading. The signal strength at the receiver can change because of fading. For example, a radio transmission at 3.5 GHz at 0 dB power has a different data throughput performance than the same transmission at 3.7 GHz in the same environment. Likewise, different distances (space) between the transmitter and receiver produce different performance characteristics. Industrial environments such as factories contain a lot of metal, moving vehicles such as automated guided vehicles (AGVs) and autonomous mobile robots (AMRs), and other radio-noise-generating equipment. The fading effects in factories differ from those in other environments. Hence, private network radio systems designed for outdoor or traditional commercial communications don't fit well in Industry 4.0 environments.

As a communication system design engineer, you use radio channel models to overcome the challenges outlined above. Modern communication systems are complex. Without models, you must design and build the system, go into the field, perform the tests, and, if needed, modify the designs. You'll need to repeat these steps until the system achieves its desired performance. Radio channel models eliminate this complexity by representing real-world radio environments in software. You can repeat the tests in your lab until achieving the desired radio performance (Figure 3).

Figure 3. Channel emulation of a factory radio environment provides insight into how actual signals might perform.

Private Network Implementations
Communication RF systems are designed, built, and verified using channel models. Channel modeling precedes the communication system design. Use a channel sounder to send a particular training sequence from the radio and measure its performance at various receiving points with specialized radio measurement equipment. Curve fitting and various other methods then generate a channel model for that environment. Because field environments are unique, there is no guarantee that communications systems can achieve peak performance in all radio conditions. Before installing a private network, you must size and design the network to meet the installation's functional and performance needs.

Network planning for an indoor environment involves multiple considerations. The two primary considerations are coverage and capacity. Coverage is provided by the optimal placement, with proper orientation, of radio units throughout the required coverage area. The network needs sufficient signal strength and signal-to-interference-plus-noise ratio (SINR) at every point in the space. Typical criteria for consideration are

Figure 4. Using CW waveforms can provide some information based on basic path-loss measurements.
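Whether an object is "large enough to block the Fresnel ellipsoid" can be estimated with the standard first-Fresnel-zone radius formula, r = sqrt(λ·d1·d2/(d1+d2)). A short sketch at mid-band 3.5 GHz (the link geometry is an arbitrary example):

```python
# First Fresnel-zone radius: r = sqrt(lambda * d1 * d2 / (d1 + d2)),
# where d1 and d2 are the distances from transmitter and receiver to
# the obstruction. Example distances are arbitrary illustrations.
import math

C = 299_792_458.0  # speed of light, m/s

def fresnel_radius_m(freq_hz, d1_m, d2_m):
    wavelength = C / freq_hz
    return math.sqrt(wavelength * d1_m * d2_m / (d1_m + d2_m))

# Obstruction halfway along a 40 m factory link at 3.6 GHz
r = fresnel_radius_m(3.6e9, 20.0, 20.0)
print(round(r, 2))  # ~0.91 m: machinery near this size starts to matter
```

At 3.5 GHz band frequencies the first Fresnel zone is roughly a meter wide at mid-link, which is why ordinary factory machinery, racks, and vehicles can meaningfully obstruct a nominally line-of-sight path.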



Figure 5. 5G mmWave OTA field analysis can use MIMO with simulated 5G signals.

95% reference signal received power (RSRP) or SINR. Lack of adequate coverage and signal strength leads to the following:

• inability of UEs to find and connect to a network;
• poor throughput and latency performance;
• connection drops and increased handovers;
• increased power consumption by the UEs.
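A coverage criterion like the 95% target above can be checked mechanically against walk-test data. The sketch below uses made-up RSRP samples and an assumed -100 dBm design target; both values are illustrative, not from any real survey.

```python
# Hypothetical walk-test RSRP samples (dBm) collected across the factory floor.
rsrp_dbm = [-78, -82, -90, -85, -101, -88, -79, -95, -84, -87]

threshold_dbm = -100  # assumed RSRP design target for this deployment
covered = sum(r >= threshold_dbm for r in rsrp_dbm) / len(rsrp_dbm)
print(f"{covered:.0%} of measured points meet the RSRP target")
```

In practice you would run the same check per band and per cell, and treat any area below the target fraction as a candidate for adding or re-aiming radio units.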

Capacity is provided through the bandwidth of the deployed cells, carrier aggregation, and densification if necessary. Lack of capacity can lead to congestion, low throughputs, and high latencies for end users. The initial coverage planning — also referred to as RF planning — involves using modeling software to place hypothetical RF transceivers and measure the signal strengths throughout the building, with appropriate modeling of the propagation environment, including walls, windows, and other barriers. Such propagation modeling provides an estimate, which you must verify using field measurements.

Figure 6. Open RAN radio units can function as transmitters or receivers for RF testing in facilities.

Field RF Measurements

Your field RF assessment objective is to verify that the theoretical designs come as close as possible to real-world performance. Perform the following activities:

• collect power measurements and 5G metrics at points in space for FR1 and FR2 carriers;
• compare path loss from the software models to field measurements;
• estimate 5G coverage and performance based on those measurements.
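The model-versus-field comparison in the second step can be as simple as differencing matched points. The sketch below uses hypothetical path-loss values and an assumed 6 dB tolerance; the numbers and the tolerance are illustrative, chosen only to show the mechanics.

```python
# Hypothetical comparison of modeled vs. field-measured path loss (dB) at
# matching walk-test points; flag locations where the model is off by > 6 dB.
predicted = [62.0, 68.5, 71.0, 75.2, 80.1]
measured  = [63.1, 67.0, 78.4, 74.0, 86.9]

tolerance_db = 6.0
outliers = [i for i, (p, m) in enumerate(zip(predicted, measured))
            if abs(p - m) > tolerance_db]
print("points needing model recalibration:", outliers)
```

Points that exceed the tolerance usually indicate an unmodeled obstruction or material, and the propagation model gets recalibrated there before the design is finalized.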

eeworldonline.com | designworldonline.com


PRIVATE NETWORKS

Engineers typically use a continuous wave (CW) signal source for field measurements. Place a radio unit in the desired location and connect a CW source. Walk through the entire space with a spectrum analyzer to capture CW signal measurements. CW measurements may not, however, accurately indicate the signal strength from a wideband 5G signal. For better results, use a realistic 5G signal for the following reasons:

• the correlation between actual 5G performance and an RF analysis based only on single-antenna CW power measurements will not be great;
• power measurements alone will show path loss but not the actual channel conditions for multipath and MIMO;
• in all cases, the measurement equipment's link-budget performance will likely not match that of the actual deployed 5G equipment;
• having additional 5G measurement metrics will help in optimizing future 5G network deployments and ultimately in optimizing any custom UE and gNB designs.

OTA RF measurements in the field

Figure 4 shows a typical test setup. A 5G signal generator can generate complex 5G waveforms: downlink, uplink, or both. Depending on the generator, antenna, and analyzer configurations, you can create simple or elaborate setups.

The spectrum analyzer/decoder captures the 5G waveform at various physical locations in the field and then performs complete 5G frame decoding. The decoded signals are compared with the theoretical models. Power measurements alone (Figure 4) will show path loss but not the actual channel conditions for multipath and MIMO. RF environment analysis using 5G frame-waveform decoding (Figure 5) is more accurate than CW or reference signal received power (RSRP)-only measurements. You can upgrade the setup by using an Open RAN radio unit (ORU) either as a receiver or as a transmitter and perform a similar analysis. This configuration lets you compare the theoretical models with real-world Open RAN measurements, as shown in Figure 6.

Verify RF performance in real-world conditions

Engineers use radio channel models to design, test, and validate complex communication systems. Network engineers plan and design the network using software models in a lab. Industry 4.0 radio environments are, however, complex and unique. Many interference sources can plague an RF network, and fading caused by rich scattering is much more severe. Industry 4.0 applications require private networks with higher uplink data throughput, high-density connections, ultra-reliable low latencies, and 99.9999% reliability. To meet these performance requirements, you must verify the RF performance in the real-world conditions of the indoor environment and ensure that it meets the application's quality requirements.




MITIGATE mmWAVE TEST COSTS IN 5G SMARTPHONES

THE ECONOMICS OF MANUFACTURING TEST DIRECTLY AFFECT HOW YOU PLAN FOR THE RIGHT NUMBER OF TEST STATIONS AND TEST SITES. ADDING MMWAVE TESTING MAKES THE DIFFERENCE.

DAVID VONDRAN AND ROB MESSIER, TERADYNE

The smartphone migration from 4G to 5G brings mmWave bands with wider bandwidths, speedier data rates, and higher production test costs. These higher costs for equipment and operations in high-volume manufacturing (HVM) force manufacturers to rethink the economics of automated test equipment (ATE). The insights gained from analyzing test costs may even set the bar for future generations of devices, depending on the lessons learned in the current ramp. At the core of this topic is the ATE financial analysis for mmWave technology commercialization. Such analysis directly affects how much test capacity a production facility needs and how many test sites minimize total costs.

Figure 1 shows a typical 5G smartphone. The number of wireless links that fit into today's devices, enabling applications that use primary navigation, connectivity, and cellular technologies, is an impressive engineering feat. Equally impressive are the other technologies, including display, digital, and battery.

Figure 1. A 5G smartphone relies on many connections that inherently increase complexity.

RF Architecture

Figure 2. The 5G RF front-end segments into two categories: sub-6 GHz and mmWave. In general, the sub-6 GHz links offer useful data rates, while mmWave links offer 10x greater data-rate capabilities.


Figure 2 shows the evolution in signal distribution within the 5G smartphone's front-end, segmented between sub-6 GHz and mmWave. Sub-6 GHz frequency bands have low-, medium-, and high-bandwidth links. Even the largest pales in comparison to the massive links possible with mmWave technology. While many engineers understand sub-6 GHz, the mmWave links are new to 5G and require closer inspection to understand the technology and economics of commercialization. Figure 3 shows a detailed overview



mmWAVE PRODUCTION TEST

Forecast 5G-Enabled Smartphone Unit Shipments (2019-2025)

Figure 3. This RF architecture reveals the key chips and features that enable mmWave technology within the 5G smartphone, including the high-speed digital (HSD) connections to the frontend (FE). In this way, RF to bits is woven into the RF architecture.

of the RF architecture that enables these 5G links [Ref. 1]. Note the added mmWave complexity in the integration of the modem, 5G-IF link, and mmWave transceiver, which includes the beamforming antenna. The mmWave antenna requires strategic placement to minimize blockage from hands; the 5G-IF feature supports the link budget for optimal signal quality. Comparing RF architectures, the sub-6 GHz band is a baseline, and the mmWave link introduces a new intermediate prerequisite, 5G-IF, for 8 GHz to 20 GHz testing. In doing so, the test-plan complexity increases dramatically (test duration, tester resources, methodologies, and calibration, to name a few). The added complexity arises because sub-6 GHz converts directly to baseband (RF to bits/analog), whereas mmWave converts RF to an intermediate frequency (IF) before ultimately arriving at baseband. To

Figure 4. A 5G smartphone forecast developed by Teradyne quantifies the amount of semiconductor content (i.e., chips) that fuels the economics for mmWave commercialization. Today, only premium smartphones include mmWave while the rest are economical configurations with sub-6 GHz bands only.

complicate matters, the mmWave links are deployed in multiple places within the 5G smartphone for spatial diversity.

5G smartphone ramp

Unfortunately, mmWave links increase the amount of test required and therefore have a dramatic effect on the economics. At the time of publishing, the 5G smartphone is offered in two high-level configurations: an economy tier and a premium tier; the premium tier includes mmWave. Figure 4 shows a 5G smartphone forecast developed by Teradyne, which estimates that 1.2 billion units per year will ship by 2025. The RF architecture flexibly supports both the economy and premium configurations and can likely scale to unit-shipment levels consistent with 4G smartphones. Remember that the proportion of phones with mmWave links will increase over time.

Adding mmWave to a smartphone increases manufacturing costs; the challenge is how to minimize the economic impact. There are many proven techniques, such as reducing test time, but these lack the impact needed to overcome mmWave expenses. This new silicon will benefit most from increasing site counts. As a baseline, today's HVM has standardized on a given number of sites (e.g., x4 or x8). The next section explores the financial implications of increasing sites.

Total x8 Fleet Cost (Cumulative, 10 Years)

Figure 5. Cumulative fleet cost over time shows more OpEx than CapEx contribution to economics. Note that the difference in trajectories suggests that optimizing OpEx as a priority will have more impact on total cost of test.

Figure 6. Cumulative comparison of x8 versus x16 sites in terms of CapEx/OpEx breakdown. Note the lower total expenses in the x16 scenario. The takeaway is that a redistribution of expenses occurs with greater site count: OpEx dominates total CoT, and the reduction in OpEx is proportional to fewer testers, handlers, and consumables.



Tester Fleet Size vs. Number of Sites and PTE (100M units/year, 30 secs test duration)

Tester fleet economics

The economics of deploying a tester fleet are well known, so the focus here is a financial comparison of scenarios involving the number of sites [Ref. 2]. Predictably, this single configuration choice has the largest effect on the overall economics, which is why you should give it the most consideration when ramping new silicon. As a quick introduction, the high-level economic indicators are as follows:

• Capital equipment expenses (CapEx). The initial expense associated with obtaining the necessary test equipment (e.g., testers and handlers). CapEx usually depreciates over a five-year period corresponding to the lifetime of the equipment.
• Operating expenses (OpEx). The yearly expenses associated with operating the equipment, which can include the consumables that customize testers for a specific device under test (DUT).
• Total cost of test (CoT). The sum of all fleet expenses over a typical lifecycle, both CapEx and OpEx, divided by units shipped.

For the financial comparison in Figures 5 and 6, we consider the final test of a packaged RF transceiver device (i.e., not wafer-level, although the economics are similar). The approach is high-level to assess macro insights. As a typical user scenario, the units per year are proportional to 5G smartphone unit shipments, where the mmWave variant has x3 or x4 links. For purposes of this financial analysis, we target 100M units per year with a nominal test duration of 30 sec per device. The fleet depreciation period is five years, while the typical lifetime is 10 years. A side-by-side comparison of x8 versus x16 sites should produce recommendations for how to configure the tester fleet for new silicon ramps. Actual results and configurations vary, so apply the following takeaways on a conceptual basis.

Figure 5 shows the baseline fleet costs for x8 sites, where cumulative results are calculated over time. The yearly breakdown into OpEx (blue) and CapEx (green) illustrates that OpEx has the greatest impact on total CoT beginning in the first year and establishes the trajectory for subsequent years. In contrast, CapEx is


Figure 7. The tester fleet size for unit shipments of 100M/year (and test duration of 30 secs/DUT). Compared to x8 sites, increasing sites to x16 has the net effect of reducing the number of testers from x18 to x9, which has a dramatic effect on OpEx.

CoT Reduction from UPH Improvements via Increased Number of Sites

Figure 8. As a result of reducing the number of testers, the CoT exhibits the reduction characteristics shown above, which is proportional to the units per hour (UPH) improvement that results from higher site counts.

more constrained due to the depreciated nature of that expense. Similarly, in Figure 6, cumulative fleet calculations for the x8 and x16 cases enable a side-by-side comparison. At the end of year 10, the total unit shipments are 1 billion (100M/year times 10 years), so the total CoT calculation is straightforward:

• x8 sites produce a total CoT of $0.18 at a total cost of $180M (CapEx/OpEx ratio of 27%/73%);
• x16 sites produce a total CoT of $0.11 at a total cost of $115M (CapEx/OpEx ratio of 42%/58%), a 39% reduction in total CoT.

In this comparison, lifetime savings of $65M are possible by migrating to x16 sites.
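The CoT arithmetic above can be checked in a few lines. This sketch uses only the scenario numbers stated in this section (1 billion lifetime units, $180M and $115M total fleet costs); the variable names are illustrative.

```python
# Sketch of the article's 10-year cost-of-test (CoT) comparison.
units_per_year = 100_000_000
years = 10
total_units = units_per_year * years  # 1 billion devices shipped

cot_x8 = 180_000_000 / total_units    # x8-site fleet, lifetime cost $180M
cot_x16 = 115_000_000 / total_units   # x16-site fleet, lifetime cost $115M
savings = 180_000_000 - 115_000_000   # lifetime fleet savings

print(f"CoT: x8 = ${cot_x8:.2f}/unit, x16 = ${cot_x16:.3f}/unit, "
      f"lifetime savings = ${savings / 1e6:.0f}M")
```

Note that the $0.11 figure quoted above is the rounded value of $0.115 per unit; either way, the per-unit cost falls substantially with the higher site count.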

In Figure 7, the number of testers in a fleet is over 38% fewer when comparing x8 to x16 sites. The concept of parallel test efficiency (PTE) is important to introduce as a mechanism for quantifying the independence of parallel test duration. Ideally, 100% PTE translates to complete independence. Usually, PTE is in the 97% to 99% range, as shown in the Figure 7 trace overlays. The key takeaway is that each site needs dedicated tester resources to deliver the economic benefits of more sites. In addition, this PTE insight demonstrates that the impact grows as site counts increase; in fact, at x16 sites, a 1% PTE delta equates to a ~10% fleet-size delta. At the next level of granularity, the economic results in Figure 8 overlay units per hour (UPH) and CoT for sites from x8 to x16 and consider the impact of PTE. Note how PTE has a greater effect with a greater number of sites; at x16 sites, a 1% PTE delta equates to a ~20% UPH delta.

Conclusion

The complexity of adding multiple mmWave links to the 5G smartphone affects the RF architecture in a way that is similar to, but proportionately higher than, the existing sub-6 GHz implementation. Thus, the economics for production testing are similar and predict a prohibitively expensive increase in total CoT for new silicon. Increasing site counts can offset some of this additive mmWave expense and may even accelerate the commercialization of mmWave technology.

Increasing test site counts has the most dramatic impact on fleet economics and total CoT. The impact comes from reducing the number of testers and, in doing so, proportionally reducing the OpEx necessary for HVM (which dominates total CoT). It may be counterintuitive, but greater CapEx that produces higher units per hour is highly desirable for the dramatic reduction in testers, handlers, and consumables, which brings a corresponding reduction in OpEx, especially in the context of lifetime calculations.

From a new-silicon perspective, these results reveal that test engineers should prioritize their total CoT activities as follows:

• Minimize OpEx with a focus on UPH, including consideration of the greatest parallel test efficiency (PTE).
• Minimize test duration.
• Minimize CapEx.

This holistic, lifetime approach to HVM planning optimally configures the tester fleet to deliver the best economics. The smartphone follows the 18-month cadence of Moore's Law. Moreover, new silicon will continue to evolve in ways that deliver enhanced experiences to consumers, as seen with the emergence of mmWave technology. Sustaining a growth trajectory is a cyclical process in the relentless pursuit of cost reductions. ATE parallelism and efficiency are vital mechanisms fueling growth in the HVM semiconductor ecosystem.

REFERENCES
1. Hurtarte, Jorge S., "Test Challenges and Solutions for Testing Wi-Fi 6E, UWB and 5G NR IF Devices in the 3-12 GHz Range."
2. Kramer, Randy, "Test strategy implications on cost of test," Evaluation Engineering, Jan. 25, 2018.
