EEWeb Pulse - Issue 76

EEWeb Issue 76 - December 11, 2012

INTERVIEW: Samta Bansal, Senior Manager, Product Marketing, Cadence
SPECIAL FEATURE: MCU Wars 2.2 - Supporting MCUs in an RTOS
TECHNICAL ARTICLE: Challenges of 20nm Analog Design

Electrical Engineering Community - Visit www.eeweb.com




EEWeb PULSE

TABLE OF CONTENTS

Interview with Samta Bansal, Senior Manager, Product Marketing, Cadence

Featured Products

Taming the Challenges of 20nm Custom/Analog Design, by Samta Bansal, Cadence: the challenges that custom and analog designers face from manufacturing complexity in 20nm IC design.

MCU Wars 2.2: Supporting MCUs in an RTOS: two RTOS experts sit down to discuss the advantages of supporting MCUs in a real-time operating system, as well as the risks of writing your own system.

Distribution Systems, Part 3, by Nicholas Abi-Samra, Quanta Technology: why Distribution Automation (DA) is considered in developing the Smart Grid as it transforms the distribution network toward more automation.

RTZ - Return to Zero Comic





INTERVIEW

Cadence Design Systems is a leading global EDA company that specializes in custom ICs and electronic systems. We spoke with the Senior Manager of Product Marketing, Samta Bansal, about the challenges of silicon realization, the advantages of moving to 20nm ICs and the overall benefits of the 20nm process technology.



Can you tell us about your work experience before you became Senior Manager, Product Marketing, Silicon Realization at Cadence?

After a master's in Physics and a bachelor's in EEE, I joined as an application engineer focusing on DFT and ATPG solutions, and soon expanded into front-end synthesis. After spending a few years in core technical roles, I completed my MBA and jumped into marketing to contribute to the business and strategy of the products, which is what I had always wanted to do.

It was becoming obvious that power and performance would become more of an issue as we moved along the Moore curve, and that advanced-node shrinks would only go so far. Achieving design closure required a more integrated approach to the design methodology than the "over the wall" methods that existed at the time. So I started focusing on hierarchical methodologies, system exploration techniques, links between the front end and back end, and understanding packaging and how it impacts design and design closure. That expanded my horizon to a deeper understanding not only of the gaps that existed between EDA tools at the time, but also of the trends that would impact how we do design and how we provide software in EDA tools to help. With this broader outlook, I worked on numerous efforts, such as giga-gate design, 3D-IC, and advanced nodes, across product groups to help bring synergies among the tools at Cadence. This was core to the Silicon Realization group, which stands for ensuring that we can deliver the silicon for our designers. So my work at Cadence allows me to look at the complete picture, help identify efficiencies and deficiencies within our offerings, and provide a more rounded approach to issues like 20nm and 3D-IC for our customers.

What are some general challenges for silicon realization?

The challenges revolve around what it will take to achieve silicon success at 20nm and below. This includes a combination of: a) technology innovation: new technologies to meet consumer demands for performance, power, and area; b) EDA tools and techniques: in addition to adopting the new technologies, EDA tools also need to provide improved efficiency and certainty for silicon success; and c) collaborations with customers and foundries, which are becoming more and more important to deliver against that challenge. Let me talk about these one at a time.

Technology scaling is not over. However, it is going to be a lot harder to move the technology forward and achieve the performance, power, and density requirements that will make the technology interesting for customers. Innovations in strain engineering and high-k metal gates (HKMG) have kept things going so far, but we are coming to the end of the planar era. Technology development such as 3D devices like FinFETs or tri-gates and 3D-IC TSVs is required now, and exploration of things like silicon nanowires, carbon nanotubes, and the integration of photonics onto the wafer probably has to start as well. So continued technology innovation is critical to silicon success, and a challenge for silicon realization.

EDA tools and techniques: let us take a small example of EDA tool challenges. Advanced process nodes generate hundreds of design rules, but can people still generate good designs? If we make spacing larger or voltage higher, we solve some problems, but then we don't get the density, frequency, and power advantages of process developments. So the question is whether designers can design optimally enough with the resources they have, and how EDA tools can help. Understanding the additions and constraints that new technologies and new nodes bring is the first challenge; a seamless understanding of these across digital, custom/analog, signoff, design for manufacturability (DFM), and IP is the second; and the most important one, in my mind, is efficient optimization between the different fabrics, with so many variables, to achieve silicon success within the time-to-market window. This is done by combining new innovations in software development, by providing a lot of "in-design" techniques and "lookahead" strategies to optimize convergence across the entire design cycle, and by pouring money into R&D.

Collaborations are key to providing silicon success. While technology development and EDA tool innovations are two vectors, collaboration with customers and foundries is key to ensuring these developments meet what is required and what can be physically manufactured. This also poses challenges in managing the different requirements from different foundries to provide what works for customers.

Which one wins: 20nm or 3D-IC?

With the growing demand for high-bandwidth, low-power devices, moving to the next node is unavoidable. However, moving analog to such small geometries brings technical as well as economic challenges. Do I really have to shrink my analog to 20nm? Can I reuse my existing analog block and still achieve my performance and power goals? These are the tradeoffs that will push for 3D-IC technology adoption as well. So while people move to 20nm and beyond for differentiated SoCs, they will also integrate older-node IP using 3D-IC and TSV technology to reach their bandwidth and performance goals and manage the NRE for their products. In short, both technologies will coexist. Whether one is used, or both, will depend on the application: the volume and cost, and the performance, power, and area requirements of the products.



What would it take to adopt 20nm and 3D-IC?

Adopting advanced technologies will depend on solving the technical challenges from both the design and manufacturing ends, EDA tool readiness, wider collaboration in the ecosystem, and, last but not least, how cost-effective the technology turns out to be.

a) Architecting a scalable and repeatable model around the technical challenges: While technology innovations are taking place with double patterning, FinFETs, low power, and through-silicon vias (TSVs), they all have their challenges. For instance, double patterning requires a decomposition method and a router that can support double-patterning rules. FinFETs require SPICE modeling. TSVs raise thermal issues and bring challenges in bonding, debonding, and probing technologies, among several others. The entire DFT and ATPG methodology has to be re-architected to test the dies in a 3D stack. Can we move from "restrictive" advanced-node design rules to "prescriptive" design rules? These and many other issues have to be resolved before mass adoption of a technology can take place. We have, and always will have, early adopters for these technologies, but for mass adoption, scalability and repeatability are important.

b) EDA tool readiness is important. While these technologies are evolving, keeping EDA tools in pace with the innovations is very demanding but necessary. This is where collaborations with partners and experience on test chips with early adopters are critical to architecting a repeatable and dependable solution for mass adoption. For 20nm, we have so far more than 25 designs already done with critical partners in the ecosystem, and about eight test chips and one production chip in 3D-IC, with several in flight.

c) Wider collaboration will be a must. With the economic pressures every industry faces today, an IDM-like model, in which you have the benefits of very close collaboration between design and manufacturing without the economic burden of owning an $8 billion fab, will become a must at 20nm and beyond. For 3D-IC it will extend even further: the entire semiconductor business model will have to evolve and be agreed upon to ensure ownership, accountability, and reliability of the final product.

d) Last but not least, the overall cost of the technology has to make sense. Everyone is in this industry to make money. If a technology can't help make money, it dies. So helping reduce cost is the key to wider adoption of any new technology. That holds true for 20nm, FinFETs, SOI processes, and TSVs alike.

What are the cost implications of 20nm and 3D-IC, one or both?

As with any new technology, there are risks, uncertainties, and inefficiencies in the design and manufacturing process, and in the yields related to them. The same holds true for an advanced node like 20nm with double patterning, FinFETs, and TSVs. Today, 20nm brings a requirement for dual masks, for example. That is one added step in manufacturing cost. 20nm also requires extra design steps in the process, such as coloring; that is another cost added at the design stage in terms of how much time it takes to design at 20nm. And 20nm designs are complex enough that more people are required to address the design issues, which adds to the cost in human resources. So the costs surround manufacturing, people, design, EDA tools, and so on. Everyone wants a premium for their early development work, and that is what makes any new technology expensive for everyone in the beginning. 3D-IC adds more complexity to this cost equation, with the potential for several fabs, several EDA vendors, several IPs, and the testing complexity of the stack, among other things. However, given time, any new technology's manufacturing matures, designs and design challenges become well understood, and EDA tools become efficient, which in turn leads to lower costs that enable wider adoption. While today it may appear costly to adopt these advanced-node technologies, in terms of the overall benefit and the tradeoff with the performance and power gains, and sometimes yield gains, that these technologies can bring, more often than not the cost offsets itself.

What are the key advantages of moving to 20nm, and where are you seeing the most interest?

There are three primary reasons why we are seeing more system and semiconductor companies consider 20nm: performance, power, and area (PPA), in terms of how much power and area you can save, and the number of transistors and amount of IP you can put on the chip. Within our customer base, we are seeing a lot of interest in the wireless space, which includes smartphones, tablets, and consumer devices. In this market you have to support different standards, the device has to be really fast, it has to have Internet access, and all this has to be done at lower power so you don't drain the battery. We're also seeing interest in 20nm in other segments like computing and graphics processors.

Overall, what are the primary design challenges at 20nm?

There are three kinds of challenges:

1. Silicon manufacturability and managing variations.

2. "Giga-scale" design productivity. EDA tools must handle the design size and complexity that comes along with 20nm. That requires an ability to handle exponentially increasing IP and an entire SoC. Designers also need to do power management on entire SoCs and do verification signoff in a reasonable period of time.

3. Concurrent "performance, power, and area" optimization. The major objective behind shrinking the process node is to achieve the required performance and power savings in the minimal area possible. However, it gets harder and harder to reach the most optimized tradeoff between these three aspects. An integrated approach to synthesis, implementation, and signoff becomes critical to ensure design convergence and predictable success at 20nm.

[Figure: The three 20nm challenge areas: "concurrent" PPA optimization (gigahertz clocks, low power, high-density IP reuse, 3D-IC TSV); "giga-scale" design and productivity (abstractions to handle complexity, automation and multi-CPU to accelerate closure, pervasive mixed-signal); and silicon manufacturability and variation (complex design rules with more than 400 DRCs, double patterning (DPT), variations and layout-dependent effects (LDE), accuracy and pessimism removal, in-design signoff).]

How do you maximize yield and manufacturability at 20nm?

As you go to 20nm, there is an explosion in the different rules you have to deal with: there are about 400 layout rules for the metal layers. Metal pitches have gone from 100nm to 80nm and 64nm, and there is increased coupling between wires. There are also more parasitics in device modeling because of the increased interconnect. You have more layout-dependent effects, where the proximity of cells near each other leads to variations in both timing and power. Additionally, double patterning comes into the picture.

Do you really need double-patterning technology (DPT) at 20nm? Can I do without DPT?

We see different schools of thought in the industry now. Most agree that everybody moving to 20nm will need double patterning because conventional lithography is not cutting it any more, but there are a few who say gray-scale techniques would do, with minimal impact on the design process. Here are the pros and cons of each approach:

With DPT: Layout features are completely disappearing because of lithography distortion, and this cannot be treated because of the optical resolution limit. Double patterning gives a new lease on life to the existing lithography technology.

Without DPT: Some users build margins to account for DPT effects, but that defies the very purpose of moving to 20nm, which is area. So although it might be simpler from a design perspective to build margins and carry on the normal way of designing at 20nm, this approach has the overhead of area as well as iterations at the back end, because manufacturing can't close on the performance and power the customer is shooting for.

What design challenges come with double patterning?

To relieve designers from dealing with additional DPT considerations, we are building some capabilities right into the tools. But here are some things you should know about:

• The first thing that double patterning impacts is cell and library generation. You need to make sure silicon IP is compliant with double-patterning layout rules.

• It is also critical to account for double patterning during placement. Cadence has a unique technology that does automatic colorized placement, and the end benefit is a less congested design. With less congestion, it is much easier to meet timing and power requirements.

• The biggest impact is in routing. Double patterning has to be integrated inside the routing solution; it cannot be an afterthought where you finish the routing and then run decomposition. It has to be done correct-by-construction, and that's our approach to it.

We carry double-patterning intent forward from cell and IP generation to double-pattern-aware routing, and finally to signoff physical verification. This provides faster convergence, because intent is carried forward throughout the flow. A second benefit is better quality of results.

What kinds of transistor counts can be expected at 20nm, what should one watch out for, and how can EDA tools help?

20nm is expected to provide 8 to 12 billion transistors, so that's a huge increase in the size of designs, and it's done with a 2x density shrink and 50% better performance. There are several considerations:

• To handle these large designs, it is a must to scale down the data such that you retain the important information needed to make the right decisions throughout the design process. This requires a unique abstraction technique. Cadence has been working on something called "GigaFlex" models, which allow you to abstract out large design macros or blocks from a physical and timing point of view at different levels (depending on the design process you're using). GigaFlex technology helps reduce the netlist by somewhere between 80% and 90% (depending on the design style), and helps accelerate implementation design closure by up to 5x.

• The clock network gets really complex at 40nm and 28nm, and at 20nm many more clocks are introduced. People are gating clocks, there are power shutoffs, and there are many modes and corners. A traditional clock design methodology will just not cut it; you need a new architecture that has been designed from scratch. In the traditional clock design methodology, clocks are treated as an afterthought. At 20nm, you need clock design that is concurrent with the rest of the logic and physical design. You need to manage useful skew. Having acquired and integrated Azuro clock-concurrent optimization (CCOpt) technology within our digital solution, customers get a much better end result in performance, power, and area.

Variability is already a problem at 40nm and 28nm. What is new and different at 20nm?

One aspect that gets worse involves layout-dependent effects (LDE). At 20nm, cells are much closer to each other, and the proximity effect of different kinds of cells and interconnects has a worse impact on both timing and power. LDE due to lithography and stress need to be characterized up front, and what's needed is context-driven placement and optimization. The Cadence custom and digital implementation system determines how different cells are going to interact and how one layout configuration affects timing and power compared to another. It takes care of those effects during schematic design and place-and-route itself, choosing the right neighbors to get better performance and power.
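The "useful skew" idea mentioned above can be shown with a toy calculation: instead of delivering the clock to every flop at the same instant, the clock edge at a middle register is deliberately delayed so a slow logic stage borrows time from a fast one. This is only an illustrative sketch of the concept; the numbers are invented, and real clock-concurrent optimization such as CCOpt weighs far more than this (power, corners, hold checks, clock-tree cost).

```python
def worst_slack(period_ps, stage_delays_ps, clock_arrivals_ps):
    """Setup slack of each register-to-register stage: capture edge
    (period + capturing flop's clock arrival) minus data arrival
    (launching flop's clock arrival + logic delay). Returns the worst.
    """
    slacks = []
    for i, delay in enumerate(stage_delays_ps):
        launch = clock_arrivals_ps[i]
        capture = period_ps + clock_arrivals_ps[i + 1]
        slacks.append(capture - (launch + delay))
    return min(slacks)

period = 1000                 # 1 GHz clock, in picoseconds
stages = [1100, 700]          # a slow then a fast stage between three flops

balanced = worst_slack(period, stages, [0, 0, 0])    # zero skew: -100 ps, fails
skewed = worst_slack(period, stages, [0, 150, 0])    # delay middle clock: +50 ps
```

With zero skew the first stage misses timing by 100 ps; delaying the middle flop's clock by 150 ps lets both stages close at the same period.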


What's needed in a 20nm design tool flow? Will a point-tool approach work?

Point tools will not work. At Cadence, we have two goals: mitigate 20nm design risk, and help customers accelerate 20nm designs. Both of these goals require an end-to-end flow. Things like double patterning, clock design, and layout-dependent effects all have to be considered upfront in the design flow, from IP characterization to placement and routing to final signoff. Prevent ► analyze ► optimize is the key to success at 20nm. This is how we have architected our 20nm offering, which covers both custom and digital design: prevent issues upfront by integrating layout with the schematic generation process, by integrating signoff within the implementation stage, and by removing dependencies at the very end of the design process to accelerate design closure.

Cadence has been collaborating closely with our 20nm ecosystem partners for a long time, and we engage with them very early in the cycle. We even help them define 20nm technical specifications and interfaces. Right now we are working on multiple test-chip tapeouts with our partners to make sure that our modeling, abstraction, and flow will produce the best results. There's still more 20nm work involved in moving to production, and there will be additional fine-tuning of our tools and methodologies, but we are going through that exercise right now.

How do 20nm manufacturing requirements affect timing and power signoff?

Mask shifting due to double patterning results in slight capacitive variation between different metal shapes, even within the same MMMC corner. This is captured using a multi-value SPEF approach, supported by Cadence QRC Extraction, Encounter Timing System, and Encounter Power System. In addition, cell characterization requires a more accurate method of modeling in timing libraries, which is also supported in Encounter Timing System and Encounter Power System today. From a power signoff perspective, DC and AC electromigration effects are more severe at 20nm than at previous nodes, and require more accurate EM analysis and fixing. These are a few examples of how the Encounter signoff solution supports the 20nm process technology.

What are the overall benefits of 20nm process technology?

• 2x gate density improvement
• 20% speed improvement at Vdd = 0.85V
• 25% switching power reduction
• Multiple Vt and Lg options extend performance coverage

For more information about Cadence, visit their website at: www.cadence.com
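The multi-value extraction idea can be illustrated with a toy RC calculation: because mask shift changes a wire's coupling capacitance slightly, a net carries several capacitance values instead of one, and signoff must hold at all of them. This sketch is purely illustrative; the first-order 0.69·R·C delay model and all numbers are assumptions for the example, not how QRC Extraction or the Encounter tools actually model parasitics.

```python
def delay_range_ps(r_ohm, cap_values_ff, k=0.69):
    """First-order RC delay (k * R * C) for each extracted capacitance
    variant of a net. 1 ohm * 1 fF = 1e-3 ps, hence the 1e-3 factor.
    Returns the (best, worst) delay across the variants.
    """
    delays = [k * r_ohm * c * 1e-3 for c in cap_values_ff]
    return min(delays), max(delays)

# One net with three capacitance values: nominal plus +/- mask-shift cases.
lo, hi = delay_range_ps(r_ohm=500, cap_values_ff=[18.0, 20.0, 22.5])
spread_pct = 100 * (hi - lo) / lo  # how much the mask shift moves the delay
```

The point of the multi-value view is that timing is checked against the full range rather than a single nominal number.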





FEATURED PRODUCTS

Test Support for DDR4 Standards
Tektronix, Inc. announced that it is adding full electrical verification and conformance test support for the JEDEC DDR4, DDR3L, and LPDDR3 standards, giving design engineers the tools they need to bring chips and systems that incorporate these next-generation memory technologies to market. For more information, please click here.

150W Digital-Input Power Stage
The TAS5622-TAS5624DDVEVM PurePath™ EVM demonstrates the most recent version of the TAS5622DDV or TAS5624DDV integrated-circuit power stage. The TAS5622 and TAS5624 are high-performance, integrated stereo feedback digital amplifier power stages designed to drive 3Ω speakers at up to 165W per channel (TAS5622) or 200W per channel (TAS5624). They require only passive demodulation filters to deliver efficient, high-quality audio amplification. The EVM can be configured as two BTL channels for stereo evaluation or one PBTL (parallel BTL) channel for subwoofer evaluation. For more information, please click here.

Virtualization for Architectures
MIPS Technologies, Inc. announced a major release of the MIPS architecture, encompassing the MIPS32, MIPS64, and microMIPS instruction set architectures. Based on work done over more than two years, Release 5 ("R5") of the MIPS base architecture incorporates important functionality, including virtualization and SIMD (Single Instruction, Multiple Data) modules. The MIPS SIMD architecture (MSA) module allows efficient parallel processing of vector operations. This functionality is of growing importance across a range of applications. For more information, please click here.

32-Channel High-Voltage Analog Switch
Supertex introduced the HV2809, a thirty-two-channel, high-voltage analog switch IC designed for use in medical ultrasound imaging systems as a probe selection device. It replaces the electromechanical relays that typically perform the probe selection function, takes up less printed circuit board area, emits no audible noise, and increases system reliability. The HV2809 serves as a sixteen-pole, double-throw (16PDT) high-voltage switch array. Using Supertex's proprietary HVCMOS technology, the IC efficiently controls analog signals with low-power CMOS logic and high-voltage bilateral DMOS switches. It features a very low quiescent current of 10µA for low power dissipation and a bandwidth of up to 50MHz. For more information, please click here.




Taming the Challenges of 20nm Custom/Analog Design
By Samta Bansal, Cadence

Custom and analog designers will lay the foundation for 20nm IC design. However, they face many challenges that arise from manufacturing complexity. The solution lies not just in improving individual tools, but in a new design methodology that allows rapid layout prototyping, in-design signoff, and close collaboration between schematic and layout designers.






Introduction

For many electronics OEMs, particularly those working with mobile applications, the move to the 20nm process node will be irresistible. Early estimates point to a potential 30-50% performance gain, 30% dynamic power savings, and 50% area reduction compared to the 28nm node. Chip complexity may range up to 8-12 billion transistors. With all its benefits, the 20nm node will open the door to a new generation of smaller, faster, more differentiated devices.

The 20nm process node, however, comes with many challenges, and most of the discussion thus far has concentrated on the challenges faced by digital designers. This white paper focuses on the custom and analog designers who will lay the foundation for 20nm design. Custom designers will produce the standard cells, memories, and I/Os that digital designers will assemble into systems-on-chip (SoCs). Analog designers will create the IP blocks that will be integrated into 20nm SoCs, nearly all of which will be mixed-signal.

All designers face several "big picture" challenges at 20nm. One is simply the investment that's required. The research firm IBS predicts substantial increases in fab costs, process R&D, mask costs, and design costs with the move from 28nm to 20nm (Figure 1). Profitability may require shipments of 60-100 million units. Finances will thus mandate careful risk considerations.

                28nm          20nm
Fab costs       $3B           $4B - $7B
Process R&D     $1.2B         $2.1B - $3B
Mask costs      $2M - $3M     $5M - $8M
Design costs    $50M - $90M   $120M - $500M

Figure 1: Fab, process, mask, and design costs are much higher at 20nm (IBS, May 2011)

Another challenge is design enablement, which mostly represents a worsening of existing concerns. These concerns include time to market, profitability, predictability, low power, complexity, and cost. At 20nm, an increased amount of silicon IP must be obtained from multiple sources, and there will be increasing concerns about mixed-signal design, integration, and verification. Mixed-signal interactions will increase as more and more digital control circuitry is used, and as analog and digital components come into close proximity.

Of most concern to custom/analog designers, and the main topic of this white paper, are the challenges that arise from manufacturing complexity. What is unique about 20nm is the deep and complex interdependency of manufacturing and variability, on top of increasing timing, power, and area challenges. Concerns include the following:

• The use of double patterning (with extra mask layers) so that 193nm photolithography equipment can print features at 20nm
• Layout-dependent effects (LDE), in which the layout context (what is placed near a device) can impact device performance by as much as 30%
• New local interconnect layers
• More than 5,000 design rules, including some new and difficult ones

[Figure 2: Key manufacturability "care abouts" (multi-patterning (MPT), local interconnect, layout-dependent effects (LDE)) differ according to design style: standard cell, custom digital, analog, I/Os, SRAM/memory, and chip assembly each rate them MUST, SHOULD, COULD, or NO.]

The solution to 20nm custom/analog challenges lies not only in new point tools, but in a new methodology. In this methodology, circuit (schematic) and layout designers will work together and will have the ability to rapidly exchange information. Circuit designers will be able to assess the impact of layout before the layout is completed. The flow will use in-design, signoff-quality engines, and all tools will be double patterning-aware, rather than leaving everything to the final signoff stage.

Double Patterning

The most-discussed manufacturability issue at 20nm is double patterning. This technique splits a dense layout onto two separate masks so that 193nm lithography can print structures that would be too close together on a single mask. This technology is needed to get current 193nm lithography equipment to print features at pitches below 80nm, which will be the case for at least some of the metal layers for almost all 20nm designs. When double patterning is used, each mask is exposed separately, and the combined exposures produce features at half the pitch that would otherwise be printable with 193nm lithography (Figure 3).

[Figure 3: Double patterning makes it possible to print features that could not be printed with conventional lithography. With conventional lithography, geometry features disappear due to lithography distortion; double patterning enables printing of images below the spacing design rules.]

The concept may be simple, but managing double patterning is difficult. It requires a decomposition process in which alternate colors (such as red and green) are used to determine which features will be placed on which mask.

PROJECT

COULD

COULD NO variation and sensitivity • Device

rapidly exchange information. Circuit designers will be able to obtain early parasitic estimates before • A new type of transistor, the FinFET the layout is completed. The flow will use in-design, ufacturability “care abouts” differ according to design style signoff-quality engines as opposed to attempting to fix Concerns about manufacturability issues MUST everything during the final signoff stage. And all tools vary according to design hallenges lies not only in new point tools, but in astyle new (Figure custom 2). designwill be “double patterning-aware” and ready for 20nm. For designers example, will analog I/O collaboration, designers and uit (schematic) and layout workand in close MUST are most concerned about LDE, circuit estimates information. Circuit designers will be able to obtain early parasitic Double Patterning specifications, and area vs. performance w will use in-design, signoff-quality engines as opposed to attempting to fix tradeoffs. Memory and standard cell The most-discussed manufacturability issue at 20nm e. And all tools will be “double patterning-aware” and ready for 20nm. SHOULD designers are very concerned about is double patterning. This technology splits a layer density, and as such, double patterning and into two separate masks so that 193nm lithography local interconnect are key concerns. SHOULD can print structures that are too close together to ssue at 20nm is double patterning. This technology splits a layer into resolve two with a single mask. This technology is needed The solution to 20nm custom/analog to get current 193nm lithography equipment to print phy can print structures that are too close together to resolve with a single COULD challenges lies not only in new point tools, current 193nm lithography equipment to print correctly when metal correctly pitches when metal pitches are below 80nm, which but in a new custom design methodology. 
will be the case for at least some of the metal layers for e for at least some of the metal layers for almost any 20nm design. In this methodology, circuit (schematic) COULD almost any 20nm design. and layout workto in closefeatures mask is exposed separately, and thedesigners exposureswill overlap create and will have3).the ability to When double patterning is used, each mask is exposed wise be printable withcollaboration, 193nm lithography (Figure ign style separately, and the exposures overlap to create design style features that are half the pitch that would otherwise be printable with 193nm lithography (Figure 3). ut in a new custom design hography Double Patterning

ork in close collaboration, and to obtain early parasitic estimates as opposed to attempting to fix aware” and ready for 20nm.

The concept may be simple, but managing double patterning is difficult. It requires a two-color layout decomposition process in which alternate colors (such as red and green) are used to indicate which features will be placed on which mask. This results in added design rules that restrict the placement and proximity of layout features. For example, traces that are the same color can’t be placed too closely together.

As shown in Figure 4, it is very easy to create a design-rule checking (DRC) "loop," which is a coloring conflict that cannot converge on a solution that works. And in many cases, it will be necessary to trace back a number of steps to unravel how the loop was created.

Figure 4: Double-patterning loops are easy to create (two nets colored; wire created; three nets colored; conflict created; conflict remains after re-coloring; loop)
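The two-color decomposition described above can be viewed as a graph coloring problem: features that sit closer than the single-mask minimum spacing must land on different masks, and an odd cycle of such conflicts is exactly the kind of unresolvable situation Figure 4 illustrates. A minimal sketch, assuming the conflict pairs have already been extracted from the layout (the function name and data layout are illustrative, not any EDA tool's API):

```python
from collections import deque

def decompose_two_masks(num_features, conflicts):
    """Assign each layout feature to mask 0 or mask 1 (red/green).

    `conflicts` lists pairs of features spaced closer than the
    single-mask minimum, so each pair must go on different masks.
    Returns a list of mask assignments, or None when an odd cycle of
    conflicts makes two-coloring impossible (a DRC "loop").
    """
    adj = [[] for _ in range(num_features)]
    for a, b in conflicts:
        adj[a].append(b)
        adj[b].append(a)

    color = [None] * num_features
    for start in range(num_features):
        if color[start] is not None:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:             # BFS over one connected component
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v] = 1 - color[u]   # opposite mask
                    queue.append(v)
                elif color[v] == color[u]:
                    return None  # odd conflict cycle: unresolvable
    return color
```

A production decomposer would also have to honor same-color spacing rules and report the offending cycle itself, so the designer can trace back how the loop formed rather than just learning that one exists.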



Handling double patterning properly is a big concern for custom designers of standard cells, memories, and I/Os. These designers must be cognizant of coloring as they create layouts to optimize area. It is difficult to achieve a high density while making the design decomposable. According to some reports, standard cells that previously took four hours to lay out sometimes take a week at 20nm, because designers have to keep re-running verification as they try to pack decomposable cells as tightly as possible.

Instead of running signoff verification once every four hours, a quick double patterning check should be run after every editing command. In this way errors can be fixed quickly, and designers don't end up with DRC loops that may take many steps to unwind.

Analog designers are concerned about the mismatches that an additional mask can cause. Double patterning impacts electrical performance because different masks on a given layer will shift during the manufacturing process (Figure 5). This mask shift causes variations that have a direct impact on RC and the interconnect. As a result, parasitic matching can become very challenging.

Figure 5: Mask shift occurs with double patterning

EDA tools can help automate the colorized decomposition process and can help ensure correctness. In a 20nm-aware toolset, all physical design tools should be double patterning-aware, including placement, routing, extraction, and physical verification. For example, extraction must be able to predict the capacitance variation resulting from mask shift.

Another capability that's needed is automated color-aware layout. Here, color conflicts are avoided as the layout designer draws shapes and places cells. Once a shape is dropped into place, the tool automatically makes any needed coloring changes. Locking down color choices in the design phase sets a constraint for the manufacturer and helps to ensure that matching is correct by placing pairs of nets or devices on the same mask.
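To make the mask-shift concern concrete, here is a back-of-the-envelope sketch of how spacing-dependent coupling capacitance moves when one mask shifts relative to the other. The 1/spacing scaling and the function itself are illustrative assumptions, not an extraction engine's actual model:

```python
def coupling_cap_bounds(c_nominal_fF, spacing_nm, mask_shift_nm):
    """First-order bounds on coupling capacitance between two parallel
    traces placed on different masks of a double-patterned layer.

    Illustrative model only: C is assumed to scale as 1/spacing, and a
    relative mask shift of +/- mask_shift_nm moves one trace toward or
    away from its neighbor.
    """
    if mask_shift_nm >= spacing_nm:
        raise ValueError("shift would merge the traces")
    # Worst case: traces pushed together; best case: pulled apart.
    c_max = c_nominal_fF * spacing_nm / (spacing_nm - mask_shift_nm)
    c_min = c_nominal_fF * spacing_nm / (spacing_nm + mask_shift_nm)
    return c_min, c_max
```

Even this toy model shows why same-mask placement helps matched pairs: a few nanometers of shift at sub-80nm pitches produces a several-percent spread in coupling, which a signoff-quality extractor must capture as corners rather than a single nominal value.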


Layout-Dependent Effects

At 20nm, it's not enough to model the performance of a transistor or cell in isolation—where a device is placed in a layout, and what is near to it, can change the behavior of the device. This is called layout-dependent effect (LDE), and it has a big impact on performance and power. While LDE was an emerging problem at 28nm, it is significantly worse at 20nm, where cells are much closer together. At 20nm, up to 30% of device performance can be attributed to the layout "context"—that is, the neighborhood in which a device is placed. Figure 6 shows how voltage threshold can change according to the "well proximity effect," or how close a device is placed to a well.

Figure 6: Well proximity effect is a source of LDE, and it impacts voltage thresholds

As shown in Figure 7, there are many potential sources of LDE. While a few of these effects emerged at 40nm and above, most are far more problematic at 20nm. For example, the distance between gates, including dummy poly, has a direct effect on the drain current of the transistor (poly spacing effect). Length of diffusion (LOD) is the distance from an n-channel or p-channel to the shallow trench isolation (STI) oxide edge. Oxide diffusion (OD)-to-OD spacing is active-to-active spacing.

A major cause of LDE is mechanical stress, which is often intentionally induced to improve CMOS transistor performance [2]. For example, a dual stress liner is a silicon nitride (SiN) capping layer that is intentionally deposited to be compressive on PMOS and tensile on NMOS—improving the performance of both. Nonetheless, it results in variability that may make it difficult to close timing. Stress is also unintentionally induced through technologies such as shallow trench isolation, which isolates transistors and determines active-to-active spacing.

LDE cannot be modeled in Pcells or device models. It is no longer enough to create a schematic, pick a topology, run a simulation, and throw it over the wall to a layout designer. At 20nm, circuit designers have to consider layout context as well as device topology, and they need to simulate with layout effects prior to layout completion. While this may sound paradoxical, pre-layout sensitivity analysis tools can identify devices that are sensitive to LDE. Circuit designers can also make use of LDE simulation using partial layouts and LDE-aware layout module generators. Layout engineers can use LDE hotspot detection and fixing.

What's needed is context-driven placement and optimization that can determine how different cells are going to interact and how the layout context affects timing and power. The design tool should take care of LDE during both schematic and layout phases. But it's not just about tools. Circuit and layout designers have to learn to work together in new ways, with a much higher level of cooperation.
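The hotspot detection mentioned above can be sketched as a simple screen in the spirit of Figure 6: each device has a design-rule minimum well distance, but also a (larger) distance it needs to hit its expected gain. A device placed between the two limits passes DRC yet is an LDE hotspot. The data layout, field names, and screening rule here are illustrative, not a tool API:

```python
def lde_hotspots(devices, margin_nm=0):
    """Flag devices whose planned well distance sits below the distance
    needed for the expected gain (plus an optional safety margin).

    Each device is a dict with:
      name             - instance name
      dist_to_well_nm  - planned active-to-nwell distance
      min_gain_dist_nm - distance below which gain degrades noticeably
    Returns hotspots sorted with the worst shortfall first.
    """
    flagged = [d for d in devices
               if d["dist_to_well_nm"] < d["min_gain_dist_nm"] + margin_nm]
    # Largest shortfall versus the gain target comes first.
    return sorted(flagged,
                  key=lambda d: d["dist_to_well_nm"] - d["min_gain_dist_nm"])
```

Running such a screen before layout completion is exactly the "simulate with layout effects prior to layout" idea: it tells the layout engineer which matched pairs deserve extra well spacing before any routing is committed.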

Figure 7: Many layout-dependent effects first appear at 40nm (black X) and become severe at 28nm and below, with limited workarounds (red X)

New Interconnect Layers

For many custom designers, 20nm is all about density. Local interconnect (LI) layers—also called middle-of-line (MOL) layers—offer one way to achieve very dense local routing below the first metal layer (Figure 8). There may be several of these layers, and most don't use contacts; instead, they connect by shape overlap without any need for a cut layer. Getting rid of contacts makes routing denser, because contacts are bigger than nets and can't be placed too close to nets. However, designers need to be aware that LI layers have their own restrictive design rules. For example, LI shapes can only be rectangles, and they often have fixed directions with length constraints.
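Those LI restrictions are simple enough to sketch as an in-editor check. The numeric limits and function below are made-up placeholders; real values come from the foundry's design rule manual:

```python
def check_li_shape(width_nm, height_nm, direction="horizontal",
                   min_len_nm=50, max_len_nm=400):
    """Validate a local-interconnect rectangle against the kinds of
    restrictions described in the text: a fixed routing direction and
    a length window. Rectangles-only is implied by taking just a
    width and a height. All limits are illustrative placeholders.
    """
    errors = []
    # The long axis must run along the layer's fixed direction.
    length, thickness = ((width_nm, height_nm) if direction == "horizontal"
                         else (height_nm, width_nm))
    if length < thickness:
        errors.append("shape runs against its fixed LI direction")
    if not (min_len_nm <= length <= max_len_nm):
        errors.append(f"length {length}nm outside [{min_len_nm}, {max_len_nm}]nm")
    return errors
```

Because the rules are this mechanical, they are good candidates for the per-edit, in-design checking advocated earlier, rather than being discovered in a batch signoff run.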

Figure 8: Local interconnect layers provide additional routing density (Li1 connects to active, LiPO connects to poly, and Li2 connects Li1 to LiPO)

New Design Rules

At 20nm, there may be more than 5,000 design rule checks. There are more than 1,000 new rules since the 90nm process node, and double patterning alone requires 30-40 checks. The 20nm node adds another 400 new advanced rules, such as wrong width and wrong spacing, discrete width and spacing, and special rules for pin access on cells. Designers will face directional orientation rules, specific rules regarding length/width and transistor proximity, and new rules governing legal inter-digitation patterns.

An important point to remember is that 20nm doesn't just bring in more rules; it brings in more complex rules. Simple rules of thumb or memorization won't work any more. What's needed is a design system that can check design rules as the design progresses, rather than waiting for a final signoff check.

Device Complexity and Variation

Transistors at 20nm are very small and very fast, and variation is a constant challenge. Transistors are sensitive to channel length and channel doping, and transistor behavior is subject to short-channel effects. Custom designers must minimize leakage, ensure reliability, and achieve reasonable yields.

The reality is that 20nm transistors were designed for achieving high densities in digital design, not for optimizing leakage or gain in custom/analog design. A limited set of device sizes is available for design. Width and length parameters are limited to a small set of values, and with fewer choices, manually tuning transistors to meet specs (such as gain) is difficult.

Some 20nm designs—and many, if not most, designs at 14nm and below—will use a new type of transistor called a FinFET (or tri-gate in Intel's terminology). In a FinFET, the FET gate wraps around three sides of the transistor's elevated channel, or "fin" (Figure 9). This forms conducting channels on three sides of the vertical fin structure, providing much more control over current than planar transistors. Multiple fins can be used to provide additional current.

FinFETs promise greatly reduced power at a given level of performance. According to Intel [3] (which is using tri-gate transistors in its 22nm "Ivy Bridge" chip), 22nm tri-gate transistors provide a 37% performance increase and use 50% less power at the same performance than 32nm planar transistors, for an added wafer cost of only 2-3%.

However, FinFETs raise some challenges for custom/analog designers. One constraint is that all the fins on the transistors on a given chip must be the same width and height. Designers can add fins to increase the width, but this can only be done in discrete increments—you can add 2 fins or 3 fins, but not 2.75 fins. In contrast, planar transistors can be adjusted to any channel width in order to meet specifications.

Other challenges include additional design rules, manufacturing variations in the width and height of fins, and additional metal and via resistance. SPICE models with additional parameters will be needed for the FinFETs, and simulators must be able to interpret them.
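The fin quantization above can be made concrete. Assuming, as a common approximation rather than a foundry-specific formula, that each fin contributes an effective width of twice the fin height plus the fin width, drive strength can only be stepped in whole-fin increments (the default dimensions are placeholders, not real process data):

```python
import math

def fins_for_width(target_w_nm, fin_w_nm=8.0, fin_h_nm=30.0):
    """Smallest whole number of identical fins meeting a target
    effective width, using the common approximation
    W_eff = n * (2*H_fin + W_fin).

    Returns (n_fins, achieved_w_nm). The achieved width generally
    overshoots the target, because fins come only in integer counts:
    2 or 3 fins, never 2.75.
    """
    per_fin = 2.0 * fin_h_nm + fin_w_nm   # effective width of one fin
    n = math.ceil(target_w_nm / per_fin)
    return n, n * per_fin
```

For an analog designer used to dialing in an arbitrary planar channel width, this rounding is the pain point: meeting a gain spec may mean accepting a noticeably larger (or differently biased) device than the hand calculation asked for.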


Figure 9: A FinFET (right) wraps a gate around three sides of an elevated channel
Extraction tools must be aware of the capacitance and resistance that arises from 3D transistor structures. Layout tools will have to be optimized to handle FinFETs. And like any new technology, FinFETs will require ecosystem support including EDA tools, process design kits (PDKs), physical IP, and silicon-proven manufacturing processes.

A New Custom Methodology

The traditional custom/analog flow is a manual, "throw it over the wall" approach. Circuit designers do schematic entry and run an ideal simulation without layout parasitics. The design is tossed to layout designers who handle device creation, manual placement, and manual routing. Next comes physical verification, extraction, and a final simulation. It's a time-consuming, serial methodology in which issues are exposed very late in the design process, and many design iterations may occur. No wonder blocks that took four hours to lay out at 28nm might take a week at 20nm!

To resolve the challenges described in the above sections, every tool in the custom/analog flow needs to be aware of the changes that 20nm brings, including double patterning, LDE, local interconnect, complex design rules, device variation, and FinFETs. But it is not enough to just improve tools. What is needed is a new methodology that provides a higher level of automation than existing flows. In this methodology, circuit and layout designers will exchange and share information, layout prototyping will provide early estimates of parasitics and LDE, and an in-design signoff approach will greatly shorten final signoff runs.

Figure 10 depicts a more automated and collaborative methodology. Here, circuit designers draw schematics, just like they always have. But they also pass constraints to the layout designers, and run a pre-layout parasitic and LDE estimation. On the right side of the diagram, both circuit designers and layout designers can use Modgens (a Cadence® term for automatic module generators) to quickly generate layouts for structures such as differential pairs, current mirrors, and resistor arrays. While not a final "DRC-clean" layout, these automatically generated layouts allow accurate physical effects to be extracted, analyzed, and simulated. Modules can then be fed into an analog placer and assembled into a floorplan.

Figure 10: A new custom design methodology allows rapid information exchange between schematic and layout designers
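The in-design signoff idea, checking each edit as it lands instead of batching everything into one final run, can be sketched as a simple editing loop. The layout structure, edit format, and checker callback are hypothetical stand-ins for signoff-quality engines:

```python
def apply_edits(layout, edits, check_region):
    """Apply layout edits one at a time, running a signoff-quality
    check on just the edited region after each edit (in-design
    signoff), rather than one huge batch check at the end.

    `layout` is a dict of shape_id -> shape, `edits` a list of
    (shape_id, shape) pairs, and `check_region` a callable returning a
    list of violations for one shape in context. All structures are
    illustrative placeholders, not an EDA tool's API.
    """
    violations_log = []
    for shape_id, shape in edits:
        layout[shape_id] = shape
        errors = check_region(layout, shape_id)   # quick, local check
        if errors:
            # Immediate feedback: revert the offending edit right away,
            # so conflicts never pile up across many later edits.
            del layout[shape_id]
            violations_log.append((shape_id, errors))
    return violations_log
```

The design choice that matters is scope: because each check only looks at the neighborhood of the last edit, it is fast enough to run on every command, which is what keeps DRC loops from compounding silently until final signoff.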

Layout engineers then perform device placement, routing, in-design signoff, and extraction. In short, the basic roles of circuit designers and layout designers remain the same, but there is an ongoing and rapid exchange of information and a high degree of collaboration. Constraints flow easily between the schematic and layout environments.


Automatic module generators help enable "rapid layout prototyping," which is the ability to quickly generate an extracted layout view so circuit designers can run simulations with real parasitic and LDE information. In this way, the electrical and mismatch problems that might be caused by layout effects can be spotted and remedied early in the design cycle. The module generators, however, must be ready for 20nm, with place-and-route engine support for 20nm design rules, complex abutment support, array-based FinFET configurations, coloring for double patterning, and LDE awareness.

Another approach for bringing physical effects into initial simulations is incremental design. Here, designers lay out the pieces they care about with assisted or full automation, and gather as much physical and electrical information as they can. This results in a partial layout extraction. The emphasis is on placement rather than routing. The point is that designers are not taking the time to do a full layout, just what's necessary to generate the desired parasitic information.

Finally, in-design signoff is a must at 20nm. If a designer makes a mistake during layout, the feedback should come immediately, not four hours or a month later.
Otherwise many design iterations may be needed for each block, with each iteration taking longer than it took to do the original design. There are two types of interactive editing checks that can help avoid those iterations. One is “in-edit checking,” which warns of errors while geometry is being created. Another is “post-edit signoff-quality checking,” which exercises a more robust check after each edit is completed. The key to in-design signoff is having “signoff-quality” engines that run during the design flow. This does not

22

EEWeb | Electrical Engineering Community

remove the need for a final signoff check, but it greatly reduces the amount of time and potentially the number of licenses that a final check might consume. Conclusion There is no doubt that 20nm is coming. The performance, power, and density advantages of the 20nm node will provide a competitive advantage to those who adopt it. While there are a number of design and manufacturing challenges, the good news is that the challenges are manageable—but only if we look beyond individual tools and rethink the ways that custom and analog circuits are designed. It’s certainly true that every tool in the custom/analog flow must be 20nm aware, and able to handle such challenges as double patterning, LDE, and complex design rules. But improving individual tools is not enough. What’s needed is a new custom/analog methodology in which schematic and layout designers work in close cooperation, schematic designers can run prototype layouts to gather parasitic and LDE information, and signoff-quality engines permit “indesign signoff” that catches the vast majority of errors before the final signoff phase. While there’s been much discussion of the problems facing digital designers at 20nm, it’s the custom and analog designers who will lay the foundation for this process node. Digital design cannot proceed without standard cells, memories, and I/Os, and SoC design cannot move ahead without analog/mixed-signal IP. A complete 20nm solution must therefore include custom/analog and digital design, and allow a close interaction between these domains, preferably using a common database such as OpenAccess. Cadence Design Systems offers such a solution today. References [1] A Call to Action: How 20nm Will Change IC Design, white paper, cadence.com [2] Modeling Stress-Induced Variability Optimizes IC Timing Performance, white paper, cadence.com [3] Intel Reinvents Transistors Using New 3-D Structure, press release, intel.com




EEWeb

ARTICLES

Need For Universal Wireless Power Solution

Dave Baarman, Director of Advanced Technologies

"Sed ut perspiciatis unde omnis iste natus error sit voluptatem accusantium laudantium,

doloremque totam

rem

aperiam,

eaque ipsa quae ab illo inventore veritatis et quasi architecto beatae vitae dicta sunt explicabo. Nemo enim ipsam voluptatem quia voluptas sit aspernatur aut odit aut fugit, sed quia consequuntur magni dolores eos qui ratione voluptatem sequi nesciunt. Neque porro quisquam est, qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit, sed quia non numquam eius modi tempora incidunt ut labore et dolore magnam aliquam quaerat voluptatem. Ut enim ad minima veniam, quis nostrum exercitationem ullam corporis suscipit laboriosam, nisi ut aliquid ex ea commodi consequatur? Quis autem

Electrical Engineering Community

Making Wireless Truly Wireless:

JOBS

COMMUNITY

DEVELOPMENT TOOLS

JOIN TODAY

www.eeweb.com

Little Sensors, Big Ideas®

LXRS®

Lose all the wires, keep all the data. MicroStrain’s new LXRS™ Wireless Sensing System offers 100% data throughput under most operating conditions.

Our new LXRS™ Wireless Sensing System includes:

• Lossless wireless communications protocols provide 100% packet success rate
• Extended range radio link to 2 km
• Scalable wireless sensor networks support continuous, burst, and hybrid sampling modes
• Time synchronized to ±32 microseconds

Call 800.449.3878 or visit us online at www.microstrain.com

To learn more about our LXRS Wireless Sensor Networks, scan here for demo video


Get the Datasheet and Order Eval Boards http://www.intersil.com

Dual 15A/Single 30A Step-Down Power Module ISL8225M

The ISL8225M is a fully-encapsulated step-down switching power supply that can deliver up to 100W output power from a small 17mm square PCB footprint. The two 15A outputs may be used independently or combined to deliver a single 30A output. Designing a high-performance board-mounted power supply has never been simpler -- only a few external components are needed to create a very dense and reliable power solution.

Automatic current sharing and phase interleaving allow up to six modules to be paralleled for 180A output capability. 1.5% output voltage accuracy, differential remote voltage sensing and fast transient response create a very high-performance power system. Built-in output over-voltage, over-current and over-temperature protection enhance system reliability. The ISL8225M is available in a thermally-enhanced QFN package. Excellent efficiency and low thermal resistance permit full power operation without heat sinks or fans. In addition, the QFN package with external leads permits easy probing and visual solder inspection.

Features

• Fully-encapsulated dual step-down switching power supply
• Up to 100W output from a 17mm square PCB footprint
• Dual 15A or single 30A output
• Up to 95% conversion efficiency
• 4.5V to 20V input voltage range
• 0.6V to 6V output voltage range
• 1.5% output voltage accuracy with differential remote sensing
• Up to six modules may be paralleled to support 180A output current
• Output over-voltage, over-current and over-temperature protection
• Full power operation without heat sinks or fans
• QFN package with exposed leads permits easy probing and visual solder inspection

Applications

• Computing, networking and telecom infrastructure equipment
• Industrial and medical equipment
• General purpose point-of-load (POL) power

Related Resources

• AN1789, “ISL8225MEVAL2Z 6-Phase, 90A Evaluation Board Setup Procedure”
• AN1790, “ISL8225MEVAL3Z 30A, Single Output Evaluation Board Setup Procedure”
• AN1793, “ISL8225MEVAL4Z Dual 15A/Optional 30A Cascadable Evaluation Board”
• ISL8225M 110A Thermal Performance Video

NOTE: ALL PINS NOT SHOWN ARE FLOATING.

FIGURE 1. COMPLETE 30A STEP-DOWN POWER SUPPLY (4.5V to 20V input, 1.2V at 30A output)

FIGURE 2. SMALL FOOTPRINT WITH HIGH POWER DENSITY

December 3, 2012 FN7822.0

Intersil (and design) is a registered trademark of Intersil Americas Inc. Copyright Intersil Americas Inc. 2012 All Rights Reserved. All other trademarks mentioned are the property of their respective owners.



SPECIAL FEATURE

Episode 2.2

Supporting MCUs in an RTOS
In this episode of MCU Wars, Richard Barry and Jean Labrosse continue the discussion about the nuances of developing in an RTOS. The discussion ranges from the challenges RTOS developers face in supporting MCUs to reasons why an engineer should not try writing their own RTOS. This series was filmed at DevCon 2012 by Renesas in Anaheim, California. DevCon provides an environment for valuable technical information exchange and access to Renesas’ technology experts and partners from around the world.



What are some of the challenges as RTOS developers in supporting MCUs?

Richard: I’d say that the first challenge is to gain a deep understanding of exactly how the MCU works—the instruction set, the nuances of it, how the interrupts work and how they interact with each other. A large proportion of developing a Kernel for a particular architecture is actually reading—reading the documentation and exploring the additions the compiler provides, such as the ability to interact with the hardware in the most maintainable way possible. Above the deep hardware interaction, you also have a very complex timing problem, where you are really looking at the robustness and making sure that the Kernel is really doing what people expect it to do, that the scheduling algorithm is being maintained perfectly, and that all the prioritization is working without unforeseen interactions. It’s not a trivial job, although the amount of software you produce at the end of it may be quite small. The amount of time and effort that goes into it is actually quite intense.

Jean: The good news about this is that the user of our products doesn’t have to worry about any of that. This is a one-time thing. In fact, we have ported the Kernel over to 45 different CPU architectures. The Kernel was designed from the get-go to be extremely portable, so there is really no need to worry from the customer’s point of view. All of that complexity has been isolated, and the hope is that the customer uses the software as is while we worry about the interaction with the microprocessor, the instruction set, and how to protect critical sections. We remove all of that effort so that the user can just write their application.
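The isolation Jean describes is usually achieved with a small per-CPU porting layer. The sketch below is purely illustrative (the function names are hypothetical, not the actual FreeRTOS or µC/OS port API): only the two interrupt save/restore primitives change per architecture, while the kernel code that uses them stays portable. Here the primitives are simulated with a flag so the sketch runs on a host.

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical porting layer: on real hardware these would disable
 * interrupts and restore the previous interrupt state; here they are
 * simulated so the sketch compiles and runs anywhere. */
typedef uint32_t cpu_sr_t;
static bool irq_enabled = true;

static cpu_sr_t cpu_irq_save(void)
{
    cpu_sr_t prev = (cpu_sr_t)irq_enabled;
    irq_enabled = false;            /* "disable interrupts" */
    return prev;
}

static void cpu_irq_restore(cpu_sr_t sr)
{
    irq_enabled = (bool)sr;         /* restore caller's interrupt state */
}

/* Portable kernel code protects shared scheduler state identically on
 * every CPU; only the two primitives above are rewritten per port. */
static volatile uint32_t ready_count;

uint32_t kernel_make_ready(void)
{
    cpu_sr_t sr = cpu_irq_save();   /* enter critical section */
    ready_count++;                  /* shared scheduler state */
    cpu_irq_restore(sr);            /* leave critical section */
    return ready_count;
}
```

This separation is why, as Jean notes, the application writer never touches interrupt registers directly: the critical-section discipline lives entirely behind the port layer.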

Does it make sense for engineers to write their own RTOS?

Jean: I don’t think it makes sense to write your own RTOS unless you really have a very specific need. Other than that, it makes no sense to write your own Kernel given that there are tons of Kernels available out there. If you are writing a Kernel that will eventually have added-on features such as TCP/IP and USB support, then it’s likely that you will have to write your own Kernel in order to support some of these primitives that are common in real-time Kernels. Because of that, you would most likely write your Kernel the same way we write our Kernels in the first place. Why reinvent the wheel? You should be concentrating on using the wheel instead of reinventing it. I look at it this way: you are not in the business of building houses; you are in the business of living in the house. You don’t want to start reinventing the screws and hammers and things you actually want to start using.

Available at www.freertos.org


What advice do you have for project managers willing to write their own RTOS?

Richard: If I was managing a team that wanted to write their own RTOS, I would say, “Fine. Do it. But do it on your own time, not on mine,” because I don’t want to pay for something that isn’t going to work as well as products that I can get off the shelf. Even free products like mine; why would you write your own when you can get something that’s free, commercial quality, supported, has been around for eight years, is known to be really robust, and is there for you instantly? It is an intellectual challenge and I can understand the enjoyment of it, but don’t do it on my time!

Jean: The only comment I would have there is that you say it’s a commercial-grade product and it’s supported, but support is not free. The product is free but your hours are not.

Richard: I’m going to have to agree and disagree with that. The support is free, but my hours are not. I don’t charge the customer for my hours. There is a commercial version of FreeRTOS that comes with a support contract. A lot of companies, for legal reasons, won’t have software in their product unless there is a written contract, so if you want a support contract, you can have that.

Jean: Correct—but you have to purchase that separately. In [Micrium’s] case, when you purchase our software, not only does the source code come with it, but also a full-year support contract for our product. It’s all included.

Available at www.micrium.com

Continued in Episode 3...

To view this episode of MCU Wars and other EEWeb videos: Click Here



Distribution Systems: Automation & Optimization - Part 3


Nicholas Abi-Samra

Vice President, Asset Management - Quanta Technology



TECH ARTICLE




IX. IEC 61850

The scope of standard IEC 61850 is to support the communications needs in substations. The goal of the standard is interoperability. The standard contains an object-oriented data model that groups all data according to the common user functions in objects called Logical Nodes (LN). All related data attributes are contained and defined in these Logical Nodes. Access to all the data is provided in a standardized way by the services of the standard, which are defined to fulfill the performance requirements. The abstract data models defined in IEC 61850 can be mapped to a number of protocols.

Benefits from using IEC 61850 include:

• Lower installation cost: IEC 61850 enables devices to exchange data and status using GOOSE (Generic Object Oriented Substation Event, for datasets) and GSSE (Generic Substation Status Event, for status) over the station LAN, without having to wire separate links for each relay.
• Lower transducer cost
• Lower commissioning cost: IEC 61850 devices require far less manual configuration than legacy devices; many applications require nothing more than setting up a network address in order to establish communications.
• Lower equipment migration cost: all devices share the same naming conventions, minimizing the reconfiguration of client applications when those devices are changed.
• Lower integration cost: IEC 61850 networks are capable of delivering data without separate communication front-ends or reconfiguring devices. By utilizing the same networking technology across the utility enterprise, the cost to integrate substation data into the enterprise is substantially reduced.
• Fewer procurement misinterpretations: by utilizing the Substation Configuration Language (SCL), the user can specify exactly what is expected to be provided in each device, in a form that is not subject to misinterpretation by suppliers.
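The Logical Node idea can be pictured as named objects that group standardized data attributes, so clients read status through the model rather than through device-specific register maps. The toy C struct below is purely illustrative: the instance name "XCBR1" alludes to the standard's circuit-breaker LN class, but nothing here matches the standard's actual attribute names, encoding, or services.

```c
#include <stdbool.h>
#include <string.h>

/* Toy model of an IEC 61850 Logical Node: a named object grouping
 * related data attributes. Illustrative only — real LN classes and
 * attributes are fixed by the standard, not by this sketch. */
typedef struct {
    char name[8];        /* instance name, e.g. "XCBR1" (a breaker) */
    bool pos_closed;     /* position attribute: breaker closed?     */
    bool local_control;  /* local vs. remote control status         */
} logical_node;

/* Build a breaker node with illustrative defaults. A client would
 * read pos_closed the same way regardless of the device vendor —
 * that uniformity is the interoperability win the standard targets. */
logical_node make_breaker_node(const char *name)
{
    logical_node ln;
    strncpy(ln.name, name, sizeof ln.name - 1);
    ln.name[sizeof ln.name - 1] = '\0';
    ln.pos_closed = true;       /* assume breaker starts closed */
    ln.local_control = false;   /* remote control by default    */
    return ln;
}
```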


X. A FEW WORDS ABOUT DA CYBER SECURITY ISSUES

There is a significant challenge in determining how to protect DA from cyber security threats when integrating DA’s information and operations technologies (IT and OT). However, information on real cyber security incidents in DA systems is still rare. Security of intelligent electronic devices (IEDs) is the subject of IEEE Standard 1686, which defines the requirements for IEDs, e.g., for user authentication or security event logging. It is important to mention that adherence to the standard does not by itself ensure adequate cyber security; the security requirements should be tailored for every DA application, starting with IEEE 1686 as a basis. The solution requires rules and governance around data handling that surpass IT and OT requirements individually. Grappling with DA (and smart grid in general) security issues will require measures such as:

• Creating a foundation for the utility’s security policies, through an intuitive set of policies and standards
• Defining enterprise-wide cyber security controls
• Continually assessing the vulnerability of DA system components
• Identifying and addressing changes in regulations and standards
• Establishing measures to monitor security performance, including periodic vulnerability assessments

DA will be a revolutionary change to distribution systems that is occurring in an evolutionary manner, due to the tremendous investment in legacy systems and the rate of technological progress.

XI. COPING WITH LEGACY SYSTEMS

The incremental and evolving nature of DA applications may come head-on with legacy systems. Phased approaches in many DA deployments may entail upgrades to existing legacy systems or integration with them. Many of these legacy systems could even be homegrown applications with little documentation. Such systems could be of limited capability or may have been built for a completely different purpose than what they are now asked to serve. The major challenge is in the communications architecture.




Example of Legacy System Serial Communication Protocols:

• Multiple protocol types, data formats, data addressing, etc.
• Specialized point-to-point links to IEDs
• Limited capabilities
• Difficult or no access points for other applications
• Communication paths must be re-configured when new devices or applications are added

Finally, there exists a large variety and difference in age of primary equipment, which is not prepared for advanced automation and communication; e.g., many grid devices are not prepared for remote control, or a fuse is used for protection purposes instead of a circuit breaker. Therefore, the smart grid policy of the utility companies requires an individual migration and modernization strategy for a future-oriented distribution automation and protection solution.

XII. ANALYSIS NEEDED TO DEAL WITH NEW DISTRIBUTION SYSTEMS

There are a number of engineering analyses associated with DA, a sample of which are:

• Overcurrent protection issues (e.g., reverse power flow can negatively impact protection coordination)
• Impacts on surge arresters and temporary overvoltage (TOV) issues that can impact distribution and customer equipment
• Voltage rise and regulation issues (especially for long, lightly-loaded feeders)
• Interaction with capacitor banks, LTCs and line voltage regulators
• Reactive power fluctuation issues
• Power quality issues (e.g., harmonic injection)

A layered intelligence approach can help accommodate a gradual rollout of DA components while also realizing immediate benefits.

Legacy:

• Hardwired signals for relay-to-relay links
• N*(N–1)/2 links for N relays; for a large number of devices, the number of connections approaches N²/2
• Filtering on links to prevent false trips
• Reprogramming can require rewiring

IEC 61850:

• Number of links for N relays is N, shared with SCADA
• Relays send their status to all other relays at once using GOOSE
• Status exchanged continuously

Figure 5: Example Comparison Between Legacy and IEC 61850 Architectures
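The wiring advantage is simple combinatorics: a hardwired scheme needs one dedicated link per relay pair, while a station LAN needs one network drop per relay. A quick sketch of the two counts:

```c
/* Hardwired legacy relaying: every relay pair gets its own link,
 * i.e. N*(N-1)/2 connections — roughly N^2/2 for large N. */
unsigned legacy_links(unsigned n_relays)
{
    return n_relays * (n_relays - 1) / 2;
}

/* IEC 61850 station LAN: one network drop per relay; status is
 * published to all peers over GOOSE on the shared network. */
unsigned lan_links(unsigned n_relays)
{
    return n_relays;
}
```

For a 20-relay substation that is 190 dedicated links versus 20 network drops, which is why installation, filtering, and rewiring costs fall so sharply.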



• Voltage unbalance due to significant proliferation of single-phase (residential-type) PV
• Power factor issues, which may impact feeder losses

XIII. DA ASSET MANAGEMENT APPLICATIONS

Distribution Automation (DA) provides utilities the means and capabilities to track the performance of distribution assets (transformers, cables, breakers, etc.) in a way that has not been possible before. Equipment loading information, operation history, and event characteristics can all provide intelligence on the condition of assets. Such intelligence can be utilized by asset owners and operators to make more educated and smarter decisions about maintenance programs and asset replacement strategies. This is especially valuable for those assets which have been in operation for many decades (a.k.a. aging assets). Specifically, some of the Asset Management Systems which are empowered by DA are:

• Advanced equipment diagnostics to characterize the condition of distribution system assets
• Advanced methods to determine the remaining lifetime of equipment
• Advanced testing techniques to support asset condition assessments
• Decision-making tools based on equipment condition for system configuration and management

XIV. CONCLUDING REMARKS

Developing a smart grid through improved DA can improve network efficiency and increase return on investment for electricity distribution utilities. Many DA technologies are now commercially available from a large number of manufacturers. It is the opinion of this writer that in five years, by the year 2018, the global DA market can be north of $25 billion, with Asia Pacific accounting for about a third of that. In the USA, DA market growth will be driven by the need to modernize an aging infrastructure and the need for increasing reliability. In emerging economies, which are encouraging investment in energy grids, intelligent hardware and system efficiency improvements will be the more prevalent drivers.

DA applications have been successfully deployed in many areas, such as:

• Asset management applications based on the condition of the distribution assets
• Peak load management in the case of emergencies
• System restoration after failure (e.g., fault location, incipient fault detection (and location), and automated switching on feeders)
• System efficiency improvements (e.g., detecting losses, including non-technical losses, and voltage/var control to reduce losses)

Finally, DA has become more important in facilitating the integration of renewable distributed generation. Through improving energy efficiency, DA can provide opportunities for the reduction of greenhouse gases in electricity generation.

For Part 1 of this Article - click here.
For Part 2 of this Article - click here.

About the Author

Nicholas is a Vice President at Quanta Technology, where he leads the Asset Management practice area. He and his team help utilities better manage and modernize their assets at lower total lifecycle cost. Before Quanta Technology, Nicholas was at Accenture, the Electric Power Research Institute (EPRI), and Westinghouse. At Accenture he served as a Senior Manager in Smart Grid Services, and developed a number of smart grid analytics and tools for grid modernization. At EPRI, he was a Senior Technical Executive and served as the Director of Transmission and Grid Operations and Planning. At Westinghouse, he was a Westinghouse Fellow Engineer (Westinghouse’s highest technical position) and managed a number of engineering groups. Nicholas is the Chairman of the IEEE Power and Energy Society and Power Electronics Society chapters in San Diego, CA. He has over 60 technical papers and articles. He served as the General Chair and Technical Session Coordinator of the IEEE Power & Energy Society (PES) 2012 General Meeting. Nicholas was the principal investigator into most of the major grid blackouts, both in the USA and abroad. He is an expert in dealing with aging assets and the effects of extreme weather on power systems.





World’s lowest power capacitive sensors with auto-calibration NXP is a leader in low power capacitance touch sensors, which work based on the fact that the human body can serve as one of the capacitive plates in parallel to the second plate, connected to the input of the NXP capacitive sensor device. Thanks to a patented auto-calibration technology, the capacitive sensors can detect changes in capacitance and continually adjust to the environment. Things such as dirt, humidity, freezing temperatures, or damage to the electrode do not affect the device function. The rise of touch sensors in modern electronics has become a worldwide phenomenon, and with NXP’s low power capacitive sensors it’s never been easier to create the future.

Learn more at: touch.interfacechips.com



Wiser Words - Part 6

Back to the Present - Part 7

eManual

