
Featured Product:

ARM’s Cortex-A9 delivers up to 8000 DMIPS within a 250 mW power budget.

green design • wireless communications: managing multiple RF chains • consumer electronics: streaming video to mobile devices • portable power: reducing power with dynamic voltage scaling • CEO Interview: Chris Rowen, Tensilica

October 2007

An RTC Group Publication

MSP430 Goes Wireless


Complete Development Tool

ONLY $49

Designing with the world’s lowest power MCU just got even easier. Wirelessly enable your design with the eZ430-RF2500, the world’s smallest low-power wireless development tool. At only $49, the tool includes a USB emulator to program and debug your application in-system and two 2.4-GHz wireless target boards featuring the highly integrated MSP430F2274 ultra-low-power MCU. All the required software is included, such as a complete Integrated Development Environment and SimpliciTI™, a proprietary low-power star network stack, enabling robust wireless networks out of the box. The MSP430F22x4 combines 16-MIPS performance with a 200-ksps 10-bit ADC and 2 op amps and is paired with the CC2500 multi-channel RF transceiver designed for low-power wireless applications.

MCU: MSP430F2274
  Flash: 32 KB
  RAM: 1 KB
  Peripherals: 10-bit ADC, 2 op amps
  Standby Current: 0.7 μA

RF Transceiver: CC2500
  Frequency: 2.4 GHz
  Sensitivity: –101 dBm
  Max Data Rate: 500 kbps
  RX Current: 13.3 mA
  TX Current: 21.2 mA

Coming Soon! Need to extend your network? Additional eZ430-RF2500T target boards can be ordered for $20 each.

Order Today! 800.477.8924, ext. 4037

Technology for Innovators, the red/black banner and SimpliciTI are trademarks of Texas Instruments.


contents

departments
5 editorial letter
6 dave’s two cents
8 industry news
12 analysts’ pages
36 product feature
38 design idea
40 products for designers

features
16 The Electronics Industry: The Power to Change
17 green design (cover feature)
20 Mobile Video: Managing Multiple RF Chains (wireless communications)
22 mobile video
26 Streaming Video to Mobile Devices (consumer electronics)
30 streaming video
32 Reducing Power with Dynamic Voltage Scaling, Alexander Friebe, Texas Instruments (portable power)
35 distributed power
46 CEO Interview: Chris Rowen, Tensilica
48 second opinion: Chris Eddington

contributors: Rob Baxter, Nellymoser; Yves Cognet, Symmetricom; John East, Actel Corporation

new web exclusive article
Challenges in the development of body-worn wireless sensor nodes
Bert Gyselinckx and Els Parton, IMEC


team

editorial team
Editorial Director: Warren Andrews
Editor-in-Chief: John Donovan
Managing Editor: Marina Tringali
Copy Editor: Rochelle Cohn

art and media team
Creative Director: Jason Van Dorn
Art Director: Kirsten T. Wyatt
Graphic Designer: Christopher Saucier
Director of Web Development: Marke Hallowell
Web Developer: Brian Hubbell

management team
Associate Publisher: Marina Tringali
Product Marketing Manager (acting): Aaron Foellmi
Western Regional Sales Manager: Stacy Gandre
Western Regional Sales Manager: Lauren Hintze
Eastern Regional Sales Manager: Nancy Vanderslice
Inside Sales Manager: Carrie Bowers
Circulation: Shannon McNichols

executive management
Chief Executive Officer: John Reardon
Vice President: Cindy Hickson
Vice President of Finance: Cindy Muir
Director of Corporate Marketing: Aaron Foellmi
Director of Art and Media: Jason Van Dorn

portable design advisory council
Mark Davidson, National Semiconductor
Doug Grant, Analog Devices, Inc.
Dave Heacock, Texas Instruments
Kazuyoshi Yamada, NEC America

corporate office The RTC Group 905 Calle Amanecer, Suite 250 San Clemente, CA 92673 Phone 949.226.2000 Fax 949.226.2050

For reprints contact: Marina Tringali. Published by The RTC Group. Copyright 2007, The RTC Group. Printed in the United States. All rights reserved. All related graphics are trademarks of The RTC Group. All other brand and product names are the property of their holders. Periodicals postage paid at San Clemente, CA 92673. Postmaster: send changes of address to: Portable Design, 905 Calle Amanecer, Suite 250, San Clemente, CA 92673. Portable Design (ISSN 1086-1300) is published monthly by The RTC Group, 905 Calle Amanecer, Suite 250, San Clemente, CA 92673. Telephone 949-226-2000; Fax 949-226-2050.


editorial letter


A tattered version of me is back in the office after attending five trade shows in three weeks—the Intel Developer Forum (IDF) in San Francisco; the Power Architecture Developer Conference (PADC) and the Silicon Hills Summit in Austin; the ARM Developers’ Conference; and our very own Portable Design Conference and Exhibition in Santa Clara. The takeaway from all of these—in addition to thumb drives and stacks of press kits—is that portable is hot, and everyone wants a piece of the action.

The Portable Design conference was more of a reunion than a new event, since we put on this show very successfully until the Internet bubble burst in 2001, the tech market tanked and money for conferences was suddenly diverted to outplacement. Now technology is again driving the economy, but this time it’s the $143B consumer electronics market. This time it’s not based on the irrational exuberance of hot money chasing two guys with a great rap and a foosball machine—but no business plan. This time it’s driven by highly innovative engineers designing real products that can barely keep up with the exploding consumer demand for them. And the hottest products are the ones that our readers are designing—portable, wireless devices that delight, entertain and connect people around the world.

So far I’ve lived through the semiconductor revolution, the PC revolution and the Internet revolution; I’ve been privileged to work in and around Silicon Valley through all of them. The Valley—not to mention Austin, Boston, San Diego, Phoenix, Portland and many others—is again abuzz with startup energy: smart money backing smart people making smart products. The Consumer Electronics Association estimates that as much as half of the electronics industry is focused on portable designs. What we’re seeing now is the Portable Revolution. It’s creative, disruptive and just plain fun.

At IDF Intel repeatedly reaffirmed its commitment to the portable market.
Coming to the party 20 years after ARM locked up the embedded space, Intel is creating a new category of products called Mobile Internet Devices (MIDs), which look a lot like wide iPhones—except they aren’t phones (in version 1.0 anyway). According to Anand Chandrasekher, Intel SVP and general manager of the Ultra Mobility Group, “In the first half of 2008, Intel will [deliver]… our first platform designed from the ground up for MIDs and UMPCs – codenamed Menlow, which will deliver 10x lower power compared

to the first UMPCs in the market.” The other big deal about Menlow is that it comes with Wi-Fi, 3G and WiMAX included. Did I mention the Wireless Revolution? Maybe MIDs will catch on; Web browsing on cell phones does leave a lot to be desired, but who wants to carry a MID and a smartphone?

The Portable Revolution
john donovan, editor-in-chief

UMPCs, for their part, never really took off, filling as they do a niche between notebooks and high-end handsets with the advantages of neither. The recent demise of the Palm Foleo may spell the demise of this category. Still, Menlow could power a lot of other portable designs, so we’ll have to wait for the data sheet and samples.

ARM wasted no time returning fire at ARM DevCon, announcing its new Cortex-A9 MPCore multicore processor, which ARM claims can deliver up to 8000 DMIPS while living within a 250 mW power budget. For 2000 DMIPS of performance when designed in a TSMC 65 nm G process, the core logic requires less than 1.5 mm² of silicon. While Portable Design is waiting to see a data sheet, that’s got to translate to low power.

Meanwhile, the Power architecture camp was in Austin stressing that their instruction set architecture—embodied in the new Power ISA version 2.05 specification and the forthcoming Power Architecture Platform Requirements for embedded systems (ePAPR) spec—isn’t all about speed (read: high power); it also “allows microarchitectures that cover a spectrum of requirements.” While the new Power6 core is a screamer, the N200 is aimed at low-power portable applications. So far Freescale is the lonely flag carrier for the Power camp in the embedded space, but aims to change that. Stay tuned.

So there you have it: smart money backing smart people making smart products. The Portable Revolution—creative, disruptive and just plain fun. If you can survive all the fall trade shows.

dave’s two cents


One of today’s catch phrases is, “It’s a digital world.” I wonder how true this really is. Is every measurement quantized? Do all processes operate in discrete time? This might explain some human behavior. Maybe we all have different foreground and background loop strategies and time bases. For me, individuals who suspend their walking algorithm to poll their surroundings at the end of an escalator can cause me to execute a “low tolerance” subroutine. This also happens when an individual steps from a revolving door and immediately begins to execute a high-priority sun shade service routine that seems to take up all their bandwidth, leaving nothing to service the “continue walking” routine. Other individuals have timer-service routines that cause them to stop walking periodically to service a timer-driven scan of their surroundings. If my clock was synchronized to theirs, then maybe I would not run into their backs at the airport or shopping mall.

dave’s two cents on...

It’s a Digital World … or Is It?

Last month, Darnell held the fourth annual Digital Power Forum, DPF ’07. During this conference a round table discussion ensued about whether or not digitally controlled power would provide more efficient power solutions. The moderator asked, “When would digital power solutions appear in portable equipment?” The answer depends on the definition of digital power.

In the power industry digital power remains a “hot” topic, but the definition of “digital power” depends on the solution provider. If digital power is digital control of the power on/off switch, then digital power has been part of portable from the very beginning. If digital power means digital configuration and reconfiguration, then digital power is already there for chargers, backlighting and processor power. If digital power is all of the above, plus digital loop compensation, then that is a good question.

As all the power engineers working in the portable industry know, portable applications are very demanding. Each circuit added to the solution must provide value to the end customer. So what would switching from an analog solution to a digital solution provide? Analog power


solutions in portable applications are already digitally managed by onboard microcontrollers. These same power solutions are already configured and reconfigured by these same microcontrollers. So what can digital bring to the game?

A presenter from iWatt at DPF ’07 described an adapter solution that uses digital compensation to control a flyback converter from the primary side for AC adapter applications. The solution’s particular power level was well suited for cell phone charger adapters. The solution’s digital loop compensation helped reduce the number of external components, including elimination of opto-isolators. The controller operated in multiple modes, including constant voltage (CV) and constant current (CC). Digital compensation reduced the component count and cost of an adapter solution while maintaining acceptable performance.

How about digital loop compensation between battery and load? Today, the focus of full digital power has mostly been on stationary applications. However, many of the same benefits exist for portable. In order to close the loop digitally, the device must have a method for rapidly determining the deviation of the output from the desired voltage. The device must rapidly compute the response needed to bring the output to the required value. Finally, the device must have the PWM resolution necessary to output the proper duty cycle. If the device meets all of these requirements, it also should have the ability to provide more than just on and off power control.

Digitally controlled power provides new knobs that can compensate for the numerous power states and input voltages common in portable applications. Just as in the adapter, this can result in reduced component count and cost. Savings can translate into lower prices for customers or higher-capacity batteries. Digital power dominating portable applications is probably several years away. One reason may be the quiescent current for digital solutions.
However, given the benefit of more and finer control knobs, the power industry will find a way.

So, “Is it a digital world?” Maybe not yet. It may be that we humans just occasionally act like we are digitally controlled. Then again, maybe we really are and some day we will figure out how to sync up and share common execution routines.

For my two cents, full digital power in portable applications is getting closer. The benefits will allow portable devices to compensate for both the variety of power input sources and the numerous load power states. As far as escalators and revolving doors, I would be satisfied if these devices would digitally interface with people to do a better job clearing the exit area.

Dave Freeman, Texas Instruments

Intersil Handheld Products High Performance Analog

We’re Hip to Handheld.

Improve your performance in portable media players with Intersil’s high-performance analog ICs.

Analog Mixed Signal: Amplifiers, DCPs, Light Sensors, Real-Time Clocks, RS-232 Interface, Sub-Ohm Analog Switches, Switches/MUXes, Video Drivers, Voltage References

Go to for samples, datasheets and support

Intersil – An industry leader in Switching Regulators and Amplifiers. ©2007 Intersil Americas Inc. All rights reserved. The following are trademarks or service marks owned by Intersil Corporation or one of its subsidiaries, and may be registered in the USA and/or other countries: Intersil (and design) and i (and design).

Power Management: Backlight Drivers, Battery Authentication, Battery Chargers, Fuel Gauges, Integrated FET Regulators, LCD Display Power, LDOs, Memory Power Management, Overvoltage and Overcurrent Protection, Voltage Monitors

news

Live at IDF: Otellini’s Roadmap

News Analysis: Intel Takes Aim at ARM

John Donovan, Editor-in-Chief

Intel CEO Paul Otellini kicked off this year’s Fall Intel Developer Forum declaring that multicore processors represent “the biggest single shift in microprocessor performance in the past decade.” He went on to tout the advantages of Intel’s new multicore processors—clearly designed more with data centers in mind than portable gadgets. Still, he stayed on message that it isn’t performance that matters but “performance per watt.” Intel’s Core 2 Duo, introduced in July, was architected for “energy-efficient performance,” according to Otellini.

One big surprise was an appearance by Phil Schiller, SVP of marketing for Apple. What started out as a tentative relationship has certainly blossomed. As of March, all of Apple’s notebooks will be based on Intel Core 2 Duos. Mac Pro workstations will be based on Xeon processors. Since Apple started using Intel chips in its notebooks, its notebook market share has jumped from 6% to 12%, which certainly inclined Apple toward the Intel camp.

Dual and Quad Cores

Otellini claimed the Core 2 Duo sold five million units in the first 60 days, the fastest ramp ever for an Intel chip. It offers 35-80% better performance than the fastest Pentium with 20% lower power consumption. Intel is offering a $1 million prize to the first OEM/ODM who designs a notebook computer using the new chip.

Quad-core products will start rolling out in November, first for the gaming market. They’ll be branded Core 2 Quad Extreme and will be 70% faster than dual cores, according to Otellini. The Xeon 5100—with one million already shipped since its introduction three months ago—is based on the Core 2 microarchitecture. November will see the introduction of a quad-core Xeon 4300, to be followed in 1Q08 by a low-voltage Xeon. The quad-core Xeon will deliver 50% better performance than its dual-core predecessor with the same power envelope.

Intel’s workhorse Centrino processor will also be getting an update.
The first generation Centrino was introduced in 2003 for notebook computers; the second generation in 2004. The current 3G version is based on Core 2 Duo. The next generation—code named Santa Rosa—


by John Donovan, Editor-in-Chief

will have NAND memory on board (on a separate substrate), providing 2x faster application loading and recovery from hibernation. Otellini showed a prototype wafer holding 80 dies, each containing “the world’s first teraflop computer on a chip.” Intel expects to be in production with the chips within the next five years.

Broadband Gets Airplay

Portable broadband will be “the next inflection point for mobility,” according to Otellini. He’s looking to WiMAX to provide the wireless infrastructure. IEEE 802.16e—mobile WiMAX—has been approved, and carriers are promising to support it. “WiMAX will be five times faster at 1/10 the cost” of EV-DO, according to Gary Forsee, CEO of Sprint Nextel. Intel expects to see add-in cards for it starting later this year, with networks appearing in 2007. In 2008 Intel will include WiMAX in its Centrino chips, with the aim of making it as ubiquitous as Wi-Fi.

The Future Looks Fab

Intel continues to aggressively pursue its process migration timeline, “trying to keep [Gordon Moore] honest.” They have shipped 40 million 65 nm CPUs to date, and are now shipping more 65 nm devices than 90 nm ones. Intel is building three new fabs to produce 45 nm wafers. Fab D1D in Oregon will cost $3 billion, cover 212K square feet and start production in 2H07. Fab 32 in Arizona and Fab 28 in Israel are both under construction. Arizona will start production in 2H07 and Israel 1H08. The total cost for these fabs will run $9 billion and will include 500,000 square feet of clean room facilities. Intel Corporation, Santa Clara, CA. (408) 765-8080. [].

At last month’s IDF, Intel rolled out its vision for portable and ultra-portable devices, taking direct aim at ARM, the 700-lb. gorilla that owns this space. The Intel vision was spelled out in a couple of back-to-back keynotes by David (Dadi) Perlmutter, Intel senior vice president and general manager, Mobility Group, and Anand Chandrasekher, Intel senior vice president and general manager of the Ultra Mobility Group. The key is a new, Intel-invented category of WiMAX-powered products called Mobile Internet Devices (MIDs).

What’s a MID?

So what is a MID? At the Spring IDF in Beijing this April Intel announced the Mobile Internet Device Innovation Alliance “to develop solutions that offer customers a US$500 device that can access the Internet anywhere.” Intel’s initial partners included Asustek Computer, BenQ, Compal Electronics, High Tech Computer (HTC) and Quanta Computer, all Tier 1 and 2 Taiwan OEMs. Not coincidentally, all of these firms make handheld CE devices, but none—with the possible exception of HTC—are ever likely to hit the big time in cell phones. But a new category of devices could give them another chance at the brass ring. That’s just what Intel is proposing.

In his talk, Chandrasekher lost no time taking aim at ARM. Cell phones, he claims, are severely limited in their ability to handle the Internet. Chandrasekher showed a slide documenting the results of tests Intel had done loading Web sites using Intel architecture (IA) chips and a variety of ARM chips. The ARM chips typically displayed “an order of magnitude” more errors than the devices that used IA chips. The Intel platforms performed better, it was explained, because they used “compatible hardware and software” that migrated over from proven PC applications. By contrast, ARM implementations of Flash software on handsets typically occur two years after they appear on the PC,

so porting to “another architecture” must be a very time-consuming task that can be avoided if you stick with the IA architecture. According to Chandrasekher, “In the first half of 2008, Intel will take a major step to deliver what these users are looking for with our first platform designed from the ground up for MIDs and UMPCs—codenamed Menlow, which will deliver 10x lower power compared to the first UMPCs in the market.” The other big deal about Menlow is that it comes with Wi-Fi, 3G and WiMAX included.

MIDs vs. Cell Phones

Assuming all goes as planned, Menlow will no doubt be a signal technical achievement and will finally give Intel a chipset that can compete with ARM’s offerings. But why would anyone buy a MID in addition to their smartphone if they can already surf the Web on their phone—which also serves as a media player, camera, calendar and database, not to mention phone—applications that aren’t planned for MIDs?

Chandrasekher’s answer is that WiMAX will deliver high-speed Internet performance that cell phones can never match. That is certainly true for current 2.5G and 3G phones; WiMAX is far faster than EV-DO Rev 1, which is the best data network U.S. carriers have to date. But while WiMAX is a 4G technology, it is hardly the only one. While it has been scoring success in Japan with NTT DoCoMo, take-up in Europe has been slow: carriers there who are heavily invested in GSM technologies—and who paid through the nose for new spectrum—are reluctant to build entirely new networks to accommodate WiMAX. In the U.S., Sprint has made a major commitment to WiMAX, but they have yet to build their network, and the other carriers will very likely hang back to see how fast they

can amortize the billions the network will cost. In short, Intel may make millions of chips that support WiMAX, but without a widespread WiMAX infrastructure out there, they may be pushing on a string for a while.

Also, cell phones are improving at a spectacular rate and will continue to do so. Delivering a better Internet “experience” is a high priority for handset makers, and there is no obvious technical reason why they can’t deliver the performance that Intel promises with their MIDs, perhaps even before the first MIDs roll out. The prototype MID that Chandrasekher demonstrated during his talk looks for all the world like a wide iPhone or a Sony PSP. The iPhone is probably the start of a migration to software-based user interfaces in handhelds that will look a lot like the planned MIDs. By the time MIDs come online, the delta between them and top-end handsets may be small enough to prevent this category from ever really taking off.

That would also seem to be the case with the ultra-mobile PC (UMPC), a category that has yet to gain any real market traction. UMPCs fill a niche between notebooks and high-end handsets, with the advantages of neither. The recent demise of the Palm Foleo may spell the demise of this category.

Sticky Sockets

If Intel indeed produces chips that can compete with the ARM11 and Cortex-A8, can they sell them? The money play is in handsets, but the ARM lock on these sockets will be very hard to break. ARM has sold billions of its processors into every conceivable type of portable device. They have long-standing licensing relations with almost every major handset manufacturer, plus a large ecosystem of software and service providers. There would have to be a very compelling performance delta to convince handset makers to completely redesign their product lines around a new architecture. Still, Intel is a market maker, and one with deep pockets and a lot of very smart, aggressive people.
ARM would do well to be paranoid, though perhaps not overly so as yet. At IDF Intel announced a dramatic roadmap for ultra-portable products. Now let’s see what ARM has up its sleeve. The gauntlet has

been thrown down and the battle finally joined in earnest. ARM Inc., Sunnyvale, CA. (408) 734-5600. [].

MediaTek to Acquire Analog Devices Cellular Handset Radio and Baseband Chipset Operations

MediaTek, Inc. has announced that it has signed a definitive agreement to acquire the assets related to the Analog Devices, Inc. Othello radio and SoftFone baseband chipset product lines, as well as certain cellular handset baseband support operations, for approximately $350 million in cash. These product lines represented approximately $230 million in revenue for ADI, based on fiscal year 2006 financial results.

Through this acquisition, MediaTek’s wireless handset division gains: a global team of approximately 400 experienced product development and customer support professionals; an established customer base around the world; new radio transceiver and baseband chipset products, including GSM, GPRS, EDGE, WCDMA and TD-SCDMA chipsets to further strengthen its existing portfolio; and key patents and intellectual property to increase MediaTek’s competitiveness.

ADI plans to continue to invest in the wireless handset market by focusing on developing high-performance analog, micro-electromechanical systems (MEMS) and programmable digital signal processing (DSP) products that enhance the audio, video, connectivity and power efficiency capabilities in a range of wireless multimedia devices. According to Jerald G. Fishman, ADI’s president and CEO, “This transaction will allow ADI to focus our resources in areas where our signal processing expertise can provide unique capabilities and earn superior returns. In addition, it unlocks the value of the Othello and SoftFone operations by creating the scale needed to support the R&D investment required for sustainable, long-term success.”

The boards of directors of both companies have approved the transaction, which is expected to close near the end of 2007. MediaTek, Inc., HsinChu, Taiwan. +886-3-567-0766. [].


news

Intel Inks Agreement with ARC

ARC International has announced that it has signed a new multi-year, royalty bearing licensing agreement with Intel Corporation. The agreement covers several ARC products and was completed in the second half of 2007. Key benefits ARC’s configurable solutions bring to Intel’s system-on-chip (SoC) developers include the ability to reduce power consumption and increase a chipset’s performance by adding custom instruction extensions. Additionally, ARC provides comprehensive support and training to Intel development centers in North America.

Jim McGregor, In-Stat’s principal analyst and research director for enabling technologies, said, “Intel is renowned for driving the direction of technology through the development of leading-edge semiconductors and platform solutions. By combining ARC processor-based wireless broadband solutions with low-power x86 processors, Intel is underscoring its commitment to achieving flexible and power-efficient solutions through heterogeneous multicore platforms.”

McGregor added, “Intel has become a leader in wireless broadband communications through its leadership in Wi-Fi and WiMAX technology for mobile applications. In-Stat predicts that the rapid adoption of this technology could lead to close to 480 million mobile devices shipping in personal and professional applications by 2010, including mobile PCs, mobile Internet devices (MIDs), ultra-mobile PCs (UMPCs), portable media players (PMPs), digital handsets and other consumer electronic devices.”

ARC International, San Jose, CA. (408) 437-3400. [].

S3 Delivers 65 nm Mixed-Signal Converter IP

Silicon & Software Systems (S3) has announced the immediate availability of silicon



results for its portfolio of high-performance, mixed-signal converter IP at the 65 nm technology node. S3’s extensive IP portfolio includes AFEs, ADCs, DACs and associated PLL components, optimized for integration into system ICs targeting consumer, wireless, network and digital broadcast applications. End markets served by existing S3 clients include WLAN, WiMAX, digital broadcast standards (e.g., DVB-T, DVB-S, DVB-C and DVB-H), high-definition (HD) video applications and power-line communications. S3 tapes out complete solutions, integrating its IP into single-chip systems; leveraging this extensive design experience, the company has produced more than 35 SoC designs at the 90 nm and 65 nm process technology nodes. Silicon & Software Systems US Inc., San Jose, CA. (408) 236-7900. [].

Virtutech Simics to Support IBM Mambo Processor Model

Virtutech, Inc. has announced that it is creating an API to incorporate processor models from IBM’s simulator, Mambo, into Virtutech’s full-system simulator, Simics. Mambo is a full-system simulation environment that has been widely used in the exploration of IBM POWER Architecture processor and system design. By incorporating its Mambo processor model into Simics, IBM enables its developers to leverage the Simics virtualized software development platform to identify design issues that may affect functionality and performance far earlier in the development cycle, when they are much less costly to correct.

The growth in complexity of system components has placed a new emphasis on the simulation of system behavior. As interactions among processors and components become increasingly important in system design, access to the full-system simulation capabilities of the Virtutech Simics platform has become an indispensable tool for the evaluation of new systems. Virtualized software development provides semiconductor manufacturers with a simulated environment

where everything is deterministic, everything can be seen, everything can be controlled, and software developers can perform “what if” scenarios without real-world hardware constraints.

About Mambo

Mambo is a system simulator developed by IBM Research Labs to meet the needs of IBM hardware and software designers for fast, accurate, execution-driven simulation of complete systems, incorporating parameterized architectural models. This environment enables the development and tuning of production-level operating systems, compilers and critical software support well in advance of hardware availability, which can significantly shorten the critical path of system development. Virtutech, Inc., San Jose, CA. (408) 392-9150. [].

Wind River and Freescale Introduce Multiprocessing Solution for the Freescale MPC8572 Processor

Wind River Systems, Inc. and Freescale Semiconductor demonstrated an end-to-end multicore solution optimized for Freescale’s MPC8572 dual-core processor at the 2007 Power Architecture Developer Conference, hosted by Power.org. Wind River optimized the solution by tailoring its industry-leading VxWorks real-time operating system (RTOS) specifically to support symmetric multiprocessing (SMP).

Multicore solutions from Freescale and Wind River are designed to help remove the complexity associated with developing a software solution on a multicore silicon system while taking full advantage of the performance benefits of multicore hardware components, ultimately enabling customers to improve performance, lower costs and significantly reduce time-to-market. Today’s demonstration reinforces each company’s commitment to developing technologies that will drive standardization and widespread adoption of multicore architecture and solutions in the device market.

The MPC8572 family of processors is designed to offer clock speeds from 1.2 GHz up to 1.5 GHz, combining two powerful e500 processor cores, enhanced peripherals and high-speed interconnect technology to balance processor performance with I/O system throughput. Based on Freescale’s 90 nm silicon-on-insulator (SOI) copper interconnect process technology, the MPC8572 is designed to deliver higher performance with lower power dissipation. The MPC8572 processor provides a significant performance increase and represents the next step in continuous innovation from the popular PowerQUICC family of embedded processors. With uncompromising integration, the MPC8572 platform builds on the performance of Power Architecture technology and adds advanced features to enhance deep packet inspection, classification and security acceleration.

To meet the needs of developers implementing Freescale’s MPC8572-based designs, Wind River provides a complete solution of products that support multiprocessing and multicore technologies, including the Wind River Workbench Development Suite and an early access release of Workbench On-Chip Debugging Edition. Software optimization for multicore devices requires a different design paradigm with regard to application interaction, performance and concurrency. Wind River’s technology leadership in debugging multicore processors with its Workbench On-Chip Debugging solution enables customers to quickly identify problems between the hardware and software using a patent-pending multicore debugging technology. As device manufacturers implement their initial multicore designs, development and on-chip debug tools from Wind River will reduce time spent in prototyping and debugging, enabling faster time-to-market. Freescale Semiconductor, Austin, TX. (800) 521-6274. []. Wind River Systems, Inc., Alameda, CA. (800) 872-4977. [].

New Research Study Presents Comprehensive View of Power Architecture Processor Market Opportunity

Power.org announced the completion of the most comprehensive research study ever undertaken on Power Architecture

processors and the diverse markets they serve, The Power Architecture Market Model. The research, conducted by IDC, features a unique data analysis tool to help users identify and gauge emerging growth opportunities for the Power Architecture technology platform. Focusing on the embedded processor space, The Power Architecture Market Model provides market share data in more than 70 categories of embedded electronics and forecasts both current and future opportunities for Power Architecture processors through 2011. IDC built an innovative query tool into the market model that lets users customize their views of the research to explore target markets and technical features most relevant to their companies. “We’re committed to informing our members of developments affecting Power Architecture in the marketplace and to identifying the opportunities that will help them grow their businesses,” said Fawzi Behmann, chairperson of the Marketing Committee. “The report, The Power Architecture Market Model, is the first in a series of similar programs that will help members share insights and foster the collaborative climate needed to grow the architecture.” The report showed very strong market share leadership for the Power Architecture technology platform in several market categories, including game consoles, wireless infrastructure, printers, industrial control equipment, medical equipment, Ethernet switches and WiMAX. Further details will be released after the full report is distributed to members at the end of September. “In the course of our research, IDC was given unprecedented access to every major chip vendor in the Power Architecture community,” said Mario Morales, vice president of semiconductor research at IDC. “We combined these companies’ data sets with IDC’s extensive research on embedded device categories and came up with what we believe is the most complete picture to date of the market for Power Architecture processors.”

Previously, companies invested in Power Architecture technology commissioned custom research to gauge the total available market for the technology platform. Those studies provided only a partial or extrapolated view of the market, largely because competitors’ data were hard to come by. For the Power Architecture Market Model, Power.org served as a unifying agent and a trusted collection point for competitors to contribute their companies’ privileged information. These companies’ proprietary data sets were aggregated and filtered to provide a direct, reliable assessment of Power Architecture market size. “The accuracy and breadth of perspective presented in this research study would not have been possible without a collaborative forum such as Power.org,” said Nelda Currah, chair of the Power Brand Advisory Council, which developed the Power Architecture Market Model. “Power.org provided the means for competitors to work together on Power Architecture market sizing, because they knew there was this mediating organization that could be trusted to safeguard the sensitive information they contributed. As a result, the entire Power Architecture community benefits from a new, holistic view of the technology platform.” Power.org is providing company-wide licenses for the Power Architecture Market Model to qualifying members at the end of September. Power.org, Austin, TX. (805) 341-7269. [].



analysts’ pages

WiMAX Gains Serious Momentum as Trials Lead to Deployments

The global telecommunications industry is on the cusp of major change, and operators are approaching critical decisions about their 4G strategies, as mobile WiMAX (802.16e) starts to move from trials and pilots to the first real-world WiMAX network deployments. As described in a new study from ABI Research, mobile operators and other service providers are planning mobile WiMAX networks all over the world, mainly in the 2.5 GHz and 3.5 GHz bands. “The mobile wireless industry is in a state of major change as mobile operators decide which IP-OFDMA path they will take for their 4G networks,” says principal mobile broadband analyst Philip Solis. “The new and unproven (on a large commercial scale) mobile WiMAX has positioned itself against the potential Goliath that LTE (Long Term Evolution) is expected to become.” The research forecasts substantial numbers of WiMAX subscribers worldwide: more than 95 million using CPE devices by 2012, and almost 200 million using mobile devices, with some overlap between the two groups. Solis points out that while WiMAX equipment interoperability certification timelines have slipped somewhat, and LTE benefits from having evolved out of the widely deployed GSM technology, WiMAX has at least a two-year head start in reaching the market. The major semiconductor and equipment makers, with the exception of Qualcomm and Ericsson, are staking out their positions for this emerging sector, while operators’ enthusiasm, led by Sprint’s and Clearwire’s firm commitments in the United States, is rising sharply. Vodafone is looking to WiMAX for some of its newer markets such as the Middle East and Eastern Europe; BT and Telecom Italia Mobile are also showing interest. And ABI Research understands that another as yet unnamed “major European mobile operator” is “seriously considering WiMAX.” Meanwhile, amid this increasing momentum, chipset companies are positioning them-



selves to support a wide variety of device types beyond the traditional handsets and laptops, including UMPCs, mobile Internet devices and consumer electronics products such as portable game devices, portable media players and imaging devices. ABI Research, Oyster Bay, NY. (516) 624-2500. [].

Wireless 4G Technology Beginning to Shape Up

Although an official definition of wireless 4G technology will not be released until the 2008/2009 timeframe in the form of the ITU’s IMT-Advanced requirements, there are already clear contenders for the designation, reports In-Stat. The primary 4G technologies of the future are expected to be Long Term Evolution (LTE), Ultra Mobile Broadband (UMB) and IEEE 802.16m WiMAX, the high-tech market research firm says. “Companies are extremely uncomfortable talking about ‘4G’ technologies, since the ITU has not defined 4G yet,” says Gemma Tedesco, In-Stat analyst. “However, each of the contending 4G technologies has a cheerleader, with Ericsson touting LTE, Qualcomm preferring UMB and Intel touting 802.16m WiMAX.” Recent research by In-Stat found the following:
• Two widely expected requirements for 4G technologies are that they be OFDMA-based and that they support 100 Mbits/s for wide area mobile applications.
• With the dominant worldwide technology currently being GSM/EDGE, and HSPA and EV-DO handsets not expected to be dominant until 2012, 4G technology rollouts will most likely start in the 2010-2012 timeframe.
• It is widely believed that mobile operators will initially deploy 4G very slowly, relying on their EV-DO or HSPA networks to provide for more ubiquitous coverage.
• Drivers of LTE, UMB and 802.16m WiMAX adoption will include the following: the re-allocation of older spectrum for 4G technologies; the resolution of any WiMAX IPR issues; the creation of FDD profiles for 802.16e WiMAX; the uptake rate of 802.16e in Mobile PCs; the uptake rate of 3G cellular in Mobile PCs; the continued evolution of the mobile handset; and an increase in the uptake rate of wireless broadband technologies into portable CE devices.
• Realistically, initial implementations of LTE, UMB and 802.16m WiMAX may fall short of throughput and other expectations, with later enhancements, or even some type of technology combination, actually bringing real 4G to the table.
In-Stat, Scottsdale, AZ. (480) 483-4440. [].

iSuppli Reduces 2007 Semiconductor Forecast—but Sees Reasons for Optimism

iSuppli Corp. has reduced its forecast of global semiconductor revenue growth in 2007 to 3.5 percent, down from its previous prediction of a 6 percent rise. Ironically, the downward revision comes at a time when chip revenue is up, the memory industry is improving and the outlook for electronic equipment markets is on the rise. However, these stronger conditions in the second half of 2007 will be insufficient to completely offset the impact of the first half’s weakness, spurring iSuppli’s forecast downgrade. Global semiconductor revenue now is expected to rise to $269.9 billion in 2007, up 3.5 percent from $260.6 billion in 2006. iSuppli issued its previous 6 percent annual growth forecast in June. The accompanying table presents iSuppli’s annual global semiconductor forecast. Global semiconductor revenue declined by 6 percent in the first half of 2007 compared to the second half of 2006, limiting the full-year market growth potential.

Second-Half Rally
However, the second half is bringing a revival of growth, one that springs not only from the normal year-end seasonal strength, but also from a surge in memory IC prices and revenue and a stronger end-equipment market. Global semiconductor revenue will rise by 10 percent in the second half compared to the first—marking a major turnaround in market conditions. Semiconductor revenue will rise by 9.8 percent sequentially in the third quarter and by 4 percent in the fourth. Memory IC revenue will rise by an impressive 15 percent in the second half compared to the first as ASP erosion is blunted and the holiday season commences, bringing stronger sales of PCs. With memory accounting for 23 percent of total semiconductor revenue in 2007, this will have a major impact on the overall chip market. DRAM suppliers in the first half had increased manufacturing at a rapid rate, which will cause their bit shipments to rise by a stunning 94 percent in 2007, compared to the industry average of 55 to 60 percent annual growth. This oversupply caused a decline in memory prices that severely impacted the entire semiconductor market. However, DRAM suppliers in the third quarter began slowing production growth, causing pricing to stabilize—and even rise in July. While DRAM pricing has since softened, the market is still much stronger than it was in the first half. After declining by 10 percent and 23.8 percent sequentially in the first and second quarters of 2007 respectively, DRAM revenue will rise at a hefty 20.8 percent rate in the third quarter and will remain

[iSuppli Figure: Global Annual Semiconductor Forecast (Revenue in Millions of U.S. Dollars), 2006 onward.]

Bad Memories
“The major cause of the first-half semiconductor industry weakness was a 13 percent sequential decline in revenue during the period for memory Integrated Circuits (ICs), led by DRAM and NAND-type flash,” said Gary Grandbois, principal analyst with iSuppli. “The memory revenue decline was spurred by a drop in Average Selling Prices (ASPs), which in turn was caused by a glut of parts on the market.”




flat with a marginal 0.2 percent decline in the fourth quarter.

NAND on the Upswing
The NAND flash market recovery has been more dramatic in the third quarter. Prices for NAND are expected to increase in the third quarter, contrasting starkly with the 40 percent decline in per-megabyte prices in the first quarter. Following a 20.6 percent plunge in sequential revenue in the first quarter, the NAND market recovered partially with a 14.7 percent rise in the second quarter. Conditions have improved markedly in the third quarter, with an expected 37.5 percent rise, which will be followed by a modest 6.5 percent increase in the fourth quarter. Due to stable pricing conditions in the second half, the DRAM and NAND markets are expected to grow by 2.5 percent and 15 percent this year, respectively. The resurgence will have legs; with memory revenue growth continuing into 2008, the total semiconductor market is expected to achieve a 9.3 percent revenue expansion next year.

Equipment on the Rise
While the 2007 semiconductor forecast has declined, the outlook for shipments of electronic equipment using those semiconductors has improved. iSuppli has raised its forecast of 2007 electronic equipment revenue growth to 6.8 percent, up from 6 percent before. Of the six major electronic equipment segments, five of them—i.e. data processing, wireless communications, wired communications,




consumer electronics and automotive—have been upgraded by iSuppli. Industrial equipment is the only segment that has not been upgraded. These positive developments are likely to ripple into 2008, with electronic equipment revenue expected to rise by 7 percent for the year, up from the previous forecast of 6.4 percent. iSuppli Corporation, El Segundo, CA. (310) 524-4000. [].
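Sequential growth rates like those above compound multiplicatively rather than adding up. The sketch below chains iSuppli's quarterly NAND figures from this article; note that the separate full-year growth figure (15 percent) is smaller because annual revenue averages a weak first half with a strong second half.

```python
# iSuppli's sequential NAND flash revenue changes for 2007:
# Q1 -20.6%, Q2 +14.7%, Q3 +37.5% (expected), Q4 +6.5% (expected).
quarterly_changes = [-0.206, 0.147, 0.375, 0.065]

level = 1.0  # quarterly revenue indexed to the Q4 2006 level
for change in quarterly_changes:
    level *= 1.0 + change  # sequential growth compounds multiplicatively

# By Q4 2007 the quarterly run rate is roughly a third above Q4 2006,
# even though full-year 2007 revenue grows far less than that.
print(f"Q4 2007 vs. Q4 2006 run rate: {level:.3f}x")
```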

Mobile Broadband-Enabled Consumer Electronics Devices Will Expand in 2008

There is a new class of devices emerging on the horizon: a convergence of everyday consumer electronics and mobile broadband. Consumer electronics increasingly will include Wi-Fi for connectivity within the home and to the Internet; Wi-Fi-enabled portable consumer electronics devices are a bridge to mobile consumer electronics devices. Service providers will offer services to these devices in addition to handsets and laptops. SK Telecom is doing this today for Samsung’s HSDPA-enabled camera, and Sprint Nextel and Clearwire will heavily promote connectivity for a wide range of WiMAX-enabled consumer electronics devices. Phil Solis, principal analyst for ABI Research, says, “The market for cellular-enabled consumer electronics devices will gather momentum in 2008 and 2009 with Qualcomm’s Snapdragon platform. As mobile WiMAX networks increase their coverage, more WiMAX-



only devices will be sold. And between WiMAX network deployments and devices using Qualcomm’s Snapdragon platform or Freescale’s MXC platform, the 2008 to 2009 time period will be critical for the development of this market.” Solis continues, “Qualcomm’s Snapdragon platform should dramatically reduce the cost of including cellular connectivity in consumer electronics devices by bundling wide-area connectivity with short-range wireless technologies and a multimedia processor. Even so, this market is dealing with consumer electronics manufacturers who feel that even the integration of Wi-Fi is nearly cost-prohibitive.” “The benefits outweigh the additional costs, but the challenge for vendors is to sell consumers on those benefits.” By offering all the processing power and connectivity functionality (WPAN, WLAN, WWAN) that a mobile consumer electronics device would need, the Snapdragon platform should be an attractive package for consumer electronics manufacturers. Reference designs should be available by 4Q 2007, and products using the Snapdragon platform should be in the market during the 2008 time frame. Brazil, Mexico and Venezuela have the greatest opportunities for mobile consumer electronics within Latin America, but pale in comparison with the Asia-Pacific and North American regions. ABI Research expects Qualcomm to make several relevant announcements regarding its Snapdragon platform at the 2008 Consumer Electronics Show, at a time when many WiMAX-enabled consumer electronics will be announced as well. ABI Research, Oyster Bay, NY. (516) 624-2500. [].

Bluetooth Market Continues Growth, but Rate Is Slowing

Bluetooth had another successful year in 2006, and it will have continued success in 2007, led by its increasing penetration into mobile phones, reports In-Stat. However, market growth for Bluetooth products is beginning to slow, and it will see some complications arising from integration trends and new Bluetooth standards hitting the market, the high-tech market research firm says. The market for Bluetooth chips is also in flux. “The Bluetooth silicon market is beginning to see some consolidation, as larger silicon vendors add new capabilities, such as Wi-Fi and GPS, to their chip portfolios, either by internal development or acquisition,” says Brian O’Rourke, In-Stat analyst. “The goal is to create combined radio silicon that is being demanded by mobile phone vendors.” Recent research by In-Stat found the following:
• Bluetooth device shipments will grow by 34% in 2007, slowing from the recent past.
• Wireless chip companies are seeking to offer integrated radio chips with Bluetooth, Wi-Fi, GPS and FM.
• New low-power and high-data-rate Bluetooth standards will emerge over the next two years.
• According to recently conducted In-Stat surveys, France, Germany and the UK have the highest percentages of those extremely or very familiar with Bluetooth. Korea and Japan had the lowest percentages, while the U.S. was in the middle.
In-Stat, Scottsdale, AZ. (480) 483-4440. [].

Teardown Analysis of Latest nano Reveals Extensive Component Changes

Apple Inc. calls it the iPod nano—but the latest version of the company’s compact music player introduced last week is virtually a completely new design, reusing almost no components and sporting a bevy of fresh suppliers compared to the previous model, according to a dissection conducted by iSuppli Corp.’s Teardown Analysis service. Component suppliers making their nano product line debut in the latest version include Micron Technology Inc., Dialog Semiconductor GmbH and Intersil Corp., while Synaptics Inc. returns to the platform after an absence. “The changes in components have resulted in significant cost reductions in the nano design, allowing Apple to offer a product that is less expensive to build and that has enhanced features compared to its predecessor,” said Andrew Rassweiler, senior analyst and teardown services manager for iSuppli.

Dropping the BOM
iSuppli’s Teardown Analysis team has dissected the low-priced version of the new nano and has determined the product carries a Bill of Materials (BOM) cost of $58.85 for the 4 Gbyte version and $82.85 for the 8 Gbyte version. iSuppli’s estimate of the new nanos’ BOMs is strictly limited to costs for components and other materials used to construct the product. The estimate does not include costs for manufacturing, software, intellectual property, accessories and packaging. The BOM figures also do not include research and development costs, since such data cannot be derived from a teardown and component analysis. The BOM of the new 4 Gbyte nano is 18.5 percent lower than the $72.24 direct materials cost of the previous version of the nano released in late 2006. The new product has the lowest BOM of any member of the nano line analyzed by iSuppli. The accompanying table presents iSuppli’s summary BOM estimate for the new nanos.

Raising the Margin
The retail price of the 4 Gbyte version is $149, compared to a hardware BOM of $58.85. For the 8 Gbyte version, the retail price is $199, compared to a hardware BOM of $82.85. Apple’s products traditionally have been sold at retail pricing that is about twice the level of their hardware BOM costs, based on iSuppli’s extensive teardown analysis of devices including the iPhone, the iPod shuffle and previous members of the

iPod nano line. This represents a high level compared to most electronic products. For the new nanos, Apple has exceeded even its usual lofty standards.

Out with the Old, and in with the nano
The arrival of new nano semiconductor suppliers—Micron, Dialog and Intersil—and the return of Synaptics has been accompanied by the departure of previous part providers NXP Semiconductors and Cypress Semiconductor Corp. in the latest version of the product. Such wholesale supplier swaps are not unusual for Apple, which frequently switches its component partners. With Apple, it seems, no supplier is safe, and no slot is a given.
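The BOM shares and retail multiples in this teardown follow directly from the dollar figures iSuppli reports. A quick sketch checking the arithmetic (all inputs are the BOM, retail and NAND-content figures quoted in this article):

```python
# iSuppli's figures for the new iPod nanos (U.S. dollars).
bom = {"4GB": 58.85, "8GB": 82.85}       # direct-materials (BOM) cost
retail = {"4GB": 149.00, "8GB": 199.00}  # Apple retail price
flash = {"4GB": 24.00, "8GB": 48.00}     # Micron NAND flash content

for model in ("4GB", "8GB"):
    flash_share = flash[model] / bom[model]  # NAND's slice of the BOM
    multiple = retail[model] / bom[model]    # retail price vs. BOM cost
    print(f"{model}: flash = {flash_share:.1%} of BOM, "
          f"retail = {multiple:.2f}x BOM")
```

This reproduces the article's 40.8 percent and 57.9 percent flash shares, and shows retail multiples above the roughly 2x level iSuppli describes as typical for Apple.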

Direct Materials Cost Estimate of the new iPod nanos (Pricing in U.S. Dollars)*

Component Type                                 4 GByte Model    8 GByte Model
Flash Memory                                   $24.00           $48.00
Core Video Processor/Microprocessor            $8.60            $8.60
Electro mechanicals                            …                …
Misc. Components                               …                …
Power Management IC                            …                …
Video Driver                                   …                …
Mixed Signal Array / Touch Wheel Controller    …                …
Buck Regulators                                …                …
Utility Flash Memory                           …                …





Micron a Big Winner in the new nano
U.S. semiconductor supplier Micron was the most notable addition to the nano. This represents the first time that iSuppli’s Teardown Analysis Service has identified a Micron part in an iPod. In the nano torn down by iSuppli, Micron was the maker of the high-density NAND flash memory that serves as media storage, worth $24 in the 4 Gbyte version of the product and $48 in the 8 Gbyte version. This gave Micron the largest single portion of value in the nano of any supplier at 40.8 percent for the 4 Gbyte version and 57.9 percent for the 8 Gbyte. Apple’s primary suppliers of NAND flash historically have been Korea’s Samsung Electronics Co. Ltd. (which has been the dominant seller), Japan’s Toshiba Corp. and Korea’s Hynix Semiconductor Inc. Micron is last in share position as a supplier to Apple for NAND flash, and only began shipping small quantities during the last year. While this is a major win for Micron, Samsung remains the world’s largest maker of NAND-type flash and is likely to continue to be used as a supplier by Apple, iSuppli believes.

Samsung: Kicking Apps and Taking Names
One of the biggest and most important slots on the nano is the combined core video processor/microprocessor chip in the system, supplied by Samsung. Costing $8.60, the Samsung core IC processor accounts for 14.6 percent of the 4 Gbyte version’s BOM, and 10.4 percent of the 8 Gbyte’s BOM. This is the second time around for Samsung’s core processing IC in the nano line; in the version of the nano released in late 2006, Samsung supplied the core processing IC as well. Samsung also contributed 32 Mbytes of Mobile SDRAM, worth $2.72, or 4.6 percent of the 4 Gbyte BOM and 3.3 percent of the 8 Gbyte BOM.

Big Sales for Little nanos
iSuppli tentatively forecasts that total iPod nano shipments will reach about 23 million units in 2007 and 27.9 million during 2008.

Subtotal - Direct Materials: $58.85 (4 GByte model), $82.85 (8 GByte model)

*Direct Materials Only: Manufacturing, Accessories, IP, Software, R&D, Etc. are Not Included

“Consumers will be interested in buying the nanos due to their enhanced features, mainly video capability and a high-quality display,” said Chris Crotty, senior analyst, consumer electronics for iSuppli. iSuppli Corporation, El Segundo, CA. (310) 524-4000. [].



cover feature green design

The Electronics Industry: The Power to Change

“If we all did the things we are capable of doing, we would literally astound ourselves.” --Thomas Edison

by John East, President and CEO, Actel Corporation


If Thomas Edison had known what his initial inventions would spawn, would he be delighted or horrified? The answer is probably a little bit of both. Given the amazing technological innovations over the past 150 years, and the dramatic improvements to our everyday quality of life, Mr. Edison surely would be pleased. However, these technological advances have come at a price. As made famous by Al Gore’s An Inconvenient Truth, the electronic devices that we use every day are contributing to the greenhouse gases associated with global warming today. So while these electronic innovations are making our world better, they are at the same time posing a very real threat. According to a United Nations report issued in May 2007, the average global temperature will rise by as much as 11°F by the turn of the century, even with an aggressive program aimed at minimizing this rise.



Another recent report, from the International Energy Agency in Paris, notes that from 2003 to 2050, the world’s population is projected to grow from 6.4 billion people to 9.1 billion, a 42 percent increase. If energy use per person and technology remain the same, total energy use and greenhouse gas emissions will be 42 percent higher in 2050. Thus, today’s high-tech community has the opportunity to play a major role in resolving the world’s global warming problems with further technological change. The 1990s brought a proliferation of electronics to our society as the world became increasingly dependent on technology, including desktop PCs and a rising variety of portable devices, such as smart phones, portable media players and GPS systems. A significant increase in the demand for power has accompanied this technology growth, and unfortunately most electronic devices are not as energy-efficient as they could be. According to the Climate

Savers Computing Initiative, today’s desktop PCs waste nearly half of the power delivered to them, making them a perfect example of the need for low-power offerings. And, while the size and type of devices we use may change, we remain increasingly dependent on our electronic devices to interact, inform and communicate. Designers of portable, battery-powered equipment are faced with a daunting challenge—how can they continue to satisfy the insatiable consumer demand for smaller, cheaper, feature-rich portable devices with longer battery lives in shrinking market windows? But the challenge goes beyond simply satisfying consumer demand. Unfortunately, the generation of the electricity required to power electronic systems, both large and small, contributes to a surprisingly high proportion of greenhouse gases. How can users stay in touch and informed without destroying the planet in the process? Semiconductor suppliers have the power, if you will, to make key changes that can dramatically improve greenhouse gas emissions caused by the operation of electronics.
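The IEA projection cited earlier is simple proportional scaling: holding per-person energy use and technology constant, total energy use and emissions grow with population. A quick check of the arithmetic:

```python
# IEA population figures quoted in the article.
population_2003 = 6.4e9  # people
population_2050 = 9.1e9  # projected

growth = population_2050 / population_2003 - 1.0
# With per-person energy use held constant, total energy use and
# greenhouse gas emissions scale by the same factor as population.
print(f"Projected growth, 2003 to 2050: {growth:.0%}")
```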

Reducing Energy Across the Power Continuum

Power in semiconductor devices takes two basic forms: static and dynamic. Static power is consumed when the part is not doing any useful work, while dynamic power is consumed when the device is actively working. Until recently, dynamic power was the dominant source of power consumption. Device supply voltages (VCC) had long helped manage the dynamic power problem by scaling downward with process shrinks and the lower system voltages that followed, but the days of continued voltage scaling are gone. Additionally, the physics associated with integrated circuits (ICs) at smaller process geometries have dramatically increased leakage-related power. With leakage worsening, static power has begun to dominate the power consumption equation as the biggest concern (Figure 1). Today, many technology companies are talking about reducing energy usage across the power continuum—from chips to systems—with the goal of helping to protect the environment.
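To first order, the two components behave differently: dynamic power follows the classic CMOS switching relation P_dyn ≈ α·C·V²·f, which is why supply-voltage scaling was so effective, while static power is supply voltage times leakage current, which grows as geometries shrink. A minimal sketch of this textbook model (the capacitance, frequency, activity-factor and leakage values below are illustrative assumptions, not figures from the article):

```python
def dynamic_power(c_load, v_dd, freq, alpha=0.2):
    """First-order CMOS switching power: P = alpha * C * V^2 * f."""
    return alpha * c_load * v_dd ** 2 * freq

def static_power(v_dd, i_leak):
    """Leakage power: P = V * I_leak; I_leak rises at small geometries."""
    return v_dd * i_leak

# Quadratic dependence on supply voltage: halving Vdd cuts dynamic
# power by 4x, independent of capacitance and clock frequency.
p_hi = dynamic_power(c_load=1e-9, v_dd=1.2, freq=1e9)
p_lo = dynamic_power(c_load=1e-9, v_dd=0.6, freq=1e9)
print(f"dynamic power ratio at half Vdd: {p_hi / p_lo:.1f}x")

# Static power scales linearly with leakage current, so a 100x rise in
# leakage at a smaller node means 100x the standby power draw.
print(f"static power at 1 mA leak: {static_power(1.0, 1e-3):.4f} W")
```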


Though environmentally friendly steps have been taken, such as lead-free initiatives and RoHS compliance, the electronics industry has not adequately addressed the power issue. And while the presence of small quantities of lead in electronics devices does indeed present a problem, its scope is minimal compared to the disastrous effects that could come if we fail to control global warming. From a global perspective, there are many advances underway to combat the greenhouse gases that are a direct result of electronics emissions. Under EU and domestic rules, utilities can charge higher rates for alternative energy through government-mandated prices, which reward companies for building carbon-friendly power plants. Such subsidies have helped Europe build up this industry by providing financial incentives to companies that invest in new technologies. In August 2007, BusinessWeek claimed Europe’s emphasis on wind power has put it ahead of other regions in the race toward green power. The 1997 Kyoto Protocol also addresses climate change and assigns mandatory limitations for the reduction of greenhouse gas emissions. As of December 2006, 169 countries and other governmental entities had ratified the agreement. Notable exceptions include the United States and Australia. In Japan, the Ministry of the Environment has launched a national campaign named “Team 6 Percent” to


help reach the country’s Kyoto Protocol objectives. The campaign refers to the effort to reduce greenhouse gas emissions to a level six percent below the level of 1990 through a variety of programs, including setting air-conditioning to higher temperatures, avoiding water waste, choosing and buying eco-friendly products, stopping car idling, eliminating excessive packaging and unplugging devices not being used. As another example, China has created a team led by Premier Wen Jiabao to fulfill its energy conservation and pollution-cutting tasks outlined by Greenpeace and the European Renewable Energy Council (EREC) in its global Energy Revolution study. The Premier and his team have set an aggressive goal of cutting energy consumption by 20 percent and pollution emissions by 10 percent by 2010. Additionally, China is becoming more ambitious about the development of wind and solar photovoltaic (PV) energy systems. China has set a target that by 2020, 16 percent of the country’s primary energy will come from renewable sources. In stark contrast, of the total energy consumed in the United States, about 39 percent is used to generate electricity, yet the national average for participation in renewable energy programs is only one percent.

[Figure 1. Static vs. Dynamic Power by Process Node: normalized power on a log scale, with curves for dynamic power and static (leakage) power, plotted against year (1990 to 2020) and technology node (500 nm down to 22 nm).]

In the Silicon Valley alone, data from California’s Energy Commission and Department of Transportation suggests that carbon dioxide emissions in 2006 were 5.6 percent above 1990 levels, not 20 percent below them as specified in climate-changing laws like AB 32. From a govern-

End of Article


PORTABLE DESIGN Get Connected with companies mentioned in this article.

mental perspective, programs from the Environmental Protection Agency (EPA) are aimed at cleaner energy sources. And while schools, businesses and consumers are beginning to make efforts to go green, the United States is not doing nearly enough. We, the electronics industry, have the capability to make broad sweeping changes that could dramatically affect our environment. I believe the U.S. electronics industry needs to make a coordinated attack on power consumption—from chips to systems. While lots of companies are talking about power reduction initiatives, much, much more can be done. The new power paradigm calls for the electronics industry to take responsibility for reducing energy consumption, improving power efficiency and ultimately, reducing greenhouse gasses. The industry can accomplish this starting with the design of ultra-low-power chips and systems through to the development of industry-wide power efficiency guidelines. For example, no Environmental Protection Agency (EPA) Energy Star guidelines exist for semiconductors to date, even though these semiconductor products directly contribute to the power efficiency and management of Energy Star-rated products. Our industry needs to rally around an approach to benchmarking power efficiency for “low-power” ICs. Wellconceived requirements for semiconductors would enable boards, systems and end products to minimize energy consumption, improve power efficiency and reduce greenhouse gases. If the industry supported such a program, semiconductor manufacturers would be held accountable for designing power-efficient chips. This could make a dramatic difference in greenhouse gases.

Efficiency Changes That Can Be Made Today

There are other immediate things that we, as engineers in the electronics industry, can do. A key area for immediate change is electric motors, which are used in nearly everything from elevators to home appliances. In 2005, the United States consumed 4,055 billion kWh of electrical power. More than 50 percent of this

power was used by electric motors, translating into a staggering 2,000 billion kWh. Unfortunately, many of the motors currently in use are inefficient and waste a substantial amount of the power they consume. For example, the efficiency of small AC motors can be as low as 50 percent. While motor efficiency improves to more than 90 percent as motor size increases, there is still opportunity to improve efficiency and reduce energy consumption. By adding intelligent load matching or variable speed control, the power efficiency of electric motors across the full range can be increased. This can be accomplished for a range of motor types at a cost attractive for most applications. In fact, coupled with best practices, this combination can result in motor efficiencies approaching 95 percent and, when implemented broadly, could result in an annual reduction in U.S. energy consumption of as much as 300 billion kWh, saving billions of dollars and reducing greenhouse gases by more than 180 million metric tons.

Generally, when designing a system, a power goal is set. Often, however, if the designer “approximately” meets this specification, little additional effort is expended to improve the design, leaving watts on the table. Because electronic systems are sold by the hundreds of millions, a few watts of inefficiency in each system eventually translates into staggering amounts of resources being consumed unnecessarily, which ultimately has a detrimental impact on the environment. Unfortunately, there is usually no easy way to track power down to the individual components or voltage rails, making the job of removing all unnecessary power from devices a difficult task. There is also rarely a way to measure voltages, currents and temperatures when the system is in operation, which complicates the ability to recognize when things are going badly.
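As a sanity check on the motor figures above, here is a back-of-the-envelope calculation. The 83 percent baseline fleet efficiency is an assumption chosen for illustration; the article gives only the roughly 95 percent target and the total motor load.

```python
# Back-of-the-envelope check of the article's motor figures.
# Assumption: the installed fleet averages ~83% efficiency today;
# load matching and variable speed control bring it to ~95%.

MOTOR_KWH = 2000e9          # annual U.S. motor energy use, kWh
BASELINE_EFF = 0.83         # assumed fleet-average efficiency (illustrative)
IMPROVED_EFF = 0.95         # efficiency with best practices

useful_work = MOTOR_KWH * BASELINE_EFF          # kWh of mechanical output
improved_input = useful_work / IMPROVED_EFF     # kWh needed at 95%
savings = MOTOR_KWH - improved_input

print(f"Annual savings: {savings / 1e9:.0f} billion kWh")
# -> Annual savings: 253 billion kWh
```

A baseline only a couple of points lower pushes the result toward the article’s “as much as 300 billion kWh” figure, so the claim is in the right ballpark.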
The proliferation of new standards, such as Advanced Telecommunications Computer Architecture (ATCA), MicroTCA and Intelligent Platform Management Interface (IPMI), proves that the world needs and wants enterprise-level system power management. These applications require the ability to measure voltages, currents and temperature in real time

The electronics industry needs to step up, take responsibility and play a leadership role in developing and delivering low-power devices for our changing world.

and recognize problems; the ability to log and communicate this data; and the ability to take corrective action when appropriate. After all, knowing that a power supply is providing more current than it should, or that the board temperature is higher than it should be, is not helpful unless steps can be taken to correct it.

System management has historically required multichip solutions. With as many as 10-15 extra chips, these designs cost money, consume valuable board space and burn additional power, which means that the “solution” to the problem is not a solution at all. Multichip solutions also require substantial engineering resources, which are often a scarce commodity. And yet, despite these significant costs and the availability of single-chip solutions, the industry has put forth little effort into being smarter about managing and controlling system power. I believe much can be done with the power-efficient technology available today.

The electronics industry needs to step up, take responsibility and play a leadership role in developing and delivering low-power devices for our changing world. It is no longer a choice. It’s mandatory. We as an industry have the power to make dramatic changes. Mr. Edison was definitely right: if the electronics industry did the things we are capable of doing, we would literally astound ourselves.

Actel Corporation, Mountain View, CA. (650) 318-4200. [].



wireless communications mobile video

Mobile Video: Managing Multiple RF Chains

A lot of RF technologies are contending for space in your cell phone. This article assesses their strengths and weaknesses for delivering mobile video.

by Rob Baxter, CTO, Nellymoser


Communication networks that enable video on mobile devices are evolving rapidly, albeit in a very fragmented way. Cellular networks enjoy over two billion mobile phone subscribers worldwide. In the U.S., the major carriers currently use CDMA or GSM. GSM carriers now support a combination of GPRS, EDGE and UMTS networks. Wi-Fi and Bluetooth are wireless networks in widespread use today, and many new cellular and non-cellular network technologies are emerging from laboratories into the marketplace.

How does one go about predicting mobile video trends? Focus on what is important to mobile video users: the ability to view content on demand, a small device and long battery life. Viewing content on demand, which enables time shifting, is the most crucial issue. Most consumers today want to be able to view what



interests them at any time. The popularity of the iPod, TiVo and Slingbox attests to this cultural shift from broadcast to on-demand viewing. Small devices with low power consumption enable consumers to view content anywhere they choose. From a designer’s viewpoint, the number of RF chains must be minimized to conserve power and constrain the device size. Knowing which networks represent the most value to a user is of paramount importance to the device designer.

Bluetooth, Wi-Fi and Cellular

Short-, medium- and long-range communications are currently dominated by Bluetooth, Wi-Fi and cellular, respectively (Figure 1), and those technologies are likely to continue to dominate for the next two to three years. Bluetooth enables short-range, peer-to-peer, ad hoc wireless networks. The




number of people walking around talking on their Bluetooth hands-free headsets affirms its pervasiveness. For medium ranges, Wi-Fi is quickly becoming ubiquitous for wireless LANs, not only in homes but also in coffee houses, many new cars and European trains. Advantages include peer-to-peer communication capability (e.g., sending music or video to a friend nearby), speed and Voice over Internet Protocol (VoIP). While Wi-Fi tends to consume a significant amount of power, several low-power Wi-Fi chips have been introduced in recent years. The fact that Microsoft is planning to support Wi-Fi mesh networks is yet another reason why Wi-Fi is likely to continue to dominate medium-range communications. Municipal networks have demonstrated that the effective range of Wi-Fi can be significantly extended. For example, the Wi-Fi network in Minneapolis will eventually cover 60 square miles throughout the city. Even though it was only partially complete, the Minneapolis Wi-Fi network was rapidly extended in the aftermath of the recent 35W Mississippi River bridge collapse to cover the entire emergency zone and provide critical support.

figure 1: Dominant wireless technologies versus range: cellular (GSM: HSDPA; CDMA: EV-DO), Wi-Fi mesh, WUSB, WiMAX and mobile broadcast (US: MediaFLO; EU: DVB-H).

Is there a dominant cellular network? GSM networks are based on open standards and have the largest number of users worldwide. CDMA networks are largely based on patented Qualcomm technology. A CDMA network only requires one chip, whereas supporting GSM networks typically requires two. CDMA’s current advantage in minimizing the number of RF chains, however, may soon vanish, since GPRS is quickly becoming obsolete and there are chips that support UMTS with EDGE fallback.

There are several emerging technologies that are likely to affect the near future. First consider cellular networks. UMTS has adopted Wideband CDMA (WCDMA), even though some features of WCDMA include patented Qualcomm technology. High-Speed Downlink Packet Access (HSDPA) is a UMTS enhancement that uses WCDMA and is available on the AT&T (formerly Cingular) cellular network. HSDPA will likely pervade the GSM carrier networks within a few years. With the availability of HSDPA and the popularity of the iPhone (especially if an iPhone with HSDPA becomes available), GSM networks are likely to continue to dominate the cellular networks.

Broadcast Networks

Some carriers are expanding their networks to include mobile video broadcast networks such as MediaFLO and DVB-H. Broadcast in this context means one-way communication, i.e., video signals are sent from a broadcast station to mobile devices. One of the major advantages of broadcast networks, as with traditional broadcast TV, is that once there are enough transmitters to cover a region, there is no additional cost to add viewers and there is no limit to the number of viewers that can be added.

In contrast, two-way cellular networks have a limited capacity, especially at the cell tower level, and increasing the capacity of a cellular network involves purchasing more equipment. In addition, receiving video signals on a mobile phone usually involves transmission between a wireless gateway and an Internet destination, so increasing the capacity of a cellular network to deliver video signals to more viewers may also involve increasing the capacity of the Internet in the region of interest, which again means purchasing more equipment. But the big advantage of a two-way cellular network is that content can be viewed whenever the user requests it. In addition, the programming can be interactive and every user can become a generator of content. The one-way communication path of MediaFLO and DVB-H networks does not inherently accommodate viewing on demand since there is no “demand” return path. However, viewing on demand can also be


With the availability of HSDPA and the popularity of the iPhone (especially if an iPhone with HSDPA becomes available), GSM networks are likely to continue to dominate the cellular networks.

accomplished with sufficient device storage. In fact, with current video compression technology a 1-hour TV program can be stored in about 100 Mbytes, so a device with 8 Gbytes of memory has room for approximately 80 hours of programming (the capacity of a typical TiVo digital video recorder). So if handsets that feature MediaFLO or DVB-H include a Digital Video Recorder (DVR), that would substantially increase the value proposition of the mobile broadcast networks.

MediaFLO is based on Qualcomm proprietary technology and DVB-H is based on the open Digital Video Broadcast standard aimed at portable (handheld) devices. DVB-H will not be competitive in the U.S. because two of the four major carriers (Verizon and AT&T) have opted for MediaFLO and a third carrier (Sprint) is conducting MediaFLO trials. In addition, Modeo, the company with the highest potential for introducing DVB-H into the U.S. market, has been shut down by its parent company, Crown Castle. In Europe, DVB-H is gaining traction and has the potential to become not just a complement to cellular networks but a competitor, especially if time-shifting becomes possible with larger and faster device storage.

It will take at least a couple of years for a mobile broadcast technology to gain traction in the marketplace. Until then, cellular networks will dominate over broadcast networks because of their existing infrastructure, higher versatility and inherent support for on-demand viewing.
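The storage arithmetic above is easy to verify:

```python
# The article's storage arithmetic: at roughly 100 Mbytes per hour
# of compressed video, an 8-Gbyte device holds about 80 hours.
MB_PER_HOUR = 100
DEVICE_GB = 8

hours = DEVICE_GB * 1000 // MB_PER_HOUR   # decimal units, as in the text
print(hours)  # -> 80
```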

What about other non-cellular networks? Ultra-Wide Band (UWB) is a radio technology characterized by a total bandwidth exceeding the lesser of 500 MHz or 20% of the arithmetic center frequency. The Wireless USB (WUSB) protocol uses UWB radio technology. WUSB has a higher data rate, consumes less power, and is more secure and more economical than Bluetooth. With such advantages, the stage is set for WUSB to overtake Bluetooth for short-range wireless communications.
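The bandwidth threshold in that UWB definition can be written out directly (the function name here is ours, for illustration):

```python
# The UWB definition in numbers: a signal qualifies as ultra-wideband
# if its bandwidth exceeds the lesser of 500 MHz or 20% of its
# arithmetic center frequency.
def uwb_min_bandwidth_mhz(center_mhz):
    """Minimum bandwidth (MHz) at this center frequency to count as UWB."""
    return min(500.0, 0.20 * center_mhz)

# Below a 2.5 GHz center frequency the 20% rule governs;
# above it, the 500 MHz floor does.
print(uwb_min_bandwidth_mhz(2000))   # -> 400.0
print(uwb_min_bandwidth_mhz(4000))   # -> 500.0
```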



Another rapidly emerging medium-range wireless technology is WiMAX (Worldwide Interoperability for Microwave Access). Many people assume that WiMAX will replace Wi-Fi, but that is highly unlikely. WiMAX is targeted at service providers rather than end users and is much more expensive to deploy than Wi-Fi. Wi-Fi is associated with the IEEE 802.11 standard whereas WiMAX is associated with the 802.16 standard. Wi-Fi should be thought of as adding mobility to LANs, whereas WiMAX provides a wireless alternative to DSL and cable for the “last mile” to the end user. While WiMAX is more expensive to deploy than Wi-Fi, for service providers it is more economical than running wire. It is possible for the same mobile device to have access to both Wi-Fi and WiMAX. In that case, WiMAX has several advantages.




figure 2: Emerging wireless technologies versus range (30 ft. to 300 ft. and beyond).




It has higher data speeds (75 Mbits/s to 268 Mbits/s compared with Wi-Fi’s 11 Mbits/s to 54 Mbits/s) and longer ranges (30 miles for fixed stations and 3-10 miles for mobile stations compared with Wi-Fi’s 100-300 foot range); low-power WiMAX chips are available; and a WiMAX connection is more stable than a Wi-Fi connection. WiMAX uses MAC (Media Access Control) scheduling, which means that users compete for access only the first time they connect to the network and are then allocated a specific amount of bandwidth. Wi-Fi, in contrast, uses contention-based access, so users continually compete with all other devices on the network and the quality of the connection can change depending on who else is using the access point. The biggest roadblock to deploying WiMAX seems to be FCC concerns about having a large number of WiMAX deployments with licensed use of the 2-6 GHz spectrum and unlicensed use of frequencies up to 66 GHz, because TV viewers may experience interference.

WiMAX will eventually compete with cellular, especially as the mobility of WiMAX improves. But cellular networks have several advantages. First, they have an existing infrastructure. Second, cellular networks already support mobility whereas current WiMAX systems are mostly fixed stations. On the other hand, the big advantage of WiMAX is bandwidth: 70 Mbits/s versus 3 Mbits/s (optimistically) for HSDPA. That’s a big advantage, so much so that Sprint/Nextel has decided to invest in a WiMAX network to exploit its majority position in the 2.5 GHz spectrum in the U.S.
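To put that bandwidth gap in perspective, here is a rough transfer-time comparison using the article’s optimistic line rates (real-world throughput would be lower for both):

```python
# Time to move a 100-Mbyte hour of compressed video at 3 Mbits/s
# (HSDPA, optimistic) versus 70 Mbits/s (WiMAX). Illustrative
# line rates only, not measured throughput.
FILE_MBITS = 100 * 8          # 100 Mbytes expressed in megabits

hsdpa_s = FILE_MBITS / 3      # seconds over HSDPA
wimax_s = FILE_MBITS / 70     # seconds over WiMAX

print(f"HSDPA: {hsdpa_s / 60:.1f} min, WiMAX: {wimax_s:.0f} s")
# -> HSDPA: 4.4 min, WiMAX: 11 s
```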




The IEEE 802.20 standard, nicknamed Mobile-Fi, could be considered another emerging technology for medium-range wireless connectivity. This technology emphasizes high-speed mobility (e.g., high-speed trains). However, the similarity between the 802.16 and 802.20 standards makes the future of 802.20 unclear. To date, 802.20 has received very little attention in the marketplace, and the working group changed its voting rules after undergoing an unprecedented suspension last year.

The fact that only a handful of handsets support mobile broadcast networks in the U.S. tends to suggest that it will be an uphill battle for these networks to become profitable. So for the next couple of years devices should support Bluetooth, Wi-Fi and cellular networks. The cellular network support should include a 3G network (e.g., HSDPA or EV-DO). Some carriers already support smooth handoffs between Wi-Fi and cellular networks so that users can move outdoors and indoors while maintaining seamless connectivity. Bluetooth is likely to be replaced by WUSB in the near future, and there will be increasing pressure to include WiMAX once its infrastructure is in place. And we haven’t even discussed GPS. So many RF chains, so little space. Nellymoser, Inc., Arlington, MA. (781) 645-1515. [].


consumer electronics streaming video to mobile devices

Streaming Video to Mobile Devices

Like IPTV, streaming video to mobile devices must measure Quality of Service (QoS) and Quality of Experience (QoE). This article examines these issues in detail from the network perspective.

by Yves Cognet, Chief Technology Officer, QoE Assurance Division, Symmetricom


Service providers worldwide are excited about the revenue potential of streaming video: video to the home, under the name of IPTV, and video to mobile devices, under the name of TV to mobile. Bundling such video services as part of a triple or quad play offering is perceived in many places around the world as the only way of surviving for most telecommunications service providers that are losing market share from their legacy voice service every minute. And telecommunications service providers have every right to be excited. Market pundits predict bold growth. The market for IPTV is still in its infancy. An article in Broadcast Engineering magazine cites research firm Infonetics Research as finding that IPTV service provider revenue jumped 178 percent in 2006 to $2.8 billion worldwide, with growth expected to continue in the double to triple digits every



year worldwide. On the mobile side, according to the CTIA, the international association for the wireless telecommunications industry, wireless, broadband convergence and mobile computing make up a 100-country, $500 billion global industry with 2.3 billion subscribers—and it’s still growing. There are many trends driving this growth. The cost of mobile devices continues to drop, enabling more mobile users to sign up. Mobile broadband speeds also continue to increase, enticing more customers to use mobile broadband services. Research firm Informa Telecoms predicts the worldwide market for mobile entertainment services, including mobile music and TV, will grow from around $18 billion in 2006 to more than $38 billion in 2011. But TV over “any” broadband doesn’t come for free. There is an investment required for infrastructure to ensure the highest Quality of

Service (QoS) and Quality of Experience (QoE) for these demanding services to secure the potential revenues. And service providers everywhere seem to understand this. According to an Infonetics Research report, IPTV equipment sales are predicted to grow from $371 million to $6.8 billion between 2005 and 2009. Success for any new streaming video deployment is highly dependent on ensuring that networks can handle the demanding load. Ensuring network performance for such interactive services cannot be achieved simply by increasing raw bandwidth. Because consumers’ tolerance for disruption is lowest with television service, service providers cannot sacrifice TV picture quality and risk customer migration to other providers.

Video quality is critical in IPTV, and before you think it is not as critical for mobile devices, think again. Mobile devices continue to evolve at a pace more rapid than anyone can keep up with. Today’s small two-inch cell phone screen is quickly morphing into dedicated streaming video devices, and that doesn’t even account for laptops with 12-inch and bigger widescreens. So QoS and QoE also play an important role in TV to mobile. For IPTV to the home, QoE is seen as a measurement equally important as traditional QoS, and the same is increasingly true for mobile TV. The user experience, which is critical with streaming video, cannot be guaranteed with assessments from the core network alone. QoE seeks to ensure user experience by measuring quality as the user experiences it, not as inferred from the core network via traditional QoS measurements. There are fundamental technologies for ensuring network performance, and then there are advanced technologies for assuring a service over that network.

Measuring Network Performance

There are two main approaches to network performance measurement and verification, and both are applicable to IPTV or TV to mobile. There is the passive approach, which collects information from data generated by users and servers. Then there is the active approach, in which controlled traffic is generated and measured for diagnostic purposes, typically in a lab or test environment. There is a place for both: one for performance assurance “before” disruptions take place, the other to monitor live services.

figure 1: Dual-ended system. The source video passes through compression and network impairments (delay, jitter, ...); a measurement system compares the impaired video with the source to produce a quality score.

Passive measurement devices are usually inserted physically in the networking chain and can be limited in capabilities when the traffic flows are encrypted. Their added weight in an infrastructure does create an increased risk of service disruption if the equipment fails, but they do provide monitoring of live services, so the trade-off is worth it. Some passive devices perform their measurements only at sampling intervals. This limits their ability to account for the peaks and valleys inherent in a typical network, whether short or long, small or large. Passive measurements are also, by their very nature, not reproducible, because they are based on whatever real-world user traffic exists at a given time. As well, passive measurement devices cannot be located close to each user’s device, so their measurements do not necessarily reflect the real end-user experience.

Passive solutions are primarily geared toward ensuring fast recovery of already failed or disrupted services, and existing solutions are good at this. Only when end users begin experiencing significantly reduced network performance




(or complete outages) will the network manager be able to start the process of network investigation and repair. Active network testing and diagnostic solutions, on the other hand, generate network

For best practices in ensuring video services to a set top or mobile, before disruptions take place, active monitoring is ideal.

traffic in a controllable, repeatable manner that provides a number of key advantages. The active measurement devices are usually connected to a switch or routers. This does not add a point of failure in the networking chain like a passive solution might, nor any change in the network topology. Active testing also provides measurement ubiquity. If the active measurement is done by software running on a traditional computer or mobile device, it can be located at any place in the network: close to the end user, to reproduce as much as possible of the end-user experience.

Active measurement solutions are the only solutions that allow network operators to create repeatable performance benchmarks. These benchmarks can be defined according to SLAs or classes of service or any other criteria. The test traffic that is generated can be very precisely defined in volume, protocol, traffic type, etc. Thus, performance can be fine-tuned for specific applications, be it IPTV or TV to mobile. It is often necessary to test how the network will react to particular types and loads of network traffic in specific scenarios or at particular times of the day or week. It is also imperative that such tests can be repeated in an identical manner at different times. Active solutions provide the necessary on-demand

network testing while passive solutions are at the mercy of whatever happens to be traversing the network at any given time. So, active testing is ideal for ensuring performance prior to deployment or for measuring quality during the life of a service using desired benchmarks.

For best practices in ensuring video services to a set top or mobile, before disruptions take place, active monitoring is ideal because it provides preemptive discovery of network faults and performance anomalies. By generating and sending particular traffic patterns through the network, actual or potential network performance anomalies are brought to light before end users become aware of them. Because active solutions generate the packets that are sent across the network and then measure that performance, the network manager enjoys complete control over the type, timing and recurrence frequency of this traffic. This control yields a much more accurate understanding of the relevant measurements and statistics than is possible by measuring actual network traffic.

Before deployment of a particular network application or service such as TV to mobile, there is nothing for a passive testing solution to analyze, since the new application is not yet generating any traffic to measure. After deployment, a passive solution does become more valuable to recognize live disruptions. An active solution, on the other hand, can emulate the exact traffic that will occur on the network in the future and make determinations regarding the network’s readiness to support the new traffic as well as the impact the new traffic will have on existing network applications. One example of this is verifying whether or not desired QoS specifications are being met; if video QoS traffic doesn’t yet appear on the network, a passive solution cannot make any determinations about how the network’s configuration will support that QoS level.
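The earlier point that interval-based passive sampling cannot account for peaks is easy to demonstrate with a synthetic trace (all values are illustrative):

```python
# Why interval sampling misses peaks: a one-second jitter spike in a
# per-second trace disappears when a passive probe reads the counter
# only every 10 seconds.
trace = [2.0] * 60            # 60 s of ~2 ms jitter
trace[37] = 45.0              # a one-second spike a viewer would notice

sampled = trace[::10]         # probe samples at t = 0, 10, 20, ...
print(max(trace), max(sampled))   # -> 45.0 2.0
```

The probe’s view (2 ms worst case) looks healthy even though the viewer saw a 45 ms spike.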
It is only after service deployment that passive solutions become a recognizable advantage. Passive measurement tools continuously poll the network to collect measurements, traps and details for implementing alarms

Assessing Video Quality

Video introduces a myriad of new potential challenges to the network. Digitalization and compression, jitter, degradation, streaming, network transport, codecs and decompression are all critical concerns specifically related to video applications. Methods in use for a long time now have provided guesstimates of what a user might be experiencing, but these are simple inferences made from findings in the core of a service provider’s network: a decent


about service disruptions. Active testing products, on the other hand, apply a controlled traffic load to the network at predetermined times. While monitoring live, undetermined traffic is vital, traffic controllability is also of fundamental importance, both in terms of not interfering with the mission-critical network and in terms of controlling the timing of packet generation to ensure the quality of statistics. With active solutions, the volume, timing and other parameters of the traffic introduced into the network are fully adjustable, and small traffic volumes are enough to obtain meaningful measurements.

Measurement precision and accuracy are usually better with active measurement devices because these devices are located at the two termination points of the measurements. Using GPS-based time synchronization or carrier-class NTP, an active measurement device can perform very precise measurements, even on very low-bandwidth traffic. There are a few adopted or emerging standards in the IP performance measurement domain, such as RFC 2330. Such standards define very precisely both the measurement methodology and the measurements themselves: the delay, the delay variation (jitter), the packet loss, etc. Only an active measurement solution can comply with these standards.

As you can see, there is a need for both active and passive testing and monitoring for streaming video, with each playing distinct roles. Drilling deeper, beyond the roles of passive and active monitoring solutions in the network, what can a typical network engineer expect when evaluating video quality over that network?
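As a sketch of the kind of delay and jitter computation those standards formalize: one-way delay from synchronized send/receive timestamps, and jitter as the variation between consecutive delays. The timestamps here are invented, and a simple mean absolute difference stands in for the smoothed estimators a real tool would use.

```python
# One-way delay and jitter from synchronized timestamps (GPS/NTP).
send_ts = [0.000, 0.020, 0.040, 0.060, 0.080]   # seconds, sender clock
recv_ts = [0.031, 0.052, 0.070, 0.095, 0.111]   # seconds, receiver clock

delays = [r - s for s, r in zip(send_ts, recv_ts)]
diffs = [abs(delays[i] - delays[i - 1]) for i in range(1, len(delays))]
jitter = sum(diffs) / len(diffs)    # mean delay variation

print(f"mean delay {sum(delays) / len(delays) * 1000:.1f} ms, "
      f"jitter {jitter * 1000:.1f} ms")
# -> mean delay 31.8 ms, jitter 3.0 ms
```

With unsynchronized clocks the absolute delays would be meaningless, which is why the article stresses GPS or carrier-class NTP synchronization.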

guess at best. An accurate method can pinpoint a user’s problems from their living room or mobile device, ensuring the QoE is met. For streaming video from IPTV or mobile TV, a service provider should be able to look directly into the device in use by the consumer for potential problems or issues. Previous research on the assessment of video quality focused on a model that is “dual ended.” The dual-ended (J.144) model requires a comparison between impaired video and source video, ideal for active testing solutions. Impaired video is the result of different and complex processes such as video digitalization and compression, streaming, network transport and decompression (Figure 1).

figure 2: Single-ended system. The compressed video passes through network impairments (delay, jitter, ...); a measurement system produces a quality score without access to the source video.

In a lab environment where the source video and the impaired video can be assessed in a controlled environment by an engineer, the dual-ended model might work well. However, when the source and the impaired video are miles away, such as in delivering IPTV to the home or video to a mobile, this model is impractical. This is a scenario more ideal for passive testing and monitoring. When the objective is to analyze in real time, from different locations, multiple to hundreds of video sources broadcast over an IP network to millions of consumers, the processing power required to compare in real time can make for a cost-prohibitive approach.




consumer electronics

Quality Models

There is a solution whereby no reference source is needed. A similar model has existed for VoIP for a long time and has been standardized by the ITU as G.107. This VoIP model, known as the E-Model, doesn't require a "well-known" source to be injected into the network in order to assess VoIP quality. Models that don't require a well-known source are also referred to as "non-intrusive" models. The Moving Picture Quality Metrics (MPQM) research that several labs have developed over the last 10 or so years provides the foundation for a good model requiring no source reference. MPQM is a single-ended model that assesses video quality in real time and produces quality scores (Figure 2). The MPQM scores video quality the same way the E-model assesses VoIP quality. This provides a simple measurement approach for network engineers, and one that might be familiar from VoIP scoring. MPQM is based on the Human Vision System and takes into account how the degradation of the source through the whole process (compression, transmission, decompression) impacts the image quality as perceived by the eyes and the brain of a real person (Figure 3), not just by inferences made from core network impairments.

[Figure 3: MPQM model. Compressed video feeds an entropy analysis that yields the image entropy; the program clock reference and IP network impairments (delay, jitter, ...) yield a packet loss probability; the MPQM model combines these inputs into a quality score.]

While this is clearly advantageous to IPTV,

video to a mobile device must also adhere to a minimum level of quality, so the model is also ideal for any streaming video application. As devices improve in areas such as screen resolution and size, and other dedicated mobile TV devices hit the market, ensuring higher quality video to a mobile device will continue to grow in importance. Most artifacts that degrade video quality come from the compression itself and from network impairments such as jitter, packet loss, etc. For more information on network impairments, look to the standards ITU Y.1540/1541 and IETF RFC 2330. The benefits of a single-ended model are obvious; it can be integrated in the content and distribution chain, not just in a lab, all the way up to the device. End-to-end video quality assessment thus becomes a reality, with a single-ended solution that leverages the established MPQM foundation and can score video quality in a similar way to VoIP. Remember, VoIP scoring is established as a best-practice method. The video quality score should take into account the well-known transport stream key parameters, or KPIs, defined in ETSI TR 101290, and also parameters related to the performance of the network layer as defined by ITU Y.1540/1541 or IETF RFC 2330. In order to work, the model must be fed with several parameters: the Packet Loss Rate (PLR) probability, the image's entropy (the quantity of information carried by an image), network jitter, network losses and the Program Clock Reference (PCR) jitter. This leverages findings from the EPFL lab and others, and also takes into account the codec type, such as MPEG-2 or H.264. The packet loss probability is deduced from the network and streaming impairments as well as the depth of the buffer that will be used for the image decompression. The image entropy takes into account several parameters, such as the nature of the video frames that are or will be impacted by jitter or loss.
For example, in an MPEG-2 world, the loss of an I, B or P frame doesn't have the same impact on video quality as perceived by an end user. The model should also account for the relative sizes of frames as well as the size of the GOP. The loss of an I frame from a video stream compressed with a GOP of 1 doesn't have the same impact as it would if the stream had been compressed with a GOP of 12. The GOP describes the sequence of I, B and P frames, and the loss, whether partial or total, of an I frame makes the reconstitution of the following B and P frames very difficult. From an information point of view, an I frame has greater entropy than a B or a P frame. Similar situations arise in an H.264 world, where I frames are called reference slices and are constructed from a series of consecutive macroblocks. The loss of a reference slice has even greater impact due to the nature of the H.264 encoding process. For this reason, at least, most deployments, though still at different stages, make use of an FEC mechanism that costs some extra bandwidth but minimizes the impact of lost frames on quality. The sound channels must also be monitored; they too are important to mobile TV. Believe it or not, end-user expectations may be greater for high-quality sound than for high-definition video.

Monitoring Systems

With such a solution in hand, an end-to-end monitoring system can be deployed for either unicast video streams such as video on demand, or multicast video streams such as IPTV or mobile TV. For mobile video, assessing the quality of streaming video will require the same assurances, and this will only grow in importance as mobile devices improve the video quality that is possible. By deploying non-intrusive hardware probes at different locations on the network, a service provider can easily monitor the performance of its services across the whole chain, with alarms and real-time information, be it IPTV to the home or streaming TV video to a mobile device. QoS and QoE are a critical part of not just


network operations, but of sustaining larger business objectives that include revenue, customer retention and growth assurances. As service providers continue their ventures into video to the home or to a mobile device, the successful ones will be those that understand that proactive monitoring is cyclical. It must be continually repeated throughout the life of a service, from the moment one decides to assess network capabilities to deliver new video services, through deployment and commissioning, and on through supporting customer SLAs. Continuous, refined assessment requires tools that measure deep into the network. Streaming video to mobile devices already requires insight beyond whether the service is simply up and running. The right tools require deep drill-down capabilities to pinpoint discrepancies right down to the device or application, whether in the core, at the home or on a mobile device. And the right tools must provide complete capabilities to assess packet loss, jitter, delay and other such benchmarks that severely impact video quality. Streaming video to a mobile device no doubt introduces yet more complication into mobile networks, and now, more than ever, service quality assurance is a must. Symmetricom, Inc., San Jose, CA. (408) 433-0910. [].
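As a concrete illustration of the frame-type weighting discussed above, the idea that the impact of a lost frame depends on its type and on the GOP length can be sketched in a few lines. The weights and the linear GOP scaling here are hypothetical illustrations, not values from the MPQM model:

```python
# Hypothetical weighting of a lost frame's impact on perceived quality.
# The frame-type weights and GOP scaling are illustrative assumptions,
# not coefficients from the MPQM model.
FRAME_WEIGHT = {"I": 1.0, "P": 0.4, "B": 0.15}  # I frames carry the most entropy

def loss_impact(frame_type, gop_length):
    """Relative impact of losing one frame of the given type.

    Losing an I frame hurts more when the GOP is long, because every
    following B and P frame in the GOP depends on it.
    """
    w = FRAME_WEIGHT[frame_type]
    if frame_type == "I":
        w *= gop_length  # all dependent frames in the GOP are compromised
    return w

# An I-frame loss in a GOP of 12 is far worse than in a GOP of 1:
short_gop = loss_impact("I", 1)
long_gop = loss_impact("I", 12)
```

A real scoring model would also fold in the relative frame sizes and the codec type, as described above; this sketch only captures the ordering of the effects.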



portable power

Reducing Power with Dynamic Voltage Scaling

Using DVS on a system-wide basis can save a lot of power, if you plan for and implement it properly.

by Alexander Friebe, Product Marketing Engineer, Texas Instruments


In today's portable applications, two features have become increasingly important: system run-time and standby time. Battery life and the best use of limited battery capacity are therefore critical, whatever the application. These can be consumer-focused (e.g., media players or handheld computers), industrial (e.g., multimeters or control systems) or medical (e.g., ultrasound systems) applications. Next-generation products often feature a processor and bus with increased speed. In battery-powered applications, the need for speed and higher processor performance usually conflicts directly with the need for longer battery run-time. One option to overcome this conflict is dynamic voltage scaling (DVS). It operates the processor and bus at high speed when needed, and reduces power consumption when that speed is not needed.



Alternatively, DVS can also be used in non-battery-operated applications or larger systems where power consumption must be managed to reduce operating expenses. For example, in a server farm, each server adds to the server room's thermal loading. Each server can employ DVS techniques to reduce its power. This incremental power saving, multiplied by hundreds of servers, yields a significant reduction in the building's thermal loading. This article covers the basics of dynamic voltage scaling, how to specify a power supply that supports DVS, and what firmware/software requirements are involved. Also provided is an example of power savings using DVS.

What Is DVS?

Dynamic voltage scaling (DVS) is a board-level approach used to reduce average power consumption. Typically it is employed on boards with a microprocessor, digital signal processor (DSP) or other large-scale integration components that can operate at various supply voltages. To accomplish this, the system's switching losses are reduced by selectively dropping the system's voltage. Additionally, DVS is often implemented in combination with clock frequency scaling to achieve maximum power savings. Let's start by looking at how DVS works. DVS is used primarily in digital systems to reduce the power consumed by switching processing elements. Today's embedded systems and DSPs use high clock frequencies to boost the system's performance and provide fast processing. At high clock frequencies, switching power losses dominate most other system power losses. These switching losses are due to the charging and discharging of system capacitance. The capacitors being switched can come from input load impedance, stray capacitance from printed circuit board (PCB) traces, or, most likely, the MOSFET gate capacitances inside the processor. Generally speaking, switching power goes up with the square of the applied voltage, and linearly with the applied switching frequency. In most systems, capacitance is fixed by the ICs, board or system components. However, we can adjust the voltage and frequency to vary the power consumption of the circuit and system.

Possible Power Budget Savings?

First, switching power can be reduced by a factor of two by cutting the switching frequency in half, which results in a linear power consumption reduction. Second, switching power can be reduced by a factor of four by cutting the voltage in half. Third, switching power can be reduced by a factor of eight by cutting both the voltage and the frequency in half.

Using a DSP Processor

Let's look at a processor, for example the TMS320VC5509A. This is a high-performance DSP with dual voltage rails. One voltage rail is 3.3V and is used to power the DSP's I/O functions. The second rail supplies power to the DSP's core logic. The core voltage is rated for operation between 1.2 and 1.6V (Figure 1). The maximum clock frequency that this DSP can support depends on the core voltage: the higher the core voltage, the higher the clock frequency. From the DSP datasheet, the maximum clock frequency is 108 MHz with a 1.2V core, 144 MHz with a 1.35V core, and 200 MHz with a 1.6V core. Notice these are maximum clock frequencies. There is no limit on minimum clock frequency; if desired, the 1.6V core can be operated at 10 MHz. Actual maximum and minimum clock frequencies are determined by the external oscillator and two registers in the DSP that adjust the internal phase-locked loop (PLL). DSP software can write to the two registers to alter the clock frequency over a large range of values.

[Figure 1: Maximum DSP clock frequency and related core voltage: 108 MHz at 1.2V, 144 MHz at 1.35V and 200 MHz at 1.6V.]

[Figure 2: Relation of total power consumption and clock frequency (total power in mW versus clock frequency in MHz, with curves for CVdd=1.2 and CVdd=1.6).]
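The voltage-squared, frequency-linear scaling described under "Possible Power Budget Savings?" can be sanity-checked in a few lines. The 1 nF switched capacitance below is an arbitrary assumption; the ratios hold for any fixed capacitance:

```python
def switching_power(c_farads, v_volts, f_hertz):
    """Dynamic (switching) power of a CMOS load: P = C * V^2 * f."""
    return c_farads * v_volts ** 2 * f_hertz

# Hypothetical 1 nF switched capacitance at the article's 1.6V / 192 MHz point.
base      = switching_power(1e-9, 1.6, 192e6)  # full voltage, full frequency
half_f    = switching_power(1e-9, 1.6, 96e6)   # frequency halved -> 1/2 power
half_v    = switching_power(1e-9, 0.8, 192e6)  # voltage halved   -> 1/4 power
half_both = switching_power(1e-9, 0.8, 96e6)   # both halved      -> 1/8 power
```

Because the voltage enters squared, dropping the core voltage is always the bigger lever, which is why DVS pairs voltage reduction with frequency reduction.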

In this example, the external oscillator's clock frequencies are between 24 and 192 MHz. Power consumption is a function of the clock frequency and core voltage. The graph in Figure 2 shows power consumption versus clock frequency based on core voltages of 1.6V and 1.2V. At point one on the graph, the DSP is operating at 192 MHz with a 1.6V core voltage and consumes about 500 mW of power. As the frequency is reduced, the power drops linearly to 100 mW at 24 MHz. At 24 MHz, however, the core voltage can be reduced to 1.2V. Operating at 24 MHz with a 1.2V core voltage consumes only 50 mW of power. In this case, there is a factor of 10 in power savings between the two operating points. The idea of DVS is to use only the highest-power operating point needed to perform the present task. There is one "high power" operating point that is used when the DSP requires full computational and operating capacity. The "low power" operating point is used when the DSP requires less demanding computational and operating capacity. In this example we will look at only two operating points. It is possible, however, to operate at any speed along the core voltage lines in order to provide many levels of power savings.
Point 1: Vcore=1.6V, F=192 MHz, Power=500 mW
Point 2: Vcore=1.2V, F=24 MHz, Power=50 mW

[Figure 3: Overall power consumption and related core voltage. Static and dynamic power versus core voltage (1.2 to 1.8V) at 200, 125 and 50 MHz.]

[Figure 4: Power consumption in a non-DVS and DVS system. Without DVS, Pavg = Pmax = 500 mW; with DVS, switching between the 500 mW and 50 mW operating points yields Pavg = 168 mW.]

What Else – Static Losses

In addition to frequency and voltage selection, another parameter that influences the system's overall power budget is static loss. The TMS320VC5509A has very low static losses, and they do not change with the frequency of operation. Excluding any loads, static losses are due mostly to leakage currents in transistors, which depend on the silicon geometry used to fabricate the transistors. The leakage current is mostly linear with the applied voltage (VCC), so it looks like a resistor. The operating point with the least overall power consumption therefore depends mainly on two parameters: What is the lowest static operating point? And how long does the required software process take to execute a certain number of clock cycles? At lower clock frequencies there are lower dynamic losses, but the operation takes longer and static losses are incurred the entire time (Figure 3).
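The trade-off just described, where a lower frequency cuts dynamic power but stretches the time over which static power is paid, can be sketched as energy per task. The static/dynamic splits below are illustrative assumptions chosen to sum to the article's 500 mW and 50 mW totals, not datasheet values:

```python
def task_energy(cycles, f_hz, p_dynamic_w, p_static_w):
    """Energy to run a fixed workload: run time = cycles / f, and static
    (leakage) power is paid for the entire run time."""
    run_time_s = cycles / f_hz
    return (p_dynamic_w + p_static_w) * run_time_s

# Illustrative numbers only: a 24-million-cycle task at the two operating
# points, with an assumed static/dynamic split (0.49 + 0.01 = 0.5 W total
# at the fast point, 0.045 + 0.005 = 0.05 W total at the slow point).
e_fast = task_energy(24e6, 192e6, 0.490, 0.010)  # 1.6V, 192 MHz: 0.125 s
e_slow = task_energy(24e6, 24e6, 0.045, 0.005)   # 1.2V, 24 MHz: 1.0 s
```

With these assumed numbers the slow, low-voltage point still wins on energy; on a leakier process, finishing fast and dropping to the low-power point to idle could win instead, which is exactly the two-parameter question posed above.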

How Does DVS Work in a System?

Typically, DVS is used with microprocessors or DSPs intelligent enough to predict their processing needs. The DSP must know how much computational power it requires to perform the present and near-future tasks so that it can adjust its clock frequency and core voltage accordingly. For example, the high power operating point is selected to perform a fast Fourier transform on an acquired analog signal. The low power operating point is selected when the DSP is idle, waiting for an interrupt to trigger a process. Switching between high and low power operating points reduces the system's average power consumption. Figure 4 shows a DSP operating at 192 MHz with a 1.6V core at all times in order to support all operating conditions. In this case, the average power consumption is 500 mW. However, switching between the two operating points as needed reduces average power consumption to 168 mW. The average power savings is a function of the power consumed during the high and low power points, and the duty cycle of each.

Up to now, we've looked at the DSP's specifications and requirements. Now let's look at the power supply requirements needed to support DVS. Most systems using DVS have dual voltage rails: a core voltage to power the processing elements, and an I/O voltage to power the external buses and peripherals. Having one power supply that provides both rails saves space, cost and parts count, and simplifies design. To support DVS, the power supply must have at least one output that can be dynamically adjusted easily. Many power supplies have adjustable voltage outputs. However, changing the output voltage often requires changes in external components that may affect power supply stability. A power supply used for DVS must have a way to change the output voltage, either via software or by a selection pin. Moreover, the power supply must remain stable over a wide range of input and output voltages and load currents. Some companies attempt to use standard power supplies and modify the feedback path with external components to provide two operating points. This can cause instability in the power supply, and must be extensively tested to ensure stability over the wide operating conditions required of a DVS power supply. It is best to choose a power supply specifically designed for DVS applications.

DVS Power Management Solution

Now let's change the DSP core voltage between 1.2V and 1.6V. For the power management IC we use the TPS62400, a dual-output DC/DC converter, adjustable over a one-pin EasyScale interface. Next we determine the feedback resistor values to set the voltages. The datasheet provides the equation to determine the feedback resistors, based on the desired output voltage at power-up. The I/O voltage rail is set to 3.3V and will not change during operation. At power-up, the core voltage must be set to 1.6V so the core can run at its maximum clock speed. The resistor divider shown in Figure 5 sets the core to 1.6V. Next we need a way for the firmware or software to change the core voltage. Let's use one general-purpose I/O (GPIO) pin on the DSP for this purpose. The GPIO pin is connected to the DC/DC converter's MODE pin, which provides the single-wire serial communications channel. When needed, the DSP can send EasyScale commands to change the output voltage by toggling the GPIO pin. This is all that is needed from a hardware perspective.

A system with dynamic voltage scaling can have significant power savings over a similar system without DVS. To realize a system with DVS requires a DSP or processor that can tolerate having its core or supply voltage changed. Additionally, a DVS system requires a special power supply that supports dynamic changes in the voltage and remains stable. The DVS power supply also must have a method to initiate a change in the output voltage by hardware or software means. Texas Instruments Inc., Dallas, TX. (800) 336-5236. [].

[Figure 5: Example split-rail DSP power supply with DVS, showing the DVdd (I/O) and core rails, and CPU_Clock = OSC x (PLL MULTI register / PLL DIV register).]
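The averaging shown in Figure 4 can be reproduced directly. Note that the roughly 26% high-power duty cycle below is inferred from the article's 500 mW / 50 mW / 168 mW figures; the text itself does not state a duty cycle:

```python
def average_power(p_high_w, p_low_w, duty_high):
    """Time-weighted average power across two operating points."""
    return duty_high * p_high_w + (1 - duty_high) * p_low_w

# Back out the duty cycle implied by the article's numbers:
# 500 mW at the high point, 50 mW at the low point, 168 mW average.
duty = (0.168 - 0.050) / (0.500 - 0.050)  # fraction of time at high power
p_avg = average_power(0.500, 0.050, duty)
```

The savings therefore depend entirely on how often the workload actually needs the high-power point; a system that is busy most of the time gains little from DVS.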



product feature

Multicore Moves to Portable Designs

ARM Cortex-A9 delivers 8000 DMIPS within a 250 mW power budget.

by John Donovan, Editor-in-Chief As Intel and AMD race to crank out symmetric multiprocessing (SMP) multicore processors for desktops and server farms, ARM is moving to deliver the same capability for portable designs. Actually, ARM has been licensing its ARM11 MPCore—which it bills as the “first integrated multiprocessor core”—since May 2004. This month it introduced its “second generation” MPCore product, which sports an impressive datasheet. At the annual ARM Developers’ Conference in Santa Clara this month, ARM launched its

new Cortex-A9 processors. The ARM Cortex-A9 MPCore multicore processor and ARM Cortex-A9 single core processor deliver up to 8000 DMIPS performance within a 250 mW power budget. For 2000 DMIPS of performance when designed in a TSMC 65 nm G process, the core logic costs less than 1.5 mm2 of silicon. The new chips are targeted at smartphones, connected mobile computers, consumer electronics, automotive infotainment, networking and other embedded and enterprise devices. The Cortex-A9 processors utilize a dynamic length, 8-stage superscalar, multi-issue pipeline with speculative out-of-order execution. Each core is capable of executing up to four instructions per cycle in devices clocked at more than 1 GHz while also providing reductions in the cost and inefficiencies of today’s leading 8-stage processors. Despite the new microarchitecture, ARM was careful to ensure software compatibility with existing designs. Cortex-A9 processors are compatible with other Cortex family processors and the ARM MPCore technology, thereby inheriting an ecosystem of OS/RTOS, middleware and applications to lower the costs associated with adopting a new processor. In order to simplify and broaden the adoption



of multicore solutions, the Cortex-A9 MPCore processor supports system-level coherence with accelerators and DMA to further increase performance and reduce power consumption at the system level. Each processor is available with ARM Advantage standard cells and memories for a traditional and convenient synthesizable flow and provides increased levels of power efficiency with a similar silicon cost and power budget as the previous ARM11 family of processors. The Cortex-A9 MPCore processor is the first ARM processor to combine the Cortex application-class architecture with multiprocessing capabilities for scalable performance. The Cortex-A9 provides enhanced multicore technology that includes an Accelerator Coherence Port (ACP) for increased system performance and lower system power; an Advanced Bus Interface Unit for low latency in high-bandwidth devices; Multicore TrustZone technology with interrupt virtualization to enable hardware-based security and enhanced paravirtualization solutions; and a Generalized Interrupt Controller (GIC) for software portability and optimized multicore communication. Since performance doesn’t always scale linearly with an increasing number of cores, ARM paid particular attention to scalability. The Cortex-A9 MPCore multicore processor demonstrated near linear scalability in a variety of EEMBC (Embedded Microprocessor Benchmark Consortium) benchmarks, with additional processor units providing up to four times the performance of a comparable single core processor. Both ARM Cortex-A9 processors include the ARM application-specific architecture extensions, including DSP and SIMD extensions and Jazelle, TrustZone and Intelligent Energy Manager (IEM) technologies. 
In addition, ARM has developed a full range of supporting technology, including high-performance single and double precision floating-point instructions; support for the ARM NEON advanced SIMD instruction set first introduced with the Cortex-A8 processor for accelerated media and signal processing functions; a comprehensive set of PrimeCell fabric IP components, including a dynamic memory controller, a static memory controller, an AMBA 3 AXI configurable interconnect and an optimized L2 Cache Controller; ARM Mali graphics processing units; and software support from ARM RealView SoC Designer and the RealView Development Suite. The ARM Cortex-A9 single core and ARM Cortex-A9 MPCore processors are available for licensing now along with supporting technology. ARM has announced that several Partners have already selected the Cortex-A9 processors, including NEC Electronics, NVIDIA, Samsung, STMicroelectronics and Texas Instruments. So expect to see these cores coming to a portable device near you in the fairly near future. ARM Inc. Sunnyvale, CA. (408) 734-5600. [].

MicroTouch is Going Mobile


the Possibilities

3M Touch Systems

MicroTouch™ Flex Capacitive Touch Sensors for Mobile Applications

• Nearly Invisible ITO

Proprietary index matching technology to minimize ITO visibility

• Ultrathin Substrate

0.05 mm PET substrate enables compact design

• Creative Form Factors

Allows designers the freedom to explore a myriad of shapes

• High Volume Production

Roll process is capable of producing millions of units per month

Learn more about MicroTouch Going Mobile by calling 888-659-1080 or visit for details.

3M © 2007 MicroTouch is a trademark of the 3M Company.

design idea

Low-Cost Circuit Converts Clock to Low-Distortion Sinewave

by Leo Sahlsten, Maxim Integrated Products Inc., Finland

A simple, low-cost circuit (Figure 1) uses the existing clock in a digital system to generate low-distortion audio signals. Because most digital-system clocks are derived from crystal oscillators, the resulting sinewaves are stable and accurate. The most obvious approach is to divide the clock frequency down to the required audio frequency, and then filter out the harmonics. A squarewave of 50% duty cycle, for instance, contains only odd harmonics (3rd, 5th, 7th, etc.), and their amplitudes decrease with frequency: the 3rd-harmonic amplitude is 1/3 that of the fundamental, the 5th is 1/5 that of the fundamental, and so on. Filter circuits give better results if you first attenuate the input signal's highest-amplitude unwanted harmonics. This job is easily accomplished with a ring counter (U2) and a simple weighted-resistance network that attenuates all harmonics below the 9th by at least 70 dB (Figure 2). An 8th-order lowpass, switched-capacitor elliptic filter (U3) removes most of the remaining harmonics. U3's corner frequency is set by the input clock as fclock/100. Ring counter U1 divides the incoming CMOS-level clock signal by ten. The second ring counter (U2) also divides the clock by ten, but its outputs are summed by a weighted-resistance network to produce a 9-step approximation of a sinewave. That waveform is further filtered by U3, which attenuates all harmonics below the noise level. The circuit's input signal (clk in) serves as a clock for U3. To achieve the lowest distortion, U3's input should be biased to Vdd/2, and its input signal attenuated to 2.2V peak. This attenuation is accomplished with a voltage divider consisting of the weighting network's output resistance and the filter IC's input resistance (R5 and R6 in parallel). Below 10 kHz, the circuit shown achieves distortion levels below 0.01%.

[Figure 1: This inexpensive circuit derives a low-distortion sinewave from a 50%-duty-cycle clock signal.]

[Figure 2: A simple resistance network in the Figure 1 circuit (R1-R4) greatly reduces harmonic distortion below the 9th harmonic (harmonic amplitudes versus frequency in kHz).]
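The harmonic cancellation behind the weighted-resistance network can be checked numerically: an ideal staircase built from 10 samples per period of a sine contains no 3rd, 5th or 7th harmonic at all; the first survivors are the 9th and 11th, at roughly 1/9 and 1/11 of the fundamental. This pure-Python sketch models the idealized staircase, not the actual resistor values of the circuit:

```python
import cmath
import math

def staircase_wave(steps=10, oversample=100):
    """One period of a zero-order-hold (staircase) approximation of a sine,
    as an idealized model of the ring counter plus resistor network."""
    period = steps * oversample
    return [math.sin(2 * math.pi * (n // oversample) / steps)
            for n in range(period)]

def harmonic_magnitude(x, k):
    """Magnitude of the k-th harmonic of one period of x (single-bin DFT)."""
    n_pts = len(x)
    acc = sum(x[n] * cmath.exp(-2j * math.pi * k * n / n_pts)
              for n in range(n_pts))
    return 2 * abs(acc) / n_pts

x = staircase_wave()
fund = harmonic_magnitude(x, 1)   # fundamental
h3 = harmonic_magnitude(x, 3)     # 3rd harmonic: cancelled by the 10-step shape
h9 = harmonic_magnitude(x, 9)     # 9th harmonic: first survivor, about 1/9
```

In the real circuit the resistor tolerances limit how deep the cancellation goes, which is why the article quotes "at least 70 dB" rather than perfect suppression, and why the elliptic filter still follows the network.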






Maxim Integrated Products, Inc. Sunnyvale, CA. (408) 737-7600. [].


products for designers

MIMO RF Test Solutions

Keithley Instruments, Inc. has released the industry's leading 4X4 MIMO (multiple-input, multiple-output) RF test system for R&D and production testing of next-generation RF communications equipment and devices. Keithley's MIMO RF test system consists of its new Model 2920 Vector Signal Generator (VSG) and Model 2820 Vector Signal Analyzer (VSA), Model 2895 MIMO Synchronization Unit and powerful MIMO Signal Analysis Software. No other MIMO test system in the industry offers support for 4X4 MIMO applications; support for a multitude of commercial standards, including cellular, WiMAX and WLAN; +/-1 nanosecond signal sampler synchronization; less than 1 nanosecond peak-to-peak signal sampler jitter; and less than 1 degree of peak-to-peak RF-carrier phase jitter. By allowing tight multi-unit synchronization with these high-performance measurement specifications, the new Keithley system can support MIMO measurements on demanding signals such as 802.11n 40 MHz WLAN MIMO and 802.16e Wave 2 Mobile WiMAX. The new Model 2920 Vector Signal Generator is available in two configurations with maximum frequencies of 4 GHz or 6 GHz and can generate signals as low as 10 MHz. An optional 80 MHz arbitrary waveform generator bandwidth with 100 Msamples of waveform memory gives users the capability of testing a vast array of commercial communications signals, including GSM, EDGE, W-CDMA, cdma2000, SISO WLAN and the industry's most demanding 802.11n 40 MHz WLAN MIMO signal. Keithley's new Model 2820 Vector Signal Analyzer comes with 40 MHz of bandwidth as standard in either a 4 GHz or 6 GHz configuration. The Model 2820 is highly versatile and can test a range of signals, including GSM, EDGE, W-CDMA (uplink/downlink) and cdma2000, along with a multitude of WLAN signals, including the 802.11n 40 MHz WLAN MIMO signal in both MIMO and SISO configurations.

The tight synchronization of signals and low sampler and RF-carrier phase jitter featured in Keithley's MIMO RF test solution allow for highly accurate and repeatable measurements that ensure high product quality and high production yields. In addition, the Model 2820 and Model 2920 can be set up in 2-, 3- or 4-channel configurations. These dual-purpose instruments can be used either as stand-alone instruments or as part of a 4X4 MIMO test system, so users don't need to dedicate individual signal analyzer or signal generator units to a task. The Model 2895 MIMO Synchronization Unit provides highly synchronized signals to the system instruments, which allows for up to 4X4 MIMO test synchronization. This gives the system a highly precise and stable alignment between up to four signal analyzers and generators. The Model 2895 distributes common signals such as a local oscillator, a common clock and a precise trigger to all the instruments connected to the system, and allows accurate and repeatable measurements of OFDM (orthogonal frequency-division multiplexing) MIMO signals. Keithley's Model 2820 RF Vector Signal Analyzer starts at $22,500 (4 GHz) and is available in four to six weeks ARO. The Model 2920 RF Vector Signal Generator begins at $17,500 (4 GHz) and is available in six to eight weeks ARO beginning November 1, 2007. The Model 2895 MIMO Synchronization Unit is priced at $9,900 and will be available in four to six weeks ARO as of December 1, 2007. Lastly, the Model 280111 WLAN 802.11n MIMO Signal Analysis Software is $9,500 and is available in two weeks ARO beginning January 1, 2008. Full pricing information for various configurations is available online.

The MathWorks Introduces RF Blockset 2
The MathWorks has introduced RF Blockset 2, which extends Simulink with a library of blocks to model the behavior of linear and nonlinear radio frequency (RF) components (filters, transmission lines, amplifiers and mixers) by supporting the widespread Agilent standards for large-signal scattering parameters in system-level verification models. In addition to enabling access to industry-standard data file formats for network parameters and noise properties, such as S2P, Y2P, Z2P and H2P, RF Blockset 2 lets engineers import system-level verification models in the S2D, P2D and AMP formats, which specify not only the network parameters and noise properties, but also the nonlinear properties of a component. Engineers can obtain S2D and P2D files from component manufacturers, from measured data, and from the Verification Model Extractor feature in Advanced Design System (ADS) from Agilent. "RF Blockset helps engineers from the communications systems world collaborate with engineers from the RF world," said Colin Warwick, technical marketing manager at The MathWorks. "It facilitates communication across teams in different disciplines that each use different languages and tools. For example, wireless communication system engineers use Simulink with Communications Blockset to create an executable specification, but RF engineers use a transistor-level circuit simulator for their RF subsystem. With RF Blockset, the system engineer can generate specifications that the RF engineer can use, and the RF engineer can extract verification models that the systems engineer can use." RF Blockset is available immediately for the Solaris, Linux, Microsoft Windows and Macintosh platforms. U.S. list prices start at $2,000.

Keithley Instruments, Inc., Cleveland, OH. (440) 248-0400. [].

Magma Design Automation, Inc., Santa Clara, CA. (408) 565-7500. [].



The MathWorks, Inc., Natick, MA. (508) 647-7000. [].

Magma and UMC Deliver Physical Verification and DFM Solution for 65 nm

Magma Design Automation Inc. and UMC have announced a broad physical verification and design for manufacturability (DFM) solution for 65 nanometer (nm) designs. The two companies have completed joint qualification of Magma’s Quartz DRC, Quartz LVS and Quartz DFM for UMC’s advanced processes, along with development of foundry-validated runsets and models that support the flow, which is tuned to run smoothly and to detect design problems so they can be corrected quickly. With the software, runsets and models, designers can accelerate time-to-market and improve manufacturability of chips targeted for UMC’s advanced 65 nm process technology. According to Magma, Quartz DRC and Quartz LVS are the industry’s first fully scalable design rule checking (DRC) and layout vs. schematic (LVS) products. Leveraging a unique architecture and advanced modeling capabilities, this physical verification solution allows affordable, massively distributed processing. With this technology and the right number of processors, virtually any design can be physically verified in less than two hours. Quartz DFM allows designers to analyze and characterize the timing and power impact of sources of manufacturing variability, such as lithography and CMP. With this, designers can reduce the guard-banding required by traditional design flows, allowing significant gains in timing and power performance.

Quickfilter Technologies, Inc. has announced a reference design that allows a single QF1D512 Simple and versatile FIR engine (SavFIRe) IC to be configured to provide multiple-channel filtering in a single device. Using this solution, designers can implement audio equalization for both the left and right channels. In the two-channel equalizer configuration, the QF1D512 SavFIRe chip can implement, as an example, a 200 Hz to 20 kHz band-pass filter. The filter will reject better than 24 dB per octave in the transition band based on a 44.1 kHz sampling rate. With Quickfilter’s QuickPro design tools, the user can also use a freeform editor for a virtually infinite array of “sound shaping” possibilities. This configuration can be applied to both wired and wireless audio systems, including wireless speakers, stereo headsets, docking stations for iPods and other MP3 players, satellite radio systems, and audio systems for stereo networks. Announced in September 2006, the QF1D512 SavFIRe chip allows systems designers to quickly and easily add precision digital filtering to an application. The chip can simply be added between an existing analog-to-digital converter (ADC) and the host controller (microcontroller, microprocessor, digital signal processor or field programmable gate array), or can be connected as a coprocessor device for controllers with embedded ADCs. The QF1D512 SavFIRe chip is packaged in a 3 mm x 3 mm QFN package, is characterized over the industrial temperature range and is available now in production quantities. Quickfilter Technologies, Inc., Allen, TX. (214) 547-0460. [].
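A band-pass response like the one described can be prototyped numerically before committing coefficients to a hardware FIR engine. The sketch below is illustrative only (it is not Quickfilter’s QuickPro output): a windowed-sinc design at the article’s 44.1 kHz rate, with the 511-tap length an assumption chosen to suit a small FIR engine:

```python
import numpy as np

fs = 44100.0                 # sampling rate (Hz), as in the article
f_lo, f_hi = 200.0, 20000.0  # band-pass edges (Hz)
N = 511                      # tap count (odd, for a symmetric linear-phase FIR)

def lowpass(fc, n, fs):
    """Hamming-windowed sinc low-pass prototype with cutoff fc."""
    m = np.arange(n) - (n - 1) / 2
    return (2 * fc / fs) * np.sinc(2 * fc / fs * m) * np.hamming(n)

# Band-pass = wide low-pass minus narrow low-pass
h = lowpass(f_hi, N, fs) - lowpass(f_lo, N, fs)

def gain_db(h, f, fs):
    """Magnitude response (dB) of FIR taps h at frequency f, via the DTFT."""
    w = 2 * np.pi * f / fs
    H = np.sum(h * np.exp(-1j * w * np.arange(len(h))))
    return 20 * np.log10(np.abs(H))

print(gain_db(h, 1000.0, fs))  # in-band: close to 0 dB
print(gain_db(h, 50.0, fs))    # below the band: strongly attenuated
```

Once the simulated response meets the spec, the taps would be quantized and loaded into the filter engine.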

Audio Amplifier with Spread Spectrum and Integrated Boost Architecture Reduces EMI Sensitivity in Portable Applications

Ultra-Compact Input Device with Multi-Mode Linear and 2D Navigation for Handheld and Ultra-Miniature PC Applications

Avago Technologies has announced a thin, ultra-compact input device solution that combines a navigation pad module with a motion sense and interface integrated circuit (IC) to provide a mouse-like pointing solution for use in a variety of mobile devices. Typical applications include mobile phones, ultra-miniature PCs, digital still cameras, computer peripherals and other handheld and gaming applications. With a low profile height of 1.7 mm and a diameter of 14 mm, this new ultra-compact navigation pad from Avago incorporates a self-centering button with click function. Based on embedded capacitive sense technology, this miniature input module will enable designers to develop input devices for mobile devices that can greatly enhance the user’s experience by providing analog cursor control for Web browsing, gaming or digital camera applications. Avago’s latest input device solution combines its AMRT-1410 Ultra-Compact Navigation Pad Module and AMRI-2000 Mobile Navigation Sense and Interface IC. The AMRT-1410 incorporates a self-centering snap-on button, which can be customized to meet industrial design requirements, with an integrated click function in an ultra-compact package. The AMRI-2000 IC enables the input module to operate either as a four- or eight-way switch with scrollwheel functionality or as an analog mouse or joystick, providing end-users with new and unique navigation experiences such as rapid scrolling and panning of menus and long lists, mouse-like navigation of Web pages, drag-and-drop operations, and analog joystick-like control to make mobile gaming more enjoyable. In addition, because capacitive sense technology is incorporated into this input device solution, bare skin contact is not required for operation of the navigation pad module.
As a result, end-users have the added benefit of being able to operate the input device while wearing gloves. The operating mode of the AMRT-1410 and AMRI-2000 input device solution can be dynamically reconfigured to provide the best user navigation experience for any active application. Moreover, the backward-compatible digital switch mode with scroll function provides the user with a well-known interface for menu and list navigation. For example, the navigation pad can function as an analog joystick for a gaming application, as a digital four-way switch with scrollwheel functionality for menu navigation and phonebook or MP3 list scrolling, and as a mouse for Web page navigation and image manipulation. Avago’s AMRT-1410 and AMRI-2000 provide designers with an input module that offers tactile feedback for intuitive use, an integrated tactile click and a capacitive sensor IC. The IC interfaces to the host over an I2C-compatible or SPI serial bus. Moreover, this IC was designed specifically for low battery power operation and includes programmable auto wake-up power saving modes to conserve power and extend battery life. Avago’s AMRT-1410/AMRI-2000 input device solution is competitively priced below $3 per bundle in high volumes. Samples and production quantities are available now.

Avago Technologies, San Jose, CA. (408) 435-7400. [].

National Semiconductor Corporation has introduced the industry’s first single-chip 3W mono Class D audio amplifier with spread spectrum technology and an integrated boost converter. The LM48511 Boomer amplifier, from National’s PowerWise energy-efficient product portfolio, joins the recently announced 1.2W LM48510 as the second product in a new family of Boomer Class D audio amplifiers that allow portable products to operate at constant high-level output power even as batteries deplete to lower voltages. The LM48511 drives an 8-ohm speaker load at 3W continuous power to enable louder speaker volumes for manufacturers of push-to-talk cell phones, portable global positioning systems (GPS) and MP3 docking stations with portable speakers, as well as a broad range of other battery-powered applications. The LM48511 uses National’s unique spread spectrum technology to lower electromagnetic interference (EMI) emissions more than 11 dB below the Federal Communications Commission (FCC) limit. The LM48511’s 80 percent efficiency at 5V extends battery life when compared to boosted Class AB amplifiers, and its independent regulator and amplifier shutdown controls also optimize power savings by disabling the regulator when high output power is not required. The device’s small footprint reduces printed circuit board size and lowers development costs. The LM48511 high-efficiency Class D audio power amplifier provides 2.5W to 3W of continuous power into an 8-ohm speaker when operating from a 3V to 5V power supply with less than 1 percent total harmonic distortion plus noise (THD+N). The gain of the LM48511 is set by external resistors, which allows independent gain control from multiple sources by summing the signals. The LM48511 features a low-power consumption shutdown mode as well as output short-circuit and thermal overload protection.
Advanced pop-and-click circuitry eliminates output transients that would otherwise occur during power-up or shutdown cycles. The amplifier also includes selectable feedback networks that allow the designer to scale power and manage efficiency. Available now in a 24-pin LLP package, the LM48511 is priced at $1.75 in 1,000-unit quantities. National Semiconductor Corporation, Santa Clara, CA. (408) 721-5000. [].
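The EMI benefit of spread spectrum can be illustrated numerically: dithering the switching frequency spreads the energy that a fixed-frequency Class D output stage concentrates into discrete spectral lines, lowering the peaks that emissions limits are measured against. In the sketch below, the 300 kHz carrier and 8 percent triangle-wave spread are illustrative assumptions, not LM48511 specifications:

```python
import numpy as np

fs = 10e6                            # simulation sample rate (Hz)
t = np.arange(int(0.01 * fs)) / fs   # 10 ms observation window
f0 = 300e3                           # nominal switching frequency (assumed)

# Fixed-frequency carrier: all switching energy lands on discrete spectral lines
fixed = np.sign(np.sin(2 * np.pi * f0 * t))

# Spread-spectrum carrier: dither the frequency +/-8% with a 1 kHz triangle wave
tri = 2 * np.abs(2 * ((1e3 * t) % 1.0) - 1) - 1
phase = 2 * np.pi * np.cumsum(f0 * (1 + 0.08 * tri)) / fs
spread = np.sign(np.sin(phase))

def peak_db(x):
    """Peak spectral magnitude (dB) of the Hann-windowed signal."""
    X = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    return 20 * np.log10(X.max())

print(peak_db(fixed) - peak_db(spread))  # positive: spreading lowers the peak
```

With these numbers the spread carrier’s spectral peak lands well over 6 dB below the fixed carrier’s, the same mechanism behind the quoted reduction in measured emissions.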



products for designers

Programmable Digital Filter for Multiple Inputs


Single-Inductor Low VIN Buck-Boost DC/DC Controller Optimized for Medium Power Outputs

Linear Technology Corporation has introduced the LTC3785, a 96% efficient buck-boost switching regulator controller that operates from input voltages above, below or equal to the output voltage for powering tablet PCs, handheld instruments, wireless modems, portable media players and a wide variety of single- or dual-cell Li-Ion or multi-cell alkaline/NiMH powered devices. Medium power buck-boost circuits have traditionally relied on transformers (SEPIC) or two cascaded DC/DC converters, one for the step-up (boost) and one for the step-down (buck) conversion. The LTC3785 requires only a single inductor, operates from an input range of 2.7V to 10V, offers an identical output range, and can deliver up to 50W of output power. Operating with 4-switch synchronous rectification, the LTC3785 provides seamless transitions between the buck and boost operating modes. The LTC3785’s proprietary topology and control architecture employs MOSFET RDS sensing for forward and reverse current limiting, yielding unrivaled efficiency. A sense resistor may be used when increased accuracy is desired. Moreover, the LTC3785 incorporates Burst Mode operation, which reduces light-load quiescent current to less than 100 uA, a valuable feature in battery-powered systems. In addition, fault protection is provided for over voltage, over current and short circuit conditions in all operating modes. The operating frequency can be programmed from 100 kHz to 1 MHz with a single resistor, and the LTC3785 also incorporates true output disconnect during shutdown. The LTC3785 is offered in a 4 mm x 4 mm QFN-24 package. The 1,000-piece price is $3.56 each.

Summary of Features: LTC3785
• Single Inductor Architecture Allows Operation with the Input Voltage Above, Below or Equal to the Output Voltage
• 2.7V to 10V Input and Output Voltage Range
• 4-Switch, Synchronous Operation for up to 96% Efficiency
• RDS Current Sensing Enhances Efficiency
• 100 uA No Load Quiescent Current
• 100 kHz to 1 MHz Programmable Constant Frequency Operation
• Over Voltage and Over Current Protection
• True Output Disconnect During Shutdown
• All N-Channel MOSFET Power Switches

Linear Technology, Milpitas, CA. (408) 432-1900. [].

Software Performs Serial Data Link Analysis

Tektronix, Inc. has announced a new end-to-end high-speed serial data analysis (SDLA) software package with test capabilities extending from a transmitter to a receiver and including the connecting channel. The new 80SJNB Advanced software runs on the Tektronix DSA8200 Digital Serial Analyzer. 80SJNB Advanced, combined with the DSA8200 platform’s TDR/TDT and S-parameter support through iConnect software, provides engineers with the first complete Serial Data Link Analysis (SDLA) package to ensure a readable signal reaches the receiver. The latest high-speed serial technologies are at the core of computer, consumer electronics and communications industry designs and require test equipment capable of higher performance and more extensive analysis. The new 80SJNB Advanced software includes feed-forward and decision feedback equalization (FFE and DFE) for a virtual view of the signal as it appears at the comparator inside the receiver. Emulation of the interconnect channel enables testing the transmitter performance against multiple interconnects. Support for fixture de-embedding facilitates virtual probing at inaccessible points. The complete SDLA offering provides the fundamental measurements needed to validate compliance with high-speed serial standards such as 10 GbE Ethernet, PCI Express and SATA, enabling the development of higher performance products for the new digital world.

Complete Link Impairment Compensation
Equalization is a broad term for several techniques of manipulating the signal shape in order to overcome frequency-dependent loss of the channel. This loss changes the shape of the NRZ data signal at the receiver from the desired square wave to a severely distorted, closed-eye waveform. 80SJNB Advanced provides the important equalization methods of FFE and DFE on the receiver side, and supports generation and measurement of pre-emphasis and de-emphasis on the transmitter side. Another capability of the software is channel emulation, providing customers a push-button way to see the waveform impairments due to channel (interconnect) transmission loss. Through channel emulation, engineers are able to acquire the signal at the transmitter output, distort it by an (emulated) channel, be it a backplane, connector or anything that can be described with TDR/TDT or with S-parameters, and verify the performance at the channel end, without waiting for the physical hardware’s availability. Final analysis is performed using 80SJNB Advanced jitter and noise decomposition and BER eye diagrams. With 80SJNB Advanced, design engineers gain the ability to compare different transmitter-to-channel-to-equalizer combinations, resulting in improved design and verification. The waveform and the eye diagram at any point are not only viewable, but also available for a complete jitter, noise and BER analysis.

Package Also Provides SSC Support for Advanced Transmitter Characterization
Acquisition and advanced analysis of complex signals is increasingly needed on transmitters with spread spectrum clocks (SSC). 80SJNB software and the DSA8200 now provide engineers a sampling oscilloscope with an unmatched ability to perform SSC acquisitions and SSC jitter analysis.
SSC is widely used in desktop and laptop PCs and is incorporated into standards including SATA and PCI Express. 80SJNB Advanced software is available with a new DSA8200 or as an upgrade if a customer has already purchased 80SJNB or 80SJNB Essentials. The U.S. MSRP for 80SJNB is $15,800 when ordered with a DSA8200. Existing customers of 80SJNB software can download 80SJNB Essentials free of charge, or can upgrade to 80SJNB Advanced for $4,900, U.S. MSRP. Tektronix, Inc., Beaverton, OR. (503) 627-4027. [].
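The feed-forward equalization that 80SJNB Advanced emulates can be sketched numerically: pass NRZ symbols through a lossy channel that closes the eye, then fit FIR taps that undo the inter-symbol interference. The three-tap channel model and least-squares fit below are illustrative assumptions, not Tektronix’s algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
bits = rng.integers(0, 2, 2000) * 2 - 1   # random NRZ symbols, +/-1

# A lossy channel smears each symbol into its neighbors (ISI), closing the eye
channel = np.array([1.0, 0.6, 0.3])       # minimum-phase, so a causal inverse exists
rx = np.convolve(bits, channel)[:len(bits)]

# Feed-forward equalizer: least-squares fit of FIR taps that undo the channel
n_taps = 9
A = np.array([[rx[i - k] if i - k >= 0 else 0.0 for k in range(n_taps)]
              for i in range(len(bits))])
taps, *_ = np.linalg.lstsq(A, bits.astype(float), rcond=None)
eq = A @ taps

print(np.max(np.abs(rx[20:] - bits[20:])))   # large ISI before equalization
print(np.max(np.abs(eq[20:] - bits[20:])))   # nearly clean after
```

A DFE would additionally feed back already-decided symbols; the FIR fit above captures only the feed-forward half of the receiver-side picture.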



2.7W Constant Output Power Class-D Audio Amplifier

Texas Instruments Incorporated (TI) has introduced a monolithic, filter-free, Class-D audio power amplifier with an integrated boost converter that provides constant output power for portable applications such as personal navigation devices, PDAs, mobile phones, portable media players and handheld gaming devices. The combination of the 2.7W Class-D amplifier and integrated boost converter provides 85 percent overall efficiency, with little heat dissipation, to prolong battery life when the user is playing music or in a phone conversation.

The TPA2013D1 generates high output power from a low supply voltage without distorting the audio, and can also supply power to external devices such as TI’s TPA2010D1 and other similar Class-D amplifiers. The device has a wide supply voltage range of 1.8V to 5.5V to simplify power supply design and allow for direct connection to the battery. The new device provides a maximum output power of 2.7W across a 4-ohm load or 2.2W across an 8-ohm load, in addition to providing an adjustable constant output power of up to 1.5W across the entire Lithium-Ion battery range of 2.3V to 4.8V. This capability makes the audio output power insensitive to battery voltage fluctuations, thereby maintaining audio quality and volume as the battery discharges. The innovative design of the TPA2013D1 eliminates the requirement for some external components, enabling a total solution size of just 6.5 mm x 6.5 mm, which includes the amplifier, boost converter and external components. This represents a board area reduction of more than 50 percent compared to typical solutions. The combination of size, features and performance reduces overall system cost and allows for a sleeker, more differentiated end product. Several key features of the TPA2013D1 help increase system audio quality. For example, all internal modules run off the same reference clock, silencing potential audible beat frequencies that could occur when using a separate discrete amplifier and boost converter. The synchronized clock and very high power supply rejection ratio (PSRR), 95 dB at 217 Hz, serve to further reduce noise in the system, avoiding “buzz” noises on amplifier outputs that can often be generated from RF power amplifiers in GSM phones, for instance. The TPA2013D1 provides thermal and short circuit protection with an auto recovery option to ensure excellent reliability and robust operation.
The device’s pinout has been optimized to reduce EMI, providing easy layout and meeting EMI requirements of the Federal Communications Commission (FCC). The TPA2013D1 Class-D audio power amplifier is available now in a 2.3 mm x 2.3 mm, 16-ball WCSP package or in a 20-pin, 4 mm x 4 mm QFN package. Pricing in 1,000-unit quantities is $1.55 for the WCSP and $1.45 for the QFN package. Samples and EVMs are available for 24-hour delivery through the TI Web site. Texas Instruments Inc., Dallas, TX. (800) 336-5236. [].
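The case for the integrated boost stage is simple arithmetic: delivering the quoted 2.7W into a 4-ohm load requires a larger voltage swing than a Li-Ion cell provides directly, so the supply rail must first be raised. A quick check, assuming a sinusoidal output:

```python
import math

R = 4.0    # speaker load (ohms)
P = 2.7    # desired continuous output power (W)

v_rms = math.sqrt(P * R)         # RMS voltage needed across the load
v_peak = math.sqrt(2) * v_rms    # peak swing the output stage must deliver

print(round(v_rms, 2), round(v_peak, 2))   # about 3.29 V RMS, 4.65 V peak
```

Since roughly 4.65V exceeds even a fully charged 4.2V cell, boosting the rail is what decouples output power from battery voltage, giving the constant-output-power behavior described above.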




First Global TV Standards CMOS Tuner IC with the Performance of a Can Tuner

MaxLinear, Inc. has announced the MxL5007, a TV tuner IC that meets all global digital and analog cable and terrestrial TV standards. The MxL5007 incorporates key CMOS circuit design and radio architecture technology breakthroughs that enable “can” tuner equivalent performance in digital/analog televisions, terrestrial and cable set-top box (STB) applications, and portable TV applications. The device is the latest in MaxLinear’s MxL5000 family of tuner ICs. It is based on the company’s proprietary digital CMOS implementation, which not only exceeds the performance of exotic SiGe BiCMOS process-based tuner solutions, but also offers the low cost, low power consumption and low heat dissipation that are only achievable using CMOS technology. The MxL5007 comes in a 5 mm x 5 mm 32-pin QFN package and consumes 300 mW of power, which is three to six times lower than competing products. The MxL5007 even exceeds the exacting requirements of the ATSC A/74 Receiver Performance Guidelines. Until now, the stringent ATSC A/74 requirements could only be met by traditional can tuners or higher power SiGe BiCMOS tuners using expensive external tracking and/or SAW filters. Exceeding the A/74 requirements without the use of external tracking and SAW filters is a key accomplishment of the MxL5007. With this level of performance, module manufacturers can reduce the size, power and cost of their modules for even high-end products like digital televisions. The MxL5007 also provides the performance and features necessary for on-board implementations in digital/analog television, set-top box and personal video recorder (PVR) applications, along with the small size necessary for delivering live television in cars and in portable devices such as PCs, DVD players, PDAs and digital media players.
The MxL5007’s support of global TV standards allows manufacturers to easily ship products to different world markets by mating MaxLinear’s global TV tuner with the appropriate standard-specific demodulator, or by using a multi-standard demodulator. The MxL5007 can receive an input signal spanning a continuous frequency band from 44 MHz to 1002 MHz from a 75-ohm antenna or cable. It supports every major cable and terrestrial digital and analog TV standard, including ATSC, DVB-T, DVB-H, ISDB-T 13-segment, DVB-C, DMB-T(H), 64/256 QAM for U.S. cable applications, analog cable, DOCSIS 3.0, NTSC, PAL and SECAM. The MxL5007 has an integrated low noise amplifier (LNA), on-chip tracking and PLL loop filters, automatic gain control, LO generation and channel selectivity functions for simplified and low-cost board-level design. By eliminating the need for external SAW filters, external tracking filters, PLL loop filters and external loop-through circuitry, and by requiring no factory calibration or adjustments, the MxL5007 greatly reduces BOM and manufacturing costs for end customers. Additionally, the MxL5007 has a flexible output intermediate frequency (IF) ranging from 4 to 44 MHz to support many demodulators, and has an integrated on-chip loop-through function that vastly simplifies STBs requiring RF out and multi-tuner applications such as PVRs and televisions supporting picture-in-picture or similar functionality. Engineering samples of the MxL5007 and evaluation kits will be available in Q4 2007, with production quantities expected in Q2 2008. MaxLinear, Inc., Carlsbad, CA. (760) 692-0711. [].

RealView Profiler for Embedded Software Analysis

At this month’s ARM Developers’ Conference, ARM introduced the RealView Profiler, a unique tool specifically designed to enable non-intrusive analysis of software performance and code coverage of real system workloads running over minutes, hours or days. Using the RealView Profiler, developers can typically improve the performance of their application by more than 20 percent, while at the same time reducing ROM size requirements by a similar amount. The tool also includes comprehensive analysis of both statement and branch code coverage, enabling software testing to achieve and demonstrate 100 percent code coverage to ensure the highest levels of software validation. The RealView Profiler is designed to complement ARM’s compilation technology and will drive ARM processor-based devices to new performance levels. Based on the comprehensive ARM debug and trace infrastructure, the RealView Profiler delivers unrivalled insight into the performance of embedded device software. The RealView Profiler supports performance analysis from the early stages right until the end of the design cycle and, therefore, greatly reduces software development project risks. To enable this, the ARM RealView Profiler supports both hardware profiling, via the new RealView Trace 2 capture unit, and virtual platform profiling, via the ultra-fast RealView Real-Time System Models. The RealView Profiler is a plug-in to the industry-leading Eclipse Integrated Development Environment, where it provides a graphical user interface designed for ease of use with a familiar look and feel. This significantly increases productivity in the normally challenging optimization phase. The RealView Profiler can be used alongside RealView Development Suite and the GNU tools to provide performance and code-path coverage information.
The RealView Profiler provides its information for both the executed machine instructions and the original source code. This is essential for the success of the widely used practice of incorporating third-party software into the end product; without this a thorough impact analysis is exceedingly difficult.

The RealView Profiler offers unrivalled insight into the performance and behavior of ARM processor-based devices, providing detailed information on CPU interlocks, unexpected instruction delays, code efficiency and low-level instruction views mapped back to software developer’s source code annotated with performance information. This enables software developers to take full advantage of ARM processor-based devices in the shortest possible time. The RealView Profiler provides hardware profiling in combination with the new RealView Trace 2 unit using a unique streaming profiling technique. This enables continuous profiling for long periods of time at frequencies up to 250 MHz. This first release of the RealView Profiler will support the ARM926EJ-S, ARM1136JF-S, ARM1176JZF-S and Cortex-R4 processors, with more to be added in the next months. The RealView Trace 2 unit also supports classic 32-bit data and instruction trace at up to 400 MHz to enable advanced trace and debug using the ARM RealView Development Suite. The RealView Profiler also provides profiling without the need for actual hardware by including four RealView Real-Time System Models of the ARM926EJ-S, ARM1136JF-S, ARM1176JZF-S and Cortex-A8 processors. These models can execute at more than 200 MHz running on a standard PC platform and can be deployed across development teams at low cost. ARM Inc., Sunnyvale, CA. (408) 734-5600. [].




ceo interview Chris Rowen Tensilica

Over the years ARM has amassed system-level and software expertise over a wide range of vertical markets. How can a small- to mid-sized IP company compete with that kind of firepower?

ARM certainly has a strong, broad ecosystem in a number of areas. But in a number of the highest volume areas, it’s simply not suitable to run the key applications on a generic RISC processor. In the case of audio or video, it would be impossible—or at least terribly inefficient—to run that kind of task on an ARM processor. It’s just not suited to doing audio or video or heavy-duty DSP, whereas our cores are. The software that exists for doing that [on an ARM processor] is either for occasional use where the power inefficiencies don’t matter, or it’s some kind of high-level software that’s not directly tied to the actual data processing for audio or video. Typically, we will be several times more cycle efficient and four or five times more energy efficient than a high-performance ARM processor for audio; the difference in video is probably more like an order of magnitude. So the ecosystem that ARM has, while very good, generally doesn’t touch some of these critical processing functions. We’ve been able to develop a very strong ecosystem, for example, in leading-edge audio. I think we probably do more modern audio cores than anyone in the world. Now the ecosystem shoe is on the other foot.


Recently, Portable Design’s Editor-in-Chief John Donovan managed to catch up with Tensilica’s CEO Chris Rowen. Tensilica’s programmable Xtensa processors, long ensconced in Cisco routers, have more recently started appearing in large numbers in handsets from Motorola, Samsung and LG, thanks to licensees AMD/ATI, Samsung and NVIDIA. Since the interview took place the day before the annual ARM Developer’s Conference, the obvious first question was, “How do you (successfully) wrestle with an 800-lb. gorilla?”

You’ve recently had some design wins in the portable space, where ARM is the dominant player. Are you competing with them for sockets, or what’s your strategy for coexisting on the same SoC?

We probably coexist on silicon with ARM at least as much as anyone. But a lot of our design wins in the mobile space come in features that are complementary to the controller. In fact, we see at least as much of the growth in the semiconductor IP space coming from the data plane—the processors that do the significant audio, video and protocol processing—as from the management processors. In many of these designs there are also ARM processors, but we may have multiple Tensilica processors doing audio and video, for example. In some cases, there are also control functions that are being carved out of the main control processor. We’ve had significant design wins in Wi-Fi, Bluetooth and DSL, things that could be done by the control processor, but because of cost, performance and power considerations, they’re done more effectively in one of our small controller cores.



You’re choosing to address the data plane, which is usually the domain of DSPs. How do you compete with DSPs in those areas you’re trying to address?

The requirements for DSPs are much more diverse than the requirements for CPUs in control applications. The right answer for DSPs tends to vary a lot depending on whether you’re doing 16-bit comms or audio or 8-bit video. You don’t want a classic one-size-fits-all solution. Even within 16-bit communications, as people move to significantly more computationally intensive applications such as 3G wireless or WiMAX, the computational complexity is such that you can’t do it in a general-purpose DSP anymore. You really only have two choices. One is to build hardwired accelerator engines that sit beside the DSP and do the heavy lifting for OFDM, FFT, turbo decoding or low-density parity checks. There’s a whole host of algorithms that are beyond what you can do with the few hundred MACs you would put in a general-purpose DSP in a power-conscious chip. The other option is to do an application-specific DSP. We’ve had a lot of success with application-specific DSPs, because they’re much more closely tuned to a specific subclass of algorithms and are therefore much faster and/or much smaller than a general-purpose DSP of equivalent performance.

If a complex SoC uses IP from a variety of vendors, what challenges do you see in stitching all of this disparate IP together into a consistent design flow?

There are substantial challenges as people move to higher and higher levels of integration. There are broadly two categories of IP that matter. There’s the interface IP that has a significant analog component and is closely tied to the fab process; those sort of have to be treated as black boxes at the boundary of the design. There’s a relatively simple protocol by which the digital core of the design talks to its interfaces. The other major category is the big building blocks like processors and DSPs; it has a different set of integration issues. Most of that “star IP” is synthesizable; physically managing it is relatively straightforward. On the other hand, there’s a much higher level of demand on the modeling, programming and configuration of that IP. Having good models, having good programming tools, having consistent modeling and programming tools across all flavors of processors in your system is more and more valuable to the chip integrator. So we see people wanting to integrate a lot more subsystems with a lot more processors, but really, they want to have as few processor families and vendors as possible. A lot of what we are doing is unifying the data plane, so that the architecture for your audio has the same core DNA as what you’re using for video, DSPs and protocol processing.

Do you foresee consolidation in the IP market, and if so, who is going to be doing the consolidating?

I think there are two forces that are balancing out. One is—given the sheer complexity of these big designs—semiconductor designers have to think more and more about chip-level design and functionality, so they are increasingly biased toward outsourcing the component technology. As a result there are more opportunities for IP suppliers to find a ready audience just because there are so many blocks that are being demanded. That’s why the IP business is the fastest-growing segment of the silicon infrastructure. On the other hand, it’s clear that you also don’t want chaos. You don’t really want a dozen different suppliers. Customers want families of products that work together well. Broadening your product line is one form of consolidation, and one that we have been pursuing.

Noting MIPS’ recent acquisition of Chipidea—as well as ARM’s earlier acquisition of Artisan—have you identified any technologies that Tensilica might want to acquire to broaden your product offerings?

It’s a natural thing for the non-configurable processor guys to do this, because there’s only so far you can go with a controller product line, and it’s only natural to want to expand into higher-growth areas. But if [the acquisitions] aren’t closely related to the buyer’s core business, it’s hard to get full synergy out of them; a high percentage of acquisitions fail over time. That said, we’re always keeping an eye out for things that make sense to us.

Despite the growth you noted of the IP industry, there are a lot of small players, and some people have suggested that the IP business model won’t sustain real growth. How do you scale an IP business to get to $1 billion?

There are a couple of pieces you need to look at. One is, what are the forces that determine the size of a business, and how do you measure the size? You don’t really want to compare an IP company, which is in a 100% gross margin business, with a semiconductor company, which may be a 40% gross margin business, because a dollar of revenue in an IP business is also a dollar of gross margin. So when you look at an IP company, its financial clout is probably two to two-and-a-half times bigger than it would appear if you just looked at the revenue line. In addition, processor companies in particular tend to build entrenched franchises that give them longevity. When a semiconductor company adopts an architecture, resistance to changing to something else is pretty high. And so you have this ongoing investment, not only from the sponsor of an architecture but also from the customers, so that architecture tends to get more and more locked in. So those franchises, especially when you have a good royalty generation model, really become something that both investors and customers value a great deal, because they know that the company behind the architecture is going to be around for a long time. But that also means it takes a while to grow an IP company. You don’t have quick hits with semiconductor IP companies. ARM and MIPS are both overnight successes after 20-odd years.

How do you envision the semiconductor IP market changing over the next three to five years?

I think that one big trend will be that the leading players will be providing broad, flexible product lines that span both control and data planes, because every SoC will need some of both and it will be easier for the designer to get them from a smaller number of suppliers. Next, I think there are many new opportunities in analog and interface IP, because there are a lot of clever techniques that are being invented for novel signaling interfaces and memory types that will emerge in the IP space. I think that a lot of the foundation IP like memory technology and standard cell libraries will actually be tied very closely to the process technologies, so the foundries will be playing an increasingly strong role in driving the direction for cell libraries. Finally, there are also lots of opportunities for vertical knowledge. If you know all about cellular protocols or advanced video or audio, you’ve got something that can be widely leveraged. There are so many different product types and silicon suppliers because there is a diverse market that can’t be addressed by any one semiconductor IP player or chip manufacturer. Semiconductors and semiconductor IP are pretty distinct business models with complementary roles, and we expect to see the semiconductor IP space strengthen over time.



second opinion ESL in FPGAs

By Chris Eddington, Senior Technical Marketing Manager, Synplicity

Why is architectural optimization so critical in the ESL design flow?

The main benefit of doing model-based design is that it provides a higher level of abstraction not only for design capture but also for simulation, validation and verification of your algorithms. Our focus has been on DSP synthesis, or architectural synthesis. The reason people often throw so much away with code-generation methodologies is that they tend to simply copy the model. This is a typical failure of a lot of the graphical tools on the market: they either just generate code or focus only on creating simulation models, and implementation support is very limited. Architectural synthesis and optimization are really required to truly automate a high-level model-into-silicon flow.

What is Synplicity’s approach to architectural synthesis?

Our approach is to create a high-level model, using Simulink and/or Matlab, where you can quickly and easily capture the sample rate, quantization and all your desired behavior at a very high level. Once that’s done, you specify the type of architecture you want by providing constraints on pipelining, resource sharing, etc. Then we provide the ability to automatically synthesize an implementation into RTL. Using the constraints, you can create optimized RTL implementations with more serial or more parallel architectures. For example, let’s say your algorithm has 100 multipliers in it. You can apply a folding factor that will pull that down and implement it using only 20 multipliers by utilizing a higher clock speed, so that you can share the (fewer) available multipliers. And it’s all done without changing the model behavior or the algorithm verification. The powerful part of this approach is that you can get very high-quality RTL because you can optimize the architecture for a given target.
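The folding trade-off described above is simple arithmetic, and can be sketched in a few lines. This is an illustrative Python model, not Synplify DSP’s actual API; the `fold` function and its names are invented here. The idea: a folding factor of N time-multiplexes N multiply operations onto each physical multiplier by clocking the datapath N times faster than the sample rate.

```python
# Illustrative sketch (hypothetical names, not a tool API): a folding factor
# trades physical multipliers for datapath clock speed.

import math

def fold(ops_per_sample: int, folding_factor: int):
    """Return (physical_units, clock_multiple) for a given folding factor."""
    # Each physical unit performs `folding_factor` operations per sample
    # period, so fewer units are needed...
    physical_units = math.ceil(ops_per_sample / folding_factor)
    # ...but the datapath clock must run that many times faster than the
    # sample rate to keep throughput (and model behavior) unchanged.
    clock_multiple = folding_factor
    return physical_units, clock_multiple

# The article's example: 100 per-sample multiplies folded down to 20
# shared multipliers by running the datapath at 5x the sample clock.
units, clock = fold(ops_per_sample=100, folding_factor=5)
```

Because only the schedule changes, not the computation, the algorithm and its verification are untouched; the cost is a higher clock requirement, which is why the tool needs timing estimates for the target before choosing a folding factor.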

How does synthesizing at the architectural level affect timing closure?

What we do today is classic timing estimation: we compile some benchmarks in the target technology and use those to estimate timing when we do these architectural optimizations. However, in current versions of our tool, we now have the ability to call our downstream logic synthesis tools to get exact timing for a particular block that you’ve used in a high-level model.

Can Synplicity’s DSP tools optimize any FPGA vendor’s architecture?

With our tool, after you generate your model and verify it, you then pass it to our DSP synthesis tool, Synplify DSP. At that point, you can specify any target device Synplicity supports. You can indicate that you want Verilog or VHDL or both. Finally, you can specify your architectural optimizations. Our tool then goes off and figures out how to optimally implement your design and, for many blocks, it will make micro-architectural decisions for the particular architecture and timing constraints. If we run into a FIR filter block, for example, we need to know what the target technology is capable of doing for this block at the specified sample rate. In some cases a transposed form might be best, in others a direct form might yield better results. What we do next is call our Synplify Pro tool for that block, actually synthesize a set of circuits, get the exact gate-level performance from it and then use that as part of our architectural optimization choice. The benefit here is that we get exact timing information and the model gets implemented in a more optimal way based on the target technology.
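The direct-versus-transposed FIR choice mentioned above can be illustrated behaviorally. This is a sketch in Python (function names invented, not tool output): both forms compute the same convolution and differ only in where the registers sit, which changes the hardware critical path but not the results. That behavioral equivalence is what lets the tool pick a form purely on target timing.

```python
# Illustrative sketch: two FIR structures, identical outputs, different
# hardware structure. Names are invented for this example.

def fir_direct(coeffs, samples):
    """Direct form: delay line on the input, one long adder chain per output."""
    delay = [0.0] * len(coeffs)
    out = []
    for x in samples:
        delay = [x] + delay[:-1]          # shift the input into the delay line
        out.append(sum(c * d for c, d in zip(coeffs, delay)))
    return out

def fir_transposed(coeffs, samples):
    """Transposed form: registers between adders, so each addition is short."""
    state = [0.0] * (len(coeffs) - 1)
    out = []
    for x in samples:
        y = coeffs[0] * x + state[0]
        # Each register absorbs one product plus the next register's value.
        for i in range(len(state) - 1):
            state[i] = coeffs[i + 1] * x + state[i + 1]
        state[-1] = coeffs[-1] * x
        out.append(y)
    return out

taps = [0.25, 0.5, 0.25]                  # exact in binary, so results match bit-for-bit
data = [1.0, 2.0, 3.0, 4.0]
assert fir_direct(taps, data) == fir_transposed(taps, data)
```

In hardware terms, the direct form’s adder chain grows with the tap count while the transposed form keeps each stage short, so which one closes timing depends on the target technology and sample rate, exactly the information the gate-level synthesis pass feeds back.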

Why is Synplicity’s approach unique?

Synplicity has its own model library in Simulink, which is cycle- and sample-accurate to the implementation behavior, reflecting the potential micro-architectural optimizations that can be chosen. We have very good multi-rate support and we support vector arithmetic, which can make parallel and multichannel algorithms extremely concise. We also have an M compiler, so you can embed M-language blocks and compile and synthesize them into RTL as well. We believe this architectural synthesis approach is unique and a critical ingredient for ESL tools to become successful. We call it DSP synthesis because we’re focused on DSP algorithms right now. Our modeling libraries are focused on things like multipliers, math functions, wireless communications, FFTs and multi-rate filtering. Synplicity provides a quality synthesis approach to implementation, which is what you have to have to make this work. Second, everything is cycle- and bit-accurate modeling, so the verification flow is also maintained. We also create a test bench: we capture the input and output data in the Simulink/Matlab environment and create the test vectors in a batch file to run with popular simulators, so that you can verify at the RTL level. And when you get down to the gate level, you can rerun it to make sure it’s all working. This is also critical to maintaining the verification flow.
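The vector-capture flow described above (record golden input/output data from the high-level model, then replay it against the RTL and gate-level netlists) can be sketched abstractly. This is a hypothetical Python illustration; the function names and the stand-in "model" are invented, not Synplicity's format:

```python
# Hypothetical sketch of golden-vector verification: run the high-level model
# once, record (stimulus, response) pairs, then replay them against another
# implementation and compare. All names here are invented for illustration.

def capture_vectors(model, stimulus):
    """Run the golden model and record (input, output) pairs."""
    return [(x, model(x)) for x in stimulus]

def check_implementation(impl, vectors):
    """Replay captured vectors against a second implementation.

    Returns a list of (input, expected, actual) mismatches; empty means
    the two implementations agree on every captured vector.
    """
    return [(x, want, impl(x)) for x, want in vectors if impl(x) != want]

golden = lambda x: (3 * x + 1) & 0xFFFF   # stand-in for the Simulink model
rtl    = lambda x: (3 * x + 1) & 0xFFFF   # stand-in for the simulated RTL

vectors = capture_vectors(golden, range(16))
mismatches = check_implementation(rtl, vectors)
```

The same captured vectors serve at every level of refinement, which is the point of keeping the models cycle- and bit-accurate: RTL and gate-level runs can be checked against the identical golden data.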



event calendar

11/05-08/07  TechNet Asia-Pacific, Honolulu, HI
11/05-09/07  SDR’07 Technical Conference, Denver, CO
Real-Time & Embedded Computing Conference, Detroit, MI
Real-Time & Embedded Computing Conference, Toronto, ON
SC07, Reno, NV
MAE Military & Aerospace Electronics Conference, Gaydon, UK
Embedded Technology 2007, Yokohama, Japan
11/20/07  IEEE Globecom, Washington, DC
11/29/07  Real-Time & Embedded Computing Conference, Vancouver, BC
12/04/07  Real-Time & Embedded Computing Conference, Seattle, WA
12/06/07  Real-Time & Embedded Computing Conference, Portland, OR

If you wish to have your industry event listed, contact Sally Bixby with The RTC Group.

Editorial Director Warren Andrews, Editor-in-Chief John Donovan, Managing Editor Marina Tri...