
Networked Intelligence Builds out the Smart Grid
Microcontrollers Spread the Load to On-Chip Peripherals
“Get me to the SoC on Time”— CPUs Marry FPGAs
Real World Connected Systems Magazine. Produced by Intelligent Systems Source


Vol 16 / No 10 / October 2015

Windows 10 Moves into the Internet of Things

An RTC Group Publication

Get to Market Faster and More Profitably

About Avnet



From solution design and development through deployment and support, Avnet is uniquely equipped to help you get your solution to market faster and more profitably – anywhere in the world. If you are looking for ways to expand the market opportunity for your technology solution, look to Avnet. With our deep technical expertise, global footprint and relationships with leading hardware manufacturers, we can take unnecessary time, cost and complexity out of your business so you can focus on what you do best. We also have experience in software-defined, converged and hyper-converged systems, which can be integrated and optimized for the needs of your solution. Put your technology solution on the fast track. Go to to learn more about Avnet’s Integrated Systems and Appliances. Contact us for more information at


The Magazine of Record for the Embedded Computing Industry



Next Generation of Memory to Emerge from Smartphones by Tom Williams, Editor-in-Chief




Windows 10 and the Internet of Things by Jo Sunga and Andrew Chou, Advantech









Will there be an iCar? Apple Commits to Electric Vehicle Project

Latest Developments in the Embedded Marketplace

PRODUCTS & TECHNOLOGY Newest Embedded Technology Used by Industry Leaders

What the Smart Grid Can Learn from the iPhone by Brett Burger, National Instruments




The Industrial Internet Will Help Solar Arrays Actually Reduce Carbon Emissions by Brett Murphy, Real-Time Innovations



Pushing the Embedded Boundaries with New 8-bit MCU Peripherals by Jin Xu, Microchip Technology


Hybrid Devices Maximize Flexibility, Performance and Energy Efficiency for Wearable Technology by Dr. Tim Saxe, QuickLogic




CPUs and the FPGAs: Making the SoC Connection by Rodger Hosking, Pentek, Inc

RTC Magazine OCTOBER 2015 | 3

U.S. Postal Service Statement of Ownership, Management and Circulation Required by 39 USC 3685. 1) Title of Publication: RTC Magazine. 2) Publication Number: 1092-1524. 3) Filing Date: 11/02/2015. 4) Frequency of issue: monthly. 5) Number of issues published annually: 12. 6) Annual subscription price: n/a. 7) Complete Mailing Address of Known Offices of Publication: The RTC Group, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673, Orange County. 8) Complete Mailing Address of Headquarters or General Office of Publisher: The RTC Group, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673, Orange County, California. 9) Publisher: John Reardon, The RTC Group, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673, Orange County. Editor: Jeff Child, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673, Orange County, California. Managing Editor: James Pirie, The RTC Group, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673, Orange County, CA. 10) Owners: John Reardon, Zoltan Hunor, The RTC Group, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673, Orange County, California. 11) Known Bondholders Holding 1 Percent or More of Total Amount of Bonds, Mortgages, or Other Securities: None. 12) Tax Status: The purpose, function, and nonprofit status of this organization and the exempt status for federal income tax purposes has not changed during the preceding 12 months. 13) Publication Title: RTC Magazine. 14) Issue Date for Circulation Data: 9/1/15, RTC Magazine. 15a) Extent and Nature of Circulation: average number of copies each issue during preceding 12 months (net press run): 9625; number of copies of single issue published nearest to filing date: 7500. b)1. Paid/requested outside-county mail subscriptions stated on PS Form 3541 (include advertiser’s proof and exchange copies)/average number of copies each issue during preceding 12 months: 9102; number of copies of single issue published nearest to filing date: 7500. b)2.
Paid in-county subscriptions (include advertiser’s proof and exchange copies)/average number of copies each issue during preceding 12 months/number of copies of single issue published nearest to filing date: n/a. b)3. Sales through dealers and carriers, street vendors, counter sales and other non-USPS paid distribution/average number of copies each issue during preceding 12 months: n/a; number of copies of single issue published nearest to filing date: n/a. b)4. Other classes mailed through the USPS/average number of copies each issue during preceding 12 months: n/a; number of copies of single issue published nearest to filing date: n/a. c) Total paid and/or requested circulation [sum of 15b(1), (2), (3) and (4)]/average number of copies each issue during preceding 12 months: 9102; number of copies of single issue published nearest to filing date: 6865. d) Free distribution outside of the mail (carriers or other means)/average number of copies each issue during preceding 12 months: n/a; number of copies of single issue published nearest to filing date: n/a. e) Total free distribution/average number of copies each issue during preceding 12 months: 504; number of copies of single issue published nearest to filing date: 615. f) Total distribution (sum of 15c and 15e)/average number of copies each issue during preceding 12 months: 9606; number of copies of single issue published nearest to filing date: 7480. g) Copies not distributed/average number of copies each issue during preceding 12 months: 240; number of copies of single issue published nearest to filing date: 20. h) Total (sum of 15f and 15g)/average number of copies each issue during preceding 12 months: 9846; number of copies of single issue published nearest to filing date: 7500. i) Percent paid and/or requested circulation (15c divided by 15f times 100)/average number of copies each issue during preceding 12 months: 99.7%; number of copies of single issue published nearest to filing date: 99.7%. 16) Publication of statement of ownership.
Publication will be printed in the October issue of this publication. 17)Signature and title of the editor, publisher, business manager or owner: James Pirie (Managing Editor)10/01/2015. I certify that all information furnished on this form is true and complete. I understand that anyone who furnishes false or misleading information on this form or who omits material or information requested on the form may be subjected to criminal sanctions(including fines and imprisonment)and/or civil sanctions (including multiple damages and civil penalties). James Pirie, Managing Editor



PUBLISHER President John Reardon, Vice President Aaron Foellmi,

EDITORIAL Editor-In-Chief Tom Williams, Senior Editor Clarence Peckham, Contributing Editors Colin McCracken and Paul Rosenfeld

ART/PRODUCTION Art Director Jim Bell, Graphic Designer Hugo Ricardo,

ADVERTISING/WEB ADVERTISING Western Regional Sales Manager Mark Dunaway, (949) 226-2023 Eastern U.S. and EMEA Sales Manager Ruby Brower, (949) 226-2004

BILLING Vice President of Finance Cindy Muir, (949) 226-2021

TO CONTACT RTC MAGAZINE: Home Office The RTC Group, 905 Calle Amanecer, Suite 150, San Clemente, CA 92673 Phone: (949) 226-2000 Fax: (949) 226-2050 Web:

Editorial Office Tom Williams, Editor-in-Chief 1669 Nelson Road, No. 2, Scotts Valley, CA 95066 Phone: (831) 335-1509 Published by The RTC Group Copyright 2015, The RTC Group. Printed in the United States. All rights reserved. All related graphics are trademarks of The RTC Group. All other brand and product names are the property of their holders.

Critical Recording in Any Arena When You Can’t Afford to Miss a Beat!


Introducing Pentek’s expanded line of Talon COTS, rugged, portable and lab-based recorders. Built to capture wideband SIGINT, radar and communication signals right out-of-the-box:
• Analog RF/IF, 10 GbE, LVDS, sFPDP solutions
• Real-time sustained recording to 4 GB/sec
• Recording and playback operation
• Analog signal bandwidths to 1.6 GHz
• Shock and vibration resistant Solid State Drives
• GPS time and position stamping
• Hot-swappable storage to Windows® NTFS RAIDs
• Remote operation & multi-system synchronization
• SystemFlow® API & GUI with Signal Analyzer
• Complete documentation & lifetime support

Pentek’s rugged turn-key recorders are built and tested for fast, reliable and secure operation in your environment. Call 201-818-5900 or go to for your FREE High-Speed Recording Systems Handbook and Talon Recording Systems Catalog.

Pentek, Inc., One Park Way, Upper Saddle River, NJ 07458 • Phone: 201.818.5900 • Fax: 201.818.5904 • • Worldwide Distribution & Support, Copyright © 2013 Pentek, Inc. Pentek, Talon and SystemFlow are trademarks of Pentek, Inc. Other trademarks are properties of their respective owners.


Will there be an iCar? Apple Commits to Electric Vehicle Project by Tom Williams, Editor-In-Chief

OK, if anybody still harbors skeptical thoughts about the future of the electric automobile, may they please check them at the door? It has recently become known that Apple has raised the status of its efforts to build an electric car to that of “committed project” and set a target ship date of 2019. The project has gotten permission to increase the 600-person design team to about 1,800 and is known to be hiring experts in electric and driverless vehicles. Budgeting is, of course, a well-guarded secret, but it is generally known that Apple is sitting on substantially more cash than the entire worldwide automobile industry—something like $180 billion. Given that Apple is by nature and heritage a mass-market consumer company, it would make sense to expect that an Apple electric vehicle will be priced considerably below the current Tesla Model S; it is also known that Tesla is working on a lower-priced model. While it has long been known that Apple was evaluating the idea of getting into the electric vehicle business, the common wisdom was that it would prove very difficult for a company with no prior experience in the automobile industry. That now sounds more like a desperate hope than a serious opinion. While Apple has a solid reputation of success in mobile devices, an automobile would still be quite a cultural departure. It is hard to farm out production of automobiles to contract manufacturers in China, so Apple would have to invest in bricks and mortar, and—where it would probably shine—in automated production facilities. And Apple’s entry into the auto market could have a profound psychological effect on the market. It would definitely put fear into the


hearts of traditional automakers, who have been quite tentative about plunging into electric vehicles, and that alone could result in growth of the market. Seeing such growth spurred by an Apple commitment could cause other manufacturers to increase their own efforts. In addition, Tesla has already released a whole host of its patents into the public domain in hopes of stimulating the industry, which it hopes will grow the market for all players. Apple has a history of that same attitude. In fact, Steve Jobs told me in a conversation long ago that soon after starting Apple they invited the retired Intel executive Mike Markkula into the company because they saw they needed his expertise. Granting Markkula a share in the company was a good move, Jobs said, “. . . because we figured it was better to have fifty percent of something than one hundred percent of nothing.” By now most of us are aware of the truly vast amount of software—approaching 100 million lines of code—and embedded devices that go into making a modern automobile. For all its embedded microcontrollers and controls, the internal combustion engine is still the most primitive element. The advent of the electric vehicle is moving us toward the day when the entire vehicle is a networked collection of electronic/electrical devices, which is itself networked to the outside world. Apple’s influence on the design of such vehicles can be expected to reflect its own background. Will there be an Auto App Store? It is sure to influence the design of the user interface. And if Apple’s push appears to be successful, what could be the effect on the computer industry as a whole? How will Samsung react? Will we wake up one day and find cars based on

Android? If established automobile manufacturers do not move quickly, is it possible that the computer industry could carve out a huge chunk of the transportation market? These possibilities must now at least be seriously considered. There are forces at work in technology, energy, the environment and economics that could signal some major shifts. The recent enormous scandal involving Volkswagen has many possible dimensions for interpretation, which have been all over the news and industry press. One observation, however, is relevant in this context. The ability to optimize the power/performance of the internal combustion engine is reaching its limit. The manipulation of the exhaust gases when the engine was under test was an attempt to make the “clean” diesel engine appear to have much more room for optimization than it really did. It was a reaction to the disruptive technology that the digital electric vehicle really represents. And we should emphasize the “digital” here because EVs are more than just cars with electric motors. They are integrated and networked digital systems. And the VW scandal, with its costs to the company and its destruction of consumer confidence, has simply served to accelerate the move toward the EV. The embedded computing industry is in a prime position to participate in the move to EVs just as it is now experiencing a huge influx of embedded devices—and particularly the need to satisfy demands for reliable software—into conventional autos. As with every emergence of a disruptive technology, survival lies in the ability to adapt. Today’s car makers would do well to heed the signs of change.


Mentor Graphics and AMD to Accelerate ARMv8-A Linux Development for Embedded Systems Mentor Graphics has announced its agreement with AMD to provide comprehensive embedded Linux tools for AMD 64-bit ARM-based processors using the Mentor Embedded Sourcery CodeBench Lite offering. To address the growing complexity of embedded systems, AMD embedded customers will have access to the Sourcery CodeBench technology for developing and debugging applications for Linux-based embedded systems. AMD and Mentor Graphics agreed in 2014 to create an embedded software ecosystem that gives developers access to powerful open-source, embedded C/C++ development tools to build embedded software for complex heterogeneous architectures based on AMD’s 64-bit ARM-based processors. The AMD 64-bit ARM-based processors target next-generation embedded data center applications, communications and networking infrastructure, and industrial solutions. The Sourcery CodeBench Professional solution optimizes embedded development quality and productivity, critical for advanced server applications based on AMD’s 64-bit ARM-based processors running Linux. Embedded developers will be able to gain system insight into software execution, performance and debugging of Linux-based embedded systems. The Mentor Embedded Sourcery CodeBench product enables the development of embedded systems on microcontrollers and microprocessors for RTOS, bare metal, and Linux-based applications. The Sourcery CodeBench tool and the integrated Sourcery Analyzer tool help developers quickly identify and fix functional and performance issues in complex embedded systems.
As part of the agreement, embedded developers will have free access to the following customized embedded Linux development products: Sourcery CodeBench Lite for 64-bit ARM GNU/Linux, a complete GNU-based C/C++ development toolchain for custom Linux target platform development; the GNU Debugger for Linux application debug; and Windows and Linux host-based development options.

Veritas to Acquire GE Embedded Systems Business Veritas Capital has agreed to acquire the embedded technology and systems business from General Electric. Financial terms weren’t announced. GE’s Embedded Systems business is a supplier of sophisticated, open architecture electronic systems for aerospace, defense, and industrial solutions, currently operating within GE Energy Management (Intelligent Platforms subdivision). The Embedded Systems’ broad product portfolio employs various highly engineered, patented and high-performance commercial computing technologies to address advanced size, weight and power challenges that are common to rapidly evolving aerospace, defense, and industrial applications and programs. These rugged products are designed to withstand harsh environments such as extreme temperature, high vibration and exposure to natural elements. The Business is headquartered in Huntsville, AL with five facilities globally, including three in North America and two in Europe, and has approximately 700 employees worldwide. “Veritas brings deep experience investing in the defense market, a vast network of relationships in the government and commercial markets, and a track record of adding significant value to its portfolio companies. We are excited by the prospect of leveraging the firm’s expertise as we accelerate the growth of the Business,” said Bernie Anger, General Manager of GE’s Intelligent Platforms business. The transaction is anticipated to close in the fourth quarter. Upon closing, GE’s Embedded Systems business will be renamed and operate as an independent company at its current headquarters in Huntsville, AL. Existing senior management will continue to lead the Business.




The Linux Foundation Announces Linux Performance Workgroup The Linux Foundation has announced the DiaMon workgroup, which is creating a de-facto standard for tracing and monitoring infrastructure in order to improve diagnostics of Linux user space programs. Founding members of the DiaMon workgroup include EfficiOS, Freescale, Google, Harman, Hitachi, Huawei, IBM, Intel, Netflix, Qualcomm, Red Hat, SUSE and Wind River. The workgroup will bring together developers and users to standardize on interfaces and common exchange formats. The effort should improve the effectiveness of Linux performance testing tools. With the increasing complexity of both the software and hardware ecosystem, it is becoming more difficult to understand computing problems when they arise. At the same time, the requirements of enterprise Linux users are getting more demanding; higher throughput is needed, but at bounded latency, due to functions like video streaming, for example. Significant time is required to investigate programs that have typically not been diagnosed together, such as open source components and libraries and third-party, proprietary software. The DiaMon Workgroup will create de-facto standards and tools for tracing, monitoring and diagnostics. It should increase interoperability among tools, as well as improve Linux-based tracing, profiling, logging and monitoring features. It will include software and tools covering trace data creation; activation; collection and storage; postmortem analysis tools; visualization tools; the ability to move analysis from postmortem to live monitoring; scripting; data conversion; and correlation and analysis of different sources of trace data. The DiaMon workgroup will also enable end users to exchange user guides, white papers, data sheets and general guidelines about diagnostic and monitoring tools, analysis and best practices.

Webroot and Lynx Partner to Protect IoT Devices from Targeted Attacks Webroot and Lynx Software Technologies have announced a strategic partnership. The companies are combining unique security technologies which enable developers to build advanced threat detection and protection into Internet of Things (IoT) and Industrial Internet of Things (IIoT) devices. The Webroot IoT Security Toolkit enables IoT and IIoT system integrators and solution designers to integrate real-time threat intelligence services and intelligent cybersecurity device agents to protect critical systems against modern threats. The LynxSecure Hypervisor from Lynx Software Technologies is a secure virtualization solution that is based on separation kernel technology, originally designed to separate and protect military networks at different classification levels. Combining these technologies involves embedding components of the Webroot IoT Security Toolkit inside the LynxSecure Hypervisor secure virtual space. This allows for detection, identification and containment of threats without the constraints or risks inherent in running solutions at the operating system level. Any threats that enter the IoT environment can be quickly identified by Webroot BrightCloud threat intelligence, quarantined using the LynxSecure isolation capabilities, and removed from the system.


RTI Announces Global Services Delivery Collaboration With Tech Mahindra Real-Time Innovations (RTI) has announced a global partnership with a fellow Industrial Internet Consortium (IIC) member, Tech Mahindra, a specialist in digital transformation, consulting and business re-engineering. Tech Mahindra joins RTI’s rapidly expanding Services Delivery Partner (SDP) program, designed to help companies capitalize on the growing Industrial Internet of Things (IIoT) market. RTI SDPs offer outsourced product development, system integration and domain-specific consulting across multiple industries. RTI delivers IIoT solutions for customers spanning medical, energy, mining, air traffic control, trading, automotive, unmanned systems, industrial SCADA, naval systems, air and missile defense, ground stations and science. Tech Mahindra is a leader in digital transformation delivering IIoT and IoT-based solutions for its customers. Tech Mahindra’s consulting expertise and system integration strengths directly align with RTI’s core focus areas. Tech Mahindra will provide domain expertise and consultancy as well as development of user applications, systems software and data-center cloud services. The combined efforts of RTI and Tech Mahindra will help companies meet the emerging and dynamic requirements of the IIoT market in key verticals including energy, automotive, avionics, healthcare and telecommunications across the globe. Tech Mahindra’s global services team is available to accept new projects with Connext DDS immediately. Customers using these services will have access to RTI’s full Connext DDS product line and can leverage RTI’s engineering and architectural services as needed.


gridComm, PT Siklon Energy Nusantara Partner on Cloud-Based Intelligent Street Lights gridComm, a provider of power line communication (PLC) solutions that enable the transformation of the traditional power grid into a smart grid, has signed a partnership agreement with a leading LED manufacturer in Indonesia, PT Siklon Energy Nusantara, to jointly provide a Cloud-based Street Light Management Solution to the Indonesian market. Given Siklon’s leadership and expertise in LED lighting and advanced production facilities, the partnership enables the delivery of a complete Intelligent Street Light Solution tailored for Indonesia. gridComm’s Intelligent Street Lighting Solution serves as a cornerstone of a ‘Smart City’ with a reduced carbon footprint. Based on the company’s GC2200, a next-generation orthogonal frequency division multiple access (OFDMA) PLC transceiver, gridComm’s Intelligent Street Lighting Solution transforms traditional street lighting into energy-aware, remotely managed and monitored web-based networks. Smart City is the latest catchword in urban planning around the world. Smart Cities encourage the use of technology to improve governance and enhance citizen well-being, while at the same time reducing costs by increasing productivity. Jakarta has already launched its Smart City blueprint, which connects citizens and government agencies through the Cloud. The natural next step for the city will be to transform its 350,000 street lights into smart street lights, a move that gridComm and Siklon are positioned to fulfill. “With state-of-the-art LED lighting and capable technical staff, Siklon, as the first and only LED street lamp manufacturer in Indonesia, is a great match for us,” said Mike Holt, CEO of gridComm. “As Indonesia embarks on its Smart Street Lighting Initiative (SSLI), our companies make an ideal team for addressing their needs. We are proud to be selected by Siklon as a strategic partner to deploy smart street light solutions in Indonesia.”

Dialog Semiconductor to acquire Atmel for $4.6 Billion Dialog Semiconductor and Atmel have announced that Dialog has agreed to acquire Atmel in a cash and stock transaction for total consideration of approximately $4.6 billion. The acquisition creates a global leader in both power management and embedded processing solutions. The transaction results in a fast growing and innovative powerhouse, supporting mobile power, IoT and automotive customers. The combined company will address an attractive, fast growing market opportunity of approximately $20 billion by 2019. Dialog will complement its position in power management ICs with a leading portfolio of proprietary and ARM-based microcontrollers in addition to high performance ICs for connectivity, touch and security. Dialog will also leverage Atmel’s established sales channels to significantly diversify its customer base. Through realized synergies, the combination is expected to deliver an improved operating model and enable new revenue growth opportunities. The transaction is expected to close in the first quarter of calendar 2016. Dialog intends to fund the transaction with a combination of existing cash, $2.1 billion of new debt and the issuance to Atmel shareholders of approximately 49 million ADSs expected to be listed on the New York Stock Exchange or the NASDAQ Stock Market. Post transaction, it is projected that Atmel shareholders will own approximately 38 percent of the combined company. The transaction would result in a capital structure with leverage of approximately 3x Net Debt/Estimated LTM EBITDA at closing. Dialog expects to continue to have a strong cash flow generation profile and have the ability to substantially pay down the transaction debt approximately three years after closing.


[Back-issue cover: RTC Magazine, Vol 16 / No 3 / March 2015, An RTC Group Publication, The Magazine of Record for the Embedded Computer Industry. Cover stories: "Full-On Development Suite Targets Industrial Automation"; "Medical Devices Merge Intelligence with Connectivity"; "Temperature Considerations for Critical Solid State Storage"; "COM Modules Grow in Variety and Capability".]



Next Generation of Memory to Emerge from Smartphones Getting low-power, dense, high-performance memory to work effectively in such a limited form factor requires a huge engineering effort. The results, however, may be not only a new generation of phones, but also a new modular character for embedded devices in general. by Tom Williams, Editor-in-Chief

There may have been a time when being a memory manufacturer seemed like a pretty straight-line job of cramming as many bits onto a die as possible and trying to keep up with the ever-increasing clock speed of the day’s processors. Of course, there were also power and heat dissipation issues, but they were not insurmountable, and users were always happy to get more RAM into the system, be it in the form of additional boards plugged into the bus or denser packages or both. Storage was, and to a large extent still is, provided in the form of rotating media of ever decreasing size and ever increasing density. This old scenario has been on the wane for some time now and is speeding its way into what we may soon call the “days of yore.” Semiconductor memory continues to march to the tune of Moore’s Law, getting ever denser as geometries shrink, while power consumption and heat dissipation continue to shrink, package height gets smaller and new technologies pack more bits into smaller spaces. This affects not only RAM but also nonvolatile storage like NAND flash, which is gradually but steadily replacing the mechanical rotating drive in much the way the quartz crystal watch has replaced large numbers of the old “clink-clank-clunk” mechanical watches. This development, which is now greatly accelerating, could have profound effects on the design and composition of embedded systems across wide areas of application. The most intense arena of development appears to be smartphones and tablets, which of course represent the highest volume market (Figure 1). For a memory vendor, landing a contract with a Samsung, an Apple or a large Chinese vendor can well reward the expense of devoting major effort to a detailed and optimized design. In the smartphone space, according to Ken Steck, Senior Product Marketing Manager for Micron Technology’s Mobile Business Unit, “We try to develop not just the correct technology (low power, high throughput, etc.) but also the right density.
We want to be sure that we’re not guessing—underguessing or overguessing.”

Thus, while the demand for memory density is certainly increasing, it is important to hit current needs while also looking to meet future demands. But at this point, says Steck, “There is now a need for low power, performance and density that has not yet been met.” It is not met by an LPDRAM device alone. “We’re trying to create the perfect intersection of LPDRAM with NAND density that meets the requirements for the customer including the height of the chip, the thermal level, throughput and everything. So we try to have the right density mixtures that intersect with the design requirements.” Optimizing memory for a given customer’s design can entail a large amount of detailed work on that particular design, including software work such as fine-tuning the firmware and the drivers, and often also fine-tuning the actual physical layout to make sure the processor can use the memory in the best way. According to Steck, this requires staying in lock step with the chip manufacturers to see where they are going and the type of memory architectures they will need. In fact, it often comes down to the literal proximity of the RAM to the processor, given the speed of the CPU and the length of the traces involved. Increasingly, there are designs, such as that in the Apple Watch, where processor and RAM are two separate dies in a single package that from the outside looks like a single device. Quite a bit of software work goes into making memory work well, including tuning the drivers; in addition, many memory devices carry a dedicated processor with its own firmware that must be matched with the host processor so that everything operates optimally. Talk about embedded systems within embedded systems! The pace of change and demands on memory in smartphones and tablets will of course continue. The design cycle has recently gone from three years to about six months.
“There are things on the shelf now that our teams knew about eighteen months ago,” Steck says. The pace of the increase in demand for both RAM

Figure 1 Smartphones and their growing applications are now driving memory demands both for high-speed, low-power RAM (LPDRAM) and for storage in the form of NAND flash. Source: Micron Technology.

and storage memory (now NAND flash) can tell us something about the future of embedded designs in general.

Growth within Constraints

Make no mistake. The demands for memory capacity and performance in both RAM and storage for smartphones and tablets stand at a pretty high point today and are expected to expand significantly in the near future. But what else is new? The challenge is especially great in the case of phones because the form factor is strictly limited. The days of Maxwell Smart's shoe phone are long gone, as are thoughts of anything bulkier or less sleek than what fits in our hands today. True, there are some size variants, but they mostly serve different demands for screen size rather than making any compromises to accommodate performance.

So right now the sweet spot for storage appears to be about 32GBytes, heading toward an optionally available high end of 64GBytes. New devices with only 8GBytes are hard to find today, with 16GBytes still available for cheapskates. On the RAM side, it's looking like 2 to 3GBytes, with 4GBytes soon to become the norm. Within three years we should be looking at 4GBytes of RAM and 64 to 128GBytes of storage.

Driving these increasing demands are factors like larger, higher-resolution screens and higher camera resolution. In addition, operating systems are adding features such as split-screen multitasking in the recent iOS 9, and bigger, more demanding apps are appearing, including games that now require a gigabyte of RAM (Figure 2). While operating systems now offer and encourage the storage of things like pictures, videos and music in the Cloud, many users prefer having them instantly available. According to Ken Steck, if his daughter is any indication—and we're betting she is—the

availability of Cloud storage from the phone is not wildly popular, so more NAND flash is needed as well. This, in turn, is leading to technical innovation. Intel and Micron have jointly introduced their 3D XPoint memory, which is expected to take NAND storage well beyond its current limits of both capacity and performance. In addition to significantly reducing latencies, it allows much more data to be stored close to the processor and accessed at speeds previously impossible for non-volatile storage. The new, transistor-less cross-point architecture creates a three-dimensional checkerboard where memory cells sit at the intersection of word lines and bit lines, allowing the cells to be addressed individually. As a result, data can be written and read in small sizes, leading to faster and more efficient read/write processes.
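The difference between cross-point addressing and conventional NAND can be sketched in a few lines of code. This is a conceptual model only, not Micron's actual design: it shows why individually addressable cells at word-line/bit-line intersections allow small in-place writes, while a simplified NAND block must be erased wholesale before any page in it can be rewritten. All class and method names here are illustrative.

```python
class CrossPointArray:
    """Conceptual cross-point memory: each cell sits at the intersection
    of one word line and one bit line, so it can be read or rewritten
    individually, in place."""
    def __init__(self, word_lines, bit_lines):
        self.cells = [[0] * bit_lines for _ in range(word_lines)]

    def write(self, word_line, bit_line, value):
        # Select exactly one cell at the intersection and write it in place.
        self.cells[word_line][bit_line] = value

    def read(self, word_line, bit_line):
        return self.cells[word_line][bit_line]


class NandBlock:
    """Simplified NAND block: a page can only be programmed once; rewriting
    anything requires erasing the entire block first."""
    def __init__(self, pages):
        self.pages = [None] * pages   # None = erased

    def program(self, page, data):
        if self.pages[page] is not None:
            raise ValueError("page already programmed; erase the whole block first")
        self.pages[page] = data

    def erase(self):
        self.pages = [None] * len(self.pages)


xpoint = CrossPointArray(4, 4)
xpoint.write(1, 2, 0xAB)
xpoint.write(1, 2, 0xCD)        # rewrite a single cell, no erase needed
print(hex(xpoint.read(1, 2)))   # 0xcd

block = NandBlock(pages=4)
block.program(0, b"log entry")
# block.program(0, b"new data")  # would raise: NAND can't rewrite in place
```

The contrast in the last few lines is the point of the architecture: small, in-place writes avoid the read-modify-erase-write cycles that limit NAND performance.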

Future Effects on Embedded?

The amount of intense design work going into smartphones is having the effect of creating optimized modules that consist of highly integrated SoCs, some including multicore CPUs, FPGAs, graphics and DSP functionality along with rich I/O capability on a single die. These are closely integrated with large amounts of memory—both volatile and nonvolatile—often in the same package. In other words, they are integrated and tested high-performance, low-power modules that can potentially be connected to all manner of external devices and loaded with specialized application software to perform all kinds of embedded tasks. This is analogous to the evolution of embedded designs, which have inherited so much standard technology from the PC world. These include such things as USB and PCI/PCI Express, which came from the PC and therefore were produced in huge volumes bringing down the costs and making them attractive for

RTC Magazine OCTOBER 2015 | 11

niche applications in the embedded arena, which could never have justified the expense involved with creating their own interfaces. The next generation of SoCs is a natural for use in demanding, often mobile, embedded applications, and the expertise gained in matching memory and creating integrated modules will stand the industry in good stead. There may be proprietary issues involved with using the exact modules developed for phones, but the expertise is now there, and the effort involved in adapting it for embedded applications is small compared to what would be needed to create such modules from scratch.

Figure 2 The LPDRAM needs of smartphones are expected to increase something like 200% in response to display and camera performance as well as the sheer complexity of operating systems and their applications. Source: Micron Technology.


Micron Technology Boise, ID (208) 368-4000



Windows 10 and the Internet of Things With the explosive growth of the Internet of Things and the Internet of Everything, millions of devices are connecting every day. The arrival of Windows 10 IoT has brought ease of use and ubiquitous connectivity to IoT. This alone warrants becoming familiar with IoT goals, concepts, software and hardware in the context of the new Windows 10 IoT. by Jo Sunga and Andrew Chou, Advantech


Figure 1 Microsoft Windows 10 for IoT is based on a common core that is the foundation for all versions and then differentiated in versions by adding features and services that are appropriate to the target devices.

While Microsoft undoubtedly deserved kudos for its all-out effort to create a new OS that embraced the reality of touch and tablets while still taking legacy Windows along for the ride, it was an impossible task and few were happy with Windows 8/8.1. Windows 10 couldn't happen soon enough. It will be a while until it catches up to the universally loved Windows 7, but Microsoft's new OS enjoyed a good reception and brisk initial adoption.

How does all of this affect the world of embedded systems? And how does it tie into the Internet of Things, this paradigm that promises not billions but trillions of dollars in value generated over the next decade or so? No one has a totally reliable crystal ball, but for now Microsoft's belief in IoT is such that the company has, in essence, replaced the term "embedded" with "IoT." Yes, it's true. Whereas in the past, Microsoft's streamlined special-purpose operating systems carried names such as Windows XP Embedded, Windows Embedded Handheld, or Windows Embedded Standard, from now on everything that once was "embedded" is now Windows 10 IoT.

That doesn't mean there won't be different flavors, as there used to be with versions like Windows Embedded 8.1 Pro or Windows Embedded 8.1 Industry or Retail. There are still different versions, but they are now grouped according to devices and "things" rather than markets and industries served, and they are all part of Windows 10 IoT. So that brings up two questions. What has changed in the Windows versions formerly known as "embedded"? And since an embedded operating system remains an embedded operating system by any other name, why did Microsoft replace it with "IoT" in the first place? The former we'll discuss in some detail. The latter simply reflects the new realities of computing and connectivity.
While there were once dozens or hundreds of mainframes, thousands of mini computers, and hundreds of millions of PCs, research firms predict that there will be hundreds of billions of “connected things” within the next five years. Why not reflect that in a new name? So IoT it is, though we’re already hearing other terms like IoYT‚ the Internet of Your Things, or IoE, the Internet of Everything that

also includes people and processes in addition to things. But now let's consider how Windows 10 is different from the versions that came before. The most important change is that with Windows 10, Microsoft offers what it calls a "unified core" that brings together conventional Windows, Windows Phone, Windows on Devices and even the Xbox. That doesn't mean there's just one Windows that runs on every type of device. That wouldn't make sense given the vast variety of computing devices and their massive discrepancies of purpose, size and resources (Figure 1). Instead, the concept of Windows 10 is that of a common core, with each family of devices then adding features to that core. On top of the Windows core is the Universal Windows Platform with a single API interface. Applications have a single binary that can run on any device, which means developers write code once instead of having to write different code for different devices. There are, of course, family-specific capabilities (extensions for desktop, phone, Xbox, etc.), but those don't invalidate the binaries on other devices. That said, how is Microsoft grouping these different versions of Windows 10 IoT?

Windows 10 IoT Groupings

Anything that needs a desktop (not the physical kind, the desktop on the screen) or desktop apps, and also still needs to be able to run Win32 software in addition to the new universal apps, needs Windows 10 IoT Enterprise. This replaces Windows Embedded 8.1 Industry/Retail. It's the full version of Windows 10, but includes advanced lockdown features. It supports the x86 architecture, requires a minimum of 1GB of RAM and 16GB of storage, and available SKUs include Windows 10 IoT versions for Retail and Thin Client, Tablet, and Small Tablet. Windows 10 IoT Enterprise is what most embedded systems developers will want for projects such as ATMs, medical devices, information kiosks, point-of-sale systems, tablets, etc.

Anything that needs a "modern" (formerly known as "Metro") shell and must run mobile apps needs Windows 10 IoT Mobile Enterprise. This replaces Windows Mobile and Windows Embedded Handheld. The x86 and ARM architectures are supported


here, and resource requirements include a minimum of 512MB RAM and 4GB storage. Windows 10 IoT Mobile Enterprise also includes advanced lockdown features, and there is no activation requirement. This is an embedded direct-only option meant for smartphones and small tablets. Windows 10 IoT Mobile Enterprise is, finally, a worthy, modern successor to the old Windows Mobile and Embedded Handheld, which have seen continued use long past their shelf life because there simply was no feasible upgrade path once Microsoft introduced the consumer-oriented Phone edition a few years ago.

Getting to the Core

If the above two requirements do not apply, then it's Windows 10 IoT Core, which supports both x86 and ARM, runs universal apps and drivers, but does not have a Microsoft shell or Microsoft apps. Windows 10 IoT Core, once codenamed "Athens," is by far the most "things"-oriented flavor of Microsoft's new embedded -- make that IoT -- lineup. It's for small, dedicated x86 or ARM devices that do not need a conventional Windows shell and may or may not have screens. Examples are edge sensors, IoT gateways, HMI devices and the like. Windows 10 IoT Core has Universal App and driver support, needs a minimum of 256MB of RAM (512MB for devices with displays) and 2GB of storage, and there is no activation requirement.

There are a number of things that are unique to Windows 10 IoT Core. Among them is headless, policy-managed operation with background apps to handle long-running tasks, retain full control, and manage process life cycles. There's an API for bus access (GPIO, I2C, SPI, USB, HID, custom), an API for system settings (power state, radio control, Bluetooth, Wi-Fi), and overall a rich API set, albeit a smaller one than classic Windows. But, and that's a big "But," Core can't run legacy Win32 apps. This means very limited backward compatibility with Windows CE or Windows Embedded, though a large number of Win32 and .NET APIs are available in Windows 10 IoT Core, and migration tools are available for porting existing code.

As far as configuring the Windows 10 IoT OS for a certain project or device goes, it's a matter of using the Windows Image Configuration Designer (ICD) to create and build a Windows image that is then customized with universal apps, drivers, and configuration and lockdown settings.

"Lockdown" is crucial in IoT systems. Since IoT systems require an extra layer of security and a totally predictable user experience, Lockdown includes input filters (intercepting keystrokes that could lead to unpredictable or unwanted results), AppLocker and Layout Control (so nothing can be changed), a Shell and App Launcher (so that the interface is always exactly the same), Write Filters and Overlays (so that nothing can be written to or changed that shouldn't be), a USB Filter (so that USB keys can't be used without authorization), and Dialog and Notification Filters (so that unexpected pop-ups can't halt operation).

Note that not all versions of Windows 10 IoT have the same lockdown features. Headless Windows 10 IoT Core devices, for example, only need the USB filter as they do not have Windows shells and may not have displays or other input mechanisms. Windows 10 IoT Mobile includes all types of lockdown features except Write Filters and Overlays. All of them are part of Windows 10 IoT Enterprise.

You may have noticed that we said activation is not required in certain versions. That's because conventional activation of Windows can be disruptive, and it's certainly not something most IoT devices would want to unexpectedly encounter. As a result, there is no activation for the Mobile and Core versions of Windows 10 IoT. For Enterprise versions, activation can be handled both online and offline, and it's designed not to get in the way.

Security is a massive issue in our ever-more connected world, and one that becomes exponentially greater as millions of IoT devices are added every week and month. Device identity, shared data and customer data all present unique challenges that are handled via TPM/secure boot, TPM-based key storage, and industrial-grade device encryption, respectively.

Figure 2 Microsoft Azure acts as a Cloud platform and infrastructure for building and managing applications and services for both Microsoft-specific and third party software and systems.

Why Upgrade?

One thing a lot of developers will ask themselves is why they should upgrade to Windows 10 IoT. Many had less than satisfactory experiences with upgrading in the past, and may assume a wait-and-see attitude. The first argument for upgrading is that Windows 10 is to Windows 8/8.1 as Windows 7 was to the ill-fated Windows Vista. Microsoft learned from its mistakes and fixed most of what needed fixing.

The second argument is that Windows 10 IoT is simply too attractive to ignore. There's the full interoperability and the familiar user experience. There's the fact that it runs all Win32/Win7/Win8 apps, but there is also the new unified tool set for the Universal Application platform, allowing apps to work not only on desktops and tablets, but also on Xbox One, Windows Phone, and even the new Surface Hub 80-inch display. There's better security with advances in biometrics and the use of face recognition, there's DirectX 12, and much more. The difference from earlier versions of Windows Embedded is that Windows 10 IoT offers a converged platform for devices with enterprise-grade security. Windows 10 IoT is also far better equipped to handle connectivity for the ever more important machine-to-machine and machine-to-Cloud scenarios that make the Internet of Things so compelling in the first place. What speaks against it? There's the issue with legacy Win32 apps that Windows 10 IoT Core can't handle. And many developers are not yet familiar with the new universal SDK.

Overall, even doubters should consider the new realities of the Internet of Things. The days when an OS was designed to handle no more than an individual PC and basic online connectivity are gone. The Cloud plays a major role in the IoT concept, and that's reflected in Windows 10 IoT. "The Cloud," of course, is anything but a cloud. It is made up of server and storage farms that are very much on solid earth. But regardless of the fluffy name, the Cloud is where a tremendous, and rapidly growing, amount of data is stored, processed and passed on. That's where the Azure part of Windows 10 IoT comes in. Azure is Microsoft's Cloud solution, initially announced in 2008 and introduced in 2010 as Windows Azure (Figure 2). A special Azure IoT Suite has been available since September 2015. The purpose of the Azure IoT Suite, which of course works well with Windows 10 IoT, is to integrate all Azure capabilities designed to help businesses connect, manage and analyze all of their "things." This may include finished applications targeting common IoT business scenarios such as remote monitoring, asset management and predictive maintenance, with the promise to simplify deployment and facilitate scaling of solutions to millions of "things" over time. It's important to understand the difference between Microsoft Windows 10 IoT and Microsoft's Azure IoT Suite. Whereas the former will most likely become the basis for the development of Internet of Things components at every level, Azure itself is simply a service that Microsoft offers. It's an impressive service
for sure. Azure Event Hubs store data from assets and sensors; the Azure DocumentDB non-relational Cloud-based database and Azure HDInsight Hadoop can process giant data sets; Azure Stream Analytics deals with streaming data; Azure Notification Hubs send OS-agnostic mobile push notifications. And there is Azure Machine Learning (ML), which provides Cloud-based predictive analytics, as well as the Microsoft Power BI self-service analytics tool.

While the IoT paradigm has been a hot topic for a while, leading embedded system vendors realized that Cloud infrastructures lagged behind and didn't meet customer expectations for Cloud connection, hindering IoT implementation. In response, Advantech developed WISE-PaaS (Platform as a Service) with the goals of a) seamless sensor information gathering and transmission, b) remote management of smart devices, c) comprehensive protection for data, systems, and transmissions, d) big data analytics and machine learning modules, and e) open APIs/SDKs and protocols conforming to industry standards. Advantech's WISE-PaaS has a WISE-Cloud Partner Alliance program that includes an IoT development starter kit, SDKs/protocols, consulting and technical training services, co-marketing, and 180 days of Microsoft Azure service. Advantech is working with Microsoft's Azure IoT and Windows 10 IoT teams to integrate Advantech WISE-PaaS with Azure features and modules.

Now is the time to discuss some standards and concepts you'll come across when dealing with IoT issues. There is, for example, AllJoyn in sensor data communication (Figure 3). AllJoyn is an open source system, originally developed by Qualcomm and now administered and promoted by the AllSeen Alliance, that lets compatible devices and apps find each other and communicate with each other. An example would be the sensor in a motion-activated light switch that tells the light bulb that no motion has been detected and it's okay to turn off now.
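The motion-sensor-and-bulb scenario can be sketched in a few lines. This is emphatically not the real AllJoyn API; it is a minimal, self-contained Python model of the pattern the framework provides: devices advertise themselves on a common local bus, discover each other by interface name, and exchange signals without any Internet connection. The bus class, interface name, and message strings are all invented for illustration.

```python
class LocalBus:
    """Stand-in for an AllJoyn-style proximity network (Wi-Fi, LAN,
    power line, ...) -- no Internet connection involved."""
    def __init__(self):
        self.devices = {}

    def advertise(self, name, interfaces, handler):
        # A device announces itself and the interfaces it implements.
        self.devices[name] = (interfaces, handler)

    def discover(self, interface):
        # Find every advertised device implementing a given interface.
        return [n for n, (ifaces, _) in self.devices.items() if interface in ifaces]

    def signal(self, name, message):
        # Deliver a signal to a named device's handler.
        self.devices[name][1](message)


bus = LocalBus()
bulb_state = {"on": True}

def bulb_handler(msg):
    if msg == "no_motion":
        bulb_state["on"] = False   # sensor says it's okay to turn off now

# The bulb advertises a (hypothetical) lamp interface on the local bus.
bus.advertise("LightBulb", ["org.example.Lamp"], bulb_handler)

# The motion sensor discovers any lamp on the bus and tells it that
# no motion has been detected.
for device in bus.discover("org.example.Lamp"):
    bus.signal(device, "no_motion")

print(bulb_state["on"])  # False
```

The essential design point the sketch captures is that discovery is by capability (interface name), not by brand or product category, which is what lets devices from different manufacturers cooperate.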
The overall idea is to have a variety of universal AllJoyn extensions as well as AllJoyn gateway agents in the software stacks of routers and IoT gateways to facilitate sensor data communication as well as easy setup, security, and ongoing support. AllJoyn, by the way, doesn't need Internet access to work. AllJoyn-enabled devices, services and applications can communicate over Wi-Fi, LAN, power lines or other transports regardless of OS and without needing to be connected to the Internet. As of Fall 2015, the AllJoyn initiative includes

Figure 3 AllJoyn is an open source project that lets connected devices from different manufacturers communicate via a common software framework and a set of system services. Devices can find each other and cooperate across network boundaries of brand, product category and connection type.


over 170 member companies, and AllJoyn is implemented in Windows 10.

Another standard is AMQP, which stands for Advanced Message Queuing Protocol. AMQP is an open-standard, wire-level application layer protocol for message-centric middleware. Anything that can create and interpret messages that conform to the AMQP data format works with any other compliant tool or device, regardless of the implementation language used.

Finally, there's the hardware. Embedded systems have been around for a long time, and a good number of them were connected. In the IoT, everything is connected, in real time and not via batch uploads, sneakernet, or USB sticks. On its own, any one piece of data isn't terribly valuable, but if those formerly isolated data points are collected, filtered, and passed on for processing and analysis, a big picture emerges that allows for real-time, intelligent feedback and management. The problem is that perhaps 85% of current systems are legacy technologies that do not share data among themselves, let alone with the Cloud. It takes new IoT-centric hardware working with new IoT-enabled software to make it all work. On the hardware side that includes the wireless sensor network level, with sensor "mote" edge devices that communicate via standard protocols. There is the embedded IoT gateway level that shares, filters, and transports data. And there are the intelligent systems and networking that, in cooperation with the Cloud, manage and analyze data and turn it into actionable analytics.

As of now, there are just four Microsoft Global IoT Partners in the world. Advantech is one of them, fully supporting Windows 10 IoT with a range of IoT gateway solutions, IoT wireless modules, wireless IoT nodes, the WISE-PaaS platform, and IoT design-in services. If the experts are right about the Internet of Things and the Internet of Everything, trillions of dollars are at stake. That alone warrants taking Windows 10 IoT seriously, peculiar name change or not. And it warrants becoming familiar with IoT goals, concepts, software and hardware. As a reader of RTC Magazine, chances are you're already well on your way.

Advantech Irvine, CA (949) 420-2500




What the Smart Grid Can Learn from the iPhone Given the rapidly changing dynamics and requirements for power monitoring and control on the emerging Smart Grid, instrumentation needs to be adaptable and upgradeable via software on a versatile hardware platform.

by Brett Burger, National Instruments

Ten years ago, developing an application for a phone took a lot of expertise in both hardware and software. Companies like Ericsson, Motorola, and Nokia had teams of talented engineers and programmers working on phone designs. From these groups sprouted a variety of phones with the standard set of applications found in the early 2000s: phone, address book, text messaging, maybe a game or two. Then the iPhone with its iOS happened, and Android shortly thereafter. These platforms took the expertise of the "phone engineer" and focused it on the hardware component layout, operating systems (OSs), servicing middleware, software/hardware integration, and the application programming interface (API). The API and development tools enabled software developers to become phone application designers. Programmers for these platforms don't need to know what processor the phone is running or the intricacies of the OS; they just need an understanding of the platform development environment and the hooks to the hardware capabilities... and of course their market-differentiating idea for the application they want to develop. The result: millions of apps, billions of dollars for the economy, more productivity and entertainment for phone owners, and vendors who don't have to service every corner of the app software market to sell a phone.

The smart grid needs this. Research, technology, grid topologies, and standards are changing too fast for traditional grid devices to keep up. The smart grid needs devices built on platforms to foster the type of innovation among domain experts that we saw with phones. Increasing distributed renewable generation like wind and solar, aging assets, evolving standards, changing loads, and growing demand are all challenging grid operators. There is also the challenge of change, the unknown. A grid with five percent distributed solar today may have 10 percent or more just five years from now, and the growth may come from a completely different geographic region.
How does that impact the ability to forecast and control a grid? Will grid operators simply need more system controllers for a larger system or will they need to have different capabilities?


Figure 1 With access to the hardware functionality of smart grid devices through a platform API, utility experts and researchers can create applications specific to their needs without having to design from the ground up.

How do Tesla cars and Powerwalls impact neighborhood grid demand? The job of a planner at a utility company is growing more difficult as the solution requires assumptions that can be quickly rendered invalid by changing government incentives, regulations, or the globally driven price of fuel. To further exacerbate the problem, utility companies may experience completely unique problems or sets of challenges. Arizona and Southern California have large solar installations, West Texas has large wind farms in remote areas, and England has large offshore wind farms and high-voltage DC interconnects with neighboring countries' grids. These unique feature sets break up the market into smaller segments that can be more difficult for vendors to serve. Unique instrumentation requests from utilities are often met with requests for large non-recurring engineering costs or guarantees of high unit volumes. Even when these requests are financially viable to the utility, the development time can be lengthy.

Figure 2 The Grid Automation System from NI is designed on an open platform that lets domain experts customize the device, using LabVIEW and/or C/C++ for specific measurement needs.

A shift in engineering tools from purpose-built embedded systems to more open, flexible software-designed systems will spur the rate of innovation and help solve the challenges of change and uniqueness for utility companies. Grid measurement and control devices, sometimes referred to as intelligent electronic devices, need to provide a method by which grid experts can modify their functionality: an API. A synchrophasor measurement unit (PMU), a power quality analyzer, a remote terminal unit, and a digital fault recorder are all examples of common devices installed on the grid. From a hardware perspective, each of these devices connects to sensors, potential transformers and current transformers, and samples waveform data through analog-to-digital converters (ADCs). Various processing elements in the device, such as CPUs, FPGAs, or digital signal processors (DSPs), perform the waveform processing and power-related analysis. Finally, results are communicated to grid operations or the Cloud using various protocols and physical communication layers.

A hardware teardown of these devices would show that the building block components are very similar. The difference in functionality is essentially software and firmware. Yet domain experts, outside of the ones hired by traditional vendors, have no way to access the hardware functionality. With a platform-based approach and an API, grid engineers can modify existing technology to solve unique challenges faster, in a way that meets their needs without the influence of a broader market. Three specific use cases for platform design flexibility are the merging of existing functionality, better interoperability, and updating technology because of research or standards organizations (Figure 1).
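To make the "sample the waveform, then process it in software" pipeline concrete, here is a minimal sketch of the kind of computation a PMU performs: estimating the magnitude and phase angle of a nominally 60 Hz waveform from one cycle of ADC samples using a single-bin DFT. The sample rate, scaling, and test signal are illustrative assumptions, not taken from any particular device or from the C37.118 standard.

```python
import cmath
import math

F_NOMINAL = 60.0          # nominal grid frequency, Hz
SAMPLES_PER_CYCLE = 32    # assumed ADC samples per nominal cycle

def estimate_phasor(samples):
    """Single-bin DFT over one cycle of samples.
    Returns (RMS magnitude, phase angle in degrees)."""
    n = len(samples)
    # Correlate the samples against one cycle of a complex exponential.
    acc = sum(samples[k] * cmath.exp(-2j * math.pi * k / n) for k in range(n))
    # Scale so the magnitude comes out as an RMS value.
    phasor = acc * math.sqrt(2) / n
    return abs(phasor), math.degrees(cmath.phase(phasor))

# Synthesize one cycle of a 120 V RMS waveform shifted by -30 degrees,
# standing in for the output of the ADC front end.
samples = [
    120 * math.sqrt(2) * math.cos(2 * math.pi * k / SAMPLES_PER_CYCLE
                                  - math.radians(30))
    for k in range(SAMPLES_PER_CYCLE)
]

mag, angle = estimate_phasor(samples)
print(round(mag, 1), round(angle, 1))   # 120.0 -30.0
```

A real PMU adds GPS-disciplined time stamping, off-nominal frequency tracking, and the filtering requirements of C37.118, but the core of the measurement is this correlation, which is exactly the kind of routine a platform API would let a grid engineer replace or extend.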

Merging Functionality

A good example of merging functionality is large scale wind or solar. Renewable generation typically connects to the grid through DC/AC inverters that can impact the power quality of the grid by adding harmonic noise. Additional environmental data, such as

Preparing Today for the Grid of Tomorrow

Gathering reliable, real-time data from all areas of the grid is critical to identifying problems early and preventing power disruptions. To keep the grid running consistently, operators must be able to gather data from a wide range of measurements and quickly gain insight from that data to monitor the overall health of the grid. Software-designed systems provide customized measurement solutions that can be upgraded in the future as new grid modernization challenges arise.

National Grid UK, the transmission system operator for nearly 20 million people in the United Kingdom, deployed an advanced, upgradable grid measurement system to provide better operational data on the condition of the UK grid. Like many energy providers, National Grid UK is facing the challenges that come with a rapidly changing grid. The company selected the NI platform (LabVIEW software and CompactRIO hardware) to develop a flexible, powerful, and connected measurement system capable of gathering and analyzing large amounts of data from anywhere on the globe to better detect grid-wide trends.

National Grid UK outfitted over 100 substations with permanent devices and 25 portable units with CompactRIO devices at the core. The CompactRIO devices store the data locally until it is pulled up to a database. With these systems, National Grid can see grid-wide trends in power quality and, with full access to data, use a specific location for further analysis if needed. All of this data is communicated over a rugged industrial network from locations throughout England and Wales to any user with an Internet connection, anywhere in the world. Compared to its existing infrastructure, implementing a smarter, more connected system has allowed National Grid UK to manage change, optimize energy sources, and plan for the future grid.
In fact, with NI systems, National Grid UK has increased measurement capability on the grid by 400% to help improve reliability and manage renewable energy sources.



Figure 3 The PMU LabVIEW project, with standard C37.118.1-2011 functionality, is available as part of a developer library for smart grid device design.

solar irradiance and temperature, may also be helpful. Having a single device to measure and alert on total harmonic distortion, load, phasors, irradiance, and temperature may be advantageous to transmission operators with remote solar, but the market for that specific device would likely be too small to merit a new product. This would leave the option to purchase multiple devices for full functionality coverage, or deal with the impacts of imperfect visibility into grid quality status.

Changing communication standards can make interoperability between devices challenging. A common communication protocol for utilities in North America is currently Distributed Network Protocol 3 (DNP3), but when it comes to the future of communication and the smart grid, there are many options on the horizon. Standards like IEC 61850, and groups like the Industrial Internet Consortium (IIC) and the Smart Grid Interoperability Panel (SGIP), are spearheading the Industrial Internet of Things (IIoT) trend for machine-to-machine (M2M) communication on the grid and make the future of smart grid technology look promising, albeit still a work in progress. The ability to modify the communication scheme can be just as important as the functionality itself, because it enables device interoperability and migration to new standards. A hardware platform with dual, software-programmable ports can communicate over the legacy and the new protocols simultaneously.

Sometimes the technology just needs to advance to solve new problems. The functionality of a PMU is defined by the IEEE standard C37.118. This functionality was modified between the 2005 and 2011 revisions of the standard to include faster measurement capability. Renewable generation adds dynamic properties to a


grid with the controllers on inverters and the fact that the wind and sun are not constant. Think gusts and clouds. Software-designed instruments built on a platform can more easily adapt and upgrade to higher-performing standards, such as faster PMU report rates, because resources were not capped for optimization. With smartphones, users typically expect three to four software technology upgrades before the hardware needs a refresh. A similar model deployed to intelligent grid devices could enable utility companies to have better information with fresh software technology for 10 to 20 years.

Many instrumentation vendors already utilize platform-based design, leveraging board layouts and low-level drivers across a product line, but they stop short of an open ecosystem with an API designed for end users. The NI Grid Automation System is one example of a smart grid device designed as a platform for end-user access. There are terminals to connect to the high-voltage and current utility sensors and multiple ports for communication, but most of the functionality between the two can be defined by the user. This functionality can include waveform signal processing on a programmable FPGA with DSP slices, power analysis on a multicore processor, and communication protocols such as DNP3, C37.118, and IEC 61850.

The Grid Automation System can be fully programmed from the ground up, but it ships with a preset personality that covers standard PMU functionality. Grid owners who want a "PMU that can also..." can start from the open software personality and add on. By eliminating the need to redesign a wheel, or in this case a PMU, the next-generation PMU or a PMU with custom functionality goes from design concept to field in much less time. Built around the NI CompactRIO embedded controller and programmed with NI LabVIEW software or C/C++, the Grid Automation System helps connect and process better information about unique situations within a utility grid (Figure 2).
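The PMU functionality mentioned above rests on synchrophasor estimation. As an illustrative sketch, and not the filtering chain that C37.118 actually specifies, a single-cycle DFT phasor estimate can be written as:

```python
import cmath
import math

def phasor_estimate(samples):
    """Estimate a phasor from exactly one nominal cycle of samples
    using a single-cycle DFT.  Returns (rms_magnitude, angle_rad)."""
    n = len(samples)
    acc = sum(x * cmath.exp(-2j * math.pi * k / n)
              for k, x in enumerate(samples))
    peak = (2.0 / n) * acc              # peak-amplitude phasor A*e^(j*phi)
    return abs(peak) / math.sqrt(2.0), cmath.phase(peak)

# A 120 V RMS waveform sampled 32 times per cycle, shifted by 0.5 rad.
wave = [math.sqrt(2) * 120 * math.cos(2 * math.pi * k / 32 + 0.5)
        for k in range(32)]
```

Running `phasor_estimate(wave)` recovers the 120 V RMS magnitude and the 0.5 rad phase angle; a real PMU additionally time-aligns the angle to a GPS-disciplined reference.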
The concept of programmable platform hardware democratizes the approach to smart grid technology to the benefit of all parties involved. Power engineers working for utilities have the experience to increase grid uptime with the right information, but likely don't have much experience laying out ball grid array processors or developing the glue logic needed to connect an ADC to an FPGA. With a platform-based approach, power engineers can use their expertise to gain better insight into their grid. On the other side, smart grid vendors can focus more time on designing open, flexible systems, and less time trying to determine the feature set and margins required to address the top 80 percent of market applications. Power consumers, the paying public, get a more reliable, intelligent grid that can easily integrate new generation technologies, save money on energy bills, help restore power faster after storms, and of course still charge millions of iPhones (Figure 3).

National Instruments
Austin, TX
(512) 794-0100


The Industrial Internet Will Help Solar Arrays Actually Reduce Carbon Emissions

As renewable energy sources at the edge of the current power grid continue to grow across neighborhoods and businesses, microgrids with edge intelligence and peer-to-peer communication are necessary to deliver on the promise of green energy.

by Brett Murphy, Real-Time Innovations

You've covered your roof with solar arrays, your monthly electric bill has plunged, and you're doing your part to reduce carbon emissions. Actually, the first two items are real, but the last is only partially true. What happens when a cloud suddenly shuts down your power generation, but your air conditioner is still running? Somewhere on the other side of your local utility's power grid, a fast-spinning generator of some kind picks up the load. Luckily, your local utility was ready for the sudden load and had that generator spinning extra fast in anticipation, in the process burning additional fossil fuel and adding extra wear and tear to the equipment.

It is this challenge of renewable distributed energy resources that is driving so much research and development in microgrids. Somehow, the sudden changes in local power generation have to be managed. This requires edge intelligence for local control and peer-to-peer communication for low latency and high reliability with no single point of failure. The edge communication and control framework needed to make microgrids work well is emerging,

based on the latest architectures and technologies underlying the intelligent, distributed systems of the Industrial Internet of Things.

The Distributed Energy Resources Challenge

Renewable energy sources like solar and wind hold great promise as replacements for dirty, fossil fuel-based energy. Deploying these renewable energy resources locally promises to diversify our energy generation and make it more resilient. They also help businesses and homeowners do their part to reduce carbon production while reducing energy bills, in exchange for an up-front capital investment. But because wind and solar are intermittent, we can't depend on them fully. We need backup power sources for when the wind dies down and the skies cloud over.

One of the biggest issues, especially with solar, is how quickly the power output can change. Currently, in the US, a local power substation, where high-voltage power is converted to neighborhood distribution voltage levels, monitors power needs and reports them back to the utility. It can then take up to 15 minutes to spin up (or down) a centralized generation plant as necessary. A solar array, however, can lose power in a matter of milliseconds with a fast-moving cloud. An alternate source has to be online and ready to pick up the load in milliseconds. If there isn't sufficient backup, the voltage on the grid can drop and the grid can fail.

As solar energy resources grow in a utility's service area, the utility has to hold more excess spinning reserve as backup. While the sun is shining, power may be flowing from these distributed solar arrays back to the grid, reducing the need for fossil fuel generators. But those fossil fuel generators need to be running and spun up sufficiently to suddenly take over the load if the solar arrays stop producing. So with every solar array pushing power onto the grid, there is an equivalent fossil fuel generator spinning in the background to take over. Those generators are burning fuel and wearing out bearings.

What is needed is 15 to 30 minutes of lead time. If the utility has 15 to 30 minutes after a cloud bank passes over the neighborhood or the wind dies to ramp up a new generator, then it doesn't need to keep the spinning reserve. To provide the time needed, energy storage and load reduction are promising techniques.

Microgrids Integrate Energy Storage and Load Reduction

The primary method of energy storage being deployed today is batteries: very large banks of batteries. Battery storage systems come in different sizes, from house-mounted units to grid-level systems backing up an entire neighborhood or industrial park. They make a lot of sense in combination with solar, as they can be charged while the sun is out and quickly step in when the sun disappears. Utilities will often include battery storage systems in their large solar power plants for this reason. Another very promising technique is a virtual power source. Rather than generating power from backup, load is reduced instead.
A technique called demand response is used to quickly turn off non-critical loads. To a utility, the effect is almost the same as turning on a backup power source. For example, in addition to quickly switching on a battery, you could also have a system in your house that turns off your air conditioner when the sun goes behind a cloud. In a factory, lighting could be reduced, the temperature setting for the cooling system raised, and the electric vehicle chargers in the parking lot switched off for the duration.

The result of integrating energy storage and load reduction (demand response) systems is a much smoother load requirement curve for the utility. This means there is time to ramp up large backup systems. Costs and carbon production are reduced. However, to enable it, we need a distributed communication and control system in place at the edge that is integrated with local sensors and storage. The solar array, or a nearby controller that detects the energy drop, needs to send commands to the batteries or to the load reduction system in a matter of milliseconds. And when the solar arrays power back up, the air conditioners can be turned back on and the batteries switched to charging mode. Fast response times, peer-to-peer communication and intelligent control at the edge are required. A microgrid provides this intelligent communication and control framework and integrates distributed energy resources

Figure 1 The Industrial Internet Reference Architecture requires a “core connectivity standard” to ensure interoperability and security in Industrial Internet Systems.

at a local level. It manages the interaction between the energy sources and loads in the local power grid, as well as interacting with the higher-level utility control system. Beyond smoothing the intermittencies of renewable energy sources, microgrids also enable other valuable use cases. Loads in a neighborhood peak in the evening, when the power from solar is waning; a microgrid can charge batteries from the solar arrays during the day and power the evening load from the batteries. If energy prices vary during the day, then a microgrid can optimize the power it uses or sells to reduce costs or even make money. If the external power grid fails, the microgrid can continue to provide service to its local customers, perhaps by firing up an emergency backup generator in time to take over from the batteries. For an industrial site, a hospital, or a data center, uninterrupted power based on renewables at the core is imperative.
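The price-optimization use case reduces, in its most basic form, to a simple decision rule. The thresholds and the 20% reserve floor below are illustrative assumptions, not utility tariff values:

```python
def battery_action(price_per_kwh, soc, buy_below=0.10, sell_above=0.25):
    """Threshold sketch of price arbitrage in a microgrid: charge the
    battery when grid energy is cheap, discharge (sell) when it is
    expensive, otherwise hold.  `soc` is the battery state of charge
    (0.0 to 1.0); a 20% reserve is kept for outage ride-through."""
    if price_per_kwh <= buy_below and soc < 1.0:
        return "charge"
    if price_per_kwh >= sell_above and soc > 0.20:
        return "discharge"
    return "hold"
```

A production controller would of course use forecasts and round-trip efficiency, not fixed thresholds, but the decision structure is the same.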

A Microgrid Architecture Based on the Industrial Internet

Many microgrid development projects are turning to the Industrial Internet to find modern protocols and edge-intelligence architectures. Ground zero of the Industrial Internet is now the Industrial Internet Consortium (IIC). The IIC was founded in April 2014 by GE, Cisco, Intel, AT&T and IBM to accelerate the creation of an interoperable and secure Industrial Internet. Toward this goal, the IIC supports three major initiatives: 1) to foster an ecosystem of companies, technologies and solutions for the Industrial Internet; 2) to develop an Industrial Internet Reference Architecture (IIRA) and recommend standards for it; and 3) to develop proof-of-concept testbeds to demonstrate solutions for Industrial Internet Systems.

The IIRA is being extended with more detailed guidance and specific technologies and standards. The purpose is to provide a common reference architecture that spans all industries represented by the more than 190 members (as of July 2015) of the IIC. The first version of the IIRA was published in June 2015 with a


high-level overview of the architectural elements needed to deliver an Industrial Internet System (IIS). The Connectivity section contains key elements for any framework that will deliver interoperable and secure communications. The Connectivity architecture (Figure 1) requires a central communication "databus" with gateways to integrate edge devices or sub-networks using legacy communication protocols or existing interfaces. By normalizing all communications through a single standard, this architecture achieves interoperability between devices and applications in the system and simplifies the security implementation.

Three IIC members, RTI, National Instruments and Cisco, are applying the precepts of the IIRA to the microgrid challenge. The Communication and Control Testbed for Microgrid Applications provides a peer-to-peer communication framework to interconnect powerful edge-intelligence controllers and analytics nodes. RTI, NI and Cisco have implemented an instance of the IIRA for this Microgrid Testbed program using RTI's Connext platform based on the standard Data Distribution Service (DDS) protocol, NI's CompactRIO intelligent controllers and Cisco's Connected Grid Routers.

Phase 1 of the testbed program is underway, developing a proof-of-concept demonstration in Austin, Texas. Phase 2 of the program will be a more complete implementation of a microgrid in the simulation labs at Southern California Edison. Once the security and safety of the framework are proven, Phase 3 will involve the integration of a real microgrid in San Antonio, Texas with CPS Energy, the municipal utility. Through this Microgrid Testbed program, RTI, NI and Cisco are showing that an interoperable and secure Industrial Internet solution can streamline the development of microgrids.
Using an Industrial Internet architecture promises a more open, interoperable set of systems where solutions from a wide variety of vendors and system integrators can be applied. Adhering to the IIRA ensures microgrid systems can take advantage of rapid advances in standards, technologies and solutions driven by the huge momentum of the Industrial Internet.

Implementing the Microgrid Communications Framework with DDS

The Phase 1 Microgrid Testbed proof-of-concept demonstration is a greatly simplified “microgrid” system using stand-ins like lightbulbs and fans for power loads, a simple single-phase power circuit driven by a standard wall plug, and relays that can cut out particular loads as needed. NI CompactRIO controllers run logic to mimic a microgrid running under different modes. The normal optimization mode runs with the microgrid connected to the power grid (the wall socket in this case) and power measurements showing nominal power usage by the loads on the circuit. A simulated battery takes over as a power source when the power grid fails (someone pulls the plug from the wall) and a demand response controller immediately cuts most of the loads to ensure the battery is balanced with the remaining loads. Other simulated modes like storm mode and grid synchronization mode round out the current demonstration. Actual batteries, solar arrays and other microgrid equipment will be added to flesh out the proof-of-concept. The control logic for the various modes in the microgrid demonstration runs on different controllers across the distributed system. With DDS as the core connectivity standard, there is a great deal of flexibility in placing the control logic (Figure 2). The databus allows the logic to be redeployed as needed across the distributed edge controllers.
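The demand-response cutover in the demo, immediately shedding loads so the battery can carry what remains, can be sketched as a greedy policy. The load names and priority values below are hypothetical, not from the testbed:

```python
def shed_loads(available_w, loads):
    """Greedy demand-response sketch: keep the most critical loads that
    fit within the available supply and shed the rest.  `loads` maps
    name -> (watts, priority); a higher priority is more critical."""
    ordered = sorted(loads.items(), key=lambda kv: -kv[1][1])
    kept, shed, used = [], [], 0.0
    for name, (watts, _priority) in ordered:
        if used + watts <= available_w:
            kept.append(name)
            used += watts
        else:
            shed.append(name)       # non-critical load switched off
    return kept, shed
```

With a 1.6 kW battery and a 4.5 kW load mix, the policy keeps life-safety loads and lighting and drops the HVAC, which is the behavior the demo's relays exhibit when someone pulls the plug.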

Advantages of Using a DDS-Based Implementation

DDS implements a publish-subscribe model that connects information producers (publishers) with information consumers (subscribers). The overall distributed application is composed of processes called "participants," each running in a separate address space, possibly on different computers. A participant may simultaneously publish and subscribe to strongly typed data streams identified by names called "Topics." The interface allows publishers and subscribers to present type-safe API interfaces to the application.

DDS defines a communication relationship between publishers and subscribers. The communications are decoupled in space (nodes can be anywhere), time (delivery may be immediately after publication or later), and flow (level of reliability and bandwidth control). The DDS middleware automatically discovers publishers and subscribers and connects them based on the Topic they are providing or wish to receive. Quality of Service (QoS) parameters specify the timeliness, frequency, reliability and content delivered to each application (Figure 3).

Figure 2: The IIC Microgrid Testbed Phase 1 proof-of-concept demonstration communication and control architecture, using the DDS protocol for the core connectivity standard. Native DDS controllers communicate peer-to-peer across the DDS databus while legacy devices connect via software gateways. Control logic can be deployed as needed across the controllers in the system.

To increase scalability, topics may contain multiple independent data channels identified by "keys." This allows nodes to subscribe to many, possibly thousands, of similar data streams with a single subscription. When the data arrives, the middleware can sort it by the key and deliver it for efficient processing.

DDS also provides a "state propagation" model. This model allows nodes to treat DDS-provided data structures like distributed shared-memory objects, with local caches efficiently updated only when the underlying data changes. There are facilities to ensure coherent and ordered state updates.

DDS is fundamentally designed to work over unreliable transports, such as UDP or wireless networks. No facilities require central servers or special nodes. Efficient, direct, peer-to-peer communications, or even multicasting, can implement every part of the model. Because DDS does not require a central server, implementations can use direct peer-to-peer, event-driven transfer. This provides the shortest possible delivery latency, a significant advantage over client-server or broker-based designs. Central servers or brokers impact latency in many ways. At a minimum, they add an intermediate network "hop," nominally doubling the minimum peer-to-peer latency. If the server is loaded, that hop can add very significant latency. Client-server designs also do not handle inter-client transfers well; latency is especially poor if clients must "poll" for changes on the server.

Fine control over real-time QoS is perhaps the most important feature of DDS. Each publisher-subscriber pair can establish independent QoS agreements, so DDS designs can support extremely complex, flexible data-flow requirements. Periodic publishers can indicate the speed at which they can publish by offering guaranteed update deadlines. By setting a deadline, a compliant publisher promises to send a new update at a minimum rate. Subscribers may then request data at that or any slower rate. Publishers may offer levels of reliability, parameterized by the number of past issues they can store to retry transmissions. Subscribers may then request differing levels of reliable delivery, ranging from fast-but-unreliable "best effort" to highly reliable in-order delivery. This provides per-data-stream reliability control.

The DDS publish-subscribe model provides fast location transparency, which makes it well suited for systems with dynamic configuration changes. It quickly discovers new participants and data topics, and the system cleanly flushes old or failed nodes and data flows as well.

Scalability Ensures Systems Can Be Extended and Federated

With data-centric publish-subscribe, there is no need to maintain N-squared network connections as there is with message- or connection-centric protocols. As a result, DDS-based systems can scale to over 10 million publish-subscribe pairs, and application code can be reduced by a factor of 10. Since DDS does not assume a reliable underlying transport, it can easily take advantage of multicasting. With multicasting, a single network packet can be sent simultaneously to many nodes, greatly increasing throughput and scale. Most client-server designs, by contrast, cannot handle a client sending to multiple potential servers simultaneously. In large networks, multicasting greatly increases throughput and reduces latency.

Figure 3: With DDS, automatic discovery matches publishers and subscribers of a data topic, and Quality of Service parameters describing application data needs shape the resulting data flows. Specifying and relying on data-centric properties enables decoupled, resilient systems.

Much like the security in a database, where data is secured table by table, DDS allows data to be secured topic by topic. The system integrator specifies which DDS applications have read or write permission for which topics in the system. Only those data topics that need to be made confidential should be encrypted. Topics that do not need to be secured can be left unencrypted, allowing higher performance, and signed with a message authentication code (MAC) to verify authenticity. This fine-grained security model helps mitigate malicious insider attacks by limiting the access of a compromised application or a malicious user.

The Industrial Internet Consortium's Microgrid Testbed program applies the cross-industry Industrial Internet protocol DDS, intelligent edge controllers and industrial network equipment to implement a communication and control architecture for microgrid applications. Long term, the real power of the Industrial Internet of Things (IIoT) is to connect sensor to cloud, power to factory, and road to hospital. To do that, we must change core infrastructure to use generic, capable networking technology that can span industries, field systems and the cloud. Applying the IIoT to microgrids is a key step to enable large-scale, efficient use of green energy.

Real-Time Innovations
Sunnyvale, CA
(408) 990-7400
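The topic-based decoupling described above can be illustrated with a toy in-process "databus." This is a sketch of the pattern only, not the RTI Connext or OMG DDS API; the topic and key names are made up:

```python
class MiniDatabus:
    """Toy topic-based publish-subscribe bus illustrating DDS-style
    decoupling: publishers and subscribers share only a topic name,
    and keyed samples fan out to every matching subscriber."""
    def __init__(self):
        self._subs = {}                       # topic -> list of callbacks

    def subscribe(self, topic, callback):
        self._subs.setdefault(topic, []).append(callback)

    def publish(self, topic, key, value):
        # No broker hop: deliver directly to each matched subscriber.
        for cb in self._subs.get(topic, []):
            cb(key, value)

bus = MiniDatabus()
readings = {}
bus.subscribe("GridVoltage", lambda key, v: readings.__setitem__(key, v))
bus.publish("GridVoltage", "feeder-7", 239.6)   # keyed sample, like a DDS key
bus.publish("BatterySoC", "bank-1", 0.83)       # no subscriber: sample dropped
```

The real middleware adds what this toy omits: network discovery, per-pair QoS matching (deadlines, reliability depth) and per-topic security, which is precisely why a standard implementation is used rather than ad-hoc code like this.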


Pushing the Embedded Boundaries with New 8-bit MCU Peripherals

A new generation of 8-bit microcontrollers features on-chip peripherals that can be set up to function independently of the CPU core, performing distinct functions without intervention from the core or the use of core instructions.

by Jin Xu, Microchip Technology

The newest generation of 8-bit microcontrollers integrates what are known as "Core Independent Peripherals," which bring a new level of design flexibility. From a simple digital timer to a complex AC/DC power supply, these configurable peripherals, along with integrated intelligent analog, offer balanced, customizable solutions to many design challenges. Additionally, they allow 8-bit microcontrollers to be used in areas where traditional 8-bit MCUs have fallen short.

Microcontrollers have come a long way since the introduction of the first Read Only Memory (ROM) MCU more than 40 years ago. 8-bit microcontrollers, in particular, have gone from simple logic controllers to fully integrated smart ICs with analog features. The classic view of an 8-bit microcontroller's peripherals was one where each module was designed to perform a fixed function, and nothing more. The latest 8-bit generation was created to be different from the ground up, a paradigm shift that requires a whole new end-product design approach. These new 8-bit microcontrollers integrate a number of unique peripherals that can perform multiple functions and tasks as needed. In addition, these peripherals can be configured and combined to create new functions that were impossible or difficult to achieve in other types of microcontrollers. Most of these new peripherals can operate independently, without any core supervision, thus reducing reliance on the CPU to perform the necessary tasks. Furthermore, many of these peripherals can be used in sleep mode for the most power-sensitive applications.

Using Core-Independent Peripherals

One of the most commonly used peripherals from this new crop is the configurable logic cell (CLC). This is a very simple yet powerful module that offers standard logic functions (such as AND, OR, XOR, SR latch, and J-K flip-flop), which the user can configure to create logic gates for signal conditioning. The input and output signals of the CLC module can be connected to any of the I/Os, peripherals or registers, via internal connections. It can be used as a simple signal router, glue logic, or an intelligent state machine for wake-up control. A traditional microcontroller requires an external programmable logic device (PLD) or additional coding to get the desired logic controls, and even that setup doesn't provide all of the CLC's flexibility.

The numerically controlled oscillator (NCO) is another configurable module that can be used as a 20-bit timer or a PWM controller with high-resolution, variable-frequency control, as shown in Figure 1. This is not a traditional PWM/timer, where the performance and features are almost exactly the opposite of each other. The NCO, with its higher resolution and linear frequency control, can help to simplify a complex control algorithm commonly used in many power supply applications, such as lighting ballast control with dimming functionality, by controlling the circuit current very accurately. Another use of the NCO is to drive the audio alert of a smoke alarm, as it provides the variable frequency control to easily change the pitch of an alarm tone. The finer control of the generated frequency also allows better tuning of the tone and pitch of the sound generated, without the need for any external analog components.
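The configure-once, run-autonomously idea behind the CLC can be mimicked in software. A minimal sketch, with illustrative gate names rather than actual Microchip register settings:

```python
def make_clc(gate, invert_output=False):
    """Software model of a configurable logic cell: the logic function
    is chosen once, at 'configuration' time, and then evaluated with
    no further setup, the way the hardware cell runs with no CPU
    involvement."""
    functions = {
        "AND": lambda a, b: a and b,
        "OR":  lambda a, b: a or b,
        "XOR": lambda a, b: a != b,
    }
    logic = functions[gate]
    def cell(a, b):
        out = logic(bool(a), bool(b))
        return (not out) if invert_output else out
    return cell

# Configure one cell as a NAND gate for glue logic.
nand = make_clc("AND", invert_output=True)
```

The crucial difference is that the hardware CLC evaluates its configured function on every input edge with zero instruction-cycle cost, which no software model can match.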

Figure 1 An example of a numerically controlled oscillator application.
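The NCO's linear frequency control comes from a phase accumulator whose overflow rate is proportional to its increment. A small simulation of that behavior, using the 20-bit width described above (the increment and tick count are arbitrary example values):

```python
def nco_overflow_count(increment, ticks, bits=20):
    """Simulate an NCO phase accumulator: each clock tick adds
    `increment`, and an output event fires on every overflow.  The
    average event rate is increment * f_clock / 2**bits, which is the
    linear frequency control the article describes."""
    modulus = 1 << bits
    acc = overflows = 0
    for _ in range(ticks):
        acc += increment
        if acc >= modulus:
            acc -= modulus
            overflows += 1
    return overflows
```

Doubling the increment doubles the output frequency, in contrast to a classic timer whose period reload gives a 1/N (non-linear) frequency relationship; that linearity is what simplifies dimming and tone-pitch control.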

Traditional vs. Core-Independent Implementations

Figure 2 Capacitor discharge ignition using an integrated angular timer peripheral.

While these peripherals can be used on their own, the magic really happens when multiple modules are combined to create different functions. For example, Manchester encoding is commonly used in telecommunications and data-storage applications. The traditional Manchester algorithm can be very firmware-intensive and requires CPU resources to manage the task. By using the NCO and CLC modules in tandem to create a Manchester decoder, this function operates entirely in hardware with zero CPU utilization. A Manchester encoder, for its part, can be built with just one CLC module, without any firmware bit-banging.

Other peripherals, such as the angular timer (AT), the signal measurement timer (SMT) and the math accelerator (MathACC), are a bit more sophisticated than the CLC or NCO modules. The AT can be used to measure any periodic signal (such as optical encoders, zero-cross detectors and Hall sensors) for motor-control and AC-power applications, regardless of the motor's speed or the signal's frequency. The AT module performs instantaneous time/angle-domain transformation, all in hardware, and once again without any additional CPU overhead. Handling this task with a traditional microcontroller would typically require multiple timers to count and measure units of time, and then transform the values into the phase-angle domain through mathematical calculations (via firmware), or lookup tables stored in program memory if the period is known. The traditional approach requires more firmware setup and CPU resources for the math, and the size constraint of the lookup table can limit the number of values available, leading to approximation and inaccuracy. The AT module can automatically generate interrupts and events directly, based on the phase-angle value configured by the designer. Additionally, the AT has three Capture/Compare/PWM (CCP) functions at the user's disposal.
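For reference, the Manchester scheme that the CLC and NCO implement in hardware can be expressed in a few lines of software. This sketch uses the IEEE 802.3 convention (0 encodes as high-then-low, 1 as low-then-high); the point of the hardware version, of course, is that it consumes none of these CPU cycles:

```python
def manchester_encode(bits):
    """IEEE 802.3 convention: every data bit becomes two half-bit line
    symbols, guaranteeing a mid-bit transition for clock recovery."""
    return [half for b in bits for half in ((0, 1) if b else (1, 0))]

def manchester_decode(symbols):
    """Recover the bits from the mid-bit transition; a symbol pair
    without a transition is a line error."""
    decoded = []
    for first, second in zip(symbols[::2], symbols[1::2]):
        if first == second:
            raise ValueError("missing mid-bit transition")
        decoded.append(1 if (first, second) == (0, 1) else 0)
    return decoded
```

The guaranteed mid-bit transition is why the NCO (as the recovered-clock source) plus a CLC (as the XOR/latch logic) suffice to decode the stream entirely in hardware.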

Another example of combining multiple peripherals to make a task easier is Capacitive Discharge Ignition (CDI) control, shown in Figure 2, which is often used in small internal-combustion engines. The microcontroller has two primary tasks in a digitally controlled CDI system. First, it must determine the advance firing angle of the spark plug, based on information from the various sensors. Second, it must set the duty cycle of the PWM signals to deliver the firing pulses to the DC/DC converter for spark ignition. Without going through all the design details of an internal-combustion-engine control system, a PIC16F161x MCU-based CDI implementation combining the AT, the CLC and a few other peripherals, such as the SMT and the MathACC, greatly improves the overall performance, as these peripherals effectively manage the RPM calculations and control the spark plug firing time of the engine, once again with very little CPU intervention. A more in-depth analysis of this design is given in application note AN1980, available from Microchip. Figure 3 provides a comparison between the conventional and the core-independent-peripheral methods for designing a CDI system. As demonstrated, the AT method greatly improves system performance by reducing the execution time and CPU usage by more than 50%, while reducing the code size by 40%.

There are a number of ways to generate a PWM signal, either through firmware or hardware; but when it comes to measuring and extracting information from an incoming PWM signal, the options become somewhat limited. The typical approach uses timers and CCPs, with a large number of CPU cycles, to determine the pulse, period or duty-cycle values. It is possible to combine the CLC and NCO modules to accomplish these tasks, with some additional coding. However, the SMT peripheral mentioned in the previous example is a 24-bit counter/timer with advanced clock and gating logic, which allows for different acquisition modes.
These modes include measuring and storing the period and duty-cycle values automatically, with no core supervision or any additional calculations. The SMT is highly useful in any design that measures a PWM signal, such as motor control.

With the increased capabilities offered by these advanced peripherals, one concern for designers is how to manage the limited I/O and the available MCU resources to maximize the performance of the device. Having too many modules and not enough pins has long been a shortcoming that limited the capabilities of traditional low-pin-count 8-bit microcontrollers. With recently added features such as Peripheral Pin Select (PPS), designers can now route any digital signal to any I/O pin, on the fly, without using any external components. A traditional design that requires multiple UARTs might need a high-pin-count microcontroller with several UART modules to get the job done. With the new generation of 8-bit MCUs, this can be accomplished with any microcontroller that has a single UART and PPS or CLC to easily route the communication signals to multiple pins.
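What the SMT captures in hardware amounts to the following arithmetic on edge timestamps. This is a simplified sketch: the real peripheral works in timer counts rather than seconds, and latches the results into registers with no core supervision:

```python
def period_and_duty(edges):
    """Derive period and duty cycle from three captured edge
    timestamps (rising, falling, next rising) of a PWM signal."""
    t_rise, t_fall, t_next_rise = edges[:3]
    period = t_next_rise - t_rise
    return period, (t_fall - t_rise) / period
```

A firmware implementation must service an interrupt per edge and do this division on the core; the SMT's value is that the period and duty-cycle registers are simply ready to read.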

RTC Magazine OCTOBER 2015 | 29

In summary, the newest generation of 8-bit microcontrollers is more capable and powerful than the traditional 8-bit MCU, and can often achieve higher performance than the software-centric approach of 32-bit MCUs by executing many functions with integrated core-independent hardware. Additionally, Core-Independent Peripherals provide more flexibility for designers, giving them the ability to configure and combine several peripherals to create multiple application functions without sacrificing CPU performance or power consumption. These new hardware peripherals remove the traditional dependency on the core, and add determinism to the overall system design.

Microchip Technology, Chandler, AZ. (480) 792-7200

Figure 3 Capacitor discharge ignition implementation comparison
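The spark-advance arithmetic that the AT and MathACC peripherals offload in the CDI design compared in Figure 3 can be sketched in software terms. This is a minimal illustration with invented trigger and advance angles, not the AN1980 implementation: given the RPM-derived crank period, compute the delay from the crank trigger to the firing pulse.

```python
def spark_delay_us(rpm, trigger_angle_deg, advance_deg):
    """Delay from the crank trigger to spark firing, in microseconds.

    The crank trigger is assumed to sit at `trigger_angle_deg` before
    top dead center (BTDC); the spark must fire at `advance_deg` BTDC.
    Both angles are hypothetical values for illustration.
    """
    rev_period_us = 60e6 / rpm                  # one crank revolution
    us_per_degree = rev_period_us / 360.0
    wait_deg = trigger_angle_deg - advance_deg  # angle still to rotate
    return wait_deg * us_per_degree

# At 6000 RPM one revolution takes 10 ms; waiting through 20 degrees of
# the 360-degree cycle therefore takes 10 ms * 20/360, about 556 us.
delay = spark_delay_us(6000, trigger_angle_deg=30.0, advance_deg=10.0)
```

In the core-independent version this division and multiplication is exactly the work the MathACC performs, with the AT converting between angle and time, which is why the CPU-load reduction in Figure 3 is so large.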


cExpress-SL COM Express

TECHNICAL SPECIFICATIONS:
• 6th Generation Intel Core and Celeron Processors
• Up to 32GB non-ECC dual-channel DDR4 at 2133/1867 MHz
• Two DDI channels, one LVDS (or 4 lanes eDP); supports up to 3 independent displays
• 5 PCIe x1 (Gen2, configurable to x2, x4)
• GbE, 4x SATA 6 Gb/s, 4x USB 3.0 and 4x USB 2.0
• Supports Smart Embedded Management Agent (SEMA) functions
• Extreme Rugged™ operating temperature: -40°C to +85°C

Adlink Technology, Inc. Phone: (800) 996-5200 FAX: (408) 360-0222

Scalable GigE Switches

TECHNICAL SPECIFICATIONS:
• Stacking, expandable 1 Gbps Ethernet switches
• Board-level 10-pin headers or RJ-45 jacks
• Eight ports per board, expandable in groups of eight
• Can be used standalone or with a host computer
• Link, activity, and speed LEDs for each port
• Stackable PCI Express (PCIe/104) expansion
• Enclosure configurations with D-sub receptacles, RJ-45 jacks or watertight military cylindrical connectors
• Fanless -40°C to +85°C operation
• AS9100 & ISO 9001 certified

RTD Embedded Technologies, Inc. Phone: (814) 234-8087




Hybrid Devices Maximize Flexibility, Performance and Energy Efficiency for Wearable Technology

The ability to partition tasks among CPU, DSP and programmable logic elements on the same device enables developers to optimize energy, performance and development time. by Dr. Tim Saxe, QuickLogic

Wearable technology plays a pivotal role in the next stage of the Internet of Things by bringing value to individuals through new levels of interconnectedness. For example, wearable applications today range from high-end sports watches that can help improve an athlete’s performance to glucose monitors that can save lives by alerting patients and doctors. A key challenge facing software and hardware designers of wearable devices is how to provide advanced processing capabilities, while maintaining a small form factor and minimal power consumption. Fast time-to-market is important to a product’s success as well, so development time is a key factor, as are development costs and product bill of materials (BOM). Given that wearable technology is still an emerging market, it is critical that developers also maintain flexibility to be able to quickly adapt to changing market needs and trends. For example, devices must be able to interface to new types of peripherals, such as motion sensors woven into clothing. Devices must also be able to support advanced signal processing capabilities to accurately extract data from low-level sensor signals in noisy operating environments. Furthermore, systems need to be able to implement increasingly sophisticated algorithms; i.e., a sports watch needs to be able to determine whether a person is running, bicycling or weightlifting to accurately assess activity level and calorie count. Developers have numerous architectural choices for designing wearable devices. General-purpose CPUs and DSPs provide developers with flexibility through software programmability. However, they do so with higher energy consumption compared to digital logic. Similarly, programmable logic provides excellent energy efficiency and performance, but at the cost of increased development cost and time-to-market. To achieve the performance, energy efficiency and integrated functionality needed to enable next-generation wearable devices,


developers need hybrid architectures that blend software programmability with application-specific accelerators and reconfigurable hardware. Developers can then partition functionality between these different resources so that the most efficient architecture is used for each task based on its complexity and frequency of execution.

Maximizing Energy Efficiency

Good design is about understanding and balancing tradeoffs for a particular application. For example, it is possible to get high performance and fast time-to-market, but the design will likely have higher energy consumption and/or a higher price. Energy efficiency is always an important consideration, typically measured in energy per work unit. For wearable devices, it often is the highest priority. Many engineers begin a design by deciding upon the system’s main processor. They choose an off-the-shelf processor like an ARM or DSP designed to simplify and speed design. Such processors achieve this through generalization of functionality and software programmability.






Table 1 Relationship of design approach to energy consumption, cost, and time-to-market. [Table body lost in extraction; recoverable row labels: Programmable Logic and General Purpose CPU.]






[Table 2 body lost in extraction. Recoverable row labels: Low-level (i.e., sensor reading), Mid-level (i.e., frequency domain) and High-level (i.e., activity assessment); surviving cell fragments include "Programmable Logic" and "Very Low".]



Energy Efficiency vs Cost and Flexibility

This flexibility comes at the expense of energy efficiency and cost, since a general-purpose architecture requires program memory, instruction fetching and other mechanisms to perform what could be implemented more efficiently in a fixed manner directly in logic. In addition, general-purpose CPUs often have superfluous peripherals or functionality that consume energy unnecessarily. As a result, the energy constraints of wearable applications often make it unfeasible to utilize off-the-shelf general-purpose processors. The calculation for estimating the energy consumed by a system is often simplified to:

energy(total) = power(sleep) x time(sleep) + power(active) x time(active)

To minimize energy(total), developers design systems to maximize time(sleep) and minimize time(active). The problem with this simplified approach is that it potentially limits developers to thinking in terms of a single architecture, with energy optimized primarily through software design. For applications that truly need the lowest energy consumption, designers need to rethink how hardware fits into the equation. Energy efficiency comes from optimization of both software and hardware. With a hybrid architecture, this is captured in the energy calculation:

energy(total) = [power(sleep) x time(sleep) + power(active) x time(active)] for the programmable logic
             + [power(sleep) x time(sleep) + power(active) x time(active)] for the DSP
             + [power(sleep) x time(sleep) + power(active) x time(active)] for the CPU

To minimize energy(total), developers need to consider the energy required to perform a task across all of the architectures and determine where to partition it. For example, reading a sensor using programmable logic requires the system to activate much less circuitry than using a CPU. A CPU will also take longer to perform this task, further decreasing energy efficiency compared to programmable logic. Estimating total energy consumption is actually a much more complex calculation. Developers also need to consider the impact of factors such as time-to-wake when determining the energy efficiency of a task. For example, raising a CPU from Sleep to Active mode takes a certain amount of time during which the processor consumes energy but does not do any work. This is, in effect, wasted energy. Thus, time-to-wake needs to be added to the time it takes to perform a task.

The most energy efficient system would be one comprised entirely of specialized logic. Such a system could maximize parallelism, while completely eliminating unnecessary logic. In addition, because processing is performed as quickly as possible, active time would be minimized as well. Although energy efficiency is essential, it is not the only consideration. Table 1 shows the relationship between energy efficiency, cost and time-to-market. Note that cost can be considered in terms of hardware cost (i.e., silicon die) and development cost (i.e., hardware and software design). Designing a system completely in specialized logic comes with the tradeoff of more complex design, leading to higher development costs and longer time-to-market. In truth, the wearable electronics market at this stage is changing too quickly for OEMs to be able to dedicate years to producing an idealized design. A hybrid approach, balancing programmable logic with specialized and general-purpose processor technology, provides the optimal balance of energy efficiency, cost and time-to-market. The challenge for developers, then, is to determine how to partition their system across architectures to achieve the right balance for their application.

Energy Efficiency vs Complexity

Table 2 As the complexity of a task increases, the frequency of its execution, and consequently its impact on energy consumption, drops.
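The hybrid energy(total) calculation can be exercised with a quick back-of-the-envelope model. The duty cycles and power figures below are invented purely for illustration; the point is only that moving the frequent low-level work onto always-on but tiny programmable logic lets the CPU sleep almost continuously.

```python
def energy_mj(partitions):
    """Sum power(sleep)*time(sleep) + power(active)*time(active) over each
    processing engine. Times are in seconds per second of operation,
    powers in milliwatts, so the result is millijoules per second."""
    total = 0.0
    for p_sleep, t_sleep, p_active, t_active in partitions:
        total += p_sleep * t_sleep + p_active * t_active
    return total

# CPU-only partition: the CPU wakes for every sensor read and is active
# 10% of the time (hypothetical figures).
cpu_only = energy_mj([(0.05, 0.9, 30.0, 0.1)])

# Hybrid partition: programmable logic handles the reads (always on but
# drawing very little), so the CPU wakes only ~1 ms per second for the
# high-level work.
hybrid = energy_mj([
    (0.0, 0.0, 0.5, 1.0),        # programmable logic: 0.5 mW continuous
    (0.05, 0.999, 30.0, 0.001),  # CPU: active 1 ms per second
])
```

Even with these rough numbers the hybrid partition lands well below the CPU-only one, which is the argument the equation is making.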

Consider a typical sensor function like calorie counting implemented in a sports watch. The function involves data processing on several different levels:
1) Read accelerometer sensors – 50 times per second
2) Transform accumulated readings into the frequency domain – once per second
3) Assess the activity (i.e., user is walking, riding a bicycle, sitting) – several times per minute
Breaking down a function using this hierarchy can help simplify partitioning. Consider that low-level functions tend to be simple and relatively repetitive. In contrast, high-level functions are more complex and require more decision-making logic. These high-level functions are also where most of an OEM's innovation will be implemented. This hierarchy also reflects a function's relative impact on system energy consumption (Table 2). Because low-level functions are executed much more frequently than high-level functions, they often represent the highest contribution to time(active) and thus significantly contribute to overall energy consumption. For these reasons, optimizing low-level functions for energy efficiency in specialized logic provides the greatest return, because only essential logic is active. In addition, low-level functions are often the simplest. They tend to be fairly linear, with little variation or decision-making involved, making them straightforward to implement in specialized logic. This maximizes development resources by achieving the most gains in energy efficiency for the least design investment. Because high-level functions are executed less frequently, they



Figure 1 In a Hybrid Sensor Processing Unit, sensor algorithms programmed in hardware relieve the CPU from executing instructions for the sensing operations.

have less impact on energy consumption. Consider an assessment function that operates every 5 seconds. This function represents only a small portion of total energy consumption. Thus, implementing this code in specialized logic requires a large investment for little gain in energy efficiency. With a hybrid architecture, high-level functions can be implemented using software on a CPU. This has a minimal impact on energy efficiency but maximizes the speed with which these functions can be developed. This, in turn, results in faster development cycles, enabling OEMs to integrate innovation and add value to products quickly and with the greatest flexibility. Thus, by trading off minimal losses in energy efficiency, substantial gains can be achieved in terms of development time, design flexibility and system cost. A hybrid architecture supporting DSP functionality provides additional levels of efficiency. Mid-level functions like signal processing are difficult to implement in specialized logic, and inefficient when implemented using a general-purpose CPU. With a DSP, these functions can be optimized for energy, cost and time-to-market (Table 1). For example, QuickLogic provides programmable devices that implement a complete ARM + DSP + programmable logic platform. Different logic devices offer varying capacities and operating speeds to match an application's specific requirements (Figure 1). One advantage of using a programmable logic platform is that the architectures themselves are flexible. For example, a general-purpose CPU with a coprocessor is fixed in its implementation. As the wearable devices market matures, the coprocessor may no longer offer the functionality needed by the system. With a hybrid architecture, developers have the flexibility to choose a DSP and ARM architecture that matches the requirements of these functions. As the system evolves over time, the capabilities of these processors can easily evolve as well.
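The three-level hierarchy described earlier (sensor reads 50 times per second, a once-per-second transform, an assessment several times a minute) makes the partitioning argument easy to quantify. With invented per-call execution times (illustrative only, not measured data), the low-level task dominates total active time despite each call being tiny:

```python
# Invocation rate (Hz) and assumed active time per invocation (ms).
# All numbers are hypothetical, chosen only to show relative magnitudes.
levels = {
    "sensor read (low-level)":      (50.0,   0.1),
    "FFT (mid-level)":              (1.0,    2.0),
    "activity assess (high-level)": (1 / 30, 5.0),
}

def active_ms_per_s(rate_hz, ms_per_call):
    """Active milliseconds accumulated per second of operation."""
    return rate_hz * ms_per_call

share = {name: active_ms_per_s(rate, t) for name, (rate, t) in levels.items()}
# The 50 Hz sensor read contributes the most active time, so moving it
# into specialized logic yields the biggest energy return.
```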
Developers begin their design by considering which functions can be best implemented in hardware (Table 3). From the remaining functions, appropriate ones are implemented using the DSP, and the rest are implemented using the ARM processor. QuickLogic helps simplify programmable logic design by providing RTL libraries for developers to use. In addition, the company offers

a Flexible Fusion Engine Algorithm Tool (FFEAT) that can accelerate design for engineers who aren't experts in RTL. This enables developers to design at a higher abstraction layer, similar to how DSPs can be coded in assembly or C. While FFEAT does introduce performance inefficiencies compared to native RTL implementations, it can substantially accelerate design. This enables designers to migrate certain fixed or mature mid- and high-level functions to programmable logic for higher energy efficiency with less of a negative impact on flexibility and development cost. Today's general-purpose processors do not provide the energy efficiency required for wearable applications. Similarly, a solely programmable logic approach does not provide sufficient ease of development or design flexibility. The wearable devices market can't afford either of these extremes. By taking a hybrid approach, developers can optimally blend the advantages of CPUs, application-specific processors like DSPs, and programmable logic. Developers can then maximize energy efficiency by optimizing the parts of their system that consume the majority




Table 3 Implementing a function at the right architectural level provides the best balance between energy efficiency and design complexity. [Table body lost in extraction; recoverable structure: rows for reading a sensor, signal processing and activity assessment, with Energy Efficiency and Design Complexity ratings per architecture (programmable logic, DSP, CPU).]

of energy using programmable logic. The functions that require the least energy, which are typically those where an OEM's innovation resides, can be implemented using software for the most flexibility and lowest development cost. In this way, OEMs can achieve optimal performance and energy efficiency. At the same time, they can simplify development and maintain the design flexibility needed to address new market opportunities at the lowest cost.

QuickLogic, Sunnyvale, CA. (408) 990-4000

Artesyn VPX3000: High Bandwidth Applications in a Very Small Package

The VPX3000 is a convection cooled, fanless enclosure that accepts up to three 3U conduction cooled VPX modules. It includes a configurable I/O Adapter Board (IAB) designed to mate with Emerson's iVPX7225 processor blade, itself based on the Intel 3rd generation Core mobile chipset. The IAB routes I/O from the payloads to the front of the enclosure and is designed to be customizable. The VPX3000 includes a VITA-62 compliant power supply slot fitted with a DC power supply with a MIL-38999 power input connector and a front panel switch. Two Data Plane Fat Pipes from each slot are connected in a full mesh configuration. Two Control Plane Ultra Thin Pipes from each slot are routed to the IAB as 1000Base-T interfaces. USB 2.0 and a DisplayPort interface are also routed to the IAB in all variants. The VPX3000 has been designed to minimize Size, Weight and Power (SWaP) while providing a powerful system-level solution including power, storage and processor elements. A rugged variant, targeted at Mil/Aero/Government applications, includes three MIL-38999 connectors for I/O from each slot. An alternative variant includes commercial connectors on the IAB and is intended for development use.

13469 Middle Canyon Rd., Carmel Valley, CA 93924


CPUs and FPGAs: Making the SoC Connection

Since we are still in the early days of SoC offerings, embedded systems developers can expect significant advances in performance over the next few years as vendors continue to boost silicon resources and race to provide tools to most easily take advantage of them. by Rodger Hosking, Pentek, Inc.

In virtually every aspect, CPUs and FPGAs are radically different devices. And yet, they often compete for some of the same embedded system tasks. Choosing the best approach depends not only on the capabilities of each device, but also on the often disparate expertise of the engineers promoting their respective development methodologies. To make matters even more complicated, SoC (system-on-chip) technology now combines CPUs and FPGAs within the same device. Here, efficient interoperability becomes essential to meet stringent real-time performance levels. FPGAs are user-configurable hardware logic, while CPUs are fixed arithmetic engines executing user programs. Table 1 details how these considerable differences translate into application tasks and implementation. One of the latest CPU designs, the ARM Cortex-A72, supports up to four 64-bit ARMv8-A processor cores per cluster operating at clock rates up to 2.5 GHz. It targets power-sensitive, high-performance mobile applications, and features a NEON 128-bit SIMD engine for efficient fixed- and floating-point vector processing.

Interfaces to other processors and external memory are based on AMBA, discussed later. The latest FPGAs from Xilinx, such as the UltraScale+, provide over 11,000 DSP engines, the essential building blocks for signal processing algorithms. They are aimed at high-performance embedded computing requirements with configurable interfaces for exotic peripherals and standard resources like DDR4 memory, PCIe Gen 4, and 100 GbE. These differences drive the typical task assignments shown in Table 1. The complex aspects of high level decisions and data analysis are usually easier to implement with a CPU. Compute intensive signal processing or data crunching tasks can take excellent advantage of the numerous DSP blocks found in FPGAs. Common examples, such as FFTs, matrix processing, and digital filtering can exploit the benefits of thousands of DSP blocks operating in parallel. Furthermore, FPGA hardware surrounding these blocks can be tailored for each application. This includes local data buffers, specialized FIFOs, and optimized interfaces to




and from external sensors, storage devices, networks, and system components. Choosing between an FPGA and a CPU for a given function is sometimes obvious because of its nature, but other times it could go either way. If so, the deciding vote is often cast for the CPU because a C program is easier to develop, maintain and upgrade. Another important factor: it is often easier to hire a C programmer than an FPGA design engineer! In spite of their profound differences, CPUs and FPGAs have each staked out roles as essential elements in embedded systems.

Table 1 Embedded system factors for CPUs and FPGAs.

Factor               | CPU                                        | FPGA
Architecture         | Fixed arithmetic engines                   | User-configurable logic, DSP blocks and data flow
Appropriate tasks    | Decision making; complex analysis; lower data rate computation; block-oriented tasks | Compute-intensive algorithms; massively parallel operations; higher data rate computation; streaming tasks
I/O                  | Fixed, dedicated I/O ports                 | User-configurable I/O ports
Operation            | Program execution                          | Registers determine modes and define operating parameters
Ease of programming  | C programming simplifies development tasks | HDL programming mandates hardware awareness
Maintenance/Upgrades | Less difficult                             | More difficult

System-on-Chip Devices

Recognizing this symbiotic relationship, many vendors now offer system-on-chip (SoC) devices combining CPUs and FPGAs in a single monolithic silicon device. It is important to note that "SoC" also refers to highly-integrated devices that include analog interfaces, video and network ports, human interfaces, as well as RF and wireless interfaces, but not necessarily FPGAs. These SoCs are used extensively for consumer market products such as vehicles, smart phones, tablets, appliances, printers and entertainment systems. But to address the toughest requirements, real-time embedded systems often need a much narrower class of SoCs with the extra horsepower of large FPGAs. Leading the industry for such SoCs are Xilinx and Altera. Xilinx offers the Zynq family of SoCs that combine ARM processors with Xilinx FPGAs. Their latest offering is the Zynq UltraScale+ series, whose processing section includes a Quad-Core ARM Cortex-A53 application processor, a Dual-Core ARM Cortex-R5 real-time processor and a Mali GPU (graphical processing unit). To match a wide range of embedded applications, the programmable logic section includes a different mix of 16 nm FPGA resources in each of the eleven members of the series. With almost a million logic cells and over 3,500 DSP slices, they deliver significant computational power. Altera competes with the Stratix 10 family of SoC devices, also

using the Quad-Core ARM Cortex-A53 CPU. Altera's latest 14 nm FPGA technology offers ten different resource-balanced versions, one topping the list with over 5 million logic cells and 5,760 DSP blocks. Unlike Xilinx's counterpart, the DSP blocks of Stratix 10 can handle not only single- and double-precision fixed point operations, but also single-precision IEEE 754 floating point functions. This allows designers to achieve a much higher dynamic range for sensitive signal processing applications, and saves the often tedious task of optimizing scaling to avoid the saturation and underflow conditions that can plague fixed point hardware. Because of parallel hardware structures connected directly to I/O ports, FPGAs can process and deliver high-rate continuous streaming data. CPUs are much more effective when processing data blocks in system memory. The advent of SoCs has thus created fundamental interface and data flow inconsistencies between FPGAs and CPUs.
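The scaling problem that floating-point DSP blocks sidestep is easy to demonstrate. Below is a sketch of a saturating 16-bit fixed-point accumulator against a floating-point one (an illustrative model of the general issue, not of Stratix 10 behavior specifically):

```python
def sat16(x):
    """Clamp to the signed 16-bit range, as saturating fixed-point
    hardware would."""
    return max(-32768, min(32767, x))

acc_fixed = 0
acc_float = 0.0
for sample in [20000] * 4:          # four large input samples
    acc_fixed = sat16(acc_fixed + sample)
    acc_float += float(sample)

# The fixed-point accumulator pins at 32767, losing information;
# floating point keeps the true sum of 80000. Avoiding this in fixed
# point is exactly the manual scaling work the article describes.
```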

AMBA Interfaces

To help resolve this challenge, ARM Ltd. developed the Advanced Microcontroller Bus Architecture (AMBA) nearly two decades ago. Since then, it has been widely adopted as an open, well documented, license-free interface protocol between CPUs and peripherals, including FPGAs. One of the most prevalent versions of AMBA is the AXI4 (Advanced eXtensible Interface Rev 4) specification. It presents a comprehensive standard for transferring data between master and slave devices, for data widths from 32 to 1024 bits in burst lengths of 1 to 16. A master and a slave device, both having AXI4-compliant interfaces, can be connected together and communicate, regardless of the nature or function of the devices. Another popular variation is the AXI4-Lite specification, a subset of AXI4 for very simple devices that may not need the extra interface overhead required for full AXI4. Here, the data width is only 32 or 64 bits and the burst length is limited to single


transfers. This is ideal for reading and writing memory-mapped status and control registers, often satisfying the needs of most small peripheral devices. Yet another derivative is the AXI4-Stream specification, which eliminates the addressing of AXI4 and AXI4-Lite. Instead, data bytes can be organized in packets of convenient size, and packets can be combined into frames tailored to a wide range of applications like specialized video and imaging. Each byte can be a data byte, a position byte to mark the relative location of data bytes, or a null byte to serve as a filler. AXI4-Stream supports only unidirectional transfers from the master device to the slave device. One important aspect of all of these AXI4 specifications is the concept of 'interconnects'. An interconnect is circuitry that joins one or more master interfaces to one or more slave interfaces, providing not only the required data path connectivity, but also adjusting the required data width and clocking for all devices. Nevertheless, if a single master needs to connect to a single slave, and the data widths and clocks are the same, they can be connected directly without the interconnect. Figure 1 shows how AXI4 interfaces connect some typical blocks of a software radio transceiver. Note the examples of AXI4-Stream for the A/D and D/A converters, and AXI4-Lite for a simple FPGA peripheral in IP7. The AXI interconnect contains the FPGA logic that allows the CPU to access three IP blocks. Direct AXI4 connections between two IP blocks are possible when the clocking and data widths match. AXI4 makes life much easier for SoC developers by supporting connections among a diverse range of components through a common interface standard, with interconnect blocks to realize system topology and reconcile data widths. Another important point is that AXI4 can be extremely effective in reducing power

Figure 1 Different types of possible AXI connections for a software radio transmitter.


and boosting transfer rates compared to competing strategies. This is extremely important for high performance FPGAs in real-time embedded computing systems.
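The AXI4-Stream byte qualifiers just described (data, position and null bytes, signaled by the TKEEP/TSTRB sidebands in the actual specification) can be modeled with a toy packet walker. This is an illustrative sketch of the framing idea only, not an implementation of the protocol:

```python
from dataclasses import dataclass

# AXI4-Stream qualifies each payload byte; this model tags bytes
# explicitly instead of carrying separate TKEEP/TSTRB signals.
DATA, POSITION, NULL = "data", "position", "null"

@dataclass
class Beat:
    payload: bytes
    kinds: tuple          # one kind tag per payload byte
    last: bool = False    # stands in for TLAST, marking end of packet

def packet_data(beats):
    """Recover only the data bytes of a packet, skipping position
    markers and null (filler) bytes."""
    out = bytearray()
    for beat in beats:
        for b, kind in zip(beat.payload, beat.kinds):
            if kind == DATA:
                out.append(b)
        if beat.last:
            break
    return bytes(out)

pkt = [
    Beat(b"ABX", (DATA, DATA, NULL)),
    Beat(b"CD", (DATA, DATA), last=True),
]
# packet_data(pkt) yields b"ABCD": the null filler byte is dropped.
```

The interconnect's width-conversion job is essentially this repacking done in hardware, one reason a matched-width master and slave can skip the interconnect entirely.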

Tools Make It All Happen

For all of these obvious benefits, both Altera and Xilinx have harnessed AXI technology for their latest development tool suites, not only for SoC development but even for IP in processor-less FPGAs. Figure 2 is a representation of the development tool methodology for SoC design. Tasks are created to satisfy system requirements, and then initially partitioned as candidates for execution by either the CPU or the FPGA. During development and modeling of each task, it may become apparent that a task needs to be reassigned to the other resource. Additional reassignment or optimization may occur when CPU and FPGA tasks are combined and tested during system integration. Xilinx's SDSoC Development Environment supports their Zynq SoC devices. Familiar C/C++ design inputs to Eclipse compiler tools help developers determine which tasks dominate the CPU workload. Such tasks might be shifted to the FPGA programmable logic to help achieve the required real-time performance. SDSoC coordinates execution of both the CPU and FPGA tasks, showing the effects of different partitioning and different implementations of tasks within each partition. Tasks assigned to the FPGA are directed towards the Vivado Design Suite, which uses HLS (high level synthesis) to create IP from the C/C++ design input. Alternative design input choices include HDL using Verilog or VHDL and block diagram tools like MATLAB using System Generator. In addition, the Vivado IP Catalog is an extensive collection of plug-and-play IP modules for signal processing, communication, imaging, matrix processing, data manipulation, coding and formatting. Third-party IP and RTL design entries can be turned into compatible IP modules using the Vivado IP Packager. Regardless of the Vivado design input, all of these newly-created IP modules use AXI4 interfaces compatible with the existing IP Catalog modules.
Vivado IP Integrator streamlines the installation of AXI4 interconnects as required to ensure interoperability between IP modules. SDSoC helps link these AXI4 interfaces to compatible AXI4 links on the ARM CPU. The SDSoC and Vivado thus produce a fully synthesized modular SoC design complete with memory mapping, modeling, debugging tools, test benches, and timing analysis. Altera’s SoC Embedded Design Suite includes the Altera edition of the ARM DS-5 Development Studio

Figure 2 Development tool methodology for SoC design.

to support the ARM CPU on Arria and Stratix SoCs. Based on Eclipse Tools, this open source extensible development environment includes compiler, debugger and execution tracer. Altera’s QSYS System Integration Tool supports FPGA development tasks by graphically connecting IP modules from Altera and IP partners. Because they are equipped with AXI4 interfaces, QSYS automatically configures the required interconnects to implement the subsystem. QSYS creates custom IP using schematic or HDL design inputs. Quartus II System Level Software integrates the Embedded Design Suite with QSYS for a complete development environment. It includes Altera’s IP modules, and resources for modeling, analyzing and debugging the interaction between the ARM CPU and FPGA resources. It optionally includes DSP Builder and support for OpenCL. It is clear that both Xilinx and Altera are competing directly for high-end SoC designs by offering powerful ARM CPUs tightly coupled to powerful FPGAs, not only at the device level, but also with comprehensive and ambitious design tool suites. In fact, system integrators may be tempted to choose the SoC vendor based upon the effectiveness of the tools, more so than on silicon features. But, switching SoC vendors is a major commitment

for any company, and the potential benefits must be carefully weighed. Acquiring the training, skills, design methodology, expertise, culture, and effective points of contact for support from a new vendor is often decided only at the highest levels of corporate management.

Pentek, Upper Saddle River, NJ
Altera, San Jose, CA. (408) 544-7000
Xilinx, San Jose, CA. (408) 559-7778



Intelligent IoT Gateway Starter Kit Based on Intel IoT Gateway

A new IoT Gateway Starter Kit from Adlink combines Adlink’s MXE-202i intelligent IoT gateway based on Intel Atom E3826 processors, Adlink’s EdgePro IoT device & sensor management application, one light sensor and corresponding siren output, Modbus TCP module, and accessories, all utilizing industrial open standard protocols with security functions powered by the Intel IoT Gateway. Adlink’s IoT Gateway Starter Kit simplifies device-Cloud connection, accelerates IoT application development, and speeds deployment for a wide variety of application environments, such as industrial automation, smart buildings, smart parking systems, and agriculture. The Adlink EdgePro IoT device & sensor management application runs on the Intel IoT Gateway, integrating the Wind River Intelligent Device Platform (IDP) XT and McAfee Embedded Control to provide complete, pre-validated communication and security. EdgePro enables device and sensor management via plug-in(s) for field protocols including ZigBee (Home Automation Profile) and the commonly adopted fieldbus Modbus TCP for industrial automation, all easily configured with sensors or I/O nodes. Interaction across devices/sensors is accomplished by an event execution engine, and a user-friendly web-based dashboard allows remote monitoring of status and actuator control with RESTFul web-service APIs. In addition, EdgePro enables simple configuration of reliable and secure connectivity with Amazon and Windows Azure Cloud. Adlink’s Matrix MXE-202i series embedded IoT Gateway Platform is based on the Intel Atom SoC processor E3826, with industrial-grade construction meeting a wide variety of specific industrial needs. The MXE-202i presents a sturdy aluminum housing withstanding industrial grade EMI/EMS to an EN 61000-6-4, 61000-6-2 specification, and is fully operable under even harsh conditions with operating shock tolerance up to 100 G and an optional extended -20°C to 70°C operating temperature range. 
The MXE-202i provides two GbE LAN ports, two COM ports, two USB 2.0 and one USB 3.0 host ports, four optional isolated DI and four isolated DO channels, dual Mini PCIe slots (one with mSATA support), and a USIM socket supporting wireless links such as WiFi, Bluetooth, and 3G cellular to ensure interoperability between systems and maximize industrial connectivity for most application requirements. The MXE-202i also includes Adlink’s proprietary SEMA (Smart Embedded Management Agent) application for quick setup of remote device management and analysis through Adlink’s SEMA Cloud, enabling timely, flexible, and precise monitoring and collection of system health and status information from the hardware. The Adlink IoT gateway MXE-101i, based on the Intel Quark SoC X1021, is also available.

ADLINK Technology, San Jose, CA (408) 360-0200.
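To make the dashboard's RESTful web-service access concrete, the sketch below polls a gateway node's status and builds an actuator command. It is a minimal illustration only: the endpoint paths, JSON field names and the siren payload are assumptions for this example, not EdgePro's documented API.

```python
import json

# Hypothetical gateway address for illustration.
GATEWAY = "http://192.168.1.100:8080"

def sensor_status_url(node_id):
    """Build the (assumed) status URL for a sensor node."""
    return f"{GATEWAY}/api/v1/nodes/{node_id}/status"

def parse_status(body):
    """Extract the fields a dashboard might display from a JSON reply."""
    data = json.loads(body)
    return {"online": data["online"], "lux": data["sensors"]["light_lux"]}

def siren_command(on):
    """JSON payload to POST to the (assumed) siren actuator endpoint."""
    return json.dumps({"actuator": "siren", "state": "on" if on else "off"})

if __name__ == "__main__":
    # Offline demonstration with a canned reply instead of a live gateway.
    reply = '{"online": true, "sensors": {"light_lux": 312}}'
    print(parse_status(reply))   # {'online': True, 'lux': 312}
    print(sensor_status_url(7))
```

In a deployment, an HTTP client would GET the status URL and POST the siren payload; the parsing and payload construction above are the portable part of that exchange.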

Sixth Gen Intel Core and latest Xeon CPUs in Variety of Form Factors

Adlink Technology has announced the first of fourteen new products in various form factors based on the sixth generation Intel Core i7/i5/i3 processors (codename Skylake) and the latest Xeon processors, coming to market in the second half of 2015 and early 2016. These offerings feature an updated 14nm microarchitecture and added support for Ultra HD 4K displays. The COM Express offerings include the cExpress-SL and Express-SL in PICMG COM.0 Type 6 Compact and Basic size form factors, respectively. Both Basic and Compact size modules are available with sixth generation Intel Core i7, i5 or i3 processors and the accompanying Intel QM170 or HM170 chipset. ECC memory is supported by models using the Intel Xeon processor E3-15XX v5 family and Intel CM236 chipset. DDR4 memory is supported up to a total of 32GB, at a lower voltage than DDR3 for reduced overall power consumption and heat dissipation. These new COMs also support three independent UHD/4K displays. The Adlink MXC-6400 series of rugged, fanless embedded computers is based on the sixth generation Intel Core i7-6820EQ, Core i5-6440EQ or Core i3-6100E processor with the mobile QM170 chipset. Rich I/O includes 2x Mini PCIe, 1x USIM and 6x USB 3.0 ports plus an internal USB 2.0 port. The MXC-6400 series supports up to three independent UHD/4K displays and up to four hot-swappable SATA 6 Gbit/s drives, as well as internal SATA 6 Gbit/s ports. Adlink also offers the IMB-M43 ATX industrial motherboard based on the sixth generation Intel Core i7-6700 and Intel Q170 Express chipset, providing high-speed data transfer interfaces such as PCIe Gen3, USB 3.0 and SATA 6 Gbit/s. The IMB-M43 supports dual-channel DDR4 2133 MHz memory up to a maximum of 64 GB in four DIMM slots. 
To deliver a scalable, high-performance platform for machine automation, machine vision, and test & measurement applications, the IMB-M43 supports fully flexible expansion with a variety of PCI and PCIe configurations. Finally, the versatile Adlink AmITX-SL-G Mini-ITX embedded motherboard is based on the sixth generation Intel Core i7/i5/i3 and Pentium desktop processors with the Q170/H110 chipset and offers dual DDR4 SODIMM memory sockets. It features three DisplayPort outputs, dual Gigabit Ethernet ports, USB 3.0 and USB 2.0 ports, SATA 6 Gbit/s ports, and 7.1-channel High Definition Audio. The AmITX-SL-G offers expansion via one PCIe x16, one PCIe x1, and two Mini PCIe slots; support for GPIO, SMBus, and I2C; and an AMI EFI BIOS providing embedded features such as hardware monitoring and a watchdog timer. All new products are equipped with Adlink’s Smart Embedded Management Agent (SEMA), which provides access to detailed system activities at the device level, including temperature, voltage, power consumption and other key information, allowing operators to identify inefficiencies and malfunctions in real time, thus preventing failures and minimizing downtime. Adlink’s SEMA-equipped devices connect seamlessly to its SEMA Cloud solution to enable remote monitoring, autonomous status analysis, custom data collection, and initiation of appropriate actions. All collected data, including sensor measurements and management commands, are accessible anywhere, at any time, via an encrypted data connection.

ADLINK Technology, San Jose, CA (408) 360-0200.
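The device-level health monitoring described above amounts to checking telemetry readings against operating limits and flagging anomalies. The sketch below shows that pattern; the field names and limit values are illustrative assumptions, not SEMA's actual data model.

```python
# Assumed operating envelopes for three telemetry fields (illustrative only).
LIMITS = {
    "temp_c": (0.0, 85.0),    # board temperature, degrees Celsius
    "core_v": (0.95, 1.30),   # core voltage rail, volts
    "power_w": (0.0, 45.0),   # total power draw, watts
}

def check_health(reading):
    """Return the list of out-of-range or missing fields (empty = healthy)."""
    faults = []
    for field, (lo, hi) in LIMITS.items():
        value = reading.get(field)
        if value is None or not lo <= value <= hi:
            faults.append(field)
    return faults

if __name__ == "__main__":
    print(check_health({"temp_c": 52.0, "core_v": 1.05, "power_w": 28.5}))  # []
    print(check_health({"temp_c": 91.0, "core_v": 1.05, "power_w": 28.5}))  # ['temp_c']
```

A cloud agent in the SEMA style would run such a check on each sample and raise an alert or trigger an action when the fault list is non-empty.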

40 | RTC Magazine OCTOBER 2015


Next Generation COM Express Compact Modules with 6th Gen Core Processors

Four new COM Express compact modules are appearing in parallel with the launch of the new sixth generation Intel Core processors (codename Skylake). The new modules are specially designed for challenging applications that demand high performance in sealed, fanless system designs. They feature a 15 watt configurable TDP and are equipped exclusively with the energy-saving ULV-SoC editions based on the new 14nm microarchitecture. Compared to 15 watt modules with fifth generation processors (codename Broadwell), users benefit from improvements in graphics and processing performance, enhanced energy efficiency and more high-speed I/O. Typical fanless applications for congatec COM Express compact modules can be found in medical and industrial imaging, central control room technology, shop floor terminals, HMIs, robotics, professional gaming, infotainment, professional AV, smart video surveillance, autonomous vehicle control, computer-aided situational awareness and high-end digital signage. Graphics-card-free, triple-head systems, often found in retail and kiosk settings where embedded systems control up to three independent cash or vending machines, present a further application example. The conga-TC170 modules, with COM Express Type 6 pinout, are equipped with the ULV-SoC editions of the sixth generation Intel Core i3/i5/i7 processors. For the first time, they offer a configurable TDP (Thermal Design Power) of 8.5 to 15 watts, which simplifies matching the application to the system’s thermal design. The power supply has also been optimized, which in addition to the new microarchitecture contributes to energy efficiency and allows a longer turbo boost. The integrated Intel Gen 9 graphics, premiering with this microarchitecture, drives up to three independently operated 4K displays at 60 Hz via DisplayPort 1.2. 
HDMI 2.0 is also supported for the first time, as is DirectX 12 for even faster, Windows 10-based 3D graphics. Now not only decoding but also encoding of HEVC, VP8 and VP9 is hardware-accelerated via the new VDENC engine, so energy-efficient streaming of HD video in both directions is possible for the first time. Additional enhancements include more USB 3.0 ports (now four), more SATA Gen 3 ports (now three), more PCIe Gen 3 lanes (now six) and AMT (now version 11.0). Further COM Express Type 6 pinout-compliant interfaces include PEG, Gigabit Ethernet, 8x USB 2.0, LPC plus I²C and UART. Thanks to optional MIPI camera interfaces, CSI-2 camera sensors can also be connected directly. Operating system support is offered for all popular Linux distributions and Microsoft Windows variants, including Microsoft Windows 10. Extensive options that simplify design-in, such as heatsinks, carrier boards, starter kits and SMART Battery Management Modules, round out congatec’s offering.

congatec, San Diego, CA 858-457-2600.

High Performance, Low Power Atom “Bay Trail” XMC Processor Mezzanine Module

A new high-performance, low-power quad-core Intel Atom (“Bay Trail”) E3845-based XMC processor mezzanine single board computer (SBC) has a typical power consumption of only 15W. The rugged XMC-120 Atom SBC processor mezzanine card from Curtiss-Wright speeds and eases the integration of exceptional x86 processing performance into size, weight, power and cost (SWaP-C) constrained environments. The XMC-120 can be hosted on any 3U or 6U VPX module with an available VITA 42 XMC mezzanine site, such as an SBC, DSP processor, or VPX carrier card, to provide a single-slot compute solution. The XMC-120 is also available pre-integrated with Cisco Systems 5921 Embedded Services Router (ESR) software, enabling system designers to deploy a single-slot solution that combines Cisco network routing with Intel multi-core processing. The module combines the ruggedization and upgrade advantages of the VITA 42 mezzanine standard with a small form factor SBC featuring a low-power footprint. The XMC-120 can also be used as a stand-alone board for applications, such as small UAVs and robots, where a VPX carrier card is not required. The XMC-120’s Atom processor is supported by a wide range of I/O, including four Ethernet ports and three display ports. Built on Intel’s Silvermont architecture and manufactured with tri-gate 22nm process technology, the Atom SoC was designed for use in extremely power-sensitive and mobile applications where higher levels of performance are required. Curtiss-Wright Defense Solutions, Ashburn, VA. 703.779.7800



Rugged Single Board Computer Combines High Throughput with Minimal Size, Weight, Power

A rugged 3U OpenVPX single board computer (SBC) is based on the latest ‘Broadwell’ processor technology from Intel. The SBC374A from GE delivers higher performance and greater functionality than previous generations of SBCs while maintaining the same power envelope, and is the first in a number of Broadwell-based products that GE plans to introduce. The new SBC is designed for demanding applications in harsh, size, weight and power (SWaP) constrained military environments, such as manned and unmanned vehicles and signal processing in intelligence, surveillance, reconnaissance (ISR), sonar, radar, and command/control, as well as the most challenging industrial applications such as energy exploration and transportation. The SBC374A benefits from two channels of 10GBase-T connectivity, giving it 10x the Ethernet capability of earlier SBCs. This enables customers upgrading from previous generations of GE’s 3U OpenVPX Intel-based platforms to gain substantially enhanced connectivity on the control plane without undertaking disruptive, expensive infrastructure changes. The latest Intel Core i7 quad core processor, operating at up to 2.7GHz, can deliver as much as 15% greater CPU performance and 30% greater 3D graphics performance than its predecessors. The SBC374A supports up to 16GBytes of soldered ECC memory, and provides the exceptional on-board and off-board bandwidth needed by today’s sophisticated applications through its support for PCI Express Gen3 technology and USB 3.0. The SBC374A is available in five build levels, from benign environment (air cooled) to fully rugged (conduction cooled), and supports Microsoft Windows, open Linux and VxWorks. 
It benefits both from GE’s AXIS Advanced Multiprocessor Integrated Software development environment, which minimizes program risk and speeds time to market, and from GE’s industry-leading Product Lifecycle Management (PLM) programs, which are designed to maximize the long-term value of customer investments. GE Intelligent Platforms, Charlottesville, VA


Mixed-Medium Wireless and PLC Hybrid Mesh Technology for ‘Always On’

An innovative and unique mesh networking technology provides a highly reliable communications link for commercial building and industrial applications. Hybrid Mesh technology on a single chip from Greenvity combines mixed-medium IEEE 802.15.4 wireless and wide-band powerline communication (PLC), enabling always-connected links that penetrate concrete walls, extend range and cover entire buildings. Greenvity’s Hybrid Mesh is a mixed-medium operation and algorithm spanning wireless and PLC that supports multiple hops for range extension, bridging and self-healing. To date, standard methods of mesh networking have been wireless-only (a single medium), presenting a challenge when concrete walls degrade wireless signals and inhibit communication throughout a building. Greenvity’s Hybrid Mesh is uniquely advantageous for commercial building automation, smart lighting, security, safety and industrial IoT applications that require reliable, always-connected links between the gateway and nodes, as well as long range and the ability to pass through walls and obstacles. Combining the best of both wireless and PLC, Greenvity modules with Hybrid Mesh networking rely on an algorithm that makes dynamic decisions on whether powerline or wireless is the better medium in the current environment. Each node repeats data to the next node, selecting PLC when the wireless signal is weak and choosing wireless when the PLC signal is degraded by circuit breakers or noise. The first Greenvity modules with Hybrid Mesh operation are the GV7011-MOD for commercial and industrial applications, and the GV-LED-11 smart LED controller and general IoT controller. Available now, both modules are powered by Greenvity’s GV7011 Hybrii-XL single chip with integrated Hybrid Mesh. The GV7011-MOD can be used in air conditioners, heaters, appliances, solar inverters, energy management and home/building security. 
The GV-LED-11 enables on/off switching, dimming and color tuning, and can control all LED and non-LED lights on the market. Greenvity Communications, Milpitas, CA (408) 935-9358.
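The per-hop medium choice described above can be sketched as a simple decision on current link quality: forward on PLC when the wireless link is weak, and on wireless when the powerline is noisy. The thresholds and tie-break rule below are assumptions for illustration; Greenvity's actual Hybrid Mesh algorithm is proprietary.

```python
# Assumed link-quality thresholds (illustrative, not Greenvity's values).
WIRELESS_MIN_RSSI_DBM = -85   # below this, the 802.15.4 link is judged too weak
PLC_MIN_SNR_DB = 6            # below this, the powerline is judged too noisy

def choose_medium(wireless_rssi_dbm, plc_snr_db):
    """Pick the transmit medium for the next hop from current link quality."""
    wireless_ok = wireless_rssi_dbm >= WIRELESS_MIN_RSSI_DBM
    plc_ok = plc_snr_db >= PLC_MIN_SNR_DB
    if wireless_ok and not plc_ok:
        return "wireless"
    if plc_ok and not wireless_ok:
        return "plc"
    # Both usable (or both marginal): assume wireless is preferred.
    return "wireless" if wireless_ok else "plc"

if __name__ == "__main__":
    print(choose_medium(-60, 2))   # PLC degraded by noise -> wireless
    print(choose_medium(-92, 14))  # concrete wall kills RF -> plc
```

Running this per hop is what gives the mesh its self-healing character: each node re-evaluates the two media with every forwarded frame.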


SMARC Module with ARM Cortex A9 i.MX6 Processors

A SMARC module powered by the high-performance Freescale i.MX6 processor running at 1.0 GHz supports an image capture interface for MIPI cameras; 18/24-bit parallel LCD, LVDS and HDMI interfaces; and a full-HD 1080p hardware video codec engine. The RM-F600-SMC from IBASE Technology is best suited for mobile and HMI systems, in addition to applications in the multimedia, automotive, medical and consumer markets. The RM-F600-SMC is based on the Smart Mobility ARChitecture (SMARC), a versatile small form factor for computer modules measuring 82mm x 50mm that targets applications requiring low power and high scalability. Features include the SMARC small form factor (82mm x 50mm) SoM, a Freescale i.MX6 Quad Core / Dual Core / Dual Lite / Solo Core 1GHz processor, and 10/100/1000Mbit Ethernet. The module supports 24-bit parallel LCD, LVDS and HDMI along with 1080p hardware encode/decode. It includes OpenGL ES 2.0 and OpenVG 1.1 hardware accelerators with 1GB DDR3 and 4GB eMMC on board. Support is also provided for Linux 3.0 and Android 4.3. The SMARC-EVK1 evaluation kit, which also includes a cable kit, can be purchased as a quick-start reference for developing a custom carrier board. The carrier board, RP-100-SMC, features a broad range of interface options for design development flexibility, including GbE, audio, USB OTG, USB and COM ports on the same side of the board, as well as 2x CAN, 8x GPIO, Mini PCIe, Micro SD and SIM sockets. IBASE also provides the carrier board schematics, a review service, test reports, and FAE debug service during the engineering development phase.

New Cardsharp Single Board Computer Links to Rich I/O Selection

A user-customizable, turnkey embedded single board computer provides two ARM Cortex-A9 CPU cores directly coupled to FPGA fabric. Linux runs on core 0, providing Ethernet, USB and disk connectivity, while core 1 runs bare-metal, zero-latency, real-time standalone applications. The Cardsharp from Innovative Integration has Kintex 7-series FPGA fabric directly connected to an HPC FMC module site and 128M x 64 DRAM, and is compatible with Innovative’s huge assortment of ultimate-performance FMC I/O modules. With its modular I/O, scalable performance and powerful 32-bit floating-point CPU core architecture, Cardsharp dramatically reduces time-to-market while providing the real-time performance you need. Uniquely customizable thanks to its XMC footprint and FMC I/O expansion site, Cardsharp is ideal for applications such as distributed data acquisition: placing the Cardsharp at the data source reduces data transport bottlenecks and complexity. Cardsharp is also compatible with Innovative’s new SBC-Nano embedded PC/carrier, which provides dual 10 Gbit/s Ethernet ports for fast data streaming to networks of any size and dual mSATA2 connections to multi-TB SSD storage media. Cardsharp is L3-ruggedized, booting from eMMC flash in a compact 150 x 75mm footprint that is ready for operation in harsh environments. This single board is especially well suited for portable or vehicle-based data loggers or handheld field equipment, given its 8-36V DC-only operation. Innovative Integration, Camarillo, CA (805) 383-8994.
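The Linux-plus-bare-metal split described above typically hands data between the two cores through a shared-memory queue. The sketch below models one common pattern, a single-producer, single-consumer ring buffer, in plain Python for clarity; it is a hypothetical illustration of the technique, not Innovative's actual inter-core mechanism, and real firmware would use a fixed memory region with memory barriers.

```python
class SpscRing:
    """Single-producer, single-consumer ring buffer (AMP hand-off model)."""

    def __init__(self, size):
        self.buf = [None] * size
        self.size = size
        self.head = 0   # written only by the producer (Linux core 0)
        self.tail = 0   # written only by the consumer (bare-metal core 1)

    def push(self, item):
        """Producer side: returns False when the ring is full."""
        nxt = (self.head + 1) % self.size
        if nxt == self.tail:
            return False
        self.buf[self.head] = item
        self.head = nxt
        return True

    def pop(self):
        """Consumer side: returns None when the ring is empty."""
        if self.tail == self.head:
            return None
        item = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.size
        return item

if __name__ == "__main__":
    ring = SpscRing(4)          # holds size-1 = 3 items
    for sample in (10, 20, 30):
        ring.push(sample)
    print(ring.pop(), ring.pop())   # 10 20
```

Because each index is written by exactly one side, the structure needs no locks, which is what makes it suitable for a zero-latency real-time consumer.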

IBASE Technology,Taipei, Taiwan +886-2-26557588.



Robust 8-Port Ethernet Switch in a Variety of Standard Configurations

A robust unmanaged 8-port Ethernet switch comes in a compact box PC format. Available in four standard configurations, the flexible NM10 from MEN Micro comes with either 100 Mbit/s or 1 Gbit/s Ethernet interfaces as well as a class S2 wide-range power supply. Conforming to EN 50155 and ISO 7637-2, the rugged Ethernet switch reliably operates over the extended temperature range of -40°C to +70°C (+85°C for 10 minutes in accordance with class Tx) in harsh road, railway and industrial applications. In addition to the choice of 100 Mbit/s or 1 Gbit/s Ethernet interfaces, the switch is available with or without Power over Ethernet Plus (PoE+) functionality offering up to 60 W of output power. The NM10 is part of MEN Micro’s family of robust Ethernet switches that withstand demanding, harsh environments. Its rugged aluminum enclosure not only provides conduction cooling, which makes the NM10 fanless and maintenance-free, but also protects the enclosed electronics, meeting IP40 requirements. All electrical components are soldered to resist shock and vibration, and are protected against dust and humidity by conformal coating. The Ethernet channels are accessible on the front via robust M12 connectors. The wide-range power supply accepts 14.4 VDC to 154 VDC; the nominal range of 24 VDC to 110 VDC allows the NM10 to conform to EN 50155 for railway and ISO 7637-2 for E-Mark certification in automotive applications. Key features include a rugged aluminum enclosure with IP40 protection and a -40°C to +85°C operating temperature range, 10/100/1000Base-T Ethernet channels via M12 connectors, and the above-mentioned ultra-wide-range power supply input with interruption class S2. The switch supports PoE+ power sourcing equipment (PSE) with up to 60 W output power and is fanless and maintenance-free. Standards compliance includes EN 50155 (railway) and ISO 7637-2 (E-mark for automotive). Pricing for the NM10 starts at $545 USD. 
MEN Micro, Blue Bell, PA (215) 542-957.


3.5” SBC with AMD Series-G SoC, 4K Resolution and Graphics Flexibility

A 3.5” single board computer (SBC) features the AMD Embedded G-Series SoC with integrated chipset and 4K display capability. The MB-80910 from WIN Enterprises provides flexibility in graphics implementation through its support of HDMI, VGA, and LVDS. From an application standpoint, the MB-80910 can support a variety of IoT endpoints, such as industrial controls, surveillance, POS, digital signage, electronic gaming and kiosks, while providing high performance with low and scalable power demands (8V to 32V DC input). Key features include the second generation onboard AMD Embedded G-Series SoC and up to 8GB of DDR3-1866 memory. 4K display resolution is enabled via HDMI, and the board supports VGA and LVDS as well. There are two Mini-PCIe sockets and a SATA interface, along with six USB and four COM ports. The AMD G-Series processors are designed to provide up to 60 percent better computing performance than the previous generation of the same AMD series. Configurable thermal design power can limit power to just 5W at the processor level, helping protect the CPU from failure through overheating and ensuring longer overall product life.

WIN Enterprises, North Andover, MA (978) 688-2000.


Thin Mini-ITX SBC with Intel Atom Braswell SoC

A thin Mini-ITX form factor SBC is capable of supporting a wide range of low-profile OEM devices. The MB-73400 from WIN Enterprises features the Intel Atom Braswell system-on-chip (SoC) along with strong serial I/O (1x RS232/422/485 and 5x RS232 through pin headers) plus 2x GbE LAN, making it suitable for plant-wide industrial and process control systems, as well as devices for medicine, retail, digital displays in public venues and more. Key features include the Intel Atom Braswell SoC with 2x DDR3L SO-DIMM sockets for up to 8GB of memory. It has two HDMI/24-bit LVDS ports and HD audio, plus two Intel GbE LAN interfaces and LPC. It provides one SATA III, four USB 3.0 and two USB 2.0 connections, along with two Mini-PCIe sockets. The board comes with a choice of three Intel Atom Braswell processors ranging in performance from 2.0 to 2.4 GHz. Braswell is Intel’s next-generation multicore (2/4 core) SoC, following the Intel Bay Trail-M/D (Mobile/Desktop). Braswell reflects a new microarchitecture manufactured on Intel’s tri-gate 14nm process. Under an OEM agreement, this I/O-rich device can be streamlined to more specific requirements and manufactured for up to seven years. WIN Enterprises, North Andover, MA (978) 688-2000.

Desktop Networking Device with AMD Embedded G-Series SoC, 8 GbE and mini PCIe

A new desktop system is an AMD-powered unit designed for network service applications. Supporting the AMD Embedded G-Series SoC, the PL-80740 from WIN Enterprises can be guaranteed for extended product availability. The AMD G-Series SoC achieves superior performance per watt in the low-power x86 microprocessor class when running multiple industry-standard benchmarks. The G-Series delivers an exceptional HD multimedia experience while supporting a heterogeneous computing platform for parallel processing. A choice of two AMD G-Series processors is provided: the quad-core AMD GX-412TC at 1.0GHz or the quad-core AMD GX-416RA at 1.6GHz. The platform supports onboard DDR3L memory from 1GB up to a maximum of 2GB. Storage interfaces include a 2.5” SATA HDD bay and CompactFlash. For networking, the PL-80740 is equipped with eight copper GbE ports, USB 2.0 ports and an RJ-45 console port. A mini-card socket is also provided, and LED indicators enable users to monitor power and storage device activity. Other features include up to 8 GbE ports via PCIe x1; expansive I/O with USB 2.0, a 2.5” SATA HDD bay, CF socket, mini-PCIe slot and console port; and RoHS compliance. WIN Enterprises will work with electronics OEMs to modify this COTS design to more specific requirements when ordered in standard OEM or greater quantities. The use of embedded components enables WIN Enterprises to guarantee availability for up to seven years. WIN Enterprises, North Andover, MA (978) 688-2000.




RTC (Issn#1092-1524) magazine is published monthly at 905 Calle Amanecer, Ste. 150, San Clemente, CA 92673. Periodical postage paid at San Clemente and at additional mailing offices. POSTMASTER: Send address changes to The RTC Group, 905 Calle Amanecer, Ste. 150, San Clemente, CA 92673.
