RTC Magazine


The magazine of record for the embedded computing industry

March 2014

Finding the Sweet Spot for SoC and ASIC Design
High End Graphics Light Up Small Devices
Get beyond the BIOS for Embedded

An RTC Group Publication

www.rtcmagazine.com


Building Blocks Designed To Last

Like the Great Pyramids at Giza, computers engineered with board-level building blocks from Trenton Systems are built for performance and longevity. OK, it's not likely that a rackmount computer built with Trenton's long-life SBCs, backplanes or embedded motherboards will be around 4,500 years from now. However, Trenton boards do extend system functionality while reducing the overall cost of computer ownership by utilizing long-life board components with built-in support for standard I/O option cards. Trenton building blocks enable the initial system investment to pay dividends over typical computer deployment cycles of seven years or more!

Here's a snapshot of the available Trenton board-level building blocks for your next computer system design: Trenton's BXT7059 is a robust dual-processor single board computer featuring long-life Intel® Xeon® processors. The single-processor TSB7053 offers a wide range of I/O and video interface options. Our backplanes come in all shapes and sizes, engineered to deliver maximum value in your unique system design. And the JXM7031 embedded MicroATX motherboard has a unique long-life design featuring dual Intel® Xeon® processors.

Our board engineering experts are available to discuss your unique military computing application requirements. Contact us to learn more at 770.287.3100 / 800.875.6031 or www.TrentonSystems.com

The Global Leader In Customer Driven Computing Solutions™
770.287.3100 / 800.875.6031 | www.TrentonSystems.com


TABLE OF CONTENTS

VOLUME 23, ISSUE 3

DEPARTMENTS

5   Editorial: Integrating for Parallelism, Performance and Power – A Dance with Complexity

6   Industry Insider: Latest Developments in the Embedded Marketplace

8   Small Form Factor Forum: SFFs Take on the World

36  Products & Technology: Newest Embedded Technology Used by Industry Leaders
    36  6U VPX Board Features 4th Generation Intel Core Processor
    38  Rugged PCI/104-Express SBC with Intel N2800 Offers Rich I/O
    40  mini-ITX Industrial Mainboard for 24/7 Continuous Service

EDITOR'S REPORT: Big Data Drives New Apps

10  Solutions from Data: Innovative Apps Can Bring Engagement, Loyalty and Revenue
    Tom Williams, Editor-in-Chief

TECHNOLOGY CORE: Finding the Sweet Spot for SoC and ASIC Design

12  Beyond Drivers: The Critical Role of System Software Stack Architecture in SoC Hardware Development
    Jim Ready, Cadence Design Systems

16  Integration Blurs the Line between MCUs and SoCs
    Jason Tollefson, Microchip Technology

TECHNOLOGY IN CONTEXT

20  Optimizing Machine Vision Systems – FPGAs Taking Vision to the Next Level
    Carlton Heard, National Instruments

TECHNOLOGY CONNECTED: High End Graphics on Small Devices

24  Speeding Time-to-Market for GUI Designs
    Brian Edmond, Crank Software

TECHNOLOGY IN SYSTEMS: Getting beyond the BIOS for Embedded

28  Open Source Firmware – Coreboot for x86 Architecture Boards
    Clarence Peckham, Senior Editor

TECHNOLOGY DEVELOPMENT

32  POSIX: The POSIX Heritage – History and Future – 25 Years of Open Standard APIs
    Arun Subbarao, LynuxWorks

Digital Subscriptions Available at http://rtcmagazine.com/home/subscribe.php


MARCH 2014

MSC Q7-IMX6 – Qseven™ Compatible Modules from Single-Core to Quad-Core

Freescale i.MX6 Quad-, Dual- or Single-Core ARM Cortex™-A9 up to 1.2 GHz
Up to 4 GB DDR3 SDRAM, up to 64 GB Flash
GbE, PCIe x1, SATA-II, USB
Triple independent display support
HDMI/DVI + LVDS up to 1920x1200
Dual-channel LVDS, also usable as 2x LVDS up to 1280x720
OpenGL® ES 1.1/2.0, OpenVG™ 1.1, OpenCL™ 1.1 EP
UART, Audio, CAN, SPI, I2C
Industrial temperature range

The MSC Q7-IMX6 with ARM Cortex™-A9 CPU is a compatible module with an economical single-core CPU, a strong dual-core processor or a powerful quad-core CPU at up to 1.2 GHz, and provides very high-performance graphics.

MSC Embedded Inc. | Tel. +1 650 616 4068 | info@mscembedded.com | www.mscembedded.com

Editorial
EDITOR-IN-CHIEF Tom Williams, tomw@rtcgroup.com
SENIOR EDITOR Clarence Peckham, clarencep@rtcgroup.com
CONTRIBUTING EDITORS Colin McCracken and Paul Rosenfeld
MANAGING EDITOR/ASSOCIATE PUBLISHER Sandra Sillion, sandras@rtcgroup.com
COPY EDITOR Rochelle Cohn

Publisher
PRESIDENT John Reardon, johnr@rtcgroup.com

Art/Production
ART DIRECTOR Jim Bell, jimb@rtcgroup.com
GRAPHIC DESIGNER Michael Farina, michaelf@rtcgroup.com

Advertising/Web Advertising
WESTERN REGIONAL SALES MANAGER Mike Duran, michaeld@rtcgroup.com (949) 226-2024
MIDWEST REGIONAL AND INTERNATIONAL ADVERTISING MANAGER Mark Dunaway, markd@rtcgroup.com (949) 226-2023
EASTERN REGIONAL ADVERTISING MANAGER Jasmine Formanek, jasminef@rtcgroup.com (949) 226-2004

Billing
Cindy Muir, cmuir@rtcgroup.com (949) 226-2021

Bridge the gap between ARM and x86 with Qseven Computer-on-Modules

One carrier board can be equipped with Freescale® ARM, Intel® Atom™ or AMD® G-Series processor-based Qseven Computer-on-Modules: the conga-QMX6 (ARM Quad Core), the conga-QA3 (Intel® Atom™) and the conga-QAF (AMD® G-Series).

To Contact RTC magazine:
HOME OFFICE The RTC Group, 905 Calle Amanecer, Suite 250, San Clemente, CA 92673, Phone: (949) 226-2000, Fax: (949) 226-2050, www.rtcgroup.com
EDITORIAL OFFICE Tom Williams, Editor-in-Chief, 1669 Nelson Road, No. 2, Scotts Valley, CA 95066, Phone: (831) 335-1509


www.congatec.us

congatec, Inc. 6262 Ferris Square | San Diego | CA 92121 USA | Phone 1-858-457-2600 | sales-us@congatec.com


Published by The RTC Group Copyright 2014, The RTC Group. Printed in the United States. All rights reserved. All related graphics are trademarks of The RTC Group. All other brand and product names are the property of their holders.


EDITORIAL MARCH 2014

Integrating for Parallelism, Performance and Power – A Dance with Complexity

Integration is definitely producing high performance in embedded computing. While this may be basically enabled by Moore's Law, it has unleashed a wide range of creativity. And like every other outbreak of innovation, these developments are taking markedly different directions. We have only to remember the more than 100 schemes for switched fabrics that emerged some ten years ago, which ultimately resulted in the much smaller number used today, to appreciate what a positive thing this is. Time will tell how this all plays out, but from here it looks like the ability to integrate really powerful hardware performance while maintaining a high degree of configurability and programmability is poised to push the ASIC into ever more rarified zones of high volume and special needs. With the development time for a highly integrated, specialized ASIC stretching as long as four years(!), these other choices are going to look increasingly attractive.

In what may appear to be a somewhat subjective classification, I see this generation of highly integrated core devices breaking out in a number of ways, some of which appear to be the integration of what were once distinct devices on a board or module. There are, for example, the now fairly well-known integrations of multicore ARM processors on the same silicon die with a set of their standard peripherals and an FPGA fabric, along with, in some cases, additional analog components. These offerings come notably from Altera, Xilinx and Microsemi.

Then we have other offerings from companies such as NVIDIA and AMD that integrate multicore CPUs on the same die with very powerful and parallel general-purpose graphics processing units (GPGPUs) tightly coupled to the CPUs. These GPGPUs are designed to do very high-level graphics, video and machine vision processing, tasks that also often involve intensive mathematical operations, all of which lend themselves to execution with a high degree of parallelism.

Next there are families of highly integrated microcontrollers that incorporate CPU cores along with a rich set of on-chip peripherals, memory, memory interfaces and graphics processors, along with internal buses. Families like the PIC32MZ from Microchip and the Atom Z36xxx and Z37xxx (formerly "Bay Trail") from Intel come in versions that provide different combinations of internal functions, from which the designer can select to best fit his or her needs.

Tom Williams, Editor-in-Chief

Finally, there are multicore processors that replicate CPU cores with identical instruction sets in devices with two to ten or more cores. These include multicore processors from AMD, Intel, ARM partners and many more.

The CPU/FPGA, GPGPU and multicore directions have in common the fact that they take different approaches to increasing performance by offering parallelism, whether in the programmable fabric, the parallel architecture of the GPGPU, or the multicore architecture. One general observation about these different approaches is that they involve different levels and complexities of software issues. Perhaps the most difficult, and as yet not fully solved, hurdle comes with the CPU/FPGA combinations. Here we are bringing together two different disciplines of programmable devices that traditionally have been programmed by their own specialists. Individual manufacturers do supply tools, but there is so far no overall programming/configuration or analysis tool methodology that applies to all of them.

The CPU/GPGPU approach fares better in that there are tools and software platforms that let developers express themselves in an extended world of the C/C++ language. NVIDIA has developed the CUDA platform for its Kepler architecture, which will parallelize C code intended for execution on the GPGPU. AMD has selected the OpenCL platform, developed for this same purpose, to implement parallel mathematical operations on its graphics coprocessors. OpenCL also has the advantage that it is starting to be used for programming parallel operations in FPGAs as well.

The world of advanced microcontrollers can be programmed in a single language, as long as the manufacturers provide drivers for their internal peripherals, as they of necessity must do. The "homogeneous" multicore processors enjoy several alternatives. They can be programmed with a single operating system and a single language, or they can make use of such things as hypervisors and virtualization to accommodate multiple OSs. Such devices also lend themselves to having what would otherwise be special hardware peripherals implemented in software instead.

Integrating diverse hardware elements also has implications for complexity across interfaces via different protocols, as well as obstacles to scalability. These are just an indication of some of the issues that will face developers pursuing greater device integration as we move through an exciting and promising period of innovation.
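As a footnote to the GPGPU tooling discussion above, here is a minimal sketch of what "parallelizing C code" amounts to: first an ordinary serial C loop, then the equivalent kernel in OpenCL C (the C dialect OpenCL kernels are written in). The function names are invented for illustration, and the host-side setup (contexts, queues, buffers) is omitted.

/* Ordinary C: one core scales a vector serially. */
void scale_serial(float *y, const float *x, float a, int n)
{
    for (int i = 0; i < n; i++)
        y[i] = a * x[i];
}

/* OpenCL C: the loop disappears. The runtime launches n work-items
   across the GPGPU's parallel lanes; each handles a single index. */
__kernel void scale_kernel(__global float *y,
                           __global const float *x,
                           const float a)
{
    int i = get_global_id(0);  /* this work-item's position in the range */
    y[i] = a * x[i];
}

The same kernel style is what the OpenCL-to-FPGA flows mentioned above compile into hardware pipelines, which is part of OpenCL's growing appeal beyond GPUs.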



INDUSTRY INSIDER
MARCH 2014

Major Shift at Microsoft: Gates out as Chairman, New CEO

Microsoft has named a new CEO and a new chairman of its board, with co-founder and chairman Bill Gates stepping down from that position into a role as "technology advisor" to the new CEO, whatever that means. Word is that Gates will remain a director and come into work at least one day each week. Replacing retiring CEO Steve Ballmer is Satya Nadella, who has been with the company for 22 years and has overseen its corporate software business as head of its Cloud and Enterprise division. Replacing Gates as chairman is John Thompson, who is on the Microsoft board and is a former CEO of Symantec.

All this leaves open many questions as to what direction Microsoft will take. Its traditional PC-based product lines have softened with the drop in PC shipments, and it is facing stiff competition from players like Apple and Samsung in the tablet and smartphone arena. Microsoft recently acquired Nokia's handset unit, but it is not yet clear where that will lead. The Enterprise and Cloud division that Nadella comes from is, by comparison, relatively stable. Gates' role, in addition to his activity with the Bill and Melinda Gates Foundation, which he considers his full-time work, will be as a strategic technology advisor, which would seem to imply that although his direct involvement in the company's operations may be cut back, his influence on its strategic direction may still be strongly felt. It will be an interesting time, because whenever something as big as Microsoft moves, the world feels the waves.

First Set of Standards for Cooperative Intelligent Transport Systems (C-ITS)

The European Committee for Standardization (CEN) and the European Telecommunications Standards Institute (ETSI) have confirmed that the basic set of standards for Cooperative Intelligent Transport Systems (C-ITS), as requested by the European Commission in 2009, have now been adopted and issued. The so-called "Release 1 specifications" developed by CEN and ETSI will enable vehicles made by different manufacturers to communicate with each other and with the road infrastructure systems. When they have been applied by vehicle manufacturers, the new specifications should contribute to preventing road accidents by providing warning messages, for example about driving the wrong way or possible collisions at intersections, as well as advance warnings of roadwork, traffic jams and other potential risks to road safety.

This vision of safe and intelligent mobility can be achieved by utilizing wireless communication technologies to link vehicles and infrastructure and identify potential risks in real time. With more than 200 million vehicles on the roads in Europe today and some 13 million jobs at stake across the continent, it is essential for Europe's automotive industry to be at the forefront when it comes to introducing new technologies. However, the next generation of "connected cars" will not work without common technical specifications, for example regarding radio frequencies and messaging formats. This is why the European Commission decided in 2009 to issue a formal request (Mandate 453) to CEN and ETSI, asking them to prepare a coherent set of standards, specifications and guidelines to support the implementation and deployment of Cooperative ITS systems across Europe.

Connected cars are expected to appear on European roads in 2015. The authorities in Austria, Germany and the Netherlands have agreed to cooperate on the implementation of ITS infrastructure along the route between Rotterdam and Vienna (via Frankfurt).

GE Expands Operations in Huntsville, Alabama

GE Intelligent Platforms has announced the expansion of its facility in Huntsville, Alabama, with the formal opening of a new building. Huntsville is a key location for GE Intelligent Platforms, serving the defense and aerospace industries as well as multiple industrial markets. The new building creates a Center of Excellence that will be at the heart of GE Intelligent Platforms' growing systems business, which sees GE delivering the value that is increasingly required by demanding prime contractors, systems integrators and OEMs in defense and other industrial markets as those organizations look to focus on their core competencies.

GE's Huntsville facility is home to 235 employees, including engineering, manufacturing and administrative functions. The expansion allows for consolidation of operations into a single facility, housing the designers and developers of GE's high-performance, rugged integrated systems in the same building in which those systems are manufactured.

Synergies are created to allow for faster, more responsive product development—from prototyping to production—and shorter lead times for GE’s customers. Housed in the new building are advanced capabilities for extended testing and analysis of the effects of vibration as well as for examining and implementing innovative cooling technologies. GE is perhaps the most experienced developer of rugged embedded computing solutions in the defense industry, which requires computing that can withstand the rigors of deployment in environments that are subject to extremes of shock, vibration and temperature as well as the ingress of water and contaminants. This expertise is also vital to applications in oil & gas, power and metals industries.

Imagination and Green Hills Partner to Bring Compiler and Tools Support to MIPS CPUs

Imagination Technologies and Green Hills Software have signed a multi-year agreement that brings expanded, comprehensive Green Hills compiler and tools support to a broad range of Imagination's current and future MIPS CPU cores and architectures. Green Hills Software's embedded development solutions are coming to MIPS Aptiv cores, as well as to the new MIPS Warrior family of cores, comprising the entry-level M-class cores, mid-range I-class cores and high-performance P-class cores. Green Hills Software is also upgrading capability to deliver fully optimized support for the microMIPS code compression instruction set architecture (ISA) and the latest MIPS r5 architecture, including key features such as hardware virtualization. The companies are also working together in support of next-generation architectures.

MIPS CPU support from Green Hills Software includes its C/C++ compiler, assembler and linker, binary tool chain, MULTI integrated development environment (IDE), Green Hills Probe, SuperTrace Probe and documentation.

Says Mike Haden, general manager, Advanced Products, Green Hills Software: "Green Hills has a long history of support for MIPS, and we are continuing that tradition through this comprehensive new agreement. Under Imagination, we are seeing growing demand for MIPS. With Imagination's and Green Hills Software's combined expertise in security, multi-threaded and multicore CPUs, and heterogeneous computing, this collaboration will provide tremendous value to joint customers."

Says Tony King-Smith, EVP marketing, Imagination: "Imagination already has a strong and vibrant world-class ecosystem for MIPS, and we are continuing to invest in growing that ecosystem to address new opportunities. Green Hills is a great strategic partner for Imagination, and this new agreement reflects growing demand across several key markets. We look forward to working together to help drive the future of mobile and embedded software engineering, and the future of heterogeneous processing."

Memoir Systems Joins TSMC Soft IP Alliance

Memoir Systems has announced that it has joined the TSMC Soft IP Alliance Program, leveraging TSMC's advanced process technologies to improve power, performance and area for its Renaissance family of multiport memory IP. Using Memoir's IP delivery platform, which includes design-for-formal and exhaustive formal verification, Memoir delivered fully verified, ultra-high-performance multiport memory IPs to TSMC. The TSMC Soft IP Alliance Program requires rigorous checks and quantitative data to demonstrate the robustness and completeness of synthesizable semiconductor IP that is part of the TSMC 9000 IP library. These IPs successfully passed TSMC's comprehensive soft IP qualification process, ensuring the best possible design experience, easiest design reuse and fastest integration into SoCs.

New SoCs are constructed predominantly by assembling a multitude of IP building blocks, with as many as 50-80% of those being memory. Therefore, the quality of IP building blocks and the ease with which they can be integrated for a particular process has a huge impact on time-to-market and customer success. In many SoCs, embedded memory performance is the limiting factor. As the processor-embedded memory gap widens, higher performance multiport memories are required to unlock application potential.

Memoir's Renaissance memories combine single-port embedded memory macros with algorithms to increase memory performance by up to 10X. The algorithms are implemented in standard RTL logic and expose multiple memory interfaces that allow multiple parallel accesses within a single memory clock cycle. The resulting multiport memory is delivered as soft IP. It is fully verified to cover all corner cases, and offers guaranteed performance for fully random and non-random memory accesses, while reducing area and power consumption.

Mentor Graphics Acquires Mecel Picea Autosar Development Suite

Mentor Graphics has announced that it has strengthened its automotive software solution by purchasing the Autosar assets, including the Mecel Picea Autosar Development Suite, from Mecel AB. The acquired assets complement the existing automotive software solution from Mentor, including the Volcano Autosar products, Mentor Embedded Hypervisor and the Mentor Automotive Technology Platform (ATP), which enables Linux-based automotive solutions, including GENIVI-compliant infotainment (IVI) solutions. The Mentor automotive software solutions enable a wide range of subsystems, including secure, homogeneous and heterogeneous multicore and single-core ECUs.

The Mentor Graphics Embedded Software Division enables embedded development for a variety of applications including automotive, industrial, smart energy, medical devices and consumer electronics. Embedded developers can create systems with the latest processors and microcontrollers with commercially supported and customizable Linux-based solutions, including the industry-leading Sourcery CodeBench and Mentor Embedded Linux products. For real-time control, systems developers can take advantage of the small-footprint and low-power-capable Nucleus RTOS.

Artesyn Joins ETSI Network Functions Virtualization (NFV) Industry Specifications Group

Artesyn Embedded Technologies, formerly Emerson Network Power's Embedded Computing & Power business, has announced that it has joined the European Telecommunications Standards Institute (ETSI) Network Functions Virtualization (NFV) Industry Specifications Group (ISG). Initiated by some of the largest network operators in the world, the group has attracted broad industry support, and participation now includes communication equipment vendors, IT vendors and technology providers. The NFV ISG aims to achieve a consistent approach and common architecture for the hardware and software infrastructure needed to support virtualized network functions. The group has already published the first five specifications and is developing more detailed guidance. These documents agree on a framework and terminology for NFV, which will help the industry channel its efforts toward fully interoperable solutions to enable global economies of scale.

Mark Dunton, software solutions manager for Artesyn Embedded Technologies, said: "Artesyn has joined the world's leading experts in NFV with a view to evolving our product line to support infrastructure development and network deployment. We embrace the objectives of the NFV ISG as they align with our vision for Artesyn's communications solutions. Artesyn is looking forward to becoming a major contributor to the group, specifically leveraging expertise in heterogeneous acceleration and other critical functions to enable effective NFV deployments."



SMALL FORM FACTOR FORUM
Colin McCracken

SFFs Take on the World

The Embedded World show in Germany is the largest embedded-focused exhibition for components, software and tools. This year's show, once again, did not disappoint. From show floor demos to secret knightings in dungeons over libations, visitors took away much knowledge and wisdom. When the weather outside was frigid, the relationships warmed up inside and underground.

It's a wonder that EW continues to grow while similar shows in the U.S. are shrinking and even folding their tents. Or perhaps it's not too much of a mystery. The way OEMs find and buy products differs between the geographies. Each European country has distributors who speak the local language and foster great long-standing relationships with their customers, with an emphasis on meeting face-to-face. While many OEMs buy through distribution in the Americas, manufacturing increasingly goes offshore or through system integrators and contract manufacturers. Boards are sold directly from suppliers to system OEMs, often without the local touch of manufacturers' reps anymore. Hardware and software engineers simply surf the Web for up-to-the-minute module specs and fill out the vendor's online contact forms. Even chip vendors are reducing the use of manufacturers' reps and distributors in order to focus on competitive high-volume business directly themselves. Sadly, personal relationships aren't as important in this manic, stretched-too-thin culture.

Considering small form factor technology, Germany continues to be the locus of popular open standards for tiny computing modules. More than 10 years ago, non-standard DIMM-PCs first intrigued the market with processor boards plugging into commodity high-volume RAM sockets. ETX modules took over, using board-to-board connector pairs instead. COM Express carried the x86 PC module concept into the PCI Express era, while XTX breathed some life into old ETX carrier boards. At a time when some trade groups were struggling with relevance, computer-on-modules (COMs) blew right past and never looked back, crossing the million-modules-per-year mark while assimilating the volumes from legacy form factors.

With COM Express (COMe) on autopilot from the Pentium M and Core Duo momentum, in 2008 an explosion of form factors harkened from these hallowed halls when Intel's low-power Atom family did not conform well to COMe. Atom created a massive "form factor forking," as board vendors raced independently to invent the next big small form factor.

This year's show commemorated the sixth anniversary of the Qseven module standard. Qseven not only reached critical mass faster than any of the other form factors introduced in the 2008 great bang, but Qseven modules were displayed with the broadest processor manufacturer coverage by board vendors from all over the world. It's now possible to design a single carrier board that uses Intel, AMD, Freescale, TI, Nvidia and Qualcomm processor modules! And most of these processors are ARM, not even x86. This year's show hammers home the idea of a cross-architecture carrier board. Qseven's success also signals the return to low-cost laptop PC-style connectors for computing modules, just like the good old DIMM-PC days. Full circle. And while tiny modules are necessarily limited in terms of performance and power dissipation, Intel's latest Bay Trail SoC and AMD's G-Series SoC run circles around modules from just a few years ago, meeting the needs of many high-volume applications.

"Success breeds _____." Fill in the blank with "contempt," "jealousy," or just "good old-fashioned competition." Not to be outdone completely, the non-Qseven vendors created the ULPCOM standard; you guessed it, at the same venue four years later, Embedded World 2012. Can't we all just get along? It took another year to settle on a name, "SMARC." It took a year last time too, when COM Express emerged as a better name for "ETX Express," which bore no resemblance to ETX. New connectors, and two instead of four of them. A new module size. 12V input instead of 5V, so that power consumption could ratchet way up to 188 watts, leaving plenty of room for the low-power Qseven to come in underneath.

Small form factor modules are everywhere now: in all major embedded market segments, in mobile as well as fixed installations, all around the world. Emanating from Germany, they are spreading like a wildfire in a California drought. You'll find everything at Embedded World, except perhaps crop circles; the frozen tundra is too hard to carve this time of year. In case you missed all the action, pencil it into your 2015 calendar. With the breadth of vendors exhibiting, the lively tech discussions, relationships rekindled and, of course, the Weissbier, EW has more than earned the word "World" in its name.



EDITOR'S REPORT
Big Data Drives New Apps

Solutions from Data: Innovative Apps Can Bring Engagement, Loyalty and Revenue

With today's systems and devices producing Big Data, there is a need for creative ways to gain insight and use it to create additional value by solving problems related to underlying products and services. This creates customer engagement and brand loyalty, and leads to additional growth.

by Tom Williams, Editor-in-Chief

Data. It is being relentlessly generated by almost every form of human activity: by commerce, government and research, and by billions of interconnected systems and devices all over the world. Even the smallest sensors generate data that is aggregated over local networks and eventually winds up on servers in the Cloud, where it is used for . . . what?

Time was, the answer to that question would come from the IT department, and it would center on internal operations, cost control, inventory management, customer databases and the like, all very useful and necessary. But that was data generated and managed about products and business operations. What we are coming to call Big Data is generated more by products and by customer interaction with those products and services. As such, Big Data has the potential to be used in fundamentally different ways: to enhance the user experience, reinforce brand awareness and effectively become an additional product that can add to company revenues and continue customer engagement.

This opens a whole new arena for software development that seeks to leverage the digital aspects of a company and its products, many of which are consumer devices based on embedded technology that are connected in various ways and generate data that customers can use to improve their interaction with the purchased product. According to David DeWolf, CEO of 3Pillar Global Software, "More and more traditional companies that aren't software companies are building software products. Software is now your brand. You touch your customer more through software than any other way no matter what industry you're in." This appears fairly obvious in terms of the Web and how customers find out about and are sold products, but it is increasingly true as well of the software applications developed around products to enhance the user experience. It should be noted here that the examples given are not necessarily all 3Pillar customers, but they serve to illustrate the points made by DeWolf.

Take, for example, Nike. What does Nike make? Sports apparel, primarily shoes. Nike is of course known throughout the world by its trademark "swoosh" that adorns so many other products, persons and events. Once we get past all the image, glitter, sports superstars and psychological manipulation, why do people buy Nike products? They have an interest in pursuing sports themselves, in physical activity, which also equates to an interest in their own personal fitness. Nike also offers a low-cost digital product, a consumer embedded system called the Nike FuelBand SE, a bracelet that contains technology to measure activity such as pace, heart rate, etc.; it also offers a Sport Watch GPS that will track pace as well as the path run. There is also a smartphone app called Nike+ Running that, without additional sensors, tracks distance, pace, time and calories burned along with GPS. In addition, there is computer software to store and analyze the data, plus access to Facebook "buddies" and the ability to earn badges, set goals and generally track and motivate fitness progress. In fact, there is a whole online community called Nike+ dedicated to motivation and training.

All this to sell shoes? Actually, these are software-based products that are also for sale; they have the effect of engaging the customer and represent value in their own right while reinforcing and adding to the value of the underlying basic product and/or service. The example of Nike helps illustrate a much wider phenomenon that 3Pillar has identified and is actively involved in: helping a wide variety of companies identify and use software to create solutions that can have a similar effect on their own growth and market presence. 3Pillar does this for customers by looking for innovative ways of using data related to a product to solve problems, then creating a prototype that can be put out for customer feedback. In this scenario, software development does not begin with a requirements document but rather with a functional concept built on data and knowledge of the customer. This prototype is then refined over time. "Market research," DeWolf says, "is a thing of the past."

Consumers have embraced the smartphone and the tablet experience, and the app is now the door to a world of innovative software solutions based on companies' data and imagination. One customer that 3Pillar did work with to expand the value of its existing business was Carfax, the source for information on used vehicles. Carfax reports are increasingly used by dealers to supply customers with information about vehicles they have in stock. Carfax wanted to grow its company value and add customers by turning its Web-based consumer application into a mobile app. Now a person shopping for a used car can simply subscribe (note: additional revenue) to one or a number of reports, enter the license plate or VIN number, and get a full report on his or her smartphone. The app features maintenance reports, service records, registration and so on. In addition, the user can locate recommended service providers and receive repair estimates (Figure 1). This is a case of building on an existing software service to create a solution to a remaining problem, namely how to check the report while you're actually out on a dealer's lot with a salesman droning his pitch at you. And the app has built on the existing software and data in that it also provides a way to maintain service records on a given purchased vehicle (Figure 2).

But DeWolf emphasizes that such innovation is only the start. There must also be acceleration: the constant use of feedback to continue to differentiate over time. This involves using the customer feedback and customer engagement created by the initial innovative application to continue the product life cycle, relying on existing information and information created from customer interaction in an agile development methodology. This can also make use of other existing and available data that might not have been generated in-house. Big Data, of course, does not come from a single source.

FIGURE 1: The Carfax mobile app lets the user find a repair shop and get an estimate on work that may be needed in conjunction with a possible purchase.

FIGURE 2: Carfax offers access to a vehicle's service record and also notifies of upcoming needed service.

Consider, for example, the possibility of using the biometric data generated from sensors and apps such as those in the Nike+ or a similar environment for inclusion in a medical monitoring application that might bring in other data such as blood oxygen, more detailed EKG data, or more focused data about a specific condition. There are, of course, other companies offering biometric training monitors besides Nike. What if one of them were to include data from recipes that would furnish information about caloric intake, trans fats or other nutritional data? That could then be correlated with the exercise data to provide an even richer training application or a weight loss program.

In a similar way, the Carfax app brings in more information than simply the vehicle records. It also accesses repair shop data and can get repair or service estimates. This relies on data well beyond that associated with a given vehicle, but it serves to enhance the value of the underlying product that Carfax originally offered.

The possibilities are endless. Maybe someday we can enter the information from a wine label and get data about that season's sunshine and moisture, soil character and more. There is a huge, largely untapped market for the creative use of data supplied by an ever growing number of systems and devices, data that can be used to create engagement, add to the user experience, increase brand loyalty and solve problems. The secret is to look at it in creative and imaginative ways aimed at innovative solutions.

3Pillar Global
Fairfax, VA
(703) 259-8900
www.3pillarglobal.com



TECHNOLOGY CORE

Finding the Sweet Spot for SoC and ASIC Design

Beyond Drivers: The Critical Role of System Software Stack Architecture in SoC Hardware Development

New system-on-chip designs require major software efforts, from internal operating system and interface issues on up to specialized on-chip device functionality. Ultimately, the software must make the hardware work. Getting there requires clear vision and close cooperation between hardware and software teams.

by Jim Ready, Cadence Design Systems

It's no secret in the semiconductor industry that software development costs for a new system-on-chip (SoC) can exceed the hardware development costs by a significant margin. Having been directly involved in the software side of the SoC development process, I've experienced the overall development flow in detail, which gives me the courage to try to answer the following questions: Why is there so much software to develop? Android and other operating systems exist, and they all have an abstracted hardware interface, so isn't it just a simple matter of a few "drivers" to link up the new silicon with the OS?

If only it were so simple. The bottom line is that it's all about the hardware. All the software effort, from writing the lowest-level driver to building the coolest multimedia Android app, is driven, and potentially exacerbated, by the underlying hardware capabilities and their impact on the software developers on the SoC team.

To get a feel for the magnitude of the software effort for a new SoC, here's a composite picture gleaned from projects I worked on not too long ago. A typical project might have 500+ software developers overall, with most devoted to operating system development and customer support, and at most 100 developers for kernel porting, bring-up and testing. These projects typically take 48 months from start to finish for a mainstream, complex, mobile device SoC. If the SoC is new and not a derivative of a previous SoC, it can take much longer than 48 months to complete, especially if there is a process node change. It would be considered a success if any project completed in a firm 48 months, and that based upon only incremental changes being made to the SoC, with aggressive parallelization of hardware and software during development. Clearly, these software development projects are indeed large in scale.

Is there anything that can be done to change this situation? In a previous RTC article, we discussed the criticality of providing software developers with a realistic and usable (good performance) platform upon which to run software before silicon, to enable parallel hardware and software development. Here we assume that all that technology is already in place. So now let's attack the issue of what's currently "holding up" the development of software for the new SoC, and why there are so many software engineers. Because the answer touches on many aspects of software support for hardware, it's worth taking a closer look at what's going on.

It all begins with the need to support operating systems such as Android, Linux and Windows 8 with the digital signal processor (DSP), imaging, graphics processing unit (GPU) and other hardware subsystems on the SoC. In short, it's the issue of offloading software functions into hardware for performance gains or lower power consumption. The most common form of offload is moving a particular software capability into the underlying hardware of the SoC. However, given the ubiquity of wireless communication and the Internet, there is an emerging offload architecture based upon moving the offload function up into the Cloud. In fact, some architectures can decide on the fly whether to use device-based or Cloud-based offload, optimized around the best power savings, compute time, or some other user-selectable benefit. But no matter where the offload is happening, the implications for the software are what we need to understand.
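A minimal sketch, in plain C, of the on-the-fly placement decision just described. Everything here, the type names, the estimator inputs and the policy knob, is hypothetical and intended only to show the shape of the trade-off; a real implementation would draw its estimates from power models, radio state and workload history.

#include <stdbool.h>

enum offload_target { OFFLOAD_ON_CHIP, OFFLOAD_CLOUD };

/* Per-target cost estimates for one unit of work (hypothetical). */
struct offload_cost {
    double energy_mj;  /* estimated energy in millijoules */
    double time_ms;    /* estimated completion time in milliseconds */
};

/* Pick a target based on a user-selectable benefit, as the article
   describes: best power savings or best compute time. */
enum offload_target choose_target(struct offload_cost chip,
                                  struct offload_cost cloud,
                                  bool prefer_power)
{
    if (prefer_power)
        return chip.energy_mj <= cloud.energy_mj ? OFFLOAD_ON_CHIP
                                                 : OFFLOAD_CLOUD;
    return chip.time_ms <= cloud.time_ms ? OFFLOAD_ON_CHIP
                                         : OFFLOAD_CLOUD;
}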


FIGURE 1: Android architecture and hardware-related intra- and cross-layer software activities. (Source: Google)

For example, a DSP subsystem on an SoC can support a wide range of audio processing functions, including audio stream coding and decoding, voice processing, equalization and many other capabilities. These capabilities are implemented as a combination of hardware (the DSP) and extensive software libraries, and they are typically independent of any particular OS environment. Thus the "usual" notion is that "software drivers" will need to be developed, either by the SoC maker itself or by the customer, in order to interface the DSP hardware and software audio subsystem to the operating system the customer is using.

This notion corresponds to the typical, but oversimplified, layering diagram of a system, in which there is a hardware layer, a driver layer, an operating system layer and an application layer. In this model, all the hardware maker has to supply is an OS-compliant driver, and the hardware is then supported all the way up the software stack for apps to use. If only this were true! In many cases this simple model doesn't reflect reality at all.

For example, see Figure 1 for a system architecture diagram of Android. Note that although there certainly is a driver layer, there are a couple of intermediate layers with multiple components before reaching the application layer, all of which may have some dependencies on the underlying hardware. Also keep in mind that this software stack consists of many millions of lines of code, which need to be understood by software engineers who didn't write it in the first place. This is not the environment in which to trivialize the challenges of modifying the software stack.

With this complexity in mind, it is important to note that a number of popular OSs have unique and/or limited interfaces available for integrating support for DSPs or other hardware into the existing system frameworks. Imagine that the multimedia framework developers designed the framework to be largely software-based, with minimal interfaces to make limited use of hardware acceleration for various multimedia functions. So even if an SoC has a DSP on-chip, as far as the media framework is concerned, most of the hardware capabilities are unreachable; in effect, they don't exist. See Figure 2 for an illustration of this situation. Note that although the decoding capability of the DSP is used, all the other audio functions are performed on the application processor, even though the DSP might well be able to perform those functions with much greater power efficiency. Note also the back and forth movement of data between the DSP and the application processor for decoding. That data movement uses power, and of course, the application processor needs to be powered on as well.

In order to fully exploit the DSP to offload more of the audio function from the application processor, the hardware vendor can re-engineer the media framework to fully support its DSP, which is the optimal way for the system software to make full use of the DSP offload capability. See Figure 3 for an illustration of an advanced DSP offload architecture. In this case, almost all of the audio processing is offloaded to the DSP subsystem, allowing the application processor to be powered down, with the resultant savings in power.
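To make the two paths concrete, here is a rough C sketch of the routing decision a re-engineered media framework might make. All of the function and field names are invented for illustration; they are not actual Android or Cadence APIs.

struct audio_stream {
    int is_compressed;     /* arrives as MP3/AAC rather than raw PCM */
    int needs_effects;     /* requires mixing or equalization */
    int dsp_has_effects;   /* advanced offload: DSP can mix and apply effects */
};

/* Hypothetical hooks into the two paths shown in Figures 2 and 3. */
void dsp_decode_only(const struct audio_stream *s);     /* Figure 2 path */
void ap_mix_and_render(const struct audio_stream *s);   /* runs on app processor */
void dsp_play_end_to_end(const struct audio_stream *s); /* Figure 3 path */
void ap_enter_low_power(void);

void route_playback(const struct audio_stream *s)
{
    if (s->is_compressed && (!s->needs_effects || s->dsp_has_effects)) {
        /* Advanced offload: decode, mix and apply effects on the DSP,
           so the application processor can be powered down. */
        dsp_play_end_to_end(s);
        ap_enter_low_power();
    } else {
        /* Baseline offload: the DSP decodes, but PCM bounces back to the
           application processor for mixing/effects, keeping it awake and
           burning power on data movement. */
        dsp_decode_only(s);
        ap_mix_and_render(s);
    }
}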


The hardware vendors are faced with the task of re-engineering the media framework to fully support their DSP, or of leaving that effort to the customer, but in either case it's the only way for the system software to make full use of the hardware capability. And, of course, while implementing this offload capability, the developers have to make sure they don't "break" any of the application interfaces to the media framework; otherwise the apps won't work. This effort no doubt calls for multiple man-years of work, and likely has to be revisited each time the framework is revised. In addition, it requires expertise in at least two domains: system software (OS internals) and signal processing, both hardware and software. But the benefits are clear. The system can deliver the same level of audio processing at a small fraction of the power required if the processing remained on the application processor.

The key takeaway here is that while many hardware-dependent functions are contained within a single layer, device drivers for example, other functions are not. For example, as Figure 1 shows, and as we just discussed in detail for audio processing, adding a new framework, power optimization or performance tuning is vertical: to support the effort, software needs to be written or modified at every layer. We might conclude that developing interface standards is the way to solve this kind of situation, and indeed it can be. But as we'll soon see, there can be interesting and unintended consequences with that approach.

FIGURE 2: Android Audio Playback Baseline DSP Offload. (Source: Cadence)

Taming the Interfaces

For example, with GPU offload we see a different situation than in the multimedia framework discussion. Here the industry has been working for some time to establish standard offload mechanisms in PC and mobile platforms to take advantage of the large raw compute power of GPUs, especially for things closely related to graphics. These include processing with floating point, because the hardware is there, and imaging, because some parts of the pipeline can be applied. These mechanisms include OpenCL, AMD's Heterogeneous System Architecture (HSA) consortium, Google Renderscript and Filterscript, and a number of other initiatives.


While some may hope that the GPU is "The Universal Offload Engine," meaning you need only support GPUs and all your energy and throughput problems are solved, the reality as usual is more complex. As a result of the standardization effort, customers are asking SoC makers to support all of the hardware and software hooks proposed for CPU/GPU coordination, even when they may be a step in the wrong direction on efficiency. HSA, for example, requires full cache coherency between CPUs and offload engines, unified virtual memory management and (eventually) 64-bit flat addressing throughout. That's not necessarily optimal for low-cost, low-power offload. There is a legitimate argument that these things would ease function migration onto offload engines, but the lean, mean hardware leverage is significantly reduced, which could be a problem for ultra-small devices used for "Internet of Things" applications.

Many of these programming models and offload architectures implicitly or explicitly demand heavy-duty floating point. That's fine if the applications really need it. But it's a shame if the applications could really be implemented in fixed point, because there's a factor of at least three in throughput/watt to be gained by getting a software function down from 32-bit floating point to a 16-bit fixed point representation.

The bottom line is that there is no guarantee at all for an SoC maker that the proper interfaces and layers exist in Android, Linux or Windows 8 Mobile to easily integrate hardware into those systems and allow application software and the overall system to gain full benefits from the hardware. It's no wonder, then, that the major SoC suppliers have large software teams re-engineering the guts of these major OSs to support the advanced hardware capabilities they've placed on their SoCs.

But when looking at the overall software headcount, it's also important to recognize that not all of the software developers are working on the core operating system. There is plenty of customer-specific development going on as well. Just as the SoC maker tries to differentiate its SoC with some snazzy hardware (leading to the situation of exploding software developer headcount discussed here), the SoC customer in turn needs to differentiate its product.
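To illustrate the fixed-point argument above, here is a sketch of the same multiply-accumulate operation in 32-bit floating point and in Q15, the 16-bit fixed-point format common on DSPs. The function names are ours, not from any vendor library.

#include <stdint.h>

/* 32-bit floating-point multiply-accumulate. */
static inline float mac_f32(float acc, float coeff, float sample)
{
    return acc + coeff * sample;
}

/* Q15 multiply-accumulate: two Q15 operands produce a Q30 product;
   shifting right by 15 returns to Q15, and the result is saturated
   the way DSP MAC units do in hardware. */
static inline int16_t mac_q15(int16_t acc, int16_t coeff, int16_t sample)
{
    int32_t sum = (int32_t)acc + (((int32_t)coeff * sample) >> 15);
    if (sum > INT16_MAX) sum = INT16_MAX;   /* saturate on overflow  */
    if (sum < INT16_MIN) sum = INT16_MIN;   /* saturate on underflow */
    return (int16_t)sum;
}

The 16-bit path halves memory traffic and lets a much smaller multiplier do the work, which is where the factor of three or more in throughput/watt comes from.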



That differentiation is likely to be done with software, and it's often part of the SoC maker's business deal that the SoC maker does a lot of that work. For example, it is not uncommon for a large SoC project to have a significant part of its hundreds of software developers devoted to helping big customers (usually for free) customize and optimize the OS for their devices.

What can be done to improve the situation? First, maybe nothing at all. As Fred Brooks noted in his now legendary book, "The Mythical Man-Month," sometimes what's left for the software is the unique part of the system, the "Essential Complexity" as he calls it, and there's no way around the work required to implement it. But Brooks was no pessimist, so we'll follow his lead and look at some suggestions to ease the burden even under the current constraints of the market and industry today.

First, there may be some process improvements that can help. For example, here's an idealized development flow that a number of software architects I've worked with have either implemented or wished that they had. The first step in any all-new SoC development is to capture the high-level requirements for the SoC by a team staffed by both hardware and software architects. (It's not clear that this is always common practice in the industry, by the way.) The end result should be a functional specification composed of all of the individual hardware intellectual property (IP) blocks in the SoC. This should include the register definitions of each IP block, which are a key interface for building the software stack. The software architects now have enough data to validate that the software requirements could, at least in theory, be met by the underlying hardware definition. In turn, the architects need to validate that the design could meet the "speeds and feeds" required. This process can conclude with the decision that the SoC "looks good on paper," and the development effort then moves on to the next phase of implementation.

What's critical here is two-fold. One is that the magnitude of the gap between the SoC hardware and the target operating system(s) should now be identified, whether large or small. Maybe it really is "a small matter of a driver or two," or, worst case, a complete re-write of some major subsystem, but at least there should be no illusions as to the effort required (even though, being software, the effort is still likely to be underestimated).
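As an illustration of why those register definitions matter so much, here is a sketch of the kind of header a functional specification might yield for a single IP block, plus one thin abstraction call of the sort the core OS team would layer on top. The block name, bus address, offsets and bits are all invented for this example.

#include <stdint.h>

/* Hypothetical audio-offload IP block, as pinned down in the functional
   specification before RTL or driver work begins. */
#define AUDIO_IP_BASE   0x40080000u   /* invented bus address */

typedef volatile struct {
    uint32_t CTRL;      /* 0x00: control register       */
    uint32_t STATUS;    /* 0x04: status (read-only)     */
    uint32_t SRC_ADDR;  /* 0x08: DMA source address     */
    uint32_t LEN;       /* 0x0C: transfer length, bytes */
} audio_ip_regs;

#define AUDIO_IP        ((audio_ip_regs *)AUDIO_IP_BASE)
#define CTRL_ENABLE     (1u << 0)
#define STATUS_BUSY     (1u << 0)

/* One call from the generic abstraction layer the article recommends;
   middleware above this layer never touches registers directly. */
static inline void audio_ip_start(uint32_t src, uint32_t len)
{
    AUDIO_IP->SRC_ADDR = src;
    AUDIO_IP->LEN      = len;
    AUDIO_IP->CTRL    |= CTRL_ENABLE;
}

Once both teams sign off on a file like this, the software side can code and even simulate against it in parallel with hardware implementation.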

FIGURE 3: Android Audio Playback Advanced DSP Offload. (Source: Cadence)

The other activity is that the core OS team now has enough information to design and implement an abstraction layer, a generic interface to the underlying SoC acceleration and other specialized SoC hardware subsystems. The main OS team can then develop in parallel the middleware pieces and applications that use those capabilities.

Another observation borne from experience, despite wishing the situation were otherwise, is that it's important not to oversimplify. Hardware/software interactions can be very complex, and even the smallest hardware interface, or a change to that interface, can have ripple effects all the way up the software stack, including the application layer. These ripple effects can occur in many forms:

• Porting existing software to an SoC might require a major re-write of the software to support a new hardware capability.

• Adding new hardware to an existing SoC might disrupt the software stack, making the hardware change too expensive to add; alternatively, shipping an SoC with unused hardware can take up space and consume power.

• Designing a new software stack without regard to the possibility of utilizing hardware offload capability in the future might preclude the software from supporting the next hot SoC.

To shamelessly quote Brooks once again, "There is no silver bullet" when it comes to software development. Indeed, as long as the software is built at arm's length from the hardware development (and vice versa, of course) and both sides are aggressively innovative, software will bear the burden of making sure the two pieces fit and work together. One could argue this is the cost of innovation and the cost of a horizontally structured industry.

Cadence Design Systems
San Jose, CA
(408) 943-1234
www.cadence.com



TECHNOLOGY CORE

Finding the Sweet Spot for SoC and ASIC Design

Integration Blurs the Line between MCUs and SoCs

Time and money are major considerations when approaching a design. With today's high scales of integration, the available devices offer a wide array of alternatives, all of which involve different combinations of time, money and other resources. Selecting the right mix can be vital to success.

by Jason Tollefson, Microchip Technology

The System on a Chip (SoC) represents the pinnacle of tailored designs. Expressly selected peripherals, especially analog, give the promise of a perfect fit with no waste, delivering very low component cost. Wikipedia defines the SoC as "an integrated circuit that integrates all components of a computer or other electronic system into a single chip." With the level of integration that is commonplace today, many ICs can qualify as an SoC, especially the microcontroller.

The investment required to develop a custom or semi-custom SoC is substantial in both time and cost. There are non-recurring engineering (NRE) costs, negotiating the design specification, design time, fabrication time and, finally, developing the application. But then there is that low component cost as the reward. Consider now the MCU: a standard product, widely available, without NRE and with many, if not all, of the required peripherals, such as analog and communications, but one that does not match the cost of the custom SoC. Should you choose future perfection (i.e., a custom SoC) or an MCU that's available today? This is the decision designers must make when considering a custom SoC or standard MCU for their next high-volume design.

Criteria for Comparison

TABLE 1: SoC considerations.


Let's look deeper into this question and compare our choices among several key criteria. First, let's define the boundaries of the discussion. We will consider four types of products: a standard MCU, a full custom SoC (or ASIC), a semi-custom SoC, and the FPGA with integrated CPU. The semi-custom SoC is different from a full custom SoC in that it is already available and was designed with an application in mind. These products can be found from vendors such as Broadcom. Toshiba and Infineon offer full custom ASIC solutions. An FPGA is well known in system design, but recently companies such as Xilinx have been offering hybrid devices with an embedded CPU complemented by programmable logic. Meanwhile, the MCU has grown in complexity. Companies such as Microchip Technology are integrating advanced analog peripherals, lots of memory, and hordes of communication and timing peripherals, making the once sharp lines between MCUs and SoCs blurry.

Now let's bring the differences back into focus by establishing some criteria for comparison. For the assessment to be valid and complete, we need to consider the total cost of ownership, not just the unit cost. This includes the three broad areas of product features, design enablement and time-to-market (Table 1).

Product Features

When it comes to obtaining the peripherals that are an exact fit for your application, it's hard to beat the custom SoC. You work with the vendor and include just the right peripherals to optimize your design. There is little waste and fewer compromises. If you want a 10 Msample/s pipelined ADC, you simply specify it. The FPGA is similar in that you can program the logic to be what you want, but you may be forced to make sacrifices with analog. For example, you can have a 1 Msample/s SAR ADC, but not a 10 Msample/s pipelined ADC. The semi-custom SoC offers a variety of peripherals, but they are designed with application segments in mind, such as communication processors, and may be mismatched to your application. So it has more constraints, along with some peripherals that you will not use. There are literally thousands of different MCU configurations, each "dialed in" for an application space. It's hard to find an application that cannot be served by the MCU. But vendors scale cost with integration, so getting that 10 Msample/s pipelined ADC might also mean you get an LCD controller and USB, whether you need them or not. Advantage: Custom SoC.

Sometimes core performance matters, sometimes it does not, depending on the application. Rarely would you need a core running at 200 MHz for a home thermostat, for example. And you would not want it if the thermostat were battery powered. With FPGAs and semi-custom SoCs, you will typically get the Ferrari. They tend to integrate a CPU so screaming fast and power hungry that it ensures high performance in almost any application. This might be overkill for your application, but it will definitely work. The MCU, much like the custom SoC, can be scaled to fit. There are lots of choices within 8-/16-/32-bit MCUs. You can easily find one that will fit your processing load and power budget. Many vendors have put special emphasis on CPU efficiency and current consumption, which is a great combination for battery-powered applications. But if you need a Ferrari, you can find that too. Advantage: MCU & Custom SoC.

Cost is typically the reason that people consider an SoC. The perception is that the cost of the SoC is lowest, and that is often the case. But we must be sure that the total cost of ownership is fully understood and considered before committing to the custom SoC. The fully custom SoC is intentionally a perfect fit for the application, with little to no extraneous features. This generally leads to the lowest unit cost. But there are other considerations. There will be design and test charges (NRE) that need to be added to the total cost. Once the chip is out of the fab, any issues that are found will need to be fixed, an additional NRE cost. A trip back to the fab for a mask revision is an additional cost and can wipe out the unit cost savings in a hurry. A re-spin also takes time. A fab cycle can be as long as 90 days, leaving you without product to develop your application: an opportunity cost. Another consideration is development tools. Tools for developing application code and testing hardware will need to be custom designed, developed and purchased. These costs can vary widely. However, if your volumes are significant and your product lifetime long, the custom SoC unit savings may just overcome these additional costs.

The FPGA has a high unit cost in the tens of dollars, due in part to the advanced process geometries that enable its flexibility. But other costs include support chips, such as boot memory and numerous voltage regulators. Development tools for FPGAs start around $1,000, depending on how many tool seats are needed. These costs might be absorbed if the application has a high price. But, typically, there are better choices if system cost is a primary concern. The MCU fitting the application might have a higher unit cost, but can still represent a lower total cost of ownership. For one thing, there are no startup costs (NRE). You simply order your chip online and get it a few days later. The MCU has its own flash memory and regulator built in, so no supporting chips are necessary. Finally, most MCU vendors offer free software tools and low-cost hardware starting at $20. So, in essence, the total cost of ownership is simply the product cost (Figure 1). Advantage: MCU.

FIGURE 1 Total Cost of Ownership (where it comes from).

With a custom SoC, all of the flexibility is at the beginning of the design. You can select peripherals, core and I/O to match your exact application needs. But after the SoC design becomes a chip, flexibility is lost. The same is true for a semi-custom SoC, where you can select the one that fits your application, but you cannot change the features after that—you are locked in. Contrast that with the flexibility of the MCU and FPGA. Both offer scalability in memory, peripherals and I/O. However, they accomplish this differently—the FPGA through programmability, and the MCU through proliferation of product families—but the end result is the same. Changes can be made throughout the design cycle, even after the product is launched. Advantage: MCU & FPGA.
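
The total-cost-of-ownership argument reduces to simple arithmetic, and it can be worth scripting before committing to a path. This small C sketch compares a custom SoC against an MCU at a given lifetime volume; all of the dollar figures are hypothetical placeholders, not vendor quotes.

    #include <stdio.h>

    /* Hypothetical cost model: TCO = NRE + tools + (unit cost x lifetime volume). */
    static double tco(double nre, double tools, double unit_cost, double volume)
    {
        return nre + tools + unit_cost * volume;
    }

    int main(void)
    {
        const double volume = 500000.0;                     /* lifetime units         */

        /* Placeholder numbers for illustration only. */
        double soc = tco(750000.0, 50000.0, 1.10, volume);  /* custom SoC: big NRE    */
        double mcu = tco(0.0, 20.0, 1.85, volume);          /* MCU: no NRE, $20 board */

        printf("custom SoC TCO: $%.0f\n", soc);
        printf("MCU TCO:        $%.0f\n", mcu);
        printf("%s wins at this volume\n", soc < mcu ? "Custom SoC" : "MCU");
        return 0;
    }

Re-running with different volumes makes the break-even point, and the sensitivity to a single re-spin added to NRE, immediately visible.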

Design Enablement

Where do I go for information? Who do I talk to when I'm stuck or have a problem? How do I integrate the chip into my application? These are three critical questions that you will encounter after you have selected your chip, whether it is an SoC, FPGA or MCU. How these questions are answered by your vendor is crucial to the success of your application.

For the fully custom SoC, you have a face-to-face relationship with the vendor. All information flows through your contact at the company. Sounds great, but what if you live across the globe, with 12 time zones separating you and your contact? Because the SoC is custom, you must seek out your vendor, who is the expert, to get information. The situation is similar for the semi-custom SoC, in that information comes from the vendor and is not widely available. A relationship is required to get information.

Contrast that with the MCU and the FPGA. Look on the vendor website for information and you will find a plethora of free-flowing information about your product: videos, code examples, data sheets, errata documents, peripheral user manuals, package information and reference designs, all available 24 hours a day, 7 days a week. A relationship with a person is not required to gain access to information. But if you want to establish relationships with people in the know, there are community user forums, distribution partners, and even 24/7 online engineering support available. Advantage: MCU & FPGA.

Time-to-Market

FIGURE 2 Time-to-Market (relative time).

TABLE 2 Advantages by Scoring Criteria.


The famous American entrepreneur and statesman Benjamin Franklin once said, "Time is money." This quote can be interpreted two ways when marketing your product. Franklin's meaning was to not waste time; in other words, the faster you are to market, the more money your product can make. Another meaning could be to take the time to get your product right, and you will make more money. These two approaches are illustrative of designing with an MCU/FPGA vs. the SoC.

The MCU and FPGA are your chips of choice if you want to get to market fast (Figure 2). With myriad design resources and information, combined with overnight availability of product from sources such as Digi-Key, there is little standing in the way of getting the product to market. The trade-off, of course, is unit cost. This can be higher, as noted above in system cost. But if you have calculated the total cost of ownership, considered the risk of a re-spin and know that unit cost is your long-term issue, taking the time to design a custom SoC might be your best choice. Advantage: MCU & FPGA.

We've learned that time can be as important as features and design resources when reviewing the SoC options. In the end, we as engineers have to study the trade-offs and make decisions. But good decisions will include considerations beyond unit cost, and will consider the total cost of ownership for the application. By looking at the advantages that the MCU, FPGA and SoC have relative to each other in their entirety, we will make a great choice. Table 2 shows a parting summary of the considerations we've made. Good luck with your design!

Microchip Technology, Chandler, AZ. (480) 792-7200. www.microchip.com



TECHNOLOGY IN CONTEXT

Optimizing Machine Vision Systems

FPGAs – Taking Vision to the Next Level

As today's machine vision applications become ever more demanding, the unique capabilities of FPGAs, such as parallelism and low power consumption, can greatly enhance performance. But their advantages often depend on a good understanding of the use case. Often, in fact, they can be used in tandem with CPUs for the best overall advantage.

by Carlton Heard, National Instruments

Today, manufacturing companies are striving to lower costs and increase quality and throughput, robots are becoming smarter and more flexible, and automation is a hot topic with a large amount of resources backing it. Vision is one of the key enabling technologies behind these trends, and it has been growing rapidly over the past couple of decades. But the performance of image processing applications has been largely tied to advances in CPU speed. Vision has been riding the CPU frequency wave to run more complex algorithms at higher camera frame rates and resolutions, but lately the nearly exponential growth in CPU performance has been tapering off compared to the explosive growth of the past decade. Vision applications must rely on alternative solutions to increase speed rather than simply depending on a faster processor.

One option is to divide the image processing algorithm and do more in parallel, as many of the algorithms used in vision applications are very well suited to handle this. Technologies like SSE, hyperthreading and multiple cores can be used to parallelize and do more without increasing the raw clock rate. However, there are issues when selecting this option. Unless the software package being used abstracts the complexity, there are difficulties in programming software to use multiple threads or cores. Data must be sent between threads, which can result in memory copies and synchronization jitter. Additionally, it is generally a manual process to take an existing single-threaded image processing algorithm and make it multicore compatible. Even then, cost often prohibits parallelizing very much because most system designers do not have the option to purchase a 16-core server class computer for each test cell they create.

One solution for this issue is made possible with an FPGA, as it is fundamentally a semiconductor device that contains a large quantity of logic gates, which are not interconnected and whose function is determined by a wiring list that is downloaded to the FPGA. The wiring list determines how the gates are interconnected, and this interconnection is performed dynamically by turning semiconductor switches on or off to enable different connections. The benefit of using an FPGA is that it is essentially software-defined hardware. Therefore, system designers can program the chip in software, and once that software is downloaded to the FPGA, the code becomes actual hardware that can be reprogrammed as needed. Using an FPGA for image processing is especially beneficial as it is inherently parallel. Algorithms can be split up to run thousands of different ways and can remain completely independent. While FPGAs are inherently well suited for many vision applications, there are still certain aspects of the system that may not be as suited to run on the FPGA. There are a number of features to consider when evaluating whether to use an FPGA for image processing.

Considerations for Using an FPGA

FPGAs have incredibly low latency (on the order of microseconds) when they are already in the image path. This is critical because latency accounts for the time it takes until a decision is made based on the image data. When using FPGAs with high-speed camera buses such as Camera Link that do not buffer image data, the FPGA can begin processing the image as soon as the first pixel is sent from the camera rather than waiting until the entire image readout has completed. This reduces the time between exposure and image processing by nearly an entire frame period, making it possible to achieve extremely tight control loops for applications like laser tracking and in-flight defect rejection systems.

FPGAs can help avoid jitter. Because they do not have the overhead of other threads, an operating system or interrupts, FPGAs are extremely deterministic. For many image processing algorithms, it is possible to determine the exact execution time down to nanoseconds. For massively parallel computation or heavily pipelined math, the raw computation power of an FPGA can be an advantage over a CPU-based system. An important consideration, however, is to understand what image processing algorithms are needed for the application. If the algorithm is iterative and cannot take advantage of the parallel nature of an FPGA, it is most likely best suited for a CPU-based system.

If a loop has multiple operations running within it and those operations run sequentially, the time it takes for the loop iteration to complete is the sum of the time each operation takes to run (Figure 1). One way to increase the processing loop rate is to parallelize the operations through pipelining. By doing this, the processing loop rate is limited only by the slowest operation rather than the sum of them all (Figure 2). This approach increases speed along with latency, because the result is not valid until multiple loop iterations are complete. For pixel-by-pixel operations including kernel operations, dilate, erode or edge-finding, algorithms can be stacked back-to-back incorporating only marginal latency.

FIGURE 1 When operations are programmed sequentially, the loop rate is limited by the sum of all times for each operation.

FIGURE 2 Pipelining speeds up loop rates as each operation can run in parallel. In this case, the loop rate is only limited by the operation that takes the longest.

Security can also be an issue. Since the image processing occurs in hardware with FPGAs, the image and code stays within the FPGA. This is beneficial if applications require the image or IP to remain secure and hidden from the user.

And don't forget the factors of power and heat. An FPGA may consume 1-10 watts of power, while a CPU of the same performance can easily consume 50-200 watts. With that much power, there is also a lot of heat that must be dissipated. For fanless embedded applications this may result in a more complex and larger mechanical design. The lower power consumption of an FPGA is particularly useful for extreme conditions such as space, airborne and underwater applications.

Considerations for Using a CPU

As with most applications, there are tradeoffs to consider along with potential benefits. While FPGAs offer many advantageous features, there are still instances where a CPU may be more beneficial. Consider the following tradeoffs when determining whether an FPGA, a CPU, or a combination is most appropriate for a particular vision application.

Often the use of an FPGA can add complexity to the design process. Hardware programming is a significant departure from traditional software programming, as there is a non-trivial learning curve. However, high level synthesis tools such as LabVIEW FPGA are available to abstract much of this complexity, enabling the designer to take advantage of FPGA technology without a deep knowledge of VHDL programming.

There are also great differences in clock rates between FPGAs and CPUs. Clock rates of an FPGA are on the order of 100 MHz to 200 MHz, which is significantly lower than a CPU that can easily run at 3.0+ GHz. Therefore, if an application requires an image processing algorithm that must run iteratively and cannot take advantage of the parallelism of an FPGA, a CPU results in faster processing. This serves as another reminder to evaluate the system requirements and algorithms before selecting between an FPGA or CPU.

Is there a big need for floating point support? Floating point is difficult to achieve on an FPGA. This is somewhat mitigated by using fixed point or high level synthesis tools, but it is a factor that must be kept in mind when using FPGAs that may not even need to be considered when working with a CPU.

In many applications, the combination of an FPGA and a CPU to handle various aspects of the design can be very useful. DMA can help pass data back and forth between the devices, and each device can be used to take care of the processing that is most appropriate for each chip. This is not to say that an FPGA or a CPU is incapable of performing all tasks, but some are better suited for one chip versus the other, and using both can simplify the design while making it possible to gain high performance. Many applications can benefit from this architecture.
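
As a back-of-the-envelope check on the loop-rate arithmetic behind Figures 1 and 2, the following C sketch compares the two scheduling models for a hypothetical three-stage pipeline (acquire, threshold, morphology); the stage times are made-up illustrative numbers, not benchmarks.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical per-stage execution times, in microseconds. */
        double acquire = 120.0, threshold = 40.0, morphology = 90.0;

        /* Sequential (Figure 1): one iteration costs the sum of all stages.   */
        double t_seq = acquire + threshold + morphology;

        /* Pipelined (Figure 2): throughput is set by the slowest stage alone. */
        double t_pipe = acquire;
        if (threshold > t_pipe) t_pipe = threshold;
        if (morphology > t_pipe) t_pipe = morphology;

        printf("sequential: %.1f us/frame (%.0f frames/s)\n", t_seq, 1e6 / t_seq);
        printf("pipelined:  %.1f us/frame (%.0f frames/s)\n", t_pipe, 1e6 / t_pipe);

        /* Latency note: a pipelined result is valid only after all 3 stages. */
        printf("pipelined latency: %.1f us (3 in-flight iterations)\n", 3 * t_pipe);
        return 0;
    }

The trade described in the text shows up directly in the numbers: throughput roughly doubles here, while the first valid result arrives later than in the sequential case.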

Matching the Needs of Application Categories

There are four main categories: visualization, high-speed control, image preprocessing and co-processing. Visualization takes an image from a camera and changes it for the purpose of enhancing it for display to human eyes. In this case, the FPGA reads the image from the camera and performs some type of in-line processing such as highlighting edges and features of interest or masking features. Then the FPGA outputs the image directly to a monitor or sends it to the host CPU for display. In most instances, the FPGA directly outputs the image, as low latency and jitter are important in the system. As an example, with medical devices an image is taken and cells are processed and displayed on the monitor for a doctor to review. The FPGA can be used to measure the size and color of each cell and highlight specific cells for the doctor to focus on.

In high-speed control applications, the output is not an image for display but some other type of I/O, such as a digital signal controlling an actuator. In these applications, the time between when an image is acquired and an action is taken must be fast and consistent, so an FPGA is preferred due to the low latency and low jitter it offers. This very tight integration of vision and I/O enables advanced applications like visual servoing, which is when visual data is used as direct feedback for positioning and control with servo motors. Often all the inspection and decision-making can be accomplished on the FPGA with little or no CPU intervention, but a CPU can still be used for supervisory control or operator interaction. Applications best suited for high-speed control include high-speed alignment, where one object needs to stay within a given position relative to another, as in laser alignment, and high-speed sorting (Figure 3). From food products and rocks to manufactured goods and recycled garbage, there is a huge bottleneck in efficiently and quickly sorting items based on color, shape, size, texture, etc. The ability to acquire an image, process it and output a result within the FPGA can speed up this process, resulting in more accurate sorting so that fewer good parts are rejected and fewer bad parts are accepted. A more specific example where FPGAs can be especially beneficial is air sorting, which involves imaging, inspecting and sorting a product while it is falling. Low jitter is critical for this type of application because the time between the decision-making and I/O must be known.

FIGURE 3 FPGAs can be used for advanced control applications such as high-speed laser tracking. Low latency and jitter are requirements for adaptive optics that are possible with FPGA image processing.

Image preprocessing and co-processing are nearly the same, with the difference being which device initially acquires the image. In both situations the FPGA works in conjunction with a CPU to process images. When preprocessing images, the image data travels through the FPGA, which modifies or enhances the data, before sending it to the host for further processing and analysis. Co-processing implies that the image data is sent to the FPGA from the CPU instead of a camera. This scenario is most common for postprocessing large batches of images once they are acquired.

One of the most exciting examples is using FPGAs to boost the speed and efficiency of Optical Coherence Tomography (OCT). This is a technique for obtaining sub-surface images of translucent or opaque materials at a resolution equivalent to a low-power microscope. It is effectively an "optical ultrasound" that images reflections from within tissue to provide cross-sectional images. OCT is attracting interest among the medical community, as it provides tissue morphology imagery at a much higher resolution (better than 10 µm) than other imaging modalities such as ultrasounds or MRIs (Figure 4).

FIGURE 4 Kitasato University used FPGAs to create the world's first real-time 3D OCT medical imaging system.

A typical OCT system uses a line-scan camera and a special light source that sweeps across a tissue and images the surface beneath, one line at a time. Once each line is acquired, the data is scaled and converted to the frequency domain, where the data is further manipulated and combined with other lines to reveal a high resolution, 3D picture of the tissue. With industrial inspection, there are many applications today that use brute force methods to check for defects over large and continuous areas, as seen in web inspection. FPGAs can be used to preprocess the large amounts of data associated with web inspection by performing flat field correction, thresholding and particle analysis.

The advantages of an FPGA for image processing are dependent upon each use case, including the specific algorithms used, latency or jitter requirements, I/O synchronization, power and programming complexity. In many cases, using an architecture featuring both an FPGA and a CPU presents the best of both worlds and offers a competitive advantage in terms of performance, cost and reliability. With a multitude of inherent benefits, FPGAs are poised to take many vision applications, including medical imaging and vision motion integration, to the next level.

National Instruments, Austin, TX. (512) 794-0100. www.ni.com



TECHNOLOGY CONNECTED

High End Graphics on Small Devices

Speeding Time-to-Market for GUI Designs

Delays in getting an embedded UI to market are costly in terms of development resources as well as competitive advantage. The process is often lengthy and tedious, setting back launch dates and driving up development expenses. This can be improved using some best-practice approaches to speeding time-to-market.

by Brian Edmond, Crank Software

In traditional GUI design, a user experience (UX) or user interface (UI) team creates a prototype on desktop software such as Adobe Photoshop, Illustrator, HTML or Flash, submits it for approval, and then transfers it—for most of the remainder of the development process—to the engineering team. This design process presents the first major obstacle in time-to-market and is also what often results in a less-than-desirable UI. Once that critical UI design hand-off occurs, embedded system developers proceed to re-implement the prototype for the embedded system. The result is that the original prototype, in essence, becomes a throwaway, since the performance observed in the desktop application bears no resemblance to the performance of the target platform. As embedded system developers go about the process of re-implementing the prototype—and attempting to replicate the UI—they inevitably make changes and sacrifice features in order to fulfill their mandate, which is to make it run on the target.

It is important to note another factor that delays time-to-market: UI designers and embedded system developers typically do not work in tandem at any point in the process. In fact the opposite is true. Once the design is handed off, UI designers often do not see it again until the alpha or beta phase of product testing. This siloed approach, in which there is a complete loss of design control, creates lag time late in development as the designer attempts to retrofit features into a nearly completed product. As a result, another obstacle to market release is a back-and-forth process between the UI designers and embedded system developers to develop a product that both reflects the original design and is fully functional (Figure 1).

FIGURE 1 A very rich and complex user interface can be designed using Windows-based tools like Photoshop or Adobe Illustrator and others. Translating that design to run under the RTOS on an embedded design can be filled with complications and compromises.

The disconnect between the two teams runs even deeper than that. UI designers, as mentioned before, typically use desktop applications that were never intended to run on the target platform. In other cases, the prototype itself is composed of fake content and imagery and does not even contain real data. This adds to the embedded system developers' timeline, as massive re-coding is required to make the translation from these desktop applications to an entirely different hardware and/or software platform. As every UI development team knows, the result is that development time has been so delayed that in fact there is no time left in the schedule to adequately address UX issues. These delays also mean that testing occurs late in the development cycle, since no portion of the UX can be tested independently while the engineering team is still writing back-end code. Ironically, it is the UX that is the true differentiator for any embedded UI, and the intended UX—one that ties customers to a specific brand with rich features and intuitive functions—often never gets released. In the end, the prototype and the end product have diverged due to design misinterpretation and performance implications to an extent that the end product does not reflect the original design.

Best Practices for Speeding Time-to-Market

To get GUI designs to market more quickly, a better approach is to allow UI designers and embedded system developers to work independently, but concurrently, on UI development—doing what each does best. In a workflow where there is no product hand-off, each team remains involved and able to provide continuous feedback. If UI designers are allowed to own the design throughout the development process, it not only compresses the development schedule, it also requires fewer embedded system developers on the development team. The reason is that they are no longer forced to write code in order to implement design features at the same time they are working toward functionality on the target platform. When embedded system developers are forced to change the design, two things result. First, they make mistakes because that is not their area of expertise. This then results in a multitude of trial-and-error efforts to rectify the mistakes. Secondly, it also takes exponentially longer for them to make said design changes.

When creating a UI, development teams can expedite the release date by creating a thorough design up front, by fully defining the UI features, the hardware platform and the system integration points. In other words, each team should have an equal amount of information about what the product will look like and what it will do, from the beginning. This is important because UI designers need to know what data the UI will be able to retrieve from the system, and embedded system developers need to know what demands the system must be able to accommodate. Armed with information on the various required entry points, embedded system developers can test these independently, and much earlier in the process than is typical today.

FIGURE 2 The Crank Storyboard Suite was designed for engineers by engineers. It allows UI designers with no programming experience to drag-and-drop their UI designs in parallel with, yet independently from, the engineers who are working on the coding. Storyboard simplifies the design process, saves valuable time and leverages the core skills of each valued member of the team.

Another time-to-market boon is the prototype-as-product approach, which means implementing designs on a true prototype immediately. The typical process is to prototype the design and then re-implement it for the embedded platform. However, if the design is implemented on the prototype from the outset, with the intended design fully functional on the intended platform, then any design flaws, feature or hardware compatibility issues, etc. can be rectified early, rather than in the testing phase. If the prototype is the product, then the embedded system developers can begin writing the back-end code immediately as well. Working from a functional prototype that runs as well on the desktop as it does on the target, such as a tablet, can help condense development time from months to weeks.

Often the hand-off process for the design team involves exporting the design information and images into a format usable by the development team—a time-consuming task that again delays deployment. The reason: UI designers' applications do not speak to embedded system developers' software development tools. A better solution is to allow UI designers to use a set of tools they are comfortable with and that can be easily integrated by embedded system developers—and then transmitted back to UI designers when changes are required. This eliminates the need to re-write code for every UI change, which introduces bugs into the functionality, requires more testing time and delays the UI release. The more expeditious approach is to allow the designer to make changes to data files, such as XML or HTML, that can be used as a UI description language.

A common "fix" for development process issues is to deploy third-party software to bridge the gap between the UI designers' toolsets and the embedded system developers' toolsets. Yet all too often, the third-party application does not integrate well with either. Needless to say, incompatibility issues lengthen the development process, and third-party software that is not compatible with the software currently in place will exacerbate the issue. It is essential that the development support software can integrate with what both teams are currently using—with Adobe Illustrator or Adobe Photoshop for UI designers, and with tools like Eclipse or native desktop tools for Linux for embedded system developers.

Development software that separates the UI from the back end can speed development and deployment. Using a model-view-controller pattern can shorten the design process and help teams work together with clear objectives (see the sketch at the end of this section). If the UI and back end can be run independently, then each team can continue to work on the product without making disastrous changes while still maintaining clear integration points. This allows each team to focus on their core competencies: design or embedded system development.

Development software that does not flexibly support multiple platforms like Macintosh and Linux can also add time constraints. Every member of the development team should be able to work in the environment in which they are the most efficient. The developer should also be able to simulate and test features on their respective platforms to limit the need for external hardware platforms. Development teams also must have the ability to run a functional prototype on multiple platforms to compare performance early in the product cycle. Teams that are able to evaluate hardware platforms, and various configurations on those platforms, can test performance early in the process and make educated decisions about whether or not the platform will perform as expected with the design. Many time delays in development are centered around resolving those issues—or worse, settling for a less robust UI due to hardware constraints.

For teams looking to speed time-to-market, the key is unquestionably to have flexibility. When choosing development support software, beware of anything that limits either the operating system or the target hardware platform. To maintain efficiency and competitiveness, companies should have the freedom to move from one platform to another based on development budget, customer expectations and similar factors. The converse scenario forces companies to purchase different UI tools for different product levels.

It is also important to resist the temptation to overlay frameworks. To address all of these development roadblocks, many companies resort to implementing a framework over the development process. The result is that not only is the company employing a team to build, test and maintain a UI, it is also employing a separate team to build, test and maintain a framework. This added layer serves to complicate and delay the development process. Similar circumstances occur when someone in the company builds custom tools to solve these internal issues. The builder then becomes an internal product provider.
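
A minimal sketch of the model-view-controller split mentioned above follows, written in C. It only illustrates the separation of concerns; it is not Crank's Storyboard API, and every name in it is hypothetical.

    #include <stdio.h>

    /* Model: back-end state, owned by the embedded engineers. */
    struct model {
        int    speed_rpm;
        double temp_c;
    };

    /* View: rendering only; the UI team can swap this out freely. */
    static void view_render(const struct model *m)
    {
        printf("speed: %d rpm  temp: %.1f C\n", m->speed_rpm, m->temp_c);
    }

    /* Controller: the agreed integration point between the two teams. */
    static void controller_update(struct model *m, int new_rpm, double new_temp)
    {
        m->speed_rpm = new_rpm;
        m->temp_c = new_temp;
        view_render(m);   /* the view reacts; it never touches hardware itself */
    }

    int main(void)
    {
        struct model m = { 0, 0.0 };
        controller_update(&m, 1500, 36.5);  /* simulated data from the back end */
        return 0;
    }

Because the view depends only on the model's data, either side can be re-worked, or simulated on a desktop, without breaking the other team's integration points.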

Real-World Examples

Companies using Crank Software (Figure 2) have been able to speed time-to-market by implementing these best practices. For instance, QNX Software Systems used Crank Software to implement a 17-inch, curved, 1080p center console display embedded in a Bentley concept car. The unique digital light projection HMI, which debuted at the Consumer Electronics Show in 2013, featured content that was originally created in Adobe Photoshop and was fully implemented on the target in only eight weeks. Another company, Auto Meter, used Crank Software to successfully develop its new LCD Competition Dash—a user-customizable display with precise data acquisition—in less than six months for their customer, NASCAR. The display was launched in time to debut at the 2012 SEMA Show for automotive specialty products.

Much exists in current UI development scenarios that extends the development timeline, drives up costs and sacrifices UI quality in order to meet a targeted release date. Creating an environment in which UI designers and embedded system developers can work collaboratively, but independently, enables each to stay focused on what they do best, to maintain ownership of the UI from concept to implementation, and to speed time-to-market—a critical requirement in a landscape where companies can succeed or fail based on their next UI.

Crank Software, Ottawa, Ont. (613) 595-1999. www.cranksoftware.com



TECHNOLOGY IN SYSTEMS

Getting beyond the BIOS for Embedded

Open Source Firmware – Coreboot for x86 Architecture Boards

The traditional commercial BIOS, while very useful for PCs, does not ideally serve the needs of embedded applications both in terms of functionality and pricing/licensing. The open source community has developed an alternative aimed at the needs of embedded developers.

by Clarence Peckham, Senior Editor

Without a doubt, one of the key changes in software development has been the growth of open source software projects. The news is full of Linux- and Android-based systems, with Android being the largest installed base of open source software (also the best example of a Linux-based system used on a smartphone). In the time since Linux started the public awareness of open source software, there have been other efforts that have laid the foundation for the open effort. One of the challenges of developing software has been the availability of tools such as compilers, assemblers, linkers, debuggers and integrated development environments. On top of this, the proliferation of processors has made tools availability even more critical. The number of available processor solutions based on MIPS, x86, PowerPC and ARM architectures has increased almost daily. On top of that are the 8- and 16-bit processor solutions such as those from Microchip, Freescale and others. How do the tools keep up? The solution has been the development of a base set of development tools based on the GNU toolset shown in Table 1. Each of the manufacturers, or open source developers, provides a set of tools based on the GNU toolset. It is possible to use a set of open source tools for almost all of the popular processors available for embedded designs.

GNU Make: Automation tool for compilation and build
GCC: C Compiler
G++: C++ Compiler
GNU Binutils: Suite of tools including linker, assembler and other tools
GNU Bison: Parser Generator
GNU M4: Macro Parser
GDB: Source Code Debugger
GNU Build: Autotools for builds (autoconf, autoheader, automake, libtool)
GNU Libraries: Std I/O libraries, math libraries, etc.

TABLE 1 Open Source tools for S/W development — available for multiple processor architectures.

Availability of inexpensive, or even free, tools has opened up the use of many processors that might not have been used if an expensive toolset were the only solution. After all, it is an uphill road to convince your boss that trying the latest HAL2014 processor is a good idea if it is going to cost $10K to get a set of tools. To be fair, I should mention that the HAL2014 vendor will, in most cases, provide an evaluation set of tools with limited functionality for a limited time—but free and unlimited is better.

One issue with the open source tools is the lack of defined support. In a lot of cases the large number of developers means that bug reports get immediate attention. If that is not enough, a support ecosystem has built up around companies that will provide support for a toolset for the cost of a support contract. This makes a lot of embedded system developers more comfortable using an open software tool solution.

With the tools and open source operating system solutions, the user can develop an embedded solution. However, there is still a hole in the open software offerings for embedded applications—the boot firmware, or in the case of x86 architecture, the Basic Input/Output System (BIOS) used to start the application. For processor solutions other than x86 there is an open source solution called U-Boot, which started as a solution for the PowerPC processor and has migrated to the MIPS and ARM architectures, and to System on Chip (SoC) solutions based on MIPS and ARM. For the x86-based architecture, the standard has been to use a traditional BIOS developed for the PC architecture, such as the offerings from Phoenix or AMI. This is a workable solution but not one that is ideal for the embedded market, since it involves an upfront cost as well as royalties for each unit shipped.

Embedded Systems Firmware Requirements

The firmware, or BIOS, used in x86 architecture systems was developed to provide a means to test and initialize hardware and boot the operating system from a disc drive. For embedded systems, the firmware has requirements that go beyond the normal BIOS features—in most cases requirements that are much simpler than what is offered in the typical BIOS.

First for an embedded system is the ability to boot from cold to the application as fast as possible. In some cases an embedded system must be up and running in less than a second for critical applications. This requires the ability to utilize the smallest amount of code to execute the minimal operations required. Another requirement is the flexibility to handle anything from a small system to a large multiprocessor computing system. Flexibility also requires the ability to easily customize the firmware and have open source for most of the software. And if binary-only modules are used, you need the ability to locate the binary modules as required. The advantage of allowing the use of binary modules in open source software is that it enables chip manufacturers such as AMD and Intel to provide proprietary software for their advanced chips without having to release the source code. As we will see in the following sections, Coreboot provides a fast, flexible and cost-effective firmware solution for embedded systems.

Atomic Research Spawns the Coreboot Initiative

The birth of LinuxBIOS began in 1999 with a handful of researchers at Los Alamos Labs led by Ron Minnich. The objective was to improve computing performance through faster BIOS startup and better error handling in large computer clusters. "From that start, LinuxBIOS was renamed Coreboot in 2008 and migrated into commercial high-performance computing (HPC) and began capturing the attention of industry leaders such as AMD and Intel," stated Kerry Brown, VP/COO of Sage Electronic Engineering. Also, a number of manufacturers such as Gigabyte, Micro-Star International (MSI) and Acer are supporting Coreboot development on their motherboard and laptop designs. Recognizing Coreboot's advantages, Google is now on board as a project sponsor. "Several Google Summer of Code (GSOC) projects have been based on Coreboot development and enhancements," added Kerry.


As an open source project, the Coreboot community continues to grow, attracting developers from all parts of the globe. Members work for technology companies, conduct research at universities, and take part in government-funded programs. "With support from both AMD, as a source code provider, and Intel, providing the Firmware Support Package (FSP), access to low-level chip functions has also helped Coreboot become successful," commented Kerry.

Coreboot Features and Embedded Use

A simple definition of Coreboot is that it is a replacement for the traditional BIOS, but it is a boot loader and not a BIOS. The purpose of Coreboot is to initialize the hardware and then load a payload. Figure 1 shows the basic architecture of the Coreboot firmware. The payload is the module that decides what the hardware is going to be used for. A payload can be the end application, or as in most cases, it is a path to booting the final application.

FIGURE 1 Coreboot architecture including payloads. The user payload can be proprietary code that does not have to be released under the open source license. The Vendor Reference Code is provided by the processor vendor as either source or binary files.

The major features of Coreboot are:

• Smaller Binary Images: By generating smaller binary images, you'll be able to use less flash memory, or incorporate additional features with the available memory.

• Boot Flexibility: With Coreboot you can boot from NAND Flash and other nonstandard media. The majority of existing proprietary BIOS software does not support this capability.

• Customization: Coreboot enables you to customize your firmware even at the most basic level. Structurally, Coreboot is based around the requirements of the x86 PCI device tree and is designed to do minimal hardware initialization before passing control to a payload. Coreboot initialization contains no BIOS services and does not stay resident in memory.

• Runs in 32-Bit Protected Mode: In Coreboot, the boot block contains the jump instruction to the initialization code—the first instruction fetch—and an immediate change to 32-bit protected flat mode. After the switch to protected mode, the boot block does the minimal northbridge and southbridge setup required to jump to the RAMstage.

• Code Written in C: Moving away from assembly code toward a high-level language saves developers considerable time in coding, debugging and documenting.

• Loading Application Software: Coreboot supports multiple booting choices, including loading software without an OS, as with certain standalone programs (memory testers, games, etc.).

• Multiple Debug Features: Coreboot enables a number of functions including remote flash firmware, embedded software development via serial ports, systems administration of remote computers, and booting from a network.

Coreboot Payloads

The concept of payloads is the key feature of Coreboot. By using a payload, embedded developers can define the exact features they need and not have to include any features they do not require. This makes for an efficient and fast solution. Although the payload can be completely developed by the user, there are several existing payloads that can be used if desired. An example is the SeaBIOS payload, which provides normal BIOS calls so that standard OSs, such as Windows and Linux, can be booted. Another example is the iPXE payload, which provides for loading over the network. Or both payloads can be used in a Coreboot implementation.
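
For a sense of scale, a payload really can be this small. The fragment below is a hedged sketch of a trivial payload written against coreboot's libpayload support library; it assumes the libpayload build environment and linker setup, and the behavior shown (a console print, then spinning) is illustrative only.

    #include <libpayload.h>   /* console, timers and a small libc, from coreboot */

    /* Coreboot jumps here after RAM init; the payload owns the machine from now on. */
    int main(void)
    {
        printf("hello from a coreboot payload\n");

        /* A real payload would now scan PCI, load an OS, run diagnostics, etc. */
        for (;;)
            ;   /* nothing else to do: spin instead of returning to firmware */

        return 0;
    }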



The modularity of the Coreboot project provides segregation of your IP from the publicly available GPL code base. Since IP is delivered in a payload called from the Coreboot initialization firmware, it can be developed on a proprietary basis or leveraged from a code base that doesn't have a publication requirement. In other words, you can fully use Coreboot to your advantage and participate in the global Coreboot community without sharing your intellectual property.

Coreboot Software Development

Developing software for Coreboot requires the use of development tools such as GCC and GDB for debugging. All of the source code can be accessed via www.coreboot.org, and in addition to the source code, a full set of documentation is available to help speed the learning curve. As with any open source project, a company that wants to utilize the code has to be willing to contribute to the support of the code as well as keep track of the changes so that it can decide when to roll its internal code revisions. This can be a large task that can consume a lot of development time. An alternative is to work with a vendor that supports commercial use of Coreboot.

Sage Electronic Engineering is a company that supports Coreboot for commercial use. Sage can provide any level of Coreboot support. Their key products are SageBIOS, Sage EDK and SmartProbe. SageBIOS is an integrated version of Coreboot that can be configured to support any payload required. Sage EDK, shown in Figure 2, is an integrated development environment based on the Eclipse platform and open tools. When used with SageBIOS, Sage EDK provides a complete development package for Coreboot applications. For debugging, SmartProbe can be used with AMD-based platforms to control firmware updates and debugging.

FIGURE 2 SageEDK Eclipse Development Environment debug page. (courtesy Sage Electronic Engineering)

Both Intel and AMD provide code to support Coreboot applications on their respective chipsets. AMD provides source code via their AMD Generic Encapsulated Software Architecture (AGESA), and Intel provides binaries of their Firmware Support Package. "Both Intel and AMD are key supporters of the Coreboot project with the AGESA and FSP software packages. In the case of Intel's FSP, Sage is a source code licensee, so any changes required can be made," commented Kerry.

Coreboot has gained a lot of support in the past few years, and with the trend to consider open source solutions in embedded applications, replacement of the traditional BIOS with a royalty-free open source solution seems to make good sense. Also, just as Linux started gaining better acceptance with releases from RedHat and Ubuntu, support for Coreboot from companies like Sage Engineering should help it gain acceptance among embedded developers. Another plus is Google's use of Coreboot for their Chromebook laptops. Google's major objective is to give more support to Coreboot development not only by developing the code base, but also through testing and quality control, including developing a Coreboot version for the ARM processor. Coreboot meets all of the objectives of an embedded application, and with support provided by many companies and individual developers, it is a realistic alternative to a traditional commercial BIOS.

AMD, Sunnyvale, CA. (408) 749-4000. www.amd.com
Coreboot. www.coreboot.org

Intel, Santa Clara, CA. (408) 765-8080. www.intel.com
Sage Electronic Engineering, Longmont, CO. (303) 495-5499. www.se-eng.com



TECHNOLOGY DEVELOPMENT

The POSIX Heritage - History and Future

POSIX – 25 Years of Open Standard APIs

The POSIX API has a venerable history of allowing portability and compatibility among a wide variety of systems and applications. Its legacy is destined to continue well into the future.

by Arun Subbarao, LynuxWorks

The ability of an operating system to conform to established open standards application programming interfaces (APIs) is a key enabler for a critical mass of middleware and applications executing in its environment. It allows application portability among execution environments, thereby allowing developers the maximum flexibility in creating application software that can be migrated to newer environments with minimal effort. As the complexity of hardware and software continues to increase, the ability to preserve the software investment provides significant competitive leverage for both software vendors and OEMs alike. One of the best-known and most widely adopted API standards in the embedded and server infrastructure, which has withstood the test of time, is the IEEE POSIX standard.

POSIX: Early Origins

The POSIX API standard had its origins in the early UNIX environments, when the fragmentation of UNIX variants in the late 1980s resulted in the need to define a common API standard to ensure that application portability between different operating systems could be maintained. This resulted in the early specification of the POSIX standard. POSIX, an acronym for Portable Operating System Interface, is a family of related standards governed by the IEEE and maintained and evangelized by The Open Group. POSIX defines the application programming interface (API) for software compatibility with the different flavors of operating systems. First released 25 years ago, the POSIX standard defines the specifications for the characteristics of operating systems, database management systems, data interchange, programming interface, networking and user interface. POSIX enables developers to write their applications for one target environment so they can subsequently be ported to run on a variety of operating systems that support the POSIX APIs—a term commonly known in the industry as "source code compatibility."

POSIX Evolution

Since its modest beginnings in standardizing Unix APIs, the IEEE POSIX standard has now emerged as the most prevalent and widely regarded broad-based API standard for operating systems. It has extended its reach into various segments of the market, such as server infrastructure, military, avionics, general purpose computing, scientific computing and more. The POSIX standards have continued to evolve into the 21st century with significant revisions. One significant evolution of the standard happened in 2004, when the POSIX standards underwent a significant expansion and unification to evolve into a newer standard, IEEE 1003.1-2004. The IEEE 1003.1-2004 standard provided an extensive set of APIs encompassing applications in scientific, real-time and enterprise computing (Figure 1).

At the same time, the IEEE POSIX standard also recognized the specialized needs of embedded operating systems and defined the IEEE POSIX 1003.13 standard, which defines four different profiles that correspond to variants of embedded designs that are prevalent in the industry. The IEEE 1003.13-2003 (POSIX.13) standard for real-time profiles and applications specifically targets embedded applications. This standard defines four real-time POSIX profiles:

• PSE51: Minimal
• PSE52: Controller
• PSE53: Dedicated
• PSE54: Multi-purpose

These four profiles, shown in Figure 1, specify increasing levels of complexity and functionality to satisfy the full spectrum of real-time applications that can be designed using POSIX. The standard also defines a strict API compatibility requirement: each higher POSIX profile must be a superset of the lower profiles. This guarantees that POSIX applications written to the minimal profile (PSE51) will run on a multi-purpose profile (PSE54) on compatible operating systems. These profiles, PSE51 through PSE54, allow the flexibility needed for scaling from deeply embedded applications to high-end workstation applications. The POSIX IEEE 1003.1 standard has continued to evolve with newer revisions in 2008 and 2013 (Figure 2).
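
Source code compatibility is easy to demonstrate. The short C program below restricts itself to facilities available even in the minimal PSE51 profile (threads and semaphores, with no processes or filesystem required), so the same source should build unchanged on any conforming system; treat it as an illustrative sketch rather than a conformance test.

    /* build (Linux example): cc posix_demo.c -lpthread */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t ready;          /* unnamed semaphore: available in PSE51 */
    static int shared_value;

    static void *producer(void *arg)
    {
        (void)arg;
        shared_value = 42;       /* publish a result ...            */
        sem_post(&ready);        /* ... then wake the waiting thread */
        return NULL;
    }

    int main(void)
    {
        pthread_t tid;

        sem_init(&ready, 0, 0);                      /* initial count 0  */
        pthread_create(&tid, NULL, producer, NULL);  /* POSIX threads    */
        sem_wait(&ready);                            /* block until post */
        printf("got %d\n", shared_value);
        pthread_join(tid, NULL);
        sem_destroy(&ready);
        return 0;
    }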



FIGURE 1 The IEEE 1003.13-2003 (POSIX.13) Profiles for Real-Time Applications. (Figure: the four profiles drawn as nested layers within POSIX.1 (IEEE 1003.1-2001), from Minimal (PSE51) at the core through Controller (PSE52), Dedicated (PSE53) and Multi-purpose (PSE54), with each layer adding facilities such as simple and full file systems, message queues, asynchronous I/O, networking, multi-process and multiple-user support, shell and utilities, wide characters and tracing.)

POSIX Conformance and Compliance

The Open Group is an independent third-party organization that has defined and certifies various implementations of POSIX conformance. The availability of such an independent testing body is an important part of the validation required to certify conforming implementations of operating systems. It allows for a vendor-neutral assessment of the POSIX compatibility of an operating system and lets end users make an informed decision that best suits their application. The evaluation and selection of an operating system that supports POSIX standards is a key decision that determines the level of reuse and portability that can be designed into the system. POSIX "conformance" and "compliance" are two terms that vendors have used somewhat interchangeably to describe their POSIX compatibility. However, the difference between the two is significant. POSIX "conformance" indicates adherence to the standard without any deviation; a conforming implementation offers the highest level of API compatibility with the specification. POSIX "compliance," however, indicates a much weaker adherence to the standard: an implementation claiming POSIX "compliance" merely needs to disclose which APIs it supports and which it does not. A still higher level of assurance exists when an OS's conformance is approved by an accredited, independent certification organization. To be certified as conformant with a POSIX standard, the implementation must undergo independent certification by a third party (such as The Open Group) and obtain a POSIX conformance certification. The presence of this certification guarantees to the user complete adherence to the POSIX standard by the operating system.
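In practice, an application can at least probe the POSIX level an operating system claims to support through the standard feature-test interfaces, as in the sketch below (our own example). Note that this reports only what the vendor's headers and sysconf() declare, not what an independent certification body has verified.

/* Sketch: query the POSIX revision an OS advertises, at compile time
 * via <unistd.h> macros and at run time via sysconf(). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long runtime = sysconf(_SC_VERSION);   /* e.g., 200809L for POSIX.1-2008 */

#ifdef _POSIX_VERSION
    printf("compile-time _POSIX_VERSION: %ld\n", (long)_POSIX_VERSION);
#endif
    printf("runtime _SC_VERSION: %ld\n", runtime);

#ifdef _POSIX_THREADS
    printf("threads option advertised\n");
#endif
    return 0;
}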

Strong Industry Support for POSIX

While the benefits of POSIX outlined above show its relevance and importance in embedded environments, it is not an embedded-centric standard. POSIX plays a role in many leading technologies, and a look at how broad POSIX support is among operating systems, both embedded and enterprise, shows a standard that has seen adoption and usage across many industries. Many UNIX, Linux and UNIX-like operating systems not only conform or comply with the POSIX standards but also have POSIX as their native API. Examples include IBM AIX, HP-UX, BSD UNIX, Linux, Oracle Solaris, and the LynxOS and QNX RTOSs. Other operating systems use a POSIX API layer that translates from POSIX to the native proprietary interface of the operating system. Although this adds a slight inefficiency compared to a native API, it is how many RTOSs achieve POSIX compatibility; examples include VxWorks, Nucleus OS, eCos and Symbian OS. Even Windows can be given a POSIX-style compatibility interface through Cygwin, which is used to run applications on Windows that were originally built for Linux or UNIX. This broad support helps the embedded developer, especially as the lines blur between embedded and enterprise applications, because many software applications originally built for general-purpose operating systems can easily be migrated to a POSIX-based RTOS. This reduces the amount of software creation, reduces porting time, and ultimately reduces the time-to-market and cost of new embedded products.

POSIX and Emerging Technologies

The dynamics of the software industry continue to evolve with the emergence of several disruptive technology trends that will shape the evolution of the software industry at large and embedded systems in particular. However, the relevance of the POSIX standards has not been diminished by these paradigm shifts, two of which are discussed here. Future Airborne Capability Environment (FACE): The FACE Consortium is hosted and managed by The Open Group and provides a vendor-neutral forum for industry and the U.S. government to work together to develop and consolidate open standards, best practices, guidance documents and business models. The FACE Technical Standard defines the framework for creating a common operating environment to support applications across multiple Department of Defense avionics systems.




25 Years of LynxOS

LynuxWorks (formerly Lynx Real-Time Systems) is also celebrating a 25th anniversary, both for the company and for its POSIX operating system, LynxOS. Since its founding, LynuxWorks has been a strong supporter of open standards. The company was among the earliest supporters of POSIX, is a member of The Open Group, and is an active participant in the work to keep the standard current.

The LynxOS operating system was first released 25 years ago and was designed to offer embedded real-time developers the same features that were available to UNIX programmers in the computer world, but with real-time performance and determinism. The POSIX standard, especially the POSIX.1b and POSIX.1c extensions, provided a very natural fit as the native API for LynxOS, and provides good compatibility and portability with both UNIX and Linux applications. This enables developers to build complex systems using LynxOS and still meet strict real-time requirements that are not always achievable with UNIX or Linux.

Although LynxOS has an open-standard POSIX API, it is still a proprietary RTOS, and hence not encumbered with open source licensing restrictions, and it maintains a very well controlled code base. This proprietary code base is also much smaller than traditional UNIX and Linux systems, and has allowed LynuxWorks to create derivative versions of LynxOS to support specific market needs. The LynxOS-178 product is designed for safety-critical avionics systems and has been certified in systems to the highest FAA levels. LynxOS-178 still maintains the POSIX API, but adds a safety partitioning scheme. This POSIX API has been very useful in allowing LynxOS-178 to meet the FACE standard now being adopted in military avionics systems, which is based on POSIX and maintained by The Open Group.

LynxOS is celebrating its 25th birthday with a new version, LynxOS 7.0. This version brings in new security and communication features that are seen as essential for enabling embedded developers to build the latest devices contributing to the Internet of Things (IoT).


FIGURE 2 25 Years of POSIX Evolution. (Figure: a timeline from 1988 to 2013 tracing POSIX.1 Core Services (1988, incorporating ANSI C), POSIX.2 Shell and Utilities (1992), the POSIX.1b real-time extensions (1993), the POSIX.1c threads extensions (1995), the POSIX.13 real-time profiles, POSIX.1-2001/Single UNIX Specification (2001), Technical Corrigendum 1 (2003-2004), POSIX.1-2008/Open Group Base Specification Version 7 (2008), and two technical corrigenda (2013).)

The standard is designed to enhance the U.S. military aviation community's ability to address issues of limited software reuse and to accelerate and enhance warfighter capabilities, as well as to enable the community to take advantage of new technologies more rapidly and affordably. The current FACE APIs are heavily based on the existing POSIX standard and define several profiles such as the Security Profile, the Safety Profile (Basic and Extended) and the General Purpose Profile. It is a testament to the longevity and relevance of the POSIX APIs that this consortium, which was initiated in 2010, relies so heavily on the POSIX standards. Internet of Things: Another emerging technology trend is the Internet of Things (IoT), in which billions of devices are expected to connect via the network to communicate with Cloud infrastructures as well as with each other. This marks a key inflection point in the embedded industry and its convergence with mainstream enterprise computing. As these embedded devices connect to the network, the POSIX IEEE 1003.13 standard becomes particularly relevant, and the PSE53 profile may become the de facto standard for connected devices since it combines a small footprint with network connectivity, two essential elements for devices that need to connect to the Cloud.
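The sketch below (our own illustration; the send_reading() helper, endpoint name, port and payload are all hypothetical) shows the kind of small-footprint connectivity the PSE53 profile standardizes: plain POSIX sockets are enough to push a device reading toward a cloud service, with no vendor-specific networking API involved.

/* Sketch of PSE53-class connectivity: a sensor payload pushed to a
 * cloud endpoint using only POSIX socket interfaces. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netdb.h>
#include <sys/socket.h>

int send_reading(const char *payload)
{
    struct addrinfo hints, *res;
    int fd, rc = -1;

    memset(&hints, 0, sizeof(hints));
    hints.ai_family   = AF_UNSPEC;       /* IPv4 or IPv6 */
    hints.ai_socktype = SOCK_STREAM;

    /* "cloud.example.com" and port "8883" are placeholders, not a real service */
    if (getaddrinfo("cloud.example.com", "8883", &hints, &res) != 0)
        return -1;

    fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0)
        rc = (int)write(fd, payload, strlen(payload));

    if (fd >= 0)
        close(fd);
    freeaddrinfo(res);
    return rc;                           /* bytes written, or -1 on failure */
}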

POSIX for the Next 25 Years

One need look no further than the POSIX standard for a definitive API specification that has stood the test of time and has the mechanisms needed to unify disparate execution environments and application requirements. Such unification is essential for achieving broad adoption of legacy and emerging technologies, and for preserving a critical mass of applications that subsequently helps create a network effect for other applications. As the industry continues to experience technology shifts driven by processor advances, virtualization, security, Cloud computing and mobility, we can look to the POSIX standard to bridge the gap between legacy environments and emerging applications, and to provide a unified application programming environment that adds compelling value to the technology industry.

LynuxWorks, San Jose, CA. (408) 979-3900. www.lynuxworks.com


SAN FRANCISCO, CA • JUNE 1 - 5, 2014 • DAC.COM

DESIGN AUTOMATION CONFERENCE

REGISTRATION OPENS MARCH 27

Where IC Design and the EDA ecosystem learns, networks, and conducts business

DAC DELIVERS:

• A world-class technical program on EDA, Automotive Systems and Software, Security, IP, and Embedded Systems and Software
• Designer Track presentations by and for users
• Colocated conferences, tutorials and workshops
• Over 175 exhibitors, including The New Automotive Village and The ARM Connected Community (CC) Pavilion
• Daily networking events
• Thursday Training Day

NEW TRACKS FOR 2014:

AUTOMOTIVE SYSTEMS & SOFTWARE - IP - SECURITY

ANNOUNCING 2014 KEYNOTES

James Buczkowski, Henry Ford Technical Fellow, Ford Motor Company
Ernie Brickell, Chief Security Architect, Intel Corp.
Dr. Cliff Hou, Vice President, Research and Development, TSMC
Raj Talluri, Senior Vice President of Product Management, Qualcomm
Jim Tung, MathWorks Fellow, MathWorks, Inc.
Sir Hossein Yassaie, CEO and President, Imagination Technologies

WWW.DAC.COM


#51DAC


PRODUCTS & TECHNOLOGY

6U VPX Board Features 4th Generation Intel Core Processor

A new 6U VPX processor board is based on the fourth-generation Intel Core processor family (previously codenamed "Haswell"). The VR E1x/msd from Concurrent Technologies features either the quad-core Intel Core i7-4700EQ processor or the dual-core Intel Core i5-4400E processor, together with the associated mobile Intel QM87 Express chipset. With up to 32 Gbytes of DRAM and a rich assortment of I/O interfaces, this board is an ideal processor board for 6U VPX solutions requiring the latest in processing performance. 6U VPX is particularly well suited to high-end compute-intensive applications including command and control, surveillance, radar and image processing systems. The 4th-generation Intel Core processor family is based on 22nm process technology and provides enhanced CPU and graphics performance over previous generations at TDP levels up to 47W. New instructions are also introduced, including Intel Advanced Vector Extensions 2.0 (Intel AVX2), which provide a performance improvement in integer and floating-point-intensive computations, particularly appropriate for image processing applications, and the Intel AES New Instructions (Intel AES-NI) enhancements, which accelerate data encryption and decryption in hardware. The VR E1x/msd is a 6U VPX processor board featuring this latest quad-core or dual-core processor and supporting chipset, with up to 32 Gbytes of DDR3L DRAM with ECC. Additional features include four SATA 600 mass storage interfaces including an onboard SATA 600 HDD/SSD site, an onboard CompactFlash site, serial, USB, GPIO and GPI interfaces, Gigabit Ethernet ports, and graphics and stereo audio interfaces. The wide range of I/O interfaces can be further expanded by the addition of one or two XMC/PMC modules. The board supports a configurable control plane fabric interface (VITA 46.6) and a flexible PCI Express (PCIe) data plane fabric interface (VITA 46.4) supporting up to Gen 3 data rates, and is compatible with several OpenVPX profiles. The VR E1x/msd is I/O-compatible with the previous-generation VR 737/x8x family and can be used alongside the VR XMC/x01, a 6U VPX dual XMC/PMC carrier and mass storage board. Initially the boards are available as commercial and extended-temperature variants; ruggedized variants will follow in the near future. To ease integration, many of today's leading operating systems including Windows, Linux and VxWorks are supported. Systems using multiple processing boards will benefit from the optional Fabric Interconnect Networking Software (FIN-S), which provides a high-performance, low-latency communications mechanism for multiple host boards to intercommunicate across the high-speed fabric interface. Concurrent Technologies, Woburn, MA. (781) 933-5900. www.gocct.com.

Compact Network Platform Uses AMD Embedded G-Series SoC

WIN Enterprises has announced a new desktop platform designed to support a range of applications requiring compact size and versatile performance. The PL-80520 from WIN Enterprises is powered by an onboard 1.6 GHz AMD Embedded G-Series SoC, an embedded component guaranteed for long product life. The platform features a high-bandwidth DDR3 DIMM slot that supports up to 8 Gbytes of memory. Storage interfaces include a 2.5" SATA HDD and CompactFlash. The PL-80520 is equipped with four copper GbE ports with bypass function, a USB 2.0 port, an RJ45 console port, a mini-PCIe socket (PCIe x1 and USB) and 11 LED indicators for monitoring power and storage activity for system management, maintenance and diagnostics. Housed in an attractive black-metal chassis, the unit is designed as a wireline or (optionally) wireless networking or network security device. Expansion capabilities include a full-size Mini-PCIe with USB and a half-size Mini-PCIe with USB and PCIe/SATA signaling. Additional interfaces include 1 x RS-232/422/485 plus 2 x RS-232, 2 x USB 2.0 and 1 x USB 3.0, and Line-out and Mic-in audio ports. Optional Wi-Fi support is also available. WIN Enterprises, North Andover, MA. (978) 688-2000. www.win-enterprises.com.



VITA 59 RCE Rugged COM Express for Harsh Environments

The new VITA 59 standard enables the proven COM Express technology to be used in mission-critical and harsh environments. New mechanical parameters guarantee operation across an extended temperature range, while providing high shock and vibration resistance as well as EMC protection. As a new VITA standard, Rugged COM Express is based on the well-known and widespread PICMG standard COM.0, or COM Express. Rugged COM Express, or VITA 59 RCE, has been developed for mission-critical applications that place higher requirements on thermal design, shock and vibration, environmental influences and EMC protection than PICMG COM.0 can satisfy. Rugged COM Express adds PCB wings for mounting the electronics inside a conduction-cooled aluminum (CCA) frame. When combined with passive cooling, the CCA technology enables electronics to work in high temperature ranges without the need for high-maintenance fans. One railway application that benefits from the easy extension of the temperature range is a locomotive drive control that requires full EN 50155 compliance and an extended temperature range from -40° to +125°C with passive cooling. The sturdy metal frame and firmly secured electronics inside the unit deliver high resistance against shock and vibration, an essential advantage in the harsh mining environment: the electronics within a mining machine's IP67-compliant control platform have to withstand extremely high vibrations of up to 5G and must be shock-proof up to 50G, conditions VITA 59 RCE is well suited for. Medical systems often need to meet very stringent EMC limits. Thanks to the metal cover on top and on all four sides, as well as the bottom cover formed by the carrier board, the Rugged COM Express standard provides 100% EMC protection. Systems with dual-redundant CPUs, for example one for processing and one for HMI control, can communicate undisturbed using Rugged COM Express. Another advantage of the cover frame is that, in combination with conformal coating, it forms a sealed enclosure, preventing the intrusion of environmental elements such as dust, chemicals and humidity. Finally, many applications, especially in the railway and avionics markets, require long-term availability of up to 30 years. This is ensured by the EOL management at MEN Micro, which gives users reliable planning options and saves costs through longer system lifetimes. MEN Microsystems, Blue Bell, PA. (215) 542-9575. www.menmicro.com.

FMC/VPX Carrier Equipped with Optical Backplane Interface

A new line of FPGA Mezzanine Card (FMC) carriers and FMC modules from Pentek will include an optical backplane interface. The Model 5973 3U VPX FMC carrier is the first member of the Flexor family with a Virtex-7 FPGA, complemented by the Model 3312 multichannel, high-speed data converter FMC. Together they combine the high performance of the Virtex-7 with the flexibility of the FMC data converter, creating a complete radar and software radio solution. The Flexor Model 5973 features a high-pin-count VITA 57.1 FMC site, 4 Gbytes of DDR3 SDRAM, a PCI Express (Gen 1, 2 and 3) interface up to x8, optional user-configurable gigabit serial I/O and optional LVDS connections to the FPGA for custom I/O. The Model 5973 delivers new levels of I/O performance by incorporating the emerging VITA 66.4 standard for half-size MT optical interconnect, providing 12 optical duplex lanes to the backplane. With the installation of a serial protocol, the VITA 66.4 interface enables gigabit backplane communications between boards independent of the PCIe interface. The Flexor Model 3312 FMC surpasses the speed and density of previous products with four 250 MHz 16-bit A/Ds and two 800 MHz 16-bit D/As. Its high-pin-count FMC connector matches the new Virtex-7 FPGA carrier, boosting performance levels and adding flexibility. The Flexor Model 5973 comes preconfigured with a suite of built-in functions for data capture, synchronization, time tagging and formatting, all tailored and optimized for specific FMC modules such as the Flexor Model 3312. Together, they provide an attractive turnkey signal interface for radar, communications or general data acquisition applications, eliminating the integration effort typically left to the user when combining FMC and carrier. The Pentek GateXpress PCIe Configuration Manager supports dynamic FPGA reconfiguration through software commands as part of the runtime application. This provides an efficient way to quickly reload the FPGA, which occurs many times during development. For deployed environments, GateXpress enables reloading the FPGA without the need to reset the host system, ideal for applications that require dynamic access to multiple processing IP algorithms. The Pentek ReadyFlow Board Support Package is available for Windows and Linux operating systems. ReadyFlow is provided as a C-callable library; the complete suite of initialization, control and status functions, as well as a rich set of precompiled, ready-to-run examples, accelerates application development. The Flexor Model 3312 FMC and Flexor Model 5973 VPX carriers are designed for air-cooled, conduction-cooled and rugged operating environments. The Model 3312 starts at $2,495. The Model 5973 with 4 Gbytes of memory starts at $14,995. Delivery is 8-10 weeks ARO for all models.
Pentek, Upper Saddle River, NJ. (201) 818-5900. www.pentek.com.

FIND the products featured in this section and more at

www.intelligentsystemssource.com




Rugged PCI/104-Express SBC with Intel N2800 Offers Rich I/O

A rugged PCI/104-Express single board computer (SBC) is based on Intel's dual-core Cedar Trail N2800 CPU. The Atlas from Diamond Systems runs at 1.86 GHz, and its dual-core Hyper-Threading technology enables applications to run in parallel for exceptionally efficient processing. The Atlas SBC combines Intel Atom CPU performance, a wealth of onboard I/O and a conduction-cooled thermal solution at a competitive price. Its rugged design makes it exceptionally reliable in harsh applications including industrial, on-vehicle and military environments. Available I/O includes USB 2.0, RS-232/422/485, Gigabit Ethernet, SATA and digital I/O. Atlas supports I/O expansion with PCI-104, PCIe/104, PCI/104-Express and PCIe MiniCard I/O modules. Atlas uses a new miniature, cost-effective, high-speed expansion connector that supports most PCIe/104 I/O modules. This design helps keep the cost of Atlas low while increasing the PCB area available for other I/O features. Thanks to a dual-use PCIe MiniCard/mSATA socket, the board can accommodate newer I/O modules in the PCIe MiniCard form factor featuring Wi-Fi, Ethernet, analog I/O, digital I/O and CAN. These modules provide compact expandability without increasing the total height of the system. For rugged applications, mSATA disk modules up to 64 Gbytes are available in SLC and MLC technologies and with wide-temperature operation. Atlas SBCs run Linux, Windows Embedded Standard 7 and Windows Embedded CE operating systems. A Linux software development kit is available with bootable images and drivers enabling engineers to start a design project right out of the box. The Atlas SBC was specifically designed for rugged applications, from an operating temperature of -40° to +75°C and onboard DDR3 SDRAM to an integrated conduction-cooling heat spreader and a high tolerance for shock and vibration. Two models are available, one with 4 Gbytes and one with 2 Gbytes of memory. Single-unit pricing starts at $645. Diamond Systems, Mountain View, CA. (650) 810-2500. www.diamondsystems.com.

2.5A Monolithic Active Cell Balancer with Telemetry Interface

A monolithic flyback DC/DC converter is designed to actively balance high-voltage stacks of batteries. These battery stacks are commonly found in electric and hybrid vehicles as well as in fail-safe power supplies and energy storage systems. Because these batteries are stacked in series, the lowest-capacity battery limits the run-time of the entire stack. Ideally, the batteries would be perfectly matched, but this is often not the case and generally gets worse as the batteries age. Passive balancing offers no improvement in run-time, as it dissipates the excess energy of the higher-capacity batteries to match the lowest one. Conversely, the LT8584 from Linear Technology offers high-efficiency active balancing, which redistributes charge from the stronger (higher-voltage) cells to the weaker cells during discharge. This enables the weaker cells to continue to supply the load, extracting 96% of the entire stack capacity, where passive balancing typically extracts approximately 80%. The LT8584 includes an integrated 6A/50V power switch, enabling an average discharge current of 2.5A while offering a simple and compact application circuit. Its isolated balancing design can return charge to the top of the battery stack, to any combination of cells in the stack, or even to a 12V battery used as an alternator replacement. The LT8584 runs off the cell that it is discharging, removing the need for complicated biasing schemes. It integrates seamlessly via the enable pin with the LTC680x family of battery stack voltage monitoring ICs without any additional software. The LT8584 also provides system telemetry, including current, resistance and temperature monitoring, when used with the LTC680x family of parts. When the LT8584 is disabled, it draws less than 20nA of quiescent current from the battery. For applications that require higher balancing current, multiple LT8584s can be paralleled. It is both FMEA and ISO 26262 compliant. The LT8584EFE is packaged in a 16-lead TSSOP and is priced starting at $2.95 each. Linear Technology, Milpitas, CA. (408) 432-1900. www.linear.com.




sensors expo & conference

www.sensorsexpo.com

June 24-26, 2014

Donald E. Stephens Convention Center • Rosemont, IL

SPECIAL Subscriber Discount!

Sensing Technologies Driving Tomorrow’s Solutions

Register with code A318C for $50 off Gold and Main Conference Passes.*

What's Happening in 2014:

Tracks:
• Chemical & Gas Sensing
• Energy Harvesting
• Internet of Things
• M2M
• MEMS
• Measurement & Detection
• Power Management
• Sensors @ Work
• Wireless

Plus:
• Full-day Pre-Conference Symposia
• Technology Pavilions on the Expo Floor: Internet of Things, Energy Harvesting, MEMS, Wireless, High Performance Computing
• New co-location with the High Performance Computing Conference
• Best of Sensors Expo 2014 Awards Ceremony
• Networking Breakfasts
• Welcome Reception
• Sensors Magazine Live Theater
• And More!

Featuring Visionary Keynotes:

• Reimagining Building Sensing and Control: Luigi Gentile Polese, Senior Engineer, Department of Energy, National Renewable Energy Lab
• Sensors, The Heart of Informatics: Henry M. Bzeih, Head of Infotainment & Telematics, Kia Motors America

Innovative Applications. Expert Instructors. Authoritative Content. Tomorrow’s Solutions. Register today to attend one of the world’s largest and most important gatherings of engineers and scientists involved in the development and deployment of sensor systems.

Registration is open for Sensors 2014! Sign up today for the best rates at www.sensorsexpo.com or call 800-496-9877.


#sensors14


*Discount is off currently published rates. Cannot be combined with other offers or applied to previous registrations.


PRODUCTS & TECHNOLOGY

mini-ITX Industrial Mainboard For 24/7 Continuous Service

A new mini-ITX mainboard is specifically designed for 24/7 continuous service. The D3243-S from Fujitsu is based on the Intel Q87 Express chipset and supports DDR3 1333/1600 SDRAM memory as well as the complete range of 4th-generation Intel Core i3/i5/i7 processors with the LGA1150 socket. The mini-ITX mainboard is made from particularly rugged, long-lived components and is designed for industrial embedded applications with operating temperatures between 0° and 60°C. It meets industrial standards concerning CE (EMC and safety), burst, climate, shock, vibration and more. The D3243-S comes with Intel HD Graphics (for example, HD 4600) integrated into the processor. In terms of graphics display, the compact mini-ITX board supports DVI-I, dual DisplayPort V1.2 and dual-channel 24-bit LVDS for up to three independent displays. Further features integrated onboard include PCI Express x16 Gen3 and Mini-PCI Express, 8-bit GPIO and multi-channel audio, an mSATA socket (SATA III) for the embedded operating system, and six USB 2.0 and two USB 3.0 sockets. Furthermore, the D3243-S comes with two sockets for Intel GbE LAN, which also enable teaming of several network cards. The LGA1150 socket offers a high degree of scalability, higher performance and lower cost, as well as a lower level of capital commitment, which also reduces inventory risk. Also integrated onboard is the Infineon Trusted Platform Module (TPM) V1.2, which enables extensive protection of data and licenses. The D3243-S mainboard also boasts further safeguards against unauthorized access to data, namely password protection of the BIOS and hard disks, as well as the EraseDisk BIOS function, which enables secure erasure of the hard disk. In addition, the Recovery BIOS function makes it possible to repair malfunctioning firmware.

Fujitsu, Tokyo, Japan. +81-3-6252-2220. www.fujitsu.com.

40GbE Dual-Port Fiber QSFP+ Network Adapter Boosts Network Speed

A next-generation 40GbE network adapter supports dual fiber QSFP+ ports. The NIP-86020 from American Portwell Technology leverages the Mellanox ConnectX-3 Ethernet controller and is designed with 40G Ethernet technologies fully compliant with the IEEE 802.3ba standard. It provides IPv6 offloading, IEEE 1588 precision time protocol circuitry for synchronization performance, RDMA over Converged Ethernet (RoCE) and Jumbo Frame functions. The new NIP-86020 delivers high bandwidth and industry-leading Ethernet connectivity for performance-driven server and storage-intensive applications in enterprise data centers and high-performance computing, as well as in a variety of embedded environments. Portwell's NIP-86020 also supports virtual machine software from VMware, Microsoft, Citrix, Oracle and others through virtualization acceleration technology. The NIP-86020 40GbE dual-port fiber QSFP+ network interface card is designed for scalability, reliability, simplicity and affordability, and is built to deliver outstanding bandwidth for next-generation Ethernet traffic in high-end appliances. American Portwell Technology, Fremont, CA. (510) 403-3399. www.portwell.com.

Industrial Server-Grade System with Refreshed Xeon E5-2600 v2

A new 4U server-grade industrial system is based on the Intel Xeon processor E5-2600 product family and delivers a scalable high-performance platform for a wide array of industrial applications. The TRL-40 from Adlink Technology features increased computing power with intelligent manageability via IPMI v2.0 and dedicated PCIe Gen 3 interfaces for up to three PCIe x16 VGA cards, making it an optimal solution for automated optical inspection (AOI), digital surveillance, video wall and medical imaging applications. The TRL-40 provides increased performance with the latest Intel Xeon processor E5-2600 v2 for peak workloads, significantly improving performance for applications that rely on floating-point or vector computations, coupled with dual-channel ECC registered DDR3 1600 MHz memory supporting up to 128 Gbytes in eight DIMM slots. Adlink's TRL-40 implements a user-friendly web interface through an integrated web server and web-based KVM, enabling automatic video recording based on event triggers. Administrators can easily monitor the system remotely, decreasing maintenance costs through media redirection and out-of-band power management. Featuring multiple I/O expansion options, including 4x PCIe x16 Gen3, 1x PCIe x8 Gen3 and 1x PCIe x4 Gen2, the TRL-40 delivers dedicated PCIe Gen3 bandwidth for image data processing, reducing I/O latency by up to 30% and as much as doubling the bandwidth of previous generations. In addition, the TRL-40 is compatible with Adlink's off-the-shelf frame grabbers, making it ideal for high-end machine vision and video streaming solutions. To ensure storage utilization and data security, the TRL-40 also provides a hardware RAID solution for up to four SATA III drives, as well as one mini PCIe form factor expansion slot and bundling with Adlink's industrial modules. ADLINK Technology, San Jose, CA. (408) 360-0200. www.adlinktech.com.



USB 2.0 Digital Signal Analyzer Includes New Time-Frequency Analysis

A new value-added Visual Signal DAQ Express time-frequency analysis (TFA) application is now included with the Adlink Technology USB-2405, a 24-bit USB 2.0 dynamic signal acquisition (DSA) module for integrated electronic piezoelectric (IEPE) accelerometer and microphone-based vibration measurement. Inclusion of the application provides a more complete solution and improves the user experience in machinery vibration analysis environments. Visual Signal DAQ Express is an easy-to-use application with powerful functionality and an interactive user interface that simplifies acquisition and analysis of noise and vibration signals for instant results. Combining high accuracy, superior performance and value-added TFA software, the USB-2405 is a strong choice for portable time-frequency spectrum analysis for machine diagnostics and failure prevention, research, and portable field measurement. Visual Signal DAQ Express was developed by AnCAD Technology, Adlink's software alliance partner. It features graphical, ready-to-use functional modules for quick setup of the USB-2405 DSA, data acquisition and post-processing, frequency-domain conversion, digital filtering, time-frequency analysis, data logging and exporting. With its focus on TFA, the combined USB-2405 and Visual Signal DAQ Express package is a valuable tool for analyzing machinery vibration. Users can add modules as needed to a user-defined project to get visual analysis results instantly without any programming, which can minimize the development time of a new project. The Adlink USB-2405 with Visual Signal DAQ Express provides analysis functions similar to well-known sound and vibration analysis applications, conserving development resources. The installation USB flash drive for Visual Signal DAQ Express is included with the shipment-ready Adlink USB-2405 at no extra cost. Users need only follow the instructions in the Quick Start Guide to register on the website and activate Visual Signal DAQ Express. ADLINK Technology, San Jose, CA. (408) 360-0200. www.adlinktech.com.

Fanless Embedded Box PCs with High Expansion, High Performance

A series of fanless embedded box PCs features high performance and rich expansion. The ARK-3500 and ARK-3510 from Advantech are built on the third-generation Intel mobile QM77 platform and support up to a quad-core Core i7 processor. The ARK-3500 series boasts versatile expansion, with 2 PCI, PCIe x1, PCIe x4, MIOe module and 2 MiniPCIe options to serve diverse applications. Storage options include two hard drives or SSDs, two mSATA sockets and CFast, and there is also optional wireless communication support for Wi-Fi, 3G and GPS. As for rugged design, the ARK-35 series supports a wide-range power input of 9~34V/12 VDC and a wide operating temperature range from -10° to +60°C with SSD. The new series carries complete EMC and safety certifications (CE/FCC/UL/CCC/CB/BSMI). The ARK-3500 provides dual expansion slots with 2 PCI or PCIe x1 + PCIe x4 interfaces. It is readily compatible with isolated AIO/DIO CAN cards and motion control cards for factory automation, Camera Link cards for machine vision, and video capture cards for surveillance. The ARK-3510 features high flexibility with its optional MIOe module support for extended I/O. It is offered in six different SKUs for different applications; adding an MIOe-220 module, for example, brings the total to five Gigabit LAN ports, which can serve a data backup and transfer station. Both the ARK-3500 and ARK-3510 can support two more MiniPCIe interfaces with two SIM holders, and can support Wi-Fi, 3G, LTE 4G and GPS modules for wireless connectivity. Advantech's SUSIAccess software provides a smart, easy, remote management API so users can monitor, configure and control a large number of terminals with centralized, real-time maintenance capability. This allows customers to focus on their applications while SUSIAccess helps manage administration. The ARK-3500 and ARK-3510 series are also equipped with McAfee for enhanced security and Acronis for backup and recovery; these are official licenses that protect devices from threats. The ARK-3500 and ARK-3510 support iManager firmware technology, an intelligent, cross-platform, self-management tool that monitors system status and takes automatic action if anything is abnormal. iManager provides multi-level programmable watchdogs including IRQ interrupt, ACPI events and reset levels, and can also monitor voltage and temperature to ensure system reliability. Advantech, Irvine, CA. (949) 420-2500. www.advantech.com.



Advertiser Index

GET CONNECTED WITH INTELLIGENT SYSTEMS SOURCE AND PURCHASABLE SOLUTIONS NOW

Intelligent Systems Source is a new resource that gives you the power to compare, review and even purchase embedded computing products intelligently. To help you research SBCs, SOMs, COMs, systems or I/O boards, the Intelligent Systems Source website provides products, articles and whitepapers from industry-leading manufacturers, and it is even connected to the top five distributors. Go to Intelligent Systems Source now to locate, compare and purchase the right product for your needs.

www.intelligentsystemssource.com

Company | Page | Website
Advanced Micro Devices, Inc. | 44 | www.amd.com/embedded
Commell | 29 | www.commell.com.tw
Congatec, Inc. | 4 | www.congatec.us
Dolphin Interconnect Solutions | 43 | www.dolphinics.com
Design Automation Conference | 35 | www.dac.com
Grey Matter Consulting and Sales | 19 | www.greymatter-cs.com
MSC Embedded, Inc. | 4 | www.mscembedded.com
One Stop Systems, Inc. | 23, 27 | www.onestopsystems.com
Pentair/Schroff | 18 | www.schroff.biz/interscalem/
Portwell | 9 | www.portwell.com
Real Time Embedded Computing Conference | 42 | www.rtecc.com
Sensors Expo & Conference | 39 | www.sensorsexpo.com
Trenton Systems | 2 | www.trentonsystems.com
TQ Systems GmbH | 26, 31 | www.convergencepromotions.com/TQ-USA

RTC (Issn#1092-1524) magazine is published monthly at 905 Calle Amanecer, Ste. 250, San Clemente, CA 92673. Periodical postage paid at San Clemente and at additional mailing offices. POSTMASTER: Send address changes to RTC, 905 Calle Amanecer, Ste. 250, San Clemente, CA 92673.

The Event for Embedded & High-Tech Technology

2014 Real-Time & Embedded Computing Conferences

Dallas, TX March 18

Rosemont, IL - Sensors Expo Pavilion June 24-26

Austin, TX March 20

Orange County, CA August 19

Melbourne, FL April 15

San Diego, CA August 21

Huntsville, AL April 17

Minneapolis, MN September 9

Boston, MA April 29

Chicago, IL September 11

Nashua, NH May 1

Toronto, ON October 7
Ottawa, ON October 9
Los Angeles, CA October 21
San Mateo, CA October 23
Tysons Corner Area, VA November 13

High-Performance Computing Conference
June 25-26, Rosemont, IL
HPCConference.com

Register today at www.rtecc.com

42

MARCH 2014 RTC MAGAZINE



Remote Device to Device Transfers

Fast Data Transfers

Need to access FPGA, GPU or CPU resources between systems? Dolphin's PCI Express Network provides a low-latency, high-throughput method to transfer data. Use peer-to-peer communication over PCI Express to access devices and share data with the lowest latency.

Learn how PCI Express™ improves your application’s performance

www.dolphinics.com


