

in this issue

The magazine of record for the embedded computing industry

February 2007


Real Solutions for ATCA System Design

Serve up 10 Gigabit Ethernet Speed

Power and Performance at the Board Level

Separation Kernels Guard the Gates for Network Security


Saeed Karamooz:

“We will see the popularity of ATCA and MicroTCA explode.”

An RTC Group Publication

GE Fanuc Embedded Systems

Add more security to your network applications. Control your network traffic with our Cavium-based packet processor. Administrators are increasingly turning to deep packet inspection at full Gigabit Ethernet line rates to control their network traffic. This level of control is necessary for managing traffic flows and security, and the Cavium Octeon™ is the chip of choice for these demanding high speed applications, including network address translation. With our single width AdvancedMC™ module, you can now create content-aware applications in one MicroTCA™ slot or AdvancedTCA® bay. This allows you to quickly build devices such as Session Border

Controllers, Media Gateways, Edge Routers, Firewalls and Video Services Switches, to name a few. GE Fanuc Embedded Systems is a leader in AdvancedMC™ design with more than a dozen modules in production, including this Cavium-based packet processor, which is already deployed in customer applications. So we can offer you the security you need to make your wired or wireless networks more secure.

Telum™ NPA-38x4 High-performance AdvancedMC packet processor

© 2007 GE Fanuc Embedded Systems, Inc. All rights reserved.

Departments
7   Editorial: Just When You Thought it Was Safe
    Industry Insider
54  Products & Technology
Efficiencies • Choice • New Services

Features

Technology in Context: ATCA System Design Options

12  CP-TA Aims for Interoperability in ATCA Systems
    Rajesh Poornachandran and Todd Keaffaber, Intel, and Nirlay Kundu, Motorola

16  ATCA Offers Design Options for Telecom
    Stuart Jamieson, Emerson Network Power

22  ATCA and General-Purpose Processing: Doing a Lot with a Little
    John A. Long, Intel

Modern Communication Platforms based on open standards enable carriers to cost-effectively expand their networks and deliver new services with faster time to revenue and OpEx efficiencies. • Pg. 22

Solutions Engineering

10 Gigabit Ethernet Solutions

26  FPGA-Based Stack Acceleration and Processor Board Architecture Underpin 10GbE Performance
    Rob Kraft, AdvancedIO Systems

32  10 Gigabit Ethernet: The Promise and the Challenge
    Dan Tuchler, Mellanox Technologies

38  Consolidating Network Fabrics to Streamline Data Center Connectivity
    Jack Staub, Critical I/O

Industry Insight: Power Management and Control

42  VPX-Based Systems Need Board-Level Power Management Solutions
    Ernie Domitrovits, Curtiss-Wright Controls Embedded Computing

[Block diagram: VMetro's Phoenix M6000 VXS intelligent I/O controller. A PCI Express switch links x4 and x8 PCI Express ports, a Xilinx Virtex FPGA with RocketIO ports and P2 I/O, a PCIe-to-PCI-X bridge, a PowerPC processor (AMCC 440SP) on PCI-X 64/133, a VME-to-PCI-X bridge (Tundra TSI148) providing VME 2eSST, a dual 4Gb/s Fibre Channel interface on PCI-X 32/133, and copper/optical Gigabit Ethernet. (*) Special build option.]

VMetro's Phoenix M6000 VXS intelligent I/O controller equipped with flexible high-performance buses and fabrics is an example of a carrier suitable for the demanding dataflow of applications utilizing 10GbE I/O. • Pg. 26

Executive Interview

46  RTC Interviews Saeed Karamooz, CEO of VadaTech

Software & Development Tools: Network Security

51  Securing the Future by Confining the Code
    David N. Kleidermacher, Green Hills Software

Industry Watch

58  Safety for Data at Rest: New Industry Standards for Storage Security
    Michael Willett, Trusted Computing Group Storage Work Group and Seagate Research

62  Simulation-Based Device Software Development: A Must-Have to Stay Competitive
    Marc Serughetti, CoWare

Ultra-Wideband Recording/Playback System Samples at Gsamples/s • Pg. 54

Publisher
PRESIDENT John Reardon, johnr@r
EDITORIAL DIRECTOR/ASSOCIATE PUBLISHER Warren Andrews, warrena@r


EDITOR-IN-CHIEF Tom Williams, tomw@r
SENIOR EDITOR Ann Thryft, annt@r
MANAGING EDITOR Marina Tringali, marinat@r
COPY EDITOR Rochelle Cohn


CREATIVE DIRECTOR Jason Van Dorn, jasonv@r
PRODUCTION DESIGNER Kirsten Wyatt, kirstenw@r
GRAPHIC DESIGNER Barry Karsh, barryk@r
DIRECTOR OF WEB DEVELOPMENT Marke Hallowell, markeh@r
WEB DEVELOPER Brian Hubbell, brianh@r

Advertising/Web Advertising

CALIFORNIA COASTAL ADVERTISING MANAGER Diana Duke, dianad@r (949) 226-2011
WESTERN REGIONAL ADVERTISING MANAGER Lea Ramirez, lear@r (949) 226-2026
EASTERN REGIONAL ADVERTISING MANAGER Nancy Vanderslice, nancyv@r (978) 443-2402
EMEA SALES MANAGER Marina Tringali, marinat@r (949) 226-2020
BUSINESS DEVELOPMENT MANAGER Jessica Grindle, jessicag@r (949) 226-2012


Maggie McAuley, maggiem@r (949) 226-2024

To Contact RTC magazine: HOME OFFICE The RTC Group, 905 Calle Amanecer, Suite 250, San Clemente, CA 92673 Phone: (949) 226-2000 Fax: (949) 226-2050, EASTERN SALES OFFICE The RTC Group, 96 Dudley Road, Sudbury, MA 01776 Phone: (978) 443-2402 Fax: (978) 443-4844 Editorial Office Warren Andrews, Editorial Director/Associate Publisher 39 Southport Cove, Bonita, FL 34134 Phone: (239) 992-4537 Fax: (239) 992-2396 Tom Williams, Editor-in-Chief 245-M Mt. Hermon Rd., PMB#F, Scotts Valley, CA 95066 Phone: (831) 335-1509 Fax: (408) 904-7214 Ann Thryft, Senior Editor 15520 Big Basin Way, Boulder Creek, CA 95006 Phone: (831) 338-8228


Published by The RTC Group Copyright 2007, The RTC Group. Printed in the United States. All rights reserved. All related graphics are trademarks of The RTC Group. All other brand and product names are the property of their holders.

...but your decision for real-time and embedded development just got easier.

C++ INTRODUCING PERC PICO THE FIRST RESOURCE-CONSTRAINED / HARD REAL-TIME SOLUTION FOR JAVA™ DEVELOPERS • Speed, footprint comparable to C / C++ • Low microsecond response times • Access to low level devices • Predictable memory management • Support for all major RTOS environments • PowerPC, x86, XScale, ARM, and more

ENTER TO WIN 42” PLASMA TV 1-800-87-AONIX Enter Promo Code: RTC

Editorial

Just When You Thought it Was Safe by Tom Williams, Editor-in-Chief


Most of us have a number of unfortunate misconceptions about network security. A very large number of us, even those deeply involved with embedded systems, tend to think of network security in terms of the laptop and desktop machines we use every day. But there is a deeper and much darker side.

Of course—unless we actually have job descriptions involving security—we tend to think of the usual battle between operating system security and “get-a-life” hackers: fourteen-year-olds high on pizza and Jolt Cola, or other sociopaths who invent viruses and spyware, sometimes for profit but more often for mischief. The architects of operating systems such as Windows and Linux have devoted Herculean efforts to protecting systems from such attacks. Other companies such as Symantec and McAfee have dedicated their entire business to developing products that shield systems from attack. Many of us who routinely use these products may be blissfully unaware of how many times they have saved us from disaster. That being the case, these companies can justifiably take pride in their products, technologies and service to the user community.

However, there is another class of villain lurking in the shadows, one that only rarely becomes apparent and even then does not get much publicity—and for good reason. These are PhD computer scientists working for foreign governments, intelligence operatives backed by the resources of nation-states. They are systematically studying the major operating systems used in the U.S. and the world for ways to breach them. When they find such means of entry, they do not immediately exploit them by launching viruses, worms or denial-of-service attacks. They simply verify and catalog them. When one of their cataloged vulnerabilities is discovered and exploited by a hacker and then fixed by the OS architects, they simply cross it off their list. They have plenty more.
We don’t know what they are, and by extension we can never be completely assured that a large enterprise operating system is completely secure. It is impossible. And this is in no way a criticism of such enterprise operating systems. They were never designed or conceived for such absolute levels of security. People I have talked to at the Department of Homeland Security and the NSA (to the extent the latter can talk about anything) know full well that an operating system consisting of well over 30 million lines of code can never be certified completely secure. Total certainty is, in fact, mathematically provable, but only up to roughly five thousand lines of code. This refers to the Evaluation Assurance Levels used by the NSA, the highest of which is EAL 7. To date, no operating system has been certified at that level. That will change in the foreseeable future, but what does that mean for the security of critical installations such as nuclear plants, industrial and chemical facilities and the power grid?

These are all controlled by sophisticated embedded systems, and all are one way or another connected to the Internet. Studies by DHS have shown that bad guys can get to the embedded control systems via the enterprise systems connected to the Internet, which are in turn connected to internal networks running the plants. Once inside, they can wreak havoc. One incident in a DHS-contracted study revealed that the invader simply changed the display on the human interface screen in a plant to show a switch in the wrong position when it was in fact in the proper position. The operator, noticing this, changed the switch, putting the system into a dangerous condition. Fortunately, this incident did not have disastrous results, but the implications are clear. The real villains are simply biding their time until a real conflict erupts or they decide to carry out an intentional, coordinated attack. In a recent interview, former counterterrorism czar Richard Clarke noted that if China went to war with the U.S., one of the first things it would and could do is shut down the U.S. power grid.

There is a way to counter this danger, but it is far from being online in our critical facilities. It is known as a “separation kernel” certified to the highest EAL levels. A separation kernel sits between other operating systems such as Windows, Linux and various RTOSs, including their drivers, and the underlying hardware. It controls all access to hardware and hence to the outside world. These operating systems can then operate at different and known levels of security in an arrangement called Multiple Independent Levels of Security (MILS).

Retrofitting facilities with such systems when they become widely available is a big but not impossible job. It would leave intact existing control and IT systems and offer a level of protection against attack that simply does not exist today. It would certainly appear that providing higher levels of security to the now computer-controlled and networked technical infrastructure of this country should be a major national priority.
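The separation-kernel idea described above can be sketched in miniature. The toy below is illustrative Python, not any certified kernel’s API; the partition names, levels and policy table are invented. It shows the core property: every hardware access goes through one small, checkable gate, so a compromised guest partition cannot reach a device its partition is not cleared for.

```python
# Toy MILS-style separation kernel: all device access is mediated by one policy check.
# Partition and device names are hypothetical, chosen only for illustration.

POLICY = {
    # partition        -> devices it may touch
    "plant-control":  {"plc-bus", "hmi-display"},
    "corporate-it":   {"nic"},
}

class SeparationKernel:
    def access(self, partition, device):
        allowed = POLICY.get(partition, set())
        if device not in allowed:
            # Denied at the kernel, regardless of what the guest OS was tricked into doing.
            raise PermissionError(f"{partition} denied access to {device}")
        return f"{partition} -> {device}: granted"

k = SeparationKernel()
print(k.access("plant-control", "plc-bus"))       # granted
try:
    # Even a fully compromised IT partition cannot reach the plant control bus.
    k.access("corporate-it", "plc-bus")
except PermissionError as e:
    print(e)
```

The point of the real thing is that the gate is small enough (thousands, not millions, of lines) to be formally evaluated, which is what makes the high EAL levels reachable.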

DPM • VME64, VITA 31.1 • Pentium M to 1.8 GHz • 855 GME chipset with embedded graphics • Low power consumption • -40°/+85°C operation

CPM1 • CompactPCI, PICMG 2.16 • Pentium M to 1.8 GHz • 855 GME chipset with embedded graphics • Two PMC sites • -40°/+85°C operation

KPM-S1 • Custom embedded • Pentium M 2 GHz • 7520 chipset with up to 8 GB DDR 400 • ATI M26 Mobility Radeon graphics on PCI Express

Conduction cooled versions of the VME and CompactPCI/PICMG 2.16 single board computers • Extended temperature to -40°/+85°C • Tested to MIL STD 810 and 901D • Tested to high shock/vibration • Conformal coating available • Customized interfaces supported

Let Dynatem provide your next embedded processor solution 800.543.3830 949.855.3235 Dynatem, Inc., 23263 Madero, Suite C, Mission Viejo, CA 92691

Industry Insider


New StackableUSB Specification for I/O Expansion in Embedded Systems

Micro/sys, Inc. has announced the release of the StackableUSB Specification, which defines a standard for stacking I/O boards onto a single board computer using the popular USB 2.0 interface. A single board computer, operating as a host, can communicate with multiple USB peripheral cards directly through mating USB 2.0 connectors resident on the CPU and I/O cards. Satisfying the growing demand for faster communication between I/O channels and CPUs in embedded applications, the StackableUSB format eliminates cables, reduces pin count and requires a smaller connector footprint than traditional interconnect architectures. Boards bolt securely together to increase reliability and mobility in rugged or harsh environments, and the format features a USB point-to-point architecture in which connections are routed up the stack to the next peripheral. These features combine to make StackableUSB a platform with many advantages for embedded systems.

Each StackableUSB connector supports up to eight USB peripheral devices in the stack without a hub. USB I/O supports several features that have not been available in the past with traditional stacking architectures such as PC/104 and PC/104-Plus. USB supports automatic enumeration, which allows the host to detect devices plugged into the stack and install the drivers necessary for the system to operate with minimal human intervention. USB also supports power management, so that USB peripherals or devices can be placed in a low power mode to conserve power, an issue that is paramount in embedded systems relying on battery-backed power or environments where heat generation is a key concern. USB 2.0 also offers increased data bandwidth to support today’s high-speed A/D and DAC data rates.

[Photo: The Micro/sys USB148 StackableUSB I/O board with 48 lines of digital I/O. The USB connector is at the lower left.]
Inclusion of USB in most popular chipsets and many microcontrollers makes this an easy and inexpensive implementation compared with PCI and PCI Express I/O. USB also offers a range of speeds: 480 Mbits/s at high speed, 12 Mbits/s at full speed and 1.5 Mbits/s at low speed. This gives embedded system designers a road map for increasing their system throughput as technology advances. Micro/sys, the industry originator of the StackableUSB specification, is opening up a market for USB I/O to move into embedded OEM applications. StackableUSB uses the same serial interconnect standard that is found on the common desktop or laptop PC, but on form-factors such as the popular 3.5” x 3.5” computer board with a stackable connector. The specification is published by and available from Micro/sys, Inc.
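To get a rough feel for those three signaling rates, the back-of-the-envelope figures below (plain Python, ignoring USB protocol overhead, so real payload throughput would be somewhat lower) show how long an idealized 1 MiB transfer takes at each nominal speed.

```python
# Nominal USB 2.0 signaling rates in bits per second (low/full/high speed).
SPEEDS = {"low": 1.5e6, "full": 12e6, "high": 480e6}

payload_bits = 1 * 1024 * 1024 * 8   # a 1 MiB payload, expressed in bits

for name, rate in SPEEDS.items():
    # Idealized transfer time at the raw signaling rate, no protocol overhead.
    print(f"{name:>4} speed: {payload_bits / rate:8.3f} s")
```

The 320x spread between low and high speed is what makes the roadmap argument: the same stack connector covers slow digital I/O today and high-speed A/D streams later.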

Event Calendar 03/03-10/07

MVA Communications Ecosystem Conference, San Diego, CA
IEEE Aerospace Conf., Big Sky, MT
TechNet Tampa 2007, Tampa, FL
Real-Time & Embedded Computing Conference, Atlanta, GA
Real-Time & Embedded Computing Conference, Phoenix, AZ
Real-Time & Embedded Computing Conference, Huntsville, AL
Real-Time & Embedded Computing Conference, Albuquerque, NM
NDIA Science & Engineering Technology Conference, No. Charleston, SC
Embedded Systems Conf., San Jose, CA

If your company produces any type of industry event, you can get your event listed by contacting sallyb@ This is a FREE industry-wide listing.

Intel Announces 45 nm Processor Technology—Smaller and Faster with Less Power

In what may be one of the biggest advancements in fundamental transistor design, Intel has announced it will use dramatically different transistor materials to build the hundreds of millions of 45 nanometer (nm) transistors inside the next generation of the company’s Core 2 family of processors. Intel already has 45 nm CPUs in-house—the first of at least fifteen 45 nm processor products in development. This new transistor technology is expected to allow Intel to continue increasing processor speeds while reducing the electrical leakage from transistors that can hamper chip and PC design, size, power consumption, noise and costs. It also indicates that Moore’s Law should thrive well into the next decade. Intel says it is on track for 45 nm production in the second half of 2007. Compared to today’s 65 nm technology, Intel says its 45 nm technology will provide the following product benefits:
• Approximately twice the transistor density
• Approximately 30 percent reduction in transistor-switching power
• Greater than 20 percent


[Figure: High-k + Metal Gate (HK+MG) transistor. The metal gate, with a low-resistance layer, increases the gate field effect; the hafnium-based high-k dielectric also increases the gate field effect and allows use of a thicker dielectric layer to reduce gate leakage. Combined, HK+MG increases drive current by more than 20 percent (for more than 20 percent higher performance) or reduces source-drain leakage by more than 5x, and reduces gate oxide leakage by more than 10x.]
improvement in transistor-switching speed, or a greater than 5 times reduction in source-drain leakage power
• Greater than 10 times reduction in transistor gate oxide leakage, for lower power requirements and increased battery life

For its 45 nm technology, Intel is using a hafnium-based high-k material in the gate dielectric. The high-k dielectric is created using atomic layer deposition (ALD), whereby a single molecular layer of the high-k material is deposited at a time. Because the high-k gate dielectric isn’t compatible with today’s silicon gate electrode, Intel had to develop new metal gate materials to solve two fundamental problems that arise when the two are combined. One is known as “threshold voltage pinning” (also called “Fermi level pinning”) and the other is “phonon scattering.” Neither of these effects is desirable and both lower transistor performance. These effects arise when a high-k dielectric is used with a polysilicon gate electrode, but are significantly reduced when the polysilicon is replaced by specific metals (different ones for NMOS and PMOS transistors) and all are integrated with the right process recipe. The specific metals are a trade secret. The combination of the metal gates and the high-k gate dielectric leads to transistors with very low current leakage and high performance.



[Figure: Transistor cross-section showing the hafnium-based high-k gate oxide on a silicon substrate; the gate metals differ for NMOS and PMOS.]

Intel is currently developing its 45 nm process on 300 mm wafers in Hillsboro, Oregon, in D1D, a fab with clean-room space equivalent to 3.5 football fields. Two new 300 mm fabs are being built for the coming 45 nm ramp: Fab 32 in Ocotillo, Arizona (production due to start in the second half of 2007) and Fab 28 in Israel (production to start in the first half of 2008).
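Intel’s “approximately twice the transistor density” claim follows directly from the linear shrink: area scales with the square of the feature dimension. A quick sanity check in plain Python (idealized scaling; real layouts never shrink perfectly):

```python
# Ideal area scaling from a 65 nm process node to a 45 nm node:
# transistor density grows with the square of the linear shrink factor.
density_gain = (65 / 45) ** 2
print(f"ideal density gain: {density_gain:.2f}x")   # about 2.09x, i.e. roughly double
```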

Freescale, IBM to Team on Semiconductor R&D

Freescale Semiconductor and IBM have announced that Freescale will join the IBM technology alliance for joint semiconductor research and development. The agreement includes Complementary Metal Oxide Semiconductor (CMOS) and Silicon-on-Insulator (SOI) technologies as well as advanced semiconductor research and design enablement transitioning at the 45-nanometer generation. Freescale is the first technology development partner in the IBM technology alliance to participate in both low-power and high-performance technology research and development. This agreement brings together Freescale’s participation in key embedded markets, including automotive, networking, wireless, industrial and consumer, with IBM’s success in developing world-class technology and industry-leading systems expertise. Freescale notes that this alliance is expected to enable it to further strengthen its

manufacturing strategy. In addition to leveraging owned capacity in internal fabs and its existing relationships with leading foundry manufacturers, Freescale will have access to the combined manufacturing capacity of IBM’s Common Platform partners. The Common Platform provides its semiconductor fabrication partners with synchronized manufacturing processes to help ensure the maximum flexibility and lowest development investment for multi-source, high-volume manufacturing.

Communications Platforms Trade Association Selects ESO Technologies’ ATCA-Tester

The Communications Platforms Trade Association (CP-TA) has announced that it has selected ESO Technologies’ ATCA-Tester as the IPMI Manageability Test Tool for the CP-TA certification test suite. CP-TA members are currently defining interoperability test requirements and procedures for PICMG’s AdvancedTCA specification, aligned to the SCOPE Alliance AdvancedTCA profile. “Currently, CP-TA is focused on addressing the top interoperability issues of thermal, manageability and data transport, and we see the ATCA-Tester as an integral part of our solution,” said Shlomo Pri-Tal, CP-TA Chairman. “We selected ATCA-Tester because it meets the requirements for CP-TA manageability certification laid out in our forthcoming Interoperability Compliance Document and Test Procedure Manual. We will continue to evaluate additional tools to address future requirements.” ATCA-Tester is an automated software tool used to test AdvancedTCA and AdvancedMC equipment compatibility with respect to the system management requirements of the PICMG 3.0 and AMC.0 specifications. It can be used to test single building blocks or

all building blocks simultaneously in an integrated system. The Interoperability Compliance Document and Test Procedure Manual are scheduled for release in Q1 2007. CP-TA will then address interoperability for AdvancedMC and MicroTCA, and eventually will focus on specifications from OSDL and the Service Availability Forum. CP-TA members include leading communications platforms and building block providers. Members will receive a discount on the ATCA-Tester. For information on joining the CP-TA and membership benefits, visit www.

Demo of VPX/VPX-REDI Processing Highlights 4 Channel/2.5 Gbyte/s Bandwidth

Launching what it describes as a “new era of VPX computing,” Curtiss-Wright Controls Embedded Computing presented a public demonstration of an embedded system based on the new high-bandwidth VPX (VITA 46) open architecture bus standard on January 14, 2007. A high-performance successor to the long popular VMEbus architecture, VPX and its complement, VPX-REDI (VITA 48), were designed to address emerging serial switched fabric, high-bandwidth defense and aerospace applications that exceed the capabilities of the earlier bus standard.

Curtiss-Wright’s demonstration comprised a system using several of its new 6U form-factor VPX boards. The system featured a VPX6-185 single board computer (SBC) and two CHAMP-AV6 digital signal processor (DSP) VPX boards operating in a mesh Serial RapidIO (SRIO) network. Each board in the mesh is connected to each of the others by a bi-directional x4 SRIO connection, with each link providing up to 2.5 Gbytes/s of bi-directional bandwidth. An application running on the system enabled viewers of the demonstration to choose the number of processors participating in simultaneous streaming transfers between the boards. This application is representative of common signal processing algorithms associated with radar processing, which in traditional systems are often limited by data movement. The demonstration was able to show transfer rates approaching 2.5 Gbytes/s, the theoretical limit of the links.
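The quoted 2.5 Gbytes/s per link is consistent with a common Serial RapidIO configuration of the period. The arithmetic below (plain Python) assumes 3.125 Gbaud lanes with 8b/10b encoding; the lane rate is our assumption, since the article does not state it.

```python
# Serial RapidIO x4 link budget (assumed 3.125 Gbaud lanes, 8b/10b encoding).
lane_baud   = 3.125e9            # raw symbol rate per lane (assumption)
payload     = lane_baud * 8 / 10 # 8b/10b: 8 data bits per 10 line bits -> 2.5 Gbit/s
lanes       = 4
per_dir_GBs = payload * lanes / 8 / 1e9   # Gbytes/s in one direction
bidir_GBs   = per_dir_GBs * 2             # both directions of the full-duplex link
print(per_dir_GBs, bidir_GBs)             # 1.25 2.5
```

That is, 1.25 Gbytes/s each way, or 2.5 Gbytes/s bi-directional, matching the figure in the demonstration.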

PCI-SIG Delivers PCI Express 2.0 Spec

PCI-SIG, the Special Interest Group responsible for PCI Express industry-standard I/O technology, has announced the availability of the PCI Express Base 2.0 specification. After a 60-day review of revision 0.9 of the specification in Fall 2006, members of the PCI-SIG finalized and released PCI Express


(PCIe) 2.0, which doubles the interconnect bit rate from 2.5 Gbits/s to 5 Gbits/s to support high-bandwidth applications. The specification seamlessly extends the data rate to 5 Gbits/s in a manner compatible with all existing PCIe 1.1 products currently supporting 2.5 Gbit/s signaling. The key benefit of PCIe 2.0 is its faster signaling, effectively increasing the aggregate bandwidth of a 16-lane link to approximately 16 Gbytes/s. The higher bandwidth will allow product designers to implement narrower interconnect links to achieve high performance while reducing cost. In addition to the faster signaling rate, PCI-SIG working groups also added several new protocol layer improvements to the PCIe Base 2.0 specification, which will allow developers to design more intelligent devices to optimize platform performance

and power consumption while maintaining interoperability, low cost and fast market introduction. These architecture improvements include the following:
• Dynamic link speed management allows developers to control the speed at which the link is operating.
• Link bandwidth notification alerts platform software (operating system, device drivers, etc.) of changes in link speed and width.
• Capability structure expansion increases control registers to better manage devices, slots and the interconnect.
• Access control services allow for optional controls to manage peer-to-peer transactions.
• Completion timeout control allows developers to define a required disable mechanism for transaction timeouts.
• Function-level reset provides an optional mechanism to reset functions within a multifunction device.
• Power limit redefinition enables slot power limit values to accommodate devices that consume higher power.

The PCIe Base 2.0 specification is available for download at http://www.pcisig.com/specifications.
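The “approximately 16 Gbytes/s” figure for a 16-lane link follows from the signaling rate. Plain-Python arithmetic, assuming the 8b/10b encoding that PCIe 1.x and 2.0 use and counting both directions, as aggregate figures conventionally do:

```python
# PCIe 2.0 x16 aggregate bandwidth, derived from the 5 Gbit/s per-lane signaling rate.
lane_gbps = 5.0 * 8 / 10        # 8b/10b encoding -> 4 Gbit/s of data per lane
lanes     = 16
per_dir   = lane_gbps * lanes / 8   # Gbytes/s in one direction
aggregate = per_dir * 2             # both directions of the full-duplex link
print(per_dir, aggregate)           # 8.0 16.0
```

The same calculation at the old 2.5 Gbit/s rate gives half these numbers, which is the “doubles the interconnect bit rate” claim seen from the bandwidth side.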


TechnologyInContext ATCA System Design Options

CP-TA Aims for Interoperability in ATCA Systems The Communication Platforms Trade Association (CP-TA) was launched to drive a mainstream market for interoperable communications platforms by solving interoperability issues that are not currently addressed by the existing specifications in the communications platforms industry.



by Rajesh Poornachandran and Todd Keaffaber, Intel, and Nirlay Kundu, Motorola


The communication platform industry has developed a rich set of open specifications for building modular communications platforms. However, the specifications were developed so that they could apply to as many industries as possible, including communications, manufacturing and automotive. As a result, the specifications contain many requirements that are optional for some applications but mandatory for others. Additionally, developers who design their products to meet the requirements of a specification will interpret some of the requirements differently than other developers. Both interpretations can be correct but lead to products that are not interoperable. These factors and others have prevented the industry from converging toward an ecosystem of interoperable building blocks. CP-TA was formed to fill this “interoperability gap.” In an industry-wide effort, CP-TA is working toward a certification process to ensure interoperable hardware and software building blocks for communications platforms, based on open industry specifications.

CP-TA is organized into three working groups: the Technical, Compliance and Marketing Work Groups.
• The Technical Work Group (TWG) is responsible for identifying and creating the interoperability requirements. These requirements are based on existing open specifications published by PICMG, the Service Availability Forum and the Open Source Development Labs. They are then compiled into the Interoperability Compliance Document (ICD).
• The Compliance Work Group (CWG) is tasked with developing the Test


Procedure Manual (TPM). The TPM contains the test procedures needed to test the interoperability requirements identified in the ICD. The CWG is also responsible for developing any test tools currently not available to carry out tests described in the TPM. • The Marketing Work Group (MWG) is responsible for creating CP-TA awareness in the communications industry. Release 1.0 of the ICD and TPM will focus on three major areas of interoperability for the ATCA platform for communications: manageability, data transport and thermals. Manageability addresses issues with IPMI functionality of the ATCA boards and shelf managers as well as the other managed FRUs in an ATCA system. Data Transport narrows the PICMG 3.1 fabric options from nine to two options, those being 1 and 9. It also defines signal integrity requirements for


the boards and backplanes. The thermal area defines a standardized method for measuring slot airflow and board impedance.

[Figure 1: CP-TA Manageability Test Configuration. The test suite runs on a Linux computer and talks to the shelf manager of an AdvancedTCA shelf; IPMI over IPMB-0 connects the shelf manager, AdvancedTCA boards and other intelligent FRUs, while an I2C analyzer on the backplane passes captured IPMB traffic over a serial port to a Windows computer running an I2C decoder.]

[Figure 2: Certification Process. The CP-TA Lab runs the test tools against an ATCA Building Block (BB); if the BB passes all the tests, it is CP-TA Certified.]

As stated earlier, some of the ATCA requirements are optional for a specific application. To attain interoperability, CP-TA has promoted many optional requirements in the current industry specifications to mandatory requirements in the ICD. One example is the following requirement from the PICMG AdvancedTCA Specification: “If the shelf manager receives a Bused Resource Control (Relinquish) command from a Board that is not controlling the bus, the shelf manager should send an Error Status.” In the ICD, this requirement has been promoted to mandatory for a communications platform, since it involves communication between a front board and the shelf manager. If it were not mandated (as it now is), one vendor’s shelf manager implementation might not send an error status, and a bus acquired by one front board could be relinquished by another board, which reduces the system’s reliability and security. On the other hand, if this requirement has been implemented by another shelf manager vendor, then the same front board that worked with the first vendor’s shelf manager may not interoperate with that of the other vendor.

Moreover, there are many requirements in current communications specifications that are interpreted differently by different vendors. For example, consider the following requirement from the PICMG AdvancedTCA Specification: “If a Shelf has two operating shelf manager instances, one active and one backup, there shall not be a period of more than one second where a shelf manager is unable to transmit and receive UDP packets using the shelf manager IP address.” The requirement does not specify clearly whether the active shelf manager must send the UDP packet to the backup shelf manager or if it must send the UDP

packet using its shelf manager IP address to some system external to the shelf. Such vague requirements are one of the key reasons for the current interoperability issues among the AdvancedTCA building blocks. CP-TA intends to resolve these ambiguities in future releases.
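The stakes in the Bused Resource Control example above can be made concrete with a small model. The sketch below is an illustrative toy in Python, not real shelf-manager code or any IPMI API (the method names and return strings are invented); it simply shows why the two permissible readings of the specification — returning an error status versus silently honoring a relinquish from a non-controlling board — produce observably different, non-interoperable behavior.

```python
# Toy model of a shelf manager handling Bused Resource Control (Relinquish).
# Names and return values are invented for illustration, not PICMG 3.0 encodings.

class ShelfManager:
    def __init__(self, strict=True):
        self.bus_owner = None   # board currently controlling the bused resource
        self.strict = strict    # strict=True models the behavior the ICD mandates

    def request(self, board):
        if self.bus_owner is None:
            self.bus_owner = board
            return "granted"
        return "denied"

    def relinquish(self, board):
        if board != self.bus_owner:
            if self.strict:
                return "error-status"   # reject a relinquish from a non-owner
            self.bus_owner = None       # lenient reading: silently free the bus
            return "ok"
        self.bus_owner = None
        return "ok"

# Strict shelf manager: board-B cannot strip board-A of the bus.
strict = ShelfManager(strict=True)
strict.request("board-A")
print(strict.relinquish("board-B"), strict.bus_owner)   # error-status board-A

# Lenient shelf manager: the same command sequence silently frees board-A's bus,
# which is exactly the reliability and security hole the ICD closes.
lenient = ShelfManager(strict=False)
lenient.request("board-A")
print(lenient.relinquish("board-B"), lenient.bus_owner)  # ok None
```

A front board validated only against the lenient shelf manager would behave differently when plugged into the strict one, which is why the ICD makes the error-status behavior mandatory rather than optional.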

Test Configuration: An Example

Currently CP-TA has adopted the ATCA-Tester from ESO Technologies as a software test tool for the manageability domain. ATCA-Tester, running on a Linux box, communicates with the shelf manager via the Remote Management and Control Protocol (RMCP). The shelf backplane is instrumented to connect an I2C analyzer, through which the IPMB traffic can be

driven, captured and analyzed by connecting the other end of the Analyzer to the serial port of a PC. ATCA-Tester is script-driven and uses a command line interface. Test scripts were developed using the PICMG 3.0 specification, PICMG AMC.0 specification and IPMI specification as well as AdvancedTCA Interoperability Workshops (AIW) test scenarios. The ATCA-Tester software runs outside the AdvancedTCA shelf on a laptop or PC running Linux and uses the shelf manager’s system interface to gain access to the building blocks to be tested. ATCA-Tester’s access path to AdvancedTCA building blocks is represented in Figure 1. ATCA-Tester automatically brings the


tested FRUs to the hot swap state required by a given test. The tool communicates with the FRU (e.g., blade) under test using the RMCP bridging functionality provided by the shelf manager.
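RMCP itself is a thin protocol over UDP, and the discovery primitive beneath it is the ASF Presence Ping. As a sketch of what a tool’s very first datagram to a shelf manager looks like on the wire (field values from the ASF 2.0 specification; the RMCP+ session setup and IPMI bridging that follow are omitted):

```python
import struct

def asf_presence_ping(tag=0x00):
    """Build an RMCP/ASF Presence Ping datagram (sent to UDP port 623).
    Field layout follows the ASF 2.0 specification."""
    rmcp_header = struct.pack(
        "BBBB",
        0x06,  # RMCP version (ASF 2.0)
        0x00,  # reserved
        0xFF,  # sequence number 0xFF = no RMCP ACK requested
        0x06,  # message class: ASF
    )
    asf_body = struct.pack(
        ">IBBBB",
        0x000011BE,  # IANA enterprise number assigned to ASF
        0x80,        # message type: Presence Ping
        tag,         # message tag, echoed back in the Presence Pong
        0x00,        # reserved
        0x00,        # data length (the ping carries no payload)
    )
    return rmcp_header + asf_body

packet = asf_presence_ping()
assert len(packet) == 12
print(packet.hex())  # 0600ff06000011be80000000
```

A shelf manager that answers this ping with a Presence Pong is advertising IPMI support, which is what lets a tool like ATCA-Tester proceed to bridged IPMB requests.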

CP-TA Certification Process

The first set of interoperability requirements, detailed test procedures and industry-harmonized test tools are being developed and are scheduled for release by the end of 2006. Initially CP-TA members

will test their products primarily in-house. Moreover, CP-TA is in the process of establishing periodic InteropFests that will provide a confidential environment for the CP-TA community to align the execution of CP-TA tests as well as offer a true multi-vendor environment for enhanced interoperability testing. CP-TA held its first InteropFest in November 2006. CP-TA’s goal is to build a cost-efficient certification program for the CP-TA community. Building block categories

such as managed shelf, shelf manager, switch board, operating system, intelligent sub-FRUs and front boards will all be tested against applicable CP-TA test procedures, and compliant products will be marked as CP-TA certified. Since the AdvancedTCA architecture encompasses electrical, mechanical, thermal, interconnect and software areas, each type of component has to be tested for these attributes. CP-TA intends to release an RFP for a third-party lab capable of running CP-TA certification testing.

Future Work

Attaining interoperability at the system level, not just the component level, is the key prerequisite on the way to mainstream adoption of open standards-based AdvancedTCA carrier-grade platforms. In the future, CP-TA will address interoperability requirements at the OS and middleware layers. CP-TA brings together the industry in a concerted effort to lay out a set of testable requirements essential to platform interoperability. Equally important, the trade association is creating the complementary test procedures and underlying test tools that will validate conformance to these interoperability requirements. This lays the foundation for an industry-wide interoperability certification program for AdvancedTCA products certified by an independent test lab. As modular components that have been designed, tested and certified for baseline interoperability become available, integrators will increasingly have the flexibility to cost-effectively mix and match components from different vendors into base AdvancedTCA platforms. Ultimately, the success of the CP-TA certification program will be measured by the improved economics of deploying carrier-grade infrastructure solutions based on open standards. Communications Platforms Trade Association [].


February 2007

Technology in Context: ATCA System Design Options

ATCA Offers Design Options for Telecom

Designers choosing ATCA must carefully consider their mix of options. These include the proper serial interfaces and protocols for their applications, the effect of design decisions on power and cooling, and the use of off-the-shelf or custom-built AMC cards.

Ad Index

by Stuart Jamieson, Emerson Network Power

The Advanced Telecommunications Computing Architecture (ATCA) was developed to address the needs of demanding telecom applications. The architecture retains considerable design flexibility to give developers the opportunity to customize and optimize their system. Making the most of the design options available, however, requires careful assessment of the resulting effects on system power and management.

Telecommunications is one of the most demanding markets for electronic systems, requiring networks that can move huge amounts of data at the highest possible speeds. Further, demands are growing rapidly as information sources such as radio, television and machine-to-machine communications move onto the telecom networks to join telephony, text and World Wide Web traffic. The critical nature of this information flow, along with the substantial revenue losses that accompany network failures, imposes a stringent requirement on system reliability and availability.

ATCA was created to help developers meet these extreme demands while maximizing opportunities for design reuse to lower costs. The architecture has three key elements: a protocol-agnostic, high-speed serial backplane; modular hardware design through the use of mezzanine cards; and standardized system monitoring and management processes. Even so, the standards leave open many architectural options that developers can choose to achieve their desired system performance and pricing. This allows ATCA to serve as the basis for such diverse products as media gateways, media gateway controllers and central office switches and routers (Figure 1). Many choices exist in these key areas. The backplane, for instance, can support many different connection schemes and serial communications protocols.


Figure 1: Equipped with multiple AdvancedMC expansion sites, an ATCA blade like the Emerson KAT 4000 provides an excellent open architecture platform for building modular, scalable, field-replaceable telecom blades.



Figure 2: The Advanced Mezzanine Card module used in ATCA system designs is also the basis for MicroTCA, which targets smaller equipment installations while retaining hardware and software compatibility with ATCA equipment. (Shelf configurations shown: Single Tier Shelf, Cube Shelf, Two Tier Fixed Single Width Shelf, Back-to-Back Shelf, Two Tier Mixed Width, Pico Shelf.)

Configurations such as star and dual-star are achievable and provide opportunities for creating redundant and fault-tolerant communications channels. Popular protocols such as Serial RapidIO, Gigabit Ethernet and XAUI interfaces can run on the backplane with equal facility. The use of Advanced Mezzanine Card (AMC) modules in ATCA system design gives designers an ability to mix and match functions and I/O connectivity to achieve a variety of combinations with a minimum number of building blocks. The modularity of the modules spreads the expense of development across many applications and increases the available market and potential production volume for modules, thereby reducing overall system costs. The modules also contribute to high-availability system design by being individually hot-swappable. To simplify the use of AMC modules in high-availability system design, the ATCA specifications define a standard approach to system monitoring and management: the Intelligent Platform Management Interface (IPMI). This interface together with system management software gives developers the access and




controls needed to initialize and configure modules, query status, enable or disable backplane communications and control module power. The IPMI management functions thus allow implementation of automatic failover and other high-availability policies.
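The management functions operate on the FRU hot-swap states that PICMG 3.0 defines (M0 through M7); a failover policy is, at bottom, a rule about which state transitions to drive. The transition table below is a simplified sketch, not the full specification (which adds abort paths and M7 “communication lost” edges from every state):

```python
# FRU hot-swap states from PICMG 3.0 (M0-M7). The transition table is
# deliberately simplified for illustration.
HOT_SWAP_STATES = {
    0: "Not Installed",
    1: "Inactive",
    2: "Activation Request",
    3: "Activation In Progress",
    4: "Active",
    5: "Deactivation Request",
    6: "Deactivation In Progress",
    7: "Communication Lost",
}

TRANSITIONS = {
    0: {1},     # board inserted
    1: {0, 2},  # extracted, or handle closed -> requests activation
    2: {1, 3},  # shelf manager grants (or operator aborts) activation
    3: {1, 4},  # activation completes, or falls back to Inactive
    4: {5, 6},  # deactivation requested (or forced by shelf manager)
    5: {4, 6},  # request denied, or deactivation begins
    6: {1},     # payload quiesced, FRU back to Inactive
}

def legal_path(path):
    """Check that a sequence of M-states uses only sketched transitions."""
    return all(b in TRANSITIONS.get(a, set()) for a, b in zip(path, path[1:]))

# A normal insertion-to-active walk, as a management tool would drive it:
print(legal_path([0, 1, 2, 3, 4]))  # True
print(legal_path([1, 4]))           # False: cannot jump straight to Active
```

A failover policy then reads naturally against this model: on detecting a failed blade, deactivate it (M4 through M6 to M1) and activate its standby (M1 through M3 to M4).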

Sorting Out the Options

The flexibility offered by the ATCA standards will require designers to make careful choices, however. One of the first of these is architectural. The ATCA specification defines the basic framework and essential content of an equipment shelf targeting rackmount applications, and specifies a large (8U) card with power capacity currently up to 200W for use in a rackmount enclosure. For those applications where rack mounting is not appropriate, designers can also choose the MicroTCA approach. MicroTCA systems also use AMC modules, but install the modules directly into a backplane rather than as mezzanine cards on larger carriers. This enables the creation of smaller systems, as shown in Figure 2, while retaining hardware and software compatibility with ATCA designs.

Another option to consider is which protocols are to be carried across the serial backplane to establish system connectivity. Because the backplane itself is protocol-agnostic, and AMC modules make the communications link a drop-in option, there is no built-in bias for one protocol over another. This choice can be made on the merits of the protocol in the system’s target application. It is also possible for the backplane to support different protocols for the various serial links. Some of the most popular serial protocols for telecom applications are Serial RapidIO (SRIO), Gigabit Ethernet (GbE), 10 Gigabit Ethernet and PCI Express (PCIe). Each of these offers high data rates and all are well supported with software and drivers. Where they differ most significantly is in their match to the traffic the backplane must carry. For signal processing applications such as media gateways, for example, the SRIO interface is a natural fit. Many digital signal processors, used in media gateways to handle encoding and decoding of audio and video, have SRIO interfaces built in. Thus, communications among the DSPs across the backplane can take place


without any intervening protocol conversion, simplifying system design and maximizing achievable throughput. SRIO is also suitable for applications requiring minimal latency and low overhead in the data packet. Control plane communications, such as for a router, are better served by the GbE interface. The general-purpose processors that handle setup, routing and other system control-plane functions often have GbE as their native peripheral bus. Again, the ability to connect these devices across the backplane without protocol conversion simplifies design and maximizes performance. The GbE interface also makes the most sense in systems that are carrying Internet Protocol (IP) traffic across the backplane, such as media gateway controllers. While there is some conversion required between GbE and IP, the hardware and software to bridge the two protocols are both mature and widely available. These attributes contribute to design simplicity and help ensure optimized performance.

Protocols, Power and Cooling

Selection of the protocol to be used on the backplane has an obvious effect on ATCA system performance. One of the more subtle effects of design choices is the impact of board power dissipation on the suitability of a system for its working environment. ATCA boards can draw as much as 200W of power, and a fully-loaded chassis can have from 14 to 16 boards. Because many telecom systems must be installable in relatively small spaces, such as an equipment closet, or be packed as densely as possible into the available space to maximize the number of customers a system can handle, systems can quickly develop problems with heat buildup. The systems must be designed to provide adequate airflow to keep boards within operating temperature limits, but designers cannot simply add fans as needed. Fans generate both acoustic and electromagnetic noise, both of which must remain within strict limits when an installation space is fully occupied to avoid violating NEBS, occupational safety and EMC guidelines. With these limitations on fan usage, designers must consider closely the airflow path through their system.
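The scale of the airflow problem falls out of a standard rule of thumb: for sea-level air, required airflow in CFM is roughly 1.76 times the heat load in watts divided by the allowed air-temperature rise in °C. The board count and temperature rise below are illustrative choices, not figures from the ATCA specification:

```python
# Rough sizing of chassis airflow from the heat load.
RHO_CP = 1206.0        # air density * specific heat, J/(m^3 * K), sea level
CFM_PER_M3S = 2118.88  # 1 m^3/s expressed in cubic feet per minute

def required_cfm(watts, delta_t_c):
    """Volumetric airflow needed to hold the air temperature rise
    across the shelf to delta_t_c degrees C for a given heat load."""
    m3_per_s = watts / (RHO_CP * delta_t_c)
    return m3_per_s * CFM_PER_M3S

# 16 front boards at the full 200 W budget, 10 degC allowed air rise:
load = 16 * 200
print("%.0f CFM" % required_cfm(load, 10.0))  # ~562 CFM
```

Moving several hundred CFM through a shelf quietly enough to stay inside acoustic limits is precisely why the fan placement choices discussed next matter.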

The ATCA specifications are open on the issue of airflow beyond specifying a bottom-to-top and front-to-back direction, as shown in Figure 3. Forced-air approaches are the most common for dense installations. With forced-air cooling, the major options are either to push the air through the system or to pull it. Each approach has its advantages and drawbacks. Pushing the air through the system minimizes the dirt buildup on the inside of the chassis that will inevitably occur over time. Because fans force the air into the chassis through a filter, much of the airborne dust will be trapped. In addition, the approach produces positive air pressure inside the chassis, so air leaks blow dust away from the chassis rather than drawing it in. The drawback to the push approach is that the fans must be located at the front of the equipment, contributing significantly to acoustic noise. Pulling the air through the system places the fans at the back or top of the chassis, where their acoustic noise is of less concern. This approach also provides somewhat more uniform air distribution in the system, drawing air in throughout the chassis. This same drawing in, however,

also pulls dust and dirt into the chassis. This can lead to maintenance problems in dusty environments, so effective filtering solutions are vital. This also adds a service requirement to change those filters regularly. Because of the cooling challenges facing a design that uses the maximum available power, developers should give consideration to architectural choices that help minimize power use. One such choice is to use distributed computing with a number of smaller processors rather than a few large ones. Distributing the workload allows the smaller processors to run at lower clock rates, which greatly reduces their individual power demands. The reduction is significant enough that their total power dissipation is substantially less than a single processor with equivalent performance.
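The reason several slower processors can beat one fast one on power is the classic CMOS dynamic-power relation: power scales roughly as capacitance times voltage squared times frequency, and a lower clock usually permits a lower supply voltage. The capacitance, voltage and frequency figures below are illustrative only:

```python
def dynamic_power(cap_farads, volts, freq_hz):
    """Classic CMOS dynamic-power estimate, P = C * V^2 * f."""
    return cap_farads * volts**2 * freq_hz

# One large core at 2.8 GHz / 1.3 V versus four small cores at
# 700 MHz / 0.9 V, assuming equal aggregate throughput and equal
# switched capacitance per core (both assumptions for illustration):
big = dynamic_power(1e-9, 1.3, 2.8e9)
small = 4 * dynamic_power(1e-9, 0.9, 0.7e9)
print("single: %.2f W  distributed: %.2f W" % (big, small))
```

Under these assumptions the distributed arrangement dissipates roughly half the power for the same nominal throughput, because the voltage term enters squared.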

AMC Issues

Along with facing cooling challenges, developers using the ATCA standard will need to look closely at the AMC cards they employ. Many of the modules currently on the market are first-generation designs released soon after the specifications were

Figure 3: The specifications on cooling for ATCA systems dictate the overall airflow path, but choices such as natural or forced convection and the push or pull operation of fans are left open. (Diagram labels: Top Plenum / Outlet Air, Front Board, Inlet Air / Bottom Plenum.)


finalized. As with all new technologies, there are still operational issues that need to be evaluated. One of the issues is the availability of functions. Most of the products available address common system functions such as optical interfaces, T1/E1 interfaces, storage interfaces and system controller/processor. While these are critical functions, they only represent a fraction of the needs of a telecom system. While there has been

some activity in creating AMC modules that are FPGA-based for reconfigurability, most developers will have to face the task of creating their own AMC modules to address specific needs. One of the critical concerns in developing AMC modules is the implementation of shelf management functions as defined in the specification. There is a temptation for developers to scale back or eliminate the management functions that

they do not need for their specific products when creating their custom AMC modules. Doing so, however, will reduce the reusability of those AMC modules in future designs and can create compatibility problems with equipment from other vendors. Much of the system-level functionality in ATCA and MicroTCA resides in its IPMI management software and hardware. This management simplifies the creation of redundant and fault-tolerant systems and is essential to system initialization and control. Because the AMC modules are intended to be individually hot-swappable, the system expects them to have the appropriate management functions available. While developers can specify the response of the system controller they design to modules with scaled-down system management capability, there is no assurance that these modules will work properly with off-the-shelf controllers. Despite these design concerns, however, systems built on the ATCA specification are a good match to the needs of telecom systems. The specifications address the high performance and fault tolerance that telecom requires, while providing system architectural design flexibility to meet a wide range of application needs. The AMC module approach allows board-level design to be equally flexible, and fosters design reuse to lower system costs. To gain these benefits, however, developers must carefully consider system power and the corresponding cooling approaches to ensure that their designs will tolerate an installation crowded with other vendor equipment. They also should ensure that their custom AMC card designs fully implement the system management specifications to ensure interoperability with other, and their own future, ATCA system designs. Emerson Network Power Madison, WI. (800) 356-9602. [].




Technology in Context: ATCA System Design Options

ATCA and General-Purpose Processing: Doing a Lot with a Little

With a large form-factor, generous power budget and the growing availability of powerful mezzanine modules, ATCA is shaping up to offer developers a wide range of options for implementing the multimedia communication systems of the future.

by John A. Long, Intel


With the migration from TDM to Internet Protocol, telecom equipment manufacturers, systems integrators, enterprises and small- to medium-sized businesses face a multitude of challenges. In addition to meeting the demands and potential challenges of today’s emerging applications, they must also maintain the flexibility to address legacy TDM, ATM and IP interfaces and allow existing software to port to the new platform. They also need to meet the requirements of reliability with 99.999% high availability, redundancy, manageability and PSTN-quality while taking advantage of the processing power and roadmap of general-purpose silicon. They must begin to add unique value at a higher level, which means minimizing the time and expense of new hardware development. To meet these requirements and provide the flexibility of highly configurable, reliable and low-cost operating environments, many end-users, customers and their integrators have turned to industrial computing platforms that utilize the ever-


increasing, compute-intensive processing capabilities found in mainstream servers. There are several different types of these industrial computing platforms, from rackmounted servers to proprietary bladed and open standard architectures. While their core processing capabilities are the same, they vary significantly in backplane architecture, use cases for input/output (I/O), scalability, price points, cooling, power and footprint. For example, rugged, rackmounted chassis designs have optimized power and cooling capabilities to the point where OEMs can safely and cost-effectively deploy reliable, high-density systems for use in many specialized business-critical arenas (Figure 1). Today, standards-based products are available from a wide variety of vendors allowing TEMs to develop carrier-grade solutions that enable scalability from access to core. In turn, that has progressed to modular environments that use industry standards-based communications infrastructure platforms and building blocks

that enable efficiencies through the entire value chain, including solution flexibility, faster time-to-market, vendor choice and cost benefits. The latest evolution of modularity in communications comes in the form of the Advanced Telecom Computing Architecture (ATCA) specification. Developed by over 100 companies in the PICMG, ATCA embodies the group’s mission to provide a framework for high-performance, carrier-grade solutions built with standards-based building blocks. ATCA blades measure 322 mm (8U) x 280 mm with a 1.2 inch (30.48 mm) board pitch. The power budget and size allows each ATCA blade to pack a great deal of functionality into a single slot, reducing system footprint and implementation costs. For instance, a complete media gateway system could be implemented on a single ATCA blade. Conversely, that application might require two to three CompactPCI/2.16 boards to implement at the same channel density. The basis behind ATCA is by no means new. Open standards enabling the



Figure 1: Modern Communication Platforms based on open standards enable carriers to cost-effectively expand their networks and deliver new services with faster time to revenue and OpEx efficiencies. (Efficiencies • Choice • New Services)

use of commercial-off-the-shelf components in the telecom infrastructure have certainly been around for the better part of a decade. Prior to ATCA, TEMs turned to CompactPCI to speed deployment of Service Delivery Platforms (SDPs) and hold down costs. But CompactPCI fell victim to circumstance when the tech recession took the steam out of a slow-building standard. Because of its flexibility and time-to-market advantages, the ATCA standard is seeing broad adoption by leading equipment manufacturers and service providers around the world. The basic elements of the ATCA form-factor consist of front boards, backplanes, the sub-rack and the shelf. The front boards, which define power connection and shelf management, data transport interface and user-defined I/O interconnect, are capable of utilizing a maximum of four Advanced Mezzanine Cards (AMCs). The backplane, which is designed to accommodate anywhere from 2 to 16 front board slots, distributes power and manages metallic test bus, ring generator bus and low-level shelf management signals. The specification dictates that systems are capable of dissipating as


much as 200 watts per single-slot board and further defines everything from airflow cooling to shelf management. The shelves comply with Network Equipment Building Systems (NEBS) standards and are rackmountable to European Telecommunications Standards Institute (ETSI) specifications. The new architecture has resulted in second generations of high-speed switched fabric with peak throughput of 10 Gbits/s, 10 times higher than the peak throughput of CompactPCI. The ATCA fabric supports full-mesh interconnect and is also protocol-agnostic, capable of supporting Ethernet, InfiniBand, PCI Express and/or RapidIO. As such, ATCA provides a reliable standardized platform architecture for carrier-grade communications functionality without sacrificing the high availability and manageability associated with costly proprietary hardware. The success of ATCA certainly lends new fuel to the “Buy” side of the age-old decision, “Build vs. Buy.” For example, Siemens Communications Mobile Networks division—soon to be Nokia Siemens Networks—has turned to Intel-based building blocks to support multiple

next-generation radio network controller configurations, from a few boards in a single ATCA chassis to many boards across multiple interconnected chassis. Its RNCi product family is designed with Intel NetStructure IXB28XX 3G Boards, an integrated, high-performance, high-density data plane solution for RNCs. The boards feature the Intel IXP2800 series of network processors with embedded, carrier-grade RNC data plane software in an ATCA form-factor. General-purpose processor boards that are based on a 2.8 GHz Low Voltage Intel Xeon processor and support dual AMCs are used for the control plane software. To bring even greater flexibility to ATCA, PICMG members and industry evangelists also developed the aforementioned mezzanine card standard known as Advanced Mezzanine Card (AMC), a hot-swappable add-on that fits into ATCA. These AMC modules are now in their second iteration (version 2.0) and have all the system management and data bandwidth of a full ATCA blade. In essence, AMC allows a system designer to build a complete system on a blade. Up to four mid-sized AMC modules


can be put onto an ATCA blade, extending modularity for telecom OEMs to the functionality and feature sets. AMC modules come in a variety of different sizes ranging from single-wide to double-wide and half-height to full-height. A single-wide module is 72.9 mm x 183.5 mm and a double-wide module is 147.9 mm x 183.5 mm. Since the original spec was introduced, PICMG has also announced a new mid-sized module (formerly Engineering Change Number, ECN-002). The power dissipation for each module ranges from 24 to 48 watts for a single-wide/half-height module (called “compact”), 30 to 60 watts for the mid-sized, and 48 to 80 watts for the full-height module. This size and power budget flexibility lends AMC to a range of functions, from advanced server-class processors to relatively simple Ethernet or non-intelligent T1/E1 interface boards. Combined with AMCs, the benefits of modularity became so apparent that PICMG actually reverse-engineered the ATCA spec to make use of AMCs in a smaller form-factor known as the MicroTCA platform. Utilizing AMC modules plugged directly into a backplane, MicroTCA targets edge and access applications, customer premises equipment (CPE) and other applications where cost and size are major constraints, including data centers, industrial control and medical. The MicroTCA specification calls for several form-factors from 19-inch wide x 300 mm deep x 6U (266.7 mm) high to ultra-small cube (200 mm per side) and pico board-mounted configurations. The MicroTCA specification was released on July 24, 2006, and the short form followed on October 3, 2006. MicroTCA provides two redundant Gigabit Ethernet (GbE)-based links in the AMC common region fabric via a pair of MicroTCA Carrier Hub (MCH) modules (one GbE star per MCH). In addition, the MCHs can provide what is known as a Fat Pipe fabric. The Fat Pipe fabric provides high-speed connectivity for the AdvancedMC modules, giving up to eight protocol-independent 12.5 Gbit/s SERDES-based lanes per module.
These Fat Pipe lanes can run protocols such as Serial RapidIO, PCIe, GbE or XAUI, commonly organized into two independent dual-star groups of four

lanes each (one group of four lane stars per MCH). The combination of the connectivity for the Common Region and Fat Pipe region can therefore provision an aggregate bandwidth of 1224 Gbits/s for a MicroTCA shelf populated with 12 AMC modules [(2 lanes * 1 Gbit/s + 8 lanes * 12.5 Gbit/s) * 12 modules = 1224 Gbit/s aggregate bandwidth].

Further performance gains are just now being realized by the recent introduction of multicore processing into the communications infrastructure. Now the ability to attract and retain subscribers rests squarely on a new generation of disruptive technologies such as unified messaging, Voice over IP, video, media and signaling gateways and/or call control. Service providers and network operators can take comfort in the arrival of multicore processing to extend their SDP lifecycles. In addition, carrier-grade rackmount servers deliver an ideal solution for the demanding environment and limited space of central offices, highly available data centers and rugged environments.

For example, ruggedized computer maker Kontron recently found that the performance gains of dual-core processing could easily transfer onto an ATCA processing node. By integrating 12 such nodes in a 14-slot ATCA system, a total of 516 concurrent streams or channels per system could be realized on a highly dense processing system within a 12U footprint. In fact, boards based on the Intel Xeon Dual-Core processor are available from several board vendors including Intel, Kontron and RadiSys. The result is an open modular processing platform that will increase the number of deployments of ATCA solutions at the heart of every compute-intensive mobile IP Multimedia Subsystem (IMS) network element, from the transcoding of live multimedia mobile content on a Multimedia Resource Function Processor (MRFP) to concurrent processing of subscriber data on Home Location Register (HLR) systems.
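The aggregate-bandwidth figure quoted above can be reproduced with quick arithmetic, taking the per-module lane counts and rates exactly as the specification-derived text states them:

```python
# Per AMC module: two 1 Gbit/s common-region GbE lanes plus eight
# 12.5 Gbit/s Fat Pipe lanes, summed across a 12-module shelf.
common_region = 2 * 1.0   # Gbit/s per module
fat_pipe = 8 * 12.5       # Gbit/s per module
modules = 12

aggregate = (common_region + fat_pipe) * modules
print(aggregate)  # 1224.0 Gbit/s, matching the figure in the text
```

Note this is raw provisioned lane bandwidth, not usable throughput; protocol overhead and the dual-star topology determine what an application actually sees.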
On the AMC side, multicore processing is moving one step further, incorporating Dual-Core 64-bit processors into AdvancedMC. Using an ATCA carrier that supports four single-width, full- or mid-sized AdvancedMC processor modules (each one populated with one Dual-Core processor and up to 4 Gbytes of memory), the potential result is essentially a doubling of performance per node, from 43 to 86 or more concurrent streams processed. Going back to our 14-slot ATCA system, this could conceivably equate to 1,032 concurrent audio/video channels streamed across 12 slots.

Of course, the value of any given network is directly proportional to the number of people using it. That same principle holds true for open standards-based modular communications platforms. To this end, organizations like the SCOPE Alliance and the Communications Platform Trade Association (CP-TA) have been formed to help develop an ecosystem that delivers upon certified interoperability. CP-TA's mission is not only to support key specifications developed by other standards bodies but also to develop documentation and certify building block compliance to interoperability test requirements. This includes platform compliance to SCOPE profile requirements, which it is hoped will foster industry preference for certified interoperable building blocks through ecosystem collaboration.

As demand for new personal communications and digital entertainment services continues to grow exponentially, it is clear that telecommunication platforms based on closed proprietary frameworks that merely emphasized network availability will give way to the modularity characteristic of ATCA and its family of open standards. Within the Internet and VoIP infrastructure, many operators are testing new services to stimulate markets that satisfy consumer demand on the path to new revenue-generating services and applications. Adding the power of multicore processing to modular communications platforms will only accelerate the revolution.

Intel, Santa Clara, CA. (800) 765-8080.
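The per-node and per-shelf stream counts above follow directly from the article's figures (43 streams per dual-core node, doubling to 86 with four AdvancedMC modules per carrier, across 12 payload slots). A minimal restatement:

```python
# Scaling of concurrent media streams in a 14-slot ATCA shelf
# (12 payload slots), using the per-node figures quoted in the text.

PAYLOAD_SLOTS = 12  # processing nodes in a 14-slot ATCA shelf

def shelf_streams(streams_per_node: int, slots: int = PAYLOAD_SLOTS) -> int:
    """Total concurrent streams for a shelf of identical nodes."""
    return streams_per_node * slots

# One dual-core node per slot:
print(shelf_streams(43))  # 516 streams per system
# Four AdvancedMC dual-core modules per carrier double the per-node figure:
print(shelf_streams(86))  # 1032 streams per system
```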

February 2007


SolutionsEngineering 10 Gigabit Ethernet Solutions

FPGA-Based Stack Acceleration and Processor Board Architecture Underpin 10GbE Performance

The use of FPGAs can be insurance against disaster in getting the true potential performance out of implementing 10 Gigabit Ethernet.

by Rob Kraft, AdvancedIO Systems




10GbE realizes the long-coveted goal of having an accessible, off-the-shelf fat data pipe that uses a widely accepted and deployed data transfer protocol: UDP. But to smoothly utilize that pipe in the embedded space you need some key foundations such as protocol acceleration, processor carrier board and driver architecture, and one that may seem particularly non-intuitive: FPGA-based technology.

The "10" in "10GbE" can sneak up on you insidiously. It's an order of magnitude faster than the ubiquitous 1GbE, and the same "legs and preparation" that carried you at 1GbE won't necessarily support a 10x speed increase. The processor that happily ran your signal processing algorithm while concurrently running the 1GbE stacks may grind to a halt when you try to run that stack for a 10GbE port.

A brief survey of some of the 10GbE literature reveals that approximately 0.5 to 1 GHz of processing is required to process 1 Gbit/s of data transfer. Tests on file servers using "standard" 10GbE server network interface cards have shown that when performing TCP transfers through their native stacks, even 2.2 GHz processors can be close to 100% utilized while only achieving rates of less than 5 Gbits/s. This holds even when sending large payload block sizes in excess of 64 Kbytes. Basically, the processor consumes significant numbers of processing cycles in the course of traversing the protocol stack, assembling the data into properly formatted packets and calculating checksums. The processors become saturated while attempting to perform these cycles, even before they are able to reach the 10 Gbit/s rates.

Assuming you were expecting the processor to do more for you than just move data, it needs to find a way to accelerate the process of going through the protocol stack. Fundamentally, it can accomplish this either by modifying the stack or by subcontracting the act of running it (unchanged) to a specialized external protocol engine, or intelligent engine. The latter approach is often preferable in order to maintain compatibility with the ubiquitous Ethernet installed base. The intelligent engine could be based on an ASIC, network processor or FPGA. In a typical ideal flow, the processor would
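The 0.5 to 1 GHz-per-Gbit/s rule of thumb quoted above makes for a quick feasibility check. A short sketch (the rates are the article's; the helper is illustrative):

```python
# Back-of-the-envelope check of the stack-processing rule of thumb:
# roughly 0.5 to 1 GHz of CPU per 1 Gbit/s of software-stack throughput.

def ghz_required(rate_gbps: float, ghz_per_gbps: float) -> float:
    """CPU clock (GHz) needed to push rate_gbps through a software stack."""
    return rate_gbps * ghz_per_gbps

# A full 10 Gbit/s link would need 5 to 10 GHz of processing:
print(ghz_required(10, 0.5), ghz_required(10, 1.0))  # 5.0 10.0

# Consistent with the file-server tests cited above: a 2.2 GHz CPU at 100%
# utilization tops out below 5 Gbit/s even at the optimistic end of the rule.
print(2.2 / 0.5)  # 4.4 Gbit/s
```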




Figure 1

[Diagram: protocol stack layers from top to bottom: Application (payload), Standard Socket API, Transport (UDP/TCP, other), Network (IP), Link, Physical, 10GbE.]

The intelligent Ethernet protocol engine effectively moves the border between the processing run on the processor (portions above the solid line) and on the 10GbE hardware (portions below the solid line). The dashed line represents the original location of the border when stack processing was primarily occurring on the processor.
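The point of keeping the border at the standard socket API is that application code stays the same whether the stack runs in host software or on an intelligent engine. A minimal UDP sender using the standard socket interface (the loopback address and port number here are placeholders, not from the article) might look like:

```python
import socket

# The application only sees the standard socket API; whether the UDP/IP
# processing below this call runs in the host's software stack or on an
# intelligent 10GbE engine is invisible at this level.
def send_payload(payload: bytes, addr=("127.0.0.1", 9999)) -> int:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        # Returns the number of bytes handed to the stack for transmission.
        return sock.sendto(payload, addr)
    finally:
        sock.close()

print(send_payload(b"sensor frame"))  # 12
```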

shovel payload data to or from the intelligent engine, which would take care of wrapping it in and unwrapping it from the protocol. Figure 1 illustrates this shifting of the stack processing from software to hardware, corresponding to the nature of the stack acceleration process.

In the real-time embedded system space, processors often have greater power and resource constraints than in the desktop or server space. It is also common to see processing-intensive applications that utilize numerous distributed processing elements all running at high utilization rates. Therefore, in the embedded space, the intelligent protocol I/O device has gone from a "convenience" to a "necessity." Previously, the choice to use an intelligent Ethernet I/O engine rather than running the protocol on the processor itself may have been motivated by schedule or ease of development. However, at 10 Gbit/s speeds, running the standard protocol on the processor itself is simply not an effective option. No matter how much you optimize all the other software, the 10GbE stack processing will still be a significant bottleneck.

Board and Driver Architecture

Having established that external intelligent protocol stack acceleration is one of the foundations for effectively using 10GbE, we now turn to a discussion of adding that capability to an existing off-the-shelf (or custom) processor carrier board by using a mezzanine module.

The architecture of processor carrier cards varies widely in the embedded space. This variety is a bit of a double-edged sword because it means that users must pay particular attention to this architecture when selecting the carrier to ensure it will meet their application needs, and specifically those of a high-performance I/O interface like 10GbE. One of the primary considerations is whether the speeds or latencies of the interconnect between the 10GbE module and the onboard data source/sink match those of the 10GbE link (or at least match the bandwidth required, in the event that you are not intending to use the 10GbE link at its maximum capacity). The theoretical maximum 10GbE payload rate per direction will be just under 1.25 Gbytes/s. Double that rate to 2.5 Gbytes/s aggregate bandwidth for full duplex communication.

In the embedded space, a variety of proprietary and standard buses and fabrics are being used for interboard and interprocessor communications. For good reasons, many processor/carrier designers try to stick to the standard interconnects to maximize the number of expansion modules they can use. Currently, PCI and PCI-X are among the most popular buses being employed, with fabrics like PCI Express (PCIe) and RapidIO appearing more and more frequently. For matching purposes, we will consider maximum theoretical bus/fabric data rates. These rates do not account for headers and other protocol overhead except for 8B/10B encoding. PCI 64-bit/66 MHz gives 528 Mbytes/s simplex; PCI-X 133 MHz gives 1064 Mbytes/s simplex (PCI-X 100 MHz is 800 Mbytes/s); 4-lane (4x) PCIe is 1 Gbyte/s full-duplex and 8-lane is 2 Gbytes/s full-duplex; Serial RapidIO (SRIO) 4-lane (4x) is up to 1.25 Gbytes/s full-duplex.

PCIe and SRIO, with their full-duplex capability, present good matches to the data rates required to source or sink data from the 10GbE link at near maximum rates. The PCI-X bus rate is about half of the aggregate 10GbE rate, and PCI 64/66 is about a quarter. PCI-X, and to a lesser degree PCI, are certainly suitable for applications involving lower sustained rates and/or applications requiring primarily unidirectional data transfers, but greater attention needs to be paid up front when considering them for use. Figure 2 shows an example of a carrier equipped with the interconnects and an architecture appropriate to the needs of a 10GbE intelligent module.

Figure 2

[Block diagram: dual 4 Gbit/s Fibre Channel interface (copper/optical) and Gigabit Ethernet; PowerPC processor (AMCC 440SP); PCI-X 32/133 and PCI-X 64/133 buses; PCIe-to-PCI-X bridge; PCI Express switch with x4 and x8 links; Xilinx Virtex FPGA with RocketIO ports; VME-to-PCI-X bridge (Tundra Tsi148); P1 VME 2eSST (special build option); P0/P2 I/O.]

VMetro's Phoenix M6000 VXS intelligent I/O controller, equipped with flexible high-performance buses and fabrics, is an example of a carrier suitable for the demanding dataflow of applications utilizing 10GbE I/O.

Beyond pure interconnect speed or latency, there are a variety of other application data flow issues that need to be considered.
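The rate-matching comparison above can be restated as a few lines of arithmetic. The figures are the ones quoted in the text (8B/10B encoding already accounted for); the variable names are mine:

```python
# Maximum theoretical interconnect rates quoted in the text (Mbytes/s),
# compared against the 10GbE payload rate of ~1250 Mbytes/s per direction
# (~2500 Mbytes/s aggregate for full duplex).

TEN_GBE_PER_DIRECTION = 1250            # Mbytes/s
TEN_GBE_AGGREGATE = 2 * TEN_GBE_PER_DIRECTION

# Shared buses (one direction at a time): 64-bit bus = 8 bytes per clock.
pci_64_66 = 8 * 66                       # 528 Mbytes/s simplex
pcix_133 = 8 * 133                       # 1064 Mbytes/s simplex

# Full-duplex fabrics (per direction): one PCIe lane = 250 Mbytes/s payload.
pcie_x4 = 4 * 250                        # 1000 Mbytes/s per direction
pcie_x8 = 8 * 250                        # 2000 Mbytes/s per direction
srio_4x = 1250                           # up to 1250 Mbytes/s per direction

print(pcix_133 / TEN_GBE_AGGREGATE)      # 0.4256, "about half" the aggregate
print(pci_64_66 / TEN_GBE_AGGREGATE)     # 0.2112, "about a quarter"
```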
Among these are whether the source/sink of the data for the 10GbE link is a processor or another device on the bus/fabric, such as an ADC/DAC. If the source/sink is a processor, it often entails double trips across the processor's external SDRAM, which effectively at least halves the available memory access bandwidth. While this is very common, you should account for it in your system bandwidth calculations.

You must also consider the capabilities of the bus system controller: its ability to master transfers, perform DMAs, turn around the bus (for bi-directional traffic), handle interrupts, and bridge between different buses and fabrics. One other consideration that sometimes gets forgotten is where the interrupts are routed. Depending on the answer, you may need to create code on an intermediate device to redirect the interrupt to the desired end target.

While you are immersing yourself in hardware architecture considerations, don't forget to make some time to address the software driver for the intelligent module. Among the considerations are whether the driver relies on DMA by the module or by the host processor on the carrier, whether the driver is capable of addressing bus or fabric addresses outside the host processor's memory space (in order to achieve, for instance, direct transfers between the module and another device on the fabric or bus), and what (if any) code changes or specialized API is required to use the module's capability.

DMAs and Interrupts

When it comes to DMAs, performance is often maximized when a source "pushes" data as opposed to a sink "pulling" the data. So, to accommodate two directions of data flow, it might be useful to have DMA mastering capability on the carrier to push data out to a 10GbE module, and DMA mastering on the module to push incoming data onto the carrier. This may result in different desired end targets for interrupts, which has implications mentioned in the main article.

FPGA-Based Acceleration

Earlier, we had pointed out that there are a few choices for the basis of the intelligent protocol acceleration engine, one of which is an FPGA. Initially, it may appear that from the 10GbE user’s perspective, the choice would be irrelevant. As long as the functionality is achieved, why should they care about the choice



of technology? The answer is that FPGA technology allows for rapid modifications to, and the addition of, 10 Gbit line-rate functionality. Some approaches may allow customization, but not at line rate; others may deliver performance while permitting little customization. The FPGA permits a good balance of both. Some examples of added line-rate algorithms/functionality are bus/fabric protocol bridges and packet inspection/classification decisions that must be made at line speeds (e.g., to reduce data rates by deleting packets or directing where payload data is to be sent). For small or mid-sized production runs, this has the appeal of enabling the same hardware system to increase its capabilities or adapt to new requirements or changes in protocols.

It could be argued that the described "future proofing" flexibility offered by FPGAs, while theoretically attractive, is too far in the future to be of concern in some applications. However, the reconfigurability of FPGA-based technology also offers a more immediate and practical de-risking benefit; a benefit that can realistically make the difference between a failed and a successful system integration stage. There is always a chance that a small oversight in the system specification can cause enormous delays. Suppose, for example, that the payload data coming over the 10GbE link needed to be complemented before any processing could take place on it. Although a very simple operation, it would require too many execution cycles from the processor at the incoming data rate. An FPGA placed in the high-speed path between the 10GbE link and the processor could perform this very simple customized operation on the data as it streams by. But without such a programmable device in the path before the processor, there would not be sufficient memory access or execution cycles left in the processor to accomplish it.

Figure 3

AdvancedIO Systems' V1020 XMC configurable 10GbE connectivity and packet processing module. The currently shipping module, which is based on the Xilinx Virtex-II Pro FPGA, supports both PCI-X and PCI Express, and is an intelligent protocol stack acceleration module.

Figure 3 shows a currently shipping intelligent 10GbE protocol stack acceleration XMC module. When integrated onto a carrier, the module's driver software supports the use of standard socket calls by the host processor.

AdvancedIO Systems, Vancouver, Canada. (604) 331-1600.
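The complement example above is trivial to express; what matters is where it runs. A software model of the bit-flip the FPGA would apply to each payload byte as it streams past (purely illustrative, not from the article) is:

```python
# Software model of the inline transform described above: complement every
# payload byte. In the real system this would be performed by the FPGA in
# the high-speed path, not by the host processor.

def complement_payload(payload: bytes) -> bytes:
    # Equivalent to a bitwise NOT on each byte as it streams through.
    return bytes(b ^ 0xFF for b in payload)

print(complement_payload(b"\x00\x0f\xff").hex())  # fff000
```

Applying the transform twice recovers the original data, a handy property for loopback testing of such an inline stage.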




SolutionsEngineering 10 Gigabit Ethernet Solutions

10 Gigabit Ethernet: The Promise and the Challenge

10 Gigabit Ethernet technology has arrived; it is real, and has the potential to change real-time and embedded systems more dramatically than prior generations of Ethernet technology.

by Jack Staub, Critical I/O


Ethernet continues its never-ending march to higher and higher levels of performance and capability. 10 Gigabit Ethernet (10GbE) is its next step forward and is sure to make a bigger impact on real-time and embedded systems than any prior advancement. 10GbE holds the promise to provide an order of magnitude increase in performance, to maintain compatibility with its prior variants, and to displace the other more specialized data network fabrics. However, today's embedded processors are unable to keep up with the protocol overhead associated with even 1GbE pipes, each of which is capable of supporting 250 Mbytes/s of throughput on a sustained basis. Increasing those pipes to 10GbE, a ten-fold increase in capacity, makes a difficult problem an impossible one. A complete offload of the Ethernet protocol stack to silicon (silicon stack technology) will allow the promise of 10GbE technology to be realized.

Upgrading to 10GbE NICs and 10GbE switches offers 10 times the performance for an array of bandwidth- and latency-constrained applications. Increasing bandwidth by 10 times while reducing latency by 90 percent takes Ethernet to a whole new level of performance and will compel architects of real-time and embedded systems to consider (or reconsider) Ethernet for even the highest performance applications. Add to this the low cost advantages due to its eventual commoditization and out-of-the-box interoperability with prior generations, and you have the "promise" of 10GbE fairly well summarized. On the surface, 10GbE technology is quite compelling.

The Challenge of 10 Gigabit Ethernet

In reality, deploying 10GbE and realizing these benefits will not be easy. It wasn’t too long ago when 1GbE held a

Figure 1

[Diagram comparing a traditional Ethernet interface (software stack) with a fully accelerated Ethernet interface (silicon stack). In the conventional NIC, the host processor runs the application code, OS sockets layer, TCP, IP and network/Ethernet driver layers in software, with the NIC hardware handling only packet management and I/O: the problem is the protocol stack in software. In the silicon stack NIC, the host retains only the application code and a very thin sockets layer with direct memory placement for transfers (zero copy), while full transaction management, DMA and TCP/IP, RDMA and iSCSI run in silicon on the NIC: the solution is the protocol stack in silicon, which results in full rate operation and minimum latency.]

High bandwidth applications of Ethernet are limited by the extraordinary overhead associated with the TCP/IP stack on the host processor. Moving this stack from software to silicon removes the processing load from the host and substantially improves the real-time performance characteristics of the Ethernet interface.


Comparison of Ethernet Technologies

                                    Software Stack          Silicon Stack
                                    1Gb        10Gb         1Gb        10Gb
Sustained I/O Bandwidth Achieved    ~60        ~60          240        2400   (Mbytes/s)
Latency (memory to memory)          125 usec   115 usec     15 usec    5 usec
CPU loading due to Ethernet I/O     (shown graphically in the original table)
Deterministic Behavior              (shown graphically in the original table)
Reliability under heavy load        (shown graphically in the original table)

Table 1

Summary of the Ethernet I/O performance that can be expected from a typical embedded system using 1GbE and 10GbE interfaces, with and without offload hardware. 10GbE provides little benefit over 1GbE when using a conventional software NIC, whereas a 10GbE NIC with silicon stack offload does deliver a 10x improvement.

similar promise only to struggle in delivering on that promise, particularly with regard to real-time and embedded systems where performance requirements proved difficult to achieve. The basic problem with Ethernet has nothing to do with the Ethernet technology itself; the switches and NICs are very capable and reliable. The problem is the software-intensive nature of the TCP/IP protocol stack—the software stack. The software stack is host-processor-intensive and thereby limits the throughput that can be achieved. The throughput capacity (potential I/O bandwidth) of Ethernet NIC technology (1 Mbit/s, 10 Mbit/s, 100 Mbit/s, 1GbE and now 10GbE) has been growing faster than CPU technology's ability to process the protocols associated with the data stream. With each advance in Ethernet technology, state-of-the-art CPU technology falls farther behind. This problem is even worse for embedded systems, which are typically much more constrained with respect to power consumption or thermal dissipation and therefore are less able to simply toss more CPU cycles at the problem, as can be done in high-end server class processing systems.

To illustrate the protocol processing crisis, consider a conventional 1GbE NIC. The TCP/IP protocol stack consumes roughly 10 CPU cycles for each and every byte of data coming into or out of the NIC. Or, viewed from a different perspective, every 1 GHz of a CPU can process about 100 Mbytes/s of Ethernet I/O. Therefore, it would require 100 percent of a 2.5 GHz processor to achieve wire-speed throughput of a 1GbE port (full duplex, 125 Mbytes/s of payload in each direction). And that is only a single port of 1GbE—a dual port doubles this problem.

But this is a theoretical example; in practice, it is not realistic to allocate 100 percent of any CPU to the processing of Ethernet traffic. A reasonable allocation depends on the application. Real-time or CPU-intensive applications like signal processing might allow only five percent; less intensive applications might allow up to 20 percent of the CPU to be dedicated to managing the Ethernet interface and implementing the TCP/IP stack. A 10 percent allocation of a 2 GHz embedded processor to TCP/IP processing would limit that processor to 20 Mbytes/s of Ethernet I/O, which is only eight percent of the potential 250 Mbytes/s bandwidth of a vanilla 1GbE NIC. So using today's CPU technology, a standard embedded CPU can realistically utilize only eight percent of the I/O bandwidth of its built-in 1GbE NIC.

At some point in the future, perhaps 5 or 10 years from now, more powerful embedded CPUs will be able to make full use of that 1GbE interface. But by that time, 100GbE NICs will be available offering 100 times more bandwidth than the processors can keep up with, thus making the CPU loading problem even worse than it is today.
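The 10-cycles-per-byte rule of thumb above drives all of the utilization figures in this section. A quick restatement (the constants are the article's; the helper function is mine):

```python
# The article's rule of thumb: a software TCP/IP stack costs roughly
# 10 CPU cycles for every byte of Ethernet data moved.

CYCLES_PER_BYTE = 10

def io_mbytes_per_sec(cpu_mhz: int, cpu_percent: int = 100) -> float:
    """Ethernet I/O (Mbytes/s) a CPU can sustain through a software stack."""
    cycles_per_sec = cpu_mhz * 1e6 * cpu_percent / 100
    return cycles_per_sec / CYCLES_PER_BYTE / 1e6

# 100% of a 2.5 GHz CPU just reaches 1GbE wire speed (250 Mbytes/s full duplex):
print(io_mbytes_per_sec(2500))      # 250.0
# A 10% allocation of a 2 GHz embedded CPU manages only 20 Mbytes/s,
# i.e. eight percent of that same 250 Mbytes/s potential:
print(io_mbytes_per_sec(2000, 10))  # 20.0
print(io_mbytes_per_sec(2000, 10) / 250)  # 0.08
```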

The Arrival of 10 Gigabit Ethernet

10GbE technology has arrived and is following the familiar adoption path of Ethernet's prior generations. Table 1 shows the impact that can be expected by upgrading from 1GbE to 10GbE technology in a typical embedded system. Unfortunately, without fully offloading the TCP/IP stack processing, 10GbE technology will provide little or no benefit to real-time and embedded systems. After all, if embedded processors today can only make effective use of eight percent of a "conventional" 1GbE NIC, then those very same processors would only be able to utilize 0.8 percent of a "conventional" 10GbE NIC. The CPU overhead required to support the conventional software implementation of the protocol stack is what limits the utilization of a conventional Ethernet NIC today. Increasing the size of the Ethernet pipe ten-fold will not increase the performance of that interface unless something is done to address the limiting factor, which is the software implementation of the TCP/IP protocol stack—the software stack.

The Solution – Silicon Stack Technology

Figure 1 illustrates the use of a hardware (specifically silicon) implementation of the protocol stack. Here the conventional TCP/IP protocol stack processing is moved from the operating system of the host processor (software) to hardware (silicon). The overhead on the host CPU is substantially reduced because the host processor is no longer required to expend the 10 CPU cycles processing each byte of information; it needs only to specify that a message be sent, or be notified when a new message arrives. All protocol processing, buffer management, data movement and transaction management is done by the silicon stack hardware. Moving the host responsibilities away from conventional byte-level software processing to transaction-level processing allows Ethernet to achieve a level of efficiency and performance that is typically only associated with the more exotic network fabrics such as InfiniBand, Fibre Channel and Serial RapidIO. Silicon stack technology enables a processing system to actually make full use of 10GbE technology.

There are many benefits of the silicon stack approach. Wire-speed throughput is achieved because the silicon implementation is designed to handle full rate I/O without the potential of being overwhelmed by the data; latency is reduced to a fraction; reliability under heavy load is improved substantially since the likelihood of losing packets due to overwhelmed software is eliminated; and determinism is improved since the need for retransmission (which often occurs in software-based stacks under high load conditions) is greatly reduced.

Table 2

[The original table compares, for each system class, power (watts), system cost, I/O rate (Mbytes/s), I/O CPU load, watts per Mbyte/s and dollars per Mbyte/s. The 1Gb systems are: a server grade system with on-board 1Gb and one with PCIe 2x1Gb with TOE (both 2.4 GHz dual CPU); a telecom grade system with on-board 1Gb and one with AMC 2x1Gb with TOE (both 2 GHz low power); and a military grade system with on-board 1Gb and one with PMC 2x1Gb TOE (both 1.2 GHz low power). The 10Gb systems are the corresponding server grade (PCIe 10Gb, with and without TOE), telecom grade (AMC 10Gb, with and without TOE) and military grade (XMC 10Gb, with and without TOE) configurations. The numeric cell values did not survive reproduction here. Note: it is assumed that the host CPU loading related to the Ethernet interface and any associated processing of the TCP/IP stack would be limited to a maximum of 20% for all systems.]

Analysis showing the cost of I/O performance for 1GbE and 10GbE interfaces, with and without silicon stack offload hardware.

The Cost of Performance

In designing a system to handle a large amount of Ethernet traffic, one must consider the various approaches to solving that problem. Depending on system requirements, it may be more cost-effective to add processors; but often it is more cost-effective to add specialized offload hardware. Ethernet NICs with silicon stack technology can be used selectively on processor nodes that need the unique performance they offer, while conventional Ethernet interfaces can be used everywhere else. This allows designers to minimize the overall system cost. In contrast, a specialized network fabric (like InfiniBand) would require all nodes to incorporate the additional hardware.

Many embedded systems are thermally constrained. Low-power CPUs are often desired, and as a result, fewer cycles are available for Ethernet processing. Here, offload technology has an even greater payback since it can allow the designer to minimize the "thermal cost" of the system. Table 2 provides an analysis of the dollar and thermal costs of I/O bandwidth for various 1GbE and 10GbE systems. Cost is computed in terms of dollars per unit of Ethernet bandwidth (dollars per Mbyte/s) and also watts per unit of Ethernet bandwidth (watts per Mbyte/s). As shown in Table 2, the payback of silicon stack offload is greater for 10GbE interfaces than for 1GbE interfaces. 1GbE offload reduces costs from roughly $150 per Mbyte/s to roughly $20 per Mbyte/s, and 10GbE offload takes costs down to roughly $6 per Mbyte/s. The table also shows a similar thermal cost reduction.

Network bandwidth is growing at a faster rate than the ability of CPUs to process the increased data. Network offload technology is quickly moving from a "nice to have" to a "must have" feature, particularly for data-intensive server applications. Moving the TCP/IP stack from software to silicon dramatically improves the performance and reliability of the Ethernet connection, taking Ethernet to the same performance realm as specialized network technologies such as InfiniBand, Serial RapidIO and Fibre Channel. Full silicon offload of the TCP/IP stack is useful for certain 1GbE applications but an absolute necessity for all 10GbE applications. Software stack implementations will not deliver the high throughput, reliable data transfer and low latency that 10GbE offers. Finally, silicon offload is much more cost-effective and thermally efficient than tossing additional processors at the I/O bandwidth problem.

While 10GbE holds the promise of greater performance and compatibility, embedded systems architects must understand how to overcome its inherent challenges in order to fulfill this potential and achieve the most effective use of the technology.

Critical I/O, Irvine, CA. (949) 553-2200.
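The cost figures above can be turned into a simple payback comparison. The input numbers are the article's rough approximations; only the ratios below are derived:

```python
# The article's rough cost-of-bandwidth figures, in dollars per Mbyte/s
# of sustained Ethernet I/O.

COST_1GBE_SOFTWARE = 150   # $/(Mbyte/s), 1GbE with a software stack
COST_1GBE_OFFLOAD = 20     # $/(Mbyte/s), 1GbE with silicon stack offload
COST_10GBE_OFFLOAD = 6     # $/(Mbyte/s), 10GbE with silicon stack offload

# Relative payback of offloading versus the software-stack baseline:
print(COST_1GBE_SOFTWARE / COST_1GBE_OFFLOAD)   # 7.5x cheaper per Mbyte/s
print(COST_1GBE_SOFTWARE / COST_10GBE_OFFLOAD)  # 25.0x cheaper per Mbyte/s
```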


SolutionsEngineering 10 Gigabit Ethernet Solutions

Consolidating Network Fabrics to Streamline Data Center Connectivity

Cost and performance issues are pushing developers to seek convergence of interconnects in data centers. Both 10 Gigabit Ethernet and InfiniBand appear to have potential, but the demands are militating against Fibre Channel.

by Dan Tuchler, Mellanox Technologies


Data centers have evolved over time to accommodate a variety of devices and interfaces. For storage, Fibre Channel is firmly established as the de facto standard. For server-to-server connectivity, both Ethernet and InfiniBand are used. And Ethernet is the common language for connecting routers, desktops, WANs, LANs and other devices. While data center managers can and do implement all three technologies—InfiniBand, Fibre Channel and Ethernet—it's not a pretty picture. For example, imagine servers populated with three different adapters from three different vendors, running three different drivers. It's an expensive situation that's difficult to maintain, so data center managers are looking at some kind of consolidation to simplify the infrastructure and reduce costs (Figure 1).

Fundamentally, data center architects must meet several different connectivity requirements, but they also want to maximize the performance of their server investments while minimizing the risk of unproven technologies. Before we go into the requirements for network fabric consolidation, let's take a brief look at the three main connectivity choices: Ethernet, InfiniBand and Fibre Channel.


Ethernet

Ethernet and the TCP/IP protocol suite have become so broadly deployed that they are commonly used across the WAN, across vast disparities in the computing power of the attached end devices, and over a dizzying array of intermediate network devices. This breadth of application comes at a price, though—engineered for flexibility and ubiquity, Ethernet has required some tradeoffs that preclude its full optimization for more specific uses. And TCP has been extended so much that it is no longer easy to implement in a dedicated device (more on that later). Ethernet convergence at speeds below 10 Gbits/s may be useful only for lower-performance applications.

10 Gigabit Ethernet (10GbE) Network Interface Cards (NICs) initially cost thousands of dollars each, but prices are expected to come down in 2007. Ethernet standards are governed by the IEEE, which has recently decided that the next speed step will be 100 Gbits/s and is expected to ratify that standard in 2010. Recently ratified options for running over “Augmented Cat 6” cable will enable products using the familiar RJ45 connector in 2007. 10GbE can run today over CX4 copper cable or fiber optics.


InfiniBand

InfiniBand is a standards-based, low-latency, high-bandwidth interconnect, created specifically to address the problem of connecting servers and storage in close proximity to each other. While TCP/IP is general and broad, the InfiniBand transport is optimized for low-latency server-to-server and server-to-storage links and is commonly implemented in silicon to maintain high speed while offloading the host server. Products running at 20 Gbits/s have been deployed in production networks, and 40 Gbit/s products are expected in 2008—about the same time that server PCIe slots will be upgraded to the same speed. 12X InfiniBand switch-to-switch connections will be available in the same time frame at speeds three times faster, so 120 Gbit/s connections will be deployed in 2008. Very large InfiniBand clusters have been deployed, and the technology is now considered mature and low-risk for clusters. The InfiniBand standard is governed by the InfiniBand Trade Association, and standards have been completed and ratified for speeds from 2.5 to 120 Gbits/s using either copper cable or fiber.

Fibre Channel

Fibre Channel is used to connect servers to storage. In networking, a dropped packet can simply be retransmitted, but in storage, lost data and corrupt databases are unacceptable, so buyers are very conservative. This may explain why storage connectivity changes more slowly than any other part of the data center. Fibre Channel devices are only now moving from 2 Gbits/s to 4 Gbits/s. It is interesting to note that many leading Fibre Channel vendors are investing in iSCSI and InfiniBand products, and no convincing consolidation strategy for Fibre Channel has been proposed.

Figure 1

Traditional architecture: each server carries separate Gigabit Ethernet, Fibre Channel and InfiniBand adapters, each cabled to its own switch. Unified network architecture: a single “one wire” PCIe-attached InfiniBand fabric, with an InfiniBand switch and native InfiniBand storage. Shared I/O = lower cost and management.

InfiniBand adapters and switches are the most cost-effective and highest-performance of the three options. Table 1 summarizes the characteristics of these three technologies.

Server-to-Server Connectivity Requirements

Now, let’s look at each data center connectivity requirement in more detail, starting with server-to-server connections. A growing number of applications rely on low-latency, high-bandwidth messaging among a group of servers. For example, cluster applications are optimized to create supercomputer power at a fraction of the cost, and are being used to solve specific problems in fluid dynamics, financial modeling and other areas. Database clusters and virtualized server farms also benefit from server-to-server optimization, because the savings in data movement between servers translates into more computing power for the money.

InfiniBand shines in these applications for several reasons, including bandwidth, latency and especially scale. InfiniBand uses high-performance techniques including remote direct memory access (RDMA) technology, so it typically bypasses the remote host CPU and OS kernel, increasing processor availability and creating maximum efficiency in transferring data from one server to another with latency as low as one microsecond. The switching architecture that has been used to scale InfiniBand networks to thousands of nodes is called a Clos fabric, or full bisectional bandwidth architecture.

Ethernet solutions for node-to-node communications have been roughly modeled after InfiniBand. These solutions duplicate the concept of offloading the transport layer to hardware in an attempt to gain bandwidth and reduce load on the CPU, while also reducing latency to single-digit microseconds. This technology is often called TCP Offload and is done via a “TCP Offload Engine,” or TOE. When low-latency RDMA protocols are layered over TOE, the standard is called iWARP, and this set of technologies is early in maturation, acceptance and OS support. No large-scale 10GbE clusters have been deployed yet, because high-end Ethernet switches are optimized for telco and ISP environments and are much too costly and complex for direct server connectivity.
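The scaling property of the Clos (full bisectional bandwidth) fabric mentioned above follows from a textbook formula: a non-blocking two-tier leaf/spine fabric built from switches of a given port count (radix) supports radix²/2 host ports. The sketch below illustrates that formula only; it is not a description of any specific vendor's switch product.

```python
# Sizing a non-blocking two-tier folded-Clos (leaf/spine) fabric, the
# kind of full bisectional bandwidth architecture used to scale
# InfiniBand clusters to thousands of nodes.

def two_tier_clos_hosts(radix: int) -> int:
    """Max host ports in a non-blocking two-tier fabric of radix-port switches.

    Each leaf dedicates half its ports to hosts and half to spines, and a
    spine can reach as many leaves as it has ports, so capacity is
    radix * radix / 2.
    """
    if radix % 2:
        raise ValueError("radix must be even to split ports 50/50")
    leaves = radix             # one spine port per leaf
    hosts_per_leaf = radix // 2
    return leaves * hosts_per_leaf

print(two_tier_clos_hosts(24))  # 24-port switches -> 288 non-blocking ports
print(two_tier_clos_hosts(48))  # 48-port switches -> 1152 non-blocking ports
```

Note how quickly capacity grows with switch radix: doubling the port count quadruples the non-blocking fabric size, which is why full-bandwidth scale favors high-radix building blocks.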

There are several problems with TOE, starting with the fact that TCP is so broadly deployed and has so many special cases that it is very complex and difficult to get right and produce as a stable product: TOE failed with earlier generations of Gigabit Ethernet, 100 Mbit/s Ethernet, and even 10 Mbit/s Ethernet. The second issue is that CPU processing power keeps increasing and eliminates the need for TOE, especially in this era of multicore processors. Also, TOE uses external memory on the NIC to support “state,” driving up NIC complexity and cost. Finally, the Linux community is opposed to TOE, in part because the proprietary offloaded stacks prevent security updates and open-source review and support.

Another challenge with Ethernet is that running it at 10 Gbits/s over copper cable presents a dilemma—one that didn’t exist when previous Ethernet speeds were developed. Latency is now a primary concern, but the physics of driving data at such high speeds on an 8-conductor cable are not simple. Going from a computer through one switch to another computer requires four 10GBase-T circuits and currently incurs latency of more than five microseconds. In all but the most trivial networks, there are more switch-to-switch hops, adding even more latency. So for latency-sensitive applications, the user is left with a choice of fiber optics or CX4 cable, the same choices as for InfiniBand. In a blade server backplane, traces can be used with either InfiniBand or Ethernet. No vendor has yet proposed using Fibre Channel to interconnect compute nodes.
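The way per-circuit PHY latency compounds across a path can be modeled in a few lines. The per-circuit figure below is an assumed placeholder chosen to be consistent with the "more than five microseconds" through one switch cited above; it is not a measured value.

```python
# Rough model of cumulative 10GBase-T latency: every link in the path
# traverses a PHY circuit at each end, so N switches between two hosts
# mean 2 * (N + 1) circuits end to end.

PHY_LATENCY_US = 1.3  # assumed latency per 10GBase-T circuit (placeholder)

def path_latency_us(switch_hops: int) -> float:
    """End-to-end PHY latency through switch_hops switches."""
    circuits = 2 * (switch_hops + 1)
    return circuits * PHY_LATENCY_US

print(path_latency_us(1))  # one switch: 4 circuits -> 5.2 us
print(path_latency_us(3))  # three switch hops: 8 circuits -> 10.4 us
```

The model makes the article's point concrete: latency grows linearly with hop count, so any non-trivial topology quickly leaves the single-digit-microsecond range that latency-sensitive clusters need.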

Storage Requirements

There are several choices for storage connectivity, with new technologies emerging to enable convergence and backward compatibility. Native InfiniBand storage solutions now entering the market provide the best performance by harnessing the higher throughput of 20 Gbit/s InfiniBand. User APIs are defined using SCSI commands to minimize changes to the applications. Fibre Channel over InfiniBand allows connectivity to existing storage devices, utilizing an external gateway to make a common conversion point for a large number of compute nodes. iSCSI over InfiniBand can be used in a similar way. Ethernet, using a combination of 10 Gbit/s speeds and iSCSI protocols, is starting to gain notice in the storage world and provides an acceptable mix of performance and convergence. However, the TOE engines required to make iSCSI work have the drawbacks previously described, and a TOE engine will be required on each node needing access to storage. Fibre Channel for storage has been proven, is stable and mature, and is the low-risk choice. But with higher costs, a slower performance roadmap, and little prospect for convergence, it is easy to see why Fibre Channel vendors have been embracing iSCSI and InfiniBand in preparation for a possible decline in Fibre Channel products.

I/O and Connectivity Requirements

Ethernet is the champion for connectivity to infrastructure outside of the data center. However, a network deployed using Ethernet and TCP/IP as the convergence fabric would present a conflict. To get good node-to-node performance, iWARP protocols over TOE are needed. Similarly,



Table 1: Basic characteristics of InfiniBand, Ethernet and Fibre Channel.

                                         InfiniBand      Ethernet        Fibre Channel
High Speed Choices                       10, 20 Gb/s     1, 10 Gb/s      1, 2, 4 Gb/s
Latency, user app to user app            1 usec          10 usec
Largest Production Non-blocking Switch   288 ports       64 ports        64 ports
Cabling                                  Copper, Fiber   Copper, Fiber

to get to storage, iSCSI over TOE is used. But to get to the broader network, the TCP stack must be a universal, mature, hardened stack to avoid the large number of security threats that the broader network can expose. It’s naïve to think that firewalls will stop all the threats—multiple defense perimeters are a standard approach. The Linux community has come out against TOE because it creates vendor-specific, closed TCP implementations that are hard to maintain, and data center architects are loath to run separate TCP/IP stacks to connect inside and outside the data center.

InfiniBand has been optimized for servers in close proximity, but not for wide deployment. Thus a gateway would be needed to reach Ethernet backbones, and possibly a second one to reach Fibre Channel storage. These gateways add cost and complexity, but the total cost of an InfiniBand network is typically lower than that of the alternatives. Current products have reached the point of maturity where this option merits serious consideration.

Where We Stand Today

To sum up today’s options for data center fabric consolidation, both Ethernet at 10 Gbits/s and InfiniBand at 20 Gbits/s are reaching a level of completeness that makes them worth considering.

Ethernet offers iWARP over TOE for server-to-server connections, iSCSI over TOE for storage, and a long list of protocols for connectivity. As iSCSI is relatively new, in many cases a gateway is required to connect to legacy storage. Latency is reaching single-digit microseconds. For latency-sensitive applications, 10GBase-T is too slow and CX4 or fiber optics may be required. Scale is still an issue for building full-bandwidth (non-blocking) networks. For 24 ports or less, reasonably priced solutions exist; but for larger networks, it’s necessary to use inappropriately large switches, and this solution has rarely if ever been deployed. Creating networks using blocking switches defeats the purpose of higher-speed gear, so for medium or large performance-oriented groups of servers, both Ethernet and InfiniBand may still be needed.

InfiniBand offers native server-to-server connectivity at the lowest possible latencies, while adhering to published standards and open-source software practices. Storage options include native storage over InfiniBand, or Fibre Channel or iSCSI over InfiniBand using gateways. Using IP over InfiniBand, any existing IP-based application can be supported. Scale is a strong point for InfiniBand—already proven in 4,500-node networks—and InfiniBand switching infrastructure is now accepted and mature. In a blade server environment, using InfiniBand for the backplane and switching, combined with gateways and external switches, provides the most cost-effective and highest-performance total solution. Further, iSCSI and SCSI interfaces provide application compatibility. In fact, in a virtualized environment, applications are not even aware they are running over InfiniBand.

Fibre Channel is not a candidate for network convergence, and may be replaced over time by Ethernet- or InfiniBand-attached storage in a converged environment. As data centers scale their compute resources, the prospect of putting three different I/O adapters into each blade in a blade server, or into each separate node, is looking less appealing. Fortunately, there are some emerging options for simpler and less costly consolidated solutions.

Mellanox Technologies, Santa Clara, CA. (408) 970-3400.



Industry Insight: Power Management and Control

VPX-Based Systems Need Board-Level Power Management Solutions

High-performance VPX and VPX-REDI platforms require sophisticated board-level power management techniques to intelligently configure CPU frequencies and control component power levels.

by Ernie Domitrovits, Curtiss-Wright Controls Embedded Computing


Emerging trends in high-performance applications such as defense and aerospace are driving increased adoption of higher-speed serial interconnects and distributed switching architectures. Designers of subsystems for these applications are also taking advantage of the new generation of faster microprocessors, such as Freescale’s 8641 PowerPC equipped with Serial RapidIO (SRIO) and PCI Express (PCIe) interfaces. These CPUs are ideal for implementing distributed, serial, switched fabric-based subsystems using new standards such as VPX and VPX-REDI. However, along with high-speed serial devices, these CPUs also dissipate more power than earlier, less capable components, making board-level and chassis-level power management more important than ever before. This is especially true for technology upgrades that use legacy enclosures and power supplies.

Until recently, an average 6U SBC typically dissipated 20 to 25 watts of power. With today’s technology and component density, SBCs can dissipate anywhere from 25 to 60W, depending on the specific configuration of the components installed and the functionality required by a given application. This is also true for mezzanine cards, such as high-end graphics solutions based on the new VITA 42 XMC standard. With the demand for high resolution and high-speed serial interconnects to meet the real-time display requirements of graphics and video, mezzanine power is expected to more than double compared to previous generations of graphics solutions.

At the component level, multicore CPUs running at GHz rates and packed with numerous high-speed memory, peripheral and I/O interfaces dissipate the lion’s share of board power. Meanwhile, the latest board infrastructure and peripherals now consist of high-speed serial interconnect devices such as bridges, switches and PHY transmitter/receivers incorporating multi-Gbit-rate SERDES interfaces. These interfaces account for more than 50% of the device’s overall power, which ranges from 2 to 10W, and require special thermal management attention. Over time, increasing component operating frequencies will tend to increase both the maximum power dissipation and the range over which power dissipation can vary for a given board solution.

Figure 1

Even with advanced board-level power management, high-performance systems require high-performance cooling. VPX-REDI supports liquid cooling (VITA 48.3) for removing heat from high-power boards. An example of a VPX-REDI liquid-cooled chassis is Parker Hannifin’s F-Chassis Advanced Cooling System. (Photo courtesy of Parker Hannifin Corp., Advanced Cooling Systems.)


Figure 2

Finite element thermal modeling tools are often used for boards targeted toward harsh environments, as in this example of a 6U board.

Dealing with Power at the Board Level

Until now, the main approach to dealing with increasing power dissipation has been to employ thermal management techniques such as forced-air, conduction and liquid cooling to remove the excess and potentially detrimental heat created by higher power levels (Figure 1). At the board level, vendors supplying products for use in harsh environments, such as those found in fighter aircraft, use finite element modeling tools and highly skilled engineers to deal effectively with the power challenges presented by current and future technologies (Figure 2). Thermal modeling of thermal interface materials and solutions must be an integral part of the design process to ensure that worst-case conditions of environmental and application performance are addressed.

Specifically, for conduction-cooled boards, new thermal management materials are being used to keep component junction temperatures from escalating beyond the manufacturer’s component specifications despite increases in component power. New materials and methods must address system integration challenges. These challenges include reducing card edge-to-chassis slot thermal resistance, where a small amount of resistance can become a major contributor to thermal rise given the higher power new technology brings. In addition, the re-application of thermal interface materials to support field removal and installation of high-power mezzanine modules, such as VITA 42 XMC modules, must be properly performed by users to ensure that thermal management systems remain as effective as the factory installation. Finally, solutions must not reduce a system’s ability to meet electromagnetic compatibility (EMC) requirements during system qualification testing.

Dealing with Power at the System Level

Typically, system integrators employ thermal management techniques at the system level to ensure that system components operate within the manufacturer’s specified temperature limits (Figure 3). This can be especially challenging in technology insertion applications where new, hotter boards must use an older existing chassis. As a result, system designers who want to insert new technology into existing systems often work closely with board vendors that can provide board-level thermal models to support existing system- and board-level thermal analysis, applying their combined expertise to ensure that reduced safety margins are acceptable at both board and system levels. Less desirable alternatives to the use of an existing, unmodified enclosure include beefing up the existing system’s thermal management capability, which results in higher retrofit costs, or leaving a legacy chassis less than fully populated to avoid potential hot spots. Frequently, the options available for modifying existing systems to enhance thermal management are limited. This is especially true in the case of space- and weight-constrained platforms, such as cockpit subsystems, where the small available space restricts chassis redesign and chassis form-factors cannot be changed.

The Solution: Managing Power on the Board

While thermal management continues to be an essential part of any COTS solution, a greater emphasis must be placed on the ability of new solutions to facilitate power management via both software and hardware. Unlike thermal management, which focuses on carrying heat away from electronic components, power management addresses the heat created by the electronic components themselves. Such solutions should allow designers to tailor the performance/power tradeoff of standard board-level modules to the requirements of their specific application. Power management should use techniques that make it easier and less costly to use new technology in existing systems by addressing power dissipation at the device level.

Historically, leading embedded board vendors have employed common I/O across their board product families to eliminate the need for designers and integrators to change the backplane when upgrading to newer technology. Likewise, common software APIs, such as those used in Curtiss-Wright’s Continuum Software Architecture (CSA) initiative, can greatly reduce the need to modify existing application software to support technology insertion. The next opportunity for improving the technology upgrade process is to provide a means to control power dissipation and consumption to maximize the reuse of existing system solutions.

Power management should go beyond simply controlling a board’s CPU frequency to reduce power. Instead, all of its high-power components can potentially be controlled, including switches, bridge chips and peripheral devices. When fully implemented, power management can provide more options for reducing power by enabling new technology to operate at power levels that more closely match those of the technology being replaced. This could result in a reduction of maximum power by as much as 50-60% for a given card implementing extensive power management capabilities. In addition, VPX boards are being designed with a broad range of power management features. These features promise to make it significantly simpler and more cost-effective to use newer, high-performance VPX SBC modules and VPX DSP modules in both new systems

Figure 3


and in technology insertion applications that must operate within the mechanical and thermal constraints of a legacy system (Figure 4). Power management enables system designers to flexibly configure onboard processing and connectivity features, such as SRIO and PCIe components. This can include shutting down unnecessary functions or unused switched fabric lanes. New VPX SBC and DSP designs can offer a range of such power management capabilities, which significantly reduce power dissipation. VPX SBCs designed to the standard 0.8-in. pitch enable the addition of VPX performance and capability to legacy systems while protecting investments in existing VME and CompactPCI chassis. Power management features can also serve to reduce the cost of carrying spare modules and simplify the logistics in doing so by allowing a single SBC hardware configuration to be software-configured to meet a given system’s thermal, performance and functionality requirements.
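The cited 50-60% reduction in maximum power is often the difference between fitting and not fitting a legacy chassis. The quick check below uses invented wattages (the per-slot budget and board power are illustrative assumptions within the 25-60W range mentioned earlier, not vendor figures).

```python
# Does a modern board fit an older chassis's per-slot power budget,
# with and without power management? All wattages are illustrative.

LEGACY_SLOT_BUDGET_W = 35.0  # assumed per-slot limit of a legacy chassis
NEW_BOARD_MAX_W = 60.0       # assumed modern 6U SBC at full performance

def fits(board_w: float, budget_w: float) -> bool:
    """True if the board's dissipation stays within the slot budget."""
    return board_w <= budget_w

# A mid-range 55% reduction from extensive power management:
managed_w = NEW_BOARD_MAX_W * (1 - 0.55)  # 27 W

print(fits(NEW_BOARD_MAX_W, LEGACY_SLOT_BUDGET_W))  # False: won't fit
print(fits(managed_w, LEGACY_SLOT_BUDGET_W))        # True: fits when managed
```

The same arithmetic applies chassis-wide: summing managed board powers against the legacy supply rating tells an integrator whether a technology insertion can reuse the existing enclosure at all.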

Controlling Power Through the API

Figure 3: Thermal management techniques are employed at the system level to ensure that components operate within the manufacturer’s specified temperature limits.

Through the use of a software API, power management enables a system integrator to modify a board’s configuration to ensure that system power handling capabilities can be met. The API enables device functions that are not required by the application to be shut down, or the clock rate of a CPU core to be reduced. Using the API, the CPU’s core frequency can be configured and selected for low-, medium- and high-performance modes of operation based on power/performance requirements. Additionally, software can take advantage of built-in, low-power operating modes offered by some components. The power management API and a defined data structure unique to a board’s specific functionality enable designers to configure a system to meet a desired power value. This configuration can be stored in non-volatile memory so that the board powers up with the selected configuration at every startup. If brief periods of atypically high performance and power must be supported, data from module temperature and CPU die temperature sensors can be used to ensure that unsafe limits are avoided.

A typical power management feature set on a new VPX board includes the ability to disconnect power, lower CPU and peripheral component power modes, and protect against surges during power sequencing. The power disconnect feature provides the ability to power down the board via an external hardware mechanism. Unlike legacy VME, VPX does not depend on a centralized bus master, which eliminates potential system problems when, for example, rebooting drivers while power cycling a board. Instead, VPX, with its support for mesh serial switched fabric architectures, provides for the use of a redundant processor board. For example, in a system containing an array of common processing elements, a spare processor could be kept in a power-off state when not required and then powered on by the system master when it is needed to replace a failed processor card. Some applications require the ability to throttle the CPU’s frequency and/or power as part of a system-level thermal management solution.
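The shape of such an API can be sketched as follows. This is a hypothetical illustration of the concepts described above (mode selection, shutting down unused functions, persistable configuration); the names, modes and frequencies are invented and do not represent Curtiss-Wright's actual interface.

```python
# Hypothetical board power-management configuration object. In a real
# board this state would be persisted to non-volatile memory so the
# board powers up with the selected configuration at every startup.

from dataclasses import dataclass, field

CPU_MODES = {"low": 600, "medium": 800, "high": 1000}  # assumed MHz steps

@dataclass
class BoardPowerConfig:
    cpu_mode: str = "high"
    disabled_functions: set = field(default_factory=set)

    def set_cpu_mode(self, mode: str) -> None:
        """Select a low-, medium- or high-performance operating point."""
        if mode not in CPU_MODES:
            raise ValueError(f"unknown CPU mode: {mode}")
        self.cpu_mode = mode

    def shut_down(self, function: str) -> None:
        """Disable an unused device function, e.g. spare fabric lanes."""
        self.disabled_functions.add(function)

    def cpu_mhz(self) -> int:
        return CPU_MODES[self.cpu_mode]

cfg = BoardPowerConfig()
cfg.set_cpu_mode("low")          # trade performance for power
cfg.shut_down("srio_lanes_4_7")  # hypothetical name for unused SRIO lanes
print(cfg.cpu_mhz())             # 600
```

The point of the data-structure approach is that one stored configuration captures the whole power/performance personality of a board, so a single hardware variant can be shipped and then software-configured per system.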
The CPU lowpower mode feature enables the board’s CPU to operate at lower power modes under software control. This feature also makes it possible to provide a single variant configuration of a board that supports


Figure 4

New VPX board designs, such as Curtiss-Wright’s VPX6-185 SBC (top) and the CHAMP-AV6 DSP engine (bottom), offer a range of power management capabilities that significantly reduce power dissipation.

both low- and high-performance modes to reduce variant/logistics costs. The same model of a board can be configured for different power/performance levels, eliminating the need to order a different model for each power mode variant required. The ability to configure boards for different power/performance levels can provide cost savings and reduce complexity all the way from the subsystem development environment to deployment in the field.

A power management feature set can also provide the ability, under software control, to reduce power dissipation for component functions not being used by a given application. Software control also prevents applications from using a component that has been brought back online until it is ready for operation. In addition, the software takes advantage of the lower-power capability of the devices used. For example, a PCIe switch can be set to lower power modes, and unused SRIO and PCIe switch ports can be disabled.

An additional benefit of a comprehensive power management feature set is support for power surge prevention. This feature enables control of the power-up sequencing of boards in the system. This is important for technology insertion applications using a legacy chassis whose power supplies do not support dynamic performance. These older power supplies may not be able to handle the instantaneous current draw that newer, higher-power boards demand. Sequential system power-on can ensure that the power supply has a waiting period between providing power to each board, eliminating power-up surge currents.
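The sequencing idea above can be sketched as a simple scheduler: stagger board enables into waves so the summed inrush current never exceeds what the legacy supply can deliver. This is a minimal illustration; the inrush and supply figures are invented, and a real shelf manager would also account for settling time and board priorities.

```python
# Group boards into power-on waves whose combined inrush current stays
# under the supply limit; waves would be separated by a settling delay.

def power_on_schedule(inrush_amps, supply_limit_amps):
    """Greedy wave assignment: boards are enabled in index order."""
    waves, current_wave, wave_amps = [], [], 0.0
    for board, amps in enumerate(inrush_amps):
        if wave_amps + amps > supply_limit_amps and current_wave:
            waves.append(current_wave)          # close the full wave
            current_wave, wave_amps = [], 0.0
        current_wave.append(board)
        wave_amps += amps
    if current_wave:
        waves.append(current_wave)
    return waves

# Five boards drawing 8 A of inrush each, on a supply tolerating 20 A:
print(power_on_schedule([8, 8, 8, 8, 8], 20.0))  # [[0, 1], [2, 3], [4]]
```

Powering on two boards at a time keeps the worst-case instantaneous draw at 16 A instead of the 40 A a simultaneous power-up would demand.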

Leading VPX vendors will increasingly turn to sophisticated power management techniques to lower power dissipation. With the ability to intelligently configure CPU frequencies and control power modes on a wide range of components such as switches, bridges and memory chips, power management mitigates heat at the source, reducing the need for costly thermal management strategies. The need for high-bandwidth, serial switched fabric architecture computing is driving interest in new boards designed to the VPX and VPX-REDI standards. Power management techniques will enable system integrators to more quickly deploy these new high-performance platforms in typical environments that include extreme temperatures and limited cooling provisions.

Curtiss-Wright Controls Embedded Computing, Ottawa, Canada. (613) 599-9199.




Executive Interview

“I see ATCA and MicroTCA growing rapidly in the coming year and probably to the detriment of competing technology.”

VadaTech CEO Saeed Karamooz and V.P. of Operations Brad Kennedy on the floor of VadaTech’s new Nevada manufacturing facility.

RTC Interviews Saeed Karamooz, CEO of VadaTech

RTC: VadaTech certainly appears to be a dynamically growing company, opening a new manufacturing facility in Nevada and a new sales office in Alabama. With a fairly focused product line, you also offer to do custom work for customers. Can you give us an idea of how much custom work versus standard activity you do? Also, at what point might a custom or semi-custom design be incorporated as part of your standard product offering?

Karamooz: Since the inception of our company in August 2004, we currently have over 60 products in production. VadaTech primarily focused on making standard products, and we work closely with our customers to stay within the standard industry form-factors. From our 60+ products to date, we have about 14 products that are non-standard. Six out of 14 of these non-standard products are shelf managers for the ATCA. Since the ATCA specification does not address the form-factor for the shelf manager, each chassis vendor has improvised a different form-factor, which makes it difficult to avoid custom form-factors.

RTC: Again, judging from VadaTech’s product offerings, you appear to be very committed to the development of the market for ATCA and AMC-based systems. We keep hearing that ATCA is about to take off strongly, but the start date for that seems to be repeatedly pushed back a bit. What is your sense of when and/or how the market’s acceptance of ATCA will really result in sustained volume growth in sales?

Karamooz: We are seeing tremendous interest in ATCA and AMC-based systems in different market segments, from telecom, for which ATCA was originally targeted, to the military and automation, which are basically interested in MicroTCA. A number of the large opportunities we are involved with will go into volume production in 2007 and 2008. I envision we will see the popularity of ATCA and MicroTCA explode when all the pieces of the puzzle have matured.

RTC: ATCA is a standard that offers many options and configuration possibilities. The macro view of this is whether or not to build a system by integrating a large number of functions (CPU, disk drive, memory, communications processor, Gigabit Ethernet, etc.) onto a single board or to use the ATCA board as a carrier to be configured by adding a mix of AMC boards to define its ultimate functionality. Do you see cases (such as hitting a certain volume of production vs. more specialized system designs) where both of these approaches can be justified, or do you think one or the other will mostly prevail?

Karamooz: A certain amount of basic functionality is typically required on any CPU blade. Even a full-featured blade may not address all system requirements in every case. AMC modules will always be used to provide functionality where needed. There will be a market for both approaches depending on the system requirements. Combining functionality typically lowers cost and is more appealing to customers with high volume. AMC modules provide flexibility where the development of a custom board is not warranted, and they allow customers to meet their requirements from different vendors and not be locked into a single source.

RTC: Having asked the above question, we note that VadaTech’s commitment to ATCA/AMC includes a variety of ATCA carrier boards that are designed not only to include AMC mezzanines, but also to allow the integration of other legacy form-factors such as CompactPCI and VME into an ATCA system. Can you share with us your philosophy—both from a technical and marketing standpoint—on this approach?

offered in the ATCA/MicroTCA architecture. Switched fabric architecture, hot-swap, redundancy, small form-factor and forecasted lower overall system cost are advantages over other competing technologies.

Karamooz: With ATCA/MicroTCA, as with any new technology, it takes time for board vendors to transition their product offerings. In some cases this transition may never happen, especially for modules customers developed in-house. We introduced our line of carrier products to bridge the gap between ATCA and legacy VME, CPCI, PCIe, PCIx and PMC technology. With the different carrier modules for ATCA, the transition to ATCA could happen sooner then later for customers adapting this technology. For example, one of our customers that elected to use ATCA for their system architecture needed a dual high-performance graphic module. They used our ATC105 Carrier, which allows up to two PCIe modules. They used standard graphic products from NVIDIA to be integrated on the ATC105 Carrier. The power budget for each graphic module was 70W for a total of 140W for two on the carrier. Using off-the-shelf PCIe graphics boards was a very cost-effective solution. We have fielded a number of systems using the carrier approach and the customers are very satisfied. Further, as an added feature all of our carrier boards can also be a shelf manager, effectively reducing the cost of the ATCA system.

RTC: There is a move now to develop a rugged specification for MicroTCA. Do you see it possibly competing with established form-factors such as VME or planned standards such as VITA 56 in rugged and even military applications?

RTC: Originally conceived as a mezzanine form-factor in the context of ATCA, AMC has now blossomed out on its own with the development of the MicroTCA specification, which lets developers build entire systems with full shelf management based on the AMC form-factor. How do you assess the growth potential for MicroTCA, and in what areas do you see it moving beyond communications and networking? Karamooz: The military arena has recognized the many benefits of MicroTCA as a system solution to replace some of their older generation product in both battlefield and non-battlefield environments. The ability to handle logistics and configuration management from the system is an enormous cost savings in deployed systems, which is only 48

February 2007

Intel® Pentium®M up to Core™(2) Duo CompactPCI®/Express

Karamooz: While VME will continue to play a large role in legacy military systems for years to come, MicroTCA offers many benefits. We are already seeing military contractors starting to design new systems around MicroTCA. As more and more MicroTCA products become available, it will further fuel the growth in these areas. RTC: In the industrial automation sector, there appears to be a significant growth in the use of small form-factor, X86-based (read PC-like) modules such as PC/104, EBX, COM Express and others. First, is this an area that VadaTech has an interest in participating in? And secondly, what processor architectures do you expect to dominate the networking and communications arena? Karamooz: VadaTech currently has no interest in competing in the PC/104 or COM Express market; however, we have developed several carrier products that use COM Express modules. We currently buy these modules from other companies. These modules are a very cost-effective approach to adding intelligence to a carrier and are easily upgradeable. The PowerPC is certainly very attractive for the networking and communication area, and at the high end, BroadCom 1480 or NPUs will be more feasible. RTC: We continue to see a migration to PC-based systems in a variety of applications from simulation to industrial control. Only a few years ago, PC motherboards, regardless of how they were packaged, were anathema to industrial-control applications. Now we’re seeing an increasing use of these with specialized I/O. Do you believe the PC, in perhaps its most native form-fac-

■ ■ ■

F17 – Core®2 Duo T7400, 2.16 GHz F15 – Core® Duo T2500, 2 GHz F14 – Pentium® M 760/Celeron M 373, 1.2 GHz Side Cards – UART, Multimedia, USB, etc. Compatible 3U Intel® family with scalable computing performance Long-term availability due to easy system adjustment For harsh industrial environments and mobile applications

Visit MEN at Booth #2132 MEN Micro, Inc. 750 Veterans Circle Warminster, PA 18974 Tel: 215.956.1583 E-mail:

ExecutiveInterview tor, will win out over more purposebuilt approaches such as VME, cPCI or MicroTCA? Karamooz: I think we will continue to see PCs used in applications where the I/O counts are small and cost is the main driver. PCs do have their limitations. They are only available for a limited amount of time in the same configuration. It will pose a problem if the configuration has to stay the same for several years. Additionally, PCs are hard to ruggedize, have limited I/O expandability, and do not fulfill the system requirements in application where space and power requirements are constrained. We try to do more with less, in a smaller space, with greater functionality, and faster than we did the year before. CPU solutions will continue to evolve to fulfill these requirements. The markets are diverse, each with a different set of requirements. Several years ago the telecom industry looked at 1U/2U servers to replace VME and CPCI, but this didn’t happen in all applications. Now we’re seeing these same companies define the requirements for ATCA and MicroTCA specifications to meet their customer needs. RTC: COM Express has been getting its share of publicity recently. And, while it represents a slightly different approach to open modular systems, it is still judged by some as competitive with other approaches. What do you believe are the strengths and weaknesses of the COM Express approach and do you think such semi-custom approaches have merit? Karamooz: At VadaTech we have designed several products that utilize COM Express. Since COM Express strictly provides the CPU interface, it allows us to focus on the I/O needs. The CPU technology changes more rapidly then the I/O, so it allows us to upgrade to the new CPU technology without a complete system redesign. COM Express is focused on the x86, which pretty much restricts its use with other processors.

RTC: A recent study showed that the embedded computer business has been growing at over 15% CAGR for at least the past two years. Yet standards-based products such as VME, cPCI, etc. have had less than 5% growth. Only PC/104 and its variants have been able to enjoy the higher rate of growth. Do you anticipate the other standards-based architectures picking up in the coming year, or do you believe the growth will continue with the small form-factor, X86-based architectures?

Karamooz: I see ATCA and MicroTCA growing rapidly in the coming year, and probably to the detriment of competing technology. The pursuit of ATCA/MicroTCA by the telecom, industrial automation and military markets should fuel the growth. At the low end of the market spectrum, the small form-factor X86 will still be the winner.

VadaTech Incorporated, Henderson, NV. (702) 896-3337. [].



Software&DevelopmentTools
Network Security

Securing the Future by Confining the Code

Real security must be mathematically provable. That is not possible for a monolithic operating system consisting of millions of lines of code. A microkernel-based separation kernel can isolate the other elements that would otherwise be included in the OS and keep them safe from attack.

by David N. Kleidermacher, Green Hills Software


The operating system bears a tremendous burden in achieving security. Because the operating system controls the resources (e.g., memory, CPU, devices) of the computer, it has the power to prevent unauthorized access to these resources and information flowing through them. Conversely, if the operating system fails to prevent or limit the damage resulting from unauthorized access, disaster can result.

Operating system security is not a new field of research. Yet today there are no operating systems that have been successfully evaluated at the highest levels of assurance—Evaluated Assurance Level (EAL) 6 or 7—the highest security levels of the Common Criteria, an internationally conceived and accepted security evaluation standard. The high assurance levels are difficult to reach because they require an extremely rigorous development process, formal design and formal proof that the security policies of the system are upheld. One of the reasons for the lack of secure operating systems is the historical approach taken in operating system architecture. Most operating systems attempt to provide a kitchen sink of services, all running in the computer's supervisor mode. A single flaw in the hundreds of thousands or even millions of lines of code running in the kernel can provide complete, unfettered access to all computer resources. In addition, the weak access control and privilege paradigm employed by most operating systems allows simple flaws in application programs to open up the entire system to improper access.

Another serious problem we have is that civil and military organizations are employing operating systems that were never designed for security in the first place. The Common Criteria states that EAL 4 (a low level of assurance) "is the highest level at which it is likely to be economically feasible to retrofit an existing product line." We would all agree that it is a bad idea to trust our critical systems to insecure operating systems. Unfortunately, many of the nation's computer systems that are used to monitor and control plants and equipment in industries such as water and waste control, energy and oil refining are running such operating systems—the same as those running your run-of-the-mill desktop PC. According to Michael Vatis, executive director of the Markle Foundation's Task Force on National Security in the Information Age, "The vulnerabilities are endemic because we have whole networks and infrastructures built on software that's insecure. Once an outsider gains root access, he could do anything. Any given day, some new vulnerability pops up."

Recently, companies in the embedded space have taken a new approach that attempts to divide and conquer the problem of operating system security: the MILS (Multiple Independent Levels of Security) architecture. At the foundation is the MILS separation kernel, a small, real-time microkernel that implements a critical set of information flow, data isolation and damage limitation security policies. The separation kernel realizes these policies by using the microprocessor's memory protection hardware to prevent unauthorized access between partitions and by implementing resource allocation mechanisms that prevent one partition's operation from affecting another (e.g., by exhausting a resource such as memory or CPU time). The information flow policy prevents unauthorized access to devices and other system resources by employing an efficient capability-based object model that supports both confinement and revocation of these capabilities when the system security policy deems it necessary. Higher-level security policies, such as multi-level security (MLS) and secure file management policies, can then be layered on top of the separation kernel as needed.
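The capability mechanism described above can be sketched in miniature. The toy C model below (all names are illustrative; this is not any real separation kernel's API) shows the three properties the text names: a partition can touch only resources it holds a capability for (confinement), other partitions remain isolated, and the kernel can revoke a capability when the security policy demands it.

```c
/* Toy model of a separation kernel's capability check.
 * All names are illustrative, not a real kernel API. */
#include <assert.h>
#include <stdbool.h>

enum { MAX_CAPS = 4 };

typedef struct {
    int  resource_id;   /* a device, memory region, channel, ... */
    bool revoked;       /* set by the kernel to withdraw access */
} capability;

typedef struct {
    capability caps[MAX_CAPS];
    int        n_caps;
} partition;

/* Grant a partition access to a resource. */
static void grant(partition *p, int resource_id)
{
    p->caps[p->n_caps].resource_id = resource_id;
    p->caps[p->n_caps].revoked = false;
    p->n_caps++;
}

/* The reference monitor: access is allowed only through an unrevoked
 * capability, so a partition can never reach a resource it was not
 * explicitly given. */
static bool may_access(const partition *p, int resource_id)
{
    for (int i = 0; i < p->n_caps; i++)
        if (p->caps[i].resource_id == resource_id && !p->caps[i].revoked)
            return true;
    return false;
}

/* Revocation: withdraw a previously granted capability. */
static void revoke(partition *p, int resource_id)
{
    for (int i = 0; i < p->n_caps; i++)
        if (p->caps[i].resource_id == resource_id)
            p->caps[i].revoked = true;
}
```

A real separation kernel enforces these checks in hardware-protected supervisor mode; the point of the sketch is only that the policy itself is small enough to reason about exhaustively.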





Figure 1
[Diagram: user-mode partitions (file systems, device drivers, audio, critical applications, and Windows applications inside Padded Cells) running above the INTEGRITY kernel, which alone runs in supervisor mode on PC hardware.]
In a monolithic operating system, an application's device drivers execute in kernel context, giving possibly compromised applications direct control of hardware. A microkernel architecture with protected address spaces can keep device drivers in user space.
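The contrast the caption draws can be illustrated with a toy sketch of the microkernel side: a hypothetical user-mode driver that reaches the device only through a kernel-delivered message, so a request from an unauthorized partition is rejected before it ever touches hardware. The names here are illustrative, not the INTEGRITY API.

```c
/* Sketch of a user-mode driver behind a message interface.
 * Illustrative only; not a real microkernel API. */
#include <assert.h>
#include <string.h>

typedef struct {
    int  sender_partition;   /* filled in by the kernel, not the sender */
    char payload[32];
} message;

static char device_buffer[32];   /* stands in for the real hardware */

/* The only path to the "hardware": the driver handles one request at a
 * time from the kernel's message queue and applies its own policy. */
static int driver_handle(const message *m, int authorized_partition)
{
    if (m->sender_partition != authorized_partition)
        return -1;                        /* reject foreign senders */
    memcpy(device_buffer, m->payload, sizeof device_buffer);
    return 0;
}
```

Because the driver runs in its own protected address space, even a bug in it corrupts only its own partition, not the kernel or other applications.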

In the area of operating system security assurance, embedded technology is clearly ahead of IT technology. No operating system has ever been evaluated at a Common Criteria assurance level higher than 5. Windows and Linux have been evaluated at assurance level 4. As Jonathan Shapiro, a security expert from Johns Hopkins, has explained, "Security experts have been saying for years that the security of the Windows family of products is hopelessly inadequate. Now there is a rigorous government certification confirming this." In addition, Windows was evaluated against a Common Criteria Protection Profile that carries the following disclaimer: "not intended to be applicable to circumstances in which protection is required against determined attempts by hostile and well funded attackers to breach system security." The reality of monolithic systems, in which the base of code that must be trusted amounts to millions of source lines, is that there is simply no practical way to achieve an acceptable level of security for critical information and infrastructure when such a software platform directly controls the computer hardware. For years, operating system researchers have been convinced that microkernel architectures enable superior system reliability (Figure 1). Green Hills' Integrity, a microkernel-based system designed to achieve EAL 7, is now under evaluation at EAL 6+, the highest security assurance level yet attempted for an operating system. EAL 6+ represents what the United States National Security Agency (NSA) deems the required level to ensure "high robustness": protection of national secrets in the face of attack by highly determined and resourceful enemies. In addition to the formal methods mentioned earlier, part of this evaluation requires withstanding penetration testing by the NSA's own expert hackers, who have complete access to the source code and months in which to carefully craft methods of attack.
This operating system is currently being used in communications devices that manage national secrets and require NSA Type-1 certification, avionics systems that control passenger and military jets, and a wide variety of other safety- and security-critical systems. It appears likely that this critical security evaluation milestone will be reached sometime in 2007. At that point we will have independent affirmation that it is indeed possible to create a system that is hacker-proof.

Figure 2
The Integrity Workstation Component Architecture can create protected areas for whole operating systems and critical applications in protected memory space, where they have access only to devices allowed by the system's security policy.

Application to IT

Can an operating system used historically in embedded systems help the plight of IT security? The answer is clearly yes. The MILS architecture allows us to build arbitrarily complex computing systems that can still maintain the same level of secure separation between components as is used in embedded systems. One realization of this concept is an IT technology platform designed for the military, intelligence and security-critical infrastructure communities called the Workstation. The Workstation uses MILS security policies to implement a multi-level secure (MLS) window manager on top of the high-assurance microkernel. There are other MILS components, such as a high-assurance journaling file system, that are also built into the system. The MILS components that make up a final Workstation can be selected by system designers as needed. If the system does not require a secure Web server, then there is no need to go through the pain of evaluating one. MILS components can be independently evaluated at the highest assurance level and can come from multiple vendors. This software runs on off-the-shelf PC desktop, server and laptop computers. But what about the Windows or Linux/UNIX and other ITtype environments upon which we have come to depend? The Workstation includes a secure virtual machine technology, called Padded Cell. Unlike other virtual machines designed for IT systems, Padded Cell runs completely in user mode and therefore need not be evaluated at high assurance levels. In fact, a Padded Cell is used to virtualize an instance of a guest operating system and therefore only requires the same (low) level of assurance as the guest. Multiple instances of Padded Cell can be used to run

multiple instances of such operating systems on the same computer at the same time (Figure 2). Windows and its applications are unmodified: the exact same system and applications that IT users are familiar with today. Yet by using high-assurance MILS separation, we can prove that information in one Windows environment cannot improperly leak into another Windows environment, and that a Windows environment cannot gain access to devices for which it is not authorized by the system security policy. Furthermore, software developers can create new, highly secure applications that run directly on the high-assurance microkernel, alongside yet protected from the legacy Windows environment. This Workstation system is already being deployed in environments requiring multi-level security. It is only a matter of time before the world at large realizes the promise of the model in which we compose computing platforms using a combination of minimalist security components and virtualization mechanisms to provide both legacy environment support and high-assurance security for critical data, systems, resources and people. Green Hills Software, Santa Barbara, CA. (805) 965-6044. [].


Large IT Systems Can Be Breached

In July 2006, Matthew Fordal, AP technology writer, reported in "Backbone of Networking Compromised" on a critical security flaw in the Cisco software that routes much of the data over the Internet. According to Fordal, "researchers discovered a technique that could allow someone to seize control of a Cisco router by exploiting a vulnerability in its operating system." In August 2005, Tarmo Virki of Reuters, in "Sorry I'm Late, My Car Caught a Virus," reported how automotive industry officials and analysts have concluded that "hackers' growing interest in writing viruses for wireless devices puts car computer systems at risk of infection." Virki went on to quote a security expert, stating that "sooner or later the hackers will find the vulnerability in the operating systems of onboard computers and … will definitely use it." For decades, the world has looked toward IT security firms to bridge the gap between IT operating systems and the security confidence sought by consumers. While no one doubts that these firms are dedicated to improving security and have provided valuable improvements, the long-term prognosis is grim. In May 2006, CNN reported that "Symantec Corp's leading antivirus software, which protects some of the world's largest corporations and U.S. government agencies, suffered from a flaw that let hackers seize control of computers to steal sensitive data, delete files, or implant malicious programs." Soon after, in August 2006, AP reported that "consumer versions of McAfee Inc.'s leading software for securing PCs are susceptible to a flaw that can expose passwords and other sensitive information stored on personal computers."


Products&Technology

PMC/XMC Is High-Performance Wideband Digital Receiver

In high-performance radar, SIGINT and ELINT applications, extracting clear signals from electronic clutter can be quite a challenge. What's needed is a mixed-signal bridge between analog signals and digital processing, such as the Echotek Series ECV4-2 family of mixed-signal PMC and XMC wideband digital receivers from Mercury Computer Systems. With input clock capability of up to 1.5 GHz, the ECV4-2 family implements a flexible FPGA-based architecture in the space-efficient PMC/XMC form-factor. Each module provides unique I/O connectivity and functionality and is configured with a specific set of A/D and/or D/A converters that address a defined bandwidth and frequency for data conversion. Cards come in two- and four-channel versions, with A/D, D/A and transceiver configurations. Two Virtex-4 FPGAs are provided on each card for user-programmable data processing (XC4VFX60 or XC4VFX100) and PCI/PCI-X interfacing (XC4VLX25). Support is provided for 133 or 100 MHz 64-bit PCI-X and 33 MHz/32-bit PCI. Memory includes 256 or 512 Mbytes of DDR SDRAM and 2 Mbytes of dual-port SRAM. Pre-loaded FPGA development IP is supplied, and Linux and VxWorks are supported. Air- or conduction-cooled versions are available. Pricing starts at $6,000 in OEM quantities. Mercury Computer Systems, Chelmsford, MA. (978) 256-1300. [].

Ultra-Wideband Recording/Playback System Samples at 2 Gsamples/s

In radar, SIGINT, electronic warfare and software radio applications, systems must be able to store, analyze, process and play back large volumes of captured signal samples with instantaneous bandwidths in the GHz range. The VXS-based JazzStore UWB data recording solution from TEK Microsystems enables real-time, high-capacity recording and playback of broadband sampled analog data at rates of up to 2 Gigasamples/s. The JazzStore UWB's two-slot, scalable storage architecture is based on multiple RAID storage arrays that can store wideband sampled data for periods of up to several hours. The Triton 2 GHz A/D and D/A card provides the wideband analog digitizing front-end and real-time playback. Combined with the Callisto multi-FPGA processing engine, it forms the JazzStore UWB. The system can continuously record and play back at rates of up to 2 Gsamples/s (8-bit samples) or 1.6 Gsamples/s (10-bit samples). The JazzStore UWB hosts TEK Microsystems' JazzStore system-on-chip FPGA firmware: six SoC cores provide 12 high-performance data pipes directly into a set of up to twelve Fibre Channel RAID disk arrays with up to 24 terabytes of storage capacity, enough for more than two hours' worth of 2 Gsamples/s samples. Access to the recorded data is enabled using a standard FAT32 file system. The JazzStore UWB supports VxWorks, Windows and Linux. Prices begin at $125,000. TEK Microsystems, Chelmsford, MA. (978) 244-9200. [].

Manageable, Server-Class VME SBC Has Twin Dual Core CPUs

The first server-class, manageable, 6U single-slot VME SBC to feature a dual-core processor and a board management controller has just doubled its processing power. The PENTXM4 from Thales Computers has two dual-core Intel 1.67 GHz Xeon ULV processors, compared to the company's PENTXM2 board introduced last year. The PENTXM4 comes with the Intel E7520 server-class memory controller hub, 2 Gbytes of DDR2-400 SDRAM and an onboard 4 Gbyte flash disk drive. It is targeted toward symmetrical processing systems in applications such as telecommunication and defense. The board's VITA 38 intelligent platform management interface (IPMI) feature provides for easy scaling into a multiprocessing system. Interfaces include a dual SATA-150 port, a triple USB 2.0 port and EIDE. The PENTXM4 runs Red Hat Linux and features an extensible firmware interface (EFI) BIOS/firmware that boots Linux 2.6, VxWorks, LynxOS, Microsoft Windows and Red Hat Enterprise Linux. Thales Computers, Raleigh, NC. (919) 231-8000. [].

February 2007

Ultra-Small Modem Module Sports RJ-11 Connector

A built-in, standard RJ-11 phone jack can be handy for all kinds of embedded applications that need data communications capabilities, and it's even more useful when combined with a modem. A new modem module that includes an RJ-11 jack is aimed at small-footprint designs, measuring only 0.66 in. wide x 1.25 in. deep x 0.75 in. high. The TinyModem from Radicom Research includes data, fax and voice capability, as well as improved EMC/EMI shielding. The TinyModem features a built-in data pump and a modem controller and tolerates high isolation voltages of up to 3750V. It features low power consumption from a single 3.3V supply, 5V-tolerant I/O and sleep mode support. Standard features include a serial TTL interface, onboard International DAA, AT command set support, caller ID type I and II for select countries and call waiting, as well as detection of line-in-use, remote hang-up and extension pickup. Also included are downstream data rates of up to 56 Kbits/s, a 14.4 Kbit/s fax rate and voice playback and recording capability. Prices begin at $29 in quantities of 1,000 or more. Radicom Research, San Jose, CA. (408) 383-9006. [].

Rugged 3U VPX Display Processor Targets Embedded Training, Digital Mapping

In embedded military computing there's a growing demand for systems based on the 3U VPX form-factor, as well as for more pre-integrated, pre-tested subsystem-level products. The MAGIC1 Rugged Display Processor from Radstone Embedded Computing, now part of GE Fanuc Embedded Systems, is designed to fill both of those needs, as well as to help system designers develop and deploy significantly more sophisticated display applications, such as embedded training, digital mapping and vehicle display. A complete, rugged, integrated subsystem, the MAGIC1 is based on the SBC340, a 2 GHz Intel Core Duo processor-based SBC, and the GRA110 3U VPX graphics processor card with NVIDIA PCI Express graphics capability. The Core Duo CPU, with its 945GM Northbridge chipset, is connected to the NVIDIA G73 GPU via 16-lane PCI Express, providing maximum bandwidth between the two processors. The graphics processor card supports VAPS, GL Studio and iData display software. With dual-channel video output capability, the MAGIC1 can drive two independent displays. Up to 64 Gbytes of solid-state SATA disk storage are provided. Pricing starts at $17,514.

TCP/IP Stack Supports Dual-Mode IPv4/IPv6 Traffic

As DSPs are used in increasingly complex networked environments, TCP/IP is becoming an integral part of the total DSP solution, especially when Gigabit Ethernet is used as the local interconnect for DSP farms. In recognition of that fact, Enea has released DSPNet, a compact, high-performance TCP/IP stack for its OSEck DSP RTOS. The new stack is optimized for deeply embedded applications with tight size and cost constraints, and occupies less than 40 Kbytes of memory. It supports IPv4, IPv6 and dual-mode IPv4/IPv6 traffic, as well as raw IP/UDP/TCP BSD Sockets, and provides a zero-copy API based on BSD Sockets. OSEck (OSE Compact Kernel) is a DSP-optimized version of Enea’s full-featured OSE RTOS that occupies less than 8 Kbytes of memory in a minimal configuration. DSPNet will initially be available for OSEck running on Texas Instruments’ C64x DSP family and Freescale’s Starcore family. Pricing starts at $5,000. Enea, San Jose, CA. (408) 383-9480. [].
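Because DSPNet exposes a standard BSD Sockets API, ordinary portable datagram code carries over largely unchanged. The sketch below is plain POSIX sockets, not DSPNet-specific code (its zero-copy extensions are not shown): it sends one UDP datagram over loopback and reads it back, the kind of exchange a DSP farm node would perform over its Gigabit Ethernet interconnect.

```c
/* Minimal BSD-sockets UDP round trip over loopback (plain POSIX). */
#include <arpa/inet.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Send one datagram to a locally bound receiver and return the number
 * of bytes read back, or -1 on setup failure. */
static ssize_t udp_roundtrip(const char *msg, char *out, size_t outlen)
{
    int rx = socket(AF_INET, SOCK_DGRAM, 0);
    int tx = socket(AF_INET, SOCK_DGRAM, 0);
    if (rx < 0 || tx < 0)
        return -1;

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                         /* let the OS pick a port */
    if (bind(rx, (struct sockaddr *)&addr, sizeof addr) != 0)
        return -1;

    socklen_t len = sizeof addr;               /* learn the chosen port */
    getsockname(rx, (struct sockaddr *)&addr, &len);

    sendto(tx, msg, strlen(msg), 0, (struct sockaddr *)&addr, sizeof addr);
    ssize_t n = recvfrom(rx, out, outlen, 0, NULL, NULL);

    close(tx);
    close(rx);
    return n;
}
```

On a sockets-compatible stack like DSPNet, the application-level logic stays the same; what changes underneath is the footprint and the copy behavior of the implementation.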

Radstone Embedded Computing, Part of GE Fanuc Embedded Systems, Billerica, MA. (800) 368-2738. [].

Low-Power, Fanless Micro PC Features Geode LX800

A compact, low-power, high-performance micro PC from WIN Enterprises can be used as an entry-level embedded computer. The PL-01030's fanless operation makes it attractive for industrial control, and its compact size, at only 8 1/8 in. high, makes it ideal for point-of-sale, kiosk and medical applications. The PL-01030 features an AMD Geode 500 MHz LX800 low-power processor, a CS553 chipset and a Dual Winbond 83627HG I/O chipset. The board consumes only 12W from a 5V power supply @ 2.4A. Up to 1 Gbyte of 400 MHz DDR SO-DIMM memory is included, as well as 2 to 254 Mbytes of AMD Geode LX800 shared system memory that supports a CRT and 24-bit TFT LCD interface. The board has one 50-pin CompactFlash Type II socket and comes with a mounting kit for a 2.5-in. hard drive. For connectivity and expansion, it has two 10/100 Fast Ethernet LAN interfaces, one PC/104 connector, four USB 2.0 ports, four RS-232 ports, one DB-25 parallel port and one DB-15 VGA connector. Audio interface is one mic in and one speaker out. Options include two Intel 82551ER or Realtek 8139CL+ Ethernet controllers. Single unit prices are $392 for the Intel 82551ER LAN version (PL-A1030) and $385 for the Realtek 8139CL+ LAN version (PL-B1030). WIN Enterprises, N. Andover, MA. (978) 688-2000. [].

Universal Baseband Processing Module with Serial RapidIO Endpoint

A Serial RapidIO (SRIO) endpoint solution from BittWare is based on Altera's Serial RapidIO MegaCore IP targeting a high-performance Stratix II FPGA. Combining the logic density of Altera's Stratix II FPGAs with the performance of the Analog Devices TigerSHARC, and the benefits of AdvancedMC and SRIO, the B2-AMC supports universal baseband processing for any wireless application, including WiMAX, Software Defined Radio and Super 3G. The endpoint features a 4x fat pipe running at 3.125 GHz, which can be shared via ATLANTiS amongst the four TigerSHARCs and/or FPGA processing blocks. Eight bi-directional TigerSHARC link ports, running at 4 Gbits/s each, are connected to ATLANTiS, providing a tremendous amount of I/O bandwidth and a highly efficient means of dealing with the I/O. A full-height, single-wide AMC, the B2-AMC is suitable for use in AdvancedTCA, MicroTCA or custom systems and is completely hot-swappable. The B2-AMC provides 14.4 GFLOPS and 57.5 GOPS of processing power. The Stratix II FPGA implements BittWare's ATLANTiS framework and the fat pipe interface, seamlessly integrating the DSP processing power with Serial RapidIO or any other switch fabric (PCI Express, GigE, or XAUI (10 GigE)). The BittWorks software tools provide host interface libraries, a wide variety of diagnostic utilities and configuration tools, and debug tools. The tool set comprises BittWare's DSP21k Toolkit, DSP21k Porting Kit, BittWare Target, TigerSHARC BSP for Gedae, and TS-Lib and SpeedDSP libraries, as well as BittWare's multiprocessing operating environment, Trident. BittWare, Concord, NH. (603) 226-0404. [].



Rugged Fibre Channel PMC Card Features Dual Independent 4.25 Gbit/s FC Channels

A dual 4.25 Gbit/s-per-channel Fibre Channel PMC card designed for demanding high-bandwidth data communications and storage applications has been introduced by Curtiss-Wright Controls Embedded Computing. The FX400 supports both SCSI Fibre Channel Protocol (FCP) and Internet Protocol (IP), including both File System SCSI and Raw Initiator SCSI. This multi-protocol approach enables application software to communicate simultaneously with SCSI and IP-based devices while eliminating the need for the system integrator to interact with the FC interface. The FX400, available in both air-cooled and conduction-cooled rugged versions, simplifies the integration of high-bandwidth FC data communications in PCI, VME and CompactPCI-based embedded systems while reducing slot count. Each of the FX400's dual independent FC paths supports transfer rates up to 400 Mbytes/s in a single direction. In full duplex mode, the FX400 supports transfer rates up to 800 Mbytes/s per channel, or a combined throughput rate of up to 1600 Mbytes/s per PMC card. Each of the card's dual channels also supports 1.0625 Gbit/s, 2.125 Gbit/s and 4.25 Gbit/s rates, and automatically detects and switches to the appropriate rate using Auto-Speed Negotiation. This feature enables the FX400 cards to interoperate with existing FC devices at 1.0625 Gbits/s and 2.125 Gbits/s, and provides a seamless transition to higher-performance 4.25 Gbit/s devices. Curtiss-Wright Controls Embedded Computing, Leesburg, VA. (613) 254-5112. [].
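The throughput figures quoted above follow directly from the per-channel rate; the short arithmetic check below uses only the numbers from the brief (the 8b/10b note is general Fibre Channel background, not a vendor claim):

```python
# Sanity-check the FX400 throughput arithmetic from the quoted specs.
PER_CHANNEL_ONE_WAY_MB_S = 400   # Mbytes/s, one direction, per channel
CHANNELS = 2

# Fibre Channel at these rates uses 8b/10b line encoding, so a 4.25 Gbit/s
# line rate corresponds to 425 Mbytes/s of raw data, consistent with the
# ~400 Mbytes/s payload figure quoted above.
full_duplex_per_channel = PER_CHANNEL_ONE_WAY_MB_S * 2   # 800 Mbytes/s
combined_per_card = full_duplex_per_channel * CHANNELS   # 1600 Mbytes/s
```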

Industry Pack Digital I/O Module Implements Low-Cost User-Configurable FPGA

A new series of Industry Pack I/O modules interfaces digital I/O signals to a user-configurable FPGA. The IP-EP200 from Acromag implements the Altera Cyclone II EP2C20 device, with 20K logic elements, 240 Kbits of RAM and 26 embedded 18 x 18 multipliers for use in processing user-defined algorithms and custom logic routines. User application programs are downloaded through the JTAG port or via the IP bus directly into the FPGA. And with JTAG access to the SignalTap II embedded logic analyzer, engineers can easily monitor internal device operation. Several models are available to accommodate RS-485 differential, TTL or LVDS digital I/O signals. One IP-EP200 model provides 48 bi-directional TTL I/O lines, another offers 24 differential RS-485 I/O lines, and a third has 24 LVDS I/O lines. The "combo" model pairs 24 TTL with 12 RS-485 I/O lines on a single IP module. The IP-EP200 allows users to develop and store their own instruction set in the FPGA for adaptive computing applications. 64K x 16 local static RAM is provided under FPGA control. An LVTTL external clock is also connected directly to the FPGA. Acromag's Engineering Design Kit provides utilities to help users develop custom programs, load VHDL into the FPGA and establish DMA transfers between the FPGA and the CPU. Prices start at $1,000. Extended temperature (-40° to 85°C) models are also available. Acromag, Wixom, MI. (248) 295-0310. [].


SDR Conduction-Cooled PMC/XMC Transceiver Module

A new transceiver module for software defined radio (SDR) features two 14-bit, 125 MHz A/Ds and a Xilinx Virtex-II Pro field programmable gate array (FPGA), and is configured as a ruggedized module fully compliant with the ANSI/VITA 20 conduction-cooling specification and the ANSI/VITA 42 XMC specification. The Model 7141-703 PMC/XMC from Pentek is compatible with both cPCI and VME baseboards. The board has an extended operating temperature range of -40° to +70°C, making it well suited for designs in a variety of market sectors where there is a need for wideband software radio resources in harsh environments. Two full-scale, +10 dBm, analog HF or IF inputs are delivered through front-panel SSMCX connectors. The digitized output signals pass to the Virtex-II Pro FPGA for signal processing or routing to other module resources. The FPGA also serves as a control and status engine with data and programming interfaces to each of the onboard resources. Factory-installed FPGA functions include data multiplexing, channel selection, data packing, gating, triggering and SDRAM memory control. Pentek's ReadyFlow Board Support Libraries of C-callable device functions for the processor platform are available for all I/O modules. Pentek support libraries, device drivers and software development tools, plus third-party offerings, are all available for a variety of platforms. Pricing starts at $15,995.


Pentek, Upper Saddle River, NJ. (201) 818-5900. [].


Mini-ETX Motherboard Aimed at Industrial Computing

A new motherboard based on the mini-ETX form-factor and featuring HDTV output is targeted at industrial applications that require ruggedization and low power consumption. The Mini-ITX-9452 from Arista features the Intel Core Duo/Core Solo processor, dual PCI Express Gigabit Ethernet, HDTV output and two independent audio streams. The motherboard is equipped with a Socket 479 Intel Core Duo/Solo with a 667 MHz FSB and an Intel 945GM chipset plus ICH7M. System memory includes two 240-pin dual-channel memory slots that accommodate up to 2 Gbytes of DDR2-400/533/667 RAM. The board requires a maximum of only 1.58A at 5V and has a programmable watchdog. Additional features include eight USB 2.0 ports, two SATA II 300 ports, a PCI slot, a mini-PCI slot and ATX support. Integrated Intel 945GM graphics with dual 18-channel LVDS and CRT outputs is included. Pricing starts at $380.


Arista, Fremont, CA. (510) 226-1800. [].

Multimedia Processing Platform for AdvancedTCA

A new multimedia platform for AdvancedTCA can be used to deliver applications such as voice and video mail, color ringback tones, unified messaging and audio conferencing over IP and PSTN interfaces in wireline and wireless environments using standard protocols for session and media control. The Dialogic Multimedia Platform supports T.38 fax, audio conferencing, audio transcoding and video messaging (play/record). Audio conferencing incorporates active talker detection and per-party volume control. Between 250 and 500 G.711 ports are supported. The platform integrates an extended implementation of Dialogic’s Host Media Processing (HMP) software with an AdvancedTCA single board computer that includes two Dual-Core Intel Xeon LV 2.0 GHz processors designed to boost multitasking computing power. An Advanced Mezzanine Card (AdvancedMC) is used for echo cancellation and transcoding offload, which enhances media processing performance, especially when low-bit-rate audio coders are required at high-density levels. This combination of media processing, together with over 30 Gbytes of hard disk storage, means the platform has the capacity to run applications locally on the same blade as an alternative to supporting them remotely over IP connections. The multimedia platform is supported by the Red Hat Linux Enterprise 4 and SUSE Linux Enterprise Server 9 operating systems. Dialogic, Montreal, Canada. (514) 745-5500. [].

Dual Channel 105 MHz ADC based on Xilinx Virtex-5 FPGA

A PMC module with integrated dual analog input channels is based on the Xilinx Virtex-5 LX110 FPGA. The PMC-FPGA05-ADC1 from VMetro closely couples the high-performance FPGA processing and the analog I/O to target applications requiring maximum performance with minimal latencies, such as software defined radio (SDR), signal intelligence (SigInt) and data recording. The analog input front end incorporates dual-channel analog input with a full power input bandwidth from 100 kHz to 110 MHz, a signal to noise ratio (SNR) of 68 dB and a spurious free dynamic range (SFDR) of 82 dB. The full-scale analog input is +10 dBm (2V pk-pk). Front panel clock and trigger inputs are also available. The PMC-FPGA05-ADC1's high-performance dual analog interfaces combined with the processing power and PCI-X bus interface enable this PMC module to provide front-end processing. Processing on the PMC-FPGA05-ADC1 is provided by a Xilinx XC5VLX110 Virtex-5 FPGA. Supported by three banks of QDRII SRAM (8 Mbytes per bank) and two banks of DDR2 SDRAM (128 Mbytes per bank), the Virtex-5 FPGA is capable of performing sophisticated, high-speed DSP tasks in a small footprint. The PMC module supports a 133 MHz PCI-X interface as well as PMC digital user I/O via the Pn4 connector. Both of these interfaces are provided by the FPGA. Software support includes host drivers for Windows and VxWorks. Linux support is scheduled for the second half of 2007. The drivers support high-speed DMA to minimize potential bottlenecks between the analog input, the FPGA and the host processor. Utilities and example VHDL are provided to support customers' development of FPGA firmware as well as providing diagnostic support. VMetro, Houston, TX. (281) 584-0728. [].

Dual Star MicroTCA Backplane Includes Controller Hub

A Dual Star MicroTCA backplane from Elma Bustronic features 10 AMC slots plus dual redundant power module and MicroTCA Controller Hub (MCH) slots in a 14-slot backplane. The 14-slot Dual Star complies with the latest MicroTCA.0 specifications. The connectors are compression mount, so an individual connector can easily be removed from the backplane if damaged. Elma Bustronic is also developing MicroTCA backplane configurations in press-fit and SMT styles. The Elma Bustronic MicroTCA backplane features an 18-layer controlled-impedance stripline design. There are also eight right-angle 10-pin SMD headers for fans. The company also offers a 6-slot Dual Star for a portable tower chassis and a 14-slot Star design with one power module and MCH slot. Pricing for the 14-slot MicroTCA backplane is under $1,500 depending on volume and configuration requirements. The lead-time is 4-6 weeks ARO.

Elma Bustronic, Fremont, CA. (510) 490-7388.

Module Offers Low-Cost Power for PC/104

A new PC/104 power supply, the Model 209 from Sensoray, has a wide input voltage range of 8 to 30 VDC. Its three outputs of 5 VDC at 5 amps and ±12 VDC at 0.5 amps supply power to the PC/104 and PC/104+ buses. Power output connectors are available for external devices such as video cameras and fans. Input power is via a 3-pin removable terminal block or through a 2 mm circular power jack. No active cooling is needed. The 209 uses planar magnetics and low-ESR capacitors to achieve low noise and a conversion efficiency of over 90%. To survive vehicular applications, the 209 has a load dump circuit, a common mode input filter and reverse polarity input protection. The operating temperature range is -25° to 50°C. The single unit price is $205.


Sensoray, Tigard, OR. (503) 684-8005. [].
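As a worked check on the Model 209's ratings, the worst-case input current follows from the quoted output maxima and the stated efficiency floor; the sketch below uses only figures from the brief:

```python
# Worst-case input current estimate for the Model 209, from the quoted specs.
p_out = 5.0 * 5.0 + 12.0 * 0.5 + 12.0 * 0.5   # 5V @ 5A plus +/-12V @ 0.5A = 37 W
efficiency = 0.90                              # stated as "over 90%"; use the floor
v_in_min = 8.0                                 # bottom of the 8-30 VDC input range

p_in = p_out / efficiency                      # ~41.1 W drawn from the source
i_in_max = p_in / v_in_min                     # ~5.1 A at the minimum input voltage
```

At the top of the input range (30 VDC), the same load draws only about 1.4 A, which is why a wide-input converter like this sizes its input protection for the low-voltage end.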



Safety for Data at Rest: New Industry Standard for Storage Security

The TCG Storage Work Group is building on existing TCG technologies and focusing on standards for security services on dedicated storage.

by Michael Willett, Trusted Computing Group Storage Work Group and Seagate Research

Permanent storage devices, including hard disk drives, flash memory drives, optical drives and digital tape drives, play a central role in computing. But so far, there has not been an industry-wide approach to securing data in storage. The Trusted Computing Group (TCG), which has previously released specifications for trusted PCs and servers, security chips and trusted mobile devices, believes that the concept of "trust"—in which a device does what it is intended to do—is relevant to storage.

The concept of trust can be illustrated in the example of the end-of-life or repurposing of storage devices. It should be possible to know with certainty that a storage device that is repurposed is not providing private data to the next, legal or illegal, physical possessor of the storage device. This is particularly important as storage devices become more portable, but it is also true in large data centers with thousands of storage devices containing data and programs of high monetary value or data that should be protected for privacy.

TCG Trusted Platform Modules (TPMs)—small, special-purpose, dedicated-processor devices that provide a root of trust for their hosts—are shipping in PCs and servers. However, the TPM's main function is providing a root of trust, while the storage device's main function is providing digital storage. TCG's members believe there is a need to extend host TPM trust into the computing environment within all storage devices.

The TPM is a root of trust that extends to trusted applications running on the host, which then can securely manage resources in the internal storage device environment. It is essential that one application cannot affect storage device resources assigned to another application, except in authorized ways. Therefore, the system of access controls may be divided among applications that run on the host.

It is precisely this strong notion of host application rights enforced by the storage device that allows trust to be extended from the TPM-grounded host to the storage device. A natural consequence of this is to provide greater opportunities in storage, such as permanent storage areas that are restricted to particular host applications, and exclusive control over key management of data-at-rest encryption.

TCG's work assumes there is no TPM necessarily internal to the storage device. Instead, TCG is focusing on defining controllable features and properties of the internal storage device computing environment and providing for strong, policy-driven, securely authenticated and messaged access controls.

Since nearly all communications with storage devices use the SCSI (ANSI/INCITS T10) or ATA (ANSI/INCITS T13) command sets, the TCG storage architecture in progress also includes already approved additional SCSI and ATA commands that support trusted messaging from the host applications to the access control systems of the storage device (Table 1).



The Trusted Storage Architecture

TCG is extending the trust boundary into storage devices by proposing a standard access control system over features and properties of the internal environment of storage devices.




Figure 1

Steps in user authentication for storage systems connected to a corporate network.

TCG has created use cases to illustrate these key concepts. A final specification, which will address different storage types such as disk drives, optical drives and tape drives, will soon be publicly available to anyone interested. The TCG Trusted Storage use cases, outlined below, are intended to highlight desired functionality that should interoperate among all storage devices.

Enrollment and Connection

It is often desirable to mate particular storage devices to particular hosts. The mating can have two manifestations. In one case, the storage device can refuse to perform its storage (or other) operations unless it recognizes the host as authorized to have access to those operations. In the other case, the host can refuse to employ the storage device in any operation unless the host can authenticate that the storage device is authorized. TCG is providing specifications for both storage device-tohost and host-to-storage device mating. To simplify dual mating, it is desirable to separate the mating process into parts that we have termed enrollment and connection. Enrollment refers to a process by which a storage device and host are set up for connection. Ideally, a person may be involved in enrollment, but afterwards, secrets exchanged between the host and storage device automatically govern the connection. So a person may freely connect and disconnect authorized storage devices from authorized hosts without having to type in passwords or to provide other authentication credentials on each connection. The setup for storage device-to-host mating involves several steps. These include the authorization of the right to set up the connection, and setting up the authorization sequence needed by the storage device on each connection. Typically the enrollment is enabled or disabled. If it is enabled, then connection must

be authenticated. The authorization needed to enroll is different than the authorization to connect. The setup for host-to-storage device mating also involves several steps. These steps include fetching authentication credentials from the storage device that may be employed or later used in connection, or for immediate use in transferring a secret to the storage device, as defined by the security policy of the host. The use cases also require that the secrets needed for connection, either storage device-to-host or host-to-storage device, may need to be kept unavailable to any other host process or unavailable to a simple physical attack across the interface. Since ATA and SCSI commands may be transmitted across both wired and wireless interfaces, it is important that the communications of the enrollment and connection secrets be confidential. Also, since challenge-response authentication takes more than one command, the confidential communication needs to be in a secure session that is established between the host and the storage device. Establishing such confidential communications extends the TPM-rooted trust in the host to the storage device (Figure 1).
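The enrollment/connection split described above can be sketched with a secret provisioned once at enrollment and an HMAC challenge-response run automatically on each connection. This is an illustrative model only; the class names and message flow are invented for clarity and are not taken from the TCG specification:

```python
import hashlib
import hmac
import os

class StorageDevice:
    """Toy device model: holds the secret provisioned at enrollment."""
    def __init__(self):
        self.enrolled_secret = None

    def enroll(self, secret: bytes):
        # Enrollment: a one-time, separately authorized provisioning step,
        # possibly involving a person.
        self.enrolled_secret = secret

    def challenge(self) -> bytes:
        # Fresh nonce per attempt, so responses cannot be replayed.
        self._nonce = os.urandom(16)
        return self._nonce

    def connect(self, response: bytes) -> bool:
        # Connection: fully automatic; no password typed on each attach.
        expected = hmac.new(self.enrolled_secret, self._nonce, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)

class Host:
    def __init__(self, secret: bytes):
        self.secret = secret

    def respond(self, nonce: bytes) -> bytes:
        return hmac.new(self.secret, nonce, hashlib.sha256).digest()

secret = os.urandom(32)
device, host = StorageDevice(), Host(secret)
device.enroll(secret)                                   # done once, with authorization
ok = device.connect(host.respond(device.challenge()))   # done on every attach
```

Because the response is keyed to a fresh nonce, the multi-command exchange must run inside one session, which is exactly why the article calls for a secure session between host and storage device.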

Protected Storage

Optical disks have offered protected storage for many applications, such as DVD, for several years, and all storage devices with embedded processors have protected storage locations for system data. This protected storage is outside of the normally addressable user space. An attribute of this protected storage is that it survives intact even after the storage device user space is repartitioned or reformatted. In this use case, TCG has identified protected storage that extends the trust boundary from the TPM and trusted host. The process is almost identical to the two-step enrollment-connection process in mating, and may be thought of as mating a host to a



TCG-enabled storage function | What it does | How it enhances current storage security
Granular storage security configuration enforcement | Provides security service providers (SPs) based upon authentication and authorization | Enhances the security of storage system configurations and change controls
Improved storage access controls | Creates secure storage based upon trust relationships | Adds to existing access controls such as zoning, LUN masking and ACLs
Scalable device-level encryption | Instruments storage devices with onboard cryptographic processing | Provides a standard callable cryptographic service for management applications
Automated backup | Creates a secure mirror image of an SP on multiple storage devices | Provides a standard callable mirroring service for data management applications

Table 1

Trusted storage functions and enhancements to storage security.

storage device feature. In this way a host application can gain exclusive access to an area of protected storage. There are many applications for secure protected storage, including the protection of private data, the protection of software licenses that can survive a change of operating systems, and others. Since many different applications could require protected-storage space, the TCG use cases call out the need for the creation and deletion of such protected storage locations under separate authorization from that governing the use of the protected storage locations.
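The separation of authorizations called out above, creating a protected area versus using it, can be illustrated with a minimal access-control sketch; the credential names and data layout here are hypothetical, not drawn from the specification:

```python
# Toy protected-storage table: creating an area and using it require
# different credentials, so an application that can read/write its own
# area cannot create or delete areas, and the administrator that creates
# areas cannot read their contents.
class ProtectedStorage:
    def __init__(self, admin_cred: str):
        self.admin_cred = admin_cred   # authorizes create/delete only
        self.areas = {}                # name -> [use_cred, data]

    def create_area(self, cred: str, name: str, use_cred: str):
        if cred != self.admin_cred:
            raise PermissionError("not authorized to create areas")
        self.areas[name] = [use_cred, b""]

    def write(self, cred: str, name: str, data: bytes):
        use_cred, _ = self.areas[name]
        if cred != use_cred:
            raise PermissionError("not authorized for this area")
        self.areas[name][1] = data

    def read(self, cred: str, name: str) -> bytes:
        use_cred, data = self.areas[name]
        if cred != use_cred:
            raise PermissionError("not authorized for this area")
        return data

ps = ProtectedStorage(admin_cred="admin-secret")
ps.create_area("admin-secret", "license-store", use_cred="app-secret")
ps.write("app-secret", "license-store", b"license-blob")
```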

Locking, Encryption and Logging

Locking and encryption refer to additional protections to the addressable user space, as opposed to the protected storage areas outside of the user space. Locking is actually identical to storage device-to-host connection, but the locking and encryption use cases call out the separation of read-locking and write-locking. Encryption is simply an additional measure meant to prevent unauthorized reading. The enrollment and connection phases of mating are identical to the enrollment and connection phases of storage device-to-host connection. However, the locking and encryption use cases also call out read/write locking and encryption for different logical partitions of the storage device. In this way, the credentials needed to authorize writing or reading for one partition of user space may be different than the credentials needed to authorize writing or reading for another partition. Furthermore, the encryption keys across partitions may be different. The use cases also require that the encryption keys be secured by separate authorization from that needed for data reading or writing authorization, although the enrollment phase for a particular partition may set, create, or fetch such encryption keys.


February 2007

Many storage devices, including nearly all hard disks, maintain internal logs, called SMART logs, in order to provide early warning of possible hard disk failure. SMART stands for Self-Monitoring Analysis and Reporting Technology and is intended to recognize conditions that indicate a drive failure in order to provide sufficient warning to allow data back-up before an actual failure occurs. SMART attribute data is written to the disk for the purpose of recreating the events that caused a predictive failure. The drive will measure and save parameters once every two hours subject to an idle period on the SCSI bus. The process of measuring off-line attribute data and saving data to the disk is uninterruptible. TCG's use cases call out the offer of logging services to the host that make use of protected storage areas. Also called out is a means for establishing clock time, or at minimum a monotonic counter so that log entries can be automatically time stamped in a protected fashion. TCG anticipates that logging is most useful for forensic purposes; that is, logging being done for later detection of security violations in the host, rather than for diagnostic logging, as with SMART logging. However, the use of the logging capability is not technically restricted to one or the other kind of logging.
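The monotonic-counter timestamping called out for forensic logging can be sketched as a counter-stamped, hash-chained log, so that deleting or reordering entries is detectable. This construction is illustrative, not the TCG-specified log format:

```python
import hashlib

class ForensicLog:
    """Each entry carries a monotonic counter and chains to the previous hash."""
    def __init__(self):
        self.counter = 0
        self.entries = []              # list of (counter, event, digest)
        self.prev_hash = b"\x00" * 32

    def append(self, event: bytes):
        self.counter += 1   # monotonic: never reused, never decreases
        record = self.counter.to_bytes(8, "big") + self.prev_hash + event
        digest = hashlib.sha256(record).digest()
        self.entries.append((self.counter, event, digest))
        self.prev_hash = digest

    def verify(self) -> bool:
        # Recompute the chain; any tampered, dropped or reordered entry
        # breaks either the counter sequence or a digest.
        prev = b"\x00" * 32
        for n, (counter, event, digest) in enumerate(self.entries, start=1):
            record = counter.to_bytes(8, "big") + prev + event
            if counter != n or hashlib.sha256(record).digest() != digest:
                return False
            prev = digest
        return True

log = ForensicLog()
log.append(b"host auth failure")
log.append(b"firmware download attempt")
```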

Cryptographic Services

The access control mechanisms in the storage device are required to be versatile in the sense that they can support different types of authentication algorithms. The TPM is principally a pass code and public key (RSA cryptography) authentication device. In the storage industry, symmetric key and hash message authentication codes (HMAC) are also in common use. Therefore, the access control mechanisms, including all of the above-mentioned ones for protected access to read/write operations, key management and storage, are assignable in enrollment to pass code, symmetric key, HMAC, or public key authentications.

IndustryWatch

The requirement of secure messaging additionally imposes cryptographic services for key exchange. A particular storage device may support only a subset of the identified cryptographic operations. In all cases, other bodies, such as IETF, standardize the basic cryptographic operations. For symmetric ciphers, for example, the use cases call out AES 128. For hashing, they call out SHA-1 and SHA-256, and for public key ciphers, RSA and Elliptic Curve (in various lengths and for various curves). Provision is made for the modular introduction of other ciphers. Finally, the use cases call out uses of these standard cryptographic functions including the capability to verify signed hashes on material that has to be decrypted using a key only available on the storage device. Since the cryptographic services may require hiding keys (and keeping one set of keys secret from one host application to another), the cryptographic services must obey the same partitioning of authorizations available to the rest of the use cases. It should be clear from the above use cases that there is a need, associated with enrollment, to be able to assign to a particular host application (whether it be local on the local host or remote from the local host) exclusive access to storage device feature sets. This use case therefore calls out the notion of being able to define a feature set that can be claimed exclusively. The exclusivity of the claim is performed by setting access controls using the authentication operations supported by the storage device. While rare, it is sometimes possible to download manufacturer-authorized firmware to storage devices. The TCG storage

use cases call out the need to properly authorize such downloads using strong authentication methods built into the overall architecture. In particular, downloads are hashed and the hashes signed, and the storage device confirms the signer and publishes to the trusted platform which entities are permitted to offer acceptable downloads. Typically, this is as simple as providing a pointer to a manufacturer certificate if the manufacturer retains exclusive control over firmware downloads (as is usually the case). This requirement for signed downloads is fully consistent with the TCG TPM trust model and therefore with extending TCG trust from the TPM to the host and to the internal operations of the storage device. Now that TCG has developed the Trusted Storage use cases, it is working to finalize the storage specification. TCG has received approval from the relevant SCSI and ATA industry groups for new trusted commands. Trusted Computing Group [].
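The signed-download flow described above, hash the image and verify the signature before accepting it, can be sketched as follows. To stay self-contained this uses HMAC as a stand-in for the RSA or Elliptic Curve signature the article describes; a real device would verify an asymmetric signature chained to a manufacturer certificate:

```python
import hashlib
import hmac

# Stand-in for the manufacturer's signing key; in the real flow this is
# an asymmetric private key whose public half is in a certificate the
# device trusts.
MANUFACTURER_KEY = b"manufacturer-signing-key"

def sign_firmware(image: bytes) -> bytes:
    # Manufacturer side: hash the image, then "sign" the hash.
    digest = hashlib.sha256(image).digest()
    return hmac.new(MANUFACTURER_KEY, digest, hashlib.sha256).digest()

def device_accepts(image: bytes, signature: bytes) -> bool:
    # Device side: recompute the hash and check the signature before
    # allowing the download to proceed.
    digest = hashlib.sha256(image).digest()
    expected = hmac.new(MANUFACTURER_KEY, digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

fw = b"\x7fELF firmware image bytes"
sig = sign_firmware(fw)
```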




Simulation-Based Device Software Development: A Must-Have to Stay Competitive Developing complex software for custom embedded devices can get a big boost by having a simulation of the hardware available that can be shared among developers and updated as the hardware design is modified and improved.



by Marc Serughetti, CoWare


Software developers involved in creating electronic devices, including mobile, digital consumer, automotive and networking, are dealing daily with the challenges of cross-platform development. In that context, the software is being developed on a host desktop computer while the final execution platform is an embedded hardware device platform.

The fact that the physical hardware needs to be available in order for activities such as hardware-dependent software development, system integration and test to be performed effectively is one of the greatest challenges for developers of these electronic devices. This challenge and how it is met has the single greatest impact on the development cycle. Not only is the physical hardware only available late in the design cycle, but it also requires complex and cumbersome tools to provide any visibility into the system. Additionally, once that visibility is attained, the physical hardware does not provide full controllability of the system and the hardware is not scalable. The impact on the development cycle usually is dramatic. It can range from schedule delays to complete project redesign and even cancellation because the hardware platform combined with the software running on it does not meet the system's expected performance requirements.

In many industries the problem of the lack of physical component availability early in the development process, and the lack of the capability to set up realistic physical systems, has been dealt






with quite effectively by using simulation. However, for the software development, system integration and test of electronic device software, simulation is not widely used. The reason is that the technologies available to date have not met the requirements of such a simulation environment.

Recently, some new technologies have emerged that aim to solve this problem. They are referred to as virtual platforms or virtual prototypes. Not all of the new solutions include the technology needed to make simulation effective for electronic device development, but all have some elements of what is needed. What is now coming on the market is a mix of the proper simulation technology and the right modeling technology associated with it. There are at least five key requirements for these technologies to work for software development and system integration and test of electronic device software (Figure 1):

1. The simulation technology must have performance that enables the use of simulation on a daily basis.
2. The simulation technology must provide a set of tools that enables the use of the unique capabilities it provides: for example, simulation can enable users to pause an entire platform at any time.
3. The simulation technology must provide the application programming interfaces and scripting capabilities that will enable it to be integrated into an existing software development environment.
4. The hardware model to be simulated must be created in a standards-based language: standard languages reduce the risks and leverage a larger ecosystem of models and modeling experts. SystemC is an IEEE standard language for hardware modeling that fits this requirement very well and has been widely adopted.
5. The models created should be reusable for both software development and hardware development. Changes in the hardware can then immediately be sent to the software developers, thus minimizing the risk of paper specification and independent simulator implementation.

Figure 1: A virtual platform provides a simpler environment within the developer's computer, one that can see further into and better control the simulated hardware device than most instruments attached to a prototype board.

Software developers, system integrators and test teams using simulation created with technology supporting these requirements can expect multiple benefits: development schedule risk reduction, increased developer and team productivity, and lower development costs.

Reducing the risk associated with the development cycle and schedule is a key factor in meeting the expectations set for a project. Simulation lets development start earlier. When the time comes to perform integration and bring-up debug with the physical hardware, tested software will already be integrated with untested hardware. This is much better than having untested software integrated with untested hardware, which is so often the case today. This alone can shrink the integration and debug phase from several months to a couple of weeks. The same time saving can be expected when testing the system: if the system can be tested prior to the availability of physical hardware, the development cycle is again very positively impacted. In addition, a simulation technology that meets all the requirements described above removes the risk of using divergent, unsynchronized simulation environments for hardware and software development. Another bonus is that a scalable model of the hardware can be used, one that aligns with specific software development milestones.
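Requirement 2 above, the ability to pause an entire platform at any instant with full visibility into every device, can be sketched in a few lines. The following Python miniature is purely illustrative; the class and method names (`VirtualPlatform`, `run_until`, `inspect`) are invented for this sketch and do not correspond to any vendor's API:

```python
# Hypothetical sketch of a virtual-platform control API. A real product
# models CPUs, buses and peripherals; this tiny event-driven kernel only
# shows how "pause the entire platform" can expose every device's state
# at one exact point in simulated time.

import heapq

class VirtualPlatform:
    def __init__(self):
        self.time = 0              # simulated time (arbitrary units)
        self.devices = {}          # device name -> mutable state dict
        self.events = []           # heap of (time, seq, device, update)
        self._seq = 0              # tie-breaker so dicts are never compared

    def add_device(self, name, state):
        self.devices[name] = dict(state)

    def schedule(self, at, device, update):
        heapq.heappush(self.events, (at, self._seq, device, update))
        self._seq += 1

    def run_until(self, t_stop):
        # Advance simulated time. Returning from here is a whole-platform
        # pause: no device "keeps running" the way physical hardware would.
        while self.events and self.events[0][0] <= t_stop:
            self.time, _, dev, update = heapq.heappop(self.events)
            self.devices[dev].update(update)
        self.time = max(self.time, t_stop)

    def inspect(self, device):
        # Full visibility: any register of any device, with no probe attached.
        return dict(self.devices[device])

p = VirtualPlatform()
p.add_device("uart0", {"txbuf": 0, "status": "idle"})
p.add_device("timer0", {"count": 0})
p.schedule(10, "timer0", {"count": 1})
p.schedule(25, "uart0", {"status": "busy"})
p.schedule(30, "timer0", {"count": 2})

p.run_until(25)  # pause the whole platform at t=25; all state inspectable
```

Stopping `run_until` at t=25 freezes every device at the same instant of simulated time, which is exactly what physical hardware cannot do: pausing a debugger leaves the rest of the board running.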
Productivity is increased because simulation gives a developer full control of, and visibility into, the hardware models. Traditionally, physical hardware provides limited visibility (only what a debugger connected to the hardware provides) and limited controllability (you may have paused the execution of the software; however, the hardware continues to run). To gain additional visibility, complex and costly tools such as logic analyzers are required. With the right simulation technology, developers gain unprecedented capabilities through scripting; a good example is script execution on breakpoints.

Figure 2: A virtual platform offers a simulation of the hardware environment so that software development can begin well before any prototype hardware is available. The hardware model can be updated and shared among developers immediately, with a significant impact on development time and cost.

Finally, productivity is enhanced by high simulation performance. The software simulation environment should boot the operating system in seconds rather than minutes or hours, so simulation can be run daily, as needed. Simulation can reduce the time it takes to identify issues to a matter of hours; today, without simulation, identifying issues can take days or weeks.

Using simulation also positively impacts a team's productivity. Consider two examples. The first is that a simulation can easily be shared: a simulation software package can make a simulation environment readily available to a distributed team. Compare this to what is most often done now, shipping a sample prototype board around: a prototype board requires shipping time and is easily damaged in transit. In addition, changes to the hardware specification increase cycle time on a physical board, while a simulation software package can immediately be updated, packaged and distributed worldwide.

The second example involves determinism. A simulation behaves consistently under the same conditions, while early hardware prototypes may show different behaviors. For example, once a race condition is reproduced, it can be reproduced over and over again, by the developer as well as by other developers on the team.

Lowering development costs improves return on investment, and this is key to a company's success.
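The two capabilities just described, script execution on breakpoint-like events and deterministic replay, can be illustrated with a small sketch. This is an invented Python miniature, not any real simulator's interface: two simulated threads race on a shared counter, a user script hooks every step, and because the scheduler draws from a seeded pseudo-random generator, the same seed replays the same interleaving on every run:

```python
# Hypothetical sketch: scripted hooks in a deterministic simulation.
# Two "threads" increment a shared counter in a randomized interleaving;
# because the scheduler uses a seeded PRNG, every run with the same seed
# replays the same interleaving -- a race, once seen, is seen again.

import random

def simulate(seed, on_step=None):
    rng = random.Random(seed)       # fixed seed => deterministic schedule
    shared = {"counter": 0}
    # Each thread does: read the counter, then write back read_value + 1.
    threads = [["read", "write"], ["read", "write"]]
    regs = [None, None]             # per-thread copy of the value it read
    trace = []
    while any(threads):
        t = rng.choice([i for i, ops in enumerate(threads) if ops])
        op = threads[t].pop(0)
        if op == "read":
            regs[t] = shared["counter"]
        else:
            shared["counter"] = regs[t] + 1
        trace.append((t, op))
        if on_step:                  # scripted hook, like a breakpoint action
            on_step(t, op, dict(shared))
    return shared["counter"], trace

# A breakpoint-style script: log every write instead of debugging by hand.
log = []
def log_writes(t, op, state):
    if op == "write":
        log.append((t, op, state["counter"]))

result1, trace1 = simulate(seed=7, on_step=log_writes)
result2, trace2 = simulate(seed=7)   # same seed: identical replay
```

Whatever interleaving seed 7 happens to produce, racy or not, every rerun with that seed reproduces it exactly, so a bug found once can be handed to another developer along with nothing more than the seed.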
Simulation simplifies the development environment into a self-contained software package that can be used by every software developer, reducing the dependency on temporary physical prototype boards, complex hardware test environments and development tools. Simulation technologies that integrate with an existing software development environment also reduce the costs associated with training. Users who have deployed such simulation technologies report savings of tens to hundreds of millions of dollars in development costs (Figure 2).

Simulation technologies present many benefits for software development, system integration and test. However, to obtain the maximum return over time, companies need to carefully consider the requirements on simulation and modeling technologies. The market is evolving from proprietary internal or commercial solutions to standards-based commercial solutions; a number of different solutions are now available, some standards-based and others not. Early adopters have experienced significant benefits. With the increasing complexity of hardware platforms, including multiple cores, and the growth of software content in electronic devices, the move to new development methods is essential. Software development and hardware development need to be done concurrently, and virtual platforms, or virtual prototypes, are endeavoring to make this happen.

CoWare, San Jose, CA. (408) 436-4740. [].




Enter a World of Embedded Computing Solutions. Attend open-door technical seminars and workshops especially designed for those developing computer systems and time-critical applications. Get ahead with sessions on Multi-Core, Embedded Linux, VME, PCI Express, ATCA, FPGA, Java, RTOS, SwitchFabric Interconnects, Windows, Wireless Connectivity, and much more.

Meet the Experts. Exhibits are arranged in a unique setting so you can talk face-to-face with technical experts. Table-top exhibits make it easy to compare technologies, ask probing questions and discover insights that will make a big difference in your embedded computing world. Join us for this complimentary event! Be sure to enter the on-site drawing for an iPod Video.

Coming to Your Doorstep in '07: Copenhagen, Denmark; Santa Clara, CA; Melbourne, FL; Huntsville, AL; Atlanta, GA; Phoenix, AZ; Albuquerque, NM; Greenbelt, MD; Boston, MA; Milan and Rome, Italy; Dallas, Austin and Houston, TX; Chicago, IL; Minneapolis, MN; Pasadena, CA; Longmont and Boulder, CO; Beijing and Shanghai, China; Ottawa, ON; Helsinki, Finland; Stockholm, Sweden; Montreal, PQ; Shenzhen and Xi'an, China; San Diego and Long Beach, CA; Patuxent River, MD; Tyson's Corner, VA; Detroit, MI; Toronto, ON; Melbourne and Sydney, Australia; Portland, OR; Seattle, WA; Vancouver, BC; Guadalajara, Mexico.


Advertiser Index

Ampro Computers
Critical I/
Diamond Systems
Dynatem, Inc. .......................................... 8
ELMA Electronic, Inc. ............................. 20
Embedded Planet .................................... 45
Embedded Systems Conference
GE Fanuc Embedded Systems ............. 2, 33
General Micro Systems, Inc. .............. 11, 53
Kontron America ................................ 15, 17
Logic Supply
MEN Micro
Microsoft Windows Embedded ............... 34
One Stop
Phoenix International ............................... 4
QNX Software Systems, Ltd. .................. 23
Real-Time & Embedded Computing Conf. ... 65
Red Rock Technologies, Inc. .................. 64 .... www.redrocktech.com
Sealevel Systems .................................... 29
Ultimate Solutions ................................... 31
WIN Enterprises/
RTC (Issn#1092-1524) magazine is published monthly at 905 Calle Amanecer, Ste. 250, San Clemente, CA 92673. Periodical postage paid at San Clemente and at additional mailing offices. POSTMASTER: Send address changes to RTC, 905 Calle Amanecer, Ste. 250, San Clemente, CA 92673.



Single Board Computers With Built-in Data Acquisition More Performance. More Features. More Choices. Fewer boards in a system result in a more reliable, less costly, easier to assemble application. And a true single board solution means single supplier support with no integration surprises. Now, Diamond Systems delivers the industry’s broadest lineup of single board computers with built-in data acquisition — five rugged SBCs ranging from 100MHz to 2.0GHz with state of the art peripherals AND analog and digital I/O.







[Comparison table: five SBC models, spanning form factors from 4.2" x 4.5"; clock speeds from 400/660MHz to 1.0/2.0GHz; memory from 16/32MB to 256/512MB; expansion bus; (4) 2.0 / (4) 1.1 ports; 16 or 32 16-bit analog inputs at 100KHz to 250KHz with FIFO and autocalibration; (4) 12-bit analog outputs; digital I/O; -40 to +85°C operation (1.0GHz model only).]

Visit or call today for more information about how single board CPU plus data acquisition solutions enable more reliable, lower-cost and easier to assemble embedded applications. 650-810-2500 outside North America.
