
Louis DiNardo

Electrical Engineering Community



Copyright 2013, Silicon Frameworks, LLC



















Louis DiNardo PRESIDENT & CEO OF EXAR A conversation about Exar’s market strategy of targeting truly mixed-signal products.

Featured Products


Community Energy Storage Reliability


BY DR. ALI NOURAI AND RICHARD FIORAVANTI WITH DNV KEMA Why community energy storage devices are gaining a great deal of attention in the development of viable energy storage solutions.

Product Overview: MAX17710GB20 Evaluation Kit by Maxim Integrated


This solar-powered, energy harvesting IC can power your next design.

Practical Applications of Quantum Computing BY ALEX TOOMBS WITH EEWEB


How quantum computing can bring great improvements to the fields of mathematics, security, biochemistry, search optimization, and many more.

Selecting a Processor for Your PC


BY ROB RIEMEN WITH EEWEB An overview of key considerations for selecting the right processor for your PC.

RTZ - Return to Zero Comic





EEWeb | Electrical Engineering Community




EEWeb PULSE Could you tell us about your career leading up to joining Exar?

I’ve had a very fortunate career so far—I’ve had the opportunity to work for what I believe to be world-class companies. I started in 1981 with a company called Analog Devices, which was a great place to land right out of college, and it was phenomenal training for a young engineer. I left in 1988 for an opportunity to join what was then a start-up, a company called Linear Technology, which today is certainly well known both for its technical acumen and its financial performance. I spent 13 years with Linear Technology and had the opportunity to work with one of the best CEOs—if not the best—in the semiconductor business, Bob Swanson, who founded the company as a spin-off of National Semiconductor in 1981. These were two very fortunate places to work in the semiconductor space because I got to work alongside some of the gurus in analog. I left Linear in 2000 to try my hand at running a company myself on the advice of Bob Swanson, who said, “If you want to run a big company, go figure out how to run a small company first.” He emphasized the importance of innovation, research and development, and understanding the returns on your investments.

Back in 2000, I joined a very broken public semiconductor company called Xicor. Xicor had done a phenomenal job of capturing the first generation of non-volatile memory solutions. However, the company failed to recognize that memory is a technology that needed to move offshore, with low-cost manufacturing, test, and package assembly offshore, which made it really suffer through the 1990s. When I arrived in 2000, we transitioned the company very quickly—we sold the fab, moved test operations offshore, and hired a world-class analog mixed-signal team with pedigree from Linear Technology, Analog Devices, and National Semiconductor. We were able to build seven analog mixed-signal product lines, which started to get some traction. Intersil acquired us in July of 2004 for well over $500 million. It was a 10x return for shareholders in a three-and-a-half-year period that was really tough for semiconductor companies. I then took over as president and chief operating officer of Intersil and spent two and a half years running the operations there. After that, I spent four years doing venture investing at two venture firms—VantagePoint Venture Partners and Crosslink Capital. I spent most of my venture career investing in semiconductor companies and semiconductor capital equipment, as well as solar and solar capital equipment.

“We’re not married to being analog or being digital, we are just concerned with getting the best performance out of the solution.”

I was approached by Exar in November of 2011. I had known of the company


for a long time, and I knew there was significant IP and significant opportunity to play in the mixed-signal realm with truly differentiated technology. I knew Exar could carve out a piece of the sandbox for itself and avoid getting run over by TI and other very large companies, while bringing really high-value solutions to very challenging problems. This is what makes this line of work fun. I’ve been here only a year and I think we’ve accomplished a lot.

In terms of your product families, how would you sum up Exar’s products?

The overarching thing to remember about Exar is that there are shades of grey, especially in the mixed-signal area. Linear Technology sometimes refers to themselves as a mixed-signal company, but there’s very big “A” and very little “D.” You have other companies that refer to themselves as mixed-signal companies and they’re very big “D” and very little “A.” I think Exar has a very nice balance in truly mixed-signal—we have very strong analog technology and the company has a big history and legacy of doing lots of digital work. We do state machines, we do microcontrollers, and we do software and firmware in order to support our hardware solutions. If you think about all of the products we do and the markets we have chosen, it has that flavor of bringing the best technology solution to solve the problem. We’re not married to being analog or being digital, we are just concerned with getting the best performance out of the solution by leveraging innovation whether it’s analog, digital, or true mixed-signal. We serve three primary markets and we are committed to sticking to our knitting, so to speak. We do networking and storage, we serve

the communications infrastructure space and we have a big presence in the industrial space—anywhere from medical electronics to anything that fits in that industrial segment. It’s about a 40/40/20 split. Everything we do, with respect to our product lines, serves those three markets. It kind of cuts us from a different cloth than some of our peers, or at least those we aspire to consider our peers. We consciously don’t do consumer and we don’t do computing—laptop or desktop. Those are very difficult markets that have major shifts going on. As a relatively small semiconductor company, the consumer market is a very dangerous place to be. We’re quite comfortable focusing on those sticky applications where our products get designed in and will last a long time.

We basically parse our offering into three product lines internally. One is far more digital than analog—big “D”—but it’s a very exciting business and one where we have a very strong team, and that is Data Compression and Security. When you think about network architectures, particularly the sizzle surrounding big data and analytics (the ability to build out data cubes and build data mining applications), there is a disparity between the CPU and its ability to handle input/output operations versus storage and memory—it’s a 100-to-1 differential. When you think about big data analytics, the best solution available today is frankly to use redundant arrays and spend a lot of money for memory that’s not necessarily needed for the footprint, but is needed to alleviate the bottleneck getting into and out of memory with large blocks of data very quickly. We sit on the network side next to the CPU sub-system or network processor—the data gets dished off to our compression engine as an offload processor and we run

“As a relatively small semiconductor company, the consumer market is a very dangerous place to be. We’re quite comfortable focusing on those sticky applications where our products get designed in and will last a long time.”

compression algorithms so that the footprint is smaller and you don’t need as much in the way of redundant memory.

Does Exar have a component business aimed at power management?

This is a very large market—you can think of it as a $15 billion market. We know that it’s too big for us to address in full, so we focus on about a billion dollars as what we believe to be our available market. This includes networking gear like network interface cards and adapter boards. We also have a standard set of analog solutions for communications gear in the industrial space, which is very competitive with our top-tier peers. We have focused on bringing to bear our digital expertise, and this year we introduced a fully programmable four-output switching regulator. With a software package, you can take a 12-volt input and generate four outputs at 1 to 2 volts—so you get four outputs from a single component. These are all programmable—you can sequence them however you want to, which is very important in point-of-load applications. It provides a very fast time-to-market for our customers, and it’s a highly integrated solution

with a lot of capability. If you try to do a single-input, four-output supply with traditional analog circuitry, you’d have a PC board with 147 discrete components on it. If you use our XRP7700 Series, you could reduce that component count to 33 components. It’s a nice, compact, user-friendly, fast time-to-market design. We’re finding success in servers and we’re finding success in industrial control systems. It’s a product line that serves all of our target end markets.

The third business, and frankly our largest and most diverse business, is what we call “connectivity.” It’s kind of an old moniker on a new theme—most companies would call it “Interface Products.” At one end of the spectrum we have RS232 transceivers—drivers and receivers in a single IC. It could be single-ended RS232 or differential RS485. Then you move through that product line into what’s really a bread-and-butter product area for us, which is Universal Asynchronous Receiver Transmitters (UARTs), which have been around for decades—everybody kind of snickers when we talk about this product line, but it’s a growth business. It’s very sticky because it solves last-minute problems; people design boxes and forget that they need to go from PCIe to serial data and the next thing they



Exar’s headquarters in Fremont, CA

know, they need a UART. So we solve a problem that occurs at a very critical time for our customers—it is a very nice gross-margin product line with very long life cycles. We’re dealing with electrical equipment that has almost 20-year life cycles, so it’s a nice, stable business. We’re introducing Bridges, which takes that theme of 0s and 1s of one ilk coming in and going out in a different format: if you add dedicated protocols with physical layer devices, you’re talking about doing USB to Ethernet. It’s no longer embedded in a system with a designer at the system level determining how to move from one format to the other—we give a canned solution that says the protocol on one end is USB and the other end is Ethernet. We provide a physical layer device on the Ethernet side and do all the drivers in software. That brings you into a realm of higher ASPs and more complete solutions, and you’ll


see those products come out in the next quarter or two. Those are the three product areas; we’re doing compression, we’re doing power management and we’re doing connectivity.

How many products and components does Exar have?

We have a lot—we’re a 41-year-old company. I’d say we have around 4,000. It is a catalog business, a lot like Linear Technology. We’d love to one day be considered their peer, but with respect to profile we are a lot like them. We have 14,000 customers worldwide and we have a lot of diversity, which gives us the opportunity to provide consistent performance.

How would you describe the culture at Exar?

We’ve been through a lot of change. I joined the company a little over a year


ago and I think we were a company that had fallen into a rut with a big balance sheet. I think strategy change was too frequent and had little follow-through. As a team, we huddled around early January [2012] and determined what markets we wanted to play in and what product lines we would be supportive of with respect to sales and marketing, R&D, and investments. We have a team that is committed and very high-energy, and a lot of that starts with the team you surround yourself with. I think we have one of the best executive management teams in the semiconductor business. We essentially brought back the entire team that worked together at Xicor and Intersil, and we bolted on new talent and kept the best of breed from the Exar team. It’s a very fast-paced, high-energy and decisive environment. We have a lot to accomplish. ■


Online Circuit Simulator PartSim includes a full SPICE simulation engine, web-based schematic capture tool, and a graphical waveform viewer.

Some features include: • Simulate in a standard Web Browser • AC/DC and Transient Simulations • Schematic Editor • Waveform Viewer • Easily Share Simulations

Try it now! Visit

Technology You Can Trust

Take the Risk out of High Voltage Failure with Certified Avago Optocouplers IEC 60747-5-5 Certified

Optocouplers are the only isolation devices that meet or exceed the IEC 60747-5-5 International Safety Standard for insulation and isolation. Stringent evaluation tests show Avago’s optocouplers deliver outstanding performance on essential safety requirements and exceptional high-voltage protection for your equipment. Alternative isolation technologies such as ADI’s magnetic or TI’s capacitive isolators do not deliver anywhere near the high-voltage insulation protection or noise isolation capabilities that optocouplers deliver. For more details on this subject, read our white paper at:

FEATURED PRODUCTS Smallest, Low-Cost PIC I2C MCU Microchip Technology Inc. announced a new addition to its PIC12/16F15XX 8-bit microcontroller family. The low-cost, low pin count PIC12LF1552 is Microchip’s smallest (2×3 mm UDFN package) and lowest-cost PIC® MCU with hardware I2C™ support, and includes a four-channel 10-bit Analog-to-Digital Converter (ADC) with hardware Capacitive Voltage Divider (CVD) support for capacitive touch sensing. The hardware CVD enables a more efficient implementation of capacitive sensing for touch applications. This “Core-Independent Peripheral” includes additional control logic that enables automated sampling, which reduces software size and minimizes CPU usage. For more information, click here.

32-Bit RX MCU Platform Expansion Renesas Electronics Corporation announced the expansion of its 32-bit RX microcontroller (MCU) platform with the introduction of 120 new models for the successful RX631 and RX63N product groups within Renesas’ RX600 series of MCUs. These new products in the RX63N/RX631 groups offer up to 256 KB of embedded RAM, doubling today’s available RAM size of 128 KB. Thanks to the RX platform concept, these new products are pin-to-pin compatible with existing models, and are therefore recommended as drop-in replacements for customers with larger RAM requirements. Renesas Electronics also provides a robust development environment for the RX63N/RX631 groups. For more information, please click here.

Thyristors for Renewable Energy Applications IXYS Corporation announced that its wholly owned UK subsidiary, IXYS UK Westcode Ltd., launched a new phase control thyristor in a specially designed package suitable for rotating applications. The thyristor, rated at 3000 volts, is based on IXYS UK’s world-class all-diffused silicon process, with the silicon slice fused to a metal disc for optimum thermal and mechanical durability. The device has an average current rating of 1436A at a heat sink temperature of 55°C and is suitable for line frequency applications up to 400Hz. The fully hermetic package has been especially designed to withstand movement of the silicon die when rotated tangential to the plane of the electrodes. The device is available in two different voltage grades. For more information, click here.

Complete DDR4 LRDIMM Chipset Integrated Device Technology announced the availability of the industry’s first complete chipset for DDR4 load reduced dual inline memory modules (LRDIMMs), including both a registered clock driver and data buffer devices. With DDR4 data rates climbing to 3.2 Gb/s and higher, the clear advantages afforded by LRDIMM as a speed-scalable memory technology are expected to drive adoption across a broad array of memory-intensive computing and storage applications. The new IDT 4DB0124 DDR4 data buffer completes a chipset with the already available 4RCD0124 registered clock driver (RCD) to provide complete buffering of command, address, clock and data signals across an LRDIMM. For more information, please click here.


16-Bit Interpolating DAC The Fujitsu MB86060 is a high-performance 12-bit, 400MSa/s Digital-to-Analog Converter (DAC) enhanced with a 16-bit interpolation filtering front-end. Use of novel techniques in the converter architecture delivers high-speed operation consistent with BiCMOS or bipolar devices but at the low power of CMOS. Fujitsu’s proprietary architecture is the subject of several patent applications. Additional versatility is provided by selectable input interpolation filters, programmable dither, and noise-shaping facilities. Excellent SFDR performance coupled with a high conversion rate and low power make this device particularly suitable for high-performance communication systems, in particular direct IF synthesis applications. For more information, please click here.

2-Bit Dual-Supply Translating Transceiver The NTSX2102 is a 2-bit, dual-supply translating transceiver with auto direction sensing that enables bidirectional voltage level translation. It features two 2-bit input-output ports (An and Bn), one output enable input (OE) and two supply pins (VCC(A) and VCC(B)). Both supplies can be set at any voltage between 1.65 V and 5.5 V. This flexibility makes the device suitable for translating between any of the voltage nodes (1.8 V, 2.5 V, 3.3 V and 5.0 V). Pins An and OE are referenced to VCC(A) and pins Bn are referenced to VCC(B). A LOW level at pin OE causes the outputs to assume a high-impedance OFF-state. This device is fully specified for partial power-down applications using IOFF. For more information, click here.

New Custom IGBT-Driver Design Center IGBT driver manufacturer CT-Concept Technologies AG, a Power Integrations company, has opened a design center in Ense, Germany. The facility will develop semi-custom gate-drive designs based on its driver cores and produce full-custom drivers using CONCEPT’s SCALE-2® platform for large projects. SCALE-2 chipsets provide high levels of integration, resulting in low component count and consequently high reliability. Features including direct paralleling and Dynamic Advanced Active Clamping (DA2C) reduce system cost by requiring less de-rating when IGBT modules are used in parallel. For more information, please click here.

Active Clamp Forward PWM Controller The ISL6726 is a highly featured single-ended PWM controller intended for applications using the active clamp forward converter topology in either n- or p-channel active clamp configurations, the asymmetric half-bridge topology, and standard forward topologies with synchronous rectification. It is a current-mode PWM controller with many features designed to simplify its use. Among them are a precision oscillator that allows accurate control of the deadtime and maximum duty cycle, bi-directional synchronization with 180° phase shift for interleaving applications, adjustable soft-start and soft-stop, a low-power disable mode, and average current limit for brick-wall overcurrent protection. For more information, please click here.




Portability & Power All In One...

Debug digital designs on an iPad, iPhone, iPod. Can a logic analyzer be sexy? Watch the video and weigh in...

Logiscope transforms an iPhone, iPad or iPod into a 100MHz, 16-channel logic analyzer. Not only is it the most intuitive logic analyzer available, the triggering is so powerful you’ll be able to count the hair on your bug.

See why our innovation keeps getting recognized. Visit





Dr. Ali Nourai:

Executive Consultant, DNV KEMA

Richard Fioravanti: Vice President, DNV KEMA


As we increasingly turn our attention to developing viable solutions for energy storage, one device continues to gain a great deal of attention and focus in ongoing efforts to deploy storage technologies: community energy storage (CES). CES is a small distributed energy storage unit connected to secondary transformers that serve a few houses or small commercial loads. It is a special energy storage deployment scheme, or platform, that is independent of the storage technology and offers an option for broadly distributed deployment of storage within a few hundred feet of utility customers.




CES utilizes the existing connections of several houses to a single transformer and positions the storage at this common point of connection. This allows the platform to benefit from the houses’ load diversity, which is critical for a cost-effective backup storage technology. CES devices typically range in size from 25-100kW, but aggregation of CES units on a feeder makes them operate like a multi-MW battery at a substation—except that CES offers a much higher level of service reliability. Above is a diagram that shows how CES can be located near the customer but, when aggregated, can act as a multi-MW device.
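The aggregation described above can be sanity-checked with quick arithmetic. A minimal sketch, using the 25-100kW unit sizes quoted in the article; the 2 MW substation-equivalent target is an illustrative assumption, not a figure from the text:

```python
# How many CES units does it take to aggregate into a multi-MW device?
# Unit sizes (25-100 kW) come from the article; the 2 MW target is a
# hypothetical example chosen for illustration.

TARGET_KW = 2000  # assumed 2 MW aggregate target (illustrative)

for unit_kw in (25, 50, 100):
    # ceiling division: round up to a whole number of devices
    units = -(-TARGET_KW // unit_kw)
    print(f"{unit_kw:>3} kW units: {units} devices aggregated on the feeder")
```

So even at the smallest unit size, a few dozen devices spread along a feeder behave like a single substation-scale battery, which is the point of the aggregation diagram.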

Typically, these devices are touted as able to improve reliability by discharging during brief outages, to act as a deferral device to avoid upgrades in transmission systems, and to potentially be utilized as a demand response tool for the grid. However, as seen in recent events in the U.S. where large-scale outages have occurred, there is a lot of focused attention on the benefits of having reliable community energy storage systems.

In previous articles, our team discussed the potential of CES to benefit all segments of the grid, from generation and transmission, through to distribution and end-use customers.

Address Increasing Customer Concerns Regarding Reliability

For electricity end-users, the predominant concerns are generally to receive service that is reliable and reasonably priced. Reliability has always been a challenge to place a value upon, particularly with residential end-users. However, our reliance on personal electronic devices is dramatically increasing, and the “concept” of outages and reliability begins to shift from one associated with inconvenience to one associated with significant impacts on everyday life. As end-users observe the impact of outages caused by hurricanes (such as Hurricane Sandy), winter storms, and other weather patterns, their thoughts may begin to shift away from general concern. They will want to know what their utilities are doing to mitigate such impacts from these events and how they are working to prevent further damage and problems from occurring.

Not only is a CES device a potential application for generation, distribution, and transmission, but it is also applicable to the customer side of the grid. A CES device can help to satisfy these issues and relieve such concerns. The devices can be used to provide a greater level of reliability. For daily use on the grid, the CES device offers better power quality to its customers, covers momentary interruptions caused by utility disturbances, and reduces line loss for the utility. In addition, and in this case more importantly, the device can act as a back-up energy source for the system it is connected to and the houses downstream from the device.

The Direct Reliability Benefits of CES Devices

When the concept of community energy storage was first introduced by Dr. Ali Nourai, the focus was on how the device could help utilities continue to provide electricity at a reasonable price. The ability to “flatten” electricity demand, as well as the ability to reduce spending on substation wires and components, are benefits that have been attributed to CES. As photovoltaic system deployment increases—rapidly, in some territories—the ability of the device to work in concert with rooftop solar has drawn increased attention to CES. By providing increased protection to the grid, CES allows greater numbers of PV systems to be installed. When the sun doesn’t shine, a CES device can maximize renewable energy production by capturing and storing the energy for later use. When this coordination of renewables and storage is mapped onto incidences of outages, the energy can be discharged back to customers to act as a bridge until the grid is restored.

Technical Feasibility

Community energy storage is not a technology, but rather a concept and platform that can utilize multiple battery technologies. Early CES devices used technology and batteries similar to those within plug-in hybrid electric (PHEV) and electric (EV) vehicles. As the devices begin to absorb more of the responsibility of acting as a bridge to cover outages, batteries that possess better “energy” characteristics rather than “power” characteristics need to be incorporated into the CES platform. The number of manufacturers producing CES systems has seen a steady increase over the last three years. The focus to date has mainly been on simply producing a device that can work in the field. The next evolution of the platform may be to start segmenting the characteristics of the system in order to more clearly define the role that CES will play now and in the future.

Continuing the CES Evolution

Whether for power or energy, the components and technologies exist for these devices to be produced and deployed in the field today. While there are no major technical barriers to implementing CES devices, a continued focus on demonstrations is needed to develop and test concepts that address the greater needs in the eyes of customers. Another focus needs to address the lessons we all learn and re-learn each year when we observe the ramifications of another storm and its devastating consequences. The ultimate vision for CES—connected to microgrids and eventually smart grids, so that the platform can be a key player in helping to solve such major issues—is available today and ready to be deployed. ■

About the Authors Rick Fioravanti has over 18 years of experience working with emerging technologies in commercial and consulting roles. With his extensive expertise in emerging technologies, he assists utilities, manufacturers, and state and federal agencies in developing business plans, assessing technologies, managing project development, and creating financial models. In his current post at DNV KEMA, his efforts are directed at energy storage, advanced storage technologies, distributed energy resources, device testing, application modeling, and electric vehicles. Dr. Ali Nourai joined DNV KEMA as an Executive Consultant in 2010 after a 30-year utility career with American Electric Power (AEP), where he launched AEP’s successful sodium-sulfur (NaS) battery program and introduced the concept of Community Energy Storage (CES). Dr. Nourai is an IEEE Fellow, and a board member and former chairman of the Electricity Storage Association (ESA), which is dedicated to promoting the development and commercial application of energy storage technologies as solutions to power and energy problems.

» CLICK HERE to comment on the article.



Making Wireless Truly Wireless: Need For Universal Wireless Power Solution

Dave Baarman Director Of Advanced Technologies






Breathe new life into medical product interfaces

NXP’s proven interface products enable medical and health system designers to add features with minimal modifications. Within our portfolio you’ll find LCD displays and capacitive touch interfaces, system connectivity bridges & UARTs, LED controllers, real-time clocks, and I2C-bus peripherals & enablers. To learn more, visit


Overview of the

MAX17710GB20 Evaluation Kit by

Maxim’s new energy harvesting IC can power your next design. The demand for self-powered devices is rising as the movement towards environmentally friendly energy sources gains momentum. All eyes are on alternative energy solutions like solar and wind for powering devices of all sizes—whether it’s a city, a vehicle, or an integrated circuit. These devices harvest energy from natural, endless sources to enable their operation without ever having to plug into the grid. The MAX17710GB20 evaluation kit by Maxim is centered around the MAX17710 Energy Harvesting Charger and Protector. The kit features the THINERGY® micro-energy cell from Infinite Power Solutions, which is a rechargeable, solid-state, thin-film power solution. By using this energy cell, the board becomes completely self-powered, with no external power sources required. EEWeb Tech Lab got a hold of one of these evaluation kits to try out. We found it to be a great and exciting starting point for those interested in implementing an energy harvesting application. Continue reading for in-depth specs and an overview video.








The Hardware The MAX17710 IC The MAX17710 is a complete system for charging and protecting micropower-storage cells. This 12-pin device comes in an ultra-thin, 3mm x 3mm UTDFN package. The IC really serves two purposes: one is to charge the cell and the other is to protect it from overcharging. To that end, it has a boost regulator to charge from voltages as low as 0.75V. One thing about these energy harvesting applications is that their voltage sources are typically very poorly regulated and can vary quite a bit over time. You can imagine that as a solar cell goes from early-morning sun to noon-day sun, the voltage is going to change quite a bit. On the output, the IC has a selectable voltage on the LDO regulator. You can set that to 1.8V, 2.3V, or 3.3V. You can also set it to operate in low-power or ultra-low-power modes.

Solar Cells The board also has three solar cells from China Solar. In an open circuit they’re going to generate about 2V, and they’re going to provide about 5 microamps of current each. By default, the board operates with all three connected in parallel. However, you can remove and replace jumpers so that you’re only using one or two of them at a time.

Battery Cells Around the back you also have the MEC201 micro-energy cell from Infinite Power Solutions. It acts as a 4V, 1.0mAh battery that the MAX17710 charges.
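Given the figures above—three cells at roughly 5 microamps each and a 1.0mAh battery—a quick back-of-the-envelope calculation gives the order of magnitude of a full solar charge. A rough sketch under ideal assumptions (ignoring converter efficiency, light variation, and self-discharge):

```python
# Best-case full-charge time for the MEC201 from the three on-board
# solar cells, using the figures quoted in this overview. Real charge
# times will be longer: this ignores converter losses, changing light
# conditions, and cell self-discharge.

CELL_CURRENT_UA = 5.0       # ~5 uA per solar cell (from the overview)
NUM_CELLS = 3               # all three cells in parallel (default jumpers)
BATTERY_CAPACITY_MAH = 1.0  # MEC201 micro-energy cell capacity

charge_current_ma = CELL_CURRENT_UA * NUM_CELLS / 1000.0  # 0.015 mA
charge_time_hours = BATTERY_CAPACITY_MAH / charge_current_ma

print(f"Charge current: {charge_current_ma:.3f} mA")
print(f"Ideal full-charge time: {charge_time_hours:.0f} hours")  # ~67 hours
```

The takeaway is that this is a trickle-charge platform: the harvested current is tiny, which is why the MAX17710’s low-power and ultra-low-power output modes matter for the load side.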

Switches and Test Points On the front you also have four switches that allow you to enable or disable your output and set the low current load. You have a number of test points throughout the board that allow you to monitor voltages. You also have this jumper, which allows you to set that output voltage to 1.8V, 2.3V, or 3.3V.

Using a Different Voltage Source If you want to use a voltage source other than the solar cells, connect it either at the solar plus and ground test points, or at the AC+ and AC- test points, depending on the type of source you have.




Start Using the Board To start using this board, you need to connect some jumpers to hook the battery and solar cells up to the IC. First, you might want to make sure your solar cells are working properly. Hook up some leads to monitor the voltage on the unloaded solar cells, which is about 2.5 volts. If you cover them, just to verify that they really are working, you'll see it drop to about 1.7 volts. The first thing you need to do to avoid damage to the IC is connect the battery. Connect jumper two to pins 2 and 3, or 1 and 2; connecting to 2 and 3 will allow you to monitor the charge current later. Then go ahead and connect your solar cells to the IC. If you're using a different voltage source (maybe a piezoelectric source), you would leave the jumper disconnected and connect your source either to the solar plus and ground test points, or to the AC+ and AC- test points. Now the solar cells are connected to the IC and it is charging the battery. By changing the correct jumper, we can set the regulated output voltage: with it open the output will be 3.3V, but you can change it to 1.8V or 2.3V as well. To turn it on, press S1 and the energy cell will provide power to your device.

Watch Video

Conclusion The MAX17710GB20 evaluation kit is a great starting point if you're going to be designing an energy harvesting application. The IC itself is very compact and integrates both a boost regulator for charging and an LDO output regulator with selectable voltages. The kit costs about $200, and the IC itself costs about $4.







Alex Toombs

Electrical Engineering Student

Quantum computing is a field of study in which the strange behaviors of particles at very small scales, collectively known as quantum mechanics, are exploited to create unique computing devices. Phenomena that exist at very small scales allow quantum computers to perform operations that traditional computers, like the processor in your laptop, cannot perform as efficiently. Current computers store information as bits, electronic representations of either one or zero, collections of which can store any type of data. Quantum computers, on the other hand, use quantum properties to represent data, resulting in qubits. Quantum computing promises to bring great improvements to the fields of mathematics, security, biochemistry, and search optimization, among many others.




Ion Trap at the University of Innsbruck (courtesy of Wikimedia user Mnolf)

Qubits In quantum computing, qubits are the analogue of classical computing bits. A qubit can represent a logical one or a logical zero, but it can also occupy any quantum superposition of those two states: a linear combination of one and zero, weighted by probability amplitudes that determine the chance of finding either value when the qubit is observed. Qubits are operated upon by quantum gates, which perform a unitary transformation on the state in order to produce new data. Complicated algorithms may be realized by multiple quantum gates acting upon a qubit. While a classical computer may be able to produce the same data, the inability of classical bits to represent superpositions of states means that quantum computers can perform these operations much more efficiently. Superposition allows massive operations to be achieved through quantum parallelism, where systems of weighted probabilities collapse into an observable list of ones and zeros upon measurement.
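The state-vector picture above is easy to sketch numerically. Below is a minimal simulation of a single qubit acted on by a Hadamard gate (a standard unitary gate, used here as an illustrative example of "quantum gates" in general):

```python
import numpy as np

# A qubit as a 2-component complex state vector; a gate is a unitary matrix.
zero = np.array([1.0, 0.0])                    # the |0> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)

psi = H @ zero                 # equal superposition of |0> and |1>
probs = np.abs(psi) ** 2       # measurement probabilities for each outcome
print(probs)                   # 0.5 chance of observing each of 0 and 1
```

Applying the gate puts the qubit into the weighted combination described above: a measurement yields 0 or 1 with equal probability, even though the state itself is a single deterministic vector.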


Technology has evolved to the point where we can create quantum computers that practically operate on systems of qubits. D-Wave Systems, a quantum computer manufacturer based out of Vancouver, Canada, reported a calculation operating on 84 qubits last January. The same company currently claims to offer 128-qubit computers for $10 million, though many physicists have criticized their theory and implementation. Many different implementations of quantum computers have been achieved by various companies and research organizations. Groups at the University of Michigan in 2005 and at the University of Innsbruck in Austria have built scalable quantum computers using ion traps: devices that confine ions in a vacuum so that each ion's quantum state can be measured and manipulated, including its spin. Similar ion traps implementing a Paul trap have produced the world's most accurate atomic clocks. An ion trap at the University of Innsbruck is shown above.



Quantum Dots Quantum computers have also been fabricated out of quantum dots. A quantum dot is a fabricated structure that traps particles in all three dimensions. This is achieved by creating a potential energy "well," a trap into which particles relax but from which they cannot re-emerge: a particle loses energy and falls into the well, and barring additional energy being introduced to the system, it cannot escape the potential barrier. A two-dimensional representation of a quantum dot with wavefunctions at different quantum numbers (n) is depicted below.

Quantum Entanglement

2D Quantum Dot Representation

(Courtesy of Wikimedia user Papa November)

The same manner of structure can be used to create lasers, solar cells, and photodetectors, in addition to other semiconductor-based devices. As particles trapped in the well relax into lower energy states (represented by lower quantum numbers above), they emit photons at discrete wavelengths: the photon's energy equals the energy distance (the y-axis) that the particle drops, so the emitted wavelength is inversely proportional to that energy gap. Small voltages can be applied across quantum dot structures to provide enough energy for a particle to excite out of a dot or tunnel through an energy barrier, which in the quantum world a particle has a finite probability of traveling through. Additionally, in quantum mechanics particles also act as waves; the consequence is that when a particle approaches a barrier, it can be both reflected and transmitted through simultaneously, albeit with reduced amplitude.
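The energy-to-wavelength relation is just E = hc/λ. As a worked example (the 2 eV transition energy is a hypothetical value chosen for illustration, not a property of any particular dot):

```python
# Wavelength of the photon emitted when a trapped particle drops by a
# given energy gap: E = h*c / lambda, so lambda = h*c / E.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electron-volt

gap_ev = 2.0     # hypothetical transition energy of 2 eV
wavelength_nm = h * c / (gap_ev * eV) * 1e9
print(f"{wavelength_nm:.0f} nm")   # ~620 nm, visible red light
```

Doubling the gap would halve the wavelength, which is exactly the inverse proportionality described above.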

Quantum entanglement is another property by which particles like photons or electrons may physically interact so that, even once they are separated, the quantum numbers of the system remain dependent upon one another. One example involves the behavior of two identical fermions, particles with half-integer spin, which is dictated by the Pauli Exclusion Principle. Electrons are fermions; bosons such as photons are not. The principle states that two fermions cannot share the same quantum state simultaneously: practically, no two electrons in any one atom can share the same set of quantum numbers, such as spin. The implication for entanglement is that modifying or measuring one fermion changes its quantum numbers, as the wave function collapses to a definite state upon observation. The entangled fermion, which may be nowhere nearby physically, also has its quantum numbers modified so that the joint state remains permissible. Quantum dots entangled together allow for quantum computers that operate on systems of multiple qubits.

Using Optics One further interesting implementation of a quantum computer involves optics. Photons and other electromagnetic waves propagate in different modes depending upon a number of variables, namely the material within which they are traveling. Measuring waves traveling through a waveguide can also physically realize qubits, as waves of light are essentially continuous "streams" of photons traveling at the speed of light. Coherent states of light are a specific quantum state in which uncertainty is at a minimum, meaning that the relative dispersions of both position and momentum are small at high energy, though some uncertainty remains in each, as the Heisenberg uncertainty principle dictates. These probabilistic states, which exhibit Poissonian photon-number statistics, allow for probability-based measurements of quantum states. Measurements of the mode and state of electromagnetic waves in a defined medium allow for the realization of qubits. A depiction of the probabilistic nature of different states, including the coherent state, is included below.
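The signature of Poissonian photon-number statistics is that the variance of the photon count equals its mean. That can be checked numerically with a quick simulation (the mean photon number here is an arbitrary choice; the sampler is Knuth's classic algorithm, not a model of any real detector):

```python
import random

random.seed(0)
mean_n = 9.0   # arbitrary mean photon number for the demonstration

def poisson_sample(lam):
    # Knuth's multiplicative algorithm for drawing a Poisson variate.
    limit = pow(2.718281828459045, -lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

counts = [poisson_sample(mean_n) for _ in range(20000)]
m = sum(counts) / len(counts)
v = sum((x - m) ** 2 for x in counts) / len(counts)
print(m, v)   # both come out close to 9: mean and variance agree
```

Squeezed states, by contrast, trade that balance away, narrowing one quadrature's uncertainty at the expense of the other, which is what the photon-distribution figure below illustrates.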

The field of mathematics has especially benefited from the advent of quantum computing. Factoring large integers that are the product of primes is far faster on a quantum computer than on classical computing systems; for instance, some forms of web security like RSA could be broken many times faster with quantum computing than with methods in use today. As another example, the time a brute-force attacker needs to guess an encrypted file's password is, on a quantum computer, proportional only to the square root of the number of possible inputs. Sometimes this speedup can reduce a problem from years to seconds. Quantum computing is an exciting field with a number of possibilities, though it is still very much in its infancy. Quantum mechanics is mathematically well understood after nearly ten decades of progress, but implementations of the properties that govern the physics have proven difficult: the quantum computers manufactured today are inefficient, difficult if not impossible to scale, and far from replacing more easily fabricated classical computing systems. The implications that quantum computing holds for cryptography and mathematics, in addition to uses as yet undiscovered, promise that work will continue on improving quantum computing systems. ■
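The square-root speedup (Grover's algorithm) is easy to quantify. This sketch compares query counts for a hypothetical 8-character alphanumeric password space; the numbers count oracle queries only and ignore all practical overhead:

```python
import math

# Classical brute force examines ~N candidates in the worst case;
# Grover's algorithm needs on the order of sqrt(N) quantum queries.
N = 62 ** 8                      # hypothetical 8-char alphanumeric space
classical_queries = N
grover_queries = math.isqrt(N)   # integer floor of sqrt(N)

print(f"classical: {classical_queries:.2e}")   # ~2.2e14 queries
print(f"grover:    {grover_queries:.2e}")      # ~1.5e7 queries
print(f"speedup:   {classical_queries / grover_queries:.2e}x")
```

A ten-million-fold reduction in queries is the kind of gap that turns "years" into "seconds" for search problems of this size.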

About the Author Alex Toombs is a senior electrical engineering student at the University of Notre Dame, concentrating in semiconductor devices and nanotechnology. His academic, professional and research experiences have exposed him to a wide variety of fields; from financial analysis to semiconductor device design; from quantum mechanics to Android application development; and from low-cost biology tool design to audio technology. Following his graduation in May 2013, he will be joining the cloud startup Apcera as a Software Engineer.

» CLICK HERE to comment on the article.

Measured Photon Distributions in Squeezed and Coherent States (Courtesy of Wikimedia user Gerd Breitenbach)




Online Circuit Simulator PartSim includes a full SPICE simulation engine, web-based schematic capture tool, and a graphical waveform viewer.

Some Features include:

• Simulate in a standard Web Browser • AC/DC and Transient Simulations • Schematic Editor • WaveForm Viewer • Easily Share Simulations

Try-it Now!






Rob Riemen
Computer Engineering Student at the University of Cincinnati

The microprocessor is the heart of every computer built today. It is the workhorse that helps run and manage every application and program. Because of its importance, it is essential that every user picks the processor first when constructing a computer-based system.







But how do you go about choosing a processor? There are so many different specifications associated with each processor, and on top of specifications, the type of project factors into which type of processor will be needed. To pick the best processor for any scenario, there are a few key qualities to consider. A whole book could be written about current microprocessors and their architecture. To simplify, the operating frequency (clock speed), the cache, and the number of cores are the three most important qualities of today's mainstream desktop microprocessors. These three make the most difference to the overall performance of a computer.

MULTI-CORE COMPUTING Computer architecture has evolved significantly over the years. There are many different architectures, but the same ones don't stay around for very long; to outclass the competition, new, innovative architectures have to be developed every year. The most popular is the x86 architecture, from which most of today's processors derive at some point. Some modern microarchitectures include Prescott, Nehalem, Sandy Bridge, Bobcat, and Bulldozer. These are just a few examples of the microarchitectures that Intel and AMD have produced over the past several years.

The important characteristics that make up an architecture are the number of cores, the CPU clock rate, the L3 cache, and the thermal design power (TDP).

The major chip makers change the structure and capability of each architecture they produce. The most important architectural development of the past several years is the multi-core processor. Up until the mid-2000s, rising operating frequency was more than enough to keep pace with the programs being produced, but rising temperatures and frequency limits put a damper on solely increasing operating frequency. To combat this, engineers began increasing the number of cores on a chip: the chip is fabricated with 2-8+ cores, each a central processing unit (CPU) in its own right that communicates across the chip with the other cores and the other units on the processor, increasing overall speed for programs that take advantage of parallel computing. When selecting a processor, it is generally more beneficial to go with more cores. Processing power increases with each added core, allowing one program to run across multiple cores as well as multiple programs to run on separate cores. All in all, multiple cores give a huge step up in performance and are a key specification in the selection of modern processors.



Dual Core Setup


Multi-Core, Single Cache Processor Die

OPERATING FREQUENCY As highlighted in the architecture segment, operating frequency plays a key role in processor performance. Every microprocessor is driven by an oscillator whose frequency is set by an oscillator crystal; this oscillator output is typically converted through electronic circuitry into a square wave. The square wave is crucial to the execution of programs inside the processor: its pulses clock the digital logic, letting the processor read and execute instructions encoded as 1's and 0's. Only a limited number of instructions can be executed per clock cycle, which increases the importance of raising operating frequencies. The higher the operating frequency, the more instructions can be processed, and the more work can be done in a shorter amount of time. Today's processors operate anywhere from 2 GHz to 4 GHz, which means that on average a computer can execute around 125,000 million instructions per second (MIPS).

On average, a computer can execute around 125,000 million instructions per second.

A processor purchased based solely on this quality would be able to run a few programs and applications, but obviously we don't just expect to surf the web with one tab open. With the advent of multi-core processing, throughput multiplies. Each core runs at the speed provided in the manufacturer's specifications, which means the same sequence of operations could potentially run simultaneously on different cores. To put it into perspective, simultaneous processes can be running on separate cores, giving the processor the ability to tackle multiple processes at once. The functionality of current processors greatly improves with more cores, not because the clock itself runs faster, but because more instructions execute per clock cycle overall.
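The throughput figure above can be sanity-checked with simple arithmetic. In this sketch, the clock speed, instructions-per-cycle, and core count are round illustrative numbers, not the specifications of any particular chip:

```python
# Rough throughput estimate: frequency x instructions-per-cycle x cores.
# All three inputs are illustrative assumptions, not vendor figures.
freq_hz = 3.5e9   # 3.5 GHz clock
ipc = 4           # assumed instructions retired per cycle, per core
cores = 8

mips = freq_hz * ipc * cores / 1e6   # million instructions per second
print(f"{mips:,.0f} MIPS")           # 112,000 MIPS, near the figure quoted
```

The product form of the estimate is the point: doubling the core count moves the total as much as doubling the clock would, which is why core count now carries so much weight in processor selection.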


THE CACHE The cache is the third most important component associated with a processor. It acts as a smaller, faster memory: the processor stores frequently used data there, which eases data transfer and improves efficiency by reducing the average time to access memory. Modern processors contain multiple caches, most commonly arranged as a multi-level hierarchy. Usually there are three levels of cache: Level-1 (L1), L2, and L3. These levels run in conjunction with each other to store the most important data that the processor will use. The processor checks the L1 cache for the data it needs, and proceeds to the next level of cache if it can't find the data there.
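The lookup order described above can be sketched as a toy model: probe L1, then L2, then L3, then fall back to main memory, accumulating latency at each step. The cycle counts and cached keys here are invented for illustration:

```python
# Toy multi-level cache lookup. Each level is (name, contents, latency);
# all contents and latencies are illustrative, not real hardware figures.
LEVELS = [
    ("L1", {"pc": 0x1000}, 4),
    ("L2", {"sp": 0x2000}, 12),
    ("L3", {"heap": 0x3000}, 40),
]
MEMORY_LATENCY = 200  # assumed main-memory access cost in cycles

def lookup(key):
    """Return (where the data was found, total cycles spent searching)."""
    spent = 0
    for name, contents, latency in LEVELS:
        spent += latency          # every probed level costs its latency
        if key in contents:
            return name, spent
    return "memory", spent + MEMORY_LATENCY

print(lookup("sp"))    # hit in L2 after probing L1 and L2
print(lookup("disk"))  # miss at every level, fall through to memory
```

Even in this toy version, a full miss costs an order of magnitude more cycles than an L1 hit, which is why cache size and hit rate matter so much to overall performance.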

When selecting a modern processor, it is most beneficial to look at the L3 cache. A lot of the operating system's main functions are stored in the L1 and L2 caches, leaving the L3 cache to pick up the slack. With an L3 cache between 6MB and 12MB, the processor should be able to handle all of today's applications.

CONCLUSION By weighing core count, operating frequency, and cache size, it is easy to find a processor that will fit your needs. Multi-core capability increases the usefulness of a high clock speed, since each core can be executing instructions at the factory-set clock speed, while the L3 cache stores the important data and hands it to whichever core needs it next. Every modern processor lists these details in its specifications. When selecting a processor, consider the relationships between these three components and how they can benefit functionality during runtime. ■

SOURCES
Intel. The First Nehalem Processor. Digital image. Intel.com. Intel, 17 Nov. 2008. Web. 10 Feb. 2013. <http://Nehalem_Die_callout.jpg>.
Schmitz, Dennis. Generic Dual Core. Digital image. Wikipedia. N.p., 8 June 2007. Web. 19 Feb. 2013. <http://Dual_Core_Generic.svg/1000px-Dual_Core_Generic.svg.png>.

The cache is electrically implemented in the chip during fabrication. With the introduction of multi-core processing, it was thought that giving each core its own full cache would be the more useful way to share data; however, that would require more wiring and increase latency across cores. So modern processors keep a single shared cache hierarchy rather than splitting it across the cores.



» CLICK HERE to comment on the article.



EEWeb Pulse - Volume 95