
Even with the growing use of IP blocks, however, the rapid rise in the cost to design and verify new hardware has made access to leading-edge electronics prohibitively expensive for all but the largest companies. The Circuit Realization at Faster Timescales (CRAFT) program was conceived to explore solutions to this problem through the use of automated generators to rapidly create new circuits and accelerate the design cycle. Recently, researchers in the CRAFT program demonstrated a design flow that leveraged automated generators to produce digital circuits seven times faster than traditional methods allow. Put another way, these tools enabled small design teams to be just as productive as teams seven times their size. Maintaining forward momentum beyond the imminent Moore’s Inflection will require pushing the limits of machine learning to extend automation into every aspect of circuit design. Two new programs in the ERI Design thrust, inspired by Moore’s prescience, aim to explore machine-centric hardware design flows that can support the physical layout generation of complex electronic circuits with “no human in the loop” and in less than 24 hours. To facilitate the reliable reuse of circuit blocks and to engage the collective brain power of the open-source design community, these efforts will seek to leverage new simulation technologies and applied machine learning to verify and emulate circuit blocks. With enhanced design automation tools like these, the barrier to entry for a growing number of innovators will shrink, unleashing an era of unprecedented specialization and capability in electronics technologies.

Materials and Integration


“… build large systems out of smaller functions, which are separately packaged and interconnected.” – Gordon Moore, 1965

A central challenge to managing modularity is how to properly interconnect the growing number of functional blocks without degrading performance. Since 2000, not only has the number of transistors per chip grown from 42 million to 21 billion,[11] but the number of IP blocks on that same chip has increased more than tenfold[12] as well. These functional blocks are also increasingly a mixture of digital and analog circuits, often made from vastly different materials, further complicating the challenge of integration. To realize Moore’s vision of building larger functions out of smaller functional blocks, we need to find new ways for dissimilar blocks to connect and communicate with one another.

Moore’s predictions regarding materials and integration are already being realized in DARPA’s Common Heterogeneous Integration and IP Reuse Strategies (CHIPS) program. This research effort seeks to develop modular chip designs that can be rapidly assembled and reconfigured through the integration of a variety of IP blocks in the form of prefabricated chiplets. These chiplets will leverage standard layouts and interfaces to link together easily. The program recently announced that Intel® will contribute its proprietary interface, and the relevant IP associated with it, to serve as the program’s standard interface. Intel’s direct participation will help ensure that the program’s various IP blocks can be connected seamlessly. This is a major step toward the creation of a national interconnect standard that would enable the rapid assembly of large modular systems.

What is often left out of the story behind the growing number of transistors is the parallel rise in the number of interconnects required to shuttle data back and forth across the chip. This explosion of wires has not only complicated the design process but has also created longer and more convoluted paths for data to travel. To get a sense of scale: if all the wires in a modern chip were laid end to end, they would span more than 21 miles. For most computing architectures, which separate the central processing unit (CPU) from memory, moving data across this growing tangle of wires severely limits computational performance. The conundrum even has its own name: the “memory bottleneck.” When executing a machine-learning algorithm on a leading-edge chip, for instance, more than 92 percent of the execution time is spent waiting to access memory. With the vast number of circuit combinations made possible by new standard interfaces, and the performance limitations of current interconnects, we must ask: What role can new materials and radically new architectures play in addressing these challenges?

ABOVE LEFT: “Moore’s Inflection” – a point marked by arrows on the diagram where priorities set today will determine whether advances in electronics will begin to slow and stagnate or where new innovations will catalyze another long run of dynamic and flexible technological progress. ABOVE: The CHIPS program is pushing for a new microsystem architecture based on the mixing and matching of small, single-function chiplets into chip-sized systems as capable as an entire printed circuit board’s worth of chips and components.
In response to this question, one of the new programs under the ERI Materials and Integration thrust plans to explore the use of vertical, rather than planar, integration of microsystem components. By leveraging the third dimension, designers can dramatically reduce wire lengths between essential components such as the CPU and memory. Simulations show that 3-D chips fabricated using older 90-nm nodes



DARPA: Defense Advanced Research Projects Agency 1958-2018  
