ARCHITECTURE & VLSI // PARALLEL COMPUTING // MOBILE & EMBEDDED SYSTEMS // NETWORKING & OS // PROGRAMMING LANGUAGES
ARCHITECTURE & VLSI

Computer architecture and VLSI design are inextricably intertwined. At Utah, architecture and VLSI researchers are tackling issues at the intersection of these fields, including multiple efforts to understand and reduce the architectural impact of interprocessor communication and a project designing custom hardware for interactive ray tracing. (www.cs.utah.edu/arch-research)

On modern multi-core chips, it is critical that on-chip interconnects and coherence protocols enable fast and power-efficient data transfers between parallel threads. Professor Balasubramonian’s research focuses on architectural mechanisms, such as heterogeneous wire technology, that improve the efficiency of on-chip communication.

Professors Brunvand and Davis, in conjunction with computer graphics colleagues, are designing special-purpose hardware for ray tracing, a form of computer graphics that generates much higher-quality and more realistic images than commodity graphics chips. The resulting processor employs multiple ray tracing pipelines and builds on previous work designing domain-specific processors that support run-time configuration of the datapath. This allows it to operate very close to the speed and power efficiency of a fully custom pipeline, yet with enough programmability to support a variety of ray tracing algorithms.
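The core operation a ray tracing pipeline evaluates, many millions of times per frame, is a ray–primitive intersection test. As an illustration only (this scalar Python sketch bears no resemblance to the group's actual hardware datapaths), here is the standard ray–sphere intersection:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest intersection of the ray
    origin + t*direction with a sphere, or None if the ray misses.
    Solves the quadratic |origin + t*direction - center|^2 = radius^2."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2.0 * a)
    return t if t >= 0 else None         # intersections behind the origin don't count

# A ray from the origin along +z toward a unit sphere centered at z = 5
# first hits the surface at t = 4.
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```

A full renderer repeats this test against every object (via an acceleration structure) for every pixel's ray, which is why dedicated pipelines for it pay off.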
PARALLEL COMPUTING

We are entering the multi-core era, in which every computer, whether embedded, laptop, desktop, server, or supercomputer, is a parallel computer. As parallel computing reaches the masses, faculty at Utah are developing new courses and expanding their research to embrace the changes in programming tools and systems software that must arise in response to this paradigm shift, in collaboration with the architecture and VLSI research described above. In particular, the newly formed Center for Parallel Computing at Utah (CPU) fosters collaborations between School of Computing faculty and faculty across campus on a variety of correctness and performance challenges facing the parallel computing community. We now describe some specific efforts.

Professor Hall is developing performance tuning tools, called autotuners, designed to ease the programming burden in the face of the growing complexity and diversity of modern computer architectures. Autotuners experiment with a set of alternative strategies for mapping application code to hardware and automatically select the mapping that yields the best performance. Such tools increase programmer productivity by reducing the effort of porting to new architectures and by empowering the programmer to maintain code that is simpler and architecture-independent.

16 >> 2009 & 2010 REPORT

Advances in parallel computing are ultimately tied to delivering correct and efficient systems. The Gauss group led by
Professors Gopalakrishnan and Kirby is engaged in researching and developing formal analysis tools that analyze and help debug parallel and concurrent programs. One of their tools, ISP (In-Situ Partial Order Analysis), can analyze large-scale message-passing programs written using the MPI library. A framework called GEM (Graphical Explorer of MPI programs) enhances one’s ability to use ISP on complex programs and has been officially released as part of the Eclipse Parallel Tools Platform. The Gauss group has also built tools for debugging GPU kernels through symbolic analysis, as well as a host of other tools for debugging multi-core communication, thread libraries, and cache coherence protocols.

Professor Gopalakrishnan, director of the CPU center, is also active in concurrency education and collaborates with Microsoft Research on developing the Practical Parallel and Concurrent Programming (PPCP) curriculum.
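To make the autotuning idea described above concrete, here is a toy sketch in Python: the `block` parameter stands in for a code-mapping decision such as a tile size, and the tuner simply times each candidate variant and keeps the fastest. (The kernel, the parameter space, and the selection loop are all illustrative; real autotuners such as those in Professor Hall's work search far richer spaces of loop transformations and data layouts on real hardware.)

```python
import timeit

def make_sum_variant(block):
    """Return a kernel that sums a list in chunks of `block` elements.
    The block size is a stand-in for a hardware-mapping parameter."""
    def kernel(data):
        total = 0
        for i in range(0, len(data), block):
            total += sum(data[i:i + block])
        return total
    return kernel

def autotune(candidates, data, repeats=5):
    """Time each candidate variant on the given workload and return the
    (best_time, best_param) pair with the smallest observed runtime."""
    results = []
    for block in candidates:
        kernel = make_sum_variant(block)
        t = min(timeit.repeat(lambda: kernel(data), number=1, repeat=repeats))
        results.append((t, block))
    return min(results)

data = list(range(100_000))
best_time, best_block = autotune([64, 512, 4096], data)
print("best block size:", best_block)
```

Taking the minimum over several timing repeats reduces noise from other processes; the winning parameter can then be baked into the generated code for that machine.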
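Why must such verification tools consider many schedules at all? A sketch of the underlying problem in Python: two threads each perform a non-atomic increment, and only by exploring every interleaving of their steps does the lost-update outcome show up. (This shared-memory toy is just an analogy; ISP itself intercepts MPI calls and, as its name suggests, uses partial-order techniques to avoid brute-force enumeration.)

```python
def make_thread():
    """Two atomic steps forming one non-atomic increment of shared x:
    read x into a thread-local value, then write that value + 1 back."""
    local = {}
    def read(state):
        local["v"] = state["x"]
    def write(state):
        state["x"] = local["v"] + 1
    return [read, write]

def interleavings(a, b):
    """Yield every interleaving of step lists a and b, preserving program
    order within each thread -- the schedules a checker must cover."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

schedules = list(interleavings(make_thread(), make_thread()))
outcomes = set()
for sched in schedules:
    state = {"x": 0}
    for step in sched:
        step(state)
    outcomes.add(state["x"])

print(len(schedules), sorted(outcomes))  # 6 [1, 2]: one class of schedules loses an update
```

With just two 2-step threads there are already six schedules; the count explodes combinatorially with program size, which is why practical tools prune equivalent schedules rather than enumerate them all.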