The Future of Computing — Helluva Engineer Magazine
WEARING THE FUTURE
Engineering new wearable tech to sense, respond, and heal
DIGITAL DOPPELGÄNGERS
Computerized replicas of cities are helping engineers save lives
BETTER, NOT JUST BIGGER
Our engineers are looking at data center power needs and efficiency
‘EXPLODING’ CAPSULES COULD DELIVER INSULIN WITH NO NEEDLE
Georgia Tech engineers have created a pill that could effectively deliver insulin and other injectable drugs, making medicines for chronic illnesses easier for patients to take, less invasive, and potentially less expensive.
Along with insulin, it also could be used for semaglutide — the popular GLP-1 medication sold as Ozempic and Wegovy — and a host of other top-selling protein-based medications like antibodies and growth hormone that are part of a $400 billion market.
These drugs usually must be injected because they can’t overcome the protective barriers of the gastrointestinal tract. Georgia Tech’s new capsule uses a small pressurized “explosion” to shoot medicine past those barriers in the small intestine and into the bloodstream.
“It was important to us not to turn this capsule into a complex device or machine,” said Mark Prausnitz, Regents’ Professor and Entrepreneur and J. Erskine Love Jr. Chair in the School of Chemical and Biomolecular Engineering.
“Others have made mechanical devices for protein delivery that you can stick in your mouth or swallow, but they are costly and complicated. We wanted to make a capsule that uses a simple pharmaceutical formulation that is inexpensive to manufacture but has the power of a mechanical device to increase drug delivery.”
‣ JOSHUA STEWART
Helluva Engineer is published semiannually by the College of Engineering at the Georgia Institute of Technology.
INTERIM DEAN
Doug Williams
ASSOCIATE DEANS
Matthieu Bloch
Associate Dean for Academic Affairs
Kim Kurtis
Associate Dean for Faculty Development and Scholarship
Our researchers are finding new ways to shrink transistors, make systems more efficient, and design better computers to power technologies not yet imagined.
From smart textiles to brain-computer links, Georgia Tech engineers are designing wearables that connect humans and machines more closely than ever to sense, respond, and heal.
Engineers are building computerized replicas of cities, and even Georgia Tech’s campus, to save lives and create a better, more efficient world for all of us.
Engineers are thinking about how notoriously resource-hogging data centers can be more efficient and how they’re influencing our future power needs.
Allison Carter, Kemi Griffin, Chris McKenney, Gary Meek
If you wish to change your Helluva Engineer subscription or add yourself to our mailing list, please send a request to editor@coe.gatech.edu.
FROM THE DEAN
Dear Friends,
As this issue rolls off the printing presses, I’m settling into a new role at Georgia Tech as provost and executive vice president for Academic Affairs. So, I especially relish the opportunity to greet you a final time as dean and reflect on what has been an extraordinarily fulfilling five years.
Serving as dean and Southern Company Chair has been an incredible honor — leading my alma mater and the nation’s largest program of its kind. Together, we’ve accomplished so much since 2021:
• We’re having more impact on the world than ever: Research awards grew from $263 million to a record $312 million.
• We’re serving more learners than ever: Enrollment rose from 18,000 to nearly 22,000 students.
• Demand for a Georgia Tech engineering degree is higher than ever: Applications jumped from 20,600 to 33,000.
Our programs continue to rank among the nation’s best. The undergraduate program is No. 3 overall and tied for No. 1 among public programs, and our graduate program is No. 4, according to U.S. News & World Report. From 2021–2025, four individual programs have been ranked No. 1: aerospace, biomedical, environmental, and industrial.
None of this would have been possible without you. The passion and commitment from our alumni, faculty, students, and staff have built a community that is second to none.
Leadership in higher education is challenging; this is a pivotal moment for universities and their role in society. I feel called to step forward, and that’s why I pursued the opportunity to serve as Georgia Tech’s chief academic officer. I am deeply grateful for the chance to make a broader impact on the Institute.
Still, leaving the dean’s position comes with mixed emotions — and these pages are full of examples of why.
In this edition of our magazine, we look at the future of computing. From building digital twins of streets or whole cities, to reducing the burdens of data centers, to designing more powerful and efficient semiconductors, our engineers are redefining how computing technology will shape our lives and our futures.
Although my office has moved from Tech Tower to the building next door, my heart remains in the College of Engineering. Thank you for everything you’re doing to make us the best in the nation.
Go Jackets!
in the field
AE Opens New Aircraft Prototyping Laboratory
The Daniel Guggenheim School of Aerospace Engineering celebrated the opening of a new hangar facility in September with a ribbon-cutting and open house. The 10,000-square-foot Aircraft Prototyping Laboratory was built to accelerate innovation in electric and hybrid-electric aircraft propulsion as well as autonomous flight systems.
“This facility demonstrates Georgia Tech’s long-term commitment to pioneering the technologies that will shape the future of aviation,” said Ángel Cabrera, president of Georgia Tech. “Aerospace products are Georgia’s No. 1 export, and the Institute’s top-ranked Guggenheim School produces some of the nation’s top aerospace engineering talent. With this advanced laboratory, we’re making strategic investments that will grow our state’s and our Institute’s national leadership in aerospace innovation and advanced manufacturing.”
Designed as a hands-on research and teaching environment, the lab includes a suite of specialized facilities: an electric powertrain lab, a propulsion system test cell, an avionics lab, a composites fabrication area, and a high-bay integration space capable of housing prototype aircraft with wingspans up to 20 feet.
One of the facility’s first major projects is a collaboration with NASA to design, build, and fly an electric vertical takeoff and landing (eVTOL) research aircraft called RAVEN. Other projects underway include a solar-electric aircraft demonstrator and an eVTOL testbed focused on developing software for safety-critical applications.
‣ ANGELA BARAJAS PRENDIVILLE
ME Receives Record $100M Gift
A historic $100 million bequest from late Georgia Tech alumnus John W. Durstine will seed bold ideas, support faculty excellence, and expand student experiences in the George W. Woodruff School of Mechanical Engineering.
It’s the largest single gift in Tech’s history, and its focus is supporting faculty members. The bequest will allow Georgia Tech to attract both early career rising stars and internationally recognized pioneers, keeping the Woodruff School at the forefront of world-class teaching and research for generations to come.
“John Durstine’s historic generosity is deeply inspiring to all of us working to carry out the Institute’s mission,” said Ángel Cabrera, president of Georgia Tech. “John could have left his estate to many good causes,
and he chose to invest in Georgia Tech’s faculty because he knew firsthand the transformative impact that our outstanding faculty have in the lives and careers of our students. His legacy will live on in every discovery, every innovation, and every student who learns from the faculty his gift supports.”
Originally from Birmingham, Alabama, Durstine enrolled at Tech in the 1950s to study mechanical engineering — a decision he often credited with shaping the trajectory of his life. After graduation, he earned an MBA from Harvard Business School and joined the Ford Motor Company, where he spent more than three decades shaping truck and light vehicle design, powertrain strategy, and advanced systems engineering.
“John valued integrity, precision, and results — qualities that define the best engineers,” said William J. Wepfer, professor emeritus and former chair of the Woodruff School. “His gift is as strategic as it is generous, aimed squarely at ensuring Georgia Tech remains a leader in mechanical engineering far into the future.”
The landmark gift is also part of Transforming Tomorrow: The Campaign for Georgia Tech, a $2 billion effort running through 2027 to provide vital resources for Tech students, faculty, programs, and facilities across campus.
‣ ABBIGAIL TUMPEY
Above, left: John Durstine (front row, second from right) was inducted into the Engineering Hall of Fame in 2014.
Above, right: Durstine with Gary May, former dean of the College of Engineering, at the 2014 event.
Inset: Durstine as an undergraduate.
Clark Scholars Program Expands, Thanks to $11M Investment
The A. James & Alice B. Clark Foundation is making an additional $11 million investment that will effectively double the number of students supported through the College’s Clark Scholars program.
In addition to need-based scholarships for undergraduates, the gift also will support programs that enhance the student experience and a new philanthropy challenge aimed at teaching the next generation of philanthropists.
“Georgia Tech’s commitment to excellence and its unwavering support for students align perfectly with the Clark Foundation’s mission,” said Courtney Clark Pastrick, board chair of the Clark Foundation. “This investment will allow Georgia Tech to reach
even more talented engineering students, enrich their educational journey, and instill in them the values of service and philanthropy that were so important to my father.”
The Clark Foundation’s initial investments in 2018 included endowed funding that covers unmet need for 10 new students each year. Each cohort is supported throughout their undergraduate career, resulting in 40 to 50 actively supported students at any given time. This latest investment will elevate that number to upwards of 100 once the additional funding is fully deployed.
Clark Scholars meet with Georgia Tech leadership and alumni mentors, receive tutoring, participate in career planning, and volunteer for community service projects.
‣ BEVERLEY SYLVESTER
Undergrad Engineering Program 3rd Again
The College of Engineering reclaimed the No. 3 position among the nation’s undergraduate engineering programs in the latest rankings from U.S. News & World Report.
The College moved up one spot on the 2026 Best Colleges list released in September. The ranking puts the program at No. 1 among public universities (tied with the University of California, Berkeley).
The new list marks the second time in three years the College’s overall programs have ranked third, its best-ever position on the annual list.
It’s also the first time three specific engineering disciplines ranked first in a single year. The full list:
• Biomedical Engineering – 1st
• Environmental Engineering – 1st
• Industrial and Systems Engineering – 1st (25 consecutive years)
• Aerospace Engineering – 2nd
• Chemical Engineering – 2nd
• Civil Engineering – 2nd
• Electrical Engineering – 3rd
• Materials Engineering – 3rd
• Mechanical Engineering – 4th
• Computer Engineering – 6th
‣ JOSHUA STEWART
The 2025 cohort of Clark Scholars.
Seashells Inspire Better Plastic Recycling
Researchers have created a material inspired by seashells to help improve the process of recycling plastics and make the resulting material more reliable.
They used chopped-up recycled plastic from industrial stretch wrap to create layered composite structures. Their product greatly reduced the variability of mechanical properties typically found in recycled plastic. It also performed as well as the original plastic materials.
The researchers said their bio-inspired design could help cut manufacturing costs of virgin packaging materials by nearly 50% and offer potential savings of hundreds of millions of dollars. And, because less than 10% of the 350 million tons of plastics produced each year is effectively recycled, the Georgia Tech approach could keep more plastic out of landfills.
Aerospace engineering assistant professor Christos Athanasiou led the study, which was published in the journal Proceedings of the National Academy of Sciences.
‣ JASON MADERER
Above: Christos Athanasiou holds the recycled-plastic composite.
Right: The “brick and mortar” structure of the material was inspired by the architecture of nacre, found in some seashells.
Paralyzed Veteran Returns to Georgia Tech for Ph.D.
In 2012, Ignacio Montoya was about to graduate from Georgia Tech and become a fighter pilot in the Air Force. Then he got into a motorcycle accident that left him paralyzed from the chest down.
Ever since, he has worked to better understand his injury and his options. After earning a master’s in biomedical engineering from Tech in 2018, Montoya moved to Los Angeles and joined a lab at UCLA known for pioneering spinal stimulation and activity-based training to restore movement after paralysis. Now he’s bringing everything he’s learned back to Georgia Tech, where he started this fall on a biomedical engineering Ph.D. with Associate Professor Cassie Mitchell.
“My experience as a research participant gives me a unique perspective as I transition into a doctoral researcher,” Montoya said. “It helps me bridge the gap between understanding the science and translating it into real-world clinical practice.”
Montoya’s research uses artificial intelligence to study how robotic exoskeletons and spinal cord stimulation can reawaken dormant neural circuits
and help people with paralysis regain sensation, mobility, autonomy, and vital physiological functions once thought permanently lost.
His work with the UCLA team has borne out the possibilities: Montoya himself has regained some function in his paralyzed right arm. He has also reversed many common medical complications of paralysis, recovering temperature regulation, body awareness, sexual function, bone density, muscle mass, and digestive health.
“My injury is no longer considered complete, and I believe I’m the first person to achieve that through a combination of spinal stimulation, intensive training, and daily weight-bearing rehabilitation,” Montoya said. “I’m constantly out of my wheelchair — standing, moving, and training. That consistency has been the key. Every day, I walk in an exoskeleton.”
This may not have been the path Montoya expected to take when he left Georgia Tech in 2012, but his return brings the journey full circle.
“I’m back where my journey paused — this time to push the boundaries of what we believe the human body and spirit can achieve,” he said.
‣ MICHELLE AZRIEL
Better Batteries by Breaking Rules
Fast charging a battery is supposed to be risky — a shortcut that leads to battery breakdown. But for a Georgia Tech team studying zinc-ion batteries, fast charging led to a breakthrough: It made the battery stronger.
Zinc-ion batteries have several key advantages over lithium-ion batteries, the most commonly used rechargeable battery technology. Zinc is abundant, cheaper, safer, less toxic, and easier to recycle. However, zinc-ion batteries have a major drawback: the growth of dendrites, sharp metal deposits that form during charging and can eventually short-circuit the battery.
Not so in the design from Hailong Chen, associate professor in the George W. Woodruff School of Mechanical Engineering.
“We found that using faster charging actually suppressed dendrite formation instead of accelerating it,” Chen said. “It’s a very different behavior than what we see in lithium-ion batteries.”
Instead of dendrites, the zinc settles into smooth, compact layers — more like neatly stacked books than splintered shards — a structure that not only avoids short circuits but also helps the battery last longer.
Still, Chen said, the discovery only solves half of the problem.
A battery has two main ends, the anode and the cathode. Chen’s team made the anode last much longer. Now, the cathode must catch up. He is working to improve the cathode so the whole battery performs reliably over time. His team is also experimenting with mixing zinc with other materials to make zinc-ion batteries even more durable.
If all goes well, Chen said zinc-ion batteries could be ready for everyday use in about five years.
‣ TESS MALONE
K-12 Students Get Hands-on at 2nd STEM Fest
In one corner of McCamish Pavilion, kids sat in an airplane simulator. Across the floor, you could feel a cool breeze as another group learned how wind can generate electricity. Under a large inflatable dome in the middle, students and parents learned about stars and planets in a mini planetarium.
They were among nearly 1,000 K-12 students who filled Georgia Tech’s basketball arena for the College of Engineering’s second STEM Fest in September. The day of discovery featured more than 60 interactive stations, exhibits, and activities designed to teach students about science, technology, engineering, and math (STEM) concepts.
STEM Fest is a partnership between the College and STEM Global Action. The New Orleans-based organization is led by Calvin Mackie, a three-time graduate of Georgia Tech’s George W. Woodruff School of Mechanical Engineering.
‣ JOSHUA STEWART
An advanced semiconductor memory chip design from Shimeng Yu’s lab.
Some technologists suggest we’re nearing the limits of packing ever-more computing power into ever-smaller chips. At Georgia Tech, engineers are finding new ways to shrink transistors, make systems more efficient, and design better computers to power technologies not yet imagined.
THE POWER OF MODERN COMPUTING IS HARD TO OVERSTATE. Your smartphone has more than 100,000 times the power of the computer that guided Apollo 11 to the moon. It’s about 5,000 times faster than 1980s supercomputers. And that’s just processing power.
Apple’s original iPod promised “1,000 songs in your pocket” in 2001. Today’s average smartphone has enough memory to store 25,000, along with thousands more photos, apps, and videos.
This exponential leap in capability traces a prediction made in 1965 by Intel co-founder Gordon Moore. He suggested the number of transistors — tiny electronic switches — on a computer chip would double roughly every two years. Moore’s Law, as it became known, has served as a benchmark and guiding principle for the tech industry, influencing the trajectory of innovation for nearly six decades.
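The prediction itself is simple compounding. As a rough illustration — the baseline below is an early-1970s chip with 2,300 transistors, used only for scale — a doubling every two years looks like this:

```python
# Moore's Law as arithmetic: a doubling every two years from the first
# microprocessor era. Baseline figures are for scale only.

BASELINE_YEAR, BASELINE_TRANSISTORS = 1971, 2_300  # Intel 4004-class chip

def moores_law_estimate(year: int) -> float:
    """Project transistor count, assuming one doubling every two years."""
    doublings = (year - BASELINE_YEAR) / 2
    return BASELINE_TRANSISTORS * 2 ** doublings

for year in (1971, 1991, 2011, 2025):
    print(f"{year}: ~{moores_law_estimate(year):,.0f} transistors per chip")
```

Notably, the naive curve projects roughly 300 billion transistors per chip by 2025 — far more than the roughly 20 billion in today’s smartphone processors.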
But now miniaturizing transistors has slowed. Headlines regularly declare Moore’s Law dead.
Arijit Raychowdhury sees it differently.
He said Moore’s Law was never just about shrinking transistors. It was about making computing better.
“Moore’s Law is fundamentally economic,” said Raychowdhury, Steve W. Chaddick School Chair of Electrical and Computer Engineering (ECE). “It’s not about the physics of making transistors smaller. It’s about the business imperative to deliver better performance, lower power consumption, smaller form factors, or reduced costs.”
He pointed to NVIDIA’s latest Grace Blackwell chips announced in June. They deliver 30 times the performance of the previous generation while using half the energy. That, Raychowdhury said, is Moore’s Law in action — not just smaller transistors, but also system-level innovation.
The current challenge, then, is not just putting more transistors on a chip; it’s engineering progress across what researchers call “the stack,” from the materials and devices at the bottom to the software at the top.
The College of Engineering is driving this kind of innovation through major national initiatives accelerating computing hardware research, with Georgia Tech leading or directly involved in projects totaling more than $1 billion.
“It’s not just about advancing research,” Raychowdhury said. “We’re building the talent pipeline that’s vital to the future of the U.S. tech industry and national competitiveness.”
By driving breakthroughs across the computing stack and helping shape national strategy, Georgia Tech is carrying forward the spirit of Moore’s Law. The goal was never just smaller transistors. It was better computing. That vision is still alive — and still evolving — today.
The Foundation
Today’s computer engineering challenges still begin with transistors, the basic building blocks of computing. These tiny switches, now measured in nanometers, are the nucleus of every computational cell. Without them, there are no integrated circuits, no processors, no memory, and no compute systems.
Transistors are made from semiconductors, which are materials like silicon that can conduct electricity under certain conditions. Their ability to control and amplify electrical signals makes them ideal for building computational and storage systems that power everything from smartphones and laptops to supercomputers.
Engineers have worked for decades to make transistors smaller, fitting more of them on each chip and enabling faster, more efficient computing. Despite claims of Moore’s Law reaching its limits, this effort is far from over.
“The idea that transistor density scaling has stopped is a false narrative,” said Suman Datta, the Joseph M. Pettit Chair in Advanced Computing in ECE and a Georgia Research Alliance Eminent Scholar. “It’s just that the technology has become so advanced, only a few places are skilled enough to keep pushing the limits. But those that can, absolutely are.”
The most advanced semiconductor technology can pack about 300 million transistors into a single square millimeter of silicon. For scale: a human hair could fit tens of thousands of modern transistors side by side across its width.
“We are literally dealing with atomic layers,” Datta said. “The gate dielectric [a tiny insulator in a semiconductor that helps switch electric signals] in some transistors is just 1.5 nanometers thick, which means only three or four layers of atoms.”
The newest smartphone chips have nearly 20 billion transistors. Taiwan Semiconductor Manufacturing Company, the world’s largest semiconductor foundry, has manufactured chips that contain around 100 billion transistors.
This level of precision requires new materials, new architectures, and new fabrication techniques. One of the most promising directions is vertical, or 3D, stacking, allowing chips to have more transistors without expanding the footprint.
But stacking transistors is not simple. The bottom layer of a chip is already optimized to the last atom, and applying too much heat while building the top layers can damage what lies beneath.
In Datta’s lab, the team is developing low-temperature fabrication methods that preserve reliability while enabling high-performance 3D integration.
“It’s not like sushi — you can’t serve it raw. You have to ‘cook’ the transistor to make it reliable,” he said. “But when you’re stacking layers, you can’t apply heat the way you normally would, or you’ll damage the layer underneath. We’re essentially building high-performance transistors in the cold.”
Shimeng Yu (left) and Suman Datta are building the next generation of computing hardware. Datta is building high-performance transistors at much lower temperatures, allowing them to be stacked vertically on a chip. Yu is reimagining computer memory design to improve performance.
Advanced fabrication techniques like Datta’s, which build transistors in three dimensions, are already showing up in test-chip designs at corporate research laboratories. And the need for such high-density chips is only accelerating.
“If you had asked me even five years ago, I might have said the semiconductor industry was heading for a slowdown,” he said. “Instead, the opposite happened. ChatGPT’s public launch in 2022 triggered a Cambrian explosion in semiconductors. Semiconductors have become the bottleneck and the backbone of the generative AI revolution. We must continue to stack transistors, use new materials, and push the limits of physics.”
Memory at the Center
With the rise of artificial intelligence, memory has become another major bottleneck. AI models contain trillions of parameters and rely on massive datasets. Every time a system processes information, it must retrieve it from memory, move it to the processor, and often send it back again. This constant movement burns energy and takes time.
“The future depends on high-performance memory. Without it, AI will consume all the energy in the world,” said Asif Khan, ON Semiconductor Professor in ECE.
In the 1970s, memory chips like Dynamic Random-Access Memory (DRAM), used for fast, temporary data storage, could only hold a few thousand bits of data. Today, the same type of chip can store billions. Solid-state drives using NAND flash memory, designed for long-term data storage that retains information even when the power is off, can hold several terabytes — enough to store hundreds of hours of video.
Each memory chip contains millions or billions of memory cells, the tiny circuits that store individual bits of data. But, as with logic chips, progress has mostly come from shrinking components on a flat surface, an approach that is starting to hit physical limits.
Shimeng Yu’s lab is looking to improve memory performance at two levels: system-level integration and cell-level design.
At the system level, the team is changing how processors and memory chips are physically connected. Instead of placing them
side by side, they’re stacked vertically, allowing data to move across the chips’ full surfaces rather than only a single narrow edge.
“We’re connecting across an entire plane,” said Yu, a Dean’s Professor in ECE. “That gives us more bandwidth and faster communication.”
This stacked design creates new thermal challenges, however. Heat trapped in the middle of the stack is harder to remove, so Yu’s lab is developing system design solutions that employ heat spreaders and conductive layers to manage energy and cooling.
At the cell level, Yu is rethinking the structure of DRAM itself. Traditional memory is built flat, but his lab is exploring 3D architectures that build upward, increasing density by stacking memory cells.
“Right now, we can fit about half a gigabit of data into the size of a grain of rice,” he said. “With 3D DRAM, we’re aiming to double or triple that, eventually reaching 10 times the current integration density.”
Yu is also exploring in-memory computing, where certain operations are performed directly within memory devices. This minimizes data movement, which is one of the largest sources of energy use in modern data centers.
“Every time you move data, you burn power,” Yu said. “We want to keep the data where it is and compute there.”
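Yu’s point can be made with toy arithmetic. In the sketch below, the per-operation energy figures are illustrative assumptions — not measurements from his lab — but the imbalance they capture is the motivation for in-memory computing:

```python
# A toy energy model of the data-movement problem. The per-operation
# energies are assumptions for illustration, not measured values.

PJ_PER_MAC = 1.0          # assumed: one multiply-accumulate in the processor
PJ_PER_DRAM_WORD = 200.0  # assumed: fetching one word from off-chip DRAM

def energy_picojoules(num_ops: int, words_moved: int) -> float:
    """Total energy: arithmetic plus off-chip data movement."""
    return num_ops * PJ_PER_MAC + words_moved * PJ_PER_DRAM_WORD

# Conventional design: nearly every operand fetched from memory.
conventional = energy_picojoules(num_ops=1_000_000, words_moved=2_000_000)

# In-memory computing: most operands stay put; only results move.
in_memory = energy_picojoules(num_ops=1_000_000, words_moved=10_000)

print(f"conventional: {conventional / 1e6:.1f} microjoules")
print(f"in-memory:    {in_memory / 1e6:.1f} microjoules")
```

Under these assumed numbers, keeping the data in place cuts energy by more than a hundredfold, even though the arithmetic itself is unchanged.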
Khan is pushing even deeper into memory by focusing on the materials inside each cell. His lab is integrating ferroelectric materials to deliver DRAM-like speed with the persistent storage of flash memory.
Asif Khan holds a silicon wafer in Georgia Tech’s cleanroom facility. Khan is trying to build new kinds of computer memory using fundamentally different mechanisms to store data.
“We’re trying to build memory that doesn’t just perform better, but operates on entirely new principles,” Khan said.
Ferroelectric materials offer a fundamentally different mechanism for storing information. Instead of relying on the presence or absence of electrons, they use the orientation of electric dipoles — tiny shifts in atomic structure that can represent ones and zeros. This allows memory to retain information even when the power is turned off, while also using significantly less energy.
“Think of it like a spring,” Khan said. “In conventional memory, you must keep tension on the spring to hold its position, which constantly uses energy. In ferroelectric memory, the spring stays stretched. It remembers its state without needing extra energy.”
His group has demonstrated ferroelectric devices with promising scalability and endurance in the lab. They are collaborating with industry partners like Samsung and Micron to bring the technology closer to commercialization.
Both researchers emphasize that memory has become more than a supporting technology. It is a driving force behind the next wave of computing.
“We’re not just scaling devices,” Khan said. “We’re redefining what memory is and what it can do.”
The Rise of Advanced Packaging
As individual components reach physical limits, researchers also are considering how the components are arranged and connected using processes called advanced packaging. It’s an area where Georgia Tech has carved out a leadership role since the 1990s with the 3D Systems Packaging Research Center (3D-PRC).
“Packaging has become the new frontier. It’s where some of the most important innovations are happening,” said ECE Professor Muhannad Bakir, who directs the center. “The silicon world has been scaling for decades. Now packaging must scale just as fast.”
The reason is simple: Chips have become too large, too complex, and too expensive to manufacture as single units. Instead of building one massive chip, engineers now break it into smaller pieces called chiplets. When assembled into a complete system, they can outperform traditional monolithic designs.
“You can get increased functionality at lower cost and better performance,” Bakir said.
This strategy is part of a broader approach called heterogeneous packaging. It allows different parts of a system — logic, memory, power management, even
Muhannad Bakir (standing) leads Georgia Tech’s 3D Systems Packaging Research Center, where researchers are developing new ways to arrange and connect the components on chips to increase performance.
photonics — to be built using the best available technologies and then tightly integrated.
“You’re not just gluing pieces together,” Bakir said. “You’re managing power delivery and dissipation, cooling, mechanical stress, signal integrity, data bandwidth, manufacturing constraints, and more all in a compact form factor. It’s a full-stack engineering challenge with so much room for innovation and impact.”
Georgia Tech’s research spans everything from 3D heterogeneous integration and chiplet architectures to glass-core substrates, a promising platform for large-scale AI computing systems.
Glass is flat, smooth, and mechanically stable, allowing for ultra-fine wiring and embedded components. It also can be made in much larger panels than silicon wafers, making it ideal for aggregating GPUs and memory modules in data centers.
“Glass allows you to actually embed electronics into the glass substrate,” Bakir said. “Typically, you glue things on top of the substrate. With glass, you can add functions into the core.”
Georgia Tech’s partnership with Absolics is accelerating this work. The manufacturer has built a $600 million facility in Covington, Georgia, to produce glass substrates. Absolics also has received $100 million to support a joint research program with Georgia Tech and other partners.
Georgia Tech’s leadership in semiconductor packaging builds on years of work.
Former Tech professor James Meindl helped define the limits of scaling and the importance of interconnects in computing systems. His work laid the foundation for many of today’s advanced packaging strategies. He also helped expand Georgia Tech’s cleanroom facilities, which are key for virtually all of the advanced packaging programs and research on campus.
Another Georgia Tech professor, Rao Tummala, was instrumental in establishing the 3D-PRC and advancing the concept of “system-on-package” — treating packaging not as an afterthought, but as a platform for integration, performance, and miniaturization.
The Communication Challenge
Modern computers move staggering volumes of data between processors, memory, and storage every second to keep up with the demands of AI, cloud computing, and real-time applications. The faster that data moves, the better systems perform.
But as workloads grow and architectures become more complex, traditional wiring is hitting its limits.
Ali Adibi is pioneering the use of light instead of electricity to transmit data within computing systems, with support from the Defense Advanced Research Projects Agency (DARPA).
“In computing, a major challenge is interconnection. You can build powerful processors, but they need to communicate with memory and other processors,” said Adibi, Joseph M. Pettit Faculty Chair in ECE. “That’s where optics becomes essential.”
Ali Adibi is pioneering the use of light instead of electricity to transmit data within computing systems.
The advantage lies in physics. Photons carry data in optical systems and operate at much higher frequencies than the electron-based signals in traditional systems. This allows them to transmit data considerably faster than electrons using conventional wires.
“As with any complex system, success depends on how well everything is structured and optimized,” he said. “Once everything is in alignment, data can move faster, more efficiently, and with less energy consumption for communicating each bit of data.”
Tushar Krishna models high-performance computing platforms so engineers understand how to design them to handle modern workloads without creating data traffic jams.
To harness that speed inside computing systems, Adibi’s team is designing 3D optical routing networks that guide light through tiny pathways built directly into the chip. These systems combine the essential parts of optical communication — like modulators, detectors, and waveguides — using fabrication techniques borrowed from the semiconductor industry. The result is a chip where optics and electronics work side by side.
Despite the promise of photonics, building these systems is far from simple. Integrating photonic elements with electronic systems demands precise alignment and consistent manufacturing.
Avoiding Digital Gridlock
Moving data quickly only helps if the system can handle it. That’s where Tushar Krishna’s work begins.
The ECE associate professor focuses on the architecture and modeling of high-performance systems. His research helps engineers understand how to design computing platforms — from individual chips to largescale clusters — that can handle modern workloads without getting bogged down.
“Efficient processing requires more than just powerful processors,” Krishna said. “It’s about how you move data, how you schedule tasks, and how you architect the system to minimize bottlenecks.”
His team tackles two key challenges. First, how to reuse data as much as possible to avoid unnecessary movement. Second, how to connect many processors so they can work together seamlessly.
“The problems we’re trying to solve are actually a little bit like designing road networks,” he said. “You have a lot of cars moving around, but finite roads. So, you need to answer questions like, ‘What’s the topology? Should I provide highways or small roads with traffic signals?’”
In his analogy, cars are data and the roads are the communication pathways between processors. Krishna’s goal is to reduce data traffic jams by designing smarter, more efficient routes.
To help engineers make informed decisions, Krishna’s lab builds simulation tools that model different system configurations before they are built. These tools allow researchers to explore trade-offs between performance, power, and flexibility.
“We’ve created abstractions that help system designers understand what matters, like how many ‘cars’ are
on the road and where they’re going, without needing to know the make and model of every vehicle,” Krishna said.
This approach allows engineers to focus on the flow of data without getting slowed down by the complexity of every individual computational task.
Krishna’s tools are open source, meaning they’re freely available to other researchers and companies. This has made it easier for others to test new ideas and simulate hypothetical hardware before investing in building it.
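In the spirit of Krishna’s road-network analogy, here is a minimal sketch — not his lab’s actual tools — of the kind of question such simulators answer before any hardware exists: how many “hops” a message takes, on average, between 16 processors wired as a ring versus a 4x4 mesh.

```python
# Compare average shortest-path "hops" on two on-chip topologies.
from collections import deque
from itertools import product

def average_hops(nodes, neighbors):
    """Mean shortest-path length over all node pairs, via BFS from each node."""
    total, pairs = 0, 0
    for src in nodes:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in neighbors(u):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

N = 16  # 16 processors: a ring vs. a 4x4 mesh

def ring_neighbors(u):
    return [(u - 1) % N, (u + 1) % N]

mesh_set = set(product(range(4), range(4)))
def mesh_neighbors(u):
    x, y = u
    return [c for c in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)] if c in mesh_set]

print(f"ring: {average_hops(range(N), ring_neighbors):.2f} average hops")
print(f"mesh: {average_hops(mesh_set, mesh_neighbors):.2f} average hops")
```

Even this toy model shows why topology matters: the mesh averages under three hops where the ring needs more than four — highways versus side streets, before a single wire is fabricated.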
Optimizing the Invisible Layer
Hardware provides the foundation, but software determines how that foundation is used.
This layer of the stack manages the flow of data and tasks across the computing system, making decisions that shape performance long before any application appears on screen for users. It includes things like operating systems, compilers that translate code into machine instructions, schedulers and workload managers, and more.
Today, most AI models can only run through the cloud on massive data centers. But by tightly integrating the design of algorithms, software, and hardware, researchers could make it possible for those same models to run efficiently on smaller servers, edge devices, or even smartphones.
To help enable that shift, Callie Hao is developing tools that bridge the gap between software and hardware, making AI systems more efficient, portable, and accessible.
“AI has become so resource-intensive that it’s inaccessible to many,” said Hao, Sutterfield Family Assistant Professor in ECE. “Computer engineers have a responsibility to make these technologies more efficient and available to everyone, not just a privileged few.”
Hao is exploring algorithm-accelerator co-design. Rather than designing hardware first and adapting AI models afterward, Hao works on them together, building tools that help AI programs use advanced chips more efficiently. One of these tools, called LightningSim, simulates — within milliseconds — the performance of complicated AI tasks running on customized accelerators. This makes it much easier to find an optimal design for the accelerator as well as the AI task mapping.
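LightningSim’s internals aren’t described here, but the flavor of such pre-silicon estimation can be sketched with a hypothetical roofline-style model, where a layer’s latency is bounded by whichever is slower — compute or memory traffic. All numbers below are invented for illustration:

```python
# A toy design-space sweep in the spirit of co-design. This is NOT
# LightningSim; it is a hypothetical roofline-style latency estimate.

def estimated_latency_ms(flops, bytes_moved, peak_gflops, bandwidth_gbs):
    """Latency is bounded by compute or memory, whichever is slower."""
    compute_ms = flops / (peak_gflops * 1e9) * 1e3
    memory_ms = bytes_moved / (bandwidth_gbs * 1e9) * 1e3
    return max(compute_ms, memory_ms)

# One hypothetical AI layer: 2 GFLOPs of work, 50 MB of data traffic.
LAYER_FLOPS, LAYER_BYTES = 2e9, 50e6

# Candidate accelerator configs: (peak GFLOP/s, GB/s), all assumed.
configs = {
    "small, fast memory": (100, 100),
    "big, slow memory":   (1000, 10),
    "balanced":           (400, 50),
}

for name, (gflops, gbs) in configs.items():
    latency = estimated_latency_ms(LAYER_FLOPS, LAYER_BYTES, gflops, gbs)
    print(f"{name}: ~{latency:.1f} ms")
```

Sweeping many such configurations in milliseconds, rather than building each one, is what makes finding an optimal accelerator-plus-mapping pair tractable.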
“With the right optimizations, we’re getting closer to executing these models on-device,” Hao said. “That’s a huge step toward making AI more practical and inclusive.”
Hao’s lab also is exploring how machine learning can assist in hardware design, creating a feedback loop where AI helps build better systems for AI.
“We need to use every tool at our disposal to engineer systems that are not just faster, but more adaptable and accessible,” she said. “The future is about delivering high performance computing wherever it’s needed, whether that’s in a data center or in your pocket.”
Callie Hao creates tools that help hardware and software work together so AI systems are more efficient, portable, and accessible.
Wearing the Future
FROM SMART TEXTILES TO BRAIN-COMPUTER LINKS, GEORGIA TECH ENGINEERS ARE DESIGNING WEARABLES THAT CONNECT HUMANS AND MACHINES MORE CLOSELY THAN EVER TO SENSE, RESPOND, AND HEAL.
If you walked through the Smithsonian’s National Museum of American History in the mid-2000s, you might have seen the “Smart Shirt,” the very first garment to seamlessly combine textiles and electronics.
Dubbed a “wearable motherboard,” it acted as a hub for sensors that could collect a range of biometric data.
That shirt foretold a future where health and biometric data could be collected unobtrusively through wearable technology. And it was created by engineers at Georgia Tech.
“What we have is all these nice data buses that are the fabric threads. And we can connect any kind of sensors to them,” said Professor Sundaresan Jayaraman, the shirt’s co-creator. “We were able to route information in a fabric for the first time, just like a typical computer motherboard. That’s why we called it the ‘wearable motherboard.’”
Jayaraman and Sungmee Park created the shirt in response to a Defense Advanced Research Projects Agency (DARPA) call for ideas to protect soldiers in battle. They envisioned a comfortable, flexible garment infused with fiber optics to detect
gunshot wounds and vital signs. The data would help medics rapidly triage battlefield injuries in the critical minutes when emergency care is the difference between life and death.
Creating a shirt made it easy: no bulky electronics to add to the gear soldiers carried. Just a piece of clothing to wear under their fatigues. Park and Jayaraman developed a way to weave the garment on a loom, making mass production and consistency far easier.
The original sleeveless shirt is tucked into the Smithsonian archives now. But it’s possible to follow the thread of that first smart textile to the work happening in the pair’s School of Materials Science and Engineering (MSE) lab today.
“We looked at textiles as an information processing infrastructure. In other words, our paradigm was, fabric is the computer,” Jayaraman said.
Sungmee Park (left) and Sundaresan Jayaraman with the original “Smart Shirt” (right) and another wearable from their lab.
What’s More Wearable Than Clothes?
“We are still able to use that fundamental breakthrough,” Jayaraman continued, “looking at fabric as an information infrastructure or a computer, and using it for different applications, whether it is for designing the next generation of respirators, pressure injury prevention, or monitoring hospital patients.”
Park and Jayaraman are creating fabrics and systems these days to detect pressure and moisture experienced by people in wheelchairs or hospital beds. The data can help caregivers move patients to prevent sores on their skin. It could also power automated systems to relieve the pressure at contact points, and they’ve developed a prototype wheelchair that does the same.
They’re also developing small EEG caps for infants, integrating their fabric sensors into a soft, breathable knit cap that would be a safer alternative to traditional setups.
“An EEG cap has a lot of sensors with thick wires. Those wires can cause pressure injuries for a baby,” said Park, principal research scientist in MSE. “There’s also risk of the baby getting tangled in those wires. We are trying to put all those things into a knit cap to collect the EEG data.”
Park said sensing garments could help older people avoid falls, perhaps with sensors integrated into a headband to identify patterns of movement that make falls more likely. Leggings, shirts, sports bras, and other apparel could monitor muscles, track breathing, and collect other data to help athletes improve workouts or avoid injuries.
“The beauty about clothing is real estate,” Jayaraman said. “You can potentially use the entire body for any kind of sensing you want.”
Getting Skin in the Game
While Jayaraman and Park focus on the real estate afforded by clothing, another interface offers potential to sense the world and communicate with the brain. Our skin also has plenty of real estate — and an incredible range of abilities that Assistant
Professor Matthew Flavin in the School of Electrical and Computer Engineering (ECE) wants to capitalize on.
Flavin uses haptic devices to deliver vibration, indentation, and twisting sensations to help people with vision loss navigate their environment. He’s also working to improve balance for people who’ve lost feeling from a stroke or spinal cord injury.
Flavin calls his team’s work “epidermal virtual reality” — using custom-designed devices to provide users with information. He describes it as creating a realistic sense of physical touch akin to the way virtual-reality headsets create realistic visual information.
Haptics might be familiar as the feedback a smartphone provides while typing or tapping or from playing video games, where a controller vibrates in response to actions on screen. Flavin is taking that concept further with arrays of small, wearable actuators that poke, vibrate, or twist.
“Just like our eyes have multiple different receptors, which can sense red, green, and blue, our skin has these different receptors that can sense indentation, vibration, twisting,” Flavin said. “These are all things that we want to deliver with really small-scale devices we’re developing.”
One area of work pairs a hexagon-shaped patch of haptic actuators on the back of the neck with a smartphone’s camera and LiDAR sensor. People with vision impairment scan the area around them with the phone and receive a small vibration to alert them when an object is detected. For example, a chair sitting low to their right would activate haptics in the lower right area of the patch.
It offers richer information than conventional aids like canes. In trials, a blindfolded subject navigated an obstacle course using only the device without bumping into anything.
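As a hypothetical sketch of the direction-to-actuator mapping such a system might use — the grid layout and field of view here are assumptions, not Flavin’s design — an obstacle’s bearing and elevation can index into the array:

```python
# Hypothetical mapping from a detected obstacle's direction (phone
# camera/LiDAR) to one actuator in a small wearable array on the neck.

ROWS, COLS = 3, 3  # assumed 3x3 grid standing in for the hexagonal patch

def actuator_for(bearing_deg: float, elevation_deg: float) -> tuple[int, int]:
    """Left/right picks the column, up/down picks the row.
    Field of view is assumed to span +/-45 degrees on each axis."""
    col = min(COLS - 1, max(0, int((bearing_deg + 45) / 90 * COLS)))
    row = min(ROWS - 1, max(0, int((45 - elevation_deg) / 90 * ROWS)))
    return row, col

# The article's example: a chair sitting low to the user's right.
print(actuator_for(bearing_deg=30.0, elevation_deg=-30.0))  # (2, 2): lower right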
Flavin and his team have modified the same structures to provide a twisting sensation, as well as pokes or vibration. They can do it without a huge battery or being plugged in — an improvement over other devices. He said adding the additional twisting “channel” of information can make using the device much more intuitive.
“We use torsion as a navigational signal, telling people how to navigate towards a particular object, and
then indentation as a corrective signal, telling people that they want to avoid a particular object,” Flavin said. “Having both of those can give you navigation information in a very efficient way.”
Another study uses forearm haptics connected to shoe insoles to help stroke or spinal cord injury patients who’ve lost some sensation in their feet.
“Those haptic devices deliver a pattern of vibration that matches the pattern of pressure they’re putting on their foot in real time. That should help them with some of those motor and sensory symptoms: We’re substituting that plantar pressure information to help those patients balance and walk with a healthier gait,” Flavin said.
Beyond haptics, Flavin’s team has created a small wearable sensor that detects gas exchange across the skin. It can reveal a lot about skin’s health, including whether it’s properly working as a barrier and how well it’s repairing itself. For people with diabetes, the sensor could offer early warning that wounds aren’t healing, a common problem that can lead to serious consequences — even amputations — if not dealt with quickly.
“Our skin is our largest organ, and we have sensory receptors across the entire surface. It’s really underutilized as a human-machine interface,” Flavin said. “We can use haptics to deliver information to our body. We can also get a lot of information from our skin.”
Top: Small actuators on a wearable patch deliver haptic feedback to users.
Above: Worn on the neck, and paired with a smartphone, the actuators can help people with vision loss navigate their environment.
Expanding from Sensing to Intervention
Millions of people already get daily haptic feedback from a smartwatch or fitness device.
Omer Inan says the ability of these popular wearables and their sensors to impact health is just beginning. And the game is changing, says Inan, the Linda J. and Mark C. Smith Chair and professor and a Regents’ Entrepreneur in ECE. He points to U.S. Food and Drug Administration approval of the Apple Watch’s ability to detect signs of atrial fibrillation and the Samsung Galaxy Watch’s detection of sleep apnea. Those clearances aren’t for the devices themselves but for the underlying software that uncovers issues in the data they collect. It’s an area his lab is deeply involved in: He has spun out a number of startup companies developing both clinically relevant devices and software to use their data.
One of those devices is the CardioTag, a sensor that captures three heart signals and received FDA clearance this summer. The company Inan co-founded to commercialize the sensor is using it in a study to validate software that assesses pulmonary capillary wedge pressure — a measure of how well the heart fills with blood. It’s one way CardioTag’s data can deliver useful data to physicians. But Inan said that’s just the beginning.
“For us, what’s more important is that the same hardware can be used with multiple different software clearances,” he said. “Pulmonary capillary wedge pressure is a really important parameter for heart failure that affects about 6.5 million Americans. But we could look at software to measure cardiac output or arterial blood pressure. There are many different software clearances from the FDA that could work with the same hardware, which makes it completely modular.”
Another device Inan is working on would help calm the stress reaction for patients with post-traumatic stress disorder (PTSD). It’s a wristband that senses physiological changes and delivers electrical stimulation to specific nerves to counteract it.
He said targeting the median nerve in the wrist can begin to blunt the body’s overreaction within 10-15 seconds in their lab tests.
“We found if you electrically stimulate at the wrist, you do actually reduce the body’s peripheral response to stress. The mechanisms are not well understood, but we see it in the data,” Inan said. “It’s why we developed a wearable that both senses from the wrist and stimulates.”
Helping PTSD patients is one significant area, with the U.S. Department of Veterans Affairs estimating about 6% of Americans will have PTSD at some point in their lives. But modulating overwhelming stress could have much wider uses, Inan said.
“There are stress-related anxiety disorders, which affect even more people. And too much chronic stress is a problem even for healthy people. So this may even be beneficial for those who don’t have any of those stress disorders,” he said. “That’s speculative still, but there are millions of people who could benefit from such a device.”
Omer Inan (left) and Ph.D. student Farhan Rahman make adjustments to a prototype wristband that would sense a rising stress response and deliver electrical stimulation to counteract it.
Networking In and On Our Bodies
Inan’s stress wristband illustrates how wearables can sense and respond. Imagine if external smart devices and sensors could also communicate with implants or ingestible devices to deliver treatment inside the body too. That’s the vision driving Assistant Professor Alex Abramson’s research in the School of Chemical and Biomolecular Engineering.
What might that look like?
In a serious allergic reaction — the kind that requires immediate medical treatment or administration of epinephrine via an EpiPen — sensors would detect the allergic response and trigger a small device in the body to administer the epinephrine in the right dose and at the right place.
For people with chronic diseases or cancer, the devices would monitor specific health indicators and apply therapies as needed. Creating interactions between wearables and implants could detect nerves firing and then move a prosthetic limb.
“What’s really difficult is taking that wearable information and allowing it to actuate a therapeutic device
inside your body,” Abramson said. “We work on creating devices able to deliver drugs or a therapeutic interaction at the exact location and time you want, without causing any off-target side effects. We also create the network that allows for those devices to interact with wearable sensors others are creating.”
Abramson’s team capitalizes on all the fluid and salts we’re made of to build a communications infrastructure: these conductive tissues make it easy to pass a small electrical signal through the body that an implant can detect and respond to. Abramson said the method is 30 times more energy efficient than Bluetooth, which means implants can be smaller and last longer.
“We’re able to trigger devices anywhere throughout the body instead of needing direct interaction between the wireless antenna on the outside and the implantable on the inside,” he said. “It also allows us to create a full network of these therapeutics: Because we’re able to trigger anything, anywhere inside of the body, all from a central hub, we can have these devices work in tandem as well.”
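Only the 30x figure above comes from Abramson; the Bluetooth energy cost, battery size, and daily data volume below are assumptions, but they show why the ratio matters for implant lifetime:

```python
# Back-of-the-envelope arithmetic on the claimed 30x efficiency advantage.
# The energy-per-bit figure, battery size, and traffic are assumptions.

BLE_NJ_PER_BIT = 50.0                           # assumed Bluetooth Low Energy cost
BODY_CHANNEL_NJ_PER_BIT = BLE_NJ_PER_BIT / 30   # the 30x figure from the article

BATTERY_JOULES = 1.0    # assumed tiny implant battery (~1 J usable)
DAILY_BITS = 1_000_000  # assumed: ~1 Mbit of sensor/trigger traffic per day

def days_of_operation(nj_per_bit: float) -> float:
    """How long the assumed battery lasts at the assumed traffic level."""
    joules_per_day = DAILY_BITS * nj_per_bit * 1e-9
    return BATTERY_JOULES / joules_per_day

print(f"Bluetooth:    ~{days_of_operation(BLE_NJ_PER_BIT):,.0f} days")
print(f"body channel: ~{days_of_operation(BODY_CHANNEL_NJ_PER_BIT):,.0f} days")
```

Under these assumptions, a battery that would last weeks over a radio link lasts well over a year over the body channel — the margin that lets implants shrink to syringe size.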
Alex Abramson (right) works with grad students Ramy Ghanim (left) and Joy Jackson to test an implantable device small enough to fit in a large syringe.
The size advantages of Abramson’s in-body communications system mean they can scale implants down to about 3 millimeters in diameter — small enough to be implanted using a syringe instead of surgery.
Meanwhile, the ingestible devices his team is developing are designed to target specific locations in the gastrointestinal tract. They linger in the stomach and can orient themselves so they’re right next to the tissue where they need to deliver medicine or even a small electrical stimulus.
Abramson said emerging data suggests such stimulation directly in the gut can improve mood and reduce stress levels. Stimulation also can prompt the stomach to empty properly, which sometimes doesn’t happen for people with diabetes.
The inspiration for Abramson’s vision of networked devices all working together to diagnose and treat acute or chronic conditions is the insulin pump. It’s a “closed loop” system where a sensor continuously monitors blood glucose and triggers a dose of insulin when needed. The problem is other commercial sensors aren’t as good as the glucose sensor. And the pump requires lots of energy and computation power.
“That’s really the gold standard of what we’re trying to emulate for all of our other different types of diseases,” he said. “Imagine something like an insulin pump but implanted inside of your body and able to treat neurodegenerative diseases, heart disease, cancers, or any other types of chronic illnesses.”
Above: Abramson’s in-body communications system means he can build devices small enough to be implanted using a syringe instead of surgery.
Below: Hong Yeo’s Tedream system includes three soft, flexible sensors that can collect sleep data comfortably and at home — rather than in a sleep lab.
Sleep Sentinel
Many of those chronic illnesses show up as disturbances in our sleep. Detecting them, however, is onerous. And that’s putting it mildly, according to W. Hong Yeo. Despite the “sleep score” you might get from a smartwatch or fitness tracker, the only way to get reliable data to diagnose sleep problems is an expensive night in a sleep lab.
It’s cozy: Patients are connected to dozens of sensors and observed all night. If the sensors disconnect as they move — and they often do — a technician has to reconnect them. Plus, it’s only one night.
“From the beginning, it doesn’t make sense,” said Yeo, Harris Saunders Jr. Professor in the George W. Woodruff School of Mechanical Engineering. “It’s not capturing your natural sleep. Nobody can sleep well in a sleep lab. But that’s the only way, because of limitations of technologies and available devices.”
Yeo has been working for years to change that, and he’s closer than ever to bringing the gold-standard lab techniques home with a trio of wireless devices that can capture the same data over multiple nights to get a better picture of natural sleep behavior. He created a startup company to pursue FDA clearance for the technology and get it in doctors’ hands.
Instead of wires and sensors from head to toe, Yeo’s Technology Enhanced Dreaming (Tedream) system uses three soft, flexible sensors attached to the forehead, chest, and forearm. They collect data on brain activity, heart rate, posture, respiration, sound, blood oxygen, and movement, feeding it wirelessly to a nearby tablet.
“This is the complete package of sleep. Our system will give you everything that you need in terms of diagnostics. Then the doctor can make decisions about how severe symptoms are and therapeutic options,” said Yeo, who also is the G.P. “Bud” Peterson and Valerie H. Peterson Endowed Professor.
“It will change the paradigm of measuring sleep quality and sleep disorders once it’s available.”
It’s a problem that hits close to home for Yeo, whose father had a heart attack and died in his sleep two decades ago. He had no history of heart issues, but Yeo realized better data about sleep could’ve alerted his dad’s doctors to an issue.
“That’s why I’m doing my best,” he said. “I strongly believe that this type of device will save a lot of lives.”
WISH CENTER
W. Hong Yeo leads a cross-campus effort focused on bioelectronics and human-interface technologies. The Wearable Intelligent Systems and Healthcare Center (WISH Center) gathers Georgia Tech’s expertise in electronics, materials, systems, data, and medical science to push innovation and develop big ideas.
Yeo’s work has resulted in a variety of flexible electronics and sensors, including a smart stent for monitoring blood pressure and blood vessels and a smart pacifier to measure babies’ electrolyte levels from saliva instead of repeated painful blood draws.
He’s also working on a new kind of brain-computer interface that’s so tiny it fits between hair follicles on the scalp. In a recent study, it captured high-fidelity signals that allowed subjects to control an augmented reality video call. He sees potential for the interface to allow users to manipulate robotic devices or prosthetic limbs — without having to implant sensors in the brain.
Like many of the Georgia Tech engineers working to imagine new wearable devices that will address difficult health challenges, Yeo collaborates across campus and with researchers and doctors at Emory, Children’s Healthcare of Atlanta, and elsewhere. That, he said, is the secret to having real impact.
“Without them, I couldn’t make any of this happen. Collaboration is really important when it comes to making a new innovation, because existing problems are so complicated that they cannot be solved by one person.”
Hong Yeo’s brain-computer interface fits between hair follicles on the scalp.
Engineers are building computerized replicas of cities, and even Georgia Tech’s campus, to save lives and create a better, more efficient world for all of us.
Extreme weather, congested streets, aging infrastructure — just some of the challenges that communities and their residents face every day. Solving them requires more than traditional planning; it demands tools that can anticipate problems before they happen.
One of the tools our researchers are turning to is called a digital twin. These virtual models mirror real-world systems in real time to make communities safer, transportation smarter, and campus operations more efficient.
Unlike static simulations, digital twins evolve with live data. They allow decision-makers to respond to changing conditions with speed and precision. Whether it’s predicting how floodwaters will move through a city or minimizing traffic delays for emergency vehicles, digital twins offer a new way to manage complexity. By blending artificial intelligence, sensor networks, and advanced analytics, Georgia Tech engineers are creating solutions that don’t just react — they prepare, adapt, and improve the systems we rely on every day.
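To make that distinction concrete, here is a minimal sketch, in Python, of the loop that separates a digital twin from a static simulation: the model’s state is refreshed from live sensor readings before every prediction. All names and numbers are invented for illustration, not taken from any Georgia Tech system.

```python
import time

class FloodTwin:
    """Toy stand-in for a calibrated flood model of a road network."""
    def __init__(self):
        self.water_levels = {}                 # sensor id -> latest reading (ft)

    def assimilate(self, readings: dict) -> None:
        """Fold live data into the model -- the step a static simulation skips."""
        self.water_levels.update(readings)

    def roads_at_risk(self, threshold_ft: float = 1.0) -> list:
        """Flag roads whose sensor exceeds a closure threshold."""
        return [r for r, level in self.water_levels.items() if level > threshold_ft]

def fetch_sensor_readings() -> dict:
    """Placeholder for a real sensor-network API call."""
    return {"meeting_st": 0.4, "calhoun_st": 1.3}  # feet of standing water

twin = FloodTwin()
for _ in range(3):                              # in practice, run continuously
    twin.assimilate(fetch_sensor_readings())
    print("Closure risk:", twin.roads_at_risk())
    time.sleep(1)                               # real systems poll on their own cadence
```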
One Georgia Tech project is a digital twin of Charleston, South Carolina, to help route emergency vehicles during floods.
Technology Amid a Triple Threat of Flooding
From their office high above Tech Square, John Taylor and Neda Mohammadi can see more than a hundred miles on a clear day. Their focus this fall, however, expanded well beyond the horizon.
The School of Civil and Environmental Engineering (CEE) researchers have been watching the tropics and their impact on Charleston, South Carolina. This hurricane season was the first for a series of sensors in the city that track flood water levels and their impact on the city’s historic streets.
The duo’s research team built a digital twin of downtown Charleston that shows every road and block of the low-lying peninsula. It uses artificial intelligence and geographical information system analytics to predict flood impacts on roads. The flood risk visualization, which also incorporates road closures due to high waters, allows city officials to reposition emergency vehicles during flooding, a common occurrence.
The idea to approach Charleston County leaders was born after Taylor and Mohammadi successfully deployed a digital twin of the Chattahoochee River for Columbus, Georgia, that helps search and rescue workers save lives when the water rises quickly.
“Columbus gave us experience with rising water levels and made us think about places where a similar concept could be applied more broadly,” said Taylor, Frederick Law Olmsted Professor in CEE. “It led us to Charleston, which has a triple threat of flooding.”
Charleston is framed by the Atlantic Ocean and two rivers: the Cooper and the Ashley. Tidal creeks from
“...digital twins can give you different results every hour because they’re based on real-time numbers. This allows city officials and communities to make decisions based on evidence instead of past events.”
Neda Mohammadi
those rivers flowed into the peninsula hundreds of years ago. But as the city became more populous in the 18th and 19th centuries, the creeks were filled with trash, materials from old ships, and other debris to create more areas to live and work.
Those filled-in creeks are now the lowest points in the city and include some of the peninsula’s major roads. That’s a problem when heavy rains and high tide pound the city. The water flows directly to those low areas and floods. Even worse: one of Charleston’s main hospitals is surrounded by those low-lying roads.
Georgia Tech worked with researchers from Clemson University and the University of Hawaii to place several traditional flood sensors in the city to measure how much water is present, including on the roads near the hospital.
Meanwhile, another set of sensors measures flood levels in a much different way. With a unique vision-based sensing approach, the CEE team uses livestreamed video from cameras, coupled with an AI visual language model the researchers developed, to detect how deep the water is.
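The article doesn’t spell out the team’s model internals, but the general shape of such a pipeline might look like the sketch below: a frame from a livestream goes to a vision model that returns an estimated depth, which is then mapped to a closure flag. The function names, camera URL, and threshold are all hypothetical.

```python
CAMERA_FEEDS = {"hospital_district": "https://example.org/cam1.m3u8"}  # hypothetical feed

def estimate_depth_ft(frame) -> float:
    """Stand-in for the team's AI visual language model, which infers water
    depth from camera frames; here it returns a canned value so the sketch runs."""
    return 0.7

def road_status(depth_ft: float, close_at_ft: float = 0.5) -> str:
    """Hypothetical policy: roughly half a foot of standing water raises a closure flag."""
    return "CLOSED" if depth_ft >= close_at_ft else "OPEN"

for road, url in CAMERA_FEEDS.items():
    frame = None          # a real system would decode a frame from the stream at `url`
    print(road, "->", road_status(estimate_depth_ft(frame)))
```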
This hurricane season is their first with everything in place. They’re not hoping for bad storms — but every rainy day helps.
“Every time it rains, our ability to predict road closure risk improves. It also improves our models and makes them more applicable to other places. And it helps us see if the sensors are working as planned,” Taylor said.
The digital twin also allows the team to play “what if” and feed that information to Charleston officials. For example, they can simulate 6 inches of rain or 24 inches or anything in between and see how it impacts roads and typical ambulance routes.
By using real-time data, they can see results shift as conditions change.
“Simulations don’t change if you run them week after week, but digital twins can give you different results every hour because they’re based on real-time numbers,” said Mohammadi, a senior research engineer who wrote her first paper about cities and digital twins in 2017. “This allows city officials and communities to make decisions based on evidence instead of past events. We bring them real data that produces informed, hyper-local decisions.”
In the Shadows of Campus
While Taylor and Mohammadi are using a digital twin to explore the macro level of a city’s road network, their CEE colleague Angshuman Guin is zoomed in on the micro level. He and his students are looking at each car that rolls along Atlanta’s North Avenue adjacent to campus. They’re using recorded drone footage and live cameras to track how each vehicle interacts with the hundreds around it as they pass by campus or exit the Downtown Connector.
Like the Charleston project, Guin’s digital twin is designed to save lives. By understanding how long it takes motorists to move through North Avenue’s series of traffic lights, Guin and his team hope to lessen delays for ambulances headed to nearby Emory University Hospital Midtown.
His technology helps clear intersections before ambulances and fire trucks arrive. It was first developed in collaboration with the fire department and the transportation department in Gwinnett County, Georgia. It also inspired a $5 million statewide initiative funded by the U.S. Department of Transportation.
“If we can predict when and where an emergency vehicle will need to pass — and clear the path in advance — we’re saving minutes that can mean saved lives,” said Guin, a CEE principal research engineer who received his master’s degree and Ph.D. from Georgia Tech.
The team has a variety of ways to use their data for predictive modeling. First, they use the drone footage to observe the second-by-second movement of vehicles. They can see the interactions between emergency and regular vehicles and build driver-behavior models that inform large traffic simulations. Those models allow Guin’s team to study alternative scenarios and traffic operation strategies.
The second tactic is a pseudo digital twin. Guin has been collecting data from North Avenue for several years and can feed any day’s worth of data into a simulation, then play it back second by second. Data is driving the simulation, but it’s not live synched with what’s happening on North Avenue.
The third is a true hardware-in-the-loop digital twin, which allows for near-real-time data simulation. Whenever a vehicle drives along North Avenue, a streetside detector feeds that signal to Guin’s team. By working with live data, the digital twin goes beyond a typical simulation and allows for testing of new strategies on real field equipment, but in a virtual environment. For example, the AI model can try different signal lengths at its virtual intersections to see how drivers respond. Once the team runs enough simulations to be confident with the results, they can try them on the real North Avenue.
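As a toy illustration of the kind of experiment those virtual intersections enable, the sketch below sweeps candidate green-light durations and scores each with Webster’s classic uniform-delay term for a lightly loaded approach, d = C(1 − g/C)²/2. The cycle length and green times are invented; the lab’s simulations model driver behavior in far richer detail.

```python
def uniform_delay_s(green_s: float, cycle_s: float = 90.0) -> float:
    """Average delay per vehicle at one signal under light, uniform traffic:
    Webster's first term with the flow ratio taken to zero."""
    g_over_c = green_s / cycle_s
    return cycle_s * (1.0 - g_over_c) ** 2 / 2.0

# Sweep green times in the virtual intersection before touching real signals.
for green in (20, 35, 50, 65):
    print(f"green = {green:2d}s  ->  avg delay ~ {uniform_delay_s(green):4.1f}s")
```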
“Prioritizing an emergency vehicle can be done fairly simply by giving a green signal to all the intersections on its path, then holding the signal until the vehicle passes,” Guin said. “But that causes driver angst and potential safety issues when vehicles start violating red signals after long wait times at a signal with no apparent crossroad traffic. Our digital twin is really about development
and testing of algorithms that will serve both objectives of concurrently minimizing emergency vehicle delays and reducing general traffic delays.”
Guin said he’s lucky his real-world laboratory is so close to his lab in the center of campus. North Avenue has it all: it’s a southbound exit portal for one of the busiest interstates in the Southeast; it has two eastbound turn lanes that lead to the Connector; and it’s a main thoroughfare for the hospital.
“It truly has everything a traffic engineer could want,” Guin said. “It’s a big puzzle, and we hope to solve some of its pieces and make things a little better for everyone.”
Tech Twinning
Two more digital twins live within the Daniel Guggenheim School of Aerospace Engineering, although neither has anything to do with space exploration or the future of flight.
Research engineers in the Aerospace Systems Design Lab (ASDL) have supported the development of a digital twin of Georgia Tech’s AI Makerspace, the nation’s most powerful supercomputing hub used exclusively to teach students. The Makerspace’s 304 NVIDIA H100 and H200 GPUs are being incorporated into several
undergraduate AI courses, including the Fundamentals of Machine Learning and AI Foundations.
ASDL’s Olivia Fischer and Scott Duncan, along with Aaron Jezghani from Georgia Tech’s Partnership for an Advanced Computing Environment (PACE), have used the digital twin to visualize and communicate about computational resource use and impact. They’re also working with a team of Georgia Tech students and thinking about how they can use the twin to improve the Makerspace’s operational efficiency and manage its infrastructure.
“As the use of AI continues to grow in both academic and research settings, such capabilities could offer Georgia Tech’s leadership a comprehensive and detailed picture of asset utilization and support data-driven investment decisions that anticipate the growing needs of researchers, both inside and beyond the classroom,” said Fischer, a principal research engineer.
The AI Makerspace is only the first step. After using the current digital twin as a proof-of-concept, the research group plans to produce a twin of the entire Coda data center, which houses 2,200 servers in a 2-megawatt footprint. The building serves as Georgia Tech’s headquarters for data analytics research — the processing, handling, manipulation, and understanding of very large data sets across a wide variety of industries and academic disciplines.
Olivia Fischer
For over a decade, ASDL Director Dimitri Mavris has spearheaded the development of digital twins across a range of applications. One of his first emerged from a collaboration with the Office of Naval Research. Mavris and his team developed an integrated electrical-thermal model that digitally represented the power architecture of a naval destroyer. They coupled this model with an interactive dashboard displaying real-time performance data from various ship zones.
The thinking was, if the Navy could collect sensor data from smart valves and other onboard technologies, digital twins could enhance predictive maintenance by allowing for earlier fault detection and better informing maintenance decisions. This predictive capability would, in turn, help reduce maintenance loads, improve operational efficiency, and lower mission failure risks, eventually enabling the Navy to achieve its objectives more effectively and affordably.
The Navy project reminded Mavris of Georgia Tech’s buildings.
“Before the 1996 Olympics, campus began to modernize its energy systems, including installing meters measuring electricity, chilled water usage, and steam usage,” said Mavris, Distinguished Regents’ Professor and Boeing Professor of Advanced Aerospace Systems Analysis. “New buildings that came online later were
further outfitted to collect data on their utility performance. The problem was, however, that the use of this data was very siloed. Planning and strategic-level decisions were not accounting for campus-wide energy interactions across all buildings.”
That prompted ASDL to create a digital twin of the entire campus. It compared buildings and, over time, showed when they would start to overconsume energy or use chilled water inefficiently. A related dashboard revealed which buildings weren’t adjusting energy use at night, when students, staff, and faculty had gone home.
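A nightly setback check like the one the dashboard revealed can be expressed in a few lines. This sketch flags buildings whose overnight load stays close to daytime levels; the buildings, loads, and threshold are invented for illustration.

```python
# Illustrative day vs. night electrical loads, in kilowatts.
usage_kw = {
    "Building A": {"day": 420.0, "night": 150.0},
    "Building B": {"day": 380.0, "night": 355.0},   # barely sets back at night
}

for name, load in usage_kw.items():
    setback = 1.0 - load["night"] / load["day"]     # fraction shed overnight
    if setback < 0.25:                              # hypothetical threshold
        print(f"{name}: only {setback:.0%} overnight setback -- worth a look")
```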
“Campus managers got a better sense of when our utility bills were too high or low,” Mavris said. “Our digital twin has helped campus lower costs, improve reliability, and make more strategic decisions.”
During the 2020 pandemic, digital twinning played an important role in the Living Building certification process for The Kendeda Building for Innovative Sustainable Design, one of the world’s greenest buildings. The Kendeda Fund offered Tech a bonus if it could achieve certification within the first 12 months of operation. But in the fourth month, campus shut down, drastically changing the building’s occupancy. Nevertheless, a digital twin was able to confirm that the building would easily have been net-positive for energy and water, key requirements for certification.
Building on its initial focus on campus energy systems, the campus digital twin expanded to include campus mobility and safety planning. Mavris said the Georgia Tech Police Department has leveraged that information to anticipate areas with a higher likelihood of crime and to manage traffic flow around campus, particularly before and after football games and Commencement ceremonies.
The digital twin’s predictive capabilities also allow campus leaders to project expansion phases and provide insights into road safety improvements and campus transportation planning. Applying the same digital twin technologies developed for Georgia Tech, Mavris and his team are now assisting the Georgia World Congress Center and Mercedes-Benz Stadium with traffic and safety planning efforts in preparation for the 2026 FIFA World Cup.
“Digital twins represent the bridge between the physical and digital worlds, where data, modeling, and human insight converge to drive decisions that realize value,” Mavris said. “As systems grow more complex, their value will only increase, helping us to predict outcomes, optimize performance, and design solutions that are both sustainable and resilient for the future.”
Dimitri Mavris
Better, Not Just Bigger
The massive computing facilities popping up across the country have become notorious for requiring huge resources. These engineers are thinking about how data centers can be more efficient and how they influence our future power needs.
BY JOSHUA STEWART
Not too many years ago, most of us didn’t think much about the computing resources required to make a Zoom call, stream the latest hit show, or scroll our social media feeds.
The growing power of artificial intelligence systems — which are exploding in capability and use — has changed that. New data centers are sprouting all the time to power AI models, with especially high concentrations in the Atlanta area and northern Virginia. Powering those centers is straining the grid: the largest and most complex could require the same amount of electricity used by entire cities.
That’s why several of the biggest AI developers, including Meta, Google, and Microsoft, have announced plans to build their own power plants or restart shuttered nuclear facilities solely to power their AI data centers.
The AI models themselves also keep getting more complicated, which means they require more power and cooling, even as computer chips get more efficient.
Georgia Tech engineers are thinking about data centers from a variety of angles, including how to use resources more efficiently and how to provide more power to the grid to meet surging demand.
Scaling Intelligently
Data centers can occupy hundreds of acres, with unending racks of high-powered computers processing AI models with billions or trillions of parameters. But the truth is, every graphics processing unit (GPU) chip isn’t hard at work every second. Chips also vary in their speed and how much heat they can tolerate, even when they’re the same type.
Divya Mahajan in the School of Electrical and Computer Engineering sees huge potential to capitalize on those variations. She’s working to optimize the models themselves and improve how data is accessed and executed, significantly reducing the resources data centers need in the first place.
“... if you have visibility of the full pipeline comprising different pieces, you can optimize better rather than just having one piece and optimizing that.”
Divya Mahajan
Significant, as in five to 10 times more efficient than today.
“It is a hard thing to do though. Because you need to know information from across the computing stack, and you need to work collectively and, sometimes, in lockstep,” said Mahajan, the Sutterfield Family Assistant Professor. “A lot more people need to interact with each other to make data centers more efficient than we’ve had.”
This kind of “cross-layer” optimization allows her to think about ways to make the AI software itself more streamlined, how to improve its execution, the hardware architecture it works on, and even how data is stored and retrieved within data center systems.
Because all of those layers can talk to each other, Mahajan said they should. But that means the engineers
who design AI models have to understand how they run on the hardware systems. The engineers building chips have to think about the software that will run on them. And engineers designing data centers need to consider the hardware and software interactions too, not just building facilities with more and more computing power.
Traditionally, each of those players would make their “layer” as good as possible in isolation. Collaborating would mean continuing to grow computational power while reducing energy and infrastructure demands, Mahajan said.
“The future of AI depends not only on better models, but also on better infrastructure,” she said. “To keep AI growing in a way that benefits society, it’s important to shift from scaling that infrastructure by brute force to scaling it with intelligence.”
Mahajan offered a simple example of the kinds of efficiencies that can be exploited:
Millions of subscribers bingeing the latest hit series on Netflix, while older movies see a fraction of the demand.
“There are all of these nuanced patterns that we can exploit to use the infrastructure — the hierarchy, the memory, the data exchange between devices — a
lot more efficiently. And we can exploit those patterns at the model level, the data level, the execution level, and the hardware level,” Mahajan said. “If you look at more nuanced properties that are rooted in even human behavior, you can do things more efficiently instead of just assuming that every data point, every one and zero in the hardware, is the same. It’s not the same. Some ones and zeros you will access frequently; others you will never care about.”
Mahajan creates dynamic runtime systems to use those patterns in the background to boost efficiency. For instance, frequently accessed data can be stored closer to the GPUs that process it so it’s easier to get to and compute. This level of control can help reduce heat generation and energy demand. It can also capitalize on uneven usage, shifting processes from busy computing resources to idle chips.
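A cartoon version of that frequency-aware placement fits in a few lines: count accesses and keep the hottest items in the fast tier nearest the processor. This is a sketch of the general idea, not Mahajan’s runtime; the capacity and keys are invented.

```python
from collections import Counter

class TieredPlacement:
    """Keep the most frequently accessed items in fast memory near the GPU;
    everything else lives in a slower tier."""
    def __init__(self, fast_capacity: int = 2):
        self.fast_capacity = fast_capacity
        self.counts = Counter()

    def access(self, key: str) -> str:
        self.counts[key] += 1
        hot = {k for k, _ in self.counts.most_common(self.fast_capacity)}
        return "fast tier" if key in hot else "slow tier"

placement = TieredPlacement()
for key in ["hit_show", "hit_show", "old_movie", "hit_show", "weights_shard_7"]:
    print(f"{key:>15} -> {placement.access(key)}")
```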
The next step in her work, which is funded by both software and hardware makers, is to shift from controlling just the software side of things to the physical data center systems. Eventually, Mahajan wants to be able to do things like leverage sensing information around chips and data centers to identify hotspots and adjust cooling systems to target them.
This kind of full-picture approach to operating data centers is why companies such as Google, OpenAI, and others are starting to invest in power infrastructure, she said.
“They understand that if you have visibility of the full pipeline comprising different pieces, you can optimize better rather than just having one piece and optimizing that.”
Tackling Heat
A big driver of the energy needs for data centers comes from managing all the heat that racks of high-powered chips generate. Cooling systems use air or water to pull that heat away, but in either case, electricity drives those systems.
Baratunde Cola has been thinking about how to move heat away from semiconductor chips for a long time — long before data centers became a hot topic. His team has spent years developing a material made for this moment: arrays of carbon nanotubes lined up vertically on an aluminum backbone that are super-efficient at channeling heat.
“Performance is proportional to power density, so if you ever want more performance, power density only goes up. I knew we needed to develop advanced solutions.”
Baratunde Cola
Cola, an entrepreneur and professor in the George W. Woodruff School of Mechanical Engineering, originally had smartphones and laptops in mind — as well as military applications, thanks to early funding from the Defense Department. But he also knew the time was coming when processors would be so power-hungry that dealing with the resulting heat would be a challenge.
“...we can’t meet future needs through adding a few power plants to the grid or increasing electrical efficiency. We’re looking at a multi-decade strategy for the state of Georgia and the region...”
Steve Biegalski
“I wasn’t thinking data-center scale; I was just thinking about wherever power density was high. Performance is proportional to power density, so if you ever want more performance, power density only goes up,” Cola said. “I knew we needed to develop advanced solutions.”
Cola has turned his carbon nanotube arrays into a startup spun out of Georgia Tech called Carbice, where he is founder and CEO. The company is manufacturing and selling the material as a thermal-interface solution for data center hardware, spacecraft, and even gamers who can incorporate the cooling technology into their custom-built computers.
Carbice’s carbon nanotubes are arranged into thin sheets that can be cut to size and applied as a simple peel-and-stick pad. They sit right behind a heat source, such as a GPU chip, connecting it to a cooling system or heat sink and channeling heat away. For data centers, designing with Carbice’s technology from the get-go can offer 40% better performance, Cola said. It also might allow for centers to be smaller in the first place — and therefore require fewer resources.
“The industry is ripe with redundancy,” he said. “When someone builds out a data center, they have to plan for 30% to 40% more hardware. That’s the way they deal with failures. When people talk about a data center’s ‘uptime,’ they don’t mean that the actual equipment stays up. They mean they have redundancy. But you lose something in that, because every time you switch and swap, you interrupt operations.”
Cola said many of those equipment issues result from the failure of the joint between the semiconductor chip and the cooling system behind it. Manufacturers often use special heat-conducting greases or epoxies in that
joint that degrade, dry out, or crack over time. Replacing them pulls part of the data center out of service and requires a technician to disassemble the system, reapply the grease, then put it all back together.
Carbice’s carbon nanotube pads don’t degrade, so downtime for those joint issues is significantly reduced and computing power stays online, Cola said. It also means saving on another limited resource: human time.
Cola said that kind of servicing will only grow as power and heat loads in data centers spike as a result of processors running resource-hogging AI.
“That’s where mechanics and what we focus on comes into play: It’s really use conditions that cause joints to fail — like your shoe sole or paint on your wall. Paint can last 50 years in one house but five years in another. Michael Jordan’s basketball shoes probably needed to be changed every few weeks,” Cola said. “Data centers and AI mean chips are operating more like Michael Jordan’s shoes, rather than, say, my 12-year-old daughter’s.”
The Full Energy Picture
As Mahajan advocates looking at the full picture of the “stack,” a group of Georgia Tech engineers, economists, and policy scholars at the Strategic Energy Institute’s Energy Policy and Innovation Center (EPICenter) are looking at the full picture of energy.
Their goal is to understand the landscape of growing electricity demand across Georgia — driven in part by
more data centers — to help policymakers and power companies plan.
“If you look at the numbers, we can’t meet future needs through adding a few power plants to the grid or increasing electrical efficiency,” said Steve Biegalski, professor and chair of the Nuclear and Radiological Engineering and Medical Physics Program in the Woodruff School. “We’re looking at a multi-decade strategy for the state of Georgia and the region, identifying technology readiness for everything that might be available today versus 20 years from now to figure out the best path of meeting demand while maintaining grid resilience.”
Though Biegalski has spent his career focused on nuclear energy and been an advocate for the value of nuclear power, he said there’s no denying that the exploding growth of power-hungry data centers has sparked renewed interest in putting more nuclear power on the grid and exploring advanced reactor designs.
It’s fortuitous timing for a project Biegalski and some of his colleagues started formulating a decade ago to build a new kind of advanced nuclear reactor. They received a license in 2024 from the Nuclear Regulatory Commission and, this year, support from the U.S. Department of Energy to construct and operate a molten salt research reactor in Texas.
Molten salt designs produce less nuclear waste and have built-in safety features that naturally slow or stop nuclear reactions without human intervention. The reactor project is privately funded by Natura Resources; Georgia Tech is part of the Natura Resources Research Alliance, a consortium with Abilene Christian University, Texas A&M University, and the University of Texas at Austin. Biegalski said the first molten salt reactor should be finished in the next year or so and will be an important step toward larger commercial systems.
He was recently at a meeting of nuclear energy researchers and industry colleagues, and one of the presentations came from the data center lead at a large tech company. The presenter noted the company didn’t favor one source of electricity over another to power their AI data centers.
But their analysis was clear: They couldn’t meet their energy needs without nuclear energy in the mix.
“I’ve always believed that we needed nuclear power and that it has a lot of value, even if we weren’t in the midst of this huge push for data centers,” Biegalski said. “But it is the gorilla in the room. It’s huge demand, and you can’t get away from it.”
Better Brain-Machine Interfaces Could Allow the Paralyzed to Communicate Again
Biomedical engineer Chethan Pandarinath collaborates with neurosurgeons and scientists across the country in a massive project to help patients with ALS or stroke damage reconnect with the world.
This summer, a team of researchers reported using a brain-computer interface to detect words people with paralysis imagined saying, even without them physically attempting to speak. They also found they could differentiate between the imagined words they wished to express and the person’s private inner thoughts.
It’s a significant step toward helping people with diseases like amyotrophic lateral sclerosis, or ALS, reconnect with language after they’ve lost the ability to talk. And it’s part of a long-running clinical trial on brain-computer interfaces involving biomedical engineers from Georgia Tech and Emory University alongside collaborators at Stanford University, Massachusetts General Hospital, Brown University, and the University of California, Davis.
Together, they’re exploring how implanted devices can read brain signals and help patients use assistive devices to recover some of their lost abilities.
Sometimes the user’s intent is to speak or move their hand. But sometimes, it might be both. Or neither. The challenge — and the hope — is to understand all of that.
Speech has become one of the hottest areas for these interfaces as scientists leverage the power of artificial intelligence, according to Chethan Pandarinath, associate professor in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory and one of the researchers involved in the trials.
“We can place electrodes in parts of the brain that are related to speech,” he said, “and even if the person has lost the ability to talk, we can pick up the electrical activity as they try to speak and figure out what they’re trying to say.”
Pandarinath and Emory researcher and neurosurgeon Nicholas Au Yong are now expanding their work to look simultaneously at brain areas involved in speech and hand movements. They’re working with an Atlanta woman who suffered a brain stem stroke, leaving her mostly paralyzed. Sensors implanted in her brain track signals from both areas in an effort to understand — and then recreate — the brain’s natural ability to switch seamlessly between different activities.
“When you move your hand or talk, it just happens. You know what you want to do. You don’t have to press some button to enable speech. Basically, we’re trying to make a
brain-computer interface that’s just as easy to use,” Pandarinath said. “People are going to use them for speaking; they’re going to use them for controlling their computer. Can we seamlessly tell when they transition from one to the other?”
Part of doing that is disentangling how brain signals in one area alter other signals. Sometimes the user’s intent is to speak or move their hand. But sometimes, it might be both. Or neither. The challenge — and the hope — is to understand all of that.
Pandarinath said they’re likely the first researchers to look at these two areas of the brain together. The team picked those specific motor functions because their patient has limited hand function and weakened diaphragm control that makes speech difficult.
Meanwhile, they’re also working to simplify how their participant interacts with a computer. Currently, she must think through each step when she wants to move a pointer to click on something on the screen. For example, she would think about moving
left until the pointer reaches the destination. Then she might have to think about moving up until she reaches what she wants to click. Only then can she think about clicking it.
But what if she could just think about clicking on the object, without all the steps in between?
“You or I might think about reaching out to grab a cup. There’s a higher-level goal, and then we just execute the whole thing — reach out, grab the cup, bring it back,” Pandarinath said. “What we’re trying to do is understand the planning activity so we can offload all the low-level details and just let her be able to think, ‘I want to go click on that thing.’ And then go and do it.”
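A toy contrast makes the ambition clear: today the interface decodes one low-level intention at a time, while the goal is to decode the planned endpoint once and execute it. Both decoders below are invented stand-ins, not the team’s algorithms.

```python
def step_decoder(cursor, target):
    """One decoded intention per move: 'left', 'up', ... then 'click'."""
    actions = []
    while cursor != target:
        dx = (target[0] > cursor[0]) - (target[0] < cursor[0])
        dy = (target[1] > cursor[1]) - (target[1] < cursor[1])
        cursor = (cursor[0] + dx, cursor[1] + dy)
        actions.append(cursor)
    return actions + ["click"]

def goal_decoder(cursor, target):
    """Decode the planned endpoint once, then execute the whole movement."""
    return [target, "click"]

print(len(step_decoder((0, 0), (5, 3))), "decoded intentions")  # many
print(len(goal_decoder((0, 0), (5, 3))), "decoded intentions")  # two
```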
The researchers can see moment-by-moment activity in the brain. And they can see the split-second planning that precedes it. That might allow them to skip the tedious steps and go right to the ultimate goal.
Still, mistakes happen: Their algorithms might misinterpret her intent.
“Let’s say we think she’s trying to move to target A, and instead she actually wants to move to B. What is the process like when she sees what we decided, realizes it’s wrong, and is trying to correct?” Pandarinath said. “We need to understand errors and correction better to make this something that can work really well for her.”
This long-term, multi-site study allows Pandarinath and his colleagues to explore these questions deeply. Each research site works with one patient for a year or more, sharing data and control algorithms. When something works for one patient, they’ll try it with others across the country.
“You get to not only work on developing these devices but also ask a lot of neuroscientific questions,” Pandarinath said. “You’re working with somebody a couple times a week for many years, allowing us to study the brain and ask questions that we couldn’t ask in any other way.”
‣ JOSHUA STEWART
During a research session, a participant imagines saying the text cue on the screen. The bottom text is the brain-computer interface’s prediction of the imagined words.
More Power, Less Heat
As an electrical and computer engineering Ph.D. student, Edgar Garay reimagined how chips called power amplifiers could work. His startup company based on that innovation has raised millions in capital to disrupt a $23 billion industry where designs haven’t changed much in decades.
Small chips you’ve probably never heard of are at the heart of every wireless communications device. These power amplifiers are embedded in our smartphones, cellular towers, satellites and satellite receivers, and much more.
If a digital device has an antenna, it has power amplifiers. And they are absolute energy gobblers.
Just powering our mobile phone infrastructure costs tens of billions of dollars in electricity every year. A good chunk of your phone’s battery life goes to these tiny chips, which are responsible for amplifying digital signals so the phone can communicate with a nearby tower.
Edgar Garay knew there had to be a way to cut down on the energy used and the heat produced in the process. He came to Georgia Tech for his doctoral studies laser-focused on the problem. Now he’s founder of an Atlanta-based company called Falcomm with $12 million in startup funding, around two dozen employees, and customers ready to deploy the technology he created to address a problem more or less ignored for decades.
“Every device that you own has 10, 20, sometimes hundreds of power amplifiers. The power amplification process consumes a lot of energy and is very inefficient,” said Garay, who earned his electrical and computer engineering Ph.D. in 2023. “While working at Georgia Tech, I figured out how to do it twice as efficiently as any other power amplifier in the world.”
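A back-of-the-envelope calculation shows why that matters: an amplifier’s efficiency is roughly RF power out divided by DC power in, so doubling efficiency halves both the battery draw and the waste heat for the same transmitted signal. The numbers below are illustrative, not Falcomm’s specifications.

```python
def dc_draw_w(rf_out_w: float, efficiency: float) -> float:
    """DC input power for a given RF output: P_dc = P_rf / eta."""
    return rf_out_w / efficiency

rf_out = 0.25                        # illustrative handset RF output, watts
for eta in (0.20, 0.40):             # hypothetical before/after efficiencies
    drawn = dc_draw_w(rf_out, eta)
    print(f"eta = {eta:.0%}: draws {drawn:.2f} W, wastes {drawn - rf_out:.2f} W as heat")
```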
Garay didn’t create a new material or process. Instead, he started at the transistor level to redesign the architecture of the power amplifier chip. His “dual drive” approach splits the radio frequency signal, delivering nearly two times the energy efficiency. And because the innovation lies in the chip’s architecture rather than in exotic materials, Garay’s designs can be manufactured by existing commercial semiconductor foundries.
“We don’t have to spend money on expensive materials or exotic manufacturing processes to get just 5% improvement in technical performance,” he said. “We can use manufacturing processes that already exist. And after three months, you have a super high-performing power amplifier that beats all the energy efficiency records of existing hardware.”
When Garay first developed his chips, he partnered with a commercial foundry to create the prototypes. His contacts there were so impressed with the results, they encouraged him to think about creating a company around the technology. He tapped into Georgia Tech’s commercialization resources, including VentureLab and CREATE-X, and he built his entrepreneurial skills in an incubator program at the University of California, Berkeley. A couple of years ago, Falcomm became the first startup supported through Tech’s Research Impact Fund, an investment fund targeted to companies based on Georgia Tech intellectual property.
Falcomm’s first paying customers were satellite manufacturers and aerospace companies — firms building out space infrastructure where thermal management and energy efficiency are at a premium. Garay said it’s a problem Falcomm is perfectly positioned to solve, even if it took some convincing initially.
“When we started showing our performance numbers to satellite manufacturers, their first reaction was, ‘We don’t believe the numbers. There’s no way,’” Garay recalled. “I asked if that was because our numbers are bad. Quite the opposite. They told me they were amazing.”
As Garay and his growing team keep working on their solutions, they’re also finding that the software tools they use to design semiconductor chips don’t work as
well as they want. No sweat: Falcomm has been adding software engineers to its team of chip designers so they can innovate on the design tools themselves, too.
Above: Ph.D. grad and Falcomm founder Edgar Garay.
Opposite page: The power amplifier connected to an evaluation board for performance tests.
For now, Falcomm is a “fabless” company, meaning they create designs to meet customers’ specifications and send them to foundries for manufacturing. But if Garay has his way, that will change in the next few years. He envisions setting up a foundry in Atlanta as part of a groundswell of semiconductor design and manufacturing.
“We’re building relationships with venture capitalists and the government, because this can be a good partnership between public and private capital to create jobs and create a semiconductor ecosystem in Atlanta. We have the talent,” Garay said. “Our four- or five-year vision is to go from fabless to doing 100% of our fab right here in Atlanta.”
With four patents and 13 more applications pending, two rounds of investment from Georgia Tech, and millions of dollars from other investors, Garay and his team seem to have hit on an area that was ripe for creative problem-solving. He’s not surprised.
“We’ve been using the same power amplifier architecture for the past 90 years. I realized there had to be a better solution.”
‣ JOSHUA STEWART
Top: The Falcomm power amplifier is half the size with nearly twice the energy efficiency of existing chips.
Behind the Wheel and Around Town, We’re Moving Toward a Connected, Autonomous Future
How we get around is changing as new options like autonomous vehicles arrive in earnest. Srinivas Peeta works to unravel what that means for our communities and how we plan for a more connected future.
Srinivas Peeta in his driving simulation lab.
If we’re moving toward a future where most vehicles drive themselves, the road between here and there has more than a few potholes. Some are becoming clear; others wait around the next bend. All require careful maneuvering to keep people and communities safe.
Georgia Tech civil engineer Srinivas Peeta is leading the road crew filling holes, repaving streets, and preparing for a future where our vehicles — and the infrastructure around them — are smarter and more connected.
“Connected and autonomous vehicles are, in many ways, transformative — and disruptive as well,” said Peeta, Frederick R. Dickerson Chair and professor in the School of Civil and Environmental Engineering and the H. Milton Stewart School of Industrial and Systems Engineering. “So there are many problems or questions that are of interest to us.”
One immediate issue: the friction between human drivers and the autonomous cars starting to appear in some parts of the country. For example, Waymo started offering rides in self-driving cars earlier this year to Atlanta Uber users. The cars rely on sensors, training data, and programmed rules. Human drivers rely on none of this, instead depending on situational awareness, experience, and context. It’s a mismatch that can lead to conflict.
Conflicts on the Roads
Peeta’s team studies those interactions to understand what situations will show up as autonomous vehicles expand and how to address them.
One example is platooning, where self-driving cars and trucks travel close together on highways at high speeds. That can make it more difficult for drivers to change lanes or merge, leading to frustration or risky behavior.
Or consider handoff scenarios, when a partly autonomous vehicle encounters a situation too complex to handle and requires the human driver to take over. That driver might be distracted — maybe they were reading, watching a movie, talking with another passenger — and the sudden need to reengage can pose safety risks.
Peeta suggested other kinds of conflicts could actually make roads safer. For example, his team is studying lane-change scenarios where a self-driving car could block a dangerous maneuver by a human driver.
“We are looking at scenarios where autonomous vehicles determine when to assist and when to prevent lane changes, because there may be lane changes that are really dangerous, especially on freeways,” he said.
[Autonomous] cars rely on sensors, training data, and programmed rules. Human drivers rely on none of this, instead depending on situational awareness, experience, and context. It’s a mismatch that can lead to conflict.
“What the vehicle would do is preclude that lane change, preventing the person from making it to enhance safety.
“Of course, aggressive drivers may get frustrated, so there are other consequences that may show up. You have to figure out all of these possibilities.”
Security is another concern. Peeta and his team have been testing whether drivers can spot a compromised autonomous vehicle on the road. What tips them off? And then: how do they react?
Using a full-size driving simulator in their lab, analytical models, and real-world data collection — including a partnership with the City of Peachtree Corners northeast of Atlanta — his team examines how humans and the vehicles interact, how vehicles deal with infrastructure, and how infrastructure influences human behavior.
More autonomy on our roads likely will mean roads look different in the future, Peeta said. And in the meantime, we might need to adjust road designs to help drivers and autonomous vehicles coexist more smoothly. To test potential layouts, Peeta and his team have built a digital twin, a computer model of part of Peachtree Corners. (More about other ways our engineers are using digital twins on page 26.)
“What we found is that some designs are more intuitive for human drivers to figure out the intent of these autonomous vehicles,” he said. “Sometimes they’re more cautious. Sometimes they’re not sure what they have to do. Human drivers might wonder, why is this slowing down here when other vehicles would nicely zip off? Some road designs make it easy for human drivers to understand how autonomous systems drive.”
Connected Communities
Autonomous vehicles are just one piece of a changing transportation landscape. Ride-sharing, micromobility options like scooters and bikes, and autonomous shuttles are becoming more common.
Peeta’s team is deep into a National Science Foundation project to understand how these new technologies can work together with traditional transit and personal vehicles to achieve a community’s mobility goals.
“These are showing up organically. The question is, how does a city leverage all of these in a systematic way where they’re integrated?” Peeta said. “That allows for strategic decision-making, rather than doing trial and error without the convergence that can be enabled by looking at all of it holistically.”
The goal is a system that improves safety, public health, and access to jobs and services while mitigating environmental effects like air pollution. One component focuses
on how information influences people’s behavior — including how to deliver the right information to the right people at the right time.
The result will be a dashboard-like tool that pulls in all of the data available to cities to help them build the transportation networks that work for their residents. It’s designed to be flexible, Peeta said, because what works for a suburban, well-connected area probably won’t for a low-income or rural area. And priorities change over time. His tool will allow planners and decision-makers to adjust.
“What we’re looking at is fostering more travel that’s sustainable. That is, to try to get people to use options that are more friendly to themselves or the environment that also simultaneously improve mobility and safety,” Peeta said.
“Every community will have its own needs, its own resources, and it will be able to use our framework to come up with the best cocktail of options.”
‣ JOSHUA STEWART
Srinivas Peeta (standing) and Ph.D. students Gulam Kibria and Yangjiao Chen prepare driving scenarios for their driving simulator.
10 Questions with Ryan Pickren
Ryan Pickren graduated from Georgia Tech with his computer engineering bachelor’s degree in 2017 and went to work in cybersecurity for Amazon. By then, he’d already spent several years doing penetration tests, or “pentests,” to find vulnerabilities in all kinds of computer networks and systems. Then he came back to North Avenue for a Ph.D., which he finished earlier this year. Pickren is famous for something that happened in 2014: he was arrested and charged with computer trespass after altering the University of Georgia’s website calendar with a reference to the annual football game against Tech.
1 ‣ How do you use your skills for good — and would you call yourself a “hacker”? After undergrad, I joined the Amazon Web Services PenTesting team in Seattle, where I used my skills to identify and patch vulnerabilities in the cloud before attackers could exploit them. During my Ph.D., I focused on uncovering and addressing systemic cybersecurity flaws in critical infrastructure. Back in 2014, a jury of my peers (well, mostly U[sic]GA fans) called me a “hacker,” so I suppose that term is appropriate.
2 ‣ How did you get interested in cybersecurity? I started hunting for security issues over a decade ago as an undergrad. I cut my teeth on scrappy bug bounty programs and sharpened my skills through specialized courses. Fortunately, Georgia Tech was an early leader in cybersecurity education, and their offerings only grew.
3 ‣ As an undergrad, you earned millions of airline miles finding bugs in United Airlines’ systems. What was the allure and what did you do with all those miles? Honestly, my girlfriend at the time — and now wife and mother of my child — had an internship with PayPal on the West
Coast, and I wanted to visit her on the weekends. I might have overshot how many miles that would take.
[Ed. Note: Pickren also donated 5 million of those miles to Georgia Tech for student organizations involved in charity work to use.]
4 ‣ What other kinds of industries have you helped? I’ve worked across social media, cloud, browsers, augmented and virtual reality, and most recently maritime. At Georgia Tech’s Cyber-Physical Security Lab, we even built a full-scale marine testbed — something truly unique that I highly recommend people check out!
5 ‣ Lately, you’ve been finding and fixing vulnerabilities in critical infrastructure systems. Why do those problems interest you? My Ph.D. research focused on the attack surface created when cyber-physical systems intersect with web technologies. Many modern cyber-physical systems, from industrial devices to ships, have begun to incorporate web technologies in unusual and potentially dangerous ways — for example, secret embedded webviews inside privileged human-machine interfaces. Exploring this emerging trend has been exciting, and it
Back in 2014, a jury of my peers (well, mostly U[sic]GA fans) called me a ‘hacker,’ so I suppose that term is appropriate.”
Ryan Pickren
allows me to bring my background in application-layer offensive security to a traditionally “old school” domain.
6 ‣ How do you decide when a system or device is a good candidate to investigate for issues? You start to develop a “Spidey sense” after pentesting for a while. You’ll observe some unexpected behavior or odd error and know something isn’t quite right. If you tie enough software quirks together, you might eventually create a really impactful killchain of bugs.
7 ‣ Who’s winning in cybersecurity — the good guys or the malicious actors? There’s the classic saying: “Hackers only need to get it right once, but defenders need to be right every time.” That rings true in pentesting. A small oversight can unravel years of product hardening if an attacker knows how to exploit it. That’s what makes defense so difficult. Still, with places like Georgia Tech producing talented security professionals, I think we’re in good hands.
8 ‣ What vulnerability or issue keeps you up at night? Artificial intelligence agents learning to pentest. I’ve never worried much about basic scanning tools. Interesting bugs usually require chaining complex, creative exploits that scanners can’t find. But an advanced large-language-model-driven agent capable of reasoning creatively? That keeps me up at night.
9 ‣ What’s one thing we should all be doing to protect ourselves and our systems online? Enable multifactor authentication and use a password manager.
10 ‣ You finished your Ph.D. in the spring. So, what’s next? I’ve joined the family business in the powersports and marine industry. Specifically, I’m now working at AquaAmp, where we build rugged outdoor electronics. Stay tuned for some really exciting products in this space!
Right: Ryan Pickren with Buzz when he “got out” a second time. Below: Pickren (center) led a project with Raheem Beyah (left) and Saman Zonouz to develop a new algorithm to warn against malicious attacks on infrastructure.