AI IS ALREADY CHANGING OUR WORLD. GEORGIA TECH ENGINEERS ARE LEADING THE WAY.
AI FOR A BETTER WORLD
Georgia Tech researchers are improving lives with AI tools PAGE 18
AI MAKERSPACE
Putting supercomputing power in the hands of our students PAGE 30
AI BEYOND CAMPUS
How AI is already shaping engineering practice — and what’s next PAGE 34
SPRING 2024
A team of Georgia Tech researchers, led by Jerry Qi in the George W. Woodruff School of Mechanical Engineering, has developed a new approach for 3D printing glass using ultraviolet light instead of extremely high temperatures. The method can be used to produce glass for medical devices, microelectronics, and more.
The researchers employed a light-sensitive resin based on a widely used soft polymer called PDMS, at left. The sample on the right is glass created using deep UV light to convert the photoresin to hardened inorganic glass.
CANDLER HOBBS
HELLUVA ENGINEER
GEORGIA TECH
COLLEGE OF ENGINEERING MAGAZINE SPRING 2024
Helluva Engineer is published semiannually by the College of Engineering at the Georgia Institute of Technology.
DEAN Raheem Beyah
DIRECTOR OF COMMUNICATIONS
Jason Maderer
ASSISTANT DIRECTOR OF COMMUNICATIONS
Joshua Stewart
CONTRIBUTING WRITERS
Jerry Grillo, Breon Martin, John Toon, Dan Watson
CONTRIBUTING PHOTOGRAPHERS
Jerry Grillo, Fitrah Hamid, Candler Hobbs, Jess Hunt-Ralston, Jack Kearse, Genevieve Martin, Gary Meek, Blair Meeks, Christopher Moore
CONTRIBUTING ILLUSTRATOR
Joel Kimmel
GRAPHIC DESIGN
Sarah Collins
ASSOCIATE DEANS Matthieu Bloch
Associate Dean for Academic Affairs
Kim Kurtis
Associate Dean for Faculty Development and Scholarship
Hang Lu
Associate Dean for Research and Innovation
Damon P. Williams
Associate Dean for Inclusive Excellence
Doug Williams
Associate Dean for Administration and Finance
ADDRESS
225 North Avenue NW Atlanta, Georgia 30332-0360
VISIT
coe.gatech.edu
FOLLOW twitter.com/gatechengineers
FOLLOW instagram.com/gatechengineers
CONNECT bit.ly/coe-linkedin
LIKE facebook.com/gtengineering
WATCH youtube.com/coegatech
Copyright © 2024
Georgia Institute of Technology
Please recycle this publication.
If you wish to change your Helluva Engineer subscription or add yourself to our mailing list, please send a request to editor@coe.gatech.edu.
FEATURES
10 What IS Artificial Intelligence?
Engineers working in machine learning and AI offer a crash course in the basic concepts and buzzwords that have moved from the lab to everyday life.
18 AI for a Better World
Georgia Tech engineers are refining AI tools and deploying them to help individuals, cities, and everything in between.
30 Making AI
A first-of-its-kind AI Makerspace created in collaboration with NVIDIA will give undergrads unprecedented access to supercomputing power for courses, projects, and their own innovations.
34 AI Beyond Campus
Corporate leaders with ties to the College describe AI in their current roles, what will happen in the next five years, and how students and professionals will need to adapt.
WE ARE
40 Managing the Ups and Downs
42 A Magician for Furniture
44 10 to End
AI FOR ENGINEERING
16 Engineering to the Power of AI
Our school chairs describe how AI and ML are at the core of every discipline and where they’re headed next.
Cover: This issue’s cover, appropriately, was created using Adobe Firefly’s text-to-image generative AI tool. Above: An experimental exoskeleton used alongside AI to develop a universal controller for robotic assistance devices. (PHOTO: CANDLER HOBBS)
FROM THE DEAN
Dear Friends,
We didn’t have to brainstorm very long to decide our focus for this issue of Helluva Engineer. Artificial intelligence is everywhere, and that’s why it fills these pages.
As a computer engineer who studies cybersecurity, I’m not surprised that AI has found its way into our homes, workplaces, smartphones, and nearly everything in between. We have seen AI approaching for decades. The biggest difference in the past two years or so is that supercomputers and the chips that power them have become much more affordable to government, businesses, and higher education. And when technology is accessible to more people, its impact and applications increase exponentially.
The expanded availability of AI is allowing our College to launch new initiatives to better educate our students as they prepare to enter an AI-native workforce. We call this initiative “AI for Engineering.” It began in January, when we announced the creation or reimagining of 14 core AI courses for undergrads. By the fall, we will have undergraduate AI courses in every discipline in the College. In March, we finalized Georgia Tech’s first minor in applications of AI and machine learning. The program is a collaboration
with the Ivan Allen College of Liberal Arts and also teaches students about the ethics and philosophy of AI.
In April, we made our biggest move yet: the nation’s first AI Makerspace. We unveiled this new resource for our students thanks to a collaboration with NVIDIA, one of the biggest names in tech. We are the first university in the country to give our students access to a powerful AI supercomputing hub like those typically reserved for research or available only at tech companies. The AI Makerspace will help our students learn in the classroom, work on projects, and tap into our physical makerspaces. We’re going to help them become AI natives, and then get out of the way as they help build the future. As our ECE chair, Arijit Raychowdhury, has put it, giving them access to this level of computing power is akin to replacing an Etch A Sketch with an iPad.
AI pervades all eight of our programs. I invite you to explore just a glimpse of that in these pages. We’ll continue to push and define the boundaries of the technology, always with an eye toward improving the human condition, just as many of our alumni are doing in the workplace.
Go Jackets!
Raheem Beyah Dean and Southern Company Chair
CANDLER HOBBS
in the field
Making Fertilizer More Sustainable
Nitrogen-rich ammonia is an essential fertilizer in global food production. Creating it requires significant petroleum-based energy, however, and it can only be done at 100 or so large-scale facilities worldwide.
Georgia Tech engineers are working to make fertilizer more sustainable — from production to productive reuse of the runoff after application — and a pair of studies is offering promising avenues at both ends of the process.
In one paper, researchers have unraveled how nitrogen, water, carbon, and light can interact with a catalyst to produce ammonia at ambient temperature and pressure, a much less energy-intensive approach than current practice. The second paper describes a stable catalyst able to convert waste fertilizer back into nonpolluting nitrogen that could one day be used to make new fertilizer.
RECYCLING FERTILIZER WASTE
Significant amounts of nitrogen are wasted when fertilizer is applied to crops — perhaps as much as 80% goes unmetabolized by plants. This nitrate waste often ends up polluting groundwater.
Significant work remains on both processes, but the senior author on the papers, Marta Hatzell, said they’re a step toward a more sustainable cycle that still meets the needs of a growing worldwide population.
“We often think it would be nice not to have to use synthetic fertilizers for agriculture, but that’s not realistic in the near term considering how much plant growth is dependent on synthetic fertilizers and how much food the world’s population needs,” said Hatzell, associate professor in the George W. Woodruff School of Mechanical Engineering. “The idea is that maybe one day you could manufacture, capture, and recycle fertilizer on site.”
‣ JOSHUA STEWART
3D Printing for Soft Tissue Engineering
For more than a decade, Scott Hollister and his collaborators have developed lifesaving, patient-specific airway splints for babies with rare birth defects. These personalized devices are made of a biocompatible polyester called polycaprolactone (PCL), which has the advantage of being approved for use in medical devices by the Food and Drug Administration.
PCL has a great safety record when implanted into patients. But it has the disadvantage of being relatively stiff and so hasn’t been applied in soft tissue engineering. How do you make a firm thermoplastic into something flexible and possibly capable of growing with the patient? Hollister’s lab has figured out how.
Using “auxetic design,” researchers have demonstrated successful 3D printing of PCL for soft tissue engineering. An auxetic material, unlike typical elastic materials, has a negative Poisson’s ratio. That means if you stretch an auxetic material longitudinally, it will also expand laterally.
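For readers who want the underlying definition (standard solid mechanics, not specific to the Hollister lab’s design), Poisson’s ratio compares lateral strain to axial strain; a negative value means the two strains share a sign, so a stretched sample widens rather than narrows.

```latex
\nu = -\frac{\varepsilon_{\mathrm{lateral}}}{\varepsilon_{\mathrm{axial}}}, \qquad
\nu < 0 \;\Longrightarrow\; \varepsilon_{\mathrm{lateral}} > 0 \text{ when } \varepsilon_{\mathrm{axial}} > 0
```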
“Although the mechanical properties and behavior of the 3D structure depend on the
inherent properties of the base material — in this case, PCL — it can also be significantly tuned through internal architecture design,” said Jeong Hun Park, research scientist in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University (BME).
Park developed the design of 3D-printed structures made up of tiny struts, arranged at right angles like the bones of very tiny skyscrapers. The rotation of those intersecting joints within the network, under compression or extension, causes negative Poisson’s behavior. It also enables advanced performance for a printed device, including impact energy absorption, indentation resistance, and high flexibility.
“The new structure is about 300 times more flexible than the typical solid structure we make out of PCL in our lab,” said Hollister, Patsy and Alan Dorris Chair in Coulter BME.
The ultimate goal is to use the structure to develop a breast reconstruction implant with comparable biomechanical properties to native breast tissue.
‣ JERRY GRILLO
PARK: JERRY GRILLO; RENDERING COURTESY: HOLLISTER LAB
Above: Jeong Hun Park with 3D-printed structures made of a biocompatible polyester called PCL. Left: An illustration showing the structure of the 3D-printed PCL and its behavior under compression.
National Academies Elect Lam, Sholl, Mokhtarian
Three engineering faculty members are among the newest members of the National Academy of Medicine (NAM) and the National Academy of Engineering (NAE).
Civil engineer Patricia Mokhtarian and chemical engineer David Sholl were part of the 2024 NAE class, one of the highest professional recognitions for engineers. With Mokhtarian and Sholl, Georgia Tech now has 48 NAE members.
Wilbur Lam was elected to NAM in the fall, the third current faculty member to join the Academy. Lam is the W. Paul Bowers Research Chair in the Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University and a pediatric hematologist/oncologist at Children’s Healthcare of Atlanta.
He was cited “for outstanding contributions in point-of-care, home-based, and/or smartphone-enabled diagnostics that are changing the management of pediatric and hematologic diseases as well as development of microsystems technologies as research-enabling platforms to investigate blood biophysics.” NAM also noted his work leading a national project to evaluate diagnostic tests for Covid-19 — a test-the-tests effort that was responsible for getting Covid-19 at-home rapid tests widely available on store shelves.
Mokhtarian is the inaugural Clifford and William Greene, Jr. Professor in the School of Civil and Environmental Engineering, where she studies travel behavior, including
the travel-related impacts of information and communication technology and other topics. The NAE cited Mokhtarian’s work that “improved transportation systems planning and practice through quantifying human behavior” in electing her to the Academy.
Sholl is professor and Cecile L. and David I.J. Wang Faculty Fellow in the School of Chemical and Biomolecular Engineering. He served as the John F. Brock III School Chair from 2013 to 2021. His research uses computational tools to study materials whose dynamic and thermodynamic properties are strongly influenced by their atomic structure.
The NAE cited him “for addressing large-scale chemical separation challenges, including carbon dioxide capture, using quantitative materials modeling.”
Alongside Sholl and Mokhtarian, the NAE also elected two alumni.
Theodore “Ted” Colbert III was elected “for engineering leadership in advanced commercial and military air and space platforms.” A 1996 industrial and systems engineering graduate, Colbert is president and chief executive officer of Defense, Space, and Security at Boeing.
Larry Pellett, a 1981 electrical and computer engineering alumnus, was honored for “engineering development, transition, and operation of airborne system technologies for national security.” He is vice president of special programs at Lockheed Martin Corp. ‣ JOSHUA STEWART
Lam
Sholl
Mokhtarian
LAM: CHRISTOPHER MOORE; MOKHTARIAN: JESS HUNT-RALSTON; SHOLL: GENEVIEVE MARTIN, ORNL/U.S. DEPT. OF ENERGY
READ THESE STORIES IN FULL AT COE.GATECH.EDU/MAGAZINE/LINKS
New Kind of Cyberattack Threatens Infrastructure Systems
Researchers have found a new way to hijack the computers that control infrastructure and industrial systems. Called programmable logic controllers (PLCs), the computers increasingly have embedded webservers and are accessed on site via web browsers. Attackers can exploit this approach and gain full access to the system.
That means they could spin motors out of control, shut off power relays or water pumps, disrupt internet or telephone communication, or steal critical information. They could even launch weapons — or stop them from launching.
“We think there is an entirely new class of PLC malware that’s just waiting to happen. And it gives you full device and physical process control,” said Ryan Pickren, a Ph.D. student in the School of Electrical and Computer Engineering (ECE) and the lead author of a study describing the malware and its implications. “This has been a neglected attack surface for many years. This paper is the first one where we’re exploring what an adversary could do with this.”
The researchers developed an approach that’s easier to deploy than typical attacks on industrial or infrastructure systems, which usually require access privileges or on-site presence. It’s difficult to detect, with the ability to wreak havoc and then erase all traces of its presence. And it’s sticky: the malware can resurrect itself if operators discover the malfunctions and reset controllers or even replace hardware.
“We believe this is one of the first attacks at the application layer of PLCs to compromise industrial systems,” said Raheem Beyah, senior author on the study, a professor in ECE, and dean of the College. “This is opening a door to a new field that hasn’t really been studied yet.”
In their study, the researchers made several recommendations to protect against web-based PLC malware, including steps web browser developers can implement to prevent public access to private networks and webserver architecture changes. They also outlined steps PLC manufacturers can take to harden their devices against this new kind of attack.
‣ JOSHUA STEWART
A typical network structure for industrial control systems where the human-machine interface — essentially, control panels — and the programmable logic controller (PLC) are isolated from engineering workstations and the public internet. In a web-based PLC malware attack, even the isolated systems can be accessed by malicious code that installs on the PLC and runs through a web browser where control functions are displayed for operators.
NETWORK ILLUSTRATION COURTESY: RYAN PICKREN; SCREEN IMAGE: ISTOCKPHOTO
Leadership Transitions in AE, ISyE
Mitchell Walker became the new chair of the Daniel Guggenheim School of Aerospace Engineering (AE) in January. He’s been a faculty member since 2004 and formerly served as the College’s associate dean of academic affairs.
“Over the last two decades, Mitchell has exhibited remarkable leadership in service to Georgia Tech, excelling in the aerospace field and developing academic initiatives that bolster our undergraduate and graduate engineering students,” said Raheem Beyah, dean of the College and Southern Company Chair. “He embodies the innovation and perseverance that characterizes our AE School, the nation’s No. 1 ranked public aerospace program, through his research and forward-thinking vision.”
Walker is a fellow of the American Institute of Aeronautics and Astronautics and the organization’s deputy director for Space Rockets and Advanced Propulsion. He is a member of the Department of Energy Fusion Energy Sciences Advisory Committee and a member of the NASA Advisory Council – Technology, Innovation, and Engineering Committee.
Meanwhile, Edwin Romeijn has decided to step down as chair of the H. Milton Stewart School of Industrial and Systems Engineering (ISyE) after 10 years of leadership. During his tenure as H. Milton and Carolyn J. Stewart School Chair, ISyE has remained the nation’s top undergraduate and graduate program, according to U.S. News & World Report.
Romeijn shepherded the creation of new programs for the Stewart School in analytics and data science, as well as advanced studies for operations research and statistics. This includes the highly successful interdisciplinary Online Master of Science in Analytics program that currently enrolls approximately 5,600 students.
Romeijn will return to his full-time role as professor in January 2025.
“Edwin’s vision has made the School a prominent leader in fields that include analytics, data science, machine learning, and artificial intelligence,” Beyah said. “I’m grateful for his visionary leadership and unwavering commitment to ISyE and the College, and I look forward to continuing to partner with him as he serves out his term.”
The search for the next permanent ISyE chair is underway.
‣ JASON MADERER
A MORE RESILIENT FLU VACCINE
School of Chemical and Biomolecular Engineering Professor Ravi Kane is leading a multi-university team that has received a five-year, $4 million grant from the National Institutes of Health to develop a more resilient flu vaccine — one that provides lasting protection from season to season.
The goal is to design a vaccine that provides broad protection against group 1 influenza A viruses — a group that includes the 1918, 1957-1958, and 2009 pandemic viruses — as well as some bird flu viruses that can cause disease in humans.
“Current seasonal vaccines induce an immune response that primarily targets the head of hemagglutinin, a flu protein responsible for helping the virus attach to and infect human cells,” Kane said. “The head of the flu protein, however, easily mutates. Which means that the immune system won’t recognize the virus when it reappears next season.”
Kane’s solution? Create a vaccine that reacts to a different part of the virus, one that doesn’t change much from year to year. His team has discovered the stalk of that same hemagglutinin flu protein could be an ideal target.
“This stalk plays a critical role in viral entry into our cells during infection and has a lower tolerance for mutations than the head,” he said. “Our lab recently showed that tuning the orientation of the protein to increase the accessibility of the stalk results in an enhanced protective immune response.”
The team is using computational and experimental methods to tune the antigens in a potential universal flu vaccine and exploring nanoparticles that display multiple hemagglutinin proteins, which can elicit a stronger immune response than just a single protein.
‣ JASON MADERER
WALKER: CHRISTOPHER MOORE; ROMEIJN: FITRAH HAMID
Walker Romeijn
New Battlefield Obscurants Could Give Warfighters a Visibility Advantage
Clouds of tiny structures that are lighter than feathers — and whose properties can be remotely controlled by radio frequency signals — could one day give U.S. warfighters and their allies the ability to observe their adversaries while reducing how well they themselves can be seen.
Using miniaturized electronics and advanced optical techniques, this new generation of tailorable, tunable, and safe battlefield obscurants could be quickly turned on and off and provide an asymmetric visibility advantage. Georgia Tech researchers are among several teams funded by the Defense Advanced Research Projects Agency (DARPA) to develop the technology. Smoke screens created to hide troop movements or ships at sea have been used in past conflicts. Often based on burning fuel oil, these conventional techniques have many disadvantages, including limiting the visibility of both sides and using materials that are potentially harmful to warfighters. The new approach being developed at Georgia Tech will instead use lightweight and non-toxic electrically reconfigurable structures that would form obscuring plumes able to hang in the air over a battlefield.
“We will bring nanophotonic structures into the real world and be able to change their properties remotely without having direct contact such as with an optical fiber,” said Ali Adibi, a professor in the School of Electrical and Computer Engineering and the project’s principal investigator. “They could be part of a cloud of nanostructures formed from a foil material with different dimensions, from millimeters to centimeters. They could include an antenna and diode or heater that would allow them to respond to an RF signal, changing their properties to collectively affect light passing through.”
The coded visibility plumes likely won’t permit picture-perfect visibility but should give friendly forces enough information to tell what an enemy is doing. At this stage, the researchers don’t know how well the technique will ultimately work, though modeling the scattering and absorption is so far encouraging.
‣ JOHN TOON
CHRISTOPHER MOORE
Top: Research Scientist Connor Frost shows a nanophotonic device with electronic circuitry that captures radio frequency energy to alter the properties of the tiny structure.
Left: Research Scientists Taylor Shapero and Frost set up equipment for testing tiny nanophotonic structures for the effects of radio frequency energy.
Researchers Can Stop Degradation of Promising Solar Cell Materials
An illustration of metal halide perovskites. They are a promising material for turning light into energy because they are highly efficient, but they also are unstable. Georgia Tech engineers showed in a new study that both water and oxygen are required for perovskites to degrade. The team stopped the transformation with a thin layer of another molecule that repelled water.
Materials engineers have unraveled the mechanism that causes degradation of a promising new material for solar cells — and they’ve been able to stop it using a thin layer of molecules that repels water.
Their findings are the first step in solving one of the key limitations of metal halide perovskites, which are already as efficient as the best silicon-based solar cells at capturing light and converting it into electricity.
“Perovskites have the potential of not only transforming how we produce solar energy, but also how we make semiconductors for other types of applications like LEDs or phototransistors. We can think about them for applications in quantum information technology, such as light emission for quantum communication,” said Juan-Pablo Correa-Baena, assistant professor in the School of Materials Science and Engineering. “These materials have impressive properties that are very promising.”
Perovskite development has recently accelerated, particularly after engineers and chemists recognized their potential for more efficient solar cells a decade ago. The problem with metal halide perovskites is that they are unstable when interacting with water and oxygen, transforming into a different structure that doesn’t work well to create solar power.
The Georgia Tech team uncovered why, finding the complex interplay of both water and oxygen with the perovskites leads to instability; taking away one of those preserved the perovskites’ energy-capturing crystal structure.
“People thought if you expose them to just water, these materials degrade. If you expose them to just oxygen, these materials degrade. We’ve decoupled one from the other,” said Correa-Baena, who’s also Goizueta Early Career Faculty Chair. “If you prevent one or the other from interacting with the perovskites, you mostly prevent the degradation.”
‣ JOSHUA STEWART
ILLUSTRATION COURTESY: JUAN-PABLO CORREA-BAENA
Sensor Fabric Could Help End Pressure Injuries for Wheelchair Users
At least half of veterans with spinal cord injuries will develop sores on their skin from the unrelieved pressure of sitting for long periods of time in a wheelchair. It’s a constant worry, because these skin ulcers can greatly limit patients’ mobility.
Veterans and other wheelchair users could one day worry a lot less thanks to materials engineers who are developing fabric sensors and a customized wheelchair system that assesses and automatically eases pressure at contact points to prevent these injuries.
“We have three key issues happening: First, continuous pressure. Second, moisture, because when you’re sitting in the same spot, you tend to sweat and generate moisture. And third is shear. When you try to move somebody, the skin shears. That perfect combination is what causes pressure injuries,” said Sundaresan Jayaraman, professor in the School of Materials Science and Engineering. “We believe we have a solution to the perfect storm of pressure, moisture, and shear, which means the user’s quality of life is going to get better.”
With Principal Research Scientist Sungmee Park, Jayaraman has designed a washable fabric with embedded sensors that covers the seat of a wheelchair. Data about pressure and moisture from the sensors feeds a processing unit that uses artificial intelligence algorithms to identify trouble spots in real time and selectively raise or lower a series of actuators under the wheelchair seat to relieve pressure. That eliminates any shearing forces on the skin that come from sliding against the seat. Meanwhile, a series of fans activates to eliminate moisture.
A companion smartphone app developed in the lab allows users to override the system to maintain comfort and stability.
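As a rough sketch of the sense-decide-act loop described above (hypothetical values and a simple threshold rule standing in for the lab’s AI algorithms), the data flow might look something like this in Python:

```python
# Simplified, hypothetical sketch of the wheelchair system's control loop.
# The real system uses AI models trained on pressure/moisture data; here a plain
# threshold rule stands in for that decision step, just to show the data flow.
import random
import time

PRESSURE_LIMIT = 60.0   # assumed units (mmHg); illustrative threshold only
MOISTURE_LIMIT = 0.7    # assumed relative moisture level at the seat surface

def read_seat_sensors(n_zones=16):
    # Stand-in for the fabric sensor readout: one (pressure, moisture) pair per zone
    return [(random.uniform(20, 90), random.uniform(0.2, 0.9)) for _ in range(n_zones)]

def relieve_pressure(zone):
    print(f"lowering actuator under zone {zone} to relieve pressure")

def run_fans(zone):
    print(f"running fan near zone {zone} to remove moisture")

for _ in range(3):  # a few monitoring cycles; a real controller would run continuously
    for zone, (pressure, moisture) in enumerate(read_seat_sensors()):
        if pressure > PRESSURE_LIMIT:
            relieve_pressure(zone)   # avoid continuous pressure and skin shear
        if moisture > MOISTURE_LIMIT:
            run_fans(zone)
    time.sleep(5)                    # re-check the seat every few seconds
```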
With support from a National Academy of Medicine healthy longevity competition, Park and Jayaraman are working to “ruggedize” their system for real-world use. They’re also hoping to work with doctors at the Veterans Affairs Atlanta Healthcare System to collect feedback and ideas from wheelchair users with spinal cord injuries to improve their design.
‣ JOSHUA STEWART
CANDLER HOBBS
Top: A customized wheelchair system — including fabric sensors, actuators, and fans — designed to prevent pressure injuries.
Above: Sundaresan Jayaraman (left) looks at pressure data from fabric sensors he developed with Sungmee Park, who is seated in the prototype wheelchair system.
BY JOSHUA STEWART
WHAT IS ARTIFICIAL INTELLIGENCE?
Engineers working in machine learning and AI offer a crash course in the basic concepts and buzzwords that have moved from the lab to everyday life.
It’s tempting to think that the artificial intelligence revolution is coming — for good or ill — and that AI will soon be baked into every facet of our lives. With generative AI tools suddenly available to anyone and seemingly every company scrambling to leverage AI for their business, it can feel like the AI-dominated future is just over the horizon.
The truth is, that future is already here. Most of us just didn’t notice.
Every time you unlock your smartphone or computer with a face scan or fingerprint. Every time your car alerts you that you’re straying from your lane or automatically adjusts your cruise control speed. Every time you ask Siri for directions or Alexa to turn on some music. Every time you start typing in the Google search box and suggestions or the outright answer to your question appear. Every time Netflix recommends what you should watch next.
All driven by AI. And all a regular part of most people’s days.
But what is “artificial intelligence”? What about “machine learning” and “algorithms”? How are they different and how do they work?
We asked two of the many Georgia Tech engineers working in these areas to help us understand the basic concepts so we’re all better prepared for the AI future — er, present.
‘ARTIFICIAL INTELLIGENCE’ DEFINED
Not long ago, Yao Xie was talking to a group of 8- and 9-year-olds about AI and asked them to explain it to her. She was surprised at their insights, which offered a good outline of the basics.
“They said building algorithms, or methods, that can mimic how the human brain functions or how human intelligence functions,” said Xie, Coca-Cola Foundation Chair and professor in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE). “They summarized it very well: trying to mimic human intelligence — all the way from something very simple, like adding numbers, to something super sophisticated, like understanding the context of a prompt and generating images.”
Along with that, Xie said she would add dimensions of speed and scale. AI can perform computations or produce results much more quickly than humans, and the speed and power of AI can increase as computational power grows.
Justin Romberg put it this way: AI “is when a computer or other automated agent makes a decision that a human could make but does so without human input.”
Romberg is the Schlumberger Professor in the School of Electrical and Computer Engineering (ECE) and senior associate director of Georgia Tech’s Center for Machine Learning. He noted there’s no set definition of the term “artificial intelligence” and most researchers treat this idea on a case-by-case basis when they’re deciding if something should be considered AI.
Romberg said the AI decision-making process, at its core, is just like any other calculation that a computer makes. Some combination of data is fed into the system, there are constraints that the algorithm or device operates under, and a result is produced.
And this is where engineering and science bend a bit toward philosophy.
“What you can’t escape is that, ultimately, everything we call an AI is really just a very concrete computational algorithm that takes in some input and spits out some output,” Romberg said. “The real question is, is that what our brains do?”
AI VS. MACHINE LEARNING
For the non-experts among us, these terms can be conflated, sometimes used together or interchangeably. They are different concepts, however.
“Machine learning refers to a family of techniques or an entire discipline on how you learn from data,” Romberg said. “Obviously, learning from data is a big part of artificial intelligence, but there are other things you might call artificial intelligence that don’t learn.”
For example, machines that play chess might be considered AI, but these systems aren’t really learning from data. Rather, they’re very good at rapidly exploring a whole range of possible scenarios in the game to identify the next move that most likely leads to victory.
“A lot of the higher-level applications of achieving different forms of artificial intelligence are built on top of machine learning,” said Xie, who works specifically in this area. “Machine learning is more like the foundation, but developing machine learning algorithms involves many other foundational scientific pieces, including mathematics, statistics, combinatorics, optimization, and many more.”
Machine learning can be thought of as the base, with all kinds of uses and applications built on top. And these involve yet more words that have become more familiar: natural language processing, computer vision, speech recognition, and more. These are all applications of AI using underlying machine learning algorithms to pursue an outcome.
WHAT IS AN ALGORITHM?
When Xie was working with the group of elementary school kids, she asked them this question. And once again, they offered an astute answer: “They told me an algorithm is steps that can be implemented by a machine.”
Algorithms are the recipes for a computer to follow. It’s how programmers tell the computer what they want it to do. An algorithm might ask the computer to add two numbers together or take data from an MRI sensor and produce an image of the patient.
“You’re not going to do anything on a computer without an algorithm involved somewhere,” Romberg said. “The difference between an algorithm and a computer program is that the program is the packaging that you need around the algorithm that allows it to interface with the real world.”
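To make the distinction concrete, here is a tiny, hypothetical Python example (not from the article): the function is the algorithm, the recipe itself, while the surrounding script is the program that packages it so a person can actually use it.

```python
# The "algorithm": a recipe the machine can follow step by step.
def add(a: float, b: float) -> float:
    return a + b

# The "program": the packaging that lets the algorithm interface with the real world,
# here by reading two numbers typed by a user and printing the result.
if __name__ == "__main__":
    x = float(input("First number: "))
    y = float(input("Second number: "))
    print(f"Sum: {add(x, y)}")
```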
When building some kind of AI tool, algorithms are one of the two key components, Xie said. The other is modeling.
The first step to creating an AI application is defining what the tool needs to achieve. Once developers understand that, they can collect data — lots of data — and create a model. The model is usually an abstract way of representing the data, like a statistical model or sequence modeling. (As an illustration, sequence models can be used for language. Sentences are ultimately a sequence of words, with dependencies and grammar that help shape meaning.) Researchers use a variety of common modeling approaches for this work.
“Once we have a handle on that model, a lot of times after that is math — algorithms and implementation coded up into some pipeline,” Xie said. “Then we listen to the machine to actually have something.”
TRAINING THE ALGORITHMS
Training algorithms to take data and produce results is painstaking work. But at its simplest, Romberg said, it’s a math problem: fitting a function to data.
He used the example of image recognition and creating an AI tool to sort images of cars, trucks, and airplanes. The algorithm takes in a photo and proceeds through a series of computations where different weights are applied to the image’s pixel data and the data is combined in a variety of ways. Then it produces a single answer: car, truck, or airplane.
Training the system to correctly identify the cars as cars, and not as airplanes, requires teaching it with many examples.
“What you try to do based on all of those examples is adjust the weights so you’re
getting the right answer for each of the examples you give it,” Romberg said.
Researchers might feed the algorithm a million pictures of cars, a million pictures of trucks, and a million pictures of airplanes. Then they run through a picture of an airplane to see how the system identifies it.
“If it says, this is a truck, then you adjust all the weights until it works. And you continue this process of passing over data many times until you have a set of weights inside that are consistent with the data that you’ve seen. From there, hopefully, it generalizes to new data that you’ll see in the future.”
In other words, pictures of cars, trucks, and airplanes the algorithm hasn’t been trained on still result in the correct identification.
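The weight-adjustment loop Romberg describes can be sketched in a few lines of Python. This is a toy stand-in, not the researchers’ code: random numbers play the role of photos, and a simple softmax classifier plays the role of the image-recognition model.

```python
# A minimal sketch of the training idea described above: apply weights to pixel data,
# compare the answer to the true label, and nudge the weights until the examples
# come out right. Toy data stands in for real photos.
import numpy as np

rng = np.random.default_rng(0)
CLASSES = ["car", "truck", "airplane"]

# Toy "photos": 100 random 8x8 grayscale images per class, each class shifted so the
# classes are separable. Real systems would use millions of labeled photos.
X = np.vstack([rng.normal(loc=i, scale=1.0, size=(100, 64)) for i in range(3)])
y = np.repeat(np.arange(3), 100)

W = np.zeros((64, 3))          # one column of weights per class
b = np.zeros(3)
lr = 0.01                      # how far to adjust the weights on each pass

for epoch in range(200):       # "passing over data many times"
    scores = X @ W + b                         # weighted combination of pixel values
    scores -= scores.max(axis=1, keepdims=True)
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    # Compare predictions to the true labels and adjust the weights accordingly
    grad = probs.copy()
    grad[np.arange(len(y)), y] -= 1.0
    W -= lr * (X.T @ grad) / len(y)
    b -= lr * grad.mean(axis=0)

new_photo = rng.normal(loc=2, scale=1.0, size=64)   # unseen "airplane-like" image
print(CLASSES[int(np.argmax(new_photo @ W + b))])   # hopefully: airplane
```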
NEURAL NETWORKS: USING THE BRAIN AS A MODEL
One way researchers have been working to build more efficient and flexible machine learning systems is to draw on our understanding of the human brain.
Neural networks are a way to organize machine learning algorithms that try to mimic the way our brains collect, process, and act on information. The networks are efficient and extraordinarily flexible. (Though the terms “neural network” or “artificial neural networks” sound incredibly futuristic, they have been around since the very early days of computers in the 1950s.)
In a neural network, algorithms are layered atop one another to process data and pass on the important parts to a higher level. The approach is taken from one model of how brains function. In very simple terms, some stimulus activates neurons, which then feed data to each other and combine it in different ways. Once the information reaches a certain threshold, the information passes to the next layer of processing and so on.
Neural networks work similarly, collecting, weighing, and passing along data in a hierarchy from bottom to top.
“The lower level feeds forward to the next node, and then you combine the data, passing through an activation function. And the combination also has weights attached to it. These are going to select which information is most important,” Xie said. “When you design these algorithms, you have parameters that are the weights and activation and many, many layers. This is such a flexible architecture.”
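As a rough illustration of that feed-forward idea (a generic sketch, not any particular lab’s network), here is a tiny two-layer pass in Python: each layer applies weights to its inputs, combines them, and pushes the result through an activation function to the next layer.

```python
# A minimal, illustrative feed-forward pass: weight the inputs, combine them, apply
# an activation, and hand the result to the next layer.
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    # Activation function: acts as the "threshold" that decides what gets passed on
    return np.maximum(0.0, x)

x = rng.normal(size=16)            # input data (e.g., sensor readings or pixels)

W1, b1 = rng.normal(size=(8, 16)), np.zeros(8)    # weights for the first layer
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)     # weights for the second layer

hidden = relu(W1 @ x + b1)         # lower level combines and filters the input
output = W2 @ hidden + b2          # next level combines what was passed forward
print(output)
```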
Interestingly, both Xie and Romberg noted it’s not always clear why a neural network or other kinds of AI algorithms actually work. The complexity of the layers and the millions or even billions of parameters involved can make it challenging to understand why an algorithm or a neural network of algorithms produces a result — even if it’s the correct result. This is an area both Xie and Romberg are working to untangle in various ways.
“One interesting thing about AI and machine learning is that it’s been a highly experimental science so far: people have tried techniques out and they work. Some of that has refuted, or bumped up against, how we understand classical statistics,” Romberg said. “Some of the work I do is trying to understand how AI algorithms really work. Can we put them into a classical mathematical framework?”
Xie likewise uses statistics, machine learning, and optimization principles to shed light on the functions of tools like neural networks so scientists can build better ones — and trust the output of such systems.
“There are all kinds of theories, and there have been advances in explaining how and why a neural network works,” Xie said. “A lot of math researchers and statisticians, including myself, are working on explaining how it works and how to do it better. Because otherwise, it’s a black box — and to what extent can we trust it? We want to answer this question.”
TRUSTING THE MACHINE
The need to ensure we can trust the output of AI tools becomes crystal clear when you think about applications like self-driving cars. Algorithms must take in mountains of data from different kinds of sensors about the environment, the car itself, and more. Lidar, radar, video, and other data might provide information about the road, signage, other vehicles, and pedestrians or others around the car itself. And the AI must process that data — recognizing people, say, or that the car ahead is slowing — before directing some action. Get it wrong, and passengers or pedestrians could be hurt.
The same is true in healthcare settings. Xie has been looking at using AI to help critical care nursing staff monitor patients, and the stakes are sky-high.
“To what extent can we trust an algorithm to automatically monitor patients and raise an alarm? That’s a life-or-death situation, so we really have to ensure safety,” she said.
WHY NOW?
Even in the days when computers far less powerful than our smartphones took up entire rooms, scientists started to wonder, “What if?”
“Since the 1950s, when computers were first invented, people have been thinking about how to do artificial intelligence,” Xie said. “It’s not a new thing.”
So why does it seem like AI is now suddenly everywhere? Probably thanks to a confluence of factors, not least of which are the internet and social media. When OpenAI released its ChatGPT chatbot in November 2022, it quickly went viral. Now companies are rushing to bake AI functions into existing tools and develop new ones.
“ChatGPT basically showed many people something they hadn’t seen before about how to interact with a machine,” Romberg said. “It’s a very tangible illustration or tangible packaging of research, and specifically research in large language models.”
The current awareness of AI tools — and specifically generative AI for chats or images — is less a watershed moment in research or AI development, though. It’s more about how relatable those tools were to a broad audience.
“The ability to answer queries is just part of a continuing trend of things we’ve seen. We’ve had recommender systems for Amazon and Netflix. We have Google that prioritizes our searches. We’ve had Siri that gives us at least semi-coherent answers to the questions we ask,” Romberg said. “So, this has been building over time.”
ASK THE AI
Since ChatGPT seems to have sparked so much of the current attention on AI, what does the chatbot have to say about these AI basics? (Note: These answers come from the publicly available ChatGPT 3.5.)
How do you define artificial intelligence?
Artificial Intelligence (AI) refers to the development of computer systems that can perform tasks that typically require human intelligence. These tasks include reasoning, learning, problem-solving, perception, understanding natural language, and even interacting with the environment. AI systems often employ techniques such as machine learning, neural networks, natural language processing, and robotics to emulate human-like cognitive abilities. The ultimate goal of AI is to create machines that can mimic human intelligence and perform tasks autonomously, potentially surpassing human capabilities in certain domains.
So, is AI actually intelligent or just mimicking intelligence according to the instructions from algorithms?
The question of whether AI is actually intelligent or merely mimicking intelligence is a topic of ongoing debate in the field of artificial intelligence and philosophy.
At present, AI systems are typically designed to simulate intelligent behavior based on algorithms and instructions provided by humans. These systems may demonstrate impressive capabilities in tasks like problem-solving, pattern recognition, language understanding, and decision-making. However, they do not possess consciousness, self-awareness, emotions, or subjective experiences that are often associated with human intelligence.
AI’s “intelligence” can be understood as a combination of sophisticated algorithms, vast computational power, and access to large datasets, allowing them to perform tasks that would traditionally require human intelligence. While AI systems can exhibit behaviors that appear intelligent, they lack the self-awareness and understanding that humans possess. Thus, some argue that AI is not truly intelligent but rather mimicking intelligence based on programmed rules and algorithms.
Nevertheless, as AI technologies advance, there is ongoing exploration into creating AI systems with higher levels of autonomy, adaptability, and even consciousness. However, achieving true artificial general intelligence — intelligence comparable to that of humans across a wide range of tasks — remains an elusive goal.
The visuals in this feature were created using Adobe Firefly’s text-to-image generative AI tool.
ENGINEERING to the POWER OF AI
Artificial intelligence (AI) and machine learning (ML) are common threads across the College’s programs. Our researchers are increasingly using the technology to make new discoveries and improve the human condition. And new academic programs are teaching our students how AI and ML work.
We asked our school chairs to describe how AI and ML are at the core of every discipline and where they’re headed next.
‣ JASON MADERER
DANIEL GUGGENHEIM SCHOOL OF AEROSPACE ENGINEERING
Mitchell L. R. Walker II
William R. T. Oakes Professor and Chair
The AE School develops AI algorithms for control systems and other applications, and we leverage existing AI algorithms for space mission design and extracting robust features from image data.
In undergraduate teaching, AI helps introduce students to fundamental concepts in thermodynamics, fluid mechanics, and statistical learning using small language models and Gaussian processes for building supervised and unsupervised learning models. At the graduate level, ML surrogate modeling methods are used to deepen understanding and application in advanced course work.
WALLACE H. COULTER DEPARTMENT OF BIOMEDICAL ENGINEERING
Alyssa Panitch
Wallace H. Coulter Chair
Fueled by advances in molecular profiling, imaging, and many other technologies, biomedical research is becoming more data intensive than ever. The explosion of data enables biomedical engineering to take advantage of the amazing developments in data science and AI. This trend manifests in the increasing number of our research projects incorporating AI, as well as new curriculum to introduce AI techniques at the undergraduate and graduate levels.
SCHOOL OF CIVIL AND ENVIRONMENTAL ENGINEERING
Donald Webster
Karen and John Huff Chair
CEE researchers are using AI, machine learning, and data analytics to enhance our work and enable results at a scale previously unimaginable. These advanced computational tools are embedded in a required undergraduate course as well as technical-elective and graduate-level courses.
Our faculty and students are applying AI to their work in innovative ways: Using massive datasets to assess and improve the condition of our roadways; addressing challenges facing our food system to improve sustainability and resilience; advancing system modeling efforts; and helping communities implement smart solutions to improve safety.
SCHOOL OF CHEMICAL AND BIOMOLECULAR ENGINEERING
Christopher Jones
John F. Brock, III Chair
Our faculty are developing and leveraging state-of-the-art neural network models that can rapidly predict the dynamics of molecular systems and have applied these techniques to discover new materials for catalysis and carbon capture. AI and machine learning also are advancing capabilities in analysis of experimental data for chemical processes and materials characterization. Some examples include the application of machine learning tools to fit spectra measured during catalytic and biochemical processes and the development of image analysis techniques for extracting information from large datasets of videos and microscopy.
H. MILTON STEWART SCHOOL OF INDUSTRIAL AND SYSTEMS ENGINEERING
H. Edwin Romeijn
H. Milton and Carolyn J. Stewart Chair
The field of industrial and systems engineering includes developing AI tools and deploying them in engineering contexts. ISyE is leading the charge. We play a critical role in advancing research on efficient methodologies in optimization, data science, and machine learning. Combined with systems-thinking, our researchers use these methodologies to take a leading role in addressing critical societal challenges such as supply chains, health systems, energy, and sustainability — while ensuring fair and ethical implementation.
SCHOOL OF MATERIALS SCIENCE AND ENGINEERING
Natalie Stingelin
Chair
In MSE, AI and ML are being leveraged to discover and design new materials, build predictive models, simulate material systems, and advance the understanding of processing-structure-properties relationships of materials and materials systems.
Our undergraduate students are using AI and ML in research labs alongside grad students. In the curriculum, we are integrating ChatGPT into senior design, using it as a coach to give students feedback on the documents they generate while solving a design challenge proposed by industrial sponsors. Students are learning the tool’s limitations and that you can use it to do more than just rewrite documents.
SCHOOL OF ELECTRICAL AND COMPUTER ENGINEERING
Arijit Raychowdhury
Steve W. Chaddick Chair
ECE is positioned where AI meets the physical world and interacts with humans. To effectively engage students across the AI stack, we’re pioneering educational approaches that provide direct engagement with leading AI technology. Initiatives such as the AI Makerspace will enhance students’ expertise, producing the next wave of AI leaders. By reimagining existing courses and introducing new ones, we’re complementing our theoretical AI curriculum with practical exploration. This approach empowers students to address real-world AI problems, develop sophisticated applications, and showcase their AI-driven ideas on a larger scale.
GEORGE W. WOODRUFF SCHOOL OF MECHANICAL ENGINEERING
Devesh Ranjan
Eugene C. Gwaltney, Jr. Chair
The Woodruff School has been a pioneer of new research directions in graduate education in manufacturing since the early 1980s. We continue that tradition, transforming our graduate program in manufacturing by incorporating AI and machine learning technologies in virtually all doctoral research projects. This allows us to rethink the future of the manufacturing industry.
LEARN MORE AT COE.GATECH.EDU/ACADEMICS/AI-FOR-ENGINEERING
AI FOR A BETTER WORLD
GEORGIA TECH ENGINEERS ARE REFINING AI TOOLS AND DEPLOYING THEM TO HELP INDIVIDUALS, CITIES, AND EVERYTHING IN BETWEEN.
BY JASON MADERER & JOSHUA STEWART
From safer roads to new fuel cell technology, semiconductor designs to restoring bodily functions, Georgia Tech engineers are capitalizing on the power of AI to quickly make predictions or see danger ahead.
Here are a few ways we are using AI to create a better future.
Reconnecting Body and Brain
A partnership between biomedical engineers and Emory neurologists is using AI to help patients paralyzed from strokes, spinal cord injuries, or other conditions move again. Led by Chethan Pandarinath, the project aims to create brain-machine interfaces that can decode in just milliseconds, and with unprecedented accuracy, what the brain is telling the body to do. In essence, they’re trying to reconnect the brain and body for these patients.
Using a machine learning concept called “unsupervised” or “self-supervised” learning, the team is taking a new approach to understanding brain signals. Rather than starting with a movement and trying to map it to specific brain activity, Pandarinath’s algorithms start with the brain data.
“We don’t worry about what the person was trying to do. If we did, we’d miss a lot of the structure of the activity. If we can just understand the data better first, without biasing it by what we think the pattern meant, it ends up leading to better what we call ‘decoding,’” he said.
The goal is allowing these AI-powered brain-machine interfaces to work for any patient essentially out of the box — no significant calibration needed. The researchers have been working on a clinical trial focused on patients with amyotrophic lateral sclerosis (more commonly known as ALS or Lou Gehrig’s disease).
The Route to Safer Roads
Curves account for only about 5% of roadway miles in the United States, yet those sections of road are responsible for 25% of all traffic-related deaths. A project led by civil engineer Yi-Chang “James” Tsai is using smartphones and AI to cut into that number, with the potential to save millions of lives.
Relatively simple fixes, like the right signage alerting drivers to curves and suggesting safe speed limits, are known to reduce crashes. But safety assessments are manual, time-consuming endeavors. And they have to be done regularly: Safety conditions change as pavement deteriorates or when weather is bad, and road maintenance or resurfacing can change the curve’s geometry.
Chethan Pandarinath uses artificial intelligence tools to develop brain-machine interfaces that function with unprecedented speed and accuracy, decoding in real time what the brain is telling the body to do.
JACK KEARSE
Tsai’s solution is to mount a low-cost smartphone in Georgia Department of Transportation (GDOT) vehicles that record video and spatial data from the phone’s onboard gyroscope. Algorithms process the data and flag curves that need attention from traffic engineers. Best of all, the data is collected while GDOT workers go about their daily work without special effort or stops to evaluate dangerous curves.
“Our work saves lives and produces a great positive impact on our community and society,” Tsai said. “The current manual curve safety assessment is labor-intensive, costly, and dangerous to traffic engineers. It typically takes a couple of years to complete the safety assessment on state-maintained roadways in Georgia.”
An early version of Tsai’s system already has cataloged Georgia’s 18,000 miles of state-maintained roads, proving its worth to GDOT. Tsai’s team is working to scale up the project, processing data directly on the smartphone and perhaps one day feeding it to Google Maps and other wayfinding apps. Tsai said the idea would be to give drivers real-time alerts when they’re approaching dangerous areas. His vision is to allow Maps users to select a route that’s safest, not just fastest or shortest.
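As a simplified illustration of the kind of geometry such a pipeline can extract (this is textbook vehicle dynamics, not Tsai’s actual algorithm, and the thresholds are assumptions), curve radius and an advisory speed can be estimated from the phone’s speed and yaw-rate data:

```python
# Hypothetical sketch: estimate curve geometry from a dash-mounted phone's speed and
# gyroscope (yaw-rate) readings, then derive an advisory speed from a lateral
# acceleration limit. Illustrative only; the real system also uses video and maps.
import math

def curve_radius_m(speed_mps: float, yaw_rate_rad_s: float) -> float:
    # For travel along a circular arc, speed = radius * yaw rate
    return speed_mps / abs(yaw_rate_rad_s)

def advisory_speed_mph(radius_m: float, max_lateral_g: float = 0.3) -> float:
    # Cap lateral acceleration (v^2 / r) at an assumed comfort/safety threshold
    v_mps = math.sqrt(max_lateral_g * 9.81 * radius_m)
    return v_mps * 2.23694

r = curve_radius_m(speed_mps=20.0, yaw_rate_rad_s=0.15)   # roughly a 133 m radius
print(f"radius = {r:.0f} m, advisory speed = {advisory_speed_mph(r):.0f} mph")
```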
Speeding Atomic Simulations
Chemical and civil engineers are working together to use the power of AI to supercharge a workhorse approach to modeling chemical interactions and materials properties at the atomic level. The results could help researchers more quickly and accurately make predictions about catalysis, chemical separations, and the mechanical properties of materials.
The researchers are using new machine learning techniques to overcome limitations of an approach called density functional theory (DFT). It’s a powerful tool for calculating materials properties based on the interactions of atoms and electrons. But the computing power required can be enormous because real materials involve billions of atoms interacting over long periods of time.
AJ Medford’s research team in the School of Chemical and Biomolecular Engineering is using the newest machine learning techniques to overcome the limitations while still maintaining the accuracy and reliability of the DFT approach.
“Because of some technical details of DFT, doubling the size of a system means it takes about eight times longer to calculate, so direct calculations of real materials properties become computationally impossible very fast,” Medford said. “Machine learning and AI promise to help overcome this barrier by directly predicting the result of DFT calculations in a way that is much faster and can be more easily scaled up to larger systems.”
Medford is collaborating with Phanish Suryanarayana in the School of Civil and Environmental Engineering (CEE), who developed a DFT simulation package called SPARC. They’re tightly coupling Medford’s machine learning models to that code to produce more reliable acceleration of materials simulations.
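For readers who want the arithmetic, the factor of eight follows from the roughly cubic scaling of conventional DFT with system size; the cubic exponent is implied by Medford’s example rather than stated outright, so treat this as a back-of-the-envelope check.

```latex
t(N) \propto N^{3} \quad\Longrightarrow\quad \frac{t(2N)}{t(N)} = \frac{(2N)^{3}}{N^{3}} = 2^{3} = 8
```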
CAR PHOTO COURTESY: JAMES TSAI; MEDFORD: GARY MEEK
Dash mounted smartphones and AI are helping Georgia’s transportation agency evaluate road curve safety more simply and quickly.
Medford
AI-ing Georgia’s Manufacturing Renaissance
Looking across the 20,000 square feet of Georgia Tech’s Advanced Manufacturing Pilot Facility (AMPF), Aaron Stebner is greeted by a maze of machines. Spread throughout the bright, cavernous space are metal printers with electron beams. A robotic welder. A robotic loader and unloader.
It’s been more than a year and a half since the White House announced a $65 million grant that put Georgia Tech at the forefront of Georgia’s capabilities in artificial intelligence and manufacturing, with AMPF serving as the heart.
“Everything is going gangbusters,” Stebner said recently. “It’s exciting to think about how much we’ve done in the last 18 months.”
The $65 million is bolstering AMPF, a testbed where basic research results are scaled up and translated into implementable technologies, including additive/hybrid manufacturing, composites, and industrial robotics.
Stebner is an associate professor in the George W. Woodruff School of Mechanical Engineering and the School of Materials Science and Engineering. His primary role
since that 2022 announcement has been leading the largest of nine projects within the Georgia AI Manufacturing technology corridor grant from the U.S. Department of Commerce’s Economic Development Administration.
Forty or so grad students and five faculty members worked in AMPF when the funding was announced. Now it’s 70 grad students, a dozen faculty, and 50 undergrads, as well as other staff members.
“In addition to more people, we are working with corporate partners on 5G and cloud computing projects,” Stebner said. “It’s busy, and I feel like I’m drowning most days. But when I come up and take a breath and look around, it’s quite amazing to see people working together and making innovation happen.”
The next part of the project will be the most visible. This summer, AMPF will nearly double in size as walls come down and usable space in the building is reallocated to expand the footprint to 58,000 square feet. It will be the foundation for what Stebner is most excited about.
“Right now, in manufacturing, a piece of equipment — a turbine rotor blade, for example — is created in one place, then sent somewhere else for testing,” Stebner said. “Often it goes across the country to check its interior structure, then is shipped to a second location to test its chemical composition. Georgia Tech’s plan is to put the entire process under the same roof to create a testbed for AI to perform research and development using models that it learns across the manufacturing and quality data.”
In short, Georgia Tech will make machine parts while simultaneously checking their
composition, durability, and more — all made possible by AMPF’s connected machines. The devices will “talk” to each other using AI. This will ensure that engineers are making the things they think they’re making, rather than sending them around the country and waiting for confirmation. Co-locating those processes would make manufacturing more efficient and economical and provide the nation with a testbed designed for AI innovations.
“No other facility in the nation is built to do this. Georgia Tech will be the first,” Stebner said.
The construction and build-out of the new space should finish this fall. Small-scale testing of the interconnected machines will begin in 2025. Stebner’s team is about eight years away from producing large projects at scale.
“I often don’t take the time to appreciate it, because day-to-day, I feel like we’re always behind and not getting to where we need to go,” Stebner said. “But we’ve really come a long way in short time. And there’s a lot more to do.”
Building Next-Gen AI Infrastructure
Divya Mahajan is focused on building the systems infrastructure and architectures we’ll need to power the AI applications and hardware emerging now and coming in the not-very-distant future. That includes breaking away from traditional systems based on central-processing units.
One current project with Ph.D. students Seonho Lee and Irene Wang is developing new AI infrastructure with energy use and efficiency as an important metric, which Mahajan said will be key to the sustainable growth of AI applications and hardware.
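A back-of-the-envelope accounting of the kind such work has to track is shown below; the power, runtime, and request counts are made up for illustration and are not measurements from Mahajan’s lab.

```python
# Toy energy-efficiency accounting: joules per request for an AI service.
avg_power_watts = 550.0          # assumed accelerator power during the run
run_seconds = 120.0
requests_served = 36_000

energy_joules = avg_power_watts * run_seconds
joules_per_request = energy_joules / requests_served
requests_per_joule = requests_served / energy_joules

print(f"{joules_per_request:.2f} J/request, {requests_per_joule:.2f} requests/J")
```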
Mahajan is an assistant professor in the School of Electrical and Computer Engineering (ECE). She has built machine learning hardware for a decade, including time at Microsoft where she saw the real-world challenges of these large-scale systems.
“My academic position offers me the opportunity to tackle these challenges from a new perspective, enabling me to design equitable solutions that can achieve a broader impact,” she said. “I am excited to be at the forefront of the hardware and domain-specialized systems for AI, while working with students on these cutting-edge and challenging problems.”
Equitable Tissue Manufacturing
Working with policy scholars and mathematicians, engineers are using AI to make advanced tissue engineering more effective for patients of every background. They’re focused on a kind of 3D printing that uses commercially available stem cells as a bio-ink to create patient-specific tissue — cardiac muscle, in this case. The team will measure how those tissues function, then feed the data into an AI platform to optimize the bioprinting processes for patients of various racial and ethnic backgrounds.
CANDLER HOBBS
Mahajan
Clockwise from opposite page: Aaron Stebner, Research Engineer Zachary Brunson, and Research Scientist Dyuti Sarker work with additive manufacturing technology and products in the AMPF.
Suryanarayana
“AI-enabled biomanufacturing needs large datasets,” said biomedical engineer Vahid Serpooshan. “But a great majority of studies are based on data, stem cells, and other biological materials from a very narrow population group — mainly, white males — which doesn’t accurately represent the rich diversity of humanity.”
That means even while new biotechnologies offer the promise of incredible advances, they’re also exacerbating existing health disparities. The team, which includes Emory University researchers, says the work is a first step toward training an AI model that eventually would allow efficient manufacturing of functional tissues for a wide range of patients.
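One way to close that loop is to treat printing parameters as knobs and search for settings that maximize a measured tissue-function score. The sketch below is a hypothetical random-search version of that idea; the parameter names, ranges, and scoring function are all invented and are not the team’s platform.

```python
# Toy parameter search over made-up bioprinting settings. A real platform
# would score candidates with measured tissue function, not this stub.
import numpy as np

rng = np.random.default_rng(5)

def tissue_function_score(nozzle_temp, print_speed, crosslink_time):
    """Placeholder for a measured cardiac-tissue performance metric."""
    return (-(nozzle_temp - 37.0) ** 2 / 50
            - (print_speed - 8.0) ** 2 / 10
            - (crosslink_time - 45.0) ** 2 / 400
            + rng.normal(scale=0.05))

best = None
for _ in range(200):                      # simple random search
    params = (rng.uniform(30, 45), rng.uniform(2, 15), rng.uniform(10, 90))
    score = tissue_function_score(*params)
    if best is None or score > best[0]:
        best = (score, params)

print("best score %.3f at temp=%.1f, speed=%.1f, crosslink=%.1f"
      % (best[0], *best[1]))
```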
A Safer River
The stretch of the Chattahoochee River running through downtown Columbus, Georgia, provides some of the best whitewater kayaking in the state.
A riverwalk along the banks of the picturesque, twisting, turning water draws residents and visitors to walk or bike. And tiny stone islands exposed when water levels are low are almost irresistible for a little rock-hopping.
The area’s emergency responders know all too well that this ribbon of beauty presents potential dangers. When an upstream dam opens, water levels can rise in minutes. People unfamiliar with the area can quickly become trapped or swept away; there are rescues and drownings in this area of the Chattahoochee every year.
Neda Mohammadi and John Taylor in CEE have worked with Columbus to deploy a new alert system. Using cameras, a computer model of the river known as a digital twin, and AI, the system warns first responders when people are in danger.
“Research in our lab has been continually moving closer to directly impacting people’s lives. That’s what’s exciting about this project,” said Taylor, the Frederick Law Olmsted Professor. “We’re able to use these tools to actually improve safety.”
The Smart River Safety system provides a yellow-orange-red alert based on a combination of where people are detected in the river basin and a prediction of whether water levels are going to rise. Alerts come with precise location information so emergency crews know when they’re needed and, more importantly, where.
Hu
“It means time,” said Columbus Fire and Emergency Services Captain Stephen Funk. “It means a matter of life and death. And it means having the right people in the right place.”
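The alert logic combines two signals: where people are detected and whether the water is about to rise. A simplified illustration follows; the thresholds, zone names, and forecast stub are invented, not the deployed system’s values.

```python
# Toy version of a detection-plus-forecast alert tier. The real system uses
# camera detections, a digital twin of the river, and AI-based forecasts.
from dataclasses import dataclass

@dataclass
class Detection:
    zone: str          # named stretch of the river basin
    in_channel: bool   # person detected in or near the water

def forecast_rise_cm(minutes_ahead):
    """Stand-in for the digital twin's water-level forecast."""
    return 35.0 if minutes_ahead >= 30 else 10.0

def alert_level(detections, minutes_ahead=30):
    rise = forecast_rise_cm(minutes_ahead)
    exposed = [d for d in detections if d.in_channel]
    if exposed and rise > 30.0:
        return "RED"       # people in the channel and water rising fast
    if exposed or rise > 30.0:
        return "ORANGE"
    return "YELLOW"

people = [Detection(zone="rapids-below-dam", in_channel=True)]
print(alert_level(people), "->", [d.zone for d in people if d.in_channel])
```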
High-Performance, Sustainable Fuel Cells
A promising kind of fuel cell uses platinum as a catalyst in an oxygen-reduction reaction. While the technology offers benefits — high energy density, rapid refueling, environmental friendliness — using relatively rare platinum means the fuel cells are expensive.
Emma Hu in the School of Materials Science and Engineering (MSE) is working to find other potential catalysts for these proton exchange membrane fuel cells so they don’t require platinum. Her team is particularly focused on dual-atom catalysts, using machine learning models to evaluate possible materials and offer practical guidelines for creating them in the lab.
So far, her models have analyzed the catalytic activity of more than 22,000 candidates and found roughly 3,000 that warrant further study.
“The machine learning workflow we developed allowed us to discover many more new catalyst materials than previously possible with conventional methods,” she said. “Furthermore, our framework can be extended to other important electrochemical reactions, including carbon dioxide reduction and hydrogen evolution. We are excited about the potential of AI for addressing these challenges in sustainable energy conversion.”
Now Hu’s team is refining their models to improve their predictions. They’re also evaluating the practicality of synthesizing the dual-atom catalysts they’ve discovered and working with collaborators to demonstrate their performance experimentally.
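The screening step itself can be sketched as “train on the candidates you have labels for, then rank the rest.” The example below uses synthetic descriptors and a generic regressor; it is not the group’s actual workflow, features, or model.

```python
# Toy catalyst screening: predict activity for unlabeled candidates and
# keep roughly the top slice for further study. All data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)

# Pretend: 500 candidates with known (e.g., simulated) activity labels...
X_known = rng.normal(size=(500, 6))
y_known = X_known[:, 0] - 0.5 * X_known[:, 1] ** 2 + 0.1 * rng.normal(size=500)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_known, y_known)

# ...and 22,000 unlabeled dual-atom candidates to screen.
X_candidates = rng.normal(size=(22_000, 6))
predicted_activity = model.predict(X_candidates)

threshold = np.quantile(predicted_activity, 0.86)   # keep about the top 3,000
shortlist = np.flatnonzero(predicted_activity >= threshold)
print(f"{shortlist.size} candidates flagged for further study")
```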
BLAIR MEEKS
Above: Emergency responders look over a series of rapids on the Chattahoochee River in Columbus, Georgia.
Right: Neda Mohammadi and John Taylor monitor the Smart River Safety system during testing.
Creating Sensors to See and Think Through the Digital Clutter
You’re talking to your best friend in the middle of a crowded cafe. Your focus is on her, as is your gaze. But one shift of your eyes makes you aware of the scenes that surround you.
A server passes in the distance. A family studies the menu at a neighboring table. A couple leaves the bar after paying the check.
It’s all in your peripheral vision, beckoning for your attention. But your eyes and brain work together to keep you locked on your friend’s face and words. They’re able to decide what matters (her) and sift out what doesn’t (everyone and everything else).
A team of researchers, led by Saibal Mukhopadhyay, hopes to replicate this same type of closed-loop system as part of a $32 million center they are leading on behalf of 12 universities. The center’s goal is to create new sensor chips that capture and extract only the most useful information from the environment, just like the human eye and brain, to sense what matters most for a given task.
It’s a substantial upgrade from today’s electronic sensors. They sample everything they “see” and generate an abundance of digital data — often way too much for the sensors to transmit and machines to store, process,
and make sense of. In the process, the sensors capturing the data and the computers processing them consume an unsustainable amount of energy.
“Our center is focused on sensing to action. We’re trying to create new types of sensors that learn how to sense and absorb the most useful data. We call this cognitive sensing,” said Mukhopadhyay, the Joseph M. Pettit Professor in the School of Electrical and Computer Engineering (ECE).
The CogniSense team began the five-year project a year ago, with funding provided by the Semiconductor Research Corporation-administered Joint University Microelectronics Program 2.0 (JUMP 2.0). Among the 20 faculty and 100 students working on the center are Justin Romberg, Schlumberger Professor and ECE’s associate chair for research, and Muhannad Bakir, Dan Fielder Professor and director of the Packaging Research Center. The team will demonstrate the concept of cognitive sensing for radars and lidars for applications in robots and autonomous vehicles or drones.
“Cognitive sensing would be ideal in search-andrescue missions during and after natural disasters, for example,” Mukhopadhyay said. “Radar signals can see through obstacles, such as buildings and collapsed debris. If these sensors could see, process, and learn, it would provide invaluable information to people, who could then react to what the technology discovers.”
The first year of the center primarily pulled the multi-university group together, learning what each can contribute to the overall effort. The team also built prototypes and showed how artificial intelligence and signal processing methods can make judgments about what information should and should not be encoded.
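In spirit, those judgments boil down to scoring incoming data and only encoding what clears the bar. The toy filter below simulates that policy in software with an invented saliency score and threshold; a real cognitive sensor would learn the policy and enforce it in the chip itself.

```python
# Toy "sense what matters" filter: score each frame, transmit only the
# informative ones. Frames, scores, and the threshold are simulated.
import numpy as np

rng = np.random.default_rng(3)

def information_score(frame, background):
    # Cheap saliency proxy: how much the frame deviates from the background.
    return float(np.abs(frame - background).mean())

background = np.zeros((64, 64))
kept, dropped = 0, 0
for t in range(100):
    frame = 0.05 * rng.normal(size=(64, 64))
    if t in (17, 42, 71):                       # a few frames with real activity
        frame[20:30, 20:30] += 1.0
    if information_score(frame, background) > 0.05:
        kept += 1                               # encode and transmit
    else:
        dropped += 1                            # discard at the sensor
print(f"transmitted {kept} of {kept + dropped} frames")
```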
The next step is connecting those discoveries across the team’s disciplines to someday create the sensors and expand access to semiconductor research.
“Through the exploration of cognitive multi-spectral sensors, the CogniSense Center ignites a passion for innovation and research, illuminating the path for a new generation of students to discover the endless possibilities within our center,” said Devon McLaurin, senior program and operations manager. “Together, we strive to foster curiosity, nurture creativity, and empower students to become the pioneers of tomorrow’s breakthroughs.”
The CogniSense initiative is one of two JUMP 2.0 centers in ECE. The other $32 million center is headed by Arijit Raychowdhury, the Steve W. Chaddick Chair. It focuses on AI systems that continuously learn from human interactions to enable better collaboration between people and AI and ultimately build a digital human.
Mukhopadhyay
Pushing the Edge of Mobility
Robotic exoskeletons that could protect workers from injuries or help stroke patients regain their mobility so far have been largely limited to research settings. Most robotic assistance devices have required extensive calibration for each user and context-specific tuning.
Aaron Young’s mechanical engineering lab is on the verge of changing that with an AI-driven brain for exoskeletons that requires no training, no calibration, and no algorithm adjustments. Users can don the “exo” and go. Their universal controller works seamlessly to support walking, standing, and climbing stairs or ramps. It’s the first real bridge to taking exoskeletons from research endeavor to real-world use.
The secret to Young’s controller is a complete change in what the algorithms are trying to do. Instead of focusing on understanding the environment and predicting how to help the wearer do whatever they’re doing, this controller focuses on the body.
“The idea is to take all the cues from the human,” Young said. “What were the human joint torques? What were the moments that their muscles were generating as they did these different activities? Our controller is simple and elegant: It basically delivers a percentage of the user’s effort.”
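A proportional “percentage of effort” controller can be sketched in a few lines. In the hypothetical snippet below, `estimate_hip_torque` is a stand-in for the lab’s learned torque estimator, and the gain, safety limit, and sensor stream are all invented for illustration.

```python
# Minimal sketch of effort-proportional exoskeleton assistance.
import numpy as np

ASSIST_FRACTION = 0.3          # deliver 30% of the wearer's estimated effort
TORQUE_LIMIT_NM = 40.0         # illustrative safety clamp

def estimate_hip_torque(sensor_window):
    """Placeholder for a learned torque estimator (not the published model)."""
    return float(10.0 * np.tanh(sensor_window.mean()))

def control_step(sensor_window):
    human_torque = estimate_hip_torque(sensor_window)
    command = ASSIST_FRACTION * human_torque
    return float(np.clip(command, -TORQUE_LIMIT_NM, TORQUE_LIMIT_NM))

window = np.random.randn(100)  # simulated window of IMU/encoder features
print(f"motor torque command: {control_step(window):.2f} N·m")
```

Even in this toy form, the point of the quote survives: the controller never needs to classify the environment or the activity, only to scale what the body is already doing.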
Young’s lab also is using AI to help amputees use robotic prostheses to more easily navigate the world and help older adults, or those with mobility issues, maintain their balance.
A closeup view of the experimental exoskeleton used in development of a universal controller for robotic assistance devices. Adjustable stairs and other unique tools are used to collect data on the device.
Above: Researcher Aaron Young makes adjustments to the “exo” worn by then-Ph.D. student Dean Molinaro.
Better Decisions Faster
When engineers are creating the designs and systems that make our world function — and especially when they’re developing cutting-edge new designs — the complexity of the task is enormous. Aerospace engineer Elizabeth Qian focuses on using machine learning to create “surrogate models,” which are approximations of more complex (and more expensive) engineering simulation methods. The idea is to give engineers the ability to explore many different designs in far less time.
A key component of using AI tools to design engineering systems is trust: unlike image generation or chatbots, these tools create designs with consequences for human safety. Qian and collaborators have been working on methods to train AI models with a variety of data — some accurate but expensive to obtain, some easier or cheaper to get but less accurate — so that data requirements are manageable and the models are guaranteed to be accurate.
“There are so many critical open challenges in developing AI tools that we can trust when we use them for engineering design, and solutions to these challenges truly require interdisciplinary collaboration that unites knowledge in engineering, AI algorithms, and fundamental mathematics and statistics,” Qian said. “It’s very exciting to collaborate with colleagues to advance solutions that can make an impact in this area.”
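A common way to mix the two kinds of data is to fit a surrogate to the plentiful cheap samples and then learn a correction from the few expensive ones. The sketch below shows that generic multifidelity pattern on synthetic functions; it is not Qian’s published method.

```python
# Illustrative multifidelity surrogate: cheap model plus a learned correction.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)

def high_fidelity(x):   # stands in for an expensive simulation
    return np.sin(8 * x) + 0.2 * x

def low_fidelity(x):    # cheaper, biased approximation
    return 0.8 * np.sin(8 * x) + 0.3

x_lo = rng.uniform(0, 1, 200)[:, None]      # plentiful cheap data
x_hi = rng.uniform(0, 1, 8)[:, None]        # scarce expensive data

gp_lo = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6)
gp_lo.fit(x_lo, low_fidelity(x_lo.ravel()))

# Learn the discrepancy between fidelities from the few expensive points.
residual = high_fidelity(x_hi.ravel()) - gp_lo.predict(x_hi)
gp_delta = GaussianProcessRegressor(kernel=RBF(0.1), alpha=1e-6).fit(x_hi, residual)

def surrogate(x):
    return gp_lo.predict(x) + gp_delta.predict(x)

print(surrogate(np.linspace(0, 1, 5)[:, None]))
```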
More Compute Power, More Efficiently
A team of electrical and computer engineers recently secured $9.1 million from the Defense Advanced Research Projects Agency (DARPA) to help advance AI hardware.
The project will develop new compute-in-memory accelerator technology, which aims to greatly increase the energy efficiency and computational throughput of devices used in AI-based applications like image analysis and classification. It’s led by Suman Datta, Joseph M. Pettit Chair in Advanced Computing and professor in ECE and MSE.
Their team’s approach turns typical computer architecture on its head. Instead of moving data back and forth from memory to a central processing unit for computation, the researchers are developing compute-in-memory hardware designs to minimize data movement and conserve energy. The keys to their work are what are called multiply accumulate (MAC) macros.
“In the context of AI inference, MAC operations are crucial for performing computations efficiently in neural networks,” Datta said. “The ability to efficiently execute MAC operations is essential for optimizing the performance of AI models on various hardware platforms like CPUs, GPUs, and custom AI chips like the one we are developing for DARPA.”
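A MAC is simply a multiply followed by a running addition, and a dense neural-network layer is little more than rows of them. The sketch below spells that out in plain Python with invented sizes; compute-in-memory designs aim to perform these accumulations inside the memory array instead of shuttling weights and activations to a processor.

```python
# Illustrative only: the multiply-accumulate (MAC) operations behind
# neural-network inference, written out explicitly.
import numpy as np

def dense_layer(x, W, b):
    out = np.zeros(W.shape[0])
    for i in range(W.shape[0]):          # output neurons
        acc = b[i]
        for j in range(x.shape[0]):      # inputs
            acc += W[i, j] * x[j]        # one multiply-accumulate
        out[i] = acc
    return out

x = np.random.rand(128)                  # activations
W = np.random.rand(64, 128)              # weights
b = np.zeros(64)
y = dense_layer(x, W, b)                 # 64 x 128 = 8,192 MACs for one layer
print(y.shape)
```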
Datta and ECE collaborators Saibal Mukhopadhyay, Shimeng Yu, and Arijit Raychowdhury have set out to design their chips to maximize power efficiency while producing new levels of computation and minimizing size. Their goal is to build accelerators that can achieve 300 trillion operations per second per watt of power — a full order of magnitude higher than current state-of-the-art systems that might achieve tens of trillions of operations per second per watt.
Qian
AI-Driven Optimization
Established in 2021 with $20 million from the National Science Foundation (NSF), the AI Institute for Advances in Optimization (AI4OPT) is a hub of innovation at the intersection of AI and optimization.
AI4OPT Director Pascal Van Hentenryck said the effort has pushed the boundaries of research and simultaneously invested in educational initiatives and trustworthy AI.
“AI4OPT exemplifies our commitment to fusing AI and optimization to address real-world challenges,” said Van Hentenryck, who also is the A. Russell Chandler III Chair and Professor in the H. Milton Stewart School of Industrial and Systems Engineering. “Our focus on trustworthy AI ensures that our solutions are not only effective but also reliable.”
In addition to research, AI4OPT is investing in collaboration and outreach.
The Seth Bonder Camp is named for the late pioneer in computational and data science and plays a pivotal role in introducing high school students to career opportunities in industrial engineering and operations research.
Offered online and on campus at both Georgia Tech and Kids Teach Tech, AI4OPT’s educational partner in California, the camp provides students with hands-on experience and a deeper understanding of the applications of AI and optimization in engineering.
Meanwhile, the Faculty Training Program is empowering educators from historically Black colleges and
universities and minority-serving institutions to integrate AI and optimization concepts into their curricula. Over three years, participants receive training in technical courses such as data mining, statistics, and machine learning, as well as course design to establish AI and optimization minors or majors at their institutions.
“After seeing much success of the first cohort, we are bringing in another cohort, marking a significant milestone in our commitment to diversity and inclusion in STEM education,” Van Hentenryck said.
AI4OPT also collaborates with industry partners to support its research through large-scale case studies. Partners provide internships for its students, too, offering real-world experience applying AI and optimization techniques.
“AI4OPT remains committed to advancing research and education in AI-driven optimization,” Van Hentenryck said. “Through initiatives like the Seth Bonder Camp and the Faculty Training Program, we aim to inspire the next generation of AI experts and promote diversity and inclusion in STEM fields.”
‣ BREON MARTIN
PHOTOS COURTESY: AI4OPT
Top: Pascal Van Hentenryck Above: AI4OPT students take a ride in an autonomous shuttle during a visit to industry partner Beep, Inc.
A first-of-its-kind AI Makerspace created in collaboration with NVIDIA will give undergrads unprecedented access to supercomputing power for courses, projects, and their own innovations.
BY DAN WATSON
COMPUTATIONAL POWER
A single NVIDIA H100 GPU can perform in one second a multiplication workload that would take Georgia Tech’s 50,000 students 22 years to complete.
The ability to create and harness advanced computing technology has ushered in an era of rapid problem-solving and advancement. It’s an exhilarating period for machine learning researchers like Ghassan AlRegib.
Students, however, have been largely left out — a limitation that’s long been on the mind of the School of Electrical and Computer Engineering (ECE) professor. Outside of research settings, students haven’t been able to access the transformative computing technology, leaving them no space to contribute to its future.
AlRegib has raised this question often with fellow faculty members and researchers.
“How can we guide students in addressing real, practical artificial intelligence problems that currently only find solutions within the confines of a laboratory — the solutions urgently sought by society?” he asked.
The College of Engineering now has an answer.
The Georgia Tech AI Makerspace is the nation’s first AI supercomputing hub dedicated exclusively to teaching students. The computing cluster provides a virtual gateway to the kind of high-performance computing environment typically prioritized for research. Students will use the hardware and software to tackle real-world AI challenges, develop advanced applications, and present their AI-driven ideas at scale.
The initiative, a collaboration with technology giant NVIDIA, expands on Georgia Tech’s foundational, theory-focused AI curriculum, deepening students’ AI skills and shaping the future generation of AI professionals.
“The launch of the AI Makerspace represents another milestone in Georgia Tech’s legacy of innovation and leadership in education,” said Raheem Beyah, dean and Southern Company Chair. “Thanks to NVIDIA’s advanced technology and expertise, our students at all levels have a path to make significant contributions and lead in the rapidly evolving field of AI.”
The first phase of the endeavor is powered by 20 NVIDIA H100-HGX servers housing 160 NVIDIA H100 GPUs, one of the most powerful computational accelerators available and capable of powering advanced AI and machine learning efforts.
PHOTOGRAPHY BY CANDLER HOBBS
“Cultivating a workforce equipped with advanced AI skills is essential to ensuring our nation’s resilience and adaptability. Georgia Tech students will be leading the charge.”
Arijit Raychowdhury
To put this computational power into perspective: a single H100 GPU can perform in one second a multiplication workload that would take Georgia Tech’s 50,000 students 22 years to complete.
“The AI Makerspace represents a significant advancement in technology for education,” said Arijit Raychowdhury, professor and Steve W. Chaddick School Chair of ECE. “To draw a comparison, the makerspace will provide a technological upgrade equivalent to switching from an Etch A Sketch to an iPad.”
The effort is bolstered by Georgia Tech’s Partnership for an Advanced Computing Environment (PACE), which is providing sustainable leading-edge cyberinfrastructure and support, ensuring students have the necessary tools and assistance to best use the AI Makerspace cluster.
ECE undergrads began using the cluster this spring; all engineering students — including graduate students — will have access by the end of 2024.
Shortening the Post-College Ramp-Up
Raychowdhury estimated that the on-the-job learning curve for new AI systems professionals typically spans 12 to 24 months, primarily because they lack hands-on experience with heavy computing during college.
Shanmathi Selvamurugan, a second-year computer engineering major, entered her studies knowing she wanted to pursue research in machine learning, often tracking down Raychowdhury to express her interest in taking an applied approach.
“As a student interested in working with AI, I’m discouraged by my perceived lack of hands-on experience to qualify for AI jobs,” Shanmathi said. “Access to this level of computing will allow me to meet the demands of the job market and research community.”
Ph.D. student Tyler Lizzo encountered similar limitations as an undergraduate. He wanted practical experience with AI but had limited options in advanced computing classes or research projects. He and his peers could work with faculty members to gain access to computing clusters dedicated to research on campus. Or they could choose to pay out of pocket for on-demand cloud computing platforms like Amazon Web Services.
“It was intimidating utilizing research resources on campus that logically weren’t made with students in mind,” said Lizzo, who finished his computer engineering bachelor’s in 2022 and now studies in Larry Heck’s ECE lab. “It also didn’t provide opportunities to explore or really learn from your mistakes, as the time was too valuable.
“I’m certainly excited, with maybe a touch of envy, for undergrads now having access to the AI Makerspace,” he added.
To break down the usability barrier students may face with the makerspace, PACE and AlRegib are developing interfaces and strategies to ensure that students from all backgrounds, disciplines, and proficiency levels can effectively use the computing power.
“The intelligent system will serve as a tutor and facilitator,” said AlRegib, the John and Marilu McCarty Chair of Electrical Engineering. “It will be the lens through
which students can tap into the world of AI, and it will empower them by removing any hurdle that stands in the way of them testing their ideas. It will also facilitate the integration of the AI Makerspace into existing classes.”
“Democratizing AI is not just about giving students access to a large pool of GPU resources,” said Didier Contis, executive director of academic technology, innovation, and research computing for the Office of Information Technology. “Deep collaboration with instructors is required to develop different solutions to empower students to use the resources easily without necessarily having to master specific aspects of AI or the underlying infrastructure.”
This framework designed to provide easy entry to the AI Makerspace opens an entirely new way to teach AI — one where theory and application can be taught in unison. After all, that’s how students will encounter AI in their careers. And it couldn’t have come at a better time.
In a 2023 World Economic Forum report, more than 85% of employers identified increased adoption of new technologies and broadening digital access as the trends most likely to drive transformation in their organization. Additionally, many of the fastest growing jobs are technology-related roles, with AI and machine learning specialists topping the list.
“The global AI talent pool is expanding rapidly. Cultivating a workforce equipped with advanced AI skills is essential to ensuring our nation’s resilience and adaptability,” Raychowdhury said. “Georgia Tech students will be leading the charge, which is incredibly exciting for the evolution of AI education and innovation.”
Opposite page, clockwise from top: Dean Raheem Beyah (right) listens to Ruben Lara of PACE’s cyber infrastructure team.
The AI Makerspace will give students hands-on experience and integrate with the College’s other makerspaces, including the Interdisciplinary Design Commons.
20 NVIDIA H100-HGX servers were installed for the first phase of the initiative.
AI Beyond Campus
CORPORATE LEADERS WITH TIES TO THE COLLEGE DESCRIBE AI IN THEIR CURRENT ROLES, WHAT WILL HAPPEN IN THE NEXT FIVE YEARS, AND HOW STUDENTS AND PROFESSIONALS WILL NEED TO ADAPT
The College of Engineering has created new courses and reimagined others to strengthen artificial intelligence and machine learning education for undergraduates. Across 14 classes in six of our eight schools, students get the knowledge and hands-on experience their future employers will need. And our new AI Makerspace (which you can read about in this magazine) gives undergraduates unique access to the high-performance computing power necessary to use AI tools.
In the meantime, the students of yesteryear, our alumni, are already grappling with AI and its impact on their companies and the practice of engineering. They’re also wondering — and at times worrying — about what comes next.
We connected with several of them, along with other leaders with ties to the College, to talk about AI and engineering today and tomorrow, along with what young engineers will need to know to shape AI in the years to come.
The following interviews have been edited for brevity.
How is AI implemented at your company, especially as it relates to your role?
Sophia Velastegui: As the chief product officer at Aptiv, I am accountable for revenue, profit, and customers globally for self-driving products. Our autonomous product portfolio uses AI in decision-making, reviewing multiple sensor data to understand the environment and the driver, and then powering autonomous features like lane keeping and self-parking.
Ken Klaer: Comcast has used AI successfully for over 15 years, ranging from the Emmy-winning Xfinity Voice Remote and VideoAI products, to our content discovery services, smart cameras, customer care solutions, and network management tooling. Our AI efforts are coordinated by an AI Tech Center of Excellence that provides AI technology strategy, assists with the execution of AI initiatives, and provides the core platforms that enable our engineers to develop AI solutions in a safe, secure, and scalable manner with little friction.
Keith Hearon: AI is the backbone of what we do at Imidex. Our flagship product, VisiRad XR, is made up of AI models that identify lung nodules and masses (potential future lung cancers) in chest X-ray images. Imidex’s AI is 33% more accurate at finding these lesions than radiologists are in clinical practice, so it could identify 1.3 million more nodule patients in the U.S. annually.
David Neal: Georgia-Pacific Gypsum utilizes AI to address operational gaps that were traditionally resolved through manual labor. We are also leveraging AI in our order intake process, where PDFs via email were previously the primary method for a significant portion of our customer base. By combining existing technology with generative AI, we are streamlining data management, which may eliminate the need for manual data entry and intervention in processing purchase orders. This advancement is projected to save our commercial team approximately 15% of their time,
allowing them to focus more on delivering enhanced solutions for our customers.
Rohit Verma: Crawford & Company, an insurance claims management company, leverages AI across most of our businesses to uplift operational, customer, and employee experiences with a responsible AI framework. This integration of AI with human expertise in claims management ensures efficient processing and effective decision-making. As an example, starting with the First Notice of Loss stage, AI aids in efficiently processing large claim volumes and providing accurate coverage reviews, aiding adjusters in making timely, effective coverage decisions.
BY JASON MADERER ILLUSTRATIONS BY JOEL KIMMEL
35
Ken Klaer
What excites you about AI and its future applications?
Velastegui: Until recently, AI was limited to a small group of industries or functions, like data scientists. It has now crossed the chasm: everyone can benefit from AI in their work and everyday lives. People who love technology — but who are not technologists — can leverage this extraordinary resource. It will enable anyone to find efficiencies, be more productive, enhance creativity, and build solutions for simple and complex challenges.
Hearon: We’re just at the beginning of a long development process for establishing what can be done with the medical data that’s already out in the world. For us at Imidex, the list of potential future AI developments
within lung cancer detection and care is massive and tangible. Our primary motivator is helping people more quickly in order to drive greater impact on human life.
Neal: Similar to how personal computers, smartphones, decentralized app development, and ridesharing have transformed our world, AI is poised to also revolutionize our lives.
To fully benefit from AI, we need to change our mindset and focus on asking the right questions rather than having all the answers. AI allows us to explore numerous alternatives and uncover unique connections that can bring tremendous value.
At Georgia-Pacific, a manufacturing company, we have already witnessed the positive
impact of robotics and automation on the manufacturing floor. These technologies have made work safer, more fulfilling, and have improved quality and productivity. We believe that AI will bring similar advancements across all aspects of our business, transforming the way we work. Although roles and responsibilities may change, we are confident that this transformation will lead to greater fulfillment for individuals and superior business results for the company.
What concerns do you have about AI?
Verma: My biggest concern is the creation of inherent bias in AI decision models due to bias in the data being used to train them. I’m also concerned about opportunities for apprenticeship in some jobs — especially ones where the simpler work that has traditionally been used for training could be eliminated by the use of AI.
Keeping humans in the loop is essential to review AI’s outcome. At Crawford, the Digital Desk platform manages digital claims with desk adjusters overseeing AI-directed claim triage and channel segmentations.
“The quality of AI, especially in healthcare, is critical... Algorithms are only as good as the data that goes into them, so the source data must be relevant, plentiful, and expertly curated.”
Keith Hearon
MEET THE PANEL
Keith Hearon MSE 2009
Chairman of the Board, Imidex
Ken Klaer IE 1981
Executive Vice President, Comcast Cable
President, Comcast Technology Solutions
David Neal
President, Georgia-Pacific Gypsum, LLC
Sophia Velastegui ME 1998
Chief Product Officer and Senior Vice President, Aptiv
Rohit Verma
CEO, Crawford & Company
Adjusters review AI decisions using confidence scores: high scores expedite routing claims to the right channels, while low-confidence triage scores might prompt model retraining from adjuster feedback. This not only ensures precise and unbiased claim routing, but it also builds trust to drive better outcomes.
Klaer: As they say, with great power also comes great responsibility. Current AI solutions can return results that sound very convincing but do not have a grounding in truth. This can lead to problems if users adopt the output without validation.
These models are also very powerful. And since they can be programmed with natural language alone in a very flexible way, it is easy for pretty much anyone to use AI models to create outputs that could potentially be harmful to others. Examples are deep fakes that imitate others or the creation of content with the intent to manipulate opinions at a scale previously not possible.
AI applications may have inherent bias based on the training data used to build models. For instance, if you applied AI to automatically select or reject resumes in a hiring situation, it may be unfair to some people. And if no human eyes are part of the interaction, nobody may ever realize a biased situation.
Hearon: The quality of AI, especially in healthcare, is critical. When there’s a field as promising as AI, you inevitably get a big rush of competitors, and ensuring that only the best products make it to market is key. Algorithms are only as good as the data that goes into them, so the source data must be relevant, plentiful, and expertly curated. Additionally, ensuring the right type of oversight is key. At Imidex, oversight by the U.S. Food and Drug Administration has enabled us to legally market AI that impacts patient care in the U.S. While it was a challenging process to achieve 510(k) clearance from the FDA, it also affirmed that we are generating the best outputs for our provider customers and their patients.
Sophia Velastegui
David Neal
Each panelist is a member of the College’s external advisory board.
Velastegui: While I see so many benefits to AI taking center stage in our lives, we should all stay vigilant about the potential for misuse. I’m watching for privacy violations from data collection, exposure of personal information, and surveillance. Like you’ve probably seen in the news, I’m also concerned about bad actors creating malicious AI systems that spread misinformation.
AI is developing faster than regulations can keep up in some cases, so staying mindful of the strict regulatory environment will be critical.
How will AI change the practice of engineering and how will students and professionals need to adapt?
Neal: As AI continues to advance, the engineering mindset and engineers’ problem-solving abilities will play a crucial role
in shaping business outcomes. AI has the potential to greatly enhance operations, from design and development to production and maintenance.
To effectively incorporate AI, students and professionals should engage in strategic conversations, prioritize diverse design, and use AI to predict formulations or designs before prototyping. This technology will extend beyond engineering into various fields, offering complex solutions.
Embracing these practices will provide numerous answers to diverse problems, leading to more efficient, innovative, and competitive results. By being at the forefront of AI integration, students and engineers can drive the future of business.
Hearon: Incorporating AI as a part of the engineering skillset will become a standard, basic requirement, just as search engines and internet research are today. This is an area where it behooves students and professionals to educate themselves on how to use
increasingly available AI technology to stay current and advance their own knowledge in their chosen field.
Verma : Engineers must adapt to AI by mastering AI-driven design and data interpretation, particularly in developing GPT platforms on their focused domain to enhance innovation and value. The synergy of AI and human expertise promises more efficient, effective data analysis, leading to accurate models and well-informed decisions, propelling data professions to new heights.
At the same time, companies need to balance efficiency and innovation with job security in the AI-driven workplace. This involves fostering a collaborative culture with transparent AI roles, providing more awareness and training, and promoting continuous learning for skill enhancement.
Velastegui: AI will bring software engineering and data science to all aspects of engineering. Students and professionals will need to master how to leverage AI systems versus traditional methods in order to maximize innovation and effectiveness.
AI is no longer limited to the technology stack of the product. It can be leveraged across all business functions. More AI-knowledgeable students will be needed. Now is the time to get up to speed on AI regardless of your degree — it’s about to be as fundamental as the internet and the computer.
What do you foresee in the next 5 years of AI?
Klaer: We are still very early in the era of AI, so the first places where we’ll see a lot of change will be the areas where AI is already in use today. I believe that AI solutions will become more prevalent and be built into most of our phones as personal assistants. We will likely also see more AI-to-AI conversations where our personal AI assistants will collaborate with other assistants to accomplish the desired outcomes in restricted domains without us in the loop.
In addition, science fiction has long been a predictor of what may be imagined for the future. AI may allow us to communicate more directly and naturally with computers, such as in “Star Trek,” “2001: A Space Odyssey,” and “Knight Rider.” Home robots such as the Roomba may take on more intricate tasks and act as a live-in maid or cook, such as in “The Jetsons.”
Verma : There is only superficial understanding of AI right now, so the magnitude of its impact is underestimated by some and overestimated by others. I consider it similar to the advent of other technologies from the past, where people overestimated its impact in two years and underestimated the impact in five years.
Much like we’ve seen in mobile technology, I expect that there will be more harmonization in understanding the true potential of AI and new use cases will emerge. When they do, the emphasis will be on problem-solving with the right models, underlining the essential role of the human-in-the-loop approach to ensure the correct framework is built.
Neal: The growth of AI-generated content is remarkable. Envisioning a future where transcribing, translating, and summarizing using Large Language Models is exciting. With AI seamlessly integrated into our everyday apps, language barriers will virtually disappear, fostering better communication and understanding. We will witness the widespread adoption of early technologies through user-friendly applications on our phones and other platforms. The cumulative knowledge derived from vast data sets will empower newer AI models to minimize errors.
This will be particularly transformative in humanities and the sciences, where AI reasoning will enable individuals with varying abilities to harness advanced math, finance, and statistics. The possibilities for advancement in every field of study are truly unimaginable and hold great promise for our future.
Velastegui: We’re actually starting to see the future of AI right now! The next generation of generative AI is Multimodal AI, which can process multiple data inputs and types to produce more accurate and sophisticated outputs. For example, how can healthcare be further enhanced to truly provide personalized patient care and research? Healthcare recommendations rely almost exclusively on averages. But we are all unique people. Imagine the healthcare of the future where a diagnosis or treatment plan takes into account the biometric signals from your Apple Watch throughout the day, and not just when you go to the doctor. Your health history, your environment, and who you
are as a person all will be instantly pieced together.
That will be a life-changing moment for all of us.
Klaer : The current state-of-the-art AI approaches such as auto-regressive text prediction models (predicting the next letters/ words from the current text and context) are limited in many ways. We will still have to do much more research in this area to fulfill the potential of AI and do it in a safe and responsible manner. I can’t wait to see what the Georgia Tech College of Engineering students and faculty will contribute to this quest.
Rohit Verma
we are PREDICTING THE UNPREDICTABLE
Managing the Ups and Downs
With GlucoSense, alumni are creating a single tool to help diabetes patients wrangle data to better manage their health.
It was a scenario that plays out a hundred times at the end of every semester at Georgia Tech: Jonathan Fitch had pulled an all-nighter, using every possible moment to study for that day’s final exam.
After putting the stress of the test behind him, Fitch returned to his fraternity house for a much-needed nap.
The difference for Fitch was that instead of waking up refreshed, he woke up in the hospital.
Fitch has Type 1 diabetes, and he’d had a seizure because his blood sugar dropped without warning.
Fitch’s diabetes is considered well-controlled. Yet a combination of factors conspired against him that day: not enough rest and recovery, high stress levels that made the insulin in his system less effective, and then that nap. While Fitch rested, all the insulin that had been delivered by his insulin pump finally started to kick in. He got an alert that he was in trouble, but it was less than a minute before the seizure. It was simply too late.
In typical Georgia Tech engineer fashion, Fitch decided he could do something to prevent similar situations — for himself and millions of people with diabetes.
“We have all this data from an Apple Watch, Whoop, and other wearables. Type 1 diabetics also have an insulin pump and continuous glucose monitor or blood sugar sensor. But this data doesn’t really come together and do something for the user. We have to look at it in isolation,” said Fitch, who finished his industrial engineering bachelor’s in 2023. “What we’ve built is a place for all this data to come together to help predict the unpredictable — that seizure being a perfect example.”
That place is GlucoSense, which Fitch is building along with fellow Georgia Tech grads Cole Chalhub and Gabriel Gusmão. The platform pulls the information he described into a simple interface anyone can use to understand and act on it. Healthcare providers would have access, too, so they can work with patients to use insights from the data to better manage their disease.
Using custom-built artificial intelligence models, GlucoSense will predict how users’ bodies will react after a workout, for example, or what kind of day they’ll have to manage.
The GlucoSense app compiles data from wearables, glucose monitors, and more to give patients insights they can use to manage their diabetes.
“When you wake up in the morning, the app offers a benchmark about how difficult it will be to control your diabetes today,” said Fitch, the company’s CEO. “Is it going to be a normal day? Or is it going to be more challenging? If it’s more difficult, back off the excess carbs or really intense exercise, and especially back off the drinking, because your body is not recovered.”
He said the mental load for people with diabetes can be significant, pointing to estimates that the average Type 1 diabetic makes many more health-related decisions every day than someone who doesn’t have diabetes. When they eat, before they do anything, when they feel stressed, “you’re thinking about how that’s going to impact you and you have to dose your insulin accordingly,” he said.
APP VISUAL COURTESY: GLUCOSENSE
“Right now, you’re left to make these assumptions and think of all this data yourself. We’re bringing it into a single scale that you can look at and say, ‘OK, I’m ready for this activity or not.’ Or, ‘After this activity, it’s going to be difficult or easy to recover.’”
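One way to picture that single scale is as a model that turns the morning’s wearable and glucose data into a difficulty score. The snippet below is purely hypothetical: the feature names, model choice, and data are invented for illustration and are not GlucoSense’s pipeline.

```python
# Hypothetical daily "difficulty" predictor from wearable-style features.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(4)

# Each row: [hours_slept, sleep_quality, prior_day_strain, insulin_on_board,
#            overnight_glucose_variability] -- all invented feature names.
X = rng.uniform(0, 1, size=(1000, 5))
# Synthetic target: poorer sleep, higher strain, and more insulin on board
# make the day harder to manage (0 = easy, 1 = very difficult).
y = np.clip(0.5 - 0.3 * X[:, 0] + 0.3 * X[:, 2] + 0.3 * X[:, 3]
            + 0.05 * rng.normal(size=1000), 0, 1)

model = GradientBoostingRegressor().fit(X, y)

today = np.array([[0.3, 0.4, 0.8, 0.7, 0.6]])   # short sleep, high strain
print(f"predicted difficulty score: {model.predict(today)[0]:.2f}")
```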
GlucoSense has been enrolling patients in its beta test this winter and spring, and the team is aiming for a full launch in the summer. They have enough initial capital from angel investors to give them runway to get their tool off the ground.
“We’re really refining and testing the product — getting those initial users on it, watching them use it, figuring out what’s
confusing, and then refining it,” said Chalhub, GlucoSense chief operating officer and a 2022 industrial design graduate. “We’re also researching what other technologies we can integrate as we build out our interactive platform for patients and health providers.”
Both Chalhub and Fitch have dabbled in entrepreneurship over the years.
Chalhub spent some time thinking about how to create digital menus for restaurants before the pandemic made QR-code menus a common experience. Fitch started — and recently sold — a company that built custom solutions for people to control devices in their homes.
When Fitch decided to make a run at developing the seed of the idea he had for a diabetes management platform, he signed up for the CREATE-X program and got to work. Last summer, GlucoSense completed CREATE-X Startup Launch.
“Through every challenge, Georgia Tech has been there to support us,” Chalhub said. “We’ve had professors reviewing our algorithms. We’ve had introductions to the head of the design master’s program. We’ve scheduled calls with CREATE-X leaders on short notice because we were having to make decisions quickly. There has been someone there at every turn.”
Fitch said the resources at Tech have been essential from the very beginning when what is now GlucoSense was just an idea: “They’re there to make sure that we don’t fail and make sure we always have access to the right people.”
Fitch grew up in Atlanta, and he’s often thought about going somewhere new. But his ties to the talent and resources along North Avenue are keeping him happily tethered to Midtown.
“Tech is a place I want to hire from and grow the company around,” he said.
Hurdles remain as GlucoSense grows and the team readies for commercial release. They still need real-time access to data from continuous glucose monitoring (CGM) systems. Initially, that data will be delayed by three hours. Still, at launch, the platform will be able to use activity and workout information along with historical data to forecast what will happen after specific activities or how difficult it will be for users to manage their diabetes each day.
Back when Fitch experienced that seizure after his final exam, his CGM alerted him about 60 seconds ahead of time — not soon enough to avoid it. But with GlucoSense? Things might’ve been different.
“With GlucoSense, that seizure could have been predictable significantly earlier — like 30 minutes prior,” Fitch said. “So, in a perfect world, it could have been avoided.”
‣ JOSHUA STEWART
Jonathan Fitch (left) and Cole Chalhub are close to Georgia Tech talent and resources with GlucoSense headquartered at Tech Square.
CANDLER HOBBS
we are DESIGNERS
A Magician for Furniture
Jane Ivanova came to Tech to build her technical skills. She left with the entrepreneurial tools to build a startup she hopes will simplify interior design.
Some clients come to interior designers knowing exactly what they’re looking for in a redesigned space. Often, though, figuring out what people are searching for, and what they like and don’t like, involves plenty of trial and error. And it requires the designer to try to match the client’s words with the vision they have in mind — even when they lack the right words to describe it.
Jane Ivanova is building a platform with artificial intelligence to bridge that semantic gap, quickly finding furniture pieces to match the customer’s vision and guide the designer in creating the perfect space.
She said she’s creating a kind of “magician for furniture,” so she’s calling it Furnichanter — a portmanteau of furniture and enchanter. The platform is the result of conversations Ivanova had with designers and architects as she was exploring a different product idea.
“The problem all of them were talking about was being unable to capture the customer’s problem, what they’re actually looking for, because they don’t know how to describe it. They’re not designers,” said Ivanova, who finished her master’s in electrical and computer engineering in 2023 and is working at another startup while she refines her own.
In fact, even when clients can clearly articulate their ideas, keeping all of the details straight can be a heavy load, designers told her. Her tool helps automate all of that, using verbal descriptions, keywords, and sample images to quickly offer ideas for furniture.
The prototype version of Furnichanter takes the words and images and produces a list of potential pieces — perhaps sectional sofas ideal for watching TV or high-back chairs perfect for a reading nook. Results include links directly to furniture stores where the items are available. Together, customers and the design team can use them to refine the look they want or source the piece.
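At its core, that matching step is a similarity search: embed the client’s words, embed each catalog item, and rank by closeness. The deliberately simple sketch below uses TF-IDF instead of the learned language models Furnichanter relies on, and its tiny catalog and URLs are invented.

```python
# Toy text-matching stand-in for the furniture-suggestion idea.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalog = {
    "L-shaped sectional sofa, deep seats, great for movie nights": "store.example/sectional-01",
    "High-back wingback chair, upholstered, cozy reading nook": "store.example/wingback-07",
    "Mid-century walnut coffee table, low profile": "store.example/coffee-22",
}
client_brief = "a deep, comfortable sofa for family movie nights in front of the TV"

vectorizer = TfidfVectorizer()
catalog_vecs = vectorizer.fit_transform(list(catalog))
query_vec = vectorizer.transform([client_brief])
scores = cosine_similarity(query_vec, catalog_vecs).ravel()

for score, (desc, url) in sorted(zip(scores, catalog.items()), reverse=True):
    print(f"{score:.2f}  {desc}  ->  {url}")
```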
“This is what the designers are doing on a daily basis, and we just want to help them with some initial suggestions. People sometimes use the same words, but they have different things in mind. So having something like a common language will be much easier to start with.”
Jane Ivanova
The tool doesn’t replace the designer’s expertise; rather, it provides a common foundation and language to start the conversation between client and designer. And it does so quickly.
“This is what the designers are doing on a daily basis, and we just want to help them with some initial suggestions,” Ivanova said. “You know, people sometimes use the same words, but they have different things in mind. So having something like a common language will be much easier to start with.”
Eventually, the goal is to build out Furnichanter so designers can put potential pieces of furniture right into a photo of the existing room to help clients visualize how everything would come together.
Along the way, Ivanova has gotten help from Georgia Tech’s Data Science Club, which she was a member of herself while she studied for her master’s degree. A team of about 15 members has worked with her to investigate natural language processing models that can be tailored to her purposes. They continue to help her retrain those models with a deeper and more specific database of terms related to furniture.
CANDLER HOBBS
Harnessing AI for interior design isn’t exactly the entrepreneurial endeavor Ivanova expected to pursue when she first came to Georgia Tech.
Long interested in electronics, Ivanova was soldering components of a small robot when she realized she really needed a third arm to accomplish her task. That’s when inspiration struck: Could she create an assistive robotic arm that could help people in similar situations? She imagined industrial designers, architects, and others who do lots of prototyping or model-building might also benefit from such a device.
Once she arrived at Georgia Tech, Ivanova set to work figuring out if her idea was viable. She enrolled in the Technology Entrepreneurship course in the School of Electrical and Computer Engineering and then the CREATE-X Startup Launch program. She spent time listening to designers and other professionals talk about
their needs, and the gap in understanding customers quickly became clear.
“The worst idea for a startup is to build something that you like, rather than trying to solve someone else’s problem,” Ivanova said. “So, it was necessary to pivot to something more relevant to what customers are actually struggling with.”
And that’s what she did. At the moment, Ivanova is solidifying the prototype Furnichanter models so she can pitch the platform to potential early investors — she’s aiming for this summer. It’s been a slower process than she’d hoped, as she balances her development efforts with a full-time job at another startup in the Atlanta area focused on robotics for construction.
She also hasn’t given up on her own robotic arm idea. But the mentoring and guidance she received in CREATE-X and the entrepreneurship course she took showed her the challenges — and years of development work needed — in creating new hardware devices. It’s a truth she’s seen in her day job too, where the company has existed for several years and is just now making inroads with potential customers.
“CREATE-X really helped put my mind in the right place about what I actually was going to need to do the startup,” Ivanova said, “and how you actually approach and do the successful thing.”
‣ JOSHUA STEWART
WHAT IS CREATE-X?
A Georgia Tech initiative to instill entrepreneurial confidence in students and empower them to launch successful startups. So far, more than 300 startups have been launched through CREATE-X.
10 TO END
10 Questions with Larry Heck
Cortana. Bixby. Alexa. Google Assistant. One way or another, Larry Heck has been involved in all of these virtual assistants. The two-time graduate of the School of Electrical and Computer Engineering (ECE) — master’s in 1989, Ph.D. in 1991 — spent decades in tech helping develop the voices that have become part of many of our lives. In 2021, he returned to North Avenue as a member of ECE’s faculty and has been helping to establish Georgia Tech’s AI Hub.
1 ‣ Why did you choose electrical engineering for your studies and career? When I was in high school in the ‘80s, NASA’s space shuttle program was in full swing. I was particularly interested in the aspects of the NASA program that involved signal processing, machine learning, and communications. EE was the best fit for these areas. Although I ended up going a different direction, studying EE turned out to be a very good decision for my career.
2 ‣ When did you first get interested in the possibilities of AI for speech recognition and language processing?
During my senior year as an undergraduate at Texas Tech University, I was able to work on a year-long project of my choosing. The Texas Instruments TMS32010 DSP chip had recently been released, and I was interested to see what could be achieved in signal processing on this new chip series. My lab partner and I built our own PC plug-in card to support the TMS320 chips (wire wrapping and all!) and then I spent the rest of the semester programming it. While my programs were limited to relatively simple digital filters, I was reading about the other possibilities of what the chip could support, including speech recognition. Much of this was published by IEEE and authored by Georgia Tech researchers.
3 ‣ You’ve been involved in so many of the virtual assistants that have become household names. Do you have a favorite? Well, of course, my first love is Cortana. The effort grew out of discussions with Satya Nadella in early 2009, when he tasked me with writing the Long Range Plan (LRP) for Microsoft Search (later called Bing). In that LRP, I wrote a section called “Conversational Search.” Bill Gates had written an article about the future of conversational search during his recent Think Week, and Satya encouraged me to connect with Bill. Ultimately, this led to my joining the Microsoft Speech team as chief scientist and initiating “Project Louise,” which eventually became Cortana.
4 ‣ What are you working on now? Revolutionizing innovation at the speed of thought. Rather than displacing humans, I see AI as a collaborator that augments and elevates humanity. In this vision, AI works in tandem with humans, extending our capabilities and bestowing upon us unprecedented superpowers of creativity and productivity. Imagine having an idea and seamlessly collaborating with your AI Virtual Assistant (AVA) to bring that idea to fruition. While recent strides in generative AI represent a significant leap in this direction, there is still a considerable journey ahead. That’s why I have built the AVA Lab and helped start the AI Hub at Georgia Tech.
COURTESY: LARRY HECK
5 ‣ What is the AI Hub and how does it focus Georgia Tech’s research in this area? The AI Hub is a “better together” story for AI at Georgia Tech. While we have many successes in AI research from our highly talented faculty and staff, much of this success has been bottom-up, driven by small groups of faculty. The AI Hub’s goal is to complement these grassroots efforts with Institute-wide support for taking on bigger, world-changing AI challenges. We also want to create AI technology that fundamentally transforms how Georgia Tech operates as an institution, amplifying our research, teaching, and operations. In the AI Hub, we refer to this as GT^AI (Georgia Tech to the power of AI).
6 ‣ So, what will Georgia Tech’s leadership in AI look like? The most successful way to become a leader in any area is to have a clear vision, rally the broader forces behind a “better together” story, and create technology that we, the team of faculty, scientists, and developers, want to (and actually do) use ourselves every day. This last part is what we referred to in Silicon Valley as “dogfooding,” or “eating your own dogfood.” A fantastic way for us to achieve this is to create and use AI technology that gives Tech employees and students the superpowers I referred to earlier. For example, in the future, researchers could ask their AVA to read (many) scientific articles and brainstorm with it on the best next directions for their research. Teaching would be enhanced by the next generation of AI teaching assistants like Jill Watson (created here at Georgia Tech). Students would be able to “check out” their personalized AVA when they arrive on campus. AVA would help them craft a curriculum and offer private tutoring through difficult classes, and it would stay with them for their entire time at Georgia Tech. Staff would gain superpowers in their day-to-day tasks through their personal AVA. All of these “dogfooding” experiences would give us great data and feedback to improve our core AI technology, which can then be leveraged beyond Georgia Tech for external grant-based research and industry applications.
“Rather than displacing humans, I see AI as a collaborator that augments and elevates humanity. In this vision, AI works in tandem with humans, extending our capabilities and bestowing upon us unprecedented superpowers of creativity and productivity.”
Larry Heck
7 ‣ What brought you back to academia after a varied and successful career in industry? I wanted to give back to Georgia Tech, both to the Institute and to individual students. For the past decade, I have served on ECE’s advisory board. I could see the great potential for ECE and Georgia Tech, I could see opportunities to contribute, and I wanted to get directly involved in this next chapter of Tech’s growth, particularly in AI.
8 ‣ What’s the most common thing you’re asked about AI? With the recent awareness of generative AI technologies (e.g., ChatGPT), I am asked, “How intelligent are these new AI systems?” and “Do these new innovations represent a breakthrough in artificial general intelligence?”
9 ‣ What’s the one thing about AI that everyone should know? AI has been in development for decades. Many of the technologies having success today, such as generative AI, have been in development for many years. This is not a sudden breakthrough but rather the result of many talented scientists working for a long time, making slow and steady progress.
10 ‣ How do you use AI tools in your everyday life? My definition of AI tools is pretty broad (including search), so AI tools are part of everything I do. Most recently, we have been running AVA on a large screen in the open space of my lab. Our goal is to “dogfood” the system and interact with AVA every time we pass by her in the lab.
AI for Engineering
1 second
Time it would take a single NVIDIA H100 GPU to complete a multiplication workload that would take Georgia Tech’s 50,000 students 22 years to finish (a rough back-of-the-envelope check follows this list)
2
Colleges partnered for a new AI minor: Engineering and the Ivan Allen College of Liberal Arts
14
New or reimagined core AI courses for engineering undergrads
160
NVIDIA H100 Tensor Core GPUs housed in the new AI Makerspace
2
Years until Georgia Tech plans to set up the AI Makerspace Omniverse, a sandbox for augmented and virtual reality
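The back-of-the-envelope check promised above is an editorial sketch, not a calculation from the magazine: assuming each of roughly 50,000 students performs one multiplication per second, and assuming a single H100 sustains on the order of 3 × 10^13 double-precision operations per second (an illustrative figure, not an official specification), 22 years of student effort amounts to about 3.5 × 10^13 multiplications, which the GPU covers in roughly one second. The short Python sketch below shows the arithmetic.
# Back-of-the-envelope sanity check (illustrative assumptions only):
# each student performs one multiplication per second, and one H100
# sustains roughly 3e13 double-precision operations per second.
students = 50_000
years = 22
seconds_per_year = 365.25 * 24 * 3600              # about 3.16e7 seconds
student_ops = students * years * seconds_per_year  # about 3.5e13 multiplications
h100_ops_per_second = 3e13                         # assumed sustained rate
print(f"Multiplications by students over 22 years: {student_ops:.2e}")
print(f"Time for one H100 to match them: {student_ops / h100_ops_per_second:.1f} seconds")
Run as written, the sketch reports roughly 1.2 seconds, in line with the 1-second figure above given the rough assumptions.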
coe.gatech.edu
Georgia Institute of Technology 225 North Avenue NW Atlanta, Georgia 30332-0360
Nonprofit U.S. Postage PAID Atlanta, GA 30332 Permit #8087
PARTING SHOT
Georgia Tech has collaborated with NVIDIA on a first-of-its-kind AI Makerspace for undergraduate students. ‣ LEARN MORE, PAGE 30